Pre-treatment inflamed tumor immune microenvironment is associated with FOLFIRINOX response in pancreatic cancer
Introduction Pancreatic adenocarcinoma (PDAC) is an aggressive tumor with limited response to both chemotherapy and immunotherapy. Pre-treatment tumor features within the tumor immune microenvironment (TiME) may influence treatment response. We hypothesized that the pre-treatment TiME composition differs between metastatic and primary lesions and would be associated with response to modified FOLFIRINOX (mFFX) or gemcitabine-based (Gem-based) therapy. Methods Using RNAseq data from a cohort of treatment-naïve, advanced PDAC patients in the COMPASS trial, differential expression of key immunomodulatory genes was analyzed based on multiple parameters including tumor site, response to mFFX, and response to Gem-based treatment. The relative proportions of immune cell infiltration were defined using CIBERSORTx and Dirichlet regression. Results 145 samples were included in the analysis; 83 received mFFX, 62 received Gem-based therapy. Metastatic liver samples had both increased macrophage (1.2 times more, p < 0.05) and increased eosinophil infiltration (1.4 times more, p < 0.05) compared to primary lesion samples. Further analysis of the specific macrophage phenotypes revealed an increased M2 macrophage fraction in the liver samples. The pre-treatment CD8 T-cell, dendritic cell, and neutrophil infiltration of metastatic samples were associated with therapy response to mFFX (p < 0.05), while mast cell infiltration was associated with response to Gem-based therapy (p < 0.05). Multiple immunoinhibitory genes such as ADORA2A, CSF1R, KDR/VEGFR2, LAG3, PDCD1LG2, and TGFB1 and immunostimulatory genes including C10orf54, CXCL12, and TNFSF14/LIGHT were significantly associated with worse survival in patients who received mFFX (p = 0.01). There were no immunomodulatory genes associated with survival in the Gem-based cohort. Discussion Our evidence implies that essential differences in the PDAC TiME exist between primary and metastatic tumors and that an inflamed pre-treatment TiME is associated with mFFX response. Defining the components of the PDAC TiME that influence therapy response will provide opportunities for targeted therapeutic strategies and may need to be accounted for in designing personalized therapy to improve outcomes.
Introduction
Overall survival for pancreatic adenocarcinoma (PDAC) remains dismal, with 5-year survival less than 10% (1,2). Effective conventional cytotoxic chemotherapeutic regimens are limited; however, combinations such as FOLFIRINOX or gemcitabine/nab-paclitaxel have demonstrated efficacy and can prolong survival for PDAC patients by months (3,4). Immune checkpoint inhibitors (ICI) have had dramatic success in malignancies such as melanoma (5), non-small cell lung cancer (6), and biliary tract cancer (7). Unfortunately, ICIs as monotherapy have essentially failed in PDAC (8), prompting investigations into strategies to potentiate PDAC immunotherapy.
A wide body of preclinical data (9-16) supports the concept that chemotherapy favorably modifies the tumor immune microenvironment (TiME) through a variety of mechanisms. For example, one of the components of FOLFIRINOX, oxaliplatin, causes DNA damage (17) and can induce immunogenic cell death (ICD) via release of damage-associated molecular patterns in tumors, uptake of tumor debris and neoantigens by antigen-presenting cells, and ultimately, induction of an adaptive immune response and cytotoxic T cell activity (12,13,18,19). Similarly, 5FU is thought to selectively kill myeloid-derived suppressor cells (MDSCs) to enhance T cell mediated anti-tumoral immunity (20). Results from phase III trials across a wide range of cancers have demonstrated that chemotherapy such as oxaliplatin combined with ICI (chemo-ICI) leads to improved overall survival and outcome compared with chemotherapy alone (21-35). Currently, the two main chemotherapy regimens for PDAC are FOLFIRINOX (5-fluorouracil, leucovorin, irinotecan, oxaliplatin) (3) and gemcitabine with nab-paclitaxel (4). These treatments have widespread applicability in the treatment of PDAC and are administered both as systemic chemotherapy in unresectable and metastatic PDAC (3,36) and in a neoadjuvant fashion to improve cancer resectability and survival (37,38). While FOLFIRINOX is typically favored as the initial chemotherapeutic strategy (3), it is associated with increased toxicity compared with Gem-based regimens (3,4). In practice, there are currently no indications to guide clinicians in choosing between the two chemotherapy regimens beyond the patient's performance status (39,40). However, in the clinic, FOLFIRINOX delivery is associated with increased tumor-infiltrating CD8+ T lymphocytes (TILs) and decreased circulating regulatory T cells (Tregs), and can increase tumoral PD-L1 expression (41-43). Thus, FOLFIRINOX holds potential to augment ICI therapy in PDAC patients. Studies of combination chemo-ICI in advanced PDAC patients demonstrate improved survival compared to chemotherapy alone (44). mRNA vaccines have also been used in combination with FOLFIRINOX and anti-PD1 therapy, demonstrating the presence of persistent vaccine-expanded tumor-specific T-cells (45). These recent developments underscore the importance of understanding the dynamic interplay of the PDAC TiME and chemotherapy.
Classically, PDAC has been described as having a "cold" TiME, including multiple immunosuppressive cell populations such as Tregs, MDSCs, and M2-phenotypic tumor-associated macrophages (TAMs) (46-52). However, a growing body of evidence supports that the PDAC TiME is heterogeneous and represented by a diverse milieu of immune cell phenotypes (53). While such heterogeneity has been well described across a variety of cancers and associated with survival, available data in PDAC are limited. For example, the Immunoscore, which is based on quantification of CD3+/CD8+ lymphocyte heterogeneity at the core and boundary of tumors (54), can outperform traditional TNM staging in predicting disease-free survival and overall survival in colorectal cancer (54,55) and other cancers (56,57). Recent investigations have also described significant differences in the TiME between metastatic and primary lesions (58-60). It has been shown that PD-L1 expression is decreased in immune cells of metastatic lesions of triple negative breast cancer (61) and that differences exist in PD-1+ TIL infiltration between metastatic and primary lung cancer lesions (62).
The association between chemotherapy response and the PDAC TiME has not been well characterized, and the influence of disease site has not been investigated thoroughly. We evaluated publicly available data from the COMPASS trial, a prospective study of treatment-naïve patients with a diagnosis of locally advanced or metastatic PDAC whose core needle biopsies, obtained prior to treatment, were used for whole genome sequencing and RNA sequencing (63,64). By analyzing this unique genomic dataset from patients with advanced disease, we investigated which molecular and cellular determinants are associated with chemotherapy response. A secondary goal was to characterize the TiME based on the primary versus metastatic site for PDAC. Considering the biologic differences between metastatic and primary lesions, as well as the established influence of chemotherapy on the TiME, we hypothesized that the components of the pre-treatment TiME would differ between metastatic and primary lesions and would be associated with therapy response and survival in a cohort of advanced PDAC patients. Going forward, these findings may have important implications for personalized therapies and for designing next-generation immunotherapy combination strategies.
COMPASS trial
Institutional Review Board approval and written consent for the COMPASS trial (63,64) were obtained from participating institutions (University Health Network, Toronto, Ontario, Canada; MUHC Centre for Applied Ethics, Montreal, Quebec, Canada; and Queen's University Health Sciences and Affiliated Teaching Hospitals Research Ethics Board, Kingston, Ontario, Canada), and a data use agreement was completed by Baylor College of Medicine with the Ontario Institute for Cancer Research for use of the data within this study. Briefly, image-guided percutaneous core needle biopsies were obtained, and patients then received modified FOLFIRINOX (mFFX), gemcitabine/nab-paclitaxel, or a combination of these along with investigational drugs as standard first line therapy and had therapy response data by RECIST 1.1 (65). For our analysis, chemotherapy response was defined based on tumor size change on therapy. "Responders" were patients whose measured tumor decreased in size, while "nonresponders" had no change or an increase in tumor size while on therapy (Figure 1A). Patients included in our subsequent analysis comprised those who had a confirmed diagnosis of PDAC, had a biopsy obtained from either the liver or pancreas, had longer than 30-day survival from time of trial enrollment, had received at least one cycle of either mFFX or gemcitabine-based (Gem-based) therapy as treatment, and had RECIST data available for evaluation of therapy response.
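For concreteness, the responder/nonresponder recoding described above can be expressed in a few lines; the sketch below uses hypothetical column names, not the COMPASS data dictionary:

```python
# Minimal sketch of the binary response recoding (hypothetical column names).
import pandas as pd

df = pd.DataFrame({
    "patient_id": ["P1", "P2", "P3"],
    "pct_change": [-34.0, 0.0, 12.5],   # best % change in measured tumor size
})

# Any decrease in tumor size counts as "responder";
# no change or growth counts as "nonresponder".
df["response"] = df["pct_change"].apply(
    lambda x: "responder" if x < 0 else "nonresponder"
)
print(df)
```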
Immunomodulatory differential gene expression analysis
Raw count data of RNA sequencing data of patients included in the COMPASS trial were downloaded from EGAD00001004548 (https://ega-archive.org/datasets/EGAD00001004548) and EGAD00001006081 (https://ega-archive.org/datasets/EGAD00001006081). Gene quantification was performed by TPMCalculator (66) using the GENCODE Human Release 43 gene annotation GTF file (67). As the immunomodulatory genes that influence immune cell infiltration into the TiME (68) can mediate chemoresistance to gemcitabine (69,70) or platinum-based therapies (71), we evaluated the immunomodulatory genes within these samples as well. The Tumor-Immune System Interactions Database (TISIDB) is an online repository of integrated data on tumor-immune interactions (72), including a curated list of genes encoding immunomodulators based on data from 30 non-hematologic cancer types from The Cancer Genome Atlas (TCGA). The raw count data were processed using edgeR v3.42.4 (73) by filtering to remove lowly expressed genes using the "filterByExpr" function, normalization by trimmed mean of M values (74), and dispersion estimation using the negative binomial distribution method. Differential gene expression was calculated using the quasi-likelihood pipeline with a nominal log fold change threshold of 0.5 and a false discovery rate correction (73) set at a nominal value of 0.05; genes of interest were obtained from TISIDB. Immunomodulatory genes were divided into immunoinhibitory, immunostimulatory, and MHC genes.
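As background for the TPMCalculator step, TPM divides each gene's read count by its length in kilobases and then rescales so every sample sums to one million; a minimal sketch of that normalization (not TPMCalculator itself, and with illustrative numbers):

```python
# Transcripts Per Million: length-normalize counts, then scale each sample to 1e6.
import numpy as np

def tpm(counts: np.ndarray, lengths_bp: np.ndarray) -> np.ndarray:
    """counts: raw read counts per gene; lengths_bp: gene lengths in base pairs."""
    rate = counts / (lengths_bp / 1_000)      # reads per kilobase of gene
    return rate / rate.sum() * 1_000_000      # per-sample scaling to one million

counts = np.array([500, 1200, 80])            # toy values
lengths = np.array([2_000, 4_500, 900])
print(tpm(counts, lengths))
```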
In silico cytometry based on transcriptomics
The leukocyte composition of each sample was then characterized as an immune cellular fraction using CIBERSORTx, which estimates proportions of immune cell populations from deconvoluted bulk transcriptomic data (67). CIBERSORTx analysis was performed using the following settings: the LM22 signature matrix was used, consisting of 547 genes to distinguish 22 mature immune cell populations; B-mode batch correction was used; quantile normalization was disabled; 1000 permutations were performed for significance analysis. Only CIBERSORTx results with a p-value < 0.05 were included in subsequent analyses. To obtain more comprehensive immune cell phenotypes, immune cell fractions obtained through LM22 were aggregated as outlined in "Aggregate 2" of the Supplementary Materials in Thorsson et al. (75), yielding 9 immune cell aggregate phenotypes. Differences in immune cell infiltration proportions were analyzed using Mann-Whitney tests.
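A hedged sketch of these post-deconvolution steps is given below; the file path, column names, and the aggregate mapping are illustrative stand-ins, not the exact "Aggregate 2" scheme of Thorsson et al.:

```python
# Filter CIBERSORTx output by deconvolution p-value, aggregate LM22 fractions,
# and compare groups with a Mann-Whitney test (illustrative names throughout).
import pandas as pd
from scipy.stats import mannwhitneyu

frac = pd.read_csv("cibersortx_fractions.csv")     # hypothetical export
frac = frac[frac["P-value"] < 0.05]                # keep confident deconvolutions
frac = frac.drop(columns=["P-value"])

aggregate_map = {                                  # illustrative subset only
    "Macrophages M0": "Macrophages", "Macrophages M1": "Macrophages",
    "Macrophages M2": "Macrophages", "T cells CD8": "CD8 T cells",
}
long = frac.melt(id_vars=["sample", "site"], var_name="lm22", value_name="fraction")
long["aggregate"] = long["lm22"].map(aggregate_map)
agg = (long.dropna(subset=["aggregate"])
           .groupby(["sample", "site", "aggregate"], as_index=False)["fraction"].sum())

# Mann-Whitney comparison of macrophage fractions: liver vs pancreas biopsies
mac = agg[agg["aggregate"] == "Macrophages"]
u, p = mannwhitneyu(mac.loc[mac["site"] == "liver", "fraction"],
                    mac.loc[mac["site"] == "pancreas", "fraction"])
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3g}")
```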
Dirichlet regression and statistical analysis
As the output from CIBERSORTx is compositional data that carries relative information as proportions of the total amount of immune cell infiltration, summing to 1 for each sample, traditional data analysis may violate modeling assumptions such as homoscedasticity (76,77). Therefore, Dirichlet regression was also performed comparing immune cell fractions between groups of interest using DirichletReg v0.7-1 (76) in R version 4.3.0 (78) with the common parametrization model. The regression coefficients obtained with this method, when exponentiated, can be interpreted similarly to odds ratios (76). Differences in clinical characteristics of patients were analyzed using the Chi-square test or ANOVA, where appropriate. Survival probabilities for each RECIST group were estimated using the Kaplan-Meier method and the log-rank test. Univariate Cox proportional hazards analysis for immunomodulators was performed in R using the survival package v3.5-5 (79). Visualization was performed using GraphPad PRISM v9.5.0 and R 4.3.0.
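To make the model concrete: in the common parametrization, each composition component's Dirichlet concentration is alpha_k = exp(X @ beta_k), and beta is fit by maximum likelihood. A toy Python sketch of the same idea follows (the study itself used DirichletReg in R):

```python
# Toy Dirichlet regression via maximum likelihood, common parametrization.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import dirichlet

rng = np.random.default_rng(0)
n, k = 60, 3                                               # samples, composition parts
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])   # intercept + group flag
Y = rng.dirichlet([4.0, 2.0, 1.0], size=n)                 # toy immune-cell fractions

def neg_loglik(beta_flat):
    beta = beta_flat.reshape(X.shape[1], k)
    alpha = np.exp(X @ beta)                               # common parametrization
    return -sum(dirichlet.logpdf(y, a) for y, a in zip(Y, alpha))

res = minimize(neg_loglik, np.zeros(X.shape[1] * k), method="BFGS")
beta_hat = res.x.reshape(X.shape[1], k)
# Exponentiated group coefficients read like odds ratios for each component
print(np.exp(beta_hat[1]))
```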
Clinical parameters of included PDAC patients from the COMPASS trial
In total, 145 of the 195 patients from the COMPASS trial dataset met inclusion criteria for our secondary analysis (Table 1). Compared with patients who received Gem-based therapy, mFFX-treated patients were significantly younger, predominantly male, and more likely to have locally advanced disease rather than metastatic disease. However, there was no statistically significant difference in site of biopsy or chemotherapy response between patients receiving mFFX and Gem-based therapy. Similar to previous studies of the efficacy of standard chemotherapy regimens in PDAC (3,64), we observed an improvement in median overall survival (OS) in patients who received FOLFIRINOX compared to Gem-based therapy, although this did not reach significance (median OS 307 vs 254 days, p-value = 0.12, Supplementary Figure 1). For the entire cohort, chemotherapy response correlated with overall survival (Figure 1B, p-value < 0.0001), similar to previous studies utilizing RECIST in a metastatic PDAC setting (80,81). As expected, patients with tumors that progressed or demonstrated no change with treatment (nonresponders) had significantly shorter median survival compared to patients with tumors that decreased in size following treatment (responders) (199 vs 359 days).
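This survival comparison follows the Kaplan-Meier/log-rank approach from the Methods; a hedged Python sketch using lifelines (the study used R's survival package), with hypothetical file and column names:

```python
# Responder vs nonresponder survival: Kaplan-Meier estimate and log-rank test.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("compass_clinical.csv")           # hypothetical extract
resp = df[df["response"] == "responder"]
nonresp = df[df["response"] == "nonresponder"]

km = KaplanMeierFitter()
km.fit(resp["os_days"], event_observed=resp["death"], label="responder")
print("median OS, responders:", km.median_survival_time_)

res = logrank_test(resp["os_days"], nonresp["os_days"],
                   event_observed_A=resp["death"],
                   event_observed_B=nonresp["death"])
print("log-rank p-value:", res.p_value)
```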
Tumor site-specific variations in the PDAC tumor immune microenvironment
Considering the unique pre-treatment patient samples within our cohort, we initially sought to determine whether site-specific differences existed in the cellular components of the PDAC TiME (68) between primary pancreatic tumor biopsies and metastatic liver samples. Compared with pancreatic tumor samples, metastatic liver biopsies had significantly higher expression of multiple immunomodulatory genes. Immunoinhibitory genes such as ADORA2A (log fold-change 0.92, p-value < 0.0001) (82,83), CSF1R (log fold-change 0.62, p-value = 0.003) (84,85), and CD274/PD-L1 (log fold-change 0.72, p-value = 0.03) had significantly higher expression in the liver biopsy samples compared with pancreatic samples (Figure 2). We also noted differences in immunostimulatory and MHC genes based on site of biopsy. A total of 12 immunostimulatory genes within the annotation (CD70, CD80, CD86, CD276, IL2RA, MICB, NT5E, PVR, TNFSF14, TNFRSF14, TNFRSF18, ULBP1) were significantly differentially expressed between liver and pancreatic biopsy samples. Of these, only TNFRSF14, a membrane-bound receptor (86) with both pro-inflammatory and anti-inflammatory immune signaling pathways (87), was downregulated in the liver biopsy samples compared to the pancreatic biopsy samples (log fold-change -0.43, p-value = 0.003), while the other immunostimulatory genes were upregulated. Similarly, of the MHC genes that were significantly differentially expressed (TAP2, HLA-DOB, TAP1, HLA-DQA1), all were upregulated in the liver biopsy samples compared to the pancreatic biopsy samples. Taken together, these data suggest that the TiME of liver metastases in PDAC undergoes more dynamic regulation compared to that of primary pancreatic lesions.

FIGURE 2 Volcano plots of differential expression of (A) immunoinhibitors, (B) immunostimulators, and (C) MHC genes from TISIDB based on site of biopsy. The threshold for log2 fold change is set at 0.5, and the threshold for false discovery rate is set at 0.05.
Pre-treatment PDAC TiME differences associated with chemotherapy response
Next, we investigated the association of pre-treatment PDAC tumor immune cell infiltration with response to either mFFX or Gem-based therapy. We first compared the initial immune cell infiltration of each treatment group to determine if there were upfront differences in immune populations that might bias downstream analyses. While patients who received mFFX had decreased infiltration by the CD4 T cell aggregate compared to the Gem-based group, there were no statistical differences in immune infiltration by the individual CD4 T cell phenotypes included in the aggregate (Supplementary Table 1), suggesting a similar initial immune cell phenotypic infiltration between treatment groups. Chemotherapy response was associated with variations in expression of tumor pre-treatment immunomodulating genes (Figures 3A, B). The immunoinhibitory gene TGFB1 was downregulated in responders to mFFX compared to nonresponders (log fold-change -0.55, p-value < 0.005), while the immunostimulator CD70 (88) was significantly upregulated (log fold-change 2.2, p-value < 0.05). In the metastatic liver cohort, only the upregulation of CD70 remained significantly increased among responders (log fold-change 2.8, p-value < 0.05). There were no significantly upregulated or downregulated genes associated with Gem-based therapy response (Supplementary Figures 2A, B).
Clinical impact of immunomodulatory genes between biopsy sites
We wanted to evaluate the impact of immunomodulator gene expression on clinical outcomes within our patient cohort. As therapy response strongly correlated with overall survival in this patient cohort from the COMPASS trial, we investigated the association between immunomodulatory gene expression and survival in the different therapy cohorts based on treatment response (Figures 4A-D). Across all biopsies, the immunoinhibitors ADORA2A, CSF1R, KDR/VEGFR2, LAG3, PDCD1LG2, and TGFB1 were significantly associated with worse survival in patients who received mFFX (Figures 4A, C). A subset of these immunoinhibitors, including ADORA2A, CSF1R, LAG3, and TGFB1, was also significantly associated with worse survival in the subset of liver biopsies from patients who received mFFX. There were no immunoinhibitors associated with either improved or worsened survival in the Gem-based cohort for all samples and the liver subset.
Multiple immunostimulatory genes were also associated with worse survival in the mFFX cohort across all samples (Figure 4B), including C10orf54, CXCL12, and TNFSF14/LIGHT. While there was an overlap in significant immunostimulators between all samples and the liver sample subset (Figure 4D), TNFSF15/TL1A was only significantly associated with improved survival in liver samples (HR 0.46, p-value = 0.03). Again, we found no immunostimulatory genes that were significantly associated with response to Gem-based therapy in either all samples or the liver subset.
FIGURE 3 Volcano plots of differential expression of immunomodulatory genes from TISIDB for (A) patients who received mFFX, and (B) patients who received mFFX and had metastatic liver biopsies.
Quantification of immune cell infiltration using CIBERSORTx
Similar to our immunomodulatory analysis, we observed significant differences using CIBERSORTx in infiltrating immune cell proportions when comparing the primary lesion pancreatic samples with metastatic liver biopsies (Table 2). Metastatic liver samples had both increased macrophage (1.2 times more, p-value < 0.05) and increased eosinophil infiltration (1.4 times more, p-value < 0.05) compared to pancreatic biopsies from the primary lesion. Further analysis of the specific macrophage phenotypes revealed an increased M2 macrophage fraction in the liver samples, indicating a more immunosuppressed metastatic TiME compared to the primary lesion.
For the entire cohort of treated patients, increased infiltration by 8 of the 9 immune cell aggregate phenotypes (all except NK cells) was significantly associated with response to mFFX treatment. Pre-treatment tumor immune cell populations were not associated with response to Gem-based treatment (Table 3A). On subset analysis of PDAC patients with liver metastasis, only increased pre-treatment CD8 T-cell, dendritic cell, and neutrophil infiltration was significantly associated with therapy response to mFFX (Table 3B). Conversely, increased total mast cell infiltration was only associated with response to Gem-based therapy in the liver TiME.
Taken together, these data suggest that pre-treatment infiltration by different immune cell phenotypes differs based on site and is associated with therapy response in both a site-specific and therapy-specific fashion. Response to mFFX was more strongly associated with increased infiltration by CD8 T-cells than response to Gem-based regimens, indicating that the pre-treatment TiME may be more impactful for patients receiving mFFX.
Discussion
Across cancers, the TiME is a well described mediator of patient survival and can influence treatment response (89). A growing body of evidence supports the ability of FOLFIRINOX to augment tumoral immunity. However, the influence of the pre-treatment TiME on FOLFIRINOX response is not well described. Insight into tumor microenvironment features associated with chemotherapy response may help to identify key mediators of efficacy and point to future opportunities in designing next generation chemo-ICI strategies. Utilizing access to a unique dataset of treatment-naïve biopsy samples of advanced or metastatic PDAC patients from the COMPASS trial, we hypothesized that the components of the pre-treatment TiME would differ between metastatic and primary lesions and would be associated with therapy response and survival in a cohort of advanced PDAC patients.
In the present study, we identified key site-specific differences in the cellular and genomic components of the PDAC TiME. Our analysis identified that PDAC metastatic liver biopsies had more variable expression of immunomodulatory genes compared to pancreas biopsies. Multiple immunoinhibitory genes such as CSF1R (84,85) and components of immune checkpoint signaling pathways such as CD86-CTLA4 (90), PD-1/PD-L1 (91), and PVR-TIGIT (92) were upregulated in liver metastatic samples. Notably, no immunomodulatory genes were identified that were significantly upregulated in the primary lesion TiME relative to the metastatic tumor samples. Compared to the TiME of primary lesion pancreatic biopsies, the metastatic liver biopsies also demonstrated increased infiltration by M0 and M2 macrophages. M0 macrophages are considered undifferentiated macrophages that can be polarized into different functional phenotypes such as M1 and M2 (93). However, a growing body of evidence suggests that M0 macrophages are not a benign member of the TiME. They are associated with worse outcomes in multiple cancers such as breast (94), prostate (95), and lung cancer (96), and possess a transcriptional profile similar to that of M2 macrophages (97,98). This, combined with an increased infiltration of the immunoinhibitory M2 macrophage (46-52) and the immunomodulatory findings stated above, suggests that the metastatic liver TiME is more dynamically regulated and overall immunosuppressed compared to the TiME of the primary lesion.
An inflamed pre-treatment TiME has been recognized as a predictor of response to neoadjuvant chemotherapy in various cancers (99,100). One of the earlier markers used to predict response was the systemic immune-inflammatory index (SII) (101), based on peripheral neutrophils, platelets, and lymphocytes, which predicted survival in NSCLC (102), gastric (103,104), and colorectal (105) cancer. Similarly, the Immunoscore, which was initially developed in colorectal cancer (106) and is based on CD3+/CD8+ lymphocyte quantification in tumors, predicts disease-free survival and overall survival in colorectal cancer (54,55) and other cancers (56,57). Notably, standard chemotherapy regimens for colorectal cancer, including FOLFOX, XELOX, FOLFIRI, FOLFOXIRI, and CAPIRI (107), share significant overlap in antineoplastic agents with FOLFIRINOX, implying that the pre-treatment immune contexture can impact therapy response in PDAC as well. This is supported by retrospective studies in PDAC, which have demonstrated associations between survival and various pre-treatment TIL populations. For example, increased CD8 TIL presence correlated with improved survival (108,109), while M2 macrophage infiltration correlated with worsened survival (110-112). However, limited data exist in PDAC directly addressing the effect of the pre-treatment immune contexture on chemotherapy.

In this study, we show that key members of the pre-treatment TiME are also significantly associated with treatment response. Notably, increased infiltration by mutually exclusive immune cell phenotypes was associated with response to different chemotherapeutic regimens. Increased CD8 T-cell infiltration was significantly associated with tumor response to mFFX in the entire cohort and on subset analysis of patients with liver metastasis. Previous reports have demonstrated that mFFX treatment is associated with an increased infiltration of CD8+ T cells and reduced Tregs (41-43,113), suggesting that mFFX can augment the PDAC TiME. However, our analysis demonstrates that the pre-treatment CD8 T-cell infiltration status of the PDAC TiME may also impact response to FOLFIRINOX. This raises the question of whether the observed increase in CD8+ T cells post-mFFX and the associated favorable response were due to the presence of a high CD8+ T cell population pre-treatment. Our findings suggest that the immune status and favorable biology of the treatment-naïve tumor may influence response. It is highly likely that both the pre-treatment TiME and chemotherapy-induced CD8+ T cells contribute to a favorable response with mFFX. In contrast, Gem-based regimens were not associated with the presence of any immune cell population across all samples, and only with mast cell aggregates in the liver subset. While mast cell infiltration is typically associated with tumor growth (114,115), higher mast cell infiltration was significantly correlated with overall survival and response to gemcitabine in a cohort of biliary tract cancer patients (116).

These treatment-specific patterns in the TiME were also seen when analyzing immunomodulators associated with therapy response and survival. For example, the immunoinhibitory and pro-tumorigenic (117) cytokine TGFB1 was significantly downregulated in responders and associated with worse survival in the mFFX treatment cohort. Other genes such as ADORA2A, CSF1R, and LAG3 were significantly associated with worse survival in the mFFX cohort across all samples and liver-only samples. No immunomodulatory genes were associated with therapy response in the Gem-based therapy cohort when comparing either all biopsies or focusing solely on liver biopsy samples (Supplementary Figures 2A, B). Similarly, no immunomodulatory genes were associated with survival within the Gem-based cohort (Figures 4A-D). In the context of preclinical studies demonstrating that FOLFIRINOX (18-20) and gemcitabine (118,119) impact tumor immunity, our data suggest that the interactions of the cellular and genomic components of the pre-treatment PDAC TiME with FOLFIRINOX and gemcitabine may be mechanistically different.
As reflected within this patient cohort, overall survival for patients with advanced stage PDAC remains abysmal, with 5-year survival of less than 10% (2), highlighting the need for strategies to improve outcomes for this large subset of PDAC patients. Currently, aside from the patient's performance status, there are no indications for administering one regimen over the other (41,42). Our data suggest that a FOLFIRINOX-based chemotherapy approach may be advantageous in select patients with a favorable pre-existing TiME. In such patients, and in patients of borderline performance status that may sway a clinician against the use of FOLFIRINOX (41,42), an immune-based indication for chemotherapy may provide a more nuanced approach to cancer therapy and improve patient outcomes. Furthermore, the implication of a dynamically regulated, immunosuppressed metastatic TiME in PDAC suggests potential avenues for targeted therapies and implies the need to incorporate stage of disease into the future design of immune-targeted PDAC therapeutic strategies. For example, the multiple upregulated immune checkpoint pathways in the metastatic PDAC TiME may be targeted via existing checkpoint blockade therapies (84,120-122) and may represent a potential therapy target to improve outcomes for patients with metastatic PDAC. Another consideration would be to capitalize on the impact of the pre-treatment TiME on chemotherapy through immunological priming. For example, preclinical studies show that administration of immunomodulatory cytokines such as interferon sensitizes PDAC cell lines to gemcitabine (123,124). Another strategy could combine chemotherapy with the prior use of mRNA vaccines to expand tumor-specific T-cells (45). As multiple immunomodulatory genes were only associated with survival in the mFFX cohort, adjunctive immunotherapeutic strategies could improve mFFX response in either a chemotherapy-only or chemo-ICI regimen. Potential candidates include antibodies or bispecific molecules targeting TGFB, which are in clinical testing (125). Agents to block CSF1R, such as surufatinib, are also in testing and have demonstrated anti-tumoral efficacy in phase III trials (126).
Limitations exist within this study. The COMPASS trial included only patients with advanced or metastatic PDAC, and the results may not be valid in a stage I/II, resectable cohort. CIBERSORTx only provides relative proportions of immune cell infiltration and may not reflect absolute infiltration, especially for inter-sample comparison (127); as such, we are unable to quantify how potential differences in absolute immune infiltration may impact chemotherapy response, and this will require further study. Additionally, the LM22 signature matrix used in deconvolutional analysis was derived from microarray data from PBMCs (128) and may not reflect the totality of immune cell phenotypes within the PDAC TiME. However, this study utilized a large, unique dataset of treatment-naïve PDAC samples to analyze the TiME; future efforts should validate these in silico results in both in vitro and in vivo PDAC models and advance our understanding of PDAC tumor biology.
FIGURE 1 (A) Waterfall plot of tumor response of patients included in the analysis from the COMPASS trial. Patients were recoded from PD (Progressive Disease), SD (Stable Disease), and PR (Partial Response): those with a decrease in tumor size on treatment were labeled "responders", and patients with no change or an increase in tumor size on treatment were labeled "nonresponders". (B) Kaplan-Meier estimate of responders and nonresponders.
FIGURE 4 Forest plot of univariate Cox proportional hazard analyses based on chemotherapy received. (A) Hazard ratios for immunoinhibitors in all biopsies; (B) hazard ratios for immunostimulators in all biopsies; (C) hazard ratios for immunoinhibitors in metastatic liver biopsies; (D) hazard ratios for immunostimulators in metastatic liver biopsies.
TABLE 1 Clinical characteristics of patients from the COMPASS trial included in analysis. Bolded values are those with p-values below a cutoff of 0.05, suggesting significance.
TABLE 2 Comparison of immune infiltration of liver vs pancreas biopsies.
TABLE 3A Comparison of immune infiltration in responders versus nonresponders for patients treated with mFFX and Gem-based therapy.
TABLE 3B Comparison of immune infiltration in responders versus nonresponders for patients treated with mFFX and Gem-based therapy, metastatic liver biopsies only. Bolded values are those with p-values below a cutoff of 0.05, suggesting significance.
High-Throughput Screening for GPR119 Modulators Identifies a Novel Compound with Anti-Diabetic Efficacy in db/db Mice
G protein-coupled receptor 119 (GPR119) is highly expressed in pancreatic β cells and enteroendocrine cells. It is involved in glucose-stimulated insulin secretion and glucagon-like peptide-1 (GLP-1) release, thereby representing a promising target for the treatment of type 2 diabetes. Although a number of GPR119 agonists were developed, no positive allosteric modulator (PAM) to this receptor has been reported. Here we describe a high-throughput assay for screening GPR119 PAMs and agonists simultaneously. Following screening of a small molecule compound library containing 312,000 synthetic and natural product-derived samples, one potent GPR119 agonist with novel chemical structure, MW1219, was identified. Exposure of MIN6 and GLUTag cells to MW1219 enhanced glucose-stimulated insulin secretion and GLP-1 release; once-daily oral dosing of MW1219 for 6 weeks in diabetic db/db mice reduced hemoglobin A1c (HbA1c) and improved plasma glucose, insulin and GLP-1 levels; it also increased glucose tolerance. The results demonstrate that MW1219 is capable of effectively controlling blood glucose level and may have the potential to be developed as a new class of anti-diabetic agents.
Introduction
Type 2 diabetes mellitus (T2DM), characterized by defects in both insulin secretion and sensitivity [1,2], is an increasing threat to human health. Due to the multiplicity of pathologies and the complexity of the control mechanisms of the human body, many therapeutics for T2DM exist at present [3].
In recent years, compounds that enhance incretin activities have been of particular interest to pharmaceutical companies. Incretin-based therapies are also becoming popular, using either GLP-1 mimetics or DPP-4 inhibitors [4]. However, each has shown its limitations clinically. For example, the efficacy of DPP-4 inhibitors is modest because their action is dependent upon endogenous GLP-1, while GLP-1 mimetics require frequent injections [5]. Therefore, the strategy of identifying orally active agents capable of stimulating GLP-1 release remains attractive [6].
GPR119 is a member of the class A G protein-coupled receptor (GPCR) family. It is highly expressed in pancreatic β-cells and intestinal endocrine cells [7]. Additionally, GPR119 mRNA is known to be significantly elevated in the islets of obese db/db mice compared with that of normal mice [8]. Upon activation by the endogenous ligand, oleoylethanolamide (OEA), the resultant accumulation of intracellular cAMP via adenylate cyclase activation enhances the effect of glucose-stimulated insulin secretion (GSIS) and GLP-1 release; thus GPR119 represents a promising target for the treatment of type 2 diabetes [7,9,10] and an attractive product development target for many drug makers. Based on the expression profile and biological actions of GPR119, a number of small molecule GPR119 agonists have been reported [11], but efforts to discover positive allosteric modulators (PAMs) have met with difficulties. Allosteric modulators bind to sites different from endogenous ligands. Because they are able to provide receptor specificity and selectivity, allosteric modulation has gained much traction as a means to overcome the limitations of many orthosteric ligands [12].
In this paper, we studied the allosteric activity between OEA and the first small molecule GPR119 agonist, AR-231453 [13]. A high-throughput screening (HTS) assay was developed and applied to search for GPR119 PAMs and agonists. A novel GPR119 agonist (MW1219) was subsequently identified and characterized both in vitro and in vivo using a variety of bioassays as well as diabetic db/db mice.
Reporter Gene Assay

HEK293-hGPR119 cells or control HEK293 cells were seeded onto 384-well plates at a density of 16,000 cells per well. At the time of assaying, AR-231453 or test compounds dissolved in DMSO were added. After 42 h of incubation at 37°C and 5% CO2 in a cell culture incubator, cells were lysed and quantified for luciferase activity using the Steady-Glo luciferase assay system (Promega, Madison, WI, USA) according to the manufacturer's protocol. For allosteric activity studies, test compounds were 5-fold serially diluted and added to AR-231453 or OEA concentration response reactions. Luciferase signals were determined with EnVision (PerkinElmer, Boston, MA, USA).
cAMP Accumulation Assay

cAMP accumulation was measured using the HTRF-cAMP dynamic kit (Cisbio International, Gif sur Yvette Cedex, France) according to the manufacturer's instructions. Briefly, HEK293-hGPR119 cells were suspended in assay buffer (DMEM, 1 mM 3-isobutyl-1-methylxanthine) and transferred to 384-well microplates (Greiner Bio-One, Frickenhausen, Germany) at a density of 16,000 cells/well. Plates were incubated for 30 min at 37°C before adding test compounds. After treatment for 30 min at 37°C, the reactions were stopped by addition of lysis buffer containing HTRF reagents. Plates were then incubated for 60 min at room temperature, and time-resolved FRET signals were measured after excitation at 320 nm. Both the emission signal from the europium cryptate-labeled anti-cAMP antibody (620 nm) and the FRET signal resulting from the labeled cAMP-d2 (665 nm) were detected by EnVision (PerkinElmer).
Insulin Secretion Assay
Insulin-secreting MIN6 cells were plated in 96-well plates (20,000 cells per well) for 2 days. On the day of the experiment, culture medium was aspirated and cells were washed twice with KRBH buffer. Cells were then placed at 37°C for 30 min in KRBH containing 2.8 mM glucose. Test compounds were dissolved in either 2.8 or 16.8 mM glucose medium and added
GLP-1 Release Assay
GLUTag cells were plated in 24-well plates on day 1 in low glucose DMEM supplemented with 10% FBS. Culture medium was replaced with DMEM (2.8 mM glucose) supplemented with 10% FBS 24 h before analysis of GLP-1 release. On the day of the experiment, cells were washed twice with PBS and incubated with different compounds at desired concentrations in serum-free DMEM with 2.8 mM or 16.8 mM glucose for 1 h at 37°C and 5% CO2. The supernatants were collected by centrifugation for 3 min. GLP-1 in the supernatant was detected by a GLP-1 ELISA kit (Linco Research Laboratory).
Small Interfering RNA Transfection
MIN6 and GLUTag cells were cultured as described above. siRNA targeting murine GPR119 coding sequences (Forward: 5′-CUAUGCUGCUAUCAAUCUATT-3′, Reverse: 5′-UAGAUUGAUAGCAGCAUAGTT-3′) was purchased from GenePharma (Shanghai, China). Transfection was performed using siRNA and Lipofectamine 2000 (Invitrogen, Burlington, ON, Canada) as instructed by the manufacturer. After 36 h, knockdown efficiency of siRNA was quantified by real-time RT-PCR, and cells were used for secretion experiments.
Animal Experiments
Animal experimentation was conducted in accordance with the regulations adopted by the Animal Care and Use Committee, Shanghai Institute of Materia Medica, Chinese Academy of Sciences (approval number: SIMM-2012-07-WMW-04). Male C57BL/KsJ db/db mice (Model Animal Research Center of Nanjing University, Nanjing, China) were housed in a temperature-controlled room (22±2°C) with a light/dark cycle of 12 h. At 6 weeks of age, mice were randomly assigned to chronic treatments. The experimental groups and respective doses of compounds were as follows: (1) 0.5% w/v sodium carboxyl methyl cellulose (CMC), (2) 10 mg/kg AR-231453, (3) 1 mg/kg sitagliptin (Beijing Huikang Boyuan Chemical Technology Co., Ltd., Beijing, China), (4) 10 mg/kg AR-231453 plus 1 mg/kg sitagliptin, (5) 100 mg/kg MW1219, and (6) 100 mg/kg MW1219 plus 1 mg/kg sitagliptin. Each regimen was administered once daily by oral gavage for 6 weeks. Body weight and food intake were recorded every other day and fasting glucose levels measured every week. An oral glucose tolerance test was performed in overnight-fasted mice every other week. After six weeks of treatment, HbA1c was quantified, and plasma insulin and GLP-1 contents were measured using the respective ELISA kits. For the oral glucose tolerance test, overnight-fasted mice (n = 6/treatment) were given compounds orally at the desired doses, and after 30 min an oral glucose bolus (3 g/kg) was delivered. Plasma glucose levels were determined at desired time points over a 2-h period using blood collected from the tail vein. For circulating GLP-1 analysis, compounds were administered orally to fasting animals followed by an oral glucose bolus (3 g/kg) 30 min later. Blood was drawn 2 min thereafter into Eppendorf tubes containing EDTA and a DPP-4 inhibitor. Plasma samples were obtained via centrifugation and assayed for active GLP-1 by ELISA. Blood was collected again after 20 min to determine insulin levels.
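For reference, the glycemic-excursion readout from an oral glucose tolerance test is commonly summarized as area under the glucose-time curve over the sampling window; a minimal sketch with illustrative numbers (not data from this study):

```python
# Trapezoidal AUC over the 2-h OGTT window; illustrative time points/values.
import numpy as np

t = np.array([0, 15, 30, 60, 90, 120])                          # minutes
glucose_vehicle = np.array([8.0, 18.5, 22.0, 19.0, 15.5, 12.0]) # mmol/L
glucose_treated = np.array([7.5, 15.0, 17.5, 14.0, 11.5, 9.5])

auc_vehicle = np.trapezoid(glucose_vehicle, t)   # np.trapz in older NumPy
auc_treated = np.trapezoid(glucose_treated, t)
inhibition = 100 * (auc_vehicle - auc_treated) / auc_vehicle
print(f"glycemic excursion inhibition: {inhibition:.1f}%")
```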
Statistical Analysis
Results are presented as means ± SEM. Differences between groups were analyzed by one-way ANOVA. All statistical analysis was performed using Prism statistical methods (GraphPad, San Diego, CA, USA).
Assay Development for Identifying GPR119 Modulators
Both AR-231453 and OEA could stimulate luciferase expression in HEK293-hGPR119 cells, with EC50 values measured at 1.05±0.11 nM and 2.78±0.18 μM, respectively (Figure 1A), consistent with those reported in the literature [13,14]. It is thus established that this cell line stably transfected with human GPR119 is suitable for screening purposes.
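EC50 values such as these are conventionally obtained by fitting a four-parameter logistic (Hill) curve to the concentration-response data; a minimal sketch with illustrative data points (not the study's raw data):

```python
# Four-parameter logistic fit for EC50 estimation on toy dose-response data.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    return bottom + (top - bottom) / (1 + (ec50 / conc) ** slope)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30])   # nM, illustrative
signal = np.array([2, 5, 12, 30, 52, 78, 92, 97])       # % of max response

popt, _ = curve_fit(hill, conc, signal, p0=[0, 100, 1.0, 1.0], maxfev=10_000)
print(f"EC50 = {popt[2]:.2f} nM, Hill slope = {popt[3]:.2f}")
```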
Although OEA and AR-231453 behave as agonists in GPR119-related bioassays, given their difference in chemical structures, we investigated whether their effects are mediated through the same binding site on GPR119. A series of compound combination studies using both reporter gene and cAMP accumulation assays were carried out. It was found that AR-231453 was able to increase basal luciferase and cAMP responses elicited by OEA without affecting its potency (Figures 1B and 1C). EC50 values are shown in Table S1. Such an increase of basal signal is due to the activity of AR-231453 on GPR119 alone. The lack of a shift in potency of OEA suggests that AR-231453 and OEA either bind to the same site or their binding areas somehow overlap. Similarly, addition of OEA to AR-231453 also increased basal luciferase and cAMP levels compared to those induced by AR-231453 alone; again, it did not alter the potency of AR-231453 (Figures 1D and 1E, Table S2).
A number of agonists for GPR119 have been discovered to date, but none of them is a PAM. To identify compounds with PAM or agonist activities for GPR119, we optimized the luciferase assay to meet high-throughput screening requirements. It exhibited a high signal-to-background ratio and Z′ factor (6.57 and 0.74, respectively) (Figure 1F) [15]. Test compounds obtained from the National Center for Drug Screening and a concentration of OEA producing 15% efficacy on GPR119 (1 μM) were applied to stimulate HEK293-hGPR119 cells simultaneously. Theoretically, compounds with PAM activity or showing an agonist effect on GPR119 (efficacy above 15%) could be selected following the HTS campaigns.
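The two screening-window statistics quoted above are computed directly from positive and negative control wells; a short sketch with illustrative well values (not the study's plate data):

```python
# Signal-to-background and Z' factor from plate controls.
import numpy as np

pos = np.array([9800, 10100, 9500, 10400, 9900])   # e.g. agonist control wells
neg = np.array([1500, 1480, 1620, 1390, 1550])     # e.g. DMSO wells

s_over_b = pos.mean() / neg.mean()
# Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
z_prime = 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())
print(f"S/B = {s_over_b:.2f}, Z' = {z_prime:.2f}")
```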
Identification of a Novel GPR119 Agonist by HTS
A total of 312,000 synthetic and natural compounds were screened against HEK293-hGPR119 cells. Through primary screening and subsequent confirmation studies, a synthetic compound, MW1219 (Figure 2A), was found to induce the luciferase reaction through GPR119 in a concentration-dependent manner with an EC50 of 0.96±0.08 μM (Figure 2B). To further study whether the observed cellular responses are receptor-mediated, the effect of MW1219 was measured in control HEK293 cells. In the absence of GPR119, it did not show any activity on luciferase expression, thereby demonstrating its specificity for GPR119. Compared to forskolin, the efficacy of MW1219 in these cells was much lower (10%), which may reflect normal fluctuation of the assay (Figure 2C). Because GPR119 is mainly coupled to the Gαs pathway, the ability of MW1219 to activate this signal transduction route was then explored in a Gαs-coupled cAMP accumulation assay. MW1219 only stimulated the increase of cAMP in HEK293-hGPR119 cells and had no influence in control HEK293 cells (Figures 2D and 2E). In this system, the potency displayed by MW1219 was different from that seen in the reporter gene assay. The efficacy achieved with 50 μM MW1219 was about 40% of that elicited by forskolin, whereas it could induce the same level of efficacy as forskolin in the reporter gene assay.
To determine whether MW1219 could allosterically modulate the activity of OEA, 10 μM, 1 μM and 0 μM MW1219 were added to various OEA concentrations. The EC50 of OEA did not change significantly, suggesting that MW1219 is not a PAM for OEA (Figures 2F and 2G, Table S3).
Based on the results presented above, we conclude that MW1219 is an agonist for GPR119 which is capable of activating Gαs-coupled signal pathways.
MW1219 Stimulates Insulin and GLP-1 Secretion in vitro
To confirm the direct effects of MW1219 on pancreatic β-cells, we examined glucose-stimulated insulin secretion by mouse pancreatic MIN6 insulinoma cells, which endogenously express GPR119 [16]. Two batches of MIN6 cells were exposed to 2.8 mM and 16.8 mM glucose medium, respectively. MIN6 cells exposed to 16.8 mM glucose increased insulin secretion when treated with 1 μM, 5 μM, 15 μM or 50 μM MW1219 or 10 μM OEA. Compared with the DMSO control, MW1219 at 15 μM and 50 μM, as well as 10 μM OEA, elevated insulin significantly (P<0.05 for MW1219, P<0.01 for OEA). In contrast, MW1219 had no effect on insulin release in MIN6 cells under low glucose (2.8 mM) conditions (Figure 3A).
To determine whether MW1219 could stimulate GLP-1 release, we measured the activity of MW1219 in GLUTag enteroendocrine cells, which express the proglucagon gene and secrete GLP-1 in a regulated manner [17]. In high glucose conditions, the content of GLP-1 released into the supernatant by GLUTag cells was elevated dose-dependently. Neither MW1219 nor OEA exhibited any effect in low glucose medium (Figure 3B).
We next studied whether the insulinotropic effect of MW1219 described above is GPR119 dependent. Specific siRNA was employed to knock down GPR119 in MIN6 cells, and quantitative RT-PCR analysis demonstrated that the expression level of GPR119 was less than 70% of that in normal MIN6 cells (data not shown), accompanied by a diminished effect of MW1219 on insulin secretion. The action of 15 μM MW1219 shown above was suppressed by siRNA, and the significant difference between 15 μM MW1219 treatment and the DMSO control disappeared (Figure 3C). A similar phenomenon was observed in GLUTag cells. The content of GLP-1 released by cells treated with 20 μM and 50 μM MW1219 became indistinguishable from the medium control. Our results thus suggest that the insulin and GLP-1 release induced by MW1219 requires the presence of GPR119 (Figure 3D).
Anti-diabetic Efficacy of Chronic MW1219 in db/db Mice
To study the potential glucose control property of MW1219 in vivo, male db/db mice were orally treated with various agents for 6 weeks. Food intake and body weight, examined every other day during the treatment period, did not show significant differences among the six groups (Figure S1). After cessation of therapy, fasting blood glucose levels in all treatment groups markedly decreased compared with the control. Reductions of 16% and 29% were achieved following administration of 100 mg/kg MW1219 and the combination of MW1219 with 1 mg/kg sitagliptin, respectively. The level in the latter group was significantly lower than that of mice receiving sitagliptin alone (Figure 4A). Through 6 weeks of treatment, HbA1c also decreased: compared with the control, there was a 0.67% decrease in animals treated with 100 mg/kg MW1219 (P<0.05); no difference was observed in the 1 mg/kg sitagliptin and the MW1219 plus sitagliptin groups (Figure 4B). Glucose tolerance tests demonstrate that 8.6% and 18.6% inhibition of glycemic excursion was realized in mice treated with either 100 mg/kg MW1219 or a combination of 100 mg/kg MW1219 and 1 mg/kg sitagliptin compared to the vehicle-treated control (P<0.05 and P<0.001, respectively; Figures 4C and 4D). The sensitivity of the pancreas to glucose challenge was also improved following MW1219 treatment, as the insulin response to a glucose bolus was notably more pronounced than that of controls (Figure 5A). However, circulating GLP-1 levels after 100 mg/kg MW1219 administration were not elevated (Figure 5B).
Discussion
GPR119 is coupled to the Gαs protein, and its activation can induce increases in cAMP. The cAMP accumulation and reporter gene assays described here are two functional methods mainly applied to identify and characterize hits in an HTS setting. Each assay has its own advantages and limitations [18]. The reporter gene assay has a number of amplification steps, a feature that requires longer incubation time. It can increase the potential to identify both partial and full agonists, which allowed us to use it in HTS of GPR119 modulators including PAMs. After primary screening and subsequent confirmation, 248 hits were found (data not shown). MW1219 was selected based on its novel structure for further characterization.
MW1219 was capable of stimulating luciferase expression and cAMP accumulation in a concentration-dependent manner, but potency and efficacy varied between the two assay systems (Figures 2B to 2E). This discrepancy may be caused by the different conditions and features of these two assays. For example, compound treatment time in the reporter gene assay (42 h) was much longer than that of the cAMP accumulation assay (30 min), and small changes in cAMP levels could be amplified by the luciferase reaction in the reporter gene assay. Therefore, it appears that potency and efficacy measured by the reporter gene assay are generally higher. Nonetheless, this does not alter the fact that both assays identified MW1219 as a specific agonist for GPR119.
Compared with AR-231453 and OEA, the molecular mass of MW1219 is smaller, with a simple chemical structure. We studied its allosteric potential toward AR-231453 and OEA on GPR119. Our results show that MW1219 could increase their basal signals but not their potencies at the receptor (Figures 2F and 2G). This phenomenon was also seen in the allosteric activity studies between AR-231453 and OEA. It seems that MW1219 binds to the same site as AR-231453 and OEA and, hence, is a pure GPR119 agonist.
Once GPR119 is activated, it enhances glucose-stimulated insulin release in pancreatic β cells. We thus studied this effect in MIN6 cells, and the result showed that the compound was only effective in high glucose (16.8 mM) medium. After siRNA treatment, this action of MW1219 was suppressed, a property consistent with other GPR119 agonists reported previously [19]. The compound was also evaluated in GLUTag cells, and the result was similar to that seen in MIN6 cells: MW1219 could only enhance GLP-1 release in 16.8 mM glucose medium. These data suggest that MW1219, when used in vivo, may avoid causing hypoglycemia, a complication commonly associated with insulin therapy.
Encouraged by our pilot dose-response studies in vivo, in which MW1219 at 100 mg/kg improved both fasting plasma glucose levels and glucose tolerance (data not shown), we further examined the anti-diabetic effects of MW1219, either alone or in combination with sitagliptin, in db/db mice as presented in this paper. Clearly, subchronic treatment of diabetic mice with MW1219 led to reductions in HbA1c and fasting glucose levels accompanied by improved glucose tolerance and insulin sensitivity. An additive effect was observed when it was co-administered with sitagliptin, pointing to its potential as a drug lead for further development.
Obesity is one of the most important factors in the development of insulin resistance. Associated with obesity, metabolic disorders including hyperinsulinemia, impaired glucose tolerance and dyslipidemia are often noted, which increase the risk for type 2 diabetes. GPR119 can stimulate GLP-1 secretion resulting in suppression of food intake [20] and gastric emptying [21]. Some agonists of GPR119, such as OEA, are also known to reduce food intake [9,22]. However, we observed neither food intake inhibition nor weight loss following 6-week MW1219 treatment ( Figure S1). According to the literature, AR-231453 could reduce food intake only at very high doses. Sitagliptin at 1 mg/kg raised plasma GLP-1 levels but failed to suppress food intake and decrease body weight. Combination of AR-231453 with sitagliptin also did not induce noticeable effects on feeding and weight gain. Thus, our observation on these two metabolic parameters after MW1219 intervention is in line with previous findings.
Finally, we also evaluated the ability of MW1219, either alone or in combination with sitagliptin, to stimulate the release of insulin and GLP-1 in db/db mice. After an oral glucose bolus, levels of insulin and GLP-1 were elevated at 20 min and 2 min, respectively [7,10]. In our hands, MW1219 at 100 mg/kg was only able to induce the secretion of insulin (Figure 5A) but not GLP-1 (Figure 5B). This may be explained by the fact that MW1219 is a weak GPR119 agonist, and it is generally accepted that GLP-1 is hard to measure due to its short half-life [23].
In summary, we have identified a novel small molecule (MW1219) capable of activating GPR119 and exerting beneficial metabolic effects in vitro and in vivo. Like other GPR119 agonists reported so far, MW1219 does not show any allosteric activities, and compared with AR-231453, the agonist activity of MW1219 is rather weak. Obviously, this would not prevent it from becoming a scaffold for structural modification and optimization through medicinal chemistry efforts.

Figure S1 Effects of chronic MW1219 treatment on food intake and body weight in db/db mice. (A) Food intake in db/db mice treated with different regimens; (B) body weight after 6 weeks of treatment. Data are shown as means ± SEM (n = 10). *P<0.05, **P<0.01 vs. vehicle group as determined with one-way ANOVA test. (TIF)
A Time-Frequency Generative Adversarial based method for Audio Packet Loss Concealment
Packet loss is a major cause of voice quality degradation in VoIP transmissions, with serious impact on intelligibility and user experience. This paper describes a system based on a generative adversarial approach, which aims to repair the lost fragments during the transmission of audio streams. Inspired by the powerful image-to-image translation capability of Generative Adversarial Networks (GANs), we propose bin2bin, an improved pix2pix framework to achieve the translation task from magnitude spectrograms of audio frames with lost packets to non-corrupted speech spectrograms. In order to better maintain the structural information after spectrogram translation, this paper introduces the combination of two STFT-based loss functions, mixed with the traditional GAN objective. Furthermore, we employ a modified PatchGAN structure as discriminator and we lower the concealment time by a proper initialization of the phase reconstruction algorithm. Experimental results show that the proposed method has obvious advantages when compared with the current state-of-the-art methods, as it can better handle both high packet loss rates and large gaps.
I. INTRODUCTION
Speech signals are often subject to localized distortions or even total loss of information when data is transmitted through unreliable channels. This happens, for example, in applications such as mobile digital communications, videoconferencing systems and Voice over Internet Protocol (VoIP) calls. In such scenarios, audio frames are often encapsulated into packets, which are then routed individually through the network, sometimes taking different paths and resulting in out-of-order delivery. At the destination, the original sequence may be reassembled in the correct order, based on the packet sequence numbers. Along the way, a variety of issues can occur, such as packet loss, excessive delay, or jitter.
The process of restoring missing packets is known as Packet Loss Concealment (PLC) [1]. This term refers to any technique that attempts to overcome the packet-loss problem by replacing the lost fragments with an estimated reconstruction, which should be meaningful and consistent with the informative content of the speech message. The system should also prevent audible artifacts and decrease listening fatigue, so that the listener remains unaware of any problems that have occurred.
A. Related works
Some techniques refer to a similar task with the terms Audio Inpainting [2], [3], Waveform Interpolation [4] or Extrapolation [5]. These techniques address the reconstruction problem from a sparsity point of view, by approximating the waveform with a combination of frequency atoms extracted from a given dictionary. However, they are not suitable for real-time applications, as the computational cost can lead to excessive latency.
Most of the current approaches to PLC are based on codecs that implement algorithmic solutions: sender-based techniques like Interleaving and Forward-Error Correction (FEC) [6], or receiver-based concealment techniques, like Silence/Noise Substitution, Waveform Substitution, or Linear Predictive Coding (LPC) [7].
In this study, we apply a Generative method, based on the pix2pix [14] framework which exploits a Fully Convolutional Network (FCN) architecture, to address the spectrogram inpainting task. We show that this solution, while preserving global temporal and spectral information along with local information, can outperform competing approaches, based either on classical digital signal processing solutions or learning methods.
II. GENERATIVE ADVERSARIAL NETWORKS
Generative Adversarial Networks (GANs) [15] have emerged in the past years as a powerful generative modeling technique. A typical GAN consists of two networks, a generator (G) and a discriminator (D). Given an input z of random values sampled from a normal distribution (the latent variable), the generator performs an upsampling in order to obtain a sample of suitable dimensions. On the other hand, the discriminator acts as a binary classifier, trying to distinguish "real" samples x (belonging to the dataset distribution) from "fake" samples generated by G.
Both G and D are trained simultaneously in a min-max competition with respect to binary cross-entropy loss. The final objective for G is to output samples that follow as close as possible the "real" data distribution, while D learns to spot the fake samples from real ones, by penalizing G for producing implausible results.
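As a concrete illustration, the following is a minimal PyTorch sketch of the alternating min-max update just described; G, D, the optimizers, and the latent batch z are generic placeholders, not the networks used later in this paper.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def gan_step(G, D, x_real, z, opt_G, opt_D):
    # Discriminator update: push real samples toward 1, generated toward 0.
    opt_D.zero_grad()
    x_fake = G(z).detach()              # detach so no gradient reaches G
    pred_real, pred_fake = D(x_real), D(x_fake)
    loss_D = bce(pred_real, torch.ones_like(pred_real)) \
           + bce(pred_fake, torch.zeros_like(pred_fake))
    loss_D.backward()
    opt_D.step()

    # Generator update: try to make D label generated samples as real.
    opt_G.zero_grad()
    pred = D(G(z))
    loss_G = bce(pred, torch.ones_like(pred))
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```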
Given the success achieved in the field of image processing, GANs have also been effective in speech processing tasks. In this regard, WaveGAN [16] represents the pioneering attempt to adapt a deep convolutional GAN (DCGAN) structure for speech, by compressing the two-dimensional image input into one-dimensional. It laid the foundations for GAN-based practical audio synthesis and for converting different image generation GANs to operate on waveforms.
Several extensions have been derived from WaveGAN; to name a few, cWaveGAN [17], which allows conditioning both G and D with additional information to drive the generation process, and Parallel WaveGAN [18], which uses a multiresolution STFT loss along with the adversarial loss.
As outlined in [16], working with compressed time-frequency representations may be problematic in the generative setting, since generated spectrograms are typically non-invertible and cannot be listened to without lossy phase estimation. Nevertheless, the practice of bootstrapping image recognition algorithms for audio tasks has become commonplace; examples include SpecGAN [16], MelGAN [19], VocGAN [20] and StyleGAN [21].
A. Pix2pix
Pix2pix is a conditional GAN (cGAN) originally developed in 2017 by Phillip Isola et al. [14] for synthesizing photos from label maps, reconstructing objects from edge maps and colorizing images. Unlike a vanilla GAN, which uses only random noise seeds to trigger generation, a cGAN introduces a form of supervision by feeding the generator with the target information c, such as categorical labels or contextual samples. The discriminator is also conditioned on c, to help it judge more accurately the matching and alignment of the two images, yielding the standard cGAN objective

L_cGAN(G, D) = E_{x,c}[log D(x, c)] + E_{z,c}[log(1 - D(G(z, c), c))]

Unlike other cGAN-based works (e.g. [22], [23]), Isola et al. demonstrate that the input noise vector z does not have a significant impact if the conditioning information is strong enough, so they removed it, obtaining the same stochastic behavior by adding dropout layers to the generator.
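The following is a minimal sketch of how such conditioning is typically realized in a pix2pix-style setup: the condition and the candidate output are concatenated channel-wise before being scored by D. The helper function and tensor shapes are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def discriminate(D, condition, candidate):
    # condition, candidate: (batch, 1, H, W) spectrogram tensors
    pair = torch.cat([condition, candidate], dim=1)  # (batch, 2, H, W)
    return D(pair)  # an N x N patch map of real/fake scores
```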
III. NEURAL CONCEALMENT ARCHITECTURE
An overview of our bin2bin architecture is presented in Fig. 1. The main contribution of this paper is the adaptation of the pix2pix architecture to the audio packet loss concealment task, through an in-depth evaluation of both the generative and discriminative processes, optimized to inpaint spectrogram gaps. We adopt the term bin2bin as a direct translation of pix2pix, inspired by the fundamental unit (bin) of the discretized time and frequency axes of the spectrogram.
A. Generator
In the proposed bin2bin scheme, the generator architecture makes use of the U-Net [24] structural design with the insertion of skip-connections between affine layers. The U-Net is composed of a convolutional encoder that down-samples the input image in the first half of the architecture, and a decoder that upsamples the latent representation applying 2D transposed-convolutions.
The clean signal s and its lossy counterpart s̃ are first transformed into time-frequency spectrograms. In the provided implementation, all STFTs are computed with a 512-point Hann window, corresponding to 32 milliseconds at the sample rate of 16000 Hz, and a hop size of 64. The STFT parameters have been chosen to ensure a balanced resolution between the regions to be reconstructed and the reliable parts acting as conditioning contexts.
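A short sketch of this analysis step with librosa, using the parameters stated above; the file name is hypothetical, and the final comment is an assumption, since the paper does not state how the 257 FFT bins are reduced to the generator's 256.

```python
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=16000)       # hypothetical input file
S = librosa.stft(y, n_fft=512, hop_length=64, window="hann")
log_mag = np.log1p(np.abs(S))                      # log-magnitude for the generator
phase = np.angle(S)                                # retained only to seed Griffin-Lim
# A 512-point FFT yields 257 frequency bins; fitting the 1 x 256 x 256 input
# of G then requires dropping one bin (e.g. the Nyquist bin), which is an
# assumption on our part.
```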
Our generator G accepts 1 × 256 × 256 inputs, where the dimensions represent, respectively, the number of channels, frequency bins and time bins; hence, a portion of this size is extracted at a random time position from the aforementioned spectrograms S and S̃, regardless of the amount of lost fragments present inside.
Only the log-magnitude spectrogram is fed into the generator; for the training stage, the phase information is discarded, while for the test stage it is used to initialize the Griffin-Lim [25] phase reconstruction algorithm.
B. Discriminator
The discriminator is built on a custom architecture, specifically designed for the pix2pix framework, called PatchGAN [14]. It is basically a fully convolutional network that maps the input image into an N × N feature map of outputs Y, in which each patch y_ij indicates whether the corresponding portion of the input is real or fake. The patches originate from overlapping receptive fields, which can be retrieved through simple backtracking operations.
In the original paper [14], an ablation study was conducted to determine the best configuration of D (number of conv layers, kernel size) to maximize the evaluated metrics. In this work we focused on a similar aspect: we tested the effect of varying the size of the discriminator convolutional kernels, to achieve a rectangular receptive field instead of the square one (70 × 70 pixels) used in pix2pix. We motivated this decision by observing that the portions of the spectrogram to be concealed extend over the entire frequency dimension and a relatively small part of the time dimension. We traded off between the complexity of D and the desired shape, obtaining an optimal receptive field of 162 × 24, with rectangular 8 × 2 kernels for all conv layers.
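The reported field can be reproduced with a small per-axis receptive-field calculation; the layer strides below follow the original five-layer PatchGAN of pix2pix (an assumption, since this paper does not list them explicitly).

```python
def receptive_field(kernels, strides):
    # Standard recurrence: each layer widens the field by (k - 1) * jump,
    # where jump is the cumulative stride of all earlier layers.
    rf, jump = 1, 1
    for k, s in zip(kernels, strides):
        rf += (k - 1) * jump
        jump *= s
    return rf

strides = [2, 2, 2, 1, 1]                  # assumed pix2pix PatchGAN strides
print(receptive_field([8] * 5, strides))   # 162 (frequency axis, kernel 8)
print(receptive_field([2] * 5, strides))   # 24  (time axis, kernel 2)
```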
C. Post-processing
The generator output represents the magnitudes of the TF coefficients, of both the reliable and the lost regions. The synthesis by the inverse STFT introduces an inherent crossfading, which significantly reduces artifacts. For the phase reconstruction we used a modified version of the Griffin-Lim [25] algorithm, providing the phase of the lossy frame as an initial estimate. In this way the synthesis of the reconstructed waveform is considerably sped up; we can obtain maximum quality with fewer than 10 iterations of the algorithm.

Fig. 1. The proposed framework is composed of the U-Net for spectrogram inpainting. Deep feature loss for training the U-Net is obtained by ensembling the discriminator loss (binary cross-entropy between patches) with the spectral distances (Lmag and Lsc) between the representations of the recovered and the actual STFT log-magnitudes.
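A minimal sketch of Griffin-Lim with such a warm start, written with librosa primitives; the helper and variable names are illustrative, not the authors' code.

```python
import librosa
import numpy as np

def griffin_lim(mag, init_phase, n_iter=10, n_fft=512, hop=64):
    S = mag * np.exp(1j * init_phase)           # warm-start with the lossy phase
    for _ in range(n_iter):
        y = librosa.istft(S, hop_length=hop, window="hann")
        S_new = librosa.stft(y, n_fft=n_fft, hop_length=hop, window="hann")
        S = mag * np.exp(1j * np.angle(S_new))  # keep magnitudes, update phase
    return librosa.istft(S, hop_length=hop, window="hann")
```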
D. Loss functions
The generator model is trained by mixing the GAN objective with a traditional pixel-wise loss, between the generated reconstruction of the source spectrogram and the expected target spectrogram.
Differently from the original paper, we have found it more beneficial to use loss functions related to the perceptual quality of the audio signal: the Spectral Convergence loss (L_sc) and the log-STFT magnitude loss (L_mag), defined as follows:

L_sc = ‖ |S| − |Ŝ| ‖_F / ‖ |S| ‖_F

L_mag = (1 / (T · N)) ‖ log|S| − log|Ŝ| ‖_1

where |S_{t,f}| and |Ŝ_{t,f}| represent the STFT magnitudes of the target signal s and the recovered signal ŝ, respectively, at time t and frequency f; ‖·‖_F and ‖·‖_1 denote the Frobenius and L1 norms; and T and N denote the number of time bins and frequency bins of a frame. As outlined in [26], L_sc highly emphasizes large spectral components, which helps especially in early phases of training, while L_mag accurately fits small amplitude variations, which tend to be more important towards the later phases of training.
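In code, the two losses are straightforward; the following PyTorch sketch mirrors the definitions above (the epsilon guard against log(0) is an added assumption).

```python
import torch

def spectral_convergence(mag_true, mag_pred):
    # Frobenius-norm ratio; emphasizes large spectral components.
    return torch.norm(mag_true - mag_pred, p="fro") / torch.norm(mag_true, p="fro")

def log_stft_magnitude(mag_true, mag_pred, eps=1e-7):
    # L1 distance of log-magnitudes, averaged over all T * N bins.
    return torch.mean(torch.abs(torch.log(mag_true + eps)
                                - torch.log(mag_pred + eps)))
```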
The goal of the adversarial loss is to drive the generator model to output T-F representations that are plausible in the target domain, whereas the spectral losses regularize the generator model to output spectrograms that are a plausible translation of the source context. The combination of the adversarial loss and the spectral losses is controlled by the hyperparameters λ 1 and λ 2 , both set to 250, since it has been observed that the spectral loss is more important for reconstruction than the adversarial one.
The discriminator model is trained in a standalone manner in the same way as in a traditional GAN model, minimizing the negative log-likelihood of identifying real and fake images, although conditioned on the clean spectrogram, which is concatenated with G(S) to form the input of D.
We followed a common practice in training generative networks [27], which consists in balancing the evolution of training by performing n_G generator weight updates for every update of D. We used the value n_G = 10.
The models were trained for 50 epochs, following an early stopping policy based on the spectral losses observed on the validation set. We used the Adam [28] optimizer with a learning rate of 0.0002 for both the generator and the discriminator, and a batch size of 8.
IV. DATASETS
We used the VCTK Corpus (Centre for Speech Technology Voice Cloning Toolkit) [29] to simulate loss traces for training and evaluation of the speech PLC model. VCTK contains about 44 hours of clean speech from 109 English speakers, 47 males and 62 females, with different accents. To comply with the policy followed by the comparing methods, we downsampled the audio to 16 kHz, trimmed leading and trailing silence, and split it into three subsets: train, validation and test, the latter containing 5 speakers held out from the train and validation sets. We assumed that the lost packets have a duration that is a multiple of 20 ms; losses were simulated by zeroing samples of the clean waveform, and we limited the maximum gap length to 120 ms, equivalent to 6 consecutive packets. Fig. 3 shows the distribution of lost gaps, obtained by injecting packet losses at rates in the range 10%-40%.
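A sketch of one way to generate such loss traces; packet boundaries, the gap-length draw, and the rate parameter are illustrative choices consistent with the description above, not the authors' exact trace generator.

```python
import numpy as np

def simulate_packet_loss(y, sr=16000, packet_ms=20, loss_rate=0.2,
                         max_gap=6, rng=None):
    rng = rng or np.random.default_rng()
    pkt = int(sr * packet_ms / 1000)            # samples per 20 ms packet (320)
    n_packets = len(y) // pkt
    lossy = y.copy()
    i = 0
    while i < n_packets:
        if rng.random() < loss_rate:
            gap = int(rng.integers(1, max_gap + 1))  # 1..6 consecutive packets
            lossy[i * pkt:(i + gap) * pkt] = 0.0     # zero the lost samples
            i += gap
        else:
            i += 1
    return lossy
```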
V. RESULTS AND COMPARISONS
The proposed PLC method has been compared with three algorithmic solutions, represented by the general purpose codecs Opus [30], WebRTC [31] and Enhanced Voice Services (EVS) [32], and against four state-of-the-art deep PLC methods: the wave-to-wave generative adversarial network (PLCNet) [33], the mel-to-wave non-autoregressive adversarial auto-encoder (PLAAE) [34], the wave-to-wave adaptive recurrent neural network (RNN) [9] and the time-frequency hybrid generative adversarial network (TFGAN) [12]. In addition, the evaluation metrics obtained by simply zero-filling the lost gaps were also reported as a baseline.
We evaluated the performance of the proposed generative inpainting method in terms of Wide-Band Perceptual Evaluation of Speech Quality (PESQ) [35] and Short-Time Objective Intelligibility (STOI) [36]. The implementations used in this paper are from [37] for PESQ and from [38] for STOI. Table I shows the experimental results for PESQ and STOI under different packet loss rates, compared with the PLCNet method. It can be seen that the proposed model achieves an increasingly significant improvement in performance as the loss rate increases, so it is also better able to cope with large gaps of adjacent lost packets. The improvement is notable on PESQ scores; it ranges from +6.0% (loss rate 10%) to +27.5% (loss rate 40%). STOI shows less noticeable gains, only for higher loss rates: +2.3% (loss rate 30%) and +7.8% (loss rate 40%). Table II summarizes the results of the proposed method against all the competing approaches. Values represent the average score of PESQ and STOI under all packet loss rates investigated. Compared with the best performing network among previous state-of-the-art systems (PLCNet), bin2bin improves PESQ by 15.3% and STOI by 2.4%, while, in comparison with the best codec-based concealment (EVS), the improvement rises to 43.9% for PESQ and 12.8% for STOI. Figure 2 shows the qualitative results of a concealed 120 ms wide gap within a test sample. This represents the worst scenario, in terms of extent of lost fragments, that the network is trained to face.
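A sketch of how such scores can be computed in Python, assuming the commonly used pesq and pystoi packages; whether these match the implementations cited as [37] and [38] is an assumption.

```python
from pesq import pesq
from pystoi import stoi

def evaluate(reference, concealed, sr=16000):
    # reference, concealed: 1-D float arrays of clean and concealed speech
    score_pesq = pesq(sr, reference, concealed, "wb")        # wide-band PESQ
    score_stoi = stoi(reference, concealed, sr, extended=False)
    return score_pesq, score_stoi
```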
In addition, we timed the forward execution of the bin2bin inpainting process, both in a CPU environment (Intel core i7-6850K) and a GPU environment (Nvidia Titan Xp), obtaining real-time (RT) factor values of 0.17 and 0.11 respectively.
VI. CONCLUSIONS
In this paper, we proposed an end-to-end pipeline for spectrogram inpainting and audio concealment using a cGAN-based architecture, inspired by the popular pix2pix framework. We combined the classical discriminative loss with a linear combination of two loss functions that are correlated with the perceptual quality of speech. In addition, we adapted the receptive field of the PatchGAN discriminator and used a custom initialization of the Griffin-Lim algorithm to speed up post-processing. We demonstrated experimentally that the proposed method is capable of simultaneously identifying and recovering missing parts, outperforming the state-of-the-art DNN method by +15.3% on PESQ and +2.4% on STOI. Finally, the inference time evaluation suggests that this approach can be integrated into a real-time application, even with a mid-range hardware setting.
As future developments, we plan to extend the generator to directly process complex-valued spectrograms, in order to incorporate the phase reconstruction directly into the generative model.
Plant Oils as Potential Sources of Vitamin D
To combat vitamin D insufficiency in a population, reliable diet sources of vitamin D are required. The recommendations to consume more oily fish and the use of UVB-treated yeast are already applied strategies to address vitamin D insufficiency. This study aimed to elucidate the suitability of plant oils as an alternative vitamin D source. Therefore, plant oils that are commonly used in human nutrition were first analyzed for their content of vitamin D precursors and metabolites. Second, selected oils were exposed to a short-term UVB irradiation to stimulate the synthesis of vitamin D. Finally, to elucidate the efficacy of plant-derived vitamin D to improve the vitamin D status, we fed UVB-exposed wheat germ oil (WGO) for 4 weeks to mice and compared them with mice that received non-exposed or vitamin D3 supplemented WGO. Sterol analysis revealed that the selected plant oils contained high amounts of not only ergosterol but also 7-dehydrocholesterol (7-DHC), with the highest concentrations found in WGO. Exposure to UVB irradiation resulted in a partial conversion of ergosterol and 7-DHC to vitamin D2 and D3 in these oils. Mice fed the UVB-exposed WGO were able to improve their vitamin D status as shown by the rise in the plasma concentration of 25-hydroxyvitamin D [25(OH)D] and the liver content of vitamin D compared with mice fed the non-exposed oil. However, the plasma concentration of 25(OH)D of mice fed the UVB-treated oil did not reach the values observed in the group fed the D3 supplemented oil. It was striking that the intake of the UVB-exposed oil resulted in distinct accumulation of vitamin D2 in the livers of these mice. In conclusion, plant oils, in particular WGO, contain considerable amounts of vitamin D precursors which can be converted to vitamin D via UVB exposure. However, the UVB-exposed WGO was less effective to improve the 25(OH)D plasma concentration than a supplementation with vitamin D3.
Keywords: ergosterol, 7-dehydrocholesterol, vitamin D, plant oils, wheat germ oil, ultraviolet light irradiation, bioavailability, mice

Introduction

Food sources of vitamin D are scarce. Although oily fish is considered to be a good source of vitamin D3 (1, 2), its consumption and its vitamin D content are not high enough to significantly improve the vitamin D status of humans (3). Besides fish, mushrooms are often considered as another valuable source of vitamin D, in particular of vitamin D2. However, the major natural vitamin D metabolite in fungi and yeast is the vitamin D precursor ergosterol, which can be converted to vitamin D2 by UVB irradiation (4). The UVB-exposed baker's yeast, which has been approved by the European Food Safety Authority as a reliable ingredient to enrich bakery products with vitamin D, is a prominent example of a successful application of UVB irradiation to enhance vitamin D in natural foods (5). However, less data are available on vitamin D precursors and metabolites in plants.
Yellow oat grass (Trisetum flavescens) is well described for its capability to synthesize bioactive vitamin D. It contains vitamin D glycosides which can be hydrolyzed in the gut or by the gastrointestinal microflora to the biologically active 1,25-dihydroxyvitamin D (6)(7)(8). Other so-called calcinogenic plants that contain active vitamin D forms are Solanum malacoxylon, Cestrum diurnum, and Nierembergia veitchii of the Solanaceae family (6)(7)(8). These plants are presumed to cause calcinosis in grazing animals due to the hypercalcemic effect of toxic 1,25-dihydroxyvitamin D levels (9). Vitamin D metabolites were also found in Cucurbitaceae, Fabaceae, and Poaceae (10)(11)(12). Besides that, certain plants are associated with fungal endophytes (13,14) or are capable to produce the vitamin D3 precursor 7-dehydrocholesterol (7-DHC) on its own via the lanosterol pathway (15). Based on these data, we hypothesized that plant oils could also contain vitamin D precursors or metabolites. The main aims of this investigation were [1] to identify and quantify precursors and metabolites of vitamin D in plant oils that are used in human nutrition and [2] to investigate whether a short-term exposure of selected oils to UVB light could increase their vitamin D content. To elucidate possible adverse effects of UVB exposure on the quality of the oils, we analyzed oxidative biomarkers and tested the sensory quality of the UVB-exposed oils. Additional tests were conducted to assess the stability of these vitamin D metabolites subsequent to thermal treatment and storage of the UVB-exposed oil. Finally, we aimed to elucidate the efficacy of plant-derived vitamin D to improve the vitamin D status by feeding an UVB-exposed plant oil to mice.
Materials and Methods

Characterization of Vitamin D Metabolites in the Plant Oils
Avocado oil, linseed oil, olive oil, pumpkinseed oil, rapeseed oil, soya oil, sunflower oil, and wheat germ oil (WGO) were used to characterize and quantify their vitamin D precursors and metabolites. From each type of oil, three commercially available representatives were obtained from local supermarkets and used for the analyses. The oil samples selected for analyses were flushed with N2 after the first opening to avoid oxidation processes, and stored at 4°C until further analysis.
UVB Exposure of Selected Oils
Rapeseed oil, avocado oil, and WGO were used for the UVB treatments and exposed to UVB light. In the first approach, aliquots of the three oils were placed into plastic vessels (thickness of the oil layer: 1.0 mm) and exposed to UVB light for 0 (control), 4, or 8 min at room temperature. The UVB-emitting lamp (650 μW/cm2 at a distance of 15 cm; UV-8M, Heroloab GmbH, Wiesloch, Germany) was placed 10 cm above the oil surface. During that treatment, the oils were flushed with N2. In a second approach, WGO was used to investigate the impact of the oil layer thickness on the efficacy of vitamin D formation through UVB irradiation. Therefore, different volumes of the oil were filled into glass vessels to reach a layer thickness of either 1.6 or 3.2 mm and were UVB-exposed for 10 min at room temperature. During that time, the oil samples were constantly stirred by a magnetic stirrer under N2.
The oil samples were stored at −20°C until analyses of vitamin D2, vitamin D3, and tocopherols. In addition, the peroxide and the acid values were analyzed in the 10-min UVB-exposed WGO and compared with those of the non-exposed oil of the same batch. The analyses were complemented by organoleptic tests. The UVB-treated oil (exposure time: 10 min, oil layer thickness: 3.2 mm), which was intended for use in the mouse study, was analyzed for vitamin D metabolites and stored at −20°C until preparation of the diet. All diets were stored at −20°C until their administration.
Thermal Treatment and Storage of UVB-Exposed Wheat Germ Oil
To estimate the stability of the UVB-exposed oils, aliquots of the 10-min UVB-exposed WGO (1.6 mm layer) were (1) heated at 100 or 180°C for 10 min and (2) stored for 1 day, 2 weeks, or 4 weeks at room temperature in the dark. After the thermal treatment and storage terms, the WGOs were flushed with N2 and stored at −20°C until further analysis. Aliquots of untreated oil samples of the same batch were used as a reference. Besides the concentration of the vitamin D metabolites, the concentration of tocopherols were analyzed to gain information about oxidation processes. The thermally treated and the 4 weeks-stored WGOs were analyzed to record the peroxide and the acid values and subjected to organoleptic tests.
Analysis of Autoxidation Markers in Oils
The concentrations of the tocopherols were measured by a modified HPLC method of Coors (19). Prior to the quantification of the tocopherols, aliquots of the oils were dissolved in n-hexane (1/100, w/v), mixed thoroughly, and separated isocratically by HPLC (Agilent 1100) using a LiChrospher Si 60 column (250 mm × 4.0 mm, 5 μm particle size; Agilent Technologies). A mixture of n-hexane and isopropanol (99/1, v/v) was used as mobile phase (flow rate: 1 ml/min). The α-, β-, and γ-tocopherols were detected by a fluorescence detector (emission: 330 nm, excitation: 295 nm). External standards (α-, β-, γ-tocopherols, Supelco, Bellefonte, PA, USA) were used for calibration. The peroxide and the acid values of the oils were determined according to the German official methods (20, 21).
Organoleptic Characterization of the Oils
The UVB-exposed, the thermally treated, the 4-week-stored, and the untreated WGOs were evaluated by a trained panel (ÖHMI Analytik GmbH, Magdeburg, Germany) in a blinded fashion. Taste, aroma, color, and transparency of the oils were judged at 40°C (22), and the oils were ranked according to their organoleptic quality (23).
Mouse study
The experimental procedures described below followed the established guidelines for the care and handling of laboratory animals according to the National Research Council (24) and were approved by the local government (Landesverwaltungsamt Sachsen-Anhalt, Germany; approval number 42502-5-34). All mice were housed in pairs on a 12-h light, 12-h dark cycle in a room controlled for temperature (22 ± 2°C) and relative humidity (50-60%). Food and water were provided ad libitum.
Forty-two 4-week-old male mice (C57BL/6NCrl, Charles River Laboratories, Sulzfeld, Germany) were used. Five weeks prior to the actual treatment, the mice received a vitamin D-free semi-synthetic basal diet (20% casein, 20% sucrose, 38.8% starch, 10% WGO, 6% vitamin-mineral mixture, 5% cellulose, and 0.2% dl-methionine) to reduce their vitamin D status. Except for vitamin D, all other vitamins and minerals were supplemented according to the recommendations of the AIN (25). After the 5-week depletion period, six mice were sacrificed to determine the vitamin D status of these animals at baseline. The remaining 36 mice (mean body weight: 13.9 ± 0.8 g) were allotted to 3 groups of 12 mice each and fed the basal vitamin D-free diet with 10% of either the 10-min UVB-exposed WGO (3.2 mm layer, WGO-UV), the untreated WGO, or WGO that was supplemented with synthetic vitamin D3 (WGO-D3) in amounts comparable to the total vitamin D content analyzed in the UVB-exposed oil. In the experimental diet fed to the WGO-UV group, a mean vitamin D2 concentration of 87.3 μg/kg diet was measured, whereas no vitamin D3 was found. The diet fed to the WGO-D3 group had a mean analyzed vitamin D3 concentration of 80.0 μg/kg and no vitamin D2, while in the diet fed to the WGO group neither vitamin D2 nor vitamin D3 could be detected. The experimental diets were fed to the mice for 4 weeks. Individual body weights and mean food intake per cage were recorded weekly. Finally, the mice were sacrificed after a 4-h food deprivation under light anesthesia with diethyl ether. Blood was collected into heparin tubes (Sarstedt, Nümbrecht, Germany). Plasma was separated by centrifugation at 3000 × g at 4°C for 20 min and stored at −20°C until analysis of the vitamin D metabolites. The livers were harvested, immediately snap-frozen in liquid N2, and stored at −80°C until analysis of the vitamin D metabolites.
The vitamin D2 and D3 concentrations in the diets and liver samples were analyzed as already described for the oil samples. The vitamin D2 and vitamin D3 concentrations of the diets were analyzed in aliquots of 1 g in triplicate. In the diets, the LLOQ for both vitamin D metabolites was 4.3 ng/g. Liver aliquots of 200 mg were analyzed for their concentrations of ergosterol, 7-DHC, vitamin D2, vitamin D3, 25(OH)D2, and 25(OH)D3. In the liver samples, the LLOQ was 5.0 ng/g for vitamin D2, 10.5 ng/g for vitamin D3, 0.3 ng/g for 25(OH)D2, and 2.1 ng/g for 25(OH)D3.
Analysis of Tocopherols in Plasma
To analyze the α-tocopherol concentrations in plasma, aliquots (30 μl) were mixed with pyrogallol solution (1% in ethanol, absolute) and saturated sodium hydroxide solution for hydrolysis. Subsequently, the samples were incubated at 70°C for 30 min, and tocopherols were extracted with n-hexane and ultrapure water. The supernatant was directly applied to the HPLC (26). HPLC conditions were the same as described for the tocopherol analysis of the oils.
Statistical Analysis
Data concerning the characterization of the plant oils were not subjected to statistical analysis. Values of the in vivo experiment are presented as means ± SD. If values were below the LLOQ, randomly generated values (between 0 and the appropriate LLOQ) were used for statistical analyses. Statistical analyses were conducted using SPSS statistical software (SPSS 22, IBM; Armonk, NY, USA). All data were subjected to a normality test using the Shapiro-Wilk test. If the data followed a normal distribution, differences between the groups were analyzed by one-way analysis of variance (ANOVA) and subsequently subjected to Levene's test for homoscedasticity. In case of homogeneity of variance, the three treatment groups were compared by Tukey's test; in case of unequal variances, by the Games-Howell test. If the data were not normally distributed, the Kruskal-Wallis test was used to analyze differences between the groups, and the Mann-Whitney U test was conducted for post hoc comparisons of the three treatment groups (Bonferroni-corrected). Differences were considered significant at P < 0.05.
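For illustration, the same decision tree can be expressed in Python with scipy and statsmodels; the authors used SPSS, so this is an analogous re-implementation, and the Games-Howell step would require an additional package such as pingouin.

```python
import numpy as np
from itertools import combinations
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_groups(groups, labels, alpha=0.05):
    # groups: list of 1-D arrays, one per treatment group
    if all(stats.shapiro(g).pvalue > alpha for g in groups):
        print(stats.f_oneway(*groups))                    # one-way ANOVA
        if stats.levene(*groups).pvalue > alpha:          # homogeneous variances
            values = np.concatenate(groups)
            names = np.repeat(labels, [len(g) for g in groups])
            print(pairwise_tukeyhsd(values, names, alpha=alpha))
        else:
            print("unequal variances: use a Games-Howell post hoc test")
    else:
        print(stats.kruskal(*groups))                     # non-parametric omnibus
        bonf = alpha / len(list(combinations(labels, 2))) # Bonferroni correction
        for i, j in combinations(range(len(groups)), 2):
            res = stats.mannwhitneyu(groups[i], groups[j])
            print(labels[i], labels[j], res, "alpha =", bonf)
```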
Results

Vitamin D and Vitamin D Precursors in Selected Plant Oils
Eight commercially available plant oils for human nutrition were characterized for their vitamin D precursors and vitamin D contents. Analysis revealed that the concentrations of the vitamin D precursors ergosterol and 7-DHC varied strongly between the different oils, but all oils had a markedly higher concentration of ergosterol than of 7-DHC (Figure 1). The highest ergosterol concentration was found in the WGOs (22.1-34.2 μg/g) followed by the avocado oils (4.2-23.4 μg/g) and the sunflower oils (7.9-17.4 μg/g). Oils derived from rapeseed, soya, and linseed had lower ergosterol concentrations that ranged from 4.1 to 9.5 μg/g; the lowest concentrations were found in olive and pumpkinseed oils (<4.5 μg/g). Analyses revealed that the WGOs had the highest concentrations of 7-DHC (638-669 ng/g), while other oils had very low quantities of 7-DHC (Figure 1). The 7-DHC concentration in the linseed oils ranged between 71.7 and 97.5 ng/g; the other oils had 7-DHC concentrations between 10.7 and 47.9 ng/g. Vitamin D2 and D3 were not quantifiable in the eight analyzed plant oils.
Formation of Vitamin D in the UVB-Exposed Oils
To elucidate the impact of a short-term UVB irradiation on the formation of vitamin D in the plant oils, we exposed rapeseed oil, avocado oil, and WGO that differed widely in their amounts of vitamin D precursors to UVB light. The UVB exposure of rapeseed, avocado, and WGO increased the vitamin D concentrations in these oils in a time-dependent manner (Figure 2). The amount of vitamin D2 produced by UVB irradiation was higher in the wheat germ and the avocado oil than in the rapeseed oil. The amount of the vitamin D3 increased only in the WGO upon UVB exposure, but not in the rapeseed and the avocado oil, which was probably due to the higher 7-DHC concentration in the WGO (Figure 2).
The data further showed a significant impact of the layer thickness on the efficacy of the UVB exposure to increase the vitamin D content. The concentrations of vitamin D2 and vitamin D3 in the 1.6 mm-layer of WGO, which was UVB-exposed for 10 min, were 1035 and 37.0 ng/g, respectively. UVB-exposed WGO with a 3.2 mm-oil layer thickness had still high concentrations of vitamin D2 and vitamin D3, reaching 82 and 94% of the concentrations observed in the 1.6 mm-oil layer.
Changes in Quality Parameters of the Oils upon UVB Exposure
To elucidate the impact of the UVB treatment on the oil quality, the tocopherol concentrations and markers of autoxidation were measured in the UVB-exposed oils. The tocopherol concentrations of the 8-min UVB-exposed wheat germ and avocado oil (1.0 mm layer) were not different from those of the untreated oils (Table 1). In the rapeseed oil, a slight decrease in the α- and γ-tocopherol contents upon UVB exposure was observed. A 10-min UVB exposure of WGO (1.6 mm layer) again had no effect on the tocopherol concentrations, and the peroxide and acid values were also not affected (Table 2). Organoleptic analyses revealed that the UVB-exposed WGO had a slightly stronger off-flavor than the non-exposed oil sample of the same batch (Table 3).
Susceptibility of UVB-Exposed Oils to Thermal Treatment and Storage
To elucidate the stability of a 10-min UVB-exposed WGO, we analyzed the concentrations of vitamin D metabolites and tocopherols, the peroxide and acid values, and the organoleptic quality after thermal treatment and after storage of the oil samples.
The heating of UVB-exposed WGO at 100°C for 10 min resulted in a 50% increase in the vitamin D2 (Δ = 521 ng/g) and a 66% increase in the vitamin D3 (Δ = 24.4 ng/g) concentration compared with the non-heated UVB-exposed WGO. In contrast, heating the oil at 180°C for 10 min resulted in a slight reduction of the vitamin D2 (Δ = −47.0 ng/g) and vitamin D3 (Δ = −2.8 ng/g) concentrations (Figure 3). Thermal treatment of the UVB-exposed and untreated WGO at 100°C had no effect on the analyzed markers of oxidation, while thermal treatment at 180°C resulted in a slight reduction of the tocopherol concentrations and a decrease in the peroxide value; the acid value remained unchanged (Table 2). Thermal treatment also affected the taste of the oil: the higher the treatment temperature, the lower the organoleptic quality. The UVB exposure per se had only a small effect on the taste when the oil was heated at 180°C (Table 3).
The storage of UVB-exposed oil at room temperature also resulted in a rise of the vitamin D2 and the vitamin D3 concentrations (Figure 3). The highest vitamin D2 concentration was measured after the 2-week storage (Δ = 1157 ng/g; Figure 3A). The vitamin D3 concentration rose continuously during the 4-week storage and reached the highest values after 4 weeks (Figure 3B). The tocopherol concentrations in the UVB-exposed oil decreased slightly with the storage time; those of the untreated oil remained unchanged (Table 2). Both the UVB-exposed and the untreated oil showed increased peroxide values after the 4-week storage; no changes were observed for the acid values (Table 2). Organoleptic tests showed that the 4-week storage led to deteriorated taste of both the UVB-exposed and the untreated oils, without any substantial difference between them (Table 3). The aroma of the oils was not affected by the UVB exposure or the 4-week storage (Table 3).

Table 3 | Influence of thermal treatment and storage on taste and aroma of UVB-exposed and non-exposed wheat germ oil.
Efficacy of the UVB-Exposed Wheat Germ Oil to Improve the Vitamin D Status of Mice

To evaluate the efficacy of UVB-exposed WGO to improve the vitamin D status, a feeding study with mice was conducted. The analyzed concentrations of vitamin D precursors and vitamin D in the WGO demonstrate that the applied UVB treatment was capable of increasing the vitamin D2 and vitamin D3 in this oil (Table 4). The tocopherol concentrations in the untreated and the UVB-exposed oils were comparable (Table 4). Mice of the three groups did not differ in their daily food intake (WGO: 3.03 ± 0.23 g, WGO-UV: 3.01 ± 0.14 g, and WGO-D3: 3.11 ± 0.07 g) or final body mass (WGO: 31.5 ± 3.0 g, WGO-UV: 31.8 ± 2.9 g, and WGO-D3: 32.1 ± 1.6 g). Because changes in 25(OH)D upon feeding vitamin D are usually greater the lower the vitamin D status is at baseline (27), all mice received a vitamin D-free diet for 5 weeks prior to the treatment with the UVB-exposed or vitamin D3-supplemented WGO. The plasma concentration of total 25(OH)D [25(OH)D2 + 25(OH)D3] after feeding the vitamin D-free basal diet for 5 weeks was below the LLOQ (n = 6). Feeding mice the diet with UVB-exposed oil (WGO-UV) or with vitamin D3-supplemented oil (WGO-D3) for 4 weeks resulted in markedly higher plasma concentrations of total 25(OH)D compared with feeding the untreated WGO-based diet without any vitamin D supplementation (WGO) (P < 0.001). However, the increase in the total plasma concentration of 25(OH)D was stronger in the WGO-D3 group than in the WGO-UV group (P < 0.001). The predominant form of plasma 25(OH)D in the WGO-UV group was 25(OH)D2; the predominant form in the WGO-D3 group was 25(OH)D3 (Figure 4). The plasma concentration of ergosterol was below the LLOQ in all groups of mice (Table 5). All mice had comparable plasma concentrations of 7-DHC. Mice from the WGO and WGO-D3 groups had plasma concentrations of vitamin D2 that were below the LLOQ, whereas mice from the WGO-UV group had values more than 10-fold above the LLOQ (Table 5). By contrast, the WGO-D3 group showed a markedly higher plasma concentration of vitamin D3 than the WGO-UV group (P < 0.001); the plasma concentration of vitamin D3 in the WGO group was below the LLOQ. No differences between the three groups were observed in the plasma concentrations of α-tocopherol (Table 5).
Analysis of the D vitamer concentrations in the livers of the mice revealed no significant differences in the concentration of ergosterol (WGO: 12.0 ± 19.6 ng/g, WGO-UV: 3.92 ± 1.27 ng/g, and WGO-D3: 5.40 ± 2.73 ng/g) and 7-DHC (WGO: 94.3 ± 31.6 ng/g, WGO-UV: 84.7 ± 16.5 ng/g, and WGO-D3: 102 ± 46 ng/g). However, data showed distinct differences in the liver concentrations of vitamin D (Figure 5). Livers of mice from the WGO-UV group were characterized by extremely high vitamin D2 concentrations and high levels of 25(OH)D2, whereas the livers of mice from the WGO-D3 group had significantly higher vitamin D3 and 25(OH)D3 concentrations than those of mice from the two other groups (Figure 5).
Discussion
The presented studies demonstrated that plant oils contain high amounts of ergosterol, but comparatively low amounts of 7-DHC. It was striking that the ergosterol concentrations in the plant oils were on average 100 times higher than the 7-DHC concentrations. It is assumed that plants are per se not capable of producing ergosterol or vitamin D2 (28), and that any of these metabolites are synthesized by endophytic fungi or by superficial fungal infections (13,14,29). Regarding 7-DHC, the analyses revealed a 10 times higher concentration of this cholesterol precursor in the WGO than in the other oils. 7-DHC is an intermediate of the cholesterol synthesis pathway. It is well described that plants from the Solanaceae, Fabaceae, and Poaceae families are capable of producing cholesterol (30,31), which is assumed to be used for the synthesis of glycoalkaloids and ecdysteroids (32,33). The 7-DHC has also been proposed to function as a UV light protector (34), because the 7-DHC absorbs UVB irradiation that would otherwise damage the ribonucleic acids. The detectable amounts of 7-DHC in the linseed, rapeseed, and pumpkinseed oil suggest that cholesterol is also synthesized in plants from the Linaceae, Brassicaceae, and Cucurbitaceae families. However, in contrast to other researchers, who measured vitamin D in certain parts of the plant (12,31,35-37), we were not able to detect vitamin D in untreated plant oils.

Figure 3 | Changes in the concentrations of (A) vitamin D2 and (B) vitamin D3 in UVB-exposed wheat germ oil after thermal treatment at 100 or 180°C for 10 min, and after storage of 1 day, 2 weeks, or 4 weeks at room temperature in darkness. UVB exposure conditions: exposure time, 10 min; oil layer thickness, 1.6 mm; UVB lamp distance, 13 cm. Analyses were run in duplicate.
The detection of vitamin D precursors in the plant oils prompted us to speculate that exposure of oils to UVB irradiation could convert ergosterol and 7-DHC into vitamin D2 and vitamin D3, respectively. Among the analyzed plant oils, the highest levels of vitamin D2 and vitamin D3 in response to an UVB irradiation were found in the WGO. After an 8-min exposure of thin-layered WGO, 1 g of this oil contained 1.5 μg vitamin D2 and 0.08 μg vitamin D3. We further found that the conversion rate of vitamin D precursors to vitamin D in the WGO was reduced by 40% if the oil layer thickness was increased from 1.0 to 3.2 mm. One gram of this thick-layered WGO provided in total a vitamin D content of 885 ng. With an average consumption of 12 g oil/day (38), a total of 10.6 μg vitamin D could be supplied by intake of UVB-exposed WGO, which matches 50% of the recommended daily vitamin D intake (1).
An interesting finding of this study was that the vitamin D content in the oils increased with the time of storage and a moderate thermal treatment. It is well described that the UVB photon converts the precursors, 7-DHC and ergosterol, to previtamin D which in turn isomerizes to vitamin D by a thermal reaction (34,39). Therefore, we assume that the preformed previtamin D can convert to vitamin D in conditions with absent UVB irradiation. Our data further indicate that taste and aroma, and also biomarkers that are indicative of autoxidation such as the tocopherol concentration, peroxides, and free acids were not significantly influenced by a short-term exposure of the plant oils to UVB irradiation. This makes the short-term UVB treatment of plant oils to a safe and reliable technique to produce vitamin D supplements.
To evaluate the efficiency of UVB-exposed plant oils to improve the vitamin D status in vivo, we conducted a study with mice that were fed diets with either UVB-exposed WGO, untreated WGO, or WGO with supplemented vitamin D3. Here, we found that the UVB-exposed WGO is suitable to improve the vitamin D status of the mice as the group fed the UVBexposed oil developed higher 25(OH)D plasma levels than the group fed the untreated oil. Compared with the group fed the vitamin D3-supplemented WGO, the UVB-exposed oil was less effective in increasing the 25(OH)D plasma concentrations. However, it should be noted that the livers of mice that received the UVB-exposed WGO stored huge amounts of vitamin D2 in comparison to that of mice fed the vitamin D3 supplemented oil. The increased storage of hepatic vitamin D2 in combination with the reduced plasma concentration of 25(OH)D2 in the group fed the UVB-exposed oil suggests that vitamin D2 is less appropriate as a substrate for hepatic hydroxylation than vitamin D3. It has been a debate for many years whether both forms of vitamin D are bioequivalent. A series of studies has shown that vitamin D2 does not increase 25(OH)D serum concentrations to the same amount as vitamin D3 does (40)(41)(42). The current data confirm the different efficacy of both vitamin D isoforms. However, we cannot exclude at this stage, that photo-isomers that are produced by the UVB treatment may also impact the bioavailability of the vitamin D form in UVB-exposed oil.
To conclude, plant oils that are commonly used in human nutrition contain considerable quantities of ergosterol, but small amounts of 7-DHC. Among the different analyzed oils, WGO has the highest amounts of vitamin D precursors. A short-term UVB irradiation was successful in increasing the vitamin D content of the selected oils. The in vivo study has shown that UVB-exposed WGO can improve the vitamin D status, although less effectively than vitamin D3.

Figure 5 | Vitamin D metabolites in mice fed diets with 10% of either wheat germ oil (WGO), UVB-exposed wheat germ oil (WGO-UV) or wheat germ oil that was supplemented with vitamin D3 (WGO-D3) for 4 weeks. Data represent means ± SD, n = 12. a-c: Means not sharing a letter are significantly different (P < 0.05; Mann-Whitney U test for vitamin D2, total vitamin D, 25-hydroxyvitamin D2, and 25-hydroxyvitamin D3; Games-Howell test for vitamin D3; Tukey test for total 25-hydroxyvitamin D). #: Values were below the lower limit of quantification (vitamin D2, 5.0 ng/g; vitamin D3, 10.5 ng/g; 25-hydroxyvitamin D2, 0.3 ng/g; 25-hydroxyvitamin D3, 2.1 ng/g).

Author Contributions

CB, BK, and GS conceived and designed the experiment. AB performed the experiment. AB and FH analyzed the data. AB, CB, and GS wrote the manuscript. BK and FH critically reviewed the manuscript.
Acknowledgments
We like to thank Alexander Böhm for his assistance in the diet preparation and feeding study, and Tiantian Li for her assistance in preparing the oil samples for LC-MS/MS-based analysis.
Funding
This study was partly funded by the Union for the Promotion of Oil and Protein Plants (UFOP, Berlin, Germany).
Prism adaptation does not alter configural processing of faces
Patients with hemispatial neglect ('neglect') following a brain lesion show difficulty responding or orienting to objects and events on the left side of space. Substantial evidence supports the use of a sensorimotor training technique called prism adaptation as a treatment for neglect. Reaching for visual targets viewed through prismatic lenses that induce a rightward shift in the visual image results in a leftward recalibration of reaching movements that is accompanied by a reduction of symptoms in patients with neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Interestingly, prism adaptation can also alter aspects of non-lateralised spatial attention. We previously demonstrated that prism adaptation alters the extent to which neglect patients and healthy participants process local features versus global configurations of visual stimuli. Since deficits in non-lateralised spatial attention are thought to contribute to the severity of neglect symptoms, it is possible that the effect of prism adaptation on these deficits contributes to its efficacy. This study examines the pervasiveness of the effects of prism adaptation on perception by examining the effect of prism adaptation on configural face processing using a composite face task. The composite face task is a persuasive demonstration of the automatic global-level processing of faces: the top and bottom halves of two familiar faces form a seemingly new, unknown face when viewed together. Participants identified the top or bottom halves of composite faces before and after prism adaptation. Sensorimotor adaptation was confirmed by a significant pointing aftereffect; however, there was no significant change in the extent to which the irrelevant face half interfered with processing. The results support the proposal that the therapeutic effects of prism adaptation are limited to dorsal stream processing.
Patients with hemispatial neglect ('neglect') following a brain lesion show difficulty responding or orienting to objects and events that appear on the left side of space 1 . A diagnosis of neglect is a strong predictor of poor functional outcome and low independence following stroke 2 . This may be partly because the disorder impairs perception in a broad range of sensory modalities ranging from vision, touch, proprioception, and motor control to more abstract aspects of cognition such as a patient's awareness of their own body 3 and their imagined images of familiar locations 4 . Furthermore, although the rightward spatial bias is the defining symptom of neglect, several other processing disturbances are associated with the disorder. These include low general arousal 5 , poor sustained attention 6 , and difficulties in keeping track of spatial locations as they move about their environment 7 . These non-lateralised spatial biases are thought to increase neglect severity and reduce the potential for recovery 8 .
Over the last fifteen years a promising behavioural intervention for neglect has emerged in the form of a sensorimotor training technique called prism adaptation 9 . During prism adaptation, patients reach for objects viewed through rightward-deflecting prisms, leading to a leftward recalibration of reaching movements that can be measured as leftward errors once the prisms are removed. In patients with neglect this leftward recalibration of reaching is accompanied by a reduction in their symptoms. A single five-minute session of prism adaptation is sufficient to improve the performance of neglect patients on tests of visuo-motor function such as copying, cancellation and reading 9,10 . These effects extend to non-visual spatial processing, such as tactile perception 11 and manual exploration of space while blindfolded 12 , and to complex mental operations such as the exploration of an internally generated map of France 13 ; and 'bisection' of numbers 14 . Evidence amassed over a number of studies suggests that this simple behavioural intervention can have broadly generalised effects, and prism adaptation is considered to be a highly promising potential treatment for neglect 15 .
Whereas adaptation to rightward-shifting prisms can reduce neglect symptoms in brain-lesioned patients, adaptation to leftward-shifting prisms, involving a rightward recalibration of reaching, leads to neglect-like changes in the spatial performance of healthy participants. These perceptual changes have been demonstrated on a similar range of visual, non-visual and mental tasks (albeit to a lesser extent than those changes observed in patients) [17][18][19] . Since prism adaptation can be used to induce similar, but opposite, changes in the performance of healthy participants as in neglect patients, it is possible to gain insights into the potential therapeutic effects of the technique by testing healthy volunteers.
One example of research in healthy participants that has complemented the understanding gained from studies in patients concerns the effects of prism adaptation on non-lateralised deficits. There are now several pieces of evidence from brain-lesioned patients that prism adaptation alters spatial processing deficits that cannot be described in terms of orienting to the left versus the right, including reductions in spatial dysgraphia 20 , and shifts 21 and reductions 22 in perseveration. We previously demonstrated that adaptation to rightward-shifting prisms reverses the tendency of patients with right hemisphere lesions to become fixated on local details of a scene in preference to the global configuration (the 'local processing bias') 23 . Patients identified the local or global level of large letters that were built from smaller letters ('Navon' figures). Reaction times to the local level increased after prism adaptation, demonstrating a reduction in patients' ability to identify the local level without interference from conflicting information at the global level. Conversely, RTs to the global level decreased following prism adaptation, demonstrating that patients were better able to ignore irrelevant conflicting information from the local level.
In a similar experiment with healthy participants we demonstrated that adaptation to leftward-shifting prisms temporarily increased local processing 24 , and led to neglect-like errors in the way in which a spatial representation or 'map' of the environment is updated as we move our gaze around it 25 . Together these results demonstrate that prism adaptation has a more pervasive influence on visual perception than merely shifting attention to one side.
To further test the extent of this influence, the present study examines the effect of prism adaptation on the perception of composite faces in healthy participants. Faces, perhaps more than any other object, undergo automatic global-level processing in which individual components are highly integrated and less available to independent evaluation. This is powerfully illustrated in the composite face illusion (Figure 1): when the upper and lower halves of two faces are recombined, the virtually unavoidable illusion is that one is viewing the face of a third, different person. When participants are asked to identify the top or bottom halves of composites formed from faces of well-known celebrities, they are slower than when performing the same task with the two face halves offset 26 . This reaction time cost demonstrates that even when processing a face as an integrated Gestalt would impair our ability to perform the task at hand, we are unable to suppress such configural processing.
We had two main reasons for testing the influence of prism adaptation on configural face perception. First, by using a stimulus type for which normal processing is known to be strongly biased towards global processing, we reasoned that we could gain insight into the pervasiveness of the influence of prism adaptation on perceptual processes. Second, this experiment explores the possibility that prism adaptation could be used to improve face processing in individuals with prosopagnosia and autism, who have been shown to have reduced or absent configural face processing 27,28 . We predicted that adaptation to leftward-shifting prisms, which induces neglect-like processing in healthy participants, would reduce the RT cost associated with identifying composite faces. We further predicted that there would be no change in composite face processing following adaptation to rightward-shifting prisms, which does not induce perceptual changes in healthy participants.
Material and methods
Sixty-four right-handed undergraduate women (mean age=19.8 years, SEM=0.32; mean handedness=-0.83, SEM=0.026, where a score of -1 denotes complete right-handedness 29 ) completed a composite face task before and after a brief (five-minute) session of prism adaptation (see below for a full description of the task). Only female participants were selected for the study, as it was felt that the stimuli - images of Brad Pitt and George Clooney - might have, on average, higher saliency for women than men. To be included in the study, participants were also required to have normal or corrected-to-normal vision, and full use of their right arm. Informed consent was obtained in accordance with guidelines approved by the Bangor University ethics committee and the 2008 Declaration of Helsinki. Participants received course credits for the 45-minute session.
In a repeated-measures design, participants completed one set of configural face processing tasks before prism adaptation, and one set of configural face processing tasks after prism adaptation.
Prism adaptation and open-loop pointing
Prism adaptation and confirmation of sensorimotor realignment were performed using a similar procedure as that used for prism adaptation treatment of hemispatial neglect 24 . For prism adaptation, participants made 150 visually-guided pointing movements while wearing goggles fitted with prismatic lenses that shifted the visual field 15° to the left or right. In order to confirm adaptation, a participant pointed under target lines while vision of their pointing arm was occluded by a panel ('open-loop pointing'). Twelve open-loop pointing trials were performed immediately before and after prism adaptation ('pre-' and 'post-test'). In order to confirm that the sensorimotor realignment was retained throughout the entire post-adaptation configural face processing task, a third set of open-loop pointing errors were recorded at the end of the experiment ('late-test'). Open-loop pointing error was measured by the experimenter to the nearest 0.5°, with negative numbers indicating leftward errors and positive numbers indicating rightward errors.
Composite face task

Participants performed a composite face task using stimuli similar to those used by Weston and Perfect 30 . Figure 1 (adapted from Weston and Perfect 30 ) provides examples of the four stimulus types used in the present experiment. Stimuli for the composite faces task had the same form as these examples, but were created from black-and-white publicity photographs of two well-known movie stars (Brad Pitt and George Clooney). All stimuli were constructed from the same two images and were presented on a black background. All participants correctly named the celebrities when shown these photographs at the beginning of the experimental session. Congruent stimuli were the unaltered pictures: that is, the top and bottom face-halves were from the same celebrity. Incongruent stimuli combined the top half of one celebrity with the bottom half of the other, and both stimulus types were presented with the two face halves either aligned or misaligned (offset).

Statistical analysis

Statistical analyses were performed using SPSS software 31 . Pointing errors and reaction time (RT) data were subjected to repeated-measures ANOVAs. Follow-up paired t-tests were performed using Bonferroni correction for multiple comparisons. For the composite face task, mean accuracy was at ceiling (93%), precluding meaningful analysis of the accuracy data. For each participant, responses that were faster than 200 ms or more than 3 SD above their mean RTs were excluded from analysis. Four participants demonstrated low accuracy for incongruent trials (>3 SD from mean error rate) during one or more blocks of the experiment, suggesting a failure to comprehend or comply with task instructions (i.e., their responses suggested that these participants were identifying, for example, the top half of the faces in a block in which they had been instructed to identify the bottom half of the faces). These participants were excluded from the analyses. Data for one of the experimental blocks were missing for two participants due to an error made by the experimenter. Since the responses of these individuals were otherwise similar to those of the remaining participants (suggesting that they were able to understand the instructions), these participants were retained and their missing data were replaced by the mean for that group.
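As an illustration of the preprocessing just described, the following minimal pandas sketch performs the trial-level trimming and computes the RT cost of alignment. The column names (subject, prism_phase, congruency, alignment, rt_ms) are hypothetical and are not the study's actual data format.

```python
import pandas as pd

def preprocess(trials: pd.DataFrame) -> pd.DataFrame:
    """Drop anticipations (<200 ms) and responses >3 SD above a subject's mean RT."""
    trials = trials[trials["rt_ms"] >= 200]
    stats = trials.groupby("subject")["rt_ms"].agg(["mean", "std"])
    cutoff = trials["subject"].map(stats["mean"] + 3 * stats["std"])
    return trials[trials["rt_ms"] <= cutoff]

def rt_cost_of_alignment(trials: pd.DataFrame) -> pd.Series:
    """RT cost = mean RT(aligned) - mean RT(misaligned), per subject/phase/congruency."""
    means = (trials.groupby(["subject", "prism_phase", "congruency", "alignment"])
                   ["rt_ms"].mean().unstack("alignment"))
    return means["aligned"] - means["misaligned"]
```

A larger value of the returned index indicates stronger interference from configural processing, which is the quantity entered into the ANOVAs described below.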
Results

Composite face task
For each prism group (leftward- or rightward-shifting prisms), repeated-measures ANOVAs were conducted on the RT cost of alignment; that is, the difference between RTs for aligned and misaligned faces. By this index, a larger RT cost indicates greater interference due to configural processing, and a small RT cost indicates that participants were able to focus on the face halves with little or no interference from configural processing. The key factors of interest for the analyses were Prism (pre, post) and Congruency (congruent, incongruent). Previous studies have demonstrated temporal limitations to the effects of Navon figure processing on changes in the recognition of pre-learned faces 32 and composite halves 30 , with the effects decaying by the second half of the post-induction test phase. In order to test for such changes over time, we therefore included two further time-based factors in our analyses: Block Number (first, second) and Block Half (first, second). Finally, since any time-based effects may also be influenced by which half of the face participants identified immediately after prism adaptation, a between-subjects factor of Block Order (top-half-first, bottom-half-first) was also included. The analyses revealed significant main effects of Congruency for both the leftward- and rightward-shifting prism groups, reflecting lower RT costs of alignment for the incongruent faces than for the congruent faces. Mean RT costs are displayed in Figure 2 for both groups, and follow-up t-tests were performed on an a priori basis. In contradiction of the experimental hypothesis, there was no significant change in RT cost of alignment for congruent or incongruent faces following adaptation to leftward-shifting prisms. There was, however, a trend for a reduction in RT cost for incongruent faces for participants in the rightward-shifting prism (control) group [t(30)=2.2, p=0.04, assessed to a Bonferroni-corrected alpha-level of p=0.0125].
There were no significant interactions of Prism and Congruency with the time-based factors (Block Number or Block Half) to suggest any short-lived effect of prism adaptation on composite face processing. This is apparent in Figure 3, which shows incongruent-trial RT costs for the two Prism groups averaged across eight time points (2 Prism × 2 Block Number × 2 Block Halves).
Overall, the results demonstrate that the RT cost of alignment became numerically smaller with time for both groups, consistent with a practice effect. Importantly, there was no significant reduction in RT cost following adaptation to leftward-shifting prisms.
Discussion
Our results indicate that adaptation to leftward-shifting prisms did not reduce the RT cost associated with identifying individual halves of composite faces. Our data did reflect trends for reduced RT costs of alignment for incongruent faces for both the leftward- and rightward-shifting prism groups. However, this was not significant, and was in fact numerically larger for the rightward-shifting prism (control) group. With our large sample size (N=32 per group), it is unlikely that the lack of significant change in RT costs for incongruent trials can be attributed to type II error. We conclude instead that prism adaptation does not reduce configural processing of face stimuli.
Our research is particularly comparable to studies examining the effects of prism adaptation on the processing of chimeric faces and objects (stimuli that are formed by joining together the left and right halves of different faces or objects). Ferber and colleagues demonstrated that prism adaptation shifted the extent to which a neglect patient 33 and healthy participants 34 passed their gaze over different halves of chimeric faces. However, these changes in visual exploration were not accompanied by any alteration in perceptual judgements of the faces. Sarri and colleagues 35 extended this work to demonstrate that although prism adaptation did not alter patients' perception of chimeric faces, it did dramatically improve their awareness of the identity of the left side of non-face objects. Our finding that prism adaptation alters the global versus local processing of Navon figures 23,24 but not composite faces is consistent with this distinction between significant effects of prism adaptation on object but not face processing.
These results have bearing on an existing debate about whether the beneficial effects of prism adaptation on hemispatial neglect are restricted to tasks that have a direct motor or attentional component, or whether the technique also directly alters perceptual awareness per se 33,35-39 . Striemer and Danckert 36 proposed that the beneficial effects of prism adaptation are limited to dorsal stream attentional and visuomotor behaviours, whereas ventral stream perceptual processes are relatively unaffected. Many of the tasks on which neglect patients have shown improvement following prism adaptation, such as pen-and-paper tasks 9,40 , reading 41 , haptic exploration 12 , postural imbalance 42 and wheel-chair navigation 43 , can be explained by a leftward shift in motor behaviour (including eye movements). In contrast, several studies have shown that prism adaptation does not alter the performance of neglect patients on tasks that require direct perceptual comparison of the left and right side of the stimuli 33,44,45 , or stimuli on the left or right sides of space 46 . Strikingly, the same patients showed leftward shifts in their ocular exploration of the stimuli 33,46 , or in similar tasks that had an overt motor component 45 . Overall, Striemer and Danckert argued that prism adaptation alters performance on perceptual tasks only under specific circumstances (see Nijboer and colleagues 47 for data that directly contradict this conclusion, and papers by Saevarsson, Striemer, and their colleagues 38,39 for further discussion of this model).
We previously attributed the effects of prism adaptation on the processing of Navon figures to changes in the relative activity of left and right temporo-parietal areas 23,24 . While object recognition per se is strongly attributed to ventral stream processing, sensitivity to global versus local features of an object has been linked to differential specialisation of the left and right temporo-parietal cortices for these two levels of processing 48-53 . A further model of visual processing suggests that fast global processing of visual objects dominates in the dorsal stream, providing rapid activation of frontoparietal attention mechanisms, whereas more detailed local processing occurs mainly through slower ventral stream mechanisms 54-56 . Thus, the effects of prism adaptation on the processing of Navon figures could be attributed to changes in dorsal stream mechanisms, either by altering the relative processing weights of left and right temporo-parietal areas, or by a global enhancement or suppression of dorsal stream mechanisms.
Similar to other objects, it has been suggested that there is a left hemisphere specialisation for processing face features and a right hemisphere specialisation for processing the face as a whole 57 . However, these have been localised to face-selective areas of the fusiform gyrus (i.e., the ventral stream). A mechanism of prism adaptation that operates mainly through the dorsal stream would therefore explain the absence of any effect of prism adaptation on face processing.
Prism adaptation is a promising treatment for hemispatial neglect.
In order to understand the cognitive and neural mechanisms that underlie this intervention, it is important to examine tasks on which this technique has no impact, as well as those for which improvements are observed. Our finding that prism adaptation does not alter configural processing of faces is consistent with the dorsal versus ventral stream processing model proposed by Striemer and Danckert 36 . Studies that directly compare the effects of prism adaptation on classic dorsal and ventral stream tasks would further illuminate the mechanisms of the beneficial effects of this intervention on hemispatial neglect.
Author contributions JB conceived of the study, collected data for the study, analysed the data and prepared the draft manuscript. PD provided expertise on face processing and aided in interpreting the data. RR aided in the design of the study and aided in interpreting the data. All authors were involved in the revision of the draft manuscript and have agreed to the final content.
Competing interests
No relevant competing interests disclosed.
Grant information
Funding for this work was provided by the British Federation of Women Graduates (to JB).
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. In this paper the authors have examined the after-effects of directional prism adaptation on global and local visual processing using the composite face effect. Previous work by Bultitude and colleagues has demonstrated that rightward prism adaptation in patients with right brain damage helps reduce the local processing bias. Furthermore, leftward prism adaptation in healthy individuals actually the local increases processing bias. Based on these findings, the authors predicted that, in healthy individuals, leftward prism adaptation should local processing, thereby the composite face effect (i.e., the increase reducing increase in reaction time observed when processing aligned vs. misaligned composite faces).
The results of the experiment indicated that there were no significant changes in the composite face effect following leftward prism adaptation. However, there was a trend towards a reduction in the composite face effect following rightward prism adaptation. Critically, the absence of any effect of leftward shifting prisms on the composite face effect cannot be attributed to de-adaptation, as participants remained significantly adapted at the conclusion of the experiment. Based on these results the authors argued that their data are consistent with the notion that prism adaptation primarily influences processing in the dorsal visual stream, and the dorsal attention network.
Overall I found the study to be very interesting and well motivated. Nevertheless, I do have some queries regarding the methods used, as well as the interpretation of the data.
In the Methods section it is not clear whether concurrent or terminal feedback was used during the prism adaptation session. Please clarify in the revised manuscript.
In the Results section, when discussing the results of the composite face task (page 5, 2nd paragraph, right column) you note that, "the analyses revealed significant main effects of congruency for both leftward and rightward shifting prisms groups, reflecting lower RT costs of alignment for the incongruent faces than for the congruent faces." Perhaps I have misinterpreted the composite face effect, but isn't the prediction that participants should be slower (i.e., an increased RT cost) when processing aligned (compared to misaligned) incongruent compared to congruent faces? The data from Figure 2 seem to support this interpretation in that participants are slower to respond for incongruent compared to congruent faces. Please clarify this in the revised manuscript.
It is interesting to note that the authors observed a trend towards a reduction in the composite face effect following rightward prism adaptation. However, the possible reasons for this are not addressed in the discussion. Is it possible that rightward prism adaptation may have increased activity in left temporal-parietal cortex thereby increasing attention to local features, and, by extension, decreasing configural face processing?

In the Discussion section I believe there may be a typo (or perhaps some confusion) regarding your characterization of the dorsal and ventral streams. Specifically, on the bottom of the left column on page 7 you mention that "While object recognition per se is strongly attributed to dorsal stream processing ....". I believe what you mean to say is that object recognition is strongly tied to the ventral visual stream. While it is true that some imaging studies have observed activation in dorsal stream areas during object processing tasks, it is as of yet unclear what visual information these signals are conveying. However, it is well known that damage to the ventral stream has devastating consequences for object and face recognition.
Likewise, when you are describing hemispheric specialization for face processing (page 7, middle paragraph, right column) you refer to the processing of face features and configural processing of faces as being "localized to face-selective areas in the fusiform gyrus (i.e., the dorsal stream)." Again, what I believe you meant to say was face-selective areas in the ventral stream.

Finally, your interpretation of the results is somewhat difficult to reconcile with findings from Sarri and colleagues (Sarri, Greenwood, Kalra, & Driver, 2011; Sarri, Kalra, Greenwood, & Driver, 2006) suggesting that patients with neglect can detect chimeric non-face objects following rightward prism adaptation, as both types of patterns (i.e., chimeric objects and composite faces) require ventral stream processing. One way to interpret this is that perhaps prisms have differential effects on face vs. non-face objects. However, perhaps a simpler way of interpreting this is one of task difficulty. That is, discriminating between halves of a chimeric face, or the top and bottom halves of a composite face, requires a detailed within-category discrimination. In contrast, chimeric objects typically involve a much simpler between-category discrimination.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
Competing Interests: No competing interests were disclosed.
Phi, Primorials, and Poisson
The primorial $p\#$ of a prime $p$ is the product of all primes $q\le p$. Let pr$(n)$ denote the largest prime $p$ with $p\# \mid \phi(n)$, where $\phi$ is Euler's totient function. We show that the normal order of pr$(n)$ is $\log\log n/\log\log\log n$. That is, pr$(n) \sim \log\log n/\log\log\log n$ as $n\to\infty$ on a set of integers of asymptotic density 1. In fact we show there is an asymptotic secondary term and, on a tertiary level, there is an asymptotic Poisson distribution. We also show an analogous result for the largest integer $k$ with $k!\mid \phi(n)$.
Introduction
Euler's totient function φ(n) may be defined as the number of units in the residue ring Z/nZ, or equivalently via the formula

φ(n) = n ∏_{ℓ | n} (1 − 1/ℓ),

where the product on ℓ is over the distinct primes dividing n. Our starting point in this article is the following remarkable property of φ: For every fixed prime number p, almost every value of φ(n) is divisible by p. Here 'almost every' means that, as x → ∞, all but o(x) values of n ≤ x are such that p | φ(n). How could this possibly be the case? A small piece of probabilistic reasoning dispels the mystery: Observe that φ(n) is divisible by p whenever n is divisible by a prime ℓ ≡ 1 (mod p). Those primes ℓ make up a positive proportion of all primes, namely 1 in p − 1, by the prime number theorem for arithmetic progressions. Almost all numbers n ≤ x have ≈ log log x distinct prime factors (a classical result of Hardy and Ramanujan), and it should be unusual for these many prime factors to all avoid the residue class 1 mod p. This argument is merely heuristic, but can be made rigorous by sieve methods or by analytic methods going back to Landau (further developed by Selberg and Delange). An early reference for this fact about φ is [AE44]; that paper treats σ(n) (the sum-of-divisors function) rather than φ(n), but the proof is almost the same.
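The divisibility claim is easy to probe numerically before making it rigorous. The following short Python sketch (assuming sympy is available; it is not part of the original paper) estimates the proportion for the prime p = 3, which should creep slowly toward 1 as the range grows.

```python
from sympy import totient

# Proportion of n <= N with 3 | phi(n), for a few values of N.
for N in (10**3, 10**4, 10**5):
    print(N, sum(1 for n in range(1, N + 1) if totient(n) % 3 == 0) / N)
```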
It follows that there are functions y = y(x), tending monotonically to infinity, with the property that ∏_{p≤y} p divides φ(n) for all but o(x) values of n ≤ x, as x → ∞. It is implicit in the arguments of Erdős in [Erd48] (see also [Erd61]) that y = (log log x)^{1−ε} is admissible, for any fixed ε ∈ (0, 1). Erdős's reasoning is developed in [EGPS90] and [LP02], where it is shown that we may take y = c log log x/log log log x for some positive constant c. Among other things, our main result yields a very precise determination of the allowable values of y.
We need a bit of set-up to state our main theorem. With log_k denoting the k-fold iterated logarithm, we set

A(x) = (log_2 x/log_3 x)(1 + 3 log_4 x/log_3 x)   and   B(x) = log_2 x/(log_3 x)².

For n ≤ x and Λ real, we set

f(n, Λ) = #{p ≤ A(x) + Λ·B(x) : p ∤ φ(n)}.

Theorem 1. Fix λ > 0. Then f(n, log λ), as a statistic on integers n ≤ x, is asymptotically Poisson distributed with parameter λ. That is, for each fixed nonnegative integer k, the proportion of n ≤ x with f(n, log λ) = k tends to e^{−λ}λ^k/k! as x → ∞.

Taking k = 0 in Theorem 1, we deduce:

(1) For each fixed λ > 0, the limiting proportion of n ≤ x with φ(n) divisible by all primes up to A(x) + (log λ)B(x) is e^{−λ}.

This has the following immediate consequence.

Corollary 2. pr(n) ∼ log_2 n/log_3 n as n → ∞ on a set of integers of asymptotic density 1.
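To get a concrete feel for the statistic pr(n), the following small empirical sketch (in Python with sympy; not part of the original paper) tabulates pr(n) over a modest range. Since the asymptotics involve triply iterated logarithms, no close agreement with log_2 n/log_3 n should be expected at computable scales; the sketch only illustrates the definition.

```python
from collections import Counter
from sympy import primerange, totient

def pr(n: int) -> int:
    """Largest prime p with p# | phi(n) (0 if there is none): since p# is the
    product of the distinct primes up to p, p# divides phi(n) exactly when
    every prime q <= p divides phi(n)."""
    phi, best = int(totient(n)), 0
    for p in primerange(2, 100):
        if phi % p != 0:
            break
        best = p
    return best

print(Counter(pr(n) for n in range(2, 10**5)))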
As the astute reader may have noticed, the argument sketched at the start of the introduction works equally well to show that φ(n) is almost always divisible by any fixed integer m (not necessarily prime!). This point of view suggests studying the largest factorial dividing φ(n). In §3 we establish the natural factorial analogues of (1) and Corollary 2.
In §4 we consider the primorial and factorial problems for Carmichael's universal exponent function λ(n). Finally, in §5 we raise the related questions where instead of asking what occurs for almost all n, we ask what occurs for almost all φ-values, or λ-values.
Notation. Throughout, we reserve the letters ℓ and p for primes. We use the notation v_p(n) to denote the largest integer v with p^v | n.
Proof of Theorem 1
Lemma 3. Let P be a set of primes, let x ≥ 1, and let S = Σ_{ℓ∈P, ℓ≤x} 1/ℓ. Then the number of n ≤ x divisible by no prime ℓ ∈ P is ≪ x e^{−S}.

Proof. See Remark 1 of [Pom77] or the Lemma on p. 699 of [Nor76].

We will also make repeated use of the following standard estimate for the reciprocal sum of primes in a progression.

Lemma 4. For t ≥ 3 and integers m with 2 ≤ m ≤ log t,

Σ_{ℓ≤t, ℓ≡1 (mod m)} 1/ℓ = (log_2 t + O(log 2m))/φ(m).
It will be convenient for the proof of Theorem 1 to work not with f(n, Λ) but with a variant function that seems less natural but is more amenable to analysis. Put A_0(x) = log_2 x/log_3 x, write φ̃(n) = ∏_{ℓ | n, ℓ ≤ x^{1/log_3 x}} (ℓ − 1) for the contribution to φ(n) of the small primes dividing n, and define

g(n, Λ) = #{p : A_0(x) < p ≤ A(x) + Λ·B(x), and p ∤ φ̃(n)}.
The next lemma assures us that for the density results we aim at, there is no difference dealing with g versus f .
Lemma 5. For each fixed real Λ, the number of n ≤ x with f(n, Λ) ≠ g(n, Λ) is o(x) as x → ∞.

Proof. Suppose that f(n, Λ) ≠ g(n, Λ). Then either there is a prime p counted by f and not by g, or vice versa.
In the first case, we must have p ≤ A_0(x). Since p ∤ φ(n), there is no prime ℓ ≡ 1 (mod p) for which ℓ | n. By Lemmas 3 and 4, the proportion of n ≤ x satisfying this latter condition is

≪ e^{−S}, where S = Σ_{ℓ≤x, ℓ≡1 (mod p)} 1/ℓ = (1 + o(1)) log_2 x/(p − 1).

Summing on p ≤ A_0(x), we find that the proportion of n ≤ x occurring in this first case is O(1/(log_3 x)²), and so is o(1).
In the second case, p > A_0(x) and p | φ(n), but p ∤ φ̃(n). Thus, either (i) p² | n, or (ii) there is a prime ℓ | n, ℓ ≡ 1 (mod p) with ℓ > x^{1/log_3 x}. The proportion of n ≤ x for which (i) can occur (for some p) is ≪ Σ_{p>A_0(x)} 1/p², and so is o(1). The proportion of n ≤ x for which (ii) can occur is at most

Σ_{A_0(x) < p ≤ A(x)+ΛB(x)} Σ_{x^{1/log_3 x} < ℓ ≤ x, ℓ≡1 (mod p)} 1/ℓ.

By Brun-Titchmarsh and partial summation, the inner sum on ℓ is O((log_4 x)/p), making the last display (for large x) ≪ (log_4 x)²/(log_3 x)². Thus, the proportion of n ≤ x as in (ii) is also o(1).
In view of Lemma 5, to prove Theorem 1 it suffices to show that g(n, log λ) is asymptotically Poisson distributed with parameter λ. The Poisson distribution of parameter λ, which we will denote by Po(λ), is determined by its moments (see, for instance, Theorem 30.1 on p. 388 of [Bil95], along with Example 21.4 on p. 279 there). It is equivalent, but somewhat simpler here, to work with factorial moments instead of moments. The rth factorial moment of Po(λ) is λ^r, and so by the Fréchet-Shohat moment theorem (see [Gal95, Theorem 28, p. 81]) it is enough to prove that

lim_{x→∞} (1/x) Σ_{n≤x} g(n, log λ)(g(n, log λ) − 1) ⋯ (g(n, log λ) − (r − 1)) = λ^r

for each fixed r = 1, 2, 3, . . . . By induction on r, or by first recognizing the falling factorial as the numerator of a binomial coefficient, we see that

Σ_{n≤x} g(n, log λ)(g(n, log λ) − 1) ⋯ (g(n, log λ) − (r − 1)) = Σ'_{p_1,…,p_r} #{n ≤ x : p_i ∤ φ̃(n) for 1 ≤ i ≤ r},   (2)

where Σ' denotes a sum over r-tuples of distinct primes p_1, …, p_r from the interval (A_0(x), A(x) + (log λ)B(x)]. Fix distinct primes p_1, . . . , p_r as in the sum. Putting L = {ℓ ≤ x^{1/log_3 x} : ℓ ≡ 1 (mod p_i) for some i}, the right-hand summand in (2) counts those n ≤ x not divisible by any prime ℓ ∈ L. By the fundamental lemma of the sieve, this count is

∼ x ∏_{ℓ∈L} (1 − 1/ℓ).

(We suppress the dependence of the implied constant on r, which is fixed; all of our asymptotic results hold uniformly in p_1, . . . , p_r.) Since each 1 − 1/ℓ = exp(−1/ℓ + O(1/ℓ²)), the product is exp(−Σ_{ℓ∈L} 1/ℓ + o(1)). By Lemma 4, the remaining sum on ℓ is

Σ_{ℓ∈L} 1/ℓ = (1 + o(1)) (log_2 x − log_4 x) Σ_{i=1}^{r} 1/(p_i − 1),

and so summing on p_1, . . . , p_r yields

(1/x) Σ_{n≤x} g(n, log λ) ⋯ (g(n, log λ) − (r − 1)) ∼ Σ'_{p_1,…,p_r} ∏_{i=1}^{r} e^{−(log_2 x − log_4 x)/(p_i − 1)}.   (3)

We briefly digress to study the effect of removing the distinctness condition on the p_i from this last expression. The resulting sum is the rth power of

Σ_{A_0(x) < p ≤ A(x)+(log λ)B(x)} e^{−(log_2 x − log_4 x)/(p − 1)} = ∫_{A_0(x)}^{A(x)+(log λ)B(x)} e^{−(log_2 x − log_4 x)/u} dπ(u).

Write π(u) = ∫_2^u dt/log t + E(u), so that dπ(u) = du/log u + dE(u). Using that E(u) ≪_K u/(log u)^K for every fixed K (a strong form of the prime number theorem), a straightforward computation shows that the integral above, with dπ(u) replaced by dE(u), is o(1). Turning to the remaining piece of the integral, we make the change of variables u = A_0(x)(1 + z). Then, uniformly on the range of integration,

(log_2 x − log_4 x)/u = log_3 x − z log_3 x + o(1)   and   log u = (1 + o(1)) log_3 x.

Therefore,

∫ e^{−(log_2 x − log_4 x)/u} du/log u ∼ (A_0(x)/(log_3 x)²) e^{−log_3 x} e^{Z log_3 x},   where Z = (3 log_4 x + log λ)/log_3 x

is the value of z at the upper endpoint. Collecting all of the estimates of this paragraph, we conclude that as x → ∞,

Σ_{A_0(x) < p ≤ A(x)+(log λ)B(x)} e^{−(log_2 x − log_4 x)/(p − 1)} → λ.

When r = 1, the work of the last paragraph establishes that the left-hand side of (3) converges to λ^r. We also see that the same convergence assertion will follow for r > 1 provided that

Σ_{p_1,…,p_r not all distinct} ∏_{i=1}^{r} e^{−(log_2 x − log_4 x)/(p_i − 1)} = o(1).

If some p_i = p_j, then reordering the p_i, we can force p_1 = p_2. Thus, the above left-hand side is

≪ Σ_{p,p_3,…,p_r} e^{−2(log_2 x − log_4 x)/(p − 1)} ∏_{i=3}^{r} e^{−(log_2 x − log_4 x)/(p_i − 1)},

where, as above, p and the p_i range over (A_0(x), A(x) + (log λ)B(x)]. The final sum on p is o(1), since each summand is ≪ (log_2 x)^{−1.9} (say), and there are crudely O(log_2 x) summands. This completes the proof of Theorem 1.
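As a purely illustrative numerical check (not from the original paper), the following Python sketch evaluates the final sum of the proof for a few values of λ, using the definitions of A(x) and B(x) as reconstructed above. All of the relevant quantities grow triply-logarithmically, and the prime sums are extremely sparse at any computable x, so only very loose, shape-level agreement with λ can be expected.

```python
from math import exp, log
from sympy import primerange

def itlog(x: float, k: int) -> float:
    """k-fold iterated natural logarithm."""
    for _ in range(k):
        x = log(x)
    return x

def tail_sum(x: float, lam: float) -> float:
    l2, l3, l4 = itlog(x, 2), itlog(x, 3), itlog(x, 4)
    A0 = l2 / l3
    A = A0 * (1 + 3 * l4 / l3)   # reconstructed definition of A(x)
    B = l2 / l3 ** 2             # reconstructed definition of B(x)
    upper = A + log(lam) * B
    return sum(exp(-(l2 - l4) / (p - 1))
               for p in primerange(2, upper + 1) if A0 < p <= upper)

for lam in (0.5, 1.0, 2.0):
    print(lam, tail_sum(1e300, lam))
```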
Remark. One could ask not only for every prime up to a certain height to appear in φ(n), but for those primes to appear to at least the rth power, for one's favorite fixed positive integer r. The above analysis can be adapted to prove an analogue of Theorem 1 in this generalized setting. Define

A_r(x) = (log_2 x/log_3 x)(1 + (4 − r) log_4 x/log_3 x)   and   B_r(x) = log_2 x/(log_3 x)².

For integers n ≤ x and real Λ, set

f_r(n, Λ) = #{p ≤ A_r(x) + Λ·B_r(x) : p^r ∤ φ(n)}.

In analogy with Theorem 1 (the case r = 1), we can show that for each fixed λ > 0, the quantity f_r(n, log((r − 1)!λ)), considered on the integers n ≤ x, is asymptotically Poisson distributed with parameter λ. The broad outline of the proof is the same as before; roughly speaking, the numbers with no small prime factors from the progression 1 mod p have their role replaced by those with at most r − 1 such prime factors. This thought is made more explicit in the next section.
From primorials to factorials
In this section we prove the following analogue of (1):

(4) For each fixed λ > 0, with y = y(x) = A(x) + (log λ)B(x), the limiting proportion of n ≤ x with ⌊y⌋! | φ(n) is e^{−λ}.
Note that the obvious counterpart of Corollary 2 follows as an immediate consequence. Clearly, if ⌊y⌋! | φ(n), then φ(n) is divisible by the product of all primes up to y. We will show that if n ≤ x and φ(n) is divisible by the product of all primes up to y, then apart from o(x) exceptions, φ(n) is divisible by ⌊y⌋!. Thus, (4) follows from (1).
For this, we appeal to the following generalization of Lemma 3, due essentially to Halász [Hal72].
Proposition 6. Let P be a set of primes, let x ≥ 1, and let S = Σ_{ℓ∈P, ℓ≤x} 1/ℓ. Suppose that 0 < δ < 2. Then for each integer m with 0 ≤ m ≤ (2 − δ)S, the proportion of n ≤ x with exactly m distinct prime factors from P is ≪_δ e^{−S} S^m/m!.
We suppose now that n ≤ x, that φ(n) is divisible by all primes up to y (with y = y(x) as in (4)), but that ⌊y⌋! ∤ φ(n). Then we can find a prime p ≤ y with v_p(φ(n)) < v_p(⌊y⌋!). Since 2 ≤ v_p(⌊y⌋!) = Σ_{r≥1} ⌊y/p^r⌋ < y/(p−1), we have p − 1 < y/2. Choose the integer k ≥ 2 with

k < y/(p − 1) ≤ k + 1.   (5)

Then v_p(φ(n)) ≤ k, and so n is divisible by at most k primes from P = {ℓ ≡ 1 (mod p)}.
With this P, Lemma 4 gives

S = Σ_{ℓ∈P, ℓ≤x} 1/ℓ = (1 + o(1)) log_2 x/(p − 1)

for large x. By Proposition 6, the proportion of n divisible by at most k primes from P is

≪ Σ_{m≤k} e^{−S} S^m/m! ≪ k e^{−S} S^k/k!.

Since S ≥ (1 + o(1)) k log_3 x (by Lemma 4) and k! ≥ (k/e)^k, we find that the last displayed expression is

≪ exp(−k(log_3 x − log_4 x − C)),

where C is a certain absolute constant.
It remains to sum on the p's corresponding to a given k, and then to sum on k. To each k ≥ 2, there are (very crudely) ≪ y/log y ≪ log_2 x/(log_3 x)² primes p in the range determined by (5). Hence, the proportion of n with φ(n) divisible by all primes up to y, but not by ⌊y⌋!, is

≪ (log_2 x/(log_3 x)²) Σ_{k≥2} exp(−k(log_3 x − log_4 x − C)) ≪ (log_2 x/(log_3 x)²) exp(−2(log_3 x − log_4 x − C)) ≪ 1/log_2 x,

which tends to 0 as desired.
Carmichael's function
One might ask about analogues of Theorem 1 and the factorial problem of §3 for other number theoretic functions similar to φ. As one might expect, we have the same theorems for the sum-of-divisors function σ, since the only complications are nontrivial prime powers, and for almost all n, nontrivial prime power divisors are small.
We now ask about Carmichael's function λ(n). It is the order of the largest cyclic subgroup of (Z/nZ)*; namely, the exponent of the unit group of Z/nZ. Carmichael's function is closely related to Euler's function φ. In fact, from the theorem on the primitive root and from the Chinese Remainder Theorem, we have that λ(p^a) = φ(p^a) for p > 2 or p^a < 8, λ(2^a) = (1/2)φ(2^a) for a ≥ 3, and λ(mn) = lcm[λ(m), λ(n)] when gcd(m, n) = 1.
It is immediate that p | φ(n) if and only if p | λ(n), so that we have the analogue of Theorem 1 for Carmichael's function. The situation for factorials, though, is markedly different. Let k_λ(n) denote the largest integer k with k! | λ(n).
Lemma 7. Let ξ(n) → ∞ arbitrarily slowly. There is a set of integers S of asymptotic density 1 such that for n ∈ S,

(i) log_2 n/ξ(n) < 2^{v_2(λ(n))} < ξ(n) log_2 n,   (6)

(ii) 2^{v_2(λ(n))} divides p − 1 for some odd prime p | n, and

(iii) k_λ(n) = max{k : v_2(k!) ≤ v_2(λ(n))}.

Proof. We may assume that ξ(x) ≤ log_3 x. Let 2^m be the least power of 2 exceeding (log_2 x)/ξ(x) and let 2^M be the largest power of 2 not exceeding ξ(x) log_2 x. It follows from Lemmas 3, 4 that but for o(x) choices for integers n ≤ x we have a prime p | n with p ≡ 1 (mod 2^m). Further, it follows from Lemma 4 that the proportion of integers n ≤ x divisible by a prime p ≡ 1 (mod 2^M) is ≪ (log_2 x)/2^M = o(1). Thus, we have (i). The only way that (ii) would not hold is if the 2-power in λ(n) is λ(2^{v_2(n)}). If also n satisfies (i), as we may assume, and n ≤ x, this would imply that 2^{v_2(n)} > 2^m > (log_2 x)/ξ(x). The number of such n is O(xξ(x)/log_2 x) = o(x) as x → ∞. Thus, we have (ii).
It is clear that for any positive integer N, if k! | N, then v_2(k!) ≤ v_2(N). Thus, k_λ(n) ≤ k_0 := max{k : v_2(k!) ≤ v_2(λ(n))}, so it will suffice to show that k_0! | λ(n) almost surely. Note that for any positive integer N we have v_2(N!) = N + O(log N), and for any prime p, v_p(N!) ≤ N/(p − 1). Using (6) we may assume for n ≤ x that

k_0 = log_3 x/log 2 + O(log ξ(x) + log_4 x).   (7)

For p ≥ 3, we have v_p(k_0!) ≤ k_0/(p − 1) and (log p)/(p − 1) ≤ (log 3)/2. It follows that we may assume for each prime p ≥ 3 that p^{v_p(k_0!)} ≤ exp(0.8 log_3 x) = (log_2 x)^{0.8}. For q a prime power at most (log_2 x)^{0.8}, the number of n ≤ x not divisible by a prime r ≡ 1 (mod q) is, by Lemmas 3, 4, at most x/exp((log_2 x)^{0.19}). Summing this count for prime powers up to (log_2 x)^{0.8} we obtain an expression that is o(x) as x → ∞, so it follows that but for o(x) choices of n ≤ x we have p^{v_p(k_0!)} | λ(n) for all primes 3 ≤ p ≤ k_0. Since by definition we have 2^{v_2(k_0!)} | λ(n), we have k_0! | λ(n). This completes the proof of (iii).
Theorem 8. For a set of integers n of asymptotic density 1 we have k_λ(n) = log_3 n/log 2 + O(log_4 n).
Proof. This follows immediately from Lemma 7, the definition of k 0 in its proof, and (7).
Remark. Let s_2(m) denote the number of 1's in the binary expansion of m. One can show that on a set of asymptotic density 1, k_λ(n) is m + O(ξ(n)) where m is the largest integer with m − s_2(m) ≤ v_2(λ(n)). In fact, m − s_2(m) = v_2(m!), so the assertion follows from Lemma 7.
Let k_φ(n) denote the largest integer k with k! | φ(n), namely the subject of §3. It follows from the arguments there that if p is the largest prime with p# (the primorial of p) dividing φ(n), then on a set of n of asymptotic density 1, p! | φ(n). In fact, on a set of asymptotic density 1, p!M | φ(n), where M is the product of all of the composite numbers in (p, (3/2)p). (To see this, assume n is large and let q run over the primes up to p. If q > (3/4)p, then q ∤ M. If (1/7)p < q < (3/4)p, then by the method of §3, we may assume that q^{11} | φ(n), so that v_q(φ(n)) ≥ v_q(p!M). In addition, the method of §3 can also be used to show we may assume that v_q(φ(n)) > (log_2 x)/(2(q − 1)) > v_q(p!M) for all q < p/7.) Now, by a somewhat stronger version of Bertrand's postulate, we may assume the next prime r after p is < (3/2)p. We conclude that (r − 1)! | φ(n) and k_φ(n) = r − 1. So on a set of asymptotic density 1, k_φ(n) is even. This is a striking incongruence from the situation with k_λ(n).
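The quantities k_φ(n) and k_λ(n) are easy to experiment with numerically. Here is a small sketch (Python with sympy; not from the original paper) based on Legendre's formula v_p(k!) = Σ_{r≥1} ⌊k/p^r⌋, which gives the criterion k! | m iff v_p(k!) ≤ v_p(m) for every prime p ≤ k.

```python
from sympy import factorint, primerange, totient, reduced_totient

def v_p_factorial(k: int, p: int) -> int:
    """Legendre's formula: exponent of the prime p in k!."""
    v, q = 0, p
    while q <= k:
        v += k // q
        q *= p
    return v

def largest_factorial_divisor(m: int) -> int:
    """Largest k with k! | m."""
    fac = factorint(m)
    k = 1
    while all(v_p_factorial(k + 1, p) <= fac.get(p, 0)
              for p in primerange(2, k + 2)):
        k += 1
    return k

n = 9699690  # 19#, chosen arbitrarily for illustration
print(largest_factorial_divisor(int(totient(n))),          # k_phi(n)
      largest_factorial_divisor(int(reduced_totient(n))))  # k_lambda(n)
```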
A related problem
Let V = φ(ℕ), that is, V is the set of distinct values of φ. Let V(x) denote the number of members of V in [1, x]. After earlier work of Pillai, Erdős, Hall, Maier, and Pomerance, we finally learned the order of magnitude of V(x) in Ford [For98]. Ignoring subsets of V ∩ [1, x] of size o(V(x)) as x → ∞, what can be said about the largest primorial (or factorial) which divides most members of V ∩ [1, x]? We know that most values of φ come from small fibers, and in particular there is a set of integers S of asymptotic density 0 such that V(x) ∼ #(φ(S) ∩ [1, x]) as x → ∞. It seems likely to us that the key function here is exponentially smaller than log_2 x/log_3 x and is of the form (log_3 x)^{1+o(1)}. It would be nice to prove this assertion.
The analogous problem for Carmichael's function λ is even more murky. Let V_λ(x) denote the number of λ values in [1, x]. We do not know the order of magnitude of V_λ(x), only recently learning in [FLP14] that V_λ(x) = x/(log x)^{η+o(1)} as x → ∞, where η = 1 − (1 + log log 2)/log 2.
“I wish they could be in my shoes”: patients’ insights into tertiary health care for type 2 diabetes mellitus
Background Insightful accounts of patient experience within a health care system can be valuable for facilitating improvements in service delivery. Objective The aim of this study was to explore patients’ perceptions and experiences regarding a tertiary hospital Diabetes and Endocrinology outpatient service for the management of type 2 diabetes mellitus (T2DM). Method Nine patients participated in discovery interviews with an independent trained facilitator. Patients’ stories were synthesized thematically using a constant comparative approach. Results Three major themes were identified from the patients’ stories: 1) understanding T2DM and diabetes management with subthemes highlighting that specialist care is highly valued by patients who experience a significant burden of diabetes on daily life and who may have low health literacy and low self confidence; 2) relationships with practitioners were viewed critical and perceived lack of empathy impacted the effectiveness of care; and 3) impact of health care systems on service delivery with lack of continuity of care relating to the tertiary hospital model and limitations with appointment bookings negatively impacting on patient experience. Discussion The patients’ stories suggest that the expectation of establishing a productive, ongoing relationship with practitioners is highly valued. Tertiary clinics for T2DM are well placed to incorporate novel technological approaches for monitoring and follow-up, which may overcome many of the perceived barriers of traditional service delivery. Conclusion Investing in strategies that promote patient–practitioner relationships may enhance effectiveness of treatment for T2DM by meeting patient expectations of personalized care. Future changes in service delivery would benefit from incorporating patients as key stakeholders in service evaluation.
Introduction
Type 2 diabetes mellitus (T2DM) is a chronic disease and a leading cause of morbidity and mortality in Australia. 1 Up to 5% of the Australian population has a T2DM diagnosis, although the true prevalence is likely double this rate due to considerable under diagnosis. 1 Poor glycemic control increases the risk of micro- and macro-vascular complications, which are a significant, yet preventable, burden on the health system. 2,3 Self-management practices such as dietary behaviors, physical activity, blood glucose monitoring, and wound care are well recognized as supporting optimal glycemic control. 4 It is paramount that patients are supported to undertake self-management practices to enhance their personal health outcomes and minimize the risk of complications. 5 Patients with T2DM typically receive ongoing care from a variety of health professionals either in primary or tertiary health care settings. 6 Optimizing the delivery of health services in these settings may benefit health outcomes.
Health care service models are traditionally founded on studies with health and economic outcomes. 7,8 However, there is currently limited awareness of consumers' perspectives about their health care experiences and expectations, and opportunities to collect this information are not typically embedded in health care systems. 9 This information is a key component of patient-centered care in which patients are incorporated as "partners" to ensure quality, appropriate care is provided. 10 The concept of partnering with patients has been endorsed by the World Health Organization for the globalized effort to redesign and improve health care processes, including the need to better understand the "patient journey" across the health care continuum. 11 Previous investigations of patients' health care experiences for T2DM have often used quantitative approaches, such as surveys, 12,13 which tend to overestimate patients' satisfaction with services. 14 In addition, qualitative focus groups are usually guided by predetermined questions and may result in truncated storytelling by patients, which limits the understanding of patients' perspectives. 15,16 Discovery interviews have been used extensively within the UK to gain unbiased consumer insights to inform service improvement activities. 17,18 The discovery interview technique provides the opportunity for consumers to tell their experiences or "stories" as opposed to the traditional questioning approach. 17,19 This process values the principles of consumer perspectives and priorities, considering consumers as the experts on how health conditions impact their day to day life, rather than asking for a judgment or assessment of the health care service. 17,20 Individual interviewing techniques can serve to overcome a fear of sharing and influence of contrasting views which may be encountered within group settings, serving as a valuable first line approach to better understanding consumer experiences.
Therefore, the aim of this study was to undertake discovery interviews that explore patients' perceptions and experiences regarding a tertiary hospital Diabetes and Endocrinology outpatient service.
Methods
This study undertook discovery interviews to explore patients' perceptions and experiences regarding a tertiary hospital diabetes and endocrine outpatient service. Discovery interviews are a qualitative approach that facilitates participants to share an open "story" prompted by an interview "spine" of laminated cards containing key words and phrases as determined by the researchers. 19,20 Ethical approval was granted by the Metro South Hospital and Health Service's Human Research Ethics Committee (Brisbane, QLD, Australia).
Purposive sampling 21 was used to identify patients of interest who had been referred to a tertiary hospital outpatient clinic in Brisbane, Australia, for management of T2DM. Potential participants were selected using the outpatient appointment management system in July 2013. The discovery interview approach is not designed to reach saturation of data but rather to generate rich empirical material describing the patient experience which provides insights that are not typically revealed using survey or structured interview approaches. New and informative insights can be gleaned from a small number of interviews. The number of interviews undertaken in this study was therefore determined by the time and resources available to the research team. A list of patient names was randomly generated; these patients were telephoned by hospital staff to describe the aim of the study and ascertain interest in participation. Verbal consent was obtained for contact details to be provided to independent researchers trained in conducting discovery interviews. Of the 32 people who had agreed to participate, nine were approached by independent researchers and informed written consent was then obtained prior to conducting each discovery interview. The identity of these nine participants was blinded to clinic staff to ensure anonymity.
An interview spine was developed to form a basic prompt for consumers in sharing their story of the health care experience. 17 The prompt words included in the spine consisted of "Living with diabetes", "Taking responsibility", "Seeking help and support", "Having expectations", "Seeing the dietitian", "Changing my life", "Being heard", "Understanding diabetes", and "Making progress". Prompts were devised via rationally derived discussion within the research team, which was comprised of experienced clinicians and qualitative researchers with experience in the discovery interview technique. The aim was to develop a spine that allowed the stories generated to address the aims of the study without being deemed leading or judgmental. Trained interviewers conducted individual face-to-face interviews with participants. Prompts or probing questions were kept to a minimum, but were used as appropriate to encourage participants to continue telling their story. 17 The discovery interviews were recorded using a digital Dictaphone and transcribed as patients' stories.
Patients' stories were synthesized thematically using a constant comparative approach informed by current patient-centered care philosophies within chronic disease management. 22 Firstly, JC coded sections of the transcripts and organized these into groups with common themes. Secondly, JC and LB further developed the themes by discussing their dimensions and properties for each participant. Post-analysis discussion and verification of themes were conducted among JC, LB, MF, and IH to identify common or dissident viewpoints among interviewed participants. Triangulation was achieved through the involvement of the clinic psychologist (JZ) verifying interpretation. Participant quotes were used to support the key themes and coded in sequential numbers such as P1 = participant 1.
Results
Nine patients (two male and seven female), average age 56 years, receiving care from the diabetes clinic participated in the discovery interviews. Thematic analysis of the stories identified three major themes with multiple sub-themes, as displayed in Table 1.
Theme 1: understanding diabetes and diabetes management

Specialist diabetes management is highly valued by patients
The value of specialist care, above and beyond general practitioner (GP) management, was highly evident in the patients' stories. Patients valued their relationships with their GP; however, they perceived the role of their GP to be focused on routine health care services such as script provision, general check-ups, and monitoring. Patients perceived that significant health conditions, such as T2DM, required the expertise of specialists for ideal management.
Diabetes is forever
Participants commonly felt that once they received the diagnosis of diabetes, it could not be improved or resolved, and was a significant impact upon their quality of life. Participants also reported difficulty coming to terms with the diagnosis and the chronic nature of the condition.
Diabetes is a significant burden on daily life
Participants reported that the daily imposition of decisions or behaviors necessary for diabetes management was a challenge and posed a significant workload burden on their lives.
Theme 3: impact of health care systems on service delivery
There is a lack of continuity of care

Participants experienced considerable frustration with the diabetes service due to seeing multiple health care professionals over time, limiting continuity of care. Patients felt that recounting their medical history at each visit was not a constructive process and limited progress in their diabetes management. This was also highlighted as a significant contributor to unproductive health care partnerships due to the health care professionals' superficial understanding of the individual's personal circumstances, history, and goals.

There are limitations with the appointment scheduling system

Patients' stories highlighted limitations with the appointment scheduling system, predominantly a lack of flexibility in appointment booking, clinic times, and processes. Participants regularly reflected on instances of merely attending appointments to stay within the system or due to fear of losing contact with specialist services. Other participants described a desire for greater contact with the clinic for additional support and monitoring. Participants reported difficulties in managing other commitments around appointments because of limited notification of upcoming appointments, and this contributed to regular cancellations or re-scheduling of appointments.

There are excessive waiting times in clinic prior to an appointment

Participants reported significant frustration with extended waiting times immediately prior to a scheduled appointment, which impacted upon future attendance and appointment re-scheduling. Participants who were unable to allocate an
Discussion
This study contributes new information on patients' experiences of a tertiary hospital T2DM service, which can be used to inform patient-centered health care service delivery. Participants provided rich and meaningful insights on factors that affect their health care experiences. Three major themes were generated from patients' stories, which identified clear opportunities for improvement in service delivery, including continuity of care, communication, and appointment bookings. This information is important due to the recognized relationship among health care experiences, self-management practices, and health care outcomes. 13 The patients' stories clearly highlight the value placed on investing in developing partnerships between health care professionals and patients. A perceived lack of personalized service and low health literacy has the potential to limit the effectiveness of prescribed treatment options for chronic disease. Low health literacy may be associated with difficulty interpreting health information, rather than a diminished desire for information, 23 and the results of this study support this notion. When health literacy is measured and accommodated through tailored education material, diabetes knowledge can rapidly improve. 24 The inclusion of infographics in education material, carefully designed with the input of patients, can provide valuable context for health information, support comprehension, and may facilitate the steps toward self-management actions. 23 Health literacy is complex and rarely measured in clinical trials. 25 The development of validated tools to measure different components of health literacy as intermediary outcomes of clinical services seems warranted. Service delivery models or communication styles that do not meet patients' expectations of shared decision making are likely to compound barriers to effective care. Involving patients in education and decision making has been shown to enhance health literacy 25 and confidence in self-management. 26 This is noteworthy because enhanced confidence in self-management is likely to result in improved outcomes for patients. 4,27 Despite a desire to be involved in decision making associated with the management of T2DM, participants acknowledged and highly valued the specialist expertise of the tertiary health care team. It was evident that the partnerships among specialist, GP, and patient could be more clearly defined for each individual. Education that is structured with provisions for patient-directed priorities and expectations and agreed upon roles for GPs and other practitioners may positively influence the perception of individualized care. 28 Patient empowerment has gained prominence in health care as an indicator for patient-centered care. This study identified a number of factors which may act as indicators for patient empowerment, 29 including health literacy, feeling respected, and involved in decision making. Measuring indicators of patient empowerment 29 could be an innovative way of evaluating service delivery in the future.
The patients' stories suggest that establishing a productive, ongoing relationship with health care professionals is highly valued but currently unmet. A number of factors impeded the development of relationships including regular staff rotation and perceived lack of empathy. Rotation of training medical staff through tertiary centers is presently an unavoidable obligation of teaching hospitals. Innovative strategies to improve handover of patient care to new staff need to be considered. Bedside nursing handover is an example whereby improvements in facilitated handover between staff (possibly using standardized tools) has empowered patients to participate in the care process. 30,31 The burden of extended time between appointments makes the outpatient environment quite different to inpatient care; however, investing in developing modified standardized handover tools to suit outpatient settings, real time case conference discussions during clinic, multidisciplinary team meetings, utilizing non-rotating nursing staff to act as case managers, and embracing the developing robotic technologies 32,33 could all contribute to reducing the dissatisfaction felt when patients need to regularly repeat their history. The health care system is likely to be transformed by artificial intelligence devices which will enhance the flow of information between patient and health care professional and assist decision making and personalized medicine. 34,35 It appears from this study that tertiary based clinics for T2DM are a salient target to test these innovations as a strategy to enhance patient-physician relationships.
The perceived degree of empathy felt during a consultation is critical for building relationships with patients. 36,37 Empathy is complex and can be described as a caregiver understanding the private world of the client to gain insight into their situation without judgment. 38 The way it is displayed can be influenced by an individual's personality and his or her own experienced emotions. There is a link between an individual physician's well-being and the quality of care delivered within the workplace. 39 Compassion fatigue is a gradual lessening of compassion over time in caregivers who are regularly exposed to patients' problems. This has most commonly been measured in professionals dealing with daily trauma victims, but the phenomenon may also occur for health professionals involved in long-term health care of conditions which have no cure. 40 Strategies to acknowledge and support the well-being of staff involved in chronic care including stress reduction and resilience 41 and social support/team building exercises could have collateral benefits for improving perceived empathy with patients. Furthermore, incorporating patients' stories into evaluations of service delivery models or regular clinic meetings may remind staff to be more aware of patients' experiences of the service. 42 Numerous system issues impacted upon the health services provided to patients, including lack of flexibility in booking appointments, long waiting times at the clinic, and lack of continuity of care. While the need for improvements in the logistical management of tertiary health services is not a new issue, it is still a critical factor in patients' perception of quality care and may impact on patients' health outcomes. 43 Logistical modifications appear warranted to ensure the delivery of appointments meet patients' needs and may require the use of flexible consultation times (wave booking system), after-hours appointment times, or telehealth technology. Telehealth programs for patients with T2DM have been shown to elicit improvements in self-management practices and health outcomes. 44,45 Again, partnering with patients to determine the most beneficial delivery of services and embracing emerging artificial intelligence monitoring systems for patients to avoid physical attendance at clinic altogether 46 may facilitate appropriate prioritization of flexible service delivery options.
Storytelling is becoming increasingly recognized in health care as a powerful tool that can be used to better understand patient experiences and thereby facilitate health care service improvement. 19 The local dissemination of the patients' stories from this study has initiated conversations across a broad range of stakeholders including front line clinic staff, middle and executive management, research institute staff, research funding bodies, and local and national health professional conferences. The stories have been used to generate discussion about service development strategies, improvement in research protocol development, and have highlighted the value of community engagement in the development and design of translational research activity.
This study has some noteworthy limitations. First, it is unclear whether lessons learned in one tertiary setting can be successfully transferred to other settings. 19 However, the patient experiences captured in this study align closely with other evaluations of diabetes management, such as in the primary care setting, and suggest that shared learning across settings may be possible. 47 Second, the limited number of patients' stories and sex bias toward greater female participants means that it is possible that other patients who did not participate in the study had experiences that have not been captured. In addition, staff perceptions were not captured in this study and there was no cross-check linking patient experience with clinical outcome. Rather than identifying definitive systems issues, the utilization of discovery interviews is most highly regarded to hear detailed stories from a small number of people, which can then provide ideas for service improvement. 17 In addition to discovery interviews, alternative strategies that might typically be restricted to research studies such as patient focus groups could inform practical strategies for developing format, content, and delivery options to meet their needs and could be embedded into annual clinical review processes. Consideration of adopting these strategies across other chronic diseases such as cardiovascular disease, respiratory disease, and hypertension is also relevant.
In conclusion, this study provides valuable information regarding patients' experience of a tertiary hospital T2DM service. Investing in strategies that enhance the perception of empathetic relationships with the health practitioner and health literacy may enhance effectiveness by meeting patient expectations of personalized care. These insights highlight that active consumer participation is not embedded in the current health care setting. Future changes in the service delivery would benefit from incorporating patients as key stakeholders in service evaluation and improvement. The utilization of patients' stories as a regular strategy for service evaluation is encouraged.
Diurnal variations of rainfall affected by complex topography based on high-density observation in Chongqing over southwest China
Located at the eastern edge of the Sichuan Basin (SCB) in southwest China, Chongqing is a mountainous region with typical complex topographic features. Using hourly rainfall observation data from a high-density network of 1686 meteorological stations in Chongqing during the warm season from 2009 to 2016, we investigated the diurnal characteristics of precipitation affected by complex topography. The complex mountainous terrain has a significant impact on the diurnal variations and distinct regional features of rainfall amount, frequency, and intensity. The stations located in the higher complex mountainous areas have greater rainfall amount, frequency, and intensity than those in the lower surrounding areas. In addition, the detailed characteristics of the rainfall amount and frequency in the four study regions further show that the rainfall amount and frequency significantly increase with the rise of elevation, especially in areas where the terrain height sharply increases along the direction in which the mountains extend. The diurnal variation of the rainfall amount is characterized by a bimodal structure with a dominant early-morning peak occurring at approximately 0700 LST (2300 UTC) and a weaker secondary late-afternoon peak at approximately 1600 LST (0800 UTC), while the rainfall frequency has a single early-morning peak. The terrain height has a significant impact on the proportions of the early-morning rainfall: as elevation increases in the four study regions, the proportions of rainfall amount (frequency) that occur during the early-morning period decrease.
Introduction
Complex topography plays a significant role in the inhomogeneous spatial and temporal distributions of rainfall, and most previous research has shown that the mechanisms of orographic influence are complex (Smith 1979; Jiang and Smith 2003; Colle 2004; Smith and Barstad 2004; Roe 2005; Rotunno and Houze 2007; Kirshbaum 2011; Houze 2012; Couto et al. 2016; DeHart and Houze 2017; Purnell and Kirshbaum 2018; Kang et al. 2019). Many features of mountainous topography, including elevation, relief amplitude, slope, and aspect, can initiate, enhance, and modify rainfall and further influence the diurnal and spatial variations of rainfall (Weisse and Bois 2001; Burbank et al. 2003; Lee et al. 2010; Chen et al. 2013; White and Paul 2015; Kirshbaum et al. 2018; Sarmadi et al. 2019). Among the orographic and morphological characteristics, elevation differences play a considerably important role in the spatial variation of climatic rainfall in mountainous regions (Basist et al. 1994; Goovaerts 2000; Johansson and Chen 2003; Sokol and Bližňák 2009; Silverman et al. 2013).
The topography of China is complex and varied, with diverse mountain ranges and plateaus; the influence of terrain on precipitation is therefore particularly complex (Tao 1980; Peng et al. 1995; Liao et al. 2007). Due to the forcing effects of the spatial scale and height differences of complex topography, the spatiotemporal characteristics of precipitation show fairly obvious regional differences (Wang et al. 2013; Li et al. 2017, 2019, 2020a; Yu et al. 2018). The influences of the high Tibetan Plateau (TP) are closely related not only to the spatiotemporal distributions of rainfall in the local and surrounding areas (Liu et al. 2009; Xu and Zipser 2011; Jin et al. 2013; Guo et al. 2014, 2016; Cuo and Zhang 2017; Li 2017; Chen et al. 2018), but also to the diurnal variations of rainfall far to the east in China (Bao et al. 2011; Zhang et al. 2014a, b; Wang et al. 2018). Regional topography, however, only forces local circulations that affect the local features of diurnal variation (Barros and Lang 2003; Chen et al. 2013; Li et al. 2017). Yuan et al. (2014) found that the spatiotemporal features of rainfall events were highly correlated with elevation over North China and that the topography influenced the diurnally varying surface or low-level temperature, moisture, and wind fields. The locally enhanced convergence in the Dabie Mountains provides the strongest thermal and dynamic forced lifting, which initiates the earliest convective cells and leads to condensation and the formation of the morning precipitation peak (Fu et al. 2019). Li et al. (2019) revealed the influence of gauge elevation on the diurnal variation of rainfall: the proportion of rainfall frequency occurring during the early-morning period decreases with increasing elevation over the Qilian Mountains. Gan et al. (2019) proposed that the typical small-scale Mount Tai exhibits great differences in precipitation characteristics, with a large enhancement effect on rainfall relative to the surrounding areas; when precipitation over Mount Tai is significantly enhanced, the corresponding wind field tends to be dominated by stronger southwesterlies. These studies show that isolated mountainous topography plays a crucial role in the spatial and temporal variation of precipitation. However, the details of the spatial variation of precipitation over complex topography are not consistent, especially regarding possible links between mountain features and precipitation in different areas of complex terrain.
In recent years, with the availability of multi-year continuous hourly rainfall data, an increasing number of studies have focused on the diurnal cycles of rainfall characteristics in many regions around the world (Leahy and Kiely 2011; Deshpande et al. 2012; Hitchens et al. 2013; Stevenson and Schumacher 2014; Iwasaki 2012, 2015). Similar research on the diurnal variation of precipitation based on hourly rainfall data has also made great progress over mainland China (Yu et al. 2007a, b, 2014; Chen et al. 2015; Liang and Ding 2017; Li et al. 2017). Using a long-term dataset comprising measurements made at more than 2420 stations over the SCB, Zheng et al. (2016) found that the SCB experienced higher rainfall accumulations and a higher occurrence of short-duration heavy rainfall events compared with other sub-regions of China. Based on the same datasets, Luo et al. (2016) reported that extreme hourly rainfall over the SCB peaks in July. Furthermore, based on daily precipitation data from 524 observation stations, Zhang et al. (2019a) revealed that, on the regional scale, some differences existed in the changes of autumn rainfall between the eastern and western parts of West China. In addition, based on hourly rainfall data from 468 rain gauge stations, Chen et al. (2021) studied the spatial and temporal characteristics of abrupt heavy rainfall events (AHRE) over southwest China and found that the occurrence frequency of these AHRE exhibited large spatial variability among regions. Nevertheless, these previous studies relied on observations from a limited number of automatic meteorological stations and lacked precipitation data with higher spatial resolution, and their results show some regional differences in precipitation diurnal variations over China. Owing to sparse gauge networks and the large spatial and temporal variations of rainfall, obtaining accurate findings has been challenging, especially over complex terrain.
Located close to the eastern edge of the SCB in southwest China (Fig. 1a), Chongqing (105° 11′-110° 11′ E, 28° 10′-32° 13′ N) is a mountainous region in the middle and upper reaches of the Yangtze River with a humid subtropical monsoon climate significantly affected by the TP. Chongqing, known as the "mountain city", has special geographical conditions, with relatively flat terrain in the Sichuan Basin and mountains along the edge of the basin. Recently, considerable studies have been conducted to reveal the effects of orography on the diurnal variation of the nocturnal precipitation peak time over the SCB (Zhang et al. 2014a; Chen et al. 2017; Xue et al. 2018). In addition, the movement of precipitation over the SCB and adjacent regions is closely tied to multiple regional-scale mountain-plain solenoids because of the great contrast in terrain heights between the SCB and the surrounding mountain ranges (Qian et al. 2015). Zhang et al. (2019b) demonstrated that prominent diurnal inertial oscillations of the south-southwesterly low-level jet into the southeast side of the SCB play an important role in modulating the diurnal variation of precipitation over the SCB. Li et al. (2020a, b) found a prominent northeastward time delay of the precipitation peak over the SCB; the diurnal variation of the 850 hPa wind shows strong easterly wind deviations in the early evening, which favors the initiation of precipitation by orographic lifting of air. However, few studies have reported on the diurnal rainfall cycle in Chongqing, especially the relationship between rainfall and terrain. Using hourly precipitation data observed at 34 gauge stations in Chongqing, Chen et al. (2019) revealed the spatial and temporal characteristics of the precipitation. By selecting 16 representative stations in two types of terrain for analysis, they showed that the amplitude of diurnal precipitation at stations in higher terrain was smaller. These studies yielded somewhat significant results for this area. Nevertheless, with so few stations, the detailed characteristics of rainfall over the complex topography of Chongqing cannot be analyzed. Hence, more thorough research is needed on the diurnal variation of precipitation based on hourly precipitation observed at automatic meteorological stations and high-density regional meteorological stations.

Fig. 1 a The geographical location of Chongqing (yellow lines); the red line shows the Yangtze River. b The distribution of the 1686 rain gauge stations (colored dots); the red boxes mark the four study areas, and gray shading indicates topography (unit: m). c The number of stations (left y-axis) as a function of terrain height (x-axis; the 1600 m bin indicates the number of stations above 1600 m) and the proportions at different terrain heights (right y-axis). d Boxplots of station terrain height for the four study regions; boxes show the lower and upper quartiles, black lines inside the boxes are medians, blue dots are average terrain elevations, and whiskers show the minimum and maximum values. NW, SW, SE, and NE denote the northwestern, southwestern, southeastern, and northeastern study areas, respectively; the same abbreviations are used in the following figures.
In this study, based on high-density, quality-controlled hourly rain gauge data, the detailed spatiotemporal characteristics of rainfall amount, frequency, intensity, and duration in the different topographic areas of Chongqing are investigated. In addition, the relationships between the diurnal characteristics and the complex topography are revealed. The structure of this paper is organized as follows: Section 2 describes the study region, datasets, and analysis methods; Section 3 presents the detailed spatiotemporal distributions of rainfall, the detailed characteristics of the diurnal evolution, and the relations between early-morning rainfall and elevation; finally, conclusions and discussion are given in Section 4.
Description of the study area
Chongqing has some of the most typical complex topographic features of any inland area, lying in the transitional region between the second and third topographical steps of China (Fig. 1a). Western Chongqing extends into the SCB at low elevation, while eastern Chongqing rises gradually eastward, with high terrain spreading along the Wu Mountains (WM) and reaching the middle reaches of the Yangtze River. Southern Chongqing is adjacent to the Dalou Mountains (DLM) and the Wuling Mountains (WLM), and northern Chongqing is backed by the Daba Mountains (DBM). The topography inclines from the west toward the Yangtze River valley and can be divided into plains, hills, mid-height mountains, and tablelands. Mountains, hills, tablelands, and plains account for 75.9%, 17.0%, 3.57%, and 2.39% of the total land area, respectively (Chongqing Bureau of Geology and Minerals Exploration (CBGM) 2002). Figure 1c shows the number of stations at different terrain heights. The numbers rise rapidly from 100 to 250 m, with the maximum between 200 and 500 m (62.6% of all stations), and then decrease gradually from 500 to 1600 m. The proportions of stations below 500 m and below 1000 m are 64.7% and 93.0%, respectively.
Because of the complex terrain, and in order to illustrate the detailed features of rainfall and further establish links between mountain features and rainfall over different areas of Chongqing, we select four typical areas based on their topographical features, shown by the red boxes in Fig. 1b. The northwestern study area (NW) represents the complex topography of mixed basin and hills, and the southwestern study area (SW) is the mountainous terrain in southwest Chongqing adjacent to the northern DLM. The southeastern study area (SE) represents the mountainous terrain in southeast Chongqing, and the northeastern study area (NE) is the mountainous terrain in the northeast where the DBM lie. The numbers of stations in the NW, SW, SE, and NE are 149, 145, 111, and 141, respectively. The statistics of station heights in each area are shown in Fig. 1d. The average elevation increases gradually from 359.7 m in the NW to 693.8 m in the NE. Meanwhile, the lower quartile, median, and upper quartile of terrain height also increase, from 261, 312.3, and 416 m, respectively, in the NW to 394, 683.9, and 881 m, respectively, in the NE. In addition, for each area in Fig. 1d, the minimum height is relatively consistent, while the maximum is relatively high, exceeding 1000 m in the SW (1350 m), SE (1267 m), and NE (1723 m). This altitude difference is mainly due to differences in the variation of the underlying surface over complex terrain, especially in the SW and NE.
Station data and data processing
The hourly rain gauge observation data were obtained from the National Meteorological Information Centre (NMIC) of the China Meteorological Administration; they cover a total of 1686 surface automatic weather stations (Fig. 1b) and the entire warm season (May-September) from 2016 to 2020 in Chongqing. This dataset has undergone a series of strict quality-control checks, including an extreme-value check, an internal consistency check, and a time consistency check (CMA 2003). Information on this dataset can be obtained from http://www.cma.gov.cn/2011qxfw/2011qsjgx/. To minimize the impact of missing values on the analysis, stations were required to have missing hourly precipitation data amounting to less than 1% of the total records for the 5-year series, and every station has more than 17,370 h without missing or suspicious values. Although reconstruction is necessary to guarantee the completeness of the dataset and to raise statistical confidence, we find little difference between analyses of the reconstructed and unreconstructed data.
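To make the screening step concrete, a minimal sketch of the missing-data filter described above is given below, assuming the hourly records have been loaded into a pandas DataFrame; the column names and layout are illustrative assumptions, not the NMIC format.

```python
import pandas as pd

def screen_stations(df: pd.DataFrame, max_missing_frac: float = 0.01) -> pd.DataFrame:
    """Keep only stations whose hourly series has fewer than
    max_missing_frac missing values over the full record.
    Assumes columns 'station_id' and 'rain_mm' (NaN = missing hour)."""
    missing_frac = df.groupby("station_id")["rain_mm"].apply(lambda s: s.isna().mean())
    keep = missing_frac[missing_frac < max_missing_frac].index
    return df[df["station_id"].isin(keep)]
```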
Methods
In this study, following previous studies (Dai et al. 1999; Liang et al. 2004; Yu et al. 2007a, b; Zhou et al. 2008), the four rainfall features are defined as follows.
(1) Rainfall amount: the cumulative amount of measurable rainfall (rainfall rate ≥ 0.1 mm h⁻¹) divided by the number of non-missing hours during the study period:

Rainfall amount = $P_r / N_{nm}$,

where $P_r$ is the accumulated rainfall amount with measurable rainfall during the study period and $N_{nm}$ is the number of hours with no missing rainfall records.
(2) Rainfall frequency: the cumulative total hours with measurable rainfall (rainfall rate ≥ 0.1 mm h⁻¹) divided by the number of non-missing hours during the study period:

Rainfall frequency = $N_r / N_{nm}$,

where $N_r$ is the number of hours with measurable rainfall.

(3) Rainfall intensity: the cumulative rainfall divided by the number of rainy hours during the study period:

Rainfall intensity = $P_r / N_r$.
(4) Rainfall event: following previous definitions of precipitation events, we define a single rainfall event by its duration, allowing no intermittence or at most 1 h of intermittence. If a rainfall event has begun and the intermittence then lasts for 2 h, the rainfall after the intermittence is deemed to belong to a new event. The starting time of a rainfall event is the time at which measurable rainfall (≥ 0.1 mm/h) occurs after 2 or more hours without rainfall; the ending time is the time after which there is no rainfall for 2 or more hours following measurable rainfall. The number of rainfall events is the accumulated number of events during the whole warm-season precipitation period. The duration (rainfall amount) of an event is the number of hours (accumulated rainfall amount) from the starting time to the ending time, during which any intermittence is no longer than 1 h. The most frequent starting time of rainfall events is the time at which events most often begin, and the most frequent peaking and ending times are defined analogously.
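The event definition above is easy to express as a scanning algorithm. Below is a minimal sketch, assuming hourly rainfall is stored as a NumPy array with NaN marking missing hours; the function name and return convention are ours, not from the paper.

```python
import numpy as np

def segment_events(rain: np.ndarray, wet_thresh: float = 0.1) -> list[tuple[int, int]]:
    """Split an hourly rainfall series (mm/h) into events.

    Hours with rain >= wet_thresh are wet; a single dry hour inside an
    event is tolerated as intermittence, while 2 or more consecutive dry
    hours terminate the event, per the definition above.
    Returns inclusive (start, end) hour indices for each event."""
    wet = np.nan_to_num(rain) >= wet_thresh
    events, start, dry_run = [], None, 0
    for h, is_wet in enumerate(wet):
        if is_wet:
            if start is None:
                start = h            # new event begins at first wet hour
            last_wet, dry_run = h, 0
        elif start is not None:
            dry_run += 1
            if dry_run >= 2:         # 2+ dry hours end the event
                events.append((start, last_wet))
                start, dry_run = None, 0
    if start is not None:            # close an event running to the series end
        events.append((start, last_wet))
    return events
```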
For each type of rainfall event, let $R_a(h)$ represent the mean hourly rainfall at hour $h$. The normalized diurnal variation of precipitation, $D_a(h)$, is calculated as

$D_a(h) = R_a(h) / \bar{R}_a$, with $\bar{R}_a = \frac{1}{24}\sum_{h=1}^{24} R_a(h)$,

where $R_a(h)$ results from averaging the rainfall events with a specific duration; that is, each duration class is normalized by its daily mean. Since warm-season rainfall falls in the main flood season and accounts for more than 85% of the annual rainfall in Chongqing (Liu et al. 2012), this study focuses on the warm season.
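The three bulk metrics and the diurnal normalization follow directly from the definitions above. A sketch is given below, again assuming an hourly NumPy series with NaNs marking missing records; the function names are our own.

```python
import numpy as np

def rainfall_stats(rain: np.ndarray, wet_thresh: float = 0.1):
    """Rainfall amount, frequency, and intensity from an hourly series (mm/h).

    Amount and frequency are normalized by the number of non-missing
    hours (N_nm); intensity is normalized by the number of wet hours (N_r)."""
    valid = ~np.isnan(rain)
    wet = valid & (rain >= wet_thresh)
    p_r = np.nansum(np.where(wet, rain, 0.0))   # accumulated measurable rainfall
    n_nm, n_r = valid.sum(), wet.sum()
    amount = p_r / n_nm                          # mm/h per non-missing hour
    frequency = n_r / n_nm                       # fraction of hours with measurable rain
    intensity = p_r / n_r if n_r else np.nan     # mm/h per rainy hour
    return amount, frequency, intensity

def normalized_diurnal(hourly_mean: np.ndarray) -> np.ndarray:
    """D_a(h) = R_a(h) / mean_h R_a(h) for a 24-element diurnal composite."""
    return hourly_mean / hourly_mean.mean()
```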
Spatial characteristics of warm season rainfall in Chongqing
The spatial distributions of warm-season rainfall amount, frequency, and intensity in Chongqing are shown in Fig. 2; they differ distinctly among stations with different gauge elevations. As illustrated in Fig. 2a, the spatial distribution of rainfall amount is clearly inhomogeneous. The average rainfall amount at elevations below 500 m, between 500 and 1000 m, and above 1000 m is 0.21 mm/h, 0.23 mm/h, and 0.25 mm/h, respectively. Large rainfall amounts, i.e., those above 0.21 mm/h, occur in the northern sections of the DLM, the southern sections of the DBM, the southern sections of the WLM, and the mountainous areas of southeast Chongqing. In contrast, stations with small rainfall amounts are located in most of the central and western regions and in the eastern and southern sections of northeast Chongqing. It should not be overlooked that scattered stations with large rainfall amounts are located in the Huarong Mountains in the NW. The distribution pattern of frequency (Fig. 2b) is consistent with that of rainfall amount. The average rainfall frequency below 500 m, between 500 and 1000 m, and above 1000 m is 0.13, 0.14, and 0.17, respectively. The ratios of stations with rainfall amounts exceeding 0.21 mm/h (89%) and frequencies exceeding 0.13 (94%) above 1000 m are roughly double those below 500 m (44% and 46%); between 500 and 1000 m the ratios are 72% and 79%. In other words, rainfall amount and frequency over mountainous areas are clearly much higher than in the surrounding low-elevation areas.
The distribution of intensity (Fig. 2c) differs slightly from the previous two: the intensity below 500 m, between 500 and 1000 m, and above 1000 m is 1.62, 1.63, and 1.55 mm/h, respectively, and the proportions of stations with intensity exceeding 1.55 mm/h are 63%, 64%, and 45%, respectively. Large values appear in the northern sections of the DLM, the southern sections of the DBM, the mountainous areas in the southeast, and the hilly areas in the NW. Stations with large rainfall intensity are again scattered over the Huarong Mountains. Compared with the large values of rainfall amount and frequency, the distribution of large rainfall intensity values is not entirely coincident: they occur not at the tops of the DBM and WLM but on the two slope regions. The large intensity values extend southward to the south of the mountainous terrain, especially south of the DBM and the WLM, which indicates that the terrain has a significant impact on the characteristics of precipitation. Heavy rainfall is more likely to occur on the southern piedmont, namely the windward slope zone where the terrain interacts with the southerly wind, rather than at the tops (Houze 2012). The same result was found by Chen et al. (2019) in this area. The spatial correlation coefficient between rainfall amount and frequency (intensity), in regions where rainfall amount exceeds 0.23 mm/h and height ranges between 500 and 1000 m, is 0.73 (0.65). These coefficients indicate that most stations with large rainfall amounts also have high rainfall frequency.
In summary, the spatial distributions of rainfall amount, frequency, and intensity have obvious regional characteristics. Notably, the large values of rainfall amount, frequency, and intensity are located in areas of complex mountainous terrain, especially in the four study regions we selected. The peak time of the rainfall quantities reflects the main characteristic of the diurnal variation. To describe the phase of the peak time more clearly, we divided the 24 h of a day into four periods: night (2100-0100 LST), early morning (0200-1000 LST), noon (1100-1300 LST), and afternoon (1400-2000 LST). Figure 3 shows the spatial distributions of the hourly peak over 24 h for warm-season rainfall amount, frequency, and intensity. The prevailing early-morning peaks (0200-1000 LST) of rainfall amount appear over the western, central, most of the southeastern, and parts of the northeastern regions of Chongqing; these early-morning peaks account for up to 81.9% of all stations. The late-afternoon peaks (1400-2000 LST) mainly appear over the DBM and WLM ranges and account for 11.8% of all stations. In addition, the peak hours of rainfall in the western region (west of 107° E) occur clearly earlier than those in the eastern region (east of 108° E), an eastward delay that has been reported previously (Yu et al. 2007b).
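Classifying each station's peak hour into these four periods is a simple lookup; a sketch, with our own function name, is shown below.

```python
def diurnal_period(peak_hour_lst: int) -> str:
    """Map a peak hour (0-23 LST) to the four diurnal periods used here:
    night (2100-0100), early morning (0200-1000), noon (1100-1300),
    and afternoon (1400-2000)."""
    if peak_hour_lst in (21, 22, 23, 0, 1):
        return "night"
    if 2 <= peak_hour_lst <= 10:
        return "early morning"
    if 11 <= peak_hour_lst <= 13:
        return "noon"
    return "afternoon"  # 14-20 LST
```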
Similar to the rainfall amount, early-morning peaks play the dominant role for rainfall frequency. A total of 88.1% of the stations have early-morning peaks, distributed over the major part of the region. The rare late-afternoon peaks (8.6%) are scattered over the DBM and WLM ranges. It is also notable that the eastward-delayed diurnal phases occur mainly in the peak hours of rainfall frequency from west to east.
Compared with the patterns of rainfall amount and frequency, the pattern of rainfall intensity is clearly inhomogeneous: 47.0% of the stations have early-morning peaks, mostly distributed over the western and southeastern regions.
Overall, the above results demonstrate the dominance of early-morning peaks in determining the distinct diurnal features of warm-season rainfall and the characteristic eastward phase transition of nocturnal rainfall. Consistent with diurnal peaks showing that nighttime rain is evident over the SCB (Yu et al. 2007a; Bao et al. 2011; Qian et al. 2015; Zhang et al. 2019b; Li et al. 2020a, b), the same prominent nocturnal feature is found in the SW and NW, yet the diurnal peaks are not always consistent in the SE and NE. Western Chongqing, with low terrain height, lies in the eastern SCB, whereas eastern Chongqing, with high-altitude complex terrain, reaches the edge of the SCB. We further analyze the detailed characteristics of the four study regions in the following section.
The diurnal variation of precipitation in different areas
The general spatial features of hourly warm-season rainfall in Chongqing have been established above. To characterize the diurnal variations, Fig. 4 shows the standardized diurnal curves of warm-season rainfall amount, frequency, and intensity over the four study regions. The diurnal variations of rainfall amount have a bimodal structure, with a dominant early-morning peak at approximately 0700 LST (23 UTC) and a weaker secondary late-afternoon peak at approximately 1600 LST (08 UTC) (Fig. 4a). Early-morning rainfall amount accounts for 57.5%, 55.0%, 48.0%, and 44.2% of the total rainfall in the SW, NW, SE, and NE (Fig. 5a), respectively, and the corresponding percentages for late-afternoon rainfall are 19.7%, 21.4%, 23.7%, and 29.8% (Fig. 5b). Similar to the diurnal variation in the SCB, where nocturnal rainfall is evident (Qian et al. 2015; Zhang et al. 2019b), the rainfall amount here has bimodal peaks, with maximum values at the dominant early-morning peak of 0.396, 0.404, 0.388, and 0.362 in the SW, NW, SE, and NE, respectively, higher than the late-afternoon values of 0.183, 0.204, 0.251, and 0.307.
Different from the bimodal diurnal peaks of rainfall amount, rainfall frequency has a single peak at around 0700 LST (23 UTC) in the early morning in the SW, NW, and SE, accounting for 50.3%, 48.1%, and 43.4% of the total, respectively (Fig. 5b), whereas the NE shows bimodal peaks, as for rainfall amount. The maximum peak values of rainfall frequency in the SW, NW, SE, and NE are 0.197, 0.212, 0.191, and 0.167, respectively. Compared with rainfall amount and frequency, the diurnal variation of rainfall intensity is less evident. There is a bimodal peak in the SW, in contrast to the multi-peak structure in the NW, SE, and NE, with peak values at around 0600 LST (22 UTC) in the early morning. The detailed diurnal variation of rainfall intensity in each region is inhomogeneous.
For each region, the diurnal variation of rainfall amount can be attributed to those of both rainfall frequency and rainfall intensity. The dominant early-morning peak comes mainly from the rainfall frequency, while the weaker secondary late-afternoon peak is generated mainly by the rainfall intensity. Especially in the NE, such rainfall is likely to occur as local convective precipitation. As noted by Liao et al. (2007), owing to the diurnal variation of solar heating, the lower atmosphere tends to reach an unstable state in the afternoon, so that even a small disturbance can trigger local convective rainfall. Yu et al. (2013) also indicated that this arises from the asymmetry of precipitation processes and the evolution of convective clouds. Our findings differ from previous studies based on hourly data from the national automatic weather stations only, which did not include the more numerous regional automatic weather stations.
In conclusion, it should be noted that the timing of the maximum peak is relatively consistent across the diurnal variations of rainfall amount, intensity, and frequency, which generally reach their maxima in the early morning. In addition, the detailed diurnal cycles of rainfall amount and intensity are the same in the SW and NW, and broadly consistent between the SE and NE.
From the above analysis, the dominant early-morning peak is evident. Some researchers have found that the amount of precipitation increases with altitude in mountains (Giorgi et al. 1997; Liu et al. 2011; Guo et al. 2016). To quantitatively assess the relationship between early-morning rainfall and gauge elevation over the four study regions, we examine the correlation between the proportion of early-morning rainfall amount (frequency) relative to the total daily rainfall amount (frequency) and the gauge elevation (Fig. 6). The linear fits show that the proportion of rainfall amount (frequency) occurring during the early-morning period is negatively correlated with elevation in all four regions. The linear correlation coefficients (R) are −0.934 (−0.934), −0.880 (−0.886), −0.902 (−0.906), and −0.814 (−0.814), respectively, all passing the significance test at the 99% confidence level, indicating that the proportions of early-morning rainfall amount (frequency) at higher elevations are smaller than those at lower elevations. This suggests that the altitude effect on early-morning rainfall is significant. The dominance of early-morning rainfall clearly diminishes as elevation increases, indicating that more of the rainfall in the higher mountainous regions occurs in other periods.
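The elevation regressions in Fig. 6 amount to a per-region linear fit. A sketch using scipy follows, assuming station elevations and early-morning proportions have already been computed as equal-length arrays; the function name is our own.

```python
import numpy as np
from scipy import stats

def elevation_relationship(elev_m: np.ndarray, em_fraction: np.ndarray):
    """Linear fit and Pearson r between gauge elevation and the station's
    early-morning (0200-1000 LST) rainfall proportion; a negative r is
    expected from the results above."""
    slope, intercept, r, p, stderr = stats.linregress(elev_m, em_fraction)
    return slope, intercept, r, p
```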
The detailed characteristics of early-morning rainfall in four regions
The previous results demonstrate the dominance of early-morning rainfall in determining the distinct diurnal features of warm-season rainfall. To reveal the detailed spatial distribution of early-morning rainfall features in the four study regions, the regions with the clearest signals were selected, and each region was divided into four sub-regions, considering both the direction in which the mountains extend and the variation of terrain height from lower to higher elevation. In the NW, the mountains stretch from southwest to northeast while their heights gradually increase. In the SW, the altitude of the mountains gradually increases from west to east. In the SE, the mountains extend south-north with terrain height gradually increasing. In the NE, the altitude of the mountains increases from south to north. The spatial patterns of rainfall amount during the early morning (Fig. 7) are very similar to those of the warm-season rainfall amount (Fig. 2). Notably, the rainfall amount increases approximately as the terrain height increases in the four study regions. In every region, the stations with relatively small rainfall amounts are located in the first sub-region, while the stations with large values concentrate in the fourth sub-region. In the NW and SE, the rainfall amount gradually increases along the direction of the mountains, especially on their southern sides. In the SW and NE, the rainfall amount gradually increases from the foot to the top of the mountains.
For a more detailed investigation of the rainfall variations in the four study regions, Fig. 8 presents the variation of rainfall amount for the four sub-regions of each study region as a function of elevation. Along the mountains from southwest to northeast, the average elevations of the sub-region stations increase from 346.1 to 536.4 m in the NW (Fig. 8a), with a slight decrease in sub-region 2. Meanwhile, the mean, median, upper quartile, and maximum values of rainfall amount in the four sub-regions also increase, from 0.315, 0.313, 0.329, and 0.363 mm/h, respectively, in the first sub-region to 0.378, 0.380, 0.397, and 0.470 mm/h, respectively, in the fourth. In the SW (Fig. 8b), the average terrain heights gradually increase from 405.5 to 666.5 m, and the mean (0.344 mm/h) and median (0.345 mm/h) values in the westernmost sub-region increase to 0.383 and 0.376 mm/h in the easternmost sub-region. The mean and median values in the second sub-region are slightly smaller than those in the first, influenced by stations on low-altitude terrain, but this slight deviation does not affect the overall change with terrain height. In the SE, the mean, median, and minimum rainfall amounts in the southernmost sub-region are 0.316, 0.319, and 0.266 mm/h, respectively, whereas those in the northernmost sub-region reach 0.427, 0.436, and 0.378 mm/h, respectively; practically all statistics of rainfall amount increase along the mountain extension (Fig. 8c), while the average elevations of the sub-region stations gradually increase from 431.7 to 719.9 m. Figure 8d presents the same obvious feature of consistent variation. Overall, the boxplots of station rainfall amount in the four study regions mostly show consistent variation as the terrain heights increase. These results show that terrain elevation plays an important role in shaping the regional distribution of rainfall amount. The NW comprises the complex topography of mixed basin and hills, and the SW represents the mountainous terrain of southwest Chongqing adjacent to the northern DLM; the underlying surface is mainly hilly and the average elevations of the sub-region stations are relatively low in sub-regions 1 and 2 of the NW and SW, so the rainfall amounts of these stations are relatively low. The underlying surface is mainly mountainous and the average station elevations are relatively high in sub-regions 3 and 4, so the rainfall amounts there are relatively high. In the SE, the underlying surface is mainly mountainous and the average elevations increase sharply from sub-region 2 to 3, so the rainfall amount increases sharply. Compared with the other three study areas, the average elevations in the NE increase gradually from sub-region 1 to 4 with a relatively consistent gradient, so the rainfall amount increases gradually. Similar to rainfall amount, rainfall frequency also shows distinct spatial variations in the four study regions (Fig. 9). The rainfall frequency likewise increases gradually with terrain height: the stations with small values are located in the first sub-region, and those with large values are concentrated in the fourth sub-region.
The rainfall frequency shows a clearer increase along the extending direction of the mountains in the NW and SE, and a similar increasing trend with terrain height from the low inlands to the high mountain slopes in the SW and NE. The values at the mountain tops are also larger than those at the foot at the same longitudes or latitudes. The detailed rainfall frequency changes correspondingly with topographic elevation, as shown in Fig. 10. In the southwest-northeast direction in the NW (Fig. 10a), the rainfall exhibits consistent variation with elevation in terms of both amount and frequency: the mean rainfall frequency is 0.174 in the first sub-region and reaches 0.191 in the fourth. In the SW (Fig. 10b), from west to east along the mountains, the mean terrain heights are 405.5, 418.9, and 453.9 m, increasing at intervals of approximately 50 m, but the elevation rises to 666.5 m in sub-region 4, more than 200 m higher than in sub-regions 1-3. Reflecting this terrain variation, the mean and median stay relatively consistent in sub-regions 1-3 (0.187, 0.188) but clearly increase in the fourth sub-region (0.207, 0.203). In the SE (Fig. 10c), the mean (0.169) and median (0.173) values in the southernmost sub-region increase to 0.210 and 0.210, respectively, in the northernmost sub-region. The mean and median values are slightly smaller in the second sub-region than in the first, but this slight difference does not affect the overall tendency. In the NE, rainfall frequency increases concurrently with elevation over sub-regions 1-4: the mean, median, upper quartile, and maximum values increase from 0.134, 0.135, 0.139, and 0.151, respectively, in the first sub-region to 0.183, 0.178, 0.193, and 0.260, respectively, in the fourth. The boxplots of station rainfall frequency in the four study regions mostly show variations consistent with rainfall amount: the values are relatively low in sub-regions 1 and 2 of the NW and SW but high in sub-regions 3 and 4, they increase sharply from sub-region 2 to 3 in the SE, and they increase gradually from sub-region 1 to 4 in the NE.
From the above discussion of the spatial variations of rainfall over the four focus regions, the results show that rainfall amount and frequency at higher elevations are larger than at lower elevations and that both increase significantly where the terrain height rises sharply, indicating that mountainous terrain has a remarkable enhancing effect on precipitation in the four study regions.
The diurnal variation of rainfall events in different areas
Fig. 9 The same as Fig. 7, but for rainfall frequency.

To explore the relationships between the occurrence time and duration of rainfall events, the rainfall amount and frequency decomposed by duration and diurnal phase for the four study regions are analyzed, normalized by the daily mean of each duration class. In the NW (Fig. 11a), rainfall events lasting less than 6 h reach their peaks at roughly 0400 LST in the early morning, while events lasting more than 6 h tend to peak between 0200 and 0400 LST. It is clear, however, that the amount and frequency of rainfall events lasting longer than 6 h are higher than those of events lasting less than 6 h, and this kind of rainfall occurs mostly in the early morning.
In the SW (Fig. 11b), rainfall events with durations of less than 6 h reach their peak at roughly 2000 LST in the afternoon, with a less pronounced second peak at 0400 LST in the early morning; events lasting more than 6 h tend to peak between 0200 and 0400 LST. Similar results are found in the SE (Fig. 11c), although events lasting less than 6 h reach their peak at roughly 0000 LST, around midnight. In the NE (Fig. 11d), rainfall events lasting both less and more than 6 h mainly occur at night. From the above analysis, short-duration rainfall events (1-6 h) tend to start between afternoon and night, while long-duration events (> 6 h) tend to start at night. Nocturnal rainfall events tend to begin at similar times, but long-duration rainfall ends later, showing that long-duration events make a larger contribution to the total rainfall amount. Duration is closely related to the physical mechanisms of precipitation. Yu et al. (2007a) revealed that the diurnal cycle of long-duration precipitation exhibits an early-morning maximum, while short-duration precipitation peaks from afternoon to evening. The late-afternoon maximum can be explained by surface solar heating, which produces maximum low-level atmospheric instability and thus moist convection in the afternoon. The nocturnal maximum may result from the diurnal variation of local circulations forced by the complex terrain.

Fig. 10 The same as Fig. 8, but for rainfall frequency.

Similar to Fig. 5, the proportions of the two main periods for rainfall events of different durations are shown for the four regions (Fig. 12). When the duration is between 1 and 3 h, the rainfall frequency proportion in the early-morning period (25.6%) is greater than that in the late-afternoon period (17.1%) in the NW (Fig. 12a), but the rainfall amount proportion in the early morning (4.5%) is slightly smaller than that in the late afternoon (5.8%). The rainfall frequency proportions for durations of 4-6 h (10.6%), 7-9 h (5.2%), 10-12 h (2.6%), and 13 h or more (3.1%) in the early-morning period are roughly triple those in the late-afternoon period (3.2%, 1.0%, 0.6%, and 1.9%, respectively). The rainfall amount proportions for durations of 4-6 h (9.9%), 7-9 h (9.4%), 10-12 h (7.2%), and 13 h or more (16.6%) in the early-morning period are roughly double those in the late-afternoon period (4.3%, 1.4%, 1.2%, and 8.9%, respectively).
Similar results are found in the other three regions. More than 35% of short-duration (1-6 h) rainfall events occur during the early-morning period. For durations of 7 h or more, the frequency proportion is slightly greater than 10%, while such events account for 33.2% of the total rainfall.
Although the diurnal maxima of rainfall events in the four regions all appear in the early morning, the detailed diurnal cycles differ. We further analyze the peak-time characteristics of rainfall events of different durations below.
The peak of long-duration precipitation usually occurs in the morning and accounts for more than 60% of precipitation in central and eastern China, while short-duration precipitation mainly peaks in the afternoon (Yu et al. 2007a). The timing of the rainfall maximum is important for rainfall events (Yu et al. 2013).

Fig. 12 The proportions of rainfall amount (black rectangles; %) and frequency (bars; %) for events of different durations relative to the daily totals in the (a) NW, (b) SW, (c) SE, and (d) NE. Blue bars represent the early-morning period and red bars the late-afternoon period.

To quantitatively assess the relationship between gauge elevation and the early-morning rainfall maximum, Fig. 13 shows the relationship between gauge elevation and the proportion of the rainfall maximum frequency during the early-morning period relative to the total daily rainfall frequency. There are clearly negative correlations between this proportion and elevation in all four regions; the linear correlation coefficients (R) are −0.932, −0.878, −0.914, and −0.803, respectively, all significant above the 99% confidence level. This means that the proportions of the early-morning rainfall maximum frequency at higher elevations are smaller than those at lower elevations, suggesting that the altitude effect on the early-morning rainfall maximum frequency is significant. The early-morning maximum diminishes significantly with elevation, indicating that the higher mountainous areas receive more of their rainfall in other periods than the lower elevations do.
Conclusions and discussion
Based on hourly rain gauge data from high-density stations in Chongqing during the warm season (May to September) from 2016 to 2020, the overall features and regional differences in the diurnal variations of rainfall affected by complex terrain are identified, and the detailed influences of gauge elevation on the diurnal variations of rainfall are also investigated. The main conclusions can be summarized as follows.

Fig. 13 The same as Fig. 6, but for the proportion of early-morning rainfall maximum frequency (blue) and amount (red) relative to daily rainfall events in the (a) NW, (b) SW, (c) SE, and (d) NE regions.

(1) The spatial features of rainfall amount, frequency, and intensity have obvious regional characteristics under the influence of the complex terrain in Chongqing. Stations with larger rainfall amount, frequency, and intensity are located in the higher, more complex mountainous areas. The detailed characteristics of the four study regions show that rainfall amount and frequency at higher elevations are larger than at lower elevations, and that they increase significantly, especially where the terrain height rises sharply along the direction in which the mountains extend.
(2) The diurnal characteristics of rainfall amount, frequency, and intensity also have obvious regional features. The timing of the maximum peak is relatively consistent, with maxima generally reached in the early morning. Early-morning peaks play a dominant role for rainfall amount and frequency, accounting for 81.9% and 88.1% of all stations, respectively, while the patterns of rainfall intensity are clearly inhomogeneous. The rainfall amount has a bimodal structure with a dominant early-morning peak at approximately 0700 LST (23 UTC) and a weaker secondary late-afternoon peak at approximately 1600 LST (08 UTC). Rainfall frequency has a single-peak diurnal character in the SW, NW, and SE, and a bimodal one in the NE.

(3) Gauge elevation has a significant impact on the diurnal variation of early-morning rainfall. As elevation decreases, the proportion of rainfall amount (frequency) occurring during the early-morning period increases in the four study regions. In other words, the early-morning peak dominates in low-elevation areas, while the high mountainous areas receive more of their rainfall in other periods than the lower elevations do. The same pattern is found for the proportion of the early-morning rainfall maximum frequency relative to the total rainfall events.

(4) Rainfall events of different durations have distinct diurnal variations and phase features. Short-duration rainfall tends to start between afternoon and night, while long-duration rainfall tends to start at night. Nocturnal rainfall events tend to begin at similar times, while long-duration rainfall ends later. Consequently, long-duration rainfall events make a greater contribution to the total rainfall amount.
In this study, our results indicate that the diurnal variation of rainfall in Chongqing is significantly affected by topography and demonstrate the dominance of early-morning rainfall in determining the distinct diurnal features of warm-season rainfall. Prevailing nocturnal precipitation over the eastern Tibetan Plateau and its eastern periphery has been discussed previously (Bai et al. 2008; Chen et al. 2009; Huang et al. 2010; Guo et al. 2016), but the underlying mechanisms remain unclear. Jin et al. (2013) suggested that the early-night precipitation peak over the western SCB was largely caused by strong ascending motion over the TP and its eastern lee side, while multiple coexisting factors contributed to the late-night peak over the central and eastern SCB. The mountain-plain solenoid circulation, driven by the inhomogeneous diabatic heating associated with topographic forcing, contributes to the nocturnal precipitation over the SCB (Zhang et al. 2014b; Qian et al. 2015). In the early evening, anomalous easterly flow moves toward the TP and causes low-level convergence over the western SCB, resulting in nocturnal precipitation over the SCB (Sun and Zhang 2012; Chen et al. 2017; Zhang et al. 2019a). Prominent diurnal inertial oscillations of the south-southwesterly low-level jet into the southeast side of the SCB play an important role in modulating the diurnal variation of precipitation over the SCB (Zhang et al. 2019b). Qian et al. (2015) found that precipitation over the SCB propagates northeastward in the early night and then decays. As mentioned before, western Chongqing extends into the SCB, where the diurnal variation of rainfall matches that of the SCB, while central and eastern Chongqing lie in the mountainous area surrounding the basin and rise gradually eastward with high terrain spreading along the WM. Chen et al. (2019) investigated the intervals of precipitation associated with the diurnal distribution of lightning and revealed that the enhancement of precipitation in the mountainous area was mainly caused by short-duration strong rainfall resulting from convection. However, these studies were limited to station observations, and the limited data length, inhomogeneous spatial distribution of stations, and scarce rainfall records in mountainous areas cannot fully reveal the interactions between local topography and mesoscale processes in the generation of heavy rainfall.
It is noteworthy that there are obvious regional characteristics in the diurnal variations of rainfall. This study should be helpful for further understanding the precipitation characteristics of Chongqing and for research on precipitation over the complex terrain of southwest China. The diurnal cycles of rainfall amount and intensity are the same in the SW and NW, and broadly consistent between the SE and NE. The SW and NW lie in western Chongqing and extend into the SCB at low elevation, while eastern Chongqing, where the SE and NE are located, rises gradually with high terrain; elevation is thus an obvious distinguishing topographical factor. The rainfall amount and frequency increase significantly as the terrain height rises sharply, especially in the NE, where the orographic slope is oriented north-south and the windward slope zone interacts with the southerly wind. The detailed rainfall characteristics also differ among the sub-regions of the four study regions. These differences imply that, apart from elevation, other topography-related factors such as orographic slope and orientation should also be considered when investigating the influence of topography on the diurnal variations of rainfall. Owing to the few, sparsely distributed stations in mountainous areas, it is difficult to obtain detailed circulation characteristics and to truly understand the physical processes behind these differences over complex terrain. Therefore, further studies are needed to validate the results using other high-resolution data, including high-resolution satellite data, and high-resolution numerical experiments should be designed and carried out in future work.
Deep-learning based measurement of planetary radial velocities in the presence of stellar variability
We present a deep-learning based approach for measuring small planetary radial velocities in the presence of stellar variability. We use neural networks to reduce stellar RV jitter in three years of HARPS-N sun-as-a-star spectra. We develop and compare dimensionality-reduction and data splitting methods, as well as various neural network architectures including single line CNNs, an ensemble of single line CNNs, and a multi-line CNN. We inject planet-like RVs into the spectra and use the network to recover them. We find that the multi-line CNN is able to recover planets with 0.2 m/s semi-amplitude, 50 day period, with 8.8% error in the amplitude and 0.7% in the period. This approach shows promise for mitigating stellar RV variability and enabling the detection of small planetary RVs with unprecedented precision.
Stellar noise is one of the most problematic noise sources in Extreme Precision Radial Velocity (EPRV) measurements (e.g. Fischer et al. 2016; National Academies of Sciences 2018). Broadly speaking, stellar noise in EPRV measurements can be mitigated either in the time domain (FF' or a GP) or in the wavelength domain (modeling the flux or some function of the flux). Time-domain methods use activity proxies such as photometry (original FF' method, Aigrain et al. 2012), unsigned magnetic flux (Haywood et al. 2022), or activity indicators derived from the spectra, and often use GPs to model the RVs and stellar noise simultaneously (Rajpaul et al. 2015; Jones et al. 2017; Gilbertson et al. 2020). Wavelength-domain methods can be used to generate activity indicators from spectra (e.g. Wise et al. 2019; Ning et al. 2019; Siegel et al. 2022), or to measure or correct RVs via clever modeling of the spectra or cross-correlation function (CCF) (e.g. Dumusque 2018; Rajpaul et al. 2020; Collier Cameron et al. 2021; de Beurs et al. 2022; Zhao et al. 2022; Cretignier et al. 2022).
Here we present a new approach for modeling the spectra in the wavelength domain where we use Deep Learning (DL) based neural networks to measure small planetary RVs in the presence of stellar noise. de Beurs et al. (2022) show that the CCF has enough information for a neural network to reduce stellar RV jitter in three years of HARPS-N sun-as-a-star spectra down from 1.47 m/s to 0.78 m/s. To push towards more precise corrections, we will leverage the differences between spectral lines' responses to stellar activity (e.g. Wise et al. 2019) by using spectra instead of CCFs as our dataset. The unprecedented Signal-to-Noise Ratio (SNR) and cadence of sun-as-a-star spectra allow us to evaluate the effectiveness and limitations of neural networks at separating stellar and planet-induced RVs in the wavelength domain at sub-m/s precision, and determine their applicability to the EPRV community's goal of mitigating stellar RV variability.
MOTIVATION AND OBJECTIVES
Neural networks are able to capture the complex nonlinear relationships between the spectral signatures of different kinds of stellar variability and RVs that are challenging for traditional EPRV approaches to capture. Hence, our choice to develop DL pipelines and to explore their applicability to separating planetary and stellar activity RVs stems from their perceived potential to push the limits of stellar activity correction in EPRV measurements.
The primary objective of this paper is to describe the deep learning-based pipeline we developed to measure planetary RVs in spectra.
In Section 3 we describe the data sources; Section 4 covers data preparation and preprocessing steps; Section 5 describes the different machine learning approaches we developed; Section 6 presents the results of those approaches; and Section 7 summarizes our work and discusses future areas for improvement.
DATA SOURCES
The primary dataset we use for this work is the HARPS-N solar spectra. HARPS-N is a high-resolution RV spectrograph with continuous wavelength coverage from 380 to 690 nm, located on the island of La Palma, Canary Islands, Spain. In September 2020, the HARPS-N team released their first public dataset: 34550 disk-integrated spectra of the Sun taken from 2015 to 2018. This dataset includes significant numbers of sunspots, faculae, and plage, and a higher-than-average level of convective blueshift variation due to the solar magnetic cycle maximum and minimum that occurred in 2014 and 2019, respectively. The solar spectra have 5-minute integration times to average down solar p-mode oscillations and have average SNRs of approximately 350. The data are obtained from the University of Geneva Data & Analysis Center for Exoplanets website (dace.unige.ch).
This dataset has or is expected to have stellar activity signals from pulsations, granulation, and magnetic activity in the form of spots, faculae, plage, and convective blueshift variations.
DATA PREPARATION
To evaluate the ability of DL-based neural networks to separate stellar and planetary RVs, we utilize end-to-end injection and recovery of planet-like Doppler signals. We start with extracted 2D spectra and inject the planetary signals by adding them to the heliocentric-frame correction RVs as early as possible in the pipeline; this way, when we interpolate the spectra onto a common wavelength grid, the planet-like signals are already included. The 2D spectra are normalized before interpolating onto a common grid. The steps are summarized below.
Pre-processing of HARPS Spectra
We begin our data processing with extracted 2-D spectra (in HARPS filenames, '.e2ds' or '.s2d' files). The first steps we perform are order-by-order normalization, RV shifting, and interpolation onto a common wavelength grid. The spectra are normalized using a Julia implementation of the RASSINE method in the public NeidSolarScripts.jl package. Normalization is an important step that ensures the ML approach does not simply focus on the highest-flux pixels because they have the highest variations (as is true for photon noise). It has the drawback of losing information about the percent variation expected in each pixel due to photon noise; however, this information loss is low due to the extremely high SNR of the solar spectra.
After normalization, we Doppler shift the normalized spectra (henceforth referred to simply as spectra) to remove barycentric motion and interpolate all of the spectra onto a single wavelength grid (taken from one observation) using a sinc kernel, which prevents the introduction of noise due to intra-pixel sensitivity. Interpolating onto the original pixel grid preserves the spectrum's pixel sizes, limiting the interpolation uncertainty that would be caused by changing the wavelength grid spacing. On the HARPS-ACB data, the RVs measured by the mask CCF method before and after sinc interpolation have an RMSE of 11 cm/s introduced by the interpolation, significantly lower than linear or cubic-spline interpolation, and equal in RMSE to sampling from a Gaussian process with a Matérn 5/2 kernel in this before-and-after test. Thanks to the identical wavelength grid spacing of HARPS and HARPS-N, we can interpolate spectra from both instruments onto a common wavelength grid without changing bin sizes. After interpolation, we combine the orders into a 1-D spectrum by removing the overlapping order edges, discarding the overlapping pixels farthest from an order center, as these have the lowest SNR. Finally, we remove data below 420 nm due to relatively high blending and photon noise, and above 690 nm due to oxygen telluric lines.
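For illustration, a minimal sketch of Whittaker-Shannon (sinc) resampling onto a common grid is shown below; it truncates the kernel for speed and assumes uniform input spacing, so it is a simplified stand-in for the pipeline's actual implementation rather than a reproduction of it.

```python
import numpy as np

def sinc_interpolate(wl_in: np.ndarray, flux_in: np.ndarray,
                     wl_out: np.ndarray, half_width: int = 20) -> np.ndarray:
    """Resample flux_in (sampled on the uniform grid wl_in) onto wl_out
    using a truncated sinc kernel: f(x) = sum_k f_k sinc((x - x_k)/dx)."""
    dx = wl_in[1] - wl_in[0]          # assumes uniform input spacing
    flux_out = np.zeros_like(wl_out)
    for i, x in enumerate(wl_out):
        k = int(round((x - wl_in[0]) / dx))          # nearest input sample
        lo = max(k - half_width, 0)
        hi = min(k + half_width + 1, len(wl_in))
        # np.sinc is the normalized sinc, sin(pi u)/(pi u)
        flux_out[i] = np.sum(flux_in[lo:hi] * np.sinc((x - wl_in[lo:hi]) / dx))
    return flux_out
```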
During the Doppler-shifting step that removes barycentric motion, we inject a planetary signal. These planet-induced RVs become the target values for the neural network to predict. In the case of solar spectra, the barycentric corrections remove the RVs due to all significant celestial bodies, so we can be sure that the injected planetary signals are the only center-of-mass RVs in the spectra and that the remaining RV signals are due to stellar jitter. This makes for an ideal test dataset for this approach, which proposes to use wavelength-domain information to distinguish between stellar and planet-induced RVs.
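A sketch of the injection step follows, using the non-relativistic Doppler relation lambda' = lambda(1 + v/c) and any resampler (for example, the sinc sketch above); the function names are our own, not the pipeline's.

```python
import numpy as np

C_MPS = 299_792_458.0  # speed of light, m/s

def inject_rv(wl: np.ndarray, flux: np.ndarray, rv_mps: float,
              resample) -> np.ndarray:
    """Shift a rest-frame spectrum by a planet-like RV and resample it
    back onto the original wavelength grid.

    resample(wl_shifted, flux, wl_target) is any interpolator with the
    signature of sinc_interpolate above."""
    wl_shifted = wl * (1.0 + rv_mps / C_MPS)   # non-relativistic Doppler shift
    return resample(wl_shifted, flux, wl)
```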
Injecting Planet-like RVs into Spectra
Building off the preprocessing steps applied to the HARPS spectra, we create multiple datasets where the injected RV signals are selected to support distinct steps necessary to develop and evaluate the neural network.
Random Uniform RVs
To facilitate training the neural network, we developed datasets composed of random uniform RVs. Given a single solar spectrum, a random RV was drawn uniformly from the range [-1000, 1000] m/s, and the spectrum was shifted by that RV (the same RV injection described in the Doppler-shifting step in Section 4.1). This process was performed 60 times for each solar spectrum in HARPS-N, resulting in a dataset of 2,073,000 spectra. We refer to this dataset as the Random Uniform 1000 m/s RV Spectra.
The Random Uniform 1000 m/s RV Spectra are intended to prevent the neural network from overfitting by exposing the network to the same pattern of spectral noise (by using the same spectrum) multiple times, but each time with a different injected planetary RV target. Creating a dataset where each spectrum is considered in isolation when injecting the planet-like RV is possible because the neural network is not trained with data structured in a temporal manner.
Planet-like RVs
The overarching goal of this work is to develop a neural network that can accurately output activity-corrected RVs given an input spectrum. To this end, we inject sinusoidal planet-like RV signals into the HARPS-N spectra to create a series of planet test cases. The planet test cases are aimed at determining a trained neural network's sensitivity to two parameters: RV amplitude and period. The set of planets focused on testing amplitude have a fixed period of 50 days while the amplitude ranges from 0.1 m/s to 1.0 m/s, in increments of 0.1 m/s. The set of planets focused on testing period have a fixed amplitude of 1.0 m/s while the period ranges from 10 days to 250 days, in varying increments of 15, 25, and 50 days. A total of 20 planet-like RV datasets were created, 10 for each of the period and amplitude test cases (Figure 1).
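For concreteness, a circular-orbit signal generator consistent with these grids might look as follows. The phase convention and the exact period grid are assumptions on our part; only the ranges and increments above are stated in the text.

```python
import numpy as np

def planet_rv(t_days, k_ms, p_days, phase=0.0):
    """Circular-orbit RV signal: RV(t) = K sin(2*pi*t/P + phase).
    The zero phase is an assumed convention."""
    return k_ms * np.sin(2.0 * np.pi * np.asarray(t_days) / p_days + phase)

# Amplitude test cases: K = 0.1..1.0 m/s at a fixed 50 d period.
amplitude_cases = [(k, 50.0) for k in np.round(np.arange(0.1, 1.01, 0.1), 1)]
# One 10-case period grid consistent with the stated 15/25/50 d increments.
period_cases = [(1.0, p) for p in (10, 25, 50, 75, 100, 125, 150, 175, 200, 250)]
```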
Selecting Spectral Lines
We utilize a visible-light spectral line list of 4570 lines to extract lines from the HARPS-N spectra, where the window around each line spans 15 pixels. While neural networks are capable of handling the full spectrum as input, there are some disadvantages to this approach. Input dimensionality will be much larger, which, in most cases, will correspond to a larger number of trainable parameters and increase the time to train the network. Our primary goal was to determine the applicability of neural networks in measuring RVs; as such, enabling reasonable training times to allow quick iterations was prioritized.

Figure 1. Full planet-like RV test cases used for evaluating the neural network's ability to recover planet-like RVs. Purple points correspond to planets testing shorter and longer period orbits (keeping amplitude fixed at 1 m/s), while red points correspond to planets testing for amplitude (keeping period fixed at 50 days).
Train, Validation, and Test Data Splits
Data are split into training, validation, and test partitions, roughly consisting of 80%, 10%, and 10% of the overall number of records, respectively. For the Random Uniform 1000 m/s RV Spectra, the train, validation, and test splits are composed of 1657680, 207180, and 207300 spectra, respectively. The validation split is consistently referenced during the training process to ensure the network is not overfitting. It further acts as a way to measure iterative changes made to the network design and their effect on the network's performance. The held-out test data is only used for reporting the network's performance and has no influence on the network development.
Each split is randomly sampled by referencing the unique timestamps associated with the HARPS-N spectra. This was done intentionally so that all spectra with a random RV shift applied to the same underlying spectrum can be collectively assigned to either the train, validation, or test split. Sampling the unique timestamps to create these splits also allowed us to generate the same splits based on timestamps for both the random RVs and the planet-like RVs, for training the neural network and evaluating the network's planet detection capabilities, respectively.
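A sketch of this timestamp-grouped splitting, with hypothetical helper names, is shown below; it guarantees that all 60 RV-shifted copies of a given spectrum land in the same split.

```python
import numpy as np

def split_by_timestamp(timestamps, seed=0, fracs=(0.8, 0.1, 0.1)):
    """Assign each unique observation timestamp to a split, then map the
    per-record timestamps back to row indices (sketch; seed is assumed)."""
    rng = np.random.default_rng(seed)
    uniq = rng.permutation(np.unique(timestamps))
    n_tr = int(fracs[0] * len(uniq))
    n_va = int(fracs[1] * len(uniq))
    groups = {"train": set(uniq[:n_tr]),
              "val": set(uniq[n_tr:n_tr + n_va]),
              "test": set(uniq[n_tr + n_va:])}
    return {name: np.flatnonzero([t in members for t in timestamps])
            for name, members in groups.items()}
```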
Feature Scaling
Scaling of the input spectra and the target output RVs is performed to prevent large error gradient values and attain faster convergence in a gradient-based learning process, as is the case with neural networks. Spectra were zero-centered by subtracting the mean flux from all spectra, while the RVs were min-max scaled remapping the values to a range of [0, 1]. These specific strategies were selected by experimenting with different scaling approaches and selecting the scalers that resulted in the best performance. Scalers were fit using the training data and applied to rescale the validation and test data.
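A minimal sketch of the two scalers, fit on training data only, follows. Whether the flux mean is global or per-pixel is not specified above, so a global mean is assumed here.

```python
import numpy as np

class Scalers:
    """Fit on the training split; reuse on validation/test (sketch)."""
    def fit(self, spectra, rvs):
        self.flux_mean = spectra.mean()                 # assumed global mean
        self.rv_min, self.rv_max = rvs.min(), rvs.max()
        return self

    def transform(self, spectra, rvs):
        x = spectra - self.flux_mean                    # zero-center flux
        y = (rvs - self.rv_min) / (self.rv_max - self.rv_min)  # map to [0, 1]
        return x, y
```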
Outlier Removal
Throughout the development of the neural network, a series of observations repeatedly presented themselves as problematic, having large errors relative to the collective distribution of errors. In total, 44 spectra from the original HARPS-N data were excluded for this reason. This list is available upon request from the corresponding author.
MACHINE LEARNING ARCHITECTURES
We developed a series of neural network-based approaches to isolate the planet-induced RV signal. We focused on designing network architectures that had significantly different characteristics regarding how spectra were handled. For each of the three approaches, the generic inputs to the neural network are the RV-shifted spectra, while the target outputs are the RVs. These approaches result in a functional mapping where, given an input spectrum, the neural network will output an estimated RV value for a planet or planets.
Single Line CNNs
A single CNN architecture was used to train networks for each spectral line. From this approach, we end up with 4570 CNNs (one CNN per spectral line), where each network is able to generate a predicted RV value for a given spectrum.
The underlying CNN architecture used across all spectral lines consisted of 4 1-D convolutional layers (filter size of 3; stride of 1), followed by 3 fully connected dense layers. When the feature map output by the final convolutional layer is flattened, we concatenate the feature map with the original 15-pixel spectral line input, all of which is fed to the dense layers.
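A PyTorch sketch of this per-line architecture is given below. The framework, channel widths, hidden sizes, and padding are our assumptions, since only the layer counts, filter size, stride, and skip concatenation are specified above.

```python
import torch
import torch.nn as nn

class SingleLineCNN(nn.Module):
    """Per-line CNN sketch: 4 conv layers, raw-input skip concatenation,
    then 3 dense layers producing one scaled RV value."""
    def __init__(self, n_pixels=15, channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, 1, 1), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, 1, 1), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, 1, 1), nn.ReLU(),
        )
        flat = channels * n_pixels + n_pixels  # conv features + raw line
        self.dense = nn.Sequential(
            nn.Linear(flat, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # min-max scaled RV prediction
        )

    def forward(self, x):                       # x: (batch, 1, 15) flux
        feats = self.conv(x).flatten(1)
        feats = torch.cat([feats, x.flatten(1)], dim=1)  # skip connection
        return self.dense(feats)
```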
For a given CNN, the RMSE for the predicted RV is calculated on the validation split, providing a single performance metric associated with that CNN. By scoring all 4570 line-based CNNs, we are able to rank the CNNs by RMSE. This ranking gives us a way to order lines based on their predictive capability for identifying (injected) planetary RVs. The individual CNNs are not particularly useful given that the best performing single-line CNN has an RMSE of 6.0183 m/s on the test set (Table 1, first row). However, the CNNs' RMSE scores have a positively skewed distribution (Figure 2), indicating that the majority of lines offer some utility in measuring RVs.
There are two follow-on approaches that take advantage of these networks and their outputs. One approach uses the predictions from the individual CNNs as inputs to train an ensemble, while the second approach takes a subset of lines with the lowest RMSE and collectively inputs those lines into a single CNN.
Ensemble of Single Line CNNs
Ensembles are useful in that they learn to recognize the strengths and weaknesses of multiple models in conjunction with one another. The model that combines the predictions from other models, CNNs in this case, is referred to as a meta-learner. A common approach is to use an ordinary least squares (OLS) regression model to serve as the meta-learner. Ensembles provide benefits over a single model in that ensembles are not fully reliant on a single set of weights and inputs, thus allowing them to better generalize to a given task.
Given the approach in 5.1, we combine the output RV predictions from each single line CNN. For the Random Uniform 1000 m/s RV Spectra, this results in a 2073000x4570 matrix of predicted RVs. Figure 3 provides a sample of the inputs used to fit the OLS model. The OLS model is fit using the subset of predicted RVs that corresponds to the training split used to train each single line CNN. The ensemble OLS model was initially fit using the CNNs' predictions from all 4570 spectral lines, resulting in a test RMSE of 2.0939 m/s (Table 1). Examining the distribution of the model coefficients showed that many of the coefficients were approximately zero (Figure 4). By iteratively pruning inputs associated with fitted coefficients approximately close to zero (less than 1e-4), and refitting the OLS model with the reduced number of inputs, the input dimension for the OLS model was reduced to 1000 variables (single line CNN RV predictions). The OLS model fit with the reduced input had a test RMSE of 1.2673 m/s (Table 1). A final meta-learner was fit after additional pruning, resulting in inputs consisting of 250 RV predictions. This model generalizes better than the ensemble with all inputs, but performs worse overall than the 1000-input ensemble.
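A sketch of the meta-learner fit with coefficient-based pruning is shown below, using plain least squares; the helper name and the fixed number of pruning rounds are simplifications of the iterative procedure described above.

```python
import numpy as np

def fit_pruned_ensemble(preds_train, rv_train, tol=1e-4, n_rounds=3):
    """Fit an OLS meta-learner over per-line CNN predictions, iteratively
    dropping inputs whose fitted coefficient magnitude falls below `tol`
    and refitting on the survivors (hypothetical sketch)."""
    keep = np.arange(preds_train.shape[1])
    for _ in range(n_rounds):
        X = np.column_stack([np.ones(len(preds_train)), preds_train[:, keep]])
        beta, *_ = np.linalg.lstsq(X, rv_train, rcond=None)
        alive = np.abs(beta[1:]) >= tol       # beta[0] is the intercept
        if alive.all():
            break
        keep = keep[alive]
    return keep, beta

# keep, beta = fit_pruned_ensemble(preds, rvs)  # preds: (n_spectra, 4570)
```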
A downside to this approach occurs when evaluating other datasets. In order to generate RV predictions for a planet-like RV dataset, all 1000 CNNs (associated with the pruned ensemble model) need to be loaded and individually used to generate predictions that feed into the ensemble OLS model. To address the complexity of orchestrating hundreds of CNNs, we took an alternative approach that utilizes a single CNN with multiple spectral lines as inputs. 250 spectral lines were stacked such that each line is represented in a channel or band-like structure, similar to the RGB channels of a color image. A single input to the multi-line CNN is a 250x1x15 dimensional array where each vector indexed along the 0th axis corresponds to a single spectral line. The number of lines, 250, was selected based upon our remote compute environment's system memory and GPU memory balanced against the size of the Random Uniform 1000 m/s RV dataset.
Multi-line CNN
We considered two different sets of 250 spectral lines while training the CNN. The first set consisted of the lines corresponding to the 250 single line CNNs with the lowest RMSE (right plot, Figure 5). The second set of 250 spectral lines was derived from additional pruning of the 1000 spectral line ensemble (center plot, Figure 5).
The architecture of this multi-line CNN is the same as the architecture used for the single line CNNs, with the exception that the input layer is changed to accommodate 250 spectral lines, as opposed to a single spectral line. In this CNN, the weights of the filters in the convolutional layers are learned feature extractors where the features are learned from all the input spectral lines collectively. This differs slightly from the single line CNN, where the weights of the filters are feature extractors that are learned from and specific to a given line.
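Continuing the earlier sketch, only the first convolution (and the flattened width fed to the dense layers) changes; whether the raw-input skip concatenation is kept in the multi-line network is our assumption, since the text only states that the input layer differs.

```python
import torch
import torch.nn as nn

# Builds on the SingleLineCNN sketch above: the 250 stacked lines
# enter as input channels, like the RGB channels of a color image.
class MultiLineCNN(SingleLineCNN):
    def __init__(self, n_lines=250, n_pixels=15, channels=32, hidden=64):
        super().__init__(n_pixels=n_pixels, channels=channels, hidden=hidden)
        self.conv[0] = nn.Conv1d(n_lines, channels, kernel_size=3,
                                 stride=1, padding=1)
        # Flattened conv features plus the raw 250x15 input (skip assumed).
        self.dense[0] = nn.Linear(channels * n_pixels + n_lines * n_pixels,
                                  hidden)

# rv = MultiLineCNN()(torch.randn(8, 250, 15))  # -> (8, 1) scaled RVs
```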
Overall, the multi-line CNN trained on the 250 best individual lines seems well balanced, where the spread of RMSE scores across the train, validation, and test splits is not that dissimilar from one split to another (Table 1, CNN (Top 250)). This network performs slightly better than the 1000 spectral line ensemble (Table 1, Ensemble (1000)). The multi-line CNN trained on the 250 lines remaining from further pruning of the 1000 line ensemble achieves the best RMSE on the training data, but has issues generalizing to the validation and test data (Table 1, CNN (250 Pruned)). Figure 6 outlines a series of error (residual) analysis plots. The plots shown pertain to the test split for reporting in this document; in practice, we utilized the validation split for assessing systemic issues present in the errors. Noteworthy aspects of these plots include: a near perfect linear relationship between the predicted RVs and actual RVs (upper left, Figure 6); normally distributed errors (upper right, Figure 6), albeit not zero centered; no distinct temporal trends (middle row, Figure 6); and no apparent trends or patterns in the full range of RVs or a localized subset of RVs (bottom row, Figure 6).
RESULTS
Our approach utilizes NNs to filter the RV contribution from stellar activity in a given input spectrum by explicitly forcing the network's attention on what we care about the most, the (injected) planetary RVs, during the training process. As such, the predicted RVs are, in the best-case scenario, exact representations of the true planetary RVs; however, the trained neural network is imperfect and the predicted RVs contain error. The output estimated planetary RV signal is still useful.
To better understand the extent of the neural network's utility in finding planets, we use the multi-line CNN (250 lines) to make RV predictions for each of the planet-like RV datasets described in Section 4.2.2. Because these planet-like datasets are created using the same underlying HARPS-N solar spectra, which were also used to create the 1000 m/s Random RV dataset, we use the same spectra in the test split, used to score the neural network, to evaluate and check the network's utility in detecting planets.

Figure 6. A series of error analysis plots used to assess systemic issues associated with a trained neural network. Predicted RVs and the associated errors are from the multi-line CNN (250 Top). The displayed plots are from test data; however, in the development of the neural network the training and validation splits were referenced for informing necessary changes in preprocessing steps or network architecture design. Test data is shown only for reporting in this context.
Despite training on a large quantity of diverse spectra, the neural network trained on the 1000 m/s Random RV dataset still has a small vertical offset in the predicted RVs when compared to the actual RVs for the training data. Hence, for each planet-like RV dataset's predicted RVs, we apply a standard correction corresponding to the intercept coefficient from the best fit line for the predicted vs actual RVs (upper left, Figure 6) for the training data. Since planetary mass measurements do not require an accurate absolute RV scale, but rely only on differential RV measurements, this offset correction is not considered to be a weakness in our approach.
Following this correction, we fit a Lomb-Scargle periodogram to a given planet-like test case's predicted RVs. We take the period associated with the maximum power of the periodogram to correspond to the recovered orbital period of the injected planet (top row, Figure 7). We assume that, given a neural network capable of perfectly predicting a planet's RVs, we should be able to recover any period, constrained only by the sampling window and cadence. Using RadVel (Fulton et al. 2018), we perform a maximum likelihood estimation (MLE) fit to recover the amplitude of the injected planet. For initial parameter guesses, we set the period to that of the highest peak in the Lomb-Scargle periodogram, the semi-amplitude to the 90th quantile of the predicted RVs, and the RV jitter to the training RMSE score from the 1000 m/s Random RV dataset. After fitting, we compare the model estimates for period and amplitude against the true period and amplitude.
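A sketch of this recovery step follows; the simple sinusoid fit via scipy stands in for the RadVel MLE described above (no jitter term, circular orbit), while the periodogram uses the astropy implementation.

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy.optimize import curve_fit

def recover_planet(t, rv_pred):
    """Recover period and semi-amplitude from predicted RVs: take the
    periodogram peak as the period guess, then fit a sinusoid (sketch;
    a stand-in for the RadVel maximum likelihood fit)."""
    freq, power = LombScargle(t, rv_pred).autopower()
    p0_period = 1.0 / freq[np.argmax(power)]      # peak-power period
    k0 = np.quantile(np.abs(rv_pred), 0.9)        # semi-amplitude guess

    def model(t, k, p, phi, off):
        return k * np.sin(2.0 * np.pi * t / p + phi) + off

    popt, _ = curve_fit(model, t, rv_pred, p0=[k0, p0_period, 0.0, 0.0])
    return popt  # (K, P, phase, offset)
```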
The complete results for this process, performed on all 20 injected planet test cases, are captured in Figure 8. For this analysis, we produced results with both multi-line CNNs referenced in Section 5.3, "250 Top" and "250 Pruned". While the 250 Top CNN was more balanced in regards to its train, validation, and test RMSE scores on the 1000 m/s Random RV dataset, we saw it underperform on the task of recovering the planet-like RV signals from the injected planet test cases. In comparison, the 250 Pruned CNN performed well when recovering the planet-like RVs, even on the spectra associated with the test split, to which it had struggled to generalize as well. In the amplitude sensitivity tests, we were able to recover a 0.2 m/s semi-amplitude, 50 day period planetary signal, where the periodogram and MLE fit identified the true amplitude and period within 8.8% error and 0.7% error, respectively (left column, Figure 8). In every amplitude test case, the 250 Pruned CNN outperformed the 250 Top CNN. Both CNNs had nearly identical performances on the period test cases, where they struggled to identify a planet with an amplitude of 1.0 m/s and period of 10 days, but succeeded on every planet with an amplitude of 1.0 m/s and a period of 25 days or greater.

Figure 7. Two sets of results for the planet-like test cases. The left column corresponds to a planet with a 50 day period and 0.2 m/s semi-amplitude, while the right column corresponds to a planet with a 250 day period and 1.0 m/s semi-amplitude. The top row depicts power and frequency for the Lomb-Scargle periodograms fit to each planet-like test case. The red dotted line is the true period, while the red "x" indicates the frequency of the peak power. The center row shows phase-folded diagrams while the bottom row shows the time series plots, both produced following the MLE. Dark points in the phase-folded diagrams and time series plots are the predicted RVs made by the neural network; predicted RVs for the 1.0 m/s semi-amplitude planet show distinct sinusoidal characteristics.
To assess how much our approach was able to mitigate noise in the RVs and improve planet detectability, we generated another set of "results" by simply adding the sinusoidal planet RV signals to the HARPS-N RVs in the solar frame. These represent an uncorrected case where, due to the large quantity of data (∼3500 points over 3 years in our test set), we are still able to fit our planet model with some success. The results of this comparison study are shown in Figure 9. As a comparison to our CNN approach, where we recovered a 0.2 m/s semi-amplitude planet to 8.8% error, here the same Lomb-Scargle periodogram + RadVel model approach recovered this 0.2 m/s signal with 80% error, about an order of magnitude higher. In this experiment, the minimum planetary signal that could be added to the CCF RVs while maintaining a 10% error threshold was 50 cm/s. As a result of this comparison, we suggest that our multi-line CNN approach for measuring RVs has the potential to substantially improve RV measurement precision once it has been developed to apply to non-solar targets.
CONCLUSION AND DISCUSSION
Using the public release of HARPS-N solar spectra, we demonstrated a deep learning based approach to recover injected planetary RVs in spectra. This approach is unique in that it operates specifically in the wavelength domain and does not utilize temporal features such as lagged or aggregated observations. The neural network is trained on a diverse collection of spectra, injected with random RVs ranging from [-1000.0, 1000.0] m/s, which allows the network to generalize well to arbitrary sinusoidal planets. We apply the trained neural network to spectra with injected sinusoidal planets and utilize a Lomb-Scargle periodogram on the network's predicted RVs, showing that the frequency tied to the planet's period is associated with the highest power, down to a 25 day period and 1 m/s semi-amplitude planet. Additionally, we show that a planet with a 50 day period and 0.2 m/s semi-amplitude is recoverable from the predicted RVs using MLE.
Future work can include making improvements to the neural network's performance and our interpretation of the results. Specifically, this may include experimenting with different sampling techniques for train, validation, and test splits, improving selection of the line subsets to feed to the neural network, and refining the training pipeline to operate beyond current memory constraints. Future work may also include more rigorous hyperparameter tuning across the entire pipeline and an overall improved reporting of results to include planetary mass and period uncertainties.

Figure 9. Performance results for the 20 planet test cases using HARPS-N RVs in the solar frame. The left column corresponds to the period recovery test cases and the right column corresponds to the amplitude recovery test cases (note this is switched from the previous figure). This figure may be compared with Figure 8 to assess how much our approach improved planet detectability. We see a substantial reduction in error in recovered planet parameters in our multi-line CNN approach when compared with these results, where we apply the same planet-fitting method to "uncorrected" RVs.
Delivery time reduction for mixed photon-electron radiotherapy by using photon MLC collimated electron arcs
Objective. Electron arcs in mixed-beam radiotherapy (Arc-MBRT) consisting of intensity-modulated electron arcs with dynamic gantry rotation potentially reduce the delivery time compared to mixed-beam radiotherapy containing electron beams with static gantry angle (Static-MBRT). This study aims to develop and investigate a treatment planning process (TPP) for photon multileaf collimator (pMLC) based Arc-MBRT. Approach. An existing TPP for Static-MBRT plans is extended to integrate electron arcs with a dynamic gantry rotation and intensity modulation using a sliding window technique. The TPP consists of a manual setup of electron arcs, and either static photon beams or photon arcs, shortening of the source-to-surface distance for the electron arcs, initial intensity modulation optimization, selection of a user-defined number of electron beam energies based on dose contribution to the target volume and finally, simultaneous photon and electron intensity modulation optimization followed by full Monte Carlo dose calculation. Arc-MBRT plans, Static-MBRT plans, and photon-only plans were created and compared for four breast cases. Dosimetric validation of two Arc-MBRT plans was performed using film measurements. Main results. The generated Arc-MBRT plans are dosimetrically similar to the Static-MBRT plans while outperforming the photon-only plans. The mean heart dose is reduced by 32% on average in the MBRT plans compared to the photon-only plans. The estimated delivery times of the Arc-MBRT plans are similar to the photon-only plans but less than half the time of the Static-MBRT plans. Measured and calculated dose distributions agree with a gamma passing rate of over 98% (3% global, 2 mm) for both delivered Arc-MBRT plans. Significance. A TPP for Arc-MBRT is successfully developed and Arc-MBRT plans showed the potential to improve the dosimetric plan quality similar as Static-MBRT while maintaining short delivery times of photon-only treatments. This further facilitates integration of pMLC-based MBRT into clinical practice.
Introduction
In external beam radiotherapy, photon treatments performed in clinical routine are typically applied using the photon multileaf collimator (pMLC) integrated into the treatment head of a linear accelerator. The introduction of the pMLC facilitated intensity-modulated radiotherapy (IMRT), which improved target dose conformality compared to 3D conformal radiotherapy (Bortfeld 2006). Volumetric modulated arc therapy (VMAT) has improved upon the delivery efficiency of IMRT while maintaining the dosimetric plan quality by combining synchronized intensity modulation and dynamic gantry rotation (Otto 2008, Teoh et al 2011).
Meanwhile, standard electron treatments are still applied using patient-specifically fabricated cerrobend cut-outs placed in dedicated electron applicators mounted onto the linear accelerator head for every field and treatment fraction. This makes electron treatments inefficient and cumbersome. Furthermore, using cut-outs for energy modulation or intensity modulation of electron beams is practically infeasible (Hogstrom and Almond 2006). This infeasibility makes electron treatments in inhomogeneous media challenging, where energy modulation is necessary (Asell et al 1997). Likewise, electron treatments of large targets such as chest wall irradiation are challenging, because multiple conformal electron beams from different directions create hot or cold spots (Khan et al 1977). To avoid such hot and cold spots, techniques such as electron arc therapy (EAT) have been developed (Khan et al 1977, Leavitt et al 1985, McNeely et al 1988, Leavitt and Stewart 1993, Gaffney et al 2001, Sharma et al 2011). In EAT, a narrow electron field is rotated around the patient. Custom secondary collimators are mounted onto the gantry, and tertiary collimators and boli are placed on the patient (Leavitt et al 1985). The main disadvantage of EAT is that the treatment planning and the fabrication and mounting of the custom collimators are very labour and time intensive. More recently, Rodrigues et al (2014) proposed an EAT technique called dynamic electron arc radiotherapy (DEAR) with a mounted standard applicator and cut-out, reducing the time needed to manufacture custom collimators. To avoid collisions between the applicator and the patient, the table translates synchronously with the gantry rotation. However, an applicator still has to be mounted onto the gantry for every treatment fraction, and dynamic collimation of the beam is not possible. Furthermore, the short distance between the end of the applicator and the patient may increase the collision risk.
To overcome these limitations, some research groups investigated different motorized collimators for electron treatments, aiming at replacing the cut-outs and applicators. The investigated collimators were a few leaf electron collimator (FLEC) (Al-Yahya et al 2005a, 2005b, 2007, Alexander et al 2010, 2011), a custom electron multileaf collimator (eMLC) (Ma et al 2000, Gauer et al 2008, Engel and Gauer 2009, Vatanen et al, Mihaljevic et al 2011) and the photon MLC (pMLC) (Henzen et al 2014a, 2014b, Mueller et al 2018a, Fix et al 2023). Additionally, these motorized collimators make intensity and energy modulation of electron beams feasible in modulated electron radiotherapy (MERT). The pMLC has the additional advantage that no additional hardware needs to be mounted onto the gantry head for every fraction.
However, pMLC collimated electron beams have a larger beam penumbra due to increased scatter within the larger volume of air between the end of the pMLC and the patient (Mueller et al 2018a). Reducing the source-to-surface distance (SSD) by moving the patient closer to the gantry reduces the beam penumbra, although a very short SSD poses a collision risk between the gantry and the patient. It has been shown that electron-only plans do not achieve the same dose homogeneity in the target as photon-only plans (Surucu et al 2010, Alexander et al 2011, Henzen et al 2014b, Mueller et al 2017, Renaud et al 2017). A possible solution to overcome these dosimetric limitations of electron beams is to combine electron and photon beams in mixed beam radiotherapy (MBRT) (Li et al 2000, Korevaar et al 2002, Mu et al 2004, Xiong et al 2004, Palma et al). Mueller et al (2017) showed that pMLC-based intensity-modulated electron beams combined with static photon beams or photon beams with dynamic trajectories (Mueller et al 2018b) improved dosimetric plan quality compared to photon-only treatments. However, until now MBRT has only contained electron beams delivered from a static gantry angle (Static-MBRT), which results in substantially longer delivery times for Static-MBRT plans compared to VMAT. Besides lower patient throughput, longer delivery times might also increase intrafraction motion and impact patient comfort negatively. We hypothesize that using electron beams with a dynamic gantry rotation during beam-on combined with photon beams (Arc-MBRT) improves the delivery efficiency and thus further facilitates clinical implementation of mixed photon-electron beam treatments.
The aim of this work is to develop a treatment planning process (TPP) to create Arc-MBRT plans consisting of both photon and electron beams with dynamic gantry rotation and pMLC sliding window-based intensity modulation. Several breast cases are investigated retrospectively to demonstrate the delivery efficiency, dosimetric accuracy, and dosimetric plan quality of Arc-MBRT.
Methods
An existing TPP used for creating Static-MBRT plans (Mueller et al 2017, 2022) was extended to accommodate electron beams with a dynamic gantry rotation and sliding window-based intensity modulation, called electron arcs henceforth. The TPP is described in the following subsection. The second subsection describes the investigations of the TPP for Arc-MBRT and describes the dosimetric validation of Arc-MBRT plans.
Beam setup
The first part in the TPP illustrated in figure 1 consists of the manual setup of electron arcs and setup of photon beams within a research version of Eclipse. This research version is embedded in the Aria framework v15.6 (Varian Medical Systems, Palo Alto, CA). The user needs to define the gantry range, collimator and table rotation angle for the electron arcs. Due to the finite range of electron beams, the gantry range for the electron arcs is suggested to be set to the area where the planning target volume (PTV) is close to the patient's surface. For the defined gantry range, electron arcs are set up for all available electron beam energies with control points (CPs) every 5°. Additionally, the user defines photon beams, consisting either of 3D conformal or intensity-modulated photon beams with a static gantry angle or photon arcs with dynamic gantry rotation (with CPs every 5°). The beams with a static gantry angle are called static beams from now on.
Next, the position of the isocenter is shifted for every CP of the electron arcs along the central axis such that the SSD matches a user-defined setting SSD_desired. This allows the distance between the gantry head and the patient's surface to be shortened, which influences the amount of in-air scatter of the electron beams. A shorter SSD hence means a smaller beam penumbra for the electron beams. For this, the current SSD along the central axis, SSD_current, is calculated, and the position of the isocenter is shifted Δ_lateral, Δ_vertical, and Δ_longitudinal cm along the central beam direction for every CP to match SSD_desired. The central beam direction is defined by the gantry rotation angle α_gantry and the table rotation angle α_table of the CP. This results in a dynamic table translation synchronous with the gantry rotation to keep the fixed SSD along the central axis for the electron arcs.
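The original equations for this shift are not reproduced in the extracted text; as a purely illustrative sketch, the snippet below moves the isocenter along the central beam axis by the SSD difference, assuming an IEC-like angle convention. The sign conventions and axis definitions of the actual TPP may differ.

```python
import numpy as np

def isocenter_shift(ssd_current, ssd_desired, gantry_deg, table_deg):
    """Shift the isocenter by (SSD_current - SSD_desired) along the
    central beam axis (illustrative only; angle convention assumed)."""
    d = ssd_current - ssd_desired
    g, t = np.radians(gantry_deg), np.radians(table_deg)
    beam_dir = np.array([np.sin(g) * np.cos(t),   # lateral component
                         -np.cos(g),              # vertical component
                         np.sin(g) * np.sin(t)])  # longitudinal component
    return d * beam_dir  # (Δ_lateral, Δ_vertical, Δ_longitudinal)
```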
Electron energy selection
In the second part of the TPP, the number of electron arcs is reduced to a user-defined number to control the number of total electron arcs in the plan. Because an electron arc was set up for each available beam energy, the delivery time would be unnecessarily long if all electron arcs were used. Thus, the most important electron beam energies are selected based on an initial intensity modulation optimization of all electron arcs and photon beams. For this, a beamlet dose calculation is performed for every CP of the electron and photon arcs and static beams using the Swiss Monte Carlo Plan (SMCP) (Fix et al 2007) interfaced with the Eclipse research version. In SMCP, pre-simulated beamlet phase spaces and the Macro Monte Carlo (MMC) (Neuenschwander et al 1995, Fix et al 2013) and Voxel Monte Carlo (VMC++) (Kawrakow and Fippel 2000) dose calculation algorithms are used for electron and photon beams, respectively. The beamlet size is 0.5 cm × 0.5 cm or 0.5 cm × 1 cm in the isocenter plane, depending on the width of the pMLC leaf. For static conformal photon beams, a dose calculation of the whole beam is performed using VMC++. The beamlet dose distributions are then used for the intensity modulation optimization based on a hybrid direct aperture optimization (H-DAO) (Mueller et al 2022). In H-DAO, apertures describing the pMLC shapes and monitor unit (MU) weights are determined using a hybrid column generation and simulated annealing approach. With column generation, apertures are iteratively generated, and with simulated annealing, the shapes and MU weights of the apertures are refined after each aperture addition. For each CP of the electron and photon arcs, exactly one aperture is determined, while for static beams a user-defined number of apertures is generated. For static conformal photon beams, no apertures are generated, but the MU weight of the static conformal photon beam is simultaneously optimized with the MU weights of the apertures of the electron arcs. The optimization is finished when every CP has exactly one aperture and the static beams have their total number of apertures assigned. For all arcs, the movement range of the pMLC leaves is restricted such that the gantry rotation is not slowed down by the leaf movement, and the MU weight is restricted such that the gantry rotation is maximally slowed down to half the full speed. During the optimization, the fluence belonging to an electron or photon aperture is interpolated between consecutive CPs as described by Guyer et al (2022) to account for the continuous movement of the pMLC leaves. For photons, the transmission through the pMLC is considered during the optimization, while for the electrons it is assumed that the transmission through the pMLC is zero due to the thickness of the pMLC.
After the initial DAO, the dose contribution of each electron arc to the PTV is calculated. The electron arcs are then ranked according to their PTV dose contribution from highest to lowest. Only the highest-ranking electron arcs, up to the user-defined number, are kept while the others are discarded.
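A minimal sketch of this selection step is shown below, assuming the per-arc PTV dose contributions have already been accumulated from the optimized apertures; the dictionary keys and example values are hypothetical.

```python
def select_electron_arcs(ptv_dose_per_arc, n_keep):
    """Rank electron arcs by their PTV dose contribution after the
    initial DAO and keep the top n_keep (sketch; `ptv_dose_per_arc`
    maps an energy label to that arc's summed PTV dose)."""
    ranked = sorted(ptv_dose_per_arc, key=ptv_dose_per_arc.get, reverse=True)
    return ranked[:n_keep]

# Example with made-up dose contributions in Gy:
# select_electron_arcs({"E6": 4.1, "E9": 6.3, "E12": 5.2, "E15": 2.0,
#                       "E18": 1.1, "E22": 0.4}, n_keep=2)  # -> ["E9", "E12"]
```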
Final plan creation
In the third part, a final DAO is performed with the remaining electron arcs and the photon beams. After all apertures are determined, a dose calculation is performed for each aperture using the SMCP framework (Fix et al 2007, Manser et al 2019), considering the exact geometry of the pMLC and the full dynamic movement of the pMLC, table and gantry between consecutive CPs for photon and electron arcs. The source for the electron beams is a validated multiple source model (Henzen et al 2014a, Fix et al 2023), consisting of a primary and a jaw source, and the dose is calculated using the MMC algorithm. The source of the photon beams is a pre-simulated phase space located on a plane above the secondary collimator jaws, and the dose is calculated using the VMC++ algorithm. After the dose calculation, a MU weight reoptimization is performed to mitigate the differences between the beamlet-based and final dose distributions. Finally, the dose from all apertures is summed to get the plan dose. All dose distributions in this work use a voxel size of 2.5 × 2.5 × 2.5 mm³ and the mean statistical uncertainty of the dose in voxels receiving at least 50% of the maximum dose is less than 0.5%.
Treatment plan investigations
Four breast cases were selected for retrospective investigation, each with a prescribed total dose of 42.4 Gy to the median dose in the planning target volume (PTV) in 16 fractions. One case is a right-sided whole breast irradiation (WBI) case without axillary lymph node irradiation (LNI), which was clinically treated with 3D conformal radiotherapy (CRT) using two tangential photon beams (case 1). One case is a left-sided WBI case without axillary LNI (case 2), one case is a right-sided WBI case including axillary LNI (case 3) and one case is a left-sided WBI case including axillary LNI (case 4). The cases were selected for the following purposes:
(i) To investigate the influence of the number of electron arcs on the resulting plan.
(ii) To evaluate the dosimetric plan quality and delivery time of Arc-MBRT for breast treatments compared to Static-MBRT and photon-only treatments.
(iii) To validate the deliverability of Arc-MBRT plans in terms of dosimetric accuracy.
For the first purpose, six Arc-MBRT plans are created for each of the four cases. The six Arc-MBRT plans have a varying number of electron arcs, ranging from 1 to 6 arcs. A plan with 1 electron arc means that only one electron beam energy is used, while a plan with 6 electron arcs means that all electron beam energies are used and no arcs were discarded in the electron arc selection step. The available electron beam energies are 6, 9, 12, 15, 18, and 22 MeV. For all electron arcs, the SSD is shortened to 80 cm as a compromise between reducing the in-air scatter and ensuring collision-free delivery (Mueller et al 2018a, Ma et al 2019). The photon beam setup for case 1 (right WBI) consists of two static conformal tangential beams and of four partial VMAT arcs for the other three cases. The beam setups are illustrated in figure 2 and described in detail in table 1. For all plans, the dose contribution to the PTV of the electron and photon beams is investigated and the dosimetric plan quality of the plans with 2 and 6 electron arcs is analyzed in detail.
For the second purpose, Arc-MBRT plans, Static-MBRT plans, and photon-only plans are created, and the dosimetric plan quality and the estimated delivery time are compared for all plans of the four cases. The different plans are described in detail in table 2. All electron arcs and static electron beams have an SSD of 80 cm. Comparisons of the dosimetric plan quality of the resulting plans are performed by analyzing dose-volume histogram (DVH) parameters for the PTV, heart, lung, contralateral breast and spinal canal. For the PTV, the Paddick conformity index (CI) (Paddick 2000) and homogeneity index (HI = (D2% - D98%)/D50%) are calculated and compared, where DX% represents the minimum dose in X% of the PTV volume. The estimated delivery times are calculated by summing the time per CP of all arcs and beams of one plan, while the accelerations of the mechanical axes are neglected. Additionally, the time to move all axes to the starting position of the next arc/beam is taken into account, with a minimum time of 20 s for switching between photon and electron beams and between different electron energies.
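Both PTV indices can be computed directly from voxel doses and volumes; the helper names below are our own, but the formulas match the definitions above and the standard Paddick index.

```python
import numpy as np

def homogeneity_index(ptv_doses):
    """HI = (D2% - D98%) / D50%, where DX% is the minimum dose received
    by the hottest X% of the PTV (computed here from voxel doses)."""
    d2, d50, d98 = np.percentile(ptv_doses, [98, 50, 2])
    return (d2 - d98) / d50

def paddick_ci(tv_piv, tv, piv):
    """Paddick conformity index: (TV_PIV)^2 / (TV * PIV), with TV_PIV the
    target volume covered by the prescription isodose, TV the target
    volume, and PIV the prescription isodose volume."""
    return tv_piv**2 / (tv * piv)
```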
For the third purpose, the Arc-MBRT plans for the left WBI and right WBI+LNI cases (cases 2 and 3) are delivered on a TrueBeam linear accelerator (Varian Medical Systems) equipped with a Millennium 120 pMLC (Varian Medical Systems) in developer mode. The dose is measured using radiochromic EBT3 film sheets (Ashland Advanced Materials, Bridgewater, NJ) placed at 1 cm depth inside a PMMA cube. Film measurements are taken for each plan for the following deliveries:
(i) The total plan (each consisting of two electron and four photon arcs).
(ii) Only the electron arcs of each plan.
(iii) The electron arcs of each plan delivered with the gantry angle collapsed to 0°.
The reason for these different deliveries is to measure individually the dosimetric accuracy of the whole plan, of the electron arcs, and of the sliding window technique for electrons. The film sheets are scanned using an Epson XL 10000 flatbed scanner (Seiko Epson Co., Tokyo, Japan) 18 h after irradiation. The scanned films are corrected for the lateral response artifact of the scanner using a one-dimensional linear correction function (Lewis and Chan 2015), converted to absolute dose using a triple-channel calibration (Micke et al 2011) and rescaled according to the one-scan protocol by using two additional film strips (Lewis et al 2012). The resulting dose distribution of the red channel is compared to the corresponding 2D plane of the dose recalculated for the PMMA cube using a gamma evaluation with a 3% (global)/2 mm criterion and a 10% low-dose threshold of the maximum dose.
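This comparison can be reproduced with, for example, the open-source pymedphys gamma implementation; the snippet below is a sketch assuming that API and is not the evaluation software used by the authors.

```python
import numpy as np
import pymedphys

def gamma_pass_rate(axes_ref, dose_ref, axes_eval, dose_eval):
    """Gamma analysis with a 3% (global) / 2 mm criterion and a 10%
    low-dose cutoff, matching the evaluation described above (sketch)."""
    gamma = pymedphys.gamma(
        axes_ref, dose_ref, axes_eval, dose_eval,
        3,                             # dose difference criterion in %
        2,                             # distance-to-agreement in mm
        lower_percent_dose_cutoff=10,  # ignore points below 10% of max
    )
    valid = gamma[~np.isnan(gamma)]
    return 100.0 * np.mean(valid <= 1)  # passing rate in %
```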
Number of electron arcs
The dose contributions to the PTV of the different electron and photon beams in Arc-MBRT plans varying in the number of electron arcs are shown in figure 3 for the four cases. For case 1 (right WBI), the electron dose contribution increases from 31% to 51% with an increasing number of electron arcs. For case 2 (left WBI), the electron dose contribution is between 13% and 19% for all six plans. The electron dose contribution for case 3 (right WBI+LNI) increases from 11% to 29% from one to six electron arcs. Similarly, the electron dose contribution for case 4 (left WBI+LNI) increases from 16% to 28% from one to six electron arcs. The electron dose contribution is almost twice as high in case 1 (right WBI) compared to all other cases. Overall, the lower three electron energies contribute more than half of the electron dose contribution for all four cases. In figure 4, the DVHs of Arc-MBRT plans with 2 and 6 electron arcs are shown. For case 1 (right WBI), the maximum dose to the ipsilateral lung slightly decreases while the low-dose bath to the lung slightly increases from 2 to 6 electron arcs. The PTV coverage and dose to the OARs are similar for case 2 (left WBI). The two-electron arc plan of case 3 (right WBI+LNI) has a higher maximum dose to the spinal canal and a slightly increased mean dose to the contralateral lung while maintaining the same PTV coverage as the six-electron arc plan. For case 4 (left WBI+LNI), the PTV coverage and dose to OARs are similar between the 2 and 6 electron arc plans. Overall, the dosimetric plan quality is similar between the two plans for each of the four cases.
Case 1: right WBI
The results of the dosimetric comparison for case 1 (right WBI) are shown in figure 5. The dosimetric values and estimated delivery times are presented in table 3. The MBRT plans with static conformal photon beams have a reduced PTV coverage in comparison with the CRT plan. On the other hand, the volume of normal tissue receiving 100% of the prescribed dose is reduced in the MBRT plans compared with the CRT plan. For the MBRT plans with static photon beams and MBRT plans with photon arcs, the PTV coverage is similar to the IMRT plan and VMAT plan, respectively, while the dose to the normal tissue is reduced in the MBRT plans.

Table 2. Beam setup for the plans used for investigating the dosimetric plan quality of Arc-MBRT. In brackets, the gantry ranges and gantry angles of the photon and electron beams are indicated. The table angle is 0° for all beams. The photon arcs are always split beams using the x-jaws. Columns: plan (electrons | photons), electron beams, photon beams.
Comparing Arc-MBRT plans versus Static-MBRT plans, the delivery time is reduced by at least 55%. The estimated delivery time of the photon-only plans is 35% and 16% shorter for the CRT and IMRT plans, and 37% longer for the VMAT plan, compared to the respective Arc-MBRT plans.
Case 2: left WBI
The Arc-MBRT, Static-MBRT and VMAT plans for case 2 (left WBI) are compared in figure 6 (dose distributions and DVHs) and in table 4 (dosimetric values and delivery time). As can be seen in the top of figure 6, the electron dose contributes mostly to the superficial part of the PTV and to the part where the heart is close to the PTV in the distal direction. The photon dose covers the more distal parts of the PTV, especially near the ribs where the ipsilateral lung is only a few millimeters apart from the PTV.
While the PTV coverage and the dose to OARs are similar in the Arc-MBRT and Static-MBRT plans, the dose to the OARs is substantially higher in the photon-only VMAT plan. Compared to the VMAT plan, the mean dose to the heart is reduced by 32%, the mean dose to the contralateral breast is reduced by 23% and the V5Gy of the total lung is reduced by 40% in the Arc-MBRT plan.
Cases 3&4: left and right WBI+LNI
The DVH comparisons of the Arc-MBRT, Static-MBRT and VMAT plans for cases 3 and 4 (right and left WBI+LNI) are shown in figure 7. In table 5, the dosimetric values and delivery times for case 3 (right WBI+LNI) are compared, and the dosimetric values and delivery times for case 4 (left WBI+LNI) are compared in table 6. In case 3 (right WBI+LNI), the Arc-MBRT and Static-MBRT plans achieved similar dosimetric plan quality. Both plans have a similar PTV coverage as the VMAT plan. When comparing the Arc-MBRT plan to the VMAT plan, the mean dose to the heart is reduced by 60%. Similarly, the mean dose to the contralateral breast is reduced by 51% and the V5Gy of the total lung is reduced by 24%.
The Arc-MBRT and Static-MBRT plans for case 4 (left WBI+LNI) have similar dosimetric plan quality, except for the lung, which has a lower dose bath in the Static-MBRT plan compared to the Arc-MBRT plan. The VMAT plan has the same PTV coverage as both MBRT plans, but the mean dose to the heart is reduced by 38%, the mean dose to the contralateral breast is reduced by 23% and the V5Gy of the total lung is reduced by 15% in the MBRT plans compared to the VMAT plan.
Dosimetric validation
The Arc-MBRT plans for case 2 (left WBI) and case 3 (right WBI+LNI) were successfully delivered on a TrueBeam, and film measurements were taken for the total plans (one fraction), for only the electron arcs of each plan, and for the electron arcs of each plan with the gantry angle collapsed to 0°. The results of the comparisons between the measured and calculated dose distributions for all six deliveries are shown in figure 8. The gamma analysis for case 2 (left WBI) resulted in a passing rate of 98.5% for the total dose, 99.5% for the electron dose and 100% for the collapsed dose, respectively. The passing rates of case 3 (right WBI+LNI) were 100% for all three dose distributions.
Discussion
In this work, a TPP for creating Arc-MBRT plans was successfully developed. The Arc-MBRT plans consist of intensity-modulated electron arcs and static or dynamic photon beams. The intensity-modulated electron arcs are achieved with a pMLC-based sliding-window technique and synchronous dynamic gantry rotation and table translation to keep a shortened SSD. In contrast to Static-MBRT, which contains intensity-modulated electron beams delivered from a static gantry angle, the gantry moves continuously during beam-on for electron arcs. This shortens the delivery time substantially. For the four investigated cases, the delivery times of the Arc-MBRT plans are less than half the time of the Static-MBRT plans. This is similar to the advantage of VMAT over IMRT, which also has a reduced delivery time due to the dynamic gantry rotation (Teoh et al 2011). Additionally, creating a suitable beam setup for Static-MBRT plans is not always straightforward. Multiple beams must be chosen carefully to achieve an acceptable coverage of the PTV by the electrons. The presented TPP improves this, as setting up gantry ranges for electron arcs is more straightforward. The TPP presented here can create Arc-MBRT plans, but plans consisting of electron arcs only can also be created with the same TPP in a similar way.
The dosimetric plan quality of the Arc-MBRT plans is generally similar to that of the Static-MBRT plans but superior compared to the photon-only treatments, except for the combination of electron arcs with static conformal photon beams. A possible explanation for this is that the dose of the conformal photon beams is predetermined and only the MU weight of the conformal beams can be changed during intensity modulation optimization. This indicates that the simultaneous optimization of photon and electron intensity modulation is important. For all other setups, the mixed beam plans achieved the same PTV coverage while reducing the dose to the OARs. Most notably, MBRT plans reduced the mean dose to the heart compared to photon-only plans, which is correlated with ischemic heart disease (Darby et al 2013). Similar results were obtained by Li et al (2000), Al-Yahya et al (2005b), Alexander et al (2011) and Renaud et al (2017) using different MBRT techniques. This shows the potential dosimetric superiority of MBRT plans over photon-only treatments for breast cases, also for MBRT utilizing intensity-modulated electron arcs.
When comparing Arc-MBRT plans with different numbers of electron arcs, it seemed that for the investigated breast cases no more than two electron arcs are necessary to achieve a good dosimetric plan quality, and that more electron arcs only increase the delivery time without improving the dosimetric result substantially. This can be explained by the fact that energy modulation does not play a substantial role for this treatment site, as the range of treatment depths is narrow. Rather, the electron dose acts as a base dose in the superficial parts of the PTV, allowing for a lower photon dose to the OARs while maintaining a sharp dose falloff outside the PTV. In the cases including LNI, the lymph nodes are essentially only covered with photons. Because the lymph nodes are not near the patient's skin, a larger portion of normal tissue would be irradiated if the electron beams were to contribute more to this area, and thus only the superficial parts of the PTV in the breast are covered with electrons.
Arc-MBRT plans were successfully delivered on a TrueBeam and the dosimetric validation shows good agreement between the measured and calculated dose distributions. This shows that the multiple-source beam model and algorithm used for the electron dose calculation are suitable for Arc-MBRT plans and that a TrueBeam can deliver electron arcs accurately. Ma et al (2019) investigated dosimetric characterizations of electron arcs and achieved good agreement between Monte Carlo dose calculations and measurements as well. However, no intensity-modulated electron arcs were measured.
In the presented TPP for Arc-MBRT, the time for dose calculation can be substantially longer compared to the time required for photon-only VMAT plans. There are several possible approaches to reduce this dose computation time. One approach is to use a coarser dose scoring grid to determine suitable electron energies. The number of electron energies for which the beamlet dose has to be calculated on the regular dose scoring grid can thus be reduced. Another approach is to use faster dose calculation algorithms based on a GPU implementation (Franciosini et al 2023) or on deep learning methods for denoising MC dose distributions (Bai et al 2021, Neph et al 2021).
One aspect which was not investigated in this work is the robustness of the treatment plans against setup uncertainties and patient breathing. The assumption that the dose distribution is not perturbed by setup uncertainties does not hold for electrons, and the electron dose distribution moves with the patient in the incident beam direction (Thomas 2006). Additionally, electron beams might be more robust than photon beams due to their larger beam penumbra. Renaud et al (2019) and Heath et al (2021) developed a clinical target volume (CTV) based robust optimization approach for Static-MBRT. They showed that robust-optimized plans exhibited less dosimetric impact due to setup uncertainties compared to plans using conventional PTV margins, and that the electron dose contribution was higher in the robust-optimized plans. Additionally, it has been shown that photon-only plans could benefit from CTV-based robust optimization as well (Byrne et al 2016). Hypothetically, robust-optimized Arc-MBRT would show the same benefit; this will be investigated in future research, but the potential burden on computer memory and calculation time of the many MC beamlets needed for robust optimization needs to be addressed adequately (Mueller et al 2023). This work focused on breast cases to show the dosimetric plan quality and efficiency of Arc-MBRT plans. However, there is a potential advantage of MBRT also for other treatment sites with a superficial part, such as head-and-neck cancers (Mu et al 2004, Mueller et al 2018a), brain tumors (Rosca 2012, Heath et al 2021), sarcomas (Renaud et al 2017), tumors in the abdomen (Unkelbach et al 2022) or scalp irradiations (Eldib et al 2017). Additionally, non-coplanar beam directions for photon and electron beams might offer an additional advantage. Electron beams with dynamic trajectories, similar to the dynamic trajectories of photon beams in dynamic mixed beam radiotherapy (Mueller et al 2018b), might be explored in future research, although ensuring collision avoidance for the shortened SSD might be challenging for non-coplanar electron beams.
Conclusion
A TPP for pMLC-based Arc-MBRT containing intensity-modulated electron beams with dynamic gantry rotation was successfully developed. The created Arc-MBRT plans for four breast cases showed similar dosimetric plan quality to Static-MBRT plans while outperforming photon-only plans. For the investigated breast cases, two electron arcs were enough to achieve a good dosimetric plan quality. On average, the mean heart dose is reduced by 32% in the MBRT plans compared to the photon-only plans. The Arc-MBRT plans reduced the delivery time by half compared to Static-MBRT plans and were similar to VMAT plans, which further facilitates integration of pMLC-based mixed-beam radiotherapy into clinical practice.
Figure 1. Illustration of the treatment planning process to create Arc-MBRT plans. The upper half describes the steps for the electron beams, while the steps for the photon beams are described in the lower half. SSD: source-surface distance. DAO: direct aperture optimization. Eβ: electron arc with an energy of β MeV. X6: 6 MV photon arc/beam. MU: monitor unit.
Figure 2. Illustration of the beam setup of the Arc-MBRT plans for the four cases. The gantry angle range of the electron arcs is indicated in yellow and the static gantry angles (a) and gantry angle ranges (b), (c), (d) of the photon beams are indicated in red. WBI: whole breast irradiation. LNI: lymph node irradiation.
Figure 3. Dose contribution to the PTV of electron and photon beams in Arc-MBRT plans with the number of electron arcs ranging from 1 to 6 arcs for all four cases. Eβ: electron arc with an energy of β MeV. X6: VMAT arc with 6 MV photons.
Figure 4. DVH comparisons of Arc-MBRT plans with 2 and 6 electron arcs for each of the four cases.
Figure 5. Dose color wash comparison (top) on a representative transversal plane and DVH comparison (bottom) of the Arc-MBRT, Static-MBRT and photon-only plans for case 1 (right WBI). To distinguish between the different photon beam setups, the electron and photon beams are indicated in brackets.
Figure 6. Dose color wash comparison (top) on a representative transversal plane between the photon and electron dose contributions of the Arc-MBRT plan, dose color wash comparison (middle) and DVH comparison (bottom) of the Arc-MBRT, Static-MBRT and VMAT plans for case 2 (left WBI).
Figure 8. Measured (thin) and calculated (thick) isodose lines for dose distributions of case 2 (top) and case 3 (bottom). In (a) and (d) the total Arc-MBRT plans consisting of electron and photon arcs were delivered, in (b) and (e) the electron arcs were delivered with dynamic gantry and table, and in (c) and (f) the electron arcs were delivered with a collapsed gantry angle.
Table 3. Comparison of the dosimetric quantities of the Arc-MBRT, Static-MBRT and photon-only plans for case 1 (right WBI). The best value of each quantity within the group is highlighted in bold.
Table 4. Comparison of the dosimetric quantities of the Arc-MBRT, Static-MBRT and VMAT plans for case 2 (left WBI). The best value of each quantity is highlighted in bold.
Table 5. Comparison of the dosimetric quantities of the Arc-MBRT, Static-MBRT and VMAT plans for case 3 (right WBI+LNI). The best value of each quantity is highlighted in bold.
Table 6. Comparison of the dosimetric quantities of the Arc-MBRT, Static-MBRT and VMAT plans for case 4 (left WBI+LNI). The best value of each quantity is highlighted in bold.
A Non-Instrumental Green Analytical Method Based on Surfactant-Assisted Dispersive Liquid–Liquid Microextraction–Thin-Layer Chromatography–Smartphone-Based Digital Image Colorimetry(SA-DLLME-TLC-SDIC) for Determining Favipiravir in Biological Samples
Favipiravir (FAV) has become a promising antiviral agent for the treatment of COVID-19. Herein, a green, fast, high-sample-throughput, non-instrumental, and affordable analytical method is proposed based on surfactant-assisted dispersive liquid–liquid microextraction (SA-DLLME) combined with thin-layer chromatography–digital image colourimetry (TLC-DIC) for determining favipiravir in biological and pharmaceutical samples. Triton X-100 and dichloromethane (DCM) were used as the disperser and extraction solvents, respectively. The extract obtained after DLLME procedure was spotted on a TLC plate and allowed to develop with a mobile phase of chloroform:methanol (8:2, v/v). The developed plate was photographed using a smartphone under UV irradiation at 254 nm. The quantification of FAV was performed by analysing the digital images’ spots with open-source ImageJ software. Multivariate optimisation using Plackett–Burman design (PBD) and central composite design (CCD) was performed for the screening and optimisation of significant factors. Under the optimised conditions, the method was found to be linear, ranging from 5 to 100 µg/spot, with a correlation coefficient (R2) ranging from 0.991 to 0.994. The limit of detection (LOD) and limit of quantification (LOQ) were in the ranges of 1.2–1.5 µg/spot and 3.96–4.29 µg/spot, respectively. The developed approach was successfully applied for the determination of FAV in biological (i.e., human urine and plasma) and pharmaceutical samples. The results obtained using the proposed methodology were compared to those obtained using HPLC-UV analysis and found to be in close agreement with one another. Additionally, the green character of the developed method with previously reported protocols was evaluated using the ComplexGAPI, AGREE, and Eco-Scale greenness assessment tools. The proposed method is green in nature and does not require any sophisticated high-end analytical instruments, and it can therefore be routinely applied for the analysis of FAV in various resource-limited laboratories during the COVID-19 pandemic.
Introduction
The outbreak of the coronavirus represents one of the most dreadful viral diseases which has been endangering the lives of millions of people [1]. As a result, the World Health Organization declared the outbreak a pandemic in March 2020. However, the number of infections is still increasing to unprecedented levels [2]. Therefore, the immediate urge to identify new therapeutics to combat the COVID-19 pandemic prompted the development of several possible drugs, including hydroxychloroquine, ivermectin, remdesivir, and favipiravir (FAV) [3,4]. The results of the clinical trials revealed that favipiravir was a potential COVID-19 treatment option, as it ameliorated signs and symptoms and enhanced viral clearance [5].
As depicted in Figure 1, the purine nucleoside precursor FAV (6-fluoro-3-oxo-3,4-dihydropyrazine-2-carboxamide) is a pyrazine carboxamide derivative providing potent antiviral activity against a variety of RNA viruses [6]. In 2014, Fujifilm Toyama Chemical Company was the first to develop FAV as a treatment for influenza in Japan. After being consumed, the medication is absorbed into the body and assimilates into the cells, where it is ribosylated and phosphorylated by the host's cellular enzymes to produce the active metabolite T-705-ribofuranosyl-5'-triphosphate (T-705-RTP) [7,8]. Then, T-705-RTP integrates into the viral RNA in small amounts and preferentially inhibits the transcription and replication of the RNA-dependent RNA polymerase (RdRp) enzyme of influenza and many other RNA viruses [9,10]. Despite their importance in clinical controls, very few analytical techniques for the quantitative analysis of FAV have been reported. Electrochemical sensors [11][12][13], spectrofluorimetric methods [1,14], reverse-phase high-performance liquid chromatography (RP-HPLC) [15], liquid chromatography with tandem mass spectrometry (LC-MS/MS) [4,[16][17][18][19], and ultrahigh-performance liquid chromatography with tandem mass spectrometry [20] have been used to detect FAV in pharmaceutical products and biological matrices. Although these methods offer sufficient sensitivity, their high cost of analysis, need for bulky and sophisticated instruments, time consumption, and unsuitability for onsite detection are some of the major constraints preventing them from being of routine use in resource-limited settings during the COVID-19 pandemic.
In order to identify trace levels of these drugs in complex matrices, sample preparation plays a crucial role prior to instrumental analysis [21]. Today, sample preparation techniques tend to emphasise principles of green analytical chemistry (GAC). The main objective of GAC is the advancement of new-generation analytical methods with the purpose of reducing reagent consumption (possibly using biodegradable and low-toxicity solvents), minimising waste generation, consuming the least amount of energy, and ensuring operator/analyst safety, along with the automation and miniaturisation of the analytical process [22]. Dispersive liquid-liquid microextraction (DLLME) has garnered a lot of interest from analysts due to its ease of use, affordability, environmental friendliness, high enrichment factors, and quick extraction capabilities [23][24][25]. DLLME is performed after immediate injection of the extraction and disperser solvents into an aqueous phase. As a result, a cloudy solution is formed, consisting of small extraction solvent droplets scattered throughout the aqueous phase. The dense extraction solvent obtained by centrifugation settles as the sedimented phase and is used for further analysis [26]. DLLME has grown in popularity, as indicated by the growing number of applications in fields such as forensic, clinical, environmental, and pharmaceutical analysis, among many others [27][28][29][30][31][32].
TLC is one of the earliest planar chromatographic methods and is still used to separate and identify organic analytes in a mixture [33]. It is regarded as a sustainable chromatographic method owing to its benefits, such as (i) minimal solvent usage; (ii) easy execution; (iii) high sample throughput (i.e., simultaneous analysis of 8-10 samples using the same development solvent); (iv) cost-effectiveness; (v) capillary-driven solvent flow, requiring no pressure controls, pumps, valves, etc., and hence entailing no wear and tear or need for spare parts; and (vi) no need for specifically trained personnel. Herein, sustainability refers to the probability of system failure and the availability of resources to restore the system to an operational condition. Chromatographic methods are more sustainable when the probability of failure is lower and the availability of restoration resources is higher [34,35]. On the other hand, classical TLC has the limitation of being only a qualitative method. Therefore, quantitative analysis in TLC is carried out by its hyphenation with other detection techniques, such as UV-Vis spectrophotometry, densitometry, FID, and mass spectrometry (MS). However, although these hyphenated techniques are sensitive, they are very expensive and hard to afford in resource-limited settings [36][37][38][39].
As an alternative to this, combining TLC with smartphone-based digital image colourimetry (SDIC) can provide a simple, promising, reliable, feasible, and cost-effective alternative for quantitative analysis. Currently, DIC (digital image colourimetry) has attracted considerable interest from researchers to analyse different analytes in pharmaceuticals and to convert images into numerical data. DIC is a kind of colorimetric analysis in which digital images captured by mobile phones, webcams, digital cameras, and scanners are transformed to the RGB colour system, which is composed of three different colour intensities (red, blue, and green). Moreover, it has become a notable research area in analytical chemistry due to its affordability, ease of use, portability, and capacity to analyse data immediately. In comparison to conventional methods, TLC coupled with SDIC offers a number of benefits, such as having the least negative impacts on environment and human health, being able to provide a portable analytical system for a user-friendly experience, offering high sample throughput, energy-efficiency, and no requirement of specifically trained personnel [40,41].
Considering the significant burden on analytical laboratories during the COVID-19 pandemic, the present study proposes a novel and green analytical method based on the coupling of SA-DLLME with TLC-SDIC for instrument-free detection of FAV in pharmaceutical formulations, as well as human urine and plasma samples. The proposed method does not require any complex apparatus and uses a simple TLC setup, a smartphone camera, and freely available image analysis software. The results obtained by the proposed study were compared with those obtained by the HPLC method for FAV analysis. Furthermore, the green character of the developed method was evaluated using the ComplexGAPI, AGREE, and Eco-Scale greenness assessment tools.
Screening of TLC Parameters
Commonly used solvent systems in routine systematic toxicological analysis-viz., chloroform-acetone (8:2 v/v), ethyl acetate-ethanol (8:2 v/v), chloroform-methanol (8:2 v/v), and ethyl acetate-acetone (8:2 v/v)-were screened. In order to choose the best solvent system for FAV among the four solvent systems, a series of tests were conducted [42]. The combination of chloroform-methanol (8:2 v/v) had the best separation for FAV. Additionally, saturation times ranging from 10 to 30 min were also investigated, since this had a substantial impact on the chromatographic separation. A saturation time of 15 min yielded a promising performance. As a result, the chloroform-methanol (8:2 v/v) combination was chosen as the developing system, with a 15 min saturation period [43]. The Rf value was found to be 0.28.
Screening of Surfactant and Extraction Solvent
Prior to performing PBD, initial experiments were carried out to determine the most suitable surfactant for DLLME. The selected surfactant needs to possess characteristics such as miscibility with both the organic solvent and the aqueous sample, as well as the ability to speed up the emulsification of the organic solvent into the aqueous phase. Owing to their amphipathic structure, surfactants reduce the interfacial tension between two liquids and regulate the hydrophilicity and lipophilicity of the solution [44]. Triton X-100, CTAB, and SDS-three commonly used surfactants-were employed in a number of experiments to find the optimal surfactant to use as the disperser solvent for DLLME. Three different mixtures of 0.045 mmol L−1 of disperser solvent (Triton X-100, CTAB, and SDS) along with a constant volume of CF (200 µL) were prepared. With the help of a syringe, this mixture was quickly and forcefully added to an aqueous solution fortified with FAV at 10 µg mL−1. At this step, a turbid solution was formed, which was sonicated for 2 min before being centrifuged for 3 min at 5000 rpm. Among all of the tested surfactants, the best extraction efficiency was demonstrated by the non-ionic surfactant, i.e., Triton X-100 (Figure 2a). In comparison to ionic surfactants, non-ionic surfactants appeared to have a higher solubilisation capacity and sufficient hydrophobicity for the target analytes. Therefore, Triton X-100 was chosen as the disperser solvent for all further experiments.

The type of extraction solvent directly affects the preconcentration factor and the extraction yield; therefore, its choice is crucial for DLLME. The extraction solvent should be capable of extracting the desired analytes, immiscible in water, and should have a greater density than water. Additionally, it should show good chromatographic properties when spotted on a TLC plate. In accordance with these criteria, three commonly utilised extraction solvents-viz., DCM, CB, and CF-were evaluated for maximum extraction efficiency for DLLME. For this purpose, a series of experiments was carried out to determine the most appropriate of the three solvents tested. Then, 0.045 mmol L−1 of Triton X-100 was rapidly injected into the sample solution together with a constant volume of 200 µL of each extraction solvent. Similar to the earlier experiments, all other experimental conditions were identical, and the findings are depicted in Figure 2b. It is evident that employing DCM as the extraction solvent yielded higher recoveries. Therefore, DCM was selected as the extraction solvent.
Plackett-Burman Design (PBD)
The selected screening design (i.e., PBD) is a mathematically based statistical tool that can minimise the number of experiments and identify the variables that have an impact on the studied process in order to perform further optimisation [45]. In fact, the current study aimed to enable estimation of the strength of influence of each factor by using Fisher's test as well as a p-value comparison with α risk (α = 0.05). Moreover, the Pareto chart was used to arrange the interactions and effects in decreasing order. This is a quick and effective tool for finding the significant parameters among a large number of factors while minimising the time required and maintaining persuasive data on each variable. In addition, the major effect of each variable was determined as the difference between the average of measurements recorded at the high level (+) and the average of measurements recorded at the low level (−) of that factor. With the help of this, the impact of each factor could be assessed.
For this study, the seven independent factors involved in the PBD were as follows: (i) pH, (ii) ultrasonication time (s), (iii) ionic strength (%), (iv) volume of extraction solvent (µL), (v) volume of disperser solvent (mmol L−1), (vi) vortex time (min), and (vii) vortex speed (rpm). For each independent factor, there were two classification levels: (−1) denotes a low level, while (+1) denotes a high level, as shown in Table S1. A 2^(7−4) PBD was employed to identify significant factors. A total of 24 runs were carried out ((7 + 1) = 8 runs × 3 replicates = 24); each experiment was performed in triplicate and in a random order. The peak area was used as a response during the statistical analysis of these factors. In order to analyse the significant parameters, an analysis of variance (ANOVA) test was applied. Additionally, a t-test was used to identify significant variables that are represented in the Pareto chart in Figure S1 with a confidence level higher than 95% (p < 0.05). As a result, the ultrasonication time, pH, and volume of surfactant acting as the disperser solvent were determined to be the most significant variables. In contrast, the vortex speed, vortex time, and volume of the extraction solvent were less significant factors for the extraction of FAV, while the ionic strength was found to be a non-significant factor. In order to further optimise these three most significant variables (ultrasonication time, pH, and volume of surfactant), a central composite design (CCD) of experiments was used.
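For illustration, the 8-run coded screening matrix for seven two-level factors can be generated programmatically. This is a hedged sketch, not the authors' workflow: it assumes the pyDOE2 package is available, and the low/high settings used to decode the ±1 levels are placeholders standing in for the values of Table S1.

```python
# Sketch: build a Plackett-Burman screening matrix for 7 factors (8 runs)
# and decode the coded -1/+1 levels into (assumed) physical settings.
import numpy as np
from pyDOE2 import pbdesign  # assumed dependency

factors = ["pH", "sonication_s", "ionic_strength_pct", "V_extraction_uL",
           "C_disperser_mmol_L", "vortex_min", "vortex_rpm"]

coded = pbdesign(len(factors))          # 8 x 7 matrix of -1/+1 levels

# Illustrative low/high levels (placeholders, not the paper's Table S1 values)
low = np.array([3.0, 60.0, 0.0, 100.0, 0.5, 1.0, 500.0])
high = np.array([7.0, 180.0, 5.0, 300.0, 1.5, 5.0, 2500.0])

runs = low + (coded + 1.0) / 2.0 * (high - low)
for i, run in enumerate(runs, start=1):
    print(f"run {i}:", dict(zip(factors, np.round(run, 2))))
```

In practice, each of these eight runs would be executed in triplicate (24 experiments in total) and the peak areas fed into the ANOVA/t-test analysis described above.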
Central Composite Design (CCD)
A quadratic model was constructed between the dependent and independent variables, using the significant parameters that were identified during the screening procedure. The response surface was used for the study type, whereas the central composite was used for the design type. For the purpose of fitting quadratic polynomials, a CCD incorporates a 2^f factorial design with at least one point in the centre of the experimental area to produce rotatability or orthogonality characteristics and additional points such as star points [46]. In this design, the experimental runs were carried out randomly to reduce the impact of uncontrolled variables. For each set of experiments, three independent parameters (pH, ultrasonication time, and volume of surfactant) were specified at three levels (low, centre, and high), with coded values (−1, 0, +1) and star points −α and +α, respectively, as shown in Tables S2 and S3. The total number of experiments (N) was determined to be 18 for these parameters (f = 3) using the following equation: N = 2^f + 2f + N0 = 8 + 6 + 4 = 18; that is, the total number of experiments was calculated from eight factorial points (2^f), six axial points (2f), and four centre points (N0). For the CCD, the peak area served as the response. The "goodness of fit" of the acquired results was then evaluated using an ANOVA.
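The 18-run layout itself is easy to reproduce. The following minimal sketch (an illustration under the stated assumptions, not the authors' code) builds the rotatable three-factor CCD from its three point classes using only NumPy:

```python
# Sketch: assemble the coded 18-run central composite design for f = 3 factors:
# 2**3 = 8 factorial points, 2*3 = 6 axial (star) points at +/- alpha,
# and 4 centre points, where alpha = 2**(f/4) gives rotatability.
import itertools
import numpy as np

f = 3
alpha = 2 ** (f / 4)                                      # ~1.682

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=f)))
axial = np.vstack([alpha * np.eye(f), -alpha * np.eye(f)])
centre = np.zeros((4, f))

design = np.vstack([factorial, axial, centre])
print(design.shape)   # (18, 3) -> N = 2**f + 2*f + 4 = 18 runs
```

Each coded row would then be mapped onto the physical ranges of pH, ultrasonication time, and surfactant amount before the experiments are run.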
The response surface plots of peak area vs. significant factors are depicted in Figure 3, along with the most pertinent fitted response surfaces for the design. The curvatures of these plots represent the interactions of the factors. In addition, the desirability function (DF) is a well-known and established tool for simultaneously determining input variables that can provide optimal values for one or more responses. DF provides an easy and quick transformation of various responses into quantitative and qualitative results for a single measurement. The response is converted into a specific desirability function with a range of 0 to 1. A desirability of 0 denotes undesirable or minimal circumstances, whereas a desirability of 1 denotes the maximum. A series of graphs is formed for each independent variable, and a red line shows the resultant optimal value (Figure S2). In this study, the optimal values for these parameters were as follows: 1.13 mmol L−1 of the disperser solvent, a sonication time of 128.67 s, and pH 4.9. For ease of operation, the sonication time and pH were rounded to 130 s and 5, respectively.
Analytical Performance of the Method
Under the optimal conditions, the linearity, accuracy, relative recovery, LODs, and LOQs of the suggested SA-DLLME-TLC-SDIC approach were evaluated. The target analyte (i.e., FAV) was fortified into ultrapure water and biological matrices at different concentrations in the range of 5-100 µg/spot. The proposed method yielded a strong correlation between the concentration and the peak area of the analyte (R2 = 0.991-0.994). The ranges of the LODs and LOQs were determined to be 1.2-1.5 µg/spot and 3.96-4.29 µg/spot at signal-to-noise ratios of 3 and 10, respectively. Additionally, the repeatability and reproducibility of the proposed method were assessed using intraday and interday precisions (n = 5), which were represented as %RSD. Three distinct concentration levels were used to measure the intraday and interday precisions (%RSD), which were found to be less than 5 and 10%, respectively, as highlighted in Table 1. Furthermore, the enrichment factor (EF), enrichment recovery (ER%), accuracy, and relative recovery (RR%) were also evaluated and are presented in Table 2. The matrix effect (ME, expressed as RR%) was evaluated by utilising five distinct drug-free human plasma and urine samples. This was achieved by comparing the peak area of FAV from post-extracted plasma and urine samples at low, medium and high QC levels to those prepared in pure standards at similar concentrations. The RR% in both matrices was found to be in the range of 87-98% (Table 2), indicating that there was no significant matrix effect on the extraction efficiency of the DLLME procedure.
The preconcentration factor (PF) of the proposed method was found to be 50, as the initial volume of the sample was 10 mL and the final volume of the extract was 0.2 mL. The EF was determined as the ratio of the slope of the calibration curve of the FAV obtained by the proposed method and the slope of the calibration curve of its standard solution. The EF for FAV ranged between 35.1 and 53.9 under optimal conditions. This correlated to ER% findings ranging from 70.2 to 107.8%. Furthermore, in order to evaluate the stability of FAV, low- and high-QC samples were used in five replicates. The QC samples were assessed after six freeze-thaw cycles, held at ~4 °C for 12 h, and then thawed separately at room temperature. The variation of accuracy at each level was well within ±15%.
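Written out explicitly, the arithmetic connecting these quantities is (a restatement of the definitions above, not new data):

```latex
\mathrm{PF} = \frac{V_{\mathrm{sample}}}{V_{\mathrm{extract}}} = \frac{10\ \mathrm{mL}}{0.2\ \mathrm{mL}} = 50,
\qquad
\mathrm{EF} = \frac{\text{slope (proposed method)}}{\text{slope (standard solution)}},
\qquad
\mathrm{ER\%} = 100 \times \frac{\mathrm{EF}}{\mathrm{PF}}
```

so the reported EF limits of 35.1 and 53.9 translate directly into ER% values of 100 × 35.1/50 = 70.2% and 100 × 53.9/50 = 107.8%.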
Minor but deliberate changes in the chromatographic process parameters were applied to assess robustness, which was expressed as the percentage relative standard deviation (%RSD). Small changes were implemented by altering the composition, the volume of the mobile phase, and the saturation time within a range of ±10%. The outcomes demonstrated that there were no significant changes in the Rf values of the FAV, and the %RSD was found to be 0.89%. This indicates that the proposed method is robust and reliable.
Assessment of the Green Character of the Developed Procedure
From the perspective of GAC, evaluating the environmental friendliness of analytical techniques is essential. Since many different parameters are associated with analytical methodologies, it was essential to establish precise metric systems to measure each variable that might pose a risk in terms of its ecological impact on the environment and human safety [47,48]. There are a number of tools that can be helpful for the evaluation of greenness; however, the most well-known ones are the Green Analytical Procedure Index (GAPI), Analytical Eco-Scale, and Analytical GREEnness (AGREE) metrics. Herein, the green character of the proposed analytical method was evaluated using these three prevailing metrics. With the aid of these metrics, the assessment findings are presented in a very readable format.
The Analytical Eco-Scale is the first green assessment tool. This metric evaluates analytical procedures by deducting penalty points for each stage of the process that does not conform to GAC guidelines. Penalty points are assigned for each of the parameters of the defined procedure, including (i) amounts of reagents, (ii) occupational risks, (iii) waste, and (iv) energy, and the score is obtained as: Analytical Eco-Scale score = 100 − total penalty points. Table 3 displays the results of calculating the Eco-Scale score for the proposed method. The analytical approach is considered to have excellent greenness if the score is higher than 75. With an Eco-Scale score of 84 (Table 3), the developed approach can be regarded as having outstanding greenness.
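The bookkeeping behind this score is simple enough to express in a few lines. The sketch below is illustrative only: the paper reports the total (16 penalty points, giving a score of 84) but not the itemised split, so the individual values are assumed placeholders.

```python
# Sketch of the Analytical Eco-Scale calculation: score = 100 - sum of penalties.
penalty_points = {
    "reagents": 6,              # assumed split; only the total of 16 is reported
    "occupational_hazard": 3,
    "waste": 4,
    "energy": 3,
}
score = 100 - sum(penalty_points.values())
print(score, "-> excellent greenness" if score > 75 else "-> acceptable/inadequate")
# prints: 84 -> excellent greenness
```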
The second assessment tool is GAPI, which was introduced by Płotka-Wasylka in 2018. In GAPI, 15 zones are distributed among five pentagrams in a three-colour pictogram. Each segment represents a phase in the analytical process, from sample collection to waste disposal. The ComplexGAPI pictogram generated for the proposed methodology is shown in Figure 4a,b. The colour of each pentagram (e.g., red, yellow, and green) signifies the level of environmental impact of each step during the analysis. In this manner, the final GAPI pictogram offers a complete and rapid overview of the greenness of the analytical method. The majority of the pentagrams in Figure 4a,b are either yellow or green, which illustrates compliance with the GAC principles. Hence, it is possible to infer that the proposed analytical approach is sufficiently green and has no negative impact on environmental and human safety. Figure 4a,b show red pentagrams at positions 5 and 7, and at positions 1, 3, 5 and 7, respectively. These red pentagrams correspond to (1) sample collection (offline), (3) transportation (required), (5) extraction (required), and (7) usage of a non-green solvent. According to the GAPI pictograms (Figure 4a,b), the main advantage of the proposed method is that no high-end analytical instrument is required for the analysis of FAV; thus, the fifth pentagram related to instrumentation (F12-F15) is not applicable in the case of SA-DLLME-TLC-SDIC. The third evaluation tool is the AGREE method, which was developed in 2020 by Pena-Pereira et al. This tool provides a colourful clock-shaped pictogram. Each portion of the perimeter represents one of the 12 guiding principles of GAC. In the centre of the AGREE pictogram, a score is displayed. A colour scheme that ranges from green to yellow to red is used to symbolise each parameter, which is given a score between 0 and 1. Greenness is higher if the number is significantly closer to 1. This tool is more focused on energy use and waste production rather than being concerned with the toxic impact of specific chemicals. Figures 5 and 6 depict the overall AGREE score and the AGREE report sheets for the proposed analytical approach, respectively. It can be observed from Figures 5 and 6 that the suggested approach obtained an overall AGREE score of 0.69, indicating that it is an excellent green method. Tables 4 and 5 compare the suggested method to previously reported methods for identifying similar analytes based on greenness and various analytical parameters, respectively.

Figure 6. Greenness report sheet for the proposed method calculated by using 12 different greenness criteria with the AGREE software.
Application to Real Samples
Under optimised and validated conditions, the proposed method was successfully applied to quantify FAV in spiked biological matrices such as human urine and plasma samples, as well as in pharmaceutical formulations. Using the validated methods, the amount of FAV in relation to the indicated contents (400 and 800 mg/tablet) was also determined. In addition, Tables 6 and 7 display the outcomes of applications of the proposed method for quantitative determination of FAV in biological matrices and pharmaceutical samples, respectively. Moreover, comparisons were made between the results obtained by the proposed method and by the standard HPLC method for biological matrices and pharmaceutical samples. The results from the two approaches were compared using a t-test with a 95% level of confidence and were found to be in very close agreement. There were no significant differences between the two sets of results, as the experimental t-values (−1.2, −1.5, 1.10, and 0.44) were lower in absolute value than the critical t-values (2.131 and 2.919) at p = 0.05.
Table 4. Comparison between the proposed method and previously reported methods for the determination of FAV. Column headings: Methods, Eco-Scale, GAPI, AGREE, Ref.
Sample Collection
Plasma samples were obtained by centrifuging whole blood at 5000 rpm for 10 min. The whole blood was provided by the Rotary and Blood Bank Society Resource Centre, Chandigarh (India). Urine samples were obtained from three healthy volunteers aged between 28 and 40 years (two females and one male), who were also authors of this study. All of the biological samples were kept at ~4 °C and thawed at room temperature before analysis. Two distinct FAV tablets were procured from a local Chandigarh (India) market and labelled as having 800 and 400 mg of FAV per tablet, respectively.
Preparation of Standards and Samples
The stock solution of FAV was prepared at a concentration of 1 mg mL−1 in MeOH and stored at ~4 °C until needed. Working solutions of FAV at concentrations in the range of 5-100 µg/spot were prepared by appropriate dilution of stock solutions with ultrapure water. In order to imitate drug-protein binding under physiological conditions, biological samples such as urine and plasma were spiked with various amounts of FAV in the range of 5-100 µg/spot. These samples were then homogenised by vortex agitation for 5 min and incubated at 37 °C for 30 min [54].
Multivariate Analysis
Various experimental factors, including pH, volume of extraction and disperser solvents, sonication time, vortex agitation time, and vortex speed, which have significant impacts on DLLME extraction efficiency, were examined systematically. For this purpose, the following two-step strategy was utilised to optimise these factors using multivariate analysis: (i) Plackett-Burman design (PBD) to determine the significant parameters, and (ii) central composite design (CCD) to optimise the significant factors obtained by PBD. Multivariate analysis was carried out using the TIBCO STATISTICA software (Palo Alto, CA, USA; trial version).
SA-DLLME Procedure

Pharmaceutical Formulations

Five tablets from each dose (i.e., 800 and 400 mg) were weighed and their average weight was calculated. These tablets were then crushed and converted into a fine powder. The equivalent of the average weight of the tablets was dissolved in 10 mL of MeOH for each dose (i.e., 800 and 400 mg) and then sonicated for 10 min. To achieve a concentration of 10 µg mL−1, the filtrate was appropriately diluted using ultrapure water. Under optimal conditions, 10 µg mL−1 of FAV was spiked into 5 mL of ultrapure water adjusted to pH 5 with the help of a 0.1 M HCl solution. Thereafter, a mixture of DCM (100 µL) and Triton X-100 (1.13 mmol L−1) was instantly injected into the aqueous sample. This stage resulted in the formation of a cloudy solution with tiny droplets of the extraction solvent dispersed throughout the entire aqueous phase. In order to amplify the mass transfer of the analyte from the aqueous phase to the extraction solvent, the sample was sonicated for 130 s, followed by centrifugation at 5000 rpm for 3 min. After centrifugation, the sedimented phase was kept intact, while the supernatant was carefully discarded. The process of DLLME was completed within only 10 min. Subsequently, a TLC plate was spotted with 20 µL of the sedimented phase for further analysis.
Biological Samples
Urine samples were initially centrifuged and filtered to remove any extra debris. Furthermore, 1 mL of blank biological samples (i.e., human urine and plasma) was fortified with 10 µg mL−1 of FAV and incubated for 30 min at 37 °C to simulate drug-protein binding under physiological conditions. For the purpose of drug extraction, the biological samples were diluted to 5 mL with ultrapure water at pH 5. The samples were then subjected to the aforementioned DLLME procedure.
Thin-Layer Chromatography-Smartphone-Based Digital Image Colorimetry (TLC-SDIC) Procedure
With the aid of a micropipette, 20 µL (2 × 10 µL) of the sedimented phase obtained by DLLME was spotted on the marked start edge of the TLC plate (20 × 20 cm pre-coated silica gel 60 F254 aluminium-backed, purchased from Merck, Darmstadt, Germany) at a height of 1 cm. The TLC plate was put into a development chamber that had been pre-saturated with vapours of 10 mL of a mobile phase made up of CF:MeOH (8:2, v/v) after the spots were air-dried to remove any remaining solvent residues. Ascending mode was used to develop the plates up to 10 cm of solvent front migration away from the point of origin. Later, after 15-20 min of development, the TLC plate was taken out from the development chamber, allowed to air-dry, and then put in a UV cabinet at 254 nm for visualisation of spots. Under UV illumination, images of the developed TLC plates were captured with a smartphone camera. The FAV was visible as a blue spot against the light green background of the TLC plate. This image was transferred onto a laptop and saved in JPEG format. Furthermore, the image was split into R, G, and B channels (image > colour > split channel) using the freely available software ImageJ (National Institutes of Health, MD, USA). Since the green channel displayed the best sensitivity, it was chosen for quantitative analysis. The following steps were used to transform the spots of the G channel into peaks: (i) the "Rectangular selection tool" was used to select every spot at once; (ii) using the "plot lane tool", peaks were plotted from all of the spots; (iii) a median filter with a resolution of 5-10 pixels was applied to remove noise from the peak; (iv) a line was drawn at the bottom of each peak using the "line tool"; and (v) by clicking inside the peak with the "magic wand tool", the corresponding peak area was displayed. For statistical analysis, these peak areas were used (Figure 7). TLC densitograms of the standard, human urine, and plasma samples are depicted in Figure S3a-c, respectively.
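For readers who prefer a scriptable alternative to the ImageJ clicks described above, the same green-channel densitometry can be approximated in a few lines of Python. This is a hedged sketch, not the authors' pipeline: Pillow and NumPy are assumed, and the file name and lane coordinates are placeholders.

```python
# Sketch: green-channel lane densitometry of a TLC photograph.
import numpy as np
from PIL import Image

img = Image.open("tlc_plate_254nm.jpg").convert("RGB")   # placeholder file name
_, g, _ = img.split()                                    # keep the green channel
green = np.asarray(g, dtype=float)

lane = green[:, 120:140].mean(axis=1)      # average across one lane's width (assumed)
signal = lane.max() - lane                 # dark spot on a bright plate -> positive peak

# crude linear baseline between the lane ends, then integrate the peak
baseline = np.linspace(signal[0], signal[-1], signal.size)
peak_area = np.clip(signal - baseline, 0.0, None).sum()
print(f"peak area (arbitrary units): {peak_area:.1f}")
```

A median filter (as used in the ImageJ workflow) could be applied to the profile before integration, for example with scipy.ndimage.median_filter, if the trace is noisy.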
Method Validation
The proposed method was validated for linearity, precision, recovery, and sensitivity as per the guidelines of the International Conference on Harmonisation (ICH) [55]. For method validation purposes, ultrapure water, human urine, and plasma samples were spiked in a range from 5 to 100 µg mL−1 (which is equal to 5-100 µg/spot). The linearity of the proposed method was evaluated by plotting calibration curves between peak areas (obtained by image analysis with ImageJ) and their corresponding concentrations on the y- and x-axes for the aforementioned matrices. Linear regression analysis was used to determine the coefficient of determination (R2), slope, and intercept. The sensitivity of the proposed method was expressed as the limit of detection (LOD) and limit of quantification (LOQ). Furthermore, based on the assessment of the relative standard deviation (%RSD) at three different concentration levels of the calibration graph, the intraday and interday precisions (n = 5) were calculated. Additionally, the accuracy and relative recoveries at three different concentrations-20 µg/spot (low QC), 60 µg/spot (medium QC), and 100 µg/spot (high QC)-were evaluated.
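As a concrete illustration of this treatment, the regression and sensitivity figures can be reproduced with a generic least-squares script. The concentrations and peak areas below are assumed placeholders spanning the stated 5-100 µg/spot range, and the LOD/LOQ here follow the ICH residual-standard-deviation route rather than the S/N = 3 and 10 criterion applied in the paper.

```python
# Sketch: calibration linearity plus LOD/LOQ estimates for a TLC-SDIC curve.
import numpy as np

conc = np.array([5.0, 10.0, 25.0, 50.0, 75.0, 100.0])            # ug/spot
area = np.array([410.0, 830.0, 2050.0, 4100.0, 6050.0, 8120.0])  # assumed responses

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
r2 = 1.0 - ((area - pred) ** 2).sum() / ((area - area.mean()) ** 2).sum()

s_res = np.sqrt(((area - pred) ** 2).sum() / (conc.size - 2))  # residual SD of fit
lod = 3.3 * s_res / slope
loq = 10.0 * s_res / slope
print(f"R2 = {r2:.4f}, LOD = {lod:.2f} ug/spot, LOQ = {loq:.2f} ug/spot")
```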
HPLC Analysis
A Shimadzu LC-2010 HT HPLC system with a UV detector was used for the chromatographic analysis. FAV was separated using a C18 column (5 µm particle size, 250 mm length, 4.6 mm I.D.). A combination of 50 mM phosphate buffer (pH = 2.5) and acetonitrile at a ratio of 60:40 v/v was chosen as the mobile phase and pumped at a flow rate of 1 mL min−1 for the elution of the analyte of interest at a 30 °C column temperature. A wavelength of 323 nm was selected for the detection of FAV. After the DLLME procedure, the sedimented phase was completely evaporated and reconstituted in MeOH, 20 µL of which was subsequently injected into the HPLC system. The retention time of FAV was found to be 3.8 min. HPLC chromatograms of the standard, pharmaceutical, human urine, and human plasma samples are shown in Figure S4a-d, respectively.
Conclusions
The development of sustainable and green analytical protocols has attracted significant attention in the recent past. This has led to the replacement of hazardous organic solvents in order to reduce the risks to both humans and the environment. As a result, sample preparation is perfectly compatible with the principles of GAC. Herein, the present study proposes a simple, green, quick, high-sample-throughput, and cost-effective analytical approach for determining FAV in biological and pharmaceutical samples. The novel aspect of the developed method is the integration of SA-DLLME with TLC-SDIC, which provides a straightforward, instrument-less, and affordable analytical platform with the least consumption of electricity and minimal waste generation. With the help of SDIC, quantitative analysis can be carried out with a basic smartphone camera and open-source image processing software, without the need for any heavy, high-end instrumental techniques such as GC-MS or HPLC. The proposed procedures were evaluated using green metrics (i.e., the Analytical Eco-Scale, AGREE, and ComplexGAPI tools) and were found to be easier, sufficiently sensitive, and more environmentally friendly. The proposed method could be of immense use in forensic and clinical laboratories for medico-legal and pharmaceutical applications. Furthermore, this approach could serve as a stepping stone for the development of such analytical methods, which could be of immense use in resource-limited laboratories.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28020529/s1. Figure S1: Pareto chart for standardized effects; Figure S2: Profiles for predicted values and desirability functions for peak responses of FAV (the vertical red line indicates current values after optimization); Figure S3: TLC densitograms of FAV obtained after SA-DLLME-TLC-SDIC for (a) standard, (b) urine, and (c) plasma samples; Figure S4: HPLC chromatograms of (a) standard sample at 10 µg mL−1, (b) pharmaceutical sample, (c) urine sample, and (d) plasma sample; Table S1: Factors and their levels tested in PBD; Table S2: Factors and their levels tested in CCD; Table S3: Experimental design matrix of CCD.
Hydroxyurea Induces Site‐specific DNA Damage via Formation of Hydrogen Peroxide and Nitric Oxide
Hydroxyurea is a chemotherapeutic agent used for the treatment of myeloproliferative disorders (MPD) and solid tumors. The mutagenic and carcinogenic potential of hydroxyurea has not been established, although hydroxyurea has been associated with an increased risk of leukemia in MPD patients. To clarify whether hydroxyurea has potential carcinogenicity, we examined site‐specific DNA damage induced by hydroxyurea using 32P‐5′‐end‐labeled DNA fragments obtained from the human p53 and p16 tumor suppressor genes and the c‐Ha‐ras‐1 protooncogene. Hydroxyurea caused Cu(II)‐mediated DNA damage especially at thymine and cytosine residues. NADH efficiently enhanced hydroxyurea‐induced DNA damage. The DNA damage was almost entirely inhibited by catalase and bathocuproine, a Cu(I)‐specific chelator, suggesting the involvement of hydrogen peroxide (H2O2) and Cu(I). Typical free hydroxyl radical scavengers did not inhibit DNA damage by hydroxyurea, but methional did. These results suggest that crypto‐hydroxyl radicals such as the Cu(I)‐hydroperoxo complex (Cu(I)‐OOH) cause DNA damage. Formation of 8‐hydroxy‐2′‐deoxyguanosine (8‐OHdG) was induced by hydroxyurea in the presence of Cu(II). An electron spin resonance spectroscopic study using N‐(dithiocarboxy)sarcosine as a nitric oxide (NO)‐trapping reagent demonstrated that NO was generated from hydroxyurea in the presence and absence of catalase. In addition, the generation of formamide was detected by both gas chromatography‐mass spectrometry (GC‐MS) and time‐of‐flight‐mass spectrometry (TOF‐MS). A high concentration of hydroxyurea induced depurination at DNA bases in an H2O2‐independent manner, and endonuclease IV treatment led to chain cleavages. These results suggest that hydroxyurea could induce base oxidation as the major pathway of DNA modification and depurination as a minor pathway. Therefore, it is considered that DNA damage by hydroxyurea participates in not only anti‐cancer activity, but also carcinogenesis.
Larger studies have documented an increased risk of leukemia for patients with MPD on hydroxyurea therapy. 8-10) De Simone et al. described the occurrence of multiple squamous cell skin carcinomas in a patient treated with hydroxyurea for chronic myelogenous leukemia. 11) Hydroxyurea has been experimentally shown to have clastogenic, 12) teratogenic, 13) and, in some settings, mutagenic effects. 14) Recently, it has been reported that hydroxyurea increases DNA mutations in young patients with sickle cell disease. 15) Hydroxyurea produces chromosome damage and mutation in cultured cells, 16,17) whereas it is an inactive mutagen in bacteria. 18) Although hydroxyurea does not bind to DNA, 18) it has been reported that nitric oxide (NO) is generated from it. 19-21) Therefore, there remains a possibility that hydroxyurea induces DNA damage via the formation of reactive nitrogen and oxygen species.
In the present study, we examined site-specific DNA damage induced by hydroxyurea using 32P-5′-end-labeled DNA fragments obtained from the human p53 and p16 tumor suppressor genes and the c-Ha-ras-1 protooncogene. We investigated the effects of scavengers and metal chelators on the DNA damage induced by hydroxyurea in order to clarify the reactive species causing the DNA damage. We also analyzed the formation of 8-hydroxy-2′-deoxyguanosine (8-OHdG), a characteristic oxidative DNA lesion, by using an electrochemical detector coupled to a high-pressure liquid chromatograph (HPLC-ECD). Furthermore, to clarify the mechanism of DNA damage, we studied the Cu(II)-mediated autoxidation process of hydroxyurea by using an electron spin resonance (ESR) spectrometer. We analyzed the products generated by Cu(II)-mediated hydroxyurea oxidation by using gas chromatography-mass spectrometry (GC-MS) and time-of-flight-mass spectrometry (TOF-MS).
The preferred cleavage sites were determined by direct comparison of the positions of the oligonucleotides with those produced by the chemical reactions of the Maxam-Gilbert procedure 29) using a DNA-sequencing system (LKB 2010 Macrophor, Pharmacia Biotech, Uppsala, Sweden). The relative amounts of oligonucleotides from the treated DNA fragments were measured with a laser densitometer (LKB 2222 UltroScan XL, Pharmacia Biotech).
Analysis of 8-OHdG formation in calf thymus DNA by hydroxyurea
The amount of 8-OHdG was measured by a modification of the method of Kasai et al. 30) Calf thymus DNA fragments (100 µM) were incubated with hydroxyurea and 20 µM CuCl2 in 200 µl of 4 mM sodium phosphate buffer (pH 7.8) containing 5 µM DTPA for 60 min at 37°C. After ethanol precipitation, the DNA fragments were digested to the nucleosides with nuclease P1 and CIP, and analyzed by HPLC-ECD, as described previously. 31)

ESR spin-trapping studies

ESR spectra were measured at room temperature (25°C) by using a JES-TE-100 spectrometer (JEOL, Tokyo) with 100-kHz field modulation. Fe(DTCS)3 was used as a spin-trapping agent. 32) The reaction mixture contained 100 mM hydroxyurea, 20 µM
GC-MS and TOF-MS analyses
The reaction mixture, containing 10 mM hydroxyurea and 20 µM Cu(II) in 10 mM sodium phosphate buffer (pH 7.8) containing 5 µM DTPA, was incubated for 2 h at 37°C. GC-MS analysis was performed on a GC-17A gas chromatograph (Shimadzu, Kyoto) equipped with a CAM capillary column (0.325 mm×15 m; J&W Scientific, Köln, Germany). The constant flow rate was 19.0 ml/min. The injection (injection volume: 1 µl) was performed in the splitless mode with the temperature of the injection port set at 230°C. The temperature of the GC oven was maintained at 50°C for 1 min and then raised to 215°C at a rate of 15°C/min, and finally left at the latter temperature for 4 min. Detection of positive ions was provided by a QP-5000 mass detector using electron impact ionization (Shimadzu). TOF-MS analysis was also performed on a Voyager B-RP (PerSeptive Biosystems, Framingham, MA) equipped with a nitrogen laser (337 nm, 3 ns pulse) to determine formamide formation. A matrix solution (α-cyano-4-hydroxycinnamic acid) was added to the sample.
RESULTS
Damage to 32P-labeled DNA fragments by hydroxyurea in the presence of metal ions

The extent of damage to double-stranded DNA induced by hydroxyurea in the presence of Cu(II) was estimated by gel electrophoretic analysis (Fig. 1). Oligonucleotides were detected on the autoradiogram as a result of DNA damage. Hydroxyurea induced DNA damage in a dose-dependent manner. Even without piperidine treatment, oligonucleotides were formed by hydroxyurea (Fig. 1A), indicating breakage of the deoxyribose phosphate backbone. The amount of oligonucleotides was increased by piperidine treatment (Fig. 1B).
Since an altered base is readily removed from its sugar by piperidine treatment, it is considered that the base modification was induced by hydroxyurea. The DNA damage was significantly enhanced by the addition of 100 µM NADH (Fig. 1C). NADH is a reductant existing at high concentrations (100-200 µM) in cells. 33) The magnitude of the DNA damage induced by 10 µM hydroxyurea with NADH was greater than that caused by 100 µM hydroxyurea without NADH. Hydroxyurea did not induce DNA damage in the presence of Co(II), Ni(II), Mn(II), Mn(III), Fe(III) or Fe(III)EDTA (data not shown).
Effects of scavengers and bathocuproine on DNA damage induced by hydroxyurea
Catalase and bathocuproine, a Cu(I)-specific chelator, inhibited the DNA damage, suggesting the involvement of hydrogen peroxide (H2O2) and Cu(I). Typical free hydroxyl radical (•OH) scavengers-ethanol, mannitol, sodium formate and DMSO-showed no or little inhibitory effect on the DNA damage, whereas methional was inhibitory. Methional can scavenge not only •OH, but also species with weaker reactivity than •OH. 34) SOD enhanced the DNA damage. Fig. 3 shows an autoradiogram of DNA fragments treated with a high concentration of hydroxyurea in the presence of Cu(II) and catalase, followed by endonuclease IV treatment. Endonuclease IV is a DNA repair enzyme responsible for excision, which cleaves the DNA strand at an abasic site. 35) DNA cleavage increased with increasing concentrations of hydroxyurea (Fig. 3A). Little or no oligonucleotide was produced without endonuclease IV (data not shown). On the other hand, the intensity of damage to the double-stranded DNA fragments treated with endonuclease IV plus UDG, which is a DNA repair enzyme responsible for excision of uracil, 36) was similar to that with endonuclease IV alone (Fig. 3B). These results suggest that hydroxyurea induces formation of abasic sites rather than deamination.
Site specificity of DNA cleavage by hydroxyurea
The patterns of DNA cleavage induced by hydroxyurea in the presence of Cu(II) were determined by the Maxam-Gilbert procedure. 29) The relative intensity of DNA cleavage obtained by scanning the autoradiogram with a laser densitometer is shown in Fig. 4. Hydroxyurea frequently generated piperidine-labile sites at thymine and cytosine residues in double-stranded DNA fragments obtained from the human p53 (Fig. 4A) and p16 (Fig. 4B) tumor suppressor genes and the c-Ha-ras-1 protooncogene (data not shown). Similar DNA cleavage patterns were observed in the presence of NADH in the same system (data not shown). When endonuclease IV treatment was performed instead of piperidine treatment, DNA damage was observed mainly at guanine residues (data not shown).

Formation of 8-OHdG in calf thymus DNA by hydroxyurea in the presence of Cu(II) and NADH

Using HPLC-ECD, we measured the content of 8-OHdG, an indicator of oxidative base damage, 37-39) in calf thymus DNA treated with hydroxyurea in the presence of Cu(II). The amount of 8-OHdG increased with increasing hydroxyurea concentration (Fig. 5). The addition of NADH enhanced hydroxyurea plus Cu(II)-induced 8-OHdG formation. Hydroxyurea did not cause 8-OHdG formation in the absence of Cu(II) (data not shown).
Production of nitric oxide and formamide from hydroxyurea
We analyzed the ESR signal generated from hydroxyurea in the presence of Cu(II) using DTCS as a NO spin-trapping agent. As shown in Fig. 6A, the ESR spectra showed distinct triplet signals with aN = 1.27 mT and giso = 2.040, reasonably assigned to Fe(DTCS)2(NO). 32) Catalase did not inhibit NO production from hydroxyurea (Fig. 6B). The formation of formamide was also measured by both GC-MS and TOF-MS. GC-MS analysis indicated a peak eluting at 6.09 min corresponding to formamide, with the expected molecular ion M+ at m/z 45 (data not shown). Mass spectral analysis using TOF-MS also confirmed formamide formation by comparison of the mass spectrum with that of an authentic standard; m/z = 46 (M+H)+. These results suggest that NO and formamide were generated from hydroxyurea.
DISCUSSION
The present study has demonstrated that hydroxyurea caused DNA damage in the presence of Cu(II). Furthermore, Cu(II)-mediated 8-OHdG formation increased with increasing concentration of hydroxyurea. When NADH, an endogenous reductant, was added, oxidative DNA damage was enhanced. 8-OHdG is an indicator of oxidative stress and has been well-characterized as a premutagenic lesion in mammalian cells. 37-39) Therefore, it is considered that Cu(II)-mediated oxidative DNA damage may participate in mutagenic and carcinogenic processes caused by hydroxyurea.
In order to clarify what kinds of reactive oxygen species cause oxidative DNA damage, the effects of various scavengers on DNA damage and its site specificity were examined. The inhibitory effects of catalase and bathocuproine on DNA damage suggest the involvement of H2O2 and Cu(I). Furthermore, typical •OH scavengers showed little or no inhibitory effect on DNA damage, whereas methional inhibited it. Methional scavenges not only •OH, but also a variety of reactive species other than •OH. 34) Thus, •OH does not appear to play an important role in DNA damage. The predominant DNA cleavage sites were thymine and cytosine residues. This result supports the involvement of reactive species other than •OH, because •OH causes DNA damage at all nucleotides with little site specificity. 40,41) However, the possibility that •OH participates in DNA damage cannot be ruled out. Cu(II) binds to DNA in a site-specific manner and is reduced to Cu(I) to react with H2O2 generated from hydroxyurea, resulting in formation of DNA-Cu(I)-hydroperoxo complexes such as DNA-Cu(I)OOH. This complex may release •OH, which would immediately attack adjacent DNA constituents before being scavenged by •OH scavengers. 42) A high concentration of hydroxyurea induced depurination at DNA bases in the presence of catalase, and endonuclease IV treatment led to chain cleavages. Furthermore, ESR spectrometry revealed that NO was generated from hydroxyurea and that catalase did not inhibit the formation of NO. The GC-MS and TOF-MS analyses demonstrated that hydroxyurea non-enzymatically produced formamide in addition to O2−. Overall, it is considered that hydroxyurea can generate NO and formamide. Several papers have reported that NO can induce deamination of DNA. 43-47) Under these experimental conditions, however, UDG treatment did not enhance DNA damage, suggesting little or no deamination. It has been reported that peroxynitrite generated from NO plus O2− participates in not only 8-OHdG formation, 48) but also depurination of guanine via 8-nitroguanine formation. 49,50) An apurinic site, which is probably produced by depurination of guanine in DNA, is potentially mutagenic, and induces G:C→T:A transversions. 50-52) Therefore, it is considered that NO may react with O2− to cause DNA damage and participate in mutation by a high concentration of hydroxyurea.
Possible mechanisms of oxidation and depurination of DNA bases induced by hydroxyurea are shown in Fig. 7, and could account for most of the observations. The oxidation pathway, in which hydroxyurea itself causes Cu(II)-mediated oxidative DNA damage, is the major pathway. During the autoxidation of hydroxyurea, Cu(II) is reduced to Cu(I), which reacts with O2 to generate O2− and subsequently H2O2. H2O2 reacts with Cu(I) to form crypto-hydroxyl radicals, such as the Cu(I)-hydroperoxo complex, capable of causing site-specific DNA damage. Endogenous reductants, such as NADH, reduce the nitroso radical to form a redox cycle, resulting in enhancement of oxidative DNA damage. The concentration of NAD(P)H in certain tissues was estimated to be as high as 50-200 µM. 33) The possibility that chemicals are non-enzymatically reduced by NAD(P)H in vivo has been proposed. 53,54) There is also a minor pathway, the depurination pathway. Hydroxyurea is autoxidized by Cu(II) into the nitroso radical, followed by the production of NO and formamide. Several reports have revealed that hydroxyurea is metabolized to release NO. 19-21) Consistent with these reports, 20) we demonstrated that a small amount of NO was generated from hydroxyurea in an H2O2-independent manner.
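The copper redox cycle at the core of this oxidation pathway can be summarised in a few elementary steps. The scheme below is a schematic restatement of the pathway described above and in Fig. 7, not a fitted kinetic model:

```latex
\begin{aligned}
\text{hydroxyurea} + \mathrm{Cu(II)} &\longrightarrow \text{nitroso radical} + \mathrm{Cu(I)}\\
\mathrm{Cu(I)} + \mathrm{O_2} &\longrightarrow \mathrm{Cu(II)} + \mathrm{O_2^{\bullet-}}\\
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} &\longrightarrow \mathrm{H_2O_2} + \mathrm{O_2}\\
\mathrm{Cu(I)} + \mathrm{H_2O_2} &\longrightarrow \mathrm{Cu(I)\!-\!OOH} \longrightarrow \text{site-specific base oxidation}
\end{aligned}
```

NADH feeds this cycle by re-reducing the hydroxyurea-derived radical, which is consistent with the observed enhancement of DNA damage.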
The biological significance of copper has recently drawn much interest in connection with carcinogenicity and mutagenicity. Copper is an essential component of chromatin and is known to accumulate preferentially in the heterochromatic regions.55,56) A case-cohort study showed a U-shaped relation between plasma copper levels and the risk of developing breast cancer.57) Copper sulfate showed clastogenic effects on the bone marrow chromosomes of mice in vivo.58) Copper has the ability to catalyze the production of reactive oxygen species to mediate oxidative DNA damage.59-62) The present work suggests that copper is an important factor in DNA damage by hydroxyurea. Hydroxyurea is an often-used chemotherapeutic drug for various malignancies and myeloproliferative disorders (MPD).63,64) It is thought to have lower mutagenic potential than alkylating agents. The mutagenic and carcinogenic potential of hydroxyurea, however, may be a serious risk associated with long-term therapy. Hydroxyurea blocks DNA synthesis and DNA repair in vitro.65) Furthermore, in this study, we demonstrated that hydroxyurea could induce metal-mediated DNA damage. Therefore, it is considered that hydroxyurea participates not only in anti-cancer activity but also in carcinogenesis via DNA damage. Careful consideration of safety is required when hydroxyurea is used as a chemotherapeutic drug.
Squamous cell carcinoma and basal cell carcinoma of the lips: 25 years of experience in a northeast Brazilian population
Background The lips are the transition zone between the facial skin and the oral mucosa and are the site of alterations related to a broad spectrum of etiologies. Squamous cell carcinoma (SCC) and basal cell carcinoma (BCC) are the most prevalent neoplasms affecting lips. This study evaluated the demographic and clinicopathological features of the SCC and BCC in the lip. Material and Methods A retrospective cross-sectional descriptive study (1994-2019) was carried out. Demographic and clinicopathologic data were collected from a hospital's dermatological service and an oncologic hospital. The data were submitted to descriptive analysis and Pearson's chi-square and Fisher's exact tests (p ≤ 0.05). Results 417 medical records were analyzed, of which 323 corresponded to SCC (77.5%) and 94 to BCC (22.5%). SCC showed more frequency in males (58.8%) and BCC in females (54.3%). The lower lip was significantly affected in male patients (p < 0.0001) and by both neoplasms (70.6% and 56.4%, respectively; p = 0.014). SCC and BCC were mainly treated with surgery (88.3% and 93.2%, respectively). Surgical margin was frequently negative in SCC and BCC (87%; 72.3%, respectively), and no recurrence was observed in 79.9% of SCC and 69.1% of BCC cases. Conclusions SCC was more frequent in male patients, while BCC showed more frequency in female patients. Both neoplasms mainly affect the lower lip. Understanding the epidemiological profile of these lesions in the lip, as well as their etiology and clinical features, is fundamental for appropriate clinical conduct and the creation and/or amplification of preventive measures. Key words: Epidemiology, oral pathology, oral mucosal lesions.
Introduction
The lips are the anterior limit of the oral cavity and consist of the transition zone between the facial skin and the oral mucosa (1). These anatomical structures play an important role in facial expression, chewing, phonation, and tactile sensation, and contribute to facial aesthetics. The lips may also be the site of clinical and pathological alterations related to a broad spectrum of etiologies, ranging from traumatic, inflammatory, and infectious lesions to malignant neoplasms (1-3). Lip cancer corresponds to 23.6%-30% of all oral cavity malignant neoplasms, and the lower lip is the most frequently affected (90%), followed by the upper lip (7%) and the labial commissure (3%). Among the malignant neoplasms affecting the lip, squamous cell carcinoma (SCC) and basal cell carcinoma (BCC) are the most prevalent (4). Ultraviolet (UV) radiation is the main etiological factor for the development of both SCC and BCC in the lip. These neoplasms may also exhibit similar clinical presentations, such as hard, firm, crusted, and painless ulcers (4-6). SCC has a favorable prognosis, with an 82.1% 5-year survival rate; however, it has a worse prognosis than BCC, whose 5-year survival rate is over 90% (7,8). In this context, we investigated the occurrence and the demographic and clinicopathological characteristics of SCC and BCC of the lip over 25 years in a dermatological service and an oncologic hospital in northeastern Brazil.
Material and Methods
This retrospective cross-sectional study investigated the occurrence of squamous cell and basal cell carcinoma of the lips diagnosed at the Dermatology Service of Hospital das Clínicas of the Federal University of Pernambuco (UFPE) and the Cancer Hospital of Pernambuco from 1994 to 2019. Demographic and clinicopathologic data regarding sex, age, anatomic site, clinical aspect, TNM grading system, histopathological diagnosis, surgical margin, and recurrence were collected from the patients' medical records. These data were not available for all SCC and BCC cases; to be included in the present study, cases needed to contain information about the histopathological diagnosis. Cases that did not affect the lips and/or had an undetermined diagnosis were excluded from the sample. Clinical cancer staging was classified according to the American Joint Committee on Cancer - Cancer Staging Manual (9). The data were submitted to descriptive analysis using SPSS (Statistical Package for Social Sciences; version 22.0; IBM, Chicago, IL, USA). Pearson's chi-square test was performed to assess the association between the anatomical location of the lesions and clinicopathologic characteristics (sex, histopathological diagnosis, clinical stage, and treatment). The significance level was set at 5% (p ≤ 0.05) for all tests.
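For illustration, the following sketch (Python, with scipy assumed available) runs a Pearson chi-square association test of the kind described above on the 2x2 table that can be reconstructed from the counts reported in the Results below (lower lip vs. other sites, by neoplasm type); whether the authors used exactly this table is an assumption.

```python
# A minimal sketch of the chi-square association test, using counts
# reconstructed from the Results (SCC: 228 of 323 on the lower lip;
# BCC: 53 of 94); the exact table the authors analyzed is an assumption.
from scipy.stats import chi2_contingency

table = [
    [228, 323 - 228],  # SCC: lower lip vs. other sites
    [53, 94 - 53],     # BCC: lower lip vs. other sites
]

# scipy applies Yates' continuity correction by default for 2x2 tables.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # p is approximately 0.014
```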
Results
A total of 417 patients were diagnosed with SCC and BCC during the period studied. Of these, 323 (77.5%) lip carcinomas were histopathologically diagnosed as SCC and 94 (22.5%) as BCC in both centers. SCC was diagnosed more frequently in males (58.8%; n = 190), while BCC was diagnosed more often in females (54.3%; n = 51) (Fig. 1). The mean age for SCC was 64.7 ± 14.8 years, while BCC presented a mean age of 65 ± 15.3 years. SCC was more frequent in patients between the sixth and eighth decades of life, and BCC was more frequent in the eighth decade of life (Fig. 1). SCC and BCC significantly affected the lower lip (70.6%, n = 228 and 56.4%, n = 53, respectively; p = 0.014) (Fig. 1; Table 1). It was observed that BCC cases initially affected the skin, with the lesion extending to the lip vermilion. When analyzing the association between sex and anatomical location, the lower lip of male patients was more affected than the upper one (p < 0.0001) (Table 1). Of the 417 patients, 48% reported outdoor occupational activities related to chronic sun exposure. SCC was frequently described clinically as an ulcer or plaque, whereas BCC was most often described as a nodule or papule. Concerning the TNM grading system, SCC cases presented a high frequency of tumors with sizes ≤ 20 mm (T1) (39.7%) and > 20 mm to ≤ 40 mm (T2) (42.9%), as well as absence of regional lymph node metastasis (N0) (90.4%) and distant metastasis (M0) (84%) (Fig. 2). In contrast, only 5 cases of BCC presented this information: two cases presented tumors with size > 20 mm to ≤ 40 mm (T2), one case with size ≤ 20 mm (T1), one case with size > 40 mm (T3), and one case with size > 40 mm invading adjacent structures (T4). In addition, three cases showed absence of regional lymph node metastasis (N0), one case showed metastasis in a single ipsilateral lymph node > 30 mm and ≤ 60 mm (N2), and one case showed metastasis in a single ipsilateral lymph node > 60 mm (N3). Regarding distant metastasis, four cases presented absence (M0) and one showed presence (M1) of distant metastasis. Regarding the clinical stage, SCC frequently exhibited clinical stages I (39.8%; n = 62) and II. As for the treatment approach, SCC was mainly treated with surgery (88.3%; n = 264), followed by surgery with radiotherapy (11.4%; n = 34) and radiotherapy alone (0.3%; n = 1), while BCC was most frequently treated with surgery (93.2%; n = 69), followed by surgery with radiotherapy (6.8%; n = 5). No significant association was observed between the anatomical location and the treatment (p = 0.674) (Table 1). The surgical margin was frequently negative in SCC and BCC (87%, n = 281 and 72.3%, n = 68, respectively), and no recurrence was observed in 79.9% (n = 258) of SCC cases and 69.1% (n = 65) of BCC cases (Fig. 2). No significant association was observed between the type of neoplasm and the presence of regional lymph node metastasis, distant metastasis, treatment, cancer clinical stage, surgical margin, or recurrence.
Discussion
Cancer is a public health problem worldwide, and its incidence has been increasing, with 354,864 new cases of oral cancer reported in 2018. Lip neoplasms differ etiologically from intraoral lesions, since the lips are highly exposed to UV radiation (10-12). UVA and UVB rays act dose-dependently and cumulatively in initiating and promoting carcinogenesis due to direct damage to the DNA structure (13). The present study was conducted in two hospitals in northeast Brazil, a region close to the equator and therefore more prone to UV radiation incidence, which consequently contributes to increased skin cancer (13).
SCC and BCC of the lips are malignant neoplasms arising from keratinocytes, mainly as a result of chronic UV radiation exposure. SCC mostly affects male patients between the fifth and sixth decades of life (5,14). In the present study, 48% of the patients reported an outdoor occupational activity related to chronic sun exposure, and 58.8% of patients diagnosed with SCC were males between the sixth and eighth decades of life. Our results are similar to other Brazilian studies, indicating that most patients are affected after the fourth decade of life (7,15,16). Studies indicate that SCC frequently affects the lower lip, while BCC is rare in the lower lip and usually affects the upper lip (7,15-18). These results differ from the present study, since we observed that both SCC and BCC presented a significant preference for the lower lip, and the frequency of carcinomas in the lower lip was significantly higher in male patients. We believe this change in the epidemiological profile of lip BCC is due to the extension of skin tumors to the lower lip, causing a greater prevalence in this anatomic site. As is known, UVB is the main BCC predisposing factor and increases its development risk by 1.5-fold (17,18). This finding is explained by the anatomical lip position, which allows a higher incidence of UV radiation in this anatomic site, initiating and promoting lip carcinogenesis. The lower frequency in female patients may be explained by the use of lipstick, which acts as a protective factor, while males may have more prolonged outdoor occupational sun exposure and later retirement compared with women (5,7,19). Initially, lip SCC presents clinically as a plaque/crusted lesion that may exhibit extensive ulcers, raised edges, and/or signs of infiltration in advanced stages.
In contrast, head and neck BCC usually presents as a well-defined papule or nodule of slow growth with a pearly and telangiectatic central area, which may show ulcerated and pigmented areas (20,21). Our study observed that SCC was mainly described clinically as an ulcer or plaque and BCC as a nodule or papule. In this context, the differential diagnosis of SCC and BCC will vary according to the clinical presentation of the lesion: plaque, crusted, and ulcerated lesions call for differential diagnosis with late-stage herpes simplex and traumatic ulcers, while papular and nodular lesions may resemble trichoepithelioma, and pigmented lesions call for differential diagnosis with melanoma, seborrheic keratosis, and nevus (22,23). Most cases of SCC and BCC of the lip are diagnosed in the initial clinical stages, exhibiting a small size, absence of lymph node metastasis, and absence of distant metastasis (10,15). Most cases in the present study were classified as T1/T2. This can be justified by the anatomical site affected by these carcinomas: the lip is an anatomical structure with aesthetic appeal, and patients quickly notice changes in it, leading them to seek clinical care and treatment (1,24). Furthermore, these neoplasms are characterized by less aggressive clinical behavior when they affect the lips (8,10), which may also explain the absence of a positive margin in most cases in the present study.
It is known that, in general, SCC exhibits more aggressive clinical behavior than BCC. In the initial stages, the treatment of these neoplasms consists of surgical resection; however, radiotherapy and chemotherapy may be necessary to treat advanced clinical stages of these lesions (1,7,8,20,25-27). Our results showed that surgery was the most frequent treatment approach for SCC and BCC. Lip SCC exhibits a 5-year survival rate of 82.1% and a distant metastasis rate of 2-10%, depending on cell differentiation, location, and tumor size. In contrast, the 5-year survival rate of lip BCC exceeds 90% due to the low rate of distant metastases (0.1%) (1,7,8,16,25-27). Our results show only 12.4% and 9.6% recurrence in SCC and BCC cases, respectively. Although survival rates are favorable in this anatomical site, treatment-related consequences can compromise the patient's quality of life, since surgical treatment affects essential functions such as chewing, swallowing, and phonation. It may also cause aesthetic consequences, interfering with the patient's lip-image self-perception and, consequently, self-esteem (1,26). Thus, it is crucial to perform prevention campaigns to raise awareness of using sunscreen and hats, especially in tropical countries, due to the high risk of chronic exposure to UV radiation (26).
Our study investigated the occurrence of lip cancer in two reference hospitals for the diagnosis of dermatological lesions and oncological treatment in northeastern Brazil. Although our results may serve as the basis for further studies evaluating the clinicopathological characteristics of lip cancer, some limitations need to be considered. It is important to emphasize that the analyzed data represent a specific sample from one Brazilian location. The lack of information, such as the TNM grading of BCC cases, is also a limitation. Despite this, we highlight that multicenter studies help in understanding the occurrence profiles of these neoplasms and can cover the heterogeneity of the population, especially in countries of continental dimensions such as Brazil.
In conclusion, our study showed that SCC was more frequent in male patients and BCC in female patients.
Lesions in the lower lip were significantly frequent in male patients, and both SCC and BCC significantly affected this anatomical site. SCC was more frequent in patients between the sixth and eighth decades of life and BCC in the eighth decade of life. Understanding the epidemiological profile and etiology of these lesions in the lip is essential for planning preventive measures in health education, since the lip is an important functional and aesthetic region with a strong relationship to quality of life and individual self-esteem.
Fig. 1: Demographic profile of squamous cell carcinoma (SCC) and basal cell carcinoma (BCC). (A) Distribution by sex, showing a high frequency of male patients diagnosed with SCC and female patients diagnosed with BCC (the number of cases (n) is shown inside the bars). (B) Age distribution according to the neoplasm. A high frequency of SCC was diagnosed between 50 and 79 years of age, while BCC showed a high frequency between 70 and 79 years of age (the number of cases (n) is shown above the dots in the lines). (C) Frequency of neoplasms diagnosed according to anatomic site. Together and individually, SCC and BCC more often affected the lower lip.
Fig. 2: Clinicopathological characteristics of squamous cell carcinoma (SCC) and basal cell carcinoma (BCC). (A) TNM grading system information. SCC exhibited a high frequency of tumors with sizes ≤ 20 mm (T1) and > 20 mm to ≤ 40 mm (T2), as well as absence of regional lymph node metastasis (N0) and distant metastasis (M0). The data correspond to the medical records that presented information about the variable (SCC = 156 cases). (B) Clinical stage information. SCC exhibited a high frequency of tumors in stages I and II, while stage IV was the most frequent in BCC. The percentages correspond to the medical records that presented information about the variable (SCC = 156 cases; BCC = 5 cases). (C) Surgical margin and recurrence in SCC and BCC. Both SCC and BCC exhibited a high frequency of negative surgical margins as well as absence of tumor recurrence.
Table 1: Association between clinicopathologic characteristics and anatomic site.
Iohexol plasma clearance in children: validation of multiple formulas and two-point sampling times
Background In children, estimated glomerular filtration rate (eGFR) methods are hampered by inaccuracy, hence there is an obvious need for safe, simplified, and accurate measured GFR (mGFR) methods. The aim of this study was to evaluate different formulas and determine the optimal sampling points for calculating mGFR based on iohexol clearance measurements on blood samples drawn at two time points (GFR2p). Methods The GFR of 96 children with different stages of chronic kidney disease (CKD) (median age 9.2 years, range 3 months to 17.5 years) was determined using the iohexol plasma clearance, with blood sampling at seven time points within 5 h (GFR7p) as the reference method. Median GFR7p was 65.9 (range 6.3–153) mL/min/1.73 m2. The performance of seven different formulas with early and late normalization to body surface area (BSA) was validated against the reference. Results The highest percentage (95.8 %) of GFR2p within 10 % of the reference was calculated using the formula of Jødal and Brøchner–Mortensen (JBM) from 2009, with sampling at 2 and 5 h. Normalization to BSA before correction of the distribution phase improved the performance of the original Brøchner–Mortensen method from 1972; P10 of 92.7 % compared to P10 of 82.3 % with late normalization, and a similar result was obtained with other formulas. Conclusions GFR2p performed well across a wide spectrum of GFR levels with the JBM formula. Several other formulas tested performed well provided that early BSA normalization was performed. Blood sampling at 2 and 5 h is recommended for an optimal GFR2p assessment.
Introduction
Measurement of glomerular filtration rate (GFR) by iohexol plasma clearance was introduced in the 1980s [1] and has increasingly been applied due to its safety and good performance [2-6]. Plasma clearance, in comparison to renal clearance, eliminates the errors linked to inaccurate urine collection [1,5,7-10] and can be calculated as the ratio between the amount of the injected substance and the area under the plasma concentration curve. The slope-intercept technique (i.e., one-pool technique), needing a minimum of two blood samples, has been broadly used as it eliminates the need for many blood samples and extensive clinical examination. Chantler et al.'s fixed-constant method [clearance (Cl) = 0.80 × Cl1 (mL/min/1.73 m2)] [11] has been shown to be inaccurate [12,13], leading to the development by Brøchner-Mortensen in 1972 of a second-order polynomial of the form aCl1 + bCl1² (BMadult), which has been widely used in its original form as well as in different subsequent modifications [14-17]. Single-point methods have also been developed, but these have generally been shown to perform more poorly than the slope-intercept technique [18-20]. Importantly, in the original Brøchner-Mortensen formula, normalization to 1.73 m2 body surface area (BSA) was undertaken after correction for the distribution phase, i.e., after the completion of the entire GFR calculation. When the formula was modified for children, normalization to BSA was done before correction for the distribution phase (BMchild) [15]. Of note, the British Nuclear Medicine Guidelines recommend early BSA normalization in both children and adults and also suggest using average coefficient values of BMadult and BMchild (BMcombined) [17]. Despite these recommendations, some pediatric nephrology centers have published several studies using the original BMadult without early normalization in children [21,22]. In 2007, Fleming developed a new formula that includes early BSA normalization and a constant factor (Flem) [12]. Jødal and Brøchner-Mortensen further refined this formula (JBM), replacing the constant factor with a BSA-dependent factor [13,23]. Schwartz and coworkers have proposed several new formulas: a modification of the BMchild formula (SAM), and a minor change of the JBM formula introducing a constant factor (NSM) [5,6,24]. The studies of Brøchner-Mortensen's group were performed with 51Cr-EDTA, the group of Fleming used 99mTc-DTPA, and the Schwartz group used iohexol. GFR measurements with all three substances are comparable to those of inulin clearance. However, iohexol is the only marker without ionizing radiation, making it the preferred substance in many centers, especially for pediatric patients [10].
A major and as yet unsettled issue is the optimal sampling time points for the slope-intercept technique; to date, no consensus has been reached on a GFR measurement method for children that is both feasible and less time-consuming. Some centers have chosen shorter procedures, with the later blood sample drawn as early as 3 h after the injection of iohexol [21,25]. However, current knowledge on optimal time points is limited [9] and needs further investigation.
The purposes of our study were to: (1) assess the accuracy of the different formulas for measured GFR (mGFR) based on blood samples drawn at two time points (GFR2p) by comparison with reference iohexol plasma clearance measurements, (2) find the optimal time points for blood sampling within a feasible timeframe (i.e., last blood sampling 5 h after injection), and (3) examine the effect of early versus late BSA correction on GFR determination, i.e., the "before" and "after" versions (Table 1).
Patients and methods Patients
A total of 96 children with chronic kidney disease (CKD) were recruited for this study (ClinicalTrials.gov Identifier NCT01092260), of whom 54 were treated at Haukeland University Hospital, Bergen, Norway, and 42 at Oslo University Hospital, Oslo, Norway. The median age of the children (55 males, 41 females) was 9.2 years (range 3 months to 17.5 years), the median weight was 28.2 (range 6.6-84.6) kg, and the median height was 133.9 (range 59-177) cm. Median reference GFR based on seven blood sample time points (GFR7p) was 65.9 (range 6.3-153) mL/min/1.73 m2. The patients were distributed evenly over the different GFR stages, with 28, 27, 23, and 18 patients in CKD stages 1, 2, 3, and 4-5, respectively. None of the children enrolled in the study had edema.
Methods
Iohexol was administered as Omnipaque® 300 mg I/mL (GE Healthcare, Oslo, Norway; i.e., 647 mg iohexol/mL) given in a dose adapted to body weight as follows: <10 kg, 1 mL; 10-20 kg, 2 mL; 20-30 kg, 3 mL; 30-40 kg, 4 mL; ≥40 kg, 5 mL Omnipaque®. The syringe with iohexol was weighed before and after the injection to an accuracy of 0.01 g. The dose of iohexol (in milligrams) was calculated by first multiplying the difference in syringe weight by the concentration of iohexol (647 mg/mL) and then dividing the product by the density of iohexol at room temperature (1.345 g/mL). The iohexol bolus was followed by an injection of 15 mL physiologic saline.
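As a concrete illustration of the dose calculation just described, the following minimal Python sketch converts the weighed syringe difference into milligrams of iohexol; the syringe weights in the example are hypothetical.

```python
# A minimal sketch of the dose calculation described above.
IOHEXOL_CONC_MG_PER_ML = 647.0    # mg iohexol per mL Omnipaque 300
IOHEXOL_DENSITY_G_PER_ML = 1.345  # density at room temperature

def iohexol_dose_mg(weight_before_g: float, weight_after_g: float) -> float:
    """Dose (mg) = injected mass (g) * concentration (mg/mL) / density (g/mL)."""
    injected_mass_g = weight_before_g - weight_after_g
    return injected_mass_g * IOHEXOL_CONC_MG_PER_ML / IOHEXOL_DENSITY_G_PER_ML

# Hypothetical example: a nominal 2 mL dose weighs about 2 mL * 1.345 g/mL = 2.69 g.
print(f"{iohexol_dose_mg(10.00, 7.31):.0f} mg")  # ~1294 mg
```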
Blood samples (0.5 mL) were drawn from a different intravenous access at 10, 30, 120, 180, 210, 240, and 300 min after the injection of iohexol. In 29 of the 96 patients, the second blood sample was drawn after 60 min instead of 30 min. Blood was also obtained before the infusion of iohexol to exclude interference of other metabolites with the iohexol assay. The blood was allowed to stand for 30-60 min before being centrifuged at 1000-1300 g for 10 min. Serum was stored at −20°C until analysis at one center (Haukeland University Hospital); the samples collected at the other center were sent frozen on dry ice for iohexol analysis.
Serum concentrations of iohexol were determined by high performance liquid chromatography. The concentration of iohexol was calculated from the area under the largest iohexol peak as compared to an internal calibration curve prepared for each set of samples. Calibrators were made up from an iohexol stock solution, 180 mg I/mL (i.e., 388 mg iohexol/mL, Omnipaque®, GE Healthcare), which was diluted in pooled normal plasma to 100, 50, and 10 μg/mL, respectively. Small aliquots of the calibrators were stored frozen at −20°C in vials for up to 1 year. The accuracy of the method was assessed by an external quality assurance program (Equalis, Uppsala, Sweden), and the precision, calculated as the total coefficient of variation over several months, was 4.1% at 10 mg/L, 3.8% at 25-290 mg/L, and 3.3% at >290 mg/L. The iohexol analysis is accredited by Norwegian Accreditation and complies with the requirements of NS-EN ISO 15189.
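The calibration step described above can be illustrated with a short sketch: a straight line is fitted to the three calibrator levels and then inverted for unknown samples. The peak-area values are hypothetical, and a linear detector response is assumed.

```python
# A minimal sketch of internal calibration: fit a line to the calibrators,
# then read unknowns off the inverse of that line. Peak areas are hypothetical.
import numpy as np

calib_conc = np.array([10.0, 50.0, 100.0])       # calibrators, μg/mL
calib_area = np.array([1520.0, 7610.0, 15190.0])  # detector peak areas (hypothetical)

slope, intercept = np.polyfit(calib_conc, calib_area, 1)

def iohexol_conc(peak_area: float) -> float:
    """Invert the calibration line: concentration in μg/mL."""
    return (peak_area - intercept) / slope

print(f"{iohexol_conc(4560.0):.1f} μg/mL")  # ~30 μg/mL for this example
```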
Calculations and statistics
The 7-point GFR (GFR7p) was calculated according to Sapirstein, as described by Schwartz et al. [5,26] (Table 1). GFR was normalized to 1.73 m2 BSA by the ratio 1.73/BSA, using the formula of Haycock et al. [27]. The 2-point GFR (GFR2p) was calculated with the slope-intercept technique, corrected for the distribution phase as described by Brøchner-Mortensen [14], and normalized to 1.73 m2 BSA [27]. Due to the great variability of body size in the pediatric population and differences between children and adults, we tested the impact of BSA normalization before or after the mathematical correction for the distribution phase [i.e., normalization interposed in the calculation (early) or undertaken after the entire GFR calculation is completed (late)]. Table 1 shows the formulas used in the evaluation. The performances of the different formulas for GFR2p were compared. Accuracy was calculated as the percentage of patients with values within ±5, 10, and 15% (P5, P10, and P15, respectively) of the reference method (GFR7p). Estimates of bias as a measure of trueness, as well as assessments of limits of agreement and correlation as measures of dispersion, were systematically performed. As a general measure of accuracy, the 95th percentile of deviations from the reference method (95POD) was also determined, with 90% confidence intervals (CI) calculated by bootstrapping to provide estimates of variability [28] (Tables 2-4).
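A minimal sketch of the two-point slope-intercept calculation with early BSA normalization may make the pipeline concrete. The Haycock BSA coefficients and the 1972 Brøchner-Mortensen correction coefficients below are the commonly cited values, not taken from this paper's Table 1 (which remains authoritative for each formula variant), and all input values in the example are hypothetical.

```python
# A minimal sketch of GFR2p with "early" BSA normalization, under the
# assumptions stated in the lead-in. All example inputs are hypothetical.
import math

def bsa_haycock(weight_kg: float, height_cm: float) -> float:
    # Haycock formula as commonly cited; confirm against ref. [27] before use.
    return 0.024265 * height_cm ** 0.3964 * weight_kg ** 0.5378

def gfr2p(dose_mg: float, c1: float, t1: float, c2: float, t2: float,
          weight_kg: float, height_cm: float) -> float:
    """Two-point slope-intercept GFR; c1, c2 in mg/L at t1, t2 minutes."""
    k = math.log(c1 / c2) / (t2 - t1)   # elimination rate constant (1/min)
    c0 = c1 * math.exp(k * t1)          # back-extrapolated intercept (mg/L)
    cl1 = dose_mg * k / c0 * 1000.0     # dose mg, c0 mg/L -> L/min; x1000 -> mL/min
    cl1_norm = cl1 * 1.73 / bsa_haycock(weight_kg, height_cm)  # "early" BSA step
    # Brøchner-Mortensen distribution-phase correction (commonly cited 1972
    # coefficients, assumed here):
    return 0.990778 * cl1_norm - 0.001218 * cl1_norm ** 2

# Hypothetical example: 1294 mg dose, samples at 120 and 300 min.
print(f"{gfr2p(1294, 55.0, 120, 28.0, 300, 28.2, 133.9):.1f} mL/min/1.73 m2")
```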
McNemar's test for paired categorical variables was used to compare P10 values. For the evaluation of optimal blood sampling times, the pairs 2 and 3, 2 and 4, 2 and 5, and 3 and 5 h were chosen. As the time points 2 and 5 h were considered the best sampling times (Table 2), these were chosen for the comparison of the different methods (Table 3). Subanalyses were performed for age groups (<6, <10, and ≥10 years), BSA groups (<0.5, <1.0, and <1.45 m2), and stage of CKD (<30, 30 to <60, 60 to <90, and ≥90 mL/min/1.73 m2) (Table 4). The software packages Excel, Analyse-it V2.26 (both Microsoft Corp., Redmond, WA), and SPSS Statistics version 22 (IBM Corp., Armonk, NY) were used for statistical analysis.

Results

Table 2 shows the evaluation of the optimal times and interval for GFR2p blood sampling using the formulas JBM, BMadult before, and Flem before. The best results were obtained when blood samples were drawn at 2 and 5 h with JBM; the P10 at these time points was 95.8%, compared to <90% with the three alternative sampling time-point pairs (p < 0.05). The 95POD value was lowest (9.8) when GFR2p was calculated with JBM using blood drawn at 2 and 5 h (Table 2).
The performance of the seven different formulas, used as presented in their original publication, is shown in Table 3. The performance of BMadult before is also presented since this formula has been broadly recommended (Fleming et al. 2004). Figure 1 shows the accuracy of the various formulas and their before and after normalizations to BSA, as defined in Table 1.
All "before" formulas demonstrated high correlation coefficients (r ≥ 0.99) and relatively small mean biases of −2.02 to 2.15 mL/min/1.73 m2. Three formulas showed high performance: the modified version of the classical BMadult (BMadult before), Flem before, and JBM. BMadult before showed the smallest mean bias and the best 95% limits of agreement as well as the highest P5, whereas JBM had the highest P10 score (Table 3). When the different formulas were evaluated by the 95POD, JBM showed the best value for accuracy. When BSA normalization was performed after the correction for the distribution phase, only JBM after performed identically to JBM before, due to a separate and equivalent mathematical formula for this purpose in JBM. All of the other formulas performed substantially better with early compared to late BSA normalization, i.e., before versions versus after versions (Table 3; Fig. 1). In general, GFR2p calculated with the after versions (GFR2p after) gave an overestimation of 2.88-5.03 mL/min/1.73 m2 (mean bias) compared to GFR7p, with relatively broad limits of agreement. The correlation coefficient was 0.979-0.985, and only 78.1-82.3% of the results were within ±10% of the reference method. All GFR2p after versions had P5 and P15 of <60 and <90%, respectively. Difference plots of BMadult after and BMadult before are given in Fig. 2a, b; these demonstrate the importance of normalizing to 1.73 m2 before correction for the distribution phase in the widely used BMadult (p = 0.002 by McNemar's test for P10). The difference plot of the best-performing formula, JBM, is shown in Fig. 2c. When the results were classified according to different CKD stages, age, and BSA, the best accuracy was in general achieved by the JBM formula (Table 4).
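For readers who want to reproduce the accuracy metrics used above, the following sketch computes P10 and the 95POD with a bootstrap 90% CI; the two arrays are hypothetical stand-ins for the paired GFR7p/GFR2p results.

```python
# A minimal sketch of P10 (share of GFR2p within ±10 % of GFR7p) and 95POD
# (95th percentile of absolute percentage deviations), with a bootstrap CI
# as described in the Methods. The data below are simulated, not the study's.
import numpy as np

rng = np.random.default_rng(0)
gfr7p = rng.uniform(10, 150, size=96)               # simulated reference values
gfr2p = gfr7p * (1 + rng.normal(0, 0.05, size=96))  # simulated 2-point GFR

dev_pct = np.abs(gfr2p - gfr7p) / gfr7p * 100
p10 = np.mean(dev_pct <= 10) * 100
pod95 = np.percentile(dev_pct, 95)

boot = [np.percentile(rng.choice(dev_pct, size=dev_pct.size, replace=True), 95)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [5, 95])               # 90 % CI for 95POD

print(f"P10 = {p10:.1f} %, 95POD = {pod95:.1f} % (90 % CI {lo:.1f}-{hi:.1f})")
```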
Discussion
The results of our study show that the optimal time points for two-point blood sampling in children are 2 and 5 h after iohexol injection (Table 2). These time points are in accordance with the following mathematical and analytical considerations: a correct determination of the slope of the slow phase of the iohexol elimination curve is of crucial importance in GFR measurements based on the slope-intercept technique. When only two data points are used for a GFR calculation, the uncertainty in the assessment of this slope depends on both the separation distance between these two points and the analytical variation. With a short time lapse between the two samplings, the difference between the data points is small, which implies that the analytical variation will contribute relatively more to the uncertainty of the slope than when the data points are well separated. Too close a proximity in sampling times therefore introduces an unnecessary inaccuracy into the GFR determination, which was confirmed in our study by the high 95POD values of >25 for GFR2p based on sampling at 2 and 3 h. If sampling is undertaken too early, the elimination of iohexol has not yet reached the slow and linear phase, leading to an incorrectly high slope. Therefore, sampling too early is expected to result in an overestimation of the GFR, as has been shown in several studies [21,25]. This is especially relevant at lower GFR levels [5,6,24]. For clinical use, some centers will prefer to shorten the procedure and accept the cost of lower accuracy, sampling at 2 and 3 h or at 2 and 4 h instead of 2 and 5 h. Other centers will choose to wait 5 h after injection before collecting blood samples, as long as this strategy results in an mGFR of higher quality.
Another important finding of our study is the dependency of the formulas on the timing of BSA normalization (Fig. 1; Table 3), as recently discussed by Pottel et al. and Blake et al. [29-31]. A significant difference and substantially lower performance were found when BSA normalization was done after the correction for the distribution phase; for example, the accuracy of BMadult after was clearly inferior to that of BMadult before (p = 0.002 for P10). The only exception was the JBM, since it provides two mathematically equivalent formulas, one for normalization before and the other for normalization after, giving identical results [23]. The publication in 1972 by Brøchner-Mortensen of the correction for the distribution phase of the one-pool slope-intercept technique using a second-order polynomial [14] was a breakthrough in terms of simplicity and accuracy for GFR measurements. This method has been broadly used in its original form as well as in different modifications in both children and adults [9,17,20]. As early as 1974, Brøchner-Mortensen et al. showed the importance of the different body sizes in children, and the pediatric formula was only meant to be used with early (before) normalization [15]. That study was based on a relatively small cohort of 30 children, which may explain why some pediatric nephrologists have chosen to use the original method developed for adult patients (BMadult after), which was based on a considerably higher number of patients. It is of great importance that researchers are aware that the relatively low performance of the slope-intercept technique in some pediatric studies using BMadult after is largely due to the use of BSA normalization after the entire GFR calculation was completed instead of being interposed in the calculation [21,22]. In our view, this fact has not always been properly acknowledged and may be a source of erroneous conclusions in some previous studies.

[Table 4 caption: GFR (mL/min/1.73 m2) was calculated by the 2-point GFR using the formulas BMadult before, BMchild, Flem before, and JBM. Blood was sampled at 2 and 5 h after injection, and the results were subdivided according to age, BSA, and chronic kidney disease (CKD) stages as indicated. Mean bias (95% CI), 95% limits of agreement, and correlation coefficient (R) were calculated by comparison with the reference method (GFR7p). NA, not applicable due to insufficient data. (a) Accuracy was assessed as P5, P10, and P15, the percentage of patients within ±5, 10, and 15% of the reference method, respectively. (b) The 95POD (with 90% CI) shows the maximum deviation for 95% of the results.]
All "before" formulas performed relatively well, with a P10 ranging from 88.5 to 95.8% (Table 3). The JBM formula showed the best values for accuracy in this cohort, based on the highest P10 (95.8%) and the lowest 95POD (9.8%, 90% CI 7.6-11.2). Based on the 95POD values with confidence intervals, the JBM formula performed significantly better than the recently published NSM before formula (15.4%, 90% CI 12.5-16.8) (Table 3). The NSM before formula [24] was meant to be an improvement of JBM. These two formulas are relatively similar, but the innovative BSA-dependent correction factor for the distribution phase in the JBM formula has been replaced by a constant in the NSM formula, which probably explains the latter's lower performance. The BSA dependency of the correction factor was recently confirmed in a study of 142 children and adults using 99mTc-DTPA GFR [32]. JBM was also significantly better than BMadult after (Table 3), as well as the other "after" formulas (not shown). The other "before" formulas shown in Table 3 were not statistically different due to partly overlapping confidence intervals, but all showed higher 95POD values than JBM.
In the subgroup analysis according to age, BSA, and GFR (Table 4), the JBM formula showed the best accuracy in all groups except for age <6 years and GFR ≥90 mL/min/1.73 m2. For the age group <6 years, the BMchild before formula had the highest P5 and lowest 95POD, but the JBM formula showed the highest P10. Our findings suggest that these two formulas perform at a similar level. In the smaller children with a BSA of <1 m2, JBM had the best results, with the highest P10 and a lower 95POD compared to BMchild before, BMadult before, and Flem before, likely due to the BSA-dependent correction factor used in the JBM formula and not in any of the other formulas validated in this study.

[Fig. 1 caption: see Table 1 for the formulas and the Introduction for more details. GFR (mL/min/1.73 m2) was calculated by 2-point GFR (GFR2p) using the different formulas as indicated, with blood sampling at 2 and 5 h after injection; N = 96 patients. Black: fraction of results within 5% deviation of the reference method (GFR7p: GFR measured by the iohexol clearance method with blood sampling at 7 time points within 5 h); gray: additional fraction within 10% deviation; white: additional fraction within 15% deviation. Left column of each pair: body surface area (BSA) normalization to 1.73 m2 performed "before" the mathematical correction for the distribution phase; right column: normalization performed "after" the entire GFR calculation was completed.]
A limitation of our study is the lack of the true gold-standard marker inulin. However, inulin clearance is cumbersome and difficult to perform in children due to the continuous intravenous infusion and timed urine collections, the latter also carrying a high risk of error, especially in children with urologic disorders, a common cause of CKD in the pediatric population [9]. Multipoint plasma clearance for GFR measurement is seen as a high-quality procedure approximating the "true" GFR, with the last time point within a normal working day [10,17,33]. The last time point of iohexol measurement at 5 h may limit the value of the study in patients with a very low GFR, as true GFR may differ from the reference GFR. The number of patients in our study was limited to 96 children, and the subgroup analyses were therefore hampered by this low number. However, the validity of our study is strengthened by comparisons of a high number of blood samples at different time points.
Conclusion
The determination of GFR based on two-point iohexol plasma clearance performed well in children of all ages across a wide spectrum of GFR levels. The formula of Jødal and Brøchner-Mortensen from 2009 showed the highest percentage of GFR2p within 10% of the reference GFR. Based on our findings, the optimal time points for blood sampling in children are 2 and 5 h after iohexol injection. The correct use of BSA normalization is essential for optimal GFR determination in children.

Fig. 2: Bland-Altman plots between the GFR (mL/min/1.73 m2) calculated according to the reference method (GFR7p) and GFR calculated according to the BMadult after formula (a), the BMadult before formula (b), and the JBM formula (c). (a) GFR calculated by GFR2p according to the 1972 formula of Brøchner-Mortensen (BMadult after) with sampling at 2 and 5 h after injection; BSA normalization to 1.73 m2 was performed after the entire GFR calculation was completed; N = 96 patients; y-axis: difference between BMadult after and the reference method (GFR7p); x-axis: mean of the two methods; dotted lines: ±10% difference. (b) GFR calculated by GFR2p according to the 1972 formula of Brøchner-Mortensen (BMadult before) with sampling at 2 and 5 h after injection; BSA normalization to 1.73 m2 was performed before the mathematical correction for the distribution phase; N = 96 patients; axes and dotted lines as in (a). (c) GFR calculated by GFR2p according to the 2009 formula of Jødal and Brøchner-Mortensen (JBM) with sampling at 2 and 5 h after injection; N = 96 patients; axes and dotted lines as in (a).
Efficacy and Safety of Esolgafate, A Pre-Polymerized Cross-Linked Sucralfate Medical Device for NERD. A Randomized Double-Blind Placebo-Controlled Trial
Background: Unlike those with erosive reflux disease, patients with non-erosive reflux disease fail to adequately respond to proton pump inhibitors (PPIs). Pre-polymerized sucralfate barrier therapy (PPSBT), recognized by the US FDA as a medical device, has significantly enhanced mucosal bioadherence compared to the standard sucralfate drug. Aim: To evaluate whether enhanced mucosal protection by PPSBT can provide relevant symptom relief for NERD compared to placebo, even in an undifferentiated population of NERD patients. Methods: In a multi-center randomized double-blind placebo-controlled trial, 42 patients with NERD were randomized to receive Esolgafate, a pre-polymerized cross-linked sucralfate barrier therapy, or placebo. No pH monitoring was conducted to determine the representative proportion of the 3 sub-types of NERD. Antacids were available to each group as rescue medication. Symptoms of heartburn, reflux sensation, and retrosternal discomfort were evaluated before and after treatment. Adverse events were assessed. Results: At the end of the trial, for patients taking PPSBT, primary endpoints were met in 90% for heartburn, 83.3% for reflux sensation, and 88.2% for retrosternal discomfort, compared to 11.1%, 25%, and 20% for those using placebo (p < 0.01). Conclusion: The barrier effect of Esolgafate suggests that enhanced mucosal protection by PPSBT alone could improve symptom control in NERD patients undifferentiated by sub-type of non-erosive heartburn.
Introduction
Gastric refluxate into the lower esophagus is common and physiologic, occurring in the vast majority of people, and is often asymptomatic [1,2]. However, symptomatic gastric reflux is heralded by heartburn and is then referred to as gastroesophageal reflux disease, or GERD. Patients with heartburn and no prior history of GERD can be subdivided by endoscopy into two main cohorts: those with mucosal erosions (erosive GERD) and those without erosions (non-erosive GERD) [3]. With the introduction of Rome IV [4], the latter group of non-erosive heartburn can be further differentiated into three clinical sub-categories based on the frequency of acid reflux events and the association of symptoms with reflux events [5]. Functional heartburn is non-erosive heartburn with a normal frequency of acid exposure but negative symptom-reflux association. Reflux hypersensitivity syndrome is non-erosive heartburn with a normal frequency of acid exposure but positive symptom association with reflux events. Non-erosive heartburn with an above-normal frequency of acid exposure that is positively associated with reflux events [6] is referred to, by Rome IV criteria, as NERD (non-erosive reflux disease) [4]. Along with reflux sensation and retrosternal discomfort, heartburn, based on afferent innervation of the esophagus by the vagus nerve, is a shared characteristic of each Rome IV patient sub-cohort of NERD [9]. Heartburn sensation arises from the firing of sensory neurons in the distal esophagus and signals a histochemical disturbance within the mucosa. Histochemical disturbances within the esophageal mucosa excite submucosal pain receptors (called nociceptors) [10] and upregulate afferent neurons. Afferent neurons, coursing beneath the esophageal epithelium within the lamina propria, are equipped with voltage-gated receptors: acid-sensing ion channels (ASIC) [11] and transient receptor potential vanilloid receptors (TRPV) [12]. These voltage-gated nociceptors require transmucosal ion flux (the exchange of positive and negative ions), which keeps afferent neurons switched on in NERD patients experiencing heartburn, reflux sensation, and retrosternal discomfort or pain. Thus, the lack of symptomatic responsiveness to PPIs, H2RAs, and antacids in these patients involves histochemical processes that are independent of the mucosa's reaction to pH. Observations that pH-responsive and pH-non-responsive NERD patients exhibit dilated intercellular spaces in the esophageal epithelium [13], and that these patient cohorts have basal cell hyperplasia not seen in controls [14], imply that the mucosal reaction specific to NERD patients is likely not a problem of acid control or incorrect pH of gastric refluxate.
Gastric refluxate is a backwash of mineral acid containing dissolved bile acids and serine proteases. Mineral acid (hydrochloric acid) and bile reflux together [15,16]. In patients with symptomatic GERD, reflux events containing bile may be more numerous than reflux events of acid alone [17]. Dissolved bile acids, particularly the taurine conjugates of cholic acid and chenodeoxycholic acid [18,19], are not deterred by acid-controlling therapies, and there is evidence to suggest that there may even be facilitation [20]. Both mineral acid and dissolved bile acids can, co-dependently and independently, initiate the histochemical mucosal events responsible for NERD symptoms.
Hints of a pro-inflammatory immunologic process have been previously reported [21-24] and recently confirmed by in vivo and in vitro observations. The data reported here were first presented in abstract form at a previous Digestive Disease Week of the American Gastroenterological Association and address the clinical efficacy and safety of a suspension form of pre-polymerized sucralfate barrier therapy (PPSBT). As with all sucralfate-based therapies, PPSBT is non-systemic, site-specific, and cytoprotective. It is prescribed to function as a physical barrier to gastric refluxate.
Limiting access of gastric refluxate to esophageal mucosa should diminish refluxate-mediated mucosal inflammation invisible to the eye but responsible for symptoms of heartburn, reflux sensation and chest discomfort in patients with NERD.
Objectives and hypotheses
The main objective of this trial was to assess the effectiveness and safety of pre-polymerized sucralfate barrier therapy suspension in the treatment of non-erosive reflux disease. In this trial, placebo was used as the comparator, which is optimal in testing the efficacy of a new treatment [25].
Ethics and trial registration
The trial was registered with the Medical Research Council of Bangladesh (BMRC) [26], which provided institutional review of its protocol. Patients provided written informed consent to participate in the trial in accordance with the Declaration of Helsinki.
ISO 14155 compliant clinical investigation of medical device
For scientific transparency in the evaluation of medical devices [27], the current study was designed to be compliant with current ISO 14155 standards, which address good clinical practice for the design, conduct, recording, and reporting of clinical investigations carried out in human subjects [28]. The current trial assesses the safety and performance of Esolgafate as a barrier-therapy medical device for NERD. The safety of sucralfate-based products is well documented and has been established since 1968 [29]. The protocol identifies the rules and procedures of data collection, the statistical power of the study, and the rationale for the sample size. Recruitment of participants, randomization, concealment, and allocation of interventions were performed in a manner to minimize bias, ensure the collection of objective and credible data, and support the overall goal of protecting patients' safety and well-being.
Study design
The study was designed as a randomized double-blind placebo-controlled trial with a 1:1 allocation ratio. Each of the two study arms received identically marked bottles containing a white suspension identical in color and flavor. One intervention contained pre-polymerized sucralfate barrier therapy (PPSBT) suspension, while the other, placebo, contained no sucralfate.
Setting and participants
Recruitment of participants took place in three medical center clinics where patients received primary and specialty medical care services. Eleven gastroenterologists conducted patient recruitment.
Inclusion criteria
Inclusion criteria for males or females over the age of 18 years included dyspeptic symptoms, specifically heartburn, reflux sensation, and retrosternal discomfort occurring 3-7 times per week for the 3 months leading up to the study, with endoscopic evidence of no mucosal erosions by the Hetzel-Dent grading system [30]. A Hetzel-Dent grade of 0 or 1 was required for inclusion. Included patients demonstrated the ability to follow protocol instructions and complete a self-administered questionnaire. All participants had the right to withdraw from the study at any time with no obligation to give a reason for their decision.
Exclusion criteria
Individuals with erosive gastroesophageal reflux disease (eGERD) were excluded, as were those with difficulty following protocol instructions (diary maintenance, keeping follow-up appointments, etc.). Patients with Barrett's esophagus, peptic strictures, peptic ulcer disease, or a need for motility-altering drugs were excluded.
Randomization criteria
Participants were assigned by simple randomization into one of two groups, PPSBT or placebo. Simple randomization was aided by the use of a random number table: patients, who had each been assigned a two-digit number from 01 to 70+, were sequentially allocated to one of the two treatment groups according to whether the first digit of the corresponding random-number-table entry was even or odd, as sketched below.
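A minimal sketch of this allocation scheme follows, under two assumptions: the random number table is emulated with a seeded generator, and even first digits map to the PPSBT arm (the actual mapping is not stated in the protocol).

```python
# A minimal sketch of simple randomization by random-number-table parity.
import random

random.seed(42)  # for reproducibility of the illustration only

def allocate(n_patients: int) -> dict:
    assignments = {}
    for patient_id in range(1, n_patients + 1):
        entry = random.randint(0, 99)          # next random-number-table entry
        first_digit = int(f"{entry:02d}"[0])   # leading digit of the entry
        # Assumed mapping: even first digit -> PPSBT, odd -> placebo.
        arm = "PPSBT" if first_digit % 2 == 0 else "placebo"
        assignments[f"{patient_id:02d}"] = arm
    return assignments

print(allocate(6))  # e.g. {'01': 'PPSBT', '02': 'placebo', ...}
```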
Interventions
The interventions under investigation included a suspension of pre-polymerized cross-linked sucralfate barrier therapy (PPSBT) and a placebo that was identical to PPSBT in color, flavor, and ingredients but contained no sucralfate. Both were manufactured by Pharmaco Laboratories Limited for Mueller Medical International (MMI). Each intervention was dispensed by a single hospital pharmacy associated with the medical centers responsible for recruitment. The pharmacology of PPSBT is relatively new and differs significantly from that of non-polymerized sucralfate. The latter form of sucralfate is regulated as a drug because of the required in situ polymerization by gastric acid following ingestion of biologically inert sucralfate. The polymerization and cross-linking process for the pre-polymerized cross-linked sucralfate (PCLS) involves the use of a cationic organic acid; it is therefore more resistant to hydration than sucralfate polymerized by gastric acid. Three hours post-administration, PCLS achieves and maintains a surface concentration of sucralfate that is 7-fold (or 800%) greater on normal mucosa and 23-fold (or 2400%) greater on acid-injured mucosa compared to concentrations achieved by gastric-acid-polymerized sucralfate [31]. The dosing volume for each intervention was 15 mL taken twice daily for 28 days. This volume of PPSBT contained 1.5 grams of sucralfate, while the same volume of placebo contained no sucralfate.
Concomitant medications
Except for the use of antacids, participants were not permitted to use any other anti-peptic medication. Two 300 mL bottles of aluminum hydroxide/magnesium hydroxide (400 mg/400 mg per 10 mL) were provided to each participant every 7 days of the 28-day trial, whether or not antacids were used. All used and unused bottles of antacids were collected at completion of the trial.
Allocation concealment and blinding
Consecutive randomization numbers were given to participants and the study interventions were packaged and assigned consecutive numbers according to the randomization list from which participant numbers were generated. Bottles distributed to participants contained either pre-polymerized sucralfate barrier therapy (PPSBT) suspension or the suspension vehicle for PPSBT without sucralfate. Each intervention was identical in color and flavor. Participants, outcome assessors and person responsible for the statistical analysis were blinded to the intervention until completion of the study. Personal information about potential and enrolled participants was accessible only to the researchers involved in the trial.
Compliance
Participants were required to bring any remaining study product (including empty bottles) and diary to the study site at the end of the intervention period. Compliance with the study protocol was checked by counting the number of bottles left unused.
Generally, participants receiving less than 75% of recommended doses would be considered noncompliant [32].
Primary outcome
The primary outcome was the relief of heartburn, reflux sensation, and retrosternal chest discomfort. The number and severity of each symptom episode were recorded by the participant in a diary as the episode occurred. Severity was assessed by a patient-rated score; at follow-up visits, bottles were returned for examination and a replacement pair of bottles was supplied.
The secondary outcomes were dependence on antacids for symptom relief and the emergence of adverse events.
Statistical analysis
All analyses were conducted on an intention-to-treat basis, including all patients in the groups to which they were randomized for whom outcomes were available (including withdrawals and losses to follow-up). Descriptive statistics were used to summarize baseline characteristics. The Student t test was used to compare mean values of continuous variables approximating a normal distribution. The chi-square test was used to compare percentages.
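As an illustration of the group-comparison machinery described here and of the significance criteria that follow, the sketch below computes a relative risk (RR) with a 95% CI on a log scale; the standard Katz approximation is assumed, and the counts are hypothetical.

```python
# A minimal sketch of an RR with a 95 % CI (log-scale, Katz approximation);
# the group difference is declared significant when the CI excludes 1.0.
import math

def rr_ci(events_a: int, n_a: int, events_b: int, n_b: int, z: float = 1.96):
    risk_a, risk_b = events_a / n_a, events_b / n_b
    rr = risk_a / risk_b
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts: responders on PPSBT vs. responders on placebo.
rr, lo, hi = rr_ci(18, 20, 2, 18)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}); significant: {not (lo <= 1 <= hi)}")
```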
The difference between study groups was considered significant when the p value was less than 0.05, when the 95% CI for the RR did not include 1.0, or when the 95% CI for the mean difference did not include 0. All statistical tests were two-tailed and performed at the 5% level of significance.

Adequacy of sample size
Participants
Patients randomized into the two treatment arms were comparable in age, gender, and days per week of heartburn, reflux sensation, and retrosternal chest discomfort.
Safety and adverse events
All patients took multiple doses of their assigned intervention and were included in the safety analysis. Results are presented for the intention-to-treat population. No patient reported any adverse reaction to either PPSBT or placebo.
Secondary outcome
Adverse events: There were no observed or reported adverse events in the trial.
Dependence on antacids:
The dependence of symptom relief on the use of antacids in patients using PPSBT and placebo was evaluated (Table 6). Placebo was associated with high antacid use, presumably to mitigate NERD symptoms. As seen in Table 6, 75.7% of the supplied antacid was required by those in the placebo group, compared to 12.5% of the supplied antacid in the PPSBT group. The Yates-corrected chi-square was 77.8, with a p-value < 0.00001 at a significance level of p < 0.01.
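The following sketch shows how a Yates-corrected chi-square of this kind can be computed. The bottle counts are hypothetical stand-ins derived from the reported percentages and an assumed 168 supplied bottles per arm (2 bottles per week over 4 weeks for roughly 21 participants), so the statistic will not match the reported 77.8 exactly.

```python
# A minimal sketch of the Yates-corrected chi-square on antacid bottle usage;
# the counts below are hypothetical, not the trial's underlying table.
from scipy.stats import chi2_contingency

# Rows: treatment arm; columns: antacid bottles used vs. unused.
table = [
    [21, 147],   # PPSBT arm: ~12.5 % of supplied bottles used
    [127, 41],   # placebo arm: ~75.7 % of supplied bottles used
]

chi2, p, dof, _ = chi2_contingency(table, correction=True)  # Yates correction
print(f"Yates chi2 = {chi2:.1f}, p = {p:.2g}")
```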
Discussion
This study on the symptomatic relief of non-erosive GERD was a 28-day randomized placebo-controlled double-blind trial assessing the efficacy and safety of pre-polymerized sucralfate barrier therapy. Efficacy was assessed by quantifying symptomatic relief [34]. Compared to those with erosive GERD, NERD patients can show up to a threefold lag in their response to PPIs.
Since pH monitoring for symptomatic reflux was not conducted in this trial, the proportion of patients with functional heartburn, reflux hypersensitivity or pure NERD was not known.
Epidemiologically it is reasonable to assume that each type was present in this trial, though this could not be verified with any degree of certainty. NERD patients in this trial were not differentiated by sub-cohort and were likely representative of the heterogeneity common to this clinical population.
More Than a Metabolic Enzyme: MTHFD2 as a Novel Target for Anticancer Therapy?
The bifunctional methylenetetrahydrofolate dehydrogenase/cyclohydrolase (MTHFD2) is a mitochondrial one-carbon folate metabolic enzyme whose role in cancer was not known until recently. MTHFD2 is highly expressed in embryos and a wide range of tumors but has low or absent expression in most adult differentiated tissues. Elevated MTHFD2 expression is associated with poor prognosis in both hematological and solid malignancies. Its depletion leads to suppression of multiple malignant phenotypes, including proliferation, invasion, and migration, and to induction of cancer cell death. The non-metabolic functions of this enzyme, especially in cancers, have thus generated considerable research interest. This review summarizes current knowledge on both the metabolic functions and non-enzymatic roles of MTHFD2. Its expression, potential functions, and regulatory mechanisms in cancers are highlighted. The development of MTHFD2 inhibitors and their implications in pre-clinical models are also discussed.
INTRODUCTION
MTHFD2 (350 amino acids, 37 kDa) is one of the major enzymes involved in mitochondrial folate one-carbon metabolism and is also known as NMDMC (NAD-dependent mitochondrial methylenetetrahydrofolate dehydrogenase-cyclohydrolase). Despite its well-known bifunctional dehydrogenase and cyclohydrolase activities, MTHFD2 has been reported to be required for cancer proliferation and may play a profound role in tumor development and progression. This metabolic enzyme has attracted particular interest in cancer research for several reasons. Firstly, MTHFD2 is upregulated in various cancers, transformed cells, and developing embryos, but has low or undetectable levels in most differentiated normal adult tissues (1). Secondly, highly expressed MTHFD2 is associated with poor disease outcomes in breast cancer (2), colorectal cancer (CRC) (3), renal cell carcinoma (RCC) (4), and hepatocellular carcinoma (HCC) (5); upregulation of MTHFD2 may also contribute to an increased risk of bladder cancer (6). Thirdly, depletion of MTHFD2 may impair aggressive phenotypes and cause cell death in multiple cancers (1). Taken together, MTHFD2 is oncogenic in nature and may serve as a prognostic indicator as well as a therapeutic target in cancers.
Yet, the physiological role of MTHFD2 in malignancy and the mechanisms contributing to its pro-oncogenic activities have not yet been fully elucidated. A better understanding of both the enzymatic and non-enzymatic functional roles of MTHFD2 is essential for the optimal targeting of this novel candidate in cancer therapy. This review aims to highlight the potential functions of MTHFD2 in cancers, particularly focusing on its diagnostic/prognostic value and the effects of its knockdown on aggressive phenotypes. We will summarize the regulatory mechanisms of MTHFD2 and the effects after its depletion, including cell morphological changes, oxidative homeostasis, and metabolite profile alterations. The non-enzymatic "moonlighting" function of MTHFD2 and the development of MTHFD2 inhibitors will also be discussed.
MTHFD2
MTHFD2 is a bifunctional NMDMC. Its activity in transformed and non-differentiated cells was first detected in 1985 (7), and the cDNA of human MTHFD2 was cloned in 1989 (8). MTHFD2 had been reported to participate in the production of formyltetrahydrofolate for the synthesis of formylmethionyl transfer RNA required for the initiation of protein synthesis (9). The MTHFD2 protein had long been thought to be located exclusively in mitochondria until recently, when it was also found to be present within the nucleus at the site of newly synthesized DNA (10).

FIGURE 1 | Folate one-carbon metabolism. One-carbon metabolism enzymes are presented and activated in three different compartments, the nucleus, cytosol, and mitochondria. The flow of permeable metabolites, such as formate, serine, and glycine, links the reactions between these compartments. Briefly, different forms of THFs function as carriers, transferring one-carbon units from serine to formate in the mitochondria. Formate then supplies the biosynthesis of purine in the cytosol and thymidylate in the nucleus. The reactions in mitochondria are catalyzed mainly by SHMT2, MTHFD2, and MTHFD1L, while those in the cytosol and nucleus are catalyzed by SHMT1/2 and MTHFD1. The enzymatic functions of MTHFD2 in the mitochondria are well-studied, while its role in the nucleus is largely unknown and may encompass various non-metabolic functions.
The canonical role of MTHFD2 is central to folate-mediated one-carbon metabolism in mitochondria. A one-carbon unit (1C) from serine is transferred to tetrahydrofolate (THF) by serine hydroxymethyltransferases (SHMTs) to form 5,10-methylenetetrahydrofolate (methylene-THF/CH2-THF). The 1C unit is then transferred among different forms of THFs, thus enabling the folate cycle (Figure 1). This biochemical network comprises two parallel metabolic reactions that take place in the cytoplasmic and mitochondrial compartments. In the cytoplasm, a single trifunctional enzyme named MTHFD1 comprises all three domains (methylenetetrahydrofolate dehydrogenase, cyclohydrolase, and formyltetrahydrofolate synthetase domains), and serves as the primary functional enzyme that interconverts CH2-THF to 10-formyl-tetrahydrofolate (10-formyl-THF/10-CHO-THF). In the mitochondria, the reactions are carried out by two MTHFD isozymes, MTHFD2 and MTHFD2L (11). They catalyze the production of 10-CHO-THF in two steps: the first is the conversion of CH2-THF to 5,10-methenyl-tetrahydrofolate (methenyl-THF/CH+-THF) through the dehydrogenase activity; the second is the conversion of CH+-THF to 10-CHO-THF by the cyclohydrolase domain (12) (Figure 1).
MTHFD2 has been recognized to use NAD+ as a cofactor in the oxidation process, while MTHFD2L can use both NAD+ and NADP+. A recent report, however, demonstrated that MTHFD2 can also use both NAD+ and NADP+ in rapidly proliferating cells (13), suggesting an additional, uncharacterized antioxidative role. Compared with the MTHFD2L isozyme, MTHFD2 was reported to have much higher expression (14) and to display a more predominant role in maintaining mitochondrial folate pathway function as well as in responding to growth factor stimulation (15). Thus, therapeutic strategies targeting the mitochondrial folate pathway could be simplified by specifically focusing on MTHFD2.
THE ROLE OF MTHFD2 IN CANCER
Although the enzymatic functions of MTHFD2 in purine synthesis have been well studied (16), emerging evidence suggests an undefined role of MTHFD2 in embryonic development and carcinogenic transformation. MTHFD2 knockout caused embryonic death at 12.5 days of gestation in mice (17), suggesting its indispensable role in normal embryonic development. The importance of MTHFD2 in cancers is manifested by its upregulated expression in tumor cells and the association with cancer patient outcomes. Additionally, gene knockdown studies have revealed the profound impact of MTHFD2 depletion on cancers (Supplementary Table 1).
MTHFD2 Is Overexpressed in Cancer and Predicts Prognosis
In 2014, a meta-analysis of 19 types of human cancers showed that MTHFD2 was overexpressed in various tumors, including breast cancer, colon cancer, and liver cancer (1). This enzyme was consistently detected in transformed cells within tumor and metastatic tissues, while having low or undetectable levels in the adjacent stroma (1, 3, 18). Liu et al. (2) found that the expression level of MTHFD2 was positively correlated with clinicopathological parameters of breast cancer, such as tumor size, histological grade, and metastases. Similarly, in RCC (4) and HCC (5), MTHFD2 was upregulated in tumor tissues and associated with pathological characteristics including TNM staging, disease recurrence and patient survival. Another study of 103 pancreatic cancer patients demonstrated that highly expressed 1C metabolic enzymes (MTHFD2, ALDH1L2, or SHMT2) may predict poor overall survival (OS) and disease-free survival (DFS) rates. Multivariate Cox proportional hazards analysis then identified MTHFD2 and ALDH1L2 expression levels as independent survival predictors for OS and DFS (19).
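As a sketch of the kind of multivariate Cox proportional hazards analysis described here, the following Python example (using the lifelines package) fits expression indicators against synthetic survival data; the cohort size matches the cited study, but all values, effect sizes, and column names are invented for illustration and carry no biological meaning.

```python
# Minimal sketch of a multivariate Cox proportional hazards analysis like the
# one described for MTHFD2/ALDH1L2/SHMT2 (synthetic data; not the cited cohort).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 103
df = pd.DataFrame({
    "MTHFD2_high": rng.integers(0, 2, n),
    "ALDH1L2_high": rng.integers(0, 2, n),
    "SHMT2_high": rng.integers(0, 2, n),
})
# Synthetic survival times: high expression shortens survival here by design.
hazard = np.exp(0.8 * df["MTHFD2_high"] + 0.6 * df["ALDH1L2_high"])
df["time"] = rng.exponential(24 / hazard)   # follow-up time in months
df["event"] = rng.random(n) < 0.8           # observed deaths vs. censoring

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios and p-values per covariate
```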
Findings in glioma are more controversial. Several reports, in line with the above-mentioned studies, demonstrated that MTHFD2 was upregulated in glioma (20) and positively correlated with tumor grade (21). Strikingly, other groups found the reverse trend. A bioinformatics analysis showed that MTHFD2 was among the key genes downregulated in glioma and that such downregulation was associated with poor prognosis (22). Another study on glioblastoma multiforme (GBM) proposed a survival-prediction gene signature in which MTHFD2 was one of the protective factors; patients with highly expressed MTHFD2 may have a longer survival period (23). The causes and mechanisms behind the discrepant expression profiles and distinct prognostic values of MTHFD2 between glioma and other cancers remain elusive and deserve further investigation. Given that GBM is particularly refractory to treatment and frequently leads to poor patient outcomes, novel therapeutic targets and strategies are needed. Metabolic reprogramming is one of the most active fields in cancer research, and the effects and consequences of elevated MTHFD2 in glioma are largely unknown. Thus, experimental and clinical studies are needed to understand the diagnostic/prognostic value of this metabolic enzyme as well as its potential functional roles in glioma.
Inhibition of MTHFD2 Causes Cancer Cell Death and Suppresses Malignant Phenotypes
RNA interference targeting MTHFD2 has been shown to suppress malignant features and cause cell death in various cancers, including AML, CRC, HCC, RCC, glioma, breast cancer, lung cancer, ovarian cancer, and melanoma (Supplementary Table 1). The effects of MTHFD2 depletion vary among different types of cancer cells. For instance, in breast cancer, MTHFD2 knockdown suppressed cell migration and invasion; no significant effect on cell proliferation or apoptosis was observed (24). In AML cells, Pikman et al. (25) observed cell growth suppression and induction of cell differentiation after depletion of MTHFD2. They further demonstrated that MTHFD2 ablation impaired leukemic establishment and progression in a human AML orthotopic xenograft model. In HCC cells, siRNA-mediated silencing of MTHFD2 inhibited cellular features associated with cancer metastasis, including cell migration, invasion, and epithelial-mesenchymal transition, but no significant difference was observed in cell proliferation, apoptosis, or cell cycle distribution (5). In RCC, Lin et al. (4) described decreased cell proliferation, migration, and invasion after MTHFD2 knockdown in 786-O cells, possibly through a reduction in vimentin expression. Importantly from a therapeutic perspective, MTHFD2 downregulation sensitized RCC cells to anti-folate chemotherapy drugs, such as methotrexate (MTX) and fluorouracil (5-FU). In CRC, MTHFD2 knockdown caused cell death under hypoxia and decreased cell growth and sphere formation ability. Furthermore, MTHFD2 suppression reduced tumor growth and significantly inhibited lung metastasis in a CRC cell-derived xenograft mouse model (3, 26, 27). Targeting MTHFD2 may also impair stem-like features and chemoresistance in lung cancer, offering an opportunity for eradicating tumors and preventing recurrence (28). In sum, MTHFD2 depletion can result in cancer cell death and impair key features associated with cancer progression, such as proliferation, invasion, migration, and metastasis. Targeting MTHFD2 is thus a promising strategy for anti-cancer therapy.

FIGURE 2 | The regulatory mechanisms and biological functions of MTHFD2 in cancer. The gene expression of MTHFD2 is transcriptionally regulated by various transcription factors and post-transcriptionally regulated by microRNAs. Extracellular stimuli may modulate MTHFD2 expression via the mTORC1/ATF4 signaling pathway. The canonical role of MTHFD2 is converting one-carbon units for de novo purine synthesis. It may also be involved in the production of redox equivalents (NADPH) for oxidative stress defense. In the nucleus, this enzyme may play a role in DNA replication, RNA processing and translation.
REGULATION OF MTHFD2 IN CANCERS
Several studies have described the potential regulatory mechanisms of MTHFD2 expression, including transcriptional, post-transcriptional regulation, and extracellular stimuli (Figure 2). Various transcriptional factors (TFs) have been reported to modulate the expression level of MTHFD2. Ben-Sahra et al. (29) demonstrated that MTHFD2 expression was regulated by the mammalian target of rapamycin complex 1 (mTORC1) signaling pathway in mouse embryo fibroblasts (MEFs) and several human cancer cell lines. Mechanistically, activating transcription factor 4 (ATF4) may bind to the promoter region of MTHFD2 and directly regulate its expression. In response to cell growth signals, mTORC1 may activate ATF4, which then promotes the expression of MTHFD2 and facilitates the production of formyl units required for de novo purine synthesis.
MYC is a master regulator of cell growth and proliferation. It participates in the regulation of cell cycle progression, genetic instability, apoptosis, and metabolism (30). By analyzing publicly available ChIP-Seq data and performing ChIP-qPCR assays, Pikman et al. (25) showed that MYC binds to the MTHFD2 promoter region and regulates its expression in AML cells. Li et al. (32) reported that Ad4-binding protein/steroidogenic factor 1 (Ad4BP/SF-1) directly regulated MTHFD2 expression by binding to the ChIP-peak regions, thus affecting NADPH production in adrenocortical Y-1 cells. In Ewing sarcoma (EWS), the chimeric transcription factor EWS-FLI1 was the primary oncogenic driver that positively regulated the expression of MTHFD2 and MTHFD1L and impacted cellular redox status (33). SOX7 is a transcription factor that functions as a tumor suppressor. Zhang et al. (34) identified MTHFD2 as one of the essential target genes of SOX7 in breast cancer. They further demonstrated that SOX7-repressed MTHFD2 could contribute to SOX7-mediated tumor suppression. Besides, MTHFD2 was speculated to be one of the regulatory targets of Nrf2, as MTHFD2 mRNA was decreased by Nrf2 knockdown in A549 lung cancer cells (35).
On the epigenetic level, MTHFD2 has been reported to be post-transcriptionally regulated by microRNAs (miRNAs). Transcriptome profiling in breast cancer cells identified MTHFD2 as a target gene of miR-9 that affected cell proliferation and induced apoptosis (36). In AML cells, miR-92a inhibited cell proliferation and promoted apoptosis by directly downregulating MTHFD2 (37). In glioma, miR-940 might disturb the 1C metabolic pathway and suppress tumor progression by regulating MTHFD2 (21). In CRC, miR-33a-5p inhibited the growth and migration of HCT116 and HT29 cells by targeting MTHFD2 (38).
Intriguingly, among the enzymes involved in the mitochondrial folate pathway, MTHFD2 is particularly responsive to extracellular stimuli. The intracellular protein level of MTHFD2 responded rapidly to mitogenic stimuli in several cancer cell lines, such as U251, HeLa, and HCT116 (10). The expression was repressed by the deprivation of growth signals (e.g., serum) within 24 h and could be rapidly re-induced within 4 h after serum re-stimulation. Enforced expression of MTHFD2 was sufficient to promote cancer cell proliferation under serum-deprived conditions, indicating that its function might override the growth factor limitation (10, 39). This might, at least partially, contribute to uncontrolled tumor growth even in nutrition-limited environments and to the poor efficacy of growth factor inhibitors (e.g., EGFR inhibitors) in some refractory cancers such as NSCLC and GBM. Thus, a combination of an MTHFD2 inhibitor and growth factor inhibitors might be a promising therapeutic strategy for EGFR inhibitor-resistant cancers.
Morphological Changes
Downregulation of MTHFD2 can affect cancer cell morphology and possibly impair the ability to migrate and invade. In AML cells, knockdown of MTHFD2 resulted in a morphological shift including nuclear condensation and cytoplasmic ruffling (25). In breast cancer cells, MTHFD2 depletion caused a weaker and deformed vimentin network (24), indicating impairment of cell motility.
Increased Oxidative Stress
While the significance of folate metabolism has been recognized and attributed to the production of 1C units for nucleic acid synthesis, another crucial role of this pathway is the generation of NADPH, an important source of reducing power for redox homeostasis (40). Fan et al. (40) demonstrated that THF-mediated folate metabolism contributed as much as 40% of NADPH production in immortalized mouse kidney epithelial cells (iBMK). Knockdown of MTHFD2 led to a decreased NADPH/NADP+ ratio and increased reactive oxygen species levels. In line with this study, Shin et al. (13) revealed that purified human MTHFD2 exhibited dual redox cofactor specificity and was able to utilize both NADP+ and NAD+ in rapidly proliferating cells. Additionally, Ju et al. (3) showed that MTHFD2 conferred redox homeostasis in CRC cells and promoted tumor growth and metastasis. Suppression of MTHFD2 disturbed NADPH production and redox homeostasis, rendering CRC cells more vulnerable to oxidative stress such as hypoxia. Histidine-induced filament formation required GCN2/ATF4/MTHFD2 axis-maintained redox homeostasis, and knockdown of MTHFD2 affected cytidine triphosphate (CTP) synthase filament formation due to redox imbalance (41). Nmdmc, the Drosophila homolog of MTHFD2, has been revealed as a longevity gene. Overexpression of Nmdmc extended Drosophila's lifespan, which might be associated with enhanced oxidative stress resistance; decreased levels of mitochondrial ROS and Hsp22 and increased copy numbers of mitochondrial DNA were observed in Nmdmc-upregulated Drosophila (42). It is only in recent years that the role of MTHFD2 in NADPH production and redox homeostasis has been recognized, and the regulatory mechanism and clinical implications remain largely unknown.
Metabolite Profile Alterations
Unsurprisingly, depletion of MTHFD2 inhibits mitochondrial 1C metabolism and disturbs purine synthesis. For instance, loss of MTHFD2 led to glycine auxotrophy (i.e., a reliance on exogenous glycine) in mammalian fibroblasts (43), breast cancer (44), and AML (25). In mammalian fibroblasts, knockout of NMDMC (MTHFD2) completely blocked 1C-unit generation in mitochondria, and the cytoplasmic folate pathways were insufficient to compensate for optimal purine synthesis (43). Moreover, suppression of MTHFD2 in MCF-7 breast cancer cells caused prominent metabolic remodeling, such as greater vulnerability to exogenous folate depletion, enhanced glycolytic flux, and increased glutamine consumption (44). In addition to disturbing the serine-glycine conversion in mitochondria, MTHFD2 suppression may deplete tricarboxylic acid (TCA) cycle intermediates and cholesterol esters and increase sphingomyelin and triglyceride levels (25). By tracing and measuring isotope-labeled metabolites, Ben-Sahra et al. (29) demonstrated that depletion of MTHFD2 decreased de novo purine synthesis and was associated with reduced formate production.
MTHFD2-mediated purine synthetic metabolism has been demonstrated to be critical for stem-like cell properties and resistance to chemotherapy in lung cancer cells. Knockdown of MTHFD2 significantly reduced tumorigenesis and stem-like properties, probably due to insufficient purine nucleotide (28). The production of 5-aminoimidazole carboxamide ribonucleotide (AICAR), the final intermediate of purine synthesis pathway, is crucial to purine synthesis. MTHFD2 knockdown (or AICAR rescue) was found to reduce stem-like properties and restore sensitivity to gefitinib in gefitinib-resistant lung cancer cells. Overexpression of MTHFD2, on the other hand, conferred gefitinib resistance in gefitinib-sensitive cells. Taken together, this study suggested that MTHFD2-mediated 1C metabolism contributed to cancer stem-like properties and resistance to chemotherapy drugs through the consumption of AICAR.
In glioma, suppression of MTHFD2 through upregulation of miR-940 led to the disruption of intracellular 1C metabolism and exhibited anti-tumor effects (21). Interestingly, MTHFD2dependent glycine synthesis has been reported as a prerequisite for angiogenesis in endothelial cells (45). Since abnormal angiogenesis is an important hallmark in GBM, targeting MTHFD2 may halt the progression of GBM by either slowing cancer cell proliferation or inhibiting abnormal angiogenesis, or both.
THE ROLE BEYOND ENZYMATIC FUNCTION
Much less is known about the non-enzymatic activity of MTHFD2, beside its canonical role of supporting purine synthesis and the newly demonstrated role in redox defense. Recent studies conceptualized that MTHFD2 might profoundly regulate gene expression by affecting DNA replication, RNA translation, and epigenetic modification (Figure 2). Gustafsson Sheppard et al. (10) found that overexpression of MTHFD2 was sufficient to promote cancer cell proliferation independent of its dehydrogenase activity. They generated HCT-116 CRC cell lines expressing either the wild-type MTHFD2 protein or a mutant (MTHFD2-NAD) lacking the dehydrogenase activity due to a mutation in the NAD-binding site. Similar to wild-type MTHFD2, induction of MTHFD2-NAD resulted in markedly increased cell proliferation, indicating that the MTHFD2 protein can drive cell proliferation independent of its enzymatic function. They also found that MTHFD2 co-localized with DNA replication sites in the nucleus, with a possible role in driving cancer cell proliferation. Koufaris et al. (39) investigated the possible non-enzymatic functions of MTHFD2 by identifying its interacting proteins, its co-expression pattern and the transcriptional responses to its knockdown. By using co-immunoprecipitation (Co-IP) and mass spectrometry (MS), the authors identified that MTHFD2 may physically interact with a set of nuclear proteins involved in RNA metabolism and translation. Gene Ontology (GO) analysis of these proteins showed significant enrichment of RNA-binding proteins, which were also frequently co-expressed with MTHFD2. A shared function between MTHFD2 and its interacting partners was supported by transcriptomics data.
The intriguing interactions between cell metabolism and gene expression have attracted extensive research attention in recent years. The reciprocal regulation of these two fundamental biological processes maintains homeostasis and regulates cell growth, survival, and differentiation (46). Cell metabolism has been established as an important regulator of eukaryotic gene expression, and there is a growing list of metabolic enzymes and metabolites with roles in the regulation of chromatin structure and transcription. Sdelci et al. (47) described the connection between 1C metabolism and transcriptional regulation. The folate metabolism enzyme MTHFD1 was bound to chromatin at distinct genomic loci and controlled gene expression in AML cells. The regulatory effect was dependent on the histone acetyl reader bromodomain-containing protein 4 (BRD4), an important regulator of chromatin structure and transcription. Other purine pathway enzymes, including SHMT and ADE2, have also been reported to interact with BRD4 bromodomains directly and may also transcriptionally regulate gene expression (48). More recently, MTHFD2 was reported to promote metabolic reprogramming and tumor progression by forming a positive feedforward loop with HIF-2α. MTHFD2 promoted the methylation of HIF-2α mRNA and enhanced its translation, which in turn promoted the expression of MTHFD2 and aerobic glycolysis (49). This metabolic enzyme might therefore play an essential role in controlling global RNA N6-methyladenosine (m6A) methylation levels and linking RNA methylation status to the metabolic state.
A specific metabolic phenotype, elevated α-ketoglutarate to succinate ratio, could maintain pluripotency of embryonic stem cells through histone and DNA demethylation (50). Interestingly, MTHFD2 depletion could decrease the α-ketoglutarate to succinate ratio in AML cells and thus reduced their stem cell signatures, suggesting its potential role in epigenetic modulation.
DEVELOPMENT OF MTHFD2 INHIBITORS
Given the expression pattern of MTHFD2 and its knockdown effects in various cancers, there is a strong rationale for developing selective inhibitors targeting this enzyme for cancer therapy. The structure of the MTHFD2 protein was first depicted in 2005 by Christensen et al. (51); its inhibitors have been discovered only recently (Supplementary Table 2). As its enzyme activities require both the NAD and NADP cofactors and the substrate MTHF, one drug design strategy is to develop competitive inhibitors based on either the cofactors or the substrate. It is, however, worth noting that two other MTHFD enzymes (MTHFD1 and MTHFD2L) share both structural and functional similarities with MTHFD2. Tedeschi et al. (52) proposed a strategy to target MTHFD2 selectively, but their modeling studies indicated that the conserved secondary and super-secondary structures (such as Rossmann folds) in the three enzymes make it difficult to find competitive inhibitors with sufficiently high specificity. Notwithstanding, Nilsson et al. (15) demonstrated that MTHFD2 plays the predominant role in the mitochondrial one-carbon pathway and should be the main target. Inhibitors targeting multiple enzymes might result in a more complete inhibition.
The crystal structure of the first inhibitor of human MTHFD2, LY345899, was disclosed in 2017. Although this substrate-based competitive inhibitor has a lower affinity for MTHFD2 compared to MTHFD1, it showed a potent suppressive effect on MTHFD2, with an IC50 value of 663 nmol/L (96 nmol/L on MTHFD1) (53). Notwithstanding, LY345899 treatment significantly inhibited CRC tumor growth in both cell lines and a patient-derived xenograft (PDX) model (3). Following this, Kawai et al. (54) recently disclosed a novel isozyme-selective MTHFD2 inhibitor, DS44960156, with a tricyclic coumarin scaffold. It is characterized by several superior features, including remarkable selectivity (>18-fold) for MTHFD2 over MTHFD1, a low molecular weight (<400), and a good ligand efficiency (LE, a metric of binding). The upgraded compound DS18561882 showed strong cell-based activity and a good oral pharmacokinetic profile, and inhibited tumor growth in a mouse xenograft breast cancer model upon oral administration (55). The anti-tumor efficacy of these selective MTHFD2 inhibitors in different cancers awaits further assessment. Moreover, the natural product carolacton, which was originally discovered as an antibacterial compound, was found to inhibit folate-dependent 1C metabolism by targeting FolD/MTHFD. Carolacton showed competitive inhibition with respect to both the substrates and the cofactors, causing growth inhibition in different human cancer cell lines, such as HCT-116, KB-3.1, and KB-V.1 cells (56). Asai et al. (57) developed compounds that target the THF and NAD pockets of MTHFD2, respectively; their in silico study indicated high specificity and potential, but they have yet to experimentally verify the efficacy of these compounds.
DISCUSSION
MTHFD2 has attracted increasing interest in cancer research due to its specific expression pattern and prognostic value. Agents targeting this cancer-cell-specific molecule would minimize the adverse effects on normal cells. Several reports have demonstrated MTHFD2 to be a critical player in cancer survival related to nucleotide synthesis, NADPH production, and redox defense. Depletion of MTHFD2 may abrogate malignant phenotypes, such as proliferation, migration, invasion, and metastasis. Cell line-derived xenograft and PDX-based studies further support the anti-cancer effects of MTHFD2 knockdown in vivo. Currently, these studies focus on breast cancer and gastrointestinal cancers, even though its expression features and prognostic value have been demonstrated in a wider range of cancer types. It remains unknown whether inhibiting MTHFD2 in these cancers would significantly suppress tumor growth.
The expression of MTHFD2 can be regulated transcriptionally and post-transcriptionally, or even by extracellular stimulation. The transcription factors that might regulate MTHFD2 expression are being identified, but it remains elusive whether the high expression in cancer is attributable to MTHFD2-related genomic mutations or to the induction of other essential oncogenic drivers. How MTHFD2 might promote cancer progression also remains undetermined. Most current studies indicate that oxidative stress defense is an essential mechanism, but much is unknown, especially from the non-enzymatic perspective: for instance, whether MTHFD2 is a DNA/RNA-binding protein, which nucleotide sequences it might bind, and which genes it might regulate.
The inhibitory effects reported so far have predominantly been achieved by RNA interference techniques (siRNA and shRNA). Synthesized compounds have only been tested in breast cancer and CRC. The first synthetic MTHFD2 inhibitor, LY345899, has shown potent anti-tumor activity in CRC, while the research and development of inhibitors with higher affinity and selectivity is still ongoing. The potential "moonlighting" functions of MTHFD2 in tumorigenesis, DNA replication, RNA metabolism, and epigenetic regulation render this molecule an intriguing research target. Genetic or pharmacological targeting of MTHFD2 is a promising strategy in cancer therapy. Future studies should be devoted to investigating the anti-cancer role of MTHFD2 in a broad range of cancers; clarifying the regulatory factors and mechanisms underlying the high expression of MTHFD2 in cancer cells; exploring the non-enzymatic functions of MTHFD2; and discovering new compounds and testing their efficacy in pre-clinical and clinical studies.
AUTHOR CONTRIBUTIONS
ZZ contributed to the literature research, manuscript draft, and figure/table design. GL provided critical revision of the manuscript as well as the final approval of the version to publish.
Vector-Valued Shepard Processes: Approximation with Summability
In this work, vector-valued continuous functions are approximated uniformly on the unit hypercube by Shepard operators. If λ denotes the usual parameter of the Shepard operators and m is the dimension of the hypercube, then our results show that it is possible to obtain a uniform approximation of a continuous vector-valued function by these operators when λ ≥ m + 1. Using three-dimensional parametric plots, we illustrate this uniform approximation for some vector-valued functions. Finally, the influence of regular summability processes on the approximation is studied, and their motivation is demonstrated.
Introduction
The Shepard operators, initially defined by Donald Shepard in 1968 [1], are effectively employed in a wide array of fields, ranging from mathematics to engineering, and from geographical mapping systems to mining, owing to their interpolation capabilities as well as their ability to approximate functions rapidly. These operators are quite successful not only in scattered data interpolation problems (see [2-8]) but also in classical approximation theory (see [9-15]). Recently, in [16,17], we investigated the approximation behavior of the Shepard operators in the complex case. Our main focus in this paper is to use these operators to approximate vector-valued functions on the unit hypercube and to further enhance this approach through the utilization of regular summability methods.
Firstly, let us introduce the vector-valued Shepard operators that will be employed throughout this article.
Fix n ∈ N and, on the unit hypercube K := [0, 1]^m, examine the set of sample points given by

$$x_{k,n} := \frac{k}{n} = \left(\frac{k_1}{n}, \ldots, \frac{k_m}{n}\right), \qquad k = (k_1, \ldots, k_m), \ k_i \in \{0, 1, \ldots, n\}.$$

Then, in total, we have (n + 1)^m sample points on K. Assume d ∈ N and f = (f_1, f_2, ..., f_d) is a vector-valued function on the set K, where each component f_r (r = 1, 2, ..., d) is a real-valued function on the set K. Now, if 0 < λ ∈ R, we examine the following vector-valued Shepard processes:

$$S_{n,\lambda}(\mathbf{f}; x) := \frac{\sum_{k=0}^{n} |x - x_{k,n}|_m^{-\lambda}\, \mathbf{f}(x_{k,n})}{\sum_{k=0}^{n} |x - x_{k,n}|_m^{-\lambda}}, \qquad (1)$$

where we write $\sum_{k=0}^{n}$ for the multi-index summation $\sum_{k_1=0}^{n} \sum_{k_2=0}^{n} \cdots \sum_{k_m=0}^{n}$. Here, the symbol |•|_m denotes the usual Euclidean norm on the set K. Observe that S_{n,λ}(f) is an interpolatory process at the sample points x_{k,n}, i.e., S_{n,λ}(f; x_{k,n}) = f(x_{k,n}). It is easy to verify that S_{n,λ}(f) may be written with respect to the components of f as follows: S_{n,λ}(f; x) = (S̃_{n,λ}(f_1; x), S̃_{n,λ}(f_2; x), ..., S̃_{n,λ}(f_d; x)), where S̃_{n,λ} is given by

$$\widetilde{S}_{n,\lambda}(g; x) := \frac{\sum_{k=0}^{n} |x - x_{k,n}|_m^{-\lambda}\, g(x_{k,n})}{\sum_{k=0}^{n} |x - x_{k,n}|_m^{-\lambda}} \qquad (2)$$

for real-valued functions g defined on K. It is clear that S̃_{n,λ}(g; x) is real valued. We structure the present paper based on this terminology as follows: Section 2 examines the approximation characteristics of the vector-valued Shepard operators, employing both classical convergence and regular summability methods. Section 3 demonstrates our main approximation theorem together with the supporting auxiliary results. Section 4 gives some significant applications, including the influence of regular summability processes. The last section is devoted to the concluding remarks.
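For readers who want to experiment numerically, the following Python sketch implements the operator in (1) directly from the definition above; the function and variable names are ours, and the brute-force loop over the (n + 1)^m nodes is written for clarity rather than efficiency.

```python
# Minimal NumPy sketch of the vector-valued Shepard operator S_{n,lam} in (1).
import itertools
import numpy as np

def shepard(f, x, n, lam, m):
    """Evaluate S_{n,lam}(f; x) for f: [0,1]^m -> R^d at a point x."""
    x = np.asarray(x, dtype=float)
    num, den = 0.0, 0.0
    for k in itertools.product(range(n + 1), repeat=m):
        xk = np.array(k, dtype=float) / n      # sample point x_{k,n}
        dist = np.linalg.norm(x - xk)          # Euclidean norm |.|_m
        if dist == 0.0:                        # interpolation at the nodes
            return np.asarray(f(xk), dtype=float)
        w = dist ** (-float(lam))              # inverse-distance weight
        num = num + w * np.asarray(f(xk), dtype=float)
        den += w
    return num / den
```

For instance, with f(x) = (x_1, x_1 x_2) on [0, 1]^2, calling shepard(f, (0.3, 0.7), 10, 3, 2) returns a 2-vector close to f(0.3, 0.7), and exactly f(x_{k,n}) at the grid nodes, reflecting the interpolation property noted above.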
Approximating by Vector-Valued Shepard Operators
Let C(K, R^d) denote the space of all continuous functions from K into R^d. Now, we state an approximation result for the vector-valued Shepard operators given by (1).

Theorem 1. If f ∈ C(K, R^d) and λ ≥ m + 1, then

$$S_{n,\lambda}(\mathbf{f}) \rightrightarrows \mathbf{f} \quad \text{on } K, \qquad (3)$$

with ⇒ denoting the uniform convergence.
Note that the uniform convergence in (3) can be written explicitly with respect to the components of f:

$$\widetilde{S}_{n,\lambda}(f_r) \rightrightarrows f_r \quad \text{on } K \quad (r = 1, 2, \ldots, d). \qquad (4)$$

Remark 1. We should note that Farwig in [8] considered the multidimensional Shepard operators by using n distinct sample points in a compact set A satisfying two conditions of regularity. However, in (1), taking (n + 1)^m sample points in the unit hypercube K, we consider not only multidimensional but also vector-valued Shepard operators. One can also check that if we take d = 1 in Theorem 1, then our approximation result is in agreement with Farwig's theorem in [8]. Indeed, it follows from Theorem 2.3 in [8] (in accordance with our notations, take q = 0, r = 1/n, s = m, p = λ) that the order of approximation is O((log n)/n) when λ = m + 1 and O(1/n) when λ > m + 1, which coincides with our Theorem 1. Here, we will examine the demonstration of Theorem 1 from a different point of view than Farwig's method (see Section 3). However, unlike Farwig's result, we have not yet proved the existence of the approximation when m ≤ λ < m + 1.
We also give some applications that graphically and numerically illustrate the approximation of some vector-valued functions (see Section 4).
We now discuss the effects of regular summability methods on the approximation in Theorem 1. We first recall some preliminaries on summability. Let A := [a_jn] (j, n ∈ N) be a matrix and x := {x_n} a sequence. Then, for the A-transformed sequence of {x_n}, we write Ax := {(Ax)_j} with (Ax)_j = ∑_{n=1}^{∞} a_jn x_n, provided that the series converges for each j. In such a case, we call A a summability (matrix) method. If lim Ax = L with L ∈ R for the sequence x = {x_n}, this sequence is said to be A-summable (or A-convergent) to L, and we write A-lim x = L. For a sequence of vector-valued functions {f_n}, this limit can also be considered: the sequence is uniformly A-summable to f on K when

$$\lim_{j \to \infty}\, \sup_{x \in K}\, \left| \sum_{n=1}^{\infty} a_{jn}\, \mathbf{f}_n(x) - \mathbf{f}(x) \right| = 0, \qquad (5)$$

which is denoted by f_n A⇒ f on K. We also say that a matrix summability method A is regular when A-lim x = L whenever lim x = L. By the well-known Silverman-Toeplitz result (see, for instance, [18,19]), one can characterize the regularity of a matrix summability method as follows: A = [a_jn] is regular if and only if sup_j ∑_n |a_jn| < ∞, lim_j a_jn = 0 for each n, and lim_j ∑_n a_jn = 1. A summability method A = [a_jn] is said to be non-negative if a_jn ≥ 0 for all j, n ∈ N. We should note that non-negative regular summability processes are successful in approximation theory (cf. [20-26]). Now we apply such methods in approximating by vector-valued Shepard operators.
From Theorem 1, we deduce that, for each f ∈ C(K, R^d) and every λ ≥ m + 1, S_{n,λ}(f) ⇒ f on K. Now, for a fixed non-negative regular summability method A = [a_jn], we immediately check that, for every λ ≥ m + 1,

$$\sum_{n=1}^{\infty} a_{jn}\, S_{n,\lambda}(\mathbf{f}) \rightrightarrows \mathbf{f} \quad \text{on } K \ \text{as } j \to \infty,$$

provided that the transformed operator ∑_{n=1}^{∞} a_jn S_{n,λ}(f) is well defined for each j. Indeed, this follows from (1), (4) and (5), applied for each r = 1, 2, ..., d, since A is non-negative and regular. However, the next modifications give a motivation for regular summability methods in approximating processes.
Assume that, for each n ∈ N, u_n : K → R^d and v_n : K → K are vector-valued functions with bounded components on K. Using the sequences {u_n} and {v_n}, we consider the following modifications of the vector-valued Shepard operators:

$$S^{*}_{n,\lambda}(\mathbf{f}; x) := \left( u_{1,n}(x)\, \widetilde{S}_{n,\lambda}(f_1; x),\ \ldots,\ u_{d,n}(x)\, \widetilde{S}_{n,\lambda}(f_d; x) \right) \qquad (6)$$

and

$$S^{**}_{n,\lambda}(\mathbf{f}; x) := S_{n,\lambda}(\mathbf{f}; \mathbf{v}_n(x)). \qquad (7)$$

Introducing the vector-valued test functions

$$\mathbf{e}_0(x) := (1, 1, \ldots, 1) \in \mathbb{R}^d \qquad (8)$$

and

$$\mathbf{e}_1(x) := x \in K, \qquad (9)$$

one immediately deduces the following result.
Theorem 2. Let f ∈ C(K, R^d) and λ ≥ m + 1. (i) If u_n ⇒ e_0 on K, then S*_{n,λ}(f) ⇒ f on K. (ii) If v_n ⇒ e_1 on K, then S**_{n,λ}(f) ⇒ f on K.

Proof. (i) For each r = 1, 2, ..., d, we obtain from (1), (2) and (6) that

$$\left| u_{r,n}(x)\, \widetilde{S}_{n,\lambda}(f_r; x) - f_r(x) \right| \le |u_{r,n}(x)|\, \left| \widetilde{S}_{n,\lambda}(f_r; x) - f_r(x) \right| + |f_r(x)|\, |u_{r,n}(x) - 1|$$

holds for every x ∈ K and n ∈ N. Since each {u_{r,n}} is bounded and uniformly convergent to 1 on K, the proof follows from Theorem 1 at once.
(ii) By using a similar idea, for each r = 1, 2, ..., d, we obtain from (7) that

$$\left| \widetilde{S}_{n,\lambda}(f_r; \mathbf{v}_n(x)) - f_r(x) \right| \le \left| \widetilde{S}_{n,\lambda}(f_r; \mathbf{v}_n(x)) - f_r(\mathbf{v}_n(x)) \right| + \omega\!\left(f_r,\, |\mathbf{v}_n(x) - x|_m\right),$$

where the first term on the right is controlled as in the proof of Theorem 1 in Section 3. Here, as usual, ω(•, δ), δ > 0, denotes the usual modulus of continuity defined by

$$\omega(g, \delta) := \sup\{\, |g(x) - g(y)| : x, y \in K,\ |x - y|_m \le \delta \,\}$$

if g is any real-valued bounded function on K. Since each f_r is uniformly continuous on K and v_n ⇒ e_1 on K, the proof is completed.
Then, the following natural problem arises: • Can we preserve the approximation in Theorem 2 in "some sense" when u_n ⇒ e_0 or v_n ⇒ e_1 fails on K (in the usual sense)?
The next result partially gives an affirmative answer to this problem by using non-negative regular summability methods.

Theorem 3. Assume A = [a_jn] to be a non-negative regular matrix method. Let u_n A⇒ e_0 on K, with δ_n given by (10) and, for each r = 1, 2, ..., d, a positive constant M_r such that |u_{r,n}(x)| ≤ M_r on K. Then, for every f ∈ C(K, R^d) and λ ≥ m + 1, since A is regular, the r.h.s. of the corresponding inequality vanishes as j → ∞; consequently, S*_{n,λ}(f) A⇒ f on K.

However, we understand from the discussion in Section 4.3 that the convergence v_n A⇒ e_1 on K is not sufficient for the A-summability of {S**_{n,λ}(f)}. In such a case, we need strong summability. If {f_n} is a sequence of vector-valued functions from K into R^d, then we write f_n |A|⇒ f on K when

$$\lim_{j \to \infty}\, \sup_{x \in K}\, \sum_{n=1}^{\infty} a_{jn}\, \left| \mathbf{f}_n(x) - \mathbf{f}(x) \right| = 0.$$

The above definition can be easily modified when the range of f_n is K ⊂ R^m. We also observe the following fact on K: strong uniform A-summability implies uniform A-summability whenever f_n and f are vector-valued functions whose components are bounded on K; but it is easily verified that the converse result is not always true. Then we obtain the next statement.
Theorem 4. Assume A = [a_jn] to be a non-negative regular matrix method and let v_n |A|⇒ e_1 on K. Then, for every f ∈ C(K, R^d) and λ ≥ m + 1, S**_{n,λ}(f) |A|⇒ f on K, which also implies that the sequence {S**_{n,λ}(f)} is uniformly A-summable to f on K.
Proof. Assume f ∈ C(K, R^d). The uniform continuity of f_r (r = 1, 2, ..., d) on K implies that each component function f_r verifies the following (see [27]): for every ε > 0, one can find a constant K_ε > 0 such that

$$|f_r(x) - f_r(y)| \le \varepsilon + K_\varepsilon\, |x - y|_m \qquad (11)$$

holds true for each x, y ∈ K. Hence, from (6), (10) and (11), we deduce the corresponding estimate for each r = 1, 2, ..., d. If j → ∞, since A is regular, we obtain that the r.h.s. of the last inequality vanishes for any r = 1, 2, ..., d. This means that the sequence {S**_{n,λ}(f)} is strongly (uniformly) A-summable to f on K.
Auxiliary Results and Demonstration of Theorem 1
With the aim of demonstrating Theorem 1, the following lemmas are needed.
Then, we observe that |x − x_{k,n}|_m admits the required lower bound. Using this and the stated inequality, the assertion follows.
Then, we obtain the stated bound since λ ≥ m + 1. Finally, assume that j = m. In this case, using a similar idea, we may write the analogous estimate. We can see that Lemma 2 is also valid if all summations start from 1, which implies (13).
Then, for any fixed x ∈ K, consider the real-valued function ϕ_x on the set K given by ϕ_x(y) := |y − x|_m. The following lemma will be useful.
Proof. It follows from (2) that S̃_{n,λ}(ϕ_x; x) admits the estimate (14). Observe that the multi-index set Ω_n\{k*} contains (n + 1)^m − 1 elements. Now, for i_1, i_2, ..., i_m ∈ {1, 2, ..., m}, define the corresponding (disjoint) multi-index subsets of Ω_n; all such sets together contain (n + 1)^m − 1 elements, and we can consider the set Ω_n\{k*} as the union of all disjoint sets having the above forms. Then, we may write from (14) the corresponding bound on S̃_{n,λ}(ϕ_x; x). Now we bound the summations on the r.h.s. of the last inequality, working similarly for each subset. Hence, using the fact (13) (see also Lemma 2), we obtain from (15) that the estimate in (16) for S̃_{n,λ}(ϕ_x; x) holds true for each n ≥ 2. Now, letting n → ∞ on both sides in (16), the proof is completed.
Applications and Special Cases
In this section, we first apply Theorem 1 and compute the corresponding approximation errors. Then, to point out the influence of regular summability methods in approximation, we consider modifications of the vector-valued Shepard operators.
An Application of Theorem 1
As a first application, take d = 3 and m = 2. Now, consider the functions f = (f_1, f_2, f_3) and g = (g_1, g_2, g_3) on K = [0, 1]^2, where, for x = (x, y) ∈ K, the components of f are given in (18) and those of g in (19), with, in particular,

$$g_1(\mathbf{x}) = 8 + (3 + \cos(2\pi y)) \cos(2\pi x), \qquad g_2(\mathbf{x}) = 3 + \sin(2\pi y).$$

Then, by Theorem 1, we have, if λ ≥ 3,

$$S_{n,\lambda}(\mathbf{f}) \rightrightarrows \mathbf{f} \quad \text{on } K \qquad (20)$$

and

$$S_{n,\lambda}(\mathbf{g}) \rightrightarrows \mathbf{g} \quad \text{on } K. \qquad (21)$$

If we consider f and g as three-dimensional surfaces parametrized by x and y, we can produce their three-dimensional parametric plots by using the Mathematica program. Similarly, we can also produce the corresponding approximations in (20) and (21) by vector-valued Shepard operators. Such parametric plots are shown in Figure 1 for the values n = 6, 11, 17 and λ = 5, for the functions defined in (18) and (19).
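Reusing the shepard sketch from Section 1, one can also tabulate the error metrics reported in Tables 1-3 (maximum error, mean error, and mean squared error) on an evaluation grid. Since the components of f in (18) and the third component of g in (19) are not reproduced in this excerpt, the g3 below is an assumed torus-like component used purely for illustration.

```python
# Illustrative computation of per-component approximation errors for (21),
# assuming the shepard(...) function defined in the earlier sketch.
import numpy as np

def g(x):
    u, v = x
    return np.array([
        8 + (3 + np.cos(2 * np.pi * v)) * np.cos(2 * np.pi * u),  # g1, as above
        3 + np.sin(2 * np.pi * v),                                # g2, as above
        (3 + np.cos(2 * np.pi * v)) * np.sin(2 * np.pi * u),      # g3: assumed
    ])

n, lam, m = 11, 5, 2
grid = np.linspace(0.0, 1.0, 25)
errs = np.array([np.abs(shepard(g, (u, v), n, lam, m) - g((u, v)))
                 for u in grid for v in grid])
for r in range(3):  # per-component metrics, as in Tables 1-3
    e = errs[:, r]
    print(f"component {r + 1}: max={e.max():.4f}, "
          f"mean={e.mean():.4f}, mse={(e ** 2).mean():.6f}")
```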
Effects of Regular Summability Methods
In this subsection, we give some applications of Theorems 3 and 4. Firstly, in the modification (6), by using the test function e_0 in (8), define the vector-valued functions u_n : K → R^d by

$$\mathbf{u}_n(x) := \begin{cases} 2\mathbf{e}_0(x), & \text{for } n \text{ even} \\ 0, & \text{for } n \text{ odd} \end{cases}$$

and also examine the Cesàro method C_1 = [c_jn] given by c_jn = 1/j if 1 ≤ n ≤ j and c_jn = 0 otherwise. Consequently, we easily observe that u_n C₁⇒ e_0 on K, since

$$\sum_{n=1}^{j} c_{jn}\, \mathbf{u}_n(x) = \frac{2 \lfloor j/2 \rfloor}{j}\, \mathbf{e}_0(x),$$

where ⌊•⌋ denotes the floor function. Then, from the regularity of C_1 and Theorem 1, the r.h.s. of the last inequality converges to f_r (for each r = 1, 2, ..., d) as j → ∞; hence, S*_{n,λ}(f) C₁⇒ f on K.

Secondly, in the modification (7), by using the test function e_1 in (9), we consider the following function sequences:

$$\mathbf{v}_n(x) := \begin{cases} 2\mathbf{e}_1(x), & \text{for } x \in [0, \tfrac{1}{2}]^m \text{ and } n \text{ even} \\ 0, & \text{for } x \in [0, \tfrac{1}{2}]^m \text{ and } n \text{ odd} \\ \mathbf{e}_1(x), & \text{otherwise} \end{cases}$$

and a companion sequence {ṽ_n}. Using again the Cesàro method, we obtain v_n C₁⇒ e_1 on K. On the other hand, for the sequence {ṽ_n}, we immediately see that, for every x ∈ K with components x_r ≤ 1, the C_1-transform differs from e_1(x) by at most 1/j → 0 as j → ∞, so that ṽ_n C₁⇒ e_1 on K. Now, in (7), we take the sequences {v_n} and {ṽ_n} instead of {v_n}, respectively. If S**_{n,λ} is constructed with the sequence {ṽ_n}, then we observe that the componentwise estimate involving |S̃_{n,λ}(f_r; x) − f_r(x)| holds for each r = 1, 2, ..., d. Then, from the regularity of C_1 and Theorem 1, the r.h.s. of the above inequality uniformly vanishes as j → ∞. Hence, the sequence {S**_{n,λ}(f)} is strongly (uniformly) C_1-summable to f on K for each f ∈ C(K, R^d).
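The following small sketch illustrates the C_1 transform used here, namely averaging the first j terms of a sequence. The oscillating sequence mirrors the u_n example above, which fails to converge in the ordinary sense but is Cesàro-summable; the function name and sample values are illustrative.

```python
# Sketch of the Cesaro (C_1) transform of a sequence: (C_1 s)_j = (1/j) sum_{n<=j} s_n.
import numpy as np

def cesaro_means(s):
    """Return the C_1 transform of the sequence s (1-indexed in the math)."""
    s = np.asarray(s, dtype=float)
    return np.cumsum(s) / np.arange(1, len(s) + 1)

# A non-convergent 0/2 oscillation (like u_n above) is C_1-summable to 1:
osc = np.array([2.0 if n % 2 == 0 else 0.0 for n in range(1, 21)])
print(cesaro_means(osc)[-1])  # tends to 1 as j grows
```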
Concluding Remarks
In this paper, we established a methodology for approximating continuous vector-valued functions on the unit hypercube using Shepard operators. In order to compute the approximation error, we employed some well-known error metrics, including the maximum error, mean error, and mean squared error. Furthermore, we demonstrated both the theoretical and practical improvements of this approximation by using techniques from regular summability methods.
In future work, we plan to consider novel vector-valued Shepard processes for approximating functions that are not necessarily continuous (i.e., integrable functions), in the spirit of the famous Kantorovich operators, which are of interest in applications to image processing and sampling theory (see [28-30]).
Table 1 .
Approximation errors in the first component.
Table 2 .
Approximation errors in the second component.
Table 3 .
Approximation errors in the third component.
Dark-ages Reionization and Galaxy Formation Simulation - XV. Stellar evolution and feedback in dwarf galaxies at high redshift
We directly compare predictions of dwarf galaxy properties in a semi-analytic model (SAM) with those extracted from a high-resolution hydrodynamic simulation. We focus on galaxies with halo masses of 1e9<Mvir/Msol<1e11 at high redshift ($z\ge5$). We find that, with the modifications previously proposed in Qin et al. (2018), including suppressing the halo mass and baryon fraction as well as modulating the gas cooling and star formation efficiencies, the SAM can reproduce the cosmic evolution of galaxy properties predicted by the hydrodynamic simulation. These include the galaxy stellar mass function, total baryonic mass, star-forming gas mass and star formation rate at $z\sim5-11$. However, this agreement is only possible by reducing the star formation threshold relative to that suggested by local observations; otherwise, too much star-forming gas is trapped in quenched dwarf galaxies. We further find that dwarf galaxies rapidly build up their star-forming reservoirs in the early universe ($z>10$), with the relevant time-scale becoming significantly longer towards lower redshifts. This indicates efficient accretion in cold mode in these low-mass objects at high redshift. Note that the improved SAM, which has been calibrated against hydrodynamic simulations, can provide more accurate predictions of high-redshift dwarf galaxy properties that are essential for reionization studies.
INTRODUCTION
Reionization refers to an important process after the Big Bang, during which the intergalactic medium (IGM) transitioned from neutral hydrogen to its ionized state (Wyithe & Loeb 2004). According to the observed galaxy sample at high redshift (Bouwens et al. 2015, 2016; Stefanon et al. 2017; Oesch et al. 2016), this process can only be accounted for when ionizing photons from much fainter galaxies are taken into account (Robertson et al. 2013; Duffy et al. 2014; Bouwens et al. 2015; Liu et al. 2016). Although there are still some debates on other possible sources that could dominate the high-redshift photon budget, such as active galactic nuclei (AGN; Giallongo et al. 2015; Madau & Haardt 2015; Qin et al. 2017c; Hassan et al. 2018), dwarf galaxies that are beyond our observational capabilities are generally thought to have driven the Epoch of Reionization (EoR). In this context, understanding the formation of these unobserved objects is crucial to studying the EoR and can, at this stage, only be probed with theoretical simulations.
Hydrodynamic simulations evolve dark matter and baryonic particles simultaneously and provide direct insights into the relevant astrophysical processes (Vogelsberger et al. 2014; Schaye et al. 2015; Hopkins et al. 2014; Feng et al. 2016). However, resolving dwarf galaxies within a cosmological volume for reionization studies usually involves more than a few billion particles, which remains computationally challenging at this stage. A more efficient method is to apply semi-analytic models (SAMs; Croton et al. 2006; Somerville et al. 2008; Guo et al. 2011; Henriques et al. 2015) to N-body simulations (Springel et al. 2005b; Boylan-Kolchin et al. 2009; Klypin et al. 2011; Garrison et al. 2018) that only consider collisionless particles. Using the halo properties inherited from the parent simulation, SAMs approximate baryonic physics such as gas accretion, star formation and feedback using simplified scaling relations. These relations are motivated directly from physical processes, or empirically from observational results and more complicated numerical techniques such as hydrodynamic simulations and radiative transfer calculations. The semi-analytic prescriptions that indirectly model galaxy formation introduce free parameters to describe efficiencies, which are inevitably accompanied by parameter degeneracies (Mutch et al. 2013; Henriques et al. 2013). This can make their predictions sometimes controversial, and potentially disconnected from the true behaviour of the universe.
An alternative way to validate SAMs in the absence of observations at high redshift is to compare their results against hydrodynamic calculations that start from identical cosmological initial conditions. The goal of this work is to capture emergent behaviours from the hydrodynamic simulations (e.g. large-scale mass removal by winds from supernova events in individual star-forming sites) and to improve the parametrised modelling in SAMs so as to replicate these processes. Under the assumption that hydrodynamic simulations, which model the details of galaxy formation in a more physically realistic manner, are a more natural description of the astrophysical phenomenon and hence more representative of real galaxies, we can explore the semi-analytic prescriptions for quantities that are, in practice, unobservable, and potentially reveal improper assumptions or missing physics in SAMs. Guo et al. (2016) compared the l-galaxies (Springel et al. 2005a; Henriques et al. 2015) and galform (Cole et al. 2000; Bower et al. 2006) SAMs with the EAGLE hydrodynamic simulations (Schaye et al. 2015), and concluded that the models can reproduce the stellar mass function predicted by EAGLE. However, discrepancies were also found in the efficiencies of stellar and AGN feedback as well as in the predicted stellar mass-metallicity and size relations. Mitchell et al. (2017) also used EAGLE to assess galform and found that the angular momentum as well as the baryon cycling might not be properly traced in the SAM, leading to inaccurate predictions of galaxy sizes. Stevens et al. (2017), on the other hand, investigated cooling of Milky Way-like galaxies in EAGLE and addressed the necessity of updating the cooling prescription employed in most SAMs (see recent updates of the cooling model in Hou et al. 2018a,b). We, in the previous paper (Qin et al. 2018), also found that the cooling prescription needs revision for more accurate modelling of low-mass galaxies at high redshift and proposed an alternative modification to the current prescription, avoiding the introduction of a new model. Côté et al. (2017) recently extended the comparison from cosmological simulations of smoothed particles to zoom-in simulations (Bryan et al. 2014) of a system with a total mass of ∼10^9 M⊙ (one main halo and two satellites; Wise et al. 2012a,b, 2014), and investigated the difference in dwarf galaxy formation between a SAM and a hydrodynamic simulation. They found that their SAM, which employs a different prescription of gas accretion, was successful in reproducing the hydrodynamic calculation of the star formation history, but predicted a much narrower distribution of metallicity compared to the hydrodynamic result. This is the second paper following the work of Qin et al. (2018), where we investigated the performance of SAMs when applied to high-redshift dwarf galaxies. We used the Meraxes SAM (Mutch et al. 2016a) as an example and focused on gas accretion, cooling and star formation, with reionization and supernova feedback isolated. We compared the stellar and gas masses with a high-resolution hydrodynamic simulation from the Smaug suite (Duffy et al. 2010, 2014), and found that, in the SAM: (i) due to the lack of hydrostatic pressure in parent N-body simulations, inheriting halo properties directly from the dark matter halo merger trees overestimates the total mass of haloes hosting dwarf galaxies; (ii) the assumption that, in the absence of feedback, haloes consist of a baryonic reservoir with a mass of Ωb/Ωm of their total mass is not accurate for dwarf galaxy formation modelling and can lead to a significant overestimation of the total baryonic mass; (iii) star formation modelled by consuming the total gas disc in a few dynamical times of that disc cannot capture the evolutionary path of star formation implemented in hydrodynamic simulations; and (iv) gas accreted by dwarf galaxies is cold, with a median temperature significantly lower than the halo virial temperature, and the current cooling prescription is not representative of this process.
Accordingly, we proposed modifications to SAMs, seeking for consistency with hydrodynamic simulations in calculations of the evolution of stellar and gas components of dwarf galaxies. In this work, we include these modifications as well as feedback from reionization and supernovae, and investigate whether the updated SAM can broadly agree with the hydrodynamic calculation of dwarf galaxies in the presence of feedback.
We start with a brief review of the Meraxes SAM, the modifications proposed in Qin et al. (2018, hereafter Paper-XIV), and the Smaug hydrodynamic simulation suite in Section 2. We then present and discuss our comparison results in Section 3. Conclusions are given in Section 4. In this work, we adopt the Chabrier initial mass function (IMF; Chabrier 2003) in the mass range of 0.1-120 M⊙ and cosmological parameters from WMAP7 (Ωm, Ωb, ΩΛ, h, σ8, ns = 0.275, 0.0458, 0.725, 0.702, 0.816, 0.968; Komatsu et al. 2011) in all simulations.
THE DRAGONS PROJECT
Taking advantage of N-body/hydrodynamic simulations (Poole et al. 2016; Duffy et al. 2014; Qin et al. 2017a) and SAMs (Mutch et al. 2016a; Qin et al. 2017c), the Dark-ages Reionization And Galaxy Formation Observable from Numerical Simulations (DRAGONS) programme studies reionization and high-redshift galaxy formation (Geil et al. 2016, 2017; Liu et al. 2016, 2017; Mutch et al. 2016b; Park et al. 2017; Duffy et al. 2017; Qin et al. 2017b). In the previous publication of this series (Paper-XIV), we used the Meraxes SAM as an example and investigated the semi-analytic modelling prescriptions adopted in the literature. We focussed on comparisons of dwarf galaxy properties calculated by Meraxes with a simplified model that ignores feedback from the Smaug hydrodynamic simulation suite. Based on the comparison, we proposed modifications to the halo properties as well as the cooling and star formation prescriptions.
In this work, we include reionization and supernova feedback, and extend the comparison of high-redshift (z ≥ 5) dwarf galaxy modelling with a molecular-hydrogen-based star formation law in the SAM. We briefly introduce the Meraxes SAM and the Smaug hydrodynamic simulation in this section, with emphasis on reionization and supernova feedback, and refer the interested reader to Mutch et al. (2016a, hereafter Paper-III), Qin et al. (2017c) and Mutch et al. (in prep.) for more details of Meraxes, and Schaye et al. (2010) and Duffy et al. (2014) for the hydrodynamic simulations.
Meraxes
Meraxes evolves galaxies with scaling relations capturing baryonic processes. These include gas infall, cooling, star formation, supernova feedback, metal enrichment, stellar mass recycling, reionization, supermassive black hole growth, AGN feedback and mergers. It also calculates the ionization state of the IGM using the 21cmfast semi-numerical algorithm (Mesinger et al. 2011). Note that, in order to take hydrostatic pressure into account (Paper-XIV) in this work, halo masses inherited from the merger trees that are constructed from collisionless N-body simulations, as well as total baryonic masses, are updated using the halo mass and baryon fraction modifiers provided in Qin et al. (2017a; see Appendix A for the impact of incorporating the modifiers into the semi-analytic results). We next provide a review of the star formation, supernova feedback and reionization feedback prescriptions in this section.
Star formation
Gas falls into a halo from the IGM and cools through thermal radiation. Within t_transition (see Appendix B for a review of the cooling prescription), this process leads to the formation of a star-forming disc (m_sf), which is assumed to follow an exponential surface density profile

Σ_sf(r) = [m_sf/(2π r_disc^2)] exp(-r/r_disc),  (1)

where r_disc = R_vir λ/√2 (λ is the halo spin parameter as defined in Bullock et al. 2001) represents the disc scalelength, assuming conservation of specific angular momentum between the star-forming disc and its host halo (Mo et al. 1998).
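A short numerical check of this profile for a hypothetical halo (the values of R_vir, λ and m_sf below are arbitrary inputs, not values from the paper): integrating Σ_sf over the whole disc should recover m_sf.

```python
import numpy as np
from scipy.integrate import quad

def sigma_sf(r, m_sf, r_disc):
    """Exponential surface density profile, equation (1) [Msun / kpc^2]."""
    return m_sf / (2.0 * np.pi * r_disc**2) * np.exp(-r / r_disc)

# Hypothetical halo: R_vir = 30 kpc, spin parameter lambda = 0.05.
r_vir, lam, m_sf = 30.0, 0.05, 1e8
r_disc = r_vir * lam / np.sqrt(2.0)

# Integrating 2*pi*r*Sigma(r) to infinity recovers m_sf; roughly 80 per cent
# of the mass lies within the 3*r_disc star-forming radius quoted below.
total, _ = quad(lambda r: 2 * np.pi * r * sigma_sf(r, m_sf, r_disc), 0, np.inf)
within, _ = quad(lambda r: 2 * np.pi * r * sigma_sf(r, m_sf, r_disc), 0, 3 * r_disc)
print(total / m_sf, within / m_sf)  # ~1.0, ~0.80
```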
Total-gas-based star formation prescription: In previous DRAGONS publications, we follow Croton et al. (2006), form stars by consuming the total star-forming gas, and calculate the star formation rate (SFR) by

ṁ_* = (m_sf - m_sf,c)/t_sf,  (2)

where t_sf ≡ α_sf^-1 3r_disc/V_vir represents the depletion time-scale of the total star-forming gas on the disc. Note that 3r_disc corresponds to the outer disc radius within which star formation can happen, according to observations of the Milky Way (van den Bergh 2000). α_sf, V_vir and m_sf,c are the star formation efficiency, the virial velocity, and the minimum mass of hydrogen gas below which a galaxy cannot form stars, respectively. The form of t_sf indicates that the depletion time-scale of the total star-forming gas approximately follows the dynamical time-scale of the host halo. However, this was found to be inconsistent with the implementation in hydrodynamic simulations (Paper-XIV). In this work, we instead adopt the following parametrization and use free parameters, α_sf and β_sf, to directly adjust the evolutionary path of t_sf:

t_sf = α_sf [(1+z)/6]^β_sf.  (3)

Molecular-hydrogen-based star formation prescription: A second star formation prescription will be explored in this work, the details of which will be presented in Mutch et al. (in prep.). Note that this prescription is based on the depletion of molecular hydrogen (see Lagos et al. 2011 and references therein) and is considered a more physically plausible model than the total-gas-based prescription. First, we assume that stars (m_*) follow the same distribution as the interstellar medium (ISM), forming a stellar disc with a surface density profile following equation (1; changing the subscript from sf to *). We then calculate the pressure of the ISM, accounting for both stars and hydrogen, through the Elmegreen (1993) approximation

P(r) = (π/2) G Σ_sf(r) [Σ_sf(r) + (σ_sf/σ_*) Σ_*(r)],  (4)

where G represents the gravitational constant and, following Lagos et al. (2011), the vertical velocity dispersions of gas (σ_sf) and stars (σ_*) are assumed to be 10 km s^-1 and 0.02 (r_disc π G Σ_*)^1/2, respectively. In order to split the disc into molecules and atoms, the observed relation between the ISM pressure and the surface density ratio of HI to H2 is implemented. Blitz & Rosolowsky (2006) investigated the HI, CO and stellar densities of 14 nearby galaxies and found

Σ_H2(r)/Σ_HI(r) = [P(r)/k_B / (4.3×10^4 K cm^-3)]^0.92,  (5)

where k_B represents the Boltzmann constant. With Σ_H2(r) + Σ_HI(r) = Σ_sf(r), we estimate the H2 mass according to

m_H2 = ∫_0^{3r_disc} 2π r Σ_H2(r) dr,  (6)

and calculate the SFR following equations (2) and (3; changing the subscript from sf to H2). Note that we do not impose any mass threshold for H2 in this work and set m_H2,c = 0.
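The following sketch strings these pieces together for a hypothetical disc. The normalization and exponent of the pressure-molecular-fraction relation are the Blitz & Rosolowsky (2006) values quoted above; all masses and sizes are illustrative assumptions, not calibrated values.

```python
import numpy as np
from scipy.integrate import quad

KB = 1.380649e-16   # Boltzmann constant [erg/K]
G = 6.674e-8        # gravitational constant [cgs]
MSUN = 1.989e33     # [g]
KPC = 3.086e21      # [cm]

def sigma_exp(r, m, r_d):
    """Exponential surface density [g/cm^2]; r, r_d in cm, m in g."""
    return m / (2 * np.pi * r_d**2) * np.exp(-r / r_d)

def mol_ratio(r, m_sf, m_star, r_d, sig_gas=1e6):
    """Sigma_H2/Sigma_HI from the Blitz & Rosolowsky (2006) relation, eq. (5)."""
    s_g, s_s = sigma_exp(r, m_sf, r_d), sigma_exp(r, m_star, r_d)
    sig_star = 0.02 * np.sqrt(r_d * np.pi * G * s_s)   # [cm/s], as in eq. (4) text
    p = 0.5 * np.pi * G * s_g * (s_g + (sig_gas / sig_star) * s_s)
    return (p / KB / 4.3e4) ** 0.92                    # P/k_B in K cm^-3

# Hypothetical galaxy: m_sf = 1e8 Msun, m_* = 1e7 Msun, r_disc = 1 kpc.
m_sf, m_star, r_d = 1e8 * MSUN, 1e7 * MSUN, 1.0 * KPC
f = lambda r: 2 * np.pi * r * sigma_exp(r, m_sf, r_d) * (
        mol_ratio(r, m_sf, m_star, r_d) / (1 + mol_ratio(r, m_sf, m_star, r_d)))
m_h2, _ = quad(f, 0.0, 3 * r_d)
print(f"H2 fraction within 3 r_disc: {m_h2 / m_sf:.2f}")
```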
Supernova feedback
Inferred from the stellar lifetime-mass relation (Portinari et al. 1997) and the assumed IMF, we calculate the fraction of newly formed stars (Δm = ṁ_* Δt, where Δt is the time interval between two snapshots) that will have reached the supernova stage by the end of the current time step, η_SNII. These stars recycle their mass to the ISM, and the metals and energy produced by the supernovae provide feedback to the environment. In particular, the metals enhance the cooling rate through the metallicity-dependent cooling function, while the supernova energy drives the transition of gas between different reservoirs. In practice, supernova energy converts star-forming gas (m_sf) to hot (i.e. non-star-forming) gas (m_hot) and, in the case of strong supernova feedback, further ejects hot gas (m_ejected) from the galaxy.
In this work, we adopt the Guo et al. (2011) prescription to calculate the energy coupled to the surrounding gas,

e_total = ε_energy η_SNII Δm × 10^51 erg,  (7)

with

ε_energy = min{1, α_energy [0.5 + (V_max/V_energy)^-β_energy]}  (8)

representing the coupling efficiency between the supernova energy and the ISM, where V_max is the maximum circular velocity of the host halo, and α_energy, β_energy and V_energy are free parameters introduced to modulate the feedback efficiency.
We consider the following two supernova feedback regimes: 1) contemporaneous feedback, which is provided by massive stars formed within the current snapshot; and 2) delayed feedback, where long-lived stars formed at earlier times are taken into account. Therefore, the total energy released in the current snapshot, j, is

E^j_total = Σ_{i=j-4}^{j} e^j_{total,i},  (9)

where i = j-4, ..., j-1 represents delayed feedback from the previous 4 snapshots, and e^j_{total,i} = ε^j_energy η^j_{SNII,i} Δm_i × 10^51 erg denotes the supernova energy released by stars that are formed at snapshot i and become supernovae at snapshot j.
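A compact sketch of this bookkeeping, using the Guo et al. (2011)-style coupling efficiency of equation (8); the parameter values and the per-snapshot SNII fractions below are placeholders, not the calibrated values of Table 1.

```python
import numpy as np

E_SN = 1e51  # erg per supernova

def eps_energy(v_max, alpha=0.03, beta=3.5, v_energy=70.0):
    """Energy-ISM coupling efficiency, equation (8); velocities in km/s."""
    return min(1.0, alpha * (0.5 + (v_max / v_energy) ** (-beta)))

def total_sn_energy(dm_star, eta_snii, v_max):
    """Equation (9): contemporaneous plus delayed feedback over 5 snapshots.

    dm_star[i]  -- stellar mass formed at snapshot j-4+i [Msun]
    eta_snii[i] -- fraction (per Msun) of those stars exploding at snapshot j
    """
    eps = eps_energy(v_max)
    return eps * E_SN * np.sum(np.asarray(eta_snii) * np.asarray(dm_star))

# Placeholder star formation history over the last five snapshots.
dm_star = [2e5, 1e5, 5e4, 8e4, 1e5]       # Msun
eta     = [1e-3, 2e-3, 3e-3, 4e-3, 5e-3]  # per Msun, illustrative only
print(f"{total_sn_energy(dm_star, eta, v_max=40.0):.3e} erg")
```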
On the other hand, the expected mass of gas that is heated by supernovae depends on the mass loading factor, ε_mass, which is assumed to follow the same form as the coupling efficiency in equation (8; changing the subscript from energy to mass). We first calculate the maximum reheated mass,

Δm^max_reheat = ε_mass Δm,  (10)

with an upper limit of ε_mass set to ε^max_mass = 10 following Paper-III, which is a typical value for high-redshift dwarf starburst galaxies (Uhlig et al. 2012). Note that, since the total supernova energy is finite (see equation 9), a galaxy might not be able to heat all of the mass estimated from the loading factor. Therefore, we calculate the mass of actually reheated gas according to

Δm_sf = -min(m_sf, min[Δm^max_reheat, E_total/(0.5 V_vir^2)]),  (11)

while, in the case of an intense supernova event where E_total + 0.5 Δm_sf V_vir^2 > 0, the energy released by supernovae further unbinds the hot gas, which is removed from the galaxy and stored in a reservoir termed the ejected gas:

Δm_hot = -min[m_hot, max(0, (E_total + 0.5 Δm_sf V_vir^2)/(0.5 V_vir^2))].  (12)

We assume that the ejected gas does not contribute to star formation, owing to its low-density and high-temperature profile, and that its cooling might only be efficient once it has been reincorporated into the galaxy (Henriques et al. 2013).
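Putting the energy budget together, a sketch using the sign conventions above (here the reheated and ejected masses are returned as positive quantities; all reservoir masses and the energy input are hypothetical):

```python
def apply_sn_feedback(m_sf, m_hot, dm_star, eps_mass, e_total, v_vir_kms):
    """Move gas between reservoirs given a supernova energy budget.

    Implements equations (10)-(12): reheat star-forming gas to hot,
    then eject hot gas with any leftover energy. Masses in Msun,
    velocities in km/s, energy in erg.
    """
    MSUN_KM2_S2 = 1.989e33 * 1e10                # 1 Msun (km/s)^2 in erg
    e_unit = 0.5 * v_vir_kms**2 * MSUN_KM2_S2    # erg needed to reheat 1 Msun

    dm_reheat = min(m_sf, min(eps_mass * dm_star, e_total / e_unit))  # eq. (10)-(11)
    e_left = e_total - dm_reheat * e_unit
    dm_eject = min(m_hot + dm_reheat, max(0.0, e_left / e_unit))      # eq. (12)

    return m_sf - dm_reheat, m_hot + dm_reheat - dm_eject, dm_eject

print(apply_sn_feedback(1e8, 5e7, 1e5, eps_mass=1.5, e_total=1e55, v_vir_kms=40.0))
```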
Reionization feedback
In order to model the feedback from reionization, we further inhibit the local baryon fraction of haloes by a factor of

f_mod = 2^(-M_crit/M_vir),  (13)

where M_vir is the halo mass and M_crit represents a filtering mass below which haloes are not able to efficiently accrete baryons from the IGM. We calculate the critical mass for each halo following Sobacchi & Mesinger (2013),

M_crit(r, z)/(2.8×10^9 M⊙) = J_21^0.17(r, z) [(1+z)/10]^-2.1 [1 - ((1+z)/(1+z_ion))^2]^2.5,  (14)

where J_21(r, z) represents the local UV background intensity, while z_ion is the redshift at which the surrounding IGM was first ionized, determined using the 21cmfast algorithm (Mesinger et al. 2011; see Paper-III for more details).
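For reference, a sketch of this suppression using the functional forms above; J_21, z_ion and the halo masses are illustrative inputs.

```python
def m_crit(z, j21, z_ion):
    """Filtering mass of equation (14) [Msun]; valid only for z < z_ion."""
    return (2.8e9 * j21**0.17 * ((1 + z) / 10) ** (-2.1)
            * (1 - ((1 + z) / (1 + z_ion)) ** 2) ** 2.5)

def f_mod(m_vir, z, j21=1.0, z_ion=9.0):
    """Baryon fraction modifier of equation (13)."""
    return 2.0 ** (-m_crit(z, j21, z_ion) / m_vir)

for m in [1e8, 1e9, 1e10]:
    print(f"M_vir = {m:.0e} Msun: f_mod(z=6) = {f_mod(m, 6.0):.3f}")
```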
Smaug
Smaug, a high-resolution hydrodynamic simulation suite, was run using a modified version of the gadget-2 N-body/hydrodynamics code (Springel 2005), following the same parameter configuration as the OverWhelmingly Large Simulations project (OWLS; Schaye et al. 2010). The simulations presented in this work start from the same initial conditions, generated with the grafic package (Bertschinger 2001) at z = 199 using the Zel'dovich approximation (Zeldovich 1970). Each simulation evolves (2×) 512^3 particles, including dark matter (and baryons), within a periodic cube of comoving side 10 h^-1 Mpc. The Plummer-equivalent comoving softening length is 0.2 h^-1 kpc, and the particle masses are 4.7×10^5 and 0.9×10^5 h^-1 M⊙ for dark matter and baryons, respectively, or 5.7×10^5 h^-1 M⊙ when only dark matter particles are considered. We summarize the adopted subgrid physics prescriptions in this section.
(i) Cooling (Wiersma et al. 2009a) includes both primordial elements and metal emission lines from carbon, nitrogen, oxygen, neon, magnesium, silicon, sulphur, calcium and iron. The cooling function is pre-tabulated using the cloudy package (Ferland et al. 1998) and accounts for free-free scattering between gas particles as well as Compton scattering between gas particles and cosmic microwave background (CMB) photons.
(ii) Star formation occurs in the ISM, which is assumed to be multiphase. Its state is described by the equation of state

T_g,eos/(8×10^3 K) = [n_H/(10^-1 cm^-3)]^(γ_eff - 1),

where T_g,eos, n_H and γ_eff = 4/3 represent the gas temperature on the equation of state (EOS), the total hydrogen number density and the effective ratio of specific heats, respectively. Gas particles are considered to be on the EOS, and are identified as potentially star-forming regions, when they become dense and cold. Star formation is then implemented by stochastically converting star-forming gas particles to star particles with a probability of min(1, Δt/t_g), where t_g represents the gas depletion time-scale (see the sketch after this list). In the case that the ISM has formed a self-gravitating disc, its scaleheight is of the order of the local Jeans length. Based on this, the depletion time-scale follows a pressure-dependent form, the normalization and scaling of which were determined by observations, namely the KS law (Kennicutt Jr 1998).
(iii) Supernova feedback can be simulated kinetically (Dalla Vecchia & Schaye 2008), with supernova winds pushing nearby gas particles at given probabilities and velocities representing the efficiencies, or it can be modelled thermally (Dalla Vecchia & Schaye 2012) by stochastically distributing supernova energy to the surrounding gas particles and increasing the gas temperature by a given amount (ΔT = 10^7.5 K). In order to compare galaxy properties between Meraxes and Smaug, we focus on the hydrodynamic simulation with thermal supernova feedback implemented, which behaves similarly to the semi-analytic supernova feedback prescription (see Section 2.1.2). Note that each star particle represents a single stellar population described by the Chabrier IMF (Chabrier 2003), and its mass decreases due to stellar recycling, which accounts for winds from AGB and massive stars as well as Type Ia and II supernovae (Wiersma et al. 2009b). The total number of stars per unit stellar mass that reach core-collapse supernovae at the end of their life cycle (η_SNII = 1.19×10^-2 M⊙^-1) is inferred from the IMF. The supernova energy produced by a star particle (m_*) is stochastically distributed onto its N_ngb = 48 nearby gas particles with a probability of

min[1, f_th η_SNII m_* × 10^51 erg / Σ_{i=1}^{N_ngb} ΔE_g,i],

where f_th is the fraction of energy that contributes to feedback, which is set to unity in Smaug; m_g,i represents the mass of gas particle i; and ΔE_g,i corresponds to the energy increment when the temperature of particle i is raised by ΔT (see the sketch after this list).
(iv) Reionization feedback is implemented as a UV/X-ray background (Haardt & Madau 2001), with all gas particles (assumed optically thin) being instantaneously heated to 10^4 K at a given redshift, z_re. Although this prescription is numerically convenient and is considered appropriate for the 10 h^-1 Mpc volume of the Smaug simulations (Duffy et al. 2014), it is not an accurate calculation of reionization feedback. Therefore, the semi-analytic prescription of reionization feedback (see Section 2.1.3) will be compared against two hydrodynamic simulations with z_re = 9 and 6.5, which bracket the observed CMB and Ly-α forest boundaries of the EoR and represent the strongest and weakest feedback scenarios, respectively.
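The stochastic steps in items (ii) and (iii) can be sketched as follows. The depletion time-scale t_g is passed in rather than derived from the pressure-dependent KS form (whose normalization is not reproduced here), and μ = 0.6 for ionized gas is an assumption of this sketch.

```python
import numpy as np

KB, MP, MSUN = 1.380649e-16, 1.6726e-24, 1.989e33  # Boltzmann const, proton mass, Msun (cgs)
rng = np.random.default_rng(42)

def eos_temperature(n_h, gamma_eff=4.0 / 3.0):
    """EOS temperature [K] for total hydrogen number density n_h [cm^-3] (item ii)."""
    return 8e3 * (n_h / 0.1) ** (gamma_eff - 1.0)

def form_stars(on_eos, t_gas, dt):
    """Stochastically convert EOS gas particles to stars with p = min(1, dt/t_g)."""
    return on_eos & (rng.random(on_eos.size) < np.minimum(1.0, dt / t_gas))

def heat_probability(m_star, m_gas_ngb, eta_snii=1.19e-2, f_th=1.0,
                     dT=10**7.5, mu=0.6):
    """Probability that each of the N_ngb neighbours is heated by dT (item iii).

    m_star and the neighbour masses are in Msun; eta_snii is per Msun.
    """
    e_sn = f_th * eta_snii * m_star * 1e51               # available energy [erg]
    dE = 1.5 * (m_gas_ngb * MSUN / (mu * MP)) * KB * dT  # per-neighbour cost [erg]
    return min(1.0, e_sn / dE.sum())

n_h = np.array([0.05, 0.2, 1.0, 5.0])
print(eos_temperature(n_h))                                      # EOS temperatures
print(form_stars(n_h > 0.1, t_gas=np.full(4, 300.0), dt=11.0))   # Myr time units
print(heat_probability(m_star=1.2e5, m_gas_ngb=np.full(48, 1.3e5)))
```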
The simulations utilized in this study are summarized below:
(1) DMONLY, a collisionless N-body simulation including only dark matter particles and neglecting baryonic physics. It is used to construct the dark matter halo merger trees for running Meraxes.
(2) NOSN NOZCOOL, NOSN NOZCOOL LateRe and NOSN NOZCOOL NoRe, three toy models with cooling in the absence of metal line emission and with supernova feedback switched off. While NOSN NOZCOOL NoRe does not include reionization feedback, the UV/X-ray heating backgrounds of the first two simulations are switched on at z_re = 9 and 6.5, respectively. These three models are used to compare with the SAM and to investigate the homogeneous reionization feedback prescription.
(3) WTHERM, a complete hydrodynamic simulation including radiative cooling from primordial elements and metals, stellar evolution, thermal supernova feedback (the energy produced by supernovae stochastically increases the temperature of the nearby ISM), and instantaneous photoionization heating from a reionization background at z_re = 9. We use WTHERM to investigate the supernova feedback recipe implemented in the SAM.
COMPARISON BETWEEN DWARF GALAXIES IN SAM AND HYDRODYNAMIC SIMULATIONS
In this work, following the algorithms described in Paper-XIV, we 1) include the aforementioned modifications of the semi-analytic properties and prescriptions, including the halo mass, baryon fraction, gas transition time-scale and star formation time-scale; 2) build halo merger trees using the DMONLY simulation; 3) match individual galaxies between the Meraxes and Smaug outputs; and 4) identify star-forming and hot gas in Smaug. We present the comparison between the dwarf galaxy properties predicted by the hydrodynamic simulation and the SAM in this section.
Reionization feedback
Reionization feedback in the SAM is incorporated by inhibiting the local baryon fraction of haloes using a filtering mass (see Section 2.1.3 or Paper-III for more details). In this work, we adopt the average filtering mass, M̄_crit(z), proposed in Paper-III, which ignores the spatial distribution of the IGM ionization state and depends only on redshift. In order to assess the validity of this feedback prescription, we compare with two Smaug hydrodynamic simulations in which gas particles are instantaneously heated to 10^4 K at z_re = 9 (NOSN NOZCOOL) and 6.5 (NOSN NOZCOOL LateRe), respectively (see Section 2.2). Note that the suppression due to the ionizing background is quantified in the SAM using a baryon fraction modifier (see equation 13), which, in the hydrodynamic simulations, can be inferred by comparing the baryonic components of galaxies matched between the NOSN NOZCOOL NoRe and NOSN NOZCOOL (LateRe) results (Qin et al. 2017a). We show the reionization modifiers adopted in Meraxes (equation 13) and calculated from Smaug (equation 17) in Fig. 1. We see that, through photoionization heating, reionization plays a significant role in reducing the fraction of baryons, and that the baryon fraction modifier adopted in the SAM is in general agreement with the hydrodynamic result: f_mod decreases in less massive haloes and towards lower redshifts.
Reionization in the hydrodynamic simulations
In a 10 h^-1 Mpc volume, the UV/X-ray ionizing backgrounds adopted in the two hydrodynamic simulations represent the strongest (z_re = 9) and weakest (z_re = 6.5) feedback scenarios consistent with the CMB and Lyman-α observations (Duffy et al. 2014). However, since reionization does not affect the gas component before z_re in these simulations, its feedback on the baryonic reservoirs cannot be captured at higher redshifts. Therefore, the time when reionization feedback becomes important is relatively late compared to the SAM, where the onset of reionization is more gradual and realistic (Sobacchi & Mesinger 2013) on large scales. For instance, for haloes with M_vir ∼ 10^8 M⊙, the redshift at which f_mod = 0.9 is larger than 10 in Meraxes, while it is around 8.5 and 6 in the two Smaug results. As a result, the baryon fraction is overestimated in the hydrodynamic simulations at earlier times, which can potentially lead to an overproduction of stellar mass in dwarf galaxies. More accurate modelling of reionization requires dedicated radiative transfer calculations, which are computationally expensive and hence challenging (see a review and comparison of cosmological radiative transfer codes by Iliev et al. 2006, and a recent comparison by Hutter 2018 between semi-numerical reionization modelling and radiative transfer). In the following sections, we therefore focus on galaxies with M_vir > 10^9 M⊙, where reionization plays a similarly insignificant role in suppressing the baryon fractions in Meraxes and Smaug.
Stellar evolution and feedback
We next investigate the stellar evolution and feedback in Meraxes and Smaug, starting with a discussion of the free parameters involved in the SAM.
Degeneracy of the parameter space in SAMs
Cosmological SAMs are usually calibrated against the observed galaxy stellar mass functions (or, equivalently, the SFR functions or galaxy luminosity functions) where a sufficient sample is available. By doing this, the stellar component is assured to be well modelled in a statistical sense and, with more upcoming observations, the parameter space becomes better constrained and missing physics in the SAM might be revealed (e.g. AGN feedback; Croton et al. 2006). However, one of the issues with this calibration strategy is that it cannot guarantee that the modelled galaxies are also representative of real galaxies in terms of their unobservable properties, considering that most baryonic processes are modelled indirectly in SAMs with the relevant parameters poorly understood. Taking the gas component as an example: although a handful of radio telescopes are capable of observing the gas component of distant galaxies (e.g. ALMA; Aravena et al. 2016; Bradač et al. 2017; Carniani et al. 2017; Falgarone et al. 2017), such observations remain extremely challenging. The current sample of observed gas components of distant galaxies is small, limiting our understanding of how galaxies accrete baryons and convert their hydrogen into stars in the early universe.

Figure 1. Baryon fraction modifiers due to reionization (not to be confused with the baryon fraction modifier due to hydrostatic pressure, see Appendix A). The green solid thick line represents the homogeneous reionization background incorporated in Meraxes, which is proposed in Paper-III. The red solid and dashed thin lines demonstrate the effect of the UV/X-ray background in the NOSN NOZCOOL and NOSN NOZCOOL LateRe Smaug simulations, where gas particles are instantaneously heated to 10^4 K at z_re = 9 or 6.5 (marked by vertical dotted lines), respectively. The baryon fraction of haloes in the WTHERM simulation, where z_re = 9, is shown by the red dash-dotted line, which indicates the suppression due to supernova feedback.
In order to illustrate this, we use the total-gas-based star formation model (see Section 2.1.1) with the parameters adopted in Paper-III as an example (see the main SAM parameters in Table 1), and refer to it as SAM PaperIII. With a short depletion time-scale of the total star-forming gas, the dynamical time-scale for gas transition, and strong supernova feedback, Meraxes was able to reproduce the observed stellar mass function at z = 5-7 in Paper-III. Although we are focusing on lower mass ranges with different dark matter halo merger trees, this combination of parameters can also reproduce the stellar mass function calculated from the Smaug hydrodynamic simulation. We show the two stellar mass functions of matched galaxies at z = 11-5 in the left panel of Fig. 2. Since only galaxies with M_vir > 10^9 M⊙ are included, at each redshift we combine the results from 7 consecutive snapshots across a time range of ∼80 Myr to obtain adequate samples. We see that the semi-analytic prediction is in agreement with Smaug above the resolution limit.

Table 1. A list and description of the main SAM parameters used in this work. SAM KS (un)limited and SAM H2 represent models using the total- and molecular-hydrogen-based star formation prescriptions, respectively, while the third row shows the parameters adopted for the fiducial model in Paper-III for comparison. (a) The particle masses of the parent N-body simulations are 8.1×10^5 M⊙ in this work and 3.9×10^6 M⊙ in Paper-III, where the Chabrier and Salpeter IMFs are adopted, respectively. (b) t^min_transition ≈ 0.2 t_dyn ≡ 36 [(1+z)/6]^-1.5 Myr.
We next show more detailed galaxy property evolution, including the total baryonic mass, star-forming gas mass and SFR, from the two numerical experiments in the right panels of Fig. 2, in two mass ranges. We see that, although the SAM agrees with the hydrodynamic simulation on the stellar mass function across a large range of redshifts, the two disagree on the evolutionary path of the gas component. We find that the baryonic mass is about 2-5 times smaller in the SAM than in the hydrodynamic simulation, suggesting that too much supernova energy has been coupled to the ISM. In addition, the hydrodynamic simulation shows an increasing amount of star-forming gas towards higher redshifts for a given halo mass, suggesting that cooling (or cold-mode accretion) might be more efficient in the early universe. On the other hand, the SAM underestimates the star-forming gas reservoir at higher redshift but predicts a similar SFR. This suggests that the depletion time-scale has been set too short in the SAM, which happens to result in agreement with the hydrodynamic result on the stellar mass function.
The relevant free parameters involved in this work are the gas transition time-scale, t_transition(α_transition, β_transition); the minimum hydrogen mass for star formation, m_sf,c; the depletion time-scales of the total or molecular gas, t_sf(H2)[α_sf(H2), β_sf(H2)]; the coupling efficiency between supernova energy and the ISM, ε_energy(α_energy, β_energy); and the mass loading factor for supernova heating, ε_mass(ε^max_mass, α_mass, β_mass). Accounting for normalizations, scaling indices and upper limits, there are 10 free parameters involved. Exploring the full parameter space is therefore a challenging task, which is beyond the scope of this work. We are currently constructing an MCMC analysis package for Meraxes (Mutch et al. in prep.), and we will apply it to accurately constrain the parameters against observations and the hydrodynamic results in the future. In the following sections, we only consider α_transition, β_transition, α_sf(H2), β_sf(H2), α_energy and α_mass as free parameters when trying to find the best models that reproduce the hydrodynamic calculation, with the others remaining the same as in Paper-III. We compare the SAM results with the hydrodynamic simulations for two choices of m_sf,c, and discuss the comparison in two mass bins (10^9 < M_vir/M⊙ < 10^10; 10^10 < M_vir/M⊙ < 10^11) to explore the impact of the mass scaling indices of supernova feedback (β_energy and β_mass). After identifying the fiducial models, we use them as references and further investigate the parameter space of the SAM.
Star formation thresholds
We recalibrate our chosen parameters in order to simultaneously reproduce the evolution of the stellar mass function of the hydrodynamic simulation, as well as the following three quantities:
(i) total baryonic mass, which is predominantly controlled by supernova ejection;
(ii) star-forming gas mass, which is jointly modulated by cooling and supernova heating; and
(iii) SFR, which is used to investigate the depletion time-scale of the total or molecular hydrogen gas.

Figure 2. Stellar mass functions (left panel) and property evolution (right panels) of matched galaxies. Note that at each redshift, only galaxies with M_vir > 10^9 M⊙ are considered and, in order to expand the sample size, we include matched galaxies from 7 consecutive snapshots (∼80 Myr). The resulting galaxy sample size, N_gals, is indicated in the right corner of each subpanel. The two grey regions mark the approximate resolutions of the simulation, which correspond to 10 and 50 stellar particles, respectively. Right two panels: the property evolution, including the total baryonic mass, star-forming gas mass and SFR, of galaxies with 10^9 < M_vir/M⊙ < 10^10 and 10^10 < M_vir/M⊙ < 10^11, respectively. Lines and shaded regions represent the mean and the 95 per cent confidence intervals around the mean, using 100000 bootstrap re-samples. In the low-mass panel, the star-forming gas mass of quenched galaxies in the SAM KS limited result is indicated by the thin solid green line (overlapped with the SAM PaperIII star-forming gas mass).
After exploring the parameter space (footnote 11), we identify a set of parameters (see Table 1) that leads to a better agreement on the property evolution of galaxies with M_vir > 10^10 M⊙, shown in Fig. 2. This model is referred to as SAM KS limited and, compared to SAM PaperIII, it adopts:
(i) a smaller coupling efficiency between supernova energy and the ISM, leading to an increased total baryonic mass;
(ii) a longer time-scale of gas transition at z = 5, which decreases more rapidly towards higher redshifts and results in a better agreement on the star-forming gas mass with the hydrodynamic calculation;
(iii) a longer time-scale of gas depletion at z = 5, which decreases less rapidly towards higher redshifts and is inferred from the star formation efficiency adopted in the hydrodynamic simulation (Paper-XIV); and
(iv) a smaller mass loading factor for supernova heating, to further adjust the star-forming gas mass.

Footnote 11: We manually explore the parameter space within a plausible range where scaling indices of redshift dependencies (e.g. ...
However, in the low-mass range, where the virial mass is between 10^9 and 10^10 M⊙, this model fails to reproduce the evolutionary path of the star-forming gas reservoir calculated by the hydrodynamic simulation, predicting a flatter m_sf-z relation. During the experiment, we found that the star-forming gas mass at low redshift (5 < z < 8) does not change when a larger mass loading factor, ε_mass, is incorporated, even though this is expected to further suppress the star-forming gas mass through supernova heating. This suggests that the bulk of the star-forming gas is stored in quenched galaxies (footnote 12) where m_sf < m_sf,c, the mean star-forming gas mass of which is indicated by the thin solid line in the central panel of Fig. 2.
According to the total-gas-based star formation prescription (see Section 2.1.1), galaxies can only form stars when their gas reservoirs are sufficiently massive. This reservoir mass threshold scales with the virial velocity and disc radius of the host (cf. Croton et al. 2006), with a normalization of m_sf,c,0 = 1.9×10^8 M⊙ inferred from the KS observations (Kennicutt Jr 1998) of the local Universe. In the current DRAGONS series, we have instead adopted a lower critical mass, with m_sf,c,0 = 1.4×10^8 M⊙. This is supported by Henriques et al. (2015), who propose reducing the mass threshold of star-forming galaxies to reconcile the issue that previous SAMs overpredicted the number of quenched galaxies in the low-mass range while these galaxies still possess a significant star-forming gas reservoir. This might explain the evolution of the star-forming gas mass of dwarf galaxies predicted by SAM KS limited, and suggests that the threshold for star formation needs to be further reduced in these low-mass galaxies. We next focus on dwarf galaxies with 10^9 M⊙ < M_vir < 10^10 M⊙ and recalibrate the model without any threshold for star formation (i.e. m_sf,c = 0). Fig. 2 shows the result of this model, SAM KS unlimited, in which gas transition and star formation at lower redshifts, as well as the supernova energy coupling, are less efficient (the mass loading factor for supernova heating is kept the same as in SAM KS limited). We see that, while SAM KS limited with m_sf,c ∼ 10^8 M⊙ is better at reproducing the hydrodynamically simulated high-mass galaxies, the updated model with m_sf,c = 0 is more consistent with the hydrodynamic result in the low-mass range. This indicates that high-redshift, less massive galaxies, in general, possess lower thresholds for star formation as well (Henriques et al. 2015).
Note that in SAM KS unlimited, β_transition = -7.5 (see equation B3) is crucial to reproducing the evolution of the star-forming gas mass calculated by the hydrodynamic simulation. However, it also leads to unrealistically rapid changes of the gas transition efficiency in the SAM. The time-scale of gas transition from hot to star-forming is close to the dynamical time-scale (55 Myr) at z ∼ 12, suggesting strong cold-mode gas inflow at early times (Kereš et al. 2005, 2009; Benson & Bower 2011). On the other hand, t_transition increases dramatically towards lower redshifts, becoming larger than the Hubble time (1 Gyr) at z ∼ 8 and leading to greatly suppressed accretion of star-forming gas in low-redshift dwarf galaxies. We note that, with more intense supernova heating to offset it, additional gas can be allowed to transition from hot to star-forming. Therefore, incorporating a larger ε_mass at lower redshifts would decrease t_transition accordingly and allow more moderate changes of the transition rate. However, as we will see in Section 3.2.7, supernova heating only plays a secondary role in changing the evolution of star-forming gas in high-redshift dwarf galaxies. We further discuss this rapidly evolving t_transition in Section 3.2.4.

Footnote 12: In Paper-XIV, a high barrier to star-forming galaxies was adopted when we studied the total-gas-based star formation prescription in the absence of feedback. However, during the experiment, we found that a more suppressed star formation at higher redshift is required to reproduce the hydrodynamic simulation (i.e. NOSN NOZCOOL NoRe). This is due to the fact that galaxies are more likely to have an insufficient star-forming gas reservoir (i.e. m_sf < m_sf,c) at lower redshifts, owing to the adopted high threshold for star formation.
In these two SAM KS models, stars form by consuming the total star-forming gas reservoir. However, owing to the unknown time-scale for depleting the total gas (possibly there is no simple universal scaling of t_sf; see Duffy et al. 2017), a degeneracy (footnote 13) between the processes of cooling and heating exists. For instance, if the real depletion time-scale of the total gas reservoir were longer than what we have adopted, cooling would have to have been underestimated to reproduce the correct SFR. Thus, if we had utilized the true star formation efficiency, we would need to implement 1) more rapid cooling to reproduce the correct amount of stars; 2) a larger mass loading factor to heat the additional star-forming gas that has cooled during the time step; and 3) more efficient coupling between the supernova energy and the ISM to provide the energy needed for heating. In the next section, we discuss the H2-based star formation prescription (see Section 2.1.1), which is expected to be a more direct modelling approach and for which the relevant time-scale is better understood. Duffy et al. (2017) investigated the H2 component of dwarf galaxies using the Smaug simulations and found that the depletion time-scale of H2 is t_H2 ∼ 0.3 Gyr, independent of the feedback regime (e.g. NOSN NOZCOOL, WTHERM). They also discussed the mass and redshift dependencies when applying SAMs with H2-based star formation laws and proposed (footnote 14) t_H2 ∼ 0.9 Gyr [(1+z)/6]^-1.1, the extrapolation of which agrees with previous findings in the local Universe (Leroy et al. 2008). We note that the scaling index of t_H2 was motivated by the KS law under the assumption that galactic discs are self-gravitating and follow exponential surface profiles. The latter might need revising for high-redshift dwarf galaxies. In the early universe, galaxies tend to possess larger velocity dispersions, and both mergers and cold-mode accretion are significant. These all indicate that high-redshift galaxies might have thickened discs (Moster et al. 2012; Newman et al. 2012; Price et al. 2016). In this section, we instead adopt t_H2 ∼ 0.9 Gyr [(1+z)/6]^-0.8, with the scaling inferred from an isothermal profile, and then calibrate the cooling and supernova feedback efficiencies to reproduce the dwarf galaxy properties from Smaug. We further discuss the semi-analytic predictions obtained when varying t_H2 in Section 3.2.5.
Star formation from molecular hydrogen
The result is shown in Fig. 2. We see that, without any degeneracies (footnote 15), the H2-based model can still reproduce the hydrodynamic calculation of the properties of dwarf galaxies, as well as the cosmic evolution of the stellar mass function. Compared to the SAM KS unlimited result, SAM H2 agrees better with Smaug on the evolution of the total baryonic mass of dwarf galaxies, and on the calculation of massive galaxies. However, it still overestimates the total baryonic mass and underestimates the star-forming gas mass and SFR of massive galaxies, suggesting that these galaxies might possess different cooling and supernova feedback efficiencies, or a shorter H2 depletion time-scale, compared to less massive galaxies.

Footnote 13: Note there are three constraints with four sets of freedoms.
Footnote 14: We ignore the weak dependency on stellar mass and use 10^7.2 M⊙, which is approximately the average stellar mass of the final sample, as representative in this work.
Footnote 15: Note there are three constraints with three sets of freedoms.

Figure 3. First row: the parameter configurations, including the gas transition time-scale, gas depletion time-scale, mass loading factor for supernova heating, and coupling efficiency between supernova energy and the ISM, for the different models. Bottom three rows: the evolution of the ratio of properties calculated by the SAM to those of the hydrodynamic simulation, including the total baryonic mass, star-forming gas mass and SFR. Only galaxies with 10^9 < M_vir/M⊙ < 10^10 are considered. Lines and shaded regions (only for the fiducial model) represent the mean and the 95 per cent confidence intervals around the mean, using 100000 bootstrap re-samples.
The success of our SAM with the H2-based star formation prescription and a fixed H2 depletion time-scale is encouraging. It indicates that the accretion-cooling-depletion-heating-and-ejection pathway of gas remains representative of dwarf galaxy formation at high redshift, in terms of predicting the gas and stellar properties of the hydrodynamic simulation. We next use the H2-based SAM as an example and illustrate the impact of changing the relevant parameters, with comparisons to the fiducial SAM H2 model. In practice, we change one set of parameters at a time, with the other parameters remaining the same as in the fiducial model. We show the resulting property evolution in Fig. 3, in terms of the ratio of the properties calculated by the SAM to those of the hydrodynamic simulation.
Gas transition time-scale
The gas transition time-scale of the fiducial SAM H2 model is t_transition = 6 [(1+z)/6]^-6.5 Gyr, which results in a significantly larger value at low redshift. In the first column of Fig. 3, we show this scaling, as well as the results of assigning t_transition the free-fall time-scale (FreeFallTransition), a common assumption adopted in the literature for the rapid cooling regime, and a time-scale that evolves more slowly towards higher redshifts (SlowChangeTransition) than in the fiducial model. We see that adopting a shorter transition time-scale results in more efficient replenishment of the star-forming gas reservoir. With the gas depletion time-scale unchanged, the SFR increases. Consequently, more energy is released by supernova explosions, leading to more strongly suppressed total baryonic masses, given that the energy coupling efficiency and the mass loading factor for heating do not change. From the first column of Fig. 3, we see that, without changing other parameters, a strongly evolving transition time-scale is required to reproduce the rapidly decreasing star-forming gas mass at lower redshifts predicted by the hydrodynamic simulation.
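To make these scalings concrete, a minimal sketch comparing the quoted fiducial parametrization with the halo dynamical (free-fall) time-scale; the t_dyn normalization follows from the Table 1 footnote (0.2 t_dyn ≡ 36 [(1+z)/6]^-1.5 Myr).

```python
def t_transition_fiducial(z):
    """Fiducial SAM H2 scaling [Gyr]: 6 * ((1+z)/6)^-6.5."""
    return 6.0 * ((1 + z) / 6.0) ** (-6.5)

def t_dyn(z):
    """Halo dynamical time [Gyr], implied by 0.2 t_dyn = 36 ((1+z)/6)^-1.5 Myr."""
    return 0.18 * ((1 + z) / 6.0) ** (-1.5)

# The fiducial time-scale approaches t_dyn at z ~ 12 (strong cold inflow)
# and greatly exceeds it at z ~ 5 (suppressed replenishment).
for z in [5, 8, 12]:
    print(f"z={z:2d}: fiducial={t_transition_fiducial(z):8.3f} Gyr, "
          f"t_dyn={t_dyn(z):6.3f} Gyr")
```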
We note again that varying t_transition away from the dynamical time-scale was proposed to compensate for the overestimated collapse rate obtained by assuming SIS profiles for the hot gas, and for the underestimation due to the large transition radii between hot and star-forming gas (Paper-XIV). In the presence of feedback, the star-forming disc shrinks, particularly at lower redshifts (see Appendix B). Correspondingly, the overestimation of t_transition (the overestimated L_inflow in equation B2) becomes insignificant: gas indeed needs to collapse into the central region to become star-forming gas. Therefore, we need a much longer transition time-scale at low redshift to take into account that the hot gas is less dense in the inner regions than the SIS profile predicts, so that only a small amount of gas can collapse into the centre within the dynamical time-scale (the overestimated ṁ_hot→sf in equation B2). We investigate below whether the other free parameters can have a significant impact on the evolution of the star-forming gas reservoir, so that the steep gradient of m_sf(z) in Fig. 2 can be reproduced without implementing a rapidly evolving redshift dependency in t_transition.
Gas depletion time-scale
The H2 depletion time-scale is better constrained than that of the total gas. However, observational results still possess a large variance, even in the local Universe, ranging from half a Gyr to a few Gyr (Leroy et al. 2008; Bigiel et al. 2011; Saintonge et al. 2011; Tacconi et al. 2013). Therefore, we use the redshift-dependent molecular hydrogen depletion time-scale proposed in Duffy et al. (2017), which rapidly decreases at higher redshift (see Section 2.1.1). For the fiducial model, we use t_H2 = 0.9 Gyr × [(1+z)/6]^-0.8, with a less steeply evolving redshift dependency owing to thicker discs at high redshift (see Section 3.2.3). In the second column of Fig. 3, we show the property evolution using the time-scale proposed by Duffy et al. (2017) (RapidChangeSF), as well as a constant value (TwoGyrSF), t_H2 = 2 Gyr, which is commonly adopted for SAMs in the literature (e.g. Lagos et al. 2011). We see that increasing the time-scale for converting hydrogen into stars quenches star formation, leading to weaker supernova ejection and heating. With the current configuration of parameters, t_H2 = 2 Gyr significantly underestimates star formation at high redshift, in agreement with Duffy et al. (2017), and the gradient of the star-forming gas evolution is not expected to change significantly by varying β_H2.
Supernova feedback parameters
Supernova explosions increase the thermal energy of the ISM and expel baryons from dwarf galaxies. However, since the relevant regions cannot be resolved in cosmological simulations, subgrid physics with free parameters is adopted by both hydrodynamic and semi-analytic modelling approaches. These parameters (see Section 2) represent the coupling efficiency between the supernova energy and the ISM (i.e. ε_energy and f_th) or modulate heating and ejection by supernova feedback (i.e. ε_mass and ΔT). We use the energy flow illustrated in Fig. 4 to facilitate the following discussion of supernova feedback, where
(i) ε_tot represents the fraction of the energy released by supernovae (E_SN) that is coupled to the ISM;
(ii) ε_sf and ε_hot represent the fractions of energy that are distributed to the star-forming and hot gas, respectively, with ε_tot = ε_sf + ε_hot;
(iii) ε_sf-sf and ε_sf-hot represent the fractions of energy that are used to increase the thermal energy of the star-forming gas reservoir and to heat the star-forming gas to hot, respectively, with ε_sf = ε_sf-sf + ε_sf-hot; and
(iv) ε_hot-hot and ε_hot-eject represent the fractions of energy that are used to increase the thermal energy of the hot gas reservoir and to eject the hot gas from the host, respectively, with ε_hot = ε_hot-hot + ε_hot-eject.

Figure 4. Energy flow of supernova feedback between the star-forming, hot (non-star-forming) and ejected gas reservoirs. ε_tot, ε_sf, ε_hot, ε_sf-sf, ε_sf-hot, ε_hot-hot and ε_hot-eject represent the fractions of the supernova energy (E_SN) that are 1) coupled to the ISM; 2) distributed to the star-forming gas; 3) distributed to the hot gas; 4) used to increase the thermal energy of the star-forming gas reservoir; 5) used to heat the star-forming gas to hot; 6) used to increase the hot gas thermal energy; and 7) used to eject the hot gas from the host. Note that ε_tot = ε_sf + ε_hot, ε_sf = ε_sf-sf + ε_sf-hot and ε_hot = ε_hot-hot + ε_hot-eject. The dotted line indicates the process of radiative cooling.
Energy-ISM coupling efficiency: The fraction of supernova energy that contributes to feedback is f_th in Smaug and ε_energy in Meraxes. Since f_th is chosen to be unity, with all of the supernova energy being coupled to the ISM, one might expect ε_energy = 1 as well. However, because SAMs ignore the thermal energy of the star-forming gas and assume that the temperature of the hot gas does not change during one time step, ε_sf-sf and ε_hot-hot are zero. This means that, despite all supernova energy being coupled to the ISM in the hydrodynamic simulation (i.e. f_th ≡ ε_tot = 1), only a fraction of it contributes to feedback in the SAM (some of the energy might also be lost in the ejected gas reservoir owing to the untracked thermal energy of the ejected gas). Therefore, ε_energy ≡ ε_sf-hot + ε_hot-eject < f_th = 1.
Mass loading factor: How much of the supernova energy coupled to the ISM is used to heat gas (Δm_sf) is governed by the free parameters describing the mass loading factor (ε_mass,sam) in the SAM, while in the hydrodynamic simulation it is the increment of the gas temperature (ΔT) that determines the number of gas particles that are affected. In the hydrodynamic simulation,

(3/2) [Δm_sf/(μ m_p)] k_B ΔT = 10^51 erg × f_th η_SNII Δm_*,

where μ m_p is the average particle mass of fully ionized gas. With a large temperature increment of ΔT = 10^7.5 K, the mean number of instantaneously heated nearby gas particles per stellar baryon is ε_mass,hydro,ins ≡ Δm_sf/m_* ∼ 1.34 (Dalla Vecchia & Schaye 2012). This small mass loading factor places the heated gas in the Bremsstrahlung cooling regime, achieving an efficient supernova feedback mechanism through heating. Moreover, due to the increased pressure from the thermal feedback, gas particles within high overdensities tend to move outwards in a wind, perpendicular to the star-forming disc plane, as a result of the density gradient. These particles are usually referred to as wind particles, some of which might eventually escape from the gravitational potential of the system. As the wind particles travel, they further increase the thermal energy of the nearby gas particles along their path, leading to a much larger effective mass loading factor over a long period of time, such that ε_mass,hydro,eff ≫ 1.34. Since the SAM captures the average property over 11 Myr, one might expect ε_mass,sam ≫ 1.34 as well. However, we have also shown that, in hydrodynamically simulated dwarf galaxies, gas particles need not be fully virialized to become non-star-forming gas (i.e. V_hot,hydro < V_vir; Paper-XIV), while, on the other hand, SAMs ignore the thermal energy of the star-forming gas (V_sf,sam = 0) and assume the non-star-forming hot gas shares the virial temperature of the host halo (V_hot,sam = V_vir). Therefore, equating the supernova energy used for heating in the SAM and the hydrodynamic simulation (E_reheat,sam = E_reheat,hydro), we see that

ε_mass,sam (0.5 V_hot,sam^2 - 0.5 V_sf,sam^2) = ε_mass,hydro,eff ⟨0.5 V_hot,hydro^2 - 0.5 V_sf,hydro^2⟩,

where 0.5 V^2_hot,sam(hydro) and 0.5 V^2_sf,sam(hydro) represent the specific thermal energies of the hot and star-forming gas reservoirs in the SAM (or particles in the hydrodynamic simulation), while ⟨⟩ indicates averaging over all reheated gas particles and a long period of time in the hydrodynamic simulation.
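A quick numerical check of the instantaneous loading factor implied by the first relation above. Here μ = 0.6 for fully ionized gas is an assumption, and the exact value also depends on the IMF mass range behind η_SNII, so the result is only expected to be of order the quoted 1.34.

```python
KB, MP, MSUN = 1.380649e-16, 1.6726e-24, 1.989e33  # cgs units

f_th, eta_snii, dT, mu = 1.0, 1.19e-2, 10**7.5, 0.6

# Solve (3/2) (dm / mu m_p) k_B dT = 1e51 erg * f_th * eta_snii * dm_star
# for the heated-to-formed mass ratio dm/dm_star.
e_per_msun_stars = 1e51 * f_th * eta_snii               # erg per Msun of stars formed
e_per_msun_heated = 1.5 * (MSUN / (mu * MP)) * KB * dT  # erg per Msun of gas heated
print(e_per_msun_stars / e_per_msun_heated)             # of order unity
```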
Supernova feedback in the SAM
Without properly tracking the thermal energy of the various gas reservoirs in the SAM, it is challenging to determine the energy-ISM coupling efficiency and the mass loading factor. In this work, calibrating against the hydrodynamic result of WTHERM, we adopt ε_mass = 1.5 and ε_energy = 0.03 for the fiducial SAM H2 model (see Table 1).
In the last two columns of Fig. 3, we show four variant models of supernova feedback with 1) stronger supernova heating (StrongSNHeating); 2) an evolving mass loading factor (EvolvingSNHeating) with β_mass = 3.5, following Guo et al. (2011); 3) no coupling between supernova energy and the ISM (NoSNCoupling); and 4) all supernova energy contributing to feedback (MaximumSNCoupling), in terms of converting star-forming gas to hot and ejecting hot gas from galaxies (ε_sf-hot + ε_hot-eject = 1; see the illustration in Fig. 4). Note that V_max ∼ V_vir = 41 [(1+z)/6]^0.5 km s^-1 for haloes with M_vir ∼ 10^9.5 M⊙. Therefore, the velocity dependencies (see equation 8) that were proposed for supernova feedback (Guo et al. 2011) represent a cosmic evolutionary path for a given halo mass. We see that, when the mass loading factor is fixed, coupling more energy to the ISM leads to a stronger suppression of the total baryonic mass, which subsequently decreases the mass of the star-forming disc and quenches star formation. From the MaximumSNCoupling model, we see that, with all supernova energy used to convert star-forming gas to hot and to eject hot gas from the galaxy, the total baryonic mass and star-forming gas become significantly suppressed. On the other hand, when the supernova energy coupling efficiency is fixed, less heating leads to a larger reservoir of star-forming gas and enhanced star formation. Consequently, more supernova energy is coupled to the ISM and, with less energy used for heating, more mass in the hot gas reservoir is ejected. Depending on the increased amounts of star-forming gas and stellar mass, as well as the decreased hot gas mass, the total baryonic mass varies slightly. For instance, going from the EvolvingSNHeating to the fiducial to the StrongSNHeating model, the mass loading factor increases, leading to decreased SFR and m_sf. However, both EvolvingSNHeating and StrongSNHeating predict more baryonic mass than the fiducial model, with more star-forming gas and stars formed in the former, while more hot gas is retained in the latter. In addition, the property evolution does not change significantly between these three models. Therefore, we do not expect that changing the heating efficiency of supernovae can resolve the issue of requiring a rapidly evolving gas transition time-scale (see Section 3.2.4).
CONCLUSIONS
Following Qin et al. (2018, Paper-XIV), we further investigate the semi-analytic modelling prescriptions of galaxy formation commonly adopted in the literature. In this work, we include supernova feedback and a homogeneous reionization background in both the Meraxes SAM (Mutch et al. 2016a) and the Smaug high-resolution hydrodynamic simulations (Duffy et al. 2014), and compare the stellar and gas reservoirs predicted by these two methods. We focus on galaxies with 10^9 M⊙ < M_vir ≲ 10^11 M⊙. With the modifications previously proposed in Paper-XIV, including adjustments to halo masses from the merger trees, suppression of baryon fractions accounting for hydrostatic pressure, and the modulation of the time-scales for the transition of gas from hot to star-forming (t_transition) and from star-forming to stars (the depletion time-scale, t_sf(H2)), we find that the current SAM is able to reproduce the hydrodynamic calculation of the cosmic evolution of galaxies with M_vir > 10^10 M⊙ at high redshift. This includes the stellar mass function, total baryonic mass, star-forming gas mass and SFR at z = 5-11. However, in less massive galaxies (10^9 M⊙ < M_vir < 10^10 M⊙) with the SFR calculated using the total star-forming gas, we identify a significant amount of star-forming gas stored in quenched galaxies, owing to the imposed mass threshold for star formation. After reducing the threshold, the SAM successfully mimics the evolution of dwarf galaxies in the hydrodynamic simulation.
We also investigate a second star formation prescription, which splits the star-forming gas disc into molecular and atomic hydrogen and forms stars from the molecules (Lagos et al. 2011). Fixing the depletion time-scale of H2 to that inferred from a previous study of the Smaug hydrodynamic simulations (Duffy et al. 2017), we find that, with calibration of only the gas transition rate and the supernova efficiencies, the SAM can also reproduce the dwarf galaxy properties calculated by the hydrodynamic simulation. In addition, we find that, when reionization and supernova feedback are included, dwarf galaxies tend to accrete a significant amount of star-forming gas at early times (z > 10), which quickly becomes suppressed towards lower redshifts. Future work needs to take this into account and incorporate modelling of cold-mode accretion to study dwarf galaxies in the early universe.
APPENDIX A: MODIFICATIONS OF HALO MASSES AND BARYON FRACTIONS
We show the impact on the semi-analytic calculation, in the presence of reionization and supernova feedback, of incorporating the halo mass and baryon fraction modifiers (Qin et al. 2017a, 2018), which correspond to the slower evolution of haloes and less efficient gas accretion due to hydrostatic pressure. We run Meraxes with the total-gas-based star formation law (see Section 2.1.1) and the same parameters adopted in Paper-III (SAM PaperIII), but without the baryon fraction modifier (SAM PaperIII noB) and without any modifiers (SAM PaperIII noHB). We show the halo mass, gas mass and stellar mass functions at z = 5 and 12 of the three Meraxes results, together with the halo mass functions predicted by the Smaug simulations, in Fig. A1.
We see that, without the halo mass modifier, the halo mass function is overestimated compared to the hydrodynamic result at high redshift, which consequently increases the mass functions of gas and stars. In addition, further excluding the baryon fraction modifier increases the amount of gas accreted by the host halo and subsequently causes more stars to form. However, the modifications have an insignificant impact on the stellar mass function in the currently observable range; probing this regime requires deeper surveys with upcoming space programmes such as JWST.
APPENDIX B: THE BUILD-UP OF STAR-FORMING GAS

Qin et al. (2018, Paper-XIV) show that, in the absence of feedback, the majority of dwarf galaxies in the hydrodynamic simulation accrete gas particles with temperatures of around a few 10^4 K, much lower than their halo virial temperatures. This represents cold-mode accretion of the infalling gas (Kereš et al. 2005, 2009), which in the SAM is currently modelled through the cooling prescription of the rapid cooling regime proposed by White & Frenk (1991): gas accreted in hot mode shares the halo virial temperature due to shock heating and cools rapidly within the dynamical time-scale, t_dyn. The infalling hot gas (m_hot) is also assumed to follow the singular isothermal sphere (SIS) profile,

ρ_SIS(r) = m_hot/(4π R_vir r^2).  (B1)

To ease the demonstration in this paper, we term the time-scale on which gas transitions from the hot reservoir to the star-forming disc the transition time-scale,

t_transition ≡ m_hot→sf/ṁ_hot→sf = L_inflow/V_inflow,  (B2)

where m_hot→sf and ṁ_hot→sf represent the mass and mass rate of the transition, while L_inflow and V_inflow are the distance and velocity of the corresponding gas inflow, respectively. We illustrate the cooling prescription in Fig. B1. The most massive haloes are able to create shocks and heat the infalling gas, resulting in hydrostatic equilibrium. In this case, termed the hot halo regime, the time-scale for hot gas to transition to star-forming is determined by the thermal cooling time-scale, t_transition = t_cool. However, it is difficult to generate shock heating in less massive systems (Birnboim & Dekel 2003; Cattaneo et al. 2017), leaving little support to prevent gas from falling onto the central disc, and cooling becomes rapid. In this rapid cooling regime, the prescription assumes that the star-forming gas disc is relatively small and that this process happens in free-fall. This means that gas at the virial radius needs to travel a distance of L_inflow ≈ R_vir with V_inflow = V_vir, leading to t_transition ≈ t_dyn.
In making comparisons of the gas reservoirs calculated by the SAM and the hydrodynamic simulation, with reionization and supernova feedback isolated in Paper-XIV, we found that t_transition = t_dyn becomes less accurate when the rapid cooling prescription is applied to high-redshift dwarf galaxy modelling. This is due to the two aforementioned assumptions, which lead to over- and under-estimations of the gas transitioning from hot to star-forming, respectively.
(i) Assuming the SIS profile for the accreted mass overestimates the gas density in the inner regions (see the illustration in the bottom panels of Fig. B1; a numerical check follows this list). Subsequently, for a given time step of Δt < t_dyn,

m_hot→sf = m_hot Δt/t_dyn = M_SIS(r = (Δt/t_dyn) R_vir)

is overestimated.
(ii) Star-forming gas particles of dwarf galaxies (in the hydrodynamic simulation) possess larger extents and can be found as far out as the virial radius. This means that assuming gas can only transfer from the non-star-forming hot reservoir to star-forming when it reaches the galaxy centre introduces a longer inflow path (L_inflow) and hence leads to an overestimated transition time-scale (t_transition; see equation B2). In this case, for a given time step Δt, m_hot→sf = m_hot Δt/t_transition is underestimated instead.
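The first point follows because the SIS enclosed mass grows linearly with radius. A short check, with an illustrative NFW-like profile for contrast (the NFW concentration below is an arbitrary assumption, used only to represent a centrally shallower hot gas distribution):

```python
import numpy as np

def m_sis(r, m_hot, r_vir):
    """SIS enclosed mass from equation (B1): M(<r) = m_hot * r / R_vir."""
    return m_hot * r / r_vir

def m_nfw(r, m_hot, r_vir, c=5.0):
    """NFW enclosed mass normalized to m_hot at R_vir (illustrative only)."""
    mu = lambda x: np.log(1 + x) - x / (1 + x)
    return m_hot * mu(c * r / r_vir) / mu(c)

# In a time step dt = 0.1 t_dyn, free-fall delivers the SIS mass within
# r = 0.1 R_vir; a centrally shallower profile delivers less.
m_hot, r_vir, frac = 1e9, 30.0, 0.1
print(m_sis(frac * r_vir, m_hot, r_vir) / m_hot)   # 0.10
print(m_nfw(frac * r_vir, m_hot, r_vir) / m_hot)   # ~0.075, less than SIS
```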
We note that, when feedback is included, semi-analytic modelling of dwarf galaxies still suffers from these two factors. First, in order to demonstrate that most high-redshift dwarf galaxies in the SAM are still identified as being in the rapid cooling regime when reionization and supernova feedback are included, we calculate the cooling radius, R_cool, at which the time-scale of thermal cooling equals the halo dynamical time in the SAM, and show the ratio of the cooling radius to the virial radius (R_cool/R_vir) calculated from the models discussed in this work in Fig. B2. Note that gas within the cooling radius is considered to have reached hydrostatic equilibrium and to cool thermally if R_cool < R_vir. However, in the case of a large cooling radius (i.e. R_cool > R_vir), the infalling gas is not able to form stable shocks or remain in hydrostatic equilibrium. Accordingly, all of the accreted gas collapses directly into the central regions in free-fall. From Fig. B2, we see that most low-mass galaxies discussed in this work are considered to be in the rapid cooling regime.
Next, we show the evolution, in terms of the density-temperature phase diagram and the spatial distributions of star-forming and non-star-forming gas particles (identified using the algorithm described in Paper-XIV), of the most massive halo identified at z = 5 in the WTHERM Smaug simulation as an example in Fig. B3, and discuss the gas density profiles of galaxies with M_* ∼ 10^(7±0.5) M⊙ in Fig. B4. We see that, compared to the NOSN NOZCOOL NoRe simulation, where heating from supernovae (and reionization) is not included, galaxies within the same stellar mass range are hosted by larger haloes, with more gas particles identified as non-star-forming, when the feedback is considered. However, the total gas mass does not change significantly, indicating suppression of the baryonic mass and self-regulation of star formation. Moreover, although the star-forming regions become relatively smaller in WTHERM, they still possess a large dispersion at high redshift. This can also be observed from the large radius of maximum rotation (R_max) of the most massive halo, which suggests the necessity of an enhanced inflow rate between the circum-galactic medium and the ISM at earlier times.

Figure A1. Comparisons of the virial mass, gas mass and stellar mass functions (from left to right) at z = 5 and 12 between three Meraxes WTHERM runs with fixed parameters adopted from Paper-III but different implementations of the modifiers: 1) no modifiers (SAM PaperIII noHB); 2) halo mass modifier only (SAM PaperIII noB); and 3) both modifiers, including halo mass and baryon fraction (SAM PaperIII). Shaded regions represent the 1σ Poisson uncertainties. For comparison, the halo mass functions predicted by the WTHERM Smaug hydrodynamic simulation and the N-body simulation are indicated by the thin black solid and dotted lines, respectively.
More accurate semi-analytic modelling of gas accretion should not only distinguish the hot-and cold-mode inflows with gas reaching the star-forming disc on different timescales (Kereš et al. 2005(Kereš et al. , 2009Cattaneo et al. 2017), but also account for the larger disc size at higher redshifts. We consider these as a future project with a more complete cooling function implemented. For the purpose of accurately capturing the gas transition time-scale using the current rapid cooling prescription, in Paper-XIV, we proposed to change the cooling efficiency when galaxies are identified in this regime. We introduced a maximum cooling factor, κ cool . This was used to modulate the gas transition time-scale based on the time-scale of free-fall, ttransition = κ −1 cool t dyn , so that the overestimated collapse rate from the assumed SIS density profile and the underestimation due to the longer inflow path before the transition of gas reservoirs can be balanced. In this work, we adopt this modification by incorporating the fol-lowing form of the transition time-scale. ttransition = max t min transition , αtransition where t min transition is set to be 0.2t dyn following Paper-XIV. However, considering the transition radius between starforming and non-star-forming gas changes due to feedback, αtransition and βtransition are not expected to possess the same values as adopted in Paper-XIV 17 . Therefore, we leave them as free parameters and explore ttransition in this work. This paper has been typeset from a T E X/ L A T E X file prepared by the author. 17 α cool ≡ α −1 transition × 180Myr = 1 and β cool ≡ −1.5 − β transition = 1 were utilized instead in Paper-XIV. Δ! ! ./% 1 2'# Δ! ! ./% 1 2'# well-approximated 3 4("→&6 overestimated 3 4("→&6 small offset large offset 8 8 8 Figure B1. Modifications to the semi-analytic cooling prescription. Top panels: the contour demonstrate the gas density with the outermost regions illustrating the size of the hot halo, R vir , while the dotted line indicates the cooling radius, at which the cooling time is equal to the halo dynamical time. Bottom panels: the illustration of the hot gas density profile compared to the SIS profile. In the original cooling prescription, hot gas is assumed to share the halo virial temperature and follows the SIS profile. Gas of massive haloes usually remains in thermal equilibrium due to shock heating, and the transition time-scale of gas from hot to star-forming depends on the thermal cooling time-scale, t transition = t cool . On the other hand, gas of less massive haloes do not experience significant heating from shocks, and gas falls onto the central disc at the dynamical time-scale, t transition ≈ t dyn . The modification proposed to the rapid cooling regime is that during the rapid cooling regime, t transition = t dyn . Figure B3. Profiles of the most massive z = 5 halo in the WTHERM Smaug simulation at z = 13 − 5. Top panels: gas densitytemperature phase diagram. The black solid lines split the gas particles into star-forming (blue; inside the lower left region) and nonstar-forming gas (red). Bottom panels: face-on projections of the non-star-forming (top) and star-forming (bottom) gas particles. The stellar mass, gas mass, virial mass, virial radius and the radius of maximum rotation of this halo are shown in the bottom panels. Figure B4. Gas profiles of galaxies with M * ∼10 7±0.5 M at z = 13 − 5 in the WTHERM Smaug simulation (red thick lines). In each panel, top: the median radial density profiles of all gas (i.e. 
star-forming and non-star-forming gas; dash-dotted line) and the non-star-forming gas (solid line). Lines with shaded regions represent the median and 95 per cent confidence intervals around the median, using 100,000 bootstrap re-samples of the non-star-forming gas profile. The median SIS profile assumed in the SAM is calculated using the same amount of non-star-forming gas, and is indicated with the red dashed line; bottom: the ratio of the density profiles of all gas to non-star-forming gas. The radius within which the SIS gas is able to reach the centre through free-fall after one time step in the SAM is indicated with thin vertical dashed lines. The radii where the star-forming gas is as dense as the non-star-forming gas (i.e. ρ_sf ∼ ρ_hot) or becomes deficient (i.e. ρ_sf ∼ 0) are indicated with thin solid lines. The median virial radius and the masses of non-star-forming and all gas are shown in the bottom right corner of each panel. The results for galaxies with M_* ∼ 10^(7±0.5) M_⊙ in the NOSN NOZCOOL NoRe run are indicated with black thin lines for comparison, and their median properties are given in parentheses.
The Effects of Regulatory Lipids on Intracellular Membrane Fusion Mediated by Dynamin-Like GTPases
Membrane fusion mediates a number of fundamental biological processes such as intracellular membrane trafficking, fertilization, and viral infection. Biological membranes are composed of lipids and proteins; while lipids generally play a structural role, proteins mediate specific functions in the membrane. Likewise, although proteins are key players in the fusion of biological membranes, there is emerging evidence supporting a functional role of lipids in various membrane fusion events. Intracellular membrane fusion is mediated by two protein families: SNAREs and membrane-bound GTPases. SNARE proteins are involved in membrane fusion between transport vesicles and their target compartments, as well as in homotypic fusion between organelles of the same type. Membrane-bound GTPases mediate mitochondrial fusion and homotypic endoplasmic reticulum fusion. Certain membrane lipids, known as regulatory lipids, regulate these membrane fusion events by directly affecting the function of membrane-bound GTPases, instead of simply changing the biophysical and biochemical properties of lipid bilayers. In this review, we provide a summary of the current understanding of how regulatory lipids affect GTPase-mediated intracellular membrane fusion by focusing on the functions of regulatory lipids that directly affect fusogenic GTPases.
INTRODUCTION
Membrane fusion is a vital step of a variety of fundamental processes in the cell and can be defined as a merger of two membrane-enclosed compartments into a single compartment. Membrane fusion is catalyzed by either a single protein or a series of proteins. Two types of fusogenic proteins are involved in most intracellular fusion events: SNAREs catalyze most of the membrane fusion events that occur during intracellular vesicle trafficking, while membrane-bound GTPases mediate the homotypic fusion of organelles such as the endoplasmic reticulum (ER) and mitochondria. These GTPases belong to a dynamin-like GTPase superfamily with conserved domain compositions and structures (Yan et al., 2015). The members of this family are mechanochemical GTPases that participate in the fusion and fission of membranes (Praefcke and McMahon, 2004). Here, we focus on dynamin-like fusogenic GTPases, including mitofusins (MFNs) and atlastins (ATLs), which share common features but act in different parts of the cell.
While proteins generally act as catalysts during membrane fusion, lipids have long been known to play a structural role. However, there is emerging evidence that lipids can also regulate membrane fusion events directly. These lipids, such as diacylglycerol, phosphatidic acid, phosphoinositides, and sterols, play more functional roles than structural roles during membrane fusion and thus are termed "regulatory lipids" (Fratti et al., 2004). The structures and physical properties of these regulatory lipids often differ from those of structural phospholipids; specifically, structural phospholipids take the form of cylinders with a typical phosphate head group and two acyl chains, while regulatory lipids display differential head group sizes, numbers of acyl chains, and charges, resulting in different overall shapes of the lipids. In addition, regulatory lipids often contribute to the formation of microdomains on membranes, thereby affecting their physicochemical properties (Munro, 2003). These microdomains play an important role in membrane fusion by serving as fusion sites at which lipid rearrangement and bilayer mergers occur (Lang et al., 2008). Regulatory lipid-containing microdomains are believed to control membrane fusion mainly by changing the fluidity and curvature of the membrane, making it more prone to fusion (Zhukovsky et al., 2019). However, recent studies revealed that regulatory lipids also control membrane fusion by physically interacting with fusogenic proteins and thereby affecting their functions. There is indeed evidence for the direct involvement of regulatory lipids in GTPase-induced ER fusion and mitochondrial fusion through protein-lipid interactions. In this review, we describe current knowledge of the mechanisms by which certain regulatory lipids affect GTPase-induced intracellular membrane fusion.
MITOFUSIN IS INVOLVED IN MITOCHONDRIAL OUTER-MEMBRANE FUSION
Mitochondria play a vital role in cellular homeostasis and survival by functioning as the key player in cellular ATP production, apoptosis regulation, and cell aging. Mitochondria normally exist as elongated tubules in the cytoplasm, undergoing constant fusion and fission (Bereiter-Hahn and Voth, 1994; Sesaki and Jensen, 1999; Shaw and Nunnari, 2002). Maintenance of the normal mitochondrial morphology is critical for their function, and mitochondrial dysfunction is associated with neurodegenerative disorders such as Parkinson's and Huntington's diseases (Chen and Chan, 2009). Because mitochondria are enclosed by outer- and inner-membranes with distinct roles, the mechanism by which fusion and fission of these two membranes are coordinated is a long-standing question. Fusion of the mitochondrial outer-membrane is controlled by the dynamin-like GTPases MFN1 and MFN2 in mammals and Fzo1p in yeast (Hermann et al., 1998; Rapaport et al., 1998; Ishihara et al., 2004; Koshiba et al., 2004), whereas OPA1/Mgm1p controls fusion of the inner-membrane (Alexander et al., 2000; Delettre et al., 2000; Olichon et al., 2003; Wong et al., 2003). Although the fusion events of the outer- and inner-membranes are mechanistically distinct (Meeusen et al., 2004), they are tightly inter-regulated (Cipolat et al., 2004). In yeast, Fzo1p and Mgm1p cooperate to coordinate outer-membrane fusion and inner-membrane fusion (Sesaki et al., 2003; Sesaki and Jensen, 2004; Coonrod et al., 2007), and these two events are thought to be synchronized by Ugo1p (Hermann et al., 1998; Wong et al., 2003; Sesaki and Jensen, 2004). However, the exact mechanism involved in this process is still largely unknown, and a mammalian orthologue of Ugo1p is yet to be identified.
The first factor identified as a regulator of mitochondrial morphology was fuzzy onions (fzo) in Drosophila (Hales and Fuller, 1997). The mammalian homologues of fzo, MFN1 and MFN2, are similar in structure to each other, but these proteins seem to play separate roles in mitochondrial fusion (Santel and Fuller, 2001). Overexpression of MFN2 suppresses MFN1-induced mitochondrial tubulation (Eura et al., 2003). MFNs consist of a large N-terminal GTPase domain followed by two heptad repeat (HR) domains. Although it is generally accepted that the HR domains are separated by two transmembrane domains and thus both face the cytoplasm (Rojo et al., 2002; Li et al., 2019), a different topology of MFNs was also suggested (Mattie et al., 2018). In a working model for MFN1-induced fusion, MFN1 proteins in the fusing membranes form a homodimer via their GTPase domains upon GTP hydrolysis (Cao et al., 2017; Yan et al., 2018). This homodimerization induces a drastic conformational change of MFN1, resulting in close apposition and the subsequent merger of the membranes (Yan et al., 2018). The HR domains of MFNs (HR1 and HR2) are structurally similar to the SNARE domain of SNARE proteins, well-characterized fusogens involved in intracellular vesicle fusion (Bonifacino and Glick, 2004). Structural studies revealed that the HR domains of MFNs, which consist of repeats of seven amino acids, form amphiphilic helices that potentially interact with each other by building coiled-coil structures, similar to the formation of trans-SNARE complexes between apposed membranes (Koshiba et al., 2004; Daste et al., 2018). Notably, HR1 and HR2 play distinct roles as follows: the HR2 domain forms an antiparallel dimer with another HR2 domain on the opposing membrane, which mediates docking between the two membranes (Koshiba et al., 2004), whereas the amphiphilic property of the HR1 domain enables it to bind to the surface of the membrane and perturb its structure, thereby facilitating membrane fusion (Daste et al., 2018). Although this working model by which MFN1 mediates mitochondrial membrane fusion has been widely accepted, the exact mechanism by which the HR domains facilitate fusion remains largely unclear.
PHOSPHATIDIC ACID AND MITOFUSIN-MEDIATED FUSION
Phosphatidic acid (PA) constitutes approximately 5% of the mitochondrial membrane. PA has a relatively small head group and thus becomes a cone-shaped lipid that spontaneously induces negative membrane curvature when present in lipid bilayers (Kooijman et al., 2005). There are two ways through which PA is incorporated into the mitochondrial membrane: first, the majority of PA molecules are transferred from the ER to the mitochondrial outer-membrane, presumably through ER-mitochondrial contact sites, such as ERMES in yeasts (Murley and Nunnari, 2016;Petrungaro and Kornmann, 2019); second, a smaller number of PA molecules are generated in the mitochondrial membrane directly through enzymatic conversion of cardiolipin (CL) by mitochondrial phospholipase D (MitoPLD) (Choi et al., 2006). PA influences both fusion and fission of the mitochondrial outer-membrane, although its exact roles in these processes remain poorly characterized (Choi et al., 2006;Adachi et al., 2016). One plausible role of PA in membrane fusion is the introduction of negative curvature into the membrane, making its shape more favorable for fusion (Frohman, 2015). MitoPLD also seems to be important for mitochondrial outer-membrane fusion as follows: overexpression of MitoPLD aggregates mitochondria, indicating fusion of these structures, and RNAi-mediated knockdown of MitoPLD dramatically decreases mitochondrial fusion (Choi et al., 2006).
Although there is no direct evidence that PA physically interacts with MFN1 to mediate membrane fusion, overexpression of phospholipase A1, which converts PA to lysophosphatidic acid, triggers mitochondrial fragmentation, while its suppression induces elongation of mitochondria (Baba et al., 2014), suggesting that mitochondrial fusion and fission depend on the level of PA in the mitochondrial outer-membrane. Notably, PA interacts directly with the N-terminal amphipathic helix of the SNARE Spo20p, a yeast homologue of mammalian SNAP25, recruiting it to the site of fusion (Nakanishi et al., 2004; Horchani et al., 2014). Since the HR domains of MFN also contain two conserved amphipathic helices and bind to the lipid bilayer, it is possible that they also associate with PA directly to facilitate mitochondrial outer-membrane fusion (Figure 1; Cohen and Tareste, 2018). A direct interaction between PA and Ugo1p, a protein involved in the coordination of mitochondrial inner- and outer-membrane fusion, has been reported in yeast, and PA is required for the biosynthesis of Ugo1p (Vogtle et al., 2015). Thus, it can be speculated that PA promotes the generation of Ugo1p, thereby enriching Ugo1p at the fusion site where the yeast MFN Fzo1p is also recruited. Taken together, these studies suggest that PA can regulate MFN-induced mitochondrial outer-membrane fusion, although the exact mode of action remains yet to be clarified.
OPA1 IS INVOLVED IN MITOCHONDRIAL INNER-MEMBRANE FUSION
OPA1 is a major regulator of mitochondrial inner-membrane fusion, and its genetic mutation is the main cause of optic atrophy (Alexander et al., 2000; Delettre et al., 2000). Deletion or mutation of the genes encoding OPA1 and its yeast orthologue Mgm1p results in abnormal mitochondrial morphology (Olichon et al., 2003; Wong et al., 2003). OPA1/Mgm1p belongs to the dynamin-like GTPase family and includes a GTPase domain in the middle section, a transmembrane domain at the N-terminus, and a membrane-binding domain, called a paddle domain, at the C-terminus (Faelber et al., 2019). Although encoded by a single gene, OPA1/Mgm1p exists in the following two forms: the long isoform L-OPA1/Mgm1p and the short isoform S-OPA1/Mgm1p. Short isoforms are produced by proteolytic cleavage (MacVicar and Langer, 2016) and lack the transmembrane domain, thereby existing as soluble proteins in the intermembrane space of mitochondria. Although both the short and long forms participate in inner-membrane fusion (Meeusen et al., 2006; DeVay et al., 2009; Zick et al., 2009), they seem to play distinct roles. The short form readily hydrolyzes GTP to initiate membrane tethering, and its drastic conformational change triggers membrane fusion (Zick et al., 2009; Faelber et al., 2019). By contrast, although the long form lacks GTPase activity, it associates with and activates the GTPase activity of the short form. Furthermore, the transmembrane domain of the long form is required for its precise targeting to the mitochondrial inner-membrane (DeVay et al., 2009). However, a recent study revealed that the long form of OPA1 is sufficient to drive liposome fusion in a GTP-dependent manner (Ban et al., 2017), indicating that it also plays a direct role in fusion. Thus, although both forms of OPA1/Mgm1 are required for mitochondrial inner-membrane fusion (DeVay et al., 2009; Ban et al., 2017; Ge et al., 2020), it is unclear how they cooperate to mediate this process.
CARDIOLIPIN AND OPA1-MEDIATED FUSION
Cardiolipin is an important lipid that comprises approximately 25% of the inner-membrane and approximately 4% of the outer-membrane phospholipids (Ardail et al., 1990; Horvath and Daum, 2013). Unlike other phospholipids, CL has a unique chemical structure; it contains two phosphate head groups and four acyl chains, forming a symmetric structure. A number of reports have emphasized the importance of CL in mitochondrial inner-membrane fusion. For example, the inactivation of enzymes involved in CL synthesis generally causes morphological defects of mitochondria (Matsumura et al., 2018). In addition, CL regulates the mitochondrial morphology directly by facilitating the assembly of the dynamin-like GTPase OPA1/Mgm1p (DeVay et al., 2009; Rujiviphat et al., 2009; Joshi et al., 2012; Ban et al., 2017). Moreover, CL stimulates the GTPase activity of S-Mgm1p in a concentration-dependent manner, as evidenced by the finding that GTP hydrolysis by S-Mgm1p was higher in liposomes containing 20% CL than in liposomes containing 6% CL (DeVay et al., 2009). Similarly, enhanced GTP hydrolysis and S-OPA1 oligomerization were observed in the presence of CL (Ban et al., 2010). Compared with the short form of OPA1/Mgm1p, little is known about the long form, mainly because L-OPA1 is difficult to purify for biochemical studies. However, in a recent study, recombinant L-OPA1 was successfully purified from silkworm, and its function was assessed in vitro. Strikingly, this study reported that L-OPA1 was sufficient to drive fusion of liposomes containing 25% CL in a GTP-dependent manner. This fusion requires heterotypic interactions between L-OPA1 and CL in trans; specifically, L-OPA1 in a liposome binds to CL in another liposome (Ban et al., 2017; Ge et al., 2020). This result may explain why fusion was observed between mitochondria from OPA1-depleted cells and those from wild-type cells (Ban et al., 2017, 2018). Thus, CL may serve as a binding site for S/L-OPA1 heterodimers, thereby enabling these proteins to tether membranes and induce the subsequent fusion (Figure 2A).
L-OPA1 induces fusion only when it interacts with CL on the opposite membrane in trans. Therefore, it has been suggested that the CL-binding region of L-OPA1 is required for its recruitment to CL-enriched microdomains to facilitate fusion (Ban et al., 2017; Ge et al., 2020). Furthermore, a recent structural study of S-Mgm1p revealed that the CL-binding site lies on the GTPase domain, and its positively charged surface residues participate in electrostatic interactions with negatively charged lipids (Yan et al., 2020). Therefore, it is possible that the interaction of L-Mgm1p/OPA1 with CL induces a conformational change in the protein to facilitate the formation of S/L-OPA1/Mgm1p heterodimers. It is also possible that this interaction enhances the GTPase activity of Mgm1p/OPA1, which then supports efficient fusion (Figure 2B).
A similar mode of action in promoting yeast vacuole fusion was observed for the Phox homology domain of the SNARE Vam7p. This domain binds phosphatidylinositol 3-phosphate (PI(3)P) on the vacuolar membrane, resulting in the accumulation of Vam7p at PI(3)P-rich regions, where it forms trans-SNARE complexes with other SNARE proteins to promote vacuole fusion (Cheever et al., 2001). In addition, the interaction between PI(3)P and the Phox homology domain of Vam7p is thought to cause a conformational change in Vam7p, which may enhance its interaction with other fusion components (Cheever et al., 2001; Miner et al., 2016).
ATLASTIN IS INVOLVED IN ER FUSION
The ER, a single large organelle that spreads throughout the cytoplasm, is the major site of lipid synthesis, protein folding, and protein quality control (Baumann and Walz, 2001; Ellgaard and Helenius, 2003). Although enclosed by a single, continuous lipid bilayer, the ER exists in the following two distinct forms: a sheet-like structure surrounding the nucleus and a tubular network dispersed throughout the cytoplasm (Voeltz et al., 2002). The tubular ER is a dynamic structure that constantly undergoes elongation, retraction, and fusion (Lee and Chen, 1988). The tubular structure of the ER seems to be important for its function because it enables distinct membrane contact sites with various organelles (Phillips and Voeltz, 2015). Maintenance of the proper morphology of the ER is thought to be important for normal cell physiology, and its disruption is often associated with neurological disorders such as hereditary spastic paraplegia (Namekawa et al., 2006; Salinas et al., 2008; Park et al., 2010).
Although the mechanism by which the tubular ER network is formed and maintained remains poorly understood, Yop1/DP1 and a class of proteins called reticulons are thought to play a critical role in generating the high membrane curvature required to form ER tubules (Voeltz et al., 2006; Hu et al., 2008). In addition, ATLs, which belong to the family of dynamin-like GTPases, are also thought to mediate the fusion of ER tubules (Orso et al., 2009) by forming three-way junctions of the tubules and thus generating the mesh-like structure of the ER. Drosophila ATL alone, or yeast ATL (Sey1p) with either reticulon or DP1, is sufficient to recapitulate formation of the tubular ER network structure in vitro when reconstituted into synthetic liposomes (Powers et al., 2017). Furthermore, proteoliposomes reconstituted with purified Drosophila ATL, Sey1p, or the plant ATL Root Hair Defective 3 are able to fuse with each other, confirming that these proteins can function as genuine fusogens (Orso et al., 2009; Anwar et al., 2012; Zhang et al., 2013). However, human ATL1 is unable to induce liposome fusion, suggesting that additional proteins are required for ER membrane fusion in human cells (Wu et al., 2015). The fusogenic activities of the other human ATLs (ATL2 and ATL3) have not yet been investigated.
Atlastin family proteins contain a large N-terminal GTPase domain followed by three helical bundles, two transmembrane domains, and a short α-helix at the C-terminal end (Bian et al., 2011;Yan et al., 2015). The current model for ATL-induced membrane fusion is that upon GTP hydrolysis, the GTPase domain of ATL forms a homodimer with that of another ATL molecule on the apposed membrane, and their helix bundles then undergo dramatic conformational changes that bring the membranes into close proximity, which eventually induces the fusion of ER tubules (Bian et al., 2011;Yan et al., 2015;O'Donnell et al., 2017;Winsor et al., 2017). Although it is widely accepted that ATLs are sufficient to drive liposome fusion and are therefore the major fusogens for ER membrane fusion (Orso et al., 2009;Anwar et al., 2012;Zhang et al., 2013), a recent study using purified yeast ER microsomes suggested that additional factors are required for efficient ER fusion in vivo, at least in yeast (Lee et al., 2015). In this study, ER-resident SNAREs were critical for ER microsome fusion in vitro and for normal ER morphology in vivo. This finding is consistent with the observation that human ATL1 alone is insufficient to induce liposome fusion.
CHOLESTEROL AND ATLASTIN-MEDIATED FUSION
Cholesterol has a small hydrophilic head group and a bulky steroid backbone, and is a vital component of biological membranes. Accumulating evidence supports the importance of cholesterol in various fusion events, such as exocytosis (Wasser et al., 2007; Linetti et al., 2010) and viral fusion (Klug et al., 2017; Lee et al., 2017). Cholesterol is thought to participate in membrane fusion mainly by altering the biophysical properties of the membrane, such as the fluidity, thickness, curvature, and stability of lipid bilayers (Yang et al., 2016). In addition, cholesterol may also regulate membrane fusion by interacting directly with fusogenic proteins. Consistent with this idea, cholesterol promotes clustering of SNARE proteins at the site of fusion (Murray and Tamm, 2011; Enrich et al., 2015). Furthermore, some SNARE proteins contain cholesterol-binding motifs, such as CRAC [Cholesterol Recognition/interaction Amino acid Consensus sequence, (L/V)-X(1-5)-Y-X(1-5)-(K/R)] and CARC [an inverted CRAC motif, (K/R)-X(1-5)-(Y/F)-X(1-5)-(L/V)], in or near their transmembrane regions (Enrich et al., 2015), suggesting that cholesterol affects the function of SNAREs to facilitate membrane fusion by binding to them directly.
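The consensus patterns above lend themselves to a simple computational screen. The following Python snippet is a minimal illustrative sketch, not taken from the original studies: it translates the CRAC and CARC consensus sequences quoted above into regular expressions and scans a protein sequence for candidate motifs; the example sequence is hypothetical.

    import re

    # CRAC: (L/V)-X(1-5)-Y-X(1-5)-(K/R); CARC is the inverted motif with Y/F.
    # Lookahead groups allow overlapping hits; the longest match at each start
    # position is reported.
    CRAC = re.compile(r'(?=([LV].{1,5}Y.{1,5}[KR]))')
    CARC = re.compile(r'(?=([KR].{1,5}[YF].{1,5}[LV]))')

    def find_motifs(seq):
        seq = seq.upper()
        return {
            "CRAC": [(m.start() + 1, m.group(1)) for m in CRAC.finditer(seq)],
            "CARC": [(m.start() + 1, m.group(1)) for m in CARC.finditer(seq)],
        }

    # A hypothetical sequence fragment, for illustration only.
    print(find_motifs("MKTLRVAYILKQSV"))

Positions are reported 1-based, as is conventional for protein sequences; a real screen would additionally restrict hits to the vicinity of predicted transmembrane regions, as the motif definition requires.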
We recently revealed that ergosterol (yeast cholesterol) affects ER membrane fusion by interacting directly with Sey1p (Lee et al., 2019). The transmembrane domains of Sey1p contain two sterol-binding motifs, the R-W-L motif (a combination of basic [R], aromatic [W], and aliphatic [L/V] residues) and the CARC motif (Figure 3A). Furthermore, disruption of these sterol-binding motifs abolished the binding of sterols to Sey1p, severely reduced ER microsome fusion in vitro, and disrupted the normal ER morphology in vivo. Although the exact mechanism by which sterols stimulate Sey1p-mediated ER fusion remains unclear, one possibility is that the interaction between the transmembrane domain of ATLs and cholesterol (or ergosterol in yeast) causes conformational changes of ATLs, making them more favorable for fusion. Consistent with this idea, mutant Sey1p lacking the sterol-binding motifs is unable to interact with Sec22p (Lee et al., 2019), an ER SNARE involved in Sey1p-dependent ER fusion (Lee et al., 2015), supporting the notion that the binding of cholesterol to Sey1p affects the overall conformation of the protein, resulting in modification of its fusogenic activity as well as of the profiles of its interacting proteins (Figure 3A). Notably, the transmembrane domain of the SNARE synaptobrevin-2 exists as two distinct forms, an open scissor form and a closed, parallel form, depending on the presence of cholesterol. This conformational transition modifies the fusogenic activity of the protein by changing the curvature of the surrounding membrane and possibly promotes complex formation with other SNAREs (Tong et al., 2009). Because ATLs contain two transmembrane domains, it is plausible that their conformations are affected by the presence of cholesterol similarly to that of the transmembrane domain of synaptobrevin-2. Furthermore, because potential sterol-binding motifs are found in all human ATL proteins, regulation of ATL activity by direct binding of cholesterol is likely to be evolutionarily conserved.
We also found that Sey1p interacts physically with Erg4p and Erg11p, enzymes involved in the biosynthesis of ergosterol, which raises the possibility that Sey1p acts to increase the local concentration of ergosterol at the fusion site (Lee et al., 2019). In turn, this process not only stimulates the pre-existing Sey1p molecules for efficient fusion, but also recruits more Sey1p molecules and interacting proteins such as Sec22p to the site of fusion (Figure 3B). In support of this concept, ER subdomains containing Rab10, which reportedly mediates fusion between ER tubules in mammalian cells, are enriched in ER enzymes that regulate phospholipid synthesis, including phosphatidylinositol synthase and choline/ethanolamine phosphotransferase 1, which converts diacylglycerol precursors to phosphatidylethanolamine and phosphatidylcholine (English and Voeltz, 2013).
In addition to the direct participation of cholesterol in ATL-mediated ER fusion, structural and biochemical studies of Drosophila ATL have suggested that a direct interaction of the C-terminal tail of ATL with lipid bilayers plays an important role in ER membrane fusion (Moss et al., 2011; Liu et al., 2012). In one of these studies, deletion of the short C-terminal tail of Drosophila ATL almost completely abolished the fusion of phosphatidylcholine:phosphatidylserine (PC:PS) proteoliposomes (Moss et al., 2011). The C-terminal tail of ATL is predicted to form an amphiphilic helix, which is very likely to be embedded into the lipid bilayer, thereby affecting the curvature and the stability of the membrane (Drin and Antonny, 2010). Indeed, the hydrophobic residues of the C-terminal tail of ATL interact directly with the hydrophobic side of the lipid bilayer (Liu et al., 2012). Similar observations were made for the plant ATL Root Hair Defective 3, which contains a conserved C-terminal tail that is required for ER targeting and efficient ER membrane fusion, implying that the C-terminal region is inserted into the lipid bilayer, as seen in Drosophila ATL-mediated fusion (Sun and Zheng, 2018). Although it is unclear how the C-terminal tail of ATL functions during ER membrane fusion, its insertion into the membrane may perturb the lipid bilayer, making it more prone to membrane fusion (Liu et al., 2012; Faust et al., 2015). However, it was reported that the necessity of the C-terminal tail of ATL for membrane fusion became less stringent when phosphatidylethanolamine (PE), a non-bilayer-prone lipid, was added to PC:PS proteoliposomes (Faust et al., 2015). This result suggests that although the C-terminal tail of ATL facilitates fusion, it is not essential for ER membrane fusion in vivo, as ER membranes contain significant amounts of non-bilayer-prone lipids such as phosphatidylethanolamine, cholesterol, and diacylglycerol (van Meer et al., 2008). In particular, Sey1p-mediated liposome fusion is highly susceptible to the omission of PE or ergosterol (Sugiura and Mima, 2016; Lee et al., 2019).
DISCUSSION
This review describes the role of regulatory lipids in GTPase-mediated intracellular membrane fusion, focusing on examples of how these lipids affect proteins involved in membrane fusion processes. Some regulatory lipids facilitate membrane fusion by serving as an anchoring site for partner proteins and thus concentrating them at the site of membrane fusion, while others may bind directly to fusion proteins and modulate their fusogenic activity. Although lipids and proteins are both key players of membrane fusion, we have only just started to understand how their interactions control membrane fusion, and much remains to be clarified. A number of fusogenic proteins have potential lipid-binding domains or motifs; however, further studies are required to determine whether they indeed bind to lipids and how their interactions affect membrane fusion. In a recent report (Lee et al., 2019), we demonstrated that the yeast ATL Sey1p contains two sterol-binding motifs near its transmembrane domains. Disruption of these motifs severely abrogates Sey1p-mediated ER fusion, suggesting that the binding of sterols affects the fusogenic function of Sey1p. We also found that all three human ATL proteins contain two potential sterol-binding motifs. It would be interesting to investigate whether human ATLs associate directly with cholesterol, and whether this interaction influences their fusogenic activity. A study by Joji Mima's laboratory showed that Sey1p-mediated liposome fusion is stimulated by other regulatory lipids, such as phosphatidylinositol and PA (Sugiura and Mima, 2016). It would therefore also be interesting to investigate how these lipids regulate Sey1p-mediated fusion. Compared with current knowledge of the role of regulatory lipids in ATL-mediated ER fusion, much less is known about how regulatory lipids control GTPase-mediated mitochondrial fusion. Recent advances in research tools for lipid studies and microscopy will guarantee a deeper and more comprehensive understanding of how regulatory lipids dictate GTPase-mediated intracellular membrane fusion events.
Casimir Physics beyond the Proximity Force Approximation: The Derivative Expansion
We review the derivative expansion (DE) method in Casimir physics, an approach which extends the proximity force approximation (PFA). After introducing and motivating the DE in contexts other than the Casimir effect, we present different examples which correspond to that realm. We focus on different particular geometries, boundary conditions, types of fields, and quantum and thermal fluctuations. Besides providing various examples where the method can be applied, we discuss a concrete example for which the DE cannot be applied; namely, the case of perfect Neumann conditions in 2 + 1 dimensions. By the same example, we show how a more realistic type of boundary condition circumvents the problem. We also comment on the application of the DE to the Casimir-Polder interaction which provides a broader perspective on particle-surface interactions.
I. INTRODUCTION
Casimir forces are one of the most intriguing macroscopic manifestations of quantum fluctuations in Nature. Their existence, first realized in the specific context of the interaction between the quantum electromagnetic (EM) field and the boundaries of two neutral bodies, manifests itself as an attractive force between them. That force depends, in an intricate manner, on the shape and EM properties of the objects. Since the discovery of this effect by Hendrik Casimir 75 years ago [1], this and closely related phenomena have been subjected to intense theoretical and experimental research [2-5]. The outcome of that work has not just revealed fundamental aspects of quantum field theory, but also subtle aspects of the models used to describe the EM properties of material bodies. Besides, it has become increasingly clear that this research has potential applications to nanotechnology.
Theoretical and experimental reasons have called for the calculation of the Casimir energies and forces for different geometries and materials [6], and with an ever-increasing accuracy. The simplicity of the theoretical predictions when two parallel plates are involved corresponds to a difficult experimental setup, due to alignment problems (in spite of this, the Casimir force for this geometry has been measured at the 10% accuracy level [7]). Conversely, geometries which are more convenient from the experimental point of view, and allow for higher-precision measurements, lead to more involved theoretical calculations. Such is the case of a cylinder facing a plane [8], or a sphere facing a plate, which is free from the alluded alignment problems [9-15].
From a theoretical standpoint, finding the dependence of the Casimir energies and forces on the geometry of the objects poses an interesting challenge. Indeed, even when evaluating the self-energies which result from the coupling of an object to the vacuum field fluctuations, results may be rather non-intuitive, as in the case of a single spherical surface [16].
For a long time, calculations attempting to find analytical results for the Casimir and related interactions had been restricted to using the so-called proximity force approximation (PFA). In this approach, the interaction energies and the resulting forces are computed approximating the geometry by a collection of parallel plates and then adding up the contributions obtained for this approximate geometry. This procedure was presumed to work well enough, at least for smooth surfaces when they are sufficiently close to each other; in more precise terms: when the curvature radii of the surfaces R_i are much larger than the distance d between them. Indeed, this is the main content of the Derjaguin approximation (DA), developed by Boris Derjaguin in the 1930s [17-19], which is pivotal in the study of surface interactions, especially in the context of colloidal particles and biological cells. This approach has significant implications in understanding colloidal stability, adhesion, and thin film formation.
It is worth introducing some essentials of the DA, in particular of the geometrical aspects involved. Assuming the interaction energy per unit area between two parallel planes at a distance h is known and given by E(h), the DA yields an expression for the interaction energy between two curved surfaces, U_DA [2,4,17-20]. Indeed,

U_DA(a) = 2π R_eff ∫_a^∞ dh E(h),    (1)

where a denotes the distance between the surfaces, R₁ and R₂ are their curvature radii (at the point of closest distance), while R_eff ≡ R₁R₂/(R₁ + R₂). It is rather straightforward to implement the approximation at the level of the force f_DA between surfaces:

f_DA(a) = −dU_DA/da = 2π R_eff E(a).    (2)

This approximation is usually derived from a quite reasonable assumption, namely, that the interaction energy can be approximated by means of the PFA expression:

U_PFA = ∫_Σ dσ E(h).    (3)

Here, the surface integration may be performed over one of the participating surfaces, but it could also be over an imaginary, "interpolating surface", which lies between them. The DA is obtained from the expression above, by approximating the surfaces by (portions of) the osculating spheres (with radii R₁ and R₂) at the point of closest approach.
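As a quick sanity check of Equations (1) and (2), the following sympy sketch assumes a model interaction, the non-retarded van der Waals energy per unit area E(h) = −A/(12πh²) between half-spaces (a standard textbook choice, not specific to this review), and verifies that the DA energy reduces to the classic Hamaker sphere-sphere result, with the force obeying f_DA = 2π R_eff E(a):

    import sympy as sp

    a, h, A, R1, R2 = sp.symbols('a h A R1 R2', positive=True)
    Reff = R1 * R2 / (R1 + R2)

    # Model parallel-plate density: non-retarded van der Waals attraction.
    E = -A / (12 * sp.pi * h**2)

    U_DA = 2 * sp.pi * Reff * sp.integrate(E, (h, a, sp.oo))    # Equation (1)
    f_DA = -sp.diff(U_DA, a)

    print(sp.simplify(U_DA))                                    # -A*R1*R2/(6*a*(R1 + R2))
    print(sp.simplify(f_DA - 2 * sp.pi * Reff * E.subs(h, a)))  # 0, consistent with Eq. (2)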
Based on this hypothesis, on dimensional grounds one can expect the corrections to the PFA to be of order O(a/R_i). Note, however, that since the PFA had not been obtained as the leading-order term in a well-defined expansion, the approximation itself did not provide any quantitative method to assess the validity of that assumption.
A need for a reliable measure of the accuracy of the results obtained using different methods became increasingly crucial, especially since the development of the "precision era" in the measurement of the Casimir forces [9-15]. It was in this context that the Derivative Expansion (DE) approach was first introduced by us in 2011 [21], as a tool to assess the validity of the PFA, by putting it in the framework of an expansion, and to calculate corrections to the PFA using that very same expansion. When one realizes that the PFA had previously been proposed in contexts which are rather different to Casimir physics, it becomes clear that the improvement on the PFA which the DE represents may and does have relevance in those realms, regardless of them having an origin in vacuum fluctuations or not. Indeed, when one strips the DE of the particularities of Casimir physics, one can see that the ingredients that allowed one to implement it are also found, for example, in electrostatics, nuclear physics, and colloidal surface interactions.
Here, we present the essential features of the DE, its derivation, and some examples of its applications. The review is organized as follows. In Section II, we recall some aspects of the DA which stem from its application to nuclear and colloidal physics. We start with the DA not just for historical reasons, but also because we believe that this sheds light on some geometrical aspects of the approximation in a rather direct way (like the relevance of curvature radii and distances).
Then, in Section III, we introduce the DE in one of its simplest realizations, namely, in the context of electrostatics, for a system consisting of two conducting surfaces kept at different potentials [22]. We first evaluate the PFA in this example, and then introduce the DE as a method to improve on that approximation. In Section IV, we introduce a more abstract, and therefore more general, formulation of the DE [23]. By putting aside the particular features of a specific interaction, and keeping just the ones that are common to all of them, we are led to formulate the problem as follows: the DE is a particular kind of expansion of a functional having as argument a surface (or surfaces). We mean "functional" here in its mathematical sense: a function that assigns a number to a function or functions. We elucidate and demonstrate some of the aspects of the DE in this general context; the purpose of presenting those aspects is not just a matter of consistency or justification, but they also provide a concrete way of applying and implementing the DE in any example where it is applicable.
Then, in Section V, we focus on the DE in the specific context of the Casimir interaction between surfaces, for perfect boundary conditions at zero temperature, i.e., vacuum fluctuations [21,24]. In Section VI, we review the extension of those results to the case of finite temperatures and real materials [25,26]. As we shall see, the temperature introduces another scale, which affects the form one must adopt for the different terms in the DE. We then comment on an aspect which first manifests itself here: as happens with any expansion, the DE is to be expected to break down for some specific examples, when the hypotheses that justified it are not satisfied. We show this for the case of the Casimir effect with Neumann conditions at finite temperatures [26,27]. We also show that the application of the DE to the EM field is free of this problem, if dissipative effects are included in the model describing the media [28].
The application of the DE to Casimir-Polder forces for atoms near smooth surfaces [29] is described in Section VII. Other alternatives to compute Casimir energies beyond the PFA [30] are described in Section VIII. Section IX contains our conclusions.
II. PROXIMITY APPROXIMATIONS IN NUCLEAR AND COLLOIDAL PHYSICS
The introduction of the Derjaguin Approximation (DA) to nuclear physics dates back to the seminal paper [31]. In this paper, the DA was rediscovered and applied to calculate nuclear interactions, starting with a Derjaguin-like formula for the surface interaction energies. The approach was based on a crucial "universal function", a term referring here to the interaction energy between flat surfaces, calculated using a Thomas-Fermi approximation. In spite of the rather different context, the analogy with the approach followed in the DA becomes clear when one introduces three surfaces: the physical ones, Σ_L and Σ_R, and the intermediate one, Σ, which one uses to parametrize the interacting ones. Then, if the physical surfaces are sufficiently smooth, the interaction energy should, to a reasonable approximation, be described by the PFA, in a similar fashion as in Equation (3). To render the assertion above more concrete, we yet again use the function h : Σ → R, measuring the distance between Σ_L and Σ_R at each point on Σ. Since h will have level sets which are, except for a zero-measure set, one-dimensional (closed curves), and the interaction depends just on h, the PFA expression for the interaction energy U may be rendered as a one-dimensional integral:

U_PFA = ∫ dh J(h) E(h),    (4)

where J(h) dh is the infinitesimal area between two level curves on Σ, the ones corresponding to h and h + dh, while E is the universal function.
We now assume that Σ is a plane, and that the physical surfaces may both be described by means of just one Monge patch based on Σ. This surface is then naturally thought of (in descriptive-geometry terms) as the projection plane. Using Cartesian coordinates (x₁, x₂) ≡ x∥ on Σ, assuming (for smooth enough surfaces) that J may be regarded as constant, and using a second-order Taylor expansion of h around a (the distance of closest approach),

h(x∥) ≃ a + x₁²/(2R₁) + x₂²/(2R₂),    (5)

produces, when evaluating the PFA interaction energy (4), the DA energy (1). Here, R₁ and R₂ are the radii of curvature of the surface defined by x₃ = h(x∥) at x₃ = a. This result may be improved, even within the spirit of the PFA, by introducing some refinements. Indeed, in [32], a generalization of the PFA has been introduced such that the starting point was Equation (4), but now allowing for the surfaces to have larger curvatures, as long as they remained almost parallel locally. The main difference that follows from those weaker assumptions is that, now, the Jacobian J may become a non-trivial function of h. For instance, introducing a linear expansion,

J(h) ≃ J(a) + J′(a) (h − a),    (6)

a straightforward calculation shows that the force becomes the sum of the DA term plus a second term proportional to the derivative of the Jacobian with respect to h. This is a correction to the DA obtained from the same starting point we used for the DA: U_PFA. In other words, the corrected force is still determined by the energy density for parallel plates. As we shall see, the DE will introduce corrections that go beyond E(a); the correction will depend on both the geometry and the nature of the interaction. We wish to point out that the lack of knowledge of an exact expression for E is not specific to nuclear physics; it may, of course, also appear in other applications. The general PFA approach can nevertheless be introduced; the accuracy of its predictions will then be limited not just by the fulfillment or not of the geometrical assumptions, but also by the reliability of the expression for E. Using different approximations for E gives as many results for the PFA. For a recent review in the case of nuclear physics, see Refs. [33,34].
An apparently unrelated approximation, based on different physical assumptions, was introduced in the context of colloidal physics. Let us now see how it yields a result which agrees with the DA: it is the so-called Surface Element Integration (SEI) [35], or Surface Integration Approach (SIA) [36]. This approach may be introduced as follows: let us consider a compact object facing the x₃ = 0 plane; x₃ is then the normal coordinate to the plane, pointing towards the compact object. With these conventions, the SEI approximation applied to the interaction energy amounts to the following:

U_SEI = −∮ dσ (n̂ · x̂₃) E(h).    (8)

Here, n̂ denotes the outwards-pointing unit normal to each surface element of the object. We see that, when the compact object may be thought of as delimited by just two surfaces, one of them facing the plane and the other away from it, the SEI consists of the difference between the PFA energies of those surfaces. This (possibly startling) fact is, as we shall see, related to the fact that the SEI becomes exact for almost-transparent bodies, a situation characterized by the fact that the interaction is the result of adding all the (volumetric) pairwise contributions.
In the context of colloidal physics, the SEI method relies heavily upon the existence of a pressure on the compact object. The effect of that pressure should be integrated over the closed surface surrounding the compact object, in order to find the total force [35]. An alternative route to understand the SEI is to show that Equation (8) becomes exact when the interaction between macroscopic bodies is the superposition of the interactions for the pair potentials of their constituents [36]. That may be interpreted by using a simple example. Consider two media: one of them, the left medium L, corresponds to the x₃ ≤ 0 half-space, while the right medium, R, is defined as the region

ψ₁(x∥) ≤ x₃ ≤ ψ₂(x∥).    (9)

The interaction energy U is a functional of the two functions ψ₁,₂. When the media are diluted, we expect the interaction energy to have the form

U = ∫ d²x∥ [E(ψ₁(x∥)) − E(ψ₂(x∥))],    (10)

where E(a) is the interaction energy per unit area between two half-spaces at a distance a. This formula can be interpreted as follows: to obtain the interaction energy for the configuration described by ψ₁ and ψ₂, one must certainly subtract from E(ψ₁) the contributions from x₃ > ψ₂. This "linearity" is expected to be valid only for dilute media, and in that situation it coincides with the result obtained using the SEI. One expects then the SEI to give an exact result for almost-transparent media, for which the superposition principle holds true, and the total interaction energy is due to the sum of all the different pairwise potentials [36]. It is worth noting, at this point, the important fact that the PFA also becomes exact in Casimir physics when the media constituting the objects are dilute. Indeed, this has been pointed out in [37,38].
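The linearity behind Equation (10) is easy to exhibit in a toy model. The sketch below (an illustration under assumed conventions, not taken from the review) uses one-dimensional "media" with an exponential pair potential v(r) = exp(−r) between constituents, for which the half-line density is E(a) = exp(−a); the directly integrated pairwise energy then matches E(ψ₁) − E(ψ₂):

    import numpy as np
    from scipy.integrate import dblquad

    # 1D toy media: L occupies x <= 0, R occupies psi1 <= y <= psi2, with a
    # pairwise potential v(r) = exp(-r) between constituents.
    psi1, psi2 = 0.7, 2.3
    v = lambda r: np.exp(-r)

    # E(a): interaction between two half-lines separated by a; here E(a) = exp(-a).
    E = lambda d: np.exp(-d)

    # Pairwise ("dilute-medium") energy, integrated directly over both media.
    U, _ = dblquad(lambda y, x: v(y - x), -np.inf, 0.0, psi1, psi2)

    print(U, E(psi1) - E(psi2))   # the two values agree, as in Equation (10)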
The examples just described illustrate the relevance of the DA, and of some of its variants, to different areas of physics. At the same time, the main drawback is made rather evident: in spite of being based on reasonable physical assumptions, it is difficult to assess its validity. The reason for this difficulty is that the approximation is uncontrolled, and therefore the estimation of the error incurred is difficult within a self-contained approach.
The DE provides a systematic method to improve the PFA, and to compute its next-to-leading-order (NTLO) correction in a consistent setup.
III. INTRODUCING THE DERIVATIVE EXPANSION
A. The PFA in an Electrostatic Example

We introduce the PFA, and then the DE, in an example which neatly illustrates the DE's main aspects, in the context of electrostatics. Here, contrary to what happens when dealing with more involved systems like, say, Van der Waals, nuclear, or Casimir forces, the physical assumptions and their implementations are more transparent. We follow closely Ref. [22]. The set-up we want to describe consists of two perfectly-conducting surfaces, one of them an infinite grounded plane and the other a smoothly curved surface kept at an electrostatic potential V₀. We use coordinates such that the plane corresponds to z = 0, while the smooth surface is such that it can be described by a single function, namely, by an equation of the form z = ψ(x∥). The electrostatic energy contained between the surfaces can then be written as follows:

U = (ϵ₀/2) ∫ d³x |E(x)|²,    (11)

where ϵ₀ denotes the permittivity of vacuum. In terms of U and V₀, the capacitance C of the system is then given by C = 2U/V₀². Let us see how one implements the PFA in order to calculate U (from which one can extract, for instance, an approximate expression for C), expecting it to be accurate when the distance between the two surfaces is shorter than the curvature radius of the curved conductor. To that end, one first finds an approximation to the electric field between the conductors, by proceeding as follows: the smooth conductor is regarded as a set of parallel plates (Fig. 1), in the sense that the electric field E points along the z direction and has a z-independent value. The electric field does, however, depend on x∥, since it is assumed to have, for every x∥, the same intensity as the electric field due to two (infinite) conducting planes at a distance ψ(x∥); namely, E(x) = −[V₀/ψ(x∥)] ẑ. Therefore, the approximated expression for the electrostatic energy becomes:

U_PFA = (ϵ₀V₀²/2) ∫ d²x∥ 1/ψ(x∥).    (12)

It is implicitly understood in the equation above that the region to integrate is such that the assumption on the distance and curvature is satisfied. On the contrary, regions such that the assumption is not satisfied can be consistently ignored (see the example below).
It should be evident that Equation (12) provides a rather convenient tool to obtain estimates for the electrostatic energy in many relevant situations. Indeed, to illustrate this point, we consider a cylinder of length L and radius R in front of a plane, and denote by a the minimum distance between the two surfaces. The cylinder is not a surface that can be described by a single patch; namely, one needs at least two functions. However, in the context of the PFA, it is reasonable to assume that only the half that is closer to the plane should be relevant. Assuming the axis of the cylinder to be along y, the function ψ reads:

ψ(x) = a + R − √(R² − x²),    (13)

with the variable x assumed to be in the range −x_M < x < x_M < R. Note that for x_M/R = O(1) < 1 the assumption on the distance and the curvature is satisfied. It is to be expected that, as long as R ≫ a (where the PFA gives an accurate value of the electrostatic energy), the final result will not depend on x_M. This can be readily checked by inserting Equation (13) into Equation (12), computing the integral, and expanding the result for a ≪ R. Doing this we obtain:

U^cp_PFA = π ϵ₀ V₀² L √(R/(2a)),    (14)

which is independent of x_M. An immediate consequence of this is that, when the cylinder approaches the plane, the electrostatic force behaves as a^(−3/2). Let us now check the accuracy of U^cp_PFA. We take advantage of the knowledge of the exact expression for the electrostatic interaction energy:

U^cp = π ϵ₀ V₀² L / arccosh(1 + a/R).    (15)

For a/R ≪ 1, U^cp yields the PFA result (14). The relevance of the corrections to the PFA can be estimated by expanding the exact result, but keeping also the next-to-leading order (NTLO) when a ≪ R:

U^cp ≃ π ϵ₀ V₀² L √(R/(2a)) [1 + a/(12R)].    (16)

Figure 1. In the PFA, the interaction between a smoothly curved surface and a plane is approximated by that of a set of parallel plates. For each pair of parallel plates, border effects are ignored.
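A minimal numerical check of Equations (14)-(16), in illustrative units with ϵ₀ = V₀ = L = 1 (a sketch, not part of the original derivation): the relative deviation of the exact energy from the PFA should approach a/(12R).

    import numpy as np

    R = 1.0
    for a in [1e-1, 1e-2, 1e-3]:
        U_pfa = np.pi * np.sqrt(R / (2.0 * a))     # Equation (14)
        U_ex  = np.pi / np.arccosh(1.0 + a / R)    # Equation (15)
        print(f"a/R = {a:.0e}:  U_ex/U_pfa - 1 = {U_ex/U_pfa - 1:.3e},"
              f"  a/(12R) = {a/(12.0*R):.3e}")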
We will now introduce the DE. By construction, it should produce the NTLO result (for this and other surfaces), without resorting to the expansion of any exact expression (the knowledge of which, needless to say, is usually lacking).
B. Improvement of the PFA Using a Derivative Expansion
We begin by noting that the electrostatic energy is a functional of the function which defines the shape of the surface. A second observation is that, in principle, there is no reason to assume that the functional is local in ψ. Here, "local" means that it contains just one integral over x∥ of a sum of terms involving powers of ψ(x∥) and its derivatives at x∥. On the contrary, the exact functional will generally involve terms where, for example, there are two or more integrals over x∥, kernels depending on those variables, and products of ψ with different arguments. However, regardless of the non-locality of the exact expression, it must become local when the surfaces are sufficiently smooth and close to each other. Indeed, if the PFA becomes valid asymptotically in that limit, then the energy must approach a result which is a local functional of ψ: not just any local functional, but one without derivatives.
The way we found to depart slightly but significantly from the PFA has been to add terms involving derivatives of ψ. Namely, we shall assume that the electrostatic energy can be expanded in local terms involving derivatives of ψ. One can think of the condition |∇ψ| ≪ 1 as introducing a small, dimensionless expansion parameter. In physical terms, this means that the curved surface is almost parallel to the plane on the points where it is satisfied.
To introduce the first departure from the PFA, we include terms with up to two derivatives. Then the electrostatic energy has to be (up to this order) of the form:

U_DE = ∫ d²x∥ [V(ψ) + Z(ψ) |∇ψ|²],    (17)

for some functions V and Z. The gradient is the two-dimensional one, and it can only appear in such a way that the energy is a scalar (ψ is a scalar under changes of coordinates on the plane). Besides, recalling the equations of electrostatics, and on dimensional grounds, the result must be proportional to ϵ₀V₀². On top of that, it must reproduce U_PFA for constant ψ. Furthermore, as ψ is the only other dimensionful quantity, both functions V and Z have to be proportional to ψ⁻¹. Thus, we have restricted even further the functional to:

U_DE = (ϵ₀V₀²/2) ∫ d²x∥ (1/ψ)(1 + β_E |∇ψ|²),    (18)

where β_E is a numerical coefficient to be determined (the subindex E stands for electrostatics). It is worth stressing that it is independent of the specific surface being considered, as long as it is smooth. Therefore, it can be obtained once and for all, just from its evaluation for a particular case. A simple procedure to obtain the coefficient β_E, when an exact analytic solution to the problem is known, would be to retrieve its value by expanding that solution. Let us do that for the configuration of a cylinder in front of a plane. Inserting Equation (13) into Equation (18), performing the integrals, and expanding the result in powers of a/R allows us to fix β_E: in order to agree with the expansion of the exact result in Equation (16), its value must be β_E = 1/3. Of course, one would obtain the same value for any other particular example for which the exact solution was known.
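This value can also be verified numerically. The sketch below (illustrative units, ϵ₀ = V₀ = L = 1) integrates Equation (18) with β_E = 1/3 over the full cylinder profile of Equation (13) and compares the resulting force with the exact one obtained from Equation (15); since the x_M-dependent constants drop out of the a-derivative, the force ratio should approach unity as a/R → 0.

    import numpy as np
    from scipy.integrate import quad

    R, xM, beta_E = 1.0, 0.5, 1.0 / 3.0   # xM < R; xM-dependent constants drop out of the force

    def U_DE(a):
        # Equation (18) evaluated with the full cylinder profile, Equation (13)
        psi  = lambda x: a + R - np.sqrt(R**2 - x**2)
        dpsi = lambda x: x / np.sqrt(R**2 - x**2)
        val, _ = quad(lambda x: (1.0 + beta_E * dpsi(x)**2) / psi(x), -xM, xM, limit=200)
        return 0.5 * val

    def U_exact(a):
        return np.pi / np.arccosh(1.0 + a / R)    # Equation (15)

    def force(U, a):
        h = 1e-4 * a                              # central finite difference in a
        return -(U(a + h) - U(a - h)) / (2.0 * h)

    for a in [1e-1, 1e-2, 1e-3]:
        print(f"a/R = {a:.0e}:  f_DE/f_exact = {force(U_DE, a) / force(U_exact, a):.6f}")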
It is worth noting that, since the DE is a perturbative approach, it would be desirable to have a perturbative method to calculate the coefficient β_E; in other words, to compute it from first principles, using the appropriate expansion. One can do that, for instance, by solving perturbatively the Laplace equation and then resorting to the method described in Section V. We have performed that calculation in Ref. [22], and refer the reader to that work for details, and also for the application of the DE to other electrostatic examples.
C. Two Smooth Surfaces
As a natural generalization of the previously discussed situation, let us now consider two surfaces described by the two functions ψ₁(x∥) and ψ₂(x∥), each one of them measuring the respective height of a surface with respect to a reference plane Σ. This geometry was first considered in the context of the DE for the Casimir effect in Ref. [24].
To construct the DE for the electrostatic energy in this case, we keep up to two derivatives of the functions. This allows us to write the general expression:

U_DE = ∫ d²x∥ E(ψ) [1 + β₁|∇ψ₁|² + β₂|∇ψ₂|² + β× ∇ψ₁ · ∇ψ₂ + …],    (19)

where ψ ≡ ψ₂ − ψ₁,

E(ψ) = ϵ₀V₀²/(2ψ)    (20)

is the electrostatic energy per unit area between parallel plates, and the dots denote the remaining two-derivative structure (with coefficient β₋) as well as higher-derivative terms. Equation (19) actually contains four numerical constants: β₁, β₂, β×, and β₋. However, symmetry considerations imply some constraints on them: the energy must be invariant under the interchange of ψ₁ and ψ₂, since that is just a relabeling; hence β₁ = β₂ and β₋ = 0. Furthermore, in order to reproduce the result for a single smooth surface in front of a plane, we must have β₁ = β₂ = 1/3. The coefficient β× can be determined by taking into account that the energy should be invariant under a simultaneous rotation of both surfaces [24]. Indeed, for an infinitesimal rotation of each surface by an angle ϵ in the (x, z) plane, the changes induced on the functions ψᵢ are

δψᵢ = ϵ (x + ψᵢ ∂ₓψᵢ).    (21)

To simplify the determination of β×, we can assume that, initially, ψ₁ = 0 and that ψ₂ is only a function of x. Computing explicitly the variation of U_DE to linear order in ϵ, one can show that the variation vanishes only for a definite value of β×; for the electrostatic case, this fixes β× = β₁. Note that, by taking the variation of the electrostatic energy, Equation (19), with respect to translations or rotations of one of the surfaces, one can obtain the vertical and lateral components of the force, as well as the torque, due to the remaining surface.
The identities β₁ = β₂ and β₋ = 0 are universally valid, regardless of the interaction (as long as the surfaces are of an identical nature), whereas β× = β₁ holds true only for the electrostatic interaction. This depends upon the fact that the leading term is proportional to ψ⁻¹ (it is therefore not valid for the Casimir energy).
For later use, let us recall that, for a general function E(ψ), there is a relation between the different coefficients, given in Ref. [24] as Equation (22). The relation (22) shows that, for any interaction, the DE for the interaction energy between two curved surfaces can be reduced to the problem of a single surface in front of a plane: in the latter case one can determine β₁ and β₂, while Equation (22) then determines the remaining coefficient β_×.
To summarize: when computing the electrostatic energy associated with a configuration of two conductors at different potentials, with smoothly curved surfaces, one can go beyond the PFA by simply assuming that the energy admits an expansion in derivatives of the functions that define the shapes of the conductors. If the exact electrostatic energy for a single nontrivial curved configuration is known, one can determine all the free parameters in the expansion.
Finally, the NTLO correction produces an appreciable improvement on the DA and, by the same token, also provides an assessment of its validity. An interesting alternative approach to compute electrostatic forces beyond the PFA can be found in Ref. [39].
IV. OBTAINING THE DE FROM A PERTURBATIVE EXPANSION
Regardless of the interaction considered, the DA and its improvement, the DE, can be obtained by performing the proper resummation of a perturbative expansion [23]. The required expansion is in powers of the departure of the surfaces about a configuration of two flat, parallel planes. This connection yields a systematic and quite general approach to obtain the DE, even when an exact solution is not available.
To keep things general, we work with a general functional of the surface; that functional may correspond to an energy, a free energy, a force, etc. Besides, we do not make any assumption about the kind of interaction involved, not even about whether it satisfies a superposition principle or not.
To begin, let us assume a geometry where there are two surfaces, one of which, L, is a plane which, with a proper choice of Cartesian coordinates (x₁, x₂, x₃), is described by x₃ = 0. The other one, R, is assumed to be describable by x₃ = ψ(x∥).
The object for which we implement the approximation is denoted by F[ψ], a functional of ψ. We then note that the PFA for F, to be denoted here by F₀, is obtained as follows: add, for each x∥, the product of a local surface density F₀(ψ(x∥)), depending only on the value of ψ at the point x∥, times the surface area element; namely,

F₀[ψ] = ∫ d²x∥ F₀(ψ(x∥)).   (23)

The surface density is, in turn, determined by the (assumed) knowledge of the exact form of F for the case of two parallel surfaces,

F₀(a) = F[ψ ≡ a]/S,   (24)

where S denotes the area of the L plate and a is a constant. Namely, to determine the density one needs to know the functional F just for constant functions ψ ≡ a. Note that, if the functional F is the interaction energy between the surfaces, the density F₀ becomes the interaction energy per unit area E, and F₀[ψ] becomes U_PFA (see Equation (3)).
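For instance, assuming the standard Dirichlet parallel-plate density F₀(u) = −π²/(1440 u³) (in units ℏ = c = 1), the PFA functional of Equations (23)-(24) can be evaluated numerically for a sphere of radius R at distance a from a plane; the result approaches the familiar small-separation closed form −π³R/(1440 a²) as a/R → 0. This is a sketch of ours, not taken from the text:

import numpy as np
from scipy.integrate import quad

def F0(u):
    # Assumed parallel-plate density for a Dirichlet scalar (hbar = c = 1)
    return -np.pi**2 / (1440.0 * u**3)

def U_PFA(a, R):
    # psi(rho) = a + R - sqrt(R^2 - rho^2); sum local densities over the patch
    integrand = lambda rho: 2*np.pi*rho * F0(a + R - np.sqrt(R**2 - rho**2))
    return quad(integrand, 0.0, R)[0]

a, R = 0.01, 1.0
print(U_PFA(a, R), -np.pi**3 * R / (1440 * a**2))  # numeric vs leading closed form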
Let us now show how to derive the PFA (and its corrections) by the resummation of a perturbative expansion. To that end, we evaluate F for a ψ of the form ψ(x∥) = a + η(x∥), and write the resulting perturbative expansion in powers of η, which has the general form

F[a + η] = Σ_{n≥0} (1/n!) ∫ d²k₁ ⋯ d²kₙ δ(k₁ + ⋯ + kₙ) h⁽ⁿ⁾(k₁, …, kₙ) η̃(k₁) ⋯ η̃(kₙ),   (26)

where δ(·) is the Dirac delta function, η̃ denotes the Fourier transform of η, and the form factors h⁽ⁿ⁾ can be computed using perturbative techniques. For the Dirichlet-Casimir effect, this can be done in a rather systematic way [40]. Although the approach followed to obtain those form factors may depend strongly on the kind of system considered, the form of the expansion is always the same. Note that the form factors may also depend on a although, in order to simplify the notation, we do not make that dependence explicit. Up to now, we have not used the hypothesis of smoothness of the R surface. We do that now, by assuming that the Fourier transform η̃ is peaked at zero momentum, and then using this assumption in all terms of the expansion: in Equation (26) we set h⁽ⁿ⁾(k⁽¹⁾, …, k⁽ⁿ⁾) ≃ h⁽ⁿ⁾(0, …, 0), so that each term reduces to a local integral proportional to ∫ d²x∥ ηⁿ(x∥). One could evaluate the form factors at zero momentum straightforwardly; however, there is a shortcut that yields all of them at once: consider a constant η(x∥) = η₀, for which ∫ d²x∥ ηⁿ(x∥) → S η₀ⁿ. For this particular case, F becomes just the functional corresponding to parallel plates separated by a distance a + η₀. We then conclude that, in this low-momentum approximation, the series can be summed up with the result

F[a + η] ≃ ∫ d²x∥ F₀(a + η(x∥)) = ∫ d²x∥ F₀(ψ(x∥)),   (29)

which is just the PFA.
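The resummation step can be checked symbolically in a toy case (ours, not from the text): for constant η₀, the zero-momentum form factors are just the Taylor coefficients of the parallel-plate density, here assumed to scale as F₀(a) = 1/a³, and their sum reproduces the shifted density F₀(a + η₀):

import sympy as sp

a, eta0 = sp.symbols('a eta_0', positive=True)
E = 1/a**3   # assumed plate-plate density (Dirichlet-like scaling)

# Zero-momentum form factors are Taylor coefficients of E; summing shifts a.
resummed = sum(sp.diff(E, a, n)/sp.factorial(n) * eta0**n for n in range(12))
exact = E.subs(a, a + eta0)
# The truncated series agrees with the shifted density up to the cutoff order:
print(sp.series(exact - resummed, eta0, 0, 12))   # -> O(eta_0**12)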
The calculation just above shows that, for the class of geometries considered in this paper, the PFA can be justified from first principles as the result of a resummation of a perturbative calculation corresponding to almost flat surfaces. In order to be well defined, the PFA requires that the form factors h⁽ⁿ⁾(k⁽¹⁾, …, k⁽ⁿ⁾) have a finite limit as k⁽ⁱ⁾ → 0.
This procedure also suggests how the PFA can be improved: one includes the NTLO terms in the low-momentum expansions of the form factors. We assume that they can be expanded in powers of the momenta up to second order. We stress that this is by no means a trivial assumption: depending on the interaction considered, the form factors could include nonanalyticities (we discuss some explicit examples below). In the absence of nonanalyticities, one can introduce the expansions

h⁽ⁿ⁾(k⁽¹⁾, …, k⁽ⁿ⁾) ≃ h⁽ⁿ⁾(0, …, 0) + Σ_{i,α} A⁽ⁿ⁾_{iα} k⁽ⁱ⁾_α + Σ_{i,j,α,β} B⁽ⁿ⁾_{ijαβ} k⁽ⁱ⁾_α k⁽ʲ⁾_β,   (30)

for some a-dependent coefficients A⁽ⁿ⁾_{iα} and B⁽ⁿ⁾_{ijαβ}. Here, i, j = 1, …, n label the arguments, while α, β = 1, 2 label their components. Symmetry considerations are crucial, since they allow us to simplify the expression (30), as follows: rotational invariance implies that the form factors depend only on the scalar products k⁽ⁱ⁾ · k⁽ʲ⁾; additionally, they have to be symmetric under the interchange of any two momenta. This leads to

h⁽ⁿ⁾(k⁽¹⁾, …, k⁽ⁿ⁾) ≃ h⁽ⁿ⁾(0, …, 0) + B⁽ⁿ⁾ Σ_i |k⁽ⁱ⁾|² + C⁽ⁿ⁾ Σ_{i≠j} k⁽ⁱ⁾ · k⁽ʲ⁾,   (31)

for some coefficients B⁽ⁿ⁾ and C⁽ⁿ⁾. Inserting Equation (31) into Equation (26) and integrating by parts, one finds the form of the first correction to the PFA,

F₂[η] = Σ_{n≥2} D⁽ⁿ⁾ ∫ d²x∥ η^{n−2}(x∥) |∇η(x∥)|²,   (32)

where the coefficients D⁽ⁿ⁾ are linear combinations of B⁽ⁿ⁾ and C⁽ⁿ⁾, and the subindex 2 in F indicates that this is the part of the functional containing two derivatives. We complete the calculation by summing the series in Equation (32). To that end, we evaluate the correction F₂ for a particular case, η(x∥) = η₀ + ϵ(x∥) with ϵ ≪ η₀, and expand up to second order in ϵ. The resummation can then be obtained by considering the usual perturbative evaluation of the interaction energy up to second order in ϵ. This evaluation does, naturally, depend on the interaction considered but, once one has that result, one can obtain the sum of the series above; we denote that sum by Z (Equation (34)). Upon the replacement η₀ → η in Equation (34), one obtains the NTLO correction to the PFA. This concludes our systematic derivation of the PFA, including its first correction, a result which may be put as follows:

F[ψ] ≈ ∫ d²x∥ [ V(ψ) + Z(ψ) |∇ψ|² ],   (36)

where V(ψ) = F₀(ψ) is determined by the (known) expression for the interaction energy between parallel surfaces, while Z(ψ) can be computed using a perturbative technique; in practice, Z(ψ) can be evaluated setting η₀ = 0 in Equation (34). Higher orders may be derived by extending the procedure just described. It should be evident that, for the expansion to be well defined, the analytic structure of the form factors is quite relevant: the existence of nonanalytic zero-momentum contributions can render the DE inapplicable. This should be expected on physical grounds, since the presence of nonanalytic terms implies that the functional cannot be approximated, in coordinate space, by the single integral of a local density; physically, it is a signal that the nonlocal aspects of the interaction cannot be ignored. That should not come as a surprise when one recalls that the same kind of phenomenon occurs when evaluating the effective action in quantum field theory whenever the quantum effects contain contributions from virtual massless particles: in that case, the effective action may develop nonanalyticities at zero momentum.
The main messages of this Section are the following: irrespective of the nature of the interaction, the energy and forces between objects are functionals of their shapes. The PFA is recovered when the form factors of those functionals are evaluated at zero momentum. Enhancements to this approximation are achievable by expanding these form factors at low momenta. If the expansion is analytic, a resummation of the form factors produces the DE.
V. DE FOR THE ZERO-TEMPERATURE CASIMIR EFFECT
The application of the DE to the Casimir interaction energy between two objects was, actually, our original motivation to introduce the approximation, and it is useful to briefly review some aspects of this application here. We consider first a real vacuum scalar field satisfying Dirichlet boundary conditions (Section V A) and then we move to the EM field with perfect-conductor boundary conditions (Section V B). We follow Ref. [21] for the derivation of the DE in the Dirichlet case.
A. Scalar Field with Dirichlet Boundary Conditions
We consider here a massless real scalar field φ in 3 + 1 dimensions, coupled to two mirrors which impose Dirichlet boundary conditions. In our Euclidean conventions, we use x₀, x₁, x₂, x₃ to denote the spacetime coordinates, x₀ being the imaginary time. As before, the mirrors occupy two surfaces, denoted by L and R, defined by the equations x₃ = 0 and x₃ = ψ(x∥), respectively.
On dimensional grounds alone, and using natural units (ℏ ≡ c ≡ 1), the DE approximation to the interaction energy has to be of the form

E_DE = −(π²/1440) ∫ d²x∥ ψ⁻³ [ α_D + β_D |∇ψ|² ],   (37)

where α_D and β_D are dimensionless coefficients that do not depend on the geometry; the subindex D stands for Dirichlet. An evaluation of the above expression for parallel plates fixes α_D ≡ 1. As in the electrostatic case, the coefficient β_D could be computed from explicit examples where the interaction energy is known exactly. Let us recall, from Section IV, that the interaction energy can also be computed from an expansion of the Casimir energy in powers of η, for ψ = a + η, where a (assumed to be greater than zero) is the spatial average of ψ, whereas η contains its varying piece. The expansion needed is of second order in η, with up to two spatial derivatives.
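As a reminder of how parallel-plate normalizations such as α_D arise, here is a toy 1 + 1-dimensional analogue (ours, far simpler than the 3 + 1-dimensional computation that follows): the zeta-regularized Dirichlet mode sum E(a) = (1/2) Σ_n nπ/a:

import sympy as sp

a = sp.symbols('a', positive=True)
# (1/2) * sum_n (n*pi/a), regularized via zeta(-1) = -1/12:
E_reg = sp.Rational(1, 2) * (sp.pi / a) * sp.zeta(-1)
print(E_reg)   # -> -pi/(24*a), the 1+1-dimensional Dirichlet Casimir energy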
To obtain such an expansion, we start from a rather general yet formal expression for the energy (for earlier perturbative computations of the Casimir force see, for example, Refs. [41,42]). That formal expression follows from the functional approach to the Casimir effect, where we deal with Z, the zero-temperature limit of a partition function. That partition function, for a scalar field in the presence of two Dirichlet mirrors, is given by

Z = ∫ Dφ δ_L(φ) δ_R(φ) e^{−S(φ)},

with S denoting the free (Euclidean) action of the real scalar field, while δ_L and δ_R impose Dirichlet boundary conditions on the L and R surfaces, respectively. The vacuum energy E is then obtained as

E = −lim_{T→∞} (1/T) log Z,

where T is the extent of the time dimension (the analogue of β in a thermal partition-function setting). We discard from E the terms that do not contribute to the Casimir interaction energy between the two surfaces. These terms appear as factors in Z; among them, the one describing the zero-point energy of the field in the absence of the plates, and also the 'self-energy' contributions, due to the vacuum distortion produced by each mirror even when the other is infinitely far apart.
Exponentiating the two Dirac delta functionals by introducing two auxiliary fields, λ_L and λ_R, defined on the respective surfaces and depending on x∥ ≡ (x₀, x₁, x₂), we obtain for Z an equivalent expression as a Gaussian functional integral over φ, λ_L and λ_R. The factor depending on the determinant of the induced metric on R, g_R(x∥) ≡ 1 + |∇ψ(x∥)|², makes the expression reparametrization invariant. However, by a redefinition of the auxiliary field λ_R one gets rid of that factor, at the expense of generating a Jacobian. That Jacobian does not depend on the distance between the two surfaces, since it involves only derivatives of ψ; therefore it does not contribute to the Casimir interaction energy, and we shall subsequently ignore it, as well as other such factors that appear in the course of the calculations.
Integrating out φ, we see that Z₀, corresponding to the field φ in the absence of boundary conditions, factors out, while the rest becomes a Gaussian integral over the auxiliary fields,

Z/Z₀ = ∫ Dλ_L Dλ_R exp[ −(1/2) ∫ λ_α T_{αβ} λ_β ],  α, β = L, R,

where we have introduced the objects

T_{αβ}(x∥, x′∥) = ⟨x∥, α | (−∂²)⁻¹ | x′∥, β⟩,

using a "bra-ket" notation to denote matrix elements of operators, ∂² being the four-dimensional Laplacian. Thus, for example, T_{LL}(x∥, x′∥) = ⟨x∥, 0 | (−∂²)⁻¹ | x′∥, 0⟩. A subtraction of the zero-point contribution contained in Z₀ leads to E = lim_{T→∞} (1/(2T)) Tr log T, which still contains self-energies. Up to now, we have obtained a formal expression for the vacuum energy; let us now proceed to evaluate its DE. We need to expand E to second order in η, keeping up to the second-order term in an expansion in derivatives. It is convenient to do so first for Γ ≡ (1/2) Tr log T; namely,

Γ(a, η) = Γ⁽⁰⁾(a) + Γ⁽¹⁾(a, η) + Γ⁽²⁾(a, η) + …,   (51)

where the upper index denotes the order in derivatives. Each term will be a certain coefficient times the spatial integral over x∥ of a local term, depending on a and on derivatives of η. Additionally, because the configuration is time independent, each term is proportional to T (a factor that eventually cancels out). Expanding the matrix T in powers of η, we obtain Γ = Γ⁽⁰⁾ + Γ⁽¹⁾ + Γ⁽²⁾ + …, with Γ⁽⁰⁾ = (1/2) Tr log T⁽⁰⁾ and Γ⁽¹⁾ = (1/2) Tr[(T⁽⁰⁾)⁻¹ T⁽¹⁾], where, in Γ⁽ˡ⁾, we need to keep up to l derivatives of η.
Then, the zeroth-order term is obtained as follows: replace ψ by a constant, a, and then subtract from the result its a → ∞ limit (this gets rid of the self-energies). This leads to

Γ⁽⁰⁾ = (1/2) Tr log[ 1 − (T⁽⁰⁾_{LL})⁻¹ T⁽⁰⁾_{LR} (T⁽⁰⁾_{RR})⁻¹ T⁽⁰⁾_{RL} ].   (54)
Here, the T⁽⁰⁾_{αβ} are identical to the ones for two flat parallel mirrors separated by a distance a.
Taking the trace leads to the familiar result for the interaction energy per unit area of two parallel Dirichlet mirrors, E⁽⁰⁾/S = −π²/(1440 a³) (Equation (55)). Then, recalling the general derivation, the replacement a → ψ leads to

E_PFA = −(π²/1440) ∫ d²x∥ ψ⁻³(x∥),   (56)

which is the PFA expression for the vacuum energy.
To improve on the previous result, we consider its first nontrivial correction. There can be no first-order term, because of symmetry considerations, while two terms contribute to the second order,

Γ⁽²⁾ = Γ⁽²,¹⁾ + Γ⁽²,²⁾,  Γ⁽²,¹⁾ = (1/2) Tr[(T⁽⁰⁾)⁻¹ T⁽²⁾],  Γ⁽²,²⁾ = −(1/4) Tr[(T⁽⁰⁾)⁻¹ T⁽¹⁾ (T⁽⁰⁾)⁻¹ T⁽¹⁾].   (59)

In the terms above, we have to keep just up to two derivatives of η. We see that, in Fourier space, and before implementing any expansion in momentum (derivatives), they have the structure

Γ⁽²,ʲ⁾ ∝ ∫ d²k∥ f⁽²,ʲ⁾(k∥) |η̃(k∥)|²  (j = 1, 2),

with η̃ denoting the Fourier transform of η, and with the f⁽²,ʲ⁾ kernels denoting the k₀ → 0 (i.e., static) limits of more general expressions. Subtracting all the a-independent contributions, one finds the kernel f⁽²⁾ that controls the second-order contribution (Equations (61) and (62)).
The low-momentum behaviour of f⁽²⁾ determines whether the DE can be applied or not. In this case, the function is analytic, and therefore a local expansion of the vacuum energy exists. We need to extract the k² term of its Taylor expansion at zero momentum, namely f⁽²⁾(k∥) ≃ χ |k∥|². The coefficient χ determines the NTLO term in the DE,

E⁽²⁾ = −(π²/1440) β_D ∫ d²x∥ ψ⁻³ ∂_αψ ∂_αψ,

where the index α runs from 1 to 2.
Putting together the terms up to second order,

E_DE = −(π²/1440) ∫ d²x∥ ψ⁻³ [ 1 + (2/3) ∂_αψ ∂_αψ ].   (66)

The leading-order term above is the Casimir energy according to the PFA, while the second-order one represents the first significant deviation from it. We note that the structure of both terms had been anticipated by dimensional analysis and symmetry considerations; the overall normalization, on the other hand, had been fixed by our previous knowledge of the (well-established) result for parallel plates.
We would like to insist on the fact that the relative weight between the PFA and its correction term, the factor β_D = 2/3, is independent of the surface geometry. This value of β_D has been independently corroborated in concrete examples by expanding exact Casimir energy expressions. Interesting cases among them are, for example, either a sphere or a cylinder positioned in front of a plane.
We conclude this Section with an application of the DE to the particular geometry of a sphere in front of a plane. Expressing the function ψ of Equation (13) in polar coordinates, ψ(ρ) = d + R − √(R² − ρ²), with R the radius of the sphere and d the distance to the plane, the function ψ(ρ) describes a hemisphere for 0 ≤ ρ ≤ R. Inserting this expression for ψ into the DE for the Casimir energy, it becomes possible to calculate the integrals explicitly, obtaining a rather compact analytical expression, Equation (67), in terms of the ratio of the distance to the sphere radius.
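The NTLO content of that expression can be retrieved with a short symbolic computation (ours, not the closed form (67) of the text; here a denotes the sphere-plane distance d above): changing variables to u = ψ turns both DE integrals into elementary ones, and the expansion below reproduces the relative correction (1/3)(a/R) known for Dirichlet boundary conditions:

import sympy as sp

a, R, u = sp.symbols('a R u', positive=True)

# PFA piece: with u = psi(rho) = a + R - sqrt(R^2 - rho^2), one has
# rho d rho = (a + R - u) du, and rho covers the hemisphere 0 <= rho <= R.
I0 = 2*sp.pi*sp.integrate((a + R - u)/u**3, (u, a, a + R))

# Gradient piece, dominated by the near-tip region (rho << R), where
# (grad psi)^2 ~ 2(u - a)/R and rho d rho ~ R du (the factors of R cancel):
I1 = 2*sp.pi*sp.Rational(2, 3)*sp.integrate(2*(u - a)/u**3, (u, a, sp.oo))

# Ratio of E_DE = -(pi^2/1440)(I0 + I1) to the leading PFA term -pi^3 R/(1440 a^2):
E_over_PFA = sp.simplify((I0 + I1)*a**2/(sp.pi*R))
print(sp.series(E_over_PFA, a, 0, 2))   # -> 1 + a/(3*R) + ...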
B. The EM Case
The results for the scalar field satisfying Dirichlet boundary conditions, described in Section V A above, have been generalized to different boundary conditions and fields. Results for the EM field and two curved surfaces have been presented in Ref. [24]. Note that, as pointed out at the end of Section III C, symmetry considerations allow the two-surface problem to be reduced to that of a curved surface facing a plane, namely, the geometry we have just dealt with in the Dirichlet case above. Indeed, as shown in Ref. [24], the extension of Ref. [21] to two curved surfaces is constrained, among other things, by the tilt invariance of the reference plane onto which the two surfaces can be projected. This served as a rigorous test of the self-consistency of the perturbative results.
Venturing beyond the scalar Dirichlet (D) case of Ref. [21], they calculated the DE for Neumann (N), mixed D/N, and electromagnetic (EM) (perfect-metal) boundary conditions. Interestingly, they observed that the EM correction must coincide with the sum of the D and N corrections. They also reproduced previous findings for cylinders under D, N, and mixed D/N conditions, as well as for the sphere with D boundary conditions. However, their calculations did not confirm previous results for the sphere-plane geometry with either N or EM boundary conditions: the results for β were found to disagree with those obtained in Refs. [43-45]. This discrepancy was later resolved in Ref. [46] in favour of the results of [24].
Another interesting concrete example presented in [24] is the DE for two spheres of radii R₁ and R₂, both imposing the same boundary conditions. It was found there that the interaction energy takes the form of Equation (68), where a is chosen to be the distance of closest separation and β is a number that depends on the type of boundary condition, as can be seen from Table I; α = α_EM = 2 in the case of EM boundary conditions. The corresponding formula for the sphere-plane case can be obtained by taking one of the two radii to infinity (in fact, it coincides with the D case in Equation (67) when α = α_D = 1 and β = β_D = 2/3).
A rather different example corresponds to two circular cylinders (with identical boundary conditions) whose axes are inclined at a relative angle θ. Using the DE, the interaction Casimir energy can again be obtained in closed form, Equation (69). For this particular geometry, the interaction energy has been computed numerically in [47]; the numerical results reproduce Equation (69) at short distances.
The results obtained for the β coefficients in each case are summarized in Table I.
Having presented in this Section a derivation, and some interesting results obtained by applying the DE to the Casimir effect at zero temperature with perfect boundary conditions, we present in the rest of the review some generalizations and applications.
VI. FINITE TEMPERATURE, NONANALYTICITIES, AND DE
The DE can be extended to the finite-temperature case [25,26,28], the free energy being the relevant functional to approximate.

Table I. β coefficient from Equation (68) for the following five cases: a scalar field obeying Dirichlet (D) or Neumann (N) boundary conditions on both surfaces, D boundary conditions on one surface and N on the other (or vice versa), and the electromagnetic (EM) field with ideal-metal boundary conditions [24].

There are at least two reasons why this extension is not trivial. First, the temperature introduces a dimensionful magnitude, and this reflects itself in the form of the DE (part of which was fixed by dimensional analysis). Second, a known phenomenon in quantum field theory at finite temperature is the so-called "dimensional reduction", by which a bosonic model defined in d + 1 dimensions at zero temperature becomes effectively d-dimensional at high temperatures. The DE should therefore manifest (and interpolate between) those two regimes.
We first describe, in Section VI A, the results for a scalar field satisfying Dirichlet conditions [26] in d + 1 dimensions. Then, Section VI B discusses the appearance of nonanalyticities for Neumann boundary conditions [26,27]. Finally, we comment on the results for the EM field with imperfect boundary conditions [25,28] (Section VI C), and on a semianalytic formula for the plane-sphere geometry (Section VI D).
A. Dirichlet Boundary Conditions
In the finite-temperature case, and for the same geometry that we considered at zero temperature, the functions V(ψ) and Z(ψ) cannot be completely determined from dimensional analysis alone. Indeed, on general grounds, we can assert that the Casimir free energy in d + 1 dimensions, if the DE is applicable, must have the form

F_DE = ∫ d^{d−1}x∥ ψ^{−d} [ b₀(ψ/β) + b₂(ψ/β) |∇ψ|² ],   (70)

where b₀ and b₂ are dimensionless and depend on the ratio of the local distance between the surfaces, ψ, to the inverse temperature β. They are given in Ref. [26] as Matsubara sums; in the zero-temperature limit, the Matsubara sum becomes an integral that can be computed analytically.
The results are described in Table II; the ratio b₂/b₀ tends to 1 for large values of d.

Table II. Values of the ratio b₂(d)/b₀(d) for different dimensions; the ratio tends to 1 for d → ∞.
In the high-temperature limit, we find that the coefficients reduce to b₀(d − 1) and b₂(d − 1) evaluated at ξ = ψ/β; that is, they agree with those for perfect mirrors at zero temperature, but in d − 1 dimensions: the "dimensional reduction" effect. An interesting result is found when this is applied to the (Dirichlet) Casimir interaction for a system consisting of a sphere in front of an infinite plane. Denoting by a the distance between the surfaces and by R the radius of the sphere, the free energy at high temperatures, Equation (74), scales as R/(aβ): the R/a² behavior of the dominant contribution at zero temperature changes to R/(aβ) in the high-temperature case. This could be expected on dimensional grounds, if one assumes that the free energy is linear in the temperature in this limit. Note that the same problem has been solved exactly in Ref. [48], and one can show that Equation (74) agrees with the small-distance expansion obtained from the exact solution.
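A quick way to see the R/(aβ) scaling is the following sketch (ours, assuming the standard high-temperature Dirichlet plate-plate free energy per unit area, F_pp(u) = −ζ(3) k_B T/(16π u²), and the usual PFA reduction 2πR ∫_a^∞ du, valid for a ≪ R):

import sympy as sp

a, R, u, kT = sp.symbols('a R u kT', positive=True)   # kT stands for k_B*T
F_pp = -sp.zeta(3) * kT / (16 * sp.pi * u**2)         # assumed high-T plate density
F_PFA = 2 * sp.pi * R * sp.integrate(F_pp, (u, a, sp.oo))
print(sp.simplify(F_PFA))   # -> -zeta(3)*kT*R/(8*a), i.e. proportional to R/(a*beta)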
It is worth remarking that the NTLO correction from the DE becomes nonanalytic in the ratio a/R, as a result of the integration. This behavior has been observed in numerical calculations of the Casimir interaction energy for this geometry, in the infinite-temperature limit, for the electromagnetic case (see Refs. [48,49]). It is important to recognize that this nonanalyticity has nothing to do with the nonanalyticity in the momenta of the form factors described in Section IV; it is a nontrivial prediction of the DE.
B. Neumann Boundary Conditions
This case, discussed in Ref. [27], highlights a caveat on the applicability of the DE, already mentioned previously: the appearance of nonanalyticities in the form factors. To begin with, we deal with the zero-temperature case in 2 + 1 dimensions, since the nonanalyticity appears because of the existence of a Matsubara mode which behaves as a massless field in 2 + 1 dimensions with Neumann boundary conditions.
The free Euclidean action for the vacuum (i.e., T = 0) field φ is the standard kinetic term, S₀(φ) = (1/2) ∫ d³x ∂_μφ ∂_μφ, and, instead of imposing perfect Neumann boundary conditions on the surfaces, we add an interaction term between the vacuum field and the mirrors, quadratic in the normal derivative of φ on each surface. The constant μ, which has the dimensions of a mass, is used to impose Neumann boundary conditions in the μ → 0 limit. We use the same μ on both the L and R mirrors, since we assume them to have identical properties, differing just in their position and geometry.
The DE approximation to the Casimir energy can be computed following standard steps; the result, in the limit μψ → 0, is given in Ref. [27] (their Equation (77)). In that expression, the first term is the PFA contribution, while the second one is a nontrivial correction to it, depending on the shape of the boundary (defined by ψ). It is then clear that the DE is well posed when imposing imperfect Neumann boundary conditions in 2 + 1 dimensions. On the contrary, it cannot be applied when the boundary conditions become perfect (μ = 0): the hypothesis of analyticity in momentum, used to derive the DE, is then violated. The non-existence of a local expansion is due to the existence of massless modes, allowed by Neumann boundary conditions. At finite temperature, a 3 + 1 dimensional theory may be decomposed into an infinite tower of decoupled 2 + 1 dimensional Matsubara modes, each one satisfying N boundary conditions and with mass 2nπ/β, n = 0, 1, 2, …. The existence of the massless n = 0 mode (the only one surviving in the high-temperature limit) means that analyticity is lost in 3 + 1 dimensions for any nonzero temperature. That is indeed the case [26]. We summarize here some of the main features of that example: the free energy in the d + 1 dimensional Neumann case can again be written as in Equation (70), but with coefficients c₀ and c₂ instead of b₀ and b₂. The zeroth-order term coincides with the one for the Dirichlet case; namely, c₀ = b₀.
When d = 3, the NTLO term contains, besides a local term, a nonlocal contribution which is linear in T, and thus present for any T > 0. Hence, there is no local DE for perfect Neumann boundary conditions for d = 2 at zero temperature, nor for d = 3 at any finite temperature. Indeed, an expansion for small |k∥| of the form factor contains, in addition to a term proportional to k∥², one proportional to (Ta) k∥² log(k∥²a²).
C. The Electromagnetic Case for Imperfect Boundary Conditions
We have seen that, for a real scalar field in the presence of Neumann boundary conditions, the DE cannot be applied in 2 + 1 dimensions at zero temperature, or in 3 + 1 dimensions at non-zero temperature [26]. The reason is that, as we have shown, nonanalyticities appear in the form factors. We have also shown that the nonanalyticity can be cured by introducing a small departure from perfect Neumann conditions [27]. It is natural to wonder whether the nonanalyticities could also be cured by a similar approach for the EM field in 3 + 1 dimensions at finite temperature. We know, based on the insight obtained from Ref. [27], that nonanalyticities originate in contributions from dimensionally reduced massless modes: the zero Matsubara-frequency terms. To answer this question, in Ref. [28] we singled out in detail the zero-mode contributions to the free energy, for media described by nontrivial permittivity ε(ω) and permeability μ(ω) functions.
We start from the free energy F for the EM field, which can be written in terms of the partition function Z(ψ) as F = −(1/β) log[Z(ψ)/Z₀], where the denominator Z₀ denotes the partition function for the EM field in the absence of media, and Z(ψ) is a functional integral over the gauge field weighted by the gauge-invariant action S_inv(A), containing the permittivity and permeability kernels. Here, indices like i, j, … run over spatial indices, the Einstein summation convention is assumed, and ε(τ − τ′, x) and μ(τ − τ′, x) denote the imaginary-time versions of the permittivity and permeability, respectively (μ⁻¹ is the inverse integral kernel of μ).
The geometry of the system is determined by the same two surfaces L and R considered before, defined by x₃ = 0 and x₃ = ψ(x∥), but these now correspond to the boundaries of the media, with ε_{L,R}(τ − τ′) and μ_{L,R}(τ − τ′) characterizing the permittivity and permeability of the respective mirror.
We can expand the fields and the electromagnetic properties in Matsubara series, e.g., A_μ(τ, x) = Σ_n e^{iω_n τ} A_μ⁽ⁿ⁾(x), where ω_n ≡ 2πn/β (n ∈ ℤ) are the Matsubara frequencies.
Inserting these expansions into the partition function, one can readily check that it factorizes, Z = Π_n Z_n, and therefore F = Σ_n F_n. As mentioned, we are particularly interested in the n = 0 contribution, which is controlled by the static responses of the media; a convenient way to characterize them is through Ω₀, defined by Ω₀² ≡ lim_{ω→0} ω² ε(ω). Note that Ω₀ vanishes for a dielectric, and also for a metal described by the Drude model; on the other hand, it equals the plasma frequency for a metal described by the plasma model. The zero-mode contribution to the free energy therefore splits into a scalar (s) and a vector (v) contribution, the former associated with the field A₀⁽⁰⁾ and the latter with the spatial components A_i⁽⁰⁾. To discuss the emergence of nonanalyticities in the derivative expansion, we computed F_s and F_v assuming ψ(x∥) = a + η(x∥), up to second order in η. The quadratic contributions can be written as

F⁽²⁾_{s,v} = (1/2) ∫ d²k∥ f_{s,v}(k∥) |η̃(k∥)|²,

and the crucial point is whether the functions f_{s,v} are analytic in k∥ or not.
Omitting the details, we summarize the main results [28]: for finite values of μ and ε, the scalar contribution f⁽²⁾_s is analytic, including in the limit ε → ∞, in which it tends to the 2 + 1 dimensional Dirichlet value. It develops a nonanalytic (logarithmic) contribution for μ = ∞, since the kernel corresponds in this case to that of a scalar field in 2 + 1 dimensions satisfying Neumann boundary conditions. In other words, magnetic materials regulate the nonanalyticity of the TE zero mode.
On the other hand, the TM zero mode is nonanalytic whenever ω²ε(ω) → Ω² ≠ 0 as ω → 0 for both mirrors. In terms of the models usually considered in the Casimir literature to describe real materials, this condition corresponds to the plasma model.
In summary, the nonanalyticities we observed for perfect conductors in our previous work [26] survive only under the assumption of perfectly lossless materials. The NTLO corrections to the PFA for metals (gold) at room temperature have been computed in Ref. [25].
D. A Semianalytic Formula for Plane-Sphere Geometry
As a final application of the DE to the computation of the Casimir free energy, we mention the results of Ref. [50], where the author combined exact calculations for the zero mode with the DE to obtain a precise formula for the interaction between a sphere and a plane at finite temperature, valid at all separations. We briefly describe these findings here.
Formally, the free energy for this geometry can be written as

F = k_B T Σ′_{n≥0} Tr log[ 1 − M(ξ_n) ],

where the sum runs over the Matsubara frequencies ξ_n = 2πn k_B T/ℏ and M denotes the round-trip scattering operator for this geometry. The prime on the sum indicates that the n = 0 term carries an additional factor of 1/2.
The n = 0 contribution can be computed exactly using the Drude model to describe the materials of the plane and the sphere, and it plays a crucial role. Indeed, the proposed approximation for the Casimir force on a sphere of radius R at a distance a from the plane combines this exact zero-mode term with a PFA-type contribution corrected by a coefficient θ, which can be computed using the DE. Notably, F_approx describes the Casimir force with high precision at all separations, as can be checked by comparison with high-precision numerical simulations of the exact scattering formula.
These results have been generalized in subsequent studies to the two-sphere geometry [51], also considering the differences arising from the use of the Drude vs. plasma models, as well as grounded vs. isolated spheres [52]. The relevance of using grounded conductors in Casimir experiments has also been discussed in Ref. [53].
VII. CASIMIR-POLDER FORCES
The DE approach has also been applied to the calculation of the Casimir-Polder interaction between a polarizable particle and a gently curved surface [29]. We present in this Section a simplified version of the results contained in that reference.
When a small polarizable particle is at a distance a from a planar surface, the Casimir-Polder potential is given by a frequency integral of the (assumed isotropic) dynamical polarizability α(ω), weighted by a function of ω/ω_c, with ω_c = c/a [4]. For moderate distances, such that α(ω) ≈ α(0), one obtains the usual static Casimir-Polder potential [54],

U(a) = −(3/(8π)) α(0)/a⁴

(in units ℏ = c = 1). Assume now that the particle is in front of a slightly curved surface. The particle is at the origin of coordinates, and the surface is described, as usual, by the height function z = ψ(x∥). The DE for the Casimir-Polder interaction U_DE assumes that the interaction depends on the derivatives of the height function ψ evaluated at x∥ = 0, the point on the surface closest to the particle (a local minimum of ψ). If the surface is homogeneous and isotropic, then the interaction energy must be invariant under rotations of the x∥ coordinates. The most general expression compatible with these properties, describing the Casimir-Polder interaction energy at T = 0, is given in Ref. [29] (their Equation (94)); it involves a dimensionless function β⁽¹⁾, which can be read from the perturbative expansion of the potential U, carried to second order in the deformation, that is, for ψ(x∥) = a + η(x∥) with η(x∥) ≪ a. We stress that here the Casimir-Polder energy is not a functional, but a function of ψ and its derivatives evaluated at the origin of coordinates (recall that ∇ψ(0) = 0). The DE is expected to be valid when a ≪ R₁, R₂, the radii of curvature of the surface at x∥ = 0 (note that ψ(0) = d). Using again the static polarizability approximation, α(ω) ≈ α(0), one obtains the corresponding static limit of the curved-surface result. The results presented in Ref. [29] are much more general than those described here: they include the Casimir-Polder potential for a general polarization tensor α_{μν}(ω) and higher-order corrections proportional to (a/R_i)², as well as the details of the computation of the corresponding functions β⁽ᵖ⁾. Additional applications can be found in [55,56].
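For orientation, restoring ℏ and c the static potential reads U(a) = −3ℏc α(0)/(8π a⁴), with α(0) a polarizability volume; the numbers in this sketch (ours) are hypothetical placeholders chosen only to show the order of magnitude:

import numpy as np

hbar, c = 1.054571817e-34, 2.99792458e8   # SI values
alpha0 = 3.0e-29    # hypothetical static polarizability volume, m^3
a = 1.0e-6          # particle-surface distance, m

U = -3 * hbar * c * alpha0 / (8 * np.pi * a**4)
print(U, "J")       # order of magnitude of the static Casimir-Polder energy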
VIII. OTHER TECHNIQUES BEYOND PFA
In Ref. [57], a detailed analysis of the roughness correction to the Casimir effect between parallel metallic plates is presented. The plates were described by the plasma model. The approach used is perturbative in the roughness amplitude, allowing for arbitrary values of the plasma wavelength, plate separation, and roughness correlation length. A notable finding was that the roughness correction exceeds the predictions of the PFA. The authors calculated the second-order response function, G(k), across a spectrum of values of the plasma wavelength λ_P, the distance a, and the roughness wave vector k, with a closed-form expression applicable when λ_P → 0. Here, A represents the plate surface area, K the dimensionless integration variable denoting the z-component of the imaginary wave vector scaled by the plate separation, K′ the longitudinal component of the imaginary wave vector of the diffracted wave, and q = ka.
The calculation in Ref. [57] makes it possible to compute the second-order roughness correction as a function of the surface profiles, h₁ and h₂. Analytical solutions were determined for specific limiting cases, revealing a more complex relationship with the perfect-reflector model than previously recognized [58,59], particularly in scenarios involving large distances and small roughness wavelengths. While the asymptotic case of long roughness wavelengths aligns with PFA predictions, it was established that the PFA generally underestimates the roughness correction, a critical aspect when exploring constraints on potential new weak forces at sub-millimeter ranges.
As a further expansion of [57], in Ref. [60] the authors explored the Casimir interaction between a plane and a sphere of radius R at a finite temperature T, in terms of the distance of closest approach, a. Noting that, under the usual experimental conditions, the thermal wavelength λ_T satisfies a ≪ λ_T ≪ R, they evaluated the leading correction to the PFA applicable to such intermediate temperatures. They resorted to developing the scattering formula in the plane-wave basis. The result captures the combined effect of spherical geometry and temperature, and is expressed as a sum of temperature-dependent logarithmic terms. Remarkably, two of these logarithmic terms originate from the Matsubara zero-frequency contribution.
Defining the variables x = a/R and τ = a/λ_T, and the deviation δF(T) = F(T) − F(0), it is found in Ref. [60] that, in the intermediate temperature regime x ≪ τ ≪ 1, δF is given by a sum of such logarithmic terms; the leading neglected contributions stem from nonzero Matsubara frequencies. In Ref. [61], the leading-order correction to the PFA in the plane-sphere geometry was derived. The momentum representation connected this with geometrical optics and semiclassical Mie scattering. The primary contributions are shown to come from diffraction, with TE polarization becoming more relevant than TM polarization. The diffraction contribution is calculated at leading order using the saddle-point approximation, considering leading-order curvature effects at the sphere's tangent plane.
Additionally, the next-to-leading-order (NTLO) term in the saddle-point expansion contributes to the PFA correction. This involves computing the round-trip operator within the WKB approximation, representing sequences of reflections between the plane and the sphere. A key aspect is the tilt between the scattering planes, which allows the TE and TM polarizations to mix.
Comprehending the implications of polarization-mixing channels for the geometric-optical correction to the PFA is of considerable importance. Indeed, these channels are recognized for inducing negative Casimir entropies of geometric origin [63-67]. In spite of the nonvanishing contribution of the polarization-mixing matrix elements, the total correction associated with the tilt between the scattering and Fresnel planes vanishes at NTLO. This implies that the primary correction to the PFA would remain unchanged even if the complexities arising from the differences between the Fresnel and scattering polarization bases were ignored from the outset. The latter points to the fact that a different approach, one that completely omits the effect of polarization mixing, could directly produce the leading-order correction to the PFA. Plane waves have proved to be a well-suited basis for studying the Casimir effect, as evidenced by the more recent study [30]. The utility of that basis ranges from analytical to numerical applications, particularly when dealing with objects in close proximity, the most relevant situation in experiments. It has also been shown that the use of plane waves is notably effective in improving the interpretation of results in the realms of geometrical optics and diffractive corrections.
In the context of a setup involving two spheres of arbitrary radii in vacuum, it was shown in [30] that the PFA emerges as the leading term in an asymptotic expansion for large radii. Extending a prior calculation based on the saddle-point approximation, involving a trace over multiple round trips of electromagnetic waves between the spheres, the study encompassed spheres made of bi-isotropic material, requiring the consideration of polarization mixing during the reflection processes. The result is naturally elucidated within the framework of geometrical optics.
Then, relying on a saddle-point approximation framework, the authors derived the leading-order corrections, of geometrical and diffractive origin. Explicit results, first obtained for perfect electromagnetic conductor (PEMC) spheres at zero temperature, indicate that for certain material parameters the PFA contribution vanishes; should that be the case, the leading-order correction becomes the dominant term in the Casimir energy.
In the lowest-order saddle-point approximation, but including diffractive corrections, one can show that the expansion of the Casimir energy in powers of x = a/R_eff reproduces the PFA result and its leading-order diffractive correction, with an NTLO term proportional to (15(10 + 3π)/(4π³)) x^{3/2} (Equation (99)). The NTLO correction thus behaves as x^{3/2}; however, the prefactor obtained accounts for only about 90% of the one coming from the numerical results of [61]. This discrepancy may be traced back to having neglected the NTLO-SPA and NNTLO-SPA contributions.
IX. CONCLUSIONS
In this review, we have discussed several properties and applications of the DE approach, mostly as a method to improve the predictions of the Proximity Force Approximation, which has long been in use in many different fields.
We started the review by briefly discussing the precursors of the PFA, the Derjaguin (and related) approximations, since we have found them rather appropriate for displaying the essentially geometric nature of the kind of problem we discuss: two quite close smooth surfaces, and an interaction energy between them. Depending on the kind of system being considered, the interaction between the two surfaces may or may not be the result of the superposition of pairwise interactions. An example of an interaction which is not the result of such a superposition is the Casimir effect. Note, however, that even when the fundamental interaction satisfies a superposition principle, as in electrostatics, the actual evaluation of the Coulomb integral for the total interaction energy can be a rather involved problem, because the actual charge density may not be known a priori. That is indeed the case when the surfaces involved are conductors, since one then usually needs to find the electrostatic potential. We have used precisely this problem to present the idea of the DE in a concrete example: the calculation of the electrostatic energy between two conducting surfaces held at different potentials.
After introducing and applying the DE in that example, we discussed a more general proof of the expansion, by first posing the problem in a more general and abstract way: how to approximate, under certain smoothness assumptions, a functional of a pair of surfaces. At the same time, the proof provides a concrete way to determine the PFA and its NTLO correction, the DE: one just needs to perform an expansion in powers of the deformation of the surfaces about the configuration of two flat, parallel surfaces.
The derivations and examples here have been presented for a geometrical setting where one surface is a plane, while the other may be described by a single Monge patch based on that plane. However, as shown by other authors, under quite reasonable and general assumptions, the results obtained for that situation may be generalized to the case of two curved surfaces parametrized by their respective patches, based on a common plane (which then does not coincide with either of the physical surfaces).
Then we reviewed different applications of the DE to the zero-temperature Casimir effect, considering different fields and boundary conditions, starting from the case of a scalar field with Dirichlet boundary conditions, then the EM field in the presence of perfectly conducting surfaces, and commenting on the scalar field with Neumann conditions.
We afterwards presented a description and brief review of the extension of the DE to finite-temperature cases and to different numbers of spatial dimensions. The temperature is a dimensionful magnitude, and the phenomenon of dimensional reduction presents a problem when there are Neumann boundary conditions or when the EM field is involved. Indeed, dimensional reduction implies the existence of a massless 2 + 1 dimensional field (with Neumann conditions), and this mode introduces a nonanalyticity in momentum space, which violates one of the hypotheses of the DE; the DE therefore cannot be applied. Nevertheless, we have shown that the introduction of a small departure from ideal Neumann conditions solves this issue: analyticity is recovered and the DE may be applied.
We also mentioned the application of the DE to the Casimir-Polder interaction, particularly between a polarizable particle and a gently curved surface. This example highlights the broader implications of the DE for understanding particle-surface interactions beyond the Casimir force itself.
To conclude, we have presented in this review the main features of the DE approach, with a focus on the Casimir effect, while pointing out that its applicability can certainly go beyond that realm. We have shown this explicitly for electrostatics, but we expect it to be applicable to, for example, the same kinds of systems where the DA, SEI and SIA were introduced. This research was funded by Agencia Nacional de Promoción Científica y Tecnológica (ANPCyT), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Universidad de Buenos Aires (UBA), and Universidad Nacional de Cuyo (UNCuyo), Argentina.
Particular Anatomy of the Hyperopic Eye and Potential Clinical Implications
Background and Objectives: Hyperopia is a refractive error which affects cognitive and social development if uncorrected and raises the risk of primary angle-closure glaucoma (PACG). Materials and Methods: The study included only the right eye—40 hyperopic eyes in the study group (spherical equivalent (SE) under pharmacological cycloplegia over 0.50 D), 34 emmetropic eyes in the control group (SE between −0.50 D and +0.50 D). A complete ophthalmological evaluation was performed, including autorefractometry to measure SE, and additionally we performed Ocular Response Analyser: Corneal Hysteresis (CH), Corneal Resistance Factor (CRF); specular microscopy: Endothelial cell density (CD), Cell variability (CV), Hexagonality (Hex), Aladdin biometry: Anterior Chamber Depth (ACD), Axial Length (AL), Central Corneal Thickness (CCT). IBM SPSS 26 was used for statistical analysis. Results: The mean age of the entire cohort was 22.93 years (SD ± 12.069), 66.22% being female and 33.78% male. The hyperopic eyes had significantly lower AL, ACD, higher SE, CH, CRF. In the hyperopia group, there are significant, negative correlations between CH and AL (r −0.335), CRF and AL (r −0.334), SE–AL (r −0.593), ACD and CV (r −0.528), CV and CRF (r −0.438), CH (r −0.379), and positive correlations between CCT and CH (r 0.393) or CRF (r 0.435), CD and ACD (r 0.509) or CH (0.384). Age is significantly, negatively correlated with ACD (r −0.447), CH (r −0.544), CRF (r −0.539), CD (r −0.546) and positively with CV (r 0.470). Conclusions: Our study suggests a particular biomechanical behavior of the cornea in hyperopia, in relation with morphological and endothelial parameters. Moreover, the negative correlation between age and ACD suggests a shallower anterior chamber as patients age, increasing the risk for PACG.
Introduction
Hyperopia is one of the most frequent refractive errors, both in the pediatric and adult populations, with an important potential impact on daily quality of life [1]. It is estimated that the worldwide prevalence of hyperopia is 4.6% in children and 30.9% in adults, with large variations between different geographic regions [2].
While common, uncorrected hyperopia, and particularly anisometropia (a difference in refractive error between the two eyes), raises an important risk of amblyopia (also known as lazy eye) during childhood, as evidenced by a recent study performed on a Romanian pediatric population [3]. Persistent amblyopia has been found to be associated with poorer self-rated overall health, and to have an impact on mental health and overall well-being [4].
In adults, hyperopia is a known risk factor for primary angle-closure glaucoma (PACG): an SE between 1.01 and 3.00 Diopters (D) is associated with an odds ratio for PACG of 1.58, while hyperopia over 3 D is associated with an odds ratio of 3.33, these figures being even higher in patients younger than 65 years old [5]. Regarding glaucoma, along with high intraocular pressure (IOP), another important risk factor is represented by the biomechanical properties of the cornea. In PACG, corneal hysteresis has frequently been described as lower compared with healthy controls, even after adjusting for age and IOP, and as improving after treatment [6].
Recent literature has shown significant anatomical differences in the hyperopic eye, such as a higher choroidal thickness in children, which correlates with the axial length [7]. The objective of this study is to better describe the morphological, biomechanical and endothelial properties of the hyperopic cornea, in relationship with axial length and anterior chamber depth, and to compare these to a control group of emmetropic eyes.
Materials and Methods
This study has a prospective, non-randomized, cross-sectional methodology. The study cohort was formed by applying inclusion and exclusion criteria to all patients who consecutively presented to the Oftaclinic Ophthalmology practice, in Bucharest, Romania, between February 2023 and June 2023. The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Ethics Committee of Carol Davila University of Medicine and Pharmacy (protocol code PO-35-F-03/16.01.2023). Informed consent was obtained from all subjects involved in the study, and from legal guardians in the case of participants under the age of 18.
The inclusion criteria were:
-For the study group: diagnosis of hyperopia (spherical equivalent (SE) over 0.50 D) [8];
-For the control group: diagnosis of emmetropia (SE between −0.50 D and +0.50 D) [9].
Patients were included in the study and control groups according to the value of the spherical equivalent calculated after pharmacological cycloplegia (cyclopentolate 10 mg/mL, instilled 3 times every 5 min in both eyes). Furthermore, patients were assigned to the pediatric group (age under or equal to 18 years old) or the adult group (age over 18 years old).
The exclusion criteria were represented by the presence of ocular pathology other than hyperopia (myopia, keratoconus, amblyopia, cataract, glaucoma, vitreoretinal pathology), a diagnosis of presbyopia, or a history of refractive surgery. Furthermore, patients were excluded in the absence of testing compliance (such as a low waveform score in Ocular Response Analyzer testing, under 7), if the patient was pregnant, or if they disclosed any systemic pathology (diabetes mellitus, arterial hypertension, dyslipidemia) or chronic systemic medication. For each patient, the right eye was included in the analysis.
The Ocular Response Analyzer is a non-contact tonometry device which applies an air pulse to the corneal surface and follows the corneal deformation, and its return to the initial state, using infrared light. The device records two applanation pressures, thereby measuring the intraocular pressure and two estimates of corneal viscoelasticity: CH, which represents the capacity of the cornea to absorb and dissipate energy (equal to the pressure difference between the first and second applanations), and CRF, which reflects the global corneal resistance (similar to CH, but with the second applanation pressure multiplied by a constant) [10]. The measurements with the highest waveform score were included in the analysis.
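As an illustration of how the two applanation pressures translate into CH and CRF, consider the sketch below (ours, not from the study; the constant k is a device-calibration parameter, and both the value of k and the pressure readings used here are only indicative):

def ch_crf(p1, p2, k=0.70):
    """CH and CRF (mmHg) from the two ORA applanation pressures P1, P2."""
    ch = p1 - p2        # corneal hysteresis: difference of the two applanations
    crf = p1 - k * p2   # corneal resistance factor: second applanation weighted by k
    return ch, crf

print(ch_crf(24.0, 13.5))   # hypothetical applanation readings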
Specular microscopy uses the principle of specular light reflection, in which the endothelial layer acts as a mirror, transmitting an image of itself to the device, which then analyzes its properties [11].
The Aladdin biometer is an optical low-coherence interferometer measuring ocular morphological parameters [12], while autorefractometry measures refractive errors following the principle of retinoscopy (registering the movement of the retinal reflection of a light projected towards the eye) [13].
Statistical Analysis
This study includes both categorical and numerical (continuous) data. The absolute and relative frequencies were calculated for categorical data. For numerical data, the average and standard deviation were determined.
Levene's Test, followed by the t-Test, was applied in order to identify significant differences between the groups (hyperopic and emmetropic, male and female, children and adults). Pearson's correlation coefficient ("Pearson's r") was calculated to determine the degree of correlation between variables. A weak correlation has a Pearson's r between −0.3 and 0.3, a moderate correlation between 0.3 and 0.5 or between −0.5 and −0.3, and a strong correlation over 0.5 or under −0.5. As IOPg may act as a confounding variable, correlations were calculated controlling for it. A p value of 0.05 was considered the threshold for statistical significance. The IBM SPSS Statistics package for Windows, version 26 (IBM Corp., Armonk, NY, USA) was used to perform the statistical analysis.
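As an illustration of what "controlling for IOPg" means computationally, the following sketch (ours; the study itself used SPSS, and all data below are synthetic placeholders) computes a first-order partial correlation with the standard formula:

import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    """Pearson r between x and y controlling for z (first-order partial)."""
    rxy = stats.pearsonr(x, y)[0]
    rxz = stats.pearsonr(x, z)[0]
    ryz = stats.pearsonr(y, z)[0]
    return (rxy - rxz*ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(0)
iopg = rng.normal(15, 3, 40)                   # hypothetical IOPg values (mmHg)
ch = 10 + 0.2*iopg + rng.normal(0, 1, 40)      # hypothetical CH values
age = 25 - 0.3*ch + rng.normal(0, 2, 40)       # hypothetical ages
print(partial_corr(age, ch, iopg))             # Age-CH correlation, IOPg removed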
The hyperopic eyes had significantly lower AL, ACD, higher SE, CH, CRF, and were from significantly younger patients (see Table 1). There were no significant differences between males and females in the entire cohort, the study group, or the control group.
There are statistically significant differences between adults and children: lower ACD, CH, CRF and CD in adult hyperopes and emmetropes, and significantly higher CV in adult hyperopes compared to pediatric hyperopes (see Table 2). In the hyperopia group, there are several significant correlations between variables; all statistically significant ones can be found in Table 3. Thus, in the hyperopia group there are significant strong negative correlations between Age-CH, Age-CRF, AL-SE, ACD-CV, Age-CD, moderate negative correlations between Age-ACD, AL-CH, AL-CRF, Hex-CV, CH-CV, CRF-CV, CD-CV, and moderate positive correlations between Age-CV, CCT-CH, CCT-CRF, ACD-CD, CH-CD. Scatter plots of the correlations with age can be found in Figure 1. In the emmetropic group, there are strong negative correlations between Age-ACD, Age-CD, moderate negative correlations between Age-CH, Age-CRF, AL-CD, Hex-CD, ACD-CV, Hex-CV, strong positive correlations between CCT-CH, CCT-CRF, ACD-CH, ACD-CRF, CRF-CD, CH-CD, and moderate positive correlations between Age-CV, ACD-CD, SE-CD, Hex-AL (Table 4).
Discussion
Corneal biomechanics represent an emerging domain in adult and pediatric ophthalmology, with proven value in refractive surgery [14] and in diseases such as glaucoma [15,16], keratoconus [17], and other refractive errors such as myopia [18]. In our study, both CH and CRF were significantly higher in hyperopes compared to emmetropes, and in children compared to adults. A large-scale study involving over 93,000 eyes has led to similar results: CH is higher in younger people; however, that study involved significantly older participants (between the ages of 40 and 69), and it also revealed a difference between genders in terms of CH, which was not present in our study [19]. A study which divided the participants into age decades revealed that CH and CRF differ significantly between the ages of 10 and 69, with the average values in the 10-19 age bracket being most significantly higher than in the other decades. Moreover, CH and CRF were on average higher in females and, similar to our study, were higher in hyperopes (compared to myopes and emmetropes) [20].
Biologically, this may be explained by the fact that aging induces reduced elasticity and compliance in the cornea through the effects of oxidative stress, protein glycation and, ultimately, collagen crosslinking [21].
There is an important relationship between corneal thickness and corneal hysteresis and resistance factor: a strong correlation has been established in multiple studies [22,23], including through multivariate analysis [24]. This can be easily explained, as the elasticity and viscosity of the cornea, of which CH and CRF are markers, increase as the corneal thickness increases [25].
Corneal thickness is an ocular parameter which may be influenced by several systemic pathological processes, such as accumulation of advanced glycation end products in the stroma or endothelial dysfunction, which all lead to an increase in central or peripheral corneal thickness [26]. Specifically, an increase of corneal thickness has been detected in diabetes mellitus (DM) [27], hyperparathyroidism, and gout, and a decrease in connective tissue diseases such as Ehlers-Danlos Syndrome and Marfan Syndrome [26].
In our study, CCT is correlated with CH and CRF both in emmetropes and in hyperopes, and the latter two differ between the two refractive groups. However, no significant difference in CCT was observed. It is known that corneal biomechanics are influenced by biological properties such as the three-dimensional organization of collagen fibers, extracellular matrix components, or osmotic pressure [28], which may explain the increased CH and CRF of the hyperopic cornea at the same CCT.
One important relationship to discuss pertains to the anterior chamber depth. Hyperopia, a shallow central anterior chamber and a short axial length are all known risk factors for primary angle-closure glaucoma [5,29]. As expected, our study reveals a lower AL and ACD in hyperopic patients. Interestingly, it also reveals a negative correlation between age and ACD, both in emmetropes and hyperopes, suggesting a shallower anterior chamber as patients grow older. Other studies confirm this association between ACD and either age or refractive error, with some data even supporting the fact that the largest rate of ACD decrease occurs in the second decade of life [30]. In tandem with the anterior chamber depth, lens parameters are of importance in angle-closure glaucoma. It is reported that in PACG, the lens is thicker and its relative position is more anterior [31].
A Cochrane review suggests that, in PACG, lens extraction, which relieves the pupillary block and increases the ACD, is a feasible therapeutic approach, with benefits in terms of visual field progression and the quantity of IOP-lowering medication needed [32].
Several studies have investigated the correlation between CH and CRF and morphological parameters such as the ACD or AL [25,33]. However, results were conflicting: in our study, there is a positive correlation between ACD and the corneal biomechanical parameters in emmetropes, while the correlation is statistically significant between AL and corneal biomechanics in hyperopes. In a study of almost 1000 eyes, linear regression analysis identified anterior chamber depth and volume as factors influencing CH in a model adjusted for age and gender; however, the correlation was no longer significant in a multivariate model which included other factors such as CCT or corneal curvature [25]. In a study of pediatric eyes, a multivariate analysis revealed that CH and CRF are both negatively correlated with AL, with no correlation with ACD [33]. Similarly, in our hyperopic cohort ACD did not correlate with corneal biomechanics, while in the emmetropic group an inverse correlation was found. These differing results suggest that more research is needed, and that refractive state may strongly influence the biomechanical-morphological interactions.
The corneal endothelium is the innermost layer of the cornea, consisting of tightly interconnected cells with an essential role in maintaining proper corneal hydration and transparency [34]. It is a single-cell layer with limited capacity for regeneration [35]. In normal eyes, the annual rate of cell loss is 0.6%, and several systemic and ocular conditions may increase this rate during the course of the patient's life [34,36]. Similarly, in our study there is a correlation between age and cell density and variability, as well as a significant difference between adults and children, which suggests a decrease in the density and uniformity of endothelial cells as patients age. However, as the present study is cross-sectional, a rate of cell loss could not be calculated to compare the hyperopic and emmetropic patients.
The corneal endothelium is an ocular structure which may be influenced by systemic conditions such as diabetes mellitus. Several morphological alterations have been recorded in DM, including reduced cell density, polymorphism, and a higher cell loss rate which correlates with longer disease duration and poor glycemic control [37]. Endothelial cell dysfunction has also been described in the context of hyperlipidemia, smoking, or a history of ischemic stroke [34].
As stated previously, hyperopia is a significant PACG risk factor, and several studies have found lower CD in PACG compared to open-angle glaucoma [38] or to healthy controls [35]. Both in emmetropes and in hyperopes we found a correlation between CD, CV and ACD: a shallower anterior chamber is correlated with a decrease in endothelial cell density and an increase in variability, which may have important repercussions over the patient's lifetime.
Our study found statistically significant, moderate-to-strong correlations between CH and CRF and endothelial cell count and variability. However, to the best of our knowledge, these findings differ from those in the literature, where no significant correlation has been found between biomechanical and endothelial corneal parameters in healthy volunteers [39] or in patients with cataract [40]. As the level of corneal hydration, which is regulated by the endothelial pump function, may influence corneal biomechanics [39], more studies are needed in order to identify the factors that modulate the relationship between CH, CRF and endothelium parameters.
Our study shows an inverse correlation between age and biomechanical parameters (CH and CRF), anterior segment morphology (ACD) and endothelial layer parameters (cell density, variability and hexagonality). However, it is cross-sectional and therefore does not follow the patients' evolution over time in relation to these variables. Prospective follow-up studies of hyperopic cohorts are needed in order to assess this evolution over decades and to evaluate the PACG risk.
Figure 1. Correlations of age in the hyperopia study group: with anterior chamber depth (scatter plot (A)), with endothelial cell density (scatter plot (B)), with corneal hysteresis (scatter plot (C)).
Table 1. Mean and standard deviation of the age, Spherical Equivalent (SE), Axial Length (AL), Anterior Chamber Depth (ACD), Central Corneal Thickness (CCT), Corneal Hysteresis (CH), Corneal Resistance Factor (CRF), Endothelial cell density (CD), Cell variability (CV) and Hexagonality (Hex) in the whole cohort and in the hyperopic and emmetropic groups, with mean difference, standard error and p value of the independent samples t test.
Table 2. Mean, standard deviation and p value of the independent samples t test for the age, Spherical Equivalent (SE), Axial Length (AL), Anterior Chamber Depth (ACD), Central Corneal Thickness (CCT), Corneal Hysteresis (CH), Corneal Resistance Factor (CRF), Endothelial cell density (CD), Cell variability (CV) and Hexagonality (Hex) in adult and pediatric subjects, in the hyperopia and emmetropia groups.
Automated structure and flow measurement — a promising tool in nailfold capillaroscopy
Objectives Despite increasing interest in nailfold capillaroscopy, objective measures of capillary structure and blood flow have been little studied. We aimed to test the hypothesis that structural measurements, capillary flow, and a combined measure have the predictive power to separate patients with systemic sclerosis (SSc) from those with primary Raynaud's phenomenon (PRP) and healthy controls (HC). Methods 50 patients with SSc, 12 with PRP, and 50 HC were imaged using a novel capillaroscopy system that generates high-quality nailfold images and provides fully-automated measurements of capillary structure and blood flow (capillary density, mean width, maximum width, shape score, derangement and mean flow velocity). Population statistics summarise the differences between the three groups. Areas under ROC curves (AZ) were used to measure classification accuracy when assigning individuals to SSc and HC/PRP groups. Results Statistically significant differences in group means were found between patients with SSc and both HC and patients with PRP, for all measurements, e.g. mean width (μm) ± SE: 15.0 ± 0.71, 12.7 ± 0.74 and 11.8 ± 0.23 for SSc, PRP and HC respectively. Combining the five structural measurements gave better classification (AZ = 0.919 ± 0.026) than the best single measurement (mean width, AZ = 0.874 ± 0.043), whilst adding flow further improved classification (AZ = 0.930 ± 0.024). Conclusions Structural and blood flow measurements are both able to distinguish patients with SSc from those with PRP/HC. Importantly, these hold promise as clinical trial outcome measures for treatments aimed at improving finger blood flow or microvascular remodelling.
Introduction
The value of nailfold capillaroscopy in the early diagnosis of systemic sclerosis (SSc) has long been recognised. At the nailfold, capillaries run parallel to rather than perpendicular to the skin surface, and can be easily seen when magnified. Characteristic abnormalities in patients with SSc include dilated capillary loops (including giant capillaries), distortion of the normal capillary architecture, areas of avascularity, and areas of haemorrhage (Maricq and LeRoy, 1973;Herrick and Cutolo, 2010;Smith et al., 2016). Because Raynaud's phenomenon (RP) is the most common presenting symptom of SSc, nailfold capillaroscopy is a key investigation in patients presenting with RP: abnormal nailfold capillaries allow early diagnosis of SSc (LeRoy and Medsger, 2001;Koenig et al., 2008;Matucci-Cerinic et al., 2009;Avouac et al., 2011) and are included in the 2013 American College of Rheumatology (ACR)/European League against Rheumatism (EULAR) Classification Criteria for SSc (Van den Hoogen et al., 2013).
Most clinical applications and research studies concerning nailfold capillaroscopy relate to imaging of microvascular structure. However, capillary flow (one aspect of function) can also be assessed, potentially providing additional insights into pathogenesis and measurement of the SSc disease process. Measuring capillary blood flow is challenging, and previous work estimating capillary flow in nailfold capillaroscopy videos has estimated flow only at manually selected points or vessels, leading to subjectivity: if only a small number of vessels are selected then blood flow in these may be unrepresentative of the whole nailfold (Shih et al., 2011; Mugii et al., 2009). To address the inherent challenges, we have developed a system for measuring both structure and flow fully-automatically in all visible capillaries across the whole nailfold.

Fig. 1. Image capture, vessel detection, and flow sequence estimation. All video image frames are 640 × 480 pixels with a resolution of 1 μm per pixel. Frames are stitched in software to produce mosaic images across the whole nailfold. Panels are as follows: (1) An overview of the microscope system and capture software interface; (2) Diagram of motor positions and tracking data used to inform frame stitching and the mosaic creation process; (3) A fully-registered mosaic image with automatically detected vessels highlighted, counted and measured; (4) Individual regions from the mosaic in (3), showing vessel region detection (left) and vessel path orientation (right); (5) Structural mosaic from (3) overlaid with false colour flow information extracted from vessels using optical flow techniques.

The aim of this study was to apply this novel capillaroscopy system, incorporating flow measurements, in a cross-sectional study of patients with SSc, patients with primary (idiopathic) Raynaud's phenomenon (PRP), and healthy control subjects. Our hypothesis was that a combination of capillary flow and structural measurements would be better able to discriminate between patients with SSc and those with PRP (or healthy controls) than any single measurement. At the time of imaging, 33/50 (66%) of the patients with SSc, and 3/12 (25%) of the patients with PRP were taking vasodilatory medication.
Nailfold capillaroscopy system
The nailfold capillaroscopy system which we have developed is described fully elsewhere (Berks et al., 2014; Berks et al., 2016). In brief, the system uses a high frame rate (120 frames per second) camera to capture video sequences which allow measurement of red blood cell velocity in individual capillaries. The camera is mounted on a software-controlled 3-axis motorised stage which allows sequences to be captured more quickly (approximately 1 min per finger) than with conventional manually-adjusted microscope systems. Novel software then generates high-quality static nailfold capillary image mosaics (Fig. 1, and see Supplementary video 1) and subsequently makes fully-automated measurements of capillaroscopy structure and flow (Berks et al., 2014; Murray et al., 2011). The image processing and automated measurement take 1-2 min per nailfold on a standard desktop computer. Since there is no alternative method to measure flow velocity for individual capillaries, we validated the flow measurement algorithm using realistic software phantom data (Tresadern et al., 2013).
Imaging protocol
Participants were acclimatised in a temperature-controlled laboratory for 20 min prior to imaging. For each participant, video sequences were taken from all 10 digits (where available). These sequences were then used to generate a total of 1104 static nailfold mosaic images. A total of 16 digits were not imaged; reasons included the presence of obscuring dressings because of ulcers/calcinosis (3 digits from 1 participant), amputations (5 digits from 2 participants), severe contractures (7 digits from 1 participant), and equipment failure (1 digit from 1 participant).
Automated image analysis
For each mosaic, together with its corresponding video sequence, automated software was used to compute measures of capillary density, mean and maximum width, shape, derangement and mean flow velocity. To generate these parameters the software first detects each distal row capillary and estimates the location of its apex and the path the capillary follows along the arterial and venous limbs. The width and orientation at each point along the capillary path up to a distance of 100 μm from the apex are then estimated, and used to compute the capillary's average width, principal orientation (the mean of the path orientations) and shape score (the dispersion statistic (Mardia, 2000) of the path orientations; this varies between 0 and 1 and will be low for capillaries with highly tortuous, abnormal shapes and high for normal hairpin-shaped capillaries). The software also estimates mean blood flow velocity along each capillary path, using all video frames in which the capillary apex was present.
These capillary-level parameters were used to produce the following image-level parameters:

a. Capillary density (number of capillary apices per millimetre, measured from the left-most to the right-most capillaries).
b. Mean width (the mean of the individual capillary widths).
c. Max width (the largest of the individual capillary widths).
d. Shape score (the mean of the individual capillaries' shape scores).
e. Derangement score (the dispersion statistic of the principal orientations of each capillary; as with the shape score, this varies between 0 and 1, and will be low in nailfolds with highly irregular capillary structure and high where all capillaries 'line up' in a common direction).
f. Mean flow velocity (the mean of the individual capillary flow measures).
Finally, these six nailfold-level measurements were averaged across all imaged digits to produce corresponding participant-level parameters.
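As a rough illustration of how these summary parameters can be derived from per-capillary measurements, the following Python sketch computes image-level and participant-level values. The data structure and field names are hypothetical placeholders rather than the authors' actual implementation, and the dispersion statistic is a simplified mean-resultant-length version.

import numpy as np

def dispersion_score(orientations_rad):
    # Length of the mean resultant vector of a set of angles
    # (cf. Mardia, 2000): in [0, 1]; low for scattered
    # orientations, high when they align.
    c = np.mean(np.cos(orientations_rad))
    s = np.mean(np.sin(orientations_rad))
    return float(np.hypot(c, s))

def image_level_parameters(capillaries, left_to_right_mm):
    # capillaries: list of dicts of per-capillary measurements.
    # All keys below are illustrative placeholders.
    widths = np.array([c["mean_width_um"] for c in capillaries])
    orients = np.array([c["principal_orientation_rad"] for c in capillaries])
    return {
        "density_per_mm": len(capillaries) / left_to_right_mm,
        "mean_width": float(widths.mean()),
        "max_width": float(widths.max()),
        "shape_score": float(np.mean([c["shape_score"] for c in capillaries])),
        "derangement": dispersion_score(orients),
        "mean_flow_velocity": float(np.mean([c["flow_velocity"] for c in capillaries])),
    }

def participant_level(image_params):
    # Average each image-level parameter across all imaged digits.
    keys = image_params[0].keys()
    return {k: float(np.mean([p[k] for p in image_params])) for k in keys}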
Statistical analysis
For each parameter one-way ANOVA and Tukey's range test were computed to check for group differences. The area under the ROC curve (A_Z) was used to measure separation between SSc and PRP/HC groups. Healthy controls and patients with PRP were combined for this analysis because it is generally considered that in patients with PRP, the nailfold capillaries are normal (this is one of the defining characteristics of PRP (LeRoy and Medsger, 1992; Maverakis et al., 2014)) and in our analysis (see below) results were similar between healthy controls and patients with PRP for all parameters except maximum width (which has previously been shown to be increased in patients with PRP compared to healthy controls (Bukhari et al., 1996)). We combined the individual parameters in a logistic regression model using HC/PRP vs SSc as a binary output variable, first using only the structural measures, and then including flow. Stepwise regression was used to add/remove terms from an initial linear model fit, and in both cases max width (highly correlated with mean width) and derangement (highly correlated with shape) were discarded. In the latter model flow was retained, suggesting it provides additional independent information to the structural measures. To estimate model performance we applied leave-one-out cross-validation to obtain unbiased predictions for each subject.
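A minimal sketch of the classification analysis, assuming scikit-learn is available; stepwise term selection is omitted for brevity, and variable names are illustrative rather than taken from the study code.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

def loo_auc(X, y):
    # X: participant-level parameters (e.g. density, mean width,
    # shape score, flow); y: 1 = SSc, 0 = HC/PRP.
    # Unbiased per-subject probabilities via leave-one-out CV.
    probs = cross_val_predict(LogisticRegression(), X, y,
                              cv=LeaveOneOut(),
                              method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)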
Results
Group means for the six measured parameters (5 structure + flow) are shown in Table 1, together with the A_Z values for the individual parameters and the two regression models (structure alone, and structure with flow). Group means for patients with SSc were statistically significantly different from both healthy controls and patients with PRP for all parameters, including blood flow velocity. Fig. 2a shows ROC curves and A_Z values for predicting SSc (positive) versus the combined healthy control/PRP group (negative) for each individual parameter. Fig. 2b shows ROC curves for: (1) the five structural parameters combined (ROC A_Z = 0.919 ± 0.026), which was greater than for the best single parameter in Fig. 2a (mean width, ROC A_Z = 0.874 ± 0.043); and (2) the five structural parameters plus flow (ROC A_Z = 0.930 ± 0.024).
Discussion
We have confirmed that capillaroscopic parameters allow differentiation of patients with SSc from those with PRP/HC. Key new findings are that: (1) blood velocity provides discrimination comparable with individual structural parameters; (2) combining structural parameters, measured automatically at high speed (approximately 1 min per nailfold), improved differentiation compared to individual parameters; and (3) blood velocity results, although preliminary, provide complementary information for distinguishing SSc from PRP/HC, and combining velocity with structural measurements further improved discrimination performance.
The ability to measure capillary flow velocity objectively is a major step forward and potentially opens up a new era for nailfold capillaroscopy as a non-invasive investigative tool. In recent years there has been a huge increase in the application of nailfold capillaroscopy by rheumatologists (in part fuelled by the inclusion of abnormal capillaroscopy in the ACR/EULAR criteria (Van den Hoogen et al., 2013)), evidenced by increasing numbers of research publications and EULAR and British Society for Rheumatology sponsored training courses. Structural measurements will always be those most relevant to practicing rheumatologists, the major application of capillaroscopy being in the assessment of the patient presenting with RP. However, for the clinical researcher, it is important to point out that structural abnormalities are likely to develop and progress relatively slowly: probably too slowly to be useful as outcome measures in most clinical trials, although this is an area of current research and increasingly investigators are including capillaroscopy in studies of treatment response (Moore et al., 2007; Guiducci et al., 2012; Cutolo et al., 2013). Conversely, capillary blood flow is likely to change rapidly in response to, for example, vasodilator drug treatment, and could therefore be an outcome measure especially in early-phase, proof-of-concept trials as well as in studies of the pathophysiology of SSc and other rheumatological conditions in which the microvasculature is thought to contribute to pathogenesis. Relevant to this, Mugii et al. (2009), as well as reporting reduced red blood cell velocity in nailfold capillaries from patients with SSc compared to healthy controls, reported that velocity increased in seven patients with SSc after treatment with alprostadil, underscoring how red blood cell velocity is likely to be much more sensitive to change than structural capillaroscopic parameters. Increased nailfold red blood cell velocity in response to vasoactive treatments has also been reported in other conditions, for example in response to antioxidants (in the context of smoking) (Henriksson et al., 2011) and to moxonidine (in the context of hypertension) (Martina et al., 1998). By allowing rapid measurement of velocity, averaged across the whole nailfold, our automated methodology brings the potential of sensitive, accurate, and rapid measurement for application in early-phase clinical trials. A limitation of our study was that all patients with SSc had well-established SSc, with a median disease duration of 12 years, and it will be important to test the ability to discriminate between patients with PRP and those presenting with early SSc, so as to obtain results which are generalisable to those patients presenting with RP in everyday practice. Therefore larger-scale studies including patients with early disease, and examining patients prospectively over time, are now required. Also, to assess the sensitivity of flow measures, we are planning studies including different dynamic challenges, to test the ability of our system to measure flow changes in response to (for example) temperature changes and occlusion. Future studies could also incorporate comparison of flow measurements to measurements obtained using other physiological measurement techniques to validate/calibrate our flow velocity measures, for example thermography, which measures surface temperature, or laser Doppler imaging, which directly measures blood cell movement (though not for individual capillaries).
In conclusion, we have developed a state-of-the-art nailfold capillaroscopy system which measures nailfold capillary structure and blood flow automatically, with a substantial benefit in terms of operator time. Adding blood flow to structural measures may (with further refinement) help distinguish patients with SSc from those with PRP, and holds promise as an outcome measure in clinical trials of treatment aimed at either improving finger blood flow or remodelling the microvasculature. Finally, our novel system could be modified to assess, non-invasively, other aspects of capillary function, for example oxygenation (using multispectral imaging) and oxidative stress (using ultraviolet-induced fluorescence).
Funding statement
This work was funded by the Wellcome Trust (09342/Z/10/Z).
Declarations of interest
None.
Malaria Knowledge and Infection Among the Migrant Population on the China-Vietnam Border: A Questionnaire-based Survey
Background: Border areas remain one of the greatest challenges facing malaria elimination in China. Malaria control interventions among the migrant population crossing the border rely on personal protection from mosquito bites. Knowledge of the link between mosquitoes and malaria will inform malaria control and elimination programmes targeting this at-risk population. Methods
Introduction
Malaria prevalence in border areas is often higher than in other areas due to lower access to health services, the treatment-seeking behaviour of marginalized populations, difficulties in deploying prevention programmes to hard-to-reach communities, often in difficult terrain, and constant movement of people across porous national boundaries [1]. Though China has eliminated malaria and no indigenous case has been reported since 2017, border areas still pose a great challenge to the achievement of malaria elimination [2][3][4]. Malaria elimination is challenged by the diversity and complexity of its determinants in the border areas [5]. The border areas in Guangxi province, covering 8 counties neighbouring Vietnam, were once highly endemic [6]. The malaria incidence in those 8 counties ranged from 125.58 to 605.77 per 10,000 [7]. After continuous effort by the government and technical staff, the incidence declined sharply to 0.22 per 100,000 in 2010, and no local Plasmodium falciparum has been reported since 1996. Ningming County is one of the 8 border counties and once belonged to a malaria hyperendemic area, with 31,200 malaria cases and an incidence of 1.9 per 10,000 reported in 1953 [8]. Plasmodium vivax has been the predominant species since P. falciparum was no longer reported after 1988. However, imported malaria cases in Ningming County, similar to the nationwide trend, have increased due to frequent economic exchange. Blood examinations conducted from 2000 to 2010 detected 7 positive slides among a total of 3,439 migrants, a positive rate of 0.20%. Hence, imported malaria caused by frequent migration is the greatest challenge for the border areas, since Anopheles mosquitoes still exist in this county. Since few published studies have investigated and evaluated the malaria risk in this border county, we herein carried out a malaria knowledge survey and parasitological study among the migrant population.
Methods
Study sites and samples.
Demographic study
A total of 108 migrants returning to Guangxi Province from Vietnam between March 2018 and September 2019 were enrolled in this study. All participants were Vietnamese, 52.8% male (n = 57) and 47.2% female (n = 51). The average age of participants was 32 years, ranging from 16 to 54 years. Most were aged 20-30 (36.1%) or 30-40 (40.7%) years. The participants were mainly migrant workers (50.9%) and farmers (37.0%). The overwhelming majority of participants (78.7%) had made a single journey from Vietnam to China, with the number of journeys ranging from 0 to 6. There were 26 people (24.1%) who stayed in China for less than a week, 50 (46.3%) for up to 1 month, 14 (13.0%) for 1 to 6 months, and 5 (4.6%) for more than 6 months. Most of them went to Guangxi (80.6%), while a small number worked in Guangdong (5.6%).
Malaria knowledge and control prevention behaviors
A survey of malaria knowledge among all participants found that knowledge of malaria transmission was only 19.4%, and knowledge of malaria symptoms was 23.2%. Awareness of the risk of death from malaria was 7.4%, and awareness of prevention methods was 14.8%. No significant difference was found among occupations except for migrant workers, whose knowledge rate was higher than that of other occupations, including farmers and plant workers. In terms of prevention and control conditions, 80.6% of the participants had mosquito nets in their homes and 58.3% had screen doors and windows installed. At night, 73.2% of households had 2 persons sleeping under a bed net, whereas 7.4% had 1 person. The bed net usage rate was over 49.1%. In addition, a small proportion (7.4%) of participants had the habit of sleeping outdoors in summer.
Malaria parasitological study
Of the 108 participants, 5.6% (n = 6) tested positive for malaria. The positive rate was 7.0% for males and 3.9% for females (P > 0.05). There were no statistically significant differences in the positive rate by age, sex, family size, nationality or occupation (Table 1). Further, no statistically significant differences occurred by the number of outbound visits, overseas stay time, entry and exit locations, or malaria knowledge (P > 0.05). The positive rate among those not using a mosquito net at home was 4.8% (1/21), among those without a mosquito net installed at home 6.8% (3/44), among those not using mosquito coil incense 3.6% (2/55), and among those with the habit of sleeping outdoors 0.0% (0/8), but the differences in positive rate between behaviours were not statistically significant (P > 0.05) (Table 2).
Discussion
Movement of infectious diseases such as malaria and COVID-19 across borders poses a major obstacle to achieving and maintaining elimination [1,9,10]. The findings of our study revealed 6 asymptomatic infections, accounting for 5.6% of all migrants from Vietnam. Unlike the China-Myanmar border, which poses a great challenge for malaria elimination in Yunnan Province due to the high prevalence of P. vivax and P. falciparum in northern Myanmar [11,12], malaria on the China-Vietnam border seems a "forgotten disease" because of the low incidence in northern Vietnam. In Hai Phong, located in northern Vietnam, the average positive predictive value was 0.10% in 2010-2014 [13]. This is true not only on the Guangxi-Vietnam border, but also on the Yunnan-Vietnam border. For example, in Hekou County in Yunnan Province, the annual malaria parasite rate had fallen to 0.18 per 1,000 by 2008, and it was the first county on the Yunnan-Vietnam border to achieve malaria elimination, in 2015 [14].
In spite of achieving the goal of malaria elimination in the border counties of Guangxi [7], some challenges remain because of the frequent movement of people. First, timely detection of asymptomatic infections is crucial for malaria control interventions on both sides of the border. For Vietnam, the high-risk migrant population has been identified as forest goers, who may live in forest border regions and have poor knowledge of malaria and limited access to preventive and therapeutic services [15,16]. As malaria transmission declines in Vietnam, the high prevalence of asymptomatic and sub-microscopic infections is the main challenge [17][18][19][20]. Asymptomatically infected individuals usually do not seek treatment and generally harbour parasite densities too low to detect by microscopy examination. Therefore, parasites can persist in these individuals from one season to the next, maintaining local transmission [21]. However, whereas asymptomatic infections had been reported in central and southern Vietnam, our study indicates that northern Vietnam has also become a risk area for asymptomatic infections. Second, the susceptibility of both P. falciparum to artemisinin-based combination therapy (ACT) and P. vivax to chloroquine has declined in Vietnam [22,23]. The risk of anti-malarial drug resistance spreading to the border is likely due to importation of multi-drug-resistant malaria by the migrant population [24]. Although Kelch 13 mutations associated with increased ring survival rates and delayed parasite clearance have been found on the China-Myanmar border [25][26][27][28], there is no evidence of the emergence of ACT-resistant P. falciparum strains along the China-Vietnam border; nevertheless, more attention should be paid to monitoring the parasite population to evaluate the potential emergence of ACT resistance. Third, the malaria knowledge rate was low among the migrant population in our study. It is noteworthy that border residents, especially young adults and women, have poor malaria knowledge [29,30]. In our study, only 19.4% of the surveyed population understood that malaria is transmitted through mosquito bites, and 23.2% understood malaria symptoms.
The study has some limitations. First, not all questionnaires were completed by the participants, possibly because the questionnaire was available only in an English version. Second, the study was conducted in Ningming County, one of the 8 border counties in Guangxi, so the results may not represent the situation along the whole China-Vietnam border.
Conclusions
In summary, this study indicated low malaria knowledge among the migrant population around the China-Vietnam border, and asymptomatic infections were detected, suggesting a risk of malaria re-establishment in the border areas during the post-elimination stage. The findings show that health education focused on high-risk populations such as migrant workers and forest goers should be strengthened. In an area like Guangxi, where literacy and language can be barriers, health education based on verbal communication channels such as the web, radio and mobile phones may be required under the COVID-19 pandemic situation. Further proactive case detection should also be carried out, not only in Ningming County but also in the other border counties of Guangxi, aiming to detect patients in a timely manner, as well as the asymptomatic infections that could cause the re-establishment of malaria.
Declarations Funding
The work was supported by the project "Key Techniques in Collaborative Prevention and Control of Major Infectious Diseases in the Belt and Road" (Grant No. 2018ZX10101002-004).
Ethics approval and consent to participate
This study was reviewed and approved by the ethical committee of the National Institute of Parasitic Diseases, Chinese Centre for Disease Control and Prevention (NIPD, China CDC, No. 2019008).
Consent for publication
Not applicable.
Availability of data and materials
The data were collected through a paper-based questionnaire and recorded on a private computer with a strictly protected ID and password; they can be accessed only by the co-author team members.
Competing interests
The authors declare that they have no competing interests.
Authors' Contributions
SESS: A Self-Supervised and Syntax-Based Method for Sentiment Classification
This paper presents a method for sentiment classification, called SESS (SElf-Supervised and Syntax-Based method). SESS includes three phases. Firstly, some documents are initially classified based on a sentiment dictionary, and then the sentiments of phrases and documents are iteratively revised. This phase provides some accurately labeled data for the second phase. Secondly, a machine learning model is trained with the labeled data. Thirdly, the acquired model is applied to the whole data set to get the final classification result. Moreover, to improve the quality of labeled data, the effect of compound and complex sentences on clause sentiment is examined. For three types of compound and complex sentences, i.e., coordination, concession and condition sentences, the clause sentiment is revised accordingly. Experiments show that, as an unsupervised method, SESS achieves performance comparable to state-of-the-art supervised methods on the same data.
Introduction
The task of sentiment classification is: given an opinionated piece of text, classify the opinion as falling under one of two opposing sentiment polarities (positive or negative) (Pang and Lee, 2008). The "piece of text" can refer to either a sentence or a document. In this paper, it refers to document, e.g., a movie review or a product review.
Generally, there are two types of approaches tackling the sentiment classification task: supervised (Dave et al., 2003; Yu and Hatzivassiloglou, 2003; Aue and Gamon, 2005; Read, 2005) and unsupervised (Pang, 2002; Turney, 2002; Gamon and Aue, 2005; Zagibalov and Carroll, 2008a). Supervised approaches usually employ machine learning methods to train a model based on some human-labeled data, and then apply the acquired model to new data. On the contrary, unsupervised approaches usually employ a list of sentiment words, e.g., a sentiment dictionary or some seed words, to help decide the sentiment polarity of documents. Supervised approaches generally achieve better performance than unsupervised ones, because methods such as SVM or Naïve Bayes have been deeply studied in the machine learning area, and the human-labeled data reveal a lot of clues about human classification. However, as a double-edged sword, human-labeled data also bring the disadvantage of domain dependence. Although some research has been done on domain adaptation (Blitzer et al., 2007), the problem is far from resolved. In this paper, a self-supervised method is proposed to share both the power of machine learning methods and the domain-independence property. The method is referred to as SESS (SElf-Supervised and Syntax-Based method). SESS takes three steps. Firstly, an unsupervised method is used on the data to label some documents (i.e., decide their sentiment polarities). Secondly, a machine learning method is applied to these labeled documents to train a model. Thirdly, the model is used to label all the documents (notice that a document may change the label acquired in the first step to another one after the third step). SESS makes use of machine learning methods without the need for any human-labeled data.
To ensure that the machine learning method can achieve good performance in the third step, the unsupervised method must provide accurately labeled documents in the first step. To satisfy that requirement, SESS makes two special designs in the first step. First, an iterative procedure is used to decide the polarities of documents and words. A general method may use a sentiment dictionary to decide document polarity. However, the words in the dictionary may not be comprehensive, and the sentiment of those words may not fit the current data set. The iterative method can find new sentiment words that are not in the dictionary and revises the polarity of words according to the current data set. Second, the polarities of documents and words are revised by analyzing the relation of clauses in compound and complex sentences. Particularly, seven types of compound and complex sentences are analyzed, of which three, i.e., coordination (discourse markers such as and or in addition), concession (discourse markers such as but or however) and condition (discourse markers such as if) sentences, take effect on the sentiment of clauses. The detailed effects of these sentences are examined.
The experiments show that SESS achieves an overall F1-score of 81.7% on data sets of four domains, which is comparable to 83.3%, the best result of the supervised approach in previous studies (Li and Zong, 2008) on the same data set.
The rest of this paper is organized as follows. Section 2 surveys related work. The overview of our approach is presented in Section 3. Sections 4, 5 and 6 describe the details of the SESS model. Experiments are shown in Section 7. The final section gives conclusions and proposes future work.
Related Work
Standard machine learning technologies such as SVM and Naïve Bayes are usually used by supervised approaches (Alpaydin, 2004). Different factors affecting the machine learning process were investigated. For instance, linguistic, statistical and n-gram features are used in (Dave et al., 2003). Semantically oriented words are utilized to identify polarity at the sentence level (Yu and Hatzivassiloglou, 2003). Selected words and negation phrases are investigated in (Na et al., 2004). Such approaches work well in situations where large labeled corpora are available for training.
But the performance of supervised approaches generally decreases when training data are insufficient or acquired from a different domain (Read, 2005). To solve that problem, unsupervised or weakly supervised methods can be used to take advantage of a small number of annotated in-domain examples and/or unlabelled in-domain data. For instance, Aue and Gamon (2005) train a model on a small number of labeled examples and large quantities of unlabelled in-domain data. In (Blitzer, 2007), structural correspondence learning is applied to the task of domain adaptation for sentiment classification of product documents. Li and Zong (2008) integrate training data from multiple domains.
Unsupervised approaches usually assume that there are certain words people tend to use to express strong sentiment, so that it might suffice to simply produce a list of such words by introspection and rely on them alone to classify the documents. Pang (2002) checked this assumption by asking human to read movie reviews, selecting ten to twenty sentiment words (like fantastic or terrible), and using them to classify reviews. The results show that such a method performs worse than supervised models built on sufficiently large training sets in the movie review domain.
Later, such human-given word-list methods were extended. The words given by humans are considered seed words, and other sentiment words in the documents are picked out by some kind of "similarity" between the words and the seed words. Turney (2002) selected two seed words, excellent and poor. For every phrase in a document, the mutual information between the phrase and each of the two words is computed, to reveal whether the phrase is more positive (like excellent) or more negative (like poor). A document is classified as positive if the average sentiment of all its phrases is positive, and vice versa.
Fewer seed words imply less domain dependency. Zagibalov and Carroll (2008a) select only the one word good as a seed positive word, and use negation words such as not to find initial negative expressions. In (Zagibalov and Carroll, 2008b), even the one word good is dropped. Instead, seed words are automatically generated based on a linguistic pattern called negated adverbial construction, like not very good. In this way, the problem of domain dependency is completely avoided.
Discourse markers such as but, and, or have been explored to identify the sentiment polarity of adjectives. Usually two adjectives have different sentiment polarity if they are connected by but, e.g., elegant but over-priced, while they usually have the same sentiment polarity if connected by and, e.g., clever and beautiful. Utilizing this property, Hatzivassiloglou and McKeown (1997) cluster adjectives collected from a domain corpus and decide the sentiment polarity of these adjectives in that domain. Yao and Lou (2007) improved that method by combining it with the idea of Turney (2002). This paper is different from the rest in that we are concerned with discourse markers between clauses, not adjectives, and integrate the discourse marker analysis into the process of document polarity decision.

Overview of Our Approach

Figure 1 shows the flow chart of SESS. In phase 1, an unsupervised approach is applied to the original data to automatically label some data. In phase 2, a supervised approach is applied to the labeled data to acquire a model. In phase 3, the model is applied to the original data to do classification.
In phase 1, the unsupervised approach adopts the method of (Zagibalov and Carroll, 2008b). That method initially selects some seed sentiment words automatically and assigns an initial sentiment score to those words. Then it employs an iterative procedure to update both the sentiment score of words and the polarity of documents. First, the sentiment of sentences and documents is decided based on the sentiment words (and phrases) they contain. A sentence is decided positive if the sum of the sentiment scores of the words (and phrases) in the sentence is greater than zero, and negative if the sum is less than zero. A document is judged positive if it has more positive sentences than negative ones. Second, the sentiment score of words (and phrases) is updated according to the judged polarity of documents. Basically, if a word (or a phrase) occurs in more positive documents than negative documents, it is judged positive, and its score is computed as the difference between the number of positive documents and the number of negative documents the word occurs in.
In this paper, we made three improvements over the method of (Zagibalov and Carroll, 2008b). First, positive/negative ratio control is introduced in the iterative procedure. Second, a sentiment dictionary is used to initialize the seed words. Third, compound and complex sentences are examined to revise the sentiment of clauses and documents. The supervised method in phase 2 requires the training data to be both adequate and precisely labeled. We can imagine that, if only a small percentage of the data are labeled, or very low precision is acquired on the labeled data, the supervised method will surely suffer bad performance no matter how powerful the method is. However, the quality of data labeling cannot be controlled directly, as no human-labeled data are available. Therefore, what can be controlled here is only the amount of labeled data. Since the unsupervised approach takes an iterative style, it labels more and more documents as the iteration goes on. To make the control, a stopping point is set on the percentage of labeled data. In the experiments, we select the golden mean: when 61.8% of documents have been labeled, the iteration procedure completes, and the labeled 61.8% of documents are provided as the training data of phase 2.
In phase 2, Naïve Bayes is selected as the realization of the supervised approach. As a widely used method, Naïve Bayes achieves good performance in many areas. In fact, however, the performance of phase 3 depends much more on the quality of the labeled data provided by phase 1 than on the particular machine learning method.
The Unsupervised Approach of SESS
The basic method of phase 1 adopts the method of (Zagibalov and Carroll, 2008b). The method keeps two lists, i.e., a sentiment vocabulary list and a sentiment document list. The sentiment vocabulary list is initialized with some seed words (see the Initialization Step below). Then the sentiment vocabulary list is used to identify the polarity of documents (Step 1), and the result is saved in the sentiment document list. Further, the sentiment document list is checked to reversely update the sentiment vocabulary list (Step 2). This iterative procedure completes when both lists remain unchanged. Our improvements on that method are introduced in the Initialization Step and Step 1.
Initialization Step

Zagibalov and Carroll (2008b) identify seed sentiment words in the following way: if a word such as good occurs more frequently in the documents than its negated adverbial form such as not very good, then the word is judged positive. That method has the advantage of domain independence, while its disadvantage is that only positive words are found and no negative words are found. One consequence is that the method tends to classify negative documents as positive, because knowledge about negative expressions is insufficient. To overcome that problem, we use a general sentiment dictionary to initialize the seed sentiment words. Since a sentiment dictionary contains a lot of positive words as well as negative words, the classification bias can be greatly lightened. In addition, since the general sentiment dictionary is applicable to many domains, the advantage of domain independence is retained.
The sentiment vocabulary list, denoted by V_sen, maintains a list of items, each of which is a unigram or bigram assigned a sentiment score. In the initialization step, a score of +1 is assigned to positive words and -1 to negative words. Some dictionaries such as Subjclueslen1-HLTEMNLP05 provide sentiment strength information, e.g., great is strong while feasible is weak. In such cases, 1 is assigned to strong words and 0.5 to weak ones (for both polarities).
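A minimal sketch of this initialization, assuming the dictionary has already been parsed into word lists; the function and argument names are hypothetical.

def init_vocabulary(pos_words, neg_words, strength=None):
    # Build the initial V_sen from a general sentiment dictionary.
    # strength (optional) maps a word to "strong"/"weak" when the
    # dictionary provides it; weak words get half magnitude.
    v_sen = {}
    for words, sign in ((pos_words, 1.0), (neg_words, -1.0)):
        for w in words:
            mag = 0.5 if strength and strength.get(w) == "weak" else 1.0
            v_sen[w] = sign * mag
    return v_sen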
Step 1: Identify the Sentiment of Documents
To compute the polarity of a document D, for each item w ∈ D with w ∈ V_sen, its score in D is weighted as

S_w = N × S_v × (L_w / L_d),

where L_w denotes the length of w, L_d the length of D, S_v the sentiment score of w in V_sen, and N is -1 if a negation word precedes w, or 1 otherwise.
D is divided into clauses by commas and full stops. The sentiment score of a clause c, denoted by CS(c), is defined as CS(c) = ∑ S_w for all w ∈ c. A clause c is positive if CS(c) > 0 and negative if CS(c) < 0. Zagibalov and Carroll (2008b) classify D as positive if it contains more positive clauses than negative ones. Those documents then compose the sentiment document list.
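The clause and document scoring described above can be sketched as follows; item lengths are measured in characters here, which is an assumption, only a subset of the negation words is shown, and document_score corresponds to the DS(D) used for ranking below.

NEGATIONS = {"not", "no", "none", "never", "hardly", "seldom"}  # subset

def clause_score(tokens, v_sen, doc_len):
    # CS(c): sum of weighted item scores S_w = N * S_v * L_w / L_d,
    # with the sign flipped when a negation word precedes the item.
    score = 0.0
    for i, w in enumerate(tokens):
        if w in v_sen:
            n = -1.0 if i > 0 and tokens[i - 1] in NEGATIONS else 1.0
            score += n * v_sen[w] * len(w) / doc_len
    return score

def document_score(clauses, v_sen):
    # DS(D): sum of clause scores; clauses is a list of token lists.
    doc_len = sum(len(w) for c in clauses for w in c)
    return sum(clause_score(c, v_sen, doc_len) for c in clauses)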
We found a disadvantage of this method. Since there are usually different numbers of classified positive and negative documents, when items are updated in Step 2, their scores (S_v) may be biased. In detail, for formula (3), if there are more classified positive documents than negative ones, then F_p may be bigger than the value it should be. To overcome that bias, a ratio control is designed, which requires the numbers of positive and negative documents in the sentiment document list to be the same.
Denote the numbers of positive and negative documents in one round of iteration as DN_positive and DN_negative respectively. To realize the ratio control, first, rank all documents according to their sentiment score, denoted by DS(D), where DS(D) = ∑ CS(c) for all c ∈ D. Second, take the smaller of DN_positive and DN_negative, i.e., Min(DN_positive, DN_negative), as a threshold; retain the positive and negative documents above the threshold in the sentiment document list, and remove the others. To make the process stricter, a weight α, where 0 < α ≤ 1, can be applied to the threshold Min(DN_positive, DN_negative). Figure 2 shows the whole process of classifying the documents with ratio control. Those documents form the sentiment document list.
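A sketch of the ratio control, assuming each document has already been scored with DS(D); the α weight tightens the equal-size cut as described above.

def ratio_control(scored_docs, alpha=0.618):
    # scored_docs: list of (doc_id, DS) pairs. Keep an equal number
    # of top-ranked positive and negative documents; alpha < 1
    # makes the cut stricter.
    pos = sorted((d for d in scored_docs if d[1] > 0), key=lambda d: -d[1])
    neg = sorted((d for d in scored_docs if d[1] < 0), key=lambda d: d[1])
    k = int(alpha * min(len(pos), len(neg)))
    return pos[:k], neg[:k]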
Step 2: Update the Sentiment Vocabulary List
For an item w, denote the number of positive documents containing w as F_p, and the number of negative documents containing w as F_n. Being preceded by a negation reduces the count by one; e.g., if "not good" is found in a negative document, then F_n = F_n - 1 for good. The idea of updating V_sen is: if F_p is much bigger than F_n, then w is very likely to be a positive item, and vice versa. A measure DIF(w) is designed to quantify this.
If DIF(w) ≥ 1, w is included in V_sen (current items in V_sen are removed if they no longer satisfy this condition). The sentiment score of w is then updated from F_p and F_n; as described in Section 3, the score reflects the difference between them.
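Since the exact DIF(w) and score-update formulas are not reproduced in this text, the sketch below substitutes the count-difference heuristic described in the overview; the inclusion test and the scoring here are explicitly assumptions, not the paper's equations.

def update_vocabulary(doc_freqs):
    # doc_freqs: {item: (f_p, f_n)}, the numbers of positive and
    # negative labelled documents containing the item (negated
    # occurrences already subtracted, as in Figure 4).
    # NOTE: abs(f_p - f_n) is a hypothetical stand-in for DIF(w),
    # and the signed count difference stands in for the score update.
    v_sen = {}
    for w, (f_p, f_n) in doc_freqs.items():
        if abs(f_p - f_n) >= 1:
            v_sen[w] = float(f_p - f_n)
    return v_sen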
Iteration Control
The unsupervised approach iterates between Steps 1 and 2. In (Zagibalov and Carroll, 2008b), the iteration completes when both V_sen and the sentiment document list no longer change. Generally, when the iteration completes, almost all the documents are classified. For SESS, since the goal of the unsupervised approach is to provide accurately labeled data, it is not necessary to label that many documents. In addition, generally, the more documents are classified, the lower the accuracy of the classification, because errors generated in earlier rounds of iteration propagate to the following ones. Therefore, the iteration should complete at some early point. However, it cannot complete too early, because the supervised approach still needs adequate data to train the model. A parameter β is set, where 0 < β < 1. When β×100 percent of the documents have been labeled, the iteration completes. In the experiments, β is set to 0.618 (i.e., the golden mean).
Syntax-based Approach of SESS
In phase 1 of SESS, the sentiment score of a document is calculated as the sum of the clause scores of the document, but the relation between clauses was neglected. For instance, consider the following sentence: The concept is a great one, but it's mostly a waste of time.
The former clause carries positive polarity while the latter one is negative. If the former CS(c) has a bigger absolute value than the latter one, the overall effect of these two clauses on the document is positive. However, since the sentence emphasizes the latter part, the effect should be negative. Given that hint, the polarity of the former clause should be reversed (changed from positive to negative). This example reveals that, to correctly compute the sentiment of a document, the relation between clauses should be examined.
There are mainly seven types of compound and complex sentences, which are listed in Table 1. Among them, only three have an effect on the sentiment of clauses, i.e., coordination, concession and condition sentences. First, the polarities of the two clauses of a coordination sentence should be consistent; if not, there may be an error. One principle to identify the error is: the polarity of the clause with the smaller absolute value of CS(c) is more likely to be an error, and it should be adjusted to keep consistent with the polarity of the other clause. Second, a concession sentence usually emphasizes the latter clause; therefore, the polarity of the former clause should be reversed. Third, a condition sentence generally talks about an assumption; thus, it should be ignored in the calculation of document sentiment. Figure 3 shows the realization.
The type of a compound or complex sentence is identified by the discourse markers listed in Table 1. Figure 3 is realized as follows. For an adjacent clause-pair <Cl_1, Cl_2>:

1. If it is a coordination sentence with CS(Cl_1) × CS(Cl_2) < 0, denote i as the index of the clause whose absolute value of CS(c) is bigger than the other, and set CS(Cl_1) = CS(Cl_2) = CS(Cl_i).
2. If it is a concession sentence, set CS(Cl_1) = -CS(Cl_1).
3. If it is a condition sentence, set CS(Cl_1) = CS(Cl_2) = 0.

Figure 4 shows how to revise the frequency of documents an item occurs in, in Step 2 of phase 1. For an adjacent clause-pair <Cl_1, Cl_2>:

1. If it is a coordination sentence, calculate F_p or F_n as before (without revision) for any item w ∈ Cl_1 or Cl_2.
2. If it is a concession sentence, then:
   2.1 If the current document is judged positive, set F_p = F_p - 1 for any item w ∈ Cl_1.
   2.2 If the current document is judged negative, set F_n = F_n - 1 for any item w ∈ Cl_1.
3. If it is a condition sentence, do not count the current document in either F_p or F_n for any item w ∈ Cl_1 or Cl_2.
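The clause-level revision rules of Figure 3 translate directly into code; the sketch below assumes the sentence type has already been identified from the discourse markers of Table 1.

def revise_pair(cs1, cs2, sentence_type):
    # Apply the Figure 3 rules to an adjacent clause pair with
    # scores (cs1, cs2); sentence_type comes from the discourse
    # markers of Table 1.
    if sentence_type == "coordination" and cs1 * cs2 < 0:
        # Make both clauses agree with the dominant one.
        dominant = cs1 if abs(cs1) > abs(cs2) else cs2
        return dominant, dominant
    if sentence_type == "concession":
        # Emphasis falls on the latter clause: reverse the former.
        return -cs1, cs2
    if sentence_type == "condition":
        # An assumption: excluded from the document score.
        return 0.0, 0.0
    return cs1, cs2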
Supervised Approach of SESS
Naïve Bayes is chosen as the machine learning method in phase 2. The items in the sentiment vocabulary list V_sen from phase 1 are taken as features, weighted by TFIDF.
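A minimal sketch of phase 2, assuming a scikit-learn realisation rather than the WEKA implementation actually used; restricting the vectorizer's vocabulary to V_sen's unigrams and bigrams mirrors the feature choice described above.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def train_phase2(labelled_docs, labels, v_sen):
    # Restrict the feature space to the items of V_sen (unigrams
    # and bigrams) and weight them by TFIDF, then fit Naive Bayes.
    vec = TfidfVectorizer(vocabulary=sorted(v_sen), ngram_range=(1, 2))
    X = vec.fit_transform(labelled_docs)
    return vec, MultinomialNB().fit(X, labels)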
Data and Tools
Experiments are carried out on a data set of reviews from four domains: books, dvds, electronics, and kitchen appliances. All the documents are in English. Each domain contains 1,000 positive and 1,000 negative documents. The sentiment dictionary required in the initialization step of phase 1 is the Subjclueslen1-HLTEMNLP05 sentiment dictionary, which contains 2294 positive words and 4146 negative words.
WEKA 3.4.11 is used as the implementation of the Naïve Bayes classifier.
Results of the Supervised Methods
In (Li and Zong, 2008), the data in each domain are partitioned randomly into training data and testing data, in proportions of 70% and 30% respectively. The development data are used to train a meta-classifier. Four types of feature selection are used, i.e., 1Gram, 2Gram, 1+2Gram and 1Gram+2Gram. The 1Gram method takes unigrams as feature candidates and optimally selects features by the Bi-Normal Separation (BNS) method. The 2Gram and 1+2Gram methods are similar. The 1Gram+2Gram method adopts the features selected in both 1Gram and 2Gram.
Results of The SESS Model
In phase 1 of SESS, α is set to 0.618 (i.e., the golden mean) for the first round of iteration, and 1.0 for the following rounds. β is set to 0.618. The following negation words are used: {not, no, none, nothing, nor, neither, never, hardly, seldom, don't, doesn't, didn't, isn't, wasn't, aren't, weren't, won't, wouldn't, can't, cannot, couldn't}. Table 2 shows the results of the unsupervised, supervised and self-supervised methods. The result of the unsupervised method is acquired in phase 1, with the iteration running until the sentiments of both V_sen and the sentiment document list no longer change (phases 2 and 3 are ignored). The dictionary-based self-supervised method differs from SESS in that, in phase 1, the sentiment dictionary is used directly to label documents; that is, no iteration is performed. Then the half of the documents ranked highest for positive and negative sentiment are chosen to form the training data. Phases 2 and 3 are the same. Table 2 shows that: 1) the self-supervised and supervised methods are better than the unsupervised one; 2) among self-supervised methods, SESS is better than the dictionary-based one; 3) SESS is better than three of the supervised methods and worse than one (1Gram+2Gram). Notice that the performance of the supervised methods is measured on 30% of the documents, while SESS is evaluated on the whole set.
Different settings of β in SESS are examined. Table 3 shows that the best performance is achieved when β is set to 0.618 among the three settings tested: 0.5, 0.618, and 0.8. The unsupervised method generally takes two or three rounds of iteration to label the data.
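As a schematic reading of the self-training selection step (not the authors' code), the following sketch shows how the top-ranked β fraction of pseudo-labeled documents on each side might be kept as training data:

```python
def select_training_docs(scored_docs, beta=0.618):
    """scored_docs: list of (doc, score) pairs; score > 0 means positive.
    Keep the top beta fraction of the most confidently labeled documents
    on each side as pseudo-labeled training data."""
    pos = sorted((d for d in scored_docs if d[1] > 0), key=lambda x: -x[1])
    neg = sorted((d for d in scored_docs if d[1] < 0), key=lambda x: x[1])
    k_pos, k_neg = int(beta * len(pos)), int(beta * len(neg))
    return ([(doc, "positive") for doc, _ in pos[:k_pos]] +
            [(doc, "negative") for doc, _ in neg[:k_neg]])

# Example: four documents scored in phase 1.
docs = [("d1", 0.9), ("d2", 0.2), ("d3", -0.7), ("d4", -0.1)]
print(select_training_docs(docs))
```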
The Improvement of Syntax-based Approach in SESS
The analysis of the three types of compound and complex sentences affects the performance of SESS jointly. To check their individual effects, five variant models were implemented, referred to as V 1 , V 2 , V 3 , V 4 , and V 5 . In V 1 , all revision of the three sentence types is removed. In V 2 , only the revision of coordination sentences is retained; in V 3 , only condition sentences; in V 4 , only concession sentences. In V 5 , all revisions are retained. Table 4 shows that the three types of revision together achieve a 3.7% F 1 -score improvement (from V 1 : 78.0% to V 5 : 81.7%). Among them, the revision of concession sentences plays the most important role (from V 1 : 78.0% to V 4 : 81.2%), condition sentences the second (from V 1 : 78.0% to V 3 : 78.7%), and coordination the least (from V 1 : 78.0% to V 2 : 78.4%).
Comparing the result of V 1 with Table 2, we can conclude that, for SESS, the self-supervised approach achieves a 5.6% improvement over the unsupervised one (from unsupervised: 72.4% to V 1 : 78.0%), while the analysis of compound and complex sentences contributes a further 3.7% (from V 1 : 78.0% to V 5 : 81.7%).
Conclusion and Future Work
SESS is proposed in this paper to tackle the task of document sentiment classification. It uses an unsupervised method to automatically label some data, trains a machine-learning model with the labeled data, and classifies the whole data set with the acquired model. The contributions of this paper are: 1) a self-supervised method for document sentiment classification; 2) an iterative method that provides accurately labeled data for training; and 3) an improvement to the iteration by revising clause sentiment for three types of compound and complex sentences. Experiments show that SESS outperforms the unsupervised method, the dictionary-based self-supervised method, and three supervised methods, and is only slightly worse than one specially designed supervised method (1Gram+2Gram).
In the future, several avenues remain to be explored. First, the use of linguistic knowledge in sentiment classification needs further study; for instance, the word "great" in the phrase "a great deal" is currently treated as a sentiment word, though it carries no sentiment there. Second, the non-opinionated part of a document should be separated from the opinionated part. For example, the content of a book about a tragic story currently mixes with reviewers' opinions expressing happy feelings about the book, which introduces errors.
Cell Survival Failure in Effector T Cells From Patients With Systemic Lupus Erythematosus Following Insufficient Up‐Regulation of Cold‐Shock Y‐Box Binding Protein 1
The importance of cold‐shock Y‐box binding protein 1 (YB‐1) for cell homeostasis is well‐documented based on prior observations of its association with certain cancer entities. This study was undertaken to explore the role of YB‐1 in T cell homeostasis and survival and the potential contribution of YB‐1 to the pathogenesis of systemic lupus erythematosus (SLE).
INTRODUCTION
Active control of survival and apoptotic cell death pathways is crucial for T cell homeostasis and, thus, balanced immune responses. Mechanisms that support T cell survival ensure efficient responsiveness to acute immune challenges and are required for the generation of T cell memory. Apoptosis, in turn, serves to eliminate dysfunctional or infected T cells and is required to terminate immune responses (1). Disturbed T cell homeostasis has been implicated in the pathogenesis of autoimmune diseases, such as systemic lupus erythematosus (SLE). The underlying pathophysiologic mechanisms, however, remain incompletely understood. SLE, similar to rheumatoid arthritis and other chronic inflammatory diseases, has been linked to a dysregulation of apoptosis (2)(3)(4). Interestingly, both reduced and increased rates of T cell apoptosis have been reported in SLE. For instance, anergy-resistant T cells from SLE patients were found to escape apoptosis by up-regulating cyclooxygenase 2. Moreover, T cells from SLE patients were reported to express elevated levels of the survival factor Bcl-2 (5). In contrast, experiments on peripheral blood mononuclear cells (PBMCs) from SLE patients with infections and fever revealed enhanced apoptosis in activated T cells (6), a fact that may also apply to other autoimmune diseases (7). Consistent with this finding, at least some SLE patients have a reduced number of peripheral CD4+ T cells and exhibit abundant circulating apoptotic material that promotes autoantibody production (8,9).
Two distinct apoptosis pathways are engaged to maintain homeostatic control of activated T cell populations (10). The extrinsic pathway is triggered by binding of FasL to its cell death-inducing surface receptor Fas/CD95/APO-1 or by tumor necrosis factor (TNF) to related TNF receptors (TNFRs). As both ligands and receptors are progressively up-regulated during T cell activation, they eventually counteract excessive T cell expansion through activation-induced cell death (AICD) (11). The intrinsic pathway reflects the existence of T cell-intrinsic prosurvival/antiapoptotic and proapoptotic factors. The balance may be shifted toward the proapoptotic factors under conditions of severe cell stress or at the end of T cell clonal expansion (ACAD) (12). DNA damage induces enhanced expression of the Bcl-2-interacting, proapoptotic p53 up-regulated modulator of apoptosis (PUMA), which in turn unleashes the proapoptotic protein Bax. This leads to the release of mitochondrial cytochrome C into the cytosol, where it activates caspase 9, which initiates the apoptotic execution cascade at the point at which the extrinsic and intrinsic pathways converge (13). Of note, apoptosis may also be abrogated at the level of, for example, death receptors (14), caspases (15), or mitochondria (16). The kinase Akt phosphorylates and thereby inactivates proapoptotic molecules (17), whereas Bcl-2 and Bcl-xl act as prominent antiapoptotic factors at the mitochondrial level (13,18). In fact, Bcl-2 or Bcl-xl gain-of-function mutations are often encountered in T cell leukemia (19).
We recently showed that nuclear enrichment of cold-shock Y-box binding protein 1 (YB-1) tightly correlates with the proliferation of both activated T cells and T cell acute lymphoblastic leukemia (20). YB-1 expression is required for cell cycle progression in primary and Jurkat T cells (20). The pleiotropic functions of YB-1 include transcriptional and translational activities (16,21). Moreover, YB-1 has been implicated in the regulation of cell homeostasis and apoptosis in endothelial and epithelial cells (22)(23)(24)(25). In this study, we identified YB-1 as a crucial determinant for the disturbed homeostasis of activated T cells in SLE.

PATIENTS AND METHODS

Study subjects. The study included 45 SLE patients (SLE Disease Activity Index [SLEDAI] [26] scores ranging from 0 to 10), of whom 37 were women and 8 were men, with ages of female SLE patients ranging from 20 to 79 years and ages of male SLE patients ranging from 21 to 68 years (mean age of 48 years for all SLE patients), and 24 age-matched healthy donors, of whom 18 were women and 6 were men, with ages of female controls ranging from 23 to 53 years and ages of male controls ranging from 32 to 56 years (mean age of 43 years for all controls). All SLE patients met ≥4 American College of Rheumatology (ACR) criteria for SLE (27). The patients received either monotherapy or combination therapy as follows: 9 patients received mycophenolic acid (median ± SD daily doses of 1,080 ± 288 mg with monotherapy or 1,250 ± 599.02 mg with combination therapy), 5 patients received azathioprine (median ± SD daily dose of 100 ± 15.81 mg), 6 patients received methotrexate (median ± SD weekly dose of 11.25 ± 3.03 mg), 23 patients received hydroxychloroquine (median ± SD daily dose of 300 ± 97.6 mg), 7 patients received intravenous belimumab (10 mg/kg of body weight once a month), and 16 patients received prednisone (median ± SD daily dose of 4.5 ± 2.31 mg). Five patients had received antibiotic treatment for Pneumocystis jiroveci prophylaxis within the last 3 weeks before sampling. No correlation was observed between these treatments and YB-1 expression in activated T cells from SLE patients (data not shown).
Cell isolation and culture. PBMCs were isolated using density gradient separation with Pancoll human solution (PAN-Biotech). PBMCs were rested for 1 day prior to stimulation. CD4+ T cells were purified using CD4 MicroBeads (Miltenyi Biotec) according to the manufacturer's instructions. CD4+ T cells were cultured in complete RPMI supplemented with 100 units/ml of penicillin, 100 µg/ml of streptomycin (Thermo Fisher Scientific), and 10% fetal calf serum (Biochrom) in an atmosphere of 5% CO2. CD4+ T cells were activated with either anti-CD3 alone or anti-CD3 and anti-CD28 antibodies immobilized on microspheres (Molecular Probes Inc.) at a 1:1 ratio. As previously described (28), 10⁸ microspheres were coated with 1 µg/ml of anti-CD3 antibody plus 2 µg/ml of anti-CD28 or isotype control antibody.
Plasmids and cloning. For the knockdown of YB-1, the plasmids pLKO and pLKO-YB-1-short hairpin RNA (shRNA) (Sigma-Aldrich) were genetically modified by replacing the puromycin-resistance gene with the cytoplasmic domain-depleted nerve growth factor receptor gene (ΔNGFR) as a selection marker, as previously described (20). The plasmid pCCLsin.PPT.hPGK.ΔNGFR was kindly provided by R. Bacchetta and L. Passerini (San Raffaele Telethon Institute for Gene Therapy, San Raffaele Scientific Institute, Milan, Italy). To overexpress YB-1, the plasmid FuGW was modified with the open-reading frame of green fluorescent protein (GFP)-tagged YB-1. The DNA constructs pCG-HΔ24 and pCG-FΔ30 for the measles virus transduction system were kindly provided by F. L. Cosset and F. Fusil (International Center for Infectiology Research, University of Lyon, Lyon, France). To overexpress Akt1, the plasmids pLenti-Akt1-GFP (RC220257L2) and GFP control (PS100071) were obtained from Origene (Herford, Germany). All constructs were verified by sequencing.
Viral transduction of CD4+ T cells.
To generate pseudotyped measles viral particles, HEK 293T cells were transfected with the expression vector (pLKO or pLKO_YB-1shRNA; both containing ΔNGFR as selection marker), the packaging plasmid psPAX2, and the envelope plasmids pCG-HΔ24 and pCG-FΔ30 using calcium phosphate precipitation. Forty-eight and 72 hours after transfection, supernatants containing pseudotyped viral particles were collected and passed through 45-µm filters (Sarstedt), with a 42% polyethylene glycol (PEG) solution added at a ratio of 1:5. After incubation for 16 hours at 4°C, the particles were concentrated by centrifugation and the pellet was resuspended in RPMI. For transduction of CD4+ T cells, culture plates were coated with 80 µg/ml of RetroNectin (Clontech Laboratories) for 16 hours at 4°C and blocked with 2% bovine serum albumin. After being washed in phosphate buffered saline, the concentrated viral particles were spinoculated onto the wells at 2,000g for 2 hours, and CD4+ T cells were added. After 16 hours of incubation, T cells were stimulated as described above.
Antibodies and inhibitors. All antibodies used for flow cytometry and Western blotting, with clone names, can be found in Supplementary Table 1.

Protein and RNA quantification. For Western blotting and quantitative reverse transcription-polymerase chain reaction (qRT-PCR), NGFR-positive cells were sorted with a FACSAria III flow cytometer (BD Biosciences). Extraction of total RNA, reverse transcription of isolated messenger RNA (mRNA), and quantification of mRNA by qRT-PCR were performed as previously described (29). Oligonucleotides were obtained from TIB MolBiol (see Supplementary Table 1, available on the Arthritis & Rheumatology website at http://onlinelibrary.wiley.com/doi/10.1002/art.41382/abstract). Data analyses were carried out using CFX96 Manager Software (Bio-Rad). Fold change in expression of each gene was normalized to the expression of GAPDH using the 2^−ΔΔCt method (30).
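As a worked illustration of the 2^−ΔΔCt fold-change calculation (a sketch only; the Ct values and the helper function name are invented):

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: normalize the target gene's
    threshold cycle to the reference gene (here GAPDH) within each sample,
    then normalize the treated condition to the control condition."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Illustrative Ct values, e.g., YB-1 knockdown vs. control shRNA:
print(fold_change(26.0, 18.0, 24.0, 18.0))   # ddCt = 8 - 6 = 2, so 0.25-fold
```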
Lysates of NGFR-positive T cells were prepared (29), and electrophoresis (12% sodium dodecyl sulfate-polyacrylamide gel electrophoresis [SDS-PAGE] gels) and blotting onto nitrocellulose membranes were performed, as described previously (20). Blots were probed with antibodies and visualized and quantified using an Odyssey scanner and software (Li-Cor).
Flow cytometric analysis. Infected CD4+ T cells were identified by ΔNGFR staining and noninfected CD4+ T cells were used as infection control. For intracellular staining of YB-1, Ki-67, Noxa, Bcl-2, Bcl-xl, and active caspase 3, cells were fixed using the FoxP3 Intracellular/Nuclear Staining Kit (eBiosciences). Cytometric measurements were performed on a FACSCanto II (BD Biosciences) and analyzed with FlowJo software (Tree Star). For the measurement of apoptotic cells, cells were stained with propidium iodide (PI) and annexin V in binding buffer (10 mM Hepes, pH 7.4, 140 mM NaCl, 2.5 mM CaCl 2 ) or in blocking buffer (10 mM Hepes, pH 7.4, 140 mM NaCl, 2 mM EGTA) as a control. For the detection of dead cells in combination with intracellular staining, the Zombie Violet Fixable Viability Kit (BioLegend) was used according to the manufacturer's instructions.
Statistical analysis. For statistical analyses, a Student's 2-tailed t-test was performed. For multiparametric experiments, the analysis of variance test was applied using GraphPad Prism version 6. Pearson's correlation test was used to analyze relationships between YB-1 levels and homeostasis of activated T cells in SLE patients and healthy donors.
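For reference, the named tests map directly onto standard SciPy routines; a sketch on invented example values, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Invented example arrays: YB-1 MFI and percent viable cells per donor.
yb1_mfi   = np.array([120.0, 250.0, 310.0, 90.0, 400.0, 180.0])
viability = np.array([35.0, 62.0, 71.0, 28.0, 85.0, 48.0])

r, p_r = stats.pearsonr(yb1_mfi, viability)              # Pearson's correlation
t, p_t = stats.ttest_ind(viability[:3], viability[3:])   # Student's 2-tailed t-test
f, p_f = stats.f_oneway(viability[:2], viability[2:4], viability[4:])  # ANOVA
print(f"r={r:.3f} (P={p_r:.4f}); t-test P={p_t:.4f}; ANOVA P={p_f:.4f}")
```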
Sensitivity of CD4+ T helper cells to apoptosis in SLE patients.
The cold-shock protein YB-1 is expressed in T cells, especially after activation (20), and has been implicated in cell survival (23,31). Therefore, we hypothesized that YB-1 might contribute to the altered homeostasis and survival of SLE T cells. As leukopenia is characteristic of SLE patients (32), we first evaluated the contribution of the CD4 and CD8 populations to leukopenia. T helper (CD4+) cell numbers in particular showed a significant reduction (Figure 1A). In order to assess the susceptibility of CD4+ T cells to apoptosis, CD4+ T cells enriched from PBMCs of SLE patients and healthy donors were incubated for 24 hours prior to polyclonal stimulation with anti-CD3-coated microspheres (Figure 1B). Nonviable T cells were identified by flow cytometry using Zombie Violet (ZV) staining. After 2 days of incubation, the frequencies of ZV+CD4+ T cells obtained from SLE patients and healthy donors were comparable. However, after 6 days of incubation, the percentage of apoptotic/dead cells within the CD4+ T cell populations was increased in samples derived from SLE patients (mean ± SD 44.7 ± 3.168%) compared to those obtained from healthy donors (mean ± SD 28.31 ± 3.946%) (P = 0.0036).
Correlation of YB-1 high expression with viability of activated primary CD4+ T cells. Next, we investigated whether YB-1 counteracts apoptosis in activated CD4+ T cells. Therefore, we first analyzed YB-1 expression in healthy donors (Figure 2A). T cells stimulated with anti-CD3-coated microspheres showed YB-1-expressing T cell frequencies of 60-80% on day 4, with the frequency of YB-1-expressing T cells dropping by half on day 6 (Figure 2B). This decline was precluded when interleukin-2 (IL-2) was added. In turn, IL-2 dependency was overcome when microspheres coated with both anti-CD3 and anti-CD28 were used (Figures 2C and D). In fact, based on Pearson's correlation coefficients, we observed a strong positive correlation between YB-1 expression levels and T cell viability (Figure 2E).
Effects of YB-1 inhibition by shRNA on apoptosis in CD4+ T cells, and promotion of T cell survival by ectopic overexpression of wild-type YB-1 (YB-1 wt ). Next, we examined whether reduced levels of YB-1 in dying CD4+ T cells are merely a consequence of apoptosis, or whether a decrease in YB-1 expression contributes to apoptosis. To test the latter possibility, we used an shRNA knockdown approach. Specifically, constructs expressing YB-1-specific or control shRNA were virally transduced into primary T cells. Truncated NGFR expressed from the same constructs served as a surface marker for the detection and enrichment of transduced cells by flow cytometry and MACS technology. Upon stimulation, T cells transduced with YB-1-specific shRNA displayed a dramatic reduction of YB-1 mRNA and protein expression compared to controls (Figure 3A). In order to monitor the frequencies of preapoptotic and apoptotic/necrotic cells, CD4+ T cells were stained with annexin V and PI on days 6, 8, and 9. Compared to controls, YB-1 depletion increased the frequencies of preapoptotic cells (annexin V+ and PI−) and apoptotic cells (annexin V+ and PI+) by almost 100% (Figure 3B). A hallmark of apoptosis is the activation of caspases and, as depicted in Figure 3C, active caspase 3 was unambiguously increased upon YB-1 knockdown, validating the observed cell death as an apoptotic process.
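The annexin V/PI quadrant logic used here reduces to a simple classification per event; a schematic sketch (the thresholds and function name are arbitrary placeholders, not the study's gating values):

```python
def classify_event(annexin_v, pi, threshold=1000.0):
    """Classify a flow cytometry event by annexin V / PI positivity."""
    av_pos, pi_pos = annexin_v > threshold, pi > threshold
    if av_pos and not pi_pos:
        return "preapoptotic"   # annexin V+ PI-
    if av_pos and pi_pos:
        return "apoptotic"      # annexin V+ PI+
    if pi_pos:
        return "necrotic"       # PI+ annexin V-
    return "viable"

print(classify_event(annexin_v=2500.0, pi=300.0))   # -> "preapoptotic"
```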
To determine whether elevated YB-1 expression is sufficient to enhance T cell survival, we overexpressed GFP-tagged YB-1 (GFP-YB-1 wt ) from a lentiviral construct. Using a GFP-expressing construct as a control, we found that on day 6 of stimulation, the frequencies of viable CD4+ T cells expressing GFP-YB-1 wt exceeded those of just GFP-expressing controls by nearly 100% (Figure 3D). Taken together, the results demonstrate that up-regulation of YB-1 tightly correlates with viability of CD4+ T cells and that knockdown of YB-1 strongly promotes apoptosis of activated T helper cells, whereas YB-1 overexpression increases CD4+ T cell viability. Therefore, we propose that YB-1 acts as a crucial determinant for the death and survival of T cells.
Enhancement of mRNA and protein expression of the proapoptotic molecule PUMA by YB-1 inactivation.
We next searched for factors that mediate apoptosis downstream of YB-1 knockdown. The extrinsic pathway for the induction of apoptosis in T cells involves either crosslinking of the Fas receptor (FasR) by FasL or binding of TNF to its death domain-bearing receptors. In Jurkat and HeLa cells, YB-1 was found to suppress the fas promoter (33). We therefore monitored FasR mRNA and cell surface receptor expression in activated T cells transduced with YB-1-specific shRNA or control shRNA. Neither qRT-PCR analysis on day 4 of stimulation nor assessment of surface FasR (CD95) on days 4 and 6 of stimulation revealed significant changes in FasR in YB-1-depleted T cells compared to controls (Figure 4A).
We also addressed the possibility that differential FasR crosslinking by FasL might account for YB-1-related changes in the frequencies of apoptotic cells. To this end, we forced crosslinking of FasR with anti-FasR combined with protein A on activated T cells ectopically expressing GFP or GFP-YB-1 wt , respectively. Compared to untreated controls, apoptosis remained unaffected by FasR crosslinking (Figure 4B). We therefore conclude that differences in FasL expression do not account for YB-1-dependent protection against apoptosis.
To evaluate a potential role of YB-1 in TNF-mediated apoptosis in activated T helper cells, we determined TNF protein expression in YB-1-depleted cells and control cells upon stimulation with anti-CD3 alone or with anti-CD3 plus anti-CD28. Flow cytometric analysis revealed that in both cases, the level of intracellular TNF was substantially lower in YB-1-knockdown T helper cells in comparison to control samples ( Figure 4C).
We speculated that the reduced levels of intracellular TNF might reflect prior enhanced secretion of this cytokine, and we therefore tested the effect of adding TNF to the cultures. However, the effect of added TNF on the already increased frequency of apoptotic CD4+ T cells was similar between YB-1 shRNA-expressing and control shRNA-expressing T helper cells (Figure 4D). Thus, FasL and TNF are unlikely to be the decisive factors in the induction of apoptosis in T helper cells with reduced YB-1 expression.
Consequently, we concluded that YB-1 is involved in the control of intrinsic mitochondrial apoptosis pathways and analyzed the expression of proapoptotic marker molecules, including NOXA, PUMA, and p53, in anti-CD3/anti-CD28-stimulated CD4+ T cells in the absence or presence of YB-1 knockdown. On day 4 of stimulation, normalized mRNA levels of NOXA and p53 had not significantly increased when YB-1 was depleted. In contrast, PUMA mRNA showed a robust increase of ~50% compared to controls ( Figure 4E). Flow cytometric analysis demonstrated a clear reduction of the frequencies of NOXA+ T helper cells upon YB-1 knockdown ( Figure 4F).
To quantify protein expression of p53 and PUMA, we performed Western blot analysis on extracts from NGFR+ T helper cells ( Figure 4G). While p53 protein expression was slightly reduced in the absence of YB-1, PUMA protein expression was strongly increased in YB-1-depleted cells compared to controls ( Figure 4G).
Correlation between YB-1 depletion and reduced expression of the antiapoptotic molecules Bcl-xL and Bcl-2 and of Akt in activated CD4+ T cells.

Given that YB-1 overexpression protects against apoptosis, we hypothesized that YB-1 might be involved in the control of antiapoptotic and/or prosurvival molecules. Accordingly, we performed qRT-PCR analyses on YB-1-depleted and control cells to determine transcript levels of the BAX antagonists Bcl-xL and Bcl-2 and of Akt, a key regulator in the main survival pathway (Figure 5A). At day 4 of anti-CD3/anti-CD28 stimulation, we found a reduction in Bcl-xL transcript levels of ~50% in YB-1 knockdown cells.

As YB-1 may control translation (21), protein expression of the same molecules was also analyzed. Indeed, monitoring protein expression by flow cytometry revealed that the frequencies of Bcl-xL-, Bcl-2-, and Akt-expressing cells were significantly reduced when YB-1 was knocked down (Figures 5B and C). Whereas the percentage of Bcl-xL- and Akt-positive cells dropped to ~50% compared to controls, YB-1-depleted, Bcl-2 protein-expressing cells were reduced by ~85% compared to controls (Figure 5B).
To further analyze the possible involvement of YB-1 in the Akt-dependent survival pathway, Akt itself or its activating kinase PI3K was inhibited during stimulation using AKT inhibitor VIII or Ly294002, respectively (Figures 5D and E). To evaluate the extent of inhibitor activity, both inhibitors were tested on activated T cells; at 24 hours and 5 days after the initiation of treatment with either inhibitor during stimulation, we observed that Akt phosphorylation was down-regulated in comparison to that in DMSO-treated activated T cells (Figure 5D). For YB-1-depleted T cells, both treatments had little, if any, effect beyond the already high level of apoptosis in untreated controls (Figure 5E). In contrast, a profound increase in apoptosis was observed in control shRNA-transduced, YB-1-expressing cells. With inhibition of PI3K, virtually the same levels of apoptotic cells as in YB-1-depleted cell populations were achieved (Figure 5E).
Correlation between low YB-1 expression in activated CD4+ T cells from SLE patients and enhanced apoptosis.
Various studies have shown that SLE T helper cells exhibit enhanced susceptibility to apoptosis (7,34). Based on our results, we hypothesized that YB-1 up-regulation in SLE T cells is not induced upon stimulation. Therefore, we isolated CD4+ T cells from SLE patients and quantified YB-1 expression. In analyses of YB-1 expression by CD4+ T cells, no significant differences were found between healthy donors and SLE patients when the cells were analyzed ex vivo (data not shown) or in a resting state after 4 days of culture (Figure 6A). Next, CD4+ T cells were stimulated with either anti-CD3/isotype- or anti-CD3/anti-CD28-coated microspheres for up to 6 days and then analyzed for YB-1 expression and the frequency of apoptotic cells. On day 4 of stimulation, CD4+ T cells from SLE patients showed a significant decrease of intracellular YB-1 expression compared to control cells from healthy donors (Figure 6B). As shown above for healthy donors (Figure 2E), a highly significant correlation between YB-1 expression and cell survival was observed using Pearson's test (R² = 0.9479; P = 0.0001), even when assessed across different modes of stimulation in SLE (Figure 6C). Since rapamycin treatment is used in SLE patients, T cells were incubated with rapamycin during anti-CD3/anti-CD28 stimulation to rescue apoptotic T cells (Figure 6D). Indeed, rapamycin induced a significant increase in YB-1 expression in SLE T cells and a 50% reduction in cell death.

[Figure 5. Regulation of antiapoptotic molecules by YB-1 in primed CD4+ T cells: Bcl-2, Bcl-xl, and Akt mRNA and protein levels in YB-1 shRNA- versus control shRNA-transduced cells (A-C); Akt/p-Akt immunoblots after preincubation with the inhibitors AKT VIII or Ly294002 (D); annexin V/PI staining on day 5 after Akt or PI3K inhibition (E).]

[Figure 6. Correlation between reduced YB-1 levels in activated CD4+ T cells from SLE patients and loss of viability, restorable by re-expression of YB-1 or Akt-1: YB-1 expression in resting (A) and activated (B) cells; correlations of YB-1 expression with the percentage of dead cells (C, F); rescue by 20 nM rapamycin (D), GFP-YB-1 wt (E, G), or Akt-1-GFP (H); and a model in which insufficient YB-1 up-regulation in effector T cells feeds a vicious circle of apoptosis, defective clearance of apoptotic debris, autoreactive T cell activation, and chronic inflammation that drives SLE pathology (I).]
To further evaluate the relationship between reduced YB-1 levels and the susceptibility of SLE T helper cells to stimulation-induced apoptosis, we performed a rescue experiment in which we overexpressed GFP-tagged YB-1 in SLE T cells using lentiviral transduction (Figure 6E). In fact, the viability of GFP-tagged YB-1-expressing SLE T helper cells increased by 20-30% compared to GFP-tagged controls. Thus, survival of SLE T helper cells was rescued by YB-1 overexpression (Figure 6F) (R² = 0.8109; P < 0.0001). Even in an SLE patient (defined as an individual with a SLEDAI score of >4) in whom 90% of T cells were YB-1 low ZV+ 4 days after the onset of stimulation (GFP control cells), ectopic overexpression of YB-1 (GFP-tagged YB-1 wt ) enhanced the survival of T cells (Figure 6G).
To elucidate the proposed signaling pathway through Akt by which YB-1 stimulates cell survival, we overexpressed Akt-1 GFP and GFP control in SLE T cells ( Figure 6H). Indeed, in SLE patients, Akt-1 overexpression rescued the apoptosis-prone T cells, and this was achieved, at least partly, by enhancing T cell survival (P = 0.0184 versus controls; n = 5).
DISCUSSION
In this study, we identified YB-1 as a central factor in the regulation of survival induction in activated T cells, and we established its malfunction as a characteristic of T cells from SLE patients. Our data show that the frequency of viable activated T cells is high as long as cells retain YB-1 expression. Mechanistically, YB-1 acts on the proapoptotic PUMA-Bcl-2 pathway and enhances the PI3K-Akt axis to mediate survival. Forced up-regulation of YB-1 in activated T cells of SLE patients indeed rescued the T cells from apoptosis.
We show for the first time that YB-1 down-regulation in activated T cells ultimately leads to apoptosis. The expression of the proapoptotic molecules NOXA and PUMA in activated T cells requires YB-1 down-regulation. NOXA up-regulation is usually closely connected to nutritional deprivation, but since nutrient levels in our experiments were not limited, YB-1 likely operates at a connecting point of downstream apoptosis pathways (35). PUMA is also strongly up-regulated upon YB-1 knockdown. It has been shown that YB-1 interacts with p53, thereby hampering each other's function. Thus, in the absence of YB-1, free p53 might up-regulate the expression of its downstream target PUMA (36)(37)(38). Therefore, the absence of YB-1 induces apoptosis as a default pathway.
The present study shows that overexpression of YB-1 markedly enhances the survival of activated T cells. In fact, consistent with earlier studies in malignant melanoma cells (39), we demonstrated that YB-1 is necessary for the expression of the antiapoptotic molecules Bcl-2 and Bcl-xl (40). Our data also demonstrate that YB-1 induces Akt expression, a major player in the survival pathway of CD4+ T cells (41). In cells with YB-1 knockdown, Akt expression is reduced. Moreover, Akt inhibition virtually abolished the survival-promoting effect of YB-1 (i.e., it led to rates of apoptosis in YB-1 high T cells similar to those observed in T cells expressing YB-1 shRNA or in YB-1-deficient T cells of SLE patients) (Figures 5E and 6E) (42). Although Akt-1 overexpression rescued SLE T cells at least partially from apoptosis (Figure 6H), inhibition of PI3K entailed an even stronger induction of apoptosis in the control shRNA cells, suggesting that, beyond YB-1-Akt signaling, other pathways might be involved in regulating survival in CD4+ T cells (Figure 5D). Since rapamycin treatment yields a marked increase in the expression of YB-1 in SLE T cells, the survival pathway could also include aspects of autophagy; necrosis is less likely to be involved, as PI+ annexin V− T cells (Figure 3B) are not often observed upon YB-1 reduction in activated T cells (43,44). As p90 rsk is the major kinase for YB-1 phosphorylation in T cells (20,45), we propose an extended Akt survival pathway encompassing a p90 rsk-YB-1-Akt-Bcl-2 axis.
Our results provide an explanation for the malfunction of T cell responses in SLE patients due to insufficient up-regulation of YB-1 (Figure 6I). Compared to T cells from healthy donors, activated T cells from SLE patients were hardly able to enhance or maintain the expression of YB-1 upon activation. Consequently, YB-1 low T cells will undergo apoptosis, and responses against pathogens may thus be terminated prematurely. Thereby, failure to up-regulate YB-1 leads to abrogated immune responses and an enhanced appearance of apoptotic waste. Taken together with previous findings showing that apoptotic cells and RNA/DNA debris are often insufficiently cleared in SLE patients (46,47), this may result in a vicious cycle of prolonged inflammation during which T cells get activated but often fail, and self-reactive T cells might be activated by the debris. Over a few waves of T cell activation caused by infections, each followed by failure of YB-1-mediated survival, the accumulation of debris might contribute to starting and maintaining the autoimmune process. This model is supported by the fact that flares of SLE often start with infections (48) and that SLE patients experience infections 5 to 10 times more frequently than individuals without SLE (8).
Our data show that upon suboptimal stimulation of T cells, IL-2 increases survival and frequencies of YB-1 high T cells from SLE patients but does not impact YB-1 expression of optimally activated T cells from healthy donors ( Figure 2B and Supplementary Figure 1). Currently, low-dose IL-2 is under investigation as a potential treatment in SLE, and YB-1 stability might explain why IL-2-treated patients seem to experience fewer infections than conventionally treated patients (49). It is tempting to speculate that expansion of Treg cells following low-dose IL-2 therapy may be attributable to stabilized Treg cell survival via recovery of YB-1 up-regulation (50). Indeed, other likely successful treatment strategies in SLE, such as the one shown herein for rapamycin application, may unknowingly aim for YB-1 stabilization in T cells ( Figure 6D) (44).
Our data show that YB-1 expression correlates strongly with the survival of CD4+ T cells. The observed effects of modulating YB-1 expression appear to confirm its antiapoptotic function. Therefore, it can be postulated that YB-1 behaves like a survival sensor in apoptosis-prone T cells from SLE patients. So far, autoantibody production, organ involvement, and disease severity do not correlate with the frequency of YB-1 low T cells upon activation (data not shown). Whether a certain threshold of YB-1 expression may be used as a new biomarker for the prediction of certain disease manifestations or for the progression of SLE (or even other autoimmune diseases) has to be examined in a larger patient cohort and in a prospective longitudinal study, as previously shown for IL-10 (51). Also, several other biomarkers have been suggested for SLE, including cytokines of the IL-1 family, which are increased in SLE (52). YB-1 expression levels inversely correlated with the diagnosis of SLE in general and thus might even contribute to more systemic effects in the disease. Since increased rates of infection did not correlate with SLEDAI scores (8), the infection rates may well be correlated with the abrogation of YB-1 expression in SLE patients. Indeed, T cells from one SLE patient with extremely low YB-1 expression upon stimulation could be rescued by enhancing YB-1 expression (Figure 6E). Therefore, YB-1 may be a new target molecule in the treatment of SLE.
Low Levels of Granulocytic Myeloid-Derived Suppressor Cells May Be a Good Marker of Survival in the Follow-Up of Patients With Severe COVID-19
Infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes a disease (coronavirus disease 2019, COVID-19) that may develop into a systemic disease with immunosuppression and death in its severe form. Myeloid-derived suppressor cells (MDSCs) are inhibitory cells that contribute to immunosuppression in patients with cancer and infection. Increased levels of MDSCs have been found in COVID-19 patients, although their role in the pathogenesis of severe COVID-19 has not been clarified. For this reason, we raised the question of whether MDSCs could be useful in the follow-up of patients with severe COVID-19 in the intensive care unit (ICU). Thus, we monitored the immunological cells, including MDSCs, in 80 patients admitted to the ICU. After 1, 2, and 3 weeks, we examined these cell populations for a possible association with mortality (40 patients). Although the basal levels of circulating MDSCs did not discriminate between the two groups of patients, the last measurement before the endpoint (death or ICU discharge) showed that patients discharged alive from the ICU had lower levels of granulocytic MDSCs (G-MDSCs), higher levels of activated lymphocytes, and lower levels of exhausted lymphocytes compared with patients who had a bad evolution (death). In conclusion, a steady increase of G-MDSCs during the follow-up of patients with severe COVID-19 was found in those who eventually died.
INTRODUCTION
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection may produce a systemic disease termed COVID-19 (coronavirus disease 2019), with high morbidity and mortality. This viral infection became a pandemic in 2020 and showed rapid and uncontrolled expansion worldwide in 2021, despite vaccination of around 50% of the total population. In fact, the COVID-19 pandemic is the major global health threat in the last century. Understanding of the pathophysiology of this viral infection is a major challenge and is absolutely necessary to improve the somber prognosis of COVID-19 patients with severe disease who require admission to the intensive care unit (ICU) (1). Impairment of both innate and adaptive immunity has been described in patients with SARS-CoV-2 infection, and it has been associated with poor outcomes (2). Lymphopenia is a frequent finding in these patients and has been identified as a variable independently associated with mortality (3). It has been observed that lymphocyte subsets such as CD4 + T cells, CD8 + T cells, B cells, and natural killer (NK) cells decreased in COVID-19 patients, especially in severe cases. Moreover, the underlying mechanisms responsible for lymphopenia in COVID-19 patients still need to be investigated since these could be responsible for the delayed virus clearance and the increased mortality rate among patients. In line with this notion, myeloid-derived suppressor cells (MDSCs) are a heterogeneous group of immature myeloid cells that mainly inhibit T-cell immune responses and NK cell proliferation using different mechanisms. They consist of monocytic (M-MDSCs) and granulocytic (G-MDSCs) subsets, which have been recently defined as pathologically activated neutrophils and monocytes with potent immunosuppressive activity (4).
The role of MDSCs was first discovered in cancer patients, but they have been found to be important in several disease processes such as sepsis (5). The persistence of these cells may contribute to long-lasting immunosuppression, thus leaving patients unable to resolve infections. We have recently found increased M-MDSCs in patients with mild COVID-19 (6), suggesting that the monocytic MDSC subset may contribute to lymphopenia and immune suppression in COVID-19. Nevertheless, the role of MDSCs in the pathogenesis of severe COVID-19 has not yet been fully elucidated, although recent studies have reported that MDSCs might influence both disease severity and mortality (7,8). Moreover, the levels of M-MDSCs have recently been found to predict the severity of COVID-19 (9), whereas others have found an expansion of G-MDSCs in patients with severe COVID-19 (8). In addition, the function and transcriptome of G-MDSCs may explain, at least in part, the severity of the disease (10). In line with this, MDSCs have been proposed as a potential biomarker and a therapeutic target in this viral infection (11). MDSCs are also well known to induce regulatory T cells (Tregs), a specialized subpopulation of T cells that can inhibit T-cell proliferation and cytokine production. Patients with COVID-19 exhibit low levels of circulating Tregs, which are lower still in severe cases, although that study did not include patients admitted to the ICU (12). We also found decreased levels of Tregs in patients with mild COVID-19 (6).
In addition, programmed death-1 (PD-1) binds to its ligands (PD-L1 or PD-L2), expressed on antigen-presenting cells (APCs), to generate inhibitory signals that downregulate T cell-mediated immune responses. Lymphocytes from COVID-19 patients have been found to have increased expression of inhibitory molecules, such as PD-1 or CTLA-4, producing an ineffective immune response (13,14). Upregulation of PD-1 on CD4+ T cells in SARS-CoV-2 patients has also been associated with poor outcomes at 30 days (15). However, information about the behavior of the lymphocyte subsets in critically ill COVID-19 patients is lacking or has been obtained from only a small number of patients. The present study explored the immunosuppressive cell populations, MDSCs and Tregs, in critically ill COVID-19 patients and compared their evolution in patients who died and those who survived.
Study Design
This is a prospective, observational, cohort study that enrolled critically ill adult patients (age ≥ 18 years) with COVID-19 admitted to the ICU of the Virgen Macarena University Hospital (Seville, Spain) from October 2020 to March 2021. The exclusion criteria were as follows: patients with previous immunosuppression (solid organ or hematologic transplantations, hematologic malignancies, or taking immunosuppressants before hospital admission) and pregnant women.
The following data were noted: age, gender, body mass index (BMI), comorbidities (diabetes mellitus, liver cirrhosis, chronic renal disease, chronic heart failure, and chronic obstructive pulmonary disease), disease chronology (time from the onset of symptoms and from hospital admission to ICU admission), pharmacological treatments, ICU length of stay (LOS), and ICU mortality. Illness severity at ICU admission was assessed using the Acute Physiology and Chronic Health Evaluation (APACHE) II score and the Sequential Organ Failure Assessment (SOFA) scale, considering the worst data point of the first 24 h in the ICU (16,17). Nosocomial infections included ventilator-associated pneumonia, primary bacteremia, and catheter-related bloodstream infection that were diagnosed following current definitions (18). Septic shock was diagnosed following the Sepsis-3 criteria (19). Continuous renal replacement therapy was initiated by the attending physician and followed the recommendations of the Spanish Society of Intensive Care Medicine (20).
Patients
We studied the immunological characteristics of peripheral blood cells from 80 COVID-19 patients hospitalized in the ICU with respiratory failure and positive by real-time reverse transcriptase-polymerase chain reaction (RT-PCR) assay (Allplex 2019-nCoV Assay; Seegene, Seoul, South Korea) of nasal and pharyngeal swab specimens. Blood was obtained in the first 24 h following admission into the ICU, using samples sent to the hospital laboratory for routine tests, and weekly thereafter until death or ICU discharge. Immunological cell populations (including MDSCs, Tregs, activated and exhausted T cells, and total T, B, and NK cells) were measured by flow cytometry using the FACSCanto II flow cytometry system (Becton Dickinson, Franklin Lakes, NJ, USA) from EDTA-K3 tubes. Analyses were carried out from ICU admission to the last determination before ICU discharge or death. Furthermore, the total lymphocyte, monocyte, and granulocyte counts were obtained from hematologic counts (Sysmex CS-1000).
Monoclonal Antibodies
The following antibodies were obtained from Becton Dickinson Immunocytometry Systems (San Jose, CA, USA) and were used at the manufacturer's recommended concentrations.

MDSCs
Data Analysis
Statistical analysis was performed and graphs were constructed using GraphPad Prism 8.0.2 (GraphPad Software, San Diego, CA, USA). Continuous variables were shown as the median and 95% confidence intervals. Qualitative variables were presented as absolute numbers and percentages. Normal distribution of the analyzed variables was examined using a histogram, box plot, the Q-Q plot, and the outcomes of the Kolmogorov-Smirnov normality test.
Non-parametric tests were used due to the absence of normality. The Mann-Whitney U test was used to compare cell distributions between discharged and deceased COVID-19 patients. Wilcoxon's test was used to compare cell distributions within each group of patients at ICU admission vs. the last determination. The Friedman test with Bonferroni correction was performed to compare cell distributions within each group of patients during the ICU follow-up, from admission to the third week of stay. Bivariate correlations among cell populations were computed using Spearman's coefficient. P values ≤0.05 were considered statistically significant.
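These non-parametric tests correspond to standard SciPy routines; a sketch on invented cell counts (the values are placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

discharged = np.array([12, 9, 15, 8, 11])   # e.g., G-MDSCs at last determination
deceased   = np.array([45, 60, 38, 52, 70])

u, p_mwu = stats.mannwhitneyu(discharged, deceased)       # between groups
w, p_wil = stats.wilcoxon([10, 14, 9], [12, 8, 7])        # admission vs. last, paired
chi2, p_fr = stats.friedmanchisquare([10, 14, 9], [12, 8, 7], [20, 25, 18])  # weekly follow-up
rho, p_sp = stats.spearmanr(discharged, deceased)         # bivariate correlation

# Bonferroni correction for k post-hoc comparisons: p_adj = min(1, p * k)
print(p_mwu, p_wil, p_fr, min(1.0, p_fr * 3), rho)
```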
Clinical Characteristics of COVID-19 Patients
Eighty-seven patients diagnosed with COVID-19 and hospitalized in the ICU during the study period were screened, but seven patients were excluded (three patients with onco-hematologic diseases, two renal transplant patients, and two patients taking immunosuppressant drugs for systemic diseases). Thus, 80 patients were analyzed. The median age of the patients was 62 years (p25-p75 = 59-66 years). The male/female ratio of COVID-19 patients was 76.5%/23.5%. The patients' clinical characteristics are shown in Table 1. Thirty-eight patients were discharged from the ICU alive, but two of them died in the hospital.
Circulating MDSCs in COVID-19 Patients at Admission and During the ICU Stay
The follow-up of blood MDSCs from severe COVID-19 patients in the ICU is shown in Figures 1A-C. All MDSC populations significantly decreased (Friedman test: p < 0.001) in patients who were discharged from the ICU, whereas MDSCs increased in those who passed away. Between both groups of patients, the results of the Mann-Whitney U test revealed statistically significant differences in G-MDSCs and total MDSCs at the last determination (p = 0.007 and p = 0.003, respectively). Similar results were obtained when the first and the last determination for each patient were compared. In those who were discharged from the ICU, all MDSC subsets and total MDSCs slightly decreased (Figures 2A-C). In contrast, all MDSC populations were remarkably increased in patients who passed away (p = 0.037 for M-MDSCs, p < 0.001 for G-MDSCs, and p = 0.003 for total MDSCs); in consequence, significant differences were found between the patient groups at the last determination (p < 0.001 for both M-MDSCs and total MDSCs and p = 0.002 for G-MDSCs), also shown in Figures 2A-C.
Both M-MDSC and G-MDSC populations were also positively correlated at the beginning (r S = 0.296, p = 0.007) and at the end (r S = 0.326, p = 0.003) of their stay in the ICU.
Blood Tregs in COVID-19 Patients at Admission and During the ICU Stay
The trend of Tregs in both discharged and deceased patients during the follow-up (Figure 1D) was significant (p < 0.001 and p = 0.049, respectively). Although all patients had similar concentrations of Tregs in blood at ICU admission, the follow-up revealed different trends in the two groups: Tregs from discharged patients showed a 2.5-fold increase, whereas those from deceased patients remained practically constant. In addition, significant differences between the patient groups were found from the first to the third week of their stay in the ICU (p < 0.001 in the first week and p = 0.004 in the second and third weeks).
Comparison of the first and the last determination (Figure 2D) showed that the levels of circulating Tregs were similar in all COVID-19 patients at ICU admission. However, this T-cell subset significantly increased in discharged patients (p < 0.001) and remained constant in the group of patients who finally died. Consequently, significant differences were also found between both groups at the last determination (p < 0.001).
Concentrations of Exhausted T Cells in COVID-19 Patients at Admission and During the ICU Stay
The evolution of exhausted (PD-1 + OX40 − ) T cells during the follow-up of severe COVID-19 patients is shown in Figures 3A-C.
As happened with MDSCs, there was a depletion of exhausted T cells (particularly from the first week after admission into ICU) in the discharged group (p = 0.001 for CD4 + , p = 0.0026 for CD8 + , and p = 0.003 for total T cells), whereas these T-cell subsets slightly increased during the follow-up in the group of patients who finally died. When the first and the last blood determination were compared ( Figures 4A-C), the levels of both exhausted CD4 + T cells and total T cells remained without significant changes; however, exhausted CD8 + T cells slightly decreased in patients who were discharged from the ICU, whereas all exhausted T-cell populations significantly increased in patients who passed away (p = 0.034 for CD4 + , p = 0.004 for CD8 + , and p = 0.001 for total T cells). Significant differences between groups were only found at the last determination of exhausted CD4 T cells (p = 0.023).
Positive correlations were found between the levels of CD4 + and CD8 + inhibited T cells at both the beginning (r S = 0.255, p = 0.022) and the last analysis (r S = 0.536, p < 0.001) of each patient.
Circulating Activated T Cells in COVID-19 Patients at Admission and During the ICU Stay
During the follow-up ( Figures 3D-F), only discharged patients presented a significant increment in the evolution of activated (OX40 + PD-1 − ) T cells (p = 0.002 for CD4 + , p = 0.011 for CD8 + , and p = 0.001 for total T cells). Moreover, notable differences were also obtained when a comparative analysis between both patient groups was performed (p = 0.030 during the third week for CD8 + and p = 0.042 during the second week for total OX40 + PD-1 − T cells).
In the comparison between the first and the last determination in the ICU (Figures 4D-F), it was observed that the concentrations of CD4 + , CD8 + , and total activated (OX40 + PD-1 − ) T cells were significantly increased in patients with the best outcome (p < 0.001 in all cases). In deceased patients, the levels of activated CD4 + and total T cells also increased, but those of the cytotoxic T-cell subpopulation remained constant. At the last determination, notable differences were found between groups (p = 0.012 for CD4 + and p = 0.002 for CD8 + and total T cells).
In addition, strong positive correlations were found between helper and cytotoxic activated T cells in patients hospitalized in the ICU (rS = 0.340, p = 0.0021) and at their last determination (rS = 0.579, p = 0.0001).
Tregs Were Positively Correlated With Activated T Cells and MDSCs With Exhausted T Cells in COVID-19 Patients in ICU
Apart from the significant positive correlations mentioned above, we also observed other strong correlations between the different cell populations. Tregs were positively correlated with OX40 + PD-1 − T cells both at admission into the ICU and at the end of the stay and were negatively correlated with G-MDSCs at the last determination, as shown in Table 2. For their part, G-MDSCs were positively correlated with CD8 + and total exhausted T cells at the moment of hospitalization and also with CD4 + exhausted T cells after the stay in the ICU. Total MDSCs were only positively correlated with exhausted T cells at the end of admission. All statistically significant correlations between cell groups are shown in Table 2.
DISCUSSION
SARS-CoV-2 infection causes immune defects such as lymphopenia (22) in patients with mild (6) and severe (23) COVID-19. Moreover, persistent lymphopenia was observed in patients with severe COVID-19 after 3 weeks of follow-up (24), and the lymphocyte reduction was more pronounced in critically ill patients, especially for T lymphocytes (25). In our study, we found lymphopenia in all COVID-19 patients after admission into the ICU, and lymphocyte levels increased in blood during the follow-up regardless of outcome (Table 3). This could have occurred, at least in part, because the treatments contributed to activating the immune system of COVID-19 patients to fight the viral disease. However, 40 patients eventually died, suggesting that mechanisms of immunosuppression due to SARS-CoV-2 infection, such as MDSCs, are at play. Accordingly, T cells (especially CD4+ T cells) and NK cells were significantly lower in patients with fatal outcomes. As previously explained, MDSCs are pathologically activated neutrophils and monocytes with potent immunosuppressive activity (4), and they mediate mechanisms of immune downregulation, especially the inhibition of lymphocyte activation and proliferation (26). In line with this, it has been found that cells with MDSC features are implicated in COVID-19, and several reports have described the accumulation of potent immunosuppressive M-MDSCs and G-MDSCs in the disease (4,27,28). In fact, a high M-MDSC/monocyte ratio has been associated with secondary infections and death due to the disease (29); the granulocytic subset has also been associated with mortality in severe COVID-19 (7). We aimed to analyze MDSCs, namely the monocytic (M-MDSCs) and granulocytic (G-MDSCs) subsets, and the lymphocyte subpopulations in patients with severe COVID-19 from admission into the ICU and during the follow-up until discharge or death. MDSC expansion has been related to dysfunction in lymphocytes (30), and MDSCs have even been proposed as potential biomarkers and therapeutic targets in COVID-19 (11). Nevertheless, the increase in MDSC levels may not be a mechanism of immunosuppression specific to COVID-19, since it has also been described in other viral infections, such as influenza (31,32), hepatitis B, hepatitis C, and human immunodeficiency virus (33).
Even though we found increased levels of MDSCs in all patients on the first day after admission into the ICU compared with control subjects and patients with mild COVID-19 (6), there were no differences between patients with good evolution (discharged from the ICU) and those who died in the ICU. Nevertheless, the follow-up showed that patients with good evolution had lower levels of MDSCs, especially G-MDSCs. The relative influence of the MDSC subtypes is not clear. M-MDSCs have been found to accumulate in severe COVID-19 patients and seem to be responsible for the production of IL-6 in these patients (34), whereas others have found that G-MDSCs may predict fatal COVID-19 outcomes (7,35). Our data were similar and showed that the number of circulating G-MDSCs may predict fatal outcomes only at the weekly follow-up. It is important to mention that most of the G-MDSC data at week 3 for the deceased patients were collected on the day of death, or at most 1 or 2 days before death, so the peak value of circulating G-MDSCs in these patients (~60 cells/ml) may be considered a "danger point" for predicting death, especially considering the notably lower G-MDSC levels obtained earlier in the follow-up (~10, 19, and 20 cells/ml at ICU admission and during the first and second weeks, respectively), as shown in Figure 1B.
MDSCs are known to mediate the production of Tregs (36); conversely, Tregs are known to regulate MDSCs (37). The crosstalk between both cell types has been studied previously (38). However, we found decreased numbers of circulating Tregs in severe COVID-19 patients, even though they had increased circulating MDSCs. We previously observed this discrepancy in mild COVID-19 patients (6) and assumed that the lymphopenic effect of SARS-CoV-2 infection may also affect Tregs. In fact, other research groups have also found decreased numbers of Tregs in COVID-19 patients (12), especially in critically ill patients (39). In line with this, we also found that patients with better outcomes had increased numbers of circulating Tregs, probably due to the recovery of total lymphocytes.
In any case, the increased numbers of MDSCs seemed to be sufficient, at least in part, to account for the higher numbers of exhausted T cells in COVID-19 patients with fatal outcomes. In line with this, an increase in exhausted (PD-1-expressing) T cells has previously been found in COVID-19 patients and was related to their clinical outcomes, suggesting that the expression of PD-1 on T cells may be a risk factor for unfavorable outcomes in these patients (40). Moreover, we found a positive correlation between the numbers of MDSCs and exhausted T cells in patients with severe COVID-19 admitted into the ICU. Furthermore, lower numbers of both activated CD4+ and CD8+ T cells were found at the last determination in patients who died in the ICU compared with patients who were discharged. These data suggest that the increase in MDSCs could help prevent T-cell activation, thereby further contributing to the immunosuppression in severe COVID-19 patients with fatal outcomes.
One limitation of the study is the inclusion of a small number of patients from a single center. Nevertheless, the high ICU mortality rate of COVID-19 allowed us to study a similar number of deceased and discharged patients, even though these patients were prospectively recruited.
In conclusion, patients with severe COVID-19 admitted into the ICU had increased levels of MDSCs and exhausted T cells, together with decreased circulating Tregs and activated T cells. However, only the weekly follow-up of these cellular populations could differentiate the patients with good outcomes (ICU discharge) from those who eventually passed away. The latter had increased numbers of MDSCs, especially of the granulocytic subset, which may be an interesting biomarker of fatal outcomes in the follow-up of severe COVID-19 patients admitted into the ICU.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, upon reasonable request.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by IRB Virgen Macarena and Virgen Rocio University Hospital. Written informed consent for participation was not required for this study, in accordance with the national legislation and the institutional requirements.
Proactive condition-based bridge rehabilitation planning including LCA and LCC
The implementation of structural health monitoring (SHM) for the management and maintenance of critical transport infrastructures, such as bridges, dams or tunnels, is a widely established approach. However, SHM shows various technical limitations (e.g. relating to the spatial capabilities of the sensors, high cost, repeatability, or the interpretation of sensor measurements to support structural assessment and prediction of infrastructure condition states). Furthermore, linking SHM with life cycle based methodologies such as life cycle costing (LCC) or life cycle assessment (LCA) has only recently been discussed. The SENSKIN EC co-funded research project aims to overcome the above-mentioned challenges through the development of a new sensor system and its integration within a Decision Support System (DSS) for proactive condition-based structural rehabilitation planning during the bridge life cycle. The DSS will include structural assessment models (based exclusively on sensor measurements) for assessing the bridge condition and the damage states of the main structural elements, and a rehabilitation planning module (RPM) that will enable end-users to assess the life cycle economic and environmental implications of bridge rehabilitation options. Hereby, a tailored submodule for integrated life cycle costing (LCC) and life cycle assessment (LCA) assists, taking into account not only the direct impacts of the rehabilitation solutions but also external effects caused by restricted traffic conditions (e.g. due to ongoing construction works). Thus, the SENSKIN project will contribute to a sustainable infrastructure. The following paper sketches out the main scientific and functional structures of the developed DSS, with a focus on the RPM and its LCA/LCC submodule for bridge rehabilitation planning.
Introduction
Transport infrastructures are a vital part of logistics in Europe, and their importance is steadily increasing: traffic is expected to rise not only due to the growing population and motorisation, but also due to intensifying trade relations and the associated road-based transportation of goods. The higher traffic demand will put pressure on the current transport infrastructure, which is in many cases already deteriorating; e.g. many bridges are ageing, having been built decades ago for a limited design service life. Moreover, climate change and the consequent extreme environmental conditions are predicted to further impact these existing infrastructures, requiring continuous maintenance in order to retain their condition and safety.
The implementation of structural health monitoring (SHM) for the management and maintenance of critical transport infrastructures, such as bridges, dams or tunnels, is a widely established approach. However, SHM technologies show various sensor-related limitations concerning e.g. the spatial capabilities of the sensors, high cost, durability aspects, repeatability, or the interpretation of sensor measurements to support structural assessment and prediction of infrastructure condition states. With regard to decision support from an environmental or cost perspective and for proactive condition-based bridge rehabilitation planning, linking SHM with life cycle based methodologies such as life cycle costing (LCC) or life cycle assessment (LCA) has only recently been discussed [1], [2], [3].
The SENSKIN project
For the purpose of improving current practice in structural health monitoring (SHM) for maintenance and rehabilitation planning of transport infrastructures, the SENSKIN project aims to develop a new sensing device (the skin-like SENSKIN sensor) which is able to detect even small strains in the surface area of infrastructures. The sensing device will be coupled with a data acquisition system (DAQ) to digitize the sensed strains, a communication module implementing the communications operational logic and data transmission, a processing module as the computational core, a low-power wake-up receiver, as well as power management, energy harvesting and energy storage units, to form the resilient SENSKIN Integrated System. Power supply for this integrated system will be realized via a photovoltaic panel, the power source providing the highest power density (100 mW/cm²).
Furthermore, a dedicated decision support system (DSS) will be set up for bridge rehabilitation planning purposes. The DSS will be capable of handling the data generated by the SENSKIN sensor for condition-based structural assessment. From a life cycle perspective, the DSS will be enhanced by the integration of life cycle costing (LCC) and life cycle assessment (LCA) functionality. Thus, the DSS will support and facilitate the development of sustainable bridge rehabilitation plans and the respective holistic interventions, based on precise predictions of the actual structural condition state of the assessed transport infrastructure.
The SENSKIN project is co-funded by the EU Horizon 2020 Research and Innovation programme, and its developments are strongly driven by improving the assessment of transport infrastructures, life cycle optimisation considerations, as well as industrial end-user application.
Proactive decision support via LCA and LCC
As a first step within the scope of the SENSKIN project, a methodology for decision-making on structural bridge rehabilitation is being developed. This methodology takes into account the precise actual condition of the bridge construction as well as life cycle cost (LCC) and life cycle assessment (LCA) results for the available rehabilitation options. Besides the cost and environmental implications of the bridge construction itself, the cost and environmental consequences of external effects will be evaluated. In a second step, the methodology will be implemented as part of a software system called the Rehabilitation Planning Module (RPM). The RPM is combined with an expert system for risk-based decision-making on the type (WHAT) and timing (WHEN) of intervention in bridge rehabilitation. The expert system links directly to the SENSKIN sensor technology and to information from the new SENSKIN bridge monitoring system (SENSKIN Integrated System) as well as to historical, external and knowledge databases. The RPM and the expert system constitute the Decision Support System (DSS), which offers bridge operators rehabilitation options for proactive interventions under operating loads and for interventions after extreme events, depending on the actual damage condition and taking into account life cycle considerations (costs, environment).
The SENSKIN Rehabilitation Planning Module (RPM)
The RPM consists of various modules which together contribute to the decision-making process in bridge rehabilitation. In total, four different modules (Figure 1) contribute to the life cycle related assessment of bridge maintenance measures.
Figure 1. SENSKIN RPM -Structural layout
The Structural Assessment Module converts the strains measured by the sensors into damage states. The Rehabilitation Measures module identifies rehabilitation options for each bridge element and each damage condition. Their life cycle related environmental and economic impacts are evaluated in the LCC/LCA submodule, which constitutes the core module of the RPM. In addition, the traffic module calculates the external environmental and socio-economic effects caused by the impact of rehabilitation measures on transport. These external effects are combined with the results derived from the LCA/LCC submodule. In the following, the Structural Assessment Module and the LCA/LCC submodule are described in more detail, outlining their scientific and functional backgrounds.
The Structural Assessment Module
The purpose of the Structural Assessment Module is to assess the structural condition and safety of the monitored structural members in the bridge superstructure and reinforced concrete piers, based exclusively on measurements of the SENSKIN sensors attached to the beam and pier surfaces at the critical sections. The results of the strain measurements are processed taking into consideration the loading system at the time of installation as well as shrinkage and creep effects in the existing materials. The cross sections of the monitored elements are then analyzed based on the constitutive laws of the materials in order to assess the actual structural capacity of the investigated bridge element under operating loads. Furthermore, a stochastic assessment is implemented in order to evaluate the structural reliability and the probability of failure as a function of time, considering the time-dependent deterioration of the materials. The structural capacity of the bridge elements is also assessed taking into consideration future changes in the loading system due to the application of updated regulations or the event of oversize loads travelling over the bridge. Finally, the SENSKIN system assesses the behaviour of the structural elements of the bridge under seismic loads and horizontal forces, based on the sensor measurements, through displacement analysis of the structural elements. The uncertainty in structural behaviour is taken into account by modelling the load effects on the structure, the strength and toughness, and the sensor measurements as random variables. Rehabilitation options have been set up for each structural element; e.g. for steel beams there are four rehabilitation options available, one of which is 'welding additional steel elements'. The default mapping of damage states to rehabilitation options was gathered alongside the structural assessment module.
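To make the stochastic assessment concrete, the following Python sketch illustrates a first-order reliability calculation for a degrading member, treating resistance and load effect as independent normal random variables with an assumed linear degradation law. It is an illustrative simplification, not the SENSKIN implementation, and all parameter values are placeholders.

```python
import numpy as np
from scipy.stats import norm

def reliability_index(mu_r0, sigma_r, mu_s, sigma_s, t_years, decay=0.005):
    """Reliability index beta(t) and failure probability P_f(t) under an
    assumed linear degradation of the mean resistance."""
    mu_r_t = mu_r0 * (1.0 - decay * t_years)      # degraded mean resistance
    beta = (mu_r_t - mu_s) / np.sqrt(sigma_r**2 + sigma_s**2)
    return beta, norm.cdf(-beta)                  # P_f = Phi(-beta)

# Placeholder values: resistance and load effect in kNm
for t in (0, 10, 25, 50):
    beta, pf = reliability_index(mu_r0=1200.0, sigma_r=120.0,
                                 mu_s=700.0, sigma_s=90.0, t_years=t)
    print(f"t = {t:2d} yr  beta = {beta:4.2f}  P_f = {pf:.2e}")
```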
LCA/LCC Submodule
Thanks to the structure of the RPM, which is kept quite open, background data for future analyses (e.g. cost, environmental and traffic data) can be updated very easily. The incorporated life cycle phases for the assessment are based on the classification according to DIN EN 15978 [9], a European standard defining specifications and requirements for the life cycle based sustainability assessment of buildings. This classification has been adapted for bridge assessment in former research projects [5]. As the SENSKIN research project deals with the assessment of bridge rehabilitation options and their external effects, the scope includes the life cycle modules B1 (maintenance), B2 (restoration), B3 (reinforcement) and E (external effects). For the environmental impact assessment within the LCA, the impact category Global Warming Potential (GWP) was selected on the basis of end-user requirements and its relevance to the system under investigation. This impact category offers scientifically proven calculation methods and is internationally recognised. It is based on the contribution to the greenhouse effect (climate warming) and is expressed in the unit 'kg CO2-equivalent'.
Consideration of external effects
As part of the environmental and cost analysis within SENSKIN, the internal and external effects of bridge rehabilitation measures are calculated. While the internal effects result from the rehabilitation measures themselves, the external effects derive from traffic restrictions, such as road closures or traffic jams, required for some of the rehabilitation activities. Both the additional driving time and the longer distance travelled have economic effects, which are translated into monetary values in terms of delays and additional fuel consumption. In addition, the traffic restrictions generate higher emissions, which feed into the life cycle assessment calculations. Thus, the LCA methodology covers both the environmental impact of the bridge rehabilitation options themselves (internal effects) and the additional emissions due to traffic restrictions (external effects).
The external effects are included over the lifetime of a bridge, taking into account the traffic restrictions of the individual rehabilitation options as well as the development of the energy carriers and the transport composition over the next 50 years for every country within the European Union [10].
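A minimal sketch of how such external effects might be monetized and converted into emissions is given below; the detour parameters, value of time, fuel price and emission factor are illustrative assumptions, not SENSKIN data.

```python
def external_effects(vehicles_per_day, days, extra_km, extra_hours,
                     fuel_l_per_km=0.08,   # assumed average consumption
                     fuel_price=1.5,       # EUR per litre (assumed)
                     value_of_time=15.0,   # EUR per vehicle-hour (assumed)
                     co2_kg_per_l=2.64):   # kg CO2 per litre of diesel
    """External cost (EUR) and GWP (kg CO2-eq) of one traffic restriction."""
    trips = vehicles_per_day * days
    delay_cost = trips * extra_hours * value_of_time
    fuel_litres = trips * extra_km * fuel_l_per_km
    fuel_cost = fuel_litres * fuel_price
    gwp = fuel_litres * co2_kg_per_l
    return delay_cost + fuel_cost, gwp

cost_eur, gwp_kg = external_effects(vehicles_per_day=12000, days=30,
                                    extra_km=4.0, extra_hours=0.15)
print(f"external cost: {cost_eur:,.0f} EUR, external GWP: {gwp_kg:,.0f} kg CO2-eq")
```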
Integrated Decision Support System
The integrated Decision Support System (DSS) invokes the structural assessment module and the LCA/LCC submodule described above as required and displays, via the user interface, all necessary data to the end user (the bridge operator). To this end, a graphical interface creates a 3D representation of the monitored bridge and displays on this representation, in graphical form, the strains provided by the SENSKIN monitoring system. Additionally, the end user will be able to see the strains under different hypothesized scenarios (e.g., earthquakes, higher future loads) and warnings for abnormal situations. Furthermore, the end user will be able to see when and how to intervene.
Validation and benchmarking in Greece
A systematic approach will be followed to validate and benchmark the SENSKIN monitoring system and the integrated package under actual/operational field conditions. It will cover technical feasibility, economic and operational benefits analysis, the assessment of price/performance and benchmarking.
On the basis of pilot testing at the Bosporus 1 suspension bridge in Istanbul (including benchmarking against a conventional monitoring system), a refined version of the entire SENSKIN monitoring system will be installed at the steel composite Egnatia Bridge G4 in Greece (Figure 2). In total, 64 sensors will be attached at 7 midspan and near-support sections of the superstructure as well as at the base of two piers. For validation purposes, conventional electrical foil strain gauges will be installed as well. Furthermore, static and dynamic load tests will accompany the field trials.
Figure 2. Egnatia Motorway G4 Bridge (© EOAE)
From the end-user and life cycle based rehabilitation planning perspective, the developed SENSKIN Rehabilitation Planning Module (RPM) will be evaluated in terms of trustworthiness, user friendliness and ease of application during the field tests in Greece. Hereby, continuously monitored sensor data will be fed into the SENSKIN RPM and its LCA/LCC submodule to further calibrate and benchmark the individual decision support system (DSS) modules as well as the DSS itself, using real data from Greece. Furthermore, a vibration-based identification of the actual mechanical performance of the bridge, using the equipment (accelerometers) and the methodology of Egnatia Odos AE [11], will test and calibrate the structural assessment and DSS modules.
In this way, the field testing in Greece supports the prediction of future intervention needs and their related environmental and economic impacts [12], as well as the timely detection of severe structural degradation of the bridge, including future traffic load increases.
Conclusion and Future Outlook
An important outcome of the SENSKIN project, especially in terms of the growing demand for transport, is the development of an integrated monitoring system that is open to multimodal infrastructures, combined with an application-oriented DSS that supports economic and environmental analyses throughout the bridge's entire life cycle. This system will enable the reduction of maintenance costs and the associated environmental impacts for different bridge types. Hence, it is expected to contribute to a less disrupted transport infrastructure, as inspections and maintenance work can be planned more effectively. Additionally, the increased safety and resilience of these infrastructures will make their functioning less vulnerable to growing transport demand, climate change and disruptive events. Since permanent monitoring provides precise structural strain data, the necessary rehabilitation measures can be developed and carried out to extend the service life of bridges. Thus, the SENSKIN project helps to plan and establish effective maintenance methods for bridges. By taking into account sustainability aspects of bridge maintenance, the rehabilitation planning system contributes to a sustainable infrastructure and supports decision makers in proactive, life cycle based planning of bridge rehabilitation.
Evaluation of a new strategy in the elaboration of culture media to produce surfactin from hemicellulosic corncob liquor
Highlights
• Biosurfactants are more eco-friendly than synthetic surfactants.
• High costs of substrates represent a problem in biosurfactant production.
• Alkaline pretreatment was applied to extract hemicellulosic fractions from corncob.
• Hemicellulosic corncob liquor was utilized to produce surfactin.
• Surfactin showed high bioremediation potential.
Introduction
Biosurfactants are substances produced by microorganisms and endowed with an amphipathic structure. The presence of hydrophilic and hydrophobic portions allows biosurfactants to reduce the surface and interfacial tension of different compounds, which is why they are considered excellent foaming, dispersing and emulsifying agents, and also suited to environmental applications such as bioremediation and enhanced oil recovery [1]. In comparison to synthetic surfactants, biosurfactants are an excellent alternative since they are more environmentally friendly due to their lower toxicity and greater biodegradability, as well as the maintenance of their specific activities under extreme conditions of temperature, pH, and salinity [2,3]. Due to these properties, the versatility of biosurfactant applications has aroused the interest of different industry segments, such as the paint, food, cosmetics, detergent, textile, agrochemical, and pharmaceutical industries [4].
Despite the possibility of being obtained from renewable sources, one of the main challenges for the commercial use of biosurfactants is the onerous cost of the production process [5]. In order to significantly reduce this economic unfeasibility, other strategies have been studied and published in the literature, among them the use of alternative, abundantly available and low-cost nutrient sources for the producer microorganism [6,7]. More specifically, the total or partial substitution of expensive synthetic media with agro-industrial residues can contribute to reducing the high production costs and, therefore, increase the competitiveness of biosurfactants against synthetic surfactants [8].
Corncob is a residue from the processing of corn. It is estimated that for every 100 kg of corn ear produced, around 18 kg consist of corncob [9]. With the world's largest producers, the United States, China, and Brazil, producing about 348.8, 219.6 and 93.5 million tons of corn, respectively, residue generation becomes a potential environmental problem [10]. The main constituents of corncob, as of other lignocellulosic biomasses, are cellulose, hemicellulose, and lignin. Since these three components are strongly associated, providing the microorganism access to the hemicellulose fractions requires a pretreatment that breaks the lignocellulosic structure [11]. Xylans are one of the main polysaccharides present in hemicellulose; thus, the separation of the hemicellulosic fraction from the biomass offers the possibility of using this carbon source for xylan-fermenting microorganisms. Studies have shown that the bacterium Bacillus subtilis has the ability to metabolize xylans, which could make hemicellulose-rich corncob liquor particularly suitable for surfactin production [12].
Lignocellulosic biomass is one of the most abundant, renewable and underutilized resources, being widely used in the field of bioprocesses as a cheap energy source for microbial fermentation, for example as a substrate in alcohol or biosurfactant production [13,14]. In this context, this work proposes the use of hemicellulosic corncob liquor, extracted by an alkaline pretreatment, as an alternative carbon source for the production of surfactin, which has not been reported yet. Despite the need for chemical reagents, which adds to the operational costs, an alkaline pretreatment, unlike acid or hydrothermal processes, is known to act efficiently in the lignin solubilization process, reducing the degradation of the carbohydrates of interest, such as cellulose and hemicellulose [15]. On the other hand, the use of differentiated pretreatment conditions may also favor the alkaline extraction of hemicellulose [16]. Furthermore, surfactin is a biosurfactant of the lipopeptide class, produced by B. subtilis and recognized as one of the strongest and most active biosurfactants available [17,18].
There are several factors that affect the quality and quantity of the biosurfactant produced, such as the carbon source used and the growth conditions, including pH, temperature, trace elements (K+, Mn2+, Mg2+ and Fe2+) and other nutrients [19,20]. The involvement of so many variables fosters a growing number of scientific studies aiming to investigate the relationship of the factors mentioned above with microbial cell growth, in order to promote significant improvements in the biosurfactant production pathways. For this, experimental design emerges as an effective alternative for process optimization and the evaluation of complex systems [21]. In fermentation processes, the advantages of statistical tools range from increased product reach and reduced process variability to greater conformity of the output response and a reduction in both the time required for test development and the overall cost [22].
The current discussion is reinforced with projections that the total global market for biosurfactants will reach USD 5.52 billion by 2022, at a compound annual growth rate (CAGR) of 5.6% from 2017 to 2022 [23]. Biosurfactants will continue to be the focus of research around the world due to all their advantages in relation to synthetic surfactants. For the purpose of optimizing resources, processes and yields, studies can use statistical tools to determine optimal factor values instead of increasing the number of experiments to be performed.
Microorganism and cultivation conditions
The strain of Bacillus subtilis ICF-PC used belongs to the Sergipe Microorganisms Culture Collection (CCMO/SE code: LMA-ICF-PC 001). The bacterium was maintained in nutrient agar tubes at a temperature of 4 °C. The composition of the concentrated salt solution, which is diluted in the media, was elaborated by adapting the methodologies described in Cooper et al. [24], Sheppard and Cooper [25] and Marin et al. [26]. The pre-inoculum was prepared in peptone solution and incubated on a rotary shaker at 30 °C. After 18 h, 10 mL of the pre-inoculum were transferred to 90 mL of inoculum medium (1% glucose and 1% mineral salt solution). The solution was incubated at 30 °C and 120 rpm for 24 h. After this period, 5 mL of inoculum were added to 95 mL of nutrient solution, varying the contents of hemicellulosic corncob liquor (HCL), glucose, and mineral salt solution, with the final pH adjusted to 6.85, according to the experimental design (Table 2). The fermentation media were kept under the same incubation conditions for 72 h. Afterward, the samples were centrifuged at 10,000 rpm for 20 min, obtaining a cell-free supernatant for further analysis.
Preparation and characterization of the substrate
The corncob collected in the county of Poço Verde (state of Sergipe) was cut into small pieces and dried in an oven at 45 °C for 24 h. Then, the dried material was subjected to milling in a knife mill to be homogenized, and stored at room temperature. The residue (in natura) was characterized by the modified Klason method [27]; its composition was 26.2 ± 0.6% cellulose, 25.3 ± 1.1% hemicellulose, and 34.7 ± 1.5% lignin.
Alkaline pretreatment and chemical composition of HCL
In order to extract the hemicellulosic fraction, the corncob was subjected to an alkaline pretreatment under mild conditions of temperature and alkali concentration. This process consisted of adding 10 g of the dry and ground sample to 100 mL of NaOH solution (0.75 mol·L⁻¹). The solution was heated and shaken for 2 h at 50 °C on an electric plate. After cooling and pH adjustment to 7.0 by the addition of acetic acid, the sample was filtered. For the determination of carbohydrates, organic acids, furfural and hydroxymethylfurfural, the modified Klason method [27] was also used. The elemental composition (C, H, N, and O) was determined with a CHN analyzer (Thermo Finnigan FLASH EA 1112 Series); the oxygen content was calculated by difference, according to Sheng and Azevedo [28].
Optimization of carbon source and mineral salt solution
In order to evaluate the effects and interactions arising from the variation of the nutritional composition of the culture medium, a 2³ full factorial design was performed for the following independent variables: concentration of HCL (X1), concentration of glucose (X2), and concentration of mineral salt solution (X3). Each variable was evaluated at four coded levels, as presented in Table 2. A set of 18 experiments was performed, comprising the factorial runs plus four replicates at the central point and six axial points. The interactions of the microorganism with the different concentrations and constituents of the culture medium were analyzed statistically for the following response variables: surface tension reduction rate and emulsification index. The STATISTICA software (version 13.2) was used both for the regression analysis of the experimental data and for the response surface method.
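For illustration, the 18-run layout described above (2³ factorial points plus centre replicates and axial points, i.e. a central composite design) can be generated programmatically as in the Python sketch below; the axial distance α = 1.682 is an assumed rotatable value, and the actual coded levels used in this work are those of Table 2.

```python
from itertools import product
import numpy as np

alpha = 1.682                                   # assumed rotatable axial distance
factorial = list(product((-1, 1), repeat=3))    # 8 corner runs of the 2^3 design
center = [(0, 0, 0)] * 4                        # 4 centre-point replicates
axial = [tuple(a if i == j else 0 for j in range(3))
         for i in range(3) for a in (-alpha, alpha)]  # 6 axial runs

# Columns: coded X1 (HCL), X2 (glucose), X3 (mineral salt solution)
design = np.array(factorial + center + axial)
print(design.shape)   # (18, 3) -> the 18 experiments of Table 3
```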
Concentration of cells
Cell concentration was determined by centrifuging 1 mL of the fermentation medium at 10,000 rpm for 15 min. After the supernatant was removed, the cell mass was resuspended in 1 mL of distilled water, vortexed, and diluted again in 4 mL of distilled water to measure the optical density at 610 nm using a UV-M51 spectrophotometer (BEL Photonics, UV-VIS).
Surface tension reduction rate (STRR)
The surface tension measurements were determined using a tensiometer (Attension, Sigma 700/701) according to the Wilhelmy plate method. The analyses were performed on the cell-free broth obtained after centrifugation of the culture medium, with results expressed in mN·m⁻¹. The surface tension reduction rate was determined from Eq. (1), considering the surface tension of distilled water (ST_H2O) and the surface tension of the biosurfactant solution (ST_Bio):

STRR (%) = [(ST_H2O − ST_Bio) / ST_H2O] × 100 (1)
Emulsification index (EI 24 )
The emulsification index was determined through the method proposed by Cooper and Goldenberg [29]. A mixture of 4 mL of the cell-free supernatant and 6 mL of kerosene was homogenized in a vortex for 2 min. After 24 h at room temperature, the height of the emulsion layer (He) and the total height of the liquid in the column (Ht) were measured. The emulsification index was calculated from Eq. (2):

EI24 (%) = (He / Ht) × 100 (2)
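As a quick check, Eqs. (1) and (2), as reconstructed above, can be transcribed directly into Python; the example heights are hypothetical, while the 72 → 27 mN·m⁻¹ benchmark reproduces the 62.5% STRR quoted later in the text.

```python
def strr(st_water: float, st_bio: float) -> float:
    """Surface tension reduction rate, % (Eq. 1)."""
    return (st_water - st_bio) / st_water * 100.0

def ei24(h_emulsion: float, h_total: float) -> float:
    """Emulsification index after 24 h, % (Eq. 2)."""
    return h_emulsion / h_total * 100.0

print(strr(72.0, 27.0))   # 62.5 -> the surfactin benchmark cited in the text
print(ei24(3.2, 5.0))     # 64.0 -> hypothetical column heights in cm
```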
Semipurified surfactin extraction
The cell-free supernatant was subjected to acid precipitation by adjusting the pH to 2.0 with HCl (6 N), and the mixture was left standing for 24 h at 4 °C. The solution was then centrifuged at 5,000 rpm for 25 min to separate the phases, and the precipitate obtained was suspended in 1 mL of distilled water, vortexed, and oven-dried at 50 °C to constant weight (24 h). For purification, the crude surfactin was redissolved in 4 mL of distilled water, adjusted to pH 7.0 with 1 M NaOH, and again oven-dried to constant weight [30]. The yield of the semipurified surfactin was expressed in g·L⁻¹.
Critical micellar concentration (CMC)
The CMC was determined by plotting the surface tension as a function of biosurfactant concentration, using a serial dilution of a 1 mg·mL⁻¹ biosurfactant sample. The CMC value corresponds to the inflection point of the curve in this plot [31].
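One common way to locate that inflection point is to fit straight lines to the descending branch and the plateau of the tension-versus-log-concentration curve and take their intersection; the following Python sketch illustrates this with invented data points, not the measurements of this study.

```python
import numpy as np

conc = np.array([10, 25, 50, 75, 100, 150, 200, 400])            # mg/L (invented)
st = np.array([55.0, 48.0, 40.0, 34.0, 30.5, 30.2, 30.1, 30.0])  # mN/m

x = np.log10(conc)
m1, b1 = np.polyfit(x[:5], st[:5], 1)   # descending branch (below CMC)
m2, b2 = np.polyfit(x[4:], st[4:], 1)   # plateau (above CMC)
cmc = 10 ** ((b2 - b1) / (m1 - m2))     # concentration where the lines cross
print(f"CMC ~ {cmc:.0f} mg/L")
```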
Bioremediation potential (BP)
BP was evaluated according to the methodology described by Mnif et al. [32] and Marin et al. [26]. Samples of 10 g of sand were contaminated with 1 g of commercial diesel oil. The mixture was kept at room temperature for three days. After this period, a sample of 5 g of contaminated sand was transferred to a 150 mL Erlenmeyer flask containing 20 mL of biosurfactant solution at concentrations above the CMC. The mixture was shaken at 150 rpm and 30 °C for 24 h, and the washing solution was then decanted. This process was repeated twice, and the remaining diesel was extracted with two washings of 10 mL dichloromethane, dried, and weighed.
Results and discussion
Under the conditions applied, the alkaline pretreatment promoted an efficient fractionation of the residue with the extraction of hemicelluloses (a polysaccharide fraction rich in xylose), since the hemicellulose content of the liquor (48.8%) was higher than that determined for the corncob residue (25.3%), according to the results presented in Table 1. Khan et al. [33] observed that the yield of biosurfactant production from B. subtilis was improved in fermentation media to which pentoses, such as xylose and arabinose, were added. In this context, it is suggested that the use of corncob liquor could induce biosurfactant production.
Compared with the results obtained by other pretreatments, the hemicellulose content of 48.8% was superior to that found by Benko et al. [34], who used microwave-assisted heat treatment and extracted 22.8% of hemicellulose-based polysaccharides from corn fiber at a temperature of 210 °C. On the other hand, the content found was lower than that reported by Aguilar-Reynosa et al. [35], who recovered 66.9% of the xylan in the hemicellulosic hydrolysate of corn stover by means of a conduction-convection heating system at 180 °C. Although a chemical pretreatment was used in the present study, milder operating conditions were chosen so as not to degrade the hemicellulose and, in particular, the remaining lignocellulosic fractions, which can happen in treatments at higher temperatures. These fractions can be exploited in a future biorefinery context.
The contents of HCL, glucose, and mineral salt solution as carbon and nutrient sources are fundamental for the functioning of the cellular metabolism of the producing microorganism and, therefore, for the structure and properties of the produced biosurfactant [36]. Table 3 presents the results of the 2³ full factorial design performed to determine the role of each independent variable in the optimization of surfactin activity.
In the experiments performed with the different cultures, parameters such as cell concentration and semipurified surfactin were also monitored. After 72 h of fermentation, the highest cell concentration obtained was 4.54 g·L⁻¹. Regarding the concentration of semipurified surfactin, the maximum value observed was 3.95 g·L⁻¹, higher than those found in studies such as Gudiña et al. [8], who obtained a concentration of 1.3 g·L⁻¹ using corn straw liquor for the production of surfactin, and Kumar et al. [37], who reached a concentration of 1.8 g·L⁻¹ of semipurified surfactin using Bacillus licheniformis. The pH remained close to neutrality, varying from 6.31 to 7.87 between samples, an oscillation already expected given the differences in the composition of the culture media.
Statistical analysis of experimental data and model validation
The experimental results for the two responses studied were modeled as second-order polynomial equations. Fisher's test (F) was used to determine the probability value (p), which should be less than 0.05 (regression performed at 95% confidence) for the null hypothesis to be rejected, indicating the dependence of the biosurfactant produced on the variables studied. The statistical significance of the proposed models was verified by the p-values and the regression coefficients presented in Table 4. The percentage of variation explained was given by the coefficient R²: the model for the surface tension reduction rate could explain 72.91% of the variability in the response, with a statistically significant p-value at the 5% level for the mean (p < 0.0001) and a predicted value of 46.48%. A positive effect was obtained for the mean (an increase in the content of the substrate employed leads to an increase in the response studied), while the quadratic liquor term (p = 0.0065) presented a negative effect (a smaller quantity leads to a better response), as shown in Eq. (3).
For the emulsification index in kerosene, the explained percentage of variation was 83.48%, with a predicted value of 65.74%. The p-value was statistically significant at the 5% level for the mean (p < 0.0001), as well as for the quadratic liquor (p = 0.0015) and linear salt solution (p = 0.0262) terms. While the quadratic liquor term had a negative effect, the linear salt solution term had a positive effect (Eq. 4); therefore, a higher amount of mineral salts in the preparation of the culture medium was responsible for a higher emulsification index. Accordingly, when lower levels of mineral salt solution were used in assays 1-4 of Table 3, lower values were obtained for EI24.
An analysis of variance (ANOVA) was used to verify the significance of the linear, interaction and quadratic terms of the models against the pure error. The lack-of-fit indicates a possible failure of the regression to represent data within the experimental domain. However, the proposed models are considered adequate due to the insignificance of the lack-of-fit (95% confidence level) (Table 5). Thus, the models are effective in describing the relationship between the conditions established for the variables and the responses studied.
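The fitting step described above can be reproduced in outline with ordinary least squares; the Python sketch below, using statsmodels, fits a second-order model to a synthetic response on the coded design and prints the term p-values and R², mimicking (but not reproducing) the Table 4 analysis.

```python
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

alpha = 1.682                                    # assumed axial distance
pts = (list(product((-1, 1), repeat=3)) + [(0, 0, 0)] * 4
       + [tuple(a if i == j else 0 for j in range(3))
          for i in range(3) for a in (-alpha, alpha)])
df = pd.DataFrame(pts, columns=["x1", "x2", "x3"])   # coded HCL, glucose, salts

rng = np.random.default_rng(0)
df["strr"] = 40 - 5 * df.x1**2 + 2 * df.x3 + rng.normal(0, 2, len(df))  # fake data

model = smf.ols("strr ~ x1 + x2 + x3 + I(x1**2) + I(x2**2) + I(x3**2)",
                data=df).fit()
print(model.summary())   # term p-values and R^2, analogous to Table 4
```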
Graphical analysis of response surface models
As already indicated for Eq. (3), the STRR response is related to the quadratic liquor variable. This interaction was investigated by plotting the response surfaces in three-dimensional space, with the vertical axis corresponding to the dependent variable as a function of the two horizontal axes representing the independent variables. As documented, surfactin can decrease the surface tension of water from 72 to 27 mN·m⁻¹, which corresponds to an STRR of 62.5% [38]. Therefore, the higher the surface tension reduction rate, the better the biosurfactant performance, which makes the prediction of this factor extremely important for the analysis of the surfactin produced [39]. Fig. 1a shows the response surface for the surface tension reduction rate as a function of the HCL and glucose concentrations, with the optimum in the region of the central point, tending toward the lower-concentration region of the surface. Fig. 1b represents the response surface for the surface tension reduction rate as a function of the HCL and mineral salt solution concentrations. The ideal range was close to the central point for the liquor, tending toward a lower content, while in the same region the mineral salt solution tended toward its maximum content. Thus, comparing Fig. 1 with the results described in Table 3, it can be noted that the biosurfactant from experiment number 16 presented the highest STRR value, corresponding to 47.91%. This is consistent with the results described by authors who also used B. subtilis for biosurfactant production, but in different culture media [8,40]. The proposed mathematical model for the surface tension reduction rate suggests an optimized composition with coded variables of -0.21 for HCL, -0.09 for glucose, and +0.33 for the mineral salt solution (in accordance with the response surfaces of Fig. 1), with actual concentrations corresponding to 15.9% HCL, 2.39% glucose, and 1.21% mineral salt solution. This composition was subsequently used for the optimized production of biosurfactant B-STRR.
The response surface graph for the emulsification index as a function of both HCL and glucose concentrations is shown in Fig. 2a.
The optimized condition was found around the central point, tending toward a lower liquor concentration and a higher glucose concentration. Fig. 2b shows the response surface for the emulsification index as a function of the HCL and mineral salt solution concentrations; its graphical analysis indicated that the ideal region comprised the vicinity of the central point, with a tendency toward a lower liquor concentration in this region and a mineral salt solution concentration in the area above the central point, in agreement with the fact that run number 15 obtained the best result for EI24, equivalent to 64.38%. Since an oil/water ratio of 6:4 was used, the oil phase constituted 60% of the total volume, so for EI24 values equal to or greater than 60, complete emulsification of the oil phase occurred [38]. Moreover, in run number 9 the absence of liquor and the use of sugar as the only carbon feedstock resulted in low values for the EI24 and STRR responses, especially when compared with those of experiments 15 and 16, which had both liquor and sugar in their compositions. Thus, the response surfaces shown in Fig. 2 presented an optimized condition described by the mathematical model with coded variables of -0.41 for HCL, +0.45 for glucose, and +0.87 for the mineral salt solution, corresponding to real concentrations of 15.24%, 3.20%, and 1.53%, respectively, for the emulsification index; these conditions were applied in the elaboration of another optimized biosurfactant (B-EI24).
Optimization of biosurfactant production
Once the two optimized compositions for biosurfactant production had been established, two culture media were prepared in order to evaluate the quality of the surfactin produced in this work. To do so, these new results were compared against those obtained using a standard 4% glucose medium, under the same conditions of temperature, agitation and fermentation time, and against the synthetic surfactant polysorbate 80 (Tween 80). Table 6 shows the behavior of the biosurfactants in the tests of surface tension reduction rate, emulsification index, and bioremediation potential.
As verified in Table 6, the best STRR results were obtained by the biosurfactants B-STRR (57.10%) and B-EI24 (57.38%), both higher than the 46.48% predicted by the model. The results obtained for both were superior to the 4% glucose medium and to the chemical surfactant Tween 80, highlighting the effectiveness of the product elaborated in the laboratory from alternative substrates. Comparing the present results with the literature, they were superior to those reported by Abdel-Mawgoud et al. [38], who used molasses as a substrate for B. subtilis and obtained a surface tension reduction of 48.57%. The surface tension reduction obtained was also higher than the percentages of 43.62% and 39.22% determined by Pornsunthorntawee et al. [41] for Bacillus subtilis PT2 and Pseudomonas aeruginosa SP4, respectively, for which the authors used nutrient broth with palm oil as carbon source.
The ability to form and stabilize an emulsion is a criterion used to verify whether the microorganism is producing biosurfactant [42]. Several factors can influence the emulsifying properties of a biosurfactant, such as the composition of the organic and aqueous phases, the emulsion-stabilizing nature, temperature, and the presence of fine particulates [43]. The emulsification indices reached by the optimized biosurfactants B-STRR (34.12%) and B-EI24 (65.30%) were higher than those determined for the biosurfactant produced from 4% glucose and for the synthetic surfactant (Table 6). The variations found for this response may be correlated with the different concentrations of the substrates used to obtain the microbial surfactant. The highest experimental value was obtained by the biosurfactant B-EI24 and was close to the predicted one (65.30% versus 65.74%, respectively). This can be considered satisfactory according to the criterion that an emulsifying agent should maintain at least 50% of the original emulsion volume 24 h after its formation [44]. The result determined for B-EI24 was higher than the 58% found by Liu et al. [45] but lower than the emulsification index of 67% reported for surfactin by Oliveira et al. [42].
To evaluate the bioremediation potential in contaminated sand, it is necessary to determine the critical micellar concentration, the parameter used to predict the efficiency of the biosurfactant by measuring the concentration necessary to obtain a significant reduction in the water surface tension. The CMC is thus defined from the inflection point of the curve of surface tension versus surfactin concentration [46].
Based on the inflection point of each plotted curve, the measured CMC corresponded to approximately 100 mg·L⁻¹ for the surfactants B-STRR, B-EI24 and Tween 80 (Fig. 3a, b, e, d), with the exception of the biosurfactant produced from 4% glucose, whose CMC value was 120 mg·L⁻¹ (Fig. 3c). The lower the CMC, the more efficient the surfactant and, therefore, the more favorable it is in economic terms for use in industrial processes. Bognolo [47] surveyed the CMC range for different synthetic surfactants, whose values spanned a scale of 0.7 to 2,900 mg·L⁻¹; this variation is linked to differences in surfactant composition. The surfactin produced in the present work presented a better CMC than those reported by that author for the synthetic surfactants linear alkylbenzene sulfonate and sodium lauryl ether sulfate, which presented values of 590 and 2,000-2,900 mg·L⁻¹, respectively. In contrast, the results found in these trials were higher than other data already documented, since surfactin can reach a CMC as low as 11 mg·L⁻¹. In addition to the composition, this difference can also be explained by the strong influence that the acyl chain length of surfactin exerts on its ability to form micelles [48].
Treating contaminated soils is not an easy task, as pollutants, which may be toxic, mutagenic or carcinogenic, are often strongly bound to soil particles [49]. The application of surfactants to contaminated soil and water at concentrations above the critical micelle concentration can potentially reduce interfacial tension, increase solubility, and facilitate biodegradation [50]. Therefore, to evaluate the biosurfactants with respect to their bioremediation potential in sand contaminated with commercial diesel oil, a biosurfactant concentration of 200 mg·L⁻¹ was used. As shown in Table 6, the values remained in a range between 71.16 and 91.55%, with the optimized biosurfactants (B-STRR and B-EI24) showing lower values than the commercial surfactant Tween 80. This may be related to the dependence of hydrocarbon biodegradation in contaminated soils on the environmental conditions and on the types of hydrocarbons present [51,52]. Nevertheless, the biosurfactants developed in the present study demonstrated a very satisfactory effect on the bioremediation of the sand/diesel oil system, especially the optimized surfactin B-STRR, which exhibited a bioremediation potential of 85.18%.
Conclusions
The hemicellulosic liquor extracted from corncob by alkaline treatment presented high potential as a sustainable and economical substrate for the growth of B. subtilis, which demonstrates the relevance of the proposal to exploit the C5 fraction of the hemicellulose of plant biomass. In addition to the type of carbon source, the production of biosurfactants is also influenced by the substrate concentration, so the initial studies of surfactin production performed via a design of experiments allowed the identification and elaboration of two optimized biosurfactants: B-STRR and B-EI24. These bioproducts were submitted to surface tension reduction rate and emulsification index (in kerosene) experiments, whose results qualify them as effective emulsifying agents according to literature parameters. The optimized biosurfactants also showed an excellent ability for the bioremediation of soils contaminated with diesel oil, which encourages their commercial application.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
CHerenkov detectors In mine PitS (CHIPS) Letter of Intent to FNAL
This Letter of Intent outlines a proposal to build a large, yet cost-effective, 100 kton fiducial mass water Cherenkov detector that will initially run in the NuMI beam line. The CHIPS detector (CHerenkov detector In Mine PitS) will be deployed in a flooded mine pit, removing the necessity and expense of a substantial external structure capable of supporting a large detector mass. There are a number of mine pits in northern Minnesota along the NuMI beam that could be used to deploy such a detector. In particular, the Wentworth Pit 2W is at the ideal off-axis angle to contribute to the measurement of the CP violating phase. The detector is designed so that it can be moved to a mine pit in the LBNE beam line once that becomes operational.
Introduction
Motivation
Recent observations of a large θ13 mixing angle have refocussed the next generation of long baseline experiments towards resolving the mass hierarchy, determining the octant of θ23, and measuring δCP. Degeneracies among the remaining oscillation parameters mean that, unless nature has chosen extremely favorable values, NOvA may not be able to satisfactorily measure all the remaining unknowns. Other planned experiments are unlikely to significantly improve our knowledge of these unknowns until 2023, when the first LBNE experiment, the 10 kton LAr detector, is planned to be operational and taking initial beam data. An additional 10 years of data is required to fully realize the projected sensitivity. This leaves a long drought of physics output from the Fermilab long-baseline neutrino program.
For the U.S. long-baseline neutrino program to continue to be an attractive and vibrant endeavor, it is essential to have a phased program that can achieve new physics results on both short and long time scales. To achieve that aim, we advocate for enhanced exploitation of the NuMI beam, as part of a new plan to develop an experimental long-baseline neutrino program that can lead the world in delivering new neutrino insights. Fermilab's NuMI beam line has been the workhorse of the U.S. neutrino program over the past seven years. After upgrades, NuMI will run at double its original intensity and will be the most powerful neutrino beam in the world. With its flexible running configurations and its suite of near detectors, the beam will be the best understood neutrino beam ever constructed, and it is a resource that creates unprecedented opportunities. As an initial stage of the new long-baseline program, detectors could be developed and run in the NuMI beam, delivering world class constraints on δCP, even while the new LBNE beam line is being built.
CHIPS Concept
This Letter of Intent outlines a proposal to build a large, yet cost-effective, 100 kton fiducial mass water Cherenkov detector that will initially run in the NuMI beam line. The CHIPS detector (CHerenkov detector In Mine PitS) will be deployed in a flooded mine pit, removing the necessity and expense of a substantial external structure capable of supporting a large detector mass. There are a number of mine pits in northern Minnesota along the NuMI beam that could be used to deploy such a detector. In particular, the Wentworth Pit 2W is 7 mrad away from the central axis of the beam, a position which optimizes rate and background rejection. The pit is also one of the deepest in the area, allowing for a water overburden of several tens of meters. The detector is designed so that it can be moved to a mine pit in the LBNE beam line once that becomes operational.
While one cannot achieve the ideal baseline to measure the mass hierarchy in the NuMI beam, studies performed by the eNuMI working group [1] show that detectors in the NuMI beam line can constrain the value of δCP. The CHIPS experiment will probe δCP by measuring electron neutrino appearance in the NuMI muon neutrino beam. Assuming the nominal beam power that NuMI will achieve in the NOvA era, the nominal NOvA beam configuration, and a 100 kton fiducial mass CHIPS detector deployed in the Wentworth Pit, on the order of 340 (190) νe-CC events would be observed in the normal (inverted) hierarchy above a background of approximately 640 events in a three year run with the beam in neutrino mode. In antineutrino mode, about 200 (150) νe-CC events should be observed on a background of about 350 events. With these event rates, the combination of CHIPS, NOvA and T2K provides an error on δCP better than 25° for all values of δCP, assuming the mass hierarchy and other degeneracies are resolved. Being close to the beam axis, CHIPS sees a relatively wide energy distribution and high flux, and thus provides complementary information to the off-axis experiments. Combining CHIPS data with the off-axis results can further constrain δCP, improve the significance of a discovery of CP violation in the neutrino sector, and help resolve ambiguities in the mixing parameters.
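For a rough sense of scale, the quoted event counts can be turned into a naive counting-statistics significance of the appearance excess; the Python sketch below uses the simple s/√(s+b) estimator and ignores systematics, so it is only an order-of-magnitude illustration, not the GLoBES analysis.

```python
from math import sqrt

def naive_significance(signal: float, background: float) -> float:
    """Counting-statistics excess significance, s / sqrt(s + b)."""
    return signal / sqrt(signal + background)

# 3-year neutrino-mode run (event counts quoted in the text)
print(round(naive_significance(340, 640), 1))  # normal hierarchy, ~10.9
print(round(naive_significance(190, 640), 1))  # inverted hierarchy, ~6.6
```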
Even a modest target mass (10 kton) can improve the resolution in δCP over NOvA, indicating that a prototype detector in the NuMI beam can deliver meaningful contributions to neutrino physics on a short time scale. This document also describes an R&D plan to prove the CHIPS concept and to study ways to reduce the cost per kiloton of building such a detector, at the same time delivering additive results on δCP. This R&D effort will encourage a new, vibrant detector development community centered on the FNAL neutrino program. A nationwide consortium of laboratory and university groups is already collaborating to focus on the development of new and innovative photodetector technologies [2]. U.S. companies are beginning to develop other photodetector technologies, providing competition that will drive down the cost of instrumentation. Leveraging these efforts will enable U.S. leadership in the construction of megaton size neutrino detectors.
Physics Reach
The physics capabilities of CHIPS have been studied using GLoBES [3]. The nominal experimental setup assumes a 100 kton fiducial mass detector with an exposure of 6 × 10²⁰ POT/year, which is the expected yearly NOvA exposure. The medium energy (ME) flux described in Section 9.1 is used as input to these simulations. The cross sections used are standard to GLoBES. Three-flavor neutrino oscillations are incorporated into the event rate predictions; the known oscillation parameters are fixed at the values given in reference [4] and are summarized in Table 1. The selection efficiencies for each event type as a function of energy included in GLoBES are based on Super-K experience, assuming 20% photodetector coverage. Two levels of selection are applied. First, a pre-smearing efficiency (vs. true energy) is applied, which represents a cut based on what fraction of each event type looks like a single electron. True energy is then converted to a reconstructed energy using migration matrices, again from Super-K. Then a post-smearing efficiency (vs. reconstructed energy) is applied, based on the Super-K log-likelihood cut. The resulting efficiencies for each event type as a function of reconstructed energy are shown in Figure 1. The energy distribution of each event type, for each mass hierarchy, is shown in Figure 2. Integrated event counts are given in Table 2.
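Schematically, this two-stage selection amounts to scaling the true-energy spectrum by the pre-smearing efficiency, applying the migration matrix, and then scaling by the post-smearing efficiency. The toy Python sketch below illustrates the bookkeeping; the arrays are invented placeholders, not the Super-K inputs.

```python
import numpy as np

n_true = np.array([120.0, 300.0, 220.0, 90.0])   # events per true-energy bin
eff_pre = np.array([0.40, 0.55, 0.60, 0.50])     # pre-smearing efficiency
M = np.array([[0.80, 0.15, 0.00, 0.00],          # migration matrix:
              [0.15, 0.70, 0.15, 0.00],          # rows = reco-E bins,
              [0.00, 0.15, 0.70, 0.15],          # columns = true-E bins
              [0.00, 0.00, 0.15, 0.80]])
eff_post = np.array([0.70, 0.75, 0.75, 0.70])    # post-smearing efficiency

n_reco = eff_post * (M @ (eff_pre * n_true))     # selected reco-energy spectrum
print(n_reco)
```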
Events from ν τ appearance are not included in the GLoBES simulations. Independent calculations indicate there will be 3.7 ν τ -CC interactions per kton per year, integrated over all energies. An estimate of how many of these events would pass the ν e selection was made using the selection efficiencies from GLoBES for each tau decay mode: the ν e selection efficiency is applied to the electron decay mode, the ν µ selection efficiency is applied to the muon decay mode, and the NC selection efficiency is applied to the hadronic decay mode. The event counts from each decay mode are weighted by the branching fractions and summed to produce a prediction of 0.35 additional background events per kton per year from ν τ appearance. This estimate will be further refined once a full simulation and event reconstruction suite is available.
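The bookkeeping of this estimate can be written down in a few lines. In the sketch below, the tau branching fractions are standard PDG values, while the per-mode selection efficiencies are hypothetical placeholders chosen only to illustrate the weighting; they are not the energy-dependent GLoBES efficiencies.

```python
# Sketch of the nu_tau background estimate: per-decay-mode selection
# efficiencies (placeholders) weighted by tau branching fractions and
# applied to the quoted total rate of 3.7 nu_tau-CC per kton per year.

RATE_NUTAU_CC = 3.7                 # nu_tau-CC interactions / kton / year

branching = {"electron": 0.178, "muon": 0.174, "hadronic": 0.648}  # PDG
# Hypothetical nu_e-selection efficiencies applied to each decay mode:
eff_nue_sel = {"electron": 0.40, "muon": 0.02, "hadronic": 0.03}

background = RATE_NUTAU_CC * sum(branching[m] * eff_nue_sel[m] for m in branching)
print(f"nu_tau background: {background:.2f} events/kton/year")  # ~0.35 here
```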
Figure 3 shows the resolution on δ CP when a 100 kton fiducial mass CHIPS starts taking data four years after NOvA starts. This resolution assumes that the mass hierarchy is known and all other degeneracies are resolved. The CHIPS information is also combined with NOvA and T2K in a simultaneous fit. The resolution ranges from around 15° to around 24° across the whole range of δ CP. It can be seen from these figures that the information from CHIPS is complementary to NOvA+T2K owing to the wider beam spectrum. At large δ CP the δ CP resolution is much better than NOvA, while at small δ CP it is worse. The wrong-hierarchy exclusion significance for the same configuration is shown in Figure 3 (middle). The best combined exclusion in the ME tune reaches a 4σ significance. The potential for discovering CP violation (i.e. excluding δ CP = 0° or 180°) is shown in Figure 3 (bottom). The features for one half of δ CP space (positive δ CP with NH, negative δ CP with IH) are due to the ambiguity in the hierarchy.
If the hierarchy is determined, then the curves look symmetric. While CHIPS can achieve lower errors on larger values of δ CP, the shape of the χ² curve gives NOvA plus T2K more power to exclude CP conservation. However, the combination of CHIPS, NOvA, and T2K can find evidence for CP violation (above 3σ) in around 25% of δ CP space, doubling to 50% if the hierarchy is known. The sensitivity of CHIPS in the lower energy beam tune was also explored. An increase in the low energy beam flux can be achieved by moving the hadron production target closer to the magnetic focusing horns. The standard NuMI low energy configuration is achieved by partially inserting the target into the neck of the first horn. This configuration is harder to achieve, in terms of reconfiguring the beamline at Fermilab, now that the beam line has been upgraded for NOvA running. For comparison, the event rates associated with the ME and LE fluxes are shown in Figure 4. The same figure also shows the band of δ CP resolutions (minimum to maximum across all values of δ CP in both hierarchies) against the off-axis angle of the detector. The choice of 7 mrad is the preferred location in both the LE and ME beams in terms of δ CP resolution, owing to the combination of a high event rate with a low background for either beam tune.
Staged NuMI Reach
Owing to financial constraints, it may not be possible to construct a 100 kton detector in the four years after NOvA starts, and so the possibility of building CHIPS in a phased approach has been investigated. This would involve increasing the fiducial mass over multiple years, exploiting the accumulated experience to accelerate the expansion. Figure 5 shows how adding a phased CHIPS detector in the NuMI beamline improves the δ CP resolution over the default configuration of NOvA and T2K only. Two approaches are shown: a fast-track approach of building a 10 kton detector two years after NOvA starts data taking and increasing this to 20, 50 and 100 kton every subsequent two years, and a slower-track approach, where 10 kton is instrumented four years after the NOvA turn on and increased to 20 and 50 kton after seven and nine years respectively.
CHIPS in LBNE
When the LBNE beam is completed, the CHIPS detector will be redeployed in that beam. The construction procedure will allow the PMTs and electronics to be salvaged and reused. The question of where best to position CHIPS for maximum complementarity with the LAr detector has been studied. As a first consideration, the off-axis angle was varied and the resolution on δ CP was studied. Figure 6 shows the band of δ CP resolutions (minimum to maximum across all values of δ CP in both hierarchies) against the off-axis angle of the detector in the LBNE beam. The left plot shows the combined reach (in green) if the CHIPS detector has already been constructed in the NuMI beam. The right plot shows the δ CP reach if CHIPS is only available in the LBNE beam. In either case, the CHIPS detector contributes a large weight to the resolution.
It would be preferable to place the redeployed CHIPS in a position of maximum complementarity, while taking into account the geographical considerations. There is a reservoir at a baseline of 1250 km and at an angle 20 mrad off-axis in the LBNE beam which could potentially house the CHIPS detector. In this case, the second maximum of the oscillation could be studied, which would be complementary to the on-axis LBNE detector. When the 100 kton fiducial mass CHIPS detector is placed in the LBNE beamline, with a baseline of 1250 km and at 20 mrad off-axis, the neutrino spectra produced are shown in Figure 7.
The hierarchy exclusion significance, resolution on δ CP, and CP violation discovery potential are shown in Figure 8 for different combinations of the currently foreseen long-baseline neutrino experiments. A 5σ exclusion of the wrong hierarchy can be made for the whole phase space of δ CP only with the help of CHIPS (with a 100 kton fiducial mass, in 6 years of NuMI beam).

The Wentworth Pit

Figure 10 depicts a topographical map of the proposed location with elevation contours derived using photogrammetric methods from aerial photographs taken in May 2001. From the contour map, the lowest elevation is 1305 ft above sea level, but the contour map was made from a photo taken when the pit had about 10 m of water in it. The current water level is at 1471 ft, making the local depth of the water about 60 m. This estimate of the water level is consistent with recent measurements taken with a commercial depth finder. Water is drained from the pit in the spring to ensure the pit does not overflow during the summer rainy season. Maximum fluctuations of the water level are estimated to be on the order of ±10 ft.

The water in the Wentworth Pit was surveyed from January 2010 to September 2012 to characterize its quality. Two separate types of testing were conducted. The first type consisted of monthly tests of surface water for standard mine water contaminants. The temperature of the water was measured to range between 0°C and 20°C due to seasonal weather fluctuations. The turbidity of the water, a measure of its clarity, was measured to be 0.7 ± 0.5 NTU (Nephelometric Turbidity Units), implying that the water is quite transparent. The pH at the surface was measured to be 8.3 ± 0.3. A further set of profile measurements was taken in September of 2011, from the surface to a depth of 123 feet. These profiles showed that the temperature varied from 20°C at the surface to 5°C at 123 feet. The pH was also observed to drop from 8.4 at the surface to 7.2 at 123 feet deep. Detailed results of these tests are summarized in the appendix.
Depth Considerations
The water of the mine pit not only provides structural support for the detector, but also serves as an overburden to shield the detector from cosmic rays. The depth of the Wentworth pit allows a relatively shallow overburden of a few tens of meters of water, implying a high rate of cosmic ray (CR) muons entering the detector. To determine the feasibility of the shallow overburden, expected cosmic rates were computed as a function of detector depth, and the detector dead time due to those rates was considered.
To first order, the energy-averaged intensity of muons at sea level, I S, has a characteristic angular dependence proportional to cos²θ, where θ is the zenith angle [5]:

I S (θ) = I SV cos²θ,

where the vertical cosmic ray flux above 1 GeV at sea level, I SV, is 70 m⁻² s⁻¹ sr⁻¹. At the proposed detector depth, muon energy loss is primarily by ionization; radiative energy losses are completely negligible [6]. Using calculations from Bugaev et al. [7] and Bogdanova et al. [8] we have estimated the rate of cosmic rays as a function of detector depth and detector geometry [9]. Figure 11 shows rates as a function of depth. With a 40 m.w.e. overburden, the cosmic rate is expected to be 50 kHz in a cylindrical detector 50 m in diameter and 20 m high. The impact of the cosmic rates on the NuMI beam events is mitigated by the short NuMI spill of about 10 µs.
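For orientation, the sketch below integrates the cos²θ intensity over the upper hemisphere to obtain the sea-level muon rate through the flat top face of one detector unit. The reduction to the quoted ~50 kHz at 40 m.w.e. requires the energy-loss calculations of the references above and is not attempted here.

```python
import numpy as np
from scipy.integrate import quad

I_SV = 70.0                       # vertical muon flux, m^-2 s^-1 sr^-1
radius = 25.0                     # top-face radius of one unit, m
area = np.pi * radius**2

# Integrate I_SV cos^2(theta) * cos(theta) over the upper hemisphere;
# the extra cos(theta) projects the flux onto the horizontal surface.
integrand = lambda th: np.cos(th)**3 * np.sin(th)
omega, _ = quad(integrand, 0.0, np.pi / 2)        # evaluates to 1/4
rate = I_SV * 2 * np.pi * omega * area            # muons per second

print(f"sea-level rate through the top face: {rate/1e3:.0f} kHz")  # ~216 kHz
```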
To further understand the impact of these cosmic ray events on the CHIPS detector, we used the GEANT4 framework [10] and the cosmic ray flux available through the CRY package [11] to study the efficiency of photon detection and the effect of the event time span on the overall dead time caused by the 0.5 cosmic ray events per spill. In the simulations we have assumed a detector comprising two concentric cylinders: an Inner Detector (ID) surrounded by and optically separated from a larger Veto Detector (VD). The veto volume extends 2 m outward from the inner detector boundary. The walls of the ID volume are assumed to absorb light, while the walls of the VD volume are reflective. Figure 12 shows event displays from this simulation package.
The distribution of cosmic ray event duration is shown in Figure 13. The average dead time during the spill due to CR muons is (rate) × (event time span), which results in a conservative estimate of the average dead time per spill of 250 ns [12]. This is 2.5% of the beam spill. For contained CR muon events, the dead time window could be enlarged, perhaps to 1-2 µs, to minimize the impact of muon decay Michel electrons on the beam events.
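The arithmetic behind this estimate is simple enough to spell out. The 500 ns average event span assumed below is inferred from the quoted numbers rather than taken directly from the simulation.

```python
# Back-of-envelope dead-time estimate: 0.5 cosmic-ray events per 10 us
# spill, with an assumed average event time span of 500 ns (inferred,
# consistent with the quoted 250 ns average dead time per spill).

events_per_spill = 0.5
event_span_ns = 500.0
spill_ns = 10_000.0            # NuMI spill length, 10 us

dead_time_ns = events_per_spill * event_span_ns
print(f"dead time: {dead_time_ns:.0f} ns = "
      f"{100 * dead_time_ns / spill_ns:.1f}% of the spill")
```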
Detector Concept
Due to practical considerations in deploying very large detectors, we propose to build up the needed detector mass in independent, cylindrical units. Each unit will sit at the bottom of the mine pit. The detector height is constrained by the depth of the water and the overburden requirements. Detector dimensions are further limited by the attenuation length of light in the water. To respect these constraints, each unit will comprise a cylinder of photodetectors surrounding a water volume 20 m high and 50 m in diameter. Excluding interactions closer than 2 m to the photodetector surface, the proposed dimensions yield a fiducial mass of 27 kton. The water enclosed in each unit will be kept dark and isolated from the outside lake water by a reinforced polymer membrane. Additionally, some photodetectors will be arranged to point outwards into a 2 m-thick veto volume along the top and side of the cylinder. This volume also provides room for the support framework, and is optically separated from the active volume by an opaque plastic sheet between the photodetectors. Figure 14 illustrates the module geometry, while Table 3 tabulates the detector parameters.
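The quoted masses follow directly from the cylinder geometry, as the short check below shows (taking the density of water as 1 t/m³).

```python
import math

# One detector unit: 50 m diameter, 20 m high, fiducial volume excludes
# 2 m from the photodetector surfaces on all sides.
radius, height, cut = 25.0, 20.0, 2.0   # metres

total_kton = math.pi * radius**2 * height / 1e3
fiducial_kton = math.pi * (radius - cut)**2 * (height - 2 * cut) / 1e3

print(f"total mass:    {total_kton:.0f} kton")     # ~39 kton
print(f"fiducial mass: {fiducial_kton:.0f} kton")  # ~27 kton
```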
Photodetectors
The nominal design calls for high quantum efficiency (HQE) 10″ photomultiplier tubes from Hamamatsu [13]. As in the LBNE water Cherenkov design [14], we assume the HQE tubes can achieve the same efficiency with 10% photosensor coverage as Super-K achieved with 20% coverage using lower QE tubes. A simulation and reconstruction program is under development to determine the optimal coverage and placement of the tubes, as described in Section 9. The tubes will need to withstand substantial pressure. The 10″ tubes have been shown in tests done by LBNE [14] to survive down to 60 m, but this is at the edge of the comfort zone. The 12″ tubes from Hamamatsu do withstand more than 60 m of hydrostatic pressure [13][14][15], and would be an appropriate replacement should the 10″ tubes not suffice. The comparatively low cost of the deployment and support system is a strong motivation for the use of cheaper photodetectors currently in development [16] (see Section 10 for further discussion of photodetector strategy).

Individual photodetectors on the bottom and top surface of the cylinder will be mounted on a lightweight truss framework, or "space frame", extending 1-2 m perpendicular to the instrumented plane; frame components will incorporate only enough mass to approximately cancel the buoyancy of the photodetectors, so that the assembly remains neutrally buoyant and spans the 50 m diameter without significant distortion. Molded plastic housings will be used to gently hold photodetectors while providing a secure mounting system, similar to those designed and tested for the LBNE-WCD option [14]. Photodetectors on the cylinder walls can be secured to vertical steel support cables [14,17] or to a framework similar to the bottom and top planes. The framing and/or support cables are in turn secured between large stiff rings, defining an overall 20 m-high cylinder of instrumentation that can be raised and lowered as needed. Illustrations of the PMT housings developed at the Physical Sciences Laboratory (University of Wisconsin) are shown in Figure 15, which are compatible with either framing or cable supports as shown. Cost estimates for the PMT assemblies are given in Table 4.
Detector Vessel
Surrounding each 27 kton detector unit is a reinforced polymer membrane (liner) that blocks outside light and isolates the pure water inside the modules from the pit water. Many commercially available liner material options exist and are regularly used in the geomembrane and roofing industries for blocking water over large areas [18]. For CHIPS, the liner will be maintained in a cylindrical shape by a framework and cables connecting two large stiff rings. Such rings and associated mooring lines are routinely used for the construction of net cages in the aquaculture industry [19,20]. Rings up to 64 m in diameter have been deployed in open sea conditions [21].
The polymer liner may be contracted as a design-build project. There are several relevant examples in the literature to guide the design of the CHIPS liner. A baseline material is Hypalon, which was the proposed material in the GRANDE detector design [22]. Hypalon is a chlorosulfonated polyethylene (CSPE) synthetic rubber (CSM) that was previously manufactured by DuPont. In 2010, DuPont ceased manufacturing Hypalon, but several other manufacturers are still operational and there are other viable options on the market [23]. Some alternative liner materials have been investigated, including XR-5 manufactured by Layfield [24], as well as polypropylene and polyethylene materials. These materials may provide more economical alternatives to the mainstream Hypalon/CSPE.
Because of the forces acting on the liner surface, the design is expected to require additional support to relieve stresses in the liner and maintain its cylindrical shape. The main challenge is posed by differences in water density that may occur between the inside and outside of the liner volume. Lakes exhibit a time-dependent temperature vs. depth profile, chemical concentration profile, and density profile [25]. If the temperature profile inside the liner lags that outside by 5°C, the density effect causes differential pressures up to 100 N/m² across the liner surface; for a surface with curvature radius 25 m, the resulting tension approaches the tearing strength of available liner materials. Supporting such a pressure difference on either the top or bottom flat of the cylinder is even more impractical than on a curved side. In addition, while the motion of water in small lakes is modest compared to open seas, storm driven flows (seiches) reach up to 20 cm/s well below the surface [26]. Such flows create a dynamic pressure on the cylinder wall, which can be regarded as a "bluff body" in the turbulent flow regime [27]. While the corresponding forces are only of order 1 N/m², they are asymmetric and act over large, almost-flat areas; the forces can also be magnified by vortex-excited oscillations [28].
Analysis of the density profile issue reveals that a completely submerged cylinder tends to experience substantial differential pressure on the top, bottom or both flat surfaces, which are hard to restrain. This follows from computing the pressure increase from top to bottom, which cannot match between inside and outside if the densities are different. However, a single horizontal surface on a submerged volume naturally experiences low differential pressure as long as the remainder of the structure is much stiffer, for example by virtue of a support frame that maintains a curved shape. For CHIPS, the bottom of each cylinder is chosen as the single horizontal surface, whereas the top will be covered by a structural dome, as illustrated in Figure 16. The dome needs to be sealed with a liner similar to the side wall, but an additional membrane will isolate its volume from the pure detector water without compromising its structural function. The vertical side walls and the dome will still be subject to a differential density pressure rising with distance above the bottom surface, but this can be reduced to around 10 N/m² by management of the thermal profile inside the liner. A truss framework would provide effective reinforcement of the liner walls against the remaining forces, and other options such as tension cables and rope netting will also be considered. Costs associated with the detector structure are provided in Table 5.
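A rough version of this estimate is sketched below. The assumed density mismatch of 0.5 kg/m³ is a representative value for a few-degree temperature lag in fresh water near 10-20 °C, not a measured number; the membrane hoop tension then follows from the thin-wall relation T = Δp R.

```python
# Differential pressure across the liner when the internal temperature
# profile lags the lake, and the resulting hoop tension in the membrane.

g = 9.81            # m/s^2
delta_rho = 0.5     # kg/m^3, assumed density mismatch (illustrative)
height = 20.0       # m, distance above the pressure-matched bottom surface
radius = 25.0       # m, curvature radius of the cylinder wall

delta_p = delta_rho * g * height   # N/m^2 at the top of the wall, ~100 N/m^2
tension = delta_p * radius         # N/m, thin-wall hoop tension T = dp * R

print(f"differential pressure: {delta_p:.0f} N/m^2, "
      f"membrane tension: {tension:.0f} N/m")
```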
Construction and Deployment
A cable or net cage will be moored in the lake surrounding each intended detector location, supported at the surface by a large floating ring and held at the bottom by a large sinker ring, like those used for the aquaculture cages [19][20][21] shown in Figure 17. As shown in the bottom panels of Figure 17, the detector will be built incrementally downward, supported by the surface ring, gradually flooded and lowered (or raised) inside the outer cage by cables. During construction activities, the top dome will be held above the lake surface (empty) and will serve as a sheltered work area preventing contamination of the interior purified water volume. Once the cylinder is complete and capped by liner, the dome will be sealed to the cylinder top and then also flooded and sunk. During the winter, standard marina equipment will be used to circulate water near the floating ring and keep it decoupled from the ice sheet around it.

Each photodetector is served by a single electrical cable which is routed along the support framework and emerges in bundles near the top of the cylinder. The emerging bundles are sealed or surrounded by additional liner material and routed to on-shore power supply and data acquisition equipment. Alternatively, floating trailer-sized enclosures at one side of each surface ring could house the first level electronics, with consolidated power and high-speed communication connections from there to shore. Additional connections to shore will be necessary for the flow of purified water. These connections can be sealed to suitable openings in the liner during deployment, and the corresponding umbilicals will then be routed to on-shore purification equipment. Multiple connections at different depths will allow better matching of the lake's thermal profile if each umbilical is supported at approximately constant depth from a line of buoys.

Construction of each cylinder will begin with assembly of the upper and lower rings defining the outer cage, together with an inner concentric ring on top of which the dome is built. This work can be done near shore, then towed to the final location and moored. Within the dome, additional temporary floating docks will enable workers to assemble sections of the large bottom panel and subsequently to join them, as shown in Figure 18. Each section could start with a triangular raft 5 m per side, assembled on the dock as a 1 m lattice of PVC pipe lengths, then floated and finally secured to preceding raft sections. With an inner pipe diameter of 4″, each raft section can temporarily support 360 kg for the additional work of unrolling and welding together sections of liner and building a 1 m-high network of support framing on top of that. Because of the raft support, plastic welding of liner strips or large prefabricated liner sections can be carried out above the water line, and standard field techniques [18] can be used. Around the growing perimeter of the base section, the liner will be wrapped upwards along the support framing. Photodetectors in housings will also be attached to the framework at this stage, and cables routed as needed. Once complete, the raft pipes can be filled with water to eliminate their buoyancy, while still leaving the entire bottom panel temporarily in a floating state.
An internal ring will also be assembled and joined to the framing inside the perimeter, which helps to stiffen the bottom circumference and provide an attachment point for vertical deployment cables that ultimately extend to a corresponding ring 20 m above. Again, as in Figure 17, the remaining vertical wall of the liner will then be built up around the perimeter in 1 m increments, while flooding the existing volume with enough purified water to maintain the top edge just above the water line, where it is easily accessed by workers on floating dock sections. After completion of the walls, the second ring will be attached and serve as the top attachment point for the vertical deployment cables. Similar to IMB [17] and the concept for LBNE-WCD, photodetectors in plastic frames will be attached in sequence and lowered along each pair of support cables, with the cables looped around pulleys at top and bottom to allow the necessary motion.
After completion of the walls, the top framework is built in from the perimeter, utilizing integral PVC pipe for temporary flotation. Construction of the wall and top sections includes installation of light barrier material to define the veto volume. Finally the cylinder top is sealed with liner and all temporary flotation devices are filled with water to allow submerging the detector. This includes the dome, which can be flooded with pit water. In principle, the assembly process can be stopped at any time, allowing a partially complete detector to be lowered and later retrieved for further work.
Cosmic Veto
At 40 m.w.e. in such a large detector, the cosmic ray rate will not be negligible; these background muons must be tagged for removal. To this end, a fraction of the photodetectors will be arranged to point outwards into a 2 m-thick veto volume along the top and side of the cylinder, where they will reliably detect Cherenkov light from background muons. This veto volume also provides room for the support framework, and is optically separated from the active volume by opaque plastic sheets between photodetectors. The PMTs are the same type as those of the inner detector and are mounted on the same wall. The role of the veto is to efficiently tag and measure the time of CR muons entering the veto and possibly penetrating into the inner detector. The efficiency of different configurations was studied using the cosmic ray simulation. The PMT readout threshold is set to 1 PE. Taking into account the QE and the threshold, a PMT "fires" if 10 photons hit its surface. A veto is defined as a coincidence of m or more veto detector PMTs firing, where m is an integer allowed to vary in the tests. Figure 19 shows the number of veto PMTs that fire and the veto efficiency for different assumed liner reflectivity values. Figure 20 shows the number of veto PMTs that fire and the efficiency for different veto PMT spacings. It is found to be sufficient if each veto PMT covers a 4 × 4 ID PMT array. Assuming 10% photocathode coverage, a total of 626 PMTs are needed for this configuration.
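The veto decision itself reduces to a threshold-plus-coincidence test, sketched below with hypothetical per-PMT photon counts.

```python
def veto_decision(photon_counts, fire_threshold=10, m=3):
    """Return True if at least m veto PMTs fire, where a PMT fires when
    at least fire_threshold photons hit it (folding in the QE and the
    1 PE readout threshold described in the text)."""
    n_fired = sum(1 for n in photon_counts if n >= fire_threshold)
    return n_fired >= m

# Hypothetical per-PMT photon counts for one crossing muon:
print(veto_decision([0, 3, 25, 18, 0, 12, 1], m=3))   # True: three PMTs fire
```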
Water Purification
A long attenuation length of light in the water is critical for the Cherenkov radiation to reach the PMTs. Furthermore, knowledge of the attenuation length is critical for accurate modeling of the detector. Though remarkably clear, the Wentworth pit water is not clear enough for the detector volume, which requires a light attenuation length of ∼40 m. The detector volume water will need to be purified to attain and maintain water clarity. Water purification is a standard technology that will likely be implemented through a design-build or a design-build-operate contractor.
Given a fiducial mass of 27 kton and a total mass of 39 kton, we can scale the water system requirements from past detectors using total volume and surface area considerations. The scaling results in a required recirculation rate of about 300 gal/min. A 300 gal/min system would fill and recirculate the 39 kton in 24 days. Figure 21 shows an outline of a 200 gal/min filtration system that could be scaled for this application. To reduce the cost of civil construction, the system could be mounted in three 40 foot-long shipping containers using modified tanks. Such a project has been implemented by the U.S. Navy.
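The quoted turnaround time is a direct unit conversion, reproduced below as a check.

```python
# Time for a 300 gal/min plant to process the full 39 kton
# (39,000 m^3) detector volume once.

GAL_PER_M3 = 264.172
volume_m3 = 39_000.0
flow_gal_per_min = 300.0

minutes = volume_m3 * GAL_PER_M3 / flow_gal_per_min
print(f"one full volume exchange: {minutes / (60 * 24):.0f} days")  # ~24 days
```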
The water system, shown schematically in Figure 22, will be used to both fill and recirculate the detector water. The system will initially be used to fill the volume enclosed by the polymer liner, and then to provide additional pure water as the detector leaks or evaporates over time. Recirculation will be accomplished by bypassing the Reverse Osmosis stage of the system, as described below. Recirculation is necessary to maintain the high degree of purity while also eliminating "dead zones" in the detector volume where detrimental bacterial growth would normally take place. While bubbling of compressed air may be useful in deterring surface freezing in the winter, a water heating system may also need to be included during recirculation. A preliminary and conservative budget, meant to cover the cost of details that have not yet been included, is $1.4M for the system, containers, and the internal and interconnect piping.
The Filling Filtration System

As described for previous water Cherenkov detectors [17,22,29], we can expect that the water filtration system will include an initial depth and/or cyclone filter to remove contaminants down to a few µm. Chemical treatment will then be implemented to remove undesired chemicals from the water. A reverse-osmosis (RO) stage will follow (possibly multiple stages; IMB used a 3-stage RO system), which will reduce the remaining particulate size down to less than 0.01 µm. A deionization (DI) filter may also be necessary following reverse osmosis, but due to the high cost of deionization, this stage may be eliminated or only a portion of the water may pass through the DI filter. A UV sterilization stage will kill any bacteria.
The Recirculation Filtration System
The recirculation flow can bypass the reverse osmosis stage since the input water (coming from the detector) will already be quite pure. The remaining portion of the system will remove particulates down to 0.2 µm and remove substances that leach into the detector water from the detector materials themselves. Cost may drive the final solution towards eliminating the deionization process, but this decision will be made later by the water purification contractor.
CHIPS ND Concept

Physics Considerations
The NuMI beam is not a pure ν µ beam. It has a small inherent admixture of ν e that is an irreducible background to the ν µ → ν e oscillation signal. In addition, neutral current ν µ events, particularly those with a π 0 in the hadronic recoil system, can mimic the ν µ → ν e oscillation signal. CHIPS requires a Near Detector to study all neutrino interaction types before they have had a chance to oscillate and to provide an understanding of the initial composition of the beam. A Near Detector would study the neutrino-nucleus interactions in a high statistics environment close to the beam target and could also monitor the neutrino beam's performance. Beam monitoring is not a requirement, however, given the array of existing detectors that already monitor the NuMI beam.
With these goals in mind, a key issue of the Near Detector design is that it should be as similar as possible to the Far Detector in design and material. Sufficiently similar Near and Far Detectors would allow one to use the same event reconstruction and particle identification in both detectors. This would minimize the systematic uncertainties in the predicted background at the Far site. A water Cherenkov Near Detector design is preferred to maximize these benefits. It provides the same neutrino-interaction target, ensuring that the efficiencies for signal and background events are similar in each detector. Such a detector would be low cost per ton and utilize a well known technology; however, it should be noted that a water Cherenkov detector has never been proven in high intensity environments. Deployment in the NuMI beam would represent such a scenario.
Shape and Size Considerations
The challenges related to containment and multiplicity of the neutrino interactions drive the specifications of a water Cherenkov Near Detector design. On one hand, the detector needs to span enough radiation lengths for a developing electromagnetic shower to be identified. The design must also allow for the separation of photon rings with the ring identification algorithms that are used for the event reconstruction in the Far Detector. On the other hand, if the detector is too large then event pile-up and high rock event overlap rates will incur high dead times, significantly reducing statistics.
The current location being considered for the Near Detector placement is 100 m underground on the Fermilab site, upstream of the MINOS and MINERvA detectors. For reference, the MINOS Near Detector is situated 500 m downstream of the end of the decay pipe. This is a compromise between the cost of digging a new cavern, finding available space in the heavily congested cavern in front of the NuMI beam, and achieving the necessary physics, driven by the flux requirements.
The MINOS Near Detector was used to estimate the expected event overlap rate. It has a fiducial mass of approximately 24 tons, with a total mass of about 1 kton. In a 10 µs beam spill, 5.6 events are expected in the fiducial volume, compared to a total of 35 events in the whole detector. A simple study found that 14.8% of neutrino interactions inside the fiducial volume had additional activity in the fiducial volume within a time window of 50 ns from other neutrino interactions. These other neutrino interactions occur either in the rock or in the non-fiducial part of the detector. This fairly large event overlap rate demonstrates the importance of keeping the CHIPS Near Detector as small as possible in order to minimize overlaps and cost.
Design Possibilities
Three ND options are being explored. One design under consideration is a thin side-on inner cylinder (IC) of radius 0.5-1 m and length ∼4 m, which encloses a ∼12.5 ton volume. The IC would be filled with water to serve as the water target and would be contained and supported along the center of a larger side-on light-tight outer cylinder (OC). The readout PMTs would be instrumented along the sides and back face of the inside of the OC. This option is illustrated in Figure 23.
Another possible design would entail a larger water detector using photodetectors with significantly better time resolution (∼100 ps) and finer granularity (∼1 cm) than the standard PMTs to be used in the Far Detector [2,30]. The better time sampling of the showers can help both to enhance the particle identification in a small volume and to overcome high overlap rates. Furthermore, the finer granularity can help mitigate the dead time issues, as each channel could be read out independently. These enhancements would allow a single volume of water with its size driven by space and cost constraints. The sampled bins in time and space for each interaction can then be merged to simulate the geometry and granularity of the Far Detector.
The third possible option sacrifices the idea of a common particle ID between the Near and Far Detectors. Instead, it utilizes a combination of the existing MINOS+ and MINERvA detectors with a water target to study neutrino-nucleus interactions on water.
Data Acquisition
The Data Acquisition (DAQ) system for the CHIPS detector is designed with the aim of being flexible enough to be cost effective for the full detector, yet simple enough to implement for an R&D stage. These disparate aims can be accommodated within a modular solution based around the MicroTCA crate, which is rapidly becoming the industry standard [31]. The principal advantage of this modular approach is that in the early stages (low channel count), off-the-shelf components can be used and then replaced with custom boards for the final (high channel count) detector. The system follows a fairly typical design featuring front-end boards, digitizers, FPGAs for triggering and digitizer readout, and fiber optic links to transfer the triggered data to an on-line CPU farm for full event building. A summary of the requirements and channel count is provided in Table 6.
Front End Electronics
The front end electronics will handle the output signal of the photodetectors. The exact design of the front-end board will necessarily depend on the final choice of photodetector, although in general terms the board will contain a preamp to amplify the signal before digitization, a discriminator to provide a digital signal for triggering and timing, and a shaper (if required). Several particle physics experiments have developed application-specific integrated circuits (ASICs) which perform all three of these tasks in a single package. The final choice of photodetectors will determine whether these existing ASICs could be used in the CHIPS front end electronics. The amplified and shaped signal is then sampled and digitized, either directly via a high speed analog-to-digital converter (ADC) or using a switched capacitor array (SCA) and a lower speed ADC.
Digitization
During the early stages of the experiment, when only a handful of PMTs will be read out, the digitizer will be an off-the-shelf solution. In the reference design, this is a multichannel FPGA mezzanine card sitting on a carrier board in the MicroTCA crate. Several vendors sell suitable crates, carrier boards, and ADC mezzanine cards, some of which are already in use at CERN, DESY and other laboratories. Ultimately, the only way to cost effectively instrument a 10,000+ channel detector is to replace the off-the-shelf high speed ADC components with custom, in-house designed boards. There are several possible high speed sampling or digitizing ASICs on which the electronics could be based, including the IRS/TARGET family of chips from the University of Hawaii [32] or the DRS family from PSI [33].
Clock, Control and Triggering
The global time references will be 10 MHz and pulse-per-second signals distributed to the MicroTCA crates from a GPS receiver. Existing experiments have demonstrated that channel synchronization is possible to better than the 1 ns level.
There are two distinct levels of triggering in the reference design: the local trigger, which determines when the signal from a given photodetector is digitized, and the global event trigger, which determines when digital data from the digitization modules are transferred offline. Simulation studies are currently ongoing to determine the optimal local (single channel vs. single string vs. logical channel group) and global trigger conditions. The local trigger will be implemented in the FPGA of the ADC carrier board; the global event trigger will either be implemented in the FPGA or, if the rate permits, in the CPU farm.
Event Readout and Storage
Triggered event data and small amounts of housekeeping information will be transferred from the MicroTCA crates to a Linux CPU farm via standard optical Ethernet links. High level software triggers can be run in the CPU farm to reduce the data rate further if necessary.
Calibration
The concept for monitoring and calibration of the photodetectors for CHIPS is based on the light-injection system currently being deployed for SNO+, which in turn builds on systems used successfully for Double Chooz and MINOS [34]. The SNO+ system uses 50 m long poly-methyl methacrylate (PMMA) and quartz fiber optic cables to route LED and laser light (respectively) into the detector from the deck above the detector. The detector ends of each of the 92 fibers are mounted on the PMT support structure in SNO+, and the light shines all the way across the detector to illuminate the PMTs 18 m away on the opposite side. Controlled pulses of light of between 1000 and 1,000,000 photons are injected. The SNO+ system is capable of providing data for PMT timing and gain calibration as well as measurements of scattering and attenuation monitoring. Adapting the SNO+ design and scaling the dimensions up to match those envisioned for CHIPS is expected to be straightforward. A similar design was proposed for the LBNE-WCD option that also included a light-diffusing ball located near the center of the water volume [14]. As proposed in the LBNE CDR, energy and vertex calibration can be performed using naturally occurring events in the detector such as cosmic muons or Michel electrons.
Simulation and Reconstruction Tools
While the initial physics reach of CHIPS was established using GLoBES, a program is already underway to develop a full simulation of the beam and detector and a full ring reconstruction protocol. This program draws from extensive work on the simulation of WC detectors (WCSim) and will be leveraged to determine the optimal geometry and photodetector coverage for a massive, cost-effective water Cherenkov detector.

Beam Simulation

The MINOS, NOvA and MINERvA experiments each have extensive simulations of the NuMI beam. By taking advantage of the fact that neutrino production from decaying hadrons is isotropic in the center of mass frame, and that the existing simulations store neutrino parent information, we can reweight the existing MC to give a neutrino flux at any location in the beam [35]. Furthermore, we can scan over a region of interest to construct a map of flux characteristics such as peak energy. Cross section and oscillation information can also be included to give a clear, intuitive impression of the oscillation sensitivity at a given location. Figure 24 shows the computed ν µ -CC event rate integrated over all energies for various locations in northern Minnesota. Figure 25 shows the predicted ν µ -CC event energy spectra at different detector locations in northern Minnesota.
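A minimal sketch of the reweighting kinematics for the dominant two-body pion decay is given below. The solid-angle weight and neutrino energy are the standard relativistic two-body expressions; the 5 GeV pion energy and 7 mrad angle are illustrative inputs only.

```python
import numpy as np

M_PI, M_MU = 0.13957, 0.10566               # pion and muon masses, GeV
E_NU_CM = (M_PI**2 - M_MU**2) / (2 * M_PI)  # nu energy in the pion frame, ~29.8 MeV

def nu_at_angle(e_pi, theta):
    """Solid-angle weight (lab angular density relative to isotropic) and
    neutrino energy for pi -> mu nu, at lab angle theta from a parent pion
    of total energy e_pi (GeV)."""
    gamma = e_pi / M_PI
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    doppler = gamma * (1.0 - beta * np.cos(theta))
    weight = 1.0 / doppler**2       # = 4*pi * dN/dOmega_lab for isotropic CM decay
    e_nu = E_NU_CM / doppler        # massless-neutrino Lorentz boost
    return weight, e_nu

w, e = nu_at_angle(e_pi=5.0, theta=0.007)   # illustrative: 5 GeV pion, 7 mrad
print(f"weight = {w:.0f}, E_nu = {e:.2f} GeV")
```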
Detector Simulation
The detector simulation is performed by a GEANT4 [10] simulation called WCSim. WCSim was developed to study water Cherenkov detector options for the LBNE project. The simulation outputs a list of hits from the PMTs. An initial CHIPS geometry was added to WCSim describing a cylindrical detector of radius 20 m and height 20 m, instrumented with 10% coverage using 10″ HQE PMTs. Figure 26 shows a 1.6 GeV CC ν e interaction generated with the GENIE [36] event generator, occurring at the center of the detector. The fuzzy ring is the typical signature of an electron in a water Cherenkov detector.
Reconstruction
A major goal of the reconstruction work is to find an optimized HQE photodetector number and layout. The planned reconstruction method is based on an algorithm developed for the MiniBooNE experiment [37,38], modified to remove the scintillation light component. The algorithm generates a likelihood for each PMT to register a given charge at a given time, for a predefined set of track parameters. Minimizing this likelihood with respect to the track parameters provides the reconstructed track objects. The method is readily extendable to multiple tracks, such as those from NC π 0 decays. Figure 27 shows an example of the expected and observed charge distributions that go into forming this likelihood. A version tested on Super-K reported a 60% reduction [29] in the NC background compared to the standard ring reconstruction method. This improvement is not incorporated into the GLoBES physics reach calculations. A preliminary implementation of the algorithm is already in development. The charge component of the likelihood is determined by combining the probability for a propagating particle to emit light in the direction of the PMT with the probability for this light to reach the PMT and produce a recorded signal. This depends on the particle type and its emission profile, the geometry of the track and PMT, detector properties such as the absorption and scattering of light, and the effects of digitization at the PMT.
To calculate the time likelihood, the registered PMT hit times are corrected by subtracting the expected hit time of a photon emitted at the mid-point of the candidate track, and a series of fits are performed on the resulting distribution. First, the distributions are separated into bins of charge, and a fit is carried out using a Gaussian (to model direct Cherenkov light) plus a Gaussian convolved with an exponential (to model indirect scattered light). Polynomial fits are then used to determine these fit parameters as a function of predicted charge, and further fits are performed to express these coefficients as a function of energy, allowing the time likelihood to be calculated for arbitrary track energy and PMT charge combinations. The overall log likelihood surface is produced by adding the charge and time surfaces. Figure 28 shows an example of the time likelihood.
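The structure of the corrected-time PDF can be sketched compactly with scipy. The parameter values below are arbitrary placeholders; in the actual fit they are parametrized as functions of predicted charge and track energy, as described above.

```python
import numpy as np
from scipy.stats import norm, exponnorm

def time_pdf(t_res, frac_direct=0.8, sigma=1.5, tau=8.0):
    """PDF of the residual hit time t_res (ns): a Gaussian for direct
    Cherenkov light plus a Gaussian convolved with an exponential
    (scipy's exponnorm) for late, scattered light."""
    direct = norm.pdf(t_res, loc=0.0, scale=sigma)
    scattered = exponnorm.pdf(t_res, tau / sigma, loc=0.0, scale=sigma)
    return frac_direct * direct + (1.0 - frac_direct) * scattered

def time_nll(t_residuals):
    """Negative log-likelihood contribution of the hit times."""
    return -np.sum(np.log(time_pdf(np.asarray(t_residuals))))

print(time_nll([-1.2, 0.3, 0.8, 4.5, 15.0]))   # example residuals in ns
```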
R&D Program
This LOI also outlines a path of development towards a cost effective, 100 kton water Cherenkov detector in a neutrino beam. A $10M investment over the next 4-5 years could provide a 10 kton detector in the NuMI beam, which would improve the δ CP reach of NOvA substantially. The cost per kton for the initial prototype would be $1-2M/kton, whereas the goal, over the ensuing decade, would be to reduce this cost by up to an order of magnitude, chiefly through reductions in photodetector costs.
An R&D proposal will be submitted to funding agencies this fall. There are a number of issues which need to be tested on a smaller scale before the full 10 kton prototype can be deployed. While the PMTs, HV and readout are all reasonably well developed, and the purification plant technology is well understood, the detector structure needs to be prototyped in order to produce a full conceptual design for a 10 kton fiducial volume detector. The R&D program is summarized in Table 7, but the main issues which need to be resolved in the first years are:

• Verify liner construction

Once these critical path items have been developed, building on previous work carried out for LBNE where possible, the full 10 kton prototype detector could be built in one season, but procurement of the PMTs is likely to be the item which dominates the build time.

Some work has already started. The University of Minnesota Duluth Large Lakes Observatory group have already deployed instruments in the Wentworth pit to monitor the deep currents before the winter. The purification system has already been designed by the South Coast Water company in Santa Ana, CA, a company with significant expertise in water Cherenkov purification systems. An engineering consultant firm based in Minnesota, Barr Engineering (http://barr.com), is currently assisting in the design and implementation of the CHIPS detector. It is already clear that Barr Engineering is capable of handling all of the engineering aspects of the CHIPS detector. They have extensive experience designing projects for water-filled mines and hazardous waste treatment in northern Minnesota [39]. This experience is well-aligned with the requirements for water system design for the CHIPS detector. Barr Engineering has also completed numerous projects involving fabricating and binding large-scale geotextiles in landfill and mining applications, including storm water structures, pile revetment walls, and other water-related structures. They have significant experience designing support structures such as dams, retaining walls/structures, and bridges [40][41][42]. Their expertise with such structures will be invaluable when designing and implementing the support structure for the CHIPS detector. Barr Engineering is also capable of supplying structures on the shore for housing the data acquisition, power, and water filtration systems. These structures would either be prefabricated structures from Barr Engineering (if adequate models were available) or the work would be subcontracted out to one of Barr's partners.

Initial work has already been carried out to create a CAD model of the conceptual design to aid future engineering efforts. Additionally, basic flow calculations have been evaluated to approximate the forces induced by the flow of the lake water around the outer surface of the detector. These calculations have shown that the forces induced, though somewhat large due to the large scale of the detector, will be easily manageable with a traditional steel cable system.
Summary
The CHIPS concept outlined in this Letter of Intent could represent a step change in our ability to make precision neutrino measurements using the intense FNAL neutrino beams planned for the near and more distant future. A 100 kton fiducial mass CHIPS in NuMI would provide a ∼12-25° accuracy on δ CP and an increase in the mass hierarchy reach by a factor of 2, in combination with NOvA and T2K. As an ultimate goal, the CHIPS detector could be redeployed off-axis in the LBNE beam line, to complement the on-axis Liquid Argon detector, enabling results on a faster timescale than presently expected.
Figure 1: Final assumed efficiency for each event type as a function of reconstructed energy in linear (Left) and log (Right) scales.
Figure 2: Event rates when running CHIPS for three years of neutrino beam (left) and three years of antineutrino beam (right), for Normal Hierarchy (top) and Inverted Hierarchy (bottom). Beam ν e events are divided into quasielastic (QE) and non-quasielastic (nQE) samples. The wrong sign (WS) neutrino sample is separated in the antineutrino beam plots.
Figure 3: CHIPS physics reach in the Normal Hierarchy (left) and Inverted Hierarchy (right), for NOvA (5+5y), T2K (8.8 × 10²¹ POT), and CHIPS (3+3y). (Top) δ CP resolutions. (Middle) The significance of excluding the wrong hierarchy. (Bottom) Significance of discovering CP violation. The red line is NOvA and T2K, the blue line is CHIPS, and the green is the combination.
Figure 4: (Left) ν µ flux (in arbitrary units) seen at 0, 7 and 14 mrad off-axis, in the Medium Energy (ME) and Low Energy (LE) beam configurations. (Right) δ CP resolution band for off-axis angles from 0 to 20 mrad, for the ME and LE beams. The orange line at an angle of 7 mrad corresponds to the position of the Wentworth pit.
Figure 5: Impact of a phased CHIPS program on δ CP resolution.
Figure 6: δ CP resolution bands for off-axis angles from 0 to 20 mrad, in the LBNE beam. (Left) Assuming the CHIPS detector has already run in the NuMI beam. (Right) Assuming CHIPS runs only in the LBNE beam. Only the CHIPS detector in the LBNE beam has been calculated at different off-axis angles; other detector positions are not varied.
Figure 7: The expected event rates for a 100 kton CHIPS detector 20 mrad off-axis at the Pactola Reservoir in South Dakota, a hypothetical target for deployment of the CHIPS detector(s) in the LBNE beam. Beam ν e events are divided into quasielastic (QE) and non-quasielastic (nQE) samples. The wrong sign (WS) neutrino sample is shown separately in the antineutrino beam plots.
Figure 8: Physics reach in the Normal Hierarchy (left) and Inverted Hierarchy (right), for NOvA+T2K, 10 kton LAr LBNE, and CHIPS in the LBNE beam at 20 mrad. (Top) δ CP resolutions. (Middle) The significance of excluding the wrong hierarchy. (Bottom) Significance of discovering CP violation. The red line is NOvA and T2K, the blue line is a 10 kton LAr detector on-axis in the LBNE beam, and the green is the combination of those experiments. The solid black line is for CHIPS, from both a NuMI and an LBNE run. Dotted lines show each experiment (or combination of experiments) without a CHIPS run. Solid lines show the effect of adding CHIPS to the results.
Figure 10: An elevation map of the Wentworth Pit. The intervals for the contour plot are 5 feet.
Figure 12: (Top) Rain of cosmic rays around the CHIPS detector. (Bottom Left) A 1 GeV µ⁻ entering the Inner Detector from the top center and producing a Cherenkov cone. For a better view, most of the photons are not shown and the veto is disabled. (Bottom Right) A 1 GeV µ⁻ entering from one side of the detector, producing Cherenkov light in the veto. The inner detector is disabled for a better view. The white dots represent veto PMTs and the green lines represent photons. The photons are trapped in the veto until they are absorbed or detected.
Figure 15: Depictions of the PMT housings. (Far Left) The collar of the PMT housing. (Middle Left) Side view of the PMT housing and collar. (Middle Right) PMT housings in the framing mounts. (Far Right) PMT housings on the cable mounts. Figures from Ref. [14].
Figure 16: Side view of submerged thin-walled cylinders filled with liquid denser than the surroundings. (a) Sealed cylinder with flat top and bottom. (b) Cylinder with flat top and bottom, with a riser tube allowing reduction of pressure until the bottom pressure is in equilibrium. (c) Sealed cylinder featuring a domed top. Solid squares show the constraints assumed around the perimeter to prevent the structures sinking. Solid lines indicate the nominal shape; dashed lines indicate deformation under load, which is greatest for the large flat surfaces in (a) and (b).
Figure 17: (Top) Floating ring and platform used in aquaculture cages. Note the Wentworth Pit is not expected to ever be as turbulent as the open sea. (Bottom) Pictorial representation of detector construction and then submersion.
Figure 18: Construction of the cylinder bottom floating on water inside the dome. 1 - Assembly of a raft on the work platform. 2 - Joining of rafts. 3 - Unrolling and seaming of liner sections. 4 - Framing for the beginning of the vertical wall, with a vertical liner strip attached. 5 - Bottom support ring for vertical cables. 6 - Bottom surface framing with panels of photodetectors partially installed.
Figure 19: (Left) The dependence of the number of PMTs fired in the VD per event on the assumed reflectivity of the veto detector wall. Histograms are area normalized to 1. (Right) The number of cosmic ray muon events vetoed divided by the total number of cosmic ray muon events vs. reflectivity. The different colors are for different requirements on the minimum number of fired PMTs to tag a muon (m). The PMT spacing is fixed at 284.7 cm.
Figure 21: Physical layout of a water purification system containing pretreatment equipment (carbon filters, water softeners, micron filtering), a reverse osmosis unit, and post treatment (pumps, UV sterilizer, sub-micron filters, deionization vessels).
Figure 22: A sketch of the proposed water filtration systems that will fill and maintain the CHIPS detector.
Figure 23: A conceptual design of the CHIPS Near Detector.
Figure 24: A map of potential neutrino event rates, assuming no oscillations, between 0-30 GeV for an exposure of 1 kton-year. Contours show lines of constant L/E, where L is the distance from the hypothetical detector to the NuMI target and E is the peak energy of the reweighted neutrino spectrum.
Figure 25: (Left) True energy distribution of ν µ -CC events at the MINOS, NOvA, and CHIPS far detector locations, assuming no oscillations and 1 kton-year of exposure. (Right) The true energy distribution of ν µ -CC events that would be seen at CHIPS in one kiloton-year with (red) and without (black) neutrino oscillations.
Figure 26: An event display of a 1.6 GeV CC ν e interaction in the center of the detector. The top endcap (left) and bottom endcap (right) views are shown above the larger unfolded cylindrical section. Each bin shows a single PMT and the color shows the collected charge in PE.
Figure 27: Comparison of the measured (top) and expected (bottom) charge distributions, for the top (left) and bottom (center) endcaps, and the unfolded cylinder wall (right). The distributions are for a muon track with 1.5 GeV of kinetic energy, created at the center of a 20 m radius by 20 m high cylindrical detector, propagating along the x axis towards the curved wall of the cylinder. The units of the measured charge are digitized photoelectrons, while the predicted charge is in arbitrary units.
Figure 28: Example of a one-dimensional time likelihood. The simulated event is a muon with 1.5 GeV of kinetic energy, created at the center of a 20 m radius by 20 m high cylindrical detector, propagating along the x axis towards the curved wall of the cylinder. The likelihood is plotted against the vertex position along the x axis, assuming a 1.5 GeV muon track for which all other parameters are known.
Table 2: Number of selected events in the 100 kton fiducial mass CHIPS detector after 3 years in each mode, for both the normal hierarchy (NH) and the inverted hierarchy (IH).
Table 3: Summary of the cosmic rate and key features of the basic CHIPS module.
Table 4: Cost of the PMT assemblies.
Table 5: Cost of a detector vessel module.
Table 6: Summary of electronics, DAQ requirements, and channel counts.
Table 7: 5 year research and development plan towards a 10 kton detector in the Wentworth Pit.

Appendix 1: Pit Water Content

Results of the water tests are summarized in Table 8 with 1σ uncertainties. Table 9 summarizes additional results from the profile measurements.
Table 8: Data from surface water quality tests conducted in the Wentworth Mine Pit, with 1σ uncertainties. Data courtesy of Cliffs Natural Resources.
Table 9: Data from two broad-spectrum water quality tests conducted in 2011.
Influence of substrate potential shape on the dynamics of a sliding lubricant chain
We investigate the frictional sliding of an incommensurate chain of interacting particles confined in between two nonlinear on-site substrate potential profiles in relative motion. We focus here on the class of Remoissenet-Peyrard parametrized potentials $V_{\rm RP}(x,s)$, whose shape can be varied continuously as a function of $s$, recovering the sine-Gordon potential as a particular case. The observed frictional dynamics of the system, crucially dependent on the mutual ratios of the three periodicities in the sandwich geometry, turns out to be significantly influenced also by the shape of the substrate potential. Specifically, variations of the shape parameter $s$ affect significantly and nontrivially the existence and robustness of the recently reported velocity quantization phenomena [Vanossi {\it et al.}, Phys. Rev. Lett. 97, 056101 (2006)], where the ratio of the chain center-of-mass velocity to the externally imposed relative velocity of the sliders stays pinned to exact "plateau" values for wide ranges of the dynamical parameters.
I. INTRODUCTION
Sliding friction has been a broadly studied field due to its huge practical relevance as well as its theoretical challenges [1,2]. The regime of validity and the microscopic origin of the Amontons-Coulomb empirical laws of static and dynamic friction are still open issues [3,4]. The advancement of technology in the last few decades has triggered both theoretical [5][6][7][8][9][10][11] and experimental [12][13][14] investigations in this field.
A broad range of investigations focuses on a simple fundamental model for microscopic tribological systems: the Frenkel-Kontorova (FK) model [15] and its extensions [16]. Its standard 1D version consists of a chain of harmonically interacting atoms subject to one periodic sinusoidal potential, thereby representing a discretized elastic overlayer deposited on a corrugated surface. The application of a constant force to the chain allows us to determine a depinning threshold, representative of static friction. For an irrational ratio between the natural atomic spacing and the period of the substrate potential (incommensurate interface), the FK model undergoes a phase transition, called the Aubry transition, where the ground state "hull function" exhibits analyticity breaking [17]. When, for a fixed interparticle chain stiffness, the amplitude of the sinusoidal potential is smaller than a certain critical value, the static frictional force vanishes, leading to the onset of free sliding or "superlubricity"; otherwise the chain is pinned until a finite threshold force is overcome.

Superlubricity connected with incommensurability is one of the pervasive concepts of modern tribology, with a wide area of relevant practical applications as well as fundamental theoretical issues [18]. The role of incommensurability has recently been extended [19] in the framework of a driven 1D confined model inspired by the tribological problem of two sliding interfaces with a thin lubricant layer in between. The moving interface is thus characterized by three inherent length scales: the periods of the bottom and top substrates, and the period of the embedded solid lubricant structure. In particular, in the presence of a uniform external driving of the top substrate, the interplay between these incommensurate length scales can give rise to intriguing dynamical phase-locking phenomena and surprising velocity "quantization" effects, due to the dragging of topological solitons ("kinks" and "antikinks"), i.e., nonlinear localized density superstructures arising from geometrical lattice mismatch [20][21][22][23][24][25].
These results are suggestive but remain rather idealized in several respects. In particular, the profile of the corrugation potential energy experienced by a lubricant atom interacting with real physical surfaces is likely to deviate considerably from the sinusoids of the two-substrate confined tribological model. It is therefore useful to investigate what influence the shape of the substrate corrugation may have on the frictional dynamics.
In this paper we model the corrugation of the two confining substrates via the Remoissenet-Peyrard (RP) function [26,27], whose shape can be varied continuously as a function of a parameter. The RP potential, which retrieves the sine-Gordon shape as a special case, has been employed widely and successfully to model the dynamics of atoms adsorbed on crystal surfaces in realistic situations.

FIG. 1: (Color online) Scheme of the model with three characteristic length scales: a_+ (spatial period of the static substrate), a_0 (the equilibrium spacing of the harmonic chain representing the lubricant film), and a_- (spatial period of the advancing substrate). The substrate corrugation is modeled by the RP potential, illustrated below for three values of the deformation parameter; top to bottom, s = -0.6 (narrow valleys), 0 (pure sine), and 0.6 (broad valleys).
II. THE MODEL
Consider the one-dimensional generalization of the two-sines FK model, as in Ref. [20], consisting here of two RP substrate potentials, of spatial periodicities a_+ and a_-, and a chain of interacting particles of mass m and harmonic spring constant K, equally spaced by a lattice constant a_0, mimicking the sandwiched lubricant layer as shown in Fig. 1. The motion of the i-th lubricant particle is governed by

m \ddot{x}_i = K\,(x_{i+1} + x_{i-1} - 2x_i) - \frac{\partial}{\partial x_i}\big[V_{\rm RP+}(x_i - v_+ t, s) + V_{\rm RP-}(x_i - v_- t, s)\big] + \gamma\,\big[(v_+ - \dot{x}_i) + (v_- - \dot{x}_i)\big].  (1)

Here the RP potential [26] V_RP(x, s) is defined by

V_{\rm RP\pm}(x, s) = \frac{U}{2}\,\frac{(1-s)^2\,[1 - \cos(k_\pm x)]}{1 + s^2 + 2s\cos(k_\pm x)}, \qquad |s| < 1.  (2)

For s = 0, the potential V_RP±(x, s), of amplitude U, yields a sinusoidal shape; for s > 0, it provides an array of broad wells separated by narrow barriers; for s < 0, it provides deep narrow wells separated by broad flat barriers (see Fig. 1). k_± = 2π/a_± are the wave vectors associated with the upper (-) and lower (+) substrates, and v_-, v_+ denote their sliding velocities respectively. γ is a viscous-friction damping coefficient which takes into account various sources of dissipation in the substrates (phonons, electronic excitations, etc.) that are not explicitly included in the model. We select a relatively small dissipation constant γ = 0.1, producing an underdamped regime. As done in previous work [19,20], to simulate an infinite system periodic boundary conditions (PBC) are applied, implementing, e.g. via a continued-fraction expansion technique [31], suitable rational approximations of the system periodicities a_+, a_0, and a_-, mimicking the desired incommensurability.
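As a quick numerical illustration of Eq. (2) (a minimal sketch in Python, assuming the amplitude-U normalization reconstructed above; the parameter values are arbitrary), one can verify that the barrier height is shape-independent while the maximum slope grows with |s|, consistent with Fig. 1:

```python
import numpy as np

def v_rp(x, s, U=1.0, a=1.0):
    """Remoissenet-Peyrard corrugation of amplitude U and period a (Eq. (2)).

    s = 0 recovers the sinusoidal (sine-Gordon) shape; s > 0 gives broad
    wells and narrow barriers, s < 0 narrow wells and broad flat barriers.
    """
    c = np.cos(2.0 * np.pi * x / a)
    return 0.5 * U * (1.0 - s) ** 2 * (1.0 - c) / (1.0 + s ** 2 + 2.0 * s * c)

def f_rp(x, s, U=1.0, a=1.0, h=1e-6):
    """Substrate force -dV/dx, via a central finite difference."""
    return -(v_rp(x + h, s, U, a) - v_rp(x - h, s, U, a)) / (2.0 * h)

x = np.linspace(0.0, 1.0, 2001)
for s in (-0.6, 0.0, 0.6):
    print(f"s = {s:+.1f}: barrier = {v_rp(x, s).max():.3f}, "
          f"max|F| = {np.abs(f_rp(x, s)).max():.3f}")
```

The printed barrier equals U for every s, while the maximum force is πU/a for s = 0 and larger for s = ±0.6; this shape-enhanced force is the ingredient invoked in Sect. III A to explain the stiff-chain behavior.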
As found in earlier studies, the general behavior of this model depends crucially on the relative commensurability of the substrates and the chain (see Ref. [32] and references therein). To allow a comparison with previous work on sine-Gordon substrates, we consider here the corresponding ratios of length scales defined by r = a_+/a_0 = N_0/N_+ and r' = a_-/a_+ = N_+/N_-. r and r' are thus also expressed in terms of the number of lubricant particles N_0 and the numbers of periods N_∓ of the top/bottom potential oscillations in Eq. (1) within each simulation cell. For definiteness we take r' > r^{-1} and r' > 1, so that the top substrate has the longest lattice spacing. We take a_+ = 1, m = 1, and the force F_+ = 2πU/a_+ as basic units for the model. In the following we express all physical quantities in terms of suitable combinations of a_+, m, F_+.
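In practice, the rational approximations mentioned above can be generated from Fibonacci numbers, whose successive ratios are the continued-fraction convergents of the golden mean; a small sketch (the specific Fibonacci indices are just an example):

```python
from fractions import Fraction

def fibonacci(n):
    """First n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    f = [1, 1]
    while len(f) < n:
        f.append(f[-1] + f[-2])
    return f

f = fibonacci(15)
r = Fraction(f[13], f[12])      # 377/233, a rational approximant of phi
print(float(r))                 # 1.6180257..., vs phi = 1.6180339...
# Taking N0 = 377 particles and N+ = 233 bottom wells realizes r = N0/N+
# exactly in a finite PBC cell; r' = 1/(r - 1) = 233/144 then sets the
# top-substrate period so that the kink coverage (see below) is exactly 1.
print(1 / (r - 1))              # Fraction(233, 144)
```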
An adaptive fourth-order Runge-Kutta algorithm is used to integrate the equations of motion. We start with the chain particles at equilibrium (the local energy minimum obtained by relaxing the immobile system, v_± = 0, from a chain at rest with uniform separation a_0). Without loss of generality, we select a reference frame such that the bottom substrate is at rest (v_+ = 0), and make the upper substrate slide at constant velocity v_- = V_ext. After an initial transient, the system reaches a dynamical steady state characterized by regular or irregular fluctuations of the drift velocity of the chain particles around an average value, which we indicate by V_CM.
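The following is a minimal toy version of this protocol (Python; a fixed-step RK4 instead of the adaptive integrator, a small Fibonacci cell, no preliminary relaxation step, and illustrative parameter values), measuring V_CM as the time-averaged drift after discarding the transient:

```python
import numpy as np

# Toy integration of the confined-chain dynamics of Eq. (1): fixed-step RK4
# in a small PBC cell with Fibonacci periodicities. All parameter values are
# illustrative placeholders, not those of the published figures.
N0, Np, Nm = 34, 21, 13                  # particles, bottom wells, top wells
a_p = 1.0                                # bottom period a+ (unit of length)
L = Np * a_p                             # simulation cell length
a_0, a_m = L / N0, L / Nm                # r = a+/a0 and r' = a-/a+ near phi
U, K, gamma, m = 1.0, 10.0, 0.1, 1.0
v_p, v_ext, s = 0.0, 0.1, 0.0            # bottom at rest, driven top, sine shape

def sub_force(x, a, shift, s):
    """-dV_RP/dx for an RP corrugation of period a, rigidly shifted by 'shift'."""
    th = 2.0 * np.pi * (x - shift) / a
    c, sn = np.cos(th), np.sin(th)
    den = 1.0 + s * s + 2.0 * s * c
    # analytic x-derivative of V = (U/2)(1-s)^2 (1-c)/den
    dV = 0.5 * U * (1.0 - s) ** 2 * sn * (den + 2.0 * s * (1.0 - c)) / den ** 2
    return -dV * 2.0 * np.pi / a

def accel(u, v, t):
    """Acceleration of Eq. (1); u are displacements from the uniform chain."""
    x = np.arange(N0) * a_0 + u
    f = K * (np.roll(u, -1) + np.roll(u, 1) - 2.0 * u)    # harmonic springs, PBC
    f += sub_force(x, a_p, v_p * t, s)                    # bottom substrate
    f += sub_force(x, a_m, v_ext * t, s)                  # advancing top substrate
    f += gamma * ((v_p - v) + (v_ext - v))                # dissipative term
    return f / m

dt, nsteps = 0.01, 200_000
u, v = np.zeros(N0), np.zeros(N0)
drift, navg = 0.0, 0
for step in range(nsteps):
    t = step * dt
    k1v, k1u = accel(u, v, t), v
    k2v = accel(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v, t + 0.5 * dt)
    k2u = v + 0.5 * dt * k1v
    k3v = accel(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v, t + 0.5 * dt)
    k3u = v + 0.5 * dt * k2v
    k4v = accel(u + dt * k3u, v + dt * k3v, t + dt)
    k4u = v + dt * k3v
    u += dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    if step >= nsteps // 2:              # time-average after the transient
        drift += v.mean()
        navg += 1
print(f"V_CM/V_ext = {drift / navg / v_ext:.3f}  "
      f"(plateau value 1 - 1/r = {1 - a_0 / a_p:.3f})")
```

With these placeholder parameters a given run may or may not land on the quantized plateau; scanning K as in Fig. 2 is what maps out the plateau boundaries.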
In the present work, we investigate the effects, on the tribological behavior of the sliding interface, of (i) the shape of the substrate potentials, represented by the RP parameter s, and (ii) the coverage Θ of the upper substrate relative to the array of kinks or antikinks. Since the mean distance between consecutive kinks/antikinks is

d_{\rm kink} = \frac{a_+}{|r - 1|},  (3)

the coverage

\Theta = \frac{a_-}{d_{\rm kink}} = r'\,|r - 1|  (4)

can be tuned by modifying r' [24,32].
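A worked check of Eqs. (3) and (4), as reconstructed above, against the coverages used later in Sect. III C:

```python
from fractions import Fraction

r = Fraction(377, 233)                       # r = a+/a0 = N0/N+
d_kink = 1 / abs(r - 1)                      # Eq. (3), in units of a+
for r_prime in (Fraction(233, 180), Fraction(233, 144), Fraction(233, 72)):
    theta = r_prime / d_kink                 # Eq. (4): Theta = r'|r - 1|
    print(f"r' = {r_prime}: Theta = {float(theta)}")
# -> Theta = 0.8, 1.0, 2.0, the three coverages compared in Fig. 6
```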
III. RESULTS AND DISCUSSION
A. Kinks: lubricant forward motion Figure 2 reports the time-averaged center-of-mass velocity V CM of the chain (normalized to the top driving velocity V ext ) as a function of its stiffness K, for three values of the deformation parameter s. In this calculation, for the length ratios we take a rational approximant to the golden mean r = φ = (1 + √ 5)/2 ≃ 1.62. We adopt Θ = 1, i.e. as many solitons as oscillation periods of the top substrate, and this, due to Eq. (4), implies r ′ = 1/|r − 1|. For r ≃ φ, r ′ happens to approximate φ as well. According to Eq. (4), this choice for r ′ produces a coverage Θ = 1, i.e. as many solitons as oscillation periods of the top substrate. The velocity of the sliding top substrate is set to a moderate V ext = 0.1.
Within a broad range of stiffness values, the chain moves at the quantized velocity V_CM = w_plateau V_ext [20], with

w_{\rm plateau} = 1 - \frac{a_0}{a_+} = 1 - \frac{1}{r},  (5)

i.e. w_plateau ≃ 0.382 for the adopted value of r. For extremely soft harmonic interparticle couplings (small chain stiffness K) and for s = -0.6 or s = 0.6, the chain center of mass tends to move ahead at the full external velocity V_ext. In the opposite limit of a very stiff chain (large K), it moves at the symmetric speed V_ext/2. This is expected in a situation where the chain-corrugation interaction becomes marginal and the dominating term in Eq. (1) is the dissipative one, which is minimized when the chain drifts at the mean slider velocity (v_+ + v_-)/2 = V_ext/2.

In the transitions between the plateau speed and the large-K and small-K regimes, the chain average velocity V_CM is generally a nontrivial function of the chain stiffness K. The effect of the shape of the corrugation potential is evident: for both positive and negative s, the plateau shrinks in size at the soft-chain side (small K), while it tends to expand at the stiff-chain side (large K). While the large-K expansion is the same for positive and negative s, the small-K shrinkage is far more dramatic for positive s, i.e. for broad shallow minima separated by narrow sharp maxima in the corrugation potential. Such a behavior can be qualitatively understood by considering that the plateau mechanism has been interpreted in terms of solitons, formed by the mismatch of the chain periodicity to that of the more commensurate substrate (here the bottom potential), being rigidly driven forward by the (top) advancing potential representing the other, more mismatched, sliding surface. As K is decreased, kinks become more and more localized objects: the plateau ends when the Peierls-Nabarro barrier [15] for a kink to move forward one lattice parameter approaches the single-particle activation energy to jump a corrugation potential barrier. When the two barriers coincide, no separate kink motion is possible any more, and the chain advances as a whole (V_CM = V_ext). For positive s values, the possibility for the particles to arrange relatively uniformly over the RP substrate at a small energy cost yields poorly localized kink superstructures in the chain, rapidly becoming equivalent to non-interacting particles, which are easily dragged forward and drag the whole chain along. This rapidly destroys the quantized velocity plateau. In contrast, the deep narrow wells at negative values of the shape parameter s tend to compress the chain particles into sharper kink structures, relatively more easily dragged along by the moving substrate, while leaving the other (non-soliton) particles still pinned in the other deep minima, thus preserving the quantized motion down to softer K. The potential deformation is beneficial in the rigid-chain regime because, for given corrugation amplitude U, the maximum force (the slope at the inflection point, see Fig. 1) that the top potential can apply to the chain particles is larger for s ≠ 0 than for the sinusoidal s = 0 shape. The dragging force acting on the kinks is proportionally larger, allowing dragging to extend to stiffer chains, which come with broader and fainter kinks.
To obtain a microscopic view of the quantized phenomena, we follow the motion of one particle of the chain and plot its velocity ẋ_i, alongside the CM velocity V_CM, as a function of time (left panels of Fig. 3). We compare the usual three values of the deformation parameter considered above, adopting a relatively rigid stiffness, K = 100 (s = 0) or 400 (s = ±0.6), selected to remain well inside the quantized plateau in all cases. The motion of a single particle is a periodic oscillation, representing the passage of successive solitons across that specific particle. The period τ equals the distance d_kink between successive kinks divided by the speed V_ext - V_CM at which the kinks, dragged forward by the advancing top substrate at V_ext, sweep past the chain particles drifting at V_CM:

\tau = \frac{d_{\rm kink}}{V_{\rm ext} - V_{\rm CM}}.  (6)

Of course, this period is independent of the deformation parameter of the substrates. A Fourier analysis (right panels of Fig. 3) reveals that indeed the single-particle motion is periodic, with the same period τ = 1/ν_0 ≃ 26.1, where ν_0 ≃ 0.0382 is the fundamental harmonic peak frequency in the Fourier spectrum of the present examples. However, the detailed motion induced by the deformed potential is clearly very different, characterized by a remarkably high harmonic content, compared to the simple harmonic oscillation of the s = 0 case. The s = ±0.6 potential requires a complicated "dance" of the individual atoms to accommodate the passage of a soliton. Note also that the concerted oscillation of all particles in the chain makes its center of mass advance at an essentially constant velocity.

To characterize in greater detail the effect of the potential shape on the quantized motion, we investigate the upper and lower boundaries of the quantized plateau, K_Max and K_Min respectively. These boundaries are obtained by a sequence of linked calculations carried out with increasing (for K_Max) or decreasing (for K_Min) K in small steps, until the quantized plateau is abandoned. For example, the sequence of calculations of Fig. 2 shows that K_Max ≃ 2000 for s = ±0.6. Figure 4 reports the dependence of K_Min and K_Max on the potential shape parameter s. K_Max is a symmetric function of the shape parameter s. In contrast, the K_Min curve is quite asymmetric. As already remarked above, at the soft-chain side positive s is consistently detrimental to the plateau state, leading to a rapid (approximately exponential) increase of K_Min with s. In contrast, the negative-s region has a range -0.4 < s < -0.2 where the potential shape deformation is beneficial to the quantized plateau even for soft chains. A further decrease of s to more negative values produces an increase of K_Min, but a slow one, such that the relative width K_Max/K_Min of the plateau actually increases as s decreases. The K_Min and K_Max curves delimit the quantized-velocity plateau region in the space of parameters s and K. Above this region, we find a stiff-chain region where the dynamics is dominated by the dissipative term in Eq. (1), and V_CM approaches rapidly V_ext/2, with the two sliders acting symmetrically on the chain. The soft-chain region below K_Min exhibits occasional pinning to either the top or the bottom slider, or unpinned nonquantized nonperiodic orbits. The "dynamical phase diagram" of Fig. 4 is relevant for the specific adopted values of the dissipation γ and of the speed V_ext. A modification of these two parameters would modify the shape of the diagram, while preserving its overall features.
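The numbers quoted in this subsection tie together through Eqs. (3), (5) and (6) as reconstructed above; a few lines of arithmetic reproduce them:

```python
phi = (1 + 5 ** 0.5) / 2        # r ~ golden mean, as in Figs. 2 and 3
v_ext = 0.1
w = 1 - 1 / phi                 # Eq. (5): plateau ratio, 0.38196...
v_cm = w * v_ext                # chain drift velocity on the plateau
d_kink = 1 / (phi - 1)          # Eq. (3) with a+ = 1: 1.61803...
tau = d_kink / (v_ext - v_cm)   # Eq. (6): soliton passage period
print(f"tau = {tau:.1f}, nu_0 = {1 / tau:.4f}")   # -> tau = 26.2, nu_0 = 0.0382
```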
B. Antikinks: lubricant backward motion
When r < 1, i.e. when the lubricant particles are fewer than the minima of the static substrate (N_0 < N_+), Eq. (5) predicts that the lubricant velocity turns negative, i.e. opposite to V_ext. This remarkable leftward motion, produced by rightward-moving antikinks, is indeed observed even for the deformed RP potential, as shown in the example of Fig. 5. The resulting "backward" plateaus are not as wide as in the case of kink-assisted forward lubricant motion. This qualitative finding, quite likely to carry over to experiments, is due to the dissipation into the substrates represented by the last term in Eq. (1), which tends to favor a positive "symmetric" speed V_CM ≃ V_ext/2, thus actively disturbing the V_CM < 0 quantized motion. Like in the r = φ case, the plateau width and K range depend quite sensitively on the deformation parameter s. In particular, positive s is especially effective in disrupting the plateau, and indeed for the very strong deformation s = 0.6 the backward plateau disappears altogether, the top driving substrate being unable to grab and drag the antikink pattern in the confined chain.
C. The effect of coverage
The choices of r' adopted in the calculations of both Figs. 2 and 5 produce coverage Θ = 1, i.e. as many solitons (or antisolitons) as periodic oscillations of the top slider. As was pointed out previously [33,34], such perfect matching is the most favorable condition for the dragging of kinks (or antikinks), and thus for the quantized sliding phenomenon. To investigate how relaxing the Θ = 1 matching condition affects the lubricant dynamics, we compare several calculations with the same r but different values of the coverage, obtained by changing r', i.e. the top-substrate lattice spacing a_-.
To investigate the coverage dependence, we consider a fixed r = 377/233 and the following values of r': 233/180, 233/144 and 233/72, corresponding to kink coverages Θ = 0.8, Θ = 1 and Θ = 2 respectively. The results of these calculations are displayed in Fig. 6 for three values of the shape parameter. It can be seen that, independently of the value of the deformation parameter s, the kink coverage strongly affects the plateau, reducing its width as soon as the full matching (unit coverage) is lost. In general, for the less commensurate kink coverage Θ = 0.8 the plateau is disrupted quite substantially. Not surprisingly, for the commensurate Θ = 2 this reduction is less significant. Still, for Θ = 2 only one kink out of two finds a top corrugation well which drags it along: accordingly, a plateau shrinkage is observed nonetheless.
We have carried out similar simulations for other commensurability ratios, e.g. the value r = 25/36 considered in Sect. III B. The conclusion is that the quantized plateau always shrinks, often to the point of disappearing, whenever Θ ≠ 1.
IV. CONCLUSIONS
In this work, we studied the effect of the potential shape on the dynamics of a sliding two-substrate FK-type model. Even though this deformation is only a modest step away from the idealized world of models in the direction of real friction, it provides some useful trends and general understanding. In particular, we establish that in the rigid-lubricant limit (large K) the worst possible pinning scheme for the solitons is that granted by a sinusoidal corrugation: any kind of deformation is beneficial to the quantized state. This is quite remarkable also in view of the fact that this rigid-lubricant regime is relevant whenever the lubricant-lubricant in-plane forces dominate over the substrate corrugation, e.g. for noble-gas layers driven over graphite or even over several metallic surfaces [35-37]. In the opposite, soft-lubricant limit, a sinusoidal surface corrugation tends instead to be optimal, although a pattern based on narrow (but not too narrow) wells may occasionally provide even more favorable conditions for the quantized lubricant state. The grabbing of kinks by the more rarefied top slider is best seen when the kink lattice is fully commensurate with the top corrugation. Whenever this is not the case (kink coverage deviating from unity), the quantization phenomenon becomes less prominent. However, this observation must be integrated with a further point. As illustrated in the upper and central panels of Fig. 6, secondary plateaus can arise at V_CM values different from those predicted by the quantized formula (5). For example, in the Θ = 0.8 simulations producing the secondary plateau characterized by V_CM/V_ext ≃ 0.802, individual particles do carry out regular periodic trajectories, as in the standard quantized state. These secondary plateaus, observed also for the purely sinusoidal corrugation [21,38], are likely due to resonances very much akin to Shapiro steps [15,39-41], excited by the simultaneous action of the periodically oscillating force produced by the sliding substrate and the forward-dragging force produced by the dissipative term in Eq. (1). Further investigation of such secondary plateaus can lead to better insight into their nature, and possibly identify realistic configurations where they could arise, e.g. in colloidal sliding [42-45].
Transfer of rhodamine-123 into the brain and cerebrospinal fluid of fetal, neonatal and adult rats
Background Adenosine triphosphate binding cassette transporters such as P-glycoprotein (PGP) play an important role in drug pharmacokinetics by actively effluxing their substrates at barrier interfaces, including the blood-brain, blood-cerebrospinal fluid (CSF) and placental barriers. For a molecule to access the brain during fetal stages it must bypass efflux transporters at both the placental barrier and brain barriers themselves. Following birth, placental protection is no longer present and brain barriers remain the major line of defense. Understanding developmental differences that exist in the transfer of PGP substrates into the brain is important for ensuring that medication regimes are safe and appropriate for all patients. Methods In the present study PGP substrate rhodamine-123 (R123) was injected intraperitoneally into E19 dams, postnatal (P4, P14) and adult rats. The naturally fluorescent properties of R123 were utilized to measure its concentration in blood-plasma, CSF and brain by spectrofluorimetry (Clariostar). Statistical differences in R123 transfer (tissue-to-plasma concentration ratios) were determined using Kruskal-Wallis tests with Dunn’s corrections. Results Following maternal injection the transfer of R123 across the E19 placenta from maternal blood to fetal blood was around 20 %. Of the R123 that reached fetal circulation 43 % transferred into brain and 38 % into CSF. The transfer of R123 from blood to brain and CSF was lower in postnatal pups and decreased with age (brain: 43 % at P4, 22 % at P14 and 9 % in adults; CSF: 8 % at P4, 8 % at P14 and 1 % in adults). Transfer from maternal blood across placental and brain barriers into fetal brain was approximately 9 %, similar to the transfer across adult blood-brain barriers (also 9 %). Following birth when placental protection was no longer present, transfer of R123 from blood into the newborn brain was significantly higher than into adult brain (3 fold, p < 0.05). Conclusions Administration of a PGP substrate to infant rats resulted in a higher transfer into the brain than equivalent doses at later stages of life or equivalent maternal doses during gestation. Toxicological testing of PGP substrate drugs should consider the possibility of these patient specific differences in safety analysis.
P-glycoprotein (PGP) is an adenosine triphosphate (ATP) binding cassette (ABC) transporter that actively effluxes its substrates at barrier interfaces throughout the body (reviewed in [1]). PGP efflux at the blood-brain barriers can greatly restrict molecular entry into the central nervous system (CNS) [2,3]. Drugs that are not substrates for PGP are more likely to be able to reach sites of action within the CNS, unless excluded by other ABC transporters, whereas drugs that do bind to PGP are less likely to have off-target CNS side effects [4,5].
Recently, studies have identified many complexities regarding the functional capacity of PGP. Two different barriers in the body (e.g. the blood-brain barrier and the placental barrier) may efflux PGP substrates at different rates [6,7]. In addition, there may be variation in the level of substrate efflux that occurs at the same barrier (e.g. the blood-brain barrier) between two people [6,7]. Factors such as genetics, disease and medication use can all alter the degree of PGP efflux that will occur for different patients [1,8-10]. Age can also contribute to the ABC efflux capacity of certain barrier interfaces. At the blood-brain barrier (BBB) of humans and rodents the level of PGP (abcb1) increases over the course of development [11-13]. Our recent studies have shown that the percentage of the radiolabelled PGP substrate digoxin (³H-digoxin) that transferred from blood to the brain and CSF was greater in fetal and newborn rats than in adults [7]. For fetal pups the placenta restricted the amount of digoxin that entered the fetal blood, meaning that although transfer from blood to brain was high, the total transfer from maternal blood to fetal brain was similar to the transfer from adult blood to brain [7]. These results have important implications for the understanding of PGP substrate transfer into the developing brain following administration to pregnant women or newborn children. Digoxin does not solely bind to the PGP transporter but also to other transport systems such as Organic Anion Transporting Polypeptide 2 [14]. Replicating these findings with other PGP substrates would strengthen our understanding of whether multiple PGP-binding molecules follow a similar developmental profile, especially as juvenile stages of development have not been investigated with digoxin and it is therefore not known at what stage PGP substrate transfer into the brain and CSF matures to adult levels.
Rhodamine-123 (R123) is a naturally fluorescent 380 Da molecule that has been used as a prototypical substrate in many studies to assess the function of the PGP transporter [15-21]. The overlap in substrate affinity of efflux transporters makes complete specificity to one transport system unlikely. However, studies have shown that other transporters such as BCRP and MRP1 are unlikely to have interactions with R123 that compare to the high degree of efflux by PGP [22]. At the placenta PGP is present on the maternal side of syncytiotrophoblast cells, limiting substrate transfer from mother to fetus [23,24]. Accordingly, the transfer of R123 across the placenta has been shown to be 11 times lower in the maternal-fetal direction than in the fetal-maternal direction [25]. At the BBB PGP is located on the blood-facing side of cerebral endothelial cells, restricting molecular transfer from blood to brain [12,26]. The transfer of R123 into the brain of PGP knockout adult mice has been shown to be 4-fold greater than in controls that have PGP at the BBB [15].
In the present study R123 transfer was measured across the E19 rat placenta from mother to fetus and from blood into brain and cerebrospinal fluid (CSF) at E19, P4, P14 and adult ages. These four ages reflect stages of brain development in humans of mid-gestation, late-gestation, newborn children and adults respectively [27]. The results presented describe differences in the transfer of PGP substrate R123 into the brain and CSF at different developmental stages and indicate the importance of placental protection to the overall entry into the fetal brain, as well as the greater vulnerability of the newborn brain.
Animals
Animal experimentation was approved by the University of Melbourne Animal Ethics Committee (Ethics Permission AEC: 1714344.1) and conducted in compliance with Australian National Health and Medical Research Guidelines. The Sprague Dawley strain of Rattus norvegicus was used for this study, supplied by the University of Melbourne Biological Research Facility. Rats were subjected to a 12-hour light/dark cycle with ad libitum access to food and water. Age groups investigated were embryonic day 19 (E19), postnatal day 4 (P4), P14 and adult (6-10 weeks).
Permeability experimentation
Rhodamine-123 (R123; SIGMA; 2.5-20 mg/kg) dissolved in sterile saline (0.9 % NaCl) was injected i.p. and allowed to circulate for 30 minutes prior to sampling. For postnatal experiments blood samples were taken from the right cardiac ventricle and cerebrospinal fluid (CSF) from the cisterna magna. Brain samples were cortical segments of the frontal and parietal lobes, dissected as previously described [28]. Blood samples were centrifuged (1200g, 5 min) to obtain plasma, and CSF was examined microscopically to confirm the absence of red blood cells, as previously described [29]. For pregnancy studies the dam was anaesthetized with urethane (SIGMA; 25 % w/v, 1 ml/100 g, i.p.), placed supine on a heating pad (35 °C) and an endotracheal cannula inserted. Blood was sampled from the maternal circulation periodically via a femoral arterial catheter, maintaining blood volume with equivalent volumes of heparinized (Hospira Inc, 5000 international units/ml) saline. Fetal blood, CSF and brain were sampled identically to the postnatal procedures. Fetal sampling was conducted from 30 minutes after R123 injection and concluded when the placental circulation became insufficient, as previously described [7]. All samples were stored at −20 °C until use.
Spectrofluorimetry
R123 fluorescence levels were measured using the Clariostar plate reader (BMG Labtech). Samples were loaded onto black Corning 384-well clear-bottom plates (04214038). Fluorescence readings were taken at excitation 504 nm and emission 547 nm, as determined by the Clariostar spectral analysis setting to identify optimal sample fluorescence. R123 was extracted from cortical samples in 10:1 volume/weight HCl (0.1 M) by manual crushing, pipette dissociation, sonication and vortexing. Extracts were centrifuged (10,000g, 5 min) and the clear supernatant sampled for analysis. Plasma and CSF samples were measured at 1:5 dilutions. Control samples (from animals not injected with R123) were run on every plate for background correction. Fluorescence readings were converted to R123 concentrations (ng/µl) via standard concentration lines, ensuring measurements were in the linear range (see Fig. 1). The linear range of detection was determined by spiking serial dilutions of R123 into plasma, CSF and brain samples from control animals. R123 was extracted from brains, and CSF/plasma samples were diluted for measurement, as described above.
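A minimal sketch of this conversion step (Python), assuming a linear standard curve of the form fluorescence = slope × amount + intercept with the representative fit values of Fig. 1; the axis orientation, background handling and dilution factor are illustrative assumptions, not the authors' exact pipeline:

```python
def reading_to_ng_per_ul(reading, blank, slope=0.96, intercept=7.8,
                         dilution=5.0):
    """Convert a raw fluorescence reading into an R123 concentration (ng/ul).

    Assumes a linear standard curve, fluorescence = slope * amount +
    intercept (cf. the representative fit y = 0.96x + 7.8, R^2 = 0.98 of
    Fig. 1); 'dilution' restores the 1:5 dilution used for plasma and CSF.
    """
    corrected = reading - blank               # subtract control-well signal
    amount = (corrected - intercept) / slope  # invert the standard line
    return amount * dilution                  # undo the 1:5 sample dilution

print(reading_to_ng_per_ul(reading=150.0, blank=12.0))   # -> ~678 (ng/ul)
```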
Statistics
Results are described as concentration ratios between the compartments of interest (e.g. [brain]/[plasma] ratio) at the time of sampling, as previously described [7,30]. Shapiro-Wilk tests were used to determine data normality, which indicated that non-parametric analysis best suited the data. Statistical differences between concentration ratios were determined using Kruskal-Wallis tests with Dunn's correction for multiple comparisons. Least squares linear regression with analysis of covariance was conducted as previously described [31]. Significance levels were set at p < 0.05.
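The same analysis can be sketched in a few lines of Python, assuming scipy and the third-party scikit-posthocs package for Dunn's test; the input numbers below are invented placeholders, not the study's data:

```python
import numpy as np
from scipy import stats
import scikit_posthocs as sp    # third-party package providing Dunn's test

# Illustrative, made-up brain/plasma ratios (%), one value per animal
p4 = np.array([18.0, 35.0, 52.0, 61.0, 40.0])
p14 = np.array([16.0, 21.0, 25.0, 28.0, 20.0])
adult = np.array([3.0, 6.0, 9.0, 14.0, 12.0])

h, p = stats.kruskal(p4, p14, adult)      # omnibus Kruskal-Wallis test
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Pairwise Dunn's comparisons with multiple-testing adjustment
print(sp.posthoc_dunn([p4, p14, adult], p_adjust="bonferroni"))
```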
Linear range of detection
To establish the linear range of R123 detection in blood, CSF and brain samples, R123 was spiked into the control tissue/fluid (see Methods). Results are shown in Fig. 1. Least squares linear regression with analysis of covariance indicated that a single line best represented the data. There were no significant differences (p > 0.05) between any two tissues or ages for line elevation or slope. This indicated that R123 fluorescence readings were independent of sample type. Therefore, one equation was used to convert fluorescence readings to concentration of R123 (see Fig. 1). The linear range of detection was 4.22 ng-4.59 µg per 30 µl well. This equates to 380 nM to 403 µM of R123 per diluted sample.

Fig. 1 Samples analyzed were P4 plasma (blue circle), P14 plasma (red square), adult plasma (green triangle), P4 brain extract (purple inverted triangle), P14 brain extract (orange diamond), adult brain extract (black large circle) and adult CSF (brown square). Note the similarity (overlap) of samples over the linear range. Representative equation for all data points: y = 0.96x + 7.8; R² = 0.98
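As a quick sanity check on the quoted molar window, assuming the molecular weight of rhodamine-123 (~380.8 g/mol for the chloride salt):

```python
MW_R123 = 380.8                 # g/mol, rhodamine-123 (chloride salt)
WELL_UL = 30.0                  # assay volume per well (ul)

for ng_per_well in (4.22, 4590.0):          # limits of the linear range
    mol = ng_per_well * 1e-9 / MW_R123      # grams -> moles
    molar = mol / (WELL_UL * 1e-6)          # moles per litre of well volume
    print(f"{ng_per_well:7.2f} ng/well = {molar * 1e6:6.1f} uM")
# -> roughly 0.4 uM and 400 uM, i.e. the quoted ~380 nM to ~403 uM window
```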
R123 transfer into the postnatal brain
The concentration of R123 in the plasma and brain of newborn (P4), early neonatal (P14) and adult (6-10 week) rats is shown in Fig. 2. The concentration of R123 in the brain was higher for early postnatal animals (P4, P14) than adults over the range of plasma concentrations investigated (Fig. 2a). Least squares linear regression with analysis of covariance showed that there was a significant difference in the elevation of the P14 and adult data (Fig. 2a; p < 0.01). The brain/plasma concentration ratio for all samples was not significantly different between P4 (43.3 ± 34.2 %) and P14 (22.2 ± 5.9 %; Fig. 2b). The brain/plasma ratio for adults (8.7 ± 9.1 %) was significantly lower than at P4 (0.20 fold, p < 0.05). Adult values were also lower than at P14 (0.39 fold) but this did not reach significance (p > 0.05).

Fig. 2 Transfer of R123 into the postnatal brain. a The concentration of R123 in the brain cortices compared to the concentration in the plasma (blood). The three age groups investigated were P4 (light blue, squares), P14 (teal, diamonds) and adult (dark blue, triangles) rats. b The concentration ratio between R123 in the brain and plasma. Graphs a and b display data from the same rats. Significant differences between age groups are indicated: * p < 0.05 (Kruskal-Wallis with Dunn's correction)
R123 transfer into the postnatal CSF
The concentration of R123 in the plasma and CSF of newborn (P4), early neonatal (P14) and adult (6-10 week) rats is shown in Fig. 3. Similar to the brain, the concentration of R123 in the CSF was higher for early postnatal animals (P4, P14) than adults over the range of plasma concentrations investigated (Fig. 3a). Least squares linear regression with analysis of covariance indicated that the lines were significantly different between P14 and adult rats (slope p < 0.01; elevation p < 0.01) as well as between P4 and adult rats (elevation p < 0.001). The steeper gradient of the lines for early postnatal rats compared to adults suggests that the discrepancy in CSF transfer between the ages is likely to be highest at higher doses. The CSF/plasma concentration ratios were not significantly different between P4 (7.9 ± 4.2 %) and P14 (7.5 ± 3.8 %; Fig. 3b). The transfer for adults (1.3 ± 0.9 %) was significantly lower than at P14 (0.17 fold, p < 0.05) and P4 (0.17 fold, p < 0.01). Comparing Figs. 2 and 3 it can be seen that the brain/plasma ratios were much more variable than the CSF/plasma ratios, for which the linear trend was much more consistent. The brain/plasma ratio of R123 was higher than the CSF/plasma ratio at each age (Figs. 2b and 3b; note the difference in Y-axis scales).

Fig. 3 Transfer of R123 into the postnatal cerebrospinal fluid (CSF). a The concentration of R123 in the CSF compared to the concentration in the plasma (blood). The three age groups investigated were P4 (light blue, squares), P14 (teal, diamonds) and adult (dark blue, triangles) rats. b The concentration ratio between R123 in the CSF and plasma. Graphs a and b display data from the same rats. Significant differences between age groups are indicated: * p < 0.05 and ** p < 0.01
R123 transfer from maternal blood to fetal brain
Following maternal exposure at E19, the concentrations of R123 in the maternal and fetal plasma are shown in Fig. 4. From 30 min to 120 min post-injection, maternal plasma levels slightly decreased and fetal plasma levels were reasonably stable (Fig. 4a). Over the course of the experiment R123 transfer across the placenta was restricted, as indicated by the [fetal plasma]/[maternal plasma] ratio of 19.9 ± 3.3 % (Fig. 4b).
Of the concentration of R123 that reached the fetal plasma, an average of 38 ± 7 % entered the CSF at the time of sampling (Fig. 5a). This percentage slightly increased from 37 min post-injection (23 %) to 114 min post-injection (35 %). Unlike at postnatal ages, E19 values were not taken at one exact time-point and did not cover the same range of plasma concentrations. However, the CSF/plasma entry for all E19 pups was higher than at all postnatal ages, indicating high levels of permeability. On average, the entry of R123 into the E19 CSF was higher than at P4 (5 fold), P14 (5 fold) and adult (26 fold) ages (see Fig. 5b). The brain/plasma concentration ratio at E19 between 30 and 120 min post dam injection was 43 ± 10 % (Fig. 5c). Similar to transfer into the CSF, brain/plasma ratios increased from 37 min (28 %) to 114 min post-injection (50 %). R123 entry into the brain at E19 was similar to that at P4 (0.95 fold) but higher than at P14 (2 fold) and adult (5 fold) ages (see Fig. 5d).
Discussion
The results from this study show that, despite the well-described presence of PGP at the blood-brain barrier [12,13,26,32], the capacity to limit the entry of substrates into the brain is developmentally regulated at the ages studied. Transfer of R123 into the brain was highest at early stages of development (fetal, newborn) and decreased through later postnatal stages (P14, adult). Previous studies have suggested that the relative increase of brain and CSF volumes during development [33,34] can create an apparent decrease in molecular concentrations in the adult CNS due to a larger distribution volume [35]. However, the results described in this study are ratios between concentrations in blood and brain, and because the increase in blood volume over the course of development exceeds the relative increase in brain/CSF volume, this cannot fully account for the developmental trends described [33,36,37]. The presentation of results over a range of drug concentrations in plasma also suggests that the developmental differences are unlikely to be due to variances between ages in the amount of drug that could access the blood from the peritoneum. Previous studies have shown that this developmental trend in BBB transfer is not observed with at least some molecules that pass the barriers passively and are not recognized and effluxed by the PGP transporter [7]. This suggests that the differences in R123 transfer into the brain are likely to be due to differences in PGP capacity at the brain barriers, rather than to developmental factors that influence the transfer of all molecules [7,28].
The age-dependent changes in R123 transfer into the brain correlate with previous reports of PGP levels at the blood-brain barrier. In Fig. 6 the restriction of R123 by the blood-brain barriers measured in this study is compared to the level of PGP gene abcb1a expression in the rat brain, measured in our previous publication [11]. Restriction of R123 transfer into the brain was highest in adults, which corresponds to the highest level of PGP expression [11-13]. PGP levels also increase in the human brain between mid-gestation and adult ages [13], providing some evidence that the functional analysis of ABC transporters at rodent blood-brain barriers may translate to humans. If the functional capacity of PGP is lower at earlier stages of development, this should similarly affect the transfer of many other PGP substrates, not just R123. Previous publications have shown a similar percentage transfer profile into the brain for the PGP substrate digoxin as was shown by R123 in this study. Transfer of digoxin from blood to brain decreased from 47 % at E19 to 12 % in adults [7], reflecting a similar profile to R123 transfer, which was 43 % at E19 and 9 % in adults (Fig. 5). The identification of this age-dependent transfer into the brain using two separate PGP substrates, measured with different methods (radiolabel, fluorescence), provides strong evidence that PGP functionality at the BBB may change over the course of development. Although these two PGP substrates followed similar percentage profiles, not all PGP substrates may have the same fold changes between ages. Other factors, such as the affinity for the PGP binding site, level of lipid solubility and degree of plasma protein binding, can all contribute alongside changes in PGP functionality to dictate the overall difference in barrier transfer between patients.

Fig. 6 Comparison between R123 transfer into the brain and abcb1a (PGP) levels in the brain at different stages of development. Average concentration ratios for the restriction of R123 transfer into the brain ([plasma]/[brain]) at E19, P4, P14 and adult (P42+) are listed as a ratio compared to adult (red squares). Data are the same as Fig. 5. The average level of abcb1a (PGP) in the rat brain at E13, E15, E18, P1, P7, P21 and adult (P42+), as measured in [11], is listed as a ratio compared to adult (blue circles). Age is on the x-axis, with embryonic day (E) before birth and postnatal day (P) after birth. Note the positive gradient of both lines, with abcb1a levels in the brain and restriction of PGP substrate R123 entry into the brain both increasing with age. Linear regression lines of best fit are shown for R123 restriction (red; R² = 0.98) and abcb1a levels (blue; R² = 0.92).

Transfer of the PGP substrate R123 across the E19 placenta in this study was 20 % (relatively stable over the 30-120 min experimental period). Results from previous studies investigating PGP substrate transfer across the placenta provide varying results depending on the drug selected, route of administration, stage of pregnancy, animal model and time post-injection at which measurements were taken. Some studies reported 25-37 % transfer for digoxin and 2.5-25 % for paclitaxel [7,38,39]. The overall transfer of R123 from maternal blood into the fetal brain in the present study was 9 % (20 % across the placenta, 43 % of that into fetal brain), which was identical to the transfer into the adult brain (9 %).
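The compounding of the two barrier ratios is a one-line calculation; the sketch below simply multiplies the quoted values and prints the same ~9 % figure for both routes:

```python
placenta = 0.20      # maternal plasma -> fetal plasma at E19
fetal_bbb = 0.43     # fetal plasma -> fetal brain
adult_bbb = 0.09     # adult plasma -> adult brain, for comparison
print(f"maternal blood -> fetal brain: {placenta * fetal_bbb:.0%}")   # ~9%
print(f"adult blood -> adult brain:    {adult_bbb:.0%}")              # 9%
```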
This suggests that if pregnant adults take a medication that is a PGP substrate, a similar amount of drug could reach both the mother's and the baby's brains. Previous analysis using similar methodology showed that digoxin followed a similar trend, with 12 % transfer into the adult brain and 17 % from maternal blood to fetal brain [7]. R123 transfer differed across the multiple barriers measured in the adult body. Transfer was highest across the placenta (20 %), followed by transfer into the brain (9 %); transfer into the CSF (1 %) was very low. This may be a product of differences in PGP functionality between the respective barriers, or of characteristics of R123 that predominate in certain tissues/fluids compared to others.
The present study provides a basis on which a host of future analyses could be conducted. The quantification of PGP substrate levels in the brain cortices does not identify whether a substrate was accumulating in the extracellular fluid of the brain, within certain cell types or in subcellular compartments. Future studies may investigate this to better understand which cells within the brain may be at highest risk of PGP substrate-mediated damage at different developmental stages. Metabolomic analysis of PGP substrates may contribute to the understanding of where parent drug molecules and any metabolite forms distribute following administration. The present study primarily investigated a short time point after i.p. injection (30 min), so the majority of the compound in blood would be expected to still be R123 at the time of measurement [40]. In addition, metabolism of R123 has been shown to be negligible during placental transfer [41]. However, the deacetylated metabolite rhodamine-110 (R110) may appear in plasma in low amounts [40]. This metabolite is not lipid soluble and is therefore likely to have low barrier permeability. Investigating changes in the transfer of drug metabolites over the course of development would be an interesting and novel area of research. The present study focused on age as a factor; however, similar analyses could be conducted with cohorts of both sexes to identify at what stage(s) of development there may be sex-dependent differences in PGP functionality. A final area of consideration for future studies would be the similar profile of R123 transfer into the CSF and brain over the course of development (Fig. 5), despite a much more prominent change in PGP level at the blood-brain barrier than at the blood-CSF barrier [11]. This may be due to age-dependent changes in PGP levels at other interfaces, such as the ventricular CSF-brain interface [28], that may contribute to a concentration of substrates in the CSF early in development.
Conclusions
The results from the present study have clinical implications for the use of PGP substrate medications in pregnant women and in newborn children. The placenta provides a degree of protection for substrates entering the fetal bloodstream but higher transfer into the fetal brain may mean that the concentration of drug that reaches both the maternal and fetal brain could be quite similar. The largest differential in drug transfer into the brain compared to adults is likely to be in newborn children. P4 rats have a stage of brain development that resembles a key patient cohort, the earliest viable prematurely born humans, whereas P14 resembles a full-term newborn [27,42,43]. Transfer of R123 into the P4 rat brain was 2 times greater than at P14 and 5 times greater than that of adults. Newborn children, particularly those born prematurely, who are without placental protection but still at stages of brain development that have lower efflux transporter capacity than adults may have increased risk of drug transfer to the brain, where damage may occur. Further replication of these results with other PGP substrates, using alternative measurement methods and experimental models, would strengthen the suggestion that the observed developmental trends are indeed due to changes in PGP functionality.
Switching the mode of sucrose utilization by Saccharomyces cerevisiae
Background Overflow metabolism is an undesirable characteristic of aerobic cultures of Saccharomyces cerevisiae during biomass-directed processes. It results from elevated sugar consumption rates that cause a high substrate conversion to ethanol and other by-products, severely affecting cell physiology, bioprocess performance, and biomass yields. Fed-batch culture, where sucrose consumption rates are controlled by the external addition of sugar aiming at its low concentrations in the fermentor, is the classical bioprocessing alternative to prevent sugar fermentation by yeasts. However, fed-batch fermentations present drawbacks that could be overcome by simpler batch cultures at relatively high (e.g. 20 g/L) initial sugar concentrations. In this study, a S. cerevisiae strain lacking invertase activity was engineered to transport sucrose into the cells through a low-affinity and low-capacity sucrose-H+ symport activity, and the growth kinetics and biomass yields on sucrose were analyzed using simple batch cultures. Results We have deleted from the genome of a S. cerevisiae strain lacking invertase the high-affinity sucrose-H+ symporter encoded by the AGT1 gene. This strain could still grow efficiently on sucrose due to a low-affinity and low-capacity sucrose-H+ symport activity mediated by the MALx1 maltose permeases, and its further intracellular hydrolysis by cytoplasmic maltases. Although sucrose consumption by this engineered yeast strain was slower than with the parental yeast strain, the cells grew efficiently on sucrose due to an increased respiration of the carbon source. Consequently, this engineered yeast strain produced less ethanol and 1.5 to 2 times more biomass when cultivated in simple batch mode using 20 g/L sucrose as the carbon source. Conclusion Higher cell densities during batch cultures on 20 g/L sucrose were achieved by using a S. cerevisiae strain engineered in the sucrose uptake system. Such a result was accomplished by effectively reducing sucrose uptake by the yeast cells, avoiding overflow metabolism, with the concomitant reduction in ethanol production. The use of this modified yeast strain in the simpler batch culture mode can be a viable option to more complicated traditional sucrose-limited fed-batch cultures for biomass-directed processes of S. cerevisiae.
Background
The yeast Saccharomyces cerevisiae has been known to humans for thousands of years and is routinely used in many traditional biotechnological processes, including bread making and the production of several alcoholic beverages. Consequently, it has been extensively studied and is considered a model system for the metabolic, molecular and genetic analysis of eukaryotic organisms. Due to its GRAS status, S. cerevisiae yeasts are also applied on a huge scale in biomass-directed processes, such as the production of baker's yeast, yeast extract and other food additives (vitamins, proteins, enzymes, and flavouring agents) [1], as well as for the production of heterologous proteins (including vaccines and other therapeutic compounds), or even for engineering completely novel metabolic pathways leading to the biotechnological production of important pharmaceuticals [2-5]. The combination of the large knowledge of yeast physiology with the fact that the yeast genome has been fully sequenced has resulted in the development of production strains with optimized properties [6,7].
However, it should be stressed that most of the industrial applications of S. cerevisiae rely on its ability to efficiently ferment sugars, even under fully aerobic conditions [8]. Since low by-product formation and a high biomass yield on sugar are prerequisites for the economic viability of biomass-directed applications, the occurrence of alcoholic fermentation in such processes is highly undesirable, as it reduces the biomass yield [9]. Aerobic ethanol production by S. cerevisiae cultures occurs when the carbon flux through glycolysis exceeds the capacity of the tricarboxylic acid cycle to completely oxidize the pyruvate produced. Thus, fully respiratory metabolism only takes place during the utilization of low sugar concentrations, at slow rates of sugar consumption and growth, and with plenitude of oxygen. Indeed, this yeast has developed several sensing and signaling mechanisms not only to ensure efficient sugar uptake from the medium, but also to repress alternative carbon source utilization and respiration, thus favoring the production of ethanol [10-14]. Accordingly, high cell concentrations are rarely feasible in a simple batch mode, as the required high initial sugar concentration would result in significant production of ethanol, which can accumulate to values as high as 50% of the supplied sugar.
Consequently, in order to maximize biomass yield, S. cerevisiae cells are cultivated in a fed-batch manner, in which a concentrated sugar solution is fed into the bioreactor under a variety of control strategies. Usually, after a batch phase, an exponential feeding profile is applied to ensure optimal production and growth conditions, followed by a decline phase at the end of the cultivation. To ensure optimal oxidative growth, several approaches have been developed to control the feed rate at a level below the critical value beyond which ethanol is produced and the biomass yield therefore decreases. Nevertheless, supplementary equipment, complex control systems and kinetic models are usually required to monitor the fermentation process on-line in order to provide small sugar concentrations to the yeast cells, avoiding ethanol production [15-17]. Other technical and physical limitations, such as time delays or measurement noise, sub-optimal stirring and oxygen transfer, as well as non-homogeneous supply of nutrients, may result in a decrease of the growth rate of the microorganism and/or overflow metabolism, and consequently in decreased biomass productivity [18,19]. Many of these problems could be overcome by culturing the cells in the simpler batch mode, as long as overflow metabolism and/or ethanol production is prevented.
Sucrose is by far the most abundant, cheap and important sugar in the industrial utilization of the yeast S. cerevisiae. More than half of the world's ethanol production relies on the efficient fermentation of sucrose-rich broths such as sugarcane juice and molasses, and these raw materials are also used for the production of baker's yeast and of several distilled alcoholic beverages [20,21]. It is generally accepted that S. cerevisiae cells harbor an extracellular invertase (β-D-fructosidase), which hydrolyzes sucrose into glucose and fructose; these are transported into the cell by hexose transporters and metabolized through glycolysis. This enzyme has been a paradigm for the study of protein synthesis and regulation of gene expression. Invertase is encoded by one or several SUC genes (SUC1 to SUC5 and SUC7), SUC2 being the most common locus, found in almost all S. cerevisiae strains as well as in other closely related yeast species [22,23].
These SUC genes generate two different mRNAs: a larger transcript encoding an invertase with a signal sequence required for its secretion from the cell, and a shorter transcript lacking this signal sequence and thus coding for an intracellular form of the enzyme [24]. While the former mRNA is repressed by high concentrations of sucrose or its hydrolysis products (glucose and fructose), the intracellular invertase is expressed constitutively. Finally, it has recently become evident that efficient invertase expression requires low levels of glucose or fructose in the medium [25-27]. Despite significant improvements in our knowledge regarding the molecular mechanisms involved in the repression of SUC expression, the transcriptional activator of this gene is still unknown [28,29]. A further level of complexity is the fact that invertase levels at the yeast cell surface are poorly (or even inversely) correlated with the ability of the cells to ferment this sugar, especially at high sucrose concentrations [30-32]. Extracellular sucrose hydrolysis may even allow the growth of other microorganisms, including contaminant yeasts lacking invertase [33]. Extracellular production of fructose also poses several problems for the industrial process due to the slower fructose utilization by S. cerevisiae cells [34], which may result in residual sugar at the end of the cultivation, with consequent losses in productivity.
In this study, a poorly characterized pathway for sucrose utilization in S. cerevisiae was engineered in order to improve biomass-directed applications of this microorganism. Several reports have shown that the kinetics of cell growth on sucrose by this yeast can only fit a model in which its utilization is composed of contributions from both the direct uptake of sucrose and the uptake of its hydrolysis products into the cell [35-37]. The analysis of direct sucrose uptake by S. cerevisiae cells revealed the presence of an active sucrose-H+ symport [38,39], which was shown to be mediated by two different transport systems: a high-affinity (Km ~7 mM) uptake mediated by the AGT1 permease, while the MALx1 maltose transporters allow the active uptake of sucrose with low (Km > 100 mM) affinity [40,41]. The active uptake of sucrose would justify the existence of the constitutive intracellular invertase, although sucrose can also be hydrolyzed by other intracellular glycosidases, such as α-glucosidase (maltase), an enzyme with the same affinity and activity for sucrose and maltose [42]. Indeed, we have recently shown that yeast strains unable to ferment glucose or fructose due to the absence of hexose transport, or strains lacking invertase, can actively transport sucrose into the cells, allowing efficient fermentation of this sugar [43,44]. In the present report we show that, by modulating at the molecular level the rate of active sucrose uptake, we obtain yeast strains that easily attain higher cell densities when grown in simple batch cultures with 20 g/L sucrose as carbon source. This novel and valuable strategy, which improves one of the industrial applications of S. cerevisiae, represents an interesting alternative to classical bioprocessing approaches.
Kinetics of active H + -sucrose uptake in yeast
Strain 1403-7A is a MAL-constitutive strain that lacks invertase activity, but it still grows on and ferments sucrose efficiently due to its active transport into the cell and intracellular hydrolysis by a cytoplasmic α-glucosidase [42,44,45]. Since several different permeases (with differing kinetic properties) have been described as capable of active sucrose-H+ symport [40], we decided to investigate the role that sugar transport could have in sucrose utilization by these yeast cells. Sucrose transport kinetics by strain 1403-7A (Figure 1) indeed indicated the presence of both a high-affinity (Km ~7 mM) and a low-affinity (Km ~120 mM) transport activity, as has already been described for other wild-type yeast strains [40]. The AGT1 permease is responsible for the high-affinity component, while the low-affinity transport activity is mediated by any of the MALx1 maltose permeases. Indeed, this strain contains in its genome (Figure 2) at least 3 different MALx1 permeases, MAL21 (chromosome III), MAL31 (chromosome II), and MAL41 (chromosome XI), while the AGT1 gene is located on chromosome VII. It should be stressed that both genetic and molecular studies [46,47] have already shown that in strain 1403-7A the MAL31, MAL41 and AGT1 permeases are functional, while we do not know whether the MAL21 gene encodes a functional permease. Our PFGE and blotting analysis also showed the presence of a unique SUC gene (SUC2 in chromosome IX, data not shown) in the genome of strain 1403-7A, probably a mutant suc2 allele, since this strain lacks invertase activity [42,44,45].
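The two-component uptake kinetics described above can be written as the sum of two Michaelis-Menten terms; a minimal sketch (Python), using the Km values quoted in the text but placeholder Vmax values, since the fitted capacities are not reproduced here:

```python
import numpy as np

def sucrose_uptake(S, vmax_hi=1.0, km_hi=7.0, vmax_lo=2.0, km_lo=120.0):
    """Two-component Michaelis-Menten model of active sucrose-H+ symport:
    a high-affinity (AGT1-like, Km ~ 7 mM) plus a low-affinity (MALx1-like,
    Km ~ 120 mM) component. Vmax values are illustrative placeholders."""
    return vmax_hi * S / (km_hi + S) + vmax_lo * S / (km_lo + S)

S = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # mM sucrose
for s_mM, v in zip(S, sucrose_uptake(S)):
    print(f"{s_mM:6.1f} mM -> v = {v:.2f} (arbitrary units)")
```

Fitting such a model to measured initial rates (e.g. with scipy.optimize.curve_fit) is one way the two Km components can be resolved from uptake data.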
In order to develop a new yeast strain that would take up sucrose from the medium by just the low-affinity transport activity, we deleted from strain's 1403-7A genome the high-affinity sucrose-H + symporter encoded by the AGT1 gene (see strain LCM001 in Fig. 2). As expected, the kinetics of active sucrose transport indicated that in the Kinetics of active H + -sucrose symport activity in yeast strains Figure 1 Kinetics of active H + -sucrose symport activity in yeast strains. The initial rates of active H + co-transport with sucrose were determined in yeast cells from strain 1403-7A (solid circles), or its agt1∆ counterpart strain LCM001 (hollow squares), pre-grown in rich YP media containing 20 g/L sucrose.
agt1∆ strain LCM001 sucrose-H + symport was mediated by a low-affinity and low-capacity transport activity ( Fig. 1). Since the AGT1 permease is a low-affinity (K m 20-30 mM) maltose transporter [41], in strain LCM001 maltose uptake from the medium was normally mediated by the above indicated high-affinity MALx1 (MAL21, MAL31 and MAL41) maltose transport activities (data not shown).
Sucrose utilization by yeast strains
The kinetics of sucrose consumption and ethanol production during batch fermentations ( Figure 3) indicated that the agt1∆ strain LCM001 had a significant slower capacity to consume sucrose from the medium (at a concentration of 20 g/L), producing less ethanol than the parental wildtype strain 1403-7A. This phenotype was more pronounced when fermentations were performed in synthetic medium, compared with rich YP medium (Fig. 3). Strain 1403-7A showed rates of maximum sucrose consumption of 0.25 and 0.78 g sucrose (g cell dry weight) -1 h -1 in synthetic and rich medium, respectively, while the maximum sucrose consumption rate was reduced fourfold in the agt1∆ LCM001 strain, to 0.06 and 0.17 g sucrose (g cell dry weight) -1 h -1 in synthetic and rich medium, respectively. The beneficial effect of complex nitrogen sources (peptides or amino acids) compared with ammonium sulfate present in the synthetic medium has been described previously [48][49][50]. Furthermore, no major difference in the rates of sucrose consumption or ethanol production were observed among both strains when high sucrose concentrations (> 200 g/L) were used ( Figure 4). The maximum sucrose consumption rates for the wildtype and the agt1∆ strains were 0.99 and 0.78 g sucrose (g cell dry weight) -1 h -1 , respectively, which is in accordance with the kinetics of active sucrose transport presented by Batch fermentations of 20 g/L sucrose by yeast strains Figure 3 Batch fermentations of 20 g/L sucrose by yeast strains. The consumption of sucrose, and ethanol and biomass production during sucrose batch fermentations with 10 g/L yeast cells (dry weight) of strain 1403-7A (black symbols) or strain LCM001 (open symbols), were determined using synthetic YNB (circles) or rich YP (squares) medium. Data shown are means (± range) from two independent experiments. Table 1, data not shown).
At 20 g/L sucrose (~58 mM) the low-affinity transport system present in strain LCM001 is far from saturated (Fig. 1), while at this concentration the wild-type strain's high-affinity transporter is operating at high capacity. At concentrations of ~0.73 M sucrose (250 g/L), probably all transport activities present in both strains are operating at high capacity, allowing efficient sucrose fermentation. An intriguing observation was that even when the yeast cells were consuming sucrose more slowly, the increases in biomass during these batch fermentations were the same for both strains (Fig. 3 and 4). This result indicated that strain LCM001 preferentially respires more of the carbon source (sucrose), rather than fermenting it into ethanol.
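The saturation argument above can be checked directly from the Michaelis-Menten relation v/Vmax = S/(Km + S). A short illustrative calculation, using the Km values reported earlier in this section:

```python
def fractional_saturation(s_mM, km_mM):
    """Michaelis-Menten fractional saturation: v/Vmax = S / (Km + S)."""
    return s_mM / (km_mM + s_mM)

for label, km in [("high-affinity (AGT1)", 7.0), ("low-affinity (MALx1)", 120.0)]:
    for s in (58.0, 730.0):  # 20 g/L and 250 g/L sucrose, in mM
        sat = fractional_saturation(s, km)
        print(f"{label}, {s:.0f} mM sucrose: {sat:.0%} of Vmax")
# At 58 mM the high-affinity system runs near 89% of Vmax while the
# low-affinity system reaches only ~33%; at 730 mM both exceed 85%.
```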
Indeed, when we analyzed the kinetics of growth on sucrose by both strains (Figure 5), strain 1403-7A clearly presented a typical diauxic growth curve [51]: initially cell growth is coupled to efficient sugar consumption and ethanol production, and when the carbon source is exhausted the cells stop growing and the diauxic shift takes place, allowing the subsequent consumption of the ethanol produced during sugar fermentation. However, strain LCM001 had a quite different pattern of sucrose utilization (Fig. 5). This agt1∆ yeast grew on sucrose efficiently, with the same specific growth rate (µ = 0.24 ± 0.02 h-1) as strain 1403-7A (µ = 0.24 ± 0.01 h-1) or other SUC+ strains [44], although sugar consumption was slightly slower than with the wild-type strain (0.26 and 0.51 g sucrose [g cell dry weight]-1 h-1 for strains LCM001 and 1403-7A, respectively). These LCM001 cells did not undergo the diauxic shift (Fig. 5), but continued to grow efficiently over time, reaching higher cell densities than the parental strain 1403-7A. Although these batch cultures are expected to become oxygen limited during exponential cell growth, favoring sugar fermentation, strain LCM001 not only produced less ethanol from sucrose, but also started to consume the ethanol produced during growth even when significant amounts of sugar (~40% of the initial value) still remained in the medium (Fig. 5). These results indicated that strain LCM001 had shifted its metabolism towards a more aerobic pattern of sucrose utilization than the parental strain.

Figure 5. Batch growth of yeast strains on 20 g/L sucrose. The consumption of sucrose and the ethanol and biomass production by strain 1403-7A (solid circles) or strain LCM001 (hollow circles) were determined during growth in rich YP medium. For each strain a representative growth curve is shown.
We thus analyzed the biomass yield of these two strains during growth on three different carbon sources (glucose, maltose and sucrose) and with different amounts (and quality) of nitrogen sources (Figure 6). It is evident that strain LCM001 produces 1.5- to 2-fold more biomass during batch growth on sucrose (compared to the parental strain), while the biomass yields on glucose or maltose were unaffected. Conversely, while the ethanol yields on glucose by both the wild-type and the agt1∆ strain were similar, with Ye/s values between 0.16 ± 0.02 and 0.31 ± 0.03 g ethanol (g glucose)-1 when ammonium sulfate or peptone was used as nitrogen source, respectively, the agt1∆ strain LCM001 produced only 10 to 40% of the ethanol produced by the wild-type strain 1403-7A when sucrose was the carbon source. In order to confirm that this increase in biomass production by the LCM001 strain is due to respiration of the sugar, we added antimycin A to the medium. Under this condition (respiration blocked), the biomass yield on sucrose of strain LCM001 was exactly the same as that of the parental strain 1403-7A (Fig. 6), and both strains fermented sucrose efficiently (Ye/s = 0.50 ± 0.01 g ethanol [g sucrose]-1).

Figure 6. Biomass yields by yeast strains. The relative biomass yields, normalized to the values obtained by the wild-type 1403-7A strain in medium containing 20 g/L glucose, were determined during growth of strain 1403-7A (black symbols) or strain LCM001 (red symbols) in synthetic yeast nitrogen medium containing 5 (triangles), 10 (squares) or 15 (inverted triangles) g/L ammonium sulfate (open symbols) or peptone (closed symbols) as nitrogen source, and 20 g/L of the indicated carbon sources. Results obtained with rich YP medium (circles) in the absence (open symbols) or presence (closed symbols) of antimycin A are also shown. For the wild-type 1403-7A strain the Yx/s values varied between 0.20 ± 0.02 and 0.48 ± 0.03 g biomass (g glucose)-1 when ammonium sulfate or peptone was used as nitrogen source, respectively, or 0.14 g biomass (g sucrose)-1 when antimycin A was added to the medium.
Discussion
Due to the increased interest in biomass-based industrial applications of S. cerevisiae, several approaches have been developed to engineer the metabolism of this microorganism towards a more aerobic or respiratory utilization of sugars. In one approach, where the targets are key regulatory proteins, either overexpression of the transcription factor HAP4 (activating respiratory genes), or deletion of GCR1 and GCR2 (activators of glycolytic genes), HXK2 or REG1 (relieving glucose repression), results in yeast strains showing increased biomass production during glucose fermentation [52-56]. However, modifying these regulatory circuits may also have some undesired side-effects, including significant reductions in the specific growth rate of the cells, and consequently losses in biomass productivity, or even altered patterns of utilization of sugars other than glucose [57-61]. Another logical approach would be to restrict sugar uptake from the medium, which in the case of glucose transport by S. cerevisiae was a huge challenge, as it required not only deleting the whole set of hexose transporters found in this yeast (comprising almost 20 different genes), but also expressing a mutant chimera between the low-affinity (HXT1) and high-affinity (HXT7) glucose transporters as the unique sugar permease at the plasma membrane [62-64]. Although this yeast strain respires and produces higher biomass levels when grown on glucose, it still has some inconveniences, such as slow growth rates, fructose fermentation, and an inability to use other sugars (e.g. galactose, due to GAL2 deletion). Indeed, the modification of the glucose uptake system in E. coli [65,66] also allowed the development of a bacterium with reduced overflow metabolism and increased biomass production.
Our results show that when we engineer the mode of sucrose utilization by the yeast S. cerevisiae, allowing the direct uptake of the sugar by low-affinity and low-capacity transport systems, followed by its intracellular hydrolysis mediated by a maltase (α-glucosidase), the yeast cells grow efficiently on sucrose but produce significantly less ethanol, since the cells divert more of this carbon source towards biomass production. Thus, higher biomass production can be attained with simple batch cultures in 20 g/L sucrose, avoiding some drawbacks of fed-batch cultures through easier operational procedures and reduced equipment and process time for each production lot. Indeed, the more respiratory phenotype of strain LCM001 was clearly demonstrated by the occurrence of only one growth phase during batch cultivation with sucrose as the carbon and energy source, and by the corresponding decrease (to the same levels as the parental strain 1403-7A) in the biomass yield when the respiratory inhibitor antimycin A was used.
Although S. cerevisiae has a strong tendency towards alcoholic fermentation of sugars, several reports have shown that in the case of some α-glucosides (e.g. maltotriose [67,68] and trehalose [69-72]) which are transported by low-affinity and/or low-capacity uptake systems, the sugar may be completely respired by the yeast cell. However, these approaches have little practical application due to the high commercial prices of these sugar substrates. Nevertheless, all these studies, including the results shown in the present report, highlight the importance of the sugar transport step in the aerobic/fermentative dissimilation of sugars by yeast cells [73]. It is also important to emphasize that the engineering strategy used in the present approach does not affect the utilization (and fermentation) of other sugars (e.g. glucose, fructose, maltose) commonly used by yeasts, and thus would not affect the downstream use of such strains in important industrial applications such as bread making or the production of distilled beverages. The high specific growth rates observed during batch growth of the engineered strain on sucrose, even when the sugar is consumed more slowly, are probably a consequence of the superior efficiency of this sugar (compared to glucose) in binding to and stimulating the Gpr1 sugar receptor of S. cerevisiae cells, an important signaling system that controls, among several other physiological aspects, yeast cell growth [10,12]. It would thus be interesting to delete this GPR1 sugar receptor and analyze the consequences of such a deletion on the growth rate and sucrose metabolism of S. cerevisiae cells. Finally, a further improvement in the biomass yield of yeasts grown at excess sucrose concentrations could be obtained by combining the properties of strain LCM001 with one or more of the strategies described above (e.g. HAP4 overexpression and/or hxk2 deletion), or even by using the classical fed-batch mode of yeast cultivation. It remains to be seen whether combining such approaches can further improve the biomass yield of S. cerevisiae at higher sucrose concentrations.
Conclusion
Higher cell densities during batch cultures on sucrose were achieved by using a S. cerevisiae strain engineered in its sucrose transport system. Deletion of the high-affinity sucrose transport activity mediated by the AGT1 permease produced a yeast strain in which sucrose was transported by low-affinity and low-capacity permeases. While up to 1.5- to 2-times more biomass, compared with the parental strain, was obtained by the engineered yeast strain in simple batch cultivations using 20 g/L sucrose, the ability of the strain to efficiently ferment very high sucrose concentrations (> 200 g/L) was unaffected. The yeast growth rate on rich medium containing 20 g/L sucrose was also unaffected, and thus the higher biomass yields were accomplished by preventing overflow metabolism and increasing respiration in the engineered strain, with a concomitant reduction in ethanol production. The simpler batch cultivation mode can be a viable alternative to more complicated traditional sucrose-limited fed-batch cultures. A thorough analysis of the physiological and transcriptional responses of the engineered S. cerevisiae strain to very high sucrose concentrations will help to better understand the regulatory mechanisms involved in sugar fermentation by yeasts, and could serve as a basis for engineering metabolic pathways to improve the process performance of S. cerevisiae in biomass-directed approaches using highly concentrated culture media.
Media and culture conditions
Cells were routinely grown on rich YP medium (10 g/L yeast extract and 20 g/L peptone), or on synthetic medium (2 g/L yeast nitrogen base without amino acids, containing 75 mg/L L-tryptophan and 150 mg/L uracil) supplemented with different quantities of ammonium sulfate or peptone as nitrogen source, and 20 g/L glucose, sucrose or maltose as carbon source. The pH of each medium was adjusted to 5.0 with HCl, and media were either sterilized by filtration (synthetic medium) or autoclaved at 120°C for 20 min (rich YP medium). When required, 20 g/L agar, 3 mg/L antimycin A, or 200 mg/L geneticin (G-418) sulfate were added to the medium. Cells were grown aerobically at 28°C with shaking (160 rpm) in cotton-plugged Erlenmeyer flasks filled to 1/5 of their volume with medium. The inoculum for growth assays was prepared by aseptically transferring a single colony from a plate into 5-10 mL of the selected medium containing 20 g/L glucose and allowing it to grow to stationary phase for 2 to 3 days before being used to inoculate (by a 100× dilution) new media containing the indicated carbon sources. Culture samples were harvested regularly, centrifuged (5,000 g, 1 min), and their supernatants used for the determination of sugars and ethanol. For batch fermentations, yeast cells were pre-grown on YP-20 g/L sucrose until the exponential phase (~1 g of dry yeast/L), centrifuged (3,500 g, 3 min), washed twice with cold water, and inoculated at a high cell density (10 g of dry yeast/L) into synthetic medium (4 g/L yeast nitrogen base without amino acids and 10 g/L ammonium sulfate) or rich YP medium containing the indicated amounts of sucrose. Batch fermentations were incubated as described above for growth assays, and samples were collected regularly, centrifuged, and their supernatants analyzed as described below.
Yeast strains
The S. cerevisiae strains and oligonucleotides used in the present study are described in Table 1. The yeast strains were kept at -80°C in 25% sterile glycerol; from these frozen stocks, yeast cells were streaked onto solid YP-2% glucose plates, incubated for 2 days at 28°C, and stored at 4°C (for a maximum of 1 month) until use. Standard methods for yeast transformation, DNA manipulation and analysis were employed [74]. The AGT1 gene was deleted according to the polymerase chain reaction (PCR)-based gene replacement procedure described previously [43,50]. Briefly, the kanMX cassette from plasmid pFA6a-kanMX6 [75] was amplified with primers AGT1-pFA6-F1 and AGT1-pFA6-R1, and the resulting 1,579-bp PCR product was used to transform competent yeast cells. After a 2-hour cultivation on YP-20 g/L glucose, the transformed cells were plated on the same medium containing G-418 and incubated at 28°C. G-418-resistant isolates were tested for proper genomic integration of the kanMX cassette at the AGT1 locus by Southern analysis (see below) and by analytical colony PCR using 3 primers (V-AGT1-F, V-AGT1-R and V-kanr-R; Table 1). This set of 3 primers amplified a 1,938-bp fragment from an intact AGT1 locus, or yielded an 813-bp fragment if the kanMX cassette was correctly integrated at this locus and had replaced the AGT1 gene.
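The colony-PCR verification logic above reduces to classifying the observed amplicon size against the two expected products. A trivial sketch of that decision rule (the band sizes come from the text; the function name and tolerance are illustrative assumptions):

```python
def classify_agt1_locus(band_bp, tol=50):
    """Classify an AGT1-locus colony-PCR result from the observed band size.
    An intact AGT1 locus yields ~1,938 bp; a correct kanMX replacement
    yields ~813 bp. The tolerance accounts for gel sizing error."""
    if abs(band_bp - 1938) <= tol:
        return "AGT1 (wild-type locus)"
    if abs(band_bp - 813) <= tol:
        return "agt1::kanMX (deletion confirmed)"
    return "unexpected band - check integration"

print(classify_agt1_locus(1930))  # AGT1 (wild-type locus)
print(classify_agt1_locus(820))   # agt1::kanMX (deletion confirmed)
```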
PFGE, chromosome blotting and hybridization
Yeast chromosomes were prepared as previously described [76] from 1 mL of yeast cells pre-grown in YP-2% glucose medium and collected at the stationary phase of growth. Cells were washed with 10 mM Tris-HCl, pH 7.5, containing 50 mM EDTA, and resuspended in 0.4 mL of 4 mM Tris-HCl, pH 7.5, containing 95 mM EDTA, 130 mg/L of Zymolyase 20T and 7 g/L of molten (42°C) low-melting-point agarose. After solidification in a mold (Pharmacia Biotech), the agarose blocks were immersed in 10 mM Tris-HCl, pH 7.5, containing 0.5 M EDTA, and incubated at 37°C for 8 hours. Following a subsequent incubation in 10 mM Tris-HCl, pH 9.5, containing 0.5 M EDTA, 1% N-lauroylsarcosine, and 2 g/L proteinase K at 50°C overnight, the blocks were washed in 10 mM Tris-HCl, pH 7.5, containing 50 mM EDTA, and stored at 4°C in the same buffer. Each low-melting-point agarose block was transferred to a 10 g/L agarose gel in 50 mM Tris-HCl, pH 8.3, containing 50 mM boric acid and 1 mM EDTA. Pulsed-field gel electrophoresis (PFGE) was performed at 10°C using a Gene Navigator pulsed-field system (Pharmacia Biotech) for a total of 27 hours at 200 V. The pulse time was 70 seconds for the first 15 hours and was then stepped to 120 seconds for the remaining 12 hours. Following electrophoresis, the gel was stained with ethidium bromide and photographed. The chromosomes separated by PFGE were transferred to a nylon filter (Biodyne A, Gibco BRL) by capillary blotting [74], and labeling of the DNA probes (see below), as well as pre-hybridization, hybridization, stringency washes, and chemiluminescent signal generation and detection, was performed using an AlkPhos kit (GE Healthcare/Amersham Biosciences) as recommended by the manufacturer. After hybridization, an autoradiography film (Hiperfilm™ ECL, Kodak) was exposed to the membrane for 2 to 3 h before being developed. Images were obtained by scanning with an ImageScanner™ (Amersham Biosciences) and annotated with Microsoft PowerPoint. Probes corresponding to nucleotides +1 through +1848 of the AGT1 ORF, -73 through +1845 of the MAL31 gene, or +77 through +1333 of the SUC2 locus were generated by PCR using primers AGT1-F and AGT1-R, MAL31-F and MAL31-R, and SUC2-F and SUC2-R (Table 1), respectively.
Analytical methods
Sucrose was quantified using 50 U/mL of yeast β-D-fructosidase (Sigma) in 50 mM citrate-phosphate buffer, pH 4.5, followed by glucose determination. Glucose was measured by the glucose oxidase and peroxidase method using a commercial kit (BioDiagnostica-Laborclin). Maltose was determined spectrophotometrically at 540 nm with methylamine in 0.25 M NaOH as described previously [50]. Ethanol was determined with alcohol oxidase and peroxidase as described previously [44,50]. Cellular growth was followed by turbidity measurements at 570 nm after appropriate dilution, and yeast cell dry weight was determined as described elsewhere [77]. Briefly, 1 to 5 mL of fermentation broth was filtered through pre-weighed filters (0.45 µm mixed nitrocellulose and cellulose acetate filters), washed twice with 5 mL of distilled water, placed in a small (5 cm diameter) covered Petri dish, and dried for 3 to 5 min in a microwave oven at maximum power (900 W) until constant weight. The sugar consumption rates were calculated using samples harvested during the logarithmic growth phase and/or during intervals in which maximal rates were attained; mean dry-weight values over the specified time intervals were used in the rate calculations. The specific growth rate (µ, h-1) was determined as the slope of a straight line between ln OD570 nm and time (h) during the initial (~12 h) exponential phase of growth. Biomass and ethanol yield coefficients (Yx/s and Ye/s, respectively) were obtained at the end of cell growth or ethanol production, taking into account the amount of sugar utilized. The kinetics of active H+-sucrose or H+-maltose symport were determined as previously described [40,41] using a PHM84 research pH meter attached to a TT1 Servograph recorder (Radiometer, Copenhagen). Initial rates of sugar-induced proton uptake were calculated from the slope of the initial part (< 10 s) of the recorded curve, subtracting the basal rate of proton uptake observed before the addition of 0.1-100 mM of the sugar. All determinations were performed at least in duplicate, and assays were monitored so that no more than 5% of the substrate was depleted. All activities are expressed as nmol of substrate transported per mg dry cell weight per min.
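The rate and yield definitions above translate directly into a few lines of analysis code. A minimal sketch, assuming hypothetical time-course and endpoint values (every number below is an illustrative placeholder, not experimental data):

```python
import numpy as np

# Hypothetical time course during exponential growth
t_h = np.array([0, 2, 4, 6, 8, 10, 12])                    # time (h)
od570 = np.array([0.10, 0.16, 0.26, 0.42, 0.67, 1.07, 1.72])

# Specific growth rate: slope of ln(OD570) vs. time
mu = np.polyfit(t_h, np.log(od570), 1)[0]
print(f"mu = {mu:.2f} 1/h")  # ~0.24 1/h with these numbers

# Yield coefficients at the end of growth (g/L values, hypothetical)
sugar_consumed = 20.0 - 0.5   # initial minus residual sucrose
biomass_formed = 4.5 - 0.9    # final minus initial dry weight
ethanol_formed = 6.8 - 0.0
Yxs = biomass_formed / sugar_consumed
Yes = ethanol_formed / sugar_consumed
print(f"Yx/s = {Yxs:.2f} g biomass/g sucrose, Ye/s = {Yes:.2f} g ethanol/g sucrose")
```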
Lipid Disturbances in Breast Cancer Patients during Chemotherapy
Breast cancer is the most common cancer in women. Cardiovascular diseases are common complications after chemotherapy, partly due to the effect of the drugs on lipid levels. This study aimed to explore the changes in lipid profiles in patients with breast cancer undergoing chemotherapy. Methods: In this prospective study, 50 patients with breast cancer participated. Biochemical lipid tests for total cholesterol (TC), triglycerides (TGs), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) were performed at three time points: before initiation (pre-chemotherapy), at the start (first follow-up), and at the completion (second follow-up) of the first cycle of chemotherapy. Statistical significance was set at p < 0.05. Analyses were conducted using SPSS Statistical Software (version 22.0). Results: Mean TC values increased significantly at the second follow-up. TGs values decreased significantly from the first to the second follow-up. HDL-C was significantly lower at the first follow-up compared with pre-chemotherapy and returned to pre-chemotherapy levels at the second follow-up. LDL-C values were significantly higher at the second follow-up compared with the pre-chemotherapy measurement. Significant positive correlations of BMI with pre-chemotherapy LDL-C, first follow-up TC, first follow-up LDL-C, second follow-up TC, and second follow-up LDL-C were found. Conclusions: There is a statistically significant increase in the levels of TC and LDL-C in breast cancer patients during chemotherapy. This study was not registered.
Introduction
Breast cancer is the second leading contributor to cancer-related mortality after lung cancer (23%), accounting for 15% of cancer deaths [1]. An estimated 2.3 million new cases are expected to be diagnosed each year. Lung cancer accounts for 11.8% of all new cases, colorectal cancer for 12.9%, prostate cancer for 11.7%, and gastric cancer for 5.6%. As with other diseases, breast cancer is caused by a complex interaction between genetic and environmental factors [2]. Until now, a few rare mutations in genes such as Breast Cancer 1 and Breast Cancer 2, which significantly increase the risk of the disease, have been discovered, as well as many other much more common variants, each of which individually contributes a small increase in the risk of developing the disease [3].
Several studies have shown that malignancies, and especially their progression, are related to disturbances in the levels of lipids and lipoproteins [4,5]. Metabolic syndrome and its accompanying lipid disorders are epidemiologically linked to many neoplasms (hepatocellular carcinoma, colorectal, endometrial, prostate), either due to their effect on hormonal changes or because they share common determinants (diet or inflammation) [6]. The lipid profile and its disturbances in patients with early-stage cancer are related not only to the disease but also to predisposing risk factors that affect lipids, such as polycystic ovary disease (for endometrial cancer) and obesity (for breast cancer) [7]. Researchers emphasize that low cholesterol levels are an important indicator of cancerous conditions but not necessarily a causative factor; this finding usually concerns rapidly evolving malignancies, raising the suspicion of pre-existing preclinical malignancy [8]. In prostate malignancies, the results are inconsistent, since studies have found weak or no correlation between cancer and high blood cholesterol levels [9], while patients with lower high-density lipoprotein cholesterol (HDL-C) levels had an increased risk of developing lung cancer [10]. Regarding the association of breast cancer with lipid levels, the data are controversial [11]. A recent study [12] found a positive correlation of breast cancer rates with elevated lipid levels, while an earlier study [13] showed lower cholesterol levels in breast cancer patients compared with a group of healthy individuals. In a study [14] conducted in 1991-1996 involving 17,035 women, the risk of breast cancer was inversely related to the concentration of apo-B, the primary apolipoprotein of very low-density lipoprotein (VLDL) and low-density lipoprotein (LDL).
Chemotherapy plays a vital role in the management of breast cancer patients, serving as an indispensable treatment for both disease control and overall survival. Nevertheless, as new therapeutic approaches are introduced and life expectancy continues to increase, it is essential to acknowledge the long-term and delayed consequences of cancer treatment [15]. In particular, chemotherapy has been linked to enduring adverse effects, notably cardiovascular ailments. Cardiovascular diseases, including heart failure, myocardial ischemia, and hypertension, represent prevalent complications following chemotherapy administration [16]. Li et al. [17] analyzed 1054 breast cancer patients and 2483 healthy women stratified by age. The lipid profile status of the patients was characterized at diagnosis and during chemotherapy. The lipid profiles deteriorated after chemotherapy, with increases in total cholesterol (TC), triglycerides (TGs), LDL-C, and apo-B concentrations, while levels of HDL-C decreased compared with their pre-chemotherapy status [17].
The cardiotoxicity of antineoplastic drugs depends on a variety of variables, either intrinsic to the drug or related to the individual health of the patient. Regarding the drug itself, dosage, route of administration, cumulative dose, and number of chemotherapy sessions all play a role. Furthermore, the combination of the drug with other chemotherapy drugs may or may not contribute to the development of cardiac dysfunction [18]. These complications arise not only from the direct cardiotoxicity of chemotherapy drugs but also from their impact on serum lipid levels [19]. Several chemotherapy drugs can cause significant dyslipidemia in breast cancer patients after chemotherapy. Anthracyclines, such as doxorubicin, decrease ABCA1 gene and apoA1 expression in HepG2 cells, and ABCA1 expression in hepatocytes contributes significantly to HDL-C levels [13]. In addition, anthracyclines have been linked to a dose-related risk of cardiomyopathy and heart failure. In particular, when no risk factors are present, the tolerability of doxorubicin up to a dose of 300 mg/m2 is not significantly affected and the rate of heart failure is less than 2%. On the other hand, retrospective studies have indicated that an estimated 3-5% of patients with no other risk factors develop heart failure at a dose of 400 mg/m2, and this percentage can increase to 48% for doses above 550 mg/m2 [18]. Taxanes are chemotherapeutic drugs that act as antimitotic agents, stabilizing microtubules in the mitotic spindle to inhibit cell cycle progression. However, substantial toxicities limit the efficacy of taxane-based treatment regimens. It is reported that 3 to 20% of patients experience cardiotoxic events (QT interval prolongation, bradycardia, and atrial fibrillation) following the administration of taxanes [20]. Also, taxanes, specifically paclitaxel treatment, significantly reduce HDL-C levels and increase hydroperoxide levels compared with those in breast cancer patients who do not receive chemotherapy [21].
Patient-related risk factors for drug cardiotoxicity include age (younger than 4 years and advanced age), ethnicity (black race), obesity, smoking, pre-existing cardiovascular disease, use of other cardiotoxic drugs, arterial hypertension, diabetes mellitus, and metabolic disorders [22].
Chemotherapy itself can also cause endothelial dysfunction, increasing insulin resistance. This leads to changes in cytokines, worsening the patients' lipid profile [23]. Triglycerides (TGs) are a sensitive measure for exploring the impact of adjuvant chemotherapy in patients with breast cancer. High TGs levels are linked to an increased risk of cardiovascular complications in patients with breast cancer, making them significant predictors of coronary heart disease [24]. Additionally, it has been observed that patients with breast cancer and high levels of HDL-C are more likely to develop coronary heart disease [25].
Another element that should be emphasized is the lipid status of patients with metastases. Breast cancer patients with metastases were found to have lower levels of TC and LDL-C than non-metastatic breast cancer patients. Hypocholesterolemia in breast cancer patients has been attributed to disease progression, since the neoplastic cells consume a significantly greater amount of cholesterol, resulting in lower LDL-C levels [26,27]. Indeed, the correlation between the risk of cancer recurrence and body mass index (BMI) is remarkable. One study [28] investigated the association between BMI and the risk of distant metastasis and death due to breast cancer in early-stage breast cancer patients; it included patients with estrogen receptor-positive, estrogen receptor-negative, and unknown tumors. The results showed that patients with a BMI ≥ 30 kg/m2 had a 42-46% increased risk of developing distant metastasis after 10 years compared with patients with a BMI < 25 kg/m2. Furthermore, patients with a BMI ≥ 30 kg/m2 had a 38% increased risk of death due to breast cancer after 30 years. A separate investigation [29] in patients with triple-negative breast cancer (lacking estrogen and progesterone receptors and human epidermal growth factor receptor 2) did not provide conclusive evidence of a correlation between obesity and mortality. In contrast, women with positive estrogen receptors exhibited a threefold higher risk of death.
Understanding the potential adverse consequences of chemotherapy on serum lipid levels is of utmost importance in order to enhance the well-being of individuals diagnosed with breast cancer in the forthcoming years, including improving their chances of survival and prognosis and minimizing the occurrence of complications. Based on the above, the aim of this study was to explore the status of and changes in serum lipids among patients with breast cancer during chemotherapy.
Materials and Methods
This is a prospective study carried out with the participation of women hospitalized in the oncology clinic of a tertiary hospital in Athens (convenience sample). Women with breast cancer of any stage and normal baseline blood lipid values participated in this study. Inclusion criteria were women aged >18 years with a diagnosis of primary breast cancer of any stage who underwent chemotherapy after total or partial mastectomy. The chemotherapy regimens used in these women were either doxorubicin-cyclophosphamide for six cycles every three weeks or docetaxel for eight cycles every three weeks. Exclusion criteria were women diagnosed with metastases, a history of other cancers, and the presence of any cognitive or psychiatric difficulties. Women who had undergone or were currently undergoing any therapeutic intervention other than chemotherapy (radiotherapy, hormone therapy, etc.) were also excluded. In total, 70 evaluable patients were invited to participate in the study. Finally, 50 patients agreed to take part (response rate 71%).
When the patients entered the hospital, their age and body mass index (BMI) were recorded, because increasing age and body weight are associated with insulin resistance, which in turn affects lipid levels. The study was conducted in three phases: the first phase took place before the initiation of the first cycle of chemotherapy.
The second phase took place at the start of the aforementioned treatment, and the third phase at the completion of the first cycle of chemotherapy. In all three phases, a blood sample was obtained for biochemical lipid testing. The full chemotherapy regimen consisted of six chemotherapy cycles. For the biochemical lipid tests, participants were instructed to fast and refrain from smoking for 12 h before each blood draw. The study was conducted from March 2018 until August 2020.
To conduct the study, licenses were obtained by the Scientific Committee of the Hospital (approval number 31/23.02.2018).Throughout the study, the principles of anonymity, voluntary patient participation, and data confidentiality were respected.Each participant was provided with information regarding the purposes of the study, the voluntary nature of their participation in it, as well as their right to discontinue participation and withdraw at any time.The procedures respected the ethical principles of the Declaration of Helsinki.
Quantitative variables were described using means and standard deviations. Qualitative variables were described as absolute (N) and relative (%) frequencies. Repeated-measures ANOVA was used to examine variations in the measurements over time. To control for type I error due to multiple comparisons, a Bonferroni-corrected significance level (α = 0.05/K, where K is the number of comparisons) was applied. The percentages of individuals who had abnormal results in any of the three measurements were compared using the McNemar test. Pearson's correlation coefficient (r) was used to correlate BMI with lipid levels. Statistical significance was set at 0.05, and significance values are two-sided. Statistical analysis of the data was performed using the statistical program SPSS 22.0.
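Although the authors used SPSS, the same analysis pipeline can be reproduced with open-source tools. A minimal sketch, assuming a long-format table with one lipid value per patient per time point (all column names, counts, and values below are illustrative placeholders, not study data):

```python
import numpy as np
import pandas as pd
from itertools import combinations
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.contingency_tables import mcnemar

# Long format: one row per patient per time point
df = pd.DataFrame({
    "patient": np.repeat(np.arange(50), 3),
    "time": np.tile(["pre", "follow1", "follow2"], 50),
    "ldl": np.random.default_rng(0).normal(135, 40, 150),  # placeholder values
})

# Repeated-measures ANOVA across the three time points
print(AnovaRM(df, depvar="ldl", subject="patient", within=["time"]).fit())

# Bonferroni-corrected pairwise comparisons (alpha = 0.05 / K)
pairs = list(combinations(["pre", "follow1", "follow2"], 2))
alpha = 0.05 / len(pairs)
wide = df.pivot(index="patient", columns="time", values="ldl")
for a, b in pairs:
    t, p = stats.ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: p = {p:.3f} (significant if < {alpha:.3f})")

# McNemar test for paired normal/abnormal classifications (2x2 table)
table = [[30, 8], [4, 8]]  # hypothetical counts
print(mcnemar(table, exact=True))

# Pearson correlation of BMI with a lipid value
bmi = np.random.default_rng(1).normal(29.6, 5.4, 50)  # placeholder values
r, p = stats.pearsonr(bmi, wide["follow2"])
print(f"BMI vs second follow-up LDL-C: r = {r:.2f}, p = {p:.3f}")
```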
Results
The mean age of the sample was 60.3 years (±11.4), and the mean BMI of the women was 29.6 kg/m2 (±5.4); 46.0% (n = 23) of the participants were overweight, 38.0% (n = 19) were obese, and 16% (n = 8) were of normal weight. The clinical characteristics (tumor type, tumor stage, medication, and cycles of chemotherapy) of the participants are presented in Table 1.
A statistically significant decrease in HDL-C levels was observed at the first follow-up (mean: 52.7 mg/dL, SD ± 14.4) compared with the pre-chemotherapy measurement (mean: 57.1 mg/dL, SD ± 15.5) (p = 0.05), while at the second follow-up HDL-C increased significantly (mean: 56.9 mg/dL, SD ± 13) compared with the first follow-up (p = 0.026), returning to approximately baseline levels.
Pre-chemotherapy LDL-C levels (mean: 130.1 mg/dL, SD ± 34) were similar to levels at the first follow-up (mean: 134.8 mg/dL, SD ± 41.1). However, the values at the second follow-up (mean: 146.1 mg/dL, SD ± 57.1) were significantly higher than the baseline values (p = 0.035). The changes in the participants' lipid values during the three stages are given in Table 2.
The percentages of participants with normal and abnormal lipid values during the three stages of the study are given in Table 3. The percentages of participants with abnormal lipid values were similar throughout the follow-up period.
Statistically significant positive correlations were found between BMI and pre-chemotherapy LDL-C, first follow-up TC, first follow-up LDL-C, second follow-up TC, and second follow-up LDL-C. The correlations are shown in Table 4.
Discussion
The purpose of this prospective study was to examine serum lipids during the first cycle of chemotherapy in breast cancer patients. This question is significant because cancer, and especially its progression, is related to disturbances in lipid levels [30].
This study showed that during chemotherapy in women with breast cancer, a significant increase in the levels of TC and LDL-C and a transient disturbance of TGs and HDL-C occur, with HDL-C returning to its initial levels by the end of the first cycle. Several studies [17,31] highlight the importance of monitoring lipid levels during chemotherapy in women with breast cancer.
In the present study, TC values increased significantly from the pre-chemotherapy measurement to the second follow-up. Pre-chemotherapy TGs values were similar to the values at both follow-ups; however, TGs values at the second follow-up were significantly lower than those at the first follow-up.
In the current study, HDL-C decreased significantly at the first follow-up compared with the pre-chemotherapy measurement, while at the second follow-up it increased significantly compared with the first follow-up, reaching levels similar to baseline. Pre-chemotherapy LDL-C levels and first follow-up levels were similar; however, the values at the second follow-up were significantly higher than the baseline values. These results contrast with those of a retrospective study in China among 141 invasive breast cancer patients, in which no significant differences were found in TC, TGs, and LDL-C before and after chemotherapy [32]. The different results are probably because the lipid profile in that study was measured after six cycles of chemotherapy and not immediately after the first cycle. Another retrospective study conducted in China among breast cancer patients undergoing various chemotherapy regimens found significantly increased levels of TGs, TC, and LDL-C [33]. Li et al. (2018) also revealed increased post-chemotherapy TGs, TC, and LDL-C levels but lower HDL-C levels [17]. In the study of Xu et al. (2020) [31], cholesterol, LDL-C, and TGs levels were significantly elevated one month after the completion of chemotherapy, while 12 months after completion only TGs and LDL-C levels remained abnormal. The rate of dyslipidemia in breast cancer patients increased from 41.5% at the start of chemotherapy to 54.1% at 12 months after the end of chemotherapy [31]. In contrast, a meta-analysis [34] found no significant difference between total cholesterol and LDL-C levels before and after chemotherapy in breast cancer patients. Qu et al. (2020) [35] also reported lower levels of TGs, TC, HDL-C, and LDL-C among female patients with breast cancer compared with healthy women; after neoadjuvant chemotherapy, TGs and LDL-C levels increased significantly, while HDL-C levels decreased significantly [35]. Changes in the lipid profile induced by chemotherapy may have implications for the patient's response to chemotherapy.
Regarding the correlations between BMI and lipids, this study revealed statistically significant positive correlations of BMI with pre-chemotherapy LDL-C, first follow-up TC, first follow-up LDL-C, second follow-up TC, and second follow-up LDL-C. This positive correlation was also highlighted in the study of Okekunle et al. (2022) [36] among premenopausal women. He et al. [33] investigated the risk factors associated with elevated levels of TGs, TC, and LDL-C, as well as low levels of HDL-C, in patients with normal lipid profiles prior to chemotherapy. After univariate and multivariate analyses, they found that BMI > 24 was an independent predictor of high TC, TGs, and LDL-C and low HDL-C levels after chemotherapy.
The correlation between these variables is a less explored area, and the mechanism behind it is still unclear. It is hypothesized that obesity may alter drug metabolism and pharmacokinetics, leading to reduced efficacy and increased toxicity of chemotherapy [37,38]. The relationship between BMI, lipids, and chemotherapy outcomes varies depending on the specific chemotherapy regimen used, the stage of breast cancer, and individual factors related to the patient's clinical characteristics [21].
To facilitate the continuous evaluation of metabolic profiles during the diagnosis and treatment of breast cancer, interdisciplinary cooperation should be enhanced. Nurses are an integral part of the multidisciplinary team and play a prominent role in cancer screening and diagnosis and in monitoring for potential chemotherapy-related disorders such as lipid disorders [39]. Early detection of lipid disorders and prompt referrals by nurses to other team members lead to improved therapeutic outcomes in terms of reducing cardiovascular risk and the possibility of metastases. Health education and nutritional counseling grounded in evidence-based nursing, together with placing patients within a therapeutic alliance, help patients understand the necessity of cooperation in treatment. Therefore, high-quality nursing care favors the improvement of the clinical outcomes of chemotherapy [40].
One of the limitations of this study is the lack of stratification of patients into premenopausal and postmenopausal groups, as sex hormone levels affect lipid metabolism. In addition, the analyses were not stratified by chemotherapy regimen, although the specific chemotherapeutic agent affects patients' serum lipid levels. Another limitation is that insulin resistance was not studied. Insulin resistance is a risk factor for developing breast cancer, possibly because estrogen or insulin-like growth factor I (IGF-I) levels are elevated; it has been associated with obesity, dyslipidemia, arterial hypertension, and impaired glucose tolerance. Additionally, the study sample does not allow generalization of the results. Therefore, a multicenter, randomized, controlled trial should be performed to confirm the results and generate additional findings.
Conclusions
Women with breast cancer had a statistically significant increase in TC and LDL-C levels during the first cycle of chemotherapy, and a lipid disorder was also revealed. BMI was positively correlated with TC and LDL-C levels. Hence, it is imperative to carry out lipid surveillance and engage in the management of dyslipidemia as a preventive measure at the primary diagnosis stage and throughout the course of chemotherapy in individuals affected by breast cancer. Additionally, it is crucial to conduct further investigations to determine the temporal nature of the alterations in lipid profiles and ascertain whether they persist, thus establishing their clinical significance for patients. Finally, given that the current investigation examined only a restricted number of lipid alterations in reaction to chemotherapy, it would be advantageous to explore a broader spectrum of changes in the lipid composition of women diagnosed with breast cancer who are receiving chemotherapy. It would also be of value to assess the impact of distinct categories of chemotherapy medications on lipid concentrations.
Table 1. Clinical characteristics of patients regarding the type of tumor, stage of tumor, medication, and cycles of chemotherapy.

Table 2. The changes in lipid values during the three stages of the study (N = 50).

Table 3. The percentages of participants with normal and abnormal lipid values during the three stages of the study.

Table 4. Correlations between BMI and lipids.
Autism Spectrum Disorder: Clinical Diagnosis and the ADOS Test
Introduction: Autism spectrum disorder (ASD) is a highly prevalent neurobiological disorder whose clinical diagnosis is a constant challenge. Objectives: To describe the clinical profile of a cohort of children with ASD, from referral to the specialist through completion of a diagnostic test. Patients and Method: Descriptive study covering the period from the first symptoms noticed by the mother to diagnostic confirmation, in a series of 50 children clinically diagnosed with ASD between 2012 and 2016. Children aged 3 to 10 years at the time of the ADOS-G test, with oral language of at least one word, were included. The children underwent neuropsychological evaluation (functional and intellectual assessment, and ADOS test). We compared median ages at neurological diagnosis according to autistic symptom burden and cognitive level. Results: The ADOS test confirmed ASD in 44 children (88%); 93.2% were boys. The mean ages at clinical diagnosis and at the ADOS test were 48.2 ± 18.3 and 62.6 ± 23.3 months, respectively. In 72% of cases, the neurological consultation was motivated by parents or educators because of symptoms such as impaired social interaction and language delay. Mild, moderate, and severe autistic symptomatology was present in 34.1%, 47.7%, and 18.2% of cases, respectively. Cognitive deficit was detected in 5 of the 27 children who underwent cognitive evaluation. The median age at diagnosis was significantly lower in children with severe versus mild-moderate autistic symptomatology (p = 0.024). Conclusion: The autistic symptom burden determines how early families consult, so it is necessary to educate the general population, educators, and health personnel about these symptoms.
Introduction
In recent years, the diagnostic criteria for autism spectrum disorder (ASD) have changed, addressing the challenge associated with the high clinical variability of this condition. The DSM-V (Diagnostic and Statistical Manual of Mental Disorders) defines ASD as a persistent and heterogeneous neurodevelopmental disorder and categorizes the symptoms into two groups: a) deficiencies in communication and social interaction, and b) patterns of restrictive and repetitive behavior. Included in this spectrum are Asperger syndrome, Childhood disintegrative disorder, and Pervasive developmental disorder not otherwise specified 1 .
In 2012, as reported by 11 ASD monitoring sites in the USA, the prevalence was 14.6 per 1,000 8-year-old children (1 in 68), with a male-to-female ratio of 4.5:1 2 . The prevalence has been increasing in recent decades, making ASD the second most frequent developmental disability, after intellectual disability 3,4 .
One of the most widely used tools for the ASD diagnosis is the ADOS (Autism Diagnostic Observation Schedule) test, initiated in the 1980s 5,6 . The ADOS-G (generic) is a standardized, semi-structured assessment of social interaction, communication, imaginative play, and use of materials for children, youth, and adults who may have an ASD. It has four modules, each with diagnostic algorithms that allow the examiner to observe the behavior at different levels of development and language 7 . This instrument is sensitive and specific and the latest version available in Spanish is the ADOS-2, which includes improvements and novelties such as the design of a module for young children (12-30 months) called Module T, as well as the revision of the algorithms of modules 1-3 8 .
Given the continuous evolution of the specific criteria for ASD and their relationship with prevalence and other clinical characteristics (gender bias, age at diagnosis, etc.), a detailed description of the heterogeneity of the autistic phenotype in our population is relevant.
The objective of our study is to describe the clinical profile of a cohort of children with ASD, from referral to a specialist through the ADOS-G test supporting the diagnosis. The process is described from the first symptoms observed by the mother to diagnostic confirmation.
Patients and Method
This study was carried out in the Pediatric Neurology Unit of the PUC (Pontificia Universidad Católica de Chile) between 2012 and 2016. It is a descriptive study of a series of consecutive cases that were clinically diagnosed (neurological examination and ADOS-G test).
The clinical diagnosis of ASD was made by a pediatric neurologist according to DSM-IV criteria, and the children were later referred for neuropsychological evaluation (functional and intellectual assessment) and the ADOS-G test.
The medical history and data such as sex, perinatal and family history, first symptoms of ASD observed by the mother, age at the time of consultation with a pediatric neurologist, reason for consultation, who referred the child to the specialist, comorbidities, schooling, and requested studies were obtained from the clinical records. All children with suspected autism were referred to multidisciplinary rehabilitation or social skills workshops, and studies were requested during their subsequent check-ups. A school report was issued for integration or curricular adjustments. The study was approved by the Institutional Ethics Committee.
Inclusion criteria: first consultation for suspected ASD and a compatible ADOS-G, age between 3 and 10 years at the time of the ADOS-G test, and oral language with at least one meaningful word. Exclusion criteria: vocalizations in which no words or approximations to words are recognized, carrier of a known genetic or chromosomal disorder, or severe malformation or structural damage of the central nervous system.
The neuropsychological assessment consisted of measuring adaptive and cognitive behavior with the Vineland Adaptive Behavior Scales, Survey Form (standard scores 1-100); the Kaufman Brief Intelligence Test (K-BIT), which provides verbal, non-verbal, and total IQ scores (M = 100, SD = 15); and the Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV), applicable to children and adolescents aged 6 years 0 months to 16 years 11 months, which yields the same types of IQ scores 10 .
The ADOS-G test evaluated whether or not patients had autism spectrum symptomatology, which was categorized as a mild, moderate, or severe symptomatic burden.
The neuropsychological evaluation was performed by a neuropsychologist certified in the administration of the ADOS test.
Statistical analysis
Categorical demographic variables were summarized as numbers and percentages; numeric variables were summarized as means with standard deviations (SD) or medians with interquartile ranges (IQR), depending on whether or not they were normally distributed. Median ages (IQR) at symptom onset (as reported by the mother), at neurological consultation, and at the ADOS-G test were compared across severity of autistic symptomatology, dichotomized as mild-moderate vs. severe, using the Mann-Whitney U test (significance at p < 0.05).
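For reference, the group comparison described above corresponds to a standard two-sided Mann-Whitney U test. A minimal sketch with SciPy; the age vectors below are placeholders, not study data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical ages (months) at first neurological consultation
mild_moderate = np.array([52, 48, 60, 55, 47, 63, 58, 50, 45, 66])
severe = np.array([30, 38, 27, 41, 33, 36, 29, 40])

u, p = mannwhitneyu(mild_moderate, severe, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")  # significant if p < 0.05

# Medians and interquartile ranges, the summary statistics reported in the paper
for name, x in [("mild-moderate", mild_moderate), ("severe", severe)]:
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    print(f"{name}: median {med:.0f} (IQR {q1:.0f}-{q3:.0f}) months")
```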
Results
Out of the 50 assessed children, 44 met inclusion criteria because they had a concordant ADOS-G result. These 44 cases are described according to the protocol (Figure 1). 93.2% (41/44) were boys, a boy:girl ratio of 13.6:1. The average ages at neurological evaluation and at application of the ADOS-G test were 48.2 (± 18.3) and 62.6 (± 23.3) months, respectively. In 4/44 children there was a family history of ASD, and 13/44 had perinatal morbidity (Table 1).
The main requests for a first neurological evaluation among the 44 children were spontaneous consultation by the parents in 18/44 and requests from the educational institution in 16/44. The remaining referrals came from another physician (7/44 children) and from other health professionals (psychologist, and speech and hearing therapist) in 3/44.
In the anamnesis of the 44 cases, mothers reported having perceived some neurodevelopmental alteration (first symptom) at an average age of 26.23 months (SD 14.0). Among these first symptoms, language alterations and alterations in social interaction accounted for 26/40 and 17/40 cases, respectively. In 6/40 cases, a regression with loss of language and social interaction was described. In the remaining 4 cases, mothers reported that the children had 'always' been different, without defining a main symptom.
Among the requested and performed tests, hearing function was evaluated in 17/44 children (normal auditory evoked potentials, audiometry, and impedance audiometry in 16/17; only 1 case had mild conductive hearing loss). In 15/44 cases (34.1%), a standard EEG was requested; 3 of the 15 were abnormal, 2 with epileptiform activity and 1 with focal slowing. Other requested examinations were neuroimaging, genetic studies (karyotype, array CGH, molecular testing for fragile X), and metabolic studies, but most were not performed.
The neuropsychological evaluation was performed in 37/44 children. Adaptive behavior assessment (Vineland) was carried out in 10 children, and cognitive assessment in 27 (19 with K-BIT and 8 with WISC-IV). The Vineland evaluation had a median of 52 points (range 20-97). Among the 27 children who underwent cognitive assessment, the median verbal scale score was 85 (range 58-126), the median executive scale score 82 (52-119), and the median total score 83 (44-126). An IQ of less than 70 (cognitive deficit) was detected in 5 children.
The ADOS-G test defined a mild to moderate autistic symptom burden in 36 children (81.8%) vs. 8 (18.2%) with severe autistic symptomatology, and there were significant differences between the median ages of the two groups at the first neurological consultation (p = 0.024) (Table 2). In 27/44 cases, the children attended kindergarten or school with integration, and 10 of them also attended speech-language programs. In 13/44 cases, they attended only a speech-language program or a special education school (10 and 3 children, respectively), and 4/44 were not in school.
Regarding drug therapy, 12/44 received melatonin, risperidone, methylphenidate, sertraline, or antiepileptic drugs, alone or in combination. Three children attended alternative equine therapy.
The 6 children whose ADOS results did not meet ASD criteria were not analyzed in this sample but were followed up for at least one year; their diagnoses were selective mutism and social anxiety disorder (2/6), intellectual disability (2/6), and cognitive disharmony with social anxiety disorder (2/6).
Discussion
The family and social impact of an ASD diagnosis, together with the heterogeneity of its symptoms and the lack of biological markers, requires a multidisciplinary evaluation that provides high diagnostic certainty. A diagnostic error implies avoidable emotional and social costs. According to some authors 5,9 , the misdiagnosis rate with clinical examination alone is 10-12%. In our work, there were 6/50 (12%) children in whom the ADOS test excluded ASD despite some similar behaviors, and whose long-term follow-up was consistent with the alternative diagnoses.
Our cohort excluded children with a known chromosomal disorder, malformation, or severe brain damage and included children with developed oral language (at least one word or gesture with meaning), so we believe that an ASD subgroup with greater cognitive abilities, defined as high-functioning autism (HFA) 10,11 , was selected. Although the admission criterion was age between 36 months and 10 years, we believe that the age at diagnosis of HFA is higher than in the general ASD population reported by Nassar and Daniels 12,13 and higher than the 18-24 months recommended for M-CHAT screening 14 . It is therefore necessary to attend to the symptoms noticed by those in the child's environment (parents and educators), who are highly sensitive to developmental alterations.
The surprising male predominance found in our cohort (13.6:1 vs. 4:1 or 7:1 for typical ASD and HFA, respectively) leads us to hypothesize an underdiagnosis of HFA in girls 15,16 . There is growing evidence of a camouflage effect among girls with ASD, particularly those without intellectual disability, which may affect performance on standard diagnostic measures. One of the hypotheses of Head et al. 17 is that girls with HFA retain the complex social and emotional skills that characterize the female population because they use cognitive skills to respond to social situations 18 . Another theory is the 'female protective effect', since females would have a higher genetic threshold than males. Higher testosterone levels have also been found in girls with ASD than in typically developing girls 19 .
The most frequent cause of consultation was the lack of social interaction (43%), a key element for ASD diagnosis. Language impairment or atypical language development is not a diagnostic criterion for ASD 1 , but it was described in 86.4% of our patients. On the other hand, repetitive behaviors and restricted behavioral patterns were not mentioned as a cause of consultation, even though they emerged in the anamnesis during the neurological consultation and the ADOS test. This symptom domain must therefore be specifically addressed in all consultations for language alterations.
Regarding the diagnostic process times, 1 in 4 mothers described developmental alterations before one year of age, but consultation or referral to a specialist occurred 20 months later. This is consistent with the data of Ozonoff and Martin 20,21 , who indicate that autism can be diagnosed in infants based on the parents' report, and yet the median age at diagnosis is 4 years 22 .
Table 2. Comparison of median ages at symptom detection, clinical neurological diagnosis, and ADOS test, and of functional and intellectual level, vs. autistic symptom burden in children with ASD, 2012-2016, Universidad Católica, Santiago, Chile.
Intellectual disability is described in 42% of the ASD population 16 ; in our cohort, it was present in 5 of the 27 children evaluated with K-BIT or WISC (Table 2). There was no significant correlation between IQ level and median age at the time of neurological consultation, contrary to what occurred with the severity of ASD symptomatology: children with severe autistic symptomatology consulted earlier than children with mild-moderate symptomatology.
In the search for these etiologies, an exhaustive neurological-psychiatric clinical examination is essential, as is judging the appropriate timing of neurophysiological, imaging, genetic, and metabolic studies. In most cases these are very expensive, and neurophysiological studies and brain imaging require sedation or general anesthesia, which further limits their application. In our cases, a large number of studies were requested, most of which were not performed.
Genetic testing is among the studies most frequently recommended in international cohorts of children with autism, and it is specifically needed for genetic counseling in case of inherited etiologies 23 . When regressive autism or epilepsy is suspected, a sleep and wake EEG should be performed to exclude Landau-Kleffner syndrome, encephalopathy with continuous spike-and-wave during sleep, or another type of epilepsy that impairs communication and is susceptible to improvement with antiepileptic drugs 24 . Neuroimaging or metabolic studies should be requested in case of high suspicion of a brain lesion or an inborn error of metabolism.
The most frequent comorbidities were learning difficulties, sleep disorders, and eating disorders (61.4%, 31.8%, and 27.3%, respectively). Accordingly, the most frequently prescribed drugs were methylphenidate and melatonin, with notably little use of risperidone 17 .
Although 40/44 children (90.9%) were attending an educational institution (kindergarten, regular school, speech-language programs, special education school), about 10% (4/44) remained out of school. No child attended behavioral-educational intervention therapies such as ABA (applied behavior analysis), despite the good results described in the literature 25 for programs started early and implemented intensively (more than 20 hours per week). There is a shortage of Chilean centers that provide this type of therapy (author's personal comment).
Among the weaknesses of our work are the admission criteria, which biased the sample toward older children with developed oral language and therefore a better cognitive level. We did not include younger children because the instruments for assessing autistic symptom burden (ADOS-G) and intellectual ability (K-BIT and WISC) are applicable only to children over three years of age with developed oral language. We believe new studies are needed that include larger samples of younger (preschool) children, evaluated with current instruments such as ADOS-2, Module T for children from 12 to 36 months, and psychological tests such as the Leiter-R (which allows intellectual ability to be evaluated in children without oral language).
Conclusions
The autistic symptom burden, given by alterations in interaction and communication and by restricted interests, is what motivates early consultation by parents and educators. Although clinically detectable criteria are defined in DSM-IV and DSM-5, the high phenotypic variability of ASD in symptom burden, cognitive capacity, and language requires teamwork (family, educators, and health care team) for early detection 26 .
Ethical Responsibilities
Protection of human and animal subjects: The authors declare that the procedures followed conformed to the Declaration of Helsinki and the World Medical Association guidelines on human experimentation developed for the medical community.
Data confidentiality:
The authors state that they have followed the protocols of their center and the local regulations on the publication of patient data.
Rights to privacy and informed consent:
The authors have obtained the informed consent of the patients and/or subjects referred to in the article. This document is in the possession of the corresponding author.
Conflicts of Interest
Authors declare no conflict of interest regarding the present study.
Nanomechanical properties of silicon surfaces nanostructured by excimer laser
Abstract
Excimer laser irradiation at ambient temperature has been employed to produce nanostructured silicon surfaces. Nanoindentation was used to investigate the nanomechanical properties of the deformed surfaces as a function of laser parameters, such as the angle of incidence and number of laser pulses at a fixed laser fluence of 5 J cm−2. A single-crystal silicon [311] surface was severely damaged by laser irradiation and became nanocrystalline with an enhanced porosity. The resulting laser-treated surface consisted of nanometer-sized particles. The pore size was controlled by adjusting the angle of incidence and the number of laser pulses, and varied from nanometers to microns. The extent of nanocrystallinity was large for the surfaces irradiated at a small angle of incidence and by a high number of pulses, as confirmed by x-ray diffraction and Raman spectroscopy. The angle of incidence had a stronger effect on the structure and nanomechanical properties than the number of laser pulses.
Introduction
The surface modification of semiconductors, e.g. silicon, has great fundamental and technological importance. It is achieved by various methods, such as chemical etching and different types of high-energy beam irradiation employing high-power laser beams. Silicon is a versatile material, which is used very extensively in the electronic industry for most semiconducting applications. It exhibits a range of attractive properties, such as a high melting/boiling point, thermodynamic and chemical stability, and semiconducting behavior [1]. Laser beam interaction with silicon has been a major focus of interest of the research community because modified silicon surfaces exhibit novel functional properties [2][3][4][5][6][7][8][9][10][11][12][13][14][15]. The Nd:YAG [16][17][18], excimer [19,20] and CO2 lasers are typically used for these purposes. Semiconducting materials with a low thermal conductivity and a high mechanical stability are required for microelectromechanical systems (MEMS) [21,22]. One of the most important advantages of nanocrystalline silicon is its increased stability over amorphous silicon (a-Si), a candidate MEMS material, one reason being its lower hydrogen concentration. Producing silicon surfaces with desired physical properties (such as optical, electrical and thermal behaviors) without sacrificing their mechanical strength is a current technological challenge. Excimer-laser-induced nanostructuring and surface modification still attract attention owing to their potential for material modification and good process control [23].
Studies on the effects of laser processing parameters on the mechanical properties of silicon are very rare in the literature. To date, only porous silicon prepared by a chemical route has been characterized in terms of its mechanical properties [24]. Extensive research on the excimer-laser-induced nanostructuring of surfaces has been carried out by Kumar et al. [25,26]. Such laser-nanostructured silicon surfaces exhibited strong light absorption and photoluminescence due to their porous structure produced through catastrophic damage [27]. It is therefore imperative to perform a systematic study of the nanomechanical properties of excimer-laser-fabricated nanostructured silicon, as reported in this article.
Experimental details
The surfaces of [311]-oriented silicon wafers were irradiated by a KrF excimer laser (Compex Pro 201F, Lambda Physik, Germany) with a wavelength of 248 nm and a peak energy of 700 mJ in a 30 ns pulse. The surfaces were cleaned ultrasonically in acetone, isopropyl alcohol and distilled water, each for 15 min, and dried in an oven at 80 °C for 20 min. The laser repetition rate was maintained at 1 Hz for all experiments. The incident laser fluence was kept constant at 5 J cm−2 and the surfaces were irradiated by 1 to 100 pulses. The angle of incidence of the laser beam on the silicon surface was fixed at 20°, 30°, 45° or 90°.
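As a rough, illustrative sanity check on these parameters (not part of the original methods), the spot area implied by a 5 J cm−2 fluence at the laser's 700 mJ peak pulse energy, and the mean irradiance during the 30 ns pulse, follow directly; uniform beam delivery is an assumption here.

```python
# Illustrative back-of-the-envelope check of the stated laser parameters.
pulse_energy_J = 0.700      # 700 mJ peak pulse energy
fluence_J_cm2 = 5.0         # fixed incident fluence
pulse_duration_s = 30e-9    # 30 ns pulse

spot_area_cm2 = pulse_energy_J / fluence_J_cm2            # ~0.14 cm^2
mean_irradiance_W_cm2 = fluence_J_cm2 / pulse_duration_s  # ~1.7e8 W/cm^2

print(f"Implied spot area: {spot_area_cm2:.2f} cm^2")
print(f"Mean irradiance during the pulse: {mean_irradiance_W_cm2:.2e} W/cm^2")
```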
The catastrophic damage after irradiation was monitored using an optical microscope (Olympus Model MX51). Microstructural changes were observed with an atomic force microscope (AFM, SPA 400 of SII Inc, Japan), operating in the intermittent contact dynamic force microscope (DFM) mode. The extent of crystallinity was determined by Raman spectroscopy (He-Ne laser excitation) and x-ray diffraction (XRD, Co-Kα radiation, λ = 1.7902 Å) measurements carried out at room temperature.
Mechanical properties of virgin silicon and laser-treated silicon surfaces, after various numbers of shots at various incident angles, have been investigated using a Hysitron Triboscope indentation system (Minneapolis, MN, USA). A three-sided pyramidal Berkovich tip of 100 nm diameter was used in this study. A maximum load (Pmax) of up to 8 mN was used. In all cases, the rates of loading and unloading were set such that the loading and unloading times were both 10 s with a pause of 10 s at Pmax. Load (P) versus depth of penetration (h) was recorded and analyzed to extract Young's modulus (E) and hardness (H) using the Oliver and Pharr method [28,29]. Images of the indents were captured using the same indenter in the scanning probe microscopy (SPM) mode. The SPM imaging of the surfaces was conducted before and after each indent to perform indentations in relatively smooth, defect-free regions.
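For readers unfamiliar with the Oliver and Pharr reduction, a minimal sketch is given below; the ideal Berkovich area function A(hc) = 24.5 hc^2, the constants ε = 0.75 and β = 1.034, and the fit of the upper half of the unloading segment are textbook assumptions, not the authors' specific tip calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def unloading(h, alpha, hf, m):
    """Oliver-Pharr power-law unloading curve P = alpha*(h - hf)^m."""
    return alpha * np.maximum(h - hf, 0.0) ** m

def oliver_pharr(h_nm, P_uN, eps=0.75, beta=1.034):
    """Hardness and reduced modulus (GPa) from an unloading segment.

    h_nm, P_uN: depth (nm) and load (uN) ordered from peak load downward.
    """
    hmax, Pmax = h_nm[0], P_uN[0]
    n = len(h_nm) // 2  # fit the upper ~50% of the unloading curve
    (alpha, hf, m), _ = curve_fit(
        unloading, h_nm[:n], P_uN[:n], p0=[1.0, 0.5 * hmax, 1.5],
        bounds=([0.0, 0.0, 1.0], [np.inf, hmax, 3.0]))
    S = alpha * m * (hmax - hf) ** (m - 1)   # contact stiffness dP/dh (uN/nm)
    hc = hmax - eps * Pmax / S               # contact depth (nm)
    A = 24.5 * hc ** 2                       # ideal Berkovich area (nm^2)
    H = Pmax / A * 1e3                       # 1 uN/nm^2 = 1000 GPa
    Er = beta * np.sqrt(np.pi) / 2.0 * S / np.sqrt(A) * 1e3
    return H, Er
```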
Results and discussion
The excimer laser irradiation of single-crystal silicon surfaces with a laser fluence of 5 J cm−2 was an extreme condition that caused severe damage to the surface. When silicon interacted with the laser, it underwent nonequilibrium changes resulting in catastrophic damage. As is evident from the optical images, the untreated silicon surface (figure 1(a)) was significantly roughened and bundled fiber-like microstructures were formed (figure 1(d)). At the incident angle of 90°, and for 1 shot, the size of nanoparticles on the surface was approximately 300 nm (figure 1(b)), which reduced to 40-60 nm after 100 shots (figure 1(d)). At 8 shots, silicon condensed into clusters, as shown in figure 1(c). A larger number of laser pulses resulted in a higher extent of nanostructuring and hence a smaller nanoparticle size. The presence of pores in the modified silicon surfaces irradiated by 8 pulses at the 90° angle of incidence is shown in the upper right inset of figure 1(d). A few pores are indicated by circles for better identification, and the pore size varies from 10 nm to 300 µm. A porosity of almost 50-90% was achieved by varying the incident angle of the laser beam. Various types of pores, i.e., micropores, mesopores and macropores, were observed in our experiments.
To investigate the effect of the incident angle on surface nanostructuring, the angle was varied from 20 to 90°. At an incident angle of 20°, it is clear from figure 2(a) that an anisotropic growth feature is present, with nanostripes aligned in the direction of the laser beam. At 30°, more interesting growth features were observed in the form of nanowires oriented in the direction of the laser beam and consisting of minute nanoparticles, as shown in figure 2(b). At 45°, the anisotropy in the growth features was reduced (figure 2(c)), and at 90°, the growth was almost isotropic (figure 2(d)). At 20 and 30°, the separations between two self-organized linear features were 370 and 510 nm, respectively. The formation of such self-organized nanostructures is part of independent ongoing research.
The effect of laser irradiation on the evolution of microstructures on single-crystal silicon surfaces is depicted in figure 3. Figure 3(a) shows the variation in grain size distribution with the number of laser shots; 250-300 grains were used to evaluate the grain size distribution of the modified surfaces in a 10 × 10 µm2 area. The average grain size was about 180 nm for 1 shot and its distribution was monomodal. For 8 shots, however, the average grain size reduced to 110 nm and the distribution became bimodal. For 100 shots, the grain size further reduced to 80 nm, retaining the bimodal distribution. Such a bimodal distribution might be due to further fragmentation of the formed minute grains by successive laser bombardment. The effect of the incident angle of the laser beam on the grain size distribution of silicon surfaces is shown in figure 3(b). The XRD peaks of the laser-treated surfaces appeared at the same position as those of the untreated surface but with a higher full width at half maximum (FWHM), indicating that the surfaces lost their long-range crystalline order and became nanocrystalline or amorphous; the FWHM increased with the number of laser pulses. No peaks from silicon oxide were observed in the XRD patterns, indicating the lack of oxidation of the modified surfaces. The crystallite size was reduced from 30 to 10 nm with increasing number of pulses from 1 to 100, as estimated with Scherrer's equation. In laser-material interaction, the surface locally heats up to ∼3000 °C and then cools down to room temperature within microseconds after laser pulse application. Thus, the surface experiences ultrafast cooling that could be responsible for the observed amorphization. We would also note that when significant damage occurs, microsized and nanosized features are formed on the surface. Each microstrand in a bundled fiber-like structure, for example, consists of many nanograins, as demonstrated in figure 1(d).
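The Scherrer estimate mentioned above is a one-line calculation; the sketch below uses the Co-Kα wavelength given in the experimental section, while the peak position and FWHM values are assumed for illustration (strain broadening is ignored, as in the strict Scherrer formula).

```python
import numpy as np

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.17902, K=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), beta = FWHM in radians."""
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * np.cos(theta))

# Assumed example values: a Si reflection near 2-theta = 33 deg (Co-Kalpha)
# broadened to a 1.0 deg FWHM gives a crystallite size of roughly 10 nm.
print(f"D = {scherrer_size_nm(fwhm_deg=1.0, two_theta_deg=33.0):.1f} nm")
```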
An example of this surface feature is also shown in figure 2(b), where each nanowire consists of grains that are much smaller than the wire diameter. Raman spectra of excimer-laser-nanostructured silicon surfaces showed that the crystallite size is smaller at a relatively low incident angle than at a higher incident angle, as shown in figure 5. This size change was revealed in the AFM images shown in figure 2, and it resulted in the broadening of the characteristic silicon Raman peak at 518 cm−1. The extent of surface nanocrystallization increased with the number of laser pulses, as confirmed by the broadening of the Raman peak. There was no broadening of the Raman peak centered at 300 cm−1. This peak is 20 times weaker than the 518 cm−1 line. Both peaks are due to the phonon modes of the Si substrate. Preliminary results suggest the following two main contributions to these Raman peaks: the transverse optical (TO) phonon and grain boundary defects, such as stacking faults. These two contributions result in the broadening of the 518 cm−1 line toward the low-energy side and thus an asymmetrical peak shape. This asymmetry is associated with a grain size distribution that can generate tensile strain on the surface [30].
To perform a meaningful nanoindentation of a modified, nanostructured silicon surface, it is essential to locate the damaged region. This was accomplished by using the in situ SPM imaging capability of the Triboindenter. An area of 100 µm2 was scanned by SPM, and the region with a minimum surface roughness was selected for indentation. Figure 6(a) shows the load-displacement curves for the untreated and laser-irradiated silicon surfaces at different numbers of pulses. A pop-out (sudden material expansion) occurred in the unloading curve for the untreated silicon surface at 1760 µN load and at 104 nm depth. This type of behavior was not observed for any of the laser-treated silicon surfaces. From figure 6(a), it is clear that the initial part of unloading is purely elastic, because it follows the power law relation (equation (2)). The loading curve can be fitted with the power law relation, and the unloading curve can be expressed as

P = α(h − h_f)^m    (2)

Here, P is the indenter load, h is the elastic displacement of the indenter, h_f is the final depth, and α and m are fitting constants. The value of m for the Berkovich tip is typically 1.5. It has already been well established that the inelastic deformation in silicon under nanoindentation is mainly caused by phase transformations, with contributions from other deformation mechanisms. Thus, the bifurcation from the elastic curve shown in figure 6(a) indicates that a phase transformation has occurred. A pop-out can occur after the bifurcation. Chang and Zhang [31] indicated that the onset of phase transformation during unloading occurs once the contact pressure reaches the critical value (for the phase transition from Si-II to Si-III/Si-XII and/or to amorphous phases) and is independent of peak indentation load or of loading/unloading rate. The pop-out effect is a consequence of a phase transformation from metallic Si-II to either Si-III or Si-XII, accompanied by a sudden volume increase and hence the uplift of the material surrounding the indenter. Interestingly, the unloading part after the pop-out can also be fitted with the power law relation, indicating that the unloading process after a pop-out is purely elastic. The absence of a pop-out in the unloading part for the laser-treated silicon surfaces is interesting. It could be attributed to the phase transformation of silicon from a single crystal to an amorphous/nanocrystalline phase as a result of laser irradiation. Figure 6(a) reveals a systematic variation of the load-displacement (L-D) curves with the number of pulses. After 1 pulse, the load-displacement curve differed negligibly from that of virgin silicon, whereas a significant difference was observed after 100 pulses. Although the loading and unloading rates were the same (∼800 µN s−1) in all experiments, the slope of the loading curve varied significantly after different numbers of pulses. The indenter penetrated into the material without any resistance after irradiation, indicating that the material became very soft because of rapid-cooling-induced surface porosity. Figure 6(b) presents a typical plot of the hardness and Young's modulus of laser-treated surfaces (laser fluence of 5 J cm−2 and incident angle of 90°) versus indentation depth at various numbers of pulses. The hardness of untreated silicon was ∼12.8 GPa and decreased to 10.5, 7, 2 and less than 1 GPa after laser irradiation at 1, 8, 40 and 100 shots, respectively.
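The pop-out described above is easy to flag programmatically: during unloading it appears as an abrupt backward displacement step (a sudden decrease in depth) at nearly constant load, caused by the volume expansion of the Si-II to Si-III/Si-XII transformation. The heuristic below is only a sketch; the jump and load-flatness thresholds are assumptions, not values from the paper.

```python
import numpy as np

def find_pop_out(h_nm, P_uN, dh_jump_nm=3.0, dP_flat_uN=20.0):
    """Flag the first pop-out in an unloading segment sampled in time order.

    A pop-out is detected as a depth decrease larger than dh_jump_nm while
    the load changes by less than dP_flat_uN between consecutive samples.
    """
    for i in range(len(h_nm) - 1):
        depth_step = h_nm[i + 1] - h_nm[i]      # negative while unloading
        load_step = abs(P_uN[i + 1] - P_uN[i])
        if depth_step < -dh_jump_nm and load_step < dP_flat_uN:
            return {"index": i, "load_uN": P_uN[i], "depth_nm": h_nm[i]}
    return None  # no pop-out, as seen for the laser-treated surfaces
```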
Clearly, the extent of surface damage increased with the number of pulses, and catastrophic surface damage was observed after 100 shots. The variation in the Young's modulus of the damaged surface with the number of pulses is shown in figure 6(c). The modulus of untreated silicon was ∼180 GPa and decreased to 165, 150, 22 and 10 GPa for 1, 8, 40 and 100 shots, respectively. Modulus is a measure of stiffness: the higher the modulus, the stiffer the material. This does not, however, guarantee good deformability or energy absorption before fracture. We observed that more laser pulses induced heavier damage, larger porosity and smaller grain sizes. This type of nanostructuring gradually softens the surface. Figure 7(a) shows the load-displacement curves for various angles of incidence and 8 laser pulses. The observed behavior deviates from the power law relation for surfaces irradiated at smaller angles of incidence, namely, 20, 30 and 45°. At 20°, the unloading part of the load-displacement curve shows a sudden increase in the elastic recovery rate of the indented material. This recovery rate was lower for surfaces irradiated at 30 and 45° than for the surface irradiated at 20°. This type of unloading behavior was not observed for the surfaces irradiated by different numbers of pulses. Figures 6(a) and 7(a) suggest that the angle of incidence has a more profound effect on the surface modification than the number of pulses; that is, 100 shots at 90° and 8 shots at 20° resulted in similar hardnesses and moduli. This could be due to the fact that the laser energy is more localized at 90°, which melted the material, followed by immediate solidification. At lower angles of incidence, however, the damaged area was a significantly elongated ellipse, and the energy delivered to the material was invested in material movement along the laser beam. This was corroborated by the optical micrographs. A large and deep eroded trench was observed at lower angles of incidence, but shallow ripples occurred after 90° irradiation. Smaller angles of incidence resulted in an improved porosity. Therefore, instead of increasing the number of laser pulses, one could reduce the incident angle to achieve improved material properties (compactness). The Young's modulus of untreated silicon was ∼180 GPa and decreased to 150, 80, 42 and 22 GPa for 90, 45, 30 and 20° incident angles, respectively. Linear profiles (ripples) were observed in the direction of the laser beam at small irradiation angles. Each ripple consists of minute nanoparticles, and its growth is highly anisotropic at small angles of incidence. As the incident angle gradually increased to 45°, the anisotropy decreased, and it vanished at 90°. Moreover, the surface layers were removed (ablated) at 90° but not at angles smaller than 90°, at which material was instead redistributed on the surface. Very few remaining nanoparticles can be seen on the silicon surface for the 90° incident angle. Figure 8 shows the hardnesses and moduli of laser-irradiated silicon surfaces as a function of the laser parameters.
In general, hardness should increase as grain size decreases, down to a limit of ∼10 nm (Hall-Petch effect). However, a low hardness was observed for small grains in this study. Such inverse Hall-Petch behavior is typically seen for grains smaller than 10 nm, which is not the case here. This anomaly can be attributed to the high porosity of the excimer-laser-damaged silicon surfaces.
Conclusion
Single-crystal silicon surfaces have been modified using a KrF excimer laser at a laser fluence of 5 J cm−2. A systematic investigation was carried out to explore the effects of the number of laser pulses and incident angle on the nanomechanical properties of the modified silicon surfaces. The porosity and grain size of the modified silicon surfaces were controlled by varying the laser parameters. We found that the incident angle of laser irradiation has a more profound effect on the modification of silicon surfaces than the number of laser pulses as far as nanomechanical properties are concerned.
Contrast-enhanced ultrasound in detection and follow-up of focal renal infections in children
Objective: Focal renal infections in children have to be diagnosed early in order to enable an appropriate antibiotic treatment. The purpose of this paper was to investigate the efficacy and clinical utility of intravenous renal contrast-enhanced ultrasound (CEUS) as an alternative imaging method for the diagnosis and follow-up of focal renal infections in children. Methods: Fourteen children aged from 6 months to 17 years (mean 6.5 years) in whom focal renal infection was suspected were included in this retrospective study. All data were obtained from medical and imaging records of the patients. Results: CEUS was performed for the diagnosis in all 14 children and then also for follow-up in seven children with renal abscess. In three children enhancement pattern was concordant with focal nephritis and in four children CEUS excluded focal renal infection and the diagnosis of pseudolesion was confirmed. Conclusion: Renal CEUS was proven to be an efficient and self-sufficient imaging in diagnosis and further follow-up of focal renal infections in children. CEUS patterns of focal renal infections are described as well as relevant CEUS enhancement patterns important for differential diagnosis. Renal abscess follow-up algorithm with CEUS is suggested. Advances in knowledge: All clinically relevant imaging data was obtained by CEUS and no other imaging was necessary for the diagnosis and follow-up.
INTRODUCTION
Focal renal infections such as focal nephritis and renal abscesses are not very common in children. However, they usually present with a non-specific and varying clinical picture, which can result in prolonged antibiotic treatment and increased length of hospital stay. 1,2 Renal abscesses can evolve from haematogenous spread of other localised infections or from ascending urinary tract infection. The most frequently isolated pathogens in renal abscesses in children are Escherichia coli and Staphylococcus aureus. 2 Conventional ultrasound is the first-line imaging method to detect renal structural lesions, 3 but for final diagnosis, CT or MRI is often required. The latter are not optimal for use in children due to ionising radiation exposure in CT and the need for sedation or general anaesthesia in MRI. On the other hand, contrast-enhanced ultrasound (CEUS) is a novel, real-time bedside, child-friendly imaging modality with a high safety profile and many benefits in comparison with conventional imaging techniques. [4][5][6] The purpose of this paper was to investigate the efficacy and clinical utility of intravenous renal CEUS as an alternative to CT or MRI for the diagnosis and follow-up of focal renal infections, with particular attention to the description of the various enhancement patterns in children. A follow-up algorithm with CEUS is suggested to objectively monitor the focal renal infection and assess possible chronic renal parenchymal changes.
METHODS AND PATIENTS
Children in whom focal renal infection was suspected at our University Children's hospital from January 2018 to February 2022 were included in the retrospective study. All data were obtained from medical and imaging records of the patients included in the study.
Clinical, laboratory and treatment data
Clinical data comprised prolonged fever, chills, pain (abdominal, flank), nausea and vomiting, diarrhoea, headache, changes in mental status, and foul-smelling urine.
Documented laboratory data included: erythrocyte sedimentation rate (ESR) (mm/h), C-reactive protein (CRP) (mg/L), procalcitonin (PCT) (µg/L), and white blood cell count.
Treatment data included choice of antibiotics, mode of administration and duration of treatment (Table 1).
Renal ultrasound and intravenous contrast-enhanced ultrasound of the kidney
Conventional and colour Doppler ultrasound of the kidneys were performed on an Aplio 500 US machine (Canon Medical Systems Europe B.V.) to evaluate the structural changes in kidney parenchyma and in the perirenal space. 3 A focal nephritis was suspected when an area of hypoechoic structural changes with diminished vascularisation, usually without mass effect, was seen. When a heterogeneous mass within the renal parenchyma, usually causing mass effect, was seen, an early stage of renal abscess was suspected. A mature renal abscess typically appears as a well-defined hypoechoic mass with thick irregular walls or a capsule and increased through-transmission. It demonstrated internal echoes and/or hypoechoic fluid areas. Colour Doppler ultrasound showed increased peripheral vascularisation and lack of vascularisation in the central part of the abscess. In addition, the local extrarenal parenchymal extension of the collection was determined as subcapsular or perinephritic fluid collection and inflamed hyperechoic thickened perirenal fat. 7 CEUS examinations of the kidney were performed on the same ultrasound machine using a 1.9-5.0 MHz convex or 7.5-12 MHz linear transducer. The child lay in the position (supine, prone, decubitus) in which the renal changes were best seen. The second-generation ultrasound contrast agent (UCA) SonoVue® (Bracco, Milan, Italy), in a dose of 0.03 mL/kg for the convex probe or 0.05 mL/kg for the linear probe, was injected as a bolus through one of the arm veins, followed by a 10 mL saline flush. Subsequent enhancement was recorded as a continuous cine loop for the first 60 s, and then shorter cine loops or still images were saved. The number of UCA applications depended on the visualisation of the lesion and the child's cooperation, but in order to thoroughly scan the whole kidney with focal infection and also the contralateral kidney in the venous phase, usually two applications of UCA were applied.
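As a small illustration of the dosing rule above, a helper along these lines reproduces the weight- and probe-dependent SonoVue dose; this is a sketch of the stated protocol, not clinical software, and any rounding or maximum-dose cap is left out.

```python
# Illustrative dose helper for the SonoVue bolus protocol described above.
# Per-kg doses come from the text; rounding and dose caps are not modeled.
def sonovue_dose_ml(weight_kg: float, probe: str) -> float:
    per_kg = {"convex": 0.03, "linear": 0.05}
    return weight_kg * per_kg[probe]

print(sonovue_dose_ml(20, "convex"))  # 0.6 mL for a 20 kg child, convex probe
print(sonovue_dose_ml(20, "linear"))  # 1.0 mL with the linear transducer
```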
Based on the CEUS enhancement, the type of focal kidney infection was identified as: (1) focal nephritis - a hypoenhanced area in the renal parenchyma with slow washout, comparable to the washout of normal renal parenchyma (Figure 1); (2) early stage of abscess - a hypoenhanced area with a non-enhanced part (Figure 2a); (3) mature abscess - a non-enhanced central part with a hyperenhanced capsule (Figure 2b); (4) subcapsular abscess - a boundary between the avascular, non-enhancing subcapsular collection and the enhancing renal parenchyma (Figure 3a); and (5) perinephritic changes - perinephritic fluid (non-enhanced) and hyperenhanced, inflammation-changed perinephritic fat. A focal lesion detected on conventional ultrasound that showed the same enhancement as the surrounding normal renal parenchyma indicated a parenchymal pseudolesion (Figure 4).
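For quick reference, the enhancement patterns above can be condensed into a simple lookup; the labels below merely paraphrase the text and are an illustration, not a validated decision rule.

```python
# Illustrative summary of the CEUS enhancement patterns described in the text.
CEUS_PATTERNS = {
    "hypoenhanced area, slow parenchyma-like washout": "focal nephritis",
    "hypoenhanced area with a non-enhanced part": "early-stage abscess",
    "non-enhanced centre with hyperenhanced capsule": "mature abscess",
    "non-enhancing subcapsular collection vs enhancing parenchyma": "subcapsular abscess",
    "non-enhanced perinephritic fluid, hyperenhanced perirenal fat": "perinephritic changes",
    "enhancement identical to surrounding parenchyma": "pseudolesion",
}

for pattern, diagnosis in CEUS_PATTERNS.items():
    print(f"{diagnosis}: {pattern}")
```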
Prior to the CEUS examination, written informed consent was obtained from all children's parents. All the examinations were conducted following the Helsinki Declaration.
RESULTS
During the study period, there were 14 children (11 girls and 3 boys) with suspected focal renal infection, aged from 6 months to 17 years (mean 6.5 years). Renal CEUS was performed for diagnosis in all children and then also for follow-up in seven children.
Demographic, clinical, laboratory and treatment data, and CEUS findings are presented in Table 1.
In three cases, risk factors for bacteraemia were present: the first child was suffering from pneumonia caused by Mycoplasma pneumoniae during hospitalisation and was later diagnosed with an immunodeficiency disorder, the second presented with influenza infection and pneumonia, and the third was diagnosed with peritonitis and underwent appendectomy at the beginning of the hospital stay. Three children had suffered from antecedent urinary tract infections. Vesicoureteral reflux was diagnosed in three of the observed patients. All children had normal potassium, sodium, blood urea nitrogen and creatinine levels on admission. ESR was measured in four patients and was elevated in all of them, with values ranging from 46 up to 113 mm/h. In 9/14 patients, causative organisms were isolated from urine (E. coli in seven, Enterococcus faecalis in two). All blood cultures were negative.
Focal renal infection was demonstrated by renal CEUS in 10 children; in 3 the enhancement pattern was concordant with focal nephritis (Figure 1) and in 7 with renal abscess (parenchymal or subcapsular) (Figures 2 and 3). In four children, CEUS excluded focal renal infection and the diagnosis of pseudolesion was established; in two cases, the pseudolesion represented an atypical Columna Bertini, and two pseudolesions were hyperechoic compared with normal renal parenchyma on conventional ultrasound (Figure 4). Multiple follow-up CEUS examinations were performed in seven children with renal abscesses. Three to four follow-ups were performed to evaluate the dynamics of abscess maturation and regression: the first follow-up after 7-9 days, the second after 22-28 days, the third after 1-2 months, and in one child after 9 months to evaluate chronic changes after abscess healing (Figure 2b-c, Figure 3b). The other children were followed up only by ultrasound and Doppler.
All patients with focal renal infection were initially treated with intravenous antibiotics and completed the treatment with oral drugs. Total treatment duration ranged from 3 to 9 weeks (mean 6 weeks), depending on the type of focal infectious lesion.
Importantly, no chronic changes such as renal parenchymal scars, a thinned renal cortex or perfusion defects at the site of the abscess were found on follow-up examinations after treatment.
DISCUSSION
The clinical applicability and utility of CEUS in the diagnosis and monitoring of focal renal infections in children were investigated. All patients in our series presented with similar clinical symptoms and elevated inflammatory markers. With CEUS, clinically relevant imaging data on pathomorphological and structural changes in the renal parenchyma and perirenal tissue were obtained in all of the children. All children with focal renal infection were promptly treated with broad-spectrum antibiotics and complete resorption of the lesions was achieved.
Renal CEUS has been recognised as a problem-solving method in the evaluation of focal renal lesions and a promising method for evaluating microvascular renal perfusion, as in acute pyelonephritis 8 or other causes of renal perfusion disorders (renal artery stenosis and renal infarction, including smaller and polar infarcts, as well as cortical necrosis). 7 Most of the data regarding renal CEUS are based on studies of adults. There are only a few reports of using UCA in renal tumour or inflammation pathology in children. 5,7,8 However, there are no papers evaluating CEUS in a subset of children with focal renal infections.
In our study, CEUS was performed as a continuation of the conventional ultrasound examination, particularly in cases presenting on ultrasound as an indeterminate solid or mixed lesion. Knowledge of the distinct CEUS enhancement patterns of the renal and perirenal space in different pathological entities helps in the differential diagnosis. With CEUS, it is possible to confirm focal renal infection and to differentiate between focal nephritis and different stages of renal abscess. This has an impact on the duration of antibiotic treatment, which is considerably longer in renal abscess. A possible differential diagnosis in children with focal renal infection, particularly in the early stage of abscess formation, is Wilms tumour, which shows a more non-homogeneous hyperenhancement of tumour tissue, usually with multiple non-enhanced areas of necrosis of various sizes and shapes. [9][10][11] A focal nephritis has to be differentiated from renal cell carcinoma, which is hypoenhanced but has fast washout, whereas inflammatory lesions show slow washout on delayed images. 7 On the other hand, CEUS can exclude focal renal infection by diagnosing a pseudolesion, which has a similar enhancement pattern to normal kidney parenchyma, and direct the investigation toward other infectious foci.
CEUS has been shown to be a highly sensitive diagnostic imaging modality for detecting and monitoring renal scars in children with reflux nephropathy. 12 Findings included hypoenhancing areas in the renal parenchyma of different shapes (wedge-shaped areas, areas of flattening), irregularity of the outer renal contour, and parenchymal thinning. In our small cohort, CEUS proved to be an objective follow-up method for assessing inflammation of the renal tissue and its repair. The suggested follow-up timing, according to our experience, is 7-10 days after the diagnosis, then after 3-4 weeks and 6-8 weeks, depending on the clinical response to treatment. Only if parenchymal changes are seen on the last CEUS should another follow-up CEUS be performed 3-6 months after the last examination to evaluate possible chronic parenchymal changes. Owing to timely diagnosis and treatment, the outcome was excellent without the need for interventional (percutaneous drainage) or surgical treatment.
Although CEUS has many of the above-mentioned advantages, there are possible limitations to its use, such as poor display of the entire kidney or poor depiction of the focal lesion on native ultrasound (obese patient, bowel gas interposition, poor child cooperation). Another limitation of the method is the off-label use of intravenous second-generation UCA in children, despite the high safety profile of the contrast agent. Recently, a comprehensive review of intravenous CEUS safety in children has been published, which confirms initial data previously published on a single large cohort of paediatric patients. 13,14 According to this recent review, a history of known hypersensitivity to the active substance of the UCA is the only known contraindication for intravenous CEUS application. 13
CONCLUSION
Renal CEUS is an efficient, safe, child-friendly imaging method for the timely diagnosis of focal renal infections, their objective follow-up during antibiotic treatment, and the objective evaluation of potential chronic changes of the renal parenchyma. It has proven to be a self-sufficient method and thus has high potential to replace CT or MRI in the assessment and monitoring of focal renal infections in children.
FUNDING
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sector.
Development of Wheat (Triticum aestivum L.) Populations for Drought Tolerance and Improved Biomass Allocation Through Ethyl Methanesulphonate Mutagenesis
The narrow genetic variation for drought adaptive traits and biomass allocation in wheat (Triticum aestivum L.) presents a major bottleneck for breeding. Induced mutagenesis creates genetic variation and complements conventional breeding for drought tolerance improvement. The aims of this study were to induce mutations in wheat genotype LM43 using three ethyl methanesulphonate (EMS) treatments, and to develop mutant populations for improving drought tolerance, biomass allocation and agronomic performance. Experiments were conducted under controlled and field conditions at the University of KwaZulu-Natal. Data on percentage germination (%G), days to 90% maturity (DTM), plant height (PH), shoot biomass (SB), root biomass (RB), root-shoot ratio (RSR), spike length (SL), spikelet count (SPS), thousand seed weight (TSW), and grain yield (GY) were collected from M1 to M4 generations. Significant (p < 0.001) differences among individuals and generations were observed for all the assessed traits and the generation × population interaction effects were significant (p < 0.01) for SB, TSW, and GY due to EMS treatments. The differences among the generations showed that the mutagenic effects were cumulative and exhibited clear segregations in subsequent generations. The new selections with unique biomass allocation, drought response and agronomic performance will be useful for wheat improvement programs.
INTRODUCTION
An estimated seven billion people across the world depend on bread wheat (Triticum aestivum L.; 2n = 6x = 42, AABBDD) for food, making it the second most important food crop globally (Tilman et al., 2011). Wheat is a source of fiber, carbohydrates and proteins (Mahajan and Tuteja, 2005). Globally, wheat was produced on ∼218 million hectares with an output of 772 million tons of grain in the year 2017 [Food and Agriculture Organization (FAO), 2018]. However, it is projected that a 70% increase in wheat production will be required to meet human consumption by the year 2060 (Ortiz et al., 2008). Reports indicate that wheat production and productivity have declined by 5.5% in the last few decades due to climate change-induced drought and heat stresses (Daryanto et al., 2016).
There is a need to develop wheat cultivars with improved yield potential and enhanced resistance to biotic and abiotic constraints to meet the projected demand for wheat.
Drought stress is one of the major constraints on wheat production and productivity. Daryanto et al. (2016) estimated that, on average, a 21% loss in yield can be incurred in wheat when moisture availability decreases by 40%. The impact of drought on wheat production is influenced by genotype (Daryanto et al., 2016), stress intensity and duration (Park et al., 2016; Sun et al., 2017), plant health status and nutrition (Lobell et al., 2008; Yu et al., 2018) and genotype-by-environment interactions. Supplemental irrigation is used to mitigate the impact of drought stress, but this option is not sustainable. Average rainfall is declining and is inadequate to replenish water reservoirs for human, industrial and agricultural uses, which may create conflict over water management and use. Developing drought-adapted cultivars is among the most sustainable strategies to reduce water demand for agriculture and minimize the impact of drought stress on wheat production.
Several wheat breeding programs spearheaded by the International Maize and Wheat Improvement Centre (CIMMYT) and the International Centre for Agricultural Research in the Dry Areas (ICARDA), in collaboration with various national organizations, initiated the development of wheat breeding lines with improved drought tolerance. The breeding lines reportedly exhibited high yield potential and were adapted to dryland farming ecologies (Smale et al., 2002). The successful development of drought-tolerant cultivars depends on identifying and exploiting wide genetic variation for drought-adaptive traits in wheat. Drought-adaptive traits include flowering and maturity periods, plant height and spike length, kernel weight, tillering capacity, and biomass allocation (Abdolshahi et al., 2013; Mehraban et al., 2018; Hooshmandi, 2019). Most adaptive traits have been investigated extensively in studies on drought tolerance and yield in wheat, while biomass allocation has been less reported. Assessing biomass allocation in plants involves quantifying biomass in the above- and below-ground parts. Root traits have been neglected in breeding programs due to difficulties associated with root sampling and phenotyping (Den Herder et al., 2010; Fang et al., 2017). Conventional wheat varieties exhibit narrow genetic variation for root traits because most breeding programs primarily aim to improve harvest indices to increase yield potential. While this has led to increased grain yield potential, it has narrowed genetic variation for rooting ability, lowered root-to-shoot ratios and increased susceptibility to drought stress in modern varieties (White et al., 2015).
Genetic variation allows for the selection of superior individuals. Breeding wheat populations for drought tolerance has been limited by several factors, including the large environmental variance encountered during phenotyping, lack of genetic variation and loss of genetic diversity in improved cultivars. The loss of genetic diversity has contributed to stagnant yields and the high susceptibility of wheat to environmental stress (Keneni et al., 2012; Voss-Fels et al., 2015). The narrow genetic diversity in wheat is attributed to continuous directional selection within a narrow range of elite parental lines. Many spring wheat cultivars in developing countries were developed involving at least one elite parent bred by CIMMYT (Smale et al., 2002). Thus, there is a need to create new genetic variation for developing new cultivars with improved drought stress tolerance. Genetic variation is created after the recombination of genes through controlled crosses. Recombination occurs through sexual reproduction when divergent and complementary parents are crossed. This process does not occur naturally in self-pollinating species such as wheat. Self-pollinating species require emasculation prior to crossing, which is tedious and expensive. Furthermore, conventional breeding by crossing of superior genotypes is a long-term process that takes about 12 years to develop distinct, stable, and uniform varieties (Shivakumar et al., 2018). There is a need to rapidly create genetic variation and develop superior cultivars within a shorter period to respond to the rapidly changing environment.
Induced mutagenesis, which involves exposing biological material to chemical or physical agents that induce genetic modification through mutations in the DNA, has been used to widen genetic variation in self-pollinated species such as rice, sorghum, and wheat [International Atomic Energy Agency (IAEA), 2020]. The resultant mutant varieties created through mutagenesis have improved productivity and quality (Kenzhebayeva et al., 2014). The use of induced mutagenesis has the potential to create new genetic variation that may not be created by conventional breeding strategies. For instance, the possible genetic recombination obtained by sexual reproduction after crossing is limited by the initial allelic diversity within the base breeding population (Voss-Fels et al., 2015). Mutagenesis broadens the possibilities of allelic diversity of the base population. The efficacy of the mutagenic agent can be manipulated by altering its dosage and treatment conditions. It is imperative to generate large mutant populations to enhance the efficiency of mutagenesis and increase the probability of obtaining superior mutant individuals. Various mutagens, including ethyl methanesulphonate (EMS), have been used successfully to improve agronomic traits such as flowering and maturity period, reduced plant height, yield, grain quality and tolerance to abiotic and biotic stress (Maluszynski and Kasha, 2002; Kontz et al., 2009; Singh and Balyan, 2009; Dhaliwal et al., 2015; Nazarenko et al., 2018; Lethin et al., 2020).
The use of EMS mutagenesis requires less sophisticated equipment, which makes it appropriate for developing countries, and poses low health and environmental hazard risks (Anbarasan et al., 2013). However, mutations obtained after exposure to EMS are random and some may not be useful in developing fit-for-purpose varieties. There is a need to develop various populations and select superior mutant genotypes or families. The selected families can either be used as parental lines to develop breeding populations or released as mutant varieties. Reportedly, mutagenesis has caused desirable genetic changes and improved agronomic performance in wheat, rice, and cowpea (e.g., Dhaliwal et al., 2015; Horn et al., 2016; Luz et al., 2016). However, there are no previous reports on the development of breeding populations of wheat with optimized biomass allocation, drought tolerance and agronomic performance. There is a need to identify novel mutant breeding populations with enhanced biomass allocation for drought tolerance and agronomic performance in wheat. Early-generation selection of mutant populations is important to advance desirable traits in wheat (OlaOlorun et al., 2020). In a preliminary study, OlaOlorun et al. (2019) established three ideal EMS treatment conditions in the wheat genotype LM43. The three pre-determined EMS treatment conditions were suitable to induce mutation and to select ideotypes with high yield, improved drought tolerance and high root-to-shoot ratios (OlaOlorun et al., 2021). Biomass allocation to roots has been neglected in wheat breeding despite the importance of roots in nutrient cycling, water extraction and carbon retention in soil. Studies have reported that biomass allocation can be pivotal in drought tolerance (Griffiths and Paul, 2017; Mathew et al., 2019). Therefore, the objectives of this study were to induce mutations in the wheat genotype LM43 using three pre-determined ethyl methanesulphonate treatments, and to develop breeding populations across the M1 to M4 generations for enhanced drought tolerance, biomass allocation and agronomic performance.
Plant Materials
The bread wheat genotype designated LM43 was selected from a panel of germplasm obtained from CIMMYT. The genotype was selected after prior evaluation of its drought tolerance and yield potential (Mwadzingeni et al., 2016). A preliminary study to establish optimal conditions for effective mutagenesis with minimum biological damage was conducted prior to embarking on large-scale mutagenesis (OlaOlorun et al., 2019).
Selection Procedure
The selection procedure across generations is illustrated in Figure 1. Preliminary phenotypic variation analyses showed that EMS mutagenesis was effective on genotype LM43 (OlaOlorun et al., 2019). Hence this genotype was selected for large-scale mutagenesis under three EMS treatment conditions. Breeding populations were developed over four generations based on the three EMS treatment conditions (OlaOlorun et al., 2021). Freshly EMS-treated M1 seeds were planted in the field between March and August 2018. The first breeding population (Population 1) was developed after treatment of seeds with 0.1% v/v EMS for 1 h at 25 °C. The second breeding population (Population 2) was derived after seeds were treated with 0.1% v/v EMS for 1 h at 30 °C, while the third breeding population (Population 3) involved seeds exposed to 0.7% v/v EMS for 1.5 h at 25 °C. In addition, untreated seed of the genotype LM43 was included as Population 4 and served as a comparative control. M1 plants were grown to maturity and the grains were harvested and bulked according to their respective treatments and developed into populations. The M2 seed harvested from M1 plants was grown out as M2 plants. During the M2 generation, 180 individual plants were purposefully selected based on high biomass and yield potential and further evaluated in the M3 and M4 generations. Selections made in the M3 and M4 generations were for improved agronomic performance, drought tolerance and biomass allocation under drought-stressed and non-stressed conditions.
Planting Sites and Establishment
The M1 and M2 generations were evaluated at the Ukulinga Research Farm of the University of KwaZulu-Natal (UKZN) (29°40′S, 30°24′E; 806 m above sea level) during the 2018/2019 cropping season. The M3 and M4 generations were established both at Ukulinga Research Farm and under greenhouse conditions at the Controlled Environment Facility (CEF) at UKZN during the 2019/2020 cropping season. The meteorological data during the growing period and the soil physiochemical properties at both sites are provided in Tables 1 and 2, respectively. The M1 and M2 generations were planted under normal growing conditions with irrigation up to maturity, while the M3 and M4 were screened under drought-stressed and non-stressed conditions. Under field conditions, seeds were planted in 12 m long rows. The spacing between rows was 60 cm and plants were spaced 10 cm apart within a row. The spacing was consistent with normal field planting but was selected to optimize planting density within the available space and customized planting conditions. The experiments used custom-made plastic mulch as a rain-out shelter strategy. In the greenhouse, seeds were planted in 10-liter plastic pots filled with pine bark. All experiments were laid out in a randomized complete block design with two replications. Drought stress was imposed by withholding irrigation water to 35% of field capacity at anthesis, while the non-stressed treatment was well watered up to physiological maturity.
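As a quick, purely illustrative check of the stated layout, the implied stand size per row and plant density follow directly from the spacings given above.

```python
# Quick check of the field layout numbers stated in the text (illustrative).
row_length_m = 12.0     # 12 m long rows
row_spacing_m = 0.60    # 60 cm between rows
plant_spacing_m = 0.10  # 10 cm between plants within a row

plants_per_row = row_length_m / plant_spacing_m            # 120 plants per row
density_per_m2 = 1.0 / (row_spacing_m * plant_spacing_m)   # ~16.7 plants/m^2

print(int(plants_per_row), round(density_per_m2, 1))
```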
Data Collection and Analysis
Quantitative data from 10 selected and tagged plants were collected during each generation to summarize the genetic variation and aid selection. The following data were collected from the M1 through M4 generations: days to 90% maturity (DTM), plant height (PH), shoot biomass (SB), spike length (SL), 1000-seed weight (TSW) and grain yield (GY). In addition, percentage germination (%G) and number of spikelets per spike (SPS) were collected in the M1 and M2 generations, while root biomass (RB) and root-shoot ratio (RSR) were measured in the M3 and M4 generations. Data collection procedures were adapted from Mathew et al. (2019). The data were subjected to analysis of variance (ANOVA) and key descriptive statistics were computed using GenStat 18th edition (Payne et al., 2017). The relationships among traits were quantified under each stress treatment using the Pearson correlation coefficient in SPSS version 24 (IBM SPSS I, 2016). Trait correlation strengths were categorized into weak, moderate and strong following Zou et al. (2003).
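The reported analyses were run in GenStat and SPSS; a minimal sketch of the same steps in Python is given below, where the data frame, file name and column names are assumptions introduced for illustration.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import pearsonr

# Hypothetical per-plant data set; the file and column names are assumed.
df = pd.read_csv("wheat_mutants.csv")

# Two-way ANOVA with interaction, e.g. grain yield ~ population x generation.
model = ols("GY ~ C(population) * C(generation)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Pearson correlation between shoot biomass and grain yield within one regime.
stressed = df[df["water_regime"] == "stressed"]
r, p = pearsonr(stressed["SB"], stressed["GY"])
print(f"r = {r:.2f}, p = {p:.3g}")
```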
Analysis of Variance
The analysis of variance for the M1 and M2 generations showed that the population × generation interaction effects were significant (p < 0.01) for SB, TSW, and GY (Table 3). Significant (p < 0.001) differences across the mutant generations were observed for all traits measured, while the population main effect had a significant (p < 0.05) impact on PH, RB, and GY.
There were significant (p < 0.05) differences in PH and SB in response to the three-way population × generation × water regime interaction effects in the M3 and M4 generations (Table 4). The effects of the interaction involving generation and population were significant (p < 0.01) for SB and TSW only. The generation × water regime and population × water regime interaction effects resulted in significant (p < 0.05) differences in SB, SL, TSW, and GY among the M3 and M4 mutants. Significant (p < 0.05) differences were observed among the M3 and M4 generations (Figure 2A). At M2, the mutants from Population 3 recorded the highest SB (61.82 g/m2) while mutant plants developed in Population 2 maintained the highest GY (19.48 g). Mutants from Population 1 recorded the shortest PH (83.71 cm) and the highest TSW (47.29 g) (Figure 2B). At M3, mutant plants developed in Population 2 produced the highest grain yield of 11.58 g under the drought-stressed condition (Figure 3A). The highest shoot biomass was produced under non-stressed and water-stressed conditions at 80.04 and 71.51 g/m2, respectively, for mutants in Population 1. Mutants from Population 2 recorded the highest root biomass under non-stressed and water-stressed conditions at 14.36 and 13.37 g/m2, respectively. During the M4 generation, mutant plants established in Population 1 produced the highest root biomass (9.38 g/m2) under the non-stressed condition, while Population 2 recorded the highest RB (7.87 g/m2) under water stress. The highest GY (23.51 g) under the non-stressed condition was recorded for mutants from Population 3, while mutants from Population 1 had the highest GY of 14.53 g under water-stressed conditions. Under water stress, mutants from Population 2 had the highest SB (32.93 g/m2) while mutant plants from Population 3 recorded the shortest PH of 87.33 cm (Figure 3B).
The mean performance of the three EMS-treated populations and the untreated control across the four generations is presented in Figure 4. Mutants developed from Population 3 had the highest SB of 55.43 g/m2, while the highest GY (18.39 g) was recorded for mutant plants in Population 2. SL and TSW were highest for mutants from Population 2 (13.64 cm and 61.61 g, respectively) across the four generations. Figure 5 summarizes the differences among the M4 wheat populations under water-stressed and non-stressed conditions at the two planting sites.
Variation Observed in the M3 Generation
Many individual plants in the M3 generation were available for selection based on their breeding population and the observed variation in spike and awn morphology (Figure 6). Individual plants with variable tiller number (Figure 7), plant height and shoot biomass production (Figure 8), and biomass partitioning into roots and shoots (Figure 9) were also observed. Qualitative traits showed limited variation in the M3 generation when compared with the M2. However, segregation at the M3 generation produced a wider range of variation (Figures 8, 9), making selection more efficient. Various spike mutants with a high number of seeds from each breeding population were selected. Subsequently, abnormal and deformed spikes with a low number of seeds were discarded. Mutants with high root and shoot biomass and high tiller numbers were identified and advanced to the M4 generation.
Genotypic Variation for Phenotypic Traits
The significant (p < 0.05) effects of generations, breeding populations and their interaction for most agronomic traits probably resulted from genetic segregation or cumulative mutagenic effects in subsequent generations. Each generation was self-pollinated to generate the subsequent generation, and the variation in later generations could be due to segregation at heterozygous loci caused by mutations in the M1 generation. Similarly, Shorinola et al. (2019) found both superior and inferior mutants in later generations of wheat and suggested that the variation emanated from segregating heterozygous mutant phenotypes from the initial population. In other studies, the phenotypic variation between early and subsequent populations was attributed to the cumulative effects of the EMS. Hussain et al. (2018) asserted that the variation in subsequent generations is induced by non-lethal cumulative mutagenic effects. Singh et al. (2006) reported significant variation between the M1 and M2 generations, with reduced variation in the M3 generation, which was attributed to homozygosity even at mutated loci in advanced generations. Expectedly, phenotypic expression in the mutant generations was significantly (p < 0.05) affected by drought stress. Traits such as SB, SL, TSW, and GY were significantly reduced under drought stress, which corroborated previous studies (Marchin et al., 2020). Soil moisture is vital for biological processes and nutrient transport, and inadequate water supply interferes with essential processes, leading to poor growth and development (Daryanto et al., 2016). Grain yield production under drought stress was likely supported by families that were able to maintain high shoot biomass production. Shoot-related traits are reported to influence grain production in water-limited environments through translocation of assimilates synthesized in the shoot before the onset of the detrimental drought stress (Abdolshahi et al., 2015).
Mean Performance of EMS-Treated Populations
The lack of definite trends in the pattern of variation among the EMS-treated wheat populations points to the random nature of mutations induced by EMS and the wide variation created in subsequent segregating generations. The superior agronomic performance of the EMS-mutagenized populations compared to the untreated control for biomass, yield and yield-related traits measured under water stress during the M3 and M4 generations indicates that EMS is efficient in creating potentially useful variation. It can be assumed that genetic modification through EMS-induced mutations improved drought tolerance. EMS is a potent mutagen widely used in plant breeding programs (Talebi et al., 2012; Luz et al., 2016). Mutagenesis has the potential to create genetic variation for exploitation in breeding for improved biomass and yield-related traits under water-limited environments (Addai and Salifu, 2016; Luz et al., 2016). Population 2 had the highest average biomass and yield performance across selection generations under the two water regimes. Selected individual mutants from Population 2 can be recommended for further screening for enhanced biomass and yield stability in water-limited environments.

FIGURE 3 | The means of individuals among three EMS-treated and control populations of wheat at the M3 (A) and M4 (B) generations under two water regimes. NS, non-stressed condition; WS, water-stressed condition; DTM, days to 90% maturity; PH, plant height; SB, shoot biomass; RB, root biomass; RSR, root-shoot ratio; SL, spike length; TSW, 1000-seed weight; GY, grain yield.
Morphological Traits of M3 Mutants
The morphological variation reported in this study reveals the usefulness of chemical mutagenesis in wheat breeding. Detectable mutations result in traits that are morphologically distinct, showing that such traits are underpinned by heritable genetic changes (Gnanamurthy et al., 2012). The various spike types observed in the M3 generation suggest that the genetic changes in the spikes were attributable to EMS mutagenesis. Mutations can occur as chromosomal breakage, disturbed auxin synthesis, disruption of mineral metabolism and accumulation of free amino acids, leading to variation in spike morphology (Goyal and Khan, 2010). Spike morphology reportedly affects the extent of seed set and the threshability of wheat grains. Minor and major mutations have been reported to affect spike morphology in hexaploid wheat (Nalam et al., 2007). Mutants with favorable spike morphology (i.e., longer spikes, adequate seed set, and heavier seed weight) are useful genetic resources to improve grain yield potential and enhance grain threshability (Sharma et al., 2019). Variation in spike mutants generated from an EMS-mutagenized wheat population was reported by Dhaliwal et al. (2015). The positive effect of the EMS mutagen was also confirmed by the wide range of variation in biomass traits. Variation in biomass is important for developing a larger breeding parental population for subsequent drought improvement programs, since evaluating and optimizing biomass partitioning will indirectly improve yield, especially in water-limited environments.

Variation in agronomic performance is a product of genotype, environment, and genotype × environment interaction effects. The genetic constitution of individuals is modified by EMS mutagenesis, leading to heritable genetic changes. Mutational events among induced plants vary due to inherent genotype differences, EMS dose and permeability, and reshuffling of genes in mutated genomic regions (Hussain et al., 2018). Therefore, the variation in agronomic performance is underpinned by genetic changes introduced through EMS mutagenesis. These changes substantially alter genetic homeostasis and physiological functions, including hormone synthesis, growth regulation, protein coding and cell division. As a result, heritable and desirable changes in agronomic performance and physiological responses are introduced in functional mutants for selection. Detection of mutations in major genes is feasible using diagnostic molecular markers in the early mutant generations. However, changes due to minor genetic effects involve a larger number of genes that condition phenotypic segregation in the early selection generations. Hence, marker-assisted selection should be performed in the advanced selection generations when homozygous and homogeneous stable mutants are identified, exploiting the inherent self-fertilization system in wheat to identify genetically stable and homozygous individuals. Previous studies used molecular markers to select mutant individuals in the M3 or M4 generations (Kenzhebayeva et al., 2014; Hussain et al., 2018).
Trait Associations
The significant (p < 0.05) correlations observed among the measured traits suggest that the traits were interdependent, providing opportunities for simultaneous selection. The positive and significant associations of GY and SB with the other yield-related traits indicate a strong linkage among above-ground traits, which can readily be selected simultaneously during yield improvement. Taller plants may accumulate adequate photosynthates for attaining higher above-ground biomass, which can directly increase grain yield (Zhang et al., 2009). The influence of above-ground traits such as biomass production, spike morphology and kernel weight on grain yield has been established previously (Reynolds et al., 2007; Kandić et al., 2009; Rahman et al., 2016). However, genotypes that accumulate excessive above-ground biomass at the expense of developing extensive root systems may be susceptible to drought stress, especially in sub-Saharan Africa, where wheat is grown under residual moisture and rainfall is increasingly erratic and inadequate (Haque et al., 2016). The stronger associations between the biomass traits and grain yield under water-stressed conditions show that biomass partitioning under drought is critical for plant survival and attaining reasonable yield. For instance, a slight decrease in rooting capacity is likely to have a greater influence on grain yield under drought-stressed than under non-stressed conditions. Genotypes able to accumulate higher above-ground biomass before the onset of drought stress have a comparative advantage under terminal drought, as they can translocate assimilates from shoot biomass to grains during grain filling (Kandić et al., 2009). The positive and significant correlations of RB and SB are favorable for developing cultivars with extensive root biomass for water and nutrient extraction and sufficient shoot biomass to support grain filling. Palta et al. (2011) asserted that a direct and positive relationship between root and shoot biomass is necessary for grain yield improvement. The negative association between RSR and GY, regardless of moisture availability, indicates that biomass allocation to above- and below-ground parts must be balanced to avoid compromising grain production. Excessively large root systems have high maintenance costs that limit the assimilates available for biomass accumulation in shoots or grain. Conversely, shallow-rooted plants with disproportionately large shoots have a higher risk of lodging at anthesis, which increases susceptibility to diseases and pests and reduces grain quantity and quality (Berry, 2013; Dahiya et al., 2018).
CONCLUSION
This study established the importance of EMS mutagenesis in creating genetic variation within and among wheat breeding populations. Wide phenotypic variation among mutants within each breeding population was identified for improving drought tolerance, biomass, yield, and yield-related traits. The differences in agronomic performance among the generations showed that segregation and cumulative mutagenic effects contributed to the genetic variation. There is a need to ensure that the favorable mutations are fixed in homozygous and homogeneous states before cultivar release. Mutants with favorable agronomic performance can be selected as parental populations for crop improvement. The identified mutants need further screening for biomass and yield stability in diverse environments, especially in drought-stressed areas. Furthermore, the selected mutants are candidate training populations for identifying genomic regions controlling biomass allocation and yield components. This will contribute to marker-assisted selection for biomass allocation and yield and yield-related traits in wheat.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Amerindian-specific regions under positive selection harbour new lipid variants in Latinos
Dyslipidemia and obesity are especially prevalent in populations with Amerindian backgrounds, such as Mexican–Americans, which predispose these populations to cardiovascular disease. Here we design an approach, known as the cross-population allele screen (CPAS), which we conduct prior to a genome-wide association study (GWAS) in 19,273 Europeans and Mexicans, in order to identify Amerindian risk genes in Mexicans. Utilizing CPAS to restrict the GWAS input variants to only those differing in frequency between the two populations, we identify novel Amerindian lipid genes, receptor-related orphan receptor alpha (RORA) and salt-inducible kinase 3 (SIK3), and three loci previously unassociated with dyslipidemia or obesity. We also detect lipoprotein lipase (LPL) and apolipoprotein A5 (APOA5) harbouring specific Amerindian signatures of risk variants and haplotypes. Notably, we observe that SIK3 and one novel lipid locus underwent positive selection in Mexicans. Furthermore, after a high-fat meal, the SIK3 risk variant carriers display high triglyceride levels. These findings suggest that Amerindian-specific genetic architecture leads to a higher incidence of dyslipidemia and obesity in modern Mexicans.
Dyslipidemia is a highly prevalent (53%) 1 cardiovascular risk factor in the United States that will drastically increase medical and economic burdens in the subsequent decades if prevention and treatment cannot be better tailored for those most susceptible. In addition to socioeconomic status, the prevalence of lipid disorders also varies among ethnic groups, with Hispanics being more prone to dyslipidemia than any of the other US groups 2. With 40% of Mexican-American men and 35% of women exhibiting high triglycerides (TGs) (>1.69 mmol l⁻¹) 2, a large portion of the population has a high risk of cardiovascular disease (CVD), especially as a direct causal relationship between hypertriglyceridemia and CVD was recently demonstrated 3. Strikingly, the decreasing rate of CVD currently observed in Europeans 4 does not extend to Hispanic-origin populations, as exemplified by the four times higher incidence of CVD among the Amerindians when compared with Europeans 2. Thus, identifying Hispanic-specific lipid variants is critical to deciphering the genetic pathogenesis of dyslipidemia and CVD in this rapidly growing US minority, and ultimately personalizing prevention and treatment of this major risk factor.
Despite their increased predisposition 5, Mexicans and other groups with Amerindian heritage have been substantially underrepresented in genomic studies 6,7. Most lipid studies focus on recapturing European-origin signals in the Latino populations 8-13, with only a single Mexican lipid genome-wide association study (GWAS) reported 14. GWAS in admixed populations are hindered by a complex population substructure that can reduce power 15. Statistical methods, such as local ancestry inference or admixture mapping, have been employed to overcome or even utilize such ancestral variation to identify disease-associating loci in diverse populations; however, they often rely on ancestry-informative markers or parental population haplotype panels that are not readily available in all populations, as is the case with Latinos 16-18. Fitting a mixed model or adjusting for ancestry in GWAS can circumvent the confounding effect of ancestry, but may lead to a higher false-negative rate and losing ancestry-specific variants 14,15.
To this end, we design an approach utilizing a cross-population allele screen prior to GWAS (CPAS-GWAS) to identify Amerindian-origin lipid variants in Mexicans. Utilizing the CPAS-GWAS approach, we identify 18 Amerindian risk variants for lipids and obesity and one risk haplotype for TGs in Mexicans. Interestingly, the Amerindian-specific TG risk haplotype and 10 of the Amerindian lipid and obesity variants have not been implicated in lipid traits or obesity in other populations. Two of the new TG loci also show signs of potential positive selection, reflecting the possibility that maintaining high serum lipid levels was favourable during the Amerindian population history.
Results
A novel cross-population allele screen approach. To search for Amerindian-specific genetic variants that contribute to the high risk of dyslipidemia and obesity in Mexicans, we developed a CPAS-GWAS approach that first screens across the genome for variants that differ in frequency between the two ancestry populations, Europeans and Amerindians, and subsequently includes only these variants (CPAS variants) in the actual Mexican GWAS. Thus, we restricted the Mexican GWAS to variants present only in Mexicans and not in Europeans, and to variants that show statistically significant differences in allele frequency between Mexicans and Europeans, as explained in detail below (see Supplementary Fig. 1 for the CPAS design).
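At a single SNP, the screening step reduces to testing whether allele counts differ between the two phenotype-matched control groups. A minimal R sketch is given below; the counts are hypothetical, and the software the authors used for this step is not stated, so this is an illustration of the idea rather than their pipeline:

```r
# Sketch of the per-SNP cross-population allele screen; counts hypothetical.
fin <- c(ref = 1800, alt = 200)            # Finnish control allele counts
mex <- c(ref = 1200, alt = 800)            # Mexican control allele counts

p <- chisq.test(rbind(fin, mex))$p.value   # allele-frequency difference test

n_snps <- 1584455                          # SNPs tested in the TG screen
keep <- p < 0.05 / n_snps                  # Bonferroni-corrected CPAS filter
```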
CPAS enriches for Amerindian TG variants. We first screened for population-specific variants between the admixed Mexican population and its European ancestry population represented by Finns, using Finnish and Mexican controls matched on the tested phenotype (that is, Finns and Mexicans with normal levels of TGs, total cholesterol (TC), high density lipoprotein cholesterol (HDLC) or body mass index (BMI), respectively). The purpose of the phenotypic matching is to ensure that the differences in allele frequencies are strictly due to population structure in order to focus on the variants that are population-stratified instead of confounded by other phenotypes. Based on our local ancestry estimates, African ancestry is low (2.3%) in the Mexican cohort, and accordingly, no screening between Mexican and African controls was performed.
For screening across the genome, we first imputed the GWAS data in the Finnish and Mexican cohorts to increase both the number of overlapping common variants between the cohorts and the number of low-frequency single-nucleotide polymorphisms (SNPs) (minor allele frequency (MAF) 1-5%), known to differ most between populations 19. Overlapping SNPs with MAF > 5% in Mexicans were pruned using an R² cutoff of 0.5 in the Mexican controls to reduce redundancy and multiple testing. To avoid overestimation of linkage disequilibrium (LD) among the low-frequency variants, all overlapping SNPs with MAF 1-5% in Mexicans were retained.
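The pruning criterion is simply the squared correlation between genotype dosages at pairs of SNPs. A hedged sketch of that check in R follows, with simulated dosage vectors standing in for the real genotypes:

```r
# Sketch of the LD-pruning criterion; dosage vectors are simulated placeholders.
set.seed(3)
g1 <- rbinom(500, 2, 0.3)        # dosages 0/1/2 at SNP 1
g2 <- rbinom(500, 2, 0.3)        # dosages 0/1/2 at SNP 2

r2 <- cor(g1, g2)^2
drop_snp2 <- r2 >= 0.5           # redundant at the R^2 cutoff of 0.5 used here
```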
In the actual TG CPAS screen, 967,056 SNPs (61%) exhibited a difference in allele frequencies between Mexican and Finnish TG controls that passed the Bonferroni correction (P < 3.16 × 10⁻⁸) for the 1,584,455 SNPs tested. A Mantel-Haenszel test showed that the MAF distribution difference between populations is significantly greater after CPAS (P < 2.20 × 10⁻¹⁶), indicating that population-stratified variants were indeed detected (Fig. 1a,b). In addition, we compared these variants between Europeans and admixed Native Americans from the 1000 Genomes Project, and 74% of them displayed a >10% difference in MAF, demonstrating that our screening does filter for variants that differ between the populations. We also included in the GWAS the 694,185 Mexican-specific SNPs that after imputation were present only in Mexicans and not in Finns, to further enrich the GWAS for Amerindian-specific variants. Taken together, 1,661,241 CPAS SNPs, filtered by CPAS to differ significantly between Finns and Mexicans or absent in Finns, were carried forward for association testing between Mexican TG cases and controls. CPAS was also carried out in a similar way for three additional traits: HDLC, TC and BMI.
GWAS results and independent replication. We performed GWAS for high TGs in Mexicans using only the CPAS SNPs as the input. HDLC, TC and BMI were instead analysed as continuous traits to demonstrate that CPAS-GWAS is effective for quantitative traits as well. As the four phenotypes are highly correlated, we only corrected for the number of SNPs using Bonferroni in the GWAS step, followed by the replication step in which we also corrected for multiple testing using Bonferroni. The top 1% of the TG GWAS results are shown in Supplementary Data 1. We selected 15 non-redundant TG SNPs with P-values of 1.07 × 10⁻⁵ to 6.08 × 10⁻³³ for replication in 6,159 additional Mexican individuals based on P-value, functional annotation and MAF difference between Mexicans and Finns (Table 1 and Supplementary Table 1). Three of the 15 SNPs were Mexican-specific, as their frequencies were less than 1% in the Finnish cohort or in Europeans (the 1000 Genomes database). The Mexican replication sample (n = 6,159) consisted of an unrelated cohort and a family-based cohort (see Supplementary Table 2 for clinical characteristics). We combined the results from the two replication cohorts by performing a meta-analysis using METAL 20. Four variants (rs28680850, rs79236614, rs964184 and rs139961185) on chromosomes 8 and 11 resulted in P-values below the Bonferroni-corrected significance level (P < 0.0033) in the replication stage (Table 1). Furthermore, their overall meta-analysis in all Mexican cohorts (GWAS combined with the replication cohorts, total n = 9,482) resulted in P-values between 7.1 × 10⁻⁹ and 1.8 × 10⁻⁶⁷. Interestingly, the intergenic variant rs79236614, which resides ~100 kb downstream of the lipoprotein lipase (LPL) gene, is in high LD (R² = 0.91) in Mexicans with an early stop variant in LPL, rs328 (S474X), that truncates the last exon (Table 1). The novel TG variant rs28680850 on chr8p21 resides in a predicted CpG site. We verified its allele-specific effect on methylation by pyrosequencing bisulphite-treated whole blood-derived DNA samples from Mexicans. The homozygous individuals with the rs28680850 A risk allele (n = 11) all had a 0% methylation status, whereas individuals with AG and GG genotypes (n = 48) had a methylated CpG site with an average methylation of 57% (range 36-100%), implicating potential epigenetic regulation of TG levels. The new TG-associated variant on chr11, rs139961185, which resides in an intron of salt-inducible kinase 3 (SIK3), is common in Mexicans but not observed in Finns (Table 1). To eliminate the possibility that the association signal came from a correlation with the nearby, known TG-associated gene, apolipoprotein A5 (APOA5), we carried out a regional LD analysis (Supplementary Fig. 2). We did not observe any pair-wise R² > 0.2 between rs139961185 and any of the APOA5 or APOC3 variants, indicating that this novel Mexican-specific TG variant in SIK3 is independent from APOA5 and APOC3.
In addition, four other SNPs (rs62436827, rs4360309, rs72925845 and rs78536982) showed suggestive TG signals (P < 0.05) in the replication for the same allele and direction as in the GWAS (Table 1). Six HDLC variants and three TC SNPs passed the genome-wide significance threshold (P < 5 × 10⁻⁸) in the Mexican CPAS-GWAS (Table 2). Two HDLC hits (rs78557978 and rs148533712) and the top BMI signal (rs6027281), which reside near novel genes that have never been implicated for these traits, were selected for replication. HDLC and TC variants near or in known lipid genes such as CETP and CELSR2 were not selected for replication. Two novel HDLC loci, an intronic variant in receptor-related orphan receptor alpha (RORA) (rs148533712) and an intergenic variant near UDP glycosyltransferase 8 (rs78557978), were replicated (Table 2). Since a known HDLC-associated gene, hepatic lipase (LIPC), is 2.3 Mb away from rs148533712, we performed a regional LD analysis (Supplementary Fig. 3) to investigate whether this Mexican HDLC signal is independent from LIPC. The regional LD analysis demonstrated that the LD (in R²) decays drastically before reaching LIPC, and there was no strong LD between rs148533712 and any variant within LIPC (R² < 0.2), indicating that the Mexican HDLC signal in RORA is independent from the previously known European LIPC lipid signal, as also suggested by the relatively long distance of 2.3 Mb. Interestingly, the associated interval around the latter SNP, rs78557978 (R² > 0.5), includes only one gene, UDP glycosyltransferase 8. The replicated BMI hit, rs6027281 (Table 2), resides between C20orf197 and LOC284757. However, the associated interval (R² > 0.5) does not extend to these adjacent predicted genes, suggesting an intergenic regulatory effect for this BMI hit or its proxy.
TG CPAS-GWAS loci are enriched for Amerindian ancestry.
To provide additional support for our CPAS approach, we compared the four replicated TG signals with regions displaying enriched Amerindian ancestry in Mexican TG cases versus controls, identified using LAMP-LD 17. Figure 2a,b shows that the four replicated TG variants reside in regions with the highest Amerindian ancestry difference across the whole genome (a percent difference >3% and a z-score >3 for ancestry enrichment between the Mexican TG cases and controls). Supplementary Figs 4-6 show close-up views of these loci with regional genes. Furthermore, three (rs78536982, rs72925845, and rs4360309) of the four suggestive loci also reside in regions with Amerindian enrichment (a percent difference >2%) in Mexican high TG subjects (Supplementary Figs 7 and 8). The genome-wide ancestry difference is shown in Supplementary Fig. 9.
Genome-wide SKAT analysis supports replicated TG loci. To utilize the imputed low-frequency variants, which are more likely to be population-specific 19, we examined the combined effect of common and rare variants using the combined sum test with the sequence kernel association test (SKAT-C) 21. Only the CPAS SNPs were included as input variants in the SKAT-C analysis. Both the 11q23 and 8p21 loci, where three (rs964184, rs139961185 and rs79236614) of the replicated SNPs from the single-marker analysis reside, were significant in SKAT-C (P < 7.64 × 10⁻⁷) after correcting for the 65,428 regions tested (Supplementary Fig. 10a-c). An additional peak near LPL with no GWAS hits is likely due to a cluster of regional rare variants driving the signal (Supplementary Fig. 10a). The 8p23.3 region, where the fourth replicated GWAS SNP rs28680850 resides, resulted in a suggestive SKAT-C P-value of P = 2.70 × 10⁻⁵ (Supplementary Fig. 10b).
These results indicate that the use of CPAS variants in SKAT helps identify regions with population-based combined effects of common and rare variants.
Logistic regression of the TG case/control status on the HT1 haplotype carrier status resulted in an OR of 1.65 (P = 7.79 × 10⁻¹⁴), suggesting that HT1 is a significant risk factor for high TGs in Mexicans. Interestingly, HT1 is Mexican-specific and not observed in Finns, because it is tagged by rs964184 and rs139961185 (Supplementary Table 3), of which rs139961185 is Mexican-specific (not observed in Finns and MAF = 0.5% in the 1000 Genomes Europeans). This Mexican-specific risk haplotype HT1 also showed strong association with high TGs in the replication cohorts (P = 7.09 × 10⁻¹², OR = 1.46), with a frequency of 20% in the Mexican TG cases (overall haplotype P = 2.83 × 10⁻⁴¹ and risk haplotype P = 2.51 × 10⁻¹³).
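Since the haplotype regressions were run in R (see Methods), the carrier-status test can be sketched directly; the 0/1 vectors below are simulated stand-ins for the cohort data, with the effect set near the reported OR of 1.65:

```r
# Sketch of the HT1 haplotype risk test; data simulated for illustration.
set.seed(4)
ht1_carrier <- rbinom(3000, 1, 0.2)
tg_case <- rbinom(3000, 1, plogis(-0.5 + log(1.65) * ht1_carrier))

fit <- glm(tg_case ~ ht1_carrier, family = binomial)
exp(cbind(OR = coef(fit), confint.default(fit)))   # odds ratios with 95% CIs
```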
Two causative TG variants on the haplotype background. To identify causative variants travelling on the haplotype background, we examined all SNPs in the haplotype region, focusing on the Mexican-specific HT1. Eight exonic SNPs on the HT1 background, as well as one known hypertriglyceridemia promoter SNP 14,23 on the HT2 background, were further investigated based on differences in allele frequencies and potential deleterious effects (Fig. 3c; Supplementary Table 4). To identify the variants that best explain the Mexican TG case/control status, we carried out a stepwise logistic regression including all nine SNPs. Rs11820589 and rs662799 were retained in the model (P < 0.00001; Fig. 3c) with a pseudo-R² value of 0.057, indicating that these SNPs tagged by the risk haplotypes explain ~6% of high TG levels in the Mexican cohort. Interestingly, rs11820589 is in LD (R² = 0.82) with a known non-synonymous variant, rs3135506, in APOA5 (ref. 24). A PolyPhen-2 score of 0.993 and a SIFT score of 0 for rs3135506 indicate a possibly damaging effect on the protein. Thus, a change in TGs attributed to rs11820589 is likely due to the effect of rs3135506 on APOA5. Based on ENCODE data, rs662799 (2 kb upstream of APOA5) is a strong enhancer in a HepG2 liver cell line, probably regulating APOA5 in cis, as APOA5 is highly expressed in liver. In summary, these two variants explain ~6% of TG levels in Mexicans, likely through a change in the function of APOA5.
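A hedged sketch of this selection step in R follows, pairing stepwise logistic regression with a McKelvey-Zavoina pseudo-R² (the pseudo-R² named in the Methods); the nine-SNP genotype matrix and phenotype are simulated, so the retained variables and the pseudo-R² value are illustrative only:

```r
# Sketch of stepwise variant selection plus pseudo-R^2; data simulated.
set.seed(5)
G <- matrix(rbinom(9 * 1000, 2, 0.2), ncol = 9,
            dimnames = list(NULL, paste0("snp", 1:9)))
y <- rbinom(1000, 1, plogis(-1 + 0.6 * G[, 1] + 0.4 * G[, 2]))
dat <- data.frame(y = y, G)

full <- glm(y ~ ., data = dat, family = binomial)
best <- step(full, direction = "both", trace = 0)    # stepwise selection

eta   <- predict(best)                               # linear predictor Xb
r2_mz <- var(eta) / (var(eta) + pi^2 / 3)            # McKelvey-Zavoina pseudo-R^2
```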
Positive selection on Amerindian TG loci. To examine whether the top TG GWAS loci were favourably retained in the Mexican population due to recent positive natural selection, we examined the integrated haplotype score (iHS) statistics of neutrality (see Methods) for all genotyped and imputed SNPs with MAF > 5% across the chr8 and chr11 regions (Fig. 4), instead of just the CPAS variants, because focusing only on the CPAS variants that differ in allele frequency between the two populations would have introduced a bias into our selection analysis 25. In our selection analysis, we found multiple peaks of extreme |iHS| values (>4.0) in the chr11 risk haplotype region within the SIK3 gene (Fig. 4c). It is worth noting that both the Mexican-specific, TG-associated haplotype-tagging SNP rs139961185 and the novel Mexican-specific HDLC-associated variant rs11216230 also reside in SIK3 (Tables 1 and 2). We estimated that these extreme |iHS| scores in SIK3 rank among the top 0.1% of chromosome-wide scores based on our iHS analysis of all genotyped SNPs on the entire chr11, suggesting that SIK3 has been under recent positive selection and has thus retained unusually high homozygosity. We also identified peaks with |iHS| > 4.0 near the novel TG variant on chr8, residing inside a lincRNA gene, LOC286083, expressed in most human tissues (Fig. 4b). The LPL region resulted in several |iHS| values > 3.0, although no extreme |iHS| scores (>4.0) were seen in this TG region (Fig. 4a). Interestingly, the extreme iHS scores were observed with imputed SNPs, suggesting that the genotype panel does not represent well Mexican-specific variants and Latino populations in general.

Figure 2 | Rs28680850 and rs79236614 were both significant after Bonferroni correction, and rs4360309 displayed a suggestive signal in the GWAS. All three variants reside in regions that show Amerindian enrichment in Mexican high TG cases (>3% Amerindian ancestry difference). (b) Local ancestry difference between Mexican low TG controls and high TG cases on chromosome 11q23, where the TG risk haplotype region resides. The seven haplotype-tagging SNPs are shown as green diamonds clustered together in the plot. These LAMP-LD 17 results indicate that the 11q23 region is highly enriched for Amerindian ancestry in the Mexican high TG cases.

To investigate whether admixed ancestry confounds the selection signal on chr11, we also performed the iHS analysis in all subjects homozygous for the Amerindian ancestry in the chr11 region (n = 1,217), as estimated by LAMP-LD. We observed iHS scores of 3.3 (rs609177) and 2.8 (rs111809212) in SIK3. Interestingly, these variants are in LD with the Mexican-specific TG risk haplotype SNP rs139961185 in SIK3, both resulting in R² > 0.54 and D′ > 0.99 with rs139961185. Accordingly, they were also associated with high TGs when analysed in the entire Mexican TG case/control sample (P = 9.51 × 10⁻⁷ and P = 1.46 × 10⁻¹⁰). These data show that the iHS scores remain large when the analysis is performed only on the Amerindian background, further supporting natural selection of SIK3 in Mexicans.
Response to oral fat tolerance test in Mexicans. To examine whether the Mexican-specific SIK3 risk variant rs139961185 affects postprandial TG metabolism, we carried out an oral fat tolerance test in a Mexican cohort. Briefly, the Mexican participants ate a fatty meal at the baseline and their TG levels were measured over a period of 8 h postprandially to calculate the postprandial TG response as an area under the curve (AUC) (see Methods for details of the diet study). Figure 5 demonstrates that both in the low TG (fasting baseline TG < 1.69 mmol l⁻¹) and high TG (fasting baseline TG > 1.69 mmol l⁻¹) groups (Fig. 5a) and in the combined Mexican study sample (Fig. 5b), the Mexican rs139961185 risk allele carriers consistently retained significantly higher TG levels throughout the time course in contrast to non-carriers (P = 0.03 for TG AUC), suggesting that this TG-associated SIK3 risk variant may delay TG clearance after a fatty meal in Mexicans.
Discussion
Admixed populations provide unprecedented opportunities to understand human demographic history and genetic diversity and, moreover, to uncover variants of different ancestral origin and frequency that may contribute to variation in disease prevalence between populations 26,27. However, genetic studies in recently admixed populations have proven difficult due to the confounding effects of population substructure and the reliance on ancestral population reference panels that might not be readily available 15,16. To this end, we designed a CPAS-GWAS approach that restricts the GWAS to include only those variants that differ in frequency between the two ancestral populations. We performed the first CPAS-GWAS to discover Amerindian variants associated with dyslipidemia and obesity in Mexicans. Hypoalphaproteinemia, hypertriglyceridemia and hypercholesterolaemia are more prevalent in Amerindian-origin populations than in Europeans, with 60.5% of Mexicans suffering from hypoalphaproteinemia (HDLC < 1.03 mmol l⁻¹), 43.6% from hypercholesterolaemia (TC > 5.17 mmol l⁻¹), and 31.5% from hypertriglyceridemia (TG > 1.69 mmol l⁻¹), respectively 1,2,5,28,29. The clinical significance of dyslipidemia derives from the fact that patients with these lipid disorders are predisposed to CHD and often exhibit type 2 diabetes (T2D). CHD and T2D emerged as the two leading causes of death in Mexico in a recent national survey 30, and more than 65% of Mexican diabetics have hypertriglyceridemia 31. Furthermore, recent evidence demonstrates a causal role of TGs in CHD 3,32-34. Thus, it is critical to focus efforts and resources on the identification of the population-specific genetic components that make hypertriglyceridemia so prevalent in Mexicans.
In contrast to other methods used to analyse admixed populations, CPAS-GWAS achieves single-variant resolution, uncovering susceptibility variants or their proxies instead of the wider ancestry-enriched chromosomal regions identified using other approaches 15,16. For example, our TG CPAS-GWAS identified eight Amerindian hypertriglyceridemia variants and one Amerindian-specific risk haplotype, of which all but one reside in genomic regions enriched for Amerindian ancestry in Mexican high TG cases, as shown by the local ancestry analysis. A two-step, tree-based approach evaluating selection on a set of SNPs from several populations, based on frequency differences among populations, has previously been proposed 35. First, Bhatia et al. 35 built an unrooted tree utilizing Fst to identify divergence between populations, followed by selection estimation at each marker common to all populations. To identify the potential traits under selection, they cross-referenced selected variants with GWAS catalogues. While CPAS and the tree-based method share similarities, they do not follow the same assumptions and principles. CPAS does not assume variants to be under selection; rather, we first screen for population-specific variants by comparing phenotypically matched distinct populations and then test their association with a trait directly. As a result, we can also identify population-enriched risk variants that correlate with a phenotype but are not necessarily under selection pressure, as is the case, for instance, with the LPL locus. Overall, our data demonstrate that CPAS-GWAS can effectively screen for ancestry-specific susceptibility variants in admixed populations.
CPAS-GWAS is not restricted to a single admixed population or trait; in fact, it can easily be tailored to other populations or diseases, as shown by our qualitative TG and quantitative HDLC, TC and BMI CPAS-GWAS analyses. Moreover, CPAS-GWAS is not vulnerable to the estimation of local ancestry, which can be nontrivial if the appropriate parental populations are unknown or unavailable, as is often the case for admixed Latino populations 16. Accordingly, false positives due to incorrect ancestry calculations are a major concern of local ancestry inference 15,16; CPAS-GWAS does not face this challenge, as that step is eliminated. One limitation of the CPAS-GWAS approach is that its resolution and accuracy rely on the density of the genotyping arrays and the quality of imputation, but both limitations will likely be circumvented in the near future as whole-genome or exome sequencing becomes common practice with the continuing drop in sequencing prices. We utilized Finns as surrogates of Europeans in CPAS because Finns are the single largest population group investigated in extensive European lipid GWAS studies 11,13, suggesting that Latino comparisons against Finns should sufficiently screen against the European lipid signals.
Chromosome 11q23 harbours a well-known TG-associated APOA1/C3/A4/A5 gene cluster, and the variant rs964184 has been implicated for TGs in multiple populations 8-14. In this key TG region, CPAS-GWAS identified Amerindian TG risk variants and haplotype signatures, of which the most striking example is HT1, with zero frequency in Europeans and a 20% frequency in Mexican TG cases. Of the variants tagged by the haplotypes, rs11820589 and rs662799 explain ~6% of the variability of TGs in Mexicans. Rs11820589 is in strong LD with a non-synonymous SNP (S19W), rs3135506, a known TG-increasing variant that resulted in three-fold lower plasma Apo A-V levels when introduced into the mouse genome 24. Rs662799, previously associated with both TGs and CHD 23, resides in the promoter or enhancer region of APOA5. It is worth noting that these TG risk variants, rs3135506 and rs662799, are >2 and ~4 times more prevalent in the Mexican TG controls and Mexican TG cases than in the Finnish TG controls, respectively. APOA5 is a potent regulator of serum TG levels, as knockout mice lacking apoa5 have four times higher TG levels; mice expressing a human APOA5 transgene have one-third lower plasma TG levels; and overexpression of APOA5 reduces TG levels in mice 36-38. In addition, APOA5 stimulates the LPL-mediated VLDL-TG hydrolysis via interaction with proteoglycan-bound LPL 38,39. The variants rs662799 and rs3135506 likely affect the function of APOA5, which in turn regulates LPL, reflected as elevated TG levels in Mexicans. Targeted sequencing of the chr11q23 haplotype region, which has substantial Amerindian ancestry in Mexican TG cases, is bound to identify additional functional variants that influence TG levels in Amerindian-origin populations.
We also identified two TG loci, on chr8p21 and chr8p23, with significant Amerindian ancestry in the Mexican TG cases. Rs79236614 is located downstream of the key TG gene LPL, previously associated with TGs and CHD 11,40,41. In Mexicans, rs79236614 is in tight LD with rs328 (S474X), which results in an early stop in LPL. Interestingly, our SKAT-C data implicated the presence of multiple Amerindian rare risk variants in the LPL region contributing significantly to TGs in Mexicans. The variant rs28680850 on chr8p21 is intergenic, and the region has not previously been implicated for lipids in other populations. Our initial data show that this novel TG variant influences the differential methylation of a CpG site, suggesting that allele-specific methylation contributes to the underlying biological mechanism.
CPAS-GWAS also identified two novel replicated HDLC loci and one BMI locus that reside near or within genes that have never been associated with either trait in humans. Interestingly, the new HDLC variant rs148533712 on chr15 is located in an intron of the retinoic acid receptor-related orphan receptor alpha (RORA) gene, and it is a signal independent of LIPC. RORA is a known transcriptional activator of APOA5, APOA1 and APOC3 42-44, all residing in the Mexican risk haplotype region on chr11, suggesting distinct converging lipid pathways underlying dyslipidemia in Mexicans. At the chr20 BMI locus, protein phosphatase 1, regulatory subunit 3D (PPP1R3D) was recently identified for obesity in mice 45. Thus, additional genes affecting BMI likely exist at this locus.
To the best of our knowledge, we carried out the first study examining positive selection of GWAS loci for metabolic traits in an admixed population. TG is the most plausible trait under selection at these loci, since our diet study implicates SIK3 in delayed TG clearance after a fatty meal; the chr11 locus displays the strongest association signal with TGs both in Mexicans and Europeans; and the novel chr8p23.3 region does not have significant associations with any other traits we tested (P > 0.0003). Furthermore, converging evidence from our selection analysis and diet study, our TG and HDLC CPAS-GWAS, as well as a previous mouse model all support the role of SIK3 in metabolic functions. Interestingly, these Mexican-specific TG and HDLC CPAS variants in SIK3 are not present in, and thus have not previously been identified by, extensive European lipid GWAS studies 11,13, suggesting that there are Amerindian-specific genetic lipid pathways involving SIK3. Notably, recent data on a Sik3 knockout mouse identified SIK3 as a novel energy regulator, altering cholesterol and bile acid metabolism by coupling with retinoid metabolism 46. We also searched the Gene Expression Omnibus 47 database at the NCBI and the ArrayExpress 48 database at the European Bioinformatics Institute to verify that SIK3 is expressed in human liver and adipose tissues, the most relevant tissues in lipid metabolism. Furthermore, the iHS analysis suggests that SIK3 has been under positive selection pressure, pointing to an advantageous role for SIK3 in reproductive survival. However, whether the selection pressure acted on Amerindians prior to or after admixture requires further investigation. One possible explanation is that the ability to retain sufficiently high serum lipid levels could have contributed to survival when resources were scarce during the early period of human habitation on the American continent; as a result, this genetic background was preferentially retained in the population. Additionally, in line with the selection results, our fatty diet study demonstrated that the Mexican-specific rs139961185 TG risk allele is significantly associated with delayed postprandial TG clearance in Mexicans, further supporting the role of SIK3 in TG metabolism and its candidacy for future functional studies. Individually, these findings do not stand alone as evidence of selection on TGs. However, taken together, they suggest that the SIK3 gene, associated with TGs in modern Mexicans, has undergone selection at some point during the Amerindian lineage. SIK3 may thus be a genetic responder to the Western diet that was recently introduced to Latinos, contributing to increased susceptibility to metabolic diseases in modern Mexicans. Additional future studies with whole-genome sequence data will help evaluate selection of lipid traits across the genome in Mexicans more comprehensively.
In summary, we developed the CPAS-GWAS approach to uncover Amerindian variants in Mexicans that contribute to their greater susceptibility to dyslipidemia and obesity when compared with Europeans. Of the novel lipid genes we identified, RORA and SIK3 are of major interest. RORA is a transcriptional, ligand-regulated mediator of multiple key lipid genes 42-45. Furthermore, selective inhibition of the retinoic-acid-receptor-related orphan receptors via synthetic ligands has been suggested as a viable therapeutic approach for metabolic disorders 49. Based on our findings from CPAS-GWAS, local ancestry and selection analyses, and the oral fat tolerance test, we hypothesize that SIK3 may have played an important role in maintaining the high plasma TG levels that were historically critical for Amerindian survival but led to a higher rate of dyslipidemia and obesity in modern Hispanics after the adoption of the Western diet. Our results suggest SIK3 as a strong candidate for future functional investigation to elucidate the molecular basis of the high prevalence of dyslipidemia in Mexicans.
Methods
Human subjects. A total of 19,273 participants from Finnish (n = 9,791) and Mexican (n = 9,482) cohorts were included in the study (see Supplementary Table 2 for clinical characteristics). All studies were approved by local research ethics committees: the Institutional Review Boards (IRBs) of the Helsinki, Turku and Tampere University Hospitals; the IRB of the National Institute for Health and Welfare; the IRB of the Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán; and the IRB of UCLA. All participants gave informed consent.
We screened six Finnish population-based cohorts with GWAS data available 50-52 (total n = 14,217) for individuals with low serum TG levels (TGs < 1.69 mmol l⁻¹) who were not taking lipid-lowering medication. Fasting TG values were used to determine the low TG status, except for the FINRISK cohort. However, since non-fasting increases rather than decreases serum TG levels, the use of non-fasting TGs in that cohort should not influence the results. A subset of 9,791 Finnish individuals with low TGs was included in the cross-population screening step from the Northern Finland Birth Cohort 1966 (NFBC66) (n = 4,427), the Cardiovascular Risk in Young Finns Study (n = 1,428), the Helsinki Birth Cohort Study (n = 991), the Health2000 GenMets Study (n = 1,301), the FinnTwin12 and FinnTwin16 cohort studies (Twins) (n = 421; one randomly selected twin in each pair was included to investigate only unrelated subjects), and FINRISK (n = 1,223). The Finnish GWAS data on the NFBC1966 Study have previously been deposited in the NIH dbGaP data repository under the accession code phs000276.v1.p1.
In the replication stage, we investigated 6,159 additional Mexican individuals for replication of the 15 SNPs, using the same criteria for the hypertriglyceridemia status as in the cross-population allele screen, which resulted in 2,129 high TG cases, 2,985 low TG controls and 903 family members from 73 Mexican dyslipidemic families 14,54,55. To utilize all individuals with lipid phenotypes available in these cohorts (n = 6,159), we also analysed log-transformed serum TGs as a quantitative trait.
Serum TGs, HDLC and TC were measured using enzymatic and enzymatic colorimetric methods with commercial reagents in the Finnish and Mexican cohorts 50-54. The cut-points for TG cases (TGs > 2.26 mmol l⁻¹) and TG controls (TGs < 1.69 mmol l⁻¹) are based on the American Heart Association TG guidelines. The general population means of HDLC, TC and BMI in Finns and Mexicans were used as cut-points in the two populations in the CPAS stage to screen for controls. The thresholds of the three traits for controls in Finns and Mexicans were as follows: HDLC > 1.15 mmol l⁻¹ and HDLC > 1.54 mmol l⁻¹; TC < 5.17 mmol l⁻¹ for both populations; and BMI < 25 kg m⁻² and BMI < 27 kg m⁻², respectively.
The Mexican participants (n = 57) included in the fatty meal diet study were recruited at the Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán, Mexico City.
Genotyping and imputation. In the CPAS, Illumina genotyping platforms were used for all cohorts, as described in detail previously 14,50-52. The NFBC cohorts were genotyped with the HumanHap CNV 370k array; GenMets and FINRISK with the HumanHap 610k array; and the Young Finns Study, the Helsinki Birth Cohort Study and Twins with the HumanHap 670k array, respectively. The Mexican cohorts were genotyped using the Human 610 BeadChip and Human Omni 2.5 BeadChip arrays, respectively. Genotype quality control was performed on each cohort separately using the following inclusion criteria: SNP and sample genotyping success rate ≥ 95%, MAF ≥ 1%, Hardy-Weinberg equilibrium (HWE) P ≥ 1 × 10⁻⁶, and individual heterozygosity rate < 4 s.d. Samples with gender discrepancies or closely related individuals were removed.
In the replication stage, SNPs were genotyped using the Sequenom and TaqMan platforms. These SNPs had a genotype call rate ≥ 90%, and they passed a Bonferroni-corrected HWE P-value > 0.05 for the number of tested SNPs. In addition, the family data were checked for Mendelian errors using the Mendel 56 mistyping option.
Imputation was carried out separately in Mexicans and Finns. To reduce imputation runtime, we first pre-phased the Mexican and Finnish cohorts separately using SHAPEIT with the 1000 Genomes Project reference panel 57,58. Subsequently, imputation was carried out using IMPUTE2, also utilizing the 1000 Genomes Project reference panel 59,60. Following the IMPUTE2 guidelines and results from a previous study, we employed a cosmopolitan imputation strategy that included all populations from the 1000 Genomes Project to maximize accuracy and the number of imputed SNPs 16,61. Imputed data were filtered using the following quality control criteria: info ≥ 0.8, probability ≥ 0.9, MAF ≥ 1% and HWE P > 0.0001.
Bisulphite pyrosequencing. The methylation status of the CpG site containing the SNP rs28680850 was measured using bisulphite pyrosequencing with a custom-designed kit from EpigenDx, according to the manufacturer's standard protocol for bisulphite treatment and pyrosequencing.
Association analyses. Association testing at the CPAS step and in the subsequent GWAS was carried out for the binary TG status with logistic regression using an additive genetic model, including age, sex and BMI as covariates to control for their potential confounding effects on serum TGs at the allele screen step. For the quantitative CPAS-GWAS analyses of HDLC and TC, the HDLC and TC levels were first log-transformed to approximate a normal distribution, and multiple linear regression was used with age, sex, BMI, global ancestry estimates and the high TG status as covariates. For the quantitative CPAS-GWAS analysis of BMI, age and sex were used as covariates in linear regression, as no inflation was observed (Supplementary Figs 11-14). Imputed SNPs were analysed using SNPTEST v2.4 (ref. 62), and the score method was used to incorporate the imputation uncertainties into the regression model. Redundant SNPs with a MAF > 5% were pruned based on LD with R² ≥ 0.5 in Mexican controls. In the CPAS step for qualitative TGs, a Bonferroni correction for the 1,584,455 tested SNPs (P < 3.16 × 10⁻⁸) was used to identify variants that have different allele frequencies in Mexicans and Finns, resulting in 967,056 SNPs that were significantly different and carried forward to the TG GWAS (Supplementary Fig. 1). The set of SNPs (n = 694,185) that were variable in Mexicans but monomorphic in the Finnish cohorts was also included in the GWAS to capture additional Amerindian-specific TG-associated variants. A total of 1,661,241 SNPs were analysed in the TG GWAS (Supplementary Fig. 1). We also performed CPAS for the three additional traits, HDLC, TC and BMI, in a similar way (Table 2). The quantile-quantile plots (Supplementary Figs 11-14) of all the GWAS results with the CPAS SNPs demonstrate that most of the distribution behaves as the expected null, ruling out major confounders.
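The per-SNP models were fitted with SNPTEST; an approximate R equivalent of the additive logistic model for the binary TG trait is sketched below with simulated inputs, so the coefficients are illustrative only and the score-method handling of imputation uncertainty is not reproduced:

```r
# Sketch of the per-SNP additive logistic model; all inputs simulated.
set.seed(6)
n <- 2000
snp <- rbinom(n, 2, 0.25)                    # additive dosage 0/1/2
age <- rnorm(n, 45, 10)
sex <- rbinom(n, 1, 0.5)
bmi <- rnorm(n, 27, 4)
tg_case <- rbinom(n, 1, plogis(-2 + 0.3 * snp + 0.02 * bmi))

fit <- glm(tg_case ~ snp + age + sex + bmi, family = binomial)
summary(fit)$coefficients["snp", ]           # per-allele log-OR, SE, z, P
```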
Haplotype logistic regression, stepwise logistic regression, McKelvey and Zavoina pseudo-R² analysis, and the Mantel-Haenszel test were all performed in the R statistical package (http://www.r-project.org/). Conditional association analysis on rs964184 was carried out using SNPTEST v2.4 with the SNP genotype as a covariate.
Association analyses of the 15 TG SNPs genotyped in the replication stage were performed employing the same logistic regression model as in the GWAS, using the PLINK v1.08 package 63. In the replication stage, we also performed a quantitative trait analysis on log-transformed TG levels including sex and age as covariates using PLINK. For the two HDLC SNPs, linear regression was carried out using PLINK as well, including sex, age, BMI, high TG status and global ancestry as covariates. Part of the independent cohort (n = 2,121) was used for HDLC replication, as these samples have global ancestry estimates available. The family cohort was analysed using the quantitative trait locus association option of Mendel 64. After taking multiple testing into account using Bonferroni correction, P-values of 0.0033 (15 tested SNPs), 0.025 (two tested SNPs) and 0.05 (one tested SNP) were considered statistically significant in the replication stage for the TG, HDLC and BMI SNPs, respectively, when combining the P-values of the two replication cohorts by weighting by sample size using METAL 20, or in the subset of the independent cohort for HDLC.
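METAL's sample-size weighting combines directional z-scores across cohorts; a minimal sketch follows, with hypothetical per-cohort z-scores and sample sizes standing in for the replication cohorts:

```r
# Sketch of sample-size-weighted (METAL-style) meta-analysis; values hypothetical.
z <- c(unrelated = 2.8, family = 2.1)    # per-cohort association z-scores
n <- c(unrelated = 5000, family = 900)   # per-cohort sample sizes

w <- sqrt(n)
z_meta <- sum(w * z) / sqrt(sum(w^2))
p_meta <- 2 * pnorm(-abs(z_meta))        # two-sided combined P-value
```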
Analysis of combined rare and common variant effects was carried out using SKAT-C implemented in R, with a window size of 50 kb and a sliding window of 40 kb. To increase the number of rare variants in SKAT, we used a 5% frequency cutoff. Alternatively, we also calculated the rare-variant frequency cutoff as 1/√(2N), where N is the sample size (N = 3,701).
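As a quick worked check of that alternative cutoff with the stated sample size:

```r
# 1/sqrt(2N) rare-variant frequency cutoff for N = 3,701
N <- 3701
cutoff <- 1 / sqrt(2 * N)   # ~0.0116, i.e., a MAF of roughly 1.2%
```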
Local ancestry inference. To investigate whether the variants identified utilizing the cross-population allele screen approach reside in chromosomal regions enriched for Amerindian ancestry in the Mexican high TG cases, we carried out local ancestry estimation utilizing Local Ancestry in adMixed Populations using LD (LAMP-LD) 17. A three-population mixed model was assumed to estimate the proportions of the three ancestral populations (European, Amerindian and African) in modern Mexicans 65. The parental population reference panels were constructed from individuals in the Genetics of Asthma in Latino Americans 66 study, as described in detail previously 18, and LAMP-LD was run with default parameters, a window size of 300 and 15 hidden Markov model states, on each chromosome separately. To identify Amerindian-enriched regions associating with TGs, the standard scores of the difference in local Amerindian ancestry between the Mexican TG cases and controls were calculated for each region. A significance threshold of z-score > 2 was used to call ancestral enrichment. To calculate the percent difference between cases and controls for each ancestral population, the proportion of each parental population was estimated for every window in cases and controls separately, and the difference between cases and controls was calculated for each individual ancestry.
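One way to form such a per-window standard score is a standardized two-sample difference in mean local ancestry; the exact standardization used here is not spelled out, so the sketch below is an assumption, with simulated ancestry fractions:

```r
# Sketch of a per-window ancestry-enrichment z-score; data simulated, and the
# standardization is an assumed two-sample form, not necessarily the authors'.
set.seed(7)
anc_cases <- rbeta(1500, 6.0, 4.0)     # Amerindian fraction per case in a window
anc_ctrls <- rbeta(1500, 5.5, 4.5)     # Amerindian fraction per control

d  <- mean(anc_cases) - mean(anc_ctrls)
se <- sqrt(var(anc_cases) / 1500 + var(anc_ctrls) / 1500)
z  <- d / se                           # enrichment called at z > 2 in the text
```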
Analysis of positive natural selection. To examine whether the 8p21, 8p23.3 and chr11q23 TG risk regions have undergone partial selective sweeps, we searched for haplotypes that were unusually long given the frequency of the focal variant 67. Specifically, we first estimated extended haplotype homozygosity using the 'rehh' R package 68. Next, we calculated the integrated extended haplotype homozygosity (iHH) for both the ancestral and derived alleles of each genotyped SNP with MAF > 5%, and then calculated the standardized natural log ratio of the iHH between ancestral and derived alleles (iHS) 25. Similarly, we also calculated the iHS scores for imputed variants, but only in the two chr8 TG risk regions and the chr11 risk haplotype region due to computing time. All calculations were performed in the entire Mexican GWAS study sample and included all variants (MAF > 5%) without any ascertainment or CPAS screening to avoid a potential bias. We used the top 1% chromosome-wide absolute iHS (|iHS|) score (> 2.56) as a cutoff to identify SNPs showing extremely large iHS values.
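With the rehh package the authors cite, the scan reduces to a few calls; the sketch below assumes the rehh 2.x interface and placeholder file names, so details may differ from the version used for the paper:

```r
# Sketch of the iHS scan with 'rehh'; file names are placeholders.
library(rehh)

hh  <- data2haplohh(hap_file = "chr11.hap", map_file = "chr11.map")
scn <- scan_hh(hh)            # iHH for ancestral/derived alleles at each SNP
ihs <- ihh2ihs(scn)           # standardized iHS within allele-frequency bins

extreme <- subset(ihs$ihs, abs(IHS) > 2.56)   # top ~1% |iHS| cutoff used here
```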
Fatty meal study in Mexican cohort. The 57 Mexican participants underwent an oral fat tolerance test after a 12-hour overnight fast. The fatty meal contained 1,000 kcal; 72 g fat (saturated fat 65%, monounsaturated fat 30%, polyunsaturated fat 5%) with a polyunsaturated:saturated fat ratio of 0.08, 490 mg cholesterol, 50 g carbohydrate and 38 g protein, as described in detail earlier [69]. In this diet study, blood samples were drawn at the baseline and at 3, 4, 6 and 8 h postprandially. Postprandial TG response was calculated as an AUC, as described in detail earlier [70]. The intronic SIK3 variant rs139961185 was genotyped in the 57 participants, of which 20 had fasting TG levels < 1.7 mmol l−1 at the baseline (the low TG group) and 37 had fasting TG levels > 1.7 mmol l−1 at the baseline (the high TG group). To test for association between rs139961185 and postprandial TG clearance rate, a linear regression for TG AUC was performed using an additive genetic model and adjusting for the baseline TG status.
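A sketch of this postprandial analysis, assuming a (subjects x time points) matrix of TG measurements: the trapezoidal AUC over the 0-8 h draws, followed by a linear model of the AUC on rs139961185 dosage adjusting for baseline TG status.

```python
import numpy as np
import statsmodels.api as sm

times = np.array([0.0, 3.0, 4.0, 6.0, 8.0])   # hours of the blood draws

def tg_auc(tg_levels):
    """Trapezoidal AUC; tg_levels: (subjects x 5) TG at 0, 3, 4, 6 and 8 h."""
    return np.trapz(tg_levels, x=times, axis=1)

def test_rs139961185(auc, dosage, high_tg_baseline):
    """Additive-model linear regression adjusting for baseline TG status."""
    X = sm.add_constant(np.column_stack([dosage, high_tg_baseline]))
    return sm.OLS(auc, X).fit()
```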
Practical Non-Linear Energy Harvesting Model and Resource Allocation in SWIPT Systems
Simultaneous wireless information and power transfer (SWIPT) is a promising solution for enabling long-life and self-sustainable wireless networks. In this thesis, we propose a practical non-linear energy harvesting (EH) model and design a resource allocation algorithm for SWIPT systems. In particular, the algorithm design is formulated as a non-convex optimization problem for the maximization of the total harvested power at the EH receivers subject to quality of service (QoS) constraints for the information decoding (ID) receivers. To circumvent the non-convexity of the problem, we transform the corresponding non-convex sum-of-ratios objective function into an equivalent objective function in parametric subtractive form. Furthermore, we design a computationally efficient iterative resource allocation algorithm to obtain the globally optimal solution. Numerical results illustrate a significant performance gain in terms of average total harvested power for the proposed non-linear EH receiver model, when compared to the traditional linear model.
Introduction
Ever since wireless networks have been deployed in our surroundings, there has been an exponential growth of the data rate requirements that these networks need to satisfy, along with the increasing demand for new and improved services. In this context, several significant technologies, such as multiple-input multiple-output (MIMO), capacity-achieving codes, and small-cell networks, have been proposed to tremendously increase the speeds in wireless networks [4]. However, the demands for high quality of service (QoS) increase the amount of energy that wireless networks need to operate, in both the transmitters and the end users, i.e., the mobile devices. The bottleneck that slows down the evolution of communication networks is mainly at the mobile devices, due to their limited energy supply. In particular, the development of battery capacity has not been keeping up with the evolution of other network constituents. In the last decade, extensive research has been conducted to study alternative solutions that might offer ways to surpass the limitations caused by batteries. An appealing solution is energy harvesting (EH), which has become very popular in the field of communications for enabling self-sustainable mobile devices [5]. With the ability to harvest energy from different sources, such as solar and wind, the lifetime of communication networks can be increased, along with enabling self-sustainability at the mobile terminals. However, these natural sources have limited availability and are usually constrained by weather and geographical location. One of the possible solutions to go beyond these limitations is the concept of wireless power transfer (WPT), which was first introduced in Tesla's work [6], published in the early 20th century. Yet, researchers started investigating the possibilities of using WPT for charging end-user devices in wireless networks decades later [7]. The opportunity arose due to the rapid advancement of microwave technologies in the 1960s, along with the invention of rectifying antennas. The energy in WPT can be harvested from either ambient radio frequency (RF) signals, or in a dedicated manner from more powerful energy sources, e.g. base stations [5]. In the last decades, due to the increasing number of wireless communication devices and sensors, the focus has been set on recycling power from an omnipresent source of energy, i.e., harvesting power from the energy of RF signals. Recently, simultaneous wireless information and power transfer (SWIPT) has drawn much attention in the research community [8]–[10]. In order to unify the transmission of information with the process of EH, the receivers in SWIPT systems reuse the energy that the RF signals carry in order to supply the batteries of mobile devices while successfully decoding the information. In the case when certain users are the energy harvesters and others are information receivers, the concept is referred to as wireless information and power transfer (WIPT). Another similar emerging concept is the wireless powered network (WPN) [11,12,13], where the receivers rely solely on the power harvested from the appointed transmitter and use that power for their future transmissions. In the following, we focus on SWIPT/WIPT systems.
In this chapter, we give an overview of SWIPT, along with some specifics of receiver modelling in SWIPT systems. Then, we state the motivation of the thesis.
Simultaneous Wireless Information and Power Transfer
EH is a promising solution for overcoming the limitations introduced by energy-constrained mobile devices. Moreover, when considering RF signals as an energy harvesting source, we have an omnipresent, relatively stable, and controllable source of energy [14,15].
The harvested energy from the RF signals can be recycled and used as a supply to the mobile devices in both indoor and outdoor environments. With existing EH circuits available nowadays, we are able to harvest microwatts to milliwatts of power from received RF signals over the range of several meters for a transmit power of 1 Watt and a carrier frequency less than 1 GHz [16]. Thus, RF signals can be a viable energy source for devices with low-power consumption, e.g. wireless sensor networks [17]. In addition, we have the possibility to transmit energy along with the information signal [18], which is known as SWIPT.
The receivers in a SWIPT system have the possibility to decode the transmitted information and also harvest power that is stored in their batteries for future use. Ideally, the receivers in a SWIPT system would process the information while simultaneously harvesting energy from the same signal [14,18]. However, due to practical limitations, the EH receiver cannot in general reuse the power from the signal intended for decoding. As a result, separate receivers that decouple the processes of information decoding (ID) and EH using different policies have been presented in [19]–[30]. One of the approaches for realizing this goal is implementing a power splitting receiver.
Specifically, the power splitting receiver splits the power of the incoming signal into two power streams with power splitting ratios 1 − ρ and ρ, for EH and ID, respectively.
The power splitting ratio 0 ≤ ρ ≤ 1 is predetermined for the power splitting unit, which is installed at the analog front-end of the receiver. Power splitting receivers in the context of SWIPT systems have been studied widely in the literature [31]–[35].
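As a toy numerical illustration of the resulting ID/EH trade-off (the channel, noise and conversion-efficiency values below are placeholders):

```python
import numpy as np

p_rx = 1e-3      # received RF power at the antenna [W] (placeholder)
sigma2 = 1e-9    # receiver noise power [W] (placeholder)
eta = 0.5        # linear RF-to-DC conversion efficiency (placeholder)

for rho in (0.2, 0.5, 0.8):
    rate = np.log2(1 + rho * p_rx / sigma2)   # ID branch rate [bit/s/Hz]
    harvested = eta * (1 - rho) * p_rx        # EH branch DC power [W]
    print(f"rho={rho}: rate={rate:.2f} bit/s/Hz, "
          f"harvested={harvested * 1e3:.3f} mW")
```

A larger ρ improves the decoding rate at the cost of harvested power, and vice versa.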
Since we introduce the EH capability at the receiver side, a trade-off between ID and EH arises naturally in such systems. Therefore, new resource allocation algorithms that satisfy the requirements of SWIPT systems were investigated in [19]–[40]. The fundamental trade-off between channel capacity and the amount of harvested energy, considering a flat fading channel and a frequency selective channel, was studied in [14,18,19,21]. Moreover, [36] and [37] focused on transmit beamforming design in multiple-input single-output (MISO) SWIPT systems for separated and power splitting receivers, respectively. Additionally, the concept of SWIPT was included in MIMO system architectures in [41,42]. The optimization of beamformers with the objective to maximize the sum of total harvested energy under minimum required signal-to-interference-plus-noise ratio (SINR) constraints for multiple information receivers was considered in [38]. In [22,31,33,34,42], resource allocation algorithms for the maximization of the energy efficiency and spectral efficiency were developed in different network architectures including SWIPT. These works have shown that the energy efficiency can be improved by employing SWIPT in the considered communication systems. In more recent works, [27,43], the authors proposed multiuser scheduling schemes, which exploit multiuser diversity for improving the system performance of multiuser SWIPT systems. Besides, SWIPT has also been considered in cooperative system scenarios [39,44], where the performance of SWIPT systems is analysed by considering different relaying protocols. Another aspect that is widely studied in the literature is improving communication security in SWIPT systems [23]–[25], [30,35].
Namely, in order to facilitate EH at the receivers in SWIPT systems, the transmit power is usually increased. Due to that fact, the susceptibility to eavesdropping might also be increased. As a result, the authors in [45]–[51] designed algorithms that provide physical layer security in SWIPT systems. Furthermore, SWIPT has also been introduced in cognitive networks [40,48,50], where cooperation between the primary and secondary systems in a cognitive radio network with SWIPT was investigated. The abundance of research demonstrated above implies that SWIPT leads to significant gains in many aspects, for instance, energy consumption, spectral efficiency, and time delay. Therefore, SWIPT is a novel concept that unlocks the potential of RF energy for developing self-sustainable, long-life, and energy-efficient wireless networks.
Receivers in Wireless Information and Power Transfer Systems
In this section, we focus on a widely adopted receiver model for EH in WIPT systems.
In general, a WIPT system consists of a transmitter of the RF signal, e.g. a base station, that broadcasts the signal to the receivers, cf. Figure 1.1, as well as a receiver RF energy harvesting node. After the signal has been received at the EH node, a chain of elements process the signal as follows. The bandpass filter employed after the receiver antenna performs the required impedance matching and passive filtering, before the RF signal is passed to the rectifying circuit. The rectifier is a passive electronic device, usually comprising diodes, resistors, and capacitors, that converts the incoming RF power to direct current (DC) power, which can be stored in the battery storage of the receiver.
After the rectifier, a low-pass filter usually follows, in order to eliminate the harmonic frequencies and prepare the power for storage. A half-wave rectifier converts only one half of the alternating current (AC) wave, while the other half is blocked [52]. Even though they result in a lower output voltage, half-wave rectifiers comprise only one diode in the simplest case. Thus, the half-wave rectifier is the simplest form of rectifier, which is suitable for small mobile devices or wireless sensors. Multi-stage rectifiers repeat the diode, capacitor, and other corresponding elements; depending on the number of stages required for a particular rectifier, the circuit parts can be repeated until the N-th element is reached. This configuration offers an increase of the conversion efficiency of the circuit, as well as a reduction of the negative effects of a single circuit part. The rectifying circuits and their optimization have been a research topic for decades [7], although recently more attention has been drawn to them, due to their important role in WIPT/SWIPT systems [53].
In [53], different configurations of rectifying circuits have been presented, along with an illustration of their efficiency in converting the input RF power to output DC power.
On the other hand, the authors in [1] have developed a particular rectifier, suited for the Global System for Mobile Communications (GSM) frequency band, which was optimized to result in maximal conversion efficiency. The resulting configuration is a rectifying circuit with 36 stages in complementary metal-oxide semiconductor (CMOS) technology. Another work in [2] analyzed a circuit configuration that attempts to maximize the input before rectification by using a high-Q resonator preceding the rectifier. Moreover, the authors in [54] have studied three different techniques for impedance matching and their influence on the efficiency of the rectifier. The design of a dual-band rectifier for WIPT, whose efficiency is optimized in both the 2.4 GHz and 5.8 GHz industrial, scientific and medical (ISM) bands, was presented in [55]. In general, we expect that the input-output response of the EH circuit is non-linear, considering that in any possible configuration, the rectifying circuit has at least one non-linear element, such as the diode or diode-connected transistor. The most important parameter that describes the capability of the rectifying circuit is the RF-to-DC conversion efficiency. In general, the conversion efficiency is defined as the ratio between the output DC power and the input RF power:

η = P_DC-out / P_RF-in,

where P_RF-in is the power of the RF signal that enters the rectifier and P_DC-out is the converted output DC power. The relationship that the efficiency describes is shown to be non-linear, due to the non-linear nature of the circuit itself. This non-linearity is observed in all the measurements presented in [1]–[3] and [53], which were performed using practical EH circuits. Similar non-linear behaviour also appears when we observe the output DC power with respect to the input RF power, because they are also connected through the conversion efficiency of the circuit. As aforementioned, the problem of modelling the relationship between the input and output power of a rectifier through a general expression has not yet been reported in the literature. However, an accurate and tractable model is necessary in order to include the effect of practical rectifying circuits on the harvested power at the EH receivers when working with SWIPT communication systems.
In many recent works related to EH in communications, a specific linear model has been assumed for describing the harvested power after the rectifying circuit [19]–[30].
In particular, the output power is related to the input power through the conversion efficiency η [19]:

P_DC-out = η P_RF-in.  (1.2.2)

Furthermore, η is a constant that can take on values in the interval [0, 1] and is supposed to represent the capability of the RF-to-DC conversion circuit. The authors in [19]–[30], as well as many others, assume the same model as in (1.2.2) for representing the harvested power after the RF signal has been received and processed. Through (1.2.2), a linear behaviour between the input and output power is introduced in the system. With this model, the power conversion efficiency is independent of the input power level of the EH circuit. In practice, the end-to-end wireless power transfer is non-linear and is influenced by the parameters of the practical EH circuits, which are built using at least one non-linear element, as was previously shown. Thus, the linear assumption for the conversion efficiency and for the EH receiver model does not follow the actual characterization of practical EH circuits in general. More importantly, significant performance losses may occur in SWIPT systems when the design of the resource allocation algorithm is based on an inaccurate linear EH model.
Motivation
This thesis is motivated by the inability of the traditional linear EH receiver model to capture the non-linear characteristic of the RF-to-DC power conversion in practical RF EH systems. Specifically, the use of the conventional linear EH model may lead to a resource allocation mismatch in SWIPT systems, resulting in losses in the amount of total harvested energy in the system.
In this thesis, we first focus on modelling a practical EH receiver circuit, which is fundamentally important for the design of resource allocation algorithms in SWIPT systems. To this end, an accurate and tractable EH model, which reflects the non-linear nature of the practical EH circuit, is proposed. Alongside this model, we design a resource allocation algorithm for the maximization of the total harvested power at the EH receivers in the system, subject to QoS constraints. Furthermore, the proposed practical non-linear model is compared to the existing linear EH model used in the literature.
The rest of the thesis is organized in the following manner. In Chapter 2, we introduce the communication system model adopted in the thesis. Afterwards, we propose a practical non-linear EH model which is used in the resource allocation algorithm design.
Then, the results from the simulation framework are presented. Finally, we summarize the contributions of this thesis in Chapter 3.
Resource Allocation Algorithm for a Practical EH Receiver Model
In this chapter, we focus on designing a resource allocation algorithm for a practical EH receiver model in a SWIPT system. To this end, we first propose a practical non-linear EH receiver model, which we adopt as an objective function for the design of the resource allocation algorithm. We aim to maximize the average total harvested power at the EH receivers in the system under some QoS constraints. The optimization problem is formulated as a non-convex sum-of-ratios problem. After transforming the considered non-convex objective function in sum-of-ratios form into an equivalent objective function in parametric subtractive form, we present a computationally efficient iterative resource allocation algorithm for achieving the globally optimal solution. At the end of the chapter, numerical results for the underlying simulation framework are presented, where the proposed EH receiver model is compared to the existing linear EH receiver model.
System Model
The system model for this work is depicted in Figure 2.1. We focus on a downlink multiuser system, where a single-antenna base station broadcasts the RF signal to K single-antenna users, which are capable of ID and EH. We assume that the users have an additional power supply, such that they do not rely solely on RF EH for their battery supply. Transmission in the system is divided into T unit time slots. For each time slot n and each user k, we perform joint user selection and power allocation to optimize the system performance. The received signal at user k in time slot n is impaired by additive white Gaussian noise (AWGN) with zero mean and equal variance σ². Given perfect CSI at the user, the instantaneous capacity for user k and time slot n is defined by

C_k(n) = log₂(1 + P_k(n) h_k(n) / σ²),  (2.1.2)

where P_k(n) is the transmit power and h_k(n) the channel power gain of user k in time slot n. At each time slot, only a single user is chosen to receive the information, i.e., to perform ID, while the other K − 1 users can opportunistically harvest energy from the signal that is radiated from the base station. Considering the fact that we focus on maximizing the overall harvested power, in the following we focus only on the users selected for EH, while also satisfying the QoS constraints for the ID users.
At the EH receiver, the users receive the signal through their antennas, which are assumed to have ideal impedance matching. Then, the RF signal goes through the rectification process, which converts the incoming RF power into output DC power. For this part, instead of adopting the existing linear model for modelling the DC output power, a non-linear conversion function for a practical EH receiver model is proposed.
The proposed power conversion function captures the effect of the practical rectifier on the end-to-end RF-to-DC power conversion.
Practical EH Receiver Model Proposition
In this section, we propose a non-linear function that describes the input-output response of a practical EH receiver. As previously elaborated in Chapter 1, the existing linear model for the EH circuit does not capture the end-to-end non-linearity of a practical EH receiver in a WIPT system and can lead to a resource allocation mismatch for the corresponding system. This can be avoided by adapting the model to the practical EH circuits. For this purpose, we propose to use a logistic (sigmoidal) function, which is a special kind of quasi-concave function, to model the input-output characteristic of the EH circuits. Its standard shape is shown in Figure 2.2, and the general analytical expression has the following form:

f(x) = L / (1 + e^(−k(x − x₀))),  (2.2.1)

where L denotes the maximum value of the function, k the steepness of the curve, and x₀ its midpoint. The logistic function can also take many different forms, with more or fewer parameters, depending on the model and the specific application. It is used in many different fields of science, for instance, for modelling population growth, in machine learning, and as a utility function in networking.
To facilitate the development of a practical model for the end-to-end power conversion in a practical EH circuit, we transform (2.2.1) into a slightly different form of the logistic function:

P_DC = (Ψ − M Ω) / (1 − Ω),  with  Ω = 1 / (1 + exp(a b))  and  Ψ = M / (1 + exp(−a (P_RF − b))).  (2.2.2)

In (2.2.2), P_DC is the output DC power, while P_RF represents the RF power from the RF signal that enters the rectifier, after the RF signal has been received and processed. M denotes the maximum harvested power when the EH circuit is saturated, while a and b are constants related to the circuit specifications. We note that equation (2.2.2) takes into account the zero-input/zero-output response of EH circuits [56], which cannot be modelled by the function in (2.2.1). In the next section, (2.2.2) is used as a building block of the optimization problem formulation that follows, with the aim to design a resource allocation algorithm for the system model presented above.
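A minimal numerical sketch of (2.2.2), contrasted with the linear model in (1.2.2), is given below; the circuit parameters M, a and b are placeholder values rather than parameters fitted to measurement data.

```python
import numpy as np

M, a, b = 0.024, 150.0, 0.014   # illustrative circuit parameters (placeholders)

def harvested_dc_nonlinear(p_rf):
    """Non-linear EH model (2.2.2)."""
    psi = M / (1.0 + np.exp(-a * (p_rf - b)))   # traditional logistic part
    omega = 1.0 / (1.0 + np.exp(a * b))         # zero-input/zero-output correction
    return (psi - M * omega) / (1.0 - omega)

def harvested_dc_linear(p_rf, eta=0.5):
    """Linear EH model (1.2.2)."""
    return eta * p_rf

p = np.linspace(0.0, 0.06, 7)       # input RF power sweep [W]
print(harvested_dc_nonlinear(p))    # saturates near M for large input power
print(harvested_dc_linear(p))       # grows without bound with the input power
```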
Resource Allocation Problem Formulation
The aim of the following section is the design of a jointly optimal power allocation and user selection algorithm that maximizes the total harvested power at the EH receivers of the SWIPT system considered in Section 2.1, using the proposed practical EH receiver model. For this purpose, we adopt the power conversion function (2.2.2), which was modelled according to the logistic function, as the objective function of the optimization problem.
The optimization problem with respect to the instances of the user selection and power allocation optimization variables {s_k(n), P_k(n)} is formulated as follows.

Problem 2.1. EH Maximization:

maximize_{s_k(n), P_k(n)}  lim_{T→∞} (1/T) Σ_{n=1}^{T} Σ_{k=1}^{K} E_k(n)   subject to C1, C2, C3, C4, C5.

In the formulation of Problem 2.1, the function E_k(n) is the power conversion function proposed in (2.2.2), modified according to the parameters assumed in the system model:

E_k(n) = (Ψ_k(n) − M Ω) / (1 − Ω).  (2.3.2)

For notational simplicity, we rewrite (2.3.2) as

E_k(n) = Ψ_k(n) / (1 − Ω) − M Ω / (1 − Ω),  (2.3.3)

with

Ψ_k(n) = M / (1 + exp(−a (P^ER_k(n) − b))),  (2.3.4)

where Ψ_k(n) is the standard logistic function with respect to the received power Σ_{j=1}^{K} s_j(n) P_j(n), transmitted to all the users selected for ID in a specific time slot n. In the following development of the optimization problem, we use directly Ψ_k(n) from (2.3.4) to represent the harvested power at a corresponding EH receiver, while ignoring the constant part involving Ω, since it does not depend on the optimization variables. Without loss of generality, the term (1 − s_k(n)) is included inside the objective function, i.e., in the exponential part of Ψ_k(n). Thus, Problem 2.1 takes the following form:

Problem 2.2. EH Maximization:

maximize_{s_k(n), P_k(n)}  lim_{T→∞} (1/T) Σ_{n=1}^{T} Σ_{k=1}^{K} Ψ_k(n)   subject to C1, C2, C3, C4, C5.
The variable P^ER_k(n) = (1 − s_k(n)) Σ_{j=1}^{K} s_j(n) P_j(n), ∀n, k, represents the total power that is received at EH receiver (ER) k in a specific time slot n. The requirements of the system are reflected in constraints C1–C5 in Problems 2.1 and 2.2. Constraints C1 and C2 are imposed to guarantee that in each time slot n at most one user is served by the transmitter for information decoding. C3 imposes a constraint on the maximum average radiated power P_av, and C4 reflects the hardware limitation on the maximum power P_max that is allowed to be transmitted from the base station in each time slot.
Moreover, the QoS requirement is included in C5, where C_k(n) is the data rate for user k and time slot n, defined in (2.1.2). C5 implies that the minimum required data rate per user, C_k^req, needs to be achieved on average. Problem 2.2 is a mixed non-convex and combinatorial problem. In order to exploit standard convex optimization tools to efficiently solve the problem, Problem 2.2 needs to be transformed into an equivalent problem with a tractable structure (two optimization problems are equivalent if the solution of one is readily obtained from the solution of the other [58]). In the following, we present the solution of the optimization problem.
Solution of the Optimization Problem
The non-convexity of the optimization Problem 2.2 arises from both the objective function and the constraints. In particular, the objective function is a sum-of-ratios function, which does not enjoy convexity. Furthermore, the combinatorial nature is imposed by the binary integer constraint C1 for the user selection variable. The first step in solving the optimization problem is to transform the objective function.
Transformation of the Sum-of-ratios Objective Function
The sum-of-ratios optimization problem, which includes an objective function with a sum of rational functions, is a non-convex problem that cannot be directly solved via traditional optimization methods and optimization tools. Lately, several attempts at solving this non-linear optimization problem have been presented in the literature. For instance, the authors in [59] used the branch-and-bound method [60] along with insights from recent developments in fractional programming and convex underestimators theory in order to find the solution to a specific sum-of-ratios problem. However, their methods result in relatively high computational complexity and only yield an approximation to the globally optimal solution. Another work in [61] focused on maximizing a sum of sigmoidal functions subject to convex constraints, which resembles our problem formulation. The authors proved that the defined problem is NP-hard and used the branch-and-bound method to solve it, even though they were also only able to give an approximate solution to the problem. Along with the fact that these methods are not able to obtain the globally optimal solution, the branch-and-bound method is of exponential complexity and may increase the computational time severely. Although there already exist algorithms, such as the Dinkelbach method [62] or the Charnes-Cooper transformation [63], that solve the non-linear optimization problem for a single rational objective function, they cannot be applied to the case with a sum-of-ratios objective function. The algorithm recently introduced in [64], on the other hand, offers a solution to the sum-of-ratios problem that is proven to achieve the global optimum. The crux of the method is a transformation of the sum-of-ratios objective function into an equivalent parametric convex optimization function, such that the globally optimal solution can be found through an iterative algorithm. This algorithm has already been used in several works [65]–[67], mostly for the optimization of different types of system energy efficiency in different contexts. In [65], the authors focused on the design of a resource allocation algorithm for jointly optimizing the energy efficiency in downlink and uplink for networks with carrier aggregation. The authors in [66] used the algorithm in an energy efficiency maximization framework in cognitive two-tier networks. Moreover, a multi-cell, multi-user precoding was designed in [67], with the goal to maximize the weighted sum energy efficiency.
The main transformation that the author in [64] proposed converts the original sum-of-ratios objective function into a parametric subtractive form. This transformation allows standard optimization tools to be used and provides the ability to design an efficient algorithm for achieving the globally optimal solution of the original sum-of-ratios problem. The assumptions for this transformation require that the numerator of the rational function of every summand is concave, and that the denominator is convex and greater than zero. Thus, the transformed subtractive form is a concave function for every summand in the case of maximization. We introduce the transformation of the objective function from Problem 2.2 through the following theorem.
Theorem 1. Let s*_k(n) and P*_k(n) be the optimal solution of Problem 2.2. Then there exist two parameter vectors μ* and β*, with entries μ*_k(n) and β*_k(n), k ∈ {1, 2, . . . , K}, such that s*_k(n) and P*_k(n) are the optimal solution of the following transformed optimization problem:

Problem 2.3. EH Maximization – Sum-of-Ratios Objective Function Transformation:

maximize_{{s_k(n), P_k(n)} ∈ F}  Σ_n Σ_k μ_k(n) [ M − β_k(n) (1 + exp(−a (P^ER_k(n) − b))) ],

where F is the feasible solution set of Problem 2.2 and P^ER_k(n) = (1 − s_k(n)) Σ_{j=1}^{K} s_j(n) P_j(n), ∀n, k. In addition, the optimization variables s*_k(n) and P*_k(n) must satisfy the following system of equations:

β_k(n) = M / (1 + exp(−a (P^ER*_k(n) − b)))  and  μ_k(n) = 1 / (1 + exp(−a (P^ER*_k(n) − b))),  ∀n, k.

Proof. Please refer to Appendix A.1 for the proof.
As Theorem 1 suggests, there exists an optimization problem with an objective function in subtractive form that is equivalent to the sum-of-ratios Problem 2.2. More importantly, both optimization problems share the same optimal solution, and we can straightforwardly obtain the solution of the initial problem by solving the transformed optimization problem [58], provided that the transformed optimization Problem 2.3 can be solved. Therefore, we focus on the optimization problem with the equivalent objective function in the rest of the thesis.
Iterative Algorithm for Maximization of Harvested Energy at EH Receivers
In this subsection, we design a computationally efficient algorithm for achieving the globally optimal solution of the resource allocation optimization Problem 2.2. To obtain the solution of Problem 2.2, we adopt the equivalent objective function, such that the resulting resource allocation policy satisfies the conditions in Theorem 1. The algorithm has an iterative structure, consisting of two nested loops; its structure is presented in Table 2.1. (For a proof of convergence, please refer to [64].) In the inner loop, we solve the following optimization problem (Problem 2.4) for given μ_k(n) and β_k(n), ∀n, k, and obtain the optimal solution for the optimization variables s_k(n) and P_k(n).
Although the objective function in Problem 2.4 is in subtractive form and is concave, the transformed optimization problem is still non-convex due to the binary constraint C1. To obtain a tractable problem formulation, we handle the binary constraint C1 from Problem 2.4 in each iteration of the algorithm. For this purpose, we apply time-sharing relaxation.
In particular, by following the approach in [68], we relax the user selection variable s_k(n) in constraint C1 of Problem 2.2 to take on real values between 0 and 1, i.e., C1: 0 ≤ s_k(n) ≤ 1, ∀n, k. The user selection variable can now be interpreted as a time-sharing factor for the K users during one time slot n. With the time-sharing relaxation, the inner problem that we solve in each iteration takes the following form:
Problem 2.5. EH Maximization – Time-Sharing Relaxation:
To facilitate the time-sharing, we introduce an auxiliary variable in Problem 2.5, defined as P̃_k(n) = s_k(n) P_k(n), ∀n, k. The new optimization variable P̃_k(n) represents the actual transmitted power in the RF of the transmitter for user k at time slot n under the time-sharing assumption. It also resolves the coupling of the optimization variables P_k(n) and s_k(n), which is present in some of the constraints.
However, coupling of the variables is still present in the objective function after this reformulation. Thus, we perform another variable change. In particular, we define the variable P^virtual_k(n) = (1 − s_k(n)) Σ_{j=1}^{K} P̃_j(n), which represents the actual received power at EH receiver k in a specific time slot n. After these changes, the inner-loop optimization problem (Problem 2.6) is rewritten with respect to the optimization variables {s_k(n), P̃_k(n), P^virtual_k(n)}. Proof. Please refer to Appendix A.2 for the proof.
Equation (2.4.8) represents the convergence condition of the algorithm. For the update of the respective variables, the modified Newton method is used, as shown in (2.4.9). If m = 0, we have the well-known Newton method for the corresponding update. The modified, or damped, Newton method converges to the unique solution (μ*_i, β*_i), ∀i, while satisfying equations (2.4.2) and (2.4.3), with linear rate for any starting point [64,65]. The rate in the neighbourhood of the solution is quadratic, which follows from the convergence analysis of the Newton method.
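A hedged sketch of this outer-loop update is given below; inner_solver stands in for the convex inner problem of Table 2.1 and returns the numerators and denominators of the ratios, while the acceptance rule for the damping exponent m is one common choice rather than the exact rule of (2.4.9).

```python
import numpy as np

def outer_loop(inner_solver, K, tol=1e-6, max_iter=100, xi=0.5):
    """Damped-Newton update of (mu, beta) for the sum-of-ratios transform."""
    mu, beta = np.ones(K), np.zeros(K)
    for _ in range(max_iter):
        N, D = inner_solver(mu, beta)        # solve the inner convex problem
        phi1 = -N + beta * D                 # optimality residuals:
        phi2 = -1.0 + mu * D                 # beta = N/D and mu = 1/D at optimum
        if max(np.abs(phi1).max(), np.abs(phi2).max()) < tol:
            break                            # convergence condition
        m = 0                                # damped Newton; Jacobian is diag(D)
        while True:
            zeta = xi ** m
            beta_n = beta - zeta * phi1 / D
            mu_n = mu - zeta * phi2 / D
            N2, D2 = inner_solver(mu_n, beta_n)
            r_new = np.abs(-N2 + beta_n * D2).sum() + np.abs(-1 + mu_n * D2).sum()
            r_old = np.abs(phi1).sum() + np.abs(phi2).sum()
            if r_new <= (1 - 1e-3 * zeta) * r_old or m > 30:
                mu, beta = mu_n, beta_n      # accept the damped step
                break
            m += 1
    return mu, beta
```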
Dual Problem Formulation
In order to further investigate the structure of the solution, in this subsection we use duality theory for solving the transformed optimization problem, cf. Problem 2.6. With the corresponding transformations performed in the previous subsection, it can be shown that Problem 2.6 is jointly concave with respect to the power allocation and user selection variables. As a result, under some mild conditions, the solution of the dual problem is equivalent to the solution of the primal problem [58], i.e., strong duality holds. Thus, we can use duality theory to obtain the solution. To this end, we start with the formulation of the Lagrangian for Problem 2.6 in (2.4.11), where the set Λ = {α_k(n), λ(n), φ_k(n), γ, δ(n), ε(k), ζ_k(n), η_k(n), θ_k(n)}, ∀n, k, that contains all Lagrange multipliers is defined in order to simplify the notation. In (2.4.11), α_k(n) and φ_k(n), ∀n, k, are the Lagrange multipliers that account for constraint C2, i.e., that only one user is chosen in one time slot n, along with λ(n), ∀n, that accounts for constraint C1. γ is the Lagrange multiplier related to the constraint on the average radiated power implied by C3. δ(n) and ε(k), ∀n, k, account for the maximum power transmitted from the base station during time slot n in C4 and the minimum data rate requirement per user in C5, respectively. Furthermore, ζ_k(n), η_k(n), and θ_k(n), ∀n, k, are associated with the constraints C6–C8 related to the auxiliary optimization variable P^virtual_k(n), ∀n, k. The dual problem is given by

maximize_{Λ ⪰ 0}  minimize_{s_k(n), P̃_k(n), P^virtual_k(n)}  L.

It can be shown that Problem 2.6 satisfies Slater's constraint qualification and strong duality holds. Thus, from the Karush-Kuhn-Tucker (KKT) optimality conditions, the gradient of the Lagrangian with respect to the elements of the optimization variables vanishes at the optimum point. First, we consider the derivatives of the Lagrangian with respect to the instances of the optimization variables s_k(n), P̃_k(n), and P^virtual_k(n).
By exploiting the fact that the derivative of the Lagrangian with respect to the optimization variable P̃_k(n) vanishes at the optimum point, from (2.4.13) we obtain P̃_k(n) = s_k(n) P_k(n). From (2.4.15), taking the derivative of the Lagrangian with respect to P^virtual_k(n) and setting it to zero yields the power allocation in (2.4.16), which has the water-filling form

P̃_k(n) = s_k(n) [ ε(k) / ((γ + δ(n) + η_k(n)) ln 2) − σ² / h_k(n) ]⁺,  (2.4.16)

where [x]⁺ = max{x, 0}. The structure of the solution at the optimum point can be observed from the results presented above. In particular, it can be observed from (2.4.16) that the power allocation in the system follows the water-filling solution. The dual variables show the costs for realizing the specific power allocation. Namely, in (2.4.16), P̃_k(n) as an auxiliary variable is defined as the coupling between the true power allocation variable P_k(n) and the user selection variable. Regarding the power allocation P_k(n), which follows the water-filling policy, we can notice a ratio between the dual variables: the dual variable dedicated to the rate constraint appears in the numerator, and the dual variables connected to the power constraints of the transformed optimization problem appear in the denominator. The Lagrange multipliers ε(k), γ, δ(n), and η_k(n), ∀n, k, make sure that the transmitter transmits with a sufficient amount of power to fulfil the data rate requirements, while satisfying the average and maximum power constraints. Moreover, in equation (2.4.16) we can observe a term that is inverse to the channel value h_k(n) for user k and time slot n, which follows the form of water-filling, i.e., users with better channel conditions in a specific time slot are allocated more power.
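The water-filling structure of (2.4.16) can be illustrated generically: abstracting the ratio of dual variables into a single water level found by bisection, the allocation takes the familiar [level − σ²/h]⁺ form. The channel gains below are placeholders.

```python
import numpy as np

def water_filling(h, p_av, sigma2=1.0, iters=60):
    """Generic water-filling: p_i = [level - sigma2/h_i]^+ meeting budget p_av."""
    lo, hi = 0.0, p_av + sigma2 / h.min() + 1.0   # bracket for the water level
    for _ in range(iters):
        level = 0.5 * (lo + hi)
        p = np.maximum(level - sigma2 / h, 0.0)
        if p.mean() > p_av:
            hi = level                            # too much power: lower level
        else:
            lo = level                            # budget unused: raise level
    return p

h = np.array([0.2, 1.0, 3.5, 0.8])   # illustrative channel gains per slot
print(water_filling(h, p_av=2.0))    # better channels receive more power
```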
Results
In this section, we present simulation results to illustrate the system performance of the proposed resource allocation algorithm with respect to the non-linear practical EH receiver model proposed in Section 2.2. The important parameters adopted in the simulation, including the noise variances at the receivers, are summarized in Table 2.2. A first observation is that low input power levels lead to poor performance and underutilization at the EH receivers due to the behaviour of the non-linear EH circuits. Moreover, the QoS constraint at the ID users must be satisfied, which further reduces the portion of radiated power that can be harvested at the ERs. We also study the average total harvested power as a function of the maximum transmit power P_max, assuming that the distance of the users is 10 meters. The performance of the considered SWIPT system increases in both schemes when more users are present. It can also be observed that the average total harvested power is an increasing function of the maximum transmit power for the proposed scheme. However, this increasing trend is expected to persist only until the saturation region, i.e., the value of maximum harvested power at all the EH receivers, is reached. On the other hand, we can observe that the average total harvested power is almost constant with respect to P_max for the baseline scheme. In particular, the baseline scheme may cause a resource allocation mismatch, since it does not account for the non-linearity of the practical EH circuits, resulting in losses in the total harvested power.
Conclusion
In this thesis, we focused on the design of a resource allocation algorithm for a SWIPT system with a practical non-linear EH receiver model.

A.1. Proof of Theorem 1

Following the parametric algorithm in [72], the optimization Problem A.1 is equivalent to the following problem:
Problem A.2. EH Maximization – Equivalent Parametric Optimization:
Following the outline of the proof of Lemma 2.1 in [64], we define for Problem A.2 a Lagrangian-type function with the set Λ = {α_k(n), λ(n), φ_k(n), γ, δ(n), ε(k), ζ_k(n), η_k(n), θ_k(n)}, ∀n, k, which contains all the dual variables and is defined solely for notational simplicity. According to the Fritz-John optimality conditions [73], there must exist variables Λ*, α*_k(n), λ*(n), φ*_k(n), γ*, δ*(n), ε*(k), ζ*_k(n), η*_k(n), θ*_k(n), and μ*_k(n) such that the corresponding stationarity and complementary slackness conditions are satisfied, where

I_{P̃*} = { (n, k) | Σ_{k=1}^{K} P̃*_k(n) − P_max = 0, ∀n }.

Moreover, g_{k(n)}(P̃*_k(n)) = P_max − Σ_{k=1}^{K} P̃*_k(n). From Slater's condition, there exists a P̃_k(n) such that

g_{k(n)}(P̃_k(n)) < 0, ∀n, k.  (A.1.11)

Since the g_{k(n)}(P̃*_k(n)) are convex ∀n, k, we obtain the following:

∇g_{k(n)}(P̃*_k(n))ᵀ (P̃_k(n) − P̃*_k(n)) ≤ g_{k(n)}(P̃_k(n)) − g_{k(n)}(P̃*_k(n)) < 0, ∀n, k ∈ I_{P̃*}.

A.2. Tightness of the Time-Sharing Relaxation

In order to focus on the impact of the dual variable λ(n) related to constraint C2 in Problem 2.6, we consider the other dual variables α_k(n), φ_k(n), ζ_k(n), ε(k) as constants, ∀n, k. In the following, the derivative of the Lagrangian with respect to the user selection variable is rewritten in terms of y = φ*_k(n) − α*_k(n) − ζ*_k(n) P_max and of F_k(n) = ln(1 + ···), ∀n, k. At the optimum point, constraint C2 must be satisfied with equality. Moreover, we aim at maximizing the Lagrangian function, which follows from the dual problem formulation [58]. Thus, if the F_k(n) are different for every user k and time slot n, the decision about the optimal user selection is obtained according to

s_{k*}(n) = 1, where k* = arg max_k F_k(n).  (A.2.5)

F_k(n) is referred to as the marginal benefit achieved by the system by selecting user k. We note that the Lagrange multipliers in the scheduling policy depend only on the statistics of the channels. Hence, they can be calculated offline, e.g. using the gradient method, and then be used for online scheduling as long as the channel statistics remain unchanged. As a result, the optimal scheduling rule in (A.2.5) depends only on the CSI in the current time slot and the channel statistics, i.e., online scheduling is optimal, although the optimization problem considers an infinite number of time slots and long-term averages for the total harvested energy. At the optimum solution, the optimal value of the user selection variable is strictly 1 or 0, ∀n, k, in the case of ID and EH, respectively. Because the optimal selection converges to the boundary values at the optimum point, the time-sharing relaxation is proven to be tight.
A.3. Proof of Proposition 2.1
In order to prove Proposition 2.1,
Understanding the P-Loop Conformation in the Determination of Inhibitor Selectivity Toward the Hepatocellular Carcinoma-Associated Dark Kinase STK17B
As a member of the death-associated protein kinase family of serine/threonine kinases, STK17B has been associated with diverse diseases such as hepatocellular carcinoma. However, the role of the conformational dynamics of the phosphate-binding loop (P-loop) in determining the inhibitor selectivity profile toward STK17B is less understood. Here, multi-microsecond molecular dynamics (MD) simulations of STK17B in three different states (ligand-free, ADP-bound, and ligand-bound) were carried out to uncover the conformational plasticity of the P-loop. Together with principal component analysis, cross-correlation and generalized correlation motion analyses, secondary structural analysis, and community network analysis, the conformational dynamics of the P-loop in the different states were revealed: upon inhibitor binding, the P-loop flipped into the ADP-binding site, interacted with the inhibitor and the C-lobe, and strengthened the communication between the N- and C-lobes. These interactions contributed to the inhibitor selectivity profile toward STK17B. Our results advance our understanding of kinase inhibitor selectivity and offer possible implications for the design of highly selective inhibitors for other protein kinases.
INTRODUCTION
Protein kinases transfer the γ-phosphate group of ATP to serine, threonine, or tyrosine residues of their substrate proteins. This physiological process is called phosphorylation. Protein phosphorylation provokes cellular signal transduction cascades associated with cell differentiation, growth, homeostasis, and death (Pearce et al., 2010). Aberrant protein kinase function caused by either activating mutations or translocations is associated with numerous disease states, including cancer, Alzheimer's disease, Parkinson's disease, inflammation, and metabolic disease (Attwood et al., 2021; Cohen et al., 2021). Protein kinases are thus important therapeutic targets for drug discovery. Until now, 71 small-molecule kinase inhibitors have been approved by the FDA for the treatment of cancer and other diseases (Roskoski, 2021).
Despite the inspiring clinical benefits, kinase inhibitors still encounter a formidable challenge: the kinase selectivity profile. This is because the vast majority of protein kinase inhibitors bind to the conserved ATP-binding site, leading to poor selectivity of kinase inhibitors toward a unique kinase (Wu et al., 2015; Chen et al., 2020; Li C. et al., 2020). For example, Davis et al. (2011) previously explored the interaction of 72 kinase inhibitors with 442 kinases representing >80% of the human catalytic protein kinome and found that the kinase inhibitor selectivity profile is relatively narrow, with 10%-40% of inhibitors interacting with >60% of kinases, and each inhibitor interacting with more than one kinase. Therefore, developing a promising strategy to discover highly selective inhibitors is an area of intensive research in the kinase kinome (Lu et al., 2018; Lu et al., 2019a; Lu and Zhang, 2019).
To achieve inhibitor selectivity, several successful strategies have been reported. Covalent kinase inhibitors are a class of compounds that harbour a reactive, electrophilic warhead, which reacts with a nucleophilic cysteine residue at the target site to form a stable covalent adduct (Nussinov and Tsai, 2015; Lu and Zhang, 2017; Ni et al., 2020). These covalent inhibitors have the pharmacological advantages of high potency and selectivity. For instance, in the double mutant T790M/L858R epidermal growth factor receptor (EGFR), the FDA-approved osimertinib engages Cys797 at the ATP-binding site through a covalent bond (Jia et al., 2016; Nussinov et al., 2022). However, in the ATP-binding site, the availability of cysteine residues at the proper position is scarce for most kinases, rendering the design of covalent inhibitors a challenging task.
Harnessing the sequence differences of the ATP-binding site that control inhibitor selectivity has emerged as an alternative. One quintessential example is STK17B, a member of the death-associated protein kinase family of serine/threonine kinases (Pearce et al., 2010). Overexpression of STK17B plays a crucial role in hepatocellular carcinoma, and thus inhibition of STK17B catalytic activity in cells implies clinical utility in the treatment of this malignancy (Lan et al., 2018). The crystal structure of ADP-bound STK17B contains a small N-lobe and a large C-lobe (Figure 1A). The N-lobe mainly consists of five β-strands and one catalytic helix αC. The phosphate-binding loop (P-loop) connecting the β1 to the β2 adopts a "U" shape. The C-lobe is largely composed of helices. The activation loop (A-loop) that controls catalytic activity runs along the substrate binding groove. The flexible hinge domain connects the N-lobe to the C-lobe. ADP binds to the cleft between the two lobes, located under the P-loop. There are several reported STK17B inhibitors, including quercetin 1, dovitinib 2, and benzofuranone 3 (Supplementary Figure S1). However, these are non-selective or modestly selective inhibitors toward STK17B. Recently, Picado et al. (2020) reported a cell-active STK17B inhibitor, the thieno[3,2-d]pyrimidine PFE-PKIS 43 (Figure 1B), which had remarkable potency and selectivity toward STK17B against other homologous protein kinases. A crystal structure of PFE-PKIS 43 complexed with STK17B highlights a unique P-loop flip that interacts with the inhibitor. In addition to the crystal structure of the STK17B−PFE-PKIS 43 complex, five co-crystal structures of STK17B in complex with different inhibitors have been previously reported, including EBD (PDB ID: 3LMO), quercetin (PDB ID: 3LM5), UNC-AP-194 probe (PDB ID: 6Y6H), AP-229 (PDB ID: 6ZJF), and dovitinib (PDB ID: 7AKG). Structural superimposition of the five co-crystal structures shows that the P-loop conformation in these structures adopts ordered β-strands (Supplementary Figure S2), which is different from that in the crystal structure of the STK17B−PFE-PKIS 43 complex. However, the conformational dynamics of the P-loop in the STK17B−PFE-PKIS 43 complex remain unexplored.
Here, we performed multi-microsecond molecular dynamics (MD) simulations of STK17B in the ligand-free, ADP-bound, and ligand-bound states to characterize the conformational plasticity of the P-loop and its interplay with the ligand over long time-scales. We collected 27 μs of simulated trajectories overall, conducted in multiple replicates for the different states. Coupled with the analyses of principal component analysis (PCA), cross-correlation and generalized correlation motions, secondary structural elements, and community networks, the distinct conformational dynamics of the P-loop in the different states were presented. Our results will advance our understanding of kinase inhibitor selectivity and provide hints for the design of selective inhibitors for other protein kinases.
System Stability
Based on the available X-ray crystal structures of STK17B, we collected conformational ensembles from μs-length MD simulations. We simulated STK17B in the various states (i.e., ligand-free, ADP-bound, or ligand-bound) to explore differences and similarities during the MD simulations. For each system, MD simulations were performed in an explicit water environment, collecting multiple μs-length trajectories (i.e., 3 replicates of 3 μs each) and yielding a total sampling of 27 μs. Such multiple, independent μs-length MD trajectories have proved efficient for investigating the interdependent conformational plasticity of the kinase domains (i.e., P-loop and A-loop) and their interactions with the ADP or the ligand (Lu et al., 2019b; Zhang et al., 2019; Lu et al., 2021a; Lu et al., 2021b; Maloney et al., 2021; Ni et al., 2021; Hu et al., 2022).
We first monitored the root mean square deviation (RMSD) of the kinase Cα atoms, averaged over the three replicates for each system. As shown in Supplementary Figure S3, the kinase backbone reached a similar stability in the apo (ligand-free), ADP-bound, and ligand-bound states (i.e., the RMSD reaches 1-1.5 Å). This suggested that, upon ADP or ligand binding, the overall fold of the kinase shows no significant conformational differences during the simulations.
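A minimal sketch of this monitoring, assuming the superposition and RMSD are computed directly from Cα coordinate arrays (a Kabsch fit onto the initial structure followed by the RMSD):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between (N x 3) coordinate sets after optimal superposition."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)       # Kabsch rotation from the SVD
    d = np.sign(np.linalg.det(V @ Wt))
    D = np.diag([1.0, 1.0, d])              # guard against improper reflections
    R = V @ D @ Wt
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

# rmsd_trace = [kabsch_rmsd(frame, reference) for frame in trajectory_ca_coords]
```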
Coupled Motions of Kinase Intradomains
The dynamic correlation analysis was carried out to probe the interdependent dynamics among different kinase domains. Two distinct methods, the traditional Pearson cross-correlation (CCij) and the generalized correlation (GCij), were used for the correlation analysis (Shibata et al., 2020; Liang et al., 2021; Zhang et al., 2022a), which was conducted and averaged over all MD trajectories. The CCij analysis describes the collinear correlation between two residue Cα atoms (i and j), reflecting whether they move in correlated (CCij > 0) or anticorrelated (CCij < 0) motions. The GCij analysis monitors the degree of correlation between two residue Cα atoms (i and j), reflecting how much information about one atom's position is provided by that of the other atom. The GCij analysis cannot distinguish correlated from anticorrelated motions of the two atoms, i.e., it does not resolve the direction of the atoms' motions.
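A minimal sketch of the CCij computation from superposed Cα coordinates (the input array layout is an assumption):

```python
import numpy as np

def cross_correlation(coords):
    """coords: (n_frames x n_residues x 3) superposed Calpha coordinates."""
    dx = coords - coords.mean(axis=0)                 # fluctuations per residue
    dot = np.einsum('tix,tjx->ij', dx, dx) / len(coords)
    norm = np.sqrt(np.diag(dot))
    return dot / np.outer(norm, norm)                 # CC_ij in [-1, 1]

# cc = cross_correlation(ca_coords); cc[i, j] > 0 marks correlated motion
```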
The CCij matrix of STK17B, represented by a residue-by-residue plot of the Cα CCij coefficients, reveals a conserved pattern of correlated/anticorrelated motions in all apo, ADP-bound and ligand-bound states (Figure 2). The N-lobe containing the P-loop (residues 40-47) and the C-lobe show anticorrelated motions, which is also observed in other protein kinases such as anaplastic lymphoma kinase (ALK) (Liang et al., 2021), BCR-ABL (Zhang et al., 2022a) and epidermal growth factor receptor (EGFR). This suggests that the opposite movement of the N- and C-lobes favours the "open or closed" conformational transition of the nucleotide binding site underlying ADP/ATP and substrate binding. In addition, the difference matrix of the ADP- and ligand-bound states using the apo state as the reference indicates that the opposite movement of the N- and C-lobes was stronger in the ADP-bound state than in the ligand-bound state (Supplementary Figure S4). The GCij analysis was further used to unravel the global dependencies of the protein kinase domain motions (Figure 3). Like the CCij matrix, the GCij matrix of STK17B in the apo, ADP-bound and ligand-bound states showed a high degree of correlation between the N-lobe and the C-lobe. However, the protein in the ligand-bound system had slightly higher correlations than in the ADP-bound and apo systems, which was further supported by the difference matrix of the ADP- and ligand-bound states using the apo state as the reference (Supplementary Figure S5). This result indicated that ligand binding induced enhanced motions of the protein kinase domains.
Local Motions and Conformational Dynamics
In order to unravel the predominant collective motions of the different STK17B states and capture their essential degrees of freedom, we conducted principal component analysis (PCA) of STK17B in the apo, ADP-bound, and ligand-bound states. Based on the PCA, the first two principal modes of motion (i.e., principal components 1 and 2, PC1 and PC2) provide information regarding the large-amplitude motions of the different STK17B states, which represent their functional dynamics (Masterson et al., 2011; Chen et al., 2019; Chen et al., 2021; He et al., 2021; Okeke et al., 2021; Rehman et al., 2021). In the PCA, we selected all simulated trajectories for each system and subjected them to an RMS-fit to the same initial structure to rule out the translational and rotational motions of the protein.
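A minimal sketch of this procedure on the flattened, RMS-fitted Cα coordinates:

```python
import numpy as np

def pca_projection(coords, n_components=2):
    """coords: (n_frames x n_residues x 3) RMS-fitted Calpha coordinates."""
    X = coords.reshape(len(coords), -1)          # (frames, 3 * n_residues)
    X = X - X.mean(axis=0)
    # SVD of the centered data yields the principal modes directly
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    projections = X @ Vt[:n_components].T        # PC1, PC2 value per frame
    variance_ratio = S[:n_components] ** 2 / np.sum(S ** 2)
    return projections, variance_ratio
```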
As shown in Figure 4A, the apo protein sampled a confined distribution of conformations. Addition of ADP largely changed PC1 but did not change PC2 (Figure 4B), indicating that the protein kinase had increased dynamics in response to ADP binding. More remarkably, in the ligand-bound system (Figure 4C), both PC1 and PC2 were enlarged compared to the apo and ADP-bound systems. This observation suggested that ligand binding induced further enhanced conformational dynamics of STK17B, which was consistent with the GCij analysis. We further extracted the most represented conformation from each cluster in the ligand-bound state (L1-L3). As shown in Supplementary Figure S6, structural overlap of the three most represented conformations showed that the P-loop and A-loop in ligand-bound STK17B underwent obvious conformational changes. Indeed, the conformational landscapes of the different STK17B states based on the PCA results implied that STK17B was more dynamic in the presence of the ligand. To further validate this hypothesis, the PC1 of STK17B in the three different states was visualized on the 3D structure (Figure 5). The red arrows show the direction of residue motions, with the length proportional to the intensity of the motion. Remarkably, ligand binding (Figure 5C) triggered more dynamic movement of the P-loop and A-loop than in the apo (Figure 5A) and ADP-bound (Figure 5B) systems. For instance, no motion of the P-loop, but a weak motion of the A-loop, was observed in both the apo and ADP-bound systems. In agreement with the PCA results, both the P-loop and the A-loop of STK17B in the presence of the ligand were highly flexible, which may determine the selectivity profile of the ligand toward STK17B.
Secondary Structural Analysis of the Phosphate-Binding Loop
To further reveal the different secondary structures of the P-loop in the three different STK17B states, the Define Secondary Structure of Proteins (DSSP) method (Lei et al., 2019) was used to analyse the secondary structural elements of residues Tyr32−Ser55. Figure 6 shows the secondary structural profile of residues Tyr32−Ser55 for the three systems. In both the apo (Figure 6A) and ADP-bound (Figure 6B) systems, the residues Ile33−Arg41 and Val46−Ile51 formed two extended strands (β1 and β2), and residues Gly42−Ala45 at the P-loop adopted the bend conformation. These secondary structural elements of the β1, P-loop and β2 in the apo and ADP-bound states are consistent with those of typical protein kinases at the corresponding positions. In sharp contrast, in the ligand-bound state (Figure 6C), the β-strand conformation of residues Ile33−Arg41 and Val46−Ile51 was disturbed; in particular, residues Ile33−Arg41 adopted a disordered conformation. Together, the DSSP results indicated that the conformational changes of residues Ile33−Arg41 induced by ligand binding may have an important role in the control of inhibitor selectivity toward STK17B.
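A sketch of such a per-residue assignment using MDTraj's DSSP implementation; the file names and residue selection are placeholders:

```python
import mdtraj as md

traj = md.load('stk17b.xtc', top='stk17b.pdb')      # placeholder file names
sel = traj.topology.select('resSeq 32 to 55')        # Tyr32-Ser55 region
dssp = md.compute_dssp(traj.atom_slice(sel), simplified=True)
# dssp is (n_frames x n_residues) with 'H' (helix), 'E' (strand), 'C' (coil);
# fraction of frames in which each residue is an extended strand:
strand_fraction = (dssp == 'E').mean(axis=0)
```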
Community Network Analysis
We next performed community network analysis to reveal the altered community networks of STK17B in the apo, ADP-bound, and ligand-bound states. The whole set of simulated trajectories was used for the community network analysis. Two Cα atoms within a cut-off distance of 4.5 Å for an occupation time of >75% of the simulation time were classified into the same community (Sethi et al., 2009; Liang et al., 2020; Li et al., 2021a; Foutch et al., 2021; Tian et al., 2021). Each community is represented by a coloured circle whose size is related to the number of residues it includes. The strength of the connection between two communities is represented by the width of the stick that connects them. Figure 7 shows the communities of the different STK17B states. In the apo system (Figure 7A), there are nine communities. Community 1 contains the P-loop, the helix αC, and the β3-β5. Community 2 consists of the helix αD and the β6-β7. Community 9 largely includes the A-loop. There was a strong connection between community 1 and community 2, whereas the communication between community 1 and community 9 was weak. This observation indicated that there was no information flow between the P-loop and the A-loop in the apo system. In the ADP-bound system (Figure 7B), community 1 diminished, consisting only of the helix αC. The sizes of community 2 and community 9 in the ADP-bound system were similar to those in the apo system. However, the information flow connecting community 1 with community 2 and community 1 with community 9 was markedly weaker in the ADP-bound system than in the apo system. This indicated that upon ADP binding to STK17B, the inter-domain interaction between the P-loop in the N-lobe and the helix αD in the C-lobe became weaker compared to the apo system. In the ligand-bound system (Figure 7C), community 1 was enlarged compared to the ADP-bound system, matching the apo system. Community 1 in the ligand-bound system consists of the P-loop, the helix αC, and the β3-β5. More significantly, the communication between community 1 and community 2 in the ligand-bound system was enhanced compared to the ADP-bound system, with a strength resembling that of the apo system. This observation suggested that upon ligand binding to the ADP-binding site, the information flow between the P-loop in the N-lobe and the helix αD in the C-lobe became stronger compared to the ADP-bound system. This enhanced interaction between the two lobes may promote inhibitor binding and selectivity toward STK17B.
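A hedged sketch of the graph construction and a Girvan-Newman community split (via networkx), as one possible stand-in for the community detection used here; the coordinate array is an assumed input:

```python
import itertools
import numpy as np
import networkx as nx
from networkx.algorithms.community import girvan_newman

def build_contact_graph(ca_coords, cutoff=4.5, occupancy=0.75):
    """Edge between residues whose Calpha stay within cutoff for >75% of frames."""
    n_frames, n_res, _ = ca_coords.shape
    G = nx.Graph()
    G.add_nodes_from(range(n_res))
    for i, j in itertools.combinations(range(n_res), 2):
        d = np.linalg.norm(ca_coords[:, i] - ca_coords[:, j], axis=1)
        if (d < cutoff).mean() > occupancy:
            G.add_edge(i, j)
    return G

# G = build_contact_graph(ca_coords)
# communities = next(girvan_newman(G))   # first split into residue communities
```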
Comparative Binding Modes
Community network analysis implied strong interactions between the N- and C-lobes in response to ligand binding. To further elucidate the conformational arrangement of the two lobes of the protein kinase and the detailed interactions of ADP and the ligand with STK17B, the most representative conformations of the STK17B-ligand and STK17B-ADP complexes were obtained using cluster analysis of the three simulated trajectories (Liu et al., 2018; Xie et al., 2019). As shown in Figure 8A, in the ligand-bound state there was a significantly disordered conformation of the P-loop, especially the β1, in good agreement with the DSSP results. Owing to the disordered P-loop conformation, Arg41 at the β1 flipped into the ADP-binding site and formed hydrogen bonding or salt bridge interactions with residues Glu117 and Asn163 at the C-lobe and with the carboxylic acid of the ligand. The hydrogen bond occupancies are summarized in Supplementary Table S1. These interactions promoted strong communication between the N- and C-lobes, which contributed to the increased selectivity of the ligand for STK17B. Simultaneously, the carboxylic acid of the ligand also interacted with the catalytic residue Lys62 through a salt bridge, and Lys62 in turn formed salt bridge interactions with Glu80 at the helix αC. In addition, the N1 of the thieno[3,2-d]pyrimidine formed a hydrogen bond with the backbone amide of Ala113 at the hinge domain. In contrast, in the ADP-bound state (Figure 8B), the β1 and β2 formed two antiparallel strands, consistent with the DSSP results. Owing to the ordered P-loop conformation, Arg41 at the β1 protruded into the solvent and had no interactions with the C-lobe, markedly different from the ligand-bound state. In the hinge domain, the backbones of residues Glu111 and Ala113 formed two hydrogen bonds with the adenine moiety of ADP. The hydrogen bond occupancies are summarized in Supplementary Table S2. The catalytic residue Lys62 formed salt bridges with the α- and β-phosphate moieties of ADP, and the Mg2+ ion was coordinated by the α- and β-phosphate moieties, the carboxylic moiety of Asp179, and the carbonyl moiety of Asn163. Collectively, the comparative binding modes of ADP and the ligand with STK17B highlighted that the unique P-loop conformation induced by ligand binding played a determining role in the increased selectivity of the ligand for the protein kinase. Given the important role of the salt bridge interactions between the carboxylic acid moiety of the ligand and Arg41, it is advisable to retain the carboxylic acid moiety in future drug design toward STK17B.
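A hydrogen-bond occupancy of the kind reported in Supplementary Tables S1/S2 can be estimated along the lines of the following hedged sketch using MDAnalysis; the file names, the ligand residue name (LIG) and its carboxylate atom names (O1/O2) are illustrative assumptions only, not the parameters used in this study.

```python
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

# Placeholder topology/trajectory
u = mda.Universe("complex.prmtop", "traj.nc")

hb = HydrogenBondAnalysis(
    u,
    donors_sel="resid 41 and name NE NH1 NH2",   # Arg41 guanidinium nitrogens
    hydrogens_sel="resid 41 and name HE HH11 HH12 HH21 HH22",
    acceptors_sel="resname LIG and name O1 O2",  # assumed carboxylate oxygens
    d_a_cutoff=3.5,
    d_h_a_angle_cutoff=135,
)
hb.run()

# Occupancy: fraction of frames with at least one Arg41-carboxylate hydrogen bond
per_frame = hb.count_by_time()
print(f"occupancy = {(per_frame > 0).mean():.1%}")
```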
CONCLUSION
In the present study, collective sampling of 27 μs of MD simulations, coupled with PCA, correlated motion analysis, DSSP, and community network analysis, revealed the effect of the conformational dynamics of the P-loop on the inhibitor selectivity profile toward STK17B. Ligand binding increased the conformational plasticity of STK17B. Compared to the apo and ADP-bound STK17B, the P-loop, especially the β1, adopted a disordered conformation in the presence of the ligand. This unusual P-loop conformation caused residue Arg41 at the β1 to flip into the ADP-binding site, where it interacted with the carboxylic acid moiety of the ligand and with residues Glu117 and Asn163 at the C-lobe. These interactions in the ligand-bound state enhanced the information flow between the N- and C-lobes, as observed in the community network analysis, which played an essential role in controlling the inhibitor selectivity toward STK17B. Owing to the importance of the salt bridge interactions between the carboxylic acid moiety of the ligand and Arg41 in maintaining the unique, disordered P-loop conformation, we suggest retaining the carboxylic acid moiety in future drug design toward STK17B. These results shed light on the structural basis of the selectivity of the inhibitor for STK17B, which may be useful for the design of highly selective inhibitors of other protein kinases.
System Preparation
The co-crystal structures of STK17B in complex with ADP (PDB ID: 6QF4) (Lieske et al., 2019) or PFE-PKIS 43 (PDB ID: 6Y6F) (Picado et al., 2020) were downloaded from the Protein Data Bank (PDB). The missing residues E191−E194 in 6QF4 and C187−I195 in 6Y6F at the A-loop were modelled using the MODELLER program (Webb and Sali, 2014). The ADP molecule in 6QF4 was removed to serve as the ligand-free STK17B (apo STK17B). The force field parameters for ADP and Mg2+ were obtained from the AMBER parameter database (www.amber.manchester.ac.uk), and the generalized AMBER force field (GAFF) (Wang et al., 2004) was used for PFE-PKIS 43. Partial charges for PFE-PKIS 43 were computed using the RESP HF/6-31G* method (Bayly et al., 1993) through the antechamber module in AMBER 18 (Case et al., 2005) and the Gaussian 09 program. The AMBER ff14SB force field (Maier et al., 2015) was used for the protein, and the TIP3P model was used for water molecules (Jorgensen et al., 1983). The three simulated systems were embedded in a truncated octahedral TIP3P explicit water box with a 10 Å boundary, and Na+ counterions were added to neutralize the total charge. Then, 0.15 mol/L NaCl was added to mimic the physiological environment.
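As a minimal sketch of the described setup (ff14SB, TIP3P, truncated octahedron with a 10 Å buffer, Na+ counterions), a tleap input can be generated as below; all file names are placeholders, and the 0.15 mol/L NaCl step is only indicated in a comment.

```python
# Write a minimal tleap input for the solvated apo system (placeholder names)
tleap_input = """\
source leaprc.protein.ff14SB
source leaprc.water.tip3p
mol = loadpdb stk17b_apo.pdb
solvateoct mol TIP3PBOX 10.0
addions mol Na+ 0
saveamberparm mol stk17b_apo.prmtop stk17b_apo.inpcrd
quit
"""
with open("tleap.in", "w") as handle:
    handle.write(tleap_input)
# Run with: tleap -f tleap.in
# Extra Na+/Cl- pairs for 0.15 mol/L NaCl would be added similarly,
# e.g. with addionsrand before saving the parameters.
```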
Molecular Dynamics Simulations
MD simulations were carried out using the AMBER 18 program (Case et al., 2005). Two rounds of minimization of the three simulated systems were performed, using the steepest descent and conjugate gradient algorithms. This simulation protocol has also been employed in recent studies of protein conformational dynamics (Lu et al., 2019c; An et al., 2021; Liu et al., 2021; Zhang et al., 2022b). Then, each system was heated from 0 to 300 K over 1 ns of MD simulation in the canonical ensemble (NVT), imposing position restraints of 100 kcal/mol·Å2 on the solute atoms. Finally, three independent 3 μs replica simulations were performed with random initial velocities under isothermal-isobaric (NPT) conditions. An integration time step of 2 fs was used. The SHAKE algorithm was used to constrain all bond lengths involving hydrogen atoms (Ryckaert et al., 1977). The particle mesh Ewald (PME) method was used to treat the long-range electrostatic interactions (Darden et al., 1993), while a 10 Å nonbonded cut-off was used for the short-range electrostatic and van der Waals interactions.
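A production-phase input file consistent with this protocol (NPT, 2 fs time step, SHAKE, 10 Å cut-off with PME) might look as follows; the step count and output frequencies are illustrative values, not the exact settings used here.

```python
# A sketch of an AMBER production mdin matching the protocol above
production_mdin = """\
Production NPT
 &cntrl
  imin=0, irest=1, ntx=5,
  nstlim=1500000000, dt=0.002,  ! 1.5e9 steps x 2 fs = 3 us
  ntc=2, ntf=2,                 ! SHAKE on bonds involving hydrogen
  cut=10.0,                     ! short-range cut-off; PME beyond
  ntb=2, ntp=1, pres0=1.0,      ! constant pressure (NPT)
  ntt=3, gamma_ln=2.0,          ! Langevin thermostat
  temp0=300.0,
  ntpr=5000, ntwx=5000,         ! illustrative output frequencies
 /
"""
with open("prod.in", "w") as handle:
    handle.write(production_mdin)
```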
Principal Component Analysis
Principal component analysis (PCA) has been widely used to elucidate large-scale collective motions of biological macromolecules during MD simulations (Li et al., 2020b; Li et al., 2021b; Feng et al., 2021); it transforms a series of potentially correlated observations into orthogonal vectors that capture large-amplitude motions. Among these vectors, the first two principal components (PC1 and PC2) describe the dominant motions during the MD simulations. Here, the PCs were generated from the coordinate covariance matrix of the Cα atoms of the STK17B protein, and all collected frames were projected onto PC1 and PC2.
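A minimal sketch of this procedure, assuming placeholder file names, is shown below: Cα coordinates are superposed, flattened, and projected onto PC1/PC2 with scikit-learn.

```python
import mdtraj as md
from sklearn.decomposition import PCA

traj = md.load("traj.nc", top="complex.prmtop")        # placeholder files
ca = traj.atom_slice(traj.topology.select("name CA"))  # Calpha atoms only
ca.superpose(ca, frame=0)                              # remove rigid-body motion

X = ca.xyz.reshape(ca.n_frames, -1)                    # (frames, 3 * n_CA)
pca = PCA(n_components=2)
projection = pca.fit_transform(X)                      # frames on PC1/PC2
print("explained variance ratios:", pca.explained_variance_ratio_)
```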
Generalized Correlation Analysis
Generalized correlation (GCij) analysis was performed to monitor the correlated motions of residues (He et al., 2022; Wang et al., 2022; Zhuang et al., 2022). To quantify how much information about one atom is provided by another atom, the mutual information (MI) was calculated using Eq. 1:

$$MI[x_i, x_j] = \iint p(x_i, x_j) \ln \frac{p(x_i, x_j)}{p(x_i)\,p(x_j)} \, dx_i \, dx_j \quad (1)$$

This can be expressed through the known measure of entropy (Eq. 2):

$$H[x] = -\int p(x) \ln p(x) \, dx \quad (2)$$

so that the correlation between pairs of atoms $x_i$ and $x_j$ can be calculated from the marginal Shannon entropies $H[x_i]$ and $H[x_j]$ and the joint entropy term $H[x_i, x_j]$ (Eq. 3):

$$MI[x_i, x_j] = H[x_i] + H[x_j] - H[x_i, x_j] \quad (3)$$

The $MI[x_i, x_j]$ values can be further normalised to obtain the normalised generalized correlation coefficients ($GC_{ij}$) as Eq. 4:

$$GC_{ij} = \left\{1 - \exp\left(-\frac{2\,MI[x_i, x_j]}{d}\right)\right\}^{1/2} \quad (4)$$

where $d$ represents the dimensionality of $x_i$ and $x_j$.
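The normalisation in Eq. 4 can be applied directly once an MI estimate is available; a small helper, with d = 3 for Cartesian atomic fluctuations, is shown below.

```python
import numpy as np

def generalized_correlation(mi, d=3):
    """Normalise a mutual-information value to [0, 1] via Eq. 4.

    d is the dimensionality of the fluctuation vectors (3 for Cartesian atoms).
    """
    return np.sqrt(1.0 - np.exp(-2.0 * mi / d))

# e.g. an MI of 0.8 nats between two Calpha fluctuation vectors:
print(f"GC = {generalized_correlation(0.8):.2f}")
```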
Cross-Correlation Analysis
Based on the Pearson coefficients between the fluctuations of the Cα atoms, the cross-correlation matrix (CCij) was calculated to describe the coupling of the motions between protein residues (Li et al., 2020b; Aledavood et al., 2021; Hernández-Alvarez et al., 2021; Wang et al., 2021). CCij was computed using Eq. 5:

$$C_{i,j} = \frac{c_{i,j}}{c(i,i)^{1/2}\, c(j,j)^{1/2}} \quad (5)$$

where $c_{i,j} = \langle \Delta r_i \cdot \Delta r_j \rangle$ is the covariance of the positional fluctuations of atoms $i$ and $j$. Positive CCij values indicate that the two atoms $i$ and $j$ move in the same direction, whereas negative CCij values indicate anti-correlated motions between the two atoms.
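A compact NumPy implementation of Eq. 5, assuming an already aligned coordinate array of shape (n_frames, n_atoms, 3), is:

```python
import numpy as np

def cross_correlation_matrix(xyz):
    """Pearson cross-correlation (Eq. 5) from aligned coordinates.

    xyz: array of shape (n_frames, n_atoms, 3); alignment is assumed done.
    """
    fluct = xyz - xyz.mean(axis=0)                             # Delta r per frame
    cov = np.einsum("tix,tjx->ij", fluct, fluct) / len(fluct)  # c(i, j)
    norm = np.sqrt(np.diag(cov))
    return cov / np.outer(norm, norm)                          # C(i, j) in [-1, 1]
```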
Community Network Analysis
Community networks were analyzed to uncover the inter-community interactions using the NetworkView plugin in VMD (Sethi et al., 2009; Marasco et al., 2021). In this analysis, the Cα atoms of STK17B were selected as nodes to represent their corresponding residues. Edges were defined between nodes whose distance was within a cut-off of 4.5 Å for more than 75% of the simulation time. The edge weight between nodes was calculated using Eq. 6:

$$w_{ij} = -\log\left(|C_{ij}|\right) \quad (6)$$

where $i$ and $j$ represent the two nodes and $C_{ij}$ is their correlation coefficient.
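For illustration, the following sketch builds such a weighted residue network and partitions it with the Girvan-Newman algorithm via NetworkX (the original analysis used the NetworkView plugin in VMD); the correlation matrix and contact mask here are random placeholders standing in for the computed correlation and contact data.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import girvan_newman

n_res = 285                                   # placeholder residue count
rng = np.random.default_rng(0)
corr = rng.uniform(-1, 1, (n_res, n_res))     # stand-in for C_ij / GC_ij
contact = rng.random((n_res, n_res)) > 0.9    # stand-in for 4.5 A / >75% mask

G = nx.Graph()
for i in range(n_res):
    for j in range(i + 1, n_res):
        if contact[i, j]:
            # Eq. 6: w_ij = -log|C_ij|; stronger correlation -> shorter edge
            G.add_edge(i, j, weight=-np.log(max(abs(corr[i, j]), 1e-6)))

# First Girvan-Newman split into communities
communities = next(girvan_newman(G))
print(f"{len(communities)} communities at the first split")
```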
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
Synthesis and Characterization of Porous CaCO3 Vaterite Particles by Simple Solution Method
Appropriately engineered CaCO3 vaterite has interesting properties such as biodegradability, a large surface area, and unique physical and chemical properties that allow a variety of uses in medical applications, mainly in dental materials as a scaffold. In this paper, we report the synthesis of vaterite from Ca(NO3)2·4H2O without porogen to obtain a highly pure and porous microsphere as a raw material for calcium phosphate scaffolds in our future development. The CaCO3 properties were investigated at two different temperatures (20 and 27 °C) and stirring speeds (800 and 1000 rpm) and at various reaction times (5, 10, 15, 30, and 60 min). The as-prepared porous CaCO3 powders were characterized by FTIR, XRD, SEM, TEM, and BET methods. The results showed that vaterite with a purity of 95.3%, a crystallite size of 23.91 nm, and a porous microsphere with a lowest pore diameter of 3.5578 nm was obtained at a reaction time of 30 min, a reaction temperature of 20 °C, and a stirring speed of 800 rpm. A more spherical microsphere of smaller size, whose nanostructure contained multiple primary nanoparticles, was obtained at the lower stirring speed (800 rpm) at the reaction time of 30 min. One of the outstanding results of this study is the formation of a porous vaterite microsphere with a pore size of ~3.55 nm without any additional porogen or template, using a simple mixing method.
Introduction
Calcium carbonate is one of the most commonly used compounds in nature and industry because of its biocompatibility and non-toxic properties, as well as its other functional structures, and it can fulfill various biological applications such as drug delivery and bone regeneration in dental materials as a scaffold [1-3]. Under ambient conditions, anhydrous crystalline calcium carbonate forms calcite; under certain conditions, it forms aragonite and vaterite. Their crystallization in natural systems is often preceded by the formation and subsequent transformation of amorphous calcium carbonate (ACC) [4-9]. Calcite and aragonite are stable forms with trigonal and orthorhombic polymorphic crystals, while vaterite has a hexagonal crystal system. Among the polymorphs, vaterite is the most unstable phase [2,7,8,10-12]. Vaterite is used for biomaterial applications such as abrasive agents, bone substitutes, and drug delivery systems [7]. In the development of bone substitutes, a porous structure and pure vaterite are preferable to enhance the formation of carbonate apatite [13]. Owing to its metastability and these applications, the synthesis of pure, porous vaterite remains a challenge.
Vaterite particles can be prepared by the solution method, with precipitation or a carbonation process through CO2 bubbling. In general, factors such as the solvent, temperature, stirring speed, pH of the medium, ion concentration, and additives influence the morphology and size of the resulting vaterite particles [11,12,14]. Regarding the effect of stirring speed, for example, it was found that higher agitation speeds favor the formation of vaterite microspheres with a smaller size distribution [15]. However, most of these approaches are laborious or require complicated conditions and specialized equipment [6,16]. The simplified method involves mixing saturated aqueous solutions containing calcium and carbonate ions. The mixing method is very cost-effective, quick, and easy to perform and requires only simple instruments. In addition, it can be extended to the industrial production of vaterite particles [7,11,17,18]. The excellent properties of vaterite include a high specific surface area, a higher solubility than calcite and aragonite, high dispersion, a lower specific gravity, a spherical shape, and a porous internal structure with a particle diameter of 0.05 to 5 µm [11,19,20]. However, controlling the phase transformation of ACC to form pure vaterite is still a challenge. Therefore, there are two main tasks in the synthesis of porous vaterite CaCO3: achieving the transformation of ACC to pure vaterite and forming a porous structure.
Among the numerous factors that influence the precipitation of calcium carbonate polymorphs, one of the most determining is the presence of various foreign ions or molecules in the aqueous solution from which the carbonate precipitates. The phase transformation of ACC to pure vaterite is governed by the kinetics of the ions and molecules present (from precursor or additive) and by the saturation index of the solution, which determine whether vaterite persists or aragonite or calcite forms. These factors depend on the reaction temperature, aging time, reaction time, and stirring speed [8,20]. Jiang (2018) reported that increasing the aging time from 0 to 42 h reduced the amount of vaterite from 90.4% to 81.4%, and increasing the reaction temperature from 0 °C to 60 °C reduced it from 85.8% to 70.2% [20]. Ševčík et al. (2015) found optimal conditions for the preparation of pure vaterite (≥99 wt.%) at 60 °C and a stirring speed of 600 rpm without additives, using CaCl2·H2O and KCO3 as precursors [8]. It has also been remarked that the phase transformation of ACC to calcite and vaterite occurs within a short reaction time of 3 min and that, under particular conditions, the product eventually either remains calcite or transforms into vaterite [10]. Due to the unstable phase of vaterite, other researchers proposed adding ethylene glycol (EG) or biopolymers such as carboxymethyl inulin (CMI) as stabilizers to prevent the conversion of vaterite to other CaCO3 polymorphs (aragonite or calcite) during the dynamic precipitation process [21,22]. In contrast, fast stirring may also inhibit the conversion of vaterite to calcite [15]. Furthermore, the stirring speed affects the size and morphology, with higher stirring speeds favoring the formation of microspheres with smaller sizes and size distributions. For the formation of porous structures, a porogen is usually introduced as a template in the synthesis [4,11,23]. There are very few studies on the synthesis of mesoporous vaterite using only deionized water as the solvent for a particular precursor, without any additives or porogen. To this point, the synthesis of pure and porous vaterite in aqueous solutions is still a major challenge, since the products usually have a non-uniform shape and calcite and aragonite co-exist.
Although many researchers have reported the successful synthesis of pure vaterite with a porous structure, vaterite prepared from different calcium sources shows different morphologies and mechanical properties [24]. Vaterite obtained from a Ca(NO3)2·4H2O precursor exhibits higher shear strength than that obtained from CaCl2 [24]. For application as a biocompatible ceramic, vaterite needs characteristics similar to the mineral phases of natural bone, such as transformability into phosphate-based biomaterials that can be used as bone substitutes and osteoconductive scaffolds. The formation of calcium phosphate has received great attention because of the requirement of excellent biocompatibility and osteoconduction in bone substitute materials. An ideal calcium phosphate for bone substitutes shall support cell attachment, migration, and proliferation, interact actively with cells and tissues, and stimulate repair and regeneration, provided the Ca/P ratio is close to 1.67 [25]. There are two recognized techniques for the synthesis of calcium phosphate: the solution method, or sintering the vaterite in the presence of a phosphate source (DCPA). Many calcium phosphate phases may form, such as hydroxyapatite, carbonate apatite, and α- or β-TCP, as indicated by their calcium/phosphorus (Ca/P) ratio. If calcium phosphate is prepared by the solution method with calcium nitrate as the source, the resulting hydroxyapatite has a Ca/P ratio near 1.5 [26] and a biphasic nature. In our future scenario, we will prepare the calcium phosphate using the second technique, by introducing a phosphorus source into the vaterite powder during the sintering process. Therefore, an appropriate structure of the solid powder particles is required for this process, and Ca(NO3)2·4H2O is a preferable choice for the synthesis of vaterite.
Therefore, the main objective of this study is to synthesize vaterite from Ca(NO3)2·4H2O without porogen to obtain a pure and porous microsphere as an appropriate raw material for calcium phosphate. The nature and crystallization behavior of the solids and the evolution of the aqueous solution chemistry were investigated at various reaction times (5, 10, 15, 30, and 60 min) at two selected temperatures (20 and 27 °C) and stirring speeds (800 and 1000 rpm), considering the optimum conditions reported in the literature [8]. The selected experimental conditions avoid complicated procedural steps, such as maintaining the temperature below room temperature or applying very high stirring speeds. Furthermore, this study provides crucial insight into the calcium carbonate polymorphic transformation processes and pore formation that occur in the presence of ions and molecules from the selected Na2CO3 and Ca(NO3)2·4H2O precursors in an aqueous environment.
Materials and Methods
Two precursor materials were used: Na2CO3 (sodium carbonate) and Ca(NO3)2·4H2O (calcium nitrate tetrahydrate); both were of analytical grade (Merck). The synthesized vaterite was prepared by mixing 0.5 M Na2CO3 with 0.5 M Ca(NO3)2·4H2O, as illustrated in Figure 1. The solution was continuously mixed under stirring at speeds of 800 and 1000 rpm. As the precipitation process occurs at temperatures of 20-40 °C, reactions were conducted at two temperatures (20 and 27 °C) for different reaction times (5, 10, 15, 30, and 60 min). The sample was collected and filtered on Whatman filter paper in a Büchner funnel at the various reaction times. The obtained precipitate was washed with ethanol several times to ensure the removal of mother liquor. The washed precipitate was then dried in a desiccator for 48 h.
The synthesis of vaterite was performed at two different temperatures and two different stirring speeds. The samples were investigated as the reaction times increased. Samples were measured in transmission mode by FTIR Nicolet iS5 (Thermo Scientific, Waltham, MA, USA) equipped with the iD5 ATR. FTIR analysis was performed to investigate the absorption bands of CaCO 3 polymorphs. The obtained spectra exhibit the characteristic absorption bands that correspond to symmetric stretching, out-of-plane bending, asymmetric stretching, and in-plane bending [27]. The spectral region was selected from 1000 cm −1 to 600 cm −1 for the analysis.
The crystal structure of the synthesized vaterite was analyzed by X-ray diffraction (XRD) (PANalytical, Almelo, Netherlands). Diffraction data were acquired by exposing samples to Cu-Kα X-ray radiation, which has a characteristic wavelength of 1.5406 Å. The X-rays were generated from a Cu anode using an accelerating voltage of 40 kV and an applied current of 30 mA. The data were collected within the range of 20 to 60° with a scan step time of 2.90 s. The corresponding XRD patterns were used to confirm the presence of both calcite and vaterite structures. The crystallite size of the microsphere was quantitatively calculated based on the Debye-Scherrer equation to investigate the effect of the reaction time on the formation of the vaterite microspheres, as follows (Equation (1)):

$$D = \frac{k\lambda}{\beta \cos\theta} \quad (1)$$

where D is the size of the crystal, k is the Debye-Scherrer constant (0.89), λ is the wavelength of the X-rays (1.5406 Å), β is the line broadening from the full width at half maximum (FWHM), and θ is the Bragg angle. To determine the percentage of each crystal phase in the samples, we conducted quantitative analysis of the XRD spectra using MATCH (Crystal Impact, Bonn, Germany, version 3.7.0.124), which can perform a semiquantitative analysis of the sample using the so-called Reference Intensity Ratio method [28].
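Equation (1) reduces to a one-line computation in code; with the Cu-Kα wavelength used here, a reflection near 2θ = 27° with an FWHM of 0.35° (an illustrative value, not data from this study) yields a crystallite size of roughly 23 nm, in the range reported below.

```python
import numpy as np

def scherrer_size(fwhm_deg, two_theta_deg, k=0.89, wavelength=1.5406):
    """Crystallite size in nm from the Debye-Scherrer equation (Eq. 1)."""
    beta = np.radians(fwhm_deg)            # FWHM converted to radians
    theta = np.radians(two_theta_deg / 2)  # Bragg angle
    return k * wavelength / (beta * np.cos(theta)) / 10.0  # Angstrom -> nm

# Illustrative reflection: 2theta = 27 deg, FWHM = 0.35 deg
print(f"{scherrer_size(0.35, 27.0):.1f} nm")
```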
The morphology of the particles was examined by scanning electron microscopy (SEM) (Hitachi SU3500, Tokyo, Japan) with an accelerating voltage of 10 kV. A light source was introduced into the cell, and the scattered light was collected at 90°. The size distributions of primary and secondary particles were obtained using the image analysis software ImageJ (NIH Image, Bethesda, MD, USA, version 1.46r; Java 1.6.0_20) from magnified SEM images. High-resolution transmission electron microscopy was conducted on a Hitachi TEM system with an accelerating voltage of 120 kV.
The specific surface area was measured by the Brunauer-Emmett-Teller (BET) method using a Quantachrome surface area analyzer (Quantachrome Instruments, Boynton Beach, FL, USA). A small amount of sample was dried under vacuum with nitrogen purging at elevated temperature, with a relative pressure (P/P0) tolerance of 0.050/0.050 (ads/des) as the measurement point. The outgas temperature was 300 °C. The gas volume adsorbed on the surface of the samples was measured at the nitrogen boiling point and correlated to the total surface area of the samples, including pore volume and pore diameter, which were calculated based on the BET Equations (2)-(6) [29]:

$$\frac{1}{W\left[(P_0/P) - 1\right]} = \frac{1}{W_m C} + \frac{C - 1}{W_m C}\left(\frac{P}{P_0}\right) \quad (2)$$

where W is the weight of gas adsorbed at relative pressure P/P0, Wm is the weight of adsorbate constituting a monolayer, and C is the BET constant. After conducting the multipoint BET method, the data are plotted with the relative pressure P/P0 on the x-axis and 1/{W[(P0/P) − 1]} on the y-axis. The intercept (i) is related to the first term of Equation (2):

$$i = \frac{1}{W_m C} \quad (3)$$

while the slope (s) is related to the second term of Equation (2):

$$s = \frac{C - 1}{W_m C} \quad (4)$$

so that the monolayer weight is obtained as Wm = 1/(s + i). The total surface area (St) can then be obtained as:

$$S_t = \frac{W_m N A_{cs}}{M} \quad (5)$$

where N is Avogadro's number (6.023 × 10^23), M is the molecular weight of the adsorbate, and Acs is the adsorbate cross-sectional area (16.2 Å^2 for nitrogen). The specific surface area (S) is then determined by dividing the total surface area (St) by the sample weight (w):

$$S = \frac{S_t}{w} \quad (6)$$
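For illustration, Equations (2)-(6) reduce to a few lines of code once the slope and intercept of the linear BET plot are known; the numerical inputs below are made-up examples, and the constants are for nitrogen adsorption.

```python
def bet_specific_surface(slope, intercept, sample_mass_g):
    """Specific surface area (m^2/g) from the multipoint BET plot (Eqs. 2-6).

    slope/intercept come from the plot of 1/(W[(P0/P)-1]) vs P/P0, with W in
    grams of N2 adsorbed; constants are for nitrogen at its boiling point.
    """
    N = 6.022e23        # Avogadro's number (1/mol)
    M = 28.013          # molar mass of N2 (g/mol)
    Acs = 16.2e-20      # N2 cross-sectional area (m^2)

    Wm = 1.0 / (slope + intercept)   # monolayer weight, from Eqs. (3)-(4)
    St = Wm * N * Acs / M            # total surface area, Eq. (5)
    return St / sample_mass_g        # specific surface area, Eq. (6)

# e.g. slope = 120 1/g, intercept = 2 1/g, 0.2 g of powder (made-up values):
print(f"S = {bet_specific_surface(120.0, 2.0, 0.2):.1f} m^2/g")
```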
Figures 2 and 3 show FTIR and XRD spectra of the prepared carbonate polymorphs as a function of reaction time, with absorptions indicated by V (vaterite) and C (calcite), at a temperature of 20 °C and different stirring speeds (800 and 1000 rpm). Fourier transform infrared spectroscopy was used to track changes in carbonate-related vibrational modes in the three CaCO3 polymorphs (calcite, aragonite, and vaterite). One carbonate ion can have four normal modes: symmetric stretching, out-of-plane bending, asymmetric stretching, and in-plane bending [8,27]. The results showed characteristic absorption bands of vaterite at 849 cm−1, 877 cm−1, and 744 cm−1 and absorption bands of calcite at 877 cm−1 and 712 cm−1. The vaterite and calcite absorption bands overlap at 877 cm−1.
Considering only the characteristics of the active carbonate modes, the in-plane bending mode shows the most pronounced changes as a function of reaction time at a given temperature and stirring speed. We have shown that spontaneous precipitation of calcium carbonate from moderately or weakly supersaturated (S ≤ 1) solutions led to the initial formation of vaterite and calcite. The FTIR spectra in Figure 2a show that a lower intensity of the calcite absorption band is obtained at reaction times from 5 to 15 min. These results indicated that under this condition (temperature 20 °C and stirring speed 800 rpm), the crystals mainly consist of vaterite, with calcite present as a minor phase. However, the presence of calcite increased at reaction durations of 30 and 60 min. This result is evidence of the instability of the precipitated ACC with respect to forming the vaterite phase at lower temperature and lower stirring speed. The stirring created hydrodynamic conditions that promoted collisions or diffusion of the vaterite crystals in intense contact with the solution; therefore, a large amount of vaterite transformed into calcite as the reaction time increased (30-60 min). However, the kinetics of the process also depended on the ionic strength of the solution. Three possible kink sites can occur: a calcium site ≡CO3Ca+, a carbonate site ≡CaCO3−, or a bicarbonate site ≡CaHCO3•.
The corresponding XRD pattern also confirmed that the obtained CaCO3 polymorphs were a mixture of calcite and vaterite (Figure 2b), with the characteristic peaks and hkl planes indexed accordingly. A higher percentage of calcite (37.8%) at a reaction time of 10 min compared to 30 min (4.7%) is consistent with the higher peak intensity at 877 cm−1 in the FTIR result, which confirmed a higher calcite phase content. Unlike at a stirring speed of 800 rpm, upon increasing the stirring speed to 1000 rpm the FTIR spectra (Figure 3a) showed the strongest calcite absorption band at 712 cm−1, indicating the highest percentage of the calcite phase in the CaCO3 polymorphs at a reaction time of 15 min. This analysis is also supported by the XRD observations, where the corresponding higher calcite peaks were obtained at 15 min (Table 3). The higher stirring speed creates stronger hydrodynamic motion of ions and molecules, improving the probability of collisions of the vaterite crystals in intense contact with the solution and thus enhancing the amount of vaterite transformed into calcite. Generally, the results for varying stirring speed at a temperature of 20 °C showed vaterite as the major component of the CaCO3 polymorphs, but still an unstable phase due to moderate supersaturation at low temperature (20 °C). Table 3. Effect of reaction time on the as-prepared CaCO3 polymorph ratio and size of crystallite (at a temperature of 20 °C and stirring speed of 1000 rpm).
[Table 3 columns: Sample Name, Reaction Time (min), Calcite (%), Vaterite (%), Crystallite Size (nm); only the first sample label (R05 T20 S1000, 5 min) survived extraction, so the numerical entries are not reproduced here.]

The sample at a reaction time of 30 min was further investigated by SEM and TEM due to its relatively small crystallite size compared to the others. A spherical morphology of CaCO3 was obtained for the sample at a reaction time of 30 min at the two different stirring speeds, as shown in Figure 4a-d. The CaCO3 particles formed secondary microspheres composed of nanosized primary spherical particles. The size distribution analysis showed that the average microsphere size was about (2.91 ± 1.06) µm at a stirring speed of 800 rpm (Figure 4a) and about (2.47 ± 0.85) µm at a stirring speed of 1000 rpm (Figure 4c). The average sizes of the primary particles were (153 ± 27.95) nm (Figure 4b) and (171.29 ± 36.61) nm (Figure 4d) at the corresponding stirring speeds of 800 and 1000 rpm. At 800 rpm, the morphology was denser and more spherical than at 1000 rpm, where it was relatively irregular in shape; this was due to the higher percentage of the vaterite phase in the sample (up to 95.3%). At the same reaction time, the average microsphere size was larger at 800 rpm than at 1000 rpm, meaning that the growth rate of the microspheres was faster at the lower speed. Thus, smaller microsphere particles were obtained at the higher stirring speed, because the higher speed, with its centrifugal force, may prevent adsorption of the nanoparticle suspension during spherulitic growth into larger assemblies of nanoparticles. However, as the stirring speed increased, the enhanced motion of ions and molecules increased the chance of collisions and the rate of active interaction of vaterite with foreign ions in the solution, promoting the phase transformation of vaterite to calcite, as highlighted by the circle in the SEM image in Figure 4c. This picture is in agreement with the previous FTIR and XRD analyses in Figures 2 and 3. Generally, all samples obtained at 20 °C underwent phase transformation of the metastable vaterite state. Thus, the study was extended to a higher temperature (27 °C) to examine the phase transformation, morphology, and porosity of the obtained porous particles and to investigate their formation mechanism.
In contrast to the FTIR and XRD analyses of the samples obtained at 20 °C, lower calcite phase formation as a function of reaction time was observed at 27 °C, as shown in Figures 5 and 6. At a stirring speed of 800 rpm, the calcite phase indicated by the absorption peak was reduced significantly as a function of time. However, at the beginning of the reaction (reaction time ≤ 5 min), calcite formation dominated over vaterite at the lower stirring speed (800 rpm). At a temperature of 27 °C, vaterite appeared as the major component of the CaCO3 polymorphs, in a metastable phase due to the higher supersaturation compared with 20 °C.
The XRD patterns of the polymorphs are shown in Figure 5b, along with the attribution of the main peaks. From the XRD patterns, the characteristic peaks corresponding to vaterite were dominant as the reaction time increased, confirming that the higher temperature of 27 °C can also inhibit the transformation of vaterite to calcite. The detailed quantitative results based on the XRD analysis are listed in Tables 4 and 5. According to the calculation, the crystallite sizes of the microspheres were 29.94, 26.25, 26.00, 23.92, and 25.08 nm, respectively (Table 4). This showed that the crystallite size tends to decrease as the reaction time increases from 5 to 60 min (at a temperature of 27 °C and a stirring speed of 800 rpm). The peaks corresponding to calcite were much narrower than those of vaterite, confirming a larger crystallite size. Table 5. Effect of reaction time on the as-prepared CaCO3 polymorph ratio and size of crystallite (at a temperature of 27 °C and stirring speed of 1000 rpm).
[Table 5 columns: Sample Name, Reaction Time (min), Calcite (%), Vaterite (%), Crystallite Size (nm); only the first sample label (R05 T27 S1000, 5 min) survived extraction, so the numerical entries are not reproduced here.]

The XRD results confirmed the FTIR spectroscopy: when the sample was stirred at 800 rpm for 5 min, only calcite was obtained, and the crystallite size was larger (29.94 nm). As the reaction time increased, the crystallite size decreased to 25.08 nm while the percentage of calcite decreased (Table 4). The XRD patterns showed that the vaterite percentage increased and that more metastable vaterite crystals were obtained as the reaction time increased. When the samples were mixed at a stirring speed of 1000 rpm, the calculated crystallite sizes were 25.11, 22.99, 24.33, 22.58, and 23.90 nm, respectively (Table 5), showing that the crystallite size decreased as the reaction time increased from 5 to 60 min (at a temperature of 27 °C and a stirring speed of 1000 rpm). However, the calculated mean crystallite sizes did not differ significantly, in agreement with previously reported research [30]. This means that, under the selected experimental conditions, the crystallite sizes were relatively homogeneous, with similar morphologies observed in the SEM images.
To analyze the total surface area of the vaterite particles, multipoint BET measurements were conducted, as shown in the second column of Table 6. The results showed that the specific surface area is greatly affected by the stirring speed: the specific surface area decreases when the stirring speed is increased. The BJH (Barrett, Joyner, and Halenda) method was used to calculate the pore surface area, volume, and diameter from the experimental isotherms using the adsorption and desorption branches; only the BJH desorption results are shown in the table. The results showed that only the stirring speed affects the surface area of the CaCO3 polymorphs. The pore surface area, volume, and diameter remained the same as the temperature increased, indicating that the temperature difference between 20 °C and 27 °C did not affect the surface area. Meanwhile, when the stirring speed increased, the surface area and pore volume decreased, but the pore diameter increased. This might be due to NaNO3 diffusing from the solution into the microsphere and being trapped during the spherulitic process.
The Proposed Mechanism of Spherulitic Growth of Mesoporous CaCO3
The mechanism of ACC-to-vaterite formation is still under development, with three main mechanisms or models proposed. In the first proposed mechanism, ACC dissolves to form spherical vaterite by homogeneous nucleation of crystalline vaterite nanoparticles, and the vaterite nanoparticles then aggregate very rapidly to form the polycrystalline microsphere [31-33]. In the second, ACC particles are dehydrated and recrystallized to form vaterite [34,35]. In the third, continuous dissolution of ACC is accompanied by spherulitic growth of the vaterite microsphere [36-40]. Therefore, in this study, we investigated the effect of temperature and stirring speed as a function of reaction time on the phase transformation of the vaterite crystals, the morphology, and the polycrystalline pore structure. This was explained by considering the mechanism of vaterite crystallization and discussing recently proposed crystal growth models. One of the outstanding results of this study was the formation of porous particles without any additional porogen or template and, interestingly, using a simple mixing method. The kinetic formation of vaterite in the presence of various ions and molecules represents an intermediate step in the reaction pathway that leads from ACC to vaterite, following Ostwald's step rule [36]. The most important aspect was the mechanism of pore formation in the polycrystalline microsphere without an additional porogen or template. The formation of porous polycrystalline vaterite was markedly different from conventional particle-by-particle formation in which nucleation occurred.
The process involved mixing a precursor of Na2CO3 as the source of CO3^2− and Ca(NO3)2·4H2O as the source of Ca^2+, prepared under supersaturation with respect to the initial precursors, resulting in the formation of amorphous calcium carbonate (ACC), which later transforms into vaterite crystals. Many factors influence the precipitation of calcium carbonate polymorphs; one of the most determinant is the presence of foreign ions (Ca2+, Na+, CO3^2−) or molecules (NaNO3) in the aqueous solution from which the carbonate precipitates. The proposed chemical reactions for the precursors Na2CO3 and Ca(NO3)2·4H2O are presented in Equations (1)-(3), with excess NaNO3 molecules remaining in the solution (Equation (2)); the complete reaction is given in Equation (3):

Ca(NO3)2·4H2O → Ca2+ + 2NO3− + 4H2O (1)

Na2CO3 → 2Na+ + CO3^2−, with 2Na+ + 2NO3− → 2NaNO3 (2)

Ca(NO3)2·4H2O + Na2CO3 → CaCO3↓ + 2NaNO3 + 4H2O (3)

We proposed the mechanism of polycrystalline vaterite microsphere formation at a temperature of 27 °C based on the FTIR, XRD, SEM, and TEM observations, as illustrated in Figure 9. Immediately after mixing the precursors, nucleation of Ca2+ and CO3^2− ions occurred spontaneously in the supersaturated solution to form clusters of primary vaterite crystals that subsequently grew. The vaterite crystals are fully formed under near-equilibrium saturation conditions. On the other hand, Ostwald ripening of the small nanocrystallites occurs as the internal crystal structure changes over time: smaller nanocrystals dissolve, reducing the total contact area with the solvent and feeding the growth of larger ones. This process occurred continuously with the fast spherulitic growth of vaterite crystals to form a mesocrystalline vaterite microsphere via the growth front nucleation (GFN) mechanism [38,39].
The primary vaterite crystals have a secondary structure, called polycrystalline, which is composed of nanocrystallites with sizes in the nanometer range [41]. Na+ and NO3− ions could be adsorbed on the surfaces of the original primary vaterite nanosized particles inside the microsphere and consequently prevent crystal growth, polymorph change, and phase transformation. A similar mechanism was reported using Na2CO3 as a carbon source, where ions such as NH4+ are able to control the growth of the primary particles [42]. In this case, the particle growth mechanism of the vaterite spherulites was found to depend on the crystal surface structure. The GFN mechanism mostly produced polycrystalline spheres containing crystalline nanospheres of similar size, in agreement with the morphology observed in the SEM and TEM images (Figures 7 and 8). The excess NaNO3 molecules in the solution may diffuse inside the microsphere during spherulitic growth and play a significant role in the mesocrystalline particles. Under this condition, the trapped NaNO3 prevented the primary particles from becoming new secondary-size crystals during the formation of the vaterite microsphere spherulites [41]. Thus, a porous mesocrystalline microsphere was obtained with relatively uniform pore sizes, consistent with the morphology observed in the SEM and TEM images (Figures 7 and 8) and the BET pore sizes in Table 6. The presence of NaNO3 acting as a porogen in the microsphere during spherulitic growth was thus able to control the crystal porosity without the addition of any further chemicals. Therefore, the obtained microspheres represent a very important biomaterial for various biomedical applications such as bone substitutes, i.e., our future development of a CaCO3 vaterite-based scaffold.
Conclusions
In conclusion, we have successfully synthesized vaterite from Ca(NO3)2·4H2O without a porogen, obtaining vaterite with a purity of 95%, a crystallite size of 23.91 nm, and porous microspheres with a lowest pore diameter of 3.5578 nm at a reaction time of 30 min, a reaction temperature of 20 °C, and a stirring speed of 800 rpm. More spherical microspheres, around 2-3 µm in size and consisting of multiple primary nanoparticles forming a porous microsphere with a smaller pore size, were obtained at the lower stirring speed (800 rpm) at a reaction time of 30 min. Generally, the percentages of coexisting vaterite and calcite varied with increasing reaction time at all temperatures and stirring speeds. The BET results confirmed that only the stirring speed affects the surface area and pore volume, and consequently the pore diameter, because NaNO3 acts as a porogen in the spherulitic process of the mesocrystalline vaterite microsphere. Considering the results over the ranges of variables studied (temperature and stirring speed), the experimental route presented in this paper offers an efficient procedure to obtain a high yield of porous vaterite (mostly more than 90%) and is therefore potentially feasible for development into industrial-scale production.
GWAS reveals genomic associations with swine inflammation and necrosis syndrome
The recently identified swine inflammation and necrosis syndrome (SINS) occurs at high prevalence from newborn piglets to fattening pigs and represents an important concern for animal welfare. The primarily endogenous syndrome affects the tail, ears, teats, coronary bands, claws and heels. The basis of clinical inflammation and necrosis has been substantiated by histopathology, metabolomics and liver transcriptomics. Considerable variation in SINS scores is evident in the offspring of different boars under the same husbandry conditions. The high complexity of the metabolic alterations and the influence of the boar led to the hypothesis of a polygenic architecture of SINS, which was to be investigated by a genome-wide association study. For this purpose, 27 sows were simultaneously inseminated with mixed semen from two extreme boars. The mixed semen always contained ejaculate from a Pietrain boar classified as extremely SINS susceptible, together with ejaculate either from a Pietrain boar classified as SINS stable or from a Duroc boar classified as SINS stable. The 234 piglets were phenotyped on day 3 of life, sampled and genetically assigned to their respective sire. The piglets showed the expected genetic differentiation with respect to SINS susceptibility. The suspected genetic complexity was confirmed both in the number and in the genome-wide distribution of 221 significantly associated SNPs, which led to 49 candidate genes. As the SNPs were almost exclusively located in noncoding regions, functional nucleotides have not yet been identified. The results suggest that the susceptibility of piglets to SINS depends not only on environmental conditions but also on genomic variation. Supplementary Information The online version contains supplementary material available at 10.1007/s00335-023-10011-6.
Background
Swine inflammation and necrosis syndrome (SINS) is a newly identified, specific syndrome resulting from the combined presence and signs of inflammation and necrosis in acral body parts. It particularly affects the tail base, tail tip, ears, coronary bands, heels, soles, claw walls, teats, navel, and face and can be observed in suckling piglets, weaners, and finishing pigs (Reiner and Lechner 2019; Reiner et al. 2019, 2020, 2021a; Kühling et al. 2021a, b). Signs of inflammation and the loss of the integrity of body parts indicate serious impairment of animal welfare and reflect one of the major challenges in pig farming (EFSA 2012, 2014).
Three main observations support the assumption that SINS is primarily an endogenous disease, even though it may be modified by technopathies and other mechanical stressors: (1) the simultaneous occurrence in such disparate body parts as the tail, teats and claws (Reiner et al. 2019; Kühling et al. 2021a, b); (2) evidence that SINS can be expressed before birth (Kühling et al. 2021a); (3) evidence from histopathology that inflammation originating from blood vessels is present before birth, when biting and mechanical irritation (e.g., from soil) are excluded, in piglets with a (still) intact epidermis (Reiner et al. 2020; Kühling et al. 2021a).
The histopathological background of the clinical inflammation is vasculitis, thrombosis, intimal proliferation, oedema, and hyperaemia accompanied by an intact epidermis (Reiner et al. 2020; Kühling et al. 2021a). Inflammation was characterized by granulocytes in considerable numbers, macrophages, and lymphocytes in piglets not older than 2 h, indicating an onset of inflammation at least 4 days before birth (Betz 1994). Bristle loss was associated with inflammatory processes in the deeper parts of the hair follicles (Reiner et al. 2020; Kühling et al. 2021a). Significant proportions of neonates can be affected. In the study by Kühling et al. (2021a) on a conventional farm, 40 to 80% of neonatal piglets were affected by haemorrhages of the claw wall, coronal inflammation, redness of the heels, bristle loss, and redness of the tail and ears.
The syndrome is also accompanied by a large series of clinical-chemical, metabolic (Löwenstein et al. 2021) and transcriptomic (Ringseis et al. 2021) alterations. Practical experience from pig farms with a uniform sow base regularly shows evidence of boar effects on progeny SINS scores.
Providing insight into the genetic architecture of SINS would be an important milestone in combating the syndrome, as husbandry improvement measures, often insufficient on their own, could be supported by targeted selection using less sensitive boars. This would make control more effective and sustainable.
This background was confirmed by a study with 19 boars (4 Duroc and 15 Pietrain boars) mated to 39 sows (Kühling et al. 2021b), in which the offspring of the boars to be tested were born in the same litter. Offspring from Duroc boars had significantly lower SINS scores (4.87 ± 0.44) than offspring from Pietrain boars (10.13 ± 0.12). Even within the Pietrain breed, the SINS scores of the offspring were significantly affected by the boar: total SINS scores in the offspring of the best Pietrain boar were almost 40% lower than those in the offspring of the poorest Pietrain boar. These findings confirmed considerable genetic effects on the outcome of SINS under a given husbandry. The genetic background of SINS has recently been confirmed with a heritability of 0.2 (Leite et al. 2023), together with further interesting population parameters.
The present study was conducted to characterize the genetic background of these effects using a genome-wide association study (GWAS) approach, to examine whether associations with SINS phenotypes can be detected in the porcine genome. In addition, their distribution and magnitude were to be evaluated, and candidate genes identified, particularly in the areas of inflammation, vasculitis, and necrosis.
Study design
The animal experiment was carried out in the conventional pig breeding stable of the Oberer Hardthof teaching and research station at Justus-Liebig University Giessen under the approval of the authorities in Giessen, Germany with file number 708_M.
The sow herd used was a rotational cross of Topigs with German Landrace (Topigs x German Landrace). The boars came from completely different breeds and had no pedigree relationship to one another. The environment was the same for all animals.
The herd had a performance of 15 live-born and 1.4 dead-born piglets per sow. The sows were artificially inseminated with semen from three extreme boars selected on the basis of the above-mentioned study by Kühling et al. (2021b). Boars were used in pairs as mixed semen, where the semen of two boars was mixed within one dose, so that piglets from two different boars were present in each litter at the same time. This design was applied to (i) limit the number of litters and experimental animals, (ii) minimise environmental effects, (iii) increase genetic variability within the piglets, (iv) increase the number of sow-boar combinations and (v) nevertheless achieve a manageable number of piglets.
The three boars were a Duroc boar whose progeny had the lowest level of SINS in the preliminary study (4.3) and the two Pietrain boars whose progeny had the lowest (7.26) and highest (12.17) levels of inflammation and necrosis, respectively, within the Pietrain cohort. These boars were selected to achieve segregation of favourable and unfavourable gene variants in the progeny. All sows were inseminated only once, with mixed semen from two boars, so that piglets of the Pietrain boar classified as unfavourable (PI−) occurred in one litter together with piglets of the Duroc boar (DU) (in 13 litters) or together with piglets of the Pietrain boar classified as favourable (PI+) (in 14 litters). Taken together, a total of 27 sows produced 27 litters. Each sow had only one litter, but with piglets from two different boars. A total of 234 piglets were used, provided they were anatomically normally developed and their SINS phenotype was recorded on their 3rd day of life. The piglets' sire was determined by paternity testing after phenotyping. The results of the paternity testing revealed 14 mixed litters (from 14 sows) with 77 piglets from the favourable Pietrain boar and 39 piglets from the unfavourable Pietrain boar, as well as 13 mixed litters from 13 sows with 48 piglets from the Duroc boar and 70 piglets from the unfavourable Pietrain boar. On average, 8.4 healthy piglets per litter, with at least one piglet per inseminated boar, were randomly selected in a blinded manner and used.
Paternity testing
Genetic matches between offspring and boars were used in paternity testing. DNA was extracted from the piglets' docked tails. Tail docking was done on day 4 of life, one day after clinical scoring (Reiner et al. 2021b). Genotyping was done with 14 microsatellites in 2 multiplex PCRs, and microsatellite alleles were determined by capillary gel electrophoresis.
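As a toy illustration of exclusion-based paternity assignment from such microsatellite genotypes, consider the following sketch; the marker names and allele sizes are invented for the example and do not reproduce the genotypes used in the study.

```python
def compatible(offspring, dam, sire):
    """True if the sire can supply a paternal allele at every marker.

    Genotypes are (allele1, allele2) tuples of fragment sizes per marker.
    """
    for marker, (a1, a2) in offspring.items():
        o, d, s = {a1, a2}, set(dam[marker]), set(sire[marker])
        # Alleles the dam cannot explain must come from the sire; if both
        # alleles could be maternal, either one may be paternal.
        paternal_candidates = o - d if o - d else o
        if not paternal_candidates & s:
            return False
    return True

# Made-up example genotypes at two markers
piglet = {"SW857": (142, 150), "S0155": (96, 100)}
dam    = {"SW857": (142, 146), "S0155": (96, 98)}
boar_a = {"SW857": (150, 154), "S0155": (100, 104)}  # compatible
boar_b = {"SW857": (138, 146), "S0155": (100, 104)}  # excluded at SW857
print(compatible(piglet, dam, boar_a), compatible(piglet, dam, boar_b))
```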
Clinical scoring
Inflammation and necrosis were clinically assessed as described by Reiner et al. (2019). To ensure comparability with other studies, the piglets were scored on the 3rd day of life. Clinical signs were clearly visible during this period in all previous studies, and the piglets were not yet as exposed to environmental effects as weaners and fatteners. To minimize the burden on the animals, clinical signs were recorded using a digital camera (Canon EOS DC 8.1 V, Canon) according to a standardized scheme for later detailed evaluation of the images (Windows Media Player, version 12, Microsoft GmbH, Germany).
Clinical alterations in the tail base and tail tip, the ears, the teats and navel, the coronary bands, wall horn, heels and soles of the feet, as well as the face were assessed individually. The following clinical characteristics were considered and scored 0 if the sign was not visible or 1 if the sign was visible (Table 1). The tail base and tail tip were independently scored for loss of bristles, swelling, redness, scab formation (tail tip only), rhagades, exudation, necrosis, bleeding (tail tip only), and ring-shaped constrictions (tail tip only). Ears were scored for a shiny skin, loss of bristles, necrosis and congested ear veins. The face was scored for the absence or presence of oedema at the eyelids and the back of the nose. Teats were scored for swelling, reddening, scab formation, necrosis and congested blood vessels. The navel was scored for signs of inflammation in the form of redness or swelling. Claws were scored qualitatively for any signs of inflammation at the coronary bands (swelling, redness or exudation), wall bleeding, and swelling and bleeding of the heels. The examined binary scores are presented by organ system as percentages of affected piglets.
All findings were summed to give an additive body part score (Table 1). Body part scores could reach 2 to 9 points. All body part scores were summed, unweighted, to give the SINS score. This resulted in possible SINS scores between 0 and 36 for each piglet. All scores were assigned jointly by two experienced persons. An overview of the evaluated phenotypes is given in Table 1. Additionally, the SINS score was used after Z-transformation (ZSINS). For the descriptive presentation of the phenotypes of the progeny from the three boars, a generalised linear model with boar as effect was used in the case of binary data (Supplemental Table 1). The metric data (Table 2) were analysed by Anova, considering the boar as effect.
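A minimal Python sketch of the scoring arithmetic described above; the sign columns are illustrative placeholders rather than the study's actual variable names, and only two body parts are shown:

```python
import pandas as pd

# Hypothetical binary findings (0 = sign absent, 1 = sign present) for three piglets.
# Column names are illustrative; the full scheme covers all signs listed in Table 1.
findings = pd.DataFrame({
    "tail_base_swelling": [1, 0, 1],
    "tail_base_redness":  [1, 0, 0],
    "ear_bristle_loss":   [0, 1, 1],
    "ear_necrosis":       [0, 0, 1],
})

# Additive body part scores: unweighted sums of the binary signs per body part.
tail_base_score = findings[["tail_base_swelling", "tail_base_redness"]].sum(axis=1)
ear_score = findings[["ear_bristle_loss", "ear_necrosis"]].sum(axis=1)

# Unweighted SINS score: the sum of all body part scores (0..36 in the full scheme).
sins = tail_base_score + ear_score

# Z-transformation (ZSINS), as used for the descriptive analysis.
zsins = (sins - sins.mean()) / sins.std(ddof=1)
print(pd.DataFrame({"SINS": sins, "ZSINS": zsins}))
```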
DNA extraction and sequencing
DNA was extracted from the piglets' docked tails using the smart DNA prep (m) kit (Analytik Jena, Jena, Germany) and quantified by a Qubit Flex Fluorometer (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) using the Qubit dsDNA broad range assay kit (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA). The DNA was diluted to a uniform concentration of 50 ng/μl. During library preparation, the samples were prepared to be compatible with sequencer processing. The paired-end library generated in this process was amplified.
Complete genome sequencing was performed using the Illumina NextSeq 500/550 v2 and Illumina NovaSeq 6000 (Illumina, San Diego, USA). In this process, 150 bp paired-end reads were generated with a coverage of 15×.
Bioinformatics workflow
The files received from the sequencing company were decoded and converted from .bz2 to gzipped FASTQ files.
OVarFlow pipeline
The available raw data were transferred to the open-source workflow OVarFlow for further bioinformatic analysis. This workflow is used for variant discovery of single nucleotide variants, insertions and deletions in model and non-model organisms (Bathke and Lühken 2021). The workflow enables automation, documentation and the associated reproducibility of the individual evaluation steps. As part of the preparation of the workflow, the required input files were compiled in a configuration file in comma-separated values (CSV) format. This file contains the reference genome and the annotation (Sus scrofa 11.1, GenBank Assembly Accession: GCA_000003025.6), the minimum sequence length (value = 1) and sample information on the Illumina short-read sequencing data used in the analysis.
Variant calling
OVarFlow performs variant detection at several intervals per individual to enable the highest possible degree of parallelisation. The actual variant calling was done using the GATK HaplotypeCaller (https://gatk.broadinstitute.org/hc/en-us/articles/360050814612-HaplotypeCaller). First, active regions were identified in which a significant number of reads showed variation beyond the expected background noise. De Bruijn-like graphs were created for these regions and used to reconstruct possible sequences and develop haplotype candidates. To narrow down potential sites of variation, each possible haplotype was aligned with the reference sequence using the Smith-Waterman algorithm. The existing reads were then aligned with each of the possible haplotypes, and a matrix of the expected likelihoods was created using the PairHMM algorithm. The assignment of the most likely genotypes for the available samples was done according to Bayes' theorem.
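The final genotype-assignment step can be illustrated with a toy application of Bayes' theorem; the likelihoods and priors below are invented placeholders, not actual PairHMM output:

```python
# Toy read likelihoods P(reads | genotype) for one biallelic site; the numbers
# stand in for pair-HMM output and are invented for illustration.
likelihoods = {"AA": 1e-8, "AT": 2e-3, "TT": 5e-6}

# Genotype priors (illustrative values, e.g. derived from expected heterozygosity).
priors = {"AA": 0.495, "AT": 0.01, "TT": 0.495}

# Bayes' theorem: P(genotype | reads) is proportional to likelihood * prior.
unnorm = {g: likelihoods[g] * priors[g] for g in likelihoods}
z = sum(unnorm.values())
posterior = {g: p / z for g, p in unnorm.items()}

best = max(posterior, key=posterior.get)
print(posterior, "-> most likely genotype:", best)
```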
With GATK GenotypeGVCFs (https://gatk.broadinstitute.org/hc/en-us/articles/360050816072-GenotypeGVCFs), the data preprocessed with HaplotypeCaller were subjected to an additional joint genotyping across several individuals. The genotype information lost during the combination of the GVCF files was restored and the genotyping accuracy was improved.
Variant effect prediction
The functional annotation of the previously found genetic variants was carried out using snpEff, version 5.0 (Cingolani et al. 2012). Information on the genomic coordinates and the effect of the respective variant is output, for instance whether the variant is located in a coding region of a gene and whether it is a synonymous, non-synonymous or nonsense mutation.
Genome-wide association study (GWAS)
The preparation of the data output by OVarFlow for the GWAS and the actual execution of the GWAS were done in R, version 4.2.1 (R Core Team 2022). RStudio (RStudio Team 2022) was used as the graphical user interface.
The annotated VCF file output at the end of the OVarFlow workflow was converted to binary PLINK format using the PLINK package (Purcell et al. 2007; version 1.9), with the option that only SNPs are included. The first quality control of the genotype data was done with PLINK. Only variants and individuals that passed the following criteria were included in the GWAS:
• Missingness per marker < 0.01
• Missingness per individual < 0.1
• Minor allele frequency (MAF) > 0.05
• Hardy-Weinberg equilibrium (HWE) p > 0.000001
With a special focus on the X chromosome, a second quality control was carried out with GenABEL version 1.0.0 (Aulchenko et al. 2007). This step detects incorrect heterozygous male X-linked genotypes and excludes them from the analysis.
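A sketch of the marker- and individual-level filters in Python, assuming a 0/1/2 minor-allele-count coding with NaN for missing calls; the simulated data and the omission of the HWE exact test are simplifications for illustration:

```python
import numpy as np
import pandas as pd

# Toy genotype matrix: rows = individuals, columns = markers,
# coded 0/1/2 = minor-allele count, NaN = missing call (simulated data).
rng = np.random.default_rng(0)
geno = pd.DataFrame(
    rng.choice([0.0, 1.0, 2.0, np.nan], size=(100, 500), p=[0.4, 0.3, 0.295, 0.005])
)

# Missingness per marker < 0.01 and missingness per individual < 0.1
marker_ok = geno.isna().mean(axis=0) < 0.01
indiv_ok = geno.isna().mean(axis=1) < 0.1

# Minor allele frequency > 0.05, computed over the non-missing calls per marker.
freq = geno.mean(axis=0, skipna=True) / 2
maf_ok = np.minimum(freq, 1 - freq) > 0.05

# The HWE filter (p > 1e-6) would use an exact test per marker; omitted here.
kept = geno.loc[indiv_ok, marker_ok & maf_ok]
print(kept.shape)
```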
A reduced SNP dataset was created in order to check for any population structure that might be present. SNP pruning was based on linkage disequilibrium and pairwise genotypic correlation using PLINK. With the reduced dataset, a principal component analysis (PCA) was performed using TASSEL 5.0 (Bradbury et al. 2007). TASSEL 5.0 was also used for kinship analyses, creating genetic distances between the offspring of the three boars.
The actual GWAS was conducted with GAPIT (Wang and Zhang 2021; version 3) using the BLINK model (Huang et al. 2019). For this purpose, the genotype data were converted from PLINK to HapMap format using TASSEL 5.0. Bonferroni-corrected genome-wide and chromosome-wide significance thresholds were applied for a significance level of α = 0.05.
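The Bonferroni thresholds can be reproduced from the numbers of tests; the SNP counts below are hypothetical values chosen so that the resulting −log10(P) thresholds land near the ones used here:

```python
import numpy as np

alpha = 0.05
n_genomewide = 1.3e7   # hypothetical number of SNPs tested genome-wide
n_chromosome = 1.3e5   # hypothetical number of SNPs on a single chromosome

# Bonferroni: reject if P < alpha / n, i.e. -log10(P) above the threshold below.
genomewide = -np.log10(alpha / n_genomewide)
chromosomewide = -np.log10(alpha / n_chromosome)
print(f"-log10(P) thresholds: genome-wide {genomewide:.1f}, "
      f"chromosome-wide {chromosomewide:.1f}")
```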
For the GWAS, effects of boar, sow (litter), contemporary group, sex, and PCA were used as fixed effects, and birth weight was used as a covariate. For the PCA, 4 principal components were considered. Because each sow was used only once in the study, the effects of litter and sow were identical.
Manhattan plots and quantile-quantile plots were generated using the R package qqman, version 0.1.8 (Turner 2018).
After GWAS, associations were excluded from further analysis if one of the expressions of a characteristic (0 or 1) was represented by less than 5% of the cases (i.e., 12 out of 234 animals).
Statistical analysis of SNP effects on SINS characteristics
The effects of the genotypes of the significant SNPs from the GWAS on the SINS characteristics were tested with Anova (metric data) and with a general linear model (binary data) in IBM SPSS, version 27 (Statistical Package for the Social Sciences, IBM, Munich, Germany). SNPs were only included if they had at least three genotypes, if the effect was additive, i.e., if the heterozygote value did not exceed the value of the highest homozygote or fall below the value of the lowest homozygote by more than 20%, and if the negative decadic logarithm of significance exceeded 8.4. We tested the effects of the SNPs not only on the phenotypes with significant associations in the GWAS, but on all SINS phenotypes. The rationale for this approach was the assumption that, due to the syndrome character of SINS, involved gene loci should affect different SINS characteristics at the same time.
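A sketch of the two inclusion rules in Python; the 20% band is one possible reading of the additivity criterion (20% of the homozygote range), and the example values are invented:

```python
def additive_effect(mean_by_genotype, tol=0.20):
    """Additivity rule: the heterozygote mean must lie within the interval
    spanned by the two homozygote means, widened by 20% of that range."""
    hom1, het, hom2 = mean_by_genotype  # means for e.g. AA, AB, BB
    lo, hi = sorted((hom1, hom2))
    band = tol * (hi - lo)
    return (lo - band) <= het <= (hi + band)

def enough_representation(class_counts, n_total=234, min_frac=0.05):
    """Exclude associations where one expression of a binary trait covers
    fewer than 5% of the animals (12 out of 234)."""
    return min(class_counts) >= min_frac * n_total

print(additive_effect((4.0, 6.5, 9.0)))   # True: roughly additive
print(enough_representation((10, 224)))   # False: minor class too rare
```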
Candidate gene prediction
Positional candidate genes were identified based on their distance from the significant SNPs. The Genome Data Viewer (https://www.ncbi.nlm.nih.gov/genome/gdv/?org=sus-scrofa), based on release Sscrofa11.1 (GCF_000003025.6), was used for this purpose. Positional candidate genes were selected as such if they were located no further than 1 Mbp from a SNP in either direction.
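The positional candidate search reduces to a distance filter; the following sketch uses invented coordinates, with only the gene symbols taken from Table 5:

```python
import numpy as np
import pandas as pd

# Hypothetical gene annotation and SNP hits; all coordinates are invented.
genes = pd.DataFrame({
    "gene":  ["TRIM68", "F2", "CD96"],
    "chrom": ["9", "2", "13"],
    "start": [1_200_000, 5_000_000, 30_100_000],
    "end":   [1_210_000, 5_020_000, 30_180_000],
})
snps = pd.DataFrame({"chrom": ["9", "13"], "pos": [1_500_000, 31_000_000]})

MAX_DIST = 1_000_000  # positional candidates: at most 1 Mbp from a SNP

hits = []
for _, snp in snps.iterrows():
    same_chrom = genes[genes.chrom == snp.chrom]
    # Distance is zero if the SNP lies inside the gene, else the gap to the edge.
    dist = np.maximum(0, np.maximum(same_chrom.start - snp.pos,
                                    snp.pos - same_chrom.end))
    hits.append(same_chrom.assign(snp_pos=snp.pos, dist=dist)[dist <= MAX_DIST])
print(pd.concat(hits))
```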
Information on positional candidate genes was obtained with the GeneCards (Safran et al. 2022) human gene database. This database was used to detect genes involved in inflammation, vasculitis and necrosis. Genes were displayed in order of the GeneCards score, where the best-fitting genes obtain the highest rank.
KASP
To confirm the genotypes of the 234 animals that were whole-genome sequenced, 8 SNPs were selected from the set of predicted candidate genes and genotyped using KASP. As the intention was to randomly check the results of the GWAS, not all SNPs were selected. The specific selection of the 8 SNPs was based on their proximity to candidate genes (see Table 5) and their significance level. The KASP assay technology is based on competitive allele-specific PCR and allows bi-allelic scoring of SNPs and indels at specific loci.
For sample preparation, DNA was uniformly diluted to a concentration of 50 ng/μl and 25 μl were placed in 96-well plates. The actual genotyping was performed in the laboratory of LGC Genomics (Hoddesdon, United Kingdom). The primers were designed according to the rs numbers and the sequences covering a range of 50 bases around each polymorphism, which we had provided for the different SNPs.
In the initial PCR cycle, the matching allele-specific forward primer binds to the target region together with the common reverse primer. During amplification, the tail sequence located at the 5′-end of the primer is added to the newly synthesised strand. In the following cycles, further amplification of these sequences takes place.
The KASP master mix used for the assay contains universal FRET (fluorescence resonance energy transfer) cassettes with FAM- or HEX-labelled regions. These regions correspond to one of the allele-specific tail sequences and enable their binding. In this case, the FRET cassette is no longer quenched and emits the corresponding fluorescence signal. If the genotype of the examined animal is homozygous at the SNP, only one of the two possible fluorescence signals is generated. In case of heterozygosity, a mixed fluorescence signal is detected. The effects of the genotypes determined by KASP were analysed by one-factorial analysis of variance (Anova) for metric data and with a generalised linear model for binary data.
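A naive sketch of how a genotype might be called from the two endpoint fluorescence signals; real KASP scoring uses cluster analysis across samples, and the fixed threshold here is an invented simplification:

```python
def kasp_call(fam, hex_, threshold=0.7):
    """Call a genotype from normalized FAM/HEX endpoint fluorescence:
    one dominant signal -> homozygote, a mixed signal -> heterozygote."""
    total = fam + hex_
    if total == 0:
        return "no call"
    if fam / total >= threshold:
        return "homozygote, allele 1 (FAM)"
    if hex_ / total >= threshold:
        return "homozygote, allele 2 (HEX)"
    return "heterozygote (mixed signal)"

print(kasp_call(0.95, 0.05))  # homozygote, allele 1
print(kasp_call(0.48, 0.52))  # heterozygote
```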
Phenotypes
More than 86% of the three-day-old suckling piglets already showed swelling and bleeding of the heels and haemorrhages in the claw wall (Supplemental Table 1). The piglets were affected by eyelid oedema at the same level. The ears showed vein congestion in 86% of the animals and a shiny surface and bristle loss in over 65%. About 40% to 60% of the piglets showed swelling and redness of the tail base, venous congestion at the teats and signs of inflammation at the coronary bands of the front and hind limbs. Over 20% of the piglets had no bristles at the tail base, scab formation at the tail tip and swelling at the teats. As expected, the more severe symptoms, such as bleeding and necroses, were only detectable in 4 to 8% of the piglets at this age (Supplemental Table 1).
All of the additive body part scores except for face and claw wall showed a significant effect of the boar. There were significant differences in the SINS scores among the offspring groups of the three boars (P = 2.9 × 10⁻⁸) (Table 2). The effect of the boar explained up to 14% of the phenotypic variance. In 56% of the phenotypic characteristics, phenotypes in offspring of the Pietrain boar previously classified as unfavourable (PI−) were significantly worse than in Duroc offspring (DU), and in 27% they were significantly worse than in offspring of the Pietrain boar classified as favourable (PI+). In 29% of the cases, the DU offspring were also significantly superior to the offspring of the favourable Pietrain boar (PI+) (Supplemental Tables 1 and 2).
With the exception of the face and the navel, the summarized scores of the individual body parts were significantly correlated with each other, in accordance with the syndrome character, and correlations with the SINS score were between 0.3 and 0.62. Only the heels were correlated to a lesser degree with the other SINS features (Table 3).
GWAS
There was a significant stratification of the data (Fig. 1), which was addressed by the inclusion of PCA data in the statistical model. Associations were excluded from further analysis if one of the expressions of a characteristic (0 or 1) was represented by less than 5% of the cases (i.e., 12 out of 234 animals). GWAS identified 221 significant SNPs, of which 56 were chromosome-wide (−log₁₀ P ≥ 6.4) and 165 were genome-wide significant (−log₁₀ P ≥ 8.4) (Supplemental Table 2). With few exceptions, the SNPs were already known and listed under their rs-IDs. These SNPs were associated with 25 different phenotypic signs. Seventy SNPs were closer to each other than 10⁶ base pairs. They were condensed into 71 chromosomal regions, e.g., 1.1 to 1.9 on SSC1 (Supplemental Table 3). Twenty-nine of the SNPs were associated with more than one phenotype (Supplemental Table 4).
The Manhattan plot in Fig. 2 summarizes the effects of SNPs in association with the SINS score. Details are given in Supplemental Table 2. Effects were found on numerous chromosomes. The QQ plot and the genomic inflation factor (λ = 1.09) did not indicate population stratification and/or cryptic relatedness between animals (Fig. 3).
Anova
Significant SNPs in the GWAS were assigned to the responsible genotypes. The distribution of genotypes as well as the mean values and standard errors of the respective phenotype expressions can be found in Supplemental Table 5. Because some SNPs were significantly associated with several phenotypes of the syndrome, a total of 203 significant relationships could be found, and the favourable and unfavourable genotypes for the respective SNPs could be derived. The SNPs explained between 14.7 and 30.7% of the phenotypic variance. The corresponding negative decadic logarithms of significance were between 8.1 and 18.6.
In 42 of these associations, the GWAS and Anova phenotypes were identical. In a further 76 associations, the GWAS and Anova traits represented the same body part. Several SNPs were associated with more than one body part and different signs of SINS. An overview of the localization of the most important significant SNPs in the genome, including the associated phenotypes, is shown in Fig. 4. The SNPs did not occur singularly but were distributed over the entire genome. In numerous chromosomal regions (numbers to the right of the chromosome), associations with several signs of SINS were found.
Signs at the ears and claw wall were most often associated with SNPs (86 and 44 times, respectively). Tail base (n = 24) and SINS score (n = 25) were also associated with numerous SNPs. Fewer associations were detected for tail tip, teats, face and heels.
Verification of selected SNPs by KASP
Eight SNPs on SSC 9, 12, 13, 14, and 15 were selected for verification of the genome-wide sequencing results by KASP. Differences in genotypes between genomic sequencing and KASP were found in 1.6% of the individuals. They represented an exchange between the major homozygote and the heterozygote genotype. Accordingly, there were no differences between sequencing and KASP in the effects of the alleles on the phenotypes. The 95% confidence intervals of, e.g., the SINS values of the SNP 12_44738423 genotypes were clearly separated (Table 4). The effects of this and two further SNPs are exemplarily shown in Fig. 5A-C. Additionally, effects of distinct SNPs could be found in the offspring of all three boars, although to different degrees (Fig. 6).

Fig. 4 Distribution of significant SNPs with association to the different SINS genotypes across the genome. The vertical lines characterise the extent of the 18 autosomes and the X chromosome in pigs (in Mbp). The pie charts on the lines correspond to the locations of the SNPs with association to the SINS signs. The colours correspond to the indications in the legend. In the area of pie charts with several colours, there are associations with inflammation/necrosis in the area of several body parts. To the left of the vertical lines are the positional data in Mbp, to the right the numbers of the identified 71 chromosomal regions with significant associations in the GWAS.
Positional candidate genes
Of over 11,000 genes linked to inflammation, necrosis and vasculitis functions in GeneCards, 2300 were within 5 Mbp of a significant SNP. Forty-nine genes were located no further than 100 kbp from a SNP and three no further than 10 kbp (data not shown). Of the genes located within at most 100 kbp of a significant SNP, 15 with direct involvement in inflammation, vasculitis and necrosis were defined as candidate genes (Table 5). The 281 genes located within a distance of 500 kbp are additionally shown in Supplemental Table 6.
Discussion
SINS is a syndrome in pigs characterized by inflammation and necrosis of various parts of the body that can lead to pain, suffering and damage. Various studies showed symptoms at the base of the tail, tail tip, ears, teats, navel, coronary bands, claws and heels (for review see Reiner et al. 2021a). The syndrome starts with inflammatory loss of bristles, swelling, and redness. Later, rhagades, exudation, and possibly necrosis occur. The inflammation was detected histopathologically in newborn piglets, suckling piglets, weaners and fattening pigs (Reiner et al. 2020; Kühling et al. 2021a, b). Severe vasculitis, vascular thrombosis and lymphoplasmacellular inflammation (Kühling et al. 2021a) seem to be associated with a shortage of supply to the downstream tissues (Reiner and Lechner 2019) in the sense of ischaemia, as already proposed by Penny et al. (1971) and Blowey and Done (2003).
Observations in the field that offspring of different boars developed SINS to a significantly different extent under the same husbandry conditions were reproduced in a targeted mating experiment under station conditions (Kühling et al. 2021b). Already in these experiments, the boars in question were used as mixed semen simultaneously on the same sow to minimize environmental effects. The heritability of SINS was recently estimated at 0.2 in a very informative study (Leite et al. 2023). For the current study, three extreme boars were selected from the study of Kühling et al. (2021b) and used again on a different sow herd (again as mixed semen). This experimental design was intended to increase the variability of the SINS symptomatology and the comparability of the boars used, while minimizing the number of experimental animals required. Indeed, the boars' effects regarding SINS in their offspring could be reproduced and repeated.
However, a weakness of the study arises from the selection of 3-day-old suckling piglets. This was intended to minimize environmental effects, which increase with age (Reiner et al. 2020). However, it was also clear that, although massive inflammatory symptoms would occur in these young animals, the severe forms, such as exudation and necrosis, would be less frequent. In the end, some forms of necrosis of organ systems occurred so rarely that they could not be considered in the GWAS.
The degree of overlap of symptoms in different parts of the body was striking in recent studies as well as in the present one. The simultaneous occurrence in such different parts of the body, the evidence of vasculitis, thrombosis, and intimal proliferation with intact epidermis, and the histopathologic evidence of granulocytes, macrophages, and lymphocytes in the affected tissue of newborn piglets argue for an endogenous genesis of the disease. Thus, the syndrome initially occurs independently of external factors such as biting or technopathies, although it can be modified by environmental conditions later in its course (Reiner et al. 2021a).
The importance of inflammation was also confirmed at various levels of clinical-chemical, metabolic, and transcriptomic findings (Löewenstein et al. 2021; Ringseis et al. 2021). Signs of inflammation were found in the liver of affected animals. In 3-day-old suckling piglets, mRNA levels of FGF21, haptoglobin, and IL-6 were elevated as a sign of the onset of an acute-phase response. Increased ICAM1 and TNF and reduced IL-8 mRNA levels were indicative of the stimulation of an inflammatory response (Ringseis et al. 2021). A significant increase in SOD1 mRNA can be interpreted as a response to oxidative stress from ischaemic injury. Overall, there was a consistent picture of increased numbers of monocytes and neutrophils, altered blood coagulation in weaners and thrombocytopenia in fatteners, as well as increased acute-phase proteins (especially C-reactive protein [CRP] and fibrinogen), altered serum metabolites and increased serum liver enzymes (Löewenstein et al. 2021).
221 SNPs associated with SINS signs were mapped throughout the porcine genome. The significant SNPs explained between 10 and 35% of the phenotypic variance of the respective characteristics. These results support a polygenic architecture of SINS. It seems that a multitude of genetic variants could be involved in the phenotypic expression of the syndrome. Many of the SNPs were simultaneously associated with phenotypic variation in several traits. This is consistent with the general expectation for a syndrome in which different signs on multiple body parts are thought to be due to a common inflammatory cause. This aspect has already been extensively demonstrated in several experiments (see Reiner et al. 2021a, b). However, SNPs were preferentially found in non-coding gene regions, so their function could not be easily inferred. Thus, they could be non-functional markers in linkage disequilibrium with as yet unidentified functional gene variants; because of the relatively low coverage of the sequence data, it cannot be expected that all existing gene variants were detected by the present study. On the other hand, other high-throughput genomic studies of recent years show that around 90% of the more than one hundred gene variants identified in association with immune-mediated diseases are located in non-coding regions, making it difficult to assign them to molecular functions (Farh et al. 2015; Tak and Farnham 2015; Hindorff et al. 2009). Such SNPs are often associated with long non-coding RNAs, which have been identified in farm animals (Kosinska-Selbi et al. 2020), but also in several inflammatory diseases, even in connection with the stimulation of human endothelial cells with lipopolysaccharide (LPS) (Castellanos-Rubio and Ghosh 2019), a model that fits the assumptions on the pathogenesis of SINS (Reiner et al. 2021a). In 2018, a database of over 10,000 lncRNAs was made available (lncRNAnet; Liang et al. 2018). However, screening with the SNPs of the current study did not match the listed lncRNAs. It remains for future studies to elucidate possible associations of the available SNPs with other, not yet described potential lncRNAs. Other forms of RNA, such as circular RNA (Yang et al. 2018), that can have regulatory effects on genes are also not excluded, but could not be investigated in the present study. On the other hand, 98% of the SNPs of the present study are already known (Zhou et al. 2017; PigVar - The Pig Variations and Positive Selection Database; http://202.200.112.245/pigvar/index.jsp).
Thus, the identification of candidate genes was difficult. TRIM68 plays a critical role as a negative regulator of type I IFN production upon viral and bacterial contact, which is demonstrated by the development of spontaneous inflammation and disease in mice lacking this protein (Wynne et al. 2014). We discovered a 5′-UTR premature start codon gain variant in TRIM68 on SSC9 that was associated with ear score and wall bleeding. Nothing is known about TRIM68 in pigs, but generally, TRIM68 turns off type I IFN production and thus reduces proinflammatory cytokine production. Thus, the discovered variation in TRIM68 might be involved in differing susceptibility to SINS, according to the hypothesis that SINS can be triggered by MAMPs leading to inflammation (e.g., Reiner et al. 2021a, b). F2 (thrombin) is a candidate gene on SSC2 that is associated with SNPs for wall score, including wall bleeding, and also ear vein congestion. F2 is involved in blood homeostasis, inflammation and wound healing (Glenn et al. 1988), which are important features in SINS (Löewenstein et al. 2021; Ringseis et al. 2021). CD96, as a candidate gene on SSC13, also seems to be involved in inflammation (e.g., LPS-mediated) and the immune response (Gaudet et al. 2011). This chromosomal region was associated with inflammation in the tail base and ears.
ITIH4, the inter-alpha-trypsin inhibitor heavy chain 4, is a type II acute-phase protein (APP) involved in inflammatory responses to trauma and acute ischemia in humans (Kashyap et al. 2009). It is induced by IL6 in hepatocytes and may also play a role in liver development and regeneration. It is located on SSC13 and associated with ear score and wall bleeding. The roles of ischemia (Reiner and Lechner 2019), IL6 (Ringseis et al. 2021) and the acute-phase reaction (Löewenstein et al. 2021) in SINS have been demonstrated. The synthesis of acute-phase proteins takes place mainly in the liver under the stimulus of the pro-inflammatory cytokines IL-1β, IL-6 and TNF-α (Petersen et al. 2004).
Reticulon 3 (RTN3) is involved in endoplasmic reticulum stress, apoptosis and inflammation (Wan et al. 2007). It is a repressor of NF-κB and thus a candidate gene in SINS, as all three aspects have been described for the syndrome (Ringseis et al. 2021). It was located on SSC2, associated with inflammation of the ears.
SARM1 is involved in the innate immune response in mammals. It is a negative Toll-like receptor regulator and inhibits TICAM1/TRIF- and MYD88-dependent activation of JUN/AP-1, TRIF-dependent activation of NF-kappa-B and IRF3, and the phosphorylation of MAPK14/p38 (Carty et al. 2006). This makes it a further interesting candidate gene for SINS, with the potential to regulate the LPS signal to inflammation, which is an important part of SINS pathogenesis (Reiner et al. 2021a). It is located 72 kbp from a SNP on SSC12 associated with SINS and heel bleeding.
ZFAND6 (A20-type zinc finger domain) is involved in the regulation of monocytes and in the regulation of TNF-alpha-induced NF-kappa-B activation and apoptosis (Fenner et al. 2009). The corresponding SNP lies on SSC7, is associated with ear and face score, and fits well with the important role of monocytes and apoptosis in SINS (Löewenstein et al. 2021). NUDT3 is also involved in the regulation of the NF-κB signaling pathway (Warner et al. 2013). P4HA1 is involved in collagenogenesis (Annunen et al. 1997). This gene may therefore be involved in the variability of the sensitivity of skin and skin appendages to inflammation and necrosis.
Some interesting candidate genes are located further from the SNPs. One example is CCL2 (distance: 3.9 Mbp from a significant SNP) in the region of SSC12 associated with SINS. As a member of the chemokine superfamily of secreted proteins involved in immunoregulatory and inflammatory processes, CCL2 displays chemotactic activity for monocytes and basophils in humans (Weber et al. 1996). An earlier name for CCL2 was monocyte chemoattractant protein-1 (MCP-1). It has been implicated in the pathogenesis of diseases characterized by monocytic infiltrates, like psoriasis, rheumatoid arthritis and atherosclerosis (Zhang et al. 1994). CCL2 seems to be involved in the recruitment of monocytes into the arterial wall during the disease process of human atherosclerosis (Li et al. 1993). This process seems to be a typical finding in SINS (Kühling et al. 2021a, b; Löewenstein et al. 2021; Ringseis et al. 2021). Different gene variants (cis or trans) might lead to higher monocyte recruitment rates followed by higher degrees of SINS. It has been shown that the porcine CCL2 mechanism works in the same way as in mouse and human and that porcine monocyte subsets differ in their expression of CCL2 and in their responsiveness to CCL2 (Moreno et al. 2010).
The relatively low number of animals is a significant weakness of this study. The reason lies in the extremely time-consuming recording of phenotypes. Here, future studies could work out possibilities for reduced data collection while retaining as much information as possible regarding SINS. Population stratification is one of the major confounding factors in GWAS (Liu et al. 2021; Yan et al. 2022). If case and control samples are drawn disproportionally from different populations and allele frequencies differ between populations, an inflation of type 1 error rates can arise (Freedman et al. 2004). This problem increases with increasing sample size in large-scale association studies (Reich and Goldstein 2001; Price et al. 2006; Hellwege et al. 2017; Liu et al. 2021). BLINK was used to incorporate principal components as covariates to reduce false positives due to population stratification and to iteratively incorporate associated markers as covariates to eliminate their connection to the cryptic relationship among individuals (Wang and Zhang 2021). Using principal components is a suitable tool to control population stratification, but it can also lead to negative bias (Bouaziz et al. 2011; Zhao et al. 2018). Depending on the LD structure of the data, it might happen that a substantial proportion of the genetic variation is removed from the dataset, which leads to an increase in the number of false positives and a reduction of power. Although principal component analysis was used to control the stratification, there is still a risk of false positives and a loss in power.
Because of the relatively low coverage, the relatively low number of individuals with relatively complex scores, and the hypothesis that SINS as a syndrome might have evolved as a side effect of complex selection in swine, it could not necessarily be expected that large effects of major-gene character would be found or that all effects could actually be mapped by the present study.
The use of mixed semen also did not show only the expected favourable effects on environmental control.
In about a third of the litters, one of the two boars prevailed particularly clearly, which influenced the distribution of piglets. The reason for this effect remains open, because in some litters boar 1 prevailed over boar 2, and in some it was the other way round. As a result, the number of offspring per boar was largely balanced in the end, and the degrees of relationship were reduced. Furthermore, the distributions were taken into account by considering the litter and the PCA values in the GWAS.
Nevertheless, the postulated genome-wide distribution of effects associated with the various clinical signs of the syndrome was confirmed. In fact, effects explaining more than 10 to 30% of the phenotypic variance also occurred. However, the aforementioned aspects of the study meant that no functional SNP could be mapped. Future studies will have to determine whether functional SNPs are present in linkage with the mapped SNPs and whether these could be lncRNAs, variation in promoter regions, exon mutations with amino acid exchange, or other genomic variants.
Conclusion
Swine inflammation and necrosis syndrome can affect a number of body parts with varying degrees of inflammation and necrosis.The inflammatory basis of the syndrome is well characterized and points to a variety of potential genetic factors.The present study is the first to demonstrate the genome-wide association of the syndrome with gene markers.The results suggest a polygenic inheritance.Candidate genes can be defined in the vicinity of numerous SNPs.The identification of the responsible functional polymorphisms is reserved for future studies.
Fig. 1
Fig. 1 Principal Coordinates Analysis (PCA) plot based on the distance matrix as calculated with TASSEL 5.0. Black = PI+ offspring, red = PI− offspring, green = DU offspring. Axes 1 and 2 represent the two main principal components creating genetic distances between the offspring of the three boars
Fig. 2
Fig. 2 Manhattan plot of GWAS with P-values for Z-transformed SINS scores.Negative decadic logarithm of the significance of SNPs in the genome-wide association study.The blue and red lines indicate chromosome-wide and genome-wide significance, respectively
Fig. 5
Fig. 5 Exemplary effects of selected SNPs on SINS phenotypes.The box-plots with whiskers represent the distribution of 90% of the piglets' values.A SNP 9_90241577 (SSC_position); TT: n = 166; TG:
Table 2
Metric SINS phenotypes in the offspring of the three boars
Table 3
Spearman correlations between phenotypic body part scores
Table 4
Verification of SNP-results by KASP with selected SNPs
Table 5
Candidate genes involved in inflammation, vasculitis and necrosis (according to GeneCards) in the proximity of SNP markers. *SNP was verified by KASP (see Table 4)
|
2023-08-02T06:17:24.100Z
|
2023-08-01T00:00:00.000
|
{
"year": 2023,
"sha1": "d3413fd010aea4ff44af0396f6033bfb70e3ce58",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00335-023-10011-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "09abeccd64fe44f9b1cceb971ac79e7b8b4043cf",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
255281807
|
pes2o/s2orc
|
v3-fos-license
|
Dead Zone Minimization Using Variable-Delay Element in CP-PLL for 5G Applications
The dead zone occurring in a phase-frequency detector (PFD) is a critical parameter that affects the performance of a phase-locked loop (PLL). Though a fixed-delay element reduces the dead zone, it creates an imbalance in the pulse-arrival time between the up and down signals to the charge pump, which increases the phase noise in the output spectrum of the PLL. Therefore, in this work a new variable-delay element (VDE) is incorporated in the PFD to reduce the dead zone and consequently the phase noise of the PLL. The performance of the proposed PFD incorporated in a PLL is analyzed using Cadence Virtuoso 90 nm CMOS technology, achieving a phase noise of −148.89 dBc/Hz at a frequency offset of 1 MHz, a lock time of 6.01 µs, a power of 0.056 mW, and a dead zone of 110.5 ps, while operating at a frequency of 3.5 GHz, making it suitable for 5G applications.
Introduction
Phase-locked loops (PLLs) are the focus of research for their challenging aspects such as a small dead zone, low phase noise, and low reference spur. Research in this field is looking for additional wireless spectrum to offer higher capacity beyond 4G standards, driven by the need for faster mobile broadband connections. The PLL is a widely used method for frequency synthesis and serves as the backbone of wireless transceiver systems. The implementations of the PFD, CP, VCO, loop filter, and frequency divider are the crucial elements defining in-band efficiency for an analogue-intensive PLL in terms of the trade-off between power and noise. The charge-pump phase-locked loop (CP-PLL) [1] consists of blocks such as the voltage-controlled oscillator (VCO), phase-frequency detector (PFD), charge pump (CP), loop filter, and frequency divider. The CP-PLL is chosen from among the many types of PLLs because it is very easy to integrate in micro-circuit devices.
Each block of the PLL governs a particular aspect of the signal, such as bandwidth consistency, spur reduction, phase noise, and a wide lock range. The PFD is the second most important block of the PLL after the VCO. In general, all parts of the PLL contribute to phase noise, with the VCO being the main source of noise. However, the noise generated by the PFD cannot be neglected. This noise from the PFD becomes prominent when the dead zone and reference spur go beyond their limits.
Responding to the input reference CK_ref and feedback signal CK_out, the PFD produces two signals, up and down, according to the generic phase-locked loop. The noise generated by the PFD modulates the widths of the up and down pulses, resulting in an arbitrary component in the output current of the charge pump. Three scenarios can be drawn from this. Where the CP produces no net output, the phase noise can modulate the widths of both PFD outputs in equal measure. The position of the up signal can be modulated by phase noise relative to the down signal only. The most intriguing case is when the phase noise [2,3] modulates the widths of both the up and down signals simultaneously. Consequently, the random difference between the widths of the up and down pulses appears to increase the phase noise of the PFD.
Hence, the PFD is one of the critical building blocks, as it is also responsible for measuring the phase difference between the input and reference signals. This phase difference [4] has a considerable effect on the PLL's overall performance, such as speed, locking ability, and noise performance. When the inputs differ in frequency, the phase difference changes every cycle by

Δφ = 2π |T_ref − T_out| / T_ref.    (1)

Here, T_ref and T_out are the time periods of the input reference CK_ref and the feedback signal CK_out, respectively. As a result, an efficient PFD circuit should be developed [4-6] to effectively identify any phase difference and minimize as much risk as possible. The dead-zone phenomenon is mostly responsible for an increased amount of phase noise in the PFD. Whenever the positive-going edges of the two inputs to be matched are close to one another (or at around 360° of phase difference), the dead zone arises where the phase difference is quite minimal. If the PFD is in the dead zone, it cannot recognize such small phase errors, and the PLL may reach an erroneous condition and lock to the wrong phase. Therefore, the dead zone [7] must be successfully controlled.
As the noise [8-10] modulates the performance of the PFD significantly, designing an efficient PFD that reduces phase noise by managing the dead zone has become a challenging research area in recent decades. As reported in [2], the dead zone is frequently caused by the pre-charging time of the internal parasitic capacitances. Hence, the authors designed a PFD that diminishes the need for pre-charging. This method minimizes the dead zone to within the conceptual constraints required by process-voltage-temperature (PVT) variations. Such a PFD is consistent with the previously published high-speed PFD in [11], but it has been merged with a delay cell and two following transistors. With the frequency of operation kept nearly constant, the dead zone noted for this design is 61 ps, whereas in [6,12] the dead zone is reported as 156 and 221 ps, respectively. However, in [2] the phase-noise performance and the lock time of the PLL require further research. When contrasted with its type-I counterpart, a classic tri-state phase-frequency detector [13] has the advantages of circuit robustness and a broad acquisition range. Nevertheless, the dead-zone concern of the tri-state PFD can slow down the operation when driving the charge pump. Another work [14] described a dual-loop type-II PLL that used many power-hungry CML-type sub-blocks, including a frequency detector and a divide-by-2 phase detector; however, its dead zone and power consumption can be further improved. A calibrated PFD [4] technique is utilized to reduce the reference spur sustained without the dead zone. In its reset path, the suggested circuitry employs a PFD with a variable-delay element, with the delay length managed by feedback from the CP. However, the circuitry is complex, and the phase noise, dead zone, power consumption, and lock time are not reported.
Having reviewed the above previous work and understood its limitations, this paper presents a phase-frequency detector with a variable-delay element in its reset path, reducing phase noise by controlling the dead zone while consuming low power. Here, the novel PFD is incorporated in a phase-locked loop, and the lock range, lock time, power consumption, and dead zone are observed.
Phase-Frequency Detector
The phase difference between the two input signals, from the frequency divider and the reference, when fed to the PFD produces the outputs denoted as the up and down signals. The phase difference between the two input signals can be used to calibrate the widths of these up and down signals. Based on these signals, the VCO alters its output frequency. The VCO increases its output frequency whenever the reference signal leads the feedback input signal. Conversely, the VCO lowers its output frequency whenever the reference signal lags behind the feedback input signal. The PLL becomes locked to an invalid phase or frequency if the PFD is unable to identify the accurate phase or frequency difference. If a rising edge arrives at one of the PFD inputs while both up and down are low, the corresponding output (up or down) goes high and remains high until that input becomes low and the second input becomes high. Whenever both outputs become high, the device resets. The PFD is set or reset by these signals. When the PLL is about to lock, there is a minor phase difference between the two inputs.
If a rising edge is observed on either of the signals, the up or down signal takes a finite amount of time to propagate and switch on the CP. If, in this time frame, the rising edge of the other input arrives, at that moment the output becomes high again. Whenever both PFD outputs become high, a reset signal is produced. The period during which both the up and down signals are high is known as the dead zone, and during this period the PFD is not able to identify phase differences. The output frequency of the PLL then tends to vary, causing an increase in the overall phase noise. For the standard PLL with a frequency divider in its feedback [15], the phase-noise spectral density referred to the divider ratio gain, N, is given by Equation (2), where f_CKref is the operating frequency of the PFD and ∆t, known as the blind zone, is defined as the phase difference between the two inputs of the PFD. The PFD's gain attenuation factor [6] is given by Equation (3), where T_D is the delay of the delay element. Solving Equations (2) and (4) then yields Equation (5). From Equation (3), it can clearly be observed that the phase noise is correlated with the delay width. In practice, however, the dead zone cannot be made zero.
Researchers have integrated a delay element block with a delay of T_D in the PFD's reset path. However, within this delay T_D, as seen in Figure 1a, the very next rising edge of the reference signal CK_ref has no impact on the up signal, resulting in incorrect behavior because the up signal is already turned on. As shown in Figure 1b, any rising edge of CK_ref arriving during the high period of reset has no impact on the up signal. Because the rising edge of the up signal was absent in both situations, the down signal produced a negative output, exaggerating the phase and frequency difference between CK_ref and the feedback signal CK_out for phase differences greater than 2π − 2δ, where δ = 2πT_D/T_CKref. This loss of information about the phase or frequency difference immensely limits the effectiveness of the PFD and PLL. The PFD cannot identify phase differences smaller than φ_dz. Within this range, the phase at the PLL output tends to vary freely, leading to increased phase noise. The easiest solution to the dead-zone issue is to place a fixed-delay element in the reset path of the PFD. This forces the up and down pulses to be active at the same time for a fixed period in every cycle whenever the PLL is locked. The disadvantage of this technique is an imbalance in pulse-arrival times (as shown in Figure 2), and the up and down currents in the CP cause a periodic disruption of the control voltage of the VCO, resulting in reference spurs in the PLL output spectrum. The variable-delay element resolves this drawback since there is no fixed delay period, and this now calls for a novel PFD.
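As a rough numerical check of the relation δ = 2πT_D/T_CKref, the following Python sketch evaluates δ for an illustrative reference frequency; the values of f_ref and T_D are assumptions, with T_D set on the order of the reported dead zone:

```python
import math

f_ref = 100e6        # illustrative PFD operating frequency (Hz), assumed
T_ref = 1 / f_ref
T_D = 110.5e-12      # delay (s), set on the order of the reported dead zone

delta = 2 * math.pi * T_D / T_ref
# Phase differences above 2*pi - 2*delta are mis-detected (sign inversion),
# while differences below the dead-zone phase phi_dz are not detected at all.
print(f"delta = {delta:.4e} rad; usable range up to {2*math.pi - 2*delta:.4f} rad")
```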
Proposed PFD with Variable-Delay Element
The input signals differ across PLL applications. A fixed-delay element would not function efficiently to lessen the dead zone given these differences in input signals. If the fixed delay is greater than the required delay, the reference spurs will be enhanced. On the other hand, if the delay is below the desired value, the dead-zone issue will reoccur. To address this, a variable-delay element (Figure 3) is integrated in the reset path, ensuring that the overall delay is appropriate while simultaneously accomplishing low phase noise and a minimum dead zone, and making the PFD adaptable to PLLs operating at different frequencies. The proposed PFD has a variable-delay element (Q10-Q14), depicted in Figure 4, placed in its reset path, which reduces the phase noise and the dead zone. The simulation and analysis are performed in a 90 nm CMOS process with a supply voltage of 1 V.
Variable-Delay Element
The proposed variable-delay element is shown in Figure 3 and consists of two inverters, i.e., a current-starved inverter and a standard inverter. The term "current-starved" in this context means that the current flowing through the circuit is deliberately constrained. A "standard" inverter is connected directly to the supply rails and ground; it can theoretically draw as much current as it wants. When the PFD resets, it creates a dead zone on the down transition. Thus, the design confines the current flow only in this transition. If the control voltage is increased, it slows down the transition by increasing the resistance seen at node "B" throughout charging. The second inverter is not current-starved; thus, the output of the variable-delay element has the usual rise and fall times.
Approximating the average current by the saturation current of the MOSFETs in the second inverting stage yields a fitting expression (Equation (6)), where λ is the channel-length modulation coefficient. According to Equation (6), the propagation delay of the second stage can be expressed as in Equation (7), in which t_pLH and t_pHL are the propagation delays for the low-to-high and high-to-low output transitions, respectively; the expression for t_p2 covers switching from V_DD to ground or vice versa. The total delay of the delay element is t_p,total = t_p1 + t_p2. Moreover, if the fixed delay is greater than required due to variations in input signals for various PLL applications, the reference spurs will be enhanced. At the other end of the spectrum, unless the delay is large enough to keep the complete PFD delay positive, a change in operating conditions can drive the overall PFD delay negative, reintroducing the dead-zone issue. To avoid this, a PFD with a VDE in its reset path is used, ensuring that the complete delay is positive, with fewer reference spurs and the least dead zone in the PFD's output signals. The CP circuit converts the digital output signals fed from the PFD to an analog voltage, referred to as the control voltage of the VCO. The VCO produces a clock whose frequency is managed by this control voltage V_cont, which is tuned by the loop filter. The loop filter eliminates the high-frequency components from the signal produced by the CP. By dividing the output frequency of the VCO by a particular ratio, phase noise is limited. A frequency divider with a ratio of 16 is used to divide the output frequency in this design.
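To give a feel for the delay budget, the sketch below estimates the stage delays under the common first-order approximation t_p ≈ C_L·V_DD/(2·I_avg) for the current-starved stage; all component values are invented for illustration and are not the paper's device parameters:

```python
# Rough delay estimate for the variable-delay element, assuming the common
# approximation t_p ~ C_L * V_DD / (2 * I_avg); all values are illustrative.
C_L = 5e-15         # load capacitance (F), invented
V_DD = 1.0          # supply voltage (V), as in the 90 nm process used here
I_starved = 20e-6   # starved charging current (A), set by the control voltage

t_p1 = C_L * V_DD / (2 * I_starved)  # current-starved first stage
t_p2 = 10e-12                        # standard second stage, invented value

t_total = t_p1 + t_p2                # t_p,total = t_p1 + t_p2
print(f"t_p1 = {t_p1 * 1e12:.1f} ps, total delay = {t_total * 1e12:.1f} ps")
```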
Results and Discussion
Further analysis and discussion are based on simulating the proposed PFD and CP-PLL [1] in 90 nm CMOS technology in Cadence Virtuoso tools with a supply voltage of 1 V. Various PLLs were simulated for performance comparison, as demonstrated in this section.
The transient analyses of the conventional PLL and the novel CP-PLL were simulated, and their control voltages are presented in Figure 5. Transient analysis was carried out by varying the delay voltage. As there are no passive elements such as R and C components, the CP-PLL consumes less area. The power of the innovative CP-PLL and the standard PLL was calculated and compared. The power consumption of the conventional PLL is 0.058 mW, whereas for the novel CP-PLL it is 0.056 mW. As per the results shown in Table 1, the proposed CP-PLL has slightly lower power than the standard PLL. The dead zones of the conventional PLL and the novel PLL observed here are 113.03 ps and 110.5 ps, respectively, as shown in Table 1. The phase-noise comparison of the conventional PLL and the novel PLL is shown in Figure 6. The phase noise of the novel PLL is −148.89 dBc/Hz at 1 MHz offset, and that of the conventional PLL is −119.4 dBc/Hz at 1 MHz offset (Table 1). For the simulation of the novel PLL, the delay voltage is 0.5 V; if the delay voltage is lower, the PLL will not lock. From the results compared in Table 1, it can be observed that the novel PLL has less phase noise than its counterpart due to the variable-delay element. The novel CP-PLL also has a wide lock range of 3.12-14.01 GHz, a lower dead zone of 110.5 ps, and a frequency of 3.5 GHz.
Corner Analysis
The proposed CP-PLL outperforms with regard to power, phase noise, dead zone, lock range, and lock time. Table 2 shows the results of the proposed CP-PLL, further analyzed with temperature variations ranging from 27 °C to −27 °C across different corners (NN, FF, FS, SF, SS at 27 °C, 0 °C, and −27 °C). From the results shown above, the phase noise is comparatively identical in all five corners; however, FS is better and SS is worse. With a decrease in temperature, the phase noise increases. From the results, it is also observed that power varies from one corner to the other. The power consumption of the novel PLL is high in the FF corner and low in the SS corner. The novel CP-PLL locks fast in the FF corner and slowly in the SF corner. The phase noise, lock time, and power consumption plots are depicted in Figures 7-9, respectively, and are compared in the different corners (NN, FF, FS, SF, and SS) at various temperatures (−27 °C, 0 °C, and 27 °C). Table 3 compares the summary of the proposed CP-PLL with recent works. As can be seen, previous research achieved a high oscillation frequency at the expense of increased power usage, a wide lock range, and increased phase noise, with the high dead zone rendering those designs unfit for high-speed circuits.
Conclusions
In this work, a PLL is proposed with a variable-delay element in the reset path of its PFD. This approach minimizes the dead zone, leading to improved phase-noise efficiency. The design's lock time and lock range are also noted. The lock range is found to be less than that of the traditional PLL, and the lock time is significantly shorter than that of the PLL without any delay. Compared to the traditional design, this unique PLL reduces power by up to 4.92%, phase noise by up to 24.6%, dead zone by up to 2.23%, and lock time by up to 12.8%. The novel CP-PLL is compared with previous works, and it is also compared across different corners at various temperatures, from −27 °C to 27 °C. The novel CP-PLL has a phase noise of −148.89 dBc/Hz at 1 MHz, a power consumption of 0.056 mW, a dead zone of 110 ps, a lock time of 6.01 µs, a lock range of 3.12-14.01 GHz, and a frequency of 3.5 GHz at a supply voltage of 1 V. These characteristics were compared, and the results showed that the novel circuit can function more effectively, reducing the power, phase noise, and dead zone while keeping the lock range and lock time at acceptable levels. Data Availability Statement: The authors confirm that the data supporting the findings of this study are available within the article or the references mentioned below.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2022-12-31T16:04:21.784Z
|
2022-12-29T00:00:00.000
|
{
"year": 2022,
"sha1": "08ab994bb3a10e4acbbbed38e80cd1b742234bc4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-666X/14/1/81/pdf?version=1672296106",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0a8131c07cf80692caa35841969d56f150a81e63",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
89614411
|
pes2o/s2orc
|
v3-fos-license
|
Thermodynamics and dielectric anomalies of DMAAS and DMAGaS crystals in the region of phase transitions (Landau theory approach)
A simple description of thermodynamics of DMAAS and DMAGaS ferroelectric crystals by means of Landau expansion is proposed. Conditions of occurrence of phase transitions are established and their temperatures are obtained. The influence of external hydrostatic pressure on phase transitions is described. The temperature behaviour of dielectric susceptibility components and their anomalies in the vicinity of phase transition points are investigated. The results obtained are compared with the experimental data.
Introduction
Dimethylammonium aluminium sulphate (DMAAS) and dimethylammonium gallium sulphate (DMAGaS) ferroelectric crystals have been intensively studied in recent years. Their interesting feature is the possible existence of the crystal in the ferroelectric or in the antiferroelectric state depending on external conditions (e.g., temperature, hydrostatic pressure). There is a significant difference in the thermodynamical behaviour of the crystals despite the isomorphism of their structure. At ambient pressure the DMAGaS crystal has three phases: paraelectric (T > T_c), ferroelectric (T_1 < T < T_c) and antiferroelectric (T < T_1), with phase transition temperatures T_c = 136 K (first order transition close to the tricritical point) and T_1 = 117 K (first order transition). There are only two phases in the DMAAS crystal at ambient pressure: paraelectric (T > T_c) and ferroelectric (T < T_c), with T_c = 155 K.
The nature of phase transitions in DMAAS and DMAGaS crystals was unclear until recently. During the last years, a conviction has been established about the important role of dimethylammonium (DMA) groups in the phase transitions due to their orientational ordering-disordering (see, for example, [3,6,13-16]). In [17] a microscopic approach based on an order-disorder model accounting for the different orientational states of the DMA groups was proposed. In the framework of the model, the phase transition to a ferroelectric state has been described, and the conditions under which this transition is of the first or of the second order have been established. Order parameters of the system have been constructed. They are connected with the differences of occupancies of the four possible positions of nitrogen ions corresponding to different orientations of the groups. As a result of symmetry analysis, it has been established that components of the order parameters belonging to the irreducible representation B_u of the point symmetry group 2/m of the high-temperature (paraelectric) phase describe ferroelectric ordering of the DMA groups along the ferroelectric axis OX (in the crystallographic plane (ac)) as well as their antiferroelectric ordering along the OY axis (crystallographic axis b). The inverse ordering (antiferroelectric along OX and ferroelectric along OY) corresponds to order parameter components belonging to the irreducible representation A_u. The appearance of nonzero order parameters of B_u type turns the system into the ferroelectric state (point group m), while nonzero order parameters of A_u type cause an antiferroelectric state (point group 2).
Notwithstanding the further prospects of the microscopic approach by means of the four-state order-disorder model, a simpler but more general thermodynamical description based on the Landau expansion is of interest. One can construct the corresponding Landau free energy and, in a standard way, investigate possible phase transitions and obtain criteria for their realization with the use of the data of the above-mentioned symmetry analysis. This is the main goal of the present work. The results obtained in the framework of the Landau expansion will be used for interpreting the changes induced by external pressure in the picture of phase transitions and for describing dielectric anomalies at the phase transition points of the crystals investigated.
Thermodynamics of phase transitions (Landau theory approach)
Let us describe the thermodynamics of the phase transitions in DMAAS and DMAGaS crystals using the Landau expansion. We consider a simplified version in which only one linear combination of the initial order parameters is included for each of the B_u and A_u irreducible representations. The combinations included are true order parameters: the coefficients at their squared values tend to zero at the points of the corresponding second order transitions.
The order parameters, which transform according to the irreducible representations B_u and A_u of the point symmetry group 2/m of the high-symmetry phase, are denoted as η_b and η_a respectively. The first parameter, η_b, describes polarization of ferroelectric type along the OX axis with simultaneous antiferroelectric type ordering along the OY axis; the second one corresponds to the inverse orientation, where antipolarization along OX is accompanied by polarization along OY.
We restrict ourselves to the case of a second order phase transition from the nonpolar high-temperature phase to the ordered one. In this case the Landau expansion of the free energy can be limited to terms of the fourth order:

F = F_0 + (a/2) η_a² + (b/2) η_b² + (c/4) η_a⁴ + (d/4) η_b⁴ + (f/2) η_a² η_b² .    (1)

A linear dependence of the coefficients a and b on temperature is assumed,

a = a′(T − T′_c),    b = b′(T − T_c),    (2)

where the condition T_c > T′_c is satisfied for a normal state of the crystal; this corresponds to the transition from the paraelectric to the ferroelectric phase occurring first on lowering the temperature.
The conditions of thermodynamical equilibrium correspond to the minimum of the free energy and read

∂F/∂η_a = η_a (a + c η_a² + f η_b²) = 0,    ∂F/∂η_b = η_b (b + d η_b² + f η_a²) = 0.    (3)

At zero external fields there are the following solutions:
- paraphase (P-phase): η_a = η_b = 0;
- ferroelectric phase (F-phase): η_b² = −b/d, η_a = 0;
- antiferroelectric phase (AF-phase): η_a² = −a/c, η_b = 0.

The corresponding expressions for the free energy in these phases are

F_P = F_0,    F_F = F_0 − b²/(4d),    F_AF = F_0 − a²/(4c).

The phase transition P→F, which is of the second order in the approximation used, takes place at the temperature T_c. The phase transition F→AF, which can take place at lower temperatures, occurs at F_F = F_AF, i.e. at b²/d = a²/c. This condition determines the temperature of this first order phase transition:

T_1 = (κ T′_c − T_c)/(κ − 1),    κ = (a′/b′)(d/c)^{1/2}.

The number of phases of the system and the phase transition temperatures depend on the values of the system parameters: the inequalities

T_c/κ < T′_c < T_c

define the region of temperature T′_c values where the F-phase exists as an intermediate one. These conditions are illustrated by the phase diagram in figure 1. In the case T′_c > T_c a direct phase transition P→AF from the paraelectric phase to the antiferroelectric one can take place.
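To make these conditions concrete, the following minimal numerical sketch (Python) evaluates the free energies of the P-, F- and AF-phases from expansion (1) and picks the stable phase at each temperature. The fourth-order coefficients and the scale of a′ are illustrative assumptions, chosen only so that κ matches the value 2.22 obtained below for DMAGaS; metastability near the first order point is ignored.

```python
import numpy as np

# Illustrative Landau parameters; c, d and the scale of a' are assumptions,
# chosen only so that kappa = (a'/b')*sqrt(d/c) = 2.22 (the DMAGaS value).
Tc, Tpc = 136.0, 125.0            # K: T_c and T'_c
b1 = 0.33e-3                      # b' (1/K)
c = d = 1.0
a1 = 2.22 * b1                    # a' fixed by kappa = 2.22

def free_energies(T):
    """F_P, F_F, F_AF relative to F_0, from expansion (1)."""
    a = a1 * (T - Tpc)
    b = b1 * (T - Tc)
    F_F = -b**2 / (4 * d) if b < 0 else np.inf    # needs eta_b^2 = -b/d > 0
    F_AF = -a**2 / (4 * c) if a < 0 else np.inf   # needs eta_a^2 = -a/c > 0
    return 0.0, F_F, F_AF

for T in (145.0, 135.0, 125.0, 120.0, 115.0, 105.0):
    phase = ("P", "F", "AF")[int(np.argmin(free_energies(T)))]
    print(f"T = {T:5.1f} K  ->  {phase}-phase")

kappa = (a1 / b1) * np.sqrt(d / c)
T1 = (kappa * Tpc - Tc) / (kappa - 1.0)
print(f"kappa = {kappa:.2f},  T1 = {T1:.1f} K")   # ~116 K, as for DMAGaS
```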
The experimentally observed changes of the temperatures of the P→F and F→AF phase transitions, and the consecutive disappearance of the F-phase with increasing external hydrostatic pressure, can easily be explained using the obtained diagram. Under the assumption that the effect of pressure leads mainly to shifts of the temperatures T_c and T′_c, while the changes of the other Landau expansion parameters are negligible, the following linear relations are obtained:

T_c(p) = T_c0 + x p,    T′_c(p) = T′_c0 + x′ p,

where x = dT_c/dp and x′ = dT′_c/dp. According to the data published in [18], dT_c/dp ≡ x = −0.277 K/MPa and ∂T_1/∂p = 1.95 K/MPa; if one applies a linear approximation to the dependence of T′_c on p, then dT′_c/dp ≡ x′ = 0.86 K/MPa.
The obtained relations are illustrated by the diagram shown in figure 2. This diagram qualitatively matches the experimental (T,p) diagram for the DMAGaS crystal (at T_c0 = 136 K, T_1^0 = 116 K) [18]. The coordinates of the triple point, where the P-, F- and AF-phases meet, follow from the condition T_1(p) = T_c(p). The pressure value

p* = (T_c0 − κ T′_c0)/(κ x′ − x),

at which the F→AF transition temperature T_1(p) vanishes (see figure 2), is an important characteristic of the model. At p* < 0, which is realized at T′_c0/T_c0 > 1/κ, the AF-phase exists in the region of low temperatures at ambient pressure (this situation takes place for DMAGaS). At p* > 0 (i.e. T′_c0/T_c0 < 1/κ) and ambient pressure only the P- and F-phases occur; this case can correspond to the DMAAS crystal.
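The (T,p) lines of figure 2 follow directly from the relations above; the short sketch below (Python) uses only the numerical values quoted in the text and prints the ambient-pressure F→AF point, the triple point and p*.

```python
# (T,p) phase diagram lines for DMAGaS from the values quoted in the text [18]
Tc0, Tpc0 = 136.0, 125.0        # K
x, xp = -0.277, 0.86            # dTc/dp, dT'c/dp (K/MPa)
kappa = 2.22

Tc = lambda p: Tc0 + x * p
Tpc = lambda p: Tpc0 + xp * p
T1 = lambda p: (kappa * Tpc(p) - Tc(p)) / (kappa - 1.0)   # F->AF line

p_triple = (Tc0 - Tpc0) / (xp - x)                # where T1(p) = Tc(p)
p_star = (Tc0 - kappa * Tpc0) / (kappa * xp - x)  # where T1(p) = 0

print(f"T1(0) = {T1(0):.1f} K")                   # ~116 K at ambient pressure
print(f"triple point: p = {p_triple:.1f} MPa, T = {Tc(p_triple):.1f} K")
print(f"p* = {p_star:.1f} MPa (p* < 0: AF present at ambient pressure)")
```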
Dielectric susceptibility
The approach used in the previous section allows us to derive expressions for the components of the dielectric susceptibility tensor in the vicinity of the phase transition points and to describe their temperature dependences in general. In the approximation used, the components P_x and P_y of the polarization vector are determined by the parameters η_b and η_a respectively. Hence, in units where P_x = η_b and P_y = η_a, and proceeding from equations (3), one obtains

χ⁻¹_xx = b + 3d η_b² + f η_a²,    χ⁻¹_yy = a + 3c η_a² + f η_b²,    (19)

where η_a and η_b take their equilibrium values in the corresponding phase. The following particular cases follow from expression (19):

1. Paraphase (P):

χ⁻¹_xx = b′(T − T_c),    χ⁻¹_yy = a′(T − T′_c).

2. Ferroelectric phase (F):

χ⁻¹_xx = 2b′(T_c − T),    χ⁻¹_yy = (a′ − f b′/d)(T − T*);

here the notation

T* = [a′ T′_c − (f b′/d) T_c] / (a′ − f b′/d)

is used. In this case the susceptibility χ_yy can also be expressed in the form

χ⁻¹_yy = a + f η_b0²,

where η_b0 is the spontaneous value of the order parameter (polarization P_s) in the ferroelectric phase.
3. Antiferroelectric phase (AF):

χ⁻¹_yy = 2a′(T′_c − T),    χ⁻¹_xx = (b′ − f a′/c)(T − T**),

where the temperature

T** = [b′ T_c − (f a′/c) T′_c] / (b′ − f a′/c)

is introduced, such that T* > T_c > T**. An expression similar to the previous one takes place,

χ⁻¹_xx = b + f η_a0²,

which relates the temperature dependence of the longitudinal susceptibility in the AF phase to the equilibrium value of the order parameter (the polarization in one of the sublattices).
The temperature behaviour of the dielectric susceptibility components and their anomalies at the phase transition points are illustrated in figures 3 and 4 as temperature dependences of the inverse susceptibilities χ⁻¹_αα. The inverse susceptibility χ⁻¹_xx is equal to zero at the temperature T_c. Its linear dependence on temperature in the vicinity of this point has the slope b′ at T > T_c and 2b′ in the ferroelectric phase (figure 3). This behaviour, typical of a second order phase transition, changes if the phase transition P→F is of the first order. Such a situation takes place in the DMAGaS crystal, where a first order phase transition close to the tricritical point is observed. Then the susceptibility χ⁻¹_xx remains nonzero at T_c and has a small jump (according to the data of [11], T_c − T_0 ≃ 1.2 K, where T_0 is the temperature at which the extrapolated paraphase χ⁻¹_xx vanishes). The mentioned changes are relevant only in a small vicinity of T_c. The phase transition F→AF is a well pronounced first order phase transition accompanied by a jump of the function χ⁻¹_xx. The continuation of the straight line describing the temperature dependence of χ⁻¹_xx in the AF phase passes through the point T** (see figure 3). χ⁻¹_xx has the following values at the ends of its jump:

χ⁻¹_xx(T_1)|_F = 2b′(T_c − T_1),    χ⁻¹_xx(T_1)|_AF = b′(T_1 − T_c) + (f a′/c)(T′_c − T_1).

The value of the susceptibility jump Δχ_xx|_1 can be positive or negative depending on the values of the theory parameters.
The temperature behaviour of the inverse susceptibility χ⁻¹_yy is essentially different. At the point of the second order phase transition P→F it remains nonzero, with the value

χ⁻¹_yy(T_c) = a′(T_c − T′_c).

Its continuation to lower temperatures goes to zero at T → T′_c. The continuation of the line of the inverse susceptibility in the antiferroelectric phase, χ⁻¹_yy(T) = 2a′(T′_c − T), also goes across this point. In the ferroelectric phase region the function χ⁻¹_yy(T) is linear, with its continuation passing through the point T*. At the F→AF phase transition this function has a jump between the points

χ⁻¹_yy(T_1)|_F = a′(T_1 − T′_c) + f η_b0²(T_1),    χ⁻¹_yy(T_1)|_AF = 2a′(T′_c − T_1).

As in the case of the function χ⁻¹_xx, the jump can have a positive or a negative value.
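The piecewise-linear behaviour of χ⁻¹_xx and χ⁻¹_yy described above can be checked numerically; in the sketch below (Python) the proportionality constants between polarizations and order parameters are set to unity, and the coupling f is an assumed illustrative value, taken larger than κ√(cd) so that the ordering T* > T_c > T** stated above holds.

```python
import numpy as np

Tc, Tpc = 136.0, 125.0
b1 = 0.33e-3
c = d = 1.0
a1 = 2.22 * b1                  # kappa = 2.22
f = 2.5                         # assumed; f > kappa*sqrt(c*d) => T* > Tc > T**
kappa = (a1 / b1) * np.sqrt(d / c)
T1 = (kappa * Tpc - Tc) / (kappa - 1.0)

def inv_chi(T):
    """(1/chi_xx, 1/chi_yy) in units where P_x = eta_b, P_y = eta_a."""
    if T >= Tc:                                    # P-phase
        return b1 * (T - Tc), a1 * (T - Tpc)
    if T >= T1:                                    # F-phase
        eta_b2 = b1 * (Tc - T) / d                 # spontaneous eta_b^2
        return 2 * b1 * (Tc - T), a1 * (T - Tpc) + f * eta_b2
    eta_a2 = a1 * (Tpc - T) / c                    # AF-phase
    return b1 * (T - Tc) + f * eta_a2, 2 * a1 * (Tpc - T)

print(f"T1 = {T1:.1f} K")
for T in (150.0, 130.0, T1 + 0.01, T1 - 0.01, 110.0):   # jump visible at T1
    ixx, iyy = inv_chi(T)
    print(f"T = {T:7.2f} K   1/chi_xx = {ixx: .2e}   1/chi_yy = {iyy: .2e}")
```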
Discussion
Proceeding from the formulae obtained in the previous section one can try to interpret the available data on the temperature dependence of the dielectric susceptibility components of DMAAS and DMAGaS crystals. The majority of the performed measurements are devoted to the longitudinal dielectric permittivity ε_x (or its real part ε′_x for low frequency alternating current measurements), mainly in the region of the high-temperature phase transition for DMAGaS and of the corresponding phase transition in DMAAS. Such data are reported in [5,8,9,11] (DMAGaS) and [4,7,8] (DMAAS); only in [7] was the temperature behaviour of all permittivity components (ε′_a, ε′_b, ε′_c) of the DMAAS crystal measured in a wide range of temperatures (from ≃ 90 K to ≃ 280 K). In some papers the dependence of the spontaneous polarization on temperature in the ferroelectric phase was investigated and coercive fields were measured [8,9] (the value of P_s in the state close to saturation is about 1.4–1.9 μC/cm² for DMAAS and 0.9–2.0 μC/cm² for DMAGaS). A particular investigation of the vicinity of the T_c point in DMAGaS, devoted to the influence of an external electric field on the first order phase transition point and to the difference T_c − T_0, was made in [11]. Based on the available experimental data, the Curie–Weiss constant (on the paraphase side) is estimated as 2700–3060 K for the DMAGaS crystal and 2700–3000 K for the DMAAS crystal. The phase transition to the ferroelectric phase in the DMAGaS crystal is of the first order and close to the tricritical point; this fact, however, does not affect the behaviour of χ_xx and χ_yy far from the T_c point.
The mentioned experimental data are incomplete, hence only a partial comparison with the results of the thermodynamical description is possible. For example, one can obtain the values of the temperature T′_c and of the parameters b′ and κ for the DMAGaS crystal, T′_c = 125 K, κ = 2.22, b′ = 0.33·10⁻³ K⁻¹, using the above mentioned data on the influence of external hydrostatic pressure on the phase transitions in the DMAGaS crystal [18] and the results of measurements of the dielectric characteristics.
A more comprehensive and self-consistent evaluation of the temperatures T′_c, T* and T**, as well as of the Landau expansion parameters (or of the parameters a′, b′, κ, ξ and f), by means of the relationships presented in this section will become possible after goal-oriented investigations of the temperature dependences of χ⁻¹_xx and χ⁻¹_yy in a wide temperature interval including the regions of existence of all phases of DMAGaS and DMAAS. Proceeding from such results it will be possible to ascertain the suitability of the simple thermodynamical description in which the Landau expansion is limited to only one order parameter for each of the B_u and A_u representations. Such a description is obviously much simplified compared with the results of the microscopic approach based on the four-state order–disorder model [17]. The investigation of the DMA group ordering in the configurational space of four orientational states requires two-component order parameters η_b^α (B_u) and η_a^α (A_u), α = 1, 2. This fact could complicate the temperature dependences of the dielectric characteristics of the model even for a thermodynamical description in the framework of the Landau expansion.
Furthermore, the expression for the Landau expansion of the free energy (1) considered here includes terms up to the fourth order only. A consistent description of the first order phase transition P→F and of the related dielectric anomalies demands the inclusion of sixth order terms in the expansion. Such a generalization is necessary for a comprehensive description of the experimental data and can be performed relatively easily.
Applications of High-voltage Resistor Based on Saline Solution in High-voltage Impulse Generation and Measurement
This paper presents applications of high-voltage (HV) resistors based on saline solution for HV impulse generation and measurement. The electrical resistivity and relative permittivity of saline solution in the frequency range from 100 Hz to 100 MHz were investigated. The electrical resistivity and relative permittivity were calculated from experimental measurements of the resistance and capacitance of saline solution in a test cell. From the characteristics and the experimental results, the technical data of saline solution with various concentrations of substances were utilized in the design of HV resistors for HV impulse generation and measurement, as a current-limiting resistor and an HV resistor used in an HV part of a voltage divider. Moreover, the developed HV resistors were tested to confirm their effectiveness in HV generation and measurement. From the experimental results, it was found that the developed HV resistors have promising characteristics for practical HV impulse generation and measurement. According to the results of this study, an HV resistor based on saline solution has strong potential for application in HV impulse generation and measurement compared with a conventional voltage divider.
Introduction
Although electrical power equipment generally operates with the system voltage at normal levels, overvoltages occur as a result of switching operation and lightning strikes to the system. Therefore, high-voltage (HV) testing in accordance with international standards is required to confirm the insulation performance of electrical power equipment before its installation in systems. In the HV tests, an HV is generated from sources and applied to such equipment. The vital requirement of the components used in HV generation and measuring systems is high insulation performance. To fulfill this requirement, the components must be insulated. Liquid and gas insulation materials, i.e., mineral oil, synthetic esters, SF 6 , and CO 2 , are generally used as the insulation of such components.
The most commonly used liquid insulation material is mineral oil, which has good insulation characteristics. However, the supply of its raw materials is unstable and its decay may take up to 1000 years, (1) adversely affecting the environment when leakage and contamination occur. SF 6 gas has an excellent insulation characteristic, is robust to high voltages, is non-toxic and non-flammable, and has good heat transfer. However, SF 6 contributes to the greenhouse effect and remains in the atmosphere for 1278 years. (2) In addition, HV impulse generation is required to simulate the overvoltage waveforms due to switching operations and the effects of lightning in the system. According to IEC 60060-1, (3) the standard lightning voltage waveform has a front time (T_1) of 1.2 µs and a time to half of 50 µs. Moreover, some voltage waveforms complying with IEC standards (3,4) have a very short rise time. For example, in the voltage withstand test of insulators, the steep-front voltage waveform (4) has a time to peak of the order of 100 ns. Owing to the short rise time of the HV impulse, the components used for HV generation and HV measuring systems must have good characteristics over a wide frequency range. This background has led to the development of several HV measurement technologies. (5)(6)(7) In the past, the design and construction of fast-response measuring systems have encountered many problems. According to the IEC standards, (3,8) the time response of the measuring system should be in the range of 0-10 ns. In the measurement of a voltage with such a short rise time, stray capacitance and undesired inductance in the test circuit significantly affect the accuracy of HV measurement. (9) Therefore, non-inductive components are required for HV generation and measuring systems. Normally, HV and non-inductive resistors are made of ceramic materials and are insulated with mineral oil and/or SF 6 . (10) The cost of such components is also high. Moreover, damaged and unused components are difficult to dispose of, causing environmental problems.
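For orientation, the standard 1.2/50 µs lightning impulse is commonly approximated by a double-exponential waveform; the sketch below (Python) uses commonly quoted approximate time constants (an assumption, not values from this paper) and verifies the resulting front time and time to half against the IEC 60060-1 definitions.

```python
import numpy as np

# Double-exponential approximation of the 1.2/50 us lightning impulse.
# tau1, tau2 are commonly used approximate constants (assumed here).
tau1, tau2 = 68.2e-6, 0.405e-6        # s: tail and front time constants

t = np.linspace(0.0, 200e-6, 400001)
v = np.exp(-t / tau1) - np.exp(-t / tau2)
v /= v.max()                          # normalize to 1 p.u.
i_pk = int(v.argmax())

def t_cross(level, rising):
    """First time the normalized wave crosses 'level' (rising/falling part)."""
    seg = slice(0, i_pk) if rising else slice(i_pk, None)
    return t[seg][int(np.argmin(np.abs(v[seg] - level)))]

# IEC 60060-1: T1 = 1.67*(t90 - t30); T2 = time to half value
# (measured here from t = 0 instead of the virtual origin, for simplicity)
T1 = 1.67 * (t_cross(0.9, True) - t_cross(0.3, True))
T2 = t_cross(0.5, False)
print(f"T1 = {T1 * 1e6:.2f} us, T2 = {T2 * 1e6:.1f} us")   # ~1.2 us, ~50 us
```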
To resolve these issues, the development of an HV resistor using a solution such as CuSO 4 or saline solution is an attractive approach owing to the lack of need for mineral oil or an insulating gas. (11)(12)(13)(14)(15) The advantages of using such a solution for the development of HV resistors are a simple design, easy construction, and cost-effectiveness. Also, such a solution can withstand high voltages, can be used in a wide frequency range, and has high energy absorption. (15)(16)(17) The stray capacitance and stray inductance of HV equipment with a solution as the resistor can easily be controlled by a suitable design of its dimensions and configuration. (14) In terms of environmental concerns, such a solution rapidly decays and is non-toxic. Hence, the development of HV resistors based on saline solution should be investigated.
In this paper, we propose the use of saline solution in the development of HV resistors for HV impulse generation and measurement. The electrical resistivity and relative permittivity of saline solution in the frequency range from 100 Hz to 100 MHz were studied. Using a simple equivalent circuit, the electrical resistivity and relative permittivity were extracted and calculated from experimental measurements of the resistance and capacitance of saline solution in a test cell. From the obtained electrical characteristics, the appropriate mixture of deionized water and normal saline can be selected for the design of HV resistors. Two HV resistors were designed for use as a current-limiting resistor in an HV impulse generator and an HV part of a voltage divider in an HV impulse-measuring system. The practicality of the developed components was validated by performing various experiments in an HV laboratory.
It was found that the developed HV resistors have promising characteristics for HV impulse generation and measurement. According to the results of this study, the HV resistor based on saline solution has strong potential for application in HV impulse generation and measurement compared with a conventional voltage divider.
Saline Solution
Saline solution can be used in HV devices owing to its many attractive characteristics. From a chemical viewpoint, saline solution does not cause the corrosion of aluminum or copper conductors, which are important components of HV devices. From a mechanical viewpoint, the large surface area and high volume of saline solution can improve heat transfer when a large current flows. When saline solution is employed in an impulse generator or impulse voltage divider, its electrical characteristics such as resistivity and relative permittivity must be taken into account. We investigated the electrical resistivity and relative permittivity in the frequency range of 100 Hz to 100 MHz, as reported in this section. Note that this frequency range covers the application frequencies in HV generation and measurement.
Electrical characteristics of saline solution
Electrical resistivity is an intrinsic property that quantifies how strongly a given material opposes the flow of electric current. (18) A material with low resistivity readily allows the movement of electric charge. For a resistor made of a specified material having a uniform cross section, the resistance to an electric current flowing through it is

R = ρ l / A,    (1)

where R is the electrical resistance of the resistor (Ω), ρ is the electrical resistivity (Ω·m), l is the length of the resistor (m), and A is the cross-sectional area of the resistor (m²). Generally, the relative permittivity of a material is defined as the ratio of the capacitance C (farad) of a given electrode configuration with the specific material as the dielectric to the capacitance C_0 (farad) of the same configuration with vacuum as the dielectric. (18) The relative permittivity is typically denoted by ε_r and is defined as

ε_r = ε / ε_0 = ε′_r − j ε″_r,    (2)

where ε is the complex frequency-dependent absolute permittivity of the material, ε_0 is the vacuum permittivity of 8.854 × 10⁻¹² F/m, ε′_r is the real part of the complex relative permittivity, associated with the energy stored in the material, and ε″_r is the imaginary part, related to the dielectric loss of the material. The relative permittivity is therefore related to the capacitance of the material. The capacitance of a parallel-plate arrangement can be written as

C = ε A / d,    (3)

where A is the area of the two plates (m²), d is the distance between the plates (m), and ε is the permittivity of the material.
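As a quick illustration, the test-cell geometry described in the next subsection (10 cm × 10 cm electrodes spaced 10 cm apart) converts a measured resistance and capacitance directly into ρ and ε_r through Eqs. (1) and (3); the numerical readings in the sketch below (Python) are hypothetical.

```python
EPS0 = 8.854e-12              # F/m, vacuum permittivity

A = 0.10 * 0.10               # m^2, electrode area of the test cell
d = 0.10                      # m, electrode separation

def resistivity(R):
    return R * A / d          # rho = R*A/l, Eq. (1)

def rel_permittivity(C):
    return C * d / (EPS0 * A) # from C = eps*A/d, Eq. (3)

# hypothetical measured values for one mixture at one frequency
print(f"rho   = {resistivity(530.0):.1f} ohm*m")   # -> 53.0 ohm*m
print(f"eps_r = {rel_permittivity(70e-12):.1f}")   # -> ~79
```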
Experimental setup
In this study, saline solution was loaded into a test cell. To avoid the effect of the inherent conductivity of the test cell, acrylic glass was used to construct the test cell. It is also a chemically inert material, which is appropriate for this experiment. The dimensions of the test cell were 10 cm × 10 cm × 10 cm. The test cell had two copper electrodes as shown in Fig. 1(a). The size of each electrode was fixed at 10 cm × 10 cm.
In this study, normal saline or saline 0.9 (9 g NaCl in 1 L water) was mixed with deionized water to ensure a consistent NaCl concentration. The deionized water had high resistance and high purity. The normal saline and deionized water are shown in Figs. 1(b) and 1(c), respectively.
The experimental setup is shown in Fig. 1(d). The tested solution was placed in a test cell connected to an impedance analyzer to measure the magnitude (Z) and phase (θ) in the frequency range from 100 Hz to 100 MHz. From the measured data, the electrical resistivity and relative permittivity were then calculated for the equivalent circuit model.
In this study, a simple equivalent circuit composed of a resistor in parallel with a capacitor was employed. An inductance of 0.3086 µH was observed from the current loop of the experimental setup. Therefore, an additional inductor was added to the former equivalent circuit as illustrated in Fig. 2.
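Given this equivalent circuit, a single impedance-analyzer reading (magnitude and phase) can be converted into the R and C of the cell by first removing the series loop inductance; a minimal sketch (Python) with a hypothetical reading at 1 MHz:

```python
import numpy as np

L_SERIES = 0.3086e-6     # H, loop inductance quoted above

def extract_rc(freq_hz, z_mag, theta_deg):
    """R and C of the parallel R-C cell from one (|Z|, theta) measurement,
    after subtracting the series inductance of the equivalent circuit."""
    w = 2.0 * np.pi * freq_hz
    z = z_mag * np.exp(1j * np.radians(theta_deg))
    y = 1.0 / (z - 1j * w * L_SERIES)   # admittance of the R || C part
    return 1.0 / y.real, y.imag / w     # R (ohm), C (farad)

# hypothetical impedance-analyzer reading at 1 MHz
R, C = extract_rc(1e6, 520.0, -8.0)
print(f"R = {R:.1f} ohm, C = {C * 1e12:.1f} pF")
```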
Test procedure and results
In this study, 1000 cm 3 of deionized water was mixed with various normal saline volumes of 10 to 1000 cm 3 . These solutions were placed in the test cell and their impedances were measured using an impedance analyzer at frequencies from 100 Hz to 100 MHz. The electrical properties (resistivity and relative permittivity) were calculated using Eqs. (1) and (3). The electrical properties of the deionized water mixed with 10 to 100 cm 3 of saline solution at various frequencies are presented in Fig. 3, and those of the deionized water mixed with 100 to 1000 cm 3 of saline solution are presented in Fig. 4. From the experimental results, the electrical properties of the saline solution can be assessed. The resistance of the saline solution is observed to be constant over the frequency range under investigation (100 Hz-100 MHz). The relative permittivity of the saline solution is frequency-independent when measured in the range from the order of 10 kHz up to 100 MHz. The relative permittivity decreases in the low-frequency range (100 Hz to the order of 10 kHz) owing to the polarization effect of ionic charge accumulated at the interface between the fluid and the metallic electrode. (19) This effect results in the formation of a thin layer, with a thickness of the order of nanometres, at the electrode; the layer is represented by a series contact impedance connecting the electrode and the saline solution. This contact impedance does not have a significant effect in the case of the long resistors designed in this study. Therefore, saline solution has very promising characteristics for the development of HV resistors.
Current-limiting Resistor in HV Impulse Generator
In this study, saline solution was used in various applications. The first was as a current-limiting resistor in an HV impulse generator. An HV impulse is generated to imitate the overvoltage caused by a switching operation or lightning. According to the IEC standard, (3) there are two standard waveforms, i.e., the lightning impulse voltage (LIV) and the switching impulse voltage (SIV). For the LIV, the front time and time to half are 1.2 and 50 µs, respectively. For the SIV, the front time and time to half are 250 and 2500 µs, respectively. (5,20) For HV impulse generation, the simple resistor and capacitor circuit in Fig. 5 is utilized. The circuit is composed of a charging capacitor (C_s), a sparking gap (SG), a front resistor (R_d), a tail resistor (R_e), and a load capacitor (test object). The waveform parameters (T_1 and T_2) are controlled by adjusting the front and tail resistors, respectively. The peak voltage is controlled by the charging voltage across the charging capacitor. The sparking gap ensures an open circuit during the charging process and a short circuit when the charging voltage reaches the required level. (20,21) All HV resistors can be constructed in an insulating enclosure with terminal electrodes as shown in Fig. 6. The enclosure is selected to be in the form of coaxial cylinders with available diameters, and the saline solution is placed between the inner and outer cylinders. As an example, consider the design of the current-limiting resistor (R_L). The voltage and current ratings of the resistor were selected to be 100 kV and 10 A, respectively. The design procedure of the HV resistor started with the selection of the insulating enclosure length. The critical electric field strength (voltage per unit length) used in the design was 3 kV/cm, so the minimum enclosure length was 33 cm; for safety reasons, the length was selected to be 36 cm. The inner and outer diameters were 34 and 60 mm, respectively. From the selected rating, the resistor should have a resistance of 10 kΩ. The resistivity of the saline solution required to realize this resistance follows from Eq. (1) applied to the annular cross section of the enclosure,

ρ = R A / l = R π (D_out² − D_in²) / (4 l),

which gives a resistivity (ρ) of 53 Ω·m. From this value, the volume ratio of normal saline to deionized water was selected from the results in Fig. 3(a). From the designed parameters, the resistor was constructed as shown in Fig. 6. Such a resistor has been used successfully as the current-limiting resistor in an HV impulse generator.
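The design steps above can be collected into a few lines; the sketch below (Python) reproduces the 33 cm minimum enclosure length and the 53 Ω·m target resistivity from the stated ratings and dimensions.

```python
import numpy as np

# Current-limiting resistor: 100 kV / 10 A, coaxial enclosure as above
V_rating, I_rating = 100e3, 10.0
R_target = V_rating / I_rating                 # 10 kOhm

E_crit = 3e3 / 1e-2                            # 3 kV/cm in V/m
L_min = V_rating / E_crit                      # minimum enclosure length (m)
L = 0.36                                       # m, chosen with safety margin

D_in, D_out = 0.034, 0.060                     # m
A = np.pi / 4.0 * (D_out**2 - D_in**2)         # annular cross section (m^2)

rho = R_target * A / L                         # required resistivity, Eq. (1)
print(f"L_min = {L_min * 100:.0f} cm, A = {A * 1e4:.1f} cm^2, "
      f"rho = {rho:.0f} ohm*m")                # -> 33 cm, 19.2 cm^2, 53 ohm*m
```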
Voltage Divider Based on Saline Solution
The second application of saline solution was as part of an HV divider. For HV measurement, a voltage divider is generally used to reduce the signal voltage to an appropriate level that can be measured by an instrument such as a digital oscilloscope or a peak voltmeter. Normally, a capacitive voltage divider is used in the laboratory because it is easily designed and constructed. However, the stray capacitance and undesired inductance can make it difficult to use a capacitive voltage divider to measure voltages with a very short rise time (100 ns order). (22)(23)(24)(25) A resistive voltage divider is also used to measure voltages with a very short rise time. The time response of a resistive voltage divider mainly depends on the resistance of the voltage divider and the stray capacitance. Decreasing the volume of a voltage divider will decrease the stray capacitance and response time. (26,27)
Design and construction of a voltage divider
The design of the saline solution voltage divider is depicted in Fig. 7(a). The HV part of the developed divider comprises a coaxial cylindrical plastic tube filled with saline solution mixed with deionized water. The ends of the resistor are in contact with aluminum electrodes. The HV part has a thickness (d HV ) of 0.5 mm and a height (S HV ) of 990 mm. There is a central aluminum plate connected to the low-voltage part of the voltage divider. The low-voltage part was constructed from a metal film resistor and connected to a cable connector for voltage measurement. The resistance of the HV part can be controlled by the resistivity of the saline solution mixture in deionized water. The resistance of the HV part was set to be 10 kΩ. The low-voltage part was constructed from four 200 Ω metal oxide resistors connected in parallel and enclosed in an aluminum housing. The HV part was connected to the low-voltage part and a coaxial cable of 50 Ω characteristic impedance. At the receiving end of the cable, a 50 Ω attenuator with a scale factor of 10 was connected to a digital oscilloscope. The equivalent circuit of the developed voltage divider is shown in Fig. 7(b). The structure of the saline solution voltage divider, the low-voltage part, and the attenuator are shown in Fig. 8.
Electrical field stress control of the voltage divider
For the designed HV resistor in Fig. 8(a), a high electrical field stress occurs at the top and bottom electrodes as shown in Fig. 9. Under HV application, electrical discharge occurs at the top and bottom electrodes, affecting the measured voltage waveform. Therefore, it is necessary to control the electrical field stress using grading rings. A computer simulation based on the finite element method was utilized to calculate the voltage distribution and electrical field stress in this paper. To reduce the electric field stress, grading rings were installed at the upper and lower parts of the HV resistor as shown in Fig. 10. The upper electrode was in the form of two rings having diameters of 65 cm with a distance between them of 25 cm. The lower ring was installed 15 cm below the top of the HV resistor. The total height of the HV resistor was 100 cm. With the appropriate locations and dimensions of the grading rings, the maximum electrical field stress was reduced as shown in Fig. 11.
Unit step response test
For the verification of the HV measuring system, international standards such as IEC (1,4,8) and IEEE (28) recommend that the parameters of the unit step response should correspond to the standard values. In particular, for a very fast transient voltage such as a chopped LIV, the measuring system should have a very fast response.
According to the IEC standard, (3) there are two important indicators used to evaluate the characteristics of an impulse-measuring system: the scale factor and the parameters of the unit step response. The scale factor of the measuring system should be approximately 1000 to 10000 to measure a voltage in the range of kV to MV. However, a voltage divider with a high scale factor is inappropriate, since the interference voltage induced in the HV part can disturb the low-voltage part of the measuring system. For this reason, a voltage divider with an attenuator is used in the HV measuring system. (11,(29)(30)(31) In addition, the scale factor of the measuring system must not vary by more than ±1% under the specified conditions and space clearances. The test circuit of the unit step response is depicted in Fig. 12(a), and the parameters of the unit step response are the experimental response time (T_N), partial response time (T_α), settling time (t_s), initial distortion time (T_0), and overshoot (β), as illustrated in Fig. 12(b). (8) To verify the measuring system, the unit step response parameters must be in the ranges recommended in the IEC standard. (3,8) The recommendations in IEC 60060-2 are separated into three parts. (8) The first part concerns the measurement of full and tail-chopped LIVs, the second part the measurement of front-chopped LIVs, and the third part the measurement of the switching impulse. For a reference measuring system, the IEC standard gives the recommended response parameters shown in Table 1.
In addition, IEC 61211 (4) (insulator puncture test in air) also recommended the response parameters of the measuring system for the steep-front impulse voltage reference shown in Table 2.
The unit step response tests of the divider with the saline solution were carried out by applying a standard unit step voltage generated by a unit step generator (GAUSS RIG1000H). A 500 MHz digital oscilloscope was employed and connected with the developed measuring system in the test. The experimental setup for the unit step response test is shown in Fig. 13. The scale factor of the measuring system is 4250. To determine the appropriate damping resistance, experiments were carried out while adjusting the resistance from 0 to 500 Ω. It was found that the best response time is obtained for a resistance of 200 Ω.
The normalized unit step response [g(t)] and the integral of the normalized unit step response obtained with the appropriate damping resistor are presented in Figs. 14(a) and 14(b), respectively. The time response parameters shown in Table 3, i.e., the partial response time (T_α) of 1.86 ns, the experimental response time (T_N) of 1.54 ns, and the settling time (t_s) of 42.80 ns, are in accordance with the requirements in the standard for the measurement of full and front-chopped LIV waveforms. With these promising response parameters, the developed measuring system can also be used as a reference measuring system.
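The response-time parameters of Table 3 can be obtained from a recorded normalized step response by numerical integration; the sketch below (Python) implements the response-integral definitions in simplified form (the initial distortion time is taken as zero) and applies them to a hypothetical underdamped response, not to the measured data.

```python
import numpy as np

def step_response_params(t, g):
    """T_alpha and T_N from a normalized unit step response g(t), using the
    response-integral definitions of IEC 60060-2 in simplified form
    (the initial distortion time T_0 is taken as zero here)."""
    above = np.flatnonzero(g >= 1.0)
    i1 = above[0] if above.size else len(t) - 1       # first crossing of unity
    T_alpha = np.trapz(1.0 - g[:i1 + 1], t[:i1 + 1])  # partial response time
    T_N = np.trapz(1.0 - g, t)                        # experimental response time
    return T_alpha, T_N

# hypothetical underdamped second-order response for illustration
t = np.linspace(0.0, 100e-9, 200001)
sigma, wd = 2.5e8, 2.0 * np.pi / 10e-9
g = 1.0 - np.exp(-sigma * t) * (np.cos(wd * t) + (sigma / wd) * np.sin(wd * t))
T_alpha, T_N = step_response_params(t, g)
print(f"T_alpha = {T_alpha * 1e9:.2f} ns, T_N = {T_N * 1e9:.2f} ns")
```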
LIV withstand test
Generally, the equipment for measuring LIVs in an HV engineering laboratory uses a voltage divider. A LIV testing system can be separated into two parts. The first part generates the HV impulse and includes the impulse generator and test object. The second part is the voltage-measuring system, which includes the voltage divider, a damping resistor (R_d), a coaxial cable, an attenuator, and a digital oscilloscope. A schematic of the LIV test is shown in Fig. 15.
A LIV withstand test was carried out to determine the insulation level. The test arrangement is shown in Fig. 16. It was found that the developed measuring system can pass a LIV withstand test at 330 kV with lightning impulse waveforms of both positive and negative polarities, as shown in Fig. 17.
Measurement of steep-front impulse voltage
To further verify the validity of the divider with saline solution, its capability of measuring a steep-front impulse voltage waveshape was determined. A steep-front impulse voltage is normally generated from a chopped-wave impulse voltage. The equivalent circuit for the steep-front impulse voltage test is shown in Fig. 18. The test object is connected in series with a sparking gap. To generate the steep-front impulse voltage, a standard impulse voltage is generated across the gap and the test object. When the voltage across the gap reaches the controlled level, breakdown occurs at the gap, the applied voltage suddenly appears across the test object, and a steep-front impulse voltage is generated.
To confirm the performance of the developed measuring system in the measurement of a fast-rise-time or steep-front impulse voltage, a voltage was generated across a suspension insulator. The developed measuring system was connected across the insulator. The experimental setup is shown in Fig. 19. The measured waveforms with positive and negative polarities, in comparison with those from a conventional voltage-measuring system, are shown in Fig. 20. Note that the conventional measuring system has the time parameters T_α = 9.8 ns, T_N = 13.6 ns, and t_s = 119 ns, which conform to the recommendations in Table 1 but not to those in Table 2; the conventional system is therefore suitable for the measurement of a standard lightning voltage but not for the measurement of a steep-front impulse voltage. From the test results, the developed system can follow the measured waveforms more rapidly than the conventional system, and the measured waveforms show no oscillation.
Conclusions
In this study, saline solution has been developed as a component of HV devices. Its first application was in a current-limiting resistor, and its second was in an impulse voltage divider. To achieve the desired functions of the saline solution, the frequency dependence of its electrical properties, i.e., resistivity and relative permittivity, was studied. It was found that the resistivity and relative permittivity of the saline solution are frequency-dependent only below a specific frequency, which makes the solution appropriate for the development of an HV generator and an impulse voltage-measuring system. The design and construction of a current-limiting resistor and a voltage divider with saline solution have been discussed. Results obtained from experiments in an HV laboratory, such as the unit step response, impulse voltage measurements, and steep-front voltage measurements, have shown the promising performance characteristics of saline solution in the two applications.
Chemical composition and antibacterial activity of the essential oil from Diphasia klaineana fruits and its fractions obtained by fractional distillation
This study aimed to analyse the chemical composition of the essential oil from Diphasia klaineana fruits and its fractions and to evaluate their antibacterial activity. GC-MS analysis and the agar disc diffusion and microbroth dilution methods were carried out. A total of 43 constituents were identified in the essential oil from fruits of Diphasia klaineana (FR), and the major compounds were β-elemol (30.23%), sabinene (9.28%), guaiol (5.12%), (E)-β-ocimene (5.11%) and δ-elemene (3.53%). The FR1 and FR3 fractions, collected after one hour and after three hours, were rich in monoterpene hydrocarbons and showed bactericidal and bacteriostatic activities, respectively. No activity was found with the FR5, FR7 and FR9 fractions. Therefore, fractional distillation may be an interesting process to upgrade the antibacterial activity of D. klaineana fruit essential oil against Staphylococcus aureus.
Introduction
Essential oils (EOs) are natural products extracted from different parts of a plant, such as the flowers, leaves, stems, fruits, seeds, roots, bark, or resin. They are an important part of traditional healing practices for human diseases and are used as raw materials in cosmetics, spices, foods, perfumes, and in the treatment of several health disorders [1]. In recent years, there has been a resurgence of interest in researching plant products as antimicrobials, as alternatives to antibiotics with their side effects. Throughout history, natural substances and their derivatives have been an all-important source of therapeutic agents. In vitro antimicrobial assays have effectively served as valid methods to reveal secondary metabolites with antimicrobial activity [2]. Plant extracts contain different metabolites with strong antimicrobial activity against both biofilms and pathogens resistant to several drugs. It is more difficult for bacteria to develop resistance to multi-component EOs than to antibiotics, which are often composed of only a single molecular entity [3]. Since the biological activities of EOs are composition-dependent, no particular resistance or adaptation to EOs has been described to date. In addition to their antimicrobial activity, EOs can act in synergy with some antibiotics, enhancing their biological properties [4]. Essential oils consist mainly of monoterpenes, sesquiterpenes and their oxygenated derivatives. The qualitative and quantitative analysis of the chemical composition of essential oils is important, as it is responsible for their effectiveness: not only which compounds are present, but also their proportions and amounts determine the activity [5]. Among the medicinal plants is the Rutaceae family, which comprises species of ecological, economic and therapeutic importance. It belongs to the order Sapindales, with about 150 genera and over 1600 species, widely distributed throughout the tropical and temperate regions of the globe and most abundant in tropical America, South Africa and Australia [2]. Among the essential oil producers of this family, Diphasia klaineana Pierre stands out; it is popularly known as courou-la-lemourou in Malinké, lobo in Abbey, pahiri in Bété and grénian in Krumen, and is widely used as a medicinal resource by local people throughout Côte d'Ivoire. In folk medicine, several therapeutic properties are attributed to D. klaineana P., including its use in respiratory conditions such as sinusitis. However, few studies have investigated the chemical composition underlying the health benefits of this medicinal plant.
Our previous work revealed the chemical composition of the essential oil from D. klaineana leaves and its fractions, their biological activities, and the influence of flowering on the chemical composition and biological activity [6,7]. Therefore, the purpose of the present study was to determine the chemical composition and evaluate the biological activities of D. klaineana fruit EO and its fractions as a new potential source of natural antibiotic components.
Materials and methods
Collection and pre-treatment of plant material
Fruits of Diphasia klaineana were collected in June 2017 from the Denguélé region (north-west of Côte d'Ivoire). The samples were identified at the Floristic National Center of Felix Houphouët-Boigny University (Abidjan-Cocody, Côte d'Ivoire) and deposited in our laboratory for future reference. The fresh plant material was stored in an air-conditioned chamber at 18 °C for three days, protected from light, before extraction.
Essential oil distillation
The essential oils were extracted from the fruits of D. klaineana by continuous and fractional hydrodistillation using a Clevenger-type apparatus. The essential oils were collected over water and dried over anhydrous sodium sulfate. The pure essential oils were stored in airtight glass containers in a refrigerator at 4 ºC until oil analysis and evaluation of antibacterial activity. For the continuous distillation, the extraction was carried out for 9 h by mixing 500 g of plant material with 1500 mL of distilled water; the extract obtained was coded FR. For the fractional distillation, the essential oils of the plant material (500 g) were likewise obtained by hydrodistillation for 9 h, and the essential oil fractions were collected at regular one-hour intervals (0-1, 1-2, 2-3, 3-4, 4-5, 5-6, 6-7, 7-8 and 8-9 h) without interrupting the hydrodistillation process. Thus, the FR1, FR2, FR3, FR4, FR5, FR6, FR7, FR8 and FR9 fractions were collected, but only the odd-numbered fractions (FR1, FR3, FR5, FR7 and FR9) were used in this study.
GC and GC-MS analysis
The identification of the major constituents was conducted by GC-MS analysis. GC/MS analyses were carried out on a Perkin Elmer AutoSystem XL gas chromatograph equipped with an Rtx-1 non-polar phase column (60 m × 0.22 mm, coating thickness 0.25 μm) and a Perkin Elmer TurboMass mass detector. The analytical conditions were as follows: 0.2 μL of sample was injected with a 1:50 split ratio; the carrier gas was helium at a flow velocity of 1 mL/min and a constant pressure of 25 psi; the oven temperature was programmed from 60 to 230 °C at 2 °C/min, with the injector temperature at 250 °C and the detector temperature at 280 °C. All mass spectra were acquired over the mass range 35-350 Da in electron impact (EI) mode with an ionization voltage of 70 eV. The assignment of peaks in the chromatogram was based on the comparison of retention times with those of authentic samples, on comparison of their linear retention indices with respect to a series of n-hydrocarbons, and on computer matching with mass spectral libraries comprising pure substances, components of known oils and the MS literature [8,9].
Bacterial strain and growth conditions
The essential oils were tested against a common pathogenic bacterial strain, Staphylococcus aureus ATCC 25923, a Gram-positive bacterium obtained from the Swiss Center for Scientific Research of Côte d'Ivoire. The strain was streaked onto an agar plate to obtain single colonies and then freshly grown on an agar-based nutrient medium in the dark. All incubations were carried out aerobically for 24 h at 37 °C.
Determination of antimicrobial activity by the agar-well diffusion method
The antibacterial activity was assessed with the agar-well diffusion method [10,11]. Mueller-Hinton agar medium was poured into sterile Petri dishes and allowed to solidify. Colonies of S. aureus ATCC 25923 were directly suspended in 0.85% saline to obtain a turbidity comparable to that of the 0.5 McFarland standard. Aliquots (0.1 mL) of the inoculum were spread over the surface of pre-dried agar plates with a sterile spreader. Six wells of 6 mm diameter were cut through the inoculated agar using a sterile cork borer, and the agar plugs were removed, leaving empty wells which were filled with 50 µL of each essential oil, the positive control (gentamycin) or the negative control/solvent. The plates were left for about 30 minutes to allow diffusion and then incubated for 18 to 24 hours at room temperature. The zones of inhibition were observed and measured in mm. The assay was carried out in triplicate.
Determination of minimal inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) in liquid medium
The MIC of the essential oils was determined by a microdilution assay following the standard protocol of the Clinical and Laboratory Standards Institute, with some modifications [12,13]. For the assay, the standard inoculum was prepared in sterile saline (0.85% w/v) from living colonies of S. aureus grown on agar plates (final inoculum). The essential oil stock solution was prepared in MH broth using Tween 80 as an emulsifier. From the stock solution, two-fold serial dilutions were made over the range from 50 to 0.39 mg/mL. Then 10 µL of the final inoculum was added to each well containing 50 µL of the various essential oil concentrations, giving a final volume of 60 µL per well. The following controls were used: culture medium control (60 µL of MH broth); growth control (50 µL of MH broth + 10 µL of inoculum); Tween 80 emulsifier control (60 µL of MH broth with Tween 80); and growth control containing the emulsifier (50 µL of MH broth with Tween 80 + 10 µL of inoculum). Finally, the microplates were incubated for 18-24 hours at 37 °C. The MIC was taken as the lowest essential oil concentration that inhibited visible bacterial growth. The MBC was determined from the wells containing essential oil concentrations at which there was no visible bacterial growth: an aliquot of 100 µL was taken from each such well and seeded on MH agar, and the plates were incubated at 37 °C for 24 h. The MBC was defined as the lowest concentration of essential oil able to cause total bacterial death, evidenced by the visible absence of colonies of S. aureus on the agar plates.
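The dilution scheme and the MIC read-out can be expressed compactly; a minimal sketch (Python) in which the growth readings are hypothetical, chosen to match the MIC of 6.25 mg/mL reported in the results below:

```python
# Two-fold serial dilution series used for the MIC assay (50 -> 0.39 mg/mL)
concs = [50.0]
while concs[-1] / 2.0 >= 0.39:
    concs.append(concs[-1] / 2.0)
print(concs)    # 50, 25, 12.5, 6.25, 3.125, 1.5625, 0.78125, 0.390625

def mic(growth):
    """MIC = lowest concentration with no visible growth.
    'growth' maps concentration (mg/mL) -> True if the well turned turbid."""
    inhibited = sorted(c for c, grew in growth.items() if not grew)
    return inhibited[0] if inhibited else None

# hypothetical readings: growth only below 6.25 mg/mL
readings = {c: (c < 6.25) for c in concs}
print(f"MIC = {mic(readings)} mg/mL")
```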
Results and discussion
Chemical composition of the essential oil from continuous hydro-distillation
The continuous hydro-distillation of the fruits of Diphasia klaineana Pierre (Rutaceae) from Côte d'Ivoire was performed in a Clevenger-type apparatus and yielded a yellowish oil with a characteristic aroma. Based on the fresh weight of the plant material, the essential oil of the fruits (FR) of Diphasia klaineana Pierre (Rutaceae) was obtained in a yield of 0.14% w/w; in contrast, the yields of the essential oils of the same plant were 1.65% and 1.53% w/w from the leaves before and during flowering, respectively [6]. By comparison, the yield of essential oil from the fruits of Hortia oreadica (Rutaceae) was 0.12% after 2 h. In general, plants belonging to the Rutaceae family are highly aromatic and are significant as a source of citrus fruits and as ornamental plants. Many essential oils of species of this family are used in the pharmaceutical and cosmetic industries, in nutritional supplements and in aromatherapy [14]. The chemical composition of the essential oil from the fruits of Diphasia klaineana was analyzed by GC and GC-MS and the results are presented in Table 1. In total, 38 components were identified, representing 81.28% of the total amount. β-Elemol (30.23%), a sesquiterpenoid alcohol (2-[(3S,4S)-3-isopropenyl-4-methyl-4-vinylcyclohexyl]propan-2-ol), was found to be the main component. Sabinene (9.28%) was the second major compound detected in the FR-coded essential oil, followed by guaiol (5.12%), (E)-β-ocimene (5.11%), γ-eudesmol (3.54%), terpinen-4-ol (3.37%), germacrene D (3.22%), limonene (2.92%), myrcene (2.52%), β-elemene (2.20%), (Z)-β-ocimene (1.74%), E-caryophyllene (1.46%), γ-terpinene (1.40%), linalool (1.08%) and α-pinene (1.03%); the remaining constituents were minor components. Furthermore, oxygenated sesquiterpenes (39.11%) and monoterpene hydrocarbons (27.52%) were the main chemical groups in the essential oil from the fruits of D. klaineana, with smaller amounts of sesquiterpene hydrocarbons (8.85%) and oxygenated monoterpenes (5.80%) (Fig. 1).
Fig 1: Monoterpene and sesquiterpene content of total essential oil (FR)
Data from the literature show that essential oils contain a large variety of substances with great potential as sources of bioactive molecules. Some of the identified compounds have been reported to have many biological activities. Many studies of essential oils have reported biological effects such as antibacterial, antifungal, anti-inflammatory, hepatoprotective, antiviral, anti-leishmanial, antioxidant and anti-proliferative properties [15,16].

Antibacterial activity of the essential oil against S. aureus

Based on previous research, the diameters of the inhibition zones were interpreted as follows: not sensitive (diameter ≤ 8.0 mm), moderately sensitive (8.0 < diameter < 14.0 mm), sensitive (14.0 < diameter < 20.0 mm), and extremely sensitive (diameter ≥ 20.0 mm) [17]. The results showed that the FR-coded essential oil had moderate antibacterial activity against Staphylococcus aureus (S. aureus ATCC 25923; growth inhibition zone of 9 mm). The MIC and MBC were determined for the FR-coded essential oil of D. klaineana: the MIC was 6.25 mg/mL and the MBC was 50 mg/mL. Several studies investigating the action of essential oils against pathogenic microorganisms agree that essential oils are more effective against Gram-positive bacteria than against Gram-negative ones [2]. According to previous research, the major constituents of essential oils, including monoterpene and sesquiterpene hydrocarbons and their oxygenated derivatives, exhibit antibacterial activity [10]. The activity against Staphylococcus aureus may be due to the presence of the major compounds in the essential oil or to a synergy of action among all the compounds. After determining the components and antibacterial activity of the essential oil of the fruits of D. klaineana, an analysis of the main components responsible for the antibacterial activity was performed.
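The interpretation scale quoted above maps directly onto a small classification helper; a sketch (Python) in which boundary diameters are assigned, by assumption, to the adjacent upper category:

```python
def classify_zone(diameter_mm):
    """Sensitivity category from an inhibition zone diameter (mm), per the
    scale quoted above [17]; boundary values go to the upper category."""
    if diameter_mm <= 8.0:
        return "not sensitive"
    if diameter_mm < 14.0:
        return "moderately sensitive"
    if diameter_mm < 20.0:
        return "sensitive"
    return "extremely sensitive"

# the 9 mm zone measured for the FR oil against S. aureus
print(classify_zone(9.0))   # -> moderately sensitive
```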
Conclusion
The essential oil from fruits of Diphasia klaineana and its fractions obtained by the fractional distillation process differ in terms of content and quantitative proportion of monoterpene hydrocarbons, oxygenated monoterpenes, sesquiterpene hydrocarbons and oxygenated sesquiterpenes, as well as in their antibacterial activity against Staphylococcus aureus.
Our results show that the fractions collected after one hour and after three hours were rich in monoterpene hydrocarbons and showed bactericidal and bacteriostatic activities, respectively, against Staphylococcus aureus. No activity against Staphylococcus aureus was found with the FR5, FR7 and FR9 fractions, which were dominated by oxygenated sesquiterpenes. Even in low amounts, these compounds could work in synergy and maintain biological activity. These findings represent an important result for the rationalization of the use of essential oils of Diphasia klaineana in traditional medicine. The FR1 and FR3 fractions can be used for their antibacterial activity because they offer greater antibacterial activity than the original oil. Therefore, fractional distillation may be used to upgrade the antibacterial activity of the essential oil from D. klaineana.
Experiences of young smokers in quitting smoking in twin cities of Pakistan: a phenomenological study
Background Smoking is highly prevalent in Pakistan claiming the lives of over 100,000 individuals every year. A significant proportion of smokers (24.7%) make an attempt to quit each year but 97.4% fail to quit successfully. Little is known about the reasons for, and experiences of, failed quit attempts. This study was carried out to explore the experiences of young male smokers in quitting smoking in the twin cities of Pakistan Method A qualitative study was carried out using a phenomenological approach in Rawalpindi and Islamabad. A total of 11 participants were interviewed. All study participants were male and had made at least one quit attempt. Study participants were a mix of smokers who failed to quit smoking, intermittent smokers and successful quitters. Streubert’s (1991) method of phenomenology was followed during data analysis. Results The experiences of smokers while smoking “the smoking phase” have major effects on their journey towards quitting smoking. The smoking phase consists of three major stages: contact with initial smoking stimuli, the journey from first puff to enjoying smoking and then finally smoking becoming part of life. However, the journey towards quitting smoking is not as simple as the journey towards becoming a smoker. Instead, smokers get trapped in three overlapping cycles of smoking and quit attempts: smoking & forced quitting, smoking & intentional quitting, and smoking & intermittent smoking before successful quitting. Breaking the cycle is not easy in the presence of trapping factors (addiction, high availability, easy affordability, conducive social setup and low perceived risks of smoking). Three factors play a major role in breaking these cycles which are strong will power, continuous peer support and avoidance of smokers’ company. Conclusion A young smoker, during his experience of quitting smoking gets entrapped in several overlapping cycles of smoking & quit attempts before successful quitting. There are known entrapping factors as well as factors which help in breaking these cycles. Targeted interventions are needed to facilitate smoking cessation among young smokers in Pakistan. Electronic supplementary material The online version of this article (10.1186/s12889-018-5388-7) contains supplementary material, which is available to authorized users.
Background
Pakistan is among the top 15 countries for burden of tobacco related morbidities and mortality [1]. There are almost 24 million tobacco users in the country, the majority being smokers [2]. Currently very few smokers (24.7%) make quit attempts in Pakistan as compared with other countries where 40%-50% of users try to quit every year [2][3][4][5]. The success rate of quitting is also low in the country, only 2.6% succeed [6]. International literature points out higher rates of successful quitting compared with Pakistan. In Brazil, only 42.1% of smokers experience a relapse after a quit attempt [3]. Findings from Italy showed a quit attempt rate of 40% among smokers with success rate of 8% at the first attempt [4]. In the United States, nearly 50% of smokers make at least one quit attempt in their life, the success rate is 3-5% for unaided attempts each year [5,7]. A difference between the numbers of people willing to quit and those succeeding indicates a gap in the means available to achieve successful quitting [8].
Numerous epidemiological and qualitative studies have tried to understand this gap and identified factors responsible for successful or unsuccessful quitting. Successful quitting is dependent on higher socioeconomic status, older age, health status, quitting history, quit intentions, high taxation, awareness and use of assistance [9,10]. Pakistani data shows that the majority of smokers are unable to attribute reasons for unsuccessful quitting, while others relate it to addiction, stress and peer pressure [6,11,12]. Abdullah and Husten have also highlighted several obstacles that could obstruct smoking cessation in low and middle-income countries like Pakistan including; poor healthcare systems, low awareness about health hazards of smoking among public, no smoking cessation policies and tobacco industry marketing strategies [13].
Use of qualitative studies is of paramount importance in understanding the complex process of quitting and for designing targeted interventions. Various qualitative studies and behaviour models (like the health belief model, theory of reasoned action, social cognitive theory and stages of change model) have been used to get a deeper understanding of smoking cessation processes and design smoking cessation services [14-19]. However, almost all of these studies are based on data from high income countries. Smokers in these countries live in a different cultural context and social environment, where knowledge about smoking hazards is relatively high, effective national level policies exist to promote cessation, healthcare systems are supportive, and specialized cessation clinics and services are available. In contrast, low and middle income countries like Pakistan have a different cultural context, with health systems lacking specialized support for cessation, and there is a dearth of context-specific explanations of failed quit attempts [13]. A deeper understanding of the smoking cessation process is needed in these countries to guide targeted interventions. Phenomenological studies can best capture the lived experience of smokers and ex-smokers and elucidate the whole process of quitting from smokers' perspectives [14]. This would not only highlight both barriers and facilitators of quitting, but also the need for specific support during different phases of the quit journey. We carried out this study to capture the experiences of young smokers while quitting smoking in a low and middle income country, Pakistan, with the aim of elucidating the whole quitting journey. Such studies are relevant to policies in the Pakistani context, where smoking cessation programs are in their infancy.
Method
A phenomenological approach was used to explore the experiences of adult male smokers in quitting smoking in Pakistan. Phenomenology involves capturing the lived experiences of individuals who have experienced the phenomenon of interest, in this case quitting smoking. We used Streubert's [20] procedural steps for carrying out phenomenology (attached as Additional file 1).
The study was conducted in Rawalpindi and Islamabad from April 2016 to July 2016. We recruited only males in the study considering the gender specific social context of smoking in Pakistan and other South Asian countries where smoking among men is common while less common and stigmatized among women [2,[21][22][23]. Other inclusion criteria for study participants were age up to 40 years and persons who were or had been smokers and had made at least one quit attempt. The age limit was applied considering the maximum health benefits gained by quitting before age 40 (evading more than 90% of the health risk), while quitting at age 50 would only result in evasion of 50% of the health risks [24,25].
Study participants were recruited using a mix of purposive, snowball and theoretical sampling techniques to fully elucidate the phenomenon. The starting point for selection of participants was the universities of Rawalpindi and Islamabad, where young male smokers who had tried quitting at least once were invited to participate in the study using notice boards, social media and mobile phone based text messages. A total of five individuals showed interest within ten days of the invitation. However, two refused to participate in the study after learning details about the interview method and recording. One of the three selected participants suggested two more smokers meeting the inclusion criteria. The next four participants were selected using a theoretical sampling technique based on emerging findings to complete the description of the phenomenon, i.e., smoking cessation. This made a mix of five current smokers, four intermittent smokers and two successful quitters. Intermittent smokers were defined as those who were not daily or regular smokers [26][27][28] while successful quitters were those who had been abstinent from cigarettes for one year or longer [29,30]. This mix was required to fully elucidate the lived experiences of quitting smoking among young adults and identify what ultimately leads to successful quitting.
We interviewed a total of 11 men for this study; phenomenology usually requires a sample of 1-10 participants for a full description of the phenomenon [31]. Interview date, time and venue were based on mutual agreement between the participants and the researcher. An in-depth interview guide was used to collect information from study participants as per Kvale's recommendations [32]. Interviews were conducted in the participants' native language, Urdu, to enable them to feel comfortable talking and expressing themselves fully. Each interview was audio recorded by mobile phone (mp3). Interview duration ranged from 30 to 71 min.
We also used other data collection methods: a log book, field notes and a peer debriefing journal. An interview log was maintained to record socio-demographic details of participants and notes during interviews. Field notes were also taken to record the expressions, emotions and body language of the participants. These were used at the analysis stage to guide the interpretation of audio recordings. The peer debriefing journal was maintained to record debriefing sessions with peers after each interview and also during the analysis stage. The notes in the journal helped in expanding the interview guide for later interviews and guided the whole analysis process and model development. One focus group discussion was conducted after the main data analysis, to validate the model and record second-hand information regarding experiences of young smokers in quitting smoking. This is called the shadowed data technique [33]. Seven participants, including health professionals, PhD scholars and students, were recruited for the focus group discussion. These participants were not smokers but had smokers in their families and social circles.
Data analysis was done side by side with data collection. All interviews were transcribed using online time-stretcher software. The transcribed interviews were read many times before starting the analysis. Seven steps as per Streubert's [20] method of phenomenology were followed during data analysis. Intuiting was the first step in data analysis, which involved development of initial perceptions about the phenomenon (the experience of quitting) by immersing in the descriptions of the experience [20]. Intuiting began after the first interview. Participants' descriptions of their experience were listened to attentively, their body language was observed intently and vocal intonations were carefully noted. Audio recorded interviews were listened to and transcripts were read several times to ensure the quality of intuiting. After intuiting, the next step was phenomenological analysis: the formulation of codes and common themes to reach more abstract essences [20]. OpenCode 4.02 software was used for coding. All codes, themes and essences were assigned manually. An example of a journey from transcript to essences is given in Additional file 2. Peer debriefing and imaginative variation were used in the later stages of analysis to identify relationships between and patterns across essences. During the iterative process of intuiting, analyzing, and apprehending essential relationships, participants were contacted for their feedback on the formalized description of the phenomenon to develop an accurate and thorough understanding of it. These follow-up interviews provided opportunities to identify whether codes, essences, or relationships across essences required modification. A summary of the formalized description of the phenomenon was shared with interested participants for feedback. Five of these participants provided feedback and agreed that the findings were close to the discussions during interviews.
Ethical approval for the research was obtained from the Institutional Review Board (IRB) of Al Shifa Trust Eye Hospital and written informed consent was obtained from all participants. Confidentiality and anonymity were assured to each participant.
Results
A total of eleven individuals participated in the study. All participants were male, with mean age 26.9 (SD = 4.51) years, ranging from 22 to 40 years. All participants were unmarried except one current smoker and one successful quitter. Table 1 lists the socio-demographic information of participants. Five participants were current smokers, four were intermittent smokers and two were successful quitters. One of the successful quitters had quit smoking 15 years previously and the second had quit 2 years previously. Intermittent smokers reported smoking one or two cigarettes per week on average and occasionally exceeding this limit, while current smokers reported 1-20 cigarettes per day. Smoking duration ranged between 2 and 11 years; the average time since smokers started smoking was 6.72 years. Most participants had initiated cigarette smoking during adolescence (mean age 17 years) or during their college years. All the study participants had made at least one quit attempt, while some had made several attempts. Most of the participants used cardamom and tea as alternatives to cigarettes during quit attempts. Two participants switched to snuff to lessen the craving for cigarettes. One of the participants used nicotine gum as a smoking cessation aid; he had heard about it from a commercial but reported that it was not helpful. He had also consulted a doctor for cessation support, who had suggested that he try a healthy diet as an aid to cessation, which he also found unhelpful. None of the participants had any idea about smoking cessation centers in Pakistan. None of the participants reported any diagnosed disease associated with smoking.
Experience of quitting smoking
After rigorous analysis of all data, the lived experiences of smokers were condensed into a model comprising two sections (Fig. 1). The first section consists of different stages through which a smoker passes during the journey toward quitting. The second section encompasses a list of factors that are consistent throughout the smoking journey. These factors act as facilitators during smoking initiation and become barriers to quitting smoking.
Section A: Smoking phase and quitting phase
A smoker passes through several stages during his journey to becoming an ex-smoker. For young smokers, experiences of quitting smoking are deeply embedded in the experiences of starting smoking and living a smoker's life. They compare their feelings and experiences of quitting with the feelings and experiences at the time when they had started smoking and were living a smoker's life.
The whole experience of the smoking journey can be summarized in two phases; the smoking phase and the quitting phase, each comprising of different stages. The movement from one stage to the next stage depends on presence of pull factors and absence of entrapping factors.
Smoking phase
The smoking phase starts when a non-smoker gets exposed to multiple stimuli that motivate them to smoke. They cannot resist the urge and start smoking. After the first puff, young smokers learn proper smoking techniques and start to enjoy it. It then becomes a major part of their life. The factors listed in part B of the model (Fig. 1) act as facilitators in uptake of a smoking habit. The smoking phase can be described as consisting of three major experiences or stages: first exposure to stimuli that promote smoking, the journey from first puff to enjoyment of smoking and then finally smoking as part of their lives.
Exposure to stimuli that promote smoking
Becoming a smoker was expressed by participants as entering a world of fantasy. The non-smoker entered this world for pleasure and well-being, and this entry was based on exposure to a mix of stimuli including curiosity, peer pressure, trend and feeling a need to start smoking (Fig. 1).

Fig. 1 Experiences of quitting smoking among 'young' male smokers
Curiosity was found to be a major reason that motivated individuals to start smoking. Living, working or socializing with smokers had made the non-smoker curious about smoking. As participants expressed:

"Whenever I used to see a person smoking, I used to get curious… I wanted to know what is smoking and how it feels to smoke.…, the way they inhale smoke and then the way they blow it out..." (Participant A)

Participants also pointed out peer pressure as a stimulus which promoted smoking. Non-smokers felt they needed to start smoking to be friends with smokers and to socialize with them.

"My friends were smokers, they were my community and I used to feel left out as a non-smoker when they were smoking. Then, I started smoking with my friends"…. (Participant C)

Participants described cigarette smoking as fashionable, stylish and trendy, which also acted as a stimulus for them to start smoking.

"Cigarette ads on TV were fascination for me, smoking was depicted as an amazing thing in commercials, as smokers were shown climbing on mountains…. it was something very cool to me"… (Participant E)

Curiosity, peer pressure and smoking as a trend had instilled the need to smoke cigarettes, as revealed by participants. Participants divulged that they had started smoking of their own will.
"…it was like… as if I felt an urge from inside to smoke cigarettes…you know the feeling when someone is hungry and has food in front of him, he feels a strong urge to eat that food… so the way smoker was smoking and enjoying captivated me to try one" … (Participant Q1) Journey from first puff to enjoying smoking Participants revealed that when they entered the world of smoking, at first, they did not know how to smoke. They started puffing and then were facilitated by their smoker friends about the actual technique of smoking. Their friends had fully guided them towards the right smoking technique. After the first successful smoking attempt, they celebrated their achievement and they remember it as a great memory which had given them pleasure and improved their well-being (Fig. 2).
Participants were taught that inhaling the smoke of a cigarette is real smoking.
"My friend told me how to actually inhale cigarettes smoke…. He told me that what I was doing was not real smoking, it's puffing…he explained to me.… soon after taking the cigarette in the mouth make a sound "sseee" with your tongue only in this way you can actually inhale smoke'… get the real effect of cigarette"….. (Participant C) After learning the actual technique of smoking, participants really enjoyed it and celebrated their achievement.
"When I smoked in the right way… I looked at my friend proudly… in a way…to show that… now I know how to smoke and how it feels. It was amazing" (Participant A) Among good memories, the most unforgettable memory for some participants was the feeling they got when smoking that first cigarette. Participants expressed that in pursuing that first feeling they actually became a regular smoker. Participants reported that smoking cigarettes had always given them enjoyment and relieved everyday stress from their lives.
"Cigarette is a felicity, bliss and happiness. I mean it is an enjoyment for me"….. (Participant F)
Smoking as a part of life
The participants reported that smoking was started as fun but gradually it became a necessity, a daily habit and an important part of their life.
"I had started it (smoking cigarettes) as fun, then it became my habit…now it is difficult for me to quit"… (Participant J) The participants also pointed out that their smoking habits are paired with their everyday activities.
"After eating the meal, I feel a strong urge to smoke, just do nothing but to smoke and if I do not smoke, I get a headache, a feeling as if something is missing and I have lost something. I feel like my meal is incomplete without cigarette" …. (Participant A) Participants expressed that they were so addicted to smoking that it became an important routine task of their life.
"For me, cigarette smoking is a routine and a habit"…… (Participant D) Smokers did not view cigarettes as tobacco ie: a substance, they expressed that cigarettes could be companions.
"Cigarette for me is a way to spend time, whenever I have nothing to do, I smoke cigarettes…. cigarette is my companion of loneliness"… (Participant C) The participants pointed out that they use smoking as a coping strategy for the emotional problems of everyday life.
"Whenever I feel like crying or I am stressed, cigarettes relax me to a great extent, they divert my attention"….
Participants developed a defensive attitude towards smoking at this stage and they were in denial about the serious harmful aspects of smoking.
"What's bad in smoking? We are not doing anything bad. No, smoking is not a bad thing" (Participant D)
Quitting phase
The quitting phase starts with a quit attempt, as a result of some stimuli hitting smokers just like the stimuli that hit them at smoking initiation. Smokers move through different stages, namely forced quit attempt(s), intentional quit attempt(s), becoming an intermittent smoker and then a successful quitter. The journey from getting in contact with stimuli to successful quitting was not reported to be as smooth and simple as was the case in the journey of smoking initiation. Instead, smokers get trapped in three overlapping cycles of: smoking & forced quitting, smoking & intentional quitting and smoking & intermittent smoking before successful quitting. Breaking the cycle is not easy in the presence of trapping factors (addiction, high availability of cigarettes, easy affordability of cigarettes, a conducive social setup for smoking and low perceived risks of smoking). Three factors were reported to play a major role in breaking these cycles: continuous peer support, strong will power and avoidance of smokers' company by young smokers and quitters.
Participants shared that quitting smoking was not a simple task. It was expressed as more than leaving a habit, in fact sacrificing a part of one's life. It was leaving a companion of loneliness, sacrificing a buddy through meddlesome events, parting from one's stress reliever, and changing one's everyday routine.
Cycle of forced quit attempts and smoking
Participants reported that their first quit attempt was not their personal choice but rather a forced decision. Smokers expressed that their families compelled them to quit smoking emotionally or forcefully.
"When my family came to know about my smoking, they used the typical way of beating… they had beaten me, forced me to quit and they had taken the promise from me not to smoke again"… (Participant A) Forced quit attempts were never successful in the experience of our participants, no matter for how long the forced quit attempt lasted they always had a relapse and returned to their normal smoking habit. The major reason was the lack of personal will.
"I am not ready to quit smoking. There is no other reason just that I do not want to quit"…. (Participant D)
Fig. 2 Comparative journey of initiation and quitting smoking
Another important feature of this stage was that the family pressure was not continuous. It started with high intensity but then family members got used to the person as a smoker and believed that quitting was impossible.
"As I grew older, my family accepted my smoking… my mother sometimes asks me to quit but my other siblings declared me independent in this regard. They are like… do whatever you want to do"… (Participant A)
Cycle of intentional quit attempt and smoking
The cycle of smoking and forced quit attempts breaks when the right mix of stimuli hits smokers and instils a desire in them to quit. These stimuli were comparable to the ones reported in smoking initiation, however less intense and fewer in number (Fig. 1). The ultimate target was to improve well-being by quitting smoking. These stimuli were peer support, a desire for a better lifestyle and a wish to quit smoking (Fig. 2). When these stimuli were encountered by a smoker, he started his intentional journey towards becoming a non-smoker. Our data suggest that the continuous presence of these stimuli at strong intensity in the smokers' lives was crucial for successful quitting. Most of the participants faced relapse and were trapped in this cycle. The previous experiences of smokers at this stage were highly crucial in influencing whether they would get trapped in this cycle or succeed in moving on.
Participants at this stage expressed that they had weak will power, weak peer support and strong barriers, namely addiction, high availability, easy affordability, a conducive social setup for smoking and low perceived risk of smoking. Hence, relapse occurred at this stage for most of the participants (Fig. 1). Participants shared varied durations of temporary success before they resumed smoking, i.e., from less than one day to seven months. They reported that they had experienced several such intervals.

Participants made intentional quit attempts with weak will power and weak support from peers. They did not get guidance, as they had when initiating smoking; instead, they were discouraged by peers (Fig. 2).
"When I tried to quit smoking, no one cooperated. If I tried to ask someone to help me, instead of helping, they taunted that why I had started smoking …. Even my friend, just for formality, said, 'yes, it is good to quit smoking but dear friend you cannot' (quit smoking)" (Participant C) In our findings, some smokers had made even more than 25 attempts to quit but they were not successful. This was the toughest stage of the quitting phase, as in spite of his own wish to quit, he failed to do so. Our findings revealed that here in this stage, a smoker's willpower alone, was not strong enough to quit smoking.
"Everyone told me that I can quit by will power. I think, may be my will power is weak and only those who have strong will power can do it"…. (Participant C)
Cycle of intermittent and regular smoking
Participants could break the cycle of intentional quit attempts & smoking when they got strong peer support and thereby developed strong will power. After developing strong will power and receiving strong peer support, participants made quit attempts; they were successful in the sense that they had reduced the number of cigarettes and were smoking occasionally. However, not all of our participants could completely quit; some reduced their habit to smoking one cigarette a week or smoked off and on and became intermittent smokers.
"I reduced the number of cigarettes by will power, for example from 12 cigarettes to 2 cigarettes but I could not completely quit smoking"… (Participant C) Intermittent smokers could relapse to smoking in the following situations: Stressful life events; when these intermittent smokers were sad, depressed, stressed, worried or had no other solution or support, they restarted smoking.
"Often when I am in depression, as now-a-days… normally people get depressed two or three times a day. So, I restarted smoking whenever I had this depression phase"… (Participant C) Smokers' Company; participants expressed that when they were in the company of smokers they smoked or sometimes relapse occurred upon staying with them for long time.
"My close friends are smokers…. It is very difficult for me to quit smoking"… (Participant C) Free time and loneliness; for a quitter, the most tough or challenging thing was free time. It was a big hindrance in maintaining quitting, when a smoker had nothing to do, he again started smoking as cigarettes made him feel busy and gave him the feeling of being in company.
"I started smoking again because most of the time I had to be alone. I did not have anything to do. I did not have much workload. So, I restarted smoking"… (Participant D) Movement from this stage to next stage was dependent on strong will power and peer support and absence of free time, loneliness, stressful life events and smokers' company.
Successful quitting
Successful quitters among our participants revealed that strong will power, continuous peer support and no longer seeking smokers' company helped them in maintaining quitting and decreased the chance of relapse. However, the entrapping factors (Section B, Fig. 1) made this transition very difficult and very few smokers reached the stage of successful quitting.
"In the start, I had quit smoking for hours, and then I had quit it for days. Then I changed my smoker friends, and finally, I have quit smoking forever. I think if you get a good companion, quitting smoking can be easy…. I got that companion, who never let me feel alone, talking to that friend never makes me feel time is not passing…. When I am with my friend, I never feel craving for cigarette"… (Participant Q1)
Section B: Facilitators in initiation and barriers during quitting smoking
Participants shared that the experiences of the quitting journey were not smooth and resulted in entrapment in different cycles because of some external factors. These factors had acted as facilitators at the start of their smoking journey. However, these facilitators were transformed into strong barriers later when quitting smoking. They acted as an entrapping force keeping smokers in the cycles. These entrapping factors were: high availability, easy affordability, a conducive social setup for smoking and low perceived risks of smoking.
High availability
Cigarettes are easily available everywhere, which makes non-smokers easily attracted towards cigarettes. Seeing everyone around smoking and cigarette availability in almost every shop make it very easy for a non-smoker to start smoking. Participants highlighted this issue:

"Cigarette availability everywhere is the big reason of smoking. It's available in every shop… almost everywhere, this is the reason everyone is smoking… from teenagers to older ones"… (Participant Q2)

High availability also acted as a strong barrier in quitting smoking.
"If you want to quit (cigarette smoking), it is difficult… as it (cigarette) is available everywhere"… (Participant A)
Easy affordability
Another strong stimulator in smoking initiation and strong barrier in quitting smoking mentioned by participants was easy affordability. Participants mentioned that in Pakistan cigarettes are easily affordable at low cost.
"This is the biggest issue… I think cigarettes in Pakistan are very cheap and available everywhere. When I was in Saudi Arabia, cigarettes' availability was very low and cigarettes were much expensive. Their least costly cigarette, in Pakistan is the most expensive one"…. (Participant E)
Conducive social setup for smoking
Social acceptability was a big challenge as well. Cigarette smoking was not stigmatized in society, as expressed by participants. It is quite acceptable behavior, especially once one is past adolescence and when it is common in a family.
"On my elder brother's wedding, all cousins were gathered. We smoked cigarettes together and we did it throughout the night. It was fun and even our families did not mind it"… (Participant B) People compared cigarettes with other addictive substances which were labeled as bad and stigmatized but cigarettes as normal. So, this conducive setup promoted smoking initiation and acted as a strong barrier for a quitter in quitting smoking. As a participant said: "No, smoking is not bad; it does not harm you as such as compared to other things. If you say only smoking is bad, you are wrong as there are many other things which are bad and we do not pay attention to them"… (Participant D)
Low perceived risks of smoking
Participants perceived smoking as having low risks. This perception made it easier for them to start smoking and acted as a strong barrier when planning to quit smoking.
"Well, I have seen a smoker, who is smoking for 8 years; he has not got cancer yet. I have many other examples as well… no one has seen a smoker yet who get cancer within 2 years or within 4 year…. so as a smoker I believe, it does not harm, it will not cause you cancer or any other major ailment"… (Participant C)
Discussion
The present research aimed to explore the experiences of young smokers in quitting smoking using a phenomenological approach. The main findings of the study suggest that quitting smoking is a complex journey, where quitting experiences are deeply embedded in the experience of starting smoking. A smoker gets entrapped into three overlapping cycles of smoking and quit attempts: smoking & forced quitting, smoking & intentional quitting, and smoking & intermittent smoking before successful quitting at a young age. Breaking the cycle is not easy in the presence of trapping factors (addiction, high availability, easy affordability, a conducive social setup and low perceived risks of smoking). Three factors play a major role in breaking these cycles: strong will power, continuous peer support and avoidance of smokers' company. Our model describes smoking cessation as a stage-wise process, like the trans-theoretical model, which explains behavioral change in terms of different stages (pre-contemplation, contemplation, determination, action, maintenance/relapse/recycle and termination) [17]. It also highlights factors external to individual control, like affordability and availability.
Our findings suggest that smokers compare their quit journeys to their experiences when they first entered the smoking world. Entry into smoking is stimulated by curiosity, peer pressure and fashion. In contrast, similar factors to stimulate quitting are weak or non-existent. Previous studies have shown that smokers attempt to quit due to social pressure, intrinsic health concerns and stigmatization [34][35][36][37], whereas our study showed low perceived health risks, lack of stigmatization, and a socially conducive set-up for smoking in Pakistan. This may explain why fewer users attempt to quit (24.7%) in the country compared to other countries, where almost 40%-50% of users try to quit [2][3][4][5]. As smoking is reported to spread amongst groups of friends in Pakistan, using social groups in cessation may be of great help. Evidence shows that such groups provide an environment where people may find the support necessary to overcome the challenge of quitting tobacco addiction and share their experiences and difficulties [38,39]. Smoking cessation programs should also consider the key points of fashion and peer pressure while designing interventions for young people. Bringing smoking cessation into the media limelight and making it fashionable could help motivate smokers to quit, as could using the social capital of family and peers to promote smoking cessation. Participants highlighted that their initial attempts to quit were always based on family or friends' pressure, but the pressure was not continuous and they had relapsed. Family pressure is an established triggering factor for quit attempts [34,40] and has been used for smoking cessation in different high income settings [41][42][43]. However, results of such interventions are not very promising [43]. Using family based interventions is an unexplored potential area in Pakistan, where family bindings and structures are strong.
Our study emphasizes that smokers' entry into the smoking world is fully guided and facilitated by friends. Wang, Gjengedal and Larsen presented similar findings that smoker friends properly train new smokers on how to smoke [34]. This is in sharp contrast with the quit journey captured in this study, where nobody guides smokers wishing to quit and there is a possibility of discouragement as well. Smokers in Pakistan highlighted that there was no training, guidance or proper treatment to support cessation. Similar findings of lack of support, being alone in quit attempts, lack of role models and taunting comments from peers have been discussed in other studies as reasons for unsuccessful quit attempts [44]. Many countries have introduced behavioral training and pharmacologically or professionally mediated interventions to assist smoking cessation for different target groups [45][46][47]. There is a need to introduce easily accessible and affordable smoking cessation services in Pakistan, especially for young smokers who are not suffering from chronic diseases. Immediate support for smoking cessation is all the more important for young smokers, as unassisted efforts lead to continuous relapse (entrapment in cycles of quitting and relapsing) and discourage further attempts, as found in our study and other literature [48,49].
We found that smokers can break the cycle of intentional quit attempts & smoking when they develop strong will power. Other studies have also reported will power as a key variable in quitting, a strategy to counteract cravings and a personal quality or trait fundamental to quitting success [50][51][52]. The theory of reasoned action/planned behavior states this as perceived control, which is based on positive attitude (individual's belief) and subjective norms [16], while social cognitive-behavioral theory names it 'self-efficacy' [18]. Will power as expressed by our participants is not something static; instead, it builds with time through intrinsic and extrinsic motivation like peer support. Continuous peer support provides an alternative to cigarettes by providing psycho-social support and substituting all the values attached to smoking. Our findings are supported by other studies which used this support concept as an intervention to help smokers quit [41,49,50,53].
Breaking the cycle of quitting and relapsing is not easy in the presence of trapping factors such as addiction, high availability, easy affordability, a conducive social setup for smoking and low perceived risks of smoking. These results converge with studies which highlighted nicotine addiction, ease of purchase and availability of cigarettes as hindrances to quitting smoking [14,54,55]. Easy access to cigarettes, or availability everywhere, was also discussed in studies as making quitting difficult and causing relapse more frequently [7,54]. Cigarettes are not only available everywhere but also at very low cost in Pakistan. The low price of cigarettes makes them readily affordable for everyone. Smokers can also purchase one or two cigarettes from open packs, which decreases their cost further. There is a variety of cigarettes available at a range of prices in Pakistan. The minimum price of a cigarette is US$0.15 and the maximum is $1.3, with tax of $0.09 [56]. If we compare this price with other countries, in Saudi Arabia a cigarette costs $1.68 to $22.4, in the US a cigarette costs more than $14 with tax of $4.3 per 20-cigarette pack, and in the UK a pack costs $12.94 with tax of $9.51 [57,58]. These figures show that Pakistan offers cigarettes at the lowest price, which hinders quitting smoking. Similar findings are shown in studies which state that ease of purchase of cigarettes makes quitting difficult and creates triggers for relapse [7,55]. Moreover, the conducive social setup is a big barrier in a country like Pakistan, in contrast to high-income countries where smoking is becoming more stigmatized [35,37]. Pakistan needs to strengthen its overall tobacco control program.
Our study represents only the experience of urban-dwelling young male smokers, as we conducted interviews with those living in urban areas. Moreover, we could not cover the opinions of very poor and elite-class smokers, who may have different experiences. Results of the study need to be cautiously generalized, as they are based on a small sample of young smokers below 40 years of age. We found it challenging to categorize smokers as regular, intermittent or quitters per se, as participants had shared multiple movements in and out of these stages. Some of the participants had labelled themselves, as well as others, as successful quitters although they had actually either decreased their smoking frequency or were smoking on an intermittent basis.
Conclusion
The journey of quitting smoking is quite complex and is deeply embedded in the experience of starting smoking. The stimuli which cause initiation of smoking are important in quitting too. A young smoker, during his experience of quitting smoking, gets entrapped into several overlapping cycles of smoking & quit attempts before successful quitting. Being entrapped in such attempts in itself discourages successful quitting. Breaking the cycle is not easy in the presence of trapping factors (addiction, high availability, easy affordability, a conducive social setup for smoking and low perceived risks of smoking). Three factors play a major role in breaking these cycles: continuous peer support, strong will power and avoidance of smokers' company.
Considering the complexity of the smoking cessation phenomenon, multi-faceted smoking cessation programs are required in Pakistan. Our findings highlight the need to target fashion and peer pressure as stimulants to promote smoking cessation. Media campaigns could be launched to make non-smoking status fashionable.
Smoking should also be banned in TV dramas and movies, where it is often shown as a stylish way to cope with stress. An overall change in social norms is desired, making smoking less of a normal behavior. As smoking behavior is adopted in groups, group interventions for young smokers may be successful. This study has highlighted peer support as a key factor in quitting smoking. Peer support systems have been successfully used for quitting smoking in other countries like the US and UK [45][46][47]. Peer support systems for young smokers should be developed in colleges, universities and communities to direct young people towards constructive and productive social habits and positive common goals. In addition, a "no smoking day or week" could be celebrated where ex-smokers can share their success stories of quitting. Universities and colleges could also organize motivational talks by successful quitters for young smokers. Pakistan can learn from the UK, where smoking cessation rates have increased from below 14% to almost 20% by developing a conducive environment for quitting and providing a wide range of quitting methods like social support, motivational campaigns, and a ban on attractive imagery on cigarette packs [45].
Our research also highlights that smoking cessation is a physically, psychologically and emotionally charged process which requires expert advice and therapy as well as continuous peer support and will power. Our findings suggest a lack of cessation services for young smokers in Pakistan. There is an urgent need to make such context-specific services available. Moreover, the role of families and friends needs to be explored in designing smoking cessation services for young smokers in Pakistan.
Additional files
Additional file 1: Streubert's procedural steps of phenomenology. Methodological steps followed during this study adapted from 'Streubert's procedural steps of phenomenology'. (DOCX 13 kb) Additional file 2: 'Example of a part of the analysis process. An example of the analysis process: From transcript to essence. (DOCX 13 kb)
Pure non-Markovian evolutions
Non-Markovian dynamics are characterized by information backflows, where the evolving open quantum system retrieves part of the information previously lost in the environment. Hence, the very definition of non-Markovianity implies an initial time interval when the evolution is noisy, otherwise no backflow could take place. We identify two types of initial noise, where the first has the only effect of degrading the information content of the system, while the latter is essential for the appearance of non-Markovian phenomena. Therefore, all non-Markovian evolutions can be divided into two classes: noisy non-Markovian (NNM), showing both types of noise, and pure non-Markovian (PNM), implementing solely essential noise. We make this distinction through a timing analysis of fundamental non-Markovian features. First, we prove that all NNM dynamics can be simulated through a Markovian pre-processing of a PNM core. We quantify the gains in terms of information backflows and non-Markovianity measures provided by PNM evolutions. Similarly, we study how the entanglement breaking property behaves in this framework and we discuss a technique to activate correlation backflows. Finally, we show the applicability of our results through the study of several well-known dynamical models.
Introduction
Open quantum system dynamics describe the evolution of quantum systems interacting with an external system, typically represented by the surrounding environment. The unavoidable nature of this interaction makes this topic of central interest in the field of quantum information [1,2]. This reciprocal action may lead to two different regimes for the information initially stored in the system. An evolution is called Markovian whenever there are no memory revivals and therefore the system shows a monotonic information degradation. On the contrary, non-Markovian evolutions are those showing information backflows, where partial information stored in the system is first lost into the environment and then retrieved at later times (for reviews on this topic see [3][4][5][6][7]). Hence, the very definition of these evolutions implies the existence of an initial time interval when the dynamics is noisy, otherwise no backflow from the environment could be possible.
In this work, we address the question of whether all the initial noise applied by an evolution is necessary for the subsequent non-Markovian phenomena. We identify two noise types. While the first, which we call useless, is not necessary for information backflows, only the information lost through essential noise takes part in the characteristic non-Markovian phenomena. Starting from this observation, we classify non-Markovian evolutions as noisy or pure, where the former have both types of noise, while the latter implement essential noise only. Hence, the information initially lost with pure non-Markovian (PNM) evolutions always takes part in a later backflow, which occurs even in time intervals starting immediately after the beginning of the interaction with the environment. Instead, the useless noise of noisy non-Markovian (NNM) evolutions has the sole result of damping the information content of the open system, and it diminishes the amplitude of backflows.
This classification is in close analogy with the structure of quantum states, where mixed states can be obtained through noisy operations on pure states. Similarly, NNM evolutions can be obtained via Markovian pre-processings of PNM evolutions, which we call PNM cores. Moreover, just as pure states allow the best performance in several scenarios and protocols, PNM evolutions are characterized by the largest information revivals and non-Markovianity measures.
The interest in considering PNM cores of known NNM evolutions resides in the possibility to isolate a dynamics with the same non-Markovian qualitative features and, at the same time, with the largest possible non-Markovian phenomena. For instance, in case of an experimental setup where the visible non-Markovian phenomena generated by a target evolution are not significant due to various additional noise sources in the laboratory (preparation, measurements, thermal noise, etc.), the possibility to isolate and implement the corresponding PNM core may allow us to appreciate the same non-Markovian phenomena that we failed to detect with the noisy version.
The first main goal of this work is to identify the initial useless noise of generic non-Markovian evolutions. While doing so, we propose a structure for the timing of the fundamental non-Markovian phenomena happening in finite and infinitesimal time intervals. This framework provides a straightforward and natural approach to discriminate Markovian, NNM and PNM evolutions. We follow by showing how to isolate the PNM core of a generic NNM evolution and, conversely, how any PNM evolution can generate a whole class of NNM evolutions. Later, we explain how and to what extent PNM evolutions are characterized by larger information backflows and non-Markovianity measures. In particular, we focus on backflows of state distinguishability. Finally, we show how the entanglement breaking property behaves in this scenario and we discuss a technique to activate correlation backflows that cannot be observed in presence of useless noise. We then apply our results to several models, such as depolarizing and dephasing evolutions.
Quantum evolutions
We define S(H) to be the set of density matrices of a generic d-dimensional Hilbert space H. The time evolution of any open quantum system can be represented by a one-parameter family Λ = {Λ_t}_{t≥0} of quantum maps, namely completely positive and trace preserving (CPTP) superoperators. We define Λ to be the evolution of the system, while Λ_t is the corresponding dynamical map at time t. Hence, the transformation of an initial state ρ(0) ∈ S(H) into the corresponding evolved state at time t is ρ(t) = Λ_t(ρ(0)) ∈ S(H).
We consider Λ as a collection of dynamical maps continuous in time. This is because any open quantum system evolution obtained through the physical interaction with an environment, even in case of non-continuous Hamiltonians, is continuous [8]. Secondly, we assume divisibility, namely the existence of an intermediate map for any time interval. More precisely, for all 0 ≤ s ≤ t we assume the existence of a linear map V_{t,s} such that Λ_t = V_{t,s} ∘ Λ_s. Invertible evolutions are an instance of divisible evolutions. We call an evolution invertible if, for all t ≥ 0, there exists the operator Λ_t^{-1} such that Λ_t^{-1} ∘ Λ_t = I, where I is the identity map on S(H). Indeed, in these cases V_{t,s} = Λ_t ∘ Λ_s^{-1}. While divisibility makes all the steps of the following sections easier, in Section 5 we show how to generalize our results to non-divisible evolutions.
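To make these definitions concrete, the following minimal sketch (in Python with numpy; the dephasing family is an illustrative channel chosen for this example, not a model prescribed here) represents qubit maps as 4 × 4 superoperator matrices and builds the intermediate map V_{t,s} = Λ_t ∘ Λ_s^{-1} of an invertible evolution.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)

def kraus_to_superop(kraus):
    """Superoperator S with vec(Lambda(rho)) = S @ vec(rho), using the
    row-stacking convention vec(rho) = rho.reshape(-1):
    S = sum_a K_a (x) conj(K_a)."""
    return sum(np.kron(K, K.conj()) for K in kraus)

def dephasing_map(p):
    """Superoperator of rho -> p*rho + (1 - p)*SX rho SX (illustrative channel)."""
    return kraus_to_superop([np.sqrt(p) * I2, np.sqrt(1 - p) * SX])

def intermediate_map(S_t, S_s):
    """V_{t,s} = Lambda_t o Lambda_s^{-1}, defined whenever Lambda_s is invertible."""
    return S_t @ np.linalg.inv(S_s)
```

The map returned by intermediate_map is linear and trace-preserving by construction, but, as discussed next, it need not be completely positive.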
We say that an evolution is CP-divisible if and only if between any two times it is represented by a quantum channel. Hence, this property corresponds to requiring that the intermediate maps V_{t,s} are CPTP for all 0 ≤ s ≤ t. Remember that any implementable quantum operation is represented by a CPTP operator, which is the reason why dynamical maps Λ_t are required to be CPTP at all times. In case of non-CP-divisible evolutions, there exist s ≤ t such that V_{t,s} is not CPTP, while at the same time Λ_t = V_{t,s} ∘ Λ_s must be CPTP. In this case the transformation acting in the time interval [s, t] cannot be applied independently from the transformation applied in [0, s].
We define Markovian evolutions as those being CP-divisible. Thanks to the Stinespring-Kraus representation theorem [9,10], such a definition adheres to the impossibility for the system to recover any information that was previously lost. Indeed, as we better explain later, CPTP operators degrade the information content of the system. Therefore, an evolution is non-Markovian if and only if there exists at least one time interval [s, t] when the evolution is not described by a CPTP intermediate map. Indeed, whenever this is the case, it is possible to obtain an information backflow during the same time interval [11,12], even for non-invertible evolutions [13,14].
Given an evolution Λ, we define P_Λ to be the collection of time pairs such that the corresponding intermediate maps are CPTP. This set can be obtained by considering the smallest eigenvalue λ_{t,s} of the state obtained by applying the Choi-Jamiołkowski isomorphism to V_{t,s} [15,16]:
P_Λ = {(s, t) : 0 ≤ s ≤ t and λ_{t,s} ≥ 0}. (1)
Indeed, V_{t,s} is CPTP if and only if λ_{t,s} ≥ 0. Similarly, we define the complementary set of P_Λ to be N_Λ, the collection of time pairs such that V_{t,s} is non-CPTP:
N_Λ = {(s, t) : 0 ≤ s ≤ t and λ_{t,s} < 0}. (2)
An evolution Λ is Markovian if and only if N_Λ is empty. The border of {s, t}_{0≤s≤t} always belongs to P_Λ: the vertical line {0, t}_{t≥0} corresponds to the (CPTP) dynamical maps and the diagonal line {t, t}_{t≥0} corresponds to the trivial intermediate (identity) maps. The pairs infinitesimally close to {t, t}_{t≥0} correspond to the infinitesimal intermediate maps V_{t+ϵ,t}, where their CPTP/non-CPTP nature can be studied through the corresponding master equation rates [17,18] (see Section 10). Instead, any point in the interior of {s, t}_{0≤s≤t} can either belong to P_Λ or N_Λ, but not every open set is allowed for N_Λ: some constraints deriving from fundamental map composition rules have to be satisfied. Finally, we prove that P_Λ is closed and N_Λ is open in Appendix A. Below, we show several representations of P_Λ and N_Λ.
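Assuming the superoperator representation of the previous sketch, λ_{t,s} can be computed by reshuffling the superoperator into its Choi matrix and taking the smallest eigenvalue:

```python
def choi_matrix(S, d=2):
    """Reshuffle a d^2 x d^2 superoperator (row-stacking) into the unnormalized
    Choi matrix J = (Lambda (x) id)(|Omega><Omega|), with |Omega> = sum_i |ii>."""
    return S.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

def min_choi_eig(S, d=2):
    """lambda_{t,s}: a Hermiticity-preserving, trace-preserving map is CPTP
    iff this smallest Choi eigenvalue is >= 0."""
    return np.linalg.eigvalsh(choi_matrix(S, d)).min()

# A CPTP map has a positive semidefinite Choi matrix ...
assert min_choi_eig(dephasing_map(0.8)) >= 0
# ... while undoing noise is not completely positive:
V = intermediate_map(dephasing_map(0.9), dephasing_map(0.8))
print(min_choi_eig(V))  # ~ -0.33 < 0: this interval can host a backflow
```

The reshuffle is convention-dependent (here it matches the row-stacking used above), but the sign of the smallest Choi eigenvalue, which is all that matters for the CPTP test, is not.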
Timing of information backflows
In this section we show how the timing of the main non-Markovian phenomena of an evolution is always ruled by three times: T_Λ, τ_Λ and t_Λ. We start by explaining their operational meaning.
It is possible to observe an information backflow during a time interval if and only if it starts later than T_Λ. Hence, there exist intervals [T_Λ + ϵ, t] when the corresponding intermediate maps are not CPTP for infinitesimal ϵ > 0. Among these time intervals, [T_Λ + ϵ, t_Λ] is the shortest, namely t = t_Λ is the earliest final time such that V_{t_Λ, T_Λ+ϵ} is not CPTP for infinitesimal ϵ > 0. Instead, τ_Λ is the earliest time when an instantaneous backflow can be observed, namely the earliest t such that V_{t+ϵ,t} is not CPTP for infinitesimal ϵ > 0. Hence, [T_Λ + ϵ, t_Λ] ([τ_Λ, τ_Λ + ϵ]) is the shortest time interval with the earliest initial (final) time when the corresponding intermediate map is not CPTP. For these reasons, we call the noise applied by the evolution in [0, T_Λ] useless for non-Markovianity, while the noise applied later than T_Λ is essential for non-Markovian phenomena. We represent the typical role of these three times in Fig. 1.

Fig. 1: Blue/red regions represent times when the infinitesimal intermediate map V_{t+ϵ,t} is CPTP/non-CPTP. The time T_Λ is the largest such that the preceding dynamics is CP-divisible and V_{t,T_Λ} is CPTP for all t ≥ T_Λ. Indeed, for t ≥ T_Λ, the information content of the system never exceeds the level at T_Λ (green area). The information lost in [0, T_Λ] is never recovered (useless noise), while the noise applied in [T_Λ, τ_Λ] is essential for the following backflows. τ_Λ is the earliest time after which we have an instantaneous backflow. We have finite backflows in intervals [s, t] with s > T_Λ, and t_Λ is the earliest t such that we have a backflow in [T_Λ + ϵ, t].
We follow by giving the mathematical definitions of T_Λ, τ_Λ and t_Λ. Given a generic evolution Λ, we define:
T_Λ = max{ T ≥ 0 : (A) V_{t,s} is CPTP for all 0 ≤ s ≤ t ≤ T; (B) V_{t,T} is CPTP for all t ≥ T; (C) Λ_T is not unitary }. (3)
We briefly discuss conditions (A), (B) and (C). Condition (A) requires the evolution to be CP-divisible before T_Λ: no non-Markovian effects can take place in [0, T_Λ]. Condition (B) requires the evolution following T_Λ to be physical on its own, namely such that the composition with the initial noise Λ_{T_Λ} is not needed for the intermediate maps {V_{t,T_Λ}}_{t≥T_Λ} to be CPTP. Finally, condition (C) is imposed because a unitary transformation is not detrimental for the information content of our system and we cannot consider it useless noise: it is "useless" (for non-Markovian phenomena) but not noisy. We remember that evolutions with dynamical maps that are unitary at all times can be simulated with closed quantum systems, and therefore we do not focus on these cases or where condition (C) is necessary. In Appendix A we show that Eq. (3) is indeed a maximum and not a supremum.
An evolution Λ is Markovian if and only if T_Λ = +∞. Indeed, all the noise applied by the evolution is not necessary for the later evolution to be physical. Markovian evolutions can be interpreted as sequences of noisy independent operations. Indeed, between any two times the dynamics is represented by a (noisy) CPTP map. Below we show that a finite value of T_Λ ≥ 0 implies the evolution to have non-CPTP intermediate maps and therefore to be non-Markovian. From now on, by T_Λ ≥ 0 we mean T_Λ ∈ [0, ∞). Hence, the time T_Λ can be used to classify quantum evolutions as follows:
• Markovian: T_Λ = +∞;
• Noisy non-Markovian (NNM): T_Λ ∈ (0, +∞);
• Pure non-Markovian (PNM): T_Λ = 0.
We follow by defining τ_Λ as the time when information begins to be instantaneously retrieved from the environment. Hence, it is defined by the earliest time when V_{T+ϵ,T} is non-CPTP for infinitesimal ϵ > 0:
τ_Λ = min{ T ≥ 0 : V_{T+ϵ,T} is non-CPTP for infinitesimal ϵ > 0 }. (4)
The time intervals with the earliest initial time such that the corresponding intermediate maps are non-CPTP are of the form [T_Λ + ϵ, t] (see Lemma 2). We define t_Λ to be the earliest final time t such that the corresponding intermediate map is non-CPTP:
t_Λ = min{ t ≥ 0 : V_{t,T_Λ+ϵ} is non-CPTP for infinitesimal ϵ > 0 }. (5)
The timing of the earliest information backflows is therefore dictated by T_Λ, τ_Λ and t_Λ, which have definite values for all non-Markovian evolutions. These three characteristic times satisfy the following reciprocal relation (proof in Appendix C):
0 ≤ T_Λ ≤ τ_Λ ≤ t_Λ. (6)
We briefly discuss the possible equalities that can hold in the above equation. T_Λ = 0 corresponds to PNM evolutions, which are largely analysed throughout this work.
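Numerically, the three characteristic times can be estimated by scanning the smallest Choi eigenvalue of the relevant intermediate maps on a time grid. The sketch below reuses the helpers from the earlier sketches; the grid, the finite ϵ and the tolerance are assumptions of this illustration, and condition (A) of Eq. (3) is only approximated by requiring T ≤ τ_Λ.

```python
def scan_characteristic_times(superop_of_t, t_grid, eps=1e-3, tol=-1e-12, d=2):
    """Rough grid estimates of T_Lambda, tau_Lambda and t_Lambda for an
    invertible evolution; superop_of_t maps a time t to the superoperator of
    Lambda_t, and t_grid is an increasing array of times starting at 0."""
    V = lambda t, s: superop_of_t(t) @ np.linalg.inv(superop_of_t(s))

    # tau_Lambda: earliest grid time with non-CPTP infinitesimal map V_{t+eps,t}
    tau = next((t for t in t_grid if min_choi_eig(V(t + eps, t), d) < tol), np.inf)

    # T_Lambda: largest grid time T with V_{t,T} CPTP for all later grid times
    # (condition (B) of Eq. (3)); condition (A) is approximated by T <= tau.
    # For a Markovian evolution this returns the last grid point, standing in
    # for T_Lambda = +infinity.
    T = max((T for T in t_grid if T <= tau
             and all(min_choi_eig(V(t, T), d) >= tol for t in t_grid if t > T)),
            default=np.nan)

    # t_Lambda: earliest final time of a backflow starting right after T_Lambda
    t_L = next((t for t in t_grid
                if t > T and min_choi_eig(V(t, T + eps), d) < tol), np.inf)
    return T, tau, t_L
```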
We mention some examples for the possible equality patterns of T_Λ, τ_Λ and t_Λ. Concerning the difference between T_Λ = 0 and T_Λ > 0, we show how to obtain PNM evolutions (T_Λ = 0) from NNM evolutions (T_Λ > 0) and vice versa.

Fig. 2: This interaction leads to information losses (blue arrows) and, in case of non-Markovian evolutions, backflows (red arrows). NNM evolutions Λ can be simulated with the following two-stage scenario. First stage (t ∈ [0, T_Λ]): the system interacts with a first environment E1 and information is lost monotonically (Markovian pre-processing). The dynamics during this first stage corresponds to the useless noise of Λ. Second stage (t > T_Λ): E1 is discarded, the system evolves while interacting with E2 and we have information backflows. The dynamics during this second stage corresponds to the PNM core Λ̃ of Λ.
The following result generalizes Lemma 2.
Proposition 1. Given a non-Markovian evolution Λ with T_Λ < t_Λ, the intermediate maps V_{t_Λ,s} are non-CPTP for all s ∈ (T_Λ, t_Λ). Moreover, if V_{t,T_Λ+ϵ} is non-CPTP for some t > t_Λ and infinitesimal ϵ > 0, then V_{t,s} is non-CPTP for all s ∈ (T_Λ, t_Λ).
Hence, not only must each non-Markovian evolution have a non-CPTP intermediate map for time intervals starting immediately after T_Λ (Lemma 2), but whenever T_Λ < t_Λ there is a whole continuum of non-CPTP intermediate maps V_{t_Λ,s} for s ∈ (T_Λ, t_Λ). Additionally, if V_{t,T_Λ+ϵ} is not CPTP for t > t_Λ, all the intermediate maps V_{t,s} with s ∈ (T_Λ, t_Λ) are non-CPTP.
In case of T_Λ = t_Λ, the infinitesimal intermediate maps V_{t+ϵ,t} are non-CPTP either for all the times inside a time interval of the type (T_Λ, T) for some T > T_Λ, or for infinite times that do not constitute an interval for any T > T_Λ. We propose an example of the latter pathological case in Appendix B. A special case is given by the eternal NM model, which has non-CPTP intermediate maps V_{t,s} for all 0 < s < t (see Section 10): T_Λ = t_Λ and it enjoys both properties described in Proposition 1.
Pure non-Markovian evolutions
We show that the initial noise that NNM evolutions apply in the time interval [0, T_Λ] is useless for the following non-Markovian effects to happen. By doing so, we prove that any NNM evolution can be simulated by a Markovian pre-processing of the input states followed by a PNM evolution. Finally, we better explain the role of PNM evolutions and we show that, if an evolution perfectly retrieves the initial information of the system, then it is PNM.
Simulation of NNM evolutions with PNM evolutions
We start by simulating NNM evolutions Λ through the subsequent interaction of the system with two different environments. We consider the Stinespring-Kraus representation theorem [9,10], which allows us to describe a continuous family of CPTP maps through the interaction of the system with an initially uncorrelated environment. Hence, we consider the system in contact with a first environment E1 in the time interval [0, T_Λ] and at later times t > T_Λ in contact with a different environment E2. Thus, consider the following two-step scenario:
• t ∈ [0, T_Λ] (Markovian pre-processing): the evolution is simulated by the interaction with E1.
A unitary transformation U'_t evolves the system-environment state, which at time 0 is in a product state (no initial system-environment correlations):
Λ_t(ρ(0)) = Tr_{E1}[ U'_t (ρ(0) ⊗ σ_{E1}) U'_t† ], t ∈ [0, T_Λ]. (7)
This simulation is possible because Λ_t is CPTP for all t ∈ [0, T_Λ]. The phenomenology during this time interval is Markovian as Λ is CP-divisible (see Eq. (3)). This stage represents the useless noise of Λ.
• t ≥ T_Λ (PNM core): the evolution is simulated by the interaction with E2. A unitary transformation U''_τ evolves the system-environment state:
ρ_{SE2}(τ) = U''_τ (ρ(T_Λ) ⊗ σ_{E2}) U''_τ†, (8)
V_{t,T_Λ}(ρ(T_Λ)) = Tr_{E2}[ ρ_{SE2}(τ) ], (9)
where τ = t − T_Λ ≥ 0. This simulation is possible because V_{t,T_Λ} is CPTP for all t ≥ T_Λ (see Eq. (3)). The phenomenology during this time interval is NM.
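A hedged numerical illustration of the dilations used in both stages: given any Kraus decomposition {K_a} of a CPTP map, the isometry V = Σ_a K_a ⊗ |a⟩_E can be extended to a unitary such as U'_t or U''_τ acting on the system and a fresh environment. The sketch below (assuming numpy and the I2/SX matrices and dephasing Kraus set from the earlier sketches) only builds the isometry and checks that tracing out the environment reproduces the channel.

```python
def stinespring_isometry(kraus):
    """V = sum_a K_a (x) |a>_E, stacked so that row block a holds K_a.
    V is an isometry (V^dag V = I) iff the Kraus set is complete."""
    return np.concatenate(list(kraus), axis=0)        # shape (n*d, d)

def apply_channel_via_dilation(kraus, rho):
    """rho -> Tr_E[ V rho V^dag ], reproducing sum_a K_a rho K_a^dag."""
    n, d = len(kraus), rho.shape[0]
    V = stinespring_isometry(kraus)
    big = (V @ rho @ V.conj().T).reshape(n, d, n, d)  # state on E (x) S
    return np.einsum('aiaj->ij', big)                 # trace out the environment

# Example with the illustrative dephasing channel used above
p = 0.8
kraus = [np.sqrt(p) * I2, np.sqrt(1 - p) * SX]
rho_plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
out = apply_channel_via_dilation(kraus, rho_plus)
# equals p*rho + (1-p)*SX rho SX; here |+><+| is a fixed point of x-dephasing
```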
As we already noticed, no information backflow can be observed in [0, T_Λ]: the phenomenology in this time interval is Markovian. Now, thanks to this two-stage simulation, we can state that the information involved in the backflows was originally lost later than T_Λ. Indeed, the non-Markovian effects of Λ do not depend on the behaviour of the dynamics in the time interval [0, T_Λ], when Λ is CP-divisible. This is the reason why we call the first stage a Markovian pre-processing and we say that Λ_t, for t ∈ [0, T_Λ], generates the useless noise of the evolution.
Conversely, the (CPTP) intermediate maps V_{t,s} for T_Λ ≤ s ≤ t generate the essential noise needed for non-Markovian phenomena. We define Λ̃ = {Λ̃_τ}_{τ≥0} to be the evolution that represents the interaction with the second environment, where:
Λ̃_τ = V_{T_Λ+τ, T_Λ}, τ ≥ 0. (10)
From the above definitions it is easy to see that the dynamical and intermediate maps of Λ̃ are connected with the intermediate maps of Λ as follows:
Ṽ_{τ,τ'} = V_{T_Λ+τ, T_Λ+τ'}, 0 ≤ τ' ≤ τ, (11)
where V_{t,s} is the intermediate map of Λ. The map Λ̃_τ is CPTP for all τ ≥ 0 (see Eq. (3)) and therefore Λ̃ is a valid evolution by itself. It is straightforward to check that T_Λ̃ = 0: Λ̃ is PNM and we call it the PNM core of Λ.
Finally, the relation between the characteristic times of Λ and Λ̃ is (see Eqs. (4), (5) and (10)):
T_Λ̃ = 0, τ_Λ̃ = τ_Λ − T_Λ, t_Λ̃ = t_Λ − T_Λ. (12)
We conclude that NNM evolutions can be simulated via a Markovian pre-processing (physically represented by Eq. (7)) followed by the action of the corresponding PNM core (physically represented by Eq. (9)):
Λ_t = Λ̃_{t−T_Λ} ∘ Λ_{T_Λ}, t ≥ T_Λ. (13)
This decomposition is depicted in Fig. 2. Naturally, different Markovian pre-processings of the same PNM core provide different NNM evolutions. We summarize the proven relations between NNM and PNM evolutions as follows:
Proposition 2. Any NNM evolution can be written as a Markovian pre-processing of a PNM evolution. Any PNM evolution defines a class of NNM evolutions given by all its possible Markovian pre-processings.
We conclude this section by discussing the non-Markovian effects of NNM evolutions Λ and their corresponding PNM cores Λ̃. Unlike before, consider both evolutions acting from the initial time. Any intermediate map of Λ taking place later than T_Λ is also present in the dynamics of Λ̃: V_{t,s} = Ṽ_{t−T_Λ, s−T_Λ} for all T_Λ ≤ s ≤ t. Hence, the two evolutions have the same non-CPTP intermediate maps. It follows that the non-Markovian effects observable with Λ can also be observed with Λ̃. Nonetheless, the states evolved by Λ receive a mitigated version of the non-Markovian effects induced by Λ̃, where the total attenuation is represented by Λ_{T_Λ}. Later, we show more precisely how this damping acts on information backflows (Section 6) and non-Markovianity measures (Section 7).
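As a minimal numerical illustration of this decomposition (a sketch we add here, not a construction from the paper), the following Python snippet assumes a qubit depolarizing evolution represented by its 4×4 transfer matrix in the Pauli basis, diag(1, f(t), f(t), f(t)), and borrows the toy characteristic function and the value T_Λ ≃ 0.275 from Section 9.2. It builds the PNM core via Eq. (10) and checks the factorization of Eq. (12).

```python
import numpy as np

# Toy characteristic function of Section 9.2 (f(0) = 1, single revival).
def f(t):
    return (1 - 3*t + 2*t**2 + 2*t**3) / (1 + t**2 + t**3 + 3*t**5)

# Depolarizing dynamical map as a Pauli-basis transfer matrix diag(1, f, f, f).
def dyn_map(t):
    return np.diag([1.0, f(t), f(t), f(t)])

T = 0.275                                 # T_Lambda of this model (Section 9.2)

def pnm_core(tau):                        # Eq. (10): core_tau = V_{tau+T, T}
    return dyn_map(tau + T) @ np.linalg.inv(dyn_map(T))

t = 0.9                                   # any t >= T_Lambda
lhs = dyn_map(t)                          # Lambda_t
rhs = pnm_core(t - T) @ dyn_map(T)        # PNM core after the pre-processing
print(np.allclose(lhs, rhs))              # True: Eq. (12) holds
```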
Features of PNM evolutions
In this section we study the differences between PNM and NNM evolutions. First, we recall that an information backflow can be obtained in a time interval if and only if the corresponding intermediate map is not CPTP [13,14]. Hence, Lemma 2 and the two-step simulation of NNM evolutions imply that "the information that a quantum system loses with a NNM evolution Λ during the initial time interval [0, T_Λ] is never retrieved". Instead, restricting Proposition 1 to PNM evolutions, we can say that "the information that is initially lost with a PNM evolution always takes part in later backflows". Indeed, as soon as a PNM evolution starts, even considering infinitesimal initial times ϵ > 0, there is at least one later time T such that an information backflow takes place in [ϵ, T].
We move our attention to an interesting class of evolutions, namely those allowing a complete information retrieval. More precisely, they first cause a partial or total degradation of the information encoded in the system and later, thanks to one or more backflows, they provide a perfect recovery of the initial information content of the system. These instances are represented by those Λ having a non-unitary dynamical map Λ_s at time s and a unitary dynamical map Λ_t = U at a later time t. Since unitary transformations do not degrade the information content of quantum systems, all these evolutions completely retrieve, in the time interval [s, t], any type of information lost in the time interval [0, s]. These evolutions are always PNM, with V_{t,s} not even positive trace-preserving (PTP) (proof in Appendix D):

Proposition 3. An evolution characterized by s < t such that Λ_s is not unitary and Λ_t is unitary is PNM, with the intermediate map V_{t,s} not even PTP.
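The following sketch (an illustration we add here, not taken from the paper) makes Proposition 3 concrete for qubit dephasing: if the coherence factor equals 1/2 at time s and returns to 1 at time t (so that Λ_t is unitary, here the identity), the intermediate map V_{t,s} doubles the coherences and maps a legitimate state to an operator with a negative eigenvalue, hence it is not even positive.

```python
import numpy as np

# Dephasing-type action: scale the off-diagonal elements of a qubit state by c.
def scale_coherences(rho, c):
    out = rho.astype(complex).copy()
    out[0, 1] *= c
    out[1, 0] *= c
    return out

rho = np.array([[0.5, 0.45], [0.45, 0.5]])   # valid state: eigenvalues 0.05, 0.95
v_rho = scale_coherences(rho, 2.0)           # V_{t,s} for c(s) = 1/2, c(t) = 1
print(np.linalg.eigvalsh(v_rho))             # [-0.4, 1.4]: V_{t,s} is not positive
```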
We end this section by clarifying the nature of PNM evolutions via some examples. First, one may think that evolutions being first contractive and then unitary are exotic. Interestingly, many of the well-known NM models have PNM cores with this property, e.g., dephasing, depolarizing and amplitude damping channels.
Secondly, having a non-unitary map at time s and a unitary map at time t > s is sufficient but not necessary for an evolution to be PNM. Hence, not all PNM evolutions present complete information retrieval. Indeed, there exist P-divisible PNM evolutions (see Section 10).
As a consequence, we might conclude that non-P-divisible PNM evolutions satisfy Proposition 3. The following counterexample proves that this is false. Consider the qubit dephasing map Λ_i(ρ) = p_i ρ + (1 − p_i) σ_i ρ σ_i, where σ_{i=x,y,z} are the Pauli operators and p_i ∈ [0, 1]. Take an evolution such that, for 0 < t_1 < t_2 < t_3, one of the corresponding intermediate maps is not even PTP. Moreover, we require Λ_t = p_x(t)ρ + (1 − p_x(t))σ_x ρ σ_x, where p_x(t) is a continuous function such that p_x(t) < 1 and decreasing in t ∈ (0, t_1). It is easy to see that this evolution is PNM: evolving a suitable initialization ρ(0) with V_{t_3,ϵ} gives an operator outside the state space, which proves that V_{t_3,ϵ} is non-CPTP for infinitesimal ϵ > 0. Nonetheless, even if at time t_3 the evolved state goes back to its initialization (ρ(0) = ρ(t_3)), we have that Λ_{t_3} ≠ I_S. In this case, the PNM Λ completely retrieves the information content of the system only if properly initialized.
Remarkably, the initial Markovian pre-processing (useless noise) of a NNM evolution may leave untouched some types of information contained in specific initializations, for which the information is instead completely restored. We now describe a NNM evolution Λ that completely recovers the information needed to distinguish two states. Consider the elements of the previous example, where now the initial noise is a dephasing along z. The initial states |0⟩⟨0| and |1⟩⟨1| get closer during the time interval [t_1, t_2] and later recover their (maximal) initial distance during the time interval [t_2, t_3]. Nonetheless, this evolution is NNM because the initial noise Λ_t for t ∈ [0, t_1] is useless (T_Λ = t_1). Indeed, the initial Markovian pre-processing Λ_{T_Λ} = Λ_{t_1} = Λ_z, although it reduces the distance of several pairs of states, e.g., |+⟩⟨+| and |−⟩⟨−|, leaves |0⟩⟨0| and |1⟩⟨1| untouched. Hence, given a NNM evolution Λ, the corresponding Markovian pre-processing may not affect the information content of some initializations. Nonetheless, the corresponding PNM core typically provides larger backflows for generic initializations. For instance, the PNM core Λ̃ of the previous example satisfies Proposition 3: at time t_3 − t_1 it corresponds to the identity map, Λ̃_{t_3−t_1} = I_S.
Non-divisible evolutions
We briefly approach the case of non-divisible evolutions, namely those for which the intermediate map V_{t,s} cannot be written for all 0 < s < t. A large part of the results presented in this work is connected with T_Λ, which in turn is strictly connected with the properties of V_{t,s}. Hence, studying the PNM core of a non-divisible NNM evolution may seem problematic.
We start by recalling that we are considering continuous evolutions (see Section 2). Moreover, continuous non-divisible evolutions must have an initial time interval of bijectivity [0, T_NB) in which the inverse Λ_t^{-1} exists [8].
Hence, we can consider intermediate maps of the form V_{t,s} = Λ_t ∘ Λ_s^{-1} for all s < T_NB. Notice that this is possible for all final times t. Moreover, we recall that invertibility implies divisibility, but the converse is not true. Therefore, any non-divisible evolution is characterized by a time T_ND, in general larger than T_NB, such that V_{t,s} can be defined for all s < T_ND. As a result, even for non-invertible evolutions there is always a finite time interval inside which we can look for T_Λ. We replace Eq. (3) with the same maximization restricted to times T < T_ND, where T_ND = ∞ for divisible evolutions. Hence, we can proceed with the PNM core extraction introduced in Section 4 whenever T_Λ < T_ND.
An example where we obtain the PNM core of a non-invertible NNM depolarizing evolution can be found in Appendix F. This example should convince the reader that, for most of the non-divisible dynamics studied in the literature, which usually are almost-always divisible [11,22], the evaluation of T_Λ requires the same computational effort needed for divisible dynamics.
Distinguishability backflows
In this section we study the relation between NNM and PNM evolutions from the point of view of information backflows, as measured by the distinguishability between pairs of evolving states. Hence, we analyse the potential of non-Markovian evolutions to make two states more distinguishable in a time interval when the corresponding intermediate map is not CPTP.
Consider the scenario where we are given one state chosen randomly between ρ_1 and ρ_2, and we have to guess which state we received. The maximum probability to correctly distinguish the two states through quantum measurements is called the guessing probability and corresponds to P_g(ρ_1, ρ_2) = 1/2 + ||ρ_1 − ρ_2||_1/4, where ||·||_1 is the trace norm. The maximum value 1 is obtained for perfectly distinguishable (orthogonal) states. Instead, the minimum value 1/2 is obtained if and only if ρ_1 and ρ_2 are identical. For the sake of simplicity, we define ||ρ_1 − ρ_2||_1 to be the distinguishability of ρ_1 and ρ_2. Consider a composite system SA with state space S(H_S ⊗ H_A), where S is evolved by Λ and A is an ancillary system. Hence, a generic initialization ρ_SA(0) ∈ S(H_S ⊗ H_A) evolves as ρ_SA(t) = Λ_t ⊗ I_A(ρ_SA(0)). Take two states ρ_SA,1(t) and ρ_SA,2(t) evolving under the same evolution. Any increase of ||ρ_SA,1(t) − ρ_SA,2(t)||_1 represents a recovery of the missing information needed to distinguish the two states and is a signature of non-Markovianity [11,23]. Indeed, this quantity is contractive under quantum channels: a CPTP intermediate map V_{t,s} implies a distinguishability degradation ||ρ_SA,1(t) − ρ_SA,2(t)||_1 ≤ ||ρ_SA,1(s) − ρ_SA,2(s)||_1. For this reason, Markovian evolutions are characterized by monotonically decreasing distinguishabilities, while non-Markovian evolutions can provide distinguishability backflows. We recall that for all bijective (or almost-always bijective) evolutions there exists a constructive method for an initial pair ρ_SA,1(0), ρ_SA,2(0) that provides a distinguishability backflow in [s, t] if and only if the corresponding intermediate map is not CPTP [11].
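As a quick numerical companion (our own sketch, with a hypothetical mixing parameter f = 0.6), the following code evaluates the guessing probability for a pair of qubit states and verifies the contractivity of their distinguishability under a CPTP depolarizing channel.

```python
import numpy as np

def trace_norm(A):
    return np.sum(np.linalg.svd(A, compute_uv=False))

rho1 = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0|
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|

D0 = trace_norm(rho1 - rho2)                 # distinguishability, here sqrt(2)
print(0.5 + D0 / 4)                          # guessing probability, ~0.854

f = 0.6                                      # CPTP depolarizing for f in [0, 1]
depol = lambda r: f * r + (1 - f) * np.eye(2) / 2
D1 = trace_norm(depol(rho1) - depol(rho2))
print(D1 <= D0, D1 / D0)                     # True, and the ratio equals f
```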
We proceed by studying in which cases and to what extent NNM evolutions damp distinguishability backflows compared with their corresponding PNM cores. We saw that the initial noise Λ_{T_Λ} of NNM evolutions is useless for non-Markovian phenomena. Now, we quantify how much Λ_{T_Λ} suppresses backflows for each specific initialization.
The results of Proposition 4 are independent of s, t and the magnitude of the corresponding distinguishability backflow. Indeed, they solely depend on the information lost after the Markovian pre-processing, namely the distinguishability at time T_Λ. Hence, the results of Proposition 4 can be directly extended to all the backflows that the same pair shows, even without knowing their magnitude and when they take place.

Corollary 2. Consider a NNM evolution Λ and its corresponding PNM core Λ̃. For any pair of states ρ_SA,1(0), ρ_SA,2(0) that are not orthogonal at time T_Λ and provide one or more distinguishability backflows when evolved by Λ, there exists a corresponding pair of orthogonal states such that, if evolved by Λ̃, each backflow is larger by a factor 2/||ρ_SA,1(T_Λ) − ρ_SA,2(T_Λ)||_1 > 1. The intermediate maps generating the backflows of the two evolutions are the same and the corresponding time intervals differ by a T_Λ shift.
Non-Markovianity measures
A non-Markovianity measure M(Λ) quantifies the non-Markovian content of evolutions, where Markovianity implies M(Λ) = 0, while M(Λ) > 0 implies Λ to be non-Markovian. As we see below, those measures that are connected with the actual time evolution of one or more states are influenced by the initial noisy action that precedes non-Markovian phenomena. Hence, in these cases PNM cores provide higher non-Markovianity measures, M(Λ) ≤ M(Λ̃): the largest values that any non-Markovianity measure of this type can assume can be obtained with PNM evolutions, and any value assumed with a NNM evolution can be matched or outperformed by a PNM evolution. The main representatives of this class of measures are defined through the collection of the information backflows obtainable with Λ, where this quantity is maximized with respect to all the possible system initializations [23,24,14,25]. In the following, we refer to these cases as flux measures. Note that there are non-Markovianity measures satisfying M(Λ) ≤ M(Λ̃) while not being flux measures, e.g., [26].
On the contrary, those measures M that solely depend on the features of intermediate maps, without considering the action of the preceding dynamics, imply M(Λ) = M(Λ̃). Indeed, a NNM evolution Λ and its corresponding PNM evolution Λ̃ have the same non-CPTP intermediate maps. The main representatives of this second class are the Rivas-Huelga-Plenio measure [17] and the k-divisibility hierarchy [27]. We underline that, while flux measures represent the amplitude of phenomena that can be observed, this is not true for this class.
Flux measures
Flux measures quantify the non-Markovian content of evolutions as follows. Pick an information quantifier and maximize the sum of all the corresponding backflows that Λ shows with respect to all the possible initializations. More precisely, consider a functional W(ρ_SA(t)) = W(Λ_t ⊗ I_A(ρ_SA)) ≥ 0 which represents the amount of information, as measured by W, contained in the evolving state.
We can also consider quantifiers with multiple input states, e.g., state distinguishability. For W to qualify as an information quantifier for S, we require it to be contractive under quantum channels on S, namely W(ρ_SA(0)) ≥ W(Λ ⊗ I_A(ρ_SA)) for all ρ_SA and CPTP Λ.
We define the information flux as

σ(Λ_t ⊗ I_A(ρ_SA)) = d/dt W(Λ_t ⊗ I_A(ρ_SA)).   (14)

Since Markovianity corresponds to CP-divisibility, Markovian evolutions imply non-positive fluxes. Instead, if σ(Λ_t ⊗ I_A(ρ_SA)) > 0, we say that the evolution of ρ_SA witnesses the non-Markovian nature of Λ through a backflow of W. Flux measures consist of the greatest amount of W that an evolution can retrieve during the evolution with respect to any initialization, namely:

M_W(Λ) = max_{ρ_SA} ∫_{σ>0} σ(Λ_t ⊗ I_A(ρ_SA)) dt,   (15)

where the maximization is performed over the whole system-ancilla state space.
With Λ̃ the PNM core of Λ and Im(Λ_t ⊗ I_A) the image of Λ at time t, namely those system-ancilla states that can be obtained as an output of Λ_t ⊗ I_A, we obtain:

M_W(Λ) = max_{ρ_SA} ∫_{σ>0, t≥T_Λ} σ(Λ_t ⊗ I_A(ρ_SA)) dt = max_{ρ_SA} ∫_{σ>0, τ≥0} σ(Λ̃_τ ⊗ I_A(Λ_{T_Λ} ⊗ I_A(ρ_SA))) dτ = max_{ρ_SA ∈ Im(Λ_{T_Λ} ⊗ I_A)} ∫_{σ>0} σ(Λ̃_τ ⊗ I_A(ρ_SA)) dτ ≤ M_W(Λ̃),   (16)

where the first equality is justified by the fact that backflows can only happen for t ≥ T_Λ (any NNM evolution is CP-divisible in [0, T_Λ]), the second equality is a simple consequence of Λ_t = V_{t,T_Λ} ∘ Λ_{T_Λ}, the third equality follows from Eq. (10) and the inequality follows from the enlargement of the maximization space.
It is interesting to understand when we can obtain M_W(Λ) < M_W(Λ̃). Consider the information quantifier D(ρ_SA,1(t), ρ_SA,2(t)) = ||ρ_SA,1(t) − ρ_SA,2(t)||_1 for a fixed ancilla A, where in this case we consider the evolution Λ. We call {ρ^i_SA,1, ρ^i_SA,2} those pairs that allow to obtain the maximum of Eq. (15). Notice that these pairs of states are always initially orthogonal: ||ρ^i_SA,1(0) − ρ^i_SA,2(0)||_1 = 2. If σ_D is the flux associated to D(ρ_SA,1(t), ρ_SA,2(t)) as in Eq. (14), we have:

M_D(Λ) = max_{ρ_SA,1, ρ_SA,2} ∫_{σ_D>0} σ_D dt.   (17)

Thanks to Corollary 2, we can prove that M_D(Λ̃) ≥ c M_D(Λ), where we set c = 2/min_i ||ρ^i_SA,1(T_Λ) − ρ^i_SA,2(T_Λ)||_1. Hence, if the information content of the pairs {ρ^i_SA,1, ρ^i_SA,2} at time T_Λ is lower than at the initial time when evolved by Λ, then M_D(Λ) < M_D(Λ̃). Moreover, the proportionality factor between the two measures is given by the states {ρ^j_SA,1, ρ^j_SA,2} that get the closest at time T_Λ. In Section 9 we explicitly evaluate M_D(Λ) and M_D(Λ̃) in the case of depolarizing evolutions and we show that, even without ancillary systems, M_D(Λ) < M_D(Λ̃) is always verified.
A second measure similar to M_W is given by [28]:

M_W,max(Λ) = max_{ρ_SA} max_{0≤s≤t} [W(ρ_SA(t)) − W(ρ_SA(s))],   (18)

which corresponds to the largest backflow of W that the dynamics is able to show in a single time interval. Finally, a third measure is [28]:

M_W,av(Λ) = max_{ρ_SA} max_t [W(ρ_SA(t)) − ⟨W(ρ_SA(t))⟩],   (19)

where ⟨W(ρ_SA(t))⟩ = t^{-1} ∫_0^t W(ρ_SA(s)) ds. This measure corresponds to the largest difference, with respect to t, between the information W at time t and its average in the interval [0, t]. Moreover, M_W,av has a precise operational meaning connected with the probability to store and faithfully retrieve information by state preparation and measurement, where an attack performed by an eavesdropper may occur. It can be proven [28] that, for any Λ and W, M_W,av(Λ) ≤ M_W,max(Λ) ≤ M_W(Λ). Similarly to Eq. (16), it is possible to demonstrate that M_W,max(Λ) ≤ M_W,max(Λ̃) and M_W,av(Λ) ≤ M_W,av(Λ̃).
Incoherent mixing measure
A second type of non-Markovianity measure corresponds to the minimal incoherent Markovian noise needed to make a non-Markovian evolution Λ Markovian [26]. In order to describe this measure, we first consider an evolution obtained as a convex combination of Λ and a generic Markovian evolution Λ_M. We consider the mixed evolution Λ^mix_p = (1 − p)Λ + pΛ_M, and define a non-Markovianity measure by looking for the minimal value of p, hence the minimal amount of Markovian noise, such that Λ^mix_p is Markovian, namely:

M_mix(Λ) = min{p ∈ [0, 1] : there exists a Markovian Λ_M such that Λ^mix_p is Markovian}.   (20)

In Appendix E we prove that M_mix(Λ) ≤ M_mix(Λ̃). Finally, in Section 9 we show that M_mix(Λ) < M_mix(Λ̃) for all NNM depolarizing evolutions Λ.
RHP measure and k-divisibility
Consider a generic PNM evolution Λ̃ and all the corresponding NNM evolutions Λ that can be obtained from Λ̃ with a Markovian pre-processing. As we saw, Λ̃ and all its corresponding Λ have the same non-CPTP intermediate maps. Therefore, the non-Markovianity measures that solely depend on the properties of non-CPTP intermediate maps, since they are not influenced by the particular (useless) noise that precedes their action, assume the same value for Λ̃ and all its corresponding Λ. This is the case of the RHP measure I(Λ) (see Eq. (4) from Ref. [17]) and the k-divisibility non-Markovian degree NMD[Λ] (see Ref. [27]): I(Λ) = I(Λ̃) and NMD[Λ] = NMD[Λ̃].
Entanglement breaking property
We call C(ρ_AB) a correlation measure for the bipartite system AB if: (i) C(ρ_AB) ≥ 0 for all ρ_AB, (ii) C(ρ_AB) = 0 for all product states ρ_A ⊗ ρ_B, (iii) C(Λ_A ⊗ Λ_B(ρ_AB)) ≤ C(ρ_AB) for all ρ_AB and CPTP maps Λ_A and Λ_B. Entanglement measures, denoted here by E, capture only non-classical correlations. Indeed, they satisfy the additional property, stronger than (iii), of being non-increasing under local operations assisted by classical communication (LOCC). As a consequence, E(ρ_AB) = 0 for all separable states, namely those that can be written as statistical mixtures of product states: ρ_AB = Σ_i q_i ρ^i_A ⊗ ρ^i_B with q_i ≥ 0 and Σ_i q_i = 1. We discuss how the link between NNM and PNM evolutions behaves with respect to the entanglement breaking (EB) property. A quantum channel Λ_S is EB if it destroys the entanglement of any input state, namely if Λ_S ⊗ I_A(ρ_SA) is separable for all ρ_SA. Consider a generic Λ. We say that it is EB if there exists a time t_EB,Λ > 0 such that Λ_t is EB for all t ≥ t_EB,Λ. Take a PNM evolution Λ̃ and a NNM evolution Λ that can be obtained with a Markovian pre-processing of Λ̃. This pre-processing cannot increase the amount of entanglement of any state. Hence, if Λ̃ is EB, then Λ is EB. Nonetheless, in case both Λ and Λ̃ are EB, there is no general order for the corresponding EB times: t_EB,Λ > t_EB,Λ̃ and t_EB,Λ < t_EB,Λ̃ are both possible.
We have to keep in mind that, if a generic NNM evolution Λ is EB, we cannot immediately say anything about the EB nature of Λ̃ and we must study the particular dynamics in more detail. Indeed, it is easy to find NNM evolutions Λ with EB useless noise Λ_{T_Λ}, where the corresponding PNM core Λ̃ is not EB. Also, there exist cases where the Markovian pre-processing Λ_{T_Λ} is not EB, the PNM core Λ̃ is not EB, but the corresponding NNM evolution Λ is EB.
Activation of correlation backflows
We now discuss a technique focused on entanglement revivals, which can be easily generalized to other correlation measures. The isolation of the PNM core of a NNM evolution may lead to the activation of entanglement backflows. Take a bipartite system SA, where S is evolved by Λ and A is an ancilla. Whenever we have a backflow of E, the same backflow can also be observed with Λ̃, namely the corresponding PNM evolution. Moreover, as we saw for the corresponding flux non-Markovianity measure, M_E(Λ) ≤ M_E(Λ̃). What is interesting is the possibility to activate backflows of entanglement through the isolation of the PNM core, namely when M_E(Λ) = 0 and M_E(Λ̃) > 0. This scenario is made possible when Λ is EB, the corresponding non-CPTP intermediate maps V_{t,s} take place only for t_EB,Λ ≤ s < t, and the corresponding PNM core Λ̃ is not EB. In this case, when an entangled state is evolved by Λ and a non-CPTP intermediate map takes place, all the entanglement has already been destroyed and no backflows are possible. Instead, for a system evolving under Λ̃, when the (same) non-CPTP intermediate map takes place the entanglement can be non-zero and backflows are allowed.
Whenever a non-Markovian evolution does not provide correlation backflows, additional ancillary degrees of freedom can activate the possibility to observe backflows. This phenomenon has already been studied for entanglement [29,30] and Gaussian steering [30]. For instance, instead of evaluating entanglement between S and A, we would need to evaluate it between SA′ and A, where A′ is a second ancilla. Hence, our construction allows a different strategy to obtain correlation backflows in those situations where an SA setup does not show any: instead of implementing additional ancillary systems, which may result in an experimental setup that is more demanding to handle or not even realisable with the available tools, we can simply consider the PNM core of the studied evolution.
Depolarizing model
We apply our results to a simple model, the depolarizing evolution. Starting from a generic NNM depolarizing evolution Λ, we show how to find T_Λ, τ_Λ, t_Λ and the corresponding PNM evolution Λ̃, and we calculate the gains in terms of information backflows and non-Markovianity measures that Λ̃ provides with respect to Λ. We conclude by applying our technique to an explicit toy model. Moreover, we show how our approach can be directly applied to non-bijective depolarizing evolutions in Appendix F.
We define a generic depolarizing evolution Λ through the corresponding dynamical map, namely

Λ_t(ρ_S) = f(t) ρ_S + (1 − f(t)) 1_S/d,

where d is the dimension of the system S, I_S is the identity map and 1_S/d is the maximally mixed state [26]. The behaviour of the evolution is determined by the characteristic function f(t). The dynamical maps Λ_t are CPTP, continuous in time and such that Λ_0 = I_S if and only if f(t) is continuous, f(0) = 1 and f(t) ∈ [−1/(d² − 1), 1] for all t ≥ 0. For the sake of simplicity, from now on we restrict our attention to depolarizing evolutions with f(t) ∈ [0, 1] for all t ≥ 0. Those cases of f(t) assuming negative values necessitate a simple generalization of the techniques used here. An in-depth analysis of depolarizing evolutions with f(t) ∈ [−1/(d² − 1), 1] can be found in Ref. [26]. The evolution Λ is invertible if and only if f(t) > 0 at all times. Indeed, f(t_NB) = 0 implies that every initial state is mapped into the same (maximally mixed) state: Λ_{t_NB}(ρ_S(0)) = 1_S/d. In this case Λ_{t_NB} is non-invertible and V_{t,t_NB} cannot be defined. The interpretation of depolarizing evolutions is straightforward: at time t each state is mixed with the maximally mixed state 1_S/d with a ratio given by f(t). The larger f(t) is, the closer ρ_S(t) is to the initial state ρ_S(0). Moreover, this contraction towards the maximally mixed state is symmetric in the state space. Indeed, for any two initial states ρ_S,1(0) and ρ_S,2(0) evolving under Λ we have:

||ρ_S,1(t) − ρ_S,2(t)||_1 = f(t) ||ρ_S,1(0) − ρ_S,2(0)||_1.   (22)

The intermediate map corresponding to the depolarizing evolution during a generic time interval [s, t] assumes the same form as a depolarizing dynamical map, namely:

V_{t,s}(ρ_S) = (f(t)/f(s)) ρ_S + (1 − f(t)/f(s)) 1_S/d.

Hence, the CPTP condition for V_{t,s} coincides with f(t) ≤ f(s) for s ≤ t. Similarly, the infinitesimal intermediate map V_{t+ϵ,t} is CPTP if and only if f′(t) ≤ 0. Indeed, Markovian depolarizing evolutions have non-increasing characteristic functions. The Choi state of V_{t,s} is V_{t,s} ⊗ I_S(|Φ⟩⟨Φ|), where |Φ⟩ is a maximally entangled state, and it is non-negative if and only if V_{t,s} is CPTP; for qubits, its lowest eigenvalue is λ_{t,s} = (1 − f(t)/f(s))/4, so V_{t,s} is CPTP if and only if λ_{t,s} ≥ 0. Thanks to the evaluation of λ_{t,s}, we are able to obtain P_Λ and N_Λ, the collection of time pairs {s, t} such that V_{t,s} is, respectively, CPTP and non-CPTP (see Eqs. (1) and (2)). The same analysis can be performed for the corresponding PNM core Λ̃.
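A direct way to check these statements numerically is to build the Choi operator of a depolarizing map with ratio r = f(t)/f(s) and inspect its lowest eigenvalue; the sketch below (ours, for d = 2 and two hypothetical ratios) reproduces λ_{t,s} = (1 − r)/4.

```python
import numpy as np

def choi_depolarizing(r, d=2):
    # Choi operator (V ⊗ I)(|Phi><Phi|) of rho -> r*rho + (1 - r)*identity/d:
    # it equals r |Phi><Phi| + (1 - r) identity / d^2, |Phi> maximally entangled.
    phi = np.eye(d).reshape(d * d) / np.sqrt(d)
    return r * np.outer(phi, phi) + (1 - r) * np.eye(d * d) / d**2

for r in (0.8, 1.3):                         # f(t) <= f(s) vs f(t) > f(s)
    lam = np.linalg.eigvalsh(choi_depolarizing(r)).min()
    print(r, lam)                            # lam = (1 - r)/4: 0.05 and -0.075
```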
Non-Markovian depolarizing evolutions have non-monotonic characteristic functions. An increase of f(t) in a given time interval corresponds to a non-CPTP intermediate map. Moreover, in the same time interval the trace distance between any two states increases, namely a distinguishability backflow. The largest distinguishability backflows are provided by initially orthogonal states, for which the trace distance is equal to 2f(t) (see Eq. (22)). We consider the flux non-Markovianity measure M_D in the case of no ancillary systems (see Eq. (17)):

M_D(Λ) = Σ_i 2[f(t_fin,i) − f(t_in,i)] = 2∆,

where ρ_S,1, ρ_S,2 are any two orthogonal states, (t_in,i, t_fin,i) is the i-th time interval when f′(t) > 0 and ∆ > 0 is the sum of all the revivals of f(t). Finally, the non-Markovianity measure given in Eq. (20) is equal to M_mix(Λ) = ∆/(1 + ∆) [26].
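For a depolarizing model these two measures can be evaluated directly from f(t); the sketch below (ours) integrates the positive part of the flux of W = 2f(t) on a grid, using the toy characteristic function of the next subsection, and recovers M_D = 2∆ and M_mix = ∆/(1 + ∆).

```python
import numpy as np

def f(t):   # toy model of Section 9.2
    return (1 - 3*t + 2*t**2 + 2*t**3) / (1 + t**2 + t**3 + 3*t**5)

ts = np.linspace(0.0, 20.0, 400001)
W = 2 * f(ts)                        # distinguishability of orthogonal states
gains = np.diff(W)
M_D = gains[gains > 0].sum()         # integral of the positive flux = 2*Delta
Delta = M_D / 2
print(M_D, Delta / (1 + Delta))      # ~0.328 and ~0.141 (M_D and M_mix)
```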
Backflows timing and PNM core
We are ready to evaluate T_Λ, τ_Λ and t_Λ. We can rewrite Eqs. (3), (4) and (5) in terms of f(t) and f′(t) as follows:

T_Λ = max{T | (A) f(t) ≤ f(s) for all 0 ≤ s ≤ t ≤ T, (B) f(t) ≤ f(T) for all t ≥ T},   (24)
τ_Λ = inf{t | f′(t) > 0},
t_Λ = inf{t | f(t) > f(T_Λ + ϵ) for infinitesimal ϵ > 0} = inf{t | f(t) = f(T_Λ)},

where the last equality holds because Eq. (24) implies that f(T_Λ) > f(T_Λ + ϵ) for infinitesimal ϵ > 0. As expected, condition (A) of Eq. (24) implies that Λ behaves as a Markovian depolarizing evolution in the time interval [0, T_Λ]. Secondly, by considering (A) and (B) together, we can state that f(T_Λ) ∈ (0, 1). As discussed in Section 5, in case Λ is non-invertible and t_NB is the earliest time when f(t_NB) = 0, we should add to Eq. (24) the constraint T_Λ < t_NB. Anyway, as we show in Appendix F, even without imposing such a constraint, T_Λ < t_NB. Generic non-Markovian evolutions are characterized by 0 ≤ T_Λ ≤ τ_Λ ≤ t_Λ (see Eq. (6)). Nonetheless, depolarizing evolutions always satisfy 0 ≤ T_Λ < τ_Λ < t_Λ. We obtain the PNM core of a NNM depolarizing evolution by exploiting the method presented in Section 4.1. Hence, if we apply Eq. (10) to the intermediate maps of a NNM depolarizing evolution Λ characterized by f(t), we obtain the PNM depolarizing evolution Λ̃ characterized by f̃(t) = f(t + T_Λ)/f(T_Λ) (see Figure 3).

Figure 3: Top left: the characteristic function f(t) of the NNM depolarizing evolution Λ. The evolution in [0, T_Λ] is a Markovian pre-processing. Indeed, all the states get closer and this noise is not needed for later increases: f(t) cannot increase in time intervals starting before T_Λ. The NM nature of f(t) is shown immediately after. The first time after which f(t) increases instantaneously is τ_Λ. The shaded region represents the PNM core damped by the Markovian pre-processing. Top right: D(Λ̃_t(ρ_S,1), Λ̃_t(ρ_S,2))/2 = f̃(t), where ρ_S,1,2 are orthogonal and Λ̃ is the PNM core of Λ, the depolarizing evolution given by f̃(t) = f(t + T_Λ)/f(T_Λ), which increases in time intervals starting immediately after the initial time: f̃(t_Λ̃) − f̃(s) > 0 for all s ∈ (0, t_Λ̃). Since f̃(t_Λ̃) = 1, we have Λ̃_{t_Λ̃} = I_S: all the initial information is restored. The total increase of f̃(t) is ∆̃ = f̃(t_Λ̃) − f̃(τ_Λ̃) = ∆/f(T_Λ) ≃ 0.49. Bottom left: P_Λ and N_Λ, the sets of time pairs {s, t} when V_{t,s} is respectively CPTP and non-CPTP. Lighter blue (darker red) colours correspond to larger (larger in modulo) positive (negative) values of λ_{t,s} = (1 − f(t)/f(s))/4, the lowest eigenvalue of the Choi operator of V_{t,s}. The minimum is obtained for λ_{t_Λ,τ_Λ} ≃ −0.241 (f(t) shows its largest increase in [τ_Λ, t_Λ]). Along the s = t line λ_{t,s} = 0 and the adjacent points correspond to the infinitesimal intermediate maps V_{t+ϵ,t}, which are non-CPTP if and only if f′(t) > 0. Any NNM evolution (T_Λ > 0) has P_Λ that contains all the points {s, t} for s ≤ T_Λ (dotted region): T_Λ is the largest T such that {s, t} ∈ P_Λ for all s ≤ T. Bottom right: P_Λ̃ and N_Λ̃. The different shades represent different values of λ̃_{t,s}, the lowest eigenvalue of the Choi operator of Ṽ_{t,s}. Since Ṽ_{t,s} = V_{t+T_Λ,s+T_Λ}, the minimum is λ̃_{t_Λ̃,τ_Λ̃} ≃ −0.241.
Since f̃(t) is a valid characteristic function (f̃(t) ∈ [0, 1] and f̃(0) = 1) and T_Λ̃ = 0, the corresponding dynamical maps are:

Λ̃_t(ρ_S) = f̃(t) ρ_S + (1 − f̃(t)) 1_S/d.

The NNM evolution Λ can be expressed as a first time interval of Markovian pre-processing, expressed by Λ_t for t ∈ [0, T_Λ], followed by the action of the PNM evolution Λ̃ (see Eq. (12)). As we explained, Λ̃ is nothing but Λ without the resultant of its Markovian pre-processing Λ_{T_Λ}, which not only is useless for the appearance of non-Markovian phenomena but also damps information backflows. Indeed, we can apply Corollary 2 and conclude that whenever we can obtain a distinguishability backflow with Λ in a time interval [s, t], we can observe a backflow with Λ̃ in the time interval [s − T_Λ, t − T_Λ], where the proportionality factor between the two revivals is 1/f(T_Λ) > 1. As expected, Λ̃ is characterized by larger non-Markovianity measures than Λ:

M_D(Λ̃) = 2∆̃ = M_D(Λ)/f(T_Λ) > M_D(Λ),   (27)
M_mix(Λ̃) = ∆̃/(1 + ∆̃) = ∆/(∆ + f(T_Λ)) > M_mix(Λ).   (28)

It can be proven that similar results hold true for the measures M_W,max and M_W,av (see Eqs. (18) and (19)). We conclude by noticing that all PNM depolarizing evolutions Λ̃ completely retrieve the initial information of the system at time t_Λ̃. In particular, all PNM depolarizing evolutions satisfy the conditions of Proposition 3, where Λ̃_{t_Λ̃} = I_S. This result follows from the observation that f(t_Λ) = f(T_Λ), and therefore all PNM depolarizing evolutions are such that f̃(t_Λ̃) = f̃(0) = 1. Notice that t_Λ̃ may be divergent.
Example
We show how to apply our results to a simple characteristic function f(t) representing a NNM depolarizing evolution Λ. The toy model considered here is given by f(t) = (1 − 3t + 2t² + 2t³)/(1 + t² + t³ + 3t⁵) (see Figure 3): a continuous function with a single time interval of increase and an infinitesimal asymptotic behaviour. We start by calculating the times T_Λ, τ_Λ and t_Λ. Hence, we consider the sets P_Λ and N_Λ, the sets containing the pairs of times {s, t} such that V_{t,s} is respectively CPTP and non-CPTP (see Eqs. (1) and (2)). We can obtain these sets by noticing that the smallest eigenvalue of the Choi operator of V_{t,s} is λ_{t,s} = (1 − f(t)/f(s))/4. Standard numerical methods lead to T_Λ ≃ 0.275, τ_Λ ≃ 0.495 and t_Λ ≃ 1.040. It is possible to have increases of f(t) only in time intervals [s, t] starting later than T_Λ. Moreover, as explained by Proposition 1, these increases take place for a continuum of initial times: f(t_Λ) − f(s) > 0 for all s ∈ (T_Λ, τ_Λ). Instead, if we consider an initial time s sooner than T_Λ, the characteristic function cannot increase: f(t) − f(s) < 0 for s < T_Λ and s < t. The time τ_Λ is the first time after which f′(t) > 0. Moreover, f′(t) > 0 only for t ∈ (t_in, t_fin) = (τ_Λ, t_Λ), where the total revival is ∆ = f(t_Λ) − f(τ_Λ) ≃ 0.164. We now analyse f(t) from the point of view of information backflows. The characteristic function f(t) is directly connected with the time-dependent distinguishability D(ρ_S,1(t), ρ_S,2(t)) of two states evolving under Λ (see Eq. (22)). In the first time interval [0, T_Λ] information is lost and never recovered. Indeed, we called this noise useless for non-Markovian phenomena and the resultant noise Λ_{T_Λ} represents a Markovian pre-processing. As discussed above, the damping of the initial Markovian pre-processing is quantified by f(T_Λ) ≃ 0.334. In the time interval [T_Λ, τ_Λ] the system keeps losing information. Differently from the noise in [0, T_Λ], this noise is essential for the following non-Markovian phenomena. Indeed, we have increases f(t_Λ) − f(s) > 0 for all the intervals [s, t_Λ] with s ∈ (T_Λ, τ_Λ). The maximum information backflow is obtained in [τ_Λ, t_Λ], when the system recovers information from the environment at all times (f′(t) > 0). Moreover, at time t_Λ, the system goes back to the state assumed at time T_Λ (f(t_Λ) = f(T_Λ)), namely when the useless noise ended and the essential noise started.
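The quoted values can be reproduced with a few lines of numerics (our own sketch, applying Eq. (24) on a fine grid): τ_Λ is the first instant with f′ > 0, f(T_Λ) equals the largest value that f reaches after τ_Λ, and t_Λ is the first later time at which f climbs back to f(T_Λ).

```python
import numpy as np

def f(t):
    return (1 - 3*t + 2*t**2 + 2*t**3) / (1 + t**2 + t**3 + 3*t**5)

ts = np.linspace(0.0, 10.0, 2000001)
fs = f(ts)
df = np.gradient(fs, ts)

tau = ts[np.argmax(df > 0)]                      # tau_Lambda ~ 0.495
peak = fs[ts >= tau].max()                       # f(t_Lambda) = f(T_Lambda) ~ 0.334
T = ts[np.argmax(fs <= peak)]                    # T_Lambda ~ 0.275 (f decreasing here)
late = ts >= tau
t_rev = ts[late][np.argmax(fs[late] >= peak - 1e-9)]   # t_Lambda ~ 1.040
print(T, tau, t_rev, peak - f(tau))              # last number: Delta ~ 0.164
```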
The characteristic function of the corresponding PNM core Λ̃ is f̃(t) = f(t + T_Λ)/f(T_Λ) (see Figure 3). We use Eq. (11) and get the characteristic times τ_Λ̃ ≃ 0.220 and t_Λ̃ ≃ 0.765 (T_Λ̃ = 0 because Λ̃ is PNM). The total increase of f̃(t) is ∆̃ = ∆/f(T_Λ) ≃ 0.491. If we compare the non-Markovian effects of Λ and Λ̃, any distinguishability backflow is amplified by a factor 1/f(T_Λ) ≃ 2.990 (see Corollary 2) and through Eqs. (27) and (28) we can evaluate the values of the corresponding non-Markovianity measures: M_D(Λ) = 2∆ ≃ 0.328 and M_mix(Λ) ≃ 0.141, while M_D(Λ̃) = 2∆̃ ≃ 0.982 and M_mix(Λ̃) ≃ 0.329. The main qualitative difference between Λ and the corresponding PNM core Λ̃ is the presence of a time when all the initial information is recovered. If the system is evolved by Λ̃, any possible type of information is completely recovered to its original value at time t_Λ̃. Indeed, f̃(t_Λ̃) = 1 and the dynamical map at this time is equal to the identity, namely Λ̃_{t_Λ̃} = I_S. For instance, any pair of initially orthogonal states {Λ̃_t(ρ_S,1), Λ̃_t(ρ_S,2)} goes from being perfectly distinguishable, to non-perfectly distinguishable for any t ∈ (0, t_Λ̃), and then back to perfectly distinguishable at time t_Λ̃. As noticed above, all PNM depolarizing evolutions completely restore the initial information content of the system at time t_Λ̃, namely f̃(t_Λ̃) = 1 for all PNM depolarizing Λ̃. Finally, we can see how the initial noise in this dynamics is essential for the following non-Markovian phenomena to happen. Indeed, as soon as we take a non-zero time s ∈ (0, t_Λ̃), we have a distinguishability backflow in the time interval [s, t_Λ̃].
Quasi-eternal non-Markovianity
We briefly introduce a qubit model to show the existence of evolutions with T_Λ < τ_Λ < t_Λ = ∞ and T_Λ = τ_Λ = t_Λ. The example dynamics are taken from the family of quasi-eternal non-Markovian evolutions [22], which generalize the well-known qubit eternal non-Markovian model [19-21]. First, we define Pauli evolutions as those having dynamical maps of the following form:

Λ_t(ρ) = Σ_{i=0,x,y,z} p_i(t) σ_i ρ σ_i,

where σ_{x,y,z} are the Pauli operators, σ_0 = 1, and p_0(t) = 1 − p_x(t) − p_y(t) − p_z(t). The Pauli map is CPTP if and only if p_{0,x,y,z}(t) ≥ 0. The easiest way to appreciate the non-Markovian features of Pauli evolutions is to study the corresponding master equation, namely the first-order differential equation defining the evolution of the system density matrix:

dρ(t)/dt = Σ_{i=x,y,z} γ_i(t) (σ_i ρ(t) σ_i − ρ(t)),   (29)

where γ_i(t) are time-dependent real functions. It can be proven that γ_i(t) ≥ 0 for all i = x, y, z and t ≥ 0 if and only if the corresponding evolution Λ is Markovian [18]. Moreover, if γ_i(t) + γ_j(t) ≥ 0 for all i ≠ j and t ≥ 0, the evolution is P-divisible, namely V_{t,s} is at least P (but not necessarily CP) for all s ≤ t. The probabilities (30) and the rates (31) that define the quasi-eternal model generate maps Λ_t that are CPTP at all times if and only if α > 0 and t_0 ≥ t_0,α = max{0, (log(2^{1/α} − 1))/2}, where t_0,α > 0 for α ∈ (0, 1) and t_0,α = 0 for α ≥ 1 [22]. We call quasi-eternal non-Markovian the Pauli evolutions defined by the probabilities (30), or equivalently the solution of the master equation (29) with rates (31), where t_0 ≥ t_0,α. These evolutions are P-divisible and, since γ_z(t) < 0 for t > t_0, the infinitesimal intermediate maps are non-CPTP for all t > t_0.
The intermediate map of a Pauli evolution assumes the Pauli form V_{t,s}(·) = Σ_{i=0,x,y,z} p_i(s,t) σ_i(·)σ_i. Notice that, as for any Pauli channel, the intermediate map V_{t,s} is CPTP if and only if p_{0,x,y,z}(s,t) ≥ 0. The lowest eigenvalue of the Choi state of V_{t,s} is λ_{t,s} = p_z(s,t). In Figure 4 we represent P_Λ and N_Λ for three PNM evolutions from this family, namely the collection of time pairs {s, t} such that V_{t,s} is respectively CPTP and non-CPTP. We see that for α ∈ (0, 1) we have T_Λ < τ_Λ = t_Λ, while for α ≥ 1 we have T_Λ = τ_Λ = t_Λ. We prove that T_Λ = t_0 − t_0,α and τ_Λ = t_0 (see Appendix G). The latter result is a direct consequence of the form of the master equation, which has negative rates if and only if t > t_0. Indeed, V_{t+ϵ,t} is CPTP for infinitesimal ϵ if and only if γ_{x,y,z}(t) ≥ 0.
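To make the CPTP test for intermediate Pauli maps concrete, the following sketch (ours) takes the representative eternal model, assuming the rates γ_x = γ_y = 1/2 and γ_z(t) = −tanh(t)/2 in the convention of Eq. (29) as written above. With these rates the Pauli eigenvalues of Λ_t are λ_x = λ_y = e^{−t} cosh t and λ_z = e^{−2t}, so that p_z(t) = 0 exactly (the dynamical maps sit on the CPTP boundary), while p_z(s, t) of the intermediate maps turns negative.

```python
import numpy as np

def pauli_eigs(t):                    # (lambda_0, lambda_x, lambda_y, lambda_z)
    lx = np.exp(-t) * np.cosh(t)
    return np.array([1.0, lx, lx, np.exp(-2 * t)])

S = np.array([[1,  1,  1,  1],        # sign matrix linking Pauli probabilities
              [1,  1, -1, -1],        # p_i and map eigenvalues: lambda = S p,
              [1, -1,  1, -1],        # hence p = S lambda / 4 since S @ S = 4 I
              [1, -1, -1,  1]])

def p_intermediate(s, t):             # Pauli probabilities of V_{t,s}
    return S @ (pauli_eigs(t) / pauli_eigs(s)) / 4

print(p_intermediate(0.0, 1.0))       # dynamical map: p_z = 0, CPTP (boundary)
print(p_intermediate(0.5, 1.0))       # p_z < 0: this intermediate map is non-CPTP
```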
Interestingly, we can appreciate a peculiar scenario for α > 1, where we obtain a CPTP map through the composition of non-CPTP maps. Without loss of generality, we fix t_0 = t_0,α = 0. There exist initial times s′ > 0 such that V_{t,s′} is non-CPTP for all t ∈ (s′, t′), while V_{t,s′} is CPTP for all t ≥ t′. Notice that, since γ_z(t) < 0 for all t > 0, V_{t+ϵ,t} is non-CPTP for infinitesimal ϵ > 0 and all t > 0. Therefore, if we consider t_1 < t′ < t_2, we have that V_{t_1,s′} is non-CPTP and V_{t_2,s′} is CPTP. The latter map can be obtained via the composition of V_{t_1,s′} with infinitesimal intermediate maps as follows: V_{t_2,s′} = V_{t_2,t_1} ∘ V_{t_1,s′}, namely the CPTP map V_{t_2,s′} is obtained by composing infinitesimal non-CPTP maps V_{t+ϵ,t} with the non-CPTP intermediate map V_{t_1,s′}. The composition of the infinitesimal intermediate maps corresponds to V_{t_2,t_1}, which, depending on t_1 and t_2, can be either CPTP or not.
Finally, a simple variation of this model leads to a trivial example of T_Λ < τ_Λ = t_Λ, where we exploit condition (C) of Eq. (3). Consider an evolution that is unitary in an initial time interval, namely Λ_t = U_t is unitary for t ∈ [0, t_U], and later behaves as an eternal PNM evolution with α > 1 and t_0 = 0. Such an evolution would, for instance, be given by integrating Eq. (29) with the rates (31) multiplied by the step function θ(t − t_U) and time-shifted by t_U, where θ(x) = 1 for x ≥ 0 and it is zero-valued otherwise. Indeed, in [0, t_U] the evolution would correspond to the identity and 0 = T_Λ < τ_Λ = t_Λ = t_U.
Discussion
We studied the difference between two types of initial noise in non-Markovian evolutions, where essential noise makes the system lose the same information that takes part in later backflows, while the information lost with useless noise is never recovered. Indeed, this last type of noise can be compared to a Markovian pre-processing of the system. We identified as PNM those evolutions showing only essential noise, while NNM evolutions have both types of noise. We proved that any NNM evolution can be simulated as a Markovian pre-processing, which generates the useless noise, followed by a PNM evolution, which represents the (pure) non-Markovian core of the evolution. In order to distinguish between PNM and NNM, we introduced a temporal framework that aims to describe the timing of fundamental non-Markovian phenomena. We identified the most distinguishable classes arising from this framework, where PNM and NNM evolutions fit naturally. Moreover, several mathematical features connected with this classification have been identified.
Later, we focused on the phenomenological side of this topic. Indeed, we addressed the problem of finding which backflows and non-Markovianity measures are amplified when PNM evolutions are compared with their corresponding noisy versions, proposing constructive and measurable results within the context of state distinguishability. We studied how the entanglement breaking property is lost or preserved when we compare PNM cores and their corresponding NNM evolutions. Moreover, we discussed the possibility to activate correlation backflows when we extract the PNM core out of NNM evolutions. Through several examples we showed how to extract PNM cores, clarified the possible scenarios concerning the timings of non-Markovian phenomena, and explained why useless noise has the sole role of suppressing the backflows generated by the PNM core. Finally, it would be interesting to study a further classification that distinguishes between the classical and the quantum content of useless noise, essential noise and, more importantly, information backflows. Some dynamical models, such as dephasing and amplitude damping, have PNM cores that go from being non-unitary to unitary, i.e., satisfy Proposition 3. Nonetheless, not all PNM evolutions satisfy this property (see Section 4.2). It would be interesting to study which are the minimal conditions under which a given class of evolutions has PNM cores satisfying Proposition 3. A reasonable class could be given by the one-parameter evolutions, as described in Ref. [22], namely those with a single rate in the corresponding Lindblad master equation. More generally, concerning the possibility to lose and completely recover some type of information, it is crucial to understand whether PNM evolutions always enjoy this property, i.e., "if an evolution is PNM, there exists an initialization that during the dynamics loses and then completely retrieves the information content for at least one quantifier".
We generalized the definition of T_Λ to non-divisible dynamics. We believe that the extraction of the PNM core of most of the well-known non-divisible models can be obtained within this framework (see Appendix F for the study of a non-divisible depolarizing model). Nonetheless, it would be interesting to understand whether and how a PNM core can be extracted when T_Λ = T_ND, with T_ND being the non-divisibility starting time. For instance, image non-increasing and kernel non-decreasing NNM evolutions Λ may lead to this exotic scenario [8]. If Λ shows non-Markovian effects after a dimensionality reduction of the state space, the PNM core extraction would acquire a more abstract meaning. Indeed, we may require Λ̃ to act on a space with a higher dimensionality compared to the space of states targeted by the non-Markovian effects of Λ.
We analysed when and to what extent distinguishability backflows are amplified by PNM cores. Moreover, we gave a constructive method to build the states that provide the largest backflows. It would be interesting to understand whether this approach can be generalized to other quantifiers, e.g., distinguishability of state ensembles [13], Fisher information [31,32] and correlations [22,29,30]. Another interesting topic would be to understand whether PNM evolutions can lead to the activation of other non-Markovian phenomena, as discussed in the context of correlation backflows.
We saw in Section 7 that PNM evolutions have non-Markovianity measures that cannot be smaller than those of the associated NNM evolutions. Moreover, we gave conditions under which PNM evolutions have strictly larger distinguishability measures. Understanding in which other cases and to what extent this strict inequality can be obtained with other information quantifiers and other non-Markovianity measures is interesting.

{γ_x(t), γ_y(t), γ_z(t)} = {1, 1, −sin(1/t) tanh(t)}. This is the case because V_{t+ϵ,t} is non-CPTP at time t if and only if γ_z(t) < 0, and −sin(1/t) tanh(t) does not have a definite sign in any time interval (0, T).
□
C Proof that T_Λ ≤ τ_Λ ≤ t_Λ and more

The time T_Λ cannot be larger than τ_Λ, or we would have a violation of condition (A) from Eq. (3). Since τ_Λ is defined through the infimum, T_Λ and τ_Λ may coincide, but in this case V_{τ_Λ+ϵ,τ_Λ} = V_{T_Λ+ϵ,T_Λ} has to be CPTP for all ϵ > 0, otherwise condition (B) for T_Λ would be violated. Hence, when T_Λ = τ_Λ, Eq. (5) is not a minimum. Now we prove τ_Λ ≤ t_Λ by showing that a violation of this inequality leads to a contradiction. If t_Λ < τ_Λ, we would have that V_{t_Λ,T_Λ+ϵ} is not CPTP while at the same time V_{τ_Λ+ϵ,τ_Λ} should be the earliest non-CPTP map for an infinitesimal time interval. These two statements are in contradiction because, if V_{t_Λ,T_Λ+ϵ} is not CPTP, there must be an infinitesimal time interval [t_1, t_1 + ϵ] contained in [T_Λ + ϵ, t_Λ] such that V_{t_1+ϵ,t_1} is not CPTP (see below). Hence, in this case we would have T_Λ + ϵ ≤ t_1 ≤ t_Λ < τ_Λ. This contradicts Eq. (4), which defines τ_Λ as the earliest time t for non-CPTP V_{t+ϵ,t}.
Hence, we only need to prove that if [s, t] is a time interval where V_{t,s} is not CPTP, then there exists an infinitesimal time interval [t_1, t_1 + ϵ] such that V_{t_1+ϵ,t_1} is not CPTP. For any ϵ > 0, we can split [s, t] in subintervals of width ϵ and consider the composition V_{t,s} = V_{t,t−ϵ} ∘ ⋯ ∘ V_{s+2ϵ,s+ϵ} ∘ V_{s+ϵ,s}. Since the composition of CPTP maps is CPTP, if V_{t,s} is not CPTP there must be at least one infinitesimal subinterval [t_1, t_1 + ϵ] such that V_{t_1+ϵ,t_1} is not CPTP. Now, we prove that T_Λ = τ_Λ implies T_Λ = τ_Λ = t_Λ. From the definition of τ_Λ given in Eq. (4), it follows that there exists δ̄ > 0 such that, for all δ ∈ (0, δ̄), the intermediate map V_{T_Λ+δ+ϵ,T_Λ+δ} is not CPTP for all ϵ ∈ (0, ϵ^(δ)), where ϵ^(δ) > 0 depends on δ. On the other hand, t_Λ,δ = inf{T | V_{T,T_Λ+δ} is not CPTP}. Hence, this infimum is given by T = T_Λ + δ, and therefore t_Λ = T_Λ.
D Proof of Proposition 3
We start by considering the case where Λ is not characterized by an initial time interval [0, δ) when the corresponding dynamical maps are all unitary. Λ_s is not unitary for s ∈ (0, δ) and Λ_t = U is unitary for some t ≥ δ. We can write the intermediate map starting from an infinitesimal time to t as V_{t,ϵ} = U ∘ Λ_ϵ^{-1}, which is not CPTP for all ϵ ∈ (0, δ). Indeed, since U is unitary, V_{t,ϵ} and Λ_ϵ^{-1} have the same eigenvalues, and Λ_ϵ^{-1} is non-CPTP because it is the inverse of a non-unitary CPTP map. It follows that T_Λ = 0. Suppose now that Λ is unitary in [0, δ). Since we assumed that Λ_s is not unitary for some s < t, there exists at least one finite time interval (δ, δ′) such that Λ_s is not unitary for s ∈ (δ, δ′), where δ′ ≤ t. Given the conditions (B) and (C) of Eq. (3), we have to check whether V_{t,s} is CPTP for s = δ + ϵ with infinitesimal ϵ. Hence, if we write V_{t,δ+ϵ} = Λ_t ∘ Λ_{δ+ϵ}^{-1} = U ∘ Λ_{δ+ϵ}^{-1}, this map cannot be CPTP because the inverse of a CPTP non-unitary map, namely Λ_{δ+ϵ}^{-1}, is not CPTP, and its composition with the unitary transformation Λ_t = U has the same eigenvalues as Λ_{δ+ϵ}^{-1}. Hence, V_{t,δ+ϵ} is not CPTP and T_Λ = 0. Finally, we have to prove that there exists at least one intermediate map which is not even positive (P). We call vol(Λ_t) the volume of the image of the evolution at time t. Since Λ_s is CPTP and not unitary, vol(Λ_s) < vol(Λ_0) [33]. It follows that vol(Λ_s) < vol(Λ_0) = vol(Λ_t); since the intermediate map V_{t,s} would have to increase this volume, it cannot be positive [34].
E Proof that M_mix(Λ) ≤ M_mix(Λ̃)

Let us say that M_mix(Λ̃) = p, where Λ̃ is the PNM core of Λ and Λ̃^mix = (1 − p)Λ̃ + pΓ̃_M is Markovian, namely Γ̃_M is optimal to make Λ̃ Markovian. The intermediate maps of this evolution are CPTP. Now we take a suitable Markovian evolution Γ_M and consider Λ^mix = (1 − p)Λ + pΓ_M; showing that its intermediate maps are CPTP proves that M_mix(Λ) ≤ p = M_mix(Λ̃).
F Non-invertible depolarizing example
We proceed by showing how our framework behaves with a non-bijective depolarizing evolution. Consider the characteristic function given by f(t) = (2t − 1)²/(2t³ − t + 1) (see Figure 5). This depolarizing evolution is not divisible. As we can see, the evolution is not bijective between t_NB = 1/2 and any later time (f(1/2) = 0). We cannot define intermediate maps V_{t,t_NB} with initial time t_NB. Nonetheless, Eq. (24) provides a T_Λ smaller than t_NB, even without imposing the extra condition T_Λ < t_NB. Indeed, T_Λ is the earliest time such that there exists a time interval [T_Λ + ϵ, t_Λ] when V_{t_Λ,T_Λ+ϵ} is not CPTP, while V_{t_Λ,T_Λ} is CPTP (see Proposition 1). Hence, V_{t_Λ,T_Λ+ϵ} not being CPTP implies f(t_Λ) − f(T_Λ + ϵ) > 0, from which we can state that f(t_Λ) > 0. Similarly, from f(t_Λ) − f(T_Λ) = 0 we have f(T_Λ) > 0.
Since evolutions are continuous, so are characteristic functions. If f(0) = 1 and f(1/2) = 0, there must be an intermediate time T_Λ < 1/2 when the characteristic function assumes the value f(T_Λ) > 0 while f′(T_Λ) < 0. This property holds for any possible characteristic function that is zero-valued at one or more times, and therefore finding T_Λ does not require any additional technique with respect to the divisible case.
Straightforward calculations lead to T_Λ = 1/8, τ_Λ = 1/2 and t_Λ = 3/2. Most of the analysis made in Section 9.2 can be done similarly. Nonetheless, we underline some differences coming from non-bijectivity. Both Λ and Λ̃ have a time, τ_Λ = t_NB and τ_Λ̃ = t_NB − T_Λ respectively, when all the states are mapped into the maximally mixed state, namely when the characteristic function is null. Nonetheless, only the PNM core Λ̃ completely retrieves any possible type of information for at least one time. Indeed, Λ̃_{t_Λ̃} = I_S. For instance, pairs of initially orthogonal states ρ_S,1(t), ρ_S,2(t) go from perfectly distinguishable (t = 0), to absolutely indistinguishable (t = τ_Λ̃), and back to perfectly distinguishable (t = t_Λ̃). Since we cannot write V_{t,t_NB}, the corresponding lowest Choi-operator eigenvalues λ_{t,t_NB} = (1 − f(t)/f(t_NB))/4 cannot be evaluated for all 0 ≤ s ≤ t. Indeed, f(t_NB) = 0 and this quantity diverges. Hence, in Fig. 5 we plot P_Λ and N_Λ through the regularization l_{t,s} = f(s)λ_{t,s} = (f(s) − f(t))/4, which is non-negative if and only if there exists a CPTP V_{t,s} and does not diverge for s = t_NB. We underline an important subtlety. Even if V_{t,t_NB} does not exist, the evolution behaves as non-Markovian in the time intervals [t_NB, t]. Indeed, since Markovianity is defined through CP-divisibility, any other case is labelled as non-Markovian. Undoubtedly, this is the case here: we do not have a specific non-CPTP intermediate map, but since the evolution in [t_NB, t] is not represented by a CPTP operator, the evolution must show some non-Markovian features. As a matter of fact, the largest non-Markovian features are shown during these time intervals, where states go from being identical to partially, or perfectly (in the case of the PNM core and initially orthogonal states), distinguishable. Hence, these effects are not only quantitatively the largest, but they are qualitatively different.
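The quoted values are easy to check numerically; the sketch below (ours) confirms f(T_Λ) = f(t_Λ) = 0.64, the non-bijectivity at t_NB = 1/2, and the behaviour of the regularization l_{t,s}.

```python
def f(t):
    return (2*t - 1)**2 / (2*t**3 - t + 1)

print(f(1/8), f(3/2))                 # both 0.64: f(T_Lambda) = f(t_Lambda)
print(f(1/2))                         # 0.0: the evolution is non-bijective here

l = lambda s, t: (f(s) - f(t)) / 4    # regularized CPTP indicator l_{t,s}
print(l(0.5, 1.0))                    # -0.125 < 0: non-Markovian interval from t_NB
print(l(0.125, 1.5))                  # 0.0: V_{t_Lambda, T_Lambda} sits on the boundary
```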
G Proof that T_Λ = t_0 − t_0,α

We call Λ^{(α,t_0)}_t the dynamical map at time t of the quasi-eternal NM evolution defined by the parameters α and t_0. Similarly, we define V^{(α,t_0)}_{t,s}. We need to find the maximum T such that conditions (A) and (B) from Eq. (3) are satisfied.
Figure 1: Typical information content of an open quantum system evolving under a NNM evolution. An increase, or backflow, of information is a typical sign of non-Markovianity, namely of non-CPTP intermediate maps. Blue/red regions represent times when the infinitesimal intermediate map V_{t+ϵ,t} is CPTP/non-CPTP. The time T_Λ is the largest such that the preceding dynamics is CP-divisible and V_{t,T_Λ} is CPTP for all t ≥ T_Λ. Indeed, for t ≥ T_Λ, the information content of the system never exceeds the level at T_Λ (green area). The information lost in [0, T_Λ] is never recovered (useless noise), while the noise applied in [T_Λ, τ_Λ] is essential for the following backflows. τ_Λ is the earliest time after which we have an instantaneous backflow. We have finite backflows in intervals [s, t] with s > T_Λ, and t_Λ is the earliest t such that we have a backflow in [T_Λ + ϵ, t].
Finally, the following results clarify the role of T_Λ (proofs in Appendix B):

Lemma 1. Conditions (A), (B) and (C) are simultaneously satisfied at time T if and only if T ∈ [0, T_Λ]. If (A) is violated at time T, (B) is violated at a strictly earlier time.

Lemma 2. Any non-Markovian evolution Λ has non-CPTP intermediate maps V_{T,T_Λ+ϵ} for one or more final times T > T_Λ and infinitesimal values of ϵ > 0.
Figure 2: Open quantum systems are physically represented by a system S interacting with a surrounding environment E. This interaction leads to information losses (blue arrows) and, in the case of non-Markovian evolutions, backflows (red arrows). NNM evolutions Λ can be simulated with the following two-stage scenario. First stage (t ∈ [0, T_Λ]): the system interacts with a first environment E_1 and information is lost monotonically (Markovian pre-processing). The dynamics during this first stage corresponds to the useless noise of Λ. Second stage (t > T_Λ): E_1 is discarded, the system evolves while interacting with E_2 and we have information backflows. The dynamics during this second stage corresponds to the PNM core Λ̃ of Λ.
Rethinking clinical oncology drug research in an era of value‐based cancer care: A role for chemotherapy pathways
Abstract The United States spends nearly 1/5th of its GDP on healthcare. Yet, with respect to achieving value-based care, The Economist describes the US healthcare system as handicapped by multiple, disparate silos that prevent the organization and sharing of data. This paper explores the current state of clinical oncology drug research and its relationship to value-based cancer care. Clinical Chemotherapy Pathways are proposed as a unifying structure to bring together disparate sources of data to increase value.
| INTRODUCTION
In 2018, the United States spent 17.7% of GDP on healthcare. The per capita cost was $11,172. Household spending is 28% of all funding sources. 1 From May 2018 to May 2019, the FDA approved 58 new drugs or new indications for the treatment of solid tumors and hematologic malignancies. 2 All of these new drugs will be priced at substantial cost. A recent study by The Economist 3 argued that the US healthcare system is handicapped in a value-based environment by a myriad of disparate and uncommunicative health information systems. This siloed system prevents various disciplines, groups, and institutions from organizing and sharing data. The clinical cancer research enterprise is among these silos.
To further characterize the relationships between novel drug research, cancer management, and value-based cancer care (VBCC), realizing that each may consist of multiple silos, this paper explores the salient features of each and considers how the structure imposed by chemotherapy pathways can foster collaboration.
| CLINICAL ONCOLOGY DRUG RESEARCH
In a herculean effort, Hirsch and colleagues reviewed all interventional oncology studies from 2007 to 2010. 4 Out of 40 790 studies, 8942 focused on medical oncology. About 62.3% were single-arm, and 63.9% were non-randomized. About 83% were phase 1 or 2; the average size was 51 patients. About 41.8% were funded by industry. The authors also noted "we identified more than 25 000 outcomes across oncology trials that occurred only once or twice." Additionally, Booth et al showed that in the last three decades, industry-sponsored trials have increased from 4% to 57% of total trials. Industry sponsorship was associated with a higher rate of endorsement of the experimental agent. 5 The same group showed there was discordance between abstract presentations and published papers 63% of the time, 10% substantial. 6 Chan et al reported that positive phase 2 trials led to positive phase 3 trials only 50% of the time. 7 Industry-sponsored trials were positive 89.5% vs 45% for all others. In 2009, Mathieu et al 8 reported that 45% of randomized clinical trials were registered with ClinicalTrials.gov. Of these, 35% had discrepancies between registered intended outcomes and outcomes published. About 83% of these incorporated statistically favorable results. Requirements for registration and identification of the primary endpoint (PEP) of the study have since become more stringent. More recent reviews indicate that publications of randomized trials more frequently "reported positive unplanned endpoints and unplanned analyses than negative outcomes in abstracts…". 9 Another review showed that of 134 registered studies with a clearly defined PEP, 14% published a PEP differing from that in the registry, 15% had issues with methodology, and 22% had problems with interpretation. 10 There are additional, less well-studied concerns about the approval process of new drugs. (a) We know little about the difference in efficacy between drugs that are dosed slightly above the threshold response level and those dosed slightly below maximum tolerated dose. This is especially true for biologics and immuno-oncology (IO) drugs where there is high dose tolerance in a wide effective range. 11 A recent report of dose intensity in phase 1 drug trials showed responses in a wide range for IO drugs. For those with molecular or antibody targets, there was a general correlation of dose with response, but stable disease was associated with a wide range of dosing. 12 Even with cytotoxic drugs, while there is usually a tight correlation between dose and response, some drugs have doses reduced due to excess toxicity at the proposed dose. 13,14 (b) Once approved, fixed doses are recommended for some of the newer IO drugs, whereas the pivotal trials used weight-based dosing. Although vial size can make this challenging, being allowed to choose between dosing schemes would lower the overall cost. 15 (c) Some studies use vastly more expensive drugs in combinations when lower-cost drugs are available. Gemcitabine/nab-paclitaxel in pancreas cancer is an example. 16 Recent trials of IO drugs with nab-paclitaxel are also relevant. 17 (d) Some studies have more than one intervention, making outcomes and value decisions difficult to isolate. Examples are those studies with an induction phase and a maintenance phase. In the maintenance phase, a drug is given for maintenance without a control arm, or a drug with unknown benefit is piggybacked onto a drug considered standard of care.
As an example, in lung cancer, bevacizumab was continued as maintenance with no standard-treatment control arm. 18 The Point Break study 19 is an example of the latter, where the two maintenance arms were bevacizumab alone or in combination with pemetrexed. Neither was compared with the standard of care, which would have been pemetrexed alone. Bevacizumab has also been piggybacked with capecitabine without a capecitabine-alone control arm. 20 The pivotal trial of pemetrexed-carboplatin with or without pembrolizumab has a maintenance arm following the three-drug combination: pembrolizumab for 24 months along with pemetrexed indefinitely. The control arm with standard chemotherapy has pemetrexed maintenance only. As this is a novel combination, there is no known harm or benefit to maintenance of any type, yet no placebo, start/stop strategy, pembrolizumab-alone, or pemetrexed-alone arm was studied as an option for maintenance with the combination therapy. With these drugs in combination, there is an enormous monthly cost without measured value. 21 (e) In a study by Hilal, Sonbol and Prasad, 97 studies that were tied to approval of 95 new cancer drugs were evaluated for the appropriateness of the control arm, that is, whether the control arm represented optimal standard of care. Of these randomized controlled trials, 17% had suboptimal control arms. 22 (f) Randomized trials may have uncertain applicability to the usual medical oncology population of patients. Patients on research trials are younger and healthier, with fewer medications and comorbidities. 23,24 (g) Some studies may be marketed inappropriately when considering actual practice in a cost-effective environment. The media marketing of pegfilgrastim (G-CSF) in metastatic breast cancer is based on a study of docetaxel given at a dose of 100 mg/m² with or without G-CSF. 25 Docetaxel at that dose as a single agent is now rarely used. Another study showed there was no survival difference among doses of 100 mg/m², 75 mg/m², and 60 mg/m². 26 Less intense dosing is consistent with American Society of Clinical Oncology guidelines for treatment of solid tumors in the non-curative setting. 27 (h) Study PEPs may not translate into meaningful survival differences. 28 Two recent studies with bevacizumab in ovarian cancer highlight the discordance between progression-free survival (or response rate) and overall survival. 29,30 (i) Recent reports indicate that drugs given accelerated approval do not always translate the initial promise into meaningful survival benefit. Gyawali et al 31 showed that of 93 drug indications given accelerated approval, 20% showed improvement in survival, 20% showed improvement in the same surrogate measures as in the initial study, and 21% showed improvement in different surrogate measures. The remaining studies were ongoing, pending, or delayed. Kim and Prasad 32 found similar results: 57% of 54 drugs approved have "unknown effects on overall survival or fail to show gains in survival." Particularly notable is the case of bevacizumab in the treatment of glioblastoma. An initial improvement in response rate was not proven to lead to improvement in survival in the confirmatory trial. The bevacizumab cohort also had increased toxicity. Yet, bevacizumab received full approval for this disease. 33 To summarize current oncology research: Published trials have widely varying structures and outcomes. There are fewer impartially funded trials.
Reported outcomes may be skewed toward statistically positive findings. Drug choice in combination trials may be done without consideration of lower cost alternatives. Drug choice in maintenance therapies may be made without clinical evidence. Marketing of trial results may not be associated with real world use and may even be counter to generally recommended guidelines. Surrogate endpoints may lead to drug acceptance or approval without improvement in meaningful outcomes. There is, as Hirsch et al 4 mention, the "lack of a standard ontology" that would allow comparisons across trials and even across databases.
These considerations do not address directly the issue of "multiplicity" raised by Prasad and Booth. 34 Multiplicity becomes a concern when there are "many trials testing similar hypotheses with similar drugs (such that) the likelihood that any one trial will yield a significant result is increased by the large number of times that something has been tested." The analysis here outlines the structural difficulties with the conduct of clinical trials and subsequent marketing that make multiplicity possible.
VALUE-BASED CANCER CARE
The Economist report defines value-based care as "the creation and operation of a health system that explicitly prioritizes health outcomes that matter to patients relative to the costs of achieving those outcomes." There are four domains for this enterprise: (a) An enabling structure; (b) Explicit measurement of outcomes and costs; (c) Integrated patient-centered care; and (d) A payment system based on outcomes, not volume. 3 Value-based delivery models are rapidly inserting themselves into the cancer care delivery complex. This is particularly true for Medicare-aged patients. Cancer is predominantly a disease of the elderly. Most oncology practices will have Medicare, either as traditional fee-for-service Medicare or as Medicare Advantage, for over 50% of new cancer patients. For traditional Medicare, there is the Oncology Care Model (OCM), a value-based program developed by the Center for Medicare and Medicaid Innovation (CMMI). 35 For practices participating in the OCM, up to 20%-30% of all new cancer patients will be covered by this program.
One goal of the OCM was to create a template for Medicare Advantage and commercial insurers to use. United Healthcare and Aetna have published results of earlier models. [36][37][38] They, as well as Cigna and Humana, have programs that are operational or in development. Anthem and some of the regional Blue Cross insurers have also implemented value-based cancer programs. Although many of these programs are in the beginning stages, if effective, most oncology practices will have 50%-70% of all new cancer patients covered by a value-based delivery system within the next 3-5 years. To emphasize, this means that, in the near future, the typical oncology practice will have over 50% of their patients for whom the total cost of care will be important, with reimbursement linked directly to how well they meet the requirements of the value-based payment model. How well oncologists can manage the total cost of care will impact the financial health of these practices.
It is clear the current state of clinical oncology research is not designed to support the goals of value-based cancer care (VBCC). Trials are designed to measure outcomes that lead to FDA approval. Cost is not a consideration. In some trials, cost is added without any evidence of benefit or without studying less expensive alternative regimens. The trials do not typically assess the cancer patients we see in our clinics.
ENABLING STRUCTURE
The incorporation of Pathways into these recommendations requires some explanation. The Pathways programs were initiated to address situations where multiple regimens had similar outcomes in specified clinical situations. One early example was in metastatic non-small cell lung cancer, where there were four equally effective regimens. 39 The primary tenet of a Pathways program was to evaluate outcomes and toxicity first and, if these were the same for two or more regimens, costs could be considered as a deciding factor. Subsequent studies have shown that using Pathways can reduce drug costs. [40][41][42][43][44] The American Society of Clinical Oncology has developed recommendations for legitimizing Pathways programs for chemotherapy selection. 45 Payers look favorably on Pathways programs and have developed products of their own. 46 The principles of Pathways can be applied to other settings beyond clinical chemotherapy trials. 47,48 The key to these programs is meticulous assessment of the evidence for efficacy and toxicity and, only then, consideration of costs.
The challenge for oncology and medicine in general is to operationalize changes that enable alignment of clinical drug research with VBCC to improve outcomes and reduce costs. Enabling changes might include:

(a) Develop an ontology and supporting structure across all research platforms.

(b) Develop interoperability of electronic medical record (EMR) system platforms to get complete information on unstudied patient groups, such as those 70 and 80 years old and those on multiple medications and/or with comorbidities. This would allow practices and payers to use Real World Evidence (RWE) to answer questions about the impact of new drugs or other interventions on outcomes and, therefore, value for these otherwise unstudied patients.

(c) Standardize and measure patient-reported outcomes, especially for the elderly.

(d) Develop interoperability among EMR and claims databases, including Medicare, to measure total cost of care. RWE would bring together clinical information and claims data to study these critical cancer populations, which represent the majority of patients in clinical practice.

(e) Form contract relationships of Medicare and large payers with validated Pathways owners to use Pathways as a tool to assess the value of the research structure and outcomes.

(f) Manufacturers would continue the current processes for FDA approval of new drugs. However, a new drug or regimen would have to demonstrate comparative value for a specific indication to be placed on Pathways.

(g) If there were more than one option for a particular Pathway indication, an insurer, including Medicare, could make a coverage decision to narrow the choice based on value.

(h) The specific Pathway indication could vary by age, comorbidity, and other risk factors.

(i) Pathways could legitimize substitution of lower cost drugs known to be equivalent when given with the same dosage and schedule.

(j) Pathways could reject studies with surrogate endpoints, unmeasured variables, or inappropriate controls.

(k) Pathways could do the follow-up needed for drugs given expedited approval.

(l) Where there is more than one choice for a Pathways indication, payers, including Medicare, could negotiate Pathways placement based on price.
FOR VALUE MEASURES
Finding a consensus among practicing oncologists about value will be challenging. Yet "Defining a common understanding and measurement of value is a critical and necessary first step in improving the value of cancer care in the United States." 49 At the same time "Both quality and statistical precision have important implications for the creation and interpretation of value." 50 There are constructs available to assess the value of a cancer drug. The European Society of Medical Oncology (ESMO), 51 the American Society of Clinical Oncology (ASCO), 52 and the National Comprehensive Cancer Network (NCCN) 53 among others have primarily patient-oriented formats. However, none are placed in a structure to compare regimens by disease and line of therapy with payment implications in a value-based environment. 50,54 The ASCO and ESMO calculations do not specifically include costs. In the NCCN evidence blocks, the cost determinations are not consistent. 53 Improving on these formulations requires attention to evidence, addressing the concerns with clinical oncology drug research outlined here, and placing the assessments in an enabling structure tied to clinical decision-making and reimbursement. A Pathways environment can do this.
It is an interesting exercise to consider what, for example, the ESMO and ASCO value frameworks would look like in a Pathways structure. As a Pathways program would have already assessed disease-specific outcomes, no points would be awarded for them; instead, patients would receive an accurate, validated representation of survival, whether as a survival curve, a bell-shaped curve for survival, or specific numbers for median, 6-month, 1-year, 2-year, and 5-year survival. 55 This would be accompanied by cost numbers based on the Medicare allowable per month and for the treatment duration. Framework points would be awarded for various aspects of patient-specific outcomes and cost. These could include projected inpatient and direct and indirect outpatient costs, toxicities, symptom burden with various patient-reported concerns, incapacity, dependence, and other indicators meaningful for patients. This would allow for objective discussions of survival and costs and extensive exploration of patient values.
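As a thought experiment only, the sketch below shows how such a point-based comparison might be wired up. The regimens, weights, and penalty terms are entirely hypothetical; they are not the ASCO, ESMO, or NCCN formulas, and a real Pathways program would calibrate each term against validated evidence.

```python
from dataclasses import dataclass

@dataclass
class Regimen:
    name: str
    median_os_months: float   # median overall survival
    grade34_toxicity: float   # fraction of patients with grade 3-4 toxicity
    monthly_cost_usd: float   # drug cost per month (e.g., Medicare allowable)

def value_score(r: Regimen, baseline_os_months: float) -> float:
    """Toy value score: survival gain, penalized by toxicity and cost.

    Hypothetical weights for illustration only; NOT an established framework.
    """
    survival_gain = max(r.median_os_months - baseline_os_months, 0.0)
    toxicity_penalty = 10.0 * r.grade34_toxicity
    cost_penalty = r.monthly_cost_usd / 1_000.0   # 1 point per $1,000/month
    return survival_gain * 10.0 - toxicity_penalty - cost_penalty

# Two invented regimens for a single disease/line-of-therapy indication
regimens = [
    Regimen("Regimen A", median_os_months=14.0, grade34_toxicity=0.45, monthly_cost_usd=12_000),
    Regimen("Regimen B", median_os_months=13.5, grade34_toxicity=0.30, monthly_cost_usd=4_000),
]
for r in sorted(regimens, key=lambda r: value_score(r, 12.0), reverse=True):
    print(f"{r.name}: value score = {value_score(r, 12.0):.1f}")
```

Under these invented weights, the cheaper, less toxic regimen outranks the one with a slightly longer median survival, which is exactly the kind of trade-off a value framework tied to reimbursement would need to surface explicitly.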
Nothing in this discussion suggests rationing of care; rather, the concern is that the current system does not adequately address the opportunity to reduce the use of low-value care and to negotiate pricing based on value. The least controversial strategy to reduce costs and improve value is to eliminate instances of low or no value.
Valorizing the Input and Output Waste Streams from Three PtX Case Studies in Denmark—Adopting a Symbiotic Approach
This study aimed to investigate the waste streams from the production of hydrogen energy carriers from PtX technology and identify how they can be valorized by applying a symbiotic approach to enable greater utilization of the inputs and outputs from such plants. Various electrolysis development projects are under development or in the pipeline in Europe and Denmark, but in many cases, it is not clear how waste streams are emphasized and valued in these projects. Thus, three exploratory case studies (a city, a rural, and an energy hub case) were investigated herein exemplifying state-of-the-art electrolysis projects currently being deployed, with a focus on identifying how and to what extent waste streams are being valorized in these projects and energy system integration is being pursued. Inspired by the industrial symbiosis literature, we analyzed how internal, regional, and long-distance symbiotic collaboration is realized within these cases and found them to be very different in terms of the energy carrier produced, the current development stage, and the access to appropriate energy infrastructure. This paper concludes that the co-location of PtX technology near biogas plants would provide a great opportunity for the integration of the produced energy carriers and waste streams into the existing energy system and, hence, could assist in stabilizing fluctuating renewable energy sources to enable their more efficient use in the energy system.
Introduction
Renewable energy sources (RES) are a necessity in the ongoing transition of our current energy system away from the historic dependence on fossil fuels toward the use of more sustainable energy sources. However, the fluctuating and inherently unpredictable nature of RES makes it challenging to fully replace the traditional carbon-intensive energy supply based on fossil fuel. Hydrogen (H2), as an important energy carrier with great potential, is regarded by many governments, researchers, the energy industry, etc., as a key player in the future [1]. Indeed, electricity and hydrogen are expected to be the dominant energy carriers in the future energy system, where it is expected that hydrogen will be stored and utilized for electricity, gas, and heat production, as well as for producing various chemicals [2]. By storing the peak RES electricity produced from technologies such as solar PV and wind power as hydrogen, it is envisioned that the energy system relying on fluctuating RES could be stabilized and become more flexible to meet demand, allowing the current carbon-intensive economy to be decarbonized. This transformation is already underway today, and around 70 Mt of hydrogen is already produced on a global scale, with about 4% coming from electrolysis and the rest from steam methane reforming, as well as from the gasification of coal and oil [3]. However, it is still early days. Indeed, in the European region, electrolysis currently only accounts for about 2% of the energy mix [4]. Like many countries, Denmark is striving to decarbonize its energy supply and has committed to reducing its CO2 emissions by 70% by 2030 compared to 1990 emissions and to becoming carbon neutral by 2050 [5]. As such, hydrogen and PtX production from RES are regarded as offering a promising future pathway; here, PtX refers to "power-to-X", a collective term for electricity conversion technologies that can convert surplus electric power, such as excess load from RES, into carbon-neutral energy carriers (X) for use as synthetic fuels or chemicals, especially hydrogen.
According to the Hydrogen Branch Organization (Brint Branchen) in Denmark, 43 hydrogen and PtX projects were announced in 2023, and if all the projects were to be realized to their full extent, this would correspond to 20-25 GW electrolysis capacity by 2030 [6]. This would far outstrip the target set out in the national Danish PtX strategy that was launched in December 2021, which set a political ambition of 4-6 GW electrolysis capacity by 2030 and is expected to reduce national emissions of CO2 by 2.5-4 Mt annually. The strategy further stresses that PtX must be competitive with other technical applications, that it should benefit the energy system and infrastructure in Denmark, and that options for the export of PtX technology and deployment solutions should be emphasized [7]. The European Commission (EC) also has an ambitious PtX policy, in which they have set a target of 10 Mt annual hydrogen production by member states, as well as 10 Mt import of RES hydrogen by 2030, as stated in their REPowerEU initiative [8]. According to ref. [9], this would require 64 GW of installed electrolysis capacity within the European Union (EU), which must be implemented within only a few years to enable meeting the EU target. When adding the needed global capacity of 850 GW suggested by the International Energy Agency [10], the future PtX portfolio seems huge.
In the long term, the EU aims to be a zero-emission region by 2050, the world's first, and recognizes the need for clean hydrogen and PtX to achieve this. In A hydrogen strategy for a climate-neutral Europe, 2020 [11] and the Hydrogen roadmap Europe, 2019 [12], the European Union envisions a path consisting of three stages that will facilitate these goals: A first stage (2020-2024), in which the current PtX production is sought to be decarbonized, for example, in the chemical sector. This will require 1 Mt of RES hydrogen to be produced by 2024. In the second stage (2024-2030), 10 Mt of RES hydrogen should be deployed, and 40 GW electrolysis capacity installed by 2030 (equal to 8 Mt of hydrogen), and by then, hydrogen should be an integral part of the energy mix for new sectors, like, for example, rail transportation, the steel industry, and shipping. The ambition is that most of the electrolysis capacity will be generated near the user or the RES resources and should be a key part of an integrated energy system. In the third stage (2030-2050), RES hydrogen should be disseminated across all sectors, also embracing sectors that may be more difficult to decarbonize [8,9]. Ambitious targets for hydrogen are hence envisioned for the future European market, and consequently, many PtX development projects are already in the pipeline in Denmark as elsewhere. In the following, we first exemplify several PtX development projects currently in the pipeline in Denmark and second describe some future energy island projects initiated by the Danish Government that are scheduled to be implemented before 2030.
Danish PtX Projects under Development and in the Pipeline
Ørsted and Skovgaard Energy: These two Danish energy companies are aiming to develop a PtX plant in Holstebro Municipality. The project, like most other PtX projects, will be built in several phases, with an aim that in the last phase, the plant will be expanded to an electrolysis capacity of 3 GW by 2030. Initially, the electrolysis capacity will be 150 MW. The RES utilized will be onshore wind and solar PV and later from offshore wind.
Ringkøbing-Skjern Municipality: The PtX project "Megaton" will have a planned electrolysis capacity of 2 GW when completed by 2030. The goal for the project is to produce 1 Mt of green fuel annually. In connection with the PtX plant, an energy park with a capacity of 4 GW of wind and solar energy is also being built. The project will cost around 60 billion Danish Kroner (DKK), equal to 8 billion euros.
Green Fuels for Denmark and Ørsted: The Green Fuels for Denmark project is led by Ørsted and connected to the company's combined heat and power plant (CHP), namely the Avedøre Power Plant. Many partners are involved in the project, including Haldor Topsøe, Nel, Everfuel, and Cowi as technology and knowledge partners, and A.P. Møller-Maersk, Copenhagen Airport, DFDS, DSV, and SAS as commercial partners due to their energy demand. The electrolysis capacity in the last phase will be 1.3 GW. By 2030, it is intended that up to 275,000 t of green fuel will be produced annually; the initial electrolysis capacity is 10 MW. Energy carriers like hydrogen and methanol will be produced, as well as kerosene, in the final phase, which should be sufficient to meet 30% of the annual demand for aviation fuel at Copenhagen airport.
Copenhagen Infrastructure Partners: Copenhagen Infrastructure Partners (CIP) is leading the "Høst" development project, which will be implemented in Esbjerg and is intended to produce green ammonia. The project has several partners and energy buyers, such as Esbjerg Municipality, DIN Forsyning, DLG, Arla, Danish Crown, DFDS, and A.P. Møller-Maersk. Høst should reach an electrolysis capacity of 1 GW and is expected to make Denmark self-sufficient in ammonia, with an annual production of about 600,000 t. The ammonia will be utilized in agriculture and shipping. The RES will be supplied by wind turbines and solar PV. The PtX plant will cover 30 ha and cost around DKK 10 billion (EUR 1.33 billion) and should be operational by 2027.
Three plants with 1 GW capacity each: Besides the four largest plants mentioned above, at least three other large projects are in the pipeline, all of which have an expected electrolysis capacity of 1 GW. These projects are H2 Energy in Esbjerg, Green Hydrogen Hub in North Jutland, and HySynergy in Fredericia.
Considering all the projects mentioned above, around 10.3 GW of PtX capacity should be deployed before 2030, although this figure does not include some smaller plants also being implemented [13]. But, as mentioned earlier, up to 43 electrolysis projects were announced in Denmark in 2023, and if all were deployed, it would correspond to around 20-25 GW electrolysis capacity by 2030 [6].
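As a quick check of the arithmetic behind the 10.3 GW figure, a minimal sketch sums the announced electrolysis capacities of the seven projects listed above (the dictionary labels are our shorthand, not official project names):

```python
# Announced electrolysis capacities (GW) for the Danish projects listed above.
projects_gw = {
    "Orsted/Skovgaard (Holstebro)": 3.0,
    "Megaton (Ringkobing-Skjern)": 2.0,
    "Green Fuels for Denmark": 1.3,
    "Host (CIP, Esbjerg)": 1.0,
    "H2 Energy (Esbjerg)": 1.0,
    "Green Hydrogen Hub (North Jutland)": 1.0,
    "HySynergy (Fredericia)": 1.0,
}

total_gw = sum(projects_gw.values())
print(f"Total announced capacity: {total_gw:.1f} GW")  # -> 10.3 GW, smaller plants excluded
```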
Offshore energy islands and PtX: The Danish Government has decided to also support building up its offshore wind capacity. Development will take place on two artificial islands, to be located in the North Sea and in the Baltic Sea near the island of Bornholm, respectively. The first energy island, to be developed in the North Sea, will generate 3 GW of wind energy by 2033 and 10 GW by 2040 and will also incorporate PtX technology to produce hydrogen directly on the island in the future. Discussions about whether to distribute electricity for electrolysis onshore or to produce hydrogen directly on the energy islands are still ongoing [14]. The final choice revolves, on the one hand, around the high costs of laying transmission cables from the energy islands to onshore facilities for electrolysis production and, on the other hand, around the benefits of using the power produced directly in electricity-driven applications, instead of producing hydrogen alone, with the consequent energy losses [14,15].
Problem Field
As illustrated earlier, several Danish PtX development projects are currently in the pipeline, among which some are very large projects seemingly applying technical applications with no clear, or at least no transparent, vision of how to deploy electrolysis production most efficiently. This underlines the benefits for the energy system and infrastructure of generating capacity near the user, as prioritized by the EU, and of valorizing the waste streams from the electrolysis process. The EU prioritizes integrated energy systems in its policies [16] and envisions PtX technology being adopted [9]. It is thus pertinent to ask whether the PtX projects currently in the Danish pipeline apply this approach. This also raises the question of how to integrate and deploy the technology to fit the contexts of the local communities, for example, regarding the supply of RES and water resources to the plant, and how to use the energy carriers (X) and waste heat to stabilize the fluctuating RES production. Such issues are important aspects to consider when adopting PtX technology and should be an integrated part of development projects.
Much of the literature on PtX has been published within the last decade, as shown in [3], and several studies emphasize the different energy carrier routes that electrolysis processes can provide. The research to date has mainly investigated the efficiency of systems, such as PtMethanol, PtGas, and PtAmmonia, and pointed to the production of the energy carriers being feasible within the contexts analyzed [1,3,4,17]. This literature also covers the different types of electrolysis technologies, such as AEC, SOEL, and PEMEL, and discusses the benefits and drawbacks of these technologies, e.g., [18,19]. However, there is scant literature seeking specifically to valorize the waste streams of PtX applications, and this topic is therefore underrepresented in the current academic literature, albeit a few studies specifically emphasize the potential for energy efficiency gains when using waste heat streams from PtX technology [20][21][22][23]. Most often, however, the need for waste stream valorization is just noted as a side remark when concluding the research findings [24,25], with comments about the need for utilizing and managing heat and oxygen outputs from the PtX process. Thus, there is a gap in the current literature focusing on the waste streams of PtX production and how they could potentially find usage in the future.
The contribution of this work is hence its emphasis on which types of waste streams exist and how to optimize the valorization of the waste streams connected to PtX production. This was approached by identifying symbiotic "markets" or "use potentials" for these outputs within or in proximity to the PtX plant in the community, region, or nationally. Options for stabilizing and utilizing the current energy system (electricity, district heating, and gas systems) were assessed with the consideration that they should provide synergies with already adopted technologies/systems. This is pursued through an exploratory case study approach, revealing the state-of-the-art deployment of PtX technology in Denmark, with a particular emphasis on waste stream valorization.
Materials and Methods
The following sections provide a description of the methodologies applied in this work, how data retrieval was achieved, and the theoretical approach utilized.
Data Retrieval
This section elaborates on data retrieval and the methodological considerations regarding data collection, and outlines the various parts of the study methodology.
Exploratory Case Study
We utilized an exploratory case study approach [26,27] for the cases investigated, as we did not have any pre-determined expectations of the outcome when entering the empirical research field. Also, when holding interviews as part of the data collection process, we approached the interview situations without exact knowledge of the stakeholders' positions on the topic investigated, the layout/design of the PtX applications adopted, or their stance, whether reluctant or welcoming, on valorizing the waste streams in connection with the electrolysis process. According to refs. [26,28], the use of exploratory case studies is also appropriate when researchers need to gain very detailed descriptions of a social phenomenon or when there is a need to explore and investigate presumed causal links that are too complex for a survey or experiment. According to ref. [27], case studies are appropriate when asking "how," "why," "what," and "who" questions. For the three exploratory case studies in the present study, qualitative semi-structured interviews were held with the three plant managers, one at each site, and information was also collected on a guided tour of each plant area to obtain more details on different aspects of each case. After the interviews, a summary of the information provided during the interview and at the site visit was immediately written, with any uncertainties or remaining queries resolved in follow-up telephone interviews [29]. The interviews followed the same format, guided by a "Questionnaire for Interviews", which can be found in Appendix A.
Case Study Design
Three case studies were chosen for this paper (see Section 3) in accordance with their different locations and plant layouts/designs. Case study 1 was the GreenLab Skive and PtX plant, representing a business and energy hub located near the city of Skive in the northern part of Jutland. As the plant is situated in proximity to a range of industrial facilities, there are favorable opportunities to engage in symbiotic collaboration with the local community. The PtX plant is currently being constructed and will be developed further in the years to come. Case 2 is the Vinkel PtX plant, which is being developed as a stand-alone plant located in a rural area in the central part of Jutland. The plant will be implemented in the near future, and the main work will revolve around the biogas plant's upgrading facility and its current access to a natural gas network, or gasnet. Being remotely located, the valorization of waste streams is a particularly interesting topic to illuminate in this case study. Case 3 is a PtX plant that has just started operation and is being implemented in connection to a multi-utility company, DIN Forsyning, located at Esbjerg harbor in the 4th largest city in the southern part of Denmark. As the PtX plant is located in the city, and hence potentially near a heat market, this city center case study provides an interesting perspective on the extent to which excess heat from the electrolysis process can be valorized. Moreover, it also considers whether waste streams (inputs/outputs) other than excess heat can be valorized when a PtX plant is deployed in connection to a multi-utility company.
Literature Study
Besides the aforementioned case studies, a literature study was also conducted to provide further information for this paper, used in, e.g., the introduction, problem field, methodology, and analysis. To perform the literature study, the relevant scientific literature was retrieved using search engines, such as Google Scholar, and inputting key phrases [30] such as "PtX technology efficiency", "electrolysis pathways", "hydrogen production and waste streams", and "hydrogen energy carrier routes". Generally, a literature study exemplifies the research within the field being studied and, as such, does not aim to provide a full review list of the published work within this research area, so we recognize that there are other relevant studies we have not cited herein. Moreover, reports from the European Commission, International Energy Agency, the Danish Government, etc., were utilized to outline the political agenda and the supporting policies for production from electrolysis within the European and Danish contexts.
Theoretical Outline
In the following section, we first elaborate on the symbiotic approach this work relied on to identify which waste streams exist in PtX plants and how they can be valorized, analogous to approaches taken in industrial symbiosis research. Second, a systemic investigation of the routes by which electrolysis products are produced and eventually utilized was conducted to capture the important elements needed to suggest how PtX technology could be deployed most efficiently at its current stage. Several implications or focus areas for achieving valorization of the input and output waste streams were identified, which are presented ahead. Here, emphasis was placed on the resources, technology, hydrogen usage, processes, and integration of the energy carriers with the existing energy system in Denmark.
Symbiotic Approach
Symbiosis, which literally translates as an "interaction between organisms", is a term frequently used in biology to describe the interdependence of species in eco-systems. However, the idea of symbiosis is also used in industrial systems, where, for instance, businesses trade waste and by-products to cut down on their resource usage. Symbiotic relationships between businesses thus refer to interdependencies formed via the exchange of by-products and/or shared infrastructure. The industrial symbiosis metaphor, which has its roots in the industrial ecology idea, first appeared in academic writing on industrial organization in the late 1980s [31]. According to refs. [32,33], the establishment of industrial symbiosis partnerships has the potential to close the materials loop, reduce energy usage, and hence create a more circular economic model for businesses. There are various types of industrial symbiosis systems, including (i) "classical" inter-firm collaborations between companies located near each other within a defined area, (ii) inter-firm collaboration between companies with symbiotic relationships that are located in the same region, and (iii) long-distance collaboration between companies [34]. According to refs. [34,35], at least three different companies must connect and cooperate by exchanging at least two different by-products before they can be recognized as being part of an industrial symbiosis.
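As a minimal illustration of this qualification rule, the sketch below models exchanges as (supplier, receiver, by-product) triples and checks the "at least three companies, at least two by-products" criterion from refs. [34,35]; the firm and by-product names are hypothetical.

```python
# Each exchange is a (supplier, receiver, by_product) triple; names are hypothetical.
exchanges = [
    ("biogas_plant", "ptx_plant", "biogenic_CO2"),       # CO2 from upgrading to fuel synthesis
    ("ptx_plant", "biogas_plant", "oxygen"),             # O2 for sulfur capture in the reactor
    ("ptx_plant", "district_heating_co", "waste_heat"),  # surplus heat to the local network
]

def qualifies_as_industrial_symbiosis(exchanges) -> bool:
    """Check the criterion from refs. [34,35]: at least three different
    companies exchanging at least two different by-products."""
    firms = {firm for supplier, receiver, _ in exchanges for firm in (supplier, receiver)}
    by_products = {product for _, _, product in exchanges}
    return len(firms) >= 3 and len(by_products) >= 2

print(qualifies_as_industrial_symbiosis(exchanges))  # True: 3 firms, 3 by-products
```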
The overall rationale of industrial symbiosis is for companies to simultaneously realize cost reductions and environmental benefits, essentially using the by-products and waste, or waste streams, generated by one company as the raw materials in another company. Furthermore, industrial symbiosis can also involve the sharing of infrastructure, such as buildings and facilities, energy and water supplies, and many other types of resources [36]. The difference between industrial symbiosis and industrial ecology is that the latter aims to build a sustainable closed system by combining industrial eco-systems, including aspects of industrial symbiosis, industrial metabolism, and consideration of and response to environmental laws and regulations. Industrial symbiosis is, therefore, a means and a subset to achieve this, emphasizing exchanges to create synergy and thus reach the goals of an "industrial ecology" [37].
In this work, we used the concept of industrial symbiosis as a tool to analyze and assess the production and use of waste streams connected to PtX plants. Currently, PtX plants are being deployed in various ways and are connected to various industries or markets to enable a valorization of the inputs/outputs. This work, however, approaches industrial symbiosis from the perspective of considering the "markets" or "use potentials" for the waste streams from PtX plants and hence also looks beyond the sole focus on inter-firm collaboration to include other relevant arenas that could potentially enable a valorization of these waste streams. The use of waste streams within or in proximity to the PtX plant (i.e., classical inter-firm collaboration) could include, for example, residual water, CO2, or oxygen (O2); the use of waste streams within a region could include the use of waste heat for district heating purposes (i.e., regional collaboration); and the use of waste streams within national boundaries (i.e., long-distance collaboration) could include, for example, the supply of electricity to the grid or pure methane to the gasnet. The systemic elements connected to electrolysis production in Denmark, especially in regard to the three case studies explored, are detailed in the following.
Systemic Elements Connected to PtX Plants
The context in which PtX plants can be deployed, or their planned deployment, will to a large extent define how the technology can be integrated and become a part of the local or national energy system and hence play an important role in the green transition to cleaner energy. Figure 1 provides an overall (non-exhaustive) model of the usability of the current PtX technology, with an emphasis on the AEC Alkaline Water Electrolyzer, which is currently the most developed and disseminated technology [19,38,39]. There are a number of critical elements to consider with PtX plants, as described ahead.
First, PtX plants need Resources, which generally consist of super-clean water and electricity. For the technology to become a part of the green transition, the electricity input should preferably be from RES, like wind power or solar PV. In this way, the PtX plants could assist in transforming fluctuating RES sources into more stable energy carriers. The cost of onshore RES is significantly lower than offshore RES and could, therefore, be prioritized in the initial stages of PtX development. Grid electricity could also be utilized, but this means the plant will not solely rely on green power unless the grid electricity is certified as green.
The water demand in PtX plants is usually large, and the quality goes beyond just drinking water quality. Surface water or groundwater can be provided to the process, and additional cleaning of the water is then undertaken. On average, 150-200 kg of super-clean water is needed per MWh of electricity used [40,41]. The wastewater from the rinsing steps to produce the super-clean water could potentially be utilized as, e.g., secondary water in other processes. The water demand could, however, initially rely on wastewater, cleaned to the standard needed, to avoid taking drinking water from groundwater reservoirs for the electrolysis process [41].
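To give a feel for the scale of this demand, the minimal sketch below converts the 150-200 kg/MWh figure from [40,41] into an annual water requirement; the 15 MW capacity and 4,000 full-load hours are illustrative assumptions, not figures from a specific plant.

```python
def superclean_water_demand_t(electrolysis_mw: float, full_load_hours: float,
                              kg_per_mwh: float = 175.0) -> float:
    """Tonnes of super-clean water per year, using the 150-200 kg/MWh
    range from [40,41] (midpoint of 175 kg/MWh by default)."""
    mwh_per_year = electrolysis_mw * full_load_hours
    return mwh_per_year * kg_per_mwh / 1_000.0

# Illustrative example: a 15 MW electrolyzer at 4,000 full-load hours per year
demand = superclean_water_demand_t(electrolysis_mw=15, full_load_hours=4_000)
print(f"~{demand:,.0f} t of super-clean water per year")  # ~10,500 t
```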
Second, the Technology, namely the various technical elements of the PtX plant. Here, we focus on the AEC Electrolyzer, as emphasized earlier, which produces hydrogen with an average efficiency of 66%, besides waste heat and pure oxygen [38]. The amount of waste heat is significant and may comprise up to one-third of the total supply of energy to a plant [38]. This underpins the importance of locating PtX plants close to heat markets, such as large industrial facilities or local district heating networks, to facilitate a symbiotic use of this waste stream.
Third, the option for using hydrogen. There are generally five options for using hydrogen, i.e., the Hydrogen usage. These are illustrated in Figure 1 and can be described as follows: (i) the production of e-fuels for aviation and shipping, (ii) use of the hydrogen for transportation purposes, (iii) storage of the hydrogen for later usage, (iv) use of the hydrogen in combination with the natural gasnet, and (v) methanation of hydrogen, either to purify synthesis gas or to produce methane.
Fourth, the Processes are connected to the different usability routes, i.e., the various techniques and methods applied, which will, fifth, lead to different options for Energy system integration. These options include:

(a) Making e-fuel via a synthetic process, such as methanol, which would require an input of carbon (CO2) and electricity from the energy system if not provided by local RES sources. The biogenetic CO2 used in this process could be supplied by the upgrading facilities (extraction of CO2 from biogas to produce pure methane) connected to existing Danish biogas plants, or possibly from decentralized CHP plants fueled by biomass, such as residual straw or wood chips. The co-location of the PtX plant near such facilities would, hence, be beneficial. The use of biogenetic CO2 is important from a symbiotic perspective, as in almost all cases, it would otherwise simply be emitted to the atmosphere from the biogas plants' upgrading facilities. Currently, 675,000 t of biogenetic CO2 are emitted annually from these energy plants [42], thus not being valorized while also causing environmental issues, including contributing to global warming. Capturing and valorizing this output as a new input to the electrolysis process is therefore important, and carbon capture and utilization (CCU) could thus minimize this current wastage of biogenetic CO2, which could be facilitated within the PtX plant via a classical "inter-firm collaboration".

(b) Pressurizing hydrogen to make it suitable as an energy carrier for transportation purposes. This is, however, an energy-intensive technique and will, in many cases, require an input of energy from the energy system to fuel high-pressure equipment unless energy is supplied from local RES sources. This process does not require carbon input.

(c) Storing hydrogen and eventually utilizing it for the generation of electricity, e.g., via re-electrification. In this way, it is possible to produce an output (electricity on demand) that could assist in stabilizing the fluctuating RES production.

(d) Injecting hydrogen directly into the gasnet [43]. Up to a 15% (potentially 20% in newer gas pipes) mixture of hydrogen and gas (methane) could be supplied and hence "stored" in an already established gasnet [44]. The stored energy within the gasnet could then later be utilized for, e.g., electricity and heat production or industrial usage, thereby helping stabilize the fluctuating energy supply from RES.

(e) Methanation, where hydrogen, for example, is converted to additional biogas via an in situ injection of electrolysis-produced hydrogen directly into the biogas reactor tank [17] (see the stoichiometric sketch after this list). This was investigated by ref. [45], which reported that up to 80% of the hydrogen was converted to methane, and between 40 and 60% of the CO2 content of the biogas was removed. Alternatively, new ex situ co-located technology, like a biological methanation facility, could be established separately, where biogenetic CO2 from an upgrading facility could be utilized and combined with hydrogen from the PtX plant in the following process [17,22]. According to ref. [46], such a separate (ex situ) methanation plant could typically increase the biogas yield by up to 42%.
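The mass balance behind option (e) follows from the Sabatier reaction, CO2 + 4 H2 -> CH4 + 2 H2O. The sketch below applies this stoichiometry together with the 80% in situ hydrogen conversion reported in [45]; real conversion depends on reactor design, temperature, and microbiology, so the numbers are indicative only.

```python
# Stoichiometric sketch of biological methanation (Sabatier reaction):
#   CO2 + 4 H2 -> CH4 + 2 H2O
M_H2, M_CO2, M_CH4 = 2.016, 44.01, 16.04  # molar masses, g/mol

def methanation_yield(h2_kg: float, h2_conversion: float = 0.80):
    """Return (CH4 produced, biogenic CO2 consumed) in kg per h2_kg of
    hydrogen fed. The 80% default conversion is the in situ figure from
    [45]; actual reactors vary."""
    mol_h2_reacted = h2_kg * 1_000 / M_H2 * h2_conversion
    mol_ch4 = mol_h2_reacted / 4          # 4 mol of H2 per mol of CH4
    ch4_kg = mol_ch4 * M_CH4 / 1_000
    co2_kg = mol_ch4 * M_CO2 / 1_000      # 1 mol of CO2 per mol of CH4
    return ch4_kg, co2_kg

ch4, co2 = methanation_yield(100.0)       # 100 kg of electrolysis hydrogen
print(f"CH4 out: {ch4:.0f} kg, biogenic CO2 consumed: {co2:.0f} kg")  # ~159 kg, ~437 kg
```

The sketch also makes the symbiotic point concrete: every tonne of hydrogen methanated this way absorbs several tonnes of biogenetic CO2 that would otherwise be vented from an upgrading facility.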
A methanation plant could also be deployed without the upgrading facility, in which raw biogas, which consists of roughly 40% CO2 and 60% methane [47], could react with hydrogen from the electrolysis process. The biogas could then be used for combined heat and power (CHP) production to provide district heating to citizens and power to the grid. An option here could be implementing the upgrading facility after the biological methanation process to increase the gas yield (i.e., biomethane) even further.
The key input for the latter example is, besides hydrogen from the electrolysis, biogenetic CO2 from the biogas plant's upgrading facility, as well as CO2 from the raw biogas within the methanation plant. The methanation processes described earlier provide for a more circular and symbiotic usage of waste streams and can result in the provision and promotion of energy system services, like providing biomethane to the gasnet and/or biogas to drive motors/generators for CHP production, hence displaying an integration with the existing energy system and stabilization of the fluctuating RES sources. As regards technology maturity, there is, however, still a critical need for research and optimization of the ex situ methanation technology, while in situ technology has so far been proven to be more reliable.
Valorizing Waste Streams and Energy System Integration
As seen from the systemic elements detailed in Figure 1, 'Hydrogen usage' via the 'Process' of 'methanation' can potentially facilitate high energy system integration, whereby many synergies and energy services could potentially be obtained. This would also provide favorable options for stabilizing the fluctuating energy system from RES, as well as for storing and converting the hydrogen by utilizing already existing technical applications and energy infrastructure. These could be, e.g., gas and district heating networks, the use of CO2 via CCU from the already established biogas plants, or the methanation of hydrogen via various techniques applicable at already implemented Danish biogas plants. Hence, the need for immediately investing in new, highly expensive infrastructure in connection with PtX [48] could be avoided, and the maturation and dissemination of PtX technology could be coupled to existing and new biogas technology and could occur as a parallel process. Hydrogen pipes and extensive expansion of the electricity grid could, therefore, be implemented at a pace that follows the actual development and upscaling of the current electrolysis technology.
In the exploratory case study discussion, presented in Section 3, we investigate which waste streams exist in connection with the PtX plants implemented so far and how the three case study PtX plants could valorize waste streams (inputs and outputs) from the electrolysis process. Further, we identify whether energy system integration is obtained within the cases and, hence, was included as an important element for the plant layout/design.
The key questions we asked, and the elements we investigated within the case studies to obtain a greater understanding of how a more symbiotic PtX development could be achieved, can be outlined as follows:
• Renewable Energy Sources (RES): Where does the electricity come from?
• Carbon dioxide: Does the plant rely on CO2 from biogenetic or non-biogenetic sources?
• Water: Where does the water for the electrolysis processes come from?
• Waste heat: Is the surplus heat being utilized?
• Oxygen: Does pure oxygen have any usage?
• Energy system integration: Is the layout/design stabilizing the fluctuating RES within the energy system?
Results
In this section, the results of the exploratory case study reviews are presented.
Case Presentation
In the following, we present the results of the exploratory case study investigations of the three selected PtX plants, with the aim of identifying which and how waste streams are being, or will be, valorized, as well as to what extent energy system integration has been included in the plant layout/design. We introduce the PtX plants by following the topics presented earlier and depict the plant configurations in Figures 2-4, respectively.
3.1.1. Case 1: GreenLab Skive and PtX Plant ('the Energy Hub Case', Figure 2)

Introduction: In the future, the external storage of hydrogen will also facilitate the production of compressed hydrogen for the heavy transport sector and for industrial usage.
RES: Electricity is provided by a nearby solar PV park with a 26 MW capacity, as well as 54 MW of wind turbine capacity from a total of 13 Vestas turbines. This case thus represents a hybrid RES park. Overall, 80 MW of RES capacity is needed to achieve a 15 MW electrolysis capacity, but with a target of 120 MW of electrolysis, a considerable amount of RES must be implemented in the future. At some point, a large battery will also be implemented to store electricity from the RES production. The supply of RES to the national grid is currently an option for consideration.
CO2 usage: Biogenetic CO2 is captured from the on-site biogas plant's upgrading facility, amounting to around 26,500 t/y. The CO2 is used for methanol production, whereas the hydrogen production does not require any supply of carbon.
Water: Water is supplied from groundwater resources that are polluted and not appropriate to use as drinking water. Alternatively, the desalination of water could be applied at the plant if no other water resources could be utilized.
Waste heat: The waste heat from the electrolysis process is currently not being utilized, except for minor quantities piped to local industries. The deployment of a future district heating system is being discussed as an opportunity for the local community. Up to one-third of the energy production ends up as waste heat, with a temperature of around 75 °C, which would be appropriate for district heating purposes. However, the cost of a new district heating network is DKK 500 M (EUR 67 M), meaning subsidies from the government, or financial support from interested investors, would be required to install this. GreenLab Skive is aiming to invite industries with a high heat demand to relocate to the site so that it can supply energy to them in the form of hydrogen from the PtX plant. Temperatures of 600 °C to 1100 °C are sometimes required by industry, which could be provided by the distribution of hydrogen to the companies. GreenLab Skive hopes to store energy in the future by storing electrons as thermal energy, like steam, and recognizes the importance of this for ensuring energy efficiency and stabilizing the fluctuating energy from RES.
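A minimal sketch puts the one-third figure in context. It assumes the plant's initial 15 MW electrolysis capacity, 4,000 full-load hours per year, and an illustrative heat price of DKK 300/MWh; apart from the capacity and the one-third heat fraction, none of these are figures from the case itself.

```python
def annual_waste_heat_gwh(electrolyzer_mw: float, full_load_hours: float,
                          heat_fraction: float = 1 / 3) -> float:
    """Waste heat per year (GWh), assuming up to one-third of the energy
    input leaves as ~75 degC heat, as stated for the GreenLab Skive case."""
    return electrolyzer_mw * full_load_hours * heat_fraction / 1_000

heat_gwh = annual_waste_heat_gwh(electrolyzer_mw=15, full_load_hours=4_000)
heat_value_mdkk = heat_gwh * 1_000 * 300 / 1e6   # assumed DKK 300/MWh heat price
print(f"~{heat_gwh:.0f} GWh/y of heat, worth roughly DKK {heat_value_mdkk:.0f} M/y "
      f"against a DKK 500 M network investment")  # ~20 GWh/y, ~DKK 6 M/y
```

Even under these generous assumptions, the annual heat value is small relative to the DKK 500 M network cost, which is consistent with the case's conclusion that subsidies or external investors would be needed.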
Oxygen: There is no current use of, or future plans to use, the oxygen, and this output is not regarded as an important resource.
Energy system integration: GreenLab Skive is part of several projects investigating how to store heat. In the first project, the ambition is to develop an optimal design for a system that uses molten salt for heat storage and that can drive the steam supply for industrial processes. In the second project, "Energy Rocks", it is being investigated how to store excess electricity from wind turbines as heat in a rock bed, so that it can cover the thermal needs of GreenLab Skive and utilize the industrial park's wind and solar resources in the most optimal way. In this project, the rock bed is heated via a heater power of 60 MWel, and the target is to store up to 40 MWth of energy. These projects should assist in stabilizing the fluctuating energy from RES. Thus, the thermal storage of steam, as well as the storage of hydrogen for later usage, such as in a national hydrogen pipe system or a cavern storage facility (Cluster North project), is how the PtX plant can assist the stabilization of the fluctuating RES energy in the future [49].
3.1.2. Case 2: Vinkel Bioenergy and PtX Plant ('the Rural Case', Figure 3)

The Vinkel Bioenergy biogas plant was established in 2018 and utilizes 400,000 t of biomass feedstock annually, mainly comprising animal manure, crop residues, energy crops (maize), deep litter, and, soon, also source-separated organic household waste from the city of Viborg. The plant produces 52,000 Nm3 (normal cubic meters) of biomethane annually, which is upgraded to natural gas standards and injected into the natural gas network. The PtX plant will produce methanol for the transport sector. Vinkel Biogas is currently implementing a biomass boiler (to burn residual straw and wood chips) to generate additional heat for the upgrading facility. This heat is produced from biogas today, but the energy is not sufficient.
RES: The future electricity production will rely on solar PV, and the system will be implemented on a 105 ha land area close to the biogas plant. As a small airport is located right next to the plant, no wind turbines can be erected, and certified electricity from wind energy will be purchased instead to supplement the renewable energy production from solar. The solar PV facility will possibly be established with local ownership as a solar pool.
CO2 usage: The PtX plant will utilize biogenetic CO2 from the existing upgrading facility, as well as from the biomass boiler currently being implemented. The plant will thus contribute to CCU.
Water: Vinkel Biogas plans to utilize local water resources, like polluted groundwater, and rinse water (liquid manure) from the biogas process. The digested manure (digestate) contains 5-7% dry matter and could be separated into a fiber fraction and a water fraction, the latter of which could be rinsed for use in the electrolysis process. It is planned that the fiber fraction, being rich in nitrogen, phosphorus, minerals, etc., will finally be returned to farmland as valuable fertilizer.
Waste heat: The surplus heat from the cooling of the electrolysis process can be distributed via the district heating network for the nearby Højslev community, and from there, the heat could be distributed even further to the city of Skive. There is, hence, a large nearby heat market for the excess heat. Today, 4 MW of waste heat from the cooling of the upgrading facility is lost annually. The PtX plant could make it feasible to utilize this waste heat, which could be distributed together with the PtX waste heat as district heating in the nearby network.
Oxygen: Vinkel Biogas sees several opportunities for utilizing the oxygen output from the electrolysis process. One option is to use the oxygen in the biogas reactor to capture sulfur, which is an unwanted product in biogas. Today, this oxygen is delivered via an oxygen generator that utilizes electricity from the grid. Around 100 m3 of oxygen per hour is used to capture the content of sulfur in the biogas. Another option is to utilize the oxygen for biological air cleaning, which can be carried out by supplying the oxygen, which feeds the bacteria, into a tower that is capable of cleaning the air. Also, in this process, around 100 m3 of oxygen per hour would be required.
Energy system integration: The production of biogas and the distribution of biomethane to the gas network will help stabilize the energy system. This already happened before the PtX plant was established, but now, district heating will be supplied to the local community, which will assist in providing a stable energy system utilizing the already existing energy infrastructure [50].
3.1.3. Case 3: DIN Forsyning and PtX Plant ('the City Case', Figure 4)

Introduction:
The PtX plant is being developed at Esbjerg harbor and produces hydrogen to fuel local boats that service the offshore industry. The plant will be established in 2023 and is owned by European Energy. DIN Forsyning, which is a municipally owned multi-utility company (supplying water, district heating, and wastewater treatment), will be connected to the PtX plant via various services, as outlined ahead.
RES: Renewable energy produced from four wind turbines located in the community of Måde, close to Esbjerg, will be directly connected to the PtX plant and will facilitate the electrolysis process and, hence, the production of green hydrogen.
CO2 usage: As the PtX plant will produce hydrogen only, there is no use of carbon in the process and, hence, no need for CCU technology to be connected to the process.
Water: The water usage connected with the project is high and will likely amount to around 1.5 M m3 annually. Today, the water usage in Esbjerg city is around 3-4 M m3 annually, so this represents a major increase in local use. It takes 13 kg of groundwater to produce 9 kg of pure water for electrolysis, which can produce 1 kg of hydrogen. Roughly 1.1 M m3 of pure water equals 1 GW of electrolysis capacity; hence, the need for water is very high. DIN Forsyning will supply water (technical water) to the PtX plant, either as rinsed groundwater from alternative groundwater magazines, which are generally polluted with, e.g., pesticides or perfluorooctane sulfonic acid (PFOS), or from their wastewater treatment plants, where the dirty water is rinsed to obtain the required water quality. Groundwater for drinking water purposes is extracted from groundwater magazines that hold water that is cleaner than the ones planned to be used for the PtX plant. According to DIN Forsyning, other water resources might also be available and used as necessary.
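The case's rule of thumb of roughly 1.1 M m3 of pure water per GW can be reproduced from first principles, as in the sketch below. The 66% AEC efficiency is taken from earlier in the paper and the hydrogen lower heating value is a standard constant, but the ~70% capacity factor is our assumption, chosen to match the reported figure.

```python
LHV_H2_KWH_PER_KG = 33.3            # lower heating value of hydrogen (standard constant)
AEC_EFFICIENCY = 0.66               # average AEC efficiency cited earlier [38]
PURE_WATER_KG_PER_KG_H2 = 9.0       # from the case: 9 kg of pure water -> 1 kg of H2
GROUNDWATER_PER_PURE = 13.0 / 9.0   # from the case: 13 kg of groundwater -> 9 kg of pure water

def annual_water_use_m3(electrolysis_gw: float, capacity_factor: float = 0.70):
    """Return (pure water, groundwater) in m3/y; the ~70% capacity factor
    is an assumption, not a figure from the case."""
    kwh_in = electrolysis_gw * 1e6 * 8_760 * capacity_factor
    h2_kg = kwh_in * AEC_EFFICIENCY / LHV_H2_KWH_PER_KG
    pure_m3 = h2_kg * PURE_WATER_KG_PER_KG_H2 / 1_000   # 1 m3 = 1,000 kg of water
    return pure_m3, pure_m3 * GROUNDWATER_PER_PURE

pure, ground = annual_water_use_m3(1.0)
print(f"Pure water: {pure / 1e6:.2f} M m3/y, groundwater: {ground / 1e6:.2f} M m3/y")
# -> roughly 1.09 M m3/y of pure water per GW, consistent with the case's 1.1 M m3
```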
Waste heat: The waste heat from the cooling of the electrolysis process is substantial, with around 20-30% of the generated energy ending up as surplus heat. The PtX plant and DIN Forsyning aim to utilize the waste heat via a district heating system, which can cover the heat usage of 200 households. Waste heat at a temperature of 70 °C will leave the PtX plant via heat exchangers to the district heating network and return as cooling water at 40 °C. The colder return water will hence substitute alternative cooling processes that would otherwise have to be implemented to cool the water.
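The 200-household figure can be sanity-checked with a back-of-the-envelope heat balance, sketched below. Only the 20-30% surplus-heat share comes from the text; the electrolyser size, full-load hours, and per-household heat demand are assumptions of mine, so the output merely indicates what plant size the quoted figure would roughly correspond to.

```python
# Back-of-the-envelope check on the waste-heat figures above. The 20-30%
# surplus-heat share is from the text; the plant size, operating hours, and
# per-household demand below are assumed values for illustration only.

ELECTROLYSER_MW = 1.0           # assumed plant size (the text gives households, not MW)
WASTE_HEAT_SHARE = 0.25         # midpoint of the 20-30% quoted
FULL_LOAD_HOURS = 6000          # assumed operating hours per year
HOUSEHOLD_MWH_PER_YEAR = 18.0   # assumed district-heating demand per household

heat_mwh = ELECTROLYSER_MW * WASTE_HEAT_SHARE * FULL_LOAD_HOURS
households = heat_mwh / HOUSEHOLD_MWH_PER_YEAR
print(f"recoverable heat: {heat_mwh:.0f} MWh/yr -> ~{households:.0f} households")
# Under these assumptions, roughly 80 households per MW of electrolysis, so the
# ~200 households quoted would correspond to an electrolyser of a few MW.
```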
Oxygen: Currently, there are no plans for using the oxygen output from the PtX electrolysis process, but it could potentially be utilized within DIN Forsyning to feed the aerobic microorganisms used when rinsing wastewater.
Energy system integration: Sector integration is very important to ensure sustainability in PtX projects. The stabilization of the energy system can be facilitated by DIN Forsyning when operating heat pumps, electric water coolers, and heat accumulation tanks. The PtX plant in Esbjerg will, therefore, not itself contribute to stabilizing the energy system, but the connection to DIN Forsyning will ultimately contribute to this [41].
Discussion
This exploratory case study revealed large differences in how waste streams are managed and in the level of energy system integration within the three different PtX plants investigated as case studies. In the following, we discuss which form of symbiotic collaboration (inter-firm, regional, or long-distance collaboration) is being achieved in these three different cases, being an energy hub case, a rural case, and a city case, respectively. How to locate forthcoming PtX plants, especially in regard to existing and future biogas plants as development drivers, is also discussed at the end of the section.
The use of waste streams within or in proximity to the Vinkel Biogas plant and PtX plant, i.e., inter-firm collaboration, has been highly developed, with the oxygen output valorized as an input to the biogas production process and to air filtering, while the digestate output from the biogas plant provides rinsed water input to the PtX plant. Inter-firm collaboration is further strengthened by using biogenic CO₂ from the biogas upgrading facility as an input in the production of methanol. At the GreenLab Skive and PtX plant, biogenic CO₂ is likewise valorized via inter-firm collaboration as an input to methanol production, just as the distribution pipes within the hub can facilitate the use of minor quantities of heat within local companies in proximity to the PtX plant. For the DIN Forsyning and PtX plant, it is merely the input of municipal wastewater, rinsed for PtX production, that displays a form of inter-firm collaboration, with no other waste streams currently being valorized within the surrounding proximity. Hence, the GreenLab Skive and PtX plant is the only case that relies solely on external water inputs from groundwater resources for the electrolysis process.
When it comes to regional collaboration, the exploratory case studies also showed great differences. At the GreenLab Skive and PtX plant, no heat output is provided as input to the community, and there are no expectations of any district heating systems being deployed within the near future. Instead, future methods of storing the heat/steam outputs will eventually allow these waste streams to be valorized later, when opportunities are found and a feasible technology solution to do so is available. DIN Forsyning, on the other hand, already valorizes its heat output as input to a regional district heating system and expects to include any future contributions from upscaling of the PtX plant and, hence, to utilize the excess heat in its system. The Vinkel Biogas plant and PtX plant also valorize heat output from the PtX plant as input to a larger district heating network in the region.
Long-distance collaboration for the supply of biomethane to the national gas grid is achieved by the Vinkel Biogas plant and PtX plant and by the GreenLab Skive and PtX plant, but this was not initiated via the deployment of the PtX plants, as this energy output was established before the electrolysis processes were even implemented. Moreover, the GreenLab Skive and PtX plant distributes electricity outputs to the national power grid but expects that the deployment of a large battery will help store future electricity outputs for more flexible local usage. Further, the storage of future hydrogen outputs in pipes and caverns will provide various benefits by supporting further energy system integration. The DIN Forsyning and PtX plant does not provide inputs to such energy services and is involved only in long-distance collaboration, as its compressed hydrogen is produced merely for the offshore shipping industry. The stabilization of the fluctuating energy system is provided by DIN Forsyning, however not in connection with the PtX plant but rather in relation to other technical applications adopted, e.g., heat pumps and heat accumulation tanks.
The exploratory case study illustrated the existence of a wide variety of energy system integrations of the energy services produced by PtX technology, as well as future opportunities not yet harvested. The GreenLab Skive and PtX plant is in its first development stage, which currently limits the valorization of waste streams. In time, and with extensive economic resources, the hub could demonstrate many benefits as far as producing, distributing, utilizing, and storing hydrogen, as well as heat, in new systems that must be tested and developed further. The current energy system integration is as yet non-existent, and fluctuating RES are simply distributed to the national power grid without any stabilizing effect applied to them. Thus, the name GreenLab Skive (Lab = laboratory) is an accurate synonym for an energy hub being developed over a long time and whose expected results are unsure.
The Vinkel Biogas plant and PtX plant case study illustrates how PtX can benefit from being co-located next to a biogas plant, which instantly increases and helps realize the valorization of many of its waste streams, both internally and externally. This has been highlighted as an important factor for plant viability in the current scientific literature. Energy system integration is thus provided by the supply of heat as district heating and of biomethane to the existing national gas grid. The DIN Forsyning and PtX plant also provides energy system integration via the distribution of district heating in existing systems. However, its compressed hydrogen production does not utilize any of the surplus biogenic CO₂ that is currently emitted to the atmosphere, as is done in the other cases, which is a missed opportunity; its energy system integration is thus limited in its symbiotic outreach.
While the GreenLab Skive and PtX plant could provide important knowledge and results for assessing PtX implementation in the future, we stress the importance of relying on biogas plants as drivers for the deployment of PtX plants right now. Currently, 675,000 t of biogenic CO₂ is emitted to the atmosphere annually from the upgrading facilities of Danish biogas plants [42] and is not valorized, as described in Section 2.2.2. The theoretical potential CO₂ available until 2040 is estimated to be between 700,000 t and 1.3 Mt of biogenic CO₂, and when including CO₂ emitted from biomass combustion and waste incineration plants, the potential increases to between 2.4 and 3.4 Mt [51]. CO₂ can also be extracted from atmospheric air, but that technology is still immature and, hence, costly [51]. The production of carbon-based energy carriers via electrolysis would therefore initially benefit from co-location next to biogas plants, where biogenic CO₂ is available, and already being captured, which would facilitate the valorization of many other waste streams, as described. A rough stoichiometric bound on this potential is sketched below.
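To put the 675,000 t figure in perspective, the sketch below converts captured CO₂ into an upper bound on e-methanol output via the standard hydrogenation reaction CO₂ + 3H₂ → CH₃OH + H₂O. The molar masses are standard chemistry; the 52 kWh/kg electrolyser consumption is an assumed figure, and conversion losses are ignored.

```python
# Rough stoichiometric upper bound on e-methanol from the biogenic CO2 quoted
# above (CO2 + 3 H2 -> CH3OH + H2O). Molar masses are standard; the 52 kWh/kg
# electrolyser consumption is an assumption, and all losses are ignored.

M_CO2, M_H2, M_MEOH = 44.01, 2.016, 32.04   # g/mol
CO2_TONNES = 675_000                        # annual Danish biogas-upgrading CO2
KWH_PER_KG_H2 = 52.0                        # assumed electrolyser consumption

meoh_t = CO2_TONNES * M_MEOH / M_CO2
h2_t = CO2_TONNES * 3 * M_H2 / M_CO2
elec_twh = h2_t * 1000 * KWH_PER_KG_H2 / 1e9

print(f"methanol potential: ~{meoh_t / 1e3:.0f} kt/yr")
print(f"hydrogen required:  ~{h2_t / 1e3:.0f} kt/yr (~{elec_twh:.1f} TWh electricity)")
```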
If methanation is prioritized as an energy carrier system, existing biogas plants with upgrading facilities should be selected as drivers for PtX technology, and we suggest that such plants should normally be located near the national gas grid. In situ or ex situ methanation techniques can also be applied to increase the gas yield even further. PtX plants could also be co-located near biogas plants with no or limited access to a gas grid and produce methanol as an energy carrier; such biogas plants should typically be located near a district heating system that supplies heat to a community. However, methanol can also easily be transported by truck to other facilities and could hence be used to fuel the transport sector. In both cases, and especially the first, the distribution of excess heat to a nearby heat market is pivotal to obtaining symbiosis in electrolysis production. Existing biogas plants, as well as future biogas plants, could hence be planned and integrated into the PtX plant layout/design to enable the valorization of multiple waste streams and provide high integration with the existing energy system. In the case of the Vinkel Biogas plant and PtX plant, we discovered an example of a hybrid PtX plant layout/design, where methane, methanol, and district heating will be produced simultaneously, which might be possible in some cases depending on, among other factors, the energy system infrastructure.
Conclusions
PtX is expected to become an important part of the European energy supply, and ambitious political targets have been put forward to push the development of electrolysis technology. This is no different in Denmark, where 4-6 GW of electrolysis capacity is expected to be deployed before 2030, and several PtX development projects have already been implemented, with several new ones in the pipeline. This work investigated how waste streams from the electrolysis process could be valorized to create symbiotic collaborations in which resources are utilized rather than wasted, and looked at where future PtX plants could be sited in proximity to already implemented technology. Three exploratory case studies were investigated (a rural case, a city case, and an energy hub case), showing very different emphases and capabilities for valorizing waste streams depending on, e.g., the type of PtX plant layout/design, the evolution stage currently reached, the energy carriers being produced, and the access to important energy infrastructure. The findings address a current gap in the scientific literature, as stressed in, for example, refs. [20][21][22][23][24][25].
We conclude that PtX at the present development and implementation stage could benefit from co-locating next to biogas plants, as illustrated in the rural case, where upgrading facilities could provide an opportunity for the capture of biogenic CO₂ that is currently emitted to the atmosphere and hence lost. This applies both to hydrogen for e-fuels and to hydrogen for methanation. Using biogenic CO₂ as an input for the electrolysis-based processes and co-locating these plants in connection with biogas plants would enable the symbiotic usage of carbon and options for the high valorization of various waste streams, such as waste heat, oxygen, wastewater (digestate), and additional biogas yield via methanation. Energy system integration could thus immediately be achieved, as the energy outputs would comply with the existing energy system and hence stabilize RES sources. Superior internal, regional, and long-distance symbiotic collaborations could thus be realizable when focusing on biogas plants ('the rural case') as drivers for the current PtX development, compared to the 'city' and 'energy hub' cases investigated.
Figure 1. Systemic elements typically connected to a PtX plant.
Cellular Systems with Full-Duplex Amplify-and-Forward Relaying and Cooperative Base-Stations
In this paper the benefits provided by multi-cell processing of signals transmitted by mobile terminals which are received via dedicated relay terminals (RTs) are assessed. Unlike previous works, each RT is assumed here to be capable of full-duplex operation and receives the transmission of adjacent relay terminals. Focusing on intra-cell TDMA and non-fading channels, a simplified uplink cellular model introduced by Wyner is considered. This framework facilitates analytical derivation of the per-cell sum-rate of multi-cell and conventional single-cell receivers. In particular, the analysis is based on the observation that the signal received at the base stations can be interpreted as the outcome of a two-dimensional linear time invariant system. Numerical results are provided as well in order to provide further insight into the performance benefits of multi-cell processing with relaying.
I. INTRODUCTION
Techniques for provision of better service and coverage in cellular mobile communications are currently being investigated by industry and academia. In this paper, we study the combination of two cooperation-based technologies that are promising candidates for such a goal, extending previous work in [1] [2]. The first is relaying, whereby the signal transmitted by a mobile terminal (MT) is forwarded by a dedicated relay terminal (RT) to the intended base station (BS) [3] (see also [4] for a more recent account). The throughput of such hybrid networks has recently been studied in the limit of asymptotically many nodes [5] [6]. Moreover, information theoretic characterization of related single-cell scenarios has been reported in [7]. The second technology of interest here is multi-cell processing (MCP), which allows the BSs to jointly decode the received signals, equivalently creating a distributed receiving antenna array [8]. The performance gain provided by this technology within a simplified cellular model was first studied in [9] [10], and then extended to include fading channels by [11], under the assumption that BSs are connected by an ideal backbone (see [12] [13] for surveys on MCP).
Recently, the interplay between these two technologies has been investigated for amplify-and-forward (AF) and decode-and-forward (DF) protocols in [1] and [2], respectively. The basic framework employed in these works is the Wyner uplink cellular model introduced in [9]. Following the linear variant of this model, cells are arranged in a linear geometry and only adjacent cells interfere with each other; moreover, inter-cell interference is described by a single parameter α ∈ [0, 1]. In [1] [2], the RTs were restricted to half-duplex operation and the signal path between adjacent RTs was neglected. In this work we relax the latter restrictions by allowing full-duplex operation at the RTs and considering the signal path between adjacent RTs. Focusing on intra-cell time-division multiple-access (TDMA) operation and non-fading channels, we assess the gain provided by the joint MCP approach over the conventional single-cell processing (SCP) scheme by deriving the per-cell sum-rate in the two scenarios. We finally remark that a further contribution of this paper with respect to [1] [2] is the extension to a relaying scenario of the analytical framework introduced in [9], whereby the signal received by the BSs is interpreted as the outcome of a linear time-invariant system.
II. SYSTEM MODEL
We consider the uplink of a cellular system with a dedicated RT for each transmitting MT. We focus on a scenario with no fading and employ the framework of a linear cellular uplink channel presented by Wyner [9]. RTs are added to the basic Wyner model following the analysis in [1] [2] (see Fig. 1 for a schematic diagram of the setup). Throughout this paper we make the following underlying assumptions:
• The system includes infinitely many identical cells arranged on a line.
• A single MT is active in each cell at a given time (intra-cell TDMA protocol).
• A dedicated single RT is available in each cell to relay the signal from the MT.
• The signals from the MTs are received by the BSs via the relays (and not directly from the MTs).
• Each RT receives the signals of the MTs from its own cell and the two adjacent cells only.
• Each BS receives the signals of the RTs from its own cell and the two adjacent cells only.
• The channel power gains from the MT to its local RT and to its two adjacent RTs are denoted by β² and α², respectively.
• The channel power gains from the RT to its local BS and to its two adjacent BSs are denoted by η² and γ², respectively.
• The channel power gain from the RT to its two adjacent RTs is µ².
• The MTs use independent randomly generated complex Gaussian codebooks with zero mean and power P.
• The average transmit power of each RT is Q.
• The RTs are assumed to be oblivious and to use an AF relaying scheme.
• The RTs are assumed to be capable of receiving and transmitting simultaneously (i.e., full-duplex operation, which amounts to assuming perfect echo-cancellation between transmit and receive paths).
• The RTs amplify and forward the received signal with a delay of λ ≥ 1 symbols (an integer).
• The propagation delays between the different nodes of the system are negligible with respect to the symbol duration.
• No cooperation is assumed among MTs.
• No cooperation is assumed among RTs.
• All the attenuation parameters are known to the BSs.
The main differences between the current model and the model presented in [1] [2] are: (a) full-duplex operation at the relays (which introduces the relaying delay λ); (b) no direct connection between the MTs and the BSs; and (c) the RTs also receive the signals of the two adjacent MTs.
Accounting for the underlying assumptions listed above, a baseband representation of the signal transmitted by the m'th RT for an arbitrary time index n is given by
$$R_{m,n} = g\left(\beta X_{m,n} + \alpha X_{m-1,n} + \alpha X_{m+1,n} + \mu R_{m-1,n-\lambda} + \mu R_{m+1,n-\lambda} + Z_{m,n}\right), \qquad (1)$$
where Z represents the additive complex Gaussian noise process $Z_{m,n} \sim \mathcal{CN}(0, \sigma_Z^2)$, which is assumed to be independent and identically distributed (i.i.d.) with respect to both the time and cell indices. The received signal at the m'th BS antenna is given by
$$Y_{m,n} = \eta R_{m,n} + \gamma R_{m-1,n} + \gamma R_{m+1,n} + W_{m,n}, \qquad (2)$$
where W represents the additive complex Gaussian noise process $W_{m,n} \sim \mathcal{CN}(0, \sigma_W^2)$, which is assumed to be i.i.d. with respect to both the time and cell indices and to be statistically independent of Z. In addition, the RTs' gain g is selected to satisfy the average power limitation $\mathrm{E}\left[|R_{m,n}|^2\right] \le Q$.
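As a quick sanity check on the recursion (1), the following sketch simulates a finite ring of cells (an approximation of the paper's infinite linear array) and confirms that the relay power stays bounded whenever the gain satisfies g < 1/(2µ). Real Gaussians stand in for the complex-valued signals, and all parameter values are illustrative.

```python
import numpy as np

# Minimal simulation of the relay recursion (1) and BS signal (2) on a ring of
# M cells. The ring and the real-valued signals are simplifications; all
# parameter values are illustrative.

M, N, lam = 64, 4000, 1               # cells, time steps, relay delay
P, sZ2, sW2 = 1.0, 1.0, 1.0           # MT power and noise variances
alpha, beta = 0.2, 0.8                # MT -> RT gains
gamma, eta = 0.2, 0.8                 # RT -> BS gains
mu = 0.3
g = 0.9 / (2 * mu)                    # gain below the stability limit 1/(2 mu)

rng = np.random.default_rng(0)
X = np.sqrt(P) * rng.normal(size=(M, N))
Z = np.sqrt(sZ2) * rng.normal(size=(M, N))
W = np.sqrt(sW2) * rng.normal(size=(M, N))

R = np.zeros((M, N))
for n in range(N):
    fb = R[:, n - lam] if n >= lam else np.zeros(M)
    R[:, n] = g * (beta * X[:, n]
                   + alpha * (np.roll(X[:, n], 1) + np.roll(X[:, n], -1))
                   + mu * (np.roll(fb, 1) + np.roll(fb, -1))
                   + Z[:, n])

Y = eta * R + gamma * (np.roll(R, 1, axis=0) + np.roll(R, -1, axis=0)) + W

# Finite (stable) empirical relay power, since |2 g mu cos(theta)| < 1:
print("empirical E|R|^2 =", float(np.mean(R[:, N // 2:] ** 2)))
```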
III. SUM-RATE ANALYSIS
In this section, we derive the per-cell sum-rate of the cellular system at hand with MCP at the BSs and in the reference case with SCP.
A. Joint Multi-Cell Processing
In this section we assume that the signals received at all BSs are jointly decoded by an optimal central receiver. The receiver is connected to the BSs via an ideal backbone and is assumed to be aware of the Gaussian codebooks of all the MTs. It is noted that using similar arguments as in [9], it can be shown that in this setup an intra-cell TDMA protocol is optimal.
Extending the one-dimensional (1D) model introduced in [9], the linear equations (1) and (2) describing the network of Fig. 1 can be interpreted as a two-dimensional (2D) linear time-invariant (LTI) system. The block diagram of the equivalent 2D LTI system is depicted in Fig. 2, where the 2D filters are expressed in terms of δ_n, the Kronecker delta function, and the corresponding 2D Fourier transforms of the signals follow accordingly. Since the noise processes Z and W are zero mean, i.i.d., complex Gaussian, and statistically independent of each other and of the input signal X, the output signal at the BSs can be expressed as Y_{m,n} = S_{m,n} + N_{m,n}, where S_{m,n} and N_{m,n} are zero-mean wide-sense stationary (WSS) statistically independent processes representing the useful part of the signal and the noise, respectively. Now, using the 2D extension of Szegö's theorem [9], the achievable rate in this channel (without spectral shaping), which is equal to the achievable per-cell sum-rate of the network, is given for arbitrary g by
$$R_{mcp} = \frac{1}{(2\pi)^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} \log_2\!\left(1 + \frac{S_S(\theta,\varphi)}{S_N(\theta,\varphi)}\right) d\theta\, d\varphi, \qquad (6)$$
where S_S(θ, ϕ) and S_N(θ, ϕ) are the 2D power spectral density (PSD) functions of S and N, respectively. On examining Fig. 2, the PSD of the useful signal follows from the signal transfer function (7), while the PSD of the noise follows from the noise transfer function (8) together with (4).
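For readers who want to reproduce the qualitative behavior of the rate without the closed form, the sketch below evaluates the double integral numerically. The transfer functions are my own Fourier-domain reading of (1) and (2), not expressions copied from the paper, so treat the routine as an illustration of the 2D Szegö approach rather than a reference implementation of (9).

```python
import numpy as np

# Numerical evaluation of the MCP per-cell sum-rate integral. Derived (not
# quoted) transfer functions, obtained by taking the 2D Fourier transform of
# (1)-(2):
#   H_S = g (beta + 2 a cos t)(eta + 2 gam cos t) / (1 - 2 g mu cos t e^{-i lam p})
#   H_Z = g (eta + 2 gam cos t) / (1 - 2 g mu cos t e^{-i lam p})
# with S_S = P |H_S|^2 and S_N = sZ2 |H_Z|^2 + sW2.

def sum_rate_mcp(g, P=10.0, sZ2=1.0, sW2=1.0, alpha=0.2, beta=0.8,
                 gamma=0.2, eta=0.8, mu=0.3, lam=1, K=512):
    t = np.linspace(-np.pi, np.pi, K, endpoint=False)   # spatial frequency theta
    p = np.linspace(-np.pi, np.pi, K, endpoint=False)   # temporal frequency phi
    T, Ph = np.meshgrid(t, p, indexing="ij")
    den = 1.0 - 2.0 * g * mu * np.cos(T) * np.exp(-1j * lam * Ph)
    a = (eta + 2.0 * gamma * np.cos(T)) / den
    S_S = P * np.abs(g * (beta + 2.0 * alpha * np.cos(T)) * a) ** 2
    S_N = sZ2 * np.abs(g * a) ** 2 + sW2
    return float(np.mean(np.log2(1.0 + S_S / S_N)))     # = (2 pi)^-2 integral

for g in (0.5, 1.0, 1.5):    # all below the stability limit 1/(2 * 0.3) ~ 1.67
    print(f"g = {g:.2f}: R_mcp ~ {sum_rate_mcp(g):.3f} bit/s/Hz per cell")
```

Consistent with the discussion below, the computed rate grows with g on 0 ≤ g < 1/(2µ) and is unchanged by the delay λ.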
Proposition 1, proved in Appendix A, provides the resulting closed-form expressions for the relay output power (10) and the per-cell sum-rate R_mcp (9). It can be seen that the optimum is achieved when the relays use their full power Q, with the corresponding gain g_o satisfying g_o → 1/(2µ) as Q → ∞. Other observations are that the sum-rate R_mcp is not interference limited and that it is independent of the actual RT delay value λ. In the following, we consider some relevant special cases.
1) No inter-relay interference: In this case µ = 0, and by setting µ = 0 in (10) the corresponding expressions (11) and (12) are obtained. 2) Half-duplex operation: In this case, the RTs are not capable of simultaneous receive-transmit operation. Accordingly, time is divided into equal slots: during odd-numbered slots the MTs transmit with power 2P and the RTs only receive, while during even-numbered slots the MTs are silent and the RTs transmit. It is easily verified that the per-cell sum-rate in this case is given by multiplying (11) by 1/2 while replacing P and Q with 2P and 2Q, respectively, in both (11) and (12).
B. Single Cell-Site Processing
In this section we consider a conventional SCP scheme in which no cooperation between cells is allowed. According to this scheme, each cell-site receiver is aware of the codebooks of its own users only, and it treats all other cell-site signals as interference. Notice that since the RTs are oblivious, their AF operation is not influenced by the fact that the BSs are not cooperating. In addition, since the input signals and noise statistics remain the same, expression (10) is also valid for the current setup.
The output signal can be decomposed as Y_{m,n} = S^U_{m,n} + S^I_{m,n} + N_{m,n}, where the useful part S^U collects the contribution of the local MT's signal X_m, the interference part S^I collects the contributions of all other cells' MTs, and N is the noise part; here h_S and h_N are the signal and noise space-time impulse response functions whose Fourier transforms are given in (7) and (8), respectively. Since X, Z, and W are independent of each other, zero-mean complex Gaussian, and i.i.d. in space and time, it is easily verified that S^U, S^I, and N are independent and zero-mean complex Gaussian as well. It is also evident that for each m the processes are WSS along the time axis n. Accordingly, the output process at the m'th cell can be seen as a Gaussian inter-symbol interference (ISI) channel with additive colored independent interference and noise.
Proposition 2
The per-cell sum-rate of SCP with AF relaying is given, for an arbitrary relay gain 0 < g < g_o, by
$$R_{scp} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \log_2\!\left(1 + \frac{S_U(\varphi)}{S_I(\varphi) + S_N(\varphi)}\right) d\varphi,$$
where S_U, S_I, and S_N are the temporal PSDs of the useful, interference, and noise components, respectively. Proof: See Appendix B. It is noted that, in contrast to the MCP scheme, R_scp is interference limited. It is also easy to verify that R_scp is independent of the actual RT delay value λ.
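A companion numerical sketch for the SCP rate is given below. It reuses the same self-derived transfer functions, folds out the spatial frequency to obtain the temporal PSDs of the useful, interference, and noise parts, and evaluates the 1D rate integral; again this is my reading of the model, not the paper's closed-form result.

```python
import numpy as np

# SCP rate via the 1D ISI-channel view: S_U(p) = P |mean_t H_S|^2, the
# interference is the remaining signal power, and the noise is folded over
# theta. The transfer functions are derived, not quoted from the paper.

def sum_rate_scp(g, P=10.0, sZ2=1.0, sW2=1.0, alpha=0.2, beta=0.8,
                 gamma=0.2, eta=0.8, mu=0.3, lam=1, K=512):
    t = np.linspace(-np.pi, np.pi, K, endpoint=False)
    p = np.linspace(-np.pi, np.pi, K, endpoint=False)
    T, Ph = np.meshgrid(t, p, indexing="ij")
    den = 1.0 - 2.0 * g * mu * np.cos(T) * np.exp(-1j * lam * Ph)
    H_S = g * (beta + 2 * alpha * np.cos(T)) * (eta + 2 * gamma * np.cos(T)) / den
    H_Z = g * (eta + 2 * gamma * np.cos(T)) / den
    S_U = P * np.abs(H_S.mean(axis=0)) ** 2          # useful own-cell part
    S_tot = P * (np.abs(H_S) ** 2).mean(axis=0)      # all MT contributions
    S_N = sZ2 * (np.abs(H_Z) ** 2).mean(axis=0) + sW2
    return float(np.mean(np.log2(1.0 + S_U / (S_tot - S_U + S_N))))

print(f"R_scp ~ {sum_rate_scp(1.0):.3f} bit/s/Hz per cell")
```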
IV. NUMERICAL RESULTS
In Fig. 3-a the per-cell sum-rates of the MCP and the SCP schemes are plotted as functions of the inter-relay interference factor µ for P/σ² = 10 dB, Q/σ² ≤ 20 dB, σ_Z² = σ_W² = σ² = 1, α = η = 0.2, and β = γ = 0.8. The curves are plotted for an optimal selection of the relay gain g, which is shown for both schemes in Fig. 3-b. Examining the figures, it is observed that for this setting the MCP scheme demonstrates a meaningful performance improvement over the SCP scheme. The deleterious effect of increasing inter-relay interference µ is also demonstrated for both schemes. Moreover, the optimal relay gain for both schemes also decreases with µ. Another observation is that the optimal gain of the SCP scheme is lower than that of the MCP scheme for µ larger than some threshold. Hence, using the full power of the RTs is sub-optimal for the SCP scheme under certain conditions.
V. CONCLUDING REMARKS
In this paper, joint MCP of MT signals that are received only via dedicated RTs applying full-duplex AF relaying has been considered. The received signal at the BSs can be seen as the output of a 2D LTI channel. Using the 2D version of Szegö's theorem, a closed-form expression for the achievable per-cell sum-rate of the intra-cell TDMA protocol has been derived. As a reference, the rate of a conventional SCP scheme, which treats other-cell MTs' signals as interference, has also been derived. Comparing the rates of the two schemes, the benefits of the MCP scheme have been demonstrated. Moreover, we have observed that the rates of both schemes decrease with the inter-relay interference factor µ. For the MCP scheme, this can be explained by the fact that the equivalent 2D LTI channel becomes more distorted with increasing µ. Since no MT cooperation is allowed and no rate splitting is used, this distortion cannot be mitigated by power allocation over time or space, and the resulting rate decreases with µ. We have also shown that using the full power of the RTs is unconditionally optimal only for the MCP scheme. Numerical results have revealed that under certain conditions, the SCP setting produces an equivalent noisy ISI channel whose rate is not necessarily maximized by using the full RT power. Other, more sophisticated relaying schemes are currently under further investigation.
A. Proof of Proposition 1
It is easily verified that the RT output signal R_{m,n} in (1) is a WSS complex Gaussian 2D process with zero mean. Hence, its power σ_r²(g) can be expressed as the integral (13), where the third equality is obtained by substituting (4). Examining (13), it is clear that in order for the relay to transmit finite power (or for the whole system to be stable), the poles of the integrand must lie inside the unit circle. Assuming that g is real, this condition implies that 0 ≤ g < 1/(2µ). It is also verified, by differentiating the integrand of (13) with respect to g, that σ_r²(g) is an increasing function of g with σ_r²(0) = 0. By making the change of variable ϕ′ = λϕ and integrating (13) over ϕ′, we get (14), where the last equality follows from formula 3.616.2 of [14] and some algebra. It is noted that (14) implies that the power of the relay signal is independent of the actual relay delay duration. Expression (14) can be further simplified into its final closed form (10) by applying formulas 3.653.2 and 3.682.2 of [14] and some additional algebra.
To derive the per-cell sum-rate expression for an arbitrary RT gain g, we substitute (7) and (8) into (6) to obtain (15). It is easily verified, by differentiating the integrand of (15) with respect to g, that the rate is an increasing function of the RT gain g for 0 ≤ g < 1/(2µ). We can conclude that, since σ_r²(g) is also an increasing function of g, the rate is maximized when the RTs use their full power by setting their gain to g_o, which is the unique solution of σ_r²(g) = Q. Finally, by substituting (4) and applying formula 4.224.9 of [14] twice to (15), and using some algebra, we obtain (9).
B. Proof of Proposition 2
First, we express the three PSDs of interest in terms of the system signal and noise 2D transfer functions H_S(θ, ϕ) and H_N(θ, ϕ). Starting with the noise component, it is easily verified that its PSD follows directly from the 2D filter H_N(θ, ϕ) defined in (8).
To calculate the useful signal PSD, let us define the 2D filter ĥ^U_{m,n} = δ_m h^S_{m,n}. It is easily verified that its Fourier transform can be written as a 2D cyclic convolution (denoted **) of H_S(θ, ϕ) with δ(ϕ), the Dirac delta function. Hence, the useful signal PSD becomes
$$S_U(\varphi) = P\left|\frac{1}{2\pi}\int_{-\pi}^{\pi} H_S(\theta,\varphi)\, d\theta\right|^2.$$
Spectroscopic Study of Molybdenum Impurity Generation in LHW Sustained Plasmas on TST-2 Spherical Tokamak
Impurity generation mechanisms including RF sheath sputtering and heating from fast electrons were explored in LHW sustained plasmas on the TST-2 spherical tokamak. Molybdenum impurity was measured with a high-resolution spectrometer and the heating effect on a molybdenum target plate was estimated with a fast camera system. The LHW power modulation experiment indicates that RF sheath sputtering dominated impurity generation from the antenna limiters (molybdenum) under the current plasma parameters. In addition, the target plate insertion experiment shows that molybdenum atoms were released from the target when heated by fast electrons accelerated by the LHW. Although the heating effect was negligible for the antenna limiters, it could become significant under higher plasma parameters or during longer pulses.
Introduction
Spherical tokamaks (STs) are expected to achieve nuclear fusion economically and efficiently due to their compact configuration and high β [1]. However, the configuration limits the space for a central solenoid. Therefore, non-inductive current drive methods, including radio frequency (RF) wave injection, have been actively studied on STs. The lower hybrid wave (LHW), which is one of the RF waves, has been investigated on traditional tokamaks such as JT-60 [2], JET [3], TORE SUPRA [4], and EAST [5]. The results show its high efficiency in plasma heating and plasma current ramp-up. The applicability of LHW on STs is still under research. In TST-2, various launching conditions for LHW have been explored using capacitively coupled comb-line (CCC) antennas, which utilize capacitive coupling between neighboring elements to achieve high directionality, low refractivity, and convenient feeding [6-8]. Thus, understanding the characteristics of CCC antennas is of particular importance for future devices, and the RF sheath effect is one such topic. The RF rectified potential, namely a time-averaged increase in sheath potential, enhances the acceleration of ions within the rectified sheath width and the sputtering on the antenna surface and limiters [9, 10]. The erosion of the antenna surface is crucial for steady-state operation [11-13]. Meanwhile, the released high-Z impurities can dissipate plasma power and deteriorate current drive efficiency. Therefore, it is essential to experimentally evaluate the importance of RF sheath sputtering on CCC antennas. In this experiment, it was confirmed for the first time on a CCC antenna with spectroscopic diagnostics.
In plasmas sustained by LHW, electrons are accelerated almost parallel to the magnetic field lines through electron Landau damping. The electron velocity distribution can become highly anisotropic, and fast electrons, which are important components carrying the plasma current, are formed during the acceleration. According to the fast electron transport model [14], orbit expansions of fast electrons may induce energy deposition on limiters. If the local temperature on limiters exceeds the melting point, a substantial quantity of impurities may be generated. Consequently, heating from fast electrons can be a critical concern for antenna limiters. In earlier studies, it was observed that fast electrons generated within the scrape-off layer (SOL) were responsible for depositing heat onto components near the antennas [15, 16]. However, in this research, heating was found on a target plate far from the antennas. The majority of fast electrons that hit the target had an energy level exceeding 50 keV, as indicated by hard X-ray (HXR) measurements. This energy level is significantly higher than that of the fast electrons generated in the SOL, as shown in Ref. [16]. Hence, it is more likely that these electrons are associated with suprathermal electrons in the core plasma.
This paper is organized as follows. In Sec. 2, the experimental setup for the high-resolution spectrometer and the fast camera system is given. Section 3 presents the LHW power modulation experiment results, focusing mainly on RF sheath sputtering. In Sec. 4, the heating from fast electrons is discussed, and results from temperature reconstruction, impurity measurements, and HXR measurements are compared.
Experimental Setup
In TST-2, molybdenum limiters are used to protect the outer-midplane [6], top [7], and outer-off-midplane [8] antennas. In recent LHW experiments, MoI emission lines at 390.3 nm, 386.4 nm, 379.8 nm, and 313.3 nm were observed with a high-resolution spectrometer (FWHM ∼ 0.02 nm). MoI at 379.8 nm was chosen to investigate the molybdenum generation mechanisms because it has the strongest intensity among the observed lines. Most of the measurements on the limiters were taken through chord 1 at tangential radius R_tan = 450 mm, as shown in Fig. 1. Note that the plasma major and minor radii are less than 360 mm and 230 mm, respectively [17]. Furthermore, a Mo target plate was inserted into the outboard SOL in some discharges to examine its interaction with fast electrons. The target is placed far from the antennas, where the RF amplitude is low, thus minimizing the RF sheath effect at the target. It can be moved along the major radius, and the measurements presented in this paper were taken when the target tip was positioned at R_target = 570 mm, where maximum emission was observed. Note that the outboard last closed flux surface (LCFS) is usually located around R = 560 mm, and the limiters are located at R = 585 mm. Additionally, the target's rotatability enables varying the poloidal projection area of the fast electron flux, which is almost perpendicular to the poloidal plane. Measurements of the target were taken through chord 2, as shown in Fig. 1.
In estimating the target's temperature, a fast camera system was utilized, employing an IR filter (> 715 nm) and a high frame rate setting. The influence of plasma light became negligible when compared to the heated target. Subsequently, the target was treated as a gray body. A tungsten filament, which is located at the same distance as in the plasma experiments and has a relatively large flat area of 5 mm × 20 mm, was used to establish the relationship between filament temperatures and pixel intensities of the fast camera. Note that molybdenum and tungsten have similar black-body radiation emissivities. The filament temperatures under different consumed powers were estimated by analyzing their emission spectra with Planck's law. To reduce errors during temperature estimation, particularly for lower temperatures, the consumed power of the filament was fitted to the temperatures using the Stefan-Boltzmann law with a heat conduction correction [18]. The obtained relation was then used to convert pixel intensities to temperatures.
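A minimal sketch of this two-step calibration is given below: the filament power-temperature points are fitted with a Stefan-Boltzmann term plus a linear heat-conduction correction, and camera counts are then mapped to temperature by interpolating the calibration points. All numerical values in the sketch are invented placeholders, not TST-2 data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the calibration chain described above: (i) fit filament power vs.
# spectroscopically estimated temperature with a Stefan-Boltzmann law plus a
# linear heat-conduction correction, (ii) map camera counts -> temperature.
# The sample points below are invented placeholders, not measurements.

SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def power_model(T, eps_area, k_cond, T0=300.0):
    """Radiated power plus conduction loss of the filament at temperature T."""
    return eps_area * SIGMA_SB * (T**4 - T0**4) + k_cond * (T - T0)

# (temperature [K], consumed power [W]) calibration points (placeholders)
T_cal = np.array([1500.0, 1800.0, 2100.0, 2400.0, 2700.0])
P_cal = power_model(T_cal, eps_area=3e-5, k_cond=5e-3) * 1.02   # 2% "noise"

(eps_area, k_cond), _ = curve_fit(power_model, T_cal, P_cal, p0=[1e-5, 1e-3])

# Camera counts recorded for the same filament states (placeholders)
counts_cal = np.array([80.0, 400.0, 1400.0, 3800.0, 8600.0])

def counts_to_temperature(counts):
    """Map fast-camera counts to temperature via the calibration points."""
    return np.interp(counts, counts_cal, T_cal)

print(f"fit: eps*A = {eps_area:.2e} m^2, k = {k_cond:.2e} W/K")
print(f"2000 counts -> {counts_to_temperature(2000.0):.0f} K")
```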
LHW Power Modulation
Figure 2 shows the typical waveforms of a discharge where the antenna was switched from the outer-midplane antenna to the top antenna at t = 45 ms. Simultaneously, the plasma current and the line-averaged density were maintained around 10 kA and 1.5 × 10¹⁷ m⁻³, respectively. Figure 2 (e) shows the intensity of MoI measured at chord 1, and the difference before and after the switching indicates the strong dependence of molybdenum generation on LHW injection. Similar behavior is often observed in CuI emission lines, which are seen as a good indicator of RF sheath sputtering in TST-2, as the antenna elements are coated with copper. A 50 kW power modulation experiment was then conducted with 2 ms power-on and 2 ms power-off phases, as shown in Fig. 3. The measured intensities were processed by a conditional averaging technique, namely accumulated according to power modulation cycles, to enhance the signal-to-noise ratio, as shown in Fig. 4. Both CuI and MoI (chord 1) show rapid responses to the power modulation, which is consistent with the expectation for RF sheath sputtering. The typical rise times (10% to 90% magnitude) were 0.18 ms and 0.2 ms for MoI and CuI, respectively. Note that time delays exist between the measured intensity and the power pulse and are 0.12 ms and 0.14 ms for MoI and CuI, respectively. The time delay is defined as the time difference at amplitude saturation (90% magnitude). In principle, the RF sheath effect responds almost instantaneously to changes in the RF field amplitude, which is proportional to the square root of the RF power. However, the impurity emission was measured via an observation chord, and a certain amount of time is required for released particles to travel to and saturate the chord, resulting in a noticeable delay. Theoretical order estimations of the response time scales for the RF sheath effect are as follows. A point particle source releasing impurities into a half-sphere, as shown in Fig. 5, is considered. The emission rate P is assumed to be proportional to the LHW power. The density n of released impurities inside the observation chord at time t is contributed by the past source as n = P(t − l/V)/(2πl²V), where l denotes the distance to the particle source and V represents the velocity of the particles. After reaching maximum LHW power, the emission rate P becomes constant, and the delay is reflected by the variation in the impurity density n. Therefore, the time scales lie between the traveling times for the minimum and maximum l. The electric field amplitude near strap 1, with an injection power of 50 kW, was estimated to be 1.1 × 10⁵ V/m based on Fig. 4 in Ref. [19], where the electric field amplitude is shown for 1 W. The sheath potential at strap 1 was calculated to be 280 V using the equations in Ref. [20], considering a typical Langmuir probe measurement n_e = 1 × 10¹⁶ m⁻³ [21] and a Thomson scattering measurement T_e = 50 eV [22] at the edge. TRIM.SP, a Monte Carlo program based on the binary collision approximation, calculates the energy of sputtered atoms [23]. Simulation results by TRIM.SP with various projectiles and sputtered materials are summarized in Ref. [24].
The average energies of sputtered molybdenum and copper atoms, with normal incidence, were found to be 4.5 eV (∼ 3000 m/s) from limiter B and 5.1 eV (∼ 3900 m/s) from strap 1, respectively. Consequently, the time scales for MoI and CuI are 0 to 0.3 ms and 0.01 to 0.24 ms, respectively. Here, l is estimated from the geometrical arrangement of the antenna, limiter B, and viewing chord 1. The experimental time delays are consistent with the above estimation. However, the estimated velocity of copper is greater than that of molybdenum, while CuI exhibits a longer delay time than MoI in the experiment. The reason for this will be discussed in the next paragraph. The time scale of spectral line emissions, including excitation and relaxation, is generally at the level of nanoseconds, which will not affect the estimation results. However, it should be noted that the antenna power decays from strap 1 (next to limiter B) to strap 13 (next to limiter A), as shown in Fig. 5.3 of Ref. [25]. Since the power is proportional to the square of the RF field amplitude, the RF sheath effect also decays with the distance from limiter B. In this case, the Debye sheath of potential 160 V gradually dominates over the RF sheath, considering the same temperature and density at the edge. The energy of incident particles decreases with the potential drop, leading to a reduction in both the energy and the sputtering yield of the resulting sputtered particles. The measured line intensity essentially results from combining point sources representing each strap or limiter. The contribution of each source point to the average velocity can be determined by weighting it according to its respective sputtering yield. Here, the average velocity refers to the average velocity of all released particles of the same impurity. For molybdenum and copper, the dependence of the sputtering yield on the energy of incident particles is quite different [26]. Molybdenum's sputtering yield at limiter B is about 10 times greater than that at limiter A, while copper's sputtering yield remains almost constant among the antenna straps. This difference causes a different reduction in the average velocities compared to the case considering only strap 1 or limiter B. As a result, the reduced average velocities are about 2900 m/s and 3500 m/s for molybdenum and copper, respectively. The larger reduction in copper's average velocity is due to a higher weight assigned to the straps with lower energies of sputtered particles. This partly explains the longer delay time of CuI compared to MoI in the experiment. Another contributing factor is the variation in l among the different source points. However, explaining the variation in l requires a complete reconstruction of the emission process, which will not be addressed in this paper. Additionally, the measured CuI exhibits a longer decay time in the power-off phase compared with the rise time in the power-on phase, which is insignificant in MoI (chord 1). This is probably due to self-sputtering [27], as the antenna surface (coated with copper) is significantly larger than that of the limiters, leading to a higher probability of self-sputtering.
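The velocity and delay numbers above follow from elementary kinematics, as the short sketch below reproduces; only the maximum distances are my reading of the quoted delay windows.

```python
import numpy as np

# Reproduces the order-of-magnitude estimates above: v = sqrt(2E/m) for the
# mean sputtered-atom energies, then t = l/v. The maximum distances below are
# inferred from the 0-0.3 ms (Mo) and 0.01-0.24 ms (Cu) windows in the text.

EV = 1.602e-19    # J per eV
AMU = 1.661e-27   # kg per atomic mass unit

def sputtered_speed(energy_ev, mass_amu):
    """Mean speed of a sputtered atom with the given energy and mass."""
    return np.sqrt(2.0 * energy_ev * EV / (mass_amu * AMU))

for name, e_ev, m_amu, l_max_m in [("Mo", 4.5, 95.95, 0.90),
                                   ("Cu", 5.1, 63.55, 0.94)]:
    v = sputtered_speed(e_ev, m_amu)
    print(f"{name}: v ~ {v:.0f} m/s, delay up to ~{1e3 * l_max_m / v:.2f} ms")
# -> Mo ~3000 m/s and Cu ~3900 m/s, matching the values quoted in the text.
```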
Apart from the fast response attributed to the RF sheath effect, the H_β emission (Fig. 4 (c)) shows a baseline (i.e., offset) component and a long decay time during the power-off phase, indicating a relatively slow response to the modulation. The release of hydrogen atoms is caused by knocking from charge exchange neutrals and ions in the SOL, which depends only indirectly on the LHW power. The energy of the incident particles is around the ion temperature of about 10 eV, as determined by Doppler spectroscopy. Such energy is insufficient to generate copper or molybdenum impurities at a level comparable to RF sheath sputtering. In addition, the MoI from the target measured at chord 2 is shown in Fig. 4 (d), having the slowest response during the power-on phase among the measured scenarios. This is probably because the production rate of heating-released particles increases with time. As a source of heating, we consider fast electrons, which are accelerated by the LHW and enter the SOL [14]. This scenario is discussed in the next section together with other experimental results. On the other hand, the sputtering-released particles have a relatively stable production rate under a specific sputtering condition. The occurrence of sputtering, or any change in the sputtering condition, is reflected by reaching the maximum emission intensity in a much shorter time compared to heating, as shown by the power-on phases in Fig. 4. During the power-off phase, the response in Fig. 4 (d) is relatively fast, suggesting that the quenching (heat dissipation) of the heated regions was fast. This is similar to the behavior of electron thermal spikes produced by energetic electron beams, and the melting of samples is often observed in transmission electron microscope experiments [28]. Thermal spikes refer to spatially localized increases in temperature produced by energetic radiation on matter [29]. The localized heat deposition that forms thermal spikes is almost instantaneous, causing a large temperature gradient in the surrounding area, and the heat dissipation follows classical thermal conduction. Therefore, expecting a fast quenching of electron thermal spikes during the power-off phase is reasonable. Notice that during the power-on phase, continuous beams of fast electrons hit the target, and the number of generated thermal spikes increases over time, as indicated by the slow rise time.
Heating from Fast Electrons
Additional evidence of target heating from fast electrons will be presented in this section. The monotonic increase in the MoI intensity contributed by the target became evident with unmodulated RF power, as shown in Fig. 6 (e), where the background was contributed by the molybdenum limiters without target insertion. Note that the monotonic increase appears from around 50 ms, which is qualitatively consistent with the heating-release scenario, in which a certain temperature threshold exists. The threshold does not necessarily mean that a section of the target reaches the melting point, but rather an overall increase in the target's temperature, which enhances the generation of thermal spikes and results in a detectable difference in the impurity level. Therefore, both the monotonic increase and the delayed appearance are qualitatively consistent with the heating-release scenario. The heating from fast electrons was also captured by the fast camera system; the target configuration is shown in Fig. 7. The most intense heating effect was often observed just before the LHW power turned off, and the target started to cool down while the plasma still existed for more than 20 ms. The whole cooling-down process takes longer than 200 ms after the termination of the plasma. The temperature distribution on the target is shown in Fig. 8 for cases of different target angles at the moment of highest temperature. Note that the size corresponding to one pixel on the target is about 0.27 mm × 0.27 mm, which means that the fast camera's resolution is not capable of directly resolving thermal spikes, for which we expect a much finer scale.
Although the observed highest temperature of 3200 K exceeded molybdenum's melting point of 2900 K, no slowdown of the temperature increase due to latent heat was observed, suggesting that the temperature error is over 300 K. The temperature distribution reveals that the heating on the target always occurs from the right side, which faces the fast electrons. This happens regardless of the target's angle, indicating that fast electrons serve as the heating source. Meanwhile, only the tip region was found to be heated during the experiments, which means the heating power was deposited on a narrow region on the side of the target. Hereafter, the heat load is interpreted by integrating the temperatures over the bright-area pixels on the target. A similar experiment to that described in Sec. 3 was conducted to examine the response of the heat load to the power modulation, and the accumulated result is shown in Fig. 9.
The integrated temperature increases during the power-on phase, similar to Fig. 4 (d), but the response during the power-off phase is slow. This is because the change in the integrated temperature is governed by macroscopic heat conduction, where the temperature gradient is overall smaller than in the localized electron thermal spikes, which are responsible for the molybdenum generation.
The rotation angle dependence of the target is compared between the normalized total energy of HXR, the integrated temperature, and the MoI intensity, as shown in Fig. 10, where the background (without target) is subtracted for the MoI measurement and was negligible for the HXR and fast camera measurements. The HXR detector was located at the tangential window (Fig. 1), aiming at the side of the target. The HXR energy, MoI intensity, and integrated temperature show their highest values when the target angle is 90° and their lowest values at 0°. These behaviors are qualitatively consistent with the scenario in which the power deposition and the resultant three quantities arise from the fast electrons hitting the target, with the power proportional to the projection area of the fast electron flux on the target. However, since the change of geometry could potentially change the heat flux distribution on the side of the target and the boundary condition of heat conduction, which means that the three quantities are nonlinear functions of the deposited power, a quantitative discussion of the angle dependence is difficult.
Conclusions
Impurity generation mechanisms were investigated using a high-resolution spectrometer and a fast camera system in plasmas sustained by LHW. At the plasma parameters of I_p ∼ 10 kA, n_e ∼ 1.5 × 10¹⁷ m⁻³, and P_LHW ∼ 50 kW, the molybdenum impurity was found to be generated primarily through RF sheath sputtering on the antenna limiters. The highest RF sheath potential reached approximately 280 V, compared to the Debye sheath potential of 160 V. The impurity response under RF power modulation was consistent with the theoretical estimates. This is the first quantitative evaluation of the RF sheath effect on a CCC antenna. To further understand the RF sheath effect on CCC antennas, a parametric study is necessary, e.g., of the P_LHW dependence of the sheath quantities; we leave this as a future task. Furthermore, a heating effect and impurity release were observed on a molybdenum target plate inserted in the outboard SOL. The heat is expected to be deposited on the target by orbit expansions of fast electrons formed in the core plasma. The modulation of RF power indicates that electron thermal spikes are probably responsible for the generation of molybdenum impurities through the heating of the target. The temperature distributions (right-hand side in Fig. 8) reveal that only a narrow region on the side of the target received heating power from the fast electrons. The rotation angle dependence of the integrated temperature and the MoI intensity shows that the maximum heat load on the target was received at an angle of 90°, while the least heat load was at 0°. This trend aligns with the deposited energy of fast electrons on the side of the target, as measured with the HXR detectors. Therefore, it is reasonable to expect that heating from fast electrons could become a more serious problem in longer-pulse experiments or with improved plasma parameters.
Fig. 1 Observation chords of the high-resolution spectrometer under a top view of TST-2.
Fig. 2 Waveforms of plasma current I_p (a), line-averaged density n_e (b), LHW power P_LHW from the outer-midplane antenna (c), P_LHW from the top antenna (d), and MoI emission (e) during the antenna switching experiment.
Fig. 3 Waveforms of I_p (a), n_e (b), the radial position of the outboard LCFS R_LCFS (c), and P_LHW from the outer-midplane antenna (d) during the LHW power modulation experiment.
Fig. 5 Impurity density inside the observation chord is contributed by the past source with emission rate P.
Fig. 6 Waveforms of I_p (a), n_e (b), R_LCFS (c), P_LHW from the outer-midplane antenna (d), and MoI emission with (light blue) and without (black) target insertion (e) during unmodulated RF experiments.
Fig. 7 Photograph of the target tip (1 mm × 10 mm) at 45°. The photograph was taken by the same fast camera system without plasma and with another illumination light source.
Fig. 10 Comparison between the normalized total energy of HXR, the integrated temperature, and the MoI intensity at different target angles.
Construction of Three-Dimensional Road Surface and Application on Interaction between Vehicle and Road
A quantitative description is given of the three-dimensional micro and macro self-similar characteristics of the road surface from the perspective of fractal geometry, using FBM stochastic midpoint displacement and the diamond-square algorithm in conjunction with the fractal and statistical characteristics of standard pavements determined by the box-counting dimension estimation method. The comparative analysis between the reconstructed three-dimensional road surface spectrum and the theoretical road surface spectrum, together with the correlation coefficient, demonstrates the high reconstruction accuracy of the fractal reconstructed road spectrum. Furthermore, a bump zone is taken as an example to reconstruct a more arbitrary 3D road model through isomorphism of a special road surface with the stochastic road surface model. The tire footprint on the road surface is assumed to be a rectangle, where the pressure distribution is expressed with a mean stiffness, while the contact points in the contact area are replaced with a number of springs. A two-DOF vehicle is used as an example to analyze the difference between the three-dimensional multipoint-and-plane contact model and the traditional point contact model. The three-dimensional road surface spectrum provides a more accurate description of the impact effect of the tire on the road surface, thereby laying a theoretical basis for studies on the dynamical process of vehicle-road surface interaction and road friendliness.
Introduction
To effectively achieve active safety control of a vehicle, it is essential to attain the status and tire-pavement friction coefficient of a running vehicle in real time, which requires consideration of the interaction between the vehicle and the road surface. However, the inadequate description of road surface morphology details by existing road surface spectrum models is restricting the study of vehicle-road interaction. In the field of road engineering, road transportation safety is principally studied from the perspective of road skid resistance, and road surface topography is considered to be significantly associated with the skid resistance of the road surface (Kane et al. 2013; Wang et al. 2014) [1, 2]. Since the shape, size, and distribution characteristics of asphalt pavement texture substantially determine the skid resistance of the road surface, rational texture feature parameters help to accurately predict it. For in-depth research into the effect of road surface topography on skid resistance, the PIARC employs the undulating longitudinal wavelength of the road to describe the road surface topography, having proposed four types of texture, that is, microtexture, macrotexture, megatexture, and roughness. Meanwhile, the PIARC presented a variety of physical phenomena that correspond to the interaction between a running vehicle and the road surface, which played a significant role in studying the skid resistance mechanism of road surfaces, noise, and vehicle safety (PIARC, 1996) [3]. According to a series of PIARC studies, microtexture refers to morphology with a horizontal wavelength of less than 0.5 mm and a vertical amplitude ranging from 0 to 0.2 mm, which principally characterizes the surface asperity of aggregate particles and affects the actual contact area between tire and road surface. Microtexture determines the basic frictional properties of the road surface and mainly affects the pavement skid resistance at low speed. Referring to morphology with a horizontal wavelength ranging from 0.5 to 50 mm and a vertical amplitude ranging from 0.2 to 10 mm, macrotexture is principally dependent on aggregate shape, size, and distribution and may bring about tire rubber deformation and hysteresis energy loss that result in friction force; macrotexture principally affects road surface skid resistance in the case of high-speed travel and rainy days (Pulugurtha et al. 2012) [4].
From this point of view, the surface topography of pavement aggregate particles and the distribution of pavement particle protrusions jointly affect the skid resistance of the road surface. In recent years, domestic and foreign scholars have carried out a great deal of research on asphalt pavement morphology information acquisition, characterization, skid resistance evaluation, and prediction, and have made extremely valuable achievements. Many studies have demonstrated that the friction coefficient is determined by the morphological features between tire and ground, which is also an important factor in traffic accidents (Kotek and Florková 2014; Qian and Meng 2017) [5, 6]. The UK-based Transport and Road Research Laboratory (TRRL), as one of the earliest organizations engaged in studies on pavement skid resistance, studied the correlation between the risk of traffic accidents on damp road surfaces and the slipperiness of the road surface and developed skid resistance test equipment such as the pendulum friction coefficient tester and the sideway-force coefficient test car SCRIM [7]. Ergun et al. (2005) [8] developed an asphalt pavement microtexture measurement system composed of a planar mobile platform, light source system, microscope, CCD camera, image processing system, and so forth, of which the horizontal and vertical resolutions are 0.006 mm and 0.01 mm, respectively. Khoudeir et al. (2004) [9] described asphalt pavement microtexture characteristics by extracting statistics (mean value and standard deviation), autocorrelation function deviation, and so on. The gray gradient value of the image is analyzed to describe the friction properties of asphalt pavement, which can be utilized to assess the effect of road surface roughness on pavement wear. Due to the obvious randomness and complexity of the micro and macro morphology of road surfaces, traditional parameters exhibit instability as the measuring size and range change; studies have shown that the morphology illustrates self-similarity and scale independence with variation of the measuring dimension, which means that road surface topography indeed has fractal characteristics. Kokkalis et al. (2002) [10] proposed a roughness function combining fractal dimension and scale coefficient to describe road surface topography, finding that the skid resistance of asphalt concrete pavement is well correlated with the roughness function value. Zhang et al. (2013) [11] used a laser profiler to test the road surface macroscopic texture profile at 35 test points. It was found that the macroscopic texture profile of asphalt concrete road surfaces exhibits typical multifractal characteristics; the multifractal spectrum is right-hooked, and the distribution width of the multifractal spectrum reflects the variation range of the pavement profile undulation amplitude, which is significantly correlated with the mean profile depth (MPD) of the road surface.
Researchers in the field of automotive engineering have studied the driving safety, ride comfort, and handling stability of vehicles from the perspectives of the mechanical properties of the tire, the stress distribution between tire and road surface, and load transfer. From all these perspectives, it is found that inadequate pavement skid resistance is a major cause of traffic accidents and that road surface topography is significantly associated with vehicle driving performance [12]. The reconstruction of real and accurate road surface roughness has been discussed in both automotive engineering and road engineering. It is widely accepted that road surface roughness is a stochastic process [13], whose frequency-domain characteristics are described through the power spectral density (PSD), fitted with a power function or a rational function. Both fitting functions share the same model constitution and provide the time-domain model for automobile dynamics studies. Domestic and foreign studies on roughness have evolved from linear to nonlinear analysis, from frequency-domain to time-domain analysis, and from analytical analysis to simulation calculation. In particular, the time-domain model has evolved from single-point to multipoint models, from single-track to double-track models, and from 2D to 3D models. Ngwangwa et al. (2014) [14] built a two-dimensional road surface model through an artificial neural network, chose some road surface data as training data, and reconstructed the entire road surface using the trained network. Yu et al. (2007) [15] reconstructed three-dimensional road roughness based on multisensor fusion technology. Liu et al. (2014) [16] performed power spectral density analysis on acquired road spectrum data, reconstructed the graded power spectra of various typical road spectra, and analyzed the reconstruction accuracy of the road spectrum through the correlation coefficient. Road spectrum reconstruction is currently focused principally on two-dimensional road surfaces. However, such reconstruction fails to take into account the statistical self-similarity of the road surface, which results in significant deviation of the reconstructed road spectrum from the original road surface spectrum in terms of statistical properties. Wullens and Kropp (2004) [17] proposed a three-dimensional tyre/road contact model to calculate the dynamic radial contact forces, the local deformation, and the normal forced vibrations of the tyre structure. The contact problem is solved using an elastic half-space, with the road assumed to be rigid; however, the three-dimensional microroughness of the road surface is again not considered. The fractal interpolation reconstruction method based on fractal theory [18][19][20] makes up for the shortcomings of traditional reconstruction methods with respect to road surface roughness self-similarity and employs limited elevation data to reconstruct a three-dimensional road surface spectrum similar to the original one while satisfactorily preserving the statistical properties and fine structure of the road surface data. Wang et al. (2016) [21] built a three-dimensional stochastic road surface model through harmonic superposition, which was found to be perfectly consistent with the measured road surface in terms of power spectral density. Lu et al.
(2014) [22] built a theoretical model of a three-dimensional road surface using the iterative function method, thereby demonstrating the consistency of the reconstructed road surface spectrum with the original spectrum. Kogut and Jackson (2006) [23] compared contact mechanics results obtained with statistical and fractal approaches to characterizing surface topography. It was found that differences in the simulated contact area and load can be attributed solely to the different approach employed for surface characterization. Jiang et al. (2009) [24] proposed a contact stiffness model using a fractal description of surface topography to study the contact between the rough surfaces of machined plane joints. Barnsley (1993) [18] obtained the normal contact stiffness based on fractal theory through a Weierstrass-Mandelbrot function. The theoretical contact stiffness results obtained with the fractal method were also verified by experimental data. Buczkowski et al. (2014) [25] proposed a modified contact model of fractal rough surfaces. They concluded that the contact area depends on the contact loading and that the contact stiffness increases with increasing contact loading. However, the fractal technique is rarely used to study the problem of contact between tire and ground.
In view of the complexity of tire and road surface materials and their surface contact characteristics, the friction coefficient alone is far from sufficient to interpret the tire-pavement contact mechanism. Furthermore, road surface properties are even more important for road transportation safety in the event of sudden changes. Previous research on vehicle safety neglected the effect of road surface morphology, and the inadequate description of road surface morphology details in road surface spectra restricts studies on vehicle-road interaction. Owing to the statistical self-similarity of the road surface, a finer reconstruction of the three-dimensional road surface spectrum can be accomplished through fractal theory. In contrast with the two-dimensional road surface spectrum, the three-dimensional road surface spectrum is closer to the real road surface. Therefore, this paper employs the fundamental principle of fractal Brownian motion (FBM) [26] and uses the standard deviation of the randomly excited road surface roughness together with the fractal dimension. The three-dimensional road surface spectrum is reconstructed with the diamond-square algorithm and fractal Brownian motion theory. Furthermore, a two-DOF 1/4 car model is built, and the tire contact with the road surface is modeled as a multipoint-and-plane contact relationship. The coupling between the three-dimensional road surface spectrum and the three-dimensional tire is analyzed, and a comparative analysis is made between multipoint-and-plane contact and point contact between the tire and the rough road surface. This provides a theoretical basis for studies on the dynamic process of vehicle-road interaction, vehicle ride comfort, and road friendliness.
Time-Domain Model of Random Road Surface Roughness
Power Spectra at Spatial Frequency and Time Frequency.
The fitting expression of the road surface power spectral density $G_q(n)$ is

$$G_q(n) = G_q(n_0)\left(\frac{n}{n_0}\right)^{-w},$$

where $n$ is the spatial frequency (m$^{-1}$), the reciprocal of the wavelength $\lambda$; $n_0$ is the reference spatial frequency, $n_0 = 0.1\,\mathrm{m}^{-1}$; $w$ is the frequency index, that is, the slope of the line on log-log coordinates, which determines the frequency structure of the road surface power spectral density; and $G_q(n_0)$ is the road surface power spectral density at the reference spatial frequency $n_0$, called the road surface roughness coefficient (m$^3$). When a car travels at speed $v$ over a road surface with a spatial frequency of $n$, the equivalent temporal frequency $f$ (Hz) is

$$f = v\,n,$$

and the relation between the power spectral densities at temporal frequency and spatial frequency is

$$G_q(f) = \frac{G_q(n)}{v},$$

where $G_q(f)$ is the power spectral density at temporal frequency and $G_q(n)$ is the power spectral density at spatial frequency.
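As a quick illustration of these formulas, the following Python sketch evaluates the spatial PSD and converts it to the temporal PSD seen by a moving vehicle; the grade-B roughness coefficient and the travel speed are assumed example values, not data from this paper.

```python
import numpy as np

def road_psd_spatial(n, Gq_n0=64e-6, n0=0.1, w=2.0):
    """G_q(n) [m^3] at spatial frequency n [1/m]; Gq_n0 is the roughness
    coefficient at the reference frequency n0 (64e-6 m^3 is an assumed
    ISO 8608 grade-B value)."""
    return Gq_n0 * (n / n0) ** (-w)

def road_psd_temporal(f, v, **kwargs):
    """G_q(f) [m^2/Hz] for a vehicle travelling at speed v [m/s]:
    n = f / v and G_q(f) = G_q(n) / v."""
    return road_psd_spatial(f / v, **kwargs) / v

# PSD at 1 Hz for a car at 20 m/s on the assumed grade-B road
print(road_psd_temporal(1.0, 20.0))
```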
Description of the Road Surface Spectrum Time-Domain Model.
The PSD of the road surface spectrum is a statistic corresponding to a certain class of road surface roughness; the reconstructed pavement elevation is therefore not unique for a given road surface PSD. A time-domain road function corresponds to one sample function of the pavement elevation that is equivalent to the given road surface spectrum at a certain running velocity. Two prior conditions must be satisfied in order to reconstruct a road time-domain model that satisfies a given power spectral density based on a known road surface spectrum: (A) the road process is a stationary Gaussian stochastic process; (B) the road process is ergodic. The basic process of reconstruction is as follows: abstract the random fluctuation of the road surface elevation into white noise that meets certain conditions, and perform the fast inverse Fourier transform to achieve the time-domain model of the random road surface roughness through fitting. Many methods can be employed to generate the road surface elevation time-domain model for a stationary Gaussian stochastic process; the principal ones include the filtered white noise method, the stochastic sequence generation method, the harmonic superposition method, the AR (ARMA) method, and the fast inverse Fourier transform method. The filtered white noise method is frequently used due to its clear physical significance and easy calculation, as well as the immediate determination of the road surface model parameters from the road surface power spectrum values and the travel speed. The fitting form of the rational function of the power spectral density is

$$G_q(n) = \frac{\alpha}{n^2 + \beta^2},$$

where $\alpha$ and $\beta$ are the constants associated with the road surface grade.
Filtered White Noise Generation for Stochastic Road Surface.
When a car travels at a constant speed $v$, the time-domain power spectral density is

$$G_q(\omega) = \frac{4\pi^2 n_0^2\, G_q(n_0)\, v}{\omega^2}.$$

When $\omega \to 0$, $G_q(\omega) \to \infty$. Therefore, if the lower cutoff angular frequency is taken into account, the actual power spectral density [12] can be expressed as

$$G_q(\omega) = \frac{4\pi^2 n_0^2\, G_q(n_0)\, v}{\omega^2 + \omega_0^2},$$

where $\omega_0 = 2\pi n_{00} v$ is the lower cutoff angular frequency. This expression can be taken as the response of a first-order linear system excited by white noise. According to stochastic vibration theory,

$$G_q(\omega) = |H(\omega)|^2\, S_w,$$

where $H(\omega)$ is the frequency response function and $w(t)$ is white noise with power spectral density $S_w$, taken as $S_w = 1$, so

$$H(\omega) = \frac{2\pi n_0 \sqrt{G_q(n_0)\, v}}{\mathrm{j}\omega + 2\pi n_{00} v}.$$

It can then be obtained that

$$\dot{q}(t) = -2\pi n_{00} v\, q(t) + 2\pi n_0 \sqrt{G_q(n_0)\, v}\; w(t),$$

where $n_{00}$ is the lower cutoff spatial frequency, $n_{00} = 0.011\,\mathrm{m}^{-1}$; $G_q(n_0)$ is the road surface roughness coefficient (m$^3$); $w(t)$ is Gaussian white noise with zero mean; and $q(t)$ is the random elevation displacement of the road surface (m).
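A minimal sketch of the filtered white noise generation follows, assuming a forward Euler integration and the illustrative grade-B parameters from the previous snippet; the paper's own implementation details are not given in this excerpt.

```python
import numpy as np

def simulate_road_profile(v=20.0, T=100.0, dt=0.005,
                          Gq_n0=64e-6, n0=0.1, n00=0.011, seed=0):
    """Time-domain road elevation q(t) from the filtered white noise model
    dq/dt = -2*pi*n00*v*q + 2*pi*n0*sqrt(Gq_n0*v)*w(t),
    integrated with a simple Euler scheme. Parameter values (grade-B
    roughness coefficient, 20 m/s speed) are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    q = np.zeros(steps)
    for k in range(steps - 1):
        w = rng.standard_normal() / np.sqrt(dt)   # unit-intensity white noise
        q[k + 1] = q[k] + dt * (-2*np.pi*n00*v*q[k]
                                + 2*np.pi*n0*np.sqrt(Gq_n0*v)*w)
    return q

q = simulate_road_profile()
print(q.std())   # on the order of centimetres for the assumed grade-B road
```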
Box-Counting Dimensions Method-Based Road Surface Parameter Extraction
The author calculates the fractal dimension of a 5,000 m road surface random elevation and makes statistical and fractal analyses of the road surface elevation data by programming the box-counting dimensions method. The fractal dimension can be defined through

$$L(\varepsilon) = N(\varepsilon)\,\varepsilon,$$

where $\varepsilon$ is the measurement scale, $L(\varepsilon)$ is the curve length measured with the scale $\varepsilon$, $N(\varepsilon)$ is the number of measurement boxes, and $D$ is the fractal dimension of the measured curve. The box-counting dimensions method calculates the fractal dimension by covering the fractal curve with small boxes with side length $\varepsilon$. Some boxes are empty, while others cover part of the curve; the number of nonempty boxes is counted as $N(\varepsilon)$. As the box size gradually decreases, $N(\varepsilon)$ rises. Then the following formula is obtained when $\varepsilon \to 0$:

$$D = \lim_{\varepsilon \to 0} \frac{\ln N(\varepsilon)}{\ln(1/\varepsilon)}.$$

The least squares method is applied to a series of $\varepsilon$ and $N(\varepsilon)$ values to fit a straight line in log-log coordinates; the slope of the fitted line is the desired fractal dimension. Table 1 shows the fractal dimensions and standard deviations corresponding to the road surface spectra at all grades attained with the box-counting dimensions method.
As shown in Table 1, the fractal dimensions of the road surfaces at the various grades are essentially the same, which indicates obvious self-similarity. The reason is that the inverse Fourier transform ensures that the simulated road surface power spectral densities coincide with straight lines on log-log coordinates, which preserves the similarity information of the road surface roughness.
In the case of standard road surfaces, only a slight difference is observed between the fractal dimensions of road surfaces at the various grades obtained by the box-counting dimensions method, and the mean value is approximately 1.6, which is taken as the fractal feature index for standard road surfaces. Since the standard deviation falls within the internationally specified range of standard deviations, it can be taken as a statistical indicator for road surface grading. These two indicators together constitute the grading criteria for the reconstruction of three-dimensional standard road surfaces.
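The box-counting procedure described above can be sketched in Python as follows; the set of box sizes and the rescaling of the profile onto a square grid are assumed implementation choices.

```python
import numpy as np

def box_counting_dimension(profile, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a 1D elevation profile by box
    counting: for each box size, count the boxes that contain part of the
    curve, then fit log N(eps) against log(1/eps)."""
    z = np.asarray(profile, dtype=float)
    z = (z - z.min()) / (np.ptp(z) + 1e-12) * (len(z) - 1)  # grid units
    logs_inv_eps, logs_N = [], []
    for s in sizes:
        cols = np.arange(len(z)) // s
        n_boxes = 0
        for c in np.unique(cols):
            seg = z[cols == c]
            # vertical boxes of height s spanned by the curve in this column
            n_boxes += int(seg.max() // s - seg.min() // s) + 1
        logs_inv_eps.append(np.log(1.0 / s))
        logs_N.append(np.log(n_boxes))
    D, _ = np.polyfit(logs_inv_eps, logs_N, 1)  # slope = fractal dimension
    return D

# Example on the filtered-white-noise profile generated above:
# print(box_counting_dimension(q))
```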
Construction of Three-Dimensional Road Surface Spectrum with Road Surface Morphology Features
The road surface exhibits randomness and statistical self-similarity, which can be reconstructed through fractal Brownian motion (FBM). The midpoint displacement method, also known as the random midpoint displacement method, is the simplest and most classical method applied in FBM, especially for describing one-dimensional random processes. The diamond-square algorithm is based on the midpoint displacement method and can produce two-dimensional or three-dimensional topography; it can not only simulate three-dimensional pavement but also achieve a high reconstruction accuracy, as verified later in this article. The algorithm was introduced by Fournier, Fussell, and Carpenter and is also known as the diamond-quadrangle algorithm [27]: (1) Initialization: the two-dimensional array is initialized and the same elevation value is assigned to the four corners. The size of each dimension is taken to be a power of two plus one (e.g., 33 × 33, 65 × 65, 129 × 129).
Figure 1 shows the diamond-square algorithm process for a 5 × 5 array, where the elevation values of the 4 corners in Figure 1(a) are initialized and indicated with black spots. After five iterations, the grid points of the road surface are separated by a distance of about 3 cm, which meets our research requirements.
(2) The "diamond" stage: as shown in Figure 1(b), a random value is generated using the four points forming a square at the midpoint of such square, that is, the intersection of two diagonals.The midpoint value is equal to the sum of the mean of four corner points' values and the said random value.Diamond comes into being when a number of squares exist in grid, in which case the center point is indicated with black spot.
(3) The "square" stage: use the four points forming diamond to generate a random value with the same value range as the previous step at the midpoint of that diamond.Also, this midpoint value is equal to the sum of the mean of four corner points' values and the said random value as shown in Figure 1(c), thereby generating a square.
(4) Iterate the process above for the specified number of times.
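The steps above can be sketched as follows; the Hurst index H = 2 − D (with D ≈ 1.6 as in Table 1), the initial displacement scale, and the per-iteration damping of the random displacement are illustrative assumptions.

```python
import numpy as np

def diamond_square(n_iter=5, H=0.4, sigma=0.01, seed=0):
    """Generate a (2**n_iter + 1)^2 fractal surface with the diamond-square
    algorithm. H is the Hurst index and sigma the initial displacement
    scale in metres -- both illustrative values."""
    rng = np.random.default_rng(seed)
    size = 2**n_iter + 1
    z = np.zeros((size, size))          # four corners initialised to 0
    step, delta = size - 1, sigma
    while step > 1:
        half = step // 2
        # diamond stage: centre of each square = mean of 4 corners + noise
        for i in range(half, size, step):
            for j in range(half, size, step):
                corners = (z[i-half, j-half] + z[i-half, j+half] +
                           z[i+half, j-half] + z[i+half, j+half]) / 4.0
                z[i, j] = corners + delta * rng.standard_normal()
        # square stage: edge midpoints = mean of available neighbours + noise
        for i in range(0, size, half):
            for j in range((i + half) % step, size, step):
                nbrs = [z[i-half, j] if i >= half else None,
                        z[i+half, j] if i + half < size else None,
                        z[i, j-half] if j >= half else None,
                        z[i, j+half] if j + half < size else None]
                nbrs = [v for v in nbrs if v is not None]
                z[i, j] = sum(nbrs) / len(nbrs) + delta * rng.standard_normal()
        step, delta = half, delta * 0.5**H   # shrink grid and displacement
    return z

surface = diamond_square()
print(surface.shape)   # (33, 33) after five iterations
```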
Process of Determining One-Dimensional Stochastic Interpolation.
Brownian motion over a two-dimensional plane generates the three-dimensional morphology of a landscape: the coordinates $x$ and $y$ in the plane are associated with the surface elevation $z(x, y)$ at position $(x, y)$. The change of elevation that occurs during constant-speed travel along a straight-line path in the plane is fractal Brownian motion. For a travel distance $\Delta t$ in the plane, the elevation increment satisfies [28,29]

$$\operatorname{var}\left[z(t + \Delta t) - z(t)\right] \propto |\Delta t|^{2H}.$$

The stochastic interpolation algorithm is

$$z\!\left(\frac{t_1 + t_2}{2}\right) = \frac{1}{2}\left[z(t_1) + z(t_2)\right] + \eta,$$

where $\eta$ represents a Gaussian random function and $z$ means the elevation value of the related point. When $H \neq 1/2$, this leads to [30]

$$\operatorname{var}\left[z(t_2) - z(t_1)\right] = \sigma^2 |t_2 - t_1|^{2H},$$

where $\eta_n$ represents a Gaussian distribution with mean value equal to zero and variance equal to $\Delta_n^2$. Assuming $z(1) \sim N(0, \sigma^2)$ and $\eta_n \sim N(0, \Delta_n^2)$, the stochastic interpolation model starts from $z(0) = 0$. Solving for $\Delta_n^2$ is the key to obtaining the distribution of the stochastic increment. The main process is as follows: the first iteration gives $z(1/2) = \frac{1}{2}[z(1) - z(0)] + \eta_1$, and matching the variance condition above yields

$$\Delta_1^2 = \frac{\sigma^2}{2^{2H}}\left(1 - 2^{2H-2}\right).$$

The variance of the one-dimensional stochastic increment after $n$ iterations is

$$\Delta_n^2 = \frac{\sigma^2}{2^{2nH}}\left(1 - 2^{2H-2}\right),$$

so the one-dimensional stochastic increment is subject to $\eta_n \sim N(0, \Delta_n^2)$.
Process of Determining Two-Dimensional Stochastic Interpolation.
First, the grid of points in the two-dimensional space is defined; after $n$ subdivisions it consists of the points

$$\left(\frac{i}{2^n}, \frac{j}{2^n}\right), \qquad 0 \le i, j \le 2^n.$$

Hence each point in the two-dimensional space can be described by its elevation $z(x, y)$. These discrete increments are subject to the Gaussian distribution with expected value 0 and variance $\sigma^2$. The four vertices are initialized as $z(0,0) = z(0,1) = z(1,0) = z(1,1) = 0$, and a midpoint $z(1/2, 1/2)$ is needed; the initial iterative interpolation is expressed as

$$z\!\left(\frac{1}{2}, \frac{1}{2}\right) = \frac{1}{4}\left[z(0,0) + z(0,1) + z(1,0) + z(1,1)\right] + \eta_1.$$

The first iteration leads to $\Delta_1^2$ in the same way as in the one-dimensional case, and the variance of the two-dimensional stochastic increment after $n$ iterations is

$$\Delta_n^2 = \frac{\sigma^2}{2^{2nH}}\left(1 - 2^{2H-2}\right).$$

Then the stochastic increment of the two-dimensional case is subject to $z(x, y) \sim N(0, \sigma^2)$, $\eta_n \sim N(0, \Delta_n^2)$.
Fractal Brownian Motion Theory-Based Determination of Random Displacement.
The random increment in the diamond-quadrangle algorithm can be derived from fractal Brownian motion; that is,

$$\Delta_n \propto d_n^{\,H},$$

where $H$ is the Hurst index, $H = 2 - D_2$, $D_2 = 1.6003$, and $d_n$ is the segment spacing after $n$ segmentations. The following formulas are normally employed for the numerical calculation of the random displacement:

$$\Delta_n = \text{scale} \times \text{Gauss} \times d_n^{\,H}$$

or

$$\Delta_n = \text{scale} \times \text{Gauss} \times \left(\frac{1}{2^n}\right)^{H},$$

where "scale" is the scale factor, normally falling within (0, 1); Gauss is a random function subject to the standard normal distribution $N(0, 1)$; $D_2$ represents the fractal parameter value of the chosen regional terrain; $n$ is the number of iterations of the stochastic midpoint displacement; and $d_n$ stands for the segment spacing after $n$ segmentations.
The random increment of the first iteration is

$$\Delta_1 = \text{scale} \times \text{Gauss} \times \left(\frac{1}{2}\right)^{H}.$$

Accordingly, the random increment after $n$ iterations is

$$\Delta_n = \text{scale} \times \text{Gauss} \times \left(\frac{1}{2^n}\right)^{H}.$$

In this paper, five iterations are performed (i.e., $n = 5$) and the sampling interval is 1. The number of iterations is tied to the fractal parameter and the segment spacing $d_n = 1/2^n$ after $n$ iterations, so the successive number of iterations is $n = \log_2(1/d_n)$. The number of iterations is chosen by researchers to satisfy the research requirements. Figure 2 shows the flow chart for calculating the elevation of each point in the three-dimensional road surface spectrum through fractal Brownian motion and the diamond-square algorithm on the premise of five iterations.
Reconstruction of Three-Dimensional Rough Road Surface Spectrum.
The foregoing theory is employed to simulate grade A∼H standard highways; Figure 3 shows the simulation result after eight iterations. The road surface is 600 m in length and 33 m in width. The elevation values are taken from random samples obtained through the inverse Fourier transform of the standard road surface power spectrum; to simplify the analysis, the x direction and y direction of the road surface and the line segments in any plane are assumed to share the same fractal characteristics, and thus the same Hurst index, in the simulation.
Three-Dimensional Rough Road Surface Spectrum Reconstruction Accuracy Analysis.
A 5000 m three-dimensional road surface is reconstructed through the fractal interpolation theory and the solution method for the power spectral density. The accuracy of the road reconstruction proposed in this paper is verified by comparison with the power spectrum of the two-dimensional road surface. Due to space limitations, Figure 4 only lists the comparison diagrams of the standard power spectra for grades A∼D between theory and simulation results. As shown in Figure 4, a road surface roughness sequence consistent with the statistical properties of the standardized grade road surface can be achieved by reconstructing the road surface roughness using the fractal curve. A power spectrum comparison diagram alone, however, is inadequate for analyzing the accuracy of the reconstruction model; a quantitative, mathematical statistics-based regression analysis is also needed, taking the road surface roughness power spectrum as the regression model, to check whether the regression model data fit the standard or target data well. The power spectral density curve of a standard-grade road surface is a straight line on log-log coordinates, so the regression for the simulated power spectrum should be a linear regression, which is normally evaluated with the residual sum of squares ($S_e$), the residual standard deviation ($\sigma$), or the correlation index ($R^2$). Any of these three variables can determine the quality of the regression equation: the smaller $S_e$ and $\sigma$, the better; the larger $R^2$, the better, where

$$S_e = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2, \qquad \sigma = \sqrt{\frac{S_e}{N-2}}, \qquad R^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2},$$

where $y_1, y_2, \dots, y_N$ represent the reconstructed road surface roughness data; $\hat{y}_1, \hat{y}_2, \dots, \hat{y}_N$ are the original road surface roughness data; $N$ is the number of sampling points; and $\bar{y}$ is the mean of the reconstructed road surface roughness data. The reconstruction accuracy for the grade A∼D road surfaces is obtained from the residual sum of squares and the correlation coefficient.
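For reference, the three regression criteria can be computed as below (a minimal sketch; the variable naming follows the formulas above):

```python
import numpy as np

def regression_accuracy(y_recon, y_orig):
    """Residual sum of squares Se, residual standard deviation sigma, and
    correlation index R^2 between reconstructed and original roughness data,
    following the three criteria used in the text."""
    y_recon, y_orig = np.asarray(y_recon), np.asarray(y_orig)
    resid = y_recon - y_orig
    Se = float(np.sum(resid**2))                      # residual sum of squares
    sigma = float(np.sqrt(Se / (len(y_recon) - 2)))   # residual std deviation
    R2 = 1.0 - Se / float(np.sum((y_recon - y_recon.mean())**2))
    return Se, sigma, R2
```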
As shown in Table 2, the power spectrum curve attained through the reconstruction of the road surface roughness with the ideal fractal curve is quite close to the standard power spectrum, exhibiting extremely high reconstruction accuracy.
Reconstruction of Three-Dimensional Road Surface Spectrum with Special Road Surface Features.
Upon the establishment of the three-dimensional stochastic road surface spectrum for each grade of road surface reflecting the road surface topography, special road surface models can also be simulated to form a more authentic three-dimensional road model. According to the transport industry standard JT/T 713-2008 "Pavement Rubber Bump" [31], the profile of a rubber bump should be approximately trapezoidal, and the bottom width and height of the bump should be 300∼400 mm and 30∼60 mm, respectively. However, no complete standard has been issued for the pavement contour curve in terms of the specific cross section. Common bump cross-sectional profiles include the trapezoid, circular arc, and parabola. Figure 5 shows the parabola-shaped profile.
The parabolic cross-sectional profile shown in Figure 5 can be expressed by the mathematical equation

$$z(x) = h\left[1 - \left(\frac{2x}{b}\right)^{2}\right], \qquad |x| \le \frac{b}{2},$$

where $b$ is the bump width and $h$ is the bump height. The road bump model file available for simulation is obtained by writing a road bump model file generating program in MATLAB: the nodal coordinate matrix is developed through this equation, and the element matrix is generated with MATLAB. The parabolic road bump model is built on the assumption that the road bump is 300 mm in width and 50 mm in height; the installation location of the road bump is identified on the generated three-dimensional road surface; the corresponding road surface nodes and cell data are removed from the road surface spectrum file according to the bump width, and the node and cell data of the bump are added at the appropriate position; the isomorphism between the road bump and the stochastic road surfaces at all grades is then realized through the newly generated road surface spectrum files. Assuming that the road surface length is 12 m, Figure 6 presents only the road surface model with isomorphism between the bump and the grade-C road surface.
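A simplified Python sketch of generating the parabolic bump and superimposing it on a road surface array follows; the vertex-at-centre parabola and the superposition shortcut are assumptions standing in for the MATLAB node/element replacement procedure described above.

```python
import numpy as np

def parabolic_bump(x, width=0.3, height=0.05):
    """Parabolic bump profile z(x) = h*(1 - (2x/b)^2) for |x| <= b/2,
    zero elsewhere; widths and heights in metres (300 mm x 50 mm here)."""
    z = height * (1.0 - (2.0 * x / width)**2)
    return np.where(np.abs(x) <= width / 2.0, z, 0.0)

def add_bump(surface, dx, x_pos):
    """Superimpose the bump on a 3D road surface array (rows along the
    longitudinal direction) centred at longitudinal position x_pos."""
    x = np.arange(surface.shape[0]) * dx - x_pos
    return surface + parabolic_bump(x)[:, None]

# 12 m road with 7.8125 mm longitudinal spacing, bump centred at 6 m
road = add_bump(np.zeros((1537, 33)), dx=7.8125e-3, x_pos=6.0)
```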
Model of Tire Contact with Three-Dimensional Rough Road Surface
The interaction between vehicle and road surface is an extremely complicated dynamic process that involves vehicle dynamics, pavement structural mechanics, and frictional mechanics. When a car travels on a road surface, the road surface roughness is transferred as a displacement excitation via the tires and suspension to the car body and results in random vibration of the car body. The present section builds a 1/4 car body model and further introduces the process of tire contact with a random road surface; the tire load characteristics and the tire footprint are obtained through tests; the contact surface is considered to be composed of a finite number of points; the contact model is shown in Figure 7.
6.1. Determination of Tire/Road Contact Area. Different pressures are applied to a heavy tire through a self-developed actuator system. The test object is a heavy-duty radial tire 10.00R20 with a standard tire pressure of 830 kPa. The testing site is shown in Figure 8; the tire contact distribution is shown in Figure 9.
The contact area is 45,955 mm² at a standard load of 30,000 N. Test results show that the footprint width is 200 mm and the contact length is 230 mm. Since the distance between two adjacent points of the three-dimensional road surface is 7.8125 mm, there should be 725 contact points under the standard tire pressure and load: 25 points across the footprint width and 29 points along the contact length.
Two-DOF Vehicle Model.
A two-DOF 1/4 vehicle suspension model can be expressed by a spring and a damper connected in parallel, while the tire can be expressed by a mass block and a spring, as shown in Figure 10, where $m_1$ is the unsprung mass, including rim, tire, and axle; $m_2$ is the sprung mass, including compartment and load; $k_t$ is the tire stiffness coefficient; $k_s$ is the suspension system stiffness coefficient; $c_t$ is the tire damping constant; $c_s$ is the damping constant of the shock absorber in the suspension system; $q(t)$ is the ground elevation (road surface roughness), a stochastic process; $z_2(t)$ is the vertical displacement of the sprung part; and $z_1(t)$ is the vertical displacement of the unsprung part.
The system motion equations established based on Newton's second law of motion are

$$m_2 \ddot{z}_2 + c_s(\dot{z}_2 - \dot{z}_1) + k_s(z_2 - z_1) = 0,$$
$$m_1 \ddot{z}_1 - c_s(\dot{z}_2 - \dot{z}_1) - k_s(z_2 - z_1) + c_t(\dot{z}_1 - \dot{q}) + k_t(z_1 - q) = 0.$$

The tire-road surface contact model is taken as a model with a finite number of contact points, all of which have the same stiffness. The response of the vehicle-road coupling system under the random excitation of the road surface is calculated using the multipoint-and-plane contact of the three-dimensional road surface spectrum and the two-dimensional curve of an arbitrary profile section, including the vertical acceleration of the car body, the suspension distortion, and the tire force, as shown in Figures 11 and 12, where the dotted line represents the multipoint-and-plane contact model and the solid line represents the single point contact model. As shown in Figure 11, the peak values of the car body response and tire force of the multipoint-and-plane contact model, that is, the plane contact model, are much smaller than those of the point contact model: the peak value of the car body acceleration is smaller by 47%, while the root mean square value is smaller by 57.6%; the peak value of the suspension distortion is smaller by 60%, while the root mean square value is smaller by 56%; the peak value of the tire force is smaller by 46.7%, while the root mean square value is smaller by 54%. This means the plane contact between tire and ground has a buffering and enveloping effect on the road surface.
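To illustrate the comparison between the point contact and multipoint-and-plane contact models, the sketch below integrates the two-DOF equations with a semi-implicit Euler scheme; the mass, stiffness, and damping values are illustrative heavy-vehicle assumptions, and the moving-average footprint filter is a simplified stand-in for the full multipoint contact calculation. Running it with n_contact=1 versus n_contact=25 reproduces qualitatively the smoothing (enveloping) effect reported above.

```python
import numpy as np

def quarter_car_response(q, dt, n_contact=25,
                         m1=600.0, m2=4500.0, kt=1.9e6, ks=4.0e5,
                         ct=1.5e3, cs=2.0e4):
    """Integrate the 2-DOF quarter-car equations under road excitation q(t).
    Multipoint contact is approximated by replacing q with its moving
    average over n_contact footprint points (n_contact=1 gives point
    contact). All parameter values are illustrative assumptions."""
    if n_contact > 1:   # crude plane-contact envelope over the footprint
        kernel = np.ones(n_contact) / n_contact
        q = np.convolve(q, kernel, mode="same")
    z1 = z2 = v1 = v2 = 0.0
    acc_body = np.empty(len(q))
    for k, qk in enumerate(q):
        dq = (q[k] - q[k - 1]) / dt if k > 0 else 0.0
        a2 = -(cs * (v2 - v1) + ks * (z2 - z1)) / m2          # sprung mass
        a1 = (cs * (v2 - v1) + ks * (z2 - z1)
              - ct * (v1 - dq) - kt * (z1 - qk)) / m1         # unsprung mass
        v1 += a1 * dt; v2 += a2 * dt
        z1 += v1 * dt; z2 += v2 * dt
        acc_body[k] = a2
    return acc_body

# acc = quarter_car_response(q, dt=0.005)   # q from the road model above
```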
Analysis of Response under Bump Excitation.
A parabola-shaped road bump model is built by assuming the road bump width and height to be 300 mm and 50 mm, respectively. As shown in Figure 12, the single point contact model and the multipoint-and-plane contact model are used to obtain the dynamic response of the vehicle-road coupling system. According to Figure 12, the overall values of the car body response and tire force of the multipoint-and-plane contact model are smaller than those of the point contact model, but the peak value changes only slightly when the vehicle passes over the bump. Furthermore, there is little difference in action time between the plane contact model and the point contact model.
Conclusion
(1) The fractal dimensions at the various road grades obtained with the box-counting dimensions method are essentially identical, which shows the obvious fractal characteristics of road surface irregularity.
(2) A multiscale characterization method is proposed to reconstruct standard-grade road surface morphology through FBM. The contrast analysis, in terms of power spectral density, between the reconstructed three-dimensional road surface spectrum and the theoretical spectrum of the two-dimensional road surface demonstrates the high reconstruction accuracy.
(3) A two-DOF vehicle model is taken as an example to identify the differences between the three-dimensional tire-rough road surface multipoint-and-plane contact model and the traditional point contact model in terms of the car body acceleration, suspension distortion, and tire force responses. It is concluded that the multipoint-and-plane contact model reflects the real vertically enveloping characteristics of the tire, especially under the excitation of a bump. This paper does not take into account such special road surface conditions as turning, uphill, and downhill, which are to be subjected to more in-depth studies based on practice in the future.
Figure 4: Power spectrum comparison of road roughness.
Figure 5: Parabolic cross-sectional profile curve of road bump.
Figure 6: Bump and grade-C random road surface isomorphism simulation result.
Figure 7: Three-dimensional contact model of tire and rough road surface.
Analysis of Response under a Random Three-Dimensional Road Surface Spectrum. Assuming a road surface length of 10 m in the case of the grade-C random road surface, the single point contact model and the multipoint-and-plane contact model are used, respectively, to determine the dynamic response of the vehicle-road coupling system, as shown in Figure 11.
Figure 11: Vehicle-road response under random three-dimensional road surface spectrum.
Table 1: Fractal dimension and standard deviation of standard pavement samples.
Table 2: Regression analysis of standard grade road surfaces.
Recombinant factor XIII and congenital factor XIII deficiency: an update from human and animal studies
Factor XIII
Factor XIII (FXIII) belongs to the family of protransglutaminases and is the final enzyme of the blood coagulation cascade. Following activation by thrombin in the presence of calcium, FXIII becomes an active transglutaminase that crosslinks γ-glutamyl-ε-lysine residues of fibrinogen chains, leading to increased stability of the fibrin clot. 1 Activated FXIII also crosslinks antiplasmin to fibrin, making the clot more resistant to fibrinolysis by plasmin. 2 Plasma factor XIII (pFXIII) is an Mr ∼340,000 heterotetramer composed of two catalytic A subunits (FXIII-A 2 ; Mr ∼82,000) and two carrier B subunits (FXIII-B 2 ; Mr ∼76,500) linked together through noncovalent bonds. 3 An intracellular form of FXIII (cFXIII) is present in monocytes/macrophages and platelets as a homodimer of two A subunits (cFXIII-A 2 ). The first 37 amino acids at the N-terminus of FXIII-A comprise an activation peptide (AP-FXIII). Thrombin cleaves the R37-G38 peptide bond and then, in the presence of Ca 2+ , the B subunits dissociate and the dimer of truncated FXIII-A (FXIII-A 2 ') assumes an enzymatically active configuration (G38-FXIII-A 2 *) exposing an active Cys 314 residue. 3 Fibrin polymers are an important cofactor in generating activated FXIII (FXIIIa). 4 The three-dimensional structure of FXIII-A determined by X-ray crystallography disclosed that the A subunit is divided into four sequential domains: the β-sandwich (Glu 43 to Phe 184), the catalytic core (Asn 185 to Arg 515), and two barrel domains (Ser 516 to Thr 628 and Ile 629 to Arg 727). 5 The three-dimensional structure of FXIII-A provides further information concerning the structure-function relationship of FXIII. It was demonstrated that Cys 314 forms a hydrogen bond with His 373, while the other nitrogen atom in the histidine ring forms a hydrogen bond with Asp 396. 5 FXIII-B 2 functions as a carrier protein for FXIII-A 2 , 6 stabilizing the A subunits in the circulation and regulating the calcium-dependent activation of factor XIII. FXIII-B is composed of ten homologous consensus ("sushi") repeats. 7 Each repeat is approximately 60 amino acids in length and contains two disulfide bonds, with Cys 1 linked to Cys 3 and Cys 2 linked to Cys 4.
The gene for FXIII-A is located on chromosome 6p24-p25. 8,9 The gene spans 177 kb and is composed of 15 exons. Exon I consists of the 5′ noncoding region, and the activation peptide (amino acids 1-37) is encoded by exon II. 9 The gene for FXIII-B has been localized to chromosome 1q31-q32.1 10 and spans 28 kb composed of 12 exons. The first exon encodes the leader sequence, whereas exons II through XI each encode a single "sushi" repeat. 10 FXIII-A is synthesized in megakaryocytes and, during platelet formation, is packed into newly formed platelets. 11 [12][13][14][15] FXIII-B is synthesized in the liver 13 and secreted as a dimeric form (FXIII-B 2 ) into plasma. 16 Assembly of the factor XIII A and B subunits probably occurs in the circulation. The average plasma concentration of the A 2 B 2 heterotetramer is approximately 22 µg/mL and its half-life is 9-14 days. 17 FXIIIa catalyzes the formation of peptide bonds between adjacent molecules of fibrin monomer and thus imparts chemical and mechanical stability to a clot. In addition, a number of other proteins are also substrates for FXIIIa, including factor V, plasminogen activator inhibitor-2, collagen, thrombospondin, von Willebrand factor, vinculin, vitronectin, fibronectin, actin, myosin, and lipoprotein (a). 3
Consequences of FXIII deficiency in animal models
In addition to the fundamental role of FXIII in hemostasis, its importance in thrombosis and wound healing is emphasized in FXIII deficiency, which is characterized by bleeding, abnormal wound healing, and spontaneous miscarriage in females. An animal model of factor XIII deficiency, ie, FXIII-A knockout mice, manifests as intrathoracic, intraperitoneal, and subcutaneous hemorrhage. 18 Impaired tissue repair was observed in the left ventricles of FXIII-A knockout mice after myocardial infarction, while high FXIII activity was observed within the infarct of wild-type mice. 19 In male FXIII-A knockout mice, fibrosis of the myocardium with deposition of hemosiderin, a marker of hemorrhage, was observed. 20 Cutaneous wound closure was reduced by almost 30%, with necrotic tissue formation and delayed re-epithelialization, in FXIII-A knockout mice compared with wild-type mice. 21 Treatment with FXIII has been shown to be effective in correcting clinical manifestations in FXIII-A knockout mice, supporting the importance of FXIII in thrombus formation and wound healing. 18,21,22 Similarly, in rats with experimentally induced colitis, lesion severity was significantly reduced when treated with FXIII. 23 Reduced edema formation in reperfused ischemic rat hearts was observed following FXIII administration ex vivo. Further, treatment with FXIII reduced vascular leakage in a guinea pig model of antiserum-induced vascular damage. 24 In addition to its role in wound healing, FXIII also promotes angiogenesis, and it has been shown that FXIII treatment promotes angiogenesis in the rabbit cornea and in a heterotopic heart allograft in FXIII-A deficient mice. 25,26
Congenital FXIII deficiency
The first case of severe congenital FXIII deficiency was described more than 50 years ago in a Swiss boy with bleeding diathesis and impaired wound healing. 27 To date, about 800 patients have been diagnosed with FXIII deficiency. 1 Congenital FXIII deficiency is a severe bleeding disorder transmitted in an autosomal recessive manner. Typical bleeding manifestations include umbilical stump bleeding during the first few days of life, postoperative bleeding, and intracranial hemorrhage, which is observed more frequently in FXIII deficiency than in other inherited bleeding disorders. Other bleeding manifestations include ecchymoses, hematomas, and prolonged bleeding following trauma. 28 Hemarthroses and bleeding into the muscles are less common than in hemophiliacs.
Habitual abortion is commonly observed in affected females and in mice deficient in FXIII-A. 29 The mechanism(s) underlying this effect of FXIII might involve intrauterine bleeding. 29 In addition, formation of the cytotrophoblastic shell is impaired in affected homozygous women, probably due to deficient fibrin/fibronectin crosslinking at the implantation site, leading to detachment of the placenta and miscarriage. 30 Delayed wound healing has been reported in approximately 30% of FXIII-deficient patients. 31 The significance of FXIII in wound repair is based on several lines of evidence: sporadic reports demonstrate the beneficial effect of FXIII concentrates in clinical situations such as inflammatory bowel disease, graft versus host colitis, and healing of surgical wounds; [32][33][34] FXIII can modulate the composition and stability of the fibrin network; FXIII enhances migration, proliferation, 35 and phagocytosis 36 by monocytes and fibroblasts, which are essential components of the tissue repair process; and, finally, FXIII facilitates new blood vessel formation in both in vivo and in vitro models. 25,37 Since angiogenesis is an essential process for tissue repair and remodeling, it is possible that through its proangiogenic activity FXIII contributes further to the healing process. The diverse activities of FXIII in tissue repair are summarized in Figure 1. Most congenital FXIII deficiency is caused by FXIII-A subunit deficiency, which occurs at a frequency of approximately one in two million. 38 Congenital deficiency of the FXIII-B subunit is a rare cause of clinically significant FXIII deficiency. 39 Among 104 reported mutations causing FXIII-A deficiency, 50% are missense mutations, 26 are deletions/insertions, nine are splice site mutations, and ten are nonsense mutations. Only 16 mutations have been reported in the FXIII-B gene. 39 An updated list of mutations is available on the Internet (http://www.f13-database.de).
Diagnosis
The prothrombin time and activated partial thromboplastin time are normal in factor XIII deficiency. Patients deficient in FXIII-A lack plasma and platelet FXIII-A. In patients deficient in FXIII-B, the B subunit is absent in plasma while plasma FXIII-A is decreased to 5%-40% but normal in platelets. 40 Historically, the diagnosis of FXIII deficiency was established by confirming decreased FXIII activity using tests that demonstrated increased clot solubility in 5 M urea, dilute monochloroacetic acid, or acetic acid. However, these tests have substantial disadvantages: they detect only the most severe FXIII deficiencies (<0.5%-2% activity); they are poorly standardized; and their sensitivity depends on the features and concentration of the solubilizing agent as well as on the concentration of fibrinogen. Thus, solubility tests must not be used as screening tests for FXIII deficiency, and a quantitative functional assay for the determination of plasma FXIII activity should be used instead. FXIII activity is determined quantitatively by measuring the incorporation of fluorescent or radioactive amines into proteins. 41 Quantitative photometric assays are not accurate at levels of FXIII activity between 0% and 10% of normal because they overestimate FXIII activity, thus misdiagnosing severe cases of FXIII deficiency. For this reason, a plasma blanking procedure is recommended, 42 and modified sensitive photometric assays with a detection limit below 0.6% have been developed recently (Reanalker, Budapest, Hungary). Specific enzyme-linked immunosorbent assays have also been developed to establish FXIII-A, FXIII-B, and FXIII-A 2 B 2 antigen levels. 43
Treatment
The severity of bleeding symptoms in congenital FXIII deficiency is the main reason for preventive regular replacement therapy. Replacement therapy for factor XIII deficiency is highly satisfactory because of the small quantities of factor XIII needed for effective hemostasis (∼5%) and the long half-life of FXIII (9-14 days). Until recently, only plasma-derived sources of FXIII have been available, 44,45 including fresh frozen plasma, cryoprecipitate, and a plasma-derived, virally inactivated FXIII concentrate. Recently, a recombinant FXIII concentrate has been developed (described below). Fresh frozen plasma and cryoprecipitate are widely available but carry an increased risk of blood-borne infections and allergic reactions, and they have uncertain potency and unknown pharmacokinetics. The plasma-derived FXIII concentrate (Fibrogammin ® P) undergoes a virus-inactivation procedure; however, this virucidal procedure does not eliminate the possible transmission of non-lipid-enveloped pathogens such as parvovirus B19 and hepatitis A. In addition, since there is currently no screening test of blood donors for the prion that causes variant Creutzfeldt-Jakob disease, it should be kept in mind that prions are not destroyed by current virucidal methods. The cost of the recombinant concentrate is higher than that of cryoprecipitate or FFP, but lifelong exposure to a recombinant product devoid of any mammalian proteins has a substantial advantage with regard to patient safety.
Prophylactic therapy with Fibrogammin P at a dose of 10-20 U/kg every 4-6 weeks has been successful in achieving normal hemostasis. 44 During pregnancy, more frequent injections are necessary to prevent habitual abortion. 45 In a small study of seven patients, the mean annual number of spontaneous bleeds was 2.5 events per year prior to Fibrogammin P prophylaxis and 0.2 events per year during prophylaxis. 46 Yoshida et al reported that bleeds markedly decreased from 4.2±1.5 per year to 0.2±0.2 per year, with no life-threatening hemorrhage, including intracerebral hemorrhage, in four patients given regular replacement therapy with Fibrogammin P every 4 weeks for 10-19 years. 47 Finally, a recent prospective study showed that, on prophylaxis with Fibrogammin P, the majority of patients with FXIII deficiency had no hemorrhage, supporting the effectiveness of prophylactic treatment. 48 Recently, it was shown that treatment with the FXIII concentrate Fibrogammin P in patients with congenital FXIII-A deficiency increases platelet adhesion to fibrinogen by almost 30%, thereby further improving platelet function. 49 This may have important therapeutic implications for the use of FXIII concentrates. In addition to the well-established effect of FXIII on secondary hemostasis (clot stability), FXIII concentrate might also enhance primary hemostasis (platelet function) in patients with congenital FXIII-A deficiency.
A new recombinant FXIII (rFXIII) homodimer (rFXIII-A 2 ), originally developed by ZymoGenetics Inc (Seattle, WA, USA) and later transferred to Novo Nordisk A/S (Copenhagen, Denmark), has been manufactured in Saccharomyces cerevisiae (yeast) and contains no human/mammalian products. rFXIII-A 2 homodimers associate in plasma with endogenous FXIII-B to form the stable heterotetramer FXIII-A 2 B 2 .
Clinical studies conducted with the rFXIII concentrate are summarized in Table 1. [51][52] In a Phase I clinical trial, rFXIII had a half-life similar to that of native FXIII. 53 This new product was found to have a good safety profile and is appropriate for development as monthly prophylactic administration in patients with FXIII-A subunit deficiency. 53 A multinational, open-label, single-arm, multiple-dosing, Phase III prophylaxis trial was undertaken to evaluate the efficacy and safety of rFXIII for the prevention of bleeding in congenital FXIII-A subunit deficiency. 54 The estimated half-lives of the FXIII-A 2 subunit, FXIII-A 2 B 2 , and FXIII activity were similar to those reported for plasma-derived FXIII-containing products, 55 and for rFXIII in the previous Phase I study. 53 With regard to efficacy, no spontaneous treatment-requiring bleeds or intracranial hemorrhage occurred during the rFXIII treatment period. Bleeds requiring treatment were observed in only four of 41 participating patients, all of them due to trauma. The bleeding frequency was significantly lower than the rate of 2.91 bleeds requiring treatment per year in patients receiving on-demand treatment in retrospectively collected data.
No safety issues were raised besides the development of transient, low-titer, non-neutralizing anti-rFXIII antibodies in four of 41 patients, who continued to be treated with either rFXIII or plasma-derived FXIII. The presence of these non-neutralizing antibodies was not associated with any treatment-requiring bleeds, changes in FXIII pharmacokinetics, allergic reactions, or a specific genotype. Further, the antibodies declined below the detection limit in all patients, despite repeated exposure to rFXIII or other FXIII-containing products, indicating that they were not clinically significant. Taken together, this study demonstrated that rFXIII as monthly replacement therapy is efficacious and safe for prophylactic treatment in patients with congenital FXIII-A subunit deficiency. 54 Currently, the rFXIII concentrate (NovoThirteen ® , Novo Nordisk) has been approved for the treatment of FXIII-A deficiency by the European Medicines Agency for the countries of the European Common Market and in Canada.
In summary, all the symptoms of FXIII deficiency can be prevented or controlled by lifelong monthly prophylactic treatment with FXIII concentrates, thereby enabling a normal and active life for every patient.
Figure 1: Pleiotropic role of FXIII in tissue repair. During the first stages of wound healing, formation of a fibrin clot is stabilized by thrombin-activated FXIII (FXIIIa). At this stage, the platelet-rich thrombus is anchored to the vessel wall to prevent further bleeding. FXIIIa subsequently crosslinks adjacent fibrin chains and fibrinolysis inhibitors such as α2-antiplasmin and plasminogen activator inhibitor-2 to fibrin to form a stable platelet-fibrin clot that is resistant to fibrinolysis. In addition, FXIIIa mediates the incorporation of fibronectin and other extracellular matrix proteins into the fibrin clot to form a provisional matrix for migration and proliferation of macrophages and fibroblasts into the wound area. FXIIIa further facilitates new vessel formation by direct stimulation of endothelial cell migration, proliferation, and survival via upregulation of egr-1 and c-Jun and downregulation of TSP-1, thereby providing nutrients for the newly formed tissue. Abbreviations: FXIII, Factor XIII; FXIIIa, thrombin-activated Factor XIII; EC, endothelial cell; ECM, extracellular matrix; TSP-1, thrombospondin-1; egr-1, early growth response protein 1.
Table 1: Clinical studies with recombinant FXIII.
Discordance between the triglyceride glucose index and HOMA-IR in incident albuminuria: a cohort study from China
Background To date, no study has compared the associations of the TyG index and HOMA-IR with the risk of incident albuminuria. Accordingly, the objective of the present study was to use discordance analysis to evaluate the diverse associations of the TyG index and HOMA-IR with the risk of incident albuminuria. Methods A community-based prospective cohort study was performed with 2446 Chinese adults. We categorized participants into 4 concordance or discordance groups. Discordance was defined as a TyG index equal to or greater than the upper quartile and HOMA-IR less than the upper quartile, or vice versa. Results During a median follow-up period of 3.9 years, 203 of 2446 participants developed incident albuminuria (8.3%). In the multivariable logistic analyses, the high TyG index tertile group was associated with a 1.71-fold (95% confidence interval (CI) 1.07–2.72) higher risk of incident albuminuria compared with the low tertile group. Participants in the TyG (+) & HOMA-IR (−) group had a greater risk of incident albuminuria compared with those in the TyG (−) & HOMA-IR (−) group after multivariate adjustment. Subgroup analyses showed that low HOMA-IR with a discordantly high TyG index was closely related to the highest risk of incident albuminuria in subjects with cardiovascular metabolic disorders. Conclusions Participants with a discordantly high TyG index had a significantly greater risk of incident albuminuria, especially subjects with metabolic dysfunction. The TyG index might be a better predictor of the early stage of chronic kidney disease than HOMA-IR for subjects with metabolic abnormalities. Supplementary Information The online version contains supplementary material available at 10.1186/s12944-021-01602-w.
Background
Chronic kidney disease (CKD), a major disease burden, affects 8 to 16% of the population worldwide [1,2]. Globally, metabolic disorders are the most common causes of CKD [3]. Extensive evidence has confirmed the strong association between CKD and an increased risk of cardiovascular disease [4,5], and dysregulated metabolic factors, including diabetes mellitus, hypertension and dyslipidaemia, play leading roles in mediating this relationship; the early identification of CKD is therefore critical for preventing clinical cardiovascular events. Additionally, large-scale studies have proven that albuminuria is a sensitive biological marker of the progression of kidney disease in the early stage of CKD [6], and increased urinary albumin excretion is also an important indicator of cardiovascular metabolic risk factors [7,8].
Insulin resistance (IR) is an early metabolic change in individuals with CKD [9]. Although the hyperinsulinaemic-euglycaemic clamp (HIEC) is the well-accepted 'gold standard' approach for evaluating IR, the homeostasis model assessment of IR (HOMA-IR) is the most widely used tool to assess IR. Moreover, considering the convenience of implementation, researchers often use the upper quartile of HOMA-IR as the cutoff in population research. Triglycerides (TGs) and high-density lipoprotein (HDL) cholesterol are components of metabolic disorders [10]. Previous studies have reported that lipid ratios, such as the TG/HDL cholesterol ratio, the non-HDL cholesterol/HDL cholesterol ratio and the triglyceride-glucose (TyG) index, are good indicators for the early identification of IR and have been widely used in clinical practice [11]. Moreover, the TyG index, calculated from fasting glucose and triglycerides, has been shown to perform better than HOMA-IR [12] and to be significantly correlated with the HIEC [13].
To date, few studies have explored the connection between the TyG index and albuminuria. A community-based study by Zhao et al. [14] recently showed that, in elderly individuals, higher levels of the TyG index were closely related to a greater risk of CKD and microalbuminuria. However, no studies have compared the abilities of the TyG index and HOMA-IR to predict new-onset albuminuria in the general population. Therefore, the present study was carried out using discordance analysis to assess and compare the effects of the TyG index and HOMA-IR on incident albuminuria risk.
Study design and participants
A community-based investigation was undertaken from June to August 2009 in the Songnan community, Baoshan District, Shanghai, China. A detailed description of this study population has been published previously [15,16]. In total, 4012 participants underwent the examination at baseline. Serum and urine specimens were collected to determine the TyG index and the urinary albumin-to-creatinine ratio (UACR). All participants were asked to take part in a follow-up visit; 2883 individuals attended and were tested for UACR between March and May 2013. The current research was designed to explore the relationship of the TyG index and HOMA-IR with new-onset albuminuria. The exclusion criteria for the current analysis were subjects who (a) had self-reported kidney diseases at baseline (n = 36); (b) had UACR ≥ 30 mg/g or estimated glomerular filtration rate (eGFR) < 60 mL/min per 1.73 m 2 (n = 390); (c) lacked UACR (n = 5); and (d) had missing data for the TyG index or HOMA-IR (n = 6). Finally, 2446 individuals were included in this current study (Fig. 1).
The baseline upper quartiles of the TyG index and HOMA-IR were calculated to classify participants into the following 2 classes: low (lower than the upper quartile) and high (equal to or higher than the upper quartile). Then, participants were divided into 4 groups on the basis of the low/high values of the TyG index and HOMA-IR, as follows: TyG (−) & HOMA-IR (−), both indices low; TyG (+) & HOMA-IR (−), high TyG index with low HOMA-IR; TyG (−) & HOMA-IR (+), low TyG index with high HOMA-IR; and TyG (+) & HOMA-IR (+), both indices high. The study protocol was approved by the Institutional Review Board of Rui Jin Hospital, Shanghai Jiao Tong University School of Medicine. All participants provided written informed consent.
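As an illustration of this grouping step, the sketch below computes the two indices with their standard published formulas (not restated in this excerpt) and assigns the four groups by the baseline upper quartiles; the column and variable names are assumptions.

```python
import numpy as np
import pandas as pd

def tyg_index(tg_mgdl, fpg_mgdl):
    """Standard TyG definition: ln[fasting TG (mg/dL) x FPG (mg/dL) / 2]."""
    return np.log(tg_mgdl * fpg_mgdl / 2.0)

def homa_ir(fpg_mmol, insulin_uU_ml):
    """Standard HOMA-IR: FPG (mmol/L) x fasting insulin (uU/mL) / 22.5."""
    return fpg_mmol * insulin_uU_ml / 22.5

def concordance_groups(df):
    """Assign the four concordance/discordance groups using the baseline
    upper quartiles, mirroring the study's classification."""
    tyg_high = df["tyg"] >= df["tyg"].quantile(0.75)
    homa_high = df["homa_ir"] >= df["homa_ir"].quantile(0.75)
    labels = {(False, False): "TyG(-) & HOMA-IR(-)",
              (True, False):  "TyG(+) & HOMA-IR(-)",
              (False, True):  "TyG(-) & HOMA-IR(+)",
              (True, True):   "TyG(+) & HOMA-IR(+)"}
    return [labels[(bool(t), bool(h))] for t, h in zip(tyg_high, homa_high)]
```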
Data collection and measurements
Trained physicians used a standardized questionnaire to collect information, including sociodemographic features, education level, lifestyle and history of chronic disease, in two face-to-face interviews. Current smoking or drinking was defined as smoking or drinking frequently in the past half year. The International Physical Activity Questionnaire was used to evaluate physical activity levels [17]. Body weight, height and blood pressure (BP) were measured by experienced nurses following a standard protocol [15]. Participants were asked to rest for 5 min, and their seated blood pressure was measured three times on the nondominant arm with 1-min intervals. The average blood pressure value was used in the following analysis. Pulse pressure (PP) was obtained from the mean of the three measurements by subtracting diastolic BP (DBP) from systolic BP (SBP).
Since the detection methods and instruments for the blood samples and first-voided early-morning urine samples were described in previously published studies, they are not repeated here [15,18,19].
Definitions
New-onset albuminuria was defined as a UACR level of 30 mg/g or higher. CKD was defined as an eGFR < 60 mL/min per 1.73 m 2 or albuminuria. Hypertension was defined as an SBP level of 140 mmHg or higher, a DBP level of 90 mmHg or higher, or a self-reported previous diagnosis of hypertension by professionals. Diabetes was defined as a fasting plasma glucose (FPG) level of 7.0 mmol/L or higher, a 2-h glucose level of 11.1 mmol/L or higher after a 75-g oral glucose tolerance test (OGTT), a glycated haemoglobin (HbA1c) level of 6.5% or higher, or a self-reported diagnosis by physicians with use of hypoglycaemic medications, on the basis of the 2010 American Diabetes Association (ADA) criteria.
Statistical analysis
All data were analysed on the SAS version 9.4 platform (SAS Institute, Cary, NC, USA). A two-tailed P value < 0.05 was considered statistically significant. Baseline variables were compared across the 4 concordance or discordance groups. Continuous data are shown as means ± standard deviations, while categorical variables are displayed as numbers (%). Differences in baseline characteristics among the 4 concordance or discordance groups were assessed by one-way analysis of variance or the χ2 test.
The relationships of the TyG index tertiles, the HOMA-IR tertiles and the 4 concordance or discordance groups with new-onset albuminuria were explored using multivariate-adjusted logistic regression models. Covariates involved in the analysis included age, sex, current smoking or drinking status, education level, physical activity, HbA1c, PP, HDL cholesterol, LDL cholesterol, total cholesterol, BMI and use of angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin receptor blockers (ARBs).
Furthermore, stratified analyses of the 4 concordance or discordance groups with respect to the risk of new-onset albuminuria were repeated according to the status of diabetes, hypertension and age category (≥ 60 years old or < 60 years old).
Additionally, the above analyses were repeated for the outcome of incident CKD. The flow diagram is shown in Fig. 1.
Results
Baseline characteristics of participants in the 4 concordance or discordance groups according to the TyG index and HOMA-IR

Baseline demographic and clinical characteristics were compared across the 4 concordance or discordance groups according to the low or high categories of the TyG index and HOMA-IR (Table 1). The average age of the enrolled subjects was 59.17 years, and 968 of them were men (39.6%). Age, current smoking status, education level, diabetes, hypertension, and dyslipidaemia differed among the 4 groups. Furthermore, participants in the TyG (−) & HOMA-IR (+) group were more likely to have a higher body mass index (BMI), SBP, DBP, FPG, fasting insulin, post-load glucose and LDL cholesterol in comparison with the TyG (+) & HOMA-IR (−) group and had a higher prevalence of diabetes and hypertension. Scatterplots and the prevalence of discordance and concordance defined according to the upper quartile values of the TyG index and HOMA-IR are depicted in Supplementary Fig. 2.
Relationship of the TyG index, HOMA-IR and concordance or discordance groups with new-onset albuminuria and CKD

Table 2 shows the odds ratios (ORs) of new-onset albuminuria in participants according to TyG index tertiles, HOMA-IR tertiles and the 4 concordance or discordance groups.

Stratified analysis for associations between the concordance or discordance groups and new-onset albuminuria and CKD

Subgroup analysis was carried out to explore the association between the concordance or discordance groups and new-onset albuminuria in the enrolled participants (Fig. 2). Compared with the TyG (−) & HOMA-IR (−) group, participants in the TyG (+) & HOMA-IR (−) group had a significantly higher risk of new-onset albuminuria among patients with diabetes (OR: 1.96, 95% CI 1.05-3.65), with hypertension (OR: 1.83, 95% CI 1.05-3.21) and aged > 60 years (OR: 2.29, 95% CI 1.19-4.39) after multivariable adjustment. The stratified analysis was repeated for the concordance/discordance groups with the outcome of incident CKD, and the results are presented in Supplementary Table 2.
Discussion
This study observed that the TyG index was significantly associated with incident albuminuria in a dose-response manner after adjustment for confounding factors in middle-aged and older participants in China. Furthermore, the discordance analysis showed that participants in the TyG (+) & HOMA-IR (−) group had a higher risk of incident albuminuria after full adjustment, indicating that the TyG index was more strongly related to incident albuminuria than HOMA-IR. Notably, in the subgroup analyses, the risk of incident albuminuria was greatest among individuals with a discordantly high TyG index, suggesting that the TyG index might be a more effective indicator in participants with metabolic abnormalities, such as diabetes, hypertension and ageing. To our knowledge, this is the first study to compare the influence of the TyG index and HOMA-IR on the risk of incident albuminuria in a general population. IR in individuals with CKD is closely associated with risk factors for cardiovascular disease, and the underlying mechanisms may include chronic inflammation, oxidative stress and endothelial dysfunction [21]. Previous studies found that patients with early-stage CKD and near-normal creatinine had defects in the insulin-mediated metabolic pathway of glucose. A retrospective cohort study carried out in Korea by Jiang et al. [22] demonstrated that IR was positively associated with the development of albuminuria in healthy individuals without diabetes. Albuminuria is an early manifestation of CKD and is also a predictive factor for cardiovascular disease in subjects with or without diabetes. The hyperinsulinaemic-euglycaemic clamp (HIEC) is the gold-standard approach for assessing IR, but it is not practical in the clinic because it is time-consuming and labour-intensive. HOMA-IR is currently the most commonly applied clinical indicator of IR, and many studies have shown that HOMA-IR is strongly associated with the progression of albuminuria. Recently, the REACTION (Risk Evaluation of Cancers in Chinese Diabetic Individuals: A Longitudinal) study conducted in China [23] found that HOMA-IR was positively related to UACR in the prediabetes and diabetes groups, but this relationship was not found in the normal glucose tolerance group. Another large prospective study showed that HOMA-IR quintiles were correlated with the incidence of CKD in adults without diabetes. In the current study, HOMA-IR was further confirmed to have a dose-response relationship with new-onset albuminuria in the general population.
Previous studies have demonstrated a close linkage between the TyG index and cardiovascular disease. A study of healthy subjects demonstrated that an increasing TyG index was related to a greater risk of cardiovascular disease independent of diabetic status [24]. Few studies have examined the relationship between the TyG index and renal disease. Only one community-based cross-sectional study [14] found that a higher TyG index was related to elevated risks of microalbuminuria (OR: 1.61, 95% CI 1.22-2.13) and CKD (OR: 1.67, 95% CI 1.10-2.50). This finding is consistent with the present study. Furthermore, data from this study support the TyG index as an improved surrogate marker of IR compared with HOMA-IR in the early stage of renal disease. In the present study, participants in the TyG (+) & HOMA-IR (−) group had a significantly greater risk of incident albuminuria independent of traditional cardiovascular risk factors in the discordance analysis, first elaborating the diagnostic value of the TyG index in the early stage of renal damage.

Fig. 2 Stratified analysis of the association between concordance or discordance groups of the TyG index and HOMA-IR with incident albuminuria. ORs (95% CIs) were adjusted for age, sex, current smoking, current drinking, education, physical activity, HbA1c, PP, HDL cholesterol, LDL cholesterol, total cholesterol, BMI and medication treatment with ACEIs or ARBs. TyG, triglyceride glucose; HOMA-IR, homeostasis model assessment for insulin resistance; OR, odds ratio; CI, confidence interval; PP, pulse pressure; BMI, body mass index; HbA1c, glycated hemoglobin; LDL, low density lipoprotein; HDL, high density lipoprotein; ACEIs, angiotensin-converting enzyme inhibitors; ARBs, angiotensin receptor blockers
Abnormal metabolic conditions such as diabetes and hypertension are established risk factors for cardiovascular disease. Previous studies [25-27] have shown that diabetes and hypertension damage the microvascular system; the underlying mechanisms linking diabetes, hypertension and the microcirculation may include hypertrophic remodelling of small vessels, endothelial dysfunction and vascular dysfunction at the capillary network. These changes ultimately increase microvascular permeability to large molecules (such as albumin) and impair insulin sensitivity. Accordingly, several epidemiological studies [28,29] have reported the UACR to be a predictive index of cardiovascular events and mortality in diabetes, hypertension and the general population. In the current study, the risk of incident albuminuria across the 4 concordance or discordance groups was further investigated in different cardiometabolic disorder subgroups, showing that the TyG index performed better than HOMA-IR in identifying incident albuminuria risk in subjects with higher cardiometabolic risk, such as those with diabetes or hypertension.
Ageing is related to IR to a certain extent. A community-based study of participants aged ≥ 65 years in northern Shanghai [14] found that an elevated TyG index was associated with a greater risk of new-onset microalbuminuria or CKD. However, rather than comparing the TyG index with other markers of IR in ageing participants, that study only revealed a linkage between the TyG index and renal dysfunction. In the present research, the discordance analysis with HOMA-IR further showed that the TyG index could identify early-stage renal damage in elderly individuals, which could improve the early detection of disease.
Comparisons with other studies and what the current work adds to existing knowledge
Previous studies have demonstrated associations of the two indexes with microalbuminuria or CKD separately in specific populations, such as elderly individuals or those with diabetes [14,23]. In contrast, this study compared the performance of the TyG index with HOMA-IR in a general population and found that the TyG index was more effective than HOMA-IR at identifying new-onset albuminuria in subjects with metabolic disorders.
Study strengths and limitations
This study has several strengths. It directly compared the effectiveness of the TyG index and HOMA-IR at evaluating new-onset albuminuria risk in the general population for the first time and showed that the TyG index was better than HOMA-IR at identifying new-onset albuminuria in subjects with metabolic disorders. Several limitations should also be considered. First, this research was conducted in a Chinese population aged over 40 years and may not be generalizable to other ethnic groups. Second, the length of follow-up (3.9 years) may not have been sufficient to fully capture the occurrence of albuminuria and CKD. Third, the optimal cut-off value for the widespread use of the TyG index as a surrogate marker requires further research. Lastly, some underlying metabolic disorders that might influence the results, such as primary triglyceride abnormalities, were not considered in the current research.
Conclusion
This study found that participants with a discordantly high TyG index had a significantly greater risk of incident albuminuria, especially among subjects with cardiometabolic abnormalities. This conclusion supports the clinical value of the TyG index, which is readily available and reliable in clinical practice and could help clinicians identify early-stage CKD in advance.
Additional file 1. Supplementary Table 1 Incidence of CKD using the TyG index, HOMA-IR and concordance/discordance groups (N=2448). Supplementary Table 2 Stratified analyses of the association between concordance/discordance groups and CKD. Supplementary Figure 1 Participant Flow Diagram of CKD outcome. Supplementary Figure 2 Scatterplots and prevalence of discordance and concordance defined according to the upper quartile values of TyG index and HOMA-IR.
VZV Encephalitis with Brucella coinfection—case report
Abstract Encephalitis occasionally occurs due to central nervous system (CNS) infection by Varicella-zoster virus (VZV). The coincidence of herpetic encephalitis and brucellosis is rare. Here we describe a 56-year-old woman who presented with decreased consciousness, seizures, fever, and mood disorders. Brain CT revealed no pathological lesions, but MRI showed non-specific plaques in the periventricular white matter. VZV was detected by molecular testing on the viral encephalitis panel in cerebrospinal fluid (CSF). Blood culture and the Wright test revealed the presence of Brucella spp. The antiviral treatment of choice was acyclovir, with levetiracetam to control seizures and ampicillin/sulbactam as prophylactic antibiotics. The generally poor prognosis of coinfections makes it crucial to administer antiviral medications immediately. Many clinical challenges require a multidisciplinary team, including CNS involvement, resistant viral strains, disease reactivation, and drug toxicity. Early detection and treatment of encephalitis can promptly prevent exacerbation and complications.
INTRODUCTION
Encephalitis is inflammation of the brain in response to a virus or other microorganism. Varicella-zoster virus (VZV), a human α-herpesvirus, can cause primary varicella infection (chickenpox) or herpes zoster (shingles) after reactivation of the dormant virus. VZV infection is usually self-limited but can be exacerbated in immunocompromised patients, such as patients with HIV or transplant recipients, resulting in VZV reactivation [1]. According to the World Health Organization, VZV causes encephalitis in an estimated one out of every 33 000-50 000 cases [2].
On the other hand, brucellosis is a zoonotic, multisystem disease caused by the intracellular gram-negative Brucella spp., characterized by nonspecific symptoms such as fever, sweats, anorexia, headache, and backache [3]. According to reports, the incidence of brucellosis in Iran in 2021 ranged from 22 to 59 cases per 100 000 people [4]. This case report illustrates how swift recognition and treatment can control infection in a patient suffering from encephalitis as a consequence of shingles together with brucellosis.
CASE PRESENTATION
A 56-year-old female presented to the hospital emergency department with generalized clonic seizures, fever (up to 38.3°C), sweating, and aggressive behavior. On neurological assessment, a seizure with a decreased level of consciousness had occurred earlier in the week. Moreover, she had been suffering from cognitive impairment for several days.
Just before hospitalization, the patient had undergone phlebotomy (venipuncture). Then, due to dizziness and fainting, she fell and became unconscious.
The patient had no history of chronic disease. Physical examination revealed no nuchal rigidity; muscle strength and cranial nerve function were normal, and no skin lesions were identified. An initial workup was requested, including electrocardiography (ECG), blood sugar (BS) tests every 6 h, a computerized tomography (CT) scan, brain MRI (magnetic resonance imaging) with contrast, color Doppler sonography of the abdominal and pelvic vessels, and lumbar puncture (LP). On admission to the ward, LP was conducted with the patient's consent after anesthesia consultation. The CSF was sent for culture and analysis for the paraneoplastic, autoimmune meningitis, and encephalitis panels. Immunological tests were ordered for HBs Ag, HBs Ab, HCV Ab, ANA, and IgG on CSF. Molecular tests were also arranged for HSV, CMV, EBV, VZV, EV and HPeV on CSF.
She was initially prescribed levetiracetam (Levebel) (500 mg IV every 8 h) to control seizures, pantoprazole (40 mg IV) for gastric protection, and captopril (25 mg sublingual once a day) for hypertensive urgency. Then, for intravenous fluid maintenance, normal saline (500 cc with KCl 5 cc) plus a high-potassium diet (10 cc KCl in fruit juice) was added to the patient's daily treatment regimen.
There were no pathological lesions on the CT scan. MRI showed small vessel disease (SVD) as limited, nonspecific hyperintense plaques scattered in the periventricular white matter (Fig. 1). An elevated intracranial pressure (ICP) of 25 cm H2O was also recorded. The panel of viral pathogens in CSF causing viral encephalitis is depicted in Table 1 and shows a positive VZV PCR test of the CSF sample. Therefore, acyclovir treatment was started at the standard dose of 10 mg/kg three times a day (30 mg/kg/day). Other viral markers of encephalitis were negative, and IgG, IgM, and ANA were normal, as were color Doppler sonography and abdominal and pelvic ultrasound. HBs-Ag, HCV-Ab, the ANA test, and the serum toxicology panel were all negative. High levels on liver function tests (LFTs) and a high ESR in the peripheral blood indicated the presence of inflammation; the LFTs had been requested because of the high ESR. As a result, additional tests were requested to detect microbial infection by blood culture. Blood culture was positive for Brucella spp., so PCR, the 2-mercaptoethanol (2ME) reduction test, and Wright's test were ordered for confirmation; all three were positive, establishing brucellosis. Subsequently, antibiotic treatment against Brucella spp. was recommended, including cefotaxime (one gram intravenously twice a day), doxycycline capsules (100 mg twice a day), and rifampin capsules (3 mg twice a day). A summary of the laboratory test results is given in Table 2.
Liver enzymes subsequently declined as the underlying infection resolved and the liver condition improved. Acyclovir was continued, while the levetiracetam and pantoprazole ampoules were replaced by the corresponding tablets. A B-complex tablet and gabapentin (100 mg tablet once a day) were prescribed, and the patient's potassium supplementation was discontinued.
Ultimately, the patient was discharged after 16 days in good general condition, with instructions regarding neurological warning signs and medication orders.
DISCUSSION
ICU admission and follow-up decisions were based on defined criteria. The patient was admitted to the ICU because of a seizure with a decreased level of consciousness and benefited from receiving medication and treatment appropriate to the severity of her illness. Encephalitis should be suspected in any patient with fever, headache, behavioral abnormalities, seizures, or altered mental status. Patients with suspected encephalitis should be started on empirical acyclovir therapy while awaiting further diagnostic results; acyclovir is a small molecule that readily passes through the blood-brain barrier (BBB). In this patient, however, there were no dermatomal rashes or skin lesions on physical examination. The general diagnosis of VZV encephalitis may involve a combination of medical history, physical examination, laboratory tests, imaging, and pathogen tests [5]. Neuroimaging revealed inflammatory and vascular abnormalities. VZV vasculopathy occurs when VZV replicates within the arterial wall. An early lumbar puncture is crucial to the diagnosis of encephalitis, except in patients with contraindications; in such cases, CT or MRI can indicate elevated ICP. CSF should be tested for cell count, chemistry, stains, fungal and bacterial culture, and viral studies (PCR). In general, diagnostic tests on CSF are not sensitive enough for slow-growing or uncultivable organisms in the small volume of CSF available. Here, CSF was examined for the presence of encephalitis pathogens, and in addition to the clinical manifestations, conventional culture and PCR methods confirmed the result. CSF analysis in patients with viral encephalitis typically shows normal glucose but elevated protein levels; inflammatory or invading cells may produce protein in the CSF [6].
Treatment options for VZV encephalitis include intravenous acyclovir, which is the primary treatment; valaciclovir and ganciclovir, which are less commonly used; and adjunctive corticosteroids, which are controversial and should be considered on a case-by-case basis [5].
Brucellosis is an endemic infectious disease in Iran, and microbiological diagnosis of Brucella is often challenging. The general diagnosis of brucellosis involves medical history, physical examination, laboratory tests, imaging, and pathogen tests [7]. Blood culture is the gold standard for laboratory diagnosis of Brucella, and the Wright and 2ME tests are also of great value. Even so, there is growing evidence that molecular identification can assist in the accurate diagnosis and treatment of brucellosis; here, the PCR test for Brucella spp. was positive. Antimicrobial regimens and treatment durations for brucellosis vary widely in the literature, but it is recommended to prescribe three antibiotics for at least three months [8]. The decision was made to use combination treatment with rifampin, doxycycline, and cefotaxime for this patient to reduce the risk of deterioration. Trimethoprim-sulfamethoxazole (TMP-SMZ) is a combination antibiotic that may be used as an alternative treatment option for brucellosis [9]; cefotaxime and TMP-SMZ were considered equally effective, with no clear preference for one over the other.
Briefly, the case presented stands out for CNS infection/encephalitis caused by VZV, accompanied by Brucella infection. During the initial examination, convulsions and mood swings were the main reasons for suspecting encephalitis. Elevated LFT and ESR levels in the blood then signified the presence of inflammation, and the infectious agent was identified as Brucella following a blood culture test; brucellosis was responsible for the slight elevation of the liver function tests. This case was particularly intriguing because of the coincidence of varicella encephalitis and brucellosis.
Eventually, the patient was discharged after 16 days in acceptable general condition, with neurological warnings and medication orders. Her chronic condition and low physiological capacity made long-term post-hospital follow-up necessary. At six-month follow-up, the patient was asymptomatic, and blood cultures were negative. Recurrence occurs in up to 30% of brucellosis cases, which should therefore be monitored clinically for up to 2 years [10].
Managing VZV encephalitis with Brucella coinfection is difficult because the clinical features are nonspecific, indistinguishable from other viral CNS infections, and require specialized testing. The rising incidence of VZV encephalitis highlights the need for proper diagnosis and management. Early diagnosis and antiviral therapy can improve outcomes, although effectiveness can differ from case to case.
The main takeaway illustrates the emerging role of bacterial and viral coinfections in the complexity of illness etiology. Treatment was intricate due to complex drug selection, antibiotic challenges, antiviral toxicity, disease reactivation, and CNS involvement.
CONCLUSION
When encephalitis is suspected, it is critical to investigate all possible causative agents (bacterial, viral, or fungal) to ensure rapid diagnosis and prevent neurological complications. Consideration of the various etiological possibilities should also take the patient's immune status into account.
Table 1. Panel of pathogens in CSF causing viral encephalitis

HSV-1 (Herpes Simplex Virus-1): Negative
HSV-2 (Herpes Simplex Virus-2): Negative
CMV (Cytomegalovirus): Negative
EBV (Epstein-Barr Virus): Negative
VZV (Varicella-Zoster Virus): Positive
EV (Enterovirus): Negative
HPeV (Parechovirus): Negative
Table 2. Laboratory test results
A Pilot Study Evaluating the Effect of Daily Education by a Pharmacist on Medication Related HCAHPS Scores and Medication Reconciliation Satisfaction
Purpose: The purpose of this study was to determine if daily pharmacist counseling improves Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) medication scores in a 25-bed medical surgical unit. Secondary objectives included determining the full-time equivalent (FTE) hours required for a pharmacist to complete daily counseling and medication reconciliation for each patient on a 25-bed hospital unit, as well as determining if medication reconciliation performed on each patient improved satisfaction survey scores among staff. Methods: This was a single-center, controlled, parallel study in two medical surgical units. Patients admitted to the control or intervention unit were included; the primary investigator (PI) completed daily counseling on the intervention unit and counseling once during admission on the control unit. Medication reconciliation was also completed by the PI on the intervention unit, and satisfaction was assessed through a survey provided to caregivers before and after the study. An FTE analysis was completed to determine the FTE and cost burden of implementing this practice model. Results: A total of 128 patients were included in the study over 27 days. Overall medication communication scores increased by 11.4% on the intervention unit and decreased by 0.9% on the control unit. Communication about side effects increased by 43% (p = 0.007) and 13.3% (p = 0.013) on the intervention and control units, respectively. A number of medication reconciliation satisfaction endpoints trended towards significance, including a decreased number of medication misadventures (p = 0.107), increased efficiency of patient admission (p = 0.157), decreased interference with patient discharge (p = 0.157), and decreased total time to complete the discharge process (p = 0.058). The FTE cost analysis indicated that, on average, an additional 16 minutes of counseling is required per 3-day admission; therefore, an additional four to seven FTEs would be required to incorporate this model into our institution. Conclusion: Daily counseling by a pharmacist resulted in a statistically significant increase in "communication about side effects" HCAHPS survey scores and an overall increase in medication communication compared with counseling once during admission.
Pharmacists may require additional time to counsel patients when increasing the frequency of patient counseling, and the feasibility of adding this counseling time to pharmacists' other duties requires assessment before such a practice is incorporated into the daily workflow. Currently, no data are available describing the feasibility and time requirement for a pharmacist to complete daily counseling and medication reconciliation (MR) on an adult medical surgical unit. Our primary hypothesis was that pharmacists can be utilized to improve HCAHPS medication communication scores by counseling patients daily versus once during admission. Our secondary hypothesis was that an MR intervention completed by pharmacists would improve healthcare worker satisfaction. This study was conducted at Hillcrest Hospital, a 500-bed Cleveland Clinic community hospital. Researchers conducted the study in two similar, 25-bed, medical surgical units. The Cleveland Clinic Foundation Institutional Review Board reviewed the study protocol, and approval was received prior to study initiation.
Study Design
This was a 27-day prospective pilot study performed on two similar medical/surgical units within a 500-bed community hospital. The study was composed of three main parts, as shown in Figure 1: a patient counseling intervention, an MR intervention and satisfaction survey, and a full-time equivalent analysis. The primary objective of this study was to determine if the initiation of a daily pharmacist-based counseling service improved HCAHPS medication scores compared to HCAHPS medication scores in a second group of patients counseled only once during admission. Secondary objectives included assessment of the number of full-time equivalent hours required to complete admission medication counseling, daily and discharge counseling, and MR for each patient on a 25-bed hospital unit. Other secondary outcomes were analyzed, including the number of orders a pharmacist was able to process while doing MR and patient counseling versus how many orders were verified by the pharmacist staff verifying the control unit orders. Finally, we sought to determine if MR performed on each patient in the intervention unit improved satisfaction survey scores among nursing, physician, and ancillary staff.
Pharmacy Counseling Intervention
Patients were included in the pharmacy counseling intervention if they were admitted to either the control or intervention unit and did not have dementia, unless a family member was present to receive medication counseling from the pharmacist. Patients were excluded if they were scheduled to be discharged to a skilled nursing facility, rehabilitation facility, long-term acute care facility, or nursing home. The pharmacy counseling intervention was performed on two similar medical/surgical units. The control unit was a 25-bed unit where the primary investigator (PI) counseled each patient once upon admission. The intervention unit was a 25-bed unit where the PI counseled each patient on a daily basis and performed MR for each patient on admission and at discharge. Patient counseling was completed on these two units by one primary evaluator (pharmacist) for 27 consecutive days from 10 am to 6 pm. Each patient on the control unit was counseled about their medications within 24-48 hours after admission. Medication counseling included the indication for each medication and potential side effects. If the patient was incapable of interpreting or understanding the counseling, as assessed by a diagnosis of dementia, a family member received the counseling. Patients on the intervention unit received daily counseling, which occurred within 24 hours of admission, on each day of the hospital stay, and prior to discharge. Each day, the pharmacist addressed any ongoing medication issues with the patient in addition to providing counseling on new medications. The pharmacist also reviewed discharge medications ordered for the patient and any changes made to the patient's drug regimen compared to before admission. An inpatient note documenting the counseling was recorded in the patient's chart after each session. Patient counseling was not performed by any other pharmacist on either the control or intervention unit during the 27-day study, to control for bias and variability.
The HCAHPS survey was distributed at random to a group of individuals on both the control and intervention units. The Cleveland Clinic hires an independent group to disperse HCAHPS surveys to patients after they are discharged from the hospital, and the researchers were blinded as to which patients received the survey. Selected patients received a phone call after discharge asking them to evaluate their stay at Hillcrest Hospital; patients could take the survey or refuse at will. The researchers did not know how many declined the survey, only how many took it. The portion of the HCAHPS survey of interest in this study was the two medication-related questions (Appendix 1 and 2). The HCAHPS percentage for each question included only the "always" answers, or top-box choice. The percentages of patients who answered "always" to the medication questions were given to an independent reporter and reported to the researchers. The total percentage of participants who answered "always" to one or both of the medication questions of the HCAHPS survey was recorded and reported by the quality office, and this data was used to determine the difference in HCAHPS scores before and after the interventions.
Prescription order verification was completed by the PI for the intervention unit. Stat or immediate orders were required to be verified within fifteen minutes of order entry by the ordering provider. If stat orders were not verified by the PI within fifteen minutes, another pharmacist in the hospital verified the order. Non-stat orders remained in the order verification screen for thirty minutes before a pharmacist outside the intervention unit verified the order [6]. A report was run identifying how many orders were verified by the PI and how many were verified by the other pharmacists.
Medication Reconciliation Intervention and Satisfaction Survey
MR was performed by a pharmacist for each patient on the intervention unit in one of two ways. The first occurred in the emergency room (ER), where the ER pharmacist spoke with the patient, patient representatives, or the patient's pharmacy or primary physician office to clarify current medications. The ER pharmacist completed the MR when available, working seven days on and seven days off. The PI completed MR for all other patients admitted to the intervention unit. A note was added to each patient's chart indicating the MR had been completed, and an MR template was used by both the PI and the ER pharmacist.
MR satisfaction was assessed before and after the MR intervention on the intervention unit. A ten-question survey was designed using a Likert scale and approved by the IRB as a quality tool. The survey was distributed as part of a quality initiative and constituted phase one of this study; during this phase, it was distributed to nurses, physicians, and social workers. Phase two of the MR study involved a reassessment of nurse, physician, and social worker satisfaction with the MR process after the PI finished the 27-day pilot program. The phase 1 survey assessed the overall evaluation of the hospital's MR process prior to study initiation. The phase 2 survey was distributed within one week of completion of the pharmacist intervention month to reevaluate overall caregiver satisfaction with the MR process. The same ten questions were distributed to the same individuals, and the differences in answers pre- and post-intervention were analyzed.
Full-time Equivalent Analysis
A full-time equivalent analysis was performed to assess the feasibility of incorporating daily pharmacist counseling and MR into a pharmacy practice model. FTEs were calculated based on the time the primary evaluator spent on each task in addition to order entry. The minutes spent on each task were recorded after each patient counseling session (admission, interim, and discharge), each patient note, and each MR during admission and at discharge on the intervention unit. The researchers also recorded the minutes spent counseling on the control unit, and average counseling minutes were compared between the control and intervention units. A cost analysis was completed to determine the financial requirement of implementing a pharmacy practice model similar to the one demonstrated in this study. The following assumptions were made to complete the cost analysis: a 500-bed hospital, an average of 47 non-ICU adult admissions per day, and an average length of stay of three days.
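The rough arithmetic behind such an estimate can be sketched as follows in Python. The 15 minutes assumed per medication reconciliation at admission and at discharge is an illustrative figure not reported in the study; the published 4 to 7 FTE range additionally reflects order verification and placing a pharmacist on each unit.

admissions_per_day = 47          # non-ICU adult admissions per day (stated assumption)
length_of_stay = 3               # days (stated assumption)
census = admissions_per_day * length_of_stay      # ~141 concurrent patients
daily_counseling_min = census * 10                # ~10 min per daily session (observed average)
mr_min = admissions_per_day * 2 * 15              # assumed 15 min MR at admission and at discharge
total_hours_per_day = (daily_counseling_min + mr_min) / 60
print(total_hours_per_day / 8)                    # ~5.9 eight-hour FTEs, inside the reported 4-7 range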
Statistical Analysis
The sample size calculation was based on the primary endpoint: change in HCAHPS survey scores before and after the intervention. This calculation was based on a 95% confidence interval, power set at 80%, and an effect size of 5%, which required a sample size of 64 patients pre-intervention and 64 patients post-intervention. The 64 pre-intervention patients were those completing the HCAHPS survey in the month prior to the study month; the 64 post-intervention patients were those completing the HCAHPS survey during the study. The primary endpoint was the change from baseline in the percentage of daily counseled patients rating the medication counseling portion of the HCAHPS survey as "always" versus the change from baseline in a second group of patients counseled only on admission. This was analyzed using Fisher's exact test. The Wilcoxon signed-rank matched-pairs test was used to analyze the change in HCAHPS survey scores within each group compared with their historical HCAHPS survey data.
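For illustration, the two tests named above can be run in Python/scipy on toy numbers (none of these counts are study data; the study used SigmaPlot).

from scipy.stats import fisher_exact, wilcoxon

# Fisher's exact test: "always" vs not-"always" responders in two groups
table = [[8, 2],    # intervention unit: always / not always (made-up counts)
         [7, 7]]    # control unit
odds_ratio, p_value = fisher_exact(table)
print(p_value)

# Wilcoxon signed-rank test: paired pre/post scores within one group
pre = [3, 4, 2, 5, 3, 4, 2, 3]
post = [4, 4, 3, 5, 4, 5, 3, 4]
stat, p_value = wilcoxon(pre, post)
print(p_value)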
The secondary endpoints included the FTE hours required by a pharmacist performing daily counseling on a 25-bed unit versus counseling only on admission on the control unit, MR for each new admission on a 25-bed unit, and the number of orders entered by the PI on the unit during a 10-hour shift. The difference in full-time equivalent requirements was analyzed using a t-test. We hypothesized that an additional four hours, or 0.5 FTEs, might be required to counsel each patient daily and perform MR on admission and discharge. The change in MR satisfaction on the assigned unit from baseline was also assessed. The MR satisfaction survey data were analyzed by determining the mode for each question and evaluating the change in the percentage of answers for each individual question after the MR intervention; this change was analyzed using the Wilcoxon signed-rank matched-pairs test. All data were analyzed using SigmaPlot 10.0 [7].
Results
One hundred twenty-eight patients were included in the study population, with 71 on the intervention unit and 58 on the control unit (Table 1). The average age of the patients was 61 years, and 62% were female. The primary admitting diagnoses on the intervention unit were orthopedic, including patients post hip and knee arthroplasty; orthopedic admissions accounted for 24% of the admissions from this unit. The primary admitting diagnoses on the control unit were gastrointestinal, accounting for 47% of the patients from this unit. Approximately 80 patients were excluded secondary to skilled nursing, long-term care, or acute rehabilitation placement at discharge. An additional 10 patients were excluded for communication barriers such as language and hearing.
HCAHPS Counseling
Two days into the study, the time allotted per day for patient counseling and MR was changed from 10 hours per day (10 am to 8 pm) to 8 hours per day (10 am to 6 pm), based on the determination that the study tasks could feasibly be completed in an 8-hour period. A total of 10 and 14 surveys were completed on the intervention and control units, respectively, by patients discharged during the study month (February 2013). The overall change in medication communication scores, compared with one month prior to the study intervention (January 2013), was an 11.4% increase on the intervention unit and a 0.9% decrease on the control unit (Figure 2). The percentage of patients answering "always" to question 1 (communication about what the medication is for) decreased on both units compared to January 2013: a 26% (p = 0.186) and 15.2% (p = 0.179) decrease on the intervention and control units, respectively. A statistically significant increase was observed in the percentage of patients who answered "always" to question 2 (communication about side effects): 43% (p = 0.007) and 13.3% (p = 0.013) increases on the intervention and control units, respectively.
Medication Reconciliation Intervention
MR was successfully completed and documented in the chart for each patient admitted to the intervention unit. The first phase of the MR survey was administered to staff working on the intervention unit approximately 6 months prior to the study month; a total of 28 individuals (71% nurses) completed it. The second phase of the MR survey was completed within approximately 2 weeks of the completion of the MR intervention, but only 13 (46%) of the individuals completed it. We were unable to contact certain nurses who had left the health system or had relocated to a different floor within the hospital and therefore could not adequately assess MR on the intervention unit. Four of the ten questions assessing various aspects of the MR process trended towards significant improvement when compared before and after the study: decreased number of medication misadventures (p = 0.107), increased efficiency of patient admission (p = 0.157), decreased interference with patient discharge (p = 0.157), and decreased total time to complete the discharge process (p = 0.058).
FTE Analysis
A total of 256 counseling sessions were completed during the 27-day study. The pharmacist spent an average of 10 minutes per counseling session on the intervention unit and an average of 13 minutes per session on the control unit. Each patient on the intervention unit received an average of 2.7 counseling sessions versus one counseling session on the control unit. In total, an extra 16 minutes were required to counsel a patient daily throughout a three-day admission on the intervention unit versus the control unit.
A total of 1613 orders were placed on the intervention unit between 10 am and 6 pm, of which 74% (1194) were verified by the PI. Approximately 15 orders per day were not verified by the PI. The investigators discovered this was likely caused by pharmacists who were working "as needed" within our institution and were unaware of the research project; education was provided to those individuals throughout the month to reinforce the need to defer these orders to the PI.
An FTE cost analysis was completed to determine the potential financial requirement to implement the type of pharmacy practice model conducted in this study throughout our 500-bed hospital. We used the finding that an additional 16 minutes of counseling would be required per patient per three-day admission on the intervention unit versus the control unit. Based on the assumptions stated in our methods, we calculated the required FTEs as well as the total number of pharmacists needed for each adult medicine unit throughout the hospital. We would require an additional 4 to 7 FTEs (approximately $400,000-$700,000) to implement this practice model throughout our institution; a minimum of 4 additional FTEs could suffice if we allowed cross-coverage between units with lower volume.
Discussion
We found that counseling by a pharmacist significantly improved side effect communication when provided to a patient multiple times versus once during admission, and a trend towards improvement in overall medication communication scores was also seen. Communication about the purpose of medications decreased after the study intervention on both units. This may be explained by the fact that the HCAHPS questions identify "hospital staff" in general as being responsible for medication communication; patients likely considered communication by nursing and physician staff rather than pharmacists alone. Patients may interpret this question as pertaining to the direct administration of medications and how often a nurse explains the use of a medication when it is administered. Therefore, pharmacists can make an impact on these endpoints, but ultimately multiple healthcare providers contribute to the medication communication HCAHPS survey results. Our institution can rework its patient counseling processes and continue to work with the nursing team to improve medication communication, with colleagues reminding each other to touch on these points when administering or discussing medications.
The impact of increased counseling by pharmacists was assessed in a study including 125 patients with low literacy [8]. Patients were randomized 1:1 to have medication counseling completed by either a pharmacist or a nurse or physician caring for the patient. The patients randomized to the pharmacist group received counseling upon admission, at discharge, and at follow-up after discharge. A survey assessing the utility of the different components of the intervention was provided to patients after the intervention was completed. Seventy-two percent of the patients reported it was "very helpful" to talk to the pharmacist about their medications, and 63% and 72% of patients said the intervention was "very helpful" in preventing and managing side effects and in understanding how to take their medications, respectively [8]. Our study had similar results, although the number of patients who completed the HCAHPS survey was much lower. Of note, the survey in the aforementioned study explicitly identified the pharmacist as the healthcare provider in its questions [8].
An MR intervention was included in this study to improve our current processes within the hospital; the key difference was that a pharmacist completed the process rather than nursing staff. One study evaluated the effect of pharmacist-driven medication reconciliation on preventable post-discharge medication errors in low-literacy patients [9]. Patients were randomized 1:1 to have medications reconciled by a pharmacist or by a nurse or physician caring for them. The authors included 851 patients, of whom 432 (51%) experienced one or more clinically important medication errors during the 30 days after hospital discharge. The mean number of medication errors per patient was similar in the intervention and control groups (0.87 versus 0.95; p = 0.92), and the authors concluded that there was no statistical difference in the prevention of medication errors when a pharmacist completed medication reconciliation versus a physician or nurse [9]. Our survey results showed a trend towards reduced medication errors after the pharmacist intervention; however, this finding is based on opinion, and the actual incidence of medication errors is important to study in the future, as in the aforementioned study [9].
Our FTE analysis was completed to accurately reflect the financial burden that may be required to implement, within our hospital, a pharmacy practice model similar to the one utilized in our study: a pharmacist providing daily counseling, completing MR, and verifying all medication orders for the patients on a 25-bed unit. The major difference between the practice model utilized in our study and the one that would be implemented within our hospital is that patients being discharged to skilled nursing facilities and long-term care facilities would not be excluded from MR and patient counseling. This would likely increase the time the pharmacist spends completing MR and patient counseling and could limit the time available for each patient, thus affecting the quality of the counseling sessions.
Several limitations existed within our study, including differences between the patient populations on the intervention and control units, low return rates on the completed HCAHPS and medication reconciliation satisfaction surveys, barriers to completing medication reconciliation interventions, and limited ability to apply our FTE cost analysis elsewhere within our institution. We attempted to include the most similar units within the hospital when choosing our intervention and control units, although there were inherent differences that could have affected our results. The primary difference was admitting indication: on the intervention unit, most patients were admitted post-orthopedic surgery, and medication communication often included pain management topics as well as bowel regimen control. These patients require considerably more pain medication than those on the control unit, where the primary indication was gastrointestinal-related surgery. Patients were discharged postoperatively sooner on the control unit (within 24-48 hours) than on the intervention unit (72-hour average or longer), indicating the need for a longer duration of postoperative pain management on the intervention unit. Increased pain levels, as well as pain medication use, could have affected the patients' overall experience within the hospital and in turn contributed to their survey results. Additionally, we did not meet the required number of completed patient surveys to accurately measure the impact of daily counseling in this patient population.
Although MR was reviewed upon admission for each patient on the intervention unit, a number of barriers may have limited the success of this process. The medication orders for most MR interventions had already been approved by the physician by the time the patient arrived on the intervention unit, requiring the PI to make MR changes only after speaking to the physician. If the patient's MR was completed overnight, the PI had to make the changes the next day, and the patient may have received medications in error in the meantime. The greatest disadvantages of MR conducted in this manner were the increased time required to contact the physician and the inability to catch errors before administration to the patient (e.g., if the patient was admitted overnight). Ideally, MR would be completed either in the emergency room by a pharmacist or by a pharmacist on the floor before physician orders are placed, and we hope to conduct MR in this manner in the future. Additionally, the second phase of our MR survey was not completed by over half of the staff members who completed phase 1, which limited our ability to fully evaluate the effect of MR completed by a pharmacist versus the nursing staff.
Lastly, our cost analysis was based on the average length of stay on one of our adult medical-surgical units which was 3 days. This does not reflect the varying length of stays throughout our other adult medicine units; therefore the additional time requirement per patient admission for daily counseling may be greater in other hospital units.
Conclusion
Our data indicate that daily counseling by a pharmacist can improve medication communication-related HCAHPS scores and thus improve patient care. Although our results were limited by low numbers of completed HCAHPS and MR satisfaction surveys, the trends towards significance indicate the positive impact this practice model could have on patient care if implemented. Our study provides data for a larger study to fully validate the effectiveness of pharmacists in improving medication communication and the medication reconciliation process. In the future, when medication reconciliation is completed, pharmacists should document the number of changes made to the medication regimen, and the incidence of medication errors should be evaluated. Our FTE analysis indicated that additional time will be required to counsel daily throughout admission on the intervention unit. Counseling requirements and lengths of stay vary from unit to unit within our hospital, and this practice model should therefore be tested on other adult medicine units within our facility. Further study is needed to assess the impact of a pharmacist on patient readmission rates and on medication communication when a program similar to ours is instituted.
Depression among smokers of a web-based intervention to quit smoking: a cross-sectional study
Introduction. Web-based interventions for smoking cessation are an innovative strategy to reduce the burden of smoking. Although many web-based interventions are freely available in many languages and have proven to be effective, so far no study has covered in detail the association between depression and smoking in this context. Objective. The aim of this study was to evaluate the prevalence of depression among users of Viva sem Tabaco, a web-based intervention for smoking cessation. Method. This was a retrospective cross-sectional study. A total of 1 433 smokers participated in the internet-based intervention. Inclusion criteria were being 18 years or older and a smoker; exclusion criteria were failing to fill out the two questions of the PHQ-2 depression screening questionnaire and having made multiple accesses within a limited time span, characterizing invalid access. The final sample comprised 461 participants, who answered questions related to sociodemographic characteristics, tobacco history, depression (PHQ-2 and PHQ-9), alcohol use, and intervention use. Results. Participants' average age was 42.3 years (SD = 12.1). Most participants were female (67%), and 70% were employed at the time of the study. Of the total sample, 36.4% of the participants presented with depression according to the PHQ-2. Screening positive for depression was associated with tobacco dependence (OR = 1.10; 95% CI = 1.00, 1.20) and with not having a job (OR = .53; 95% CI = .29, .97). Discussion and conclusion. Depression may be a factor to consider in programs that offer support to quit smoking through the internet for Portuguese speakers.
Many approaches have proven effective for smoking cessation (Hartmann-Boyce, Stead, Cahill, & Lancaster, 2013). Web-based and m-health interventions are considered innovative and attractive by young smokers and women who smoke. These interventions may benefit smokers who are willing to quit when used alongside other treatments, such as nicotine replacement therapy and counseling (Civljak, Stead, Hartmann-Boyce, Sheikh, & Car, 2013). Web-based and m-health interventions can also be used in the absence of any other treatment, in relapse prevention, and as a complement to standard care. Moreover, they can be accessed simultaneously by many users and are available in multiple languages (Muñoz, 2010).
There is an ongoing debate as to which treatments are best suited for depressed smokers. It is known, however, that smoking cessation treatment does not exacerbate depression symptoms. Whereas relapse is associated with depressive symptoms, depression does not have a negative impact on cessation outcomes, and the self-medication hypothesis (i.e., that individuals take substances of abuse to ameliorate symptoms) does not account for tobacco dependence and depression comorbidity (van der Meer, Willemsen, Smit, & Cuijpers, 2013). In addition, studies suggest that cessation rates increase in smokers with current and past depression when psychosocial mood management is combined with standard treatment (van der Meer et al., 2013). According to Taylor et al. (2014), quitting smoking is associated with lower levels of anxiety and stress and better quality of life.
With regard to web-based and m-health interventions, little attention has been given to depression and other comorbidities (Civljak et al., 2013). Only a few studies assessed depression, and they found that 12-40% of participants met criteria for depression (Bricker, Wyszynski, Comstock, & Heffner, 2013; Muñoz et al., 2009; Rabius, Pike, Wiatrek, & McAlister, 2008). The aim of this study was to evaluate the prevalence of depressive symptoms among users of an open-source web-based intervention for smoking cessation, as well as the participant characteristics associated with depression.
METHOD

Design
This was a cross-sectional study.
Subjects
All 1 433 users who signed up for a web-based intervention for smoking cessation between October 2013 and March 2017 were invited to enroll. Inclusion criteria were being 18 years or older, being a smoker, and agreeing with the consent term. Participants who did not answer the depression screening questions, as well as test accounts, were excluded. After these procedures, data from 461 (32.2%) participants were analyzed.
Measures
All measures included were chosen after a bibliographic review of smoking cessation studies, including randomized clinical trials of internet interventions (Bricker et al., 2013; Muñoz et al., 2009; Rabius et al., 2008).
Sample characteristics and smoking history. We measured age, gender, education, and employment ("Are you employed?"). Smoking history was measured with questions such as the number of cigarettes smoked per day ("How many cigarettes per day do you smoke?") and the number of previous quit attempts ("How many times have you tried to quit smoking?"). Questions were adapted from the Global Adult Tobacco Survey (2010).
Patient Health Questionnaire (PHQ-2 and PHQ-9). We used the PHQ-2 to screen for depression and the PHQ-9 (Manea, Gilbody, & McMillan, 2015; Pettersson, Boström, Gustavsson, & Ekselius, 2015) to evaluate the severity of depression and the existence of a major depressive disorder. The PHQ-2 is a two-item questionnaire, with each item scored 0-3, whose questions address depressed mood and anhedonia in the past two weeks ("Little interest or pleasure in doing things" and "Feeling down, depressed, or hopeless"). As all participants were Brazilians, we used the cut-point proposed by Santos et al. (2013). The PHQ-9 is a nine-item questionnaire, with each item scored 0-3. The PHQ-9 severity score is calculated by summing the scores of each question; scores range from 0 to 27, and scores of 5, 10, 15, and 20 represent cut-points for mild, moderate, moderately severe, and severe depression, respectively. Both versions have been validated in several samples, as pointed out by Manea et al. (2015), including Brazilian samples (Santos et al., 2013).
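As an illustration of these scoring rules, the following Python sketch (not the study's code) applies the PHQ-2 positivity threshold used later in the analysis (PHQ-2 > 2) and the PHQ-9 severity bands.

def phq2_positive(item_scores):
    # Two items scored 0-3; positive screen when the total exceeds 2
    return sum(item_scores) > 2

def phq9_severity(item_scores):
    # Nine items scored 0-3; 5/10/15/20 mark mild, moderate,
    # moderately severe, and severe depression
    total = sum(item_scores)
    for threshold, label in [(20, "severe"), (15, "moderately severe"),
                             (10, "moderate"), (5, "mild")]:
        if total >= threshold:
            return total, label
    return total, "minimal"

print(phq2_positive([2, 1]))                        # True
print(phq9_severity([1, 2, 1, 1, 0, 2, 1, 1, 2]))   # (11, 'moderate')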
Contemplation Ladder. This is a single-question scale that assesses readiness to change. Smokers choose one of ten options that best describes their readiness to quit. Scores range from 0 to 10, where 0 means not motivated at all ("I like smoking and I do not consider quitting") and 10 totally motivated ("I have already quit and I am not going to smoke again"). The scale is validated and used in a Brazilian quitline service (Terra et al., 2009).
Fagerström Test for Nicotine Dependence (FTND). The FTND assesses nicotine dependence (Heatherton, Kozlowski, Frecker, & Fagerström, 1991). It is a six-item questionnaire widely recommended in clinical guidelines (US Department of Health and Human Services, 2008) and research studies. Scores range from 0 to 10; scores of 2, 4, 5, 7, and 10 represent cut-points for very low, low, moderate, high, and very high nicotine dependence, respectively. The scale is validated in Brazilian samples (De Meneses-Gaya et al., 2009).
Alcohol Use Disorders Identification Test Consumption (AUDIT-C). The AUDIT-C screens for alcohol use disorders or hazardous drinking and was developed by the World Health Organization. It is a three-item questionnaire, scored on a scale of 0-12, where values higher than 4 in men or 3 in women are considered positive. Generally, higher scores mean higher odds that alcohol is affecting the user's health and safety. The test is validated in Brazil (De Meneses-Gaya et al., 2010).
Procedures
Participants were recruited from different sources: social media (Facebook Ads), Google Ads, email, and news published on the internet. Recruitment strategies were primarily aimed at enrolling new users in the web-based intervention "Viva sem Tabaco". "Viva sem Tabaco" is an open-source web-based intervention for smoking cessation, available in seven languages (Portuguese, Spanish, English, Russian, Arabic, Italian, and German); it is fully automated, and its content is based on guidelines for treating smokers (US Department of Health and Human Services, 2008) and meta-analyses published by the Cochrane Tobacco Addiction Group (Cahill, Lancaster, & Green, 2010; Civljak et al., 2013; Hartmann-Boyce et al., 2013). The intervention content is structured in three major areas according to the user's degree of motivation for change. More details on intervention development can be found in Gomide, Bernardino, Richter, Martins, and Ronzani (2016).
Users were invited to enroll in the study after they filled in the registration form. The consent form and the site policy were provided to the participants via links.
The consent form was sent by e-mail to those who agreed to participate in the study. After consent, participants answered the following questionnaires: characterization of the subjects and smoking history, Patient Health Questionnaire (PHQ-2), Fagerström Test of Nicotine Dependence (FTND), Contemplation Ladder, and AUDIT-C. After that, participants could navigate the intervention content. Additionally, 345 (74.8%) of the 461 users filled in the Patient Health Questionnaire (PHQ-9), available on one intervention page about smoking and depression. Data collection with missing cases is depicted in Figure 1. The depression page suggests that users who meet criteria for depression seek professional help to quit smoking and acknowledges that depression reduces the odds of quitting.
Statistical analysis
Data were collected from the intervention database. Tables from the database were merged using a primary key. To remove possible false user accounts (i.e., excessive clicks on the same page; access to multiple pages in a short timespan), we inspected all cases. After that, we compared intervention usage (time spent on the intervention and average number of pages accessed) between those who filled in all questionnaires and those who did not, in order to check whether data were missing at random, one of the assumptions of the multiple imputation method. No significant differences were found. Then, we conducted exploratory analyses and applied the χ² test, t-test, and Spearman's correlation. To identify characteristics related to a positive screening for depression (PHQ-2 > 2), we performed logistic regression analysis. First, we added the variables found to be related to depression: nicotine dependence and employment. Then, we added the variables gender, age, education, motivation to quit, nicotine dependence, cigarettes smoked per day, AUDIT score, previous quit attempts, and number of pages visited. The percentage of missing data across the variables ranged between 0 and 53%. The main reason for missing values was the inclusion of a questionnaire after the sign-up form as part of an update. We used multiple imputation (Rubin, 1987) to create and analyze 55 multiply imputed datasets. Data analysis was performed using R (v.3.2) (R Core Team, 2017) and the packages mlogit (Croissant, 2013) and mice (v.2.2) (Buuren & Groothuis-Oudshoorn, 2011). Model parameters were estimated with multiple regression applied to each imputed dataset separately. These estimates and their standard errors were combined using Rubin's rules (1987). For comparison, we performed the analyses on the subset of complete cases. We adopted p < .05 as the criterion for statistical significance.
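The pooling step of this workflow is easy to get wrong, so the sketch below makes it explicit. It is a minimal Python illustration of Rubin's rules (the study itself used R's mice package, whose pool() function performs the same computation); the input estimates and standard errors are hypothetical.

```python
import numpy as np

def pool_rubin(estimates, std_errors):
    """Combine per-imputation estimates and standard errors with Rubin's rules.

    estimates, std_errors: arrays of length m (one entry per imputed dataset).
    Returns the pooled estimate and its pooled standard error.
    """
    q = np.asarray(estimates, dtype=float)
    se = np.asarray(std_errors, dtype=float)
    m = len(q)
    q_bar = q.mean()                # pooled point estimate
    w_bar = np.mean(se ** 2)        # within-imputation variance
    b = np.var(q, ddof=1)           # between-imputation variance
    t = w_bar + (1 + 1 / m) * b     # total variance
    return q_bar, np.sqrt(t)

# Hypothetical log-odds ratios for nicotine dependence from 5 imputed datasets.
est, se = pool_rubin([0.09, 0.11, 0.10, 0.08, 0.12],
                     [0.05, 0.05, 0.06, 0.05, 0.05])
print(f"pooled log-OR = {est:.3f}, SE = {se:.3f}")
```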
Ethical considerations
All research procedures were performed after approval by the Institutional Review Board of the Universidade Federal de Juiz de Fora (n. CEP - 1376638). Users who received a positive screen for depression were shown the following message: "Your score suggests that you have significant depressive symptoms. We recommend seeing a doctor to help you in trying to quit smoking. Remember that the use of medication and psychotherapy increases your chances of success! However, feel free to go to the next step".
RESULTS
Sample characteristics
Most participants were female (67%), with a mean age of 42.3 years (SD = 12.1). The majority of the sample had completed high school as their highest educational level (55.9%), while 24.7% had higher education, 11.8% elementary school, and 7.5% postgraduate education. Seventy-one percent were employed at the time of the study. Ninety-three percent of participants said they had tried to quit smoking. Among those who had tried to quit, the median number of attempts was 3.5 (IQR = 4). Approximately one third of the participants (36.4%) presented symptoms indicative of depression according to the PHQ-2 criteria. The characteristics of the participants are described in Table 1.
Among users who completed all PHQ-9 questions (n = 345), 38.3% were diagnosed with major depression syndrome. The classification of severity according to the PHQ-9 is described in Figure 1. There was a low correlation between the PHQ-9 score and the Fagerström Test of Nicotine Dependence score (r = .2; t(300) = 4, p < .001), as shown in Table 2.
Nicotine dependence and employment were significant predictors of being screened as depressed. Higher levels of nicotine dependence were associated with depression (OR = 1.10; 95% CI = 1.00, 1.20). Being employed was associated with lower odds of depression (OR = 0.53; 95% CI = 0.29, 0.97); that is, unemployed participants had higher odds of screening positive for depression. More complex models (i.e., with variables such as age, gender, nicotine dependence, and previous quit attempts) were not statistically significant.
DISCUSSION AND CONCLUSION
In this study, we evaluated the prevalence of depressive symptoms and identified the participants' characteristics associated with depression. We found that depression affects a substantial proportion of smokers seeking help to quit smoking. Moreover, the results suggest an association between being unemployed and being diagnosed with depression. There was also a weak positive association between nicotine dependence and depressive symptoms. We also found a moderate positive correlation between the number of cigarettes smoked per day and nicotine dependence, and a weak negative correlation between nicotine dependence and participants' motivation. However, we did not find significant correlations for the variables age, sex, education, motivation to quit, attempts to quit, and number of pages visited.
The proportion of smokers with depression in web-based interventions for smokers varies between 12.9% and 40.0%. The percentage we found was similar to that of Bricker et al. (2013), in which approximately 40% of the participants were diagnosed with depression. In the study carried out by Rabius et al. (2008), the rate of participants with a depression diagnosis was 30%. However, the values differed from the study by Muñoz et al. (2009), in which 12.9% of the participants were diagnosed with depression. It is important to note that all studies used different measures for assessing depression. Moreover, the prevalence we found was different from that of the study by Andrade et al. (2012), conducted in a representative sample in Brazil, which found a 12-month prevalence of major depression of 9.4%. We also found that women were screened as depressed in a greater proportion than men, even though the result was not statistically significant. Depression is more prevalent in women than in men in the general population regardless of age (Salk, Hyde, & Abramson, 2017) and has a greater impact on smoking cessation treatment (Weinberger, Mazure, Morlett, & McKee, 2013).
Our results also suggest an association between unemployment and depression, in line with studies that describe unemployment as a risk factor for depression (Wilhelm, Mitchell, Slade, Brownhill, & Andrews, 2003; Zimmerman & Katon, 2005). Similar findings have also been reported by Kessler, Greenberg, Mickelson, Meneades, and Wang. Although these studies found a correlation between depression and unemployment, it is difficult to establish a causal relationship. Our findings on the positive association between depression and nicotine dependence have also been reported elsewhere (Breslau, 1995; Breslau, Peterson, Schultz, Andreski, & Chilcoat, 1996).
While studies show that different factors, especially motivation, play a role in predicting quit attempts and their success (Smit, Hoving, Schelleman-Offermans, West, & de Vries, 2014), our analyses showed no such relationship. Although participants were motivated to stop smoking, we found no correlation between motivation and previous attempts to quit smoking.
Our study has some limitations. First, we collected information through online self-reported questionnaires. To reduce false responses, user accounts were inspected and potential false users were removed. Second, data collection was performed on different intervention pages, which led to missing data. To address this issue, we used multiple imputation to reduce the standard errors and provide more accurate estimates, which need to be verified in further studies. Third, other comorbidities (e.g., anxiety disorders, substance use disorders, and personality disorders) were not assessed. Future studies should address the prevalence of these comorbidities.
In conclusion, depression may be present among Portuguese speakers who seek help to quit smoking online. Future studies should evaluate the impact of depression on treatment outcomes. Programs available on the internet could also include content tailored to depressive smokers, as well as provide referral to health services. Given the association between depression and smoking, websites offering counseling or crisis management for depression may also refer users to smoking cessation services. Future studies are also needed to identify other comorbidities such as anxiety and personality disorders.
Figure 1. Data collection by intervention page and missing cases.
Table 3. Comparison between logistic regression models with imputed data and the complete cases. Notes: 1 Model with 55 multiple imputations; 2 FTND = Fagerström Test of Nicotine Dependence; *p < .05.
Table 2. Spearman's correlation matrix of sociodemographic variables, nicotine dependence, motivation to quit, alcohol drinking, and use of the intervention by participants (n = 461).
National genotype prevalence and age distribution of human papillomavirus from infection to cervical cancer in Japanese women: a systematic review and meta-analysis protocol
Background Despite prophylactic human papillomavirus (HPV) vaccination being a safe, effective and cost-effective public health intervention for the prevention of cervical cancer, the HPV vaccine is not actively recommended or promoted by the Ministry of Health, Labour and Welfare in Japan. With cervical screening participation already very low at below 30%, and vaccination coverage at 0.3% of the eligible population, below the level that confers any population effect, cervical cancer mortality is higher than in other similar high-income countries, at 4.4/100,000 (2900 deaths) per year in 2015. There is limited population-based or nationally representative data for HPV genotype distribution in Japan, making an assessment of the burden of vaccine-preventable cervical cancer difficult. Therefore, this systematic review and meta-analysis aims to determine the HPV genotype prevalence and age distribution of HPV infection in women with a cytological or histological diagnosis from normal through cervical cancer in Japan. We anticipate this information will guide and enhance programme interventions to reduce vaccine-preventable cervical cancer mortality in Japan. Methods PubMed, Embase and the Japan Medical Abstracts Society Database will be searched from their dates of establishment to March 2021 to identify original research articles that report the prevalence of HPV genotypes in Japanese women with normal cervical cytology and low-grade, high-grade and cancerous cervical lesions. No exclusion criteria relating to language or publication date will be applied. The quality of the studies will be assessed using the Joanna Briggs checklist for prevalence studies. Randomised control trials, cohort studies, cross-sectional and prevalence studies will be considered eligible. Study findings will be combined using a traditional random-effects or fixed-effects meta-analysis to summarise pooled prevalence and 95% confidence intervals, depending on heterogeneity. Subgroup analyses and meta-regression will be used to investigate heterogeneity, and sensitivity analyses will be conducted to assess the robustness of the findings. Discussion To our knowledge, this is the first systematic review protocol that includes both Japanese and English peer-reviewed articles for the determination of genotype-specific HPV prevalence in cytologically or histologically confirmed normal cervical specimens, low- and high-grade intraepithelial lesions and cervical cancers by age in Japan. We anticipate this information will guide and enhance programme interventions to reduce vaccine-preventable cervical cancer mortality in Japan. Systematic review registration PROSPERO CRD42018117596 Supplementary Information The online version contains supplementary material available at 10.1186/s13643-021-01686-6.
Background
Oncogenic HPV types are causal and necessary factors of cervical cancer [1][2][3]. It has now been shown that prophylactic HPV vaccines are immunogenic and effective against HPV vaccine genotype infections that can otherwise result in precancerous and cancerous lesions, as long as vaccination occurs prior to HPV infection [4][5][6]. Global evidence shows that HPV vaccination is safe, and that cross-protection against non-vaccine genotypes and herd effect also occur after vaccination [7][8][9][10][11]. In many countries, HPV vaccination programmes as a public health intervention have now been shown through systematic evaluations to be safe, effective and cost-effective methods for the prevention of HPV cervical infection and related disease [12][13][14]. Paradoxically in Japan, the implementation experience with HPV vaccination has been problematic.
At the time of implementation of the HPV vaccine programme, vaccination coverage for eligible adolescent girls in some prefectures was as high as 80% [15]. In light of such success, the HPV vaccine was added to the national routine vaccination register in April 2013, and it was recommended that the vaccine be made available to all girls between the ages of 12 and 16 [16]. However, in response to a series of reported adverse events in June 2013, the Human Papillomavirus Vaccination Programme was partially suspended by the Japanese Ministry of Health, Labour and Welfare (MHLW) [17]. Since then, the MHLW has directed prefectural governments not to actively recommend or promote adolescent HPV vaccination [18,19].
As a direct result of this suspension, vaccination coverage amongst adolescent girls has dramatically declined to 0.3%, a level that does not confer any population benefit [15,[19][20][21][22]. At the same time, the cervical screening participation rate is below 30% [23]. Cervical cancer incidence (12.5/100,000) and mortality (2.19/100,000) had been decreasing since 1991. More recently, however, the number of cervical cancer cases has risen from 10,520 (10.9/100,000) in 2013 to 11,200 (11.0/100,000) new cases in 2015, and the number of deaths has risen from 2656 (4.1/100,000) to 2900 (4.4/100,000) over the same time period [23]. Currently, municipalities in Japan comprehensively collect population-level cancer screening performance data and report to the MHLW, whilst mortality data are collected by the National Vital Statistics, and incidence and survival data are collected by the prefectural cancer registries and, more recently, the national framework of cancer registries. However, there is still limited nationally representative data in Japan assessing the prevalence of HPV infection at the national or subnational level.
Comprehensive studies conducted internationally describe the genotype prevalence of HPV in many countries [21,22,[24][25][26][27]. However, Japan is commonly underrepresented in these studies. The limited nature of HPV genotype prevalence data is likely to hinder effective advocacy for, and planning of, primary prevention strategies. To fill this gap, an estimate of the prevalence of HPV infection is essential and can be used to evaluate vaccine impact after reimplementation of the HPV vaccine in the future.
Evidence-based decision-making that is context specific has been key to the successful advocacy for and development of effective HPV vaccination and screening programmes. Therefore, this systematic review and meta-analysis aims to determine the HPV genotype prevalence and age distribution of HPV infection in women with a cytological or histological diagnosis from normal through to cervical cancer for Japanese women residing in Japan, by best utilising existing data.
Research aims
Primary aims: (1) To determine the HPV genotype-specific prevalence in women with a cytological or histological diagnosis of normal, low-and high-grade cervical intraepithelial neoplasia (CIN) and cervical cancer in Japan. (2) To determine the age-specific prevalence of any HPV infection in women with a cytological or histological diagnosis of normal, low-or high-grade cervical intraepithelial neoplasia (CIN) and cervical cancer in Japan.
Secondary aims:
(1) To determine the proportion of infections, precancerous lesions and cervical cancers that could be prevented by prophylactic HPV vaccination or are screening detectable in Japan. (2) To determine the prevalence of HPV infection at the national, prefectural and regional levels in women with a cytological or histological diagnosis of normal, low-or high-grade cervical intraepithelial neoplasia (CIN) and a cervical cancer in Japan.
Protocol and registration
This protocol was developed in line with the Preferred Reporting Items for Systematic reviews and Meta-Analyses guidelines for protocols (PRISMA-P) [28]. The PRISMA-P Checklist for this study is reported in Table S1. In addition, this review protocol has been registered in the International Prospective Register of Systematic Reviews (PROSPERO), with registration number CRD42018117596.
Search strategy
PubMed, Embase and ICHUSHI (Igaku Chuo Zasshi) will be searched from inception to March 2021. Search terms will include relevant headings and keywords in the title, abstract and text, including human papillomavirus in Japan. ICHUSHI is the domestic database for the Japan Medical Abstracts Society Database. The use of this database requires the development of a Japanese language search strategy. The search strategy will use the following general terms, expanded and appropriately modified for each database: 'Japan' and 'human papillomavirus' or 'HPV', and 'cervical cancer', and 'genotype', for 'normal cytology', and 'cervical disease' or 'cervical intraepithelial neoplasia'. For this systematic review, there will be no restrictions on the date or language of articles to be reviewed. This search strategy will be constructed and performed with the assistance of a librarian. The search strategy is outlined in Table S2.
The reference lists of identified studies will also be reviewed, evaluated and included if eligible. Grey literature shall also be considered for inclusion if the abstracts contain sufficient information to assess their eligibility. Possible sources of grey literature will include (1) identified authorities of this subject matter, (2) conference papers and (3) government documents and published guidelines. The search strategy will be developed according to Cochrane guidelines in collaboration with a librarian and a subject matter expert in both Japanese and English.
Eligibility criteria
The population of interest for this review are Japanese people with a cervix residing in Japan who were screened at least once regardless of screening interval and stage of diagnosis. Males will be excluded from this study because they do not have a cervix. If identified, transgender men with a cervix will be included in this analysis. There will be no restriction on the age of participants in the studies for inclusion. Studies will be eligible if they are randomised control trials, case-control studies, case series studies, cohort studies or cross-sectional studies. Systematic reviews will not be eligible but their reference lists will be searched to identify any further eligible studies.
In order to achieve comparability to other international studies of HPV genotype prevalence [29], the following inclusion criteria shall be used: (1) studies that assess cervical carcinoma, low-grade or high-grade cervical lesions must include a minimum of 20 cases [21,22,24]; (2) studies that describe HPV infection in normal cytology must include a minimum of 100 cases [25][26][27]; (3) studies must include at least one HPV genotype; (4) DNA or RNA polymerase chain reaction (PCR)-based assays should be used and sufficiently described; and (5) the study must include a detailed methodological description of cervical sampling techniques.
Studies must have been performed in Japan. For studies that were not conducted in Japan, or for multi-country studies, only studies containing primary data reporting HPV genotype prevalence for women resident in Japan will be included. Studies using nucleic acid testing of blood or blood components to detect HPV or reporting HPV prevalence in anatomic sites other than the cervix will be excluded from calculations of prevalence.
Selection of studies
Covidence review software will be used to screen titles and abstracts of all studies that are initially identified by two independent reviewers according to the selection criteria [30]. The text of all potentially relevant studies will then be evaluated in detail against the eligibility criteria by two independent reviewers.
For the full-text review, the reviewers will independently classify articles as (1) included, (2) excluded or (3) maybe. A maybe status will imply that a decision to include or exclude the article is dependent on additional information being obtained from the author. Where additional information is needed, the corresponding author of the study will be contacted via email. A second email will be sent after 1 week in the event of no response to the initial email. A 2-week waiting period after the submission of the second email will be allowed for sufficient response, after which these studies will be excluded [31]. Articles that both reviewers classify as excluded will be removed, whereas those that both reviewers classify as included will be included. Discrepancies will be resolved through discussion with a third independent reviewer until consensus is obtained. The opinion of a subject matter expert will be sought, if necessary.
In accordance with the PRISMA guidelines, a summary of the search process, study selection and reasons for exclusion of studies will be included [32]. A summary of all selected studies will also be included.
Outcome measures
The outcome measure of interest in this study is the HPV genotype-specific prevalence in women with a cytological or histological diagnosis of normal, low- or high-grade lesions or cervical cancers. HPV prevalence will be measured in cervical specimens from women where cytological classification is defined as normal, atypical squamous cells of undetermined significance (ASCUS), low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL), and histological classification is defined as normal, cervical intraepithelial neoplasia 1 (CIN1), cervical intraepithelial neoplasia 2 (CIN2), cervical intraepithelial neoplasia 3 (CIN3), adenocarcinoma in situ (AIS), invasive cervical cancer (ICC)-unspecified, ICC-squamous cell carcinoma or ICC-adenocarcinoma.
Definitions
Type-specific prevalence is defined as the total number of women who are positive for a HPV genotype (n), expressed as a proportion of the total number of women who are tested for the given HPV genotype (N) with a DNA- or RNA-based PCR assay. This is given by the equation below:

Type-specific HPV prevalence = (number of women HPV positive (n) / total number of women tested (N)) × 100%.
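To make the definition concrete, the short sketch below computes a type-specific prevalence from the counts n and N. It is an illustrative Python snippet, not part of the protocol (the protocol's analyses will be run in Stata); the Wilson score interval used here is one common choice of per-study confidence interval, which the protocol does not itself specify, and the input counts are hypothetical.

```python
from math import sqrt

def type_specific_prevalence(n_positive: int, n_tested: int, z: float = 1.96):
    """Type-specific HPV prevalence (%) with a Wilson score 95% CI."""
    p = n_positive / n_tested
    denom = 1 + z**2 / n_tested
    centre = (p + z**2 / (2 * n_tested)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n_tested + z**2 / (4 * n_tested**2))
    return 100 * p, 100 * (centre - half), 100 * (centre + half)

# Hypothetical study: 34 of 120 cervical specimens positive for HPV16.
prev, lo, hi = type_specific_prevalence(34, 120)
print(f"HPV16 prevalence = {prev:.1f}% (95% CI {lo:.1f}-{hi:.1f}%)")
```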
Data extraction
Data will be extracted into a standardised extraction template and verified independently by a second reviewer using Microsoft Excel™. For study characteristics, the data extracted will include the location of study (city, municipality, prefecture and region), study year (year), study sample type (population based, convenient or others), setting (hospital or clinic), study design (RCT, cohort, case-control or cross-sectional), sample collection method (swab, cytobrush, cervical or vaginal wash or others), sample collection (self-collection, practitioner or others), type of cervical specimen (fresh biopsy, fixed biopsy or exfoliated), cell storage medium, HPV assay, PCR primers used and HPV typing method (DNA or RNA).
In the same extraction template, the sample size of the number of women tested (N) and the number of HPV-positive women (n) will be extracted. Where available, these will be grouped by cytological classification (normal, ASCUS, LSIL and HSIL) and histological classification (normal, CIN1, CIN2, CIN3, AIS, ICC-unspecified, ICC-squamous cell carcinoma or ICC-adenocarcinoma).
Managing missing data
In the event that data are missing, the corresponding author of the study will be contacted via email and the missing data will be requested. A second email will be sent after 1 week in the event of no response to the initial email. A 2-week waiting period after the submission of the second email will be allowed for sufficient response, after which these studies will be excluded [31].
Critical appraisal
The critical appraisal for all included studies will be performed using the 'Joanna Briggs Institute Prevalence Critical Appraisal Tool', by two independent reviewers (Table S3) [33]. This critical appraisal tool uses 9 criteria to evaluate studies; a 'Yes', 'No' response is required for each of the 9 criteria. Where assessment against a criterion is not possible because of incomplete data, then it will be recorded as 'unclear' against that particular criterion. Any conflicts that occur between reviewers will be resolved by discussion with a third independent reviewer until consensus is reached.
Studies that meet all quality appraisal criteria are categorised as high-quality studies. Studies that do not meet 1 or more of the required quality appraisal criteria are categorised as low-quality studies. For studies where a quality appraisal criterion is 'not applicable', the reviewers will then discuss the results of the categorisation. If consensus on the final critical appraisal cannot be reached, a third reviewer may be required.
Data analysis and synthesis
Stata version 15 will be used, utilising 'metaprop', a Stata command to perform a meta-analysis of binomial data to calculate pooled prevalence estimates as described below [34]. Analysis will be performed using the Freeman-Tukey double arcsine transformation, and DerSimonian-Laird random-effects methods will be used to compute the weighted overall pooled estimates with confidence intervals (CIs) [35].
Statistical heterogeneity will be quantified using Cochran's Q and the I² statistic to determine the extent of variation in effect estimates that is due to heterogeneity rather than chance. Cochran's Q test (χ²) will be performed using an α cut-off level of 10% [31]. The I² statistic will be used to quantify statistical heterogeneity between studies: heterogeneity from 0 to 30% will be classified as possibly unimportant; heterogeneity from 30 to 75% will be classified as possibly representing moderate heterogeneity; and heterogeneity from 75 to 100% will be classified as considerable heterogeneity. In this study, the random-effects model will be chosen over the fixed-effects model. If there is substantial heterogeneity above 75%, a meta-analysis will not be performed.
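The following sketch illustrates, under stated simplifications, the pooling pipeline just described: Freeman-Tukey double arcsine transformation, DerSimonian-Laird estimation of τ², and an I² calculation from Cochran's Q. It is a minimal Python illustration of what Stata's 'metaprop' command does rather than a re-implementation of it; in particular, the back-transformation here uses the simple inverse sin²(t/2) instead of metaprop's harmonic-mean refinement (Miller, 1978), and the study counts are hypothetical.

```python
import numpy as np

def pooled_prevalence_dl(n_pos, n_tot):
    """Pool study prevalences: Freeman-Tukey double arcsine transform with
    DerSimonian-Laird random effects. Returns (pooled prevalence, I^2 %)."""
    x = np.asarray(n_pos, float)
    n = np.asarray(n_tot, float)
    # Freeman-Tukey double arcsine transform; variance is ~1/(n + 0.5).
    y = np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))
    v = 1 / (n + 0.5)
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)                     # random-effects weights
    t = np.sum(w_star * y) / np.sum(w_star)     # pooled transformed value
    return np.sin(t / 2) ** 2, i2               # simple back-transform

# Hypothetical counts of HPV16-positive women across four studies.
p, i2 = pooled_prevalence_dl([12, 30, 8, 45], [100, 220, 90, 400])
print(f"pooled prevalence = {100*p:.1f}%, I^2 = {i2:.0f}%")
```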
HPV genotype-specific prevalence estimates
The pooled genotype-specific HPV prevalence for each HPV genotype or genotype group as previously defined will be estimated independently. Where data allow, pooled estimates will also be stratified by cytological disease stage (normal, ASCUS, LSIL and HSIL) and by histological classification (normal, CIN1, CIN2, CIN3, AIS, ICC-unspecified, ICC-squamous cell carcinoma or ICC-adenocarcinoma).
Age-specific prevalence estimates
The age-specific prevalence of each HPV genotype or genotype group as previously defined will be calculated for 5-year age groups over the interval 20 to 69 years. Studies that do not report age-specific HPV prevalence will be excluded from age-specific estimates.
Vaccine-type prevalence estimates
The proportion of vaccine-preventable infections will be estimated for each vaccine type group as previously defined. Where data allows, pooled estimates will also be calculated and stratified by vaccine type. This will include the bivalent, quadrivalent and nonavalent vaccines.
HPV prevalence estimates by geographic location
In order to examine the geographical distribution of HPV genotypes across Japan, pooled estimates of HPV genotype prevalence will be estimated nationally and stratified by region, prefecture and municipality where data allows.
Assessment and management of heterogeneity
Subgroup analyses and meta-regression will be conducted to identify sources of between-study heterogeneity in the pooled prevalence of HPV infection. Subgroups to be investigated will include study sample type (population based or convenient), study design (cross-sectional or case-control study; RCT or cohort), year of publication, sample collection device, cell storage medium, HPV assay, primers used and HPV typing method. The relative reduction of between-study variance (τ²) will provide an indication of each factor's contribution to heterogeneity.
Sensitivity analysis
Sensitivity analysis will be conducted to assess the impact of studies with a low and high critical appraisal score. The impact of specific studies on the pooled prevalence estimate will be determined by exclusion.
Assessment of reporting bias
The potential for publication or reporting bias will be explored by funnel plots and Egger's regression asymmetry test, where at least 10 studies are available. Asymmetry of funnel plots will indicate the presence of publication bias [36]. A p-value below 10% with Egger's test will be considered statistically significant.
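As a rough illustration of the planned asymmetry assessment, the sketch below runs Egger's regression of the standardized effect on precision and tests whether the intercept differs from zero at the protocol's 10% level. This is a generic Python illustration with simulated data, not the Stata routine that would be used in practice.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Egger's regression asymmetry test: regress the standardized effect
    (effect/SE) on precision (1/SE); a non-zero intercept suggests asymmetry."""
    y = np.asarray(effects, float) / np.asarray(std_errors, float)
    x = 1 / np.asarray(std_errors, float)
    X = np.column_stack([np.ones_like(x), x])       # intercept + precision
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - 2
    sigma2 = float(res[0]) / dof if res.size else np.sum((y - X @ beta)**2) / dof
    se_beta = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
    t0 = beta[0] / se_beta[0]
    p = 2 * stats.t.sf(abs(t0), dof)                # two-sided p for intercept
    return beta[0], p

# Hypothetical transformed prevalences and standard errors for 10 studies.
rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.2, 10)
eff = 0.8 + rng.normal(0, se)
intercept, p = eggers_test(eff, se)
print(f"Egger intercept = {intercept:.2f}, p = {p:.2f} (p < .10 flags asymmetry)")
```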
Discussion
The outcomes of this study will provide, for the first time, national and subnational estimates of HPV infection in Japan. It is intended that the outcomes of this study will provide evidence with which to evaluate the impact of HPV vaccination in protecting against HPV infection and related disease, and inform the action needed to eliminate vaccine-preventable cervical cancer in Japan. It will do this by assessing the age distribution of the prevalence of HPV infection, in addition to providing national and regional prevalence estimates of vaccine-preventable and screening-detectable HPV infections from women with cytology- or histology-confirmed normal, low- and high-grade lesions and cervical cancer in Japan.
Strengths and limitations of this study
This is the first systematic review protocol that includes both Japanese and English peer-reviewed articles for the determination of genotype-specific prevalence of HPV in normal cervical cytology specimens, low- and high-grade intraepithelial lesions and cervical cancers in Japan. It is also the first systematic review evaluating the overall prevalence of human papillomavirus stratified by municipality, prefecture or region in Japan. This protocol was developed in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines for protocols (PRISMA-P). This study only includes published data and epidemiological studies from female populations participating in cervical HPV sampling or screening programmes. A limitation is that the quality of findings from this review will depend on the availability, number and quality of the studies included in the final review.
Ethics and dissemination
Given that the data used in this study will be published and anonymised, publicly available and peer reviewed, ethical approval is not a requirement. This review will be reported in line with the PRISMA statement and will include the PRISMA Checklist. The findings will be published in a peer-reviewed journal and as part of a doctoral thesis.
Funding
Grants-in-Aid for Scientific Research from Japan Society for the Promotion of Science (17H03589).
Declarations
Ethics approval and consent to participate
Given that the data used in this study will be published and anonymised, publicly available and peer reviewed, ethical approval is not a requirement for this study. This review will be reported in line with the PRISMA statement and will include the PRISMA Checklist. The findings will be published in a peer-reviewed journal.
Consent for publication
Not applicable.
Competing interests
Kota Katanoda has been an expert committee member of the Pharmaceuticals and Medical Devices Agency since 2019. The authors declare that they have no competing interests.
The Novel Protease Activities of JMJD5–JMJD6–JMJD7 and Arginine Methylation Activities of Arginine Methyltransferases Are Likely Coupled
The serendipitous discoveries of the protease activities on arginine-methylated targets of a subfamily of the Jumonji domain-containing family, including JMJD5, JMJD6, and JMJD7, pose several questions regarding their authenticity, function, purpose, and relations with one another. At the same time, despite several decades of effort and massive accumulating data regarding the roles of the arginine methyltransferase family (PRMTs), the exact function of this protein family still remains a mystery, though it seems to play critical roles in transcription regulation, including activation and inactivation of a large group of genes, as well as in other biological activities. In this review, we aim to show that the functions of JMJD5/6/7 and PRMTs are likely coupled. Besides roles in the regulation of the biogenesis of membrane-less organelles in cells, they are major players in regulating stimulating transcription factors to control the activities of RNA Polymerase II in higher eukaryotes, especially in the animal kingdom. Furthermore, we propose that arginine methylation by PRMTs could be a ubiquitous mark that targets proteins for destruction by a subfamily of the Jumonji protein family after their missions are complete.
Introduction
Arginine methylation, a ubiquitous post-translational modification (PTM) of proteins, was discovered more than 50 years ago [1] and has been increasingly appreciated in recent decades [2][3][4][5][6]. Nine arginine methyltransferases (PRMTs) have been characterized [3]. Among them, PRMT1/2/3/4/6/8, characterized as type I, are responsible for asymmetric methylation on the sidechain of arginine; PRMT5 and PRMT9, as type II, are responsible for symmetric methylation, while PRMT7 is the only member of type III for monomethylation, with preferred sites containing sequences of RGG/RG, RXR, GRG, proline-rich, and proline-glycine-methionine-rich motifs [3,5]. The exact function of arginine methylation on proteins is still controversial; however, accumulating data about its roles within a large number of RNA-binding proteins with intrinsic disorder regions (IDRs) or low complexity domains (LCDs) show that it plays critical roles in the phase separation of these proteins and closely relates to neurodegenerative diseases, cancers, and other diseases [7]. Most of these IDR-containing proteins are responsible for the formation of a large number of membrane-less organelles (MLOs) including the nucleolus, nuclear speckles, nuclear stress bodies, histone locus body, Cajal body, PML nuclear body, paraspeckles, perinucleolar compartment, stress granules, P-bodies, germ cell granules/nuage, neuronal granules, etc. [8,9]. As we know, most RNAs are vulnerable to attacks by nucleases within cells, and all aforementioned cell bodies could provide protection for these RNAs.

Figure 1. Possible histone methylation. Histone lysine methylation is very specific and limited, while histone arginine methylation seems not specific and ubiquitous.
Arginine Methylation and Phase Separation for Ribonucleoproteins (RNPs)
A large number of proteins in eukaryotes have IDRs with some repeating low complexity domains containing aromatic-rich residues such as tyrosine, as well as arginine-rich repeat domains such as RGG/RG [9,28]. It is very well recognized that these two motifs can make cation-π interactions, which are the driving force for phase separation [29][30][31][32][33][34]. However, it is still controversial whether arginine methylation enhances or suppresses phase separation. Some reports seem to support the latter [30][31][32][33][35][36][37], though there is a plethora of data supporting that arginine methylation plays a critical role in promoting phase separation [38][39][40][41][42][43][44][45][46][47][48][49]. As regards the cation-π interaction, arginine methylation enhances the interaction between the methylated sidechain of arginine and the aromatic sidechain. This concept is very well supported by numerous structural and biochemical data, since either lysine methylation or arginine methylation increases the binding affinity between methylated lysine or arginine and the correspondent binding partner. A pioneering structural and biochemical analysis of H3K4me3 and the PHD domain from Dr. Dinshaw Patel's group revealed that the binding affinity increases with methylation of lysine from monomethylation to trimethylation [50]. The rich aromatic side-chains within the PHD domain account for the strengthening interactions (Figure 2A). This is also true for the interaction between methylated arginine and the Tudor domain from PIWI-binding proteins; an aromatic cage holds the methylated guanidinium moiety, as reported by Drs. Tony Pawson and Jinrong Min's groups [51], in which methylation increases the binding affinity from ~94 µM to ~10 µM, almost 10-fold (Figure 2B). Interestingly, the catalytic core of JMJD5 also owns a Tudor domain-like structure with rich aromatic side chains that build a cage to specifically bind to methylated arginine instead of methylated lysine [23,24]. The methylation of arginine of histone H3R2 could enhance the binding affinity from 7 µM to 100 nM, almost ~70-fold (Figure 2C). Interestingly, judging from one report of FUS phase separation, arginine methylation seems to promote membrane-less droplets to form much more tightly ordered spherical shapes; without methylation, however, they have disordered shapes, though they become larger [31]. These phenomena were also indicated in the other three reports [30,32,33]. However, it is beyond the scope of this study to interpret other researchers' data.

Taken together, it is likely that arginine methylation could promote the property of self-/intra-oligomerization, such as in FUS, EWS, TAF15, etc., to form membrane-less organelles. At the same time, arginine methylation generates a docking site for the Tudor domain, which brings two or more molecules together for inter-oligomerization to form isolated particles, such as TDRD proteins and SMN proteins [17,52]. Arginine methylation also creates sites for recognition by other domains. Finally, and most importantly, all these bodies or particles formed through phase separation create relatively isolated microenvironments for RNA splicing, rRNA biogenesis, DNA repair, generation of micro-RNAs, small interfering RNAs, mRNA biogenesis, protection, transporting, etc.
Arginine Methylation of Histone Subunits and Transcription Activation
Although arginine methylation on histones also occurs in unicellular organisms, there it is mostly related to heterochromatin regions with repressed transcription activity [14]. However, arginine methylation on histone tails participates in transcription activation in higher eukaryotes and possibly couples with Pol II-pausing regulation, a unique transcription regulation mechanism that only occurs in higher eukaryotes.
It is reported that PRMT1 and CARM1 (PRMT4) function synergistically with each other [53,54]. CARM1 associates with the coactivator glucocorticoid receptor-interacting protein 1 (GRIP1) to stimulate transcriptional activation [55]. PRMT1 is recruited by nuclear receptors or coactivators to methylate histone H4R3, while CARM1 works on histone H3 [56][57][58][59]. Knockouts of PRMT1, which is responsible for over 85% of arginine methylation in vivo [60], are embryonic lethal, not surviving beyond E6.5 [61,62], while knockout of CARM1 is neonatal lethal [63]. ChIP-seq data show that both CARM1 and its potential methylation target H3R17 are located at the promoter region or transcription start site [64], similar to PRMT2 and its potential substrate H3R8 [65]. PRMT6 deposits H3R2me2a at promoter and enhancer regions [66]. H3R2me2a occupancy at both enhancer and promoter regions drastically increases upon activation by all-trans retinoic acid through a nuclear receptor [66], further confirming earlier reports that PRMTs are recruited by nuclear receptors or their coactivators [56][57][58][59]. It has been a major mystery as to why transcription activation needs arginine methylation on +1 nucleosomes (Figure 3).
It is very well established that H3.3K4me1 at enhancer regions is generated by MLL3/4 [67][68][69][70][71]. It was found that there is no methylation of H3.3K27 marks at enhancer regions before zygotic genome activation (ZGA) [72][73][74][75]; therefore, MLL3/4 does not need to recruit UTX or KDM7 to generate H3K27me0 sites at this time. However, there are scarce data regarding how MLL3/4 is recruited to enhancer regions. Two reports showed that the ectopic expression of CEBPβ or HOXA9 is sufficient to bring MLL3/4 to enhancer regions to generate H3K4me1 [67,76], suggesting that MLL3/4 could be recruited by p300/CBP, coupled with transcription factors.
One report shows that the tandem of PHD4-6 fingers of MLL4 is essential for the functioning of MLL4 and that it can specifically bind to H3R3me0 and H4R3me2a but not H4R3me2s [77], suggesting that arginine methylation on the +1 nucleosome could be critical for recruiting MLL3/4. It was later confirmed that H4R17 plays an essential role in recruiting MLL3/4 [78]. Interestingly, another report showed that acetylated H4K16ac binds to the PHD6 finger of MLL4 [79]. We propose that both the methylated arginines generated by PRMT1/CARM1 on the +1 nucleosome and the acetylated H4 generated by CBP/p300 on the enhancer nucleosome (-N) could work together with additional elements, such as the enhancer DNA sequence, to bring MLL3/4 to the enhancer region, generating H3.3K4me1 at the enhancer region (Figure 4). Further characterization of the binding specificity of these PHD fingers of MLL3/4 is required.

Interestingly, it was reported that PRMT1 is required to recruit JMJD2C to trigger transformation in acute myeloid leukemia through removing methyl groups on H3K9 [80]. This is consistent with our recent discoveries that the Tudor domain of the JMJD2 family prefers binding to arginine-methylated histone tails instead of lysine-methylated histone tails (Liu et al., unpublished). JMJD2 family members are lysine demethylases that remove methyl groups on H3K9 and H3K36, as characterized by several groups including ours [81][82][83][84]. It was reported by Dr. Bruno Amati's group that methylation of histone H3R2 by PRMT6 and of H3K4 by an MLL complex (which is likely dominant at the -1 nucleosome) are mutually exclusive [85], while Dr. Ernesto Guccione's group revealed that H3K4me2s is rich on the +1 nucleosome [86], similar to H4R3me2s, which was found to accumulate at promoter regions [87].
Based on the above-collected data, we currently hypothesize that arginine-methylated histone tails on the +1 nucleosome recruit JMJD2 family members to remove methyl groups of H3K9 on enhancer nucleosomes, converting them from inactive to active enhancers, coupled with the function of MLL3/4 (Figure 5). In summary, arginine methylation of histone tails on +1 nucleosomes generates docking sites for the JMJD2 family (through the Tudor domain), MLL3/4 (through plant homeodomain (PHD) fingers and/or WDR domain-containing proteins within the COMPASS-like complex), etc. (Figures 3-5). JMJD2 family members, joined by other monomethyl-removing Jumonji family members, remove methyl groups on H3K9 (or H3K36 if the enhancers are located within intron regions) of nucleosomes at enhancer regions (-L, -M, -N), which are further modified by MLL3/4 to generate H3K4me1 on nucleosomes at enhancer regions. Arginine methylation on +1 nucleosomes of genes controlled by stimulating signals could be an essential step for transcription activation of genes regulated by enhancers, and it is likely coupled with Pol II pausing.
Arginine Methylation and Transcription Repression
Compared with PRMT1- and CARM1 (PRMT4)-accompanying transcription activation [53,56,58,88,89], PRMT5 and PRMT6 are always coupled with the transcription repression process (Figure 6). PRMT5 symmetrically di-methylates H2AR3, H4R3 [90], and H3R8 to mediate transcriptional repression [91,92]. PRMT6 functions mainly as a transcription co-repressor by asymmetrically di-methylating H3R2 [85] or H2AR29 [93]. It remains unknown why transcription repression requires arginine methylation on histone tails. Interestingly, Tudor domains can be found in a series of proteins that participate in transcription repression or heterochromatin formation. The H3K9 methyltransferase SETDB1 contains two Tudor domains [94,95]. It is reported that a subset of the histone H3K9 methyltransferases Suv39h1, G9a, GLP, and SETDB1 form a complex to function together [96]. It is obvious that methylated arginines of histone tails on nucleosome +1 may recruit SETDB1 to convert H3.3K9me0 to H3.3K9me3 on nucleosomes at the enhancer region (-N), with the help of other H3K9 methyltransferases, which can be specifically recognized by the heterochromatin protein 1 (HP1/CBX5) family [97], to form constitutive heterochromatin for complete silencing of a target gene (Figures 7 and 8). A large number of PRC2-associated protein family members, PFH, also contain Tudor domains [98,99]. It is likely that they may help PRC2 to be recruited to promoter regions to shut down the transcription unit. Interestingly, the Tudor-domain-containing proteins ARID4A and ARID4B [98] are found to associate with the Sin3/Rpd3 repression complex [100], which is a major histone deacetylase complex that removes acetyl groups on histone tails.
Arginine Methylation and Transcription Repression
Compared with PRMT1-and CARM1 (PRMT4)-accompanying transcription activation [53,56,58,88,89], PRMT5 and PRMT6 are always coupled with the transcription repression process ( Figure 6). PRMT5 symmetrically di-methylates H2AR3, H4R3 [90], and H3R8 to mediate transcriptional repression [91,92]. PRMT6 functions mainly as a transcription co-repressor by asymmetrically di-methylating H3R2 [85] or H2AR29 [93]. It remains unknown why transcription repression requires arginine methylation on histone tails. Interestingly, Tudor domains could be found in a series of proteins, which participate in transcription repression or heterochromatin formation. An H3K9 methyltransferase SETDB1 contains two Tudor domains [94,95]. It is reported that a subset of the histone H3K9 methyltransferases Suv39h1, G9a, GLP, and SETB1 form a complex to function together [96]. It is obvious that methylated arginines of histone tails on nucleosome +1 may recruit SETDB1 to convert H3.3K9me0 to H3.3K9me3 on nucleosomes at the enhancer region (-N) with the help of other H3K9 methyltransferases, which can be specifically recognized by the heterochromatin protein 1 (HP1/CBX5) family [97], to form constitutive heterochromatin for a complete silence of a target gene (Figures 7 and 8). A large number of PRC2-associated protein families, PFH, also contain Tudor domains [98,99]. It is likely that they may help PRC2 to be recruited to promoter regions to shut down the transcription unit. Interestingly, Tudor-domain-containing proteins ARID4A and ARID4B [98] are found to associate with Sin3/Rpd3 repression complex [100], which is a major histone deacetylase complex to remove acetyl groups on histone tails. Figure 6. A model of transcription repression and arginine methylation on +1 nucleosome. When transcription repressors (TF2) bind to the enhancer regions, which recruit nuclear receptor corepressor (NCoR). NCoR, in turn, recruits PRMT5/6 to generate methylated arginine on the +1 nucleosome. Figure 6. A model of transcription repression and arginine methylation on +1 nucleosome. When transcription repressors (TF2) bind to the enhancer regions, which recruit nuclear receptor co-repressor (NCoR). NCoR, in turn, recruits PRMT5/6 to generate methylated arginine on the +1 nucleosome. In summary, several Tudor-domain-containing transcription cofactors that are involved in transcription repression could be recruited by arginine-methylated histone tails on +1 nucleosomes of stimulation-regulated genes, with correspondent repressing complexes such as Sin3-containing histone deacetylase complex, which remove acetyl groups on nucleosomes at enhancers, or PRC2 complex, which builds facultative heterochromatin to convert an active transcription unit back to a repressive unit for a later reactivation, a very popular action after zygote activations during embryonic development. However, data on direct and specific interactions between these Tudor domains of cofactors and arginine-methylated histone tails are still lacking. This could be a future exciting field to explore.
The Ubiquitous Arginine Methylation and Potential Final Destination
Proteomic analysis of the status of arginine methylation reveals an astonishing fact: like phosphorylation and ubiquitination, arginine methylation is ubiquitous (Figure 1) [15]. It is well established that both phosphorylation and ubiquitination are reversible, achieved either by a large family of phosphatases [101] or by a large group of deubiquitinating enzymes [102]. However, the hunt for arginine demethylases has not been very successful so far; it remains of interest whether such enzymes exist in vivo and, most importantly, what the exact biological consequences of their actions would be. As mentioned above, arginine methylation is not simply required for responses to stimulating signals such as heat shock, DNA repair, and differentiation cues; on non-histone proteins, it is also required for the phase separation of membrane-less bodies serving different purposes in RNA metabolism, such as splicing, rRNA biogenesis, and snRNA and microRNA biogenesis. In this regard, we tend to speculate on one destiny of arginine methylation: irrespective of the purpose served, on either non-histone proteins or histone subunits, the marked proteins are doomed for destruction once their missions are accomplished, similar to the popular ubiquitination route of proteasome-oriented degradation. An outstanding feature of targets bound for ubiquitination-mediated degradation is that they are frequently regenerated without major economic burden; as we know, most cell bodies are disassembled, and even the nucleolus is dissolved, during mitosis. On the other hand, accumulating data suggest that arginine methylation on the +1 nucleosomes of stimulation-regulated genes also fits this category; a property gained during evolution in higher eukaryotes, especially in the animal kingdom, it may be born to be destroyed in order to regulate the transcription activities of a unique group of stimulation-regulated genes. The discoveries of the protease activities of JMJD5, JMJD6, and JMJD7 by our group indicate that there does exist a novel destruction mechanism for arginine-methylated proteins, as described below [10,[23][24][25][26],103].
The Novel Protease Activity of JMJD5 on Arginine-Methylated Histone Tails Coupled with CDK9 to Release Paused Pol II
After some pioneering studies characterizing the lysine demethylases of the Jumonji JMJD2 subfamily [81][82][83], we sought to identify potential arginine demethylase candidates from the same Jumonji protein family. The first candidate we examined was JMJD6; however, we failed to detect any activity of JMJD6 toward methyl groups on arginines of histone tails, as described in the next section. At the same time, there was also some controversy regarding the function of JMJD5 [104], which was first characterized as a lysine demethylase [105], while another group found it to have arginine hydroxylase activity [106]. JMJD5 seems to play a critical role in the early development of mice, since its knockout leads to early embryonic lethality [107][108][109]. Interestingly, we detected a drop in the content of methylated arginines when bulk histone was treated with JMJD5 [23]. However, we could not detect removal of methyl groups from potentially methylated arginines of histone tails using synthesized peptides. To avoid the nonspecificity issues of antibodies used for the readout of methylated arginines, we generated 14C-labeled arginine-methylated histone tails by treating bulk histone with PRMT1/5/6/7 and 14C-SAM [23]. These 14C-labeled substrates were subjected to enzymatic reaction with JMJD5. To our surprise, short fragments started to appear after treatment with JMJD5 [23]. Follow-up characterization revealed that JMJD5 possesses both endopeptidase and carboxy-exopeptidase activities, defining a novel protease family (Figure 9) [23] that contains all essential structural features of a hydrolase (Figure 10). Interestingly, another group later also found protease activity of JMJD5 on H3, though specific for lysine residues [110]. Furthermore, we found a unique substrate recognition feature within the Tudor-domain-like motif that discriminates the side chain of arginine from that of lysine, suggesting a very specific recognition mode between JMJD5 and methylated arginines (Figure 10) [24]. Another surprising discovery is that JMJD5 affects the homeostasis of both arginine-methylated histones and histones overall; depletion of JMJD5 leads to dramatic accumulation of both in MEF cells and human cancer cells [23], providing strong evidence to support the cleavage role of JMJD5 on arginine-methylated histones. These data suggest that arginine-methylated histones on +1 nucleosomes, as discussed earlier, are doomed for destruction rather than being recycled through demethylation or reversible recovery, a hallmark of epigenetics. Our later ATAC-seq data showed that the +1 nucleosomes of a large number of genes shifted upstream in JMJD5-knockout MEF cells, suggesting that JMJD5 works only on the +1 nucleosomes of certain stimulation-regulated genes [25]. However, this novel discovery also raises a significant question of how JMJD5 is recruited and carries out its function. This question led to another novel discovery, which may raise the curtain on the mysterious transcription regulation mechanism of promoter-proximal Pol II pausing in higher eukaryotes.
There was an uncharacterized N-terminal domain of JMJD5 (NTD-JMJD5) connected by a flexible linker to the C-terminal catalytic core domain. From secondary and three-dimensional structure predictions, we found that this N-terminal domain is quite similar to those of NRD1, PCF11, Rtt103, SCAF8, and RPRD1A/B, well-recognized binding proteins of the C-terminal domain of Pol II (CTD-Pol II) [111][112][113][114][115]. This observation guided us to explore the potential association between NTD-JMJD5 and phosphorylated CTD-Pol II. We found that NTD-JMJD5 could pull down a very specific species of phosphorylated CTD-Pol II [25], which is recognized only by a rabbit polyclonal antibody raised against CTD-heptad repeats with phosphorylated serine-2 within each repeat (-YS(p)PTSPSYS(p)PTSPS-) [116], but not by the widely used monoclonal antibody 3E10, which was raised against a single phosphorylated serine-2 CTD-heptad peptide (-YS(p)PTSPS-) [117]. Further characterization revealed that NTD-JMJD5 binds a CTD-heptad repeat peptide carrying both serine-2 phosphorylation and an additional serine-5 phosphorylation in the second repeat (-YS(p)PTSPSYS(p)PTS(p)PS-) with extremely high affinity (~9 nM) [25]. Interestingly, this unique phosphorylation pattern of CTD-Pol II had actually been revealed in higher eukaryotes but not in yeast, though the authors and reviewers may have overlooked its novel differences and significance [118]. This novel discovery leads to another significant question: which kinase is responsible for generating this unique phosphorylated CTD-Pol II pattern in vivo? As reported earlier, CDK9 phosphorylates serine-2 of CTD-Pol II at the early stage of transcription and is critical for the release of paused Pol II in higher eukaryotes [118][119][120][121][122][123][124][125]; this function is unique to higher eukaryotes, though some reports claimed that Bur1 is the homolog of CDK9 in yeast [126,127], which is still a hotly debated topic in the transcription field. In line with our expectations, inhibition of CDK9 by flavopiridol or its depletion through ubiquitin-mediated targeting led to a dramatic drop in this phosphorylation pattern of CTD-Pol II [25]. Based on (1) the early discoveries showing that the +1 nucleosome is the cause of Pol II pausing [128][129][130][131], (2) our findings that JMJD5 specifically cleaves arginine-methylated histone tails on +1 nucleosomes to generate "tailless nucleosomes" [23,25], and (3) the recruitment of JMJD5 by the phosphorylated CTD-Pol II generated by CDK9 [25], we concluded that JMJD5 may couple with CDK9 to release paused Pol II at stimulation-regulated genes in higher eukaryotes [25].

Figure 9. (A) Cleavage of H3R2me2s by c-JMJD5 and mutant c-JMJD5. The top portion shows the sequence of the H3R2me2s peptide, with symmetric di-methylation on R2 and MW 2749.47 Da. After cleavage, a major product of MW 2494.3 (peak A) corresponds to the peptide with the first two residues missing. (B) c-JMJD7 generated a similar profile [23].
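The mass shift quoted in the Figure 9 caption can be sanity-checked with simple residue-mass arithmetic: histone H3 begins Ala-Arg-Thr-Lys..., so removal of the first two residues should cost one alanine plus one symmetrically di-methylated arginine. A minimal sketch, assuming average (not monoisotopic) residue masses and that the two methyl groups are the only modification on the lost fragment:

```python
# Sanity check of the Figure 9 mass shift using average residue masses (Da).
# Assumption: endopeptidase cleavage removes the H3 N-terminal Ala plus the
# symmetrically di-methylated Arg, and the two methyl groups are the only
# modifications on the lost fragment.
AVG_RESIDUE_MASS = {"A": 71.0788, "R": 156.1875}
CH2 = 14.0266  # each methylation replaces an H with CH3, adding one CH2

lost_fragment = AVG_RESIDUE_MASS["A"] + AVG_RESIDUE_MASS["R"] + 2 * CH2
observed_shift = 2749.47 - 2494.3  # parent peptide minus product peak (caption)

print(f"expected loss: {lost_fragment:.2f} Da")   # ~255.33
print(f"observed loss: {observed_shift:.2f} Da")  # ~255.17
# The ~0.16 Da gap is within rounding of the reported peak masses, consistent
# with loss of Ala + Arg(me2s) from the peptide N-terminus.
```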
JMJD6 Cleaves MePCE to Disrupt the 7SK snRNP Complex to Release p-TEFb
JMJD6 is one of the most controversial proteins in the field of biology [132]. It was first cloned as a phosphatidylserine (PS) receptor [133] but was later corrected to be a nuclear protein unrelated to PS [134][135][136]. It was subsequently reported to have arginine demethylase activity on histone tails [18], hydroxylase activity on the splicing factor U2AF65 [137] and on histone tails [138], both arginine demethylase activity on histone tails and RNA demethylase activity on the 5′ end of 7SK snRNA [19], and, surprisingly, PS binding [139,140]. The exact or cognate substrate(s) of JMJD6 remained elusive, though JMJD6 was found to belong to a functionally poorly characterized Jumonji domain-containing hydroxylase family [134][135][136]. After determining its structure, we found that JMJD6 has some unique structural features besides similarities to the hydroxylase family member factor inhibiting hypoxia-inducible factor 1 (FIH-1) [141]. This structural information guided us in characterizing the function of the JMJD2 family, pioneering members of which had been identified as histone lysine demethylases [81][82][83]. However, we had a hard time finding any enzymatic activity of JMJD6 toward either methylated lysines or arginines of histones, which held us back from publishing, until several years later, when we found that JMJD6 nonspecifically recognizes single-stranded RNA (ssRNA) through its disordered C-terminal arginine-rich motif with high affinity (~40 nM) [103]. We speculated that it could be an RNA demethylase based on this tight binding to ssRNA [103]; this assumption proved incorrect, based also on our more recent discoveries [10]. Interestingly, we did observe a loss of arginine-methylated histone tails, detected with specific antibodies against arginine-methylated histone tails, when JMJD6 was added to bulk histone [23]. This may explain why JMJD6 was previously identified as a histone arginine demethylase [18,19].
Several lines of evidence drove us to explore the protease activity of JMJD6. First, a noteworthy report from Dr. Michael Rosenfeld's group showed that JMJD6 could destroy the 7SK snRNP complex to release P-TEFb [19]. Second, the unexpected protease activities of JMJD5 and JMJD7 on histone tails, discovered during their characterization, attested that JMJD6 could also act as a protease [23]. Third, we had indeed found nonspecific protease activity of JMJD6 when it was applied to bulk histones [23]. Finally, it was straightforward to select 7SK snRNP as a potential substrate, based on the findings of Dr. Rosenfeld's group, after we failed to identify specific activities of JMJD6 on histone tails. Based on the novel protease activities of JMJD5 and JMJD7 [23,24], the high structural similarity among the catalytic cores of JMJD5, JMJD6, and JMJD7 [24,103], and the severe phenotypes of JMJD6 and JMJD5 knockouts in mice [107,108,136,142], we hypothesized that JMJD6 may possess protease activity toward methylated arginines on protein candidates that regulate the activity of Pol II, especially promoter-proximally paused Pol II. This proved to be true, as JMJD6 specifically cleaves an arginine-rich sequence (-KRRRR-) within MePCE, a major component of the 7SK snRNP complex (Figure 11) [10], which primarily functions to sequester the CDK9-containing p-TEFb [143,144]. Methyl phosphate capping enzyme (MePCE) was first characterized as a component of the 7SK snRNP complex that acts as a capping enzyme on the γ-phosphate at the 5′ end of 7SK RNA [145], though another group claimed it also has RNA methyltransferase activity on the 5′ phosphate of microRNAs [146]. Furthermore, a capping-independent function of MePCE, via stabilization of 7SK snRNA and facilitation of 7SK snRNP assembly, was reported by Dr. Qiang Zhou's group [147]. Knockdown of MePCE led to destabilization of the 7SK snRNP complex in vivo [147][148][149]. We found that depletion of MePCE dramatically increased the activity of CDK9, consistent with the discoveries reported previously [147][148][149]. Most importantly, the novel protease activity of JMJD6 toward MePCE elucidates the underlying mechanism by which the activity of CDK9 is strictly controlled and requires the help of both BRD4 and JMJD6, further suggesting that there are virtually no free CDK9 complexes for the super elongation complex (SEC) to recruit under normal circumstances. Based on these discoveries, we proposed that JMJD6 cleaves MePCE to release p-TEFb [10].
On the other hand, it appears that the super elongation complex (SEC) is unlikely to recruit p-TEFb without the assistance of JMJD6 and BRD4. Compared with the efficient recruitment of p-TEFb by the TAT protein of the human immunodeficiency virus (HIV) [150], BRD4 is claimed to be responsible for the endogenous recruitment of p-TEFb to the promoters of Pol II pause-regulated genes [143,144,149]. However, BRD4 lacks an RNA-binding motif, in contrast to TAT (Wu et al., 2007). Therefore, we hypothesized that another factor must exist to help BRD4 recruit p-TEFb and engage in the instigation of Pol II transcription elongation. Besides the classic Bromo domains, which recognize acetylated histone tails, BRD4 contains an extra-terminal (ET) domain that recognizes JMJD6 [151,152]. Combined with our discovery that JMJD6 nonspecifically binds single-stranded RNA with high affinity [103], these findings led us to propose that JMJD6 may be recruited by both BRD4 and newly transcribed RNAs from Pol II to help BRD4 recruit p-TEFb, acting analogously to TAT, which associates with both p-TEFb and TAR [10].
Interestingly, JMJD6 was found to coexist with the stress granule (SG)-related protein G3BP1, and removal of JMJD6 leads to the accumulation of arginine-methylated G3BP1 and affects its function in SG formation; the authors therefore further suggested that JMJD6 could be an arginine demethylase [37]. It will be of interest to investigate whether JMJD6 cleaves arginine-methylated G3BP1 and affects the overall homeostasis of G3BP1. If so, the above result could also be interpreted alternatively: JMJD6 may cleave arginine-methylated G3BP1 and lead to its final degradation, so as to resolve SGs once the challenge is eliminated.
The Protease Activity of JMJD7 on Histone Tails and Beyond
JMJD7 is a barely explored member of the Jumonji protein family. Based on its sequence similarity with JMJD5 at the catalytic core, we characterized its protease activity on arginine-methylated histone tails simultaneously with that of JMJD5 (Figure 9) [23]. Further structural and functional characterization revealed high structural similarity between JMJD7 and JMJD5, with specificity toward arginine-methylated histone tails but with differences from JMJD5, such as a different sensitivity to combinations of histone tail modifications [24]. Knockout of JMJD7 in a human cancer cell line dramatically represses cell growth, suggesting a critical role in cell proliferation [23]. Furthermore, depletion of JMJD7 also leads to accumulation of arginine-methylated histones as well as histones overall, suggesting a role similar to that of JMJD5 in regulating histone homeostasis [23]. Interestingly, a recent proteomic analysis showed that JMJD7 is associated with several transcription factors, such as FOXI1 and the Pogo transposable element with ZNF domain (POGZ) [153], the latter of which is a critical transcription factor regulating human fetal hemoglobin expression [154] and playing critical roles in neuronal development in the brain [155]. An early report showed that JMJD7 and POGZ work together to regulate osteoclast differentiation, with JMJD7 found to occupy the promoter regions of several genes associated with osteoclast differentiation [156], suggesting that JMJD7 is recruited to the promoter regions of these genes and is possibly required for their activation. Another important feature of JMJD7 is that it has frequently been found fused with a phospholipase, PLA2G4B, regulating proliferation in head and neck squamous cell carcinoma [157], though the underlying mechanism remains to be investigated. Interestingly, another group claimed that JMJD7 is a lysine hydroxylase specific for two translation factor GTPases of the TRAFAC family [158]; it will be of interest to determine the consequences of hydroxylation of these GTPases.
Compared with JMJD5, JMJD7 lacks a similar N-terminal domain, which in JMJD5 is required for recruitment to Pol II. It is likely that JMJD7 is recruited through another mechanism, such as direct recruitment by a transcription factor like POGZ to the promoter regions of regulated genes, where it would cleave arginine-methylated histone tails and generate "tailless nucleosomes" similar to those produced by JMJD5. Numerous questions remain to be answered; most importantly, a knockout model of JMJD7 in mice is lacking. Proteomic analysis of JMJD7-associated partners is also an exciting direction to explore.
Cancers Coupled with Upregulation of JMJD5/JMJD6/JMJD7 and PRMTs
Given the important roles of JMJD5/6/7 in embryonic development and transcription activation in higher eukaryotes, as analyzed above, it is not surprising that all of them are upregulated in various cancers. JMJD5 is highly expressed in breast cancer [105], lung cancer [159], colon cancer [160], prostate cancer [161], etc. JMJD6 is also highly expressed in numerous cancers [162][163][164][165][166][167][168][169]. Even the JMJD7-PLA2G4B fusion is elevated in head and neck squamous cell carcinoma [157]. In this regard, JMJD5, JMJD6, and JMJD7 could be effective anticancer drug targets for different cancers. Indeed, inhibitors of JMJD6 have been developed and have shown repressive effects on some types of cancer [163,[170][171][172][173][174][175]; there is no report of inhibitors of JMJD5 or JMJD7 yet. By the same token, however, such inhibitors could be toxic to animals, including humans, because of the critical functions of JMJD5, JMJD6, and JMJD7. PRMTs are also highly involved in cancers, and inhibitors have been developed to treat them; this is currently a highly active topic in the field of anticancer drug development [2,5].
Conclusions and Perspectives
Based on the detailed analysis presented here of the novel protease activities of JMJD5/6/7 and the ubiquitous activities of PRMTs, we speculate that these two enzyme systems may build a novel protein destruction pathway, analogous to the ubiquitination pathway but responsible for more specific regulation of individual pathways. There are more than 60 members of the Jumonji protein family, and besides JMJD5/6/7, the exact biological functions of several members of the small-size subfamily, including JMJD4, JMJD8, and others, are not well characterized [109]. It will be of great interest to investigate the phenotypes of knockouts of each member; we expect that some of them could act as proteases to digest protein substrates containing methylated arginines. As with the ubiquitination field, it will take considerable effort to build up methodologies to tackle the details. Significant questions remain to be addressed: for example, why this pathway, rather than the ubiquitination pathway, is used to destroy certain proteins; how the two pathways communicate with each other; and at what stage they merge. Interestingly, one report reveals that JMJD5 promotes the ubiquitin-mediated degradation of the circadian oscillator protein CRY1 [176]. It will be an exciting topic to investigate the molecular basis of this process.
Determining the psychometric properties of a novel questionnaire to measure “preparedness for the future” (Prep FQ)
Background People are living longer than ever before. However, with living longer come increased problems that negatively impact quality of life and quality of death. Tools are needed to help individuals assess whether they are practicing the attitudes and behaviors associated with a future long life, high quality of life, high quality of death, and a satisfying post-death legacy. The purpose of this paper is to describe the process we used to develop a novel questionnaire ("Preparedness for the Future Questionnaire™" or Prep FQ) and to define its psychometric properties. Methods Using a multi-step development procedure, items were generated for the new questionnaire, after which the psychometric properties were tested with a heterogeneous sample of 502 Canadians. Using an online polling panel, respondents were asked to complete demographic questions as well as the Prep-FQ, a Global Rating of Life Satisfaction, the Keyes Psychological Well-Being scale, and the Short-Form 12. Results The final version of the questionnaire contains 34 items in 8 distinct domains ("Medico-legal", "Social", "Psychological Well-being", "Planning", "Enrichment", "Positive Health Behaviors", "Negative Health Behaviors", and "Late-life Planning"). We observed minimal missing data and good usage of all response options. The average overall Prep FQ score is 51.2 (SD = 13.3). The Cronbach alphas assessing internal reliability for the Prep FQ domains ranged from 0.33 to 0.88. The intra-class correlation coefficient (ICC) used to assess test–retest reliability had an overall score of 0.87. For the purposes of establishing construct validity, all the pre-specified relationships between the Prep FQ and the other questionnaires were met. Conclusion Analyses of this novel measure offered support for its face validity, construct validity, test–retest reliability, and internal consistency. With the development of this useful and valid scale, future research can utilize this measure to engage people in the process of comprehensively assessing and improving their state of preparedness for the future, tracking their progress along the way. Ultimately, this program of research aims to improve the quality and quantity of people's lives by helping them 'think ahead' and 'plan ahead' on the aspects of their daily life that matter to their future. Supplementary Information The online version contains supplementary material available at 10.1186/s12955-021-01759-z.
insufficient funds [4]. And when they cross the finish line, many are poorly prepared for the final stages of life and experience poor-quality end-of-life care [5]. All of this begs the question: can people do more to prepare to be older?
All of us are in training to become an older person. Perhaps part of the problem is that we, as a society, do not realize or prioritize the fact that the choices we make today determine how successfully we will age in the future. To the extent that people do think ahead, see themselves as an older person, place some value on that future stage of life, and believe they have 'control' over their destiny, they will make better lifestyle choices today to arrive at a better place tomorrow [6]. For example, people in the general population who never smoke, maintain a normal body mass (BMI range of 18.0-24.9), do 30 min or more of vigorous exercise daily, maintain moderate or no alcohol consumption, and have a healthy diet could prolong their life expectancy at age 50 by an additional 14.0 years for women and 12.2 years for men, compared with controls who did not adopt any of these lifestyle behaviors [7]. These same lifestyle choices also translate into lower chances of developing medical conditions such as diabetes, heart problems, and dementia [8]. The economic consequences of these 'modifiable' lifestyle factors are staggering. In the USA, a recent analysis determined that 27% of annual health care spending was attributable to these five modifiable risk factors [9]. That translates into 730 billion dollars spent annually on managing diseases related to behaviors we have the ability to control. Further, experts suggest that 75% of what determines how well we age is due to lifestyle factors or other factors within our locus of control [10]. So if people see themselves as in training to become an older person, they are more likely to make better lifestyle choices today that will increase their chances of living longer and living better.
We recently surveyed 502 Canadians over the age of 18 and asked them questions about their views on aging (unpublished data from a prior sample; see Additional file 1: eTable 1). While the overwhelming majority felt it was important to think about themselves as an older person, few people regularly spend time thinking about what it will be like for them as an older person. When they do, respondents were split on whether they saw their future older selves in a positive light or a negative one. A significant number of respondents lacked the confidence that they could successfully grow older, and many felt it was not up to them, that external factors would influence the success of their aging experience. It seems that people need help in 'thinking ahead' and 'planning ahead', and in putting themselves in the driver's seat as their own locus of control, so as to move forward with confidence in creating a long, high-quality life and a high-quality death.
Measurement precedes improvement. If we want to be able to help people better prepare for the future, we need to be able to measure their current 'state of preparedness for the future'. The purpose of this paper is to describe the process we used to develop a novel questionnaire ("Preparedness for the Future Questionnaire™" or Prep FQ) and to assess its psychometric properties for evaluating a person's current state of preparedness for their future as an older person. The aim of this questionnaire is to help people think more about their future as an older person and realize the things they could be doing today to age optimally.
Methods
This project is a multi-phase study aimed at developing and providing initial validation of a novel questionnaire, the Preparedness for the Future Questionnaire™ (Prep FQ).
Item generation and refinement
Items for the Prep FQ were generated from three sources. Conceptually, we believe we can improve health outcomes, quality of life, survival, and the end-of-life experience by helping people think ahead and plan ahead [6]. For example, there is a high level of evidence that current lifestyle behaviors, such as smoking or eating healthily, impact longevity [7], and that planning for one's future medical care in advance translates into improved health outcomes for both patients and their substitute decision-makers [11]. Accordingly, we carefully searched the broad scientific literature to identify several key aspects of successful aging, optimal aging, death preparation, end of life, and post-death (legacy) contributions. Potential topics were included if a particular activity in the present was shown to impact future quality or quantity of life (such as quitting smoking or healthy eating) or if it was a practical suggestion associated with optimal life and death experiences (legally documenting a substitute decision-maker, or wills and estate planning, for example). Included topics and the evidence supporting their impact are summarized in Additional file 1: eTable 2.
In addition, in 2019, we surveyed a separate cohort of 500 Canadians regarding their views on aging (unpublished data discussed in the introduction). We asked the following open-ended question: "Please describe what activities or behaviors you are currently doing to prepare for a great future as an older person." Responses were reviewed by the principal author (DKH) to generate items for consideration for the Prep FQ. The responses included actions such as eating healthily, exercising regularly, keeping good sleep habits, and saving money. If responses were supported by data and consistent with our conceptual framing, they were considered for inclusion in the questionnaire. Thirty-one items were included in the initial version of the questionnaire.
Each of these potential aspects was then incorporated into an item on the questionnaire. We created response options that reflected the degree of completion or adherence with the related attitude or behaviors listed in the questionnaire. Finally, we piloted an early version of the questionnaire on a group of 20 lay people and health professionals, either individually or in a focus group. We solicited feedback on both the items and response options and whether they had additional items for consideration. Feedback led to further refinement of the items and response options. Three items (Leisure participation, Legacy Planning and Life-long learner) were added to the list as a consequence of this consultation.
In the final version of the questionnaire, we assigned points based on each item's impact on quality and quantity of life (see Additional file 1: eTable 2). Items that impacted quantity of life were weighted three times as much, and items that impacted quality of life twice as much, as purely practical suggestions, and more points per item were given when the respondent was more compliant with that item. The principal authors (DKH, PP, AD) collaborated and agreed on all point assignments. The "overall" Preparedness score is the sum of points from the responses to all answered questions. The domain scores are the sums of points from all answered questions belonging to each domain. The domain scores would have been considered missing if more than half of the responses applicable to the respondent for the domain were missing, but the Qualtrics system did not allow entry of records with any missing data. All scores were re-scaled to range between 0 (worst: lowest possible total points given the applicable answered questions) and 100 (best: highest possible total points given the applicable answered questions). Not all questions were applicable to all participants.
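To make this scoring scheme concrete, the sketch below computes a 0-100 rescaled score from weighted item points over the items a respondent actually answered. The item names, weights, and maximum point values are hypothetical placeholders, not the actual Prep FQ scoring key:

```python
# Minimal sketch of the Prep FQ-style scoring scheme (hypothetical items).
# Each answered item contributes points scaled by its impact weight
# (3x for quantity-of-life items, 2x for quality-of-life items, 1x for
# practical items), and the total is rescaled to 0-100 over the items
# applicable to the respondent.

ITEMS = {
    # name: (impact_weight, max_raw_points) -- placeholder values
    "smoking":        (3, 4),
    "exercise":       (3, 4),
    "will_prepared":  (1, 2),
    "social_network": (2, 4),
}

def domain_score(responses: dict[str, int]) -> float | None:
    """responses maps item name -> raw points earned; unanswered items are skipped."""
    earned = best = 0.0
    for name, points in responses.items():
        weight, max_points = ITEMS[name]
        earned += weight * points
        best += weight * max_points
    # 0 = worst possible, 100 = best possible, given the answered items.
    return 100.0 * earned / best if best > 0 else None

print(domain_score({"smoking": 4, "exercise": 2, "will_prepared": 1}))
```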
Determining the psychometric properties of the preparedness for the future questionnaire
The validation phase consisted of a cross-sectional survey of 502 Canadians registered with Qualtrics' online polling panels. To be eligible for this project, panelists had to be living in Canada, speak and read English, and be 18 years of age or older. We strategically sampled one third of participants from each of the following age ranges, so that we would end up with a representative sample of adults and be able to compare subgroup differences: 44 years of age or less, 45-65, and 66 years old or older. To obtain a representative sample of Canadians, we aimed to enroll one quarter from the Western provinces, one quarter from Ontario, one quarter from Quebec, and one quarter from Atlantic Canada.
In order to test the reproducibility (test–retest reliability) of our novel questionnaire, the Qualtrics staff re-administered the Prep FQ to a subset of 50 enrolled participants one week later. Prior to approaching participants for the reliability assessment, the assistant asked each potential respondent whether their life circumstances had changed in the past week; people who said 'No' were recruited to the re-test. We consider a one-week period justified, as it is sufficient time for respondents to have forgotten their original answers, but a short enough interval for little change in their lives to have occurred.
At the time of the first online interview, we also collected the following demographic data from participants: age, sex, location, marital status, family circumstances, level of education, language used on a daily basis, global rating of quality of life, and presence of significant health problems. All participants were asked all questions, except that participants < 60 years old were not asked 7 questions pertaining to physical and social activities, leisure activities, living independently, funeral and burial plans, and legacy plans, as these were judged to be less relevant to younger people and much of the supporting literature has been conducted exclusively in older persons. In addition, two questions (small business succession planning and family caregiver support) were conditional and not intended to be answered by all respondents.
The primary purpose of this phase was to determine the psychometric properties of the Prep FQ (item response rates, validated domains, internal consistency, reproducibility, and construct validity). We reviewed the response distribution and frequencies, and percent non-response for each item. Items with large amounts of non-response were flagged for potential removal. As a secondary objective, we then conducted an exploratory factor analysis (EFA) to help identify the factor structure of the questionnaire. This EFA guided the combining of items into domains.
With the finalized version of the questionnaire, we used Cronbach's alpha and McDonald's Omega coefficient scores to evaluate the internal consistency of responses to items from the same domain [12,13]. As there is no 'gold standard' or other validated instrument for measuring future preparedness, we developed a multifaceted approach to validating our novel questionnaire. In the development to date, we utilized a rigorous, comprehensive approach to establishing face and content validity. In this study, we examined construct validity. We expected the finalized version of the questionnaire to be associated with other validated questionnaires measuring potential health and psychological outcomes of someone who is well prepared for the future. Hence, once the questionnaire had been finalized, the Prep FQ domain scores were compared to a single-item Global Rating of Life Satisfaction (GRLS), the Psychological Well-Being (PWB) scale, and the Short-Form 12 (SF-12), a general health-related quality of life measure with two summary measures, the Physical Component Summary (PCS) score and the Mental Component Summary (MCS) score.
Global rating of life satisfaction
Subjective well-being is a high-level concept that captures the affective feelings and cognitive judgments people have about the quality of their lives. Life satisfaction is a component of subjective well-being that focuses on whether one is happy with one's life. Greater life satisfaction is associated with positive life outcomes, such as health [14], income [15], and better workplace performance [16]. We expected people who are more prepared for the future to have greater life satisfaction, using this measure as an indication of construct validity. While longer measures to assess life satisfaction exist, a single-item global rating of life satisfaction has been shown to have similar psychometric properties to longer scales and is considered both reliable and valid [17,18]. Therefore, we asked, "In general, how satisfied are you with your life?", with a 7-point scale from 1 (Completely Dissatisfied) to 7 (Completely Satisfied) used to categorize the answers.
Psychological well-being scale
Based on an extensive review of the literature, as well as existential and utilitarian philosophy, Ryff [19] defined psychological well-being as a process of self-realization consisting of six dimensions: autonomy, environmental mastery, personal growth, positive relations with others, purpose in life, and self-acceptance [19]. She then created a scale to measure these constructs, known as the "Psychological Well-Being Scale" or PWB. This scale has been used widely and has been shown to be reliable, valid, and responsive to psychological interventions [20][21][22]. Domain scores measuring the six dimensions are calculated by averaging the response scores of the seven items from each dimension; we created an overall score by averaging all 42 items. We expected people who score higher on the Prep FQ to have greater PWB domain scores, and people who score higher on the psychological domain of the Prep FQ to show an even greater correlation with the PWB domain scores than that seen with the overall Prep FQ score.
Short Form-12
The SF-12v1 is a multipurpose survey of general health status consisting of eight domains that uses just 12 questions to measure functional health and well-being from the patient's point of view [23]. Taking only two to three minutes to complete, the SF-12v1 covers the same eight health domains as the SF-36v2, with one or two questions per domain, and is highly correlated with the summary scores of the SF-36. The SF-12v1 has excellent validity, reliability, and internal consistency [23]. Given the long duration of the whole question set for participants, we felt the SF-12v1 was a practical, reliable, and valid measure of physical and mental health for our purposes. We expected people who score higher on the Prep FQ to have higher SF-12 summary scores, and people who score higher on the health domain of the Prep FQ to show an even greater correlation with the SF-12 Physical Component Summary score.
Construct validity
In summary, a priori, we hypothesized that we would observe weak-to-moderate correlations between these different but related measures. Specifically, we expected the following:

1. The overall Prep FQ score would correlate in a positive direction with all PWB domain and overall scores, the GRLS, and the SF-12 PCS and MCS, because someone who is better prepared for the future should enjoy a higher quality of life and greater life satisfaction. However, we expected these correlations to be weak to moderate rather than strong, because these health outcomes have determinants other than the state of preparedness.

2. The Psychological Well-being domain of the Prep FQ would correlate in a positive direction with the PWB domain and overall scores, the GRLS, and the SF-12 MCS, and these correlations would be greater in magnitude than those observed with the overall Prep FQ, because the Prep FQ overall score includes measures unrelated to psychological well-being.

3. The Psychological Well-being domain of the Prep FQ would be only weakly correlated with the SF-12 PCS, because they measure two different health constructs, and this correlation would be smaller than the correlations observed in the two analyses above.

4. The Positive Health Behaviors domain of the Prep FQ would correlate in a positive direction with the SF-12 PCS, and this correlation would be greater in magnitude than that observed with the overall Prep FQ (analysis #1), because of the tight relationship between these two measures of physical behavior and health, in contrast to the overall score, which includes unrelated measures.
In addition, to further support the validity of the questionnaire, we examined Prep FQ scores in various subgroups to demonstrate the ability of the novel questionnaire to discriminate between different states. Specifically, we expected to see higher scores in the Medico-legal domain among people who were married or had children, and lower scores in the Positive Health Behaviors domain among people with chronic health conditions.
Sample size and justification
We planned to enroll 500 participants so that the average width from the lower to the upper 95% confidence limit for Pearson correlations of 0.3, 0.5, and 0.8 would be 0.16, 0.13, and 0.06, respectively, and for Cronbach alphas of 0.5, 0.7, and 0.9 (assuming at least 3 items) would be 0.15, 0.09, and 0.03, respectively. We enrolled a sub-sample of 50 participants for the test–retest reliability assessment so that the average one-sided lower 95% confidence limit for ICCs of 0.7, 0.8, and 0.9 would be 0.56, 0.70, and 0.84.
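The expected confidence-interval widths for the Pearson correlations can be reproduced with the standard Fisher z transformation; a minimal sketch, where n = 500 is taken from the text and everything else is textbook:

```python
import numpy as np
from scipy import stats

def pearson_ci_width(r: float, n: int, level: float = 0.95) -> float:
    """Width of the two-sided CI for a Pearson correlation via Fisher's z."""
    z = np.arctanh(r)                       # Fisher z transform of r
    se = 1.0 / np.sqrt(n - 3)               # standard error of z
    crit = stats.norm.ppf(0.5 + level / 2)  # ~1.96 for a 95% CI
    lo, hi = np.tanh(z - crit * se), np.tanh(z + crit * se)
    return hi - lo

for r in (0.3, 0.5, 0.8):
    print(f"r={r}: CI width ~ {pearson_ci_width(r, n=500):.2f}")
# Output roughly matches the widths quoted in the text
# (~0.16, ~0.13, ~0.06 for r = 0.3, 0.5, 0.8 with n = 500).
```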
Statistical methods
Staff at Qualtrics were responsible for data collection and delivery of de-identified data, via a secure method, to the Clinical Evaluation Research Unit (CERU) at Kingston General Hospital, which was responsible for the analysis. Descriptive statistics were used to describe the responses to all questionnaires. We used Cronbach's alpha to measure the internal consistency of the items within each domain [12]. Separate Cronbach's alphas were calculated for the various participant subgroups based on age (participants ≥ 60 years and < 60 years) and the numbers of conditional questions answered, so that the final domain scores could be assessed for each participant subgroup. We also calculated McDonald's Omega coefficients, which take into account the strength of association between items and constructs as well as the item-specific measurement errors [13].
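As a reference implementation of these two coefficients, the sketch below computes Cronbach's alpha from item variances and McDonald's omega (total) from a one-factor model. The simulated data and the factor_analyzer dependency are illustrative assumptions; the study's own analysis was done in SAS:

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # assumption: pip install factor_analyzer

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scored responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def mcdonald_omega(items: np.ndarray) -> float:
    """Omega total from a one-factor model: (sum loadings)^2 over total variance."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items)
    loadings = fa.loadings_.ravel()
    uniqueness = fa.get_uniquenesses()
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniqueness.sum())

rng = np.random.default_rng(0)
trait = rng.normal(size=(300, 1))                       # shared latent trait
data = trait @ np.ones((1, 5)) + rng.normal(size=(300, 5))  # 5 correlated items
print(f"alpha = {cronbach_alpha(data):.2f}, omega = {mcdonald_omega(data):.2f}")
```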
We assessed the reproducibility of our novel questionnaire over a one-week period using intraclass correlation coefficients (ICC) calculated from one-way analysis of variance. For each domain and the overall score, we report the ICC with 95% confidence intervals. The ICC is the proportion of the total variance between assessments that is due to differences between respondents rather than differences between the two assessments within the same respondent [24]. ICC values above 0.7 are generally considered good to excellent. Pearson's correlation coefficients between the Prep FQ domains and the other instruments were used, as described in the prior section, to assess construct validity.
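A minimal sketch of the one-way ANOVA ICC described here, applied to simulated test-retest data with two assessments per respondent (the variance parameters are illustrative, chosen to land near the reported overall ICC):

```python
import numpy as np

def icc_oneway(scores: np.ndarray) -> float:
    """ICC(1) from one-way ANOVA; scores: respondents x assessments."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(1)
true_score = rng.normal(50, 13, size=(50, 1))          # stable respondent level
retest = true_score + rng.normal(0, 5, size=(50, 2))   # two noisy assessments
print(f"ICC = {icc_oneway(retest):.2f}")  # high: between-person variance dominates
```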
As a secondary analysis, we used exploratory factor analysis (EFA) to guide the grouping of items into domains. The EFA used the common factor model. Since the responses were not normally distributed and some domains were expected a priori to be correlated, we used iterated principal factor analysis with the oblique PROMAX rotation [25]. Although face validity was considered when loadings were equivocal, items were generally assigned to the factor on which they loaded most heavily. The main EFA considered only the 25 items that were applicable to all participants, so that the full sample could be used. We conducted separate EFAs on the common data set (25 items) for participants aged < 60 and 60 and over. We considered item weightings and clinical sensibility in determining the final factor structure. Based on face validity, the 2 conditional questions were assigned to the 'Planning' domain, and the remaining 7 items answered only by the older cohort were combined to form a 'Late-life Planning' domain. Domain scores were calculated for each respondent by summing the points for the questions applicable to them. Each domain score was then linearly rescaled so that 0 and 100 were the worst and best possible scores given the items applicable to the respondent.
Finally, we also report the Kaiser-Meyer-Olkin (KMO) measure and the Bartlett test of sphericity to judge the suitability of conducting the factor analyses. It is suggested that a KMO measure below 0.50 is unacceptable and > 0.60 is tolerable; the overall KMO should be greater than 0.80 [26]. For the Bartlett test of sphericity, a small p value (p < 0.05) indicates rejection of the null hypothesis, which suggests that the data are appropriate for factor analysis [26].
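For illustration, the same suitability checks and a promax-rotated principal-factor EFA can be run with Python's factor_analyzer package; the study itself used SAS, and the 25-item random data below is a placeholder that, being uncorrelated noise, should fail the KMO/Bartlett checks, which itself illustrates their purpose:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

# Placeholder data standing in for the 25 items answered by all respondents.
rng = np.random.default_rng(2)
items = pd.DataFrame(rng.normal(size=(502, 25)),
                     columns=[f"item_{i+1}" for i in range(25)])

# Suitability checks: overall KMO should exceed ~0.80; Bartlett p < 0.05.
_, kmo_overall = calculate_kmo(items)
chi2, p_value = calculate_bartlett_sphericity(items)
print(f"KMO = {kmo_overall:.2f}, Bartlett chi2 = {chi2:.1f}, p = {p_value:.3g}")

# Principal-factor EFA with an oblique promax rotation, as in the text.
fa = FactorAnalyzer(n_factors=7, rotation="promax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))  # assign each item to its highest-loading factor
```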
All analyses were done using SAS Version 9.4 (SAS Institute Inc., Cary, NC, USA). We obtained Research Ethics Board approval from Queen's University. Given that this project used de-identified responses from panelists who had consented to participate in Qualtrics survey work, our ethics board waived the need to obtain informed consent from study subjects.
Results
Five hundred and two participants completed all questionnaires. Table 1 presents demographic data on all participants. The average age was 53 ± 17.5 (SD) years, with 218 (43%) of the sample aged 60 or older. There were slightly more females (58%) than males (42%).
The final version of the questionnaire contains 34 items in 8 distinct domains ("Medico-legal", "Social", "Psychological Well-being", "Planning", "Enrichment", "Positive Health Behaviors", "Negative Health Behaviors", and "Late-life Planning"). The "Late-life Planning" domain contained 7 items that were answered only by participants aged 60 or older. The raw response frequencies for the numbered items of the Prep FQ are shown in Additional file 1: eTable 3. There were no missing data, because the Qualtrics system required a response to all items. The highest non-response rate (i.e., responded "prefer not to say/answer") for any Prep FQ question was 3%, for question 25, "Do you have adequate insurance?". Based on our prior expectations and the results of the EFA with the items answered by all respondents, we selected a 7-factor structure (see Table 2). These 7 factors were named "Medico-legal", "Social", "Psychological Well-being", "Planning", "Enrichment", "Positive Health Behaviors", and "Negative Health Behaviors". All items loaded with coefficients > 0.30, ranging from 0.33 to 0.80. The KMO measure and Bartlett test indicated that the data were suitable for factor analysis (see legend of Table 2). However, when considering the secondary EFAs in participants younger and older than 60, considerable differences in item loadings were apparent (see Additional file 1: eTable 4a, b). These differences affected 8 items, summarized in Additional file 1: eTable 4c. Additionally, 'maintaining an optimal BMI' did not load adequately on any factor in either of these EFAs, and 'smoking' did not load on any factor in the EFA in participants ≥ 60 years.
Average scores and reliability estimates for each domain are presented in Table 3; the overall score had a test–retest reliability (ICC) of 0.87. The Cronbach's alpha values for the other questionnaires used in this study are shown in Additional file 1: eTable 6. Table 4 shows that the Prep FQ met all the pre-specified relationships with the other questionnaires stated a priori for the purposes of establishing construct validity (see Additional file 1: eTable 7 for the complete correlations between the Prep FQ domains and the other questionnaires). Specifically, the overall Prep FQ score was weakly to moderately correlated in a positive direction with the GRLS, SF-12 PCS, SF-12 MCS, and all PWB domains. A one standard deviation increase in the overall Prep FQ score was associated with a one-half standard deviation increase in the GRLS and overall PWB scores, and vice versa. In addition, we observed that the Psychological Well-being domain of the Prep FQ was also correlated in a positive direction with the GRLS, SF-12 MCS, and all domains of the PWB, and that these correlations were greater in magnitude than the correlations observed with the overall Prep FQ, except for the PWB Personal Growth domain. Also, we observed that the Psychological Well-being domain of the Prep FQ was weakly correlated with the SF-12 PCS and that this correlation was smaller than the correlations observed in the above two analyses. Finally, the Positive Health Behaviors domain of the Prep FQ did correlate in a positive direction with the SF-12 PCS, and this correlation was greater in magnitude than the correlation observed with the overall Prep FQ.
In subgroup analyses, people who were married had a higher Prep FQ Medico-legal domain score than people who were not married (35.4 vs. 21.6, p < 0.001). People with more than one child also had a higher Medico-legal domain score than those with one child or without children (35.1 vs. 22.7 vs. 28.7, p < 0.001). Finally, people with chronic health problems had a lower Positive Health Behaviors domain score than those without any chronic health problems (48.3 vs. 55.2, p < 0.001).
Discussion
We set out to develop and validate a novel questionnaire to enable self-assessment by individuals contemplating their future as an older person. We derived our items from content analysis of lay respondents, the scientific literature, and focus groups with experts and lay representatives, and we supported the inclusion of the various items by referencing the corresponding published evidence. Based on these development methods, we concluded that our questionnaire has face and content validity. In this cross-sectional survey, we collected responses from 502 respondents across Canada and demonstrated good utility (limited use of 'prefer not to say' and use of the full range of response options), validated domain structures and scores, and good test–retest reliability. The internal reliability was acceptable for most domains but was particularly low for the "Negative Health Behaviors" domain, which included an item for tobacco use and an item for alcohol consumption. Although there was not a strong response correlation between these two items, we believe the literature clearly supports that tobacco use and excessive alcohol use are both important negative health behaviors; thus, we decided to keep these two items in the same domain. This multi-dimensional questionnaire differs from existing validated questionnaires, such as the GRLS, SF-12, or PWB scale, in that it attempts to measure all attitudes, behaviors, and key practices that portend a longer, higher-quality life, a high-quality death, and a positive legacy experience. As such, we did not expect it to correlate highly with questionnaires that measure a single aspect of the human experience, such as life satisfaction, health status, or psychological well-being. Conceptually, we considered these latter measures as related to but distinct from the Prep FQ, and as possible outcome measures. In other words, if a person is highly engaged in thinking about and preparing for a high-quality future, they should have higher scores on these outcome measures, which is what we observed. Practically, then, the Prep FQ can be used as a diagnostic test or self-assessment questionnaire to help respondents evaluate where they are in their 'readiness for the future'. Their Prep FQ score, particularly when benchmarked against peers, may serve to motivate individuals to engage more in preparing for the future, and scores on individual items and domains will give individuals a sense of areas for improvement toward a more successful aging experience. We further observed that a one standard deviation increase in the overall Prep FQ score (13 points) was associated with a one-half standard deviation increase in the GRLS and overall PWB scores. By improving a few points on each of the 5 major lifestyle factors and/or engaging in advance care planning, financial planning, legal planning, etc., people can easily improve their scores and state of preparedness, and this will translate into clinically important and moderately large increases in their life satisfaction, health status, and psychological well-being. Given historically high levels of mental illness, low levels of psychological well-being, an epidemic of obesity, and a high prevalence of chronic non-communicable diseases [27,28], it would be important to widely disseminate tools that help individuals assess and self-manage their health and well-being. This novel self-assessment questionnaire begins to move people in that direction.
The strengths of this work are the robust approach to the development and evaluation of this novel questionnaire, including a rigorous sampling method and questionnaire administration that resulted in a nationally representative sample with no missing data. The weaknesses of this work include: (1) We could not perform a systematic review to identify all topics that impact quality and quantity of life, quality of death, and legacy experience, as the topics were too broad and some (such as most of the planning topics) did not have an evidentiary basis to support their impact. Consequently, we cannot be absolutely sure that we did not miss some important aspect of life and death that belongs in this questionnaire. Having said that, because we reached saturation with our searches and consultations with over 500 individuals, we are confident that all major aspects are included in this version of the questionnaire. (2) Whilst our sample reflected the age and gender mix across Canada, our findings may not be generalizable to minority groups, people of low socioeconomic status, or non-English speakers. Future work with this questionnaire can explore its adaptability and utility in these subgroups. (3) Some of the psychometric properties related to individual domain scores, such as the Cronbach's alpha and McDonald's omega coefficients for the Negative Health Behaviors domain, did not meet minimal threshold standards. While low Cronbach's alphas were not ideal, they were not surprising either, given that each item contained within the Prep FQ represents a unique behavior or attitude. However, the potential alternative of adding items (and in turn participant burden) to increase the internal consistency of the domains is of little value to the ultimate purpose of this scale. We tried to develop the most parsimonious scale that identified key behaviors and attitudes associated with a better future, to be used by individuals to identify their personal opportunities for improvement; therefore it is the single item that informs the decision rather than the domain. This methodological weakness is only relevant to the domain scores, which are less actionable at an individual level. We further acknowledge that the domains suggested by our exploratory factor analyses require further evaluation in a subsequent sample with confirmatory factor analyses. (4) Because some questions did not pertain to all ages, our EFA was repeated separately in participants < 60 and 60 or older. As a consequence, a limited number of items did not fall into the same age-specific domain structure as in the overall EFA. These findings question the legitimacy of the domain scores in a heterogeneous sample, but if the questionnaire is principally used as intended, as a tool for self-assessment and improvement where the individual item and overall scores are all that is required, this is not a major impediment to its use. Alternatively, having separate questionnaires for younger and older populations may be a barrier to uptake and further development.
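To make the internal-consistency limitation above concrete, here is a minimal sketch (in Python, on simulated data; the actual Prep FQ item scores are not reproduced here) showing why a two-item domain built from weakly correlated items yields a Cronbach's alpha near zero even when both items are individually meaningful:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Two nearly independent items (think: tobacco use and alcohol consumption):
rng = np.random.default_rng(0)
tobacco = rng.integers(0, 5, size=500)
alcohol = rng.integers(0, 5, size=500)
print(cronbach_alpha(np.column_stack([tobacco, alcohol])))  # alpha near zero
```

When the items do not covary, the variance of the sum is close to the sum of the item variances, so the bracketed term vanishes; this is why keeping two distinct behaviors in one domain depresses alpha without implying the items are uninformative.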
Conclusions
This work represents the development and evaluation of a unique questionnaire to measure the state of people's preparedness for their life as an older person. Now that we can measure this construct in a reliable and valid way, we can begin to engage people in the process of improving their state of preparedness for the future, and we have a psychometrically sound measure to monitor the progress of individuals or groups of individuals. Future work will need to establish the 'predictive validity' of this questionnaire (that it identifies people who will have poor health or well-being in the future) and its responsiveness (that it changes subsequent to lifestyle interventions). Ultimately, this program of research aims to improve the quality and quantity of people's lives by helping them 'think ahead' and 'plan ahead' and take control of the things that matter for their future.
Embedded Web Server Application for Industrial Automation
Web-server-based monitoring has been an issue for industries, as they make use of PC-based servers which consume large amounts of power and occupy a large area. This limitation can be overcome by replacing the existing PC-based server with an Embedded Web Server using a Raspberry Pi. The prime objective of this paper is to design a remote data acquisition system which is controlled by a Linux portable ARM processor and a web server application with General Packet Radio Service (GPRS) technology. This system focuses not only on monitoring a device but also on controlling it. The monitoring and data collection are accompanied by a Short Message Service (SMS) and an email alert, which are initiated in order to avoid the occurrence of a critical event. The system is capable of withstanding a power failure and of restarting from the point of failure.
Introduction
The origin of the web server comes from the requirement of a client trying to access data, which is made through HTTP (Hypertext Transfer Protocol), so that the web server can process, store, and send data at the request of the client. Although the vital role of web servers is to provide data, they can in some instances also accept data from clients. Traditional methods make use of Unix and Linux workstations 1, typically requiring large database storage systems that occupy a large area and incur a high setup cost 2. The purpose of this paper is to overcome these area and cost constraints and to make the system more efficient. The embedded web server provides services with minimum computing resources. The embedded industry has hardly evolved in past years: 8-bit microcontrollers are the bread and butter of the industry, but slowly more and more devices are not only gaining popularity but are also getting smart enough to be connected to a network 2. The embedded web server should be relatively small in size and easily integrated with many devices, and the Raspberry Pi fits that role. Although it has limited hardware and storage capabilities, these hurdles hardly matter, and it is still capable of performing vital tasks within these limitations. The Internet has become an integral part of daily life, and users all over the world, be it at home or in industry, want to access their devices remotely using Internet technology. The expectation carried by the embedded web server is that it should be able to replace personal computers and enable enhancements in all the parameters that boost the overall efficiency of the system. The parameters that provide an upper hand over traditional computers are listed in Table 1. The data available on the embedded web server should be secured, in the sense that no unidentified person should be allowed access unless their authentication is verified. The information provided by the module is collected, and this data can be displayed on web pages 3. These pages are basically located in memory. Here, the advantage of the Raspberry Pi over a microcontroller can be understood from the fact that whenever an IP address is entered by the user in the address bar, the user intends to access the data collected by the server, and the embedded server will provide dynamic data whenever requested by the client.
Embedded Web Server
The ARM processor present in the Raspberry Pi provides the platform for data acquisition, the control unit, and the embedded web server. Figure 1 depicts the working of the embedded web server in a nutshell. The embedded web server continuously monitors the temperature values from the DS1820 temperature sensor and places them on the server. This task is accompanied by a control action on the server side if the client requests one. The Raspberry Pi has to continuously serve asynchronous interrupts 4. The system is designed such that any increase in temperature over a predefined threshold will turn the controlled device off, and this is accompanied by sending an e-mail and an SMS to the user. The embedded web pages are written and designed in HTML. These pages are designed to be user friendly to avoid unnecessary complexity on the client side. The client on the other side can access a remote device using the embedded web server; all the client has to do is log in to the page using a valid user name and password, and within seconds all the data are accessible.
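As an illustration of the alerting behaviour described above, the following minimal Python sketch checks a reading against a threshold and sends an e-mail notification; the threshold, addresses, SMTP host, and credentials are placeholders, not values from the paper:

```python
import smtplib
from email.message import EmailMessage

THRESHOLD_C = 60.0  # placeholder threshold, not a value from the paper

def alert_if_hot(celsius: float) -> None:
    """E-mail the operator when the reading exceeds the threshold."""
    if celsius <= THRESHOLD_C:
        return
    msg = EmailMessage()
    msg["Subject"] = "Temperature alert: %.1f C" % celsius
    msg["From"] = "pi@example.com"           # placeholder addresses
    msg["To"] = "operator@example.com"
    msg.set_content("Threshold exceeded; control device switched off.")
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("pi@example.com", "app-password")  # placeholder creds
        server.send_message(msg)
```

An SMS alert could be produced analogously through a GPRS modem or an SMS-gateway service, which the paper mentions but does not specify.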
Raspberry Pi
The Raspberry Pi is a credit-card-sized computer developed in the UK. It differs from regular computers because it is not only small in size but also has the ability to integrate with electronic components, which is of vital importance when designing an embedded web server. It overpowers traditional microcontrollers in the sense that it has a high capacity of RAM and a powerful processor, which makes it an ideal choice for handling embedded applications. The need to use the Raspberry Pi as an embedded web server can be understood from the fact that, to control a device, a microcontroller is a good pick, but to do the same remotely the Pi stands out due to its 512 MB of RAM and its 700 MHz clock frequency. There are multiple ways of using the Raspberry Pi, right from controlling an LED to getting a basic understanding of an operating system. It is the best way to experiment with the board and get an idea of its inner working. It has built-in compilers for a good number of languages, and the best support is for Python, as the Pi in Raspberry Pi refers to Python. This platform is also attractive because its price is low. Like other computers, the Raspberry Pi needs an operating system, and the OS it uses is Raspbian. Digital and analog output is provided by the HDMI port. The processor has some features that require special device drivers which are not available in its Linux distribution.
Digital Temperature Sensor
The sensor used is the DS1820, a digital thermometer which provides 9-bit to 12-bit temperature readings, preferably controlled by the user. The DS1820 can measure temperatures over the range of -55 °C to +125 °C in 0.5 °C (resolution) increments. Information is sent from the DS1820 over a 1-Wire interface, so that only one wire needs to be connected to a GPIO pin, which avoids unnecessary wiring.
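A minimal sketch of reading the sensor on a Raspberry Pi, assuming the w1-gpio and w1-therm kernel modules are loaded so the device appears under /sys/bus/w1/devices (the "10-" prefix corresponds to the DS1820/DS18S20 family; a DS18B20 would appear as "28-"):

```python
import glob

def read_temperature_c() -> float:
    """Read one sample from the first 1-Wire thermometer found."""
    device = glob.glob("/sys/bus/w1/devices/10-*/w1_slave")[0]
    with open(device) as f:
        lines = f.readlines()
    if not lines[0].strip().endswith("YES"):   # kernel-side CRC check failed
        raise IOError("bad CRC from 1-Wire sensor")
    return int(lines[1].split("t=")[-1]) / 1000.0  # "t=23125" -> 23.125 C

print(read_temperature_c())
```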
Software Design
SQLite and Apache
SQLite is a public-domain software package that provides a database management system. SQLite has a unique ability to remain lightweight when compared on dimensions such as complexity, administrative overhead, and amount of resource usage. SQLite's small code size and conservative resource use make it well suited for embedded systems running limited operating systems. The Apache HTTP server software runs in the background on an operating system. It provides multi-tasking and services to other applications that connect to it, such as client web browsers. The Apache web server provides a full range of web server features, including CGI, SSL, and virtual domains.
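A minimal sketch of the logging step, assuming a Python script on the Pi writes each reading into a small SQLite database and prunes old rows to keep the file size bounded (the file path and retention period are illustrative placeholders):

```python
import sqlite3
import time

conn = sqlite3.connect("/home/pi/templog.db")   # placeholder path
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts INTEGER, celsius REAL)")

def log_reading(celsius: float, keep_days: int = 30) -> None:
    """Insert one sample and prune rows older than the retention window."""
    now = int(time.time())
    conn.execute("INSERT INTO readings VALUES (?, ?)", (now, celsius))
    conn.execute("DELETE FROM readings WHERE ts < ?",
                 (now - keep_days * 86400,))
    conn.commit()

log_reading(23.1)
```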
Cron Tab
The software utility cron is a time-based job scheduler in Unix-like computer operating systems. It is used to schedule jobs (commands or shell scripts) to run periodically at fixed times, dates, or intervals. It has the capability to restart execution after recovering from a power failure.
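A hypothetical crontab set-up matching this description might look as follows (the script paths are placeholders); the @reboot entry is what allows the system to resume automatically after a power failure:

```
# Hypothetical crontab (installed with `crontab -e`; paths are placeholders).
# Log a temperature reading every five minutes:
*/5 * * * * /usr/bin/python /home/pi/log_temperature.py
# Restart the monitoring loop automatically after a power failure:
@reboot /usr/bin/python /home/pi/monitor.py
```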
Methodology
The system should be able to acquire data from remote areas, store it, and be in a position to reproduce the data whenever demanded by the client at the other end. The DS1820 is the sensor used for acquiring temperature. There is also a provision for controlling an electronic component from the client end, which is demonstrated by controlling an LED.
The methodology is such that the temperature sensors and LED in the remote area are connected to the Raspberry Pi module, which acts as a mini-computer in this case. It continuously monitors the sensors and stores the readings in the database using SQLite, a lightweight database management system. Since data are sampled at a very high frequency, a lot of unnecessary data would otherwise be stored continuously, filling up the memory. To avoid this undesired event, a cron job is used, which is basically a job scheduler in Unix-like operating systems. This helps in scheduling and updating data in the database at a fixed time interval that can be decided by the user. So every five minutes (as defined by the user) the cron job automatically executes the program and stores temperature values in the database. The client on the other end is able to access the data using a login page where the ID is authenticated with a password; if the credentials do not match, the client is not allowed to access the data. After the temperature sensor values are acquired, they are compared with the threshold values, and if they exceed the threshold, the device is turned off, and vice versa. This is simultaneously accompanied by a control action on another device connected to the embedded web server, which can be exercised by the client. This is depicted in Figure 2.
This paper focuses mainly on the device-controlling task, which is an advantage over the already existing systems in which communication takes place in only one direction; to make the system more user friendly, bidirectional connectivity is provided. Thus, it gives the user multiple options for controlling a device in a remote area, which plays a vital role considering that switching off a device can avert a catastrophic event. The person at the client end can access the current as well as the previous data. To make the data comprehensible, they are displayed as a graph for the ease of the user, as shown in Figure 3.
Experimental Results
Figure 4 depicts the continuous temperature values collected by the Raspberry Pi module at start-up, displaying data at different intervals of time on the terminal screen. These data are managed by SQLite for storage in memory, and unnecessary data are regularly flushed out so that the memory does not overflow. Figure 5 shows the login page created to test the authenticity of the user. It dynamically checks the credentials provided by the user before granting access to the web server's information.
As of now, a single user's login and password details are created, which can be extended to many users depending on the size of the system's RAM. Figure 6 shows the control switches designed at the client end so that the client can control the device from a remote area. The control action can be controlling an LED or a motor from the client end.
Existing Work
The method of using single-chip data acquisition has a limitation in processing capability and also lags in producing reactive output.
Conventional web servers demand large amounts of memory and area, which also leads to an increase in cost. A comparison between the existing and the proposed system is shown in Table 1.
Proposed Work
The problems of size, cost, and power consumption are overcome by using the Raspberry Pi module, as it does well in all the domains in which conventional systems fail. Using the Raspberry Pi as a web server, we are not only able to receive data from the server but are also able to control a device present in a remote area through proper authentication.
Conclusion
The rapid development of the industrial sector demands an efficient implementation of web servers.
The Raspberry Pi embedded web server is an effective solution for acquiring data and reproducing it, on the client's demand, in the form of a graph with current and previous values, which stands out in comparison to the traditional method of using PC-based Unix servers. This system plays a vital role in cutting down the cost and area requirements. The module has the advantage that it can continue its operation after a power interruption without human intervention.
The determinants of reduced dietary intake in hospitalised colorectal cancer patients
Purpose Patients with colorectal cancer (CRC) often experience malnutrition and weight loss, largely resulting from reduced dietary intake. The aim of this study was to identify determinants of reduced dietary intake in order to facilitate early recognition of malnutrition and optimise nutritional treatment. Methods Data from nutritionDay, an international 1-day survey investigating patient, disease and food profiles, were used. To identify determinants of dietary intake, defined as normal vs. reduced in the last week, univariate and multivariate logistic regressions were performed. Results Of 1131 hospitalised CRC patients, 54% reported reduced dietary intake. Patient- and disease-related characteristics significantly associated with reduced dietary intake were female gender (odds ratio (OR) 1.38), cancer stage III (OR 1.52) or IV (OR 1.70) vs. I, performance status 2 (OR 1.56), 3 (OR 2.37) or 4 (OR 4.15) vs. 0, duration since hospital admission of ≥ 4 days (OR 4–7 days, 1.91; 8–21 days, 1.97; > 21 days, 1.92) vs. < 4 days, and unintentional weight loss (OR 2.56). Additionally, higher symptom scores of pain, weakness, depression, tiredness and lack of appetite were associated with reduced intake. Conclusions Patient- and disease-related determinants for reduced dietary intake were being female, higher cancer stage, worse performance status, duration since hospital admission ≥ 4 days and unintentional weight loss. Furthermore, multiple symptoms were associated with a reduced dietary intake. Future trials should assess whether early recognition of patients at risk of malnutrition and the combination of treating symptoms and dietary advice result in improved intake and treatment-related outcomes.
Introduction
Colorectal cancer (CRC) is the third most common cancer in the world, representing nearly 10% of the global cancer incidence and 8% of all cancer deaths [11,30]. Patients with CRC often experience undesirable disease-related symptoms such as malnutrition and weight loss. The prevalence of malnutrition in CRC patients varies from 29 to 60% [9,13,20,[24][25][26]33] and is suggested to be even higher during hospital stay [18,31,33]. Previous studies have shown that malnutrition is associated with worse clinical outcomes for this patient group. A poor nutritional status in preoperative patients negatively affects postoperative outcome and is predictive of increased length of hospital stay [17,29], whilst for patients receiving chemotherapy, malnutrition is associated with lower treatment tolerance and reduced survival [1,3,28].
Malnutrition in cancer patients can be a consequence of both metabolic changes and reduced dietary intake [32]. Whilst treatment of metabolic changes mainly concerns treatment of the underlying cancer, reduced dietary intake can often be avoided. Reduced dietary intake is the main driver in the development of malnutrition; thus, early detection of a reduced intake and intervention aiming to increase intake are essential in the prevention of malnutrition [2]. In order to identify patients with or at risk of a reduced dietary intake, determinants of a reduced dietary intake should be established.
The current literature suggests that particular patient- and disease-related characteristics are associated with poorer dietary intake. Characteristics of cancer patients associated with poorer dietary intake include being female and/or elderly, having had prior surgery or chemotherapy, receiving more than one treatment mode, or having more progressive disease [9,15,23,24,27,33]. In addition, emerging literature speculates that a low body mass index (BMI), a worse performance status and being unmarried may also increase the chances of poorer intake [9,29]. The observed reduction in food intake is thought to be explained by unwanted symptoms and side effects of treatment for CRC, which often includes chemo- and/or radiotherapy. Whilst loss of appetite is accepted as the main driver of lower dietary intake, cancer treatments can also induce severe nausea, vomiting and diarrhea that can lead to the development of food aversions, and mucositis that can distort the ability to taste [5,8,12,21,24]. Additional treatment-induced symptoms such as fatigue, depression and pain, and also tumor-induced symptoms such as cachexia, bloating and early satiety, are similarly suggested to play a role in influencing dietary intake in this patient group [7,9,10,12,14].
Although some determinants of reduced dietary intake in CRC patients are suggested in the current literature, they are not well elucidated, as study protocols often include several cancer types. Furthermore, it is not known to what extent the presence of disease-related symptoms is related to reduced dietary intake. These symptoms may also have to be taken into account in CRC patients with an indication for nutritional intervention. The aim of this study was to evaluate determinants associated with reduced dietary intake in hospitalised CRC patients, to enable easier recognition of patients at risk of malnutrition and to improve interventions that prevent malnutrition-related complications.
Study design and patients
This study is based on data from nutritionDay surveys taken between 2012 and 2015. nutritionDay is a 1-day cross-sectional audit investigating nutritional status in hospitalised patients worldwide. Spanning 62 countries, the nutritionDay database provides information on food intake, patient characteristics, disease profile and symptoms. The nutritionDay survey has been designed so that data can be collected by local caregivers and patients using four questionnaires. A detailed description of the study design and its main outcomes has been published [16]. The nutritionDay survey co-ordinating centre in Vienna received ethical approval for multicentre data collection, and local ethics approval was obtained as appropriate. All patients received verbal and written study information before giving informed consent. For the current study, patients with CRC were selected from the nutritionDay database (n = 1300). Patients were excluded from the analyses if data for dietary intake were missing (n = 137) or if they were in a terminal stage of their disease (n = 32), resulting in a total of 1131 included patients.
Data collection and definitions
The primary outcome in this study was dietary intake during the week preceding nutritionDay. This was subjectively assessed with the question 'How well have you eaten during the last week?' with the following response options: 'normal', 'a bit less than normal', 'less than half of normal' and 'less than quarter to nearly nothing'. For the purpose of this study, dietary intake was dichotomised as follows: normal vs. a bit less, half or less.
To determine which variables were associated with dietary intake (normal vs. less than normal), variables were classified as patient- and disease-related characteristics or as symptom scores. The patient- and disease-related characteristics age, sex, cancer stage, therapy situation, therapy goal, comorbidities, duration since hospital admission and body mass index (BMI) were recorded by the medical staff. Therapy situation was categorised into seven groups: diagnosis, systemic treatment (chemotherapy and targeted therapy), surgery, radiotherapy, complications (cancer- or therapy-related), palliative and multiple. Therapy goal was dichotomised as curative vs. palliative. The following options were available to the medical staff for reporting comorbidities: diabetes, stroke, chronic obstructive pulmonary disease, myocardial infarction, cardiac insufficiency or others. For this study, comorbidity was categorised as none vs. one or more. Duration since admission to hospital was categorised based on the association with dietary intake, with separate categories for longer durations since hospital admission, resulting in four groups: < 4, 4-7, 8-21 and > 21 days. BMI was calculated as weight in kilograms divided by the square of height in metres and classified into six groups (underweight, < 18.5 kg/m²; normal weight, 18.5-25 kg/m²; overweight, 25-30 kg/m²; obesity class I, 30-35 kg/m²; obesity class II, 35-40 kg/m²; obesity class III, > 40 kg/m²). Unintentional weight loss in the past 3 months was evaluated by the patient as yes or no. Self-reported performance score was assessed following the guidelines of the Eastern Cooperative Oncology Group (ECOG) [22], with the following question and options: which of the following activities can you perform at the maximum? The categories are fully active (0), able to carry out light activities (1), able to carry out self-care (2), able to carry out limited self-care (3) or confined to bed or chair (4). The number of drugs ingested daily was indicated by the patient and categorised into four groups: 0, 1-2, 3-5 and > 5. The symptom scores (had pain, felt weak, felt depressed, felt tired and lacked appetite during the past week) were reported by the patient with questions concerning the last week. Symptoms could be rated on a 4-point Likert scale: not at all, a little, quite a bit and very much.
Statistics
Statistical analyses were performed using SPSS v 23 (IBM Corp., USA). Descriptive data are presented as mean ± standard deviation or as total frequencies and proportions. Patient- and disease-related characteristics and symptom scores were analysed separately. This was done because the symptoms are expected to be caused by the patient- and disease-related characteristics and potentially mediate the association with dietary intake.
First, univariate logistic regressions were performed to determine variables associated with reduced dietary intake. Correlation coefficient analyses were then performed on all variables significantly associated with reduced dietary intake in the univariate analysis. For the correlation coefficients, all missing values were excluded. Depending on the type of variables, Spearman's (two ordinal), phi (two binary), Cramer's V (binary and categorical) or Kruskal-Wallis (ordinal and nominal/binary) tests were performed. For any two variables that were strongly correlated (coefficient > 0.5), one of the variables was excluded from the subsequent multivariate analysis so that it did not disrupt the model. Next, all variables associated with reduced dietary intake in the univariate logistic regression model were simultaneously entered into a multivariate logistic regression model (multivariate model 1), using p < 0.10 for entry into the model. Backward elimination was performed until all variables in the multivariate model reached a significance of p < 0.05 (multivariate model 2). For all logistic regression analyses, 95% confidence intervals (CI) for odds ratios (OR) were reported.
Categories with more than ten missing values were considered as a separate group in the logistic regression. If ten or fewer missing values existed for a variable, patients with that missing value were excluded from the analysis. Two sensitivity analyses were performed by rerunning the backward regression for patient- and disease-related characteristics: one with dietary intake dichotomised into normal or a bit less vs. half or less, and one without the variable 'self-reported performance score', because this variable concerned the audit day and may have changed in the preceding week. In addition, interactions between determinants of eating less than normal were checked. Model fit of the multivariate model was expressed as the Nagelkerke R².
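The screening-then-backward-elimination workflow described above can be sketched as follows (in Python with statsmodels rather than SPSS, on simulated data; the variable names and coefficients are illustrative, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1131  # cohort size, used purely for illustration
df = pd.DataFrame({
    "female":      rng.integers(0, 2, n),
    "weight_loss": rng.integers(0, 2, n),
    "performance": rng.integers(0, 5, n),  # ECOG-like score 0-4
})
lp = -0.5 + 0.3 * df["female"] + 0.9 * df["weight_loss"] + 0.4 * df["performance"]
df["reduced_intake"] = (rng.random(n) < 1 / (1 + np.exp(-lp))).astype(float)

def fit_logit(predictors):
    X = sm.add_constant(df[predictors].astype(float))
    return sm.Logit(df["reduced_intake"], X).fit(disp=0)

# 1) univariate screening: keep candidates with p < 0.10
kept = [v for v in ["female", "weight_loss", "performance"]
        if fit_logit([v]).pvalues[v] < 0.10]

# 2) backward elimination until every remaining predictor has p < 0.05
while kept:
    pvals = fit_logit(kept).pvalues.drop("const")
    if pvals.max() < 0.05:
        break
    kept.remove(pvals.idxmax())

print(np.exp(fit_logit(kept).params))  # odds ratios; CIs via conf_int()
```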
Patient characteristics
Patient- and disease-related characteristics of all 1131 patients are shown in Table 1. The mean age was 65 ± 13 years (range 19-98) and 626 (56%) patients were male. Of all patients, 418 (41%) had cancer stage IV, 305 (28%) were receiving systemic treatment and 346 (32%) were admitted for surgery. Eating less than normal in the past week was reported by 615 (54%) patients, and unintentional weight loss in the past 3 months was reported by 683 (64%) patients.
Patient- and disease-related characteristics
The following patient- and disease-related characteristics were significantly associated with a reduced dietary intake in univariate analyses: female gender (p = 0.017), higher cancer stage (p = 0.008), lower self-reported performance score (p < 0.001), longer duration since hospital admission (p < 0.001), unintentional weight loss during the past 3 months (p < 0.001), lower BMI (p = 0.002), therapy situation (p = 0.001), palliative therapy goal (p = 0.015) and higher number of drugs ingested daily (p = 0.006) (Table 2). These variables were simultaneously entered into a multivariate logistic regression model with reduced dietary intake as outcome; in this model, patients with a palliative vs. curative treatment goal had higher odds of eating substantially less than normal. A sensitivity analysis by backward regression without the variable 'self-reported performance score' also resulted in a model with the same significant determinants (n = 1116, R² = 0.139).
Symptoms
Patient-reported symptom scores in relation to dietary intake during the past week are shown in Table 3. The symptoms pain, weakness, depression, tiredness and lack of appetite experienced during the past week were all significantly associated with dietary intake during the past week (p < 0.05, Table 3). Because all of the symptom scores were highly correlated, multivariate regression was not performed.
Discussion
The present study shows that 54% of hospitalised colorectal cancer patients ate less than normal in the week preceding nutritionDay, a 1-day cross-sectional audit investigating nutritional status in hospitalised patients worldwide. Being female, a higher cancer stage, a worse self-reported performance score, a longer duration of hospital stay and unintentional weight loss were significantly associated with reduced dietary intake and can therefore be used to identify patients at risk of malnutrition. In addition, the symptoms of having pain, lacking appetite and feeling weak, tired and depressed were significantly associated with reduced dietary intake. Since patient- and disease-related characteristics cannot always be influenced, nutritional interventions may benefit from alleviating these negative symptoms reported by patients to further optimise nutritional status. These predictive characteristics have to some degree been identified in previous literature, yet low patient numbers, mixed cancer types and the use of different definitions of malnutrition made it difficult to apply those findings in clinical practice. Previous studies have demonstrated that being female, a higher ECOG performance status and weight loss were significantly associated with higher nutritional risk as indicated by the PG-SGA [9,18]; however, one study included various cancer types and another had poor questionnaire compliance. Associations between cancer stage and malnutrition were found in two studies using different definitions of malnutrition [23,33], and associations between performance status and reduced dietary intake have also been reported [3]. Moreover, associations between increased length of hospital stay and nutritional risk have been reported in colorectal cancer patients using the NRS-2002 tool [20], and associations with nutritional status have been found in gastrointestinal cancer patients using the SGA [34]. The present study thereby confirms what was already known from previous studies. However, the present study, with data derived from the largest ongoing survey of nutrition in hospitalised patients, enriches the current literature with explicit results exclusively investigating colorectal cancer patients with dietary intake as the primary outcome. This provides a complete overview of the determinants of dietary intake in this patient cohort and makes comparisons between variables possible thanks to standardised data collection.
The present multivariate analysis shows clear relationships between increases in cancer stage and performance status and increased odds of reduced dietary intake, with a high risk for patients with a performance score of 3 or 4 and patients with stage IV cancer. These findings underpin the importance of considering a patient's performance score and cancer stage when assessing the likelihood of reduced dietary intake. There were only small differences among the odds ratios for the categories of days since hospital admission when compared to the reference category of < 4 days. The increased odds of eating less than normal among patients admitted to the hospital 4-7 days earlier may partly be due to the fact that nutritionDay is normally held on a Thursday; this category therefore included patients admitted to the hospital over the weekend. Because weekend admissions are usually unplanned, these patients may have been in poorer condition than patients admitted on weekdays. In addition, patients admitted to the hospital for < 4 days still ate the majority of their meals at home, potentially resulting in a larger number of normal dietary intakes. The fact that there were minimal differences between the 8-21 and > 21 day categories indicates that being admitted for a longer period of time (≥ 8 days) results in higher odds of eating less than normal, regardless of the exact number of days. Patients in these categories may also be in poorer condition than patients admitted for < 4 days.
Unintentional weight loss has previously been identified as a strong determinant of malnourishment [18], and here we confirmed its association with reduced dietary intake. This simple and easy-to-establish measure should be used to identify patients who need dietary interventions, particularly heavier patients who have lost weight but are missed by assessment tools that use 'healthy' threshold cut-offs, and in hospitals where full body-composition measurements are not feasible. The comorbidity groups in this study were categorised into none vs. one or more because the majority of patients fell into the 'other' comorbidity category, leaving small patient numbers in the specified categories. With additional information, a more reliable test of the association between specific comorbidities and reduced dietary intake might be possible.
In this study, higher symptom scores of pain, weakness, depression, tiredness and lack of appetite were significantly associated with eating less than normal during the past week. This is in line with previous findings that pain and fatigue were correlated with low energy intake to a similar degree as loss of appetite in pancreatic cancer patients [6]. It has been suggested that some symptoms (such as pain and weakness) directly contribute to reduced dietary intake, whilst others (such as emotional states, for example depression) act by driving appetite loss [27]. These findings have important implications, considering the weight currently given to symptoms associated with reduced dietary intake. We suggest that, in addition to usual dietary practices, nutritional interventions should include identification and individual treatment of these symptoms to reduce the risk of inadequate dietary intake. Assessing dietary intake in the past week as 'normal' or 'less than normal' as a primary outcome has its limitations.
Preferably, actual dietary intake should be evaluated in order to estimate absolute energy and protein intake in comparison to a patient's requirements. However, in the present study, dietary intake was assessed by asking patients how well they had eaten. Although this does not provide information on absolute nutritional intake, it is an indication of dietary intake compared to what is normal for a patient. Eating less than normal has been shown to be an important risk factor for malnutrition. Early identification of these patients at risk may be an indication to assess dietary intake in more detail and could provide the opportunity to prevent malnutrition with appropriate nutritional intervention [2,4,19].
Conclusion
Determinants for reduced dietary intake in colorectal cancer patients during hospital admission are being female, higher cancer stage, worse performance status, longer duration since admission and unintentional weight loss. In addition, the symptoms pain, weakness, depression, tiredness and lack of appetite are related to reduced dietary intake. In patients at risk of reduced dietary intake, assessment of dietary intake may be indicated to evaluate whether nutritional intervention is needed. Management of related symptoms should be included to achieve an optimal nutritional intake. Future trials should test the effectiveness of these intervention recommendations on dietary intake and body composition, in order to consequently achieve better treatment-related outcomes.
Funding This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors. The nutritionDay project received sponsoring from the European Society for Clinical Nutrition and Metabolism (ESPEN).
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http:// creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Aerobic Damage to [FeFe]-Hydrogenases: Activation Barriers for the Chemical Attachment of O2
[FeFe]-hydrogenases are the best natural hydrogen-producing enzymes but their biotechnological exploitation is hampered by their extreme oxygen sensitivity. The free energy profile for the chemical attachment of O2 to the enzyme active site was investigated by using a range-separated density functional re-parametrized to reproduce high-level ab initio data. An activation free-energy barrier of 13 kcal mol−1 was obtained for chemical bond formation between the di-iron active site and O2, a value in good agreement with experimental inactivation rates. The oxygen binding can be viewed as an inner-sphere electron-transfer process that is strongly influenced by Coulombic interactions with the proximal cubane cluster and the protein environment. The implications of these results for future mutation studies with the aim of increasing the oxygen tolerance of this enzyme are discussed.
Details of the QM calculations: geometry optimisation
All geometry optimisations were carried out with the BP86 functional [1] and the def2-TZVP basis [2] as implemented in the Turbomole [3] program. Additionally, empirical dispersion corrections in the form proposed by Grimme were included [4] with the B-J damping scheme [5]. The resulting functional is denoted BP86+D3 throughout the manuscript. Geometry optimisations were performed with the default settings (SCF convergence 10⁻⁶, grid size m3). All single-point energies were calculated with tightened SCF convergence (10⁻⁷) and an enlarged grid size ('m4' in the Turbomole nomenclature and 'fine' in NWChem). We took advantage of the resolution-of-identity approximation [6] in all calculations. The activation was followed with constrained optimisations in which the Fe_d···O_1 distance was varied between 3.5 Å and 1.8 Å, while all Cα atoms and the nitrogen atoms of the NH₃⁺ groups of LYS 322 and 359 (Cp numbering; 201 and 238 in the case of Dd) remained frozen.
Small model and CA1-B3LYP functional
The small-model structure consists of an isolated [2Fe]_H cluster where the [Fe₄S₄] cubane was replaced with an H⁺ ion and the bridging cysteine was substituted with a smaller CH₃-S- fragment (see Figure S1). We followed the energy change with different methods in a series of single-point calculations on top of optimised structures in which the Fe···O distance was varied between 3.5 Å and 1.8 Å. In Figure S1 we show the most important geometry changes that accompany oxygen binding. The oxygen-bound state was found at an Fe···O distance of 1.95 Å with an O₁-O₂ bond length of 1.29 Å. Such lengthening of the bond between the two oxygen atoms, together with the calculated vibrational frequency of 1186 cm⁻¹, indicates that the oxygen moiety is best described as a superoxo entity, O₂⁻ [7]. Figure S1. Geometry changes upon oxygen activation on the small cluster model.
The reference data were obtained with the NEVPT2 method, which is based on a CASSCF reference wave function [8]. Because of the higher basis-set demands of wave-function-based methods, the def2-TZVPP basis set was employed, which features additional polarization functions on the Fe and S atoms. Although a full-valence CAS was not possible, ligands like CO/CN⁻ create such a strong ligand field that most of the doubly occupied and empty 3d orbitals of both iron atoms could remain inactive. We demonstrate this effect in Figure S2, where a qualitative scheme of orbital energies is presented. In fact, the d_z² and d_xz orbitals of Fe_d, the two π* orbitals of the O₂ molecule, and one π* orbital of the bridging CO, along with five electrons [CAS(5,5)], were sufficient to describe the binding process (MOs enclosed in the teal rectangle in Figure S2; in Figure S3 the natural-orbital isosurfaces are presented). In order to stabilise the active space over the whole potential-energy-surface scan, we employed a state-averaging approach in which the CASSCF wave function was optimised for three doublet and two quartet states. Because the selected active space can be considered minimal, we performed additional MR-DDCI2 calculations with the small def2-SVP basis and found that at each Fe···O distance the weight of the CASSCF reference was not smaller than 0.85, with all other weights lower than 0.01. Figure S2. Qualitative molecular-orbital scheme of the small model system with the active-space orbitals marked in the teal rectangle. Figure S3. Isosurfaces of the natural orbitals obtained in the state-averaged CASSCF(5,5) calculations for an Fe···O distance of 2.4 Å.
The reference calculations were compared with various functionals, and in Figure S4a the one-dimensional potential-energy-surface cut calculated with various DFT approaches is presented. The functionals tested include the semi-local BP86, the global hybrids B3LYP [1a,9] and BHandLYP [1a,b,d,8] (with 20% and 50% Hartree-Fock exchange, respectively), and two range-separated functionals (CAM-B3LYP [10], LC-ωPBE [11]). The latter functionals are based on the Ewald split of the Coulomb operator [12]:

$$\frac{1}{r_{12}} = \frac{1 - \left[\alpha + \beta\,\mathrm{erf}(\mu r_{12})\right]}{r_{12}} + \frac{\alpha + \beta\,\mathrm{erf}(\mu r_{12})}{r_{12}}$$

where $r_{12}$ is the interelectronic distance, erf is the error function, and $\mu$ is the range-separation parameter, typically between 0.1 and 1. The parameter $\alpha$ defines a fixed amount of Hartree-Fock exchange (HFX) at all values of $r_{12}$, while $\beta$ regulates the variable amount. The sum of these two parameters gives the maximum portion of HFX at long range. The main differences between CAM-B3LYP and LC-ωPBE, apart from the form of the correlation part, lie in the values of the $\alpha$, $\beta$ and $\mu$ parameters, which are $\alpha = 0.19$, $\beta = 0.46$, $\mu = 0.33$ for the former and $\alpha = 0.0$, $\beta = 1.0$, $\mu = 0.4$ for the latter.
In the case of the global hybrid functionals, the barrier increases with the amount of HFX, but the binding becomes less favourable. The BHandLYP functional predicts the process to be endothermic by 0.2 kcal/mol. Some improvement over B3LYP is offered by its range-separated counterpart, for which the barrier is 2.8 kcal/mol. Unfortunately, the error in the binding energy (3.9 kcal/mol) is still large. LC-ωPBE performs similarly to BHandLYP. We therefore decided to tune the CAM-B3LYP functional to reproduce the reference NEVPT2 activation barrier and binding energy. To decrease the number of parameters we set α + β = 1, so that in the limit of infinite interelectronic separation the functional features 100% HFX. We calculated the energies of the bound state, the transition state at an Fe···O distance of 2.4 Å, and the complex at a long Fe···O distance of 3.5 Å with α values between 0.0 and 0.4 and μ between 0.1 and 0.9. For each combination we calculated the relative errors in ΔE and ΔE‡ with respect to the reference NEVPT2 data (-7.6 kcal/mol and 5.6 kcal/mol, respectively). In the next step, we took the sum of the absolute values of these errors as a measure of the total error. For each value of μ we computed the average of these total errors over all five α values tested. The resulting data can be found in Figure S5.
We see that a minimum is found at μ = 0.5. The lowest error for this value of the range-separation parameter was calculated for α = 0.1. Thus the final optimised set of parameters is α = 0.1, β = 0.9, μ = 0.5. We also note that relatively large range-separation parameters have been reported to provide better activation barriers [13], which is now confirmed in our calculations. The new functional is denoted CA1-B3LYP. We also carried out calculations on a non-covalent complexation-energy test set (the NCCE31/04 test set of Zhao and Truhlar [14]) in order to evaluate the performance of the CA1-B3LYP functional for interaction energies. The statistical evaluation can be found in Table S1. In most cases the new functional outperforms BP86 and B3LYP, especially for charge-transfer-dominated complexes. It also performs surprisingly well for weak interactions, which will be important for the calculations on the large clusters. We note, however, that we set α + β to 1, and a possible improvement is offered by an independent optimisation of all three parameters.
Figure S4. Relative energy change upon oxygen binding to the distal iron of the small cluster calculated with various methods. For each method the energy at R = 3.5 Å is taken as the reference. Figure S5. The dependence of the sum of unsigned relative errors on the range-separation parameter µ. The reference curve (black solid line) in Figure 2b was obtained with a doublet ground-state, state-specific CAS configuration interaction (CASCI) wave function with the state-averaged CASSCF orbitals presented graphically on the right (orbital labels taken from the Turbomole program).
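The parameter scan itself reduces to a small error-minimisation loop over precomputed single-point energies; a schematic Python sketch (the energy entries below are placeholders standing in for the actual DFT results, which are not tabulated here):

```python
# Pick the (alpha, mu) pair minimising the summed relative errors against
# the NEVPT2 reference binding energy and barrier.
REF_BIND, REF_BARRIER = -7.6, 5.6  # NEVPT2 reference values (kcal/mol)

def total_error(bind, barrier):
    return (abs((bind - REF_BIND) / REF_BIND)
            + abs((barrier - REF_BARRIER) / REF_BARRIER))

energies = {  # (alpha, mu) -> (binding energy, barrier); placeholder numbers
    (0.0, 0.5): (-9.8, 4.6),
    (0.1, 0.5): (-7.4, 5.9),
    (0.2, 0.5): (-6.0, 7.1),
}
best = min(energies, key=lambda p: total_error(*energies[p]))
print("best parameters: alpha=%.1f, mu=%.1f" % best)
```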
Large cluster model and ONIOM calculations
The cluster models were constructed from the X-ray structures of the Cp and Dd proteins (PDB codes: 3C8Y and 1HFE, respectively). Consistently with recent experimental and theoretical studies [15,16], the di-µ-dithiolato bridge was modelled as SCH₂(NH)CH₂S. The residues included in the model are listed in Table S2. In both models, all residues that can form N-H···S(Cys) hydrogen bonds were included (cut-off 4 Å). Cuts through covalent bonds were terminated with hydrogen atoms. The H_ox state has a total spin of 1/2, originating from strong antiferromagnetic coupling of 18 electrons within the cubane (two high-spin Fe(III) and two high-spin Fe(II)) that in turn couple weakly with one unpaired electron at the [2Fe]_H site (Figure 1b). The geometry optimisations were performed with the usual set-up. In the ONIOM calculations described in the main body of the manuscript, hydrogen atoms were used as link atoms between the high and low layers. The distance scaling factors for C-C and C-N(amide) bonds were 0.709 and 0.729, respectively. The starting orbitals for the calculations were converted, with an in-house Perl script, from converged Turbomole alpha and beta ASCII files to input files that can be used with the asc2mov program (part of the official NWChem 6.3 distribution) to obtain binary orbital files for NWChem. In this way we kept the same broken-symmetry coupling in all calculations.
Intermediate model set-up for Gibbs free-energy corrections and the role of the [Fe₄S₄] cubane
Because we kept some of the atoms fixed in our model, and because of the approximate nature of the transition states, direct use of the harmonic approximation to account for Gibbs free-energy corrections was not possible. We decided to construct a model (blue and red atoms in Figure 1a) for which the Gibbs free-energy corrections can be evaluated easily. The system comprised the entire H-cluster along with the anchoring CYS residues, which were replaced with CH₃-S- fragments as usual. The oxygen molecule was optimised separately. We noted that the oxygen molecule has already lost some translational entropy due to penetration into the protein; according to MD simulations [17a] this effect amounts to about 4 kcal/mol. In our calculations this can be estimated by noting that the O₂ molecule loses translational entropy on entering the binding pocket (average radius of ~3 Å [17]). Under standard conditions one oxygen molecule occupies V = 3.73 × 10⁻²⁶ m³, while the protein pocket volume is V_p = 1.13 × 10⁻²⁸ m³. We thus scaled the translational partition function to account for this reduced volume, which resulted in a -TS contribution of 10.4 kcal/mol at 298.15 K. Further corrections for the zero-point-energy change and the enthalpy change were small: 1.5 kcal/mol and -1.0 kcal/mol, respectively. Thus, we found that 9.9 kcal/mol needs to be added to the electronic binding energy to account for the free-energy change. The same shift was applied to the electronic activation energy, which constitutes an upper limit for ΔG‡.
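The volume-scaling idea can be checked with a back-of-the-envelope calculation: confining one ideal-gas molecule from V to the pocket volume V_p changes its translational entropy by ΔS = R ln(V_p/V), since the translational partition function scales linearly with volume. With the volumes quoted above this gives roughly 3-4 kcal/mol, of the same order as the MD estimate of the penetration effect cited earlier; the full -TS value of 10.4 kcal/mol is larger because it also reflects the remaining translational entropy lost upon binding itself, not only the confinement. A minimal sketch:

```python
import math

R, T = 8.314, 298.15   # J/(mol K), K
V  = 3.73e-26          # m^3 per O2 molecule at standard conditions
Vp = 1.13e-28          # m^3, binding pocket of ~3 A average radius
dS = R * math.log(Vp / V)   # translational entropy change on confinement
print("-T*dS = %.1f kcal/mol" % (-T * dS / 4184.0))
```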
We observed a dramatic change (~10 kcal/mol) between the binding energies of the small (Figure 2) and large models (Figure 3). We therefore looked at the binding energies in the intermediate model (the one used for the ΔG corrections) and found that ΔE goes down to -20.6 kcal/mol in comparison to the small model (-5.2 kcal/mol). Moreover, the binding energy becomes less favourable by 3-5 kcal/mol for the large models in comparison to the medium-sized system. This is mainly due to the inclusion of the counter-charges and partially due to some less pronounced screening effects.
Calculations of oxygen inactivation rates
Within the steady-state approximation, the rate for ligand binding is given by

$$k_{\mathrm{in}} = \frac{k_{+1}\,k_{2}}{k_{-1} + k_{2}}$$

where $k_{+1}$ is the bi-molecular rate constant for ligand diffusion from the solvent to the active-site cavity, $k_{-1}$ is the rate constant for the reverse process, and $k_{2}$ is the rate constant for chemical attachment of the ligand initially located in the active-site cavity.
Converting the computed free energies into $k_{2}$ using transition-state theory and adopting values for $k_{+1}$ and $k_{-1}$ from previous MD simulations for [NiFe]-hydrogenase [22], we obtain values for $k_{\mathrm{in}}$ of 3.6 s⁻¹ mM⁻¹ and 1.2 s⁻¹ mM⁻¹ for the Cp and Dd enzymes, respectively. We believe that this should provide a good first approximation to the corresponding diffusion rates in [FeFe]-hydrogenases because, for small $k_{2}$, $k_{\mathrm{in}}$ is proportional to $k_{+1}/k_{-1}$, which in turn depends only on the size of the active-site cavity (for the same concentration).
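A sketch of the conversion, combining the Eyring expression for $k_{2}$ with the steady-state formula above (the $k_{+1}$ and $k_{-1}$ values below are placeholders, not those taken from the cited MD simulations):

```python
import math

kB, h = 1.380649e-23, 6.62607015e-34   # J/K, J s
R, T = 8.314, 298.15                   # J/(mol K), K

def eyring_k2(dG_act_kcal):
    """Transition-state-theory rate constant (s^-1) for a given barrier."""
    return (kB * T / h) * math.exp(-dG_act_kcal * 4184.0 / (R * T))

def k_in(k_plus1, k_minus1, k2):
    """Steady-state binding rate; tends to (k_plus1/k_minus1)*k2 for small k2."""
    return k_plus1 * k2 / (k_minus1 + k2)

k2 = eyring_k2(13.0)                  # barrier from the abstract, kcal/mol
print(k2, k_in(1.0e3, 1.0e6, k2))     # k+1, k-1 are placeholder values
```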
Dyslipidemia in Children Treated with a BRAF Inhibitor for Low-Grade Gliomas: A New Side Effect?
Simple Summary The use of targeted therapies is revolutionizing the prognosis of many cancers; however, there is still limited knowledge of their side effects. Dyslipidemia is often present in cancer patients due to mechanisms that are directly or indirectly related to cancer or therapies. The aim of our study is to investigate the effects of vemurafenib on lipid metabolism in a cohort of pediatric patients treated for brain tumors. For the first time, we describe dyslipidemia as a possible side effect of the BRAF inhibitors. A better understanding of the pathways that are involved in dyslipidemia could also help with a better understanding of the drug-resistance mechanisms in cancer cells. Abstract BRAF inhibitors, in recent years, have played a central role in the disease control of unresectable BRAF-mutated pediatric low-grade gliomas (LGGs). The aim of the study was to investigate the acute and long-term effects of vemurafenib on the lipid metabolism in children treated for an LGG. In our cohort, children treated with vemurafenib (n = 6) exhibited alterations in lipid metabolism a few weeks after starting, as was demonstrated after 1 month (n = 4) by the high plasma levels of the total cholesterol (TC = 221.5 ± 42.1 mg/dL), triglycerides (TG = 107.8 ± 44.4 mg/dL), and low-density lipoprotein (LDL = 139.5 ± 51.5 mg/dL). Despite dietary recommendations, the dyslipidemia persisted over time. The mean lipid levels of the TC (222.3 ± 34.7 mg/dL), TG (134.8 ± 83.6 mg/dL), and LDL (139.8 ± 46.9 mg/dL) were confirmed abnormal at the last follow-up (45 ± 27 months, n = 6). Vemurafenib could be associated with an increased risk of dyslipidemia. An accurate screening strategy in new clinical trials, and a multidisciplinary team, are required for the optimal management of unexpected adverse events, including dyslipidemia.
Introduction
Low-grade gliomas are the most common central-nervous-system (CNS) tumors among children [1]. The prognosis for these tumors is generally excellent, with high 10-year survival. However, little is known about the SEs and AEs of these agents in pediatric populations, considering the necessity of the long-term use of BRAFi (as is commonly the case with mTORi in patients with tuberous sclerosis and SEGA). Moreover, the novel agents have uncovered unexpected and unexplored AEs, which represent an important medical challenge. A better understanding of the SEs of these therapies is imperative.
The aim of our study is to expand the knowledge of the long-term AEs of BRAF inhibitors in pediatric populations by retrospectively analyzing the serum lipid concentrations in a cohort of pediatric patients treated with vemurafenib for LGG at our institute. We also discuss the possible role of lipid metabolism in resistance to BRAFi.
Materials and Methods
We collected and retrospectively reviewed the clinical, laboratory, and instrumental data of all patients treated with the BRAF inhibitor vemurafenib for LGG at the Giannina Gaslini Children's Hospital in Genoa, Italy, between 1 May 2015 and 31 December 2021. Patients treated with vemurafenib, aged up to 18 years at the time of diagnosis, with a follow-up of at least 6 months and at least 2 lipid-level samples (after starting treatment), were eligible.
The histological diagnosis was confirmed by the national reviewer for all patients. All patients had a BRAF V600E mutation confirmed by sequencing performed using the polymerase chain reaction (Easy Braf real-time PCR kit, Diatech Pharmacogenetics, Jesi, Italy).
Before and during the BRAFi treatment (at 1, 3, 6, and 12 months, and every 6 months thereafter), each patient underwent a detailed clinical and laboratory investigation to rule out possible organ disorders that contraindicated BRAFi treatment (including a complete blood count, biochemical liver- and kidney-function tests, auxological and endocrinological assessments, ECG, and echocardiography with a cardiological examination). After the incidental finding of dyslipidemia in the second treated patient, fasting blood samples for a lipid panel test were collected within 1 month before and at 1, 3, 6, and 12 months after the initiation of treatment with vemurafenib (and every 12 months thereafter).
The auxological data were obtained from the auxo-endocrinological assessments to which the patients were routinely subjected. Height was measured with a Harpenden stadiometer to an accuracy of ±1 mm; weight was measured on a digital scale to an accuracy of ±0.1 kg. BMI was calculated as weight (kg) divided by height (m) squared and transformed to standard-deviation scores using the WHO reference values [29].
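Both computations are simple enough to sketch. The snippet below shows the BMI formula together with the LMS transformation that underlies WHO standard-deviation scores; the L, M, and S values in the example are illustrative placeholders, not actual WHO table entries, which are age- and sex-specific.

```python
from math import log

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def lms_z_score(value: float, L: float, M: float, S: float) -> float:
    """Standard-deviation score via the LMS method used by growth references.

    L (Box-Cox power), M (median), and S (coefficient of variation)
    must be looked up in the WHO tables for the child's age and sex.
    """
    if L == 0:
        return log(value / M) / S
    return ((value / M) ** L - 1.0) / (L * S)

# Example with made-up LMS values, for illustration only:
z = lms_z_score(bmi(32.0, 1.30), L=-1.2, M=16.0, S=0.11)
print(f"BMI SDS = {z:.2f}")
```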
All families received a dietary recommendation based on the Mediterranean diet [30] at the first interview with the pediatric oncologist. Patients with alterations in their lipid or glucose profiles were referred for nutritional counseling.
Fasting blood samples were obtained from the 6 patients with LGG. Venous blood samples were collected by venipuncture or from a central venous catheter between 8 a.m. and 12 p.m., after an overnight fast. The serum and plasma were immediately separated, and the lipid panel (triglycerides, total cholesterol, and high-density lipoprotein cholesterol (HDL)) was quantified on the same day. An enzymatic colorimetric assay was used to determine total cholesterol, triglyceride, and direct HDL levels. Fasting plasma low-density lipoprotein cholesterol (LDL-C) was calculated using the Friedewald formula [31]. Fasting glycemia was determined using an enzymatic hexokinase assay.
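The Friedewald estimate is a purely arithmetic step (LDL = TC − HDL − TG/5, in mg/dL, where TG/5 approximates VLDL cholesterol). A minimal sketch follows; the HDL value in the example is illustrative rather than taken from the cohort, and the usual validity limit for high triglycerides is included.

```python
def friedewald_ldl(tc_mg_dl: float, hdl_mg_dl: float, tg_mg_dl: float) -> float:
    """LDL-C = TC - HDL - TG/5 (all values in mg/dL).

    The formula is considered unreliable for TG >= 400 mg/dL or for
    non-fasting samples, hence the guard below.
    """
    if tg_mg_dl >= 400:
        raise ValueError("Friedewald formula unreliable at TG >= 400 mg/dL")
    return tc_mg_dl - hdl_mg_dl - tg_mg_dl / 5.0

# Using the cohort means at 1 month (TC 221.5, TG 107.8) with an
# assumed HDL of 60 mg/dL:
print(friedewald_ldl(221.5, 60.0, 107.8))  # ~139.9 mg/dL
```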
Descriptive statistics were generated for the whole cohort; continuous variables are expressed as mean and standard deviation, median and range were also calculated, and categorical variables are reported as absolute or relative frequencies. We analyzed the data available for all patients before and after the incidental finding of dyslipidemia in the second patient (after which systematic, scheduled analysis of the lipid profile was started). Box plots were used to show the distributions of numeric variables at 1 month before treatment and at 1 month, 3 months, 6 months, 12 months, and the last follow-up after initiating treatment with vemurafenib. Box plots visually show the minimum value, the first quartile, the median, the third quartile, and the maximum value.
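A minimal sketch of how such box plots can be produced is shown below. The per-visit values are placeholders standing in for the per-patient measurements described above, and the dashed reference line uses the commonly cited NHLBI high-TC cutoff.

```python
import matplotlib.pyplot as plt

timepoints = ["pre", "1 mo", "3 mo", "6 mo", "12 mo", "last FU"]
# Placeholder total-cholesterol values (mg/dL) per visit; the real input
# is the per-patient lipid panel described in the Methods.
tc_values = [
    [165, 172, 180, 158],
    [190, 210, 245, 241],
    [205, 260, 230, 257],
    [215, 240, 228, 250],
    [220, 235, 245, 230],
    [200, 215, 260, 245, 230, 184],
]

fig, ax = plt.subplots()
# whis=(0, 100) makes the whiskers span min to max, matching the
# description of the box plots in the text.
ax.boxplot(tc_values, whis=(0, 100))
ax.set_xticks(range(1, len(timepoints) + 1))
ax.set_xticklabels(timepoints)
ax.axhline(200, linestyle="--", color="gray",
           label="NHLBI high-TC cutoff (200 mg/dL)")
ax.set_ylabel("Total cholesterol (mg/dL)")
ax.legend()
plt.show()
```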
All data were analyzed with SPSS software for Windows (IBM SPSS Statistics for Windows, version 26. IBM Corp., Armonk, NY, USA).
This study was conducted in accordance with the Declaration of Helsinki. Informed consent was obtained from all the families.
Study Population
We enrolled six patients (three males, three females) treated with the BRAFi vemurafenib, which was the first-choice BRAF-targeted therapy in our hospital until December 2018. The demographic and clinical features of the six patients are reported in Table 1 and Supplementary Table S1. In the absence of pharmacokinetic and pharmacodynamic data for vemurafenib in pLGGs, we decided to start treatment with a low dose of 370 mg/m² twice a day. The dosage was then increased every 2 weeks until the target range of 960–1100 mg/m²/day was reached after 1 month. Dose adjustments were made on a case-by-case basis during follow-up, depending on the patient's drug tolerance. One patient with ganglioglioma reached the target dose only 30 months after starting treatment, owing to a resurgence of skin toxicity during dose-escalation attempts.
One patient with ganglioglioma was switched from the BRAFi vemurafenib to the BRAFi dabrafenib plus the MEKi trametinib after 21 months of treatment because of severe skin toxicity. One patient with pilocytic astrocytoma discontinued vemurafenib after 13 months because of neuroradiologically confirmed tumor progression, which required antiedema therapy with dexamethasone, followed by chemotherapy and radiotherapy. In all other patients, treatment with the BRAFi was still ongoing at the closing date of the database. The most frequent tumor locations were the optic pathway/hypothalamic region (n = 4), followed by the basal ganglia (n = 1) and the spinal cord (n = 1). At the start of the targeted therapy, the tumor of one patient was disseminated.
The mean age at the start of targeted therapy was 8.4 ± 6.1 years (range: 3.5–18.8). The mean time from the start of vemurafenib to the last follow-up during treatment was 44.6 ± 26.5 months (range: 14.7–77.2). Before vemurafenib, five of the six patients had undergone partial neurosurgical resection, and five of six had received chemotherapy (one of these patients had also received radiotherapy) (Supplementary Table S1).
At the start of treatment, two patients were obese, and the mean BMI standard-deviation score of the subjects included in the analysis was 0.9 ± 1.8. None of the subjects were taking any medication to specifically control glucose and/or lipid metabolism (such as statins, fibrates, or hypoglycemic agents). They were free of overt liver, renal, and cardiac disease. Fasting blood glucose levels were normal (Supplementary Table S3). Thyroid function and the hypothalamic–pituitary axis were normal or well substituted with hormone-replacement therapies (Figure 1). In one patient, GH-deficiency treatment with rhGH was delayed by 12 months because of her oncological clinical condition.
One month after initiating treatment with vemurafenib, the lipid levels of triglycerides (107.8 ± 44.4 mg/dL, n = 4), total cholesterol (221.5 ± 42.1 mg/dL, n = 4), and LDL (139.5 ± 51.5 mg/dL, n = 4) were abnormal. The values remained elevated 3 months after the start of treatment: triglycerides (115 ± 45.6 mg/dL, n = 4), total cholesterol (238 ± 36.5 mg/dL, n = 4), and LDL (148.8 ± 40.2 mg/dL, n = 4). Moreover, the mean lipid levels of triglycerides (134.8 ± 83.6 mg/dL; Figure 2), total cholesterol (222.3 ± 34.7 mg/dL; Figure 3), and LDL (139.8 ± 46.9 mg/dL; Figure 4) were confirmed to be pathologically high at the last follow-up (mean time from the start of treatment to the last follow-up: 44.6 ± 26.5 months; range: 14.7–77.2). The HDL levels remained normal/acceptable (Figure 5).
The analysis of the individual patient data shows that the incidence of hypertriglyceridemia (according to the 2011 NHLBI guideline) [32] after 1 month of vemurafenib was 50% (n = 2/4); according to the Common Terminology Criteria for Adverse Events v5.0 (CTCAE) [33], one case was grade 0 and one case was grade 1. At long-term follow-up, the incidence of hypertriglyceridemia after vemurafenib was 83% (n = 5/6; by CTCAE, five cases were grade 0 and one case was grade 1).
According to the 2011 NHLBI guideline [32], hypercholesterolemia was present in 100% of patients (n = 4/4; all cases grade 1 by CTCAE) after both 1 and 3 months of vemurafenib. At long-term follow-up, the incidence of hypercholesterolemia after vemurafenib was 100% (n = 6/6; all grade 1 CTCAE). After 1 month, LDL levels were elevated in 50% of patients (n = 2/4); although there is no specific CTCAE score for LDL, all cases were considered grade 1 because dietary changes were required. Similarly, LDL levels were elevated in 75% of patients (n = 3/4; all grade 1 CTCAE) after 3 months and in 83% (n = 5/6; all grade 1 CTCAE) at long-term follow-up.
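To make this grading step reproducible, one could classify each fasting value against the pediatric cutoffs. The sketch below uses thresholds as commonly cited from the 2011 NHLBI guideline; they are quoted for illustration only and should be verified against the guideline itself before any real use.

```python
def classify_lipid(analyte: str, value_mg_dl: float, age_years: float) -> str:
    """Classify a fasting lipid value against pediatric cutoffs (mg/dL).

    Thresholds follow commonly cited 2011 NHLBI values and are
    illustrative; confirm them against the published guideline.
    """
    if analyte == "TC":
        borderline, high = 170, 200
    elif analyte == "LDL":
        borderline, high = 110, 130
    elif analyte == "TG":
        # Triglyceride cutoffs differ by age group in the guideline.
        borderline, high = (75, 100) if age_years < 10 else (90, 130)
    else:
        raise ValueError(f"unknown analyte: {analyte}")
    if value_mg_dl >= high:
        return "high"
    if value_mg_dl >= borderline:
        return "borderline-high"
    return "acceptable"

# Example: the cohort's 1-month mean TC in an 8-year-old would be 'high'.
print(classify_lipid("TC", 221.5, 8))
```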
Discussion
In recent years, targeted therapies have revolutionized the therapeutic possibilities in oncology. Reports of side effects have increased in proportion to the number of available molecules; however, knowledge of metabolic side effects is currently limited.
For the first time in children, we describe dyslipidemia (hypertriglyceridemia, hypercholesterolemia, and an increase in LDL) as a common early and late adverse event after starting the BRAFi vemurafenib. Notably, the patients treated with vemurafenib had commonly been pretreated with traditional chemotherapy. In the patient in whom vemurafenib was replaced by the combination of dabrafenib and trametinib for clinical reasons (photosensitivity and cutaneous side effects), the total cholesterol values returned to normal within a few weeks, as did the other side effects (Supplementary Table S2).
To the best of our knowledge, dyslipidemia as an adverse event of BRAFi has been reported in only a few studies, and blood-test screening for its occurrence is not included in the ongoing pediatric clinical trials. Therefore, a correlation between this possible side effect and resistance to BRAFi has not yet been explored in clinical practice. Because of the small cohort and the design of our study, it was not possible to correlate dyslipidemia with the neuroradiological response after initiating BRAFi, and/or with resistance to the targeted therapy.
In a phase I study that investigated the pharmacokinetics, efficacy, and tolerability of vemurafenib (960 mg twice daily) in 42 Chinese patients (median age: 42 years; range: 19–69) with BRAFV600-mutation-positive unresectable or metastatic melanoma, dyslipidemia was a common AE compared with the BRIM-3 study in Caucasians (cholesterol increase in 59% vs. <1%; hypertriglyceridemia in 22% vs. <1%) [24]. However, blood-chemistry analysis of the full fasting lipid profile was not performed in the BRIM-3 protocol, and the incidence of dyslipidemia may therefore be strongly underestimated. Severe hypercholesterolemia (CTCAE grade ≥ 3) was reported in only 1 of the 27 Chinese patients with hypercholesterolemia.
We know that some of the drugs used for targeted therapies have significant metabolic consequences, including dyslipidemia. On the other hand, the stimulation of lipid synthesis may result from the direct activation of oncogenic pathways in tumor cells. Many oncogenic mutations result in the aberrant activation of several signaling pathways, which can reprogram cancer-cell metabolism and cellular processes, including cell proliferation, differentiation, and the development of resistance to chemotherapy.
Among the targeted therapies, mTOR inhibitors are burdened with frequent dyslipidemia, and the etiopathology of this side effect has therefore been the object of many studies. The mTORi reduce the gene expression of lipogenic enzymes such as acetyl-CoA carboxylase, fatty acid synthase, and stearoyl-CoA desaturase. Indeed, the mTORi are responsible for an increase in total cholesterol and/or triglycerides by interfering with the protein kinase of the mTOR pathway [35].
New evidence supports the idea that lipid metabolism is implicated in driving the tumor microenvironment and the cancer-cell phenotype, contributing to the development and survival of cancer cells [36]. Changes in lipid metabolism can affect numerous cellular processes, including cell proliferation, differentiation, and motility [36]. In tumor cells, lipids can be used to store energy, to synthesize the basic elements necessary for cellular growth and proliferation (such as membranes), and to participate in cell signaling [36,37]. Cancer cells compete with host cells for oxygen and nutrients, and they maintain their malignant potential by modifying their lipid metabolism. The oxidative catabolism of lipids provides ATP and NADH, both of which are essential for controlling environmental stress and promoting survival [37,38].
Therefore, lipid-metabolism reprogramming is an essential link between the tumor and the host metabolism, with implications for sensitivity to chemotherapies [37], including targeted therapies [39].
A potential link between BRAFV600E and the regulation of lipid metabolism in cancer cells is suggested by several cell and mouse model studies [37,40,41]. In 2015, Kang et al. [42] demonstrated the interaction between oncogenic BRAF V600E and the enzyme 3-hydroxy-3-methylglutaryl-CoA lyase (HMGCL), which is involved in lipid metabolism through the production of ketone bodies. HMGCL expression is upregulated in BRAF V600E melanoma and hairy-cell leukemia. BRAF upregulates HMGCL through an octamer transcription factor, Oct-1, which leads to increased intracellular levels of the HMGCL product acetoacetate; acetoacetate selectively enhances the binding of BRAF V600E, but not wild-type BRAF, to MEK1 in V600E-positive cancer cells, thereby promoting the activation of MEK-ERK signaling and, in turn, tumor growth.
In 2017, Xia et al. [43] showed that a high-fat ketogenic diet increased the serum levels of acetoacetate, which promoted the tumor growth of BRAF V600E-expressing human melanoma cells in xenograft mice. The high-fat diets resulted in increased growth rates, masses, and sizes of tumors, without affecting body weight in these mice. In contrast, a high-fat diet did not affect tumor growth rates, masses, sizes, or body weight in mice bearing tumor xenografts expressing an active NRAS Q61R mutation. The increased tumor growth in BRAF-mutated xenograft mice fed a high-fat diet was not due to differences in the quantity of food intake. In both mouse models, the consumption of a high-fat diet did not significantly affect the serum levels of D-β-hydroxybutyrate (3HB), but significantly increased the serum cholesterol levels compared to control mice fed a normal diet. Treatment with hypolipidemic agents or with an inhibitory homolog of acetoacetate attenuated BRAF V600E tumor growth [43].
Valvo et al., in 2021 [41], showed that, in BRAFV600E papillary thyroid carcinoma, de novo lipid synthesis significantly increased (1.58- and 1.34-fold changes in heterozygous and homozygous BRAFV600E-derived cell lines, respectively) within 6 h in vemurafenib-treated cancer cells. The xenograft mouse data further showed that human BRAFV600E tumor cells became less responsive to vemurafenib within two weeks, and ultimately exhibited increased tumor growth, when the Acetyl-CoA Carboxylase 2 gene (ACC2) was knocked down. This suggests that silencing ACC2 (a rate-limiting enzyme for de novo lipid synthesis and for the inhibition of fatty acid oxidation) may contribute to BRAFV600E-inhibitor (e.g., vemurafenib) resistance and increased tumor growth. BRAFV600E inhibition increased de novo lipid-synthesis rates, decreased fatty acid oxidation as measured by the oxygen-consumption rate, and increased intracellular reactive-oxygen-species (ROS) production, which can trigger tumor-cell proliferation or death [41].
Numerous genes, including multiple oncogenes, growth factors, and tumor suppressors, are modulated by reactive oxygen species (ROS) and by changes in the AMP/ATP ratio caused by cancer-cell metabolic plasticity (both possible effects of cancer and of anticancer therapy, including BRAFi and MEKi) [44]. HIF-1 and AMP-activated protein kinase (AMPK), which operate as energy biosensors of oxidative stress and master regulators of cellular metabolism, play a crucial role in this phenomenon [45,46]. AMPK regulates the ATP level through the switch from anabolic to catabolic metabolism, via the stimulation of glucose uptake, aerobic glycolysis, and mitochondrial oxidative metabolism, mainly through the β-oxidation of fatty acids [46]. These pathways interplay with HIF-1 and, through a variety of oncogenes such as Ras, c-Myc, and p53, and the Akt/PKB, PI3K, and mTOR signaling pathways, sustain cancer-cell proliferation and survival [47-49]. The KRAS gene is also directly implicated in ROS generation by NADPH oxidases [50]. Cancer-cell survival and metastasis can be sustained by lipid biosynthesis promoted by a shift in glutamine metabolism from oxidation to reductive carboxylation [49].
In BRAF V600E melanoma cells, altered lipid metabolism could contribute to targeted-therapy resistance through the modified activation of several lipogenesis pathways [51-53]. New evidence shows that the SREBP-1-dependent activation of lipogenesis is required for tumor growth and cell survival in multiple cancer models, including high-grade glioma [54,55]. In BRAF-mutant melanomas, therapy resistance to vemurafenib is supported by Sterol Regulatory Element-Binding Protein (SREBP-1) activation [56] and by the upregulation of the S1P-dependent signaling pathway [37,40,51,52,57]. In sensitive BRAF-mutant models, vemurafenib caused a decrease in both lipogenesis and SREBP-1 activation. This was not seen in therapy-resistant models, all of which showed high levels of lipogenesis even in the presence of the inhibitor; in these models, BRAFi only induced a moderate decrease in SREBP-1 levels and did not significantly affect lipogenesis [56]. This is probably due to the activation of the alternative ERK pathway, which is linked to therapy resistance and is a known regulator of SREBP [58,59], as shown by the decreased levels of SREBP-1 in therapy-resistant cells treated with the MEK inhibitor trametinib [56]. The expression of well-established mSREBP-1 downstream targets, such as ACLY, ACACA, and FASN, was also consistently reduced. These findings indicate that the reactivation of the ERK pathway contributes to sustained SREBP-1 activity in therapy-resistant melanoma cells [56]. Moreover, the expression of key lipogenic enzymes and SREBP-1 downstream targets, such as fatty acid synthase (FASN) and acetyl-CoA carboxylase-1, was found to be inversely associated with drug resistance in BRAF-mutant cell lines [56].
In the current study, most of these laboratory abnormalities met the criteria for AEs because they were medically significant and required dietary modification. All of these events were grade 1 by CTCAE, asymptomatic, and did not require a change in treatment or dose modification. However, given the possible need for long-term use, these findings may affect the overall benefit/risk assessment of vemurafenib in patients with high cardiovascular risk.
Conclusions
The targeted therapies for brain tumors are innovative and promising oncological treatments, and, as a result, their use has expanded widely. The effectiveness of BRAFi, and their use in combination with other new targeted therapies, is increasing, and the spectrum of side effects therefore needs to be further explored.
The toxicities related to these new agents are generally not life-threatening; however, the long-term effects are unknown, and they could potentially be a limiting factor in chronic, life-long use. An accurate screening strategy in new clinical trials, and a multidisciplinary team, are required for the optimal management of unexpected adverse events.
We describe, for the first time, this possible side effect of BRAFi in a case series of children treated for LGG. In our study, children treated with vemurafenib showed a worsening of their lipid profiles, with a significant increase in triglycerides, LDL, and total cholesterol over time. New prospective, multicenter clinical trials with larger study groups are needed to confirm our observation; the evaluation of the serum lipid balance should therefore be implemented in future experimental protocols including BRAFi and/or MEKi.
Given the large amount of data showing a possible role for lipid metabolism in the mechanisms of resistance and response to biological therapies, future studies should explore this hypothesis.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers14112693/s1. Supplementary Table S1: Detailed demographic and clinical features of the patients treated with vemurafenib; Supplementary Table S2: Lipid levels before and after the switch of treatment from vemurafenib to dabrafenib and trametinib; Supplementary Table S3: Fasting blood glucose, biochemical liver-, and kidney-function tests before and during vemurafenib.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest.
Chronic dysfunction of Stromal interaction molecule by pulsed RNAi induction in fat tissue impairs organismal energy homeostasis in Drosophila
Obesity is a progressive, chronic disease, which can be caused by long-term miscommunication between organs. It remains challenging to understand how chronic dysfunction in a particular tissue remotely impairs other organs to eventually imbalance organismal energy homeostasis. Here we introduce RNAi Pulse Induction (RiPI) mediated by short hairpin RNA (shRiPI) or double-stranded RNA (dsRiPI) to generate chronic, organ-specific gene knockdown in the adult Drosophila fat tissue. We show that organ-restricted RiPI targeting Stromal interaction molecule (Stim), an essential factor of store-operated calcium entry (SOCE), results in progressive fat accumulation in fly adipose tissue. Chronic SOCE-dependent adipose tissue dysfunction manifests in considerable changes of the fat cell transcriptome profile, and in resistance to the glucagon-like Adipokinetic hormone (Akh) signaling. Remotely, the adipose tissue dysfunction promotes hyperphagia likely via increased secretion of Akh from the neuroendocrine system. Collectively, our study presents a novel in vivo paradigm in the fly, which is widely applicable to model and functionally analyze inter-organ communication processes in chronic diseases.
Energy homeostasis is pivotal to life. It is well-established that long-term positive energy balance drives obesity (reviewed in 1,2 ), which is defined as abnormal fat accumulation causative for a number of human diseases 3 .
The prevalence of obesity in children and adults has increased substantially around the world since 1980 4 . Moreover, over the past decades, no country has been successful in combating the obesity pandemic 4 . It is still under debate whether the increased supply of high-calorie food 5 combined with a sedentary lifestyle 6,7 alone could explain the spread of obesity. But it is clear that the primary driver of obesity is the mismatch of energy intake and expenditure. Specifically, the maintenance of energy balance requires coordination of the major energy-handling tissues, such as the liver (reviewed in 8 ) and adipose tissue (reviewed in 9 ) in mammals. Next to storing energy, adipose tissue also senses the energy status and communicates via adipokines to affect energy intake 10 and expenditure 11 . Therefore, it is vital to understand how energy storage tissue regulates organismal energy balance both in an organ-specific manner and systemically.
Much like mammals, flies also have an energy storage tissue, called the fat body, which functions similarly to the liver and white adipose tissue (reviewed in 12 ). Importantly, in both mammals and flies, the majority of surplus energy is stored in these tissues in the form of triacylglycerols (TAG).
Results
Chronic in vivo gene knockdown by RNAi Pulse Induction (RiPI) in adult fly storage tissue. To generate RiPI in adult Drosophila flies (Fig. 1A), we employed the GAL4 system 29 combined with the temperature-sensitive TARGET system 30 to control UAS-RNAi transgenes in a switchable, fat body-specific manner (ts-FB-GAL4) 31 . As a proof of concept, we targeted the Adipokinetic hormone receptor (AkhR) gene, a well-characterized key regulator of lipid mobilization in the fat body 30 . Adult flies at the age of six days, which carried the ts-FB-GAL4 and a short hairpin (sh) AkhR RNAi transgene (UAS-shAkhRi), were either subjected to a TARGET system-based RiPI (TRiPI; 34–48 hours at the permissive temperature 29 °C; AkhR-TRiPI On) or kept continuously at the restrictive temperature 18 °C (AkhR-TRiPI Off) (Fig. 1B). To exclude possible confounding effects of the temperature shift per se 32 , we also examined control flies carrying the UAS-shAkhRi transgene only, under the two temperature regimes described above (AkhR-TRiPI On control and AkhR-TRiPI Off control) (Fig. 1B). To demonstrate that our conditional expression system is on-off switchable, we took advantage of ts-FB-GAL4 flies carrying a UAS-GFP reporter transgene (see Material & Methods). As expected, GFP mRNA was significantly up-regulated after the induction pulse (day 1) but was at control levels at day 10 after the return to restrictive conditions (Fig. 1C). Similarly, high levels of AkhR siRNA were detected right after the AkhR-TRiPI (21-fold higher than in AkhR-TRiPI Off flies at day 0). Strikingly, however, AkhR siRNA levels still remained 13-fold higher than controls even 10 days after transgene switch-off (Fig. 1D). Consistently, at day 10 after the return to restrictive conditions, AkhR mRNA abundance was still only around half that of control flies, a gene knockdown level similar to that observed immediately after the end of TRiPI (Fig. 1E). This chronic AkhR knockdown in the adipose tissue causes a persistent body fat increase (Fig. S1A), which is characteristic for AkhR loss-of-function flies 33 . In contrast, we detected no body fat content increase in response to the same temperature shift regimen when ts-FB-GAL4 was absent (Fig. S1A). Importantly, body fat control by tissue-specific, chronic AkhR-TRiPI is not restricted to the UAS-shAkhR transgene but also works in combination with a UAS-long double-stranded (ds)AkhR transgene (Fig. S1B). Collectively, these data suggest that the turnover of shRNA or dsRNA transgene-derived siRNAs targeting AkhR is slow enough to ensure chronic AkhR knockdown for at least three weeks after the induction pulse. This finding opens the possibility to monitor chronic disease progression in response to tissue-specific gene interference in an otherwise undisturbed in vivo setup.
Figure 1. (B) TRiPI regimen: pulse induction at 29 °C for 38–48 h starting at day 6 after eclosion (TRiPI On) versus continuous 18 °C (TRiPI Off), with flies carrying the RNAi transgene only under both regimes as controls. (C) Abdominal GFP mRNA in TRiPI flies carrying the temperature-sensitive GFP transgene is substantially higher at day 1 than at day 10 after the pulse. (D) AkhR siRNA produced from the AkhR-shRNAi transgene is highly abundant at day 0 and remains significantly elevated at day 10 in the abdomen of AkhR-TRiPI On males compared to AkhR-TRiPI Off males. (E) Abdominal AkhR mRNA is reduced at days 1 and 10 after AkhR-TRiPI On compared to AkhR-TRiPI Off. Relative (Rel.) mRNA or siRNA levels are represented as fold changes over the corresponding TRiPI Off value (=1). Data are means ± standard deviations from 3–6 replicates; two-tailed unpaired Student's t-test; *p < 0.05, **p < 0.01, ***p < 0.001.
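The fold changes above are relative expression values normalized to the TRiPI Off baseline (=1). The paper does not state the exact quantification scheme for the RT-qPCR data, but the standard 2^-ΔΔCt method would compute such fold changes as sketched below, using illustrative Ct values.

```python
def fold_change_2ddct(ct_target_on: float, ct_ref_on: float,
                      ct_target_off: float, ct_ref_off: float) -> float:
    """Relative expression by the 2^-ΔΔCt method (Livak & Schmittgen).

    'on'/'off' are mean Ct values from TRiPI On and TRiPI Off samples;
    'ref' is a housekeeping gene used for normalization.
    """
    d_ct_on = ct_target_on - ct_ref_on
    d_ct_off = ct_target_off - ct_ref_off
    return 2.0 ** (-(d_ct_on - d_ct_off))

# Illustrative Ct values reproducing a ~21-fold siRNA enrichment in
# TRiPI On flies relative to the Off baseline:
print(fold_change_2ddct(22.0, 18.0, 26.4, 18.0))  # ~21.1
```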
Next, we targeted by RiPI the ER calcium sensor Stromal interaction molecule (Stim), a key component of store-operated calcium entry (SOCE). Stim is a broadly expressed, essential gene in flies 28 and mice 26 and is accordingly not accessible to conventional long-term knockout mutant analysis in adult animals. We have previously shown that short-term Stim knockdown in the fat body impairs gene function at the mRNA and protein level, which causes excessive lipid accumulation in fly fat cells 15 . We now subjected six-day-old adult male flies to an adipose tissue-targeted Stim-TRiPI regimen (Fig. 1B) using a UAS-dsStim RNAi transgene (Stim-RNAi1; Fig. S2A). Consistent with our findings for AkhR, the expression of Stim dsRNA was effectively switched on and off in response to the Stim-TRiPI regimen. Stim dsRNA levels were high immediately after the induction pulse (day 0) but were back to control levels at day 1 and day 10 after the return to the restrictive temperature (Fig. 2A). In contrast to the transient dsRNA pulse, the Stim mRNA abundance remained chronically decreased to about 50% at days 1 and 10 (Fig. 2B). In line with the chronic impairment of Stim function, Stim-TRiPI On flies accumulated significant body fat at day 1 after RiPI and then progressively added fat (doubling at day 10), reaching a remarkable 3.5 times the body fat content of Stim-TRiPI Off control flies at day 21 (Fig. 2C). This massive obesity development is a universal response to chronic Stim impairment, largely independent of fly age and sex. Next to young (Fig. 2C) and older (Fig. S2F) unmated male flies, virgin female flies (Fig. S2C,G) and mated male flies of different ages (Fig. S2B,H) also show a significant body fat increase three weeks after Stim-TRiPI. By contrast, continuously mated female flies of different ages were resistant to Stim-TRiPI-dependent obesity (Fig. S2E,I), for currently unknown reasons.
To exclude RNAi off-target effects in the etiology of Stim-TRiPI-dependent obesity, we used a second RNAi transgenic line (Stim-RNAi2), which targets an independent sequence of Stim (Fig. S2A). Consistent with the previous results, the body fat content of Stim-RNAi2 flies increased significantly at day 11 after Stim-TRiPI and remained higher than in controls at day 21 (Fig. S3A). Finally, we confirmed the causal role of Stim-TRiPI in fat accumulation by simultaneous expression of Stim-RNAi with either of two cDNA-based Stim transgenes: the RNAi-sensitive Stim-RA or the RNAi-resistant Stim-Rm transgene (for details see Material & Methods). Fat body-targeted expression of each of these transgenes causes body fat reduction in flies (Fig. S3B,C), which is characteristic of Stim gain-of-function 15,25 . RiPI in flies carrying the Stim-dsRNAi and the RNAi-sensitive cDNA-RA transgenes shows progressive fat accumulation as early as day 0 after the pulse induction (Fig. S3D). This fat accumulation profile is consistent with the interpretation that Stim-dsRNA downregulates both the endogenous and the transgenic overexpression of Stim. In contrast, concomitant pulse induction of Stim-dsRNAi and the RNAi-resistant Stim-Rm transgene rescues the normal body fat phenotype at day 0 but not at day 11 or 21 (Fig. S3E). Again, these data are in agreement with the Stim-Rm expression being switched off after RiPI, while the dsRNA-mediated Stim gene knockdown persists.
To test the general applicability of the RiPI approach for chronic disease modeling, we tested a TARGET-independent switchable gene expression system. Accordingly, we combined Stim-dsRNA with the drug (Mifepristone)-inducible geneSWITCH (GS) system 34 to achieve switchable, ubiquitous expression control using daughterlessGS (daGS) 35 (Fig. 2D). As in the case of Stim-TRiPI, drug-dependent RNAi pulse induction (DRiPI) of Stim caused a significant transient induction of Stim-RNAi just after RiPI (day 0), which returned to control levels from day 4 onward (Fig. 2E). Comparable to fat body-targeted Stim-TRiPI, ubiquitous Stim-DRiPI also reduced Stim mRNA to about 40% of the abundance in control flies at day 8 (Fig. 2F). Consequently, this chronic Stim knockdown caused progressive body fat accumulation (Fig. 2G). Notably, this obesity is not only independent of the age and sex of the flies but also of food composition. While the absolute fly body fat content varies widely in response to dietary composition, the relative body fat content of Stim-DRiPI flies is consistently at least doubled compared to controls on both rich and poor food (Fig. S3F).
Collectively, our data present RiPI as a versatile tool to monitor chronic disease progression caused by long-term, tissue-specific gene impairment. Specifically, our findings establish Stim-RiPI as a potent method to study the pathophysiological consequences of chronic SOCE impairment in the fly adipose tissue.
Chronic adipose tissue impairment of Stromal interaction molecule causes metabolic disease in flies.
After having demonstrated that Stim-RiPI flies suffer from excessive fat accumulation, we asked whether these flies also display other phenotypes characteristic of mammalian obesity, such as increased girth, elevated body weight, and reduced physical fitness. While Stim-TRiPI On and Stim-TRiPI Off flies are indistinguishable at day 1 (Fig. 3A), the abdominal girth of day 21 Stim-TRiPI On flies is visibly inflated compared to Stim-TRiPI Off flies (Fig. 3B). Consistently, Stim-TRiPI On flies gained about 22% body weight in three weeks to become 18% heavier than age-matched Stim-TRiPI Off flies (Fig. 3C), and a dry weight gain of Stim-TRiPI On flies was also observed (Fig. S3G). In line with the inflated abdominal girth and increased body weight, thin layer chromatography (TLC) data confirmed that the body TAG content of day 21 Stim-TRiPI On flies was 90% higher than that of day 21 Stim-TRiPI Off flies (Fig. 3D,E). Fly fat body tissue is functionally similar to mammalian adipose tissue and liver, a major organ for fat and glycogen reserves 12 . Consistent with their increased fat reserves, obese Stim-TRiPI flies are more starvation resistant than control flies (Fig. S4A). Notably, substantial post-mortem lipid stores suggest a lipid mobilization impairment in Stim-TRiPI flies (Fig. S4B). Much in contrast to fat, glycogen storage under ad libitum feeding and glycogen mobilization upon starvation are comparable between Stim-TRiPI On and Stim-TRiPI Off control flies (Fig. S4C). By contrast, we found that day 10 Stim-DRiPI On flies displayed substantially higher (around 1.9-fold) circulating sugar levels than day 10 control flies (Fig. S4D). These results demonstrate that the metabolic dysregulation of body energy stores includes not only a defect in TAG mobilization but also hyperglycemia, which likely contributes to, or at least correlates with, the obesity phenotype.
We next asked whether Stim-TRiPI-dependent obesity in flies correlates with impaired physical fitness. Indeed, the climbing ability of day 24 Stim-TRiPI flies was reduced by 40% compared to controls (Fig. S4D). The median lifespan of Stim-DRiPI flies was also significantly reduced, by about 21% compared to the corresponding control flies (Fig. S4E), which is reminiscent of the increased mortality associated with human obesity 4 . Collectively, these data show that the Stim-TRiPI model specifically affects storage lipid metabolism and recapitulates hallmarks of mammalian obesity.
Next, we asked how long-term knockdown of Stim changes systemic energy homeostasis. Consistent with previous acute RNAi expression studies 15 , we observed that Stim-TRiPI On flies increased food intake by 90% immediately after pulse induction (Fig. 4A), and these flies stayed hyperphagic over the following 10 days (Fig. 4B). Similarly, food intake was increased by around 50% for eight days in Stim-DRiPI On flies using this alternative RNAi induction system (Fig. S5A). These data suggest that hyperphagia contributes to the extra fat accumulation. In line with this hypothesis are the results from restrictive pair-feeding experiments. In this paradigm, Stim-TRiPI On flies were offered slightly less than the food amount consumed by ad libitum fed Stim-TRiPI Off flies, to largely match the dietary intake of experimental and control flies. Under pair-feeding, the body fat content of Stim-TRiPI On flies increased by a moderate 127% during the first ten days, reaching 1.85-fold the value of age-matched Stim-TRiPI Off flies (Fig. 4C). The corresponding body fat increase under ad libitum fed conditions was 343% during the first ten days (with a 30% increase already at day 1 after pulse induction), reaching 2.57-fold the value of Stim-TRiPI Off flies (Fig. 4C). To assess the contribution of hyperphagia-dependent lipogenesis, we used dietary 14 C glucose pulse-labeling to compare de novo lipid synthesis between Stim-DRiPI On and Off flies (Fig. S5B), as glucose intake is known to drive lipogenesis 36 . Indeed, the 24 h 14 C incorporation into neutral lipids of Stim-DRiPI On flies was almost five times higher compared to Stim-DRiPI Off flies (Fig. S5C), which substantially exceeds the corresponding incorporation into polar lipids (79%; Fig. S5D) or the food intake increase. These results suggest that, in addition to hyperphagia, hyperactivity of TAG biosynthesis and/or reduced energy expenditure causes the excessive fat accumulation. Therefore, we used respirometry-based metabolic rate estimation in flies 37 and detected a 29% decrease in CO 2 production at noon and a 17% decrease in the evening in Stim-DRiPI On flies compared to the corresponding control flies (Fig. S5E). Collectively, these data demonstrate that hyperphagia, increased lipogenesis, and reduced energy expenditure contribute to fly obesity in response to chronic Stim knockdown, through as yet unknown molecular mechanisms.
Adipose tissue lipolysis dysfunction, Adipokinetic hormone resistance, and insulin signaling impairment caused by chronic impairment of Stromal interaction molecule.
To address the tissue-autonomous mechanisms of chronic Stim knockdown, we performed differential transcriptome profiling by comparative RNAseq analysis of dissected fat body tissues from day 10 Stim-TRiPI On and Off flies (Fig. S6A-C). Unsurprisingly, gene ontology analysis revealed a complex regulatory response to chronic SOCE impairment, suggesting the down-regulation of various metabolic processes (Fig. 5A). A group of significantly down-regulated genes comprises aralar1, Mdh1, Mdh2, Got1, and Got2 (Fig. 5B,C), all of which encode enzymes involved in NADH shuttling between mitochondria and cytoplasm (Fig. S6D). To understand the correlation between Akh signaling and genes involved in NADH shuttling, we co-analyzed the RNAseq dataset of fly larval fat body samples in response to Akh overexpression vs. control conditions 38 and our RNAseq dataset of adult fly fat body tissue under Stim-TRiPI On vs. Off conditions. Intriguingly, 47 genes were up-regulated in larval fat body overexpressing Akh but down-regulated in Stim-TRiPI On adult fat body tissue; four of them, Mdh1, Got1, Got2, and aralar1, can be considered Akh signaling reporter genes (Fig. S7A). Moreover, Mdh1 mRNA is down-regulated as early as day 1 in Stim-TRiPI On flies (Fig. 5D) but not in Stim-TRiPI On control flies (Fig. S7B). Another gene contrarily expressed under the described conditions is Gprk2 (Fig. S7A), which is directly correlated with cAMP levels in fly tissue 39 . Consistently, both RNAseq and RT-qPCR show that the AkhR mRNA level is up-regulated in the fat body of Stim-TRiPI On flies, but not the AkhR downstream signaling gene tobi 40 (Fig. 5E). Collectively, these findings suggest a differential impairment of Akh signaling in the fat body of obese Stim-TRiPI flies.
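The cross-comparison of the two RNAseq datasets reduces to intersecting oppositely regulated gene sets. The sketch below illustrates that step with hypothetical file and column names; the actual analysis pipeline used by the authors is not specified in this section.

```python
import pandas as pd

# Hypothetical input files; each is assumed to hold one differential-
# expression result per row, with columns 'gene' and 'log2fc'.
akh_oe = pd.read_csv("larval_fatbody_Akh_OE_vs_ctrl.csv")
stim_kd = pd.read_csv("adult_fatbody_Stim_TRiPI_on_vs_off.csv")

up_in_akh = set(akh_oe.loc[akh_oe["log2fc"] > 0, "gene"])
down_in_stim = set(stim_kd.loc[stim_kd["log2fc"] < 0, "gene"])

# Genes induced by Akh overexpression but repressed under chronic Stim
# knockdown are candidate reporters of impaired Akh signaling.
candidates = sorted(up_in_akh & down_in_stim)
print(len(candidates), candidates[:10])
```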
Previous research showed that Akh/AkhR signaling is causative for lipid mobilization 33,41 . Consistently, RNAseq indicates the transcriptional down-regulation of the paralogous lipase genes doppelgänger of brummer (dob) and brummer (bmm), the latter encoding the fly homolog of mammalian adipose triglyceride lipase (ATGL) 42 , in response to chronic Stim impairment (Fig. S8A). Moreover, this analysis suggests a coordinated down-regulation of the mitochondrial and peroxisomal β-oxidation pathways in obese Stim-TRiPI On flies (Fig. S8A). We confirmed the transcriptional down-regulation of lipid catabolic pathway genes by selected RT-qPCR analyses of bmm, carnitine O-palmitoyltransferase (CPT1), dACADVL, and dATP5B (Fig. S8A,C). In contrast, midway (mdy), which encodes the homolog of the key human lipid synthesis enzyme diacylglycerol acyltransferase 1 (DGAT1), was only significantly up-regulated at day 1 but not at day 10 in Stim-TRiPI On fat body (Fig. S8D). Similarly, other key genes involved in lipid biosynthesis, such as dFAS (fatty acid synthase gene), dACS (acetyl-coenzyme A synthetase gene), and dLipin (Mg2+-dependent PA phosphatase gene), were not significantly up-regulated at day 10 in Stim-TRiPI On fat body (Fig. S8D,E). Collectively, these data suggest that a partial Akh resistance, as well as impairment of lipolysis and mitochondrial function, mediates obesity progression in the fat body in response to chronic Stim knockdown.
To understand why Stim-DRiPI On flies have higher circulating sugar levels, we checked the expression of the Hex-C gene, a homolog of mammalian Glucokinase (GCK) likely involved in glucose clearance by the fly fat body 43 and in insulin signaling 44 . Stim-TRiPI On flies show around 35% reduced expression of the Hex-C gene at day 10 (Fig. S9A), which might account for the hyperglycemia. Since insulin-deficient flies display hyperglycemia 45 and insulin signaling in the fat body promotes circulating sugar clearance 46 , we checked peripheral insulin signaling in Stim-TRiPI On flies. Expression of the insulin signaling target gene d4EBP, which is repressed by insulin signaling through active phosphorylated Akt 47 , was not significantly up-regulated at day 1 and was mildly but significantly up-regulated, by around 25%, at day 10 (Fig. S9B). Although insulin signaling has been shown to promote fly fat cell proliferation and fat accumulation 48 , we did not find a measurable increase of new DNA synthesis, as indicated by EdU staining, in Stim-TRiPI On fly fat body tissues (Fig. S9C). Consistently, the level of phosphorylated Akt (at Ser505 43 but not Thr342 49 ), the downstream readout of insulin signaling 50 , showed a trend toward a 37% reduction in Stim-DRiPI On peripheral tissues (Figs S9D, S10A, S11A,B,C). Moreover, loss-of-function of the three central insulin peptide genes dIlp2,3,5, the master regulators of sugar homeostasis 51,52 , did not prevent the extra fat accumulation of Stim-DRiPI On flies (Fig. S9E). Consistent with our previous finding 15 , insulin-producing cells (IPCs) of Stim-TRiPI On flies also show around 30% reduced dIlp-2 staining intensity (Fig. S9F,G), indicative of increased dIlp-2 secretion 53 . In line with this, RNAseq and RT-qPCR analysis identified up-regulation of the insulin secretion-promoting genes CCHamide-2 (CCHa2) 54 (by around 67%) and dawdle (daw) 55 (by around 35%), but down-regulation of the insulin signaling-inhibiting gene Ecdysone-inducible gene L2 (ImpL2) 56 (by around 85%) and the insulin secretion-suppressing gene Limostatin (Lst) 57 (by around 70%) at the mRNA level (Fig. S10A). To measure circulating dIlp-2 directly, we combined the Stim-DRiPI system with dIlp-2HF (an epitope-tagged dIlp-2 gene) 52 and found no significant difference in dIlp-2HF levels between obese Stim-DRiPI On and Stim-DRiPI Off flies (Fig. S10D). While the contradictory findings on circulating dIlp levels need future research attention, the results collectively show that insulin signaling is impaired in the fat body tissues of obese Stim-RiPI flies and is partially dispensable for the long-term body fat storage increase.
Systemic Adipokinetic hormone signaling controls hyperphagia in response to chronic Stromal interaction molecule dysfunction in the fat body. Adipokinetic hormone signaling not only promotes lipid catabolism in the fat body but also regulates food intake, with ectopic Akh expression causing hyperphagia 58,59 . Therefore, we asked whether Stim dysfunction of fat body tissue promotes food intake via Akh signaling. We first quantified the Akh mRNA levels of day 1 and day 10 Stim-TRiPI On flies and found no expression differences compared to control flies (Fig. 6A). Since Akh signaling is controlled by Akh peptide secretion from the neuroendocrine corpora cardiaca (CC) cells 60 , reduced Akh immunostaining intensity in CC cells is used as a proxy for increased secretion of the hormone 60,61 . Therefore, we assessed Akh peptide levels by comparative immunohistochemistry on dissected CC cells of day 10 Stim-TRiPI On/Off flies using antibodies directed against Akh and the synaptic cytoskeletal protein Bruchpilot (serving as a neuronal marker for signal normalization) (Fig. 6B,C). The normalized Akh staining intensity in CC cells of Stim-TRiPI On flies was significantly reduced, by around 27%, compared to Stim-TRiPI Off flies (Fig. 6B,C), suggesting increased Akh secretion in response to chronic SOCE impairment in the fat body. To directly address the role of Akh in obese flies subject to chronic Stim fat body dysfunction, we compared the body fat content and the food intake of Akh heterozygous and homozygous mutant flies subjected to Stim-DRiPI On/Off (Fig. 6D,E). Loss of Akh completely suppressed obesity development (Fig. 6D) and hyperphagia (Fig. 6E) in Stim-DRiPI On flies.
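At its core, the Akh/Brp quantification described above is a masked mean-intensity ratio. The paper does not specify its exact image-analysis pipeline, so the following is only a sketch of one plausible implementation.

```python
import numpy as np

def normalized_akh_intensity(akh_img: np.ndarray, brp_img: np.ndarray,
                             cc_mask: np.ndarray) -> float:
    """Mean Akh immunofluorescence inside the CC-cell mask, normalized
    to the Bruchpilot (Brp) neuronal marker in the same region."""
    return akh_img[cc_mask].mean() / brp_img[cc_mask].mean()

def percent_reduction(on: float, off: float) -> float:
    """Relative reduction of the On condition versus the Off control,
    e.g. the ~27% lower normalized Akh signal reported above."""
    return (1.0 - on / off) * 100.0
```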
Since Stim-TRiPI On flies have higher circulating sugar levels than control flies, the increased Akh secretion cannot be triggered by lower circulating sugar levels, as reported previously 60 . Thus, we propose that the increased Akh secretion in Stim-RiPI On flies might be regulated by unknown factors secreted from the fat body tissue. RNAseq analysis of Stim-TRiPI fat body tissues identified 16 differentially expressed genes encoding proteins belonging to the "secreted" group (supplementary data file 2). Interestingly, Lst, CCHa2, daw, Est-6, and PGRP-SD were also transcriptionally regulated in fly larval fat body overexpressing Akh (Fig. S10B), which qualifies them as fat body-derived signal candidates for regulating Akh secretion. Collectively, these results support the model that chronic impairment of Stim activity in the fat body remotely triggers Akh secretion from the CC cells by currently unknown mechanisms, which in turn stimulates food intake and thereby contributes to obesity progression.
Discussion
Chronic knockdown of target genes by RNAi Pulse Induction (RiPI). Here we present in vivo evidence for chronic targeted gene knockdown following RiPI in the adult Drosophila fat body. We found that a short pulse induction of shRNA targeting the AkhR gene generates persisting siRNAs, which cause significant down-regulation of AkhR for at least 10 days. Persistence of RNAi has been associated with RNA-dependent RNA polymerase (RdRP)-mediated siRNA amplification in C. elegans 62,63 and in human cells 64 . In Drosophila, however, it remains controversial whether the genome encodes a functional RdRP 65,66 . Therefore, slow degradation of the transgene-derived siRNAs might confer the chronic gene knockdown. In fact, RNAi effector double-stranded siRNAs (21 nt and 24 nt) are more stable than 18-nt double-stranded RNAs in human cytosolic extract 67 . Moreover, in human HEK293T cells, the anti-sense strand of siRNA is more resistant to intracellular nucleases than the sense strand of the siRNA duplex, which is likely due to the incorporation of anti-sense siRNAs into the activated RNA-induced silencing complex (RISC) 68,69 . Therefore, the involvement of RISC might allow the slow degradation of siRNAs in adult Drosophila fat body cells. The slow decline of the siRNA level is apparently sufficient to chronically knock down the endogenous gene expression of AkhR and Stim, which causes a progressive body fat increase. This mode of action is further supported by the fact that pulsed overexpression of RNAi-resistant Stim mRNA only transiently rescues the fat content increase due to Stim-TRiPI. Consistently, long-term gene silencing (at least 11 days) is also observed in adult flies after injection of low concentrations of dsRNAs 70 . Similarly, in an EGFP-transgenic mouse model, the inhibition of reporter expression lasts as long as two months after siRNA injection 71 . In summary, we show here that in vivo RiPI generates long-lasting RNAi, which allows chronic knockdown of target genes in a tissue-specific manner. We propose that RiPI is a versatile tool to study causative relationships and temporal sequences in inter-organ communication processes.
A Drosophila obesity model triggered by chronic Stromal interaction molecule dysfunction in the fly adipose tissue. Using RiPI, we established a Drosophila obesity model based on chronic, adipose tissue-directed knockdown of Stim, which shares remarkable similarity with characteristics of human obesity. First, the visibly enlarged abdomen of the obese flies corresponds to increased waist circumference, which is gaining importance as a meaningful parameter to assess android adiposity 72 . Similarly, body fat accumulation causes significant weight gain, another readout used to quantify obesity in rodents 73 and humans 74 . Second, the excessive fat accumulation correlates with climbing deficits of the obese flies, physical fitness reduction being another hallmark of human adiposity 75 . Moreover, obese Stim-TRiPI flies have a reduced life span, which is reminiscent of the higher mortality rates in human obesity patients 76 . Third, we demonstrate that early-onset hyperphagia drives the positive energy balance in Stim-TRiPI flies. Consistently, increased food intake is the major driver of human obesity 5 . Hyperphagia is linked to increased dietary glucose conversion into storage fat in obese Stim-TRiPI flies.
Notably, increased food intake and elevated glucose conversion into storage lipids have also been reported after silencing obesity-blocking neurons in the fly central brain 77 . With hyperphagia being an important contributor, obesity development in Stim-RiPI flies is nevertheless not monocausal. It is noteworthy that the rise in fat storage in Stim-DRiPI flies substantially exceeds the food intake increase. Moreover, matching the food intake of Stim-TRiPI On and Off flies still results in body fat accumulation. Importantly, Stim-DRiPI flies have a significantly reduced metabolic rate. Finally, the hyperglycemia observed at day 10, the physical fitness reduction at day 24, and the shortened life span of Stim-TRiPI On flies are associated with obesity development, similar to type 2 diabetes (T2D) 78 , exercise intolerance 79 , and mortality 80 , which are also highly correlated with human obesity. In summary, chronic knockdown of Stim in the adult fat body causes fly obesity through a number of physiological factors culminating in an organismal energy imbalance similar to mammalian adiposity.
Chronic knockdown of Stromal interaction molecule causes adipose tissue dysfunction. Our study highlights the critical roles played by Stim in interaction with Akh/AkhR signaling and insulin signaling in fly fat body tissue. Reduced expression of Mdh1 and Gprk2 suggests impaired Akh/AkhR signaling in the fat body of Stim-TRiPI flies. Mammalian MDH1 has been linked to glycolysis in cells with mitochondrial dysfunction 81 . Notably, obese Stim-TRiPI On flies display normal glycogen storage and mobilization during starvation. Similar findings have been made in Akh A , Akh AP , and AkhR 1 mutant fly larvae 82 and adult flies, although their capability to mobilize glycogen is weakly impaired 41 . A possible explanation is that flies employ corazonin, a starvation-responsive pathway complementary to Akh, to utilize glycogen 83 . In addition to storage glycogen, the reduced expression of genes involved in lipolysis predicts an impairment of starvation-induced storage lipid mobilization. Indeed, obese Stim-TRiPI flies display an abnormal lipid mobilization profile under starvation and die with residual fat resources. Similarly, impaired lipid mobilization is also observed in flies with a loss-of-function mutation in the TAG lipase gene bmm 42 and in flies lacking either InsP3R 84 or AkhR 33 . Consistently, loss-of-function of STIM1/2 in mammalian cells also impairs lipolysis via down-regulation of cAMP 26 . Moreover, decreased catecholamine-stimulated lipolysis has been identified in obese human individuals 85 . Collectively, our results show that the fat body tissue of obese Stim-RiPI On flies is resistant to Akh signaling, which drives the obesity development.
Moreover, our study supports the possibility of modeling T2D in adult flies. Obese Stim-TRiPI flies show reduced expression of the glucose clearance gene Hex-C, whose mammalian homolog is also suppressed in T2D patients 86 . We also provide evidence that obese Stim-TRiPI flies have hyperglycemia, impaired insulin signaling in fat body tissue, and larger lipid droplets. Similar features were also described in fly larvae reared on a high-sugar diet 36,87 , which resemble mammalian insulin resistance 88 . Regarding the unchanged circulating dIlp-2 levels in obese Stim-DRiPI flies, insulin-like peptide secretion might be affected by concomitant knockdown of Stim in the insulin-producing cells of Stim-DRiPI flies, mediated by the ubiquitous driver daGS; further investigation of circulating insulin levels in obese Stim-DRiPI flies using tissue-specific drivers is needed in the future. Interestingly, the indicators of insulin signaling impairment mentioned above occur at a later stage of Stim-RiPI obesity development and are accordingly possibly a consequence of Stim-TRiPI On-mediated fat gain, which also supports the concept that obesity compromises insulin signaling.
Metabolic fat body dysfunction drives organismal energy imbalance. Apart from the specific role of the fat body in storage lipid handling and glucose clearance, we show that chronic knockdown of Stim in this organ remotely promotes Akh secretion from the fly CC neuroendocrine cells, which leads to hyperphagia. Our RNAseq and gene expression analyses indicate a list of genes encoding candidate hormones or secreted proteins. Among them, CCHa2, daw, and Lst have been shown to function as hormones that regulate insulin-like peptide secretion 54,55,57 . In addition, CCHa2, daw, and Lst are also regulated by Akh overexpression in the opposite direction. Whether differential expression of these genes mediates the (mis)communication between the fat body and the CC cells is currently unknown. Nevertheless, the communication between the fat body and the CC cells is essential for the food intake increase as well as the further obesity development induced by long-term knockdown of Stim. Interestingly, a study provided evidence that muscle tissue in flies communicates with the CC cells to control Akh secretion via the myokine Unpaired2 (Upd2) 89 . Upd2 had previously been shown to act as an adipokine, which signals the fed state from the fat body. Unlike mammalian leptin, Upd2 remotely acts on insulin-producing cells in the central brain to regulate insulin secretion but not food intake 90 . Recently, Akh mRNA expression was shown to be regulated by a gut-neuronal relay via the midgut-secreted peptide Bursicon α in response to nutrients 91 . Given that the transcription of Akh is unaffected in Stim-RiPI On flies, identifying the adipokine that directly or indirectly regulates Akh release to affect food intake in the Stim-RiPI fly obesity model requires future research efforts.
In conclusion, our work introduces RNAi Pulse Induction as a novel in vivo paradigm for chronic, tissue-specific gene interference. RiPI makes essential genes accessible to long-term functional analysis in the adult fly, as exemplified here by establishing a Drosophila obesity model caused by chronic knockdown of Stim in the adult fat body. Moreover, this study reveals that the fat body integrates the tissue-autonomous and the systemic branches of Akh signaling: by regulation of lipid mobilization via SOCE in the fat body, and possibly by remote control of Akh secretion from the CC cells. Recently, the evolutionarily conserved role of SOCE in controlling energy metabolism has attracted the interest of mammalian studies 26,27 . While Akh is not structurally conserved in humans, there is a growing number of remotely controlled orexigenic peptide hormones in mammals, with asprosin being one of the latest additions 92,93 . Collectively, our findings in the fly add further evidence for the existence of conserved regulatory principles in animal energy homeostasis control emanating from SOCE signaling in fat storage tissues.
Material and Methods
Fly stocks and husbandry. The following Drosophila fly stocks were used in this study: StimRNAi1 94 , ts-FB- (see Table S1).
Quantification of body fat, glycogen, and protein levels. Unless stated otherwise, three to six replicates of each condition (typically five adult male flies per replicate) were collected. A coupled-colorimetric assay for TAG equivalents was carried out as previously described 97 . Glycogen levels were determined with the method described in Gáliková et al. 41 . Protein levels for normalization of body fat and glycogen content were quantified with the bicinchoninic acid (BCA) assay as described in Gáliková et al. 58 .
Fly body weight measurement. Three to six replicates of five adult males each were transferred into a 1.5 mL Eppendorf tube and the total weight was determined using a Sartorius Microbalance MC5 (Sartorius AG, Göttingen, Germany). For dry weight, the open tubes containing flies were incubated at 65 °C for 24 hours. Empty tube weights, determined beforehand by averaging three separate measurements, were subtracted from the total weight to obtain the wet and dry weights of the flies. The experiment was repeated twice.
Food intake and physiological assays. To quantify adult fly food intake, a capillary feeding (CAFE) assay was performed according to Ja et al. 98 with slight modifications 31 . At least 36 flies per condition were assayed for food intake analysis. The following physiological assays were carried out based on previously described methods with minor modifications: starvation resistance assay 42 , startle-induced climbing assay 41 , and metabolic rate assay 37 ; for further details see supplementary information.
Ex vivo adult fly fat body tissue imaging, lipid and cell area quantification. Three to five flies per condition were analyzed. The experiment was done as described in Baumbach et al. 25 . For further details, refer to the supplementary information.
Thin layer chromatography (TLC). Four replicates (five adult males per replicate) were collected for TLC, which was done as described in Baumbach et al. 25 with minor modifications. Experiments were repeated twice. Details are described in the supplementary information.
RT-qPCR. Three to six independent biological replicates were used. RT-SYBR qPCR was done as described in Beller et al. 31 . RT-Taqman qPCR followed the instructions of the kit manufacturer. Details of quantitative RT-qPCR, including primer sequences, are described in the supplementary information.
RNAseq analysis of adult fly fat body tissues.
Abdominal fat body tissues of adult male flies at day 10 and 11 after Stim-TRiPI On/Off induction were dissected in cold PBS buffer, snap frozen in liquid nitrogen, and immediately stored at −80 °C. Dissected fat body tissues of about 20 flies were pooled as one biological replicate per condition of each Stim-TRiPI On/Off experiment. Three independent biological replicates of fat body tissues were used for total RNA isolation using a commercial RNA extraction kit (Norgen BIOTEK, Cat. #: 36200). Typical RNA samples contained at least 1 μg of RNA with concentrations around 50 ng/μL. RNA sample quality analysis, library preparation, sequencing, sequence data trimming, mapping, and reads counting were performed by the Max Planck-Genome-centre Cologne, Germany (http://mpgc.mpipz.mpg.de). Further details on RNAseq quality control, gene differential expression analysis and gene ontology analysis are described in supplementary information.
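The differential expression analysis itself is detailed in the supplementary information; purely as an illustration of the type of comparison involved, a minimal Python sketch of a simplified workflow (counts-per-million normalisation, per-gene Welch t-test between On and Off replicates, Benjamini-Hochberg correction) is given below. The count matrix, sample layout and thresholds are invented for illustration and do not reproduce the authors' actual pipeline.

import numpy as np
from scipy import stats

# Hypothetical count matrix: rows = genes, columns = six samples
# (three Stim-TRiPI On and three Off replicates, as in the study design).
rng = np.random.default_rng(0)
counts = rng.poisson(lam=50, size=(1000, 6)).astype(float)
on_idx, off_idx = [0, 1, 2], [3, 4, 5]

# Counts-per-million normalisation, then log2 with a pseudocount.
cpm = counts / counts.sum(axis=0) * 1e6
log_expr = np.log2(cpm + 1.0)

# Per-gene Welch t-test between conditions and log2 fold change.
t_stat, p = stats.ttest_ind(log_expr[:, on_idx], log_expr[:, off_idx],
                            axis=1, equal_var=False)
log2fc = log_expr[:, on_idx].mean(axis=1) - log_expr[:, off_idx].mean(axis=1)

# Benjamini-Hochberg adjustment of the raw p-values.
order = np.argsort(p)
bh = p[order] * len(p) / np.arange(1, len(p) + 1)
bh = np.minimum.accumulate(bh[::-1])[::-1]
fdr = np.empty_like(p)
fdr[order] = np.clip(bh, 0.0, 1.0)

hits = (fdr < 0.05) & (np.abs(log2fc) > 1.0)
print(f"{hits.sum()} genes pass the illustrative thresholds")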
Western blot analysis. Three independent biological replicates were used for the experiments.
Akh and dIlp-2 immunostaining and confocal imaging. Akh and dIlp-2 immunohistochemistry was performed as described 25 with modifications described in the supplementary information.
Circulating sugar level measurement. Circulating sugar level of adult flies was determined as described 41 with modifications described in supplementary information.
Statistical analysis. Unless stated differently, data were analysed by two-tailed unpaired Student's t-tests using Microsoft Excel 2011. Statistical significance of the differences between samples is represented by asterisk: *p < 0.05, **p < 0.01, ***p < 0.001. All error bars represent standard deviations of the mean.
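For readers who prefer a scripted equivalent of this analysis, the same two-tailed unpaired Student's t-test can be run in Python with scipy, as sketched below; the measurement values are invented, and only the test itself mirrors the procedure described above.

from scipy import stats

# Hypothetical body fat measurements (µg TAG per µg protein) for two groups.
control = [0.52, 0.48, 0.55, 0.50, 0.47]
stim_on = [0.81, 0.76, 0.88, 0.79, 0.84]

# Two-tailed unpaired Student's t-test (equal variances, as in the classic test).
t_stat, p_value = stats.ttest_ind(control, stim_on, equal_var=True)

# Map the p-value onto the asterisk convention used in the figures.
stars = ("***" if p_value < 0.001 else
         "**" if p_value < 0.01 else
         "*" if p_value < 0.05 else "n.s.")
print(f"t = {t_stat:.2f}, p = {p_value:.4f} ({stars})")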
Spontaneous Hyphema from Iris Microhemangiomatosis in an Elderly Patient with Hypertensive Crisis
Background Iris microhemangiomatosis is a rare vascular iris tumor, with potential severe complications such as increased intraocular pressure (IOP). We aim to describe a case report of a patient presenting with hyphema secondary to iris microhemangiomatosis triggered by excessively high blood pressure. Case Presentation A 74-year-old woman was treated for hypertensive crisis. After her high blood pressure had been controlled and stabilized, she was discharged home. However, the same day, she complained about an acute decrease in vision in her left eye. Best corrected visual acuity was 20/20 in the right eye and 20/200 in the left eye. On biomicroscopy, a hyphema was seen. Iris neovascularization was absent; IOP and fundus examination were normal. After spontaneous resolution of the hyphema, a fluorescein angiography of the anterior segment was performed, which revealed bilateral subtle early hyperfluorescence with late staining scattered at the pupillary margin. The patient was diagnosed with iris microhemangiomatosis. During the follow-up of 24 months, the blood pressure was stable and well controlled. The patient did not experience any recurrent hemorrhage. Discussion and Conclusion Spontaneous hyphema is the most common complication of iris vascular tumors. We report the occurrence of a spontaneous hyphema triggered by uncontrolled blood pressure in a patient with a very rare condition, i.e., iris microhemangiomatosis. In order to avoid complications of microhemangiomatosis such as uncontrolled glaucoma or recurrent bleeding requiring surgery, blood pressure should be monitored closely and controlled.
Background
Tumors of the iris have a relatively low incidence, and vascular tumors of the iris are even more uncommon, comprising 2% of all iris tumors. Shields et al. [1] recently published the largest series of 3,680 cases of iris tumors. Only 57 eyes presented vascular iris tumors, including racemose hemangioma, cavernous hemangioma, capillary hemangioma, varix, and microhemangiomatosis.
Iris microhemangiomatosis is usually characterized by bilateral small clusters of tightly coiled blood vessels at the pupillary margin [1][2][3]. Iris microhemangiomatosis is typically found in elderly adults. The etiology seems to be idiopathic.
We aim to describe a case report of an uncommon trigger for hyphema in an elderly patient with iris microhemangiomatosis.
Case Presentation
A 74-year-old woman presented complaining of chest pain and shortness of breath. At the emergency room she was diagnosed with a hypertensive crisis, with a measured blood pressure of 280/140 mm Hg. After her antihypertensive medication was adjusted, she was discharged home.
Shortly after her discharge, she presented to our eye institute complaining of acutely decreased vision in her left eye. Prior ocular history was negative. She reported that her vision had dropped immediately after getting up that day. On her first visit to our institution, her best corrected visual acuity (BCVA) was 20/20 in her right eye and 20/200 in her left eye. Biomicroscopy examination revealed a dense hyphema arising from active bleeding (Fig. 1a, b), without evidence of neovascularization of the iris or the angle on gonioscopy. Moreover, a small iris nevus was seen in the upper iris. The blood was clearly coming from the pupillary border. Examination of the right eye was normal. Intraocular pressure (IOP) was 15 mm Hg in both eyes. Fundus examination showed no abnormalities. Macular optical coherence tomography (OCT) (Cirrus HD-OCT, Zeiss, Germany) was normal. Fundus fluorescein angiography (FA) showed no retinal vascular alterations. As the IOP was normal, the patient was discharged with topical treatment of prednisolone acetate 1% (PredForte) and cyclopentolate 1% TID and scheduled for follow-up.
The patient was seen in our clinic 15 days later and the hyphema had resolved. An iris FA was performed, showing subtle early hyperfluorescence with late staining scattered at the pupillary margin bilaterally, consistent with ectatic dilatation of the iris vessels (Fig. 2a, b, 3a, b). The patient was diagnosed with iris microhemangiomatosis.
The patient has been followed for over 24 months without recurrence of the hyphema and she had a BCVA of 20/20 at distance in the affected eye. Blood pressure was well controlled during the follow-up.
Discussion and Conclusion
Spontaneous hyphema is the most common complication of iris vascular tumors. Iris vascular lesions represent 2% of all iris tumors, and microhemangiomatosis is a very uncommon entity (5% of all vascular iris tumors) [1]. The small vascular clusters at the pupillary border carry the risk for spontaneous hyphema and secondary increase in IOP [2,4,5].
We hypothesize that the hypertensive crisis in the patient presented here triggered the occurrence of the hyphema. Notably, all reported cases of iris microhemangiomatosis were diagnosed in patients older than 60 years. Therefore, we strongly believe that blood pressure should be measured in all patients presenting with spontaneous hyphema. As this diagnosis is difficult to establish on clinical examination without the use of iris FA, the condition might be underdiagnosed.
The differential diagnosis of spontaneous hyphema includes iris neovascularization, iris or ciliary body neoplasms, uveitis-glaucoma-hyphema syndrome, and blood dyscrasias [6]. While rubeosis presents with vessel leakage on FA, iris microhemangiomatosis shows dilated ectatic vessels without any leakage.
Another important tool in the workup of patients with spontaneous hyphema is ultrasound biomicroscopy in order to rule out neoplasms of the iris and ciliary body. In the current case, ultrasound biomicroscopy was not performed as the cause of the hyphema was demonstrated on FA.
Uncontrolled high blood pressure might produce hyphema due to iris microhemangiomatosis. Moreover, in this case report, bleeding occurred only once. However, it has been reported that spontaneous hyphema from iris microhemangiomatosis can be recurrent and may require treatment. This leads us to the conclusion that, in order to avoid complications of microhemangiomatosis such as uncontrolled glaucoma or recurrent bleeding requiring surgery, blood pressure should be controlled.
Statement of Ethics
There was no need for ethics approval or consent.
Disclosure Statement
The authors declare that they have no competing interests.
Funding Sources
No funding or grant support.
Author Contributions
P.J.N., D.Z., A.L., and M.I. were involved in data analysis, writing and reviewing the paper. P.J.N. was also involved in data acquisition. All authors read and approved the final manuscript.
Pedro J. Nuova and Dinah Zur contributed equally to this work.
Differential diagnosis of syndromic craniosynostosis: a case series
Purpose Syndromic craniosynostosis is a rare genetic disease caused by premature fusion of one or multiple cranial sutures combined with malformations of other organs. The aim of this publication is to investigate sonographic signs of different syndromic craniosynostoses and associated malformations to facilitate a precise and early diagnosis. Methods We identified thirteen cases with a suspected prenatal diagnosis of syndromic craniosynostosis at our department in the period 2000–2019. We analyzed the ultrasound findings, MRI scans, genetic results, mode of delivery, and postnatal procedures. Results Eight children were diagnosed with Apert syndrome, two with Saethre Chotzen syndrome, one with Crouzon syndrome, and one with Greig cephalopolysyndactyly syndrome. One child had a mutation p.(Pro253Leu) in the FGFR2 gene. We identified characteristic changes of the head shape as well as typical associated malformations. Conclusion Second trimester diagnosis of syndromic craniosynostosis is feasible based on the identified sonographic signs. In case of a suspected diagnosis, genetic, neonatal, and surgical counseling is recommended. We also recommend offering a fetal MRI. The delivery should be planned in a perinatal center.
Introduction
Craniosynostosis is the result of a premature fusion of one or multiple cranial sutures. Depending on the affected sutures, the head can develop asymmetrically, which is detectable in utero with prenatal ultrasound. Postnatally, surgery may be necessary in case of an increase in intracranial pressure.
Isolated craniosynostosis is mostly sporadic, with an incidence of 1:2000-2500 [1]. In contrast, syndromic craniosynostosis usually involves multiple sutures combined with malformations of other organs [2]. Syndromes most frequently associated with craniosynostosis are Apert-, Crouzon-, Pfeiffer-, and Saethre Chotzen syndrome. Of these, the Apert syndrome is the most common with a prevalence of 1:100,000 [3]. A mutation of Fibroblast Growth Factor Receptor 2 (FGFR2) gene causes the autosomal dominantly inherited Apert syndrome [4]. However, most individuals with Apert syndrome develop the disorder as the result of a de novo mutation in the FGFR2 gene.
The Apert syndrome causes variable deformation of the skull due to bicoronal craniosynostosis, midface hypoplasia and complex syndactyly of hands and feet [5,6]. It can also be associated with abnormalities of the central nervous system, including malformations of the corpus callosum, the limbic system and abnormal gyration [7]. Neurological developmental disorders are possible and mostly lead to mild, and rarely moderate to severe, impairment [6]. After birth, high intracranial pressure can indicate a need for surgery.
The autosomal dominant Crouzon syndrome is also caused by mutations in the FGFR2 gene [8]. Similar to Apert syndrome, the clinical features of Crouzon syndrome are a tall, flattened forehead caused by bicoronal craniosynostosis, and midface hypoplasia. The degree of the malformations is milder, and the limbs are usually not affected [9]. Bicoronal craniosynostosis and syndactyly characterize Pfeiffer syndrome, which is caused by autosomal dominant mutations of the FGFR1 or FGFR2 gene [10,11]. Additional sutures can be affected, and skeletal, central nervous system and gastrointestinal abnormalities can occur [12]. Saethre Chotzen syndrome is characterized by mild craniosynostosis of different cranial sutures and syndactyly and is caused by autosomal dominant mutations of the TWIST gene and the FGFR2 gene [13,14].
The focus of our case series is to identify the contribution of prenatal ultrasound to an early and precise prenatal diagnosis of syndromic craniosynostosis [5]. In case of a suspected diagnosis, genetic counselling and testing, including the newest methods of whole genome/exome sequencing and/or targeted panel diagnostics, is recommended. A precise differentiation between syndromic and nonsyndromic causes is paramount to allow for specific counselling [15]. A fetal MRI should also be performed to confirm the diagnosis and identify possible central nervous malformations [5].
We aim to describe the prenatal sonographic signs and their contribution in the diagnostic work up in cases with suspected syndromic craniosynostoses. We aim to raise awareness for this disease complex to facilitate a precise and early diagnosis which is essential for perinatal management and the interdisciplinary counseling of the parents.
Materials and methods
We identified thirteen cases of syndromic craniosynostosis in the Viewpoint (GE, Solingen, Germany) and the SAP (Walldorf, Germany) patient databases of the Department of Obstetrics and the Department of Pediatric Surgery at Charité-Universitätsmedizin Berlin in the years 2000-2019. We searched for keywords indicating abnormal biometric parameters of the head, brain anomalies or both, and abnormal findings of the limbs. We furthermore used exact keyword searches for Apert syndrome, craniosynostosis, Crouzon-, Saethre Chotzen- or Pfeiffer syndrome.
This search resulted in 389 cases of abnormal findings. A subsequent manual review identified syndromic craniosynostosis in thirteen fetuses.
In addition, we compared the results with the surgery records of the Department of Pediatric Neurosurgery. No additional cases were identified. After retrieving the patients, we analyzed ultrasound findings, MRI scans, genetic results and the mode of delivery and postnatal procedures.
Results
Between 2000 and 2019, we identified thirteen pregnancies with high suspicion of syndromic craniosynostosis in our department. A detailed description of the sonographic findings is found in Table 1. In ten cases, Apert syndrome was suspected due to specific sonographic features. Molecular genetic testing revealed a p.(Pro253Arg) mutation in the FGFR2 gene and confirmed the diagnosis in five cases. However, in one case, the postnatal genetic test detected a mutation in the GLI3 gene, which causes Greig cephalopolysyndactyly syndrome, a condition associated with craniosynostosis [16]. In another case, the genetic test revealed a p.(Pro253Leu) mutation in the FGFR2 gene. Subsequent testing of the parents identified the same mutation in the father, who had not been diagnosed previously. Three patients did not give consent for genetic testing.
Two children were diagnosed with Saethre Chotzen syndrome, one child with Crouzon syndrome. The gestational age when the diagnosis was suspected was between 20 + 1 and 33 + 4 weeks of gestation. Nine patients received the diagnosis in the second trimester, four patients in the third trimester. In all cases after 2017, we recommended a fetal MRI. This was conducted in three cases and confirmed the sonographic results.
In the fetuses with Apert syndrome, typical sonographic features were frontal bossing (5/8 cases) as well as a cloverleaf skull (4/8) (Table 1 and Figs. 1a, b, 2a-e). In two cases, the examiner described a turricephaly, a tall head shape caused by coronal craniosynostosis (Tables 1, 2). In all cases, a prenatal ultrasound revealed a diagnosis of syndactyly (Fig. 1c).
Of the two Saethre Chotzen syndrome cases, in one the diagnosis of a bicoronal craniosynostosis was made before birth, together with identification of a saddle nose and a flat profile (Table 1); the diagnosis of syndactyly of hands and feet was made after birth. In the second fetus with Saethre Chotzen syndrome, the only anomaly detected was a prominent forehead.
The patient with fetal Crouzon syndrome presented with an abnormal shape of the head with a flat occiput, depressed frontoparietal bones (Fig. 3) and protruding eyes, and a small thorax with short ribs ( Table 1).
The fetus with Greig cephalopolysyndactyly syndrome exhibited hypertelorism, agenesis of the corpus callosum, a right-sided aortic arch, and polydactyly (Table 1, Figs. 4a, 5). Postnatally, these findings were confirmed (Figs. 4b, c, 5b); furthermore, additional malformations identified in the child with Greig cephalopolysyndactyly syndrome included a subaortic ventricular septal defect and a deformation of the feet.
[Figure caption fragment: The prenatal MRI confirms the findings (e). After birth, a scaphocephaly with a long and narrow skull and high forehead is seen (d). The genetic examination showed a mutation in the FGFR2 gene (Pro253Leu); the father had the same mutation. At this amino acid position the pathogenic mutation p.Pro253Arg is located, which leads to Apert syndrome.]
After the prenatal diagnosis of a fetal syndromic craniosynostosis and after extensive interdisciplinary counselling, seven couples decided to terminate the pregnancy. In all of these cases, a fetal Apert syndrome had been diagnosed. Of these couples, four opted for an autopsy of the fetus. The pathoanatomical examinations and the radiological fetograms confirmed the sonographic findings. Of the other children, five were delivered by cesarean section and one child was delivered vaginally. In one case the cesarean section was indicated due to fetal breech position. Three patients decided to have a preventive cesarean section and one patient had a repeat cesarean section.
To analyze the long-term postnatal outcome, data were available for six children who were treated at our department for pediatric neurosurgery. One child (born 2011) with Crouzon syndrome received a decompressive craniectomy in his first year of life due to high intracranial pressure. Following that, multiple surgeries (the last in 2020) for fronto-orbital remodeling and multiple corrections of craniofacial defects were performed. This child also suffers from hearing loss due to aural atresia and a subsequent speech development disorder.
Of the two children diagnosed with Saethre Chotzen syndrome, one (born 2014) underwent fronto-orbital remodeling ten months after birth. The other (born 2019) was treated with a strip craniectomy four months after birth. As of now, no surgery has been indicated for the child with Apert syndrome (born 2019). The child diagnosed with the p.(Pro253Leu) mutation (born 2019, Fig. 1) had a biparietal strip craniectomy two months after birth due to raised intracranial pressure. The child with Greig cephalopolysyndactyly syndrome had foot surgery to correct the hexadactyly and the pes supinatus (Fig. 5).
Discussion
Syndromic craniosynostosis is a rare disease complex that shows characteristic features detectable in prenatal ultrasound. An early precise diagnosis is important for the interdisciplinary counseling of the parents and the perinatal management. Our study confirms that a prenatal detection of syndromic craniosynostosis in the second trimester is possible. In our case series, the diagnosis was suspected at the time of the second trimester screening in 9/13 patients and in 4/13 patients between 27 + 0 and 33 + 4 weeks of gestation. However, the diagnosis can be challenging as the extent of the skull deformity can vary and the standardized measurements of the head (biparietal diameter and head circumference) are not necessarily outside the normal range [17]. To confirm the diagnosis, it can be helpful to use a three-dimensional ultrasonic skeletal imaging mode to image the skull with the sutures (Figs. 6, 7, 8). Diagnosis is, however, mostly feasible without the skeleton mode, as we used B-mode and conventional 3D ultrasound in our cases to establish the diagnosis.
Table 2. Characteristic findings in craniosynostosis syndromes.
Apert syndrome. Affected sutures: bicoronal [5,6]. Skull and face: acrobrachycephaly, flat occiput, hypertelorism, flat forehead, midface hypoplasia, asymmetric cranial shape [24], turricephaly [5]. Associated findings: syndactyly; central nervous malformations (dysgenesis of corpus callosum, abnormal gyration) [5,24]; neurological developmental disorders possible.
Crouzon syndrome. Affected sutures: variable, often coronal and sagittal sutures [24]. Skull and face: acrobrachycephaly, (symmetric) midface hypoplasia, hypertelorism [24]. Associated findings: extremities not involved; no neurological developmental disorders.
Saethre Chotzen syndrome. Affected sutures: uni-/bilateral coronal sutures, other sutures possible [25]. Skull and face: plagiocephaly, brachycephaly, acrocephaly, deep hairline, small ears, high forehead, asymmetric face (overall milder than Apert syndrome) [25]. Associated findings: syndactyly possible, brachydactyly, rarely neurological developmental disorders, skeletal and cardiac malformations [25].
Greig cephalopolysyndactyly syndrome. Affected sutures: frontal and sagittal sutures [16]. Skull and face: macrocephaly, prominent forehead, hypertelorism [26], trigonocephaly [16]. Associated findings: polydactyly, broad thumbs/big toes, cutaneous syndactyly, neurological developmental disorders possible [26], dysgenesis of corpus callosum [16].
Pfeiffer syndrome. Affected sutures: bilateral coronal and lambdoid sutures, rarely the sagittal suture [27]. Skull and face: flat occiput, high forehead, hypertelorism, cloverleaf skull, variable midface hypoplasia [12,27]. Associated findings: broad thumbs/big toes with deviation, syndactyly possible, neurological developmental disorders possible [12,27].
Value of sonographic signs: skull shapes
In accordance with the literature, an abnormal shape of the skull was the leading sonographic sign in our case series. The basic biometric parameters may be out of range: depending on the affected sutures, the biparietal diameter (BPD) and the cephalic index (CI) can be raised or lowered [17]. [Figure caption fragment: After the birth, a high forehead with down-slanting palpebral fissures and a low nose root is seen.] In our cohort, all cases with Apert syndrome exhibited an abnormal shape of the skull (Figs. 1, 2). We detected frontal bossing in 5/8 cases, a cloverleaf skull in 4/8 cases, a turricephaly in 2/8 cases and a dolichocephaly in 1/8 cases (see Table 1). In one of the fetuses with Apert syndrome, agenesis of the corpus callosum was diagnosed. Malformations of midline structures such as dysgenesis of the corpus callosum are reported in up to 11% of cases, as are alterations of the temporal lobe [7,18].
An abnormal shape of the skull was the leading ultrasonographic finding in the other cases of syndromic craniosynostosis as well. In one case of Saethre Chotzen syndrome, the fetus presented with turricephaly with a flat profile and saddle nose. In the other case, the head was small without other sonomorphological alterations. In the fetus with Greig cephalopolysyndactyly syndrome, hypertelorism and agenesis of the corpus callosum were noted, and the fetus with Crouzon syndrome had protruding bulbi of the eyes.
In accordance with other published case series, the diagnosis of syndromic craniosynostosis was suspected in most cases in the second trimester, as the skull deformation develops at this time [14]. Owing to premature fusion of sutures, the underlying structures are harder to visualize. This effect has been identified as an early indirect sign of craniosynostosis called the 'brain shadowing sign'. It can be noted prior to changes of the skull shape and is also reported in fetuses with mild changes [19]. In this retrospective study, cases of syndromic craniosynostosis were detected at 25 weeks of gestation. However, the brain shadowing sign is not specific for craniosynostosis and can also be seen in fetal head molding or open spina bifida. The authors also report that, in accordance with our results, an abnormal head shape was seen in 23 of 24 cases. Other signs were facial abnormalities, syndactyly and ventriculomegaly [19]. Furthermore, the cranial bones with the sutures and facial alterations can be seen in more detail with three-dimensional ultrasound, especially with the 'skeleton mode' feature, as compared to B-mode (Figs. 6, 7, 8) [20]. In the skeleton mode, the premature fusion of the suture can be imaged exactly (Fig. 8).
Value of sonographic signs: other malformations
Other organ malformations may occur in syndromic craniosynostosis. A thorough sonographic examination of hands and feet is mandatory, as malformations of the extremities are common and may be used to distinguish between isolated and syndromic craniosynostosis. The type of limb abnormality indicates which type of syndromic craniosynostosis is likely. In accordance with the literature, in all our cases of Apert syndrome, syndactyly of the upper and partly of the lower extremities was diagnosed by ultrasound (Table 1 and Fig. 1c) [17,19,20]. In one case of fetal Saethre Chotzen syndrome, syndactyly was diagnosed only postnatally. In another case series, anal atresia and a patent ductus arteriosus were detected postnatally in a child with Saethre Chotzen syndrome [19]. In accordance with Hurst et al., we diagnosed polydactyly in a fetus with Greig cephalopolysyndactyly syndrome [16]. Besides the thorough sonographic assessment of the skull, the central nervous system and the limbs, it is important to examine the other organ systems as well, to detect possible accompanying malformations which might have an impact on the child's prognosis. Our case of Greig cephalopolysyndactyly syndrome was associated with a right aortic arch. A subaortic ventricular septal defect was additionally diagnosed postnatally. In the literature, congenital heart defects described in Greig cephalopolysyndactyly syndrome include ventricular septal defects, atrial septal defects and patent ductus arteriosus as well as double outlet right ventricle [16]. Congenital heart defects in combination with craniosynostosis and polydactyly can also be seen in Pfeiffer syndrome and the rare Carpenter syndrome [21]. The rare Antley Bixler syndrome is characterized by craniosynostosis, humero-radial synostosis, a curved femur and contractures of the joints. Cardiac and urogenital defects are possible [22].
Value of fetal MRI
In our case series, a fetal MRI was performed in three cases. This examination confirmed the sonographic findings and identified no further abnormalities of the central nervous system (Fig. 2e). However, a study by Rubio et al. compared the results of ultrasound examinations and fetal MRI after the diagnosis of syndromic craniosynostosis. The MRI detected two cases of dysgenesis of the corpus callosum and one tethered cord syndrome that were not detected with ultrasound [5]. Malformations of the central nervous system can cause neurodevelopmental disorders and are important when counseling the parents. In conclusion, a fetal MRI should be offered to all patients with suspected syndromic craniosynostosis. However, a precise prenatal prognosis regarding developmental disorders is not possible.
Value of molecular genetic tests
Genetic testing must be offered to the patients in order to distinguish between isolated and syndromic craniosynostosis and to confirm the entity of the craniosynostosis. A genetic examination of the parents can be performed, since not only sporadic mutations but also autosomal dominant inheritance with variable symptoms is possible. In our study, one parent had previously been diagnosed with Saethre Chotzen syndrome. In another case of suspected Apert syndrome, genetic testing was conducted. The result showed a p.(Pro253Leu) mutation in the FGFR2 gene not only in the DNA of the fetus, but also in the DNA of the father, who had not previously been diagnosed with craniosynostosis.
Mode of delivery
In our case series, five patients had a cesarean section and one patient a vaginal birth. Similar to our results, Harada et al. described a high rate of cesarean sections (73%) in patients with fetal craniosynostosis [17]. If the fetus is in cephalic position and the head circumference is not raised excessively, there is no absolute indication for a cesarean section. The patients should be informed about the higher risk of arrested labor and emergency cesarean section in case of significant skull deformities. The delivery should be planned in a perinatal center to ensure ideal postnatal care of the newborn, especially regarding airway management. Prenatally, a pediatric neurosurgeon should be consulted to inform the parents about possible operative procedures.
After birth, a cranial ultrasound and a cranial MRI should be performed. Owing to the higher prevalence of cardiac defects, echocardiography is recommended, as well as an examination by a pediatric surgeon.
Limitations
Due to the low incidence of syndromic craniosynostosis, we could only analyze thirteen cases with suspected craniosynostosis. Another limitation is the retrospective character of the study. A comparison between ultrasound and MRI is limited, as we did not perform an MRI in all cases.
Conclusion
In conclusion, syndromic craniosynostosis is a rare disease. A prenatal diagnosis in the second trimester is feasible based on the sonographic signs described here. An early and precise diagnosis should be achieved to allow for targeted counselling. In case of a suspected diagnosis, a genetic, neonatal and surgical workup is recommended and a fetal MRI should be conducted. Owing to midface hypoplasia and alterations of the upper airway, the newborns can be respiratory compromised. Consequently, the delivery should be planned in a perinatal center [23].
Author contribution TC: manuscript writing, data collection, data analysis, project development. DH: manuscript editing, data analysis. WH: manuscript editing, data analysis. SV: project development, manuscript editing.
Funding Open Access funding enabled and organized by Projekt DEAL.
Availability of data and material
The data are available on request from the corresponding author.
Code availability Not applicable.
Conflict of interest
The authors declare that they have no conflict of interest.
The distribution of the dwarf succulent genus Conophytum N.E.Br. (Aizoaceae) in southern Africa
Of the four main biomes in the region (the Succulent Karoo, Nama Karoo, Desert and Fynbos biomes), 94% of Conophytum taxa are found only in the Succulent Karoo biome and predominantly (88% of taxa) within South Africa. Endemism within specific bioregions is a feature of the genus and ~60% of taxa are endemic to the Succulent Karoo. Approximately 28% of all taxa could be considered point endemics. Whilst the genus has a relatively wide geographical range, we identify a pronounced centre of endemism in the southern Richtersveld. Conclusion: The genus Conophytum can be used as a good botanical model for studying patterns of diversity and speciation in the Succulent Karoo biome, the effects of climate change on dwarf succulents, and for informing conservation planning efforts.
Introduction
The environmental conditions in south-western Africa have resulted in a unique flora well adapted to those conditions. The resulting flora is dominated by a large number of leaf succulents, notably members of the Aizoaceae and Crassulaceae (Cowling & Hilton-Taylor 1999). The area is strongly species rich, with approximately 5000 vascular plant species recorded for the Succulent Karoo biome, an area of 116 000 km² that is recognised as a global biodiversity hot spot (Driver et al. 2003). Within the Aizoaceae, a high degree of speciation is evident, no more so than in the miniature or dwarf succulent genus Conophytum, with 165 recognised taxa of which 108 are recognised at the species level (Hammer & Young in press). The miniaturisation of growth form in leaf succulents (as seen in Conophytum) is an adaptation unique to the region and most evident in the Succulent Karoo biome (Desmet & Cowling 1999a).
In the Succulent Karoo, the combination of high temperatures, low humidity and low cloud cover is characteristic, especially inland from the coastal strip, along which the temperature range is reduced compared to further inland. The biome itself is characterised by low winter rainfall (Desmet & Cowling 1999b), with rainfall levels declining east to west and south to north. Matimati et al. (2012) concluded that non-rainfall moisture is a vital element in sustaining dwarf succulents. Fog makes a substantial and reliable contribution to total moisture levels, especially on the west coast. The contribution made by dew to moisture availability is less well understood, although it is much more widespread and less localised than the effects of fog (Matimati et al. 2012).
The highest levels of floristic species diversity within the region, especially amongst dwarf succulents, are commonly associated with koppies or rocky outcrops, which are often small in extent or isolated (Desmet & Cowling 1999a). Such biodiversity is coupled with high levels of endemism, with approximately 40% of plants restricted to the Succulent Karoo biome. The flora is further characterised by high numbers of local or range-restricted endemics (Cowling & Hilton-Taylor 1994; Hilton-Taylor 1996). At its extreme, point endemism is most pronounced amongst succulents, especially members of the Aizoaceae, notably in Conophytum (e.g. Desmet, Ellis & Cowling 1998; Ihlenfeldt 1994). Plant form varies widely within the dwarf succulent guild, and several morphologically defined Sections are recognised in Conophytum as a result (Hammer 1993, 2002; Hammer & Young in press).
The genus Conophytum occupies a wide range of habitats, from the quartz-pebble-rich plains of the Knersvlakte through the high, granite-dominated mountains of the Khamiesberg to the quartz inselbergs of Bushmanland and the southern Namib Desert. The plants are rarely found in large or dense communities but are rather widely dispersed amongst other succulent shrubs or occupy discrete habitat niches, for example, small rocky outcrops in large sandy plains, the edges of shallow grit pans, shaded (south-facing) lichen-covered granite slopes or flat gravel plains ( Figure 1). The area occupied by many Conophytum populations is therefore often small, to the extent that such small populations may not always be recognised by current national vegetation mapping.
Flowering may be diurnal (as seen in a majority of taxa) or nocturnal, but in some taxa the distinction between day and night flowering is less evident (Hammer 1993). Pollination itself appears to be non-specialist, relying on moths and pollen wasps for nocturnal and diurnal flowering taxa, respectively (Jürgens & Witt 2014). As such, this would not appear to be a significant limiting factor in the distribution of the genus in south-western Africa. Members of the genus have, however, adapted the timing of flowering to provide a temporal separation from that of the vast majority of other succulents in the region. The bulk of Conophytum taxa flower in the late summer to early autumn period, with just a few flowering in spring (e.g. Conophytum khamiesbergense) or summer. A recent study has shown the presence of several discrete pairs of Conophytum taxa that lie geographically close to each other (within just 2 km) and in which one taxon flowers in the autumn and the other in the spring (Young et al. 2015).
[Figure 1, panels a-d: photographs by Andrew J. Young.]
The aim of this study was to examine the biogeography of members of the genus Conophytum, specifically their association with recognised biomes, bioregions and vegetation units in south-western Africa.
Research method and design
The database of Conophytum locality data (the 'Conobase') used to inform this study has been compiled by the authors over several years. The database consists of more than 2700 individual locality records for the genus, updated to reflect the latest revision of the genus (Hammer & Young in press).
Results
The distribution of the genus Conophytum in southern Africa is shown in Figures 2a and 2b. The vast majority of the 2798 individual records that comprise the database lie within the western part of the Northern Cape region and the northern extent of the Western Cape of South Africa. By contrast, data records are relatively poor for Namibia, primarily as a consequence of the restricted access to the Sperrgebiet and, as a result, the limited number of botanical explorations of this area. The north-south range of the genus spans ~800 km from the small Namibian town of Luderitz in the north to Paarl in the Western Cape of South Africa in the south. In contrast, the genus Conophytum occupies a fairly narrow east-west corridor, typically less than 200 km wide, predominantly corresponding to the area that receives winter rainfall. The genus only extends eastwards to any significant extent in the northern part of Bushmanland and further south in the Rainshadow Valley Karoo. The northern limit of distribution lies within Namibia in the southern Namib region at the northern limit of the winter-rainfall region, and the genus is not found in the Namib Desert region (see also Hammer 1993). The extent of the genus can also be considered by examining the number of individual locality records per Quarter Degree Square (QDS). This is shown in Figure 3a, and as might be expected, the majority of observed populations lie in those areas with relatively easy access, in the vicinity of towns and near roads or tracks.
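To illustrate how per-QDS summaries of this kind can be computed from point locality data, a minimal Python sketch follows. The coordinates are invented, and flooring records into 0.25-degree cells is an assumed approximation of the QDS grid rather than the method actually used to produce Figure 3.

import numpy as np
import pandas as pd

# Hypothetical locality records in decimal degrees (the Conobase holds ~2800).
records = pd.DataFrame({
    "taxon": ["C. bilobum", "C. bilobum", "C. saxetanum", "C. burgeri"],
    "lat": [-29.67, -29.70, -27.95, -29.27],
    "lon": [17.89, 17.91, 16.52, 18.60],
})

# Assign each record to a quarter-degree square by flooring to 0.25-degree cells.
records["qds_lat"] = np.floor(records["lat"] * 4) / 4
records["qds_lon"] = np.floor(records["lon"] * 4) / 4

# Records per QDS (cf. Figure 3a) and taxon richness per QDS (cf. Figure 3b).
per_qds = records.groupby(["qds_lat", "qds_lon"]).agg(
    n_records=("taxon", "size"),
    n_taxa=("taxon", "nunique"),
)
print(per_qds)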
The distribution of diurnal and nocturnal flowering taxa is shown in Figures 2a and 2b Mapping the distribution onto the recognised biomes of the region reveals a very strong association with the Succulent Karoo (Figures 2a and 2b). Almost 83% of all recorded Conophytum populations are located solely within this biome ( Figure 4). The majority of the remaining observations are found within the Fynbos and Desert biomes, with fewer recorded populations in the Nama Karoo biome as it is currently geographically defined.
The occurrence of individual taxa (to subspecies level) across all the region's biomes is shown in Figure 5. Here the close association of the genus with the Succulent Karoo (predominantly in South Africa rather than Namibia) is once again evident with 94% of all recognised Conophytum taxa recorded for this single biome, with 96 separate taxa (i.e. > 60%) endemic to the biome. A number of taxa are found in more than one biome with crossover between the Succulent Karoo and Desert biomes being most pronounced (e.g. Conophytum bilobum ssp. bilobum, Conophytum marginatum ssp. haramoepense and Conophytum lydiae). This is discussed further below. All but eight Conophytum taxa are endemic to South Africa, with the others restricted to south-western Namibia. Just a handful of species (e.g. C. saxetanum, Conophytum loeschianum and Conophytum angelicae ssp. tetragonum) are found on both sides of the Orange River, sometimes with individual subspecies restricted to just one side of the border, for example Conophytum ernstii ssp. ernstii in South Africa and C. ernstii ssp. cerebellum in Namibia.
[Figure 2: the Succulent Karoo biome is shaded.]
Succulent Karoo Biome
The Succulent Karoo alone is home to 149 Conophytum taxa, including 65 separate species. However, within this biome, the distribution of the genus across the recognised bioregions (Mucina & Rutherford 2006; Table 1) and their vegetation units (Table 2) varies greatly. The vast majority of taxa are found within the Namaqualand Hardeveld bioregion (84 taxa, with > 50% being endemic to the bioregion), especially the Namaqualand Klipkoppe Shrubland (SKn1) and Namaqualand Blomveld (SKn3) vegetation units (Table 3). Namaqualand Klipkoppe Shrubland is the single most important vegetation unit for the genus with 66 separate taxa, including 23 taxa endemic to that vegetation unit. Elsewhere within the Succulent Karoo, the Richtersveld bioregion is host to 67 individual Conophytum taxa including 24 endemic to that bioregion. Within the Richtersveld, species richness is highest within the Kosiesberg Succulent Shrubland (SKr12) and Bushmanland Inselberg Shrubland (SKr18) vegetation units. The Umdaus Mountains Succulent Shrubland (SKr16) is also particularly species rich and an important area for endemism. By contrast, the numbers of both recorded Conophytum taxa and endemics in the Knersvlakte, Namaqualand Sandveld, Rainshadow Valley Karoo and, especially, the Trans-Escarpment Succulent Karoo bioregions are very low (Table 1). In Namibia, a majority of taxa and all Namibian endemics are found within the Desert and Succulent Steppe vegetation zone (see Giess 1971), particularly the Mountain Succulent Dwarf Shrubland vegetation unit.
The incidence of point endemism in the genus is very high, and approximately one fifth of all taxa can be categorised as such (Table 3). The majority of these point endemics (defined here as a contiguous population lacking morphological variation lying within an area of < 10 km², though areas are typically much smaller) are found in the Succulent Karoo biome (Figure 6). These are severely range-restricted taxa, and the area of occupancy for such point endemics can be as small as approximately 1000–2000 m² (e.g. in the case of Conophytum jarmilae and Conophytum burgeri).
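As a rough illustration of how an area-of-occupancy criterion such as this can be screened from point records, the Python sketch below estimates a population's footprint from the bounding box of its coordinates using a flat-earth approximation; the coordinates and the bounding-box approach are illustrative assumptions, not the method used by the authors.

import numpy as np

# Hypothetical GPS fixes (decimal degrees) for one contiguous population.
lats = np.array([-29.2701, -29.2705, -29.2712, -29.2708])
lons = np.array([18.6001, 18.6009, 18.6004, 18.6012])

# Flat-earth approximation: one degree of latitude is ~111.32 km; a degree of
# longitude shrinks with cos(latitude). Adequate over sub-kilometre extents.
mean_lat = np.deg2rad(lats.mean())
dy_km = (lats.max() - lats.min()) * 111.32
dx_km = (lons.max() - lons.min()) * 111.32 * np.cos(mean_lat)

area_km2 = dx_km * dy_km
print(f"bounding-box area ~ {area_km2 * 1e6:.0f} m^2")
print("within the < 10 km^2 point-endemic threshold:", area_km2 < 10.0)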
Taxon richness (determined per QDS) is most strongly associated with the Succulent Karoo and especially with the Namaqualand Klipkoppe Shrubland (SKn1) vegetation unit ( Figure 3b). Areas on the geographical fringe of this vegetation unit, including the north-western extent of the Bushmanland bioregion of the Nama Karoo and Desert biomes may also show high levels of richness. It is worth noting that even the largest inselbergs in this area may possess no more than one or two taxa (e.g. C. lydiae). Taxon richness tends to decline further south and especially in the south-eastern part of the range of the genus. Here, it is more common to find just one or two taxa per QDS. The area with the highest species diversity for the genus lies around the small Northern Cape town of Steinkopf in the south-eastern Richtersveld (see also Hammer 1993). In the vicinity of this town, it is not uncommon to observe 20 or more individual Conophytum species and subspecies occupying each of several adjacent QDS. Particularly notable for their richness are the quartz hills and gravel plains that can be found on many of the farms to the west of Steinkopf, throughout Umdaus, around Klipbok mine and in the vicinity of the small village of Eksteenfontein.
Nama Karoo Biome
The presence of 30 Conophytum taxa within the Nama Karoo (as determined by mapping onto the latest edition of the VegMap) largely reflects the presence of discrete plant populations close to the boundary with vegetation units of the Succulent Karoo biome. The vast majority of such taxa are found within the Bushmanland Arid Grassland NKb3 vegetation unit in the Bushmanland bioregion. No Conophytum taxa are endemic to the South African Nama Karoo biome, and just one taxon (Conophytum quaesitum ssp. densipunctum) is endemic to the Namibian Nama Karoo biome ( Figure 6).
[Table fragment: column headings are Biome; Species and subspecies; South Africa; Namibia. Note: Point endemics are marked (†) and are defined here as a contiguous population lacking morphological variation lying within an area < 10 km². See also Figure 3 for the distribution of point endemics. C., Conophytum.]
Fynbos Biome
Conophytum is relatively poorly represented in the Fynbos biome with only 27 taxa present, of which 7 (26%) are endemics ( Table 3). Members of the genus are widely distributed across the biome and are present in 9 of the 12 bioregions and in 26 vegetation units, but typically with only 1 to 3 taxa found in each unit. The Bokkeveld Sandstone Fynbos FFs1 and Namaqualand Granite Renosterveld FRg1 units are the richest, with eight taxa found in each unit ( Table 4). Three of the taxa endemic to this biome are found in the north-west Fynbos bioregion, and the remaining taxa are found in at least two bioregions (Tables 1 and 2).
Greater Cape Floristic Region
Mapping the distribution of the genus onto the centres of the Greater Cape Floristic Region (as defined by Born, Linder & Desmet 2007) provides a further interpretation of the distribution of the genus and its relationship to other floras ( Table 5). Because of the importance of the Richtersveld to Conophytum, this centre is separated out from the other rainfall transition centres. The vast majority of taxa are found within the Richtersveld centre and the Namaqualand region. By contrast, the Hantam-Tanqua-Roggeveld region has only three Conophytum taxa (all nocturnal), none of which are endemic to the region. These taxa are also primarily restricted to the western extent of the region where winter-rainfall conditions prevail. Overall, the region has surprisingly few taxa (22) and the genus is absent from the Karoo centre (see Born et al. 2007). The level of endemism within each centre of the Greater Cape Floristic Region that possesses Conophytum is very high (33% -59%), reflective of a high degree of habitat specialism.
[Figure caption: point endemics are defined here as a contiguous population lacking morphological variation lying within an area < 10 km²; the Succulent Karoo biome is shaded.]
Conservation
The conservation status of members of the genus has been examined for formal and informal protected areas, as well as focus areas identified by the South African National Protected Areas Expansion Strategy (NPAES). Currently, 47 taxa (including 36 species) lie within formal protected areas, representing ~30% of the genus (Table 4). In addition, a further six taxa are found within informal protected areas, of which three are not already protected. When combined, such areas in South Africa provide protection for 50 separate Conophytum species and subspecies. The NPAES focus areas have the potential to extend protection to a further 76 taxa and provide additional area for 30 protected taxa. Within Namibia, all endemic Conophytum taxa, with the exception of C. quaesitum ssp. densipunctum, can be found within or are restricted to existing protected areas, namely the Sperrgebiet National Park and the Ai-Ais Richtersveld Transfrontier National Park.
Discussion
With almost 3000 records, the Conobase currently provides the most up-to-date, comprehensive and accurate record of distribution of members of the dwarf succulent genus Conophytum. In compiling this data set, a large number of records have been excluded where there were uncertainties concerning the accuracy of the locality data (a particular problem with older herbaria records, e.g. those held at the Bolus, Compton or Kew herbaria) or with their taxonomic identification (e.g. with dried samples). Nevertheless, there are limitations associated with its use, including (1) sampling bias towards Namaqualand and away from either Namibia (in part because of access restrictions) or the south-eastern extent of the genus' distribution; (2) challenges in successfully identifying taxa to subspecies level within some species (e.g. C. jucundum), and even the separation of some species (especially with the taxa that comprise section Ophthalmophyllum) and (3) large-scale vegetation mapping exercises such as the South African VegMap do not always successfully capture the presence of localised or very small populations of plants (e.g. on discrete rocky outcrops such as those favoured by some Conophytum taxa) with the result that some of the point locality data used here may not always align well with boundaries between biomes (a particular issue with the boundary between Succulent Karoo and Nama Karoo biomes in this study). This is discussed in more detail below.
The distribution of the genus Conophytum is closely associated with the winter-rainfall region of southern Africa, and especially the Succulent Karoo biome (Figure 2a and 2b). The influence of environmental factors governing the distribution of the genus in the region will be analysed in more detail in a further study. Rainfall levels across the Karoo diminish from south to north and from east to west (Desmet & Cowling 1999b) and rainfall, together with significant local influences of fog and dew, is a major factor in influencing both the distribution and richness of flora across the region.
The influence of precipitation cannot be viewed in isolation and the importance of seasonal temperatures is also a significant factor in determining the distribution of the genus Conophytum (Young et al. 2016).
Within the Greater Cape Floristic Region, high levels of Conophytum taxon richness and endemism prevail in just a few centres, notably Springbok, Kamiesberg, Richtersveld and, to a lesser extent, the Knersvlakte (see below). This pattern generally correlates with the observations of Born et al. (2007) concerning the wider flora of the region. However, whilst the Namaqualand and Hantam-Tanqua-Roggeveld centres are floristically similar (Born et al. 2007), the genus Conophytum is almost completely absent from the latter (where members of the Aizoaceae are generally well represented). Conophytum is completely absent from the predominantly summer-rainfall Karoo centre. Surprisingly, fewer than 15% of taxa are found within the Cape Floristic Region itself (Table 5), with the amount of rainfall rather than its seasonality thought to be a limiting factor. Overall, many Conophytum can be considered to be habitat specialists, well adapted to the prevailing environmental conditions as seen, for example, in those taxa occupying the Namib-Desert region.
When examined at the biome level, it is clear that the Succulent Karoo possesses the highest degree of taxon richness for the genus (cf. Figures 3a and 3b). A recognised global biodiversity hotspot and one of only two entirely arid hotspots (Mittermeier et al. 2004), the biome is characterised by a high degree of floral endemism, especially in dwarf-leaf succulents (Driver et al. 2003; Mucina et al. 2006a). Both taxon richness and endemism of Conophytum within the Namaqualand Hardeveld bioregion are consistent with the pattern seen in the flora as a whole (Snijman 2013). However, the level of Conophytum endemism in the Knersvlakte bioregion is low by comparison to that seen in other plants.
More than 93% of Conophytum taxa are recorded in the Succulent Karoo biome, with many of the remaining taxa located on the biome's immediate fringes and often in the transitional area at the boundary of winter- and summer-rainfall areas. The propensity towards endemism, which is reflective of the extent of speciation in this genus, is also a strong characteristic, with ~60% of all Conophytum taxa endemic to this biome. Most of the biome is characterised by low and rather unpredictable levels of rainfall in the winter months, and it is in these parts of the biome that the genus predominates. Conophytum is much less common, and may be absent, in the eastern parts of the biome that experience year-round or bimodal rainfall patterns (e.g. in the Trans-Escarpment Succulent Karoo and the Rainshadow Valley Karoo; Bradshaw & Cowling 2014).
Normalising the data shown in Table 2 to account for differences in the areas occupied by individual vegetation units serves to identify four units with very high taxon richness per unit area: Dg6 Helskloof Canyon Desert, SKr5 Vyftienmyl se Berge Succulent Shrubland, SKr9 Tatasberg Mountain Succulent Shrubland and SKr19 Aggeneys Gravel Vygieveld. Whilst these units are all relatively small in extent, they provide niche habitats for some succulents.
The absence of day-flowering taxa from the south-eastern extent of the distribution of the genus is not fully understood but is probably related to the absence of suitable pollinating vectors in the flowering season. Nocturnal flowering taxa are most frequented by moths, whilst day-flowering taxa are predominantly visited by pollen wasps (Jürgens & Witt 2014). A characteristic of the genus is that, with the exception of a handful of taxa, flowering displays a temporal shift compared to the vast majority of the Aizoaceae in the region (autumn and winter for the vast majority of Conophytum taxa and spring for most other genera). Jürgens and Witt (2014) suggested that the frequency of nocturnal flowering (as seen in approximately 25% of the genus) is a result of this temporal separation from other related genera. There does not appear to be any particular influence or association governing the distribution of those Conophytum taxa that display 'out-of-season' flowering (i.e. in spring or summer; data not shown). However, it is interesting to note that in describing a newly discovered species of Conophytum, Young et al. (2015) observed three pairings of closely related 'in-season' and 'out-of-season' flowering species in which both species from each pairing grew in close vicinity to each other at separate, range-restricted, sites. Although pollination in the genus is non-specialist (Jürgens & Witt 2014), incidences of natural hybrids are relatively uncommon, even in areas of high species richness (e.g. in the vicinity of Steinkopf). The best example occurs just to the east of Springbok, where dense colonies of Conophytum ectypum ssp. brownii (magenta flower) regularly hybridise with the less (locally) abundant C. bilobum (yellow flower) in the form of C. × marnerianum (red and orange flower forms). It is not understood why such hybridisation is only rarely observed, although species may be vertically stratified on a hillside.
The vast majority of records in the Conobase arise from South Africa, whilst data for Namibia remains poor. Despite this, all the taxa known from Namibia are recorded: 15 in total, including six endemic to the country. Within Namibia, Conophytum taxa are once again primarily found in the Succulent Karoo (within the Desert and Succulent Steppe vegetation zone; Giess 1971), as is the case south of the Orange River. Within South Africa, the genus is most prevalent and is well distributed across both the Namaqualand Hardeveld and Richtersveld bioregions (Table 1). Geology plays a very significant part in the distribution of the genus with separate taxa showing distinct preferences for sandstone, granite, gneiss and especially quartz (Hammer 1993; Young et al. 2016). The Succulent Karoo shares boundaries with the Desert, Nama Karoo, Fynbos and Thicket biomes. Whilst the transition from one biome to the next is often unclear (especially that between the Succulent and Nama Karoo), Conophytum taxa are found in all but the Thicket biome (Table 1). It is interesting to note that none of the taxa that have been discovered over the past 10-15 years have extended the known geographical range of the genus. Instead, new taxa have been found in close proximity to other Conophytum populations, often in areas of existing high biodiversity in the Succulent Karoo (e.g. Conophytum smaleorum in the southern Richtersveld).
The Nama Karoo receives rainfall as low as that of the Succulent Karoo, but with much greater variability and a summer phenological peak (Desmet & Cowling 1999b), resulting in very low diversity and abundance of succulents (Mucina et al. 2006b). The biome is not especially florally rich and local endemism is relatively poor compared to other biomes (Mucina et al. 2006b; Van Wyk & Smith 2001). This is reflected in the genus Conophytum with just a few taxa and only a single endemic (in Namibia) present in this biome. The occurrence of Conophytum in the Nama Karoo is always in association with habitats or vegetation types that are still under the influence of the winter-rainfall systems. This is especially prevalent in the Bushmanland Inselberg Region where the topography supports isolated outliers of Succulent Karoo vegetation units. Nearly all taxa are found in Bushmanland Arid Grassland (NKb3), but such instances are nearly always on the fringes of the Succulent Karoo itself, often within a few hundred metres of the boundary between the biomes. A good example is Conophytum ratum, a taxon that could be considered endemic to the Succulent Karoo, but strict application of the VegMap also situates it in the Nama Karoo. Such examples highlight some of the limitations of using the VegMap to analyse such point data, but also support the need to further refine the biome and vegetation unit boundaries on the VegMap to aid conservation planning.
The Desert biome occupies an area along the Atlantic Coast and the Orange River. By contrast with the Succulent Karoo, this biome generally affords rather sparse vegetation cover.
Rainfall is highly variable and shows a marked west-east transition (Jürgens 2006). Similarly, temperatures are often very variable, and can be extremely high. Here fog can play a key role as a critical source of moisture, and it is along the Orange River and the mountainous desert section of the Richtersveld region, with exposure to winter rainfall and coastal fog, that floral endemism is most prevalent (Jürgens 1991, 1997). This is similarly reflected in the distribution of the genus Conophytum (Tables 1 and 2).
Bordering the Succulent Karoo to the south and south-west, the Fynbos biome is host to surprisingly few Conophytum taxa, with just seven endemic to the biome ( Figure 5 and Table 3). This contrasts with the overall species richness of the biome, especially the Renosterveld vegetation complex (Rebelo et al. 2006). Winter rainfall is most prevalent in the western part of the Cape region, where the majority of taxa are found. In common with all other biomes, geology is a major factor in plant distribution with sandstone and quartz playing a key role in creating Conophytum habitat (Hammer 1993;Young et al. 2016).
A strong feature of the genus is its geographical fragmentation into spatially isolated taxa, often with a highly restricted distribution: almost one quarter of all Conophytum taxa are known from only a single locality. Such localities may include single mountains (e.g. Conophytum cubicum on Black Face Mountain), Bushmanland inselbergs (e.g. Conophytum achabense on Achab), small hills (e.g. Conophytum schlecteri on Farquarson-se-kop), quartz ridges (e.g. Conophytum regale north of Springbok) or quartz gravel flats (e.g. C. burgeri at Aggeneys). Whilst the geographical range of the majority of taxa is generally restricted, a few species, such as C. bilobum, Conophytum jucundum, Conophytum pellucidum and Conophytum pageae, have a longitudinal range of hundreds of kilometres. Most notable amongst these is C. pageae, which is one of a handful of taxa to be found in both Namibia and South Africa, with a range extending from the small town of Garies in the Northern Cape to the Sperrgebiet in the Namib Desert. Within its range, several forms of C. pageae can be readily identified, including the grey-bodied udabibense form from Namibia, the large subrisum form found around Kliprand on the south-western edge of Bushmanland and the tiny red-lipped form from the Garies area.
Examples of point endemism in Conophytum are common, especially within the Succulent Karoo, and account for ~28% of all taxa (see Table 3 and Figure 6). The Namaqualand Klipkoppe Shrubland (SKn1) is the most species rich (66 taxa) and has most endemics (23) and most point endemics (14). Whilst point endemics have, by definition, an extremely restricted range, in Conophytum this may sometimes be measured in the hundreds or a few thousand square metres. Examples include Conophytum tantillum ssp. amicorum, C. burgeri, C. jarmilae and the more recently discovered Conophytum youngii -all of which are currently known to occupy areas < 2000 m 2 . Such taxa are clearly highly vulnerable to both anthropogenic (especially mining and farming) and environmental threats. Whilst rare, a small number of point endemics (e.g. C. achabense) are found occupying a range-limited area that spans across parts of two contiguous vegetation units. The mapping of the genus in this study has revealed a clear centre for endemism lying to the west of the small town of Steinkopf in the southernmost part of the Richtersveld. This can be described as a florally diverse area characterised by the presence of multiple vegetation units (e.g. one particular QDS in the Succulent Karoo is host to nine vegetation units, all of which possess Conophytum taxa).
Data sets such as the Conobase can aid conservation planning, not only in terms of informing the National and Global Red Lists but also to help identify suitable areas for possible protection by informing the extent of endemicity and species richness (when used alongside data for other genera and the presence of threats). It is worthwhile to note that less than a third of Conophytum taxa (47) are presently protected within existing formal conservation areas (e.g. C. burgeri adjacent to Black Mountain's mining operations at Aggeneys; see Table 4). This study would suggest that the quartz-rich areas in Namaqualand especially around the towns of Steinkopf and Springbok offer some of the greatest potential for conservation purposes. Such areas are, at least partially, already identified as NPAES focus areas (see also Driver et al. 2003). Even in a scenario that would see all formal, informal and NPAES focus areas in place, approximately one quarter of all Conophytum taxa (including some vulnerable point endemics, including Conophytum bolusiae ssp. primavernum) would lie outside any protection. Given the level of regional endemism, especially point endemism, within the genus as a whole this is a concern. The Conobase now provides a tool to inform such matters by initially identifying those taxa at potential risk. The data have most recently informed the South African Red List in which approximately 50% of Conophytum taxa (including recognised varieties) are now provisionally categorised as threatened, critically rare or rare (South African National Biodiversity Institute 2015).
One of the questions that this study sought to explore was whether the genus Conophytum could be employed as a model dwarf succulent taxon for the Succulent Karoo biome.
The data here show that the genus has a strong association with this particular biome and many of its bioregions and vegetation units, with taxa found in all but 14 of the 63 vegetation units currently recognised by the 2009 vegetation map, including all six of the Namaqualand Hardeveld bioregion and 18 of 19 that comprise the Richtersveld bioregion. However, the association is weaker in the Rainshadow Valley Karoo (10 of 14 vegetation units) and Namaqualand Sandveld (5 of 13 units) bioregions and, in the latter, is generally restricted to isolated, often small, rocky outcrops.
Conclusion
The genus Conophytum displays a strong affinity with the Succulent Karoo biome, that is, the component of the arid ecotone between winter- and summer-rainfall regimes in south-western Africa that demonstrates a strong winter phenological peak. Members of the genus are found in a majority, but not all, of the vegetation units of the biome, with taxon richness and endemism highest in those units falling within the Namaqualand Hardeveld and Richtersveld bioregions.
dCAM: Dimension-wise Class Activation Map for Explaining Multivariate Data Series Classification
Data series classification is an important and challenging problem in data science. Explaining the classification decisions by finding the discriminant parts of the input that led the algorithm to some decisions is a real need in many applications. Convolutional neural networks perform well for the data series classification task; though, the explanations provided by this type of algorithm are poor for the specific case of multivariate data series. Addressing this important limitation is a significant challenge. In this paper, we propose a novel method that solves this problem by highlighting both the temporal and dimensional discriminant information. Our contribution is two-fold: we first describe a convolutional architecture that enables the comparison of dimensions; then, we propose a method that returns dCAM, a Dimension-wise Class Activation Map specifically designed for multivariate time series (and CNN-based models). Experiments with several synthetic and real datasets demonstrate that dCAM is not only more accurate than previous approaches, but the only viable solution for discriminant feature discovery and classification explanation in multivariate time series. This paper has appeared in SIGMOD'22.
Data series classification is a crucial and challenging problem in data science [18,67]. To solve this task, various data series classification algorithms have been proposed in the past few years [3], applied to a large number of use cases. Standard data series classification methods are based on distances to the instances' nearest neighbors, with k-NN classification (using the Euclidean or Dynamic Time Warping (DTW) distances) being a popular baseline method [12]. Nevertheless, recent works have shown that ensemble methods using more advanced classifiers achieve better performance [4,37]. Following recent breakthroughs in the computer vision community, new studies successfully propose deep learning methods for data series classification [9,11,27,32,63,72,73], such as Convolutional Neural Networks (CNN), Residual Neural Networks (ResNet) [65], and InceptionTime [28].
[Classification Explanation] While having a trained and accurate classification model, finding explanations of the classification result (i.e., finding the discriminative features that made the model decide which class to attribute to each instance) is a challenging but essential problem, e.g., in manufacturing for anomaly-based predictive maintenance [70], or in medicine for robot-assisted surgeon training [26]. Such discriminant features can be based on patterns of interest that occur in a subset of dimensions at different timestamps or the same timestamp. For some CNN-based models, the Class Activation Map (CAM) [74] can be used as an explanation for the classification result. CAM has been used for highlighting the parts of an image that contribute the most to a given class prediction and has also been adapted to data series [27,65].
[Challenges] Nevertheless, CAM for data series suffers from one important limitation. Since CAM is a univariate time series (of the same length as the input instances) with high values aligned with the subsequences of the input that contribute the most for a given class identification, in the specific case of multivariate data series as input, no information can be retrieved from CAM on the level of contribution of specific dimensions. As an example, Figure 1 illustrates CAM (top heatmaps) applied on two instances (belonging to two different classes) of the RacketSport UCR/UEA dataset. We observe that CAM explains why the data series correspond to a badminton "smash" or "clear" gesture by highlighting the same temporal window across all dimensions (variables). It is thus not clear what aspect of the gesture distinguishes it from the other. Addressing this significant limitation is a sought-after challenge.
[Contributions] In this paper, we present a novel approach that fills in the gap by addressing this limitation for the popular CNN-based models. We propose a novel data organization and a new CAM technique, dCAM (Dimension-wise Class Activation Map), that is able to highlight both the temporal and dimensional information at the same time. For instance, in Figure 1, dCAM (bottom heatmaps) points to specific subsequences of particular dimensions that explain why the two gestures are different. Our method requires only a single training phase, is not constrained by the architecture type, and can efficiently and effectively retrieve discriminant features thanks to a technique that exploits information from different permutations of the input data dimensions. Thus, any kind of architecture in which we can apply CAM can benefit from our approach. Our contributions are as follows.
• We develop a new method that transforms convolutional-based neural network architectures: whereas previous network architectures can only provide an explanation for all the dimensions together, our transformation represents the only deep learning solution that enables explanation in individual dimensions. Our approach can be used with any deep network architecture with a Global Average Pooling layer.
• We demonstrate how we can apply our method to three modern deep learning classification architectures. We first describe dCNN, inspired by the traditional CNN architecture. We then describe how more advanced architectures, such as ResNet and InceptionTime, can be transformed as well. We name these transformed architectures dResNet and dInceptionTime.
• We propose dCAM, a novel method (based on dCNN/dResNet/dInceptionTime) that returns a multivariate CAM, identifying the important parts of the input series for each dimension.
• We experimentally demonstrate with several synthetic and real datasets that (among Class Activation Map-based methods) dCAM is not only more accurate than previous approaches, but the only viable solution for discriminant feature discovery and classification explanation in multivariate time series. Finally, we make our code available online [1].
BACKGROUND AND RELATED WORK
We first present useful notations and definitions, and discuss related work. Table 1 summarizes the symbols we use in this paper.
[Data Series] A multivariate, or d-dimensional, data series x ∈ R^(d,n) is a set of d univariate data series of length n. We note x = [x^(0), ..., x^(d−1)], and for i ∈ [0, d−1], x^(i) denotes the univariate series of the i-th dimension. A subsequence x^(i)_{j,ℓ} ∈ R^ℓ of the dimension x^(i) of the multivariate data series x is a subset of contiguous values from x^(i) of length ℓ (usually ℓ ≪ n) starting at position j; formally, x^(i)_{j,ℓ} = [x^(i)_j, ..., x^(i)_{j+ℓ−1}].
[Neural Network Notations] We are interested in classifying data series using a neural network architecture model. We define the neural network input as x ∈ R^n for univariate data series (with x_i the i-th value and x_{i,ℓ} the sequence of ℓ values following the i-th value), and X ∈ R^(d,n) for multivariate data series (with X_{i,j} the j-th value of the i-th dimension and X_{i,j,ℓ} the sequence of ℓ values following the j-th value of the i-th dimension).
Dense Layer: The basic layer of a neural network is a fully connected layer (also called dense) in which every input neuron is weighted and summed before passing through an activation function. For univariate data series, given an input data series x ∈ R^n, a vector of weights w ∈ R^n and a bias b ∈ R, we have:
ŷ = A(∑_{i=0}^{n−1} w_i x_i + b)
A is called the activation function and is a non-linear function. The commonly used activation function is the rectified linear unit (ReLU) [39] that prevents the saturation of the gradient (other functions that have been proposed are tanh and Leaky ReLU [66]). For the specific case of multivariate data series, all dimensions are concatenated to give the input x ∈ R^(d·n). Finally, one can decide to have several output neurons. In this case, each neuron is associated with a different w and b, and the equation above is evaluated independently for each neuron.
Convolutional Layer: This layer has played a significant role for image classification [31,33,65], and recently for data series classification [27]. Formally, for multivariate data series, given an input X ∈ R^(d,n), and given weight matrices W, B ∈ R^(d,ℓ), the output h ∈ R^n of a convolutional layer can be seen as a univariate data series. The tuple (W, B) is also called a kernel, with (d, ℓ) the size of the kernel. Formally, for h = [h_0, ..., h_{n−1}], we have:
h_i = A(∑_{j=0}^{d−1} ∑_{t=0}^{ℓ−1} (W_{j,t} X_{j,i+t} + B_{j,t}))
In practice, we have several kernels of size (d, ℓ). The result is a multivariate series with a number of dimensions equal to the number of kernels, n_f. For a given input X ∈ R^(d,n), we define A(X) ∈ R^(n_f,n) to be the output of a convolutional layer of kernel size (d, ℓ). A_m(X) is thus a univariate series corresponding to the output of the m-th kernel. We denote with A_m(x) the univariate series corresponding to the output of the m-th kernel when x is used as input.
Global Average Pooling Layer: Another type of layer frequently used is pooling. Pooling layers compute average/max/min operations, aggregating values of previous layers into a smaller number of values for the next layer. A specific type of pooling layer is Global Average Pooling (GAP). This operation averages an entire output A_m(x) of a convolutional layer into one value, thus providing invariance to the position of the discriminative features.
Learning Phase: The learning phase uses a loss function L that measures the accuracy of the model and optimizes the various weights. For the sake of simplicity, we note Ω the set containing all weights (e.g., the matrices W and B defined in the previous sections). Given a set of instances X, we define the average loss as L̄(Ω) = (1/|X|) ∑_{x∈X} L(x). Then, for a given learning rate λ, the average loss is back-propagated to all weights in the different layers. Formally, back-propagation is defined as follows: ∀ω ∈ Ω, ω ← ω − λ ∂L̄(Ω)/∂ω.
In this paper, we use stochastic gradient descent with the ADAM optimizer [30] and the cross-entropy loss function.
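To make the above concrete, the following is a minimal sketch (in PyTorch, the library also used for the experiments in Section 5) of a convolutional classifier ending in a GAP layer and a dense layer; the filter counts and kernel size here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CNNWithGAP(nn.Module):
    """Minimal 1D CNN ending in Global Average Pooling + dense layer.

    Input: a batch of multivariate series of shape (batch, d, n);
    output: class logits. Layer sizes here are illustrative only.
    """
    def __init__(self, d: int, num_classes: int, n_filters: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(d, n_filters, kernel_size=3, padding=1),
            nn.BatchNorm1d(n_filters),
            nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=3, padding=1),
            nn.BatchNorm1d(n_filters),
            nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool1d(1)   # Global Average Pooling over time
        self.classifier = nn.Linear(n_filters, num_classes)

    def forward(self, x):
        a = self.features(x)                 # A(x): shape (batch, n_f, n)
        z = self.gap(a).squeeze(-1)          # averaged kernel outputs: (batch, n_f)
        return self.classifier(z)            # class logits

model = CNNWithGAP(d=10, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()            # loss L used during training
```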
' " (c.2) Given class -: 2) Given an instance (b.2) Compute , the output of the last convolutional layer of the 23 kernel
Convolutional-based Neural Network
We now describe the standard architectures used in the literature. The first is Convolutional Neural Networks (CNNs) [27,65]. CNN is a concatenation of convolutional layers (joined with activation functions and batch normalization). The last convolutional layer is connected to a Global Average Pooling layer and a dense layer. In theory, instances of multiple lengths can be used with the same network. A second architecture is the Residual Neural Network (ResNet) [27,65]. This architecture is based on the classical CNN, to which we add residual connections between successive blocks of convolutional layers to avoid that the gradients explode or vanish. Other methods have been proposed in the literature [11,27,28,58], though CNN and ResNet have been shown to perform the best for multivariate time series classification [27]. InceptionTime [28] has not been evaluated on multivariate data series, but demonstrated state-of-the-art performance on univariate data series. Finally, other kinds of architectures than convolutional ones have been proposed in the literature. Attention-based models have been introduced, such as TapNet [71]. For the specific case of temporal data, recurrent-based models, such as Recurrent Neural Networks [53] (RNN), Long Short-Term Memory [23] (LSTM), and Gated Recurrent Unit [10] (GRU) have received a lot of attention. These three models are relevant to the data series classification task, and we include them in our experimental study.
Class Activation Map (CAM)
Once the model is trained, we need to find the discriminative features that led the model to decide which class to attribute to each instance. Several methods have been proposed to extract meaningful information from CNNs, such as grad-CAM [57], which uses the gradients of the weights to compute the discriminant features, and CAM [74]. The latter has been proposed to highlight the parts of an image that contribute the most to a given class identification, and has been experimented with on data series [27,65] (univariate and multivariate). This method explains the classification of a certain deep learning model by emphasizing the subsequences that contribute the most to a certain classification. Note that the CAM method can only be used if (i) a Global Average Pooling layer has been used before the classifier, and (ii) the model accuracy is high enough. Thus, only the standard architectures CNN and ResNet proposed in the literature can benefit from CAM. We now define the CAM method [27,65]. For an input data series x, let A(x) be the result of the last convolutional layer of kernel size (d, ℓ), which is a multivariate data series with n_f dimensions and of length n. A_m(x) is the univariate time series for the dimension m ∈ [1, n_f], corresponding to the output of the m-th kernel. Let w_m^{C_j} be the weight between the m-th kernel and the output neuron of class C_j ∈ C. Since a Global Average Pooling layer is used, the input z_{C_j}(x) to the neuron of class C_j can be expressed by the following equation:
z_{C_j}(x) = ∑_m w_m^{C_j} · (1/n) ∑_t A_m(x)_t
The second sum represents the averaged time series over the whole time dimension. Note that the weight w_m^{C_j} is independent of the index t. Thus, z_{C_j}(x) can also be written as:
z_{C_j}(x) = (1/n) ∑_t ∑_m w_m^{C_j} · A_m(x)_t
Finally, CAM_{C_j}(x) = [CAM_{C_j}(x)_0, ..., CAM_{C_j}(x)_{n−1}], which underlines the discriminative features of class C_j, is defined as follows:
CAM_{C_j}(x)_t = ∑_m w_m^{C_j} · A_m(x)_t
As a consequence, CAM_{C_j}(x) is a univariate data series where each element at index t indicates the significance of index t (regardless of the dimensions) for the classification as class C_j. Figure 2(a) depicts the process of computing CAM and finding the discriminant subsequences in the initial series.
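The CAM computation amounts to a weighted sum of the last convolutional layer's outputs. Below is a minimal sketch reusing the CNNWithGAP model sketched above (so the attribute names features and classifier are assumptions of that sketch, not part of the original method's code):

```python
import torch

def compute_cam(model, x, class_idx):
    """CAM_{C_j}(x) for one instance x of shape (d, n).

    Assumes model.features returns the last convolutional layer output
    A(x) of shape (1, n_f, n), and model.classifier.weight[class_idx]
    holds the weights w_m^{C_j} between kernel m and the class neuron.
    """
    with torch.no_grad():
        a = model.features(x.unsqueeze(0))[0]    # A(x): (n_f, n)
        w = model.classifier.weight[class_idx]   # (n_f,)
        # CAM_{C_j}(x)_t = sum_m w_m^{C_j} * A_m(x)_t
        cam = torch.einsum("m,mt->t", w, a)      # univariate series of length n
    return cam
```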
CAM Limitations for Multivariate Series
As mentioned earlier, a CAM that highlights the discriminative subsequences of class C_j, CAM_{C_j}(x), is a univariate data series. The information provided by CAM_{C_j}(x) is sufficient for the case of univariate series classification, but not for multivariate series classification. Even though the significant temporal index may be correctly highlighted, no information can be retrieved on which dimension is significant or not. Solving this serious limitation is a significant challenge in several domains. For that purpose, one can propose rearranging the input structure to the network so that the CAM becomes a multivariate data series. A new solution would be to use a 2D convolutional neural network with kernel size (ℓ, 1), such that each kernel slides on each dimension separately. Thus, for an input data series x, A(x) would become a multivariate structure, and A_m^(i)(x) ∈ A_m(x) would be a univariate time series corresponding to the dimension i of the initial data series. We call this solution cCNN, and we use cCAM to refer to the corresponding Class Activation Map. Figure 2(b) illustrates the cCNN architecture and cCAM. Note that if a GAP layer is used, then architectures other than CNN can be used as well, such as ResNet and InceptionTime. We denote these baselines as cResNet and cInceptionTime.
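A sketch of the cCNN idea follows: the (d, n) series is treated as a one-channel image and convolved with kernels that span a single dimension at a time (in PyTorch's (height, width) ordering, the paper's (ℓ, 1) kernel corresponds to kernel_size=(1, ℓ)); the filter counts are assumptions.

```python
import torch
import torch.nn as nn

# cCNN idea: one input channel, kernels of size (1, ell) sliding along time
# within each dimension independently, so the resulting CAM is (d, n)-shaped.
ccnn_features = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=(1, 3), padding=(0, 1)),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=(1, 3), padding=(0, 1)),
    nn.ReLU(),
)

x = torch.randn(8, 1, 10, 100)      # (batch, 1 channel, d=10, n=100)
print(ccnn_features(x).shape)       # torch.Size([8, 64, 10, 100])
```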
Nevertheless, new limitations arise from this solution. The dimensions are not compared together: each kernel of the input layer will take as input only one of the dimensions at a time. Thus, features depending on more than one dimension will not be detected.
Recent studies examine the specific case of multivariate data series classification explanation. A benchmark study analyzing the saliency/explanation methods for multivariate time series concluded that the explainable methods work better when the multivariate data series is handled as an image [25], such as in the cCNN architecture. This confirms the need to propose a method specifically designed for multivariate data series. Finally, some recently proposed approaches [2,24] address the problems of identifying the discriminant features and discriminant temporal windows independently from one another. For instance, MTEX-CNN [2] is an architecture composed of two blocks. The 1st block is similar to cCNN. The 2nd block consists of merging the results of the 1st block into a 1D convolutional layer, which enables comparing dimensions. A variant of CAM [57] is applied to the last convolutional layer of the 1st block in order to find discriminant features for each dimension. The discriminant temporal windows are detected with the CAM applied to the last convolutional layer of the 2nd block. In practice, however, this architecture does not manage to overcome the limitations of cCNN: discriminant features that depend on several dimensions are not correctly identified by MTEX-CNN, which has similar accuracy to cCNN (we elaborate on this in Section 5).
In our experimental evaluation, we compare our approach to the MTEX-CNN, cCNN, cResNet and cInceptionTime, and further demonstrate their limitations when addressing the problem at hand.
PROBLEM FORMULATION
Given a set T of multivariate data series x = {x^(0), x^(1), ..., x^(d−1)} of d dimensions belonging to classes C_j ∈ C, and a model f: T → C, we aim to find a function of x and C_j that returns a multivariate data series M of the same dimensions and length as x, where M^(i) is a series that has high values at the positions where the corresponding subsequences of x^(i) discriminate x from instances belonging to classes other than C_j.
PROPOSED APPROACH
Based on a new architecture that we call dCNN (and variant architectures, e.g., dResNet, dInceptionTime), dCAM aims to provide a multivariate CAM pointing to the discriminant features within each dimension. Contrary to the previously described baselines (cCNN, cResNet and cInceptionTime), one kernel on the first convolutional layer will take as input all the dimensions together, under different permutations. Thus, similarly to the standard CNN architecture, features depending on more than one dimension will be detectable while still having a multivariate CAM. Nevertheless, the resulting CAM has to be processed such that the significant subsequences are detected.
Table 1 (symbols used in this paper):
  x               a data series
  |x|             length of x
  x^(i)           i-th dimension of x
  d               number of dimensions
  C               set of all classes
  C_j             one class of C
  w_m^{C_j}       weight connecting the m-th kernel of the last convolutional layer and the class C_j neuron
  A_m(x)          output of the m-th kernel of the last convolutional layer for input x
  z_{C_j}(x)      output of the C_j neuron for input x
  CAM_{C_j}(x)    Class Activation Map for class C_j and input x
  Σ_x             set of all possible permutations of the dimensions of x
  x_π             one possible permutation of the dimensions of x (π ∈ Σ_x)
  k               number of permutations
  k̂               number of permutations that have been correctly classified by the model
We first describe the proposed architecture dCNN that we need in order to provide a dCAM, while still being able to extract multivariate features. We then demonstrate that the transformation needed to change CNN into dCNN can also be applied to other, more sophisticated architectures, such as ResNet and InceptionTime, which we denote as dResNet and dInceptionTime. We demonstrate that using permutations of the input dimensions makes the classification more robust when important features are localized into small subsequences within some specific dimensions.
We then present in detail how we compute dCAM (based on a dCNN/dResNet/dInceptionTime architecture). Our solution benefits from the permutations injected into the dCNN to identify the most discriminant subsequences used for the classification decision.
Dimension-wise Architecture
As mentioned earlier, the classical CNN architecture mixes all dimensions in the first convolutional layer. Thus, the CAM is a univariate data series and does not provide any information on which dimension is the discriminant one for the classification. To address this issue, we can use a two-dimensional CNN architecture by reorganizing the input (i.e., the cCNN solution we described earlier). In this architecture, one kernel (of size (1, ℓ, 1)) slides on each dimension independently. Thus, for a given data series (x^(0), ..., x^(d−1)) of length n, the convolutional layer returns an array of three dimensions (n_f, d, n), in which each row i ∈ [0, d−1] corresponds to the extracted features on dimension x^(i). Nevertheless, the kernels (1, ℓ, 1) get as input each dimension independently: such an architecture cannot learn features that depend on multiple dimensions.
A first Architecture: dCNN
In order to have the best of both cases, we propose the dCNN architecture, where we transform the input into a cube, in which each row contains a given combination of all dimensions. One kernel (of size (d, ℓ, 1)) slides over all dimensions d times. This allows the architecture to learn features on multiple dimensions simultaneously. Moreover, the resulting CAM is a multivariate data series. In this case, one row of the CAM corresponds to a given combination of the dimensions. However, we still need to be able to retrieve information for each dimension separately, as well. To do that, we make sure that each row contains a different permutation of the dimensions. As the weights of the kernels are at fixed positions (for specific dimensions), a permutation of the dimensions will result in a different CAM. Formally, for a given data series x, we note C(x) ∈ R^(d,d,n) the input data structure of dCNN:
C(x) = [ [x^(0),   x^(1), ..., x^(d−1)],
         [x^(1),   x^(2), ..., x^(0)  ],
         ...,
         [x^(d−1), x^(0), ..., x^(d−2)] ]
Note that each row and column of C(x) contains all dimensions. Thus, a given dimension x^(i) is never at the same position in two rows of C(x). The latter is a crucial property for the computation of dCAM.
In practice, we guarantee the latter property by shifting the order of the dimensions by one position from one row to the next. Thus, x^(0) in the first row is aligned with x^(1) in the second row. A different order of dimensions will thus generate a different matrix C(x). Figure 3 depicts the dCNN architecture. The input C(x) is forwarded into a classical two-dimensional CNN. The rest of the architecture is independent of the input data structure. The latter means that any other two-dimensional architecture (containing a Global Average Pooling layer) can be used (such as ResNet), by only adapting the input data structure. Similarly, the training procedure can be freely chosen by the user. For the rest of the paper, we will use the cross-entropy loss function and the ADAM optimizer.
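Building C(x) reduces to stacking d rotations of the dimension order; a minimal NumPy sketch, assuming the shift-by-one convention just described:

```python
import numpy as np

def build_cube(x: np.ndarray) -> np.ndarray:
    """Build the dCNN input C(x) from a multivariate series x of shape (d, n).

    Row i holds the dimension order rotated by i, so position p of row i
    contains dimension (p + i) mod d: every row (and column) contains all
    d dimensions, each at a different position.
    """
    d, _ = x.shape
    return np.stack([np.roll(x, shift=-i, axis=0) for i in range(d)])  # (d, d, n)

x = np.random.rand(4, 50)
cube = build_cube(x)
assert np.array_equal(cube[1, 0], x[1])  # x^(0) of row 0 aligns with x^(1) of row 1
```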
Observe that multiple permutations of the original multivariate series will be processed by several convolutional layers, enabling the kernel to examine multiple different combinations of dimensions and subsequences. Note that the kernels of the dCNN will be sparse, which has a significant impact on overfitting.
The dResNet/dInceptionTime Architectures
As mentioned earlier, any architecture using a GAP layer after the last convolutional layer can benefit from dCAM. Thus, different (and more sophisticated) architectures can be used with our approach. To that effect, we propose two new architectures, dResNet and dInceptionTime, based on the state-of-the-art architectures ResNet [65] and InceptionTime [28]. The transformations that lead to dResNet and dInceptionTime are very similar to that from CNN to dCNN, using C(x) as input to the transformed networks. The convolutional layers are transformed from 1D (as originally proposed [28,65]) to 2D. Similarly to dCNN, the kernel sizes are (d, ℓ, 1) and convolve over each row of C(x) independently.
We demonstrate in the experimental section that these architectures do not affect the usage of our proposed approach dCAM, and we evaluate the choice of architecture on both classification and discriminant features identification.
Dimension-wise Class Activation Map
At this point, we have our network trained to classify instances among classes C_0, C_1, ..., C_{|C|−1}. We now explain how to compute dCAM, which will identify discriminant features within dimensions. The network has to be accurate enough in order to provide a meaningful dCAM; we evaluate in the experimental section the relation between the classification accuracy of the network and the discriminant features identification accuracy of dCAM.
At first glance, we can compute the regular CAM on the cube input, CAM_{C_j}(C(x)). However, a high value on the i-th row at position t of CAM_{C_j}(C(x)) does not mean that the subsequence at position t on the i-th dimension is important for the classification. It instead means that the combination of dimensions at the i-th row of C(x) is important.
Random Permutation Computations.
Given that different combinations of dimensions (i.e., different rows of C(x)) produce different outputs (i.e., the corresponding rows of CAM_{C_j}(C(x))), the positions of the dimensions within the rows of C(x) have an impact on the CAM. Consequently, for a given combination of dimensions, we can assume that at least one dimension at a given position is responsible for the high value in the CAM row. For the rest of this paper, we use Σ_x as the set of all possible permutations of the dimensions of x, and x_π ∈ Σ_x for a single permutation of x. E.g., for x = {x^(0), x^(1), x^(2)}, one possible permutation is x_π = {x^(1), x^(0), x^(2)}. Figure 4 depicts an example of CAMs for different permutations. In this figure, for three given permutations of x (i.e., x_{π_0}, x_{π_1} and x_{π_2}), we notice that when x^(2) is in position two of the second row of C(x_π), the Class Activation Map CAM(C(x_π)) is greater than when x^(2) is not in position two. We infer that the second dimension of x in position two is responsible for the high value. Thus, we may examine different dimension combinations by keeping track of which dimension at which position activates the CAM the most. We now describe the steps necessary to retrieve this information, and define the following transformation M.
Definition 2. For a given data series x = {x^(0), x^(1), ..., x^(d−1)} of length n, a given class C_j and its Class Activation Map, we define M(CAM_{C_j}(C(x))) ∈ R^(d,d,n) (with CAM_{C_j}(C(x)) ∈ R^(d,n), and CAM_{C_j}(C(x))_i its i-th row) as follows:
M(CAM_{C_j}(C(x)))_{(i,p)} = CAM_{C_j}(C(x))_r, where r is the (unique) row of C(x) in which dimension x^(i) appears at position p.
Figure 5 depicts the M transformation. As explained in Definition 2, the M transformation enriches the Class Activation Map by adding the dimension position information. Note that if we change the dimension order of x, then M(CAM_{C_j}(C(x))) changes as well. Indeed, for a given dimension x^(i) and position p, M(CAM_{C_j}(C(x)))_{(i,p)} will not have the same value for two different dimension orders of x. Thus, computing M(CAM_{C_j}(C(x))) for different dimension orders of x will provide distinct information regarding the importance of a given position (subsequence) in a given dimension. We expect that subsequences (of a specific dimension) that discriminate one class from another will also be associated (most of the time) with a high value in the Class Activation Map.
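A sketch of the M transformation, assuming the shift-by-one cube of build_cube above (under that convention, dimension i sits at position p in row (i − p) mod d):

```python
import numpy as np

def m_transform(cam: np.ndarray) -> np.ndarray:
    """M transformation of a (d, n) CAM computed on the cube C(x).

    Returns M of shape (d, d, n), where M[i, p] is the CAM row of the
    rotation that placed dimension i at position p.
    """
    d, n = cam.shape
    m = np.empty((d, d, n))
    for i in range(d):
        for p in range(d):
            m[i, p] = cam[(i - p) % d]  # row (i - p) mod d holds dim i at position p
    return m
```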
Merging Permutations.
We compute M(CAM_{C_j}(C(x_π))) for different permutations x_π ∈ Σ_x. Note that the total number of permutations for high-dimensional data series is enormous: |Σ_x| = d!. In practice, we only compute M for a randomly selected subset Σ̄_x ⊂ Σ_x. We thus merge k = |Σ̄_x| permutations by computing the averaged matrix M̄_{C_j}(x) of all the M transformations of the permutations:
M̄_{C_j}(x) = (1/k) ∑_{x_π ∈ Σ̄_x} M(CAM_{C_j}(C(x_π)))
Figure 6 illustrates the process of computing M̄_{C_j}(x) from the set of permutations Σ̄_x. M̄_{C_j}(x) can be seen as a summarization of the importance of each dimension at each position in C(x), over all the computed permutations. Figure 6(b') (at the top of the figure) depicts M̄_{C_j}(x)_i, which corresponds to the i-th row (i.e., the dotted box in Figure 6(b)) of M̄_{C_j}(x). Each row of M̄_{C_j}(x)_i corresponds to the average activation of dimension i (for each timestamp) when dimension i is in a given position in C(x).
Note that all permutations of x are forwarded through the dCNN network without training it again. Thus, even though the permutations of x generate radically different inputs to the network, the network can still classify most of the instances correctly. For k permutations, we use k̂ to denote the number of permutations that the model has correctly classified. We provide an analysis (see Section 5) of k̂/k w.r.t. the classification accuracy of the model, and of the impact that k̂/k has on the discriminant features identification accuracy.
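Putting the pieces together, a sketch of the averaging step; here cam_fn is assumed to wrap the trained dCNN and return the (d, n) CAM of a cube for the class of interest, build_cube and m_transform are the sketches above, and the rows of the result are mapped back to the original dimension order of x:

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_m(x: np.ndarray, cam_fn, class_idx: int, k: int = 100) -> np.ndarray:
    """Average the M transformations over k random dimension permutations.

    x has shape (d, n); returns M_bar of shape (d, d, n).
    """
    d, n = x.shape
    m_bar = np.zeros((d, d, n))
    for _ in range(k):
        perm = rng.permutation(d)                     # one permutation pi
        cam = cam_fn(build_cube(x[perm]), class_idx)  # (d, n) CAM of C(x_pi)
        m_bar[perm] += m_transform(cam)               # dim j of x[perm] is x[perm[j]]
    return m_bar / k
```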
dCAM Extraction.
We can now use the previously computed M̄_{C_j} to extract explanatory information on which subsequences are considered important by the network. First, we note that each row of C(x) corresponds to the input format of the standard CNN architecture. Thus, we expect that the result of a row of M̄_{C_j} (one of the ten lines in Figure 6(b)) is similar to the regular CAM (Figure 6(d)). Moreover, in addition to the temporal information, we can extract temporal information per dimension. We know that, for a given position p and a given dimension i, M̄_{C_j}(x)_{(i,p)} represents the averaged activation over a given set of permutations. If the activation M̄_{C_j}(x)_{(i,p)} for a given dimension i is constant (regardless of its value, or the position p), then the position of dimension i is not important, and no subsequence in that dimension is discriminant. On the other hand, a high or low value at a specific position means that the subsequence at this specific position is discriminant. While it is intuitive to interpret a high value, interpreting a low value is counterintuitive. Usually, a subsequence at position p with a low value should be regarded as non-discriminant. Nevertheless, if the activation is low for p and high for the other positions, then the subsequence at position p is responsible for the low value and is thus discriminant. We experimentally observe this situation, where a non-discriminant dimension has a constant activation per position (e.g., see the dotted red rectangle in Figure 6(b): this pattern corresponds to a non-discriminant subsequence of the dataset). On the contrary, for discriminant dimensions, we observe a strong variance of the activation per position: either high values or low values (e.g., see the solid red rectangles in Figure 6(b): these patterns correspond to the (injected) discriminant subsequences highlighted in red in Figure 6(e)). We thus can extract the significant subsequences per dimension by computing the variance over all positions of a given dimension. We can filter out the irrelevant temporal windows using the average of M̄_{C_j}(x) over all dimensions and positions, and use the variance to identify the important dimensions in the relevant temporal windows. Formally, we define dCAM_{C_j}(x) as follows.
Definition 3. For a data series x and class C_j, dCAM_{C_j}(x) is defined, for each dimension i ∈ [0, d−1] and timestamp t, as:
dCAM_{C_j}(x)_{(i,t)} = Var_{p ∈ [0,d−1]}(M̄_{C_j}(x)_{(i,p,t)}) · (1/d²) ∑_{i'=0}^{d−1} ∑_{p=0}^{d−1} M̄_{C_j}(x)_{(i',p,t)}
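A sketch of this extraction step from the averaged matrix, following the variance-times-average reading above (the exact combination is a reconstruction of the definition from the surrounding description):

```python
import numpy as np

def dcam_from_m_bar(m_bar: np.ndarray) -> np.ndarray:
    """Extract the (d, n) dCAM heatmap from M_bar of shape (d, d, n)."""
    var_per_dim = m_bar.var(axis=1)        # (d, n): variance across positions
    mean_per_t = m_bar.mean(axis=(0, 1))   # (n,): average activation per timestamp
    return var_per_dim * mean_per_t        # high where a dimension's position matters
```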
Time Complexity Analysis
Observe that, since the permutations can be computed in parallel, the most important parameter for the execution time is the number of permutations k.
Further Observations
We note that, since in real use cases labels are not available, the number of correctly classified permutations (called k̂) could be used as a proxy to assess the quality of the explanation (see Section 5.6). Moreover, when analyzing sets of series, we can use dCAM on each one independently, and then aggregate the dCAM results to identify global discriminant features (see Section 5.8).
EXPERIMENTAL EVALUATION
5.1 Experimental Setup
We implemented our algorithms in Python 3.5 using the PyTorch library [48]. The evaluation was conducted on a server with Intel Core i7-8750H CPU 2.20GHz x 12, with 31.3GB RAM, and Quadro P1000/PCle/SSE2 GPU with 4.2GB RAM, and on Jean Zay cluster with Nvidia Tesla V100 SXM2 GPU with 32 GB RAM.
Our code and datasets are available online [1].
Datasets.
We conduct our experimental evaluation using real datasets from the UCR/UEA archive [12] to evaluate the classification performance of the competing methods. To evaluate discriminant feature identification, we use real datasets injected with known discriminant patterns, as well as a real use case from the medical domain. We use the StarLightCurves (classes 2 and 3 only), ShapesAll (classes 1 and 2 only), and Fish (classes 1 and 2 only) datasets from the UCR archive [12], in which we inject subsequences that will generate discriminant features. We build two types of datasets to study the ability of the algorithms to identify the discriminant patterns guiding the classification decision: (1) when these patterns occur in a subset of the dimensions at different timestamps, and (2) when these patterns occur in a subset of the dimensions at the same timestamp.
(1) For the Type 1 datasets, we build each dimension of Class 1 by concatenating random instances from one class of one of our UCR seed datasets. We build Class 2 by injecting, in the series of the other class of our UCR seed datasets, a pattern in 2 random dimensions at a random position in the series.
(2) For the Type 2 datasets, we build each dimension of Class 1 by concatenating random instances from one of the classes of our UCR seed datasets and injecting patterns from the other class in random dimensions and at different positions. We build Class 2 by injecting the patterns at the same position in 2 random dimensions.
Examples of Type 1 and Type 2 5-dimensional datasets based on StarLightCurves are depicted in Figures 7(a) and 7(b), respectively; we use 1000 such datasets. In addition, we consider a use case from medicine related to robot-assisted surgeon training (Section 5.8).
Evaluation Measures.
We first evaluate the classification accuracy, C-acc. This measure corresponds to the ratio of correctly classified instances among all instances in the test dataset.
We then evaluate the discriminant features accuracy, D-acc, for Class 1 (see Figure 7). We define D-acc as the PR-AUC of the CAM/cCAM/dCAM obtained from the models against the ground truth. The ground truth is a series that has 1 at the positions of discriminant features (see Figure 7(a.2): the ground truth contains 1 at the positions of the injected patterns, marked with the red rectangles, and 0 otherwise). We motivate the choice of PR-AUC (instead of ROC-AUC) by the fact that we are more interested in measuring the accuracy of identifying the injected patterns (representing at most 0.02 percent of the dataset) than the accuracy of not detecting the non-injected patterns. In this very unbalanced case, PR-AUC is more appropriate than ROC-AUC [13].
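A sketch of this evaluation on toy arrays follows; average precision is used here as the PR-AUC estimate, and the shapes and mask are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import average_precision_score

d, n = 10, 1000
ground_truth = np.zeros((d, n))
ground_truth[2, 100:150] = 1                 # toy mask: injected pattern on dim 2
dcam_scores = np.random.rand(d, n)           # stand-in for a dCAM heatmap

# Flatten both (d, n) arrays and score the heatmap against the mask.
d_acc = average_precision_score(ground_truth.ravel(), dcam_scores.ravel())
print(f"D-acc (PR-AUC): {d_acc:.3f}")
```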
Note that even though we annotate each point of the injected subsequences as discriminant, only some subparts of these sequences may be discriminant, thus leading to a D-acc of less than 1. Finally, for CNN/ResNet/InceptionTime, we compute the D-acc scores by assuming that their (univariate) CAM values are the same for all dimensions. We mark their D-acc scores with a star in Table 2.
We use the same architecture setup for all models. We then use CAM for CNN, ResNet and InceptionTime, cCAM for cCNN, cResNet and cInceptionTime, and dCAM for dCNN, dResNet and dInceptionTime to identify discriminant features. For CNN, cCNN and dCNN, we use 5 convolutional layers with (64, 128, 256, 256, 256) filters, respectively, a kernel size of 3, and a padding of 2. For ResNet, cResNet and dResNet, we use three blocks with three convolutional layers of 64 filters (for the first two blocks) and 128 filters (for the last block), with kernel sizes of 8, 5 and 3 for the three layers of each block. For InceptionTime, cInceptionTime and dInceptionTime, we use the same architecture as originally defined [28]. We also include MTEX-CNN [2] (MTEX) as a baseline, representative of other kinds of architectures that can provide a multivariate CAM. Its explanation is computed separately for discriminant features and timestamps using grad-CAM [57] (MTEX-grad). The latter is a variant of the usual CAM that uses the gradients of the weights instead of the GAP layer to compute the activation.
We finally include three recurrent neural networks in our benchmark: the usual Recurrent Neural Network [53] (RNN), Long Short-Term Memory [23] (LSTM), and Gated Recurrent Unit [10] (GRU). Following previous evaluation work conducted on the UCR/UEA archive [60], we use for all three networks one recurrent hidden layer of 128 neurons, and add one dense layer connecting the 128 neurons to the class neurons.
We split our dataset into training and validation sets with 80 and 20 percent of the total dataset, respectively (equally balanced between the two classes). The training set is used to train the model, and the validation set is used for validation during the training phase. For the synthetic datasets, we generate a fully new test dataset and evaluate C-acc and D-acc on it. We train all models with a learning rate of 0.00001, a maximum batch size of 16 instances (fewer if the GPU memory cannot fit 16 instances), and a maximal number of epochs equal to 1000 (we use early stopping and stop before 1000 epochs if the model starts overfitting). For dCAM, we use k = 100 (number of random permutations), a value that we empirically verified (due to lack of space, a detailed analysis of the effect of k is in the full version of the paper).
Classification Accuracy evaluation
We first evaluate the classification performance of our proposed approaches (denoted as c-Baselines and d-Baselines in Table 2) and the different baselines (denoted as Baselines in Table 2) over the UCR/UEA multivariate data series. We run each method ten times and report the average C-acc.
We first observe that the recurrent models (RNN, GRU, LSTM) are less accurate by approximately 0.10 than the CNN-based models (CNN, ResNet and InceptionTime). These results confirm the observations of previous works [26,27,52,65]. We then observe that the ResNet-based architecture performs better than the CNN-based and InceptionTime-based architectures. Moreover, we note that, overall, dCNN and dResNet have a better C-acc than CNN and ResNet, respectively. This observation confirms that our proposed architectures (dResNet, dCNN) do not result in any loss in accuracy; on the contrary, they are slightly more accurate than the usual architectures (ResNet, CNN). We notice that dResNet is, on average, one rank higher than ResNet. Similar observations can be made when comparing dCNN and CNN.
Moreover, Table 2 confirms that using the cCNN baseline (or cResNet and cInceptionTime) implies a drop in classification accuracy. For instance, the CNN architecture is 0.05 more accurate than the cCNN architecture. Thus, c-Baselines cannot guarantee at least equivalent accuracy.
Table 2: C-acc averaged accuracy for 10 runs over UCR/UEA datasets.
Figure 8(a) depicts the comparison between the dCNN C-acc (on the y-axis) and the CNN/cCNN C-acc (on the x-axis; CNN: blue circles; cCNN: red crosses). The dotted line corresponds to cases where both classifiers have the same accuracy. We observe that almost all cCNN C-acc values (red crosses) are above the dotted line, which shows that dCNN is more accurate for most datasets. Similarly, we observe that most of the CNN C-acc values (blue circles) are above the dotted line, which means that dCNN is more accurate than CNN. The same observation can be made when examining Figure 8(b), in which dResNet is compared with ResNet and cResNet.
However, the same observation does not hold when comparing dInceptionTime with InceptionTime and cInceptionTime. Even though in Figure 8(c) most of the red crosses are above the dotted line, indicating that dInceptionTime is most of the time more accurate than cInceptionTime, the blue circles are equally distributed above and under the dotted line. Thus, dInceptionTime is not more accurate than InceptionTime. The results in Table 2 also show that the averaged C-acc across all datasets (as well as the averaged rank) is lower for dInceptionTime than for InceptionTime. Nevertheless, the performance of dInceptionTime is very close to that of InceptionTime. Thus, transforming the original architecture into one that supports dCAM does not penalize classification performance.
Finally, we observe that the accuracy of MTEX-CNN is lower than that of Baselines and the d-Baselines. We note that MTEX-CNN and cCNN have very similar performance (average accuracy of 0.71 and 0.70, and average rankings of 6.39 and 7.30). As we explained earlier (see Section 2.3), the MTEX-CNN architecture is divided into two blocks. The experiments demonstrate that the 2nd block cannot capture all discriminant features, and thus, cannot reach the accuracy of a traditional CNN. We conclude that MTEX-CNN is not as accurate as traditional architectures (such as CNN) or our proposed architectures (such as dCNN).
Discriminant Features Identification
We now evaluate the classification accuracy (C-acc) and the discriminant features identification accuracy (D-acc) on synthetically built datasets. Table 3 depicts both C-acc and D-acc on Type 1 and Type 2 datasets, when varying the number of dimensions from 10 to 100. In this experiment, we keep as baselines only ResNet and cResNet, which are the most accurate methods among all other baselines.
Overall, all methods have better performance (both C-acc and D-acc) on Type 1 datasets than on Type 2 datasets. This was expected: discriminant features located in single dimensions are easier to find than discriminant features that depend on several dimensions.
Table 3: C-acc and D-acc averaged accuracy for 10 runs over synthetic datasets.
We then notice that, for low-dimensional (d=10) datasets, ResNet, dResNet, dCNN, and dInceptionTime achieve a nearly perfect C-acc. Moreover, ResNet and MTEX-CNN perform well for low-dimensional data series but start to fail for a more significant number of dimensions. While the drop is already significant for the Type 1 dataset built from the StarLightCurves dataset, it is even stronger for Type 2 datasets, for which ResNet fails to classify instances with a number of dimensions ≥ 20. On the contrary, dCNN, dResNet, and dInceptionTime, which use the random permutations in the input, are not sensitive to the number of dimensions and have an almost perfect C-acc for most Type 1 datasets. We observe a C-acc drop for dCNN, dResNet and dInceptionTime as dimensions increase for Type 2 datasets. However, this drop is significantly less pronounced than that of ResNet. Overall, dCNN, dResNet, and dInceptionTime, which have on average the three highest ranks, are the most accurate methods.
Regarding cResNet, although it achieves a nearly perfect C-acc for Type 1 datasets, we observe that it fails to correctly classify instances of Type 2 datasets. As explained in Section 2, its input data structure is not rich enough to allow comparisons among dimensions, which is the main way to find discriminant features between the two classes of Type 2 datasets. We also observe that MTEX-CNN fails to classify instances of Type 2 datasets. Thus, this architecture does not correctly detect discriminant features across different dimensions. Overall, Figure 9(a) shows that dCNN, dResNet and dInceptionTime are equivalent to cResNet for Type 1 (Figure 9(a.1)), outperform all the baselines for Type 2 (Figure 9(a.2)), and are in general better than the baselines (ResNet and cResNet) for both types (Figure 9(a)). We now compare the different methods using the D-acc measure. We observe that the baseline cCAM (computed with cCNN) outperforms CAM (computed with ResNet) and dCAM (with all of dCNN, dResNet and dInceptionTime) for Type 1 datasets. This is explained by the fact that these classes can be discriminated by treating dimensions independently; thus, cCAM (with no comparisons between dimensions) is naturally the best solution. Nevertheless, as Type 2 datasets require comparisons among dimensions to discriminate the classes, cCAM fails on them, with a D-acc very similar to that of a random classifier. This confirms that such a baseline cannot be considered as a general solution for multivariate data series classification. We also observe that the D-acc of the explanation method of MTEX-CNN (MTEX-grad) is lower than that of dCAM for Type 1, and close to the D-acc of cCAM for Type 2, meaning that it cannot identify the discriminant features of Type 2 datasets. We then compare CAM and dCAM (used with dCNN/dResNet/dInceptionTime). Figure 9(b) shows that dCAM significantly outperforms CAM, and that D-acc reduces for all models as the number of dimensions increases. Nevertheless, the D-acc of dCAM remains relatively high for both Type 1 (Figure 9(b.1)) and Type 2 (Figure 9(b.2)) datasets (for fewer than 60 dimensions).
This result demonstrates the superiority of dCAM over state-of-the-art methods. Besides, the average ranks in Table 3 indicate that dCAM computed from dResNet has the highest rank of 2.15.
Influence of the number of permutations
This section analyzes the influence of the number of permutations on the discriminant features identification accuracy. We compute the identification accuracy for 20 different instances, for which dCAM is computed using between 1 and 400 permutations. We randomly select the 20 instances from the nine ShapesAll datasets of Types 1 and 2. (We excluded the Type 2 ShapesAll dataset with 100 dimensions, because no model trained on this dataset leads to reasonably accurate results: see Table 3.) Figure 10(a.1) for Type 1 datasets and Figure 10(a.2) for Type 2 datasets depict the evolution of the identification accuracy (on average for the 20 instances) as the number of permutations increases, for dCNN, dResNet, and dInceptionTime. The results show that the model architecture influences the convergence speed, and that convergence slows down as the number of dimensions increases. Figure 10(b) shows that the number of permutations needed to reach 90% of the best identification accuracy is greater when the number of dimensions is higher. The latter holds for Type 1 (Figure 10(b.1)) and Type 2 datasets (Figure 10(b.2)). Overall, we notice that the dCAM computation with dResNet and dInceptionTime converges faster than with dCNN. Studying deep neural network architectures that could reduce the number of permutations needed to reach the maximum identification accuracy is an open research problem.
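To make the convergence experiment concrete, the following is a minimal sketch of how the identification accuracy can be tracked as the permutation budget grows. The `dcam_fn` wrapper, the choice of average precision as the score, and the ground-truth mask layout are our own illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def identification_accuracy(activation, truth_mask):
    # One plausible score: how well high activations coincide with the known
    # discriminant positions of a synthetic instance (binary (D, T) mask).
    return average_precision_score(truth_mask.ravel(), activation.ravel())

def convergence_curve(dcam_fn, model, x, truth_mask,
                      ks=(1, 5, 10, 50, 100, 200, 400)):
    # dcam_fn(model, x, k) -> (D, T) activation map averaged over k random
    # row permutations; it is injected here because its implementation is
    # model-specific (dCNN, dResNet, or dInceptionTime).
    return {k: identification_accuracy(dcam_fn(model, x, k), truth_mask)
            for k in ks}
```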
Classification accuracy versus identification accuracy
In this section, we first analyze the relation between classification accuracy and identification accuracy. We then evaluate the impact that classification accuracy has on the number of permutations that are correctly classified. We finally evaluate the impact that this number has on identification accuracy. Figure 11(1) depicts the relation between classification accuracy and identification accuracy for dCNN (Figure 11(a.1)), dResNet (Figure 11(b.1)), and dInceptionTime (Figure 11(c.1)) for all synthetic datasets. Note that all methods exhibit a logarithmic relation (dotted red line) between classification accuracy (x-axis) and identification accuracy (y-axis). This confirms that the accuracy of the trained model has a significant impact on discriminant feature identification. Figure 11(3) depicts on the y-axis the ratio of correctly classified permutations among all permutations versus the classification accuracy (on the x-axis). In this case, for all of dCNN (Figure 11(a.3)), dResNet (Figure 11(b.3)), and dInceptionTime (Figure 11(c.3)), we observe a linear relationship for classification accuracy between 0.7 and 1. This means that the ratio of correctly classified permutations is greater when the model is more accurate. Nevertheless, for classification accuracy between 0.5 and 0.7, we observe a high variance in this ratio. Thus, an inaccurate model may still lead to a high ratio. Finally, Figure 11(2) depicts the relation between the ratio of correctly classified permutations (on the y-axis) and the identification accuracy (on the x-axis). We observe a similar relationship between the two, which means that a low ratio may lead to inaccurate discriminant feature identification.

Figure 11: Evaluation of classification accuracy, identification accuracy, and the ratio between the number of permutations and the number of correctly classified permutations, for dCNN, dResNet, and dInceptionTime.
As hypothesized in Section 4, the experimental results confirm that an inaccurate model (for all of dCNN, dResNet, and dInceptionTime) cannot be used to identify discriminant features. Moreover, since in a real use case it is not possible to measure the identification accuracy, we can use the ratio of correctly classified permutations to estimate the discriminant feature identification accuracy. Even though Figure 11(2) demonstrates that a high ratio does not always lead to a high identification accuracy, in practice we can safely assume that a low ratio will most probably correspond to a low identification accuracy. Therefore, this measure can be used as a proxy for the estimation of the explanation quality.
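As a sketch of how this proxy can be computed in practice, the snippet below estimates the fraction of random dimension permutations of an instance that the trained model still classifies correctly. The `predict_label` wrapper and the default budget are illustrative assumptions; in dCAM the permuted instance is first expanded into the cube-shaped input before being classified.

```python
import numpy as np

def correct_permutation_ratio(predict_label, x, true_label, k=200, seed=None):
    # predict_label: callable mapping a (D, T) array (one row ordering of the
    # instance) to a predicted class label; assumed to wrap the trained
    # dCNN/dResNet/dInceptionTime forward pass on the permuted input.
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    hits = 0
    for _ in range(k):
        perm = rng.permutation(d)            # random ordering of the dimensions
        if predict_label(x[perm]) == true_label:
            hits += 1
    return hits / k                          # low ratio -> distrust the explanation
```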
Execution time evaluation
In this section, we evaluate the execution time of our proposed approaches and the baselines. Figure 12(a) depicts the training execution time (for one epoch) when we vary the data series length with a constant number of dimensions fixed to 10 (Figure 12(a.1)), and when we vary the number of dimensions with a constant data series length fixed to 100 (Figure 12(a.2)). In these two experiments, we use a batch size of 4 for all models. Overall, CNN- and InceptionTime-based architectures are faster than ResNet-based architectures, and CNN, ResNet, InceptionTime, and MTEX-CNN remain faster as the number of dimensions and the data series length increase. Nevertheless, dCNN/dResNet/dInceptionTime and cCNN/cResNet/cInceptionTime require the same training time. We now evaluate the execution time and the number of epochs required to train our proposed approaches and the baselines. Figure 12(c) depicts the time (in seconds) and the number of epochs needed to reach 90% of the best loss (on the test set) for the Type 1 ShapesAll datasets, varying the number of dimensions between 10 and 100. We use a batch size of 16 for all models. The red dot indicates that a model is either overfitted or underfitted (i.e., the loss for the first epoch is approximately equal to the best loss). We observe that cCNN/cResNet/cInceptionTime and dCNN/dResNet/dInceptionTime require the same amount of time to be trained, but the traditional baselines require more epochs than the proposed d-methods. Thus, the training time for ResNet is longer than that of dResNet for 10 and 20 dimensions.
Finally, we measure the execution time required to compute dCAM (for dCNN, dResNet, and dInceptionTime), when we vary the number of dimensions with a constant data series length fixed to 400 (Figure 12(b.1)), when we vary the data series length with a constant number of dimensions fixed to 10 (Figure 12(b.2)), and when we vary the number of permutations (Figure 12(b.3)). Note that the dCAM execution times are very similar for the three types of architectures. Moreover, the execution time increases superlinearly with the number of dimensions but is linear in the data series length and in the number of permutations.
Use Case: Surgeon skills explanation
We now illustrate the applicability of our method to a real-world use case. In this use case, we train our dCNN network on the JIGSAWS dataset [21] to identify novice surgeons, based on kinematic data series recorded while performing surgical suturing tasks (i.e., wound stitching) using robotic arms and surgical grippers.
[Dataset] The data series are recorded from the da Vinci surgical robot. The multivariate data series are composed of 76 dimensions (an example of a multivariate data series is depicted in Figure 13(a)). Each dimension corresponds to a sensor (with an acquisition rate of 30 Hz). The sensors are divided into four groups: the left and right patient-side manipulators (PSMs; green rectangle in Figure 13(a)), and the left and right master tool manipulators (MTMs). To perform a suture, the surgeons perform different gestures (11 in total). For example, gesture 1 refers to reaching for the needle with the right hand, while gesture 11 refers to dropping the suture at the end and moving to the end points. Each gesture corresponds to a specific time segment of the dataset, involving all sensors. For example, the dotted red rectangle in Figure 13(a) represents gesture 6: pulling the suture with the left hand. Surgeons who reported having more than 100 hours of experience are considered experts, surgeons with 10-100 hours are considered intermediate, and surgeons with less than 10 hours are labeled as novices. We have 19 multivariate data series in the novice class, 10 in the intermediate class, and 10 in the expert class. More information on this dataset can be found in [21].
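For exposition, a trial in this dataset can be represented as sketched below; the field names and shapes are our own assumptions for illustration, not the JIGSAWS file format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SuturingTrial:
    kinematics: np.ndarray   # (76, T): one row per sensor, sampled at 30 Hz
    gestures: list           # [(gesture_id, t_start, t_end), ...], ids 1..11
    hours_of_experience: float

    @property
    def skill_class(self) -> str:
        # Labeling rule taken from the text.
        if self.hours_of_experience > 100:
            return "expert"
        if self.hours_of_experience < 10:
            return "novice"
        return "intermediate"
```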
[Training] For the training procedure, we use 80% of the dataset (randomly selected from the three classes) for training. The remaining 20% of the dataset is used for validation and early stopping. Since the instances do not all have the same length, we use batches composed of one instance when training the models on the GPU.
[Evaluation] Similar to what has been reported in previous work [26], we achieve 100% accuracy on the train and test datasets (for ten different randomly selected train and test sets). We proceed to compute the dCAM for every instance of the novice class. The dCAM of the multivariate data series depicted in Figure 13(a) is displayed in Figure 13(b). In the latter, the deep blue color indicates weakly activated subsequences (i.e., non-discriminant of belonging to the novice class), while the yellow color points to highly activated subsequences. First, we note that some groups of sensors (dimensions) are more activated than others. In Figure 13(b), the left and right "MTM gripper angles" are the most activated sensors. Figure 13(c), which depicts the box-plot of the maximal activation values per sensor, confirms that, in the general case, the MTM gripper angles, as well as the MTM and PSM tooltip rotation matrices (three of these sensors are highlighted in red in Figure 13(a)), are the most discriminant sensors. On the contrary, linear and angular speeds are not discriminant and hence cannot explain the novice class. As explained in Section 4.6, we now extract global explanations at the scale of the dataset. We compute the dCAM for each instance, and we then extract global statistics on the sensors, i.e., aggregated over all instances. Figure 13(d) depicts the averaged activation per sensor per gesture. Overall, dCAM identifies gesture 9 (using the right hand to help tighten the suture) as a discriminant gesture, because of the discriminant subsequences present in the sensors "right MTM gripper angle", "5th element", and "7th element" (marked with red ovals in Figure 13(d)). These three identified sensors (dimensions) are relevant to the right PSM tooltip rotation matrix and are important for the suturing process. Similarly, we observe that gesture 6 (i.e., pulling the suture with the left hand) is discriminant, and is activated the most by the "left MTM gripper angle" sensor. This result is consistent with a previous study [26], which also identified gesture 6 as discriminant of belonging to the novice class. Nevertheless, that study used CAM to only highlight the time interval corresponding to gesture 6. On the contrary, dCAM provides more accurate (and useful) information: it identifies not only the discriminant gesture 6, but also the discriminant sensors. This allows analysts to recognize exactly what aspects of the particular gesture are problematic.
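A minimal sketch of the aggregation behind such a per-sensor, per-gesture summary is given below; the data layout (per-instance dCAMs plus 1-indexed gesture segments) is assumed for illustration.

```python
import numpy as np

def mean_activation_per_sensor_per_gesture(dcams, gesture_segments, n_gestures=11):
    # dcams: list of (D, T_i) activation maps, one per instance of the class.
    # gesture_segments: per-instance lists of (gesture_id, t_start, t_end).
    # Returns a (D, n_gestures) matrix of mean activation, the kind of
    # statistic behind the heatmap of Figure 13(d).
    d = dcams[0].shape[0]
    total = np.zeros((d, n_gestures))
    count = np.zeros(n_gestures)
    for cam, segments in zip(dcams, gesture_segments):
        for gid, t0, t1 in segments:
            total[:, gid - 1] += cam[:, t0:t1].mean(axis=1)  # gestures are 1-indexed
            count[gid - 1] += 1
    return total / np.maximum(count, 1)  # guard against gestures never observed
```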
[Summary] The application of dCAM in the robot-assisted surgeon training use case demonstrated its effectiveness. Our approach was able to provide meaningful explanations for the classification decisions, based on specific gestures (subsequences) and specific sensors (dimensions) that describe particular aspects of these gestures, i.e., the positioning and rotation angles of the tip of the stitch gripper. Such explanations can help surgeons improve their skills.
CONCLUSIONS
Even though data series classification using deep learning has attracted a lot of attention, existing techniques for explaining the classification decisions fail for the case of multivariate data series. We described a novel approach, dCAM, based on CNNs, which detects discriminant subsequences within individual dimensions of a multivariate data series. The experimental evaluation with synthetic and real datasets demonstrates the superiority of our approach.
Reservoir characterization of the Abu Roash D Member through petrography and seismic interpretations in Southern Abu Gharadig Basin, Northern Western Desert, Egypt
This research combines petrography and seismic analysis to assess the composition of the Upper Cretaceous Abu Roash (AR)/D carbonate member in the Southwest Abu-Sennan oil field in the Southern Abu Gharadig Basin within the Northern Western Desert of Egypt. Various datasets were used, including petrographic thin sections and electrical well logs for four stratigraphic wells (Well-01, -02, -03, and -04), along with a time-domain seismic dataset covering the study area. Petrographic analysis across multiple depths and intervals has provided valuable insights. Well-01 demonstrates mud-wackstone with diverse mineral components at 1671–74 m MD, indicating favorable reservoir quality. Well-02 exhibits diverse compositions at intervals 1740–43 m MD and 1746–49 m MD, also showcasing good reservoir quality. Well-03 reveals a packstone rock type at 1662–65 m MD with favorable reservoir characteristics. Well-04 displays peloids wack-packstone and oolitic packstone at intervals 1764–67 m MD and 1770–73 m MD, respectively, both indicating good reservoir quality. Integrating the petrography and seismic attribute results concerning the structural level of AR/D in the studied wells, it is evident that Well-03 stands out due to its relatively high structural level: drilled near a major fault, it reveals distinct fracture sets that contribute to a notably high reservoir quality, as depicted in the RMS amplitude and Ant track attribute maps. The AR/D reservoir levels in Wells 02 and 04 are positioned at structurally lower levels and face challenges with overburden pressure and mechanical compaction, resulting in diminished facies quality for the reservoir. Seismic attributes such as Ant track and RMS amplitude indicated that the presence of fractures within the AR/D Member's carbonate is linked to the prevalence of interpreted normal faults. The procedure implemented in this research can be applied to enhance comprehension of AR/D carbonate reservoirs in adjacent regions, thereby increasing the hydrocarbon exploration possibilities.
Petrography is an essential tool in reservoir characterization and detailed geological studies, providing crucial information about rocks' mineralogical composition, texture, and diagenetic history 1. This information is essential for interpreting reservoir quality, pore types, and fluid distribution. On the other hand, seismic interpretation offers a view of subsurface structures through seismic data analysis 2-5. Integrating the available seismic data with petrography enhances the delineation of structural and stratigraphic features, providing a more accurate depiction of reservoir geometry and architecture 6. There are some limitations in data availability and quality, which affect the study's overall precision.
This study aims to assess the composition and reservoir quality of the Upper Cretaceous Abu Roash (AR)/D carbonate member in the SWS oil field in the Southern Abu Gharadig Basin within the Northern Western Desert of Egypt, to address the key deficiencies in the study area 7-10. The study area includes wells drilled away from the intended structure, leading to the identification of unfavorable facies. Additionally, the area lacks research focusing on carbonate reservoirs. The study area is the Southwest Abu-Sennan (SWS) oil field in the Southern Abu Gharadig Basin within the Northern Western Desert of Egypt, lying between latitudes 29°32′ and 29°35′ North and longitudes 28°30′ and 28°35′ East (Fig. 1).
The AR/D carbonate reservoir in the Southern Abu Gharadig Basin, situated in the northern Egyptian Western Desert, is a significant hydrocarbon-bearing formation within the Upper Cretaceous strata 17, specifically in the Abu Roash Formation. This reservoir plays a crucial role in the region's overall hydrocarbon potential.
Paleoenvironmental conditions
During the Upper Cretaceous, the northern Egyptian Western Desert underwent a complex interplay of marine and non-marine conditions 18. The AR/D reservoir likely originated in a shallow marine setting, influenced by periodic sea-level fluctuations. The Abu Roash Formation, known for its Upper Cretaceous carbonate-rich sequences, exhibits alternating layers of limestone, dolomite, and shale, reflecting diverse depositional environments. In the Southern Abu Gharadig Basin, a thick limestone body intercalated with thin shale streaks, exhibiting various colors and textures, is reported in 19. Macrofaunal and microfaunal content in the AR/D indicates shallow marine carbonates with detrital clastic material influx.

Figure 1. (a) Regional two-way time (TWT) map near the Lower Cretaceous level across the Northern Egyptian Western Desert (updated after 11-13). (b) Location of the Southern Abu Gharadig area and the data used in this study.
Source rock and migration
The regional hydrocarbon source rocks in the Southern Abu Gharadig Basin are associated with the Jurassic- and Upper Cretaceous-aged formations. Organic-rich shales within these formations served as prolific source rocks, generating hydrocarbons that migrated upward and accumulated in the porous and permeable intervals of the possible reservoirs. Diagenetic processes, including cementation, dolomitization, and fracturing, significantly influenced the reservoir quality of the AR/D carbonate interval. Understanding the diagenetic history is crucial for reservoir characterization and production optimization.
Structural framework
The geological history and successive structural events in the Northern Egyptian Western Desert, particularly in the Abu-Gharadig Basin, are characterized by EW and ENE-WSW trending faults spanning the Tertiary, Cretaceous, and Jurassic periods 7-10,20. The Abu-Gharadig Basin exhibits folds, rotated fault blocks, faults, and unconformities, with their dominance outlined by 11. These geological structures play a crucial role in the overall architecture of the basin 21. The structural framework of the Southern Abu Gharadig Basin mirrors that of the Northern Egyptian Western Desert and the Abu-Gharadig Basin, characterized by northwest-southeast trending anticlines and synclines dissected by EW and ENE-WSW trending faults. These structural features indicate tectonic activity during the Cretaceous 22,23 and have played a key role in the trapping and accumulation of hydrocarbons in the AR/D reservoir. In conclusion, the AR/D carbonate reservoir is a complex geological entity shaped by stratigraphic, structural, and diagenetic factors. Ongoing research and exploration activities refine our understanding of the reservoir, facilitating the sustainable development of hydrocarbon resources in the northern Egyptian Western Desert.
Petrographic analysis and the assessment of visual porosity were conducted on thin sections from the four wells presented in 19, employing the rock classification of 25. The analysis facilitated the determination of reservoir components, porosity types, and diagenetic processes such as cementation, recrystallization, and compaction.
The workflow defining the practical steps followed in the study is represented in Fig. 3. A stratigraphic correlation was performed between the wells for the Upper Cretaceous members, as depicted in Fig. 4. A lithostratigraphic analysis was also performed based on ditch-cuttings investigation and mud-log descriptions across multiple wells in the designated area. Seismic attributes 26,27, such as Ant track and RMS amplitude, were computed to recognize edge geometries and stratigraphic anomalies, as outlined by 28,29.
Petrography analysis of the AR/D member in different wells
Our investigation into the AR/D Member in various wells within the Southern Abu Gharadig Basin is fundamental to interpreting the geological complexities of this carbonate reservoir. The analysis of Wells 01, 02, 03, and 04 has yielded comprehensive insights into the lithological variations and reservoir quality of the AR/D Member, significantly enhancing our understanding of its hydrocarbon potential.
In the domain of petrographic analysis, the assessment of visual porosity 30 from thin sections plays a pivotal role in unraveling the geological intricacies of subsurface formations. Integrating this microscopic examination with porosity charts provides a comprehensive understanding of the reservoir characteristics. The porosity chart, graphing depth on the Y-axis against porosity percentage on the X-axis, is a visual representation of the evolving porosity profile throughout the AR/D reservoir intervals. Distinct patterns emerge from the chart, showing the contributions of different types of porosity: secondary interparticle porosity (SWP), moldic porosity (MO), and fracture-related porosity (FR), which together control reservoir heterogeneity and influence fluid-flow dynamics in subsurface environments. Thus, the combination of visual porosity assessments from thin sections and their graphical representation in porosity charts is a powerful tool for geoscientists and petroleum engineers in deciphering the complexities beneath the Earth's surface.
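A minimal sketch of such a chart is shown below; the depth values follow the Well-01 intervals discussed later in this section, while the porosity percentages are placeholders, since the measured values appear only graphically in the original figures.

```python
import matplotlib.pyplot as plt

depths_m = [1671, 1674, 1677, 1680]   # top of each analyzed interval (m MD)
porosity_pct = [12.0, 8.5, 9.0, 7.0]  # illustrative values only

fig, ax = plt.subplots(figsize=(3, 6))
ax.plot(porosity_pct, depths_m, marker="o")
ax.set_xlabel("Visual porosity (%)")
ax.set_ylabel("Depth (m MD)")
ax.invert_yaxis()                     # depth increases downward, as in well plots
ax.grid(True, linestyle=":")
plt.tight_layout()
plt.show()
```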
The examination of visual porosity from thin sections holds paramount significance in geological and petrological studies, providing invaluable insights into the physical characteristics of rocks.Thin sections, prepared by slicing rock samples into ultra-thin slices, allow for detailed microscopic analysis of mineral constituents and their spatial arrangements.Visual porosity analysis aids in the identification and quantification of pore spaces within rocks, contributing to a comprehensive understanding of reservoir properties, fluid flow dynamics, and the overall geologic history of a region.Recent advancements in imaging techniques and analytical tools have enhanced the precision and efficiency of visual porosity assessments, facilitating more accurate interpretations of subsurface processes.A notable recent reference in this field is the work of 31 where innovative methodologies were employed to investigate porosity variations in sedimentary rocks, emphasizing the continued evolution and refinement of porosity analysis techniques.
AR/D member petrography analysis in Well-01
The depth interval of 1671-74 meters
The composition of the sedimentary profile indicates a prevalence of mud-wackstone. This type of rock is characterized by a minor presence of benthic foraminifera and very minimal terrigenous clays. The plate description also notes rare occurrences of Echinoides, ostracods, detrital quartz, dolomite, ferroan calcite, pelecypodes, ferroan dolomite, pyrite, phosphatic fragments, and glaucony. Despite the scarcity of these elements, the reservoir quality at this depth is notably good. The presence of minerals and fossils in the mud-wackstone suggests a sedimentary environment conducive to their preservation, contributing to the overall quality of the reservoir.
The depth interval of 1674-77 meters
The sedimentary profile transitions to a wackstone rock type.This lithological unit primarily consists of a minor proportion of benthic foraminifera and terrigenous clays, with very minor occurrences of pelloides and dolomite within the rock matrix.Despite the varied composition, the reservoir quality is assessed as moderate, with foraminifera and clays playing a significant role in the overall rock character.
The depth interval of 1677-1680 meters
The subsurface strata are characterized as wackstone, primarily consisting of minor terrigenous clays, with lesser amounts of dolomite and benthic foraminifera.Sparse occurrences of Echinoides, ostracods, pelloides, pyrite, detrital quartz, and ferroan dolomite contribute to the overall composition.The reservoir quality at this depth interval is deemed moderate, indicating a balance between porosity and permeability within the rock structure.
The depth interval of 1680-1683 meters
The sedimentary profile reveals a wackstone rock type dominated by terrigenous clays, pelloides, and benthic foraminifera.The rock exhibits a minor presence of dolomite and Echinoides.Although rare elements like pyrite, residual hydrocarbon, and ferroan dolomite are present in small quantities, the reservoir quality is assessed as moderate, suggesting potential hydrocarbon accumulation with some geological challenges.The petrographic examination highlights dynamic transitions from mud-wackstone to wackstone, providing crucial insights into the geological framework of the AR/D Member (Figs. 5 and 6).
AR/D member petrography analysis in Well-02
Figures 7 and 8 illustrate the petrographic thin-section analysis for Well-02, covering intervals between 1740 and 1752 m. The observations reveal the complex composition and reservoir characteristics of the AR/D Member, showcasing a dynamic transition from mud-wackstone to wackstone. Various components, including echinoides, ostracods, detrital quartz, dolomite, ferroan calcite, pelecypodes, ferroan dolomite, pyrite, phosphatic fragments, and glaucony, influence the reservoir quality, categorized as good to moderate across different intervals.

The depth interval of 1740-43 meters
The sedimentary profile is characterized by a diverse composition, including major proportions of ooids, echinoides, benthic foraminifera, detrital quartz, terrigenous clays, and dolomite. Despite the varied composition, the reservoir quality is deemed good, promising favorable conditions for potential resource extraction.
The depth interval of 1743-46 meters
The sedimentary profile is dominated by terrigenous clays, forming a wackstone rock type.There are trace amounts of Echinoides, pelecypods, and pyrite, with rare occurrences of ferroan calcite.The reservoir quality is assessed as moderate, indicating a moderate potential for fluid storage and flow within this geological formation.
The depth interval of 1746-49 meters
The sedimentary profile is characterized by a packstone rock type, exhibiting a minor presence of ooids, pelecypods, Echinoides, and benthic foraminifera.Despite minimal occurrences of detrital quartz, residual hydrocarbon, and pyrite, the reservoir quality is deemed good, reflecting favorable conditions for potential resource extraction or further geological studies.
The depth interval of 1749-52 meters
The prevailing geological composition is encapsulated within a wackstone rock type.This sedimentary unit primarily consists of terrigenous clays, algae, and ooids, indicating a depositional environment influenced by a mixture of terrestrial and marine processes.Despite the diverse sedimentary components, the reservoir quality of the rock is deemed unfavorable, pointing towards suboptimal conditions for fluid flow and extraction.
AR/D member petrography analysis in Well-03
Figures 9 and 10 present the petrographic analysis of Well-03, showcasing transitions from packstone to oolitic packstone, together with varied lithological compositions and their impact on reservoir quality. This emphasizes the importance of considering different facies in evaluating the hydrocarbon potential of the AR/D reservoir.
The depth interval of 1662-65 meters
The sedimentary profile reveals a distinctive packstone rock type dominated by common benthic foraminifera, with minor components of ooids, echinoides, terrigenous clays, pelecypods, dolomite, and pyrite.Ostracods and ferroan calcite are present in very minor quantities, while ferroan dolomite is exceptionally rare.The reservoir quality within this interval is assessed as good, indicating favorable conditions for fluid storage and migration.
The depth interval of 1665-68 meters
The geological composition is characterized by an oolitic wackstone, predominantly comprising common ooids, with minor quantities of benthic foraminifera, terrigenous clays, and chert. Despite the diverse nature of the sediment, the reservoir quality is described as poor, indicating limitations in the potential for hydrocarbon extraction.
The depth interval of 1668-71 meters
The rock formation is characterized as an oolitic packstone, revealing common ooids with minor occurrences of benthic foraminifera and terrigenous clays.Very minor proportions of pelecypods, dolomite, ferroan dolomite, rare ostracods, and ferroan calcite are noted.Despite the varied components, the reservoir quality of this oolitic packstone is deemed good.
The depth interval of 1671-74 meters
The geological composition is primarily represented by mudstone, consisting of terrigenous clays with minor occurrences of dolomite, ferroan dolomite, benthic foraminifera, Echinoides, ooids, glaucony, and pyrite. Unfortunately, the reservoir quality of this geological formation is characterized as poor, owing to the prevalence of mudstone and the limited occurrences of other minerals and microorganisms, making it challenging for hydrocarbon exploration and extraction.
AR/D member petrography analysis in Well-04
Petrographic thin sections and visual porosity analysis (Figs. 11 and 12) from Well-04 reveal dynamic transitions from mud-wackstone to oolitic-packstone.
The depth interval of 1761-64 meters
The plate description indicates the predominant presence of common fine terrigenous clays, with minor occurrences of Echinoides.There is a very minor presence of pyrite and rare occurrences of dolomite, suggesting poor reservoir quality.The rock type is identified as mud-wackstone, reflecting a lithological classification characterized by a matrix of mud-sized particles.
The depth interval of 1764-67 meters
The plate characterizes the rock as pelloids wack-packstone, with common peloids, minor occurrences of Echinoides, and a very minor presence of pelecypods and ostracods. Despite the sedimentary components, the reservoir quality is deemed good.

The depth interval of 1767-70 meters
The plate analysis reveals a composition characterized by common peloids, a minor presence of Echinoides, and very minor occurrences of ostracods. Rare residual hydrocarbons are observed, but the overall reservoir quality is described as poor. The rock type associated with this depth interval is identified as packstone.
The depth interval of 1770-73 meters
The rock prevalent in this depth range is identified as Oolitic Packstone, showcasing a composition dominated by common ooids.There are minor components of terrigenous clays, and very minor amounts of pyrite are noted.
The reservoir quality within this interval is considered moderate, indicating the potential for fluid movement within the rock formation. In analyzing the structural level of AR/D in the studied wells, it is evident that Well-03 stands out due to its relatively high structural position: drilled near a major fault, it reveals distinct fracture sets that contribute to a notably high reservoir quality, as depicted in the RMS amplitude and Ant track attribute maps illustrated in Figs. 13 and 14. These observations match the petrographic analysis performed. Well-01 exhibits conditions similar to Well-03, being situated in the central, highly fractured part of the study area, where a high-amplitude anomaly is preserved. Conversely, Wells 02 and 04 are positioned at structurally lower levels and face challenges with overburden pressure and mechanical compaction, resulting in diminished facies quality 32 for the reservoir (Fig. 15).
Discussion
Expanding on Noureldin's previous work in structural, stratigraphic, and petroleum system analysis 7-10, this work extends the subsurface assessment to encompass a broader characterization of the Upper Cretaceous carbonate member AR/D. Drawing on accumulated expertise in this field, the aim is to further detail the mapping of the AR/D Upper Cretaceous carbonate member, with a particular focus on enhancing characterization through petrographic and seismic analysis. This study is compared to the findings of 33, which focus on the seismic interpretation of the Abu Roash D Member, emphasizing the tectonic history and the relationship between fractures and normal faults.
Conclusion
The study aimed to assess the composition and reservoir quality of the Upper Cretaceous AR/D carbonate member in the Southern Abu Gharadig Basin within the Northern Western Desert of Egypt. The findings of this research follow:
• The study delves into the geological intricacies of the AR/D carbonate reservoir in the SWS oil field in the Southern Abu Gharadig Basin, Egypt.
• Integration of petrographic analysis, electrical well logs, and seismic data provides insights into the composition, lithology, and controlling structure of the target reservoir.
• Overcoming challenges in well placement and facies identification, the study establishes a foundation for further exploration in the region.
• Petrographic analysis reveals transitions from mud-wackstone to wackstone, packstone, and oolitic packstone, influencing reservoir quality.
• Diagenetic processes such as dolomitization and dissolution refine the understanding of the geological framework.
• Well analysis:
  • Well-01 exhibits mud-wackstone with various mineral components at 1671-74 m MD, indicating good reservoir quality.
  • Well-02 shows diverse compositions at intervals 1740-43 m MD and 1746-49 m MD, with good reservoir quality.
  • Well-03 reveals a packstone rock type at 1662-65 m MD with good reservoir quality.
• Seismic interpretation highlights structural complexity, including an asymmetrical anticline intersected by normal faults. Seismic attributes like Ant track and RMS amplitude aid in characterizing petrophysical properties and confirming hydrocarbon potential. Fractures within the AR/D carbonate, correlated with faults, act as structural traps for hydrocarbons.
• Insights gained from the study can extend to neighboring areas, enhancing hydrocarbon exploration potential.
Figure 3. The workflow defining the practical steps followed in the structural analysis.
Figure 5. Petrographic descriptions and reservoir qualities of rock samples from different depths in the well. (a) Depth range 1671-74 MD showing MUD-WACKSTONE with a minor presence of benthic foraminifera, very minor terrigenous clays, and rare occurrences of various minerals; reservoir quality is assessed as good. (b) Depth range 1674-77 MD displaying WACKSTONE with minor benthic foraminifera and terrigenous clays, and rare occurrences of other minerals, indicating moderate reservoir quality. (c) Depth range 1677-1680 MD exhibiting WACKSTONE with minor terrigenous clays, very minor dolomite and benthic foraminifera, and rare occurrences of other minerals, with moderate reservoir quality. (d) Depth range 1680-83 MD showing WACKSTONE with minor terrigenous clays, pelloides, and benthic foraminifera, very minor dolomite and echinoids, and rare occurrences of pyrite and ferroan dolomite, indicating moderate reservoir quality.
Figure 6. Well-01 analysis. (a) Porosity chart depicting the relationship between depth (Y-axis) and porosity percentage (X-axis), illustrating variations in subsurface porosity. (b) Pie chart illustrating the distribution of different types of porosities.
Figure 8. Well-02 analysis. (a) Porosity chart depicting the relationship between depth (Y-axis) and porosity percentage (X-axis), illustrating variations in subsurface porosity. (b) Pie chart illustrating the distribution of different types of porosities.
Figure 10. Well-03 analysis. (a) Porosity chart depicting the relationship between depth (Y-axis) and porosity percentage (X-axis), illustrating variations in subsurface porosity. (b) Pie chart illustrating the distribution of different types of porosities.
Figure 12. Well-04 analysis. (a) Porosity chart depicting the relationship between depth (Y-axis) and porosity percentage (X-axis), illustrating variations in subsurface porosity. (b) Pie chart illustrating the distribution of different types of porosities.
Figure 13. RMS average magnitude amplitude map of the Upper Cretaceous AR/D Member.
Figure 14. Ant track extracted value map of the Upper Cretaceous AR/D Member.
Figure 15. Integrated petrography and seismic analysis of the AR/D carbonate reservoir through the utilization of the RMS seismic attribute surface, depicting the relative facies distribution across the study area against the controlling structural elements and highlighting a distinct anomaly in its central region.
Nuclear Import of Insulin-like Growth Factor-binding Protein-3 and -5 Is Mediated by the Importin β Subunit*
Although insulin-like growth factor-binding protein (IGFBP)-3 and IGFBP-5 are known to modulate cell growth by reversibly sequestering extracellular insulin-like growth factors, several reports have suggested that IGFBP-3, and possibly also IGFBP-5, have important insulin-like growth factor-independent effects on cell growth. These effects may be related to the putative nuclear actions of IGFBP-3 and IGFBP-5, which we have recently shown are transported to the nuclei of T47D breast cancer cells. We now describe the mechanism for nuclear import of IGFBP-3 and IGFBP-5. In digitonin-permeabilized cells, where the nuclear envelope remained intact, nuclear translocation of wild-type IGFBP-3 appears to occur by a nuclear localization sequence (NLS)-dependent pathway mediated principally by the importin β nuclear transport factor and requiring both ATP and GTP hydrolysis. Under identical conditions, an NLS mutant form of IGFBP-3, IGFBP-3[228KGRKR → MDGEA], was unable to translocate to the nucleus. In cells where both the plasma membrane and nuclear envelope were permeabilized, wild-type IGFBP-3, but not the mutant form, accumulated in the nucleus, implying that the NLS was also involved in mediating binding to nuclear components. By fusing wild-type and mutant forms of NLS sequences (IGFBP-3 [215–232] and IGFBP-5 [201–218]) to the green fluorescent protein, we identified the critical residues of the NLS necessary and sufficient for nuclear accumulation. Using a Western ligand binding assay, wild-type IGFBP-3 and IGFBP-5, but not an NLS mutant form of IGFBP-3, were shown to be recognized by importin β and the α/β heterodimer but only poorly by importin α. Together these results suggest that the NLSs within the C-terminal domain of IGFBP-3 and IGFBP-5 are required for importin-β-dependent nuclear uptake and probably also accumulation through mediating binding to nuclear components.
The mitogenic effects of insulin-like growth factors (IGFs) 1 are modulated by a family of IGF-binding proteins (IGFBPs). Following their secretion into the extracellular environment, the IGFBPs inhibit or stimulate cell growth by regulating access of the extracellular IGFs to the type I IGF receptor (1). However, some IGFBPs, including IGFBP-3, also have effects on cell growth that are type I receptor-independent (2,3). Expression of recombinant human IGFBP-3, for example, has been shown to inhibit the proliferation of murine fibroblasts with a targeted disruption of the type I receptor (4). This growth inhibitory effect was directly related to the induction of apoptosis by IGFBP-3 (5). In addition, a number of potent growth-inhibitory and apoptosis-inducing agents such as transforming growth factor-β1, retinoic acid, tumor necrosis factor-α, and anti-estrogens also induce IGFBP-3 gene expression (6,7). These effects on cell growth may be mediated by IGFBP-3 in an IGF-independent manner. There are fewer reports of IGF-independent effects of IGFBP-5; these include its ability to stimulate bone cell growth in the absence of increased IGF-I binding to its receptor (8). The mechanism(s) for the IGF-independent effects of IGFBP-3 and IGFBP-5 are currently unknown but may involve a direct nuclear action.
Although some proteins appear to be constitutively nuclear, others enter the nucleus only under defined conditions (9). Thus, cells are able to control the activity of nuclear proteins by regulating their nuclear uptake during differentiation and changes in the metabolic state of the cell. The central pore of the nuclear pore complex allows molecules up to 45 kDa to move freely between the nuclear and cytoplasmic compartments (10). For proteins larger than 45 kDa, nuclear transport is generally an active, nuclear localization sequence (NLS)-dependent process that requires specific targeting sequences contained within the primary sequence of the transported protein or a cotransported protein (11).
The cell contains multiple signal-dependent pathways for nuclear transport, of which the best characterized requires the cytosolic receptors importin α and β (12), the monomeric guanine nucleotide-binding protein Ran (13-15), and interacting proteins such as nuclear transport factor 2 (16-18). Conventionally, the importin α subunit acts as an adapter, binding to the NLS of cytosolic proteins as well as to importin β, which together with Ran effects translocation through the nuclear pore complex. The importin α/β heterodimer recognizes three different classes of NLS: those that contain basic residues arranged as a single stretch (e.g. the NLS of the SV40 large tumor antigen, T-ag) (9,19,20), those arranged as two clusters of basic residues separated by a spacer region (bipartite NLS) (9,21), and those resembling the NLS of the yeast homeodomain protein Matα2 (22). Other signal-dependent pathways have been described; these include the transport of proteins that bind directly to and are transported by members of the importin β family (in this pathway the adapter, importin α, is not required to effect nuclear transport) (23-25), and those that do not require soluble cytosolic receptors at all but appear to require ATP (26-28).
Significantly, in the context of IGF-independent nuclear action, the C-terminal regions of IGFBP-3 and IGFBP-5 contain a domain with strong sequence homology to the bipartite NLS consensus motif (29). This basic domain is highly conserved in IGFBP-3 and IGFBP-5 from different species, suggesting that it has functional significance. Similar basic sequences have been identified in a number of other secreted proteins and shown to be important for their respective signaling roles. These include platelet-derived growth factor A (30), acidic fibroblast growth factor (31), and parathyroid hormone-related protein (32). We and others have described the nuclear transport of IGFBP-3 and IGFBP-5 in a number of cell lines (33-36).
As part of our investigation into the role of nuclear IGFBP-3 and IGFBP-5, the present study examines the mechanisms for their nuclear import. We report that previously identified NLS-like sequences within IGFBP-3 and IGFBP-5 are necessary and sufficient for their nuclear accumulation. IGFBP-3 nuclear import is an energy-dependent process requiring ATP and GTP hydrolysis and mediated by importin β. In addition, IGFBP-3 and IGFBP-5 are both recognized specifically by importin β and the importin α/β heterodimer but not by importin α. Thus, nuclear import of IGFBP-3, and by analogy IGFBP-5, appears to occur by a signal-dependent importin β-mediated pathway. In addition, we show that, possibly mediated by its NLS, IGFBP-3 is capable of interaction with nuclear binding sites, which may play an important role in its nuclear accumulation.
Materials-Recombinant human IGFBP-3 and IGFBP-3[228KGRKR → MDGEA] were produced by a replication-deficient adenovirus-mediated expression system, as described previously (37). IGFBP-3 was purified from conditioned media by IGF-I affinity chromatography and reverse-phase high pressure liquid chromatography (38). In studies requiring fluorescently labeled IGFBP-3, the protein was conjugated to dichlorotriazinylaminofluorescein I HCl as described previously for Cy3 (35). The fusion proteins generated by linking β-galactosidase to the bipartite NLS derived from the Xenopus laevis phosphoprotein, N1N2 (N1N2 NLS:β-gal), or the T-ag NLS were expressed, purified, and, where appropriate, fluorescently labeled with 5-iodoacetamidofluorescein as described previously (39). Recombinant human IGFBP-5 was a generous gift from J. Zapf (Zürich, Switzerland). IGFBP-1 was purified from human amniotic fluid (40), recombinant human IGFBP-2 was provided by Sandoz (Basel, Switzerland), and IGF-I was provided by Genentech (South San Francisco, CA). Antiserum against IGFBP-3 was prepared in this laboratory following immunization of rabbits with purified antigen, and the monoclonal antibody specific for importin β (mAb3E9, from purified ascites fluid) was from S. Adam (Chicago, IL). Dichlorotriazinylaminofluorescein I HCl was purchased from Research Organics, and 5-iodoacetamidofluorescein and Texas Red-dextran (~70 kDa) were purchased from Molecular Probes. Creatine phosphokinase, creatine phosphate, ATP, FITC-dextran (~77 kDa), leupeptin, apyrase, GTPγS, CHAPS, Triton X-100, and RIA-grade BSA were obtained from Sigma.
In Vitro Nuclear Transport Assay-In vitro nuclear transport was carried out as described previously (41). Chinese hamster ovary (CHO) cells were cultured on glass coverslips, washed with ice-cold transport buffer (50 mM Hepes/KOH, pH 7.3, 110 mM potassium acetate, 5 mM sodium acetate, 2 mM magnesium acetate, 1 mM EGTA, and 1 mM dithiothreitol), and permeabilized with 50 μg/ml digitonin (Calbiochem) for 5 min on ice. The cells were washed with transport buffer, and the coverslips were inverted over 20 μl of transport buffer containing either IGFBP-3 (5 ng/μl) or N1N2 NLS:β-gal (0.2 μg/μl) with 45 mg/ml rabbit reticulocyte lysate (RRL) (Promega), an ATP-regenerating system (0.125 mg/ml creatine phosphokinase, 30 mM creatine phosphate, 2 mM ATP), and 200 μg/ml Texas Red- or FITC-labeled dextran. Although no RRL-induced proteolysis of IGFBP-3 was detected by Western immunoblotting (data not shown), 1 μg/ml leupeptin, 25 units/ml trasylol (Bayer), and 40 μg/ml bestatin (Roche) were routinely added as protease inhibitors to the transport assay. The cells were incubated in a humidified environment for 30 min at 22°C.
For competition experiments, RRL was incubated at 22°C for 30 min with an excess of unlabeled IGFBP-3 prior to the addition of labeled IGFBP-3. Where RRL was omitted, 0.25% BSA was included in the transport buffer. For studies carried out in the absence of an ATP-regenerating system, apyrase was used to pretreat cells (0.2 unit/ml, 37°C, 15 min), and RRL (800 units/ml, 22°C, 10 min) was used to remove endogenous ATP. To investigate the role of importin β in IGFBP-3 nuclear import, the transport assay was carried out in the presence of an anti-importin β antibody (80 μg/ml) without the addition of RRL. The role of GTP hydrolysis in nuclear import of IGFBP-3 was examined by preincubating RRL with the nonhydrolyzable GTP analogue, GTPγS (2 mM), for 10 min at 22°C. When added to the cells, the final concentration of GTPγS was 300 μM. To assess the contribution of binding to nuclear components to nuclear accumulation, the nuclear envelope was permeabilized by addition of 0.025% CHAPS in 2 mM Tris-HCl, pH 7.0, and 1% glycerol to the transport solution (42), and the duration of the assay was reduced to 10 min at 22°C.
For transport assays using labeled IGFBP-3 or N1N2 NLS:β-gal, fluorescence was detected directly following fixation of the cells. Where unlabeled IGFBP-3 was used, the subcellular localization was determined using indirect immunocytochemistry. Cells were fixed using Histochoice (Amresco), the nuclear envelope was permeabilized with 0.25% Triton X-100, and the cells were blocked with 1% BSA in phosphate-buffered saline for 1 h at 22°C. The cells were then incubated with antiserum against IGFBP-3 or nonimmune rabbit serum (1:5000) diluted in blocking buffer for 1 h at 22°C. Cells were then washed and incubated with goat anti-rabbit IgG conjugated with rhodamine (Immunotech) diluted 1:200 in blocking buffer for 1 h at 22°C. Cells were mounted in an antifade medium and examined using a confocal laser scanning microscopy (CLSM) system (Optiscan F900e Personal Confocal System, Victoria, Australia) fitted with a krypton-argon laser and dual-channel detection optics. Individual cells were optically sectioned in the xy plane with multiple scan averaging. All images were collected under identical, nonsaturating conditions. The intensity of fluorescent labeling within cells was analyzed using the program NIH Image version 1.61. Pixel intensity, as a measure of fluorescence intensity, was measured within specific regions of the cell (cytoplasmic and nuclear) as well as in regions outside the cell (background). The pixel intensity from each subcellular region was averaged over at least 100 cells. After correction for background fluorescence, the results were expressed as the ratio of nuclear to cytoplasmic fluorescence (Fn/c).
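A minimal sketch of this quantitation is shown below. Whether the background-corrected ratio is computed per cell and then averaged, or from the pooled means, is not stated; the sketch assumes the per-cell form.

```python
import numpy as np

def fn_over_c(nuclear_means, cytoplasmic_means, background_means):
    # Each argument: per-cell mean pixel intensities (the text averages over
    # at least 100 cells). Background is subtracted before taking the
    # nuclear-to-cytoplasmic ratio (Fn/c).
    n = np.asarray(nuclear_means, dtype=float) - np.asarray(background_means, dtype=float)
    c = np.asarray(cytoplasmic_means, dtype=float) - np.asarray(background_means, dtype=float)
    return float(np.mean(n / c))
```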
Construction of EGFP Fusion Proteins-A 1080-base pair EcoRI-PvuII fragment containing the full coding sequence of human IGFBP-3 was inserted into pSELECT (Promega) to generate pSF106 (38). Site-directed mutagenesis of pSF106 was carried out with the following oligonucleotides to introduce specific mutations: 5′-CCCAACTGTGACAAGAACGGATTTTATAAGAAAAAGC was used to generate pSF184 (216K → N); 5′-GTGACAAGAAGGGATTTTATCACTCCCGCCAGTGTCGCCCTTCCAAAGG was used to generate pSF170 (220KKK → HSR); and 5′-TTTTATAAGAAAAAGCAGTGTCGCCCTTCCATGGACGGGGAGGCGGGCTTCTGCTGGTGTGTGGATAAGTATGGG was used to generate pSF110 (228KGRKR → MDGEA). Nucleotides that differ from the IGFBP-3 sequence are underlined. The cDNA for human IGFBP-5 was generated from total RNA isolated from the U2-OS osteosarcoma cell line by reverse transcription-polymerase chain reaction. The resulting 868-base pair fragment containing the full coding sequence of IGFBP-5 was inserted into pAC-CMV to generate pSF601.
Oligonucleotides containing KpnI and BamHI restriction sites for subcloning were synthesized on an Oligo 1000 DNA Synthesizer (Beckman Instruments). Fragments containing the 18-amino acid wild-type or mutant NLSs of IGFBP-3 or IGFBP-5 were amplified by Pfu turbo polymerase (Stratagene) and cloned into the EGFP C-terminal fusion vector, pEGFP-C1 (CLONTECH).
The IGFBP-3 double mutant 216K → N and 220KKK → HSR was amplified from pSF170 using primers 2 and 5, and the double mutant 216K → N and 228KGRKR → MDGEA was amplified from pSF110 using primers 5 and 12. Following amplification, all polymerase chain reaction products were cloned in-frame into the KpnI and BamHI restriction sites of pEGFP-C1 and checked by sequencing.
Cell Culture and Transient Transfection-CHO cells were maintained in α-modification of Eagle's medium supplemented with 10% fetal calf serum (Cytosystems). For transient transfection, CHO cells were cultured on glass coverslips in 6-well dishes. At 70-80% confluence, 2 μg of EGFP fusion plasmid was transfected into cells using LipofectAMINE (Life Technologies) according to the manufacturer's instructions. At 24 h after transfection, cells were fixed with Histochoice for 20 min, mounted in an antifade medium, and scored using an Olympus BX60 fluorescence microscope.
Statistical Analysis-Data were analyzed by analysis of variance followed by Fisher's protected least significant difference test using Statview 4.02 (Abacus Concepts, Inc.).
Dot Blot and Western Ligand Binding Assays-Binding of IGFBP-3 and IGFBP-5 to importin subunits was examined by dot blot or Western ligand binding assays as described previously (43). The mouse α- and β-importin subunits were expressed as glutathione S-transferase fusion proteins and purified as described previously (43). The binding proteins and controls were applied directly to a nitrocellulose membrane (dot blot) or separated on 10% SDS-polyacrylamide gel electrophoresis prior to membrane transfer (Western ligand blot). The two different approaches allowed these studies to be carried out with the binding proteins in their native (dot blot) and denatured (Western ligand blot) forms. Where indicated, IGFBP-3 was preincubated with an equimolar amount of IGF-I before application in the dot blot assay. The membranes were blocked in intracellular buffer containing 5% BSA for 4 h at 22°C and hybridized at 4°C for 16 h in intracellular buffer containing 1% BSA and either a preformed complex of mouse α and β importin subunits fused to glutathione S-transferase (1:1 molar ratio) at a final concentration of 150 nM, or the individual subunits alone at the same concentration. Binding of importins to IGFBPs was detected using a glutathione S-transferase-specific antibody (Amersham Pharmacia Biotech), followed by an alkaline phosphatase-conjugated secondary antibody (Sigma) and nitro blue tetrazolium/5-bromo-4-chloro-3-indolyl-1-phosphate (Promega) (dot blot), or a horseradish peroxidase-conjugated secondary antibody (Amersham Pharmacia Biotech) and ECL (Amersham Pharmacia Biotech) (Western ligand blot). In the case of the latter, imaging was carried out using a Fujifilm FLA-3000 Gel Imager, with quantitation performed using the Image Gauge 3.11 software.
RESULTS
Nuclear Import of IGFBP-3 Is a Specific and Saturable Process-NLS-dependent nuclear protein import is an active process requiring cytosolic factors including importin α/β, Ran, and interacting factors (44). Because IGFBP-3 (a 40-45-kDa glycosylated doublet) is close to the theoretical limit for diffusion into the nucleus, we investigated whether nuclear import could occur by a conventional NLS-mediated pathway. An in vitro nuclear transport assay was used in which the plasma membrane of CHO cells was permeabilized with the weak nonionic detergent digitonin, leaving the nuclear envelope intact (41). Nuclear transport of fluorescently labeled IGFBP-3 was examined in the presence of a transport solution containing RRL (a source of cytosolic proteins), an ATP-regenerating system (to provide energy for translocation), and fluorescently labeled dextran (to control for membrane integrity). The subcellular distribution of the fluorescent signal was determined using CLSM. In all cells where the plasma membrane had been permeabilized but where the nuclear envelope remained intact (fluorescently labeled dextran being specifically excluded from the nucleus), IGFBP-3 was localized to the cell nuclei (Fig. 1A). Quantitation using NIH Image version 1.61 (see "Experimental Procedures") showed that IGFBP-3 accumulated in the nucleus at levels 4.7-fold greater than in the cytoplasm (Fig. 1D).
To demonstrate that nuclear import of IGFBP-3 was a specific and saturable process, we competed fluorescently labeled IGFBP-3 with unlabeled IGFBP-3. Cytosol was preincubated with a 10- or 20-fold excess of unlabeled IGFBP-3 prior to the addition of fluorescently labeled IGFBP-3. Results of the in vitro nuclear transport assay showed that the nuclear to cytoplasmic fluorescence ratio was reduced to 2.6-fold in the presence of a 10-fold excess of unlabeled IGFBP-3 (Fig. 1, B and D). Following preincubation with a 20-fold excess of unlabeled IGFBP-3 (Fig. 1C), this was further reduced to 1.5-fold (Fig. 1D), close to an Fn/c value of 1.0 representing equal fluorescence in the nucleus and cytoplasm. Therefore, an Fn/c value of 1.5 suggests that little nuclear fluorescence was detectable following the addition of a 20-fold excess of unlabeled IGFBP-3.
[Subcloning primers from the original primer table: 5′-CCCAACGGTACCAAGAAGGGATTTTATAAG; 5′-AGCAGGATCCAGCCTCGCCGTCCATGGAAGGTTTGCACTGCTTTC]

Nuclear Transport of IGFBP-3 Is an Energy-dependent Process Mediated by the Importin β Subunit-The role of individual components of the nuclear transport pathway can be examined by their selective addition to the in vitro nuclear transport assay. We compared the nuclear uptake of IGFBP-3 with that of N1N2 NLS:β-gal, which is transported to the nucleus by the conventional NLS-mediated nuclear protein import pathway utilizing Ran and the importin α/β heterodimer (45,46). Following nuclear transport, the subcellular localization of unlabeled IGFBP-3 was monitored using indirect immunocytochemistry, the control protein was directly fluorescently labeled, and both were detected using CLSM. As was observed for fluorescently labeled IGFBP-3 (Fig. 1A), all cells with an intact nuclear envelope contained nuclear IGFBP-3 (Fig. 2A). When specific IGFBP-3 antiserum was replaced with nonimmune rabbit serum in similarly treated cells, only light background labeling was detected, indicating that the signal was specific to IGFBP-3 (data not shown). As previously shown, the N1N2 NLS directed nuclear accumulation of β-galactosidase (Fig. 2A) (46). In the absence of an ATP-regenerating system, both IGFBP-3 and N1N2 NLS:β-gal were localized to the cytoplasm, being generally excluded from the nucleus (Fig. 2B). A requirement for ATP in NLS-dependent nuclear import has been described for a number of proteins (26,42).
When cytosolic proteins (in the form of RRL) were omitted from the transport solution, the pattern of nuclear accumulation of IGFBP-3 (Fig. 2C) was indistinguishable from that seen in its presence (Fig. 2A). Therefore, in contrast to N1N2 NLS:β-gal, which demonstrated cytosol-dependent nuclear transport (Fig. 2C), nuclear import of IGFBP-3 was independent of exogenously added cytosol. Previous studies have shown that although importin α is released following treatment of cells with digitonin, sufficient importin β may remain to sustain basal nuclear import (12,47,48). Therefore, the independence of nuclear uptake of IGFBP-3 from cytosolic factors suggests that importin α is not required for nuclear import, whereas the possibility remains that importin β may act alone as the transport receptor for IGFBP-3. NLS-dependent nuclear import, where the transported protein binds directly to importin β independently of importin α, has been documented for a number of proteins (23-25, 49).
Nuclear import of IGFBP-3 was examined following neutralization of importin β from the assay by the addition of an anti-importin β antibody in the absence of RRL. Results showed a significant reduction in the level of nuclear import of both IGFBP-3 and N1N2 NLS:β-gal (Fig. 2D). Because no importin α was added to the system in this experiment, the results suggest that importin α is unlikely to have a role in IGFBP-3 nuclear import. In a similar experiment where RRL was preincubated with the anti-importin β antibody prior to addition to the assay, nuclear import of IGFBP-3 was also reduced (data not shown). The role of GTP hydrolysis in nuclear transport of IGFBP-3 was examined following preincubation of the cytosol with the nonhydrolyzable GTP analogue, GTPγS. Again nuclear accumulation of both IGFBP-3 and N1N2 NLS:β-gal was reduced in the presence of GTPγS (Fig. 2E), with the pattern of IGFBP-3 labeling similar to that observed for cells treated with the anti-importin β antibody (Fig. 2D). These results suggest that nuclear accumulation of IGFBP-3 is an active process with a requirement for both importin β and GTP hydrolysis in its nuclear transport and appears to be independent of importin α. In addition, because some nuclear uptake of IGFBP-3 remains (Fig. 2D), other uptake/accumulation mechanisms may be operating in addition to that utilizing importin β. Although the cytosol-independent nature of IGFBP-3 nuclear import implies that a role for Ran is unlikely, other GTP-binding proteins may be involved in nuclear protein import, and the action of these proteins may constitute the basis of the inhibition of IGFBP-3 nuclear import by GTPγS. Alternatively, Ran may not have been fully depleted from the transport assay following permeabilization of the cells.
FIG. 2. Nuclear import of IGFBP-3 is an energy-dependent process mediated by the importin β subunit. Digitonin-permeabilized CHO cells were incubated with IGFBP-3 and N1N2 NLS:β-gal (a control for the conventional importin α/β-mediated nuclear import pathway directed by a bipartite NLS) and visualized by CLSM. In vitro nuclear transport (see "Experimental Procedures") was carried out in the presence of cytosol and an ATP-regenerating system (A). The effect on nuclear transport of omitting the ATP-regenerating system (B) or cytosol (C) was examined. Transport studies were also carried out in the presence of an anti-importin β antibody without the addition of cytosol (D) and following preincubation of cytosol with the nonhydrolyzable GTP analogue GTPγS (E). Images are representative of at least three independent experiments. Scale bar, 50 μm.
Nuclear Accumulation of IGFBP-3 Is Prevented When the Putative NLS Is Mutated or Lost by Proteolytic Cleavage-We have previously shown that the mutant, IGFBP-3[228KGRKR→MDGEA], obtained by exchanging part of the putative NLS of IGFBP-3 for the corresponding sequences in IGFBP-1, is not transported to the nucleus of intact cells (35). However, we have also shown that this IGFBP-3 mutant is unable to bind at the cell surface (38), leading to the speculation that transport to the nucleus may be blocked at the level of the plasma membrane rather than at the level of entry into the nucleus. To address this issue, we compared nuclear uptake of wild-type and mutant IGFBP-3 (in cells where the plasma membrane had been permeabilized) using the fully reconstituted in vitro nuclear transport assay. In contrast to wild-type IGFBP-3 (Fig. 3A), the mutant, IGFBP-3[228KGRKR→MDGEA], was not localized to the nucleus (Fig. 3B), suggesting that residues 228-232 within the basic region of IGFBP-3 are indeed required for nuclear accumulation as well as plasma membrane binding.
During an early round of purification of wild-type IGFBP-3, a 30-kDa proteolytic fragment was generated. The purified fragment was subjected to N-terminal sequencing and shown to be an N-terminal fragment of IGFBP-3. It therefore lacks the basic region present in the C-terminal domain of the protein.
When nuclear uptake was examined, this truncated form of IGFBP-3 did not accumulate in the nucleus (Fig. 3C). Therefore, with respect to nuclear transport, the proteolytic fragment behaved in a similar fashion to the mutant form of IGFBP-3, supporting the observation that sequences within the C-terminal domain are required for active nuclear transport.
IGFBP-3 Binds to Insoluble Nuclear Components-In the presence of a permeabilized nuclear envelope, soluble proteins are able to pass freely between the cytoplasm and nucleus. Under these circumstances, nuclear accumulation occurs only if the protein binds to insoluble nuclear components (42). To investigate whether IGFBP-3 was capable of such interactions, we used the in vitro transport assay on cells where the nuclear envelope had been permeabilized with the detergent CHAPS. To demonstrate that CHAPS was effectively permeabilizing the nuclear envelope, experiments were carried out in the presence of FITC-dextran (molecular mass, ~77 kDa). Under these conditions FITC-dextran distributes evenly between the nucleus and cytoplasm. In the presence of a fully reconstituted assay and the absence of a barrier to nuclear entry, nuclear accumulation of wild-type IGFBP-3 was observed (Fig. 4A). The same field of cells visualizing the FITC-dextran signal (Fig. 4B) showed that accumulation of IGFBP-3 only occurred in those cells with a perforated nuclear envelope. In contrast, N1N2 NLS:β-gal did not accumulate in the nucleus, instead equilibrating between the nuclear and cytoplasmic compartments (data not shown).
Nuclear accumulation of the mutant IGFBP-3[228KGRKR→MDGEA] was also examined following permeabilization of the nuclear envelope; under these conditions the mutant failed to accumulate in the nuclei of cells with a permeabilized nuclear envelope (Fig. 4, C and D). These results indicate that, in contrast to the conventional NLSs, the IGFBP-3 NLS contains sequences capable of conferring nuclear accumulation in the absence of an intact nuclear envelope, presumably through interaction with nuclear components.
The NLSs of IGFBP-3 and IGFBP-5 Are Capable of Targeting a Heterologous Protein to the Nucleus-In a previous study
we showed that both IGFBP-3 and IGFBP-5 are transported to the cell nucleus in intact cells (35). The ability of the putative NLS regions within IGFBP-3 and IGFBP-5 to target EGFP to the nucleus was examined by fusing these sequences to EGFP and expressing the resultant fusion protein in CHO cells. EGFP is a 27-kDa protein and as such is capable of passive diffusion into the nucleus (50). Nuclear accumulation occurs only if EGFP is fused to sequences that confer active nuclear import or nuclear binding. Transfected cells were scored in two categories: where the nuclear and cytoplasmic fluorescent intensity was equivalent (no nuclear accumulation) and where the nuclear signal was greater than the cytoplasmic signal (nuclear accumulation). Expression of EGFP alone resulted in only a small percentage of cells (7.3%) with a nuclear signal greater than that observed in the cytoplasm (Fig. 5A and Table II). However, expression of the wild-type IGFBP-3 NLS (residues 215-232) fused to EGFP resulted in 92.8% of cells where nuclear was greater than cytoplasmic intensity (Fig. 5B). Likewise, when the wild-type IGFBP-5 NLS (residues 201-218) was fused to EGFP, 96.9% of cells had accumulated EGFP in the nucleus (Fig. 5C). From these data we conclude that these 18-residue basic sequences within the C-terminal domains of IGFBP-3 and IGFBP-5 are sufficient for nuclear uptake of the binding proteins.
To identify which residues within these sequences were necessary for nuclear accumulation, we mutated each of the three basic clusters within the NLS to the corresponding sequences in IGFBP-1 (a binding protein we have shown is not transported to the nucleus) and fused these mutant sequences to EGFP (Table II). The mutations 216K→N (BP-3) and 202K→N (BP-5) did not affect nuclear transport of the fusion protein giving 83.3 and 89.3% of cells, respectively, with nuclear greater than cytoplasmic fluorescence; these values are not significantly different from those seen for the wild-type sequences. A more radical mutation of IGFBP-3 and IGFBP-5 within the first basic cluster, where both the basic residues were substituted with alanine, 215KK→AA (BP-3) and 201RK→AA (BP-5), had a significant effect on nuclear transport (Table II), suggesting that the sequence responsible for nuclear accumulation had a bipartite nature. The mutations 220KKK→HSR (BP-3) and 206KRK→HSR (BP-5) also had a significant effect on nuclear transport of the fusion protein giving 62.2 and 55.2% of cells, respectively, with nuclear greater than cytoplasmic signal. The mutations 228KGRKR→MDGEA (BP-3) (Fig. 5D) and 214RGRKR→MDGEA (BP-5) (Fig. 5E) abolished nuclear accumulation of the fusion protein, giving 1.2 and 0.3% of cells, respectively, with nuclear greater than cytoplasmic fluorescence (Table II). As expected, a double mutation of the IGFBP-3 NLS, which involved the first and third basic clusters, prevented nuclear transport of the fusion protein (Table II). Mutation of both the first and second basic clusters resulted in a further decrease in nuclear transport (40.3%) of the fusion protein compared with the central mutant alone (62.2%). Together these results suggest that the sequences 228KGRKR (BP-3) and 214RGRKR (BP-5) are essential for nuclear import of the binding proteins but that the other basic residues within the NLS probably contribute to the overall efficiency of nuclear accumulation.
FIG. 3. Nuclear accumulation of IGFBP-3 is prevented when the putative NLS is mutated or lost by proteolytic cleavage. Nuclear transport of wild-type IGFBP-3 (A) was compared with the mutant IGFBP-3[228KGRKR→MDGEA] (B) and a 30-kDa N-terminal proteolytic fragment of IGFBP-3 (C). In vitro nuclear transport was carried out in digitonin-permeabilized CHO cells in the presence of a fully reconstituted transport system. Images were collected using CLSM and are representative of three independent experiments. Scale bar, 50 μm.
IGFBP-3 and IGFBP-5 Are Recognized by the Importin α/β Heterodimer through the Importin β Subunit-The α and β importin subunits constitute the high affinity NLS receptor used by many proteins to effect their nuclear import (12). The ability of full-length IGFBPs to be recognized by importin subunits was examined using dot blot (Fig. 6A) and Western ligand binding analysis (Fig. 6B) (43) to examine interactions of native and denatured binding proteins, respectively. Wild-type IGFBP-3 showed strong binding by the importin complex (Fig. 6, A, lane 6, and B, top panel). This binding was of a similar intensity to that obtained for the T-ag NLS:β-galactosidase fusion protein (positive control) (Fig. 6A, lane 2). In contrast, the NLS mutant, IGFBP-3[228KGRKR→MDGEA], displayed weaker binding by the importin α/β heterodimer (Fig. 6, A, lane 5, and B, top panel) compared with wild-type IGFBP-3. However, this binding appeared to be stronger than that obtained for β-galactosidase, which does not contain an NLS (negative control) (Fig. 6A, lane 1). Of the other binding proteins tested, IGFBP-5 was also recognized by the importin heterodimer (Fig. 6B, top panel). IGFBP-1 and IGFBP-2, which are not translocated to the nucleus in intact cells (35), showed no detectable binding by the importin complex (data not shown).
Because IGFBP-3 has been shown to act as a carrier for IGF-I nuclear transport (34), we investigated whether the binary complex would bind more strongly to the importin subunits. However, when equimolar amounts of IGFBP-3 and IGF-I were equilibrated and subjected to dot blotting, there was no discernible difference in binding of either wild-type (Fig. 6A, lane 4) or mutant IGFBP-3 (Fig. 6A, lane 3) to the importin complex compared with these proteins in the absence of IGF-I (Fig. 6A, lanes 5 and 6). In addition, when IGF-I was added to the hybridization mix containing importin α/β, there was no change in the binding of importin to wild-type or mutant IGFBP-3 analyzed by Western ligand binding (data not shown).
Because nuclear transport of IGFBP-3 appears to be mediated by importin β (independently of importin α), we examined whether the observed importin α/β heterodimer binding was mediated by the importin α (Fig. 6B, middle panel) or β (Fig. 6B, bottom panel) subunit. The Western ligand binding assay clearly showed that IGFBP-3 and IGFBP-5 were both recognized by importin β (Fig. 6B, bottom panel). In contrast, neither binding protein was recognized to any great extent by importin α (Fig. 6B, middle panel), nor did mutant IGFBP-3 exhibit significant binding to either importin subunit (Fig. 6B, middle and bottom panels). Quantitation of binding indicated that, in the case of IGFBP-3, importin β binding accounted for almost 40% of the binding of the importin heterodimer (Fig. 6C). Although importin α binding to IGFBP-3 represented only a small proportion of importin α/β binding, it was still an appreciable amount compared with importin β binding only. However, in the case of IGFBP-5, importin β binding accounted for essentially all of the binding of the importin heterodimer (Fig. 6C). The mutant form of IGFBP-3 displayed less than 50% binding to the importin α/β heterodimer and the individual importin subunits, compared with wild-type IGFBP-3. It was concluded that importin β, but not importin α, was able to recognize IGFBP-3 and IGFBP-5 and that there appeared to be a quantifiable difference in importin subunit binding to these IGFBPs.
DISCUSSION
This study identifies an 18-amino acid region of IGFBP-3 and IGFBP-5 that is necessary and sufficient for nuclear transport and accumulation. Our results suggest that nuclear import of IGFBP-3, and probably by analogy IGFBP-5, is an energy-dependent process mediated by importin β and that, following nuclear entry, IGFBP-3 can actively accumulate through binding to detergent-insoluble nuclear components. Mutation of the basic regions within the C-terminal domain of IGFBP-3 and IGFBP-5 attenuates nuclear import and/or accumulation and, as shown for IGFBP-3, reduces importin binding.
We have studied nuclear transport of IGFBP-3 using an in vitro nuclear transport assay that allows the role of individual components of the transport system to be examined. As a control, we examined the nuclear uptake of β-galactosidase fused to the N1N2 bipartite NLS (46). In the presence of cytosolic factors and an ATP-regenerating system, full-length wild-type IGFBP-3 was capable of nuclear uptake in all cells where the plasma membrane had been permeabilized, but the nuclear envelope remained intact. Previous studies in intact cells have shown that nuclear transport of IGFBP-3 is detected only in a low percentage of cells in the monolayer (33-36). Therefore, these data suggest that the plasma membrane is an important regulator of nuclear uptake of IGFBP-3 and consequently also of its subsequent function in the nucleus.
Nuclear protein import directed by bipartite NLSs conventionally requires cytosolic factors such as importin α/β, Ran, and nuclear transport factor 2 (44). However, we show here that nuclear transport of IGFBP-3 can occur efficiently in the absence of added soluble transport factors. Analogous observations have been reported for nuclear transport conferred by the HIV-I Tat NLS (26) and the heterogeneous nuclear ribonucleoprotein (hnRNP) K sequence KNS (28). The Wnt signal transduction pathway component β-catenin (27), which appears to be able to bind directly to nucleoporins, similarly appears not to require soluble factors for nuclear import; interestingly, nuclear transport of β-catenin appears to occur through a Ran-independent pathway (51). However, unlike IGFBP-3, the targeting signals of Tat and hnRNP K represent novel nuclear targeting signals not resembling the T-ag or bipartite NLSs and, furthermore, do not appear to be recognized by importin α/β. Previous studies have found that sufficient importin β, but not importin α, may remain associated with the nuclear pore complex to support a basal level of nuclear import subsequent to digitonin permeabilization (12,47,48). Therefore, nuclear import in the absence of added cytosol suggests that the adapter, importin α, is not required for nuclear transport but does not rule out the possibility that importin β alone is mediating uptake. Our findings that inclusion of an antibody specific to importin β, both in the presence and absence of added cytosol, inhibited nuclear import of IGFBP-3, suggest that importin β, but not importin α, is required to sustain basal nuclear import. Similarly, addition of the nonhydrolyzable GTP analogue, GTPγS, to the transport assay significantly reduced nuclear import of IGFBP-3. Together these results suggest that both importin β alone and GTP hydrolysis are required for efficient transport. Importin β appears to be the sole nuclear targeting signal receptor used by parathyroid hormone-related protein (24), T-cell protein tyrosine phosphatase (25), and the yeast transcription factor, GAL4 (23). Finally, we cannot exclude the possibility that the cytosol-independent nature of IGFBP-3 nuclear import is effected by other factors that, like importin β, may not be completely solubilized during digitonin permeabilization.
Nuclear import mediated by conventional NLSs such as those found in T-ag and retinoblastoma (42), as well as by novel NLSs of HIV-I Tat (26) and the hnRNP K (28) and hnRNP A1 M9 sequences (52), has been shown to be ATP-dependent; cytosolic factor-independent nuclear import of β-catenin has also been shown to require ATP (27). In the case of HIV-I Tat, ATP hydrolysis is believed to effect its release from cytoplasmic retention factors and enhance binding to nuclear components. Because nuclear import of IGFBP-3 was not observed in the absence of an ATP-regenerating system, IGFBP-3 may also require ATP for cytoplasmic release and/or enhanced nuclear binding. Alternatively, as is the case for many proteins transported to the nucleus, ATP may be involved in other, unknown actions that augment nuclear uptake.
In the presence of a permeabilized nuclear envelope, proteins are free to diffuse between the cytoplasmic and nuclear compartments and are only able to accumulate in the nucleus through binding to insoluble nuclear components such as chromatin and lamin (42). Under conditions where the nuclear envelope was permeabilized, IGFBP-3, but not the mutant, IGFBP-3[228KGRKR→MDGEA], was capable of nuclear accumulation. These results support the proposal that, upon entry into the nucleus, IGFBP-3 accumulates through nuclear binding and suggest that residues 228-232 are required for these nuclear interactions. The ability of IGFBP-3 to bind to structures within the nucleus supports the hypothesis that it may have a role in the nucleus, possibly in direct regulation of gene transcription (33-36). This has been suggested for granzyme A and B (53,54) and parathyroid hormone-related protein (24), which can also accumulate in the nucleus even in the absence of an intact nuclear envelope.
Fusion of EGFP to isolated motifs has been used to study changes in subcellular distribution directed by these sequences (50). Using this approach, we investigated the role of the basic motifs of IGFBP-3 and IGFBP-5 in their nuclear transport. As the fusion proteins are expressed in living CHO cells, the proteins, factors and metabolic pathways present and active in living cells are able to exert their effects on nuclear transport, thus representing an in vivo nuclear transport assay. EGFP is small enough to enter the nucleus by passive diffusion, and as expected, the fluorescent signal derived from recombinant EGFP expressed in CHO cells was evenly distributed between the nucleus and cytoplasm. Fusion of the basic region within the C-terminal domain of IGFBP-3 and IGFBP-5 to EGFP caused nuclear accumulation of the fusion protein in greater than 90% of transfected cells, indicating that the basic regions are sufficient for nuclear import. However, these results do not distinguish between active nuclear transport and nuclear binding. Thus, there may be passive diffusion into the nucleus followed by accumulation resulting from interaction between these basic sequences and nuclear binding sites, as well as interaction of the basic residues with importins, leading to active nuclear import.
Mutation of the three basic clusters within the putative NLS of IGFBP-3 and IGFBP-5 caused different degrees of attenuation of nuclear transport of the EGFP fusion proteins. As was observed for the mutant form of full-length IGFBP-3, 228KGRKR→MDGEA, the same mutation of both the IGFBP-3 and IGFBP-5 NLS, when fused to EGFP, abolished nuclear uptake of EGFP. Mutation of both amino acids within the N-terminal basic cluster had a significant effect on nuclear transport of the EGFP fusion proteins, suggesting that the basic region may represent a classical bipartite NLS similar to that described for other proteins (9,21). However, mutation of the central basic cluster reduced nuclear transport of the fusion proteins to approximately 60%, implying that these basic clusters also influence nuclear accumulation and that the NLS is more complex than a classical bipartite NLS. As discussed above, the ability of the fusion protein to diffuse freely into the nucleus means that a distinction cannot be drawn between enhancement of nuclear import and accumulation because of nuclear binding. Therefore, mutations affecting nuclear transport may relate to either or both effects. However, in vitro studies on the mutant, IGFBP-3[228KGRKR→MDGEA], suggest these sequences are involved in both effects. In this assay the basic sequences derived from IGFBP-5 behaved identically to those derived from IGFBP-3, suggesting that IGFBP-5, which is small enough to diffuse through the nuclear pore complex (molecular mass, 30 kDa), accumulates in the nucleus by a similar mechanism.
We found that the importin β subunit recognized both wild-type IGFBP-3 and IGFBP-5 but recognized only to a limited extent the mutant form of IGFBP-3. This suggests that transport of these binding proteins occurs by a signal-mediated pathway, consistent with the observation that importin β is required to effect in vitro nuclear transport of IGFBP-3. Interestingly, all the importin α/β binding to IGFBP-5 could be accounted for by importin β binding, whereas for IGFBP-3 importin β binding appeared significantly lower compared with its binding to the heterodimer. This was not compensated for by an appropriate increase in importin α binding. The explanation for this is unclear but may relate to differing affinities or accessibility of the binding proteins for the importin subunits. Alternatively, importin α may be binding to importin β in this assay and increasing the strength of its interactions with IGFBP-3. However, an essential role for importin α in IGFBP-3 nuclear transport is not supported by our findings that nuclear import occurs in the absence of exogenous cytosol.
An interesting question raised by this study is why IGFBP-5, and to a lesser extent IGFBP-3, possess functional NLSs when, because of their size, they have the potential to diffuse from the cytoplasm into the nucleus. There are several reasons why smaller proteins possess NLS-dependent mechanisms for nuclear import. As has been described for interleukin-5 (55), a functional NLS enables the cotransport of larger non-NLS-containing proteins to the nucleus. In analogous fashion, IGFBP-3 or IGFBP-5 may possess functional NLSs to facilitate entry when part of a high molecular mass complex. Thus, they may require an NLS-dependent mechanism when cotransporting IGFs or other signaling molecules to the nucleus. Interestingly, a preliminary report suggests that IGFBP-3 interacts specifically with the retinoid X receptor-α (56). IGFBP-3 may thereby modulate the activity of nuclear transcription factors or have a specific signal transduction role in the nucleus. It may also regulate gene expression directly by binding to chromatin as has been reported for basic fibroblast growth factor (57) and the growth hormone receptor (58). Apart from the selective use of an NLS-dependent mechanism for the transport of high molecular mass complexes, the kinetics of nuclear import of IGFBP-3 and IGFBP-5 may be enhanced by their ability to interact effectively with importin. A major focus of future work in this laboratory is to distinguish between these possibilities. Understanding of the mechanisms of nuclear import of IGFBP-3 and IGFBP-5 should greatly assist in defining their nuclear functions.
The effectiveness of inspections on reported mosquito larval habitats in households: A case-control study
Background
Dengue is an arboviral disease that imposes substantial health and economic burdens across the globe. Vector control remains a key strategy in settings where Dengvaxia (a dengue vaccine) has not been licenced due to safety concerns and where mass immunization programmes are not cost-effective. Though inspections are used as part of arboviral disease control programmes, evidence of their impact on the entomological activity in households is sparse.
Methodology/Principal findings
We analysed nationally representative household inspection data collected from Singapore over a 3-year period, to determine the effect of inspections on reported mosquito larval habitats in households. A case was a household with a positive report of a mosquito larval habitat in its most recent inspection in 2017. A control was a household that was reported free of mosquito larvae in its most recent inspection in 2017. Using multivariable logistic regression, we analysed 3,205 cases and 557,044 controls. Households averaging three inspections per annum were associated with reduced odds of mosquito larval habitat reports [Adjusted Odds Ratio (AOR): 0.49, 95% Confidence Interval (95% CI): 0.38 to 0.63]. The effect of inspections declined with decreasing inspection frequencies but remained protective at lower levels. Longer intervals (30 to 36 months) between the most recent two successive inspections were associated with increased odds of mosquito larval habitat reports (AOR: 1.28, 95% CI: 1.06 to 1.56) compared to those carried out less than 6 months apart. Mosquito larval habitat reports exhibited a dependence on spatial and household-level characteristics such as the location of the community district, housing type and housing floor level. We observed a four-fold increase in the odds of mosquito larval habitat reports in households with an immediate previous report of larval activity compared to those that did not have one (AOR: 4.52, 95% CI: 3.67 to 5.56).
Conclusions/Significance
Our study confirms the protective effect of inspections on mosquito larval habitat reporting in households. Spatial, temporal and household-level characteristics should be accounted for in prioritizing vector control resources. Alternative strategies may help address recurrent entomological activity in households.
Introduction
Dengue is an arboviral disease that imposes a substantial burden on economic development and human health across the globe. The annual global economic cost of this vector-borne disease has been estimated at USD$8.9 billion [1]. Global estimates place the annual number of dengue infections at 390 million [2] and deaths at nearly 10,000 [3]. The health impact of dengue is enormous yet disproportionate, with the highest incidence occurring in Asia [2]. In 2017, the World Health Organization (WHO) set ambitious global targets of reducing vector-borne disease mortality by at least 75% and morbidity by at least 60% by 2030 [4].
To reach these goals, public health services will need to accelerate the application of effective interventions.
A vaccine would represent an important advancement in controlling dengue, especially in the tropics and sub-tropics where the disease is common. Ongoing clinical trials evaluating tetravalent dengue vaccine candidates include those developed by Takeda (TAK-003) and the National Institute of Allergy and Infectious Diseases (TV-003/TV-005). Although the Dengvaxia vaccine (CYD-TDV), which was developed by Sanofi-Pasteur and first registered in 2015, demonstrated intermediate efficacy [5], dengue-seronegative recipients of this vaccine experienced a higher risk of severe dengue symptoms [6]. To ensure safer health outcomes for Dengvaxia vaccine recipients, pre-immunisation screening is highly recommended [7,8], but this may add to the burden of resources required for mass immunization programmes.
Dengue vaccines may hold the promise of disease relief for dengue endemic countries that have the necessary resources to implement and sustain immunization programmes. However, in settings where Dengvaxia has not been licenced due to safety concerns and mass immunization programmes are not cost-effective, vector control remains a key strategy in mitigating the impact of dengue transmission. The use of both immunization and vector control strategies may be required to control the disease more effectively [9].
Dengue infections are driven primarily by the Aedes aegypti mosquito [10], though the Aedes albopictus mosquito also plays an important role [11]. Besides dengue, Aedes mosquitoes are also vectors for Chikungunya, Zika and Yellow Fever [12]; therefore interventions aimed at their reduction are protective against multiple diseases. Integrated Vector Management (IVM) is a rational decision-making process to optimize the use of resources and is the WHO's preferred approach to improving vector control [13]. An important part of IVM is strengthening the evidence for setting-specific public health interventions in order to inform decisions on vector control and resource allocation [14]. One systematic review of cluster-randomized controlled trials aimed at controlling the Aedes aegypti mosquito reported that interventions based on community mobilization and participation were effective in reducing entomological indices, whereas those that relied on chemical control were not [15]. The meta-analysis from another systematic review reported that (i) window and door screens and (ii) community-based environmental management and water container covers were both effective in reducing the risk of dengue transmission [16]. These studies estimated the effects of interventions over a baseline of ongoing government vector control programmes, but none estimated the independent effect of household inspections on dengue transmission or entomological activity. Systematic reviews have highlighted the paucity of evidence for the effectiveness of Aedes vector control interventions [16-18]. Strengthening the evidence for vector control interventions is thus necessary to improve arboviral control policy and practice.
Inspections devoted to the elimination of mosquito larval habitats are also part of the arboviral control programmes in places such as Queensland (Australia), Florida (United States of America), Taiwan and Singapore [19], as well as in most other dengue endemic countries. However, little is known about the impact of such inspections on entomological outcomes in households. In this study, we examined the effect of inspections on reported mosquito larval habitats in homes in Singapore over a 3-year period.
Ethics statement
This study was granted approval by the Environmental Health Institute of the National Environment Agency, Singapore (TS231). The study did not involve human participants.
Study setting
Located within Southeast Asia, Singapore is a city-state with a land area of 719 km² and an estimated population of 5.6 million [20]. Singapore experiences a tropical climate all year round, with ambient air temperature usually reaching a peak in the middle of the calendar year and rainfall occurring on almost half of all calendar days [21]. Dengue is endemic in this country and exhibits a cyclical epidemic trend characterized by the switching of dengue virus serotypes 1 and 2 [22].
In Singapore, apartment blocks are the most common housing type, accounting for 95% of homes occupied by residents [23]. These blocks may have as few as 3 storeys or as many as 50, with the majority in the 10- to 30-storey range. Apartment blocks built by the government (known as "public apartments") are the majority of all residential housing while those built by the private sector (known as "private apartments") are the minority. In general, public apartments are more affordable compared to private apartments. A small proportion of residents live in landed houses built by the private sector and these are generally the least affordable among the three types of housing. It is common to observe plants and artificial containers in the external paved and turf areas of landed houses. All homes in Singapore are linked to a national piped drinking water network and have access to daily garbage removal services coordinated either by the state or municipal authorities.
The National Environment Agency (NEA) is responsible for implementing the national arboviral control strategy in Singapore. In each of the five community districts (see Fig 1), inspectors from the NEA's five corresponding public health inspectorates conduct entomological surveillance and disease control activities in residential and non-residential premises [24]. The inspectors carry out household inspections in the presence of the occupant(s) and communicate their findings and recommendations for mosquito and disease prevention. More than 1 million inspections for mosquito breeding are performed each year. Pupae and larvae collected from natural and artificial containers during inspections are sent for independent entomological identification to the Environmental Health Institute. Common mosquito habitats found in homes include pails, flower pot plates and trays, vases, hardened soil and plant axils [25]. Standing water from containers positive for mosquito larvae is treated by inspectors with Temephos to ensure that none of the mosquito immatures emerge as adults. Punitive fines for the detection of entomologically identified mosquito larval habitats in households are set at S$200 [26], though the law allows for a higher quantum at the Director-General for Public Health's discretion [27]. The law also allows for entry into inaccessible premises without the consent of the owner if the risk for mosquito breeding is high [27], and such inspections may be carried out in the absence of the occupants, though these are infrequent.
Study participants and design
Our main study question was "Are the reported mosquito larval habitats in households associated with the number of past inspections?". The outcome of interest in our study was the reports of entomologically identified mosquito larval habitats. We defined a case as a household with a positive result for mosquito larval habitats in its most recent inspection in 2017. We defined a control as a household with a negative result for mosquito larval habitats in its most recent inspection in 2017. We applied an epidemiological "case-control" study design to nationally representative inspection data.
Statistical analysis
We obtained the records of all household inspections carried out by NEA from 2014 to 2017. The data comprised reports of entomologically identified mosquito larvae found breeding in de-identified households, inspection dates, the nature of each inspection (outbreak or non-outbreak related), housing type, floor level of the household and the community district where the household was located. We examined and excluded any de-identified inspection records that contained values outside the plausible range for any variable. These included impossible values for residential floor levels and inspection time intervals. We coded the dependent variable as '1' if the household was reported to have a mosquito larval habitat in the latest inspection in 2017 and '0' otherwise. We modelled the total number of inspections for each household in the past 36 months from the date of the final inspection as a categorical independent variable. We aggregated the individual inspection frequencies into four strata (see S1 Table). The data contained records of the time interval between the most recent two successive inspections reported on a daily timescale. We represented the effect of the time interval on the outcome of interest by categorizing the data using a 6-monthly timescale. We used values representing the registered storey of each household to create a categorical variable by coding storeys from 1 to 3 as '1', 4 to 6 as '2', 7 to 9 as '3', 10 to 12 as '4' and those higher than 12 as '5'. We assessed several potential confounding factors, which included the community district of residence, housing type and proximity to the ground level, and the nature (outbreak/non-outbreak related) and within-year calendar timing of the most recent inspection, because they could influence the outcome measure.
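To make this coding scheme concrete, the short sketch below derives the floor-level and inspection-frequency categories with pandas. The data frame and column names are hypothetical, the three example rows are invented, and the four frequency strata (0, 1-4, 5-8 and 9-10 previous inspections) follow the groupings reported in the results.

```python
import pandas as pd

# Three invented example records; the real data are de-identified NEA records.
df = pd.DataFrame({
    "had_larval_habitat": [1, 0, 0],  # '1' if a habitat was reported in the
                                      # latest 2017 inspection, '0' otherwise
    "storey": [2, 8, 25],             # registered storey of the household
    "n_inspections_36m": [0, 4, 9],   # inspections in the preceding 36 months
})

# Floor level: storeys 1-3 -> '1', 4-6 -> '2', 7-9 -> '3', 10-12 -> '4',
# anything above 12 -> '5'.
df["floor_cat"] = pd.cut(df["storey"],
                         bins=[0, 3, 6, 9, 12, float("inf")],
                         labels=["1", "2", "3", "4", "5"])

# Inspection counts aggregated into the four strata used in the analysis.
df["insp_stratum"] = pd.cut(df["n_inspections_36m"],
                            bins=[-1, 0, 4, 8, 10],
                            labels=["0", "1-4", "5-8", "9-10"])
print(df)
```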
The patterns of independent variables for case and control groups were described separately using percentages. We made comparisons between groups using the chi-square test (χ²). We also used multivariable logistic regression, which is appropriate for assessing the relationship between a dependent categorical variable and multiple independent categorical or continuous variables [28]. We used the Likelihood Ratio Test (LRT) to determine the associations between the independent variables and the outcome of interest. The measure of the effect for each independent variable on the outcome of interest in the multivariable model was expressed as an adjusted odds ratio. In sensitivity analysis, we compared the stratum-specific effect estimates for individual inspection frequencies (10 strata) with the 4 strata we used, to assess whether we had aggregated the individual frequencies appropriately. We evaluated statistical significance at the 5% level and presented chi-square test (χ²) and LRT p-values, stratum-specific AORs and the corresponding 95% CIs for the effects of independent variables. All analyses were performed using Stata 12.1 software (StataCorp, USA).
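The modelling step itself can be illustrated as follows. This is a minimal sketch in Python's statsmodels rather than the Stata 12.1 workflow the authors actually used; the data are synthetic and every column name and category is an invented stand-in for the de-identified NEA dataset, which is not public.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in for the household-level analysis table.
df = pd.DataFrame({
    "insp_stratum": rng.choice(["0", "1-4", "5-8", "9-10"], n),
    "floor_cat": rng.choice(["1", "2", "3", "4", "5"], n),
    "housing_type": rng.choice(["public_apt", "private_apt", "landed"], n),
    "district": rng.choice(["Central", "NE", "NW", "SE", "SW"], n),
    "outbreak_related": rng.integers(0, 2, n),
})
# Outcome simulated so uninspected households have higher odds of a report.
log_odds = -4 + 0.8 * (df["insp_stratum"] == "0") + 0.3 * df["outbreak_related"]
df["had_larval_habitat"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

model = smf.logit(
    "had_larval_habitat ~ C(insp_stratum) + C(floor_cat) + C(housing_type)"
    " + C(district) + outbreak_related",
    data=df,
).fit(disp=False)

# An adjusted odds ratio is the exponentiated log-odds coefficient; the same
# transformation of the confidence bounds gives its 95% CI.
aor = np.exp(model.params).rename("AOR")
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1))
```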
Characteristics of study population
We obtained the inspection records for 589,904 households and excluded 5% (n = 29,655) that had impossible values. We analysed the remaining inspection records for 560,249 (100%) households. There were 3,205 cases (0.6%) and 557,044 controls (99.4%). Among households with a higher frequency of inspection (≥5 previous inspections in the past 3-year period), the frequency of inspection appeared to be slightly lower for the cases compared to the controls. In contrast, the frequency of inspection was slightly higher for the cases compared to the controls at the lower inspection frequency (<5 previous inspections in the past 3-year period). There were 101 households with repeated reports of mosquito larval habitats, comprising 0.02% of all households analysed. The proportion of public apartments among cases was approximately 20% lower than that of the controls, while the proportion of landed homes was about 20% higher for cases than for the controls. Among cases, the proportion of households with reports of mosquito larval breeding habitats appeared to decline with an increasing vertical distance from the ground level, while this distinction among controls was less clear. There were some differences in the spatial characteristics and the calendar month of the most recent inspections carried out between both groups (p<0.001). The characteristics of each group are summarised in Table 1.
Multivariable regression analysis
In the univariate analysis, the odds ratio for reported habitats in households that had 9 to 10 past inspections over the 36-month period (an average of 3 inspections per annum) was 0.58 (95% CI: 0.45 to 0.74) compared to that of households that did not have any inspections (see S2 Table). After adjusting for the effects of potential confounders, the frequency of 9 to 10 previous inspections remained protective (AOR: 0.49, 95% CI: 0.38 to 0.63) (Fig 2). Inspections exhibited a protective but reduced effect on the reported mosquito larval habitats at lower frequencies [AOR: 0.72 (95% CI: 0.63 to 0.82) for households with 5 to 8 previous inspections; AOR: 0.80 (95% CI: 0.71 to 0.90) for households with 1 to 4 past inspections]. These results were consistent with the individual inspection frequency-specific estimates obtained in the sensitivity analysis (see S3 Table and S1 Fig).
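For reference, each adjusted odds ratio reported here is the exponentiated coefficient of the corresponding term in the fitted logistic model. Writing the model for household $i$ with covariates $x_{i1}, \dots, x_{iK}$:

$$\log\frac{P(Y_i = 1)}{1 - P(Y_i = 1)} = \beta_0 + \sum_{k=1}^{K} \beta_k x_{ik}, \qquad \mathrm{AOR}_k = e^{\beta_k}$$

The AOR of 0.49 for the 9-to-10-inspection stratum therefore corresponds to a coefficient of $\ln 0.49 \approx -0.71$ on the log-odds scale, i.e., roughly a halving of the odds of a report relative to uninspected households, with the other covariates held fixed.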
When the two most recent successive inspections were carried out 30 to 36 months apart, there were increased odds of mosquito larval habitat reports in households (AOR: 1.28, 95% CI: 1.06 to 1.56) compared to those carried out within 6 months. The direction and magnitude of the effect for other time intervals on the reports of mosquito habitats were similar, though statistically significant only for the 6- to 12-month interval (AOR: 1.11, 95% CI: 1.01 to 1.22). We observed a more than four-fold increase in the odds of mosquito larval habitat reports in households that had a report in the immediate previous inspection compared to those that did not have one (AOR: 4.52, 95% CI: 3.67 to 5.56).
Outbreak-related inspections were more likely to report mosquito larval habitats than routine ones (AOR: 1.28, 95% CI: 1.12 to 1.46). Among the three classes of household types, landed houses had the highest odds of mosquito larval habitat reports, while private apartments had slightly higher odds compared to public apartments. We observed a clear trend with the odds of mosquito larval habitat reports decreasing in households with increasing vertical distance from the ground level. Compared to those in the Central community district, reports of mosquito larval habitats in households located in the North West district were less likely (AOR: 0.83, 95% CI: 0.75 to 0.93). The association between the calendar month of inspection and reported mosquito larval habitats was inconsistent, with higher odds in some months and lower odds in others relative to January (see S4 Table).
Discussion
To our knowledge, this is the first large-scale, nationally representative study that examined the effect of inspections on reported mosquito larval habitats in households. After adjusting for the effects of potential confounders, we found that households that were inspected more often were less likely to be associated with positive reports of mosquito larval habitats. The propensity for mosquito larval habitats in households that averaged three inspections per annum over a 3-year period was half that of those that were not inspected at all. Inspections remained protective at lower frequencies, though to a lesser degree. This study strengthens the evidence for the use of household inspections as an effective vector control intervention. In Singapore, public health inspectors carry out inspections of accessible homes in the presence of at least one occupant. The occupant accompanies the inspector throughout the inspection process and may acquire knowledge of the specific locations where water is more likely to stagnate and become conducive to the growth of mosquito larvae. General reminders by the health inspector, along with recommendations to address specific larval habitats, are common during the inspection process and reinforce occupant knowledge. Previous studies have reported an improvement in knowledge and the adoption of best practices for the reduction of Aedes aegypti breeding sites after education activities [29-31]. Health inspectors only apply Temephos larvicide to containers positive for mosquito larvae, and occupants are taught to regularly remove any standing water thereafter. Therefore, reductions in the mosquito population are more likely to be attributable to the frequency of standing water removal rather than the one-time application of larvicides. We suggest that inspections may increase occupant knowledge and awareness and thus contribute in part to improved household practices that reduce entomological activity. The detection of larval habitats during an inspection results in the issuance of a punitive fine to the household occupant [32]. A study conducted in Brazil reported that government pressure coupled with warnings and fines was the most effective measure in reducing entomological breeding sites in commercial establishments at high risk for Aedes aegypti propagation [33]. We also suggest that the perceived risk of punitive fines may influence household practices in part, though additional research is required to determine the independent effects of deterrence and knowledge.
We found clear evidence that longer durations between successive inspections increased the risk of positive mosquito larval habitat reports in households. This finding suggests that delaying subsequent inspections may erode the protective effects of previous inspections. Health authorities should also consider higher and regularly spaced inspection frequencies as a strategy for reducing entomological activity in households. Desired inspection frequencies must, however, reflect the balance between the economic costs and anticipated public health benefits.
Not surprisingly, we found that outbreak-related household inspections were positively associated with reports of mosquito larval habitats. Increased reports may be due to elevated vector activity within disease outbreak areas. The NEA intensifies its household inspections in dengue outbreak areas to quickly seek out and destroy sources of vector activity [34,35]. The observed association may be in part due to the increased motivation of health inspectors to uncover and eliminate sources of mosquito larval habitats in disease outbreak areas. Since the intention of inspections during an outbreak is to identify and eliminate as many sources of mosquito activity as possible, this study finding is reassuring and reflects the ability of inspections to identify more household sources of mosquito activity in residential areas with elevated dengue transmission.
We also observed the influence of spatial and household-level characteristics on the reporting of mosquito larval habitats in households in Singapore. There were spatial differences in the reports of mosquito larval habitats among the community districts of residence, with lower levels observed in the western districts compared to the others, though this was only significant for the North West district. This appears to correspond with historical spatial trends for dengue reports in Singapore that indicate that the western community districts had comparatively lower levels of reported dengue infections [36].
Previous studies have reported on the influence of household-level and environmental characteristics on the entomological activity in homes. A study carried out in Sant Cugat (Spain) reported that environmental characteristics such as the presence of solid waste, scuppers, construction sites, and stacked gardening or building materials were positively associated with the presence of Aedes albopictus mosquito larval habitats in households [37]. Another study carried out in Machala (Ecuador) reported that factors such as the condition of the house and patio, water storage practices and lack of access to piped water were positively associated with the presence of Aedes aegypti pupal habitats in households [38]. In our study, we found that reports of mosquito larval habitats were more likely in landed households compared to apartments. Our observations were in agreement with the findings from an earlier study conducted in Singapore [39]. Landed households generally have outdoor paved and turf areas that are directly exposed to the weather elements. Vegetation in these areas may provide refuge for adult mosquitoes during warmer weather. Plants and containers placed in these areas may accumulate stagnant water and become sources of mosquito activity. Relative to apartments, landed households have larger living spaces, and the propensity for a higher number of mosquito larval habitat reports may be a result of the larger number of sites where stagnant water can develop into larval breeding habitats.
In our study, the proclivity for reports of mosquito larval habitats in households declined with increasing vertical distance from the ground level. Households located on lower floors are proximate to natural vegetation, discarded receptacles and street-level storm water drains that can accumulate stagnant water due to rainfall. These natural and artificial containers may become conducive mosquito breeding sites if the stagnant water within is not removed. Our study findings are consistent with those from another study on the use of gravitraps, which reported lower Aedes aegypti activity on higher floors of high-rise buildings [40]. A previous study analysing the data collected from ovitraps deployed in high-rise apartment blocks in Malaysia reported the presence of Aedes aegypti mosquito larvae on most floors, though the negative relationship between larvae presence and floor level was not statistically significant [41]. The difference in the size of the studies may have contributed to the difference in study findings.
A previous study advocated the merits of spatial-temporal dengue forecasts in targeting vector control activities at the neighbourhood level [42]. While our main study findings point to the benefits of increasing inspection frequencies for reducing entomological activity in households, practical resource limitations restrict its adoption across all households in any identified neighbourhood. The observed dependence of mosquito larval habitat reports on spatial, temporal and household-level characteristics in our study reinforces the need to adopt a risk-based approach in executing vector control at the household level. We recommend accounting for such factors when prioritizing the allocation of limited vector control resources.
In our study, we found that immediate previous reports of mosquito larval habitats were positively associated with subsequent reports of habitats in the most recent inspections. Though the estimated strength of this association was high, the proportion of households with repeated reports was extremely low. This may be due to the success of the NEA's household inspection programme and public education efforts. Nevertheless, public health inspectors from the NEA seeking to eliminate household sources of entomological activity could expeditiously prioritize their vector control efforts in households with past reports. Given that the report of a mosquito larval habitat is a proxy for a punitive fine received by the household, this finding may suggest that the present penalty regime alone is insufficient in incentivizing positive behaviour in a small number of households. Additional research to determine the association between the characteristics of such households and their occupants with sustained entomological activity is required. Discovering the factors associated with a propensity for entomological activity in households may inform the design and choice of strategies aimed at reducing repeated reports of mosquito larval habitats.
Study strengths and limitations
We analysed a large and nationally representative (n = 560,249) dataset. We included 95% of all household inspection records in our data analysis, thus greatly minimizing selection bias. We used a well-defined outcome measure, entomologically confirmed mosquito larval habitats, and thus the potential for case misclassification was low. In the absence of data on actual fines meted out to each household, we used outcomes of past inspections as a proxy in order to examine their subsequent effect on reported mosquito larval habitats. To the best of the authors' knowledge, the NEA infrequently accedes to appeals for fine waivers related to such reports. Therefore, the effect of fines is likely to be similar to our estimate for the effect of past reports of mosquito habitats on subsequent inspection outcomes. The likelihood of detecting statistically significant findings in this highly powered study was high. However, given the large number of households inspected annually (>550,000), even a 10% change in risk may translate into a substantial change in the absolute number of positive mosquito larval habitats reported. The NEA did not collect household-level information to assess the impact of localized community initiatives on entomological activity in households. We were thus unable to account for their independent effects on reports of mosquito larval habitats in households. Additional studies examining the effectiveness of such community initiatives are recommended.
Conclusions
Vector control remains an important strategy in arboviral disease control. While inspections seek to reduce sources of arboviral vectors, evidence of their effectiveness is limited. Ours is the first nationally representative study to demonstrate the protective effect of inspections on entomological activity in households, thus providing evidence of its effectiveness as a vector control intervention. Our main study finding is reassuring to health authorities that intend to use or continue with inspections to reduce entomological activity in households. We recommend that arboviral disease control programmes account for spatial, temporal and household-level characteristics in prioritizing vector control efforts. Research on alternative strategies may help address recurrent reports of mosquito larval habitats in households, though periodic assessments to ensure their effectiveness may be necessary.
Individual differences in approach-avoidance aptitude: some clues from research on Parkinson’s disease
Approach and avoidance are two basic behavioral aptitudes of humans whose correct balance is critical for successful adaptation to the environment. As the expression of approach and avoidance tendencies may differ significantly between healthy individuals, different psychobiological factors have been posited to account for such variability. In this regard, two main issues remain open, concerning (i) the role played by dopamine neurotransmission; and (ii) the possible influence of cognitive characteristics, particularly executive functioning. The aim of the present paper was to highlight the contribution of research on Parkinson’s disease (PD) to our understanding of the above issues. In particular, we here reviewed the PD literature to clarify whether neurobiological and neuropsychological modifications due to PD are associated with changes in approach-avoidance related personality features. Available data indicate that PD patients may show an approach-avoidance imbalance, as documented by lower novelty-seeking and higher harm-avoidance behaviors, possibly suggesting a relationship with neurobiological and neurocognitive PD-related changes. However, the literature that directly investigated this issue is still sparse and much more work is needed to clarify it.
Introduction
Actively seeking contact with rewarding stimuli and avoiding unpleasant conditions in the environment are critical for the functional adaptation of humans to their life context. Indeed, people learn early that in some conditions they have to maintain an approach attitude to pursue a desired goal and in other conditions they have to inhibit the tendency to move toward an object or a person to avoid negative outcomes. A correct balance in approach-avoidance operations allows the formation of behavioral modules at the level of the disposition to act, which represent the pre-conditions for obtaining correct knowledge of the world and one's own limits, for successful access to resources and, at the same time, for one's own safety.
Within a psychobiological framework it has been posited that the activity of two main motivation systems modulates the approach-avoidance aptitude of an individual: the behavioral activation system and the behavioral inhibition system (Reinforcement Sensitivity Theory; Gray, 1970;Pickering and Gray, 2001). The first system was considered to mediate behavior related to gratifying conditions or potential positive outcomes of a situation, to be specifically sensitive to rewarding or non-punishing stimuli, and to promote active searching for potentially rewarding conditions. By contrast, the behavioral inhibition system was considered particularly sensitive to punishment and non-rewarding stimuli. It modulated behavior by inhibiting appetitive responses and increasing arousal in order to improve attention to salient and relevant stimuli, e.g., potentially harmful stimuli, in the environment. The predominant activity of one of the two above systems was considered to lead to greater or even exclusive expression of behavioral modules related to approach or, alternatively, to avoidance aptitudes, thus determining an individual's stable dispositional response mode to external stimuli.
Recent findings deriving from both animal (mammalian) models and human studies suggest that the activity of the above-mentioned motivational systems and, thus, the degree to which approach and avoidance behaviors can be expressed in an individual, depend on the variable effects of both biological and psychosocial factors. In particular, results of studies with mammals show that the approach-avoidance aptitude may be modulated by the central activity of the neuropeptides oxytocin (OT) and arginine vasopressin (AVP) in target brain regions (Young, 2002); an interaction with the dopamine reward system has also been suggested (Skuse and Gallagher, 2009). Human data document that in healthy subjects personality traits and some cognitive processes may be related to the likelihood of adopting approach or avoidance behavior (Rettew et al., 2006;Spielberg et al., 2011). Finally, in persons suffering from psychopathological disorders, the approach-avoidance related motivational systems may show differential sensitivity to environmental stimulation (Muris et al., 2001;Hirano et al., 2002;Mitchell and Nelson-Gray, 2006). In view of the above observations, interest has recently centered on individual differences in approach-avoidance behavior and on the possible role played by the interaction between different psychobiological factors in moderating its expression.
Aims of the Review
The purpose of this paper was to highlight findings deriving from research in individuals with PD that might contribute to clarifying the factors related to individual differences in approach and avoidance aptitude. In particular, we here reviewed the PD literature to clarify whether neurobiological and neuropsychological modifications due to PD are associated with changes in approach-avoidance related personality features. We focused on this issue for three main reasons. First, PD clinical manifestations are primarily a consequence of dopamine dysfunction in neural networks whose activity is considered important for sustaining the activity of behavioral motivation systems (Young, 2002;Calabresi et al., 2006;Laricchiuta et al., 2014). Second, some data suggest that PD patients develop personality characteristics and psychopathological disorders associated with avoidance behavior (Meyer et al., 1999;Muris et al., 2001). Third, PD patients frequently present neuropsychological disorders involving cognitive functions that are critical for sustaining goal-directed behavior (Halliday et al., 2014). These three points will be discussed by focusing mainly on the results of studies that suggested a potential relationship between the modifications occurring during the course of PD and the expression of various aspects of approach and avoidance behavior.
Methods
The studies were searched in the electronic databases Medline and PsychoInfo up to the first months of 2014. In both databases the same keywords were used: Approach-avoidance; Dopamine systems; Parkinson's disease; Personality; Motivation Disorders; Cognitive functioning; Executive abilities. To be included in the review, studies had to investigate, in PD patients, personality traits that could be related to approach-avoidance aptitude and their relationship with neurobiological and neuropsychological variables. A list of the studies considered, with a description of their main characteristics and results, is reported in Table 1.
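As a minimal sketch of how a comparable keyword search could be run programmatically against PubMed, the snippet below uses Biopython's Entrez interface; the query string, date bounds, retmax value, and e-mail address are illustrative assumptions, since the review specifies only the keywords, not the exact search syntax.

from Bio import Entrez

# NCBI requires a contact e-mail with every Entrez request (placeholder below)
Entrez.email = "reviewer@example.org"

# Keywords from the Methods section, combined with OR for breadth (illustrative syntax)
query = ('("approach-avoidance" OR "novelty seeking" OR "harm avoidance" OR personality) '
         'AND "Parkinson disease" AND (dopamine OR motivation OR "executive function")')

# Restrict publication dates to the review's search window (assumed bounds)
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="1950/01/01", maxdate="2014/04/30", retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first PMIDs: {record['IdList'][:5]}")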
Neurobiological Mechanisms of Approach-Avoidance Behavior and PD: Evidence of an Overlap
Neurobiologic Correlates of Approach-Avoidance Behavior: Evidence from Non-PD Studies

Enter et al. (2012) found that dopamine transporter (DAT1) polymorphisms were related to different approach-avoidance behaviors when healthy adults were assessed using a task with stimuli of emotional social valence (i.e., human faces). In particular, these authors demonstrated that, compared with DAT1 10-repeat homozygote carriers, DAT1 9-repeat carriers showed an increased effect of the presented stimuli (happy and angry faces) on approach-avoidance responses. This finding suggests that the motivational behavioral systems of these subjects are more sensitive. The DAT is involved in dopamine reuptake in the striatum, and DAT1 9-repeat carriers have been reported to have lower levels of DAT than individuals with 10-repeat alleles, which indicates that these subjects have higher dopamine concentrations in the striatum (Heinz et al., 2000). Furthermore, in a recent functional magnetic resonance imaging (fMRI) study in healthy young subjects, Simon et al. (2010) documented greater ventral striatal and mesial orbito-frontal cortex activation when individuals who showed a high expression of reward-seeking behavior actually received rewards. By contrast, they found less ventral striatal activation when subjects who were more prone to inhibit appetitive behavior received a reward (Simon et al., 2010). These findings provide evidence, in line with previous data from animal models, that dopamine neurotransmission in neural networks (including the striatal structures) is critically involved in the modulation of motivation behavior (for a review of animal models see Hoebel et al., 2007).

Table 1 abbreviations: … (Cloninger, 1987); TCI: Temperament and Character Inventory (Cloninger et al., 1993); KSP: Karolinska Scales of Personality Questionnaire (Schalling et al., 1987); BFAC: Big Five Adjective Checklist (Caprara et al., 2002); MMPI: Minnesota Multiphasic Personality Inventory (Dahlstrom et al., 1972); Neo-FFI: Neo-Five Factor Inventory (Costa and McCrae, 1992).
In particular, based on findings suggesting that dopamine activity would promote appetitive behavior (e.g., moving toward external stimuli, reward seeking), whereas acetylcholine would mainly enhance behavioral inhibition and aversive responses (Avena et al., 2006), Hoebel et al. (2007) proposed that dopamine interacts with acetylcholine in the ventral striatum (i.e., in the nucleus accumbens) to maintain a functional balance between approach and avoidance tendencies.
Neurobiological Modifications in PD
PD is a well-known neurological disease that is primarily characterized by dysregulation of the nigro-striatal, mesolimbic and mesocortical dopaminergic brain systems (Owen, 2004;Dickson et al., 2009). More specifically, degeneration of the dopamine cells in the midbrain leads to precocious and severe dopamine depletion in the striatum, which first involves the rostrodorsal extent of the head of the caudate nucleus and, later, the ventral tegmental neurons that project to more ventral parts of this structure and to prefrontal and limbic regions (Yeterian and Pandya, 1991;Agid et al., 1993;Costa et al., 2009). In fact, in addition to movement disorders PD patients often display cognitive-behavioral deficits (Robbins and Cools, 2014). Although the role of dopamine brain transmission in causing cognitive-behavioral disorders in PD has not yet been completely clarified, in the early phase of the disease cognitive deficits are considered due to an imbalance between phasic dopamine activity in the dorsal striatum and tonic dopamine activity in the prefrontal cortex, which leads to reduced efficiency of flexibility processes (i.e., updating and set-shifting) (Cools, 2006;Cools and D'Esposito, 2011). With disease progression and the parallel greater involvement of dopamine transmission in the ventral striatum and of the dopamine projections to the other structures of the mesolimbic system, a reduced ability to decode and use environmental stimulation (e.g., reinforcers) to adopt functional behavior, altered emotional processing and declarative memory disorders are observed. The hypothesis was also advanced that the disrupted equilibrium between the activity of dopamine and acetylcholine, which occurs in the striatum, could account for some of the cognitive-behavioral manifestations of PD (Calabresi et al., 2006). As stated above, dopamine projections to striatal structures are primarily affected in PD, thus causing a decrease of dopamine activity and, likely, a parallel increase of cholinergic tone. According to Calabresi et al. (2006) the altered dopamine-acetylcholine equilibrium could affect synaptic mechanisms of long-term potentiation and depression and of synaptic depotentiation, in some way modifying frontal-striatal interconnections and causing learning and executive disorders. The above-mentioned evidence suggests that PD may precociously cause functional and structural changes in frontal-striatal regions whose activity is supposed to be responsible for the modulation of approach and avoidance responses in animals as well as in humans. Thus, PD is an interesting natural human model for investigating the psychobiological mechanisms involved in learning and sustaining these behavioral aptitudes.
Do the Personality and Psychopathological Features of PD Indicate an Approach-Avoidance Imbalance?
In the previous section we suggested that the neuropathological processes of PD might affect the functioning of brain circuitries involved in the mediation of approach and avoidance tendencies. This leads to the key question of whether these two main aspects of the behavioral motivational systems are impaired in PD patients. Some clinical reports are in line with this idea. In fact, a large proportion of PD patients suffer from depressive disorders and apathy (Aarsland et al., 2011;Martínez-Horta et al., 2013), which have been shown to be associated with a significant decrease of appetitive and self-initiated behaviors also in PD (Costa et al., 2006;Martínez-Horta et al., 2013;Damholdt et al., 2014;Spielberg et al., 2014). Anxiety symptoms, which are associated with avoidance behavior, particularly in the context of social interactions (Wong and Moulds, 2011), are also frequently described in these patients (Sagna et al., 2014). An opposite behavioral pattern, characterized by excessive attraction to rewarding stimuli, is observed in some PD individuals who develop impulse control disorders especially in response to the administration of dopamine therapy (Callesen et al., 2013).
More direct evidence of an imbalance between the behavioral activation and inhibition systems comes from research on the personality functioning of PD patients. These data document that personality traits such as novelty seeking, which mainly refers to the propensity towards active exploration in response to novel stimuli and the avoidance of frustration (Cloninger, 1987), are expressed to a lesser extent in PD patients compared to controls without neurologic diseases (Menza et al., 1993;Bódi et al., 2009; for a review see Poletti and Bonuccelli, 2012). By contrast, the personality trait of harm avoidance, which is characterized by the inclination to adopt a passive avoidance behavior, was found to be much more present in PD patients than in controls (Jacobs et al., 2001;Tomer and Aharon-Peretz, 2004;McNamara et al., 2008;Koerts et al., 2013). Other studies reported evidence of reduced extraversion in PD patients (Damholdt et al., 2014), which probably indicates their low aptitude for approaching social interactions (McCrae and Costa, 1997).
Personality Modifications May Predate Clinical Manifestation of PD
It was also hypothesized that personality changes might occur in a pre-clinical phase of PD. This hypothesis was mainly grounded in the observation that a personality marked by low novelty-seeking functioning, rigidity and caution predates the onset of extrapyramidal symptoms (for a review see Menza, 2000). Nevertheless, few longitudinal studies have been conducted to investigate this hypothesis. In this regard, some interesting data were reported by Bower et al. (2010) and by Arabia et al. (2010) in a cohort study in which more than 6,800 persons were followed for four decades. The Minnesota Multiphasic Personality Inventory, a validated psychometric test that investigates psychological disorders (Dahlstrom et al., 1972), was used to assess personality. The authors did not find a clear relationship between the constructs of introversion and extroversion and the risk of PD. However, results showed that an anxious personality, as assessed by the psychasthenia scale, is associated with a higher risk of developing PD, with a hazard ratio of 1.63. This finding is quite interesting for our discussion because there are reports that anxiety symptoms are associated with avoidance behavior in both human studies (Muris et al., 2001;Wong and Moulds, 2011) and animal models (Toth and Neumann, 2013). Evidence that the neurobiochemical alterations of PD occur years before clinical manifestation of the disease also supports the idea of a correlation with these personality changes. Nevertheless, findings from studies that directly correlated the personality changes and neurobiological modifications of PD are still sparse.
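For readers less familiar with hazard ratios, the standard Cox proportional-hazards formulation shows how the reported value is read; the model form below is generic and not a reconstruction of the exact analysis by Bower et al. (2010):

$$ h(t \mid x) = h_0(t)\, e^{\beta x}, \qquad \mathrm{HR} = e^{\beta} = 1.63 $$

so that, at any given time, persons scoring high on the psychasthenia scale (x = 1) have a 63% higher instantaneous risk of developing PD than comparable persons scoring low (x = 0).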
Relationship between Personality Changes and Neurobiological Modifications in PD
Findings Indicating a Positive Association

Bódi et al. (2009) documented a significant relationship between dopamine stimulation and novelty seeking in drug-naïve PD patients. In particular, they investigated the effect of dopamine agonist administration on different personality traits (i.e., novelty seeking, harm avoidance, reward dependence and persistence) and on reward learning using a feedback-based task. Patients underwent two assessments in which the examiners were blind to personality measures, test results and medication conditions, the first without taking any medication and the second after a 12-week period of treatment with the D2 and D3 dopamine receptor agonists pramipexole and ropinirole. At the first assessment PD patients showed significantly lower novelty-seeking and reward-learning scores than healthy controls. The second assessment showed that dopamine intake significantly improved PD patients' novelty-seeking scores and reward-learning performance, so that they could no longer be distinguished from healthy controls on these measures (Bódi et al., 2009). Tomer and Aharon-Peretz (2004) reported more complex results. They showed that left-right asymmetry of dopamine-related pathology may differentially affect personality functioning in PD patients. In fact, findings from this study suggest that in these patients dopamine loss in the left hemisphere is associated with reduced novelty-seeking behavior, while higher harm avoidance is related to dopamine loss in the right hemisphere (Tomer and Aharon-Peretz, 2004). These findings are in line with those of a previous PET study in PD patients in which Menza et al. (1995) demonstrated that 6-[18F]fluorodopa uptake in the caudate nucleus (of the left hemisphere) correlated with novelty-seeking scores.
The above observations are congruent with previous evidence, in people without neurological disorders, of the relevant role of dopamine transmission in the striatum in modulating sensitivity to reward- and novelty-seeking behavior (Leyton et al., 2002), and would sustain the general hypothesis that novelty seeking is strongly related to dopamine transmission (Cloninger, 2000; but see Paris, 2005 for a critical review). Interestingly, the above findings are also congruent with the hypothesis of brain hemispheric asymmetry in mediating activation and inhibition behavioral systems, with the left hemisphere more associated with approach and the right hemisphere with avoidance (Spielberg et al., 2013).
Studies Documenting a Non-Linear Association between Personality Features and PD-Related Neurobiological Changes
Partially divergent results with respect to those discussed above were reported in other studies with PD patients. Indeed, Kaasinen et al. (2004) showed an inverse correlation between novelty-seeking scores and dopamine receptor availability in the insula, a brain region highly interconnected with the striatum and suggested to be involved in different cognitive and emotional processes in PD (Christopher et al., 2014). The finding by Kaasinen et al. (2004) indicates that the likelihood of PD patients expressing a higher novelty-seeking aptitude corresponds with lower dopamine activity in this structure. In another independent PET study, Kaasinen et al. (2001) demonstrated that in unmedicated PD patients novelty-seeking scores did not correlate with 6-[18F]fluorodopa uptake in any of the target brain regions. Instead, higher harm avoidance scores were positively correlated with 6-[18F]fluorodopa uptake in the caudate nucleus.
In summary, an imbalance between the activity of behavioral activation and inhibition systems, characterized by lower approach and higher avoidance tendencies, seems to be present in PD patients (see Table 1 for a synthesis of the results of the main studies). This is supported by observations of their reduced novelty seeking, sensitivity to reward and self-initiated behavior. However, results on the potential role played by the neurobiological changes of PD in this hypothesized imbalance are inconclusive. One limitation of most studies investigating this issue in PD patients is the use of self-rating psychometric tools that require self-judgment of one's own characteristics. Further studies using more objective, performance-based paradigms, which specifically assess approach and avoidance behaviors, would be more informative. For instance, to better understand the role of dopamine neurotransmission in target brain regions in these processes, a functional magnetic resonance imaging protocol could be used to investigate how dopamine administration/withdrawal modulates neural activity in PD patients while they perform approach-avoidance procedures. Future investigations should also take into account several potentially confounding individual clinical (e.g., disease duration, disease severity, side of onset, pattern of movement disorders) and cognitive (e.g., presence of dementia, mild cognitive impairment or attention disorders) characteristics of the disease.
Are the Executive Disorders in PD Potentially Related to Approach-Avoidance Tendencies?
Results Evidencing an Association between Approach-Avoidance Behavior and the Executive System in Individuals without PD

The interaction between approach and avoidance motivational systems and executive functions appears to be critical for the maintenance of goal pursuit and, thus, for the implementation of adaptive behavior (for a review see Spielberg et al., 2008, 2013). This interaction has been observed in both behavioral and neuroimaging investigations. Gray (2001) showed that the induction of approach and withdrawal motivation differentially affected subjects' performance on verbal and spatial working memory tasks. In a subsequent study Spielberg et al. (2011) showed that trait motivation modulated dorsolateral prefrontal cortex activity while healthy subjects were performing a selective attention task (i.e., the Stroop color-word task). Specifically, in an fMRI protocol, these authors explored changes in neural activity as a function of subjects' scores on questionnaires investigating approach and avoidance temperament. Results showed a significant positive correlation between approach temperament scores and activation in the left superior and middle frontal gyri. A positive association was also found between avoidance scores and activation in the middle frontal gyrus (bilaterally) and the left superior frontal gyrus (Spielberg et al., 2011).
Similar results indicating an interaction at the level of the prefrontal cortex between motivation and executive abilities were previously reported by Pochon et al. (2002) and by Taylor et al. (2004). Indeed, the prefrontal cortex has been consistently found to be highly involved in the mediation of several executive abilities (e.g., planning, working, multitasking, prospective memory) and it is considered fundamental for the correct organization of information and goal-directed behavior (Miller and Cohen, 2001;Burgess et al., 2011;Yuan and Raz, 2014). A recently proposed integrated view accounts for the interaction between approach-avoidance traits, state motivation and executive skills in coherently pursuing internal goals (Spielberg et al., 2013).
On the basis of the above observations it can be hypothesized that the qualitative characteristics of executive functioning influence the expression of individual approach and avoidance tendencies. This hypothesis is corroborated by the clinical manifestations of brain damage involving the prefrontal cortices: such patients often present cognitive and behavioral signs, such as a decrease in goal-directed behavior, apathy and disinhibition, that could be related to an imbalance between activation and inhibition systems.
Results from Studies with PD Patients
We previously mentioned that PD patients present cognitive disorders early in the disease course, which are primarily related to dopamine dysfunction in the frontal-striatal circuitries (Costa et al., 2014a), with convergent evidence documenting their reduced ability to perform tasks sensitive to executive functions (Dirnberger and Jahanshahi, 2013;Kudlicka et al., 2013;Robbins and Cools, 2014). In particular, set-shifting and updating efficiency appear to be reduced early, likely affecting their ability to successfully maintain a goal-directed behavior (Cools, 2006;Cools and D'Esposito, 2011). In fact, these patients have difficulty in performing planning and multitasking tests and in spontaneously retrieving the intention to perform planned actions (Owen, 2004;Kliegel et al., 2011;Costa et al., 2014b,c).
Based on the above observations, documenting both an association between approach and avoidance motivational systems and executive functions in non-PD individuals and a decreased efficiency of the executive system in PD patients, we could hypothesize that, in the latter, executive dysfunction is related to an approach-avoidance imbalance. Some findings are in line with this hypothesis (a synthesis of the main results is reported in Table 1). McNamara et al. (2008) reported, in 44 PD patients without dementia, a significant inverse correlation between harm avoidance rates, as assessed by means of the Temperament and Character Inventory, and a measure of cognitive flexibility and strategic access to information (i.e., verbal fluency). In another study, Volpato et al. (2009) administered to 25 PD patients without dementia the Big Five Adjective Checklist to examine different personality traits, and the Tower of London test and the Alternating Fluency test to investigate planning and set-shifting, respectively. The authors found a significant correlation between the personality factor of openness to experience and PD patients' scores on alternating fluencies, thus indicating a significant relationship between this personality factor and cognitive flexibility processes.
In a more recent study with 43 PD patients without dementia, Koerts et al. (2013) investigated the relationship between different components of executive functions (cognitive flexibility, inhibition, working memory, planning and verbal fluency) and various aspects of personality functioning (novelty seeking, harm avoidance, persistence and reward dependence). Neuropsychological tests and questionnaires (the Temperament and Character Inventory) were administered to assess cognitive and personality features, respectively. The authors found that PD patients' scores on cognitive flexibility measures significantly predicted reward dependence rates (Koerts et al., 2013).
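As an illustration of the kind of analysis these studies report, the sketch below correlates a cognitive flexibility score with a reward dependence score and fits a simple linear regression; the data are synthetic stand-ins, since the published datasets are not available, and the variable names are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for 43 patients: flexibility partly predicts reward dependence
n = 43
flexibility = rng.normal(50, 10, n)                          # e.g., set-shifting score
reward_dependence = 0.4 * flexibility + rng.normal(0, 8, n)  # TCI-style scale

# Pearson correlation, as in McNamara et al. (2008)-style analyses
r, p = stats.pearsonr(flexibility, reward_dependence)

# Simple linear regression, as in Koerts et al. (2013)-style prediction
slope, intercept, r_val, p_val, se = stats.linregress(flexibility, reward_dependence)

print(f"r = {r:.2f} (p = {p:.3f}); reward_dependence ~ {slope:.2f}*flexibility + {intercept:.1f}")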
In summary, taken together the above findings do not allow us to draw firm conclusions about the nature of the relationship between executive functioning and approach-avoidance imbalance in PD. Indeed, the reported associations between some personality factors (i.e., reward dependence and openness to experience) and cognitive flexibility are undoubtedly interesting, as neural activity in the frontal-striatal circuitries has been demonstrated to be critical for both reward-related behavior and cognitive flexibility (O'Doherty, 2004;Bódi et al., 2009;Macdonald and Monchi, 2011). However, a better method for investigating the relationship between executive functioning and approach-avoidance aptitude in PD patients would be to test the effect, on approach-avoidance related behavior, of cognitive training aimed at potentiating executive abilities. The observation of executive improvement combined with a change in approach-avoidance behavior would be clear evidence of a causal relationship. In this regard it should be noted that some cognitive training programs have been proposed that significantly enhance some components of executive functioning, e.g., cognitive flexibility, which, as discussed here, may be associated with approach-avoidance tendencies also in persons with PD (Calleo et al., 2012;Costa et al., 2014b).
Conclusions
Currently, the main issues in the study of individual differences in the expression of approach and avoidance behaviors are (i) the role played by dopamine neurotransmission; and (ii) the possible influence of cognitive characteristics, particularly executive functioning. Regarding the first point, there is some evidence that dopamine and its interaction with other neuromodulators at the level of the mesocorticolimbic networks affect these processes by modulating the learning of individual response modalities (Skuse and Gallagher, 2009). However, this evidence is mainly derived from studies that used animal models; human data remain inconclusive. Regarding the second point, the results of some studies likewise suggest the existence of a potentially bidirectional relationship between cognitive (executive) functioning and approach-avoidance behavior. Extant findings are, however, still sparse and not univocal.
Here we highlight the potentially relevant contribution of research on PD in clarifying the above issues. In fact, PD is characterized early by dopamine loss in brain circuitries, including the frontal-striatal, mesolimbic and mesocortical pathways, which are implicated in sustaining the functioning of motivational behavioral systems and executive processes.
Some evidence seems to support the idea that PD may be associated with an alteration of approach-avoidance behaviors, as PD patients show reduced novelty seeking and higher harm avoidance. However, the literature that directly investigated the relationship between PD patients' neurobiological and neuropsychological modifications and approach-avoidance related personality functioning has yielded inconclusive results. Further research that overcomes the limitations of previous studies (e.g., low sample size, clinical heterogeneity, heterogeneity of dopamine treatments, use of self-report questionnaires) is needed to further explore these important topics in PD.
Contribution of the author CC: conception of the work, revising the draft, final approval of the version to be published, agreement to be accountable for all aspects of the work.
Redox Homeostasis and Natural Dietary Compounds: Focusing on Antioxidants of Rice (Oryza sativa L.)
Redox homeostasis may be defined as the dynamic equilibrium between electrophiles and nucleophiles that maintains the optimum redox steady state. This mechanism involves complex reactions, including the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway, activated by oxidative stress in order to restore the redox balance. The ability to maintain optimal redox homeostasis is fundamental for preserving physiological functions and preventing a phenotypic shift toward pathological conditions. Here, we review the mechanisms involved in redox homeostasis and how certain natural compounds regulate the nucleophilic tone. In addition, we focus on the antioxidant properties of rice and particularly on its bioactive compound, γ-oryzanol. It is well known that γ-oryzanol exerts a variety of beneficial effects mediated by its antioxidant properties. Recently, γ-oryzanol was also found to be a Nrf2 inducer, resulting in nucleophilic tone regulation and making rice a para-hormetic food.
Introduction
Redox homeostasis may be defined as the internal dynamic equilibrium, with respect to the continuous alterations of electrophilic and nucleophilic tone, that maintains the optimum redox steady state [1][2][3]. The maintenance of redox homeostasis is crucial for preserving physiological functions since reactive oxygen and nitrogen species (ROS/RNS) are constantly generated in the normal metabolism of aerobic cells [4]. In fact, alterations of redox homeostasis, resulting from the inability to maintain the optimal redox steady state, are associated with a phenotypic shift toward pathological conditions [5,6]. According to the Harman theory of aging, an imbalance between the efficiency of antioxidant systems and the production of free radicals, derived from mitochondrial respiration and/or external environmental stressors, underlies alterations of the aging process and the development of age-related diseases. Following this concept, scavenging free radicals from the cellular environment might seem the right approach for a beneficial effect. However, ROS/RNS are not intrinsically harmful, and the Harman theory today appears too simplistic [7]. The fine regulation of the dynamic equilibrium between electrophilic and nucleophilic tone is at the base of redox homeostasis. When the electrophilic tone increases, nucleophilic feedback reactions are activated through the engagement of redox signaling in order to restore the system back to the initial steady state [5,8]. These reactions involve the antioxidant response element (ARE, also known as electrophile response element, EpRE)-regulated phase II enzymes such as NAD(P)H: quinone oxidoreductase 1 (NQO1), heme oxygenase-1 (HO-1), and glutathione S-transferase (GST), whose transcriptional activation requires a basic leucine zipper transcription factor called nuclear factor erythroid 2-related factor 2 (Nrf2) [9][10][11].
Natural antioxidants have recently attracted great attention for preventing lifestyle and age-related diseases. They contain phytochemicals that help the body to counteract oxidative stress by directly scavenging free radicals or activating antioxidant pathways [12][13][14][15]. Recently, new evidence has shown that certain natural compounds are able to induce ARE-regulated phase II enzymes, thus participating in the maintenance of redox homeostasis [16][17][18][19][20][21][22][23][24][25][26][27][28][29][30]. A good understanding of the mechanisms by which certain natural compounds preserve the nucleophilic tone is of great interest in the field of public health prevention, especially in the aging population.
Thus, the aim of this review is to summarize the mechanisms involved in redox homeostasis and how certain natural compounds regulate the nucleophilic tone. In addition, we discuss the antioxidant properties of rice, focusing on γ-oryzanol and underlining the relationship between chemical structures and biological effects. Moreover, based on our recent findings, we emphasize a possible role of γ-oryzanol as a para-hormetic natural compound.
Oxidants and the Electrophilic Tone Regulation
Since 1985, the concept of oxidative stress has been introduced into redox biological research [31]. In general, the term oxidative stress refers to an imbalance between the efficiency of antioxidant capacity and free radical production. Several types of reactive species are generated in the body, in the form of free radicals or non-radicals, as the result of normal metabolic reactions or the exposure to exogenous toxins and pathological events; these species include oxygen and nitrogen derivatives (ROS/RNS).

Although it is widely believed that these free radicals and oxidized products can cause cellular and tissue damage, it is also true that nature has selected ROS/RNS as a signal transduction mechanism responsive to the effects of nutrients and the oxidative environment. Only in the past two decades has it become apparent that ROS serve as signaling molecules to regulate biological and physiological processes [32-34,40-43]. Their roles as harmful compounds or signal molecules depend on the type of ROS/RNS, the duration of the stimulus, and their local concentrations [35,44]. Redox signaling involves specific electrophiles that react with specific protein thiolates, and this redox shift is rapidly reverted by feedback reactions. In this context, redox post-translational modifications (rPTMs) of cysteine residues, reminiscent of phosphorylation and ubiquitination of critical amino acids, regulate a broad spectrum of protein activities [45]. For example, in the redox signaling pathway, the thiolate anion (Cys-S−) of cysteine residues can be oxidized by H2O2 to the sulfenic form (Cys-SOH), modulating protein functions. This reaction can be reverted by thioredoxin and glutaredoxin, thus ensuring the fine regulation of signal transduction. On the other hand, accumulation of H2O2 can further oxidize the sulfenic (Cys-SOH) to sulfinic (Cys-SO2H) and sulfonic (Cys-SO3H) species, both of which are irreversible modifications that permanently damage protein structures and functions [46,47]. RNS are also involved in the redox signaling pathway through S-nitrosylation, generating S-nitrosoproteins [48]. S-nitrosylation is the reversible reaction of nitric oxide-derived species with thiols of cysteine residues through oxidation or transnitrosylation, a transfer of NO [45,49]. The relevance of the role of S-nitrosylation as a pleiotropic player in protein rPTMs is underlined by the specificity of target substrates and the enzymatic mechanisms involved in S-nitrosylation/denitrosylation [45,49]. On the other hand, aberrant S-nitrosylation affects protein functions and is related to the development of various diseases including cancer, diabetes type 1 and 2, cardiovascular (CVD), Parkinson's (PD), and Alzheimer's (AD) diseases [55]. Another example is lipid peroxidation, the oxidative chain degradation of lipids in the cell membrane. This reaction is initiated by free radicals attacking the polyunsaturated fatty acids of the lipid membrane, producing lipid hydroperoxides (LOOH) and their derivatives [56,57]. LOOH are further catalyzed by lipoxygenase enzymes, whose activation also requires hydroperoxide, resulting in cellular damage [58,59]. However, this chain reaction of lipid peroxidation also produces one of the α, β-unsaturated aldehydes, 4-hydroxynonenal (HNE). HNE is mainly generated from the reaction between n-6 fatty acids and hydroperoxide, and functions as a signal transduction molecule for redox homeostasis [59][60][61][62].
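The cysteine oxidation steps and the lipid peroxidation chain described above can be summarized schematically; these are standard redox-chemistry schemes added for clarity, not reactions taken from a specific cited study:

$$ \mathrm{Cys{-}S^{-}} \xrightarrow{\;\mathrm{H_2O_2}\;} \mathrm{Cys{-}SOH} \xrightarrow{\;\mathrm{H_2O_2}\;} \mathrm{Cys{-}SO_2H} \xrightarrow{\;\mathrm{H_2O_2}\;} \mathrm{Cys{-}SO_3H} $$

where only the first step is reverted by thioredoxin/glutaredoxin, and

$$ \mathrm{LH} + \mathrm{R^{\cdot}} \rightarrow \mathrm{L^{\cdot}} + \mathrm{RH}, \qquad \mathrm{L^{\cdot}} + \mathrm{O_2} \rightarrow \mathrm{LOO^{\cdot}}, \qquad \mathrm{LOO^{\cdot}} + \mathrm{LH} \rightarrow \mathrm{LOOH} + \mathrm{L^{\cdot}} $$

where LH is a polyunsaturated fatty acid; decomposition of n-6 LOOH yields, among other products, HNE.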
In the normal state, HNE is produced in amounts appropriate to maintain the redox steady state, modulating the mRNA expression of antioxidant enzymes [63]. In the presence of oxidative stress, HNE production increases substantially, and HNE acts as a signaling species for nucleophilic feedback to counteract oxidative stress and restore the original steady state [64,65].
Antioxidants and the Maintenance of Nucleophilic Tone
To counteract the effects of electrophiles and oxidants, the body is endowed with a category of compounds called antioxidants. These antioxidants are produced endogenously (i.e., the enzymes superoxide dismutase (SOD), catalase (CAT), and glutathione peroxidase (GPx)) or received from exogenous sources (i.e., vitamins: vitamin C and E; minerals: zinc (Zn), manganese (Mn), and selenium (Se); and phenolic compounds). They represent the first defense against a burst of ROS/RNS to restore the nucleophilic tone. The antioxidant enzyme cascade reacts sequentially to neutralize free radicals: SOD catalyzes the dismutation of the superoxide anion, producing H2O2, which is in turn decomposed by CAT or GPx to water [66][67][68][69]. Vitamins, phenolic compounds, and minerals are not endogenously produced, so they have to be obtained from the diet. Vitamin C is a hydrosoluble antioxidant, while vitamin E is a liposoluble one [70][71][72]. Phenolic compounds are among the most common antioxidants found in food [73][74][75]. Thanks to their chemical structures, these vitamins and phenolic compounds can neutralize free radicals by donating hydrogen atoms and becoming stable resonance structures, thereby protecting cell membranes and proteins from oxidative damage [75][76][77][78][79][80][81]. Dietary minerals are essential cofactors of antioxidant enzymes involved in redox homeostasis. For example, Zn, Mn, and Se function in various classes of enzymes: Zn is present in cytosolic SOD (SOD1), Mn is well known for mitochondrial SOD (SOD2), while Se is a co-factor of GPx [82][83][84].
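The sequential action of this enzyme cascade corresponds to the following textbook reactions, written out here for clarity (standard biochemistry, not taken from the cited references):

$$ 2\,\mathrm{O_2^{\cdot-}} + 2\,\mathrm{H^+} \xrightarrow{\;\mathrm{SOD}\;} \mathrm{H_2O_2} + \mathrm{O_2} $$
$$ 2\,\mathrm{H_2O_2} \xrightarrow{\;\mathrm{CAT}\;} 2\,\mathrm{H_2O} + \mathrm{O_2} $$
$$ \mathrm{H_2O_2} + 2\,\mathrm{GSH} \xrightarrow{\;\mathrm{GPx}\;} 2\,\mathrm{H_2O} + \mathrm{GSSG} $$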
In addition, cellular stress response-related enzymes, also called ARE-regulated phase II enzymes (HO-1, NQO1, and GST), are engaged in the long-lasting maintenance of redox homeostasis, supporting the nucleophilic tone [10,11]. These enzymes are transcriptionally under the control of Nrf2 through its binding to the consensus ARE within the 5′-flanking promoter region of the target genes. Nrf2 is considered a sentinel of oxidative stress and protects the body by making it more resistant to oxidative insults [85]. For instance, Nrf2 knockout mice are substantially more susceptible to a broad range of chemical toxicities and disease conditions associated with oxidative pathology such as AD, PD, and amyotrophic lateral sclerosis (ALS) [86][87][88][89][90][91][92][93]. Activation of Nrf2-mediated gene transcription involves various complex processes [94,95]. Nrf2 generally binds to Kelch-like ECH-associated protein 1 (Keap1) in the cytoplasm. Keap1 is a protein regulator playing a key role in controlling the steady state of the Nrf2 pathway based on redox conditions. In basal or unstressed conditions, Nrf2 is less activated and has a rapid turnover rate with a short half-life due to the formation of the Nrf2-Keap1-Cul3 complex in the cytoplasm: Keap1, which interacts with Cul3-E3-ligase (a ubiquitin ligase), binds to Nrf2 and promotes Nrf2 ubiquitination, leading to rapid degradation by the 26S proteasome [96][97][98][99][100][101][102][103]. This complex can be interrupted by various electrophiles, resulting in the activation of the Nrf2 signaling pathway. In the presence of oxidative stress, Keap1 further functions as a stress sensor through the stress-induced oxidation of its key cysteine residues. The stimuli can oxidize or covalently modify these cysteine residues (e.g., through disulfide (-S-S-) bond formation), causing conformational changes that interrupt the Keap1-Cul3 complex by inhibiting the ubiquitin E3 ligase activity. The reaction decreases the ability of Keap1 to bind Nrf2, thereby freeing Nrf2 and enabling its nuclear translocation. Before its nuclear translocation, Nrf2 is phosphorylated by protein kinases (protein kinase C-δ (PKCδ) and protein kinase B (Akt)), which are also induced by electrophiles and oxidants. Once translocated into the nucleus, Nrf2 heterodimerizes with small Maf proteins and binds to the cis-acting ARE element, activating the transcription of ARE-dependent phase II enzymes such as HO-1, GST, and NQO1 [96,99,[103][104][105][106][107][108]. In addition, this Nrf2-ARE activation also controls the expression of several cytoprotective genes such as thioredoxin 1, thioredoxin reductase 1, sulfiredoxin 1, NADPH-generating enzymes (glucose-6-phosphate dehydrogenase (G6PD), 6-phosphogluconate dehydrogenase (PGD), malic enzyme (ME)1, and isocitrate dehydrogenase (IDH)1), ferritin, and the glutathione-based system (GPx, glutathione disulfide (GSSG), glutathione reductase (GSR)) [105,[109][110][111][112][113][114][115][116][117][118][119][120].
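To make the regulatory logic concrete, below is a deliberately minimal toy simulation of this switch (constant Nrf2 synthesis, Keap1-dependent degradation that is suppressed under stress, and nuclear translocation of the stabilized pool); all rate constants are arbitrary illustrative values, not measured parameters.

import numpy as np

def simulate_nrf2(stress, t_end=100.0, dt=0.01):
    """Toy Euler integration of cytoplasmic (C) and nuclear (N) Nrf2 levels."""
    k_syn = 1.0                            # constant synthesis of Nrf2
    k_deg = 0.05 if stress else 0.5        # Keap1-mediated degradation, suppressed by stress
    k_in = 0.1                             # nuclear translocation of free Nrf2
    k_out = 0.02                           # nuclear export / decay
    C, N = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dC = k_syn - k_deg * C - k_in * C
        dN = k_in * C - k_out * N
        C += dC * dt
        N += dN * dt
    return C, N

for stress in (False, True):
    C, N = simulate_nrf2(stress)
    print(f"stress={stress}: cytoplasmic Nrf2 ~ {C:.2f}, nuclear Nrf2 ~ {N:.2f}")

Even this crude sketch reproduces the qualitative behavior described above: suppressing Keap1-mediated degradation stabilizes cytoplasmic Nrf2 and drives a much larger nuclear pool.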
Rice Antioxidants
Rice has been a primary staple food for billions of people worldwide and has also represented cultural identity and global unity for centuries [139,140]. Rice cultivated for consumption comprises two major species: Oryza sativa and Oryza glaberrima. Oryza sativa varieties originated in South-East Asia and are also found throughout Asia, America, and Europe. Oryza glaberrima originated in West Africa and is grown only in this area [141]. Rice from all varieties can be further categorized into two subspecies, Japonica and Indica, based on the degrees of spikelet and pollen sterility in F1 hybrids between them. Rice contains essential amino acids, dietary fibers, carotenoids, folate, lignin, and minerals, and it is rich in many bioactive phytochemicals: γ-oryzanol, vitamin E (tocopherols and tocotrienols), γ-aminobutyric acid (GABA), phenolics, flavones, and anthocyanins [142][143][144][145]. Studies have shown that rice is not only important as a staple food, but also plays a role in promoting various health benefits such as anti-inflammatory, anti-diabetic, anti-hyperlipidemic, anti-cancer, and antioxidant potential [146][147][148][149][150][151][152][153][154].
Some of the therapeutic effects of rice in preventing diabetes type 2 (T2D), CVD, obesity, different types of cancer, and inflammation are attributed to its antioxidant properties [155][156][157][158]. The antioxidant properties of rice were first reported in 1989, when some of its active compounds, including flavonoids, α-tocopherols, and γ-oryzanol, were identified [159,160]. Since 2000, research interest in rice has grown, and the number of research articles related to its antioxidant properties has increased more than 15-fold [161,162]. Today, it is known that rice contains about 100 kinds of antioxidants, which can be essentially classified into two major groups: vitamins and phenolic-based compounds [163][164][165]. The mechanisms by which these bioactive compounds exert antioxidant activity have been investigated [166,167]. For example, it is well known that vitamin E exerts its antioxidant effects by quenching free radicals. Vitamin E in rice includes tocopherols (α, β, γ, δ forms) and tocotrienols (α, β, γ, δ forms) [143]. Thanks to its structure of a chromanol ring bearing a free hydroxyl group, the hydrogen atom of the hydroxyl group can be donated to a free radical, resulting in a delocalized and stabilized unpaired electron, the vitamin E radical (Figure 1). Since the vitamin E radical is much less reactive than other radicals, it is stable enough to break the radical chain cascade [168][169][170][171].

Phenolic compounds consist of at least one aromatic ring and one hydroxyl group [172,173]. This chemical structure is present in many active compounds, including ferulic, cinnamic, p-coumaric, caffeic, sinapic, chlorogenic, gallic, vanillic, p-hydroxybenzoic, protocatechuic, and syringic acids, flavonoids, and their derivatives [163][164][165]. Those found more in pigmented rice are anthocyanins (cyanidin-3-O-glucoside, cyanidin-3-O-galactoside, cyanidin-3-O-rutinoside, peonidin-3-O-glucoside, and pelargonidin-3-O-glucoside) and proanthocyanidins, which are responsible for the purple-to-black and reddish colors, respectively, while ferulic acid esters are more abundant in the rice bran layer [174][175][176][177][178][179]. The phenolic-based structure provides strong antioxidant properties through free radical scavenging: the hydroxyl group on the phenolic ring can transfer its hydrogen atom to a free radical, forming a delocalized and stabilized unpaired electron, the phenoxy radical, across the phenolic ring (Figure 2). The antioxidant capacity of phenolic-based compounds is a function of the number of hydroxyl groups, the position of the hydroxyl group on the aromatic ring (ortho, para, meta), and the presence of other functional groups on the molecule [180,181]. The antioxidant activities of these compounds are four times stronger than that of α-tocopherol [182][183][184]. In addition, some of these compounds, such as anthocyanins and ferulic acid derivatives, are also able to activate ARE-regulated phase II enzyme expression [185][186][187][188][189].

Figure 2. Antioxidants (phenolic compounds) in rice and structure-activity relationship. Phenolic compounds act as free radical scavengers, since the hydroxyl group on the phenolic ring can transfer its hydrogen atom to a free radical, forming a delocalized and stabilized unpaired electron (phenoxy radical) across the phenolic ring. The antioxidant capacity of phenolic compounds is a function of the number of hydroxyl groups, their position on the aromatic ring (ortho, para, meta), and the presence of other functional groups on the molecule.
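The hydrogen atom transfer step invoked for both vitamin E and the phenolics can be written generically (standard antioxidant chemistry, added here for clarity):

$$ \mathrm{ArOH} + \mathrm{R^{\cdot}} \longrightarrow \mathrm{ArO^{\cdot}} + \mathrm{RH} $$

where ArOH is the chromanol or phenolic antioxidant, R• the attacking radical (e.g., a lipid peroxyl radical LOO•), and ArO• the resonance-stabilized antioxidant radical that terminates the chain.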
Γ-Oryzanol has been proposed as a potent antioxidant compound [198]. A growing number of studies have demonstrated that γ-oryzanol acts as a free radical scavenger and improves the activity of endogenous antioxidant enzymes. The major components of γ-oryzanol (campesteryl ferulate, cycloartenyl ferulate, and 24-methylenecycloartenyl ferulate) are more potent antioxidants than all components of vitamin E, with 24-methylenecycloartenyl ferulate showing the highest antioxidant activity [199]. The antioxidant properties of γ-oryzanol tested with different types of radicals (inorganic oxygen-derived radicals, DPPH radicals, lipid-soluble organic radicals, 2,2′-azinobis-3-ethylbenzothiazoline-6-sulfonic acid (ABTS) free radicals, and linoleic acid peroxidation) revealed that γ-oryzanol acts as an organic, lipophilic, as well as hydrophilic radical scavenger [200,201]. Moreover, γ-oryzanol was found to possess SOD-like activity in inhibiting superoxide radical-catalyzed pyrogallol autoxidation [202]. In vitro, pretreatment of cells with the main compounds of γ-oryzanol (sitosteryl ferulate, cycloartenyl ferulate, and 24-methylenecycloartenyl ferulate) prevented H2O2-induced ROS production via scavenging of free radicals [203]. Likewise, we recently demonstrated in HEK-293 cells that γ-oryzanol pretreatment prevents H2O2-induced ROS generation by increasing the activity and protein expression of antioxidant enzymes such as SODs [204].
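For reference, DPPH- and ABTS-type scavenging activities such as those cited above are conventionally quantified from absorbance readings as a percent inhibition; the formula below is the standard assay calculation, not one reported in the cited studies:

$$ \%\ \mathrm{scavenging} = \frac{A_{\mathrm{control}} - A_{\mathrm{sample}}}{A_{\mathrm{control}}} \times 100 $$

where A_control is the absorbance of the radical solution alone and A_sample the absorbance in the presence of the antioxidant.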
The antioxidant properties of γ-oryzanol have also been shown in in vivo models. In a Drosophila melanogaster model, γ-oryzanol enhanced antioxidant defense by significantly improving antioxidant enzymes such as SOD, CAT, and GST, and decreasing malondialdehyde (MDA) and ROS production [205]. In rats with streptozotocin-induced oxidative stress, γ-oryzanol also effectively increased the levels of SOD and reduced glutathione [206]. In a mouse model of ethanol-induced liver injury, γ-oryzanol was able to prevent increases in hepatic lipid hydroperoxide and TBARS levels as well as in plasma aspartate and alanine aminotransferase activities. Moreover, in the same study, γ-oryzanol also improved SOD activity, suggesting that its effects in preventing ethanol-induced hepatic injury could be mediated by its antioxidant properties [207]. Interestingly, the effects of γ-oryzanol were also evaluated in animals fed a high-fat diet (HFD). The authors found a significant increase of antioxidant enzyme activities and a decrease of free radicals in mice fed an HFD supplemented with γ-oryzanol compared with control HFD animals. Furthermore, in these mice, γ-oryzanol decreased hepatic lipogenesis and suppressed plasma triglyceride and total cholesterol levels, with an increase of HDL cholesterol concentrations [208]. The effects of γ-oryzanol on lipid metabolism were also investigated in dyslipidemic patients, where its beneficial effects were compared with those of other dietary supplements, including vitamin E and polyunsaturated fatty acids (PUFA) n-3. Among these, the group consuming γ-oryzanol showed a greater lowering of oxidative stress via regulation of ROS levels, total antioxidant capacity, and inflammatory biomarkers (i.e., tumor necrosis factor (TNF-α), interleukin-1β (IL-1β), and thromboxane B2 (TXB2)), supporting the notion that the anti-hyperlipidemic effects of γ-oryzanol are mediated by its antioxidant properties [209].
The peculiar beneficial properties of γ-oryzanol lie also in its ability to regulate the transcriptional expression of genes related to redox homeostasis and cell survival. For example, in SH-SY5Y cells challenged with H2O2, γ-oryzanol decreased oxidative stress and prevented neurotoxicity by upregulating antioxidant genes (SODs) and anti-apoptotic genes (NF-κB and Bcl-2) and by downregulating pro-apoptotic genes (TNF, BAX, and caspase-9) [13]. Recently, we further demonstrated that γ-oryzanol activated Nrf2 nuclear translocation and the Nrf2-ARE pathway, with an increase of mRNA and protein expression of ARE-responsive phase II enzymes such as HO-1 and NQO1 [204]. The activation of the Nrf2 pathway seems to be central in the biological effects of γ-oryzanol. In fact, several of the aforementioned genes are directly or indirectly regulated by Nrf2. For example, Niture and Jaiswal [210], by using band/supershift and ChIP assays, showed a direct interaction between Nrf2 and the Bcl-2 antioxidant response element, leading to activation of Bcl-2 gene expression. NF-κB and TNF are not directly under the control of Nrf2, but they could be modulated by Nrf2 target genes such as HO-1 and NQO1 [211,212]. Likewise, Nrf2 was also found to regulate the expression of BAX and caspase-9 in human glioblastoma cells [213].
Therefore, from a structure-activity point of view (Figure 3), γ-oryzanol could possess free radical scavenging properties due to the transfer of the hydrogen atom of the 4-hydroxyl group on the phenolic ring of the ferulic acid moiety, forming a phenoxy radical. This phenoxy radical is highly stabilized by the delocalization of the unpaired electron across the phenolic ring, the unsaturated side chain, and the carbonyl group (α, β-unsaturated carbonyl moiety) [214][215][216][217]. On the other hand, γ-oryzanol might activate the Nrf2 pathway through at least two possible mechanisms: (1) the formation of more stable oxygen species, including H2O2, during the transfer of hydrogen atoms to free radical species, which could act as a signaling molecule for Nrf2 activation [182,[218][219][220]; (2) the presence of the α, β-unsaturated carbonyl moiety, which is responsible for its electrophilicity, a main common property of Keap1-Nrf2-ARE pathway inducers. The α, β-unsaturated carbonyl moiety, a Michael acceptor, is a carbon-carbon double bond (olefin) conjugated with an electron-withdrawing carbonyl group [123,221]. This carbonyl group can delocalize an electron across the oxygen to the olefin, inducing a partial positive charge at the olefin carbon atom and thereby conferring electrophilicity on this carbon atom. Thus, the α, β-unsaturated carbonyl moiety becomes an electrophilic moiety that attracts electrons and nucleophiles, particularly cysteine residues of Keap1, leading to oxidation of Keap1, Nrf2 nuclear translocation and, in turn, Nrf2-ARE activation [222,223]. Interestingly, the α, β-unsaturated carbonyl moiety in γ-oryzanol is structurally similar to the α, β-unsaturated aldehyde of HNE, an endogenous electrophile signaling molecule known as a Nrf2 inducer [123,221]. Therefore, γ-oryzanol possesses peculiar chemical properties, from both its ferulic acid moiety and its phytosterols, to counteract oxidative stress and mimic body electrophiles, activating nucleophilic feedback reactions and restoring redox homeostasis.
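The Keap1 modification invoked here is a classical thiol-Michael addition, which can be sketched generically as follows; the adduct shown is an illustrative scheme, not a characterized γ-oryzanol-Keap1 structure:

$$ \mathrm{Keap1{-}S^{-}} + \mathrm{R{-}CH{=}CH{-}C({=}O){-}R'} \longrightarrow \mathrm{Keap1{-}S{-}CH(R){-}CH_2{-}C({=}O){-}R'} $$

where the thiolate adds at the electrophilic β-carbon of the Michael acceptor (for a ferulate ester, R is the substituted phenyl ring and R′ the sterol ester group).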
Figure 3. Proposed structure-activity basis for the antioxidant and Nrf2-inducing properties of γ-oryzanol. (1) The 4-hydroxyl group on the phenolic ring is responsible for the hydrogen atom transfer reaction: the hydroxyl group forms a phenoxy radical by transferring its hydrogen atom to a free radical, which contributes to the free radical scavenging properties. This phenoxy radical is highly resonance-stabilized because the unpaired electron is able to delocalize across the oxygen to the phenolic ring and the α, β-unsaturated carbonyl moiety. (2) The presence of the α, β-unsaturated carbonyl moiety of γ-oryzanol is responsible for its electrophilicity. This carbonyl group can delocalize an electron across the oxygen to the olefin, inducing a partial positive charge at the olefin carbon atom, thereby giving this carbon atom an electrophilicity to attract electrons and nucleophiles, particularly cysteine residues of Keap1, leading to Nrf2 nuclear translocation and in turn Nrf2-ARE activation.
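As an illustrative summary (a schematic sketch, not taken from the original figure; "Fer-OH" is shorthand for the ferulate moiety of γ-oryzanol, R• an arbitrary free radical, and R' the remainder of the Michael acceptor), the two proposed mechanisms can be written as reaction schemes:

```latex
% (1) Hydrogen atom transfer (HAT): the 4-OH of the ferulate moiety quenches
%     a free radical, leaving a resonance-stabilized phenoxy radical.
\mathrm{Fer{-}OH} + \mathrm{R}^{\bullet} \longrightarrow \mathrm{Fer{-}O}^{\bullet} + \mathrm{R{-}H}

% (2) Michael-type 1,4-addition: a nucleophilic Keap1 cysteine thiol adds to
%     the electrophilic beta-carbon of the alpha,beta-unsaturated carbonyl,
%     forming a thioether adduct; the modified Keap1 releases Nrf2.
\mathrm{Keap1{-}SH} + \mathrm{R{-}CH{=}CH{-}CO{-}R'} \longrightarrow \mathrm{R{-}CH(S{-}Keap1){-}CH_{2}{-}CO{-}R'}
```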
Concluding Remarks
The concept of food and its bioactive nutrients maintaining nucleophilic tone and restoring redox homeostasis is of great interest in preventing pathology and even in curing chronic diseases. The fact that these natural compounds, including γ-oryzanol, have been demonstrated to be Nrf2 inducers provides new insights into their mechanisms of action, supporting and encouraging their use. Indeed, γ-oryzanol is already used as a prescription drug in Japan to treat various conditions such as hyperlipidemia, irritable bowel syndrome, autonomic ataxia, and menopausal syndrome [224][225][226]. γ-Oryzanol is certainly a promising natural dietary compound, being present in high amounts in rice, one of the principal grains of the human diet.
Patient-reported outcomes in palliative gastrointestinal stenting: a Norwegian multicenter study
Background The clinical effect of stent treatment has been evaluated mainly by physicians; only a limited number of prospective studies have used patient-reported outcomes for this purpose. The aim of this work was to study the clinical effect of self-expanding metal stents (SEMS) in the treatment of malignant gastrointestinal (GI) obstructions, as evaluated by patient-reported outcomes, and to compare the rating of the treatment effect by patients and physicians. Methods Between November 2006 and April 2008, 273 patients treated with SEMS for malignant GI and biliary obstructions were recruited from nine Norwegian hospitals. Patients and physicians assessed symptoms independently at the time of treatment and after 2 weeks using the European Organisation for Research and Treatment of Cancer (EORTC) QLQ-C30 questionnaire supplemented with specific questions related to obstruction. Results A total of 162 patients (99 males; median age = 72 years) completed both assessments and were included in the study. A significant improvement in the mean global health score was observed after 2 weeks (from 9 to 18 on a 0-100 scale, P < 0.03) for all stent locations. Both patients and physicians reported a significant reduction in all obstruction-related symptoms (>20 on the 0-100 scale, P < 0.006) after SEMS treatment. The physicians reported a larger mean improvement in symptoms than did the patients, mainly because they reported more severe symptoms before treatment. Conclusion SEMS treatment is effective in relieving symptoms of malignant GI and biliary obstruction, as reported by patients and physicians. The physicians, however, reported a larger reduction in obstructive symptoms than did the patients. A prospective assessment of patient-reported outcomes is important in evaluating SEMS treatment.
Keywords: Stents · Palliative care · Gastrointestinal cancer · Biliary tract neoplasm · Outcome assessment · Quality of life

Palliative treatment with self-expanding metal stents (SEMS) is regarded as a safe and highly effective procedure for relief of symptoms caused by malignant obstructions of the gastrointestinal (GI) tract [1][2][3][4][5][6][7][8]. Most studies concerning treatment with SEMS, whether randomized, comparative, or merely descriptive, focus on technical success (e.g., correct deployment of the stent), clinical success (restored passage), procedure-related complications, and cost-effectiveness. Typically, the clinical outcomes of SEMS treatment have been evaluated by the physician [9]; only a few prospective studies reported repeated symptom assessments by the patient [10][11][12][13][14][15][16]. Since patients' and physicians' ratings of treatment effects do not always correspond well, palliative treatment efforts such as SEMS for malignant GI obstructions should be evaluated by individual outcome measures reported by the patients as well as by the physicians [17][18][19][20][21][22].
The main objective of this multicenter study was to use patient-reported outcomes to evaluate the treatment effects of SEMS on quality of life (QoL) and symptoms related to malignant GI and biliary obstruction. An additional aim of the study was to compare patient- and physician-reported evaluations of the treatment's effects.
Materials and methods
Nine Norwegian hospitals performing SEMS treatment for GI obstructions participated in the present study. The inclusion period was from November 2006 to April 2008. Patients were eligible for consecutive inclusion according to the following criteria: (1) symptoms related to malignant GI obstruction, (2) indication for treatment with all types of metal stents established, (3) fluency in oral and written Norwegian, and (4) cognitive capability to complete the questionnaires. Patients who received their colonic stent as a "bridge to surgery" (i.e., to relieve the acute obstruction prior to elective surgery) and underwent bowel resection within 2 weeks after stent placement were not asked to complete the questionnaire after 2 weeks and were thus not included in the analyses. The study was approved by the Regional Committee for Medical Research Ethics in Southern Norway and the Data Protection Supervisor at Oslo University Hospital, Ullevål. All patients received oral and written information about the study. Written informed consent was obtained from all participants.
Stent procedure
All stents were deployed endoscopically under fluoroscopic guidance. Both covered and uncovered stents were used for esophageal and biliary stent treatment, while uncovered stents were used in other locations.
Assessment of patient-reported outcomes
The European Organisation for Research and Treatment of Cancer Core Quality of Life Questionnaire, EORTC QLQ-C30, version 3.0 [23], was used to assess patient-reported outcomes, supplemented with selected questions from other relevant EORTC organ- and disease-specific modules (http://www.eortc.be/). The EORTC QLQ-C30 is a cancer-specific 30-item self-reporting questionnaire consisting of both multi-item scales and single-item measures. These include five functional scales (i.e., physical, role, cognitive, emotional, and social), three symptom scales (i.e., fatigue, nausea/vomiting, and pain), and six single items (i.e., dyspnea, insomnia, appetite loss, constipation, diarrhea, and financial problems), as well as two questions where the patients assessed their overall health and QoL on a scale from 1 to 7. Combining these two scores resulted in a global health score.
EORTC recommends that organ-specific modules be used in addition to the core questionnaire to capture diagnosis- or treatment-specific problems. For the purpose of the present study, a selection of questions was made from the relevant organ-specific modules to reduce the respondent's burden and to focus on specific problems pertaining to the different diagnostic or stent groups. Questions to be answered by the patients receiving esophageal, biliary, and colonic stents were selected from the stomach module EORTC QLQ-STO22 [24], the pancreatic module EORTC QLQ-PAN26 [25], and the colorectal module EORTC QLQ-CR38 [26], respectively (Table 2). Patients who received gastroduodenal stents did not answer any additional questions, as their main obstruction-related symptoms, nausea and vomiting, were specifically addressed by the core questionnaire. Higher scores on the symptom scales and single items from the core questionnaire and the organ-specific modules indicate more severe symptoms, while higher scores on the functional scales indicate better functioning. All items were to be answered on an ordinal scale ranging from 1 ("Not at all") to 4 ("Very much"), except for the two modified visual analog scales assessing global health and QoL; they ranged from 1 to 7. The time frame was the past 7 days. Scale and item scores were transformed into a continuous scale from 0 to 100, as described in the EORTC Scoring Manual [27]. A mean score difference of 5-10 is usually regarded as a small but clinically noticeable change for the patients, a change of 10-20 as moderate, and >20 as a large clinical change [28,29].
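For concreteness, the linear transformation described in the EORTC Scoring Manual can be sketched as follows (a minimal illustration; the function names and the imputation shorthand are ours, not the study's own code):

```python
# Minimal sketch of the EORTC QLQ-C30 linear score transformation [27];
# items are coded 1-4 (1-7 for the two global health/QoL items).
from statistics import mean

def raw_score(items):
    """Mean of the answered items in a scale; averaging only the answered
    items is equivalent to imputing missing items with the scale mean."""
    answered = [i for i in items if i is not None]
    if len(answered) * 2 < len(items):  # the manual's "half-rule"
        raise ValueError("Too many missing items to score this scale")
    return mean(answered)

def symptom_score(items, item_range=3):
    """Symptom scales/items: higher transformed score = more severe symptoms."""
    return (raw_score(items) - 1) / item_range * 100

def functional_score(items, item_range=3):
    """Functional scales: higher transformed score = better functioning."""
    return (1 - (raw_score(items) - 1) / item_range) * 100

def global_health_score(items):
    """The two 1-7 global health/QoL items combined into one 0-100 score."""
    return (raw_score(items) - 1) / 6 * 100

# Example: answering 'Quite a bit' (3) and 'Very much' (4) on a two-item
# symptom scale gives (3.5 - 1) / 3 * 100 = 83.3 (severe symptoms).
print(round(symptom_score([3, 4]), 1))  # 83.3
```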
Administration of questionnaires
All assessments were performed twice, at inclusion (−2 to +1 day before/after the procedure) and 2 weeks after treatment. The questionnaire was administered to the study participants upon admission by the treating physician or a study nurse. The same questionnaire was given to the patients when leaving the hospital. The patients were instructed to complete the second questionnaire 2 weeks after stent treatment and return it by mail. The 2-week time span between assessments was chosen to reach the maximum effect of the stent treatment and reduce the impact of disease progression. To reduce the influence of recall bias, the patients had to complete the initial questionnaire no later than the day after the procedure and the second questionnaire no later than 3 weeks after treatment. The physicians assessed the same organ-specific symptoms at inclusion and performed the second assessment at hospital discharge, or 2 weeks after stent treatment if the patient was still hospitalized. The same physician was responsible for the before and after assessments of symptoms.
Statistical analysis
Power calculations were based on a mean change of 10 with a standard deviation (SD) of 15 in the global health score, with 90% power and a 5% level of significance, which yielded a sample size of 26 patients in each of the treatment groups for the four stent locations. The Wilcoxon signed-rank test with a 5% significance level was used when evaluating changes in symptoms before and after treatment. Statistical analyses were performed using SPSS 16.0 (SPSS, Inc., Chicago, IL).
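As a cross-check of the reported group size, the calculation can be reproduced with a standard paired t-test power analysis (a sketch only; the use of statsmodels here is our assumption, not the study's software):

```python
# Sketch reproducing the reported sample-size calculation for a paired design:
# detect a mean change of 10 (SD 15) with 90% power at a two-sided 5% level.
import math
from statsmodels.stats.power import TTestPower

effect_size = 10 / 15  # standardized mean change (Cohen's d ~ 0.67)
n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05,
                             power=0.90, alternative="two-sided")
print(math.ceil(n))  # -> 26 patients per stent-location group
```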
Patient characteristics
A total of 273 patients were eligible for inclusion in the study, varying from 2 to 105 patients at the nine participating centers. Two hundred thirty-eight (87%) patients completed the questionnaire prior to the stent procedure, and 162 (68%) of these completed both questionnaires. Twenty-seven patients did not return the second form for unknown reasons (Fig. 1). Ninety-nine males and 63 females with a median age of 72 years were included. Clinical and demographic characteristics are given in Table 1. The most frequent diagnoses were cancer of the colon and pancreas. Of the 18 patients with gastric cancer who received stents, eight had obstructions located in the cardia ventriculi and were treated with esophageal stents. Ten patients had gastric outlet obstruction and were treated with duodenal stents.
Patient-reported outcomes
Patients reported a clinically and statistically significant reduction in all obstruction-related symptoms in all four stent locations, with a mean reduction of at least 20 (P < 0.02). Furthermore, a clinically and statistically significant improvement in global health function (P < 0.03) was observed in all treatment groups. Additionally, various other symptoms improved significantly: nausea/vomiting (colonic and biliary), appetite loss (biliary and gastroduodenal), pain (gastroduodenal and colonic), and constipation (colonic) (Tables 2, 3, 4, and 5). The total numbers of patients experiencing symptomatic improvement ≥20, improvement <20, or worsening are reported in Table 6.
The scores from patients who completed the pretreatment questionnaire before treatment were similar to those from patients who completed it the day after treatment. Sixty-four patients (40%) completed the first assessment the day after stent insertion because of emergency stent treatment or pronounced symptoms before treatment. The rate of missing items was low, 0.9% and 1.0% in the two assessments, respectively. For the multi-item scales, missing values were assigned according to a standard scoring procedure (EORTC Scoring Manual [27]) by replacing missing items with the scale mean values, provided that half or more of the scale items were completed.
Comparison of symptoms evaluated by patients and physicians
When comparing the patients' and physicians' scores, a significant difference in the answers to six of seven questions before treatment was found, whereby the physicians rated symptoms as more pronounced than did the patients (P < 0.02). However, when comparing the posttreatment evaluations, the scores tended to be similar (a statistically significant difference was found for two questions, see Table 7). When evaluating the clinical effect as an improvement in obstructive symptoms, the physicians reported a larger mean reduction in obstructive symptoms and, thus, a better treatment effect as compared to the patients.
The median hospital stay was 4 days (range = 0-64). Therefore, physicians completed their second symptom assessment <7 days after the first registration in 81% (131/162) of the cases. The patients completed their second assessment of symptoms after 2 weeks (assessing symptoms between days 7 and 14).
Short-term outcome/complications
During the first week, 12 of 162 patients (7%) experienced complications: three nonfunctional stents, two stent migrations, two bleeding episodes, two episodes of cholangitis, one tracheoesophageal fistula, one stent obstruction by food impaction, and one stent obstruction by tumor overgrowth. There was no procedure-related mortality.
Discussion
This study is one of very few that evaluate the symptomatic effect of palliative GI stenting based on patient-reported outcomes. Furthermore, to our knowledge, it is the first to compare patients' and physicians' assessments of the symptomatic effect of SEMS treatment. The present study demonstrates that the majority of patients found treatment with SEMS effective in relieving obstructive symptoms in all GI tract locations. Additionally, patients reported a significant clinical improvement in global health after 2 weeks for all four stent locations. The physicians tended to evaluate pretreatment symptoms as more severe than did the patients. The postprocedure scores were more similar.
This study shows that treatment with SEMS is effective in relieving symptoms related to malignant GI obstruction. Our conclusion is strengthened by the fact that patients in this study were treated at small local centers, not large expert centers. SEMS as a palliative principle seems to be effective independent of location. With regard to the symptomatic effect on esophageal and gastric outlet obstructions, our findings are in accordance with previous studies.
Additionally, we were able to find significantly improved general well-being and better QoL, which most previous studies had not been able to document [10,12]. A study of colonic obstruction using patient-reported outcomes ended early and was therefore not able to draw a conclusion [30]. That physicians' and patients' perceptions of symptoms differ is in line with previous studies in palliative medicine that compared physicians and patients, although underestimation of patients' symptoms by physicians is more common [17][18][19][20][21]. We do not know the reasons for the discrepancies in scoring found in our study, but one plausible explanation may be the enthusiasm of the physicians performing these procedures and their need to justify the indication. The study was not designed to clarify this question. The physicians completed the second questionnaire earlier than the patients (earlier than day 7 for 81% of the patients). The study protocol did not include a scheduled follow-up after stent treatment. The patients were often severely ill, with long travel distances to the hospital, and an extra hospital visit to allow the physician to perform a symptom assessment was hence not included in the follow-up. As the hospital stay related to the stent procedure usually was of short duration, the physicians' scoring often had to be performed at discharge from the hospital. However, it is likely that the questionnaire's 1-week time frame reduced the influence of the discrepancy in when the physicians and patients did the second assessment.
Although there were significant improvements for the group in total, there was interindividual variation, and some patients did not experience improvement in their obstructive symptoms. A review of the medical charts revealed that absence of symptomatic improvement often could be explained by dysfunctional stents, migrations, infections, pain, or intercurrent diseases during the first 2 weeks. This represented a limited number of patients, and separate subanalyses were not performed. Furthermore, ongoing treatment with other modalities (e.g., chemotherapy) can potentially influence symptom scoring negatively. We found no significant difference in the scores of the 25 patients who received chemo- and/or radiation therapy during the assessment period. Our study did not identify subgroups of patients that regularly did not benefit from SEMS treatment and, therefore, should have received alternative palliative treatment. This might be due to the relatively low number of patients included.
Seventy-six patients completed only the first questionnaire. However, as shown in Fig. 1, only 27 patients failed to complete the second questionnaire for unknown reasons. It is possible that these patients did not experience the expected effect of the stent treatment and that this lack of data could represent a selection bias. However, we know that these 27 patients did not differ in age, pretreatment global health, or survival from the 162 responders. Three of these 27 patients experienced dysfunctional stents and needed reinterventions during the first 2 weeks, which might have influenced their opinion of stent function. Three patients experienced cholangitis and/or pancreatitis immediately after biliary stenting but had functional stents. For the remaining 21 of the 27 patients, there was not sufficient information in their medical records to explain why they did not return their second questionnaire.
Conclusion
SEMS treatment is effective in relieving symptoms of malignant GI and biliary obstruction, according to assessment by both patients and physicians. This study demonstrates a significant difference in how physicians and patients evaluate treatment effects, and thereby the importance of taking patient-reported outcomes into account when evaluating clinical palliative interventions. Future studies evaluating SEMS treatment should include prospective assessment of patient-reported outcomes to increase our knowledge about the efficacy of this treatment.
Complex Adaptive Blockchain Governance
The blockchain revolution upholds the decentralizing ideal of "control nothing." It is natural that such a pursuit would face issues of governance that demand reasonable control; control that is both operational as well as adaptive in nature. Eliminating middlemen and handing over controls to a trusted system of trustless agents does not thereby bestow trust across time. This is especially true when relentless change is the order of the day. Issues of governance arise when blockchain systems (especially those with embedded smart contracts) are forced to operate increasingly far from their original intent. Smart contracts need governance when beset with the problem of the unknown-unknowns. Guided by the axiomatic approach, this paper looks at the paradoxical issue of blockchain governance from a Complex Adaptive Systems (CAS) perspective that helps frame the fundamental problem of decentralization. The objective is to solve the blockchain governance kernel design problem. Real-life examples are used to illustrate the findings.
Introduction
Consider C.P. Snow's proposition of the growing chasm between "The Two Cultures" [1]; i.e., between the sciences (which include the social sciences) and the humanities, but now from a design perspective. Design, as a discipline that deals with human artifacts (be they social, technical, or socio-technical), has to bridge Snow's chasm in every single instance of design. This is because meaning and purpose, i.e., the root FRs that mandate any given design, ultimately reside in the human-centric humanities, which include disciplines such as languages, history, philosophy, the arts, and the law [2][3][4]. Thus, for example, the design of an equitable governance system is ultimately rooted in the realm of law and justice. The research reported herein integrates across both of the above cultures in order to make explicit the kernel governance design in the context of the ongoing blockchain revolution.
There is a fundamental difference in the requisite bridging-over that is necessary when considering technical versus social systems. Technical system designers have well-developed disciplines such as cognitive engineering, business analysis, ergonomics, and others to help establish the preamble and move the design activity into the technical realm. In contrast, social system design is barely a discipline. There is no similar preamble body of knowledge that helps translate the social system FRs into the language of the social sciences. There are no rich traditions, no well-accepted bodies of knowledge as to how design operates in the social realm. For example, it is only recently that the nascent concept of stigmergy is helping disambiguate Adam Smith's economy-wide organizing principle of the "invisible hand" [5]. A disciplined approach to design, therefore, is more evident in the technical as compared to the social/organizational realms. The design of technological artifacts is more amenable to principled structuring as compared to the design of social artifacts. As Prof. Suh has noted, the ad-hoc approach is the accepted norm in the case of social artifacts [2]: "In many organizations well-defined FRs are often lacking or not completely understood by everyone in the organization, and the organizational structure does not have specific DPs to satisfy FRs. The job of the management is to define FRs and establish DPs, but this has been done ad hoc, very much as in other fields of design." While technical system designs have come a long way since the 1990s when the above critique was first made, social system designs remain as is. Heretofore, the social and the technical have existed side-by-side, content to drift apart in their separately evolving cultures. However, now we are entering the realm of massively fine-grained socio-technical systems such as the world of IoT (Internet of Things). The above divide across these two cultures is therefore not sustainable. The odd marriage between the two cultures does not scale; instead, it has the potential to result in large-scale, out-of-balance socio-technical system failures. Every system has a certain capacity for change beyond which it starts to show pathologies. Current social systems are ill-prepared to receive dramatic influxes of technology such as the promise of IoT [6].
Design of the governance kernel of socio-technical systems is a challenge of the first order. This challenge is two-fold: governance has to operate both at the agent as well as at the institutional level. Firstly, the design of social systems is more nuanced than the design of a purely technical system. This is because natural laws are easier to discern than social ones. For example, the ideal of justice enshrined in the rule of law that governs a given social order is not as apparent as the force of gravity enshrined in Newton's law of gravity. Natural laws govern the physical sciences; there is, therefore, no other governance needed. In contrast, design (be it technical, social, or socio-technical) requires governance, since it needs to account for the human element. The bounded rationality problem conditions human decision-making. While natural laws are universal in scope, human decision-making is constrained to operate within the limited knowledge scope of individuals and groups. Humans have different knowledge bases and perceptions. These differences inevitably lead to misunderstandings and conflicts that then need governance at the agent level.
A second issue that social system designers face is that of institutionalized injustice. Human agents are endowed with free will. As we just discussed, when bounded rationality meets free will, human agents may make erroneous decisions. However, free will is also a significant reservoir of corrective governance forces that are unleashed when agents face institutional-level injustices. Indeed, the exercise of free will (both individually as well as collectively) has the power to change the course of nations and organizations. In other words, organizational units do not have the power to unilaterally dictate commands that violate the moral code. Given the painful history of how humans have socially engineered their way into political/criminal dominance over others, and then institutionalized this dominance, it is crucial to safeguard against such abuses via proper governance. Moreover, with the advent of massive socio-technical systems, the prospective danger of such institutionalized dominance via the exploitation of cognitive biases is abundantly clear and present. Cultures may exhibit differences regarding their respective tolerance for injustice; but eventually, social elements rebel and overthrow nodes of injustice. This, of course, is a costly process of redress that a proper governance model can help address upfront. Thus, if properly designed, elements of governance can help safeguard against acts of injustice at the institutional level.
Governance operates at two levels: the agent and the institutional. Governance at the agent level primarily needs to safeguard against bounded rationality; governance at the institutional level needs to safeguard against institutionalized injustice. As discussed in Section 7 on CAS, these two levels of governance bifurcate into the α and β levels of governance, respectively. Given the apparent differences in size and operational scale, making course-corrections at the institutional level is far more demanding than doing the same at the agent level. It is akin to maneuvering and changing the course of a massive oil tanker versus an agile sports car. Even so, and as we shall see, the kernel design for both of these governance systems may be obtained via a surprisingly similar lower-triangle arrangement of heterarchical hierarchies. This is because, in both cases, governance ultimately does reduce to the problem of unknown-unknowns in the context of knowledge architectures that operate either at the agent or the institutional level. In either case, the addition of heterarchic controls may, therefore, help solve the governance issue.
In essence, this is the promise of the blockchain revolution (i.e., the addition of decentralized heterarchic controls). The blockchain technology was initially created as a support system for bitcoin transactions. However, it is now turning out to have far-reaching, economy-wide implications for conducting peer-to-peer transactions absent any gatekeeper middlemen. Governance in a knowledge economy implies the establishment of appropriate regulatory nodes that help keep open the flow of information and knowledge with proper regard for privacy and property-right concerns.
Note, however, that the design of a CAS system (that is emergent, self-organizing, and adaptive) is a step removed from the traditional design of systems that are predominantly non-self-organizing. Here the design is left incomplete at a meta-level; the final stages are orchestrated in a self-actualizing bootstrap from its inchoate embryonic state to its fully actualized adult form. The closest exemplars of self-organization may be found in the realm of biological entities that have brought forth their ever-adaptive, ever-evolving designs via genetic trial and error spanning immense temporal expanses. Trapped within the sparse coils of the DNA (which consists of about 1.5 GB of DVD-sized data), one may witness the essence of the Information Axiom operating in a self-organizing context. Herein, the genetic code orchestrates the embryonic self-articulation and development of a complex living entity (consisting of about 150 zettabytes of data and requiring about 30 Manhattan-size datacenters to merely store) that can struggle, adapt, and thrive in heretofore novel and unknown environments with ever-changing risks and opportunities. Biological as well as blockchain-based socio-technical systems may be studied under the rubric of CAS (Section 7) to help elicit issues of decentralization, self-organization, emergence, etc.
Section 2 surveys the relevant literature on governance as it relates to the blockchain technology. Section 3 highlights the issue of trust amongst trustless agents, which fundamentally undergirds the blockchain way. Section 4 studies the rise of complexity as a driving force for creating new organizational structures. Section 5 summarizes research on organizational design. Section 6 presents the phenomenon of stigmergy and stigmergic gearings that help structure a CAS. Section 7 describes the CAS system, i.e., the basic as well as the iterative. Section 8 establishes the concept of decentralization (that underpins much of the blockchain technology) in terms of CAS. Section 9 defines the role of governance. Section 10 looks at the phenomenon of Emergence and Self-Organization in the context of governance. Section 11 explores heterarchies and hierarchies in the context of governance.
Section 12 helps reduce the problem of governance (both agent level as well as institutional) to the heterarchical hierarchy of knowledge architectures. Section 13 examines governance regarding the unknown-unknowns. Section 14 frames the Axiomatic Design framework for CAS. Section 15 structures the basic blockchain design from an axiomatic perspective. Section 16 extends this design to include smart contracts. Section 17 discusses the kernel blockchain governance design. Section 18 concludes and wraps up the current work.
Literature Review
Governance of digital assets takes on a whole new meaning when almost anything and everything can be tokenized and traded by quasi-anonymous agents, which include machines and IoTs.
In [7], Campbell-Verduyn provides a broad-brush introduction to the issue of global governance in the context of the blockchain. Blockchain-based trading platforms started showing the cracks in the operative governance framework (or the lack of it) when enterprises such as Mt. Gox and Silk Road started surfacing the underlying "deep and dark web" side of the new technology. The key issue raised is whether the blockchain technology "gives rise to new governance problems and pathologies" [7]. Every shift in the technological front leaves many who are vulnerable to new information asymmetries. If so, what is the role of governance in mitigating these asymmetries while allowing innovation to proceed forward? But hastily drawn governance rules could very well kill the golden goose that may just turn out to be immensely transformative and liberating. Also, as governance researchers clearly understand, blindly subscribing to the Lawrence Lessig formulation of "Code is the Law" [8] is an invitation to enter a Hobbesian Leviathan monolith. It is therefore critical to understand the foundational basis of human trust, and how much of it could be replicated via a trusted system of trustless agents?
The problem of interdisciplinary complexity inherent in blockchain is also echoed in [9], wherein Lopp asserts that "one challenge to understanding bitcoin is that it is a multifaceted cross-disciplinary system that is constantly evolving." In [10], Ehrsam (co-founder of Coinbase) asserts the strategic significance of blockchain governance in that it could be "the largest determinant of our future trajectory as a species" when it potentially gets used to bootstrap powerful AI across the distributed landscape. Crossing over to the humanities end, Ehrsam highlights that it is governance that "keeps communities together and, in turn, gives a token value." He further argues for on-chain governance (i.e., code is law) in terms of its consistency, fairness, and speed of decision-making. He does, however, caution that the Leviathan metasystem could easily get exploited if flaws were to be discovered; also, that it becomes "harder to change once instituted." As a direct rebuttal to Ehrsam's stand on the costs and benefits of on-chain governance, Zamfir (lead developer at Ethereum's Casper protocol) asserts in [11] that blockchain governance (as well as governance in general) cannot "be understood as a design problem." The reason suggested as to why governance falls outside the purview of design is that governance is a process, and processes presumably fall outside the scope of design on account of the dynamics involved. This, of course, is an untenable position in favor of adhocracy; processes can and should be subject to design. Nevertheless, Zamfir's highlighting of the need for adaptiveness when designing governance structures is on target. Furthermore, it dovetails well with the adaptiveness embedded in a CAS architecture. Zamfir also takes issue with Ehrsam's stand on on-chain governance as being "incredibly risky" in inviting automatic upgrades of the governance processes without adequate human due process and oversight. Again, a proper design of the governance kernel ought to make clear the appropriate contexts where one may resort to on-chain versus off-chain governance.
In [12], DuPont forensically analyses the governance failure in the Ethereum-based DAO (Decentralized Autonomous Organization). The DAO promised transparency, efficiency, fairness, and a democratic decision-making process. In just a month, it managed to raise USD 250 million; yet within days of its launch, it suffered a massive "attack" (draining it of USD 35 million) from which it never recovered. The study exposed the "inherent complexity of bringing to life an algorithmic and experimental organizational model." After the attack, three post-attack options were presented:
• Code is Law (on-chain): Let the attack stand.
• Soft Fork (off-chain): Blacklist the attacker's transactions so as to freeze the drained funds.
• Hard Fork (off-chain): Return the funds to the "investors" who willingly participated in the trading platform but felt taken advantage of.
The attacker lost USD 35 million. Ultimately, the DAO, which was supposed to be hands-off and accepting of the "Code is Law" dictum, violated its own governance and did a hard fork, reverting to human governance.
Based on the DAO failure, Voshmgir highlights the problem of the unknown-unknowns in [13]: "While machine consensus can radically reduce bureaucracy, the question of how to deal with unknown unknowns that manifest over time has not yet been resolved."

Trust Amongst Trustless Agents

Traditionally, trust systems (both internal as well as external) are hierarchically orchestrated. Internal hierarchies preside over breaches of trust within an organization. External hierarchies preside over breaches that cross organizational boundaries. Hierarchic governance structures are inherently flawed in the sense that the top nodes can be compromised over time. Hence Lord Acton's dictum in [14] that "power tends to corrupt, and absolute power corrupts absolutely." This same sentiment was expressed two millennia ago by the Roman poet Juvenal when he coined the phrase [15]: Sed quis custodiet ipsos custodes? (i.e., but who will guard the guardians?). Going further back into antiquity, similar suggestions may be discerned in Plato's The Republic.
Having established an authoritarian top-down hierarchical social order, Socrates is here first shown to raise the problem of corruption at the top nodes; he then appeals to self-governance in order to help resolve this conflict; i.e., the self-indulgent notion that the "best man has within himself the divine governing principle" [16]. However, within The Republic itself, there are passages [17] that indicate that Plato looked down upon such a self-deceptive, self-referential design construct as a contemptible "noble lie" (γενναῖον ψεῦδος; i.e., a lie or wrong opinion about the true origins).
Fundamentally, there is no escaping the fact that all hierarchical systems lack sufficient design parameters to help resolve the problem of the top nodes going rogue. It was founding father James Madison who first recognized in [18] the fatal flaw in a purely hierarchical design (symbolized as |H henceforth, as in [19]). Instead, he proposed the now institutionalized heterarchic governance (symbolized as |h henceforth, as in [19]) within the US federal government; i.e., a model of checks and balances along with a clear separation of powers across three co-equal hierarchies in order to help make sure that the top nodes always have external oversight. It therefore may be argued that the ongoing US experiment in self-governance is indeed a lower-triangle decoupled design, as the coupling via self-referentiality so evident in the Platonic logic has now been eliminated.
The blockchain model is likewise potentially revolutionary in scope in that it has within it the ability to democratize and make ubiquitously available such heterarchic controls across all levels of any given socio-technical system, not just at the highest echelons of the US federal government.
Rising Socio-Technical Complexity
In [20], Prof. Bar-Yam suggests that society at large is shifting away from deeply hierarchic models and in favor of more and more decentralized, heterarchic or mixed-mode |h-|H control. This is because distributed governance/control has a larger capacity for dealing with increasing socio-technical complexity. One may witness this in the academic realm, where interdisciplinarity is on the rise, and (traditionally) hierarchical disciplinary group boundaries are becoming porous.
Heterarchic linkages add an extra burden in the realm of governance for the simple reason that heterarchies do not play nice; they instead jostle for dominance. Here, governance involves sense-making across domains and disciplines. And in order to be coherent and make sense, one of the erstwhile co-equals eventually starts to dominate the heterarchic complex. For example, in the case of the US federal government, historically we do have three co-equal branches. However, over time, the judiciary (given its ability to explicate the governance narrative and create consistency across the total political span) has carved out a dominant long-term role; the executive (given its protagonist role in the near term) similarly dominates the contemporary stage; and much of the legislative branch stands reduced in stature. The executive and judiciary have usurped much of the lawmaking ability of the legislative at both the consequential coarse grain as well as the fine grain. The judiciary is not per se at fault; instead, the fault lies with the other two, as they lack an explicit grasp of a missing FR, namely the need for historical consistency similar to judicial review. This is indeed a flaw in the founding design, for there is a hidden sense-making functional requirement that has been left unaddressed. Each of the co-equal branches ought to have had an ongoing sense-making role. Contrary to Emerson [21], consistency is not "the hobgoblin of little minds"; it in fact is what provides the directive thrust. Court precedents are not easily overturned; established case law sets the stage for what follows. Such binding sense-making is missing in the other two co-equal branches.
Sense-making is intimately related to the nature and shape of human knowledge. As discussed in Section 12, human knowledge is heterarchically hierarchical. Human knowledge dynamics, therefore, have a key role in the evolution of governance. Governance ultimately refers to the over-arching body of knowledge that provides guidance wherever conflicts arise. While heterarchical contributions enrich the growing corpus, it is the hierarchical aspect of human knowledge that is responsible for sense-making. Sense-making is fundamentally hierarchical in nature. Absent the distilling of such knowledge hierarchies, information fails to make sense. The larger the number of agents engaged in abstract, high-order sense-making, the greater the chance that the engagement will devolve into heterarchic nonsense. Such is the fundamental weakness that the legislative faces. While it is heterarchically able to bring multiple points of view to the governance table, it is unable to integrate these into coherent, hierarchically-sound agreements. Here, for example, is a report [22] on the attempt merely to determine the number of federal criminal laws at hand: "In 1982, while at the Justice Department, Mr. Gainer oversaw what still stands as the most comprehensive attempt to tote up a number. The effort came as part of a long and ultimately failed campaign to persuade Congress to revise the criminal code, which by the 1980s was scattered among 50 titles and 23,000 pages of federal law.
Justice Department lawyers undertook 'the laborious counting' of the scattered statutes 'for the express purpose of exposing the idiocy' of the system, said Mr. Gainer." The consequences of such accountability failures in the governance corpus are dire. For example, the above report summarizes the legislative failure of sense-making with the following comment [22] by law professor John Baker: "There is no one in the United States over the age of 18 who cannot be indicted for some federal crime. That is not an exaggeration."

Sense-making is especially relevant in the modern context of vastly expanded and heterarchically-rich socio-technical systems, such as the coming world of IoTs, where machines directly transact with other machines at an unprecedented scale. The speed and scale of modern socio-technical operations make it abundantly clear that humans are increasingly left out of the decision loop, as it is beyond our human comprehension. Fundamentally, humans are conceptual entities. The new era of human/machine symbiotics [23] we are now entering ultimately has to make sense. Governance is fundamentally rooted in sense-making. It is this that is at stake when dealing with technologies of trust across the human/machine divide. And unless we are careful, just as was the case with the judiciary dominating the sense-making role and, by default, taking up the "first amongst equals" position, machines may likewise be deliberately or accidentally programmed to serenade us with convincing but deceptive "noble lies" that exploit our individual and collective cognitive biases and weaknesses. When cast in this sense, the design of an equitable governance structure for the coming blockchain way of organizing our socio-technical systems may prove to be of existential import.
It is worth noting here that Cynefin [24,25] is also about sense-making in an interdisciplinary setting. Indeed, as the Welsh word Cynefin suggests, the emphasis is on multiple belongings, i.e., the multiple domains that mash up to create the unwieldiness of modern complexity.
Complex Socio-Technical Organizational Design
Prof. Banathy was a pioneer in advocating for a disciplined design of social systems. He was of the opinion that we are now squarely in the "postindustrial information/knowledge era" [3]. Consequently, organizational designs that arose in the industrial machine age do not scale, given the exponential rise in the "speed, intensity, and complexity of change." Compared to design initiatives in other disciplines, he was painfully aware of the lack of attention regarding social-systems design. Prof. Banathy held that when considering social systems design, there is a tangible "shift from product thinking to process thinking" [3]. In direct contrast to this view was Ethereum's lead developer, Mr. Zamfir, who dismissed processes as being outside the purview of design.
Social systems have an overarching "concern for justice" [3]. Ethics, therefore, plays a pre-eminent role when designing social systems. The system ought to be equitable and just to all parties concerned, be they central or peripheral to the activities subsumed. In other words, governance plays a central role in all social system designs. The proper design of the governance unit for a blockchain-based social system ought to help establish clear boundaries that, when breached, may trigger appropriate smart contracts and/or legal recourse. Blockchain governance is therefore not independent and free-floating outside the appropriate societal moral code. What is, however, a departure from the norm is the fact that blockchain-based governance structures can adjudicate many conflict scenarios with machine-like precision and efficiency. It is as if vast parts of the social order may now be governed by benevolent arbitration judges that reside in the form of smart contracts coded up within the machine.
Social system designers are often faced with problems that are "anything but well defined" [3]. Such problems may have inconsistent FRs, inconsistent constraint sets, upstream designs that are coupled, a problem space that is dynamic across time, etc. This is to be expected, since we are dealing with inconsistent ontologies across far-flung interdisciplinary fronts. While keeping the holistic view, design may therefore have to proceed in small iterative steps across the FR-DP divide. When inconsistencies are detected, this may lead to repeated back-tracks up the design hierarchy.
Social systems are primarily designed towards the benevolent nurture, growth, and development of the human potential. A properly designed governance unit helps adjudicate equity throughout the system, both in its homeostatic phase as well as when the system adapts and evolves beyond its stable stage; i.e., "self-organization incorporates self-transcendence, the creative reaching out of a system beyond its boundaries" [3]. CAS is capable of modeling such shape-shifting behaviors. Díaz and Olaya highlight the role of emergence in [4]: "Human beings co-design the social systems that they form, this is why those designs might be intentional up to some point but they are also emergent, dynamic, incomplete, unpredictable, self-organizing, evolutionary and always 'in the making.'" In 2014, Prof. Norman and others put forth the DesignX framework [26] for tackling the design of complex socio-technical systems. It had nine problem categories within its scope. These categories include some of the problem areas mentioned above, such as cognitive biases, bounded rationality, interdisciplinarity, requirements and constraints that do not always cohere (but can periodically or chaotically change), precedent designs that are inherently coupled, non-linearity in the element-to-element interactions, and causality that operates across multiple scales and long/unpredictable latencies. Armed with these complexities, Prof. Norman critiqued the Axiomatic Framework along the following lines [26]: "With sociotechnical systems, it is seldom possible to follow the Independence Axiom: two-way or even n-way interdependencies are common. Moreover, these interdependencies are often unknown, discovered only after the fact." In other words, the design matrix (that tracks FR-DP couplings two-by-two) is inadequate when dealing with FR-DP clusters that may not compose into a static, well-integrated, two-dimensional matrix. The example referred to in [26] that illustrates this phenomenon pertains to the design of the treatment schedule in an elderly healthcare service, where patients often present multiple ailment complexes and severe side-effects from earlier treatments. What usually starts out as a single-organ failure quickly devolves into a chaotic complex of treatment and care that spans multiple specialties [26]: "When patients have multiple chronic conditions, a common occurrence in the elderly, there are numerous different professionals involved in the treatment, with complex interconnections among them (including, in some cases, a lack of communication). These problems defy easy analysis."
The above set of nine problem categories represents some of the fundamental design challenges of the modern world, and there are no easy answers. One proffered easy answer, however, is the "muddling through" approach advocated in the DesignX framework. This approach advocates small, incremental steps that, in principle, refuse to consider the problem as a whole [26]: "This approach requires a different design philosophy than might be used when considering the project as a whole. Now, the design must be modular, with multiple small, relatively independent parts, incremental changes that can be implemented, and linkages that are designed for flexibility." Indeed, if such a "muddling through" approach were to be institutionalized in medical care, it would be cause for alarm. Furthermore, such an approach fails to take advantage of some of the modern tools at our disposal, such as stigmergy (Section 6), Axiomatic Design [27], Cynefin [24,28], Agent-Based Modeling (ABM) [29], Data Science [30], and others. Each of these approaches attempts to learn workable heuristics that are holistic in scope while also attempting to meet the current expediency. These are more responsible approaches than merely "muddling through." Indeed, the muddling-through approach may be considered unnecessarily defeatist in its embrace of the adhocracy philosophy of yesteryear, even as complexities abound.
Even so, Prof. Norman's critique about the inadequacy of the design matrix in tracking complex coupling clusters is well taken and needs to be addressed. Indeed, biological systems are highly coupled. In fact, an uncoupled biological system may legitimately be considered to be dead. Thus, while the design axioms continue to inspire (i.e., the FR-DP mapping could be considered as "form follows function" in the biological realm), the tools used to implement the axiomatic design framework may need to be extended.
For example, the AD/CT extension [31] recognizes the time-dependence of a given design. In other words, the design matrix (along with its couplings) is not static and unchanging across all the operational phases that a given design is faced with. It is, in fact, time-dependent. In this expanded sense, the overall design is an ensemble of appropriately governed designs that are either pre-set or just-in-time improvisations composed of known elements. The underlying time-dependence may be periodic or aperiodic. AD/CT can help streamline and resolve many of the objections raised by Prof. Norman.
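To make the Independence Axiom's coupling categories concrete, a small sketch (ours, not from [31]) shows how a static design matrix can be classified mechanically as uncoupled (diagonal), decoupled (orderable into a lower triangle, as in the Madisonian design discussed earlier), or coupled; under AD/CT, such a check would simply be repeated per operational phase:

```python
import numpy as np

def classify_design_matrix(A):
    """Classify an FR-DP design matrix (nonzero = DP affects FR) as
    'uncoupled' (diagonal), 'decoupled' (triangular under some reordering
    of FRs and DPs), or 'coupled'."""
    A = np.asarray(A) != 0
    n = A.shape[0]
    if A.sum() == n and A.diagonal().all():
        return "uncoupled"
    # Greedy peeling: repeatedly find an FR that depends on exactly one
    # remaining DP; success means a lower-triangular ordering exists.
    rows, cols = set(range(n)), set(range(n))
    while rows:
        pivot = next(
            ((r, next(c for c in cols if A[r, c]))
             for r in rows if sum(A[r, c] for c in cols) == 1),
            None)
        if pivot is None:
            return "coupled"
        rows.discard(pivot[0]); cols.discard(pivot[1])
    return "decoupled"

print(classify_design_matrix([[1, 0], [1, 1]]))  # decoupled (lower triangle)
print(classify_design_matrix([[1, 1], [1, 1]]))  # coupled
```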
Governance
Governance has taken center-stage on account of at least two major global trends, often working at cross purposes [32]:
1. Globalization
2. Democratization
Globalization, as the top-down legacy framework, is being challenged by the bottom-up democratic fervor that has swept across the world stage ever since ubiquitous smart-phones and blogs brought about an overthrow of the highly scripted and staged "noble lie." Now added into the mix is the promise of the blockchain technology that threatens to flatten and shorten the existing value-chains, end-to-end. Consequently, a traditional firm now faces competitive and regulatory challenges along multiple dimensions. A governance misstep on any of the exposed fronts may have serious consequences.
Governance is about collective decision-making, and may be defined as in [32]: Governance is about the rules of collective decision-making in settings where there are a plurality of actors or organisations and where no formal control system can dictate the terms of the relationship between these actors and organisations. Note the emphasis on:
1. Rules
2. The collective scope
3. The decision-making process, and the
4. Lack of formal control systems
While formal rules may easily get coded in smart contracts, informal, on-the-fly, negotiated rules are much harder to codify. Also, the collective scope squarely places modern-day governance on the unwieldy heterarchic side of the ledger, as opposed to the well-behaved hierarchic side that may be safely encoded in smart contracts. The decision-making process itself has to have its own slowly-changing meta-rules as to "who can decide what, and how decision-makers are to be made accountable [32]." But the most challenging aspect of modern-day governance is the realization that really "no one is in charge"; i.e., "no formal control system can dictate the relationships and outcomes". And as we shall see (in Sections 7-8), this aspect of "no formal control" is what makes modern governance a CAS problem. The provenance of decentralized control may be traced at least as far back as the writing of the Old Testament [33]: Go to the ant, you sluggard; consider its ways and be wise! It has no commander, no overseer or ruler, yet it stores its provisions in summer and gathers its food at harvest. The lack of formal control mechanisms is a distinct departure from traditional models of top-down governance.
Stigmergy
So how is it that non-conceptual entities like ants, lacking a central controlling agent, are still able to coordinate and collaborate in vast numbers (i.e., in billions [34])? The answer is stigmergy. Etymologically it is of Greek origin (stigm-oi meaning pricking, signing, marking; and erg-on meaning work), while entomologically it is from a 1959 study by P.-P. Grassé on termites [35]: The stimulation of the workers by the very performances they have achieved is a significant one inducing accurate and adaptable response, and has been named stigmergy. Stigmergy denotes a call to work based on local signs or markings left by self or other agents at some time in the past and during the course of their work (either as a side-effect of the said work or as something in addition to it). These markings aggregate to provide organizational directives available at various levels, both within the environment as well as within and between agents, thus leading to the visual of stigmergic gearings (Fig. 1). Thus, even though no one is controlling, there is nevertheless system-wide control. These gear-trains may or may not all engage simultaneously; instead, they may be asynchronously meshed in different groupings as per some meta-level (αi-βj) logic.
Examples of stigmergy abound in nature. For example, the pheromone markings that an agent ant (αi) leaves behind as it navigates an unknown terrain help it navigate back home instead of being lost. Moreover, if it chances upon a choice food item, these same trails then help recruit other ant compatriots in jointly squirreling the find away back to the nest (Fig. 2).
The pheromone trails that aggregate across the environment are the emergent pattern (βj). Gearing upwards, the pattern-making potential (i.e., the ability to create, sense, and communicate asynchronously via pheromones) must have been evolutionarily written into the genetic constitution of the ant or its predecessors at some remote point in the past.
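A toy simulation helps make the mechanism concrete. The sketch below (a minimal, assumption-laden illustration, not a model of any real colony: the two fixed paths, deposit rule, and evaporation rate are all invented) lets agents choose between paths in proportion to local pheromone markings; shorter trips are reinforced more often, so colony traffic converges on the shorter path with no central controller:

# Alpha-tier ants read only local markings; the beta-tier trail pattern
# emerges from deposits and evaporation. All parameters are illustrative.

import random

lengths   = {"short": 1.0, "long": 2.0}   # relative path lengths
pheromone = {"short": 1.0, "long": 1.0}   # markings left in the environment
EVAPORATION = 0.05                        # the environment's glacial gearing

def choose_path() -> str:
    """Each ant picks a path in proportion to the markings it senses locally."""
    total = sum(pheromone.values())
    weights = [pheromone[p] / total for p in pheromone]
    return random.choices(list(pheromone), weights=weights)[0]

for step in range(2000):
    path = choose_path()
    pheromone[path] += 1.0 / lengths[path]        # deposit: side-effect of the trip
    for p in pheromone:
        pheromone[p] *= (1.0 - EVAPORATION)       # evaporation keeps the pattern adaptive

share = pheromone["short"] / sum(pheromone.values())
print(f"traffic share on short path: {share:.2f}")  # typically approaches 1.0

The positive feedback loop (more marking, more traffic, more marking) is precisely the amplifying gearing discussed below; evaporation supplies the suppressing counter-gearing that prevents permanent lock-in to early noise.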
Stigmergy is thus a two-way street; it is not just that the agent is leaving tell-tale markings in the environment; the environment is also signing back, but at a much more glacial gearing pace. This captures the Α and the Ω, but there could be many more engagements across the span that veer off laterally. Thus, these gear-trains are not just linearly laid out; instead, they constitute fractal networks, and networks-of-networks, that branch off and engage other such gearings. Governance is the sum-total effect of the multi-faceted β-level gear-train network on the evolving α-level agent body.
Stigmergic gear-trains may work to enhance or inhibit a given agent/group-level activity, thus leading to non-linear effects [37]: In more complex self-organising systems, there will be several interlocking positive and negative feedback loops, so that changes in some directions are amplified while changes in other directions are suppressed. These stigmergic gear-trains are also a direct analog of the modern blockchain, except that nature displays abundant varieties of these offerings, each of which has been painstakingly forged in its exacting survival-of-the-fittest workshop. One such example is the immune system, which we discuss in Section 15. Blockchain designers would be wise to study similar nature-inspired chains.
The two-way nature of stigmergy (as mentioned above) does not thereby imply any additional agency embedded in the environment; rather, the environment is also signing back (in an "action/reaction" Newtonian sense) and thus shaping the evolution of the protagonist agent. The longer the stigmergic chain, the greater the need to unitize and embed the ricochet as second nature within the agent. Emergence is the global precipitation of these stigmergic patterns into the environment; submergence is the local embedding of the unitized write-back (i.e., the ricochet) into the constitution of the agent for facilitating future emergence.
As illustrated above, stigmergy helps in the social organization of lower-level life forms such as ants and termites. However, stigmergic ordering is not limited to the lower forms; indeed, much of human organization (or the lack of it) may be attributed to stigmergic successes and failures. For example, the organizing market power captured in Adam Smith's "invisible hand" metaphor may be attributed to stigmergy [5]: "Adam Smith's 'invisible hand' metaphor used to denote the unintended emergent consequences of a multiplicity of individuals' actions, is stigmergic in all but name…" Indeed, the prices of goods are the pheromone markings that help organize the vast reaches of our global economy without explicit direction (i.e., the invisible in the "invisible hand" metaphor). Stigmergy operates as a problem-solving coordination mechanism wherever living entities are faced with problems that are beyond their limited individual ken. Billions of ants and other insects would not be able to coordinate and thrive but for their stigmergic know-how. It is therefore not surprising that we humans have also been engaging in stigmergic rituals without explicitly knowing it. Parunak analyzes a whole slew of such human-level stigmergic processes in [38], including forest trail-formation, highway traffic-flows, democratic elections, document editing, social-media groupings, viral marketing, Google page-ranks, peer-to-peer computing, Amazon-style recommender systems, etc. The blockchain is yet another stigmergic innovation to help coordinate human (as well as human-machine) activities. In each of these systems, the patterns that emerge have significant potential to help organize and scale the human potential.
From a designer's viewpoint, stigmergy is critical in learning to read nature without stumbling on "intelligent design [46]." It is crucial for understanding and deciphering designs in nature towards creating and validating an integrated perspective that spans the natural as well as the artificial. It is the causal thread that connects function and form; i.e., the Functional Requirement (FR) to its Design Parameter (DP) in the natural world. Given the scale, scope, immense time frames, and the vast combinatorial sweep across which nature operates, it behooves us as keen students of design to perk up and listen. Deciphering the submerged building blocks would render much of the biological order transparent and seamless across the artificial/natural divide. This is of significance given that while physics was the dominant science throughout the 20th century, the 21st century is the century of biology. And stigmergy can help build the conceptual bridge across these vastly different sciences. But that requires carefully mapping the underlying gear-train. Or, in the words of Francis Bacon, "nature, to be commanded, must be obeyed [47]." In this context, the axiomatic framework could be fruitfully employed in tracking the myriad gear-trains that nature employs to keep its machinery fine-tuned and humming. For example, from a design-matrix perspective, the aforementioned +/- feedback effects across the gear-train complex could be qualitatively captured in a matrix as shown in Fig. 3 below. Here the design matrix captures the delivery of the FR along the diagonal (denoted X), as well as its off-diagonal +/- control-set gearings that help keep the main FR-DP (along the diagonal) on track. And moving across the Α-Ω gear-train spectrum, the aforementioned unitization would result in the familiar design hierarchy that helps drill down from the macro-view into the micro.
Fig. 3. Design Matrix of Stigmergic Gearings/Hierarchy
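The sketch below renders such a qualitative matrix in code (the FR/DP labels and the placement of the +/- entries are invented for illustration, not taken from Fig. 3); it simply reads off, for each diagonal FR-DP delivery, the off-diagonal control-set gearings that amplify or suppress it:

# 'X' marks the main FR-DP delivery on the diagonal; '+'/'-' mark
# off-diagonal gearings that amplify or suppress that delivery.
# Labels and couplings here are hypothetical.

FRS = ["forage", "recruit", "defend"]
DPS = ["pheromone trail", "trail density", "alarm signal"]

MATRIX = [
    ["X", "",  "-"],
    ["+", "X", ""],
    ["",  "+", "X"],
]

for i, fr in enumerate(FRS):
    gearings = [f"{sign} {DPS[j]}" for j, sign in enumerate(MATRIX[i])
                if sign in ("+", "-")]
    line = f"FR '{fr}' <- DP '{DPS[i]}'"
    if gearings:
        line += f"  (control set: {', '.join(gearings)})"
    print(line)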
Unlike direct and imperial command-and-control in the human realm, stigmergic command is far subtler: indirect, distributed, and democratic. This is because there is no master gear, no single point of control; instead there are very many stigmergic gears distributed across the unwieldy control landscape.
Indirection implies asynchrony as well as a lack of specificity as to who the recipient of the message is. Needless to say, stigmergy is thus intimately connected with governance, as it is via stigmergic gearing that the individuals in a group, as well as the group as a whole, are able to orchestrate a global order despite having "no commander, no overseer or ruler [33]." Note that the stigmergic signal is not a clear, explicit, imperative command for the next agent to do something or other. Instead it is like a mark left on a shared common blackboard for the next agent that comes by, to use as it pleases. For example, the openly displayed stigmergic signal may very well be read by an adversary and then put to nefarious uses against the original agent and/or its group. Stigmergic signals are therefore not an imperative command of what should be done next; instead they are a recording of what has transpired up to the point when the mark was made. The imperative element instead resides in the collective emergences as well as in the agent that picks up the baton. Thus, from an agent-modeling perspective, the "what next" is probabilistic.
In an abstract sense, markets are primarily an expression of the human quest for freedom (via division of labor); i.e., freedom to make better use of our scant resources, especially time. The blockchain technology helps escalate this timeless quest as it unblocks and frees up the agents to engage in many more degrees of freedom and ad-hoc mashups than previously imaginable. If, prior to the advent of the blockchain, the markets operated along highly choreographed pathways, they have since entered a world of jazz-like improvisations that have the potential to topple many of the strait-jacketed, middle-men-controlled pathways. With the elimination of superfluous middlemen, not just organizations but whole industries may be flattened.
Note that lock-in is a problem that stigmergic systems (such as cryptocurrencies) face. As suggested in [48]: …path-based idiosyncrasies may become locked in as material artifacts, institutions, notations, measuring tools, and cultural practices. Under lock-in, stigmergic systems are unable to fork away from the dominant strand on account of lack of followership. The first movers therefore have a strategic advantage which is not easily overcome. In the natural world, diseases, viruses, and bacteria are the heterarchical pathways that nature exploits in order to constantly stress-test the resilience/viability of the total evolving biomass away from stagnant lock-in. Likewise, in the blockchain context one may expect similar heterarchic thrusts and parries across the unguarded/evolving attack surfaces.
While stigmergic coordination is remarkable in its scale and scope, it is not the same as cognitive thought and reasoning. Attempts to anthropomorphize stigmergy and posit the existence of an "extended mind" are in error. There is no "extended mind" agency, no stigmergic cognition, and therefore no basis for stigmergic epistemology as in [5]. Hypothetically speaking, if the extended mind exists in a distributed, asynchronous sense, it must incarnate whenever an agent partakes of or contributes to the growing stigmergic corpus. It is then like the luminance of the lightning bug: it comes in and out of existence. Even so, the fundamental problem is that of will. No executive center animates the extended-mind figment. All will, action, and responsibility remain vested in the underlying agents. This restriction has jurisprudential implications for the blockchain enterprise. Legally speaking, one cannot litigate against the emergent βj-pattern; one may only sue the αi-agents, either individually or collectively [49]. The extended-mind concept is a flawed one; it serves no rational purpose. Thinking along this line could place blockchain founders in legal jeopardy.
Similar restrictions also apply to humans operating stigmergically as a group. There is no organ that can be posited as a repository of group cognition. Stigmergic epistemology [5] is, therefore, an oxymoron. Anthropomorphically positing otherwise is an error. The same is true for the rest of the philosophical train (i.e., there is no validity to stigmergic metaphysics, ethics, politics, aesthetics, etc.). However, what can be studied is the validation of stigmergically-arrived conclusions via cognitive means resident in independent individual entities. Thus, the philosophic underpinnings of stigmergically-arrived truths revert to normal philosophy. There can, therefore, be no stigmergic validation of design or governance. No matter how good a stigmergically-arrived design may be, it still needs independent analysis and validation using the normal tools of human reasoning. Conceptual knowledge therefore has dominance.
Complex Adaptive Systems (CAS)
Professor John H. Holland (1929-2015) is rightly considered the father of genetic algorithms. He also laid the foundational work in the study of CAS. As he described them [50], CASs "are systems that have a large number of components, often called agents that interact and adapt or learn." Holland proposed a two-tiered system as shown in Fig. 4a below. The lower α-tier follows a fast dynamic and is engaged in the flow of resources between diverse agents (αi, grouped in level i) that are also leaving behind stigmergic markings; the upper β-tier follows a slower dynamic that captures and aggregates the stigmergic markings into emergent patterns (βj, grouped in level j), which are then emitted system-wide as stigmergic signals that help the governed agents to self-organize and scale. This two-tier picture is not static, however: each follow-on feedback-loop/iteration may be bifurcating the target population into higher levels of organizational complexity. In each subsequent iteration, the population is composed of bifurcated ensembles of agent-nodes and artifacts (as indicated by the dotted ovals in Fig. 4(b)). Gearing, therefore, iteratively creates self-organization and structure in both of these interacting entity spaces. In each follow-on iteration, the respective number of nodes in each of these dotted ovals is asymptotically decreasing (with allowance for population dynamics) while the dotted ovals proliferate. Agents may, of course, migrate across these boundaries.
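A minimal two-tier sketch (the ring topology, the parameters, and the beacon aggregation rule are all illustrative assumptions, not Holland's model) shows the α↔β gearing in code: agents deposit markings every tick (fast dynamic), while a slower loop periodically aggregates the markings into an emergent pattern that is broadcast back and biases agent movement (self-organization):

# Alpha-tier: agents on a ring deposit markings each tick.
# Beta-tier: every AGGREGATE_EVERY ticks the markings are aggregated
# into an emergent pattern (the densest site), which then biases
# agent movement. All parameters are invented for illustration.

import random

SITES, AGENTS, TICKS, AGGREGATE_EVERY = 30, 50, 300, 10

agents   = [random.randrange(SITES) for _ in range(AGENTS)]
markings = [0.0] * SITES
beacon   = None                                  # the beta-tier emergent pattern

for t in range(TICKS):
    for i, pos in enumerate(agents):             # alpha-tier, fast dynamic
        markings[pos] += 1.0                     # stigmergic marking
        if beacon is not None and random.random() < 0.5:
            step = 1 if (beacon - pos) % SITES <= SITES // 2 else -1
        else:
            step = random.choice([-1, 1])        # undirected wander
        agents[i] = (pos + step) % SITES
    markings = [m * 0.9 for m in markings]       # markings decay
    if t % AGGREGATE_EVERY == 0:                 # beta-tier, slow dynamic
        beacon = max(range(SITES), key=lambda s: markings[s])

near = sum(1 for a in agents
           if min((a - beacon) % SITES, (beacon - a) % SITES) <= 3)
print(f"agents within 3 sites of the emergent beacon: {near}/{AGENTS}")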
While it is true that the β-tier is entrusted with the governance mandate (i.e., the role of control and governance) over a vast network of unwieldy, decentralized agents that populate the α-tier, it is likewise not immune from further restructuring (i.e., higher-order gearings).
In his post "Notes on Blockchain Governance [51]," Buterin (who founded Ethereum) gives partial evidence of the α-β CAS structuring in the blockchain architecture when he writes: Generally speaking, there are two informal models of governance, that I will call the "decision function" view of governance and the "coordination" view of governance. The coordination view is the β-tier view, while the decision-function view pertains to the α-tier agents that act on the coordination signals. Also, as mentioned earlier in Section 7, it is the decision-function view that carries legal liability.
Buterin also gives evidence about the layering (at least in the β-tier) when he writes that "the coordination model of governance…exists in layers [51]."
Centralization/Decentralization & CAS
Here we explore centralization vs. decentralization in the context of CAS. In [52], Paul Baran was the first to outline the distinction (see Fig. 5) between the two: Although one can draw a wide variety of networks, they all factor into two components: centralized (or star) and distributed (or grid or mesh). The centralized network is obviously vulnerable as destruction of a single central node destroys communication between the end stations. In practice, a mixture of star and mesh components is used to form communication networks. When cast in the CAS framework, it is clear that centralized vs. decentralized is relative. Absent the β-tier, α-tier agents are distributed (Fig. 5a). It is only in the governance context of the β-tier that the α-agents may be considered centralized (Fig. 5b) or decentralized (Fig. 5c). Also, there is a natural progression across these three self-organizing architectures. Extending this natural progression, one could easily see that there are far more architectures beyond the three mentioned above. Indeed, the full rigor of the network sciences may be put to use in extending the categories. For example, the mammalian nervous system is a hybrid architecture that incorporates both centralized as well as distributed control. The vocabulary, therefore, needs to be enriched with mathematical rigor. Also, note the loss-factor approach being used in defining these critical concepts; i.e., the "centralized network is obviously vulnerable as destruction of a single central node destroys communication between the end stations." While such vulnerability duly needs to be noted, it unfortunately fails to capture the positive functions that the β-tier provides. It is akin to saying that the stoppage of the heart muscle will result in death, which does not quite describe the positive function the heart muscle performs. This same loss-function approach may be witnessed in the works of other researchers who followed Baran's approach. Nevertheless, the above four preliminary concepts may be summarized as follows:
• Distributed/Non-Distributed: Pertains to the agent-spread in the α-tier, and across its various bifurcations/groupings.
• Centralized/De-Centralized: Pertains to the governance/control-logic in the β-tier (as arranged across its various bifurcations/groupings).
With the β-tier providing the controlling logic, all of the α-agents would appear centralized to the β-tier if indeed there were agency embedded in β. But there is no agency in the β-tier; hence usage of "control" as it pertains to the β-tier is purely anthropomorphic.
Buterin categorizes centralization vs. decentralization along the following three axes [53]:
• Architectural (de)centralization: how many agents is a system made up of? How many of these agents may be lost without loss of function?
• Political (de)centralization: how many agents ultimately control all other agents/infrastructure the system is made up of?
• Logical (de)centralization: how monolithic or dispersed are the underlying data structures/interfaces? How much of this infrastructure may be lost without loss of function?
There is ambiguity in the above classification that may be clarified using the CAS framework:
• Architectural (de)centralization:
o How many agents is a system made up of? Agent population size pertains to the α-tier; it does not directly bear on the issue of centralization vs. decentralization.
o How many of these agents may be lost without loss of function? This refers to system resiliency and therefore does pertain to the governing β-tier. However, Buterin does not explicate how merely counting the α-level agents, along with the fraction that may be lost, suffices to establish a system as centralized vs. decentralized. Indeed, until we look at the controlling patterns that have been established in the β-tier, it is impractical to say whether a given system is architecturally centralized or decentralized. The example that Buterin provides is equally ambiguous: traditional corporations are architecturally centralized as they have just one head office. While this may sound plausible, note that it has nothing to do with the number of agents or the loss function. It is, therefore, a non sequitur.
• Political (de)centralization:
o How many agents ultimately control all other agents/infrastructure the system is made up of? While the element of control is salient in the context of centralization/decentralization, it is an error to think that the controlling logic of a system (which often outlives the agent life-expectancy) is to be found in such short-lived agent entities. Consider the suggested example: traditional corporations are politically centralized (one CEO). This again is erroneous; the political power is indeed being exercised by the transient CEO agent, but the pattern of power assimilates in the office of the CEO, which is a β-tier artifact that outlives any given CEO. So the question is not how many agents control the rest of the organization; instead, it is whether the rules and protocols vested in the office of the CEO help establish a centralized or a decentralized organization (which, in the case of the traditional corporation, is indeed centralized, though it doesn't have to be).
• Logical (de)centralization:
o How monolithic or dispersed are the underlying data structures/interfaces? How much of this infrastructure may be lost without loss of function? Now, this is indeed part of the β-tier, as it pertains to information flows and the patterns around them. However, consider the example suggested: traditional corporations are logically centralized (one cannot really split them in half). A centralized corporate database, in and of itself, does not guarantee centralized control; a decentralized organization could very well harness a centralized corporate database. Instead, the focus ought to be on the β-tier rules and protocols that help establish centralized vs. decentralized control. Using the above ambiguous framework, Buterin classifies the blockchain technology along the three axes [53]: Blockchains are politically decentralized (no one controls them) and architecturally decentralized (no infrastructural central point of failure) but they are logically centralized (there is one commonly agreed state and the system behaves like a single computer). The problem with Buterin's classification scheme is that it does not address the heart of the decentralization issue. Also, the three suggested axes are ad-hoc and could easily be augmented; for example, they could very well include the economic (e.g., microfinance), the aesthetic (e.g., decentralized control among jazz musicians), the scientific (e.g., the citizen-science movement), etc.
Consider the legal implications of the above misclassifications. If (as Buterin asserts) no one controls the blockchain, then no one may be litigated against. However, that is not what is happening in the real world. For example, SilkRoad had six of its decentralized servers tracked down, and its founder, Ross Ulbricht, arrested for money-laundering [54]. Ripple is currently facing multiple class-action lawsuits (with CEO Bradley Garlinghouse named as a defendant) claiming securities-law violations [55]. Similarly, Tezos and its founders face multiple class-action lawsuits [56] for securities-law violations. The case against Tezos is significant as it was designed as a meta-level operator that would smooth out all future governance issues. However, because of poor corporate governance structuring and the resultant fallout between the founders, their ICO (Initial Coin Offering) got stalled, resulting in the lawsuits [56]: "One thing is clear though: there is a certain irony in how Tezos, the cryptocurrency aiming to solve governance issues on the blockchain, crashed due to governance issues." From a legal as well as a business point of view, it is critical to understand the relative nature of centralization versus decentralization. What appears decentralized for agents at the αi-level (and below) is indeed centralized for agents operating at the level of αi+1 (and above) and under the direction of βi. That being the case, agents at αi+1 (and above) are legally liable. Buterin is therefore in error when he claims that "blockchains are politically decentralized (no one controls them) [53]." In fact, wherever two or more human agents engage, legal disputes are certainly possible. Furthermore, litigation is more than likely when necessary boundaries are left unstated. In the blockchain context, disputes can occur between agents at the same level, or across levels. It does no one any favors when the leadership tries to hide the agency issue behind the decentralization veil whenever disputes cross levels. True blockchain leadership would lie in setting up timely and appropriate responsibilities, limitations, and boundaries as the system scales. Or as Robert Frost would wisely but reluctantly suggest, "good fences make good neighbors [57]." Srinivasan and Lee [58] have proposed a Lorenz Curve/Gini Coefficient based framework to help quantify the degree of decentralization in a given system. The Gini coefficient spans the range 0.0-1.0; the closer the coefficient is to 1.0, the more centralized the system. A related concept, the Nakamoto Coefficient, is also reported in [58]; it tracks the agent-level threshold that tips the cumulative area under the Lorenz curve into 51% control. From the CAS perspective, the key insight is that this framework suggests studying the blockchain system as being composed of six essential subsystems, namely:
1. Mining: by reward
2. Client: by codebase
3. Developers: by commits
4. Exchanges: by volume
5. Nodes: by country
6. Ownership: by addresses
This approach comes closer to the CAS ideal as it acknowledges the existence of a variety of controlling gear-trains that need to be independently and jointly tracked. Also, note that the metrics are focused on stigmergic outputs (such as measuring the developer-focused commit distribution). But lacking an integrated framework, this approach is unable to combine the Gini/Nakamoto subsystem measurements into a coherent system-level measure; it is therefore forced to treat the subsystems as stand-alone. Furthermore, the concept of decentralization is far more generic than just the blockchain context; for example, the above six subsystems play no role when considering the degree of decentralization in a corporate organization. Here, the underlying bipartite α-β CAS machinery is what is missing. By highlighting and referring to the generic CAS machinery, we may be able to liberate the decentralization concept to its rightful stature. Also, from a principled design perspective, it is important to articulate the driving functional requirements in the above endeavor; i.e., why we are looking for decentralization here vs. centralization there. For example, taking a page from the biological realm, there is a reason why parts of our nervous system are under central control while other parts are under decentralized control. Blindly optimizing along the decentralization ideal would miss out on these hybrid architectures.
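As a minimal sketch, the two coefficients from [58] are straightforward to compute for any one subsystem; the mining-pool shares below are invented, and the 51% threshold follows the text above:

# Gini: 0 = perfect equality, -> 1 = concentrated (more centralized).
# Nakamoto: smallest number of entities whose combined share crosses
# the control threshold. The input distribution here is hypothetical.

def gini(values):
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    # Standard closed form on the ascending-sorted sample (1-based ranks).
    return (2 * sum(i * x for i, x in enumerate(xs, start=1))) / (n * total) - (n + 1) / n

def nakamoto(values, threshold=0.51):
    xs = sorted(values, reverse=True)
    total, acc = sum(xs), 0.0
    for k, x in enumerate(xs, start=1):
        acc += x
        if acc / total >= threshold:
            return k
    return len(xs)

mining_shares = [30, 22, 15, 10, 8, 6, 5, 4]   # hypothetical pool shares (%)
print(f"Gini: {gini(mining_shares):.2f}, Nakamoto: {nakamoto(mining_shares)}")

On this toy distribution, the top two pools already cross the 51% threshold, which is exactly the kind of concentration the Nakamoto coefficient is designed to surface within a single subsystem.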
Emergence, Self-Organization and Governance
In [37], Wolf and Holvoet differentiate between emergence and self-organization. Referring to Fig. 4, emergence is the upward-moving arrow from α to β, while self-organization is the downward-pointing feedback loop from β to α. These two flows can and often do occur asynchronously. Long-winded asynchronous loops easily confound the tracing of the causal structures. Governance is predominantly associated with the downward β→α command orchestrations, but it is equally important to underscore the formative α→β pattern-captures. In this sense, emergence ought to precede self-organization. However, it is possible to graft foreign patterns and artifacts onto an immature blockchain offering, resulting in a lack of coherence. All the key elements (including the governance units) selected from the overall blockchain ecosystem [59] need to cohere within the evolving context of a given blockchain community setting to make a unique blockchain offering. Once the base structures have materialized, every new emergence and its corresponding self-organizational restructurings ought to be appropriately governed. Given the speed, anonymity, and heterarchically-hierarchical reach of blockchain-based markets, traditional governance structures do not seamlessly carry over. Governance artifacts ought to have the same level of speed, anonymity-piercing, and heterarchically-hierarchical reach (on an as-needed basis) that closely parallels any of the breaches across these dimensions. Or, to quote Callimachus of Cyrene, we have got to "set a thief to catch a thief [60]," but now in real-time.
Hierarchy, Heterarchy & Governance
Closely related to the conceptual pair of centralization-decentralization is the conceptual pair of hierarchy-heterarchy. We briefly considered this earlier when discussing the issue of trust (Section 3) as well as the rise of complexity (Section 4). For example, in Section 3 we asserted that "it, therefore, may be argued that the ongoing US experiment in self-governance is indeed a lower-triangle decoupled design." Such a design may be depicted as shown in Fig. 6 below:
Fig. 6. Heterarchically Hierarchical (|h-|H) Governance
Heterarchic control of hierarchical systems (i.e., |h-|H as depicted above) is an expensive proposition as it demands a delicate balancing act of power-sharing between competing hierarchies. It is therefore rarely used, except in providing governance of the very topmost nodes, and in cases of national import. However, the problem of "who will guard the guards themselves [15]" occurs throughout the system, not just at the top nodes. Indeed, Lord Acton's insight that "power tends to corrupt, and absolute power corrupts absolutely [14]" is applicable across all nodes (except maybe the lowermost) in every socio-technical hierarchy. This is because as a hierarchical system scales, it provides sufficient latency in information flow, and sufficient nooks and crannies, to bury the proverbial skeletons of misconduct. It is similar to the distinction between local vs. global maxima in the field of mathematical programming (Fig. 7). Thus, while there may be just one global optimum, there may indeed be many local optima based on the local settings. Likewise, in hierarchic (as well as in heterarchic) organizations, there may be local as well as global top nodes that may be compromised. Indeed, it may even be asserted that it is the local top nodes that jostle to take on the mantle of the global top node (Shakespeare's Othello vs. Iago being a case in point [61]). Hence it may be crucial to nip corrosive power in the bud at the early, local stages before it scales and migrates over to the global slot. If properly designed, the blockchain approach could effectively democratize and make available the above |h-|H governance architecture across the board. This is why establishing a proper governance model for the blockchain approach has significant beneficial implications society-wide. As discussed in Section 2, the fundamental problem in democratizing and scaling up the governance kernel is in clearly understanding when to use the machine vs. when to apply human intervention; in other words, when is it appropriate to use on-chain vs. off-chain governance vs. a mixed setup?
How should one go about adding heterarchic controls (via the blockchain technology) into traditional hierarchic governance models, and across the board? Here, an analogy may help in grasping the auxiliary-evidence scheme that the blockchain technology offers. Consider the task that a particular lawyer is faced with, i.e., ascertaining the veracity of a given client or witness. The task is to check if the client is telling the truth. One way is to check each statement, to check for internal consistency, and to independently validate it against other bodies of evidence. However, there is yet another way to check if the client is lying, and that is to bring in a micro-expressions expert. Micro-expressions [62] are fleeting (i.e., lasting less than half a second) betrayals of inner conflict that the subject is incapable of hiding or suppressing. The act of concealment is being orchestrated by the pre-frontal cortex, which the amygdala effectively short-circuits by leaking the subterfuge in an involuntary micro-expression. In this sense, micro-expressions are auxiliary evidence streams that may help catch a lie. Using the blockchain technology is akin to using micro-expressions to help adjudicate a conflicted situation; it provides easily verifiable auxiliary streams of evidence that may be available to anyone in public. However, note that on their own merit, micro-expressions do not reveal the factual basis of the conflicted case; only that whatever the client is asserting has an element of concealment in it. In this sense, the scheme preserves the client-attorney privilege as far as the micro-expressions expert is concerned.
Likewise, the blockchain technology cryptologically conceals the factual basis of a given transaction; but it has the potential to reveal if that transaction conflicts with something prior that happened and was recorded within the ledger the blockchain controls. Such self-on-self checking is what makes current blockchain governance hierarchical in nature; it, however, does not solve the original problem of top-node governance, i.e., that of "who will guard the guards themselves [15]?" This was painfully evident on June 17th, 2016, when the Ethereum-based DAO (Decentralized Autonomous Organization) suffered the infamous DAO attack that legally exploited weaknesses in its code-base [3]. Section 6 briefly described Ethereum. It is a programmable, Turing-complete blockchain infrastructure that can authenticate and run code (in the form of smart contracts), not just keep track of the underlying transactions. The DAO was built on top of Ethereum as a decentralized, cryptocurrency-based, crowd-funded platform where investors could directly fund and manage new enterprises that would, in turn, run on Ethereum. In a period of just one month, the DAO was able to raise the equivalent of 250 million USD, the largest crowdfunding success as of May 2016.
However, the DAO attack fundamentally crippled the visionary zeal. Faced with dire losses (on the order of 35 million USD), the principals banded together in an ad-hoc manner to perform a hard fork; i.e., to violate their own pre-established rules of conduct and revert the ledger to a pre-attack state, while simultaneously changing the rules of operation to make them favorable to the majority. This is indeed the ancient problem of governance at the top nodes of an organizational hierarchy, be it human- or technology-based. Merely handing the administration of agreements between willing agents over to smart contracts (i.e., rule-based digital logic used to verify and enforce an agreed-upon contract between two or more agents) does not obviate the top-node problem. Hierarchically governed socio-technical designs are fundamentally coupled on account of too few DPs. To understand how one may go about introducing elements of heterarchic gear-train governors into a predominantly (smart-contract based) hierarchic mix, one has to delve into the architecture of human knowledge alongside the issue of the unknown-unknowns.
Heterarchically-Hierarchical Knowledge and Governance
Earlier, in Section 4, we asserted that governance is intimately related to sense-making, which in turn is related to the nature, shape, and dynamics of human knowledge. It is by understanding the epistemological roots of human knowledge that one may formulate the proper division of labor between the human and the machine (i.e., between off-chain and on-chain governance). In other words, what is it that the human is good at; likewise, what is it that the machine is good at? Smart contracts are smart only to the extent that human ingenuity has embedded the smarts within them, including the necessary smarts for knowledge dynamics originating both within as well as outside one's ken.
Given the abstract nature and spread of human knowledge, it may be observed that knowledge has a dynamical and heterarchically-hierarchical (|h-|H) structure as shown in Figs. 8a-f below. This figure is adapted from [19]. Concretes are far more numerous than abstractions; this implies that domain-specific human knowledge (Fig. 8b) has a conical/hierarchical shape.
Induction flows along an upward arch, while deduction flows along a downward arch. Abductive cascades utilize both inductive as well as deductive streams in problem-solving (including designerly) situations [63]. These distinctions ought to inform the ongoing debate as to the proper division of labor between humans and machines: induction (which favors human faculties) versus deduction (which favors the machine) ought to be the proper role demarcation between the two sets of entities in any socio-technical system. Call this demarcation the Inducto-Deductive Front (IDF), shown as the dotted line in Figs. 8a-d. For abductive cascades (with the IDF at the cascade apex), both human and machine agents would need to work in close symbiotic coordination [23, 64]. The rate of change in the knowledge corpus is more pronounced along the lower rungs as compared to the higher, abstract levels (Fig. 8a). Reverse-salients (Fig. 8c) are lagging knowledge fronts (the known-unknowns) that occur because of differentials in growth spurts across domains that are close enough to inform one another were conceptual barriers not in the way. When they do gap-close, the effect ripples across the knowledge fabric radially (i.e., hierarchically, |H) as well as tangentially (i.e., heterarchically, |h).
Another source of knowledge dynamics is the archstand [63], an integrated external perspective such as the non-Euclidean framework that led to the Theory of Relativity. When stand-alone domains are organized using domain-kinship metrics, one may expect these conics to exhibit a self-similar fish-scale (hierarchically-heterarchic) fractal structure (Fig. 8e). Humans are at the mesoscale. Unknowns from the macro-world dominate the outer realms; unknowns from the micro dominate the inner regions. Knowledge is sandwiched between these two outer and inner circles-of-ignorance, which are expanding and contracting respectively. Regions beyond are the ultimate terra incognita: the vast unknown-unknowns.
Between hierarchies and heterarchies, hierarchies exhibit relatively stable vertical linkages, whereas heterarchies exhibit dynamic ties that are conceptual mashups in the making. At finer grains, hierarchies may contain heterarchies and vice-versa, and switch dominance across time (Fig. 8f). Knowledge flux involves the constant jostling between heterarchies and hierarchies. Without hierarchies, higher-level heterarchies do not form nor engage; without heterarchies, hierarchies tend to become stale, dogmatic, and insular. The emergence/flourishing of a discipline arises from heterarchic assaults and hierarchic defenses; both forces are necessary. Heterarchies encourage falsifiability while hierarchies encourage verifiability; both are essential. Therefore, in the context of governance, both of these forces ought to be judiciously engaged. When heterarchical assaults reach above the IDF, inductive human ingenuity ought to be marshaled; in contrast, when heterarchic assaults land below the IDF, the machines may well be capable of handling the issue. Likewise, when issues of verifiability range above the IDF, human ingenuity would again be demanded to overcome the impasse. But if they occur below the IDF line, the smart-contract infrastructure may be sufficient to handle them.
In Section 4 (on rising complexity) we indicated that there is an ongoing phase shift away from deep hierarchies and into hybrid |h-|H systems with many ad-hoc laterals. Interdisciplinarity is on the rise, and traditional disciplines are heterarchically being cross-pollinated. This has been discussed at length in [19]. One of the progressive schemes (the Jantschian) is as shown in Fig. 9 below. In CAS terms, such a progression is to be expected, given the upward gearing across α↔β.
Fig. 9. Terms of Interdisciplinarity (Jantschian) [19]
While the overall envelope of the unknown-unknowns is as shown in the knowledge sandwich of Fig. 8e, there are many nuances (such as the case of reverse-salients, i.e., known-unknowns) that need to be addressed. We turn to the issue of the unknown-unknowns next.
The Unknown-Unknowns and Governance
Defense Secretary Donald Rumsfeld popularized the issue of the unknown unknowns [65]: Subject. What you know. There are known knowns. There are known unknowns. There are unknown unknowns. But there are also unknown knowns. That is to say, things that you think you know that it turns out you did not. The problem is how to parse this with logical consistency in mind.
Fig. 10. The Unknown-Unknown Knowledge Asymmetry Exploit
For the sake of brevity, let us notate these options as KK (Known-Known), KU (Known-Unknown), UU (Unknown-Unknown) and UK (Unknown-Known). Elsewhere [66], Rumsfeld also opined: There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. These are things we don't know we don't know.
This quote considers just three of the options: KK, KU, and UU, with UK missing. The way Rumsfeld has parsed the second option, KU (i.e., things that we know we don't know), suggests that the first lettering is about our state of confidence in our state of knowledge, and the second lettering is about the base state of our knowledge. Thus, the missing variant UK in the above formulation indicates poor confidence in a given assertion that we nevertheless accept.
Once we parse this base structure, it then becomes clear that there are many more shades of the Unknown-Unknowns that lurk in the shadows, especially when we start considering issues of stigmergic knowledge as well as what our adversaries likewise know. Understanding the problem of the Unknown-Unknowns is central to understanding how blockchain governance is likely to evolve. For example, in analyzing the DAO debacle, Voshmgir highlights the problem of the unknown-unknowns faced by on-chain "codified governance rulesets" (CGRs) [13]: In reality, formalised and codified governance rulesets can only depict known knowns and known unknowns, but have very limited capabilities to properly deal with unknown unknowns.
The earliest formulation of one of the combinations may have been by the poet John Keats in Endymion, wherein the love-struck demi-god Endymion ponders the mysteries that wrap the object of his affections, the moon: "O known Unknown! from whom my being sips [67]." The realm of the unknown-unknowns can inspire as well as frustrate inquiry. Here Keats puts forth the idea that things of beauty have both familiar as well as unknown facets, and that we come to grasp the unknown by systematically working our way to the edge of the known realms. Indeed, there are many pathways into the realm of the unknowns, not just the four that Rumsfeld put forth. One's true state of knowledge about some pertinent issue may be cross-mapped against what society-at-large (or one's adversary, in a game-theoretic sense) is aware of. Also relevant to the problem is how the knowledge being claimed was arrived at; i.e., conceptually or stigmergically? In the human context, stigmergic knowledge gets coded in mores, heuristics, and habits of thought and action (both at the individual as well as at the societal level).
If it was conceptually arrived at, then it has a greater chance of error; but if true, it has far-reaching potential to scale. In contrast, if it was stigmergically arrived at, then its basis may be stronger (provided it avoids the problem of the aforementioned stigmergic lock-in); but being pre-conceptual, it does not scale easily. It is therefore of strategic value to convert stigmergic knowledge into the conceptual realm.
So, what are the combinatorial possibilities that populate the realm of the unknown-unknowns? Denote your (or your team's) state of knowledge in small-caps. Denote the state of knowledge of society-at-large (or perhaps your adversary) in large-caps. The resultant combinatorics may then be bifurcated along the following dimensions:
• Process by which knowledge is gained: C/S (Conceptual/Stigmergic) for {Society, Adversary} vs. c/s (conceptual/stigmergic) for {you, team}
• Confidence in knowledge possessed: H/L (High/Low) for {Society, Adversary} vs. h/l (high/low) for {you, team}
• True status of knowledge possessed: T/F (True/False) for {Society, Adversary} vs. t/f (true/false) for {you, team}
Mapping the resultant combinations into the known/unknown characterization is fairly straightforward, with stigmergically derived true and false states in small-caps {k, u} and conceptually derived true and false states in large-caps {K, U}. Thus CHTcht (KK) would denote both you as well as your adversary possessing conceptually-derived knowledge that is of high confidence and happens to be true, leading to a situation of conceptually-derived known-knowns. When the adversary's knowledge is mapped against one's own, there are 64 combinations, as shown in Fig. 10 above. Note that the matrix assumes a two-player game structure, though one or both players could represent coordinated groups. Also note that the UK-style coding (in parentheses) is different from the Rumsfeldian coding as it denotes two opposing agents. Each new player expands the combinations by a multiple of 8. These combinations create knowledge asymmetries (with comparative advantage to the {K, k} team if paired against a {U, u} adversary) that are ripe for exploitation, thus triggering governance. These asymmetries are, therefore, at the foundation of the governance conundrum, which, in its essence, checks to see if agreed-upon knowledge flows have been thwarted to result in the given asymmetry. Consider, for example, the cell CHTshf (Ku) highlighted in green in the top-right quadrant of Fig. 10. Here the adversary's knowledge about some matter (say the true worth of a smart contract) is conceptual, of high confidence, and true; in contrast, your knowledge about that same matter is mere stigmergic hearsay, but of high confidence, and happens to be wrong. Diagonally across from CHTshf (Ku) is the diametrically opposite case of SHFcht (uK) (highlighted in red), where the asymmetry now favors the individual actor as opposed to society at large. Here the socially networked group is operating stigmergically and has fatally high confidence in its findings, which are in fact wrong; in contrast, the lone operator is operating conceptually, has high confidence in its findings, and is in fact right. In the world of finance, hedge-funds try to exploit SHFcht (uK) types of knowledge asymmetries. And when a smart contract is executed based upon such asymmetries, there are bound to be outcries of failures in governance. This indeed is what transpired in the case of the DAO attack.
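The 64 cells of Fig. 10 can be enumerated mechanically. The sketch below (a hedged illustration with an invented ku helper) derives the K/U code of each cell from the three bifurcating dimensions above and reproduces the two cells just discussed:

# Caps encode the adversary/society; lower-case encodes you/your team.
# A player's derived letter is K if their knowledge is true, else U;
# upper-case if conceptually derived, lower-case if stigmergic.

from itertools import product

def ku(process: str, truth: str) -> str:
    letter = "K" if truth in "Tt" else "U"
    return letter if process in "Cc" else letter.lower()

cells = {}
for P, H, T, p, h, t in product("CS", "HL", "TF", "cs", "hl", "tf"):
    label = f"{P}{H}{T}{p}{h}{t}"        # e.g. CHTshf
    cells[label] = ku(P, T) + ku(p, t)   # e.g. Ku

print(len(cells), "combinations")        # 64
print(cells["CHTshf"])                   # -> Ku (the green cell in Fig. 10)
print(cells["SHFcht"])                   # -> uK (the red cell)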
Playing the role of the adversary, if one wishes to strategically widen the {K, k}-{U, u} gap even further, it may be worth introducing Axiomatic Design/Complexity Theory based complexing red-herrings, as suggested in [68]. This may be even more potent when dealing with {k}-{u} type knowledge gaps (which happens to be much of operative human knowledge), the reason being that it is challenging to debug stigmergic linkages that have been deliberately sabotaged for the explicit design purpose of throwing off one's adversaries.
Axiomatic Design for Complex Adaptive Systems
The creative mashup between two diametrically opposed design methodologies (i.e., the top-down Axiomatic approach vs. the bottom-up Design Patterns approach) was discussed in [69, 70]. There, the top-down V-approach was juxtaposed with the bottom-up Λ-approach to create the N-model. As it turns out, the N-model comports well with the Complex Adaptive Systems framework. The design-patterns approach that leads with the upward stroke of Λ is akin to the α→β emergent stroke in a CAS system; likewise, the axiomatic approach that leads with the downward stroke of V is akin to the β→α self-organization stroke in a CAS system. Together they compose to make the N-model, which indeed is the overall gearing dynamic behind the α↔β CAS system. However, as mentioned earlier (in Section 1), the design of a CAS system is a step removed from the traditional design of systems that are predominantly non-emergent/non-self-organizing. There is an inherent embryology to the CAS system that the designer has to yield to; i.e., the CAS designer needs to think more like a farmer than an engineer, and adjust to the vagaries of emergence, such as that between pests and pollinators [71].
To come up with a design that is holistic and emergent requires the designer to be steeped in the practice of design; i.e., it is combinatorically challenging. Also, emergence requires beneficial interaction between the design elements, thus favoring lower-triangular decoupled over uncoupled designs, which is not the usual norm. In the case of the diagonal design, the whole is equal to the sum of the parts. Emergence, however, requires beneficial interaction, which is feasible only if non-diagonal elements are present. Thus, in most cases, the uncoupled has dominance over the decoupled, given the lower informational complexity. Emergence is the rare occurrence that could flip this dominance, combinatorically winning the race with lower information content. This, however, is merely a hypothesis that needs to be validated. Many biological systems (given the enormous temporal-combinatorial space over which they have been stigmergically operating and finessing the information axiom) have strong elements of emergent qualities (such as life, consciousness, everything that pertains to the emotional faculties, etc.). Biological designs present rich opportunities to test this hypothesis.
By adopting the axiomatic approach, the β→α design is decomposed both
• laterally and non-hierarchically across the various realms such as customer, functional, physical, and process (CR, FR, DP, PV, etc.), as shown in Fig. 11 below, and
• vertically and hierarchically within each of the above realms.
In a rapidly evolving design context, it is impractical to approach design in a staged, linear, waterfall fashion as in CR↔FR↔DP↔PV. Instead, it is better modeled (as shown in Fig. 11 below) as a fully linked network of information nodes. The linear structuring still dominates, but it is now augmented with auxiliary flows. Each of these realms has its own α↔β CAS structures that form over time. Furthermore, since human knowledge is hierarchical, the design trace that leverages this knowledge is likewise hierarchical.
By visualizing the design in the context of knowledge hierarchies, one may begin to appreciate the historical import of Prof. Suh's work [2]. In fact, something similar (see Fig. 12) happened in Renaissance Italy around 1420, with the invention of linear perspective [72] by the Italian architect/artist Filippo Brunelleschi. Ancient Rome did have something close to linear perspective; however, the ancients used multiple vanishing points in their paintings, thus leaving a sense of incoherence in the presentation. Brunelleschi did study the ancients. He then came back to Florence to revolutionize the world of representational art as we now know it. With a single vanishing point, all the objects in the field of vision compose in a realistic, coherent, eye-pleasing fashion. Indeed, juxtaposing any of the art-works prior to Brunelleschi's approach, one immediately senses the flatness and lack of proportions in the former vs. the three-dimensionality and compositionality in the latter. The ability to bring unity and coherence to the realm of the conceptual artifact space is monumental in scope, especially in the field of education in general, and not just design education. Here, the teaching of anatomy and physiology from a "form follows function" perspective [73] is worth noting as it offers significant insights into the potential scope. As was the case for paintings prior to Brunelleschi's perspective drawing, much of education today is a sprawl that lacks conceptual unity and coherence. This same sprawl is evident in the blockchain realm [9]. Again, there is untapped potential in modeling the instructional sciences along the biological template.
Basic Blockchain Design
The web has evolved from a sprawling network of hyperlinks in the 1990s (Web1) to being programmable (Web2) in the 2000s, thus enabling social media, e-commerce, and other similar restructurings. These restructurings allowed a few to scale upwards and enjoy global reach.
So now we are on to the third phase, i.e., Web3. The problem with Web2 was that (as was the case with the Napster model and its centralized set of index files), at the center of many of the Web2 business models there exists a centralized database that amasses immense power to structure and shepherd the flow of thought and commerce. These models implicitly took advantage of power-laws that favor the highly connected central nodes. It is true that, on the one hand, these Leviathans have enabled tremendous productivity gains compared to what existed prior; but on the other hand, they have de facto established governance-in-stealth for all the peripheral nodes. It is not to say that there is any malevolent element in these designs; it is merely that anything so centralized (as per Lord Acton's dictum) will perforce be restrictive towards the free flow of thought and association.
Fig. 13. Basic Blockchain Design
To put things in perspective, every Google page-rank discriminates against those who create and search for the road less traveled, for the average person rarely goes beyond the first five search results [74]. In contrast, Web3 promises to be genuinely democratic by eliminating all central nodes and associated intermediaries who currently enjoy a high degree of betweenness-centrality [75] by inserting themselves between nodes that otherwise would be P2P. Human/machine agents can now genuinely engage P2P without the need for gatekeepers and connectors. To put this in context, the ongoing revolution promises the ability "to build ridesharing without Uber, apartment sharing without Airbnb, and social media without Facebook and Twitter [76]." The top-level CR for the blockchain may be stated as follows: Need a consensually-trusted, immutable, distributed and decentrally-managed, verifiable, publicly and efficiently searchable record of all transactions (since genesis) that are private and discreet, but stigmergically-marked for public viewing, that pertain to a given economic activity, and that may be made by adversarial/trust-less agents.
When restated in the FR-DP framing, the above CR translates into the basic blockchain design shown in Fig. 13 above. The design-matrix indicates a decoupled, lower-triangle design. The couplings systematically build up the FR list, top to bottom. For example, the FR Trustless-Trust (highlighted in red) is being delivered using all the previous DPs, along with the Consensus Model DP.
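A sketch of this decoupled pattern in code follows; the FR/DP wording and the exact placement of the couplings are an assumed reconstruction of Fig. 13 for illustration only, not a verbatim copy of the figure:

# 1 = DP j (column) participates in delivering FR i (row).
# A lower-triangular matrix means the DPs can be fixed in order,
# each FR being satisfied without revisiting earlier decisions.

FRS = ["record transactions", "tamper-evidence", "efficient verification",
       "trustless trust", "network resilience"]
DPS = ["chained ledger", "cryptographic hashes", "Merkle tree",
       "consensus model", "P2P protocol"]

DM = [
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 0],   # trustless trust uses all previous DPs + consensus
    [1, 1, 1, 1, 1],
]

n = len(DM)
decoupled = all(DM[i][j] == 0 for i in range(n) for j in range(i + 1, n))
print("decoupled (lower-triangular):", decoupled)
for i, (fr, dp) in enumerate(zip(FRS, DPS)):
    print(f"step {i + 1}: set DP '{dp}' -> satisfies FR '{fr}'")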
In the BitCoin case, as soon as consensus is achieved, new tokens are released as a reward for the successful miner who expended computational resources to help bring about that consensus. Thus, in this phase of token creation, a small part of the overall design has one of its design matrices in a different form, one that leads off with the consensus model. This agrees with the extended AD/CT (i.e., TDPC: Time-Dependent Periodic Complexity). In the discussion below, we have chosen to focus on just the broad design (as shown in Fig. 13 above) and ignore all such finer variations.
The blockchain is a CAS that fundamentally operates on stigmergy. Discreet stigmergy requires that agents be allowed to mark their environment (here, the blockchain ledger) discreetly (i.e., without having to reveal their respective true identities). This is similar to ants leaving pheromone droppings, except that in the blockchain context the agents enjoy a certain degree of anonymity. Cryptographic tokens (bitcoin, ether, gas, etc.) are the cash-like pheromones that various stakeholders use to engage in economic activities. They are cash-like in the sense that they shield the privacy of the agents, but stigmergic in the sense that the transaction now has a permanent, publicly viewable/traceable record. Thus, when spent, the tokens leave stigmergic markings that, if properly aggregated, could help evolve the system forward. Cryptographic stigmergy [77] looks at the stigmergic design of the overall system to help precipitate direction-providing emergences and the corresponding self-organization around them.
Transaction security is obtained via standard cryptographic hash functions such as SHA-256.
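As a minimal, illustrative sketch (in Python, using only the standard library; the transaction payload is invented for the example), the following shows the two properties the design leans on: a fixed-length digest, and extreme sensitivity of that digest to any change in the input:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of `data` as a hex string."""
    return hashlib.sha256(data).hexdigest()

# A hypothetical serialized transaction; any single-byte change
# produces an entirely different, unpredictable digest.
tx = b'{"from": "A", "to": "B", "amount": 5}'
print(sha256_hex(tx))
print(sha256_hex(tx + b"!"))  # differs completely from the first digest
```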
The Merkle tree data structure that encodes all the transactions in a given block is designed to help verify the existence and validity of the growing chain of transactions in a computationally efficient fashion (costing on the order of O(log2(N)) in space and time).
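A minimal Python sketch of the construction, assuming Bitcoin's convention of duplicating the last node at odd-sized levels, may help; a membership proof then needs only one sibling hash per level, which is where the O(log2(N)) cost comes from:

```python
import hashlib

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a non-empty list of transaction payloads into one Merkle root."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last node
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# One 32-byte value commits to the entire block: changing any single
# transaction changes the root.
root = merkle_root([b"tx1", b"tx2", b"tx3"])
```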
The paradoxical property of trustless trust is obtained via various consensus models (such as PoW: Proof-of-Work; PoS: Proof-of-Stake; etc.). Cryptoeconomics [78] studies the creation of economic incentives (such as tokens allocated to miners for performing computationally intensive work such as PoW) to bring about consensus in a distributed and potentially adversarial setup. For example, the PoW model embedded in the BitCoin system allows decision-making via consensus (via the Byzantine Fault Tolerant algorithm [79]) despite approximately one-third of all agents going rogue. Blockchain offerings may be differentiated by the consensus models that power their respective answers to the problem of securing trustless trust.
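The asymmetry that PoW exploits, expensive to produce but cheap to verify, can be sketched in a few lines of Python; the difficulty parameter here counts leading zero hex digits and is purely illustrative, not any real network's rule:

```python
import hashlib
from itertools import count

def proof_of_work(header: bytes, difficulty: int) -> int:
    """Search for a nonce whose digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # finding this took ~16**difficulty attempts on average

def verify(header: bytes, nonce: int, difficulty: int) -> bool:
    # Verification is a single hash, no matter how hard the search was.
    digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```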
As a biological analog, the immune system [80] is a perfect example of a blockchain with its own proof-of-work (the fever), which is the computational struggle that the various cells of the immune system go through in order to recognize an invading antigen as friend or foe. Once identified, the body never forgets: it keeps the evidence of all its successful struggles in its growing ledger, i.e., its collection of antibodies.
The blockchain ledger is a growing linked list of transaction records that have been bundled into timed blocks. The chained ledger has been accumulating these blocks ever since the genesis of the blockchain under study. Each such timed block of transactions is bundled in an efficient, easily verifiable data structure such as the Merkle tree. Each new addition contains in its header the hash of the previous block. This makes both the individual blocks and the overall chain highly resistant to modification: the longer the chain grows, the harder it is to break.
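A toy Python sketch (the field names are our own, not any real wire format) makes this tamper resistance concrete: because each block's hash covers the previous block's hash, editing any historical block invalidates every link after it.

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    body = {k: block[k] for k in ("time", "tx", "prev")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(tx: list, prev: str) -> dict:
    block = {"time": time.time(), "tx": tx, "prev": prev}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list[dict]) -> bool:
    for i, blk in enumerate(chain):
        if blk["hash"] != block_hash(blk):             # block body was altered
            return False
        if i and blk["prev"] != chain[i - 1]["hash"]:  # broken link to predecessor
            return False
    return True

genesis = make_block(["coinbase"], prev="0" * 64)
chain = [genesis, make_block(["A->B: 5"], prev=genesis["hash"])]
assert chain_is_valid(chain)
chain[0]["tx"] = ["A->B: 500"]     # tamper with history...
assert not chain_is_valid(chain)   # ...and the chain no longer validates
```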
Finally, Network Resilience is obtained via the P2P distributed protocols.
Satoshi Nakamoto designed the PoW-based blockchain for the P2P bitcoin cryptocurrency [81]. The contractual logic embedded in the BitCoin blockchain may be abstracted out and generalized to help secure trusted transactions across the whole gamut of global economic activity. The Ethereum project was the first to recognize the value of such decouplings. While the blockchain provides the underlying infrastructure, it is what gets built on top of it that defines the business offering. Each such offering provides unique affordances targeting specific business ecosystems. The rules and boundaries of these ecosystems (along with their governance protocols) are established via the logic of the smart contracts.
Smart Contract and Governance
Smart contracts are contracts written in code that execute when the conditions that make up the agreement are met. In other words, a smart contract is "cocked, locked and ready to fire"; there are no off-ramps. Smart contracts have been envisioned across multiple domains, including crowdfunding, financials (buying and selling of tangibles/intangibles, insurance, derivatives), legal, etc. Smart contracts are point-to-point, with all middlemen having been disintermediated. They therefore have substantial potential to inflict losses on the naïve, careless or uninitiated (i.e., the KU type asymmetry).
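A deliberately simplified Python sketch of this auto-executing pattern (a made-up escrow scenario, not any real contract language) shows why there are no off-ramps: once deployed, the encoded rule alone decides, based on whatever facts it is fed; in practice those facts would come from an oracle of the kind discussed below.

```python
from typing import Callable

class EscrowContract:
    """Toy escrow: funds release to the seller only if the agreed condition holds."""

    def __init__(self, buyer: str, seller: str, amount: int,
                 condition: Callable[[dict], bool]):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.condition = condition   # predicate over externally supplied facts

    def settle(self, facts: dict) -> str:
        # No human in the loop: the encoded rule fires either way.
        if self.condition(facts):
            return f"release {self.amount} to {self.seller}"
        return f"refund {self.amount} to {self.buyer}"

contract = EscrowContract("buyer", "seller", 100,
                          condition=lambda f: f.get("delivered", False))
print(contract.settle({"delivered": True}))   # release 100 to seller
```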
Blockchain-based smart contract technology has the potential to transform society as a whole for the better; better in the sense of faster, cheaper and fairer transactions. Thus, given the enormous potential to smooth the flow of commerce while bringing down costs, it is incumbent on the designers of blockchain-based smart contracts to get the governance aspect right. If a contract is designed for scaling, it ought to cover the variety of knowledge asymmetries that exist across the spectrum of participants, as well as the dynamics along a fast-moving front. Voshmgir emphasizes the pace with which the context for a smart contract design can rapidly move away from its original intent [82]: "First use cases show that as circumstances change, protocols can become inappropriate for the new environment and require modification." In other words, it is not just that the design space is rapidly evolving; here the FRs themselves are rapidly evolving, i.e., the half-life of any given FR is being cut ever shorter. This is highly unprecedented in the world of design, and it places a high premium on designing systems from first principles rather than from short-sighted pragmatics.
Since the smart contract offering sits on top of the blockchain infrastructure, the CR for the design may be stated as follows: Need the ability to structure and verify auto-executing contracts that incorporate arbitrarily complex business rules and trade on a given blockchain offering. When restated in the FR-DP framing, this CR translates into the blockchain-based smart contracting infrastructure shown in Fig. 14 above. The smart contract is the structured, auto-executing contract. It is neither a legal contract nor necessarily smart. Just as with any other trading instrument, the legality of the contract needs to be ironed out in the appropriate legal setting. The "smart" in the smart contract depends on how well the coding reflects the underlying economic incentives. For example, the DAO as a smart contract [12] was anything but smart.
The blockchain infrastructure does not necessarily have the data and logic needed to verify if and when all the preconditions specified in a smart contract have been met. An oracle is a third-party service that exists outside the blockchain to help verify the preconditions encoded within the smart contract. These artifacts provide the ways and means to interface with the real world. Being outside the underlying blockchain setting, they may have unique governance issues that need to be addressed independently.
Blockchain Governance Kernel Design
As was discussed in Section 13, knowledge asymmetries create governance issues. However, knowledge asymmetries are the basis for initiating any successful trade, and are therefore not wrong per se; indeed, wealth creation requires such asymmetries. Also, the half-life of knowledge is short and getting shorter; in other words, there is no guarantee that a given vantage point will last forever. However, at any given time, if the asymmetries are severe or have resulted from a prior information-agreement breach, problems of poor governance open up. Good governance therefore involves creating adequate channels of information flow for timely decision-making, allowing all parties to participate with a sufficient amount of openness while also allowing specific strategic/proprietary information to remain hidden and off the grid (either permanently, or at least for a while). Insights from cryptographic stigmergy [77] as well as cryptoeconomics [78] would be needed to design the appropriate information-signaling mechanisms and economic incentives that help streamline the required information flows. Here we delimit the context to the kernel governance design, to help decide between the off-chain/on-chain approaches.
When we place the matrix of the Unknown-Unknown asymmetries (Fig. 10) alongside the Inducto-Deductive Front (IDF), it raises the question of how the design matrix is transformed when one of the parties on either side is a machine working off a highly specified Codified Governance Ruleset (CGR), as opposed to a human working off a more abstract Principled Governance Ruleset (PGR). While the CGR can be codified into on-chain governance modules, the PGR would be administered in pre-agreed, human-centered, arbitration-like off-chain governance setups that cross organizational boundaries. Fig. 15 (below) shows the kernel governance design, indicating when one or the other ought to be used.
When the knowledge context is conceptual and below the IDF, CGR-coded machines can adjudicate governance modules coded as on-chain smart contracts (i.e., via X2 CGR in Fig. 15a). In contrast, when the knowledge context is conceptual but above the IDF, governance remains off-chain and human-adjudicated (i.e., via X1 PGR in Fig. 15a).
When the knowledge context is stigmergic and above the IDF, governance remains firmly off-chain and human-adjudicated (i.e., via X3 PGR in Fig. 15b). The case where the knowledge context is stigmergic and below the IDF is more nuanced: even though the knowledge context is safely below the IDF (and therefore could use CGR), the stigmergic uncertainties mean that it always needs human oversight. It is a case of the decoupled design (as shown in the lower half of Fig. 15b). It is therefore a mixed case that uses both off-chain and on-chain logic. This is the fundamental answer to the on-chain vs. off-chain governance debate between Ehrsam [10] and Zamfir [11] that we discussed in Section 2.
Fig. 15. Governance Kernel Design
From a CAS perspective, the various configurations (in Fig. 15a-b) are in dynamic flux; protagonists are continually re-positioning for strategic advantage. However, each of these configurations has β-level patterns that frame the governance issue. Most dynamic (and therefore requiring the greatest governance effort) are those patterns that have intrinsic knowledge asymmetries (i.e., the UK type); the ones that need the least governance have fundamental knowledge symmetries (i.e., the UU and KK types). Here too, however, if the β-level patterns indicate that one or both parties suffer from high confidence coupled with poor grasp (or, alternatively, good grasp but low confidence), then problems of governance may surface. This adds greater onus to "Know Your Customer" (KYC) type guidelines.
Fig. 16 shows the overall composition of the blockchain design with the governance sub-modules (as discussed above) factored in. The DAO was missing these governance sub-modules. The dominance of the governance sub-modules is evident from their top-row position. Within the governance sub-modules (and in agreement with Section 7), the conceptual dominates the stigmergic on account of its logical consistency. However, the conceptual does need to factor in the broader inductive base that the stigmergic provides.
Conclusions
Given the leveling of the playing field, the disintermediation of the middlemen, and the transparency of blockchain-based transactions, it is highly likely that information flows are on the verge of scaling exponentially. Stigmergy steps in when information flows scale beyond aided/unaided human cognitive limits. In other words, the α↔β gearing (as discussed in Section 7) will most likely ramp up as the technology gains mainstream support. This paper has provided CAS-based governance guideposts as to what may be expected. Salient points include:
• A review of pertinent literature on blockchain governance, highlighting novel pathologies and the problem of the unknown-unknowns.
• The issue of trust as it relates to the top nodes in a hierarchical organization, and its solution via heterarchical control (courtesy James Madison), which in essence is a decoupled lower-diagonal design for a governance problem fundamental ever since Plato.
• How organizational structures, as well as sensemaking, change with rising complexity.
• The unique challenges faced in the context of large-scale socio-technological organizational designs.
• The challenge of governance where "formal controls" are missing.
• The role of stigmergic gearing in both biological and human decision-making contexts.
• The original formulation of an iterative CAS.
• The original formulation of how the iterative CAS framework may be utilized to help understand the crux of the centralization/decentralization issue.
• The framing of governance from a heterarchically-hierarchic human knowledge architecture perspective, and the use of this approach to fundamentally frame the issue of human vs. machine dominance in decision-making (i.e., induction vs. deduction).
• The framing of the Unknown-Unknown nuances and the ways these show up in the context of governance.
• The distinction between emergence and self-organization, and how the disruption of the natural flow between these two processes can lead to governance pathologies.
• The historical significance of Axiomatic Design (in the context of knowledge architectures) as providing a unifying conceptual vanishing point (similar to the perceptual vanishing point of perspective drawings); being in the realm of conceptuals, it has far greater import, especially in education.
• The observation that blockchain technology, viewed from an axiomatic perspective, is a lower-triangular decoupled design.
• The promise, as well as the governance problem, of smart contracts.
• The design of the blockchain governance kernel as a way of deciding between on-chain and off-chain governance.
Informed by the above guideposts, a follow-up study will go into the agent-based modeling of archetypical blockchain offerings.
Fig. 4. Complex Adaptive System: Basic vs. Iterative
Note that for the sake of simplicity, all the agents in Fig. 4(a) have been placed homogeneously in the lower tier, while all the emergent artifacts have been placed homogeneously in the upper tier. Such a simplification does not quite capture the aforementioned gear-train logic. Instead, what may be happening is that each follow-on feedback loop/iteration is bifurcating the target population into higher levels of organizational complexity. In each subsequent iteration, the population is composed of bifurcated ensembles of agent-nodes and artifacts (as indicated by the dotted ovals in Fig. 4(b)). Gearing therefore iteratively creates self-organization and structure in both of these interacting entity spaces. In each follow-on iteration, the respective number of nodes in each of these dotted ovals is asymptotically decreasing (with allowance for population dynamics) while the dotted ovals proliferate. Agents may, of course, migrate across these boundaries. While it is true that the β-tier is entrusted with the governance mandate (i.e., the role of control and governance) over a vast network of unwieldy, decentralized agents that populate the α-tier, it is likewise not immune from further restructuring (i.e., higher-order gearings). In his post "Notes on Blockchain Governance" [51], Buterin (who founded Ethereum) gives partial evidence of the α-β CAS structuring in the blockchain architecture when he writes: Generally speaking, there are two informal models of governance, that I will call the "decision function" view of governance and the "coordination" view of governance. The coordination view is the β-tier view, while the decision-function view pertains to the α-tier agents that act on the coordination signals. Also, as mentioned earlier in Section 7, it is the decision-function view that has legal liability.
Fig. 7. Problem of the Local vs. Global Top-Node
Institutional Spectrum of Rare Histological Types of Breast Carcinoma
Background: Invasive breast cancer is a heterogeneous disease in its presentation, pathological classification and clinical course. Most tumors are derived from mammary ductal epithelium, principally the terminal duct-lobular unit. However, there are more than a dozen histological variants which are less common but still very well defined by the World Health Organization (WHO) classification. The prime objective of the current study is to document our institutional experience of such rare histological entities, with a review of the literature on the same. Methods: The clinicopathological records of resected breast lesions submitted to the histopathology department over the three-year period from January 2016 to December 2018 were reviewed retrospectively. This was an observational, retrospective and descriptive analysis of 4 unusual histological types of breast carcinoma. The most common malignant lesions, infiltrating ductal carcinoma (IDC) and infiltrating lobular carcinoma, as well as benign lesions, were excluded from the study. Results: Among 528 breast malignancies reported in the institute, 48 unusual histological types were recognized, of which 4 were very rare histological types with less than 1% incidence. Conclusions: Herein we highlight the rare varieties of cribriform, squamous, apocrine and signet ring cell carcinoma of the breast with relevance to clinical, histopathological and immunohistochemical features, significant to the fact that the histological diversity of breast carcinoma has relevant prognostic implications.
INTRODUCTION
Breast cancer (BC) is the most common malignant tumour in women, accounting for one-quarter of all cancers in females worldwide. Breast cancer is characterized by a remarkable degree of morphological and molecular heterogeneity, not only between tumours (intertumoral heterogeneity) but also within the same tumour (intratumoral heterogeneity) [1]. Intratumour heterogeneity, which denotes the coexistence of subpopulations of cancer cells that differ in their genetic, phenotypic or behavioural characteristics, can be attributed to genetic and epigenetic factors and to non-hereditary mechanisms [2].
Two potentially complementary theories describing tumour heterogeneity are the Cancer Stem Cell (CSC) hypothesis and the clonal evolution/selection model [3,4]. In addition, in individual breast cancers, subpopulations of cancer cells may exist across geographical regions of a tumour (spatial heterogeneity) or evolve over time between the primary tumour and a subsequent local or distal recurrence (temporal heterogeneity) [5].
Tumour heterogeneity of breast cancer has been the platform for the traditional, mainly histology-driven classification of breast cancer put forth by the World Health Organization (WHO). This has been refined, and at times replaced, by the more recent molecular classification, which has been used successfully for the design of individual therapies.
The WHO presents a detailed classification of breast cancers based on histology, which in turn is associated with different epidemiology, diagnostic issues, clinical course and prognosis. There are "rare/unusual types of breast cancer" that are less common, but still very well defined by the WHO classification [6].
The aim of this research is to present our institutional experience of these less commonly encountered/rarer entities with their histopathological features, along with a review of the literature.
MATERIALS AND METHODS
The present study is retrospective and was based on the hospital records of women diagnosed with unusual variants of breast cancer. The study was conducted in the department of Pathology, Indian Red Cross Cancer Hospital, Nellore by retrieval of clinicopathological data of patients between January 2016 to December 2018.
As per our institutional protocol, every woman presenting clinically with a breast lump was evaluated by mammography and core needle biopsy. The standard Formalin Fixed Paraffin Embedding (FFPE) technique of tissue processing was followed and slides were stained using Haematoxylin and Eosin dyes. The same routine was also carried out on tissue specimens following surgical excision.
Histopathological diagnosis was confirmed by double-blind peer review of the H&E slides by two pathologists. WHO guidelines on histological classification were followed, and reporting was done as per the College of American Pathologists (CAP) protocol. Immunohistochemistry with relevant markers was performed when deemed necessary, and the results were recorded.
Exclusion Criteria
Histopathologically, benign lesions and the commonly diagnosed malignant lesions, i.e., invasive ductal carcinoma of no special type and infiltrating lobular carcinoma, were excluded from our study.
RESULTS
During a period of 3 years, from January 2016 to December 2018, a total of 528 malignant breast lesions were diagnosed, of which 48 were unusual histological types. Of the remaining 480 cases, 422 were invasive duct cell carcinoma of no special type, 28 were invasive lobular carcinomas and 30 were medullary carcinomas (79.9%, 5.3% and 5.68% of all 528 malignancies, respectively).
In the unusual category of malignant tumours we encountered 6 cases each of invasive papillary and metaplastic carcinoma, 10 cases of intracystic papillary carcinoma, 18 cases of mucinous carcinoma, 2 cases each of pure squamous cell and invasive cribriform carcinoma, 2 cases of stromal sarcomas and one case each of signet ring cell carcinoma and apocrine carcinoma.
Case-1
A 55-year-old female presented to the surgery outpatient department with a 2.5 x 2 cm lump in the upper medial quadrant of the right breast. She had a previous history of malignancy of the left breast, for which she had received surgery followed by 6 cycles of chemotherapy and radiotherapy; the histological diagnosis had been invasive duct cell carcinoma NOS of the left breast. Modified radical mastectomy with axillary lymph node dissection was done for the right breast. Gross examination showed a greyish-white, firm, solid growth with specks of necrosis. On microscopic examination, there was a predominant cribriform pattern and a focal comedo pattern with central necrosis. The cribriform areas of tumour cells were seen invading into the stroma. The tumour cells displayed malignant nuclear features, with focal lymphatic invasion. The tumour was staged as T2N0Mx. On IHC, the cells were strongly positive for ER and PR.
Case-2
A 52-year-old female presented with a lump in the right breast, which grossly measured 2.5 x 2 cm, grey-white, solid in consistency, with irregular pushing margins. On histopathological examination, there was an invasive cribriform pattern of tumour cells with focal areas showing in situ duct cell carcinoma. The stroma showed a desmoplastic response with lymphocytes and multinucleated giant cells. Lymphatic invasion was also noted. The tumour was staged as T2N0Mx. On IHC, the cells were negative for ER and PR. A diagnosis of cribriform carcinoma was rendered in both cases.
Case-3
We received the mastectomy specimen of a 45-year-old female with a 6 x 4 x 3 cm lump in the upper outer quadrant of the left breast. On cut section, the tumour was partly solid and partly cystic in consistency. On histopathological examination, the tumour cells showed high-grade nuclear atypia, with multiple areas showing signet ring cell morphology. These signet ring cells showed peripheral nuclei with abundant intracellular mucin. Metastasis to axillary lymph nodes with signet ring cell deposits was also noted. Staging was given as T3N3Mx. On IHC, the tumour was negative for ER and PR. Upper GI endoscopy and CT of the neck, chest and abdomen were normal, thereby ruling out metastasis. Figure 3 depicts signet ring carcinoma.
Case 4 & 5
We encountered two female patients, aged 48 and 49 years, respectively. The former presented with a 4 x 3 cm lump in the upper medial quadrant of the right breast; the latter showed a 3.5 x 2.5 cm lump in the upper outer quadrant of the left breast. Both underwent modified radical mastectomy along with axillary node dissection following a core biopsy report of squamous cell carcinoma. On serial slicing, both cases showed a large grey-white growth with normal overlying skin, nipple and areola. Microscopic examination revealed an invasive lesion with nests, sheets and trabeculae of cells with squamoid morphology. Keratin pearl formation was also appreciated. The tumours were staged as T2N0Mx. On IHC, ER and PR were negative, and the tumour cells showed positivity for pan-cytokeratin. One patient additionally showed IHC positivity for p63 and high molecular weight cytokeratin. Figure 4 depicts squamoid differentiation.
Fig-4: Microphotograph showing squamoid differentiation (H & E, 400x)
Case-6
We received the mastectomy specimen of a 62-year-old female diagnosed with poorly differentiated duct cell carcinoma of the left breast on core biopsy. The cut section showed a 5 x 4 cm grey-white solid tumour in the upper quadrants of the left breast, with grey-white deposits in the axillary nodes. On histopathological examination, sheets and a glandular pattern of cells with large nuclei, prominent nucleoli and abundant eosinophilic cytoplasm were noted. The tumour was staged as T2N1Mx. The patient was also positive for squamous cell carcinoma in situ of the cervix. IHC showed the tumour to be negative for ER and PR. A diagnosis of apocrine carcinoma was rendered (Figure 5).
DISCUSSION
Breast cancer is a heterogeneous disease, comprising multiple entities associated with distinctive histological and biological features, clinical presentations and behaviours and responses to therapy [7].
Most of the research is primarily focused on invasive ductal carcinoma of no special type, as it accounts for nearly 50-80% of cases.
Histological special types of breast cancer account for up to 25% of all invasive breast cancers. Owing to the relative rarity of these special types, the information about the biology and clinical behaviour of breast cancers conveyed by histological type has often not been taken into account.
Herein we made an attempt to present our institutional perspective of few special types.
Cribriform Carcinoma
It is a well-differentiated, low-grade variant of invasive duct carcinoma that exhibits a sieve-like growth pattern with distinctive holes in between the cancer cells.
Epidemiology: Accounts for up to 3.5 % of the breast cancers [8].
Immunohistochemistry: ER is positive in 100 % and PR in nearly 69 % of cases [9].
Clinical Features:
The mean age is 53-58 years, with a low frequency of axillary node metastases.
Prognosis:
The outcome is remarkably favourable.
Signet Ring Cell Carcinoma
Until 2003, primary signet ring cell carcinoma of the breast was placed under 'mucin-producing carcinomas' by the WHO. It is a very rare tumour, which shows a significant number of tumour cells resembling gastric carcinoma cells, with intracellular mucin displacing the nucleus to the periphery [10].
Clinical features: Deemed to be more aggressive.
Prognosis:
Very little is known about the prognostic outcome of primary signet ring cell carcinomas.
Squamous cell carcinoma (SCC)
A breast carcinoma entirely composed of metaplastic squamous cells that may be keratinizing, non keratinizing or spindled; they are neither derived from the overlying skin nor represent metastases from other sites [10].
Epidemiology: Metaplastic carcinomas account for less than 1% of all invasive mammary carcinomas.
Immunophenotype: Nearly all SCCs are negative for both estrogen and progesterone receptors [12]. Positivity for high molecular weight cytokeratins confirms the epithelial nature of these cells.
Apocrine Carcinoma
It is a carcinoma showing cytological and immuno histochemical features of apocrine cells in > 90 % of the tumour cells. Apocrine cells are characterized by abundant cytoplasm that can be densely eosinophilic, granular or vacuolated and by large nuclei with prominent nucleoli.
Structurally, it tends to be poorly differentiated.
Epidemiology: Based on light microscopy alone, the incidence is only 0.3-4% [13]. This variability is probably the result of inconsistent diagnostic criteria.
Immunophenotype: Usually ER negative and AR (Androgen Receptor) positive.
Clinical features: There is no difference between the clinical, mammographic and gross features among apocrine and nonapocrine lesions.
Prognosis: Prognosis is determined mainly by conventional prognostic factors such as grade, tumour size and nodal status [14].
CONCLUSIONS
To conclude, as for these rare conditions, future research should be directed at collecting and evaluating larger cohorts of patients, with the aim of better understanding the biological pathways and clinical behavior of these uncommon histological types of breast cancer, in order to improve clinical management strategies and outcomes.
Toward Better Preparedness of Mediterranean Rainfed Agricultural Systems to Future Climate-Change-Induced Water Stress: Study Case of Bouregreg Watershed (Morocco) †
Improving the preparedness of agricultural systems for future climate-change-induced phenomena, such as drought-induced water stress, and the predictive analysis of their vulnerability, is crucial. In this study, a hybrid modeling approach based on the SWAT model was built to understand the response of major crops and streamflow in the Bouregreg catchment in Morocco to future droughts. During dry years, the simulation results showed a dramatic decrease in water resources availability (up to −40%), with uneven impacts across the study catchment area. Crop-wise, significant decreases in rainfed wheat productivity (up to −55%) were simulated during future extremely dry growing seasons.
Introduction
At a global scale, the potential impact of climate change on major systems is an important question for decision makers, investors and farming communities. Water resources directly affect public health and safety, agriculture and food security, energy, industry, biodiversity and ecosystems, and thus influence the socio-economic development of nations. Water resources in Morocco are particularly vulnerable to human pressures; increases in population, tourism, agricultural development and industrial growth are some examples of the challenges facing water resources in the country. In addition to these factors, the changing climate and all resulting phenomena (such as drought) represent a further threat that can have a significant impact on water resources in Morocco and all areas within a similar context. Farming activities are of crucial importance to the country and are generally vulnerable to climate variability, which can affect the national economy in some situations. The objective of this study is to provide a representative case study of drought-induced water stress vulnerabilities and a catchment-scale impact assessment for the future, with a focus on water and rainfed crop systems during dry years.
Study Area
The study catchment area was the Bouregreg, a 9656 km² watershed located in the northwestern part of Morocco; it is one of the most important watersheds in the country [1].
Essentially, there are different climatic zones in the study catchment area, influenced by the ocean: subhumid in the extreme western part, semiarid in the central part, and arid in the south-east [1]. The annual potential water resources of the Bouregreg watershed amount to 720 million m³, coming mostly from surface water. The stream network of the Bouregreg catchment area is organized into 3 main rivers: the Korifla, Grou and Bouregreg rivers (Figure 1) [1,2]. In terms of land use, oak forests cover approximately 24% of the total area of the study watershed; farming activity is very important (olive trees, pulse crops and wheat, in addition to minor parcels where some irrigated vegetables and grape crops are grown). Range and pasture lands are distributed across the watershed [1,3].
Models Presentation
For the investigation of hydrological processes, we used the Soil and Water Assessment Tool (SWAT) model. SWAT is a hydrological model developed by the United States Department of Agriculture [4]. It simulates the cycle of water and sediments in large-scale watersheds and large basins with different soil types and management conditions; it also simulates the effect of agricultural practices, or any other anthropogenic actions that can affect the water cycle [5,6]. Regarding drought analysis, interpolated rainfall outputs of SWAT over the Bouregreg watershed were used to compute drought indicators such as the standardized precipitation index (SPI) over the study periods. In this study, the SWAT model was run over the period from January 1990 to December 2005 (data collected from the local water department's measurements); the first two years were allocated to the warm-up process, and the period from January 1992 to December 2001 was dedicated to model calibration. Validation was performed over the period from January 2002 to December 2005 (due to the limited amount of measured data). Following sensitivity analysis, streamflow and crop yields were examined during the calibration process. Maximal agreement between observed and predicted processes was the ultimate goal.
In order to assess future climate projections in the study watershed, dynamically downscaled output data from the Global Climate Model CNRM-CM5 were used [7]. Two representative concentration pathways (RCPs) were selected, RCP4.5 and RCP8.5.
In order to assess future climate projections in the study watershed, dynamically downscaled output data from the Global Climate Model CNRM-CM5 were used [7]. Two representative concentration pathways (RCPs) were selected, RCP4.5 and RCP8.5. Baseline time series for both daily temperature (max and min) and rainfall data were from January 1990 to December 2010; for future datasets, the 2030-2050 period was collected from the outputs of the Global Climate Model CNRM-CM5. The SPI is a statistical monthly indicator that compares the cumulated precipitation during a period of a number of months with long-term cumulated rainfall distribution for the same location and accumulation period [8]. In this work, SPI-12 was calculated using the 2018 Version of SPI Generator application [9] of the National Drought Management Centre (University of Nebraska, USA) [8]. Once a drought event was determined with its start and end month, its duration and magnitude were then assigned. The duration of a drought event was equal to the number of months between its start (included) and end months (not included); the drought magnitudes were calculated as the sums of the SPI values that were always negative over consecutive months, and the severity as the maximum SPI value of each episode. Baseline time series for both daily temperature (max and min) and rainfall data were from January 1990 to December 2010; for future datasets, the 2030-2050 period was collected from the outputs of the Global Climate Model CNRM-CM5. The SPI is a statistical monthly indicator that compares the cumulated precipitation during a period of a number of months with long-term cumulated rainfall distribution for the same location and accumulation period [8]. In this work, SPI-12 was calculated using the 2018 Version of SPI Generator application [9] of the National Drought Management Centre (University of Nebraska, USA) [8].
Future Droughts Analysis
Once a drought event was determined, with its start and end months, its duration and magnitude were assigned. The duration of a drought event was equal to the number of months between its start month (included) and its end month (not included); the magnitude was calculated as the sum of the (consistently negative) SPI values over those consecutive months, and the severity as the most extreme SPI value of each episode. According to the distribution of SPI-12 values, several drought events were expected (with different magnitudes and durations) under both scenarios. Both scenarios showed three drought classes (dry, severely dry and extremely dry), but with different durations and magnitudes.
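As an illustration of these definitions, the following Python sketch (our own, not the SPI Generator's code) extracts events from a monthly SPI series, assuming months with SPI below a threshold belong to a drought run and taking severity as the most extreme (most negative) SPI value of the episode:

```python
import numpy as np

def drought_events(spi, threshold=0.0):
    """Extract drought events from a monthly SPI series.

    An event spans consecutive months with SPI below `threshold`; duration is
    the number of such months, magnitude the sum of the (negative) SPI values,
    and severity the most extreme SPI value of the run.
    """
    events, start = [], None
    for i, v in enumerate(spi):
        if v < threshold and start is None:
            start = i                        # a drought run begins
        elif v >= threshold and start is not None:
            run = spi[start:i]               # the run just ended
            events.append({"start": start, "duration": i - start,
                           "magnitude": float(np.sum(run)),
                           "severity": float(np.min(run))})
            start = None
    if start is not None:                    # series ends inside a drought
        run = spi[start:]
        events.append({"start": start, "duration": len(spi) - start,
                       "magnitude": float(np.sum(run)),
                       "severity": float(np.min(run))})
    return events

events = drought_events([0.3, -0.5, -1.6, -0.8, 0.2, 1.0])
# -> one event: duration 3, magnitude -2.9, severity -1.6
```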
Drought Impact on Water Availability
During the 2035-2050 period, several decreases in the annual total water yield (TWYD) occurred, as shown in Figure 3. Under RCP4.5, the expected drought episode from January 2036 to April could affect water resources in the study watershed during 2036 and 2037 (67 mm and 57 mm, respectively). During the same simulation period under RCP8.5, the drought period from October 2045 to February 2047 could reduce the average TWYD in the watershed to 30 mm and 33 mm in 2045 and 2046, respectively.
Future Crop Performance Analysis
Averages of wheat yields over the whole simulation periods were expected to decrease in most scenarios in comparison to the current one, but with very high year-to-year variability (Figure 4). In Morocco, and more specifically in the Bouregreg watershed, wheat is sown between autumn and winter, as most rain falls in wintertime and temperatures are generally mild in this season. Early stress (excessive temperature and low soil moisture) (Figure 2) limits the tiller number (low ears per unit area), and later stress (after anthesis) can reduce the size of individual grains and their number [10].
Discussion
The projected droughts over the Bouregreg watershed (BW) revealed that the watershed is likely to experience several droughts of different severity levels under both climate change scenarios. These results endorse the findings of Gudmundsson and Seneviratne [11], who reported that enhanced greenhouse forcing has contributed to increased drying in the Mediterranean region (including Northern Africa), and that this trend is likely to keep increasing over the century, especially under high levels of global warming.
Conclusions
The hydrology and agriculture of the Bouregreg watershed are expected to experience significant impacts from climate-change-induced water stress; high year-to-year variability in water and crop yields is expected due to several drought events of different severity levels. This context could lead to more challenging water resource situations in the watershed, mainly during the severe and long drought episodes.
Hypercalcemia-Induced Hypokalemic Metabolic Alkalosis in a Multiple Myeloma Patient: The Risk of Furosemide Use
Hypercalcemia is often seen in patients with malignancies, and in the past its treatment traditionally included loop diuretics. Clinically, patients with hypercalcemia frequently present with polyuria and volume contraction, which may be further exacerbated by diuretic therapy. In the laboratory, hypercalcemia has been shown to activate the calcium-sensing receptor in the thick ascending limb of Henle, inactivate the sodium-potassium-2-chloride (NKCC2) cotransporter, and induce a hypokalemic metabolic alkalosis, an effect similar to that of the loop diuretic furosemide. We now report what may well be the first clinical correlate of this laboratory finding, in a patient who developed a hypokalemic metabolic alkalosis as a consequence of severe hypercalcemia due to multiple myeloma and whose metabolic derangement was corrected without the use of a loop diuretic, which might have exacerbated the electrolyte abnormalities.
Introduction
Hypercalcemia is a frequent finding in patients with cancer, occurring in approximately 20-30% of patients [1]. The malignancies most commonly associated with hypercalcemia are breast cancer, lung cancer and multiple myeloma. Hypercalcemia itself can contribute to other complications such as nausea, constipation, polyuria, polydipsia, nephrolithiasis and renal insufficiency. In addition, hypercalcemia is also associated with a metabolic alkalosis, which may be due to buffers such as calcium carbonate and phosphates released from bone involved with metastatic disease. Hypokalemia is generally not seen with metastatic bone disease. In the laboratory, hypercalcemia activates the calcium-sensing receptor and has been shown to cause a hypokalemic metabolic alkalosis [2]. The mechanism by which this occurs has been well worked out in vitro: the net effect is inactivation of the NKCC2 cotransporter in the thick ascending limb of Henle, similar to the effect of furosemide. To date, however, to our knowledge there has been no clinical correlate of this laboratory finding. We report a case of multiple myeloma complicated by severe hypercalcemia associated with volume contraction, hypokalemia and a metabolic alkalosis, likely associated with activation of the calcium-sensing receptor, perhaps the first clinical description of this laboratory finding.
Case Presentation
A 69-year-old African-American man presented with a 1-month history of progressive generalized weakness, fatigue and anorexia. The patient denied a history of constipation, polyuria, polydipsia, nausea or vomiting. He also did not ingest any nonsteroidal anti-inflammatory drugs. The patient was known to have longstanding hypertension with stage 2 chronic kidney disease due to hypertensive/vascular disease. He had recently sustained left rib fractures following a motor vehicle accident. His home medications included extended-release nifedipine, dutasteride, tamsulosin, ferrous sulfate and acetaminophen with codeine on a PRN basis. The patient was not on a diuretic. He did not take any over-the-counter medications; specifically, he took no calcium-containing antacids.
On examination, the patient was alert and oriented ×3. Blood pressure was 120/70 mm Hg without any orthostatic changes demonstrable while on a saline infusion begun in the emergency room 1-2 h before. His home medications for his hypertension were discontinued on admission because of the observed relative hypotension. Cardiac, respiratory, abdominal and neurologic examinations were normal. There was no peripheral edema. Skin turgor was poor. The patient had reproducible left-sided chest pain associated with his recent motor vehicle accident. No other skeletal pain could be elicited on exam.
Initial blood work demonstrated a serum creatinine concentration of 433.2 μmol/l, a serum calcium concentration of 4.38 mmol/l, a serum phosphorus concentration of 1.06 mmol/l and a normal serum albumin level of 40 g/l. His serum creatinine and calcium concentrations had been 123.8 μmol/l and 2.35 mmol/l, respectively, 5 months prior to hospitalization. In addition, the patient presented with a serum bicarbonate concentration of 32 mmol/l and a serum potassium concentration of 2.5 mmol/l. His hypokalemic metabolic alkalosis was confirmed by a venous blood gas.
The workup to elucidate the etiology of his hypercalcemia revealed an intact PTH level appropriately suppressed to 0.95 pmol/l (normal range: 1.06-6.9 pmol/l) and a PTH-related peptide level of 27 ng/l (normal range: 14-27 ng/l). The 25-hydroxyvitamin D and 1,25-dihydroxyvitamin D levels were 92.4 nmol/l (within the normal range) and <19.2 pmol/l (low), respectively. The patient demonstrated 7 g of proteinuria by urine protein-to-creatinine ratio. The urinalysis by dipstick, however, revealed only 1+ proteinuria. Serum protein electrophoresis revealed 2 abnormal bands within the beta and gamma regions, and free light chain assays were elevated for both kappa and lambda, with an elevated kappa/lambda ratio.
A renal sonogram revealed normal-sized kidneys with bilateral renal cysts and no evidence of hydronephrosis. A CT scan of the chest, abdomen and pelvis without intravenous contrast was remarkable for numerous lytic lesions in the midsternum, in the right and left ribs, in the upper and lower thoracic and lumbar spine, and in the scapulae. A subsequent bone marrow biopsy demonstrated a plasma cell dyscrasia with a CD38 monoclonal kappa plasma cell population.
The initial hypercalcemia, metabolic alkalosis and hypokalemia were treated with intravenous saline and potassium chloride supplementation; no intravenous furosemide was utilized. Within 3 days, his hypokalemic metabolic alkalosis had totally resolved (serum potassium concentration 4.6 mmol/l and serum bicarbonate concentration 21 mmol/l). On day 5 of his hospitalization, chemotherapy was begun with bortezomib and dexamethasone. Although calcitonin and zoledronic acid were administered shortly after admission, he remained hypercalcemic until the 6th day of hospitalization. With continued therapy for his underlying malignancy he remained normocalcemic, and his renal function returned to its baseline level 5 months after discharge.
Discussion
Malignancy-induced hypercalcemia occurs through at least 3 mechanisms: it may be the result of osteolytic metastases with local release of cytokines, of tumor secretion of parathyroid hormone-related peptide, or of tumor production of 1,25-dihydroxyvitamin D [1,3,4]. Hypercalcemia in multiple myeloma may be the result of bone marrow infiltration and the release of osteoclast-activating factors by the plasma cells [3]. This may be exacerbated by the attendant dehydration and renal involvement seen with hypercalcemia in myeloma.
Our patient had a normal parathyroid-related peptide level, nonelevated vitamin D levels and low normal phosphorus levels. With metastatic bony lesions, hyperphosphatemia is generally observed associated with the release of calcium phosphate and calcium carbonate from bone. This syndrome mimics the metabolic alkalosis seen with the calcium alkali syndrome [5]. Such a case was recently reported in a patient with multiple myeloma [6].
Of the above-described mechanisms responsible for hypercalcemia with malignancy, our patient does not appear to fit into any of the categories. Hormonal secretion was not the cause of hypercalcemia, since parathyroid hormone-related peptide, 1,25-dihydroxyvitamin D and 25-hydroxyvitamin D were either normal or low. Although osteolytic lesions were present, the patient did not have hyperphosphatemia. The metabolic alkalosis observed with lytic lesions is secondary to the release of calcium phosphate and calcium carbonate from bone, resulting in hyperphosphatemia. Our patient's phosphorus was low normal, making the lytic lesions unlikely to be the cause of the observed metabolic alkalosis.
Interestingly, although there are sufficient laboratory data implicating hypercalcemia as a cause of hypokalemic metabolic alkalosis by activating the calcium-sensing receptor (see below), little attention has been given to the possibility that this occurs clinically. In our case, hypercalcemia, likely through its action on the renal calcium-sensing receptor, resulted in volume contraction, the generation of a hypokalemic metabolic alkalosis, and acute renal failure by the mechanism described below. Restoration of volume and potassium supplementation completely corrected the metabolic abnormality, and no furosemide was used to correct the associated hypercalcemia.
Calcium is a divalent ion and in its ionized form acts on cellular structures via the extracellular calcium-sensing receptor. The calcium-sensing receptor is a plasma membrane-bound G-protein-coupled receptor present on the cell membrane of various tissues, including the chief cells of the parathyroid glands, bone, kidney, bone marrow, gut and others. In vitro studies have demonstrated that small incremental increases in extracellular calcium concentration can activate the calcium-sensing receptor and alter the function of the nephron and parathyroid glands so as to re-achieve normal serum calcium levels [2,7,8]. Hypercalcemia activates the calcium-sensing receptor on the basolateral surface of the thick ascending limb of Henle and generates arachidonic acid metabolites. These metabolites block the secretory potassium channel, and thus the NKCC2 cotransporter, which requires luminal potassium to function fully, is inhibited. This results in urinary losses of sodium, potassium, magnesium and calcium, and in volume contraction due to an inability to concentrate the urine. As a consequence of enhanced delivery of sodium to the distal convoluted tubule, under the effect of aldosterone, which is elevated in the volume-contracted state, a hypokalemic metabolic alkalosis ensues, similar to that seen in our patient. This effect is analogous to that seen with furosemide. To our knowledge, no such clinical correlation with hypercalcemia has been described to date. However, a drug-induced acquired Bartter-like syndrome has been associated with both gentamicin and amikacin administration in humans [9,10]. Both of these drugs are polyvalent cationic molecules (+2), similar to calcium, and the administration of both has been shown to cause a metabolic alkalosis and hypokalemia with polyuria. The authors suggest that activation of the calcium-sensing receptor in the thick ascending limb of Henle was responsible for the metabolic abnormalities in these cases, similar to hypercalcemia and furosemide.
In summary, our case may well represent hypercalcemia causing a hypokalemic metabolic alkalosis and volume contraction by activation of the calcium-sensing receptor. The patient responded to intravascular volume expansion and potassium supplementation, with full correction of the metabolic alkalosis within 72 h. Classically, the initial treatment of hypercalcemia included volume expansion and the use of furosemide to enhance urinary calcium excretion. Recently, the use of furosemide has been de-emphasized [11]. In fact, in the presence of hypercalcemia complicated by a metabolic alkalosis, in our opinion furosemide is contraindicated: the hypercalcemia has presumably already impaired the function of the NKCC2 cotransporter through its activation of the calcium-sensing receptor. Furosemide will further impair sodium reabsorption and worsen volume contraction; in fact, it may aggravate the metabolic alkalosis and may result in cardiac arrhythmias, and thus its use is not warranted.
The development of new medications, particularly effective chemotherapeutic agents (e.g. bortezomib), and critical review of traditional therapies have changed the treatment approach to severe hypercalcemia, and there is now only a limited role for aggressive isotonic fluid administration with furosemide [11]. The possibility of hypercalcemia itself causing metabolic alkalosis and hypokalemia needs to be seriously considered, and exacerbating this with furosemide must be meticulously avoided.
Statement of Ethics
The authors have no ethical conflicts to disclose.
Association of metabolic syndrome with intravesical prostatic protrusion and international prostatic severity symptoms score in patients with benign prostatic enlargement
It is known that benign prostatic enlargement (BPE) gives rise to LUTS and bladder outlet obstruction (BOO). Prostate enlargement is also manifested by the development of intravesical prostatic protrusion (IPP), a morphological change resulting from enlarged lateral lobes and median lobe. It has also been suggested that a prostatic mass with greater protrusion causes more severe voiding dysfunction by causing more serious BOO. Many investigators have made efforts to evaluate the severity of BOO or overactive bladder in a non-invasive manner; for example, by using transabdominal ultrasonography to estimate bladder weight, surface area, bladder wall thickness and IPP.
INTRODUCTION
It is known that benign prostatic enlargement (BPE) gives rise to LUTS and bladder outlet obstruction (BOO). 1 Prostate enlargement is also manifested by the development of intravesical prostatic protrusion (IPP), a morphological change resulting from enlarged lateral lobes and median lobe. 2 It has also been suggested that a prostatic mass with greater protrusion causes more severe voiding dysfunction by causing more serious BOO. 3 Many investigators have made efforts to evaluate the severity of BOO or overactive bladder in a non-invasive manner; for example, by using transabdominal ultrasonography to estimate bladder weight, surface area, bladder wall thickness and IPP. 4,5 Multiple reports have examined the utility of IPP as a marker of BOO (which should be confirmed by urodynamic or video-urodynamic studies), and IPP has been reported to be a useful anatomical measure for the assessment of BOO. 4 IPP is useful in evaluating BOO because of its good correlation with conventional pressure-flow studies and with detrusor function. 6 According to previous studies, IPP is significantly correlated with increased total prostate volume (TPV), greater obstructive symptoms, decreased maximum urinary flow rate (peak flow), and increased post-void residual urine volume (PVR), which suggests that IPP may have clinical usefulness in predicting the need for treatment. 7 The aetiology and pathogenesis of LUTS/BPE remain unclear. Well-designed studies are needed to assess the effect of morphological features of BPE on LUTS according to the presence of metabolic syndrome (MetS) and to determine whether there is a significant correlation of LUTS (IPSS), TPV and IPP with MetS. The aim of this study is to assess the association of MetS and its components with IPP, TPV and IPSS.
METHODS
This is a single-centre cross-sectional observational study conducted in the Department of Urology, GMCH, Guwahati, Assam, India between March 2016 and May 2018. A total of 114 consecutive men aged >50 years presenting with lower urinary tract symptoms (LUTS) suggestive of BPE (PSA 0-4 ng/ml) were recruited with informed consent.
The exclusion criteria included: 5-α-reductase inhibitor therapy, neurogenic bladder dysfunction, history of prostatic and/or urethral surgery, history of bladder cancer, gross haematuria and urinary infection, PSA >4 ng/ml and diagnosis of prostate cancer, previous lower urinary tract or pelvic surgery, and radiation therapy. Men with incomplete data were excluded from the statistical analysis.
Evaluation of the participants in the study included DRE, IPSS, and USG KUBP. Each participant completed the IPSS questionnaire and PSA values were obtained. TPV and IPP were measured using USG KUBP. IPP was assessed by measuring the vertical distance from the tip of the protrusion to the circumference of the bladder at the base of the prostate gland and was classified as grade I (<5 mm), II (5-10 mm) or III (>10 mm). TPV was automatically calculated in mm³ after measurement of the largest antero-posterior (height, H), transverse (width, W) and cephalocaudal (length, L) diameters, using the formula H×W×L×0.52. All measurements were carried out with the bladder containing approximately 150 ml of urine, which was confirmed after ultrasonography by measuring voided urine. Blood samples were drawn from the participants after an overnight fast, and serum PSA, fasting blood glucose, high-density lipoprotein (HDL), triglyceride levels and blood pressure were recorded. LUTS were evaluated by culturally and linguistically validated versions of the IPSS. LUTS severity was classified as mild (IPSS 0-7), moderate (IPSS 8-19) and severe (IPSS 20-35).
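These measurement and grading conventions can be written down directly; the following is a minimal sketch assuming the ellipsoid formula and the cut-offs stated above (function and variable names are illustrative, not from the study):

```python
def total_prostate_volume(h_mm, w_mm, l_mm):
    """Ellipsoid estimate of TPV in mm^3: H x W x L x 0.52."""
    return h_mm * w_mm * l_mm * 0.52

def ipp_grade(ipp_mm):
    """IPP grade: I (<5 mm), II (5-10 mm), III (>10 mm)."""
    if ipp_mm < 5:
        return "I"
    return "II" if ipp_mm <= 10 else "III"

def luts_severity(ipss):
    """LUTS severity from IPSS: mild 0-7, moderate 8-19, severe 20-35."""
    if ipss <= 7:
        return "mild"
    return "moderate" if ipss <= 19 else "severe"
```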
International Diabetes Federation (IDF) criteria were used to define MetS in the presence of central obesity (defined as waist circumference ≥94 cm for the European ethnic group) plus two or more of the following four characteristics: triglycerides ≥150 mg/dl or treatment for hypertriglyceridaemia; HDL cholesterol <40 mg/dl or treatment for reduced HDL cholesterol; blood pressure ≥130/85 mmHg or current use of antihypertensive medication; and fasting blood glucose >100 mg/dl or previous diagnosis of type 2 diabetes mellitus. All the MetS components were considered both individually (single variables above vs below the defined thresholds) and combined, according to MetS status (presence or absence). A poor response to medication was defined as the lack of a 35% decrease in IPSS after 12 weeks of α-blocker therapy.
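The IDF definition above reduces to a simple rule. A hedged sketch (argument names and unit choices are illustrative assumptions):

```python
def has_metabolic_syndrome(waist_cm, triglycerides_mgdl, hdl_mgdl,
                           systolic_bp, diastolic_bp, glucose_mgdl,
                           tg_treated=False, hdl_treated=False,
                           bp_treated=False, diabetic=False):
    """IDF definition: central obesity (waist >= 94 cm, European cut-off)
    plus at least two of the four additional components."""
    if waist_cm < 94:          # central obesity is mandatory under IDF
        return False
    components = [
        triglycerides_mgdl >= 150 or tg_treated,
        hdl_mgdl < 40 or hdl_treated,                     # male cut-off
        systolic_bp >= 130 or diastolic_bp >= 85 or bp_treated,
        glucose_mgdl > 100 or diabetic,
    ]
    return sum(components) >= 2
```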
Statistical analysis
Statistical analysis was performed using SPSS version 21.0 (IBM Corp., Armonk, NY, USA). Continuous variables are presented as median (interquartile range), and differences between groups were tested using Student's independent t-test or the Mann-Whitney U-test according to their normal or non-normal distribution, respectively (normality of the variables' distribution was tested by the Kolmogorov-Smirnov test). Age-adjusted linear regression models were used to verify factors associated with IPP and TPV. Multivariate logistic regression models were constructed to identify predictive factors of IPP and TPV by including all collected variables.
RESULTS
Characteristics of the participants in terms of age, PSA value, TPV, IPSS, IPP grade, systolic blood pressure, fasting blood sugar, triglycerides, HDL cholesterol and obesity, and their correlation with MetS, are shown in Table 1. Of the participants, 94 (82.4%) were on α-blocker therapy and 102 (89.4%) underwent surgery. MetS was present in 36 of 114 participants (31.5%), with an average age of 71.3 years; patients with MetS had an average TPV of 64.6 g. Figure 1 shows the correlation of systolic blood pressure, fasting blood sugar, triglycerides, HDL and obesity with the IPSS groups: the IPSS III group had the highest proportions of hypertensive (65.2%), diabetic (71.7%), hyperlipidaemic (58.7%), low-HDL (34.8%) and obese (56.5%) patients, while the IPSS I group had the lowest (7.1% hypertensive, with no diabetic, hyperlipidaemic, obese or low-HDL patients). The differences between the IPSS I, II and III groups in these parameters were significant (p<0.001). Figure 2 shows that MetS was significantly associated with IPSS (p<0.001), with the highest number of patients in group III (54.3%) and no MetS patients in group I. The correlation of systolic blood pressure, fasting blood sugar, triglycerides, HDL and obesity with the IPP groups is shown in Figure 3: the IPP III group had the highest proportions of hypertensive (80%), diabetic (86.7%), hyperlipidaemic (63.4%), low-HDL (40.0%) and obese (63.3%) patients, and the IPP I group the lowest (7.8%, 11.8%, 15.7%, 9.8% and 15.7%, respectively). The differences between the IPP I, II and III groups in these parameters were significant (p<0.001).
DISCUSSION
In the present study, we investigated the correlation of hypertension, hyperglycaemia, hyperlipidaemia, low HDL and obesity with the IPSS and IPP groups. MetS was found to be associated not only with an increase in prostate size, but also with increasing grade of IPSS and IPP, supporting the association between metabolic alterations and a clinical increase in prostate volume. A previous study reported that patients with MetS had a higher likelihood of an IPSS ≥20 (p<0.05). 9 Their results correlated with our study.
Gacci et al showed a significant difference in MetS-dependent prostate growth between men with a prostate volume >30 ml and <30 ml (3.4 ml vs 1.99 ml, respectively). 10 Moreover, their meta-regression analysis suggested that obese, dyslipidaemic and elderly patients were more at risk of MetS being a determinant of their increased prostate size. The present study likewise suggested that obese and dyslipidaemic patients had high IPSS and IPP. They also found that MetS-induced differences in prostate volume were greater in patients with metabolic disorders. This inference correlated well with our study, which showed that MetS patients had IPSS III in 54.3% and IPP III in 63.3% of cases.
The features of MetS that represent the trigger causes associated with BPE/LUTS are central obesity, lipid disorder and hyperinsulinaemia. 11 These alterations include an increase in the activity of the sympathetic nervous system and in the muscle tone of the prostate, resulting in more severe LUTS independent of prostate enlargement. 12,13 Furthermore, reduced HDL cholesterol and increased triglyceride levels were significantly related to higher prostatic inflammation through secretion of interleukin-8 in response not only to oxidized LDL but also to insulin, indicating that different MetS features could synergistically boost inflammation and tissue remodelling in BPH/LUTS. 14,15 Lotti et al showed that waist size and reduced HDL cholesterol level were significantly associated with prostate volume. In addition, similarly to the TPV results, transition zone volume (TZV) was significantly associated with reduced HDL cholesterol levels (hazard ratio 1.15). 13 A similar correlation was found in our study.
St Sauver et al, in a retrospective population-based cohort study of 2447 men aged 40-79 years, showed that statin therapy was associated with a 6.5-7 year delay in the new onset of moderate/severe LUTS/BPE. 14 Dyslipidaemia could have a detrimental effect on prostate cells, boosting prostate inflammation, a key factor in the development and progression of BPH/LUTS.
Recently, IPP has been studied as a non-invasive test for diagnosing BOO in men with LUTS. 4,7 A systematic review of the literature reported that five studies used a threshold of 10 mm to define BOO and found diagnostic accuracy similar to uroflowmetry alone, with a median sensitivity of 67.8%, specificity of 74.8%, positive predictive value of 73.8% and negative predictive value of 69.3%. 16 Kyung et al, in a longitudinal analysis over a 5-year period, showed that changes in weight and MetS status were significantly associated with the prostate growth rate; moreover, the prostate growth rate could be decreased by controlling for MetS. 17 It could be speculated that counteracting the release of inflammatory mediators by adipose tissue, increasing HDL cholesterol and decreasing triglyceride levels could reverse the increase in prostate volume. New evidence suggests that metformin could also reduce metabolic stress conditions and activate lipophagy mechanisms through AMPK-independent pathways. 14 We are still far from this application in patients affected by BPH/LUTS, but targeting coexisting inflammation is crucial for this condition. 14,18 The present study has several limitations. Firstly, we did not investigate the role of cytokines and inflammatory markers in patients with IPP, or their relationship with MetS. Secondly, the study was cross-sectional, and we have yet to demonstrate the impact of metabolic alterations on the onset of IPP in a longitudinal model. Thirdly, we did not adjust for the use of statins or metformin. We did, however, determine that MetS is associated with an increase in IPP together with an increase in prostate volume, which may explain the lack of response to medical therapy in patients with metabolic alterations and LUTS/BPE.
CONCLUSION
We found that metabolic alterations, including low HDL cholesterol, hypertension, high triglycerides, hyperglycaemia and obesity, are associated with an increased risk of IPP ≥10 mm and IPSS ≥19. Moreover, an IPP ≥10 mm and an IPSS ≥19 were associated with MetS and with a lack of satisfaction with therapy. These results offer new insights into the link between metabolic alterations and BPE.
Pullout Behaviour of Screw and Suction Piles in Clayey Soil
Screw piles are mostly used in foundations to provide structural stability against axial compression, uplift tension, overturning moments and lateral forces. Where foundation material is poor, or in remote locations with reclaimed soil deposits, screw piles are commonly used as structural foundations. The behaviour of multiple anchors in clayey soils at different embedment depths and with different pitches has not previously been studied extensively. In this project an experimental programme on model screw anchors was conducted. Laboratory tests were chosen because they allow close control of at least some of the variables encountered in practice; in this way, trends and behaviour patterns observed in the laboratory can be of value in developing an understanding of the performance of screw piles at larger scales. In addition, observations from laboratory tests on different screw piles can be compared directly on the same graph.
Introduction

General
Quite often, civil engineering construction is required even in poor sub-soil deposits and reclaimed soil deposits. Such deposits mainly include fine-grained soils such as soft, highly compressible clays, marine deposits along coastal belts and low-lying areas, and even loose sandy silts. Screw piles, also known as screwed, screw-in, torque or helical piles, piers or anchors, consist of a steel centre pipe with one or more steel screw flights welded near the toe of the pipe and at intervals along the shaft.

Geotechnical Engineering Problems and Solutions

A structural system comprises the superstructure, the substructure and the foundation soil. Such a structural unit, founded on or in contact with a soil mass, presents two categories of engineering problems:
1. Stability problems.
2. Deformation problems.
The stability of a structure refers to its security against shear failure of the foundation soil mass to which loads are ultimately transferred. Furthermore, even under working loads, the superstructure-foundation-soil system must perform satisfactorily, with deflections and deformations within permissible limits; analysis and prediction of this deformation response constitute the deformation problems. Since the substructure is in direct contact with the soil medium, the most scientific and rational approach to obtaining realistic solutions to both categories of geotechnical engineering problem is the evaluation and prediction of soil-foundation interaction. It must be realized that the overall behaviour of the structural system is governed by how the soil and the foundation structural elements interact with each other through the development of stresses and strains in both components of the system. With the development of modern, sophisticated methods of analysis and design, and with the availability of high-speed digital computers, complex problems of soil-structure interaction can be tackled with confidence. The majority of past theoretical and experimental research, however, has focused on predicting anchor behaviour and capacity in sand; in contrast, the study of anchors embedded in clay has attracted only limited attention. Most results from studies of anchors in clay consist either of simple analytical solutions or of empirical findings derived from laboratory model tests. The uplift capacity of anchors is typically expressed in terms of a breakout factor, which is a function of the anchor shape, embedment depth, overburden pressure and the soil properties.
Several authors have conducted laboratory research into other aspects of the behaviour of square, rectangular, circular and strip horizontal plate anchors in both cohesionless and cohesive soils. They have also presented a new analytical procedure to predict the breakout load and the load-displacement relationship for strip, square and circular horizontal anchors in both cohesionless and cohesive soils. The uplift capacity of anchors is typically expressed in terms of a breakout factor, which is a function of the anchor shape, embedment depth, depth ratio, overburden pressure and soil properties. A number of authors have conducted laboratory research into further aspects of plate anchor behaviour; parameters studied include the influence of soil suction, layered soil, sloping ground and long-term loading. Subbarao et al (1988), Singh and Siavoshnina (1988), and Mandal and Sah (1992) conducted tests on a combined anchored earth and geotextile system; semi-Z-shaped mild steel anchors with non-woven geotextile and geogrids were used in the model tests.
Methodology of Load Transfer Mechanism
During loading, the load applied to the pile is transferred to the surrounding soil; thus, the ultimate load-carrying capacity of the pile depends on the strength of the soil. Soils derive their strength, and ultimately their load capacity, from several characteristics: the internal friction angle φ, the adhesion factor α, the effective unit weight of the soil γ', and the undrained shear strength Su. Part of the load is transferred to the soil through the pile shaft (adhesion between shaft and soil) and part through the helix (bearing). If there is more than one helix, the remainder is transferred to the soil through the cohesion between the column of soil between the helixes and the surrounding soil. In clayey soils especially, when the piles are subjected to uplift, a suction force will also be present, which increases the pile uplift capacity (a minimal capacity sketch is given after the list below). The main design considerations are:
1. Design responsibilities: the geotechnical engineer decides the pile diameter, shaft length, helix diameter and helix depth, while the structural engineer designs the pile to withstand the design load and the torque required for installation, and sizes the helix (or helixes) thickness and the welds.
2. Site-specific soil information: soil type, soil description, soil classification, water table level and depth of frost penetration.
3. Pile geometry: the helix diameter and number of helixes are selected based on the soil parameters and the load the pile is designed to support.
4. Pile configuration: pile shaft, helix diameter and thickness, number of helixes, and embedment depth.
5. Estimation of installation torque: the central steel pipe shaft transmits the applied torque during installation and transfers the axial compressive or tensile loads to the helixes during loading.
6. Other factors: seismic considerations, soil chemistry, etc.
Screw piles, like any other underground steel structure, need special attention paid to corrosion protection. This can be provided by adding extra thickness beyond the structurally required thickness, by hot-dip galvanization, or by a cathodic protection system, whichever suits the needs and requirements of the specific project.
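To illustrate the load-transfer decomposition described above, the following is a minimal sketch of an uplift capacity estimate for a helical pile in clay using a cylindrical-shear idealization; the bearing factor, the adhesion model and all names are illustrative assumptions, not the design method of this study:

```python
import math

def uplift_capacity_clay(su_kpa, alpha, d_shaft_m, d_helix_m,
                         shaft_len_m, helix_spacing_m, n_helix):
    """Rough cylindrical-shear estimate of uplift capacity (kN) in clay.

    The terms mirror the load-transfer mechanism described above:
    shaft adhesion (alpha * Su over the shaft surface), bearing on the
    top helix, and shear over the soil cylinder between helixes.
    """
    nc = 9.0  # deep bearing capacity factor for clay (a common assumption)
    shaft = alpha * su_kpa * math.pi * d_shaft_m * shaft_len_m
    bearing = nc * su_kpa * (math.pi / 4.0) * (d_helix_m**2 - d_shaft_m**2)
    cylinder = 0.0
    if n_helix > 1:
        cylinder = su_kpa * math.pi * d_helix_m * helix_spacing_m * (n_helix - 1)
    return shaft + bearing + cylinder
```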
Model Preparation

Loading Frame
The screw pile testing assembly is shown in Fig 2. It consists of two pulleys with strings; one pulley is attached to the loading pan and the other to the screw pile (Fig 3). The screw pile is inserted into a testing tank filled with soil. Two dial gauges are placed at an appropriate level with respect to the position of the pile, to measure deflections when load is applied to the loading pan. The soil properties are summarized in Table 1 (Test Results of Soil Properties; free swell index 25%).

Test Tank Preparation

Two cast-iron tanks were used in the experiments, each with a wall thickness of 5 mm, a diameter of 28 cm and a height of 55 cm. The field soil contains stones, pebbles and vegetation, which make it truly heterogeneous and anisotropic in nature. To remove moisture, the field soil was first oven-dried at 100 °C and then passed through a 475 μ sieve, after which its physical characteristics were tested. The oven-dried soil was mixed with a water content of 34% to reproduce the field density of the soil. Water and soil were first mixed vigorously until the soil became uniform, then weighed. The prepared soil was divided into parts of equal weight, and the testing tank was filled with soil to the designed depth, each part being compacted in 10 cm layers.
Procedure
The testing tank is mounted on the assembly, positioning the centre of the mould in line with the edge of the pile shaft, which is attached to the pulley. The screw pile is then inserted into the soil to the embedment depth. Two dial gauges are placed to measure deflection at the required positions. Weights are added to the loading pan, starting at 0.5 kg or 1.0 kg, and increased until the shaft pulls out of the soil. Deflections corresponding to each load are recorded until the screw pile comes out of the soil. The failure load is then determined with respect to the failure height, defined as 20% of the embedment depth. Tests were conducted with a single-screw pile at varying embedment depths, and with two-screw piles at embedment depths of 2D and 3D with pitches of 1.5D, 2.0D, 2.5D and 3.0D.
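The failure criterion above (deflection equal to 20% of the embedment depth) can be applied to recorded load-deflection data by interpolation; a minimal sketch with illustrative numbers:

```python
def failure_load(loads_kg, deflections_mm, embedment_depth_mm):
    """Return the interpolated load at which deflection reaches 20% of
    the embedment depth; None if the criterion is never reached."""
    limit = 0.2 * embedment_depth_mm
    pairs = list(zip(loads_kg, deflections_mm))
    for (l0, d0), (l1, d1) in zip(pairs, pairs[1:]):
        if d0 <= limit <= d1:
            # linear interpolation between the two bracketing readings
            return l0 + (l1 - l0) * (limit - d0) / (d1 - d0)
    return None

# Example: pile at 100 mm embedment -> failure criterion at 20 mm deflection
loads = [0.5, 1.0, 1.5, 2.0, 2.5]
defls = [2.0, 6.0, 12.0, 19.0, 26.0]
print(failure_load(loads, defls, 100.0))  # ~2.07 kg
```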
Results
From Figure 4, Figure 5 and Figure 6, it can be observed that as the embedment depth increased, the failure load and deflection increased. The failure loads for varying embedment depths and varying pitches, with one screw and two screws, are given in Table 2. From the pullout tests of the screw piles the following conclusions are drawn:
1. As the embedment depth increases, the load-carrying capacity of the pile also increases, giving more strength to the pile foundation.
2. As the number of screws on the pile increases, the anchor strength of the pile increases and it takes more load compared with a single-screw pile.
3. As the pitch between the two screws increases, the load-carrying capacity of the pile increases by nearly 20%.
Neural Network Models for the Anisotropic Reynolds Stress Tensor in Turbulent Channel Flow
Reynolds-averaged Navier-Stokes (RANS) equations are presently one of the most popular models for simulating turbulence. Performing RANS simulation requires additional modeling for the anisotropic Reynolds stress tensor, but traditional Reynolds stress closure models lead to only partially reliable predictions. Recently, data-driven turbulence models for the Reynolds anisotropy tensor involving novel machine learning techniques have garnered considerable attention and have been rapidly developed. Focusing on modeling the Reynolds stress closure for the specific case of turbulent channel flow, this paper proposes three modifications to a standard neural network to account for the no-slip boundary condition of the anisotropy tensor, the Reynolds number dependence, and spatial non-locality. The modified models are shown to provide increased predictive accuracy compared to the standard neural network when they are trained and tested on channel flow at different Reynolds numbers. The best performance is yielded by the model combining the boundary condition enforcement and Reynolds number injection. This model also outperforms the Tensor Basis Neural Network (Ling et al., 2016) on the turbulent channel flow dataset.
Introduction
Development of practical, high fidelity turbulence models is a key challenge facing scientists and engineers. Most fluid systems of interest are in a state of turbulence, which is a disordered fluid flow characterized by many interacting and collectively organized length and time scales. Predictive models must perform well and be computationally tractable in spite of the challenges posed by the turbulence phenomenon. The gold standard would be direct numerical simulations (DNS) of turbulent fluid systems, but such simulations will remain largely out of reach at the Reynolds numbers found in nature for the foreseeable future. A variety of turbulence models have been developed over the decades and have been applied to various engineering fields with differing success rates. Two dominant approaches in the engineering sciences are Reynolds-averaged Navier-Stokes (RANS) models and Large Eddy Simulation (LES) models, the former being much more computationally tractable than the latter but less accurate. New hybrid RANS-LES models are under active development to take advantage of the strengths of both approaches (Fröhlich and Von Terzi, 2008). In recent years, researchers have started to leverage existing DNS databases of turbulent flows to build machine learning models that learn to represent closures in the RANS equations (Duraisamy et al., 2019).
The present work focuses on the RANS equations. The goal in the RANS framework is to determine the average of the flow fields and to model the effects of the fluctuations about the average (called the fluctuating components) on the average fields themselves. The fluctuating fields enter the RANS equations through the divergence of the Reynolds stress tensor. It is precisely this tensor that must be modeled. Traditional Reynolds stress closure models lead to only partially reliable predictions. Two-equation eddy viscosity models are commonly used (e.g. $k-\epsilon$ or $k-\omega$) but they are known to be unable to properly account for streamline curvature and history effects on the individual Reynolds stress components, causing large discrepancies in many engineering-relevant flows (Speziale, 1991; Gatski, 2004; Johansson, 2002; Chen et al., 2003). Reynolds stress transport equations can also be derived, but these introduce an additional cost as well as require considerable new modeling efforts. It is expected to be very difficult, if possible at all, to develop a universal representation of the Reynolds stress tensor. Nevertheless, machine learning approaches can provide substantial flexibility in developing and learning new and more flexible models.
In recent years, machine learning and data science have undergone rapid development thanks to increased data availability, boosted computing power and advanced algorithmic innovations. Many researchers have started to incorporate machine learning and data science techniques into modeling fluid systems (Brunton et al., 2016; Colabrese et al., 2017; Jiménez, 2018; Raissi et al., 2019). For RANS turbulence modeling, the task is to learn a good model for the Reynolds anisotropy tensor, and considerable effort has been devoted to it (Tracey et al., 2015; Zhang and Duraisamy, 2015; Ling et al., 2016a,b; Wang et al., 2017; Wu et al., 2018). Various supervised learning techniques, including random forests and deep neural networks, have been employed to learn adequate models for the Reynolds anisotropy tensor from high-fidelity DNS data. In a seminal work, Ling et al. (2016b) utilized a generalized expansion of the Reynolds anisotropy tensor (Pope, 1975) and proposed a neural network architecture, called the Tensor Basis Neural Network (TBNN), to learn the coefficients of the expansion. This novel architecture provides an innovative approach to automatically embed the physical invariances in the predicted Reynolds anisotropy tensor. When trained and tested on a variety of flows, the TBNN was shown to provide improved predictive accuracy compared to traditional RANS models as well as a generic neural network architecture that does not embed invariances (Ling et al., 2016b). Although significant successes have been achieved, there exist further challenges such as modeling the non-local and non-equilibrium effects of turbulence (Duraisamy et al., 2019). So far, the vast majority of machine learning models for the Reynolds stress closure, including the TBNN, use only local quantities. However, turbulence is generally a non-local phenomenon in space and time (Speziale and Eringen, 1981; Domaradzki and Rogallo, 1990; Laval et al., 2001). The Reynolds stress tensor at any location and time can be dependent on prior history of the flow and flow quantities of other spatial locations (Hamlington and Dahm, 2009a). Spatially non-local effects are particularly significant in strongly inhomogeneous flows, where spatial variations of mean flow quantities are substantial (e.g. the near-wall region of turbulent channel flow) (Hamba, 2005; Pope, 2001). Temporal non-locality effects, also known as non-equilibrium effects, are most obvious when there exist large time variations in the mean strain rate, such as in impulsively and periodically sheared turbulence (Hamlington and Dahm, 2008). For these flows, turbulence models that neglect non-local effects can cause substantial inaccuracies. It could therefore be of interest to introduce ways to account for non-local effects with the help of machine learning techniques (Song and Karniadakis, 2018).
Interestingly, kinetic formulations of fluid systems can provide insightful perspectives on non-locality (Succi, 2018).
Although intriguing, machine learning algorithms applied to kinetic formulations are out of the scope of the present work.
The present work aims to use machine learning to model the Reynolds stress for fully-developed turbulent channel flow. We use a generic neural network as the base model, and then increase its capabilities by using non-local features, directly incorporating Reynolds-number information, and enforcing the boundary condition at the channel wall. The models are trained and tested on different combinations of Reynolds numbers in order to investigate the Reynolds-number generalizability. Meanwhile, each of these models is compared to the TBNN.
The remainder of the paper is structured as follows. Section 2 provides background on RANS, turbulent channel flow, and neural networks. Section 3 introduces the new neural network-based models. Section 4 describes the datasets used in this work for training the models and presents the Reynolds shear stress predictions by the new models in comparison with the TBNN model for different train-test cases. Finally, conclusions and future directions are described in Section 5.
Background and Methodology
The focus of this work is the development of new neural network models to represent the Reynolds anisotropy tensor for turbulent channel flow. The models are tested on turbulent channel flow at a variety of Reynolds numbers. In this section, we provide some background on turbulent channel flow and neural networks to provide context and to fix notation for the rest of the paper.
Turbulent channel flow
Turbulent channel flow consists of a fluid confined between two infinite parallel plates in the $x-z$ plane, situated at $y = 0$ and $y = 2h$, with the flow driven by a known pressure gradient in the streamwise ($x$) direction. The velocity field is denoted by $u = (u, v, w)$, where $u$ is the streamwise velocity, $v$ is the wall-normal ($y$) velocity, and $w$ is the spanwise ($z$) velocity. In general, each component of the velocity field is a function of space $(x, y, z)$ and time $t$. The friction Reynolds number for turbulent channel flow is $Re_\tau = u_\tau h/\nu$, where $u_\tau = \sqrt{\tau_{\mathrm{wall}}/\rho}$ is the friction velocity and $\tau_{\mathrm{wall}}$ is the wall shear stress, which is proportional to the pressure gradient. The fluid density and kinematic viscosity are denoted by $\rho$ and $\nu$, respectively. Wall units are a non-dimensional distance from the wall and are given by $y^+ = u_\tau y/\nu$.
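A small numerical sketch of these definitions (the values passed in are illustrative):

```python
import numpy as np

def wall_units(tau_wall, rho, nu, h, y):
    """Friction velocity u_tau = sqrt(tau_wall / rho), friction Reynolds
    number Re_tau = u_tau * h / nu, and wall units y+ = u_tau * y / nu."""
    u_tau = np.sqrt(tau_wall / rho)
    return u_tau, u_tau * h / nu, u_tau * y / nu
```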
The Reynolds-averaged approach to turbulence modeling relies on an averaging operation to decompose the velocity into average and fluctuating components, $u = \overline{u} + u'$, the goal being ultimately to find the average fields $\overline{u}$.
Performing this averaging operation on the Navier-Stokes equations leads to the Reynolds-averaged Navier-Stokes (RANS) equations and the infamous Reynolds stress tensor $\overline{u' \otimes u'}$, which must be modeled in order to close the RANS equations. Most modeling efforts actually focus on the anisotropic Reynolds stress tensor $a = \overline{u' \otimes u'} - (2k/3)\,I$, where $k$ is the turbulent kinetic energy, since that is the portion responsible for turbulent transport. Substantial effort has been expended over the years to understand the nature of, and develop useful models for, the Reynolds anisotropy tensor. Eddy viscosity models are among the most popular and widely used models for $a$, at least in part due to their ease of implementation and quick simulation times. In particular, the $k-\epsilon$ and $k-\omega$ models have gained widespread acceptance, although they are not universally applicable to all flows of interest. In recent years, machine learning algorithms have been applied to turbulence datasets to learn the correct Reynolds anisotropy tensor for a variety of flow fields, including those where eddy viscosity models are known to be inadequate. Turbulent channel flow is statistically a one-dimensional flow with mean velocity field $\overline{u} = (\overline{u}(y), 0, 0)$. The RANS equations are to be solved for $\overline{u}$ and depend only on the $u-v$ component of $a$, $a_{uv}$. Solving for higher moments of the velocity field would require additional equations that would depend on more components of $a$.
Turbulent channel flow dataset
The dataset used in the present work for training the machine learning models consists of direct numerical simulation (DNS) data at four friction Reynolds numbers Reτ = [550,1000,2000,5200] (Lee and Moser, 2015). We obtain the data from the Oden Institute turbulence file server 1 . The DNS in that work was performed with a B-spline collocation method in the wall-normal direction and a Fourier-Galerkin method in the streamwise and spanwise directions. More simulation details can be found in the original reference (Lee and Moser, 2015). The DNS data provide truth labels for the Reynolds anisotropy tensor in this work. The inputs to the machine learning models are mean flow features obtained from RANS simulations of the same flows. In this work, we generated synthetic RANS data by smoothing the DNS fields with a moving average filter of width 3. The number of available DNS data points varies for different friction Reynolds numbers and therefore the total number of points over the wall-normal direction, Ny, also varies.
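A minimal sketch of the synthetic-RANS smoothing step described above (a width-3 moving average; the exact boundary handling used in the paper is an assumption):

```python
import numpy as np

def synthetic_rans(dns_profile):
    """Smooth a DNS mean-flow profile with a width-3 moving average
    to produce a synthetic RANS-like input feature."""
    kernel = np.ones(3) / 3.0
    # 'same' keeps the profile length; edges use a shorter effective window
    return np.convolve(dns_profile, kernel, mode="same")
```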
Neural networks
Neural networks are a class of machine learning algorithms that have found applications in a variety of fields, including computer vision (Krizhevsky et al., 2012), natural language processing (LeCun et al., 2015), and gaming (Silver et al., 2017). Neural networks have been shown to be particularly powerful in dealing with high-dimensional data and modeling nonlinear and complex relationships. Mathematically, a neural network defines a mapping f : x → y where x is the input variable and y is the output variable. The function f is defined as a composition of functions, which can be represented through a network structure. A feed-forward network is generally structured as a set of layers of nodes.
Each layer consists of several nodes (also known as neurons). The output of layer $i$ is passed to layer $i+1$. There are no jump connections or feedback loops. The edges connecting two layers are characterized by a set of unknown weights, $w$, and biases, $b$, so that the output of the previous layer, $h_i$, is transformed according to the affine transformation $w h_i + b$. In a fully-connected network, every node in the current layer is connected to every node in the previous and next layer. Each node in the layer applies a nonlinear operation to the affine transformation so that the output of layer $i+1$ is $h_{i+1} = \sigma(w h_i + b)$. The nonlinear function $\sigma$ is usually referred to as the activation function. It is typically a member of the sigmoid family of functions (such as the logistic function or hyperbolic tangent), although in principle any smooth function should suffice. The output of the neural network is the predicted function, which is parameterized by the weights and biases in the network. Networks of this form can contain millions of unknown parameters. The prediction from the neural network is compared to data in a loss function, and an optimization algorithm adjusts the weights and biases of the network to minimize this loss function. The process of optimizing a neural network is a challenge because it is a high-dimensional non-convex optimization problem. Many optimization strategies exist and new optimization techniques are frequently being proposed (Ruder, 2016). Commonly used optimization algorithms are gradient-descent based, where the network parameters are iteratively updated according to the gradient of the loss function with respect to the parameters and some learning rate. For optimizing a neural network, stochastic gradient descent is more often used than classical gradient descent because the latter can easily get stuck in a shallow local minimum (Goodfellow et al., 2016). In each iteration of stochastic gradient descent, the gradient of the loss function on the entire training data is approximated by that on a random batch of training data, hence the name stochastic.
A full pass over the entire training data is called an epoch. The fully-connected, feed-forward network (FCFF), also known as the multilayer perceptron (MLP), is only the starting point in a long line of network architectures. Other network architectures include convolutional networks, recurrent networks, autoencoders, and generative adversarial networks. Because of the great success in fields such as image recognition and natural language processing, there is interest in using neural networks in the physical sciences to help model physical processes. A significant challenge in adopting neural networks for physics-based problems is to inform the network with known physical laws (e.g. conservation laws). This approach can help facilitate the training process and may provide better generalizability of the learned model to physical parameters regimes in which there is no data. There exist a variety of approaches in this vein, many of which have been applied to problems in fluid mechanics. Of particular relevance to the present work are the approaches for learning the Reynolds anisotropy tensor. One of the first attempts to embed the physical and mathematical structure of the Reynolds anisotropy tensor into a neural network was the Tensor Basis Neural Network (Ling et al., 2016b). In that work, the authors add an additional tensorial layer to a fully-connected network which returns the most general, local eddy viscosity model (Pope, 1975). A major benefit of this architecture is that it guarantees Galilean and rotational invariance of the predicted Reynolds anisotropy tensor.
Other possible routes to embedding physics into a neural network are to use physics-inspired activation functions and convolutional layers. In the current work, we propose three modifications to a fully connected network, which account for some physics of the turbulent channel flow: 1.) reparameterize the network to enforce a = 0 at the boundaries; 2.) explicitly provide Reτ as an input to a layer of the network to provide Reynolds number information; 3.) extend the model to allow for non-locality.
Models
In this section, we review the TBNN model and propose new neural network architectures to model the Reynolds anisotropy tensor for turbulent channel flow. The new models are developed based on a generic MLP model due to its flexibility. Three modifications are introduced which guarantee a = 0 at the boundary, directly incorporate Reynolds number information, and account for non-locality. At the end of this section, we provide a table summarizing the characteristics of the various models.
Review of TBNN
Developed by Ling et al. (2016b), the Tensor Basis Neural Network (TBNN) is a novel neural network architecture that embeds physical invariance properties into the modeled Reynolds anisotropy tensor. The core innovation of the TBNN is the design of a network architecture that represents a general eddy viscosity model (Pope, 1975), which by itself guarantees the symmetry and invariance properties of the predicted anisotropy tensor while accounting for physics beyond that permitted by a linear eddy viscosity model.
The general eddy viscosity model refers to the most general representation of the Reynolds anisotropy tensor in terms of the mean velocity gradient, which is (Pope, 1975)

$$b = \sum_{n=1}^{10} g^{(n)}(\lambda_1, \ldots, \lambda_5)\, T^{(n)}, \tag{3.1}$$

where $b$ is the Reynolds anisotropy tensor normalized by $2k$, the $T^{(n)}$ are basis tensors and the $g^{(n)}(\lambda_1, \ldots, \lambda_5)$ are coefficients that depend on the five scalar tensor invariants. The basis tensors $T^{(n)}$ and the scalar invariants $\lambda_m$ are known functions of the symmetric and anti-symmetric parts of the normalized mean velocity gradient tensor. Finding the explicit expression of the coefficients is extremely difficult for general three-dimensional turbulent flows, with the significant aggravation that there is no obvious hierarchy of the basis components. The approach taken by Ling et al.
(2016b) was to train a deep neural network to learn the coefficients and subsequently the Reynolds anisotropy tensor across a variety of flow fields.
A schematic of the TBNN is provided in Figure 1. The TBNN consists of two input layers. The first input layer is formed by the scalar invariants, which are passed to a fully-connected feed-forward (FCFF) network. The second input layer is the basis tensors, which are combined with the ten tensor basis coefficients predicted by the FCFF network to form the normalized anisotropy tensor b, according to (3.1).

Figure 1: Schematic of the TBNN. The scalar invariants are fed to an FCFF network, which outputs the tensor basis coefficients. A second input layer provides the actual tensor basis, which is combined with the predictions from the FCFF network to provide the normalized anisotropy tensor.
This architecture guarantees that b will satisfy invariance properties and remain a symmetric, anisotropic tensor.
The RANS equations are known to be invariant to rotations and reflections of coordinate axes and translations of the frame of reference with constant speed (also known as Galilean invariance). Therefore, it is important for any machine learning approach to preserve these invariance properties in the predicted flow variables. In addition, embedding physics into the machine learning model provides benefits to the model performance. As demonstrated by the authors in the original paper, when trained and tested on a variety of flows, the TBNN yielded the best predictions of the Reynolds anisotropy tensor compared to traditional RANS models and a generic neural network that does not embed invariance properties.
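The final combination step in (3.1) is a simple contraction; a minimal sketch assuming precomputed coefficients and basis tensors (array shapes are illustrative):

```python
import numpy as np

def combine_tensor_basis(g, T):
    """Combine predicted coefficients with basis tensors: b = sum_n g_n T^(n).

    g: (num_points, 10) coefficients from the FCFF network
    T: (num_points, 10, 3, 3) tensor basis at each point
    returns b: (num_points, 3, 3) normalized anisotropy tensor
    """
    return np.einsum("pn,pnij->pij", g, T)
```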
On the other hand, the TBNN still faces challenges inherent to the general eddy viscosity model. For example, the general eddy viscosity model represents the anisotropy tensor as a function of the local mean velocity gradient and has no expression of non-locality. It is in our interest to design new network architectures that are competitive with the TBNN while being flexible enough to address its shortcomings.
Base model -MLP
A generic MLP model is employed to serve as the simple baseline neural network model. Figure 2 shows the diagram of the MLP. Starred quantities denote non-dimensionalized quantities. In particular, the spatial dimension $y$ is normalized by the half-channel width $h$: $y^* = y/h$. The mean velocity and the Reynolds anisotropy tensor are normalized by the bulk velocity. The input to the model is the mean velocity gradient of the channel flow, $du^*/dy^*$, and the output is the $u-v$ component of the Reynolds anisotropy tensor, $a^*_{uv}$. The input is fed through a fully-connected feed-forward network, which produces the output. To emphasize that this is a local model, the input and output are specified at a particular location $y^*_i$.
We note that the non-dimensionalization employed here, as well as the predicted output, are specific to RANS for turbulent channel flow, which is the focus of this paper. The RANS equations for $\overline{u}$ depend only on $a_{uv}$, and therefore prediction of the $u-v$ component of the anisotropy tensor is sufficient to solve for the average velocity field. This model can be generalized in a straightforward way to predict the six components of the anisotropy tensor for more involved flow fields. A more general non-dimensionalization would involve the prediction of the normalized anisotropy tensor following the TBNN model.
Compared with the TBNN, the MLP is rather simple and has no embedded physics other than Galilean invariance by using the velocity gradients as inputs. Nevertheless, we choose to use this model for its flexibility.
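A minimal PyTorch sketch of this baseline; the five-hidden-layer, 50-node, ELU configuration follows the settings reported later in the paper, but this is an illustrative reimplementation, not the authors' code:

```python
import torch
import torch.nn as nn

class BaselineMLP(nn.Module):
    """Local model: maps du*/dy* at a point to a*_uv at the same point."""
    def __init__(self, hidden=50, layers=5):
        super().__init__()
        blocks, in_dim = [], 1
        for _ in range(layers):
            blocks += [nn.Linear(in_dim, hidden), nn.ELU()]
            in_dim = hidden
        blocks.append(nn.Linear(hidden, 1))  # predicted a*_uv
        self.net = nn.Sequential(*blocks)

    def forward(self, dudy):          # dudy: (batch, 1)
        return self.net(dudy)
```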
Boundary condition enforcement
The first improvement that can be made to the MLP is enforcing the no-slip boundary condition at the channel wall.
The motivation is that the MLP is a free-form model with almost no embedded physics, and hence can easily produce erroneous and unrealistic predictions. To impose the boundary condition, we reparameterize the solution as

$$a^*_{uv} = A(y^+)\,\mathrm{FCFF}\!\left(\frac{du^*}{dy^*}\right),$$

where $\mathrm{FCFF}(du^*/dy^*)$ is the output of an MLP, $y^+$ is the distance from the wall in viscous units, and $A(y^+)$ is a user-selected function with the property that $A(0) = 0$. We choose $A(y^+) = 1 - e^{-\beta y^+}$, with $\beta$ a hyperparameter controlling the shape of the function. In this way, the solution satisfies the boundary condition $a^*_{uv} = 0$ at $y^+ = 0$ by construction. This is illustrated in Figure 3. Note that the model takes $y^+$ as an extra input and the loss function is calculated on the reparameterized solution. We expect the correct boundary condition to help improve the overall predictive accuracy as measured by the R² score.
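A sketch of the reparameterization as a wrapper around the baseline network (β = 0.1 as used in the paper; class and argument names are illustrative):

```python
import torch
import torch.nn as nn

class BCEnforcedMLP(nn.Module):
    """Multiplies the FCFF output by A(y+) = 1 - exp(-beta * y+),
    forcing a*_uv = 0 at the wall (y+ = 0) by construction."""
    def __init__(self, fcff, beta=0.1):
        super().__init__()
        self.fcff, self.beta = fcff, beta

    def forward(self, dudy, y_plus):
        damping = 1.0 - torch.exp(-self.beta * y_plus)
        return damping * self.fcff(dudy)
```

With the baseline sketch above, `model = BCEnforcedMLP(BaselineMLP())` would reproduce the MLP-BC structure.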
Reynolds number injection
The Reynolds number of a flow greatly influences the mean velocity profile. In turbulence modeling with machine learning approaches, it is natural practice to include the Reynolds number as a flow feature. For example, Wang et al. (2017) used the wall-distance-based Reynolds number as an indicator to distinguish boundary layers from shear flows in their random forest models. Note that we also use a similar measure, $y^+$, in the boundary condition enforcement.
However, $y^+$ carries only indirect information about the Reynolds number, and we expect a more direct injection of the Reynolds number to affect the model differently. Figure 4a provides the diagram of a modified MLP with an additional input Reτ, the friction Reynolds number for the channel flow. Figure 4b illustrates in detail how Reτ is given to the network: it is fed into one or more of the intermediate layers of a fully-connected feed-forward network. The intuition is to give the network some higher-level information, which needs to be separated from the ordinary input.
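A sketch of the injection pattern in Figure 4b; feeding log(Reτ) rather than raw Reτ is an illustrative normalization choice, and the exact receiving layer is an assumption:

```python
import torch
import torch.nn as nn

class ReTauInjectedMLP(nn.Module):
    """Feeds Re_tau (log-scaled here, an illustrative choice) into the
    third hidden layer of an otherwise standard FCFF network."""
    def __init__(self, hidden=50):
        super().__init__()
        self.pre = nn.Sequential(nn.Linear(1, hidden), nn.ELU(),
                                 nn.Linear(hidden, hidden), nn.ELU())
        # third hidden layer receives [features, Re_tau]
        self.post = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ELU(),
                                  nn.Linear(hidden, hidden), nn.ELU(),
                                  nn.Linear(hidden, 1))

    def forward(self, dudy, re_tau):  # both shaped (batch, 1)
        h = self.pre(dudy)
        h = torch.cat([h, torch.log(re_tau)], dim=1)
        return self.post(h)
```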
Non-locality
Over the years, many efforts have been devoted to improving turbulence models to better account for non-local effects.
Some are shown to give promising results for the case of turbulent channel flow. Hamlington and Dahm (2009b) developed a classical non-local turbulence model by using a series of Laplacians of the mean strain rate tensor to express the rapid pressure-strain correlation. When applied to turbulent channel flow, their closure resulted in good agreement with the DNS values (Hamlington and Dahm, 2009b). More recently, a group of researchers employed fractional calculus to model the spatial non-locality in wall-bounded turbulent flows (Song and Karniadakis, 2018).
They replaced the original RANS equation with a variable-order fractional differential equation to model the Reynolds stresses. By fitting the DNS data for turbulent channel flow at several different Reynolds numbers, they found a universal form for the variable fractional order, which also holds for turbulent Couette flow and pipe flow.
We design the following modification to the MLP to capture non-local effects in turbulent channel flows. The network architecture is shown in Figure 5. Like the previous architectures, this architecture includes a FCFF network that takes as input a velocity gradient and predicts the anisotropy tensor (the second FCFF network, on the right in Figure 5). However, the input to this FCFF network is a non-local velocity gradient, $d^{\alpha}u^*/dy^{*\alpha}$, which is a fractional derivative of $u^*$ with variable fractional order $\alpha(y^+)$; we therefore refer to the output as the non-local anisotropy tensor. A major new addition is that the order of the non-local velocity gradient is learned by a separate FCFF network (the first FCFF network, on the left in Figure 5). The first input to this network is the $y^+_i$ associated with a particular location $y^*_i$, which is used to predict the corresponding fractional order $\alpha$ (assuming $0 \le \alpha \le 1$); the second input is $u^*$ at all locations $y^*_l$, $l \in \{0, 1, \ldots, n\}$. The non-local derivative at the location $y^*_i$ is calculated using the Caputo fractional derivative (Li and Zeng, 2015),

$$\left.\frac{d^{\alpha}u^*}{dy^{*\alpha}}\right|_{y^*_i} = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{y^*_i} \frac{du^*}{ds}\,\frac{ds}{(y^*_i - s)^{\alpha}}, \qquad 0 < \alpha < 1. \tag{3.3}$$

It can be proved that, for $y^*_i > 0$, (3.3) becomes $u^*|_{y^*_i}$ when $\alpha(y^*_i) = 0$ and becomes $du^*/dy^*|_{y^*_i}$ when $\alpha(y^*_i) = 1$ (Li and Zeng, 2015); in other words, (3.3) reduces to local flow quantities when $\alpha$ is an integer. At the lower terminal $y^* = 0$, to be consistent with the baseline MLP model and traditional RANS models, in which the Reynolds anisotropy tensor at the wall depends on the velocity gradient, we let $\alpha(0) = 1$. This condition is enforced by reparameterizing $\alpha(y^+)$:

$$\alpha(y^+) = 1 - \left(1 - e^{-\beta y^+}\right)\mathrm{FCFF}(y^+).$$

Here $\mathrm{FCFF}(y^+)$ represents the output of a fully-connected feed-forward network whose output activation function is set to be a sigmoid, so that the returned value lies inside $[0, 1]$. Multiplying it by $1 - e^{-\beta y^+}$ and then subtracting the product from 1 forces $\alpha = 1$ at $y^+ = 0$. Because $0 < 1 - e^{-\beta y^+} < 1$ for $y^+ > 0$, the resulting $\alpha(y^+)$ always lies inside $[0, 1]$, as desired.
Overall, this non-local modification has several advantages. First, since the fractional derivative is an integral when α is not an integer, it is able to express non-locality. Second, the fractional order α is itself a function of space, Figure 5: Diagram of the MLP with non-local feature. This architecture consists of two FCFF networks. The first network outputs a variable, fractional differential operator order α. Then α is used in a deterministic step to compute the non-local Caputo derivative of the velocity field representing a non-local velocity gradient.
This gradient is the input to a second FCFF network the output of which is the predicted non-local anisotropy tensor.
Overall, this non-local modification has several advantages. First, since the fractional derivative is an integral when α is not an integer, it is able to express non-locality. Second, the fractional order α is itself a function of space, allowing the encoding of a variable "strength" of the non-locality across space. Last, through fitting the function α(y+), the model is able to learn universality among channel flows with different Reτ; that is, the non-local parameter α will reflect the fact that mean flow profiles in turbulent channel flow collapse to the same curve when plotted in + units over a range of Reτ (Song and Karniadakis, 2018; Pope, 2001).
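A minimal numerical sketch of the Caputo derivative (3.3) on a uniform grid, using an L1-type quadrature with piecewise-linear u*; the grid handling is an illustrative assumption:

```python
import math
import numpy as np

def caputo_derivative(u, y, alpha):
    """Caputo fractional derivative of u(y) at every grid point, 0 < alpha < 1.
    Assumes a uniform grid starting at the lower terminal y[0] = 0 and
    piecewise-linear u between grid points."""
    n = len(y)
    dy = y[1] - y[0]
    du = np.diff(u) / dy                      # slope on each interval
    out = np.zeros(n)
    c = 1.0 / math.gamma(1.0 - alpha)
    for i in range(1, n):
        # exact integral of (y_i - s)^(-alpha) over each interval [y_k, y_k+1]
        k = np.arange(i)
        w = ((y[i] - y[k])**(1.0 - alpha) -
             (y[i] - y[k + 1])**(1.0 - alpha)) / (1.0 - alpha)
        out[i] = c * np.sum(du[:i] * w)
    return out
```

As a sanity check, for u(y) = y and alpha = 0.5 this returns 2*sqrt(y)/sqrt(pi), the known closed form.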
Combination of models
The above modifications can be combined to develop more complex architectures with more capabilities. For instance, it is easy to combine boundary condition enforcement and Reynolds number injection in a single model. Based on this model, we will explore two types of combination: with and without non-locality. Table 1 summarizes the capabilities of all proposed models and the TBNN model.

The Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 10^-6 and a batch size of 10 was used to train the networks. The weights and biases of the networks were initialized using the He uniform distribution (He et al., 2015). We used early stopping (Prechelt, 1998) as a regularization technique to guard against over-fitting: the validation loss is monitored during training and the training process terminates once the validation loss begins to increase, since in machine learning an increase in validation loss is usually a strong indicator of over-fitting (Goodfellow et al., 2016). A schematic early-stopping loop is sketched below. Figure 6 shows an example of the training and validation loss as a function of epochs during training of the MLP-BC-Reτ model for Case 1. In this case, training terminated after 6924 epochs. From the figure, it appears that the loss has not yet converged; this is because the early-stopping criterion was used. Shortly after epoch 6924, the validation loss starts to rise relative to the training loss; the early-stopping criterion senses this and terminates the training process. The number of training epochs varied for different models and different cases, but was generally in the range of 5000-10000. For model evaluation, we used the R² score, a statistical measure of how well the regression predictions approximate the true data; an R² of 1 indicates a perfect fit. The R² scores of the various models on the training and test data are displayed in Table 3. The R² scores on the validation data are very similar to the training R² and are therefore omitted from the reported results. In subsequent sections we discuss the performance of each of the MLP-BC, MLP-Reτ, MLP-NL, and the combined models, with comparisons to the base MLP model and the TBNN model.

Performance of the MLP-BC model

The activation function used in the FCFF networks is the ELU (Clevert et al., 2015). The FCFF network in the MLP-BC model shares the same structure as the MLP. The hyperparameter β is chosen to be 0.1.
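As referenced above, a schematic early-stopping training loop matching the described procedure (patience handling, batching and names are illustrative assumptions; the paper's exact stopping rule may differ):

```python
import copy
import torch

def train_with_early_stopping(model, loss_fn, opt, train_loader,
                              val_x, val_y, max_epochs=20000, patience=100):
    """Stop once validation loss stops improving for `patience` epochs,
    then restore the best-so-far parameters."""
    best_loss, best_state, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        for x, t in train_loader:            # mini-batches (size 10 in the paper)
            opt.zero_grad()
            loss_fn(model(x), t).backward()
            opt.step()
        with torch.no_grad():
            val_loss = loss_fn(model(val_x), val_y).item()
        if val_loss < best_loss:
            best_loss, stale = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            stale += 1
            if stale >= patience:            # validation loss keeps rising
                break
    model.load_state_dict(best_state)
    return model
```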
According to Table 3, the test R² scores of the MLP-BC model are higher than those of the MLP model for all four training-test cases, which demonstrates the advantage of enforcing the boundary condition. To gain a more qualitative picture of the model performance, the profiles of the predicted a*_uv for Cases 1 and 2 are plotted in Figure 7. Indeed, the MLP-BC model correctly predicts zero for a*_uv at the wall, whereas the MLP model predicts non-zero values.
Comparing across the four cases, for both models, we observe that in Cases 1 and 4 the test R² is considerably lower than the training R², while in Cases 2 and 3 the test R² is close to the training R² (in Case 2 the test R² is even higher than the training R²). In fact, this pattern exists for the other models too. One possible explanation is that in Cases 1 and 4, the Reτ of the test set is in a regime not covered by the range of Reτ in the training set, making it harder for the models to generalize.
Performance of the MLP-Reτ model
Next, we investigate the performance of the MLP-Reτ model. In this model, the FCFF network has the same structure as the MLP, and Reτ is a direct input to the third hidden layer.
Compared to the MLP, the MLP-Reτ yields significantly improved predictions: the test R² scores for the four training-test cases are all above 0.9. The biggest improvement is for Case 1 (test set Reτ = 5200), where the R² rises from 0.5186 to 0.9476. This demonstrates that Reynolds number injection can greatly enhance model performance. Figure 8 shows the profiles of the predicted a*_uv for Cases 1 and 2. In both cases, the MLP-Reτ provides more accurate predictions than the MLP in the bulk region of the channel (y+ > 10). For y+ < 10, the MLP-Reτ model does not perform as well as the standard MLP model in Case 1, but performs better in Case 2.
Performance of the MLP-NL model
In the MLP-NL model, the two FCFF networks each have 5 hidden layers with 50 nodes per layer and ELU activation functions. The output activation function of the first FCFF network is the sigmoid function. The hyperparameter β, used to enforce the boundary condition on the fractional derivative order α, is 0.1.
Like the previous two models, the MLP-NL model again displays improved performance compared to the MLP.
The test R² scores of the MLP-NL model are in general higher than those of the MLP-BC model, but lower than those of the MLP-Reτ model. Figure 9 shows the profiles of the predicted a * uv for Cases 1 and 2. Noticeably, the predictions of the MLP-NL model in Case 1 are not as good as those in Case 2. As discussed above, the fact that the model struggles more with generalizability in Case 1 is expected, because the model is extrapolating to a high-Reτ regime not covered in the training set.
Performance of combined models
The three modifications have each shown improved performance over the MLP, so the combined models are expected to provide further benefits. The MLP-BC-Reτ model is structured similarly to the MLP-BC and MLP-Reτ models: it has an FCFF network of 5 hidden layers with 50 nodes per layer; Reτ is given to the third hidden layer; the FCFF output is then multiplied by a factor A(y⁺) = 1 − e^(−βy⁺) with β = 0.1 to enforce the boundary condition. Likewise, the non-local combined model MLP-BC-Reτ-NL incorporates the above modifications into the second FCFF network of the MLP-NL model. Based on Table 3, the combined model MLP-BC-Reτ indeed achieves higher accuracy than all previous models.
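A sketch of this combined structure is given below, again assuming a Keras implementation; the injection point, damping factor, and layer sizes follow the text, while the tensor shapes and class name are illustrative.

```python
import tensorflow as tf

class MLPBCRetau(tf.keras.Model):
    """Sketch of the MLP-BC-Retau idea: Retau is injected into the third
    hidden layer, and the output is damped by A(y+) = 1 - exp(-beta * y+)
    so that the predicted anisotropy vanishes at the wall."""
    def __init__(self, width=50, n_hidden=5, beta=0.1):
        super().__init__()
        self.beta = beta
        self.hidden = [tf.keras.layers.Dense(width, activation="elu",
                                             kernel_initializer="he_uniform")
                       for _ in range(n_hidden)]
        self.out = tf.keras.layers.Dense(1)

    def call(self, inputs):
        features, y_plus, re_tau = inputs      # each of shape (batch, dim)
        h = features
        for i, layer in enumerate(self.hidden):
            if i == 2:                         # inject Retau at hidden layer 3
                h = tf.concat([h, re_tau], axis=-1)
            h = layer(h)
        # Boundary-condition factor forces the output to zero at y+ = 0.
        return (1.0 - tf.exp(-self.beta * y_plus)) * self.out(h)
```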
However, this is not true for the MLP-BC-Reτ-NL model, the model with supposedly the most capabilities and complexity. In Case 1, the test R² of the MLP-BC-Reτ-NL model is only 0.8196, smaller than the 0.9476 of the MLP-Reτ model. In Cases 2 and 4, the MLP-BC-Reτ-NL model beats all but the MLP-BC-Reτ model. Only in Case 3 does the MLP-BC-Reτ-NL model give the best performance. Hence, bringing non-locality into the model through the fractional derivative approach does not always provide additional advantage.
Comparing the proposed models to the TBNN model, we find that the MLP-BC-Reτ and MLP-Reτ models consistently outperform the TBNN in all four training-prediction cases. The MLP-BC-Reτ-NL model outperforms the TBNN in Cases 2, 3, and 4, yet fails to do so in Case 1. This is verified in the profiles of the a * uv predictions for Cases 1 and 2 in Figure 10. In Case 1, the MLP-BC-Reτ and the TBNN both yield good predictions, whereas the predictions of the MLP-BC-Reτ-NL model start deviating from the DNS at about y⁺ = 1000. In Case 2, the two combined models and the TBNN all provide good predictions, but the two combined models are significantly better. This shows that, while all models find it harder to generalize in Case 1 than in Case 2, different models display varying levels of robustness: the TBNN and the MLP-BC-Reτ are more robust than the MLP-BC-Reτ-NL. Finally, in Figure 11 we inspect and compare the learned fractional derivative order α of the two non-local models for Cases 1 and 2. In both cases, and for both models, α tends to decrease and then increase again over the range 1 < y⁺ < 200. Considering that a bigger deviation from an integer indicates stronger non-local effects, this shows that the non-local effects are emphasized in specific regions. In particular, the models have learned that non-local effects are significant in the near-wall region and less significant in the bulk region. Comparing the MLP-NL model and the MLP-BC-Reτ-NL model, we find that the combined model extends the non-local region to large y⁺, whereas the MLP-NL model learns an α that approaches 1 in the bulk region, indicating that the anisotropy tensor is well modeled by the velocity gradient there. Both models compared in Figure 11 use the fractional derivative to express non-locality; however, without Reτ injection, the non-locality is very similar for both Reynolds numbers.
With Reynolds number injection, the non-locality profiles at the two Reynolds numbers acquire roughly the same shape: a dip in the middle before an upturn towards locality near the center of the channel. The main difference is that the Reτ = 2000 case has a smaller α value near the center of the channel than the Reτ = 5200 case. This may be because Reτ = 2000 has not yet reached an asymptotic state (Lee and Moser, 2015). More data at higher Reynolds numbers may provide new insight into this observation.
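To make the non-local operator concrete, the sketch below shows a standard Grünwald-Letnikov discretization of a fractional derivative of real order α on a uniform one-dimensional grid. It is an illustration of the kind of operator whose order the MLP-NL models learn, not the implementation used in this work.

```python
import numpy as np
from scipy.special import binom

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov estimate of the order-alpha derivative of samples f
    on a uniform grid of spacing h. For integer alpha this reduces to a
    one-sided finite difference; non-integer alpha mixes in upstream values."""
    n = len(f)
    k = np.arange(n)
    w = (-1.0) ** k * binom(alpha, k)      # GL weights; alpha may be real
    d = np.empty(n)
    for i in range(n):
        # Each point weights all upstream samples: the operator is non-local,
        # with stronger mixing as alpha departs from an integer.
        d[i] = np.dot(w[: i + 1], f[i::-1]) / h ** alpha
    return d
```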
Conclusion
Turbulence modeling with RANS equations is widely used in engineering applications, but properly modeling the Reynolds stress tensor to represent rich and accurate turbulence physics has always been a major challenge. In recent years, many developments have been made in modeling the Reynolds stress closure using novel data-informed machine learning approaches such as deep neural networks. In the present work, we focused on modeling the Reynolds stress closure of turbulent channel flow with a neural network approach. We proposed three types of modifications to a standard fully-connected network to account for physical properties of the channel flow: (1) reparameterizing the network to enforce the no-slip boundary condition; (2) explicitly providing Reτ as an input to the network; (3) allowing for spatial non-locality in the network through a fractional derivative while learning the order of the fractional operator. To investigate Reynolds-number generalizability, we designed four training-prediction cases with channel flow data at four Reynolds numbers. The results showed that, compared to the standard network, the three modifications can all improve the predictions, with the most successful results coming from the combination of Reynolds number injection and boundary condition enforcement without the non-local model. We also compared our models to the Tensor Basis Neural Network proposed by Ling et al. (2016b), which embeds invariance properties into the predicted Reynolds anisotropy tensor. The comparison showed that our best models outperformed the TBNN on this particular flow field.
There are several directions for future exploration. First, a thorough exploration of the hyperparameter space of the models should be performed; the current results rest on only a preliminary hyperparameter search, and a rigorous hyperparameter optimization would help validate the present findings. Moreover, in this work we only studied the u-v component of the Reynolds anisotropy tensor for turbulent channel flow. Extensions to other components of the tensor are natural with the current modeling approach and should be investigated in future work. For example, predicting the full Reynolds anisotropy tensor may be interesting: a direct extension of the current models would predict the full tensor while imposing the required symmetry, and more subtle extensions may account for the coupling of the tensor components. In addition, the non-local models need to be further examined. Other implementations of non-locality may be possible, such as non-local kernel methods. It is also possible to address non-locality via kinetic formulations, in which non-local effects are accessed through local formulations in an extended phase space (Ansumali et al., 2004; Succi, 2018). Machine learning approaches in kinetic formulations are still in their infancy, and combining the two approaches may provide additional insights. Beyond these extensions, the models should be tested, modified, and applied to more complex flow fields, including fully three-dimensional flows. Such extensions may involve substantial reformulations of the models; in particular, for the non-local models, the fractional derivative approach will need to be modified. Another interesting avenue for future exploration is to incorporate uncertainty quantification into the new neural network models.
Specifics in the operating modes of thermosyphon air heater of steam generators №1 and №2 in TPP "Republika" at fuel switch from coal to natural gas
A fuel switch is motivated both by the necessity of increasing energy efficiency and by compliance with the ever-stricter regulations on the release of harmful emissions into the environment. In this paper a thorough financial and energy analysis of the fuel switch from coal to natural gas is carried out, in particular with respect to waste heat recovery systems (two-phase thermosyphons). From the calculation of the heat transfer coefficients for both fuels, it is established that the system running on natural gas has a lower value, due to the lower air velocity, caused in turn by the lower requirement for excess air. The heat transfer coefficients of the evaporation and condensation zones are, respectively, h_fgas = 104.9 and h_air = 84.9 W/(m²·K) for coal, and h_fgas = 99.7 and h_air = 84.7 W/(m²·K) for natural gas. A numerical study is also carried out and a methodology for the analysis of the efficiency of two-phase thermosyphons with complex geometry is presented.
Introduction
In the majority of cases a fuel switch is dictated by the need to increase energy efficiency, compliance with stricter regulations on the release of harmful emissions into the environment, decreased operational costs, and a short payback period. The fuel switch itself leads to a number of changes, which most often affect the flue gas discharge systems. If waste heat recovery equipment is installed along the flue gas path, its operational parameters will also change. This has an impact both on the system's efficiency and on the financial parameters of the proposed waste heat recovery system.
Despite numerous developments in the literature, including individual case studies analysing the effects of a fuel switch itself, there are very few cases in which the waste heat recovery systems have been analysed after a fuel switch.
A number of studies in the literature address the numerical modelling of heat and mass transfer processes in two-phase thermosyphons. In [1,2] a detailed analysis of the processes in a horizontal thermosyphon is presented, examining the effects of the two-phase flow regime and of the degree of filling of the thermosyphon on the heat and mass transfer processes. In [3] the factors influencing the liquid head, including temperature difference, liquid charge, height difference, and circulation flow resistance, have been identified and investigated experimentally. The thermal performance of a two-phase closed thermosyphon with internal surface roughness is presented in [4]; the study shows a reduction in the evaporator's thermal resistance, an increase in its heat transfer, and a reduction of the total thermal resistance, and concludes that the downcomer may be partially filled with liquid. In [5] and [6], numerical and experimental studies of single- and two-phase thermosyphons are presented, respectively; the vapour velocity and temperature distributions of a flat two-phase thermosyphon are obtained by numerically solving the equations of continuity, momentum, and energy. A detailed numerical analysis of the heat and mass transfer processes in a multiphase thermosyphon is presented in [7].
[8] presents a CFD analysis of a two-phase thermosyphon using the commercial software ANSYS-CFX. The software successfully predicted the overall temperature distribution for the investigated thermosyphon at three different heat inputs.
The quality of the operational fuel continuously worsened over the period 1951-2001, with its lower heating value (LHV) decreasing from 10 500 to 8 150 kJ/kg.
In the year 2000, during a planned reconstruction of steam generators №1 and №2, two thermosyphon-type air heaters based on patent №745/01.06.2005 ("Heat exchanger with heat pipes") were installed [9].
On assignment by "Toplofikacija - Pernik" EAD, a project was developed and carried out to install an additional air heater with thermosyphons (AH-TS) for steam generators №1 and №2 in the TPP. The aim of this project was to reduce the temperature of the exit gases from 220°C to 180°C and thus increase steam generator efficiency by about 2-2.5%.
The air heater consists of 1 710 tubes (second-hand economiser tubes) with a diameter d = 32/4 mm, placed on two sides, 9 tubes in a row along the height of the gas tract, arranged in an in-line (corridor) layout; the total tube length is 7 m. The evaporation zone of the thermosyphon is 3 m long and inclined at 30 degrees in the direction of the flue gases, the adiabatic zone is 2 m long and inclined at 30 degrees to the evaporation zone, and the condensation zone is also 2 m long, inclined at 30 degrees to the adiabatic zone.
Chemically treated water, filling about 25-30% of the tube volume at below-atmospheric pressure (p_vac = 950 mbar; p_abs = 1015 − 950 = 65 mbar), is used as the working fluid. One of the steam generators was in use until 2009, after which it was shut down as it did not comply with the ecological norms of Directive 2001/80/EC on the limitation of emissions of dust, nitrogen and sulphur oxides, and carbon dioxide from large combustion plants.
Steam generator №2 changed its fuel base from coal to natural gas without any reconstruction of the heating surfaces, the reason being that it had only been used in emergency situations, so investment in reconstruction was not economically viable.
This circumstance allows us to perform valuable research into the operating modes of the air heaters, using coal and natural gas respectively. Table 1 presents the characteristics of the operational fuels of both steam generators as well as the relevant technical parameters of the air heaters in their respective modes of operation. The data in Table 1 show a higher efficiency of steam generator №2 in natural gas operation, owing to the possibility of deeper cooling of the flue gases. The end effect, however, expressed by the relative increase of efficiency when using an air heater with thermosyphons, is higher for coal (2.72%) than for natural gas (2.43%). This is because in boilers using solid fuel the efficiency without an air heater (η′ = 84.8%) is significantly lower than the efficiency of a boiler burning gas under the same conditions (η′ = 88.6%).
Regarding economic viability, using an air heater with thermosyphons on gas leads to significantly higher cost benefits, and hence a shorter payback period, than on solid fuel, owing to the notably higher price of natural gas.
An assessment of the reduction of CO2 emissions resulting from the fuel savings was carried out for both solid and gas fuel. The results presented in Table 1 show that the more considerable reduction of CO2 emissions is achieved by the air heater with thermosyphons operating on solid fuel.
Visual observation of the thermosyphons' heating surfaces shows low levels of wear from the abrasive action of coal dust over the 9-year operation of the steam generator, during which the main fuel was high-ash sub-bituminous coal. It must be noted, however, that the structural integrity of the thermosyphons, in particular with regard to leaks, has not been compromised, owing to their manufacture from second-hand thick-walled economizer tubes.
Experimental study
The aim of the experimental studies is to establish whether the 18-year operation of steam generator №2 (9 years using sub-bituminous coal and the remaining 9 operating on natural gas) has had an effect on its efficiency, and to what extent the operational data (Table 2) correspond to the calculated characteristics presented in Table 1.
Numerous experimental studies of the efficiency of the proposed two-phase thermosyphons have been carried out at different regimes of steam boiler operation. The air heater measurements were carried out in 2008 and 2015, when the steam generator, the characteristics of which are shown in Table 1, was burning coal (2008) and subsequently natural gas (2015).
During the experiments, the following were measured at multiple points along the flue gas and air paths: the air temperature before the AH-TS; the temperature of the flue gases before the AH-TS; the temperature of the flue gases after the AH-TS; and the steam generator's nominal load (Table 2, Figure 1). The average values of the measured parameters are given in the last column of Table 2. The heat transfer coefficients in the evaporation and condensation zones were determined analytically by means of the normative method used in [10], with corrections made for the angles of inclination of the thermosyphons. Using this methodology we obtain heat transfer coefficients of the evaporation and condensation zones of h_fgas = 104.9 and h_air = 84.9 W/(m²·K) for coal, and h_fgas = 99.7 and h_air = 84.7 W/(m²·K) for gas. The lower values of the convective heat transfer coefficients are explained by the lower velocities of the gas stream in the intratubular space when using natural gas, due to the lower excess air coefficient compared to that of solid fuel.
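As a simple illustration of how these film coefficients combine, the sketch below treats the gas-side and air-side films as two thermal resistances in series; the wall and the internal boiling/condensation resistances are neglected, so this is only a rough order-of-magnitude estimate, not the normative method used in the paper.

```python
# Series-resistance estimate of the overall coefficient from the film
# coefficients reported above (wall and phase-change resistances neglected).
def overall_u(h_gas, h_air):
    return 1.0 / (1.0 / h_gas + 1.0 / h_air)   # W/(m2.K)

for fuel, h_gas, h_air in (("coal", 104.9, 84.9), ("natural gas", 99.7, 84.7)):
    print(f"{fuel}: U = {overall_u(h_gas, h_air):.1f} W/(m2.K)")
# coal: U = 46.9 W/(m2.K); natural gas: U = 45.8 W/(m2.K)
```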
From the experimental results in Table 2 we can conclude that the measured temperatures correspond very closely to the calculated values (Table 1).
Numerical study
The numerical study aims to demonstrate the applicability of numerical modelling to the analysis of two-phase thermosyphons after a change in the operating regime parameters. The experimental studies carried out allow validation of the numerical results and the creation of a methodology for analysing the efficiency of such systems.
Geometric model of the waste heat recovery system
For the purposes of the numerical study a 3D model was created (Figure 2). The air heater consists of 95 tubes in a row, placed on two sides, and 9 tubes in height, arranged in a square tube layout. Each tube has diameter d = 32/4 mm, the longitudinal tube pitch is s1 = 42 mm, and the transverse tube pitch is s2 = 45 mm. The evaporation zone of the thermosyphon is 3 m, the adiabatic zone 2 m, and the condensation zone 2 m; the two-metre adiabatic zone lies between the evaporation and condensation zones and is additionally insulated with perlite concrete. The aerodynamic resistances are 303 Pa on the gas side and 219 Pa on the air side. The model consists of 6 rows of 5 pipes inclined at 30° for each zone of the air heater; the dimensions, thickness, and spacing of the pipes are as given above. A meshing procedure was applied to the proposed volume, creating 1 250 000 triangular mesh elements.
Mathematical modeling
Two different types of heat transfer processes were identified: within the two-phase thermosyphon, and between the thermosyphon and the flue gases and air. Steady-state heat transfer was assumed to be reached in the two-phase thermosyphon, and the results obtained were then used as input data for the study of the heat transfer between the air heater and the flue gases and air.
The numerical solution of the heat transfer between the air heater and the flue gases and air is based on the continuity, momentum, and energy equations combined with an appropriate turbulence model. Turbulence modelling is based on the Reynolds-Averaged Navier-Stokes (RANS) equations, which were solved numerically with the Finite Volume Method (FVM); the fundamental equations were derived in the FVM using the integral approach. The numerical solution procedure is presented in Figure 3, and the equations solved during the procedure are given in [11].
Initial conditions
Numerical procedures were carried out to analyse the heat transfer between the air heater and the flue gas and air along the two tracts, considering both fuel bases, coal and natural gas, and including the adiabatic zone of the thermosyphon air heater. The experimental data show that the velocity of the flue gases (w_fgas) before the air heater is about 7.8 m/s, and the velocity of the air before the air heater is 5.6 m/s (when the boiler is running on coal). The influence of the heat transfer coefficient (h) was investigated; the initial parameters for the numerical procedures are presented in Table 3. In addition, the numerical analyses show that the degree of turbulence in the intratubular space (around the two-phase thermosyphons) increases by about 200%, which explains the intensive heat exchange. Table 4 presents a summary of the results of the numerical analysis: the temperature differences in the two tracts under the different regimes deviate by no more than 5% from the experimental studies, with the 3rd and 4th rows of Table 4 closest to the experimental results.
Conclusion
The reconstruction of steam boilers №1 and №2 at TPP "Republika", with the production, assembly, and installation of the additional air heater with thermosyphons, was successful and effective: the temperature of the exit gases has been reduced on average from 218 to 185°C as a consequence. The heat recovered from the flue gases warms the outdoor air from 25 to 81°C and thus ensures a non-corrosive regime for the subsequent heating surface along the air tract.
In the present work the results of the numerical study of heat exchange processes in a thermosyphon air heater are presented, with the tubes inclined at 30° along the path of the flue gases, 60° in the adiabatic zone, and perpendicular along the air path. The influence of the heat transfer coefficient in the evaporation and condensation zones of the pipes was investigated for both the coal and natural gas fuel bases. The presence of an adiabatic zone undoubtedly worsens the heat exchange conditions as a whole owing to unfavourable hydrodynamic conditions, but in the present case, due to the location of the flue gas tract and air blower, such a zone is inevitable. The heat transfer coefficients of the evaporation and condensation zones are, respectively, h_fgas = 104.9 and h_air = 84.9 W/(m²·K) for coal, and h_fgas = 99.7 and h_air = 84.7 W/(m²·K) for gas.
The results of the numerical analysis have been validated against the experimental ones, with a maximum deviation of about 5%. The proposed numerical modelling scheme thus allows an express analysis of the efficiency of existing waste heat recovery systems using two-phase thermosyphons when a fuel switch from coal to natural gas is performed.
Miffi: Improving the accuracy of CNN-based cryo-EM micrograph filtering with fine-tuning and Fourier space information
Efficient and high-accuracy filtering of cryo-electron microscopy (cryo-EM) micrographs is an emerging challenge with the growing speed of data collection and sizes of datasets. Convolutional neural networks (CNNs) are machine learning models that have been proven successful in many computer vision tasks, and have been previously applied to cryo-EM micrograph filtering. In this work, we demonstrate that two strategies, fine-tuning models from pretrained weights and including the power spectrum of micrographs as input, can greatly improve the attainable prediction accuracy of CNN models. The resulting software package, Miffi, is open-source and freely available for public use (https://github.com/ando-lab/miffi).
Introduction
Single particle cryo-electron microscopy (cryo-EM) is a technique that can provide detailed structural information on the architecture of biological macromolecules with resolution ranging from atomic details to quaternary arrangement (Nogales and Scheres, 2015). It particularly excels at characterizing large biological assemblies or heterogeneous samples, which are extremely challenging targets for other high-resolution structural techniques such as X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Since the introduction of direct electron detection (DED) cameras roughly a decade ago, cryo-EM has undergone rapid development on both the hardware and software fronts (Chua et al., 2022). Improved automation and throughput during data collection, coupled with efficient and user-friendly data processing software, has enabled wide adoption of this technique in many areas of biological research.
Despite the achievements so far, many aspects of cryo-EM can still benefit from further improvements. One such area concerns efficient recognition and exclusion of non-ideal cryo-EM micrographs, given the increasing number and size of cryo-EM datasets. Currently, a beam-image shift scheme (Cheng et al., 2018) is commonly used during cryo-EM data collection by software such as SerialEM (Mastronarde, 2005), EPU, and Leginon (Cheng et al., 2021), which greatly increases data collection throughput. Coupled with newer detectors with shorter exposure times, 300 or more cryo-EM movies can be obtained per hour with little compromise on achievable resolution (Fréchin et al., 2023; Peck et al., 2022). Effort is also being made to automate microscope operation through deep learning methods (Bouvette et al., 2022; Cheng et al., 2023; Fan et al., 2024), which would further accelerate the data collection process. Such advances, combined with the need for a large amount of data for low-concentration or highly heterogeneous samples, have led to increasingly large dataset sizes, typically ranging from a few thousand to tens of thousands of movies. Inevitably, a portion of the collected data will be non-ideal for processing and will thus need to be excluded. A common method for performing filtering is based on contrast transfer function (CTF) fitting, in which the user defines a threshold for acceptable CTF fit resolution or defocus value for a given micrograph. This method is very efficient at excluding micrographs with drift issues or beam aberrations, but it is not ideal for filtering out many other issues such as crystalline ice or off-target support film images. To achieve high filtering accuracy, manual inspection of micrographs is often needed, which is slow and requires expert knowledge to perform. Such an approach becomes less practical as dataset size increases.
A convolutional neural network (CNN) is a machine learning algorithm that is specialized for processing matrix-like data, such as images. CNNs have been successfully applied to a wide range of computer vision tasks such as image recognition, classification, and segmentation (Alzubaidi et al., 2021). CNNs have also proven useful in many cryo-EM processing steps such as particle picking (Bepler et al., 2019; Tegunov and Cramer, 2019; Wagner et al., 2019), 2D class selection (Kimanius et al., 2021; Li et al., 2020), and micrograph segmentation (Sanchez-Garcia et al., 2020). Because cryo-EM micrograph filtering is in essence an image classification task, a CNN-based approach is highly attractive. Cianfrocco and co-workers were the first to demonstrate such an approach (Li et al., 2020). By training a CNN based on a ResNet34 model on a labeled dataset of "good" and "bad" micrographs, they were able to show that CNN-based micrograph filtering greatly improves prediction accuracy (~93%) over traditional CTF-based filtering (~78%), in particular by dramatically reducing the false negative rate ("good" micrographs predicted as "bad"). The improvement over CTF-based filtering is especially significant for the classification of tilted images, as tilting inherently leads to lower CTF resolution. A more recent version of their tool, MicAssess 1.0 (Li and Cianfrocco, 2021), implements a hierarchical process to further subclassify the "good" and "bad" classes with ~75% and 80% subclass prediction accuracy, respectively. The success of the CNN-based approach to micrograph filtering raises new questions. For example, can we further improve the accuracy of CNN-based filtering given limited training data? Additionally, can we better encode the reasoning behind micrograph filtering into CNN-based methods?
In this work, we examined various strategies to improve CNN-based filtering. First, we tested whether using a CNN pretrained on the ImageNet dataset (Deng et al., 2009), which does not resemble cryo-EM micrographs, as a starting point for training can achieve better micrograph filtering accuracy than training a CNN on cryo-EM data from scratch. This method, known as fine-tuning or transfer learning, has been shown to greatly improve the attainable accuracy of CNN models when only limited training data are available (Kolesnikov et al., 2019; Yadav and Jadhav, 2019). Second, we examined whether the direct inclusion of Fourier space information as part of the CNN input could result in better prediction accuracy, since many issues such as crystalline ice or sample drift can be better spotted in the power spectrum than in the real-space image. Finally, we aimed to expand the versatility of CNN-based filtering in terms of the types of samples and detectors that it can be applied to, and to provide integration with common processing software packages including RELION (Kimanius et al., 2021) and cryoSPARC (Punjani et al., 2017). We named the resulting tool Miffi, which stands for cryo-EM micrograph filtering utilizing Fourier space information (Figure 1). Miffi is open-source and freely available for public use (https://github.com/ando-lab/miffi). Importantly, we find that fine-tuning provides significant improvement over training from scratch and that inclusion of power spectra as a second input channel suppresses the false positive rate ("bad" micrographs predicted as "good"), largely through improved detection of micrographs with crystalline ice. While we provide Miffi for public use, our results also indicate that CNNs pretrained on the ImageNet dataset provide a useful starting point for any user interested in training a model on custom datasets.
Figure 1. Schematic of Miffi. Miffi employs a convolutional neural network (CNN) for classification by predicting multiple labels for each micrograph. Real-space cryo-EM micrographs and their corresponding power spectra are preprocessed and input into the CNN as two channels, from which an output array containing the prediction probability for each label is obtained. Predicted labels are then determined as the one with the highest probability within each of the four label categories: support film coverage, drift, crystallinity, and contamination (examples of which can be found in Figure 2).
Data used for training, validation, and testing
All micrographs used in this work were either directly obtained from EMPIAR (Iudin et al., 2023) or obtained in-house and motion corrected using MotionCor2 (Zheng et al., 2017) in RELION 4 (Kimanius et al., 2021). For data collected on a Gatan K3 detector, 7 by 5 patches were used for motion correction, while 5 by 5 patches were used for data collected on a Gatan K2 Summit detector. The training set included a mix of in-house datasets and EMPIAR entries 10202 (Tan et al., 2018), 10249 (Herzik et al., 2019), and 11228 (Filman et al., 2019), as described in Table 1. An in-house dataset on a 240 kDa globular protein collected on a Gatan K3 detector that was not part of the training set was used as the validation set, as it contained a number of micrographs with various issues. All in-house datasets used in training and validation were curated from their respective full sets such that bad micrographs constituted roughly 50% of each set. Micrographs from EMPIAR entries 10175 (Noble et al., 2018b), 10344 (Campbell et al., 2020), 10379 (Li et al., 2020), 10916 (Yang et al., 2022), and 11093 (Röder et al., 2020) were used as additional test sets to evaluate the trained model, the details of which can be found in Table 2.
Labeling scheme
The training, validation, and test datasets were labeled manually by inspecting each square micrograph along with its corresponding power spectrum. A GUI interface designed for this process is included in the GitHub repository. Each square micrograph was given four labels representing different categories of issues: support film coverage, drift, crystallinity, and contamination (Figure 1). The first label describes the degree of support film coverage in the micrograph, which can be one of the following: no film, minor film, major film, film (Figure 2, top row).
Note that film here means only the support film of the grid itself; additional continuous films such as monolayer graphene oxide are not considered film in this case. The second label is binary and describes issues of sample displacement, where sample drift, cracks, or an empty micrograph is labeled as "bad" (Figure 2, second row). The third label describes the crystallinity of the ice in the sample, which is determined by non-diffuse intensity at around 1/3.7 Å⁻¹ in Fourier space, and can be one of the following: not crystalline, minor crystalline, major crystalline (Figure 2, third row). Finally, the fourth label is binary and describes whether the micrograph is covered largely in contaminant objects, which include ice crystals and ethane contamination (Figure 2, bottom row). It is worth noting that because the changes within each category are often continuous, it is difficult to set a clear cutoff, but extra care was taken to keep the labeling scheme as consistent as possible.
Model training
The ConvNeXt-Small model (Liu et al., 2022) was chosen for the classification task in this work based on its simplicity in architecture and high performance on the ImageNet datasets. All models and their training were implemented in PyTorch (Paszke et al., 2019). Training was performed either from scratch using randomly initialized model weights, or via fine-tuning of model weights pretrained on the ImageNet-12k dataset (Deng et al., 2009; Liu et al., 2022). We tested both a single-channel input of real-space micrographs and a two-channel input of the real-space micrographs with their corresponding power spectra. The Timm package (Wightman, 2019) was used to modify the input and output layers of the model to match the desired number of input channels and output classes. As the ImageNet-pretrained model was trained with three input channels corresponding to three colors, the pretrained weights were reduced to one or two channels in the following manner to maintain normalization: for a single input channel, the sum of the weights from the original three channels was used; for two input channels, the weights in the first two original channels were multiplied by a factor of 1.5, while the third channel was unused.
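A sketch of this channel-reduction rule is shown below. The model name and the stem attribute path are assumptions about the timm API surface, not a statement of how Miffi is implemented; only the summation and 1.5× scaling rules come from the text.

```python
import timm

# Assumed timm model tag and attribute path; for timm's ConvNeXt, the first
# stem module is the input convolution with weights of shape (out, 3, k, k).
model = timm.create_model("convnext_small.in12k", pretrained=True)
w3 = model.stem[0].weight.data

w1 = w3.sum(dim=1, keepdim=True)   # one input channel: sum of the 3 channels
w2 = 1.5 * w3[:, :2]               # two channels: first two scaled by 1.5
```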
Square micrographs were Fourier cropped to 1.5 Å pixel size during preprocessing to keep the location of the ice ring consistent in the power spectrum. During the training stage, data augmentation was subsequently applied, including a random crop with a ratio randomly chosen from 0.8-1.0, along with random horizontal and vertical flips. Data augmentation was not applied during validation or inference. For two-channel input models, the power spectra of the augmented micrographs were computed and appended as an additional channel. Inputs were then downsampled to 384 pix × 384 pix with bilinear interpolation, and finally normalized to a mean of 0 with a standard deviation of 1 for each channel individually, with pixels whose intensity falls outside 2.5 standard deviations thresholded (to ensure that micrographs from different sources are on a similar intensity scale). During training, the four label categories for each micrograph are treated as independent from each other, while the probabilities for all possible labels within each label category are summed to one using the softmax function. The cross-entropy between the true label and the predicted label probability (softmax of the CNN output) was calculated individually for all four label categories, and their sum was defined as the loss function. The AdamW optimizer (Loshchilov and Hutter, 2017) and a cosine learning rate scheduler were used for training all models. Layer-wise learning rate decay was applied in fine-tuning by dividing layers into 12 groups and setting the learning rate for each group. Linear warmup epochs were used in the case of training from scratch. Detailed parameters used in training can be found in Table 3. Notably, the fine-tuning process was found to be prone to overfitting, likely reflecting the fact that our training set is small relative to the size of the CNN, such that the models can overfit and lose generality. The learning rate, number of training epochs, and layer decay ratio were thus chosen carefully to minimize overfitting, which can be visualized in the loss functions (i.e., the training loss will continue to decrease as the model improves its fit to the training set, but the validation loss will begin to increase as the model begins to lose its generality).
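The summed per-category cross-entropy just described can be sketched as follows; the category sizes are those implied by the labeling scheme (four film, two drift, three crystallinity, and two contamination labels), and the function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

CATEGORY_SIZES = (4, 2, 3, 2)   # film, drift, crystallinity, contamination

def multi_category_loss(logits, targets):
    """logits: (batch, 11) CNN outputs; targets: (batch, 4) class indices."""
    loss, start = 0.0, 0
    for i, size in enumerate(CATEGORY_SIZES):
        # F.cross_entropy applies the per-category softmax internally.
        loss = loss + F.cross_entropy(logits[:, start:start + size],
                                      targets[:, i])
        start += size
    return loss

# Tiny usage example with random values (class indices kept in valid range).
loss = multi_category_loss(torch.randn(8, 11), torch.randint(0, 2, (8, 4)))
```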
Inference and calculation of prediction accuracy
During inference, micrographs were first split into square micrographs and Fourier cropped to 1.5 Å pixel size as described above for the training datasets. Power spectra were then appended when applicable, followed by downsampling and normalization, as was done in training. The prepared input was passed through the CNN model to obtain the output array, which was then converted to probabilities for each label using the softmax function. The predicted label for each category was determined as the one with the highest probability (Figure 1, right). For micrographs with multiple square splits, predictions from individual splits are combined with a predefined set of rules that can be customized by the user (e.g., the user can choose to only keep micrographs in which all splits are classified as "good," or may choose to maximize the number of micrographs by also keeping ones that are partially "bad"). In this work, because each square split was labeled individually, accuracy calculation was also performed for each split without combining. For the good-versus-bad accuracy calculations shown here, we defined the following for the two non-binary categories: for the support film coverage category, we defined "good" micrographs as ones labeled as either no film or minor film; for the crystalline ice category, we defined "good" micrographs as ones labeled as not crystalline. Overall "good" micrographs were then defined as ones labeled as "good" in all four categories, with the rest defined as "bad" micrographs. Accuracy was calculated as the number of correct predictions divided by the total number of samples. True positive denotes the number of correctly predicted "good" micrographs, while true negative denotes the number of correctly predicted "bad" micrographs. False positive denotes the number of "bad" micrographs predicted as "good," while false negative denotes the number of "good" micrographs predicted as "bad."
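The per-category prediction step can be sketched as below, reusing the illustrative group sizes from the loss sketch; the returned per-category probabilities are in the spirit of the confidence-score filtering mentioned later, though the names here are hypothetical.

```python
import torch

def predict_labels(logits, sizes=(4, 2, 3, 2)):
    """Softmax within each label group, then take the most probable label.
    Returns (batch, 4) predicted indices and their probabilities."""
    preds, confs, start = [], [], 0
    for size in sizes:
        p = torch.softmax(logits[:, start:start + size], dim=1)
        preds.append(p.argmax(dim=1))        # predicted label per category
        confs.append(p.max(dim=1).values)    # its probability (confidence)
        start += size
    return torch.stack(preds, dim=1), torch.stack(confs, dim=1)
```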
To compare the performance of our model with existing methods, micrograph filtering was performed with MicAssess (Li et al., 2020; Li and Cianfrocco, 2021) and with CTF-based filtering using the same test sets. Version 1.0 of MicAssess was obtained from GitHub, and the model weights were obtained from the authors. Default parameters were used during inference, and micrographs predicted as "good" in the binary classification step of MicAssess were treated as "good" micrographs, while the rest were treated as "bad" micrographs. To compare our model with CTF-based filtering, patch CTF estimation was performed in cryoSPARC (Punjani et al., 2017). Because the distribution of CTF fitting resolution varied greatly between different datasets, we set the cutoff individually for each dataset. Micrographs with a CTF fitting resolution worse than 10 Å were first excluded, as they greatly biased the statistics, and were treated as "bad" micrographs. Of the remainder, those with a CTF fitting resolution within two standard deviations of the mean were treated as "good" micrographs, while those outside of this range were treated as "bad" micrographs.
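This CTF-based baseline reduces to a few lines; the sketch below assumes an array of per-micrograph CTF fit resolutions in ångströms, and the function name is illustrative.

```python
import numpy as np

def ctf_filter(fit_res_angstrom):
    """Fits worse than 10 A are "bad"; of the rest, micrographs within two
    standard deviations of the mean fit resolution are "good"."""
    res = np.asarray(fit_res_angstrom, dtype=float)
    usable = res <= 10.0
    mu, sd = res[usable].mean(), res[usable].std()
    return usable & (np.abs(res - mu) <= 2.0 * sd)   # boolean "good" mask
```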
Results and Discussion
We tested two strategies in model design and training and evaluated their effects on the prediction accuracy with the validation set. The first strategy was to perform fine-tuning starting from model weights pretrained on the ImageNet-12k dataset, in contrast with previous work that trained a model from scratch (Li et al., 2020). Although the ImageNet dataset consists of everyday objects that are very different from cryo-EM images, previous examples such as medical imaging have shown that pretrained weights can still notably improve accuracy, especially in cases where limited training data are available (Yadav and Jadhav, 2019). The second strategy was to directly include power spectra of the micrographs as an additional channel in the input. This is inspired by the fact that many issues with micrographs can be spotted more easily in the power spectrum, particularly ice crystallinity. Although information in the Fourier space representation should theoretically also be present in the real-space images, the information content is significantly reduced by the downsampling process, which is a necessary preprocessing step when utilizing CNNs on micrographs due to computational cost. By explicitly including power spectra as part of the input, high-resolution Fourier space information contained in the original micrographs can be preserved through the downsampling step. Here, we compared the resulting models of three training cases: (1) training from scratch with a single-channel real-space input, (2) fine-tuning from a pretrained model with a single-channel real-space input, and (3) fine-tuning with a two-channel input that includes power spectra. The resulting overall good-versus-bad micrograph accuracy can be found in Table 4, while the accuracy for individual categories can be found in Supplementary Table S1. By comparing the resulting accuracy from the first two training cases (Table 4, top two rows), we can see that fine-tuning from pretrained weights greatly improved the attainable accuracy with the same training data, from 78% to 96% or greater. This can be rationalized as the feature-extracting capability of the initial layers in the CNN being largely transferable even with significant differences between the pretraining ImageNet data and cryo-EM images. This is consistent with the observation that layer decay in learning rate improved attainable accuracy (results not shown), indicating that weights in initial layers require less change to arrive at the optimal values. Furthermore, we found that when fine-tuning is performed, the training loss is already low after a single epoch of training, and becomes lower than the final loss of training from scratch after an additional epoch (Figure 3), again suggesting that pretrained weights are much closer to the optimal values than randomly initialized weights. The second strategy, direct inclusion of power spectra, increased the prediction accuracy further on top of fine-tuning, from 96% to 99%, evident from comparison of the last two training cases (Table 4, bottom two rows). When comparing prediction accuracy in individual categories (Supplementary Table S1), the most significant improvement comes from the ice crystallinity category. Notably, the number of false positives in the prediction of ice crystallinity was greatly reduced, consistent with the intuition that power spectra provide a better indication of the presence of crystalline ice than real-space images.

To further examine the generality of our two-channel input model trained with fine-tuning, we tested its performance on five additional datasets from EMPIAR, which contain a variety of features including filamentous samples and graphene oxide films. The resulting prediction accuracies (Table 5 and Supplementary Table S2) show that our model performs relatively well for all chosen datasets, with an overall accuracy higher than 95% in all cases. To compare with other existing methods, we performed micrograph filtering with MicAssess and CTF-based filtering on the same EMPIAR datasets and calculated prediction accuracies based on our labels (Supplementary Table S3 and Supplementary Table S4). While our model appears to be generally applicable, we do observe slight differences in accuracy for different test sets. We hypothesized that these variations correspond to their degree of dissimilarity to the training set.
Conclusion
In this work, we examined various strategies to improve CNN-based cryo-EM micrograph filtering. We showed that two strategies, fine-tuning from pretrained weights and directly including power spectra as input, can improve the attainable accuracy of the resulting models. This likely results from the transferability of the feature-extracting capability in the initial layers of a pretrained model, as well as the existence of better indicators for certain micrograph features in Fourier space. We demonstrated that a model trained with these strategies can filter a diverse set of cryo-EM datasets with high accuracy.
The resulting software, named Miffi, is open-source and freely available for public use. Miffi is implemented with PyTorch, which enables cross-platform compatibility. Miffi can be accelerated with CUDA on an NVIDIA GPU (about 3 micrographs per second with our setup), but it also runs at reasonable speed when using CPU only (about 3 seconds per micrograph with our setup).
Miffi accepts micrograph inputs in file formats produced by common cryo-EM processing software, such as RELION and cryoSPARC, and can write output files in the same format such that they can be directly imported back into the originating software. Necessary preprocessing steps (e.g., splitting/cropping micrographs into square micrographs, Fourier cropping, calculation of power spectra, downsampling) are performed on input micrographs in Miffi before passing them to the CNN for classification. Miffi also provides users the flexibility to control the classification process, including the ability to customize rules for combining predictions for micrographs with multiple square splits, to filter predictions based on their confidence scores, and to control which categories are written out. We note that the labeled categories in our training process do not include all potential issues, such as low particle visibility or thick ice. In particular, ice thickness is a continuous quantity that is best measured experimentally (Neselu et al., 2023; Noble et al., 2018a; Rice et al., 2018), and its ideal range is sample dependent. Therefore, combining Miffi with other data assessment criteria will still be beneficial. For example, issues with ice thickness can be filtered based on experimentally measured values or on the intensity of the diffuse ice ring at 1/3.7 Å⁻¹ in Fourier space, and issues with particle visibility can be filtered by excluding micrographs with low defocus values. Overall, we believe that Miffi can be readily incorporated into common data processing pipelines and greatly improve the accuracy and efficiency of the micrograph curation step.
Some of this work was performed at the Simons Electron Microscopy Center located at the New York Structural Biology Center, supported by the National Institutes of Health (NIH) Common Fund Transformative High Resolution Cryo-Electron Microscopy program (U24 GM129539) and by grants from the Simons Foundation (SF349247) and NY State Assembly. This work was supported by NSF grant MCB-1942668 (to N.A.), NIH grant GM124847 (to N.A.), and startup funds from Cornell University (to N.A.).
Figure 2. Example "bad" micrographs for each label category from the in-house training set. The real-space micrograph is shown on the left, with its corresponding power spectrum shown on the right. (a) Support film category. The support carbon film in the example is the darker region on the right side of the real-space micrograph, while the ice-containing hole is the lighter region on the left side. (b) Sample displacement category. Translational motion of the sample caused by drift or cracks often results in asymmetric resolution or a streaky pattern in the power spectrum. An empty hole, on the other hand, has a flat power spectrum with high intensity at the origin of Fourier space. (c) Ice crystallinity category. Ice crystallinity is determined by a non-diffuse ring at 1/3.7 Å⁻¹ in Fourier space. Micrographs with strong ice crystallinity often exhibit clear dark-and-light alternating patterns in real space (example on the right). However, for samples with minor crystallinity, features in the real-space micrographs are often subdued and thus are better identified by the ring feature in the power spectra. (d) Contamination category. The two examples shown here represent contaminating ice crystals adhering to the hole (left) and ethane contaminants embedded in ice (right).
Table 4. Overall accuracy (acc.) of good-versus-bad predictions made for the validation set by the three models. True positive = TP; false positive = FP; true negative = TN; false negative = FN.
Figure 3. Loss versus epoch for all three training cases. Solid lines denote training loss while dashed lines denote validation loss. Black, blue, and red lines correspond to training from scratch with single-channel input, fine-tuning with single-channel input, and fine-tuning with two-channel input, respectively.
Attempts were made to improve our model by training it with the test sets included as part of the training set and with the same hyperparameters as before. The resulting models, however, showed slight deterioration in performance on additional hold-out test sets (results not shown). This could be because the new training set requires additional optimization of hyperparameters, or because training specialized models on individual subsets may be necessary to improve the accuracy beyond what we see here. To investigate how many training samples are required to obtain a reasonable prediction accuracy, we performed training on random subsets of the full training set of various sizes. We also performed the training with both ConvNeXt-Small and ConvNeXt-Tiny models to explore whether a smaller CNN provides better training results when the number of training samples is low. The result, shown in Figure 4, suggests that a relatively high accuracy (~94%) can be achieved with a relatively small training set of only 1000 micrographs, and that the accuracy plateaus when the number of training samples is higher than 10000 micrographs. Interestingly, we found that ConvNeXt-Tiny did not appear to provide an advantage over ConvNeXt-Small in the low-training-sample regime, indicating that ConvNeXt-Small is a suitable choice even when training with very limited data. Overall, our result here suggests that with the fine-tuning strategy, training specialized models should not be difficult to achieve even with a small amount of data.
Figure 4. Overall good-versus-bad accuracy on the validation set plotted against the number of training samples used in training. Blue and red lines correspond to results from training ConvNeXt-Small and ConvNeXt-Tiny models, respectively.
Table 2. Dose-weighted micrographs were used for all cases.
Table 1. Datasets used in the training set. All in-house datasets were curated such that bad micrographs consisted of roughly half of each set. The training set contains micrographs from various types of samples, detectors, and modes of data collection.
Table 2. Additional test sets. The test sets contain a diverse set of micrographs that were not part of the training set.

To maximize applicability of the model to data collected on detectors of different dimensions, we set the CNN inputs as square images. Micrographs that are non-square were thus split or cropped into square micrographs during the preprocessing stage for training and inference. For the analyses in this work, micrographs from a Gatan K3 detector (5760 pix × 4092 pix) were split into two square micrographs (4092 pix × 4092 pix) spanning the original micrograph with overlap in the middle (2424 pix × 4092 pix). Micrographs from a Gatan K2 Summit detector, which are nearly square (3838 pix × 3710 pix), were cropped into a single square micrograph (3710 pix × 3710 pix) starting from the left edge of the original micrograph oriented with the long dimension in the horizontal direction. Micrographs that are originally square (e.g., collected on a TFS Falcon detector) were kept as is. Each square micrograph in the training and validation sets was labeled individually and treated as a separate entry. The training set included a total of 45768 square micrographs, while the validation set included a total of 4000 square micrographs.
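A sketch of this square-splitting rule is given below; the 1.2 aspect-ratio cutoff used to distinguish wide frames and the function name are assumptions, while the overlapping-squares and left-anchored-crop behaviors follow the description above.

```python
import numpy as np

def square_splits(mic):
    """Split a 2D micrograph array into square views; the long axis is
    assumed to be horizontal."""
    h, w = mic.shape
    if w > 1.2 * h:                    # K3-like wide frame
        return [mic[:, :h], mic[:, w - h:]]   # two squares, overlapping middle
    s = min(h, w)
    return [mic[:s, :s]]               # near-square frame: left-anchored crop

# K3-shaped example: two 4092 x 4092 squares with a 2424-pixel-wide overlap.
k3 = np.zeros((4092, 5760))
assert [s.shape for s in square_splits(k3)] == [(4092, 4092), (4092, 4092)]
```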
The results indicate that our model provides the highest filtering accuracy among the tested methods for all datasets. Surprisingly, we found that CTF-based filtering outperformed MicAssess for many of the datasets tested here. This could be a result of these test sets differing significantly from the training set for MicAssess. It is also important to note that our labeling criteria may differ from those used in training MicAssess. In the absence of a standardized and validated test set for micrograph filtering, an exact comparison cannot be made between different methods. Nonetheless, these results illustrate the utility of utilizing Fourier information. A notable example was EMPIAR-10379, for which CTF-based filtering had an overall accuracy of 97.8%, compared to MicAssess, which had 77.5% accuracy. Interestingly, this dataset had very few issues in the ice crystallinity, film, and contamination categories (17 out of 1118 micrographs) but many in the drift category (218 out of 1118) (Supplementary Table S2), indicating that CTF-based filtering is better at detecting the latter type of issue. This observation is consistent with the fact that CTF fitting is done in Fourier space, and drift issues are readily detectable as they lead to anisotropic power spectra or, in the case of empty holes, loss of Thon rings. The fact that our model outperforms both CTF-based filtering and MicAssess further supports the strategies adopted in this work.
Osseous and Cartilaginous Trochlear Development in the Pediatric Knee: A Cadaveric Computed Tomography Study
Background: The anatomy of the trochlea plays a significant role in patellar stability. The developmental anatomy of the trochlea and its relationship to patellar stability remains poorly understood. Purpose: To describe the developmental changes of the osseous and cartilaginous trochlear morphology in skeletally immature specimens. Study Design: Descriptive laboratory study. Methods: A total of 65 skeletally immature cadaveric knees between the ages of 2 months and 11 years were evaluated using computed tomography scans. The measurements in the axial plane of both cartilage and bone include medial, central, and lateral trochlear height; sulcus height; medial and lateral trochlear facet length; trochlear sulcus angle; patellar sulcus angle; condylar height asymmetry; and trochlear facet asymmetry. Additional measurements included trochlear depth and lateral trochlear inclination angle. In the sagittal plane, measurements included curvilinear trochlear length, direct trochlear length, condylar height, and patellar sulcus angle. Results: Analysis of trochlear morphology showed that condylar height, condylar height asymmetry, and trochlear depth all increased with increasing age. The osseous and cartilaginous sulcus angles became deeper with age until age 8 and then plateaued. This corresponded with an increase in trochlear depth that also plateaued around age 8. Osseous condylar asymmetry increased with age but flipped from a larger medial condyle to a larger lateral condyle around age 8. The continued growth of the trochlea with age was further demonstrated in all measures in the sagittal view. Conclusion: This cadaveric analysis demonstrated an increase in condylar height with age by all measurements analyzed. These changes in condylar height continued to be seen through age 11, suggesting a still-developing trochlea past this age. By age 8, a plateau in sulcus angle and sulcus depth suggests more proportionate growth after this point. Similar changes in trochlear and patellar shape with age suggest that the 2 structures may affect each other during development. Clinical Relevance: This information can help design, develop, and determine the timing of procedures that may alter the anatomy and stabilize the trochlea and patellofemoral joint.
The anatomy of the patellofemoral joint plays a significant role in patellofemoral disorders. Dysplasia of the femoral trochlea is characterized by the abnormal shape and loss of depth of the trochlear groove.23 The presence of trochlear dysplasia has been identified as a primary anatomic risk factor for patellofemoral instability2,3,7,26 and arthritis.10 Trochlear dysplasia is also a prognostic factor used to predict recurrent patellar instability in children and adolescents.7,13,14 With good outcomes in surgical procedures such as trochleoplasty in patients with trochlear dysplasia,6,15,16 an improved understanding of trochlear development may alter timing and type of intervention in skeletally immature patients with trochlear dysplasia and related patellofemoral disorders.
Unlike the development of the pediatric hip joint, which has been well studied for decades, the developmental anatomy of the trochlea and its relationship to patellar stability has yet to be fully described. There is, however, a growing understanding of abnormal trochlear morphology.8,22 This has been accomplished with historically used measures for patellar instability obtained using an axial view on magnetic resonance imaging (MRI).1,3,18 Parikh et al20 also used these measures on MRI to describe the growth pattern of trochlear dysplasia in adolescents. Despite this knowledge of the dysplastic trochlea, normal development of the trochlea remains to be fully understood.
In 2020, Richmond et al21 published a study of 31 pediatric knee specimens in which they reported a relationship between the development of the trochlea and the patella using new measuring techniques in the sagittal plane on computed tomography (CT), including one that uses the anterior femoral cortex, to add to established measurements. That study was limited, however, by a small sample size. The purpose of the current study was to describe the changes of the osseous and cartilaginous trochlear morphology in skeletally immature specimens and their relationship to the development of the patella, by using a larger sample size of pediatric CT scans. Additionally, we aimed to evaluate morphologic development using the recently described sagittal plane measurements by Richmond et al.
METHODS
The cadaveric tissue used in this study did not include any identifying patient information or genetic information, so no institutional review board (IRB) approval was required. The pediatric cadaveric tissue used in the study was provided by Allosource, an allograft-harvesting facility. In accordance with the waiver for IRB approval, prior consent to use this tissue for research purposes was obtained from families; there was no subsequent contact with families that made these tissue donations, and no genetic tests were performed. Age and sex were defined for each cadaveric specimen; the cause of death was not available. Specimens were screened and found absent of any gross evidence of musculoskeletal chronic disease or dysplasia.
CT air arthrogram scans (GE Litespeed Scanner) with 0.625-mm cuts were taken on 65 fresh-frozen cadaveric knees from skeletally immature specimens ranging in age from 2 months to 11 years. Air arthrogram improves visualization of cartilage surfaces on cadaveric CT scans and is possible due to the dissected state of these cadaveric specimens. This study uses a data set from a previously published paper on this topic (31 specimens) and expands this set by the inclusion of a larger number of specimens (65 specimens), including older but still skeletally immature specimens.21

Axial plane measurements included central, medial, and lateral condylar height of both cartilage and bone, sulcus height of both cartilage and bone, medial and lateral trochlear facet length, osseous sulcus angle (OSA), cartilaginous sulcus angle (CSA), patellar sulcus angle (PSA), osseous and cartilaginous condylar height asymmetry, osseous and cartilaginous trochlear facet asymmetry, trochlear depth, and lateral trochlear inclination angle (Table 1). The PSA was measured using the slice with the greatest width of the patellar cartilage. Additionally, measurements of the trochlea were taken from the most proximal slice where the trochlear groove was entirely covered by articular cartilage. A reference line across the posterior condyles at the first axial slice with full posterior articular cartilage was used for all condylar height measurements and lateral inclination angle.
In the sagittal plane, measurements described by Richmond et al21 were evaluated. These included curvilinear trochlear length, direct trochlear length, condylar height, and PSA as previously described but in the sagittal plane (Table 2).
Not all specimens had a patella present due to prior tissue harvest (n = 13), so patellar measurements in both the axial and sagittal planes were not recorded in these specimens. The 3 specimens <1 year old did not have sagittal images. Additionally, their trochleae were primarily cartilage; thus, only axial cartilage measurements were made on these specimens.
To obtain interrater reliability, 2 medical student researchers (S.G.A. and N.T.) made the measurements independently on the CT scans. Previously, 31 of the scans were analyzed. Measurements were performed using OsiriX Lite Imaging Software (Version 12.0.3; Pixmeo), following training and confirmation of methodology by the senior authors (K.G.S., M.T.). To obtain intrarater reliability, the same 2 medical students remeasured 40 scans 4 weeks after making the initial measurements.

Trochlear height: The greatest perpendicular distance from the most anterior cartilage of the condyle to the anterior femoral cortex. The lateral and medial trochlear heights were measured from the central reference line placed parallel to the anterior femoral cortex.

Patellar sulcus angle: Measured on the sagittal cut with the greatest patellar length; the angle was measured between the most proximal and most distal patellar cartilage.
Statistical Analysis
Intraclass correlation coefficients (ICCs) and 95% CIs were calculated for the measurements. The ICC(2,1) was used to analyze the interrater reliability between the 2 independent raters, and the ICC(3,1) was used to analyze the intrarater reliability for 1 rater. The mean and standard deviation of the measurements were reported for every age group (<1, 2, 3, 4, 5, 7, 8, 9, 10, and 11 years). All analyses were completed in RStudio (Version 2021.09.1; RStudio: Integrated Development Environment for R. RStudio, PBC).
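The reliability analyses above were run in R; as a rough cross-check, the same single-rater ICCs can be computed in Python with the pingouin package. The long-format table below uses made-up values, not study measurements.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per (scan, rater) measurement
df = pd.DataFrame({
    "scan":  [1, 1, 2, 2, 3, 3, 4, 4],
    "rater": ["A", "B"] * 4,
    "osa_deg": [158.2, 157.6, 150.1, 151.0, 162.4, 161.8, 149.7, 150.3],
})

icc = pg.intraclass_corr(data=df, targets="scan", raters="rater", ratings="osa_deg")
# "ICC2" corresponds to ICC(2,1) (absolute agreement, single random rater);
# "ICC3" corresponds to ICC(3,1) (consistency, single fixed rater).
print(icc.set_index("Type").loc[["ICC2", "ICC3"], ["ICC", "CI95%"]])
```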
RESULTS
The study cohort included 65 CT scans of skeletally immature knee specimens aged from 2 months to 11 years (46 male and 19 female; mean age, 6.42 years). Mean values of the osseous, cartilaginous, and patellofemoral measurements stratified by age are displayed in Tables 3, 4, and 5, respectively. Most measurements yielded an excellent level of reliability, with some good and moderate ratings.
Axial Measurements
Analysis of trochlear morphology using osseous and cartilaginous measurements of condylar height, condylar height asymmetry, and trochlear depth demonstrated a significant increase in the size of the medial and lateral trochlea as age increased (P < .001 for all) (Figure 1). The osseous and cartilaginous sulcus angles decreased (became deeper) with age until around age 8 and then plateaued. This corresponded with an increase in trochlear depth that also plateaued around age 8. In contrast, medial and lateral condylar height continued to increase up to age 11. Cartilaginous condylar asymmetry rose slightly with age, with the lateral condyle consistently larger than the medial condyle. Osseous condylar asymmetry increased with age but flipped from a larger medial condyle to a larger lateral condyle around age 8 (Figure 2). PSA, OSA, and CSA all decreased with age until about the age of 8. OSA changed the most (range, 165.14°-149.04°), CSA changed the least (range, 150.8°-147.06°), and PSA showed an amount of change between the 2 (range, 133.48°-128.41°). A linear regression looking at the slope change over all ages showed a significant decrease of 2.2° (P < .001) for each 1-year increase in age for OSA. PSA and CSA also showed a negative slope observationally, with P values above the statistically significant cutoff of P < .05, at .051 and .076, respectively. Lateral inclination angle and patellar tilt appeared to remain within a constant range across all ages.
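The slope quoted for OSA is an ordinary least-squares fit of angle against age; a minimal sketch of that computation, using invented specimen values rather than the study data, is:

```python
import numpy as np
from scipy import stats

# Invented per-specimen values: age in years, osseous sulcus angle in degrees
age = np.array([2, 3, 3, 4, 5, 5, 7, 8, 9, 10, 11], dtype=float)
osa = np.array([164.0, 162.5, 161.8, 160.2, 158.9, 159.5,
                154.1, 151.0, 150.2, 149.6, 149.0])

res = stats.linregress(age, osa)  # slope in degrees per year of age
print(f"slope = {res.slope:.2f} deg/year, p = {res.pvalue:.4f}")
```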
Sagittal Measurements
With respect to height measures, the lateral trochlea was consistently higher than the medial trochlea: on average, 4.9 mm higher across age groups. Length in the sagittal plane followed the same pattern, with a consistently longer lateral trochlea: on average, 7.6 mm longer. Measures of direct and curved distances showed significantly higher variability of lateral trochlear height (P < .001; Levene test).
DISCUSSION
Using CT scans of pediatric cadaveric knee specimens aged 2 months to 11 years, we were able to describe the osseous and cartilaginous trochlear morphology over early childhood development. A pattern of growth of the trochlea was shown through increased medial, central, and lateral trochlear height in both osseous and cartilaginous measurements with age. One of the main findings of this study was that this growth appeared to continue past the age of 8, which was previously suspected by Richmond et al,21 and through the age of 11. The findings also suggest that by age 8, the growth of the medial, central, and lateral trochlea is more symmetrical based on a plateau in sulcus angle and sulcus depth at this age. The condylar height and trochlear length measurements in the sagittal plane corroborated this pattern of development, establishing them as potentially useful in evaluating trochlear morphology. A few studies have observed morphologic features of the trochlea in skeletally immature patients and described how they change with age.17,21,24 These studies have found a linear increase in lateral, central, and medial trochlear heights. A study of normal pediatric knee MRIs measured these changes in both articular cartilage and subchondral bone in the axial plane but did not explore the relationship of the trochlea to the patella and had a mean patient age of 11.4 years.24 One of the primary findings of the study by Richmond et al21 was that OSA, CSA, and PSA all deepened continuously until the age of 8. The current study replicated these findings, whereby all 3 measurements decreased with age until 8 years and then plateaued. In our study, however, OSA and PSA decreased significantly more than CSA. The similar results between PSA and OSA may support the findings by Richmond et al that there could be a relationship between the development of the trochlea and that of the patella.
Our findings of a more subtle change in CSA demonstrate some agreement with studies by Glard et al9 and Pagliazzi et al19 that reported very little change in CSA from an early age. Using biometric evaluation previously described by Wanner,25 Glard et al determined that sulcus angle was largely determined by genetics. This was based on the findings of similar shape, including sulcus angle, in fetuses and adults. These studies, however, examined the skeletally immature and mature trochlea separately and not over the course of its development. The relationship between the trochlea and patella throughout development remains to be fully understood. Our study provides better insight into this interplay throughout the process of development and postulates that patellar anatomy could perhaps develop secondarily and progressively in correspondence with the shape of the developing trochlea.
Most clinical measurements of the trochlea are obtained from axial cross-sectional imaging, but additional sequences are important for describing the complete morphology. The sagittal view provides additional visualization of the proximal trochlea, including proximal cartilage extension, and relative lateral and medial trochlear heights. The radiographic correlates of trochlear dysplasia, such as the crossing sign and trochlear bump in the Dejour classification, utilize the sagittal or true lateral view to categorize these features.7 More recently, new measures and views have been proposed to describe the morphology of the trochlea more completely. One study has even found an oblique view of the trochlea in the sagittal plane that can more accurately evaluate trochlear morphology than axial or traditional sagittal views using previously described measurements.1 Using the measurements of Richmond et al,21 we found similar results of a much larger lateral trochlear height and length that increased with age.
In contrast to the results by Richmond et al,21 our sagittal measurements of direct and curved trochlear distance demonstrated significantly more variability of the lateral condyle than the medial condyle. Using MRIs in the sagittal plane, Biedert et al4 found that the lateral trochlea has high clinical relevance based on their finding that a (too) short lateral articular trochlea is likely another factor in lateral patellar instability. Our findings of more variable development of the lateral trochlea may help to explain how a (too) short lateral trochlea and other dysplastic features of the lateral trochlea could develop. The lateral trochlea is used to calculate several of the established measures of trochlear dysplasia, including trochlear depth, sulcus angle, facet asymmetry, and lateral trochlear inclination. As these parameters have been shown to have the strongest association with lateral patellar dislocation,3 understanding the development of the lateral trochlea is useful in potentially detecting earlier signs of trochlear dysplasia during development.
A study by Trivellas et al24 described trochlear morphology development in 246 normal pediatric knee MRIs from patients aged 3 to 16 years old. Their results also demonstrated a similar consistent pattern of development and growth when looking at measures of trochlear height in subchondral bone as well as articular cartilage. Using an older cohort than our study, they concluded that final trochlear development is nearly complete around age 11.
In agreement with this study, we found that trochlear development continued through age 11. Understanding at what age the trochlea is still developing provides valuable information that can aid clinical decision-making about possible surgical interventions and timing of surgery in patients with patellofemoral instability.
Trivellas et al24 also found that the anatomy of the cartilage in developing specimens more accurately represents the shape of a mature trochlea at an earlier age than that of the subchondral bone. This is in line with our findings, especially when looking at trochlear condylar asymmetry. Cartilaginous condylar asymmetry remained relatively constant across all age groups, with the lateral condyle being larger than the medial condyle. Osseous condylar asymmetry, on the other hand, had a larger range of values and, more interestingly, showed a flip from a larger medial condyle to a larger lateral condyle at the age of 8. The change in condylar asymmetry is likely explained by the continued ossification of the distal femoral epiphysis. Boeyer and Ousley5 found that by 3.7 weeks of life, 95% of all individuals will exhibit an ossified distal femoral epiphysis. The distal femoral epiphysis will continue to grow, forming both femoral condyles, until it completely fuses with the metaphysis between 14 and 16 years of age in women and 16 and 18 years of age in men.11 The change in osseous condylar asymmetry in our study appears to demonstrate 2 things. Measurements of the subchondral bone show a larger medial condyle before age 8, which may indicate that the medial condyle ossifies earlier. For the lateral condylar subchondral bone to catch up and eventually become larger than the medial condyle after age 8, there also appears to be a period of greater ossification of the lateral trochlea until it more closely resembles the growth of the articular cartilage. This knowledge may allow clinicians to rely on measures of the articular cartilage, rather than the subchondral bone, in skeletally immature patients to predict final trochlear shape and potential risk of patellofemoral instability.
Strengths and Limitations
The strengths of this study include the use of a skeletally immature population in early development (mean age, 6.42 years; range, 2 months-11 years) and the access to pediatric cadaveric specimens, which is incredibly rare. This pediatric specimen database provides a unique perspective on the anatomic development of the trochlea using a CT scan database. The CT scan does have excellent resolution for visualization of bone structure. Although CT scans have limited cartilage resolution in clinical settings, CT air arthrograms on cadavers have very high resolution for cartilage due to increased differentiation between the cartilage and synovial fluid or bone.12 The use of pediatric cadaveric specimens allowed for the joints to be open and the presence of air arthrograms to be better appreciated. The use of cadaveric tissue CT scans is also among the strengths of the study because CT scan availability in young patients is limited, in part due to concerns about the use of ionizing radiation in patients. While the 65 analyzed specimens are more than double the number used in a previous CT-based study by Richmond et al,21 a larger sample size still could help strengthen this study. The large increase in size at age 8, seen in Figure 1, may highlight this limitation, as this age group had among the fewest specimens (n = 4). This makes it possible for a couple of large specimens to skew an entire age group. The evaluation of when trochlear development is completed was limited by a population with a maximum age of 11. The addition of specimens beyond this age could help to more definitively describe when trochlear development is complete. Stratification by sex was limited by a low number of female specimens overall and an uneven distribution across most age groups. Future studies on the development of the trochlea may improve our understanding of this joint and assist with the design and implementation of procedures that aim to stabilize and/or normalize the patellofemoral joint. As these procedures become more common, understanding the development of the trochlea will become increasingly important. Future studies on the relationship between trochlear and patellar development and morphology may also improve our understanding of the normal and pathologic function of this joint in terms of stability and osteoarthritis.
CONCLUSION
This cadaveric CT scan analysis demonstrated that there is an increase in condylar height with increasing age from 2 months to 11 years. These changes in condylar height continue to be seen through age 11, suggesting that the trochlea continues to develop past this age. By age 8, sulcus angle and sulcus depth values plateau, which suggests more proportionate growth of the lateral and medial condyles after 8 years of age. This information can help with the design, development, and implementation of procedures that may alter (and perhaps biomechanically improve) the anatomy of the patellofemoral joint.
Lateral trochlear facet length (D): The distance between the highest point of the lateral trochlear facet and the lowest point of the trochlear groove.a
Medial trochlear facet length (E): The distance between the highest point of the medial trochlear facet and the lowest point of the trochlear groove.a
Trochlear facet asymmetry: Divide the medial trochlear facet length (E) by the lateral trochlear facet length (D) to get a ratio (E ÷ D).
Osseous sulcus angle: The angle formed by the medial (E) and lateral (D) osseous trochlear facets meeting at the top of the sulcus height (B); that is, the angle from the deepest point of the trochlear groove to the highest points of the medial and lateral trochlear facets.

Cartilaginous Measurements
Lateral condylar height (A): A perpendicular line connecting the highest point of the lateral trochlear facet and the reference line (Ref) of the posterior femoral condyles.a
Sulcus height (B): A perpendicular line connecting the lowest point of the trochlear groove and the reference line (Ref) of the posterior femoral condyles.a
Medial condylar height (C): A perpendicular line connecting the highest point of the medial trochlear facet and the reference line (Ref) of the posterior femoral condyles.a
Trochlear depth: The mean height of the medial (C) and lateral (A) trochlear facets minus the sulcus height (B): [(A + C) ÷ 2] - B.
Trochlear condylar asymmetry: The height of the lateral trochlear condyle (A) divided by the height of the medial trochlear condyle (C), multiplied by 100 to give a percentage: (A ÷ C) × 100%.
Cartilaginous sulcus angle: The angle formed by the medial (E) and lateral (D) cartilaginous trochlear facets meeting at the top of the sulcus height (B); that is, the angle from the deepest point of the trochlear groove to the highest points of the medial and lateral trochlear facets.
Patellar sulcus angle (F): The angle of the cartilaginous surface of the posterior patella.
Lateral inclination angle (H): The angle between the reference line (Ref) of the posterior femoral condyles and the continuation of the line used to measure the lateral trochlear facet length (D).
a Measurements taken of both osseous and cartilaginous structures.

TABLE 2 Sagittal Computed Tomography Measurements of the Trochlea and Patella as Described by Richmond et al21 (measured laterally, centrally, and medially)
Curvilinear distance: Distance along the cartilage surface from the most proximal trochlear cartilage to the distal portion of the intercondylar notch. The central posterior intercondylar notch reference line was used for both the lateral and medial distances.
Direct distance: Straight-line distance from the most proximal trochlear cartilage to the distal portion of the intercondylar notch. The central posterior intercondylar notch reference line was used for both the lateral and medial distances.
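The derived measures in these tables reduce to simple arithmetic on the labeled heights and lengths. The sketch below encodes them directly; the example inputs are hypothetical, not study measurements.

```python
def trochlear_depth(lateral_h: float, medial_h: float, sulcus_h: float) -> float:
    """Trochlear depth: mean facet height minus sulcus height, [(A + C) / 2] - B."""
    return (lateral_h + medial_h) / 2 - sulcus_h

def condylar_asymmetry(lateral_h: float, medial_h: float) -> float:
    """Condylar asymmetry, (A / C) x 100%; values >100 mean a larger lateral condyle."""
    return lateral_h / medial_h * 100

def facet_asymmetry(medial_len: float, lateral_len: float) -> float:
    """Trochlear facet asymmetry ratio, E / D."""
    return medial_len / lateral_len

# Hypothetical heights (mm) for one axial slice
print(trochlear_depth(lateral_h=18.4, medial_h=16.9, sulcus_h=13.2))  # 4.45 mm
print(condylar_asymmetry(lateral_h=18.4, medial_h=16.9))              # ~108.9 -> lateral larger
```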
Figure 1. The mean medial and lateral cartilaginous condylar height with respect to age.
Figure 2. The osseous and cartilaginous trochlear condylar asymmetries with respect to age. Asymmetry values <1 signify a larger medial trochlear condyle and values >1 signify a larger lateral trochlear condyle.
TABLE 3 Osseous Measurements Taken in the Axial Planea
aF, female; M, male; OSA, osseous sulcus angle. The table does not include the 3 specimens in the <1 age group as they did not have osseous measurements.

TABLE 4 Cartilaginous Measurements Taken in the Axial Planea
aCSA, cartilaginous sulcus angle; F, female; M, male; PSA, patellar sulcus angle.

TABLE 5 Patellofemoral Measurements Taken in the Sagittal Planea
aPSA, patellar sulcus angle.
The Okinawa Infectious Diseases Initiative
At the Kyushu–Okinawa Group of Eight summit in 2000, Japan announced the Okinawa Infectious Diseases Initiative (IDI) and pledged to spend US$3 billion over a five year period to combat infectious and parasitic diseases in developing countries. The IDI has exceeded expectations, spending more than US$4 billion over four years. The IDI is a unique initiative with its own philosophical basis and specifically tailored interventions and measures that helped to initiate worldwide political and financial commitments in the fight against infectious diseases. Notably, it promoted partnerships among stakeholders and emphasized comprehensive and inter-sectoral approaches (i.e. coordination and collaboration between health and other sectors). It helped to create a new vision of what is possible in the global effort against communicable diseases and has been instrumental in shaping the changing environments of development assistance, poverty reduction and other trends to reduce the impact of infectious and parasitic diseases.
Parasitology in Japan
Osamu Kunii, Institute of Tropical Medicine, Nagasaki University, Sakamoto 1-12-4, Nagasaki 852-8523, Japan
The battle against infectious diseases
Since the 1997 Group of Seven (G7) summit in Denver, USA, world leaders have made major commitments and agreed to collaborate to combat infectious and parasitic diseases [1]. At the Kyushu-Okinawa Group of Eight (G8) summit in July 2000, infectious diseases were on the main agenda for the first time, and major progress was made in securing stronger commitments from all G8 nations [2]. The commitments included an agreement to establish new partnerships to help maximize the impact of health and medical interventions, and recognition of the need to expedite the mobilization of additional resources. One of the outcomes of this new-found political appreciation of the urgent need to tackle infectious diseases was the establishment in January 2002 of the Global Fund to Fight AIDS, Tuberculosis and Malaria (GFATM: http://www.theglobalfund.org/) [3].
Japan remained the top overseas aid donor throughout the 1990s and is still the second biggest after the USA [4]. Even before the Denver summit, the Japanese Government was committed to fighting infectious diseases. In 1993, together with the USA, Japan launched the Common Agenda for Cooperation in Global Perspective [5] to address global problems such as increasing environmental damage, overpopulation and damage from both natural and manmade disasters. This was followed, in 1994, by the Global Issues Initiatives on Population and AIDS [6]. Japan underscored its undertaking to boost parasite control efforts by launching the Global Parasite Control Initiative, also known as the Hashimoto Initiative (HI), at the 1998 G8 summit in Birmingham, UK [7]. The subsequent Okinawa Infectious Diseases Initiative (IDI) was designed using the successes and challenges experienced during HI activities, expanding targets and strategies towards broader infectious disease control and prevention, and deploying a broad spectrum of interventions. In this article, I review the IDI, focusing on its philosophy, components, activities and achievements.
Basic philosophy
The IDI incorporated the following basic philosophy, which greatly influenced the manner in which the programme was designed and implemented.
Infectious and parasitic diseases as a central issue in economic and social development

Infectious and parasitic diseases not only threaten the lives of individuals in developing countries but also are an impediment to the social and economic development of those nations [8][9][10], particularly affecting the poor [11]. The risk of infection in developing countries is increased by several factors, including high population growth rates, poverty, gender disparities, fragile medical systems, inadequate preventive care and treatment services, lack of safe water supply, and malnutrition [12][13][14]. These factors are compounded because poor health exacerbates poverty [15]. Fighting infectious and parasitic diseases should be a central part of all development programmes and a crucial element in efforts to reduce poverty [16].
Global partnership and community-based action

Infectious and parasitic diseases should be viewed as a global issue that requires collective approaches based on international, multisector partnerships [17]. Effective measures to tackle these diseases also require action at the community level based on the concept of primary health care [18]. As such, it is also important to incorporate measures against infectious and parasitic diseases in all community-level development programmes.
The experiences of Japan in public health activities and its future role

Following World War II, Japan developed a public health centre system, trained public health workers extensively, promoted measures for maternal and child health care and enhanced health care services in schools [19,20]. These steps contributed to the rapid reduction in infant mortality rates [21]. Japan also mounted major initiatives to eliminate infectious and parasitic diseases nationwide; for example, by linking public health activities with measures to control tuberculosis (TB), Japan substantially reduced the number of TB-related deaths [22]. Okinawa, which comprises the southernmost islands of Japan, has a history of successful eradication of malaria, filariasis and other parasitic diseases through active public health measures, including community participation and mobilization [23].
Drawing upon these experiences, Japan has provided experts, technical assistance and capacity development programmes to developing countries through the IDI by modification and application of the methods and technology that have proved so successful in Japan.
The IDI actions against infectious and parasitic diseases
Strengthening the health sector in developing countries

The most important philosophy of Japanese development assistance is to support the self-help efforts of developing countries. Japan supports the development and implementation of health sector plans and/or infectious-disease-specific intervention programmes designated by the countries themselves. Through infrastructure and institution building, in addition to provision of technical assistance, Japan supports and facilitates the development of health systems and sector reforms, thereby reinforcing health development planning and programmes, building capacity and helping to ensure sustainability of infectious disease control.
Human resources development
The IDI has given high priority to supporting human resources development, which is the most crucial component in making health systems function, and to creating quality services and high performance in infectious disease control. Japan has provided several training programmes and courses in response to local needs in both developing countries and in Japan aimed at individuals from all levels (e.g. from policy makers to field site personnel).
Partnership with civil organizations, donor countries and international organizations
To tackle a formidable global issue, Japan has consolidated partnerships with various stakeholders, including the United Nations (UN: http://www.un.org) and other multilateral organizations, donor agencies, nongovernmental organizations (NGOs) and civil organizations such as private nonprofit organizations and community groups. Japan has also promoted partnerships with developing nations that have already shown progress and in which high-quality human resources in infectious disease control already exist. These partnerships have been developed into a cooperative network in developing countries that facilitates sharing of knowledge, skills and expertise among the cooperating countries.
Promotion of research activities
For decades, Japan has promoted research activities in developing countries that are designed to create new, appropriate technology and quality clinical and laboratory work [24,25]. Therefore, the promotion of research activities became one of the key components of the IDI. However, because of Japanese official development assistance budget constraints (i.e. reduced aid to developing countries) and the increasing trend of targeting worldwide assistance towards poverty reduction [26], there has been a priority shift away from scientific research and towards an operational focus, with the goal of providing rapid and direct support to those most in need. Consequently, support for research has declined.
Promotion of public health at the community level

Japan has paid special attention to the improvement of basic sanitation, clean water, basic education and primary health care within communities, concentrating on interventions that lead to the reduction of infectious diseases. This infrastructure support, ostensibly assistance to boost primary and community health care, has emphasized the need for comprehensive, integrated approaches and has included the construction of health centres, the distribution of medical and laboratory equipment and supplies, the training of community personnel and the facilitation of community participation.
Major achievements of the IDI

The formation of the IDI stimulated an increase in worldwide political and financial commitment for the fight against infectious diseases [27]. According to the G8 performance assessments conducted by the G8 Information Centre, between 2000 and the start of 2004, Japan spent US$4.1 billion on the IDI. IDI spending is classified under two categories: direct assistance (US$1.2 billion) and indirect assistance (US$2.98 billion). Direct assistance involves projects and interventions that directly affect disease prevention. These include treatment and control (i.e. the provision of medicine), diagnostics and vaccines, technical assistance for disease control and training a workforce for disease control. Indirect assistance involves projects and interventions that indirectly influence infectious disease control. These include clean water and sanitation, provision of basic education and renovation of health facilities. With regard to the geographical allocation of the IDI inputs (Table 1), Japan has focused mainly on Asia. However, more attention is now being paid to Africa because it represents the biggest obstacle to the achievement of the Millennium Development Goals (MDGs: http://www.un.org/millenniumgoals/), which are a series of global development targets set by the 2000 UN Millennium Assembly.
In the direct assistance section, 32% of spending was allocated to HIV/AIDS-related projects, whereas <5% was allocated to the control of malaria and other parasitic diseases. This reflects the political commitment worldwide and especially the efforts that have been prevailing in recent years to combat HIV/AIDS [28,29]. Japan has put great emphasis on the promotion of public health and the improvement of the underlying social and economic conditions that enable infectious diseases to flourish and spread. These include illiteracy, lack of potable water and sanitation, and inadequate access to basic health services. Major importance has been assigned to water supply projects, which facilitate and promote infectious and parasitic disease control and improve the socioeconomic status of communities through reduction of the time and labour required for drawing water [30]. Japanese leadership in this respect has placed Japan as the top global donor in the sector, having contributed to more than one-third of water supply and sanitation projects in the developing world to provide >40 million people with access to safe drinking water and basic sanitation in the past five years [31] (http://www.mofa.go.jp/mofaj/gaiko/oda/index.html).
The IDI also provided extensive support to basic education, not only through the building of schools and the training of teachers but also through equipping schools with basic sanitation and a clean water supply, and developing school-based programmes (e.g. deworming, hygiene education and HIV/AIDS awareness and education). For discussion of Japanese initiatives in school health, see Ref. [7].
The creation of new and multifaceted partnerships has been integrated throughout all IDI activities. First, Japan has provided small grants to local, international and Japanese NGOs in >100 countries. Second, Japan has cemented and extended its partnerships with other donors, in particular with the USA. Japan signed the Japan-USAID Partnership for Global Health document in June 2002 and sent joint project-formulation missions to Nigeria, Nepal, Honduras and 30 other countries to initiate collaborative efforts to combat infectious diseases. Third, Japan has strengthened partnerships with the World Health Organization (WHO: http://www.who.int), United Nations Children's Fund (UNICEF: http://www.unicef.org) and other UN agencies in various programmes such as the Roll Back Malaria (RBM) partnership.
Box 1. Projects under IDI
Examples

Japan has provided diagnostic kits, equipment, voluntary counselling, testing centres, training of laboratory technicians and clinicians, and other types of support to control HIV/AIDS in many HIV-epidemic countries.
For vaccine-preventable diseases, the IDI has supplied vaccines with a cold chain and provided technical assistance for surveillance and laboratory and clinic management. In support of the Global Malaria Programme and RBM, Japan has donated anti-malarial drugs, diagnostic equipment and long-lasting insecticide-treated bed nets, and offered technical assistance and training for laboratory and clinical management, especially in sub-Saharan Africa and the Greater Mekong Subregion. To combat lymphatic filariasis in the Pacific region, Japan has contributed to PacELF by offering the know-how learned from Japanese experience in eliminating this disease in the 1970s. The IDI has also provided diethylcarbamazine and immunochromatographic test cards and deployed the Japan Overseas Cooperation Volunteers for this programme [44]. Japan has also been one of the largest donors in controlling Chagas disease in Central America through the provision of equipment and supplies, in addition to technical assistance for programme management and individual interventions, including surveillance, materials for information, education and communication, and insecticide spraying [45].
In partnership with the WHO and the Carter Center (http://www.cartercenter.org), Japan has contributed to the Guinea Worm Eradication Program by establishing community-based surveillance systems and offering health education in infected villages. The IDI also provided indirect assistance to improve water supply systems using new water-filtering equipment. In response to the tremendous impact of severe acute respiratory syndrome (SARS) and avian flu on neighbouring Asian nations and the urgent need for the containment and control of these disorders, Japan provided quick and intensive assistance to China, Vietnam and other affected nations. Japan provided preventive, diagnostic and curative supplies, in addition to equipment and expertise [46].
Achievements
As the largest donor in the Western Pacific Region of the WHO, the IDI has supported the polio eradication programme. This region obtained polio-free status in October 2000 [47]. Japan has made the second biggest contribution, after the USA, towards the eradication of guinea worm and continues to support this programme [48]. The number of people infected with guinea worm worldwide declined by 99%, from 3.5 million cases in 1986 to 16,026 in 2004. Good results have also been achieved in Chagas disease control in Central America. The vector Rhodnius prolixus has been eliminated from 294 villages in nine health areas, and the house infestation rate of another vector, Triatoma dimidiata, has decreased by 70% following the first large-scale vector control project in Guatemala. This project was supported by Japan and involved two cycles of residual spraying in >200,000 houses in 2004 (http://www.paho.org/English/AD/DPC/CD/dch-jica-pjt.doc). The IDI direct assistance programme has provided several projects and interventions, varying from provision of supplies and equipment to technical assistance and training. These were planned and implemented with the recipient governments and development partners according to the needs of the countries. Some examples of the IDI projects are listed in Box 1. It is difficult to measure the impact of the IDI in terms of mortality or morbidity reduction with regard to each infectious disease because the inputs of the IDI vary according to the needs of each country. In addition, the contribution by the IDI is just a part of all the efforts and interventions made by the recipient countries, donors and other development partners. However, there are some examples of the IDI having made a major contribution and having shown considerable impact (Box 1).
Concluding remarks
Since the UN adopted the MDGs, global development efforts have been increasingly focused and result orientated [16]. Substantial funding has gone to the control of HIV/AIDS and several other infectious diseases through the GFATM, the US President's Emergency Plan for AIDS Relief (PEPFAR: http://www.pepfar.org) and other initiatives. However, relatively little funding is provided to control several parasitic diseases that are now increasingly being labelled as the neglected diseases [32,33]. Although some of the recently formed foundations and agencies such as the Carter Center focus on the eradication or elimination of individual parasitic diseases [34], most of the donors and aid agencies focus their attention on HIV/AIDS and other high-profile killer diseases [35]. Under these circumstances, Japan needs to pay special attention to the neglected diseases.
Apart from the MDGs, there are other approaches that focus on poverty reduction and that promote coordinated efforts and actions involving the recipient country, donor agencies and other stakeholders; these include Poverty Reduction Strategy Papers and Sector Wide Approaches [36,37]. These are characterized by a set of operating principles, including broadening policy dialogue, developing a private sector policy (e.g. health and education, and a common realistic expenditure programme), common monitoring arrangements and additional coordinated procedures for funding and procurement [38,39]. This trend has gradually been preventing a single donor from providing specific projects in developing countries [40]. Rather, it is recommended to enable donors to support overall health planning and financing in the recipient countries [41].
However, many developing countries still have several specific needs that range from the central government to individual communities and from financial assistance to technical matters. Donor agencies should carefully observe the real needs of recipient countries and be aware of the short- and long-term effects of external assistance. Infectious and parasitic diseases are more prevalent among the poor; therefore, unless special attention and assistance are given for the prevention, treatment and care of the poor, health inequities will broaden and expand, even in a country undergoing rapid development.
Japan has upheld the concept of 'human security', which is a human-centred approach that protects and improves the welfare of people most in need and empowers them to cope with adversity [42]. Professionals who work on infectious or parasitic disease control in or for developing countries also need to broaden their views to encompass the more comprehensive and integrated approach that is needed if poverty reduction and sustainable development of communities, nations and the world are to be successfully achieved. True to its long-term commitment, Japan announced a new Health and Development Initiative (HDI) in 2005. This will build on the foundations and successes of the IDI and will commit a further US$5 billion over a five-year period [43]. The HDI will provide substantial resources to help the global community achieve the MDGs and to help many developing countries improve the overall wellbeing of their populations and, thereby, encourage them to work towards attaining a higher quality of life.
A World Wide Web resource for Plasmodium vivax

2007 will see the launch of www.vivaxmalaria.com, a site for the Plasmodium vivax research community.
The website will be divided into five sections.
(i) Disease: data, graphics and links about the history of P. vivax malaria, its incidence, life cycle, morphology of infected cells, parasite strains and therapeutics.
(ii) Genomics: describing the P. vivax genome project and related genomics initiatives, and comparative genomics of Plasmodium species.
(iii) Issues: highlighting outstanding questions about P. vivax biology, pathology and epidemiology.
(iv) Resources: protocols, reagents and resources for P. vivax research.
(v) Meetings: details of previous and upcoming conferences of interest to P. vivax researchers.
A supplementary section will categorize links on the site and other links of interest to the community.
For information and to offer suggestions, please contact the webmaster at: webmaster@vivaxmalaria.com
The influence of marital status at diagnosis on survival of adult patients with mantle cell lymphoma
Purpose Marital status has been reported to influence the survival outcomes of various cancers, but its impact on patients with mantle cell lymphoma (MCL) remains unclear. This study aimed to assess the influence of marital status at diagnosis on overall survival (OS) and cancer-specific survival (CSS) in patients with MCL. Methods The study utilized data from the National Cancer Institute's Surveillance, Epidemiology, and End Results (SEER)-18 databases, including 6437 eligible individuals diagnosed with MCL from 2000 to 2018. A 1:1 propensity score matching (PSM) approach minimized confounding factors. Univariate and multivariate analyses determined hazard ratios (HR). Stratified hazard models were developed for married and unmarried statuses across time intervals. Results Married patients exhibited better 5-year OS and CSS rates compared to unmarried patients (54.2% vs. 39.7%, log-rank p < 0.001; 62.6% vs. 49.3%, log-rank p < 0.001). Multivariate analysis indicated that being unmarried was an independent risk factor for OS (adjusted HR 1.420, 95% CI 1.329–1.517) and CSS (adjusted HR 1.388, 95% CI 1.286–1.498). After PSM, being unmarried remained an independent risk factor for both OS and CSS. Among unmarried patients, widowed individuals exhibited the poorest survival outcomes compared to patients with other marital statuses, with 5-year OS and CSS rates of 28.5% and 41.0%, respectively. Furthermore, in the 10-year OS and CSS hazard models, widowed individuals had a significantly higher risk of mortality, with the probability of overall and cancer-specific mortality increased 1.7-fold and 1.6-fold, respectively. Conclusion Marital status at diagnosis is an independent prognostic factor for MCL patients, with widowed individuals showing worse OS and CSS than those who are married, single, or divorced/separated. Adequate psychological and social support for widowed patients is crucial for improving outcomes in this patient population. Supplementary Information The online version contains supplementary material available at 10.1007/s00432-024-05647-z.
Introduction
Mantle cell lymphoma (MCL) was identified as a specific type of lymphoma in 1992 (Banks et al. 1992). It is a rare subtype of aggressive B cell non-Hodgkin lymphoma (NHL), accounting for 3% to 10% of adult NHL; the incidence is on the rise, with a median age at diagnosis of 68 years and a male-to-female ratio of 2.3-2.5:1 (Abrahamsson et al. 2014). Approximately 1 in 200,000 individuals per year are diagnosed with MCL, and in the United States, the incidence is approximately 4 to 8 cases per million persons per year (Teras et al. 2016). Patients with MCL usually present with enlarged lymph nodes at multiple sites, the majority of patients are diagnosed with advanced disease, and Cyclin D1 expression is characteristic (Jain and Wang 2019). Most patients do not respond well to chemotherapy and have a poor prognosis, with a median survival of 3-4 years (Jain et al. 2022).
Studies have found that social factors such as marital status, race, education, income, and occupation are associated with cancer mortality (Hemminki et al. 2003; Hashibe et al. 2011; Lortet-Tieulent et al. 2020). Many studies have shown that marital status is associated with the prognosis of many cancers, including lung cancer (Wu et al. 2022), gastric cancer (Jin et al. 2016), colorectal cancer (Li et al. 2015), mycosis fungoides (Xing et al. 2021), and Hodgkin's lymphoma (Wang et al. 2017).
Although many studies have confirmed the relationship between marital status and the survival of cancer patients, no study so far has shown the impact of marital status on the survival outcomes of MCL patients. Therefore, this study explored the impact of marital status at diagnosis on the overall survival (OS) and cancer-specific survival (CSS) of MCL patients by analyzing data from the Surveillance, Epidemiology, and End Results (SEER) database.
Data source
The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute is a publicly available and reliable cancer database. We screened data on 13,105 MCL patients from the SEER-18 Registries, November 2020 Submission (2000-2018), using SEER*Stat software (version 8.4.2, released on 8/14/2023).
The diagnosis of all MCL patients was confirmed by the International Classification of Diseases for Oncology, third edition (ICD-O-3) histology code 9673/3. The exclusion criteria were as follows: (1) patients with incomplete Ann Arbor stage or missing/incomplete survival data and follow-up, (2) unknown marital status at diagnosis or unknown race, (3) unknown diagnostic confirmation or not the first primary site. Based on the above exclusion criteria, 6437 eligible patients were enrolled in the study (Fig. 1). This was a retrospective study analyzing data from the SEER public database; therefore, ethical approval was not required.
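Applied programmatically, the histology and exclusion filters amount to a few boolean masks over the exported case listing. A minimal sketch follows; the file name and column names are illustrative stand-ins, not the actual SEER*Stat export schema.

```python
import pandas as pd

raw = pd.read_csv("seer18_mcl_export.csv")  # hypothetical export file

cohort = raw[
    (raw["hist_icdo3"] == "9673/3")                     # MCL histology
    & raw["ann_arbor_stage"].notna()                    # complete staging
    & raw["survival_months"].notna()                    # complete follow-up
    & (raw["marital_status"] != "Unknown")
    & (raw["race"] != "Unknown")
    & (raw["diagnostic_confirmation"] != "Unknown")
    & (raw["sequence_number"] == "First primary")       # first primary site only
]
print(len(cohort))  # 6437 eligible patients in the study cohort
```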
Demographic and clinical variables
Patient demographic variables included sex, age, race, and marital status at diagnosis. Clinical variables included treatment information (chemotherapy and radiotherapy), primary site, Ann Arbor stage, survival time, survival status, and causes of death. Age categories were delineated as <50, 50-59, 60-69, and ≥70, according to the MIPI age classification (Hoster et al. 2008). Race/ethnicity classifications comprised Hispanic, Non-Hispanic White, Non-Hispanic Black, and Other (encompassing American Indian, Alaska Native, Asian, and Pacific Islander categories). MCL staging adhered to the Lugano staging system (Yoo 2022), distinguishing between limited (stages I and II) and advanced (stages III and IV) disease. The year of diagnosis was stratified into three periods: 2000-2006, 2007-2010, and 2011-2015.
Study endpoints
The endpoints of the study included OS and CSS. OS was defined as the time from first diagnosis or treatment to death from any cause or last follow-up. CSS was defined as the time from first diagnosis or treatment to death from MCL-related causes or last follow-up.
Statistical analysis
The chi-square test was used to compare the categorical variables of clinical characteristics in each group. Age and survival time were presented using the median and interquartile range (IQR), while descriptive statistics for other continuous variables were expressed as mean and standard deviation. The Kaplan-Meier method was used to calculate survival rates and construct survival curves, and the log-rank test was used for comparisons between groups. Cox proportional hazards regression models were used for univariate and multivariate analyses, and the hazard ratio (HR) between each variable and mortality was calculated. All confidence intervals (CI) are stated at a 95% confidence level.
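A minimal Python analogue of this survival workflow, using the lifelines package on synthetic data rather than the SEER cohort, might look like:

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Synthetic stand-in for the analytic table (not the study data):
# time in months, death indicator, and a married flag.
rng = np.random.default_rng(0)
n = 500
married = rng.integers(0, 2, n)
time = rng.exponential(scale=np.where(married == 1, 60, 42))
death = (rng.random(n) < 0.8).astype(int)
df = pd.DataFrame({"time": time, "death": death, "married": married})

# Kaplan-Meier curves per marital group; 60 months = 5-year survival
km = KaplanMeierFitter()
for flag, label in [(1, "married"), (0, "unmarried")]:
    grp = df[df["married"] == flag]
    km.fit(grp["time"], grp["death"], label=label)
    print(label, "5-year OS:", float(km.survival_function_at_times(60).iloc[0]))

# Log-rank comparison between groups
m, u = df[df["married"] == 1], df[df["married"] == 0]
print("log-rank p:", logrank_test(m["time"], u["time"],
                                  event_observed_A=m["death"],
                                  event_observed_B=u["death"]).p_value)

# Cox proportional hazards model; HR for each covariate = exp(coef)
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
cph.print_summary()
```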
Propensity score matching (PSM) was employed to minimize potential confounding factors, thereby equalizing differences in clinical characteristics between groups. In this study, a 1:1 nearest-neighbor matching method was applied to marital status, with a caliper value set at 0.01 for matching tolerance. Additionally, we constructed stratified hazard models for unmarried status across different time intervals (1-, 5-, and 10-year), calculating the HR to delineate the associations between various unmarried statuses and the probability of mortality.
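The matching step can be sketched as follows: fit a logistic model for the propensity score, then pair each married case with its nearest unmarried neighbor within the 0.01 caliper. This simplified version matches with replacement, whereas a strict 1:1 scheme without replacement, as described above, removes each matched control from the pool.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_1to1(df, treat_col, covariates, caliper=0.01):
    """Greedy 1:1 nearest-neighbor matching on the propensity score.

    Simplification: controls may be reused (matching with replacement);
    the caliper is applied to the raw propensity-score distance.
    """
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df[treat_col])
    df = df.assign(ps=model.predict_proba(df[covariates])[:, 1])
    treated = df[df[treat_col] == 1]
    control = df[df[treat_col] == 0]

    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    dist, idx = nn.kneighbors(treated[["ps"]])
    keep = dist.ravel() <= caliper  # enforce the caliper
    return pd.concat([treated[keep], control.iloc[idx.ravel()[keep]]])

# Demo with invented covariates (not the study's variable set)
rng = np.random.default_rng(1)
demo = pd.DataFrame({
    "married": rng.integers(0, 2, 1000),
    "age": rng.normal(68, 10, 1000),
    "stage": rng.integers(1, 5, 1000),
})
matched = match_1to1(demo, "married", ["age", "stage"])
print(len(matched))
```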
All data were analyzed using IBM SPSS statistical software (version 26.0) and R (version 4.2.1; http://www.r-project.org/). A two-sided p-value < 0.05 was considered statistically significant.
Clinical characteristics of the patients
From 2000 to 2018, 6437 eligible MCL patients were analyzed using the SEER-18 database. Among them, 4327 (67.2%) were identified as married, while 2110 (32.8%) were categorized as unmarried. The median age of the entire cohort was 68 years (IQR 59-76 years). Notably, the unmarried group exhibited a median age of 70 years (IQR 59-79 years), whereas the married group had a median age of 67 years (IQR 59-75 years). Approximately 45.2% of the 6437 patients were aged ≥70 years, with a higher proportion in the unmarried group (50.3%). The median survival time for the entire cohort was 47 months (IQR 18-85 months), with the married group having a median survival time of 52 months (IQR 22-91 months) and the unmarried group 38 months (IQR 13-70 months). Statistically significant differences were observed in sex (p < 0.001), age (p < 0.001), race/ethnicity (p < 0.001), stage (p = 0.007), sequence number (p < 0.001), chemotherapy (p < 0.001), and radiation (p = 0.036), while year of diagnosis and primary site did not show significant differences.
To address potential biases, a 1:1 PSM was conducted, resulting in a cohort of 3948 MCL patients, evenly split between married and unmarried groups. Except for the year of diagnosis, all other variables, including sex, age, race/ethnicity, stage, primary site, sequence number, chemotherapy, and radiation, showed no significant differences (all p > 0.05), demonstrating good balance. The baseline clinical characteristics of MCL patients with different marital statuses are summarized in Table 1.
Additionally, by analyzing the 5-year OS rates within different subgroups based on marital status, we found that, except for individuals of other race/ethnicity (p = 0.222), there were significant differences in the 5-year OS rates among the remaining subgroups (p < 0.05). In particular, the 5-year OS rate was lowest in elderly patients aged ≥ 70 years (married 38% vs. unmarried 25%), while the highest 5-year OS rate was observed in patients aged < 50 years (married 81% vs. unmarried 67%) (Fig. 3). A similar trend was observed for the 5-year CSS rates in the different subgroups (Supplementary Fig. S1).
Subgroup analysis of the impact of different marital status on survival after propensity score matching.
We conducted subgroup analyses to further elucidate the impact of marital status on survival outcomes across diverse subgroups. The results revealed that being married positively influenced survival in all subgroups. Although the differences did not reach statistical significance for the 2005-2009 diagnosis years (p = 0.056) or for patients of other race/ethnicity (p = 0.241), the survival benefit persisted in these cases, as shown in Fig. 5.
1-, 5-, and 10-year hazard models
An extended analysis of the unmarried subgroup revealed an interesting phenomenon. Widowed patients showed inferior survival outcomes at 1-, 5-, and 10-year intervals compared to patients with other marital statuses. Particularly noteworthy, in the 10-year OS and CSS hazard models for widowed individuals, the risk of mortality was significantly higher: the risks of overall and cancer-specific mortality were increased 1.7-fold and 1.6-fold, respectively (Table 4).
Discussion
MCL represents an incurable and heterogeneous form of lymphoma, exhibiting a 5-year survival rate of 52.5% (Kamel Mohamed et al. 2020). Clinical factors related to the prognosis of MCL include age, sex, stage, performance status, lactate dehydrogenase (LDH), white blood cell count, and Ki-67 index (Wu et al. 2020; Jain et al. 2022). However, existing studies have not explored the relationship between marital status and survival outcomes in MCL.
In our study, marital status significantly impacted OS and CSS. Specifically, widowed patients had lower 5-year OS and CSS rates than patients with other marital statuses, whereas married patients demonstrated superior OS and CSS rates. Consequently, marital status was identified as an independent risk factor for survival outcomes in MCL patients.
Numerous studies have confirmed the impact of marital status on cancer survival (Li et al. 2015; Jin et al. 2016; Wang et al. 2017; Xing et al. 2021; Wu et al. 2022). Aizer et al. found that widowed patients faced a greater risk of presenting with metastatic cancer, receiving inadequate treatment, and dying of their cancer when compared to married patients (Aizer et al. 2013). This study is the first to analyze the impact of marital status on OS and CSS in patients with MCL based on the SEER database, which has important implications for clinicians seeking to more comprehensively assess the prognosis of patients with MCL.
The impact of marital status on the survival of cancer patients can be explained from the perspective of social psychology. Cancer patients experience more serious psychological distress than other patients. Married patients show less depression, anxiety, and distress after a cancer diagnosis, as their spouses help combat negative emotions and they receive strong social support from friends and family (Goldzweig et al. 2010; Kaiser et al. 2010). Patients who are married also have a lower risk of developing major depression (Weissman et al. 1996). Goodwin et al. concluded that breast cancer patients diagnosed with depression who received nondefinitive treatment had greater risk and worse survival than those who received definitive treatment (Goodwin et al. 2004). There is a strong association between psychological distress and poor adherence to treatment, and patients experiencing depression were found to be three times more likely to fail to comply with medication recommendations compared to those without depression (DiMatteo et al. 2000). McCowan et al. found that breast cancer patients with high adherence to tamoxifen treatment had a lower recurrence rate of 8.95% and a lower mortality rate of 8.65% (McCowan et al. 2013).
An abnormal cortisol circadian rhythm may portend early death in cancer patients, while suppressed natural killer cell numbers and function might signify rapid disease progression (Sephton et al. 2000, 2013). Studies have shown that better-quality social support is associated with healthier neuroendocrine function, which has significant implications for cancer prognosis (Turner-Cobb et al. 2000). Additionally, it should not be ignored that married people have a lower risk of alcohol abuse and smoking than those with other marital statuses (Leonard and Rothbard 1999; Lindström 2010), which could be advantageous for the well-being of cancer patients.

As a population-based retrospective study, this work inevitably has some limitations. First, the SEER database lacks detailed information related to the treatment of MCL patients, such as the chemotherapy regimen, the application of targeted drugs, and the evaluation of efficacy. Second, important features related to MCL prognosis, such as ECOG score, LDH, Ki-67 index, and white blood cell count, were lacking. Finally, we hypothesized that psychosocial and treatment-adherence factors were responsible for the poor survival of widowed patients, but the SEER database lacks records of psychological tests, mental status, and treatment-adherence assessments of MCL patients. Additionally, some confounding variables that affect the outcome of patients with MCL, such as smoking and alcohol abuse, were not available in the SEER database. This may introduce some bias into the analysis, and further research is necessary for verification.
Conclusion
As the first study to analyze the relationship between marital status at diagnosis and survival in patients with MCL, this study demonstrates that marital status at diagnosis is an independent prognostic factor for patients with MCL, with widowed patients showing worse OS and CSS than those who are married, single, or divorced/separated. It is important to note that adequate psychological and social support for widowed patients may help improve their outcomes.
Ethics approval This was a retrospective study analyzing data from the SEER public database; therefore, ethical approval was not required.
Fig. 1 Flow diagram of data process for mantle cell lymphoma patients
Fig. 2 Kaplan-Meier curves present the overall survival and cancer-specific survival of patients with mantle cell lymphoma stratified by marital status
Fig. 3 (caption not recovered)
Fig. 4 Kaplan-Meier curves present the overall survival and cancer-specific survival of patients with mantle cell lymphoma stratified by marital status after propensity score matching
Fig. 5 The forest plot presents a subgroup analysis of the impact of marital statuses on overall survival after propensity score matching. HR hazard ratio; CI confidence interval
Table 1 Baseline clinical characteristics of patients with mantle cell lymphoma before and after PSM
Table 2 Univariate and multivariate analyses of overall survival and cancer-specific survival in patients with mantle cell lymphoma
Table 3 Univariate and multivariate analyses of overall survival and cancer-specific survival in patients with mantle cell lymphoma after propensity score matching
Table 4 1-, 5-, and 10-year hazard models of overall survival and cancer-specific survival based on different marital statuses in patients with mantle cell lymphoma. HR hazard ratio; CI confidence interval; OS overall survival; CSS cancer-specific survival
A pharmacokinetic model including arrival time for two inputs and compensating for varying applied flip-angle in dynamic gadoxetic acid-enhanced MR imaging
Purpose Pharmacokinetic models facilitate assessment of properties of the micro-vascularization based on DCE-MRI data. However, accurate pharmacokinetic modeling in the liver is challenging since it has two vascular inputs and is subject to large deformation and displacement due to respiration. Methods We propose an improved pharmacokinetic model for the liver that (1) analytically models the arrival-time of the contrast agent for both inputs separately; (2) implicitly compensates for signal fluctuations that can be modeled by a varying applied flip-angle, e.g. due to B1-inhomogeneity. Orton's AIF model is used to analytically represent the vascular input functions. The inputs are independently embedded into the Sourbron model. B1-inhomogeneity-driven variations of flip-angles are accounted for as a function of the voxel's displacement with respect to a pre-contrast image. Results The new model was shown to yield a lower root mean square error (RMSE) after fitting the model to all but a minority of voxels compared to Sourbron's approach. Furthermore, it outperformed this existing model in the majority of voxels according to three model-selection criteria. Conclusion Our work primarily targeted improving pharmacokinetic modeling for DCE-MRI of the liver. However, other types of pharmacokinetic models may also benefit from our approaches, since the techniques are generally applicable.
Introduction
Dynamic Contrast-Enhanced MRI (DCE-MRI) is a technique that can be applied to assess properties of the micro-vascularization in organs such as the liver, breast, and kidney [1][2]. Pharmacokinetic (PK) modeling in the liver is more challenging than in the rest of the body since the liver has two vascular inputs: the hepatic artery and the portal vein. Furthermore, contrary to standard Gd-based contrast media, the hepatobiliary contrast agent gadoxetate disodium (Primovist™, Bayer Pharmaceuticals) is also taken up by the hepatocytes. As such, an additional compartment should be taken into account in a pharmacokinetic model. Finally, the uptake rate of the hepatocytes is low, and for this reason DCE-MRI may take up to 20 minutes or more [1]. During image acquisition the liver can undergo large deformations and displacements, which may significantly influence the signal intensity (e.g. due to B1-inhomogeneity). These issues make accurate pharmacokinetic modeling in the liver far from trivial.
Related work
Quantitative analysis of liver function with MRI using Gd-EOB-DTPA in rabbits was first proposed by Ryeom et al. [3] in 2004. Using a deconvolution technique, the estimated hepatic extraction fraction (HEF) showed correlation with liver function measured through the plasma retention rate after indocyanine green injection. Subsequently, Nilsson et al. [4] applied the same liver model to humans with a more efficient deconvolution technique called truncated singular value decomposition (TSVD). However, this deconvolution approach regarded the hepatic artery as the sole input and ignored the portal vein. A dual-input one-compartmental model was already proposed in 2002, but this model focused on extracellular contrast agents such as Gd-DTPA (Magnevist, Bayer Schering Pharma, Berlin, Germany) [5]. By adding an intracellular compartment, Sourbron et al. [2] created a dual-input, two-compartmental model that accounted for Gd-EOB-DTPA metabolization by the hepatic cells in 2012. A limitation of Sourbron's model is that it ignores the extraction rate of the hepatocytes, i.e. the efflux to the bile canaliculi. To solve this, Ulloa et al. [6] and Forsgren et al. [7] modeled the transport of the contrast agent from the hepatocytes to the bile via nonlinear Michaelis-Menten kinetics in rats and humans, respectively. Georgiou et al. [8] simplified the efflux transport by a linear approximation. Recently, Ning et al. [9] correlated pharmacokinetic parameters estimated from different models with a blood chemistry test. It was found that the relative liver uptake rate estimated from the model without bile efflux transport significantly correlated with direct bilirubin (r = -0.52, p = 0.015), prealbumin (r = 0.58, p = 0.015) and prothrombin time (r = -0.51, p = 0.026), whereas only insignificant correlations were found using the model with efflux transport. Accordingly, our work regards Sourbron's model [2] as the starting point, i.e. opting for a model without bile efflux transport.
The Arterial Input Function (AIF) represents the time-dependent arterial contrast agent concentration that is typically used in pharmacokinetic modeling of dynamic imaging data. Population-averaged parametrized models (e.g. those of Orton and Parker) have been used as such. The AIF model described by Orton et al. [10] parametrizes the AIF as a sum of two functions, one describing the first passage of the bolus peak, and the other representing the wash-out of CA in the tail of the AIF. Alternatively, Parker's model [11] can describe the second pass (recirculation) of the contrast agent. However, the latter feature may not always be visible in the MRI data, e.g. due to low temporal resolution (see e.g. [12]).
In Sourbron's approach, the delay of the arterial input is empirically determined by the best model fit over a discrete set of values. This may limit the accuracy of the PKM parameter estimation and could restrict its applicability. Furthermore, the method does not take the effects of liver motion on the signal intensity into account. Such motion not only causes misalignment, which should be compensated for using image registration, but may also induce other signal fluctuations, due to motion-induced, time-varying B1-inhomogeneity caused by the movement of the bowel in the field of view [13].
Previously, several papers investigated the influence of B1-inhomogeneity on pharmacokinetic modeling. For example, Park et al. [14] and Sengupta et al. [15] conducted a simulation and an experimental study, respectively, showing that a small degree of B1-inhomogeneity can cause a significant error in the estimated PKM parameters. Gach et al. [16] corrected the B1-inhomogeneity by performing a 3D GRE sequence with various flip-angles (2-30°) in phantoms to obtain standards for normalizing the 3D GRE DCE-MR images. Alternatively, Van Schie et al. [17] combined variable flip angle (VFA) and Look-Locker (LL) sequences to obtain a B1-inhomogeneity map for DCE imaging. Such a B1-map may also be obtained by means of the DREAM sequence [18]. Essentially, all these methods attempt to correct the B1-inhomogeneity based on auxiliary sequences. However, this not only makes the imaging even more time-consuming, it also conventionally yields static B1-maps, whereas fluctuations due to motion remain hard to account for.
Objective
In this paper we aim to improve pharmacokinetic modeling of liver DCE-MRI data. Therefore, two novelties are introduced in the PK modeling. First, the arterial input function proposed by Orton is integrated into Sourbron's PK model. This enables the arrival times of contrast from the portal vein and the hepatic artery to be separately included in the model and estimated simultaneously with the PK model parameters. Secondly, the deformation and displacement of the liver is estimated and used to correct for changes in signal intensity such as those caused by B1-inhomogeneities. The effectiveness of the new model is assessed by several experiments.
Data acquisition
Patients diagnosed with one or more liver lesions and who were scheduled for 99mTc-mebrofenin HBS as part of the preoperative workup were included in this prospective pilot study. Patients with general contraindications for MRI, chronic renal insufficiency, known or family history of congenital prolonged QT-syndrome, current use of cardiac repolarization time prolonging drugs (such as class 3 anti-arrhythmic drugs), history of arrhythmia after the use of cardiac repolarization time prolonging drugs, or a history of allergic reaction to gadolinium-containing compounds were excluded from participation. Eleven subjects were included in this research project; their characteristics are given in Table 1.
Table 1 Subjects' characteristics (n = 11)
The study was approved by the ethical review board of the Amsterdam University Medical Centers and registered under ID NL45755.018.13. Written informed consent was obtained from all individual participants included in the study.
In addition, dual refocusing echo acquisition mode (DREAM) images [18] were acquired to quantify the extent of the B1-inhomogeneity before the DCE sequence was acquired. The acquisition parameter settings were: matrix size = 64×64×30, voxel size = 8.28×8.28×8.80 mm³, nominal STEAM flip-angle α = 60°, nominal imaging flip-angle β = 10°, TE_STE = 1.06 ms, TE_FID = 2.30 ms, TR = 3.84 ms. Essentially, the DREAM sequence produces a map in which the value of every voxel represents the ratio between the real flip-angle and the programmed flip-angle. We will refer to it as the 'zeta' map.
Image registration and liver segmentation
Image registration is required to achieve spatial correspondence between voxels of the DCE-MRI data prior to PK modeling. In this work, each 4D DCE-MR dynamic is registered to the last dynamic volume. In order to do so, we apply the Modality Independent Neighborhood Descriptor (MIND) method [19], which is a state-of-the-art technique for multi-modal image registration. Essentially, it relies on a patch-based descriptor of the structure in a local neighborhood:

$$\mathrm{MIND}(I, x, r) = \frac{1}{n} \exp\!\left(-\frac{D_p(I, x, x + r)}{V(I, x)}\right),$$

in which I is an image, r an offset in a neighborhood P of size R×R×R around position x, and n a normalization constant; D_p is the distance between two image patches, measured by the sum of squared differences (SSD):

$$D_p(I, x_1, x_2) = \sum_{p \in P} \left(I(x_1 + p) - I(x_2 + p)\right)^2,$$

where V(I, x) is the mean of the patch distances in a small neighborhood N:

$$V(I, x) = \frac{1}{\mathrm{num}(N)} \sum_{n \in N} D_p(I, x, x + n).$$

The MIND registration can be described as minimizing, over the deformation field u = (u, v, w), the descriptor dissimilarity

$$\sum_{x}\sum_{r \in P} \left|\mathrm{MIND}(I, x + u(x), r) - \mathrm{MIND}(J, x, r)\right|$$

plus a regularization term weighted by a coefficient α. Thus, the MIND registration method produces a 3D voxelwise, regularized deformation field. In this paper we follow the default setup from [19]: R = 3, N = N₆, i.e. a six-connected neighborhood, patch size D = 3, and the regularization coefficient α = 0.1.
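As a concrete illustration, the following hedged Python sketch computes a MIND-style descriptor with the six-connected neighborhood; the box-filter patch distance, wrap-around shifts, and per-voxel max normalization (standing in for the constant n) are simplifying implementation choices of ours, not details from [19].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mind_descriptor(img, radius=1):
    """MIND-style descriptor on a 3D volume with the N_6 neighborhood.
    The box filter gives the *mean* patch distance, which rescales D_p
    and V(I, x) jointly, so their ratio is unchanged."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    size = 2 * radius + 1
    dp = []
    for r in offsets:
        shifted = np.roll(img, shift=r, axis=(0, 1, 2))  # wrap-around at edges
        dp.append(uniform_filter((img - shifted) ** 2, size=size))
    dp = np.stack(dp)                          # D_p per offset: (6, X, Y, Z)
    variance = dp.mean(axis=0) + 1e-6          # V(I, x): mean over N_6
    mind = np.exp(-dp / variance)
    return mind / mind.max(axis=0, keepdims=True)  # per-voxel normalization

descriptor = mind_descriptor(np.random.default_rng(0).normal(size=(32, 32, 16)))
```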
Furthermore, we segment the liver, defining our region of interest. As we apply a liver-specific contrast agent, the surrounding organs show less signal enhancement than the liver. Maximal image contrast is achieved by subtracting the first dynamic of the series from the last, after registration. Subsequently, the liver is segmented from the resulting "contrast" volume by means of a level set approach, which takes boundary as well as region information into consideration [20]. More implementation details can be found in [21]. We refrained from performing the segmentation on an anatomical scan, which has higher resolution but inferior contrast compared to the DCE-MRI difference image.
The obtained mask coarsely segments the liver across the registered DCE series. Simultaneously, inverting the registration transformations and applying them to the liver mask yields liver segmentations in each original dynamic; the transformations were performed as shown in Fig 1. Finally, we subtract from each deformation field the deformation field resulting from the registration of the first image to the last one. We do this merely for practical reasons, so that all deformation fields are relative to the first image in the series.
The liver's mean relative displacement in a dynamic volume with respect to the first image is estimated as the distance between the liver mask's center of mass and the deformed liver mask's center of mass (in 3D). In the section Varying effective flip-angle compensation we will show how the liver displacements can be used to compensate for these intensity offsets.
In the following, the x-axis of the data corresponds to the anterior-posterior direction, the y-axis to the left-right direction, and the z-axis to the superior-inferior direction.
Input function models
An arterial input function (AIF) represents the time-dependent arterial contrast agent (CA) concentration that is used in PK modeling of dynamic imaging data. The AIF is often computed directly from the signal measured in an artery close to the tissue of interest. The liver, however, has two inputs: the hepatic artery's AIF and the portal vein's venous input function (VIF).
We assume that the profile of both input functions follows a slightly modified input function model described by Orton et al. [10]. This model parametrizes an input function as a sum of two functions, one describing the first passage of the bolus peak, and the other describing the wash-out of CA in the tail of the input function [22].
The bolus peak C_B(t) is described by:

$$C_B(t) = a_B\,\mu_B^2\, t\, e^{-\mu_B t}\, u(t),$$

with u(t) the unit step function. This function has been modified slightly with respect to the one described by Orton et al., such that the area under the curve of C_B(t) is given by the parameter a_B, while μ_B only affects the decay rate. The tail of the AIF and VIF is expressed as a convolution between the bolus peak and a body transfer function G(t), which is modeled as

$$G(t) = a_G\, e^{-\mu_G t},$$

in which a_G determines the starting level of this decay function and μ_G governs the decay rate, which may reflect kidney functioning [10]. Thus, the complete input function is given by:

$$C(t) = C_B(t) + (C_B \otimes G)(t),$$

which can be used to represent either the AIF or the VIF. The liver's AIF and VIF were estimated by semi-automatically segmenting a homogeneous region in the aorta and the portal vein, respectively [21]. The aorta and portal vein were segmented in much the same way as the liver. Specifically, they were segmented from volumes obtained by subtracting the first volume from the one in which maximal signal was measured in the aorta and portal vein, respectively. This measurement was made in small, manually indicated regions of interest in the aorta and portal vein. Subsequently, a level set segmentation algorithm [20] was applied to segment these structures. Finally, the resulting segmentations were eroded by a 3×3 kernel to remove partial volume voxels.
Subsequently, the top three of the most enhancing time intensity curves of the voxels in both regions were separately averaged and converted into time concentration curves (TCC) assuming a nonlinear relationship between signal intensity and concentration of contrast agent [23]. Finally, the input function parameters were estimated by fitting Orton's model to these data. These fits yield different parameters for AIF and VIF.
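To make the fitting step concrete, below is a minimal Python sketch of fitting this parameterized input function to a time-concentration curve with non-linear least squares; the discrete convolution, the synthetic data, and the starting values are illustrative assumptions, not the authors' implementation (which used MATLAB's lsqcurvefit).

```python
import numpy as np
from scipy.optimize import curve_fit

def orton_input(t, aB, muB, aG, muG, t0):
    """Sketch of the modified Orton input function: bolus peak plus its
    convolution with the body transfer function G(t) = aG*exp(-muG*t).
    Assumes a uniformly sampled time axis t (seconds)."""
    tt = np.clip(t - t0, 0.0, None)              # arrival-time shift, u(t) step
    cb = aB * muB**2 * tt * np.exp(-muB * tt)    # bolus peak, AUC = aB
    g = aG * np.exp(-muG * tt)                   # body transfer function
    dt = t[1] - t[0]
    tail = np.convolve(cb, g)[: len(t)] * dt     # discrete C_B convolved with G
    return cb + tail

t = np.arange(0, 300, 2.2)                       # 2.2 s temporal resolution
# tcc: measured time-concentration curve from the aorta or portal vein ROI;
# here a synthetic stand-in so the sketch runs end to end
tcc = orton_input(t, 1.0, 0.3, 0.05, 0.01, 20.0)
popt, _ = curve_fit(orton_input, t, tcc, p0=[0.5, 0.2, 0.1, 0.02, 10.0],
                    maxfev=10000)
```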
An advantage of our approach is that noise on the input function is suppressed, because a smooth, parameterized representation is fit to the data. However, not all features contained in the original data may be represented, especially a second pass of the bolus peak, which is not contained in Orton's model. We considered this limitation acceptable, as we could not visually identify a second peak corresponding to a second bolus pass in the hepatic artery, let alone in the portal vein, for any data set. Furthermore, the parameterized input functions can be analytically integrated in our PK model (see below). As such, it allows for a continuous estimate of the time delay with which the AIF and VIF arrive in a voxel under investigation.
Sourbron's model
Sourbron et al. [2] developed a dual-inlet, two-compartment uptake model that was especially designed for the intracellular hepatobiliary contrast agent Primovist. The diagram in Fig 3 illustrates the model. The arterial input function C A and venous input function C V are the dual inlets representing the contrast agent concentration in the blood plasma supplied to the liver by the hepatic artery and the portal vein, respectively. T A and T V represent time delays and F A and F V are constants representing the volume transfer rates from the plasma compartments into the extravascular, extracellular space. Furthermore, the gray rectangle denotes liver tissue, the left circle represents the extravascular extracellular compartment and the right circle stands for the extravascular intracellular compartment, i.e. corresponding to the hepatocytes. As such, V E is the extravascular extracellular volume and K I represents the uptake rate of the hepatocytes represented by a volume V I .
The analytical solution of Sourbron's model yields the total contrast agent concentration C_T in a voxel as a function of the delayed, flow-weighted input functions and the model parameters; a derivation of this expression can be found in S1 Appendix.
The combined Orton-Sourbron (COS) model
Since the vascular input functions are the front-ends of Sourbron's liver model, a comprehensive model can be derived by inserting Eq (7) into Eq (8). This yields the contrast agent concentration C_{T,I} in a voxel due to either the AIF or the VIF (i.e. I ∈ {A, V}), in which Orton's model parameters (μ_B, μ_G) are particular to either the AIF or the VIF; T_I refers to the time delay associated with the particular input function. A derivation of this expression can be found in S2 Appendix.
The final model is expressed as the sum of the contributions from the AIF and VIF:

$$C_T(t) = C_{T,A}(t) + C_{T,V}(t),$$

in which C_T, as before, models the total contrast agent concentration in a voxel.
Practically, we set the time delay of the portal vein (T V ) to zero (as in [2]) since it is smaller than the temporal resolution of our data (2.2 s). We do estimate the time delay of the arterial input function (T A ), which is larger as it is measured in the aorta, i.e. further away from the liver.
Varying effective flip-angle compensation

Fig 2 shows the distribution of TICs for a particular patient. Several abrupt drops in signal intensity may be observed that appear correlated with the liver's displacement.
We hypothesize that this signal variation can be modeled as a deviation in the locally applied flip-angle. In general, the signal intensity in a voxel emanating from a gradient echo sequence, neglecting T2* decay and assuming the spins are in the steady state, is given by:

$$S(\alpha, T_1) = N(H)\,\sin\alpha\,\frac{1 - e^{-TR/T_1}}{1 - \cos\alpha\, e^{-TR/T_1}},$$

where N(H) is the local proton density multiplied by an arbitrary factor (the scaling factor used by the scanner), T_1 the spin-lattice relaxation time, α the flip-angle, and TR the repetition time. Furthermore, the Relative Signal Intensity (RSI) in a voxel while the contrast agent is flowing in can be expressed as:

$$\mathrm{RSI}(\alpha, T_1) = \frac{S(\alpha, T_1)}{S(\alpha_0, T_{10})},$$

in which α_0 is the presumed flip-angle in the voxel prior to contrast administration (we assume 15°, i.e. the flip-angle as per scan protocol), T_10 the spin-lattice relaxation time before contrast arrives, T_1 the actual spin-lattice relaxation time, and α the actually perceived flip-angle during the dynamic scan, modeling the effect of a deviating flip-angle. The contrast agent concentration C_T can be expressed as a function of α, T_1 and the RSI (see S3 Appendix) through the relaxivity relation

$$C_T = \frac{1}{R}\left(\frac{1}{T_1} - \frac{1}{T_{10}}\right),$$

with T_1 recovered by inverting the signal equation for the measured RSI, and R the relaxivity of the applied contrast agent (for Gd-EOB-DTPA at 3 T, R = 7 s⁻¹ mM⁻¹ [24]). Consequently, the error in the calculated contrast agent concentration due to a deviating flip-angle (e.g. caused by B1-inhomogeneity) is:

$$\Delta C_T(\alpha, T_1) = C_T(\alpha, T_1) - C_T(\alpha_0, T_1).$$

The intrinsic T_1 value of the liver prior to contrast injection is around 800 ms [25], while we estimate that the effective T_1 can be as small as 300 ms after contrast injection. Fig 4(A) shows ΔC_T for this range of T_1 values as well as for flip-angle deviations varying from −3° to +3°. Essentially, the graph demonstrates that the error in C_T is non-linearly dependent on T_1 for any given deviation in flip-angle. However, normalizing through division by RSI(α, T_1) yields profiles that are independent of T_1 for every flip-angle deviation, see Fig 4(B). Furthermore, the distance between the profiles reflects that there is an approximately linear relation between ΔC_T and the applied flip-angle.
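This flip-angle sensitivity can be reproduced numerically; the following hedged Python sketch inverts the steady-state signal model on a T1 grid and evaluates the concentration error when the assumed flip-angle is off by one degree (the TR value and grid bounds are placeholders, not the study's DCE settings).

```python
import numpy as np

def spgr_signal(alpha_deg, T1, TR=3.84e-3):
    """Steady-state spoiled gradient-echo signal, up to the scaling N(H)."""
    a = np.deg2rad(alpha_deg)
    E1 = np.exp(-TR / T1)
    return np.sin(a) * (1.0 - E1) / (1.0 - np.cos(a) * E1)

def t1_from_rsi(rsi, alpha_deg, T10, TR=3.84e-3):
    """Brute-force inversion of RSI(alpha, T1) = S(alpha, T1)/S(alpha, T10)."""
    grid = np.linspace(0.05, 3.0, 5000)
    rsi_grid = spgr_signal(alpha_deg, grid, TR) / spgr_signal(alpha_deg, T10, TR)
    return grid[np.argmin(np.abs(rsi_grid - rsi))]

R, T10, T1_true = 7.0, 0.8, 0.4          # relaxivity (s^-1 mM^-1), T1 values (s)
true_alpha, assumed_alpha = 16.0, 15.0   # +1 degree B1-driven deviation
rsi = spgr_signal(true_alpha, T1_true) / spgr_signal(true_alpha, T10)
T1_est = t1_from_rsi(rsi, assumed_alpha, T10)
dC = (1/T1_est - 1/T10)/R - (1/T1_true - 1/T10)/R
print(f"concentration error Delta C_T = {dC:.4f} mM")
```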
We model the contrast agent concentration in a voxel as:

$$C'_T(t) = C_T(t) + \begin{bmatrix} \alpha\,\mathrm{RSI}(t) & \beta\,\mathrm{RSI}(t) & \gamma\,\mathrm{RSI}(t) \end{bmatrix} \begin{bmatrix} \Delta u(t) \\ \Delta v(t) \\ \Delta w(t) \end{bmatrix},$$
in which C'_T(t) is the measured, uncorrected contrast agent concentration in a voxel; C_T(t) is the combined Orton-Sourbron (COS) model, see Eq (10); α, β and γ are proportionality constants that need to be estimated; and RSI(t) is the relative signal intensity with respect to the pre-contrast stage, i.e. S(α, T1-post)/S(α, T1-pre). [Δu(t) Δv(t) Δw(t)] is the estimated displacement of the considered voxel in the dynamic at time t, relative to the last dynamic. This estimated displacement is taken from the deformation field emanating from the registration of the dynamic at time t to the last dynamic. As such, a linear relation was fitted between the displacement of the liver and the modeled deviation in contrast agent concentration, as a first-order approximation. Thus, by fitting Eq (15) to the concentration curves we have not only parameterized the arrival time in Sourbron's model (through the COS approach), but also included an implicit varying flip-angle correction (FLAC). Henceforth, we will refer to this as our COS-FLAC approach.
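A minimal sketch of this per-voxel fit is given below; cos_model is a toy stand-in for the COS prediction of Eq (10) so the example runs, and all names, synthetic data, and starting values are illustrative rather than the authors' code.

```python
import numpy as np
from scipy.optimize import least_squares

def cos_model(t, pk):
    """Toy stand-in for the COS prediction of Eq (10), so the sketch runs."""
    amp, mu, t0 = pk
    tt = np.clip(t - t0, 0.0, None)
    return amp * tt * np.exp(-mu * tt)

def residual(theta, t, c_meas, rsi, du, dv, dw):
    *pk, a, b, g = theta                     # PK parameters + FLAC constants
    return cos_model(t, pk) + rsi * (a*du + b*dv + g*dw) - c_meas

rng = np.random.default_rng(1)
t = np.linspace(0, 300, 140)
du, dv, dw = rng.normal(0, 1, (3, t.size))   # voxel displacement per dynamic
rsi = np.ones_like(t)                        # placeholder RSI(t)
c_meas = cos_model(t, [0.02, 0.05, 20.0]) + rng.normal(0, 1e-3, t.size)
fit = least_squares(residual, x0=[0.01, 0.1, 10.0, 0.0, 0.0, 0.0],
                    args=(t, c_meas, rsi, du, dv, dw))
print(fit.x)
```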
Experimental setup
Assessment of registration performance. The correctness of each registration was first visually checked. Furthermore, synthetic MR images were generated by artificially deforming the last image of the DCE series, i.e. the fixed image of our registration procedure, and then registering the deformed images back to the originals. The artificial deformations were generated by randomly selecting 10 estimated deformation fields from the DCE series. As such, the ground truth (the originals) is known, enabling calculation of the mean target registration error (mtre) for each point in the liver. We did so since it appeared infeasible to reliably identify landmarks in these data, due to the low resolution of the data and the absence of highly characteristic points around the liver.
Comparison between Sourbron's model and the COS model. We first ran a numerical experiment to compare the accuracy and time efficiency of Sourbron's original approach and the proposed COS technique. Essentially, synthetic data was generated in two steps: a parameter estimation step and a data generation step.
In the parameter estimation step the input function parameters (of AIF and VIF) were first obtained by fitting Orton's model in a small region of interest in the aorta respectively the portal vein in each patient. Subsequently, the PKM parameters were estimated for both the reference approach (Sourbron's) and the proposed method in each liver voxel. Then, the PKM parameters of the two methods were averaged (to be unbiased) and this average was taken as the ground truth.
As such, known input function parameters were obtained from each patient as well as known PKM parameters from each liver voxel.
Subsequently, in the data generation step synthetic data was generated by (1) creating ground truth input functions from the estimated Orton's model parameters (of AIF and VIF) and adding noise; (2) generating tissue TCC's from the ground truth PKM parameters and adding noise. The standard deviation of the added noise on the input functions equaled the root mean square error (RMSE) of Orton's model fit; it was set to the average RMSE of the reference and proposed model fits for the TCC's.
Thus, a wide variety of artificial, noisy time intensity curves could be generated (for each liver voxel one such curve). Please note that the synthetic data was generated by averaging the PKM parameters of reference and proposed method exactly to avoid a bias to either approach.
Finally, we fitted both PK models to the noisy synthetic data and compared the estimated PK model parameters with the ground truth. The nonlinear least-squares fitting routine lsqcurvefit in MATLAB (version R2015b; Mathworks, Natick, USA) was used to perform the model fits; 19 cores were adopted for parallel computing on a HPC equipped with two Intel(R) Xeon (R) CPU E5-2698 v4 clocked at 2.20GHz and 256GB RAM memory.
Relation between displacement and programmed flip-angle deviations. Eq (15) assumes that the deviation from the true contrast agent concentration is linearly related to the displacement of a liver voxel. Furthermore, this deviation (ΔC_T) was modeled to be linearly related to the deviation from the programmed flip-angle (Fig 4).
To assess the validity of this, the zeta-map from the DREAM sequence, representing the deviation from the programmed flip-angle, was geometrically aligned to the first dynamic. Observe that the displacement of a liver voxel in any DCE image is given by the registration transformation relative to the first dynamic. Subsequently, the difference in zeta value over the displacement vector (Δzeta) was correlated with the displacement across all dynamics. The strength of the correlation was assessed by the Spearman correlation coefficient, and the significance of the correlation was determined.
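A per-voxel version of this correlation test can be sketched as follows; the arrays are synthetic placeholders standing in for the measured displacement vectors and Δzeta values.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
disp = rng.normal(0, 2, (60, 3))      # placeholder displacements (mm) per dynamic
dzeta = 0.05 * np.linalg.norm(disp, axis=1) + rng.normal(0, 0.02, 60)

# correlate displacement magnitude with the change in flip-angle ratio
rho, p = spearmanr(np.linalg.norm(disp, axis=1), dzeta)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```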
The COS-FLAC model with and without RSI weighting. Models of increasing complexity, from the COS model up to the COS-FLAC model with RSI weighting, were fit to the data of the 11 subjects described in Section Varying effective flip-angle compensation. The root mean square error (RMSE) of the residual that remains after fitting the COS and COS-FLAC models to the signal was determined in order to quantitatively assess the performance. However, increasing the degrees of freedom by adding parameters to a model generally leads to a smaller RMSE of the fit residual. To evaluate whether the added parameters truly contributed to a better fit, three model-selection criteria were applied: Akaike's information criterion (AIC) [26], the Bayesian information criterion (BIC) [27], and Information Complexity (ICOMP) [28].
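For intuition, the sketch below evaluates the least-squares forms of AIC and BIC for one voxel; these standard forms (and the omission of ICOMP) are our simplification, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def aic_bic(rss, n, k):
    """Least-squares forms of Akaike's and the Bayesian information
    criterion; k counts fitted parameters, n the number of time points."""
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

# compare COS (fewer parameters) against COS-FLAC (three extra parameters)
# for one voxel, given hypothetical residual sums of squares of the two fits
print(aic_bic(rss=0.80, n=120, k=7), aic_bic(rss=0.55, n=120, k=10))
```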
Assessment of registration performance
A typical example illustrating the performance of the registration algorithm is contained in Fig 5. Furthermore, we found that the mtre across the selected deformation fields and patients was 1.3269 mm with a standard deviation of 0.6905 mm, whereas the unregistered data yielded an mtre of 8.0234 mm with a standard deviation of 7.4431 mm. These quantitative results confirm the accurate performance of the image registration, in line with the visual assessment.
Fitting results of input function models
Orton's model is a general model to describe an organ's AIF. For reference, the fitted parameters of Orton's model, as well as two measures of the goodness of fit for both AIF and VIF in each patient, are contained in S4 Appendix. Table 2 shows the mean difference between the ground truth and the estimated PK model parameters (as well as the corresponding standard deviations) for Sourbron's model and the COS model. It shows that the COS model achieved a smaller mean difference and standard deviation for four out of five PK model parameters. Additionally, the COS model was fitted more than 7 times faster than Sourbron's model due to the analytical representation of the AIF and VIF. Table 3 collates the mean correlation coefficients averaged over all liver voxels for each patient. Additionally, the mean p-values (and associated standard deviations) of the correlations are given. The p-values are corrected for multiple testing via the Benjamini-Hochberg procedure [29]; the false discovery rate used for the correction was 0.05. The mean adjusted p-values demonstrate that the correlations are highly significant. Furthermore, the correlation coefficients indicated a moderate to strong linear relationship [30]. The moderate to strong correlation and the significance of the correlations are indications that the assumption is appropriate.
The COS-FLAC model with and without RSI weighting
The signal intensity in the liver of one patient was already shown in Fig 2(C). Fig 6 illustrates how models (red) of increasing complexity, from COS up to the COS-FLAC model with RSI weighting, fit the measured data.

Table 2. Comparison between Sourbron's model (discrete AIF) and COS model (analytical AIF) in terms of estimating PK model parameters and time efficiency on synthetic data. The numbers report the mean difference from the ground truth and corresponding standard deviation (between brackets). The numbers printed in boldface are the best outcomes per row.
The RMSE results are collated in Table 4. It shows that the COS-FLAC model with the RSI weighting term achieved the lowest RMSE, significantly better than the COS model and the COS-FLAC model without the RSI weighting term (p < 0.001, assessed by paired t-tests and corrected via the Benjamini-Hochberg procedure [29] for multiple testing). Henceforth, the COS-FLAC model refers to the model including the RSI weighting term. Table 5 shows the scores the PK models obtained according to three model-selection criteria, as well as the percentage of voxels in which these criteria favored the COS-FLAC model over the mere COS approach. The proposed COS-FLAC technique was considered to yield a better fit in the majority of voxels across all subjects according to all three model-selection methods (p < 0.001, assessed by paired t-tests). Previously, Sourbron et al. [2], Chandarana et al. [31] and Simeth et al. [32] reported on liver uptake rates based on their respective PK models, see Table 6. Our results are slightly higher than those in previous studies but are of the same order of magnitude.
Discussion
In this paper, we proposed an improved pharmacokinetic model for DCE-MRI of the liver. The novelties of our work comprise: (1) analytically modeling the arrival-time of the contrast agent in a voxel; (2) compensation for effects that can be modeled by allowing for a breath-dependent, B1-induced variation of the experienced flip-angle in each voxel.
The VIF and AIF might not be completely independent functions, which could introduce correlations in the parameter estimation. Clearly, they were measured in different vessels and we have observed different shapes in our data. For this reason, we modeled them independently.
Orton's model was adopted to represent the liver's input functions (hepatic artery and portal vein) and to embed them into Sourbron's model. The combined Orton and Sourbron (COS) model was shown to enhance both the fitting accuracy and the efficiency of the model fitting (see Table 2). The poorer performance of Sourbron's original approach is due to the discretized delay of the arterial input and the determination of the best model fit over a set of delay values.
A potentially deviating flip-angle was modeled to linearly relate to the displacement of a liver voxel with respect to the first image. We referred to the approach combining both novelties as the COS-FLAC model. The validity of our approach is supported by the moderate to strong linear correlation between displacement and deviation in flip angle. There are some weak correlations in part of the voxels in all cases. We observed that the voxels showing the weak correlation generally do not exhibit large displacements across the time series. In other words, these voxels do not move much. In these cases, the corresponding deviation in flip angles typically was also not large. As a result, the correlation between them is low. Observe that the low correlations in these voxels are not incompatible with our approach: a small displacement in these voxels will produce only a very small signal correction.
One may observe that the same, noisy AIF and VIF were at the basis of estimating the PKM parameters with the two methods. However, a crucial difference is in how the methods deal with arrival time. The errors in the arrival times indeed are small: see Table 2. Simultaneously, larger errors in the other parameters can be observed for the reference method. We attribute this to the correlation with the arrival time. Indeed, with the arrival time constrained to the ground truth value, much smaller errors in the other parameters were observed (data not shown).
The COS-FLAC model was quantitatively assessed by the root mean square error (RMSE) of the residual that remains after fitting the model to the signal in every voxel of the liver. We found that the COS-FLAC model achieved significantly lower RMSE than the COS approach. Furthermore, three model complexity criteria showed that the COS-FLAC model outperformed the COS model in the vast majority of voxels. These findings confirm that a small degree of B1-inhomogeneity can have a marked effect on the estimation of PKM parameters, cf. [14] [15]. One might argue that the COS approach would suffice in voxels in which there is no deviation in flip-angle. This might explain why, according to the model selection criteria, there are still some voxels in which this simpler model appears sufficient. At the same time, the large number of voxels in which the COS-FLAC approach is favored, emphasizes to our opinion its importance.
There are several limitations of our work. A first limitation is that the number of subjects is rather small. Clearly, evaluating the performance of the method on a larger number of subjects would be more convincing. Unfortunately, we are restricted to a small number of subjects as our work is part of a pilot study into the uptake rate of the contrast medium into liver cells.
A second limitation is the lack of a reference standard. Obtaining the true pharmacokinetic tissue parameters under realistic measurement circumstances is a highly complex, still unsolved issue.
iTRAQ-based quantitative proteomics revealing the therapeutic mechanism of a medicinal and edible formula YH0618 in reducing doxorubicin-induced alopecia by targeting keratins and TGF-β/Smad3 pathway
YH0618, a medicinal and edible formulation, has demonstrated the potential to alleviate doxorubicin-induced alopecia in animal studies and clinical trials. However, the mechanisms underlying its therapeutic effects remain unexplored. The objective of this study was to ascertain possible therapeutic targets of YH0618 in the treatment of doxorubicin-induced alopecia. The assessment of hair loss was conducted through measurement of the proportion of the affected area and examination of skin histology. Isobaric tags for relative and absolute quantification (iTRAQ)-based quantitative proteomics was employed to discern proteins that exhibited differential expression. The major proteins associated with doxorubicin-induced alopecia were identified using gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis. The interaction network of the differentially expressed proteins was constructed using the STRING database and Python software. The study analyzed a total of 3894 proteins extracted from the skin tissue of mice. Doxorubicin treatment resulted in the upregulation of 18 distinct proteins, whereas one differential protein was found to be downregulated. These effects were reversed by the administration of YH0618. The bioinformatic analysis revealed that the identified proteins exhibited enrichment in several pathways, including Staphylococcus aureus infection, the estrogen signaling pathway, pyruvate metabolism, chemical carcinogenesis, and the PPAR signaling pathway. Western blot results revealed that the levels of keratin 81 (Krt81), keratin 34 (Krt34), keratin 33a (Krt33a), and Sma- and MAD-related protein 3 (Smad3) were upregulated in response to doxorubicin treatment, and this upregulation was attenuated by the administration of YH0618. These four proteins are likely to correlate with DOX-induced alopecia and serve as promising therapeutic targets for YH0618. This work presents significant insights and empirical evidence for comprehending the process underlying chemotherapy-induced alopecia, paving the way for exploring innovative therapeutic or preventive strategies employing herbal products.
Introduction
Alopecia is a frequently observed adverse effect, occurring in around 65% of patients undergoing chemotherapy. The incidence of alopecia induced by certain chemotherapy drugs, such as doxorubicin and paclitaxel, can reach 80%-100% [1,2]. While alopecia may not pose a direct threat to life, it significantly diminishes the overall quality of life, primarily because of persistent unpleasant emotions, including anxiety, despair, and negative self-evaluation, that are commonly reported by patients [3,4]. The precise molecular process underpinning chemotherapy-induced alopecia (CIA) remains elusive and requires further investigation. The fundamental concepts for mitigating CIA involve minimizing the delivery of chemotherapeutic agents to the hair follicle and employing pharmaceutical interventions to facilitate hair regrowth [5]. Nevertheless, a viable solution to this problem has yet to be devised [2,4]. The sole method sanctioned by the United States Food and Drug Administration (FDA) for CIA is scalp cooling therapy, but its efficacy rate is below 50% [6]. In addition, individuals at heightened risk of cold-induced urticaria, cold agglutinin disease, and cryoglobulinemia are not good candidates for scalp cooling therapy [7,8].
The utilization of Traditional Chinese Medicine (TCM) in treating cancer, either as a standalone therapy or as a complementary approach, has been well documented [9,10]. The possible discordance between TCM and chemotherapy medications remains a significant concern for both medical practitioners and patients. Utilizing species with homology in medicine and food is widely perceived as a safer and more beneficial approach. In this context, a medicinal and edible formulation named YH0618 has been devised to mitigate the toxicity and adverse effects associated with chemotherapy. The composition of YH0618 includes Sojae Semen Nigrum (Hei Dou), Brown Rice (Cao Mi), Glycyrrhizae Radix et Rhizoma (Gan Cao), Auricularia polytricha (Mont.) Sacc. (Hei Mu Er), and Siraitiae Fructus (Luo Han Guo), and its main active components were identified by UPLC analysis [11]. A recent randomized clinical trial showed that the administration of YH0618 resulted in a considerable reduction in skin and nail pigmentation caused by chemotherapy. Additionally, it was demonstrated that YH0618 facilitated the regeneration and darkening of hair, ultimately improving the overall quality of life for individuals undergoing cancer treatment [12]. Furthermore, animal experiments demonstrated that YH0618 effectively reduced doxorubicin (DOX)-induced alopecia in C57BL/6 mice while not compromising its anticancer effects [12].
Despite the demonstrated therapeutic efficacy of YH0618, there remains a dearth of research investigating its underlying mechanism of action. Proteomics assumes a pivotal position in drug discovery and development; this approach enables the investigation of protein interactions, biological functions, and physical attributes. CIA is a complex process involving multiple factors and signaling pathways [13]. TCM exhibits a multifaceted mode of action by targeting various biological entities [14]. Network pharmacology is a valuable approach to constructing a network encompassing multiple components and targets, facilitating elucidation of the molecular mechanisms behind Chinese medicine formulations [15]. Therefore, the primary objective of this study is to employ proteomics and network pharmacology to thoroughly and comprehensively investigate the underlying mechanism by which YH0618 mitigates alopecia induced by DOX.
YH0618 was crushed into coarse particles and decocted with boiling distilled water three times (1:10, 1:8, and 1:8, w/v) for 1 h each. The solution was filtered through gauze and concentrated with a rotary evaporator. The condensate was dehydrated in a freeze-dryer and stored at −80 °C.
Effect of YH0618 on DOX-induced alopecia

Animals
All animal experiments in this study were monitored and handled following an animal protocol (No. 20200903002) approved by the Ethics Committee for Laboratory Animals, Guangzhou University of Chinese Medicine (Guangzhou, China). C57BL/6 mice (7 weeks old, weighing between 18 and 20 g) were supplied by the Laboratory Animal Center at Guangzhou University of Chinese Medicine. These mice were confirmed to be pathogen-free. The mice were housed in a controlled environment with a temperature of 25 ± 2 °C, relative humidity ranging from 40% to 60%, and a 12 h light/dark cycle. The mice were acclimatized for one week before the commencement of the studies, during which they were provided unrestricted access to food and water.
The experimental model was developed using a previously proposed modified method [16]. The dorsal skin of the mice was treated with a depilatory cream, removing the hair shafts within a designated region measuring 1.5 cm by 2 cm. The uniform pink hue of the skin indicated that the hairs were in the telogen phase; removing hair at this stage resulted in the synchronized growth of new hair in the anagen stage. Based on the dose regimen outlined in Supplementary Table S1, 40 mice were allocated into four groups using random assignment. Each group consisted of 5 male and 5 female mice, and the groups were identified as the control group, YH0618 group, DOX group, and DOX + YH0618 group. The DOX and DOX + YH0618 groups were administered a 0.2 mL intraperitoneal injection of DOX (5 mg/kg) once weekly for 3 weeks, whereas the control and YH0618 groups were given an intraperitoneal injection of 0.2 mL vehicle alone. In addition, the YH0618 and DOX + YH0618 groups received intragastric administration of YH0618 at a dosage of 4.5 g/kg daily for 3 weeks, while the control and DOX groups were administered 0.9% NaCl in the same manner. Following the three-week treatment period, the mice were euthanized under anesthesia, and skin samples were collected for subsequent analysis.
Hair loss evaluation
The skin color change and hair regrowth time in the depilated area were observed and recorded daily. The percentage of hair loss area (i.e., area without hair growth/area of the entire depilated region × 100%) was analyzed on the 0th, 7th, 14th, and 21st day after hair removal.
Skin histopathology
The skin tissues were fixed with 4 % paraformaldehyde, embedded in paraffin, and sectioned.The tissue sections were stained with hematoxylin-eosin trichrome for histological evaluation under microscopy.
Exploration of mechanisms by proteomics

Protein sample preparation
Skin tissues were homogenized in sodium dodecyl sulfate (SDS) lysis buffer (1:5, w/v). The homogenates were centrifuged at 12,000×g for 10 min at 4 °C, and the supernatant was collected. The protein concentration of each sample was quantified using a bicinchoninic acid protein assay kit (Thermo Fisher Scientific, MA, USA).
Protein digestion and iTRAQ labeling
A 50 μg protein sample was used, and dithiothreitol (DTT) was added to the protein solution to a final concentration of 5 mM. The mixtures were incubated at 55 °C for 30 min and alkylated by adding iodoacetamide solution (final concentration of 5 mM) for 15 min at room temperature. Subsequently, acetone (1:6, w/v) was added for protein precipitation overnight at −20 °C. The samples were centrifuged for 10 min (4 °C, 8000×g), and the precipitates were collected. The proteins were re-suspended in 100 mM triethylammonium bicarbonate (TEAB) and digested with trypsin (protein:trypsin ratio of 50:1) at 37 °C for 12 h. The mixtures were labeled using the iTRAQ Reagent-8 plex Multiplex Kit (Applied Biosystems, CA, USA) according to the manufacturer's protocol.
Fractionation by high pH reversed-phase (RP) chromatography
The iTRAQ-labeled peptide mixtures were fractionated by HPLC with a reversed-phase column (Agilent Zorbax Extend-C18).
LC-MS/MS analysis
The peptide fractions were analyzed using a Q Exactive Orbitrap mass spectrometer equipped with an Easy-nLC 1200 (Thermo Fisher, MA, USA). The fractions were loaded onto the precolumn at a flow rate of 350 nL/min and separated on a 75 μm × 150 mm analytical column (RP-C18, New Objective, USA). Mobile phase A contained 0.1% formic acid (v/v) in water, whereas ACN was used as mobile phase B. The peptides were separated using a linear gradient: 0-1 min, 2-6% B; 1-52 min, 6-35% B; 52-54 min, 35-90% B; 54-60 min, 90% B. Afterwards, the eluted peptides were analyzed by the mass spectrometer. The MS1 mass resolution was set at 120,000, the automatic gain control value was 3 × 10^6, and the maximum ion injection time was 30 ms. Full-scan MS spectra (m/z 350-1650) were acquired, and the 15 highest peaks were selected for MS/MS. All MS/MS spectra were acquired using high-energy collisional dissociation (HCD) with the collision energy set at 3. The MS/MS resolution was 15,000, the automatic gain control was 1 × 10^5, the maximum ion injection time was 54 ms, and the dynamic exclusion time was set at 40 s.
Bioinformatics analysis
Proteins were identified and analyzed with Proteome Discoverer 2.4 (Thermo Fisher Scientific, MA, USA). The UniProt, Kyoto Encyclopedia of Genes and Genomes (KEGG), Gene Ontology (GO), and Clusters of Orthologous Groups/euKaryotic Orthologous Groups (COG/KOG) databases were adopted for obtaining annotation information and functions of the identified proteins. GO and KEGG pathway enrichment analyses of the identified differentially expressed proteins (DEPs) were performed using the clusterProfiler package in R. The Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) was adopted to analyze the protein-protein interaction (PPI) network among the selected DEPs. Cytoscape 3.7.2 software was used to analyze the differential proteins and to establish their interaction network and Venn diagram.
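As an illustration of the network step, the sketch below builds a PPI graph from an edge table in the style of a STRING website export and ranks hub proteins by degree; the column names, the tiny inline edge list (using proteins named in this paper), and the score threshold are assumptions to be adapted to the actual export.

```python
import pandas as pd
import networkx as nx

# tiny stand-in for an edge table exported from the STRING website; the
# column names follow STRING's TSV export but should be checked against it
edges = pd.DataFrame({
    "node1": ["Krt81", "Krt81", "Krt34", "Smad3"],
    "node2": ["Krt34", "Krt33a", "Krt33a", "Tgfb1"],
    "combined_score": [0.9, 0.8, 0.85, 0.95],
})

g = nx.Graph()
for row in edges.itertuples():
    if row.combined_score >= 0.4:        # STRING's default medium confidence
        g.add_edge(row.node1, row.node2, weight=row.combined_score)

# rank hub proteins by degree as a simple proxy for network centrality
print(sorted(g.degree, key=lambda kv: kv[1], reverse=True))
```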
Verification of target proteins by Western blot
Skin tissues of the mice in each group were homogenized in a lysis buffer containing protease and phosphatase inhibitors. The lysates were centrifuged at 12,000×g for 10 min at 4 °C, and the supernatant was collected. The protein concentration was quantified with a bicinchoninic acid protein assay kit (Thermo Fisher Scientific, MA, USA). The protein samples were separated by SDS-PAGE and transferred onto a PVDF membrane (GE Healthcare, Freiburg, Germany). The membranes were blocked with 5% (w/v) non-fat milk powder in TBS with 0.1% (v/v) Tween-20 for 1 h at room temperature, followed by incubation with primary antibodies, including keratin 81 (Krt81), keratin 34 (Krt34), keratin 33a (Krt33a), keratin 33b (Krt33b), and Sma- and MAD-related protein 3 (Smad3) (Thermo Fisher Scientific, MA, USA), at 4 °C overnight. After washing with TBST, the membranes were incubated with the corresponding secondary anti-mouse or anti-rabbit antibodies (Thermo Fisher Scientific, MA, USA) for 1 h at room temperature. The signals were visualized using the ECL Advance reagent (Seyotin, Guangzhou, China) and quantified using a Tanon 4600 (Tanon, Shanghai, China).
Statistical analysis
The results were presented as the means ± standard deviation (SD) of at least 3 independent experiments. The data were analyzed with SPSS 25.0 software using normality tests, homogeneity-of-variance tests, and one-way analysis of variance (ANOVA) with LSD or Bonferroni multiple-comparison post hoc tests. The histograms for protein expression analysis were plotted using GraphPad Prism 8.0 software (GraphPad Software Inc., USA). P < 0.05 was considered statistically significant.
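A minimal sketch of this comparison in Python (the study itself used SPSS; the group labels and densitometry values below are hypothetical, and a Bonferroni correction stands in for the LSD option):

```python
import itertools
from scipy import stats

# Hypothetical densitometry readings (arbitrary units) for each group.
groups = {
    "control":    [1.00, 0.95, 1.05],
    "DOX":        [1.80, 1.92, 1.71],
    "DOX+YH0618": [1.20, 1.31, 1.15],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise t-tests with Bonferroni correction as a post hoc test.
pairs = list(itertools.combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(1.0, p * len(pairs))  # Bonferroni adjustment
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```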
Effect of YH0618 on DOX-induced alopecia
Fig. 1 and Table 1 present the physical characteristics of mouse skin and the quantification of the hair loss area, respectively. On the 7th day of treatment, the dermal tissue of mice in both the control and YH0618 groups exhibited black pigmentation. In contrast, the dermal tissue of mice in the DOX group and the DOX + YH0618 group retained a pink coloration. No significant variation in the extent of hair loss area was observed among the different groups. On the 14th day, the skin of all experimental groups exhibited a darkened pigmentation accompanied by hair growth. The control group and the YH0618 group had a considerably reduced proportion of hair loss area compared to the DOX group and the DOX + YH0618 group. Compared to the DOX group, the DOX + YH0618 group exhibited a slightly decreased percentage of hair loss area; however, this difference did not reach statistical significance. After 21 days of treatment, the control and YH0618 groups exhibited total hair regrowth in the shaved area. The DOX group showed a 33 % alopecia region, while the co-administration of DOX with YH0618 reduced this proportion to 21 %.
Evaluation of YH0618 on skin histology
The study examined various pathological alterations in skin sections, including hyperemia, congestion, hemorrhage, edema, degeneration, necrosis, hyperplasia, fibrosis, organization, granulation tissue, and inflammatory changes.The findings indicated that the epidermis of mice in all experimental groups exhibited an intact structure with consistent thickness.The organization of the stratum corneum exhibited typical characteristics.The skin tissue showed no apparent signs of edema, congestion, or inflammatory response.On the 21st day, many hair follicles exhibiting consistent distribution and typical morphology were identified inside the dermis and adipose layer of mice in the control and YH0618 groups.Nevertheless, a few atrophic hair follicles were detected within the dermis and adipose tissue layer in the DOX group, with a notably sparse distribution.The DOX + YH0618 group had a larger quantity of hair follicles than the DOX group, and the distribution of hair follicles was uniform with typical morphology, as depicted in Fig. 2.
Identification of DEPs
The iTRAQ analysis successfully identified 3894 proteins. Using the screening criteria of a fold change (FC) greater than 1.2 or less than 0.83 and a significance threshold of P < 0.05, 463 DEPs were identified in the DOX group compared to the control group. Selecting upper and lower bounds for ratio thresholds is a common approach to identify differentially expressed proteins [17]; although threshold selections vary, ratios greater than 1.2 or less than 0.83 are commonly regarded as differential [18]. Among these DEPs, 404 proteins exhibited an upregulated expression level, while 59 proteins showed a downregulated expression level. In addition, 102 DEPs were observed in the DOX + YH0618 group compared to the DOX group; among these, 48 proteins were up-regulated, while 54 were down-regulated. The detected DEPs are listed in Tables S2 and S3. Volcano plots were generated by considering two key factors: the fold change (Log2) between two sample groups and the p-value (−Log10) derived from the t-test. These plots visually represent the notable changes observed between the control group and the DOX group, as well as between the DOX group and the DOX + YH0618 group (Fig. 3A and B). Additionally, the DEPs can be observed and analyzed from a macroscopic standpoint in the cluster heatmaps shown in Fig. 3C-D.
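A minimal sketch of the screening step, assuming the iTRAQ ratios and p-values are already tabulated (the protein names and values below are illustrative placeholders, not the study's data):

```python
import pandas as pd

# Hypothetical table of iTRAQ ratios: one row per protein, with the
# DOX/control fold change and the t-test p-value already computed.
df = pd.DataFrame({
    "protein":     ["Krt81", "Krt34", "Gapdh", "Acly"],
    "fold_change": [1.45, 1.31, 1.02, 0.71],
    "p_value":     [0.004, 0.012, 0.640, 0.030],
})

# Screening criteria used above: FC > 1.2 or FC < 0.83, and P < 0.05.
up   = df[(df["fold_change"] > 1.20) & (df["p_value"] < 0.05)]
down = df[(df["fold_change"] < 0.83) & (df["p_value"] < 0.05)]
deps = pd.concat([up, down])

print(f"{len(up)} up-regulated, {len(down)} down-regulated DEPs")
```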
Cluster analysis of DEPs
To identify the specific proteins targeted by YH0618 in the context of hair loss induced by DOX, a Venn diagram analysis was employed to conduct a more comprehensive assessment of the DEPs. The findings indicated a substantial increase in the expression levels of 386 proteins in the DOX group compared to the control group; after treatment with YH0618, 18 of these proteins exhibited a significant decrease in expression (Fig. 4A). In contrast, the expression levels of 59 proteins exhibited a substantial decrease in the DOX group, whereas only one of these proteins showed a significant increase following treatment with YH0618 (Fig. 4B). Table 2 displays the collection of 19 reversed DEPs, which are potentially the target proteins through which YH0618 mitigates DOX-induced alopecia.
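The reversal logic can be expressed as simple set intersections; the accession sets below are illustrative placeholders rather than the full DEP lists:

```python
# Hypothetical accession sets; in practice these come from the DEP tables.
up_in_dox     = {"Krt81", "Krt34", "Krt33a", "Acly", "Me1"}
down_after_yh = {"Krt81", "Krt34", "Krt33a", "Hrnr"}
down_in_dox   = {"Gsta3", "ProtX"}   # "ProtX" is a placeholder name
up_after_yh   = {"Gsta3"}

# Proteins raised by DOX and lowered again by YH0618 (and vice versa)
# are the candidate "reversed" target proteins.
reversed_deps = (up_in_dox & down_after_yh) | (down_in_dox & up_after_yh)
print(sorted(reversed_deps))
```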
GO analysis of reversed DEPs
The DEPs that were observed following the administration of YH0618 were subjected to GO annotation and enrichment analysis, focusing on the domains of biological process (BP), molecular function (MF), and cellular component (CC). With respect to BP, the findings indicated that the DEPs affected by YH0618 were primarily associated with acetyl-CoA biosynthesis, the pyruvate metabolic process, and lipid biosynthesis (Fig. 5). In the CC analysis, the identified DEPs were linked to the cytosol, mitochondrion, and keratin filament. In the MF analysis, a significant proportion of the reversed DEPs were relevant to structural molecule activity, calcium ion binding, and protein homodimerization activity.
KEGG pathway analysis of reversed DEPs
To determine the probable signaling pathways by which YH0618 reduces doxorubicin-induced alopecia, we conducted a KEGG signaling pathway analysis of the DEPs reversed by YH0618 treatment. The pathways implicated, as shown in Fig. 6, encompass Staphylococcus aureus infection, the estrogen signaling pathway, pyruvate metabolism, chemical carcinogenesis, and the PPAR signaling pathway.
PPI analysis
To conduct a more comprehensive investigation into the reciprocal influence of the reversed DEPs resulting from YH0618 treatment, we utilized the STRING database and the Python tool "network" to evaluate these DEPs.The interaction network diagram displays 13 protein nodes and 25 interactions (Fig. 7).Notably, Krt81, Krt87, Krt34, Krt33b, Acly, Me1, Hrnr, Krt33a, and Gsta3 are positioned at the center of the network and serve as prominent hubs for interacting with other proteins.
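A minimal sketch of how such hub ranking could be reproduced, assuming the Python network tool referred to above is networkx (an assumption) and using an illustrative subset of the edges rather than the full STRING output:

```python
import networkx as nx

# Illustrative subset of the PPI edges among the reversed DEPs.
edges = [
    ("Krt81", "Krt34"), ("Krt81", "Krt33a"), ("Krt81", "Krt33b"),
    ("Krt34", "Krt33a"), ("Krt34", "Krt33b"), ("Acly", "Me1"),
    ("Hrnr", "Krt33a"), ("Gsta3", "Me1"),
]
g = nx.Graph(edges)

# Rank nodes by degree: high-degree nodes are the network hubs.
hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
for protein, degree in hubs[:5]:
    print(protein, degree)
```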
Western blot analysis of the reversed DEPs
The DEPs most closely related to hair growth were chosen for further validation by Western blotting. The findings indicate that the protein expression levels of Krt81, Krt34, Krt33a, and Smad3 were considerably elevated in the DOX group compared to the control group. Furthermore, treatment with YH0618 resulted in a considerable reduction in the expression levels of these proteins. The protein expression level of Krt33b did not exhibit a statistically significant alteration in response to treatment with DOX and YH0618, as demonstrated in Fig. 8A-B.
Discussion
The administration of DOX, a commonly used chemotherapeutic agent, results in significant dermal-epidermal shrinkage and substantial modifications to the epithelial and mesenchymal constituents comprising hair follicles [19]. Prior research has indicated that the primary factor contributing to CIA is the apoptosis of matrix cells. This form of programmed cell death has been associated with various mechanisms, including alterations in p53 gene expression, decreased expression of β-catenin, glutathione depletion, changes in mitochondrial function, and the production of reactive oxygen species [20,21]. The concept of oncosis has been suggested as an additional etiological factor contributing to CIA. The impact of DOX on glucose transport during the initial phase may decrease glycogen levels within the outer root sheath cells of hair follicles, ultimately culminating in oncosis [22]. Furthermore, suppression of angiogenesis and sebaceous gland dysfunction induced by DOX may contribute to hair loss [23,24]. The etiology of hair loss caused by DOX is thus multifaceted. Omics technology offers a systematic methodological approach for examining the mechanism underlying DOX-induced alopecia and for exploring the mechanisms through which potential protective medicines can mitigate it; proteomics and network pharmacology have been employed to elucidate the mechanisms of action of TCM [25][26][27]. Previous works used C57BL/6 mice as the animal model to study alopecia. This model reflects hair growth well and offers several advantages, such as recognition of mature hair follicles by pigmentation, a consistent hair cycle, and access to the anagen-phase follicles that produce hair growth [28,29]. The present work used several methodologies to investigate the underlying mechanism by which the medicinal and edible formula YH0618 mitigates hair loss induced by DOX in C57BL/6 mice.
Fig. 4. Venn diagram analysis of DEPs. (A) The green circle represents the 386 up-regulated proteins in the DOX group compared with the control group; the blue circle represents the 54 down-regulated proteins in the DOX + YH0618 group compared with the DOX group; the overlapping area presents the 18 reversed differential proteins after YH0618 treatment. (B) The green circle represents the 59 down-regulated proteins in the DOX group compared with the control group; the blue circle represents the 48 up-regulated proteins in the DOX + YH0618 group compared with the DOX group; the overlapping area presents the 1 reversed differential protein after YH0618 administration. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)

The current investigation employed the iTRAQ-based quantitative proteomics technique to identify a total of 3894 proteins from the skin tissue of mice. Nineteen DEPs were shown to be impacted by DOX treatment, and these changes could be alleviated by YH0618 treatment. The bioinformatics analysis revealed that the DEPs were implicated in several pathways, including Staphylococcus aureus infection, the estrogen signaling pathway, pyruvate metabolism, chemical carcinogenesis, and the PPAR signaling pathway. Staphylococcus aureus infection has been proven to be closely related to folliculitis, which can cause damage and necrosis of the hair follicle roots, leading to hair loss [30,31]. Some studies also found that estrogen receptors are expressed in the scalp hair follicles, and estrogen regulates the growth cycle and function of hair follicles through binding to estrogen receptors [32]. When the estrogen receptor pathway is abnormal, it may affect the normal physiological processes of hair follicles, resulting in the occurrence or aggravation of hair loss [33]. Flores et al. showed that regulating pyruvate entry into mitochondria, for subsequent oxidation to fuel the TCA cycle, can manipulate hair follicle stem cells and the hair cycle in normal adult mice with typical hair cycling, and proposed inhibition of pyruvate entry into mitochondria as a versatile treatment strategy for alopecia in humans [34]. PPAR is essentially a class of ligand-dependent transcriptional regulators, including three subtypes: PPARα, PPARβ/δ, and PPARγ [35]. Due to the breadth and complexity of the PPAR signaling pathway, it is known to be associated with many proteins, factors, and diseases. Hu et al. developed a PPARγ agonist that can treat hair loss by upregulating the expression of iNOS genes, thereby promoting hair growth [36]. Therefore, the potential mechanism by which YH0618 reduces doxorubicin-induced alopecia appears to entail modulation of energy metabolism, hormone control, and the inflammatory response.
Keratin serves as the primary structural constituent of epithelial tissues, hair, and nails. It maintains cellular structure and regulates various biological processes, including cell proliferation, migration, and apoptosis [37,38]. Over the past three decades, several hair follicle-specific epithelial keratins, specifically those found in the root sheath, and hair keratins have been identified. The expression of type I keratins (Krt25-Krt28) and type II keratins (Krt71-Krt75) is localized primarily to the root sheath of hair follicles. Hair keratins, on the other hand, comprise type I keratins (Krt31, Krt32, Krt33a, Krt33b, Krt34-Krt40) and type II keratins (Krt81-Krt86), as reported in a previous study [39]. Specific keratins are linked to hair abnormalities [40]. As an illustration, it has been observed that mutations in the hard keratin Krt81 can result in monilethrix, an autosomal dominant hair condition [41]. Besides, Giesen et al. [42] postulated that the observed drop in the expression levels of Krt33a and Krt34, two markers associated with the later stage of hair follicle differentiation, may contribute to the alterations in hair structure commonly observed during aging. The Krt33b protein is likewise associated with the development of hair and nails [43][44][45]. In the present study, the proteomic data revealed the involvement of four DEPs (Krt81, Krt34, Krt33a, and Krt33b) in the underlying mechanism of YH0618's activity in mitigating DOX-induced alopecia, and the involvement of Krt81, Krt34, and Krt33a was confirmed by Western blot analysis. The findings of our investigation diverged from previous research, which indicated that the expressions of Krt33a, Krt33b, Krt34, and Krt81 were down-regulated in injured skin [46,47]. In contrast, our work demonstrated that DOX up-regulated the expression of these proteins in the skin. The observed phenomenon may be attributed to the activation of alternative signaling pathways by DOX, resulting in alterations in the expression of these proteins. Further investigation is warranted to elucidate the precise underlying mechanism.
The transforming growth factor-β (TGF-β) protein plays a vital role in various biological processes, including embryonic development, organ formation, physiological remodeling of connective tissue during tissue repair and wound healing, and the development of cancer [48]. The Smad protein family plays a significant role in transporting the TGF-β signal from the cell surface receptor to the nucleus [49], and distinct Smads mediate the diverse signal transduction pathways of members of the TGF-β family. Prior research has established that high-dose radiation can lead to the development of skin thickening, fibrosis, permanent alopecia, and ulceration; radiation exposure induces a significant upregulation of Smad3 protein expression in skin tissue, decreasing skin elasticity and bursting strength [50]. Furthermore, previous studies have indicated a correlation between hair development and hair follicle regeneration and the TGF-β/Smad3 signaling cascade [51,52]. In the present investigation, the administration of DOX resulted in an upregulation of Smad3 protein expression, a finding that aligns with previous studies [53,54]. Notably, the observed effect was mitigated by the application of YH0618. The TGF-β/Smad3 signaling pathway could thus be linked to alopecia induced by DOX and may serve as a viable therapeutic target for YH0618, although additional investigation is necessary to validate this hypothesis.
Although we found that Krt33a, Krt33b, Krt34, Krt81, and the TGF-β/Smad3 signaling pathway might be associated with the action of YH0618 in reducing DOX-induced alopecia, alopecia after chemotherapy is a complicated process. There are two major types of CIA, telogen effluvium and anagen effluvium [55], which may make the physiological course of CIA more complex and unpredictable. Previous research demonstrated that topical treatment targeting cyclin-dependent kinase-2 (CDK-2) decreased CIA at the application site in some tested rats [56]. In animal models, treatments with fibroblast growth factor and epidermal growth factor have demonstrated a preventative function against CIA [57,58]. It has been revealed that p53-deficient mice do not develop CIA, probably as a result of suppression of hair follicle apoptosis [59]. Further study is necessary to validate the relationships between the actions of YH0618 and the potential targets reported above. So far, the present investigation has focused on the potential mechanisms of action of YH0618 in animal models and has not been extended to the relationship between hair regrowth and alopecia in patients after treatment. Further investigations are warranted to confirm whether the actions of YH0618 are related to the potential targets mentioned above in preventing CIA in clinical application.
Conclusion
In summary, our research has provided evidence that the medicinal and edible formula YH0618 can mitigate hair loss induced by DOX and facilitate hair regrowth. The underlying mechanism is likely associated with signaling pathways related to energy metabolism, hormone control, and the inflammatory response. Possible targets of YH0618 include Krt81, Krt34, Krt33a, and the TGF-β/Smad3 signaling pathway. This study contributes novel insights and empirical evidence toward elucidating the underlying mechanism of CIA and sheds light on possible therapeutic targets for preventing and treating CIA. These outcomes establish a groundwork for the subsequent investigation and development of pharmaceutical and nutraceutical interventions.
Fig. 1. Skin color changes and hair growth of mice at different time points of treatment. The hair shafts in an area of 1.5 cm × 2 cm were shaved. The dorsal skin of mice was treated with depilatory cream. The homogeneous pink color of the skin revealed the telogen phase of the hairs. Depilation at this hair stage induced the development of a synchronous anagen stage. The mice were randomly divided into the control group, YH0618 group, DOX group, and DOX + YH0618 group. After 3-week treatment, the mice were sacrificed under anesthesia. The skin samples were collected for further analysis. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
Fig. 3. iTRAQ quantitative proteomic analysis of DEPs after YH0618 treatment. Volcano plots were generated from the protein data between the control group and DOX group (A), and between the DOX group and DOX + YH0618 group (B) (logarithmic transformation based on 10, Student's t-test). The significantly downregulated proteins were annotated in blue (FC < 0.83 and P < 0.05), the significantly upregulated proteins were annotated in red (FC > 1.2 and P < 0.05), and the proteins without differences were indicated in grey. Hierarchical clustering results between the control group and DOX group (C), and between the DOX group and DOX + YH0618 group (D), were presented as a tree-type heatmap with the abscissa showing the samples and the ordinate showing the significant DEPs. The expression levels of the significantly differential proteins in different samples were exhibited in the heatmap by different colors after normalization using the log2 method, in which red dots represent the significantly upregulated proteins, blue dots represent the significantly downregulated proteins, and grey dots represent proteins with no quantitative information. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
Fig. 5. GO analysis of the reversed DEPs in mice skin tissue after YH0618 administration. The abscissa represents the GO entry name, and the ordinate represents the number and percentage of proteins corresponding to the entry.
Fig. 6. KEGG pathway analysis of the reversed DEPs in mice skin tissue after YH0618 treatment. The abscissa represents the enrichment score, and the ordinate represents the pathway information with the top 20 enrichment scores. The color changing from red and green to blue and violet represents the p-value. The bubble size represents the number of significant modular proteins, with a larger area indicating a greater number of proteins. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
Fig. 7. PPI network of the reversed DEPs after YH0618 treatment, displaying 13 protein nodes and 25 interactions.
Fig. 8. Western blot analysis of Krt81, Krt34, Krt33a, Krt33b, and Smad3. Skin tissues of the mice were collected for Western blotting analysis. (A) The expressions of Krt81, Krt34, Krt33a, Krt33b, and Smad3 were analyzed by Western blotting. (B) Quantitative analysis of the related protein bands; their intensities were normalized to that of the GAPDH bands. The original blots are included in the Supplementary Material (Fig. S1). # represents a significant difference compared to the control group (#P < 0.05; ##P < 0.01); * represents a significant difference compared to the DOX group (*P < 0.05; **P < 0.01).
Table 1
Effect of YH0618 on DOX-induced hair loss area.
a Compared with the DOX group, **P < 0.01.
Table 2
Reversed DEPs in mice skin tissue after YH0618 administration.
Revivals imply quantum many-body scars
We derive general results relating revivals in the dynamics of quantum many-body systems to the entanglement properties of energy eigenstates. For a D-dimensional lattice system of N sites initialized in a low-entangled and short-range correlated state, our results show that a perfect revival of the state after a time at most poly(N) implies the existence of "quantum many-body scars", whose number grows at least as the square root of N up to poly-logarithmic factors. These are energy eigenstates with energies placed in an equally-spaced ladder and with Rényi entanglement entropy scaling as log(N) plus an area-law term for any region of the lattice. This shows that quantum many-body scars are a necessary condition for revivals, independent of particularities of the Hamiltonian leading to them. We also present results for approximate revivals, for revivals of expectation values of observables, and prove that the duration of revivals of states has to become vanishingly short with increasing system size.
The behaviour of out of equilibrium quantum manybody systems has been gathering a large amount of attention in recent years. This has largely been motivated by the recent progress of experimental platforms such as cold atoms, ion traps or Rydberg atoms, where many of these systems can be realized in practice [1][2][3]. One of the most widely studied situations in this context is that of "quantum quenches": The system is first prepared in an initial pure state, to then be subjected to an instantaneous change of Hamiltonian H 0 → H that drives it out of equilibrium. In generic cases, it is believed that the dynamics will relax to an equilibrium state locally indistinguishable from a thermal ensemble, as granted by the Eigenstate Thermalization Hypothesis (ETH) [4,5]. Both the ETH and this relaxing behaviour have been confirmed in numerous numerical and experimental works [6,7]. However, there are various cases where this prediction fails notoriously. They include integrable systems that relax to a so-called generalized Gibbs ensemble [8], and also many-body localized systems [9], characterized by the presence of quasi-local integrals of motion [10] which prevent the system from thermalizing due to memory of the initial conditions.
Recently, a new kind of deviation from the predictions of the ETH has been found. It consists of systems which, rather than relaxing, actually revive back to the initial state after a short time. This phenomenon was first found in the experiment of Ref. [3], which showed that a system of 51 Rydberg atoms did not thermalize as expected when prepared in a particular initial product state. Shortly after, this was associated with the presence of a number of anomalous energy eigenstates in the spectrum [11], the so called quantum many-body scars. The first class of models displaying such anomalous eigenstates had been constructed in [12], and since then, numerous recent efforts have aimed to characterize these eigenstates [13][14][15][16][17][18][19]. Since their discovery, they have been found in further classes of models, see for example Refs. [20][21][22][23][24][25][26][27][28], (including driven ones [29][30][31]), some of which even display perfect revivals when the system is prepared in particular product states [32,33] or matrix product states (MPS) [34,35]. While it is clear that in any model exhibiting scarred eigenstates there are relatively low-entangled initial states that show perfect revivals (simply take a superposition of two scarred eigenstates), it is not expected that one can always find short-range correlated states (e.g., product states) that show perfect revivals.
Motivated by these recent findings, we here derive a number of analytical results that apply to many-body systems exhibiting revivals at short times from low-entangled and short-range correlated states. Our results significantly improve on a Lemma presented in Ref. [32]. We first derive properties of the energy spectrum and eigenstates that have to be fulfilled whenever (approximate) revivals appear in a local quantum many-body system, independent of the details of the Hamiltonian and in any dimension D of the underlying lattice. We show that the existence of at least O(√N/log^{2D}(N)) (where N is the system size) quantum many-body scars follows from the early revivals of low-entangled and short-range correlated initial states, when the revival time τ is at most of the order of poly(N). We prove that all of these quantum many-body scars have Rényi entanglement entropies (of orders α > 1) of at most O(log(N)) + O(|∂A|), for any subset A of the lattice sites, with the area-law term vanishing if the initial state experiencing revivals is a product state. Our bounds hence match the scaling that has been found in concrete model Hamiltonians [20][21][22][32][33][34][35]. In dimension D = 2 or higher and for initial product states, our results show that quantum many-body scars show even weaker entanglement in terms of Rényi entropies of order α > 1 than allowed by an area law (with log corrections). For Rényi entropies with α ≤ 1, we use techniques inspired by the problem of bounding entanglement of ground states of gapped models [36] to show that the entanglement entropy scales at most as O(√(N|∂A|)).
The paper is structured as follows: In Section I we state our assumptions on the initial states and define the notion of exact and approximate revivals. Then, in Section II, we give constraints on the energy distribution of initial states with revivals, which we use to give bounds on the Rényi entanglement entropy S_α, first in Section III (for α > 1) and then in Section IV (for α ≤ 1). In Section V we explain the consequences of perfect revivals of an observable, and in Section VI we give universal constraints that all periodic revivals must obey. In the Appendix we include the proofs of some of the technical statements and discuss a model example to benchmark our bounds.
I. SYSTEMS WITH REVIVALS
We consider a system on a regular D-dimensional lattice Λ of N d-dimensional sites with a local Hamiltonian H = Σ_{x∈Λ} h_x, where the local terms h_x have support on at most b neighboring sites and are uniformly bounded by a constant h: ‖h_x‖ ≤ h. As usual, we denote the unitary implementing time evolution by U_t = exp(−iHt) and the energy eigenstates by |E_j⟩. Without loss of generality, we assume that the ground state energy vanishes, E_0 = 0, and set ℏ = 1. We further assume that the system is prepared in a pure state |Ψ⟩ which: i) Is a low-entangled state: For every region A of the lattice, the reduced density matrix σ_A has rank at most χ^{|∂A|}, where |∂A| is the area of the boundary of A and χ is independent of the system size.
ii) Is short-range correlated and out of equilibrium: It fulfills exponential decay of correlations with a finite correlation length, and the standard deviation of the energy, σ ≡ (⟨Ψ|H²|Ψ⟩ − ⟨Ψ|H|Ψ⟩²)^{1/2}, is non-zero and satisfies σ ≤ s√N for a constant s.

The statement of assumption i) is somewhat technical, but it includes all states that can be represented by a tensor network with constant bond dimension, such as projected entangled-pair states (PEPS) in D = 2 and matrix product states (MPS) in D = 1. The constant χ is then directly related to the bond dimension. In particular, for product states we have χ = 1. The upper bound σ ≤ s√N required in assumption ii) follows directly from the finite correlation length. The assumption therefore simply makes explicit that the initial state must not be an eigenstate of the Hamiltonian. We emphasize that generic tensor network states also have a finite correlation length [37].
The way in which we understand revivals of a state is in terms of the fidelity with the initial state, captured by the following definition.

Definition 1. An initial state |Ψ⟩ = Σ_j c_j |E_j⟩ evolved with a Hamiltonian H has an ε-revival at time τ if

F(τ) ≥ 1 − ε,  where F(t) ≡ |f(t)| and f(t) = ⟨Ψ| exp(−iHt) |Ψ⟩.

The definition only involves an ε-revival at a single time τ. However, it implies that there are further periodic approximate revivals at later times. Concretely, an ε-revival at time τ implies (see Appendix A 1 for a derivation)

F(kτ) ≥ 1 − k√(ε(2 − ε)),  k = 1, 2, . . .  (2)

We emphasize that the revival of the full many-body state is a very strong condition, and f(t), sometimes known as the "spectral form factor", and its absolute value F(t), known as the "survival probability", are not directly measurable quantities (F(t) is, however, measurable in principle using an interferometric Ramsey scheme [38][39][40][41]). For this reason, we also consider the case of a perfectly recurring expectation value of an observable A, leading to similar results under an additional assumption (see Section V).
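A toy numerical illustration of this definition, under the assumption of a spectrum supported on an exactly equally-spaced ladder (in which case the state has a perfect, i.e. ε = 0, revival at τ = 2π/ω):

```python
import numpy as np

# Toy spectrum: a state supported on an equally-spaced ladder E_l = omega*l
# has a perfect (epsilon = 0) revival at tau = 2*pi/omega.
omega = 1.0
energies = omega * np.arange(8)
weights = np.random.default_rng(0).random(8)
weights /= weights.sum()                    # the probabilities |c_l|^2

def survival(t):
    """F(t) = |f(t)| = |sum_l |c_l|^2 * exp(-i E_l t)|."""
    return abs(np.sum(weights * np.exp(-1j * energies * t)))

tau = 2 * np.pi / omega
print(survival(0.0), survival(tau))         # both 1.0: a perfect revival
print(survival(0.37 * tau))                 # generically < 1 in between
```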
II. CONSTRAINTS ON THE ENERGY DISTRIBUTION
From the definition f(t) = ⟨Ψ| exp(−iHt) |Ψ⟩, it is clear that f(t) is the characteristic function of the probability distribution of energy. In this section, we therefore study the properties of the probability distribution of energy in the case of ε-revivals of a state that fulfils assumption ii) above. First we show that if there are approximate revivals at short times τ, a large weight of the distribution is contained within equally-spaced "peaks", whose spacing depends on τ (see Fig. 1). This is true for any initial state. We then make use of the fact that the probability distribution of energy of a state with a finite correlation length is roughly Gaussian, with which we show that at least ∼√N of the peaks each contain a total weight of at least O(1/N).
As before, we write the initial state in the energy eigenbasis as |Ψ⟩ = Σ_j c_j |E_j⟩. In the case where the Hamiltonian has degenerate energy levels, we choose the basis in each energy eigenspace so that every energy appears only once in the above decomposition of |Ψ⟩. To set up some further notation, let us introduce α(t) as the phase of f(t), f(t) = F(t) e^{iα(t)}. Given α(τ) and an arbitrary constant 0 ≤ δ ≤ π, we define for all l ∈ Z the energy intervals

∆_δ(l) = (2πl + α(τ))/τ + [−δ/τ, δ/τ],

where addition is point-wise, and denote by ∆̄_δ(l) the interval between two consecutive such intervals. These partition the real line as R = ∪_{l∈Z} (∆_δ(l) ∪ ∆̄_δ(l)). Note that since ‖H‖ ≤ hN, the number of intervals ∆_δ(l) in the spectrum with non-zero energy eigenvalues is at most n = hNτ/2π. To de-clutter the notation in what follows, let us also introduce p as the probability measure of energy of the initial state, so that

p(∆_δ(l)) = Σ_{j: E_j ∈ ∆_δ(l)} |c_j|².

The following lemma lower bounds the probability of measuring an energy of the initial state within one of the intervals ∆_δ(l).

Lemma 1. If |Ψ⟩ has an ε-revival at time τ, then for every 0 ≤ δ ≤ π,

Σ_{l∈Z} p(∆_δ(l)) ≥ 1 − ε/(1 − cos(δ)).  (8)
Proof. The proof follows from simple applications of inequalities between complex numbers. Since f(0) = 1, an ε-revival at time τ means that 1 − ε ≤ |f(τ)|. Since for any complex number z we have |z| = √(Re z² + Im z²) ≥ |Re z|, we get the lower bound 1 − ε ≤ Re(e^{−iα(τ)} f(τ)) = Σ_j |c_j|² cos(E_j τ − α(τ)). We now split up the summation in terms of the intervals ∆_δ(l) and ∆̄_δ(l). For E_j ∈ ∆_δ(l) we simply bound cos(E_j τ − α(τ)) ≤ 1, while for E_j ∈ ∆̄_δ(l) we have |E_j τ − α(τ) − 2πl| ≥ δ and hence cos(E_j τ − α(τ)) ≤ cos(δ). We hence obtain 1 − ε ≤ Σ_l p(∆_δ(l)) + cos(δ) Σ_l p(∆̄_δ(l)). Using the normalization of the probability distribution of energy, Σ_l p(∆_δ(l)) + Σ_l p(∆̄_δ(l)) = 1, we then find the claimed bound.

Lemma 1 tells us that if an ε-revival at time τ occurs, the energy distribution must be mostly contained in the intervals ∆_δ(l) as long as cos(δ) is not too close to unity. The smaller ε (which is equivalent to an increasingly exact revival), the narrower the intervals ∆_δ(l) can be made, by choosing a δ such that the RHS of (8) is close to 1. If the recurrence time τ is very large, both the distance between the intervals ∆_δ(l) and their width 2δ/τ are small. In a finite system, for every ε > 0, recurrence theorems guarantee [42,43] the existence of a corresponding recurrence time τ_R. For generic systems, however, one expects that τ_R = O(exp(exp(N))), while for particular cases such as integrable systems, it is expected that τ_R = O(exp(N)) [44]. In either case, the distance between the intervals ∆_δ(l) becomes comparable to or smaller than the level spacing, so that the union of the ∆_δ(l) automatically contains (almost) all energy eigenvalues.
The next important feature of energy distributions of local models in a state with finite-correlation length is given by the Berry-Esseen theorem [45]. This is a strengthening of the central limit theorem, in which the error from having finite sample sizes is bounded by a function of the number of samples. It allows us to derive the second key constraint.
Lemma 2. Let |Ψ⟩ be a state fulfilling assumption ii). Then there exists a constant K ≥ 0 (independent of N) such that, for every l,

p(∆_δ(l)) ≤ δ/(τσ) + K log^{2D}(N)/√N.

The proof can be found in Appendix A 2. Lemma 1 and Lemma 2 have competing effects: While Lemma 1 shows that the distribution clusters around at most n evenly spaced energy intervals, Lemma 2 guarantees that no particular interval of energy width 2δ/τ can contain a weight larger than δ/(τσ) + K log^{2D}(N)/√N. Together, they imply the existence of a large number of intervals, each containing a certain minimum weight:

Theorem 1. Given an initial state fulfilling assumption ii) and with an ε-revival at time τ, then for every c > 1 and 0 < δ ≤ π, the number N_{c,δ} of intervals ∆_δ(l) in the energy distribution with p(∆_δ(l)) > 1/(cN) is lower bounded as

N_{c,δ} ≥ [1 − ε/(1 − cos(δ)) − hτ/(2πc)] / [δ/(τσ) + K log^{2D}(N)/√N].
Proof. The total number of peaks ∆_δ(l) is upper bounded by n = hNτ/2π. Hence the total number of peaks such that p(∆_δ(l)) ≤ 1/(cN) is trivially also upper bounded by n. Let the index set J_δ collect the peaks such that p(∆_δ(l)) > 1/(cN). Then using Lemma 1 we find

1 − ε/(1 − cos(δ)) ≤ Σ_l p(∆_δ(l)) ≤ Σ_{l∈J_δ} p(∆_δ(l)) + n/(cN).

Using Lemma 2 we then get

1 − ε/(1 − cos(δ)) ≤ N_{c,δ} [δ/(τσ) + K log^{2D}(N)/√N] + hτ/(2πc),

and re-arranging yields the desired bound.
To understand this bound, let us make a specific choice for c and δ, assuming that ε is very small. For example, we can choose c = 2hτ/π and δ = 2√ε, so that ε/(1 − cos(δ)) ≈ 1/2, and we find

N_{c,δ} ≥ (1/4) / [δ/(τσ) + K log^{2D}(N)/√N].

We see that the number of peaks of width 2δ/τ, such that each of them contains weight at least 1/(cN) = π/(2hτN), is essentially lower bounded by O(√N/log^{2D}(N)). The result holds for any value of τ, but if τ scales very quickly with N, this result loses its predictive power. For τ = poly(N), one still finds a total weight of 1/poly(N) in each peak, which is sufficient for our arguments on entanglement in the next section. However, if we consider the usual recurrence time τ_R for some ε > 0 in a generic system, which is τ_R = O(exp(exp(N))), our bound trivializes: the r.h.s. becomes negative if we do not choose c doubly-exponentially large in the system size. At the same time, if we choose c doubly-exponentially large, the peaks are only required to contain a doubly-exponentially small amount of weight, which does not yield useful information.
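A small numerical sketch of the weight-concentration effect behind Lemma 1 and Theorem 1, for a hypothetical spectrum lying close to (but not exactly on) the ladder:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, delta = 2 * np.pi, 0.3

# Energies close to (but not exactly on) the ladder (2*pi*l)/tau = l.
levels = np.arange(40) + 0.01 * rng.standard_normal(40)
weights = rng.random(40)
weights /= weights.sum()

# Survival probability at tau, and weight inside the intervals
# Delta_delta(l) of half-width delta/tau around the ladder points.
F_tau = abs(np.sum(weights * np.exp(-1j * levels * tau)))
dist_to_ladder = np.abs(levels - np.round(levels))
in_peaks = weights[dist_to_ladder <= delta / tau].sum()

print(f"F(tau) = {F_tau:.3f}, weight in peaks = {in_peaks:.3f}")
# A large F(tau) forces most of the weight into the peaks, as in Lemma 1.
```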
III. BOUNDS ON THE RÉNYI ENTANGLEMENT ENTROPY WITH α > 1

We now estimate the entanglement entropy of (approximate) eigenstates of the system. The previous discussion motivates the definition of the following normalized pure states:

|Ê_l⟩ = p(∆_δ(l))^{−1/2} Σ_{j: E_j ∈ ∆_δ(l)} c_j |E_j⟩.

These are approximate energy eigenstates with energy Ê_l = (2πl + α(τ))/τ, for which δ/τ controls the precision, in the sense that

‖(e^{−iHt} − e^{−iÊ_l t}) |Ê_l⟩‖ ≤ δt/τ,

which is small for δt/τ ≪ 1. The states |Ê_l⟩ hence dephase in a time of order τ/δ, but cannot be distinguished from eigenstates on time-scales much smaller than that. In the limit δ/τ → 0 they converge to actual eigenstates, provided that the limit exists, i.e., the interval ∆_δ(l) actually contains an eigenstate in this limit.
Theorem 1 implies that the initial state has a fidelity of at least 1/(cN) with N_{c,δ} of the approximate eigenstates. This is in fact enough to bound the Rényi entanglement entropy S_α of those approximate eigenstates for every region of the lattice, which is the focus of our next main result. We remind the reader at this point that the Rényi entropies are defined as

S_α(ρ) = (1/(1 − α)) log Tr(ρ^α).

In the limit α → 1 they converge to the von Neumann entropy pointwise, and they fulfill S_α ≤ S_β for α ≥ β. In the limit α → ∞, one obtains S_∞(ρ) = −log(‖ρ‖).
Theorem 2. There exist at least N_{c,δ} many of the approximate eigenstates |Ê_l⟩ with the following property: For all α > 1 and for any subregion A of the lattice, the Rényi entanglement entropy of ρ_A^{(l)} = Tr_{A^c}[|Ê_l⟩⟨Ê_l|] is bounded as

S_α(ρ_A^{(l)}) ≤ (α/(α − 1)) [log(cN) + |∂A| log(χ)],

where N_{c,δ} is bounded as per Theorem 1.

Proof. This result is a slight extension of an argument from Ref. [46]: Since the fidelity F between two quantum states cannot decrease under tracing out sub-systems, we have F(ρ_A, σ_A) ≥ |⟨Φ|Ψ⟩| for any pure state |Φ⟩, where ρ_A = Tr_{A^c}[|Φ⟩⟨Φ|] is the reduced state of |Φ⟩ on A and σ_A that of |Ψ⟩. The fidelity between two states is smaller than that of the outcome distributions of any measurement on the states. We can therefore use the binary projective measurement {P_σ, 1 − P_σ}, with P_σ being the projector onto the image of σ_A, to find

rank(σ_A) ‖ρ_A‖ ≥ Tr[P_σ ρ_A] ≥ F(ρ_A, σ_A)² ≥ |⟨Φ|Ψ⟩|².

In Refs. [46,47] it was further shown that S_∞ ≥ ((α − 1)/α) S_α. Using assumption i), rank(σ_A) ≤ χ^{|∂A|}, and the fact that |⟨Ê_l|Ψ⟩|² = p(∆_δ(l)) ≥ 1/(cN) for at least N_{c,δ} of the approximate eigenstates, we thus find e^{−S_∞(ρ_A^{(l)})} χ^{|∂A|} ≥ 1/(cN), and solving for S_α(ρ_A^{(l)}) yields the desired bound. In D = 1, |∂A| simply counts the number of connected components of A, and for a product state we have χ = 1, so that the area-law term vanishes. As long as c = O(poly(N)), the result then leads to O(√N/poly(log(N))) approximate eigenstates with entanglement entropy of order O(log(N)). Since τ < c, this allows for a longer revival time, of up to τ = O(poly(N)).
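The inequality S_∞ ≥ ((α − 1)/α) S_α used in this proof can be checked numerically on a random spectrum; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.random(16)
p /= p.sum()                                 # spectrum of a density matrix

def renyi(alpha, p):
    """Renyi entropy S_alpha = log(sum p^alpha) / (1 - alpha), alpha != 1."""
    return np.log(np.sum(p ** alpha)) / (1 - alpha)

s_inf = -np.log(p.max())                     # S_infinity = -log ||rho||
for alpha in (1.5, 2.0, 4.0):
    assert s_inf >= (alpha - 1) / alpha * renyi(alpha, p)
print("S_inf >= (alpha - 1)/alpha * S_alpha holds for the sampled spectrum")
```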
For systems that exhibit perfect revivals, ε = 0, we can choose δ = 0, so that the |Ê_l⟩ become exact eigenstates with energies in the set {(2πk + α(τ))/τ}_{k∈Z}.

Corollary 1. If F(τ) = F(0) for some τ, there exists a set of at least N_{c,0} energy eigenstates |E_l⟩ with energies in the set {(2πk + α(τ))/τ}_{k∈Z} and such that their entanglement entropy of any region A is bounded as

S_α(ρ_A^{(l)}) ≤ (α/(α − 1)) [log(cN) + |∂A| log(χ)],

where

N_{c,0} ≥ (1 − hτ/(2πc)) √N / (K log^{2D}(N)).

This bound on the entropy is consistent with the examples in [20][21][22][32][33][34][35], which display eigenstates with a log(N) scaling of the von Neumann entropy (see Appendix B for a more detailed comparison).
IV. BOUNDS ON THE RÉNYI ENTANGLEMENT ENTROPY WITH α ≤ 1
For this range of entropies, let us again consider the case of perfect revivals, with F(τ) = F(0). In this case, the scar states on which the initial state |Ψ⟩ has support can be exactly represented by polynomial functions of the Hamiltonian with a low degree. Consider the polynomial

K_i(x) = Π_{j≠i} (x − E_j)/(E_i − E_j),

where the product over j ranges from 1 to hNτ/2π, except for i.
This polynomial satisfies K_i(E_j) = δ_{ij} on the energies in the support of |Ψ⟩, so that K_i(H)|Ψ⟩ = c_i|E_i⟩, which can be interpreted as the statement that the polynomial K_i(H) acts on |Ψ⟩ as the projector |E_i⟩⟨E_i|. The next result is an immediate consequence of this construction, following [36].
Theorem 3. Let |Ψ⟩ fulfill assumption i) and have a perfect revival at time τ, F(τ) = F(0). Then for all eigenstates |E_i⟩ with non-zero support on |Ψ⟩, the Rényi-0 entanglement entropy of sufficiently regular regions is bounded as

S_0(ρ_A^{(i)}) = O(√(N|∂A|)),

up to factors logarithmic in N (for τ, h = O(1)). The proof, together with a precise definition of what we mean by sufficiently regular regions, can be found in Appendix A 3. This result also bounds the von Neumann entropy, since S_1 ≤ S_0. Notice that this result holds for all eigenstates on the equally-spaced ladder, as opposed to only O(√N) of them as in Corollary 1. While it represents a non-trivial bound, much smaller than the O(N) expected for most eigenstates, the concrete models in the literature show that this could potentially be improved to O(log N). Indeed, in concrete models, the scar states can usually be written as |E_i⟩ = (Σ_j S_j)^i |Φ⟩ for some simple state |Φ⟩ (such as the ground state), where the S_j are single-particle operators and j labels the sites of the lattice. These eigenstates |E_i⟩ have equally-spaced energies E_i = ωi + E_0, so that there are at most hN/ω of them in the spectrum. Writing

(Σ_j S_j)^i = Σ_{k=0}^{i} C(i,k) (Σ_{j∈A} S_j)^k (Σ_{j∈A^c} S_j)^{i−k},

where C(i,k) denotes the binomial coefficient, we find that the Schmidt rank of the operator (Σ_j S_j)^i across the cut is at most i + 1 ≤ hN/ω + 1. Thus, if |Φ⟩ is low-entangled in the sense of assumption i), as is usually the case, all entanglement entropies of |E_i⟩ are bounded by log(hN/ω + 1) + |∂A| log(χ).
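A minimal numerical sketch of this logarithmic scaling, using the spin-1/2 Dicke tower (S^+)^i |↓...↓⟩ as a stand-in for the scarred towers of the concrete models (an assumption; the cited models are spin-1):

```python
import numpy as np
from scipy.special import comb

def dicke_entropy(n, i):
    """Von Neumann entropy of half of the Dicke state (S^+)^i |down...down>
    on n spin-1/2 sites; the Schmidt coefficients across a half cut are
    hypergeometric: lambda_k = C(n/2, k) * C(n/2, i - k) / C(n, i)."""
    half = n // 2
    k = np.arange(i + 1)
    lam = comb(half, k) * comb(half, i - k) / comb(n, i)
    lam = lam[lam > 0]
    return -np.sum(lam * np.log(lam))

for n in (8, 32, 128, 512):
    print(n, dicke_entropy(n, n // 2))       # grows roughly like 0.5*log(n)
```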
V. REVIVALS IN AN OBSERVABLE
Assuming a revival of the full many-body state is a rather strong condition. Intuitively, it should be possible that physically relevant observables have a revival in terms of their expectation value A(t) ≡ ⟨Ψ(t)| A |Ψ(t)⟩ at time τ and yet the full many-body state has a small overlap with the initial state, F(τ) ≪ 1. It may therefore be surprising that conclusions similar to those above can be reached when one assumes that the expectation value is periodic and makes one further assumption on the observable. To state this assumption, let us write

A(t) = Σ_ω v_ω e^{−iωt},  with v_ω = Σ_{E_i − E_j = ω} c_i c_j* A_{ij},

where the A_{ij} are the matrix elements of A in the energy eigenbasis. Then, for any ω, periodicity A(t + τ) = A(t) implies

v_ω (e^{−iωτ} − 1) = 0.

It follows that either the frequency ω does not appear in the dynamics of the expectation value (v_ω = 0) or it is of the form ω = 2πl/τ. For local observables in many-body systems, we expect that in general A_{ij} ≠ 0, unless A and H share some symmetry. It therefore seems reasonable to assume that, generically, v_ω = 0 forces c_i c_j* = 0 for all pairs contributing to v_ω. We thus conclude that c_i c_j* ≠ 0 only if E_i − E_j = 2πl/τ for some integer l. This in turn implies F(0) = F(τ), which is the assumption of Corollary 1. We leave it as an open problem to explore the setting of approximate ε-revivals of local observables.
VI. UNIVERSAL CONSTRAINTS ON REVIVALS
In the preceding sections, we have assumed revivals and derived properties of the energy eigenstates from this assumption. Before concluding, let us now briefly discuss general constraints for such revivals which apply to any model with a local Hamiltonian. It is expected that if revivals of the initial product state exist, their duration (i.e., the time for which F (t) is larger than some constant) must become vanishingly short in the thermodynamic limit (see Fig. 2). This is also a feature found in the concrete models (see, for example, Refs. [33,34]). This property does not imply that the duration of revivals for local observables becomes short in the thermodynamic limit, but only the time-interval in which the full many-body state has large overlap with the initial state.
We illustrate this behaviour with two different and general results. The first one shows that the time-averaged fidelity decays with the system size for any initial state fulfilling assumption ii).

Theorem 4. Let |Ψ⟩ be a pure state fulfilling assumption ii). Then, for every T > 0,

(1/T) ∫₀^T F(t)² dt ≤ K (1/(σT) + log^{2D}(N)/√N),

where K ≥ 0 is a constant independent of system size.
A revival of fidelity at least 1 − ε sustained over a time interval of length τ_rev around a time τ contributes at least (1 − ε)² τ_rev/τ to the LHS of the bound in Theorem 4 with T = τ. For times τ ≥ O(1), we see that the RHS goes as ∼O(1/√N). This bound then restricts revivals of high fidelity to either a very short time interval τ_rev or a very late time τ. In Appendix B we show that the model from [33] effectively saturates this bound.
The second result utilizes the Lieb-Robinson bound [48] to show that a short time after an initial product state is prepared (or equally, after a perfect revival), its overlap with the initial state has to be sub-exponentially small in the system size. For simplicity, we formulate and prove this result only in the case of a D-dimensional cubic lattice of side-length L and with a translationally invariant initial state and Hamiltonian. We emphasize, however, that a similar argument applies to any regular D-dimensional lattice and also for initial states that are only translationally invariant with a higher period than the lattice spacing.
Theorem 5. Let |Ψ⟩ = ⊗_{x∈Λ} |ψ_x⟩ and H be translationally invariant on a cubic lattice of side-length L, and define

k(t) = −log Tr[ρ_x(t) |ψ_x⟩⟨ψ_x|],

where ρ_x(t) = Tr_{{x}^c}[U_t |Ψ⟩⟨Ψ| U_t^†] is the reduced density matrix of an arbitrary site x at time t (independent of x by translation invariance). If |Ψ⟩ is not an eigenstate, then for any δ > 0 there exists a time 0 < τ < δ such that k(τ) > 0. For any such fixed time τ and for large enough L, we have

F(τ) ≤ exp(−C (k(τ) L^D)^{1/(1+D)})

for a constant C > 0 independent of L. The theorem says that, whenever k(τ) > 0, the fidelity between |Ψ⟩ and the time-evolved state U_τ|Ψ⟩ is sub-exponentially small in the linear size of the system L. Furthermore, from a perturbative expansion one quickly finds that for small τ we have k(τ) = O(τ²). The proof of Theorem 5 is relatively involved and presented in Appendix A 5. However, the idea behind it is simple and sketched in Fig. 3.
VII. CONCLUSION
We have derived general results on the energy spectrum and the entanglement of (approximate) energy eigenstates for systems that show revivals. Most importantly, our results show that the presence of "quantum many-body scars" with small amounts of entanglement of order log(N) is a necessary consequence of the existence of revivals of a low-entangled state with a revival time that is at most O(poly(N)). This explains why this scaling behaviour has been found in the concrete models studied so far [20][21][22][32][33][34][35]. One drawback of our results is that they only show an O(log(N)) scaling for Rényi entanglement entropies of orders α > 1, while for smaller α we can only show the upper bound O(√(N|∂A|)).
While it is often found in practice that the von Neumann entropy and the higher-order Rényi entanglement entropies show a similar scaling behaviour, this is not always the case. In particular, our bounds on the Rényi entanglement entropies do not guarantee the existence of an efficient description in terms of matrix product states (MPS) [49] (although an O(log(N)) bound on the von Neumann entropy would not imply this either). Indeed, it is known [46] that there exist states with both i) an arbitrarily large overlap with a product state and ii) a volume-law scaling of the von Neumann entropy, while all Rényi entropies of order α > 1 are bounded by a constant (dependent on α). It is an interesting open problem to find arguments for bounding the Rényi entropies with α < 1 by O(log(N)), which would guarantee an efficient MPS description of the scar states from dynamical considerations alone. A further interesting open problem is to understand whether the emergent (approximate) SU(2) representations that are connected to quantum many-body scars in concrete models [22,[32][33][34][35]] can be derived from general arguments. Finally, it would be interesting to see whether results similar to those in the case of approximate revivals of the initial state can also be derived for approximate revivals of (generic) expectation values of observables. This would also be interesting from the point of view of bounding equilibration time-scales in interacting quantum many-body systems, a problem where relatively little rigorous progress has been made so far (see, for example, Refs. [7,[50][51][52]] and references therein for recent discussions of this problem). In particular, it is an interesting open problem whether ε-approximate revivals of local observables at early times are possible in entanglement-ergodic systems [46], where all energy eigenstates at positive energy density fulfill weak volume laws of entanglement.
Acknowledgements. The authors acknowledge useful discussions with Cheng-Ju Lin. H. W. acknowledges support from the Swiss National Science Foundation through SNSF project No. 200020 165843 and through the National Centre of Competence in Research Quantum Science and Technology (QSIT). A. A. is supported by the Canadian Institute for Advanced Research, through funding provided to the Institute for Quantum Computing by the Government of Canada and the Province of Ontario. This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. Below we report the estimated CO₂ emissions for transport due to this project (template from scientificconduct.github.io).

Appendix A: Proofs

Derivation of inequality (2)

Here we show inequality (2), using the notation |Ψ(t)⟩ = U_t|Ψ⟩. First, from the triangle inequality and the unitary invariance of the trace norm, we find

(1/2) ‖ |Ψ(kτ)⟩⟨Ψ(kτ)| − |Ψ⟩⟨Ψ| ‖₁ ≤ (k/2) ‖ |Ψ(τ)⟩⟨Ψ(τ)| − |Ψ⟩⟨Ψ| ‖₁.

We now make use of the Fuchs-van de Graaf inequalities [53],

1 − F(ρ, σ) ≤ (1/2) ‖ρ − σ‖₁ ≤ √(1 − F(ρ, σ)²),

where F(ρ, σ) = ‖√ρ √σ‖₁ is the fidelity between two quantum states. In our case, F(τ) ≥ 1 − ε, so that

1 − F(kτ) ≤ (k/2) ‖ |Ψ(τ)⟩⟨Ψ(τ)| − |Ψ⟩⟨Ψ| ‖₁ ≤ k√(1 − F(τ)²) ≤ k√(ε(2 − ε)),

which proves the claim.
Proof of Lemma 2
We first need the Berry-Esseen theorem for local Hamiltonians from [45], which reads as follows.
Theorem 6. (Lemma 8 of [45]) Let |Ψ⟩ be a state with a finite correlation length and energy standard deviation σ, evolving under a local Hamiltonian with uniformly bounded local terms, for a system of N particles on a D-dimensional lattice. Given the cumulative distribution function of the energy, J(E) = Σ_{j: E_j ≤ E} |c_j|², and the Gaussian cumulative function G(E) with the same mean and variance, we have

sup_E |J(E) − G(E)| ≤ C log^{2D}(N)/√N,

where C is a constant and s = σ/√N.
The proof of Lemma 2 follows straightforwardly from this result. Let us first recall the definition of p(∆_δ(l)) from the main text,

p(∆_δ(l)) = Σ_{i: E_i ∈ ∆_δ(l)} |c_i|²,

where ∆_δ(l) = (2πl + α(τ))/τ + [−δ/τ, δ/τ]. Using the notation of Theorem 6, we can write p(∆_δ(l)) = J(E_+) − J(E_−), with E_± the endpoints of the interval ∆_δ(l). Now, using the triangle inequality and the Berry-Esseen bound,

p(∆_δ(l)) ≤ G(E_+) − G(E_−) + K log^{2D}(N)/√N,

where K ≥ 2C/s³. By definition of the Gaussian cumulative function we have that, for all y ≥ 0,

sup_E [G(E + y) − G(E)] = Erf(y/(2√2 σ)) ≤ y/(σ√(2π)),

where in the last step we have used that Erf(x) ≤ 2x/√π. Setting y = 2δ/τ completes the proof.
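The elementary bound on the error function invoked in the last step can be checked in one line:

```latex
\operatorname{Erf}(x) \;=\; \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,\mathrm{d}t
\;\le\; \frac{2}{\sqrt{\pi}}\int_0^x 1\,\mathrm{d}t \;=\; \frac{2x}{\sqrt{\pi}},
\qquad x \ge 0 .
```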
Proof of Theorem 3
We start with the assumption of perfect revivals, F(τ) = F(0), which implies that every energy E_j in the support of |Ψ⟩ is of the form E_j = (2πl + α(τ))/τ with l ∈ Z. Since the Hamiltonian is bounded, we know that the number of such levels is at most hNτ/2π, so that |Ψ⟩ has support on at most that many levels. Let us now define the following operator K_i for the eigenstate |E_i⟩:

K_i(H) = Π_{j≠i} (H − E_j)/(E_i − E_j).   (A15)

This is a polynomial of the Hamiltonian of degree at most hNτ/π, and is such that K_i(H)|Ψ⟩ = c_i|E_i⟩. Now fix a bipartition A ∪ Ā of the lattice. The boundary of a set A, denoted B(A), is the set of local terms h_x which are supported on both A and Ā. Assume the following regularity conditions for it, parametrized by an integer m.
Definition 2. Let m be an integer. The bipartition A ∪Ā is said to be m-regular if the following holds.
• The number of spins in A_m \ A and A \ A_{−m} is at most 10m|∂A| (here A_q denotes A fattened by |q| lattice sites for q ≥ 0, and A_{−q} denotes A with the sites within distance q of the boundary removed).
• Every local term h_x belongs to at most one boundary B(A_q).
Several natural partitions, such as rectangular, vertical, and circular ones, satisfy these conditions whenever |∂A| = Ω(m) (i.e., for sufficiently large regions). Given this definition, we now proceed to bound the Schmidt rank of polynomials of the Hamiltonian, using a result from [36] (referred to as Lemma 3 below). Following [54], we can view H as an effective one-dimensional Hamiltonian. It was argued in [36] that a degree-ℓ polynomial of H can be expanded as a linear combination of multinomials in the terms of the Hamiltonian, grouped with respect to the boundaries B(A_q) with −m ≤ q ≤ m, such that each multinomial has a Schmidt rank of at most N d^{bm} across some bipartition A_q ∪ Ā_q. Since the number of spins between A_q and A is at most 10m|∂A| (recall Definition 2), each such multinomial then has a Schmidt rank across A ∪ Ā of at most N d^{bm} d^{10m|∂A|}. This completes the proof.
On the other hand, assumption i) from the main text implies that there exists a product state |ψ_A⟩ ⊗ |ψ_Ā⟩ and an operator K_Ψ of Schmidt rank at most χ^{|∂A|} such that |Ψ⟩ = K_Ψ (|ψ_A⟩ ⊗ |ψ_Ā⟩). We now proceed to prove Theorem 3. Notice that Eq. (A15) states that |E_i⟩ can be obtained from |Ψ⟩ by applying a polynomial in H of degree at most hNτ/π. Applying Lemma 3 with ℓ = hNτ/π and choosing m = √(ℓ/|∂A|) yields, for every m-regular partition, a Schmidt rank for |E_i⟩ that is at most exponential in √(ℓ|∂A|) up to logarithmic factors. With this, a bound on the Rényi-0 entropy follows:

S_0(ρ_A^{(i)}) = O(√(N|∂A|)),

up to factors logarithmic in N (for τ, h = O(1)). This completes the proof.
Proof of Theorem 4
The proof follows an argument first made in [55,56], combined with the Berry-Esseen theorem (see Appendix A 2). Since the integrand of the LHS is positive, the time average can be upper bounded, up to a constant, by a Lorentzian-weighted average, which evaluates to Σ_{l,m} |c_l|²|c_m|² e^{−T|E_l−E_m|}. Splitting this sum according to whether |E_l − E_m| is smaller or larger than 1/T, and applying the Berry-Esseen theorem to the weight of the near-diagonal part, yields the claimed bound.

Proof of Theorem 5

In the following we write |Ψ(τ)⟩ = U_τ|Ψ⟩. First, using that |Ψ⟩ = |Ψ(0)⟩ is a product state, we find for any region A:

|⟨Ψ|Ψ(τ)⟩|² ≤ ⟨Ψ(τ)| (⊗_{x∈A} |ψ_x⟩⟨ψ_x| ⊗ 1_{A^c}) |Ψ(τ)⟩.

Viewing A_x = |ψ_x⟩⟨ψ_x| ⊗ 1_{Λ\{x}} as a local observable supported on site x ∈ A and A = ⊗_{x∈A} |ψ_x⟩⟨ψ_x| ⊗ 1_{A^c} as one supported on region A, we can then make use of the Heisenberg picture to get |⟨Ψ|Ψ(τ)⟩|² ≤ ⟨Ψ| A(τ) |Ψ⟩. We now fix the region A to consist of a sub-lattice of sites, all of which are a distance 2(l + 1) + r apart from each other, where r is the maximum diameter of the supports of the Hamiltonian terms; the distance l will be fixed later. We define B_x(l) to be an l-neighbourhood of x, and set X = ∪_{x∈A} B_x(l − 1). With this choice we have d(A, X^c) = l, and the Lieb-Robinson (LR) bounds allow us to approximate each A_x(t) by an operator V_x(t) that is only supported within B_x(l − 1), which implies that the V_x(t) commute for different x ∈ A. Using the LR bounds and that |Ψ(0)⟩ is a product state, we then find that the expectation value approximately factorizes,

⟨Ψ| A(τ) |Ψ⟩ ≈ Π_{x∈A} ⟨Ψ| A_x(τ) |Ψ⟩ = Π_{x∈A} Tr[ρ_x(τ) |ψ_x⟩⟨ψ_x|] = e^{−k(τ)|A|},

up to LR error terms that are exponentially small in l for fixed τ; here we used that ‖A(t)‖ = ‖A_x(t)‖ = 1 to collect the errors. We now choose l = (L^D k(τ))^{1/(1+D)}. For large enough L, the number of sites in A is of order (L/l)^D, and we find

|⟨Ψ|Ψ(τ)⟩|² ≤ exp(−C (k(τ) L^D)^{1/(1+D)})

for a constant C > 0, which leads to the claim of Theorem 5. What is left is to show that for any δ > 0 there exists a τ < δ such that k(τ) > 0. To do this, we now make use of translational invariance, so that k(τ) = k_x(τ) for any x ∈ Λ. Suppose contrarily that there exists a δ > 0 such that k(τ) = 0 for all τ < δ. This means that k(τ) is constant over an open interval. But since on any finite system k(τ) is an analytic function, it then has to be constant for all τ. This in turn implies that ρ_x(τ) = |ψ_x⟩⟨ψ_x| for all τ and all x, which implies that the initial state is an eigenstate. This finishes the proof of Theorem 5. We emphasize that we only used translational invariance of the initial state to argue that k(τ) > 0. It should be clear from the argument given above that it can be generalized to situations where, for example, the initial state is translationally invariant with a higher period, or is only a product state after neighboring spins are blocked together.
Appendix B: Comparing the bounds with previous results
To illustrate the tightness of our bounds, we compare our results with those of a recently found model with quantum scars and perfect revivals in Ref. [33]. The model is the spin-1 XY model on a hypercubic lattice with N = L^D particles and Hamiltonian

H = J Σ_{⟨i,j⟩} (S^x_i S^x_j + S^y_i S^y_j) + h Σ_i S^z_i + D Σ_i (S^z_i)²,

where the S^α_i are the spin-1 operators at site i. In Ref. [33] it was found that this Hamiltonian has N + 1 eigenstates |S_n⟩, n ∈ {0, ..., N}, which form a representation of SU(2) and have equally spaced energies E_n = h(2n − N) + ND. Moreover, there exists a particular product state |Ψ_0⟩ = ⊗_i |ψ_i⟩, the so-called "nematic Néel" state, with overlaps

|⟨S_n|Ψ_0⟩|² = C(N, n)/2^N,

where C(N, n) denotes the binomial coefficient, so that ⟨H⟩ = ND and σ = h√N, which thus fulfills assumptions i) and ii) from the main text. One can easily calculate that F(t) = |cos(ht)|^N, so that for any T = πl/h we have F(T) = F(0) = 1: the state exhibits perfect revivals with period π/h, while in between the revivals the fidelity is suppressed exponentially with the system size. Let us now do the change of variables n = bN, so that b is an O(1) number for the O(N) eigenstates in the bulk of the spectrum. For large N, using Stirling's approximation, one finds that the largest Schmidt coefficient of |S_n⟩ across a bipartition into two halves scales as λ_max = Θ(1/√N), which leads to

S_∞(ρ_A^{(n)}) = (1/2) log(N) + O(1),

and therefore S_∞ grows only logarithmically with N. This, together with the inequalities for Rényi entropies, S_α ≥ S_∞ for all α ≥ 0 and S_α ≤ (α/(α − 1)) S_∞ for α > 1, implies that all the Rényi entropies with α > 1 of the O(N) eigenstates also scale logarithmically, and that the Rényi entropies for 0 ≤ α ≤ 1 scale at least logarithmically. Similar conclusions can likely be reached with further models in the literature such as [32,34]. In contrast, Corollary 1 only guarantees the existence of O(√N/log^{2D}(N)) eigenstates in which the Rényi entropies with α > 1 scale at most logarithmically.
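A small numerical check of the Gaussian (de Moivre-Laplace) approximation to the binomial overlaps used above (a sketch; the comparison points b = 0.40, 0.45, 0.50 and N = 400 are arbitrary choices):

```python
import numpy as np
from scipy.special import comb

# Overlaps |<S_n|Psi_0>|^2 = C(N, n) / 2^N and their de Moivre-Laplace
# (Stirling) approximation for n = b*N in the bulk of the tower.
N = 400
for b in (0.40, 0.45, 0.50):
    n = int(b * N)
    exact = comb(N, n) / 2.0 ** N
    approx = np.sqrt(2 / (np.pi * N)) * np.exp(-2 * N * (b - 0.5) ** 2)
    print(b, exact, approx)
```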
In addition, in [33] it is shown that at least one eigenstate in the middle of the spectrum has von Neumann entanglement entropy scaling as $S_1 = O(\log N)$, which suggests that Theorem 3 is not tight.
Health IT and digital health: The future of health technology is diverse.
Physicians used to make informed decisions with a very limited set of tools but a growing amount of experience and knowledge that spans generations. With the birth of modern medicine in the 18th century, the practice of medicine gradually became dependent on the use of technology. It started with simpler methods, hollow wooden tubes (the first stethoscope) and X-rays, and evolved into cloud-based algorithms and virtual reality. By the end of the 20th century and the beginning of the 21st century, medicine had become a technological profession. Without technology, no physician can be up-to-date, make informed decisions, or be able to legally practice medicine. Technologies have been infused into the delivery of care too. When personal computers became widely available in the 1990s, e-health emerged [1]. When such computers could be connected to networks, telemedical services appeared [2]. The rise of social media networks made room for medicine 2.0 and health 2.0 [3], while the advent of mobile phones and later smartphones summoned mobile health [4]. But in the 2010s, the rate at which disruptive technologies appear is becoming overwhelming for both patients and their caregivers [5].
As patients started to gain access to information and technologies that before were only available in the so-called 'ivory tower of medicine,' patient empowerment was born. This made patients proactive, wishing for an equal-level partnership instead of a hierarchical dependence on their physicians. They want to take part in the decision-making process regarding their health and contribute data they measure at home.
This cultural transformation that changes the essence of the doctor-patient relationship and the basics of healthcare is called digital health. We define digital health as "the cultural transformation of how disruptive technologies that provide digital and objective data accessible to both caregivers and patients leads to an equal-level doctor-patient relationship with shared decision-making and the democratization of care." When discussing the future of technologies in healthcare, one must make a clear distinction between issues regarding IT (information technology) and digital health. The two areas are often intermingled, while their nature and the solutions they require differ on many scales. In a nutshell, IT issues impact physicians' everyday job the most but can be dealt with in the short term. Digital health has more impact on cultural changes and entails a long-term process (Figure 1).
To give some examples of IT issues: anti-virus software started running on a computer that monitored a patient who was undergoing heart surgery [6]. The monitoring equipment failed during the operation. An investigation by the Food and Drug Administration (FDA) found that anti-malware software was responsible for the failure of the equipment, as it was set to scan for viruses every hour, against the recommendation of the equipment maker. This problem can be instantly resolved by turning off the scanning function of the software.
In another example, many electronic medical records systems do not communicate with each other, so accessing the required information to make a medical decision can become challenging for physicians dealing with multi-institutionalized patients. Such issues can be resolved by developing computer programs that can communicate between the different electronic health records software packages and retrieve patient data for direct access by the physicians (see the sketch below). Lastly, during the WannaCry scandal, a global cyberattack infected 300,000 computers in 150 countries using hacking tools [7]. It also crippled the National Health Service (NHS) in the United Kingdom. UK hospitals were shut down and had to turn away non-emergency patients after ransomware ransacked its networks. Since that attack, hospitals doubled down on cybersecurity.
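To make the interoperability point above concrete, here is a minimal, hypothetical sketch of a bridge program that pulls a patient record from two EMR systems exposing a FHIR-style REST interface and merges them into a single physician-facing view. The endpoint URLs, resource path, and field layout are illustrative assumptions, not real systems or a specific vendor API:

import requests  # third-party HTTP client (pip install requests)

# Hypothetical FHIR-style base URLs of two EMR vendors (illustrative only).
EMR_ENDPOINTS = [
    "https://emr-a.example.org/fhir",
    "https://emr-b.example.org/fhir",
]

def fetch_patient(base_url: str, patient_id: str) -> dict:
    """Fetch a Patient resource; FHIR servers conventionally expose /Patient/{id}."""
    response = requests.get(f"{base_url}/Patient/{patient_id}", timeout=10)
    response.raise_for_status()
    return response.json()

def merged_view(patient_id: str) -> dict:
    """Combine the records from all configured systems into one summary."""
    view = {"id": patient_id, "sources": []}
    for base_url in EMR_ENDPOINTS:
        record = fetch_patient(base_url, patient_id)
        view["sources"].append({"system": base_url, "record": record})
    return view

if __name__ == "__main__":
    # Would print a merged record if the hypothetical endpoints existed.
    print(merged_view("12345"))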
Digital health issues are different in nature. Patients bring data they measure with sensors for their health or medical condition to the doctor-patient meeting and expect their caregivers to address technological questions in addition to medical questions. A medical robot can be a valuable asset to the team in a healthcare facility to deal with the labor burden during night shifts, when fewer people are working the floors. While the robot can facilitate the caregivers' job, it takes time and effort to get accustomed to a robot being a member of their team. Medical professionals use technologies daily as medical records are being digitized worldwide and smartphone apps are widespread. Since the dawn of digital health, medical professionals have gradually had to accommodate health sensors and internet-based services. Using digital health is a team effort, thus the era of lonely doctor heroes will end. The success of providing care depends on collaboration, empathy, and shared decision making. What is needed for implementation of care in the digital era is a newly defined cooperation between patients and their caregivers that allows room for new technologies. Nevertheless, a well-functioning patient-physician relationship will remain an essential part of healing. One seminal study revealed that the empathy skills of physicians can influence diabetic patients' objective clinical chemistry outcomes, the incidence of complications, and subjective well-being [8].
The distinction between IT and digital health and the implementation of digital health into practice require new frameworks. As digital health makes patients the point-of-care, a new status quo and new roles for both patients and caregivers are surfacing that heavily affect healthcare policies and shape the digital health framework. While constructing the digital health framework in terms of regulatory policies, several important aspects should be taken into account. Policy makers are expected to make every new technology available quickly, otherwise consumers may start using the technology without the proper regulations in place. The wearenotwaiting initiative is a perfect example of this kind of pressure [9]. As there was no single device on the market to monitor blood sugar and supply insulin automatically, creative patients invented a do-it-yourself version from existing technologies. A movement grew out of the initiative and campaigned for the market introduction of an 'artificial pancreas' for years. One of the leading figures of the movement, Dana Lewis, used the device for almost two years before the FDA finally approved it.
Although the artificial pancreas was ultimately a success, such (social) initiatives come with risks too. Medical technologies including surgical robots, pacemakers, and insulin pumps have been shown to be prone to hacking. Health sensors used by patients at home might not be accurate and lack an evidence-based foundation. Patients might find misleading information online that leads to erroneous self-diagnosis.
Policy makers should therefore find ways to promote the safe use of digital health technologies, regulate them as fast as possible, and keep patients' data safe.
Examples for the implementation of digital health

Four categorical examples are described below that illustrate how digital health can be implemented into a novel framework. These examples entail patient centricity, regulating disruptive technologies, preventing ethical challenges, and promoting the use of digital health [10].
In terms of putting patients at the center of healthcare, the creation of the "Patients Included" badge is an exemplary initiative. The badge helps to identify medical events where patients are either among the speakers or involved in the organizing committee. The concept was developed at an innovation hub called the REshape Center of the Radboud University Medical Center in 2010. Events such as Stanford Medicine X and Doctors 2.0 & You even launched e-patient ambassador programs and invited patients to speak. The British Medical Journal was awarded a special "Patients Included" certificate to acknowledge and encourage their involvement of patients in medical publishing.
With respect to regulating disruptive technologies, the FDA cleared AliveCor's smartphone ECG, which is available for both Apple and Android phones, to be used by patients. It was the first digital health sensor to receive clearance. AliveCor also received clearance for an algorithm that allows the smartphone ECG app to detect atrial fibrillation. In 2017, the FDA cleared AliveCor's Kardiaband ECG reader as the first medical device accessory for the Apple Watch. These developments pave the way for additional approvals for digital health sensors that will become available to patients.
Regulators must prevent ethical pitfalls when shaping the digital health framework. With the advent of do-it-yourself gene therapies that attempt to modify one's genomic material, the FDA acknowledged that gene therapy products and "do it yourself" kits intended for self-administration are available to the public, but stressed that the sale of these products is against the law. The FDA's public stance on this issue and regulatory follow-up on the one hand cautioned consumers about the inherent dangers of self-administered gene therapy and, on the other hand, ensured that these therapies have either been approved by the FDA or are being studied with appropriate regulatory oversight.
One of the hardest challenges in the framework shift is creating regulations that promote the use of digital health without pushing stakeholders to do something that is against their will. In a successful attempt to consolidate digital health with patient care, the NHS rolled out a program that encourages physicians to prescribe apps to patients with a chronic condition. A study found that digital health helped reduce the number of patient visits by 25%. By curating reliable medical apps, primary care physicians will be asked to recommend apps that are free or cheap in an attempt to give patients more power and reduce visits to doctors.
Conclusion and introducing the "Gary-rule"
To help medical professionals and policy makers make a clear distinction between health IT and digital health whenever they need to, a general rule of thumb, the "Gary-rule," might come in handy. If a technological issue comes up in a healthcare setting, such as the antivirus software becoming outdated or the electronic medical record system stopping working, and we have to call Gary, the IT guy, because he alone is capable of solving it with whatever methods he uses, it's an IT issue.
If Gary is not enough to solve the issue because more stakeholders of healthcare must get involved (e.g., letting patients bring the data from their trackers into the practice and merging that with electronic medical records, or allowing physicians to do remote consultations on a regular basis), it's digital health.
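As a toy illustration only, the Gary-rule can be phrased as a one-line classification function; the stakeholder labels below are assumptions for the example, not terminology from this article:

def gary_rule(stakeholders: set) -> str:
    """Classify a healthcare technology issue per the Gary-rule:
    if the IT department alone can resolve it, it is a health IT issue;
    if other stakeholders (patients, physicians, regulators) must be
    involved, it is a digital health issue."""
    return "health IT" if stakeholders <= {"IT staff"} else "digital health"

# Examples mirroring the cases above:
print(gary_rule({"IT staff"}))                            # outdated antivirus -> "health IT"
print(gary_rule({"IT staff", "patients", "physicians"}))  # tracker data in the EMR -> "digital health"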
It's hard to draw a definitive line between health IT and digital health issues (Table 1), although marking out the territory between them might help caregivers use new technologies to improve and expedite their job so that in the end they may spend more time with patients.

Table 1. Examples of health IT issues and digital health issues.

IT issue: Integration of different electronic medical record (EMR) and personal health record (PHR) software is lacking, rendering clinical decision-making difficult for physicians who rely on these records.
Digital health issue: Using self-measured data from patients in medical decision making requires a stronger relationship with patients and better knowledge about assessing the quality of such data.

IT issue: Malware programs can cripple operations with surgical robots or destabilize entire medical records systems. Some of the programs ask for a ransom to halt the spread of malicious programs.
Digital health issue: Integrating smartphone apps into the practice of medicine requires improved knowledge about digital literacy from physicians and discussing app-related issues with the patient.

IT issue: The lack of interoperability between the medical software that physicians use, online health services, and the health sensors that patients use makes it more difficult to input data relevant to the patient's health and disease management.
Digital health issue: Advanced analytics such as deep learning algorithms can assist physicians in decision-making only when physicians can use them properly and understand the technological limitations besides the evidence-based advantages.
Placing Nichiren in the "Big Picture": Some Ongoing Issues in Scholarship

Jacqueline I. Stone

This article places Nichiren within the context of three larger scholarly issues: definitions of the new Buddhist movements of the Kamakura period; the reception of the Tendai discourse of original enlightenment (hongaku) among the new Buddhist movements; and new attempts, emerging in the medieval period, to locate "Japan" in the cosmos and in history. It shows how Nichiren has been represented as either politically conservative or radical, marginal to the new Buddhism or its paradigmatic figure, depending upon which model of "Kamakura new Buddhism" is employed. It also shows how the question of Nichiren's appropriation of original enlightenment thought has been influenced by models of Kamakura Buddhism emphasizing the polarity between "old" and "new" institutions and suggests a different approach. Lastly, it surveys some aspects of Nichiren's thinking about "Japan" for the light they shed on larger, emergent medieval discourses of Japan's religiocosmic significance, an issue that cuts across the "old Buddhism"/"new Buddhism" divide.
Keywords: Nichiren, Tendai, original enlightenment, Kamakura Buddhism, medieval Japan, shinkoku

For this issue I was asked to write an overview of recent scholarship on Nichiren. A comprehensive overview would exceed the scope of one article. To provide some focus and also adumbrate the significance of Nichiren studies to the broader field of Japanese religions, I have chosen to consider Nichiren in the contexts of three larger areas of modern scholarly inquiry: "Kamakura new Buddhism," its relation to Tendai original enlightenment thought, and new religiocosmological concepts of "Japan" that emerged in the medieval period. In the case of the first two areas, Kamakura new Buddhism and original enlightenment thought, this article will address how some of the major interpretive frameworks have shaped our view of Nichiren, and how study of Nichiren has in turn affected larger scholarly pictures. Some assessment of current interpretations and alternative suggestions will also be offered. Medieval concepts of Japan, however, represent an area where the importance of Nichiren has yet to be fully recognized, and the final section of the article suggests the potential contribution to be made by an investigation of his thought in this regard.
Nichiren and Kamakura Buddhism
No era in Japanese Buddhist history has received more scholarly attention than the Kamakura period. This was the time when several of the Buddhist traditions most influential in Japan today, Zen, Pure Land, and Nichiren, had their institutional beginnings. Indeed, for many years, the study of Kamakura Buddhism was largely equivalent to the study of sectarian origins. The last two decades, however, have seen a dramatic methodological shift, which in turn has affected scholarly readings of Nichiren.
Beginning before the war, a major category in studies of Kamakura Buddhism was the "Kamakura new Buddhism" 鎌倉新仏教, that is, the movements beginning with Hōnen, Shinran, Eisai, Dōgen, and Nichiren (Ippen is also sometimes included). Especially in the postwar period, the older institutionalized Buddhism from which these founders had emerged was treated primarily as the backdrop for their religious innovations. "Old Buddhism", Tendai, Shingon, and the Nara schools, was regarded in the dominant postwar model of Kamakura Buddhism as a moribund remnant of the state Buddhism of the ritsuryō 律令 system: elitist, overly scholastic, and unable either to respond to the religious needs of the common people in the face of an alleged sense of crisis accompanying the arrival of the Final Dharma age (mappō 末法) or to accommodate the rapid social change brought about by the rise of warrior power. In contrast, the new Kamakura Buddhist movements were seen as egalitarian and lay oriented, offering easily accessible religious practices. They were often represented in a "Protestant" light, as having rejected worship of the myriad kami and the apotropaic rites of esoteric Buddhism. And, unlike the commitment of "old Buddhism" to serving the state with its rituals of nation protection, the new Buddhism was deemed to have been concerned chiefly with individual salvation. Postwar "new Buddhism"-centered models of Kamakura Buddhism were represented by such scholars as Ienaga Saburō and Inoue Mitsusada, for whom the exclusive Pure Land movement was paradigmatic. This model often characterized Nichiren as an in-between figure who had not fully negotiated the transition from "old" to "new." For Ienaga in particular, Nichiren's belief in the efficacy of ritual prayers (kitō 祈禱) and his concern with the Japanese kami placed him squarely in the lineage of "old Buddhism"; any "new Buddhist" elements in his teaching were due solely to Hōnen's influence (Ienaga 1947, pp. 96, 63). In particular, Ienaga saw his emphasis on "nation protection" (chingo kokka 鎮護国家) as indistinguishable from that of Nara and Heian times, something that "presents a large obstacle to understanding Nichiren's religion solely in terms of the so-called new Buddhism" (p. 68). Ienaga is an outstanding scholar, and his work on Kamakura Buddhism, read critically, is still useful today. Nonetheless, he was writing in the immediate postwar period, when conscientious scholars of Buddhism were just beginning to confront the troubling legacy of institutional Buddhism's recent support for militant Japanese imperialism. In that context, Nichiren's concern with the relationship between Buddhism and government could perhaps be seen only in a negative light.
A major challenge to "new Buddhism"-centered models of Kamakura Buddhism came about through the work of the late historian Kuroda Toshio, whose work is too famous to need much discussion here (see Dobbins 1996). Kuroda conclusively demonstrated that the dominant forms of medieval Japanese Buddhism were not the Kamakura new Buddhist movements, which did not attain significant institutional presence until the late medieval period, but rather the temple-shrine complexes of "old Buddhism." Kuroda found that, far from being an ossified remnant of Nara state Buddhism, these institutions had evolved distinctively medieval forms of organization, deriving their support, not from the imperial court, but from their own extensive private estates or shōen 荘園. As major landholders, together with the court and later the bushi 武士 (warrior) leadership, these temple-shrine complexes emerged as one of the powerful kenmon or ruling elites that formed the joint system of medieval governance (kenmon taisei 権門体制). As one of these powerful factions, the leading Buddhist temples joined across sectarian lines to form a distinctive ritual and ideological system that Kuroda called the kenmitsu taisei 顕密体制: a fusion of the exoteric doctrines of particular Buddhist schools with a shared body of esoteric ritual that provided both thaumaturgical support and religious legitimization for existing rule. Kenmitsu Buddhism, Kuroda argued, overwhelmingly represented orthodoxy (seitō 正統) for the period. Within this overarching system, the new Buddhist movements of the Kamakura period were mere marginal heterodoxies (itan 異端). Kuroda's work produced a revolution in scholarly approaches to medieval Japanese religion. He shifted attention away from the longstanding approaches of doctrinal and sectarian history to focus on the political, economic, ideological, and other previously neglected dimensions of the field. He transcended an earlier emphasis on individual sects by noting underlying structures that cut across traditions, such as the exo-esoteric fusion (kenmitsu) discourse of the mutual dependence of imperial law and Buddhism (ōbō buppō sōi ron 王法仏法相依論), or the honji-suijaku 本地垂迹 logic that identified kami as the local manifestations of buddhas and bodhisattvas, thus enabling the incorporation of spirit cults and kami worship within the kenmitsu system. The implications of his work have yet to be fully explored. Kuroda himself did not study Nichiren in any detail, but his understanding of the new Buddhist movements of the Kamakura period as small heterodox movements defining themselves over and against the dominant religiopolitical establishment opened a new perspective from which Nichiren might be reconsidered. Here we will briefly consider some aspects of the work of Sasaki Kaoru and Satō Hiroo, two scholars who have focused on Nichiren in this light.
NICHIREN AS "ANTIESTABLISHMENT"
Sasaki Kaoru has built upon Kuroda's work to clarify the nature of the dominant religious establishment against which the new movements, including Nichiren's, were reacting. He argues that Kuroda's category of kenmitsu taisei typifies the religious institutions of Kyoto aristocrats but is not adequate to describe the religious support structure of the Kamakura Bakufu, which developed its own religious policy. Sasaki accordingly introduces the concept of zenmitsu shugi 禅密主義, a religious ideology composed of Zen and esoteric elements stemming from the activities of those Zen monks and mikkyō ritual specialists who provided the Bakufu with religious support. The Buddhism bolstering the established system of rule (taisei Bukkyō 体制仏教) can thus be divided into that of the court aristocracy and that of the leading Kamakura bushi. Over and against this dominant "establishment Buddhism," Sasaki sets up two further categories: antiestablishment Buddhism (han-taisei Bukkyō 反体制仏教), or those who defined themselves over and against the dominant religious system, and "transestablishment Buddhism" (chō-taisei Bukkyō 超体制仏教), or those whose religion was defined independently of the tension between the other two (Sasaki 1988, 1997).
One of the most striking features of Sasaki's work on Nichiren is his analysis of how Nichiren gradually shifted, over the course of his life, from an "establishment" to an "antiestablishment" position. As others have noted, Nichiren in the early stages of his career was very much self-identified with "old Buddhism" or the kenmitsu Buddhism of Tendai (Kawazoe 1955-1956; Ikegami 1976; Satō 1978). His criticism of Hōnen's exclusive nenbutsu was launched from this kenmitsu standpoint. Nichiren saw himself as a successor to Myōe 明恵 and others of the established Buddhist schools who had written critiques of Hōnen's Senchakushū 選択集 (Shugo kokka ron 守護国家論, STN 1:90) and, contra Hōnen's exclusive nenbutsu doctrine, still spoke at this stage of the esoteric teachings and other Mahayana sutras, along with the Lotus Sutra, as worthy teachings to be upheld. He also criticized the exclusive nenbutsu movement for undermining the Tendai economic base. But Nichiren's early self-identification was with the Tendai of Mt. Hiei, and relations between the Bakufu and Mt. Hiei were anything but cordial. The Bakufu had a number of Tendai monks in its service; for example, of the seventeen successive chief superintendents (bettō 別当) of Kamakura's Tsurugaoka Hachiman Shrine 鶴岡八幡宮 who served between 1180 and 1266, ten were Tendai monks. All, however, belonged to the rival Tendai lineage of Onjō-ji 園城寺, which had enjoyed a longstanding relationship with the Minamoto house. Bakufu religious policy, says Sasaki (1997, pp. 405, 421-22), was informed by anti-Hiei sentiment, one reason, in his estimation, why Nichiren encountered persecution.
Sasaki divides Nichiren's thinking into three periods demarcated by his exile to Sado Island: pre-Sado (up until 1271), Sado (1271-1274), and post-Sado (1274-1282), or the years of his retirement on Mt. Minobu. He traces Nichiren's shift from an establishment to an antiestablishment perspective through an exhaustive reading of his works and collation of their internal evidence, focusing on Nichiren's view of the emperor and the Bakufu, his criticism of the esoteric teachings (mikkyō), and his understanding of the kami (Sasaki 1997, pp. 287-415).
In his early writings, Sasaki says, Nichiren saw the emperor or tennō 天皇 as Japan's actual ruler (jisshitsuteki kokushu 実質的国主) and the Bakufu as subordinate, an upstart in terms of pedigree and the ruler merely in name or form. While in exile on Sado, however, his thinking on this matter began to change, undergoing a radical transformation during the Minobu years. This becomes particularly evident in his understanding of the Jōkyū Uprising of 1221, in which the retired emperor Go-Toba sought to overthrow the Bakufu and restore full imperial authority but was defeated by the Kamakura forces under the command of Hōjō Yoshitoki. As a result, despite his imperial status, Go-Toba and two other retired emperors who had supported him were sent into exile. Nichiren interpreted this as due to Go-Toba's reliance on mikkyō ritual rather than the Lotus Sutra for his thaumaturgical support, as well as the spread of other, "inferior" teachings. This inversion of the proper hierarchy of "true" and "provisional" in the realm of Buddhism led to a corresponding upset in worldly rule: "Not only were estates dedicated to support the true sutra seized and stolen and converted into the domains of the provisional sutras of Shingon, but because all the people of Japan had embraced the evil doctrines of the Zen and nenbutsu sects, there occurred the most unprecedented overturning of high and low (gekokujō 下剋上) the world has ever seen. However, the lord of Sagami [Hōjō Yoshitoki] was innocent of slandering the Dharma, and in addition, was master of both literary and military arts. Thus Heaven permitted him to become ruler (kokushu 国主)" (Shimoyama goshōsoku 下山御消息, STN 2:1329). Nichiren also reconciled the fact of Go-Toba's defeat with the tradition that the bodhisattva Hachiman had vowed to make his dwelling on the heads of honest persons and in particular to protect one hundred honest sovereigns in succession. Nichiren interpreted "honest" both in the worldly sense, meaning free from falsehood, and in a religious sense, as according with the Lotus Sutra, in which Sakyamuni Buddha vowed that he would "honestly discard skillful means" and "preach only the unexcelled Way" (T. no. 262, 9.10a). "The retired emperor of Oki [Go-Toba] was in name the nation's ruler, but he was a liar and a wicked man." In contrast, Yoshitoki was "in name the vassal, but in his person a great sovereign and without falsehood"; hence Hachiman had abandoned Go-Toba, the eighty-second tennō, and transferred his protection to Yoshitoki (Kangyō Hachiman shō 諫暁八幡抄, STN 2:1848). A similar logic informs Nichiren's reading, during the Minobu period, of the defeat of the Taira in 1185. Like Go-Toba four decades later, the Taira had relied on mikkyō ritual in their prayers for victory; hence the emperor Antoku, drowned in the battle of Dan-no-ura, had been "attacked by the general Minamoto no Yoritomo and became food for the fish in the sea" (Shinkoku ō gosho 神国王御書, STN 1:881; see also pp. 884-85), while Yoritomo was able "not only to defeat the enemy but also to become the great general of the warriors of Japan, solely because of the power of the Lotus Sutra" (Nanjō-dono gohenji 南条殿御返事, STN 2:1175; Yoritomo's respect for the Lotus Sutra is historically attested). Nichiren also says that, just as the bodhisattva Hachiman had shifted his allegiance from Go-Toba to Yoshitoki, so had he also earlier transferred his protection from Antoku to Yoritomo (Shijō Kingo-gari onfumi 四条金吾許御文, STN 2:1824). Thus in Sasaki's view, during the Minobu period,
Nichiren's view of who represented Japan's legitimate ruler completed a 180° turn. The emperor, whom he had looked upon before the Sado exile as the actual ruler, he now relegated to the status of ruler in name only. Clearly this view saw legitimacy of rule as deriving, not from the imperial bloodline, but from readiness to protect the Lotus Sutra.
As Sasaki notes, Nichiren's view of the shift of authority from Go-Toba to Yoshitoki was inseparable from his criticism of the esoteric teachings. This criticism begins from about 1269 and develops during the Sado and post-Sado years. It was from the Sado exile on that Nichiren began to interpret Go-Toba's defeat as an example of the pernicious effects of relying on mikkyō ritual. This reflected not merely Nichiren's interpretation of past events but was also intimately connected to his view of present Bakufu policy. Even before the arrival of the letter from the Mongols demanding that Japan enter into a tributary relationship, the Bakufu had sponsored esoteric rites: the position of chief superintendent of the Tsurugaoka Hachiman Shrine was dominated by mikkyō ritual specialists from the imperial capital, chiefly from the lineages of Tō-ji and Onjō-ji. Patronage had also been extended to prominent mikkyō ritualists of the Saidai-ji precept lineage: Eison 叡尊 (1201-1290) and his disciple Ninshō 忍性 (1217-1303). Now, in the face of the Mongol threat, the Bakufu was all the more eager to sponsor such rituals for thaumaturgical protection. Nichiren, however, saw the Mongol threat itself as the result of slander of the true Dharma, to be averted only by wholehearted conversion to the Lotus Sutra; thus in his eyes, the Bakufu was in imminent danger of destroying itself in the same manner that Go-Toba had done decades earlier. In contrast to his early criticism of the exclusive nenbutsu, which was closely linked to the views of the Buddhist establishment, Nichiren's criticism of Zen and especially mikkyō was developed from the standpoint of his growing Lotus exclusivism and antiestablishment outlook that culminated during the Minobu years. According to Sasaki, in this period of Nichiren's life, all notions of worldly rule (ōbō) as a separate authority dropped from his worldview and only the authority of Buddhism (buppō) remained; Nichiren's vision was now that of a transcendent "world of the Lotus Sutra" (Hokekyō no sekai 法華経の世界) in which all legitimacy of rule was to be judged solely by the standard of whether or not the Lotus Sutra was upheld (Sasaki 1997, pp. 309-10).

While expressing admiration for Sasaki's research, Satō Hiroo has offered some correctives and clarifications. First, Satō finds that Nichiren's early writings distinguish between the sovereign (kokuō 国王), or head of the country, and the ruler (kokushu 国主), who carries out actual administration. Satō argues that Nichiren always identified the Hōjō with the kokushu, at least with regard to the Kantō provinces, though his earlier writings accord the tennō superior authority. Thus Sasaki's claim that Nichiren before Sado saw the tennō as the "actual ruler" may have to be reevaluated. Second, Satō cautions that Nichiren's references to Heaven allowing Yoshitoki to become ruler, or the transfer of Hachiman's protection from Go-Toba to Yoshitoki, do not mean, as some scholars have suggested, that Nichiren endorsed ideas of overthrowing the imperial dynasty under a new mandate of Heaven, or that he saw the imperial line as having been abrogated. The emperor remained head of the country; it was the actual authority of rule that Nichiren saw as having shifted to the Bakufu. Third, Satō argues that Nichiren never abandoned hope of finding some form of government support for his teaching. When he despaired of gaining endorsement from the Bakufu, he for a time entertained hopes of winning a hearing from the tennō; this, in Satō's view, is why his post-1278 writings show increased awareness of the emperor, and not, as some modern nationalists have argued, because he was a supporter of imperial rule (Satō 1998, pp. 253-304). While Satō does not address the issue, the Sandai hihō shō discussed by Sueki Fumihiko in this volume might, if it is genuine, be fruitfully considered in this light.
The emergence in Nichiren's thinking of a transcendent "world of the Lotus Sutra" can also be traced, Sasaki argues, through his changing views of the kami. An early letter (1264) to a female follower addresses her questions concerning menstrual taboos, which Nichiren dismissed as irrelevant from a Buddhist standpoint but accorded a limited local significance as something generally expected by the kami. "Japan is a land of the kami (shinkoku 神国)," he wrote, "and the way of this country is that, strange as it may be when they are manifestations (suijaku) of buddhas and bodhisattvas, [the kami] in many respects do not conform to the sutras and [Buddhist] treatises.... And yet we find men of wisdom who... forcefully insist that the kami are demonic and not to be revered, thus causing harm to their lay supporters" (Gassui gosho 月水御書, STN 1:292; see also Yampolsky 1996, p. 256). Sasaki sees this passage as reflecting uncritical acceptance of the Japanese kami and participating in the Buddhist establishment's criticism of exclusive nenbutsu practitioners for their refusal to worship them. On Sado, however, the Japanese kami, especially Tenshō Daijin (Amaterasu Ōmikami 天照大神) and Hachiman, undergo redefinition in his thought as protectors of the Lotus Sutra and its practitioners. In other words, Nichiren divorced them from their specific association with Japan and relocated them within the world of the Lotus Sutra as Buddhist tutelary deities who protect the true Dharma. It is in this rather minor role in the Buddhist hierarchy that Tenshō Daijin and Hachiman appear on Nichiren's mandala. (During the last years of the Pacific War, the Ministry of Education, prompted by the complaints of shrine priests, demanded that the mandala be revised. The war ended before the issue could be resolved; see Ishikawa 1975.) Sasaki finds that Nichiren's relativizing of the kami vis-a-vis the Lotus Sutra continued during the Minobu period and took various forms. A letter dated soon after his reclusion declares furiously that Brahma, Indra, the sun and moon deities, and the four deva kings are doomed to the Avici hell for failing to protect him and his mission as the votary of the Lotus Sutra (Shinkoku ō gosho, STN 1:893), while three years later he wrote that these same deities had commanded the Mongols to chastise Japan for its slander of the Lotus Sutra and that "Tenshō Daijin and the bodhisattva Hachiman are powerless to help" (Yorimoto chinjō 頼基陳状, STN 2:1359). When a fire destroyed the Tsurugaoka Hachiman shrine in 1280, Nichiren wrote that Hachiman, who had vowed to protect "honest persons," had razed his own shrine and ascended to the heavens because there were no more honest persons in Japan, only Dharma slanderers (Shijō Kingo-gari onfumi, STN 2:1823; see also Kangyō Hachiman shō, STN 2:1849). It is an error, Sasaki concludes, to label Nichiren's view of the kami a remnant of "old Buddhism," as Ienaga did. In his later thought, Nichiren came thoroughly to reject the honji-suijaku notions that bolstered the authority of establishment Buddhism and deemed the kami significant only insofar as they protect the "world of the Lotus Sutra." This was in effect a denial of the kami in their original status as the deities of Japan and thus consistent with the rejection of kami worship said to characterize the new Buddhism.
Another reading of Nichiren and Kamakura Buddhism to build upon the insights of Kuroda Toshio is that of Satō Hiroo (1987, 1998). Satō's larger project has been to investigate the differences in the underlying "logic" of both kenmitsu orthodoxy and the heterodox itan, as well as in their respective cosmological visions. "Old Buddhist" institutions of the medieval period, Satō finds, were supported by what he terms a "logic of harmony" (yūwa no ronri 融和の論理). According to this logic, all Buddhist teachings are true. The differences among various teachings and practices are necessitated by the varying capacities of practitioners, so that no one will slip through the net of the Buddha's salvific intent. This assertion that "all Buddhism is true" did not, of course, preclude asserting the supremacy of one's own tradition by arguing that it was intended for persons of the most highly developed faculties. In Satō's view, the "logic of harmony" was by no means a medieval equivalent of modern ideals of religious tolerance or pluralism but rather was enlisted for various forms of social control. It served to maintain a loose unity among rival Buddhist institutions as the kenmitsu system; one could assert the superiority of one's own school or lineage but could not deny that others had their own validity. Additionally, the buddhas, bodhisattvas, or kami enshrined as objects of worship in a particular temple or shrine were all seen as particular embodiments of universal truth. Thus the shōen attached to these institutions could be defined as "Buddha lands" and the taxes and labor of peasants employed on them, as "offerings to the Buddha"; similarly, peasant negligence or resistance in providing these services could be averted by the threat of divine punishment. Lastly, Satō claims, the recognition of distinctions of superior and inferior spiritual faculties inherent in the "logic of harmony" served by analogy to legitimize the existing social hierarchy.
In contrast, the new Buddhism, beginning with Hōnen and developed by Shinran and especially Nichiren, is characterized in Satō's view by a "logic of exclusive choice" (senchaku no ronri 選択の論理).
This logic holds one form of Buddhism alone to be valid and denies the soteriological efficacy of all others. For Satō, this logic goes well beyond simple commitment to a single form of practice; he notes that even within the framework of the old Buddhism supported by the "logic of harmony," one finds practitioners who relied, for example, solely on the nenbutsu, arguing that it was the only teaching suited to their particular capacity. What distinguishes the "logic of exclusive choice" is its categorical rejection of all other forms as soteriologically useless. Satō sees this logic as backed by the absolute authority of a single, personified transcendent Buddha, from whom all other authority was seen to derive. The "logic of exclusive choice" in effect denied not only all other forms of religious practice but also the entire "logic of harmony" and, implicitly, the system it legitimized. This new concept, Satō writes, "aimed at a completely different sort of society, in which the worldly law was subordinated to the Buddha Dharma and in which, under the Buddha who held sovereignty over the land, all people were placed equally, without regard for origins or status" (Satō 1998, p. 40).
We have noted how postwar models that saw a focus on individual salvation as characteristic of "Kamakura new Buddhism" regarded the exclusive Pure Land movement as central, and Nichiren, with his concern for the nation, as having not yet fully emerged from the framework of "old Buddhism." But when the new Buddhism is redefined, as in the more recent, Kuroda-inspired models, in terms of resistance to the religiopolitical establishment, it is Nichiren who inevitably emerges as its paradigmatic figure. In Hōnen's case, for example, the potential of the exclusive nenbutsu to function as a critique of the kenmitsu system remains undeveloped. He discouraged his followers from criticizing the worship of other buddhas or kami and did not put forth a clear argument about the relationship of Buddhism to worldly authority; thus religion in Hōnen's teaching remains apart from worldly affairs. Nichiren's teaching, on the other hand, requires the believer to engage in such criticism as an act of compassion, even at the risk of one's life. This was because he saw the "exclusive choice" of the Lotus Sutra as determining, not only one's personal salvation, but also the welfare of the country. He elaborated an entire concomitant discourse about fulfilling the mission of a bodhisattva by practicing shakubuku, the rebuking of attachment to provisional teachings, and eradicating one's past sins by encountering persecution as a result. Moreover, he was very clear about how Buddhism is related to worldly authority. In contrast to "old Buddhist" discourse of the mutual dependence of Buddhism and worldly rule, Nichiren separated the two and radically relativized the latter. In his eyes it was the ruler's duty to protect the true Dharma, and he ruled legitimately only so long as he fulfilled it (Satō 1978, pp. 22-23; see also Satō's article in this volume).
Earlier postwar scholars of Nichiren, such as Fujii Manabu, Tokoro Shigemoto, Takagi Yutaka, and Kawazoe Shōji, had already noted his critical attitude toward the establishment and his subordination of worldly authority to the Buddha Dharma. Sasaki and Satō have further built upon the work of these predecessors and additionally placed Nichiren in a larger interpretive framework of Kamakura Buddhism, particularly of the itan or marginal movements, that draws upon the insights of Kuroda Toshio. Despite their innovative approaches, both have been criticized for reproducing "new Buddhism"-centered views of Kamakura Buddhism, excessively polarized between the new movements, seen as egalitarian, progressive, and liberating, over and against an oppressive Buddhist establishment (Kuroda 1990, pp. 7-11; Sueki 1993; see also the response to Kuroda in Satō 1998, pp. 439-51). A discussion of the strengths and shortcomings of their models in illuminating Kamakura Buddhism as a whole would exceed the scope of this article. Here, however, we may note their very substantial contributions to our understanding of Nichiren. By drawing attention to the long-neglected ideological side of his teaching, they offer new insight into how he saw the relationship of Buddhism to political authority, showing conclusively that it was by no means a mere continuation of earlier notions of nation-protection or the mutual dependence of Buddhism and worldly rule; rather, drawing on elements in these earlier systems, Nichiren constructed a different legitimizing approach that established the Lotus Sutra as the sole source of authority and was in fact highly critical of the existing system. Needless to say, they have also driven some very large nails into the coffin of earlier images of Nichiren as a fervent supporter of the emperor who valued nation above all.
THE LIMITS OF RESISTANCE
Nonetheless, if it is inappropriate to see Nichiren as an imperial supporter and ardent nationalist, it is also possible to go to extremes in characterizing him as a figure of resistance. This aspect of his teaching emerges most dramatically when Kamakura Buddhism is defined in terms of a polarization between established institutions and new heterodox movements. Another approach might be, without losing sight of Nichiren's "antiestablishment" side pointed out by Sasaki and Satō, to see how he was at the same time embedded in the values and conceptual structures of the medieval period that were shared across the "old Buddhism"/"new Buddhism" divide.

In what sense was Nichiren's Buddhism "antiestablishment"? Fundamentally, this has to do with how he understood the locus of authority. If all authority emanates from the Lotus Sutra, then those who reject the sutra may wield illegitimate power but by definition can have no authority whatever. Thus in cases where government contravenes the Lotus Sutra, there is no doubt but that Nichiren's Lotus exclusivism, his "logic of exclusive choice," established a basis for moral resistance. This was one of Nichiren's most significant legacies. While often compromised by his later tradition in the interests of securing the institutional foundations of the Nichiren Hokkeshū, it was periodically revived by individuals and factions within the tradition in acts of exceptional courage, even martyrdom, in resistance to worldly rule (Stone 1994).
Nichiren's investing of ultimate authority in the Lotus Sutra works to undercut or even invert all hierarchies constructed on other bases. Buddhism is not in the ruler's service; rather, the ruler is obligated to protect the Buddha Dharma. The Final Dharma age, widely understood as frightful and degenerate, is redefined as the most auspicious moment to be alive, because it is the very time when the Lotus Sutra is destined to spread. Similarly, the lowly, even the polluted, who embrace the Lotus Sutra are raised above those of lofty status who do not: "Rather than be great rulers during the two thousand years of the True and Semblance Dharma ages, those concerned for their salvation should rather be common people now in the Final Dharma age.... It is better to be a leper who chants Namu-myoho-renge-kyo than to be chief abbot of the Tendai school" (Senji shō 撰時抄, STN 2:1009). A similar logic holds true for gender hierarchy: "A woman who embraces this [Lotus] Sutra not only excels all other women but also surpasses all men" (Shijō Kingo-dono nyōbō gohenji 四条金吾殿女房御返事, STN 1:857), and for that of sacred and profane places: "A hundred years' practice in [the Pure Land of] Utmost Bliss cannot equal the merit of a single day's practice in this defiled world" (Hōon shō 報恩抄, STN 2:1249). But this sort of hierarchy inversion is less a critique of authority per se than it is a regrounding of it in the Lotus Sutra and, hence, a reversal of the position, from margin to center, that Nichiren and his followers occupy within the existing order. As he wrote late in life, envisioning a time when his teachings would be widely accepted: "Of my disciples, the monks will be teachers to the emperor and retired emperors, while the laymen will be ranged among the ministers of the left and right" (Shonin gohenji 諸人御返事, STN 2:1479). The "logic of exclusive choice" in Nichiren's teaching worked as a critique of the system and was "antiestablishment" because it was launched from the margins of structures of religious and political power. There is little in Nichiren's teaching that would make it critical of authority as such.

This point becomes clearer in examining Nichiren's teachings regarding social relationships. Here, too, authority is seen to derive from the Lotus Sutra, and in cases where a social superior opposes the believer's faith, resistance is mandated. Where the superior is committing slander of the Lotus Sutra, then as stated in the Classic of Filial Piety, "A son must admonish his father and a vassal must reprove his lord" (cited in Yorimoto chinjō, STN 2:1355). Similarly, between husband and wife, "No matter what sort of man you may marry, if he is an enemy of the Lotus Sutra, you must not follow him" (Oto gozen goshōsoku 乙御前御消息, STN 2:1100). But where the individual's faith in the Lotus Sutra was not contested, Nichiren tended to uphold the values of loyalty and filial piety shared by the Kantō bushi society to which most of his followers belonged, even to legitimate such social relationships from a religious standpoint. As the late historian Takagi Yutaka notes: "Nichiren's faith in the Lotus Sutra has the two aspects of, in effect, rejecting the existing order and of positively affirming it" (1970, p. 234; see also Takagi 1965, pp. 221-53). Further studies might examine, not only Nichiren's subversive side, but this issue of how the "existing order" is reconstituted in his thought on the basis of the sole authority of the Lotus Sutra.

This point is related to arguments put forth by the movement known as "Critical Buddhism" (hihan Bukkyō 批判仏教). Hakamaya Noriaki, its leading representative, has posited a fundamental opposition between "topical Buddhism," which uncritically subsumes all positions within an ineffable, universal ground ("topos"), thus effectively swallowing its opposition without confronting it, and "critical Buddhism," the reasoned choice of truth and rejection of falsehood, which is what he believes Buddhism should be (Hubbard and Swanson 1997). Readers familiar with Critical Buddhism will note a structural similarity between Hakamaya's categories of "topical Buddhism" and "critical Buddhism," and Satō's "logic of harmony" and "logic of exclusive choice," respectively. However, where Satō is concerned with how these two logics operated historically in the specific context of Japan's medieval period, Hakamaya's argument is universalizing and normative: the all-inclusive "topical" stance is by definition oppressive, while the critical stance that chooses one form and rejects all others is by definition liberative. Referring to kenmitsu taisei theory, Hakamaya (1998, p. 12) has even suggested that, from a normative standpoint, its terms should be reversed: it is the new movements, based upon exclusive choice, that represented orthodoxy (seitō), that is, what Buddhism should be, and the kenmitsu establishment that was heretical (itan). It would seem, however, that whether either of the two modes is oppressive or antiauthoritarian would depend on the specific social context. Nichiren's "logic of exclusive choice" was critical of the system because he and his followers were outside the existing structures of religious and political power. Had his Buddhism ever dominated the religious establishment as he had hoped, one imagines that it could have become quite authoritarian in its own right.
Nichiren and Original Enlightenment Thought
One of the major discourses of medieval Japanese Buddhism involved the concept of original enlightenment (hongaku 本覚). While the term "original enlightenment thought" is today used very loosely to indicate any sort of innate Buddha-nature concept, in the medieval period hongaku had a more specific meaning as a particular Tendai reading of the Lotus Sutra, especially of its latter fourteen chapters. The medieval Tendai tradition of "orally transmitted doctrines" (kuden hōmon 口伝法門) in which it was developed may be thought of as an attempt to reinterpret received Tendai/Lotus doctrine through the lens of an esoteric sensibility, although this tradition was defined as "exoteric," distinct from mikkyō, by the lineages that transmitted it.
From the perspective of this doctrine, all things, just as they are, manifest the true aspect of reality and are the Buddha of primordial enlightenment. Seen in their true light, all forms of daily conduct, even one's delusive thoughts, are, without transformation, the expressions of original enlightenment. The purpose of religious practice is not to achieve a distant buddhahood in the future, but to realize that one is Buddha from the outset. This way of thinking soon spread beyond the confines of Tendai doctrinal formulations and influenced medieval aesthetics, especially poetics, as well as nascent theories about the kami. Scholars today are sharply divided in their evaluation of this discourse. Shimaji Daitō (1875-1927), who first popularized "original enlightenment thought" as a scholarly category in the early decades of the twentieth century, saw it as "the climax of Buddhism as philosophy." Others have assimilated it to projects of cultural essentialism, purporting to find in it the expression of a timeless Japanese spirit of harmony with the natural world or the key to healing ecological problems said to derive from a perceived rift between humans and nature born of dualistic Western thought. Still others see it as a pernicious authoritarian ideology that legitimates discrimination and hierarchy by sacralizing the status quo. However, no aspect of original enlightenment thought has been more hotly debated than the nature of its influence on Kamakura new Buddhism. Nichiren, with his close ties to the Tendai tradition, has been absolutely central to this debate.
Since around the 1930s, the tendency within Nichiren sectarian scholarship has been to see Nichiren as a "restorer" of orthodox Tendai who rejected the mikkyō-influenced Tendai of his day, including its doctrine of original enlightenment. Representative, or rather formative, of this trend was Asai Yōrin (1883-1942), who pioneered critical textual studies of the Nichiren canon. Asai excoriated the many scholars of his own tradition, past and present, who interpreted Nichiren's thought from a hongaku perspective. "If it is as such scholars say," he wrote, "then [Nichiren] Shōnin's doctrinal studies... either lapped up the dregs of Tendai esotericism or sank to an imitation of medieval Tendai, and in either case possess neither originality nor purity. Can this indeed be the true pride of Nichiren doctrinal studies?" (Asai 1945, p. 285). In Asai's view, Nichiren was indebted to no one except the early Tiantai/Tendai tradition represented by Zhiyi 智顗 (538-597), Zhanran 湛然 (711-782), and Saichō 最澄 (767-822).
Asai was responding to Shimaji Daitō, Uesugi Bunshū, and other pioneering scholars of medieval Tendai who argued, often on the basis of texts now regarded as problematic, that Nichiren's thought represented an offshoot of the medieval Tendai kuden (oral transmission) tradition that was grounded in a hongaku perspective. Asai, however, countered that Nichiren-attributed texts reflecting hongaku thought were not Nichiren's work at all but the forgeries of later disciples who had fallen under the sway of this influential discourse. Indeed the controversy over problematic writings in the Nichiren canon (analyzed by Sueki Fumihiko in his contribution to this volume) is by no means a clear-cut, "scientific" matter of purely textual issues but is inextricably intertwined with debate over Nichiren's relation to Tendai hongaku thought, a debate informed both by sectarian agendas of recovering a "pure" Nichiren doctrine and by larger scholarly readings of the opposition between "old" and "new" Kamakura Buddhism (Stone 1990). Asai Yōrin was to my knowledge the first scholar ever to characterize a founder of one of the new Kamakura Buddhist movements, Nichiren in this case, as rejecting Tendai original enlightenment thought, a move which has by now become academic orthodoxy. In this reading, hongaku thought is represented as an uncritical affirmation of reality that, in regarding all phenomena as expressions of original enlightenment, endorses things just as they are. For Asai and the succeeding generation of scholars, in arguing the nonduality of good and evil and legitimating all phenomena, even human delusion, as original enlightenment, medieval hongaku thought exerted an antinomian influence, denying the necessity of religious discipline, undermining the moral force of the precepts, and contributing to clerical degeneracy (see Asai 1945, pp. 80, 221; Shigyō 1954, p. 45). Since Kuroda Toshio, however, the alleged "world-affirming" tendency of medieval hongaku thought has been more commonly interpreted as an authoritarian discourse that legitimated social hierarchy and the entrenched system of rule (see Kuroda 1975, pp. 443-45, 487-88; Satō 1987, p. 57). This position is also maintained by Critical Buddhists (Hakamaya 1989). In either case, Nichiren, along with Hōnen, Shinran, and Dōgen, has been interpreted for some decades now as a teacher who either rejected, or at least radically revised, Tendai hongaku doctrine.
Asai Yōrin's standpoint has been refined by a number of scholars, of whom two can be mentioned here. Tamura Yoshirō (1921-1989) devoted much of his scholarly career to investigating the relationship of the ideas of the Kamakura new Buddhist founders (Hōnen, Shinran, Dōgen, and Nichiren) to original enlightenment thought; in this way, he uncovered vital continuities among the teachings of the various strands of "new Buddhism," as well as a common intellectual foundation in the medieval Tendai tradition from which they had emerged (Tamura 1965). Tamura recognized that Nichiren in his youth had been deeply influenced by the "absolute monism" of Tendai hongaku thought, in which all things just as they are are viewed as expressions of original enlightenment. For example, Nichiren's earliest extant essay, written at age twenty-one, states, "When we achieve the awakening of the Lotus Sutra, then our own person, composed of body and mind and subject to birth and death, is precisely unborn and undying. And the same is true of the land. The oxen, horses, and others of the six kinds of domestic animals in this land are all buddhas, and the grasses and trees, sun and moon are all the holy sangha" (Kaitai sokushin jōbutsu gi 戒体即身成仏義, STN 1:14). It was from this nondual viewpoint, Tamura notes, that Nichiren first criticized the dualistic, otherworldly Pure Land thought of Hōnen and his exclusive nenbutsu. Later, however, as Nichiren came into conflict with government authorities and experienced exile and persecution, in Tamura's view, he "descended" from the absolute nonduality of hongaku thought to focus increasingly on the relative categories of history and human capacity and the need for world transformation. Tamura writes, "After age forty, Nichiren came to part with the Tendai original enlightenment doctrine's absolute monism and affirmation of reality" (1974, p. 142). Thus, works from this later period of Nichiren's life that assume a hongaku perspective were in Tamura's view most likely to be apocryphal.
By turning his attention to the process of Nichiren's intellectual development, an area of inquiry that he helped to pioneer, Tamura avoided Asai's problematic assumption of an originally "pure" Nichiren doctrine. Nonetheless, his theory about Nichiren's later retreat from nondual original enlightenment thought presents two major problems. First, it is based on a circular argument. Almost twenty of the texts in the Nichiren collection from the later period of his life that exhibit hongaku ideas are problematic, in the sense that they do not exist in holograph, and their authenticity as Nichiren's writings can be neither established nor refuted. In other words, they belong to the category that Sueki Fumihiko terms "Nichiren B" (see his essay in this volume). Tamura treats them as apocryphal because he sees Nichiren's mature thought as moving away from a hongaku perspective, but his very argument for this retreat rests on the claim that most of Nichiren's later works dealing with hongaku thought are apocryphal.
A second problem is that a few works of unimpeachable authenticity from the Sado period and later, including two identified by Nichiren himself as his most important writings, contain passages that are very close to hongaku ideas. The Kaimoku shō 開目抄 (1272), for example, asserts that the nine realms of ordinary beings and the Buddha realm are originally inherent from the outset, and the Kanjin honzon shō 観心本尊抄 (1273) identifies the present sahā world with the constantly abiding pure land. This compels Tamura to acknowledge that "even in the latter part of his life, Nichiren was at bottom sustained by this [doctrine]" (1965, p. 623). However, he goes on to say that, on close examination, such writings, "while maintaining nondual original enlightenment thought as their basis, nevertheless emerge from it" (p. 625). Tamura saw Nichiren, along with Shinran and Dōgen, as achieving a synthesis between the "absolute nonduality" of Tendai original enlightenment thought and a "relative duality," most strongly asserted by Hōnen, between the Buddha and deluded beings, this world and the Pure Land. While maintaining the absolute nonduality of hongaku thought as his ontological basis, on a soteriological level, Tamura says, Nichiren "emerged" from it to confront the relative dualities of the phenomenal world.
A somewhat different argument for disjuncture between Nichiren and Tendai original enlightenment thought has more recently been advanced by Asai Endō. For Asai, "The process by which Nichiren gradually distanced himself from shingon mikkyō also entailed a gradual widening of the gap between him and medieval Tendai in terms of both thought and faith" (1991, p. 286). Asai, too, acknowledges both the influence of nondual original enlightenment doctrine on Nichiren's early thought, as well as structural similarities between key passages of Nichiren's later writings and hongaku ideas. But in his view there is no clear thread in Nichiren's intellectual development suggesting that hongaku ideas formed the basis of his thought, something particularly evident with regard to the daimoku, which is absolutely central to Nichiren's teaching. Asai notes that the mainstream of Buddhist practice in premodern Japan reflects a gradual shift away from complex, introspective meditation toward concrete ritual performance and simple, symbolic acts; hongaku discourse helped legitimate this process by making it possible to identify even small, everyday actions as the expressions of original enlightenment. Had Tendai hongaku thought indeed been the foundation of Nichiren's teaching, Asai argues, it would have been logical for him to argue from the outset that buddhahood is manifested in the simple act of chanting the daimoku. However, Nichiren did not initially argue the potency of the daimoku in this way. His early claims for the blessings of the daimoku are far more modest; he presents it simply as a practice for ignorant persons of the Final Dharma age that will save them from karmic rebirth in the lower realms of transmigration. "Those who take faith even slightly in the Lotus Sutra, as long as they do not slander the Dharma in the least, will not be drawn down by other evil deeds into the evil paths.... Those who believe in the Lotus Sutra, even without understanding, will not fall into the three evil paths. But escaping the six paths may be impossible for one without some degree of awakening" (Shō Hokke daimoku shō 唱法華題目鈔, STN 1:184, 188), or "One who chants Namu-myōhō-renge-kyō, even without understanding, will escape the evil paths" (Hokke daimoku shō 法華題目鈔, STN 1:393). Only after being exiled to Sado, from the time of the Kanjin honzon shō, does Nichiren speak of the daimoku as enabling the realization of buddhahood in this very body. "Until Nichiren could confirm it in terms of his own experience, he would not give voice to it.... I also believe this was because he departed from the theoretical Buddhism of medieval Tendai, which endlessly pursued the original enlightenment of living beings as an absolute perspective, and instead realized that Buddhism must be based on the reality of evil persons of inferior faculties in the Final Dharma age, making this the starting point of his religion" (Asai 1991, p. 292; see also 1974). From this standpoint, Asai also differentiates between medieval Tendai hongaku thought and passages in Nichiren's later writings that seem to resemble it. For example, he says, the Kanjin honzon shō's identification of this world with the "constantly abiding pure land" is a statement made from the Buddha's perspective, drawing on the passage in the "Fathoming the Lifespan" chapter of the Lotus Sutra, "I have always been here in this sahā world, preaching the Dharma, and teaching and converting" (T. no. 262, 9.42b),
and its interpretations found in the works of Zhiyi and Zhanran; it is not the absolutizing of the phenomenal world just as it is, as found in Tendai hongaku thought. In Nichiren's view, while this world may in principle be the Buddha's original land, in reality it is filled with strife and disaster; the Buddha land had to be actualized through the practice of shakubuku and the spread of faith in the Lotus Sutra, even at the cost of one's life.
Few scholars are as capable as Asai of illuminating and analyzing the connections between early Tiantai/Tendai and Nichiren's thought, and he reminds us that Nichiren was drawing on a received tradition of textual commentary. Yet from a historical perspective it is hard to imagine that Nichiren's indebtedness to Tendai derived purely from its earlier forms and that he remained unaffected by contemporaneous Tendai developments. Indeed, this argument bears a strong family resemblance to those sectarian readings of Nichiren that represent him as independent of mikkyō influence, criticized by Lucia Dolce in her contribution to this volume.
As the above summary indicates, both Tamura Yoshirō and Asai Endō participate in a larger academic discourse that regards Tendai hongaku thought as "theoretical" or "abstract," affirming the enlightenment of phenomena just as they are, and holds Nichiren, along with the other Kamakura new Buddhist founders, to have "broken through abstraction" (Asai 1974), reasserting the importance of practice and engaging the real sufferings and contradictions of this world. Such readings both mirror and reinforce "polarity models" of Kamakura-period religion that valorize the new Buddhism as liberative and reformist, over and against a corrupt and oppressive Buddhism of the establishment. In fact, this "theory versus practice" distinction between medieval Tendai and the new Buddhism may be more an artifact of the method of comparison than a description of medieval realities. "Original enlightenment thought" is a discourse drawn chiefly from doctrinal texts, often of uncertain date and authorship and whose contexts are virtually unknown, while the thought of Nichiren and other "new Buddhist" founders comes to us embedded in detailed life stories reconstructible from personal writings and the context of religious communities whose circumstances are comparatively well understood. To compare the two on the same plane and conclude that one is mere theory and the other a concrete engagement with the world seems problematic, to say the least. Recent research suggests that medieval Tendai monks were very much concerned with practice (Groner 1995; Habito 1995; Stone 1995 and 1999); thus the relationship between Nichiren and Tendai hongaku thought needs to be approached in a different way.
Moreover, Nichiren's thought could not have emerged from, nor reacted against, a reified "original enlightenment thought," for the discursive field indicated by that term was extremely fluid, interacting with other elements and undergoing development throughout the medieval period. It is more fruitful to consider both Nichiren's thought and the medieval Tendai kuden hōmon tradition as simultaneously engaged in working out new concepts about religion that were distinctive of the medieval period, appropriating them on their respective sides to different institutional and social contexts and to different modes of practice. Such a perspective is not to deny the tensions (social, political, and ideological) between established Buddhist institutions and Nichiren's marginal new movement, but to acknowledge both as inhabiting the same historical moment and sharing in developments that cut across the "old Buddhism"/"new Buddhism" divide. From this perspective, let us look at a few aspects shared between Nichiren's thought and the Tendai of his day, as well as some noteworthy disjunctures.
Nichiren shared with traditional Tiantai/Tendai thought the position that the Lotus Sutra surpasses all others in its promise of universal buddhahood. Those who were denied this possibility in other Mahayana sutras (women, evil men, and persons of the two vehicles) are in this sutra all guaranteed the attainment of supreme enlightenment. Nichiren also shared with his Tendai contemporaries a particular respect for the origin teaching, or latter fourteen chapters of the sutra.
Zhiyi, the Tiantai founder, had divided the sutra for exegetical purposes into the "trace teaching" (shakumon 迹門) and the "origin teaching" (honmon 本門), their chief difference being in their respective views of the Buddha. The Buddha of the trace teaching is the historical Śākyamuni who achieved supreme enlightenment under the bodhi tree, while in the origin teaching, particularly its key chapter, "Fathoming the Lifespan of the Tathāgata," Śākyamuni is revealed to have first realized buddhahood an unfathomable, staggering number of kalpas ago, measurable only by analogy to the innumerable particles yielded by reducing to dust countless billions of world systems (indicated in East Asian exegesis by the term gohyaku jindengō 五百塵点劫). "Ever since then," Śākyamuni says, "I have always been here in this sahā world, preaching the Dharma, and teaching and converting" (T. no. 262, 9.42b). In Japan, the need to reconcile traditional Tiantai/Lotus teachings with esoteric Buddhism resulted in increased attention to the sutra's honmon section. Rather than the historical Buddha of the shakumon section, the Śākyamuni who attained buddhahood in the inconceivably remote past and who ever since then has been present in this world was more readily identifiable with Mahāvairocana or Dainichi 大日, the cosmic Buddha of the esoteric teachings, who is constantly manifest in all phenomena. This led to the development in Japan of a distinctive "honmon thought" as a corollary of Taimitsu 台密, the Tendai/esoteric synthesis elaborated by Ennin 円仁 (794-864), Enchin 円珍 (814-891), and Annen 安然 (841-?) (see Asai 1975). Thus the two divisions of the Lotus Sutra came to be ranked hierarchically, the latter being elevated above the former. By Nichiren's time, the origin teaching was also being appropriated by the "exoteric" branch of Tendai scholarship as the unique locus of the original enlightenment doctrine. The transmission texts of both the Eshin 恵心 and Danna 檀那 schools, the two major doctrinal lineages of medieval Tendai, give varying, often extremely complex, discussions of the distinction between trace and origin teachings (Shimaji 1976, pp. 497-500; Hazama 1974, pp. 196-201).
Most agree, however, in reading the trace teaching as representing the perspective of "acquired enlightenment" (shikaku 始覚) and the origin teaching as that of original enlightenment. In other words, the first fourteen chapters of the Lotus Sutra are seen as representing a conventional perspective in which the practitioner cultivates practice, accumulates merit, extirpates delusion, and eventually reaches enlightenment as the culmination of a linear process, "proceeding from cause (practice) to effect (enlightenment)" (jūin shika 従因至果). To enter the realm of the origin teaching is to dramatically invert this perspective, "proceeding from effect to cause" (jūka kōin 従果向因). It is to shift from linear time, in which practice is first cultivated and enlightenment later achieved, to mandalic time, in which practice and enlightenment are simultaneous. Nichiren makes a similar assertion: When one arrives at the origin teaching, because [the view that the Buddha] first attained enlightenment [in this lifetime] is demolished, the fruits of the four teachings are demolished. The fruits of the four teachings being demolished, their causes are also demolished. The causes and effects of the ten realms of the pre-Lotus Sutra and trace teachings being demolished, the cause and effect of the ten realms of the origin teaching are revealed. This is precisely the doctrine of original cause and original effect. The nine realms are inherent in the beginningless Buddha realm, and the Buddha realm inheres in the beginningless nine realms. This represents the true mutual inclusion of the ten realms, the hundred realms and thousand suchnesses, and the three thousand realms in one thought-moment.
(Kaimoku shō, STN 1:552)4 Here the "four teachings" indicate those other than the Lotus Sutra.
Their "effects" refer to the attainment of buddhahood as represented in those teachings, and their "c a u s e s , " to the corresponding practices for attaining buddhahood.Applied to the ten realms, "cause" indi-ェ This passage and the next quoted passage are am ong those form ing the basis for Tamura's conclusion that Nichiren was "at bottom sustained" by original enlightenment thought.Asai Endo, however, sees them as informed primarily by earlier Tiantai ideas.cates all beings of the first nine realms, from hell dwellers to bodhi sattvas, who have yet to realize supreme buddhahood, and "e f fe c t, , ,to the Buddha realm."Demolishing" the causes and effects of the pre-Lotus Sutra and trace teachings means to demolish linear views of practice and attainment, opening a perspective in which cause (nine realms) and effect (buddhahood) are present simultaneously.We can also see this in Nichiren's description of the Buddha's pure land: Now the saha world of the original time (honji 本 日 寺 )[of the Buddha's enlightenment] is the constantly abiding pure land, liberated from the three disasters and beyond the [cycle of the] four kalpas.Its Buddha has not already entered nirvana in the past, nor is he yet to be born in the future.And his dis ciples are of the same essence.This [reality] is [precisely]… the three thousand realms of one's mind." (Kanjin horizon shd, STN 1..712)5 In medieval Tendai, the image used for this mandalic reality is the assembly of the Lotus Sutra itself, present in the open space above Vul ture Peak where the two buddhas, Sakyamuni and Prabhutaratna, sit side by side in the floating jeweled stupa.This assembly was envisioned, not as a past event, but as a constantly abiding reality, as expressed in a phrase recurring in Tendai kuden texts: "Ih e assembly on sacred [Vulture] Peak is still numinously present and has not yet dispersed."Some medieval Tendai initiation rituals reenact the "transmission of the jeweled s tu p a.A n example is the third, final, and most secret part of the kai kanjo (戒雀頂) ,the precept initiation conducted within the medieval Tenaai precept lineage based at Kurodani on Mt.Hiei.Koen 興 円 ( 1263-1317), who compiled the earliest description of the ritual, explains that this initiation does not have the meaning of a sequential transmission: master and disciple share the same seat, like Sakyamuni and Prabhutaratna in the stupa, to show the simultaneity of cause and effect, and the mythic time when the Lotus Sutra was expounded is retrieved in the present m om ent (Enkat jurokucho 円戒十六巾占,Tendai Shuten Hensanjo 1989,dd. 88-91).Lotus assembly imagery for the realm of "original cause and original effect" is linked to Taimitsu ritual, such as the hokkehd 法華法 discussed in Dolce's article in this volume, and is also depicted on Nichiren's calligraphic mandala: The 'Jeweled StQpa" chapter states: "All in that great assembly were lifted and present in open space."All the buddhas, 5 Considerable controversy has occurred within the Nichiren tradition over whether the "three thousand realms of one's m in d " (koshin sanzen 己七、 三 千 ) in this passage refers to the Buddha's mind, the m ind of the ordinary worldling, or the m ind of one who embraces the Lotus Sutra (see M o c h iz u k i 1958, p. 115).
The "Jeweled Stupa" chapter states: "All in that great assembly were lifted and present in open space." All the buddhas, bodhisattvas, and great saints, and in general all the beings of the two worlds [of desire and form] and the eight kinds of [non-human] beings... dwell in this gohonzon, without a single exception. Illuminated by the light of the five characters of the Wonderful Dharma, they assume their originally inherent august attributes. This is called the object of worship.... By believing undividedly in [the Lotus Sutra, in accordance with its words,] "honestly discarding skillful means" and "not accept[ing] even a single verse from other sutras," my disciples and lay followers shall enter the jeweled stupa of this gohonzon.
(Nichinyo gozen gohenji 日女御前御返事, STN 2:1375-76) Chanting the daimoku with faith in the Lotus Sutra thus affords entry into the timeless realm of the Lotus assembly, where cause and effect are simultaneous and the Buddha and his disciples "constantly abide."
4 This passage and the next quoted passage are among those forming the basis for Tamura's conclusion that Nichiren was "at bottom sustained" by original enlightenment thought. Asai Endō, however, sees them as informed primarily by earlier Tiantai ideas.
5 Considerable controversy has occurred within the Nichiren tradition over whether the "three thousand realms of one's mind" (koshin sanzen 己心三千) in this passage refers to the Buddha's mind, the mind of the ordinary worldling, or the mind of one who embraces the Lotus Sutra (see Mochizuki 1958, p. 115).
CONTEMPLATION IN TERMS OF ACTUALITY
The original enlightenment doctrine associated with the origin teaching is just that, a doctrine. There must also be practice, by which the identity of the Buddha and ordinary worldlings is realized. Hence the category of kanjin 観心, "mind-contemplation" or "mind-discernment." In traditional Tiantai, this term simply denoted meditative practice as opposed to doctrinal study. In Japan, like the term shikan 止観 (calming and contemplation), it was sometimes used to indicate the entire Tendai/Lotus system, as distinguished from Tendai esoteric teachings. By the late Heian period, kanjin had come to mean contemplation or insight associated specifically with the origin teaching (Take 1991, p. 409). Specific meditation methods in which the practitioner brings a focused mind to bear upon a particular object were regarded as linear in approach, "moving from cause to effect"; these were termed rikan 理観 or "contemplation in terms of principle" and associated with the trace teaching. In contrast, the contemplation associated with the origin teaching was called jikan 事観, contemplation in terms of actuality, and was said to "move from effect to cause." Rather than a specific meditation method, kanjin in medieval Tendai kuden texts often seems simply to denote the insight that all phenomenal things, just as they are, express the reality of original enlightenment (Shimaji 1976, pp. 502-3; Hazama 1974, pp. 203-4).
Nichiren also associated the origin teaching with "contemplation in terms of actuality," but he used this term in a distinctive sense. In his famous treatise Kanjin honzon shō, he wrote, "Kanjin means to contemplate one's mind and to find the ten realms in it." Specifically, Nichiren was concerned with the Buddha realm implicit in the human realm; for him, this was the main purport of the mutual inclusion of the ten realms (jikkai gogu 十界互具), itself an abbreviated expression of the three thousand realms in one thought-moment (ichinen sanzen 一念三千). These principles were central to Nichiren's reading of the Lotus Sutra throughout his career. In the teachings of the Tiantai founder Zhiyi, ichinen sanzen indicates the mutual inclusion of all dharmas and the "single thought" that arises at each moment in ordinary worldlings; one's mind and all phenomena are at every moment inseparable and mutually encompassing. This is the "realm of the inconceivable" (不可思議境) to be contemplated as the first of ten modes of contemplation set forth in Zhiyi's meditation manual, the Mohe zhiguan 摩訶止観 (Great calming and contemplation). Zhiyi grounded this concept in the second chapter of the Lotus Sutra, "Skillful Means," which belongs to the trace teaching. For Nichiren, however, this was merely ichinen sanzen in terms of principle (理の一念三千), the theoretical potential for buddhahood in human beings. What Zhiyi had not revealed was the "three thousand realms in actuality" (ji no ichinen sanzen 事の一念三千), which is "found only in the origin teaching, hidden in the depths of the 'Fathoming the Lifespan' chapter" (Kaimoku shō, STN 1:539). This "practice in actuality" is "the five characters Namu-myōhō-renge-kyō and the object of worship of the origin teaching" (Kanjin honzon shō, STN 1:719). Kanjin for Nichiren has the specific meaning of realizing buddhahood by embracing the daimoku of the Lotus Sutra.
Thus Nichiren, like his Tendai contemporaries, taught kanjin and "contemplation in terms of actuality" as the practice uniquely associated with (although not explicitly stated in) the origin teaching. But in his case, jikan is not primarily the insight that all phenomena just as they are express original enlightenment, but the understanding that "contemplation" or practice entails specific religious forms: the daimoku, the object of worship (honzon), and the place of practice (the kaidan or ordination platform), which he defined as the "three great secret Dharmas" or "three great matters" of the origin teaching. This use of the term "actuality" may derive from mikkyō, where it indicates the "actual forms" (jisō), that is, the mudras, mantras, and mandalas of esoteric practice. However, "actuality" for Nichiren also carried the meaning of encountering great trials in the course of spreading Namu-myōhō-renge-kyō (Toki nyūdō-dono gohenji 富木入道殿御返事, STN 2:1522; see Mochizuki 1958, pp. 118-22, and Asai 1986). Moreover, as Asai Endō points out in his essay in this volume, there is a sense in which Nichiren's "ichinen sanzen in actuality" is not inherent from the outset but bestowed by the Buddha upon all beings of the last age.6 Mandalic readings of the origin teaching had important implications for practice. Since the beings of the nine realms (cause) and the Buddha (effect) are present simultaneously, practice and realization cannot be temporally divorced but must occur in the same moment. In the words of the Sanjū-shi ka no kotogaki 三十四箇の事書 (Notes on thirty-four items), an important medieval Tendai kuden text: [According to the provisional teachings,] delusion and enlightenment are separate. One must first extirpate delusion and then enter enlightenment; thus one does not enter the stage [of enlightenment] from the outset. But in the perfect and sudden teaching [of the Lotus Sutra], practice... and enlightenment are simultaneous.... All practices and good deeds are skillful means subsequent to the fruit. (Tada 1973, p. 180) For Nichiren, too: The merit of all [other] sutras is uncertain, because they teach that one must first plant good roots and [only] afterward become a Buddha. But in the case of the Lotus Sutra, when one takes it in one's hand, that hand at once becomes Buddha, and when one chants it with one's mouth, that mouth is precisely Buddha. It is like the moon being reflected in the water the moment it appears above the eastern mountains, or like a sound and its echo occurring simultaneously.
(Ueno-ama gozen gohenji 上野尼御前御返事, STN 2:1890)
6 [There are also differences in views of the] Buddha between Nichiren and medieval Tendai, a question too complex to be addressed in depth here (see Stone 1990, pp. 164-70; 1999, p. 274). Medieval Tendai texts celebrate the "unproduced triple-bodied Tathāgata" (musa sanjin 無作三身), who is manifested as all phenomena, and tend to regard Śākyamuni's initial realization in the remote past, described in the "Lifespan" chapter of the Lotus Sutra, as a metaphor for the original enlightenment of all living beings. This sort of immanentalist view was not lacking in Nichiren's thought: "Śākyamuni of subtle awakening (myōkaku 妙覚) is our blood and flesh. Are not the merits of his causes and effects our bones and marrow?... The Śākyamuni of our own mind has manifested the three bodies since countless dust-particle kalpas ago; he is the ancient Buddha without beginning" (Kanjin honzon shō, STN 1:711, 712). However, Nichiren treats Śākyamuni Buddha's enlightenment in the remote past as an actual event, mediating the realization of buddhahood by all living beings, and also embraces more transcendent views of the Buddha as "sovereign, teacher, and parent of this threefold world." Some commentators have for these reasons tended to distinguish medieval Tendai views of the Buddha as emphasizing the Dharma body, and Nichiren's, the recompense body (see for example Asai 1945, pp. 287-97, 304-15; Tamura 1965, pp. 625-26; Kitagawa 1987, pp. 190-278; Asai 1991, pp. 299-300). What we do not know, however, is whether the "triple-bodied Tathāgata" of the medieval Tendai kuden texts accounted for the whole of their compilers' views about the Buddha, or whether they, too, in other contexts, may have envisioned the Buddha as an external savior figure.
In both traditions, the moment of practice in which the Buddha and the ordinary worldling are united is associated with traditional categories of Tiantai/Tendai Lotus Sutra exegesis celebrating the unfathomable merit to be gained from even the slightest inclination toward the sutra, such as "a single moment's faith and understanding" (ichinen shinge 一念信解) or "a single moment's appropriate rejoicing" (ichinen zuiki 一念随喜). In medieval Tendai texts, the content of that moment is generally described as the realization that "all dharmas are the Buddha Dharma," the traditional definition of the stage of verbal identity (myōji-soku 名字即), the initial stage of practice in the Tiantai/Tendai mārga scheme. Nichiren, too, stressed the realization of buddhahood at the stage of verbal identity, but for him this was equated, not with a particular insight, but with embracing faith in the Lotus Sutra and chanting its daimoku (see Shishin gohon shō 四信五品鈔, STN 2:1295-96).
The simultaneity of practice and realization is not a denial of the necessity of continued practice but a reconceiving of it: practice is seen, not in instrumental, linear terms as a means leading to an end, but instead as the expression, confirmation, and deepening of a liberation or salvation that in some sense is already present. It is true that, vis-à-vis the medieval Tendai kuden literature, which tends to stress the moment of realization, Nichiren's writings place greater emphasis on the aspect of continued practice. However, this was not because he was reasserting the need for practice over and against a Tendai tradition that had lapsed into mere theoretical argument. Rather, it stemmed from the fact that he was in effect establishing a new religious community and needed to make clear its premises; that he had continually to exhort his followers to keep faith in the face of severe opposition; and because exclusive devotion to the Lotus Sutra carried in his mind a mandate to propagate it. Broadly speaking, the perspective of practice (or faith) and realization (or salvation) as simultaneous can be said to characterize other forms of Kamakura Buddhism as well. Dōgen taught "practice on the basis of realization" and the "oneness of practice and realization," while Shinran held that in the moment when faith arises in one's heart, one is "equal to Tathāgatas"; the nenbutsu is recited in gratitude for a salvation that is already assured. This is an element shared by the new movements with the dominant Tendai tradition and is probably traceable, at least in part, to the influence of esoteric thought, in which the adept is said to realize the unity of self and Buddha in the act of ritual practice.
Thus we can see that Nichiren adopted a perspective similar to Tendai original enlightenment doctrine, in that it was grounded specifically in the origin teaching of the Lotus Sutra and entailed the simultaneity of practice and realization. Nonetheless, there is a vital difference between many medieval Tendai texts and Nichiren's thought in the way this perspective is appropriated: not a practice/theory distinction, but another that has generally gone unrecognized. In the case of medieval Tendai hongaku texts, arguments against the linear perspective of "acquired enlightenment" in favor of that of inherent, original enlightenment often form the primary polemical agenda: instrumental views of practice as a means to an end, or constructions of buddhahood as something external to one's immediate reality, temporally or ontologically, are condemned as provisional or even delusive. In Nichiren's case, however, the primary polemical agenda is asserting the unique soteriological validity of the Lotus Sutra in the Final Dharma age. Like every other element in his mature teaching, hongaku ideas are subordinated to this overriding concern. Original enlightenment doctrine, such as the mutual inclusion of the beginningless nine realms and the beginningless Buddha realm, thus becomes for Nichiren another ground for arguing the superiority of the Lotus Sutra, since in his view it is found in the depths of the origin teaching of the Lotus Sutra and in no other place. Tamura may well be right in asserting that Nichiren was "at bottom sustained" by this doctrine, but its recession from the central focus of much of his later writings is attributable, not to a departure or "descent" from a nondual hongaku perspective, but to the fact that exclusive faith in the Lotus Sutra superseded it as the most important thing he had to convey.
The Cosmos, History, and Japan
The late twelfth through early fourteenth centuries saw the rise of a number of discourses that attempted, usually in religiocosmic terms, to locate Japan in the world and in history. The most famous of these is shinkoku 神国 thought, which encompasses a range of notions about Japan as a land uniquely under the guidance and protection of its kami, elaborated from different angles and with different aims in such well-known works as the Gukanshō 愚管抄 (c. 1219) of Jien 慈円 and the Jinnō shōtōki 神皇正統記 (rev. 1343) of Kitabatake Chikafusa 北畠親房. Shinkoku thought has recently been the subject of considerable revisionist scholarship. Countering the ideological agendas of scholars who have associated it with notions of an essentialized "Shintō" as the timeless spiritual basis of the Japanese or assimilated it to modern national consciousness, Kuroda Toshio (1996) has sought to locate it in the historical specifics of medieval society. Shinkoku thought, he argues, originated largely as a reactionary ideological move within the Buddhist kenmitsu system aimed at bolstering the authority of the temple-shrine complexes and other ruling elites and countering innovative, heterodox movements. It was also stimulated in part by external events, such as the failure of the Mongol invasion attempts and the Kenmu restoration. Kuroda's work has been developed by a number of other insightful studies (see for example Rambelli 1996; Sasaki 1997, pp. 28-62; and Satō 1998, pp. 307-47).
This new scholarship has identified shinkoku thought as an establishment discourse. Sasaki, for example, notes that it tended to be invoked in arguments against the exclusive nenbutsu movement, which rejected worship of the kami (1997, pp. 36-37). At the same time, however, shinkoku arguments can be seen as emerging from a still larger complex of discourses about Japan's place in the world, in which both "new" and "old," "establishment" and "antiestablishment" Buddhism participated. A figure whose work casts considerable light on such concepts of "Japan" in the medieval period is Nichiren. This strand in his thought has received little attention in postwar scholarship, probably because of lingering associations with nationalistic wartime Nichirenism. Nevertheless, now that Nichiren's ideas of the state and political authority have been reexamined by Sasaki, Satō, and others, it is appropriate that his views of Japan also be acknowledged and reconsidered in their medieval context. One of the very few postwar scholars to address this subject is Takagi Yutaka (1982). Takagi did not take up the issue of shinkoku thought but rather focused on other Buddhist views of Japan in the Kamakura period, a topic to which he was led by his study of Nichiren. Central to Takagi's view of Kamakura Buddhism is consciousness of the Final Dharma age, to which both "old" and "new" Buddhism responded, and Takagi places medieval Buddhist discourses about Japan in this context. Takagi notes that Buddhist thinkers of the time generally accepted the traditional Buddhist cosmology of four continents, one in each of the four directions surrounding Mt. Sumeru: Pūrvavideha in the east, Aparagodānīya in the west, Uttarakuru in the north, and Jambudvīpa in the south. Among these, it is in the southern continent of Jambudvīpa that Buddhism appears and spreads. At least as early as the Nara period, Japan had been incorporated into this Indian world model as one of countless island countries scattered "like grains of millet" in the sea surrounding Jambudvīpa. This locus was not only geographical but also temporal, marking Japan as the terminus in the historical process of Buddhism's eastward dissemination through the "three countries" of India, China, and Japan.7 By the late twelfth century, Takagi suggests, influenced by awareness of the Final Dharma age, Japan's status as a hendo 辺土 or marginal country on the edge of the Buddhist cosmos had additionally come to represent a projection into the spatial dimension of a perceived alienation from the Buddha and the possibility of salvation (Takagi 1982, pp. 275-78).
7 The "three countries" no doubt represent imaginative and ideological space as much as geographical realities. The elision of Korea, historically so vital to the Japanese reception of Buddhism, is striking.
Both new and older forms of Buddhism had to address Japan's peripheral location in the effort to overcome the negative soteriological connotations of mappō. It is in this connection, Takagi argues, that we may understand an increased interest at the time in India, the source of Buddhism's origin, as seen, for example, in the famous plans for a pilgrimage there by the monk Myōe (1173-1232). A renewed emphasis on lineage, firmly linking one's own tradition to the orthodox transmission of Buddhism through the "three countries," is also identified by Takagi as an important strategy for counteracting the assumptions of decline implicit in mappō thought. In the case of new movements, of course, such lineages were constructed de novo; Nichiren, for example, traces his lineage from Śākyamuni through Zhiyi through Saichō to himself, the "four teachers of three countries" (Kenbutsu mirai ki 顕仏未来記, STN 1:743).
NICHIREN ON JAPAN
Nichiren's views of Japan link the mappō-countering strategies noted by Takagi with the increased cosmological significance accorded Japan in medieval shinkoku thought, although the kami were not central to his teaching and his agenda was not that of the shinkoku theoreticians. What follows are a few preliminary observations that may serve both to shed light on his own thought and to link it to other medieval concepts of Japan.
(1) While he accepted the received view of Japan as situated on the edge of a horizontal Buddhist cosmos, Nichiren also placed Japan within a vertical Buddhist cosmos of his own devising. This cosmos is a feudal hierarchy, at the top of which stands Śākyamuni Buddha, lord of the threefold world. Beneath him are Brahmā and Indra, and beneath them, Vaiśravaṇa and the others of the four deva kings, who rule over and protect the four quarters as their gatekeepers. The monarchs of the four continents are vassals to Vaiśravaṇa [and the others].
The ruler of Japan is not even equal to a vassal of the wheel-turning monarchs of the four continents. He is just an island chief. (Hōmon mōsarubeki yō no koto 法門可被申様之事, STN 1:448; see also Fujii 1959).
Nichiren's concept of this world as Śākyamuni's domain (Shakuson goryō 釈尊御領) bears structural similarities to other "feudal cosmologies" being elaborated during the same period. For example, Sannō Shintō transmissions of Mt. Hiei identify the gongen 権現 of the Hie Shrine as the "landlord (jinushi 地主) of the country of Japan," while Ryōbu and Ise Shintō transmissions identify Tenshō Daijin as the absolute deity and sovereign of Japan, heading a feudal hierarchy of lesser, local kami. These notions of one deity as overlord of the country in turn represent outgrowths of kenmitsu thought in which the Buddha or kami of a particular temple or shrine was also seen as the "landlord" of its shōen (see Kuroda 1975, pp. 266, 289-90). In Nichiren's case, the feudal hierarchy serves not only to emphasize the supremacy of one's own deity, but also to subsume Japan within the realm of the Śākyamuni who expounded the Lotus Sutra, and to relativize the authority of worldly rule by placing it beneath that of a transcendent Buddha. As with the case of original enlightenment doctrine, a more widespread idea is here assimilated in Nichiren's thought to the supremacy of the Lotus Sutra.
(2) Nichiren understood Japan not solely in terms of its cosmological location but as a member of a larger category of "country." "Country" undergoes specific definition in Nichiren's concept of the five guides (gokō 五綱), first developed during his exile to Izu (1261-1263). These are the teaching, human capacity, time, the country, and the sequence of propagation: five perspectives from which Nichiren argued the sole validity of the Lotus Sutra as the proper teaching to be spread in his day. Concerning "country," he observes: There are cold countries, hot countries, poor countries, wealthy countries, central countries, peripheral countries, large countries, small countries, countries wholly dedicated to theft, countries wholly dedicated to the killing of living beings, and countries utterly lacking in filial piety. In addition, there are countries solely devoted to Hinayana, countries solely devoted to Mahayana, and countries in which both Hinayana and Mahayana are pursued.
(Kyōki jikoku shō 教機時国鈔, STN 1:243)8 It is worth noting here that he makes no reference to language, race, ethnicity, culture, or other categories by which modern national or ethnic identity is commonly defined. What is most important for Nichiren about a country (kuni 国 or kokudo 国土) is the nature of its affinity for a particular form of Buddhism.
8 See also Nanjō Hyōe Shichirō-dono gosho 南条兵衛七郎殿御書, STN 1:323; trans. in Yampolsky 1996, p. 417. Nichiren derived these categories from Xuanzang's 玄奘 Record of the Western Regions and Saichō's Kenkai ron 顕戒論. See also the discussion in Takagi 1982, pp. 279-83.
(3) This then raises the question: "Now by mastery of what teaching can [the people of] the country of Japan escape birth and death?" (STN 1:323). Proof texts that Nichiren cites in answer to this question include a remark attributed to Sūryasoma, teacher of the Lotus Sutra's famed translator Kumārajīva, that "this sutra is karmically related to a small country in the northeast" (Fahua chuanji 法華伝記, T. no. 2068, 51b-54b). He also cites Annen, who quotes Maitreya as saying, "In the east there is a small country, where people's faculties are suited solely to the great vehicle," and adds, "Everyone in our country of Japan believes in the Mahayana" (Futsū jubosatsukai kōshaku 普通授菩薩戒広釈, T. no. 2381, 74.757c), as well as Genshin 源信 (942-1017), who writes, "Throughout Japan, all people have the pure and singular capacity suited solely to the Perfect Teaching" (Ichijō yōketsu 一乗要決, T. no. 2370, 74.351a). Nichiren concludes, "Japan is a country where people have faculties related solely to the Lotus Sutra. If they practice even a phrase or verse of it, they are sure to attain the Way, because it is the teaching to which they have a connection.... To the nenbutsu and other good practices, it is a country without connections" (STN 1:324). Nichiren here invokes a longstanding Tendai tradition that the Japanese have faculties suited solely to the perfect teaching of the Lotus Sutra (Groner 1984, pp. 181-82). Spoken by earlier figures such as Annen or Genshin, this assertion served to legitimize Mt. Hiei as a leading cultic center for the rituals of nation protection. Made by Nichiren, however, the same claim worked to challenge the authority of Mt. Hiei and other leading cultic centers, and by implication the authorities who supported them, by arguing that they had undermined the supreme position of the Lotus Sutra by embracing Amidist, Zen, and esoteric teachings, and instead served to legitimize the position of himself and his followers.
(4) Despite Japan's affinity for the Lotus Sutra, in Nichiren's eyes this connection was not being honored. This view, first articulated in the Risshō ankoku ron 立正安国論 (1260) and other essays of the same period, grew stronger with the threat of foreign invasion. The year after the first demand for Japan's submission arrived from the Mongol empire, Nichiren was writing: Because all people of the land of Japan, from high to low without a single exception, have become slanderers of the Dharma, Brahmā, Indra, Tenshō Daijin, and the other deities must have instructed the sages of a neighboring country to reprove that slander.... The entire country has now become inimical to the Buddhas and deities... China and Korea, following the example of India, became Buddhist countries. But because they embraced the Zen and nenbutsu teachings, they were destroyed by the Mongols. The country of Japan is a disciple to those two countries. And if they have been destroyed, how can our country remain at peace?... All the people in the country of Japan will fall into the Hell without Respite.
(Hōmon mōsarubeki yō no koto, STN 1:454-55) That Japan has turned against the Lotus Sutra and become a land of slanderers, that the protective deities have therefore abandoned the country, and that the Mongol invasion is a deserved punishment for this slander, perhaps even a necessary evil to awaken the country from it, become recurrent themes in Nichiren's writing from this time on.
(5) Nichiren's understanding of Japan as a land of Dharma slanderers influenced the mode of proselytizing that he adopted. As is well known, drawing on scriptural sources and the commentaries of Zhiyi, Nichiren distinguished between shōju 摂受, literally to "embrace and accept," the mild method of leading others gradually without explicitly criticizing their position, and shakubuku 折伏, to "break and subdue," the harsh method of directly rebuking attachment to inferior or wrong views.9 He likened these to the two worldly arts of the pen and the sword. For the most part, Nichiren saw the choice between the two methods as temporally dictated: where shōju had been suited to the True and Semblance Dharma ages, shakubuku was appropriate to the Final Dharma age. In one famous passage, however, Nichiren qualifies the choice according to the country. Even in the Final Dharma age, he argues, both shōju and shakubuku are to be used, because there are two kinds of countries: countries that are evil merely because their inhabitants are ignorant of the Lotus Sutra, and countries whose inhabitants embrace heretical teachings and actively slander the Dharma.
9 The locus classicus for these terms is the Śrīmālā-devī-sūtra, which says that the two methods "enable the Dharma to long endure" (Sheng-man jing 勝鬘経, T. no. 353, 12.217c). Zhiyi explicitly connects shakubuku with the Lotus Sutra; see Fahua xuanyi 法華玄義, T. no. 1716, 33.792b; Fahua wenju 法華文句, T. no. 1718, 34.118c; and Mohe zhiguan 摩訶止観, T. no. 1911, 46.137c.
Japan in his view clearly belonged in the latter category (Kaimoku shō, STN 1:606).10
10 This distinction became an issue when Nichiren-based religious movements began to proselytize outside Japan. For example, Sōka Gakkai publications from that organization's early period of overseas expansion argue to the effect that the Japanese, having inherited a national tradition of "heretical" Buddhism, are distinguished by an exceptionally heavy karmic burden of Dharma slander; the religions of other, presumably Western, countries may be misguided and ineffectual but are not "heretical" in a strict sense. "Because overseas countries are ignorant of the Buddhism for the Final Dharma age, what is needed, of course, is shōju based on the spirit of shakubuku.... There is absolutely no need to use such words as 'heresy' (jashū 邪宗). Just teach them the benefits of the gohonzon" (Sōka Gakkai Kyōgakubu 1968, p. 399).
(6) However, the deplorable state of Buddhism in Japan did not mean for Nichiren (as it had for Dōgen) that it was to be sought in purer form somewhere else. "By tasting a single drop, one can know the flavor of the great ocean, and by observing one flower, one can infer the coming of spring. One need not travel ten thousand leagues to reach Song [China] or spend three years journeying to Vulture Peak [in India]... to distinguish superior from inferior among the Buddha's lifetime teachings" (Kaimoku shō, STN 1:588-89). Nichiren revered the Tang-period Chinese Tiantai masters Zhiyi and Zhanran but did not regard contemporary Song China as a repository of Buddhist truth. In at least one writing, dating from the Sado period, Nichiren represents Japan as the only place where Buddhism survives: The great teacher Miao-luo 妙楽 [Zhanran] said, "Has not the Dharma been lost in India, so that they are now seeking it throughout the four quarters?" This passage testifies that Buddhism no longer exists in India. In China, during the reign of Emperor Gaozong 高宗, northern barbarians captured the eastern capital, and it has now been more than a hundred fifty years since the Buddha Dharma and the ruler's dharma (ōbō) came to an end. Within the great repositories of China not a single Hinayana sutra remains, and the vast majority of the Mahayana sutras have also been lost.... Therefore Zunshi 遵式 said, "[These teachings] were first transmitted from the west, where the moon appears. But now they return from the east, where the sun rises."
(Kenbutsu mirai ki, STN 1:741) How far this alleged disappearance of Buddhism from the Asian mainland represents Nichiren's genuine impression and how far it represents a rhetorical strategy is difficult to assess. Though he considered Zen an inferior form of Buddhism, he would certainly have been aware at least of the existence of contemporary Song Chan, as a number of refugee Chan monks had fled to Japan to escape the Mongols and some had taken up residence in Kamakura. Be that as it may, the polemical intent of this passage is clear enough. In their original contexts, the quotations from Zhanran and from the Tiantai master Zunshi (964-1032) refer only to specific texts. Zhanran refers to a request made by an Indian monk to Amoghavajra (705-774) for a translation of Zhiyi's works, though Zhanran draws a similar rhetorical conclusion, that the Dharma has been lost in India and is being sought abroad (Fahua wenju ji 法華文句記, T. no. 1719, 34.359c). Zunshi for his part is commenting on the fact that Genshin's disciple Jakushō 寂照 had brought back from Japan a work of the Tiantai master Huisi 恵思 that had been lost in China (Dasheng zhiguan famen 大乗止観法門, T. no. 1924, 46.641c). Nichiren reads these statements synecdochically, so that the particular texts in question are made to stand for the whole of Buddhism; he then assimilates this alleged disappearance of Buddhism, in the Chinese case, to invasion by the Mongols, the very situation then being faced by Japan. This leaves Japan as the only place where, in the persons of Nichiren and his followers, the teachings of the Lotus Sutra are upheld. Here, as in the preceding quotation as well, an implied analogy is drawn between political realities and the state of Buddhism: Just as Korea has fallen, the great Song nation is beleaguered, and Japan now stands alone against the Mongols, so Buddhism has now been wiped out in these countries and exists only in Japan. Nichiren's famous three vows, "I will be the pillar of Japan, I will be the eye of Japan, I will be the great ship of Japan" (Kaimoku shō, STN 1:601), were no doubt made from this perspective.
(7) As an extension of this perspective, Nichiren began, also during the Sado period, to speak of Japan as the land where, through his own efforts and those of his disciples, a new Lotus Buddhism uniquely suited to the Final Dharma age would arise. In the Final Dharma age, he said, "the secret Dharma of the sole great matter shall be spread for the first time in this country" (Toki Nyūdō-dono gohenji, STN 1:516). Elaborating on Zunshi's analogy of the sun and moon cited above and expanding the scope of its referent, he wrote, "The moon appears in the west and illuminates the east. The sun rises in the east and illumines the west. The same is true of Buddhism. In the True and Semblance Dharma ages, it moved from west to east, but in the Final Dharma age, it will return from east to west.... It is now the beginning of the last [of the five] five-hundred-year periods [following the Buddha's nirvana], and the Buddha Dharma will surely emerge from the eastern land of Japan" (Kenbutsu mirai ki, STN 1:741-42; see also Soya Nyūdō-dono gari gosho 曾谷入道殿許御書, STN 1:909). In a somewhat later text written in 1280, Nichiren expanded still further on this analogy by drawing a comparison between the Buddha Dharma of India, which he called "the land of the moon tribe," or Yüeh-chih 月氏, and the Buddha Dharma of Japan, the "land of the sun." Hitherto, he said, the Buddha Dharma of India had spread from west to east. But like the moon, its light was feeble; it could never dispel the darkness of the degenerate, Final Dharma age. Now it was time for the Buddha Dharma of Japan to rise like the sun, moving from east to west, and illuminate the world (Kangyō Hachiman shō, STN 2:1850). Nichiren's identification of Japan as the birthplace of a new Buddhism parallels his growing sense of himself as the bearer of a new Dharma, distinguished in important ways from his received Tendai tradition and intended specifically for the Final Dharma age.
Nichiren did not redefine Japan as the center of the cosmos, a move that would be made by some later shinkoku ideologues such as Yoshida Kanetomo 吉田兼倶 (1434-1511), whose famous "tree metaphor" defines Buddhism as the fruit and flowers, Confucianism as the leaves and branches, and Shintō as the root. Nichiren's Japan remains a tiny "millet grain" country on the periphery of the Buddhist cosmos in the last age. But it is precisely from this marginal land that the Buddha Dharma for the last age will emerge and spread west to illuminate the world. Thus, in another of Nichiren's hierarchy inversions that result from investing all authority in the Lotus Sutra, the periphery becomes more important than the center. This theme in his teaching, of the Dharma returning from Japan to the west, has been appropriated in a variety of ways in the twentieth century. Indeed, one suspects it may have been this element, as much as anachronistic notions of Nichiren as a modern patriot and imperial supporter, that first drew the attention of serious modern nationalistic thinkers. The image of the Buddha Dharma reversing its historical flow to return from the east to the west has also been an inspiration to those seeking to spread various forms of Nichiren Buddhism outside Japan.11 Nichiren's concept of Japan suggests the need, on occasion, to consider medieval Japanese religions in ways that cut across the polarity of old and new institutions. As Takagi Yutaka has noted, Nichiren's views of Japan are tied to broader medieval Buddhist attempts, transcending sectarian affiliation, to overcome the negative soteriological implications of mappō seen by many to be reflected in Japan's peripheral location on the edge of the Buddhist cosmos. At the same time, as we have seen, they are clearly linked to the "feudal cosmologies" emerging from kenmitsu institutions. Unlike some shinkoku ideologues, Nichiren did not see Japan as sacred in itself; its significance lies solely in its affinity with the Lotus Sutra. Nonetheless, his is an attempt to define Japan's place in the world and history, and as such, sheds light on other religious and cosmological concepts of Japan emerging in the medieval period.
Role of the antigen presentation process in the immunization mechanism of the genetic vaccines against COVID‐19 and the need for biodistribution evaluations
To the editor, The mechanism of 'traditional' vaccines consists in inoculating viruses which have been previously inactivated (e.g. by thermal treatments) or attenuated (e.g. by multiple passages in suboptimal growth conditions). 1 Such viruses, which lost the ability to cause acute infection, allow the immune system to recognize them as exogenous pathogens, promoting the production of specific antibodies and memory T lymphocytes. 1 The genetic vaccines against COVID-19 which obtained the authorization for use in the European Union, namely the adenoviral-based vaccines (produced by AstraZeneca and Janssen) and the mRNA vaccines (produced by Pfizer/BioNTech and Moderna), encode genetic information which enables human cells to produce a viral antigen. More precisely, the aforementioned vaccines induce the protein synthesis machinery of human cells to translate the spike protein of the viral capsid of SARS-CoV-2. 2 Upon its translation by the ribosomes, the spike protein gets processed by the Golgi apparatus and presented to the immune system in two forms: i) as an entire protein, displayed on the cellular membrane, which can be recognized by B cells and T-helper cells (Figure 1A); or ii) in the form of fragments loaded on the major histocompatibility complex I (MHC I), which presents endogenous antigens to CD8+ T lymphocytes (Figure 1B). The immune system recognizes the exogenous antigen and initiates the inflammatory response and the subsequent steps leading to the production of specific antibodies by the B cells. 2 In human cells, the antigen presentation process is performed by the MHC I and II, and this mechanism is essential for cell-mediated immunity. 3 The MHC I is a protein complex, located on the membrane of all nucleated cells, which presents to CD8+ lymphocytes fragments of endogenous antigens generated upon the proteasomal degradation of intracellular proteins (Figure 1C). 3 This mechanism allows the immune system to constantly screen the proteosynthetic activity of all nucleated cells of the body, in order to detect when a cell is synthesizing viral or mutant proteins. The MHC II is located on the membranes of professional antigen-presenting cells (APCs), such as macrophages, monocytes, B cells and dendritic cells, and it displays to CD4+ lymphocytes fragments of exogenous antigens ingested around the body (Figure 1D). 3 In some cases, MHC II molecules can be found even on endothelial cells, as a consequence of inflammatory signals. 3 When a CD8+ or CD4+ lymphocyte detects a cell expressing a viral gene (e.g. due to an infection), a mutant gene (e.g. due to cancer) or a foreign gene (e.g. due to a transplant), it binds the MHC, activating the immune response that leads to the destruction of the abnormal cell. 3 The aforementioned processes are essential for understanding the differences between the 'traditional' and the genetic vaccines in terms of antigen presentation. The 'traditional' vaccines generally do not induce human cells to produce viral proteins, and thus human cells do not expose viral antigens deriving from their proteosynthetic activity. On the contrary, the genetic vaccines against COVID-19 induce human cells to produce the spike protein, relying intrinsically on an autoimmune reaction, extended to all the cells that intake the genetic material and start the protein synthesis. Biodistribution studies are fundamental to determine in which tissues and organs an injected compound travels and accumulates.
To the author's knowledge, up to now, such evaluation has not been carried out on humans for any of the emergency use approved COVID19 vaccines. As concerns the Pfizer/BioNTech BNT162b2 vaccine, it is injected into the deltoid muscle, which drains primarily to the axillary lymph nodes. Theoretically, the lipid nanoparticles (LNPs) in which the mRNA is encapsulated should have a very restricted biodistribution, targeting the draining axillary lymph nodes.4 However, a pharmacokinetic study performed by Pfizer for the Japanese regulatory agency shows that the LNPs display an offtarget distribution on rodents, accumulating in organs such as the spleen, liver, pituitary gland, thyroid, ovaries and in other tissues.5 Similarly, the results of the European Medicines Agency (EMA) assessment reports show an offtarget distribution of the LNPs used by Pfizer/BioNTech and Moderna, in the liver and other organs of rodents.6,7
Role of the antigen presentation process in the immunization mechanism of the genetic vaccines against COVID-19 and the need for biodistribution evaluations
To the editor, The mechanism of 'traditional' vaccines consists of inoculating viruses that have been previously inactivated (e.g. by thermal treatments) or attenuated (e.g. by multiple passages in suboptimal growth conditions). 1 Such viruses, which have lost the ability to cause acute infection, allow the immune system to recognize them as exogenous pathogens, promoting the production of specific antibodies and memory T lymphocytes. 1 The genetic vaccines against COVID-19 which obtained the authorization for use in the European Union, namely the adenoviral-based vaccines (produced by AstraZeneca and Janssen) and the mRNA vaccines (produced by Pfizer/BioNTech and Moderna), encode genetic information which enables human cells to produce a viral antigen. More precisely, the aforementioned vaccines induce the protein synthesis machinery of human cells to translate the spike protein of SARS-CoV-2. 2 Upon its translation by the ribosomes, the spike protein is processed by the Golgi apparatus and presented to the immune system in two forms: i) as an entire protein, displayed on the cellular membrane, which can be recognized by B cells and T-helper cells (Figure 1A); or ii) in the form of fragments loaded on the major histocompatibility complex I (MHC I), which presents endogenous antigens to CD8+ T lymphocytes (Figure 1B). The immune system recognizes the exogenous antigen and initiates the inflammatory response and the subsequent steps leading to the production of specific antibodies by the B cells. 2 In human cells, the antigen presentation process is performed by the MHC I and II, and this mechanism is essential for cell-mediated immunity. 3 The MHC I is a protein complex, located on the membrane of all nucleated cells, which presents to CD8+ lymphocytes fragments of endogenous antigens generated upon the proteasomal degradation of intracellular proteins (Figure 1C). 3 This mechanism allows the immune system to constantly screen the proteosynthetic activity of all nucleated cells of the body, in order to detect when a cell is synthesizing viral or mutant proteins. The MHC II is located on the membranes of professional antigen-presenting cells (APCs), such as macrophages, monocytes, B cells and dendritic cells, and it displays fragments of exogenous antigens ingested around the body to CD4+ lymphocytes (Figure 1D). 3 In some cases, MHC II molecules can be found even on endothelial cells, as a consequence of inflammatory signals. 3 When a CD8+ or CD4+ lymphocyte detects a cell expressing a viral gene (e.g. due to an infection), a mutant gene (e.g. due to cancer) or a foreign gene (e.g. due to a transplant), it binds the MHC, activating the immune response that leads to the destruction of the abnormal cell. 3 The aforementioned processes are essential for understanding the differences between the 'traditional' and the genetic vaccines in terms of antigen presentation. The 'traditional' vaccines generally do not induce human cells to produce viral proteins, and thus human cells do not expose viral antigens deriving from their own proteosynthetic activity. On the contrary, the genetic vaccines against COVID-19 induce human cells to produce the spike protein, a mechanism that intrinsically relies on an autoimmune-like reaction extended to all the cells that take up the genetic material and start the protein synthesis.
Biodistribution studies are fundamental to determine in which tissues and organs an injected compound travels and accumulates. To the author's knowledge, up to now, such an evaluation has not been carried out in humans for any of the emergency-use-approved COVID-19 vaccines. Regarding the Pfizer/BioNTech BNT162b2 vaccine, it is injected into the deltoid muscle, which drains primarily to the axillary lymph nodes. Theoretically, the lipid nanoparticles (LNPs) in which the mRNA is encapsulated should have a very restricted biodistribution, targeting the draining axillary lymph nodes. 4 However, a pharmacokinetic study performed by Pfizer for the Japanese regulatory agency shows that the LNPs display an off-target distribution in rodents, accumulating in organs such as the spleen, liver, pituitary gland, thyroid, ovaries and other tissues. 5 Similarly, the results of the European Medicines Agency (EMA) assessment reports show an off-target distribution of the LNPs used by Pfizer/BioNTech and Moderna in the liver and other organs of rodents. 6,7 Another source of toxicity has proven to be the spike protein itself. A study measured longitudinal plasma samples collected from recipients of the mRNA-1273 Moderna vaccine. 8 The study shows that considerable amounts of spike protein, as well as the cleaved S1 subunit, can be detected in the blood plasma several days after the inoculation. The authors hypothesize that the cellular immune responses triggered by T-cell activation, which occur days after the inoculation, lead to the death of cells presenting the spike protein, releasing it into the bloodstream. 8 The release of the spike protein into the bloodstream also engages the antigen presentation process mediated by the MHC II, since APCs throughout the body take up the circulating viral protein (Figure 1D).
Up to now, more than 1000 peer-reviewed studies document a multitude of adverse events in COVID-19 vaccine recipients. 9 Such studies report severe adverse reactions following vaccination, including thrombosis, thrombocytopenia, myocarditis, pericarditis, cardiac arrhythmias, nervous system disorders and other alterations. It is noteworthy that several of the aforesaid side effects had already been reported in the confidential post-authorization cumulative analysis released as part of a Freedom of Information Act (FOIA) procedure, which provides data on deaths and adverse events recorded by Pfizer from 14 December 2020 to 28 February 2021. 10 In conclusion, it is essential to underline that every human cell that takes up the LNPs and translates the viral protein (in the case of the mRNA vaccines), or that gets infected by the adenovirus and expresses and translates the viral protein (in the case of the adenovirus-based vaccines), is inevitably recognized as a threat by the immune system and killed (Figure 1). There are no exceptions to this mechanism. The severity of the resulting damage and the consequences for health depend on the quantity of the cells involved, on the type of tissue and on the strength of the ensuing autoimmune reaction. For instance, if the mRNA contained in the LNPs were internalized by cardiac myocytes, and these cells produced the spike protein, the resulting inflammation would likely lead to necrosis of the myocardium, with an extent proportional to the number of cells involved. Therefore, it is fundamental to perform pharmacokinetic evaluations in humans, in order to determine the exact biodistribution of the vaccines against COVID-19 and thus to identify the tissues possibly at risk.
CONFLICT OF INTEREST
The author declares that he has no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
ETHICAL APPROVAL
Not required.
DATA AVAILABILITY STATEMENT
Data sharing is not applicable to this article as no new data were created or analysed in this study.
Institute of Applied Physics 'Nello Carrara', National Research Council, Sesto Fiorentino, Italy
Clinical features and viral etiology of acute respiratory infection in an outpatient fever clinic during COVID‐19 pandemic in a tertiary hospital in Nanjing, China
Abstract Background The clinical features and viral etiology of acute respiratory infection (ARI) in the community were unknown during the coronavirus disease 2019 (COVID-19) pandemic. Objective In a retrospective study, we aimed to characterize the clinical features and etiology of ARI patients admitted to the outpatient fever clinic in Nanjing Drum Tower Hospital between November 2020 and March 2021. Methods Fifteen common respiratory pathogens were tested for in pharyngeal swabs by multiplex reverse transcriptase-polymerase chain reaction assays. Results Of the 242 patients, 56 (23%) tested positive for at least one viral agent. The predominant viruses included human rhinovirus (HRV) (5.4%), parainfluenza virus type III (PIV-III) (5.0%), and human coronavirus-NL63 (HCoV-NL63) (3.7%). Cough, sputum, nasal obstruction, and rhinorrhea were the most prevalent symptoms in patients with viral infection. Elderly patients and those with underlying diseases were susceptible to pneumonia accompanied by sputum and chest oppression. Antiviral agents were empirically prescribed to three (5.4%) patients in the viral infection group versus 31 (16.7%) in the non-viral infection group (p = 0.033). Among the 149 patients who received antibiotic therapy, 30 (20.1%) were later identified with viral infection. Conclusion Our study indicates the importance of accurate diagnosis of ARI, especially during the COVID-19 pandemic, which might facilitate appropriate clinical treatment.
| INTRODUCTION
Acute respiratory infection (ARI) is one of the leading infectious diseases associated with significant morbidity and mortality in the community. 1 In 2016, global respiratory infections led to 65.9 million hospital admissions, and lower respiratory infections caused 4.4% of deaths in people across all ages. 2 Influenza-like illness (ILI) is defined as a sudden-onset fever (>38°C) with cough or sore throat in the absence of other diagnoses, and accounts for most ARIs. 3 Coronaviruses, respiratory syncytial virus (RSV), influenza A and B viruses, parainfluenza viruses, and adenoviruses are recognized as important viral causes of ARI, with a high incidence of infection during the winter season. 4,5 The novel severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) emerged in late 2019, triggered a worldwide pandemic, and caused a global health crisis. 6 The clinical presentation ranges from asymptomatic to mild respiratory tract infection and influenza-like illness to severe disease with lung injury, multiorgan failure, and death. [7][8][9] It is well established that non-pharmaceutical interventions and measures, including wearing masks, hand hygiene and disinfection, aggressive detection, quarantine of infected individuals and their close contacts, and social distancing, effectively contain COVID-19 outbreaks. Meanwhile, due to the above non-pharmaceutical interventions, ARI prevalence during the winter season might have been partially reduced, and the causative agents responsible for ARI might also have changed. 10,11 ARIs are caused by a variety of different pathogens, and there are no effective and specific treatment strategies. Current treatment strategies for ARI include antiviral agents, antitussives, and non-steroidal anti-inflammatory drugs, which are administered to relieve disturbing clinical symptoms. Although antibiotics are not effective against viruses, they are also widely applied as empirical treatment for ARI patients. Herein, our study aimed to identify the viral etiology and clinical features of ARI in adults presenting to the fever clinic of a tertiary hospital in Nanjing, China, during the COVID-19 pandemic. Additionally, the clinical characteristics of ARI patients with viral pneumonia were compared with those of patients with non-viral pneumonia.
| Statistical analysis
Statistical analysis was conducted using SPSS software version 22.0 (SPSS Inc). Data are presented as means ± SD, median, quartiles, or number (percentage) where appropriate. The chi-squared test or Fisher's exact test was used to compare differences between groups for dichotomous variables, and the t-test, ANOVA, or Mann-Whitney U-test was used for continuous variables. p < 0.05 was defined as statistically significant. The underlying pathophysiological status of a patient may affect the blood test results, for example, in cirrhosis and hematological disorders. We therefore removed data that may affect the routine blood results and re-ran the statistical analysis to make the results more reliable.
Sensitivity analysis was undertaken in the study. We compared the differences in routine blood tests between patients with viral infection and non-viral infection by excluding participants with underlying diseases such as cirrhosis and leukemia. 12
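As a concrete illustration of the workflow described above, the following Python sketch applies the same families of tests with SciPy. It is not the authors' SPSS analysis: the continuous measurements are hypothetical placeholders, and only the 2x2 antiviral-prescription table uses the counts reported in the abstract (3/56 in the viral group vs. 31/186 in the non-viral group).

```python
# Illustrative sketch of the statistical comparisons described above,
# using SciPy -- not the authors' SPSS code.
import numpy as np
from scipy import stats

# Hypothetical white blood cell counts (10^9/L) for the two groups,
# after excluding patients with cirrhosis or leukemia (sensitivity analysis).
viral = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5])
non_viral = np.array([7.3, 6.8, 8.1, 5.9, 7.7, 6.4])

# Continuous variable: Mann-Whitney U-test (non-parametric alternative
# to the t-test when normality cannot be assumed).
u_stat, p_mw = stats.mannwhitneyu(viral, non_viral, alternative="two-sided")

# Dichotomous variable: antiviral prescription by infection group.
table = [[3, 53],    # viral infection:     prescribed / not prescribed
         [31, 155]]  # non-viral infection: prescribed / not prescribed
# Fisher's exact test is preferred when expected cell counts are small;
# the chi-squared test is shown for comparison.
odds_ratio, p_fisher = stats.fisher_exact(table)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"Mann-Whitney U p = {p_mw:.3f}")
print(f"Fisher exact p = {p_fisher:.3f}, chi-squared p = {p_chi2:.3f}")
```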
| Study enrollment
We consecutively screened 272 outpatients presenting to the fever clinic.
| Laboratory tests
Meanwhile, there was no statistically significant difference in blood tests between patients with pneumonia and patients without pneumonia in our sensitivity analysis (Table S3).
| Therapy and clinical outcomes of ARI patients
Non-steroidal anti-inflammatory drugs, corticosteroids, antitussives, and mucolytic agents were used in 149 (61.5%) ARI patients presenting to the fever clinic to alleviate fever, soreness of the throat, and cough. Besides, 34 (14.0%) patients were empirically treated with antiviral agents including oseltamivir, peramivir, and ribavirin. Secondly, respiratory viral interferences have been found at the cellular, host, and population levels. 23,24 It has been observed that influenza viruses and RSV did not share peaks during the same prevalent period. 23 Given the non-typical clinical presentations, ARIs caused by either viruses or bacteria were almost indistinguishable. However, molecular diagnosis could provide important information for an optimal treatment strategy. Our data showed that empirical administration of antiviral agents or antibiotics might be misleading, suggesting the necessity of rapid diagnosis of the respiratory etiology of ARI.
ARI was associated with hospitalization and death, especially in patients with rapidly progressive severe pneumonia. 37,38 Although all of our patients eventually recovered, our data also showed that elderly people and patients with underlying diseases were more vulnerable to pneumonia, presenting with sputum and chest oppression. Our data highlight special care needs for the aging population and those with pre-existing diseases.
The presented study has limitations that warrant discussion.
| CONCLUSIONS
Our study identified the viral etiology and clinical features of ARI patients from a fever clinic in winter during the COVID-19 pandemic; HRV, PIV-III, and HCoV-NL63 were the most prevalent viruses identified. Molecular diagnosis in these ARI patients provides useful information to further understand the epidemiology and clinical features of respiratory viruses during the COVID-19 pandemic.
ACKNOWLEDGMENTS
This work was supported by the Nanjing Important Science & Technology Specific Projects (2021-11005) and Nanjing Medical Science and Technique Development Foundation (QRX17141 and YKK19056).
CONFLICT OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Chemiresistive Properties of Imprinted Fluorinated Graphene Films
The electrical conductivity of graphene materials is strongly sensitive to surface adsorbates, which makes them an excellent platform for the development of gas sensor devices. Functionalization of the graphene surface opens up the possibility of adjusting the sensor to a target molecule. Here, we investigated the sensor properties of fluorinated graphene films exposed to low concentrations of nitrogen dioxide (NO2). The films were produced by liquid-phase exfoliation of fluorinated graphite samples with compositions of CF0.08, CF0.23, and CF0.33. Fluorination of graphite using a BrF3/Br2 mixture at room temperature resulted in the covalent attachment of fluorine to basal carbon atoms, which was confirmed by X-ray photoelectron and Raman spectroscopies. Depending on the fluorination degree, the graphite powders had different dispersion abilities in toluene, which affected the average lateral size and thickness of the flakes. The films obtained from fluorinated graphite CF0.33 showed the highest relative response of ca. 43% towards 100 ppm NO2 and the best recovery of ca. 37% at room temperature.
Introduction
Graphene materials are widely examined as sensitive layers of resistive gas sensors owing to a strong dependence of their electrical properties on the surrounding medium [1]. It has been demonstrated that graphene-based materials are able to detect various electron-donor (NH3 [2], H2S [3], CO2 [4]) and electron-acceptor (NO, NO2 [5,6], organic nitro-compounds [7]) gases or vapors. Most of the studies are conducted with nitrogen dioxide NO2, as it is a convenient model system. In addition, detection of nitrogen oxide gases is significant for the protection of environmental and human health and for the detection of explosives [8,9]. The main disadvantages of graphene-based sensors are slow response and poor recovery, which make them unsuitable for real-time sensing at room temperature. The reasons for the long response and recovery times are the slow adsorption kinetics of molecules and the presence of high-energy sites, such as defects and oxygen-containing groups [10], which trap adsorbates and thus make sensor recovery difficult. On the other hand, structural defects, edge states, dopants, and functional groups play an important role in the electrical response of graphene [6,[11][12][13][14]. The controlled introduction of a certain type of defect and functional group enhances the performance of a graphene-based sensor by increasing the binding energy of the adsorbate and the charge transfer at the reactive sites. Functionalization and doping of graphene include noncovalent modification via π-π stacking, covalent attachment to the unsaturated double bonds, intercalation [15], and insertion of foreign elements into the graphene lattice [16]. The direct growth of functionalized or doped material is a challenging task, and it is limited.

The relative response and recovery were computed from the measured currents, where I_o and I_g are the currents before and after exposure to NO2 and I_a is the current after sensor recovery by pure argon. The characterization was made for FG films on silicon substrates. Raman spectra were recorded using excitation from an Ar+ laser at 514 nm on a LabRAM HR Evolution (Horiba, Kyoto, Japan) spectrometer. Optical absorbance measurements were performed using an Optizen 220 UV spectrometer (KLAB, Daejeon, Korea). Elemental analysis of the FG films was carried out by EDS on a Bruker QUANTAX spectrometer with an XFlash 6|60 detector. Morphology, thickness, and particle sizes of the FG films were characterized using a set of microscopic methods, namely optical microscopy on a BX 51TRF microscope (Olympus Corporation, Tokyo, Japan), SEM on a JEOL JSM-6700F microscope (Tokyo, Japan), and AFM on a Solver Pro (NT-MDT) microscope (Moscow, Russia). The AFM measurements were performed in tapping mode using NSG10 cantilevers (NT-MDT) with a tip curvature radius of 6 nm and an average force constant of 11.8 N/m.
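Equations (1) and (2) themselves did not survive extraction here. The short sketch below therefore computes the two quantities using the conventional chemiresistive definitions that are consistent with the current values named in the text; these formulas are an assumption, not necessarily the authors' verbatim expressions, and the example currents are hypothetical, chosen to reproduce the reported figures.

```python
# Hedged sketch of the relative response and recovery calculation.
# Assumed conventional definitions, consistent with the text:
#   I_o: current before NO2 exposure
#   I_g: current after NO2 exposure
#   I_a: current after recovery in pure argon

def relative_response(i_o: float, i_g: float) -> float:
    """Relative response in percent: conductivity change upon NO2 exposure."""
    return (i_g - i_o) / i_o * 100.0

def recovery(i_o: float, i_g: float, i_a: float) -> float:
    """Recovery in percent: fraction of the response reversed by argon purge."""
    return (i_g - i_a) / (i_g - i_o) * 100.0

# Hypothetical currents (arbitrary units) reproducing the reported
# ~43% response and ~37% recovery of the CF0.33-derived film.
i_o, i_g = 1.00, 1.43
i_a = i_g - 0.37 * (i_g - i_o)
print(f"response = {relative_response(i_o, i_g):.0f}%")  # ~43%
print(f"recovery = {recovery(i_o, i_g, i_a):.0f}%")      # ~37%
```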
Materials Characterization
The content of fluorine at the surface of the fluorinated graphites, determined from the XPS survey spectra, was ca. 7 at.% in the CF0.08 sample, ca. 19 at.% in the CF0.23 sample, and ca. 25 at.% in the CF0.33 sample. Values given by XPS that are slightly lower than the nominal compositions are due to partial hydrolysis of the sample surface by H2O present in laboratory air [32].
The XPS C 1s spectra of the samples are compared in Figure 1a. Each spectrum exhibits a set of the components corresponding to (1) sp 2 -hybridized carbon (ca. 284.5 eV), (2) carbon atoms located at C-F bonds (C*-C(F)) (ca. 285.3 eV), (3) sp 3 carbon atoms covalently bonded with fluorine atoms (C*-F) (ca. 288.0 eV), and (4-6)-CF x (x = 1-3) groups (ca. 289.7-292.0 eV) located at crystallite boundaries [33][34][35][36]. A broad high-energy component is assigned to π→π* electron excitations. The contribution of Csp 2 component to the C 1s spectrum decreases in the set CF 0.08 > CF 0.23 > CF 0.33 , and this indicates a reduction of the average size of graphene-like areas remaining in the layers after the fluorination procedure. A comparable integral intensity of the components C*-F and C*-C(F) means the formation of chains from fluorine atoms [37]. The co-existence of patterns from alternating CF chains and bare carbon chains, and graphene islands in the fluorinated layers is typical for the sample obtained by room temperature fluorination of graphite [33].
The XPS F 1s spectra of all the studied fluorinated graphites exhibited a dominant peak at ca. 687.4 eV (Figure 1b), attributed to weakened covalent C-F bonding [38]. A broad component around ca. 693.0 eV corresponds to the energy-loss satellite. The components between the main peak and the satellite are assigned to -CFx groups located at the edges of graphite crystallites [35]. The fraction of these groups is largest (~40%) in the CF0.23 sample, as both the F 1s and C 1s spectra reveal. It is likely that holding graphite in the vapors of highly reactive BrF3 for several months causes etching of the graphene layers at defects and grain boundaries, accompanied by the formation of -CFx groups. The stronger etching observed for CF0.23 can be related to the higher concentration of BrF3 (4.3 wt.%) in the reaction mixture as compared to that used for the synthesis of CF0.08.

The method used here for the fluorination of graphite leads to the covalent attachment of fluorine atoms to both sides of the graphene planes [32]. As a result, the interlayer distance in fluorinated graphites increases, while the van der Waals forces between the layers decrease as compared to graphite. Weakening of the interaction between the layers allows easy exfoliation of fluorinated graphites in liquid media [39,40]. We tried tetrahydrofuran, chloroform, acetonitrile, isopropanol, and toluene. Freshly prepared dispersions of the fluorinated graphites have a yellowish color. This color changed under sonication of the dispersions in acetonitrile and tetrahydrofuran due to partial reduction of the fluorinated graphene layers. Ultrasound treatment of the samples in toluene caused the most effective splitting of the layers among the tested solvents. Figure 2a shows colored dispersions of CF0.08, CF0.23, and CF0.33 in toluene, which were stable after about an hour. The dispersion color darkens with decreasing fluorine content in the parent fluorinated graphite. Optical absorption spectra measured for the dispersions confirmed the reduced light transmittance of the low-fluorinated samples (Figure 2b).

Large agglomerates precipitate within an hour of sedimentation of the dispersion, and their removal produces a supernatant that is stable for several hours. To increase the fraction of few-layered FG, additional stages of sonication and centrifugation were used. Optical microscopy showed that this treatment decreases the average size of the FG flakes from 3-20 to 1-5 µm (Figure 3a,b). Moreover, the flakes become more transparent, reflecting a decrease in the number of adjacent layers. Filtration of FG dispersions after sedimentation of large aggregates, and of those after the additional centrifugation step, yielded the films on the CN membranes denoted FG-s and FG-c, respectively.

These films can be transferred to various substrates. SEM images of the films deposited from the CF0.08 and CF0.23 parents on silicon substrates are compared in Figure 3c,d, respectively. Both films consist of wrinkled flakes; however, the flakes in the less fluorinated material have a flatter structure. Covalent C-F bonds disrupt the conjugated π-system of the graphene planes, thus reducing their rigidity. Soft layers fold under mechanical treatment in solvent, and the degree of this deformation depends on the number and distribution of C-F bonds. The transparency of an FG film deposited on the PET substrate demonstrates an advantage of the proposed approach for obtaining thin graphene-based layers (Figure 3e). The black stripes are electrodes painted with silver paste. The integrity of the film is maintained when the substrate is bent. The thickness of the obtained films was ~150-250 nm, as determined by AFM (Figure 3f).

Raman spectroscopy proved the functionalization of the graphene layers in the studied materials (Figure 4). Fluorination of graphite led to the appearance of the defect-activated peaks D at 1351 cm−1 and D' at 1604 cm−1 (Figure 4a), which correspond to single-phonon intervalley and intravalley scattering processes, respectively. The intensity of these peaks relative to the G peak changes non-monotonically with the fluorine content, similar to previous results on functionalized graphenes [41]. The D' peak is separated from the G peak in the spectrum of the CF0.08 powder, and these peaks merge as the fluorine content increases. All the single-phonon peaks are broadened for the FG films (Figure 4b) as compared to the parent powder materials. The intensity of the D peak increases significantly due to the formation of new edge states resulting from the exfoliation process.

An increase in the fluorination degree leads to a blue shift of the two-phonon G' peak for both powders and FG films, which evidences p-type doping of graphene [42].
Sensor Tests
The effect of the size of the FG particles on the sensing performance was examined using the FG-s and FG-c films obtained from the CF0.23 sample. The tests were performed toward 100 ppm of NO2 in argon. Figure 5a compares the dynamic change of the relative current of the films during five cycles of exposure to NO2, each followed by purging with pure argon. The FG-s sensor showed a ca. 23% increase in conductivity in the first cycle. The FG-c film, obtained from the smaller particle fraction, showed an approximately 1.5-times higher change in conductivity. The relative response and recovery of the films estimated using Equations (1) and (2) are presented in Figure 5b,c, respectively. Both samples showed a gradual decrease in response with cycling (Figure 5b), which could be attributed to the non-reversible adsorption of NO2 at room temperature. The recovery behavior of the sensors was opposite to the response behavior. The FG-s sample showed a recovery of ca. 30%, which was higher than that of FG-c (Figure 5c). Similar dependences of the relative response and recovery on the particle size were also observed for the FG sensors obtained from the CF0.08 and CF0.33 samples.
The sensor performance depending on the fluorine loading was studied for the thinner FG films obtained after the centrifugation step. Figure 5d compares the run-to-run tests of the FG-c films obtained from the fluorinated graphites with different fluorine content. The largest changes in conductivity under the adsorption of NO2 molecules were detected for the FG-c film obtained from the CF0.33 sample. The relative responses of the FG-c sensors in five consecutive operation cycles are presented in Figure 5e. In the first cycle, the response was 21, 28, and 42% for the sensors prepared from the CF0.08, CF0.23, and CF0.33 samples, respectively. Then, the values decreased, whereas the dependence on the fluorine loading remained. Moreover, the reversibility of the signal at room temperature was strongly improved with the increase in fluorine content in the FG film due to better recovery of the sensor (Figure 5f).
The FG-c films obtained from the fluorinated graphites with the lowest and highest fluorination degrees, CF0.08 and CF0.33, were additionally tested at operation temperatures varied from 30 to 80 °C in steps of 10 °C. At each temperature, one standard cycle was performed to determine the response of the sensor to 100 ppm of NO2 in argon. After the measurement, the sample was heated to 80 °C and annealed at this temperature for 1 h to achieve the desorption of NO2, and then it was cooled down to the required operation temperature. Both sensors showed a faster response and recovery with the rise of the operation temperature. The response gradually decreased from 23 to 8.9% for the FG-c sensor prepared from graphite fluoride CF0.08 and from 33 to 20% for the FG-c sensor from CF0.33 when the temperature changed from 30 to 80 °C (Figure 6). The more highly fluorinated sample showed much better recovery at all temperatures, reaching a maximum level of ca. 80% at 80 °C after 12 min of sensor purging with pure argon.
Discussion
Since Novoselov and coworkers [1] reported the detection of a single NO2 molecule on mechanically exfoliated graphene, the sensor properties of graphene materials as an active material have been widely investigated. It was shown that graphene exhibits maximum sensitivity when the charge carrier concentration corresponds to the neutrality point. On the other hand, ideal graphene is chemically inert and does not interact specifically with molecules [43]. The fluctuations in the spectral density of the low-frequency current induced by some gases were proposed as a sensing parameter to enhance the selectivity of graphene [44]. Surface functionalization and operation at elevated temperatures reduce the response and recovery times of graphene-based sensors [45]. These sensing parameters can be controlled by the density of the adsorption sites [46]. Fluorination of graphene is especially suitable in that case, because it introduces a single type of functional group on the basal plane and allows tuning of the F/C ratio. The electrical transport properties of partially fluorinated graphene can be adjusted by tuning the fluorination degree for application in chemical sensors. The charge carrier mobility of single-flake devices was found to increase with the fluorination degree, reaching 2000-3000 cm2 V−1 s−1 [47]. In the case of thin films, the charge transport is largely affected by edge/edge, edge/plane, and plane/plane junctions. Additionally, we have previously demonstrated that C-F groups modify the surface chemistry of graphene, forming specific sites for molecule adsorption [31,48]. A similar effect has been achieved for plasma-fluorinated CVD-graphene, whose enhanced sensor performance at room temperature was attributed to the p-type doped nature of FG and stronger physical adsorption of ammonia [26]. Park and co-authors proposed modifying graphene oxide with fluorine and achieved better sensitivity of the fluorinated sensor to ammonia; however, all obtained sensors failed to recover at room temperature [49].
Covalent attachment of fluorine to the basal plane and the edges of graphene turns part of the carbon atoms from the sp2- to the sp3-hybridization state. The XPS study of the graphite fluorides used in the present work for the preparation of FG films showed a decrease in the fraction of sp2-hybridized carbon regions and an increase in the population of sp3 defects with the synthesis time and the concentration of BrF3 in the reaction mixture (Figure 1). These defects led to reliable p-type doping of the graphene layers and, as a result, to an increase in the resistivity of the FG films (Table 1). Such behavior is consistent with previous works on the fluorination [47] and oxidation [50] of graphene. For the FG films obtained using the sedimentation procedure without or with the centrifugation treatment, we observed a decrease in the conductivity by more than three orders of magnitude when the fluorine content in the parent fluorinated graphite increased from ca. 7 to ca. 25 at%. The films obtained from the centrifuged dispersions of the FG samples showed a lower resistivity for all fluorination degrees due to partial fluorine detachment with increasing sonication time. EDS showed that the fluorine content decreases from ca. 19 at.% for the graphite powder CF0.23 to ca. 15 at% for the FG-s film and ca. 14 at% for the FG-c film. Exposure of the FG films to the electron-acceptor NO2 molecule resulted in an increase in conductivity, reflecting an increase in the number of charge carriers. We examined the influence of particle size on the sensor response of FG films obtained from the sedimented and centrifuged dispersions of graphite fluoride CF0.23. Run-to-run cycling showed that the small-size fraction has a higher sensor response than the large-size fraction (Figure 5b). This could be attributed to a more effective influence of the adsorbed molecules on the charge states of the graphene particles due to the higher surface-to-volume ratio [51]. It was shown that monolayer and bilayer graphene have the highest sensitivity to adsorbed molecules [11,52], and the same tendency was observed for carbon nanotubes, where the sensitivity dropped from single-walled to multi-walled nanotubes [53]. The recovery of the FG sensors (Figure 5c) showed behavior opposite to the response. As compared to the FG-s sensor, the centrifuged particles had a lower degree of recovery. Probably, the small particles formed a more developed pore structure, which trapped the NO2 molecules.
The response and recovery of the FG sensors increased with the fluorine content (Table 1). Fluorinated graphene is a p-type semiconductor, since fluorine is a strong electron acceptor [22]. An increase in the content of covalently attached fluorine leads to an increase in the concentration of hole carriers in the material. Additionally, fluorination introduces adsorption sites near the fluorine groups [26,48]. To further highlight the influence of fluorine attachment to the graphene plane on the sensor performance of FG films, we carried out experiments at temperatures between 30 and 80 °C (Figure 6). An increase in the operation temperature resulted in faster saturation of the response, which was achieved within ca. 3 min at 80 °C for the FG-c sensor prepared from CF0.33. Additionally, we observed retention of the response with an increase in the operation temperature for the sensors. The observed trend could be related to two factors. The first factor is an increase in the concentration of charge carriers in semiconducting materials with temperature [50]. The second factor is the change in conductivity due to an increase in the adsorption rate of molecules on the surface of the FG films. We observed a higher impact of the first factor for both sensors, resulting in a decrease in the response by a factor of ca. 1.5 and ca. 3 for the FG-c sensors based on CF0.33 and CF0.08, respectively, as the operation temperature increased from 30 to 80 °C. The smaller loss of response for the former sensor reflects a higher adsorption rate of NO2 molecules (Figure 6). The higher fluorination degree of that sensor also resulted in a better recovery level of 80% within 12 min at 80 °C, which is comparable with the recovery of reduced graphene oxide at 150 °C [54]. This difference could be attributed to the lower adsorption energy of NO2 on fluorinated graphene. Other two-dimensional materials, particularly MoS2, WS2, and black phosphorus, are widely examined for gas sensing due to their high surface-to-volume ratio and tunable band gap [55]. Unlike FG, the fabrication of sensors from the above materials is currently costly. Contamination and easy oxidation of their surfaces require a capping step to prevent problems with the reliability and stability of these devices [56].
Conclusions
Fluorination with a BrF3/Br2 gaseous mixture at room temperature was used to synthesize fluorinated graphite with a tunable F/C ratio by changing the synthesis time and the concentration of BrF3. Fluorinated graphites are easily exfoliated in toluene, forming stable suspensions when the fluorine content is higher than 7%. A simple imprinting method was developed to prepare uniform FG films with a thickness of ~200 nm on different substrates. Thin FG films can be transferred onto flexible polymer substrates to obtain transparent and bendable films. We revealed the influence of the particle fraction and functional composition of FG on the performance of thin-film sensors exposed to nitrogen dioxide. The lateral size of the FG flakes was changed by additional stages of sonication and centrifugation. It was shown that ultrasonic treatment of fluorinated graphite resulted in partial defluorination of the FG flakes at all fluorination degrees. The sensor properties of the FG films were tested with 100 ppm NO2 at room and elevated temperatures. It was shown that the particle fraction affects the relative response via the efficiency of charge transfer between the molecule and the graphene layers. The sensor characteristics of FG improved with the fluorine loading. The fluorine groups act as scattering centers, dopants, and adsorption sites, which improves the electrical response and the response/recovery rates. We therefore conclude that fluorinated graphites are highly efficient precursors for the fabrication of thin-film graphene-based sensors, owing to their excellent exfoliation in organic solvents and tunable electronic properties.

Funding: The work on the materials synthesis was funded by the Russian Foundation for Basic Research (Grant No. 18-29-19073).
Multi-Scale Verification of Distributed Synchronisation
Algorithms for the synchronisation of clocks across networks are both common and important within distributed systems. We here address not only the formal modelling of these algorithms, but also the formal verification of their behaviour. Of particular importance is the strong link between the very different levels of abstraction at which the algorithms may be verified. Our contribution is primarily the formalisation of this connection between individual models and population-based models, and the subsequent verification that is then possible. While the technique is applicable across a range of synchronisation algorithms, we particularly focus on the synchronisation of (biologically-inspired) pulse-coupled oscillators, a widely used approach in practical distributed systems. For this application domain, different levels of abstraction are crucial: models based on the behaviour of an individual process are able to capture the details of distinguished nodes in possibly heterogeneous networks, where each node may exhibit different behaviour. On the other hand, collective models assume homogeneous sets of processes, and allow the behaviour of the network to be analysed at the global level. System-wide parameters may be easily adjusted, for example environmental factors inhibiting the reliability of the shared communication medium. This work provides a formal bridge across the abstraction gap separating the individual models and the population-based models for this important class of synchronisation algorithms.
to employ methods to use the shared communication medium without too many conflicts, e.g., in the form of collisions. Several protocols to organise shared medium access have been developed and analysed [1,35]. These protocols typically identify a common time frame and divide this frame into slots associated to each node. Thus every node has an allocated time slot that it may use to communicate its messages onto the shared medium.
Such an approach introduces the need for a common clock between the nodes, i.e., they need to synchronise. A valuable approach to achieve synchrony of nodes is the implementation of biologically-inspired pulse-coupled oscillators (PCOs) [25]. A network of PCOs synchronises in the following way: all oscillators have a similar clock cycle at the end of which they fire. That is, they transmit a broadcast message which is received by all oscillators in their communication range. These oscillators then adjust their own position within their clock cycle according to a phase response function. Depending on the concrete implementation, they may move their current position within the clock cycle closer to its end, or closer to its start.
Most analyses of the synchronisation behaviour of PCOs are concerned with continuous clock cycles, i.e., where clocks take real values from the interval [0, 1]. However, the smaller devices get, the more important it is to save memory and computing time for such low-level functionality. Even a floating point number may need too much memory, compared to an implementation with, for example, a four-bit vector. Hence, in previous work, we chose to analyse the behaviour of discrete time PCOs [16].
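To make the mechanism concrete, the following minimal Python sketch simulates a fully connected network of discrete-time pulse-coupled oscillators with a simple excitatory phase-response function. It is an illustration only: the concrete model studied in this paper additionally covers broadcast failures, refractory behaviour, and general coupling functions, all omitted here, and the parameter values are assumptions.

```python
# Minimal sketch (not the paper's exact model) of a network of
# discrete-time pulse-coupled oscillators. Each oscillator has a clock
# value in {0, ..., T-1}; on reaching T-1 it "fires" (broadcasts), and
# every other oscillator perturbs its own clock towards the cycle end.
import random

T = 10          # discrete clock cycle length
N = 8           # number of oscillators (fully connected network)
EPSILON = 1.0   # coupling strength (assumed parameter)

clocks = [random.randint(0, T - 1) for _ in range(N)]
for step in range(200):
    fired = sum(1 for phi in clocks if phi == T - 1)  # oscillators firing now
    next_clocks = []
    for phi in clocks:
        if phi == T - 1:
            next_clocks.append(0)                     # restart after firing
        else:
            # phase response: advance proportionally to perceived firings,
            # clipped to the end of the cycle, then tick forward one step
            perturbed = min(T - 1, phi + int(EPSILON * fired))
            next_clocks.append((perturbed + 1) % T)
    clocks = next_clocks
    if len(set(clocks)) == 1:                         # all phases coincide
        print(f"synchronised after {step + 1} steps")
        break
else:
    print("did not synchronise within 200 steps")
```

Oscillators pushed past the cycle end are absorbed into the firing group, so with this excitatory coupling the fully connected network typically synchronises within a few cycles.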
In contrast to continuous time PCOs, networks of discrete time PCOs are not always guaranteed to synchronise. Instead, whether they synchronise or not depends on the type of coupling between the oscillators and their common phase-response function. We analysed the behaviour of such networks for different parameters via model-checking, both qualitatively, to determine for which parameters the networks synchronise, and quantitatively, to determine how long they need to achieve a synchronised state and how much energy is used to achieve this [17]. In the context of large numbers of single oscillators, for example in wireless sensor networks, the well-known state-space explosion problem of the model-checking approach is extremely important [9]. We formalised a network of oscillators as population models [13] which exploit the behavioural homogeneity of the nodes to encode the global state efficiently. This allows the network size to be increased above what would be feasible when distinguishing each node. But the construction of a population model from a given oscillator specification is not straightforward, and in particular, it is not obvious whether the constructed population model correctly reflects the behaviour of the oscillators. This results in an 'abstraction gap': after abstracting into populations, how can we be sure that the abstraction process was correct and that the results of verification of population models actually hold for the concrete models on which they are based?
In this paper, we remedy this lack of certainty by proving the correspondence of our population model with an explicit formalisation of the oscillators. To that end, we present the concrete oscillator model as well as its formalisation as a discrete-time Markov chain. Subsequently, we describe the corresponding population model, and show how we can, in addition to the abstraction created by the populations, reduce the state space even further to facilitate the analysis. Finally, we prove that the behaviour of a network of concrete oscillators can be simulated by the population model. We cannot prove a one-to-one correspondence, since the concrete model implicitly includes the possibility of identifying individual oscillators, which is exactly what the population model abstracts from. However, by providing a formal notion of abstraction, we prove that population models are a truthful abstraction of concrete models.
The paper is structured as follows. In Sect. 2, we review a selection of related work, both for models of pulse-coupled oscillators, as well as approaches for their verification. After an introduction of preliminary notions in Sect. 3, we present the concrete model of single oscillators, both as an algorithm and as a discrete-time Markov chain derived from this algorithm, in Sect. 4. The abstract model in terms of population models and proofs about their properties are contained in Sect. 5. In Sect. 6, we prove the correspondence between these two types of models, and conclude in Sect. 7.
Related Work
The canonical model of pulse-coupled oscillators, and their synchronisation, was formulated by Mirollo and Strogatz [25], and based on Peskin's model of a cardiac pacemaker [29]. Here the progression of an oscillator through its oscillation cycle is given by a real value in the interval [0, 1]. Mirollo and Strogatz proved that with a convex phase response function, a network of mutually coupled oscillators always converges, i.e., their position within the oscillation cycle eventually coincides. Such a model has been shown to be applicable to the clock synchronisation of wireless sensor nodes [31] and swarms of robots [27].
Synchronisation algorithms based on pulse-coupled oscillators are often beneficial in unreliable, decentralised networks, where other synchronisation algorithms are not appropriate. For example, the Flooding Time Synchronisation Protocol (FTSP) [23] requires the use of an arbitrary root node. In situations where the root becomes unavailable due to communication failure or power outage, FTSP will have to assign another root node. When implemented on unreliable, decentralised networks, FTSP may spend considerable resources on repeatedly assigning root nodes, which may slow down or prevent synchronisation [8]. Other algorithms such as the Berkeley algorithm [18] and Cristian's algorithm [11] require the use of centralised time servers, which is problematic for unreliable, decentralised networks.
Several decentralised network algorithms for synchronisation are based on pulse-coupled oscillators [31,34]. For example, the Gradient Time Synchronisation Protocol (GTSP) by Sommer and Wattenhofer [30] achieves synchronisation by having nodes send their current clock value to their neighbours. Each node then calculates the average of the clock values received and its own clock value. This process is then repeated to maintain synchronisation. Another approach to synchronisation, the Pulse-Coupled Oscillator Protocol [26], makes use of refractory periods after sending messages containing time information. During the refractory period, no more messages are sent, which reduces network bandwidth and energy usage. A similar approach is used in the FiGo protocol [8], which combines biologically inspired synchronisation with information distribution via gossiping. All of these approaches use different phase response functions.
In general, synchronisation algorithms based on PCOs are more robust for unreliable networks, as they do not require centralised nodes and can work with only partial network connectivity [8]. They are particularly useful for battery-powered nodes in wireless networks, as the node can be placed in a low-power mode during the refractory period, thus reducing energy usage. (The clock keeps ticking even in low-power mode, thanks to the design of microcontrollers such as the 'Atmel ATmega128L' [4].) Synchronisation of clocks for networks of nodes has been investigated from different perspectives. Heidarian et al. [20] analysed the behaviour of a synchronisation protocol based on time allocation slots for up to four nodes and different topologies, from fully connected networks to line topologies. They modelled the protocol as timed automata [2], and used the model-checker UPPAAL [7] to examine its worst-case behaviour. Their model is based on continuous time, and in particular, they did not model pulse-coupled oscillators.
Bartocci et al. [5] described pulse-coupled oscillators as extended timed automata with suitable semantics to model their peculiarities. They defined a dedicated logic to analyse the behaviour of a network of such automata along traces, and used a pacemaker as a case study to verify the eventual synchronisation and the time needed to achieve this.
Our models and methods differ from all of these approaches. This is, of course, evident for all the mentioned work that is not concerned with pulse-coupled oscillators. However, we also define the oscillation cycle to consist of discrete steps. To the best of our knowledge, with the exception of the paper by Webster et al. [33] and our previous work [16,17], there is no other work concerned with PCOs with discrete oscillation cycles. Furthermore, all of these approaches distinguish between single oscillators in the network, while the properties of interest relate to global behaviour. This discrepancy between local modelling and global analysis restricts the size of networks that can be analysed, due to the state-space explosion. To extend the size of analysable networks, we employ population models, a counting-abstraction of such networks [12]. Instead of identifying each oscillator on its own, we record how many oscillators are in each step of the oscillation cycle. This reduces the state space tremendously by exploiting the symmetries in the model [13], and we are hence able to extend the size of analysable networks.
The notion of population models should not be confused with population protocols [3], a formalism to express distributed algorithms. In contrast to our setting, communication in population protocols is always between two agents, where one agent initiates the communication and the other responds. Furthermore, even though the agents cannot identify the other agents in the network, within the global model each agent is uniquely associated with a state. In our model, we cannot distinguish between two different agents sharing the same state, even at the global level. Finally, our oscillators may change their state without interacting with other oscillators, while the agents in a population protocol must communicate with another agent to change their internal state.
We will present a relation between the concrete models, where each oscillator can be identified, and corresponding population models, and show that these two models are in a simulation relation [24]. More precisely, the concrete model weakly simulates its abstraction, since the oscillators have to take transitions independently, while in the population model, all oscillators evolve in a single step.
Similarly to typical definitions of counter abstractions [14,6], we use counters to model concurrent entities that are indistinguishable for our purposes. For example, to analyse the probability of eventually reaching a synchronised state, we are not interested in an order of oscillators, which would be artificial anyway. However, in contrast to these approaches, we do not include means to introduce new entities into a model. That is, the values within our population models are naturally bounded by the number of oscillators within the network.
Preliminaries
In this section we define discrete-time Markov chains (DTMCs), stochastic processes with discrete state space and discrete time, and introduce Probabilistic Computation Tree Logic (PCTL), a logic that can be used to reason about probabilistic reachability and rewards in these processes.
Throughout this paper, we use the notation f ⊕ [x → y], where f is a function, to express updating f at x by y. That is, the function that coincides with f , except for x, where it takes the value y.
Discrete-Time Markov Chains
DTMCs can be used to model systems where the evolution of the system at any moment in time can be represented by a discrete probabilistic choice over several outcomes.
Definition 1 A discrete-time Markov chain D is a tuple (Q, σ_0, P, L), where Q is a finite set of states, σ_0 ∈ Q is the initial state, and L : Q → P(V) is a labelling function that assigns properties of interest from a set of labels V to states. P : Q × Q → [0, 1] is the transition probability matrix, subject to Σ_{σ′ ∈ Q} P(σ, σ′) = 1 for all σ ∈ Q, where P(σ, σ′) gives the probability of transitioning from σ to σ′. We say that there is a transition between two states σ, σ′ ∈ Q if P(σ, σ′) > 0.
Intuitively, a DTMC is a state transition system where transitions between states are labelled with probabilities greater than 0 and where states are labelled with properties of interest. An execution path ω of a DTMC D = (Q, σ_0, P, L) is a non-empty finite or infinite sequence σ_0 σ_1 σ_2 · · · where σ_i ∈ Q and P(σ_i, σ_{i+1}) > 0 for i ≥ 0. We denote the set of all paths starting in state σ by Paths^D(σ), and the set of all finite paths starting in σ by Paths^D_f(σ). For paths where the first state along that path is the initial state σ_0 we simply write Paths^D and Paths^D_f, and we write Paths and Paths_f if D is clear from the context. For a finite path ω_f ∈ Paths_f, the cylinder set of ω_f is the set of all infinite paths in Paths that share the prefix ω_f. The probability of taking a finite path σ_0 σ_1 · · · σ_n ∈ Paths_f is given by Π_{i=1}^{n} P(σ_{i−1}, σ_i). This measure over finite paths can be extended to a probability measure Pr over the set of infinite paths Paths, where the underlying σ-algebra over Paths is the smallest one containing all cylinder sets of paths in Paths_f. For a detailed description of the construction of the probability measure we refer the reader to [21].
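As a concrete illustration of this path measure, the following minimal Python sketch (the two-state chain and all names are illustrative, not part of the formal development) represents P as a nested dictionary and multiplies transition probabilities along a finite path:

    from itertools import pairwise  # Python 3.10+

    # Transition matrix P of a two-state DTMC as a nested dict: P[s][t] is the
    # probability of moving from s to t; each row sums to 1 (Definition 1).
    P = {"a": {"a": 0.5, "b": 0.5},
         "b": {"b": 1.0}}

    def path_probability(path):
        # Probability of the finite path s0 s1 ... sn: the product of the
        # transition probabilities P(s_{i-1}, s_i); this measure extends to
        # the cylinder set of infinite paths sharing the prefix.
        prob = 1.0
        for s, t in pairwise(path):
            prob *= P[s].get(t, 0.0)
        return prob

    assert path_probability(["a", "a", "b"]) == 0.25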
Probabilistic Computation Tree Logic
Probabilistic Computation Tree Logic [19] (PCTL) is a probabilistic extension of the temporal logic CTL. Properties for DTMCs can be formulated in PCTL and then checked against the DTMCs using model checking.
Formulas denoted by Φ are state formulas and formulas denoted by Ψ are path formulas. A PCTL formula is always a state formula, and a path formula can only occur inside the P operator.

Definition 2 The syntax of PCTL state and path formulas over a set of labels V is given by

Φ ::= v | ¬Φ | Φ ∧ Φ | P_{⊲⊳λ}[Ψ]    and    Ψ ::= X Φ | Φ U^{≤k} Φ,

where v ∈ V, ⊲⊳ ∈ {<, ≤, ≥, >}, λ ∈ [0, 1], and k ∈ N ∪ {∞}.

We now give the semantics of PCTL over a DTMC.

Definition 3 Given a DTMC D = (Q, σ_0, P, L), we inductively define the satisfaction relation |= for any state σ ∈ Q as follows:

σ |= v iff v ∈ L(σ),
σ |= ¬Φ iff σ does not satisfy Φ,
σ |= Φ ∧ Φ′ iff σ |= Φ and σ |= Φ′,
σ |= P_{⊲⊳λ}[Ψ] iff Pr{ω ∈ Paths(σ) | ω |= Ψ} ⊲⊳ λ,

where v ∈ V, and for any path ω = σ_0 σ_1 σ_2 · · · of D as follows:

ω |= X Φ iff σ_1 |= Φ,
ω |= Φ U^{≤k} Φ′ iff there is an i ≤ k such that σ_i |= Φ′ and σ_j |= Φ for all j < i.

Disjunction, true, false, and implication are derived as usual, and we define eventuality as F^{≤k} Φ ≡ true U^{≤k} Φ. We simply write F Φ and Φ U Φ′ when k = ∞.
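To make the quantitative semantics of the P operator tangible, the sketch below computes Pr{ω ∈ Paths(σ) | ω |= Φ U^{≤k} Φ′} for every state by the standard k-step recursion; the dictionary encoding is an assumption for illustration, not the representation used by a model checker:

    def prob_bounded_until(P, sat_phi, sat_phi2, k):
        # prob[s] = Pr{omega in Paths(s) | omega |= Phi U<=k Phi'}.
        # States in sat_phi2 satisfy Phi' (probability 1); states satisfying
        # neither formula can never satisfy the until formula (probability 0).
        prob = {s: (1.0 if s in sat_phi2 else 0.0) for s in P}
        for _ in range(k):
            prob = {s: 1.0 if s in sat_phi2
                    else (sum(p * prob[t] for t, p in P[s].items())
                          if s in sat_phi else 0.0)
                    for s in P}
        return prob

    P = {"a": {"a": 0.5, "b": 0.5}, "b": {"b": 1.0}}
    assert prob_bounded_until(P, {"a"}, {"b"}, 2)["a"] == 0.75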
Concrete Model of a Network of Pulse-Coupled Oscillators
In this section we give a brief introduction to the formal model of a single pulse-coupled oscillator, as originally presented in previous work [16]. Subsequently, we encode fully-coupled networks of such oscillators as discrete-time Markov chains.
Pulse-Coupled Oscillator Model
We consider a fully-coupled network of pulse-coupled oscillators with identical dynamics over discrete time. The phase of an oscillator u at time t is denoted by φ_u(t). The phase of an oscillator progresses through a sequence of discrete integer values bounded by some T ≥ 1. The phase progression over time of a single uncoupled oscillator is determined by the successor function, where the phase increases over time until it equals T, at which point the oscillator will fire in the next moment in time and the phase will reset to one. The phase progression of an uncoupled oscillator is therefore cyclic with period T, and we refer to one cycle as an oscillation cycle. When an oscillator fires, it may happen that its firing is not perceived by any of the other oscillators coupled to it. We call this a broadcast failure and denote its probability by µ ∈ [0, 1]. Note that µ is a global parameter, hence the chance of broadcast failure is identical for all oscillators. When an oscillator fires, and a broadcast failure does not occur, it perturbs the phase of all oscillators to which it is coupled; we use α_u(t) to denote the number of all other oscillators that are coupled to u and will fire at time t.
Definition 4 The phase response function is a positive increasing function ∆ : {1, . . . , T } × N × R + → N that maps the phase of an oscillator u, the number of other oscillators perceived to be firing by u, and a real value defining the strength of the coupling between oscillators, to an integer value corresponding to the perturbation to phase induced by the firing of oscillators where broadcast failures did not occur. We require ∆(Φ, 0, ǫ) = 0 for all possible phase response functions, that is, oscillators are only perturbed if they perceive at least one firing oscillator.
We can introduce a refractory period into the oscillation cycle of each oscillator. A refractory period is an interval of discrete values [1, R] ⊆ [1, T], where R ≤ T is the size of the refractory period, such that if φ_u(t) is inside the interval, for some oscillator u at time t, then u cannot be perturbed by other oscillators to which it is coupled. If R = 0 then we set [1, R] = ∅, and there is no refractory period at all.
Definition 5 The refractory function ref : {1, . . . , T} × N → N takes as parameters Φ, the phase of an oscillator, and δ, the degree of perturbance to that phase, and returns Φ if it is in the refractory period, or Φ + δ otherwise; that is, ref(Φ, δ) = Φ if Φ ∈ [1, R], and ref(Φ, δ) = Φ + δ otherwise.
The phase evolution of an oscillator u over time is then defined as follows, where the update function and firing predicate respectively denote the updated phase of oscillator u at time t in the next moment in time, and the firing of oscillator u at time t:

update_u(t) = 1 + ref(φ_u(t), ∆(φ_u(t), α_u(t), ǫ)),
fire_u(t) iff update_u(t) > T,

and the phase of u in the next moment in time is φ_u(t + 1) = 1 if fire_u(t), and φ_u(t + 1) = update_u(t) otherwise.
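A minimal sketch of this phase evolution, assuming the linear phase response function ∆(Φ, α, ǫ) = [Φ · α · ǫ] of the running example in Sect. 5 (and noting that Python's round differs from rounding to the closest integer only at ties):

    T, R, EPS = 10, 2, 0.115

    def delta(phi, alpha, eps=EPS):
        # Linear phase response, as assumed; Delta(phi, 0, eps) = 0 holds.
        return round(phi * alpha * eps)

    def ref(phi, d):
        # Perturbations are ignored while the phase lies in the refractory
        # period [1, R] (Definition 5).
        return phi if phi <= R else phi + d

    def step(phi, alpha):
        # One discrete time step: returns (next phase, whether u fired).
        update = 1 + ref(phi, delta(phi, alpha))
        return (1, True) if update > T else (update, False)

    # An uncoupled oscillator (alpha = 0) cycles 1, 2, ..., T, 1, ...
    phi, trace = 1, []
    for _ in range(T):
        phi, fired = step(phi, alpha=0)
        trace.append(phi)
    assert trace == [2, 3, 4, 5, 6, 7, 8, 9, 10, 1]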
Modelling the Network Using a DTMC
We model the whole network of oscillators as a single DTMC D = (Q, s_0, P, L), where each state s ∈ Q denotes a global state of the network. More precisely, the labelling function uniquely maps each state s to a combined encoding of the individual state of each oscillator. For simplicity, we identify the label of a state with the state itself, and hence we omit L from the DTMC, but describe each member of Q via its internal state.
We model each transition of an oscillator as a single transition within the DTMC. However, since the oscillators may influence each other within a single time step (that is, when they are firing), we cannot simply allow for arbitrary sequences of transitions. For instance, to model that all the oscillators progress on a similar time-scale, we need to prevent a single oscillator from taking a transition and thus progressing its phase without giving the other oscillators a chance to do the same. We achieve this by the following means: we divide the internal computation of each oscillator into two modes: start and update, and we add a counter to the model, containing the number of oscillators that fire.
The counter also possesses both modes, and resets at the start of each "round" of computation. First, in the start mode, each oscillator checks whether it would fire, according to its phase response function and the current number of oscillators that already fired, as given by the counter. If it does, it increases the counter and updates its mode to update, otherwise it just updates its mode. If all oscillators are in the update mode, they compute their new phases in a single step, according to the phase response function and the current state of the environment counter. Furthermore, we impose an order on the evaluation on the oscillators in the start mode if at least one oscillator fires, starting from the highest phase to the lowest. This ensures that firing oscillators are perceived by the other nodes, and thus may lead to the firing of the latter. This way of modelling the nodes implies the assumption that the time window during which each oscillator listens on the shared medium is long enough to perceive the firing of any other oscillator.
The general idea of the progress of the network of oscillators is visualised in Fig. 1. In the figure, each rounded rectangle shows a state of a network of four oscillators. The circles represent the nodes, each inscribed with its current phase and an abbreviation of its mode. A node that is about to fire is indicated by a starred circle, while a shaded circle indicates a node that is within the refractory period. The rectangle denotes the environment counter, with its corresponding value and mode. The phase response function is arbitrarily chosen, and of minor importance for the example.
In the first state, all outgoing transitions only check whether to increase the counter. Since no oscillator is in the firing phase, all oscillators just update their mode (observe that the single arrow actually denotes four transitions). In the next step, all oscillators increase their phase by one, and reset their mode to start. In the next four transitions, oscillator 2 fires and increases the counter, which in turn is sufficient for oscillator 3 to fire as well. Hence they both increase the counter by one, while oscillators 1 and 4 do not. During the last transition of the example, oscillators 2 and 3 reset their phase to one, while oscillator 1 is perturbed and increases its phase by two steps at once. Oscillator 4 is within its refractory period, which means that it is not perturbed, and simply increments its phase. In addition to these transitions, we also need some bookkeeping transitions to ensure that the counter is reset before the oscillators check their phase response. Furthermore, observe that in the example, it is crucial that oscillator 3 checks its response after oscillator 2 has increased the counter, since otherwise 3 would not have been perturbed to fire.
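The sketch below condenses one such round into straight-line Python, again assuming the linear ∆ of the running example; it is a simplification in that the interleaved bookkeeping and mode-switching transitions are collapsed, but it preserves the essential order: fire checks against the running counter from the highest phase downwards, then a simultaneous phase update against the final counter:

    import random

    T, R, EPS, MU = 10, 2, 0.115, 0.1

    def delta(phi, alpha):
        # Linear phase response of the running example (an assumption here).
        return round(phi * alpha * EPS)

    def ref(phi, d):
        return phi if phi <= R else phi + d

    def network_round(phases, rng=random):
        counter = 0                              # environment counter, reset
        fires = [False] * len(phases)
        # 'start' mode: check oscillators from the highest phase downwards,
        # so that earlier firings are perceived by lower-phase oscillators.
        for u in sorted(range(len(phases)), key=lambda u: -phases[u]):
            if 1 + ref(phases[u], delta(phases[u], counter)) > T:
                fires[u] = True
                if rng.random() >= MU:           # broadcast succeeds w.p. 1-mu
                    counter += 1
        # 'update' mode: all phases are recomputed in one simultaneous step.
        return [1 if fires[u]
                else 1 + ref(phases[u], delta(phases[u], counter))
                for u in range(len(phases))]

    print(network_round([2, 9, 10, 10]))         # e.g. [3, 1, 1, 1]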
Formally, we conflate the states of the oscillators and the environment into a single state of the DTMC. Each oscillator can be described by a tuple consisting of the current phase Φ of the oscillator and the mode θ within this phase. The phase ranges from 1 to T , while the mode takes values from {start , update}. Furthermore, we use a single counter to keep track of the number of oscillators that fired successfully within a single phase computation.
For a fixed sequence of N oscillators, a state of the concrete model consists of a function ν that associates a phase and mode with each oscillator, and the state of the environment η that counts the number of oscillators that fired. A state is therefore a tuple s = (η, ν), where η is the state of the environment, and ν is the state of the network. We denote the set of all concrete system states by Q_c. For simplicity, we use the notation p_φ (p_θ, respectively) for the corresponding projection function of the network states, i.e., if ν(u) = (Φ_u, θ_u), then p_φ(ν(u)) = Φ_u and p_θ(ν(u)) = θ_u. Similarly, for an environment state η = (θ, c), we will refer to θ by p_θ(η) and to c by p_c(η). We use the notation init_Φ(s) = {u | p_θ(ν(u)) = start ∧ p_φ(ν(u)) = Φ} for the set of all oscillators sharing phase Φ and mode start in the state s = (η, ν). Furthermore, we simply use the notation init(s) = {u | p_θ(ν(u)) = start}.
We now define the transition probabilities between states. To do this we first distinguish the following cases:
1. the environment resets its counter;
2. no oscillator has a clock value of T;
3. an oscillator is in the mode start, has a clock value lower than T, is perturbed, but not enough to fire;
4. an oscillator is in the mode start, has a clock value lower than T and is perturbed enough to fire;
5. an oscillator is in the mode start, has a clock value of T, and broadcasts its pulse;
6. an oscillator is in the mode start, has a clock value of T, and fails to broadcast its pulse;
7. all oscillators are in the mode update, update their clock and reset their state to start.
We will impose an order on certain transitions for two reasons. Firstly, we will restrict transitions that are only used for bookkeeping purposes. For example, we will require that the reset transition of the environment is taken before any of the transitions for the oscillators within a phase are activated. In particular, this means that each computation starts with a transition of type 1. Secondly, we need to ensure that, if at least one oscillator fires, the phase response of all oscillators is evaluated starting with oscillators in the highest phase, down to the lowest phase, as described above. The cases stated above are reflected in the following definitions for the transition probability between two states s = (η, ν) and s′ = (η′, ν′). Case 1, where the environment resets its counter, is treated as follows. In the precondition, we require that the mode of the counter is start, and the state of the oscillators does not change from s to s′. Furthermore, the mode of the counter changes to update in s′, and its value is set to 0. Since this transition is mandatory at the beginning of each round, its probability is 1.
Now we turn to the cases 5 and 6, where some oscillator is at the end of its cycle. The preconditions of both cases are similar: the counter is required to be in the update mode, and there is an oscillator w whose phase is T and whose mode is start. Furthermore, in s′, the mode of w is update, and the state of all other oscillators does not change. The difference between the cases is whether the counter is increased, that is, whether the oscillator manages to broadcast its signal. Since there may be more than one oscillator in phase T at state s, we have to normalise the transition probability accordingly: the probability of succeeding is (1 − µ)/|init_T(s)|, and similarly, the probability of failing to fire is µ/|init_T(s)|.
The transitions for case 5 are defined for states where p_θ(η) = update and there is an oscillator w satisfying the preconditions above (Equation 2); the transitions for case 6 are defined analogously (Equation 3). If no oscillator is at the end of its cycle, that is, in case 2, we define the probability of one oscillator updating its mode as 1/|init(s)|. Observe that we have to normalise the transition probability by the number of all oscillators that have not transitioned to their update mode yet. This is correct, since no oscillator fires, which also means that no oscillator can be activated beyond the maximum phase. This implies in particular that the order of oscillator transitions does not matter in this round.
Now we will consider the cases 3 and 4, where some oscillator already fired (i.e., p_c(η) > 0), and other oscillators are perturbed. We distinguish between two cases: either an oscillator is sufficiently perturbed to also fire, or the perturbation does not cause the phase to exceed the firing threshold. One complication arises in these cases: we have to ensure that we only allow the oscillators to update their mode once all oscillators with a higher phase have been considered. Since the perturbation function is increasing, a higher phase may result in a higher perturbation. That is, oscillators with a higher phase need to be perturbed by fewer firing oscillators before their phase is increased beyond the threshold and they in turn fire. Hence, if we did not enforce such an order, oscillators with a lower phase might not be perturbed when oscillators with a higher phase fire. Again, observe that we normalise the transition probabilities according to the number of oscillators satisfying similar conditions. That is, this time we need to normalise on the number of oscillators with the same phase in the start mode.
The transitions for case 3, where p_θ(η) = update and there is an oscillator w satisfying the corresponding preconditions, are defined analogously (Equation 5). The cases where a perturbed oscillator fires are analogous to oscillators with a maximal phase, except for the additional conditions that some other oscillator fired, and that all oscillators with higher phases have already been considered.
The final case 7, where all oscillators update their clock values simultaneously, is given by the following equation. It requires that all oscillators have finished the computation of whether they fire, and that both the counter and the oscillators reset their mode to start after the transition.
The formula F_update is an abbreviation for the conjunction of the following four conditions, which model the update of the phases of the oscillators according to the phase response function. Observe that the phases of the oscillators have not been updated by the previously defined transitions. Hence, we now update the phases of all oscillators at once.
In this formula, (8a) handles the simple case of firing oscillators, while (8b) defines the behaviour of oscillators within their refractory period. The formulas (8c) and (8d) reflect the two cases where oscillators are perturbed, either not exceeding their oscillation cycle, or firing, respectively.
With this model, we could begin to analyse the synchronisation behaviour with respect to different phase response functions or broadcast failure probabilities. However, the state space of the model increases exponentially with the number of oscillators, which makes an analysis beyond small numbers of oscillators infeasible. To overcome this restriction, we increase the level of abstraction as presented in the next section.
Population Model
In this section, we define a population model of a network of pulse-coupled oscillators for parameters as defined in Sect. 4.1 as S = (∆, N, T, R, ǫ, µ). Oscillators in our model have identical dynamics, and two oscillators are indistinguishable if they share the same phase. That is, we can reason about groups of oscillators, instead of individuals. We therefore encode the global state of the model as a tuple ⟨k_1, . . . , k_T⟩ where each k_Φ is the number of oscillators sharing a phase value of Φ. The population model does not account for the introduction of additional oscillators to a network, or the loss of existing coupled oscillators. That is, the population N remains constant.

Fig. 2: Evolution of the global state over four discrete time steps.
Definition 6 A global state of S is a tuple σ = ⟨k_1, . . . , k_T⟩ with Σ_{Φ=1}^{T} k_Φ = N. The set of all global states of S is Γ(S), or simply Γ when S is clear from the context.

Example 1 Figure 2 shows four global states for an instantiated population model of N = 8 oscillators with T = 10 discrete values for their phase and a refractory period of length R = 2. We assume that the phase response function is linear, ∆(Φ, α, ǫ) = [Φ · α · ǫ], where [·] denotes rounding to the closest integer. Furthermore, let ǫ = 0.115. For example, σ_0 = ⟨0, 0, 2, 1, 0, 0, 5, 0, 0, 0⟩ is the global state where two oscillators have a phase of three, one oscillator has a phase of four, and five oscillators have a phase of seven. The starred node indicates the number of oscillators with phase ten that will fire in the next moment in time, while the shaded nodes indicate oscillators with phases that lie within the refractory period (one and two). If no oscillators have some phase Φ then we omit the 0 in the corresponding node. Observe that, while going from σ_{i−1} to σ_i (1 ≤ i ≤ 3), the oscillator phases increase by one. In the next section, we will explain how transitions between these global states are made. Note that directional arrows indicate cyclic direction, and do not represent transitions.
With every state σ ∈ Γ we associate a non-empty set of failure vectors, where each failure vector is a tuple of broadcast failures that could occur in σ.
We denote the set of all possible failure vectors by F.
Given a failure vector F = ⟨f_1, . . . , f_T⟩, each entry f_Φ ∈ {0, . . . , N} ∪ {⋆} indicates the number of broadcast failures that occur for all oscillators with a phase of Φ, where f_Φ = ⋆ requires that f_Ψ = ⋆ for all Ψ < Φ. If f_Φ = ⋆ then no oscillators with a phase of Φ fire. Semantically, f_Φ = 0 and f_Φ = ⋆ differ in that the former indicates that all (if any) oscillators with phase Φ fire and no broadcast failures occur, while the latter indicates that all (if any) oscillators with a phase of Φ do not fire. If no oscillators fire at all in a global state then we have only one possible failure vector, namely ⟨⋆, . . . , ⋆⟩.
Transitions
In Section 5.2 we will describe how we can calculate the set of all possible failure vectors for a global state, and thereby identify all of its successor states. However we must first show how we can calculate the single successor state of a global state σ, given some failure vector F .
Absorptions. For real deployments of synchronisation protocols it is often the case that the duration of a single oscillation cycle will be at least several seconds [10,28]. The perturbation induced by the firing of a group of oscillators may lead to groups of other oscillators to which they are coupled firing in turn. The firing of these other oscillators may then cause further oscillators to fire, and so forth, leading to a "chain reaction", where each group of oscillators triggered to fire is absorbed by the initial group of firing oscillators. Since the whole chain reaction of absorptions may occur within just a few milliseconds, and in our model the oscillation cycle is a sequence of discrete states, when a chain reaction occurs the phases of all perturbed oscillators are updated at one single time step.
Since we are considering a fully connected network of oscillators, two oscillators sharing the same phase will have their phase updated to the same value in the next time step: they will always perceive the same number of other oscillators firing. Therefore, for each phase Φ we define a function α_Φ : Γ × F → N, where α_Φ(σ, F) is the number of oscillators with a phase greater than Φ perceived to be firing by oscillators with phase Φ, in some global state σ, incorporating the broadcast failures defined in the failure vector F. This allows us to encode the aforementioned chain reactions of firing oscillators. Note that our encoding of chain reactions results in a global semantics that differs from typical parallelisation operations, for example, the construction of the cross product of the individual oscillators. Observe that, in the concrete model of Sect. 4.2, we modelled such behaviour by case 4.
Given a global state σ = ⟨k_1, . . . , k_T⟩ and a failure vector F = ⟨f_1, . . . , f_T⟩, the following mutually recursive definitions show how we calculate the values α_1(σ, F), . . . , α_T(σ, F), and how the functions introduced in Sect. 4.1 are modified to indicate the update in phase, and firing, of all oscillators sharing the same phase Φ:

update_Φ(σ, F) = 1 + ref(Φ, ∆(Φ, α_Φ(σ, F), ǫ)),
fire_Φ(σ, F) iff update_Φ(σ, F) > T,
α_T(σ, F) = 0, and, for Φ < T,
α_Φ(σ, F) = α_{Φ+1}(σ, F) + (k_{Φ+1} − f_{Φ+1}) if fire_{Φ+1}(σ, F), and α_Φ(σ, F) = α_{Φ+1}(σ, F) otherwise.

Observe that to calculate any α_Φ(σ, F) we only refer to definitions for phases greater than Φ; the base case is Φ = T, that is, values are computed from T down to 1. The function ref is the refractory function as defined in Sect. 4.1.
Transition Function. We now define the transition function that maps phase values to their updated values in the next time step. Note that since we no longer distinguish different oscillators with the same phase we only need to calculate a single value for their evolution and perturbation.
Definition 8 The phase transition function τ : Γ × {1, . . . , T} × F → N maps a global state σ, a phase Φ, and some possible failure vector F for σ, to the updated phase in the next discrete time step, with respect to the broadcast failures defined in F, and is defined as τ(σ, Φ, F) = 1 if fire_Φ(σ, F), and τ(σ, Φ, F) = update_Φ(σ, F) otherwise.

Let U_Φ(σ, F) be the set of phase values Ψ where all oscillators with phase Ψ in σ will have their phase updated to Φ in the next time step, with respect to the broadcast failures defined in F. Formally, U_Φ(σ, F) = {Ψ | τ(σ, Ψ, F) = Φ}. We can now calculate the successor state of a global state σ and define how the model evolves over time.

Definition 9 The successor function →succ : Γ × F → Γ maps a global state σ = ⟨k_1, . . . , k_T⟩ and a failure vector F to the state σ′ = ⟨k′_1, . . . , k′_T⟩ with k′_Φ = Σ_{Ψ ∈ U_Φ(σ,F)} k_Ψ.

Example 2 Recall that the perturbation function of our example was given as ∆(Φ, α, ǫ) = [Φ · α · ǫ], where [·] denotes rounding and ǫ = 0.115. Consider the global state σ_2 of Fig. 3, where no oscillators will fire since k_10 = 0. We therefore have one possible failure vector for σ_2, namely F = ⟨⋆, . . . , ⋆⟩. Since no oscillators fire, the dynamics of the oscillators are determined solely by their standalone evolution, and all oscillators simply increase their phase by 1 in the next time step. Now consider the global state σ_3 and F = ⟨⋆, ⋆, ⋆, ⋆, ⋆, ⋆, 1, 0, 0, 0⟩, a possible failure vector for σ_3, indicating that oscillators with phases of 7 to 10 will fire and one broadcast failure will occur for the single oscillator that will fire with phase 7.
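The successor computation can be sketched in a few lines of Python, assuming the linear ∆ of the running example and a valid failure vector as input (so that the fire predicate can be read off from the ⋆ entries); the assertion reproduces the transition σ_3 → σ_4 of Example 2:

    T, R, EPS = 10, 2, 0.115
    STAR = "*"

    def delta(phi, alpha):
        return round(phi * alpha * EPS)

    def ref(phi, d):
        return phi if phi <= R else phi + d

    def successor(sigma, F):
        # sigma is <k_1,...,k_T>; F a possible failure vector for sigma, with
        # F[phi-1] == STAR iff the oscillators with phase phi do not fire.
        succ = [0] * T
        alpha = 0                       # firings perceived from higher phases
        for phi in range(T, 0, -1):     # evaluate phases from T down to 1
            k, f = sigma[phi - 1], F[phi - 1]
            if f != STAR:               # these k oscillators fire ...
                succ[0] += k            # ... and reset their phase to 1
                alpha += k - f          # f of the k broadcasts were lost
            elif k:                     # standalone or perturbed evolution
                new_phase = 1 + ref(phi, delta(phi, alpha))
                succ[new_phase - 1] += k
        return tuple(succ)

    sigma3 = (0, 0, 0, 0, 0, 2, 1, 0, 0, 5)
    F = (STAR,) * 6 + (1, 0, 0, 0)
    assert successor(sigma3, F) == (6, 0, 0, 0, 0, 0, 0, 0, 0, 2)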
Failure Vector Calculation
We construct all possible failure vectors for a global state by considering every group of oscillators in decreasing order of phase. At each stage we determine if the oscillators would fire. If they fire then we consider each outcome where any, all, or none of the firings result in a broadcast failure. We then add a corresponding value to a partially calculated failure vector and consider the next group of oscillators with a lower phase. If the oscillators do not fire then there is nothing left to do, since by Def. 4 we know that ∆ is increasing, therefore all oscillators with a lower phase will also not fire. We can then pad the partial failure vector with ⋆ appropriately to indicate that no failure could happen since no oscillator fired. Table 1 illustrates how a possible failure vector for global state σ 3 in Fig. 3 is iteratively constructed. The first three columns respectively indicate the current iteration i, the global state σ 3 with the currently considered oscillators underlined, and the elements of the failure vector F computed so far. The fourth column is true if the oscillators with phase T +1−i would fire given the broadcast failures in the partial failure vector. We must consider all outcomes of any or all firings resulting in broadcast failure. The final column therefore indicates whether the value added to the partial failure vector in the current iteration is the only possible value (false), or a choice from one of several possible values (true).
Initially we have an empty partial failure vector. At the first iteration there are 5 oscillators with a phase of 10. These oscillators will fire, so we must consider each possible number of broadcast failures; here we choose 0 broadcast failures, which is then added to the partial failure vector. At iterations 2 and 3 the oscillators would have fired, but since there are no oscillators with a phase of 9 or 8 we only have one possible value to add to the partial failure vector, namely 0. At iteration 4 a single oscillator with a phase of 7 fires, and we choose the case where the firing resulted in a broadcast failure. In the final iteration oscillators with a phase of 6 do not fire, hence we can conclude that oscillators with phases less than 6 also do not fire, and can fill the partial failure vector appropriately with ⋆.

Table 1: Construction of a possible failure vector for the global state σ_3 = ⟨0, 0, 0, 0, 0, 2, 1, 0, 0, 5⟩.

Formally, we define a family of functions fail indexed by Φ, where each fail_Φ takes as parameters some global state σ and V, a vector of length T − Φ. V represents all broadcast failures for all oscillators with a phase greater than Φ. The function fail_Φ then computes the set of all possible failure vectors for σ with suffix V. Here we use the notation v ⌢ v′ to indicate vector concatenation.
Observe that the result of fail_T is always a set of well-defined failure vectors, since whenever ⋆ is introduced into a failure vector at index Φ, all preceding indices are also filled with ⋆, as required by Definition 7.
Definition 11 Given a global state σ ∈ Γ, we define F_σ, the set of all possible failure vectors for that state, as F_σ = fail_T(σ, ⟨⟩), and define next(σ), the set of all successor states of σ, as next(σ) = {→succ(σ, F) | F ∈ F_σ}. Note that for some global states |next(σ)| < |F_σ|, since we may have →succ(σ, F) = →succ(σ, F′) for distinct failure vectors F, F′ ∈ F_σ.

Given a global state σ and a failure vector F ∈ F_σ, we will now compute the probability of a transition being made to state →succ(σ, F) in the next time step. Recall that µ is the probability with which a broadcast failure occurs. Firstly we define the probability mass function PMF : {1, . . . , N}² → [0, 1], where PMF(k, f) gives the probability of f broadcast failures occurring given that k oscillators fire,

PMF(k, f) = C(k, f) · µ^f · (1 − µ)^{k−f},

where C(k, f) denotes the binomial coefficient. We then denote by PFV : Γ × F_σ → [0, 1] the function mapping a possible broadcast failure vector F = ⟨f_1, . . . , f_T⟩ for σ = ⟨k_1, . . . , k_T⟩ to the probability of the failures in F occurring. That is,

PFV(σ, F) = Π_{Φ=1}^{T} p_Φ, where p_Φ = PMF(k_Φ, f_Φ) if f_Φ ≠ ⋆, and p_Φ = 1 otherwise.

Lemma 2 For any global state σ, PFV is a discrete probability distribution over F_σ.
Proof Given a global state σ = ⟨k_1, . . . , k_T⟩ we can construct a tree of depth T where each leaf node is labelled with a possible failure vector for σ, and each node Λ at depth Φ is labelled with a vector of length Φ corresponding to the last Φ elements of a possible failure vector for σ. We denote the label of a node Λ by V(Λ). We iteratively construct the tree, starting with the root node, root, at depth 0, which we label with the empty tuple ⟨⟩. For each node Λ at depth 0 ≤ Φ < T we construct the children of Λ as follows:
1. If oscillators with phase Φ fire we define the sample space Ω = {0, . . . , k_Φ} to be a set of disjoint events, where each ω ∈ Ω is the event that ω broadcast failures occur, given that k_Φ oscillators fired. For each ω ∈ Ω there is a child Λ_ω of Λ with label ω ⌢ V(Λ), and we label the edge from Λ to Λ_ω with PMF(k_Φ, ω).
2. If oscillators with phase Φ do not fire then Λ has a single child Λ_⋆ labelled with ⋆ ⌢ V(Λ), and we label the edge from Λ to Λ_⋆ with 1.
We denote the label of an edge from a node Λ to its child Λ′ by L(Λ, Λ′). For case 2 we can observe that if oscillators with phase Φ do not fire then we know that oscillators with any phase Ψ < Φ will also not fire, since from Def. 4 we know that ∆ is an increasing function. Hence, all descendants of Λ will also have a single child, with an edge labelled with 1, and each node is labelled with the label of its parent, prefixed with ⋆. After constructing the tree we have a vector of length T associated with each leaf node, corresponding to a failure vector for σ. The set F_σ of all possible failure vectors for σ is therefore the set of all vectors labelling leaf nodes. We denote by P↓(Λ) the product of all labels on edges along the path from Λ back to the root. Given a global state σ = ⟨k_1, . . . , k_T⟩ and a failure vector F = ⟨f_1, . . . , f_T⟩ ∈ F_σ labelling some leaf node Λ at depth T, we can see that P↓(Λ) = PFV(σ, F).

Let D_Φ denote the set of all nodes at depth Φ. We show Σ_{d ∈ D_Φ} P↓(d) = 1 by induction on Φ. For Φ = 0, i.e., D_Φ = {root}, the property holds by definition. Now assume that Σ_{d ∈ D_Φ} P↓(d) = 1 holds for some 0 ≤ Φ < T. Let Λ be some node in D_Φ, and let C_Λ be the set of all children of Λ. Consider the following two cases: if oscillators with phase Φ do not fire then |C_Λ| = 1, and for the only c ∈ C_Λ we have that L(Λ, c) = 1. If oscillators with phase Φ fire, observe that PMF is a probability mass function for a random variable defined on the sample space Ω = {0, . . . , k_Φ}. In either case we can see that Σ_{c ∈ C_Λ} L(Λ, c) = 1. Note that D_{Φ+1} = ∪_{d ∈ D_Φ} C_d, and recall that L(d, c) · P↓(d) = P↓(c). Therefore,

Σ_{c ∈ D_{Φ+1}} P↓(c) = Σ_{d ∈ D_Φ} Σ_{c ∈ C_d} L(d, c) · P↓(d).

Since Σ_{c ∈ C_d} L(d, c) = 1 for each d ∈ D_Φ, and from the induction hypothesis, we then have that Σ_{c ∈ D_{Φ+1}} P↓(c) = Σ_{d ∈ D_Φ} P↓(d) = 1. We have already shown that P↓(Λ) = PFV(σ, F) for any leaf node Λ labelled with a failure vector F, and since the set of all labels for leaf nodes is F_σ, we can conclude that Σ_{F ∈ F_σ} PFV(σ, F) = 1. This proves the lemma. ⊓⊔

Example 3 We consider again the global states σ_3 = ⟨0, 0, 0, 0, 0, 2, 1, 0, 0, 5⟩ and σ_4 = ⟨6, 0, 0, 0, 0, 0, 0, 0, 0, 2⟩, given in Fig. 3, of the population model instantiated in Example 1, and the failure vector F = ⟨⋆, ⋆, ⋆, ⋆, ⋆, ⋆, 1, 0, 0, 0⟩ given in Example 2, noting that F ∈ F_{σ_3}, →succ(σ_3, F) = σ_4, and µ = 0.1. We calculate the probability of a transition being made from σ_3 to σ_4 as

PFV(σ_3, F) = 1 · 1 · 1 · 1 · 1 · 1 · PMF(1, 1) · PMF(0, 0) · PMF(0, 0) · PMF(5, 0) = (0.1^1 · 0.9^0) · 1 · 1 · (0.1^0 · 0.9^5) = 0.059049.

We now have everything we need to fully describe the evolution of the global state of a population model over time. An execution path of a population model S is an infinite sequence of global states ω = σ_0 σ_1 σ_2 σ_3 · · ·, where σ_0 is called the initial state, and σ_{k+1} ∈ next(σ_k) for all k ≥ 0.
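The enumeration of F_σ together with the probabilities PFV can be sketched compactly, again assuming the linear ∆ of the running example; the assertions reproduce Example 3 and the statement of Lemma 2:

    from math import comb

    T, R, EPS, MU = 10, 2, 0.115, 0.1
    STAR = "*"

    def delta(phi, alpha):
        return round(phi * alpha * EPS)

    def ref(phi, d):
        return phi if phi <= R else phi + d

    def fires(phi, alpha):
        return 1 + ref(phi, delta(phi, alpha)) > T

    def pmf(k, f):
        # Probability of f broadcast failures among k firing oscillators.
        return comb(k, f) * MU**f * (1 - MU)**(k - f)

    def failure_vectors(sigma):
        # Yield (F, PFV(sigma, F)) pairs, walking phases from T down to 1,
        # exactly as in the iterative construction of Table 1.
        def go(phi, alpha, suffix, prob):
            if phi == 0:
                yield tuple(suffix), prob
            elif fires(phi, alpha):
                k = sigma[phi - 1]
                for f in range(k + 1):          # choose the lost broadcasts
                    yield from go(phi - 1, alpha + k - f,
                                  [f] + suffix, prob * pmf(k, f))
            else:                               # nothing below fires either
                yield tuple([STAR] * phi + suffix), prob
        yield from go(T, 0, [], 1.0)

    dist = dict(failure_vectors((0, 0, 0, 0, 0, 2, 1, 0, 0, 5)))
    F = (STAR,) * 6 + (1, 0, 0, 0)
    assert abs(dist[F] - 0.059049) < 1e-12      # Example 3
    assert abs(sum(dist.values()) - 1.0) < 1e-9  # Lemma 2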
Synchronisation
When all oscillators in a population model have the same phase in a global state we say that the state is synchronised. Formally, a global state σ = ⟨k_1, . . . , k_T⟩ is synchronised if, and only if, there is some Φ ∈ {1, . . . , T} such that k_Φ = N, and hence k_{Φ′} = 0 for all Φ′ ≠ Φ. We will often want to reason about whether some particular run ω of a model leads to a global state that is synchronised. We say that a path ω = σ_0 σ_1 · · · synchronises if, and only if, there exists some k ≥ 0 such that σ_k is synchronised. Once a synchronised global state is reached, any successor states will also be synchronised. Finally, we say that a model synchronises if, and only if, all runs of the model synchronise.
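A sketch of the synchronisation predicate on global states (the helper names are illustrative):

    def synchronised(sigma):
        # A global state is synchronised iff a single phase holds all N
        # oscillators; all other entries are then necessarily zero.
        return max(sigma) == sum(sigma)

    def path_synchronises(path):
        # A finite prefix of a run synchronises if it hits such a state.
        return any(synchronised(sigma) for sigma in path)

    assert synchronised((0, 0, 8, 0, 0, 0, 0, 0, 0, 0))
    assert not synchronised((6, 0, 0, 0, 0, 0, 0, 0, 0, 2))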
Model Construction
Given a population model S = (∆, N, T, R, ǫ, µ) we construct a DTMC D(S) = (Q, σ_0, P, L), where L ranges over the singleton {synch}. We define the set of states Q to be Γ(S) ∪ {σ_0}, where σ_0 is the initial state of the DTMC. For each σ = ⟨k_1, . . . , k_T⟩ ∈ Γ(S), we set L(σ) = {synch} if σ is synchronised, and L(σ) = ∅ otherwise. In the initial state all oscillators are unconfigured. That is, oscillators have not yet been assigned a value for their phase. For each σ = ⟨k_1, . . . , k_T⟩ ∈ Q \ {σ_0} we define

P(σ_0, σ) = (1 / T^N) · N! / (k_1! · · · k_T!)

to be the probability of moving from σ_0 to a state where k_i arbitrary oscillators are configured with the phase value i for 1 ≤ i ≤ T. The multinomial coefficient defines the number of possible assignments of phases to distinct oscillators that result in the global state σ. The fractional coefficient normalises the multinomial coefficient with respect to the total number of possible assignments of phases to all oscillators. In general, given an arbitrary set of initial configurations (global states) for the oscillators, the total number of possible phase assignments can be calculated by computing the sum of the multinomial coefficients for each configuration (global state) in that set. Since Γ is the set of all possible global states, we have that Σ_{σ ∈ Γ} P(σ_0, σ) = 1. We assign probabilities to the remaining transitions as follows: for every σ ∈ Q \ {σ_0}, we consider each F ∈ F_σ, and set P(σ, →succ(σ, F)) = PFV(σ, F). For every combination of σ and σ′ where σ′ ∉ next(σ) we set P(σ, σ′) = 0.
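A small sketch of this initial distribution, with a brute-force check that the probabilities over all global states sum to one for N = 2 and T = 3:

    from math import factorial
    from itertools import product

    def initial_probability(sigma):
        # P(sigma_0, sigma): the multinomial coefficient N!/(k_1!...k_T!)
        # over the T^N equally likely phase assignments to N oscillators.
        n, t = sum(sigma), len(sigma)
        coeff = factorial(n)
        for k in sigma:
            coeff //= factorial(k)
        return coeff / t**n

    states = [s for s in product(range(3), repeat=3) if sum(s) == 2]
    assert abs(sum(initial_probability(s) for s in states) - 1.0) < 1e-12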
Model Reduction
We now describe a reduction of the population model that results in a significant decrease in the size of the model, but is equivalent to the original model with respect to the reachability of synchronised states. We first distinguish between states where one or more oscillators are about to fire, and states where no oscillators will fire at all. We refer to these states as firing states and non-firing states respectively.

Definition 12 Given a population model S, a global state ⟨k_1, . . . , k_T⟩ ∈ Γ is a firing state if, and only if, k_T > 0. We denote by Γ_F the set of all firing states of S, and by Γ_NF = Γ \ Γ_F the set of all non-firing states of S. We will again omit S if it is clear from the context.

Given a DTMC D = (Q, σ_0, P, L), let |P| = |{(t, t′) ∈ Q × Q | P(t, t′) > 0}| be the number of non-zero transitions in P, and let |D| = |Q| + |P| be the total number of states and non-zero transitions in D.
Theorem 1 For a population model S = (∆, N, T, R, ǫ, µ), its DTMC D = (Q, σ_0, P, L), and the reduction D′ = (Q′, σ_0, P′, L′) defined below, we have |D′| ≤ |D| − 3|Γ_NF|, and unbounded-time reachability properties with respect to synchronised firing states are preserved.

We now proceed to prove this theorem. To that end, we need some preliminary properties of non-firing states and their relation to firing states.

Lemma 3 Every non-firing state σ ∈ Γ_NF has exactly one successor state, and in that state all oscillator phases have increased by 1.
Reachable State Reduction. Given a path ω = σ_0 · · · σ_{n−1} σ_n where σ_i ∈ Γ_NF for 0 < i < n and σ_0, σ_n ∈ Γ_F, we omit the transitions (σ_i, σ_{i+1}) for 0 ≤ i < n, and instead introduce a direct transition from σ_0, the first firing state, to σ_n, the next firing state in the sequence. For any σ = ⟨k_1, . . . , k_T⟩ ∈ Γ let δ_σ = max{Φ | k_Φ > 0 and 1 ≤ Φ ≤ T} be the highest phase of any oscillator in σ. The successor state of a non-firing state is then the state where all phases have increased by T − δ_σ. Observe that T − δ_σ = 0 for any σ ∈ Γ_F.
Definition 13 The deterministic successor function ։succ : Γ → Γ_F maps a state σ ∈ Γ to the next firing state reachable by taking T − δ_σ deterministic transitions; that is, ։succ(σ) is the state obtained from σ by increasing all phases by T − δ_σ. Observe that for any firing state σ we have δ_σ = T, and hence ։succ(σ) = σ.
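A sketch of ։succ, with the jump from σ_0 of Example 1 to the firing state σ_3 of Fig. 3 as a check:

    def det_successor(sigma):
        # Jump directly to the next firing state: all phases advance by
        # T - delta_sigma, where delta_sigma is the highest occupied phase.
        t = len(sigma)
        d = max(phi for phi in range(1, t + 1) if sigma[phi - 1] > 0)
        shift = t - d                   # 0 whenever sigma already fires
        return tuple([0] * shift + list(sigma[:t - shift]))

    # sigma_0 from Example 1 reaches the firing state sigma_3 of Fig. 3 in
    # three deterministic steps, collapsed here into a single jump.
    assert (det_successor((0, 0, 2, 1, 0, 0, 5, 0, 0, 0))
            == (0, 0, 0, 0, 0, 2, 1, 0, 0, 5))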
We now update the definition of the set of all successor states for some global state σ ∈ Γ to incorporate the deterministic successor function.

Definition 14 The reduced successor set of a state σ ∈ Γ is ։next(σ) = {։succ(σ′) | σ′ ∈ next(σ)}.

Definition 15 Given a firing state σ ∈ Γ_F let pred(σ) be the set of all non-firing predecessors of σ, where σ is reachable from the predecessor by taking some positive number of transitions deterministically. Formally, pred(σ) = {σ′ ∈ Γ_NF | ։succ(σ′) = σ}. We refer to all states σ′ ∈ pred(σ) as deterministic predecessors of σ.
Then, given D = (Q, σ_0, P, L), we define Q′ = Q \ ∪_{σ ∈ Γ_F} pred(σ) to be the reduction of Q where all non-firing states from which a firing state can be reached deterministically are removed.

Fig. 4: Five possible initial configurations in Q for N = 2, T = 6.
Lemma 4 Q′ = Q \ Γ_NF.

Proof Let P = ∪_{σ ∈ Γ_F} pred(σ) be the set of all deterministic predecessors of firing states in Γ_F. Since Q′ = Q \ P, the lemma holds if,
and only if, P = Γ_NF. From Definition 15 it follows that P ⊆ Γ_NF. In addition, for any σ ∈ Γ_NF there is some state σ′ such that σ ∈ pred(σ′) and σ′ = ։succ(σ) ∈ Γ_F, hence Γ_NF ⊆ P and the lemma is proved. ⊓⊔

Lemma 5 For a population model S = (∆, N, T, R, ǫ, µ) and its corresponding DTMC D = (Q, σ_0, P, L) with Q = Γ ∪ {σ_0}, the number of states in the reduction of Q is given by |Q′| = C(N+T−1, N) − C(N+T−2, N) + 1, where C(n, k) denotes the binomial coefficient.

Proof Observe that there are C(N+T−1, N) ways to assign T distinguishable phases to N indistinguishable oscillators [15]. Since Q = Γ ∪ {σ_0} and Γ is the set of all possible configurations for oscillators, we can see that |Q| = C(N+T−1, N) + 1. For any non-firing state σ = ⟨k_1, . . . , k_T⟩ ∈ Γ_NF we know from Definition 6 that Σ_{Φ=1}^{T} k_Φ = N and from Definition 12 that k_T = 0, so it must be the case that Σ_{Φ=1}^{T−1} k_Φ = N. That is, there must be C(N+T−2, N) ways to assign T − 1 distinguishable phases to N indistinguishable oscillators, and so |Γ_NF| = C(N+T−2, N). From Lemma 4 we know that Q′ = Q \ Γ_NF, so it must be the case that |Q′| = C(N+T−1, N) − C(N+T−2, N) + 1. ⊓⊔

Transition Matrix Reduction. Here we describe the reduction in the number of non-zero transitions in the model. We illustrate how initial transitions to non-firing states are removed by using a simple example, and then describe how we remove transitions from firing states to any successor non-firing states. Figure 4 shows five possible initial configurations σ_i, . . . , σ_{i+4} ∈ Q for N = 2 oscillators with T = 6 values for phase, where a transition is taken from σ_0 to each σ_k with probability P(σ_0, σ_k). Any infinite run of D where a transition is taken from σ_0 to one of the configured states σ_i, . . . , σ_{i+3} will pass through σ_{i+4}, since all transitions (σ_{i+k}, σ_{i+k+1}) for 0 ≤ k ≤ 3 are taken deterministically. Also, observe that states σ_i, . . . , σ_{i+3} are not in Q′, since σ_{i+4} is reachable from each by taking some number of deterministic transitions. We therefore set the probability of moving from σ_0 to σ_{i+4} in P′ to be the sum of the probabilities of moving from σ_0 to σ_{i+4} and each of its predecessors in P. Generally, given a state σ ∈ Q′ where σ ≠ σ_0, we set P′(σ_0, σ) = P(σ_0, σ) + Σ_{σ′ ∈ pred(σ)} P(σ_0, σ′).
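Before turning to the transition probabilities, note that the closed form of Lemma 5 above is easy to validate by brute force for small parameters:

    from math import comb
    from itertools import product

    def reduced_state_count(n, t):
        # |Q'| from Lemma 5: all states minus the non-firing ones, plus the
        # initial state sigma_0.
        return comb(n + t - 1, n) - comb(n + t - 2, n) + 1

    def brute_force(n, t):
        states = [s for s in product(range(n + 1), repeat=t) if sum(s) == n]
        return len([s for s in states if s[t - 1] > 0]) + 1

    assert all(reduced_state_count(n, t) == brute_force(n, t)
               for n in range(1, 6) for t in range(2, 6))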
We now define how we calculate the probability with which a transition is taken from a firing state to each of its possible successors. For each firing state σ ∈ Q′ we consider each possible successor σ′ ∈ ։next(σ) of σ and define F_{σ→σ′} to be the set of all possible failure vectors for σ for which the successor of σ is σ′, given by F_{σ→σ′} = {F ∈ F_σ | ։succ(→succ(σ, F)) = σ′}. We then set the probability with which a transition from σ to σ′ is taken to P′(σ, σ′) = Σ_{F ∈ F_{σ→σ′}} PFV(σ, F).

Lemma 6 For a population model S = (∆, N, T, R, ǫ, µ), the corresponding DTMC D = (Q, σ_0, P, L) with Q = {σ_0} ∪ Γ, and its reduction D′(S) = (Q′, σ_0, P′, L′), the transitions in P are reduced in P′ such that |P′| ≤ |P| − 2|Γ_NF|.

Proof From Lemma 4 we know that |Q′| = |Q \ Γ_NF|, and hence that |Γ_NF| transitions from σ_0 to non-firing states are not in P′, and from Lemma 3 we also know that there is one transition from each non-firing state to its unique successor state that is not in P′. Since no additional transitions are introduced in the reduction it is clear that |P′| ≤ |P| − 2|Γ_NF|. ⊓⊔

Lemma 7 For every population model DTMC D = (Q, σ_0, P, L), unbounded-time reachability properties with respect to synchronised firing states in D are preserved in its reduction D′.
Proof We want to show that for every ⊲⊳ ∈ {<, ≤, ≥, >} and every λ ∈ [0, 1], if σ_0 |= P_{⊲⊳λ}[F synch] holds in D then it also holds in D′. From the semantics of PCTL over a DTMC we have that σ_0 |= P_{⊲⊳λ}[F synch] if, and only if, Pr{ω ∈ Paths(σ_0) | ω |= F synch} ⊲⊳ λ. Therefore we need to show that

Pr_D{ω ∈ Paths_D | ω |= F synch} = Pr_{D′}{ω′ ∈ Paths_{D′} | ω′ |= F synch},

where Pr_D and Pr_{D′} denote the probability measures with respect to the sets of infinite paths from σ_0 in D and D′ respectively.
Given a firing state σ_F ∈ Q we denote by Paths_D^{σ_F} the set of all infinite paths of D starting in σ_0 where the first firing state reached along that path is σ_F. All such sets for all firing states in Q form a partition, such that ∪_{σ_F ∈ Γ_F} Paths_D^{σ_F} = Paths_D. That is, for all firing states σ_F, σ_F′ ∈ Q where σ_F ≠ σ_F′ we have that Paths_D^{σ_F} ∩ Paths_D^{σ_F′} = ∅. Now observe that any infinite path ω of D can be written in the form ω = σ_0 ω_NF^1 σ_F^1 ω_NF^2 σ_F^2 · · ·, where σ_F^i is the i-th firing state in the path and each ω_NF^i = σ_i^1 σ_i^2 · · · σ_i^{k_i} is a possibly empty sequence of k_i non-firing states. Then for every such path in D there is a corresponding path ω′ of D′ without non-firing states, of the form ω′ = σ_0 σ_F^1 σ_F^2 σ_F^3 · · ·, as for any i we have σ_i^j ∈ pred(σ_F^i) for all 1 ≤ j ≤ k_i. As only deterministic transitions have been removed in D′, a path ω reaches a synchronised firing state if, and only if, its corresponding path ω′ does. Hence, we only have to consider the finite paths from σ_0 to σ_F^1. To that end, observe that there are |pred(σ_F^1)| possible prefixes for each path from σ_0 to σ_F^1 where the initial transition is taken from σ_0 to some non-firing predecessor of σ_F^1, plus the single prefix where the initial transition is taken to σ_F^1 itself. Overall there are exactly |pred(σ_F^1)| + 1 distinct finite prefixes that have ω′ as their corresponding path in D′. We denote the set of these prefixes for a path ω′ in D′ by Pref(ω′). Since the measure of each finite prefix extends to a measure over the set of infinite paths sharing that prefix, it is sufficient to show that the sum of the probabilities for these finite prefixes is equal to the probability of the unique prefix σ_0 σ_F^1 in D′. We can write

Pr_D(Pref(ω′)) = P(σ_0, σ_F^1) + Σ_{σ′ ∈ pred(σ_F^1)} P(σ_0, σ′),

since the k_{σ′} deterministic transitions that lead from σ′ to σ_F^1 in D each have probability 1. Now recall that for any σ ∈ Q′ \ {σ_0} we have P′(σ_0, σ) = P(σ_0, σ) + Σ_{σ′ ∈ pred(σ)} P(σ_0, σ′).
So we have shown that Pr_D(Pref(ω′)) = Pr_{D′}({σ_0 σ_F^1}) and the lemma is proved. ⊓⊔

Proof (of Theorem 1) Follows from Lemmas 5 and 6 for the reduction of states and transitions respectively, and from Lemma 7 for the preservation of unbounded-time reachability properties.

Table 2 shows the number of reachable states and transitions of the DTMC, and the corresponding reduction, for different population sizes (N) and oscillation cycle lengths (T), using the Mirollo and Strogatz model of synchronisation [25]. The number of reachable states is stable under changes to the parameters R, ǫ, and µ, since every possible firing state is always reachable from the initial state. For the results shown here the parameters were arbitrarily set to R = 1, ǫ = 0.1. The underlying graph of the DTMC, and hence the number of transitions, is stable under changes to the parameter µ, and is not of interest here. Table 3 shows the number of transitions of the DTMC, and the corresponding reduction, for various population model instances, and again uses the Mirollo and Strogatz model of synchronisation. Increasing the length of the refractory period (R) results in an increase in the reduction of transitions in the model. A longer refractory period leads to more firing states where the firing of a group of oscillators is ignored. This results in successor states having oscillators with lower values for phase, and hence a longer sequence of deterministic transitions (later removed in the reduction) leading to the next firing state. Conversely, increasing the strength of the coupling between oscillators (ǫ) results in a decrease in the reduction of transitions in the model. For the Mirollo and Strogatz model of synchronisation used here, increasing the coupling strength results in a linear increase in the perturbation to phase induced by the firing of an oscillator. This results in successor states of firing states having oscillators with higher values for phase, and hence a shorter sequence of deterministic transitions leading to the next firing state.
Reward Structures for Reductions
While probabilistic reachability properties allow us to quantitatively analyse models with respect to the likelihood of reaching a synchronised state, they do not allow us to reason about other properties of interest, for instance the expected time taken for the network to synchronise [16], or the expected energy consumption of the network [17]. Therefore, we will often want to augment the DTMC corresponding to a population model with rewards. We do this by annotating states and transitions with real-valued rewards (respectively costs, should values be negative) that are awarded when states are visited, or transitions taken. Definition 16 Given a DTMC D = (Q, σ 0 , P, L) a reward structure for D is a pair R = (Rs, R t ) where Rs : Q → R and R t : Q × Q → R are the state reward and state transition functions that respectively map real valued rewards to states and transitions in D.
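Definition 16 can be exercised directly; the sketch below accumulates rewards along a finite path in the sense of the total reward tot_R formalised immediately after (dictionaries stand in for R_s and R_t; missing entries default to reward 0):

    from itertools import pairwise  # Python 3.10+

    def total_reward(path, Rs, Rt):
        # tot_R: rewards accumulated along sigma_0 ... sigma_k, counting the
        # state reward of every state except the last, plus each transition.
        return sum(Rs.get(s, 0.0) + Rt.get((s, t), 0.0)
                   for s, t in pairwise(path))

    # E.g. a constant transition reward of 1 measures elapsed time steps.
    assert total_reward(["a", "b", "c"], {},
                        {("a", "b"): 1.0, ("b", "c"): 1.0}) == 2.0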
For any finite path ω = σ_0 · · · σ_k of D we define the total reward accumulated along that path up to, but not including, σ_k as

tot_R(ω) = Σ_{i=0}^{k−1} (R_s(σ_i) + R_t(σ_i, σ_{i+1})).    (15)

Given a DTMC D = (Q, σ_0, P, L) augmented with a reward structure R, and some state σ ∈ Q, we will often want to reason about the reward that is accumulated along a path ω = σ_0 σ_1 σ_2 · · · ∈ Paths that eventually passes through some set of target states Ω ⊂ Q. We first define a random variable over the set of infinite paths, V_Ω : Paths → R ∪ {∞}. Given the set ω_Ω = {j | σ_j ∈ Ω} of indices of states in ω that are in Ω, we define the random variable as V_Ω(ω) = ∞ if ω_Ω = ∅, and V_Ω(ω) = tot_R(σ_0 · · · σ_m) with m = min(ω_Ω) otherwise, and define the expectation of V_Ω with respect to Pr_σ as E_σ[V_Ω] = ∫ V_Ω dPr_σ.

The logic of PCTL can be extended to include reward properties by introducing the state formula R_{⊲⊳r}[F Ψ], where ⊲⊳ ∈ {<, ≤, ≥, >} and r ∈ R [22]. Given a state σ ∈ Q, a real value r, and a PCTL path formula Ψ, the semantics of this formula is given by σ |= R_{⊲⊳r}[F Ψ] if, and only if, E_σ[V_{Sat(Ψ)}] ⊲⊳ r, where Sat(Φ) denotes the set of states in Q that satisfy Φ.

Given a reward structure R = (R_s, R_t) for D we construct the corresponding reward structure R′ = (R′_s, R′_t) as follows:
- There is no reward for the initial state and we set R′_s(σ_0) = 0.
- For every firing state σ_F in Q with R_s(σ_F) = r we set R′_s(σ_F) = r.
- For every pair of distinct firing states σ_F^1 and σ_F^2 with σ_F^2 ∈ ։next(σ_F^1), we set the reward for taking the transition from σ_F^1 to σ_F^2 in D′ to be the sum of the rewards that would be accumulated across the corresponding sequence of states by a path in D; formally, the total reward accumulated along each such sequence is weighted by the probability of the failure vector inducing it, summed over F_{σ_F^1 → σ_F^2}, and normalised by P′(σ_F^1, σ_F^2).
- For every firing state σ_F in Q′ there is a non-zero transition from the initial state σ_0 to σ_F in P′. Therefore, all paths of D′ where σ_F is the first firing state along that path share the same prefix, namely σ_0 σ_F. For paths of D this is not necessarily the case, since σ_F is the first firing state not only along the path where the initial transition is taken to σ_F itself, but also along any path where the initial transition is taken to a non-firing state from which a sequence of deterministic transitions leads to σ_F (that state is a deterministic predecessor of σ_F). We therefore set the reward along a path ω′ = σ_0 σ_F^1 σ_F^2 · · · for taking the initial transition to σ_F^1 in D′ to be the sum of the total rewards accumulated along all distinct path prefixes of the form σ_0 ω_NF σ_F^1, normalised by the total probability of taking any of these paths, where ω_NF is a possibly empty sequence of deterministic predecessors of σ_F^1, and where the total reward for each prefix is weighted by the probability of taking the transitions along that sequence.

Theorem 2 For every population model DTMC D = (Q, σ_0, P, L) with reward structure R, reward properties of the form R_{⊲⊳r}[F synch] are preserved in its reduction D′ with the corresponding reward structure R′.

Proof (of Theorem 2) We want to show that for every reward structure R for D and corresponding reward structure R′ for D′, every ⊲⊳ ∈ {<, ≤, ≥, >} and every r ∈ R, if σ_0 |= R_{⊲⊳r}[F synch] holds in D then it also holds in D′. Let V_{Sat(F synch)} and V′_{Sat(F synch)} respectively denote the random variables over Paths_D(σ_0) and Paths_{D′}(σ_0) whose expectations correspond to R and R′. From the semantics of PCTL over a DTMC, we therefore need to show that

E_{σ_0}[V_{Sat(F synch)}] = E_{σ_0}[V′_{Sat(F synch)}],    (17)

where the expectations are taken with respect to the probability measures Pr_D and Pr_{D′} over the sets of infinite paths from σ_0 in D and D′ respectively. There are two cases. Firstly, if there exists some path of D that does not synchronise then by definition V_{Sat(synch)} = ∞. Also, from Lemma 7 we know that there is a corresponding path of D′ that does not synchronise, and hence that V′_{Sat(synch)} = ∞. By definition the probability measures of all paths of D and D′ are strictly positive.
Therefore, all summands of Equation 17 are defined, and the expectation of both V_{Sat(synch)} and V′_{Sat(synch)} is ∞.

Secondly, we consider the case where all possible paths of D and D′ synchronise. First we define the function reduce : Paths_D → Paths_{D′} that maps paths of D to their corresponding path in the reduction D′,

reduce(σ_0 ω_NF^1 σ_F^1 ω_NF^2 σ_F^2 · · ·) = σ_0 σ_F^1 σ_F^2 · · ·,

where ω_NF^i is the (possibly empty) sequence of deterministic predecessors of the firing state σ_F^i. Let reduce^{−1}(ω) denote the preimage of ω under reduce. Then, we can rewrite the left side of (17) as a sum over the paths ω′ of D′, where each summand aggregates the paths in reduce^{−1}(ω′). For any path ω of D or D′ let pre_s(ω) be the prefix of that path whose last state is the first firing state along that path that is in the set Sat(synch). So we want to show that the following holds for any path ω′ of D′:

Σ_{ω ∈ reduce^{−1}(ω′)} Pr_D(pre_s(ω)) · tot_R(pre_s(ω)) = Pr_{D′}(pre_s(ω′)) · tot_{R′}(pre_s(ω′)).    (18)

Given some path ω let ω[i : j] denote the sequence of states in ω from the i-th firing state to the j-th firing state along that path (inclusively). The notation ω[− : j] indicates that no states are removed from the start of the path, i.e. the first state is σ_0, and the notation ω[i : −] indicates that no states are removed from the end of the path. By recalling that Pr(σ_0 σ_1 · · · σ_n) = Π_{i=1}^{n} P(σ_{i−1}, σ_i) we can see that Pr(σ_0 σ_1 · · · σ_n) = Pr(σ_0 · · · σ_i) · Pr(σ_i · · · σ_n) for any 0 < i < n. Also, from (15) it is clear that for any reward structure R, tot_R(σ_0 · · · σ_n) = tot_R(σ_0 · · · σ_i) + tot_R(σ_i · · · σ_n) holds for all 0 < i < n. Now we can rewrite (18) by splitting each prefix at the first firing state (19). By the definition of R′ we can write the right hand side of (19) in terms of the prefixes from σ_0 to the first firing state. From Lemma 7 we know that Pr_D(Pref(ω′)) = Pr_{D′}({σ_0 σ_F^1}), and hence obtain (20). Since Pref(ω′) is the set of all possible finite prefixes from the initial state σ_0 to the first firing state σ_F^1, and since ω[− : 1] = pre_s(ω)[− : 1] clearly holds, we know that the sums over these prefixes agree. Using this fact, and by observing that the corresponding suffixes of ω and ω′ coincide by definition, we can write (20) in the same form as the left hand side of (19). This is the same as the left hand side of (19) and the theorem is proved. ⊓⊔
Connecting the Concrete Model and the Population Model
In this section, we define the abstraction function to connect a concrete model with a population model. To that end, let D_c = (Q_c, s_0, P_c) be a concrete model of a network of N PCOs with a clock cycle length T, a refractory period R, a phase response function ∆, a coupling ǫ and a broadcast failure probability µ. Furthermore, let D_p = (Q_p, σ_0, P_p) be the DTMC of a population model for the same parameters. For brevity, we introduce some general notation for transitions in DTMCs. If there is a possible transition between two states q and q' in a DTMC, that is, P(q, q') > 0, then we also write q → q'. Observe that for this simplification, q and q' are either both in Q_c or both in Q_p. We also denote the reflexive, transitive closure of → by ⇒.
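The closure ⇒ is ordinary graph reachability over the support of the transition matrix. A minimal sketch (the dense-matrix representation is an assumption of this illustration):

```python
import numpy as np

def reachable(P, src):
    """All states q' with src => q', i.e. reachable from src via
    the reflexive, transitive closure of q -> q' (P[q, q'] > 0)."""
    seen, stack = {src}, [src]
    while stack:
        q = stack.pop()
        for q2 in range(P.shape[0]):
            if P[q, q2] > 0 and q2 not in seen:
                seen.add(q2)
                stack.append(q2)
    return seen
```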
Proving the Correspondence between Concrete and Population Models
We need to associate states in Dc to states in Dp. In general, several concrete states will be mapped to a single population state, since we do not distinguish between different orders of oscillators in the latter, while we do in the former.
Furthermore, we want to abstract from the different modes of the oscillators. However, it is not sensible to associate all modes within a phase to the same population state, since in the transitions from one mode to the next the system chooses whether an oscillator fails to broadcast its pulse or not. If we want to be able to define a simulation relation, we need to represent the failures described by the transitions in the population model. To have an exact correspondence, we first collect all the concrete states where the counter and all oscillators are at the start mode into a single set. The abstraction function h : Q'_c → Q_p takes a concrete state s and counts the number of oscillators sharing the same phase, mapping s = (η, ν) to the corresponding state of the population model,

$$h(s) = \langle k_1, \ldots, k_T \rangle, \quad\text{where } k_\Phi = |\{u \mid p_\phi(\nu(u)) = \Phi\}|.$$

To show that this abstraction is sensibly defined, we need to show that the concrete model can weakly simulate the transitions allowed by the population model, and vice versa. That is, if the abstraction σ_1 of a concrete state s_1 allows a transition to another population state σ_2, then there is a sequence of transitions from s_1 leading to some s_2 whose abstraction is σ_2. Furthermore, if there is a transition sequence from one concrete state s_1 to s_2, where both states can be abstracted to population states σ_1 and σ_2, respectively, then there is also a sequence of transitions connecting σ_1 with σ_2. This situation is visualised in Fig. 5.
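A minimal sketch of h, assuming a concrete state is represented just by the map from oscillators to their phase values (the counter and mode components, which are fixed at the start mode here, are omitted):

```python
from collections import Counter

def h(phases, T):
    """Abstract a concrete state into the population state
    <k_1, ..., k_T>, where k_Phi counts the oscillators whose
    phase value equals Phi."""
    counts = Counter(phases.values())
    return tuple(counts.get(phi, 0) for phi in range(1, T + 1))

# Three oscillators in phase 2 and one in phase 5, with T = 5:
assert h({"u1": 2, "u2": 2, "u3": 2, "u4": 5}, T=5) == (0, 3, 0, 0, 1)
```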
For the first direction, we actually show this condition for a single transition in the population model. However, this result can be straightforwardly extended to transition sequences. Lemma 8 Let s_1 ∈ Q'_c and σ_1, σ_2 ∈ Q_p such that h(s_1) = σ_1 and σ_1 → σ_2. Then there is an s_2 ∈ Q'_c such that s_1 ⇒ s_2 and h(s_2) = σ_2. Furthermore, the sum of the probabilities of transition sequences from s_1 to an instantiation s_2 of σ_2 is equal to the probability of the transition from σ_1 to σ_2.
That is, Q^F_c(s) denotes the set of oscillators possibly firing in s. The sets Q^P_c(s) and Q^{PF}_c(s) denote the sets of oscillators being perturbed but not firing (since the perturbation is not sufficient for the oscillators to reach the end of their cycle), and being perturbed and possibly firing, respectively. We can only say that elements of Q^F_c(s) and Q^{PF}_c(s) possibly fire, since they may be affected by a broadcast failure.
We now have to construct a sequence of transitions where we draw the firing oscillators from the sets Q^F_c(s_1) and Q^{PF}_{c,Φ}(s_1), according to the broadcast failure vector F. Furthermore, all elements of Q^F_c(s_1) and the sets Q^{PF}_{c,Φ}(s_1) have to take transitions such that their phase value in the next iteration is 1.
Let σ_1 = ⟨k_1, k_2, …, k_T⟩. Now consider an arbitrary sequence u_1, …, u_{k_T} of all k_T elements from Q^F_c(s_1). Additionally, let C_T ⊆ Q^F_c(s_1) be the set of oscillators in phase T with a broadcast failure, i.e., |C_T| = f_T. Observe that p_φ(ν_1(u_j)) = T for all 1 ≤ j ≤ k_T. Furthermore, let r_0 = s_1. Then we define a sequence of successors of r_0 = (η_0, ν_0), where for 1 ≤ j ≤ k_T the state r_j is obtained from r_{j−1} by letting u_j take its transition, with its broadcast failing exactly if u_j ∈ C_T. Observe that these states define a sequence of transitions from r_0 to r_{k_T} according to conditions (2) and (3). Now, for each phase Φ < T, we proceed similarly. That is, we first choose a sequence u^Φ_1, …, u^Φ_{k_Φ} of oscillators and a set C_Φ ⊆ Q^{PF}_{c,Φ}(s_1) with |C_Φ| = f_Φ. Subsequently, we define each r^Φ_j analogously, with the broadcast of u^Φ_j failing exactly if u^Φ_j ∈ C_Φ. Observe again that these sequences exhaust Q^{PF}_{c,Φ}(s_1) for each phase Φ. Furthermore, we claim that the number of firing oscillators that are not inhibited by a broadcast failure in the concrete model coincides with the number of perceived firing oscillators in the population model in this phase; for the base case, pc(η^T_0) = α_T(σ_1, F). Now let Φ < T and assume pc(η^{Φ+1}_0) = α_{Φ+1}(σ_1, F). By definition and this assumption, the claim then also holds for Φ. We now distinguish two cases. First, assume that {u | p_φ(ν_1(u)) = T} = ∅, and let s = (η_s, ν_s) be such that s_1 ⇒ s and p_θ(ν_s(u)) = update for all u. Then there is exactly one transition s → s_2, which is defined according to equation (8). Furthermore, due to the assumption that no oscillator fires, we have pc(η_s) = 0, which implies ∆(Φ, pc(η_s), ǫ) = 0 for all Φ by Definition 4. Hence, for all u, we have p_φ(ν_2(u)) = p_φ(ν_s(u)) + 1 = p_φ(ν_1(u)) + 1. That is, we have σ_1 → σ_2 due to a deterministic transition, which in particular establishes the claim in this case. The second case is more involved. Let us assume {u | p_φ(ν_1(u)) = T} ≠ ∅, that is, at least one oscillator fires. Hence, due to the preconditions of the transitions, we can divide the transition sequence from s_1 to s_2 into segments, where each segment ends in a state r_Φ = (η_{r_Φ}, ν_{r_Φ}) in which all oscillators with phase Φ have changed their mode to update. Our goal now is to find a broadcast failure vector F such that succ(σ_1, F) = σ_2. To that end, let F = ⟨f_1, …, f_T⟩, where f_Φ is the number of oscillators in phase Φ whose broadcast failed along the sequence. With this broadcast failure vector at hand, we now have to show that succ(σ_1, F) = σ_2. Recall that U_Φ(σ_1, F) = {Ψ | τ(σ_1, Ψ, F) = Φ}. Together with the condition we want to prove, this implies that we need to show p_φ(ν_2(u)) = τ(σ_1, p_φ(ν_1(u)), F) for all oscillators u. We now need again to distinguish several cases, according to the different cases of the transition defined by condition (8).
First, let u be such that p_φ(ν_1(u)) ≤ R, i.e., oscillator u is within its refractory period. If p_φ(ν_1(u)) = T, then u fires and its phase is reset, so p_φ(ν_2(u)) = 1 = τ(σ_1, T, F). Otherwise, if p_φ(ν_1(u)) < T, the phase simply increases, so p_φ(ν_2(u)) = p_φ(ν_1(u)) + 1 = τ(σ_1, p_φ(ν_1(u)), F), since u cannot be perturbed while refractory. Now assume that p_φ(ν_1(u)) > R, i.e., oscillator u is outside of its refractory period and thus will be perturbed by firing oscillators. If p_φ(ν_1(u)) = T, then we proceed as in the previous case. So, let us assume p_φ(ν_1(u)) < T. To show that the transition function of the population model coincides with the result within the concrete model, we need to ensure that the perceived firing oscillators are equal in both models for each oscillator.
Claim Within each phase, the perceived oscillators in the population model coincide with the oscillators that fired up to the next higher phase in the concrete model. Formally, for each 1 ≤ Φ < T, we have pc(η_{r_{Φ+1}}) = α_Φ(σ_1, F).
Lemma 10 Let D_c = (Q_c, s_0, P_c) be a concrete network of oscillators and D_p = (Q_p, σ_0, P_p) be its abstraction as a population model, as well as s_1 ∈ Q'_c and σ_1, σ_2 ∈ Q_p, with h(s_1) = σ_1. Then, the sum of the probabilities of transition sequences from s_1 to all instantiations s_2 with h(s_2) = σ_2 is equal to the probability of the transition from σ_1 to σ_2.
Proof Let σ = ⟨k_1, …, k_T⟩. Furthermore, let N = Σ_{i=1}^{T} k_i. Now, let s be an arbitrary state corresponding to σ. If no oscillator fires, we have N! possibilities to create a transition sequence, each of which has a probability of 1/N! to happen. Hence, we get that the probability that one of these transitions happens is N! · (1/N!) = 1, which coincides with the definition in the population model. For the case that at least one oscillator fires and thus perturbs the other oscillators, we consider the construction in the proof of Lemma 8 with respect to a failure vector F = ⟨f_1, …, f_T⟩ for σ. During each phase Φ, we have to choose the particular order of the k_Φ oscillators and, in addition, we have to choose the set C_Φ. That is, we have k_Φ! possible orders, and $\binom{k_\Phi}{f_\Phi}$ possibilities for the choice of C_Φ. Furthermore, the combined probability for the transitions of the oscillators that should fire but are inhibited by a broadcast failure is µ^{f_Φ}, and it is (1 − µ)^{k_Φ − f_Φ} for those that are not. Observe that at the start of the construction of each phase, |init_Φ(s)| = k_Φ. Hence the probability above simplifies to µ^{f_Φ}(1 − µ)^{k_Φ − f_Φ}/k_Φ! for each fixed order and choice of C_Φ. Due to the possible choices during the construction of the transition sequence, we have that the probability of one of these sequences to happen is

$$k_\Phi!\,\binom{k_\Phi}{f_\Phi}\cdot\frac{\mu^{f_\Phi}(1-\mu)^{k_\Phi - f_\Phi}}{k_\Phi!} = \binom{k_\Phi}{f_\Phi}\,\mu^{f_\Phi}(1-\mu)^{k_\Phi - f_\Phi},$$

which is exactly the function PMF(k_Φ, f_Φ) as in the population model. Furthermore, with similar reasoning as above, the transition probability for the sequences where no oscillator is perturbed any more is 1. Hence, we get that the combined probability of the set of paths from one instantiation s_1 of a population model state σ_1 to an instantiation of a successor σ_2 of σ_1 is equal to the probability of the transition from σ_1 to σ_2. ⊓⊔ From Lemmas 8, 9 and 10, we immediately get that the same weak simulation relation holds between a population model and the concrete network of oscillators it represents. Theorem 3 Let D_c = (Q_c, s_0, P_c) be a concrete network of oscillators and D_p = (Q_p, σ_0, P_p) be its abstraction as a population model. If we have s_1, s_2 ∈ Q'_c and σ_1, σ_2 ∈ Q_p, with h(s_i) = σ_i, then σ_1 →* σ_2 if, and only if, s_1 ⇒ s_2. Furthermore, the probabilities over all paths in both models coincide. In particular, we have h(s_0) = σ_0, and s_0 weakly bisimulates σ_0. Hence, we can use population models to analyse the global properties of a network of pulse-coupled oscillators following the concrete model as defined in Sect. 4 without loss of precision. In particular, this allows us to increase the size of the network to check such properties, while still giving us the opportunity to analyse the internal behaviour of nodes if we restrict the network size.
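The failure-vector probability derived in this proof is simply the binomial mass; a direct transcription, assuming µ is the broadcast failure probability as above:

```python
from math import comb

def pmf(k, f, mu):
    """PMF(k, f): probability that exactly f of k potentially
    firing oscillators suffer an independent broadcast failure,
    C(k, f) * mu^f * (1 - mu)^(k - f)."""
    return comb(k, f) * mu**f * (1 - mu) ** (k - f)

# Sanity check: the masses over all f sum to 1 for any k and mu.
assert abs(sum(pmf(5, f, 0.1) for f in range(6)) - 1.0) < 1e-12
```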
Experimental Validation
As Theorem 3 implies, the synchronisation probabilities for a concrete model and its corresponding population model coincide. However, we have to keep in mind that the PCTL formulas describing synchronisation are of course different. For a concrete model with four nodes and a cycle length T = 10, the synchronisation probability can be queried with a quantitative reachability formula over the individual phase variables, while for the population model the corresponding property refers to the phase counters; both are sketched below.
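The formulas are given here only as a sketch. Plausible PCTL renderings, assuming the concrete model exposes a phase variable p_i per node and the population model a counter k_Φ per phase (the variable names are assumptions, not the actual Prism encoding), are

$$P_{=?}\big[\,\mathrm{F}\ (p_1 = p_2 \wedge p_2 = p_3 \wedge p_3 = p_4)\,\big]$$

for the concrete model, and

$$P_{=?}\Big[\,\mathrm{F}\ \bigvee_{\Phi=1}^{10} k_\Phi = 4\,\Big]$$

for the population model; both query the probability of eventually reaching a state in which all four oscillators share the same phase.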
For both types of models, we defined a suitable input for the model checker Prism, and compared the results for different values of R, ǫ and µ. As expected, the model checking results matched exactly. Table 4 shows the model construction and checking times for some exemplary parameter combinations of the models, as reported by Prism. In the concrete model, the bulk of the time is spent in the model checking phase, while the construction is much faster. For the analysis of the population model, however, the situation is reversed: the model construction phase is an order of magnitude longer than the model checking phase. As expected, the model checker needs less time overall for the analysis of the population model, even when the time needed for model construction is added to the time needed for checking.
Conclusion
In this paper we have introduced a formal concrete model for a network of nodes synchronising their clocks over a set of discrete values. Furthermore, we developed a population model that can alleviate state-space explosion when reasoning about significantly larger networks. We encoded both models as discrete-time Markov chains, and formally connected them by showing that a concrete model of a network weakly simulates a population model of that same network. We then showed that these two models are equivalent with respect to the reachability of distinguished states, namely those where all nodes in the network have synchronised their clocks.
Formalising the individual nodes of a network allows for the analysis of their internal properties. However, this internal structure also inhibits the verification of global network properties. Modelling the whole network as the product of the models for the individual nodes quickly, and unsurprisingly, results in a model that is too large to analyse with existing tools and techniques. While the use of appropriate collective abstractions, such as population models, allow for the analysis of larger networks, they often impose restrictions on the topologies of the network that can be considered. We could, of course, simply take the product of individual population models to represent network structures more specialised than the fully-connected graphs considered here, but again we face the consequences of this approach when trying to analyse the resulting model. In addition, when using population models we lose the possibility to distinguish between nodes having the same internal state. However, this does not restrict our analysis when considering networks of homogeneous nodes where the properties of interest relate to global behaviours of the network itself.
Our current definition of pulse-coupled oscillators only allows for non-negative results of the phase response function. However, there are also oscillator definitions with phase response functions with possibly negative values [32]. That is, instead of shifting the state of an oscillator towards the end of the cycle, the perturbation may reduce the value of the oscillator's state. It would be interesting to study the impact of negative-valued phase response functions in the setting of discrete clock values. While a concrete model can be instantiated to incorporate different topologies by explicit encoding of possible perturbations in the nodes' transitions, it is by no means obvious how to incorporate topologies into a population model. By design, the nodes in the latter are indistinguishable, hence the differences in the connections between nodes are lost. We could alleviate this restriction slightly by modelling the connections between networks of strongly connected components. That is, each component can be modelled by a different population model, and the firings within one model can perturb different models. However, this would mean computing the cross-product of the population models, and hence we are back at the state-space explosion problem. Furthermore, our abstraction relation would need to take the mapping of single nodes into different components into account.
Deductive approaches might serve as an additional way to verify larger systems. In particular, due to the regularity of population models, we conjecture the existence of an inductive invariant that holds from a certain size of models onwards. That is, as soon as the population grows to a size to be treated as a single entity, we can increase this size by one node and guarantee that synchronisation still occurs. For the population model sizes below this threshold, we could still use our proposed model-checking technique as the induction base. However, it is not clear what such an invariant should be, or how it can be verified.
Partial purification of organ-specific neoantigens from human colon and breast cancer by affinity chromatography with human tumour-specific gamma-globulin.
Organ-specific neoantigens (TA) shed from the tumours of patients with metastatic breast or colon cancer and which had filtered into the urine were partially purified by a combination of physicochemical methods and affinity chromatography. TA activity of the isolated materials was monitored by the blocking Tube LAI assay. Urinary protein was precipitated by 80% saturated ammonium sulphate. Albumin was removed by affinity chromatography with blue Sepharose CL-6B. Affinity columns of human IgG were prepared from sera of patients whose leucocytes were LAI+ to the breast- or colon-cancer extracts. The anti-breast-TA affinity column bound the TA in the urine of patients with metastatic breast cancer but not that of patients with metastatic colon cancer. The TA in urine of patients with metastatic colon cancer was bound by the anti-colon-TA affinity column. Analysis by SDS PAGE revealed that the isolates with and without TA activity were composed mostly of urinary protein which had bound nonspecifically to the human IgG affinity columns. With an affinity column of anti-NHS and Protein A, some of the contaminants were removed, to reveal on SDS PAGE unique bands at about 38,000 and 12,000 mol. wt in the isolate with breast-TA activity. Rabbit antisera, raised to the material that had bound nonspecifically to the anti-breast-TA affinity column, were used as an anti-nonspecific affinity column to remove the contaminants in the isolates from the affinity columns of anti-breast TA and anti-colon TA. After passage through the anti-nonspecific affinity column, the material that contained the putative breast or colon cancer TA revealed a unique band at about 38,000-40,000 mol. wt and residual fine bands at about 25,000-30,000 mol. wt. Both the control material and material with TA activity had similar bands at about 25,000 and 50,000 mol. wt. The specific activity of the putative colon or breast TAs, as measured by the blocking Tube LAI assay, was increased from about 30 to 5000-10,000 u/mg, a 125-400-fold enrichment.
HISTOCOMPATIBILITY ANTIGENS have been defined with alloantisera from immunized subjects. Alloantisera have been invaluable in defining the polymorphism of these antigens and monitoring their purification. On the other hand, xenoantibodies to the histocompatibility complex, even to purified HLA antigens, have not been reagents of great value in revealing HLA allospecificity (Sanderson, 1977). Likewise, detection of experimental or human tumour-antigen epitopes by the immunization of xenogeneic animals has proved fraught with problems, and few if any tumour antigens to which the tumour-bearing host responds have been defined.
In experimental tumour models the existence of tumour-specific transplantation antigens was established on the basis of the rejection of transplantable tumours in previously immunized syngeneic recipients. Assays of cell-mediated and humoral antibody responses have been used to monitor the isolation of tumour antigens involved in the in vitro response to the animal tumour cells (Baldwin & Embleton, 1970). In our laboratory, the isolation of a chemically induced tumour-specific antigen (TSA) was monitored with syngeneic tumour-immune serum, and the IgG from the tumour-immune serum was used in affinity chromatography to isolate the papain-solubilized TSA from tumour-cell membranes (Thomson et al., 1976).
The principal evidence that human tumours express neoantigens has come from in vitro assays of cell-mediated and humoral antibody responses to tumour cells. Such assays have not been felt to be sufficiently reliable to be used to monitor the isolation of the tumour antigen (TA) involved in the response. However, Halliday & Miller (1972) described a most promising in vitro assay of human antitumour immunity which is based on the binding of TA with the membrane of sensitized peripheral-blood leucocytes, which inhibits the adherence of the leucocytes to glass. The validity of leucocyte-adherence inhibition (LAI) has been confirmed in many laboratories (Holan et al., 1974; Maluish & Halliday, 1975; Powell et al., 1975; Burger et al., 1977; Leveson et al., 1977; Russo et al., 1978; Shani et al., 1978; Thomson, 1979).
A modified assay called the Tube LAI was adopted in our laboratory (Grosser & Thomson, 1975), and subsequently the counting of the nonadherent leucocytes was automated by image analysis (Thomson et al., 1979a). Inhibition of the LAI response was used to monitor the purification of human TAs papain-solubilized from the membranes of hepatoma, malignant melanoma, breast and colorectal cancer, and the human TAs were shown to be associated with β2-microglobulin (Thomson et al., 1979b).
The present experiments were undertaken to purify specific cancer TAs from the urine of patients with metastatic breast (Lopez & Thomson, 1977) or colon cancer by affinity chromatography with tumour-immune serum from patients whose leucocytes were LAI+ (Marti et al., 1976). Success of the method was judged by the enrichment of the specific activity of the putative colon or breast cancer TA when tested in the blocking Tube LAI assay, and by the finding of polypeptide subunits unique to the material with TA activity when analysed by SDS PAGE.
Donors of leucocytes
Heparinized samples of blood were drawn from patients with colon cancer, breast cancer or melanoma. Buffy-coat peripheral-blood leucocytes (PBL) were processed for the Tube LAI as described by Grosser & Thomson (1975).
Antigen-induced tube leucocyte-adherence inhibition (Tube LAI)

The antigen-induced Tube LAI assay was performed as described in detail by Grosser & Thomson (1975). In most of the experiments, nonadherent PBL were counted by image analysis (Tataryn et al., 1979; Thomson et al., 1979a). The results were expressed as a nonadherence index (NAI), computed from A and B (Grosser & Thomson, 1975), where A is the number of nonadherent cells in the presence of the specific antigen and B is the number of nonadherent cells in the presence of the nonspecific antigen. To detect LAI reactivity to colon cancer, the specific antigen was an extract of colon cancer and the nonspecific antigen was an extract of squamous-cell lung cancer. To detect LAI reactivity to breast cancer, the specific antigen was an extract of breast cancer and the nonspecific antigen was an extract of malignant melanoma. LAI reactivity to malignant melanoma was detected by reversing the calculations with the extracts of melanoma and breast cancer. NAIs ≥ 30 were considered positive and indicated antitumour immunity. NAIs < 30 were considered negative because in previous studies more than 95% of control subjects had NAIs < 30 (Flores et al., 1977; Lopez et al., 1978; Tataryn et al., 1978, 1979). In the Tube LAI assay, the number of nonadherent leucocytes is affected by the quantity of protein in the tubes. At about 100 µg/tube the number of nonadherent leucocytes is optimal for counting. Too much or too little protein causes too many or too few leucocytes to be nonadherent for reliable counting.
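For concreteness, a sketch of the index and cut-off; the percentage form of the formula is an assumption consistent with the 30-unit cut-off used throughout:

```python
def nonadherence_index(a, b):
    """NAI = (A - B)/B x 100, where a = nonadherent cells with the
    specific antigen and b = nonadherent cells with the nonspecific
    antigen (the exact published form is assumed here)."""
    return (a - b) / b * 100

def lai_positive(a, b, cutoff=30):
    """NAI >= 30 indicates antitumour immunity; NAI < 30 is negative."""
    return nonadherence_index(a, b) >= cutoff
```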
To detect antigen activity in small quantities of protein, the cells were preincubated with the material, with the idea that the specific effect of the antigen might be mediated during the incubation and, by washing the cells free of unbound protein, the nonspecific effect of protein directly in the assay would be eliminated, while maintaining the desired specific effect. In subsequent experiments, the preincubation of leucocytes with the test or control material proved a valid step (Marti et al., 1976; Lopez & Thomson, 1977; Thomson et al., 1979b). During preincubation the leucocytes react with any sensitizing antigen present, and when plated in the standard assay the leucocytes will not react again with the same specific antigen.
PBL were from patients with either malignant melanoma, breast or colon cancer, and reacted in the antigen-induced Tube LAI assay against their respective cancer extracts. A sample to be tested for TA activity was diluted to the appropriate protein concentration in Medium 199 containing 20% foetal calf serum, and 0.5 ml was preincubated with a minimum of 1-3 × 10⁷ leucocytes in 0.5 ml of Medium 199. The mixture was incubated at 37°C in a 5% CO₂ humidified atmosphere with frequent agitation of the plastic tube. After 30 min the PBL were washed with 10 ml of Medium 199 to remove the experimental sample. The PBL were resuspended in Medium 199 to 0.8 ml and then 0.1 ml of the suspension was plated in each of the glass test tubes with the specific and nonspecific cancer extracts as described in the antigen-induced Tube LAI assay. After 2 h incubation a sample of the nonadherent cells was counted by the computerized Tube LAI assay. TA activity was present in the sample when the LAI response was specifically and reproducibly nullified. To compare antigenic activities at different stages of the purification, a unit of activity was defined as the amount of antigen needed to give an NAI < 30 in the blocking assay. All samples that were tested in the blocking assay were coded, and unknown to the two individuals doing the LAI assay. Throughout this study, one experimenter used the PBS extracts of colon and squamous-cell lung cancer to test for LAI reactivity to colon cancer, whilst the other used the PBS extracts of breast cancer and malignant melanoma to test for LAI reactivity to either breast cancer or melanoma. Moreover, when the samples were tested, a sample that might be expected to block and a sample that should not block (positive and negative controls) were included. Since abrogation of LAI by any sample could be mediated by nonspecific and nonimmunologic substances, the samples which blocked were then tested on LAI+ PBL from patients with unrelated cancers, to confirm that the sample did not affect the LAI of their leucocytes.
Isolation of human TA from urine

Physicochemical methods.-A 24-h urine sample from control subjects and patients with metastatic colon or breast cancer was collected in sterile containers and stored at 4°C with 0.02% NaN₃ and 15 ml 1.0M Tris-HCl buffer, pH 9. The urine was brought to 0.8 saturation with (NH₄)₂SO₄ with constant stirring and the resultant precipitate was collected by centrifugation at 20,000 g for 10 min. The precipitate was suspended in a minimal volume of PBS and dialysed twice against 100 volumes of this buffer over 24 h at 4°C. Insoluble material was removed by centrifugation at 20,000 g for 10 min and the supernatant was concentrated by ultrafiltration on an Amicon PM10 membrane to about 10-15 mg/ml. Albumin was removed from the isolated urinary proteins by chromatography with blue Sepharose CL-6B (Pharmacia, Montreal) with all materials in 0.1M sodium phosphate buffer, pH 7.0. The unretained material was concentrated and dialysed against PBS at pH 7.3 (Lopez & Thomson, 1977). This isolate of urinary protein was subjected to molecular-sieve chromatography on a calibrated Sephadex G-150 column (2.5 × 60 cm) or applied to an affinity column of human IgG.
Antisera used in the isolation of TA from urine

Reactive human serum.-Sera from patients with limited cancer of the breast whose leucocytes were reactive in the Tube LAI assay were shown to "arm" normal leucocytes to respond specifically in the LAI assay, and this serum was used in affinity chromatography. Similar results were observed with sera from patients with limited colon cancer. The affinity column prepared with IgG from the serum of LAI+ breast-cancer patients was called an anti-breast-TA affinity column, and the affinity column prepared with the IgG from the serum of LAI+ colon-cancer patients was called an anti-colon-TA affinity column.
Anti-human whole serum.-Rabbits were immunized with 1 ml of normal human serum (NHS) in complete Freund's adjuvant. From the resultant antiserum, antibodies directed to membrane components were removed by passage through an affinity column of AH-Sepharose 4B to which papain-soluble human liver-cell membrane was coupled.
Anti-nonspecific serum.-The urinary protein from the patients with metastatic colon cancer, or from breast-cancer patients, which bound to the affinity column of anti-breast TA, was used to immunize separate rabbits. Rabbits were immunized i.m. at Days 1 and 4 with 250 µg of material mixed with 250 µg of methylated bovine albumin and emulsified with an equal volume of complete Freund's adjuvant. The rabbits were boosted at 4 weeks with the material in incomplete Freund's adjuvant and bled 6 and 7 weeks after the priming injection.
Urinary protein from patients with metastatic colon cancer was isolated by the physicochemical methods described above and linked to AH-Sepharose 4B (Cambiaso et al., 1975). Through this affinity column was separately passed the IgG from the antisera of rabbits immunized with the bound colon and breast urinary protein. The IgG which bound to the column was eluted with 3M KSCN. The eluted IgG had specificity for colon and breast urinary protein that had bound nonspecifically both to the anti-breast TA column and to the anti-colon-TA affinity column.
Affinity chromatography procedures.-IgG from the different antisera was purified by DEAE cellulose as described by Reif (1969). The IgG was covalently coupled to AH-Sepharose 4B (Cambiaso et al., 1975), and thoroughly washed with 1.0M NaCl buffers of high and low pH; then with 3M KSCN, and finally with PBS at pH 7.3. The anti-breast-TA and anti-colon-TA affinity columns contained 70 and 40 ml of AH-Sepharose 4B to which 2 and 1.5 g of IgG were coupled, respectively.

Isolation of urinary TA by affinity chromatography

Urinary protein isolated by physicochemical methods from patients with metastatic colon or breast cancer was applied separately to the anti-breast-TA affinity column. After the unbound fraction was eluted with 10 column volumes of PBS at pH 7.3, the column was prewashed with 1.0M NaCl, 0.1M NaOH-glycine buffer (pH 9.0) to remove nonspecifically adsorbed proteins (Zoller & Matzku, 1976). The bound proteins were then eluted with 3.0M KSCN, immediately dialysed against PBS and concentrated by ultrafiltration with a YM10 membrane (Amicon). The unbound fraction was returned to its original volume and the bound fraction concentrated to about 1 mg/ml; and, before storage at -40°C, both the unbound and bound fractions were centrifuged at 100,000 g for 1 h. After thawing and before being assayed, the fractions were centrifuged at 10,000 g for 10 min to remove denatured materials.
The bound and unbound materials were tested for TA activity, and their patterns on SDS PAGE were analysed. The unbound materials were then applied to the anticolon-TA affinity column and the unbound and bound materials were similarly analysed.
Because the material isolated from either the anti-breast-TA or anti-colon-TA affinity columns had obvious protein contaminants, either anti-NHS affinity chromatography or the anti-nonspecific affinity columns were used to remove the nonspecific urinary protein which had adhered to the anti-breast-TA or anti-colon-TA affinity columns. Swollen and washed Protein A linked to Sepharose (1 ml) (Pharmacia) was placed at the bottom of the anti-NHS and anti-nonspecific affinity columns to remove IgG which may have bled from the previous affinity columns.
Sodium-dodecyl sulphate (SDS) polyacrylamide-gel electrophoresis (PAGE)

High-resolution SDS slab gels (0.75 mm thick) were run under reducing conditions by the discontinuous method of Laemmli (1970), the running gel having an exponential gradient of 5-20% polyacrylamide. The gels were stained with 0.25% Coomassie blue, 4.5% methanol and 7.5% acetic acid. Radioiodination of the isolated urinary protein was performed by the chloramine-T method, with 1 mCi of ¹²⁵I and 50 µg of protein. The pattern of the radiolabelled proteins on SDS PAGE was determined by autoradiography.
Partial purification of breast or colon cancer TAs from urine
The partial purification of breast or colon cancer TAs from the urine of patients with metastatic breast or colon cancer, respectively, was as outlined (Tables I and II). TA activity was assayed by the blocking Tube LAI. An overall yield of 16-44% was obtained, with an enrichment of specific activity from 125- to 400-fold with 4 different preparations, the results of two being detailed in Tables I and II. After physicochemical isolation of TA in urinary protein, 80% or more of the TA activity was recovered (Tables I and II). PBL from patients with breast cancer had their LAI activity abrogated by preincubation with urinary protein from patients with metastatic breast cancer, but not from patients with metastatic colon cancer (Table III). Similarly, LAI+ leucocytes from patients with colon cancer were blocked only by urinary protein from patients with metastatic colon cancer (Table III).
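The specific-activity bookkeeping behind these figures is simple arithmetic: units of blocking activity (1 u = the amount giving NAI < 30) per mg of protein, and the ratio of final to starting specific activity. A trivial sketch with illustrative numbers taken from the ranges quoted above:

```python
def specific_activity(units, mg_protein):
    """Specific activity in u/mg."""
    return units / mg_protein

def fold_enrichment(sa_start, sa_end):
    """Fold enrichment of specific activity across the purification."""
    return sa_end / sa_start

# Starting material of roughly 25-40 u/mg purified to 5000-10,000 u/mg
# spans the reported 125- to 400-fold range:
assert fold_enrichment(40, 5000) == 125.0
assert fold_enrichment(25, 10000) == 400.0
```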
The urinary protein with TA activity that did not bind to the blue Sepharose CL-6B affinity column was electrophoresed on SDS gels and stained with Coomassie blue. Multiple bands were visible (not shown). (Footnote to Table III: An NAI > 30 was positive and indicated no blocking, whereas an NAI < 30 was negative and indicated blocking. The specific and nonspecific cancer extracts were used at a concentration of 100 µg/tube.)
The isolated urinary protein was subjected to molecular-sieve chromatography on a calibrated column of Sephadex G-150, and most of the protein eluted in a single broad peak with the maximum OD280 at 48,000 mol. wt. The material that eluted from the column was pooled into 3 fractions that corresponded approximately to the excluded volume (Frac. 1), the elution volume of aldolase (Frac. 2), and ovalbumin (Frac. 3). The Sephadex G-150 Frac. 3 of the urinary protein from patients with metastatic breast cancer had specific blocking activity (Table III).
Urinary protein from patients with metastatic colon cancer, isolated in a similar manner, also had TA activity in Frac. 3 (results not shown). Moreover, Table III shows that the material in Frac. 3 nullified the LAI activity of leucocytes from patients with colon or breast cancer in an immunologically specific manner.
Urinary protein from 6 patients with metastatic breast cancer and 5 patients with metastatic colon cancer has been similarly isolated and found to have specific TA activity. Urinary protein from 2 patients without cancer had no blocking activity. The urinary protein was isolated physicochemically and then by molecular-sieve chromatography.

Anti-breast TA.-Isolated urinary protein from patients with either metastatic breast or colon cancer was applied separately to the anti-breast-TA affinity column. The unbound, bound and eluted fractions were then assayed for TA activity (Table IV). The bound and eluted fraction of urinary protein from the patients with metastatic breast cancer (but not from the controls) negated the LAI activity of leucocytes from patients with breast cancer. Moreover, LAI+ leucocytes from patients with colon cancer or malignant melanoma showed LAI activity in the presence of the isolate that blocked the LAI activity of leucocytes from breast-cancer patients (Table IV). The unbound fraction of urinary protein from patients with cancer of the colon did not block LAI+ leucocytes from breast-cancer patients, whereas the same unbound urinary protein blocked the LAI reactivity of leucocytes from colon-cancer patients (Table IV). Affinity chromatography with the anti-breast-TA affinity column increased the specific activity of the breast TA about 25-fold (Table I), and had no effect on the specific activity of the colon TA (Table II).
Anti-colon TA.-The urinary protein which did not bind to the anti-breast-TA affinity column was applied to the anti-colon-TA affinity column. The urinary protein from patients with metastatic colon cancer that bound and was eluted was enriched for colon-TA activity from 10- to 30-fold. Table II shows that the specific activity of the colon TA was increased from 100 to 1000 u/mg, a 10-fold enrichment. The isolate blocked LAI+ leucocytes from colon-cancer patients in an immunologically specific manner (Table IV). The isolate of colon urinary protein did not inhibit the LAI activity of leucocytes from breast-cancer patients, nor did the bound and eluted material from breast urinary protein from the anti-colon-TA affinity column alter the LAI activity of PBL from colon-cancer patients (Table IV).
SDS PAGE of the isolates of the urinary protein from either control subjects or metastatic breast or colon cancer patients revealed fewer bands; however, the isolate from the urinary protein from patients with metastatic breast or colon cancer showed no difference from the isolates from the controls (not shown), in spite of the use of 1.0M NaCl, 0.1M NaOH-glycine buffer (pH 9.0) to remove the proteins that had adhered nonspecifically to the affinity column before elution with 3.0M KSCN.
Affinity chromatography with anti-NHS and Protein A
The binding of undesired urinary protein was greater than the specific binding of the breast TA to the anti-breast-TA affinity column. To remove the unwanted protein contaminants, the isolates from the urinary protein from normal subjects and breast- and colon-cancer patients were passed through an affinity column of rabbit anti-NHS and Protein A, which yielded a 2-fold enrichment of breast TA (Table I). The isolate from the urine of patients with metastatic breast cancer blocked LAI+ leucocytes from patients with breast cancer, whereas similar isolates from normal subjects and patients with metastatic colon cancer had no effect on LAI. Furthermore, the isolate that blocked LAI+ leucocytes from patients with breast cancer did not block the LAI reactivity of leucocytes from patients with other cancers (Table V). Hence, the blocking was specific.
The SDS PAGE pattern of the isolate from the breast or colon cancer urinary protein that did not bind to the anti-NHS and Protein A affinity column is shown in Fig. 1; both share a band at about 50,000 mol. wt. However, the isolate from the breast-cancer urinary protein has an intense band at a mol. wt of about 12,000, in contrast to the isolate from the colon-cancer urinary protein, and a unique band at about 38,000 (Fig. 1).
The isolates of the breast- and colon-cancer urinary protein from the anti-NHS affinity column were radiolabelled with ¹²⁵I and run on the SDS slab gels. Fig. 2 shows the autoradiographs of the isolate from the breast-cancer urinary protein before and after incubation and clearing with Protein A on fixed bacteria (Kessler, 1975), and the isolate from the colon-cancer urinary protein. The isolate from the breast-cancer urinary protein has a unique band at a mol. wt of about 38,000. The band at about 12,000 mol. wt is more intense in the isolate from breast than colon cancer urinary protein. In comparison, Fig. 2 shows an autoradiograph of papain-soluble breast-cancer membrane material purified by a horse anti-human-β2-microglobulin affinity column (Thomson et al., 1979b). This material also blocked specifically the LAI reactivity of leucocytes from breast-cancer patients and did not alter that from patients with colon cancer or melanoma.
The anti-NHS and Protein A affinity column removed some of the protein contaminants from the isolates recovered from the anti-breast-TA affinity column, with a 2-fold increase in specific activity of the breast TA. However, on SDS PAGE a number of intense, common bands remained in the materials with and without breast-TA activity.
Affinity chromatography with anti-nonspecific sera

Although the TAs in the urine of patients with metastatic cancer could be isolated by an affinity column of tumour-specific IgG, the isolates were clearly contaminated by other species of urinary protein.
The anti-NHS and Protein A affinity column did not remove enough of the contaminants from the isolates, so another approach was made.
Antiserum to the nonspecific proteins eluted from the anti-breast-TA affinity column (eluted isolates of urinary protein from the anti-breast-TA affinity column that were then passed through the anti-NHS and Protein-A affinity column) was prepared as described in Materials and Methods. With this antiserum an anti-nonspecific affinity column was prepared. The possibility that the antiserum might react with the TA in the urine was considered. (Figure legend: A, mol. wt standards; B, urinary protein from patient with metastatic colon cancer; C, urinary protein from patient with metastatic breast cancer. Triangles point to bands in the isolate from breast urinary protein which are either absent or in minimal amounts in the isolate from colon urinary protein.)
The bound and eluted samples of urinary protein from patients with metastatic breast or colon cancer from the anti-breast-TA affinity column were applied to the anti-nonspecific affinity column; Table VI shows the blocking activity of the unbound isolates from the breast- and colon-cancer urinary protein. Similarly, the bound and eluted urinary protein from the anti-colon-TA affinity column was applied to the anti-nonspecific affinity column. Table VI shows that the unbound isolate from colon urinary protein nullified LAI in an immunologically specific manner. The anti-nonspecific affinity column increased the TA-specific activity about 5-fold (Tables I and II).
Fifty-µg samples of the 4 isolates from the anti-breast-TA and anti-colon-TA affinity columns were labelled with ¹²⁵I. The ¹²⁵I-labelled isolates were applied separately to the anti-nonspecific affinity column and the unbound material was collected and concentrated. The ¹²⁵I-labelled isolates were electrophoresed on SDS gels and the patterns were autoradiographed.
The isolate from the breast urinary protein from the anti-breast-TA affinity column shows a heavy band at 38,000 mol. wt that is not observed in the isolate from the colon urinary protein from the same affinity column (Fig. 3). After passage through the anti-nonspecific affinity column, the ¹²⁵I-labelled isolate from the breast urinary protein continued to show a strong band at a mol. wt of about 38,000.
Three finer bands at 25,000-30,000 mol. wt were seen, and at least one of these bands appeared in the isolate from the colon urinary protein after passage through the anti-nonspecific affinity column. Isolates from both colon and breast urinary protein show a band at 50,000 mol. wt that was not removed by the anti-nonspecific affinity column. An autoradiograph of the isolate from the colon-cancer urinary protein from the anti-colon-TA affinity column shows a band at about 40,000 mol. wt (Fig. 4), whereas the isolate from the colon urinary protein from the anti-breast-TA affinity column lacked a band at this mol. wt (Figs 2 and 3). After passage through the anti-nonspecific affinity column, the isolate from the colon urinary protein from the anti-colon-TA affinity column continued to have a heavy band at about 40,000 mol. wt. The heavy band at 25,000 mol. wt was removed, although 4 fine bands remained at 25,000-30,000 mol. wt. A faint band at 12,000 mol. wt is also present. In comparison, an autoradiograph of the isolate from the breast urinary protein from the anti-colon-TA affinity column is shown in Fig. 5. A heavy band is present at 25,000 mol. wt with faint bands at about 50,000 and 12,000 mol. wt. However, the isolate from the breast urinary protein from the anti-colon-TA affinity column lacked a band at about 40,000 mol. wt (Fig. 5). After passage of the breast urinary protein from the anti-colon-TA affinity column through the anti-nonspecific affinity column, no band at 40,000 mol. wt was seen, although a band at 50,000 mol. wt and a faint band at about 25,000 mol. wt remained.
DISCUSSION
Since the TA is denatured by SDS PAGE, there is no way to determine which, if any, of the bands visualized in the gels carries the TA epitope. Previously we have shown that human TAs papain-solubilized from the membranes of hepatoma, malignant melanoma, breast and colon cancer were linked to β2-microglobulin (Thomson et al., 1979b; Thomson, 1979). In this study, the material with specific TA activity had a 12,000-mol.-wt subunit which became fainter with purification, which suggests that the association between β2-microglobulin and the molecule which carries the TA epitope was partially ruptured during isolation. Histocompatibility antigens isolated from urine are reported to show dissociation of the heavy chain and β2-microglobulin (Bernier et al., 1974).
Human cancers express organ-type-specific neoantigens which are immunogenic to the host bearing the cancer. The same antigens exist in some premalignant lesions such as colon adenomas (Tataryn et al., 1979) and in tissues of organs that display dysplastic changes of epithelium which may or may not be premalignant, such as papillomatosis and fibrocystic disease of the breast (Flores et al., 1977; Lopez et al., 1978; O'Connor et al., 1978; Thomson et al., 1979; Sanner et al., 1979) and chronic atrophic gastritis (Tataryn et al., 1979). The epitope of the organ-specific neoantigen has not been detected in the tissues of normal organs (Grosser & Thomson, 1975; O'Connor et al., 1978; Thomson et al., 1979b; Tataryn et al., 1979). The organ-specific neoantigen is probably not synthesized de novo, but the mutational process involved in the genesis of the cancer or dysplastic cell may induce some rearrangement of a cell-surface protein of the normal cell. The nature of the change in the cell-surface protein is unknown, but might represent exposure of cryptic sites or structural changes. The question of which cell-surface protein has an organ-specific neoantigen epitope that becomes immunogenic will probably not be answered until the molecule is purified and characterized.
Quality of Life in Men Treated for Early Prostate Cancer: A Prospective Patient Preference Cohort Study
Objectives: In 1997, a study was launched to investigate the treatment of early prostate cancer. Using a patient preference design, health-related quality of life (HRQOL) and disease-specific HRQOL were assessed prospectively to compare men undergoing radical prostatectomy (RP), hypo-fractionated conformal radiotherapy (CRT) or brachytherapy (BT). Methods: Patients with localised prostate cancer were counselled by a urological surgeon, clinical oncologist and specialist uro-oncology nurse. Patients received treatment according to individual preference. 430 men chose and received RP (n = 217), CRT (n = 161) and BT (n = 52). 354 (82%) completed pre-treatment RAND 36-Item Short-Form Health survey version-2 (SF36v2) and University of California, Los Angeles Prostate cancer index (UCLA-PCI) questionnaires. HRQOL score changes from baseline to 24 months were compared using the Kruskal-Wallis test. Results: Pretreatment, the CRT cohort scored lower for physical function (p = 0.0029) and general health perception (p = 0.0021). The BT cohort reported better baseline scores for urinary function (p = 0.0291), urinary bother (p = 0.0030), sexual function (p = 0.0009) and bowel bother (p = 0.0063). At 24 months, bowel function was similar for CRT and BT but both modalities were worse than RP (p = 0.0010). Urinary continence deteriorated most following RP (p < 0.0001) but BT had worse urinary bother (p = 0.0153). Sexual function deteriorated most following RP and BT (p < 0.0005). Percentages of patients achieving erections adequate for sexual activity (from baseline to 24 months) were 66% to 29% for RP, 62% to 49% for CRT and 88% to 65% for BT. Conclusions: These data demonstrate significant differences in disease-specific quality of life between RP, CRT and BT and should be available for men with early prostate cancer making treatment decisions.
Introduction
Men undergoing treatment for early prostate cancer are faced with a decision about which therapeutic option to choose. This is due to the increasing number of available treatments, each with differing side effects. Patients make their decisions following advice from their specialist, which may be biased towards the treatment they offer [1].
PSA testing and screening programmes have changed the profile of prostate cancer, increasing the proportion of patients with early disease at low risk of becoming symptomatic. In this group of patients, with long life expectancy, treatment-related side effects (urinary, bowel and sexual) are of major concern as they can seriously affect quality of life over years. Therefore quality of life after primary definitive treatment is an important outcome, as many would contemplate reduction in life expectancy for treatments with fewer side effects [2,3].
The reliability of health-related quality of life (HRQOL) information in localised prostate cancer is compromised because few randomised controlled trials (RCTs) have compared treatments. Many have been attempted but abandoned due to poor accrual [4]. The only RCT comparing radical prostatectomy to watchful waiting reported a modest survival advantage with surgery [5], and HRQOL data showed important differences in urinary and sexual dysfunction between treatment groups. However, treatment choice had little effect on general HRQOL and no pre-treatment data were available [6].
RCTs are currently underway studying screened populations, such as the UK-based ProtecT study, which is evaluating survival and HRQOL after radical prostatectomy, external beam radiotherapy and active surveillance. Accrual was improved using qualitative methods [7], and important data will be provided by these studies in the future.
In view of the limited data currently available from RCTs, non-randomised prospective studies can give important information. Several studies have compared HRQOL after different treatments but, unfortunately, most do not include pre-treatment data [8][9][10][11][12][13], and of those that do, pre-treatment data usually do not directly compare all 3 major treatment options [14][15][16]. A couple of recent non-randomised cohort studies that compared all three treatments with pre-treatment data have not differentiated patients receiving hormone therapy as part of their treatment [17][18][19]. Patients receiving hormones for their prostate cancer were excluded from our study to remove the confounding variable of hormone treatment on HRQOL. This was done because, for the majority of patients in this early-stage group, hormone therapy would not be necessary at the time of treatment decision, and hormone treatment has its own side-effect profile. This study used a hypofractionated conformal radiotherapy schedule for the CRT option. In recent years hypofractionated schedules for prostate radiotherapy have become more popular, as increasing evidence from experimental studies shows that prostate cancers have a higher sensitivity to fraction size, reflected in a low alpha/beta ratio. This centre has used a hypofractionated schedule as standard treatment since 1993, and therefore these patients are a unique cohort in this type of study.
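The radiobiological argument can be made concrete with the standard linear-quadratic biologically effective dose,

$$\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),$$

where n is the number of fractions and d the dose per fraction: a low α/β makes a large dose per fraction disproportionately effective. As an illustration only (the α/β value is an assumption, not a figure from this study), the 50 Gy in 16 fractions schedule used here (d = 3.125 Gy) with α/β = 1.5 Gy gives BED = 50 × (1 + 3.125/1.5) ≈ 154 Gy.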
After a pilot study where acceptance to randomisation proved problematic, our investigators embarked upon a prospective audit of outcome in men with early prostate cancer. Treatment choice was based on the individual patient's preference following full counselling by a urologist, clinical oncologist and uro-oncology nurse practitioner. The rationale for this study design was to minimise investigator-related selection bias and produce a more balanced population by comparison with other longitudinal studies. The aims of this report were to assess the impact of treatment side effects for each of the three main treatments for early prostate cancer: radical prostatectomy (RP), conformal hypofractionated radiotherapy (CRT) and brachytherapy (BT), by measurement of generic and disease-specific HRQOL. Importantly, this study evaluated HRQOL in a non-screened population of men typical of uro-oncology centres in the UK and elsewhere.
Patients
Patients were recruited prospectively with full ethical committee approval between 1st December 1997 and 1st April 2004. All eligible men recruited were counselled by a clinical oncologist and a urological surgeon and received information leaflets about treatment options. Participants had a concluding discussion with a trained specialist nurse and were invited to choose their treatment after a period of reflection.
Radical Prostatectomy
Surgery consisted of radical retro-pubic prostatectomy in all cases with nerve-sparing where appropriate and/or possible.No adjuvant treatment was given.
Conformal Radiotherapy-Hypo-Fractionated
All patients choosing conformal radiotherapy received photon beams to the prostate with a standard technique in use in this centre since 1993. Patients were treated supine, without formal immobilisation, with an empty bladder. They were treated with a linear accelerator equipped with a multileaf collimator, delivering 8-20 MV x-rays with a four-field technique (opposed anterior and posterior and opposed lateral portals). The planning target volume (PTV) was defined as the clinical target volume (CTV) with the addition of a 1 cm margin anteriorly and laterally and 0.7 cm posteriorly. The tumour stage, Gleason score and PSA of each patient determined inclusion of the seminal vesicles within the CTV as per department protocol. A dose of 50 Gy in 16 fractions over 3 weeks was given to the isocentre without neo/adjuvant hormone manipulation.
Brachytherapy
Brachytherapy using transrectal ultrasound guided permanent seed implant became a treatment option in 2000 in our centre. Additional inclusion criteria were: 1) prostate volume ≤ 60 ml; 2) International Prostate Symptom Score (IPSS) < 16; and 3) no previous TURP.
If these criteria were not met, brachytherapy was not offered. After ultrasound assessment of prostate volume, brachytherapy was performed by transrectal ultrasound guided permanent I-125 seed implant. The dose was 145 Gy prescribed to the peripheral margin of the prostate. Patients did not receive supplementary external beam radiotherapy or neo-adjuvant hormone manipulation/downsizing.
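The brachytherapy selection rules translate directly into a predicate; a trivial sketch with hypothetical field names:

```python
def brachytherapy_eligible(prostate_volume_ml, ipss, previous_turp):
    """Additional inclusion criteria for the brachytherapy option:
    prostate volume <= 60 ml, IPSS < 16, and no previous TURP."""
    return prostate_volume_ml <= 60 and ipss < 16 and not previous_turp

assert brachytherapy_eligible(45, 8, previous_turp=False)
assert not brachytherapy_eligible(65, 8, previous_turp=False)
```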
HRQOL Questionnaires
Self-assessment questionnaires were posted to each patient on four occasions. Pre-treatment measurement preceded the patients' decision on therapy. Post-treatment assessments occurred at 3, 12 and 24 months. Non-respondents were sent a second reminder questionnaire.
Validated questionnaires measured general and disease-specific HRQOL. General HRQOL was measured with the RAND 36-Item Short-Form Health survey version 2 (SF36v2) [20]. This contained 36 questions to assess eight aspects of HRQOL. The University of California, Los Angeles Prostate cancer index (UCLA-PCI) measured disease-specific HRQOL [21]. The questions assessed bowel, urinary and sexual function and bother. Each question's score was linearly translated to a score from 0 to 100, and a median score was obtained for each [21].
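A sketch of the linear translation described above; the raw score range per item and the higher-is-better orientation are assumptions for illustration:

```python
def to_0_100(raw, lowest, highest):
    """Linearly translate a raw questionnaire item score onto a
    0-100 scale (higher = better), given the extreme possible raw
    scores for that item."""
    return (raw - lowest) / (highest - lowest) * 100

# A 1-5 Likert response of 4 maps to 75 on the 0-100 scale:
assert to_0_100(4, 1, 5) == 75.0
```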
Stage, PSA and Gleason grade were recorded prospectively. PSA progression was defined by the 1997 American Society for Therapeutic Radiation Oncology criteria for CRT and brachytherapy patients, and by the presence of a serum PSA > 0.2 ng/ml for post-radical-prostatectomy patients. Data for patients with biochemical or clinical failure were excluded from the point of PSA progression, as this was thought to influence HRQOL. In addition, patients receiving hormone manipulation during the follow-up period were excluded from the study for the same reason.
Statistics
Parametric data (e.g. age and PSA) were analysed using ANOVA followed by Tukey's multiple comparison test. Non-parametric data (e.g. comparisons between the three treatment groups for HRQOL, baseline stage and Gleason score) were analysed using the Kruskal-Wallis test (KW) unless otherwise stated. The Mann-Whitney U-test (MW) was used for inter-group comparison (if the KW test was significant) with a Bonferroni multiple comparison adjustment to preserve the overall significance level. Chi-square tests were performed on all proportional data (e.g. response rates and comorbidity). A non-responder analysis was performed to assess baseline differences from those that responded in each treatment group; here t-tests were used for age and PSA, whereas Mann-Whitney tests were used for stage and Gleason grade. Analysis was not adjusted for baseline differences, as the aim was to find relationships between treatment choice and baseline differences.
HRQOL scores at 3, 12 and 24 months were compared with pre-treatment quality of life scores to calculate therapy-specific changes in each of the HRQOL domains. Again, the Kruskal-Wallis test was used to determine treatment effect on the change in HRQOL scores between each time point and pre-treatment assessments.
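A sketch of this change-score comparison, assuming SciPy and per-group lists of (follow-up minus baseline) scores; the Bonferroni-adjusted level 0.05/3 ≈ 0.017 matches the inter-group analyses reported in the Results:

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

def compare_change_scores(groups, alpha=0.05):
    """Kruskal-Wallis across all groups, then pairwise Mann-Whitney U
    at a Bonferroni-adjusted significance level if the overall test
    is significant. `groups` maps treatment name to change scores."""
    _, p = kruskal(*groups.values())
    out = {"kruskal_wallis_p": p, "pairwise": {}}
    if p < alpha:
        pairs = list(combinations(groups, 2))
        adj = alpha / len(pairs)  # 0.05 / 3 ~ 0.017 for three groups
        for g1, g2 in pairs:
            _, p_mw = mannwhitneyu(groups[g1], groups[g2])
            out["pairwise"][(g1, g2)] = (p_mw, p_mw < adj)
    return out
```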
The 5% significance level was used in all primary tests. Statistical analyses were performed using SPSS version 11.
Funding
AstraZeneca contributed towards data management for the initial 6 months.
Study Population
Between 1 December 1997 and 1 April 2004, 490 men registered in the quality of life aspect of the study. Patients who received a different treatment from their original choice were excluded from analysis (n = 38), as this may have affected how they perceived the therapy to work and would therefore confound the quality of life results related to the treatment. Despite patients being referred to this study only if they were seeking active treatment, 22 men chose active monitoring/watchful waiting, and these men too were excluded from analysis. The remaining 430 patients chose and received either RP (n = 217), CRT (n = 161) or brachytherapy (n = 52). Pre-treatment questionnaire responses were obtained from 354 patients (82.3%): 178 (RP), 129 (CRT) and 47 (BT). There was no significant difference in response rates between treatments: RP (82%), CRT (80%) and BT (90%) (p = 0.2381). Patients were counselled by experts in the radical treatment of localised prostate cancer, comprising 10 urological surgeons, 3 clinical oncologists and 14 specialist nurses.
There was no significant difference in baseline characteristics between patients who responded to pre-treatment questionnaires and non-responders. Table 1 summarises the baseline patient characteristics. There was a significant difference in age between the treatment groups (KWp < 0.0005). Inter-group analysis (Bonferroni significance level = 0.017) showed that, compared to the RP and BT groups, CRT patients were significantly older (CRT vs RP, MWp = 0.0002; CRT vs BT, MWp < 0.0005). There was no difference in age between the RP and BT patients (MWp = 0.2201).
Compliance
Table 2 illustrates compliance with the questionnaires (i.e. the percentage of patients responding among those sent follow-up questionnaires) and shows the number of eligible questionnaires at each time point, taking into account patients who died and those who had recurrent disease. Brachytherapy patients had better compliance than the other two groups. The only questionnaire subsection that fell below 85% compliance was the sexual function and bother questions, in patients treated with brachytherapy.
Generic HRQOL at Baseline
There were important baseline differences between treatment groups. Patients undergoing CRT had worse scores than the other treatment groups for physical function (KWp = 0.0029) and general health perception (KWp = 0.0021). They also scored significantly worse than BT patients (but not RP patients) for role-limitation physical (KWp = 0.0352) and bodily pain (KWp = 0.0056). Emotional well-being scores were worse in RP patients compared to the brachytherapy group (KWp = 0.0186) but were the same as those in the CRT group.
Generic HRQOL over Time
For each subscale of the SF-36, the median change between the score at each time point and baseline was calculated for the patient groups. No significant differences in SF-36 changes were observed between treatment groups at any time point (data not shown).
Disease Specific HRQOL Scores at Baseline
Pre-treatment evaluation revealed that the BT cohort reported significantly better scores than the other treatment groups for four of the six domains of the UCLA-PCI: urinary function (KWp = 0.0291), urinary bother (KWp = 0.0030), sexual function (KWp = 0.0009) and bowel bother (KWp = 0.0063). For bowel function and sexual bother there were no differences between treatment options and, similarly, there were no differences in any of the subscales between the surgical and CRT groups.
Disease Specific HRQOL Scores over Time
Results of repeated measures over time for function items are shown in Figure 1. The decline in urinary function after prostatectomy improved over the follow-up period but did not return to pre-treatment levels (Figure 1(a)). A similar, though less pronounced, pattern was seen in bowel function for the CRT group (Figure 1(b)).
Sexual function was more complex. All treatments were associated with a decline in function. The RP and CRT cohorts had comparable scores at baseline, but the impact of RP was greater than that of CRT (Figure 1(c)). In the BT and RP groups there was slight improvement in sexual function over time, whereas CRT patients continued to deteriorate over the follow-up period.
Change in HRQOL from Baseline
Figure 2 shows the change in QOL score from baseline at each time point. The data suggest that CRT patients had a significantly worse bowel function score at 2 years compared to RP patients, but not compared to BT patients (KWp = 0.0010). The most marked change in urinary function from baseline was in the prostatectomy group (KWp < 0.0001). Sexual function deteriorated in all three treatment cohorts. As baseline function for surgery and CRT was comparable, it can be concluded that the impact of surgery on sexual function was greater than that of CRT over the two year follow-up period (KW and MW p values: 3 months p < 0.0001, 1 year p < 0.0001 and 2 years p < 0.0005). In this study, there was no difference in the change in sexual function scores between those undergoing nerve sparing and non-nerve sparing surgery at any time point (p > 0.05), even though their baseline scores were comparable. At all three time points, the deterioration in score from baseline for BT patients was significantly greater than for CRT (but not as great as for the RP patients).
The proportions of patients able to have erections adequate for sexual activity fell from 66% to 29% for RP, from 62% to 49% for CRT and from 88% to 65% for BT (baseline and 24 months, respectively). Similarly, on the assumption that patients with poor baseline sexual function remain poor post-treatment, we stratified out "sexually potent" patients with baseline scores greater than 80 and assessed how each treatment affected their sexual function. Figure 3 demonstrates the change in score from pre-treatment to the two-year time point. The numbers were small in each group, but brachytherapy patients had better sexual function scores at 2 years.
Bowel bother results showed that both radiotherapy options affected HRQOL significantly compared to RP at three months (KWp = 0.0134), but at 2 years there was no difference between any of the groups, in keeping with the function scores (KWp = 0.0527). Recalling that there was no statistical difference between baseline sexual bother scores for the three treatments, at the three month (KWp = 0.0093) and 1 year (KWp = 0.0071) time points RP patients fared significantly worse for sexual bother scores. However, at 2 years, bother scores had deteriorated equally from baseline levels in all three groups (KWp = 0.0982).
Discussion
Although the gold standard for comparing treatment approaches is a RCT, it is recognised that in certain circumstances patients are reluctant to be randomised between different treatment options. Many patients choosing not to be randomised are thus excluded from comparison. Our pilot study of randomisation [4] in early prostate cancer eventually led us to the patient preference study, where all patients were included. In assessing the impact of treatment on QOL we compared the change in QOL pre and post treatment for each patient. Thus, our aim was to describe the natural history of different groups of patients expressing a preference for their treatment.
Previous cross-sectional studies have demonstrated that sexual, urinary and bowel dysfunction remain prevalent among men undergoing treatment for localised prostate cancer [8-11,13,22-24]. Longitudinal studies have illustrated treatment effect over time, and those with pre-treatment data are of greatest value in interpreting the true extent of post-treatment change [14,15]. Amongst these studies, those directly comparing all three major treatments are of particular interest [17-19,25,26]. This prospective patient preference study directly compared HRQOL outcomes for the 3 main treatments in a uniform cohort of men, without the confounding variable of hormone treatment and with data recorded both pre and post treatment. All men were counselled by both a urological surgeon and a clinical oncologist (to reduce bias from either specialty) prior to making their treatment choice. This study also incorporates a unique cohort of radiotherapy patients receiving hypofractionated conformal external beam treatment in a European oncology centre.
Previous HRQOL research found that prostatectomy affected predominantly urinary and sexual function, whereas CRT mostly influenced bowel and sexual function. These studies also showed that general HRQOL is relatively unaffected despite changes in disease specific HRQOL [17,27]. This study confirmed these conclusions and also demonstrated that brachytherapy and hypofractionated CRT were well tolerated when compared to other treatment modalities.
As this study was non-randomised, we observed baseline differences between men choosing different treatment options. BT patients had better compliance in replying to questionnaires. Interestingly, when brachytherapy was first introduced many men actively sought out this treatment; this degree of motivation may explain the higher compliance rate for completing questionnaires. As expected, the median age of the CRT group was significantly older than that of the other cohorts. Brachytherapy patients had better baseline urinary function, explained by the exclusion of patients with a high IPSS score from this cohort.
This investigation studied the impact of the different treatments over time. Interestingly, when evaluating urinary bother, the BT group had a worse score at the two year time point compared to baseline, whereas patients in the other cohorts improved despite greater deterioration in urinary function scores. One possible reason is different expectations between patient groups. Patients with almost normal baseline urinary function (e.g. the BT cohort) were more likely to be troubled by small declines in function than those with poor baseline urinary function, who are possibly better adapted to their disability. Treatment itself may improve symptoms for some patients. It should be remembered that the UCLA-PCI was designed for patients undergoing surgery. The use of this assessment tool is therefore a major limitation of this study, as it is more sensitive to urinary incontinence than to irritative/obstructive symptoms, and it may be that BT patients experienced more symptoms than were recorded in the function questions. It is worth noting that bother is represented by only one question for each function in the UCLA questionnaire. Irritative voiding symptoms can be detected using the Expanded Prostate Cancer Index Composite (EPIC), which was modified from the UCLA-PCI and captures irritative voiding complaints more pertinent to BT patients [28].
This research concurred with previous investigators in that bowel-related HRQOL was adversely affected by radiotherapy when compared with surgery. It could be hypothesised that the hypofractionated radiotherapy used in this study would increase the chance of late effects in normal tissues such as the rectum and therefore lead to worse bowel function and bother. However, these results show that the bowel HRQOL scores are fairly similar to those of the other treatments and also comparable to other studies using the same scoring systems for more conventional radiotherapy [29]. Previous comparisons of the observed toxicity in this centre with other published results using more conventional radiotherapy schedules have likewise demonstrated that the treatments are equivalent [30]. To truly assess late effects in this study, longer follow-up would be needed. Further studies evaluating differential bowel symptoms for the available management options should also include a tool designed to assess bowel treatment effect, rather than a broadly based QOL tool.
Interpretation of sexual dysfunction is complex because of the multifactorial aspects that make up a person's sexuality. Previous investigations have established the deleterious effect of surgery and CRT on sexual function. This study additionally confirms that surgery had the greatest initial impact with only slight improvement over time, whereas CRT caused a less marked initial decline followed by a continued slow reduction in function over the follow-up period. A component of this deterioration in CRT men may be attributed to their older age at baseline, though there is undoubtedly a radiation related effect over the 2 year period following treatment. The brachytherapy cohort illustrated a similar pattern to the RP group but with a less marked deleterious effect in the initial phases. It is important to emphasise that the confounding factor of concomitant hormone therapy was not an issue in this study, as patients given endocrine therapy were excluded from analysis thereafter. Comparison of this study with other research should therefore be done with caution, as most other studies included patients receiving hormone therapy. Previous studies have advocated stratifying patients according to their baseline function (as shown for sexual function in Figure 3), as it has been demonstrated that poor baseline sexual function changes little post treatment [18]. Our study confirms the findings of Chen et al. [18] that BT patients with good baseline function preserve function better than similar patients receiving the other treatment options. Nerve sparing RP made no difference to sexual function in the patients undergoing surgery, even though the surgery was undertaken by experienced urological oncologists, all of whom were trained and had adequate experience of radical retropubic RP with nerve sparing. It may be suggested that the figures are appropriate for the type of population involved (i.e. a non-screened population with a high index of cases from socially deprived areas), where 50% - 60% of patients who present to the hospitals involved in this study have erectile dysfunction prior to treatment.
In this study we have demonstrated a marked variation in the consequences of treatment experienced by our patients. There will be an element of inherent selection bias that is inevitable in non-randomised questionnaire based studies. Even so, it is clear that information from studies like this is highly relevant for men diagnosed with early prostate cancer who are faced with a difficult choice. To date, there has been a paucity of accurate information on the sequelae of treatment in early prostate cancer. As clinicians it is important that we collect good quality data on treatment effects and make this available to our patients to allow them to make an informed choice. These important differences in sexual function should prompt further research taking into account co-morbidities, sexual relationships and aids used to help improve dysfunction in the post treatment period.
Conclusions
This novel inclusive study design demonstrated significant differences in treatment sequelae, the most marked of which was in sexual function. We plan to include these data in the new information we impart to our patients, with the intention of auditing the impact on patient choice.
Figure 1. Results showing median scale scores for each treatment group: UCLA-PCI scores (function and bother) for the urinary (a and d), bowel (b and e) and sexual (c and f) domains.
Figure 2. Box and whisker plot showing the change in median score from baseline at the 2 year time point. * and † denote a statistically significant difference between the two treatments indicated, using the Mann-Whitney U test with Bonferroni correction (p < 0.001). Circles represent outliers (beyond the 95% CI).
Figure 3. Change in sexual function score from pre-treatment to the 2 year time point for patients with baseline sexual function scores greater than 80.
Induced plasma magnetization due to magnetic monopoles
When magnetic monopoles are introduced into the plasma equations, the propagation of electromagnetic waves is modified. In this work it is shown that this modification leads to the emergence of a ponderomotive force which induces a magnetization of the plasma. As a result, a cyclotron frequency is induced in the electrons. This frequency is proportional to the square of the magnetic charge.
I. INTRODUCTION
A common problem in electrodynamics and plasma physics courses, which vividly illustrates the phenomenon of wave propagation through a plasma, is the calculation of the dispersion relation of transverse electromagnetic waves propagating in a plasma composed of fixed ions and mobile electrons. Such a problem is widely used to introduce concepts like the cutoff and plasma frequencies, as well as the group and phase velocity of a wave.
On the other hand, it is well known that Maxwell's equations become symmetric when both electric and magnetic charges (magnetic monopoles) are present [1]. Moreover, in an elegant theoretical demonstration, Dirac showed that the existence of even a single magnetic monopole in the universe leads to an explanation of the quantization of the elementary electric charge [2,3].
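For reference, Dirac's argument yields the standard quantization condition, quoted here as background (in Gaussian units):

e g = n ħ c / 2 ,   with n an integer,

so the existence of one monopole of charge g would force every electric charge e to be an integer multiple of ħc/(2g).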
Once magnetic monopoles are introduced into Maxwell's equations, the equations become symmetric, and new effects are expected to appear as a consequence of this symmetry. Including magnetic charges, the equations are (in Gaussian units)

∇ · E = 4πρ_e ,   ∇ · B = 4πρ_g ,   (1)

∇ × E = −(1/c) ∂B/∂t − (4π/c) J_g ,   ∇ × B = (1/c) ∂E/∂t + (4π/c) J_e ,   (2)

where ρ_e is the electron charge density, ρ_g is the magnetic charge density, J_e is the electron current density, and J_g is the magnetic current density. A simple test is to consider a plasma containing both electric and magnetic charges. Such an analysis has great theoretical [4,5] and pedagogical [6] value because of its simplicity and the discussion that emerges from these studies.
As a simple theoretical exercise we show how a magnetization is induced in a plasma when magnetic charges are introduced. The problem is the following: we have a gas composed of magnetic monopoles and electrons, and suddenly an electromagnetic wave passes through this fluid. We look for the effect of the magnetic monopoles on the propagation of the simplest and best-known wave modes.
II. ELECTROMAGNETIC WAVES
Let us consider a plasma as a fluid which contains electrons, with charge −e and mass m_e, and magnetic monopoles, with magnetic charge g and mass m_g. Let us suppose that there exist other particles with the respective opposite electric and magnetic charges and with the same densities, which provide the total charge neutrality of this plasma [6].
The magnetic monopoles modify the well-known dispersion relation for electromagnetic waves. In this section, we show the derivation of the relation for this propagation mode, considering the electric and magnetic charges as fluids and perturbing their equilibrium velocities. In order to do this, we expand every quantity φ in Maxwell's equations as φ = φ₀ + φ₁, where φ₀ is the zeroth order (equilibrium) value of φ, and φ₁ is the first order perturbation. In that sense, throughout this work, electrons have a zeroth order electron density n_0e and a first order electron density n_e. In a similar way, magnetic monopoles have a zeroth order magnetic density n_0g and a first order magnetic density n_g. Thus, the first order electric charge density is ρ_e = −e n_e and the first order magnetic charge density is ρ_g = g n_g. Thereby, the first order electric and magnetic current densities are

J_e = −e n_0e v_e ,   J_g = g n_0g v_g ,   (3)

where v_e and v_g are the first order electron and magnetic monopole velocities, respectively. Both zeroth order velocities are null. As in the case of a cold plasma, the classical equations of motion for electrons and magnetic monopoles [5] are, at first order,

m_e ∂v_e/∂t = −e E₁ ,   m_g ∂v_g/∂t = g B₁ .   (4)

As seen, the Lorentz force on a magnetic monopole is the same as that on an electron, but with electric fields exchanged for magnetic fields and vice versa. In order to obtain the dispersion relation, the usual method is to apply a Fourier transform of the form exp(ik · r − iωt) to all the linearized equations, where ω is the frequency and k is the wavenumber of the wave. Thus, the equations of motion (4) become

ṽ_e = −(ie/m_e ω) Ẽ₁ ,   ṽ_g = (ig/m_g ω) B̃₁ ,   (5)

respectively. Here ṽ_e is the Fourier transform of v_e, and the same for the other quantities. We can write the electron and magnetic current densities with the help of the velocities given by Eqs. (5). Therefore, Eqs. (2) can be rewritten as

i k × Ẽ₁ = (iω/c)(1 − ω_m²/ω²) B̃₁ ,   i k × B̃₁ = −(iω/c)(1 − ω_p²/ω²) Ẽ₁ ,   (6)

where we define the electron plasma frequency and the magnetic monopole plasma frequency respectively as

ω_p = (4π n_0e e²/m_e)^{1/2} ,   ω_m = (4π n_0g g²/m_g)^{1/2} .   (7)

Finally, the dispersion relation for a transverse electromagnetic wave, with k · Ẽ₁ = 0, in an electron plasma with magnetic monopoles can be obtained from Eqs. (6). This gives

(ω² − ω_p²)(ω² − ω_m²) = ω² c² k² .   (8)

Dispersion relation (8) has been obtained previously in Ref. [6]. Notice that when magnetic monopoles are neglected, ω_m = 0 and we recover the usual dispersion relation ω² = ω_p² + c²k² for electromagnetic waves in cold plasmas.
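To make the last step explicit (a short worked calculation using only Eqs. (6)): crossing k with the first of Eqs. (6), substituting the second, and using the transverse-wave identity k × (k × Ẽ₁) = k(k · Ẽ₁) − k² Ẽ₁ = −k² Ẽ₁ gives

c² k² = ω² (1 − ω_p²/ω²)(1 − ω_m²/ω²) ,

which reproduces Eq. (8) upon multiplying through by ω².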
III. MAGNETIZATION DUE TO PONDEROMOTIVE FORCE
Several effects in the propagation of waves through plasmas can be seen when magnetic charges are incorporated [5,6]. We focus on the emergence of a magnetic field due to the ponderomotive force that the wave exerts on the electrons. This magnetization appears due to the magnetic charge corrections in dispersion relation (8). In a similar spirit, plasma magnetization has been studied when quantum effects are considered in the calculation of the dispersion relation of an electron plasma [7,8].
The ponderomotive force f induced by high-frequency electromagnetic waves in a plasma is given in general by f = f^(s) + f^(t), where the ponderomotive forces f^(s) and f^(t) are related to the space (s) and time (t) variations of the amplitude |E₁| of the electric field. This force is a nonlinear effect whose origin is the inhomogeneity of the electromagnetic field. Each force component is given in Ref. [9] in terms of the dielectric function ε = c²k²/ω² of the propagation mode. Using the dielectric function ε of dispersion relation (8), we can calculate the ponderomotive force for the high-frequency electromagnetic wave; it is straightforward to obtain an expression, Eq. (10), for the ponderomotive force related to time variations. Notice that the effects of the magnetic charges in the ponderomotive force enter through ω_m: when ω_m = 0, the ponderomotive force f^(t) is null. The effect of the ponderomotive force of the electromagnetic wave is to push the electrons locally. Thus, it creates a slowly varying electric field E_s, such that f = e n_0e E_s = −e n_0e ∇φ_s − (e n_0e/c) ∂_t A_s [7,8]. Using the ponderomotive force (10), we can identify the slowly varying vector potential A_s, Eq. (11). Owing to the well-known relation B = ∇ × A, the vector potential (11) induces a slowly varying magnetic field B_s, Eq. (12). This magnetic field interacts with the electrons and induces an electron cyclotron frequency Ω_cs = −e B_s/(m_e c). Taking the approximation ∇ × (k|E₁|²) ≈ k|E₁|²/L, where L is the scale length of |E₁|² [7,8], the induced electron cyclotron frequency, Eq. (13), is obtained in terms of the electron quiver velocity V₀ = e|E₁|/(m_e ω). Notice that when magnetic monopoles are neglected, Ω_cs = 0 and there is no magnetization.
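For orientation, the chain of relations quoted above can be collected as follows; only the generic definitions stated in the text are shown, and the explicit ω_m-dependent closed forms of Eqs. (10)-(13) are not reproduced here:

f = e n_0e E_s = −e n_0e ∇φ_s − (e n_0e/c) ∂A_s/∂t   (slow field driven by the ponderomotive force),
B_s = ∇ × A_s ,   Ω_cs = −e B_s/(m_e c)   (induced quasi-static field and cyclotron frequency),
∇ × (k |E₁|²) ≈ k |E₁|²/L ,   V₀ = e|E₁|/(m_e ω)   (estimate and quiver velocity entering Eq. (13)).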
IV. CONCLUSIONS
Starting from the inclusion of magnetic charges in Maxwell's equations, we have studied the simple problem of the propagation of electromagnetic waves in a cold electron plasma. We have shown that when magnetic monopoles are introduced into an electron plasma, the dispersion relation (8), and thereby the propagation of electromagnetic waves, is modified because of the magnetic monopole plasma frequency (7).
Due to the presence of the magnetic monopoles, the plasma is magnetized with a magnetic field given by Eq. (12), which is of order g². This field can interact with the electrons, inducing an electron cyclotron frequency. The frequency (13) is due solely to the ponderomotive force related to the time variation of the field intensity, which is induced by the magnetic charges. It depends on the magnetic monopole plasma frequency ω_m and on the propagation mode (8) through ω. Finally, notice that the magnetization and the cyclotron motion decrease as the frequency of the electromagnetic wave increases.
The above calculations of the dispersion relation and of the plasma magnetization are very simple. Therefore, they are useful as a pedagogical tool for teaching the recognition of basic phenomena arising from magnetic charges in plasmas, and of how these simple effects can bring new insights into electron plasma dynamics at the classical level.