CDT1 Is a Novel Prognostic and Predictive Biomarker for Hepatocellular Carcinoma
Objective: Hepatocellular carcinoma (HCC) is one of the most common malignant tumors endangering human health and life in the 21st century. Chromatin licensing and DNA replication factor 1 (CDT1) is an important regulator of DNA replication licensing, which is essential for the initiation of DNA replication. CDT1 overexpression in several human cancers reportedly leads to abnormal cell replication, activates DNA damage checkpoints, and predisposes to malignant transformation. However, the abnormal expression of CDT1 in HCC and its diagnostic and prognostic value remain to be elucidated.
Methods: TCGA, ONCOMINE, UALCAN, HCCDB, HPA, Kaplan-Meier plotter, STRING, GEPIA, GeneMANIA, and TIMER were used for bioinformatics analysis. CDT1 protein expression was evaluated by immunohistochemistry in HCC tissues through a tissue microarray. qRT-PCR, western blotting, and a series of functional experiments were performed for in vitro validation.
Results: In this study, we discovered remarkably upregulated transcription of CDT1 in HCC samples relative to normal liver samples through bioinformatic analysis, which was further verified in clinical tissue microarray samples and in vitro experiments. Moreover, the transcriptional level of CDT1 in HCC samples was positively associated with clinical parameters such as clinical tumor stage. Survival, logistic regression, and Cox regression analyses revealed the significant clinical prognostic value of CDT1 expression in HCC. The receiver operating characteristic curve and nomogram analysis results demonstrated the strong predictive ability of CDT1 in HCC. Kyoto Encyclopedia of Genes and Genomes and gene set enrichment analyses indicated that CDT1 was mainly associated with the cell cycle, DNA repair, and DNA replication. We further demonstrated the significant correlation between CDT1 and minichromosome maintenance (MCM) family genes, revealing abnormal expression and prognostic significance of MCMs in HCC. Immune infiltration analysis indicated that CDT1 was significantly associated with immune cell subsets and affected the survival of HCC patients. Finally, knockdown of CDT1 decreased, whereas overexpression of CDT1 promoted, the proliferation, migration, and invasion of HCC cells in vitro.
Conclusions: Our study findings demonstrate the potential diagnostic and prognostic significance of CDT1 expression in HCC, and elucidate the potential molecular mechanism underlying its role in promoting the occurrence and development of liver cancer. These results may provide new opportunities and research paths for targeted therapies in HCC.
INTRODUCTION
Hepatocellular carcinoma (HCC) is a serious disease with high morbidity and mortality, annually causing more than 500,000 deaths worldwide (1). HCC usually evolves from chronic liver inflammation, 80% of which is caused by viral hepatitis C or B (2). In the past decade, the prevalence and mortality of HCC have been decreasing in East Asia and other areas that traditionally report high incidence rates, while increasing in Europe and the United States (3). Although many researchers have delved into the biological and environmental mechanisms underlying liver cancer occurrence and progression, limited clinical options are currently available to delay or prolong tumor progression. Further, the high metastasis and recurrence rates of HCC pose significant challenges for diagnosis and treatment (4). Investigating the potential molecular mechanisms and effective prognostic signatures of HCC is thus urgently needed.
Maintaining the integrity of the genome requires strict and precise regulation of DNA replication, which needs to be coordinated with other cellular events to ensure that it occurs only once per cell cycle (5,6). Chromatin licensing and DNA replication factor 1 (CDT1) is indispensable for the initiation of DNA replication and plays a key role in eukaryotic cell replication and cell cycle regulation (7). The control of DNA replication initiation in the eukaryotic cell cycle requires coordination between multiple protein complexes. Initially, the origin recognition complex (ORC) directly binds to the site of DNA replication. ORC-DNA binding then recruits CDT1 and cell division cycle 6 (CDC6) to form a pre-replicative complex (pre-RC), which further loads minichromosome maintenance proteins (MCMs) onto chromatin (8). The cooperation of ORC, CDC6, CDT1, and MCMs at the initiation of replication ensures orderly DNA replication. Recent studies have reported that some cases of aberrant DNA replication and uncontrolled cell cycle progression may be attributable to dysregulated CDT1, and its deleterious role has been identified in the initiation, development, and chemoresistance of several tumor types (9,10). Further, overexpression of CDT1 has been markedly associated with decreased survival and poor prognosis in some tumors (6,11). Nevertheless, the prognostic significance and exact functions of CDT1 in HCC progression have not yet been determined.
Here, we aimed to comprehensively and systematically explore the expression of CDT1 in HCC through bioinformatics analysis, clinical tissue microarray samples and in vitro functional experiments using CDT1-knockdown and overexpression HCC cells. The study findings offer insight into the clinical significance, potential functions, interactive network, and association with immune infiltration of CDT1 in HCC, providing a novel prognostic biomarker for accurate survival prediction and precise targeted treatment of early-stage HCC. The workflow for this article is shown in Figure 1.
Data Resource
Level 3 gene expression profiles were obtained from the liver hepatocellular carcinoma (LIHC) dataset in The Cancer Genome Atlas (TCGA) database (https://cancergenome.nih.gov/), comprising 374 LIHC samples and 50 paracancerous tissues (workflow type: HTSeq-FPKM). HTSeq-FPKM values were then converted to TPM (transcripts per million) values to compare differential expression among samples. Corresponding clinical information of HCC patients was also obtained from the TCGA data portal. A summary of the clinical data is shown in Supplementary Table 1. For pan-cancer analysis, RNA-Seq data for 33 tumor types and their corresponding normal tissues were obtained from TCGA and Genotype-Tissue Expression (GTEx) samples using UCSC Xena (https://xenabrowser.net/).
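The FPKM-to-TPM conversion is a simple per-sample rescaling; a minimal R sketch is shown below, where the matrix name `fpkm` is assumed for illustration.

```r
# Minimal sketch: convert an HTSeq-FPKM matrix (genes x samples) to TPM.
# TPM rescales each sample so that its expression values sum to one million,
# making values comparable across samples.
fpkm_to_tpm <- function(fpkm) {
  apply(fpkm, 2, function(x) x / sum(x) * 1e6)
}

# tpm <- fpkm_to_tpm(fpkm)
# Downstream comparisons are typically performed on log2(TPM + 1).
```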
Comprehensive Analysis
The ONCOMINE database (oncomine.org) is an integrated online data-mining tool that provides analysis of genome-wide expression in multiple tumor and normal control samples (12). In our study, transcription levels of CDT1 in HCC samples and normal adjacent tissues were compared. The thresholds were set as p < 0.05, a fold change (FC) of 2, and a gene rank within the top 10%.
The Cancer Cell Line Encyclopedia (CCLE) (www.broadinstitute.org/ccle) is a comprehensive portal that analyzes and visualizes genomic data from more than 1,000 tumor cell lines (13). Expression levels of CDT1 in multiple cancer cell lines were assessed using the CCLE dataset. HCCDB is a comprehensive visual database dedicated to expression profile analysis of more than 3,000 HCC samples (lifeome.net/database/hccdb/) (14). We utilized this resource to evaluate the expression of CDT1 in HCC. In addition, we analyzed the association between mRNA levels of CDT1 and survival outcomes in the GSE14520 (HCCDB6) and ICGC-LIRI-JP (HCCDB18) datasets using the HCCDB database.
The Human Protein Atlas (HPA) (https://www.proteinatlas.org) is a publicly available resource that provides immunohistochemical images for analyzing protein expression patterns in approximately 20 common tumors and normal tissues (15). Immunohistochemical images of clinical LIHC specimens and normal tissue samples were obtained from this database to compare CDT1 protein expression between the two groups. UALCAN (http://ualcan.path.uab.edu/index.html) is a visual bioinformatics platform that contains gene expression and clinicopathologic data from the TCGA and MET500 cohorts (16). In this study, we employed UALCAN to analyze correlations between mRNA expression of CDT1 and clinicopathological features. A p-value < 0.05 was considered significant.
Kaplan-Meier Plotter (https://kmplot.com/analysis/) is a comprehensive portal for analyzing the survival of cancer patients (17,18), which was utilized to assess the prognostic significance of CDT1 in HCC by analyzing the association between mRNA levels of CDT1 and survival outcomes. Survival outcomes included overall survival (OS), progression-free survival (PFS), recurrence-free survival (RFS), and disease-specific survival (DSS). The optimal cutoff value was determined by the KM plotter algorithm. A p-value < 0.05 was considered significant.
Cell Culture and Transfection
A normal human liver cell line (L-02) and HCC cell lines (Hep3B, LM3, and SMMC-7721) were purchased from the China Cell Bank (Shanghai, China). All cell lines were cultured in DMEM (Gibco, Waltham, MA, USA) with 10% fetal bovine serum (Ausbian, Australia) and 1% penicillin-streptomycin (Gibco). Cells were maintained in an incubator at a constant temperature of 37°C with 5% CO2. CDT1 siRNA oligonucleotides (40 nM) (5′-GCAUGUCAAGGAGCACCACAATT-3′ / 5′-UUGUGGUGCUCCUUGACAUGCTT-3′), si-NC oligonucleotides (5′-UUCUCCGAACGUGUCACGUTT-3′ / 5′-ACGUGACACGUUCGGAGAATT-3′), the CDT1 overexpression vector pEGFP-N1-CDT1, and the empty control vector pEGFP-N1 were obtained from Shenggong Bioengineering Technology (Shanghai, China) and transfected into cells using Lipofectamine 3000 (Invitrogen, Waltham, MA, USA) according to the manufacturer's instructions. CDT1 knockdown cells were obtained 72 h after transfection.
Western Blotting
Cells were lysed and the protein concentration was determined by bicinchoninic acid assay. Proteins were subsequently separated by SDS-polyacrylamide gel electrophoresis and transferred to a polyvinylidene difluoride membrane. The membrane was blocked with 5% bovine serum albumin diluted in Tris-buffered saline with 0.1% Tween 20 (TBST) at room temperature for 2 h, and then incubated overnight at 4°C with the primary antibodies anti-β-Tubulin (1:1000, Proteintech, Chicago, IL, USA) and anti-CDT1 (1:1000, Proteintech). The membrane was washed with TBST and incubated with secondary antibodies (1:3000) for 1.5 h. An enhanced chemiluminescence detection kit (Biosharp, Beijing, China) was used to visualize the protein bands.
HCC Tissue Microarray and Immunohistochemical Staining
The human HCC tissue microarray (Cat No. IWLT-N-64LV41) was obtained from Wuhan Saiweier Biotechnology Co., Ltd. (Wuhan, China), and included 14 HCC tissue samples and paired non-tumor tissue samples. IHC staining with an anti-CDT1 antibody (1:500, Proteintech) was carried out according to the manufacturer's instructions to measure CDT1 protein levels in HCC tissues. Each sample was evaluated based on staining intensity and the percentage of positively stained cells, with intensity classified as 0 (negative), 1 (weak), 2 (moderate), or 3 (strong). The H-score was calculated as (percentage of weak-intensity cells × 1) + (percentage of moderate-intensity cells × 2) + (percentage of strong-intensity cells × 3) and ranges from 0 to 300. A paired t-test was used to compare CDT1 expression between HCC tissues and the paired non-tumor tissues.
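As a concrete illustration of the scoring scheme above, the following minimal R sketch computes an H-score from the intensity percentages; the function name and example values are hypothetical.

```r
# Minimal sketch of the H-score defined above: percentages (0-100) of cells with
# weak (1), moderate (2), and strong (3) staining; unstained cells (0) add nothing.
# The resulting score ranges from 0 to 300.
h_score <- function(pct_weak, pct_moderate, pct_strong) {
  pct_weak * 1 + pct_moderate * 2 + pct_strong * 3
}

h_score(20, 30, 10)   # e.g., 20% weak, 30% moderate, 10% strong -> 110

# Paired comparison of tumor vs. matched non-tumor H-scores (hypothetical vectors):
# t.test(tumor_scores, normal_scores, paired = TRUE)
```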
Cell Proliferation, Invasion, and Migration Assays
Cell proliferation was assessed using Cell Counting Kit-8 (CCK-8) and colony formation assays. For the CCK-8 assay, 10 μL aliquots of CCK-8 solution (Dojindo Laboratories, Kumamoto, Japan) were added to the wells of a 96-well plate, each well containing 2,500 cancer cells. After incubation at 37°C for 1 h, the absorbance at 450 nm was determined. For the colony formation assay, 3,000 cancer cells were seeded into six-well plates and the culture medium was changed every other day. The colonies were fixed with paraformaldehyde and stained with crystal violet.
Cell migration and invasion abilities were investigated using wound healing and Transwell assays, respectively. For the wound healing assay, a 200 μL pipette tip was used to make a single wound in each well when the confluence of transfected cells reached 90% in the six-well plate. Cell migration distance was calculated after incubation for 72 h in serum-free medium. The invasion assay was performed in Transwell chambers with 8 μm pores (Corning, Rochester, NY, USA): transfected HCC cells were seeded onto Matrigel-coated inserts in serum-free medium at a density of 6 × 10³ cells per chamber, and the lower chamber was filled with 400 μL of medium containing 10% fetal bovine serum. Invasive cells were stained with 0.5% crystal violet.
Screening of DEGs
Differentially expressed genes (DEGs) between HCC samples with high CDT1 expression (CDT1-high) and low CDT1 expression (CDT1-low) were identified using the DESeq2 package (19) in R (version 3.6.3) with thresholds of |logFC| > 0.5 and adjusted p < 0.05. Volcano plots and correlation heatmaps of differentially expressed mRNAs were constructed using the ggplot2 package in R.
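A minimal sketch of this DEG screen is given below; the object names (`counts`, `cdt1_group`) are assumed for illustration, and DESeq2 expects raw HTSeq counts rather than TPM values.

```r
# Minimal sketch of the CDT1-high vs CDT1-low DEG screen with DESeq2.
# `counts`: raw HTSeq count matrix (genes x samples); `cdt1_group`: factor with
# levels "low" and "high", e.g. split at the median CDT1 expression.
library(DESeq2)

col_data <- data.frame(group = cdt1_group, row.names = colnames(counts))
dds <- DESeqDataSetFromMatrix(countData = counts, colData = col_data, design = ~ group)
dds <- DESeq(dds)
res <- results(dds, contrast = c("group", "high", "low"))

# Thresholds used in the text: |log2 fold change| > 0.5 and adjusted p < 0.05.
degs <- subset(as.data.frame(res), abs(log2FoldChange) > 0.5 & padj < 0.05)
```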
Functional Enrichment Analysis
Gene Ontology (GO) annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses were performed to identify the functional categories and pathways in which CDT1 and its related DEGs were enriched. In addition, through the "HCC meta co-expression network" function of the HCCDB database, we obtained genes with expression patterns similar to that of CDT1 in HCC and conducted further enrichment analysis of these co-expressed genes. Functional enrichment analysis was conducted using the clusterProfiler package in R (version 3.6.3) (20). Gene set enrichment analysis (GSEA) uses genome-wide expression profiling data to determine whether predefined gene sets are significantly enriched (21). Gene expression data were divided into two groups according to CDT1 expression level: CDT1-high and CDT1-low. The number of gene set permutations was set to 1000. Significant enrichment was defined as a gene set with a nominal p-value < 5% and a false discovery rate of less than 25%.
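The sketch below illustrates how such an enrichment analysis is commonly run with clusterProfiler; the input vector `deg_symbols` and the commented GSEA call are assumptions for illustration rather than the authors' exact script.

```r
# Minimal sketch of GO/KEGG enrichment of CDT1-related DEGs with clusterProfiler.
# `deg_symbols`: character vector of DEG gene symbols (assumed input).
library(clusterProfiler)
library(org.Hs.eg.db)

deg_entrez <- bitr(deg_symbols, fromType = "SYMBOL", toType = "ENTREZID",
                   OrgDb = org.Hs.eg.db)$ENTREZID

ego   <- enrichGO(gene = deg_entrez, OrgDb = org.Hs.eg.db, ont = "ALL",
                  pAdjustMethod = "BH", qvalueCutoff = 0.05, readable = TRUE)
ekegg <- enrichKEGG(gene = deg_entrez, organism = "hsa", pvalueCutoff = 0.05)

# GSEA on a named, decreasingly ranked vector (e.g., log2FC keyed by Entrez ID):
# gsea_res <- gseKEGG(geneList = ranked, organism = "hsa", pvalueCutoff = 0.05)
```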
Interaction Analysis
Through the similar-gene detection function of GEPIA (http://gepia2021.cancer-pku.cn/) (22), we identified genes whose expression patterns were similar to that of CDT1 in HCC patients. STRING (https://string-db.org/) is a search tool that predicts interaction networks of genes and proteins (23). A protein-protein interaction (PPI) network analysis of CDT1 and its 30 most similar genes was conducted using STRING (an interaction score > 0.7 was set as the cut-off criterion) and further processed using the visualization tool Cytoscape. GeneMANIA (genemania.org) is a visual database tool with highly accurate prediction algorithms, which provides information on physical interactions, co-expression, genetic interactions, and co-localization of query genes (24). We used GeneMANIA to construct a composite gene-gene functional interaction network of CDT1 and its 30 most similar genes.
Immune Cell Infiltration Analysis
Quantification of the infiltration levels of 24 tumor-infiltrating immune cell types in HCC samples was achieved by applying the ssGSEA method using the GSVA package in R. We scored the relative enrichment of each immunocyte type based on 509 gene signatures specific to 24 tumor-infiltrating lymphocyte types, including B cells, T cells, macrophages, and neutrophils (25). Spearman correlation analysis was employed to evaluate the correlation between CDT1 expression and the level of immune cell infiltration, and the Wilcoxon rank-sum test was used to compare immune cell abundance between CDT1 expression groups. TIMER (https://cistrome.shinyapps.io/timer/) is a publicly available portal for systematically analyzing the infiltration of various immune subsets and their effect on clinical outcomes (26). The "Survival" module in TIMER was used to evaluate correlations between clinical outcomes and the infiltration levels of immune cells.
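A minimal sketch of this scoring step is shown below; the objects `expr`, `immune_sets`, `cdt1_expr`, and `cdt1_group` are assumed inputs, and newer GSVA releases may prefer a parameter-object interface over the classic `gsva()` call.

```r
# Minimal sketch of the ssGSEA immune-infiltration scoring.
# `expr`: log2(TPM + 1) matrix (genes x samples); `immune_sets`: named list of the
# 24 immune-cell gene signatures; `cdt1_expr`: CDT1 expression per sample.
library(GSVA)

ssgsea_scores <- gsva(expr, immune_sets, method = "ssgsea")  # cell types x samples

# Spearman correlation between CDT1 expression and each cell type's enrichment score.
cors <- apply(ssgsea_scores, 1, function(s) cor(cdt1_expr, s, method = "spearman"))

# Compare infiltration between CDT1-high and CDT1-low samples (Wilcoxon rank-sum test):
# wilcox.test(ssgsea_scores["Th2 cells", ] ~ cdt1_group)
```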
Statistical Analysis
Student's t-test or one-way analysis of variance (ANOVA) was performed to analyze the statistical difference. Kaplan-Meier analysis was employed for evaluating patient survival. Survival difference was evaluated using Log-rank (Mantel-Cox) test. Univariate and multivariate Cox analyses were employed to evaluate the independent prognostic significance of CDT1 expression level and other clinical parameters on OS and DSS in HCC patients. Receiver operating characteristic (ROC) curves were established to evaluate the diagnostic significance of CDT1 expression using the pROC package in R (27), and the area under the ROC curve (AUC) indicated the magnitude of diagnostic efficiency. AUC > 0.7 and 0.5-0.7 indicated good accuracy and weak accuracy, respectively. Based on the expression values of CDT1 and other clinical parameters, we established a nomogram to predict OS of HCC patients at 1, 3, and 5 years. Spearman's correlation coefficients were calculated to investigate the association between CDT1 and MCM family genes.
All statistical analyses were conducted using R software (version 3.6.3). Statistical significance was defined as p < 0.05. Continuous data were presented as means ± standard deviation.
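For illustration, the main survival and diagnostic analyses can be sketched as below; the data frame `dat` and its column names are assumptions, not the exact variables used.

```r
# Minimal sketch of the Kaplan-Meier, Cox, and ROC analyses described above.
# `dat`: data frame with columns time (survival time), status (event indicator),
# cdt1 (expression), group ("tumor"/"normal"), and clinical covariates (assumed names).
library(survival)
library(pROC)

# Kaplan-Meier comparison of high vs. low CDT1 with a log-rank test.
dat$cdt1_group <- ifelse(dat$cdt1 > median(dat$cdt1), "high", "low")
survdiff(Surv(time, status) ~ cdt1_group, data = dat)

# Univariate and multivariate Cox models for independent prognostic value.
coxph(Surv(time, status) ~ cdt1, data = dat)
coxph(Surv(time, status) ~ cdt1 + stage + grade + age, data = dat)

# ROC curve for discriminating tumor from normal tissue; AUC > 0.7 read as good accuracy.
roc_obj <- roc(response = dat$group, predictor = dat$cdt1, levels = c("normal", "tumor"))
auc(roc_obj)
```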
Pan-Cancer Analysis of CDT1 Expression
We examined CDT1 expression levels in different types of cancer using independent datasets from different sources. First, the transcriptional levels of CDT1 in various human cancers and their counterpart normal tissues were investigated in the TCGA and GTEx datasets. CDT1 expression was significantly higher in tumor tissues than in normal tissues for multiple cancers, including breast invasive carcinoma and stomach adenocarcinoma (Figures 2A, B). We then evaluated pan-carcinoma CDT1 expression levels using ONCOMINE, revealing the same expression trend as above ( Figure 2C). Further, analysis of CDT1 expression levels in multiple common cancer cell lines from the CCLE database indicated that liver cancer cells had relatively higher CDT1 expression than other tumor cells ( Figure 2D).
CDT1 Expression in Hepatocellular Carcinoma From Different Databases
Although accumulating evidence suggests that CDT1 is a novel tumor biomarker (28,29), transcriptional analysis of CDT1 in human HCC has not been well documented. Therefore, we utilized the TCGA data to compare transcriptional levels of CDT1 between HCC cancer samples and normal samples. mRNA expression levels of CDT1 were significantly increased in HCC samples relative to normal liver samples (p < 0.001) ( Figure 3A). This conclusion was also verified in paired HCC and normal tissues (p < 0.001) ( Figure 3B). We also compared transcriptional levels of CDT1 between HCC samples and normal control samples in the HCCDB dataset, which suggested abnormally high CDT1 expression in HCC ( Figure 3C). The same conclusion was further confirmed in the ONCOMINE data (p < 0.001). Specifically, the Roessler and Wurmbach datasets indicated that CDT1 was upregulated in HCC tissues relative to normal tissues, with FCs of 1.285-1.852 ( Figures 3D, E). Furthermore, high CDT1 protein expression was observed in HCC tissues based on the HPA dataset ( Figure 3F). Besides, our IHC staining results on HCC tissue microarray demonstrated that CDT1 expression in HCC tissues was significantly higher than that in paired adjacent non-tumor tissues. The results of the paired scatter plot using paired t-test are shown in Figure 3G. Finally, the difference in CDT1 expression between normal and HCC cells was validated by qRT-PCR and western blot analysis of a normal human liver cell line and three HCC cell lines ( Figures 3H, I).
Relationship of mRNA Levels of CDT1 and Clinicopathological Features of HCC Patients
UALCAN was used to assess the relationship between CDT1 expression and the clinicopathologic features of HCC patients, including clinical cancer stage, pathological tumor grade, patient age, and TP53-mutation status. Higher levels of CDT1 mRNA tended to be expressed in tissues obtained from HCC patients with advanced cancer stages (p < 0.01). The highest mRNA levels of CDT1 were predominantly found in patients in stages II and III (Figure 4A). Pathological tumor grading has important prognostic significance. According to pathological tumor grading criteria, patients with high-grade tumors tended to exhibit higher mRNA levels of CDT1 (p < 0.05) (Figure 4B). Significantly high CDT1 expression was found mainly in the 41-60 year age group (p < 0.05) (Figure 4C). The p53 variation reportedly plays an important role in the occurrence and development of tumors (30). As expected, significant differences in CDT1 expression were identified between the TP53-mutation group and the normal and TP53-nonmutation groups (p < 0.001) (Figure 4D). Moreover, CDT1 expression was significantly associated with gender, race, pathologic stage, T stage, and alpha-fetoprotein level (p < 0.05) (Supplementary Table 1). Logistic regression analysis demonstrated that CDT1 expression was closely associated with a variety of clinical characteristics of poor prognosis such as pathological stage (OR = 2.304, 95% confidence interval [CI] = 1.505-3.548, p < 0.001), T stage (OR = 2.429, 95% CI = 1.605-3.699, p < 0.001), alpha-fetoprotein (OR = 3.428, 95% CI = 1.908-6.361, p < 0.001), and histological grade (OR = 3.256, 95% CI = 2.095-5.123, p < 0.001) (Table 1). Furthermore, various survival parameters were also evaluated for their relationship with CDT1 mRNA levels in HCC patients. Survival analysis demonstrated that the OS (defined as the period from disease onset to death), PFS (reflecting tumor worsening), RFS (referring to the time from primary treatment to recurrence), and DSS (reflecting death from the cancer itself) rates of HCC patients with high CDT1 expression were significantly lower than those of patients with low CDT1 expression (p < 0.001) (Figures 4E-H). Further survival analysis using the GSE14520 (HCCDB6) and ICGC-LIRI-JP (HCCDB18) datasets on HCCDB reached the same conclusion (Figures 4I, J). We also evaluated the impact of CDT1 expression on OS in HCC patients of different ages and TNM stages (Figure 4K). In addition, we assessed the independent prognostic value of CDT1 using Cox proportional hazards regression analysis based on the RNA-Seq data and clinical information from the TCGA dataset. The results demonstrated that a high transcriptional level of CDT1 was independently correlated with worse OS and DSS (Table 2). The transcriptional level of CDT1 was thus confirmed to be an independent prognostic factor for OS and DSS in HCC patients.
Diagnostic Value of CDT1 Expression in HCC
The ROC curve analysis demonstrated the strong value of CDT1 in the diagnosis of HCC ( Figure 5A). Next, we evaluated the diagnostic value of CDT1 expression for different clinical features of HCC patients. Specifically, the AUC values were 0.969 for stage I/II, 0.975 for stage III/IV, 0.969 for stage T1/T2, 0.975 for stage T3/T4, 0.976 for stage N0, 0.972 for stage M0, and 0.960 for stage G1/G2 (Figures 5B-H). Furthermore, we established a nomogram combining CDT1 expression and key clinical factors to predict the 1-, 3-, and 5-year survival of HCC patients. A higher nomogram score for OS indicated a worse prognosis ( Figure 5I). These results implied that the transcriptional level of CDT1 was relatively sensitive and specific for the diagnosis of HCC.
Identification of Differentially Expressed Genes
To explore the abnormal changes in downstream pathways caused by high expression of CDT1, we identified DEGs between HCC samples with CDT1-high and CDT1-low mRNA expression based on the TCGA data. Among a total of 3755 DEGs, 2873 were upregulated and 882 were downregulated. Volcano plots and bar graphs were generated to visually display the distribution of DEGs (Figures 6A, B), and heatmaps depicted the top 15 significantly upregulated and downregulated DEGs between the CDT1-high and CDT1-low expression groups (Figures 6C, D).
Enrichment Analysis of CDT1 and Its Most Similar Genes
To further clarify the potential mechanisms of CDT1 in HCC progression, GO and KEGG enrichment analyses were performed to predict the functions and pathways of the top 15 upregulated and downregulated CDT1-related DEGs. The biological processes for these genes were predominantly enriched in DNA replication initiation, nuclear division, mitotic nuclear division, and organelle fission. The molecular functions for these genes mainly included DNA replication origin binding, 3'-5' DNA helicase activity, helicase activity, and ATPase activity. In the cellular component category, the CDT1-related DEGs were mainly enriched in the chromosomal region, condensed chromosome kinetochore, and spindle (Figure 7A). The results of KEGG enrichment revealed several major pathways: cell cycle, DNA replication, and homologous recombination (Figure 7B). In addition, through the "HCC meta co-expression network" function of the HCCDB database, we acquired genes with expression patterns similar to that of CDT1 in HCC and conducted further enrichment analysis of these co-expressed genes (Supplementary Figure 1A). The GO and KEGG enrichment results for the CDT1-related co-expressed genes were similar to those described above (Supplementary Figures 1B, C). The CDT1-related DEGs were further analyzed using GSEA to identify signaling pathways that were significantly enriched (FDR < 0.25, adjusted p-value < 0.05) in HCC. Based on normalized enrichment scores, DNA replication, DNA repair, prometaphase mitosis, cell senescence, and pathways in cancer were significantly enriched (Figures 7C-H). All of the above enriched pathways are markedly associated with the occurrence and progression of malignant tumors.
Molecular Interactions of CDT1 in HCC
Through detection of similar gene functions in GEPIA, we identified genes whose expression patterns were similar to that of CDT1 in HCC patients. We constructed a PPI network to elucidate the potential interactions between CDT1 and genes with similar functions (Figure 8B). CDT1 and genes similar to it (e.g., KIFC1, RNASEH2A, MCM2, E2F2, and CCNF) were associated with DNA replication origin binding, helicase activity, DNA binding, nucleic acid binding, and the MCM complex. We also constructed a PPI network to explore the interactions between CDT1 and its co-expressed genes. Similar to the above results, the interaction network of CDT1-related co-expressed genes was mainly associated with the cell cycle and DNA replication (Supplementary Figure 2). In addition, the gene-gene interaction network also confirmed that CDT1 and its associated genes were primarily associated with DNA replication, DNA-dependent DNA replication, DNA strand elongation, and the MCM complex (Figure 8A). From the results of our interaction analyses, we identified a correlation between CDT1 and MCM family genes in HCC. It is now widely accepted that CDT1 cooperates with CDC6 to load MCMs onto the ORC and further induce chromatin unfolding (31). Therefore, we further analyzed the correlation between MCMs and CDT1. The heatmap depicts the expression of MCM family genes in HCC samples with CDT1-high and CDT1-low expression (Figure 8C). Scatter plots obtained using Spearman correlation analysis indicated that MCMs, except MCM9, were highly correlated with CDT1 at the transcriptional level (Figures 8D-G and Supplementary Figure 3A). We further utilized the TCGA data to compare transcriptional levels of MCMs between HCC samples and normal samples. The results indicated that mRNA expression levels of all MCM family genes, except MCM9, were significantly higher in HCC tissues than in normal tissues and paired tissues (p < 0.001) (Figures 9A, B and Supplementary Figure 3B). Furthermore, high expression of MCMs, except MCM9, was significantly associated with shorter OS (p < 0.001) (Figures 9C-J and Supplementary Figure 3C).
To sum up, our present study indicated a close relationship of MCMs with CDT1 as well as their expression and prognostic significance in HCC patients, which suggests the potential mechanism by which CDT1 and MCM family genes cooperate to promote the occurrence and development of HCC.
Correlation Analysis Between CDT1 Expression and Various Immune Infiltrates
Immune cells in the tumor microenvironment largely influence the biological behavior of the tumor (32,33). Investigating the infiltration of various immune cells in the HCC microenvironment, we demonstrated that CDT1 expression was positively correlated with the abundance of immunocytes such as T helper 2 (Th2) cells, activated dendritic cells, and T follicular helper cells, but was negatively correlated with the abundance of innate immunocytes such as neutrophils, dendritic cells, cytotoxic cells, and mast cells (Figures 10A-G). Moreover, we assessed the independent prognostic value of immune cell infiltration and CDT1 expression with Cox proportional hazards regression analysis in TIMER. The results indicated that the expression of CDT1 and the degree of infiltration of all six immune cell types, except neutrophils, were independently associated with significantly shorter OS (Table 3).
CDT1 Knockdown Inhibited, Whereas Overexpression Promoted Tumorigenicity of HCC Cells In Vitro
We further validated the role of CDT1 in HCC in vitro. Since CDT1 expression in the LM3 and Hep3B cell lines was higher than that in the other cell lines (Figure 3H), they were selected for functional analysis. The interference efficiency of CDT1 knockdown by siRNA was confirmed using western blotting (Figure 11A). The results of CCK-8 and colony formation assays demonstrated that CDT1 knockdown significantly inhibited the proliferation and colony-forming abilities of LM3 and Hep3B cells (Figures 11B, F-H). Further, the wound healing and Transwell assays demonstrated that CDT1 knockdown significantly inhibited the migration and invasion of HCC cells (Figures 11C-E). The function of CDT1 was further investigated by performing overexpression studies. The efficiency of CDT1 overexpression in the LM3 and Hep3B cell lines was confirmed by western blotting (Figure 12A). Consistent with the knockdown experiments, the results of the CCK-8, colony formation, wound healing, and Transwell assays showed that overexpression of CDT1 significantly promoted HCC cell proliferation, invasion, and migration (Figures 12B-E).
DISCUSSION
HCC is one of the most common cancers and the second leading cause of cancer-related death, accounting for 600,000 deaths each year (1). Numerous studies have shown that metastasis at later stages is the leading cause of HCC-related mortality, which highlights the importance of early diagnosis and disease management (34). Although landmark advances in HCC diagnosis have been achieved in recent years, only a small proportion of liver cancer cases are detected and diagnosed at an early stage (35). Moreover, despite recent breakthroughs in diagnosis and treatment, the prognosis of HCC patients remains far from satisfactory. Therefore, effective biomarkers and novel therapeutic targets for HCC are urgently needed.
A growing body of evidence has demonstrated that abnormal DNA replication and uncontrolled cell cycle progression are important hallmarks of tumor genesis, invasion, and progression (5). CDT1 is thought to participate in the coordination of the cell cycle and proliferation in eukaryotic cells by forming a pre-RC at the beginning of the cell cycle, which further loads MCMs onto chromatin (8). At present, some studies have reported that abnormally high expression of CDT1 is explicitly associated with the occurrence, development, and malignant behavior of tumors (6,9,10). However, the exact role of CDT1 proteins in HCC remains unknown. In our study, we systematically characterized CDT1 in HCC, revealing its expression profile, predictive and prognostic significance, potential functions, interactive network, miRNA regulation, and association with infiltration levels of immune subsets.
We first examined the transcriptional levels of CDT1 in different types of cancer using independent datasets from three different sources (ONCOMINE, TCGA, and GTEx). CDT1 was highly expressed in various tumors including cervical cancer, breast cancer, colorectal cancer, and liver cancer. Similarly, high CDT1 expression was identified in a variety of tumor cells in the CCLE database. Together, these results suggest that CDT1 may have a potential promoting role in tumor development.
Subsequently, we revealed significantly higher transcriptional levels of CDT1 in HCC specimens than in normal samples. Elevated CDT1 expression has been detected in several cancers, including lung cancer, breast cancer, and lymphoma (6, 10, 11). Karakaidos et al. examined a large number of non-small cell lung cancer samples and corresponding normal lung samples, reporting that CDT1 was overexpressed in most lung cancer tissues at the mRNA and protein levels (10). Additionally, a recent study reported that the transcriptional level of CDT1 was significantly increased in breast cancer cells compared with normal breast epithelial cells (6). Further, high CDT1 expression has been associated with an undesirable prognosis in lymphoma: using the Lck promoter element to overexpress CDT1 in T cells, researchers reported the progression of lymphoblastic lymphoma in p53-knockout mice (11). In the current study, higher transcriptional levels of CDT1 were identified in HCC samples compared to normal liver samples across a variety of databases. CDT1 protein expression in HCC tissues was also significantly higher than that in normal liver tissues in the HPA dataset and clinical tissue microarray. To further verify our conclusion, we detected the relative expression levels of CDT1 in various HCC cell lines and a normal liver cell line by qRT-PCR and western blotting, obtaining results that were consistent with our bioinformatics analysis.
We further investigated the relationship between CDT1 expression and the clinical characteristics of HCC patients, revealing that CDT1 expression was correlated with tumor stage and TP53-mutation status. Logistic regression analysis indicated that CDT1 expression was significantly associated with alpha-fetoprotein, pathologic stage, histologic grade, and other clinical parameters in HCC patients. Kaplan-Meier analysis indicated that high CDT1 expression was suggestive of undesirable OS, PFS, RFS, and DSS prognoses in HCC patients. Univariate and multivariate regression analyses confirmed that high CDT1 expression was an independent adverse prognostic factor for OS and DSS in HCC. At present, a prediction profile of HCC based on CDT1 expression has not been reported, but our ROC curve analysis suggested that CDT1 expression has significant value in the diagnosis of HCC. We further established a nomogram by integrating various clinical parameters and CDT1 mRNA levels from the TCGA dataset to predict individual patient mortality risk and help optimize therapy decisions.
To explore the abnormal changes in downstream pathways caused by high CDT1 expression in HCC, we identified DEGs between HCC patients with high and low CDT1 expression. GO and KEGG enrichment results revealed that the above DEGs mainly participated in cell cycle, DNA replication, and 3'-5' DNA helicase activity. Furthermore, GSEA analysis revealed that the DEGs were significantly enriched in cell cycle checkpoints, DNA repair, DNA replication, prometaphase mitosis, cell senescence, and pathway in cancer. All the above-enriched pathways were markedly correlated with the occurrence and progression of malignant tumors.
PPI network analysis indicated that CDT1 and its similar genes were primarily related to the cell cycle, DNA replication, and the MCM complex. The MCM protein family, including MCM2-10, is reportedly responsible for modulating the cell cycle and DNA replication in eukaryotes (32). Further, overexpression of CDT1 and MCMs has been previously reported in several human cancers (29,32). Our PPI network analysis identified a close correlation between CDT1 and MCM family genes in HCC. Further analysis revealed their transcriptional levels were highly correlated in HCC. In addition, mRNA levels of MCMs, except MCM9, were markedly increased in HCC samples relative to normal samples, and their high expression predicted poor HCC prognosis. Combined, these results indicate a close relationship between MCMs and CDT1, as well as their expression and prognostic significance in HCC patients, which suggests a potential mechanism through which CDT1 and MCMs cooperate to promote the occurrence and development of HCC.
An increasing body of evidence supports the hypothesis that immune cell infiltration influences the occurrence and progression of cancer, which adversely affects clinical prognosis and immunotherapy effectiveness (36). The relationship between CDT1 mRNA level and the degree of immune cell infiltration in HCC was another significant finding of this study. CDT1 expression was significantly associated with the abundance of activated dendritic cells, T follicular helper cells, neutrophils, dendritic cells, cytotoxic cells, mast cells, and especially Th2 cells. Th cells are important immune regulatory cells in the body and the Th1/Th2 ratio is in dynamic equilibrium under normal conditions (37). When the secretion of Th2 cytokines increases in patients with malignant tumors, Th1/Th2 drift will occur, resulting in Th1/Th2 imbalance (38). Many tumors, including lung cancer, liver cancer, and gastric cancer, have a Th1/Th2 balance shift often dominated by Th2 cells in the body, which may be related to the immune escape of tumors (39). Consistent with the above information, we found that CDT1 expression was positively associated with Th2 cell infiltration in HCC. Furthermore, the Cox proportional hazard model revealed that B cells, CD8+ T cells, CD4+ T cells, macrophages, and dendritic cells were explicitly associated with undesirable clinical outcomes of HCC patients.
Finally, we investigated the impact of CDT1 expression on the malignant phenotype of HCC cells in vitro. CDT1 knockdown significantly inhibited, whereas overexpression significantly promoted, the proliferation, migration, and invasion of LM3 and Hep3B cells. These results suggest that CDT1 may play an important role in facilitating the development of HCC cells, although the exact downstream mechanism underlying this effect remains to be determined.
Although this study revealed the potential significance and possible mechanisms of CDT1 in the occurrence and development of HCC, it has some limitations. First, the functional assessment of CDT1 was based on an in vitro model and was not confirmed in vivo; this needs to be further explored in future studies. Second, the expression of CDT1 and its prognostic significance need to be verified in larger clinical cohorts, as reliance on public datasets may introduce bias. Finally, although this study demonstrated that CDT1 plays a role in regulating the cell cycle and influencing immune infiltration, the underlying molecular mechanisms and signaling pathways have not been explored. We will conduct future studies to elucidate the mechanism of CDT1 in HCC.
CONCLUSION
In conclusion, we comprehensively and systematically evaluated the expression patterns, prognostic and diagnostic value, and potential mechanisms of CDT1 in the occurrence and development of HCC. Our results provide novel insight to help identify new prognostic biomarkers and therapeutic targets, which may assist clinicians to more accurately predict the survival of HCC patients and inform their treatment decisions.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. This data can be found here: https://portal.gdc.cancer.gov/ (The Cancer Genome Atlas (TCGA) program).
AUTHOR CONTRIBUTIONS
CC and TC developed the idea and designed the research. YZ, XH, WH, SY, and HQ analyzed the data. CC wrote the draft of the manuscript. TC supervised the project. All authors contributed to the article and approved the submitted version.
Intraclass Correlation Estimates for Cancer Screening Outcomes: Estimates and Applications in the Design of Group-Randomized Cancer Screening Studies
Background: Screening has become one of our best tools for early detection and prevention of cancer. The group-randomized trial is the most rigorous experimental design for evaluating multilevel interventions. However, identifying the proper sample size for a group-randomized trial requires reliable estimates of intraclass correlation (ICC) for screening outcomes, which are not available to researchers. We present crude and adjusted ICC estimates for cancer screening outcomes for various levels of aggregation (physician, clinic, and county) and provide an example of how these ICC estimates may be used in the design of a future trial.
Methods: Investigators working in the area of cancer screening were contacted and asked to provide crude and adjusted ICC estimates using the analysis of variance method estimator.
Results: Of the 29 investigators identified, estimates were obtained from 10 investigators who had relevant data. ICC estimates were calculated from 13 different studies, with more than half of the studies collecting information on colorectal screening. In the majority of cases, ICC estimates could be adjusted for age, education, and other demographic characteristics, leading to a reduction in the ICC. ICC estimates varied considerably by cancer site and level of aggregation of the groups.
Conclusions: We have compiled 130 crude and adjusted ICC estimates covering breast, cervical, colon, and prostate screening and have detailed them by level of aggregation, screening measure, and study characteristics. We have also demonstrated their use in planning a future trial and the need for evaluation of the proposed interval estimator for binary outcomes under conditions typically seen in GRTs.
Screening has become one of our best tools for early detection and prevention of breast, cervical, and colorectal cancers. Despite periodic modifications of specific recommendations, these screening tests continue to include the following: mammography for breast cancer; prostate-specific antigen testing for prostate cancer; the Papanicolaou test and testing for high-risk types of human papillomavirus for cervical cancer; and the fecal occult blood test, flexible sigmoidoscopy, and colonoscopy for colorectal cancer screening (1,2). In spite of their efficacy, uptake of these screening tests is not optimal, and further outreach and dissemination efforts are needed to inform the community about screening test availability and recommended intervals, to reduce health service access barriers to obtaining screening, and to encourage positive decisions to seek screening (2). These issues are particularly apparent in rural communities, such as Appalachia (3-6).
Public health interventions to increase screening include efforts focusing on individuals, the health-care providers, the health-care delivery systems, other organizational groups in the community (churches and work sites), or an entire community (2,4,6,7). When an intervention operates at a group level, when it cannot be delivered to individuals, or when it manipulates the social or physical environment, a cluster or group-randomized trial (GRT) may be employed to evaluate the intervention effects. GRTs are a natural extension of the usual randomized clinical trial; in GRTs, distinct groups rather than individuals are randomly assigned to the intervention or control condition (8,9).
Because the primary goal of a GRT is to compare the treatment conditions, which are assigned to groups rather than to individuals, the design and analysis of the trial must account for individuals being members of a group. Group membership is expressed as the correlation among individuals in the same group. Individuals who see the same physician, who go to the same clinic, who work in the same place, or who live in the same community are expected to share some common characteristics, creating a positive intraclass correlation (ICC).
A positive ICC affects the estimated variance of the intervention effect by a factor of 1 + (m − 1)ρ, where m is the average number of individuals per group and ρ is the ICC between members of the group (10). For large m, this inflation factor may substantially increase the variance, even when ρ is small, as it often is in GRTs.
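As a quick illustration with assumed values, the design effect for the clinic sizes and ICCs typical of the studies described later is easy to compute:

```r
# Illustrative design effect (assumed values): m = 25 members per clinic, ICC = 0.05.
deff <- 1 + (25 - 1) * 0.05   # = 2.2
# The variance of the intervention effect is inflated 2.2-fold, so 25 correlated
# members carry roughly the information of 25 / 2.2 (about 11) independent individuals.
```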
Identifying the proper sample size for a GRT requires reliable estimates of ICC, which are often not published or easily available to researchers. An underestimated ICC will result in an underpowered study, whereas an inflated ICC will require too many groups to be randomized. Accurate sample size estimates are needed for the efficient and timely use of scarce research funding.
Gathering estimates of relevant ICCs is an important step in planning a GRT. We are aware of only two articles that have published ICCs for cancer screening outcomes (11,12). In this article, we present the results of a study to gather both crude and adjusted ICC estimates for different cancer screening outcomes for various levels of aggregation (physician, clinic, county, and region). Furthermore, we provide an example of how these ICC estimates may be used in the design of a future trial.
Data Sources
Twenty-nine investigators working in the area of cancer screening were identified based on our experience in cancer screening research and through discussions with officials at the National Cancer Institute; all were contacted via e-mail in February 2009. Each was asked whether he or she had access to data on cancer screening outcomes (ever screened, yes/no; screened within guidelines, yes/no) and would be willing to work together to calculate crude and adjusted ICC estimates. Approximately 2 weeks after the initial e-mail, regular follow-up phone calls began to address investigators' concerns and to answer questions they had in calculating ICCs. Regular contact continued with each investigator to compile results and to ensure that all calculations were performed in a consistent fashion. Use of all data was approved by the investigators' local institutional review boards.
Cancer Screening Outcomes
For each estimated ICC, collaborating investigators provided details on the study's design, including the target cancer under study, the percentage of individuals ever screened or screened within guidelines, the type and number of groups, and the number of individuals for each group.
Analysis Methods
To calculate ICC estimates consistently, investigators were asked to estimate the ICC via the analysis of variance or analysis of covariance method, which has been shown to perform well for continuous and binary outcomes (13). ICCs were calculated as

ICC = (MSB − MSW) / [MSB + (m0 − 1) MSW],

where MSB and MSW are the between-group and within-group mean squares, respectively, and

m0 = [M − (Σ mi²) / M] / (g − 1),

which is a weighted mean group size. The total number of subjects is given by M = Σ mi, where mi is the number of subjects in the ith group and g is the number of groups. When possible, unadjusted/crude estimates of the ICC, ICC adjusted for age and education, and ICC adjusted for other covariates were provided for each outcome and level of aggregation.
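A minimal R sketch of this moment estimator is shown below (function and variable names are ours for illustration); for the adjusted ICCs, the group term would instead be added to an ANCOVA model containing the covariates and the corresponding mean squares used.

```r
# Minimal sketch of the one-way ANOVA estimator of the ICC described above.
# `y`: (possibly binary) outcome; `grp`: group identifier (physician, clinic, county).
anova_icc <- function(y, grp) {
  grp <- factor(grp)
  tab <- anova(aov(y ~ grp))
  msb <- tab["grp", "Mean Sq"]           # between-group mean square (MSB)
  msw <- tab["Residuals", "Mean Sq"]     # within-group mean square (MSW)
  m_i <- as.numeric(table(grp))          # group sizes
  M   <- sum(m_i)                        # total number of subjects
  g   <- nlevels(grp)                    # number of groups
  m0  <- (M - sum(m_i^2) / M) / (g - 1)  # weighted mean group size
  (msb - msw) / (msb + (m0 - 1) * msw)
}
```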
Results
Of the 29 investigators initially contacted, two referred us to their collaborators who were principal investigators of pertinent cancer screening studies; one investigator initially contacted was involved in a research project of a principal investigator already contacted by us. Of the 28 investigators of unique research projects, 10 agreed to collaborate, 11 indicated that they did not have any relevant data to share, three declined to participate because of time constraints, and four did not respond.
From the 10 participating investigators, we received 138 ICC estimates from 12 different studies. Characteristics of each data source are presented in Table 1. More than half of the studies collected information on colorectal and mammography screening, five of the 12 studies collected data on Papanicolaou test screening, whereas only two studies could provide information on prostate cancer screening (prostate-specific antigen). Outcomes were assessed via medical record abstraction/chart review for more than half of the studies, and the majority of studies enrolled participants older than 40 years. Table 2 presents crude and adjusted ICCs and further study characteristics. We note that all ICC estimates are from baseline data, except when noted as coming from follow-up. Adjustment for basic demographics (age and education) as well as adjustment for other factors reduced the estimated ICCs in most cases. Adjusted ICCs (models 2 and 3) most often treated age as a continuous covariate and education as a categorical covariate; exceptions are noted in Table 2. Estimates of ICCs varied considerably by cancer site and by the size of the aggregated group, with larger groups tending to have smaller ICCs (24). Adjustment factors considered by investigators, other than age and education, included income, marital status, race, ethnicity, city, insurance status, smoking status, comorbidities, and the number of primary care visits recorded.
Application of Findings for Trial Design
Details of how to use ICC estimates in sample size calculations for GRTs have been described elsewhere (8,9). Here, we provide a relevant example for potential GRTs in cancer screening. We consider a nested cohort design to examine the effect of a new intervention program to increase colon cancer screening in a diverse urban population of men and women. We plan to implement our intervention in community health clinics and will verify up-to-date colorectal cancer screening via chart review. We expect that approximately 40% of adults in our population are already appropriately screened, and we believe that an increase in this rate by 30%, to 52% screened, would be a reasonable and scientifically meaningful increase. Moreover, we believe that we can recruit at least 25 patients on average from each clinic. The planned analysis of this trial will be via a mixed model analysis of covariance, adjusting for baseline covariates. The sample size formula for this type of trial can be written as follows (8):

gc = 2 (tα/2,df + tβ,df)² σy² [θm (1 − ICC) + m·θg·ICC] / (m·Δ²).

Here, gc is the number of clinics per condition, m is the average number of individuals per group (clinic), Δ is the difference in screening rates to be detected, and σy² is the variance of the primary endpoint. The critical values tα/2,df and tβ,df reflect the acceptable Type I and II error rates for this trial, and θm and θg reflect one minus the percent of variance reduction expected through regression adjustment for member-level and group-level covariates, respectively.
For example, we may expect regression adjustment for member-level covariates to reduce the variance in our outcome by 10%, and therefore θm would be set to .90. Note that conservative estimates of θm and θg would be 1. Sample size calculation begins with the critical values tα/2,df and tβ,df set for infinite df. Next, we use the calculated sample size to determine an updated estimate of df and iterate through the calculation, updating the critical values appropriately. Given the proposed study's target population and outcome, we will use the ICC estimates from Ferrante et al. (15) for this example (cf. Table 2). We calculate the sample size taking the estimated ICC to be .05, θm = .90, θg = .80, a Type I error of 5%, and 80% power. Using the critical values with infinite df, this calculation gives approximately 21 clinics per condition. We can then recalculate the sample size assuming df equal to 2(21 − 1) = 40, which yields 22 clinics per condition.
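The following minimal R sketch reproduces this iterative calculation under the stated assumptions; the variance of the binary endpoint is taken here as p̄(1 − p̄) with p̄ = 0.46 (the average of the control and intervention rates), which is our assumption rather than a value given in the text.

```r
# Minimal sketch of the iterative clinic-count calculation for the example above.
# Assumptions: delta = 0.12 (40% -> 52%), var_y = 0.46 * 0.54, m = 25 patients per
# clinic, ICC = 0.05, theta_m = 0.90, theta_g = 0.80, alpha = 0.05 (two-sided), 80% power.
grt_clinics <- function(delta = 0.12, var_y = 0.46 * 0.54, m = 25, icc = 0.05,
                        theta_m = 0.90, theta_g = 0.80, alpha = 0.05, power = 0.80) {
  df <- Inf
  for (i in 1:10) {  # iterate until the df (and hence g_c) stabilizes
    crit <- qt(1 - alpha / 2, df) + qt(power, df)
    g_c  <- crit^2 * 2 * var_y * (theta_m * (1 - icc) + m * theta_g * icc) / (m * delta^2)
    g_c  <- ceiling(g_c)
    df   <- 2 * (g_c - 1)
  }
  g_c
}

grt_clinics()            # about 22 clinics per condition under these assumptions
grt_clinics(icc = 0.08)  # sensitivity check at the upper confidence limit for the ICC
```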
Therefore, we can suggest that with 22 clinics per condition and 25 patients per clinic, we will have 80% power to detect a 12% absolute increase in screening from a baseline of 40% given the above assumptions. To gauge the sensitivity of the calculated sample size to the study assumptions, we vary both the number of patients to be recruited per clinic and the estimated ICC. Because we expect to be able to recruit at least 25 patients per clinic, a reasonable upper value may be 75 patients per clinic. To obtain a range of ICC values, we calculate the one-sided upper 80% confidence interval for the ICC based on the method described by Searle (25) and by Snedecor and Cochran (26). This method was developed for continuous outcomes, and it is unknown if the nominal coverage level is maintained for binary outcomes (27,28). Even so, we use this method here only to provide an approximate range of values for sample size calculation. Further investigation of the properties of this confidence interval method for binary outcomes is needed under conditions typically seen in GRTs. Kieser and Wassmer (29) discuss the use of confidence limits for estimates used in sample size calculation to take into account uncertainty of sample estimates.
They confirm that using the upper one-sided 80% confidence limit should guarantee that the planned power is at least 1 − β, with probability of at least 1 − α. Table 2 provides ICC estimates and their associated number of groups and average number of members per group needed to calculate confidence limits as outlined above. Using the ICC estimate, the associated number of groups, and members from Ferrante et al. (15), we calculate the upper 80% one-sided confidence limit for the ICC to be approximately .08.
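For readers who want to reproduce this step, the sketch below shows one way such an upper one-sided limit can be obtained from a published ICC, the number of groups, and the average group size, using the one-way ANOVA (F-ratio) formulation described by Searle (25) and Snedecor and Cochran (26). Equal group sizes are assumed, the binary-outcome caveat noted above still applies, and the illustrative inputs are hypothetical rather than the Ferrante et al. values.

```python
from scipy import stats

def icc_upper_limit(icc, k, n, conf=0.80):
    """One-sided upper confidence limit for an ICC from a one-way random-effects
    ANOVA with k groups of (average) size n, via the F-ratio representation."""
    # Reconstruct the F ratio implied by the point estimate.
    f_obs = (1.0 + (n - 1.0) * icc) / (1.0 - icc)
    # Lower-tail F quantile with (k - 1, k*(n - 1)) degrees of freedom.
    f_low = stats.f.ppf(1.0 - conf, k - 1, k * (n - 1))
    ratio = f_obs / f_low
    return (ratio - 1.0) / (ratio + n - 1.0)

# Hypothetical illustration: ICC of .05 estimated from 20 clinics of 50 patients each.
print(icc_upper_limit(0.05, k=20, n=50))
```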
Varying these values, Table 3 outlines the required study sample size per condition. In the range specified, increasing the number of individuals enrolled per clinic reduces the number of groups required, although the decrease appears to be less after increasing to 50 patients per clinic (8). In contrast, any increase in ICC contributes substantially to the number of groups per condition needed to detect our hypothesized treatment effect, with 80% power and 5% two-sided probability of Type I error.
We note that others have suggested varying approaches to account for uncertainty in estimation of the ICC (30-32). Turner et al. use a Bayesian approach that can be extended to combining multiple prior estimates of the ICC. Blitstein et al. (32) developed a method to combine ICC estimates based on techniques common in meta-analysis. Both methods attempt to provide a means to incorporate interstudy heterogeneity and provide investigators the ability to use all data available. Moreover, both authors provide guidance that we find useful for the selection of external ICC estimates (30,32). Available ICCs should be collected from studies that are as similar as possible to the study to be designed. Specifically, it is preferred that the ICC estimates come from studies with a similar endpoint, which use a comparable method of measurement, and are calculated from measurements taken on the same general target population. Furthermore, it is preferable if the design and analysis of the trial from which the ICCs are derived are similar to those of the study being planned (32). Turner et al. (30) relax some of these criteria to incorporate other relevant data sources but allow these to have less influence when combining ICC estimates.
Conclusions
Previously, we had found only two articles with published ICCs for cancer screening outcomes: one discussed cervical screening, whereas the other investigated breast cancer screening (11,12). Their reported ICCs fall in line with those presented here (.02-.07) for breast and cervical cancer screening.
Our work makes at least three relevant contributions to the literature. First, we have compiled and described crude and adjusted ICC estimates from 13 studies covering breast, cervical, colon, and prostate screening estimates. Estimates are detailed by level of aggregation, screening measure, and study characteristics. Second, all ICC estimates in Table 2 were calculated in the same manner for consistency. Finally, we have provided an illustration of how these estimates can be used to plan future trials.
There is considerable variation in the ICC estimates both between and within screening types. This is a function of the screening outcome measure, level of aggregation, and overall study design. We note that adjustment for basic demographic characteristics beyond age and education, which are likely available in almost any study, generally aids in reducing the ICC estimate. In fact, in several instances, the point estimate for an ICC fell below zero. In practice, we would recommend using a small positive value for sample size calculation instead of a negative value or zero. As we have done in the above example, investigators can consider calculating the one-sided upper 80% confidence interval for the ICC estimate, which would likely correspond to a small positive number.
We also note that adjustment for covariates can increase the ICC estimate, as it did in a few cases in Table 1. Group-level ICCs can increase as a result of covariate adjustment (8). This can occur when the uneven distribution of a covariate across groups masks what is otherwise a higher level of within-group correlation. When we adjust for the covariate, we remove the mask and the ICC estimate increases.
Although the studies presented should provide a starting point for investigators in planning future studies, it is likely that they will have to do some of their own pilot work to determine the most accurate ICCs for their studies. These pilot ICCs can be combined with published estimates using either of the methods mentioned above to determine a more robust estimate of ICC for sample size determination.
Swift and Chandra confirm the intensity-hardness correlation of the AXP 1RXS J170849.0-400910
Convincing evidence for long-term variations in the emission properties of the anomalous X-ray pulsar 1RXS J170849.0-400910 has been gathered in the last few years. In particular, and following the pulsar glitches of 1999 and 2001, XMM-Newton witnessed in 2003 a decline of the X-ray flux accompanied by a definite spectral softening. This suggested the existence of a correlation between the luminosity and the spectral hardness in this source, similar to that seen in the soft gamma-repeater SGR 1806-20. Here we report on new Chandra and Swift observations of 1RXS J170849.0-400910 performed in 2004 and 2005, respectively. These observations confirm and strengthen the proposed correlation. The trend appears to have now reversed: the flux increased and the spectrum is now harder. The consequences of these observations for the twisted magnetosphere scenario for anomalous X-ray pulsars are briefly discussed.
Introduction
The Anomalous X-ray Pulsars (AXPs) are a small group of sources which stand apart from other known classes of X-ray pulsars. In particular, they all rotate with spin periods clustered in a very narrow range (P ∼ 5−12 s), they have large period derivatives (Ṗ ∼ 10^−13−10^−10 s s^−1), and, except in one case (Camilo et al. 2006), deep searches for radio pulsations have so far always given negative results (Burgay et al. 2006). Another important characteristic, which motivated the "anomalous" label (Mereghetti & Stella 1995; van Paradijs, Taam & van den Heuvel 1995), is their relatively high X-ray luminosity (∼ 10^34−10^36 erg s^−1), which cannot be accounted for by rotational energy losses alone; moreover, no convincing evidence for a companion star has been found so far for any of them. These considerations quite naturally led to the idea that a non-standard energy production mechanism is involved in their emission.
Many different models have been suggested all along for AXPs, such as that they are accreting from a fossil disk, formed by the debris of the supernova event, or from a very low-mass companion (e.g. Mereghetti & Stella 1995; Mereghetti et al. 1998; Chatterjee et al. 2000; Perna et al. 2000; Alpar 2001). On the other hand, many observational properties support the idea of these sources being magnetars, i.e. isolated neutron stars powered by the decay of their huge magnetic fields (B ∼ 10^14−10^15 G; Duncan & Thompson 1992; Thompson & Duncan 1993). In fact, if the large observed spin-down is interpreted in terms of magneto-dipolar losses, all the AXPs seem to have magnetic fields in excess of the quantum critical field (B > 4.4 × 10^13 G). If this is the case, AXPs should be related to the Soft γ-ray Repeaters (SGRs), another class of X-ray sources thought to involve strongly magnetic neutron stars (see Woods & Thompson 2004 for a recent review on SGRs/AXPs). In recent years, intense monitoring programs revealed several common features between AXPs and SGRs (i.e. short bursts, weak IR counterparts, high energy tails; Gavriil, Kaspi & Woods 2002; Kaspi et al. 2003; Israel et al. 2003; Kuiper, Hermsen & Méndez 2004), strengthening the idea of an underlying relation between these two classes of sources.
AXPs' spectra in the X-ray range are well described by an empirical model consisting of an absorbed black body (kT ∼ 0.3−0.6 keV) plus a relatively steep power law (photon index Γ ∼ 2−4), and a hard X-ray power-law tail with Γ ∼ 1. Until a few years ago AXPs were commonly believed to be steady X-ray emitters (even if hints for variability were already found, see Iwasawa et al. 1992; Baykal & Swank 1996; Oosterbroek et al. 1998), but recently flux changes and spectral variability were detected, both long-term and with spin phase (Kaspi et al. 2003; Mereghetti et al. 2004; Rea et al. 2005).
1RXS J170849.0-400910 is a prototypical AXP, with a period of ∼ 11 s (Sugizaki et al. 1997; Israel et al. 1999), a spin-down rate of ∼ 2×10^−11 s s^−1, and a soft spectrum (Israel et al. 2001). A phase-coherent timing solution, inferred thanks to the long Rossi-XTE monitoring of this source, led to the discovery of two glitches in the last few years, with very different post-glitch behavior (Kaspi, Lackey & Chakrabarty 2000; Dall'Osso et al. 2003; Kaspi & Gavriil 2003). In a very recent paper, Rea et al. (2005) showed that both the flux and the spectral hardness reached a maximum level close to the two glitches that the source experienced in 1999 and 2001, and then decreased again in close correlation. Moreover, a long observation taken by BeppoSAX during the recovery from the second, more dramatic, glitch revealed evidence for a relatively broad absorption line at ∼ 8 keV (Rea et al. 2003), not re-detected in more recent pointings (Rea et al. 2005). The spectral feature has been interpreted as a cyclotron resonance feature, yielding an estimate of the neutron star magnetic field of either 9.2 × 10^11 G or 1.6 × 10^15 G, in the case of electron or proton cyclotron absorption, respectively (Rea et al. 2003). In this paper we present new Swift (calibration) and Chandra observations of 1RXS J170849.0-400910. In Sect. 2 and 3 we describe the timing and spectral analysis of the Swift and Chandra observations, respectively, and report the results. Discussion follows in Sect. 4.
Swift observations
1RXS J170849.0-400910 was observed with the Swift satellite (Gehrels et al. 2004) a few times, being a calibrator for timing accuracy and for the wings of the Point Spread Function of the X-Ray Telescope (XRT, Burrows et al. 2005; see Table 1 for a detailed log of the observations). Here we focus on data taken in Window Timing (WT) mode and in Photon Counting (PC) mode (see footnote 1) with exposures longer than 1 ks. In particular, we used the PC data for spectral analysis and the PC + WT data for timing analysis. This choice is dictated by the fact that WT observations are affected by the source being at the edge of the window; during part of the observation the source fell outside the window, making a secure evaluation of the instrument spectral response difficult.
Data were analysed with the FTOOL task xrtpipeline (version build-14 under HEADAS 6.0). We applied standard screening criteria to the data (CCD temperature T < −45 °C, elimination of hot pixels and bad aspect times). Hot and flickering pixels were removed, as were high background intervals due to dark current enhancements and bright Earth limb. Screened event files were then used to derive light curves and spectra.

Footnote 1: Swift-XRT can observe sources in three observing modes: Low Rate Photodiode (LRPD), Window Timing (WT) and Photon Counting (PC), with timing resolutions of 0.14 ms, 1.8 ms, and 2.5 s, respectively. Each mode is designed to deal with sources of different intensities in order to minimize the effects of photon pile-up, at the cost of spatial information. In LRPD the entire CCD is read as a photodiode and there is no spatial information. In WT mode a 1D image is obtained by compressing the data along the central 200 pixels into a single row. PC data produce standard 2D images. For more details see Hill et al. (2005).

[Table 1 note: observations in bold face were used for both spectral and timing analysis; observations in italics were used for timing analysis only; observations shorter than 100 s are not listed.]
We included data between 0.5 and 10 keV, where the PC response matrix is calibrated (we used the v.8 response matrices). We extracted data from two WT observations. The extraction region is computed automatically by the analysis software and is a box 40 pixels long along the WT strip, centered on the source, encompassing ∼ 98% of the Point Spread Function in this observing mode. We extracted photons from the PC data from an annular region (3 pixels inner radius, 30 pixels outer radius) in order to avoid pile-up contamination. We considered standard grades 0-2 in WT and 0-12 in PC mode. Background spectra were taken from nearby regions free of sources.
Timing Analysis
Data were barycentered using the FTOOL task barycorr to correct the photon arrival times to the Solar system barycenter. A period search led to a clear detection of the neutron star spin period. The best period is P = 11.0027 ± 0.0003 s (all errors in the text are given at 90% confidence level). This has been derived with phase fitting techniques. This period is consistent with the extrapolation from known ephemerides at a constant period derivative (Kaspi & Gavriil 2003;Dall'Osso et al. 2003).
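As a generic illustration of a period search on barycentered event times (not the phase-fitting technique actually used here), the following sketch folds simulated photon arrival times at a grid of trial periods and picks the period that maximizes the χ² of the folded profile against a constant rate; all numbers are invented for the example.

```python
import numpy as np

def epoch_fold_chi2(times, period, nbins=16):
    """Chi-square of a folded profile against a constant rate (epoch folding)."""
    phases = (times / period) % 1.0
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    expected = counts.mean()
    return ((counts - expected) ** 2 / expected).sum()

def period_search(times, p_min, p_max, nsteps=2000, nbins=16):
    """Grid search for the period that maximizes the folded-profile chi-square."""
    trial_periods = np.linspace(p_min, p_max, nsteps)
    chi2 = np.array([epoch_fold_chi2(times, p, nbins) for p in trial_periods])
    return trial_periods[np.argmax(chi2)], trial_periods, chi2

# Illustration with simulated barycentered arrival times (true period ~ 11.0027 s).
rng = np.random.default_rng(0)
true_p = 11.0027
t = np.sort(rng.uniform(0.0, 5000.0, 4000))
keep = rng.uniform(size=t.size) < 0.65 + 0.35 * np.sin(2 * np.pi * t / true_p)
best_p, _, _ = period_search(t[keep], 10.9, 11.1)
print(best_p)
```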
We divided the data into four energy bands and then folded them at the neutron star spin period. The pulse profiles were then fitted with a sine wave, obtaining the pulsed fraction (PF) in the different energy ranges. We define here as PF the (semi-)amplitude of the best-fitting sine to the normalised and background-corrected folded data. We found a PF of 31±2%, 39±3%, 29±4% and 35±7% in the 0.2-10 keV, 0.2-2 keV, 2-4 keV and 4-10 keV energy bands, respectively.

[Table 2 caption: Spectral parameters of the Chandra ACIS-S observation (4.5 count s^−1) and of the simultaneous fits of the 5 longer Swift observations (0.3 count s^−1 on average). Fluxes are in the 0.5-10 keV band in units of 10^−10 erg s^−1 cm^−2; the column density was fixed at the XMM-Newton value of 1.36×10^22 cm^−2 (Rea et al. 2005; phabs model in XSPEC); all errors are at the 90% confidence level. Normalisations are in XSPEC units, i.e. the number of photons keV^−1 s^−1 cm^−2 at 1 keV.]
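The pulsed fraction as defined above, i.e. the semi-amplitude of the best-fitting sine to the normalised folded profile, can be estimated along the following lines; this is an illustrative sketch rather than the authors' code, and the background subtraction and error treatment are deliberately simplified.

```python
import numpy as np
from scipy.optimize import curve_fit

def pulsed_fraction(phases, nbins=16):
    """Semi-amplitude of the best-fitting sine to a normalised folded profile."""
    counts, edges = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    profile = counts / counts.mean()          # normalise to unit mean
    errors = np.sqrt(counts) / counts.mean()  # Poisson errors (assumes non-empty bins)

    def model(phi, amp, phi0, const):
        return const + amp * np.sin(2.0 * np.pi * (phi - phi0))

    popt, pcov = curve_fit(model, centers, profile, p0=[0.3, 0.0, 1.0], sigma=errors)
    amp_err = np.sqrt(np.diag(pcov))[0]
    return abs(popt[0]), amp_err

# Usage: phases = (event_times / best_period) % 1.0, computed per energy band
# after background correction; pf, pf_err = pulsed_fraction(phases).
```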
Spectral analysis
Spectral modelling was performed by fitting together the five PC observations with exposure times longer than 1 ks, grouping the PC spectra to 60 counts per energy bin. The spectra were fitted in the 1-10 keV energy range, since the high absorption made the data below 1 keV useless (Romano et al. 2005). For the PC data described above we generated the appropriate arf files with the FTOOL task xrtmkarf and used the latest v.8 response matrices. The spectral parameters are reported in Table 2.
We first fitted all the data with an absorbed power law (using phabs within XSPEC), leaving all the parameters free to vary. This model gave a reduced chi-squared value of χ²_red = 1.08. The resulting column density is N_H = (2.00 +0.10/−0.16) × 10^22 cm^−2 and the power law photon index is Γ = 3.55 +0.08/−0.17. We also considered a fit with the inclusion of a black body component. In this case we derive a lower column density, N_H = (1.37 +0.27/−0.26) × 10^22 cm^−2, consistent with the value obtained by Rea et al. (2005). In addition we obtained kT = 0.42 +0.04/−0.05 keV and R = 5.4 ± 1.5 km (calculated at a 5 kpc distance), as well as a flatter power law index, Γ = 2.73 +0.43/−0.52. Also in this case the fit is acceptable (χ²_red = 1.01). The inclusion of the black body component is significant at the 3σ level (based on an F-test). The 0.5-10 keV absorbed (unabsorbed) flux is 4.4 +0.07/−0.06 × 10^−11 erg s^−1 cm^−2 (1.43 +0.87/−0.45 × 10^−10 erg s^−1 cm^−2), with the power law component accounting for 72 +7/−8 % of the total flux. The power law contribution seems to be slightly decreased from the 82 ± 1% measured with XMM-Newton. For comparison with previous spectra we also computed the same quantities fixing the column density to the XMM-Newton value (N_H,XMM = (1.36 ± 0.04) × 10^22 cm^−2). Results are reported in Table 2 (in this case the power law contribution amounts to 71 ± 3%).
The 1RXS J170849.0-400910 spectrum and flux changed significantly with respect to the XMM-Newton observation of August 2003. Constraining all the spectral parameters to the XMM-Newton values within their 90% confidence intervals, the resulting fit is not acceptable, with χ²_red = 5.6. In our best fit, the blackbody temperature remains consistent with the XMM-Newton value. The photon index instead decreased significantly and, at the same time, the flux increased. Interestingly, this is in good agreement with the correlation found in this source by Rea et al. (2005; see also below).
Chandra observation
1RXS J170849.0-400910 was observed by Chandra on 2004 July 3 (Obs-ID: 4605) for ∼30 ks with the Advanced CCD Imaging Spectrometer (ACIS). The ACIS CCDs S1, S2, S3, S4, I2 and I3 were on during the observation. In order to avoid pile-up, the source was observed in Continuous Clocking (CC) mode (CC33 FAINT; time resolution 2.85 ms). The source was positioned on the back-illuminated ACIS-S3 CCD at the nominal target position. The data were reprocessed using the CIAO software (version 3.2). A detailed description of the analysis procedures, such as the extraction regions, corrections and filtering applied to the source events and spectra, can be found in Rea et al. (2005b).
In order to perform the timing analysis we corrected the event arrival times to the barycenter of the solar system (with the CIAO axbary tool) using the provided ephemeris. For the timing analysis we used only the events in the 0.3-8 keV energy range and the standard Xronos tools (version 5.19). One fundamental peak plus one harmonic were present in the power spectrum. A period of 11.00223 ± 0.00005 s was detected, referred to MJD 53189. The pulse profile has not changed with respect to the previous detection and the 0.3-8 keV PF is 35.4 ± 0.6% (see Fig. 2, right panel).
Since the CC mode is not yet spectrally calibrated, the TE mode response matrices (rmf) and ancillary files (arf) are generally used for the spectral analysis (see Rea et al. 2005b for a detailed description of the matrices extraction).
We fixed the absorption at N_H,XMM = (1.36 ± 0.04) × 10^22 cm^−2 during the spectral fitting, because of the low statistics below 1 keV and especially because of well-known calibration issues at 1-2 keV, as previously reported for other CC mode observations (Jonker et al. 2003; Rea et al. 2005b). In fact, to avoid any CC mode calibration problem, all the fits were performed removing the data in the 0.9-2 keV range.
Also for this observation the best-fitting model was the absorbed power law plus a blackbody. The blackbody temperature does not change much with respect to the XMM-Newton detection. The blackbody radius, 3.6 ± 0.4 km, is however smaller, and the decrease appears to be significant at or above the 3σ level. On the other hand, the power law contribution in this Chandra observation was 80 ± 2%, still consistent with the XMM-Newton observation of the previous year. However, also in this case the photon index is becoming harder and the flux is increasing toward what Swift measured a year later (see Sect. 2). The spectral results are reported in Table 2 and Fig. 2 (left panel).

[Figure caption fragment: see Table 2 for further details on the spectral parameters. Right panel: folded light curves in three energy bands, from top: 0.2-2 keV, 2-4 keV and 4-10 keV.]
Discussion
In this paper we present a new Chandra observation of the AXP 1RXS J170849.0-400910 and the first Swift observations of this source performed as a part of the XRT calibration programme.
We performed spectral and timing analysis of the data. The measured periods, even though affected by relatively large errors (see § 2.1), allow us to confirm that the source is still in a phase of steady spin-down following the last glitch.
The spectral analysis reveals that the source is undergoing significant spectral changes. Interestingly, the trend monitored following the glitch epochs (Kaspi, Lackey & Chakrabarty 2000; Dall'Osso et al. 2003; Kaspi & Gavriil 2003) and until the last XMM-Newton observation has now reversed (see Fig. 3). In particular, the source spectrum became much harder and the total unabsorbed flux in the 0.5-10 keV energy band became a fraction ∼ 50% higher with respect to that measured by XMM-Newton (Rea et al. 2005). Moreover, our analysis indicates that the flux increase is mainly due to an increase in the contribution of the thermal component, while the power law contribution to the total flux slightly decreased (71 ± 3%, while the XMM-Newton measurement was 82 ± 1%).
In Rea et al. (2005) it has been proposed that the observed correlation between the X-ray flux and the spectral hardness may be explained within the "twisted magnetosphere" scenario (Thompson, Lyutikov & Kulkarni 2002; Beloborodov & Thompson 2006). The basic idea is that when a static twist is implanted, currents flow into the magnetosphere. As the twist angle Δφ_NS grows, charge carriers (electrons and ions) provide an increasing optical depth to resonant cyclotron scattering and hence a flatter power law. At the same time, the larger returning currents heat the star surface, producing more thermal photons. Observations collected until 2003 were consistent with a scenario in which the twist angle was steadily increasing before the glitch epochs, culminating with the glitches and a period of increased timing noise, and then decreasing, leading to a smaller flux and a softer spectrum. Both the Chandra and the Swift observations caught the source in a (relatively) hard, luminous state, revealing a reversed trend. However, the hardening-flux correlation is maintained, lending further support to this scenario.
What is particularly interesting, and measured here for the first time, is that since the last XMM-Newton observation the fraction of the total flux in the power-law component slightly decreased although the source spectrum became harder. This is somehow counter-intuitive. If taken at face value, it may be explained by the fact that, in the twisted magnetosphere model, both the spatial distribution of the magnetospheric currents (which act as the "scattering medium") and the surface emission induced by the returning currents (which acts as the source of seed photons for the resonant scattering) are substantially anisotropic. Seed thermal photons and scatterers are confined in two different limited ranges of magnetic colatitudes, and both distributions move away from the poles for larger twist angles, although at a different rate. For instance, by using the expressions provided by Thompson, Lyutikov & Kulkarni (2002) for the differential luminosity induced by the returning currents, we can estimate that the center of the heated surface region moves from ∼ 37° to ∼ 63° in colatitude when Δφ_NS increases from ∼ 0.1 to 2 radians. Correspondingly, the peak of the efficiency of the scattering only shifts from ∼ 66° to ∼ 72° in colatitude. The size of the region affected by the scattering decreases to ∼ 37%, while the thermally emitting region becomes ∼ 5% larger. Although the model is quite approximate, and the above numbers should be treated with care, this strong anisotropy suggests that the observed drop in the non-thermal flux may be due to the fact that a lower fraction of soft photons is intercepted by the cloud of scattering particles surrounding the star. Clearly, since the scattering depth increases with Δφ_NS, the power law will in any case be flatter.

[Figure caption fragment: see Table 2 for further details on the spectral parameters. Right panel: Chandra pulse profile in the 0.3-8.0 keV energy band.]

[Figure caption fragment: (Rea et al. 2005). All reported fluxes are unabsorbed and in the 0.5-10 keV energy range. For clarity, the observation dates are: ROSAT - 1994; ASCA - 1996; first BeppoSAX - 1997; second BeppoSAX - 2001; Chandra-HETG - 2002; XMM-Newton - 2003; Chandra-CC - 2004; Swift - 2005.]
Our preliminary quantitative estimates show, however, that the increase in size of the thermally emitting region is not sufficient to account for the observed variation in the blackbody radius, at least on the basis of the original model by Thompson, Lyutikov & Kulkarni (2002). We only note in this respect that viewing geometry effects may be important, since the expected change in the position of the heated surface region may result in a larger portion of the emitting area coming into view.
Finally, we might speculate that the long-term variations shown in Fig. 3 may have a cyclic behavior with a recurrence time of ≈ 5-10 yr. A possible explanation within the magnetar scenario might be the periodic twisting/untwisting of the star magnetosphere, where the characteristic dissipation time of a static twist is in fact ≈ 1-10 yr according to the more recent estimates (Beloborodov & Thompson 2006). A detailed study of this intensity-hardness correlation, through further X-ray monitoring of this source, is needed in order to better constrain the model, and to infer information on the physical conditions in the star magnetosphere. Note that with a detailed modeling of this correlation we would be able in the near future to predict the occurrence of glitches and possibly also of bursts.
COVID-19 Risk in Youth Club Sports: A nationwide sample representing over 200,000 Athletes
Context: The COVID-19 pandemic has affected almost every aspect of life including youth sports. Little data exists on COVID-19 incidences and risk mitigation strategies in youth club sports.
Objective: To determine the reported incidence of COVID-19 cases among youth club sport athletes and the information sources used to develop COVID-19 risk mitigation procedures.
Design: Cross-sectional study.
Setting: Online surveys.
Patients: Soccer and volleyball youth club directors.
Intervention: A survey was completed by directors of youth volleyball and soccer clubs across the country in October 2020. Surveys included self-reported date of re-initiation, number of players, player COVID-19 cases, sources of infection, COVID-19 mitigation strategies, and information sources for the development of COVID-19 mitigation strategies. Reported incidence rates were compared between sports using a negative binomial model with log(player-days) as an offset. Estimates were exponentiated to yield a reported incidence rate ratio (IRR) with Wald confidence intervals.
Results: A total of 205,136 athletes (soccer=165,580; volleyball=39,556) were represented by 437 clubs (soccer=159; volleyball=278). Club organizers reported 673 COVID-19 cases (soccer=322; volleyball=351), for a reported incidence rate of 2.8 cases per 100,000 player-days (soccer=1.7, volleyball=7.9). Volleyball had a significantly higher reported COVID-19 incidence rate compared to soccer (reported IRR = 3.06 [2.0-4.6], p<0.001). Out of 11 possible mitigation strategies, the median number of strategies used by all clubs was 7 with an interquartile range of 2.
Conclusions: The incidence of self-reported cases of COVID-19 was lower in soccer clubs than volleyball clubs. Most clubs report using many COVID-19 mitigation strategies to reduce the risk of COVID-19.
In March 2020, the novel SARS-CoV-2 (COVID-19) was declared a global pandemic and much of the United States issued stay-at-home orders to help prevent the spread of the disease. These orders effectively halted most aspects of everyday life and restricted most individuals to the confines of their own home. Though these measures were critical in slowing the spread of COVID-19 and have been effective at protecting the health care system,1-3 the short- and long-term effects of these orders on mental health and well-being among youth athletes represent a growing concern. With the cessation of most public activities, youth sports were nearly universally discontinued in the spring of 2020 and their reinstatement has been mixed throughout the country. Youth sports offer a myriad of social, physical, and mental health benefits for adolescents.4,5 Prior research has demonstrated an increase in symptoms of depression and anxiety in adolescents related to quarantines and lockdown orders,6,7 and it has been suggested that youth athletes may have been particularly negatively affected by the cancelation of school and sports.8
Physical activity levels in youth athletes have also been affected by the cancelation of youth sports. Recent studies have demonstrated a decrease in physical activity among youth during the pandemic.9,10 The cancellation of youth sports may accelerate the decrease in sport participation and physical activity, which has been observed prior to the current pandemic to decrease as children age.11,12 This may have significant long-term consequences, as youth sport activity is a predictor of health and physical activity into adulthood.13-15 Organized sport has been shown to be a major component in reduced childhood obesity rates,16,17 which has persisted in the United States and has increased over the years. Furthermore, with the COVID-19 pandemic physical activity rates have decreased dramatically in adolescents and especially youth from areas of low socioeconomic status.18 These findings add clarity to projection models that suggest childhood obesity in the United States may increase disproportionately among non-Hispanic black and Hispanic children.19 Therefore, there is serious need to balance the risks and benefits of youth sports during the COVID-19 pandemic as it pertains to the short- and long-term health of youth in the United States.
Though COVID-19 appears to result in less severe disease and lower overall mortality rates among younger populations,20 it is unknown how participation in sports results in transmission between participants, though new evidence appears to suggest that youth sports is not a large contributor to [...].22 Despite media reports to the contrary, early evidence from adult professional athletes and preprint publications from club and high school sports appears to suggest that COVID-19 transmission between athletes is relatively rare.23 [...] about the re-initiation and continuation of youth sports. Therefore, the purpose of this study was to determine the incidence of reported COVID-19 cases among youth club sport athletes, to describe the reported sources of infection for reported cases, and to describe the information sources used to develop COVID-19 risk mitigation procedures.
This study was approved by the Institutional Review Board of **blinded**. The overall study design was cross-sectional, utilizing an online survey. The survey was given to US Youth Soccer and the National Volleyball Association, who subsequently passed it on to member organizations, leagues, and other stakeholders within youth soccer and volleyball at their discretion. Surveys were explicitly intended for the director of the recipient club and asked for responses on behalf of the entire organization. Sport club directors are generally the administrators for their youth sport clubs. The survey was distributed on October 1st, 2020 and responses were accepted until November 3rd, 2020. Clubs were excluded from the study analysis if they had not restarted sports at the time they completed the survey. The survey included demographics that outlined the name of the club, the zip code of the club's primary facility, the state the club was located in, and the sport offered by the club. Each director was asked if their club had restarted playing sports since COVID-19 restrictions began in their area. If the director answered that their organization had restarted sports, they were asked to provide the date that sport activity resumed, how many athletes participated in the club during that time, whether they had formal procedures for COVID-19 risk reduction, and how many players had been diagnosed with, hospitalized for, or died from COVID-19 since the re-initiation. If known, respondents were asked to report the source of any infections in players (household member, school contact, community/social contact, club sport activity, other, or unknown). If the respondent endorsed having a plan regarding COVID-19 risk reduction, they were asked which procedures the organization had been implementing to reduce COVID-19 incidence and which information sources were used to develop the plan. A total of 11 defined procedures to mitigate the risk of COVID-19 were offered as choices, as well as 8 possible information sources (shown in Supplemental Table 1). These mitigation strategies and sources of [...] The number of player-days was determined as the product of the number of participating players and the duration of participation. The COVID-19 incidence rate was expressed as the number of reported cases per 100,000 player-days (reported cases / total number of player-days × 100,000) separately for both volleyball and soccer. Additionally, based on the median duration of participation of 108 days for reporting clubs, the number of cases, total population, case rate, and incidence rate for US children for the 15 weeks prior to survey closure (7/23/20 to 11/5/20) were determined using data from the American Academy of Pediatrics (AAP).29 Similarly, total cases, total population, case rate, and incidence rate were identified for the prior 15 weeks among the general population for each of the states where clubs were located. To compare incidence rates between soccer and volleyball, a negative binomial model was developed to predict player cases with sport and state incidence as covariates and log(player-days) as an offset. Estimates were exponentiated to yield an incidence rate ratio (IRR) with Wald confidence intervals. A chi-square analysis was used to compare the proportion of reported known sources of COVID-19 cases among players between soccer and volleyball clubs.
The proportions of soccer and volleyball clubs that endorsed each risk mitigation procedure and each information source were compared using chi-square tests. For significant chi-square tests, the standardized residuals for each cell were calculated to determine which cells were the largest contributors to the chi-square analysis, and a standardized residual greater than 2 or less than -2 was considered a significant contributor.30 Statistical significance was set a priori at p < 0.05 and all analyses were performed using R (R Foundation for Statistical Computing, Vienna, Austria).
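As an illustration of the incidence-rate comparison described above, the sketch below fits a negative binomial model with log(player-days) as an offset and exponentiates the sport coefficient to obtain an IRR with a Wald confidence interval. The analysis in the paper was done in R; this Python version is only a rough analogue, the club-level data are invented, and the dispersion parameter is fixed rather than estimated as it would be in a full analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical club-level data: reported player cases, player-days at risk,
# sport indicator (1 = volleyball, 0 = soccer), and state background incidence.
clubs = pd.DataFrame({
    "cases":       [0, 2, 1, 0, 5, 3, 0, 1],
    "player_days": [54000, 21600, 37800, 64800, 10800, 16200, 43200, 27000],
    "volleyball":  [0, 0, 0, 0, 1, 1, 1, 1],
    "state_incid": [18.0, 25.0, 12.0, 30.0, 22.0, 15.0, 28.0, 19.0],
})

X = sm.add_constant(clubs[["volleyball", "state_incid"]])
model = sm.GLM(clubs["cases"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),  # dispersion fixed here
               offset=np.log(clubs["player_days"]))
fit = model.fit()

# Exponentiated coefficient gives the incidence rate ratio; Wald CI likewise.
print(np.exp(fit.params["volleyball"]), np.exp(fit.conf_int().loc["volleyball"]).values)
```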
The distribution of soccer and volleyball clubs that responded to the survey from various [...] (Table 4).
The overall reported incidence rate of COVID-19 among all youth club athletes during the summer and fall of 2020 was comparable to the incidence reported among children in the United States during a similar timeframe. In addition, most cases were attributed to contacts outside of sport, with only a small number reportedly due to transmission during sport activities.
This seems to agree with previous research that found that COVID-19 incidence rates reported by high school athletic directors were highly correlated with local, background COVID-19 incidence rates.24 It also appears to agree with the growing body of evidence that seems to suggest that COVID-19 cases among athletes are predominantly attributed to community and social contacts rather than transmission during sports.21,22 While caution is needed in making inferential comparisons between data aggregated from state health authorities and data collected through self-reporting from youth sports organizations, nationwide pediatric data during a similar time frame may nonetheless offer context regarding the overall COVID-19 case rate for children during the time when respondent clubs were participating in sports.
The adjusted incidence rate reported by soccer clubs was found to be about 67% lower than the adjusted incidence rate reported by volleyball clubs. These results agree with previous research that found that high school outdoor sports had lower incidences of COVID-19 than high school indoor sports.24 This is further supported by the result that volleyball clubs were more likely to attribute COVID-19 cases to club sporting activities than soccer clubs (Table 3). [...] We attempted to account for this by including the COVID-19 incidence rate for the state of each respondent club as a covariate in the models used to compare sports.
It is important to note that several other explanations may underlie the observed difference in reported rates between soccer and volleyball.
Anecdotally, volleyball is a highly communicative sport between teammates. Loud, vocal communication may increase the likelihood of spreading COVID-19 from participant to participant, regardless of an outdoor or indoor setting. This communication also happens within a smaller space than a soccer field. This may mean that volleyball athletes spend a greater amount of time within 6 feet of one another during practice and competition than soccer athletes.
It is also possible that volleyball athletes represent an older population of athletes, which may predispose them to a higher incidence rate relative to younger soccer athletes. These are all speculative rationales for our results and suggest that further research is needed for each individual sport on the risk associated with contracting the SARS-CoV-2 virus. Every respondent club reported the development and use of a formal plan to mitigate the risk of COVID-19, and most clubs reported using a large number of risk reduction procedures. The most common practices were symptom monitoring, facemask use, and increased facility disinfection.
Volleyball clubs were more likely than soccer clubs to use face masks during play, increase facility disinfection, and check player and coach temperatures on site; whereas soccer clubs were more likely to have players and staff check temperatures at home, and to implement face mask use for players off the field, face mask use for staff, social distancing for players and staff off the field, and staggered arrival and departure times. Some of these differences may be due to [...]

This study has several limitations. The information provided was self-reported by soccer and volleyball club directors and cannot be verified through medical records or other sources.
The self-reporting nature of the survey may also introduce recall bias, as directors were asked to remember how many cases they had had since their restart, which may have been months prior to completing their survey. Nonetheless, we do not have reason to believe that a systematic bias exists between sports with respect to self-reporting that would account for differences between volleyball and soccer. Our method of survey distribution may have introduced sampling bias into our results, and we cannot account for the total number of organizations that received our survey, only the number that ultimately completed it. Additionally, our number of player-days assumes that all players participated at every practice and game between the start date and the end of the survey. As mentioned above, caution is needed in comparing our data with data reported by the AAP for nationwide pediatric COVID-19 incidence, but we have provided this to contextualize our findings. Furthermore, we did not use a validated survey; however, as COVID-19 is a new and rapidly growing concern, we feel confident that the developed survey asked the necessary questions to answer the current research questions. Clubs reported COVID-19 cases over different timelines based on re-initiation and survey completion dates, and came from areas with varying background COVID-19 incidence, both of which may have impacted the reported incidence rates. Nonetheless, when comparing club soccer and volleyball incidences, we tried to [...]

In this survey-based study, reported COVID-19 incidence rates among youth club sport athletes were comparable to those reported for US children during a similar timeframe. After adjusting for background state incidence rates, soccer clubs reported a lower COVID-19 incidence rate than volleyball clubs. This may be due to the indoor nature of the sport, which is in line with previous research on indoor and outdoor sporting activities and the risk of COVID-19; however, additional factors may add to this difference, such as the difference in state locations between soccer and volleyball clubs in this study. Although both soccer and volleyball clubs reported that only a small percentage of COVID-19 cases were attributable to sport participation, this was more likely in volleyball than soccer. All clubs reported having a formal plan regarding COVID-19 mitigation and most clubs reported using a large number of risk reduction procedures.
Differences in incidence rates, reported infection sources, and the procedures utilized may be due to the differences between indoor and outdoor sport participation and may be due to the nature of [...]
Study Overview
Sports have tremendous health benefits for children, but it remains unclear whether club sport participation, with risk reduction procedures in place, increases the risk of children contracting COVID-19. This study is being conducted through the Department of Orthopedics and Rehabilitation at the University of Wisconsin School of Medicine and Public Health to better define the risks associated with COVID-19 among youth athletes and aid local decision-making regarding the continuation of youth sports.
Please respond regarding your youth sport organization as a whole. Thank you for your participation!

What is the name of your club?

What is the zip code for your club's primary facility?

What state is your club's primary facility located in?

What sport is offered by your club?

If you selected "other" for the sport offered by your club, please type in the sport your club offers.
Variation of muscle stiffness with force at increasing speeds of shortening.
Single frog skeletal muscle fibers were attached to a servo motor and force transducer by knotting the tendons to pieces of wire at the fiber insertions. Small amplitude, high frequency sinusoidal length changes were then applied during tetani while fibers contracted both isometrically and isotonically at various constant velocities. The amplitude of the resulting force oscillation provides a relative measure of muscle stiffness. It is shown from an analysis of the transient force responses observed after sudden changes in muscle length, applied both at full and reduced overlap and during the rising phase of short tetani, that these responses can be explained on the basis of varying numbers of cross bridges attached at the time of the length step. Therefore, the stiffness measured by the high frequency length oscillation method is taken to be directly proportional to the number of cross bridges attached to thin filament sites. It is found that muscle stiffness measured in this way falls with increasing shortening velocity, but not as rapidly as the force. The results suggest that at the maximum velocity of shortening, when the external force is zero, muscle stiffness is still substantial. The findings are interpreted in terms of a specific model for muscle contraction in which the maximum velocity of shortening under zero external load arises when a force balance is attained between attached cross bridges. Some interpretations of these results are also discussed.
INTRODUCTION
It is now believed that the maximum steady tetanic force a skeletal striated muscle can generate at or beyond its optimum length depends on the number of thick filament cross bridges attached to thin filament sites (Gordon et al., 1966). One of the most important questions remaining regarding the nature of the contractile mechanism concerns the number of attached cross bridges present while a muscle shortens under various loads in the optimum or plateau region of the length-tension diagram (Gordon et al., 1966), where the number of cross bridges available for interaction is constant. There is some evidence (A. F. Huxley, 1971) indicating that at high speeds of shortening the number of attached cross bridges, or equivalently the muscle stiffness, since the cross bridges act in parallel (Gordon et al., 1966), decreases, but this conclusion needs to be supported by further work. However, if, as in the classical view (Hill, 1938), muscle can be modeled by a contractile component pulling against a series elastic component (SEC) whose stiffness varies with its extension, then measuring the total muscle stiffness, i.e., the stiffness of the series combination of contractile component and spring, may not provide information about the stiffness of the contractile component. If the stiffness of the series spring is less than that of the contractile component, the spring stiffness will dominate the measurement. Therefore, it must first be determined whether a significant SEC is present before meaningful stiffness measurements can be made. To do this in the work reported here, experiments of the type described by were done in which very rapid length changes were applied to active muscle and the transient force responses recorded. The length changes were applied both at full and reduced overlap and during the rising phase of short tetani. An analysis shows that the transient force responses can be simply explained on the basis of varying numbers of cross bridges attached at the time of the length step. This suggests that a significant SEC is probably not present in the preparation used in this study. It is further shown that muscle stiffness, measured by the use of small amplitude, high frequency length oscillations, decreases with decreasing force as the speed of steady shortening increases, and this can be interpreted to indicate a decreasing number of cross bridges attached. A short description of this work has already been given (Julian and Sollins, 1974).

METHODS

Single muscle fibers were dissected from the anterior tibial muscles of the frog Rana pipiens and were securely attached to a servo apparatus by tying the stout tendons these fibers have to pieces of wire with 10-0 nylon suture at the fiber insertions. The composition of the Ringer's solution was (in mM): NaCl, 115; KCl, 2.5; CaCl2, 1.8; Na2HPO4, 2.15; NaH2PO4, 0.85. Its temperature was kept at 0°C. The general apparatus and procedures have already been described in detail (Julian, 1971; Julian and Sollins, 1973). Significant improvements were made in the force transducer and servo system so that rapid length changes could be applied and the resulting force responses recorded.
The first series of results, shown in Fig. 1, was obtained using the servo motor described in the second of the two references just cited. The length steps were completed in approximately 1 ms, and the resonant frequency of the transducer, which was improved by making the deflecting capacitor plate and stylus lighter and stiffer, was about 1,000 Hz in air. This apparatus was also used in the experiments involving length oscillations during steady shortening, shown in Fig. 5. In these experiments (as well as in the length step experiments) the servo system was controlling muscle length (as opposed to force) at all times. A high frequency (500 or 1,000 Hz), small amplitude (4 μm, or about 5.3 Å/half-sarcomere, peak-to-peak) sine wave was used as the command signal in the length control loop of the servo system at all times. While the fiber was maintaining a steady force under tetanic stimulation, additional command signals consisting of a step decrease in length immediately followed by a constant velocity length decrease were superimposed on the sine wave oscillation. The oscillations in force and length were observed while the fiber was developing steady tetanic force and no distortion or asymmetry in the sinusoidal waveforms was detected. The overall widths of the fast force traces used in calculating relative stiffness were corrected by subtracting the inherent width of the force trace in other records (not shown) in which the length oscillation was absent. The noise in the fast force trace was reduced by using a low pass filter with a 3-dB down frequency of about 5 kHz.
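As a modern analogue of the trace-width measurement described above (not the authors' chart-record procedure), stiffness relative to the isometric state can be estimated by demodulating the force signal at the drive frequency, since the length oscillation amplitude is held constant; the sketch below assumes digitized force records and a known drive frequency.

```python
import numpy as np

def oscillation_amplitude(force, t, f_drive):
    """Amplitude of the force component at the drive frequency (digital lock-in)."""
    ref_sin = np.sin(2 * np.pi * f_drive * t)
    ref_cos = np.cos(2 * np.pi * f_drive * t)
    # In-phase and quadrature components; the factor 2 recovers the peak amplitude.
    i_comp = 2 * np.mean(force * ref_sin)
    q_comp = 2 * np.mean(force * ref_cos)
    return np.hypot(i_comp, q_comp)

def relative_stiffness(force_shortening, force_isometric, t, f_drive=500.0):
    """Stiffness during shortening relative to the isometric state, for a
    constant-amplitude length oscillation applied throughout."""
    a_short = oscillation_amplitude(force_shortening, t, f_drive)
    a_iso = oscillation_amplitude(force_isometric, t, f_drive)
    return a_short / a_iso
```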
The second series of length steps, shown in Fig. 3, was obtained after making additional improvements in the performance of the servo system. The servo motor was replaced by one having a much lower moment of inertia (General Scanning, Inc., Watertown, Mass., model G-108) which was capable of greater acceleration. With this motor it was possible to make length steps in approximately 0.4 ms. The transducer resonant frequency was increased to approximately 2,000 Hz by further stiffening the deflecting plate of the capacitor and the stylus without producing an appreciable loss of sensitivity.
It is important that the results obtained in this work do not depend in a significant way on the resonant frequency characteristics of the force transducers used. In the case of the small amplitude, high frequency length oscillations, similar results were obtained using frequencies near and below the resonant frequency of the transducer. Only relative force values were used at any one frequency of length oscillation, so that constancy of force transducer output as a function of frequency of applied oscillation is not necessary. In the case of recording the T1 and T2 force responses after sudden length changes, the force transducer resonant frequency component is small compared to the total response. When the force transducers used in this work were deflected in air and then suddenly released, the output voltage response consisted of an immediate large change in DC level followed by a small amplitude, rapidly decaying oscillation at the resonant frequency. The force transducer with the approximately 1,000-Hz resonant frequency was further tested by firmly connecting the transducer wire directly to the servo motor wire. Very small, constant amplitude length oscillations of various frequencies from near zero to about 2,000 Hz were then applied and the force transducer output recorded. The results suggest the presence of a small resonant peak near 1,000 Hz, which is consistent with the transducer response to sudden unloading just mentioned. Since the rise time of the transducer with the lower resonant frequency should have been less than 1 ms, a significant limitation was the 1-ms time interval required to complete the length change. In addition, the initial drop in force in phase with the applied length change is not instantaneous while the recovery from T1 to T2 is very rapid. This suggests that the in-phase drop in force would have been larger than was recorded. The main effects of these limitations would be to indicate a decreased muscle stiffness, i.e., a decreased slope of the curve passed through the T1 points. The fact that, for similarly sized large length decreases, the T1 points in Fig. 4 fall considerably below those shown in Fig. 2 suggests that the true T1 points would fall along a steeper and more nearly linear path than those shown in Fig. 4 if faster length changes and higher resonant frequency force transducers had been used.
RESULTS

Fig. 1 shows typical records from an experiment designed to test whether a significant amount of series elasticity is present in the preparation. As can be seen in the slow traces, the fiber is tetanized and held isometric until the force reaches a nearly steady level. Then the muscle length is suddenly altered by a small amount. The details of the length step and force response are shown in the upper traces, which are recorded at a fast sweep speed. The values of interest are measured from the records in the way shown in part F. Two additional series of length steps were performed, one while the force was in the rising phase of a tetanus, and the other in the steady phase of a tetanus after the fiber was stretched so that the steady tetanic force was decreased.
The force values T1 and T2, as defined in Fig. 1 F, are plotted against the size of the length step in Fig. 2 for the rising and steady phases of contractions at optimal overlap and the steady phase of contraction at reduced overlap. The method by which the points obtained under these different conditions are plotted is described in the legend. In the Discussion, the results shown in this plot will be used to show that T1 and T2 can be explained on the basis of varying numbers of cross bridges attached at the time of the length step.
As described in the Methods, the performance of the servo system was improved and this made it possible to complete the length changes in approximately 0.4 ms. Records showing the resulting length changes and force responses are presented in Fig. 3. The records are similar to those in Fig. 1 except that the slow traces have been omitted, and also some stretches are included. The T1 and T2 force values are plotted against the size of the length steps in Fig. 4. Also shown are the T1 responses of a model for muscle contraction (Julian et al., 1974) to instantaneous length steps and to slower length changes which took about 0.4 ms to complete. Fig. 5 presents records of the type of experiment used to measure stiffness at various shortening velocities. A tetanically stimulated single fiber was constrained to develop force isometrically until a nearly steady level was attained. Then the muscle length was suddenly reduced and subsequently made to decrease at a constant velocity. The force response, after a transient, maintains a constant level less than Po until the shortening is stopped, at which point the force redevelops to the isometric Po. The important point is that throughout the entire process the servo is applying a constant amplitude, high frequency sinusoidal length oscillation to the muscle. The amplitude of the length oscillation, about 4 μm peak-to-peak, is too small to be seen in the length traces. However, when the muscle fiber is stimulated and develops force, the oscillation is easily detected in the force records, where it appears as a widening of the traces. Since the length oscillation is maintained at a constant amplitude, the variation in width of the force traces can be taken as a measure of stiffness relative to the isometric state. It is apparent that during the phases of steady shortening the force traces are narrower than during the isometric phases. This indicates a decreased stiffness during shortening. In addition, the force trace becomes narrower as the velocity of shortening increases. In Fig. 6, stiffness, as measured by the length oscillation method shown in Fig. 5, is plotted against force. The data were obtained from three different fibers. The stiffness obtained from the slopes of the curves fitted to the T1 [...]

[Figure legend fragment: The recovery of the force after the release has two distinct phases, as shown in part F: a rapid recovery from T1 to T2 (best seen in the fast sweeps) followed by a slower rise in force back to the original Po. Note the dip in the force traces immediately after the quick recovery, as indicated in part D by the arrow. This feature of the response is also produced by our model (Julian et al., 1974), in which it results from detachment of cross bridges in the force-generating position. Part F shows how the size of the shortening step, ΔL, and the force levels, T1 and T2, were measured from the experimental records. The rapid recovery phase from T1 to T2 was retouched in the slow sweeps of parts C, D, and E.]

[Figure legend fragment: In all records, the force zero is the same as that indicated in part G. Note that parts A and B show length increases, while the rest are length decreases.]
DISCUSSION
The argument first presented by Huxley and Simmons (1971) will now be used to show how the transient force responses observed after sudden length changes can be simply explained on the basis of cross-bridge interactions. Shortening steps are applied at various muscle lengths corresponding to full and partial overlap of the thick and thin filaments. T1 and T2 (see Fig. 1 F for definitions) curves are obtained as shown in Fig. 2. If a force generator is connected to a SEC consisting of damped and undamped springs in series, then with partial overlap the generator would develop less steady force than that developed at full overlap. The extension of the SEC would be less than at full overlap, so that in order to drop the force to zero (T1 = 0) the shortening step would not need to be as large as at full overlap. The effect of reducing the overlap would be to shift the T1 and T2 curves toward the right, i.e., the curves fitted to the full overlap points could be made to fit the partial overlap points by adding a suitable constant to the length coordinate of each point on the full overlap curves. If, however, the only elasticity is in a series combination of damped and undamped springs residing in the cross bridges themselves, and the filaments, Z disks and fiber insertions are much stiffer, a negligible SEC will be present. Then, on decreasing overlap, each individual cross bridge attached to a thin filament develops the same force and stretches its own damped and undamped spring combination the same distance as it would at full overlap. In this case, the same size length step is required to reduce the force to zero regardless of the degree of overlap. The effect of reduced overlap on the T1 and T2 curves is to scale down the curves by a factor given by the ratio of partial overlap Po to full overlap Po, or, in other words, simple vertical scaling. The length axis intercepts are, of course, unchanged by this procedure. In Fig. 2, the vertical scaling procedure was used on the curves fitted to the full overlap T1 and T2 points (average sarcomere length, 2.2 μm). It can be seen that all the rising phase and reduced overlap points fall very near the scaled curves, in confirmation of results already presented by Huxley and Simmons (1973).
[Figure 5 legend: 0°C. Stimulus frequency: 16/s. A 500-Hz, 4-μm, or 5.3 Å/half-sarcomere amplitude (peak-to-peak) oscillation was applied to the fiber at all times (not visible in length records). With the fiber in the isometric steady state, a sudden decrease in fiber length was imposed to bring the force to the desired level, followed by a constant velocity length decrease to maintain the force at that level. After approximately 0.5 mm of shortening the fiber was again held isometric. Stiffness was obtained by measuring the width of the fast force trace in the way described in the Methods section; it was expressed relative to the width of the fast force trace in the isometric state just before shortening occurred.]
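The vertical-scaling versus horizontal-shifting test described above can be sketched numerically (Python; the cubic coefficients and the synthetic reduced-overlap points below are illustrative, not the experimental values):

```python
import numpy as np

# Full-overlap T1 curve: relative force vs. length step x (Å/half-sarcomere).
# A cubic of the kind fitted in the paper; coefficients are invented here.
c = [-2.9e-7, 2.2e-6, 0.0152, 1.0]            # highest power first
t1_full = np.poly1d(c)

# Hypothetical reduced-overlap data, generated here by vertical scaling
# (the cross-bridge prediction) plus noise, to show which test succeeds.
scale = 0.6                                    # partial-overlap Po / full-overlap Po
rng = np.random.default_rng(0)
x_obs = np.linspace(-50, 10, 13)
t1_obs = scale * t1_full(x_obs) + rng.normal(0, 0.01, x_obs.size)

# Test 1: vertical scaling of the full-overlap curve (SEC absent).
err_scale = np.sqrt(np.mean((scale * t1_full(x_obs) - t1_obs) ** 2))

# Test 2: horizontal shifting (external SEC) -- best shift over a grid.
shifts = np.linspace(-40, 40, 801)
errs = [np.sqrt(np.mean((t1_full(x_obs + s) - t1_obs) ** 2)) for s in shifts]
err_shift = min(errs)

print(f"RMS error, vertical scaling:  {err_scale:.4f}")
print(f"RMS error, horizontal shift:  {err_shift:.4f}")
```

Run on data generated by scaling, the scaling fit wins by a wide margin; data generated by shifting would reverse the outcome, which is exactly the discrimination the paper performs in Fig. 2.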
As indicated in Fig. 2, horizontal shifting of the full overlap T1 and T2 curves did not fit the reduced overlap and rising phase points as well. This is certainly obvious for T2 at all force levels, while for T1 the failure to fit by shifting becomes apparent only at the lower force levels. The good fit to the rising phase points obtained by vertically scaling the full overlap T1 and T2 curves implies, as pointed out by Huxley and Simmons (1973), that the rise of tension during an isometric tetanus corresponds directly to an increasing number of cross bridges attached to thin filaments. The preceding experiments have indicated that the cross bridges are most likely responsible for the T1 and T2 responses. It is of further interest to determine whether the T1 curve is a reflection of a nonlinear cross-bridge spring characteristic or whether there are factors distorting what would otherwise be a linear relation. The results shown in Figs. 2 and 4 suggest that recovery of force from the T1 to the T2 level during the length change may be mainly responsible for the T1 plot deviating from a straight line. As mentioned in the Methods section, the time taken by the servo system to complete the length changes was about 1 ms in Fig. 1, but only about 0.4 ms in Fig. 3. It is reasonable to suppose that this is the basis for the higher values for T1 in Fig. 2 as compared with Fig. 4 for equal sized length steps. In Fig. 4 the T1 points fall nearly along a straight line over the range of +30 to −30 Å/half-sarcomere. Presumably, if the length steps were made still faster, the straight line relation would hold out to larger releases, provided that a force transducer with a sufficiently high resonant frequency were used. A dependence of T1 on the speed of the applied length change would not be expected from a simple passive SEC. Results obtained from a model for muscle contraction (Julian et al., 1974) provide some insight into the way in which an apparent nonlinear cross-bridge characteristic could arise. The model assumes a linear cross-bridge spring characteristic, and, when the length changes are made instantaneously, the resulting T1 values trace out the straight line shown in Fig. 4. However, when the length changes are made to simulate those applied to the experimental preparation, i.e., constant speed length changes completed in 0.4 ms, the T1 values from the model deviate from the straight line as shown in Fig. 4 in a way very similar to that observed experimentally. The reason for this behavior in the model is that considerable change in cross-bridge configuration (and, therefore, force recovery) occurs during the slow length decreases. This effect becomes more pronounced at large length decreases, so that low force level T1 points deviate progressively away from the instantaneous linear relation.
[Figure 6 legend: In the model, the stiffness was obtained directly from the number of cross bridges attached during shortening; it was expressed relative to the number attached in the isometric steady state. The curve passed through the filled squares was drawn by eye. The dashed curves show the stiffness as measured by the slope of the T1 curves. The short-dash curve was obtained from the full overlap T1 curve in Fig. 2 and the long-dash curve from the T1 points in Fig. 4 in the following way. Cubic polynomials were fitted by the method of least squares to the plots of T1 force against size of length change. In the case of the short-dash curve, the equation is given in the legend for Fig. 2. For the long-dash curve, the T1 points shown in Fig. 4 were fitted by the equation Y = 1.008 + 0.0152X + 2.232 × 10^−6 X^2 − 2.881 × 10^−7 X^3. These equations were differentiated with respect to length to obtain the slope stiffness. This stiffness was then replotted in this figure as a function of force, with the values expressed relative to the stiffness value obtained where the length change was equal to zero.]
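The cubic-fit-and-differentiate procedure described in the Fig. 6 legend amounts to the following (Python; the T1 data points here are invented for illustration):

```python
import numpy as np

# Fit a cubic to T1 force vs. length-step data and differentiate it to get
# "slope stiffness" as a function of force, as in the Fig. 6 legend.
x = np.array([-60, -50, -40, -30, -20, -10, 0, 10, 20])   # Å/half-sarcomere
t1 = np.array([0.15, 0.27, 0.40, 0.54, 0.69, 0.85, 1.00, 1.15, 1.30])

coeffs = np.polyfit(x, t1, deg=3)          # least-squares cubic fit
p = np.poly1d(coeffs)                      # T1 force as a function of x
dp = p.deriv()                             # slope stiffness vs. length

# Express stiffness relative to its value at zero length change and
# tabulate it against force, as plotted in Fig. 6.
k0 = dp(0.0)
for xi in x:
    print(f"force {p(xi):.2f}  relative slope stiffness {dp(xi) / k0:.2f}")
```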
The T2 points in Figs. 2 and 4 are also different in that the points obtained using the faster steps and improved transducer fall slightly above those obtained using slower steps. In the range of length decreases down to about 80 Å/half-sarcomere, the behavior does not appear to be significantly different from that reported by Huxley and Simmons (1971 a, b) and Huxley (1974) as the speed of their steps was progressively increased. Beyond length decreases of about 80 Å/half-sarcomere, a range in which we have hardly any data, Huxley and Simmons' results show that the T2 points obtained using faster steps fall below slower step T2 points. The T2 responses of our model to length changes of various speeds have not yet been investigated. There is no apparent reason, however, for believing that these variations in T2 have serious consequences regarding the conclusions drawn in this work.
Evidence has been presented suggesting that no appreciable SEC is present and that the true cross-bridge force-length relation is more nearly linear than our results indicate. Therefore, measurement of total muscle stiffness can be taken simply as a measure of the relative number of cross bridges attached to thin filament sites. The plot of relative stiffness against relative force shown in Fig. 6 indicates that stiffness falls as the steady force decreases with increasing shortening speed, and it appears that as the force tends to zero (shortening velocity approaches Vmax) relative stiffness does not tend to zero. That is, even though the external force would be zero, a significant fraction of the number of cross bridges generating force in an isometric steady contraction would still be attached. One way in which this situation might arise can be seen in the model mentioned previously (Julian et al., 1974). Here it is assumed that cross bridges are capable of bearing a compressive force, i.e., a force tending to oppose shortening. In the isometric state none of the bridges experiences such forces. However, when shortening is allowed in the model, the configuration of some of the cross bridges changes to one that opposes shortening. In particular, at Vmax the total external force is zero because the force generated by the cross bridges aiding shortening is exactly balanced by that of the cross bridges opposing shortening. The model has a stiffness-force relation similar to that obtained experimentally, as shown by the curve presented in Fig. 6. It should be noted that this similarity occurs only as a consequence of fitting the model's response to data from other types of experiments.
The fall in relative stiffness as the steady force decreases with increasing shortening speed shown in Fig. 6 could be explained in other ways. One case which has been worked out in some detail by Podolsky and Nolan (1973) has a force generator which actually becomes stiffer, i.e., more cross bridges attached, as the speed of steady shortening increases. Clearly, our results are incompatible with this kind of model. However, Podolsky and Nolan proceed to connect their force generator to an external SEC having an exponential force-extension relation. This combination of force generator and SEC would lead to a decrease in total stiffness as the steady force decreases with increasing shortening speed. The result is still not compatible with our data since the Podolsky-Nolan combination would produce a stiffness-force characteristic passing through the origin in Fig. 6.
It could be proposed that a decrease in stiffness without any change in the number of attached cross bridges could be the result of force recovery from T1 to T2 occurring more rapidly during a length change the faster a muscle is shortening. If this were the case, then more force recovery would take place during the oscillations at 500 Hz as compared to those at 1,000 Hz. The force change for a given length change would be less using the 500-Hz oscillation, or, in other words, muscle stiffness would appear to be less using the lower frequency. However, such an effect is not evident in the data shown in Fig. 6, since both circles with a bar and open circles fall along the same path. The fact that similar results were obtained using both the 500- and 1,000-Hz oscillations suggests very little force recovery occurred during the very low amplitude length changes used. This means that the frequency of the oscillations was sufficiently high to give a valid indication of the number of cross bridges attached.
The decrease in stiffness shown in Fig. 6, where force was varied by shortening at constant velocity, may be contrasted with the results to be expected when steady tetanic force is varied by changing overlap. In this case, stiffness and force would decrease in strict proportion, so that stiffness would extrapolate to zero as the force approached zero. This relation was confirmed by measuring muscle stiffness using the length oscillation method at average sarcomere lengths of 2.2, 2.6, and 2.8 μm. It was found that the tetanic force and the stiffness did maintain the same ratio in these three cases. The difference in the stiffness-force relation depending on whether the force is varied by changing the overlap or by varying the speed of shortening could be explained in the following way. In the case of steady shortening, force would be decreased both by a decrease in the number of attached cross bridges together with a redistribution of the remaining attached cross bridges. In the case of overlap changes, force would vary only as a result of changes in the number of attached cross bridges.
Up to this point, the results have been explained, with one exception, using models in which the structures responsible for the T1 and T2 responses have been located either entirely within the force generator, i.e., in the cross bridges, or entirely in an external SEC. Vertical scaling and horizontal shifting have been used to decide which model is more appropriate. The analysis can become much more complex if it is assumed that a SEC is present and that its tension-extension relation is exponential down to the lowest force levels reached in this work. In this case, it is possible to obtain a perfect fit by vertical scaling and even a complete lack of fit by horizontal shifting, and this would lead to the conclusion that a SEC was absent. In turn, this could lead to erroneous conclusions regarding the number of attached cross bridges based on measurements of total muscle stiffness. However, our results indicate that, as faster length changes and improved force transducers are used, the measured T1 curve becomes increasingly steeper and more nearly linear. This means that in order to obtain an exponential SEC relation, an unusual instantaneous elasticity characteristic would have to be assumed for the force generator. This would include a region in the characteristic where it bends back toward the origin at low force levels. There does not seem to be any evidence available which would indicate the presence of such generator characteristics. It seems more reasonable to assume a nearly linear generator characteristic, and this would lead to a nonexponential SEC and failure of the fit by vertical scaling.
Another difficult situation arises if it is assumed that the SEC characteristic changes, becoming less stiff with decreasing overlap as a result of, e.g., decreasing lengths of filaments linked together by cross bridges. In particular, if the SEC changes in such a way that the reduced overlap SEC characteristic corresponds more nearly to a vertically scaled rather than a horizontally shifted full overlap curve, then it might be possible to explain our results obtained at different degrees of overlap without having to conclude that a significant SEC was absent. The reason for this is that T1 curves obtained at reduced overlap could be fitted by vertical scaling of the full overlap curve. It would then be concluded that a significant SEC was not present, but, in this case also, it would be impossible to draw any conclusions regarding the number of attached cross bridges from measurements of total muscle stiffness. As pointed out by Simmons and Jewell (1974), it is unlikely that the filament stiffness decreases to zero with decreasing overlap, as would be necessary to fit reduced overlap SEC characteristics by vertical scaling of the full overlap SEC curve. If the limiting value for filament stiffness reached at no overlap were comparable to generator stiffness at full overlap, the finding previously mentioned in the Discussion, that total muscle stiffness decreases in proportion to force with moderate decreases in overlap, would not be expected.

Even in the absence of a significant SEC, our results indicating a decrease in stiffness during shortening might be explained using a model in which the number of cross bridges attached at various speeds of shortening remains constant. This could be done by having the individual cross-bridge force-length characteristic become less stiff with decreasing force level. The small amplitude length oscillation method would then measure the stiffness, i.e., the slope, of a curve passed through the T1 points. Evidence against this view is presented in Fig. 6, where the slopes of the curves passed through the T1 points shown in Figs. 2 and 4 are plotted. In neither case is a good fit to the length oscillation data obtained. In addition, it seems clear that the stiffness of the T1 relation increases as the time taken to complete the length step is decreased and the force transducer is improved, while the stiffness measured using length oscillations does not vary much with frequency. It would be difficult to explain the variation in stiffness of T1 with the speed of the length step if the cross-bridge force-length characteristic varied only with force level. It must also be kept in mind that the T1 curve obtained using length steps applied in the isometric steady state cannot simply be assumed to apply during steady shortening. An indication of this has already been presented by Huxley and Simmons (1973). In their Fig. 14 b, it can be seen that the slope of the T1 curve obtained during steady shortening is less than the slope of the isometric T1 curve at the force level equal to the load applied during steady shortening. This also indicates a decrease in relative muscle stiffness with increasing speed of shortening. Huxley and Simmons' figure in addition shows that the decrease in relative stiffness occurring during shortening is less than the decrease in relative force, and our results agree with this finding.
It has been argued that the number of attached cross bridges mainly determines the stiffness properties of the contracting fibers used in this work. The question can be put as to whether this is a plausible conclusion in view of what is now known about the microstructure of skeletal muscle. It would appear that the A, I, and Z filaments are principally in tension during a contraction, while the attached cross bridges act more nearly in a transverse bending mode. This mode is inherently less stiff than the tension-compression mode. The cross bridges act in parallel, so that the total stiffness depends on the number attached, but, according to the X-ray results of Huxley and Brown (1967), at any given moment during a contraction only a small proportion of cross bridges appear to be attached to actin filaments. It is also commonly believed that the LMM-S2 and S2-S1 junctions of the myosin molecule are rather flexible (H. E. Huxley). It seems reasonable to conclude, therefore, that there is no apparent structural basis for rejecting the idea that the number of attached cross bridges dominates stiffness measurements.

White (1970) has shown that during rigor contractions the stiffness of muscle fibers is much higher than during calcium-activated contractions. Huxley and Brown (1967) interpret their X-ray diagrams to indicate that in rigor a much larger proportion of cross bridges are attached, so that White's findings can be explained on the basis of stiffness being determined by the number of cross bridges attached. Further X-ray studies of Elliott et al. (1967) and Huxley and Brown (1967) have indicated that no detectable changes occur in the spacings of the actin reflections from the thin filaments during contraction (within an experimental error of about 0.2%). Elliott et al. (1967) also find no detectable change in the myosin subunit repeat from the thick filaments during contraction, while Huxley and Brown (1967) report an increase of about 1%. It is not at all certain that the increase in the myosin period is the result of stretch caused by the isometric tension. An increase could be caused by activation-related changes in the thick filaments, or by a change of position of the myosin heads relative to the backbone of the thick filament during contraction (Huxley and Brown, 1967).
|
2014-10-01T00:00:00.000Z
|
1975-09-01T00:00:00.000
|
{
"year": 1975,
"sha1": "189739088cc05668309708b43f28448b704425f9",
"oa_license": "CCBYNCSA",
"oa_url": "http://jgp.rupress.org/content/66/3/287.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "189739088cc05668309708b43f28448b704425f9",
"s2fieldsofstudy": [
"Biology",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
}
|
267326837
|
pes2o/s2orc
|
v3-fos-license
|
Response of Sunflower Yield and Water Productivity to Saline Water Irrigation in the Coastal Zones of the Ganges Delta
Introduction
Coastal saline soils are a growing concern for food security [1]. The coastal zone of the Ganges Delta, where the livelihoods of 40 million people mostly depend on agriculture, is affected by varying degrees of soil and water salinity, increasing salt-water intrusion and scarcity of suitable irrigation water in the dry season [2]. In the dry (rabi) season, crop establishment can be limited by waterlogging during the optimum sowing period, while soils dry out and accumulate salt in the root zone later [1,2]. Consequently, about 0.7-0.8 million ha of agricultural land remains fallow that could be brought under cultivation [3,4]. In many countries, fresh water is relatively scarce, and saline water irrigation reduces crop yield. However, with appropriate management practices, crop production is possible in saline areas [5]. The scarcity of fresh water, drought and the accumulation of salts, combined with varying degrees of salinity, affect crop growth and limit the expansion of cropped area in the dry season in the coastal zones of Bangladesh. In this study, we tested the use of low- and medium-salinity water, rather than fresh water alone, for growing sunflower (Helianthus annuus) in the dry season [4,6-9].
Sunflower is an important crop that is also moderately tolerant to salinity [10]. In the Ganges coastal zones, where sunflower is a promising crop, there are limited volumes of fresh water, but more abundant volumes of low- to medium-salinity water stored in ponds and canals. The salinity gradually increases from low to medium levels at the later growth stages of crops [11]. The strategic use of fresh water combined with the use of saline water for irrigation is an opportunity to increase crop yields and profits [12]. Non-saline water can be mixed with saline water and applied in the field, while the two water sources can be used alternately or in sequence, leaving more saline water for later growth stages [13]. Other options for crop cultivation in the coastal zones include the use of salt-tolerant crop varieties and irrigation of the crops at the salt-sensitive growth stages with fresh water.
Sunflowers are most sensitive to saline irrigation water at the flowering stages [14]. At later growth stages, saline water (≤7 dS m−1) can be used to irrigate the plants because of their higher salinity tolerance [14,15]. Several studies have reported on the use of saline water for sunflowers, by conjunctive use of fresh and saline water [15-18], alternating use of fresh and saline water [19], cyclic use of fresh and saline water [20,21] and conjunctive use of surface water and groundwater resources [22]. Saline water can be used in coastal agriculture to irrigate crops and is being used in other countries, such as Israel, Iraq and Kuwait [23,24], to grow different crops. In the Ganges region, there are sources of saline water stored in ponds and canals, and management of these can maintain their water salinity at low to medium levels, creating opportunities for irrigation in coastal areas [2,15]. Several studies [12,25] suggest using fresh and saline water at different growth stages of crops where there is a scarcity of fresh water. However, these findings remain site specific in their application and lack an overall synthesis or overarching principles that could be applied to the coastal zone of the Ganges Delta. Therefore, this study was undertaken to understand how sunflower seed yield and water productivity respond to low- and medium-salinity water irrigation in the Ganges coastal zone.
Location of the Study Sites, Weather and Soil Characteristics
The study was carried out at Dacope, Khulna (latitude 22°34′53″ N, longitude 89°27′44″ E) and Amtali, Barguna (latitude 22°07′45.8″ N, longitude 90°13′44″ E), both located on the Ganges Tidal Floodplain. The mean maximum and minimum air temperature, pan evaporation and precipitation during the crop growing seasons of 2016-2017 and 2017-2018 at the experiment sites are presented in Figure 1. The soil texture is clay loam (Table 1). Before crop sowing, soil samples were randomly collected in 15 cm increments to 60 cm within the experimental plots to determine the soil's physical properties. Soil organic carbon, for determination of organic matter, was estimated by the wet oxidation method [26]. The soil physical properties were determined at the Soil Science Laboratory of Bangladesh Agricultural Research Institute (BARI), Gazipur (Table 1).
Figure 1. Mean maximum (Tmax) and minimum (Tmin) air temperature (T, °C), pan evaporation (EV, mm) and rainfall (Pe, mm) during the crop growing seasons of 2016-2017 and 2017-2018 at the experiment sites of the salt-affected areas of Dacope (A) and Amtali (B), respectively.
Experimental Design and Treatments
The field experiments were laid out in a randomized complete block design with six treatments and three replications. The treatments were:
T1: two irrigations, at the early vegetative (25-30 days after sowing, DAS) and flowering (60-65 DAS) stages, with low-salinity water (LSW; electrical conductivity ECw < 2 dS/m);
T2: two irrigations, one at the vegetative stage with LSW and one at the flowering stage with medium-salinity water (MSW; 2 < ECw < 5 dS/m);
T3: two irrigations, one at the vegetative stage with LSW and one at the seed development stage (75-80 DAS) with MSW;
T4: three irrigations, at the vegetative, flowering and seed development stages, with LSW;
T5: three irrigations, at the vegetative stage with LSW and at the flowering and seed development stages with MSW;
T6: three irrigations, at the vegetative and flowering stages with LSW and at the seed development stage with MSW.
Crop Management
The crop management practices recommended by BARI were followed for sunflower growing. Recommended fertilizer doses (N 129, P 32, K 60, S 21, Mg 6, Zn 2, B 1.6 kg ha−1) for sunflower were applied in the form of urea, triple superphosphate, potassium chloride, gypsum, zinc sulfate and borax, respectively [27]. The unit plot size was 7.2 × 4 m. Sunflower (Hysun-33) was sown on 15 and 24 December in 2016 and on 17 and 22 December in 2017 at Dacope and Amtali, respectively, with a row-to-row distance of 60 cm and plant-to-plant spacing of 40 cm. The seed was sown into untilled (no tillage) wet soil by the dibbling method [28], with sub-surface placement of banded fertilizers. Half of the nitrogen and potassium and all of the phosphorus, sulfur, zinc and boron were applied as basal doses. Basal doses of the recommended fertilizers were mixed and placed manually into the soil uniformly. The remaining nitrogen and potassium were applied (before the flower initiation stage) and covered by soil, followed by irrigation. No significant pest or disease infestation was observed in the experimental plots. Sunflower was harvested on 6 and 14 April 2017 and on 12 and 13 April 2018 at Dacope and Amtali, respectively.
Measuring Soil Water Content, Soil Electrical Conductivity and Solute Potential
Soil water content, soil salinity and the solute potential of the soil solution at different growth stages were determined for each treatment. Soils were sampled from 0-60 cm depth in 15 cm increments. Gravimetric soil water content (SWC) was determined: the soil samples were subsampled, mixed together, weighed, dried at 105 °C for 48 h and reweighed. The electrical conductivity (EC) in a 1:5 soil:water suspension extract (EC1:5) was determined and converted to the EC of a saturated extract (ECe, dS m−1) using Equation (1) [5,29,30].
where ECe is the soil solution salinity (dS m−1), cf is the conversion factor (8.6 for clay and clay loam soils [29,30]), EC1:5 is the electrical conductivity (dS m−1) of the 1:5 soil:water extract and SWC is the gravimetric soil water content (%, weight basis). EC1:5 was determined using a portable conductivity meter (Tri-meter model pH/EC and TEMP-983) that can be inserted directly into the 1:5 soil solution. The solute potential of the soil solution was calculated by Equation (2) [31,32].
where φo is the solute (osmotic) potential of the soil solution (kPa).
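Equation (2) itself did not survive extraction. As a minimal sketch, one can assume the widely used approximation of about −36 kPa per dS m−1 of soil-solution EC (USDA Handbook 60); the paper's exact Equation (2) evidently also folds in a soil-water-content correction, since solute potentials below −700 kPa are reported later, so this shows only the core of the conversion:

```python
def solute_potential_kpa(ec_solution_ds_m: float) -> float:
    """Solute (osmotic) potential in kPa from soil-solution EC.

    Uses the common approximation of about -36 kPa per dS/m of EC
    (USDA Handbook 60). The paper's own Equation (2) is not reproduced
    in this extraction and likely also corrects for soil water content.
    """
    return -36.0 * ec_solution_ds_m

# A soil solution concentrated to ~20 dS/m as the profile dries would give
# a solute potential near the -700 kPa threshold discussed in the paper.
print(solute_potential_kpa(20.0))   # -720.0
```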
Irrigation Water Salinity
The water salinity (ECw) of the pond (low salinity) and bunded canal (medium salinity) irrigation sources at both locations was monitored during the crop-growing seasons. The average of three measuring points in the pond and the bunded canal was used to measure the water salinity at 10-day intervals at each site. Mean values of the irrigation water salinity (ECw) during the crop growing seasons (2016-2017 and 2017-2018) at the two locations are shown in Figure 2A,B and Figure 2C,D, respectively. The irrigation water salinity (ECw) of the pond ranged from 0.5 (December) to <2 dS m−1 (March-April), and the canal water salinity ranged from 0.7 to ≤5 dS m−1 over the two years and locations (Figure 2A-D). The classification of irrigation water salinity as low or medium was based on the classifications of Rhodes et al. (1992), Mila et al. (2021), USSLS (1994), Reddi and Reddy (1995), Michael (1978) and Majumdar (2004) [5,11,33-36]. In this study, low-salinity (ECw < 2 dS m−1) and medium-salinity water (2 < ECw < 5 dS m−1) were used to irrigate the sunflower plants (Figure 2).
Estimation of Irrigation Water Use
Seasonal evapotranspiration (ETa) of sunflower was calculated using a soil water balance, Equation (3) [37,38]:

ETa = I + Pe + ΔSMC − Dp − Rso (3)

where ETa is the sunflower seasonal evapotranspiration (mm), I is the irrigation water (mm), Dp is deep percolation water (mm), Rso is surface runoff (mm), ΔSMC is the change of soil water between sowing and harvesting (mm) and Pe is effective rainfall (mm). Here, we assume no soil water losses or additions through deep percolation, surface runoff or capillary rise. Each plot was separated by a 1.5 m distance; therefore, Dp and Rso were taken as zero in this study. Irrigation water (I) was applied based on the pan evaporation method at different crop growth stages (initial, vegetative, flowering and grain development) [33,34]. A class A evaporation pan placed near the experiment was used to estimate the irrigation water requirement (I, mm) for full irrigation using Equations (4) and (5):

I = Kp × Ep (4)
V = I × A (5)

where I is the amount of irrigation water (mm), Ep is the cumulative pan evaporation (mm), Kp is the pan coefficient, taken as 0.7 [34], V is the volume of irrigation water (liters) and A is the area of the plot (m2). The estimated irrigation water (Table 2) was supplied through a polyethylene hose pipe by pumping water from the water sources; a water flow meter was used to measure the volume of irrigation water. Effective rainfall (Pe) was calculated (Table 2) as per [33,39,40], using Equations (6) and (7):

Pe = Ptotal (125 − 0.25 Ptotal)/125, if Ptotal < 250 mm (6)
Pe = 125 + 0.1 Ptotal, if Ptotal > 250 mm (7)

where Ptotal is the rainfall (mm). ΔSMC is the change in soil water between sowing and harvesting and follows Equation (8) [34,37].
where MCsi is the soil water content at sowing and MChi is the soil water content at harvest in the ith layer of the soil profile, n is the number of soil layers (0-15, 15-30, 30-45 and 45-60 cm), bi is the bulk density of the ith soil layer (g cm−3) and dri is the root zone depth of the ith soil layer (cm). SWC (%) was determined using the oven-drying method: the soil samples were well mixed, subsampled, weighed, dried at 105 °C for 48 h and reweighed.
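The effective-rainfall and irrigation calculations above can be collected into a short sketch (Python). The Pe rule follows Equations (6) and (7) exactly as given; the I = Kp × Ep and V = I × A forms are the reconstructions shown above, inferred from the variable definitions rather than reproduced from the original Equations (4) and (5):

```python
def effective_rainfall_mm(p_total: float) -> float:
    """Effective rainfall Pe (mm) from total rainfall, Eqs. (6)-(7)."""
    if p_total < 250.0:
        return p_total * (125.0 - 0.25 * p_total) / 125.0
    return 125.0 + 0.1 * p_total

def irrigation_depth_mm(ep_cumulative: float, kp: float = 0.7) -> float:
    """Irrigation requirement I (mm) from cumulative pan evaporation.

    The I = Kp * Ep form is inferred from the variable definitions;
    Equation (4) itself did not survive extraction.
    """
    return kp * ep_cumulative

def irrigation_volume_l(i_mm: float, area_m2: float) -> float:
    """Volume V (liters) for a plot: 1 mm of depth over 1 m2 is 1 liter."""
    return i_mm * area_m2

# Example for the 7.2 m x 4 m plots used in the experiments:
i = irrigation_depth_mm(ep_cumulative=60.0)      # 60 mm of pan evaporation
print(i, irrigation_volume_l(i, 7.2 * 4.0))      # 42.0 mm, 1209.6 L
```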
Sunflower Yield, Crop Water Productivity (CWP) and Irrigation Water Productivity (IWP)
The yield-contributing characters and seed yield of sunflower were recorded. Five plants were randomly chosen to measure the seed yield components in each treatment. Economic seed yields (t ha−1) were measured from the plants harvested from two selected rows of each plot (5.76 m2). The sunflower seed was manually harvested, cleaned, weighed after sun drying and converted to t ha−1 at 12% moisture content. The CWP and IWP were calculated to evaluate the efficiency of irrigation water use in sunflower production, using Equations (9) and (10) [37,41]:

CWP = SY/ETa (9)
IWP = SY/I (10)

where CWP is the crop water productivity (kg m−3), SY is the sunflower seed yield (t ha−1), ETa is the total seasonal crop water use (mm), IWP is the irrigation water productivity (kg m−3) and I is the amount of applied irrigation water (mm).
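With SY in t ha−1 and water in mm, Equations (9) and (10) imply a unit-conversion factor of 100 when the result is expressed in kg m−3 (1 t/ha = 0.1 kg/m2 and 1 mm = 0.001 m3/m2), which reproduces the reported magnitudes; a minimal sketch:

```python
def water_productivity_kg_m3(seed_yield_t_ha: float, water_mm: float) -> float:
    """CWP or IWP in kg/m3: yield (t/ha) over water use or irrigation (mm).

    1 t/ha = 0.1 kg/m2 and 1 mm = 0.001 m3/m2, hence the factor of 100.
    """
    return 100.0 * seed_yield_t_ha / water_mm

# Example consistent with the reported ranges: 2.3 t/ha with ETa = 195 mm.
print(round(water_productivity_kg_m3(2.3, 195.0), 2))   # ~1.18 kg/m3
```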
Statistical Analysis
Data on sunflower seed yield and yield-contributing parameters, CWP and IWP were statistically analyzed to test the effects of the different levels of saline water irrigation at the two sites in the two years, using R statistical software version 3.5.0 (2018), developed by the R Project for Statistical Computing [42]. All treatment mean differences were tested for significance at the p < 0.05 probability level. The variations of soil salinity from the 1:5 soil:water extract (ECe, dS m−1), solute potential (kPa) and soil water content (SWC, % w/w) with time (month) and treatment were also analyzed and compared for significant differences at p < 0.05.
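The paper ran this analysis in R; an equivalent two-way ANOVA with blocks can be sketched as follows (Python with statsmodels here, for consistency with the other snippets; the data frame below is synthetic stand-in data, not the experimental values):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for one season's seed-yield data: 2 locations x
# 6 treatments x 3 replications (blocks), randomized complete block design.
rng = np.random.default_rng(1)
rows = []
for loc in ["Dacope", "Amtali"]:
    for trt in [f"T{i}" for i in range(1, 7)]:
        for block in range(1, 4):
            base = 2.4 if loc == "Dacope" else 1.8
            rows.append({"location": loc, "treatment": trt, "block": block,
                         "yield_t_ha": base + rng.normal(0, 0.15)})
df = pd.DataFrame(rows)

# Two-way ANOVA with blocks, testing location, treatment and the L x T
# interaction at p < 0.05, analogous to Tables 3 and 4.
model = smf.ols("yield_t_ha ~ C(location) * C(treatment) + C(block)",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```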
Variation of Sunflower Seed Yield and Yield Components
The analysis of variance (ANOVA) and the treatment mean values over the two locations and years for sunflower seed yield and yield-contributing characters are presented in Tables 3 and 4. Location had a markedly significant (p < 0.001) effect on the seed yield and yield-contributing characters of sunflower (Table 3) in 2016-2017 but not in 2017-2018 (Table 4). Treatment also had a highly significant effect (p < 0.001) on seed yield and yield-contributing characters, except seeds head−1 (p < 0.10), in both years (Tables 3 and 4). Irrigation with LSW and MSW significantly affected the yield and yield-contributing characters of sunflower. The seed yields of sunflower were 1.80 t ha−1 and 2.45 t ha−1 at Amtali and Dacope, respectively, in 2017, while in 2018 they were 1.39 t ha−1 and 1.50 t ha−1 at Amtali and Dacope, respectively (Tables 3 and 4). At Dacope in 2016-2017, the seed yield of T2 was lower than that of T4 and T6, but not different from the other treatments, even though this site had the highest overall yield. At Amtali in 2016-2017 (Table 3), T4 and T6 had higher seed yields than T5, while T4 was higher than the two-irrigation treatments. In 2017-2018 (Table 4), there was no significant difference in seed yield between T4 and T6, but both exceeded T5 and the two-irrigation treatments.
Water Use, Crop Water Productivity and Irrigation Water Productivity
The seasonal crop water use (ETa), crop water productivity (CWP) and irrigation water productivity (IWP) are shown in Tables 2-4. In 2016-2017, the seasonal ETa of sunflower ranged from 170 mm (T2, T3) to 233 mm (T4) at Dacope and from 122 mm (T3) to 174 mm (T5) at Amtali (Table 2). In 2018, ETa varied from 131 mm (T1, T3) to 193 mm (T6) at Dacope and from 126 mm (T3) to 189 mm (T6) at Amtali (Table 2). The CWP of sunflower under the different irrigation treatments ranged from 0.99 (T5) to 1.36 kg m−3 (T3), with an average of 1.19 kg m−3, over the two locations in 2016-2017 (Table 3). In 2017-2018, CWP ranged from 0.79 kg m−3 (T1) to 1.03 kg m−3 (T3), with an average of 0.92 kg m−3 (Table 4). The ANOVA indicates that the interaction of location and treatment (L × T) had significant (p < 0.001) effects on the CWP and IWP of sunflower in 2017 (Table 3). Treatment (T) also had a highly significant effect (p ≤ 0.001) on CWP (Table 4), but location (L) and the L × T interaction had no significant effect on CWP and IWP in 2018 (Table 4). In both years (2017 and 2018) and at both locations (Dacope and Amtali), T3 had the highest CWP among the treatments.
Variation in Soil Salinity
Soil salinity (ECe) during the growing season for the various treatments is illustrated in Figure 3a-d. The results indicate that soil salinity (ECe) varied significantly (p < 0.001) with time during the crop growing season, from December to April, at the 0-15, 15-30, 30-45 and 45-60 cm soil depths. The most significant (p < 0.001) effect was observed in February and March compared to the beginning and the end of the growing season, over the two years (2016-2017 and 2017-2018) at both Amtali (Figure 3a) and Dacope (Figure 3b). At Amtali in 2016-2017 (Figure 3a(A1-A4)) and 2017-2018 (Figure 3a(a1-a4)), soil salinity (ECe) changed significantly (p < 0.001) in March down to 60 cm soil depth. At Dacope in 2016-2017, similarly significant (p < 0.001) changes in ECe were observed in March down to 60 cm soil depth in 15 cm increments (Figure 3b(D1-D4)). The largest changes in soil salinity (6.9 dS m−1) were found in February in the 0-15 and 45-60 cm soil layers in 2017-2018 at Dacope (Figure 3b(d1-d4)). The results indicate that the effect of time on soil salinity increased in February-March, during seed development of sunflower, in both years and locations. The effect of treatments on soil salinity varied significantly (p < 0.001) in the soil depths down to 60 cm over the two years (2017 and 2018) at both locations. ECe was greater at 0-15 cm depth in all treatments during February and March, and similar trends were observed in the other soil layers. Significant (p < 0.001) changes occurred in treatment T5 compared to the other treatments at 0-60 cm in 15 cm increments at Amtali in 2016-2017 (Figure 3c). In 2017-2018 (Figure 3c), greater changes in soil salinity were observed in T2 at 0-15 and 45-60 cm depth. Treatment T2 significantly (p < 0.001) increased the soil salinity compared to the other treatments in the 0-15 and 30-45 cm soil layers (Figure 3d). At Amtali in 2016-2017 (Figure 3a(A1-A4)), ECe varied from 3.1 to 6.0 dS m−1, with the highest values in February-March in treatment T5 at all soil layers. In 2017-2018 (Figure 3a(a1-a4)), ECe varied from 3.09 to 6.4 dS m−1, with the highest values in February-March. Treatment T2 produced significantly greater ECe (5.9 dS m−1) at 0-15 cm and 30-45 cm depth, while ECe was reduced at 15-30 and 30-45 cm depth. Similar trends were observed at Dacope in both years (Figure 3c,d).
[Table note: '--' means no irrigation applied. The timing of irrigation events was the vegetative, flowering and seed development stages. Mean values within the same column followed by different letters (a-f) are significantly different.]
Variations of Solute Potential
The variations of solute potential (SP) with time at 0-60 cm soil depth for each irrigation treatment are shown in Figure 4a,b. The effect of time on solute potential varied significantly (p < 0.001) at the different soil depths (0-15, 15-30, 30-45 and 45-60 cm) over both locations in both years (Figure 4a). SP decreased (became more negative) in February more than in the other months, with a similar trend in February and March. SP was much lower at 0-15 cm soil depth in February than in the other soil layers. The irrigation treatments had a significant (p < 0.001) effect on SP (Figure 4b). Treatment T4 significantly increased (made less negative) SP at both locations in both years. At Amtali, T1 and T2 had significantly lower (more negative) SP than the other treatments at the different soil depths. At Dacope, T2 and T3 had significantly lower (more negative) SP than the other treatments in both years. We observed that more irrigations at the different growth stages of sunflower are important for a better yield response in both environments and soils, because they maintain a higher (less negative) SP. SP was higher (less negative) at 45-60 cm in all treatments. The results indicate that in February-March, SP was lower (more negative) at 0-15 cm depth and higher (less negative) at greater soil depth (45-60 cm).
Variations of Soil Water Content
The variations of gravimetric soil water content (SWC, %, w/w) during the crop growing season for the various treatments are shown in Figure 5a,b. An increase or decrease in SWC was observed following the irrigation treatments or precipitation. The effect of time (month) on SWC varied significantly (p < 0.001) over the crop growing season, from December to April, in the 0-60 cm soil profile in 15 cm increments, at both Amtali (Figure 5a(A1,A2)) and Dacope (Figure 5b(D1,D2)). SWC decreased significantly (p < 0.001) at the flowering of sunflower in February compared to the beginning and the end of the growing season at both Amtali (Figure 5a(A1,A2)) and Dacope. The effect of treatments on SWC varied significantly (p < 0.001) with soil depth over the two years (2017 and 2018) at both Amtali (Figure 5b(A3,A4)) and Dacope (Figure 5b(D3,D4)). SWC was greater at the 45-60 cm depth in all treatments at both locations in both years. Treatment T6 had significantly greater SWC than the other treatments, while in treatment T1 SWC was significantly (p < 0.001) lower than in the other treatments. In 2016-2017 at Amtali (Figure 5a(A3)), treatments T4, T5 and T6 had, on average, similar SWC (26.2, 27.1 and 27.3%), greater than T1, T2 and T3 (24.4, 24.6 and 24.1%) within the 0-60 cm soil depth. In 2017-2018 (Figure 5a(A4)), similar trends were observed at Amtali. In 2016-2017 at Dacope (Figure 5b(D3)), treatments T5 and T6 had similar SWC (35.3 and 35.5%), greater than the other treatments. Similarly, in 2017-2018 at Dacope (Figure 5b(D4)), SWC was on average greatest in T6 at 0-60 cm soil depth (26.4%), and lowest in T1 (24.3%). SWC decreased at the later growth stages of sunflower in both years (2017 and 2018), but plant-available soil water was not drastically reduced, because the irrigation schedule was maintained and the required amount of water was supplied for sunflower production.
Discussion
The number of irrigation events, regardless of ECw, was the critical determinant of sunflower seed yield and irrigation water productivity. With both LSW and MSW irrigation, sunflower seed yield was higher with three irrigations than with two at both locations in the two growing seasons. However, the use of LSW (0.5 < salinity < 2 dS m−1) followed by two irrigation events with MSW (2 < salinity < 5 dS m−1) at the later growth stages of sunflower can decrease yield relative to continuous application of LSW or a single late application of MSW. There are previous reports of positive effects of medium-salinity water irrigation on crop yield [11,13,43], as well as recommendations to use saline water (≤7 dS m−1) to supplement fresh water (<2.7 dS m−1) for irrigation where fresh water is scarce. Our findings suggest that root zone solute potential is the key factor explaining the responses to the number of irrigation events and the crop's tolerance of MSW for irrigation.
Variation of Sunflower Seed Yield and Yield Components
We observed that yield increased slightly with an increased number of low- and medium-salinity water irrigations at both locations (Tables 3 and 4). The technique of conjunctive use of groundwater (1.5-3 dS m−1) at early growth stages and saline canal water (4-7 dS m−1) at later growth stages maintained maize grain yields of 8.6-9.5 t ha−1. For wheat, Mojid and Hossain [14] and Mojid et al. [12] stated that saline water irrigation could be applied at later growth stages, when plants have better salinity tolerance. On the other hand, cotton yield and its contributing attributes were significantly higher when fresh water was used for irrigation and decreased significantly as salinity increased from 4 to 12 dS m−1 [44]. In this study, the irrigation levels also significantly (p < 0.001) depressed yield components of sunflower, such as seed weight head−1, as well as seed yield, when a single application of LSW was followed by two applications of MSW (Tables 3 and 4).
An earlier study [45] reported that crop yield loss due to increased soil salinity in the dry season can be minimized when the crops are irrigated properly and a proper irrigation scheduling technique is maintained. We observed that irrigations at the early vegetative, flowering and seed development stages are important for a better response in plant growth and yield of sunflower in coastal saline soils. Three irrigations at 25-30 (early growth), 60-65 (flowering) and 75-80 (seed development) days after sowing produced significantly higher head diameters and seed weights [46]. On the other hand, two irrigations, at the flowering and seed development stages, are required for higher sunflower seed yields [47]. In addition, earlier sowing by dibbling with zero-tillage techniques allowed the crop to escape water stress, supporting effective sunflower establishment and seed yield [48-50].
Seasonal Crop Water Use, Crop Water Productivity and Irrigation Water Productivity
Sofia et al. [51] reported that the average CWP of sunflower under conjunctive use of non-saline and saline water was 0.90 kg m−3, and that water productivity changed by 7.6% compared with non-saline irrigation water, without an increase in root zone soil salinity during crop growth [15]. In this study, the CWP of sunflower under LSW and MSW varied from 0.99 to 1.36 kg m−3, with an average of 1.19 kg m−3, over the two locations in 2017, and from 0.79 to 1.03 kg m−3, with an average of 0.92 kg m−3, in 2018. Under deficit irrigation in non-saline conditions, Erdem et al. [52] reported that the CWP and IWP of sunflower varied from 0.062-0.094 kg m−3 and 0.080-0.247 kg m−3, respectively. We observed that irrigation affected sunflower seed yields. Sunflower yield was found to be at a maximum when the available soil water content was 70-80% [53]. In other crops such as maize, mixing non-saline and saline water (1:1; water salinities of 3.5 dS m−1 and 5.7 dS m−1) under drip irrigation produced the highest and lowest IWP of 15.3 kg m−3 and 8.7 kg m−3, and IWP increased with increasing irrigation water salinity up to 10.9 dS m−1 [54]. The IWP of tomato increased as water salinity increased from 1.1 to 4.9 dS m−1 [55]. Ben-Asher et al. [56] used three salinity levels (1.8, 3.3 and 4.8 dS m−1) of saline water to irrigate grapevine and reported that salinity had no effect on IWP. Chen et al. [10] indicated that with every 1 dS m−1 increase in irrigation water salinity, sunflower yield decreased by 1.8% while IWP increased. Moreover, this study indicates that the number of irrigation events is the critical determinant for increasing sunflower yield and improving water productivity to intensify the cropping system. This study showed that CWP was significantly increased by increasing medium-salinity water irrigation, and that it could be maintained by replacing brackish water with low- to medium-salinity water irrigation at later growth stages (T2, T5 and T6).
Variation in Soil Salinity
In the present study, the soil salinity increased in February-March, during sunflower flowering and seed development (Figure 3a,b), because high temperatures, rapid soil water evaporation, increased soil cracking and capillary rise all contributed to an increase in soil salinity. ECe was greater in the top soil layer (0-15 cm depth) in all treatments during February and March, with similar trends in the other soil layers. The accumulation of salts at the surface occurred because of soil water uptake by the plants and rapid evaporation of soil water [28,57]; therefore, salt accumulation was generally higher at the upper soil surface. In treatments T2 and T5, salt accumulation was slightly greater than in T1, T4 and T6 due to the use of MSW (2 ≤ ECw ≤ 5 dS m−1) irrigation. Irrigation with MSW (canal water) after LSW (pond water) may cause a slight increase in soil salinity; that is, MSW irrigation at later growth stages, after LSW irrigation at the early growth stage, may produce more salt movement in the soil profile. The technique of low- and medium-salinity water irrigation gives a better understanding of the sunflower crop's response to salinity at different growth stages and growing periods, and is important with respect to salt stress susceptibility during the critical growth stages of crops. The initial growth stages of crops, such as the early vegetative stage, are sensitive to salt stress, but the later growth stages become more salt tolerant [13,16,58]. This study also indicates that proper irrigation scheduling techniques (timing saline water irrigation relative to the critical growth stages) are needed to minimize yield reductions and make sustainable use of limited fresh water. The choice of irrigation technique is very important for saline water irrigation to intensify cropping in the coastal regions [11,59]. The technique of using saline water (ECw 2-5 dS m−1) together with LSW (ECw < 2 dS m−1) for irrigation during the dry winter season resulted in sunflower yields of 1.57 to 2.33 t ha−1. Around 70% of crop roots are concentrated in the upper 30 cm of the soil profile, so it is crucial to establish an acceptable salinity level there during the critical growth phases of sunflower. With adequate cultural practices, salt need not accumulate in the 0-20 cm soil depth over the long term [23]. Li et al. [60] showed that saline water irrigation significantly increased the accumulation of soil salts at the soil surface (0-10 cm layers), but not at 40-60 cm depth, where abundant lateral roots were found. In this study, ECe increased in February and remained basically stable over the two years at both locations for the two crop cycles. Soil salinity builds up mainly through the addition of salts from saline water irrigation and the upward movement of salts by capillary rise and evaporation from the shallow groundwater table (≤3 m); it gradually increased as the dry season progressed, with maximum soil salinity at the mid or flowering stage. Sunflowers are particularly affected at critical developmental growth stages during February-March in the coastal area of southern Bangladesh. It is clear that reduced crop yields are not only an effect of salinization, but also of the combined effect of soil water stress, salinity and other agronomic practices [61]. Francois [62] reported around a 5% yield reduction for each unit increase in soil salinity. Soil salinity increased with saline irrigation water (7 dS m−1) and increased slightly with brackish irrigation water (2.7 dS m−1). In this study, the results (Figure 3a,b) indicate that ECe was not substantially higher in the soil profiles among the treatments given medium-salinity water (2 to 4.9 dS m−1) irrigation, and the salinity may be tolerable for sunflower from germination through to yield production in the coastal areas of Bangladesh. Using only saline water for irrigation is associated with salt accumulation in the soil, which might harm plants and diminish yields. In Bangladesh, however, the high precipitation (120-180 cm) during the monsoon season (June-August) in the coastal zone provides an opportunity for effective leaching and dilution of salt from the soils, and the drainage system allows flushing of the salt [40,63-66].
Variations of Solute Potential
The lower (more negative) SP was found at the mid-growth stages of the crop in both years (2017 and 2018), due to soil water uptake and soil water evaporation from the soil surface, at both locations (Dacope and Amtali). In treatment T1, the SP slightly affected the plants, in association with soil salinity and moisture, at both locations. Water uptake by plants is governed by the water potential [67], and the solute potential is closely related to sunflower crop growth [36,68]; it is an effective way of identifying the combined effect of salinity and drought. In this study, the SP was inversely related to the soil water content and proportional to the salt concentration in the soil (Figure 4b). Salt concentrations in the soil solution increase as the soil dries, and SP decreases (becomes more negative), which limits water uptake by sunflower at higher levels of soil salinity and lower levels of soil water [63]. Generally, plants struggle to take up water when the total potential of the soil solution is more negative than −1000 kPa and will permanently wilt at −1500 kPa. We observed that when the solute potential is more negative than −700 kPa, the rate of yield reduction is severe [69]. This study indicated that SP was lowest (most negative) in February, when values fell below −700 kPa. Through decreased SWC and increased salt concentration in the soil, SP stress affects the growth and yield of sunflower [66]. An increase or decrease in SWC was observed following irrigation or precipitation, and SP then decreased (became more negative) or increased (became less negative) gradually. Soil salinity and osmotic level depend on the soil texture and the frequency and amount of saline water irrigation, and the effects vary with the stage of crop growth [11,60].
Variations of Soil Water Content
Generally, the sunflower crop is more sensitive to water stress at flowering than at other stages [70]. This study shows that SWC was lower in the upper soil layers and greater at the lower depths (Figure 5a,b), which indicates that sunflowers could extract water from the lower depths of the soil (15-60 cm) to avoid water stress [71]. Doorenbos and Kassam [69] reported that soil water depletion should not exceed 45% of the available soil water at the late vegetative, flowering and grain development stages of crops. This study indicates that lower SWC in the upper soil layer during the later growth stages of sunflower exerted a negative effect on yield, even though sunflower can extract water to 180 cm soil depth during the critical growth stages [71]. Sunflower yields were found to be at a maximum when the available SWC was 70-80% [54]. Moreover, this study indicates that the number of irrigation events is the critical determinant for increasing sunflower yield and improving water productivity to intensify the cropping system in the Ganges Delta. It showed that WP was significantly increased by MSW irrigation and could be maintained by replacing fresh water with low- to medium-salinity water irrigation at later growth stages (T2, T5 and T6). Several studies have stated that saline water can successfully be used at later growth stages for the cultivation of irrigated crops such as wheat, tomato and mustard in salt-affected zones [13,16,72-74].
Conclusions
With both low- and medium-salinity water, sunflower seed yield increased with three irrigations at both locations in the two growing seasons. Moreover, applying low-salinity water (0.5 < salinity < 2 dS m−1) at the early growth stages followed by medium-salinity water (2 < salinity < 5 dS m−1) at the later growth stages had no significant effect on yield relative to continuous application of low-salinity water. This technique is effective for increasing yield by avoiding low solute potential at the critical growth stages of crops in the coastal salt-affected areas of southern Bangladesh. To obtain better sunflower seed yields, it could also serve as an alternative irrigation scheduling method for rabi crops such as maize, wheat, barley and mustard, so as to intensify the cropping system in the coastal saline areas of southern Bangladesh, where freshwater is in limited supply. Further studies are needed to support the expansion of rabi crops in coastal salt-affected areas of the Ganges Delta where fresh (non-saline) water is not available for rabi crop cultivation.
Figure 1. Mean maximum (Tmax) and minimum (Tmin) air temperature (T, °C), pan evaporation (EV, mm) and rainfall (Pe, mm) during the crop growing seasons of 2016-2017 and 2017-2018 at the experiment sites of the salt-affected areas of Dacope (A) and Amtali (B), respectively.

Figure 2. Pond and canal irrigation water salinity (ECw) during the crop growing seasons (2016-2017 and 2017-2018) at the two locations. Pond water ECw ranged from 0.5 (December) to <2 dS m−1 (March-April) and canal water ECw from 0.7 to ≤5 dS m−1 over the two years and locations.

Table 1. Initial soil physical properties in the experimental plots at Amtali and Dacope in 2016-2017.

Table 3. Effect of location and treatments on sunflower seed yield, yield contributing parameters, crop water productivity (CWP) and irrigation water productivity (IWP) of sunflower in 2016-2017.

Table 4. Effect of location and treatments on sunflower seed yield, yield contributing parameters, crop water productivity (CWP) and irrigation water productivity (IWP) of sunflower in 2017-2018.
NeuMIP: Multi-Resolution Neural Materials
We propose NeuMIP, a neural method for representing and rendering a variety of material appearances at different scales. Classical prefiltering (mipmapping) methods work well on simple material properties such as diffuse color, but fail to generalize to normals, self-shadowing, fibers or more complex microstructures and reflectances. In this work, we generalize traditional mipmap pyramids to pyramids of neural textures, combined with a fully connected network. We also introduce neural offsets, a novel method which allows rendering materials with intricate parallax effects without any tessellation. This generalizes classical parallax mapping, but is trained without supervision by any explicit heightfield. Neural materials within our system support a 7-dimensional query, including position, incoming and outgoing direction, and the desired filter kernel size. The materials have small storage (on the order of standard mipmapping except with more texture channels), and can be integrated within common Monte-Carlo path tracing systems. We demonstrate our method on a variety of materials, resulting in complex appearance across levels of detail, with accurate parallax, self-shadowing, and other effects.
Fig. 1. Top left: Our multi-resolution neural material representing Twisted Wool, rendered seamlessly among standard materials using Monte Carlo path tracing. The neural representation is trained using hundreds of reflectance queries per texel, across multiple resolutions, and is independent of the underlying input, which could be based on displaced geometry (in this example), fiber geometry, measured data, or others. Top right: The stages of our pipeline: computing a kernel size based on pixel coverage, evaluating a neural offset module for improved handling of parallax effects, evaluating a neural texture pyramid to obtain a local feature vector, and applying a small fully-connected neural network to obtain a reflectance value usable in a standard renderer. Bottom left: Comparison of our result to a previous technique and to a reference path-traced from the ground-truth geometry. Bottom right: Our results match the reference across resolutions. Two additional lighting and camera angles shown.
INTRODUCTION
The world is full of materials with interesting small-scale structure: a green pasture consisting of millions of individual blades of grass, a scratched and partially rusted metallic paint on a car, a knitted sweater or a velvet dress. The underlying mesostructure and microstructure phenomena are wildly variable: complex surface height profiles, fibers and yarns, self-shadowing, multiple reflections and refractions, and subsurface scattering. Many of these effects vary at different levels of detail: for example, we can see individual fibers of a fabric when we zoom in, but they morph into yarns and eventually disappear when we zoom out.
While computer graphics has made great strides in modeling these phenomena, this is usually at the cost of large computational expense and/or loss of generality. Many previous approaches were designed for a specific material at a particular level of detail, and evaluating those methods over a large patch becomes either slow or results in artifacts. In essence, when we zoom out, we integrate over a given patch. Theoretically, this can be achieved using Monte Carlo integration techniques by evaluating a large number of samples of the path. However, the variance of such an estimator grows with the size of the patch, and the method quickly becomes impractical, requiring large effort to compute a function that typically becomes simpler under zoomed-out viewing conditions. Traditional mipmap techniques [Williams 1983] can erroneously average parameters such as normals that influence the final appearance non-linearly. A universal method for prefiltering a material (that is, finding the integral of the patch of material microstructure covered by a pixel) has remained a challenge, despite some methods that address this problem for specific kinds of specular surfaces [Dupuy et al. 2013;Jakob et al. 2014] and fabrics [Zhao et al. 2016].
Our goal is to develop a neural method to accurately represent a variety of complex materials at different scales, train such a method on synthetic and real data, and integrate it into a standard pathtracing system. Our neural architecture learns a continuous variant of a bidirectional texture function (BTF) [Dana et al. 1999], which we term multi-scale BTF (MBTF). This is a 7-dimensional function, with two dimensions each for the query location, incoming and outgoing direction, and one extra dimension for the filter kernel size. This framework can represent a complex material (with self-shadowing, inter-reflections, displacements, fibers or other structure) at very different scales and can smoothly transition between them.
Inspired by the mipmapping technique, we propose NeuMIP, a method that uses a set of learned power-of-2 feature textures to represent the material at different levels, combined with a fixed per-material fully connected neural network. The network takes as input the trilinearly interpolated feature vector queried from the texture pyramid, along with incoming and outgoing directions, and outputs a reflectance value.
We also introduce a neural offset system, which allows us to efficiently represent materials with prominent non-flat geometric features. This is achieved by adjusting the texture query location through a learned offset, resulting in a parallax effect. This allows the rendering of intricate geometry without any tessellation. We obtain the appearance of a non-flat material without the cost of constructing, storing and intersecting the displaced geometry of the material.
Because our method can represent a wide variety of materials using the same architecture, adding support for a new material becomes a simple matter of creating a dataset of random material queries and optimizing the feature textures and network weights. This typically takes only about 45 minutes on a single GPU for 512^2 resolution of the bottom pyramid level, which is easily more efficient than explicitly generating or acquiring a full high-resolution BTF. As opposed to an explicit BTF, our representation only requires storage on the order of traditional mipmapped texture pyramids (typically with 7 instead of 3 channels per texel), while enabling easy prefiltering and multiscale appearance.
Recent related works [Rainer et al. 2019, 2020] also use a neural network to efficiently compress BTFs, and we build upon these methods. However, they do not support prefiltering with arbitrary kernel sizes. They also do not have an equivalent of our neural offset technique, limiting the methods to mostly flat materials. Finally, an advantage of our method is that no encoder needs to be trained, and as a result we do not need high-resolution BTF slices as inputs, instead only requiring about 200-400 random BTF queries per texel to train a material model; our decoder network is small and fast.
The major contributions of this work are as follows:
• A neural method which can represent a wide variety of geometrically complex materials at different scales, trained from random queries of the continuous 7-dimensional multiscale BTF. These queries can come from real or synthetic data.
• A neural offset technique for rendering complex geometric appearance including parallax effects without tessellation, trained in an unsupervised manner.
• Ability to learn appearance from a small number of queries per texel (200-400), due to an encoder-less architecture.
NeuMIP can be integrated into a Monte Carlo rendering engine, since each material query can be evaluated independently, allowing for light transport between regular and neural materials as shown in Figure 1.
RELATED WORK
In this section, we will briefly review previous work related to material representation and the use of deep learning in rendering.
Prefiltering and mipmapping. Efficiently rendering objects at different scales is one of the fundamental problems of computer graphics. A key challenge is efficiently finding an integral of a patch of the surface of the material, which is covered by a pixel. An overview of prefiltering methods was presented by Bruneton [2011]. Williams [1983] proposed the mipmap technique to create a pyramid of prefiltered textures; this is a standard method found in most rendering engines. However, the prefiltering problem becomes challenging if we drop the assumption of flat and rough materials. Many techniques were proposed over the years to address these shortcomings; however, the solutions tend to be approximate and/or focus on special cases or specific materials. Han et al. [2007] uses spherical harmonics and spherical vMFs to prefilter normal maps. Kaplanyan et al. [2016] proposes a real-time method for prefiltering of normal distribution functions. Dupuy et al. [2013] prefilter displacement mapped surfaces. Becker and Max [1993] introduced a method to smoothly blend between a BRDF, bump mapping, and displacement mapping. Wu et al. [2019] make further improvements in multi-resolution rendering of heightfield surfaces, taking into account shadowing and inter-reflection.
Parallax mapping [Kaneko et al. 2001] is a classic technique for improving bump and normal mapping by adding an approximate parallax effect. The method works by computing a texture space offset based on the local height and normal value. Our neural offset technique is inspired by this idea, but is learned unsupervised; that is, we do not feed any heightfields or normals into either the training or rendering, and in fact support materials where such heightfields/normals are not precisely defined.
Bidirectional texture functions. Dana et al. [1999] introduced the notion of the bidirectional texture function (BTF), a 6D function describing arbitrary reflective surface appearance. Given 2D location coordinates, incoming and outgoing directions, the BTF outputs a reflectance value. While prefiltering BTFs by mipmapping is theoretically simple, storing a discretized 6D function requires a large amount of memory. Therefore, many methods were developed to minimize storage requirements. A common solution [Koudelka et al. 2003;Müller et al. 2003] is to use PCA or clustered PCA to compress the function. A comprehensive overview of different techniques was published by Filip and Haindl [2008].
Neural reflectance. Recently, Rainer et al. [2019] proposed to use an autoencoder framework to compress BTF slices per texel (also termed apparent BRDFs or ABRDFs); the decoder takes incoming/outgoing directions as input in addition to the latent vector, and the autoencoder is trained per BTF. Later, they extended the work by unifying different materials into a shared latent space, so only a single autoencoder needs to be trained [2020]. Within the context of complex specular appearance, [Kuznetsov et al. 2019] used Generative Adversarial Networks (GANs) to generate reflectance functions perceptually similar to synthetic or measured input data, and rendered them using partial evaluation of the generator network. Inspired by these methods, we extend the neural textures to multi-resolution materials, and introduce the neural offset module, which greatly improves the quality of non-flat materials; we also find that neither an encoder nor a discriminator is needed in our case, and direct optimization of the feature textures works well.
Other neural material methods. Yan et al. [2017] use a neural network as a mapping function to convert fur parameters into a participating medium, simplifying the simulation. A neural network can also be used to accelerate rendering in an unbiased way. Mueller et al. [2019] presented path-guiding methods which learn a sampling distribution function for rendering using normalizing flow networks. Nalbach et al. [2017] proposed deep shading, a technique which uses a CNN to achieve screen-space effects like ambient occlusion, subsurface scattering, etc., from simple feature buffers. Thies et al. [2019] introduced the idea of a neural texture and its use inside deferred rendering.

Recently, Mildenhall et al. [2020] introduced neural radiance fields (NeRFs), a view synthesis framework based on differentiable volume rendering, which can fit the geometry and outgoing radiance of 3D objects or entire scenes from a number of training views, encoding the entire appearance in a fully-connected network (MLP). This work has stimulated a large number of follow-up efforts. Our problem is simpler in that we only focus on material rather than geometric shape; however, it is more complex in other ways, as we support relighting and multi-resolution viewing, and require much faster queries to integrate in a full rendering system. We do not encode the entire reflectance in a single large MLP, and instead use more structure: a feature texture pyramid and a neural offset module combined with a much smaller MLP.
Table 1. Notation used in the paper.
NEURAL MBTF REPRESENTATION
In this section we define the multi-resolution BTF (Sec. 3.1), and discuss our neural architecture (Sec. 3.2 and 3.3). We introduce a baseline version of our neural MBTF in Sec. 3.2 and then present our full model in Sec. 3.3 with a neural offset technique. In later sections, we will describe how to train our models and how to use them for rendering. Our notation is summarized in Table 1.
Multi-resolution BTF
Accurately computing the reflected radiance on a material surface with complex microgeometry -- usually modeled by displaced geometry, fibers, volumetric scattering, etc. -- is highly expensive and requires tracing complex geometry at a microscopic scale. Our approach is to precompute the reflectance of the material for a given location u, radius of the footprint kernel σ, incoming direction ω_i, and outgoing direction ω_o. We denote this function r(u, σ, ω_i, ω_o), and call it a multi-resolution bidirectional texture function (MBTF). Below we define the MBTF more precisely. Traditionally, a BTF [Dana et al. 1999] was used to incorporate all of these effects, functioning as a black-box function that directly provides the equivalent reflectance. However, a classical BTF models a material at a certain fixed scale and does not support multiple levels of detail. We introduce the multi-resolution BTF (MBTF) that supports a continuous Gaussian query kernel size σ, representing material appearance at an arbitrary level of detail. Our MBTF can thus be seen as a filtered BTF. In particular, let g(μ, σ; x) be a normalized 2D Gaussian with mean μ and standard deviation σ, as a function of x. Our MBTF is defined as follows:

r(u, σ, ω_i, ω_o) = ∫ g(u, σ; x) B(x, ω_i, ω_o) dx,    (1)

where B(u, ω_i, ω_o) can be seen as a traditional BTF at the finest material level. In other words, the MBTF value is the weighted average over the Gaussian kernel centered at u of the exitant radiance in direction ω_o, assuming the material is lit by distant light of unit irradiance from direction ω_i.

Fig. 2. Overview of our neural architecture. Left: The neural offset module (more detail in Fig. 4) takes a uv-space location and incoming direction, and predicts a new (offset) uv-space location to simulate parallax effects. Middle: The neural texture pyramid is queried using trilinear interpolation to obtain a 7-channel feature vector. Right: The feature vector, with incoming and outgoing directions, is fed to a material-specific multi-layer perceptron, which predicts the RGB reflectance value.
Such a BTF can be captured from real data, which we also support; however, in most of our results, we use Monte Carlo path tracing to define it using synthetic microstructure modeled on a reference plane. We compute the value B(u, ω_i, ω_o) by tracing a standard path estimator from direction −ω_o towards u, assuming lighting from a distant light from direction ω_i with unit irradiance onto the reference plane. We use a distant light with a finite smoothing kernel in directions, to improve convergence of the path tracer and make multiple importance sampling applicable.
Note that our definition generalizes the standard radiometric definition of a BRDF, which is defined as outgoing radiance per unit incoming irradiance. Therefore, if the synthetic microstructure consists of a simple plane with a homogeneous BRDF, our BTF and MBTF will be equal to that BRDF for any position and kernel size. Also note that the BTF and MBTF will in general not be reciprocal due to occlusions and multiple light bounces.
Neural MBTF baseline
The 7-dimensional MBTF is too prohibitive to store directly. Nevertheless, this function (depending on the material) has low entropy because of redundancies, which can be captured by a well-chosen neural network architecture. We propose a baseline neural architecture that can already model many complex materials. In particular, our baseline architecture consists of a neural texture pyramid P that encodes complex multiscale spatially-varying material appearance and a material decoder network D that computes directional reflectance from the pyramid. This neural MBTF is expressed by

r(u, σ, ω_i, ω_o) = D(P(u, σ), ω_i, ω_o),    (2)

where P(u, σ) represents a neural feature lookup from the neural pyramid and the decoder network D evaluates the final reflectance from the neural feature given input directions (ω_i and ω_o).
Neural texture pyramid. Instead of having one single neural texture of dimension 2^L × 2^L × c (where L is some positive integer and c the number of channels), we leverage a neural texture pyramid P = {T_l}, consisting of a set of neural textures T_l. Each T_l has a size 2^l × 2^l × c, where each texel contains a c-channel latent neural feature, l denotes the discrete level of detail, and we let 0 ≤ l ≤ L. Such a pyramid structure enables encoding complex material appearance at multiple scales. This is similar to standard mipmapping for color (or BRDF) textures; however, our neural pyramid models challenging appearance effects caused by complex microgeometry. In addition, unlike traditional mipmaps, our per-level neural textures are independent from one another: there is no simple operation that produces all texture levels from the finest one. Each neural texture in the set is independently optimized, which ensures that the MBTF can represent appearance for the material at all levels.
We utilize trilinear interpolation to fetch a neural feature P(u, σ) at location u and a continuous level. Much like in classical mipmapping, we use standard bilinear interpolation to get features for the spatial dimensions of the location coordinate u, followed by linear interpolation in the logarithmic space for the scale dimension, since the scales express powers of 2. Specifically, a neural feature at (u, σ) is computed by:

P(u, σ) = w_1 T_⌊t⌋(u) + w_2 T_⌈t⌉(u),    (3)

where t = log_2(σ), w_1 = ⌈t⌉ − t, w_2 = t − ⌊t⌋, and ⌊·⌋, ⌈·⌉ are floor and ceiling operations. T_l(u) is the bilinearly-interpolated neural texture lookup from the level l with resolution 2^l × 2^l, with infinite tiling (wrap-around).
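A minimal PyTorch-style sketch of this trilinear lookup is given below. The wrap-around bilinear fetch follows the text; the normalization that maps a kernel radius σ to a continuous pyramid level (and its clamping) is our assumption, since the extracted text does not pin down the convention.

```python
import math
import torch

def bilinear_wrap(tex, u):
    """Bilinear lookup with infinite tiling. tex: (c, R, R); u: (u, v) in [0,1)."""
    R = tex.shape[-1]
    x, y = u[0] * R - 0.5, u[1] * R - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def texel(i, j):  # wrap-around texel fetch (infinite tiling)
        return tex[:, j % R, i % R]
    return ((1 - fx) * (1 - fy) * texel(x0, y0) + fx * (1 - fy) * texel(x0 + 1, y0)
            + (1 - fx) * fy * texel(x0, y0 + 1) + fx * fy * texel(x0 + 1, y0 + 1))

def pyramid_lookup(pyramid, u, sigma):
    """Trilinear MBTF feature query: bilinear in uv, linear in log2 scale.

    pyramid: list of tensors T_l of shape (c, 2**l, 2**l), l = 0..L.
    Assumption: sigma is the kernel radius in uv units, so coarser kernels
    (larger sigma) map to coarser (lower-resolution) pyramid levels.
    """
    L = len(pyramid) - 1
    t = min(max(-math.log2(max(sigma, 1e-8)), 0.0), L)  # continuous level
    lo, hi = math.floor(t), math.ceil(t)
    w2 = t - lo                                          # weight on ceil level
    return (1 - w2) * bilinear_wrap(pyramid[lo], u) + w2 * bilinear_wrap(pyramid[hi], u)
```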
Material decoder. We design our material decoder as a multilayer perceptron (MLP) network to regress the final reflectance from the neural feature queried from the neural pyramid. The input of this network contains c + 4 values, consisting of c values from neural textures and 4 values from the directions (encoded as 2D points on the projected hemisphere). One of the design goals of our architecture is fast evaluation. Therefore, our multi-layer perceptron consists of only 4 layers. Each intermediate layer has 25 output channels. The final 3-channel output is used as the RGB reflectance value of the material. We use ReLU as the activation function for all layers, including the final layer where it clamps the final output to be non-negative. Note that, unlike [Thies et al. 2019] that uses a CNN to process neural textures for global context reasoning in scene rendering, our goal is modeling realistic material with locally complex microgeometry. Therefore, we enforce that our neural textures express the spatial information, encoding complex microgeometry effects; thus a light-weight MLP is able to efficiently decode the texture for our task and is also fast to evaluate.
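The decoder described above is small enough to write out directly. The sketch below follows the stated layer counts (4 linear layers, 25 hidden channels, ReLU throughout) and assumes c = 7 feature channels, the value quoted later in Table 4.

```python
import torch
import torch.nn as nn

class MaterialDecoder(nn.Module):
    """4-layer MLP: (c-channel feature, 2D omega_i, 2D omega_o) -> RGB.

    Widths follow the text: 25 hidden channels, ReLU everywhere,
    including the last layer (clamps reflectance to be non-negative).
    """
    def __init__(self, c=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(c + 4, 25), nn.ReLU(),
            nn.Linear(25, 25), nn.ReLU(),
            nn.Linear(25, 25), nn.ReLU(),
            nn.Linear(25, 3), nn.ReLU(),  # non-negative RGB reflectance
        )

    def forward(self, feature, wi_xy, wo_xy):
        # directions are encoded as 2D points on the projected hemisphere
        return self.net(torch.cat([feature, wi_xy, wo_xy], dim=-1))
```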
Neural MBTF with Neural Offset
Our baseline neural MBTF can already model many real materials; however, it is very challenging for this network to handle highly non-flat materials that have significant parallax and occlusion effects. Although, by increasing its capacity, a big enough neural network can potentially approximate any function, this is not ideal, as bigger networks lead to longer rendering times; they also result in a slower training rate as the neural texture needs to learn correlated information across multiple pixels for different camera directions. We have also observed poor angular stability when learning non-flat materials in a generic way.
To improve results on complex non-flat materials, we introduce a neural offset module: a network that predicts a coordinate offset for the feature texture lookup. Instead of directly using the intersection location u ∈ R^2, we use the neural offset module to calculate a new lookup location:

u_new = u + o(u, ω_i).    (4)

This is shown in Fig. 3. With the help of neural offsets, we can slightly adjust the lookup location of the texture depending on the viewing direction, achieving effects such as parallax.

Fig. 3. An intuition for the fixed function stage of the neural offset module. Instead of predicting the uv-space offset directly, we predict the depth under the intersection with the reference plane, and convert it to the offset by assuming a locally flat surface. Note however that no actual heightfield is used to supervise the method, and for some materials (e.g. volumetric or made from fibers) such a heightfield is not even precisely defined.

Fig. 4. Details of the neural offset module, which is shown in blue in Fig. 2. A bilinear query of the neural offset texture results in a feature vector, which (together with incoming direction) is fed into a multi-layer perceptron. The resulting scalar value is interpreted as a ray offset, which is converted into a 2D offset in texture space by the fixed function stage.

Neural offset. Instead of directly regressing a 2D offset, we train a network to regress a 1D scalar value representing the ray depth at the intersection, which can easily be turned into the final 2D offset given the view direction (see Fig. 3). This makes our model more geometry-aware, easing the neural offset regression task. In particular, the neural offset module consists of 3 components: a neural offset texture T_off, an MLP F_off that regresses the ray depth from T_off, and a final step (a fixed function) that outputs the offset (see Fig. 4). The design of using a neural texture and an MLP is similar to our baseline MBTF network described above, except the texture look-up is just bilinear (no pyramid). Specifically, the ray depth is computed by

d = F_off(T_off(u), ω_i),    (5)

where T_off(u) is the latent feature vector lookup in T_off at the initial location u. The MLP F_off takes the latent vector and the viewing direction as input; it again consists of 4 layers, and each layer outputs 25 channels (except for the last one), with ReLU activation functions in between. Given the estimated ray depth d, the offset is computed by

o(u, ω_i) = d (ω_x, ω_y) / ω_z,    (6)

where ω_x, ω_y, ω_z are the components of ω_i. Therefore, the final form of the neural offset query is:

u_new = u + d (ω_x, ω_y) / ω_z.    (7)

The new lookup location u_new can now be used in place of u to look up a latent vector in the neural texture pyramid in Eq. 2. Note that the network can also learn the 2D offset function o(u, ω_i) directly, but in our experiments the result was worse without the constraint of the hard-coded fixed function. Figure 5 visualizes the predicted offset learned by the module on a highly non-flat material.
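As a concrete reading of the fixed-function stage (Eqs. 6-7), the sketch below converts a predicted ray depth into a uv offset under the locally-flat-surface assumption; the sign and scale conventions are assumptions of this sketch, since the extracted text does not pin them down.

```python
def neural_offset_uv(u, omega_i, depth):
    """Fixed-function stage: ray depth -> uv offset (Eqs. 6-7).

    Standard parallax geometry for a locally flat surface: a point at
    depth d below the reference plane, seen along omega_i, projects to
    a uv location shifted by d * (omega_x, omega_y) / omega_z.
    """
    ox, oy, oz = omega_i          # unit view direction, oz > 0 above the plane
    return (u[0] + depth * ox / oz,
            u[1] + depth * oy / oz)

# Grazing directions (small omega_z) yield larger parallax offsets:
print(neural_offset_uv((0.5, 0.5), (0.6, 0.0, 0.8), depth=0.02))
```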
Full neural MBTF representation. Our full neural representation is modeled by prepending the neural offset module to our baseline neural MBTF network. Basically, we use the neural offset module to get a new location u_new (Eq. 7) by translating the original input u. Then we use that u_new to query a feature vector from the neural texture pyramid (Eq. 3). Finally, we use an MLP material decoder D, in conjunction with incoming/outgoing directions, to get the final reflectance value. Our full neural MBTF can be described as follows:

r(u, σ, ω_i, ω_o) = D(P(u + o(u, ω_i), σ), ω_i, ω_o).    (8)
This whole framework is fully differentiable, enabling end-to-end training that simultaneously optimizes the neural offset, the multiscale pyramid, and the decoder. We train the neural offset module in an unsupervised way. We do not provide any ground truth offsets; in fact, for some materials (e.g. volumetric or fiber-based) there may be no clear surface and therefore no well-defined correct offset. The end-to-end training allows our full model to jointly auto-adjust the multiple neural components and leads to the best visual quality it can achieve. Please refer to section 4 for the training procedure.
DATA GENERATION AND TRAINING
Synthetic data preparation. We generate synthetic MBTF data by first constructing the microgeometry, and using a CPU-based standard path tracer with a custom camera ray generator to render the required queries; we use smoothed directional lighting with unit irradiance on the reference plane, to ensure valid radiometric properties. In most cases, the microgeometry is constructed by meshing and displacing a high-resolution heightfield texture, and driving other reflectance properties (albedo, roughness, metallicity, micro-scale normal) from additional textures. The synthetic MBTF data is generated in about 30 minutes on a 32-core CPU, using 64 samples per query in a commercial rendering engine. In the basket weave example in Fig. 9, we use a fiber-level representation without any meshed surface, applying the fiber shading model of Chiang et al. [2016]; this material is precomputed using the PBRT renderer [Pharr et al. 2018].
Fig. 5. The computed 2D offset, color-coded using red/green for the two components. The blue circle shows the incoming ray direction on a projected hemisphere.

Table 2. Errors for images in Fig. 6. Note that both our per-pixel MSE and perceptual LPIPS scores are consistently better.

Training. Our neural module is fully differentiable and can be trained end-to-end. As the input to the module we provide 7D queries consisting of the light direction, camera direction, uv-location and kernel radius. The network produces RGB colors for the given queries in a forward pass, and back-propagation updates the network weights and neural textures. One training batch consists of around a million queries (2^20); this number is much larger than the number of input MBTF queries per texel, so one batch updates many texels. We train the network until convergence (typically 30000 iterations). The training time is about 45 minutes for a 512^2 maximum resolution, and about 90 minutes for a 1024^2 maximum resolution, using a single NVIDIA RTX 2080 Ti GPU. If we optimize neural feature vectors individually, this can result in noisy neural textures. As a result, objects rendered with such materials will have a noisy appearance, reminiscent of Monte Carlo noise. This is especially true for the neural offset texture. We have developed a technique to avoid those problems. During training, we apply a Gaussian blur with an initial standard deviation of σ_0 = 8 texels to the neural textures. As the training progresses, we relax σ exponentially over time with a half-life of h = 3333 iterations: σ(t) = σ_0 · 2^(−t/h).
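The progressively decaying blur schedule is a one-liner; a sketch using the constants from the text:

```python
def blur_sigma(iteration, sigma0=8.0, half_life=3333.0):
    """Exponentially decaying Gaussian-blur std for the neural textures:
    sigma(t) = sigma0 * 2**(-t / h), with sigma0 = 8 texels and
    h = 3333 iterations, as given in the text."""
    return sigma0 * 2.0 ** (-iteration / half_life)

print(blur_sigma(0), blur_sigma(3333), blur_sigma(30000))
# -> 8.0, 4.0, ~0.016 texels (the blur effectively vanishes by convergence)
```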
RENDERING
Table 3. MSE errors (scaled ×10^−3) for images rendered across multiple levels of detail, shown in Fig. 8. Note that the errors are generally becoming lower for more distant zooms, i.e. larger filter kernels.

Our neural materials can be integrated into Monte Carlo rendering engines, so that light can seamlessly interact across regular objects and objects with neural materials. We implemented our final rendering in the Mitsuba rendering engine [Jakob 2010]. If a surface with our neural material is hit, we need to evaluate the neural module. We also need to sample an outgoing direction for indirect illumination. For simplicity, we sample indirect rays according to the cosine-weighted hemisphere, which is sufficient for our current examples. Note that for each shading point, we need to evaluate the material up to twice: for the light sample and for the BRDF sample.
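For reference, cosine-weighted hemisphere sampling, as used above for indirect rays, can be sketched as follows (assuming a local frame with z along the surface normal):

```python
import math
import random

def sample_cosine_hemisphere():
    """Cosine-weighted direction sampling about the surface normal (z-up).

    pdf(omega) = cos(theta) / pi; returns (direction, pdf).
    """
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # cos(theta)
    return (x, y, z), z / math.pi
```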
Table 4. The total number of network parameters. For fair comparison, we only count the number of weights in the decoder of Rainer et al.'s method, as the encoder is not needed for deployment in a rendering system. We use 7 channels per texel for both the neural offset texture and the feature texture pyramid.

                         Ours    Rainer    Ratio
# of network weights     3332    38269     11.5
# of texture channels    14      38        2.7
There are multiple choices for implementing our neural module in a practical rendering system. We could certainly port its implementation to C++, since the neural network is a simple 4-layer MLP, which just requires 4 dense matrix multiplications. However, our current solution is to reuse the PyTorch code to ensure exactly matching outputs between training and rendering. Because the heavy-lifting operations are anyway implemented in C++/CUDA via the PyTorch framework, the Python overhead is negligible. This required some modifications to the Mitsuba path integrator. We use material query buffers: in our integrator, we trace a path until we encounter a neural material. When that happens, we put the query in the buffer, and continue tracing. When the buffer is full or if there are no more active paths to trace, we send the buffer to PyTorch/GPU for evaluation.
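A minimal sketch of this buffered evaluation loop is shown below; the class and method names are illustrative, not Mitsuba's or PyTorch's actual API.

```python
class NeuralMaterialQueue:
    """Sketch of the material query buffer described above.

    The path tracer appends (u, sigma, omega_i, omega_o) queries as it
    encounters neural materials and flushes them to the GPU in one batch.
    `model.evaluate_batch` is a hypothetical hook around the PyTorch model.
    """
    def __init__(self, model, capacity=2**18):
        self.model, self.capacity, self.queries = model, capacity, []

    def enqueue(self, u, sigma, wi, wo):
        self.queries.append((u, sigma, wi, wo))
        if len(self.queries) >= self.capacity:
            return self.flush()   # buffer full: evaluate in one GPU call

    def flush(self):
        if not self.queries:      # also called when no active paths remain
            return []
        batch, self.queries = self.queries, []
        return self.model.evaluate_batch(batch)
```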
RESULTS
In this section, we showcase the abilities of our neural method to represent and render a range of material appearances, with training data coming from different input representations: displaced heightfield geometry with varying reflectance properties, fiber geometry, and measured data.
Fig. 6. Comparison of Rainer et al. [2020], our baseline method (without neural offset), our full method, and a reference computed by path-tracing of synthetic material microstructure. The materials are mapped to a plane, viewed at an angle and lit by a single light slightly to the left. Our baseline method already outperforms Rainer et al., despite being trained with fewer BTF queries. Our neural offset adds even better handling of parallax effects. The small insets in "Ours" show the color-coded 2D neural offset. The match with reference is close, with a minor loss in shadow contrast (the hard shadows form a reflectance discontinuity which is hard to learn perfectly).

Fig. 7. Real BTF data (carpet07, carpet11, leather11, fabric01). We also observe good results when applying our method to real BTF data acquired from physical material samples. Our method is trained with a random set of BTF queries, and these can come from any source.
Comparisons. In Figure 6, we show results rendered on a flat plane, with camera and directional light at an angle. We compare several different methods: Rainer et al. [2020], our baseline method (without neural offset), our full method (with neural offset), and a ground truth computed by path-tracing of the synthetic material structure. The materials are mapped to a plane, viewed at an angle and lit by a single light slightly to the left. Our baseline method already outperforms the universal encoder of Rainer et al., despite being trained with fewer BTF queries. We believe this is due to our decoder-only architecture, which can adapt to the material and benefits from a stochastic distribution of the input BTF queries, and our improved training techniques (especially the progressively decaying spatial Gaussian blur). The multi-resolution nature of our solution also helps. On the other hand, Rainer et al.'s solution has the benefit of very fast encoding in case the queries are already in the required uniform format, and its performance could likely improve if the encoder was retrained on a different distribution of materials that matches our examples more closely.
Our neural offset adds even better handling of parallax effects on top of our baseline result. The small insets in "Ours" show the color-coded 2D neural offset. The match with reference is close, with a minor loss in shadow contrast (the hard shadows form a reflectance discontinuity which is hard to learn perfectly).
Real BTF results. While we mostly focus on synthetically generated BTF queries, we also support fitting real BTF data (acquired from physical material samples and also used by Rainer et al.) using our neural architecture. This is demonstrated by Figure 7. Since our method is trained with a random set of BTF queries, these can come from any source.
Quantitative evaluation. We also measure the numerical error for images in Fig. 6 when compared to the reference, both using per-pixel MSE (Mean Squared Error) and perceptual LPIPS (Learned Perceptual Image Patch Similarity) scores. Our scores, shown in Table 2, are consistently better than Rainer et al.'s.
Multiresolution results. In Table 3, we also report the MSE scores for images rendered across multiple levels of detail, shown visually in Fig. 8. We observed that with higher (coarser) levels in the multiresolution hierarchy, the errors actually tend to decrease. This is because materials at a coarse level of detail tend to have fewer high-frequency details. As a result, it becomes even easier to optimize the corresponding feature vectors from the neural texture pyramid and the network at those levels of detail.
Network Size. We also compare our neural network sizes to Rainer et al.'s in Table 4. Not only does our method have smaller errors, it also has 11.5 times fewer weight parameters and uses less than half the number of channels for the neural textures compared to Rainer et al. This is because the task of Rainer et al.'s decoder is significantly harder. It needs to decode a latent vector which was encoded by a universal (and necessarily imperfect) encoder. In contrast, the task of our network (decoder) is to decode a more specialized latent vector, which was created through direct optimization for a specific material. For this reason, our network requires substantially fewer weights. By dividing our network into two stages via a neural offset module, we increase the network representative power compared to a simple MLP architecture.
Additional renderings. In Figure 9, we show several fabric-like materials rendered at multiple levels of zoom. The top three fabrics are modeled as heightfields with spatially varying reflectance properties. The textures driving the height and reflectance are from the Substance Source library [Adobe 2021], though any textures can be used. The last Basket Weave example is constructed from yarns using actual fiber geometry shaded with the model of Chiang et al. [2016] and does not use a heightfield. We show several camera and light directions in the right column. In Figure 10, we show a further selection of materials rendered with our method, showing different camera views and light directions. Please make sure to view their animated versions in the supplementary video, which is important to fully appreciate parallax/occlusion effects.
Limitations. One limitation of our current implementation is simple importance sampling. For more specular materials, this could be extended by fitting the parameters of a parametric pdf model (e.g. a weighted combination of a Lambertian and microfacet lobe of a given roughness) per texel. The benefit of this solution would be that no additional neural network is required to sample, and path continuations can be decided independent of network inference. Another limitation is that very specular (e.g. glinty) materials would be hard to handle without blurring; this could be addressed by inserting randomness into our decoder, to synthesize some specular detail instead of attempting to fit it exactly.
CONCLUSION AND FUTURE WORK
We presented a neural architecture that can be trained to accurately represent a variety of complex materials at different scales. Our neural architecture learns a multi-scale bidirectional texture function (MBTF): a 7-dimensional function, with two dimensions each for the query location, incoming and outgoing direction, and one dimension for the filter kernel radius. As part of the architecture, we introduced a neural offset technique for rendering complex geometric appearance including parallax effects without tessellation, trained in an unsupervised manner. Our encoder-less architecture can be trained from a small number of random queries per texel (200-400). These queries can come from real or synthetic data. We show a number of results, demonstrating high quality appearance with accurate displacement, parallax, self-shadowing, and other effects. We believe this approach will stimulate further research on neural representations of materials that are difficult or expensive to handle with classical methods. The most exciting future work avenue, in our opinion, is to fully explore the set of material structures that can be procedurally generated as inputs to our method. Other interesting directions include more advanced importance sampling (e.g. by fitting multi-lobe parametric distributions per texel); the parameters of such a distribution could be stored in small additional textures. A straightforward extension would be to support semitransparent materials by predicting alpha transparency in addition to reflectance. Yet another direction would be to make the method support glinty specular effects, perhaps by inserting an additional random vector into the decoder, and training it with a GAN loss to generate stochastic detail matching the input data distribution.
Socioeconomic differences in school dropout among young adults: the role of social relations
Background School dropout in adolescence is an important social determinant of health inequality in a lifetime perspective. It is commonly accepted that parental background factors are associated with later dropout, but to what extent social relations mediate this association is not yet fully understood. Aim: To investigate the effect of social relations on the association between parental socioeconomic position and school dropout in the Danish youth cohort Vestliv. Methods This prospective study used data from questionnaires in 2004 and 2007 and register data in 2004 and 2010. The study population consisted of 3,054 persons born in 1989. Information on dropout was dichotomised into those who had completed a secondary education/were still attending one and those who had dropped out/had never attended a secondary education. Logistic regression analyses were used to investigate associations between parental socioeconomic position and dropout at age 21, taking into account effects of social relations at age 15 and 18. Results A large proportion of young people were having problems with social relations at age 15 and 18. In general, social relations were strongly related to not completing a secondary education, especially among girls. For instance, 18-year-old girls finding family conflicts difficult to handle had a 2.6-fold increased risk of not completing a secondary education. Young people from low socioeconomic position families had approximately a 3-fold higher risk of not completing a secondary education compared to young people from high position families, and the estimates did not change greatly after adjustment for social relations with family or friends. Poor relations with teachers and classmates at age 18 explained a substantial part of the association between income and dropout among both girls and boys. Conclusions The study confirmed a social gradient in completion of secondary education. Despite the fact that poor social relations at age 15 and 18 were related to dropout at age 21, social relations with family and friends only explained a minor part of the socioeconomic differences in dropout. However, poor social relations with teachers and classmates at age 18 explain a substantial part of the socioeconomic difference in dropout from secondary education.
Background
Some of the strongest determinants of health are structural factors such as national wealth, income inequality, and access to education [1]. A Danish report on determinants of health inequality in a lifetime perspective points out poor educational outcome in adolescence as one of the most important of these determinants [2]. In Denmark, approximately 25 % of the 25-year-olds had not completed a secondary education in 2013 [3]. Those who do not complete a secondary education are at greater risk of developing health problems later in life [4], and across OECD countries, people with poor educational outcome are less likely to be participants in the work force [5] and are at greater risk of sickness and disability in young adulthood [6]. Furthermore, a widening of social inequality in life expectancy between those who obtained a secondary education and those who did not has been reported in Denmark in the recent years [7], indicating that dropout is indirectly related to the development of health inequality during life [2,4].
One of the strongest risk factors of dropout is parental socioeconomic position [8][9][10][11]. Parents' educational level, occupational prestige, and family income have been shown to have direct and indirect relationships with youths' later educational outcome [8,12]. Academic achievement during compulsory school has also been found to be strongly associated with dropout from secondary school [13,14]. Previous studies have shown that parental involvement in their offspring's schooling is an important determinant of both later academic achievement and dropout [15][16][17]. However, a study by Blondal et al. showed that parenting style more strongly predicts school dropout than parental involvement in school activities [18]. Apart from family relations, a good teacher-student relationship was found to be associated with lower student dropout rates [19], and close friendships were found to stimulate a sense of school belonging and academic performance among high school students [20][21][22], and a positive atmosphere at school increases the educational aspirations of young people [23].
Although there is some indication that adolescents' social relations with family, friends, teachers, and classmates influence later academic achievement, the influence on school dropout has not been adequately investigated. In order to reduce social inequality, it is important to identify potential conditions that early in life mediate the relation between parental background factors and later school dropout. Identification of such mediators potentially offers important implications for prevention and intervention.
The purpose of this prospective study was to investigate the effect of social relations on the association between parental socioeconomic position and dropout from secondary education in a Danish youth cohort. Gender differences appear to play a role in the way socioeconomic measures and health are related [24]. A previous study within the Vestliv cohort showed that stress levels in girls were most strongly associated with lower parental education and that stress levels in boys were most strongly associated with parental income [24]. To evaluate the impact of the two different measures of socioeconomic position on social relations and the risk of school dropout, results were presented for each gender separately. Social relations were grouped into three different dimensions: social relations in the family, social relations with friends, and social relations at school (with classmates and teachers). To investigate the independent impact of different social environments in early and late adolescence, information about social relations was collected when the participants were 15 and 18 years old. The time between these two age points represents a very important stage of the life course, with a transition from a more family centred environment to a broader environment more open to the influence of peers and non-family members.
The following research questions were addressed:
1) Are social relations at age 15 and 18 related to dropout at age 21?
2) Is a social gradient in dropout present among 21-year-olds in Denmark?
3) Do social relations at age 15 and 18 mediate the association between parental socioeconomic position and dropout?
4) Are the relations affected by the choice of socioeconomic measure?
5) Are there gender differences in the associations between social relations, socioeconomic position and dropout from secondary education?
Sample
The source population of the prospective cohort study Vestliv consisted of all individuals born in 1989 and living in the county of Ringkjoebing, Denmark, in early April 2004. A total of 3,681 fulfilled these criteria, and contact information was retrieved from the Central Office of Civil Registration and from public schools in the county of Ringkjoebing. All 3,681 individuals were contacted and asked to fill out an initial questionnaire during school hours when they were 15 years of age. Those not at school on the day of collection received the questionnaire by post, resulting in a participation rate of 83 % (n = 3,054). Altogether 1,399 children received the questionnaire by post and 58 % completed it. A follow-up survey was conducted in 2007 when the participants were aged 18 using both e-mailed and postal questionnaires. This resulted in 2,181 participants (71 % of initial). To gather information on family socioeconomic position and dropout from secondary education, respondents were linked to their parents or guardians by using their personal identification number (CPR number), which is given to every inhabitant in Denmark at birth (or upon entry for immigrants) [25]. The study sample of the present report was defined by the 3,054 participants who answered the initial questionnaire and with available information on outcome and at least one of the exposure variables. The study was approved by the Danish Data Protection Agency.
Measures

Outcome

Completion of secondary education. In Denmark, education beyond compulsory school (secondary education) consists primarily of a high school academic track of 3 years, or vocational education, which lasts between 2 and 4 years. The outcome of the present study was completion of a secondary education after compulsory school in October 2010, when the participants were 21 years old, which allowed a follow-up of 6.5 years. Data on secondary education were based on register information derived from Statistics Denmark [26]. The Danish Education Registers collect information on all individuals attending education in Denmark and link information within and across years through the CPR number. Generally, the registers are considered of high quality [26]. The participants were categorised into those who (1) Completed/were attending: consisting of participants who had completed a secondary education or were still attending one, and (2) Dropped out/never attended: if they had dropped out of their last secondary education and never attended another or if they had never attended a secondary education.
Exposure variables

Socioeconomic position
Information from registers about highest attained education in the household and household income in year 2003 was chosen as measures of socioeconomic position. Based on the source population (N = 3,681), yearly household income was recoded into tertiles corresponding to lowest (<61,770 EUR), middle (61,770-80,531 EUR), and highest (>80,531 EUR) [27]. Highest attained education in the household was recoded into three categories: < 10 years, 10-12 years, >12 years [26]. If the participants' parents were divorced, information stemmed from the household at which the participants' address was listed.
Social relations with parents, friends, teachers and classmates
Social relations were conceived in a general framework as having three different dimensions: 1. Social relations in the family, 2. Social relations with friends, 3. Social relations with teachers and classmates. Information about social relations was based on questionnaire information collected at age 15 and age 18. At age 15 the General Functioning Scale was used as a measure of the social climate in the family. It is made up of twelve items that assess the overall health/pathology of the family and is one of seven scales from the Family Assessment Device (FAD) [28]. Low scores indicate healthier functioning than higher scores. In this sample the mean score was 1.75 (SD 0.52) and Cronbach's alpha was 0.85. A cut-off at the 75th percentile (2.08) divided the scores into good/poor family functioning. As a measure of the social climate in the family at age 18, a question was asked about whether it is difficult to handle conflicts in the family (yes, sometimes or often vs. no).
Social relations with friends were measured by questions at age 15 and 18 about (1) having at least one friend to be confidential with (yes vs. no); (2) talking to friends about personal worries (very often, often or sometimes vs. not so often or rarely); (3) being satisfied with the help and support they get from friends (very often, often or sometimes vs. not so often or rarely); and (4) whether handling conflicts with friends or partner is difficult (no vs. yes, sometimes or often) [29,30].
Social relations with teachers and classmates at age 15 and 18 were measured by questions on whether (1) teachers help with school work when it is needed (strongly agree or agree vs. disagree or strongly disagree); (2) classmates are doing well together (always, mostly or sometimes vs. rarely or never); (3) they feel left out by the other pupils in the class (always, mostly or sometimes vs. rarely or never); (4) feel attached to the classmates (strongly agree, partially agree or neither agree nor disagree vs. partially disagree or strongly disagree); or if (5) teachers help with personal problems if it is needed (strongly agree, partially agree or neither agree nor disagree vs. partially disagree or strongly disagree) [30,31].
Statistical analyses
A correlation analysis between measures of social relations from each time point was performed initially and no correlation exceeded 0.30 (2004) or 0.35 (2007).
Some indication of effect modification between social relations and gender was seen. Of the 13 measures of social relations, 5 showed significant interactions with gender. At age 15 these were: talking to friends about personal worries, p = 0.001; being satisfied with the help and support they get from friends, p = 0.04; and feeling left out by other pupils in the class, p = 0.02. At age 18 these were: finding it difficult to handle conflicts with friends or partner, p = 0.02; and feeling attached to classmates, p = 0.04. Gender-specific descriptive data are presented for dropout, socioeconomic position and social relations at age 15 and 18. Chi-square tests were performed to test for gender differences.
Multiple logistic regression analyses were performed to examine gender-specific associations between socioeconomic position, different aspects of social relations, and school dropout [32]. The risk estimates were odds ratios, and because the prevalences of the social problems were high, the odds ratios would tend to be skewed to a higher level compared to relative risks [33].
We first modelled how aspects of social relations were associated with not completing a secondary education (Table 2). Then we modelled the simultaneous effects of socioeconomic position and social relations on completion of secondary education after adjusting for age on completion of 9th grade (Table 3). Adjustments for social relations were done for age 15 and age 18 separately because observations at the two time points were correlated.
All analyses were carried out in the Stata statistical package (V.12.0; StataCorp, College Station, TX, USA).
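For illustration, the gender-stratified logistic models described above could be set up as follows (a sketch in Python/statsmodels rather than Stata; the file and variable names are hypothetical, not the cohort's actual ones):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vestliv.csv")  # hypothetical file: one row per participant

for gender, sub in df.groupby("gender"):
    # Model 1: dropout at age 21 on household income tertile,
    # adjusted for age at completion of 9th grade
    m1 = smf.logit("dropout ~ C(income_tertile) + age_9th_grade",
                   data=sub).fit(disp=False)
    # Model 8: additionally adjusted for relations with teachers and
    # classmates at age 18; attenuation of the income odds ratio relative
    # to Model 1 suggests mediation by school relations
    m8 = smf.logit("dropout ~ C(income_tertile) + age_9th_grade"
                   " + teacher_help18 + classmates_well18 + left_out18",
                   data=sub).fit(disp=False)
    for name, m in [("Model 1", m1), ("Model 8", m8)]:
        print(gender, name, np.exp(m.params).round(2).to_dict())  # odds ratios
```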
Results
Interactions between measures of socioeconomic position and measures of social relations were tested, but none of the tests showed a significant contribution of the interaction terms. Table 1 shows the prevalence of aspects of completion of secondary education, social relations, and the distribution of family socioeconomic position, all together and for girls and boys, separately. Nine percent of the young people had never attended or had dropped-out of the last attended secondary education at the age of 21. A relatively large proportion of young people had problems with relations with family, friends, teachers, or classmates at the age of 15 and 18. At age 15, more boys than girls reported not having a friend to be confidential with (13 % vs. 8 %) and 46 % of the boys reported difficulties in talking to friends about personal worries, compared to 14 % of the girls. More girls than boys felt left out by other pupils in the class (16 % vs. 11 %). At age 18, more girls than boys experienced difficulties in handling family conflicts (43 % vs. 37 %) and conflicts with friends or partner (43 % vs. 37 %). At age 18, 32 % did not feel that teachers helped with personal problems if they needed it.
Socioeconomic differences in social relations
In general, poor socioeconomic position was related to poor social relations with family, friends, teachers, and classmates. Individuals from families with low income or low educational level more often reported poor family functioning and experienced less help and support from friends than their peers at age 15 (ORs between 1.61 and 2.05). Girls from low socioeconomic position families more often reported not having a friend to be confidential with, especially at age 18 (low household income: OR 3.12 (95 % CI 1.70-5.71) and low educational level in the family: OR 3.23 (95 % CI 1.41-7.41)) [see Additional file 1].
Social relations and not completing a secondary education
Social relations with family, friends, teachers, and classmates in general were strongly associated with not completing a secondary education, especially among girls (Table 2). For instance, not being satisfied with help and support from friends at age 15 was strongly associated with not completing a secondary education, especially among the girls (OR 3.02 (95 % CI 1.80-5.07), boys OR 1.73 (95 % CI 1.03-2.91)). Classmates not doing well together at age 15 was strongly related to not completing a secondary education in both girls and boys (girls: OR 3.82 (95 % CI 2.20-6.63), boys: OR 2.14 (95 % CI 1.02-4.48)). 18-year-old girls experiencing family conflicts difficult to handle had a 2.6-fold increased risk of not completing a secondary education (girls: OR 2.59 (95 % CI 1.57-4.27), boys: OR 1.34 (95 % CI 0.73-2.47)) compared to those not experiencing family conflicts difficult to handle.
Socioeconomic position, social relations, and not completing a secondary education

Table 3 shows that young people from the lowest socioeconomic position families had approximately a 3-fold higher risk of not completing a secondary education compared to young people from the highest socioeconomic position families (Model 1), and a significant trend was seen across socioeconomic groups. Socioeconomic differences in completion of secondary education did not change substantially after adjustment for family relations (Models 2 and 6) or relations with friends (Models 3 and 7).
Adjusting for social relations with classmates and teachers at age 18 reduced the association between family income and the chance of completing a secondary education (OR changed from 3.09 (95 % CI 2.23-4.27) to 1.51 (95 % CI 0.76-2.97)) (Model 8).
In general, the large socioeconomic differences in young people's chance of completing a secondary education remained after simultaneous adjustments for all social relations, both at age 15 (Model 5) and age 18 (Model 9). However, adjusting for all social relations at age 18 reduced the strength of the association between family income and the chance of completing a secondary education considerably. The odds ratio changed from 3.09 (95 % CI 2.23-4.27) in the crude analysis to 1.44 (95 % CI 0.72-2.90) in the fully adjusted analysis, but this was not the case when adjusting for social factors from age 15 (OR changed to 2.67 (95 % CI 1.88-3.78)). The associations between family educational level and the chance of completing a secondary education remained strong after adjustment for all social relations both at age 15, Model 5 (OR 3.07 (95 % CI 2.07-4.56)) and age 18, Model 9 (OR 2.98 (95 % CI 1.37-6.47)).
Discussion
The present study showed that poor social relations with parents, friends, teachers, and classmates are common among 15- and 18-year-old Danish adolescents. Among both girls and boys, the risk of not having completed a secondary education at age 21 increased if an individual had experienced poor social relations, but at the same time poor social relations with family and friends only explained a minor part of the socioeconomic differences in dropout from secondary education. Poor social relations with teachers and classmates at age 18 explained a large part of the association between income and dropout among both girls and boys.

Most previous research on the influence of social relations on educational outcomes has focused on parents' investment and involvement in their children's school, and parental interest appears to facilitate the offspring's motivation for schoolwork and improve both academic achievement and adult educational outcome [8,16,34]. Henry et al. reported parental investment in school as a mediator of the relationship between socioeconomic status and students' expectation to graduate from high school [8], but they did not investigate whether the students succeeded in graduating or not. On the other hand, a study by Blondal et al. found that parenting style at age 14 was a stronger predictor than parental involvement in terms of having completed upper secondary school by age 22 [18].

Some gender differences were found in the current study. The associations between parental socioeconomic position and dropout were strong in both genders, and especially among the boys, which is consistent with previous findings [9,35]. At the same time, poor social relations were more strongly associated with not completing a secondary education among girls than among boys. This finding stresses the importance of parents, teachers, and other adults being in contact with adolescent girls to help stimulate positive social relations.
Other studies have confirmed strong associations between negative relations with parents [12,36], friends, teachers, and classmates [19][20][21][22] and poorer educational outcomes in children, but only a few studies have evaluated the influence of poor social relations on the association between socioeconomic position and dropout. A previous study documented that, in addition to lower socioeconomic position being related to school dropout, students from lower socioeconomic families were generally more disengaged in school than students from higher socioeconomic families [37]. In addition, Melby et al. found that the family income of 7th grade students has both a direct and an indirect effect on educational attainment through supportive parenting [12]. Whether the positive effect of social relations on educational outcome is due to increased school motivation and engagement among the students needs further investigation.
Tests for trend overall showed a clear dose-response pattern between the level of household income or the highest education in the household and the young people's completion of secondary education. The only tests that were not statistically significant were those between income level and school completion after adjustment for social factors at age 18 (Models 8 and 9).
Previous research suggests that different measures of socioeconomic position, such as parental income and education, affect health and future social status through different pathways [38]. Bourdieu differentiates between two independent yet interrelated mechanisms: economic capital (income) and cultural capital (educational level). He argues that having low levels of economic capital could make a person more prone to living in situations that are more stressful, e.g. lack of material resources, whereas low levels of cultural capital would influence the way a person copes with stressful situations [39]. By including highest education in the household and household income as two separate exogenous variables, we were able to evaluate the contribution of each socioeconomic component. We found both measures related to dropout in young adulthood, but the results indicate that they are related in slightly different ways and that the mechanisms to some extent vary by gender. In general, parental educational level (cultural capital) appeared to have a larger influence on boys' chances of completing a secondary education than household income (economic capital) when social relations were taken into account, whereas among girls, no clear pattern was observed. This finding is in line with the results of a study in a Norwegian male population [40].
In the present study, poor social relations with teachers and classmates at age 18 seemed to explain part of the socioeconomic difference in dropout. Actually, it seemed that social relations with teachers and classmates were mediators of the association between household income and completion of a secondary education but not between parental educational level and completion of secondary education. The reason for the difference between the estimates of the two socioeconomic measures is not obvious. However, the results indicate that the importance of social relations at school increases from age 15 to 18 concurrently with the natural transition during adolescence, especially among young people from stressful environments due to low economic capital. It seems that late adolescence is an important stage of the life course, with a transition from a strong parental influence to greater influence of classmates, teachers, and other non-family members.
This study features a relatively high initial participation rate of 83 %, of whom 71 % responded again at follow-up in 2007. Additional strengths of the study are the prospective design and the complete follow-up achieved through the use of register-based data. At the same time, the use of both questionnaire and register-based data minimises the risk of common method variance [41].
It is important to emphasise that the questions asked about social relations with classmates and teachers at the two different age points are not all identical. As such, the difference between social relations' mediating role at ages 15 and 18 might be attributable to the different constructs that were measured rather than the age periods per se.
Some of the missing answers to the questions about social relations at age 18 could be due to school dropout prior to this age. Altogether 147 participants reported being out of school when they completed the first followup questionnaire at age 18. This selection problem could result in bias due to missing information from some of the adolescents with highest risk of negative educational outcome. However, it is not clear how this missing information may have influenced the results.
The high frequency of young people attending or having completed a secondary education (91 %) by age 21 indicates that some selection into the Vestliv cohort has occurred. A previous study on the same data material demonstrated that the participants had slightly better school abilities and more often came from homes with two adults, higher income, or higher educational level. These differences increased at subsequent follow-ups. Although certain characteristics were related to those who participate initially and at follow-ups, this did not have any large influence on the relative risk estimates measured in the study. This is reassuring for the validity of the relative estimates in the current study [42].
Social relations with family, friends, teachers, and classmates in general only explained a small part of the association between socioeconomic position and dropout. It is likely that other aspects such as major life events like death or illness in the family, divorce, or living with one parent could potentially influence the chance of completion as well. Including such variables in future studies is recommended.
The objective of this study was not to study social inequality in health per se but to address some potential determinants that eventually could lead to poor health outcomes. Addressing inequality in young people's educational outcome has multiple potential benefits that extend beyond reductions in health inequalities. If this inequality could be reduced, it would enable young people to maximise their capabilities and eventually be able to participate equally with others in society. Given the relatively low social inequality in Denmark, the results can be difficult to generalise to other, more unequal countries. However, the fact that the difference in life expectancy between those who complete secondary education and those who do not is increasing in Denmark [7] indicates that positive social relations that prevent school dropout are indirectly related to the prevention of health inequality later in life [2,4].
Conclusion
This study confirmed a social gradient in completion of secondary education among Danish students. Although poor social relations at age 15 and 18 were related to dropout at age 21, social relations with family and friends only explained a minor part of the socioeconomic differences in dropout from secondary education. However, poor social relations with teachers and classmates at age 18 explained a substantial part of the socioeconomic difference in dropout from secondary education. The findings suggest that stimulating positive social relations with classmates and teachers may benefit all students and could potentially reduce the risk of adolescents from economically disadvantaged families not getting a secondary education, which is one of a number of life events that eventually could lead to social and health inequality.
Heavy metals in sediment and their accumulation in commonly consumed fish species in Bangladesh
ABSTRACT Six heavy metals (chromium [Cr], nickel [Ni], copper [Cu], arsenic [As], cadmium [Cd], and lead [Pb]) were measured in sediments and soft tissues of eleven commonly consumed fish species collected from an urban river in the northern part of Bangladesh. The abundance of heavy metals in sediments varied in the decreasing order of Cr > Ni > Cu > Pb > As > Cd. The ranges of mean metal concentrations in fish species, in mg/kg wet weight (ww), were as follows: Cr, 0.11–0.46; Ni, 0.77–2.6; Cu, 0.57–2.1; As, 0.43–1.7; Cd, 0.020–0.23; and Pb, 0.15–1.1. Target hazard quotients (THQs) and target carcinogenic risk (TR) showed the intake of As and Pb through fish consumption were higher than the recommended values, indicating the consumption of these fish species is associated with noncarcinogenic and carcinogenic health risks.
During the last few decades, rapid urbanization and industrial development have provoked serious concerns for the aquatic environment. About 80% of the world population is facing an increasing threat to water security, 1,2 and sediments in most urban rivers have been contaminated by heavy metals. [3][4][5] It has been well documented that surface sediment acts as a sink of various contaminants and poses a risk to water quality through biogeochemical exchanges with the overlying water body. 6 Chromium (Cr), nickel (Ni), copper (Cu), arsenic (As), cadmium (Cd), and lead (Pb) are some of the most common heavy metal pollutants in the environment. [7][8][9] However, these metals from natural and anthropogenic sources 10,11 may enter into the aquatic environment and pose serious threats due to their toxicity, 12 persistence, and bioaccumulation. 13,14 Usually, in unaffected environments, the concentration of most heavy metals is low and mostly derived from mineralogy and weathering. 15 Sources such as industrial effluents, agricultural runoff, transport, burning of fossil fuels, animal and human excretions, geologic weathering, and domestic waste contribute to the accumulation of heavy metals in water bodies. 4,5,[16][17][18] Heavy metal pollution in the environment has become a wide concern owing to the ever-increasing contamination of water, soil, and food in many regions of the world, particularly in some developing countries such as Bangladesh. 9,17,[19][20][21] Heavy metals are not only a threat to the public water supply; they also pose risks to human health through consumption of aquatic products, especially fish. 17,22,23 Fish, which generally accumulate contaminants from aquatic environments, have been widely used in food safety studies. 24,25 Therefore, studies on bioaccumulation of heavy metals in fish species are important in determining the tolerance limits in fish species, the effects on fish, and biomagnification through the food chain. 26 Fish are an important part of the human diet and a good indicator of environmental contamination by a number of substances, including heavy metals. Moreover, fish are at the top of the aquatic food chain 25,27,28 and therefore normally accumulate heavy metals from food, water, and sediments. 27,28 The accumulated toxic metals in fish can counteract their beneficial effects; several adverse effects of heavy metals on human health have been known for a long time. 24,29 These may include serious threats such as renal failure, liver damage, cardiovascular diseases, and even death. 5,30 Little is known about the bioavailability to aquatic organisms of sediment-associated contaminants. 31 Therefore, many international monitoring programs have been established to assess the quality of fish (in terms of metal concentration) for human consumption and to monitor the health of the aquatic ecosystem. 32 Bangladesh is one of the largest delta regions in the world, formed by the Ganges, Brahmaputra, and Meghna rivers, whose basins spread over 5 countries: Bhutan, Nepal, China, India, and Bangladesh. 16 In Bangladesh, huge amounts of untreated industrial waste are discharged daily into open water bodies and adjacent lands. In addition, a considerable amount of heavy metal-enriched suspended solids comes down from neighboring countries such as India through the Teesta and the Brahmaputra rivers. 16

Bogra District, known as the northern capital of Bangladesh, is situated on the bank of the river Korotoa, which is connected to the rivers Teesta and Brahmaputra. The studied reach of the Korotoa River is the only active section, with intensive district traffic, and supplies water for the people living adjacent to this river. Industrial activities and irrigation also depend on the Korotoa River. The river sediments are traditionally dredged and used as an amending material for agricultural soil. In addition, the river's aquatic products, such as fish, serve as a key source of food for the local inhabitants. Overexploitation, mismanagement, and discharge of improperly treated industrial effluents into the Korotoa River create a great challenge for the ecosystem balance. 16 Thus, the study river has recently become a public concern due to its extreme pollution. To date, no scientific research regarding heavy metal issues on the study river has been conducted. Therefore, the objectives of this study are to evaluate the levels of heavy metals in surface sediment, to observe the metal accumulation in eleven fish species collected from the Korotoa River, and to assess the health risk due to fish consumption in Bangladesh.
Description of the study area
This study focused on the Korotoa River, located in the northern part of Bangladesh. The study river originated from the Himalayas, the mother of numerous rivers. Originating from the northern frontier of Bhutan, the Korotoa enters Bangladesh territory through the Darjeeling and Jalpaiguri districts of West Bengal, India, and forms the boundary between the Dinajpur and Rangpur districts in Bangladesh. For the present study, we selected sites of the Korotoa River that flow through the urbanized area of Bogra District, which covers an area of about 71.56 km². The total population of this district is about 350,397, and it is situated between 24°84′91.82″ N and 89°37′29.57″ E. 16,33 Thousands of villages, towns, and commercial places such as Shibganj, Mohasthangarh, Bogra, and Sherpur have been built on both sides of the Korotoa River. Mohasthangarh, the capital of ancient Pundranagar, still stands beside the Korotoa as a witness of history in Bangladesh.
Sample collection and preparation
The sampling was conducted in August and September of 2013. A total of 30 composite sediment samples were collected from 10 different locations of the Korotoa River, situated in the northern part of Bangladesh (Figure 1). At each point, 3 composite sediment samples were collected using a standard protocol. 34 The riverbed sediment samples were taken at a depth of 0-5 cm using a portable Ekman grab sampler. Three composite samples of approximately 200 g were collected at each station. The upper 2 cm of each sample was taken from the center of the catcher with an acid-washed plastic spatula to avoid any contamination from the metallic parts of the sampler. We collected about 110 samples of eleven different fish species: Channa punctata, Awaous grammepomus, Anabas testudineus, Heteropneustes fossilis, Neotropius atherinoides, Colisa fasciata, Channa striata, Notopterus notopterus, Batasio batasio, Corica soborna, and Puntius chola. Fish species were collected using nylon nets with the help of fishermen at almost the same locations where the sediments were collected. After collection, fish samples were immediately and carefully washed with distilled water, and the edible parts of the fish (muscle tissue) were cut into small pieces and oven dried at 70-80 °C to attain constant weight. The moisture content of the fish was calculated by recording the difference between fresh and dry weights. The dried fish samples were crumbled with a porcelain mortar and pestle, sieved through a 2-mm nylon sieve, and stored in airtight, clean zip-lock bags in a freezer until chemical analysis was performed.
Analytical procedure for heavy metals

All reagents used were Merck analytical grade (AR). Deionized water was used for solution preparation. For metal analysis, about 0.3 g of sediment and 0.5 g of the dried fish samples were digested with 15 mL of concentrated HNO3, H2SO4, and HClO4 in a 5:1:1 ratio at 80 °C until a transparent solution was obtained. 35 The digested samples of sediment and fish species were filtered through Whatman no. 42 filter paper, and the filtrates were diluted to 50 mL for sediment and 25 mL for fish with deionized water. All samples were stored at ambient temperature before analysis. For heavy metals, samples were analyzed using an atomic absorption spectrometer (Perkin Elmer Analyst 300). Blank samples were analyzed after every 8 samples. Concentrations were calculated on a dry weight (dw) basis for sediment and a wet weight basis for fish samples. All analyses were replicated 3 times. The precision and analytical accuracy were checked by analysis of the standard reference materials NMIJ CRM 7303 (lake sediment) and DORM-2 (dogfish muscle) from the National Research Council, Canada. The measured mean and standard deviation of elemental values for the reference materials are reported in Table S1. Comparison was made with the certified values, which in both cases confirmed that the sample preparation and operating conditions of the instrument provided good levels of accuracy and precision.
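For readers who want to trace the arithmetic from instrument readings back to sample concentrations, the following minimal Python sketch shows the usual back-calculation for a digest of known final volume and sample mass. The paper does not spell out this step, so the function, the example readings, and the handling of the dry-to-wet conversion for fish (via the measured moisture content) are assumptions for illustration only.

```python
def metal_conc_mg_per_kg(reading_mg_per_l: float,
                         final_volume_ml: float,
                         sample_mass_g: float) -> float:
    """Convert a digest reading (mg/L) into a sample concentration (mg/kg).

    mg/kg = reading (mg/L) * volume (L) / mass (kg)
          = reading * (volume_ml / 1000) / (mass_g / 1000)
          = reading * volume_ml / mass_g
    """
    return reading_mg_per_l * final_volume_ml / sample_mass_g

# Hypothetical readings: a 0.3 g sediment digest diluted to 50 mL,
# and a 0.5 g dried-fish digest diluted to 25 mL.
print(metal_conc_mg_per_kg(0.71, 50.0, 0.3))      # ~118 mg/kg dw for sediment
dw_conc = metal_conc_mg_per_kg(0.056, 25.0, 0.5)  # mg/kg on a dry weight basis
moisture_fraction = 0.75                          # hypothetical fish moisture content
print(dw_conc * (1 - moisture_fraction))          # converted to a wet weight basis
```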
Analysis of physicochemical properties of sediment
The pH of sediments was measured in a 1:2.5 sediment-to-water ratio. The suspension was allowed to stand overnight prior to pH determination. The pH was measured using a pH meter calibrated with pH 4.0, pH 7.0, and pH 9.0 standards. For electrical conductivity (EC) determination, 5.0 g of sediment was placed in 50 mL polypropylene tubes. Then, 30 mL of distilled water was added to each tube, which was shaken for 5 minutes. After that, EC was measured using a portable EC meter (Horiba D-52). Percent nitrogen (%N) and organic carbon (%C) of sediment were measured using an elemental analyzer (vario EL III, Elementar, Germany). For total nitrogen (TN) and total organic carbon (TOC) determination, sediment samples were weighed into tin or silver vessels and loaded in the integrated carousel. In a fully automatic process, the sample was transferred through the ball valve into the combustion tube. Each sample was individually flushed with carrier gas to remove atmospheric nitrogen, resulting in a zero-blank sampling process. The catalytic combustion was carried out at a permanent temperature of up to 1200 °C. The element concentrations were calculated from the detector signal and the sample weight on the basis of stored calibration curves.
Metal bioaccumulation in fish species
Metal concentrations in fish species and sediments from the studied river were used for calculating the biota-sediment accumulation factor (BSAF). The BSAF is an index of the ability of fish species to accumulate a particular metal with respect to its concentration in sediment. It was calculated by the following equation: 36 BSAF = C_fish / C_sediment, where C_fish is the metal concentration in fish (mg/kg dw) and C_sediment is the metal concentration in sediment (mg/kg dw).
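The BSAF is a simple ratio, so a short Python sketch is enough to show the calculation; the concentrations below are hypothetical placeholders, not values from Table 5.

```python
def bsaf(c_fish_mg_per_kg_dw: float, c_sediment_mg_per_kg_dw: float) -> float:
    """Biota-sediment accumulation factor: fish concentration over sediment concentration."""
    return c_fish_mg_per_kg_dw / c_sediment_mg_per_kg_dw

# Hypothetical dry-weight concentrations of one metal at one site.
print(bsaf(c_fish_mg_per_kg_dw=0.9, c_sediment_mg_per_kg_dw=1.5))  # 0.6
```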
Noncarcinogenic risk
The noncarcinogenic risk was estimated in accordance with that provided in the USEPA Region III Risk-based Concentration Table. 37 The noncarcinogenic risk for each metal through the consumption of fish was assessed by the target hazard quotient (THQ) 38 : "the ratio of a single substance exposure level over a specified time period (eg, subchronic) to a reference dose (RfD) for that contaminant derived from a similar exposure period." THQ assumes a level of exposure (ie, RfD) below which it is unlikely for the populations to experience adverse health effects. If the exposure level exceeds this threshold (ie, if THQ = E/RfD exceeds unity), there may be concern for potential noncarcinogenic risks.
THQ = (EFr × ED × FIR × C) / (RfD × BW × AT) × 10⁻³

Total THQ (TTHQ) = THQ(metal 1) + THQ(metal 2) + ... + THQ(metal n),

where EFr is the exposure frequency (365 days/year); ED is the exposure duration (70 years), equivalent to the average human lifespan 39 ; FIR is the fish ingestion rate (g/person/day); C is the metal concentration in samples (mg/kg, fresh weight [fw]); RfD is the oral reference dose (mg/kg/day); BW is the average body weight (adult, 60 kg); AT is the averaging time for noncarcinogens (365 days/year × number of exposure years, assuming 70 years). The daily consumption rate of fish for adult residents was 45.67 g on a fresh weight basis. 40 The oral reference doses were based on 1.5, 0.02, 0.04, 0.0003, 0.003, and 0.004 mg/kg/day for Cr, Ni, Cu, As, Cd, and Pb, respectively. 37,41 If the THQ is less than 1, the exposed population is unlikely to experience obvious adverse effects. If the THQ is equal to or greater than 1, there is a potential health risk, 42 and interventions and protective measures should be taken.
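Because the displayed THQ formula appears to have been damaged in extraction, the Python sketch below implements the standard USEPA-style form that matches the variables defined above. The 10⁻³ factor (converting the ingestion rate from g to kg) and the example concentrations are assumptions for illustration, so the outputs will not necessarily reproduce the values tabulated in the paper.

```python
def thq(c_mg_per_kg_fw: float, rfd_mg_per_kg_day: float,
        fir_g_per_day: float = 45.67, bw_kg: float = 60.0,
        efr_days_per_year: float = 365.0, ed_years: float = 70.0) -> float:
    """Target hazard quotient for one metal ingested via fish.

    THQ = (EFr * ED * FIR * C) / (RfD * BW * AT) * 1e-3, with AT = 365 * ED.
    """
    at_days = 365.0 * ed_years
    return ((efr_days_per_year * ed_years * fir_g_per_day * c_mg_per_kg_fw)
            / (rfd_mg_per_kg_day * bw_kg * at_days)) * 1e-3

# Oral reference doses quoted in the text (mg/kg/day).
rfd = {"Cr": 1.5, "Ni": 0.02, "Cu": 0.04, "As": 0.0003, "Cd": 0.003, "Pb": 0.004}
# Hypothetical wet-weight concentrations for one fish species (mg/kg fw).
conc = {"Cr": 0.23, "Ni": 1.4, "Cu": 1.0, "As": 0.69, "Cd": 0.10, "Pb": 0.43}

thqs = {metal: thq(conc[metal], rfd[metal]) for metal in rfd}
tthq = sum(thqs.values())  # total THQ is the sum of the single-metal THQs
print(thqs)
print(f"TTHQ = {tthq:.2f}")
```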
Carcinogenic risk
Carcinogenic risks of As and Pb were estimated as the incremental probability that an individual will develop cancer over a lifetime as a result of exposure to that potential carcinogen (ie, incremental or excess individual lifetime cancer risk). 38 The equation used for estimating the target carcinogenic risk 38 is as follows: TR = (EFr × ED × FIR × C × CSFo) / (BW × AT) × 10⁻³, where CSFo is the oral carcinogenic slope factor from the Integrated Risk Information System 37 database. The slope factors were 1.5 for As and 8.5 × 10⁻³ (mg/kg/day)⁻¹ for Pb.
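As with the THQ, the displayed TR equation did not survive extraction cleanly, so the sketch below uses the form commonly paired with these exposure parameters in the fish-consumption literature. The unit-conversion factor and the example concentrations are my assumptions, and the resulting numbers are only meant to be compared against the acceptable risk level of 10⁻⁶ mentioned later in the text, not against Table 6.

```python
def target_cancer_risk(c_mg_per_kg_fw: float, csfo_per_mg_kg_day: float,
                       fir_g_per_day: float = 45.67, bw_kg: float = 60.0,
                       efr_days_per_year: float = 365.0, ed_years: float = 70.0) -> float:
    """Lifetime target carcinogenic risk (TR) from one metal ingested via fish.

    TR = (EFr * ED * FIR * C * CSFo) / (BW * AT) * 1e-3, with AT = 365 * ED.
    """
    at_days = 365.0 * ed_years
    return ((efr_days_per_year * ed_years * fir_g_per_day * c_mg_per_kg_fw
             * csfo_per_mg_kg_day) / (bw_kg * at_days)) * 1e-3

# Slope factors quoted in the text: 1.5 for As and 8.5e-3 (mg/kg/day)^-1 for Pb.
print(target_cancer_risk(0.69, 1.5))     # hypothetical mean As concentration (mg/kg fw)
print(target_cancer_risk(0.43, 8.5e-3))  # hypothetical mean Pb concentration (mg/kg fw)
```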
Statistical analysis
The data were statistically analyzed using the statistical package SPSS 16.0 (SPSS, USA). The means and standard deviations of the metal concentrations in sediment and fish species were calculated. Multivariate post hoc Tukey test was used to examine the statistical significance of the differences among mean concentrations of heavy metals among fish species. A multivariate method in terms of principal component analysis (PCA) was used to obtain the detailed information of the data set and gain insight into the distribution of heavy metals by detecting similarities or differences in samples.
Physicochemical properties and metals in sediment
Physicochemical properties of sediments are presented in Table 1. The pH of the sediments did not vary much and was slightly acidic for all the sites except S10, which was slightly alkaline. The lower pH at most of the stations of the studied river might be due to discharge of acidic effluent from nearby industries. The S10 site showed considerably higher pH (8.31), which might be due to the deposition of large amounts of sediment at this site. This deposited sediment may contain much calcium carbonate and magnesium carbonate, which are calcareous. Hydrolysis of these calcium and magnesium carbonates releases OH⁻ ions, which contribute to alkalinity in sediment. 43 Due to the variations in topography, hydrology, and geology within catchment areas, as well as differences in precipitation, local climate, and anthropogenic influence, the water chemistry such as pH, alkalinity, and concentration of heavy metals may differ considerably between streams even within small distances. 44 The variation trends of pH at different sampling sites may be due to the higher concentrations of colloidal and/or particulate matter during high river discharges. 45 The composition of the organic carbon in the riverine sediments varies due to its origin in the aquatic environment. Phytoplankton and zooplankton are the most abundant sources of the organic material in the sediments. 46 Total nitrogen content was in the range of 0.112%-0.252%; organic carbon was in the range of 0.74%-2.45%; and metal retention was found to be high in the locations with high organic carbon (sites S6, S9, and S10). The highest percentage of organic carbon might be attributed to the high amount of drainage water. The high rate of organic growth together with the organic detritus introduced by the drainage system can be considered as the main source of organic carbon. 47 The low percentages of TOC are due to the structure of the sediments in the investigated area, which were mainly sand with a low affinity for absorbing contaminants. According to the US soil texture classification, the textural analysis revealed that the sediments in the study region were dominated by sand and sandy loam (Table 1). The concentrations of heavy metals in sediment are presented in Table 2. The distributions of heavy metals in the sediments were not uniform among the sampling sites of the river. The variations in concentration might be due to differences in the sources of the heavy metals and the prevailing physicochemical conditions and complex reactions such as adsorption, flocculation, and redox conditions taking place in the sediments. 47,48 The concentrations of heavy metals at sites S6-S10 were much higher than at other sites because these sites, located downstream of the river, were influenced by the extensive discharge of urban waste. 5,16 Elevated concentrations of heavy metals in surface sediments found at the downstream sites of the Korotoa River close to the urban area of Bogra District indicate that urbanization drove metal contamination in surface sediment. 49,50 The urban activities (industrial discharges, municipal waste water, household garbage, and urban runoff) of the Bogra District urban area are the main reasons for the higher metal input at sites S6-S10. Higher contamination with heavy metals found in Yuandang Lagoon due to municipal sewage discharge or other unknown pollution sources from Xiamen City, China, 51 is in line with our findings.
The average concentrations of heavy metals in sediments were in the following decreasing order: Cr > Ni > Cu > Pb > As > Cd.
Among the sites in the current study, the average concentration of Cr was 118 mg/kg; the highest Cr was observed in sediment collected from site S10 (179 mg/kg; Table 2). Values of metals in sediment are presented as mean concentrations (mg/kg dw). The concentration of Cr in sediment was in line with the other studies and slightly higher than the average shale value (ASV), toxicity reference value (TRV), lowest effect level (LEL), and severe effect level (SEL) (Table 3). 20 Consequently, the waste discharged from such industries is responsible for the elevated Cr level in the exposed sediment. 4,52 The mean concentration of Ni was 103 mg/kg; the highest was observed at site S7 (163 mg/kg). Slightly higher levels of Ni were observed at the sites near the district urban area and downstream, indicating that the higher input of Ni in sediment might originate from urban and industrial wastes. 16,52 Nickel and its salts are used in several industrial applications, such as electroplating and fabric printing, as well as in storage batteries, automobiles, electrodes, cooking utensils, pigments, lacquer cosmetics, and waste water. 21,30 The effluents from the Bogra District urban area might be the source of Ni for some sites of the Korotoa River.
The average concentration of Cu was 82 mg/kg; elevated levels of Cu were found at sites S6, S7, and S10 ( Table 2). The higher level of Cu indicates its higher input in the sites (S6, S7 and S10); Cu input originates from anthropogenic activities such as vehicle and coal combustion emissions 49 and car lubricants 62 and from natural phenomena such as the metal content of rocks and parent materials and processes of soil formation. 50,63 The highest As was observed at site S6 (52 mg/kg), followed by site S7 (51 mg/kg). The concentration of As in sediment of this study was in line with that in the other studies and higher than the ASV, TRV, LEL, and SEL (Table 3). Recent anthropogenic activities, such as treatment of agricultural land with arsenical pesticides, 62 treatment of wood with chromated copper arsenate, burning of coal in thermal plant power stations, and sediment excavation that alters the hydraulic regime and/or arsenic source material, increased the rate of discharge into the freshwater habitat. 20,64 The mean concentration of Cd in sediment was 1.5 mg/kg with the range of 0.53-2.8 mg/kg ( Table 2). This was in agreement with other studies in Bangladesh and other countries and far higher than the ASV, TRV, and LEL (Table 3), indicating that Cd might pose a risk to the surrounding ecosystems. Elevated concentration of Cd in sediments of the Korotoa River might be related to industrial activity, atmospheric emission, and leachates from defused Ni-Cd batteries and Cd-plated items. 4,16 The average concentration of Pb was 63 mg/kg, which was 3 times the ASV and 2 times the TRV and LEL values ( Table 3). The elevated level of Pb in sediments can be due to the effect from point and nonpoint sources, such as leaded gasoline, municipal runoff and atmospheric deposition, 20,52 and chemical manufacturing and steel works in the urban area of Bogra District. 16 The concentration of Pb in sediment was in agreement with some previous studies (Table 3).
Heavy metals in fish species
Heavy metals may act as a source of contamination when significant changes of pH, redox potential, salinity, particulate matter, or microbial activity occur in the environment. These changes can increase the mobility and transport of the metals in the aquatic media and make them bioavailable to the biota. 48 It is well known that metals can be bioaccumulated in fish tissues. 65 The magnitude of bioaccumulation is a function of age, species, and trophic transfer. Within the same species, the concentrations of metals may vary with age and body weight. 66 The concentrations on fresh wet basis of 6 metals (Cr, Ni, Cu, As, Cd, Pb) in 11 different fish species are listed in Table 4. Overall, the mean concentrations of heavy metals in fish species were in the following descending order: Ni (1.4 mg/kg) > Cu (1.0 mg/kg) > As (0.69 mg/kg) > Pb (0.43 mg/kg) > Cr (0.23 mg/kg) > Cd (0.10 mg/kg). The concentration of metals varied considerably among the fish species. However, the overall concentrations of studied metals among the fish species were in the following descending order: H. fossilis > A. testudineus > B. batasio > C. fasciata > C. soborna > P. chola > A. grammepomus > C. punctuate > N. atherinoides > C. striata > N. notopterus. Bottom-dwelling fish were found to exhibit higher concentrations of heavy metals than were pelagic fishes. 67 Chromium does not normally accumulate in fish, and hence, low concentration was reported even from the industrialized part of the world. A study has shown a higher rate of uptake in young fish, but the body burden of Cr declines with age due to the rapid elimination. 30 The mean concentration of Cr in fish species was found to be 0.23 mg/kg with a range of 0.11-0.46 mg/kg (Table 4). No significant difference was observed for Cr concentration among the investigated fish species. The mean concentration of Ni was 1.4 mg/kg; the highest concentration was observed in A. testudineus and H. fossilis (2.6 mg/kg). The Ni concentration in fish species was higher than the maximum allowable concentration (MAC) in fish (0.8 mg/kg), indicating that Ni might pose a risk to humans through consumption of these contaminated fish species.
Copper was detected in all examined fish species, and a significant difference (p < .05) was observed for Cu content in A. testudineus and H. fossilis compared with other species. The mean concentration of Cu was 1.0 mg/kg with a range of 0.57-2.1 mg/kg (Table 4). Arsenic is widespread in the environment from both anthropogenic and natural sources. The highest concentration of As was observed in H. fossilis (1.7 mg/kg), followed by A. testudineus (1.2 mg/kg). Arsenic concentration in these 2 species (A. testudineus and H. fossilis) showed significant differences (p < 0.05) compared with the other species (Table 4). The USEPA has set a value of 1.3 mg/kg fw in tissues of freshwater fish as the criterion for human health protection. 68 Therefore, 2 species of fish (A. testudineus and H. fossilis) are a concern for health risk due to As exposure; the other species showed concentrations of As lower than the MAC. In fish samples, the mean concentrations of Cd ranged from 0.020 mg/kg (C. striata) to 0.23 mg/kg (H. fossilis; Table 4). The average concentrations of Cd in the fish species A. testudineus, H. fossilis, N. notopterus, B. batasio, and C. soborna were higher than the MAC (0.10 mg/ kg; Table 4), indicating potential health hazards due to the consumption of these fish species from the studied river. In the investigated fish species, the mean concentration of Pb ranged from 0.15 mg/kg (C. soborna) to 1.1 mg/kg (A. testudineus). Lead concentrations in fish species C. punctate, A. testudineus, H. fossilis, and N. atherinoides were higher than the safe limit of 0.5 mg/kg, 69 indicating these 4 fish species were contaminated by Pb and might pose risks to humans.
A PCA was conducted to infer the hypothetical sources of heavy metals (natural or anthropogenic) following the standard procedure reported in the literature, 70,71 which showed clustering of the variables into different groups, where variables belonging to one group are highly correlated with each other. 72 The PCA was performed on the dimensionless standardized form of the data set and is presented in Figure 2. The Varimax rotation was used to maximize the sum of the variance of the factor coefficients. Multivariate PCA of heavy metals in the samples explained about 96% (sediment) and 69% (fish) cumulative variance of the data. In the PCA analysis, first 3 PCs were computed, and the variances explained by them were 41.4%, 31.6%, and 22.6% for sediment and 27.1%, 22.2%, and 19.7% for fish, respectively ( Figure 2). Overall, the PCA revealed 3 major groups of the metals for both sediment and fish. One group consisted of Pb for sediment and Cu and Pb for fish, which were predominantly contributed by lithogenic sources. 5,73 The second group showed mutual associations of Cd and Ni for sediment and Cd and As for fish, which were mostly contributed by the industrial emissions in the vicinity of the sampling sites. The third group revealed similar loadings of Cr, Cu, and As in sediment and Cr and Ni in fish, indicating anthropogenic activities.
Metal bioaccumulation in fish species
Metals in the aquatic environment, particularly in sediment, can be bioaccumulated in fish tissues. 63 Bioaccumulation of heavy metals in fish species depends not only on metal exposure and the environment, but also on the different physiological and biochemical activities through which a specific organism deals with metals. 74 Hence, different organisms accumulate metals from the environment depending on their filtration rate, ingestion rate, and gut fluid quality, as well as on the detoxification strategies they adopt (eg, storage in nontoxic form or elimination). 75 Table 5 clearly shows a large variation in BSAF among different fish species and metals. Among the studied metals, the ranking order of mean BSAF values was Cd > As > Ni > Cu > Pb > Cr (Table 5). Among the selected 6 metals, Cd showed the highest BSAF value, suggesting a higher rate of accumulation in fish species. At some sites, levels of a metal might be high but accumulation lower than expected due to metal complexation. 5 The BSAFs for the studied metals in A. testudineus and H. fossilis were slightly higher than the values obtained for other fish species (Table 5). This can be explained by the ingestion of sediment as well as the omnivorous feeding behavior of A. testudineus and H. fossilis, which may lead to greater BSAFs for these species. Therefore, 2 of the fish species investigated in this study, A. testudineus and H. fossilis, can be potential bioindicators for the assessment of heavy metal contamination in this riverine environment. This study revealed a slightly higher accumulation of metals in these 2 fish species (A. testudineus and H. fossilis). These 2 fish species are bottom feeders, and therefore, sediments could be the major source of trace metal accumulation in these fish species. 14,76 Bottom-dwelling fish are found to exhibit higher concentrations of heavy metals than pelagic fishes. 67,76 The BSAFs of individual metals among the fish species and sampling sites did not display similar patterns because accumulation is an environment-specific phenomenon. The ingested sediments found in the digestive tracts of fish accelerated the accumulation of metals.
Noncarcinogenic and carcinogenic risks
Target hazard quotients (THQs) of 6 heavy metals from consuming fish species are listed in Table 6. The THQ values for individual metals (except As) in fish species were less than unity, which is considered safe for human consumption. Nevertheless, total values of THQ for As and Pb were greater than 1.0 (Table 6); consequently, the consumption of these fish species was considered to be unsafe, and their consumption was not recommended. Therefore, consumers are at high risk due to the exposure of As and Pb from fish that were associated with noncarcinogenic risks. Given all metals in consideration, total THQ (sum of individual metal THQs) for the consumption of fish species was 1.20-4.59 (Table 6); therefore, potential health risks from studied fish species are of some concern. The target carcinogenic risks (TRs) derived from the intake of As and Pb were calculated and are presented in Table 6. In fish species, TR values for As ranged from 0.243 to 0.951; TR values for Pb ranged from 0.001 to 0.006 (Table 6). TR values for As and Pb were higher than the acceptable risk limit (0.000001), 37 indicating the inhabitants consuming these fish species are exposed to As and Pb with lifetime cancer risk. According to the results of this study, the potential health risk for the inhabitants due to metal exposure through consumption of fish should not be ignored.
Conclusions
In conclusion, this study revealed that the concentrations of heavy metals in sediment from some sites exceeded the sediment quality standards, indicating their risk to the surrounding ecosystems. Fish species from the study river were also contaminated by the relevant metals, particularly Ni, As, Cd, and Pb, which could be a potential health concern to the local inhabitants. The concentrations and biota-sediment accumulation factors (BSAFs) of heavy metals in H. fossilis and A. testudineus were slightly higher than those of other species, which might be due to their mode of feeding behavior. H. fossilis and A. testudineus could be potential bioindicators for metal pollution study. The target hazard quotients (THQs) of individual metals (except As) would not pose any potential risk; however, combined effects of heavy metals can pose significant risks. The carcinogenic and noncarcinogenic risks of As and Pb due to fish consumption showed a considerable risk.

Note. Cr = chromium; Ni = nickel; Cu = copper; As = arsenic; Cd = cadmium; Pb = lead. * Assuming 50% inorganic arsenic present in fish produces carcinogenic risk (Saha and Zaman 2013). 77
Understanding Risk Behaviors of Vietnamese Adults with Chronic Hepatitis B in an Urban Setting
Cigarette smoking and alcohol consumption can be considered as risk factors that increase the progression of chronic liver disease. Meanwhile, unprotected sex is one of the main causes of hepatitis B infection. This study aimed to explore drinking, smoking, and risky sexual behaviors among people with chronic hepatitis B virus (HBV) in a Vietnamese urban setting, as well as investigating potential associated factors. A cross-sectional study was performed in October 2018 in Viet-Tiep Hospital, Hai Phong, Vietnam. A total of 298 patients who had been diagnosed with chronic hepatitis B reported their smoking status, alcohol use, and sexual risk behavior in the last 12 months. A multivariate logistic regression model was used to identify the associated factors. It was identified that 82.5% of participants never used alcohol. The Alcohol Use Disorders Identification Test-Consumption (AUDIT-C) positive result among male patients was 7.4% (0% in female patients). In addition, 14.5% of participants were current smokers and the mean number of cigarettes per day was 7.4 (SD = 3.4). It was found that 35.4% of male patients had sex with two or more sex partners. Furthermore, 66.7% and 74.1% of participants used condoms when having sex with casual partners/one-night stands and sex workers, respectively. There was a positive correlation between monthly drinking and currently smoking. White-collar workers were less likely to have multiple sex partners within the last 12 months. Our study highlights the need for integrating counseling sessions and educational programs with treatment services.
Introduction
Hepatitis B-a liver infection caused by the hepatitis B virus (HBV)-has long been considered a threat to global health due to its high prevalence and mortality rate [1]. According to the World Health Organization, the number of people with positive HBV surface antigen (HBsAg) was as high as 257 million in 2015, while fatality records were estimated to be 887,000 in 2015 alone [2]. Hepatitis B has been particularly prevalent in Asian countries, especially in those of Southeast Asia-classified as a high endemic region with over 8% of the population being diagnosed with positive HBsAg [3].
Hepatitis B virus infection has a distinct course of progression, of which a significant step is the transition of an acute episode to a chronic condition [4]. Studies have found that the initiation of HBV chronicity is generally associated with the condition of the immune system and host factors [5,6]. Although the rate of developing chronic HBV is highest among infants (90%), adults with a competent immune system can also develop HBV chronicity at a 5-10% rate [7]. Meanwhile, chronic HBV has been considered the main cause of hepatocellular carcinoma (HCC) or primary liver cancer-the second leading cause of cancer mortality, with an annual death count of 745,000 [8]. A Global Burden of Disease Study in 2010 indicated that chronic HBV accounted for almost 45% of HCC cases investigated globally [9]. Understanding the risky behaviors of chronic HBV patients, especially in relation to HCC development, is essential to reduce HBV-related deaths and the burden of diseases. Risky behaviors are defined as behaviors that expose people to harm and negatively affect physical, economic, or psycho-social well-being [10]. Prevailing literature has identified risky sexual behavior as one of the main risk factors for the transmission of HBV [11,12], while recent research has reported associations of alcohol consumption [13] and cigarette smoking with a risk of liver cancer in those infected with chronic HBV [14].
The prevalence of chronic HBV in Vietnam is substantial-HbsAg positive infection was found to be in the range of 9 to 14% of the population in the two largest cities of Vietnam [15], with a projection of total chronic HBV cases reaching 8 million by 2025 [16]. However, there is little literature that contributes to identifying the risk behaviors of Vietnamese chronic HBV patients. Thus, this study aimed to explore these risk behaviors (in particular drinking, smoking, and risky sexual behavior) among a cohort of people with chronic HBV in a Vietnamese urban setting, as well as investigating potential associated factors, with the hope of identifying appropriate clinical and policy-level implications.
Study Setting and Sampling Method
In this study, we collected data from a cross-sectional study, which was performed in October 2018 in the Chronic Hepatitis Clinic in the Viet-Tiep Hospital, Hai Phong, Vietnam. The inclusion criteria for selecting patients included being diagnosed with chronic hepatitis B (CHB), being aged 18 years old and above, agreeing to participate in the study and having the ability to communicate with the interviewers. Participants were excluded from the study if they suffered from severe health conditions which may have affected their ability to answer the questionnaire. The convenient sampling method was used to recruit patients, and a total of 298 participants agreed to be involved in the study.
Data Measurement
Face-to-face interviews lasting about 20 minutes were performed. Health staff in the clinic, who had undergone intensive training, interviewed the participants in order to secure the quality of the data. The confidentiality of the participants was ensured and written informed consent was obtained from the participants.
Socioeconomic and Health Status
Data on socioeconomic characteristics including gender, age, education, marital status, occupation, and monthly income were collected. Health status was self-reported using the EuroQol visual analogue scale (EQ-VAS), rated from 0 (the worst health condition that you can imagine) to 100 (the best health condition that you can imagine).
Substance Use
In order to assess alcohol use disorder, we used the Alcohol Use Disorders Identification Test-Consumption (AUDIT-C). Participants were asked about their frequency of alcohol drinking, standard alcohol drinking in a typical day, and the frequency of having six or more drinks with the total score ranging from 0 to 12. An AUDIT-C positive result was defined as male participants having a score of 4 or more and female participants having a score of 3 or more. Binge drinking was defined as participants having six or more drinks on one occasion. The current smoking status of patients was also explored by asking them to report their smoking status in the previous 30 days. The number of cigarettes per day was also examined. In this study, the information on cigarettes only focused on tobacco smoking.
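To make the screening rule concrete, here is a minimal Python sketch of AUDIT-C scoring and the sex-specific cut-offs stated above. The per-item 0-4 scoring shown in the example follows the usual AUDIT-C convention, which the paper does not spell out, so treat it as an assumption.

```python
def audit_c_score(q1_frequency: int, q2_typical_quantity: int, q3_six_or_more: int) -> int:
    """Sum of the three AUDIT-C items, each scored 0-4 (total 0-12)."""
    return q1_frequency + q2_typical_quantity + q3_six_or_more

def audit_c_positive(score: int, sex: str) -> bool:
    """Positive screen: score >= 4 for men and >= 3 for women, as defined in the text."""
    threshold = 4 if sex == "male" else 3
    return score >= threshold

# Hypothetical respondent: drinks monthly or less (1), 3-4 drinks on a typical
# day (1), never has six or more drinks (0) -> total score of 2.
score = audit_c_score(1, 1, 0)
print(score, audit_c_positive(score, "male"), audit_c_positive(score, "female"))
```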
Sexual Practices
Participants were asked to report the number of sex partners in the last 12 months; the type of sex partners, including casual sexual partners (defined as sexual partners having sexual behavior outside of a committed/romantic relationship [17]), one-night stands (a single sexual encounter without further relations between the sexual participants [18]), and sex workers (people who receive money or goods in exchange for sexual services [19]); and whether they had used condoms in the last sexual intercourse within the last 12 months.
Statistical Analysis
Data analysis was conducted using STATA version 15.0 (Stata Corp. LP, College Station, TX, USA). Descriptive statistics were used to present the socioeconomic characteristics variables, alcohol and tobacco use, and the sexual practices among the participants. A chi-squared test was performed to compare gender differences. A multivariate logistic regression model was used to identify factors associated with risk behaviors, including monthly alcohol use or more, current smoker, and having multiple sex partners in the previous 12 months. The independent variables included socioeconomic characteristics (age, gender, marital status, educational level, occupation, and income level) and current smoking status. To explore a parsimonious regression model, we used stepwise backward selection strategies with the threshold of 0.2 for selecting variables. Statistical significance was acknowledged at a p-value of less than 0.05.
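The analyses were run in Stata, but as an illustration of the modelling step, the Python sketch below fits a comparable multivariate logistic regression and reports odds ratios with 95% confidence intervals on simulated data. The variable names, the simulated effects, and the use of statsmodels are all assumptions, and the backward-selection loop described in the text is only summarised in the final comment.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the analysis dataset (n = 298 patients).
rng = np.random.default_rng(42)
n = 298
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "high_income": rng.integers(0, 2, n),
    "current_smoker": rng.integers(0, 2, n),
})
# Hypothetical data-generating model for the binary outcome.
logit_p = -1.0 - 1.5 * df["female"] + 1.0 * df["high_income"]
df["multiple_partners"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(df[["female", "high_income", "current_smoker"]].astype(float))
fit = sm.Logit(df["multiple_partners"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))

# A stepwise backward selection, as described in the text, would repeatedly
# refit the model after dropping the predictor with the largest p-value until
# every remaining predictor had p < 0.2.
```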
Ethical Approval
The protocol of this study was reviewed and approved by the Institutional Review Board of Hai Phong University of Medicine and Pharmacy.

Results

Table 1 illustrates the information regarding the socioeconomic characteristics and self-rated health status of participants. Half of the participants were male (54.5%), and 82.8% of participants were aged above 30. The majority of patients had a high school education and above (81%) and lived with a spouse/partner (89.2%). Of the participants, 36.6% were freelancers, followed by farmer/blue-collar workers (34.1%). The mean EQ-VAS score was 74.5 (SD = 13.1). Table 2 shows the alcohol and tobacco use among CHB patients. It shows that 82.5% of participants reported never using alcohol. Abstaining from alcohol was significantly and statistically higher among women than among men (97% and 70.4%, respectively). The majority of patients drank from 0 to 2 cups of alcohol in a typical day (95.3%), and participants who drank 3-4 cups or 5-6 cups per day were only males (4.7%). Of the participants, 90.2% reported that they never had six or more drinks, and 11.7% of male participants drank six or more drinks less than monthly. No female patients had an AUDIT-C positive result, while the percentage among male patients was 7.4%. Of the participants, 14.5% were current smokers, and the percentage of smoking among males (25.9%) was statistically and significantly higher than that of females (0.7%). The mean number of cigarettes per day was 7.4 (SD = 3.4).
The information regarding sexual practices among CHB patients is presented in Table 3. It shows that 77.2% of participants had sexual intercourse in the last 12 months. About half of the participants had one sex partner in the last 12 months, and 35.4% of male patients had sex with two or more sex partners. The percentage of male participants having sex with casual partners/one-night stands and sex workers was statistically and significantly higher than that of female patients. Approximately one-fifth of participants used condoms when having sex with their spouse/main partner (20.8%), and 66.7% and 74.1% of participants used condoms when having sex with casual partners/one-night stands and sex workers, respectively. The reduced regression model is presented in Table 4. Female participants were less likely to use alcohol monthly (OR = 0.08; 95%CI = 0.02-0.25), be a current smoker (OR = 0.03; 95%CI = 0.00-0.22), or have multiple sex partners in the last 12 months (OR = 0.04; 95%CI = 0.01-0.13), compared to male participants. White-collar workers were also less likely to have more than one sex partner within the last 12 months (OR = 0.1; 95%CI = 0.01-0.88). Higher monthly individual income was associated with having multiple sex partners within the last 12 months (OR = 9.55; 95%CI = 2.66-34.35). Using alcohol monthly was positively related to being a current smoker (OR = 3.19; 95%CI = 1.3-7.83), and in contrast being a current smoker was also positively associated with monthly alcohol use or more (OR = 3.16; 95%CI = 1.3-7.68).
Discussion
The findings of this study contribute to the literature by adding information on smoking, drinking, and sexual practices among chronic hepatitis B patients. Female participants were less likely to engage in risk behavior in any dimension examined (at least monthly alcohol use, current smoking, or having multiple sex partners in the last 12 months). In terms of sexual practices, white-collar workers had a lower risk of having multiple sex partners in the last 12 months than participants who were unemployed, while people with a higher individual income level were more likely to have multiple sex partners in the last 12 months. At least monthly alcohol use was positively associated with being a current smoker and, conversely, being a current smoker was related to at least monthly alcohol use.
The percentage of current smokers in our study was relatively low, and lower than the prevalence of smoking among Vietnamese males in the general population [20], as well as in previous studies conducted among HBV populations in China and Korea [21,22]. The higher rates of cigarette smoking among CHB patients in other settings may be explained by differences in socioeconomic status [23], the concurrence of other substance use disorders [13], and chronic smoking-related comorbidities [24], given that the treatment duration of CHB is prolonged. This percentage was also lower than the proportion of smokers among HIV/AIDS patients in a previous study [25]. That higher proportion can be explained by the fact that among people infected with HIV/AIDS, especially those who are drug users, smoking and drug use are complementary, sharing similar cues and withdrawal symptoms [26]. Moreover, given the close relationship between smoking and chronic liver disease, for instance hepatocellular carcinoma, smoking screening and support for smoking cessation should be integrated into the HBV treatment program in Vietnam. In another study among rural immigrants in Hanoi, Vietnam, participants showed a significantly higher likelihood of engaging in smoking and drinking behavior than those in our study [27].
In our study, the percentage of people abusing alcohol was lower than reported in previous studies [28,29], which may reflect patients' awareness of the relationship between alcohol consumption and hepatitis B. In comparison with people infected with HIV/AIDS in Vietnam, our results also showed a lower proportion of alcohol drinking than among males receiving antiretroviral therapy [30]. Alcohol is the principal cause of alcoholic liver disease and can aggravate other liver diseases, especially chronic viral hepatitis [31]. The progression of chronic liver disease to cirrhosis and hepatocellular carcinoma is significantly accelerated by heavy alcohol consumption [13]. Moreover, alcohol use disorder may impair the response to medications used to treat chronic hepatitis B [13]. Furthermore, smoking and being male were also risk factors for drinking alcohol on at least a monthly basis, consistent with a previous study [28]. People who smoke are more likely to drink alcohol and, conversely, drinkers tend to smoke more heavily [32]. The co-use of alcohol and cigarettes produces synergistic carcinogenic effects [33].
In this study, we found that the percentage of participants having multiple sex partners was relatively low. However, the proportion of individuals using condoms when having sexual intercourse with casual sex partners or sex workers was also low, which is consistent with a previous study of HIV/AIDS patients [34]. HBV is easily transmitted through sexual contact with an infected person and is considered one of the most prevalent sexually transmitted infections, particularly among people who have multiple sex partners [35]. In our study, those working white-collar jobs were less likely to have multiple sex partners in the last 12 months than those who were unemployed. Unemployment can be considered a risk factor for having multiple partners because a deprived social background and excess free time can lead to more frequent casual sexual encounters [36,37].
Several public health suggestions can be drawn from this study. First, alongside medical treatment, more effort should be made to thoroughly address smoking and drinking among CHB patients. Since abstinence from alcohol and smoking is recommended to slow the progression of chronic liver disease, counseling sessions should be promptly recommended and incorporated into treatment clinics. Education on safe sexual behavior, as well as sufficient distribution of condoms, should be integrated into healthcare services during hepatitis B treatment so that risk behaviors and virus transmission can be reduced. In addition, safe-sex education should be focused on unemployed CHB patients, as they are more likely to have more than one sex partner.
There were several limitations to our study. First, participants answered the questionnaire from memory regarding the number of cigarettes smoked, alcohol consumption, and the number of sex partners in the last 12 months, which may have introduced recall bias. Secondly, the sampling method was convenience sampling, which may limit the generalizability of our findings. Thirdly, a cross-sectional design cannot establish causal relationships between risk factors and outcomes; further research with longitudinal data is therefore needed to gain deeper knowledge and provide adequate explanations for these results. Finally, several additional risk-factor variables should be included in future studies, such as injection drug use (IDU), history of tobacco use disorder, type of intercourse (anal, vaginal, or oral), and sexual orientation (heterosexual, bisexual, or homosexual).
Conclusions
This study highlights a low level of cigarette smoking and alcohol consumption among CHB patients, but a substantial proportion of participants had sexual intercourse with casual sex partners or sex workers without using a condom. To decrease smoking and alcohol abuse among CHB patients, counseling sessions should be promptly recommended and incorporated into treatment clinics. To reduce virus transmission, education about safe sexual behaviors, along with sufficient condom distribution, should be integrated into healthcare services during CHB treatment.
Vacuum thin shells in Einstein-Gauss-Bonnet brane-world cosmology
In this paper we construct new solutions of the Einstein-Gauss-Bonnet field equations in an isotropic Shiromizu-Maeda-Sasaki brane-world setting which represent a couple of $Z_2$-symmetric vacuum thin shells splitting from the central brane, and explore the main properties of the dynamics of the system. The matching of the separating vacuum shells with the brane-world is as smooth as possible and all matter fields are restricted to the brane. We prove the existence of these solutions, derive the criteria for their existence, analyse some fundamental aspects of their evolution and demonstrate the possibility of constructing cosmological examples that exhibit this feature at early times. We also comment on the possible implications for cosmology and the relation of this system with the thermodynamic instability of highly symmetric vacuum solutions of Lovelock theory.
I. INTRODUCTION
Lovelock's theory of gravity is arguably the most natural higher-dimensional generalisation of general relativity [1]. For a 4-dimensional spacetime, Lovelock gravity is precisely general relativity plus a possible cosmological constant. In the case of 5- or 6-dimensional spacetimes, the theory constitutes the so-called Einstein-Gauss-Bonnet (EGB) gravity, whose action differs from that of general relativity by the addition of a term of second order in the curvature (the Gauss-Bonnet combination), where I_m stands for the action of the extant matter-energy fields. EGB gravity also appears as a classical limit of certain string theories [2]. Taking into account this stringy motivation, here we will consider α > 0. A relatively simple way to obtain non-vacuum solutions of a given set of field equations that are of physical significance is through the introduction of thin shells. These are hypersurfaces which represent a concentrated source for the field: matter-energy concentrated on a codimension-one submanifold. Mathematically, these objects are characterised by junction conditions which, in the case of gravity, relate the discontinuity of the extrinsic curvature of the shell to the intrinsic stress-energy tensor defined on the submanifold. In the case of Einstein-Gauss-Bonnet theory, provided the only source for the gravitational field is the thin shell, the junction conditions are those derived in [3], where the brackets represent the difference across the shell of the quantity they enclose, and S_ab is the intrinsic stress-energy tensor on the shell (all these tensors are defined within the submanifold). It is important to notice the possibility of having non-trivial solutions ([K^a_b] ≠ 0) even in the case S_ab = 0: these are the so-called vacuum thin shells [4,5], which are vacuum solutions of low regularity (C^0 at the shell) that do not exist in general relativity.
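The display equations of this paragraph (the EGB action and the junction conditions) did not survive extraction. For orientation only, a commonly used form of both is sketched below; normalizations of $\kappa$ and of the Gauss-Bonnet coupling $\alpha$ vary between references, so this is an illustrative reconstruction rather than the paper's exact expressions.

```latex
% Illustrative, convention-dependent forms (not necessarily the paper's normalization):
\begin{align}
  I &= \frac{1}{2\kappa^{2}}\int d^{5}x\,\sqrt{-g}\,
       \Big[R-2\Lambda+\alpha\big(R^{2}-4R_{\mu\nu}R^{\mu\nu}
       +R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\big)\Big]+I_{m},\\
  % Davis-type generalized Israel junction conditions on the shell:
  &\big[\,K_{ab}-K h_{ab}
     +2\alpha\big(3J_{ab}-J h_{ab}-2P_{acbd}K^{cd}\big)\big]
     =-\kappa^{2}S_{ab}.
\end{align}
```

Here $J_{ab}$ is a tensor cubic in the extrinsic curvature and $P_{acbd}$ is the divergence-free part of the intrinsic Riemann tensor; setting $\alpha=0$ recovers the usual Israel conditions of general relativity.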
A relevant and relatively recent application of thin shells is braneworld cosmology (see [6] for a review). In this setting the observable universe is a 4-dimensional thin shell (braneworld), in which all the standard model fields live, embedded in a higher-dimensional spacetime (bulk) that is usually asymptotically AdS_5. It has attracted considerable interest because it is inspired by results in M-theory, offers an alternative to compactification for explaining the invisibility of the extra dimensions and the hierarchy problem [7], and can be used to construct models that reproduce standard cosmology, including dark energy and inflation. Although the classical limit of the 5-dimensional gravity theory is usually taken to be general relativity, EGB gravity is more general and, as mentioned, it is also a classical (low-energy) limit of certain string theories. It has therefore been applied to braneworld cosmology [8], and different aspects of the dynamics have been analysed (see [9,11] and references therein).
On the other hand, a new type of stability analysis for thin shells has recently been developed [12]. It consists of an infinitesimal separation of the constituent matter-energy fields into two parts, configuring in this way two different shells with an intermediate bulk, whose spacetime can be determined by continuity of the normal vector of the shell (both splitting shells have the same initial normal vector). It can be understood as a way to determine how well these constituents are gravitationally confined within a single shell. In this work we propose to use this analysis in an EGB braneworld context, but with one important difference: we are not going to separate constituents of the brane; instead, we consider vacuum thin shells emanating from a given braneworld solution, which radically changes the bulk while leaving the brane with the same matter-energy content. As we will show, this analysis turns out to be non-trivial and demonstrates the existence of a new class of solutions in the context of EGB gravity not previously analysed in the literature.
We begin with a derivation of the equations of motion of the different shells involved in this construction: the central brane-world in Section II and the separating vacuum thin shells in Section III. Then, in Section IV, we obtain criteria that determine the existence of this kind of solution and prove that they are satisfied for a range of parameters. The possible final outcomes of the evolution are considered in Section V. Finally, in Section VI we give an example that tends to our universe in the limit of large scale factor, and in Section VII we summarise our results, propose possible interpretations, discuss the physical relevance of the solution we found and compare with other results in the field of Lovelock gravity.
II. ISOTROPIC THIN SHELL WITH Z_2-SYMMETRY
Let us consider a 4-dimensional timelike thin shell made of a perfect fluid embedded in a Z_2-symmetric 5-dimensional vacuum bulk spacetime, the shell being placed at the symmetry centre. As usual in braneworld contexts, there is a positive brane tension σ > 0 on the thin shell. We also impose that the spacetime is foliated by 3-dimensional constant-curvature spacelike submanifolds, so the metric of any of the identical bulk regions is the solution found in [13] (a sketch of this type of solution is given below), where dΣ²_k stands for the metric of the corresponding constant-curvature manifold (k = -1, 0, 1), ξ = ±1 (the "minus" branch is the so-called general-relativistic (GR) branch, while the "plus" one is the stringy branch), and μ is the mass parameter. In order to have an asymptotic limit for large r, we will impose 1 + (4/3)αΛ > 0, and we define β = [1 + (4/3)αΛ]^{1/2}. One can see that in this limit, for Λ ≠ 0, the metric tends to de Sitter or Anti-de Sitter space depending on the value of the "effective cosmological constant" Λ_eff. From this it is deduced that the sign of Λ_eff for the GR branch is the same as the sign of Λ (as α > 0), while the stringy branch is always asymptotically AdS. In particular, if Λ = 0 then the GR branch is asymptotically flat, while the other is asymptotically AdS. From (5) it is also deduced that if μ = 0 then the solution is maximally symmetric. Thus 1 + (4/3)αΛ > 0 is also a necessary condition to have maximally symmetric solutions.
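The line element and metric function referred to above were lost in extraction. A sketch of the standard 5-dimensional EGB vacuum solution with constant-curvature spatial sections, of the type described in this paragraph, is given below; the precise numerical factor multiplying μ inside the square root depends on how the mass parameter is normalized, so it may differ from the paper's convention.

```latex
% Boulware-Deser/Wheeler-type vacuum solution (mass-term normalization is convention-dependent):
\begin{equation}
  ds^{2} = -f(r)\,dt^{2} + \frac{dr^{2}}{f(r)} + r^{2}\,d\Sigma_{k}^{2},
  \qquad
  f(r) = k + \frac{r^{2}}{4\alpha}
         \left(1 + \xi\sqrt{\,\beta^{2} + \frac{4\alpha\mu}{r^{4}}}\right),
  \qquad \beta^{2} = 1 + \tfrac{4}{3}\alpha\Lambda ,
\end{equation}
```

with ξ = -1 giving the GR branch and ξ = +1 the stringy branch.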
On the other hand, the intrinsic metric of the shell is an induced Friedmann-Robertson-Walker line element (sketched below), where τ is the proper time of the shell. Because of the symmetries, the intrinsic stress-energy tensor can be written as S^j_i = diag[-ρ, p, ..., p]. We impose that the matter content of the brane satisfies the dominant energy condition. Applying the junction conditions (2), we obtain two independent equations: one relates the energy density ρ within the shell to the bulk parameters, the scale factor of the shell and its first derivative (the ττ component), while the other relates the pressure p within the shell to the bulk parameters, the scale factor and its first two derivatives (any of the diagonal spacelike components). In a non-static situation (ȧ ≠ 0), the second equation is a consequence of the first one and of the conservation of the source (S^j_{i;j} = 0), so we will focus on the ττ component of the junction conditions and the continuity equation. Explicitly, at any given side of the shell the relevant quantity Q_ττ takes the form given in (8), where η is the Gaussian normal coordinate of the shell, and we evaluate it at the η > 0 side. Squaring, the ττ component of (2) implies equation (9) [11], where H = ȧ/a. This equation is equivalent to that component of (2) only if the orientation of the r coordinate of the bulk agrees with (A4), as explained in Appendix A, which in this case implies that the bulk should be interior. From (9), an equation of motion for the shell can be derived, provided α > 0 (see [11] and Appendix A), in terms of the functions P(a) and A_μ(a) (the latter given by (13)) and of B_ξ. In order for B_ξ to be well defined it is necessary to have A_μ > 0 and 128αP² - ξA_μ^{3/2} > 0 (provided ρ + σ ≠ 0), so we will assume these from now on, and consequently B_ξ > 0 must hold. In this way, equation (10), together with a function ρ(a) characterising the matter-energy content of the brane, determines a(τ) for given initial data (a(τ₀), sign(ȧ(τ₀))). Alternatively, the function ρ(a) can be obtained by solving the continuity equation provided a barotropic equation of state e(ρ, p) = 0 is given.
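For reference, the induced metric on an isotropic brane and the continuity equation that follows from S^j_{i;j} = 0 for its perfect-fluid source take the standard forms sketched below; these are generic textbook expressions, not copies of the paper's numbered equations.

```latex
% Induced FRW metric on the shell and conservation of its perfect-fluid source:
\begin{equation}
  ds^{2}_{\text{shell}} = -d\tau^{2} + a(\tau)^{2}\,d\Sigma_{k}^{2},
  \qquad
  \dot{\rho} + 3\,\frac{\dot{a}}{a}\,(\rho + p) = 0 .
\end{equation}
```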
III. VACUUM THIN SHELL
One can notice the possibility that equation (2) may have non-trivial solutions if the right hand side is zero. It is known that this is indeed the case: in EGB gravity there exist vacuum thin shells [4]. These shells can be understood as an interface between two different vacuum solutions, and as a weak solution of the vacuum field equations. The properties of this kind of shell in this setting have been thoroughly analysed in [5], and here we only summarise the ones important for our purpose. In a spacetime with the symmetries we imposed, an equation of motion for the vacuum shell can be obtained in the form ȧ² = -V_vac(a), where the potential is given by (16); the subindexes ± denote the different vacuum solutions being glued at each side of the shell, and the A_μ functions are those defined by (13). In our case, we will only consider the gluing of solutions of the same action, which means that Λ and α are the same at both sides. What can change from one side to the other are the mass coefficients μ_±, the label of the branches ξ_±, and the orientation of the r coordinate with respect to the shell (which means that the bulk regions can be either interior or exterior). As shown in [5], the only way to possibly glue two GR branches (ξ_+ = ξ_- = -1) is by imposing that the construction has the wormhole orientation (which means that both solutions being glued should be exterior) and α < 0. Also, if the shell glues an interior solution with an exterior one, then they must correspond to different branches (ξ_+ ≠ ξ_-), that is, it must be a "false vacuum bubble". Also, for this configuration, the mass coefficients cannot be equal (μ_+ ≠ μ_-).
Taking a first derivative of (16), one obtains an expression for the acceleration ä of the vacuum shell, equation (17), which will be useful in the next Section.
IV. SPLITTING CONSTRUCTION AND STABILITY CONDITIONS
This is the novel part of the paper. Inspired by the possibility of constructing well-defined solutions of the Einstein equations that represent splitting thin shells [12], we explore the plausibility of an SMS brane-world solution with Z_2-symmetry from which a couple of vacuum thin shells emanate at a given point of the evolution, as illustrated by figure 1.
As the figure suggests, we consider an initial configuration in which the bulk spacetime is originally of the GR type. Because of this fact and the assumption α > 0, the vacuum thin shells must be interfaces between a GR branch and a stringy branch. If we demand that the central brane-world be an embedded submanifold everywhere, including the separation point, we should impose that the normal vector of the brane is continuous (unique) at the separation point (which we characterise by a_s, the scale factor at that moment). This is the same as imposing continuity of ȧ for the brane at this point, which can be written as V_{μ,-1}(a_s) = V_{μ',1}(a_s). It turns out that this continuity condition implies that the normal vectors of the vacuum thin shells at the separation moment also coincide with that of the brane, so V_{μ',1}(a_s) = V_vac(a_s), where in (16) we would have ξ_- = -1, ξ_+ = 1, μ_- = μ, μ_+ = μ'. This can be seen from the junction conditions (2), as Q_ττ, expressed in (8), is a function of a, ȧ and the parameters that characterise the bulk (μ, ξ) at the side being analysed. For the vacuum thin shell we have Q_ττ(a_s, ȧ_{v,s}, μ, ξ = -1) = Q_ττ(a_s, ȧ_{v,s}, μ', ξ = 1), where ȧ_{v,s} is the derivative of the scale factor of the vacuum thin shell with respect to its proper time at a_s; while for the brane before and after the splitting the analogous equality holds with ȧ_{b,s}, the derivative of the scale factor of the brane with respect to its proper time, also at a_s. Given the structure of Q_ττ, both equalities can only hold if ȧ_{v,s} = ȧ_{b,s}, so we will call this common value ȧ_s. Furthermore, one can notice that V_{μ,-1}(a_s) = V_{μ',1}(a_s) is equivalent to equation (18), which allows one to set μ' as a function of (μ, a_s). Calculations are simpler if we define two non-negative functions x(a) and y(a), together with x_s = x(a_s) and y_s = y(a_s). Then, (18) can be written in the form (22). The function ν(y) appearing there is real only in the range 0 ≤ y ≤ 1/2. Nevertheless, h(y) is real for any non-negative value of y, as it can be rewritten in a manifestly real form in which α(y) = atan2(√(2y-1), 1-y) is three times the argument of ν(y) when y > 1/2 and is a monotonically increasing function of y which tends to π when y → ∞. Furthermore, this expression also holds for y ≤ 1/2, as it is an analytic function in the complex plane and is real for any real argument. It can also be shown that h'(y) > 0 in its entire range, and hence h is also invertible.
Then, the solution y_s(x_s) = h^{-1}(g(x_s)) of (22), which must be monotonically increasing as well (as g'(x) > 0), can be obtained numerically (a generic numerical sketch is given below). In this way, we can express μ' in terms of y_s(x_s) through (24). It can be shown from (22) that dy_s/dx_s ≥ 1, which implies y_s(x_s) > x_s in its entire range and hence μ' > μ. The next step in proving the existence of these solutions is to calculate the difference between the accelerations of the shells immediately after an infinitesimal separation that generates the structure illustrated by figure 1. If the accelerations at that point are such that the separation grows with time, which in this case means that the acceleration of the central brane is greater than that of the vacuum thin shell (as r decreases away from the brane), then the construction is possible; otherwise it is forbidden. The equation of motion of the brane (10) after the separation moment can be rewritten as equation (25), and from it the acceleration of the brane follows. On the other hand, the acceleration of any of the vacuum shells can be obtained from (17), written for this configuration.
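Since h is monotone, the inversion y_s = h^{-1}(g(x_s)) referred to above is a one-dimensional bracketing root-finding problem. The sketch below is only illustrative: the bodies of h and g are placeholders, because the paper's explicit definitions are not reproduced here, and they must be replaced by the actual expressions before use.

```python
# Hypothetical sketch: numerically inverting a monotonically increasing function h,
# as needed for y_s(x_s) = h^{-1}(g(x_s)).  h() and g() are placeholders.
import numpy as np
from scipy.optimize import brentq

def h(y):
    # placeholder monotone function; substitute the paper's h(y)
    return y + np.arctan(y)

def g(x):
    # placeholder monotone function; substitute the paper's g(x)
    return 2.0 * x

def y_s(x_s, y_max=1e6):
    """Solve h(y) = g(x_s) for y by bracketing, exploiting monotonicity of h."""
    target = g(x_s)
    return brentq(lambda y: h(y) - target, 0.0, y_max)

if __name__ == "__main__":
    for x in (0.5, 1.0, 5.0):
        print(x, y_s(x))
```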
Although these accelerations are calculated using different time coordinates, it can be shown that ä_v - ä_b is proportional to the relative acceleration calculated with a single time coordinate defined in the stringy region. Let us consider a τ coordinate defined within a shell that is a boundary of a region with metric ds² = -f(r)dt² + f(r)^{-1}dr² + r²dΣ²_k; then ȧ can be related to the coordinate-time derivative of the scale factor, where a(t) is the scale factor of the shell at time t and the dot denotes the derivative with respect to τ. In this way, the acceleration with respect to t can be written as in (29). Then, applying (29) to the separation moment, since ȧ is the same for both shells at that time, and choosing the standard coordinate t of the stringy region (which makes sense as long as f(a_s) > 0), we obtain the relative acceleration, where f(a_s) = k + (a_s²/4α)[1 + (β² + αμ'/a_s⁴)^{1/2}]. In this way, the difference between the acceleration of the vacuum shell and that of the brane at the moment of separation is proportional to an explicit function of a_s and the parameters of the construction. Now, using the continuity equation (14) to eliminate the derivative of ρ, the condition of existence for this class of solutions (ä_v(a_s) - ä_b(a_s) < 0) can be written as inequality (33). There is a straightforward necessary condition that one can read off from (33): σ > p(a_s), which implies that, in a radiation-dominated universe, there is a maximum redshift at which this construction can take place, as we shall see. The only remaining ingredient needed to prove the plausibility of figure 1 is to show that condition (33) is satisfied in a certain range a_min < a_s < a_max for certain parameters and equations of state of the matter-energy content of the brane. The general strategy (which is not exhaustive) that we apply in order to find specific examples stems from an analysis of ξ(x_s) and x(a). We first must determine a positive lower bound, call it b, for the factor (1 - p(a_s)/σ)(1 + ρ(a_s)/σ)^{1/3}, which will depend on the matter-energy content and, in general, will restrict the domain of a_s. Then, although the function ξ(x_s) is positive and monotonically increasing, the function y_s^{2/3}(x_s)/ξ(x_s) is also monotonically increasing. In this way, we demand that the function x(a) acquire values sufficiently greater than x_∞ so that we can choose x_s large enough to satisfy the resulting inequality involving x_∞. This can be done by choosing carefully the parameters within the definition of x(a), which can be written as in (35), where a_μ⁴ = (αμ)/β². We will return to this discussion when we apply it to the case of a radiation-dominated universe.
On the other hand, by means of (35), it can be shown as follows that there cannot be solutions if μ ≤ 0. In that case we would have a_μ⁴ ≤ 0, which implies x(a) ≤ x_∞/(1 + ζ(a))² (with equality for μ = 0), where ζ(a) = ρ(a)/σ > 0. At the same time, the dominant energy condition implies -ζ(a) ≤ p(a)/σ ≤ ζ(a). Then, the left hand side of (33) satisfies a chain of inequalities which, together with the behaviour of the relevant function of y_s, in turn implies that there cannot be a solution satisfying (33) if μ ≤ 0. In particular, this construction is not possible if the initial bulk spacetime is dS_5 or AdS_5, and from now on we will assume μ > 0.
As we shall see, examples can be found, and this kind of construction does exist. Nevertheless, since a general expression of (33) in terms of a_s is complicated, it is useful, in order to gain insight, to analyse its asymptotic forms.
A. Large a_s limit
For a_s large enough, x_s ≈ x_∞ and condition (33) reduces to an inequality involving x_∞ only. As mentioned, (y_s(x_s)/x_s)^{2/3} < ξ(x_s), so this construction is not possible for large a_s. In other words, if ρ(a_s) ≪ σ, |p(a_s)| ≪ σ and a_s ≫ a_μ, then this construction cannot be made. As discussed in appendix C, this fact implies (provided our Friedmann equation is set to asymptotically coincide with the Λ-CDM model) the impossibility of having this kind of splitting at a redshift corresponding to the "standard model regime".

B. Small a_s limit

As discussed, the criterion (33) can only be satisfied if p(a_s) < σ. So, for any matter-energy content such that p'(a) < 0, this solution might not be possible for an arbitrarily small a_s. We assume that the function ω(a) = p(a)/ρ(a) has a limiting value ω₀ = lim_{a→0} ω(a), which implies that for sufficiently small a: ρ(a) ≈ C a^{-3(1+ω₀)} and p(a) ≈ ω₀ ρ(a) (see the short derivation below). In these terms, p(a_s) < σ can only be satisfied for small a_s if ω₀ ≤ 0. This precludes any linear barotropic fluid with positive pressure; in particular it precludes a radiation-dominated universe (ω₀ = 1/3). In this way, the function x(a) has a limiting behaviour such that it tends to a positive constant for ω₀ = 0 (a matter-dominated universe) and to +∞ for ω₀ < 0. On the other hand, Taylor expanding both sides of (22), taking into account (23), gives the small-a_s behaviour of y_s. Then, for ω₀ < 0, (33) reduces to an inequality which always holds for sufficiently small a_s. In the remaining case, which includes a matter-dominated universe (ω₀ = 0), x(a) acquires the limiting value x₀ = (αμ)^{3/2}/(κ⁴αC²). Then, condition (33) again reduces to an inequality which always holds for sufficiently small a_s. We have thus found several scenarios in which the construction is possible: linear barotropic fluids of non-positive pressure, which in particular include the dust brane (a matter-dominated universe), in the limit of small a_s. However, in the context of brane-world cosmology none of these situations can be attained if standard cosmology is to be recovered, as explained in appendix C.
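The asymptotic scaling quoted above follows directly from the continuity equation for a fluid whose equation-of-state parameter tends to ω₀; a short, standard derivation (not specific to this paper) reads:

```latex
% Integrating the continuity equation with p \simeq \omega_0 \rho as a -> 0:
\begin{equation}
  \dot{\rho} + 3\frac{\dot{a}}{a}\,(1+\omega_0)\,\rho = 0
  \;\Longrightarrow\;
  \frac{d\rho}{\rho} = -3(1+\omega_0)\,\frac{da}{a}
  \;\Longrightarrow\;
  \rho(a) \simeq C\,a^{-3(1+\omega_0)},\qquad p(a)\simeq\omega_0\,\rho(a).
\end{equation}
```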
C. Radiation dominated universe
Since this construction is not possible in the "standard model regime", one should check whether it can be realised in the early universe. We then consider the special case in which the matter-energy content of the brane is a photon gas (p = ρ/3). Looking at inequality (33), we notice that the factor ψ(ζ) = (1 - ζ/3)(1 + ζ)^{1/3} is a monotonically decreasing function of ζ = ρ/σ whose maximum value is ψ(0) = 1 (a quick check is given below). Then, in order to satisfy (33) a further inequality must hold; inverting its left hand side, this implies that for a given x_∞ there is a minimum value for x_s, which we may write x_{s,min}(x_∞) > x_∞ (where x_{s,min}(x_∞) is a monotonically increasing function of x_∞ that satisfies x_{s,min}(0) = 0). In this way, for a given x_∞ and x_s > x_{s,min}(x_∞), condition (33) can be written as ψ(ζ) > (x_∞/y_s(x_s))^{2/3} ξ(x_s), which sets a maximum for ζ and, consequently, a minimum for a_s (which we may call a_{s,min}(x_∞, x_s)).
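The monotonicity claim for ψ can be verified directly from the expression given in the text:

```latex
% Differentiating psi(zeta) = (1 - zeta/3)(1 + zeta)^{1/3}:
\begin{equation}
  \psi'(\zeta)
  = -\tfrac{1}{3}(1+\zeta)^{1/3}
    + \tfrac{1}{3}\Big(1-\tfrac{\zeta}{3}\Big)(1+\zeta)^{-2/3}
  = \tfrac{1}{3}(1+\zeta)^{-2/3}\Big[\big(1-\tfrac{\zeta}{3}\big)-(1+\zeta)\Big]
  = -\tfrac{4\zeta}{9}\,(1+\zeta)^{-2/3}\;\le\;0 ,
\end{equation}
```

so ψ decreases for ζ ≥ 0 and attains its maximum ψ(0) = 1.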
On the other hand, expression (35) can be written for this case in terms of a_σ, defined through ρ(a_σ) = σ. If a_μ⁴ > (4/3)a_σ⁴, defining r = a_μ⁴/a_σ⁴, this expression has a maximum at a_m⁴ = a_σ⁴ r/(3r - 4), where it takes a value we denote x_max(r, x_∞). If r ≤ 4/3, then x(a) is a monotonically increasing function that tends asymptotically to x_∞. We can then rule out this case: inequality (33) cannot be satisfied if r ≤ 4/3. We stress that the bound x_max(r, x_∞) is independent of condition (33). We then must compare the two bounds on (x_s/x_∞): (x_max(r, x_∞)/x_∞), which is a monotonically increasing function of r alone, and (x_{s,min}(x_∞)/x_∞), which is a function of x_∞ bounded from below (around 4.35). In this way, in order for these bounds to be compatible with each other, there is a lower bound r_min(x_∞); in particular, there can only be a solution if r > 5.26, which is equivalent to a_μ > 1.51 a_σ. We have thus found a necessary condition for (33): r > r_min(x_∞) > 5.26. If it is satisfied, then the bounds on x_s define a range for a_s that must contain the range in which (33) holds, provided the latter exists.
Finally, as mentioned, for a given pair (x_∞, x_s), (33) is equivalent to a_s > a_{s,min}(x_∞, x_s). If we fix x_∞ alone, a_{s,min}(x_∞, x_s) is a monotonically decreasing function of x_s for x_s > x_{s,min}(x_∞), tending to +∞ in the lower limit and to 3^{-1/4} a_σ when x_s → ∞. Then, there is a range a_min < a_s < a_max in which (33) is satisfied if and only if the curves a = a_{s,min}(x_∞, x) and x = x(a) intersect in the (a, x) plane. The possibility of this intersection is not self-evident, but it can be proven, as follows, by giving a concrete example. This will also be useful to illustrate the definitions we introduced in this subsection.
Examples with x_∞ = 1

If x_∞ = 1 then x_{s,min}(x_∞ = 1) = 48.43. Also, r_min(x_∞ = 1) = 27.78, so we must choose r greater than that in order to find a solution of (33). We then plot in the (a, x) plane the function x(a) for different values of r and the function a = a_{s,min}(x_∞ = 1, x), as illustrated in figure 2.
Graphically, it can be seen that an intersection takes place for r ≥ 49, so there are solutions for a radiation-dominated universe. We illustrate the case r = 60, in which the intersection points are at a_min(r = 60) = 0.99 a_σ and a_max(r = 60) = 1.32 a_σ. In this way, if the parameters are such that x_∞ = 1 and r = 60, which is possible since they are independent (μ is present only in r), then criterion (33) is satisfied in the range 0.99 a_σ < a_s < 1.32 a_σ and a splitting solution, as illustrated in figure 1, can be constructed.

[Figure 2 caption: the curves x(a) for r = 10, 28, 40 and 60, together with a_{s,min}(1, x), plotted in the (a, x) plane; the horizontal axis scale is set in terms of a/a_σ. For r > 27.28 the maximum value of x(a) is greater than x_{s,min}(1) = 48.43. Among these curves only the one corresponding to r = 60 intersects a_{s,min}(1, x), so in that case there is a range for a_s in which (33) is satisfied, whose boundary values are denoted a_min(r = 60) and a_max(r = 60).]
Furthermore, in Section VI we will provide another example in which this construction holds and which tends to the Λ-CDM universe in the large-a limit.
V. FINAL OUTCOME OF THE SPLITTING
Provided that condition (33) is satisfied for a given set of parameters and some a_s, it remains to determine the final outcome of the spacetime after the splitting. The evolution of the central brane is determined by (25), where μ' is given by equation (24). Although the evolution equation is different from the one before the splitting, it seems difficult for the potential to acquire a point of return, provided the original potential (10), with ξ = -1, did not exhibit such a property, as expected in brane-world cosmology applications. Nevertheless, a general analysis of the motion allowed by (10) is outside the scope of this paper. On the other hand, the motion of the vacuum thin shells is much easier to describe qualitatively. In principle, there are two possibilities for the final outcome of the spacetime depending on the fate of the vacuum thin shells: they can either expand indefinitely or collapse. As described, the motion of these shells is determined by the potential (16), with (ξ_+ = 1, ξ_- = -1, μ_+ = μ', μ_- = μ), so the possibility of an indefinite expansion depends on the existence of a point of return and on the relative position of a_s with respect to this point.
The possibility of having extremal points of the effective potential (16) adapted to this situation can be easily addressed by means of (17). From this equation one can see that there must be an extremal point at a_e, the solution of equation (44). The left hand side of this equation is monotonically decreasing with a_e and its image, for positive a_e, is R_+, so the equation has one and only one root. In this way, the effective potential for the motion of the vacuum shell has only one extremal point, and it can be shown to be a maximum, so the possibility of having points of return is determined by V_vac(a_e) > 0, which is equivalent to inequality (45). If this inequality holds and a_s < a_e, an initially expanding shell (which is always the case in this context) will rebound at some point of the evolution and collapse afterwards. In that case, the final outcome of the splitting would be a stringy bulk with mass parameter μ', as illustrated by figure 3.
On the other hand, if V_vac(a_e) ≤ 0, there is no point of return, so the vacuum shell expands indefinitely according to (16). Anyway, an indefinite expansion, as such, is surely not possible, as the shells would eventually recoil. One can notice this by considering the large-a limits of (25) and (16): ȧ_b grows like a² while ȧ_v grows like a⁶. In this way, in a scenario where both shells are supposed to expand indefinitely according to their effective potentials, the shells will end up colliding again, as illustrated by figure 4, and a brane collision analysis, like the ones performed in [16], will play a role in determining the evolution beyond this point.
The criterion we adopted to determine the splitting parameters cannot be applied to resolve the outcome of the collision in this setting: continuity of ȧ_b for the central brane would imply continuity of ȧ_v as well, which is precluded by the Z_2 symmetry. We must then resort to another criterion, which cannot be the continuity of the velocity (the tangent vectors of comoving observers within the shells) or of the normal vectors, as both coincide and would result in the continuity of ȧ_v. We argue that the most reasonable outcome is a recombination of the shells, as illustrated by figure 4, which results in the same bulk spacetime (with the same parameters) as initially. Although this implies a discontinuity of H_b and of the normal vector of the brane, a further rebound is hard to justify, as it would require the introduction of an extra parameter: the initial "rebound" velocity of the vacuum shells (or, equivalently, the mass parameter of the stringy spacetime between the brane and the rebounded vacuum shell).
For a given setting in which (33) is satisfied, both outcomes may occur depending on a_s; for the example developed in this paper this is indeed the case: there is a limiting value of a_s, call it a_c, included in the range satisfying (33), such that one outcome takes place if a_min < a_s < a_c, while the other happens if a_c < a_s < a_max. We illustrate this situation in figure 5 with the parameters of the example we develop in the next Section.
VI. A COSMOLOGICAL EXAMPLE
Let us consider a specific example such that the equation of motion of the brane tends to the standard Friedmann equations with k = 0. In appendix C it is explained that, provided this asymptotic limit holds, two of the parameters (α, β, σ, κ) can be written as functions of the other two. We choose (α, β) as the independent parameters (besides μ), which implies that (κ, σ) can be calculated from (C3). Anyway, (α, β, μ) cannot be arbitrary: they must satisfy the restrictions (C5) and (C7) in order to recover standard cosmology since at least nucleosynthesis and to satisfy observational bounds on dark radiation, respectively. The scale factor of the universe per se is not observable, so in order to construct this example we express the dynamics in terms of cosmological redshift. In this way, we consider (μ/a₀⁴) as the mass parameter to be set, where a₀ represents the present scale factor, then write A_μ(z) = β² + (μ/a₀⁴)(1 + z)⁴ and ρ(z) as described in (C4), and finally replace A_μ and P in (10) and (16) with these expressions. On the other hand, we must consider the necessary condition for (33) derived in Subsection IV C. We then define the redshifts z_μ and z_σ associated with a_μ and a_σ, and choose appropriate values (α, β, z_μ) such that (C5) and (C7) hold and (z_σ(α, β) + 1) > 1.51 (z_μ + 1). In order to simplify the calculations we first set z_μ = 10¹⁷; if we impose z_σ > 1.51 z_μ = 1.51 × 10¹⁷, then, from (C4), and considering that all species of the standard model are relativistic in this regime, z_σ can be written in terms of x_∞, which is a function of (α, β) obtained from (C2), and of the function D(x) defined in (B4). Now that z_μ is set, we must find a pair (α, β) that satisfies all the conditions mentioned in the above paragraph. A pair that does the job is (α, β) = (10⁻¹⁴ m², 0.01), so the parameters are finally fixed by these choices.
Then we get z_σ = 3.058 × 10¹⁷. For these parameters it turns out that condition (33) is satisfied in the range 1.20 × 10¹⁷ < z_s < 3.98 × 10¹⁷, so the construction illustrated in figure 1 can be made for any value of z_s within this range. As explained in Section V, for a given z_s we can determine the final outcome of the splitting by means of (44) and (45). We first need to solve (44) in terms of z_e = (a₀/a_e) - 1, and for that one must determine (μ'/a₀⁴), which can be written as a function of z_s. In this way, for this example one can numerically obtain z_e(z_s) from (44) in the range 1.20 × 10¹⁷ < z_s < 3.98 × 10¹⁷, and then substitute it into (45). It turns out that z_e < z_s in the entire range; if z_s < z_c = 1.302 × 10¹⁷ then the shells will recoil and the final outcome of the splitting is that illustrated in figure 4, while if z_s > z_c then the final outcome is a stringy bulk as in figure 3. The different possibilities are illustrated in figure 5, where H² for the resulting vacuum shells corresponding to z_s = 1.25 × 10¹⁷ and z_s = 1.35 × 10¹⁷ is plotted as a function of z. However, there are reasons to avoid a stringy bulk as a final outcome, as it is well known that this branch exhibits instabilities against perturbations [15], so one may simply preclude this scenario. In any case, a study of the instability of this family of solutions is outside the scope of the present paper. We have thus obtained a concrete example in which the construction can be made and which tends to standard cosmology at low redshift. The redshift at separation z_s can be chosen within a certain range, and both final outcomes are possible depending on this choice.
VII. CONCLUDING REMARKS
In this work we obtain a new class of solutions in Einstein-Gauss-Bonnet gravity, which involves a braneworld in a Z_2-symmetric setting from which a pair of vacuum thin shells emanate. The possibility of this construction is non-trivial: it can only be realised if the matter-energy content of the braneworld, its scale factor at the splitting point and the parameters of the bulk satisfy (33). In particular, it is not possible in a regime approximating standard cosmology, or for an arbitrarily small scale factor in a radiation-dominated universe. Nevertheless, there are examples that tend to standard cosmology at late times and satisfy (33) at early, but not arbitrarily early, times. Of particular interest is the case in which the splitting shells recoil, as illustrated by figure 4. In this case, the bulk spacetime at both sides of the central shell is the same before the splitting as after the recoil, but different for an interval of time in which the bulk is stringy, whose extension depends on the parameters of the construction. During this particular phase of the evolution of the braneworld the dynamics changes and, in a case developed to emulate the Λ-CDM universe at late times, this may affect the thermal history of the universe. As mentioned, this mechanism may only play a role in the early universe. One may then speculate about the consequences of a sudden change in the acceleration of the rate of expansion, for example for baryogenesis [17], leptogenesis [18] or inflation [19], but these are outside the scope of the present paper and a matter of future research.
The existence of these solutions is an interesting mathematical fact in itself, because it might represent a drawback for uniqueness of the initial value problem involving thin shells in Lovelock gravity. Anyway, as illustrated in [12], this kind of splitting solution also exists in general relativity, so one can argue that this non-uniqueness is related more to the definition of thin shells than to the structure of the EGB field equations. The main difference with respect to the GR splitting solutions is the very existence of vacuum thin shells. In the GR case there must be two different matter-energy fields constituting a single thin shell, and the splitting solution consists of the smooth separation of these constituents. On the other hand, in Lovelock gravity there is no need to separate two different matter-energy fields; one might just consider a vacuum thin shell emanating from a given non-vacuum thin shell, and it turns out that this is possible in a non-trivial way. One possible argument against the naturalness of the constructions made in [12] is the lack of a triggering mechanism for the splitting, as one may deem a single evolving thin shell more natural than the evolution resulting from an infinitesimal separation of the constituent fields. Although this argument is contentious, it is worth noticing that in our case this potential shortcoming is not present, as there is no need to "arbitrarily" separate two matter-energy fields to construct the splitting: there is no matter-energy "leaving" the original thin shell.
On the other hand, in the last few years there has been interest in deriving solutions with vacuum thin shells in the context of the thermodynamic instability of vacuum solutions of Lovelock theory. As we mentioned for the case of EGB gravity, a vacuum thin shell can be interpreted as a "false vacuum bubble", an interface between two different vacua of the theory. There are analogous solutions in higher-dimensional Lovelock theory, in which the isotropic vacuum solutions described in (5) are generalised and also display different branches. Depending on the parameters of the theory, there are up to K branches, where K is the order of the highest-order curvature term in the field equations [20], and all possible pairs of different branches can in principle be glued with a vacuum thin shell, constituting in this way many different types of vacuum bubble. The static vacuum bubbles can be analysed thermodynamically by Euclidean methods, and the transition probability among the different vacuum solutions can be addressed semiclassically [21,22]. For a given set of boundary conditions, the "true vacuum" corresponding to them can be singled out by this method. In particular, a "metastable" solution may "thermally" decay to a bubble configuration and then to the "true vacuum" via classical dynamics. This analysis is important in the context of the AdS/CFT correspondence, because it reveals an intricate and previously unknown behaviour of gravity theories that should be replicated in the dual CFTs.
Furthermore, there have also been some studies of this sort involving non-vacuum static solutions [23] with a self-gravitating conformal scalar field, but not, as far as we know, involving non-vacuum thin shells. This latter possibility should be of interest in the quest to explore the most stable solutions that can be interpreted as final outcomes of the evolution of different types of matter-energy configurations, and is a matter of future research. We also remark that, although the solutions considered here are thought of as dynamical, the method developed in this paper is perfectly applicable to a static case outside the context of braneworld cosmology. We might then interpret the present work as the foundation of a different kind of stability analysis for thin shells in Lovelock gravity, which adds to perturbation analysis and thermodynamic stability analysis. In this way, a generalisation of (33) to non-Z_2-symmetric settings, which would be algebraically more involved, as illustrated in [24], and a comparison with other types of stability analysis, are also matters of future research.
VIII. ACKNOWLEDGEMENTS
The author acknowledges Ernesto Eiroa for interesting comments and reading the whole manuscript. He also thanks the referees for pointing out a non-trivial mistake and other useful comments. MAR is supported by CONICET.
By means of (10), this condition can be written as equation (A4). Under our assumption ρ + σ > 0, it is clear that if the bulk is GR then it must also be interior, that is, each bulk region can be defined by an inequality r < a(t). On the other hand, if the bulk is stringy, in order to have B₁(P², A_μ^{3/2}) well defined, 2^{7/3}αP^{2/3} ≥ A_μ^{1/2} must hold. But at the same time, by definition, B₁(P², A_μ^{3/2}) ≥ 2^{7/3}αP^{2/3}, so the bulk must be interior regardless of the value of ξ. In the same way, one can see that if we allowed ρ + σ < 0 then the bulk should be exterior, also regardless of the value of ξ. Nevertheless, as explained in appendix C, this last possibility forbids the emergence of standard cosmology in the large-a limit.
Appendix B: Large a asymptotics of the effective potential for the central brane

The potential in the equation of motion of the shell can be written as in (B1), so we need to find the large-a limit of P(a) and x(a). The matter-energy degrees of freedom within the braneworld appear in the effective potential through the function P. If we impose that the matter-energy content satisfies the dominant energy condition, but is not (and does not contain) a cosmological-constant fluid, then ρ → 0 when a → ∞. Then, linearising the left hand side of (B1) as a function of (P², x) around the values ((κ⁴σ²)/(2⁸α²), x_∞), we obtain an expression in which both D(x) and g(x) - D(x) are non-negative for x > 0. We can then see that the equation of motion H² = -V_{μ,-1}(a) tends asymptotically to a form similar to the standard Friedmann equations, but with an additional term whose effect on the dynamics is as if there were a radiation density not included in ρ (the so-called dark radiation).
where we recall x_∞ = β³/(κ⁴σ²α). In this way, the matter-energy content of the brane does not need to include dark energy, as it appears as a consequence of the setting. These identifications justify the assumption σ > 0 and allow us to express two of the parameters (α, β, κ, σ) in terms of the other two. If we choose α and β as the independent ones, we recall that both parameters are positive and express the other two as functions of them (σ(α, β) and κ(α, β)).
After some manipulation of the relations (C1) we can express x_∞(α, β) implicitly by means of equation (C2). From this expression it can be seen that x_∞ grows with β and decreases with α, although the dependence on α is only significant if O(α) > 10⁵¹. Also from (C2) it is deduced, because the left hand side asymptotically approaches 2 for large x_∞, that (C1) can only be satisfied if β < 1 + (4/3)αΛ₄. Then, we can obtain the other parameters from (C3). It can be seen from equation (35) that the approximation (B3) holds only if ρ(a) ≪ σ and a ≫ a_μ. The thermal history of the universe according to standard cosmology predicts very well different aspects of the observed universe, in particular the primordial abundances [25], so we require that this approximation be valid at least since nucleosynthesis, more specifically since the neutron freeze-out at O(z) = 10¹⁰. In this way, in the framework of standard cosmology, whenever this limit does not apply the matter-energy content of the braneworld is essentially pure radiation, so if we want to describe the dynamics of the early universe then the matter-energy content should be written as in (C4) [26], where g_*(T) is the number of relativistic degrees of freedom at a given temperature, g_{*s}(T) is the number of effective degrees of freedom in entropy at the same temperature, and Ω_r includes the neutrino energy density. Both g_* and g_{*s} are implicit functions of z as well (T(z) is obtained from g_{*s}(T)T³/z³ = const) and are independent of the Friedmann equations. According to the standard model of particle physics, for O(T) < 10 keV, the combination of g_*(T) and g_{*s}(T) entering (C4) (with a factor 1.84) grows with z, but at a much slower rate than z⁴. In this way, using the best-fit cosmological parameters from Planck [14], we demand σ(α, β) ≫ ρ(z = 10¹⁰) = 4.94 × 10⁻¹⁸ m⁻² and z_μ = (β²/(αμ))^{1/4} a₀ ≫ 10¹⁰, where a₀ is the present scale factor. On the other hand, there is one deviation from standard cosmology that forms part of the dominant term in the radiation era: the dark radiation term. This must be limited to a small fraction of the estimated radiation density parameter Ω_r = 9.16 × 10⁻⁵. Then, from (C3) we demand that Ω_dr(α, β, x_∞, z_μ) be a small fraction of Ω_r; this is condition (C7). We must then choose (α, β, z_μ) such that (C5) and (C7) hold. One can notice that the restrictions are compatible with each other: for α sufficiently small, x_∞ is essentially a monotonically decreasing function of β only and, for fixed β, σ can be made arbitrarily large. On the other hand, for z_μ sufficiently large and fixed α and β, Ω_dr can be made arbitrarily small. In this way, for a given value of β we choose a sufficiently small α in order to satisfy (C5), and then choose a sufficiently large z_μ > 10¹⁰ in order to satisfy (C7).
Methylated Vnn1 at promoter regions induces asthma occurrence via the PI3K/Akt/NFκB-mediated inflammation in IUGR mice
ABSTRACT Infants with intrauterine growth retardation (IUGR) have a high risk of developing bronchial asthma in childhood, but the underlying mechanisms remain unclear. This study aimed to disclose the role of vascular non-inflammatory molecule 1 (vannin-1, encoded by the Vnn1 gene) and its downstream signaling in IUGR asthmatic mice induced by ovalbumin. Significant histological alterations and an increase of vannin-1 expression were revealed in IUGR asthmatic mice, accompanied by elevated methylation of Vnn1 promoter regions. In IUGR asthmatic mice, we also found (i) a direct binding of HNF4α and PGC1α to the Vnn1 promoter by ChIP assay; (ii) a direct interaction of HNF4α with PGC1α; (iii) upregulation of phospho-PI3K p85/p55 and phospho-Akt (Ser473) and downregulation of phospho-PTEN (Tyr366), and (iv) an increase in nuclear NFκB p65 and a decrease in cytosolic IκB-α. In primary cultured bronchial epithelial cells derived from the IUGR asthmatic mice, knockdown of Vnn1 prevented upregulation of phospho-Akt (Ser473) and an increase of reactive oxygen species (ROS) and TGF-β production. Taken together, we demonstrate that elevated vannin-1 activates the PI3K/Akt/NFκB signaling pathway, leading to ROS and inflammation reactions responsible for asthma occurrence in IUGR individuals. We also disclose that interaction of PGC1α and HNF4α promotes methylation of Vnn1 promoter regions and then upregulates vannin-1 expression.
INTRODUCTION
In recent years, the survival rate of premature and low birth weight infants has increased year by year globally with the continuous development of perinatal medicine, assisted reproductive technology and neonatal rescue technology. A significant proportion of these low birth weight infants have intrauterine growth retardation (IUGR), which is defined as a birth weight below the tenth percentile for gestational age (Sharma et al., 2016a,b). Epidemiological surveys have shown a significantly increased risk of developing bronchial asthma in childhood and adulthood among children born with IUGR (Gatford et al., 2017).
With the increasing incidence of IUGR, understanding of the pathogenesis and pathophysiology of bronchial asthma has continued to grow and be updated. Current work focuses mainly on inflammation, altered immune responses, and airway remodeling caused by abnormal subepithelial myofibroblasts and chronic inflammation. Studies have shown that abnormalities in multiple signaling pathways in lung tissue are involved in the development and progression of asthma, such as PI3K/Akt, MAPK, ERK, JAK and c-Jun (El-Hashim et al., 2017; Li et al., 2015; Southworth et al., 2018; Wagh et al., 2017; Yang et al., 2018). In particular, many studies have shown that activation of the PI3K/Akt pathway plays an important role in the development of asthma by activating oxidative stress and inflammatory responses (Wagh et al., 2017; Yang et al., 2018). The serine/threonine kinase Akt, also known as protein kinase B (PKB), is activated by the lipid products of phosphatidylinositol 3-kinase (PI3K). However, the molecular mechanism of bronchial asthma in IUGR children is not clear.
Recent studies have shown that DNA methylation abnormalities may be associated with a predisposition to obesity, insulin resistance, diabetes and hypertension in adults born with IUGR. It was reported that DNA methylation of exon 2 of dual specificity phosphatase 5 (DUSP5) in IUGR rats caused an increase in its mRNA expression, which led to insulin resistance by regulating Ras-MAPK signaling pathway activity (Fu et al., 2006). Increased phosphorylation of the insulin receptor substrate (IRS) leads to the development of insulin resistance. In addition, postnatal overnutrition in IUGR rats can upregulate DNA methylation levels at specific sites of peroxisome proliferator activated receptor γ coactivator-1α (PGC1α), promoting the development of insulin resistance, in which PI3K/Akt activity is reduced (Xie et al., 2015). Vascular non-inflammatory molecule 1 (vannin-1) is a GPI-anchored cell surface protein encoded by the Vnn1 gene. Vannin-1 is a newly discovered molecule that possesses pantetheinase activity, which plays a role in the regulation of inflammation and the oxidative-stress response. Human and mouse Vnn1 share approximately 80% homology. In childhood asthma, increased Vnn1 mRNA levels have been found to be associated with hormone sensitivity (Xiao et al., 2015). Therefore, this study aimed to investigate the regulatory role of Vnn1 in PI3K/Akt signaling activity in IUGR mice challenged with ovalbumin (OVA), in order to uncover potential molecular mechanisms of asthma in IUGR children.
RESULTS
Asthma is induced in the nmIUG and the IUGR mice

As previously described (Fu et al., 2006; Xing et al., 2019), the normal intrauterine growth (nmIUG) and the IUGR pups were produced by feeding female mice with normal and low protein diets, respectively. Birth weight was measured at 6 h, showing a significant (P<0.01) reduction in the IUGR group (1.15±0.24 g) compared to the nmIUG group (1.85±0.52 g) (Fig. 1A).
Asthma was then induced with OVA in 6-week-old IUGR and nmIUG mice, whereas PBS inductions were used as the controls. Asthma is a chronic inflammatory airway disease in which interleukin-4 (IL-4), IL-13 and TNF-α are involved (Manni et al., 2016). These inflammatory factors promote airway eosinophil infiltration, mucus overproduction, bronchial hyperresponsiveness and immunoglobulin E (IgE) synthesis (Manni et al., 2016). IgE levels were measured in the serum, showing a dramatic elevation (P<0.01) in the OVA group compared to the PBS controls, in both the nmIUG and IUGR groups (Fig. 1B). To evaluate the inflammatory reactions of the bronchi, bronchoalveolar lavage fluid (BALF) was collected. Compared to the PBS controls, the levels of IL-13, IL-4 and TNF-α were increased significantly (P<0.01) in both the nmIUG and the IUGR mice challenged with OVA (Fig. 1C). Cells in the BALF were also classified and counted. Compared to the PBS controls, the numbers of eosinophils, lymphocytes and macrophages, as well as the total cell numbers, were significantly higher (P<0.01) in both the nmIUG-OVA and the IUGR-OVA groups (Fig. 1D). Notably, the levels of IgE, IL-13 and TNF-α, as well as the numbers of inflammatory cells in the BALF, were significantly higher (P<0.01) in the IUGR-OVA group than in the nmIUG-OVA group (Fig. 1B-D).

Fig. 1. Establishment of asthma in IUGR mice. IUGR was established by feeding pregnant mice with a low protein diet. (A) Birth weight measured at 6 h after birth, showing a significant reduction in the IUGR group in comparison with the normal intrauterine growth (nmIUG) group. Asthma was induced with OVA in the IUGR and the nmIUG groups; PBS induction was used as the control. (B) The concentration of IgE in serum was measured using an ELISA kit. (C) Bronchoalveolar lavage fluid (BALF) was collected, and the levels of IL-13, IL-4 and TNF-α were assessed with ELISA assays. (D) The numbers of eosinophils, lymphocytes and macrophages in BALF were counted and compared. Data are shown as mean±s.d. n=16 (A) and 8 (B-D). *P<0.01.
Hematoxylin and Eosin (H&E) staining was performed on lung tissue. We observed obvious eosinophil infiltration in alveolar tissue (Fig. 2A) and bronchi (Fig. 2B) in the OVA-challenged mice, particularly in the IUGR mice. We also performed periodic acid-Schiff (PAS) staining, which showed that the surface area of mucin-containing goblet cells was markedly increased in OVA-induced asthmatic mice compared to that in PBS controls (Fig. 3). Moreover, IUGR-OVA mice had more severe mucus production in the bronchial airway than nmIUG-OVA mice (Fig. 3). These findings demonstrated that the asthma model had been successfully induced, and that more severe asthmatic inflammation was present in the bronchi of the IUGR mice.
Expression of Vnn1 is elevated in asthmatic IUGR mice

It has been reported that the methylation status of Vnn1 has obvious impacts on its mRNA level (Xiao et al., 2015). In this study, we first assessed the methylation levels of the Vnn1 promoter region in asthmatic IUGR and nmIUG mice. Our data showed that, compared to the PBS controls, the methylation frequency of CpG islands in the Vnn1 promoter region was significantly elevated (P<0.01) in the IUGR-OVA group, but not in the nmIUG-OVA group (Fig. 4A). Consistent with this finding, and in comparison with the PBS controls, we detected a significant (P<0.001) increase of vannin-1 expression at both the mRNA and protein levels in the IUGR-OVA group, but not in the nmIUG-OVA group (Fig. 4B,C). Therefore, the function of vannin-1 was investigated in asthmatic IUGR mice in the following experiments.
PI3K/Akt signaling is activated in asthmatic IUGR mice

The PI3K/Akt signaling pathway plays an important role in the release of various cytokines and inflammatory factors (Martini et al., 2014). In the IUGR asthmatic mice, we evaluated its activation levels in lysates isolated from lung tissues. Immunoblot assays showed that the phospho-PI3K p85 Tyr458/p55 Tyr199 and phospho-Akt Ser473 levels were significantly increased (P<0.001) in the OVA group compared to the PBS controls (Fig. 5A). PTEN is a critical negative regulator of PI3K/Akt activation (Martini et al., 2014). In this study, a reduction of phospho-PTEN Tyr366 was detected in the OVA group (Fig. 5B). Previous studies suggest that Akt regulates the transcriptional activity of nuclear factor-κB (NFκB) by inducing phosphorylation and subsequent degradation of the inhibitor of κB (IκB). NFκB, a family of transcription factors, regulates diverse cellular activities related to inflammation and immune responses (Chauhan et al., 2018). Therefore, we assessed the nuclear levels of NFκB. Our findings show that nuclear NFκB abundance was increased dramatically (P<0.001) in the OVA group, which was accompanied by a reduction of IκB-α in the cytosolic fractions (Fig. 5C). In the lung tissue lysates, the levels of reactive oxygen species (ROS), TGF-β and IL-1β were also significantly elevated (P<0.001) in the OVA group compared to the PBS controls (Fig. 5D). These findings suggest that the PI3K/Akt signaling pathway plays a critical role in the development of asthma, at least partially through NFκB-mediated production of ROS, IL-1β and TGF-β.
Association of PGC1α and HNF4α with Vnn1 in asthmatic IUGR mice

As described above, the methylation frequency of the Vnn1 promoter region was elevated in the IUGR mice following asthma induction. It has been reported that PGC1α is a key upstream regulator of Vnn1 transcription in liver gluconeogenesis, in which hepatocyte nuclear factor-4α (HNF4α) is required (Chen et al., 2014). Therefore, we assessed the levels of PGC1α and HNF4α in nuclear fractions from lung tissue. Our results show that the abundance of both PGC1α and HNF4α was significantly increased (P<0.001) in the OVA group compared to the PBS controls (Fig. 6A). Further evaluation using an immunoprecipitation assay revealed a remarkable interaction between PGC1α and HNF4α in the OVA group (Fig. 6B). To test whether PGC1α and HNF4α regulated Vnn1 transcription through binding to its promoter region, we performed a ChIP assay with anti-PGC1α and anti-HNF4α antibodies, followed by qPCR using specific primers for the Vnn1 promoter. The binding of PGC1α and HNF4α to the Vnn1 promoter was calculated as the percentage of DNA precipitated relative to the total input. We found that both PGC1α and HNF4α, and in particular HNF4α, bound to a greater extent to the Vnn1 promoter in the OVA group compared to the PBS controls (Fig. 6C).
Knockdown of Vnn1 inhibits Akt activation as well as inflammatory cytokines and ROS production in primary bronchial epithelial cells isolated from asthmatic IUGR mice
To verify whether vannin-1 directly regulated the PI3K/Akt signaling required for asthma occurrence in IUGR mice, we knocked down Vnn1 expression using a lentiviral shRNA specifically targeted against mouse Vnn1 and evaluated the levels of phospho-Akt Ser473 in primary cultured bronchial epithelial cells. The mRNA and protein levels of Vnn1 were dramatically increased (P<0.001) in cultured cells from the IUGR asthmatic mice, which was significantly (P<0.05) prevented by shRNA-Vnn1 (Fig. 7A,B). The reduction of phospho-PTEN Tyr366 and increase of phospho-Akt Ser473 activity were dramatically (P<0.01) inhibited by shRNA-Vnn1 in primary bronchial epithelial cells isolated from IUGR asthmatic mice (Fig. 7C). In addition, the elevation of ROS production was also prevented (P<0.01) by shRNA-Vnn1 in the cells isolated from IUGR asthmatic mice (Fig. 7D). We further assessed the levels of inflammatory cytokines in the culture media. A significant increase in IL-13, IL-4 and TNF-α (P<0.001) was detected in the OVA group compared to the PBS controls, which was dramatically (P<0.01) prevented by shRNA-Vnn1 (Fig. 7E).

Fig. 3. Lung tissue was stained with periodic acid-Schiff (PAS) in IUGR and nmIUG asthmatic mice. Asthma was induced with OVA in IUGR and nmIUG mice. PBS induction was used as the control. Lung tissue was prepared for PAS staining to reveal mucus production. Representative images and higher-magnification images indicated by dashed boxes are provided. The surface area of mucin-containing goblet cells per total surface area of airway epithelial basal membrane was quantitated and compared. Original magnification ×400. Scale bar: 50 µm. Data are shown as mean±s.d. n=4. *P<0.001 versus PBS, #P<0.05 IUGR-OVA versus nmIUG-OVA.
DISCUSSION
An asthma model was successfully induced in the current study (Figs 1-3). We found that PI3K and Akt activity was increased significantly in IUGR asthmatic mice (Fig. 5A). Similarly, it has been reported that expression of PI3K was elevated in a rat asthma model (Xia et al., 2012). The use of a PI3K inhibitor alleviated airway inflammation and hyperresponsiveness through reduction of nitric oxide, which is closely related to the development of asthma (Xia et al., 2012). In a mouse asthma model, it was also found that upregulation of phospho-Akt was involved in the occurrence of asthma (Cheng et al., 2011). Therefore, these findings suggest that PI3K/Akt signaling plays a critical role in the pathogenesis of asthma in IUGR mice.
Vannin-1, which is encoded by the gene Vnn1, is an epithelial ectoenzyme with pantetheinase activity that provides cysteamine/cystamine to tissues and is implicated in redox homeostasis (Berruyer et al., 2004). The expression level of Vnn1 in lungs was not altered in experimental mouse asthma models challenged by repeated allergen or IL-13 (Lewis et al., 2009; Zimmermann et al., 2004). Asthma developed in Vnn1-knockout mice following a challenge by house dust mites (Xiao et al., 2015). Interestingly, it was reported that asthma patients with downregulation of Vnn1 gene expression were not sensitive to glucocorticoid therapy. Absence of the Vnn1 gene resulted in resistance to dexamethasone treatment, which was reflected by persistent airway hyperresponsiveness and inflammatory cells in the lungs in an asthma mouse model (Xiao et al., 2015). These findings suggest that vannin-1 may contribute to an optimal host response to corticosteroid treatment. Nevertheless, in an experimental mouse model, Vnn1-knockout mice exhibited resistance to oxidative injury induced by whole-body irradiation, presenting with a reduction of inflammatory responses to ROS inducers in the thymus (Berruyer et al., 2004), suggesting that vannin-1 is also involved in the production of ROS and the oxidative-stress reaction. Moreover, we found that expression of vannin-1 in lung tissue was significantly increased at both the mRNA and protein levels in IUGR asthmatic mice, but not in nmIUG mice (Fig. 4B,C). These findings imply that upregulation of vannin-1 did occur in IUGR mice with asthma, and that asthma in IUGR mice with increased vannin-1 may respond better to corticosteroid treatment. To investigate how Vnn1 expression levels are regulated, we analyzed the methylation frequency of the Vnn1 promoter region. Our results showed significant upregulation of methylation of the Vnn1 promoter in the IUGR asthmatic mice, but not in the nmIUG mice (Fig. 4A). Methylation of CpG sites at promoter regions is generally thought to cause gene silencing (Hon et al., 2012). However, positive correlations between promoter methylation and increased gene expression have been reported (Wagner et al., 2014). It was found that DNA methylation changes in nitric oxide signaling systems such as nitric oxide synthase and arginase are associated with chronic cardiopulmonary disease in adults with IUGR (Xie et al., 2015). It should be noted that a more serious degree of asthma developed in the IUGR mice, presenting with more significant eosinophil infiltration, mucus accumulation and inflammatory cytokine production (Figs 1-3). In addition, methylation of the Vnn1 promoter region and its expression level increased dramatically only in the IUGR asthmatic mice (Fig. 4). Thus, elevation of vannin-1 may be responsible for the prominent histological alterations of lung tissues in the IUGR asthmatic mice.

Fig. 4. The methylation and expression levels of Vnn1 in IUGR and nmIUG asthmatic mice. Asthma was induced with OVA in IUGR and nmIUG mice. PBS induction was used as the control. (A) Total DNA was extracted, and sequencing of the CpG islands in the Vnn1 promoter region was performed to assess the methylation levels of the Vnn1 promoter. (B) Total RNA was extracted from lung tissues, and qPCR was used to assess expression of Vnn1 at the mRNA level. (C) Total protein was extracted from lung tissues, and an immunoblot assay was performed for expression of Vnn1 at the protein level. Data are shown as mean±s.d. n=8. *P<0.001, #P<0.05.
Nevertheless, the precise mechanism by which vannin-1 is upregulated in IUGR mice following OVA challenge should be investigated further.
In this study, we explored how transcription of the Vnn1 gene is regulated in the IUGR asthmatic mice. Vannin-1 is a liver-enriched oxidative stress sensor that has been implicated in the regulation of multiple metabolic pathways. It has been found that the Vnn1 promoter has two HNF4α binding sites and that HNF4α can mediate the activation of Vnn1 transcription by recruiting PGC1α, which plays a crucial role in the regulation of gluconeogenesis (Chen et al., 2014). Thus, we assessed the expression levels of HNF4α and PGC1α in our model. Our data show that the protein levels of both HNF4α and PGC1α were significantly elevated in lung tissues (Fig. 6A), and an interaction between HNF4α and PGC1α was also detected in the IUGR asthmatic mice (Fig. 6B). Furthermore, the ChIP assay showed that PGC1α and HNF4α, and in particular HNF4α, bound to a great extent to the Vnn1 promoter (Fig. 6C). In a diabetic model, increased Vnn1 induced a reduction in Akt phosphorylation, which might be associated with insulin resistance (Chen et al., 2014). However, we detected an elevation of PI3K/Akt activity in the IUGR asthmatic mice (Fig. 5A), which was supported by a reduction in phospho-PTEN Tyr366, a negative regulator of PI3K/Akt signaling (Fig. 5B). Akt is activated by the lipid products of PI3K and phosphorylates a variety of protein targets such as IκB, a key regulator of the NFκB pathway that controls cell survival, proliferation and motility (Chauhan et al., 2018; Martini et al., 2014). Upon stimulation, IκB is phosphorylated at critical serine residues, resulting in polyubiquitination and degradation. In our study, we found a significant increase of the NFκB p65 subunit in the nuclei and a decrease of IκB-α in the cytosolic fractions (Fig. 5C), suggesting that the NFκB pathway was activated and involved in asthma in IUGR mice.
To further demonstrate whether Vnn1 induced PI3K/Akt pathway activation, we isolated and performed a primary culture of bronchial epithelial cells from the lungs of IUGR asthmatic mice. Thereafter, we performed a knockdown assay by infecting the cells with a validated shRNA specifically targeted against the mouse Vnn1 gene. Our data show that Vnn1 expression was dramatically reduced by shRNA-Vnn1 (Fig. 7A,B). Vnn1 knockdown also decreased the level of phospho-Akt Ser473 and increased phospho-PTEN Tyr366 (Fig. 7C). Moreover, the increased production of ROS and inflammatory cytokines such as IL-4, IL-13 and TNF-α was suppressed by shRNA-Vnn1 in the cells from IUGR asthmatic mice (Fig. 7D,E). These findings may indicate a direct role of Vnn1 in the development of allergic airway inflammation in IUGR asthmatic mice.
Taken together, our findings demonstrate that following OVA challenge, the interaction of HNF4α and PGC1α increased the methylation frequency of the Vnn1 promoter region and thus upregulated its expression, resulting in activation of PI3K/Akt/NFκB signaling, which is responsible for ROS production and the release of inflammatory mediators, and ultimately in the occurrence of asthma in IUGR mice. These findings may point to a potential therapeutic target for asthma in IUGR children.
MATERIALS AND METHODS

Animals and experimental design
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of Peking University. The protocol was approved by the Committee on the Ethics of Animal Experiments of the Peking University Third Hospital (protocol number: LA2017200). The IUGR mouse model was established as described previously (Fu et al., 2006; Xing et al., 2019). BALB/c mice were purchased from the Laboratory Animal Science Department of Peking University Health and Science Center (Beijing, China). Twenty female mice were mated with males overnight, and pregnancy was verified by examining the vaginal sperm plugs. Pregnant mice were randomly fed an isocaloric (30.50 kcal/g) diet containing 8% protein (low-protein diet) or 20% protein (normal diet) from day 1 of pregnancy until the birth of their pups. Both diets were obtained from Beijing Huakang Biotechnology Co., Ltd. (Beijing, China). Weights were recorded at birth, and IUGR pups were defined as those with a birth weight more than 2 standard deviations below the mean of the controls. In this study, male pups were studied in order to avoid sex and hormonal influences. All pregnant mice during lactation, and all pups after weaning at 21 days of age, were fed the normal diet. Fresh diet and water were provided daily ad libitum. An animal technician who was not involved in outcome assessment performed the diet assignment.
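The 2-standard-deviation birth-weight criterion described above can be expressed as a simple threshold rule. Below is a minimal, illustrative Python sketch of that classification; the birth weights shown are hypothetical and serve only to demonstrate the cutoff, not to reproduce the study data.

```python
from statistics import mean, stdev

def classify_iugr(pup_weights_g, control_weights_g):
    """Label pups as IUGR if birth weight falls more than
    2 standard deviations below the control-group mean."""
    cutoff = mean(control_weights_g) - 2 * stdev(control_weights_g)
    return [{"weight_g": w, "iugr": w < cutoff} for w in pup_weights_g]

# Hypothetical example: normal-diet (nmIUG) litters vs. low-protein-diet litters
controls = [1.85, 1.92, 1.78, 1.88, 1.81, 1.90]   # nmIUG birth weights (g)
low_protein = [1.20, 1.10, 1.45, 1.05, 1.30]       # candidate IUGR birth weights (g)

for pup in classify_iugr(low_protein, controls):
    print(pup)
```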
At 6 weeks after birth, asthma was induced in the nmIUG and the IUGR pups. For the asthma group, mice were sensitized with two intraperitoneal injections of 100 mg ovalbumin (OVA; cat. no. vac-pova; InvivoGen, San Diego, CA, USA) emulsified in aluminum hydroxide (cat. no. vac-alu-250; InvivoGen) with a 2-week interval. The animals were then challenged with daily inhalation of OVA for 2 weeks. For the control group, the mice were injected intraperitoneally and challenged by inhalation with normal saline. Anesthesia was performed using isoflurane (2% inhalant), and all efforts were made to minimize suffering. Blood was collected from the angular vein, and plasma was separated, frozen in dry ice and kept at −80°C until analysis. BALF was collected for inflammatory cell counts. Mice were then euthanized by cervical dislocation under anesthesia. The lung tissue was dissected. One part was prepared for H&E and periodic acid-Schiff (PAS) staining. The other parts were weighed, snap frozen in liquid nitrogen and stored at −80°C for further analysis.

Fig. 6. PGC1α and HNF4α interact and bind to the Vnn1 promoter in IUGR asthmatic mice. Asthma was induced with OVA in IUGR mice. PBS induction was used as the control. Nuclear protein was extracted from lung tissues. (A) Immunoblot assay was performed for expression of PGC1α and HNF4α. (B) IP was performed using mouse anti-HNF4α antibody and Protein G-coupled agarose beads, followed by immunoblot with rabbit anti-PGC1α antibody. Normal mouse IgG was used as the IP control. (C) In primary cultured bronchial epithelial cells isolated from IUGR mice injected with OVA or PBS, a ChIP assay was performed using mouse anti-HNF4α or rabbit anti-PGC1α antibodies. Normal mouse or rabbit IgG was used as the control. Graphics show the percentage of total DNA immunoprecipitated by each indicated antibody. Data are shown as mean±s.d. n=4. *P<0.01 OVA versus PBS.

Fig. 7. Vnn1 knockdown inhibits Akt activation and ROS production in primary cultured bronchial epithelial cells derived from IUGR asthmatic mice. Knockdown of Vnn1 was performed using shVnn specifically targeted against the mouse Vnn1 gene in primary cultured bronchial epithelial cells isolated from IUGR mice injected with OVA or PBS. The control shRNA (shCTL) did not target any mouse genes. (A) The mRNA level of Vnn1 was quantitatively assessed using qPCR. (B) The protein level of Vnn1 was measured using an immunoblot assay. (C) The protein levels of phospho-PTEN Tyr366 and phospho-Akt Ser473 were assessed using immunoblot assay. (D) The ROS level was assessed using a DCF assay. (E) The levels of IL-4, IL-13 and TNF-α in the supernatant were measured using ELISA. Data are shown as mean±s.d. n=3. *P<0.001 OVA+shCTL versus PBS, #P<0.01 OVA+shVnn versus OVA+shCTL.
Detection of Vnn1 promoter methylation
Total DNA was extracted from lung tissues using the PureLink Genomic DNA Mini Kit (cat. no. K182001; Invitrogen/Thermo Fisher Scientific). A total of 200 ng of DNA from each sample was bisulfite-modified using the EZ DNA Methylation Kit (cat. no. D5001; Zymo Research, Irvine, CA, USA). PCR was performed using specific primer pairs designed to amplify the target region (forward: tgttgtgattttgtttaaggata, reverse: tctaactataaaacaaaacaccttaac) with the following cycling conditions: 95°C for 4 min; 40 cycles of 95°C for 30 s, 85°C for 30 s and 72°C for 30 s; and 72°C for 5 min. The PCR product was purified using the GenElute™ Gel Extraction Kit (cat. no. NA1111; Millipore/Sigma) and then cloned into the pTG19-T vector for sequencing. The DNA methylation percentage was determined and compared.
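After cloning and sequencing, the methylation percentage at each CpG site is typically computed as the fraction of sequenced clones in which that site remained an unconverted cytosine (ie, was methylated and therefore protected from bisulfite conversion). The following is a minimal Python sketch of that calculation on hypothetical clone calls; it is illustrative only and not the exact analysis pipeline used in this study.

```python
def cpg_methylation_percent(clone_calls):
    """clone_calls: list of per-clone lists, where each entry is True if the
    CpG site was methylated (unconverted C after bisulfite treatment) and
    False otherwise. Returns per-site methylation percentages across clones."""
    n_clones = len(clone_calls)
    n_sites = len(clone_calls[0])
    return [
        100.0 * sum(clone[site] for clone in clone_calls) / n_clones
        for site in range(n_sites)
    ]

# Hypothetical example: 5 sequenced clones, 4 CpG sites in the Vnn1 promoter amplicon
clones = [
    [True,  False, True,  True],
    [True,  True,  False, True],
    [False, True,  True,  True],
    [True,  False, True,  False],
    [True,  True,  True,  True],
]
print(cpg_methylation_percent(clones))  # [80.0, 60.0, 80.0, 80.0]
```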
Chromatin immunoprecipitation (ChIP) assay
Chromatin was prepared from primary cultured lung bronchial epithelial cells, and the ChIP assay was performed according to the Abcam X-ChIP protocol. Briefly, cells were fixed with 1% formaldehyde for 10 min and neutralized with 1× glycine. Chromatin lysates were prepared, pre-cleared with Protein-A/G Sepharose beads, and immunoprecipitated with antibodies against HNF-4α, PGC-1α, or normal rabbit IgG in the presence of bovine serum albumin (BSA) and salmon sperm DNA. The beads were extensively washed three times before reverse crosslinking. The immunoprecipitated DNA was purified using a PCR purification kit (cat. no. K310001; Invitrogen/Thermo Fisher Scientific) and subsequently quantified by real-time PCR with primers (forward: 5′-gctcaagcgaccctcctg-3′, reverse: 5′-catgctgaagtccaaaga-3′) flanking the binding sites for HNF4α on the mouse Vnn1 promoter.
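ChIP enrichment is reported here as the percentage of DNA precipitated relative to the total input. A common way to obtain this value from real-time PCR data is the percent-input method, in which the input Ct is first adjusted for the fraction of chromatin saved as input and enrichment is then derived from the Ct difference. The sketch below illustrates that generic calculation; it is not the exact analysis used in this study, and the 1% input fraction and Ct values are hypothetical.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01, efficiency=2.0):
    """Generic percent-input calculation for ChIP-qPCR.

    ct_ip          : Ct of the immunoprecipitated sample
    ct_input       : Ct measured on the saved input chromatin
    input_fraction : fraction of chromatin kept as input (eg, 0.01 for 1%)
    efficiency     : amplification factor per cycle (2.0 = perfect doubling)
    """
    # Adjust the input Ct so it represents 100% of the starting chromatin
    adjusted_input_ct = ct_input - math.log(1.0 / input_fraction, efficiency)
    return 100.0 * efficiency ** (adjusted_input_ct - ct_ip)

# Hypothetical values for an anti-HNF4α ChIP vs. an IgG control at the Vnn1 promoter
print(round(percent_input(ct_ip=26.5, ct_input=24.0), 3))  # HNF4α IP
print(round(percent_input(ct_ip=31.0, ct_input=24.0), 3))  # IgG control
```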
Immunoprecipitation assay
Immunoprecipitation was performed to evaluate the HNF4α and PGC1α interaction. In total, 250 µg of nuclear protein was incubated overnight at 4°C under gentle rotation with 2 µg of mouse anti-HNF4α antibody (2 µg of normal mouse IgG as the control), followed by 1 h of incubation with 25 µl of Protein G-coupled agarose beads (cat. no. P3296; Millipore/Sigma). After five washes with 0.1% NP40/PBS, the beads were eluted with 25 µl of 2× SDS loading buffer. The eluate was stored at −80°C for further immunoblot analysis.
DCF assay
The levels of reactive oxygen species (ROS) were quantitatively measured using the OxiSelect™ In Vitro ROS/RNS Assay Kit (cat. no. STA-347-5; Cell Biolabs, San Diego, CA, USA) according to the manufacturer's guidelines. In this study, 2×10⁷ cells/ml and 50 mg/ml tissue lysates were prepared, and 50 µl was assayed in duplicate. Relative fluorescence was read with a SpectraMax Gemini XS fluorometer (Molecular Devices, San Jose, CA, USA) at 480 nm excitation/530 nm emission. Data are presented as the fold change over the controls.
Patient Preference Studies for Advanced Prostate Cancer Treatment Along the Medical Product Life Cycle: Systematic Literature Review
Background: Patient preference studies can inform decision-making across all stages of the medical product life cycle (MPLC). The treatment landscape for advanced prostate cancer (APC) treatment has substantially changed in recent years. However, the most patient-relevant aspects of APC treatment remain unclear. This systematic review of patient preference studies in APC aimed to summarize the evidence on patient preferences and patient-relevant aspects of APC treatments, and to evaluate the potential contribution of existing studies to decision-making within the respective stages of the MPLC.

Methods: We searched MEDLINE and EMBASE for studies evaluating patient preferences related to APC treatment up to October 2020. Two reviewers independently performed screening, data extraction and quality assessment in duplicate. We descriptively summarized the findings and analyzed the studies regarding their contribution within the MPLC using an analytical framework.

Results: Seven quantitative preference studies were included. One study each was conducted in the marketing approval and the health technology assessment (HTA) and reimbursement stage, and five were conducted in the post-marketing stage of the MPLC. While almost all stated to inform clinical practice, the specific contributions to clinical decision-making remained unclear for almost all studies. Evaluated attributes related to benefits, harms, and other treatment-related aspects and their relative importance varied relevantly between studies. All studies were judged of high quality overall, but some methodological issues regarding sample selection and the definition of patient-relevant treatment attributes were identified.

Conclusion: The most patient-relevant aspects regarding the benefits and harms of APC treatment are not yet established, and it remains unclear which APC treatments are preferred by patients. Findings from this study highlight the importance of transparent reporting and discussion of study findings according to their aims and with respect to their stage within the MPLC. Future research may benefit from using the MPLC framework for analyzing or determining the aims and design of patient preference studies.
Introduction
Patient preferences are an essential component of patient-centered care 1 and are rapidly gaining importance in the development and evaluation of novel medical products. [2][3][4][5][6][7][8] The United States Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have both integrated the evaluation of the values and perspectives of patients in their approval processes. [2][3][4][5] Large public-private partnerships such as the Medical Device Innovation Consortium (MDIC) and the Innovative Medicines Initiative Patient Preferences in Benefit Risk Assessments During the Drug Life Cycle (IMI PREFER) initiative are conducting methodological research on how patient preferences can be incorporated in the medical product life cycle (MPLC). [6][7][8] Previous research has shown potential benefits of patient preference information not just for decision-making in clinical practice, but throughout all MPLC stages. 3,6,[9][10][11][12] In the discovery stage, patient preferences may aid the assessment of unmet medical needs 3,6,9,10,12 and the design and selection of novel product prototypes. 3,6,[10][11][12] During preclinical and clinical development, preference information may be used in the design of clinical trials by defining patient-relevant outcomes and study populations, 3,6,[9][10][11][12] understanding important benefit-risk trade-offs, 3,6,10 and exploring preference heterogeneity between patients. 6,12 Patient preferences may support marketing authorization, health technology assessment (HTA) and reimbursement by complementing benefit-risk assessment 6,9,10,12 and economic evaluation, 10,12 as well as by informing value propositions and marketing strategies for industry. 6,9,12 In the post-marketing stage, preference information may guide safety monitoring and post-authorization benefit-risk assessment, 6,[9][10][11][12] inform industry regarding market opportunities and further product development, 6,[9][10][11][12] and enhance clinical practice by informing practice guidelines and enabling more patient-centered decision-making. 3,6,11 Various methods for eliciting patient preferences exist, which need to be carefully selected depending on the study aims and information required at the respective stage along the MPLC. 6,13,14 To date, no study has explicitly investigated the design and the stated aims of existing preference studies to evaluate the extent to which they were suited to inform decision-making along the MPLC.
Patient preferences play an important role in clinical decision-making in advanced prostate cancer (APC). [15][16][17][18] In recent years, this field has been significantly transformed by the development and approval of various novel treatments. To date, optimal treatment strategies have not been established and the balance of benefits and harms needs to be evaluated for each patient individually. [15][16][17][18] Thus, there is a need for a better understanding about which aspects of treatment are most relevant for patients and warrant consideration when eliciting preferences regarding APC treatment. Given the latest developments in preference research and the recent market approval of several novel treatments, APC provides an ideal example to evaluate potential contributions of patient preference studies along the MPLC.
With this systematic literature review, we pursued two aims. First, we aimed to describe the design and findings of previously conducted patient preference studies in APC, focusing on the selection and definition of patient-relevant aspects of treatment. Second, we aimed to assess the potential contribution of these studies according to their stage along the MPLC and identify potential gaps for future research.
Methods
This systematic literature review is reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 19 The study was part of the first stage of the development of a preference study, which aims to elicit patient preferences for the later conduct of a benefit-harm assessment to inform clinical practice in the context of APC. A protocol for the full project including this systematic review was published on the Open Science Framework platform. 20 The evaluation of the identified preference studies with respect to the MPLC was added during the conduct of the review, since we considered it an important emerging aspect that added substantially to the interpretation of the findings.
Eligibility Criteria
Eligible target decision contexts for APC were metastatic hormone-sensitive prostate cancer and non-metastatic or metastatic castration-resistant prostate cancer. We deemed studies eligible if they involved individuals from the general population, patients with localized prostate cancer, or APC patients. The rationale for including populations at risk of developing APC (ie, general population and prostate cancer patients with localized disease) was that it is currently unclear whether it is more appropriate to elicit preferences from patients with disease experience or populations that have not yet faced the relevant decision and its consequences. 6,21,22 We considered studies eligible if they elicited patient preferences related to treatment outcomes in APC. Studies investigating patient preferences unrelated to treatment outcomes (eg, decision-making preferences), studies related to the treatment of localized prostate cancer, and studies exclusively involving clinical experts or other stakeholders were not considered eligible. Studies were eligible if they used methods that allowed eliciting the relative importance of, and the trade-offs patients make between, benefits, harms and other aspects of treatment (eg, discrete choice experiments, best-worst scaling exercises, time trade-off, visual analogue scales, and other approaches).
Information Sources and Search Strategy
We systematically searched MEDLINE (accessed via PubMed) and EMBASE (accessed via Elsevier) up to 5 October 2020 for relevant records using terms and medical subject headings (MeSH) related to patient preferences, benefit and risk assessment, and APC. The full search strategies are provided in Supplementary Tables S1 and S2. We restricted our search to the time since 1 January 2000, since we deemed records before this period likely to be of limited relevance given the more recent establishment of patient preference studies and elicitation approaches, 6,13 as well as recent advances in the treatment of APC. 15,16,23 We further restricted the search to records in English and German. We complemented the systematic search by screening included study reports and relevant related publications for additional records.
Screening and Data Extraction
We screened the de-duplicated records for eligibility based on their titles, abstracts and full text. For included studies, we extracted data regarding the target decision context (ie, disease stage or treatment context of interest), study population characteristics, preference elicitation methodology, supportive research conducted to inform the study design, evaluated aspects of treatment (ie, treatment attributes and attribute levels), main study findings, and study funding. Screening and data extraction for all studies was performed independently and in duplicate by two reviewers, with disagreements resolved by consensus.
Quality Assessment
We assessed the quality of included studies based on the International Society of Preference and Outcome Research (ISPOR) checklist. 24 While this checklist has been developed as a guide for good research practices in conducting conjoint analyses, it has also previously been applied for the evaluation of patient preference studies. [25][26][27] Other available tools [28][29][30] did not cover all methodological items that were of interest for our study. We individually rated each of the 30 checklist items and separately report items that could not be assessed due to inconclusive or missing information. Two reviewers conducted quality assessment independently for all studies and resolved disagreements by consensus.
Analysis and Synthesis
We summarized the extracted data and descriptively analyzed the studies for differences and similarities in their characteristics, methodology, and quality. Furthermore, we assessed the selection and definition of treatment attributes and their relative importance within and across studies. Treatment attributes in preference studies may describe expected benefits, harms, mode of administration, costs, or other patient-relevant aspects of treatment. 24,26,31 The selection of well-defined and patient-relevant attributes and attribute levels is an essential component in conducting quantitative preference studies. 6,24,32 We categorized the identified attributes into benefit outcomes, harm outcomes and other aspects of treatment for analysis.
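For context on the relative-importance results summarized later, individual DCE studies commonly derive an attribute's relative importance from the range of its estimated part-worth utilities, expressed as a share of the summed ranges across all attributes. The sketch below illustrates this generic calculation with hypothetical coefficients; it is not a re-analysis of the included studies, and the attribute names and values are purely illustrative.

```python
def relative_importance(part_worths):
    """part_worths: dict mapping each attribute to its level utilities
    (eg, conditional-logit coefficients from a DCE).
    Returns the relative importance (%) of each attribute, defined as the
    utility range of that attribute divided by the sum of all ranges."""
    ranges = {attr: max(levels) - min(levels) for attr, levels in part_worths.items()}
    total = sum(ranges.values())
    return {attr: 100.0 * r / total for attr, r in ranges.items()}

# Hypothetical part-worth utilities for an APC treatment DCE
utilities = {
    "overall survival (months)":  [0.0, 0.45, 0.90],
    "risk of severe fatigue (%)": [0.0, -0.20, -0.55],
    "mode of administration":     [0.0, -0.15],
}
for attribute, importance in relative_importance(utilities).items():
    print(f"{attribute}: {importance:.1f}%")
```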
To evaluate the potential contributions of the identified preference studies to decision-making with respect to the MPLC stage during which they were conducted, we used an analytical framework based on previous work by the FDA, 3 the MDIC, 6,7 as well as the IMI PREFER initiative. [9][10][11][12] We extracted key information on the potential uses of patient preference studies from the published reports and condensed the information according to the different stages of the MPLC. We additionally considered further potential contributions of patient preference studies to clinical practice, such as exploring preference sensitivity and heterogeneity, 6 informing the creation of patient decision aids, 3,6,33 informing guideline development, 1 and conducting highly stratified benefit-harm assessment. 34 We categorized the potential contributions along the MPLC into three categories: informing industry processes and product marketing, informing regulatory assessment and reimbursement, and informing clinical practice and patient-centered decision-making. The resulting analytical framework is presented in Figure 1.
We then mapped the studies to their respective stage along the MPLC. Mapping was performed based on the studies' stated aims, the context provided in the introduction and discussion, Phase III trials and clinical practice guidelines cited within the study reports, the attributes and attribute levels used in the studies, study dates, and study funders. We used this information to determine the specific APC treatment or treatment comparisons for which patient preferences were evaluated by the individual studies. Subsequently, we determined the most likely MPLC stage based on the contextual information. Finally, we sought to compare the different studies by identifying similarities and differences between preference studies conducted at similar stages or with similar aims by assessing their potential contributions according to the analytical framework.
Results

Study Selection and Characteristics
We identified 1140 records through database searches and 9 records through manual searches of reference lists (Figure 2). We screened 807 records for eligibility and included 7 studies with data from a total of 1357 participants in our analysis [35][36][37][38][39][40][41] (a list of excluded records is provided in Supplementary Table S3). The characteristics of the included patient preference studies are summarized in Table 1. Two studies each were conducted in multiple European countries, 35,36 the United States, 38,40 and Japan, 39,41 and one in the United Kingdom. 37 The target decision context was metastatic hormone-sensitive prostate cancer (mHSPC) 35,37,38 and metastatic castration-resistant prostate cancer (mCRPC) 36,39,41 in three studies each, and non-metastatic castration-resistant prostate cancer (nmCRPC) 40 in one study. The study populations covered a wide range, including APC patients in the respective target decision context, 35,36,[39][40][41] APC patients in different APC disease stages, 35,36 patients with localized or locally advanced prostate cancer, 35,38,39 men from the general public, 37 caregivers (ie, partners, relatives or friends), 40 and physicians. 39 Sample sizes (median 200, range 65 to 292) and average age of participants (range 35 to 75 years) varied considerably between studies.
The design and primary findings of the included studies are shown in Table 2. All but one study applied a discrete choice experiment (DCE) to elicit patient preferences, 35,36,[38][39][40][41] while the other used a time trade-off approach in combination with a visual analogue scale. 37 Exploratory research conducted to define patient-relevant attributes or health states included literature reviews, 35-41 patient 35,37,40 and expert [35][36][37]40,41 interviews, focus groups, 39 expert input, 38 as well as additional information from product labels, 36
Quality of Studies
Overall, we judged the quality of the studies as high. Studies fulfilled between 23 and 27 (median 25) out of 30 items of the ISPOR checklist (Table 1). Some information necessary to evaluate all checklist items was missing in all studies (number of non-reported items ranging from 2 to 4). All studies stated a well-defined rationale and provided adequate information on the decision-making context. The studies used an appropriate elicitation format, and their experimental design was generally well reported. The selection of attributes was described in adequate detail and supported by evidence in all studies. In contrast, the methods and rationale used to select and define attribute levels were less well described and considered insufficient in three studies. 35,36,39 Three studies involved patients in exploratory research guiding the study design 35,37,39 (Table 2). Furthermore, the specification or justification of the sample size and sampling strategy was insufficient in five studies, 35,[37][38][39]41 an examination or testing of respondent characteristics and subgroups was lacking in four studies, 35,36,38,39 and an assessment of the quality of responses (eg, internal validity) was missing in five studies. [35][36][37]39 Meanwhile, all studies presented and discussed their results and limitations appropriately with respect to the existing literature. Details of the quality assessment are provided in Supplementary Table S4.
Relation of Studies to Medical Product Life Cycle
The findings on study aims and contextual information of importance for evaluating the studies' potential contributions along the MPLC are demonstrated in Table 3. The assessment of the temporal relation between study conduct and its stage in the MPLC was complicated due to missing information on study timeframes in all but one study. 40 All studies were funded by a pharmaceutical company and six were co-authored by industry representatives. [35][36][37][39][40][41] Furthermore, all studies discussed or cited clinical phase III trial results or clinical practice guidelines corresponding to a drug marketed by the study funder in the respective context. Based on study reports and contextual information, we categorized one study to have been conducted in the marketing authorization stage, 40 one in the HTA and reimbursement stage, 37 and five in the post-marketing stage of the MPLC. 35,36,38,39,41 Various potential contributions along the MPLC were discussed in the aims or discussion of the studies. The study conducted in the HTA and reimbursement stage explicitly stated the aim of informing regulatory assessment and reimbursement through deriving dis-/utility values for economic evaluation. 37 Meanwhile, the other six studies stated to aim at informing clinical practice and patient-centered decision-making, for example by enabling a discussion of preferences between patients and physicians, 35,36,38,39,41 by examining differences in preferences between patients and physicians 39 or caregivers, 40 or by quantifying the value placed by patients on specific treatment attributes. 35,38,40 One study additionally mentioned potential uses of patient preference information during marketing authorization and reimbursement negotiations, 35 and another calculated the willingness to pay for treatments based on patients' preferences. 38 However, none of the studies aiming to inform clinical practice explicitly evaluated preference heterogeneity within its decision context, and we deemed it insufficiently clear, based on the stated aims, how the six studies intended to influence clinical practice in reality. Therefore, we categorized the contribution of these six studies to clinical practice as non-specific.
Study Attributes and Their Relative Importance
Among the six studies using a DCE methodology, four defined the attributes describing the benefits of treatment using directly patient-relevant outcomes, 42 such as overall survival 38,40,41 and health-related quality of life 39 (Table 2, Figure 3). Three studies used pain control, 35,36,41 and a majority of five studies used surrogate endpoints for defining benefits of treatment, such as progression-free survival (defined as time to disease progression or effect on keeping disease stable), 35,39 time to chemotherapy, 36 time to symptomatic skeletal event, 41 or time to pain progression. 40 Surrogate outcomes and pain control ranked among the three most important attributes most frequently (three studies each), followed by overall survival (two out of three studies) and quality of life (one study). In all studies including patients with localized prostate cancer, a survival or progression-free survival outcome ranked highest of all treatment attributes, 35,38,39 while harms and pain control were rated as most important in the other DCE studies. 36,40,41 For defining potential harms of treatment, the DCE studies used a wide range of different attributes depending on the decision context and specific treatment of interest. Most frequently used harm attributes were fatigue, 35,36,40,41 nausea, vomiting or diarrhea, 35,38 and cognitive disorder. 36,40 One study used a more generic "side effects" attribute. 39 Of note, all but one DCE study defined the harm attributes by using different levels of risk of experiencing a certain harm outcome (eg, 5% or 10% risk of fatigue). 35,36,38,39,41 One study additionally used severity levels for defining harm outcomes (eg, mild or moderate fatigue). 40 Cognitive or memory disorder ranked among the three most important attributes most frequently (two studies), with fatigue, nausea/vomiting or diarrhea, hematuria, fractures, and falls each ranking among the top three in one study. No harm outcome with the exception of (generic) "side effects" ranked last in any of the DCE studies.
All DCE studies included further attributes unrelated to treatment outcomes, such as mode of administration, 35,38,39,41 need of co-medication, 36 drug interactions, 36 food restrictions, 36 lost work days, 41 or out-of-pocket costs. 38 Among these, mode of administration ranked among the three most important attributes in two studies, but also ranked last in one study. Similarly, out-of-pocket costs, lost work days, and food restrictions ranked last in the respective studies. Attributes regarding mode of administration and need of co-medication commonly reflected the treatments targeted by the studies (eg, intravenous administration of docetaxel, cabazitaxel and Radium-223 compared with oral administration of abiraterone acetate, darolutamide or bicalutamide; Table 3).
The study which applied a combination of a time trade-off approach and a visual analogue scale used three defined base health states corresponding to newly diagnosed patients, patients receiving chemotherapy, and patients post-chemotherapy. 37 In addition, five combinations of the health state of patients receiving chemotherapy with different adverse effect experiences (fatigue, nausea and vomiting, diarrhea, fluid retention, susceptibility to infection, and alopecia) were evaluated. In this study, the base health state of patients receiving chemotherapy was less preferred by participants than the base health states of newly diagnosed patients and patients post-chemotherapy. Among the adverse effects, nausea and vomiting, diarrhea, and susceptibility to infection were rated as the most important (ie, having the highest disutility), while alopecia ranked last in importance (ie, had the smallest disutility).
The two studies conducted in the marketing authorization and in the HTA and reimbursement stage differed from the ones conducted in the post-marketing stage of the MPLC. The study conducted in the marketing authorization stage was the only one in which a benefit attribute ranked last and which found all four harm attributes to be more important than the two benefit outcomes. 40 This study was also the only one in the target decision context of non-metastatic castration-resistant prostate cancer and defined benefits as time to pain progression and months of additional survival beyond 4 years. Meanwhile, the study conducted in the HTA and reimbursement stage took a health state valuation approach and was the only one using a general population sample, which was in line with its aim of deriving dis-/utility values for economic evaluation. 37
Discussion
In this systematic review of seven patient preference studies related to the treatment of APC, we found substantial variation in the definition and the relative importance of patient-relevant benefits, harms, and other aspects of APC treatment across studies. The identified studies explored patient preferences in all relevant decision-contexts of APC treatment. We considered five of the included preference studies to be located in the post-marketing stage of the MPLC, and one study each in the marketing approval and the HTA and reimbursement stage. All but one study were conducted in the past five years, reflecting the recent advances in APC treatment.
Study Contributions Along the Medical Product Life Cycle
One study located in the HTA and reimbursement stage explicitly aimed to inform economic evaluation, while all other studies stated to aim at informing clinical practice and patient-centered decision-making. Among these studies, none provided further detail about how they intended to inform clinical decision-making. In the post-marketing setting, patient preference studies may influence clinical practice in various ways, such as by examining the preference sensitivity of a specific context, 6 exploring preference heterogeneity between patients, 6 informing shared decision-making tools such as patient decision aids, 3,6,33 informing guideline development, 1 or enabling highly stratified patient-centered benefit-harm assessment. 34 We found none of the studies to be specifically designed to inform these processes. Studies frequently stated the aims of highlighting attributes of importance to patients and facilitating discussions regarding preferred treatments between patients and physicians. Meanwhile, none of the studies provided information on preference heterogeneity.
For information from patient preference studies to be useful in clinical practice, an exploration of preference heterogeneity between patients is warranted. 6 While the most important aspects of treatment (ie, the most or least preferred attributes) are important to discuss in clinical practice, attributes for which there is the largest between-patient variation may be most relevant for patient-centered decision-making. Having information on the relative importance of benefits and harms may help to determine the benefit-harm balance of different treatments. Meanwhile, personalized discussions and treatment decisions, as opposed to generalized recommendations for the whole population, may be most useful when based on aspects that are valued differently by individual patients. Furthermore, attributes that are most distinctive between different treatment options may matter more in clinical practice than those that are similar. For example, if all treatment options cause fatigue as an adverse effect and to a similar extent (ie, similar risk or severity), other adverse effects may be more relevant for determining patient preferences for the different treatment options. Last, most studies used risk levels for defining harm attributes, with only one using severity levels for certain harms. 40 In the design of DCEs, risk levels combine expectations about both the severity and risk of a harm outcome. 43 However, since harm risks are usually largely known in the post-marketing stage, preferences for different levels of severity may be more informative for assessing the benefit-harm balance for individual patients.
All identified studies were funded by the pharmaceutical industry. Patient preference studies may have an important role in informing marketing strategies and information material, extension of product labels or indications, or future product development. 3,6,[10][11][12] Yet, potential applications of patient preference studies to inform industry processes and product marketing were mentioned in almost none of the studies. Only one study stated that preference studies may support submission of application dossiers and negotiations with health authorities during regulatory approval. 35 Since the conduct of patient preference studies is costly and time-consuming, funding for such research may be difficult to obtain. 10 It appears that to date, the strongest interest in conducting preference studies in APC has come from the pharmaceutical industry. Potential conflicts of interest arising from this role highlight the need for transparent reporting of the exploratory research conducted, the justifications for choosing attributes, attribute levels and sampling strategies, as well as the aims regarding which processes along the MPLC should be informed by the study.
Quality of Studies and Attribute Selection
While we judged the quality of studies to be high overall, we also identified some shortcomings. One observed issue was related to the justification of the chosen sampling strategy and sample sizes, which were not well specified in several studies. The choice of the study population is considered a key factor in the design of patient preference studies and may influence the interpretability and transferability of study findings. 6,9,10 It is thus important to assess the representativeness of the study population with respect to the target population and to evaluate potential differences between population subgroups (eg, patients at different disease stages), as well as between responders and non-responders. 6,10,26,28 Several studies lacked an examination of respondent characteristics and subgroups, and an evaluation of characteristics of responders and non-responders was not possible based on the presented data. In combination, these issues may impair the interpretation of the respective studies. This is especially relevant when translating the study findings into clinical practice, as substantial heterogeneity in preferences between patients at different disease stages or even individual patients may have to be expected. 6

We found a relevant variation in the treatment attributes used in the studies. All studies selected attributes based on exploratory research and provided an adequate rationale for their selection. However, little or no detail was provided regarding the selection of attribute levels. In addition, only three studies (43%) involved patients in the attribute selection process, 35,37,39 which we identified as a potentially important methodological issue in the quality assessment. This shortcoming was also identified in other studies. 26 The translation of the exploratory research into the definition of attributes and attribute levels is an integral part of the design of quantitative preference studies and always bears some degree of subjectivity of the involved researchers. 10,24,32 The choice and framing of such attributes and attribute levels may relevantly influence the findings of a study, which has implications for their applicability and translation into the relevant decision-making context. 6,10,44 It is thus crucial that all aspects of attribute and attribute level selection are transparently reported 24 and related to the stated aims of the study within the corresponding stage along the MPLC.
While some differences regarding the selection of attributes between studies were expected depending on the studies' aims, decision contexts, or treatments of interest, only few attributes were used relatively consistently. To capture preferences regarding treatment benefit, DCE studies most frequently used surrogate outcomes and pain control, with survival and health-related quality of life being used less frequently. Surrogate outcomes and pain control also most frequently ranked among the three most important attributes in the respective studies. This is especially interesting since survival and quality of life are commonly considered the most patient-relevant outcomes in advanced cancer settings. 45,46 Meanwhile, pain due to bone metastases is the most frequent symptom of APC. 47 This may explain the relative importance of pain reduction in the DCE studies evaluating this attribute. Definitions of surrogate outcomes were highly heterogenous across studies, and none was used in more than one study.
Regarding treatment harms, fatigue was the only outcome used in more than two DCE studies. It is, however, expected that harms would differ more strongly across contexts due to differences in adverse effect profiles of the respective treatments and target populations (eg, regarding age and (co-)morbidity). We found that mode of administration was a frequently used and important attribute, most especially in contexts where there were substantial differences in the administration of the discussed treatments. 35,38,39,41 Meanwhile, treatment cost (defined as monthly out-of-pocket costs to the patient regardless of insurance coverage in a United States health-care setting) was used as an attribute in only one study among patients with localized prostate cancer, ranking last in importance. 38 The latter is surprising, as out-of-pocket costs may be relevant given the price of novel APC treatments, depending on insurance coverage in the countries of study conduct. 26 Our findings are similar to those of another recent systematic review of patient preference studies in metastatic prostate cancer, 48 which found treatment benefits -expressed as treatment effectiveness and bone pain control -and fatigue to be the most frequently used and most important attributes. In comparison, the inclusion of additional quantitative preference studies in our review revealed more substantial heterogeneity in the definitions and relative importance of benefit and harm attributes related to APC treatment. Thus, we currently consider the evidence to be insufficient to allow judgements about what the most patient-relevant aspects or the preferred treatments are in APC.
We identified differences in the primary study findings between studies conducted in the post-marketing stage compared to the study conducted in the marketing authorization stage, in which treatment benefits were found to be less important than the harm outcomes. 40 However, these differences may also arise due to the different target decision context or study population. Pain progression (eg, due to bone metastases) may not have been considered relevant by participants in the non-metastatic setting, and expected survival in this setting is longer than in later disease stages (which is also reflected in the definition of the survival attributes). Meanwhile, pain reduction was commonly rated as highly important in studies investigating metastatic prostate cancer. 35,36,41 Hence, it remains unclear to what extent these different factors influenced the design and primary findings of this study.
Directions for Future Research
Based on our systematic review, we identified several gaps to be addressed by future research. First, it is currently unclear which treatment attributes are most appropriate to be used in patient preference studies in APC and whether there is substantial preference heterogeneity between individual patients in this context. Future research should further explore key attributes that are both important for patients and relevant for decision-making in clinical practice. Second, the included preference studies allowed only a limited exploration of potential contributions of such studies along the MPLC. Further insights from studies conducted at different stages of the MPLC, with different perspectives or aims, and targeting other disease contexts should be gathered in future studies. Third, using the MPLC as a framework may be helpful for clarifying the research questions and aims of future preference studies. Preference researchers may use the MPLC framework to plan studies aiming to inform clinical practice and patient-centered decision-making, industry processes and product marketing, or regulatory assessment and reimbursement.
Limitations
The systematic review was focused on APC as an example of an innovative field and is thus limited in its scope. While we aimed at exploring the potential contribution of preference studies along the MPLC, most studies were conducted in the post-marketing stage and stated the aim of informing clinical practice and patient-centered decision-making. By widening the topic to other treatment contexts or disease areas, we may have identified further preference studies conducted in other MPLC stages or with different aims. Considering further databases or different search strategies may have yielded additional studies providing further insights. 49 However, based on other systematic reviews of preference studies in advanced cancer settings, 26,50 we deem our study to provide a representative example of studies in this context.
The quality assessment in this review was based on the ISPOR checklist. While this allowed a comprehensive evaluation of the studies, the checklist was not originally intended for such assessments. Thus, it may miss important aspects of study design and does not include a thorough evaluation of the potential risk of bias of studies. Other tools are available to assess the quality, 26,28 risk of bias, 29 and certainty of evidence 30 in preference studies. However, a standard for assessment has not yet been established and there is a relevant overlap between available checklists. We thus chose a methodology that is most comparable to existing research. [25][26][27] Since we were not interested in specific estimates or the certainty of the available evidence, we deemed the checklist to sufficiently cover all dimensions of relevance to this study. More research is needed to establish a standardized assessment covering all relevant dimensions of methodological and reporting quality, as well as the risk of bias of preference studies.
To assess the stages and potential contributions of studies along the MPLC, we conducted a synthesis of existing research which we used as a basic framework for analysis. 3,6,[9][10][11][12] However, the incorporation of patient preference information along the MPLC is a recent development that requires more methodological research and experiences. Thus, we consider the applied framework to be a starting point for discussion which warrants further development and more detailed examination. Meanwhile, we found it useful for categorizing the studies and enabling a discussion about what would constitute useful evidence in the post-marketing setting. We hope that other authors are encouraged by our work to assess preference studies in light of their stage along the MPLC to determine their potential contributions and value for industry, regulatory or clinical decision-making.
Conclusion
In this systematic review of patient preference studies in APC, we found that studies used a wide variety of different attributes for defining benefits and harms of treatment. While the quality of studies was high overall, we identified issues with respect to sample selection and the definition of attribute levels. All studies were industry-funded, and most were conducted in the post-marketing stage of the MPLC. All but one study stated the aim to inform patient-centered decision-making, but the specific contributions to clinical practice remained unclear. Hence, no judgements regarding the most patient-relevant aspects of APC treatment or preferred APC treatments are currently possible, and further research aimed at informing clinical practice in this context is warranted. As this review is one of the first to apply the MPLC framework in the analysis of preference studies in a specific decision-making context, future research may further explore and refine this framework as an analytical tool in other contexts. In addition, an explicit consideration of the MPLC may also help to determine the aims and design of future preference studies.
Data Sharing Statement
All data generated or analyzed during this study are included in this published article and its Supplementary Information Files.
Ethics Approval
As a systematic literature review relying on aggregate information from published studies, this research project did not require ethical approval under the Swiss Human Research Act.
Links between autobiographical memory richness and temporal discounting in older adults
When making choices between smaller, sooner rewards and larger, later ones, people tend to discount future outcomes. Individual differences in temporal discounting in older adults have been associated with episodic memory abilities and entorhinal cortical thickness. The cause of this association between better memory and more future-oriented choice remains unclear, however. One possibility is that people with perceptually richer recollections are more patient because they also imagine the future more vividly. Alternatively, perhaps people whose memories focus more on the meaning of events (i.e., are more “gist-based”) show reduced temporal discounting, since imagining the future depends on interactions between semantic and episodic memory. We examined which categories of episodic details – perception-based or gist-based – are associated with temporal discounting in older adults. Older adults whose autobiographical memories were richer in perception-based details showed reduced temporal discounting. Furthermore, in an exploratory neuroanatomical analysis, both discount rates and perception-based details correlated with entorhinal cortical thickness. Retrieving autobiographical memories before choice did not affect temporal discounting, however, suggesting that activating episodic memory circuitry at the time of choice is insufficient to alter discounting in older adults. These findings elucidate the role of episodic memory in decision making, which will inform interventions to nudge intertemporal choices.
the IRI Perspective-Taking subscale, as this was the one we expected to be most strongly related to temporal discounting based on previous research. The LOT-R tests for optimism, which tends to be elevated in older adults 2 , and may be related to future-oriented decision-making 3 . The GDS was included as a screening tool, because symptoms of depression are associated with deficits in memory ability, especially in positive memory recall 4 . Therefore, anyone with a GDS score of 9 or above (out of 15), indicating moderate or severe depression, was excluded. Finally, the VVIQ instructs participants to imagine different scenarios in order to measure individual differences in self-reported imagery vividness. VVIQ scores have been shown to be correlated with temporal discounting 5 . We examined Spearman correlations between scores on these questionnaires and age, as well as with the size of the effect of the positive memory recall manipulation in our study.
Outliers (scores that were more than 2.5 SD from the mean) were removed. There was 1 outlying VVIQ score (3.16 SD below the mean), and 1 outlying IRI-perspective-taking score (3.17 SD below the mean), leaving n = 33 for those analyses.
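For illustration, the exclusion rule and correlation analysis described above could look like the following sketch in Python; the data, sample size, and effect size here are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for questionnaire scores and ages (n = 34 before exclusions)
n = 34
age = rng.uniform(60, 85, n)
vviq = 80 - 0.6 * age + rng.normal(0, 5, n)  # illustrative negative age relation

def outlier_mask(x, sd_cutoff=2.5):
    """True for values within sd_cutoff SDs of the mean (the exclusion rule described above)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std(ddof=1)
    return np.abs(z) <= sd_cutoff

keep = outlier_mask(vviq)
rho, p = stats.spearmanr(vviq[keep], age[keep])
print(f"n = {keep.sum()}, Spearman rho = {rho:.2f}, p = {p:.3f}")
```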
List of cues used to elicit positive memory recall on Day 1
Participant ratings of memories were not associated with temporal discounting
Although objective assessments of autobiographical memories were associated with temporal discounting, none of the participants' subjective ratings of their memories were significant predictors of temporal discounting: there was no association with the Day 1 rating of "similarity between past and present self" (ρ = 0.08; p = 0.661), the Day 1 rating of "feeling
Participant ratings of memories did not predict change in choice following recall of memories
We investigated whether any subjective characteristics of the memories themselves, as rated by the participants, could predict the extent to which retrieving them was effective in reducing temporal discounting rate. We conducted a series of mixed-effects logistic regressions to see which ratings could predict choice of delayed reward on a trial-by-trial basis, controlling for the subjective value of rewards computed assuming the Control condition discount rate. None of the ratings were significant predictors of choice: there was no effect of the Day 1 rating of
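The excerpt does not specify the discounting model used to compute subjective values; hyperbolic discounting is a common assumption in this literature. Under that assumption, a minimal sketch of how a trial's subjective values could be computed before entering the choice regressions is shown below; all amounts, delays, and the discount rate k are illustrative.

```python
def hyperbolic_sv(amount, delay_days, k):
    """Subjective value under hyperbolic discounting: SV = amount / (1 + k * delay)."""
    return amount / (1.0 + k * delay_days)

# Illustrative trial: $50 in 30 days vs. $20 today, for a participant with k = 0.01 per day
sv_delayed = hyperbolic_sv(50, 30, k=0.01)   # 50 / (1 + 0.3) ~ 38.5
sv_immediate = hyperbolic_sv(20, 0, k=0.01)  # 20.0
print(sv_delayed - sv_immediate)  # this difference could serve as a trial-level covariate of choice
```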
Self-reported perspective-taking is associated with reduction in discounting following memory recall across participants
The VVIQ, which measures individual differences in self-reported imagery vividness, was not correlated with temporal discounting rate (n = 33, 1 outlier excluded; ρ = -0.09; p = 0.627) or with the effect of memory recall on temporal discounting (ρ = -0.03; p = 0.883).
However, VVIQ was associated with age (ρ = -0.71; p < 0.001) in the expected direction, with younger age associated with more vivid mental imagery. With respect to autobiographical memory details, VVIQ was associated with the overall number of internal details (ρ = 0.37; p = 0.033), but not the perception-based detail ratio score (ρ = -0.02; p = 0.903).
The LOT-R, which measures optimism, was associated with the effect of memory recall
Developing a newborn rat model of ventriculitis without concomitant bacteremia by intraventricular injection of K1 (−) Escherichia coli
Abstract Background Neonatal meningitis caused by Escherichia coli results in high mortality and neurological disabilities, and the concomitant systemic bacteremia confounds its mortality and brain injury. This study developed an experimental model of neonatal ventriculitis without concomitant systemic bacteremia by determining the bacterial inoculum of K1 capsule-negative E. coli by intraventricular injection in newborn rats. Methods We carried out intraventricular injections of 1 × 10² (low dose), 5 × 10² (medium dose), or 1 × 10³ (high dose) colony-forming units (CFU) of K1 (−) E. coli (EC5ME) in Sprague-Dawley rats at postnatal day (P) 11. Ampicillin was started at P12. Blood and cerebrospinal fluid (CSF) cultures were performed at 6 h, 1 day, and 6 days after inoculation. Brain magnetic resonance imaging (MRI) was performed at P12 and P17. Survival was monitored, and brain tissue was obtained for histological and biochemical analyses at P12 and P17. Results Survival was inoculum dose-dependent, with the lowest survival in the high-dose group (20%) compared with the medium- (67%) or low- (73%) dose groups. CSF bacterial counts in the low- and medium-dose groups were significantly lower than that in the high-dose group at 6 h, but not at 24 h after inoculation. No bacteria were isolated from the blood throughout the experiment or from the CSF at P17. Brain MRI showed an inoculum dose-dependent increase in the extent of brain injury and inflammatory responses. Conclusions We developed a newborn rat model of bacterial ventriculitis without concomitant systemic bacteremia by intraventricular injection of EC5ME.
Despite continuous improvements in antibiotic therapy and intensive care medicine, bacterial meningitis remains a serious disease at any age, and the prognosis is particularly poor in newborn infants, with mortality rates of 20-40% and longterm neurological sequelae, including deafness, blindness, seizures, hydrocephalus, and cognitive impairment in up to 50% of survivors. [1][2][3] The precise mechanisms by which bacterial infection and the ensuing inflammatory responses in the subarachnoid space during neonatal bacterial meningitis lead to neuronal injury, which could result in death or neurological sequelae in survivors, are not completely delineated. Therefore, a better understanding of the mechanism of brain damage is necessary to prevent this neuronal injury and, consequently, to reduce the mortality and morbidities associated with neonatal bacterial meningitis.
Developing an appropriate animal model that could simulate clinical bacterial meningitis in newborn infants is essential to determine its pathogenesis and to test the efficacy of newly developed adjuvant treatments, in addition to the use of antibiotics. Currently, several animal models of neonatal bacterial meningitis, including newborn piglets, 4 mice, 5-7 rats, 8,9 and rabbits 10 are available, and meningitis was induced by various routes, including intraperitoneal, 5,11 intranasal, 6 intravenous, 5,10,12 and intracisternal 7-10,12 inoculation of bacteria. However, these animal models have certain drawbacks, including small sample size, low infectivity, high mortality, and/or variable extent of brain injury. 11 Furthermore, concomitant bacteremia might aggravate the meningitis-induced brain injury, 9,13,14 thus increasing mortality. 8,9,15 Therefore, in the present study, we developed a newborn rat model of neonatal bacterial ventriculitis to mimic the human clinical and neuropathological meningitis, using 11-day-old newborn Sprague-Dawley rats, with titrated intraventricular inoculation of Escherichia coli (E. coli), the most common gram-negative pathogen of neonatal bacterial meningitis. 3 We attempted to determine the bacterial inoculum dose with maximal brain injury and minimal mortality by using K1 capsule-negative E. coli to confine the infection to the central nervous system, without concomitant systemic bacteremia. 12, 16 We inoculated the bacteria intraventricularly using a stereotaxic frame to simulate the neuropathological progression of clinical neonatal bacterial meningitis, which begins with ventriculitis. 17,18 Brain injury was monitored in vivo by brain magnetic resonance imaging (MRI). [19][20][21][22] Methods
Infecting organism
In this study, we used EC5ME, an un-encapsulated mutant of the K1 capsular polysaccharide-possessing E. coli strain C5 (serotype O18:K1:H7) (a kind gift from Professor Kwang Sik Kim, Johns Hopkins University, MD, USA) 12,16 to induce only bacterial ventriculitis. Bacteria were cultured overnight in brain-heart infusion broth, diluted in fresh medium, and grown for another 6 h to the mid-logarithmic phase. The culture was centrifuged at 5,000 × g for 10 min, re-suspended in sterile normal saline to the desired concentration, and used for the intraventricular injection. The accuracy of the inoculum size was confirmed by serial dilution, overnight culture on blood agar plates, followed by a count of colony-forming units (CFU).
The experimental protocols described herein were reviewed and approved by the Animal Care and Use Committee of Samsung Biomedical Research Institute, Seoul, Korea. This study was also performed in accordance with Institutional and National Institutes of Health Guidelines for Laboratory Animal Care.
Animal model of ventriculitis
Figure 1 shows details of the experimental schedule. The experiment began at postnatal day (P) 11 and continued to P17. To induce ventriculitis, newborn Sprague-Dawley rats (Orient Co, Seoul, Korea) were anesthetized using 2% isoflurane in oxygen-enriched air, and a total of 10 µL of EC5ME inoculum in saline was slowly infused into the left ventricle under stereotactic guidance (Digital Stereotaxic Instrument with Fine Drive, MyNeurolab, St. Louis, MO, USA; coordinates: x = ±0.5, y = ±1.0, z = ±2.5 mm relative to the bregma) at P11. To determine the optimal inoculum dose with minimal mortality and maximal brain injury, we tested three different inoculum doses of E. coli: a low inoculum (LE) dose of 1 × 10² CFU EC5ME, a medium inoculum (ME) dose of 5 × 10² CFU EC5ME, and a high inoculum (HE) dose of 1 × 10³ CFU EC5ME. For the normal control group (NC), an equal volume of normal saline was given intraventricularly. After the procedure, the rat pups were allowed to recover and returned to their dams; there was no mortality associated with the procedure.
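As a worked example of the inoculum arithmetic implied above (the calculation, not the study's protocol, is what is illustrated): delivering the stated doses in a 10 µL injection volume requires the bacterial suspension to be adjusted to the concentrations computed below.

```python
def required_concentration_cfu_per_ml(target_cfu, injection_volume_ul):
    """Concentration needed so that `injection_volume_ul` microliters contain `target_cfu` CFU."""
    return target_cfu / (injection_volume_ul / 1000.0)  # convert µL to mL

for label, dose in [("LE", 1e2), ("ME", 5e2), ("HE", 1e3)]:
    print(label, required_concentration_cfu_per_ml(dose, 10))  # 1e4, 5e4, 1e5 CFU/mL
```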
Ten rat pups for each group were allocated to assess the acute pathophysiological changes, and the survivors were sacrificed at 24 h (P12) after bacterial inoculation for histopathological assessment (n = 6, 5, 4, and 3 for the NC, LE, ME, and HE groups, respectively) and biochemical analyses (n = 4, 4, 4, and 3 for the NC, LE, ME, and HE groups, respectively). Using these short-term groups, cerebrospinal fluid (CSF) was obtained using a cisternal tap to determine the bacterial titer at 6 and 24 h after bacterial injection. We also conducted a time course experiment in 10 animals for each group, to determine the survival rate until sacrifice of the survivors at P17 for histopathological assessment (n = 5, 4, 4, and 2 for the NC, LE, ME, and HE groups, respectively) and biochemical analyses (n = 5, 3, 4, and 0 for the NC, LE, ME, and HE groups, respectively). Intraperitoneal injection of ampicillin (200 mg/kg/day) was started 6 h after bacterial inoculation and continued for 3 days until P13. With these long-term groups, CSF was also drawn before sacrifice at 6 days after ventriculitis induction (P17).
Intraperitoneal injection of ampicillin (200 mg/kg/day) was started 24 h after bacterial inoculation and continued for 3 days. Brain MRI was performed at P12 and P17. The body weight of all rats was measured daily, and the rats were sacrificed at P12 and P17 under deep pentobarbital anesthesia (60 mg/kg, intraperitoneal). Immediately after extraction, the fresh brains were weighed. To assess the possible side effects of the injection, brain histology at P17 was assessed after needle injection into the ventricles only at P11 (Appendix S1).
Bacterial quantification
Bacterial concentrations from each study group were measured in the CSF and blood at 6 h, 24 h, and 6 days after bacterial inoculation for induction of ventriculitis. Bacterial CFU levels in the CSF and blood were measured at dilutions of 10⁻⁴ to 10⁻⁸ plated on brain-heart infusion agar after overnight incubation at 37°C.
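For illustration, the back-calculation from a plate count to the CFU concentration of the undiluted sample follows the usual dilution arithmetic; the colony count, dilution, and plated volume below are hypothetical.

```python
def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml):
    """Back-calculate the CFU/mL of the undiluted sample from a plate count."""
    return colony_count / (dilution_factor * plated_volume_ml)

# Hypothetical plate: 42 colonies grown from 0.1 mL of a 10^-5 dilution of CSF
print(cfu_per_ml(42, dilution_factor=1e-5, plated_volume_ml=0.1))  # 4.2e7 CFU/mL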
In vivo brain MRI assessment
Brain MRI was performed while the rats were kept in an anesthetized state by the administration of 1.5-2% isoflurane in oxygen-enriched air using a facemask. All MRI was performed using a 7.0-tesla MRI system (Bruker-Biospin, Fällanden, Switzerland) equipped with a 20-cm gradient set capable of providing a rising time of 400 mT m⁻¹. The MR images were acquired with 1.0-mm slice thickness, and a total of 12 slices were acquired. Brain MRI was performed at P12 (n = 10, 9, 8, and 6 in the NC, LE, ME, and HE groups, respectively) and at P17 (n = 11, 7, 8, and 2 in the NC, LE, ME, and HE groups, respectively). After MRI, the rat pups were allowed to recover and were returned to their dams.
Fig. 1 Experimental protocol. E. coli was injected intra-cerebroventricularly on postnatal day (P) 11 at different doses for each group: low dose 1 × 10² colony-forming units (CFU), medium dose 5 × 10² CFU, and high dose 1 × 10³ CFU. Brain magnetic resonance imaging was performed before the rats were sacrificed. CSF, cerebrospinal fluid.
Measurement of the extent of brain injury by MRI
All MR images were analyzed using ImageJ software (National Institutes of Health). The infarcted lesion was well identified by the hyperintense areas in T2-weighted imaging at P12 and P17. The areas of the infarcted hyperintense cortical lesion, lateral ventricles, and whole brain were measured in serial brain MRI sections, and the areas were summed to calculate volume. The ratio of the infarcted regional volume to the whole brain volume was calculated as a parameter of brain injury (the injury volume ratio). The ratio of the ventricle volume to the whole brain volume was also calculated as the ventriculomegaly volume ratio.
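A minimal sketch of the volume-ratio computation described above, assuming per-slice areas measured in ImageJ and the 1.0-mm slice thickness reported for the acquisition; the per-slice areas below are hypothetical.

```python
import numpy as np

SLICE_THICKNESS_MM = 1.0  # slice thickness reported for the MRI acquisition

def volume_mm3(slice_areas_mm2, thickness_mm=SLICE_THICKNESS_MM):
    """Approximate a volume by summing per-slice areas and multiplying by slice thickness."""
    return float(np.sum(slice_areas_mm2)) * thickness_mm

# Hypothetical per-slice areas (mm^2) traced for one animal
infarct_areas = [0.0, 1.2, 2.5, 3.1, 2.0, 0.8, 0.0]
ventricle_areas = [0.5, 1.0, 1.8, 2.2, 1.5, 0.9, 0.4]
whole_brain_areas = [55, 78, 92, 97, 95, 84, 60]

injury_volume_ratio = volume_mm3(infarct_areas) / volume_mm3(whole_brain_areas)
ventriculomegaly_ratio = volume_mm3(ventricle_areas) / volume_mm3(whole_brain_areas)
print(round(injury_volume_ratio, 3), round(ventriculomegaly_ratio, 3))
```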
Tissue preparation
Brain tissue preparation procedures were performed in the surviving rats until P12 (n = 10, 9, 8, and 6 in the NC, LE, ME, and HE groups, respectively) and P17 (n = 10, 7, 8, and 2 in the NC, LE, ME, and HE groups, respectively). The animals were anesthetized with sodium pentobarbital (100 mg/kg), and their brains were isolated after thoracotomy and transcardiac perfusion with ice-cold 4% paraformaldehyde in 0.1 mol/L phosphate-buffered saline. The brains were carefully removed from the animals and fixed overnight with 4% formaldehyde solution at room temperature. The brains were embedded in paraffin and coronal serial sections (4-µm thick) were taken from the paraffin blocks for morphometric analyses at the level of the medial septum area (+0.95 mm to −0.11 mm relative to bregma) and the hippocampal area (−2.85 to −3.70 mm). The sections were stained with hematoxylin and eosin to assess the extent of neuronal damage.
Statistical analyses
Statistical analyses were performed using SPSS, version 18.0 (IBM Corp., Armonk, NY, USA). Data are expressed as the mean ± standard error of the mean. For continuous variables, statistical comparison between groups was performed using one-way analysis of variance (ANOVA) and Tukey's post hoc analysis. P < 0.05 was considered statistically significant.
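For illustration, the group comparison described above (one-way ANOVA followed by Tukey's post hoc test) could be reproduced as in the following sketch; the group means and sample sizes are simulated stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
group_means = {"NC": 0.02, "LE": 0.05, "ME": 0.10, "HE": 0.18}  # hypothetical outcome means

values, labels = [], []
for name, mean in group_means.items():
    x = rng.normal(mean, 0.03, size=8)  # 8 simulated animals per group
    values.extend(x.tolist())
    labels.extend([name] * len(x))

samples = [[v for v, g in zip(values, labels) if g == name] for name in group_means]
f_stat, p_val = stats.f_oneway(*samples)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
print(pairwise_tukeyhsd(np.array(values), np.array(labels), alpha=0.05))
```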
Results
Survival rates and weight
Figure 1 shows the details of the experimental schedule. The experiment began at P11 and continued through to P17. To induce ventriculitis, at P11, three different doses of E. coli were injected into the cerebroventricles of newborn rats: an LE dose of 1 × 10² CFU EC5ME, an ME dose of 5 × 10² CFU EC5ME, and an HE dose of 1 × 10³ CFU EC5ME. The survival rate after induction of bacterial ventriculitis was bacterial inoculum dose-dependent, showing the lowest survival rate up to P17 of 20% for the HE dose and 67% and 73% for the ME and LE doses, respectively (Fig. 2a). While the survival rate up to P17 in the HE group was significantly lower compared with that in the NC group, the survival rates of the LE and ME groups were not significantly reduced compared with the NC group.
Body and brain weight at birth did not differ significantly between the study groups. However, body weight gain at P17 in the LE, ME, and HE groups was significantly lower, brain weight gain in the ME and HE groups was significantly lower, and the brain/body weight ratio in the ME and HE groups was significantly higher compared with those in the NC group. The least body and brain weight gain, and the highest brain/body ratio, were observed in the HE group compared with the LE and ME groups (Fig. 2b-d).
Bacterial counts
At 6 h (P11) and 24 h (P12) after bacterial injection, to evaluate the bacterial burdens, the CFU were counted in the CSF and blood from each study group before sacrifice. Also, at 6 days after induction of ventriculitis (P17), CSF was drawn from each study group before sacrifice. While no bacterial growth was detected in the blood in any of the study groups throughout the experiment, the bacterial counts in the CSF at 6 h after the induction of ventriculitis in both the LE and ME groups were significantly lower compared with that in the HE group. Thereafter, the bacterial counts in the CSF of all study groups increased significantly compared with those at 6 h, and there were no significant inter-group differences at 24 h after the induction of ventriculitis (Fig. 3). No bacterial growth in the CSF was detected in any of the study groups at 6 days after the induction of ventriculitis.
Brain MRI
To assess the extent of ventriculitis-induced brain infarction and hydrocephalus, in vivo brain MRI was performed. The degree of brain infarction in the ipsilateral cortex and the dilatation of the ventricles relative to the whole brain, as evidenced by the hyperintense areas in diffusion-weighted MRI performed at P12 and T2-weighted MRI performed at P17, were measured. The brain infarct volume ratios at P12 and P17 increased bacterial inoculum dose-dependently, showing the highest ratio in the HE group, and a seemingly increased ratio in the ME group compared with that in the LE group that did not reach statistical significance (Fig. 4). The ventriculomegaly volume ratios at P12 increased bacterial inoculum dose-dependently, showing the highest increase in the HE group compared with the LE and ME groups. In addition, although the absolute extent of ventriculomegaly was significantly reduced compared with P12, the ventriculomegaly volume ratios at P17 also increased bacterial inoculum dose-dependently, showing the highest increase in the HE group compared with the LE and ME groups (Fig. 4).
Fig. 2 Survival rates. (A) Survival rates in each group were determined using Kaplan-Meier analysis followed by a log-rank test. (B) Brain weight and (C) body weight were measured at postnatal day (P) 17 in each group (n = 11, 7, 9, and 2 in the normal control (NC), low-dose (LE), medium-dose (ME), and high-dose (HE) groups, respectively). Both weights decreased significantly depending on the E. coli dose. (D) The ratio of brain weight to body weight significantly increased in the HE group compared with the other groups. Data are presented as the mean ± standard error of the mean (SEM). *P < 0.05 compared with the NC group, #P < 0.05 compared with the LE group, $P < 0.05 compared with the ME group.
TUNEL staining and immunohistochemistry
To assess the extent of bacterial ventriculitis-induced cell death, reactive gliosis, and microglial activation in the brain, the numbers of TUNEL- and ED-1 (a microglia/macrophage marker)-positive cells and the density of GFAP-positive cells in the hippocampus were estimated at 24 h after induction of ventriculitis (P12). The numbers of TUNEL- and ED-1-positive cells and the intensity of GFAP staining in the hippocampus at P12 increased bacterial inoculum dose-dependently compared with the NC group, showing the highest increase in the HE group. The number of TUNEL-positive cells and the intensity of GFAP staining in the ME group were significantly higher compared with those in the LE group (Fig. 5).
Inflammatory cytokines in brain
Levels of inflammatory cytokines, such as interleukin (IL)-1α, IL-1β, IL-6, and tumor necrosis factor alpha (TNF-α), measured in the periventricular brain tissue homogenates at P12 revealed a bacterial inoculum dose-dependent increase, showing the highest increase in the HE group. The inflammatory cytokine levels in the ME group were significantly higher compared with those in the LE group (Fig. 6). Although the brain homogenates of the HE group were not available for measurement because of their high mortality at P17, and the absolute levels of the inflammatory cytokines were significantly reduced compared with P12, the inflammatory cytokines were bacterial inoculum dose-dependently increased, showing significantly higher levels in the ME group compared with those in the LE group.
Discussion
Despite recent improvements in neonatal intensive care medicine and the development of highly active new antibiotics, neonatal bacterial meningitis remains a serious disease with high mortality and neurological morbidities in survivors. 1,3 Currently, few effective adjuvant therapies are available to improve the prognosis of this intractable and devastating neonatal disorder. Therefore, developing an appropriate animal model to simulate clinical bacterial meningitis in newborn infants is an essential first step to determining its pathophysiological mechanisms and to testing the therapeutic efficacy of any potential new treatments. However, the limitations of currently available experimental models of meningitis lie in the wide variability among the species, the inoculation methods, and the age of the animal models. 11 In this study, we used P11 rats as an animal model of neonatal ventriculitis because the rat brain at P11 is comparable in terms of maturation to the human brain at birth. 23 The large litter size provides a considerable number of rat pups per experimental setup for the induction of meningitis. Also, the larger size of rat pups compared with mice enables easier surgical manipulation at an earlier age and a larger amount of brain tissue to be obtained at harvest. Furthermore, with our already established newborn rat models of severe intraventricular hemorrhage (IVH), 20-22 middle cerebral arterial occlusion, 24 and hypoxic-ischemic encephalopathy, 25 with in vivo brain MRI and histopathological analyses, the pathophysiological mechanisms and therapeutic efficacy could be easily extrapolated to develop a newborn rat model of ventriculitis in this study. To inject bacteria into the brain ventricles, we used the same stereotaxic injection technique as in our previous study, in which severe IVH was induced by intraventricular injection of blood in much younger newborn rats at P4 20 than the P11 rats used in this study. Overall, the findings of the present study suggested that the newborn rat pup model is suitable and appropriate for researching the pathogenesis of neonatal bacterial ventriculitis and for testing the efficacy of new treatments.
In a clinical setting, newborn meningitis usually develops with concomitant sepsis. The concomitant bacteremia with meningitis in the newborn rat pup animal model is associated with high mortality and, thus, it was hard to generate and evaluate meningitis-induced brain injury in live rats. In addition, though brain damage was primarily caused by local meningeal infection, accompanying sepsis might exacerbate the severity of brain injury and mortality.
Fig. 3 Bacterial counts in the cerebrospinal fluid (CSF). Bacterial counts in the CSF obtained at 6 and 24 h after bacterial inoculation and before initiation of antibiotic treatment. Data are presented as the mean ± SEM. #P < 0.05 compared with the LE group, $P < 0.05 compared with the ME group. NC, normal control; LE, low-dose; ME, medium-dose; HE, high-dose (E. coli groups); h, hour.
Fig. 4 Evolution of brain injury at postnatal day (P) 12 and P17. (a) Representative brain magnetic resonance imaging (MRI) of the normal control (NC) (no E. coli control) (left column), low-dose (LE) E. coli group (middle left column), medium-dose (ME) E. coli group (middle right column), and high-dose (HE) E. coli group (right column) from the medial septal area on day 1 and day 6 after meningitis (P12 and P17). (b) The intact volume of the cortex area-to-whole brain ratio and (c) the ventriculomegaly volume ratio were measured by MRI at P12 and P17. Data are presented as the mean ± SEM. *P < 0.05 compared with the NC group, #P < 0.05 compared with the LE group, $P < 0.05 compared with the ME group.
Fig. 5 Immunostaining in the hippocampus region. Representative photomicrographs of (A) terminal deoxynucleotidyl transferase-mediated deoxyuridine triphosphate nick-end labeling (TUNEL), (C) glial fibrillary acidic protein (GFAP) intensity, and (E) reactive microglia (ED-1)-positive cells in the brain hippocampus of postnatal day (P) 12 rats at low magnification (upper panel) and high magnification (lower panel) in each group. TUNEL intensity was labeled with fluorescein isothiocyanate (FITC; green); GFAP- and ED-1-positive cells were labeled with tetramethylrhodamine isothiocyanate (TRITC; red). The cell nuclei were labeled with 4′,6-diamidino-2-phenylindole (DAPI; blue) (scale bar = 25 µm). The average intensity of observed (B) TUNEL and (D) GFAP, and the average number of (F) ED-1-positive cells per high-power field in each group are also represented. Data are presented as the mean ± SEM. *P < 0.05 compared with the normal control (NC) group, #P < 0.05 compared with the LE group, $P < 0.05 compared with the ME group. LE, low-dose; ME, medium-dose; HE, high-dose (E. coli groups).
To reduce the meningitis-induced sequelae and to improve outcome and survival, we aimed to make a theoretical model that can dissect meningitis injury from systemic septic shock and, thus, make it possible to assess and monitor the degree of meningitis-induced brain damage.
The presence of the K1 capsule and a high bacterial concentration (>10⁴ CFU/mL), but not the collapse of the blood-brain barrier (BBB), are two key determinants for the development of E. coli meningitis and secondary bacteremia. Thus, our finding of no concomitant secondary bacteremia with K1 capsule-negative E. coli, despite high bacterial concentrations in the CSF, also confirms the critical role of the K1 capsule, rather than impairment of the BBB, in inducing E. coli ventriculitis and secondary bacteremia. In the present study, by using K1 capsule-negative E. coli, we could induce neonatal bacterial ventriculitis in the newborn rat pup model and prevent the confounding aggravation of mortality and brain injury by concomitant bacteremia.
The neuropathology of neonatal bacterial meningitis begins with choroid plexitis and ventriculitis 17,18,26,27 and progresses to arachnoiditis and vasculitis, leading to brain edema, hydrocephalus, infarction, and periventricular leukomalacia. 28 In the present study, K1 (−) E. coli was injected intraventricularly to induce ventriculitis. This model appears to be similar to the condition of ventriculitis that typically occurs in ventriculo-peritoneal shunt patients with shunt infection. Although the intraventricular route bypasses the natural hematogenous bacterial invasion across the BBB into the CNS, it might first cause ventriculitis, which simulates neonatal bacterial meningitis in the clinical setting.
In the present study, we tested three different doses of K1 (−) E. coli for the induction of ventriculitis to determine the optimal inoculum dose with minimal mortality and maximal brain injury: 1 × 10² CFU for the LE group, 5 × 10² CFU for the ME group, and 1 × 10³ CFU for the HE group. Survival rates, body and brain weight gain, the extent of inflammatory responses, and brain injury correlated significantly with the inoculum dose used to induce ventriculitis, showing the highest mortality, the greatest inflammatory responses and brain injury, and the least body and brain weight gain in the HE group. We also observed higher inflammatory responses in the ME group and the least extent of brain injury in the LE group. The mortality rate was positively correlated with the inoculum dose and the extent of inflammatory responses and brain injury. As blood cultures were negative throughout the experiment, the inoculum dose-dependent increase in mortality, inflammatory responses, and brain injury solely reflects the virulence of EC5ME ventriculitis, without the confounding effects of concomitant systemic bacteremia. Overall, these findings suggest that the ME dose (5 × 10² CFU) of E. coli might be the optimal inoculum to induce neonatal ventriculitis.
Because of the limitation stemming from the small sample size, the survival rates of the LE and ME groups may not have differed significantly from that of the NC group. Thus, as a next step, an experiment with larger groups would be required. Furthermore, because of the small size of the rat pups, the volume of CSF obtained with a cisternal tap was very small. It was used primarily for bacterial culture, and not enough CSF was obtained for measuring cytokines. We thus measured cytokine levels in brain tissue homogenates in this study. Using this model, a long-term follow-up study would be needed as a next step to assess long-term developmental delays, including learning disability, memory deficit, or hearing abnormality.
Fig. 6 Inflammatory cytokines of the brain. Interleukin [IL]-1α, IL-1β, IL-6, and tumor necrosis factor [TNF]-α concentrations in brain tissue homogenates at (A) postnatal day (P) 12 and (B) P17 were measured using enzyme-linked immunosorbent assay (ELISA) in each group. Data are presented as the mean ± SEM. *P < 0.05 compared with the NC group, #P < 0.05 compared with the LE group, $P < 0.05 compared with the ME group. LE, low-dose; ME, medium-dose; HE, high-dose (E. coli groups).
In infants with bacterial meningitis, brain MRI showed abnormalities including cerebral infarct, subdural empyema, cerebritis, and hydrocephalus. 19 Increased ventriculomegaly in the acute phase of bacterial meningitis in adults was associated with increased mortality. 29 In agreement with these clinical findings, 19,29 an acute inoculum dose-dependent increase in ventriculomegaly and cerebral infarct was observed at 1 day after the induction of ventriculitis. In addition, although a smaller absolute extent of ventriculomegaly and a greater extent of cerebral infarct were observed compared with post-inoculation day 1, the inoculum dose-dependent abnormalities persisted at 6 days after the induction of ventriculitis. Taken together, these findings suggested that brain MRI could be an early prognostic indicator that would be useful in identifying patients requiring further therapeutic interventions and in assessing the therapeutic efficacy of any new treatments, both in clinical and experimental settings of meningitis. 19,29 Brain injuries observed in experimental models of neonatal meningitis are unique in consistently reproducing both hippocampal damage and cortical necrosis. 7-9 Inflammatory responses are primarily responsible for the ensuing brain injury in bacterial meningitis. 3,7,8,16 In the present study, the extent of inflammatory responses, both at post-inoculation days 1 and 6, and the increased number of TUNEL-, GFAP-, and ED-1-positive cells in the hippocampus at 1 day after induction of ventriculitis, were associated with the bacterial inoculum dose. Antibiotic treatment was started 24 h after bacterial inoculation and continued for 3 days; no bacteria were isolated, even in the CSF, at 5 days after the induction of ventriculitis. These findings suggest that increased inflammatory responses, but not increased bacterial proliferation and dissemination, triggered by a higher bacterial inoculum, are primarily responsible for the ensuing brain injury.
In summary, we successfully developed a newborn rat model of neonatal bacterial ventriculitis, without concomitant systemic bacteremia, by intraventricular injection of K1 capsule-negative E. coli at P11. We also determined that a bacterial inoculum dose of 5 × 10² CFU of EC5ME had minimal mortality with maximal inflammatory responses and ensuing brain injury. This animal model could provide the basis for both pathophysiology and intervention studies for neonatal bacterial ventriculitis not confounded by simultaneous systemic bacteremia. Hopefully, our newly developed newborn rat model of neonatal ventriculitis will lead to more detailed knowledge of, and new treatments for, this intractable and devastating disorder.
Ethical and Legal Challenges of Telemedicine in the Era of the COVID-19 Pandemic
Background and objective: Telemedicine or telehealth services have been increasingly practiced in recent years. During the COVID-19 pandemic, telemedicine turned into an indispensable service in order to avoid contagion between healthcare professionals and patients, involving a growing number of medical disciplines. Nevertheless, at present, several ethical and legal issues related to the practice of these services still remain unsolved and need adequate regulation. This narrative review will give a synthesis of the main ethical and legal issues of telemedicine practice during the COVID-19 pandemic. Material and Methods: A literature search was performed on PubMed using MeSH terms: Telemedicine (which includes Mobile Health or Health, Mobile, mHealth, Telehealth, and eHealth), Ethics, Legislation/Jurisprudence, and COVID-19. These terms were combined into a search string to better identify relevant articles published in the English language from March 2019 to September 2021. Results: Overall, 24 out of the initial 85 articles were considered eligible for this review. Legal and ethical issues concerned important aspects such as: informed consent (information about the risks and benefits of remote therapy) and autonomy (87%), patient privacy (78%) and confidentiality (57%), data protection and security (74%), malpractice and professional liability/integrity (70%), equity of access (30%), quality of care (30%), the professional–patient relationship (22%), and the principle of beneficence or being disposed to act for the benefit of others (13%). Conclusions: The ethical and legal issues related to the practice of telehealth or telemedicine services still need standard and specific rules of application in order to guarantee equitable access, quality of care, sustainable costs, professional liability, respect of patient privacy, data protection, and confidentiality. At present, telemedicine services could be only used as complementary or supplementary tools to the traditional healthcare services. Some indications for medical providers are suggested.
Introduction
In the 1970s, the term "telemedicine" was coined with the meaning of "healing at distance", i.e., using Information and Communication Technologies (ICT) to improve patient outcomes by increasing access to care and medical information [1]. Later, a 1998 report of the World Health Organization (WHO) defined telemedicine/telehealth as: "Telemedicine is the delivery of health care services, where distance is a critical factor, by all health care professionals using information and communications technologies (ICT) for the exchange of valid information in diagnosis, treatment and prevention of disease and injuries, research and evaluation, and for the continuing education of health care providers, all in the interests of advancing the health of individuals and their communities, and that its purpose is to improve health outcomes" [2]. In addition, already at that time, the WHO pointed out that telemedicine presents special ethical problems such as maintaining the confidentiality of information and the privacy of patients, and safeguarding the integrity of information systems [2]. Indeed, this is in fact what occurred up until the start of the Coronavirus disease 2019 (COVID-19) pandemic emergency.
The most recent definition for telemedicine, telehealth, and related terms, which came out in 2020 from the US Centres for Medicare & Medicaid Services (CMS), is as follows: "the exchange of medical information from one site to another through electronic communication to improve a patient's health" [3].
Although, according to the World Medical Association Statement on the Ethics of Telemedicine, "Face-to-face consultation between physician and patient remains the gold standard of clinical care" [4], in recent years, telemedicine has been increasingly practiced. Indeed, it provides several benefits, the most important of which include simplified access to health facilities and a reduction in the distance between patient and doctor, especially in geographical areas where the medical services are difficult to reach or in the case of seafarers, who are remote individuals [5]. Moreover, telemedicine may improve access to physicians for patients with mobility problems, such as patients with disabilities, fragile patients, or older patients [6], and could ideally promote equity of access to health care and quick patient engagement at reduced cost [1,7].
On the other side, teleservices such as teleanalysis, previously avoided as they were considered unsafe because of security concerns, are now recommended given the contemporary situation of many analysts in the world now being forced to work online due to the COVID-19 pandemic emergency [8,9].
Indeed, in this particular situation, telemedicine services have proven indispensable in facing the emerging needs of health care in this specific context [10]. Many lives have been saved during the COVID-19 pandemic through the use of telemedicine services making it possible to avoid physical or face-to-face contact with medical staff, healthcare personnel or other health professionals and patients in hospitals and clinical or health settings (unless strictly necessary), and by possibly reducing the virus spread and preventing or minimizing the risks of contagion either for patients or healthcare personnel [11,12].
The use of telemedicine may be also beneficial for the better management of medical care and diagnoses, and for the reduction of the duration of hospitalization in patients with no serious conditions [13,14] who can be treated in their homes, with higher-level virtually provided medical support or evaluation being available before hospital transfer, allowing the patients to possibly bypass the Emergency Department and be directly placed in a hospital bed [15].
Nevertheless, even in listing the recognized advantages of telemedicine or telehealth services, a series of ethical and legal issues may arise in the use of these disciplines and should be taken into consideration [7].
In the present review, aspects related to ethical or legal challenges dealing with telemedicine applied during the COVID-19 era are reported and presented in order to facilitate a better understanding of the related issues that still need a solution or a standardization across the countries, particularly with regards to patient privacy, informed consent, data protection, physicians' liability and risk of malpractice, and laws and regulations.
Materials and Methods
A specific literature search was performed on PubMed using the following MeSH terms: Telemedicine (which includes Mobile Health or Health, Mobile, mHealth, Telehealth, eHealth), Ethics, Legislation/Jurisprudence, COVID-19. The terms were combined into a search string to retrieve more relevant literature with contents mostly focusing on the above-mentioned topics in the biomedical field (this was the reason behind the choice to use these MeSH terms). The search was set to retrieve articles with publication dates ranging from 1 March 2019 to 14 September 2021, and to identify only full texts in the English language.
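The verbatim search string is not reproduced in this excerpt; one possible form of the combination described above (MeSH terms joined with Boolean operators plus date and language limits) is sketched below for illustration only, not as the study's actual query.

```python
# A possible form of the combined PubMed query (illustrative only; not the study's verbatim string)
search_string = (
    '"Telemedicine"[MeSH Terms] '
    'AND ("Ethics"[MeSH Terms] OR "Jurisprudence"[MeSH Terms]) '
    'AND "COVID-19"[MeSH Terms] '
    'AND ("2019/03/01"[Date - Publication] : "2021/09/14"[Date - Publication]) '
    'AND English[Language]'
)
print(search_string)
```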
Two authors independently screened the different types of articles by reading the abstracts (where available) and drafted a list of the studies that were likely eligible. Once all the full texts were retrieved, an additional selection was carried out, based on reading the complete text of the articles. For articles not selected by both authors, a discussion followed, and a consensus was reached on whether the uncertain articles should be included in this review.
Besides the filters applied to the above-mentioned search string (i.e., publication date and language), the other inclusion criteria were: (a) the type of study (e.g., literature reviews, research articles, commentaries); (b) relevance to the aim of this review; (c) articles also considering populations of remote patients to be treated with telemedicine.
The exclusion criteria also concerned factors such as: (a) other different types of literature (e.g., letters, editorials, perspectives, viewpoints, abstracts); (b) studies that took into consideration only technical and engineering aspects of medical devices used in telemedicine.
Results
Following the literature search, the total number of retrieved records was 85, 18 of which were review articles. A preliminary selection was carried out and 23 articles were excluded, with the potentially relevant articles at this stage being 62.
At the end of the full-text evaluation process, out of the previous 62 publications, 24 publications (including 14 reviews and 10 research articles) were considered eligible for inclusion (records excluded n = 38), based on their relevance to the aim of this review, namely "examining ethical and legal issues in Telemedicine practice during COVID-19 era" (Figure 1). The articles mainly considered telemedicine/telehealth services in general and across different countries. The following medical fields, clinical routines, or non-urgent care types included in telemedicine/telehealth services emerged: Dermatology, Psychoanalysis and Psychotherapy, Pediatrics including perinatal and neonatal care, Nursing, Radiology, Neurology, Gynecology, Cardiology, Ophthalmology, Otorhinolaryngology, Orthopaedic and Musculoskeletal care, Nephrology, Endocrinology, Sports medicine, Chronic illnesses, COVID-19 care, and Follow-up care. This large list indicated that the application of telemedicine increased with respect to the number of medical fields during the COVID-19 pandemic, due to the risk of contagion and the consequent reduction in face-to-face contacts between patients and physicians.
Core themes related to ethical and legal issues in telemedicine/telehealth were identified and analyzed in the selected literature and questions still to be resolved were raised. The synthesis of the critical factors is reported in Table 1.
Table 1 Synthesis of the critical factors (columns: Medical or Health Service; Ethical and/or Legal Issues; Medical Purposes/Disciplines; Article Type; Location; Reference). Excerpted row — Ethical and/or legal issues: patient's medical records will generally be held and owned by the clinician or health care organization, but patients are entitled to access and take a copy of their records; safeguarding risks (patient self-harm); safety (such as how to deal with a patient falling in their home during a consultation): guidance to ensure the safety of patients and clinicians in delivering virtual consultations is needed; Health Insurance Portability and Accountability Act (HIPAA) law revision; fidelity and responsibility (trusting relationships); integrity (no fraudulent behavior nor personal gain); respect for people's rights and dignity (protect privacy and safeguarding); need for an ethics code; boundaries of competence; unfair discrimination in treatment delivery; digital communication with patients must be compliant with the country's and organization's data protection and telehealth regulations, which are rapidly evolving and subject to change. Medical purpose/discipline: Gynecology. Article type: Commentary. Location: Italy. Reference: [1].
Law/regulations/legal issues (83%) stress the absence or variation of the rules among countries and the need for guidelines/best practices or standardization of telemedicine services. In particular, the questions raised regarded the following aspects: costs of services and reimbursement, insurance coverage, virtual prescription of medications, accreditation, licensing, commercialization, recording (as an area of controversy), and evaluation of the effectiveness of the services such as health outcomes and delivery, in terms of quality and cost, individual experience, program implementation, and key performance indicators [1,6,8,10,11,16,19,[21][22][23][24][27][28][29][30][31][32]34].
Remote patients could either benefit or be disadvantaged by virtual care (e.g., lack of access to the Internet, smartphones, or other technology should not prevent children from accessing their medical system) [10]. Indeed, the principle of justice includes equal access to care and fair distribution of the technology for marginalized communities [16]. Ideally, the greatest advantage for patients should be equitable and quick access to healthcare through telemedicine services, but this aspect is still controversial, and in some cases, has been exacerbated during the COVID-19 pandemic (e.g., inequitable access to care, unsustainable costs in a fee-for-service system, and a lack of quality metrics for novel care-delivery modalities). Therefore, the practice of telemedicine needs strong improvement, with specific rules and codes of conduct to be correctly put into practice for a sustainable program to be built.
Discussion
The selected literature highlighted important issues that have to be considered in the application of telemedicine or telehealth services. Among these, informed consent in telemedicine must have the same basic requirements as for traditional medical services. Therefore, there should be a substantial equivalence between telehealth and traditional services. Indeed, telemedicine should correspond to a different way of providing health and social health services but with results similar to those of the traditional face-to-face approaches.
The implications should be discussed in the broadest context possible. Moreover, the evolution of telemedicine poses a series of legal or ethical problems ranging from authorization and accreditation profiles to the protection of patients' personal data and many other critical aspects that require standardization and regulatory processes [7,35]. In this regard, the Italian National Institute of Health (Istituto Superiore di Sanità) recently published a report on COVID-19. The report provided indications on telemedicine healthcare services during the COVID-19 health emergency and recommended the proactive monitoring of the health conditions of people in quarantine, those in isolation or who have been discharged from the hospital, or those isolated at home, who are limited by the rules of social distancing but still in need of continuity of care, even if they are not infected with COVID-19 [36]. Moreover, a practice pointer was proposed, summarizing the evidence on the use of video consultations in primary and specialist care during the COVID-19 pandemic and offering practical recommendations for video consulting in outpatient settings [34].
However, at present, the characterization of the ethical and legal issues and their solution, in the absence of a specific set of regulations, is mainly left to the assessment of compatibility between the practices adopted so far and the general regulatory framework [1].
The evidence on the effectiveness of video consultations is limited, but points towards effectiveness, safety, and high satisfaction among patients and healthcare providers [34]. The evaluation of telemedicine services is a priority of future research and a fundamental action to ensure the quality of the service and, as a consequence, the final adoption and implementation of the service at full capacity. Until then, telehealth should be a supplementary method and not a substitute for face-to-face methods of health care delivery [16].
With respect to the legal issues of virtual prescription of medication, recently highlighted by Curfman et al. [10], it is worth noting that telepharmacy ("a form of pharmaceutical care in which pharmacists and patients are not in the same place and can interact using information and communication technology (ICT) facilities") also presents unresolved limitations (e.g., legal implications) [37]. In Italy, a recent investigation stressed the need for more efforts to be made by national public health stakeholders to better analyze the contribution of telemedicine services available in public pharmacies and to find the best solutions to implement this innovative technology as an established service [38].
However, despite the difficulties, in the U.S., there are websites that are providing information for health care providers and patients who are geographically isolated, or economically or medically vulnerable, and promote virtual health care (telehealth), such as the one built by the Health Resources and Services Administration (HRSA), which provides legal considerations and best practice guides [39]. The American Medical Association (AMA) is also providing a Code of Medical Ethics in Telemedicine Practice [40], and several other organizations such as the American Psychological Association are providing information on how to conduct group therapy using telehealth during COVID-19 [41]; in addition, the Kaiser Family Foundation (KFF), a nonprofit organization that provides independent information on national health issues, is providing information about opportunities and barriers for telemedicine in the U.S. during the COVID-19 emergency [42]. Telehealth could be practiced by video visits, phone calls, and online communication such as email or text messaging, with careful attention paid to the secure storage of patient data (images, lab results, or vital statistics).
Some of the indications for medical providers include:
- Conducting telehealth in private settings, such as a doctor in a clinic or office connecting to a patient who is at home or at another clinic. Providers should always use private locations, and patients should not receive telehealth services in public or semi-public settings, absent patient consent or exigent circumstances;
- Obtaining patients' consent verbally and noting it in the medical record. If a signed form is needed, the patient portal or the mail should be used to obtain a signature; however, it is not necessary to wait for a signed consent form, and a telehealth visit can be conducted once the patient has given verbal consent;
- Treating telehealth appointments in the same way as in-person appointments; the patient should not hesitate to ask questions and request explanations or clarifications [39].
Finally, it has to be said that the review presented here has some limitations. The main limitation is that it is a narrative review conducted using only one electronic database (PubMed), searched with MeSH terms. This decision was made in order to rapidly review the literature on the contribution of telemedicine services during the current pandemic and to give an overview of the legal and ethical issues raised by the practice of these services. Although this possibly limited the number of references obtained, we are confident that the reported information covers the issues that need to be addressed quite thoroughly and highlights the fact that telemedicine is considered an indispensable service.
Conclusions
Currently, according to the literature reviewed here, the ethical and legal issues related to the practice of telehealth or telemedicine services still require standard and specific rules of application in order to guarantee equitable access, quality of care, sustainable costs, professional liability, respect of patient privacy, and data protection and confidentiality. Until such rules are in place, telemedicine services should be used only as complementary or supplementary to traditional healthcare services and not as a complete substitute.
Nevertheless, telemedicine has the potential for widespread application, and health professionals play a fundamental role in following rigorous indications when conducting telehealth visits and in helping to ensure that these technologies respect the therapeutic relationship and the quality of care.
How does mental health care perform in respect to service users' expectations? Evaluating inpatient and outpatient care in Germany with the WHO responsiveness concept
Background Health systems increasingly try to make their services more responsive to users' expectations. In the context of the World Health Report 2000, WHO developed the concept of health system responsiveness as a performance parameter. Responsiveness relates to the system's ability to respond to service users' legitimate expectations of non-medical aspects. We used this concept in an effort to evaluate the performance of mental health care in a catchment area in Germany. Methods In accordance with the method WHO used for its responsiveness survey, responsiveness for inpatient and outpatient mental health care was evaluated by a standardised questionnaire. Responsiveness was assessed in the following domains: attention, dignity, clear communication, autonomy, confidentiality, basic amenities, choice of health care provider, continuity, and access to social support. Users with complex mental health care needs (i.e., requiring social and medical services or inpatient care) were recruited consecutively within the mental health services provided in the catchment area of the Hanover Medical School. Results 221 persons were recruited in outpatient care and 91 in inpatient care. Inpatient service users reported poor responsiveness (22%) more often than outpatients did (15%); however this was significant only for the domains dignity and communication. The best performing domains were confidentiality and dignity; the worst performing were choice, autonomy and basic amenities (only inpatient care). Autonomy was rated as the most important domain, followed by attention and communication. Responsiveness within outpatient care was rated worse by people who had less money and were less well educated. Inpatient responsiveness was rated better by those with a higher level of education and also by those who were not so well educated. 23% of participants reported having been discriminated against in mental health care during the past 6 months. The results are similar to prior responsiveness surveys with regard to the overall better performance of outpatient care. Where results differ, this can best be explained by certain characteristics that are applicable to mental health care and also by the users with complex needs. The expectations of attention and autonomy, including participation in the treatment process, are not met satisfactorily in inpatient and outpatient care. Conclusion Responsiveness as a health system performance parameter provides a refined picture of inpatient and outpatient mental health care. Reforms to the services provided should be orientated around domains that are high in importance, but low in performance. Measuring responsiveness could provide well-grounded guidance for further development of mental health care systems towards becoming better patient-orientated and providing patients with more respect.
Background
Patients' opinions and views are increasingly being recognized as major indicators of how well health services and health systems are performing, as well as providing guidance for further service improvement [1]. The service users' view is particularly relevant when trying to make health services more responsive to users' expectations. In the context of the World Health Report 2000, WHO developed the concept of health system responsiveness as a parameter for a health care system's ability to respond to service users' legitimate expectations of non-medical issues in mental health care [2]. The concept that relates to patient orientation and showing respect for persons in mental health care consists of eight domains: autonomy, confidentiality, communication, dignity, social support, attention, basic amenities and choice. A detailed definition of the domains is presented in table 1 [3]: Good responsiveness in mental health care is measured by the system's ability to abate the negative side effects that are associated with being mentally ill and undergoing medical treatment. Mental illness and medical treatment affect a patient's sense of autonomy and dignity and cause anxiety and shame. Responsiveness, as conceptualised by WHO, aims to strengthen the rights of the individual in the context of the health care system [3,4].
Responsiveness in this sense becomes even more important when considering mental illness and mental health care. The characteristics of mental illness and also of some treatments -such as coercive treatment -as well as the stigma still attached to mental health care, make patients even more vulnerable. Therefore, having good responsiveness is crucial for mental health care systems. Responsiveness is expected to impact positively on health outcomes [5], since it will lower the threshold to seek help early. Beyond this, certain domains such as communication, dignity and autonomy have been shown in studies to positively impact on treatment outcomes [6,7]. Despite the fact that non-medical aspects of care and therapy are often inter-related (particularly in mental health care), responsiveness should be considered as an entity on its own. It is one of the three fundamental and independent objectives of health systems as defined in the World Health Report: good health, fair finance and responsiveness [2].
In this study we applied the WHO concept of responsiveness to a mental health care system for the first time in a standardised way. We thereby attempted to answer the following questions (proposed by WHO as key-questions to responsiveness surveys [8]): • Which aspects of responsiveness work well and which less well?
• Are there any differences between the responsiveness of inpatient and ambulatory health care services?
• What are the perceptions of responsiveness amongst different socio-demographic groups, in particular vulnerable groups, within a country?
• Which responsiveness domains are most important to people? Are these ones with good or poor performance? What is the performance of ambulatory and inpatient mental health care in the context of responsiveness?
• What are the main reported financial barriers and discrimination to access mental health care?
We applied this concept to a population of service users within a catchment area in the mental health care system in Germany. Psychiatric hospital care in Germany is organised in defined catchment areas. Most outpatient care is provided by psychiatrists in private practice. Patients with more complex illnesses can choose to be treated in psychiatric outpatient departments found in larger cities. These offer more intensive treatment by multiprofessional teams. Patients can freely choose where they want to be treated; referral to psychiatric outpatient care is not needed. In case of an acute need for inpatient care, patients are usually confined to being treated in their catchment area's hospital. With the exception of a small set fee to be borne by the patient, costs for psychiatric inpatient and outpatient care, including medication, are covered by health insurance companies (98% of the German population has health insurance [9]).
Methods
The concept
WHO developed responsiveness as a concept primarily to evaluate general health care systems on a national level. The development of this concept drew on a broad-scale review of the literature concerning patient satisfaction and quality of care. Through this review, and also at a meeting of experts in 1999, the eight domains as defined in table 1 were identified [8]. In previous qualitative work we had evaluated the applicability of this concept to mental health care [10]. The concept proved to suit mental health service users' expectations. However, service users also had additional expectations that were subsumed under a ninth category, namely continuity [11].
The instrument
To measure responsiveness, WHO developed and validated a questionnaire. It was used to assess responsiveness by population surveys in 60 countries in the Multi Country Service Study (MCSS) [12]. The questionnaire measures responsiveness for inpatient and outpatient care in eight domains (although access to social support is only assessed in inpatient care) as presented in table 1. Responsiveness is measured on a scale ranging from "very good" (one) to "very poor" (five). A more detailed description of the instrument, which meets all classical quality criteria in psychometric testing, and of the MCSS can be obtained from documents available on the internet [8,13,14]. We tailored the German version of the MCSS questionnaire to suit mental health care by adapting its terminology, adding questions on the additional domain of continuity and attaching a section evaluating experiences with day care and hostel care. These are important pillars of mental health care provision. We shortened the time-frame during which experiences with the health care system were assessed from twelve to six months.
To measure the importance of the domains, participants were -in line with the WHO questionnaire -asked to identify the domain they felt was most important to them in mental health care.
Also, in accordance with the WHO approach, barriers to mental health care were assessed. Participants were asked whether they felt they had been treated badly by the mental health care system during the past six months. Various possible reasons for being treated badly (gender, age, etc.) were given. Participants were also asked whether they had decided not to make use of mental health care for financial reasons in the last six months.
State of health was -as in the WHO questionnaire -evaluated with parts of the WHO DAS II [15]. The demographic data assessment was extended to include information on duration of illness, housing situation (e.g., sheltered or independent) and legal guardianship. Finally, the revised questionnaire was tested with experts and service users in respect to its comprehensibility [16].
The survey
The survey was carried out in the catchment area of the Hanover Medical School's psychiatric departments. The area serves a population of approximately 140,000, living in four districts of the city of Hanover. The city has a total population of 500,000. Between March 1 and June 30, 2006, service users were consecutively recruited in all adult mental health facilities of the catchment area. Private psychiatric practices were not included.
The study was approved by the ethics committee of the Hanover Medical School.
Subjects were recruited after being initially approached by the service staff with regard to their willingness to participate. The criteria for inclusion were: use of complex mental health services in the catchment area during the past six months. "Use of complex services" was defined as making use of social support (e.g., day care, hostel, supported housing) as well as medical support (psychiatrist), or receiving inpatient care during the last six months. In addition to this, participants had to be cognitively capable of following the interview. We chose the criterion "use of complex services" because these service users are the most experienced within the system, generally having greater needs and requiring more intensive care and support. Focusing health care reforms on improving the care of sicker patients with more complex needs has been highlighted as an effective way to improve the performance of the overall system [17].
The interviews were carried out face to face by trained external interviewers. The external interviewers explained the modes of the study once more to the participants and obtained their written consent. Interviews lasted between 45 minutes and one hour. Participants were compensated with 10 €. Interviews were strictly anonymous. To ensure subjects were not accidentally interviewed twice, each record was labelled with a code derived from the participant's name.
Participants were questioned about all parts of the mental health care system that they had experienced during the last six months. However, to prevent interference, only data collected from current inpatient users was used to assess inpatient responsiveness. Likewise, only data collected in outpatient care was included in the analyses of outpatient responsiveness. To quantify sampling bias, the socio-demographic properties of the study group were compared to those of patients who had had contact with the mental health care system during a period of twelve months prior to the start of the study.
Data analysis
Data analysis was done with SPSS for Microsoft Windows. Graphs and figures were produced using Microsoft Excel.
In accordance with WHO's approach in the MCSS, responsiveness outcomes were dichotomised into good responsiveness (combining responses very good and good) and poor responsiveness (combining responses moderate, poor and very poor) [18].
Like WHO, we built an overall responsiveness score for inpatient and outpatient responsiveness. For this purpose, we averaged the raw values of all domains.
Responses regarding present state of health were dichotomised in a similar way to the responsiveness questions.
Differences in responsiveness according to socio-demographic characteristics and service style were analysed using parametric tests in cases of normality and non-parametric tests in all other cases. Normal distribution was assessed by the Kolmogorov-Smirnov test, and by the Shapiro-Wilk test where the sample population was smaller than 50. Differences in responsiveness between service styles were analysed, stratified by state of health, using Mantel-Haenszel statistics. P-values < 0.05 were considered significant.
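To make the analysis pipeline described above concrete, the following is a minimal sketch in Python (pandas/scipy); the original analysis was carried out in SPSS, and the data frame layout, the column names (setting and the domain columns), and the domain list are hypothetical.

```python
# Illustrative sketch only; the published analysis used SPSS, and all column
# names and the domain list below are hypothetical.
import pandas as pd
from scipy import stats

DOMAINS = ["attention", "dignity", "communication", "autonomy",
           "confidentiality", "amenities", "choice", "continuity"]

def analyse(df: pd.DataFrame):
    # Responses are coded 1 = "very good" ... 5 = "very poor".
    # Dichotomise: 1-2 -> good responsiveness, 3-5 -> poor responsiveness.
    poor = df[DOMAINS].apply(lambda col: (col >= 3).astype(int))
    poor_rates = poor.mean()                      # share rating each domain as poor

    # Overall responsiveness score: average of the raw domain values per person.
    df = df.assign(overall=df[DOMAINS].mean(axis=1))

    # Choose a parametric or non-parametric comparison of the two service styles,
    # depending on whether the overall score looks normally distributed.
    groups, normal = {}, {}
    for name in ("inpatient", "outpatient"):
        x = df.loc[df["setting"] == name, "overall"].dropna()
        p = (stats.shapiro(x).pvalue if len(x) < 50
             else stats.kstest(stats.zscore(x), "norm").pvalue)
        groups[name], normal[name] = x, p > 0.05

    if all(normal.values()):
        test = stats.ttest_ind(groups["inpatient"], groups["outpatient"], equal_var=False)
    else:
        test = stats.mannwhitneyu(groups["inpatient"], groups["outpatient"])
    return poor_rates, test
```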
Study group
312 persons were recruited, 91 in inpatient care and 221 in outpatient facilities (five hostels, two outpatient departments and a company providing sheltered work).
In two of the hostels, the company providing sheltered work, and one of the outpatient departments, all service users fulfilled the criteria for inclusion; one third of them consented to participate in this study. As we know from an analysis of the company providing sheltered work, those who refused to be interviewed there did not differ by gender or age (the company had 179 employees, 72% of whom were male, and 33% of whom participated). However, many of these service users used several mental health facilities; thus, if they had already been interviewed in the sheltered work company, they did not sign up for an interview in the hostel or outpatient department. While we were able to ensure that no one was interviewed twice, for data protection reasons we could not measure how many persons who refused to be interviewed in one facility did so because they had already been interviewed in another.
We compared those participants recruited in the outpatient departments for our study with routine data concerning all patients who were treated there the year before (n = 1545). They did not differ by gender, age or duration of illness, or by whether they were living in a hostel or were under legal guardianship. Also participants recruited during inpatient treatment did not differ from those treated there the year before (n = 1055).
Of those participants recruited in outpatient care, 50% had had their last outpatient contact during the last week, 36% between one week and one month ago, and 15% between one and six months ago. Two thirds of participants reported an outpatient department as the location of their last contact with the mental health care system; one third had been to see a practice-based psychiatrist. Of those participants who were recruited in inpatient care, 9% had had to be coerced into being admitted.
Details of the socio-demographic characteristics of the inpatient and outpatient groups are disclosed in table 2.
Responsiveness in inpatient and outpatient care
On average, 15% of participants reported negative experiences in outpatient care and 22% in inpatient care. Overall, inpatient care scored worse than outpatient care in every aspect. However, this was only significant for the domains of dignity (p = .027) and communication (p = .007). This pattern of results did not change when participants who had had to be coerced into inpatient admission were excluded. State of health did not contribute to the differences between inpatient and outpatient care (Mantel-Haenszel statistics), except in the case of the domain of dignity. Here, inpatients who rated their state of health as good reported poor experiences in the domain of dignity more often (p = .063). Figure 1 shows that the relative ranking of domains was quite similar in both service systems.
Both systems performed best in respect to confidentiality. 12% (inpatient) and 6% (outpatient) of users rated this domain as poor. Second best in outpatient care was dignity (7%), whilst in inpatient care both dignity and continuity scored second best, with 15% of participants rating these domains as poor.
The worst performing domains in both service systems were choice of health care provider (27% of outpatients and 31% of inpatients) and autonomy and participation (21% of inpatients versus 28% of outpatients). Basic amenities in inpatient care were rated comparably poorly, at 29%. Figure 2 shows the importance of the domains in relation to their performance:
Importance of domains and performance
Outpatient care: autonomy and participation and attention are named by the majority as most important. However, they score amongst the lowest in terms of performance.
Only dignity and clear communication score high in importance and in performance.
Inpatient care: prompt attention, which was rated the third most important domain, is the only domain that scores well in both importance and performance (however, the score for performance borders on being not good). Communication, which the majority of inpatient service users indicate as most important, performs poorly. Autonomy is one of the domains frequently indicated as being most important; however its performance is poor.
Responsiveness in respect to vulnerable groups
The overall inpatient and outpatient responsiveness scores were stratified for socio-demographic variables (see tables 3a and 3b) to assess whether specific groups are vulnerable to poorer responsiveness:
Outpatient care
Responsiveness was rated significantly poorer if people had a lower monthly income. Analysing the duration of illness revealed that in the first three quartiles responsiveness worsened the longer a person was ill (p = .03, Jonckheere-Terpstra test). However, service users in the last quartile, who had been ill for more than 22 years, rated responsiveness much better. This results in findings which are not significant when all four quartiles are analysed at the same time using Kruskal-Wallis or Jonckheere-Terpstra statistics.
Inpatient care
No significant differences in responsiveness ratings were found for the variables age, duration of illness, income or working status. However, persons with a basic level of education, as well as those with a university qualification, rated responsiveness significantly better than those having an intermediate level of education.
Barriers to mental health care
23% of all participants reported having experienced discrimination in mental health care for at least one reason.
The answer most often given as a reason for discrimination was "other reasons" (15%), followed by "illness" (12%). A closer look at the participants who gave the answer "other reasons" reveals that most seemed to give a response which did not fit the question, e.g., they revealed who was discriminating against them rather than why. Some answers contained paranoid features.
6.5% of study participants reported that on at least one occasion in the past six months they did not ask for mental health care because they felt they could not financially afford it.
Discussion
In this study we tried to measure the responsiveness of mental health care by the example of a regional mental health care system in a larger German city. The study group can be considered representative of service users in psychiatric inpatient care and of service users using complex services in urban areas of Germany.
It is interesting to compare the ratings of responsiveness in mental health care with data on general health care responsiveness. Within the framework of the MCSS, WHO assessed the responsiveness of the general health system in Germany. For this purpose, a sample of the German general population (n = 1123) was surveyed using comparable methods; 698 persons reported contact with outpatient care and 96 with inpatient care [19]. Our findings are discussed in the light of this prior study. By doing so, we attempt to answer the key questions which were proposed by WHO for responsiveness surveys.
Which aspects of responsiveness work well and which work less well?
Confidentiality is the best performing domain in inpatient and outpatient care. This finding is in line with the WHO results for the general health care system [19]. Except for cases of severe violation of data protection, patients do not know whether their personal information is handled confidentially or not. However, the general health system and the surveyed mental health system seem to be able to build an atmosphere of trust and promote confidentiality. In fact, standards of data protection in psychiatry are very high. Without a patient's written consent, no case related information can be passed on except to the referred service.
Also, the domains dignity and access to social support while in inpatient care perform well both in the German MCSS and in our study. This is not the case for choice of health care provider and quality of basic amenities. Unlike in general health care, these domains are among the worst performing ones in inpatient care [19]. The relatively poor performance of basic amenities might reflect the fact that rooms and furniture on psychiatric wards often do not meet the standards that patients have experienced in other clinics. It should also be taken into account that psychiatric patients often spend, and in effect live, many weeks on the ward while being in better physical shape than the average medical or surgical patient. Thus, their expectations of their surroundings might be higher.
In mental health care, there is indeed less opportunity for free choice of health care provider. This is not only due to the fact that some patients have to undergo coercive treatment, but more so due to the scarcity of facilities, the lack of any need for facilities to compete for service users, and the lack of information about alternative services and treatments [20]. This often minimises the choices a patient has, and as such the patient is forced to take whatever is available or not to seek help at all. Poor opportunities for choice are aggravated by the policy of many service providers that, for therapeutic reasons, do not support service users in changing therapists if they do not like the one they are with.
Autonomy does not perform very well in mental health care. The same result is found in responsiveness surveys that focus on general and primary health care [18]. The difficulties involved in letting patients participate in decisions, thereby strengthening their autonomy, are thus not a specific mental health care problem (which, if they were, would be explained by the nature of mental illness). Rather, it seems to be a general problem in medical care that there is still a strong information gradient between providers and service users; paternalistic self-images still persist, and consumer empowerment is a challenge that needs to be worked on [21].
Are there any differences between the responsiveness of inpatient and ambulatory health care services?
Only in the domains dignity and clear communication do the statistics differ significantly between inpatient and outpatient care. In both the global and the German data, the MCSS revealed poorer ratings for inpatient care in all domains, but failed, however, to report on statistical significance [19,18]. The difference in the dignity rating between mental health inpatient and outpatient care is mediated by state of health. Inpatients who feel healthy are more critical in respect to dignity, which is one of the domains considered most important in mental health care by participants. The difference in ratings among healthy patients might be explained by a kind of "selection effect": the healthier the inpatients become, the more they will question the need for putting up with life on a hospital ward. However, attaining preliminary discharge requires much effort and a lot of arguing with the therapist. In contrast, in outpatient care, patients who feel healthy and who are not content with dignity and respect within their treatment will simply not keep their next appointment. Also, people rating their health as poor might be more convinced about the need for care and, therefore, will probably adjust their expectations about being treated with dignity.
Figure 1. Percentage of participants rating responsiveness as poor (*: p < .05, **: p < .005).
Differences in ratings for clear communication might be explained by a greater need in hospital for receiving information that a patient can fully understand. This need might be related to the often unfamiliar situation of inpatient care and a patient's greater dependency under these conditions. In addition, relationships in outpatient care are usually long-lasting. Therefore, after a while, most basic questions have probably been discussed.
What are the perceptions of responsiveness among different socio-demographic groups, in particular vulnerable groups?
The German MCSS found that inpatient responsiveness was perceived as worse by all vulnerable groups, i.e., the elderly, the indigent, the less educated and the sicker. In outpatient care, responsiveness was rated worse by the less educated, the sicker and the indigent.
However, our findings in mental health care differ from those of the MCSS: whilst outpatient care was perceived differently depending on education and income, we did not find a difference in respect to state of health. Our study group was probably more homogeneous in respect to health (mostly long-term ill and in need of complex services) than the MCSS general population group. Although responsiveness was rated worse the longer a patient had been ill, astonishingly, those who had been ill for a very long time rated responsiveness quite well. One hypothesis for this behaviour is that people might lower their expectations during the course of an illness. However, if this were the case, this trend should also have been shown in the third quartile of patients who had been ill for 12 to 22 years. Another explanation is that those who have been ill for more than 22 years have experienced psychiatric care both before and at the beginning of the mental health care reforms 30 years ago. Therefore, they have indeed experienced very low standards of care as a means of comparison.
Other than for education, the perception of responsiveness did not differ by socio-demographic characteristics in inpatient care. We do not have a convincing explanation for the relationship between education and responsiveness, particularly considering that those with an intermediate level of education perceived responsiveness as worse in inpatient care but as better in outpatient care. We believe more research might be useful to clarify the relationship between education and experiences with mental health care.
Although this was not the case in the MCSS mental health inpatient responsiveness did not differ much according to that are not available to them in the uniform and restricted atmosphere of inpatient care.
Which responsiveness domains are most important to people? Are these the ones with good or poor performance results?
Those domains rated less often as being important should not be interpreted as marginal. In most cases, they are those that perform relatively well, as is the case with dignity, social support, confidentiality and continuity. Also, the fact that a domain such as continuity (added in particular as a new domain to the concept) reveals its quality primarily from a longitudinal perspective might have contributed to it being rated as less important. As the qualitative research into mental health system responsiveness has also shown, continuity is a relevant domain, although, compared to other domains such as autonomy, not a prominent one [10]. At the same time, the ratings of a domain assessed as being most important might in fact be negatively influenced by poor performance.
There is a cluster of three domains rated by the majority as most important: attention, autonomy and communication.
Clear communication is valued much higher in inpatient care for reasons discussed above and is related to inpatients being more often in a situation that is unfamiliar to them and them therefore having greater dependency.
Prompt attention seems to be a core expectation in general health care, as shown by the MCSS. The high rating of autonomy -although this is also known from other medical sectors [22] -might have a specific meaning for mental health care. Cognitive constraints are frequent, denial of illness and refusal of treatment too. Also, the possibility of coercive treatment exists. All these aspects lead to a more paternalistic approach than in other medical specialities [23]. The specific desire of mental health service users to be involved in mental health care decisions has been highlighted in other studies too [24,23]. Qualitative exploration of service users' expectations in psychiatry shows that the meaning of autonomy does not only imply the idea of shared decision making but also implies transparency and involvement in report writing. This is not a claim stemming from patients simply being in denial about their illness. Patients accept that there are certain mental states where they are not capable of making all decisions. However, the more they recover, the more they want to be involved [10].
It is a cause for concern that autonomy and attention, indicated so often as most important, do not perform well in either outpatient or inpatient care in the study catchment area. The poor performance of autonomy is probably not restricted to the catchment area surveyed. More autonomy and participation is also a general claim made by service user organisations [25].
What are the main reported financial barriers and issues of discrimination with regard to obtaining access to mental health care?
The MCSS found that in 2001, 5% of the German population did not ask for health care because of financial reasons. This figure was slightly higher in our study population. Because our sample population included only those who had finally succeeded in entering the mental health care system despite financial barriers, no precise statement about the real impact of financial barriers can be made. Also, the investigation into possible issues of discrimination proved to be difficult in the context of this study. The responses given naming "other causes" as reasons for discrimination indicate that the question was misunderstood by quite a number of participants. We have concluded that the responsiveness questionnaire and the format of this study are not appropriate for assessing barriers to mental health care.
Conclusion
Responsiveness as a parameter for the quality of health care does indeed provide a refined picture of inpatient and outpatient performance in mental health care. Even if only the views of service users with complex service needs are considered, results of this study can be transferred to all users and provide guidance for further development and improvement in mental health care [17].
Domains that are rated high in importance and poor in performance should be given priority and measures should be implemented to improve services. Such domains include prompt attention and autonomy and participation in decisions both in inpatient and ambulatory mental health care. There are indications to show that including cognitively impaired persons and those who deny their illness in decision making may lead to better attitudes towards mental health treatment and compliance [23]. Methods to increase autonomy and participation of mental health service users include shared decision making and improvement in the transparency of mental health reports. Also, models used for other chronic illnesses and diseases, such as diabetes, rheumatoid arthritis and asthma, that purposefully train patients to become experts on their illness [22] and encourage self-management in a structured way, should be explored for ways in which this could be transferred to mental health care. All these measures not only strengthen a patient's participation and control over treatment, but also go hand in hand with increasing the specific knowledge and information about their illness. Thus, there is a strong link here to the domain of communication. Good information and clear communication seems particularly difficult to attain for persons who have mental problems [26].
Responsiveness as a parameter of health system performance provides a structured way to evaluate mental health services in the areas of patient orientation and treating a patient with respect. However, the instrument used in this study is much too complicated and in-depth for routine use. It is planned that the instrument will soon be revised and shortened to make it into a short, self-administrable and easy to understand tool that can be realistically applied in real clinical life. This would provide the opportunity for routine evaluation and for benchmarking service systems with results being fed back to service providers.
Close to 30 years of mental health care reforms in Germany have led to quite a number of community-orientated service provisions. However, the momentum for reform has, for some reason, slowed down in recent years [27]. The concept of responsiveness can offer new, controllable guidelines for service development and can help to better meet patients' expectations and to strengthen patients' position within the system.
A fast methodology for large-scale focusing inversion of gravity and magnetic data using the structured model matrix and the $2D$ fast Fourier transform
Focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid is discussed. For the uniform grid the model sensitivity matrices exhibit block Toeplitz Toeplitz block structure, by blocks for each depth layer of the subsurface. Then, through embedding in circulant matrices, all forward operations with the sensitivity matrix, or its transpose, are realized using the fast two dimensional Fourier transform. Simulations demonstrate that this fast inversion algorithm can be implemented on standard desktop computers with sufficient memory for storage of volumes up to size $n \approx 1M$. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub Kahan bidiagonalization or randomized singular value decomposition algorithms in which all matrix operations with the sensitivity matrix are implemented using the fast Fourier transform. These two algorithms are contrasted for efficiency for large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The presented results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and that it is sufficient to use projected spaces of size approximately $m/8$, for data sets of size $m$. In contrast, the Golub Kahan bidiagonalization leads to more efficient implementations for the inversion of magnetic data sets, and it is again sufficient to use projected spaces of size approximately $m/8$. Moreover, it is sufficient to use projected spaces of size $m/20$ when $m$ is large, $m \approx 50000$, to reconstruct volumes with $n \approx 1M$. Simulations support the presented conclusions and are verified on the inversion of a practical magnetic data set that is obtained over the Wuskwatim Lake region in Manitoba, Canada.
Introduction
The determination of the subsurface structures from measured potential field data is important for many practical applications concerned with oil and gas exploration, mining, and regional investigations, Blakely [1995]; Nabighian et al. [2005]. There are many approaches that can be considered for the inversion of potential field data sets. These range from techniques that directly use the inversion of a forward model described by a sensitivity matrix for gravity and magnetic potential field data, as in, for example, Boulanger and Chouteau [2001]; Farquharson [2008]; Lelièvre and Oldenburg [2006]; Oldenburg [1996, 1998]; Pilkington [1997]; Silva and Barbosa [2006] and Portniaguine and Zhdanov [1999], to approaches that avoid the storage and generation of a large sensitivity matrix altogether, as in Cox et al. [2010] and Uieda and Barbosa [2012]. Of those that do handle the sensitivity matrix, some techniques to avoid the large scale challenge include wavelet and compression techniques, Li and Oldenburg [2003]; Portniaguine and Zhdanov [2002] and Voronin et al. [2015]. Of interest here is the development of an approach that takes advantage of the structure that can be realized for the sensitivity matrix, and then enables the use of the fast Fourier transform for fast matrix operations and avoids the high storage overhead of the matrix.
The efficient inversion of three dimensional gravity data using the 2D fast Fourier transform (2DFFT) was presented in the Master's thesis of Bruun and Nielsen [2007]. There, it was observed that the sensitivity matrix exhibits a block Toeplitz Toeplitz block (BTTB) structure provided that the data measurement positions are uniform and carefully related to the grid defining the volume discretization. It is this structure which facilitates the use of the 2DFFT via the embedding of the required kernel entries that define the sensitivity matrix within a block Circulant Circulant block (BCCB) matrix, as explained in Chan and Jin [2007]; Vogel [2002]. Then, Zhang and Wong [2015] used the BTTB structure for fast computations with the sensitivity matrix, and employed this within an algorithm for the inversion of gravity data using a smoothing regularization, allowing for variable heights of the individual depth layers in the domain. They also applied optimal preconditioning for the BTTB matrices using the approach of Chan and Jin [2007]. Their approach was subsequently optimized, but only for efficient forward gravity modeling and with a slight modification in the way that the matrices for each depth layer of the domain are defined using the approximation of the forward integral equation. In particular, Zhang and Wong [2015] use a multilayer approximation of the gravity kernel, rather than the derivation of the kernel integral in Li and Chouteau [1998]. They noted, however, that their approach is subject to greater potential for error on coarse-grained domains because it does not use the exact kernel integral developed by Li and Chouteau [1998]. Bruun and Nielsen [2007] also developed an algorithm that is even more efficient in memory and computation than the use of the BTTB for each depth layer, by using an upward continuation method to deal with the issue that measured data are only provided at the surface of the domain; they concluded, however, that this was not suitable for practical problems. Finally, they also considered the interpolation of data not on the uniform grid to the uniform grid, hence removing the restriction on the uniform placement of measurement stations on the surface, but potentially introducing some error due to the interpolation. On the other hand, their study did not include Tikhonov stabilization for the solution of the linear systems, and hence did not implement state-of-the-art approaches for resolving complex structures with general L_p norm regularizers (0 ≤ p ≤ 2). Moreover, standard techniques for the inclusion of depth weighting and the imposition of constraint conditions were not considered. The focus of this work is, therefore, a demonstration and validation of efficient solvers that are more general and can be effectively employed for the independent focusing inversion of both large scale gravity and magnetic potential field data sets. It should be noted, moreover, that the approach can also be applied to domains with padding, which is of potential benefit for structure identification near the boundaries of the analyzed volume.
First, we note that the fast computation of geophysics kernel models using the fast Fourier transform (FFT) has already been considered in a number of different contexts. These include calculation in the Fourier domain as in Li et al. [2018], and also by Pilkington [1997] in conjunction with the conjugate gradient method for solving the magnetic susceptibility inverse problem. Fast forward modeling of the magnetic kernel on an undulated surface, combined with spline interpolation of the surface data, was also suggested by Li et al. [2018] using an implementation of the model in the wave number domain. Further, fast forward and high accuracy modeling of the gravity kernel using the Gauss 2DFFT has also been discussed. Moreover, the derivation of the forward modeling operators that yield the BTTB structure for the magnetic and gravity kernels, in combination with domain padding and the staggered placement of measurement stations with respect to the domain prisms at the surface, was carefully presented in Hogue et al. [2019]. Hence, here, we only present the necessary details concerning the development of the forward modeling approach. Associated with the development of a focusing inversion algorithm are the choice of solver within the inversion algorithm, the choice of regularizer for focusing the subsurface structures, and a decision on the determination of suitable regularization parameters. With respect to the solver, small scale problems can be solved using the full singular value decomposition (SVD) of the sensitivity matrix, which is not feasible for the large scale. Moreover, the use of the SVD for focusing inversion has been well-investigated in the literature, see for example [Vatankhah et al., 2014, 2015], while choices and implementation details for focusing inversion are reviewed in [Vatankhah et al., 2020b]. Furthermore, methods that yield useful approximations of the SVD, hence enabling automatic but efficient techniques for the choice of the regularization parameters, have also been discussed in the context of iterative Krylov methods based on the Golub-Kahan Bidiagonalization (GKB) algorithm [Paige and Saunders, 1982], and in Vatankhah et al. [2018, 2020a] when adopted using the randomized singular value decomposition (RSVD) [Halko et al., 2011]. Recommendations for the application of the RSVD with power iteration, and for the sizes of the projected spaces to be used for both GKB and RSVD, were presented, but only within the context of problems that can be solved without the use of the 2DFFT. Thus, a complete validation of these algorithms for the solution of the large scale focusing inversion problem, with considerations contrasting the effectiveness of these algorithms in the large scale, is still important, and is addressed here.
We comment, further, that there is an alternative approach for the comparison of RSVD and GKB algorithms, which was discussed by Luiken and van Leeuwen [2020]. The focus there, on the other hand, was on the effective determination of both the size of the projected space and the optimal regularization parameter using these algorithms. Their RSVD algorithm used the range finder suggested in [Halko et al., 2011, Algorithm 4], rather than the power iteration. Based on their single, rather small, example with an under-determined sensitivity matrix of size 400 by 2500, they concluded that this was not successful. The test of the GKB approach was successful for this problem, but it is still rather small scale as compared to the problems considered here. Instead, as stated, we return to the problem of assessing a suitable size of the projected space to be used for large scale inversion of magnetic and gravity data, using the techniques that provide an approximate SVD and hence efficient and automatic estimation of the regularization parameter concurrently with solving large scale problems. We use the method of Unbiased Predictive Risk Estimation (UPRE) for automatically estimating the regularization parameters, as extensively discussed elsewhere, Vogel [2002].
Overview of main scientific contributions. This work provides a comprehensive study of the application of the 2DFFT in focusing inversion algorithms for gravity and magnetic potential field data sets. Specifically, our main contributions are as follows. (i) A detailed review of the mechanics for the inversion of potential field data using focusing inversion algorithms based on the iteratively regularized least squares algorithm in conjunction with the solution of linear systems using GKB or RSVD algorithms; (ii) The extension of these approaches to the use of the 2DFFT for all forward multiplications with the sensitivity matrix, or its transpose; (iii) Comparison of the computational cost when using the 2DFFT as compared to using the sensitivity matrix, or its transpose, directly, when implemented within the inversion algorithm, and dependent on the sizes of the projected spaces adopted for the inversion; (iv) Presentation of numerical experiments that confirm that the RSVD algorithm is more efficient than the GKB for the inversion of gravity data sets, for larger problems than previously considered; (v) A new comparison of the GKB and RSVD algorithms for the inversion of magnetic data sets, showing that GKB is to be preferred; (vi) Finally, confirmation of all conclusions by application to a practical data set, demonstrating that the methodology is suitable for focusing inversion of large scale data sets and can provide parameter reconstructions with more than 1M variables using a laptop computer.
The paper is organized as follows. In Section 2 we present the general methodology used for the independent inversion of gravity and magnetic potential field data. The BTTB details are reviewed in Section 2.1 and stabilized inversion is reviewed in Section 2.2. Details for the numerical solution of the inversion formulation are provided in Section 2.3 and the algorithms are in Section 2.4. The estimated computational cost of each algorithm, in terms of the number of floating point operations (flops), is given in Section 2.5. Numerical results applying the presented algorithms to synthetic and practical data are described in Section 3, with the details that apply to all computational implementations given in Section 3.1 and the generation of the synthetic data used in the simulations provided in Section 3.2. Results comparing the computational costs for one iteration of the algorithm with, and without, the 2DFFT are discussed in Section 3.3.1. The convergence of the 2DFFT-based algorithms for problems of increasing size is discussed in Section 3.3.2. Validating results for the inversion of real magnetic data obtained over a portion of the Wuskwatim Lake region in Manitoba, Canada, are provided in Section 3.4, and conclusions in Section 4. Appendix A provides brief details on the implementation of the computations using the embedding of the BTTB matrix in the BCCB matrix and the 2DFFT, and supporting numerical evidence for the figures illustrating the results is provided in a number of tables in Appendix B.
2. Methodology

2.1. Forward Model and BTTB Structure. We consider the inversion of measured potential field data d_obs that describes the response at the surface due to unknown subsurface model parameters m. The data and model parameters are connected via the forward model
$$d_{obs} = G m, \quad (1)$$
where G is the sensitivity, or model, matrix. This linear relationship is obtained via the discretization of a Fredholm integral equation of the first kind,
$$d(a, b) = \int_V h(a, b, x, y, z)\, \zeta(x, y, z)\, dV, \quad (2)$$
where the exact values d and m are the discretizations of the continuous functions d and ζ, respectively, and G in (1) provides the discrete approximation of the integrals of the kernel function h over the volume cells. For the specific kernels associated with gravity and magnetic data, assuming for magnetic data that there is no remanence magnetization or self-magnetization, (2) describes a convolution operation.
Using the formulation of the integral of the kernel as derived by Haáz [1953]; Li and Chouteau [1998] for the gravity kernel, and by Rao and Babu [1991] for the magnetic kernel, the sensitivity matrix G decomposes by column blocks as
$$G = [G^{(1)}, G^{(2)}, \ldots, G^{(n_z)}], \quad (3)$$
where block G^{(r)} is for the r-th depth layer. The individual entries in G correspond to the projections of the contributions from prisms c_{pqr} in the volume to measurement stations, denoted by s_{ij}, at or near the surface. The configurations of the volume and measurement domains are illustrated in Figure 1. Here it is assumed that the measurement stations are all on the surface with coordinates (a_i, b_j, 0) in (x, y, z). Prism c_{pqr} of the domain has dimensions Δ_x, Δ_y and Δ_z in the x, y and z directions, with coordinates that are integer multiples of Δ_x, Δ_y and Δ_z, and is indexed by 1 ≤ p ≤ s_x + p_{xL} + p_{xR} = n_x, 1 ≤ q ≤ s_y + p_{yL} + p_{yR} = n_y, and 1 ≤ r ≤ n_z. This indexing assumes that there is padding around the domain in the x and y directions by additional borders of p_{xL}, p_{xR}, p_{yL} and p_{yR} cells. The distinction between the padded and unpadded portions of the domain is that there are no measurement stations in the padded regions. This yields G ∈ R^{m×n}, where m = s_x s_y and n = n_x n_y n_z, and each G^{(r)} ∈ R^{m×n_r}, where n_r = n_x n_y.
In (3), m ≤ n_r ≪ n and the system is drastically underdetermined for any reasonable discretization of the depth (z) dimension of the volume. Moreover, when n is large the use of the matrix G requires both significant computational cost for the evaluation of matrix-matrix operations and significant storage. Without taking account of structure in G, and assuming that a dot product of real vectors of length n requires 2n floating point operations (flops), calculating GH, for H ∈ R^{n×p}, takes O(2nmp) flops, and storage of matrix G uses approximately 8mn × 10^{-9} GB. For example, suppose p = m = n/8 and n = 10^6; then storage of G requires approximately 1000 GB, and the single matrix multiplication uses ≈ 10^{18}/32 flops, or 10^7 Gflops, without any consideration of additional software and system overheads. These observations limit the ability to do large scale stabilized inversion of potential field data in real time using current desktop computers, or laptops, without taking into account further information on the structure of G. This is the topic of the further discussion here.
Figure 1. The configuration of prism c_{pqr}, 1 ≤ p ≤ s_x + p_{xL} + p_{xR} = n_x, 1 ≤ q ≤ s_y + p_{yL} + p_{yR} = n_y, 1 ≤ r ≤ n_z, in the volume relative to a station on the surface. The stations are located at the centers of the cells on the surface of the domain, and no measurements are taken in the padded portion of the domain.
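As a quick back-of-envelope check of the storage and flop counts quoted above, the following sketch reproduces the arithmetic, assuming 8-byte (double precision) entries; it is purely illustrative and not part of the inversion method.

```python
# Reproduces the dense-matrix cost estimates discussed above (8-byte entries).
def dense_cost(n, p=None):
    m = n // 8                      # number of data, as in the example m = n/8
    p = m if p is None else p
    storage_gb = 8 * m * n * 1e-9   # bytes -> GB for storing G explicitly
    flops = 2 * n * m * p           # flops for the product G @ H, H of size n x p
    return storage_gb, flops

storage_gb, flops = dense_cost(10**6)
print(f"storage ~ {storage_gb:.0f} GB, product ~ {flops / 1e9:.2e} Gflops")
# -> storage ~ 1000 GB, product ~ 3.12e+07 Gflops, consistent with the text.
```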
Bruun and Nielsen [2007] observed that the configuration of the locations of the stations in relation to the domain discretization is significant in generating G^{(r)} with structure that can be effectively utilized to improve the efficiency of operations with G and to reduce the storage requirements. Assuming that the stations are always placed uniformly with respect to the domain prisms, and provided that the distances between stations are fixed in x and y, the matrix G^{(r)} for the gravity kernel has symmetric BTTB structure (SBTTB). Then, it is possible to embed G^{(r)} in a BCCB matrix, and matrix operations can be efficiently performed using the 2DFFT, as explained in [Vogel, 2002]. This structure has also been discussed and utilized for efficient forward operations with G in subsequent work, where it was assumed that the stations are placed symmetrically with respect to the domain coordinates, as illustrated for the staggered configuration in Figure 1 with the stations at the centers of the cells on the surface. With respect to the magnetic kernel, Bruun and Nielsen [2007] demonstrated that G^{(r)} can also exhibit BTTB structure, but they did not use the standard computation of the magnetic kernel integral as described in Rao and Babu [1991]. On the other hand, a thorough derivation of the BTTB structure for G^{(r)} using the approach of Rao and Babu [1991] has been given in Hogue et al. [2019]. That analysis also considered for the first time the use of padding for the domain and the modifications required in the generation of the required entries in the matrix G^{(r)}. It should be noted, as shown in Hogue et al. [2019], that regardless of whether operations with G are implemented using the 2DFFT or by direct multiplication, it is far faster to generate G taking advantage of the BTTB structure. Here, we are concerned with efficient stabilized inversion of potential field data using this BTTB structure, and thus refer to Appendix A for a brief discussion of the implementation of the needed operations with G using the 2DFFT, and point to Hogue et al. [2019] for the details.
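To illustrate the embedding idea, the following is a minimal NumPy sketch of a fast matrix-vector product with a single BTTB depth-layer block. It assumes a generic kernel array t that depends only on the station-prism offsets, so the data and model grids have the same size (i.e., no padding); it is not the actual gravity or magnetic kernel generation, for which we refer to Hogue et al. [2019].

```python
import numpy as np

def bttb_matvec(t, x):
    """Multiply a BTTB matrix by a grid vector via BCCB embedding and the 2D FFT.

    t : (2M-1, 2N-1) array with t[M-1+k, N-1+l] = kernel value for row lag k
        and column lag l (k = i1-i2, l = j1-j2), so the BTTB matrix has entries
        T[(i1,j1),(i2,j2)] = t[M-1+i1-i2, N-1+j1-j2].
    x : (M, N) array, the model vector for one depth layer reshaped to the grid.
    """
    M, N = x.shape
    # First generating array of the (2M x 2N) BCCB matrix that embeds T.
    c = np.zeros((2 * M, 2 * N))
    c[:M, :N] = t[M - 1:, N - 1:]          # non-negative row and column lags
    c[M + 1:, :N] = t[:M - 1, N - 1:]      # negative row lags (wrapped)
    c[:M, N + 1:] = t[M - 1:, :N - 1]      # negative column lags (wrapped)
    c[M + 1:, N + 1:] = t[:M - 1, :N - 1]  # both lags negative (wrapped)
    # Circular convolution of c with zero-padded x, then keep the leading block.
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(x, s=(2 * M, 2 * N)))
    return y[:M, :N].real

# Quick consistency check against an explicitly assembled dense BTTB matrix.
rng = np.random.default_rng(0)
M, N = 4, 5
t = rng.standard_normal((2 * M - 1, 2 * N - 1))
x = rng.standard_normal((M, N))
T = np.array([[t[M - 1 + i1 - i2, N - 1 + j1 - j2]
               for i2 in range(M) for j2 in range(N)]
              for i1 in range(M) for j1 in range(N)])
assert np.allclose(T @ x.ravel(), bttb_matvec(t, x).ravel())
```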
2.2. Stabilized Inversion. The solution of (1) is an ill-posed problem; even if G is well-conditioned the problem is underdetermined because m ≪ n. There is a considerable literature on the solution of this ill-posed problem and we refer in particular to Vatankhah et al. [2020b] for a relevant overview, and specifically the use of the unifying framework for determining an acceptable solution of (1) by stabilization. Briefly, here we estimate m* as the minimizer of the nonlinear objective function Φ_α(m), subject to bound constraints m_min ≤ m ≤ m_max,

Φ_α(m) = Φ_d(m) + α² Φ_S(m).   (4)

Here α is a regularization parameter which trades off the relative weighting of the two terms Φ_d(m) and Φ_S(m), which are respectively the weighted data misfit and stabilizer, given by

Φ_d(m) = ||W_d(Gm − d_obs)||²_2  and  Φ_S(m) = ||W D(m − m_apr)||²_2.   (5)

The weighting matrices W_d, W_h, W_z and W_L are all diagonal, with dimensions that depend on the size of D, which can be used to yield an approximation for a derivative. Here, while we assume throughout that D = I_{n×n} and refer to [Vatankhah et al., 2020b, Eq. (5)] for the modification in the weighting matrices that is required for derivative approximations using D, we present this general formulation in order to place the work in the context of generalized Tikhonov inversion. We also use m_apr = 0, but when initial estimates for the parameter are available, perhaps from physical measurements, note that these can be incorporated into m_apr as an initial estimate for m. The weighting matrix W_d has entries (W_d)_ii = 1/σ_i, where we suppose that the measured data can be given by d_obs = d_exact + η, where d_exact is the exact but unknown data, and η is a noise vector drawn from uncorrelated Gaussian data with variance components σ_i². Whereas stabilizer matrix W_L in W = W_h W_z W_L depends on m, W_h and W_z are constant hard constraint and constant depth weighting matrices. Although W_h can be used to impose specific known values for entries of m, as discussed in [Boulanger and Chouteau, 2001], we will use W_h = I_{n×n}. Depth weighting W_z is routinely used in the context of potential field inversion and is imposed to counteract the natural decay of the kernel with depth. With the same column structure as for G, W_z = blockdiag(W_z^(1), . . . , W_z^(n_z)), where W_z^(r) = (0.5(z_r + z_{r−1}))^{−β} I_{n_r×n_r}, 0.5(z_r + z_{r−1}) is the average depth for depth level r, and β is a parameter that depends on the data set, [Li and Oldenburg, 1996]. Now, the diagonal matrix W_L depends on the parameter vector m, with i-th entry given by

(W_L)_ii = ((m_i − (m_apr)_i)² + ε²)^{(λ−2)/4},   (6)

where parameter λ determines the form of the stabilization, and focusing parameter 0 < ε ≪ 1 is chosen to avoid division by zero. We use λ = 1, which yields an approximation to the L_1 norm as described in [Wohlberg and Rodríguez, 2007], and is preferred for inversion of potential field data, although we note that the implementation makes it easy to switch to λ = 0, yielding a solution which is compact, or λ = 2 for a smooth solution. Based on prior studies we use ε² = 1e−9.
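The construction of the diagonal stabilizer weights is simple to code. A minimal MATLAB sketch follows, assuming the reconstructed form of the W_L entries given above with D = I, W_h = I, and a column ordering of the parameter vector by depth layer; the function name and interface are illustrative, not the authors' implementation.

function w = stabilizer_weights(mvec, m_apr, z_edges, n_r, beta, lambda, epsilon)
% Sketch: diagonal of W = W_h W_z W_L for the focusing stabilizer, assuming
% (W_L)_ii = ((m_i - m_apr_i)^2 + epsilon^2)^((lambda-2)/4), W_h = I, and
% depth weighting (0.5(z_r + z_{r-1}))^(-beta) repeated over the n_r cells
% of each depth layer.
zbar = 0.5*(z_edges(1:end-1) + z_edges(2:end));          % average depth per layer
wz   = kron(zbar(:).^(-beta), ones(n_r, 1));             % diagonal of W_z
wL   = ((mvec - m_apr).^2 + epsilon^2).^((lambda - 2)/4);% diagonal of W_L
w    = wz .* wL;                                         % W_h = I assumed
end

For λ = 1 and ε² = 1e−9 this gives the approximate L_1 weighting used throughout; storing only the diagonal avoids ever forming W explicitly.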
2.3. Numerical Solution. We first reiterate that (4) is only nonlinear in m through the definition of W_L. Supposing that W_L is constant and that null(W_d G) ∩ null(W) = ∅, then the solution m* of (4) without the bound constraints is given analytically by

m* = m_apr + W^{−1}(G̃^T G̃ + α² I_n)^{−1} G̃^T r̃.   (7)

Equivalently, assuming that W is invertible, and defining G̃ = W_d G W^{−1}, r̃ = W_d(d_obs − G m_apr) and y = W(m − m_apr), then y solves the normal equations

(G̃^T G̃ + α² I_n) y = G̃^T r̃,   (8)

and m* can be found by restricting W^{−1}y + m_apr to lie within the bound constraints. Now (8) can be used to obtain the iterative solution for (4) using the iteratively reweighted least squares (IRLS) as described in Vatankhah et al. [2020b]. Specifically, we use superscript k to indicate a variable at iteration k, and replace α by α^(k) and m − m_apr by m − m^(k−1), initialized with W_L^(1) = I and m^(0) = m_apr, respectively. Then y^(k) is found as the solution of the normal equations (8), and m^(k) is the restriction of (W^(k))^{−1} y^(k) + m^(k−1) to the bound constraints.
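The resulting outer iteration is short; the following sketch assumes helper functions forwardG (applying G), depth_weights (the constant W_z diagonal), and solve_projected (either Algorithm 1 or 2, returning α and y for the system with G̃), together with predefined d_obs, sigma, bounds and IRLS parameters. All of these names are placeholders rather than the authors' code.

% Sketch of the IRLS outer loop for the bound-constrained focusing inversion.
m_len = numel(d_obs);
wz    = depth_weights(z_edges, n_r, beta);        % constant diagonal of W_z (W_h = I)
wL    = ones(n, 1);                               % W_L^(1) = I
m_k   = m_apr;                                    % m^(0)
for k = 1:K_max
    w       = wz .* wL;                           % diagonal of W = W_h W_z W_L
    r_tilde = (d_obs - forwardG(m_k)) ./ sigma;   % W_d (d_obs - G m^(k-1))
    [alpha, y] = solve_projected(r_tilde, w, sigma, t, t_p);  % GKB or RSVD solve of (8)
    m_new   = m_k + y ./ w;                       % W^{-1} y + m^(k-1)
    m_new   = min(max(m_new, m_min), m_max);      % project onto the bound constraints
    chi2    = norm((d_obs - forwardG(m_new)) ./ sigma)^2;
    if chi2 <= m_len + sqrt(2*m_len)              % chi^2 convergence test
        m_k = m_new;  break;
    end
    wL  = ((m_new - m_k).^2 + epsilon^2).^((lambda - 2)/4);  % update W_L diagonal
    m_k = m_new;
end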
This use of the IRLS algorithm for the incorporation of the stabilization term Φ_S contrasts with the implementation discussed in [Zhang and Wong, 2015] for the inversion of potential field gravity data. In their presentation, they considered the solution of the general smoothing Tikhonov formulation described by (8) for a general fixed smoothing operator D replacing W_L. For the solver, they used a re-weighted regularized conjugate gradient solver to iteratively improve m^(k) from m_apr. They also included a penalty function to impose positivity in m^(k), depth weighting to prevent the accumulation of the solution at the surface, and adjustment of α with iteration k to encourage decrease in the data fit term. Moreover, they showed that it is possible to pick approximations D which also exhibit BTTB structure for each depth layer, so that Dx can also be implemented by layer using the 2DFFT. Although we do not consider the unifying stabilization framework with a general operator D here, as described in Vatankhah et al. [2020b], it is a topic for future study and would provide a further extension of the work of Zhang and Wong [2015] to more general stabilizers. In the earlier work on the use of the BTTB structure arising in potential field inversion, Bruun and Nielsen [2007] investigated the use of a truncated SVD and the conjugate gradient least squares method for the minimization of the data fit term without regularization. They also considered the direct solution of the constant Tikhonov function with W_L = I and a fixed regularization parameter α, for which the solution uses the filtered SVD for small-scale problems. Here, not only do we use the unifying stabilization framework, but we also estimate α at each iteration of the IRLS algorithm. The IRLS algorithm is implemented with two different solvers that yield effective approximations of a truncated SVD. One is based on the randomized singular value decomposition (RSVD), and the second uses the Golub Kahan Bidiagonalization (GKB).
2.4. Algorithmic Details. The IRLS algorithm relies on the use of an appropriate solver for finding y^(k) as the solution of the normal equations (8) for each update k, and a method for estimating the regularization parameter α^(k). While any suitable computational scheme can be used to update m^(k), the determination of α^(k) automatically can be challenging. But if the solution technique generates the SVD for G̃^(k), or an approximation to the SVD, such as by use of the RSVD or GKB factorization for G̃^(k), then there are many efficient techniques that can be used, such as the unbiased predictive risk estimator (UPRE) or generalized cross validation (GCV). The obtained estimate for α^(k) depends on the estimator used and there is extensive literature on the subject, e.g. Hansen [2010]. Thus, here, consistent with earlier studies on the use of the GKB and RSVD for stabilized inversion, we use the UPRE, denoted by U(α), for all iterations k > 1, and refer to Vatankhah et al. [2018, 2020a] for the details on the UPRE. The GKB and RSVD algorithms play, however, a larger role in the discussion and thus for clarity are given here as Algorithms 1 and 2, respectively.
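Given an (approximate) SVD of G̃, the UPRE choice of α reduces to a one-dimensional minimization. The sketch below illustrates the standard UPRE function for Tikhonov filter factors σ_i²/(σ_i² + α²), minimized by a simple search on a logarithmic grid; it shows the general approach only and is not claimed to reproduce the authors' exact implementation.

function alpha = upre_alpha(s, coef, m)
% Sketch of the UPRE parameter choice given estimated singular values s(1:t)
% of G_tilde (sorted decreasing, positive) and projected data coefficients
% coef(i) = u_i' * r_tilde.
alphas = logspace(log10(s(end)), log10(s(1)), 200);
U = zeros(size(alphas));
for j = 1:numel(alphas)
    filt = s.^2 ./ (s.^2 + alphas(j)^2);          % Tikhonov filter factors
    U(j) = sum(((1 - filt) .* coef).^2) + 2*sum(filt) - m;
end
[~, jmin] = min(U);
alpha = alphas(jmin);
end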
For the use of the GKB we note that Algorithm 1 uses the factorization G̃ A_{t_p} = H_{t_p+1} B_{t_p}, where A_{t_p} ∈ R^{n×t_p} and H_{t_p+1} ∈ R^{m×(t_p+1)}. Steps 6 and 11 of Algorithm 1 apply the modified Gram-Schmidt re-orthogonalization to the columns of A_{t_p} and H_{t_p+1}, as is required to avoid the loss of column orthogonality. This factorization is then used in Step 15 to obtain the rank t_p approximate SVD given by G̃ ≈ (H_{t_p+1} U_{t_p}) Σ_{t_p} (A_{t_p} V_{t_p})^T. The quality of this approximation depends on the conditioning of G̃, [Paige and Saunders, 1982]. In particular, the projected system of the GKB algorithm inherits the ill-conditioning of the original system, rather than just the dominant terms of the full SVD expansion. Thus, the approximate singular values include dominant terms that are good approximations to the dominant singular values of the original system, as well as very small singular values that approximate the tail of the singular spectrum of the original system. The accuracy of the dominant terms increases quickly with increasing t_p, Paige and Saunders [1982]. Therefore, to effectively regularize the dominant spectral terms from the rank t_p approximation, in Step 16 we use the truncated UPRE that was discussed and introduced in Vatankhah et al. [2017]. Specifically, a suitable choice for α^(k) is found using the truncated SVD of B_{t_p} with t terms. Then, in Step 17, y^(k) is found using all terms in the expansion of B_{t_p}. The matrix Γ(α, Σ) in Step 17 is the diagonal matrix with entries σ_i/(σ_i² + α²). In our simulations we use t_p = floor(1.05 t), corresponding to a 5% increase in the size of the space obtained. This contrasts with using just t terms and will include terms from the tail of the spectrum. Note, furthermore, that the top t terms from the projected space of size t_p > t will be more accurate estimates of the true dominant t terms than if obtained with t_p = t. Effectively, by using a 5% increase of t in the calculation of t_p, we assume that the first t terms from the t_p approximation provide good approximations of the dominant t spectral components of the original matrix G̃. We reiterate that the presented algorithm depends on parameters t_p and t. At Step 16 in Algorithm 1, α^(k) is found using the projected space of size t, but the update for y in Step 17 uses the oversampled projected space of size t_p. The results presented for the synthetic tests will demonstrate that this uniform choice for t_p is a suitable compromise between taking t_p too small, and contaminating the solutions by components from the less accurate approximations of the small components, and a reliable, but larger, choice for t_p that provides a good approximation of the dominant terms within reasonable computational cost.
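For orientation, a compact sketch of the bidiagonalization underlying Algorithm 1, with full MGS reorthogonalization of both bases, is given below; G̃ is passed as function handles so that the 2DFFT-based products can be used. The function name and interface are illustrative, not the authors' code.

function [A, H, B] = gkb_factorization(Gt, Gt_T, r, n, tp)
% Sketch of Golub-Kahan bidiagonalization with MGS reorthogonalization.
% Gt and Gt_T are handles applying G_tilde and its transpose; on return,
% G_tilde applied to A equals H*B with lower bidiagonal B of size (tp+1) x tp.
m = numel(r);
A = zeros(n, tp);  H = zeros(m, tp+1);  B = sparse(tp+1, tp);
beta = norm(r);  H(:,1) = r / beta;
a = Gt_T(H(:,1));  alpha = norm(a);  A(:,1) = a / alpha;  B(1,1) = alpha;
for i = 1:tp
    h = Gt(A(:,i)) - alpha * H(:,i);
    for j = 1:i                              % MGS reorthogonalization against H
        h = h - (H(:,j)' * h) * H(:,j);
    end
    beta = norm(h);  H(:,i+1) = h / beta;  B(i+1,i) = beta;
    if i < tp
        a = Gt_T(H(:,i+1)) - beta * A(:,i);
        for j = 1:i                          % MGS reorthogonalization against A
            a = a - (A(:,j)' * a) * A(:,j);
        end
        alpha = norm(a);  A(:,i+1) = a / alpha;  B(i+1,i+1) = alpha;
    end
end
end

The small SVD of B, [U, S, V] = svd(full(B), 0), then provides the approximate spectral factors used in Steps 15-17; breakdown handling is omitted from this sketch.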
The algorithm presented in Algorithm 2, denoted as RSVD, includes a single power iteration in Steps 3 to 6. Without the use of the power iteration in the RSVD, it is necessary to use larger projected systems in order to obtain a good approximation of the singular space of the original system, Halko et al. [2011]. Further, it was shown in [Vatankhah et al., 2020a] that, when using the RSVD for potential field inversion, it is better to apply a power iteration.
Algorithm 1: Use the GKB algorithm for the factorization G̃ A_{t_p} = H_{t_p+1} B_{t_p} and obtain the solution y of (8). Input: r̃ ∈ R^m, G̃ ∈ R^{m×n}, a target rank t and size of oversampled projected problem t_p, t < t_p ≪ m. Output: α and y. Step 1: Set a = zeros(n, 1), B = sparse(zeros(t_p + 1, t_p)), H = zeros(m, t_p + 1). Step 16: Apply UPRE to find α using U_{t_p}(:, 1:t) and Σ_{t_p}(1:t, 1:t). Step 17: Solution y = ||r̃||_2 A_{t_p} V_{t_p} Γ(α, Σ_{t_p}) U_{t_p}(1, :)^T.

Algorithm 2: Use RSVD with one power iteration to compute an approximate SVD of G̃ and obtain the solution y of (8). Input: r̃ ∈ R^m, G̃ ∈ R^{m×n}, a target matrix rank t and size of oversampled projected problem t_p, t < t_p ≪ m. Output: α and y. Step 1: Generate a Gaussian random matrix Ω ∈ R^{t_p×m}.

Skipping the power iteration steps leads to a less accurate approximation of the dominant singular space. Moreover, the gain from taking more than one power iteration is insignificant as compared to the increased computational time required. As with the GKB, the RSVD, with and without power iteration, depends on two parameters t and t_p, where here t is the target rank and t_p is the size of the oversampled system, t_p > t. For given t and t_p the algorithm uses an eigendecomposition with t_p terms to find the SVD approximation of G̃ with t_p terms. Hence, the total projected space is of size t_p, the size of the oversampled system, which is then restricted to size t for estimating the approximation of G̃. It is clear that the RSVD and GKB algorithms provide approximations for the spectral expansion of G̃, with the quality of this approximation dependent on both t and t_p, and hence the quality of the obtained solutions y^(k) at a given iteration is dependent on these choices for t and t_p. As noted, the GKB algorithm inherits the ill-conditioning of G̃, but the RSVD approach provides the dominant terms and is not impacted by the tail of the spectrum. Thus, we may not expect to use the same choices for the pairs t and t_p for these algorithms. Vatankhah et al. [2020a] investigated the choices for t and t_p for both gravity and magnetic kernels. When using the RSVD with the single power iteration they showed that suitable choices for t, when t_p = t + 10, are t ≈ m/s, where s ≈ 8 for the gravity problem and s ≈ 4 for magnetic data inversion. This contrasts with using s ≈ 6 and s ≈ 2 without power iteration, for gravity and magnetic data inversion, respectively. On the other hand, results presented in earlier work suggest using t_p ≈ m/s where s ≥ 20 for the inversion of gravity data using the GKB algorithm. This leads to the range of t used in the simulations to be discussed in Section 3. We use the choices s = 40, 25, 20, 8, 6, 4 and 3. This permits a viable comparison of cost and accuracy for GKB and RSVD. Observe that, for the large scale cases considered here, we chose to test with smallest s = 3 rather than s = 2. Indeed, using s = 2 generates a large overhead of testing for a wide range of parameter choices, and suggests that we would need relatively large subspaces defined by t = m/2, offering limited gain in speed and computational cost.
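Since only the first step of the listing for Algorithm 2 survives above, a sketch of a randomized SVD with one power iteration is given here for orientation, following the general scheme of Halko et al. [2011]; it need not match Algorithm 2 step for step, and the function name and interface are illustrative.

function [U, S, V] = rsvd_power1(Gt, Gt_T, m, t, tp)
% Sketch of a randomized SVD of G_tilde (m x n) with a single power iteration.
% Gt and Gt_T are handles applying G_tilde and its transpose to matrices;
% returns a rank-t approximation G_tilde ~ U*S*V'.
Omega = randn(tp, m);                 % Gaussian sketching matrix, as in Step 1
Y = Gt_T(Omega');                     % n x tp, samples the row space of G_tilde
[Q, ~] = qr(Y, 0);                    % orthonormal basis for the sampled row space
Z = Gt(Q);                            % m x tp, one power iteration step
[Q, ~] = qr(Z, 0);                    % orthonormal basis, m x tp
Bt = Gt_T(Q);                         % n x tp, transpose of B = Q'*G_tilde
[Ub, S, V] = svd(Bt', 'econ');        % small SVD of B (tp x n)
U = Q * Ub;                           % approximate left singular vectors
U = U(:, 1:t);  S = S(1:t, 1:t);  V = V(:, 1:t);   % restrict to target rank t
end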
2.5. Computational Costs.
Of interest is the computational cost of (i) the practical implementations of the GKB or RSVD algorithms for finding the parameter vector y (k) when operations with matrix G are implemented using the 2DFFT, and (ii) the associated impact of the choices of t p on the comparative costs of these algorithms with increasing m and n. In the estimates we focus on the dominant costs in terms of flops, recalling that the underlying cost of a dot product of two vectors of length m is assumed to be 2m. Further, the costs ignore any overheads of data movement and data access.
First, we address the evaluation of matrix products with G̃ or G̃^T required at Steps 4 and 9 of Algorithm 1 and Steps 2, 4, 6 and 8 of Algorithm 2. Matrix operations with G, rather than G̃, use the 2DFFT, as described in Appendix A for Gx, G^T y and y^T G, based on the discussion in [Vogel, 2002]. The cost of a single matrix-vector operation in each case is 4 n_x n_y n_z log_2(4 n_x n_y) = 4n log_2(4 n_r). This includes the operation of the 2DFFT on the reshaped components of x_r ∈ R^{n_x n_y} and the inverse 2DFFT of the component-wise product of x̂_r with Ĝ^(r), for r = 1 : n_z, but ignores the lower cost of forming the component-wise products and summations over vectors of size n_r. Thus, multiplication with a matrix of size n × t_p has dominant cost

4 n t_p log_2(4 n_r),   (9)

in place of 2 m n t_p. In the IRLS algorithm we need to use operations with G̃ = W_d G W^{−1} rather than G. But this is handled immediately by using suitable component-wise multiplications of the diagonal matrices and vectors. Specifically, G̃x = W_d(G(W^{−1}x)), and the 2DFFT is applied for the evaluation of Gw where w = W^{−1}x. Then, given z = Gw, a second component-wise multiplication, W_d z, is applied to complete the process. Within the algorithms, matrix-matrix operations are also required but, clearly, operations G̃^(k)X, (G̃^(k))^T Z and Z^T G̃^(k) are just loops over the relevant columns (or rows) of the matrices X and Z, with the appropriate weighting matrices applied before and after application of the 2DFFT.
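As an illustration of the per-layer computation, a sketch of the weighted product G̃x = W_d G W^{−1}x using a precomputed 2DFFT of the BCCB-embedded first column of each G^(r) is given below. The array shapes, the location of the extracted station block, and the embedding convention are assumptions here; the precise construction is given in Hogue et al. [2019] and Appendix A.

function y = apply_Gtilde(x, Ghat, wd, w, sx, sy, nx, ny, nz)
% Sketch of y = W_d * G * (W^{-1} x) computed layer by layer with the 2DFFT.
% Ghat{r} holds the precomputed 2DFFT of the BCCB embedding of G^(r).
xw = x ./ w;                          % W^{-1} x (diagonal weighting)
nr = nx * ny;
y  = zeros(sx * sy, 1);
for r = 1:nz
    xr  = reshape(xw((r-1)*nr+1 : r*nr), nx, ny);
    pad = zeros(size(Ghat{r}));
    pad(1:nx, 1:ny) = xr;             % zero-pad to the embedding size
    zr  = ifft2(Ghat{r} .* fft2(pad));% circular convolution via the 2DFFT
    zr  = real(zr(1:sx, 1:sy));       % block corresponding to the stations (convention dependent)
    y   = y + zr(:);                  % accumulate the contribution of layer r
end
y = wd .* y;                          % apply W_d (diagonal weighting)
end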
The details are provided in Appendix A. Now, to determine the impact of the choices for t (and t_p) we estimate the dominant costs for finding the solution of (8) using the GKB and RSVD algorithms. This is the major cost of the IRLS algorithm. The assumptions for the dominant costs of standard algorithms, given in Table 1, are quoted from Golub and Van Loan [2013]. But note that the cost for eig depends significantly on problem size and symmetry. Here t can be quite large, when m is large, but the matrix is symmetric; hence we use the estimate 9t³, [Golub and Van Loan, 2013, Algorithm 8.3.3]. To be complete we note that svds for the sparse bidiagonal matrix B is achieved at a cost which is at most quadratic in the variables. A comment on the cost of the qr operation is also required. Generally, in forming the QR factorization of a matrix we would maintain the information on the Householder reflectors that are used in the reduction of the matrix to upper triangular form, rather than accumulating the matrix Q. The cost is reduced significantly if Q is not accumulated. But, as we can see from Steps 2, 4, 6 and 8 of Algorithm 2, we will need to evaluate products of Q with G̃ or its transpose. To take advantage of the 2DFFT we then need to first evaluate a product of Q with a diagonal scaling matrix, which amounts to accumulation of the matrix Q. Experiments, that are not reported here, show that it is more efficient to accumulate Q as given in Algorithm 2, rather than to first evaluate the product of Q with a diagonal scaling matrix without pre-accumulation. Then, the cost for accumulating Q is 2t²(m − t/3) for a matrix of size m × t, [Golub and Van Loan, 2013, page 255], yielding a total cost for the qr step of 4t²(m − t/3), as also reported in Xiang and Zou [2013].
Using the results in Table 1 we can estimate the dominant costs of Algorithms 1 and 2. In the estimates we do not distinguish between costs based on t_p or t, noting t_p = floor(1.05 t) and t = m/s. We also ignore the distinction between m and n_r, where n_r > m for padded domains. Moreover, the cost of finding α^(k) and then evaluating y^(k) is of lower order than the dominant costs involved with finding the needed factorizations. Using LOT to indicate the lower order terms that are ignored, and assuming the calculation without the use of the 2DFFT, we obtain the dominant cost estimates (13)-(14) for the GKB and RSVD algorithms.

Table 1. Computational costs for standard operations. Matrix G ∈ R^{m×n}, X ∈ R^{n×t}, Y ∈ R^{m×t}, sparse bidiagonal B ∈ R^{(t+1)×t}, A^T A ∈ R^{t×t}, and Z ∈ R^{m×t}. The modified Gram-Schmidt for C ∈ R^{m×i} is repeated for i = 1 : t, yielding the given estimate. These costs use the basic unit that the inner product x^T x for x of length n requires 2n operations.
Both pairs of equations suggest, just in terms of flop count, that Cost_RSVD > 2 Cost_GKB. Thus, we would hope to use a smaller t for the RSVD than for the GKB in order to obtain a comparable cost. This expectation contradicts earlier experiments contrasting these algorithms for the inversion of gravity data, using the RSVD without power iteration, as discussed in Vatankhah et al. [2018]. Alternatively, it would be desired that the RSVD should converge in the IRLS far faster than the GKB. Further, theoretically, the gain of using the 2DFFT is that the major terms are 8t²n and 2t²n for the RSVD and GKB, respectively, as compared to 8nmt > 8t²n and 4mnt > 2t²n, noting t < m. Specifically, even though the costs should go up with order nt² eventually with the 2DFFT, this is still far slower than the increase mnt that arises without taking advantage of the structure. Now, as discussed in Xiang and Zou [2013], measuring the computational cost just in terms of the flop count can be misleading. It was noted by Xiang and Zou [2013] that a distinction between the GKB and RSVD algorithms, where the latter is without the power iteration, is that the operations required in the GKB involve many BLAS2 (matrix-vector) operations, requiring repeated access to the matrix or its transpose, as compared to BLAS3 (matrix-matrix) operations for RSVD implementations. On the other hand, within the qr algorithm, the Householder operations also involve BLAS2 operations. Hence, when using Matlab, the major distinction should be between the use of functions that are builtin and compiled, or are not compiled. In particular, the functions qr and eig are builtin and hence optimized, but all other operations that are used in the two algorithms do not use any compiled code. Specifically, there is no compiled option for the MGS used in Steps 6 and 11 of Algorithm 1, while almost all operations in Algorithm 2 use builtin functions or BLAS3 operations for matrix products that do not involve the matrices with BTTB structure. Thus, in the evaluation of the two algorithms in the Matlab environment, we will consider computational costs directly, rather than just the estimates given by (13)-(14). On the other hand, the estimates of the flop counts should be relevant for higher-level programming environments, and are thus relevant more broadly. We also note that in all implementations none of the results quoted will use multiple cores or GPUs.
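The distinction between compiled builtins and interpreted loops is easy to observe directly. A small timing experiment of this kind, illustrative only and with hardware-dependent results, is:

% Illustrative timing of a compiled builtin (qr) against an interpreted MGS
% loop with a comparable flop count.
m = 20000;  t = 400;
C = randn(m, t);
tic;  [Q1, R1] = qr(C, 0);  t_qr = toc;          % builtin, compiled
tic;
Q2 = zeros(m, t);  R2 = zeros(t, t);
for i = 1:t                                      % modified Gram-Schmidt, interpreted
    v = C(:, i);
    for j = 1:i-1
        R2(j, i) = Q2(:, j)' * v;
        v = v - R2(j, i) * Q2(:, j);
    end
    R2(i, i) = norm(v);
    Q2(:, i) = v / R2(i, i);
end
t_mgs = toc;
fprintf('qr: %.2f s   MGS loop: %.2f s\n', t_qr, t_mgs);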
3. Numerical Experiments
We now validate the fast and efficient methods for inversion of potential field data using the BTTB structure of the gravity and magnetic kernel matrices.
3.1. Implementation parameter choices. The diagonal depth weighting matrix W_z uses β = 0.8 for the gravity problem, and β = 1.4 for the magnetic problem, consistent with recommendations in Li and Oldenburg [1998] and Pilkington [1997], respectively. Diagonal W_d is determined by the noise in the data, and the hard constraint matrix W_h is taken to be the identity. Moreover, we use m_apr = 0, indicating no imposition of prior information on the parameters. The regularization parameter α^(k) is found using the UPRE method for k > 1, but is initialized with an appropriately large α^(1) given by (15), where σ_i are the estimates of the ordered singular values for W_d G W^{−1} given by the use of the RSVD or GKB algorithm, and the mean value is taken only over σ_i > 0. This follows the practice implemented in Vatankhah et al. [2018] for studies using the RSVD and GKB, and is based on the recommendation to use a large value for α^(1), [Farquharson and Oldenburg, 2004]. In order to contrast the performance and computational cost of the RSVD and GKB algorithms with increasing problem size m, different sizes t of the projected space for the solution are obtained using t = floor(m/s). Generally, the GKB is successful with larger values for s (smaller t) as compared to that needed for the RSVD algorithm. Hence, following recommendations for both algorithms, as discussed in Section 2.4, we use the range of s from 40 to 3, given by s = 40, 25, 20, 8, 6, 4 and 3, corresponding to increasing t, but with t also limited to at most 5000. For all simulations, the IRLS algorithm is iterated to convergence as determined by the χ² test for the predicted data,

Φ_d(m^(k)) ≤ m + √(2m).   (17)

If this is not attained for k ≤ K_max, the iteration is terminated. Noisy data are generated for observed data d_obs = d_exact + η using

η_i = (τ_1 |(d_exact)_i| + τ_2 ||d_exact||_∞) e_i,   (18)

where e_i is drawn from a Gaussian normal distribution with mean 0 and variance 1. The pairs (τ_1, τ_2) are chosen to provide a signal to noise ratio (SNR), as calculated by (19). For each inversion we report (i) the relative error RE of the reconstructed parameter volume, (ii) the number of iterations to convergence K, which is limited to 25 in all cases, (iii) the scaled χ² estimate given by (17) at the final iteration, and (iv) the time to convergence measured in seconds, or to iteration 25 when convergence is not achieved.
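For concreteness, the noise generation and the convergence test can be coded directly. The sketch below assumes the noise model and χ² criterion as reconstructed in (17)-(18) above; the SNR formula shown is one common convention and is an assumption, since (19) is not reproduced here.

% Sketch: generate noisy data and evaluate the chi^2 convergence test.
m_len  = numel(d_exact);
e      = randn(m_len, 1);                              % N(0,1) samples
sigma  = tau1 .* abs(d_exact) + tau2 * norm(d_exact, inf);
d_obs  = d_exact + sigma .* e;                         % d_obs = d_exact + eta
SNR    = 20 * log10(norm(d_exact) / norm(d_obs - d_exact));
% chi^2 test for a candidate model with predicted data d_pred:
converged = norm((d_obs - d_pred) ./ sigma)^2 <= m_len + sqrt(2*m_len);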
3.2. Synthetic data. For the validation of the algorithms, we pick a volume structure with a number of boxes of different dimensions, and a six-layer dipping dike. The same structure is used for the generation of the gravity and magnetic potential field data. For gravity data the densities of all aspects of the structure are set to 1, with the homogeneous background set to 0. For the magnetic data, the dipping dike, one extended well and one very small well have susceptibilities 0.06. The three other structures have susceptibilities set to 0.04. The distinction between these structures with different susceptibilities is illustrated in the iso-surface plot in Figure 2(a) and the cross-section in Figure 2(b). The domain volume is discretized in x, y and z into the number of blocks indicated by triples (s_x, s_y, n_z), with increasing resolution for increasing values of these triples. They are generated by taking (s_x, s_y, n_z) = (25, 15, 2), and then scaling each dimension by a scale factor ℓ ≥ 4 for the test cases; correspondingly, s_x s_y = 375 is scaled by ℓ² with increasing ℓ, yielding a minimum problem size with m = 6000 and n = 48000. The grid sizes are thus given by the triples (∆_x, ∆_y, ∆_z) = (2000/s_x, 1200/s_y, 400/n_z). The problem sizes considered for each simulation are detailed in Table 2. For padding we compare the case with pad = 0% and pad = 5% padding across the x and y dimensions. These are rounded to the nearest integer, yielding p_{xL} = p_{xR} = round(pad s_x), and n_x = s_x + 2 round(pad s_x); n_y is calculated in the same way, yielding n = (s_x + 2 round(pad s_x))(s_y + 2 round(pad s_y))n_z. Certainly, the choice to use pad = 5% is quite large, but it is chosen to demonstrate that the solutions obtained using the 2DFFT are robust to boundary conditions, and thus not impacted by the restriction due to lack of padding or very small padding. For these structures and resolutions, noisy data are generated as given in (18) to yield an SNR of approximately 24 across all scales as calculated using (19). This results in different choices of τ_1 and τ_2 for each problem size, dependent on the gravity or magnetic data case, denoted by (τ^g_1, τ^g_2) and (τ^m_1, τ^m_2), respectively. In all cases we use τ^g_1 = τ^m_1 = 0.02 and adjust τ_2. The choices of τ^g_2 and τ^m_2 for increasing problem sizes are detailed in Table 1. As an example we illustrate the true and noisy data for gravity and magnetic data, when ℓ = 12, in Figure 3.

Table 2. Dimensions of the volume used in the experiments with scaling of the small problem size (25, 15, 2) by scale factor ℓ in each dimension. m and n are the dimensions of the measurement vector and the volume domain, respectively, G ∈ R^{m×n}. Here m = s_x s_y = 375 ℓ² and n = m n_z, where n_x = s_x and n_y = s_y without padding. We use n_pad = n_x n_y n_z to denote the volume dimension n with 5% padding, using n_x = s_x + 2 round(pad s_x) and n_y = s_y + 2 round(pad s_y) for padding obtained using a percentage, pad, on each side of the domain, so that p_{xL} = p_{xR} = round(pad s_x), and similarly for s_y.
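The entries of Table 2 follow directly from the scale factor ℓ and the padding percentage; a short sketch of the computation (the range of ℓ shown is inferred from the problem sizes mentioned in the text) is:

% Problem dimensions as functions of the scale factor ell and padding fraction.
pad = 0.05;                                    % 5% padding on each side
for ell = 4:12
    sx = 25*ell;  sy = 15*ell;  nz = 2*ell;
    m  = sx*sy;                                % number of surface stations
    n  = sx*sy*nz;                             % unpadded volume dimension
    nx = sx + 2*round(pad*sx);  ny = sy + 2*round(pad*sy);
    n_pad = nx*ny*nz;                          % padded volume dimension
    fprintf('ell = %2d: m = %6d, n = %8d, n_pad = %8d\n', ell, m, n, n_pad);
end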
3.3. Numerical Results.
The validation and analysis of the algorithms for the inversion of the potential field data is presented in terms of (i) the cost per iteration of the algorithm (Section 3.3.1), (ii) the total cost to convergence of the algorithm (Section 3.3.2), and (iii) the quality of the obtained solutions (Section 3.3.3). Supporting quantitative data that summarize the illustrated results are presented as tables in Appendix B.

3.3.1. Comparative cost of RSVD and GKB algorithms per IRLS iteration. We investigate the computational cost, as measured in seconds, for one iteration of the inversion algorithm using both the direct multiplications with matrix G, respectively G^T, and the circulant embedding, for the resolutions up to ℓ = 6 that are indicated in Table 2, using both the RSVD and GKB algorithms, and for both gravity and magnetic data. For fair comparison, all the timing results that are reported use Matlab release 2019b implemented on the same iMac 4.2GHz Quad-Core Intel Core i7 with 32GB RAM. In this environment, the size of the matrix G is too large for effective memory usage when ℓ > 6. The details of the timing results for one step of the IRLS algorithm are illustrated in Figures 4-7, with the specific values for the magnetic data case given in Table 4. Figure 4 provides an overview of the computational cost with increasing projection size t, for a given m, when the algorithm is implemented using G directly, or using the 2DFFT. These costs exclude the cost of generating G. In these plots, we use the open symbols for calculations using G and solid symbols when using the 2DFFT. The same symbols are used for each choice of t and ℓ. An initial observation, confirming expectation, is that the timings for equivalent problems and methods are almost independent of whether the potential field data are gravity or magnetic, comparing Figures 4(a)-4(b) with Figures 4(c) and 4(d). The lack of entries for triple [175, 105, 14] indicates that the matrix G is too large for the operations when ℓ = 7. With increasing ℓ (increasing values of the triples along the x−axis), it can also be observed that the open symbols are more spread out vertically, confirming that the algorithms using G directly are more expensive for problems at these resolutions.

Figure 4. Running time in seconds for one iteration of the inversion algorithm for the inversion of magnetic and gravity data, without padding the volume domain. Problems are of increasing size, as indicated by the x−axis for triples [n_x, n_y, n_z], and increasing projection size t (y−axis using log scale) determined by fractions of m = s_x s_y. Figures 4(a) and 4(c) give the running time for the GKB algorithm using t_p = floor(1.05t) (an oversampling percentage of 5%), for the magnetic and gravity problems respectively. Figures 4(b) and 4(d) give the equivalent running times using the RSVD algorithm with one power iteration. In these plots the solid symbols represent the timing for one iteration of the algorithm using the 2DFFT and the open symbols represent the timing for the same simulation using G directly. Matrix G for problem size ℓ = 7, which corresponds to triple [175, 105, 14], requires too much memory for implementation in the specific computing environment.
In Figure 5 we plot the relative computational costs for one iteration of the IRLS algorithm using the matrix G as compared to the algorithm using the 2DFFT, as indicated by the ratio Cost_G/Cost_2DFFT. Here, values for the relative cost that are less than 1, below the horizontal line at y = 1, indicate that it is more efficient to use G directly. Values that are greater than 1 indicate that it is more efficient to use the 2DFFT. Open symbols indicate the GKB algorithm and solid symbols the RSVD algorithm. In each case the plots for a fixed ℓ are for increasing projection size t as given by m/s, for the selections of t as used in Figure 4.
Figure 5. Cost_G/Cost_2DFFT, for the data presented in Figure 4. Along the x−axis we give the size t used for the projected problem in terms of the ratio m/s. The lines with solid blue symbols are for results using the RSVD algorithm, and the open black symbols are for the GKB algorithm. Values for the relative cost that are less than 1, below the horizontal green line at y = 1, indicate that for the specific algorithm it is more efficient to use G directly. Values that are greater than 1 indicate that it is more efficient to use the 2DFFT for the given algorithm and problem size.

It is apparent that it is not beneficial to use the 2DFFT for the smaller scale implementation of the RSVD algorithm, when ℓ = 4 or 5. But the situation is completely reversed using the GKB algorithm for all choices of ℓ, and the RSVD algorithm for ℓ ≥ 6. Thus, the relative gain in reduced computational cost by using the 2DFFT depends on the algorithm used within the IRLS inversion algorithm. The decrease in efficiency for a given size problem, fixed ℓ but increasing size t (on the x−axis), is explained by the theoretical discussion relating to equations (13)-(14). As t increases, the impact of the efficient matrix multiplication using the 2DFFT is reduced. Again the gravity and magnetic data results are comparable. Figure 4 provides no information on the relative costs of the GKB and RSVD algorithms with increasing ℓ, independent of the use of the 2DFFT. Figure 6 shows the relative computational costs, Cost_GKB/Cost_RSVD. Note that Figure 6(a) also includes results for larger problems. These plots demonstrate that the relative costs for a single iteration are not constant across all t, with the GKB generally cheaper for smaller t, and the RSVD cheaper for larger t. These results confirm the analysis of the computational cost in terms of flops provided in (13)-(14) for small t. The relative computational costs increase from roughly 0.6 to 2.5, increasing with both ℓ and t. Still, this improved relative performance of RSVD with increasing ℓ and t appears to violate the flop count analysis in (13)-(14). As discussed in Section 2.5, this is a feature of the implementation. While the RSVD is implemented using the Matlab builtin function qr, which uses compiled code for faster implementation, the GKB does not benefit from compiled code when performing the MGS reorthogonalization of the basis matrices A_{t_p} and H_{t_p+1}. Once again results are comparable for inversion of both gravity and magnetic data sets.

Figure 6. The relative computational cost for one iteration of the IRLS algorithm for inversion using the GKB as compared to the RSVD algorithm (Cost_GKB/Cost_RSVD), for given ℓ and projected size t. In each case the plots for a fixed ℓ are for increasing projection size t as given by m/s, as in Figure 5. The horizontal line at y = 1 represents the data for which the costs are the same, independent of whether using the RSVD or GKB algorithms. The GKB is more efficient when t is maintained small, s = 40, 25 and 20. The gain in using the GKB decreases, however, as ℓ increases. For small ℓ and t, the estimates confirm the computational cost estimates in (13)-(14), but for larger projection sizes t, the RSVD is more efficient. In Figure 6(a) the relative costs are also included for ℓ = 8 and ℓ = 9, where t ≤ 5000.

Figure 7 summarizes the magnetic data timing results from Table 4 for domains which are padded with 5% padding in the x and y directions. Data illustrated in Figures 7(a)-7(b) are equivalent to the results presented in Figures 4(a)-4(b), but with padded volume domains. Again these results show the open symbols are more spread out vertically, for increasing ℓ, confirming that the algorithms using G directly are more expensive for problems at these resolutions, with greater impact when using the GKB algorithm for small ℓ. This is further confirmed in Figure 7(c), equivalent to Figure 5(a), showing that the computational cost of performing one step of the IRLS algorithm using matrix G directly is always greater than that using the 2DFFT. This is more emphasized for the GKB algorithm. The relative costs shown in Figure 7(d), equivalent to Figure 6(a), again show that the GKB algorithm is cheaper for small t when ℓ is small. But as the problem size increases, and the projected problem size also increases, it is more efficient to use the RSVD algorithm, consistent with the observations for the unpadded domains.

Figure 7. As for Figures 4(a) and 4(b) but with padding, pad = 5%, added to the volume domain. Problems are of increasing size, as indicated by the x−axis for triples [n_x, n_y, n_z], and increasing projection size t (y−axis using log scale) determined by fractions of m = s_x s_y. In these plots the solid symbols represent the timing for one iteration of the algorithm using the 2DFFT and the open symbols represent the timing for the same simulation without using the 2DFFT for the kernel operations. Figure 7(c) gives the relative costs for these results, as also provided in Figure 5(a) for the case without padding, and Figure 7(d) gives the relative costs of the two algorithms with the 2DFFT, as in Figure 6(a) without padding.

3.3.2. Comparative cost of RSVD and GKB algorithms to convergence. In Table 5 we report the timing results for the inversion of gravity and magnetic data for problems of increasing size and projected spaces of sizes t_p. The relative total computational costs to convergence, Cost_GKB/Cost_RSVD (the last two columns in Table 5), are illustrated via Figures 8(a)-8(b), for the magnetic and gravity results, respectively. There is a distinct difference between the two problems. The results in Figure 8(a) for the magnetic problem demonstrate a strong preference for the use of the GKB algorithm, except for large t, t = floor(m/3). In contrast, the RSVD algorithm is always most efficient for the solution of the gravity problem, which is consistent with the conclusion presented in Vatankhah et al. [2018] for the RSVD without power iteration. Moreover, the data presented in Table 7 for the gravity problem indicate that the RSVD algorithm generally converges more quickly and yields a smaller relative error. Furthermore, if based entirely on the calculated RE, the results suggest that good results can be achieved for relatively small t as compared to m; certainly s ≤ 8 leads to generally acceptable error estimates, and in contrast to the case without the power iteration, here with power iteration, the errors using the GKB are generally larger for comparable choices of t.
For the magnetic data, the results in Table 6 demonstrate that the RSVD algorithm generally requires more iterations than the GKB algorithm, and that the obtained relative errors are then comparable, or slightly larger. This is reflected in Figure 8(a), which shows that the GKB algorithm is most efficient. Referring back to Table 6, it is the case that the RSVD algorithm often reaches the maximum number of iterations, K = 25, without convergence, when the GKB has converged in less than half the number of iterations, when t is small relative to m, t = floor(m/s) with s = 40, 25 and 20. This verifies that the RSVD needs to take a larger projected subspace t in order to capture the required dominant spectral space when solving the magnetic problem, as compared to the gravity problem, and confirms the conclusions presented in Vatankhah et al. [2020a]. On the other hand, the use of the GKB as compared to the RSVD was not discussed in Vatankhah et al. [2020a]. Our results now lead to a new conclusion concerning these two algorithms for solving the magnetic data inversion problem.
In particular, the results suggest that the GKB algorithm be adopted for inversion of magnetic data. Further, the results suggest that the relative error obtained using the GKB generally decreases with increasing t, and that it is necessary to use subspaces with t at least as large as floor(m/8). It remains to verify these assertions by illustrating the results of the inversions and the predicted anomalies for a selection of cases.
3.3.3. Illustrating Solutions with Increasing ℓ and t. We first compare a set of solutions for which the timing results were compared in Section 3.3.2. Figure 9 illustrates the predicted anomalies and reconstructed volumes for gravity data inverted by both algorithms, with resolutions given by ℓ = 4 and ℓ = 7, with t = floor(m/8) and t = floor(m/4). For the cases using ℓ = 4 it can be seen that the predicted anomalies are generally less accurate than with ℓ = 7. Moreover, there is little deterioration in the anomaly predictions when using t = floor(m/8) instead of t = floor(m/4), except that the results with the GKB show more residual noise. On the other hand, it is more apparent from consideration of the reconstructed volumes shown in Figures 9(i)-9(p) that the RSVD algorithm does yield better results in all cases, and specifically the high resolution ℓ = 7 results are very good, even using t = floor(m/8). When including the consideration of the computational cost, it is clear that if using ℓ = 7 it is sufficient to use t = floor(m/8) and the RSVD algorithm, but that a reasonable result may even be obtained using the same algorithm but with ℓ = 4, requiring less than 5 minutes of compute time.
The results for the inversion of the magnetic data are illustrated in Figure 10 for the same cases as for the inversion of gravity data illustrated in Figure 9. Now, in contrast to the gravity results, the predicted anomalies are in good agreement with the true data for the results obtained using the GKB algorithm, with apparently greater accuracy for the lower resolution solutions, ℓ = 4, for both choices of t. On the other hand, the predicted magnetic anomalies are less satisfactory for small ℓ and t, but acceptable for large ℓ. Then, considering the reconstructed volumes, there is a lack of resolution for ℓ = 4, evidenced by the loss of the small well near the surface that is recovered when ℓ = 7 for both cases of t, when using the GKB. The other structures in the domain are also resolved better with ℓ = 7, but there is little gain from using t = floor(m/4) over t = floor(m/8). Then, considering the reconstructions obtained using the RSVD algorithm, while it is clear that the result with ℓ = 4 and small t is unacceptable, the anomaly and reconstructed volume with ℓ = 4 and t = floor(m/4) are acceptable and achieved in reasonable time, approximately 11 minutes, far faster than using ℓ = 7 with the GKB. Thus, this may contradict the conclusion that one should use the GKB algorithm within the magnetic data inversion algorithm. If there is a large amount of data and a high resolution volume is required, then it is important to use the GKB in order to limit computational cost. Otherwise, it can be sufficient to use the RSVD, provided t ≥ floor(m/8), for a coarser resolution solution obtained at reasonable computational cost.
We now investigate the quality of solutions obtained for magnetic data using higher resolution data sets, and both GKB and RSVD algorithms, to assess which algorithm is best suited for such larger problems. In these cases we pick t = floor(m/20), to assess quality with a necessarily restricted subspace size as compared to the size of the given data set. Results using ℓ = 11 with t = 2268 and ℓ = 12 with t = 2700, corresponding to m = 45375 and n = 998250, and m = 54000 and n = 1296000, respectively, are illustrated in Figure 11. For these large scale problems, the memory requirement becomes too large for implementation on the environment with just 32GB RAM; thus, these timings are for an implementation on a machine with larger memory.
Comparing the results for ℓ = 11 and ℓ = 12, it can be seen that the predicted anomalies are always better for the larger problem, and in particular the result shown in Figure 11(e) shows greater artifacts when using the RSVD.
The obtained reconstruction for this case, shown in Figure 11(g), is, however, acceptable. Overall, trading off between computational cost and solution quality, there seems little gain in using ℓ = 12, and the results with ℓ = 11 obtained with the GKB algorithm in 227 minutes (nearly 4 hours) are suitable. These results also show that it is sufficient to use a relatively smaller projected space, t = floor(m/20), when m is larger. Indeed, notice that even in these cases the largest matrix required by both algorithms is of size n × t_p and requires 17.7GB and 27.4GB, for ℓ = 11 and ℓ = 12, respectively. Effectively, it is this large memory requirement that limits the given implementation using either GKB or RSVD for larger size problems. Numerical experiments for the inversion of gravity data, similar to the testing for the magnetic data, demonstrate that indeed the RSVD algorithm with power iteration is to be preferred for the inversion of gravity data, yielding acceptable solutions at lower cost than when using the GKB algorithm. Representative results are detailed in Figure 12 for the same parameter settings as given in Figure 11 for the magnetic problem.
3.4. Real Data. For validation of the simulated results on a practical data set, we apply the GKB algorithm for the inversion of a magnetic field anomaly that was collected over a portion of the Wuskwatim Lake region in Manitoba, Canada. This data set was discussed in Pilkington [2009] and also used in Vatankhah et al. [2020a] for inversion using the RSVD algorithm with a single power iteration. Further details of the geological relevance of this data set are given in these references. Moreover, its use makes for direct comparison with these existing results. Here we use a grid of 62 × 62 = 3844 measurements at 100m intervals in the East-North direction, with padding of 5 cells yielding a horizontal cross section of size 72 × 72 in the East-North directions. The depth dimension is discretized with ∆z ranging from 100m, yielding regular cubes, down to ∆z = 8m for rectangular prisms with a smaller edge length in the depth dimension, for a total depth of 2000m, providing increasing values of n from 103680 to 1238976 as detailed in Table 3. The given magnetic anomaly is illustrated in Figure 13(a).
In each inversion the GKB algorithm is run with t = 480, corresponding to t = floor(m/8) where m = 3844, and an oversampled projected space of size 504, and a noise distribution based on (18) is employed using τ_1 = 0.02 and τ_2 = 0.018. All inversions converge to the tolerance χ²/(m + √(2m)) < 1 in no more than 19 iterations for all problem sizes, as given in Table 3. The computational cost measured in seconds is also given in Table 3 and demonstrates that it is feasible to invert for large parameter volumes, in times ranging from just under 5 minutes for the coarsest resolution, to just over 73 minutes for the volume with the highest resolution.
Here the computations are performed on a MacBook Pro laptop with a 2.5 GHz Dual-Core Intel Core i7 chip and 16GB memory. In Figure 14(a) we show that the UPRE function has a well-defined minimum at the final iteration for all resolutions, and in Figure 14(b) that the convergence of the scaled χ² value is largely independent of n. The final regularization parameter α^(K) decreases with increasing n, while the initial α found using (15) increases with n, as reported in Table 3.

Figure 13. The given magnetic anomaly in Figure 13(a) and the obtained predicted anomalies for the inversion using the parameters for the first and last lines of data in Table 3, in Figures 13(b)-13(c), respectively.
Results of the inversion, for the coarsest and finest resolutions, are presented in Figures 13, 15 and 16, for anomalies, reconstructed volumes, and depth slices through the volume domain, respectively. First, from Figures 13(b)-13(c), as compared to Figure 16b in Vatankhah et al. [2020a], we see that the predicted anomalies provide better agreement with the measured anomaly, with respect to structure and the given values. Moreover, more structure is seen in the volumes presented in Figures 15(a)-15(b) as compared to Figure 19 in Vatankhah et al. [2020a], and the increased resolution provides greater detail in Figure 15(b) as compared to Figure 15(a). Here the volumes are presented for the depth from 0 to 1000m only, but it is seen in Figures 16(e) and 16(j), which are the slices at depth 1100m, that there is little structure evident at greater depth. Comparing the depth slices for increasing depth, we see that the use of the higher resolution leads to more structure at increased depth. Moreover, the results are consistent with those presented in Vatankhah et al. [2020a] for the use of the RSVD for a projected size t = 1100, as compared to t = 480 used here. It should also be noted that the RSVD algorithm with one power iteration does not converge within 50 steps under the same configurations for m, n and t.
Table 3. Inversion of magnetic data as illustrated in Figure 13 for m = 3844 on a grid of 62 × 62 stations, with ∆x = ∆y = 100m and padding of 5 cells in both x and y directions, yielding blocks of size n_r = 5184. The inversion uses the GKB algorithm with t = 480 (floor(m/8)) and t_p = 504. The noise in the algorithm uses (18) as given for the simulations with τ_1 = 0.02 and τ_2 = 0.018. These results are obtained using a MacBook Pro laptop with a 2.5 GHz Dual-Core Intel Core i7 chip and 16GB memory. The columns report n, n_z, ∆z, K, α^(1), α^(K), the scaled χ² estimate, and the time to convergence.

Figure 14. The plot of the regularization function U(α) for the UPRE algorithm, at the final iteration K for increasing values of n as indicated in Table 3, in Figure 14(a), and the progression of the scaled χ² estimate as a function of iteration k and for increasing n, in Figure 14(b).

Figure 15. Iso-surfaces of the reconstructed volumes: (a) using n = 103680; (b) using n = 1238976.

4. Conclusions and Future Work

Two algorithms, GKB and RSVD, for the focused inversion of potential field data with all operations for the sensitivity matrix G implemented using a fast 2DFFT algorithm have been developed and validated for the inversion of both gravity and magnetic data sets. The results show first that it is distinctly more efficient to use the 2DFFT for operations with matrix G rather than direct multiplication. This is independent of algorithm and data set, for all large scale implementations considered. Moreover, the implementation using the 2DFFT makes it feasible to solve these large scale problems on a standard desktop computer without any code modifications to handle multiple cores or GPUs; handling G directly is not possible due to memory constraints when m and n increase. While both algorithms are improved with this implementation, the results show that the impact on the GKB efficiency is greater than that on the RSVD efficiency. A theoretical analysis of the computational cost of each algorithm for a single iterative step demonstrates that the GKB should be faster, but this is not always realized in practice as the problem size increases, with commensurate increase in the size of the projected space. Then, the efficiency of the GKB deteriorates, and the advantage of using builtin routines from Matlab for the RSVD algorithm is crucial. When considering the computational cost to convergence for both algorithms, which also includes the cost of requiring projected spaces of reasonable size relative to m, the results confirm earlier published results that it is more efficient to use the RSVD, with t ≥ floor(m/8), for inversion of gravity data. Moreover, generally larger projected spaces are required when using the RSVD for the inversion of magnetic data. On the other hand, prior published work did not contrast the GKB with the RSVD for the inversion of magnetic data. Here, our results contribute a new conclusion to the literature, namely that the GKB is more efficient for these large-scale problems and can also use t ≈ floor(m/8), rather than the larger spaces required with the RSVD. Critically, which algorithm to use is determined by the spectral space for the underlying problem-specific sensitivity matrix G, as discussed in Vatankhah et al. [2020a]. Moreover, we can relax the restriction t ≈ floor(m/8); indeed satisfactory results are achieved using t ≈ m/20 for large problems, for the inversion of magnetic data.
It should be noted that equivalent conclusions can be made when the implementations use padding, except that generally fewer iterations to convergence are required. Furthermore, all the implementations use the automatic determination of the regularization parameter using the UPRE function. The suitability of the UPRE function was demonstrated in earlier references and is thus not reproduced here; results that are not reported here demonstrated that the earlier conclusions still hold for these large scale problems and algorithms.
Overall, it has been shown that the use of the BTTB structure inherent in the sensitivity matrices leads to fast algorithms that make it feasible to solve large-scale focusing inversion problems using standard GKB and RSVD algorithms in desktop environments, without modifications to handle either multiple cores or GPUs. It is clear that yet greater efficiency could be achieved with such modifications, which may then be architecture specific and thus less flexible. Moreover, these results suggest that the development of alternative algorithms that avoid the need to store matrices of size n × t is desirable; this is a topic for future study.
Appendix A. Implementation of operations with G using the 2DFFT

The element-wise complex multiplication in (21) is for a reshaped vector of size (s_x + n_x − 1)(s_y + n_y − 1) ≈ 4m, and each complex multiplication requires 6 flops. Furthermore, the inverse 2DFFT requires approximately the same number of operations as the forward 2DFFT. Hence Cost(G^(r) x^(r)) ≈ 4m log_2(4m) + 24m, and

Cost(Gx) ≈ 4m n_z log_2(4m) + 24m n_z + (m − 1)n_z ≈ 4n log_2(4m) + 25n + LOT,   (22)

where the first term is for the multiplication and the second for the summation over the n_z vectors of length m. It is then immediate that the dominant cost for obtaining GX, for X ∈ R^{n×t_p}, ignoring all but third order terms, is Cost(GX) ≈ 4 t_p n log_2(4m) + LOT.
The derivation of the computation, and the cost, for obtaining G^T y for y ∈ R^m follows similarly, noting that G^T y = [G^(1), G^(2), . . . , G^(n_z)]^T y requires the computation of (G^(r))^T y for each r, and that no summation is required as in (22). Hence Cost(G^T y) ≈ 4n log_2(4m) and Cost(G^T Y) ≈ 4 t_p n log_2(4m). Furthermore, we note that X^T G^T = (GX)^T and Y^T G = (G^T Y)^T. Thus, the computations and computational costs are immediately obtained from those of GX and G^T Y, respectively.
Appendix B. Supporting Numerical Results of Simulations
Supporting results illustrated as figures in Sections 3.3.1-3.3.3 are reported in a set of tables, with captions describing the details. Table 4 reports the timing for one iteration of the inversion algorithm using both GKB and RSVD algorithms for magnetic data inversion, comparing timings using matrix G directly and the 2DFFT. The time to convergence for the algorithms is given in Table 5 for both magnetic and gravity data sets for domains without padding. Tables 6-7 give the details of the number of iteration steps to convergence K and the resulting relative errors, RE, for the timing results of Table 5.
Modeling the spatial distribution of grazing intensity in Kazakhstan
With increasing affluence in many developing countries, the demand for livestock products is rising, and the increasing feed requirement contributes to pressure on land resources for food and energy production. However, there is currently a knowledge gap in our ability to assess the extent and intensity of the utilization of land by livestock, which is the single largest land use in the world. We developed a spatial model that combines fine-scale livestock numbers with their associated energy requirements to distribute livestock grazing demand onto a map of energy supply, with the aim of estimating where and to what degree pasture is being utilized. We applied our model to Kazakhstan, which contains large grassland areas that historically have been used for extensive livestock production but for which the current extent, and thus the potential for increasing livestock production, is unknown. We estimated the grazing demand of Kazakh livestock in 2015 at 286 petajoules, which was 25% of the estimated maximum sustainable energy supply that is available to livestock for grazing. The model resulted in a grazed area of 1.22 million km², or 48% of the area theoretically available for grazing in Kazakhstan, with most utilized land grazed at low intensities (the average off-take rate was 13% of total biomass energy production). Under a conservative scenario, our estimates showed a production potential of 0.13 million tons of beef additional to 2015 production (a 31% increase), and much more with utilization of distant pastures. This model is an important step forward in evaluating pasture use and available land resources, and can be adapted at any spatial scale for any region in the world.
Introduction
The global livestock sector has an estimated value of upwards of $1.4 trillion and employs more than 1.3 billion people [1]. Animal products make up 40% of the current global food demand [2], which is expected to double by 2050 (from 2005 levels) [3]. The largest increases in the

suitability of the GLW for regional and local studies [22]. There have also been attempts to assess fine-scale livestock dynamics with spatial extrapolation of local livestock numbers as a proxy for the area used as pasture [24]. However, such approaches fail to account for the productivity of grasslands, the biomass intake by the different types of grazing livestock, or regional grazing practices.
We developed a model to estimate the grazing intensity of livestock (cattle, sheep, goats, and horses) in Kazakhstan. Grazing intensity is the percent utilization of net primary production, and allows for the gridded estimation of grazing pressure based on natural productivity. Kazakhstan is particularly interesting because of its long history of livestock rearing and the vast land resources suitable for grazing. The ninth-largest country in the world, Kazakhstan had permanent pastures and meadows estimated at 1.87 million km² (69% of total land area) in 2014 by the Food and Agriculture Organization (FAO); by comparison, Russia had 0.93 million km² and the United States 2.51 million km² [25]. Historically, large numbers of wild herbivores shared Kazakhstan's grasslands with livestock, the most numerous being the saiga antelope (Saiga tatarica). However, during the 1990s, populations of saiga plummeted due to the loss of enforcement of hunting quotas and illegal trade in saiga horns [26,27]. Saiga are also vulnerable to periodic, catastrophic die-offs, one of which occurred in spring 2015 [28,29]. In summer 2015, saiga in Kazakhstan numbered around 84,000 [30]. Other wild herbivores of note in Kazakhstan include the kulan (Equus hemionus) and the goitered gazelle (Gazella subgutturosa), which in 2015 numbered around 3,500 and 13,000, respectively [30]. These numbers pale in comparison to grazing livestock numbers (6.2 million cattle, 18 million sheep and goats, and 2.1 million horses in 2015) [31]. Similarly, domestic camels in 2015 numbered 170,000 [31]. While in some districts they represented a noticeable source of grazing demand, camels and wild herbivores contributed very little to total grazing intensity at the national scale and thus were not considered in this research.
The model estimates the distribution of livestock grazing and contributes to closing the knowledge gap regarding the spatial dimension and intensity of pasture use. We used statistical data on livestock numbers and fodder production along with recommended energy intake levels per animal type to estimate the livestock grazing demand (the total energy consumed through grazing) for Kazakhstan in 2015. Based on reported grazing practices, we applied this demand to biomass productivity maps to determine the most likely distribution of utilized pasture. The complement of the result is the amount of energy that is available for grazing but not currently utilized. Specifically, we were interested in whether the differences in grazing practices among the predominant farm structures and climatic zones would be captured in the output.
We aimed to answer the following research questions: 1. What was the energy demand and associated pasture requirement for Kazakh grazing livestock in 2015?
2. What was the spatial distribution of this pasture?
3. Where was the pasture being underutilized, and what is the potential for increasing the herd size and the production of meat and milk?
Study area
Situated in the middle of the Eurasian Steppe, Kazakhstan has a northeast-southwest precipitation gradient that allows rain-fed grain production in the north (upwards of 350 mm/year), but transitions quickly to dry steppe and further to desert climate in the southwest (less than 100 mm/year). Precipitation does not suffice for cropland agriculture in many parts of the country where grazing is the only agriculturally significant activity possible [11]. Unlike many other grassland regions of the world, Kazakhstan once supported a much larger livestock population than it has at present, particularly while a part of the Soviet Union, when livestock production was viewed as a pillar of centrally planned economies [32]. After the Soviet Union dissolved, livestock numbers in Kazakhstan plummeted and widespread abandonment of pastures and croplands occurred [24,33,34]. While livestock numbers have been steadily recovering since 2000, as of 2015 they are still only two-thirds of 1990 levels [31].
However, Soviet levels of production cannot be assumed the desirable goal, as it is generally accepted (though poorly documented) that soil degradation was widespread, with overgrazing a major contributor [35]. Furthermore, degradation of grazing lands has continued up to the present, despite the drastic reduction in grazing livestock numbers [36]. Under the Soviet system, the human population was settled in permanent villages. Livestock migration continued, but was structured and strictly regulated. To supplement grazing (especially during winter), there was a large, subsidized network of fodder supply. When the Soviet Union dissolved, so too did the fodder supply network. Yet the human population remained sedentary, with livestock held almost exclusively in small, household farms. This led to localized overgrazing around settlements, as distant outposts fell into disrepair, and long-distance migration all but stopped [37].
Kazakh farm structure
The type of farm that livestock are raised on in Kazakhstan determines to a great degree how the livestock are grazed [32]. There are three distinct farm types in Kazakhstan: the largest farms are called agricultural enterprises (AE), ranging from a few thousand to tens of thousands of animals. These farms are the spiritual (and often actual) descendants of Soviet farms and usually have the resources to provide high-quality fodder and support longer-distance grazing migrations. The second group is the private farms (PF). Since 2000, this has been the fastest-growing farm type, and the most diverse in terms of size and grazing practices, ranging from tens to thousands of animals. The third and smallest farm type, households (HH), typically have only a few animals per unit, but overall in Kazakhstan, 60% of grazing livestock are kept in this farm type. Livestock are usually housed overnight in the owner's backyard, meaning that their grazing area is limited to the distance they can walk in one half-day. This immobility results from the sedentarization of farms under the Soviet regime, and the path dependency following its dissolution [38][39][40]. Prior to the Soviet era, livestock generally grazed in large, transhumant, communal herds [41].
Model development
Three main components are used as inputs to our livestock distribution model (Fig 1). Grazing supply (1) is the base map, which consists of energy stored in all the biomass that could be used for grazing. Grazing demand (2) is the sum of all energy required by livestock for growth, maintenance, and lactation, less the amount of energy that they receive through supplied fodder. Energy supplied through fodder is calculated based on statistics [31] and energy conversion ratios used in Kazakhstan [42]. To spatially distribute the grazing demand, grazing characteristics (3), including home base location, daily grazing range, seasonal migration, and off-take rate (percentage of produced biomass that is consumed) need to be defined.
Knowing that the livestock are clustered around settlements [43], and assuming that they, to some extent, behave as optimal foragers [44,45], we defined a search algorithm to seek out the most productive grassland for grazing surrounding the geographic coordinates of the settlements [46]. Grassland productivity was captured using net primary productivity (NPP) [47,48]. As a way to disaggregate livestock numbers from district-level statistics to the settlement level, the human population of each settlement was used [49], along with a piecewise linear function (S1 Equation) describing the relationship between settlement size and number of livestock owners, determined from expert opinion and personal observation (S1 Fig).
The search radius defines the area around the settlement within which pixels are selected. Our model allows the search radii to be defined independently for each livestock/farm type combination. The search radii used in this research were developed based on farmer interviews and the findings of Kamp et al. [50] and Coughenour et al. [43]. Three different radii were used to simulate three different grazing practices found in Kazakhstan. For cattle, sheep, and goats on households and private farms, the search radius was set to 2 km to represent the relative immobility of these livestock. For cattle, sheep, and goats on agricultural enterprises, the search radius was set to 5 km, to represent the increased resources of the enterprises to seek better pastures farther from the settlement (displaying slightly more "optimal forager" activity) [44]. For horses on all farm types, the search radius was set to 10 km, because horses are more mobile and rarely return to the settlement overnight, and thus have the greatest ability to approach optimal foraging. For a summary of the assumptions made in the distribution model, and their implications, see S1 Table. To distribute the demand spatially and realistically, we use an iterative approach. For each combination of livestock type and farm type, a radius r is defined (2, 5, or 10 km, as defined above). For every settlement, the pixels within r are sorted according to their NPP value, and the settlement's demand is distributed to the pixels, choosing higher values first, and precluding chosen pixels from possible duplicate distribution. If the demand is not met within the initial search radius, the search radius is increased to 2r. The distribution process repeats with increasing search radii (3r, 4r, etc.) until the demand is satisfied. Moreover, the distribution for all settlements is performed simultaneously to allow fair competition, reflecting the reality. This results in the distribution of all available land in (n-1) radii around a settlement, with the final radius containing the last pixels distributed to meet the demand. A magnification of the results, showing the settlement-level pixel distribution, is shown in S2 Fig. We chose to distribute the demand of sheep and goats before cattle, though their grazing areas for the most part overlap (literature indicates cattle will travel slightly further during a day) [51,52]. Thus, for sheep, goats, and cattle, farm type is more important than animal type, and the algorithm distributed all sheep, goats, and cattle in households first, then private farms, and finally agricultural enterprises. Horses, on the other hand, typically graze separate from the other livestock, and occupy areas distant from settlements, because they are more mobile and remain on the pasture overnight. To account for this, horses of all farm types were distributed last, thus receiving the pasture furthest from the settlements. Based on the characteristics of the different farm types, and the mobility of the different animal types, the livestock were distributed in this order: 1. sheep and goats in households, 2. cattle in households, 3. sheep and goats in private farms, 4. cattle in private farms, 5. sheep and goats in agricultural enterprises, 6. cattle in agricultural enterprises, 7. horses in households, 8. horses in private farms, and 9. horses in agricultural enterprises. The model is run within the R environment, with image processing carried out in R and in QGIS [53, 54].
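To make the allocation step concrete, the fragment below sketches the greedy, radius-expanding distribution for a single settlement and a single livestock/farm type combination in R (the environment the model is run in). It is a minimal illustration rather than the published implementation: the names (pixel_energy, pixel_dist, demand, base_radius, offtake) are hypothetical, and the full model additionally lets all settlements compete simultaneously and applies the fixed animal/farm-type ordering described above.

```r
# Minimal sketch (assumptions noted above): allocate one settlement's grazing
# demand to the most productive pixels within an expanding search radius.
allocate_settlement <- function(pixel_energy, pixel_dist, demand,
                                base_radius, offtake) {
  # pixel_energy: available energy per pixel (MJ); pixel_dist: distance to the
  # settlement (km); demand: grazing demand to satisfy (MJ);
  # base_radius: initial search radius r (km); offtake: usable fraction of NPP
  chosen <- integer(0)
  radius <- base_radius
  remaining <- demand
  while (remaining > 0 && radius <= max(pixel_dist)) {
    # candidate pixels inside the current radius that are not yet allocated
    cand <- setdiff(which(pixel_dist <= radius), chosen)
    # take the most productive pixels first (optimal-forager assumption)
    cand <- cand[order(pixel_energy[cand], decreasing = TRUE)]
    for (p in cand) {
      supply <- offtake * pixel_energy[p]
      chosen <- c(chosen, p)
      remaining <- remaining - supply
      if (remaining <= 0) break
    }
    radius <- radius + base_radius   # expand to 2r, 3r, ... if demand is unmet
  }
  list(pixels = chosen, unmet = max(remaining, 0))
}
```

In the full model, this loop is applied in the fixed animal/farm-type order listed above, so that household sheep, goats, and cattle claim nearby pasture before more mobile herds.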
Grazing supply
For grazing supply, we used net primary productivity (NPP) produced by Eisfelder et al. [47] who used the Biosphere Energy Transfer Hydrology Model (BETHY/DLR) [55]. NPP is dry matter production per unit area per unit time. We used the best available land-cover map for the region [11] to mask out areas that are unavailable for grazing, namely forest, water, ice, and artificial surfaces. Grazing is known to occur in all types of protected areas except nature preserves (zapovedniki), which cover 1370 km 2 and have the highest level of protection [56]. Thus, nature preserves were also excluded.
Areas defined as "bare soils" were included as grassland because they, though low in NPP, contain areas where grazing is known to occur [43]. A common practice in Kazakhstan is the foraging of crop residues after harvest, especially on wheat fields. To account for this, we split the map into a grassland and a cropland map. For both maps, we created a composite with the mean of the nine-year NPP data (compiled into annual totals) and converted NPP (grams carbon/square meter) to biomass production (grams dry matter/square meter) using the conversion coefficient of 0.47 grams carbon/grams dry matter [57]. On the grassland map, we converted biomass to available energy using a value of 8.6 Megajoules (MJ) per kilogram dry matter (kgDM), based on literature from similar regions and climates (Fig 1) [58][59][60]. The total energy available from grasslands was calculated to be 3537 PJ. To estimate the biomass available from the foraging of crop residues, we applied a harvest index of 0.48 (using wheat as the base reference) to the annual NPP [61,62]. The harvest index is the mass ratio of crop yield (grain) to the crop's total aboveground biomass [63]. Wheat is the dominant crop grown in Kazakhstan and thus was used for the calculation of crop residues [31]. At harvest, around 90% of wheat biomass is aboveground [64]. The energy contained in crop residues is generally less than that of pasture, and a value of 6 MJ/kgDM (using wheat as the base reference) was used for croplands [63,65]. The total energy available for grazing from croplands was calculated to be 293 PJ. The grassland map and cropland map were then merged to produce a map of grazing supply (Fig 2).
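As a worked illustration of the supply-side conversions just described, the fragment below turns an annual NPP value into grazing energy for a grassland pixel and a cropland (wheat) pixel, using the stated coefficients (0.47 gC/gDM, 8.6 MJ/kgDM for pasture, 6 MJ/kgDM for residues, harvest index 0.48, 90% of wheat biomass aboveground). Treating residues as the non-grain share of aboveground biomass is an assumption made here for the sketch; the exact residue formula of the published model may differ.

```r
# Sketch of the energy-supply conversions (assumptions noted above).
npp_c <- 250  # example NPP in grams carbon per m^2 per year

# NPP (gC/m^2) -> dry-matter production (kgDM/m^2)
dm <- npp_c / 0.47 / 1000

# Grassland pixel: energy available for grazing (MJ/m^2)
grass_energy <- dm * 8.6

# Cropland (wheat) pixel: residues assumed to be the non-grain share of
# aboveground biomass (90% of total), valued at 6 MJ/kgDM
residue_dm     <- dm * 0.90 * (1 - 0.48)
residue_energy <- residue_dm * 6

c(grassland_MJ_per_m2 = grass_energy, cropland_MJ_per_m2 = residue_energy)
```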
Grazing demand
To calculate grazing demand, we gathered data on livestock numbers, fodder yield and fodder consumption at the district level (2nd-level administrative division) for 2015 from the Kazakh National Statistics Agency (Fig 3) [31]. In Kazakhstan, there are 200 district-level units, consisting of districts (rayons) and city administrative units (gorodskie administratsii). Livestock nutritive requirements were taken from recommended values published in a livestock nutrition handbook by KazAgroInnovation [66], a subsidiary of the Kazakh Ministry of Agriculture. These values are specific to Kazakh livestock at different stages of growth and for different animal functions (e.g., beef heifers for breeding vs. beef heifers for finishing), as well as for different desired growth rates. Because sheep far outnumber goats in Kazakhstan, and because of their similar grazing characteristics and energy requirements, all goats were treated as sheep.
To calculate the total nutritive demand (Fig 1), we used information on the age group and animal function of the Kazakh livestock herds. The proportions of the different age groups and animal functions were available at the province (oblast) level for the different livestock types from the 2006 agricultural census [67]. As no newer or more detailed data exist, these proportions were applied to the 2015 numbers in our disaggregation equation (S2 Equation). Animal productivity differs depending on living conditions, and living conditions in Kazakhstan can broadly be defined based on the farm type. The differences in animal productivity, and thus energy demand, on the different farm types were estimated using different animal growth rates (g/day) as indicated in the handbook (S2 Table) [66]. A description of how the handbook values were used to calculate the different animal age and function groups can be found in S3 Table.
Fig 2. Map of annual available grazing supply (MJ/m²) derived in this study. Based on total NPP measured by Eisfelder et al. [47]; the land-cover classification used for masking is from Klein et al. [11], and the protected area mask is from Kamp [...].

The grazing demand was obtained by subtracting the amount of energy consumed as fodder from the total demand; the fraction of total energy demand that must be met by grazing (grazing demand divided by total demand) is called the grazing gap [68]. The estimation of fodder consumption was not straightforward, as such statistics are reported consistently only for agricultural enterprises. Instead, gross yield of harvested fodder crops at the district level was used [31], and fodder consumption statistics were used in an equation (S3 Equation) to allocate fodder to the different livestock types using their relative proportions (i.e., the proportion of total fodder consumption allocated to cattle, sheep, goats, pigs, poultry, horses, and camels). This was the only feasible way to estimate the total amount of fodder consumed by grazing livestock in Kazakhstan. Fodder crop yields were converted to energy values using the Soviet system of "fodder units" [42], due to its easy conversion to MJ and its widespread use in Kazakh agricultural literature and statistical reporting (S4 Table shows the conversion rates used). Table 1 details the inputs used to distribute the demand onto the supply as shown in Fig 1. When distributing the supply, it is important to acknowledge that only a fraction of the total NPP can be consumed by livestock. The Eisfelder et al. [47] map is a measurement of total NPP, which includes the portion that is belowground and unavailable to the livestock. We used the work of Propastin et al. [69], who found aboveground NPP in central Kazakhstan to be on average 77% of total NPP. In addition, a considerable portion of the aboveground NPP must be left to allow regrowth. Published values of recommended stocking rates and pasture utilization in similar climatic conditions suggest a maximum off-take rate of 40% of aboveground NPP [70][71][72], and thus the maximum sustainable off-take rate was estimated at 40% of 77%, or 30% of total NPP.
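The demand-side bookkeeping can be summarized in a few lines. The sketch below is an illustrative R fragment, not the published S2 and S3 Equations: the head count, age-group shares, daily requirements, and fodder energy are placeholder numbers, and daily requirements are assumed to be annualized by multiplying by 365.

```r
# Sketch of the demand and grazing-gap calculation for one district and one
# livestock type (illustrative names and numbers; see S2 and S3 Equations).
head_count <- 12000                          # animals of this type in the district
age_share  <- c(calves = 0.25, heifers = 0.20, cows = 0.45, bulls = 0.10)
mj_per_day <- c(calves = 35,   heifers = 60,   cows = 95,   bulls = 110)

# Total annual nutritive demand (MJ): sum over age/function groups
total_demand <- sum(head_count * age_share * mj_per_day * 365)

# Energy supplied as fodder (MJ), e.g. fodder-unit yields converted to MJ
fodder_energy <- 2.0e8

grazing_demand <- max(total_demand - fodder_energy, 0)
grazing_gap    <- grazing_demand / total_demand   # fraction met by grazing
```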
Obviously, not all pasture is grazed at the maximum sustainable off-take rate. Mapping actual grazing intensity requires estimating the variation in off-take rate on a spatial scale. To map variation in off-take rate using our model, we first ran the model under a range of eleven different off-take rate assumptions (5% increments from 10%-60%). We then calculated the maximum distance from each settlement that each livestock type needed to fulfill their grazing demand under each off-take rate. To determine an accurate off-take rate for each settlement, we used the maximum grazing distances for cattle in households as the defining variable, as they, along with sheep and goats (which are distributed before cattle), are the most restricted by distance from settlement. We chose 10 km as the maximum distance for cattle in households based on the findings of Kamp et al. [50]. We grouped settlements into their districts, and for each district, we selected the lowest off-take rate that corresponded to the median of maximum grazing distances for cattle in households being less than 10 km. In cases where districts had no reported cattle in households, sheep and goats in households instead were used (districts without either of these had no grazing livestock of any kind). The model was then run again, with settlements maintaining their determined off-take rate.
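The district-level selection of off-take rates can be expressed compactly. The fragment below is a hedged sketch of that rule: for each district it picks the lowest tested off-take rate at which the median maximum grazing distance of household cattle stays below 10 km. The matrix of distances and its layout are hypothetical.

```r
# Sketch: choose the lowest off-take rate per district such that the median
# maximum grazing distance of household cattle is below 10 km.
offtake_rates <- seq(0.10, 0.60, by = 0.05)   # the eleven tested rates

select_offtake <- function(dist_matrix, threshold_km = 10) {
  # dist_matrix: rows = settlements in one district, columns = off-take rates,
  # values = maximum grazing distance (km) for cattle in households
  for (j in seq_along(offtake_rates)) {
    if (median(dist_matrix[, j]) < threshold_km) return(offtake_rates[j])
  }
  offtake_rates[length(offtake_rates)]  # fall back to the highest rate tested
}
```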
Production potentials of meat and milk
We estimated the potential to increase production of meat and milk in Kazakhstan based on the efficiency with which pasture is being used. To estimate pasture use efficiency, we took meat and milk production in 2015 from the national statistics [31]. We calculated pasture requirement from the model results and estimated the yield of meat and milk (tons per km 2 utilized) for the different livestock types. Our model results do not differentiate between beef and dairy cattle, so we made an adjustment based on the relative proportions of beef and dairy cattle. The fraction of cattle classified as dairy in 2015 for agricultural enterprises, private farms, and households was 0.40, 0.50, and 0.85, respectively [31]. We multiplied the land requirement by this fraction as a rough estimate for the area used by dairy cattle. We then divided milk production by the adjusted land requirement to estimate milk productivity. We used the calculated land use efficiencies to estimate production potential. First, we made a conservative assumption that all land within 10 km of a settlement could currently be utilized. Therefore, unutilized land within 10 km of a settlement was considered for potential expansion. The modeled off-take rates were used to calculate the number of additional livestock that could be supported. Second, we proposed a scenario where pasture was grazed at its maximum sustainable intensity (30% off-take rate), and calculated the resulting unutilized area within 10 km of a settlement. Using a less conservative assumption that all land within 20 km of a settlement could be utilized, we repeated the previous two calculations. Potential increase in beef production was calculated with the assumption that all additional livestock were beef cattle. Similarly, for potential increase in milk production, we assumed that all additional livestock were dairy cattle. Therefore, the results presented are "either-or", and the reality likely falls somewhere in between.
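To make the productivity and expansion arithmetic explicit, the sketch below computes land productivity and a potential production increase for one livestock/farm type. The numbers are placeholders rather than values from Table 2; following the text above, the dairy-fraction adjustment is applied to milk productivity only.

```r
# Sketch of the productivity and expansion-potential calculation
# (placeholder numbers; the adjustment follows the description above).
pasture_used_km2 <- 3.0e5   # modeled pasture utilized by cattle on this farm type
beef_prod_t      <- 1.2e5   # reported beef production (tons)
milk_prod_t      <- 1.5e6   # reported milk production (tons)
dairy_fraction   <- 0.40    # share of cattle classified as dairy

# Land productivities (tons per km2 of utilized pasture)
beef_productivity <- beef_prod_t / pasture_used_km2
milk_productivity <- milk_prod_t / (pasture_used_km2 * dairy_fraction)

# Potential increase if an additional, currently unutilized area near
# settlements were grazed entirely by this livestock/farm type
extra_area_km2 <- 1.4e5
extra_beef_t   <- extra_area_km2 * beef_productivity
extra_milk_t   <- extra_area_km2 * milk_productivity
```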
Grazing gap and demand distribution
The grazing demand was calculated for each animal and farm type combination (Fig 4). The total energy demand by all livestock types in 2015 (the sum of all bars in Fig 4) was 368 Petajoules (PJ). The grazing gap is displayed above each bar as the fraction of total energy demand obtained through grazing. Of the three livestock types, the grazing gap is lowest for cattle, and of the three farm types, the grazing gap is lowest for agricultural enterprises, with the lowest being cattle on agricultural enterprises. This is due to cattle on agricultural enterprises receiving more and higher-quality fodder than other livestock and on other farm types. The total amount of energy supplied by fodder was 82 PJ (sum of all darker portions of the bars), leaving 286 PJ to be obtained through grazing.

Fig 4. The darker bottom portion of each bar is the fodder supply, and the lighter top portion is the remaining demand that must be acquired from grazing. The total demand is represented by the full bar height. Fractions above each bar show the grazing gap (grazing demand divided by total demand). Nutritive demand information is from KazAgroInnovation [66] and supply statistics from KazStat [31].

Fig 5 shows the total grazing demand of all livestock types for each settlement in Kazakhstan. Grazing demand is not distributed evenly across the country. In the north, the demand is large, but dispersed across many settlements, whereas in the south it is also large, but concentrated in relatively fewer settlements. The center and southwest have both few settlements and little grazing demand.
Off-take rate
Off-take rate is not uniform across Kazakhstan. The grazing demand for 2015 was 7.5% of the total biomass supply (when converted to energy); that is, if the off-take rate were uniformly 7.5%, all available land would be utilized. We tested the sensitivity to off-take rate by running the model with eleven different off-take rates, from 10% to 60% (5% increments) (Fig 6). As the off-take rate decreases, the area required for grazing increases steeply (Fig 7). In our results, all off-take rates are expressed as a percentage of total available NPP.
Grazing distances and pasture extent
The maximum distances traveled by household cattle under each off-take rate assumption were analyzed at the district level to determine the average off-take rate in each district. The model was re-run with variable off-take rates to derive the maximum grazing distances. Fig 8 shows the median and quartiles of these distances by animal and farm type. A smaller quartile range on the left-hand side of the median for every livestock type is a result of the distances being skewed by relatively few settlements with a large livestock population located close to one another, most notably in southcentral Kazakhstan (Fig 5). Most settlements had much shorter maximum grazing distances, within 6 km for cattle, sheep, and goats in households, and within 15 km for cattle, sheep, and goats on private farms. Despite being distributed later, cattle on agricultural enterprises were found to have lower maximum grazing distances than sheep and goats on agricultural enterprises. This was due to agricultural enterprises specializing in cattle production being located mainly in the north in small settlements, whereas agricultural enterprises with sheep and goats were located mainly in the south and southeast in or near large settlements. Fig 9 shows the land-use footprint of grazing livestock in 2015, using variable off-take rates at the district level. The area required was 1.22 million km², 48% of the area theoretically available for grazing. While the off-take rate was determined at the district level, the result shows that off-take rates did not strictly adhere to district boundaries, as individual settlements are not obliged to graze within district boundaries. In the north, a relatively higher number of livestock are kept in private farms and agricultural enterprises, which are not as restricted as household livestock to the immediate vicinity of settlements. Thus, they can utilize distant pastures at lower off-take rates, and almost all of the north and northeast was utilized to some extent. The south and southeast showed less land being utilized, albeit at a much higher off-take rate. In the east, high NPP allows for lower off-take rates, and high numbers of private farm livestock can search out distant pastures. Two riparian pasture regions are clearly visible because the rivers that sustain them run through otherwise arid and semi-arid regions: the Ural in the far west and the Syr Darya flowing northwest out of the southern tip. The Chu River (to the east of the Syr Darya) is a historically important river that used to flow into the Syr Darya, but for many years has been diverted for irrigation and now disappears before reaching the Syr Darya. The Ili River in the southeast flows from the mountains of Tian Shan into Lake Balkhash, where it forms a large delta, providing grazing opportunities in an otherwise arid landscape.
Production potentials of meat and milk
Summarizing the utilized pasture made it possible to estimate the associated productivity of livestock production with regard to pasture use. Table 2 shows the area of pasture utilized and the respective productivity of meat and milk (production per km² utilized). Meat productivity is highest for cattle on agricultural enterprises, but only marginally so. Given the much smaller grazing gap for cattle on agricultural enterprises (Fig 4), one would expect their meat productivity (which does not account for fodder) to be much higher. This is not the case because most cattle on agricultural enterprises are in northcentral Kazakhstan (Fig 3), where the off-take rate is low (Fig 9) and substantial grazing on cropland (Fig 2) occurs. Both factors increase the land utilized by cattle on agricultural enterprises compared to other livestock and to other farm types, and thus decrease the relative meat productivity. Total beef production in 2015 was 417 thousand tons (kt), and total dairy milk production was 5.1 million tons (Mt).
For sheep, goats, and horses, meat productivity is highest in households. For sheep and goats, this is due to sheep on agricultural enterprises and private farms primarily being raised for wool, with meat only a byproduct. Similarly, for horses, most meat production is done at the household level. Regarding milk production, cattle on agricultural enterprises are clearly the most land productive (when adjusted for the proportion of dairy production). Milk productivity is very low for sheep and goats, with almost all production coming from the very few goats in the country. Horse milk productivity is somewhat higher, because of the demand for the traditional horse-milk drink kumys, which is produced mainly at the household level.
We produced a conservative estimate for increased production potential by implementing the scenario where all land within 10 km of a settlement is utilized. Assuming the estimated off-take rates shown in Fig 9 as business-as-usual (BAU), the additional pasture utilized was 0.14 million km², with an associated energy of 29.9 PJ. Assuming a proportional increase in fodder production (i.e., that the grazing gap remains the same), if the additional 0.14 million km² of pasture was used entirely for cattle on agricultural enterprises, beef production could be increased by 0.13 Mt, an increase of 31% (Fig 10). Conversely, if all expansion was used for dairy on agricultural enterprises, dairy milk production could be increased by 3.11 Mt (above the 2015 level) under the business-as-usual scenario, an increase of 60%. These are conservative estimates, as off-take rates were very low for most settlements (Fig 9). If all land within 10 km was used at its maximum sustainable off-take rate (30%), and if all additional livestock on pasture within 10 km were cattle on agricultural enterprises, beef production could be increased by 1.91 Mt (above the 2015 level), an increase of 457%. By comparison, Brazilian beef exports in 2013 totaled 1.25 Mt [75]. Hence, Kazakhstan has the potential to become one of the leading beef exporters in the world. If the radius of land around a settlement that can be utilized was increased to 20 km, with business-as-usual off-take rates, and if all additional pasture was utilized by cattle on agricultural enterprises, beef production could be increased by 0.41 Mt (98%). Assuming maximum sustainable off-take rates within 20 km, this estimate increases to 3.96 Mt.
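As a quick sanity check, the relative increases quoted above follow directly from the 2015 baselines of 417 kt of beef and 5.1 Mt of dairy milk; the short fragment below reproduces them (small differences are due to rounding).

```r
# Relative increases (%) implied by the production potentials quoted above
beef_2015 <- 0.417  # Mt
milk_2015 <- 5.1    # Mt

round(100 * c(bau_10km = 0.13 / beef_2015,   # ~31%
              max_10km = 1.91 / beef_2015,   # ~458%
              bau_20km = 0.41 / beef_2015,   # ~98%
              milk_bau = 3.11 / milk_2015))  # ~61%
```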
Discussion
This model is a new and unique way to analyze pasture use. As such, it cannot be compared directly to existing products. Most maps of livestock distribution or pasture extent are at the global scale, and we endeavored to create a more accurate map by focusing on the regional scale. We used detailed information on livestock nutritive requirements to calculate the total energy demand and we used district-level data on fodder supply to determine the grazing gap and grazing demand. To measure the energy available for grazing, we obtained NPP estimates as opposed to using suitability indices, which allowed us to estimate grazing intensity instead of probability of pasture occurrence or stocking density. Based on extensive literature review and field experience, we defined regional grazing practices specific to different farm and animal types to better distribute grazing demand. The result is a model that can distribute pasture utilization at fine spatial scales.
Comparison to global products
We compared our results visually to global products. The Gridded Livestock of the World (GLW) [22] shows very dense populations of cattle, sheep, and goats in southcentral and southeastern Kazakhstan, whereas the north and northeast is less dense, but fairly uniform across a larger area. This agrees with the high off-take rates calculated in the south and the low off-take rates in the north, which result in very large continuous pasture areas in the north (Fig 9). More recently, Nicolas et al. [76] improved on the GLW by, among other things, preventing the GLW from placing livestock where humans are absent, which was also the aim of our model. However, they found no systematic improvements, likely due in part to their coarse distinction between urban and rural areas. The definition of urban and rural areas (and the presence of livestock in each) varies greatly between countries, and using one gradient for the entire world is unlikely to produce accurate results.

Table 2. Pasture use, production, and productivity of meat and milk in 2015. Columns: livestock type, farm type, pasture utilized (mil. km²), meat production (kt), meat productivity (t/km²), milk production (kt), milk productivity (t/km²). Meat and milk production statistics are from KazStat [31]; these numbers do not account for land used for fodder production.

Fig 10. The dotted line shows beef and dairy production in 2015.

Compared visually to Erb et al. [14], our model found less utilized pasture. This was due to the subtractive method used by Erb et al. [14]: if land was not used for crops or anything else, and as long as it was above the precipitation threshold, it was considered grazing land. That qualifies almost all land in Kazakhstan as grazing land, including lands, such as those in central Kazakhstan, where few, if any, livestock exist. However, Erb et al. [14] also published a grazing suitability map, which shows most of Kazakhstan, save for the far south and southeast, to be in the lowest suitability class. Though suitability is not necessarily related to utilization, the areas of high suitability (the south and southeast) match our areas of high off-take rates. Overall, the four global grazing land-use maps reviewed in Erb et al. [12] have low agreement in Kazakhstan, and especially in the drier central and southwest part of the country, where the difference in estimates of grazing area ranges by up to 100%. In another study, based on Haberl et al.'s [77] estimation of Human Appropriation of NPP, Erb et al. [12] estimated the grazing intensity as a percent of actual NPP, and almost all of Kazakhstan is in the 0-10% range. Only in the southeast and especially in the southcentral does grazing intensity increase, up to about 50%, which is in line with Fig 9.

Fetzel et al. [68] combined the efforts of the previously described global pasture extents [14][15][16][17] with the GLW [22], grazing suitability [14], Human Appropriation of NPP [77], Moderate Resolution Imaging Spectroradiometer net primary productivity [78], and feed intake estimates [79][80][81] to create a global, gridded map of grazing intensity. Shortcomings in the grazing intensity distribution include the use of FAO statistics for livestock numbers, the resolution of the 2007 GLW (5 km), as well as the spatial scale of feed intake estimates (usually national level) [68]. These are general issues with global-scale models, which underline the usefulness of regional-scale approaches, as long as these can make use of data with higher spatial and temporal resolution. Further comparison to our results can be found in S3 Fig. As regional studies of the spatial distribution of grazing intensity do not exist, we compared our results to the global product by Fetzel et al. [68], cut to the area of Kazakhstan (S3a Fig). While the resolution of the maps is not comparable, the suitability algorithm employed by Fetzel et al. [68] resulted in a similar relative distribution of grazing intensity. The takeaway from this comparison is that our maps agree in the areas of high, and low, grazing intensity. There are a number of reasons why the results are not directly comparable, not the least of which is that Fetzel et al. [68]
measured grazing intensity in 2000, when grazing livestock numbers were 57% of their 2015 levels [31]. Also, the assumptions and inputs used in our model were specific to Kazakhstan. Our calculation of the grazing gap (Fig 4) produced a much higher grazing demand in Kazakhstan than the average grazing demand calculated in Herrero et al. [81] for the region defined as the Commonwealth of Independent States. Additionally, our search parameters (using settlements as "home bases") resulted in a smaller, more intensely grazed area. One noticeable inconsistency between the two maps is in the east. This is likely due to the very high NPP in the region, which would make it seemingly ideal for intense grazing. However, there are relatively few livestock in the region (which is only apparent in district-level statistics), and much of the far eastern parts are covered in forest, which was not available for grazing in our model.
Validation
For this research, we combined remotely sensed NPP, livestock statistics, feed recommendations, and allocation rules. Each of these components may contribute a degree of uncertainty. For instance, official statistics can be subject to manipulation and inconsistent measurements, particularly during the transition period. We utilized 2015 livestock numbers, after the reform of livestock statistics reporting that occurred in 2010-11, which included livestock tagging. Similarly, actual feed consumption may vary according to individual production costs, output, and available fodder resources or other endogenous factors, such as the experience and background of the farmer. Yet, agricultural producers tend to follow the recommendations of the extension services and the Ministry of Agriculture.
McNaughton et al. [82] summarize the pitfalls associated with ground-based measurements of grazing intensity, mainly, but not limited to, the difficulty in measuring plant growth, removal, and regrowth over a season. More recently, remote sensing-based global pasture products acknowledge the inherent difficulty in detecting pasture existence, much less determining grazing intensity [83,84]. Grazing, especially low-intensity grazing, can leave almost no detectable signature. When the mobility of livestock and differing land covers are added, detection and measurement of grazing intensity become extremely difficult. As elaborated by Verburg et al. [85], improving or re-thinking the current methods of ground-truthing is an immediate need in land use and land cover products and a cause for future research. Our research combined satellite-derived data (NPP) with sub-national livestock statistics. This makes our approach a member of the group of methods described by Kuemmerle et al. [10] that, while not ground-truthed themselves, use ground-truthed products to disaggregate spatially aggregated data, a group that also includes the work by Neumann et al. [86] and the GLW [21][22][23].
Targeting areas for livestock expansion
Expanding on the GLW, Chang et al. [87] estimated the potential livestock density for Europe and found that virtually everywhere there is ample room for expansion. However, competition for land use in Europe is very high, and expansion is unlikely to occur at a large scale. This is not the case in Kazakhstan, where land is abundant and much of it is unutilized or utilized at low intensities. Increases in livestock mobility through improved infrastructure can reduce localized overgrazing and allow pastoralists to regain access to the vast steppe that is currently out of reach. Areas high in NPP, but with low off-take rates and relatively low livestock numbers, can be found in the east and in the northwest of Kazakhstan. These regions could be the easiest targets for increases in livestock production. In the south, there appears to be less pasture utilization in terms of area, but that belies the much higher off-take rates. It is possible that a 10 km threshold for household livestock is too restrictive in the south, where a much higher proportion of livestock are kept on household farms, and some medium-range seasonal migration is known to still occur [88]. This would result in more utilized pasture, with lower off-take rates. In the north, a much higher proportion of livestock are held on agricultural enterprises, and the relatively low numbers of household livestock contribute to the low calculated off-take rate, usually 10%. This forces the agricultural enterprise cattle to range over large distances, farther than 100 km for several settlements in Akmola Province. While this is certainly possible for agricultural enterprises with labor and capital resources, it is more likely that they graze closer to the farm at higher off-take rates. It is well-documented that overgrazing of pasture is still a feature of Kazakh livestock production, especially around settlements [89]. This would result in an overall smaller utilized pasture in the north, with higher off-take rates.
Notes on model inputs and assumptions
In the estimation of maximum sustainable off-take rate, we used the findings of Propastin et al. [69], based on 14 test sites in Karaganda Oblast. While they have corroborated their results with other similar studies, in fact the proportion of above-to belowground NPP varies depending on the plant species, soil type, soil texture, and climate [90]. To date, robust datasets that map the spatial distribution of above-and belowground NPP in Kazakhstan are lacking. In similar fashion, no countrywide spatial datasets exist on the energy content of predominant species. In our model, a distinction was made only between cropland and grassland, and another improvement on the energy supply side would be to distribute spatially energy content based on predominant plant species. Including the spatial heterogeneity of the above-and belowground NPP and the plant energy content would improve the accuracy of our model's estimation of grazing supply.
Our study found an overall grazing gap of 78%. Erb et al. [12] report the overall grazing gap for 11 world regions, with Central Asia and the Russian Federation having an aggregated grazing gap of around 40%. However, Russia has a large amount of production on primarily mixed feedlot systems and thus a low grazing gap, and regions climatically similar to Kazakhstan, such as Australia and Sub-Saharan Africa, have grazing gaps between 70 and 80% [12]. We should note that in Kazakhstan it is likely that there is substantial fodder production that is not being reported and is therefore unaccounted for in our model. Hay cutting by households and private farms on grassland is a common practice and usually goes unreported. At the same time, communication with local experts and farmers suggested that it is unlikely that fodder imports from other countries occur in significant quantities, which is supported by official statistics [25]. However, some limited amounts of high-energy fodder may be traded from districts with large cropland areas to districts with large grassland areas (and vice versa, with hay moving from grassland-dominant districts to cropland-dominant districts), which has not been accounted for in our model. In any case, the degree of fodder transport between districts is unknown.
Also unknown is the extent to which livestock utilize crop residues. As residues may or may not be left in the field, there are no statistics on the yield of residues available for livestock. However, expert opinion and field experience suggested that there is wide-scale grazing of harvested lands, and a lack of physical boundaries (such as fences) led us to assume in our model that all crop residues were available to livestock. NPP from crop residues available to livestock was calculated to be 293 PJ. This is small in comparison with the grassland supply of 3537 PJ (less than 7% of the total energy supply, yet occupying 14% of the total area), but is a significant source of energy for livestock, particularly in northcentral Kazakhstan. Notably, the time of year that livestock forage on croplands changes from year to year. The wheat harvest can sometimes last into winter when weather conditions prevent earlier harvest. In our model, we do not account for this, making the implicit assumption that most residues from these late harvests will still be available for foraging the following spring, or for horses that graze year-round. Indeed, for an average annual grazing supply map such as we used, whether the residues are consumed in spring or in fall is moot.
The area utilized in 2015 covers much of the highest-productivity areas shown in Fig 2, and there are clear trends of pasturing near rivers and in other areas of high NPP (Fig 9). As the restriction to settlement location is much more important than the productivity of the pasture, this is a result of the location of the settlements and not the model search algorithm. Most settlements were founded because of their proximity to water or other biophysical characteristics that favor higher biomass productivity. Indeed, the average energy production on land within 10 km of a settlement in Kazakhstan is 1.75 MJ/m², whereas land further than 10 km from a settlement averages 1.37 MJ/m². This also supports the hypothesis that abandonment of pastures occurred mainly on marginal lands [24]. Additionally, analysis of Fig 7 shows that the area required for grazing increases faster than the expected 1/x relationship as the off-take rate decreases, indicating lower productivity as the distance from settlements increases.
The model assumes that livestock are only located where people are located, similar to assumptions made by others [76]. While this is generally the case, livestock are not distributed evenly among the settlements, especially on agricultural enterprises. For example, agricultural enterprises hold a large number of animals and are commonly located in medium and small settlements, where they are the main source of jobs for the residents. Since some (but not all) small settlements have an agricultural enterprise, there is no way to place agricultural enterprises accurately within a district without preexistent knowledge of their location. This means that, in reality, herd sizes in small settlements are very unbalanced, whereas in the model, similarly sized settlements within a district receive equal numbers of livestock. Additionally, settlements that exist for a specific industrial purpose may have pasture distributed in the model where in fact no livestock are kept.
Briefly mentioned but unelaborated on is the role that access to water plays in livestock distribution. Wells have been an integral part of grazing migrations in Kazakhstan for centuries, and during Soviet times, thousands of new pump wells were installed to accommodate the regulated migrations of much larger herds [91]. Around these wells, outposts were erected and served as seasonal base camps for livestock herders. However, many of these outposts were abandoned after the collapse of the Soviet Union and the wells fell into disrepair [92]. There is currently no accessible countrywide data on the prevalence or location of working wells or occupied outposts. For this reason, wells and outposts were not considered in the distribution of livestock. Inclusion of these water sources as potential home bases for livestock would give the search algorithm access to grazing supply that was previously far from settlements, and thus it is possible that there is pasture utilized in remote areas not captured in the model.
Livestock productivity on private farms
The low meat and milk productivity of cattle on private farms is surprising (Table 2), especially given that private farms have much lower grazing gaps than households (Fig 4). In the scenarios explored for Fig 10, a similar expansion on only private farms would result in an increase of 67 kt (16%), compared to 113 kt for agricultural enterprises. Based on personal observation and anecdotal evidence, this lower productivity could be because private farmers do not have the resources necessary to provide proper nutrition to their livestock, resulting in suboptimal meat and milk production. Indeed, outside of agricultural enterprises, almost all fodder produced is hay, which is one of the least energy-rich fodder types. On agricultural enterprises, hay comprises 57% of fodder by mass, whereas on private farms hay comprises 95% of fodder by mass [31]. Additionally, in many regions private farms are essentially large households, and have similar grazing practices. In those situations, private farms at best share pasture with household livestock and at worst are pushed to the less productive lands far from the settlement. In the model, private farm livestock are always distributed after the same type of livestock in households, which biases private farm livestock to pastures with lower NPP, though across the country the effect should be rather small. Private farms are currently the most dynamic and fastest-growing sector in livestock farming, but so far, their efficiency has not surpassed that of household farms.
Conclusion
Here we have presented a new model to allocate livestock and pasture spatially, using a map of biomass production and information on animal types, farm types, and fodder intake. Our methods of delineating land used by livestock and estimating the intensity with which the livestock are using the land allow pinpointing the extent and location of underutilized grazing resources.
The model enabled a gridded estimate of the utilized pasture in Kazakhstan, which is a prime example of a country well suited to grazing livestock production. Kazakhstan's dry continental climate also reduces the suitability of livestock production's main competitor for land, crop production, making it a suitable target area for development of range-based livestock production. Our results show that despite relatively low natural productivity, ample capacity exists to increase livestock production in Kazakhstan because large areas are characterized by low pasture utilization and off-take rate, and available biomass resources could support many more grazing animals, especially in the east and the northwest (Fig 9). Under conservative estimates of grazing range constraints and with 2015 productivity levels, beef production could be increased by 0.13 Mt (31%) or milk production by 3.11 Mt (60%), or some combination. However, harnessing even a fraction of these potentials would necessitate infrastructure development measures, such as more, improved processing facilities and improved road networks and market access. Repaired wells and outposts would allow the rejuvenation of old migration patterns and would open up distant pastures for even more potential production increases.
This research is an important step forward in the field of livestock mapping. Our model uses much finer-scale inputs than other global-scale products, and a direct measurement of biomass production, enabling us to make a gridded estimate of pasture distribution based on the energy demand of the livestock. The search algorithm we created is easily transferable to other regions where livestock are restricted to a central point, but can be adapted to any region where grazing patterns can be defined. The result of our research can be used to find patterns in livestock distribution, and to target areas where the supply is underutilized. Moreover, our results help the spatial targeting of possible investments for expanding the production of grazing livestock, including assessing the tradeoffs of production expansion with greenhouse gas emissions and biodiversity conservation.

Supporting information

Table. Values used in the calculation of total energy demand [66]. The number of each livestock species is recorded at the district level for each farm type. Livestock numbers for each age group were recorded in the 2006 agricultural census at the regional level (in which cattle were further divided into beef and dairy) [67]. The number of livestock in each age group of each livestock species in each district was estimated by multiplying the number of each livestock species by the number in each species' age group (in 2006) and dividing by the number of each livestock species (in 2006). Sheep and goat numbers were combined and sheep nutritive requirements were used, due to the low number of goats and their similar nutritive requirements. (DOCX)

S3 Table. Conversion ratios used to convert kg to MJ [42]. Production of each fodder type is recorded at the district level for each farm type. Consumption of each fodder type by each livestock species is recorded at the regional level (all farm types combined). The consumption of each fodder type by each livestock species at the district level for each farm type was estimated by multiplying the production of each fodder type by the consumption of each fodder type by each livestock species and dividing by the total consumption of each fodder type by each livestock species. (DOCX)
Induction of labour in mid-trimester pregnancy using double-balloon catheter placement within 12 hours versus within 12-24 hours
This study aims to evaluate the efficacy and safety of the induction of labour in mid-trimester pregnancy using a double-balloon catheter (DBC) within 12 hours versus within 12-24 hours. Methods: In this retrospective study, a total of 58 pregnant women with gestational age of 14+0 weeks to 27+6 weeks were enrolled, and they underwent intended termination of pregnancy at our birth center from January 1, 2017, to June 31, 2019. Based on the duration of DBC placement, the cases were divided into two groups (DBC group within 12 hours and DBC group within 12-24 hours).
Results: The success rate of induction (successful abortion of fetus and placenta without the implementation of dilatation and evacuation) was higher in the DBC group within 12-24 hours (96.3%, 29/31) than in the DBC group within 12 hours (71.0%, 18/27) (p < 0.05). At the same time, the time from DBC removal to delivery in the DBC group within 12-24 hours was significantly shorter than that in the DBC group within 12 hours (3.0 h versus 17.8 h) (p < 0.05), and the degree of cervical dilation after DBC removal in the DBC group within 12-24 hours was greater than that in the DBC group within 12 hours (p < 0.05).
Conclusion
In the clinic, DBC placement generally lasts for about 12 hours. However, considering that the cervix is immature in the mid-trimester, appropriately extending the placement time of DBC to 24 hours benefits cervical ripening and reduces the chance of dilatation and evacuation.
Background
In prenatal screening, ultrasound is often used to measure the thickness of the fetal nuchal translucency at 11+0 to 13+6 weeks of gestation, and maternal serum screening is carried out to screen for fetal aneuploidy and chromosomal abnormalities in early pregnancy. Besides, ultrasound examination is normally performed at 20+0 to 24+6 weeks of gestation to screen for structural abnormalities [1]. After learning about severe fetal abnormalities and their poor prognosis, most families will choose to terminate the pregnancy in the mid-trimester [2]. Induction of labor is a common obstetric intervention that occurs in a high proportion of pregnancies [3]. Both medical and surgical methods are available for mid-trimester pregnancy. Dilation and evacuation procedures (D&E) are more common in the United States. In contrast, medical methods, such as mifepristone plus misoprostol, are more common in the United Kingdom, Europe and developing countries [2,4]. In our country, the common approach for induction of labor in mid-trimester pregnancy is to use pharmacological and mechanical devices. Pharmacological devices include mifepristone combined with ethacridine lactate, or mifepristone combined with misoprostol [5]. The mechanical devices include transcervical Foley balloon catheters and cervical double-balloon catheters (DBC). Single-balloon catheters or DBC have been used increasingly in recent years at term with an immature cervix [6][7] or for pregnant women with a history of previous cesarean section [3]. The mechanical methods are the earliest approaches to develop a mature cervix, and their effectiveness is equivalent to that of prostaglandins [8][9]. Balloon treatment is well accepted by pregnant women [10][11], and does not cause excessive stimulation or adverse fetal heart monitoring (CTG) changes [12].
In our medical center, the common method for termination of mid-trimester pregnancy is to apply mifepristone combined with ethacridine lactate or misoprostol. We have also applied mifepristone combined with DBC for cases with liver and renal dysfunction, oligohydramnios and failure of the cervical ripening after using ethacridine lactate or misoprostol.
In this retrospective study, we evaluated the efficacy of DBC within 12-24 hours versus DBC within 12 hours for the termination of mid-trimester pregnancy from January 1, 2017, to June 31, 2019, in our birth center.
Methods
Ethical approval and patient consent

The study protocol was approved by the Ethics Committee of Maternal and Child Health Hospital of Hubei Province ([2019] IEC(XM008)). All included women signed written informed consent for therapeutic procedures and also for the publication of those reports.
Selection of patients and study design
The flowchart of the experimental design is demonstrated in Figure 1. In this retrospective study, we included pregnant women with gestational age between 14+0 weeks and 27+6 weeks. All included women underwent intended termination of pregnancy at our birth center from January 1, 2017, to June 31, 2019. Our center is a large birth center in China, with annual deliveries of nearly 30,000 in the last 3 years. Regarding inclusion criteria, pregnant women of Chinese nationality aged 18 to 50 years, who underwent DBC to induce labor for fetal death, fetal anomaly or serious maternal complications that prevented continuing the pregnancy, were included in this study. For exclusion criteria, those aged less than 18 years or older than 50 years, those of non-Chinese nationality, and those who did not undergo DBC to induce labor (such as mifepristone combined with ethacridine lactate, mifepristone plus misoprostol, or cesarean section) were excluded from this study. A total of 263 patients were selected in this study, excluding 160 cases undergoing induction of labor by using mifepristone combined with ethacridine lactate, 20 cases undergoing induction of labor via mifepristone plus misoprostol, 20 cases undergoing induction of labor by mifepristone only, 4 cases of spontaneous labor and 1 case of cesarean section, so finally the remaining 58 cases were included in our study. Based on the indwelling time of DBC, the 58 cases were divided into two groups, namely the DBC group within 12 hours (0 h < DBC time ≤ 12 h) containing 31 cases, and the DBC group within 12-24 hours (12 h < DBC time ≤ 24 h) containing 18 cases.
DBC

The DBC was used as per the manufacturer's instructions (Cervical Ripening Balloon; Cook OB/GYN, Spencer, IN, USA). It involves 2 balloons (a uterine and a vaginal balloon), and each balloon can be filled with a maximum of 80 mL of normal saline. First, the uterine balloon (red piston, marked with "U") was advanced into the lower part of the uterine cavity and 40 mL of normal saline solution was injected into it. Then, the vaginal balloon (green piston, marked with "V") was placed outside the cervical orifice and 40 mL of normal saline solution was injected into it. After vaginal examination to check that the DBC was placed correctly, the fluid amount in both balloons was alternately increased by 20 mL at a time until each balloon reached a total of 80 mL. After making sure the balloons were positioned correctly, the proximal end of the catheter was fixed to the inside of the patient's thigh. If the patient found the device unbearable, with symptoms such as sweating or flustering, 10-20 mL of normal saline was withdrawn from both balloons until the patient could tolerate the DBC.
Based on the attending physician's judgment of the cervical condition of the pregnant woman in mid-trimester pregnancy, the maximum placement time of the DBC was set to 12 hours or 24 hours, but the device was removed immediately upon the occurrence of any of the following events: spontaneous labor, expulsion, spontaneous rupture of membranes, or unexplained vaginal bleeding [13]. If these events did not occur, the DBC was removed after a maximum of 12 hours in the DBC within 12 hours group, and after a maximum of 24 hours in the DBC within 12-24 hours group.
Intervention for pregnancy termination
In our hospital, we use DBC for termination of mid-trimester pregnancy in the following situations. First, oral mifepristone combined with extra-amniotic administration of ethacridine lactate (Rivanol) was applied: 150 mg of mifepristone was administered over 3 days (each pill contains 25 mg of mifepristone, 2 pills a day), and 100 mg of ethacridine lactate was injected at approximately 9:00 AM on the 4th day. Patients mostly delivered within the following 24-48 hours. If a patient still showed no response and the cervix remained immature over the following 72 hours, the DBC was used. Second, we also used mifepristone plus DBC for cases with liver and renal dysfunction or oligohydramnios: 150 mg of mifepristone was administered over 3 days and the DBC was placed at approximately 9:00 AM on the 4th day. After removal of the DBC, oxytocin was infused in both groups to assist labor at a dose of 2.5-5.0 units in 500 mL of Ringer solution, with an infusion rate of 8-40 mL/h. In addition, surgical dilatation and evacuation was performed by an experienced obstetrician when the patient's body temperature reached 38.5°C or massive antenatal hemorrhage occurred.
Observation indicators
Gestational age was estimated by ultrasonography performed between 11+0 and 13+6 weeks. The Bishop scoring system is based on a digital cervical examination, with a minimum of 0 points and a maximum of 13 points. The scoring system evaluates cervical dilation, position, effacement, consistency of the cervix, and fetal station. Cervical dilation, effacement, and fetal station are each allocated 0 to 3 points, while cervical position and consistency are each given 0 to 2 points [1]. To compare the efficacy of DBC in the two groups, the primary outcome was the success rate of labor induction, defined as successful expulsion of the fetus and placenta without the need for dilatation and evacuation.
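For illustration only, the scoring scheme described above can be tallied as in the following sketch; the function and argument names are hypothetical and not part of the study protocol.

```python
# Minimal sketch of the Bishop score as described above (0-13 points).
# Dilation, effacement and fetal station score 0-3 points each;
# cervical position and consistency score 0-2 points each.
def bishop_score(dilation, effacement, station, position, consistency):
    components = {
        "dilation": (dilation, 3),
        "effacement": (effacement, 3),
        "station": (station, 3),
        "position": (position, 2),
        "consistency": (consistency, 2),
    }
    for name, (points, maximum) in components.items():
        if not 0 <= points <= maximum:
            raise ValueError(f"{name} must be between 0 and {maximum}")
    return sum(points for points, _ in components.values())

# Example: an unfavourable cervix before DBC placement
print(bishop_score(dilation=1, effacement=0, station=0, position=0, consistency=1))  # -> 2 of 13
```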
The secondary outcomes included the time from induction to delivery and the time from DBC removal to delivery, as well as maternal and fetal outcome parameters such as the rates of antepartum hemorrhage, uterine artery embolization (UAE) before delivery, postpartum hemorrhage (PPH), and puerperal infection.
Statistical methods
All analyses were conducted using the Statistical Package for the Social Sciences software (SPSS Version 13.0, Inc., Chicago, IL, USA). Values are reported as mean ± standard deviation. Student's t-test was used to compare normally distributed continuous variables, the chi-square test was used to evaluate categorical variables, and the Wilcoxon rank-sum test was used to compare non-normally distributed variables between the two groups. A p value < 0.05 was considered statistically significant.
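As an illustrative sketch only (the study itself used SPSS 13.0), the group comparisons named above could be run as follows with SciPy; all numbers below are invented placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Invented example data for two groups (e.g., maternal age in years)
group_12h = np.array([29.1, 31.4, 27.8, 30.2, 33.0, 28.6])
group_24h = np.array([28.5, 30.9, 26.7, 32.1, 29.4, 31.2])

# Student's t-test for normally distributed continuous variables
t_stat, t_p = stats.ttest_ind(group_12h, group_24h)

# Wilcoxon rank-sum (Mann-Whitney U) test for non-normally distributed variables
u_stat, u_p = stats.mannwhitneyu(group_12h, group_24h, alternative="two-sided")

# Chi-square test for a categorical outcome (counts are invented placeholders)
#                        success  failure
contingency = np.array([[20,       11],    # group A
                        [25,        2]])   # group B
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)

print(f"t-test p={t_p:.3f}, rank-sum p={u_p:.3f}, chi-square p={chi_p:.3f}")
```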
Results
Demographic data of the within 12 hours and within 12-24 hours groups
The baseline data and pregnancy characteristics of the two groups are listed in Table 1. There were no significant differences in maternal age, gestation, parity, nulliparity, maternity insurance, gestational age at termination, rate of placenta previa, history of previous cesarean section, or body mass index between the two groups (p>0.05). There was also no significant difference in the reasons for pregnancy termination between the two groups (p>0.05).
Cervical ripening before and after DBC in the within 12 hours and within 12-24 hours groups
There was no significant difference in cervical ripeness between the two groups before DBC placement according to Bishop scores (p>0.05); however, after ripening by DBC for up to 12 hours or for 12-24 hours, the cervix was more ripe in the within 12-24 hours group than in the within 12 hours group (p<0.05) (Tables 2, 3).
The time from induction to delivery and from DBC removal to delivery
The time from induction to delivery in the within 12-24 hours group was shorter than that in the within 12 hours group (median time, 27.0 h versus 29.8 h), but the difference was not statistically significant (p>0.05). However, the time from DBC removal to delivery in the within 12 hours group (median time, 17.8 h) was longer than that in the within 12-24 hours group (median time, 3.0 h), a significant difference (p<0.05) (Table 4).
Maternal and fetal outcome parameters in the two groups
In the within 12-24 hours group, there were 2 cases (3.7%, 2/27) undergoing surgical dilatation and evacuation after removal of the DBC, compared with 9 cases in the within 12 hours group (29.0%, 9/31) (p<0.05) (Table 5). No patient in either group underwent cesarean section to terminate the pregnancy, and all had successful vaginal delivery.
There was no significant difference in fetal weight, blood loss at delivery, rate of antepartum hemorrhage, rate of puerperal infection, rate of UAE before delivery, rate of PPH, or rate of ICU care between the two groups (p>0.05) (Table 5).
WBC count and hemoglobin in the two groups
There was no significant difference in the WBC count or hemoglobin at admission and at discharge between the two groups (p>0.05) (Table 6).
Hospitalization days and expenditure
In the within 12-24 hours group, the average hospital stay was 9.8 days, shorter than that in the within 12 hours group (12.3 days) (p<0.05). At the same time, the hospitalization expenditure in the within 12-24 hours group was lower than that in the within 12 hours group (p<0.05) (Table 7).
Discussion
In recent years, family policy issued by the central government in China has changed: the former one-child policy was gradually replaced by the universal two-child policy (2015). With the increasing proportion of multiparous women in China, birth defects have become a challenging issue. Women of very advanced maternal age (≥ 43 years) have a higher risk of preeclampsia, intrauterine growth retardation, stillbirth, and placental abruption than younger counterparts [14]. Zhang X et al. [15] analyzed 1,260,684 births from the surveillance system in Zhejiang province, China, and found that the rates of birth defects in 2013, 2015, and 2017 were 245.95, 264.86, and 304.36 per 10,000 births, respectively, and that age-related anomalies increased after the release of China's new two-child policy. Many clinical problems during induced labor remain to be solved, especially complete placenta previa [16] and an immature cervical condition [2]. More and more women in developing countries choose to postpone pregnancy [17], and studies have found an association between older maternal age and a higher risk of chromosomal abnormalities, miscarriage and preterm birth at less than 34 weeks of gestation. In addition, stillbirths are more common in women aged 35-39 [17]. Therefore, the methods, safety, effectiveness and postoperative complications of labor induction in mid-trimester pregnancy are worth exploring. Our study examined pregnant women who underwent intended termination of pregnancy for fetal death, fetal anomaly or serious maternal complications in mid-trimester pregnancy. Of the 58 selected cases, 22 cases received DBC because of a persistently immature cervix after mifepristone combined with Rivanol or mifepristone plus misoprostol, and the other 36 cases received mifepristone plus DBC directly because of liver and kidney dysfunction or oligohydramnios.
The mechanism of DBC-based labor induction is essentially the compressive effect of the DBC balloons on the cervix, which leads to the release of endogenous prostaglandins [9]. In addition to this local effect, mechanisms involving neuroendocrine reflexes (such as the Ferguson reflex) may promote the onset of contractions [8]. Researchers have evaluated the effectiveness of these devices by comparing them with prostaglandins and have reported that they are equally effective, with a lower incidence of tachycardia than prostaglandins [9]. Placing a transcervical DBC can be the primary method, or one of the alternative medical methods if the patient and/or obstetrician prefers not to perform a surgical operation [2]. DBC is commonly used for cervical ripening to induce labor at term, with or without prior cesarean section [18,19]. Korb D et al. [18] compared the effectiveness of cervical ripening by DBC (n = 117) and prostaglandins (n = 127) in women with a history of previous cesarean delivery and an unfavorable cervix (Bishop score 6), and found no significant difference between them in terms of cesarean rate or the median interval between the start of ripening and delivery (42.5% and 28.7 h in the prostaglandin group vs 42.7% and 25.6 h in the DBC group). There are very few studies on induced labor in mid-trimester pregnancy using DBC.
In clinical practice, the placement time of DBC generally lasts for 12 hours. In our study, the placement time of DBC was extended to 24 hours for the first time if spontaneous labor, expulsion, or spontaneous rupture of membranes did not occur. We compared the effects of DBC within 12 hours and DBC within 12-24 hours for induction of labor in mid-trimester pregnancy. We found that the success rate of induction of labor was higher in the DBC within 12-24 hours group (96.3%, 29/31) than in the DBC within 12 hours group (71.0%, 18/27). Thus, appropriately extending the placement time of DBC can reduce the need for surgical induction of labor, thereby reducing maternal injury, and can also help to obtain intact fetal tissues. Although there was no significant difference in the time from induction to delivery between the two groups, the time from DBC removal to delivery in the within 12-24 hours group was significantly shorter than that in the within 12 hours group (3.0 h versus 17.8 h). This may help reduce the risk of fever and the labor pain associated with pharmacological methods used to assist labor. In addition, the hospitalization days and expenditure in the within 12-24 hours group were lower than those in the within 12 hours group. There was no significant difference in the rate of antepartum hemorrhage, rate of UAE before delivery, rate of PPH, rate of ICU care, or in the WBC count and hemoglobin at admission and discharge between the two groups, but the hospitalization days were longer and the expenditure was higher in the within 12 hours group.
Conclusion
Clinically, the placement time of DBC generally lasts for about 12 hours, and the cervical condition is often still immature after removal of the DBC in mid-trimester pregnancy. Appropriately extending the placement time of DBC to 24 hours can benefit cervical ripening and reduce the need for dilatation and evacuation in mid-trimester pregnancy.
Limitations
First, this was a retrospective study in which data were collected only from patients' medical records, and the Bishop score was the only index used for cervical evaluation. Second, the most serious complication of induced labor in mid-trimester pregnancy with DBC left in place for 12 to 24 hours is infection, so more sensitive indices of infection should be assessed in addition to body temperature and WBC count.
Acknowledgements
We are grateful to the patients who gave informed consent to publish this paper. We also thank the obstetricians and nurses involved in the diagnosis and treatment of these pregnant women.
Availability of data and materials
All data generated or analysed during the current study are available from the corresponding author on reasonable request.
Ethics approval and consent to participate
The study protocol was approved by the Ethics Committee of Maternal and Child Health Hospital of Hubei Province [2019] IEC(XM008). All included women signed written informed consent for therapeutic procedures and also for the publication of these reports.
Consent for publication
All included women signed written informed consent for therapeutic procedures and also for the publication of those reports.
Construction of a Promising Tumor-Infiltrating CD8+ T Cells Gene Signature to Improve Prediction of the Prognosis and Immune Response of Uveal Melanoma
Background CD8+ T cells are a key effector of adaptive immunity and are closely associated with the immune response that kills tumor cells. It is crucial to understand the role of tumor-infiltrating CD8+ T cells in uveal melanoma (UM) to predict prognosis and response to immunotherapy. Materials and Methods Single-cell transcriptomes of UM were combined with immune-related genes to screen CD8+ T-cell-associated immune-related genes (CDIRGs) for subsequent analysis. Next, a prognostic gene signature referring to tumor-infiltrating CD8+ T cells was constructed and validated in several UM bulk RNA sequencing datasets. The risk score of each UM patient was calculated, and patients were classified into high- or low-risk subgroups. The prognostic value of the risk score was estimated using multivariate Cox analysis and Kaplan–Meier survival analysis. Moreover, the potential of the gene signature to predict immunotherapy response was further explored. Results In total, 202 CDIRGs were screened out from the single-cell RNA sequencing dataset GSE139829. Next, a gene signature containing three CDIRGs (IFNGR1, ANXA6, and TANK) was identified and considered an independent prognostic indicator that robustly predicts overall survival (OS) and metastasis-free survival (MFS) in UM. In addition, UM patients were classified into high- and low-risk subgroups with different clinical characteristics, distinct CD8+ T-cell immune infiltration, and different immunotherapy responses. Gene set enrichment analysis (GSEA) showed that immune pathways such as allograft rejection, inflammatory response, interferon alpha and gamma response, and antigen processing and presentation were all positively activated in the low-risk phenotype. Conclusion Our work helps to explain the limited response of UM to current immune checkpoint inhibitors. In addition, we constructed a novel gene signature to predict prognosis and immunotherapy responses, which may be regarded as a promising therapeutic target.
INTRODUCTION
Uveal melanoma (UM) is the most common intraocular malignant tumor in adults, but it is much rarer than skin cutaneous melanoma (CM). UM derives from uveal melanocytes and metastasizes rapidly (Patel, 2013). The incidence of UM is approximately 0.06-0.07 per 10,000, and around 50% of UM patients eventually die from metastases (Singh et al., 2011; Goh et al., 2020). Although UM and CM originate from similar cell types, cancer cells in UM are biologically different from those in CM (Heppt et al., 2017b). For instance, gene mutations such as TTN, NRAS, and BRAF commonly appear in CM but are seldom detected in UM, whereas mutations of GNA11, GNAQ, and BAP1 are commonly observed in UM (Van Raamsdonk et al., 2009, 2010; Cancer Genome Atlas Network, 2015; Livingstone et al., 2020). Moreover, compared with CM, UM bears a lower tumor mutational burden and has a tumor-promoting immune microenvironment (Wang et al., 2020).
To date, no systemic treatment has been proven to improve the clinical outcomes of metastatic UM. Although promising immunotherapies such as anti-CTLA4, anti-PD1, and anti-PDL1 have been used successfully in CM, limited response rates toward these immune checkpoint inhibitors are usually observed in UM (Hoefsmit et al., 2020; Qin et al., 2020). For example, the latest clinical outcomes showed that the 5-year overall survival rate of CM with nivolumab plus ipilimumab therapy was 52% (Larkin et al., 2019). However, the response rate of UM to ipilimumab monotherapy was 0-5% and to nivolumab monotherapy was 6%, and no response was observed to a combination of nivolumab and ipilimumab, with a median progression-free survival of 2.9 months (Alexander et al., 2014; Zimmer et al., 2015; Heppt et al., 2017a). Notably, a higher tumor mutational burden is considered to be closely correlated with more neoantigens, which tumor-specific T cells may recognize more easily (Qin et al., 2020). The mutational burden in CM is known to be much higher than in UM, which may partly explain the distinct responses toward immune checkpoint inhibitors. In addition, it has also been suggested that tumor-infiltrating T cells play a pivotal role in killing tumor cells and mediate tumor rejection and antitumor immune responses (Reiser and Banerjee, 2016; Saleh et al., 2020).
For progressive cancers, tumor-infiltrating T cells are the immune cells best placed to effectively target the tumor. T-cell density has been demonstrated to be a favorable prognostic biomarker for patient survival in glioblastoma, colorectal carcinoma, and ovarian carcinoma (Shionoya et al., 2017). However, in contrast to many other cancers, high infiltration of tumor-specific T cells in UM indicates a poor prognosis (Wang et al., 2020). Previous studies showed that tumor-infiltrating CD8+ T cells are the dominant immune cells in UM and are regarded as a poor prognostic indicator (Bartlett et al., 2014). This opposite effect suggests that different CD8+ T-cell subsets or dysfunction of tumor-infiltrating CD8+ T cells may exist in the UM immune environment (Tumeh et al., 2014). Therefore, immune genes associated with tumor-infiltrating CD8+ T cells might be an interesting target for identifying a gene signature that could improve the response to immunotherapy.
To comprehensively evaluate the different subgroups of immune cells and identify CD8+ T-cell type-specific genes in UM, a single-cell RNA sequencing dataset deposited in the Tumor Immune Single-Cell Hub (TISCH) website was first explored. Next, combining multiple bulk RNA-seq UM datasets and the corresponding clinical information, we constructed a promising tumor-infiltrating CD8+ T-cell gene signature using multiple machine learning algorithms. This gene signature may provide future targets for rescuing exhausted CD8+ T cells, stimulating immune surveillance, and enhancing the efficacy of immune checkpoint blockade therapy.
Estimation of CD8+ T Cells in Cutaneous Melanoma and Uveal Melanoma
To explore the association between tumor-infiltrating CD8+ T cells and clinical outcome in cutaneous and uveal melanoma, the Tumor Immune Estimation Resource (TIMER2.0) database was used to comprehensively analyze immune infiltrates across diverse cancer types by multiple immune deconvolution methods. In addition, TIMER2.0 provides Cox regression and Kaplan-Meier survival analyses to estimate the prognostic value of the corresponding immune infiltrates in various cancer types.
Identification of CD8+ T Cell-Associated Immune-Related Genes in Uveal Melanoma
The 7,307 CD8+ T cell type-specific genes in UM (Supplementary Table 1) were obtained from the Tumor Immune Single-Cell Hub (TISCH) website, a single-cell RNA-seq database that aims to characterize the tumor microenvironment at single-cell resolution (Sun et al., 2021). Next, the cutoff criteria of |log2 FC| ≥ 0.5 and adjusted p value < 0.05 were applied to screen the differentially expressed genes (DEGs) in CD8+ T cells. Moreover, the latest version of the immune-related gene list was acquired from the ImmPort database. Finally, the genes overlapping between the CD8+ T-cell DEGs and the immune-related genes were regarded as CD8+ T cell-associated immune-related genes (CDIRGs) for subsequent analysis.
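A minimal pandas sketch of this screening step is given below: CD8+ T-cell marker genes are filtered by |log2FC| ≥ 0.5 and adjusted p < 0.05 and then intersected with the ImmPort gene list. File and column names are assumptions, not those of the original pipeline.

```python
import pandas as pd

# Hypothetical inputs: TISCH-derived CD8+ T-cell marker table and the ImmPort gene list
deg_table = pd.read_csv("cd8_markers_GSE139829.csv")   # assumed columns: gene, log2FC, adj_pval
immport = pd.read_csv("immport_immune_genes.csv")      # assumed column: gene

# Apply the cutoff criteria from the text: |log2FC| >= 0.5 and adjusted p < 0.05
degs = deg_table[(deg_table["log2FC"].abs() >= 0.5) & (deg_table["adj_pval"] < 0.05)]
n_up = (degs["log2FC"] > 0).sum()
n_down = (degs["log2FC"] < 0).sum()
print(f"{len(degs)} DEGs in CD8+ T cells ({n_up} up, {n_down} down)")

# CDIRGs = overlap between CD8+ T-cell DEGs and immune-related genes
cdirgs = sorted(set(degs["gene"]) & set(immport["gene"]))
print(f"{len(cdirgs)} CD8+ T cell-associated immune-related genes (CDIRGs)")
```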
Uveal Melanoma Dataset Collection and Processing
The bulk RNA sequencing datasets of UM and the corresponding clinical information were downloaded from the TCGA database. In addition, several UM-related gene expression datasets (accession numbers GSE22138 and GSE84976) (Laurent et al., 2011; van Essen et al., 2016) deposited in the Gene Expression Omnibus were downloaded for external validation. Moreover, data from a previously published cohort treated with CTLA-4 and PD-1 blockade therapy were obtained to predict immunotherapy response (Roh et al., 2017). The raw gene expression datasets were processed as follows: first, probe IDs were annotated to genes using the Bioconductor package and the corresponding platform annotation profiles; next, genes with missing values in >50% of samples were excluded; finally, the raw matrix data were quantile normalized and log2 transformed.
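The preprocessing steps listed above could be sketched as follows; this is a simplified, assumed implementation in Python rather than the exact Bioconductor workflow used by the authors.

```python
import numpy as np
import pandas as pd

def preprocess_expression(expr: pd.DataFrame) -> pd.DataFrame:
    """expr: genes x samples matrix of probe-collapsed raw intensities."""
    # 1) Exclude genes with missing values in >50% of samples
    expr = expr.loc[expr.isna().mean(axis=1) <= 0.5]
    # impute any remaining missing values with the per-gene mean (simplifying assumption)
    expr = expr.apply(lambda row: row.fillna(row.mean()), axis=1)

    # 2) Quantile normalization: force every sample to share the same value distribution
    rank_mean = expr.stack().groupby(expr.rank(method="first").stack().astype(int)).mean()
    expr_qn = expr.rank(method="min").stack().astype(int).map(rank_mean).unstack()

    # 3) log2 transform (offset avoids taking the log of zero)
    return np.log2(expr_qn + 1)
```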
Construction of CD8+ T Cell-Related Gene Signature
The association between CDIRGs and the overall survival (OS) time of UM patients in TCGA was analyzed. Univariate Cox regression analysis was performed to identify survival-related genes (p < 0.05). Next, the variable importance (VIMP) algorithm in random survival forest (RF) analysis was used to rank the importance of candidate genes, and the multivariate Cox regression method was then used to construct a risk score model with the selected CDIRGs. The risk score was calculated as: Risk score = Σ_{i=1}^{N} (coef_i × expr_i), where N is the number of genes selected by RF, expr_i is the expression value of gene i, and coef_i is its coefficient. Furthermore, Kaplan-Meier tests were applied to the multiple gene-combination signatures, and log-rank p values were calculated and used to compare the different gene combinations and eventually select the best gene signature (Sui et al., 2019). Receiver operating characteristic (ROC) analysis for 3- and 5-year OS or metastasis-free survival (MFS) was performed, and the area under the curve (AUC) was calculated to assess the sensitivity and specificity of the gene signature. In addition, to test the robustness of the results, the CDIRG gene signature was further verified in the GSE22138 and GSE84976 datasets.
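To make the risk model concrete, the following sketch computes the risk score Σ coef_i × expr_i for each patient, splits patients at a cutoff, and compares survival with a log-rank test using the lifelines package; the coefficients, file names and median cutoff here are illustrative assumptions (the study used multivariate Cox coefficients and an optimal cutoff).

```python
import pandas as pd
from lifelines.statistics import logrank_test

# Illustrative coefficients for the signature genes (invented values)
coefs = pd.Series({"IFNGR1": 0.8, "ANXA6": 0.6, "TANK": -0.5})

# Assumed inputs: genes x patients expression matrix and a clinical table with OS_time / OS_event
expr = pd.read_csv("uvm_expression.csv", index_col=0)
clinical = pd.read_csv("uvm_clinical.csv", index_col=0)

# Risk score for each patient: sum over signature genes of coef_i * expr_i
risk_score = expr.loc[coefs.index].T.dot(coefs)

df = clinical.join(risk_score.rename("risk"))
high = df["risk"] >= df["risk"].median()   # median split as a stand-in for the optimal cutoff

result = logrank_test(
    df.loc[high, "OS_time"], df.loc[~high, "OS_time"],
    event_observed_A=df.loc[high, "OS_event"],
    event_observed_B=df.loc[~high, "OS_event"],
)
print(f"log-rank p = {result.p_value:.4f}")
```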
Subgroup Analysis
To evaluate the relationship between risk score distribution and clinical features, the subgroup analyses were separately performed for different types of UM clinical variables including age, stage, histological type, chromosome 3 status, metastasis, and vital status. Besides, in order to evaluate the prognostic value, multivariate Cox regression analysis was performed to determine whether the risk score had a prognostic value independent of other clinical variables.
Pathway Enrichment Analysis
To explore the signaling pathways that differ between the low- and high-risk groups, gene set enrichment analysis (GSEA) was conducted. First, differential analysis of all genes between the low- and high-risk groups was performed, and the genes were ordered by log2 fold change. Then, gene set databases including the cancer Hallmarks collection (h.all.v7.0.symbols) and the Kyoto Encyclopedia of Genes and Genomes collection (c2.cp.kegg.v7.2.symbols) were used to investigate the signaling pathways correlated with the different UM subgroups. Significance was set at FDR ≤ 0.1 and p ≤ 0.05, and the top five pathways, considered the most significant, are illustrated in the figures.
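As a rough illustration of how such a preranked GSEA could be run in Python, the sketch below assumes the gseapy package and its prerank interface; the file names, parameters and thresholds are assumptions and this is not the authors' clusterProfiler-based workflow.

```python
import gseapy as gp
import pandas as pd

# Assumed ranking file: all genes ordered by log2 fold change (high- vs low-risk group)
ranking = pd.read_csv("high_vs_low_log2fc.rnk", sep="\t", header=None, names=["gene", "log2fc"])

pre_res = gp.prerank(
    rnk=ranking,                          # two-column gene/score table
    gene_sets="h.all.v7.0.symbols.gmt",   # cancer Hallmarks collection, as named in the text
    permutation_num=1000,
    seed=42,
    outdir=None,                          # keep results in memory only
)

# Inspect the top enriched pathways; the study retained FDR <= 0.1 and p <= 0.05
print(pre_res.res2d.head(5))
```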
Potential Indicator for Immunotherapy Response
To assess the ability of the risk score to predict immunotherapy response, the correlation between the risk score and immune checkpoint genes such as PD-1, CTLA-4, and LAG3 was explored. Most importantly, the immunophenoscore, a well-established predictor of response to checkpoint blockade in melanoma (Charoentong et al., 2017), was also included in our analysis. Next, to investigate the associations between the risk score and the immune microenvironment, the "CIBERSORT" algorithm was applied to calculate the proportions of immune cells, and correlation and subgroup analyses between the risk score and these immune cells were conducted. Finally, the tumor immune dysfunction and exclusion (TIDE) algorithm was used to predict the clinical response to immune checkpoint inhibitors, and subclass mapping (SubMap) was performed to compare the expression similarity between the risk subgroups (high/low risk score) and melanoma patients with different anti-PD-1 and anti-CTLA-4 therapy responses, in order to predict the efficacy of immunotherapy in UM patients.
Statistical Analysis
All statistical analyses were conducted using R software (v.3.6.0). The RF algorithm was implemented with the "randomForestSRC" package (Nasejje et al., 2017). Kaplan-Meier tests and ROC analyses were performed using the "survival" and "survivalROC" packages (Therneau and Li, 1999; Huang et al., 2020). The best cutoff values were computed using the "survminer" package (Zeng et al., 2019). The CIBERSORT method was run with the "CIBERSORT" package (Newman et al., 2015). GSEA was performed with the "clusterProfiler" package (Yu et al., 2012). Correlation analyses were performed with the Spearman test. For comparisons of two groups and of more than two groups, the unpaired t-test and one-way ANOVA were used, respectively. Univariate and multivariate Cox regression were used to evaluate the relevant prognostic factors, and the hazard ratios (HR) and 95% confidence intervals (95% CI) of the prognostic factors were calculated. P < 0.05 was regarded as statistically significant in all statistical tests.
Opposite Outcome for CD8+ T Cells in Cutaneous Melanoma and Uveal Melanoma
On the TIMER2.0 website, multiple immune deconvolution methods including "XCELL" (Aran et al., 2017), "TIMER", "QUANTISEQ" (Finotello et al., 2019), "MCPCOUNTER" (Becht et al., 2016), "CIBERSORT-ABS", and "CIBERSORT" (Newman et al., 2015) were used to estimate immune infiltrates in cutaneous and uveal melanoma. Using the univariable Cox proportional hazards model, we found, surprisingly, that tumor-infiltrating CD8+ T cells act as a protective factor in cutaneous melanoma patients, whereas increased tumor infiltration of CD8+ T cells increases risk in UM patients (Figure 1A and Supplementary Table 2). Kaplan-Meier curves also showed that the high tumor-infiltrating CD8+ T-cell subgroup had a significantly shorter survival time than the low tumor-infiltrating CD8+ T-cell subgroup in UM, regardless of the deconvolution method used (Figures 1B-I).
Identification of CDIRGs Based on Single-Cell RNA-Seq
The single-cell RNA-seq dataset GSE139829, well processed and deposited in the TISCH website (Durante et al., 2020), contains 103,703 tumor and non-neoplastic cells from three metastatic and eight primary UM tumors. By applying UMAP algorithms, these mixed cells can be clustered and annotated into eight cell types: B cells, CD4+ T cells, CD8+ T cells, exhausted T cells, endothelial cells, malignant cells, mono/macrophages, and plasma cells (Figure 2A). The pie plot showed that CD8+ T cells were the main component of the UM tumor immune environment (Figure 2B), and the bar plot showed that CD8+ T cells accounted for a large proportion of cells in each patient (Figure 2C). Therefore, the CD8+ T cell-type-specific marker genes were obtained for further analysis. According to the selection criteria, 2,920 DEGs were screened out in the GSE139829 dataset, of which 1,691 genes were upregulated and 1,229 were downregulated (Figure 2D). Moreover, 1,793 immune-related genes were downloaded from the ImmPort database. Finally, 202 CDIRGs were obtained from the overlap (Figure 2E). Gene ontology (GO) enrichment analysis revealed that these CDIRGs were significantly enriched in T-cell activation, positive regulation of lymphocyte activation, immune response-activating cell surface receptor signaling pathway, MHC protein complex, antigen binding, immune receptor activity, and so on (Figure 2F).
Construction of CD8+ T-Cells-Related Gene Signature
In total, RNA sequencing data and clinical information of 171 eligible UM patients were acquired from three datasets: TCGA UM (n = 80), GSE22138 (n = 63), and GSE84976 (n = 28). Based on the overlap between DEGs in CD8+ T cells and immune-related genes, 202 CDIRGs were selected for univariate Cox regression analysis in the TCGA dataset, and a total of 16 CDIRGs were found to be significantly associated with the survival of UM patients (p < 0.05) (Figure 3A). Next, the top 10 most important genes, including IFNGR1, CDK4, ANXA6, HSP90AA1, TANK, SOS1, CSK, CKLF, MET, and RORA, were screened out by the random forest algorithm (Figure 3B). To find the optimal gene signature, Kaplan-Meier tests and log-rank p values were used to compare the different gene models. Eventually, the best gene signature, containing three genes (IFNGR1, ANXA6, and TANK) with the highest -log10 p value, was selected (Figure 3C). The violin plot of different cell types in the GSE139829 dataset showed that these three genes had high expression levels (Figure 3D). The UMAP plots also revealed that these genes were largely expressed in the CD8+ T-cell cluster (Figure 3E).
The three genes were then used to construct a risk score system by applying multivariate Cox analysis in the TCGA dataset. According to the formula, a risk score was calculated for each patient. UM patients in the TCGA dataset were then classified into high-risk and low-risk groups by applying the best cutoff value of the risk score. Kaplan-Meier curves showed that patients in the high-risk group had a shorter survival time than those in the low-risk group (log-rank p = 0.00031, HR = 6.781) (Figure 4A). To estimate the predictive power of the gene signature, ROC curves were drawn; the 3- and 5-year AUCs were 0.637 (95% CI: 0.479-0.847) and 0.681 (95% CI: 0.468-0.865), respectively (Figure 4D). In addition, verification tests were conducted in the GSE22138 and GSE84976 datasets, which were likewise divided into high-risk and low-risk groups. Kaplan-Meier curves showed that patients in the high-risk group had a worse prognosis than those in the low-risk group in both the GSE22138 dataset (log-rank p = 0.018, HR = 2.593) (Figure 4B) and the GSE84976 dataset (log-rank p < 0.0001, HR = 6.519) (Figure 4C).
The Relationship Between Risk Score Distribution and Clinical Features
The UM patients in the TCGA, GSE22138, and GSE84976 datasets were divided into high- or low-risk score groups by applying the optimal cutoff value. The distribution of patients across the risk score groups, chromosome 3 status, metastasis, and vital status clusters is illustrated in the Sankey plot (Figure 5A). The box plots showed that chromosome 3 status (Figure 5C), metastasis (Figure 5D), vital status (Figure 5E), and histological type (Figure 5F) were correlated with the risk score, whereas other clinical features such as age (Figure 5G), gender (Figure 5H), and tumor stage (Figure 5B) showed no relationship with the risk score. Furthermore, to explore prognostic factors for OS or MFS in the multiple datasets, the risk score of the gene signature and the clinical variables were analyzed by multivariate Cox regression (Figure 6A). The forest plot revealed that stage, metastasis, chromosome 3 status, histological type, and risk score were significantly associated with MFS or OS. More importantly, the risk score was significantly correlated with MFS or OS and could be regarded as an independent risk factor in TCGA (HR = 9.170, P = 0.001), GSE22138 (HR = 2.420, P = 0.048), and GSE84976 (HR = 1.820, P = 0.036).
Gene Set Enrichment Analysis
To explore the hallmark pathways differentially enriched in the high- and low-risk groups, GSEA was performed. According to the ordered pathways enriched in each phenotype, the significant pathways in the cancer Hallmarks and KEGG pathway collections were screened out (Supplementary Table 3), and the top five pathways are illustrated in the GSEA plots. The results suggested that hallmarks such as allograft rejection, inflammatory response, interferon alpha and gamma response, and oxidative phosphorylation were all enriched in the low-risk group (Figure 6B). The KEGG enrichment results indicated that the low-risk group was associated with pathways such as antigen processing and presentation, cell adhesion molecules (CAMs), chemokine signaling pathway, cytokine-cytokine receptor interaction, and natural killer cell-mediated cytotoxicity (Figure 6C).
Potential Indicator for Uveal Melanoma Immunotherapy
To further explore the potential response to immunotherapy, the association between the risk score and the expression levels of immune checkpoint genes (PD-1, CTLA-4, and LAG3) was investigated. The correlation analyses showed that the risk score of the gene signature was significantly positively associated with PD-1 (r = 0.445, p < 0.001), CTLA-4 (r = 0.25, p = 0.025), and LAG3 (r = 0.417, p < 0.001) (Figure 7A). The expression values of PD-1, CTLA-4, and LAG3 were compared between the high- and low-risk subgroups; the box plots showed that the high-risk group had significantly higher expression levels of PD-1, CTLA-4, and LAG3 than the low-risk group (Figure 7C). Moreover, the immunophenoscore, considered an effective predictor of immunotherapy response, was also positively correlated with the risk score (r = 0.261, p = 0.019) (Figure 7B). Subgroup analysis indicated that the immunophenoscore was higher in the high-risk group than in the low-risk group (Figure 7D). In addition, to explore the association between the risk score and the immune microenvironment, the CIBERSORT algorithm was first used to estimate 22 immune cell types in the UM samples (Supplementary Figure 1). The correlation analyses between the risk score and the 22 immune cell types suggested that CD8 T cells, regulatory T cells, and memory B cells were positively correlated with the risk score, while naïve B cells, activated dendritic cells, M2 macrophages, monocytes, and neutrophils were negatively associated with the risk score (Figure 7E). The differential analyses of immune infiltration between the high- and low-risk groups across the 22 immune cell types indicated that CD8 T cells were highly infiltrated in the high-risk group, while naïve B cells, monocytes, and neutrophils were highly infiltrated in the low-risk group (Figure 7F).
The close associations of the risk score with immune checkpoint genes and tumor immune infiltration prompted us to speculate that the risk score may be used to predict the response of UM to immunotherapy. Therefore, we used the TIDE algorithm (Jiang et al., 2018) to calculate the TIDE score for each sample in TCGA (Figure 8A), GSE22138 (Figure 8C), and GSE84976 (Figure 8E). We found, surprisingly, that the low-risk score group had a larger percentage of responders than the high-risk group, whether in the TCGA dataset (high/low = 32.61%/47.06%; Figure 8B), GSE22138 (high/low = 33.33%/47.62%; Figure 8D), or GSE84976 (high/low = 0.00%/33.33%; Figure 8F). Moreover, we performed subclass mapping to compare the expression profiles of the high/low subgroups with those of another published dataset containing 47 melanoma patients who responded to immune checkpoint inhibitors (CTLA-4 and PD-1) (Roh et al., 2017). Interestingly, we found that the high-risk group is more likely to respond to anti-PD-1 therapy, whether in TCGA, GSE22138, or GSE84976 (Figure 8G), whereas the patients in the low-risk group appear insensitive to anti-CTLA-4 or anti-PD-1 therapy.
DISCUSSION
Currently, cancer immunotherapy, regarded as a promising therapeutic approach, is widely used in CM patients. However, unresponsiveness or limited response rates to immunotherapies are often observed in UM patients (Hoefsmit et al., 2020). The successful application of immune checkpoint blockade in CM greatly depends on the ability to mount an anti-tumor immune response, which largely reflects the density of tumor-infiltrating CD8+ T cells (Tavera et al., 2018). Compared with the skin, the eye is regarded as an immune-privileged site, which restrains the secretion of immune-mediated cytokines and has limited lymphatic circulation, further increasing the retention of tumor antigens and eventually causing CD8+ T-cell exhaustion through continuous exposure (Niederkorn, 2012; Rossi et al., 2019). Therefore, we first performed multiple immune deconvolution methods to comprehensively analyze the prognostic role of tumor-infiltrating CD8+ T cells in UM and CM. The results showed that higher infiltration of CD8+ T cells in CM indicated a favorable clinical outcome, while larger numbers of CD8+ T cells decreased the overall survival of UM patients. This is consistent with previous studies reporting that CD8+ T cells are associated with favorable prognosis in CM and predict poor prognosis in UM (Azimi et al., 2012; Gartrell et al., 2018; Wang et al., 2020). In addition, Luo et al. recently identified several prognostic genes in UM, almost all of which were correlated with CD8+ T-cell abundance (Luo and Ma, 2020). Hence, it is urgent to explore adaptive immune response gene signatures to improve the effect of tumor-infiltrating CD8+ T-cell targeting approaches and the response to immunotherapies in UM.
Bulk RNA sequencing of tumor tissue cannot adequately represent the CD8+ T-cell genomic signature in UM. Therefore, in this study, single-cell sequencing of UM was used to explore the tumor immune environment, and CD8+ T cells were found to be the main immune cell component. Moreover, exhausted CD8+ T cells account for a large proportion of cells in each UM patient, in accordance with prior reports that UM patients have a higher ratio of exhausted CD8+ T cells (Durante et al., 2020; Hoefsmit et al., 2020). This phenomenon highlights that an immunosuppressive environment exists in UM and suggests that high infiltration of exhausted CD8+ T cells promotes tumor immune evasion. Next, the main concern of this study was the potential molecular mechanism by which CD8+ T cells regulate immune tolerance; thus, we screened the CDIRGs based on previously reported immune-related genes and CD8+ T-cell-specific genes identified from single-cell RNA-seq. Within the CDIRGs, we found that these genes were positively associated with pathways such as immune response-activating signal transduction, the MHC complex, and immune receptor activity, which further supports the validity and reliability of our results.
Furthermore, we constructed a prognostic gene signature that classified UM patients into high- and low-risk groups for OS or MFS; patients in the high-risk group showed poor survival. The prognostic gene signature contained three CDIRGs: IFNGR1, ANXA6, and TANK. Interestingly, all of these genes have been shown to be associated with cancer or the immune response. For instance, IFN-γ signaling is known as an essential effector pathway of the anti-tumor immune response: IFN-γ must bind the IFN-γ receptor (IFNGR1 or IFNGR2) to modulate the JAK-STAT pathways and affect immune cell activation (Dunn et al., 2005). Several studies reported that defects in IFNGR1 render cancer cells unresponsive to immunotherapy, ultimately promoting cancer cell proliferation (Fu et al., 2011; Gao et al., 2016). Annexin A6 (ANXA6) is a member of the membrane-binding annexin protein superfamily, and its expression level has been reported to be closely correlated with various cancers (Qi et al., 2015). Cornely et al. suggested that ANXA6 is an important component of the T-cell plasma membrane, and a lack of ANXA6 was suggested to disturb T-cell proliferation and affect immune signaling pathways (Cornely et al., 2016). In addition, the TRAF family member-associated NF-κB activator (TANK) is regarded as an inhibitor of the immune response via IL1R/TLR activation (Kawagoe et al., 2009). Wang et al. (2015) also reported that TANK may be considered a therapeutic target to prevent hyperimmune responses and improve cancer therapeutic resistance.
To verify the accuracy of the gene signature for prognostic prediction, the associations between the CD8+ T-cell gene signature and clinical parameters were investigated. The results revealed that the risk score of the gene signature was closely correlated with chromosome 3 status, metastasis, vital status, and histological type. Additionally, the multivariate Cox regression analysis indicated that the risk score of the gene signature could be regarded as an independent prognostic factor in UM. Notably, all the evidence indicated that the CD8+ T-cell gene signature was well constructed and can accurately predict OS or MFS in UM.
Through GSEA, we found that the low-risk phenotype shows immune activation: immune pathways such as allograft rejection, inflammatory response, interferon alpha and gamma response, antigen processing and presentation, and cytokine-cytokine receptor interaction were all positively activated. By CIBERSORT estimation, we also observed that the high-risk group has higher infiltration of CD8 T cells. Thus, it is easy to understand why low-risk UM patients have a better survival outcome than high-risk patients.
Presently, only a few UM patients respond to immunotherapies in clinical observations. However, we surprisingly found that the risk score has a significant positive correlation with the expression of PD-1, CTLA-4, and LAG3 and with the immunophenoscore. Hence, it is essential to assess the value of the gene signature in predicting immunotherapy responses. Jiang et al. (2018) developed the TIDE algorithm to help researchers identify patients who may benefit more from immune checkpoint blockade (ICB). Combined with TIDE algorithm analysis, we found that low-risk UM patients with a lower TIDE score are more likely to respond to ICB. Therefore, we are convinced that this CD8 T cell-related gene signature is a potential indicator of UM immunotherapy response. However, which immune checkpoint inhibitors are suitable for UM is still unclear. Thus, the subgroups with different risk scores were explored in another published dataset containing 47 patients with melanoma who responded to immune checkpoint inhibitors (anti-PD-1 or anti-CTLA-4) (Lu et al., 2019). We surprisingly found that the low-risk group is promising in response to immune checkpoint inhibitors but is unresponsive to anti-PD-1 or anti-CTLA-4 therapy, whereas the high-risk group is sensitive to anti-PD-1 and anti-CTLA-4 therapy but has a lower TIDE score. These opposite results prompted us to assume that it is urgent to discover and apply novel immune checkpoint inhibitors in clinical treatment. For example, recent studies showed that LAG-3, rather than PD-1 or CTLA-4, is the dominant marker on exhausted CD8+ T cells (Danaher et al., 2017). Anti-LAG-3 therapy might rescue the exhausted T cells or serve as an adjuvant approach in the treatment of UM (Puhr and Ilhan-Mutlu, 2019; Durante et al., 2020).
In summary, our study constructed a prognostic and immunotherapy response-related gene signature by integrative analysis of tumor-infiltrating CD8+ T cells, immune-related genes, and clinical information. Our work helps to explain the distinct responses to current immune checkpoint inhibitors between CM and UM. Moreover, the gene signature could classify UM subsets with different infiltration of CD8+ T cells and may support individualized immunotherapy in the future.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
The role of genetic testing in the diagnostic workflow of pediatric patients with kidney diseases: the experience of a single institution
Purpose Inherited kidney diseases are among the leading causes of kidney failure in children, resulting in increased mortality, high healthcare costs and need for organ transplantation. Next-generation sequencing technologies can help in the diagnosis of rare monogenic conditions, allowing for optimized medical management and therapeutic choices. Methods Clinical exome sequencing (CES) was performed on a cohort of 191 pediatric patients from a single institution, followed by Sanger sequencing to confirm identified variants and for family segregation studies. Results All patients had a clinical diagnosis of kidney disease: the main disease categories were glomerular diseases (32.5%), ciliopathies (20.4%), CAKUT (17.8%), nephrolithiasis (11.5%) and tubular disease (10.5%). 7.3% of patients presented with other conditions. A conclusive genetic test, based on CES and Sanger validation, was obtained in 37.1% of patients. The highest detection rate was obtained for ciliopathies (74.4%), followed by nephrolithiasis (45.5%), tubular diseases (45%), while most glomerular diseases and CAKUT remained undiagnosed. Conclusions Results indicate that genetic testing consistently used in the diagnostic workflow of children with chronic kidney disease can (i) confirm clinical diagnosis, (ii) provide early diagnosis in the case of inherited conditions, (iii) find the genetic cause of previously unrecognized diseases and (iv) tailor transplantation programs. Supplementary Information The online version contains supplementary material available at 10.1186/s40246-023-00456-w.
Introduction
Pediatric nephropathies comprise widely different disease entities in terms of clinical presentation, evolution, and therapeutic options [1-4]. Approximately 30% of children with chronic kidney disease (CKD) suffer from a monogenic condition, a percentage that increases when considering children with end-stage renal disease (ESRD) [5,6]. Many of these children remain undiagnosed at the time of transplantation [4,6,7]. Reaching a diagnosis for these patients not only implies the end of a diagnostic odyssey, but also presents several advantages for prognosis, management, and treatment.
Pioneering studies have consistently shown that the implementation of next-generation sequencing (NGS) techniques has significantly improved the diagnostic yield in patients with inherited kidney diseases (IKD) [8]. More recently, the widespread use of NGS has made genetic diagnosis available within a reasonable time and at affordable costs, raising the question of whether and when it should be integrated into the routine diagnostic workflow [5].
Genetic diagnosis in children is of utmost importance for several reasons. The first is that it may be relevant to the clinical approach to the disease, the typical example being nephrotic syndromes, where the identification of structural variants in podocyte-related genes argues against immunosuppressive therapies that would otherwise be routinely used over a period of several months [9]. The second is that it may be highly relevant for the child's family and for the identification of other members carrying a variant that may be transmissible to future generations. Once a pathogenic variant is identified in a proband, cascade testing of family members and genetic counselling of variant carriers represent standard practice in clinical genetics [5]. The third is that knowing the pathogenic variant is essential in the transplantation context, where the donor may be a relative: it is critical to rule out the presence of the same variant(s) in the organ donor, as well as to identify all family members potentially in need of a transplant. The fourth is that some diseases present a high risk of relapse after organ transplantation, such as focal segmental glomerulosclerosis [10], or their outcome may be improved by a more tailored choice of the transplant to be performed, as in the case of primary hyperoxaluria, where a combined kidney-liver transplant may result in a better outcome [11]. Finally, having a clear disease diagnosis may enable the patient to take part in clinical trials and to benefit from novel treatment options [12,13].
At the end of 2018, in a collaboration between pediatric nephrologists and geneticists, we started performing genetic tests for monogenic conditions potentially leading to ESRD and hence transplantation. Our hospital is the largest in Northwest Italy, serving a catchment area of about 5 million people. We selected a "one size fits all" type of analysis, sequencing the clinical exome, i.e., approximately 6,700 genes associated with monogenic conditions, and focusing the analysis on gene panels tailored to the clinical suspicion, thereby limiting incidental findings and reducing the time needed for sequence analysis.
Overall, by applying this pipeline, we obtained a diagnostic yield in line with published data, with some heterogeneity among the different clinical suspicions, as expected. The results confirm the relevance of including routine genetic testing and counselling in the diagnostic workflow of pediatric patients affected by nephropathies. Indeed, the identification of causative variants is critical for their clinical management and, potentially, for optimal living-donor selection.
Patients' recruitment
The study was based on a diagnostic cohort of 191 consecutive pediatric patients (age at recruitment < 18 years old), recruited by the Pediatric Nephrology, Dialysis and Transplantation Units at the Regina Margherita Children's Hospital and referred to the Immunogenetics and Transplant Biology Service for genetic analysis. All patients included in the study provided a written informed consent signed by both parents, whenever possible.
Sample preparation, sequencing, and bioinformatics analyses
Nucleic acid extraction from peripheral blood, analysis of DNA quality, library preparation and sequencing were performed as previously reported [14]. Raw sequencing data were converted into FASTQ files, aligned with the Enrichment 3.1.0 or DRAGEN Enrichment tools (Illumina) and mapped on the TruSight-One Expanded v2.0 manifest using the Homo sapiens UCSC GRCh37 genome as reference, to obtain single nucleotide variant, copy number variant (CNV) and structural variant VCF files. For copy number identification, a baseline made of sequencing data from 5 different patients, all negative for CNVs (as per array comparative genomic hybridization data), was used. This approach allows the detection of CNVs even on the sex chromosomes, since the reference group comprised both female and male individuals and the sex of the subject to be analyzed was always specified during the alignment phase. Variant calling and prioritization were performed following defined criteria. Read alignments and exon coverage of the genes of interest were checked and displayed with the Integrative Genomics Viewer (IGV), freely available from UC San Diego and the Broad Institute of MIT and Harvard, Boston (https://software.broadinstitute.org/software/igv/). Variants to be included in the final genetic report were classified according to the American College of Medical Genetics and Genomics (ACMG) criteria.
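To illustrate the baseline-based CNV detection idea described above, the following simplified sketch compares a patient's per-target read depth with the mean of the CNV-negative baseline samples; the file layout, normalization and thresholds are illustrative assumptions, not the Illumina/DRAGEN implementation.

```python
import numpy as np
import pandas as pd

# Assumed per-target (exon) read-depth tables: rows = targets, columns = samples
baseline = pd.read_csv("baseline_depth_cnv_negative.csv", index_col=0)   # 5 CNV-negative samples
patient = pd.read_csv("patient_depth.csv", index_col=0)["depth"]

# Normalize each sample by its own median depth to remove library-size effects
baseline_norm = baseline / baseline.median()
patient_norm = patient / patient.median()

# log2 ratio of patient coverage to the mean coverage of the baseline
log2_ratio = np.log2((patient_norm + 1e-6) / (baseline_norm.mean(axis=1) + 1e-6))

# Flag putative losses/gains with illustrative thresholds (~single-copy change)
calls = pd.cut(log2_ratio, bins=[-np.inf, -0.6, 0.45, np.inf], labels=["loss", "neutral", "gain"])
print(calls.value_counts())
```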
Generation of in silico gene-disease list
Genes to be considered for variant identification and prioritization were defined based on the clinical suspicion. In silico gene lists were generated by matching (i) data from different databases correlating genotype to phenotype (OMIM, PanelApp England, ClinGen, Malacards), and (ii) data from the literature. The available gene lists are updated once a year based on novel evidence of gene-disease associations.
Sanger sequencing and multiplex ligation-dependent probe amplification (MLPA)
Sanger sequencing and/or MLPA analyses were performed to validate variants identified by NGS and for family segregation studies. Briefly, DNA was extracted from a second, independent aliquot of the proband's peripheral blood and from the parents. The DNA regions of interest were amplified by PCR using specific experimental conditions. The purity and specificity of the amplified regions were checked on 0.8% or 1.5% agarose gels. Amplified PCR products were then Sanger sequenced using the same primers. For PKD1 variants, validation was performed using a long-range PCR followed by a nested PCR to avoid any interference from the pseudogenes. Electropherograms were then analyzed using the Chromas software version 2.6, freely available at www.technelysium.com.au.
Definition of criteria for the return of genetic analysis results
We previously reported on the design and set-up of a "kidney" gene panel comprising >400 genes involved in different forms of kidney disease [14]. For this study, we implemented the analysis with subpanels focused on the specific clinical category of suspicion (e.g., CAKUT, glomerulopathy, tubulopathy, etc.) and prioritized a specific group of genes for analysis. This approach limited the number of analyzed genes, simplifying the analyses and reducing the number of incidental findings. Only when the genetic result was negative at the end of the analytical flow with the relevant panel(s) and (i) the clinical phenotype was not clearly indicative or (ii) overlapped different disease categories, was the genetic analysis extended to the so-called "kidney full-list" or "kidneyome", a super-panel comprising all the genes included in the subpanels.
As a first step, we defined a set of criteria for the interpretation of NGS results. After performing clinical exome sequencing (CES) and data alignment, a pipeline of analysis was applied to filter in the relevant variants. Specifically, based on the clinical suspicion, identified variants were filtered against in silico gene lists specific for the different disease macro-categories. Synonymous variants not impacting the splicing mechanism and intronic variants not mapping within the splicing region were excluded, retaining only nonsynonymous, nonsense, frameshift, and splicing-affecting variants. Then, only rare variants (frequency less than 1% in the population) and variants with an allele fraction in the patient of at least 0.2 and a coverage of at least 20 reads were included. The remaining variants were annotated and further curated based on (i) mode of inheritance, (ii) nucleotide conservation, (iii) protein impact, exploiting different databases to check the scores, and (iv) the literature, if any. At this point, the filtered-in variants were listed in a so-called "technical report". The technical report was interpreted by a medical geneticist to produce the final genetic report for the patient and his/her family. During the genetic consultation, family segregation studies were proposed to (i) confirm the variants in the proband, and (ii) include/exclude non-causative variants based on their segregation in the family (Fig. 1).
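A simplified sketch of the prioritization criteria just described is shown below, applied to a hypothetical annotated variant table; column names, file names and consequence labels are assumptions, not the authors' actual pipeline.

```python
import pandas as pd

# Assumed export of annotated variants (one row per variant call)
variants = pd.read_csv("annotated_variants.tsv", sep="\t")
panel_genes = set(line.strip() for line in open("disease_panel_genes.txt"))  # assumed in silico gene list

KEPT_CONSEQUENCES = {"missense", "nonsense", "frameshift", "splice_region"}

in_panel = variants["gene"].isin(panel_genes)
relevant_effect = variants["consequence"].isin(KEPT_CONSEQUENCES)
rare = variants["population_af"] < 0.01                    # frequency < 1% in the population
well_supported = (variants["allele_fraction"] >= 0.2) & (variants["depth"] >= 20)

filtered = variants[in_panel & relevant_effect & rare & well_supported]
print(f"{len(filtered)} variants retained for curation and ACMG classification")
```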
By adopting these criteria, we were able to define three categories of genetic report. First, a "conclusive report", which included pathogenic (C5) and likely pathogenic (C4) variants; reports of variants of unknown significance (VUS, C3) were considered conclusive only if fully compatible with the clinical picture and if family segregation studies confirmed their possible role. Second, an "uncertain report", which included C3 variants identified by CES that were not yet, or could not be, validated in the context of family segregation studies, or C4/C5 variants that were not fully in line with the clinical phenotype. Third, an "inconclusive report", which included (i) a negative CES analysis, meaning that no variants were identified by NGS; (ii) single variants in recessive genes; and (iii) C3 variants not in line with the clinical phenotype, identified when the analysis was extended to all kidney-disease-related genes (Fig. 1).
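The three report categories can also be summarized as a simple decision rule, restated schematically below with hypothetical function and argument names.

```python
def classify_report(acmg_classes, fits_phenotype, segregation_confirmed,
                    single_het_in_recessive_gene=False):
    """Return 'conclusive', 'uncertain' or 'inconclusive' following the criteria above.

    acmg_classes: ACMG classes of the reported variants, e.g. ["C5"], ["C3"], or [] if CES was negative.
    """
    if not acmg_classes or single_het_in_recessive_gene:
        return "inconclusive"
    if any(c in ("C4", "C5") for c in acmg_classes):
        return "conclusive" if fits_phenotype else "uncertain"
    if "C3" in acmg_classes:
        if fits_phenotype and segregation_confirmed:
            return "conclusive"
        return "uncertain" if fits_phenotype else "inconclusive"
    return "inconclusive"

print(classify_report(["C4"], fits_phenotype=True, segregation_confirmed=True))    # conclusive
print(classify_report(["C3"], fits_phenotype=True, segregation_confirmed=False))   # uncertain
```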
Main features of the recruited cohort
This study describes a cohort of 191 pediatric patients (0-18 years of age) who were consecutively referred by the Pediatric Nephrology Unit for genetic analysis from November 2018 to May 2022, with an average of 50 new patients enrolled each year. Criteria for genetic testing were (i) nephropathy associated with a positive family history of kidney disease, (ii) clinical suspicion of a monogenic condition, or (iii) the need to rule out a monogenic condition (as in the case of nephrotic syndromes, where distinguishing monogenic from non-monogenic disease is clinically meaningful for prognosis and treatment).
The cohort was divided on the basis of the clinical suspicion into 6 different disease macro-categories: congenital abnormalities of the kidney and urinary tract (CAKUT; n = 34), ciliopathies (n = 39), glomerulopathies (n = 62), nephrolithiasis (n = 22), tubulopathies (n = 20), and other diseases, which also included syndromic phenotypes (n = 14). Except for CAKUT and tubulopathies, which showed an equal distribution of females and males, all the other categories showed a prevalence of male subjects (Fig. 2a). When looking at the age at recruitment, no significantly different distribution was highlighted among the groups, with mean ages ranging from 6.7 to 10 years in "other diseases" and glomerulopathies, respectively. However, when looking at the median age, CAKUT showed the lowest value (4.6 years), in keeping with a congenital phenotype. From the ethnicity point of view, independently of the disease macro-category considered, most patients were European, followed by African, with only a few patients being Asian, Latin-American, or of mixed ancestry (Fig. 2b). Among the cohort, only 8 patients had consanguineous parents. Finally, when taking family history of kidney disease into account and considering male and female subjects separately, the distribution between positive and negative cases was heterogeneous across the different disease categories. Overall, most of the recruited cohort, independently of gender, did not have a positive family history, as shown for CAKUT, glomerulopathies, tubulopathies, and other kidney disorders. However, in ciliopathies and nephrolithiasis, a significant proportion of cases presented with a positive family history, with a different distribution between females and males (13 cases out of 22 for ciliopathies, and 5 out of 12 for nephrolithiasis; Fig. 2c).
CES and family segregation studies allowed the identification of causative variants in a significant proportion of patients
All patients underwent CES and were analyzed for variant prioritization and annotation following the criteria described above (Fig. 1). Overall, variants were detected in 154 patients (80.6%), with 37 patients (19.4%) presenting no variants. Sanger validation of the identified variant(s) and family segregation studies have been performed so far in 90 of the 154 patients (58.4%). This approach allowed us to (i) confirm the variants identified by CES in all cases, (ii) confirm their segregation with the phenotype, and (iii) identify de novo variants. Among the group of patients in whom variants were identified by CES, a conclusive genetic report was obtained in 71 (46.1%; in 49 patients, variants were validated by Sanger sequencing and family segregation studies), while 22 (14.3%) and 61 (39.6%) patients remained with an uncertain genetic diagnosis or were classified as inconclusive, respectively (Fig. 3).
Overall, the application of CES followed by, whenever possible, family segregation studies made it possible to reach a genetic diagnosis in a significant proportion of cases.
The diagnostic yield was heterogeneous across the different disease macro-categories, with ciliopathies showing the highest diagnostic rate (74.4% of patients diagnosed), followed by nephrolithiasis and tubulopathies (45.5% and 45%, respectively), and glomerulopathies and CAKUT (24.2% and 20.6%). The macro-category "others", which was the most heterogeneous, presented the lowest diagnostic yield, with only 1 patient out of 14 receiving a genetic diagnosis (7.1%; Table 1). Not surprisingly, when comparing a positive versus a negative family history, the former group of patients showed a higher diagnostic rate, with 59.5% of patients diagnosed (Table 1).
Variant distribution and characteristics in the diagnosed cohort of patients
Looking at the patients with a conclusive genetic report, 96 variants in 32 genes were identified and listed in the patients' genetic reports (Fig. 4a and Additional file 1: Tables S1-S6). A few considerations can be drawn from these data: (i) several patients presented with more than one variant, either in the same or in different genes, not counting compound heterozygous variants in recessive genes (e.g., #41, #43, #109; Fig. 4a and Additional file 1: Tables S2, S3). A significant proportion of these cases belong to the ciliopathies macro-category, and specifically to polycystic kidney disease, posing the question of whether additional variants within the PKD1 gene may have a clinical impact, leading to an earlier diagnosis (manuscript in preparation). (ii) The most recurrently mutated genes within the cohort were COL4A5 and PKD1, in keeping with the frequencies of Alport syndrome and autosomal-dominant polycystic kidney disease (ADPKD).
(vi) No significant differences in the diagnostic rate were highlighted when comparing European and non-European subjects, even though it must be kept in mind that the former group represented the great majority of the cohort.
Finally, (vii) independently of the disease category considered, most of the identified variants had already been published and associated with a specific clinical phenotype (Fig. 4d). Ciliopathies, and in particular polycystic kidney disease, represented the only clinical suspicion with a significant number of unpublished variants (13 out of 46), probably because of the higher number of variants identified compared with the other disease categories. All the identified variants are detailed in Additional file 1: Tables S1-S6.
Discussion
Though rare in children, CKD has a profoundly negative impact on normal growth and development, compromising both quantity and quality of life. The most recent analyses of adult and pediatric patients who have received a kidney transplant, are included in the transplant waiting list, or are present in the registries of the European Renal Association-European Dialysis and Transplant Association (ERA-EDTA) indicate that up to 27% of them are undiagnosed at the time of transplantation [7]. In line with these data, by analyzing the Transplant Registry of the Italian National Transplant Center, we recently reported that approximately 17.2% of the pediatric cohort lacked a clear clinical diagnosis [6]. In addition, when considering the different disease categories, the great majority were affected by rare conditions and up to 50% by a monogenic disease [6]. These results suggest that genetic screening may be a valuable addition for increasing the diagnostic rate. It also represents a potent tool to confirm the clinical diagnosis, as well as to understand the genetics underlying more complex or syndromic diseases, ultimately impacting prognosis, management, and treatment.
Here, we report the results of the systematic use of a powerful genetic test, such as CES, in the diagnostic workflow of pediatric patients affected by nephropathies. Overall, a conclusive genetic test, based on CES followed by Sanger-based segregation studies, was obtained in 37.1% of patients, with a certain degree of heterogeneity across the different disease macro-categories. As expected based on clinical presentation, the highest detection rates were obtained for ciliopathies (74.4%), followed by nephrolithiasis (45.5%) and tubular diseases (45%), while most glomerular diseases and CAKUT remained undiagnosed. In the case of glomerular diseases, a negative genetic test is important per se in that it rules out a structural cause of the disease, with significant implications for clinical management and transplantation outcome.
These data are in line with previously published results, even though some differences may be registered depending on the group of patients considered, especially for highly homogeneous cohorts based mainly on the same ethnic group (Table 2). It must be noted that many of the families in which a VUS was identified are currently under investigation for variant segregation, which will most likely improve these figures.
These results underline the importance of an NGS-based genetic test, together with family segregation studies and/or complementary tests (e.g., MLPA, array-CGH), as part of the routine diagnostic workflow. In addition to the relevance of having a diagnosis, these tests allow the identification of other family members who may carry the same pathogenic variants, as well as an estimate of the risk of disease recurrence. Moreover, considering that a significant percentage of these patients require kidney transplantation at some point, the availability of a genetic test to screen family members carries important implications for the selection of a live donor within the family.
A second point to be discussed is the importance of distinguishing between genetic and non-genetic causes of some diseases. As an example, in the presence of a child with steroid-resistant nephrotic syndrome (SRNS), it is essential to rule out conditions caused by mutations in genes coding for structural proteins of the podocyte. This can help refine therapy, as children carrying pathogenic variants in podocyte genes generally do not benefit from immunosuppressive therapy, and predict prognosis, as "immunologic" SRNS is more likely to recur after transplantation. Genetic diagnosis may also result in fewer kidney biopsies, particularly for patients with glomerulopathy.
A third relevant consideration in favor of genetic testing in the clinical diagnostic workflow of pediatric patients is the translational impact of identifying genetic variants. Indeed, there are actionable genes, meaning that the corresponding disease conditions can be treated based on the presence of pathogenic variants, as in the case of renin-angiotensin blockade for patients carrying pathogenic variants in the COL4A3/COL4A4/COL4A5 genes. Along the same lines, having a genetic report may avoid useless or even deleterious treatments, such as immunosuppressive therapies for patients carrying mutations in collagen-coding genes [5]. Moreover, it can be useful for patient stratification and to assess the potential risk of recurrence after a kidney transplant. As an example, patients diagnosed with atypical hemolytic uremic syndrome and with a positive genetic report identifying pathogenic variants in the CFH, C3, or CFB genes are at moderate to high risk of recurrence after transplantation [28]. For them, administration of eculizumab showed significant positive results, with no relapse or relapse in only a minority of cases, while its administration can be avoided in patients at low risk [28][29][30].
A fourth point concerns family planning, as we are dealing with a pediatric population whose parents may wish to have additional children. The availability of a genetic diagnosis may be extremely useful for genetic counseling, allowing all the available options for a future pregnancy to be proposed to the couple. In line with this point, within the present cohort, prenatal diagnosis was performed in 2 different cases via Sanger sequencing, screening the fetus for the specific variant originally found by CES in the proband.
A fifth point that needs to be stressed concerns the number of C3-VUS variants identified by NGS, which always pose a serious dilemma regarding their role in disease onset and progression and how they should be communicated during genetic counselling. In recent years, this topic has been addressed by designing novel computational methodologies that take into consideration not only nucleotide conservation and protein impact but also gene-association networks and pathway connections. Additional hints for deciphering the real meaning of C3 variants may come from transcriptomic analyses, through the detection of aberrant expression or aberrant splicing, as well as from functional validation studies [31,32]. Lastly, it is important to stress the relevance of periodic re-analysis of negative or inconclusive genetic reports and periodic re-evaluation of C3 variants. This kind of approach relies on (i) the discovery of new gene-disease/variant-disease associations, (ii) updated information from publicly available databases, (iii) reclassification of genetic variants based on functional evidence, and (iv) improvement of the in silico tools used for data alignment and variant annotation [33]. Up to now, no clear indication of the time interval after which negative/inconclusive cases or C3 variants must be re-analyzed has been provided by the Italian Society of Human Genetics (SIGU) or by the American College of Medical Genetics and Genomics guidelines. However, both these institutions suggest reviewing negative cases or variant classifications either based on new findings by the laboratory or by external sources (e.g., literature or databases) or following clinicians' requests [34][35][36]. In line with this final point, it is worth noting that in some cases a single variant in a recessive gene, highly compatible with the clinical phenotype, was found. While such variants per se cannot explain the clinical presentation, it is important to re-analyze and possibly re-align the original sequencing data to determine whether a second causative variant can be found.
A final point to be discussed is the financial impact of these tests on the National Healthcare system. In the Italian system, CES followed by analysis of a limited number of genes (< 8) is in the range of 1200 euros, all included, while larger panels cost approximately double that amount. While these costs may seem elevated, a timely diagnosis may avoid unnecessary additional tests, including biopsies, and may lead to optimized patient care and to early identification of family members with the same disease or at risk of developing it. Family segregation studies are in the range of 150 euros per variant tested per person. In our view, to make this diagnostic workflow efficient and sustainable, it is necessary to identify local/regional "reference hubs" that can centralize these analyses, reducing costs and accumulating essential experience in variant calling and interpretation.
Overall, these results confirm the relevance of including routine genetic testing and counselling in the diagnostic workflow of pediatric patients affected by nephropathies in whom a monogenic condition is suspected or who have a positive family history. For these patients, genetic testing should be considered at the beginning of their diagnostic journey, as it may improve clinical management, spare unnecessary treatments or diagnostic procedures, identify other family members potentially carrying the same genetic variants, and, in the case of kidney transplantation, lead to optimal live-donor selection.
Fabrication of Electrospun Chitosan / Nylon 6 Nanofibrous Membrane toward Metal Ions Removal and Antibacterial Effect
Nylon 6/Chitosan membranes were fabricated by electrospinning onto a Millipore glass fiber filter to produce a nanofibrous filter. Scanning electron microscopy (SEM) and water contact angle (WCA) measurements were performed to characterize the produced filter. The removal capability (adsorption) of the filter for metal ions was investigated for lead nitrate (Pb(NO3)2) and sodium chloride (NaCl). The antibacterial effect of the Nylon 6/Chitosan membrane was investigated against Escherichia coli. Optimum removal values for Pb(NO3)2 and NaCl reached 87% and 75%, respectively. This research demonstrates that the Nylon 6/Chitosan nanofibrous membrane has considerable potential for the removal of metal ions from aqueous solutions, with antibacterial activity reaching 96% and a reasonable inhibition zone against Escherichia coli (E. coli).
Introduction
Removal of pathogens, chemicals, and heavy metals to produce clean drinking water is the most important goal of water purification. Polymer gels and several polymer solutions have been electrospun to produce filters able to remove these contaminants. The most commonly electrospun polymers are chitin [1], Chitosan derivatives [2], poly(ethylene-vinyl alcohol) [3], poly(glycolic acid) and chitin [4], and Chitosan/PVA [5]. Recently, Chitosan and its derivatives have gained a lot of interest because of their wide range of applications in biomedicine, bioseparation, and food science. Chitosan, the deacetylated product of chitin (produced from the crust of crustacean shells), has desirable properties: it is biofriendly, biodegradable, and antibacterial [6]. Chitosan and chitin are also able to remove metals and dyes, so they can be used to clean water; this is due to the adsorption of the metals and dyes onto the cationic amine functional groups of chitosan. Various polysaccharide materials cross-linked with Chitosan can remove different metal and dye pollutants [7]. Chitosan and chitin also have the ability to remove polycyclic aromatic hydrocarbons more effectively [8]. Removal of species such as As+5 (arsenic) can be affected by the degree of deacetylation (DD). Adsorption and removal of particles in water can also be influenced by crystallinity. Crystallinity is increased by the amine group, which forms hydrogen bonds to other chitosan monomers, and this affects fiber morphology and adsorption [9]. Negatively charged dyes such as Reactive Black 5 can be removed by Chitosan, which becomes positively charged in an acidic environment, so that electrostatic forces are responsible for dye removal. It has been reported that increasing the molecular weight (MW) of chitosan from 80,100 to 308,300 reduces chitosan's ability to adsorb dye [10,11], because of changes in the internal structure of the chitosan chains and hydrogen bonding between hydroxyl and amine groups, which reduce the possibility of dye binding. It has also been found that the number of amine groups, determined by the DD, affects the removal of metal ions in water: a high DD value (97%) produced higher removal efficiency compared with chitosan with a DD value of 52% [11,12]. Finally, nanofibers, and in particular Chitosan nanofibers, are better at removing metals, chemicals, and bacteria from water [11].
Preparation of Nylon 6/Chitosan Blend Electrospinning Solution
Nylon 6 pellets and Chitosan powder at a concentration of 25 wt% were blended in formic acid solution (25% wt/v) at 30 °C using a laboratory magnetic stirrer for three hours to ensure complete dissolution of the solutes and obtain a homogeneous solution. Various ratios of Nylon 6/Chitosan (100/0, 90/10, 80/20, and 70/30) were used to prepare the blended nanofibrous membranes.
Electrospinning.
The prepared electrospinning solution was loaded into a 10 mL syringe equipped with a 22-gauge stainless steel needle tip. The electrospinning process was carried out at a voltage of 25 kV, a flow rate of 0.5 mL/hr, a needle tip diameter of 0.7 mm, and a distance of 15 cm between electrodes. The electrospinning process took four hours, and these parameters were kept fixed for all the following solutions. The process was carried out at a room temperature of 25 ± 1 °C and a relative humidity of 25-30%. The prepared electrospun nanofiber sheet has an area of 153.86 cm² and a diameter of 14 cm, and the weight of the prepared electrospun nanofiber sheet is 0.0035 g per 1 cm². The electrospinning process was carried out with a bio-electrospinning/electrospray system (ESB-200) provided by Nano NC, South Korea, which is shown in Figure 1.
Characterization.
Scanning electron microscopy (Model: VEGA3 LM-TESCAN) was used to study the surface morphology of the nanofibers. Low variable pressure was used in all SEM tests for all specimens; this pressure was sufficient, so no additional gold ion sputter coating of the samples was needed to achieve conductivity on the specimen surfaces. Solution viscosity was measured with a DV-II-pro viscometer at room temperature (25 ± 1 °C). Solution surface tension was measured with a surface tensiometer (model JYW-200A, Laryee Technology Co.) using a platinum ring. Solution electrical conductivity was measured using a conductivity meter (model C and 7110 Inolab).
Contact Angle Measurement.
The wettability of the electrospun mats was evaluated by deionized-water contact angle measurements. A CAM 110 contact angle meter (Germany) was used. Deionized water was automatically dropped onto the membrane, and the measurement was carried out within three seconds.
2.6. Permeability Test. Experiments were carried out at a temperature of 25 ± 1 °C. Tests were conducted using a pressure cross-flow filtration system. The diameter of the membrane filter sample was 30 mm. Measuring pure water flux and salt rejection was the method used to characterize the Nylon 6 membranes with various additives. Membranes were exposed to a pressure of 1 bar for 50 min. They were placed with a shim and a mesh-structured spacer to eliminate pressure polarization, pressurized with a mechanical pump controlled by pressure regulators, and then brought to the operating pressure (1-6 bar). Figure 2 shows the pressure cross-flow filtration system designed especially to conduct the test. The permeate flux was calculated as J = V / (A × t), where J is the permeate flux (L/(m² × h)), V is the volume of permeate (liters), A is the effective membrane surface area (m²), and t is the time (hours).
The salt rejection was determined using an atomic absorption device (AA-7000 atomic absorption spectrophotometer, Shimadzu). The rejection of salts was obtained as R(%) = (1 − Cp/Cf) × 100, where Cp and Cf are the ion concentrations in the permeate and feed, respectively, and R is the rejection as a percentage. Figure 2 shows the system used in this process. During the rejection test, three membrane cells (three layers) on one layer of microfiber were used to increase the rejection ratio.
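As an illustration of the two formulas above, the following minimal Python sketch computes the permeate flux and the salt rejection; all numerical values in the example are hypothetical and are not measurements from this work.

def permeate_flux(volume_l, area_m2, time_h):
    """Permeate flux J = V / (A * t), returned in L/(m^2 * h)."""
    return volume_l / (area_m2 * time_h)

def rejection_percent(c_permeate, c_feed):
    """Salt rejection R(%) = (1 - Cp/Cf) * 100; concentrations in the same units."""
    return (1.0 - c_permeate / c_feed) * 100.0

# Hypothetical example: a 30 mm diameter sample (area ~7.07e-4 m^2) collecting
# 1.2 L of permeate in 0.8 h, with feed/permeate Pb concentrations of 100 and 13 ppm.
print(permeate_flux(volume_l=1.2, area_m2=7.07e-4, time_h=0.8))
print(rejection_percent(c_permeate=13.0, c_feed=100.0))  # ~87% rejection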
Mechanical Strength Measurement.
Mechanical properties of the electrospun membranes were measured with a Tinius Olsen H50 KT tensile tester equipped with a 5 N load cell. Specimen thicknesses were measured with an optical microscope. Specimens 10 mm wide and 100 mm long were tested at an extension rate of 0.5 mm/min at room temperature. The experiments were carried out three times to calculate average values.
Antibacterial Activity Test (Disc Diffusion Method).
Bacteria were grown aerobically in nutrient broth at 37 °C for 12 hours [13]. Cells were washed and suspended in distilled water until reaching a final concentration of 10⁶ CFU/mL. The antimicrobial susceptibility of the Nylon 6/Chitosan nanofibers was evaluated using the disc diffusion method. Mueller-Hinton agar was prepared from a commercially available dehydrated medium according to the manufacturer's instructions. The dried surface of the Mueller-Hinton agar plate was inoculated with E. coli by swabbing over the entire sterile agar surface. Two forms of sterilized nanofiber membrane samples were cut into small standard circles (6 mm in diameter) and placed on the surface of the inoculated media. The first form of nanofiber membrane contained the additive, and the other, without additive, was used as a control. The plates were incubated at 37 °C for 24 hours. The incubated plates were then examined to identify zones of no growth characteristic of antibacterial activity (halos around the fragments).
Antibacterial Activity Test (Optical Density Method).
The antibacterial activity of the electrospun nanofiber membrane was also tested by immobilizing the nanofibers onto filters. Antibacterial activity was evaluated quantitatively with the following equation: antibacterial activity (%) = (A − B)/A × 100, where A and B are the numbers of surviving cells in the control and test samples, respectively [14] (an illustrative calculation is given at the end of this subsection).
Chitosan molecules are very large compared with Nylon molecules [15]. Table 1 shows the effect of Chitosan on the electrical conductivity, which increased with increasing Chitosan content; this result agrees with Bizarria et al., who reported improved electrical conductivity with increasing Chitosan concentration when Chitosan was blended with PEO [16]. The surface tension increased with increasing Chitosan concentration due to the higher solution viscosity, but the change in surface tension was not significant.
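As an illustration only (not part of the original experimental procedure), the antibacterial-activity formula above can be evaluated as follows; the colony counts used in the example are hypothetical and were chosen merely to reproduce an activity of about 96%.

def antibacterial_activity(cfu_control, cfu_test):
    """Antibacterial activity (%) = (A - B) / A * 100, where A and B are the
    numbers of surviving cells (CFU) in the control and test samples."""
    return (cfu_control - cfu_test) / cfu_control * 100.0

# Hypothetical example: 1.0e6 surviving CFU with the control membrane,
# 4.0e4 with the Chitosan/Nylon 6 membrane -> about 96% activity.
print(antibacterial_activity(1.0e6, 4.0e4))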
Electrospun Membrane Characterization
Nanofiber SEM images and the nanofiber diameter histogram distribution are shown in Figure 3. Figure 3(a) shows the Nylon 6/Chitosan (100/0) nanofibers with an average diameter of 139 nm; the desired morphology was obtained. The nanofiber diameter increased with increasing Chitosan content in the blended solution because of the increase in solution viscosity. Figure 4(a) shows an increase in fiber diameter; this occurred because very high viscosity values make the ejection of jets from the polymer solution difficult, resulting in a larger fiber diameter. This result is consistent with Deitzel et al. [17]. The desired morphology was converted to a defective structure as the Chitosan content in the blend solution increased. Experiments showed that increasing the Chitosan ratio above 30% raises the solution viscosity to the point where electrospinning becomes impossible. Fouling usually decreases with an increase in the hydrophilicity of the polymeric material; in fact, a decrease in contact angle leads to an increase in the flux ratio, which means decreased fouling [18]. Therefore, evaluation of the membrane characteristics was followed by contact angle measurements. Figure 5 shows the contact angle values of water droplets on the nanofibrous membranes after 3 seconds of contact.
The water contact angle (WCA) of the pure Nylon 6 nanofiber reaches 70°, while the WCA of Nylon 6/Chitosan at a weight ratio of 70/30 reaches 12.8°. The results indicate a significant improvement in membrane hydrophilicity, which increases with increasing Chitosan content. This can be attributed to the presence of a large number of functional groups such as acetamide, primary amino, and/or hydroxyl groups in the Chitosan structure. These outcomes agree with the work reported by Zhang et al. [19]. The relationship between flux and pressure is shown in Figure 7: as the pressure increased, the permeate flux increased, because flux is directly proportional to the pressure drop across the membrane; the same behavior was observed for all membranes. The transport mechanism within the membrane can be explained on the basis of the solution-diffusion model. According to this model, the transport process within the membrane involves three steps: (1) sorption at the surface of the membrane, (2) diffusion into the membrane, and (3) desorption. The hydrophilic nature of Chitosan acts as the driving force for sorption of water into the membrane [20].
The presence of amine and hydroxyl groups on Chitosan develops hydrogen bonding and van der Waals forces, which change the hydrophilicity of the blended membranes [20]. Due to increased electrostatic interactions, extensive cross-linkage developed, which helped in the exclusion of salt ions [20]. Chitosan and chitin have been shown to remove metals and dyes, so they can be used to clean water [7], and various polysaccharide materials cross-linked with Chitosan can remove different metal and dye pollutants [7]. Figure 8 shows the effect of Chitosan on the rejection percentage for Pb(NO3)2 and NaCl. As the Chitosan concentration was increased, rejection increased because of the additional functional groups and electrostatic interactions. In general, increasing the Chitosan concentration increases the expulsion of metal ions due to the increase in functional groups and the electrostatic forces retained within the electrospun nanofiber during the electrospinning process. Another important factor is that the high surface area of the electrospun nanofiber membrane provides abundant adsorption sites for metal ions and dyes, and the higher porosity leads to smaller driving forces needed to push the water through the membrane, which makes the process less energy-intensive and facile [21]. Chitosan has high contents of amino and hydroxyl functional groups; owing to these properties, Chitosan is widely used for the removal of contamination from wastewater [21].
Among the salts, potassium sulphate has been reported to show higher flux and rejection, followed by sodium chloride; this can be explained with the help of ionic size and charge density. The high rejection of the K+ ion, reaching 80% for the (PEI) membrane, is due to the larger ionic radius of K+ (157 pm) compared with Na+ (116 pm) [22]. This clarifies why the rejection ratio of the Pb2+ ion reached 87% while that of the Na+ ion reached only 75%. Nanofiber adsorption methods show that nanofibers are the best option for removing metal ions while maintaining low pressure drops and high water fluxes. Figure 9 shows the effect of time on the flux, which decreases as time increases due to membrane fouling. Figure 10 shows the efficiency of salt-ion removal, which also decreases with time, indicating that the effect of Chitosan decreases with time.
Antibacterial Activity Results
Chitosan is known for its antimicrobial activity. It is generally accepted that the amine groups of Chitosan can react with the anionic groups on the bacterial cell surface. Such interaction brings extensive change to the cell surface and the cell permeability [23][24][25]. The increased cell permeability can cause leakage of intracellular substances, leading to cell death. This mechanism has been demonstrated by electron microscopy [26]. As the number of amine groups increases, the charge of the Chitosan increases, which causes a stronger interaction between Chitosan and the cells. Figure 12 shows the effect of Chitosan on E. coli bacteria and the measured inhibition zone of the 30/70-Chitosan/Nylon 6 membrane, which reached up to 8 mm; increasing the Chitosan percentage increases the inhibition zone. Figure 11 shows the formation of the inhibition zone around the Chitosan/Nylon membrane.
Figure 13 shows the antibacterial activity; as the Chitosan concentration increases, the antibacterial activity increases, reaching 96% at the 30/70-Chitosan/Nylon ratio. The SEM images show how the electrospun nanofiber captures the bacteria and prevents them from penetrating into the water, as seen in Figure 14.
Mechanical Test Result.
The tensile strengths of the prepared membranes are given in Figures 15 and 16. In this research, the mean tensile strength of the Nylon membrane was 1.45 MPa. The tensile strength of the 10/90-Chitosan/Nylon 6 membrane (2.844 MPa) was higher than that of the pure Nylon membrane, the tensile strength of the 20/80-Chitosan/Nylon 6 membrane was 4.1 MPa, and that of the 30/70-Chitosan/Nylon 6 membrane was 4.5 MPa. Clearly, the 30/70-Chitosan/Nylon 6 membrane had the highest tensile strength; the tensile strength of the Chitosan/Nylon 6 electrospun membrane increases with increasing Chitosan percentage.
Results Statistical Analysis.
The standard deviation was calculated from the results obtained in Figures 5, 8, 10, and 11, which can be seen in Table 2.
As can be seen, the standard deviation calculations showed that, with increasing wt.% of Chitosan, the water contact angle was the most affected result, making it the most significant, while the antibacterial activity (%) was the least significant result. The standard deviation calculation of rejection (%) gives a different indication for the NaCl salt than for the Pb(NO3)2 salt; the opposite indication obtained for NaCl rejection cannot be considered a final result, and the results in Figure 8 are more reasonable, given the difficulty of capturing NaCl ions compared with Pb(NO3)2 ions. Table 3 presents the standard deviation calculation for the strain and tensile strength of the nanofiber membrane with increasing wt.% of Chitosan.
As can be seen, increasing the wt.% of Chitosan to 30% increases both the strain and the tensile strength of the membrane, which is a desirable outcome.
Conclusions
Chitosan/Nylon 6 nanofiber membranes were prepared via electrospinning by mixing different amounts of Chitosan with Nylon 6. The addition of Chitosan improves the hydrophilicity and mechanical strength of the Nylon 6 nanofiber membrane. The enhanced hydrophilicity of the nanofiber membrane can reduce fouling, making the mat a potential candidate for water filtration and improving membrane performance at low pressures. The Chitosan-blended Nylon 6 membranes showed better bactericidal ability compared with the neat Nylon 6 membranes. As water filter media, the 30/70-Chitosan/Nylon membrane reached a high rejection ratio of 87% against heavy metal ions.
Figure 3: (a) SEM image of the pure Nylon 6 nanofiber membrane and (b) fiber diameter distribution.
Figure 6: Behavior of the pure water permeation flux with time at room temperature and 1 bar membrane pressure. The pure water permeation flux of the Nylon 6/Chitosan nanofiber membrane decreases with time from 2037.063 to 1293.53 L/(m²·hr) for 0/100 Nylon 6/Chitosan. The flux improved as the concentration of Chitosan increased, because Chitosan increases the hydrophilicity of the membrane.
Figure 7: Effect of membrane pressure on the pure water permeation flux of the Nylon 6/Chitosan nanofiber membrane.
Figure 8: Effect of Nylon 6/Chitosan ratio on the rejection of Pb(NO3)2 and NaCl salt aqueous solutions with initial concentrations of 100 ppm and 4000 mg/L, respectively, at room temperature and 1 bar.
Figure 9: Effect of time on the flux of water containing 100 ppm Pb(NO3)2 for the Nylon/10% Chitosan membrane at room temperature and 1 bar.
Figure 11: The inhibition zone formed around the Chitosan/Nylon 6 membrane after 24 hours at 37 °C.
Figure 13: Effect of Chitosan addition on the antibacterial activity.
Figure 14: SEM of the membrane after exposure to bacteria, where (a) and (b) show the bacteria clustering on the membrane surface as a dendritic-shaped colony and (c) shows how the fiber captures the bacteria.
2.1. Materials. The materials used in this research are Nylon 6 and formic acid, which were purchased from Sigma-Aldrich Co. (USA); the molecular weight of the Nylon 6 repeat unit was 113.16 g/mol, and the total molecular weight was 25,000 g/mol. Chitosan was purchased from Cheng Du Micxy Chemical Co. Ltd., with a degree of deacetylation (DD) ≥ 90%; the molecular weight of the repeat unit was 161.16 g/mol, and the total molecular weight was Mw 338,000. Lead nitrate (Pb(NO3)2) was purchased from Fluka Chemika, Germany (molecular weight 331.2 g/mol), and sodium chloride (NaCl) was purchased from Edutek Chemicals, India (molecular weight 58.44 g/mol); all agents were used without further purification. The micropore glass filter with binder (technical microfilter) has a pore size of 2 μm, a porosity of 90%, a thickness of 1.2 mm, and a fiber diameter of 124 mm.
Viscosities of the Nylon 6 and Nylon 6/Chitosan solutions at various concentrations are shown in Table 1. Adding Chitosan to the Nylon 6 solution increased the viscosity with increasing Chitosan content.
Table 1: The electrospinning solution properties (average values after three repeated tests).
Table 2: Standard deviation calculation of the water contact angle, rejection %, inhibition zone against E. coli bacteria, and antibacterial activity (%) results.
Table 3: Standard deviation calculation for the tensile strength and strain of the nanofiber membrane.
Neuroprotective Effects of Sulforaphane on Cholinergic Neurons in Mice with Alzheimer’s Disease-Like Lesions
Alzheimer’s disease (AD) is a common neurodegenerative disease in elderly individuals, and effective therapies are unavailable. This study was designed to investigate the neuroprotective effects of sulforaphane (an activator of NF-E2-related factor 2) on mice with AD-like lesions induced by combined administration of aluminum and d-galactose. Step-down-type passive avoidance tests showed sulforaphane ameliorated cognitive impairment in AD-like mice. Immunohistochemistry results indicated sulforaphane attenuated cholinergic neuron loss in the medial septal and hippocampal CA1 regions in AD-like mice. However, spectrophotometry revealed no significant difference in acetylcholine level or the activity of choline acetyltransferase or acetylcholinesterase in the cerebral cortex among groups of control and AD-like mice with and without sulforaphane treatment. Sulforaphane significantly increased the numbers of 5-bromo-2'-deoxyuridine-positive neurons in the subventricular and subgranular zones in AD-like mice which were significantly augmented compared with controls. Atomic absorption spectrometry revealed significantly lower aluminum levels in the brains of sulforaphane-treated AD-like mice than in those that did not receive sulforaphane treatment. In conclusion, sulforaphane ameliorates neurobehavioral deficits by reducing cholinergic neuron loss in the brains of AD-like mice, and the mechanism may be associated with neurogenesis and aluminum load reduction. These findings suggest that phytochemical sulforaphane has potential application in AD therapeutics.
Figure 1. Measurement of mouse body weight. During the treatment, no significant difference in body weight was observed among control, Alzheimer's disease-like (AD-like), and sulforaphane-treated Alzheimer's disease-like (AD + SFN-like) mice (n = 18; mean ± S.E.M.; one-way analysis of variance followed by post hoc least significant difference multiple comparison tests).
Analysis of Aluminum Level in the Mouse Brain
Brain aluminum levels were significantly higher in AD-like mice with and without SFN treatment than in controls (p < 0.01). AD-like mice treated with SFN exhibited lower brain aluminum levels than AD-like mice without SFN treatment (p < 0.01; Figure 2).
Figure 2.
Analysis of aluminum level in the mouse brain. Brain aluminum levels were significantly higher in AD-like mice and AD + SFN-like mice than in controls; AD-like mice with sulforaphane treatment exhibited lower brain aluminum levels than AD-like mice without sulforaphane treatment. (n = 10; means ± S.E.M.; One-way analysis of variance followed by post hoc least significant difference multiple comparison tests; ** p < 0.01 versus control, ## p < 0.01 versus AD).
Step-Down-Type Passive Avoidance Tests
The results from step-down-type passive avoidance tests are shown in Figure 3. In the training session, escape latency and the number of errors were significantly higher in AD-like mice than in controls (p < 0.01); SFN markedly reduced escape latency and the number of errors in AD-like mice (p < 0.01). In the retention test, shortened step-down latency and an increased number of errors were observed in AD-like mice compared with controls (p < 0.05 and p < 0.01, respectively), while SFN obviously increased step-down latency and reduced the number of errors in AD-like mice (p < 0.01). No significant difference was detected between AD-like mice with SFN treatment and controls in escape latency or the number of errors in the training session, or in step-down latency or the number of errors in the retention test.
Figure 3. Step-down-type passive avoidance tests in control, AD-like, and AD + SFN-like mice. In the training session, escape latency and the number of errors were significantly higher in AD-like mice than in controls; sulforaphane markedly reduced escape latency and the number of errors in AD-like mice (a,b). In the retention test, shortened step-down latency and an increased number of errors were observed in AD-like mice compared with controls, while sulforaphane obviously increased step-down latency and reduced the number of errors in AD-like mice (c,d). No significant difference was detected between AD-like mice with sulforaphane treatment and controls in escape latency or the number of errors in the training session (a,b), or in step-down latency and the number of errors in the retention test (c,d). (n = 18; median and interquartile range; Kruskal-Wallis non-parametric one-way analysis of variance followed by Mann-Whitney U-test (two-tailed); ** p < 0.01 versus control, ## p < 0.01 versus AD).
Choline Acetyltransferase (ChAT) Immuno-Positive Neuron Assays
ChAT immunohistochemistry results are shown in Figure 4. Brown immunoreactive cells indicate cholinergic neurons in mouse brains. The numbers of cholinergic neurons were markedly decreased in the medial septal (MS) and hippocampal CA1 regions of AD-like mice compared with controls and AD-like mice treated with SFN (p < 0.05). However, the numbers of cholinergic neurons in these regions did not differ significantly between control and AD-like mice with SFN treatment.
Figure 4. ChAT immunohistochemistry in the MS and hippocampal CA1 regions. Brown immunoreactive cells indicate cholinergic neurons in the mouse brain (see arrows, bars = 50 μm). The numbers of cholinergic neurons were markedly decreased in the MS (a) and hippocampal CA1 (c) regions of AD-like mice compared with controls and AD + SFN-like mice. However, the numbers of cholinergic neurons in these regions did not differ significantly between the control and sulforaphane-treated AD-like groups; (b) and (d) show quantitative assessments of choline acetyltransferase (ChAT) immuno-positive neurons in the MS and hippocampal CA1, respectively. (n = 8; means ± S.E.M.; one-way analysis of variance followed by post hoc least significant difference multiple comparison tests; * p < 0.05 versus control, # p < 0.05 versus AD; magnification 400×; all cholinergic neurons in hippocampal CA1 were counted; the number of cholinergic neurons per field of vision was counted in the MS).
5-Bromo-2'-deoxyuridine Immuno-Positive Cell Assays
5-Bromo-2'-deoxyuridine (BrdU), a proliferation marker, naturally incorporates into proliferating cells as a thymidine analog. BrdU immunohistochemistry results are shown in Figure 5. Compared with controls, significantly more BrdU-positive cells were observed in the subventricular zone (SVZ) and subgranular zone (SGZ) of AD-like mice with and without SFN treatment (p < 0.05 and p < 0.01, respectively). Moreover, SFN significantly increased the numbers of BrdU-positive cells in the SVZ and SGZ of AD-like mice (p < 0.05).
Acetylcholine (Ach) Level and Activities of ChAT and Acetylcholinesterase in the Cerebral Cortex
Although ACh level showed a decreasing trend in AD-like mice, no significant alteration was found, and activities of ChAT and acetylcholinesterase (AChE) in the cerebral cortex did not differ significantly among groups ( Figure 6).
Discussion
Animal models have been commonly used in defining critical disease-related mechanisms and for the preclinical evaluation of potential therapeutic interventions in AD. In this study, mice with AD-like lesions induced by the combined administration of aluminum and D-galactose [13] were used to investigate the anti-AD effects of SFN. SFN ameliorated cognitive impairment and attenuated cholinergic neuron loss in the MS and hippocampal CA1 regions in AD-like mice. However, no significant difference in ACh level or the activity of ChAT or AChE in the cerebral cortex was detected among groups of control and AD-like mice with and without SFN treatment. Moreover, SFN markedly increased the numbers of BrdU-positive neurons in the SVZ and SGZ of AD-like mice, which were significantly augmented compared with controls. Additionally, brain aluminum levels were significantly lower in AD-like mice with than in those without SFN treatment.
Behavioral dysfunction, especially memory loss, is the prominent and early symptom in AD patients. The Morris water maze is commonly used to test hippocampal-dependent spatial learning and memory. Morris water maze results from a previous study and ours showed impaired spatial memory in AD-like mice subjected to combined D-galactose and aluminum treatment [13,14]. In this study, step-down-type passive avoidance tests conducted to investigate non-spatial long-term memory demonstrated reduced cognitive function in AD-like mice. Memory dysfunction is well known to be associated mainly with cholinergic neuron loss in several regions of the brain, especially the basal forebrain and hippocampal CA1 region [15][16][17]. Consistent with previous studies [18][19][20], ChAT (a specific marker for cholinergic cells) immunohistochemical results from this study indicated cholinergic neuron loss in the MS and hippocampal CA1 regions in AD-like mice.
The combined administration of aluminum (20 mg/kg, intragastrically, once per day) and D-galactose (120 mg/kg, injected subcutaneously, once per day) has been shown to decrease the whole-brain ACh level and activities of ChAT and AChE in mice after treatment for 10 weeks [13]. However, we detected no significant alteration in ACh level or the activity of ChAT or AChE in the cerebral cortex of AD-like mice. The reasons for these inconsistent findings remain unclear. Generally, the literature reflects disagreement on ACh level and ChAT and AChE activities in the AD brain. Reduced ACh level and ChAT activity and increased AChE activity have been reported in the brains of patients with severe AD compared with controls [21]. However, Giacobini [22] indicated that AChE levels decreased by as much as 90% compared with normal values in severe AD. Dekosky et al. [23] reported no change in ChAT activity in the inferior parietal, superior temporal, and anterior cingulate cortices in individuals with mild cognitive impairment (MCI) and mild AD compared with controls; ChAT activity in the superior frontal cortex was significantly elevated in subjects with MCI compared with normal controls, whereas no difference was observed between the mild AD group and the MCI and no cognitive impairment groups. Dekosky et al. [23] also found significantly higher hippocampal ChAT activity in subjects with MCI than in the control and AD groups. Thus, it can be speculated that significant reductions in ACh level and the activities of ChAT and AChE in mouse brains are likely related to differences in disease progression and among brain regions. Taken together, our results suggest that ChAT-positive cell loss in the MS and hippocampal CA1 regions of the AD brain occurs earlier than changes in the cholinergic system (ACh level and ChAT and AChE activities) in the cerebral cortex. In addition, different brain regions or progressions of AD may contribute to inconsistent alterations in ACh level and activities of ChAT and AChE. SFN has many advantages, including water solubility, good pharmacokinetics, and safety after oral administration, as well as the potential ability to penetrate the blood-brain barrier [24,25]. Animal models of traumatic brain injury [26], idiopathic Parkinson's disease [27], cortical neuron injury [28], and spinal cord injury [29] have suggested that SFN is an effective neuroprotector. Furthermore, we found that SFN might play a role in protecting against cognitive function impairment and cholinergic neuron loss in the brains of mice with AD-like lesions induced by combined administration of aluminum and D-galactose. The findings of a previous study and ours suggest that SFN is a promising compound with neuroprotective properties, which is expected to be useful in the prevention of AD [12,14].
Neurogenesis is maintained in two regions-the SVZ and SGZ-in the adult brain [30,31]. The proliferation of neurocytes in these regions has been studied in various AD mouse models, but the findings have been inconsistent. Although an overall trend of impaired neurocyte proliferation in these areas in AD has been documented [32][33][34][35], Kamphuis et al. [36] and Díaz-Moreno et al. [37] observed enhanced proliferation in APP/PS1 and SAMP8 mice, respectively. In our study, dramatic increases in the numbers of BrdU-immunoreactive cells in the SVZ and SGZ were observed in AD mice compared with controls. Differences in disease progression may contribute to the conflicting results obtained in various studies. Numerous studies have provided evidence that SFN can inhibit the proliferation of cancer cells [38][39][40]. In contrast, human mesenchymal stem cells incubated with low doses (0.25 and 1 μM) of R-SFN exhibited higher proliferation rates compared with controls, whereas 20 μM R-SFN induced a significant reduction in the proliferation index [41]. We observed more BrdU-immunoreactive cells in AD-like mice treated with 25 mg/kg D,L-SFN than in control and AD-like mice that received no treatment, suggesting that SFN can enhance the proliferation of neurocytes in the AD brain. This effect may partly account for the increased number of cholinergic neurons in AD-like mice with SFN treatment compared with those without. As is well known, BrdU can be incorporated into DNA only during the S-phase of the mitotic process. Further research is thus needed to confirm these results using another marker, such as Ki-67, which is expressed throughout all active phases of the cell cycle. Moreover, a terminal deoxynucleotidyl transferase-mediated nick end labeling (TUNEL) test is necessary to further validate the results by showing whether there is any change in the presence and absence of SFN in the mouse brain.
In the present study, a significant decrease in brain aluminum level was observed in AD-like mice treated with SFN. Moreover, SFN remarkably reduced the blood aluminum level in similar AD-like mice in our previous study. These findings suggest that SFN relieves the brain aluminum load by accelerating blood aluminum excretion, which could be partly ascribed to its ability to induce phase II detoxifying enzymes by activation of the transcription factor Nrf2 [42,43]. Therefore, aluminum load reduction may be implicated in SFN-mediated neuroprotective effects in AD-like mice. Furthermore, oxidative stress and inflammation play independent and/or dependent roles in the initiation and progression of AD [44,45]. The involvement of Nrf2 signaling in the regulation of oxidative stress and inflammation has been well documented [46,47]. As a potent activator of Nrf2, SFN may also play an important neuroprotective role against AD via anti-inflammation and/or antioxidation. It has been reported that SFN attenuates Aβ-induced oxidative cell death via activation of Nrf2 [11], and protects the brain from Aβ deposits and peroxidation in mice with AD-like lesions induced by combined administration of D-galactose and aluminum [14]. Therefore, further study is required to confirm whether the activation of Nrf2 signaling is involved in the anti-AD effect of SFN.
In conclusion, our study provides insight into the application of SFN to prevent and cure AD. However, the potentially neuroprotective mechanism of SFN against AD should be investigated further. Moreover, this study did not include control mice treated only with SFN, which limits the interpretation of the results.
Animals, Treatments, and Tissue Collection
Eight-week-old Kunming mice (animal code SCXK 2008-1105) were purchased from the Experimental Animal Center at China Medical University (Shenyang, China). The mice were maintained on a 12 h light/dark cycle with controlled temperature (22 ± 2 °C) and humidity (55% ± 15%), and provided with food and water ad libitum. The Animal Care and Use Committee of China Medical University approved the experiment (no. CMU20130105), which complied with the National Institute of Health's Guide for the Care and Use of Laboratory Animals. All efforts were made to minimize suffering and the number of animals used.
According to body weight, mice were randomly divided into 3 groups (n = 18, equal numbers of male and female mice): a control group and AD-like groups with and without SFN treatment. Animal groups and treatments are shown in the scheme in Figure 7. Mice in the AD-like groups with or without SFN treatment received free daily access to drinking water containing aluminum (0.4 g/100 mL) and a subcutaneous injection of 200 mg/kg D-galactose (dissolved in physiological saline) every other day. Distilled water and an equivalent amount of physiological saline were administered to mice in the control group as drinking water and by subcutaneous injection, respectively. AD-like mice in the SFN treatment group were gavaged with 25 mg/kg SFN (dissolved in distilled water) once a day, whereas mice in the AD-like and control groups were gavaged with an equivalent amount of distilled water. All mice were treated for 90 days.
Behavioral tests were performed after 80 days of treatment. Three days before sacrifice, the thymidine analog BrdU (25 mg/kg, twice per day, injected subcutaneously) was co-administered to 8 mice in each group to measure ongoing cell proliferation. Twelve hours after the final BrdU administration, the mice were killed and the brains collected; these brains were used to investigate the numbers of ChAT-positive and BrdU-positive cells by immunohistochemical assays. The brains of the remaining 10 mice in each group were removed immediately and bisected. The left cerebra were weighed, and the aluminum level was measured by graphite furnace atomic absorption spectrometry. The right cerebral cortices were used to test ACh level and the activities of ChAT and AChE.
Figure 7. Scheme of animal groups and treatments. As shown in (a), mice were divided into 3 groups: control, AD-like, and AD + SFN-like. Mice in the AD and AD + SFN groups received free daily access to drinking water containing aluminum (0.4 g/100 mL) and a subcutaneous injection of 200 mg/kg D-galactose (dissolved in physiological saline) every other day. Distilled water and an equivalent amount of physiological saline were administered to mice in the control group as drinking water and by subcutaneous injection, respectively. Mice in the AD + SFN group were gavaged with 25 mg/kg sulforaphane (dissolved in distilled water) once a day, whereas mice in the AD and control groups were gavaged with an equivalent amount of distilled water; (b) shows that all mice were treated for 90 days. Behavioral tests were performed after 80 days of treatment. Three days before sacrifice, 25 mg/kg 5-bromo-2'-deoxyuridine (BrdU) was subcutaneously injected twice per day into 8 mice in each group.
Step-Down-Type Passive Avoidance Tests
Step-down-type passive avoidance tests were conducted using a YLS-3TB platform recorder (Yiyan Technology, Jinan, China) to investigate the effects of SFN on learning and memory according to a modification of the method reported by Ukai et al. [48] and Sakaguchi et al. [49]. The experimental device is a 12 × 12 × 18-cm electronic avoidance-response chamber, three sides of which are made of blank Plexiglas and one side of which is hard black plastic. The floor of the chamber is composed of parallel stainless-steel grids. Electric shocks were delivered to the grids. A rubber platform (5 cm high, upper surface 4 cm in diameter) was fixed in a corner on the floor of the chamber.
The test consisted of a training session and a retention session (24 h after the training session). During the training session, each mouse was placed on the steel grids and then exposed to an electric shock (30 V for 5 min) until it stepped up onto the rubber platform. Escape latency (the time required for the mouse to escape from electric shock) and the number of errors (the number of times that the mouse stepped down from platform) were recorded. In the retention test, each mouse was placed on the platform. When the mouse stepped down and placed its paws on the grids, an electric shock was delivered.
Step-down latency and the number of errors were recorded. The cut-off time in both sessions was 300 s.
Aluminum Measurement by Atomic Absorption Spectrometry
The left cerebra of mouse brains were immediately dissolved in a solution (3 mL) consisting of concentrated nitric acid and 30% hydrogen peroxide. The solution was digested at 120 °C for 3 h, and diluted to 10 mL with aluminum-free water. Brain aluminum levels were determined by graphite furnace atomic absorption spectrometry (Hitachi, Tokyo, Japan) using a wavelength of 309.3 nm, slit width of 1.3 nm, lamp current of 10.0 mA, and injection volume of 10 μL.
Immunohistochemistry Assays
Mice were deeply anesthetized and transcardially perfused with saline followed by ice-cold 4% paraformaldehyde (pH 7.4). The brains were removed, post-fixed in the same fixative overnight at 4 °C, and embedded in paraffin. Coronal sections (5 μm) were dried at 60 °C for 2 days, dewaxed in xylene, rehydrated through a graded alcohol series, and washed 3 times in phosphate-buffered saline (PBS; pH 7.2).
Endogenous peroxidase activity was blocked by immersion in 3% hydrogen peroxide for 10 min. Sections were incubated in 0.3% Triton X-100 and then washed in PBS. DNA denaturation for BrdU staining was carried out by incubating sections in 2 N HCl at 37 °C. All sections were then blocked in normal goat serum for 30 min at room temperature. Sections were incubated with rabbit anti-BrdU polyclonal antibody (1:200, bs-0489R; Bioss Biotechnology, Wuhan, China) or rabbit anti-ChAT polyclonal antibody (1:400, ab68779; Santa Cruz, TX, USA) overnight at 4 °C. After washing in PBS, sections were incubated with secondary antibody (goat anti-rabbit immunoglobulin G) for 30 min, washed with PBS, and incubated with horseradish peroxidase avidin-biotin complex for another 15 min, followed by washing with PBS. The reaction products were visualized with diaminobenzidine chromogen solution. Sections were then counterstained with hematoxylin, and the numbers of ChAT-positive and BrdU-positive cells were counted under an optical microscope.
ACh Level and Activities of ChAT and AChE
ACh level and activities of ChAT and AChE in the cerebral cortex were measured by colorimetric diagnostic kits according to the manufacturer's instructions. Total protein in supernatant was detected using the Coomassie brilliant blue method. ACh level was expressed as micrograms per milligram fresh tissue protein, and activities of AChE and ChAT were described in units per milligram fresh tissue protein.
Statistical Analyses
Data from step-down-type passive avoidance tests were expressed as medians and interquartile ranges, and statistical comparisons were made by Kruskal-Wallis non-parametric one-way analysis of variance (ANOVA), followed by the Mann-Whitney U-test (two-tailed). SPSS software (version 13.0; SPSS Inc., Chicago, IL, USA) was used for all analyses. Data from other indices were presented as means ± standard errors of the mean (S.E.M.) and compared using one-way ANOVA followed by post hoc least significant difference multiple comparison tests. Probability values <0.05 were considered to indicate statistical significance.
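For illustration only, the group comparisons described above could be reproduced along the following lines in Python with SciPy; the group arrays are random placeholders rather than data from this study, and the post hoc LSD step is only indicated.

```python
# Hedged sketch of the group comparisons described above (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control, ad, ad_sfn = (rng.random(18) for _ in range(3))

# Step-down-type test data: Kruskal-Wallis across the three groups, followed by
# two-tailed Mann-Whitney U-tests against the control group.
h_stat, p_kw = stats.kruskal(control, ad, ad_sfn)
if p_kw < 0.05:
    for name, grp in [("AD-like", ad), ("AD + SFN", ad_sfn)]:
        u_stat, p_u = stats.mannwhitneyu(control, grp, alternative="two-sided")
        print(name, round(u_stat, 1), round(p_u, 3))

# Other indices (means +/- S.E.M.): one-way ANOVA; post hoc LSD multiple
# comparisons would follow if the ANOVA is significant (not shown here).
f_stat, p_anova = stats.f_oneway(control, ad, ad_sfn)
print(round(f_stat, 2), round(p_anova, 3))
```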
Conclusions
SFN ameliorates neurobehavioral deficits by reducing cholinergic neuron loss in the AD-like mouse brain, and the mechanism may be associated with neurogenesis and aluminum load reduction. These findings suggest that the phytochemical SFN has potential application in AD therapeutics.
Determination of the Degradation Degree of Pasture Lands in the West Kazakhstan Region Based on Monitoring Using Geoinformation Technologies
Land degradation, including the degradation of pasture lands, is a global problem. Currently, one of the most urgent problems of the West Kazakhstan region is the preservation and restoration of the vegetation cover of pasture lands. To date, large areas of the region have been occupied by agricultural land. Several factors negatively affect agriculture, one of which is land degradation caused by anthropogenic impact in the form of irrational land use. Thus, to preserve the biodiversity of the pastures of the West Kazakhstan region, it is necessary to study the projective cover of the vegetation in full, determine the dominant plant species, and monitor the condition of pastures in order to prevent land degradation in time by conducting land and forest improvement activities. The study aimed to carry out a phyto-ecological assessment of degraded pastures of the Karatobinsky district of the West Kazakhstan region using geoinformation technologies and field study results. The paper presents the results of desktop decoding of high-resolution satellite images and ecological profiling of the studied territories. The decoding features of landscape types made it possible to compile a preliminary map of landscape contours. The use of this technique makes it possible to monitor the condition of degraded pasture lands in a short time and to justify the organization of pastures with a regulated grazing system in the study area.
INTRODUCTION
Today, the problem of degraded pasture lands has become relevant for the whole world, since the annual global loss of productive pasture lands is about 55-60%. The urgency of the problem arising from this is explained by the fact that the area of degraded lands is expanding annually under the influence of anthropogenic factors [Kubenkulov et al., 2019].
According to Kucherov [2012] and Nasiev [2013], the degradation of pasture lands results from long-term grazing and violation of the seasonality of their use, which leads to changes in, or in some places the disappearance of, the species composition of vegetation. Kulik [2004] and Nasiev [2013] have addressed the issues of monitoring pasture lands. The most complete studies of changes in species composition and the productivity of vegetation cover of the semi-desert and desert zones of the West Kazakhstan region (WKR) were presented in the works of Vlasenko [2011], Ivanov [2007], and Darbaeva [2001], whereas the features of sandy lands were studied in the works of Gael [1999].
Currently, the problem of obtaining information about the state of pasture lands is solved with the use of geoinformation technologies, which allow obtaining reliable and complete information about the state of pasture landscapes [Kaldybaev et al., 2022]. Satellite images make it possible to objectively assess the situation and take effective measures aimed at preserving the natural vegetation [Karynbaev, 2015]. Satellite data also make it possible to assess the state of the territory and to study changes in vegetation cover from remote observations [Yuferev et al., 2010; Esmagulova et al., 2015]. This is important since, with the increase in the number of livestock on private farms, the need for larger pasture territories also grows. Therefore, the problem of proper organization of activities aimed at preserving and improving pasture lands remains important and relevant [Kushnir and Konstantinov, 2008; Qnagayev et al., 2016].
The authors believe that the correct organization of measures to improve pasture lands will allow pastures to be used with the greatest benefit for the development of agriculture. The combined use of remote geoinformation technologies and the results of ground field studies will help solve the problem of pasture land degradation in Kazakhstan.
The purpose of the study was a phyto-ecological assessment of degraded pastures of the Karatobinsky district of the WKR using geoinformation technologies and field study results, followed by the development of the measures to improve pasture lands.
MATERIALS AND METHODS
The paper presents the results of remote monitoring and field studies of degraded pasture lands in 2019.
The study area was degraded pasture lands. Irrational methods of pasture use and anthropogenic impact have accelerated the processes of their degradation and caused a decrease in fodder production. The object of the study was the pasture lands of the Karatobinsky district (Kazakhstan), located in the desert and semi-desert zone of the WKR.
Remote monitoring of the pasture lands of the West Kazakhstan region is relevant due to the fragility and vulnerability of the semi-desert and desert zone. Pastures occupy 73% of the region's entire area. Despite this huge area of pasture lands, the area of run-down pastures and pastures overgrown with inedible plants is steadily growing in the region. The anthropogenic impact plays a significant role in this process owing to the intensive increase in the number of farm animals.
On the territory of the Karatobinsky district, the main animals grazed on pastures are sheep, goats, and cattle. As a result of prolonged excessive anthropogenic loads, degradation of pasture lands is observed on the territory of the Karatobinsky district. This requires careful analysis and some measures to preserve the natural vegetation.
The key site Alymshagyl is a sandy massif of the same name, located in the Karatobinsky district of the WKR, northeast of the Karatobe village. The coordinates of the center of the key site are 49°41′02″ north latitude and 53°32′33″ east longitude. The total area of the key site is 274 hectares, of which 17.2% is occupied by the settlements of Karatobe and Shoptikol.
The following materials were used during the study: The study was carried out based on a three-stage method, including a pre-field desktop stage, a field expeditionary study stage, and a post-field desktop stage [Kulik, 2004].
Stage 1. Pre-field desktop studies included the processing of cartographic material, as well as the selection of satellite images to determine the key site. At this stage, the route of the expeditionary field study was developed based on satellite images, and the routes for laying landscape and ecological profiles were planned.
Stage 2. During the field study, the following works were carried out at the key site:
• landscape and ecological profiling;
• establishment of geobotanical sites and sloping sites;
• determination of the degree of pasture degradation by the total projective cover and productivity of the herbage.
Landscape and ecological profiling was carried out according to the "Guidelines for landscape and ecological profiling" [Kulik et al., 2007]. Geoinformation processing was based on images from the Landsat series of spacecraft, in particular the Landsat 5 satellite, obtained from the archive of the United States Geological Survey (USGS) [Landsat Satellite Archives, 2018]. Pasture lands were decoded in the Global Mapper software; this included delineation of pasture lands, territories of sandy lands used for pasture, territories of settlements, and roads. Decoding in the Global Mapper software included the following layers: a soil map, types of sands according to vegetation colonization, and routes for laying the landscape and ecological profile, so that the main objects identified during desktop decoding could be examined at their smallest extent.
The vegetation was described on 12 geobotanical sites measuring 100 m² each. The quantitative ratio of species was characterized using the Drude scale. The productivity of pasture lands was determined by the cutting method: plant samples were collected on the geobotanical sites, that is, a cutting site with an area of about 2.5 m² was selected, in threefold repetition. After cutting, the plants from the cutting sites were immediately weighed and the results were recorded in a form. The raw mass of all plants was collected in place (if the mass exceeded 1 kg) and placed in a gauze bag for drying. The productivity of the dry mass was determined under office conditions after the end of the field studies.
In the Drude scale, the lower abundance grades include one in which the species grows sparsely and "Un.", in which the species occurs in single instances. To determine the degree of degradation of the vegetation cover by the projective cover, the scale of V.P. Voronina [2009] was taken as a basis, in which a very heavily run down pasture has a projective cover of <25% (IV); a heavily run down pasture has a cover of 25-50% (III); a medium run down pasture has a cover of 50-75% (II); and a slightly run down pasture has a projective cover of more than 75% (I).
Stage 3. For forest improvement mapping of the territory of a key site on a sandy massif, a forest improvement classification was used [Kulik et al., 2021], according to which degraded pastures are divided into 4 forest improvement categories (FIC), differing among themselves according to the state of the soil and vegetation cover. The categories identified in the FIC I include desertification foci that have arisen as a result of excessive loading of livestock at watering holes, sheep pens, and settlements, as well as pasture areas that have undergone desertification and withdrawal from economic circulation due to plowing. Depending on the area, the foci of desertification are classified as small (less than 100 ha), medium (100 to 500 ha), and large (more than 500 ha).
FIC II includes sands of different landforms in various stages of the soil-forming process slightly colonized or colonized with vegetation, often with disjointed deflation spots, as well as territories with sandy desert soils. They easily lose their soil and vegetation cover and become desertified with an increased load of livestock and even with partial plowing with wide stripes.
FIC III includes the areas with sandy loam zonal (light chestnut, brown semi-desert, gray-brown desert, takyr-like desert) soils capable of deflating under continuous plowing.
FIC IV includes loamy and clay soils that are practically not exposed to wind erosion not only during intensive grazing but also during plowing.
FICs, in turn, are divided into forest improvement types (FIT), which are distinguished by the level and mineralization of groundwater (GW).
FIT a has available GW, with a depth of occurrence from 0 to 4 m and mineralization up to 1 g/l; FIT b has limited available GW, with a depth of occurrence from 4 to 8 m and mineralization over 1 g/l; FIT c relies on redistributed precipitation (snow accumulation, surface runoff); FIT d has inaccessible GW and is devoid of the specified sources of moisture.
The combination of FIC and FIT gives forest improvement divisions (FID) designated by the double category and type index [Kulik, 2004].
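For illustration, the FIT assignment described above can be written as a simple rule-based function. This is a sketch of the classification logic only; the function name and input fields are hypothetical and do not correspond to software used in the study.

```python
# Hedged sketch: assign a forest improvement type (FIT) from groundwater (GW)
# depth and mineralization, following the rules quoted above.
def classify_fit(gw_depth_m, mineralization_g_per_l, redistributed_precipitation=False):
    """Return 'a', 'b', 'c' or 'd' for a pasture plot (hypothetical helper)."""
    if gw_depth_m is not None and gw_depth_m <= 4 and mineralization_g_per_l <= 1:
        return "a"   # available GW: 0-4 m, up to 1 g/l
    if gw_depth_m is not None and 4 < gw_depth_m <= 8:
        return "b"   # limited available GW: 4-8 m, over 1 g/l
    if redistributed_precipitation:
        return "c"   # moisture from snow accumulation / surface runoff
    return "d"       # inaccessible GW, no additional moisture sources

# Example: a plot with GW at 5.5 m gives FIT 'b'; combined with its FIC this yields the FID.
print(classify_fit(5.5, 1.4))
```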
The degree of colonization of the sandy massif with grassy and woody vegetation was estimated from satellite images and isolinear maps based on them. An isolinear map was created from the data obtained by dividing the area of grassy and woody vegetation by the total area of a regular grid square (expressed as a percentage) superimposed on the satellite image:

C = (S₁ / S₂) × 100%

where: C is the degree of colonization of the sandy massif with vegetation, %; S₁ is the area of herbaceous or woody vegetation (ha or km²); S₂ is the area of the grid square (ha or km²).
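As a minimal numerical illustration of this calculation, the per-cell colonization degree could be computed as follows; the cell area and vegetation areas are placeholder values, not measurements from the key site.

```python
# Hedged sketch: degree of colonization C = S1 / S2 * 100 for each cell of a
# regular grid overlaid on a classified satellite image (placeholder values).
import numpy as np

cell_area_ha = 25.0                     # S2: area of one grid square (assumed)
vegetated_area_ha = np.array([          # S1 per cell: herbaceous or woody vegetation
    [3.1, 7.5, 12.0],
    [0.4, 5.2, 18.7],
])
colonization_pct = vegetated_area_ha / cell_area_ha * 100.0   # C, in %
print(np.round(colonization_pct, 1))
```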
Using these points in the Surfer software and the kriging interpolation method [Silkin, 2008], isolinear maps of the groundwater level (GWL) and groundwater mineralization (GWM) for the key site were constructed. To work in Surfer, spreadsheets must contain at least three columns of data: the first two columns are the X and Y coordinates, and the third column is the value Z assigned to the (X, Y) point. In the considered case, X and Y are the coordinates of the center of a cell of the regular grid superimposed on the satellite image, and Z represents either GWL or GWM data.
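Outside of Surfer, a comparable isolinear surface can be sketched from the same three-column (X, Y, Z) data, for example in Python; SciPy's griddata is used here as a simple stand-in for kriging, and the point values are hypothetical.

```python
# Hedged sketch: interpolate scattered (X, Y, Z) grid-center values onto a regular
# surface, roughly analogous to the Surfer workflow described above.
import numpy as np
from scipy.interpolate import griddata

# X, Y: grid-cell centers (km); Z: e.g. groundwater level (m) - placeholder values.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
z = np.array([4.2, 5.8, 6.1, 7.0, 5.5])

grid_x, grid_y = np.mgrid[0:1:50j, 0:1:50j]          # target raster
surface = griddata(points, z, (grid_x, grid_y), method="cubic")

# 'surface' can then be contoured (e.g. with matplotlib) to obtain isolines of GWL
# or GWM; true kriging would require a dedicated geostatistics package.
print(np.nanmin(surface), np.nanmax(surface))
```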
Similarly, isolinear maps were constructed according to the degree of colonization of the sands with herbaceous and woody vegetation.
RESULTS
According to the decoding signs, the surface of the key area is flat. The soils around the sandy massif are sandy loam brown semi-desert solonized soils (Figure 1). The landscape and ecological profile begins on the bank of the Kaldygaity River and runs from south to north. The length of the profile is 2 km. The landscape and ecological profile (Figure 2) and the geobotanical description (Table 1) allow the most expressive and complete assessment of the features of the key site.
Plant communities change frequently throughout the profile. The profile begins with the astragalus and spurge community. Downy brome, milfoil, and sand oats are found only occasionally.
On the 3rd site, an open area is observed in which the projective cover does not exceed 1%, and camel thorn is found only once. At 500 m from the Kaldygaity River, the projective cover reaches 20% and the plant community changes to bluegrass and astragalus. At 1,500 m from the beginning of the profile, calligonum bushes grow, and wild rye and spurge are found. The profile ends at the top of a sandy hillock 7 m high. Here, in the depressions between the hillocks, forest outliers of Jesuit's bark and birch grow, up to 3-5 m high. Among the herbaceous vegetation, goose grass is found in places. It should be noted that throughout the entire profile the species composition of the vegetation comprises no more than 5 species, and the territory of the key site is a very heavily run down pasture.
On the basis of soil and landscape and ecological maps, a forest improvement map of the key site was compiled ( Figure 3). All forest improvement categories are present on this site. Over 50% of the land is under sandy loam brown semi-desert solonized soils (FIC III). The land of FIC I occupies 9.8% of the total area and is confined to the villages of Karatobe and Shoptikol, where pockets of deflation have arisen as a result of overgrazing.
The lands of FIC II are located entirely within the Alymshagyl sandy massif and occupy 15.3% of the total area of the key site; in the inter-hillock depressions, the projective cover increases to 25% (Table 2).
To determine the FIT, isolinear maps of sand colonization with vegetation were constructed (Figure 4). The analysis of Figure 4 showed that the main tree stands are concentrated in the central part of the sandy massif and represent birch outliers along inter-hillock depressions. According to the isolinear maps of the GWL and GWM (Figure 5), most of the key site has limited available GW (4-6 m), while its mineralization does not exceed 1 g/l.
The distribution of areas by FIT is shown in Table 3, from which it can be seen that the largest area is occupied by the land of the FIT b (40.8%), and they are located in the contour of the FIC III, where the Kaldygaity River passes through the territory of the key site. The lands with accessible and limited available GW (FIT a and b) occupy 32.5%.
Thus, eight FIDs were identified at the key site (Table 4). The analysis of the data in Table 4 shows which divisions dominate the Alymshagyl key site. FIT d is also present on the key site, within the FIC III d contour, occupying 9.5% of the total area of the site.
DISCUSSION
Improving the microclimate of the territory and creating a productive fodder base on the pastures of the Karatobinsky district remain urgent problems. An exceptional role in protecting pastures from wind erosion and increasing the productivity of degraded pastures of the Karatobinsky district belongs, as a rule, to pasture-protective forest strips and shade clumps, which are the most powerful means of protecting pastures. The issues of proper pasture use are also important.
The most important task in processing remote monitoring data is to determine the condition of pasture lands. According to Bekmukhamedov [2019], the pasture lands of Kazakhstan are characterized by three degradation factors: overgrazing, cutting of shrubs, and abandonment. According to the conducted study, among the important remote indicators of the ecological state of phytocenoses, one can name the run down vegetation cover, which was formed due to overgrazing.
As can be seen in the key area, natural grasses have disappeared due to intensive livestock grazing. The areas with excessive grazing are mostly adjacent to settlements and wells, where animals cross the territory several times in a single day. In such areas, the vegetation is almost completely run down. Therefore, a detailed analysis of satellite images combined with landscape and ecological profiling makes it possible to monitor pasture lands remotely, without specialists visiting the site. Remote monitoring makes it possible to justify the transformation of degraded pasture lands into other types of land, organize pastures with a regulated grazing system, fix the sands with psammophytes, and plant woody plants such as tamarix, marsh elder, and calligonum along inter-hillock depressions.
According to studies [Nasiyev and Bekkaliyev, 2019; Nasiyev et al., 2022], when pastures are organized with a regulated grazing system, the fodder base improves and productivity increases. Therefore, to maintain pastures in good natural condition in the study area, based on the instructions of [Yunusbaev, 2001], the creation of a pasture rotation in which the spring grazing season alternates with winter, and summer with autumn, is recommended.
CONCLUSIONS
The use of this study method and the interpretation of satellite images in the territory of the WKR will ensure constant monitoring of the condition of the objects, high quality and efficiency in creating thematic maps for agroforestry activities, and high reliability of information about the state of pasture landscapes.
The results of the conducted study showed that the concentration of animals along livestock centers, watering holes, settlements, and overgrazing on pastures leads to complete desertification of the territory. The recommended measures should be comprehensive, taking into account both natural and climatic and anthropogenic factors.
Development and psychometric properties of a questionnaire to measure drug users’ attitudes toward methadone maintenance treatment (DUAMMT) in Iran
Background Assessing drug users’ attitudes towards different kinds of addiction treatment is necessary to design tailored strategies. The aim of the present study is to develop and examine the psychometric properties of a new scale, called the DUAMMT, for assessing drug users’ attitudes toward methadone maintenance treatment in Iran. Methods A multi-phase development method was applied in developing an instrument from February to December 2016. The item generation and scale development were performed through literature review, a qualitative approach, and interviews with an expert panel. Then, the psychometric properties of the scale were evaluated by means of cross-sectional studies with drug users. We performed an exploratory factor analysis, a confirmatory factor analysis, and item-scale correlations; and we tested the internal consistency of the scale. Furthermore, test-retest reliability was evaluated among an Iranian sample of drug users. Results The mean age of participants was 34.12 years. The exploratory factor analysis revealed four factors (perceived barriers, perceived concerns, methadone side effects, and perceived positive effects) containing 17 items that jointly accounted for 60.53% of the observed variance. The confirmatory factor analysis showed a model with appropriate fitness for the data. The Cronbach’s alpha coefficient for the subscales ranged from .70 to .79. The intra-class correlation coefficient (ICC) ranged from .774 to .970, which is well above the acceptable threshold. Conclusions The findings of the present study suggest that the DUAMMT is a valid and reliable instrument to measure drug users’ attitudes toward methadone maintenance treatment. The DUAMMT can be applied at the start of treatment so that clinical intervention can be targeted to promote retention in treatment.
Background
The International Classification of Diseases (ICD-10) has classified opioid dependence as a chronic and relapsing disorder [1]. Opioid use disorder has been a main contributor to comorbidity and premature mortality caused by overdose and blood-borne infections, such as human immunodeficiency virus (HIV) and hepatitis [2][3][4][5][6]. In addition, opioid dependence was responsible for 51,000 deaths throughout the world in 2013 [7] and also accounted for the greatest proportion of universal disability-adjusted life years (DALYs) attributed to drug dependence, i.e., 9.2 million DALYs in 2010 [8]. More concerns may be raised considering the fact that DALYs attributed to opioid use disorder increased during the time period of 1990-2010 [8]. Aside from physical harm, previous studies have also shown that people with opioid use disorders have higher risks of panic disorders, social phobia, agoraphobia, low self-reported health, lifetime anxiety, and mood disorders [2,9,10].
Methadone, which is a synthetic opioid with potent analgesic effects, was initiated in Germany during World War II and was prescribed as a painkiller. In 1964, however, the first "methadone program" was established by Dole and Nyswalder (1965) in order to treat heroin dependence [11,12]. Methadone maintenance treatment (MMT) is an opioid replacement therapy (ORT); taking stable daily doses of methadone in the long term has been proven to alleviate the uncomfortable withdrawal symptoms of opioid abstinence, reduce opioid craving, and block opioid euphoria [6,11,12]. Although opioid replacement therapies have not been limited to methadone maintenance treatments in contemporary decades, MMT has been widely accepted as one of the best evidence-based medication-assisted therapies for chronic opioid dependence and can be considered a harm reduction strategy [6,[12][13][14]. Methadone has a positive influence on public health and security as well as human capital and social productivity [15][16][17]. There is evidence that in lowand middle-income countries where there is a lack of treatment programs, the expansion of MMT programs might lead to savings in social and health expenditures [5,13,18]. Meta-analysis of different data found that MMT is highly efficient in reducing heroin dependence [19], reducing risky behavior related to HIV transmission [20], reducing overdose-related deaths [21], and reducing crime rate [22].
Although MMT programs are one of the most important treatment strategies to reduce individual and public harm associated with opioid use, and despite the central role of methadone therapy in harm reduction approaches to opioid use in Iran as well as many other countries, previous studies have pointed out that a large proportion of eligible patients refuse to participate in this treatment program [4,16,[23][24][25]. There are some factors that prevent the tendency of patients to use this treatment, such as lack of access and high cost. Evidence has shown that some other items, such as attitudes and beliefs of patients regarding methadone treatment, can affect their acceptance of this treatment [26]. Positive attitudes toward methadone treatment have been related to retention in treatment [27].
Thus, assessing drug users' attitudes toward different kinds of addiction treatment may be necessary to design more effective treatment plans. High prevalence of illicit drug use [28]; being close to major illicit drug production regions in Afghanistan [29]; social, cultural, and economic special conditions [30]; and huge problems regarding drug treatment [31] are all reasons the development of instruments related to addiction treatment in Iran has priority. A United Nations Office on Drug and Crimes (UNODC) report indicated that more than 80% of the recognized drug treatment seekers in Iran were primarily people with opioid use disorders [32]. Although many Iranian researchers have attempted to develop instruments for addiction treatment [33][34][35][36], none of them focused on respondents' attitudes toward methadone. Thus, the aim of the present study is to develop and examine the psychometric properties of a new scale, called the DUAMMT, for assessing Iranian drug users' attitudes toward methadone maintenance treatment.
Research design
This study was approved by the Ethics Committee of Kurdistan University of Medical Sciences [Grant number 14/ 23311], and all patients completed informed written consent. The study was conducted in two phases. Firstly, item generation and scale development were performed by applying three approaches: a literature review, a qualitative method approach, and interviews with an expert panel. In the second phase, the psychometric properties of the scale were evaluated by means of cross-sectional studies with drug users. We performed exploratory factor analysis, confirmatory factor analysis, and item-scale correlation, and we assessed the internal consistency of the scale. Furthermore, test-retest reliability was evaluated among an independent sample of 30 drug users. Table 1 provides the descriptive characteristics of the participants from the two phases.
Phase 1: Item generation and scale development phase
In this phase, we aimed to develop an instrument to measure drug users' attitudes toward MMT. Two methods were applied to develop an item pool in the present study. First, a qualitative study was designed to explore drug users' attitudes toward MMT. For this purpose, 12 individual interviews were conducted with a sample of drug users. Patients were recruited from drop-in centers (DICs) and MMT centers in Sanandaj, Iran. The DICs are run by local nongovernmental organizations offering therapies and psychosocial support and facilitating self-help groups. Maximum variation sampling was used in this phase, meaning that we recruited participants with different sociodemographic characteristics so that their viewpoints on experiencing methadone would complement each other. In order to obtain maximum variation, patients engaging in various types of drug use were chosen from different ages and socioeconomic backgrounds. The descriptive characteristics of the patients are shown in Table 1. In-depth individual interviews gave us the opportunity to talk about patients' beliefs about methadone treatment and, as a result, to assess their attitudes. Patients had different levels of education.
The interviews were initiated by defining maintenance treatment and applying a semi-structured inventory that started with an open-ended question: "What is your opinion about methadone treatment?" Then, based on the answers from the patients, several questions were asked to promote discussion. All discussions were recorded, and we wrote our analytic concepts in a memo text.
All patients were informed about the aim of the study, and they filled in the informed consent. The interviews were held in the DIC and MMT centers, and all discussions were tape-recorded. Data saturation was achieved after 12 individual interviews. Afterward, we applied an inductive method to analyze the recorded discussions. Inductive content analysis was applied to detect themes by studying the raw data of the interviews through continuous comparison [37]. Clear procedures were used to draw conclusions from the interviews in order to ensure credibility. For transferability, we provided rich explanations that can be applied by other researchers to other situations and backgrounds. Furthermore, in order to ensure conformability, we checked the internal coherence of the results [38].
Thereafter, experts were asked, "What are the most effective treatments for addiction and relapse prevention in people with opioid use disorders? Why do you consider these important? And why do you think the other treatments for drug users are not as important as your selected approaches?" In the end, all data obtained from qualitative research and interviews with experts were cross-checked, and based on the three approaches, 30 items in Farsi were created for an initial scale. Each item was rated on a 5-point Likert scale ranging from 1 (strongly agree) to 5 (strongly disagree). Subsequently, content and face validity were evaluated.
Content validity
In this study, both qualitative and quantitative content validity (content validity index/ratio) were assessed. In the qualitative stage, a scientific panel of 10 experts (including health educators, psychologists, and addiction therapists) evaluated the initial scale. They evaluated the grammar, wording, and scaling of each item. To assess the quantitative content validity, both the content validity index (CVI) and content validity ratio (CVR) were calculated. The simplicity, accuracy, and clarity of each item were measured by the CVI [39,40]. In order to calculate the CVI, a 4-point Likert-type ordinal scale was applied by the expert panel.
The answers were rated between 1 (not relevant, not clear, and not simple) and 4 (very relevant, very clear, and very simple). The CVI was assessed as the proportion of items that received a rating of 3 or 4 by the experts [41]. A CVI score lower than .80 was not acceptable [42]. The essentiality of the items was tested by the CVR. Each item was scored by the expert panel as 1 (essential), 2 (useful but not essential), or 3 (not essential) [41]. Then, based on the Lawshe Table [43], items with a CVR score of 0.62 or above were considered to be acceptable and were retained.
In the quantitative stage, items with a CVR and CVI less than .62 and .82, respectively, were deleted.
In total, 8 items were deleted, resulting in a 22-item pool. Furthermore, the expert panel revised the scale with regard to grammar, wording, and item allocation. For example, the sentence "Methadone treatment does not reduce return to reusing drug" was changed to "Methadone does not have an effect on the prevention of relapse." The 22-item pool remained in the analyses below and consisted of positively worded and negatively worded statements with five response options: 1 = totally disagree, 2 = disagree, 3 = neither disagree nor agree, 4 = agree, and 5 = totally agree.
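To make the two indices concrete, a minimal calculation along these lines might look as follows. The ratings below are invented, and the CVR formula shown, CVR = (nE − N/2)/(N/2), is the standard Lawshe expression on which the cited table is based.

```python
# Hedged sketch: per-item content validity index (CVI) and content validity
# ratio (CVR) for a panel of N experts (ratings below are placeholders).
def item_cvi(relevance_ratings):
    """Proportion of experts rating the item 3 or 4 on the 4-point scale."""
    return sum(r >= 3 for r in relevance_ratings) / len(relevance_ratings)

def item_cvr(essentiality_ratings):
    """Lawshe CVR: (n_essential - N/2) / (N/2), where a rating of 1 = 'essential'."""
    n = len(essentiality_ratings)
    n_essential = sum(r == 1 for r in essentiality_ratings)
    return (n_essential - n / 2) / (n / 2)

relevance = [4, 3, 4, 4, 2, 4, 3, 4, 4, 3]        # 10 experts, one item (dummy)
essentiality = [1, 1, 1, 2, 1, 1, 1, 1, 3, 1]
print(round(item_cvi(relevance), 2))      # item kept if CVI >= .82 (threshold used here)
print(round(item_cvr(essentiality), 2))   # item kept if CVR >= .62 (Lawshe, 10 experts)
```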
Face validity
In this step, both qualitative and quantitative approaches were used to assess face validity. A group of drug users (n = 10) were asked to evaluate each item of the scale and to indicate if they felt ambiguity or difficulty in answering the Iranian version of the DUAMMT questionnaire. Based on the respondents' perspectives, the ambiguous items were adapted. In the quantitative phase, the impact score (frequency × importance) was assessed to show the percentage of drug users who identified each item as important or somewhat important on a 5-point Likert scale. Items were considered to be inappropriate if they had an impact score less than 1.5 (which matches a mean frequency of 50% and a mean importance of three on the 5-point Likert scale) [44]. Overall, three items had an impact score less than 1.5 and were deleted. The range of the impact score for the remaining 19 items was from 1.7 to 5. The first form of the questionnaire containing 19 items was established for the next phase of psychometric evaluation.
Phase 2: Psychometric phase
The main study and the data collection. In order to assess the psychometric properties of the DUAMMT questionnaire in a wider setting, a cross-sectional study was carried out in Sanandaj, Iran, from February to December 2016. A simple random sampling method was applied. First, four DIC and MMT centers were randomly selected from among the DIC and MMT centers in Sanandaj, Iran. Patients who visited these DIC and MMT centers were entered into the study if they were male patients with substance abuse referred to harm reduction centers, met the diagnostic criteria for substance dependence disorder based on the DSM-IV, were literate, and were willing to take part in the study. After the main investigator provided an explanation of the aim of the study, patients who agreed to take part completed the DUAMMT questionnaire. In addition, demographic characteristics of the patients, including age, educational level, employment status, marital status, and type of drug use, were collected. Data were collected by trained investigators in face-to-face interviews.
Statistical analysis
Several statistical methods were applied to test the psychometric properties of the DUAMMT scale. These are presented as follows.
Construct validity
After the item analysis, the 19 remaining items were considered to estimate the construct validity using exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and item-scale correlation.
Exploratory factor analysis EFA was performed to identify the main factors of the scale. The sample size was estimated a priori. As recommended by Gable and Wolf, a sample of five to ten patients per item is required to ensure a conceptually clear factor structure for EFA [45]. The preferred maximum required sample size was thus determined to be 200 drug users. These patients were recruited from the DIC and MMT centers. The main factors of the scale were extracted by performing EFA, applying the principal component analysis (PCA) with varimax rotation. The Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity were applied to assess the adequacy of the sample for the factor analysis [46]. In order to extract the factors, a factor with an eigenvalue above 1 was considered significant. Additionally, a scree plot was used to specify the number of factors. Factor loadings equal to or greater than .40 were considered acceptable [47].
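As an illustration of this analysis pipeline, an EFA of this kind could be run in Python with the factor_analyzer package; the response matrix below is simulated, and the package is named only as one possible tool (the analyses reported here were run in SPSS).

```python
# Hedged sketch: KMO, Bartlett's test and a varimax-rotated EFA on a
# respondents-by-items matrix (placeholder DataFrame `items`, 19 columns).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.DataFrame(np.random.randint(1, 6, size=(200, 19)))  # 5-point Likert, dummy

chi2, p_bartlett = calculate_bartlett_sphericity(items)
_, kmo_model = calculate_kmo(items)
print(kmo_model, chi2, p_bartlett)            # sample adequacy checks before the EFA

fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings[loadings.abs() >= 0.40].round(2))   # retain loadings >= .40
print(fa.get_eigenvalues()[0])                     # eigenvalues for the scree plot
```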
Confirmatory factor analysis (CFA) A CFA was performed in order to assess the fit of the model. Considering the possible attrition related to the test-retest analysis, a separate sample of 120 drug users was recruited from harm reduction, MMT, and DIC centers. Assigning seven patients to each item, a sample size of 120 was estimated [48]. The model fit was evaluated using multiple fit indices. As suggested, several fit indices were taken into account: relative chi-square (χ2/df), comparative fit index (CFI), goodness of fit index (GFI), non-normed fit index (NNFI), normed fit index (NFI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR) [49,50]. Relative chi-square is the ratio of chi-square to degrees of freedom, and a value less than 3 is suggested to indicate an acceptable fit [51]. The values of GFI, CFI, NNFI, and NFI range from 0 to 1, but values of 0.90 or above are commonly considered acceptable model fits [52]. An RMSEA value between .08 and .10 indicates an average fit, and a value below .08 indicates a good fit. For SRMR, values below .05 indicate a good fit, while values between .05 and .08 indicate a close fit and values between .08 and .10 are acceptable [53].
Item-scale correlation Finally, item-scale correlations were calculated in order to assess the degree to which each item was correlated to its subscale by use of the Spearman correlation coefficient. We expected that, for each subscale of the DUAMMT, the item scores of the subscale (e.g., perceived barriers) would correlate more with the total score of the respective subscale (e.g., perceived barriers) rather than the total score of other subscales (e.g., perceived side effects). Correlation values between 0 and .20 are considered poor; between .21 and .40, fair; between .41 and .60, good; between .61 and .80, very good; and above .81, excellent [54].
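A minimal version of this item-scale check could be coded as follows; the column names are hypothetical, and a corrected item-total correlation (excluding the item from its own subscale total) is used as one common variant.

```python
# Hedged sketch: Spearman correlation of each item with its own subscale total
# (placeholder data; `items` holds the responses for one subscale).
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

items = pd.DataFrame(np.random.randint(1, 6, size=(200, 7)),
                     columns=[f"barrier_{i}" for i in range(1, 8)])  # e.g. perceived barriers

subscale_total = items.sum(axis=1)
for col in items.columns:
    rho, p = spearmanr(items[col], subscale_total - items[col])  # corrected item-total
    print(col, round(rho, 2), round(p, 3))
```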
Reliability
Internal consistency The internal consistency of the DUAMMT questionnaire was assessed by calculating the Cronbach's' alpha coefficient of the whole scale and each dimension of the DUAMMT questionnaire. Alpha values equal to .70 or higher were considered acceptable [54,55].
Test-retest A subsample of drug users (n = 30) filled out the DUAMMT questionnaire twice, with a 2-week interval, in order to examine the stability of the questionnaire by estimating the intra-class correlation coefficient (ICC). ICC values of .40 or above are considered acceptable [55]. All statistical analyses, except CFA, were performed using SPSS 18.0 [56]. The CFA was performed using LISREL 8.80 [57].
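For reference, Cronbach's alpha can be computed directly from its definition, as in the sketch below; the responses are simulated, and the ICC itself would normally come from a dedicated routine rather than this formula.

```python
# Hedged sketch: Cronbach's alpha for a k-item (sub)scale from an n x k matrix.
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(30, 17))       # 30 respondents, 17 items (dummy)
print(round(cronbach_alpha(demo), 2))          # values >= .70 taken as acceptable
```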
DUAMMT questionnaire The final version of the DUAMMT questionnaire is shown in Appendix 1. Each item is rated on a five-point response scale. Four items were negatively worded (items 2, 11, 13, and 15) and have to be reverse scored [Appendix 2].
Construct validity Exploratory factor analysis
The measured KMO was .742, and Bartlett's test of sphericity was significant (χ2 = 644.03, p < .001), indicating adequacy of the sample for EFA. Initially, for the 19-item scale, seven factors showed eigenvalues above 1.0. Additionally, the scree plot suggested a 4-factor solution (Fig. 1). This 4-factor solution was explored by repeatedly assessing item performance and eliminating items in a step-by-step process. After removing the items with factor loadings below .40, a final factor solution was obtained, consisting of a 17-item questionnaire loading on four distinct constructs. These constructs jointly accounted for 60.53% of the observed variance. As presented in Table 2, four factors were found: Factor 1 (perceived barriers toward methadone treatment) included 7 items (items 11, 12, 13, 14, 15, 16, and 17), factor 2 (perceived side effects) included 4 items (items 1, 2, 3, and 4), factor 3 (perceived concerns) included 3 items (items 5, 6, and 7), and factor 4 (perceived positive effects) included 3 items (items 8, 9, and 10). Refer to Appendix 1 for the items of the DUAMMT.
Confirmatory factor analysis
CFA on the 17-item DUAMMT questionnaire was conducted to test the fitness of the model obtained from the EFA. Figure 2 shows the best model fit for the DUAMMT questionnaire. Covariance matrices were used, and fit indices were calculated. All fit indices proved to be acceptable. The relative chi-square (χ2/df) was equal to 2.04 (p < .001).
The RMSEA of the model was .039 (90% CI = .001-.063), and the SRMR was .030. All comparative indices of the model, including GFI, AGFI, CFI, NNFI, and NFI, were at least .80 (.88, .84, .92, .90, and .80, respectively). Table 3 presents the item-scale correlations for the DUAMMT questionnaire. As can be seen, all coefficients are higher than .20, and most of them are higher than .40. Perceived barriers and perceived positive effects had the lowest and the highest item-scale correlations, respectively.
Reliability
In order to measure reliability, Cronbach's alpha was calculated separately for the DUAMMT as a whole and for each factor of the DUAMMT. The Cronbach's alpha coefficient for the DUAMMT was .93 and ranged from .70 to .79 for its subscales, which is well above the acceptable threshold. Therefore, no items of the scale were omitted in this phase. In addition, a test-retest analysis was conducted to assess the stability of the DUAMMT questionnaire, with satisfactory results. The ICC was .94 (good to excellent) for the DUAMMT and ranged from .774 to .970 for the subscales of the DUAMMT, lending support to the stability of the scale. The results are presented in Table 4. (Fig. 1: Scree plot for determining the factors of the designed instrument.)
Discussion
Because no questionnaire to measure attitudes toward methadone exists in Iran, the present study, as initial research, described the development and psychometric properties of a questionnaire for assessing attitudes toward methadone maintenance treatment in Iran. The results demonstrated that the final 17-item DUAMMT questionnaire is a robust, valid, and reliable questionnaire that comprises four subscales (perceived barriers, perceived concerns, methadone side effects, and perceived positive effects).
An overall literature review revealed similar studies that developed scales for assessing attitudes toward methadone. For instance, the 36-item Brown questionnaire assessing attitudes toward methadone contains two factors (barriers and benefits of methadone) [58]. Similarly, Kayman et al. developed a 14-item questionnaire to measure beliefs about methadone that contained four constructs: benefits of MMT, treatment-related barriers, reduction of crime, and feelings about leaving the methadone program [59]. In addition, Caplehorn developed a 53-item questionnaire to measure attitudes toward addiction and methadone, which included knowledge about methadone maintenance, disapproval of drug use, abstinence orientation, attitude toward methadone, and attitude toward illegal drugs. In Caplehorn's study, attitudes toward methadone were not specifically investigated in their questionnaire; they considered only attitudes toward illegal drugs [60]. Also, Schwartz and colleagues assessed attitudes toward methadone using an attitudes toward methadone scale that contains 28 items related to perceptions of methadone's potential helpfulness, negative physical and cognitive effects associated with methadone, and the perceived purpose of methadone treatment.
It is noteworthy that, so far, none of these tools have been translated or used in any research in Iran. Other similar tools focus mainly on perceived barriers and benefits, whereas the DUAMMT covers perceived barriers in detail and additionally examines methadone side effects, which most of the participants in the qualitative study mentioned.
The items of the DUAMMT were developed using a qualitative study. Thereafter, we conducted both exploratory and confirmatory factor analyses; the results showed that the structure of the questionnaire was good. The EFA showed that the questionnaire explained 60% of the total variance, and the CFA indicated that the factor structure of the questionnaire was suitable. In this study, the χ2/df ratio was 1.43, the GFI for the model was 0.001, the SRMR was 0.80, and the NNFI was 0.92. These results indicate that the model was a very good fit for our data. According to the EFA, four latent factors were extracted. These subscales were named by considering the underlying concepts and after several meetings with expert panel members. In the present study, we also applied CFA, which is an advantage compared to the analyses of other studies, such as those by Kayman and colleagues [59] or Brown and colleagues [58].
The internal consistency of the final instrument as assessed by the Cronbach's alpha and the test-retest coefficient was found to be .70 and .93, respectively, indicating acceptable reliability and homogeneity of the items of the DUAMMT. Compared to the Brown questionnaire, the Cronbach's alpha of the DUAMMT was higher at 0.89 vs 0.68. The Caplehorn scale calculated only the overall Cronbach's alpha, with a value of 0.89 [60]. The reliability of the present questionnaire was also higher than that of the Kayman et al. [59] questionnaire (0.60), indicating the fundamental reliability of this tool among the patients struggling with drug abuse.
The DUAMMT is able to identify attitudes toward methadone as well as predict retention among patients. Kayman and colleagues, utilizing an abbreviated version of the methadone scale, found that attitudes toward methadone predict retention in methadone treatment [59]. Based on the results, the validity proved to be good, as well as the reliability and stability of the questionnaire. There is a lack of appropriate tools to measure attitudes toward methadone in Iran. This questionnaire can be used as a standard questionnaire in future studies.
The present study, however, has some limitations. First, with regard to the sampling, we only interviewed drug users who were treated in MMT centers in Sanandaj. Because these patients are culturally homogeneous, their viewpoints may not generalize to those of patients treated in other cultures. Consequently, it might be interesting for future studies to test the reliability and validity of the DUAMMT in a sample of drug users from different cultural backgrounds and areas. Second, regarding the sampling, 22% of the patients in the present study were unemployed and 100% were men. In future studies, it would be necessary to examine the psychometric properties of the DUAMMT in patients from both urban and rural areas with different levels of education, employment status, and economic status. (Fig. 2: A four-factor model for the questionnaire obtained through confirmatory factor analysis, n = 120.) Third, the DUAMMT was developed and tested using only samples of men; as a result, it may not be representative of women with opioid use disorders. Future studies should examine its validity and reliability among women as well. Furthermore, in this study, we did not test how the DUAMMT is associated with similar scales. However, one of the strengths of the study is that two separate samples were recruited for the EFA and CFA. Attitude toward methadone treatment is related to retention in treatment, and long-term retention in treatment has favorable outcomes for both patients and society.
Conclusions
Generally, the findings of the present study suggest that the DUAMMT is a valid and reliable instrument to measure the attitudes towards MMT among male opioid users in Iran. Further studies in different populations, and particularly with women, are recommended to establish stronger psychometric properties for the questionnaire. Such studies can enhance the tailoring of appropriate MMT to optimize successful outcomes among people with opioid use disorders.
Racks, Leibniz algebras and Yetter-Drinfel'd modules
A Hopf algebra object in Loday and Pirashvili's category of linear maps entails an ordinary Hopf algebra and a Yetter-Drinfel'd module. We equip the latter with a structure of a braided Leibniz algebra. This provides a unified framework for examples of racks in the category of coalgebras discussed recently by Carter, Crans, Elhamdadi and Saito.
INTRODUCTION
The subject of the present paper is the relation between racks, Leibniz algebras and Yetter-Drinfel'd modules.
An augmented rack (or crossed G-module) can be defined as a Yetter-Drinfel'd module over a group G, viewed as a Hopf algebra object in the symmetric monoidal category (Set, ×). Explicitly, it is a right G-set X together with a G-equivariant map p : X → G where G carries the right adjoint action of G. A main application of racks is the construction of invariants of links and tangles, see e.g. [3,6,7] and the references therein.
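Spelled out, the equivariance condition just described reads as follows; this is the standard formulation, with x · g written here for the right action.

```latex
p \colon X \longrightarrow G, \qquad
p(x \cdot g) \;=\; g^{-1}\, p(x)\, g \quad \text{for all } x \in X,\ g \in G,
\qquad x \triangleleft y \;:=\; x \cdot p(y).
```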
Leibniz algebras are vector spaces equipped with a bracket that satisfies a form of the Jacobi identity, but which is not necessarily antisymmetric, see Definition 2 below. They were discovered by A.M. Blokh [2] in 1965, and then later rediscovered by J.-L. Loday in his search of an understanding for the obstruction to periodicity in algebraic K-theory [15]. In this context the problem of the integration of Leibniz algebras arose, that is, the problem of finding an object that is to a Leibniz algebra what a Lie group is to its Lie algebra. Lie racks provide one possible solution, see [4,5,12].
Analogously to augmented racks over groups, the Yetter-Drinfel'd modules M over a Hopf algebra H in (Vect, ⊗) form the Drinfel'd centre of the monoidal category of right H-modules, see Section 4.1. Taking in an H-tetramodule (bicovariant bimodule) M the invariant elements inv M with respect to the left coaction defines an equivalence of categories between tetramodules and Yetter-Drinfel'd modules. Thus they are the coefficients in Gerstenhaber-Schack cohomology [8]. Another application is in the classification of pointed Hopf algebras, see e.g. [1].
Our aim here is to directly relate Leibniz algebras to Yetter-Drinfel'd modules, starting from the fact that the universal enveloping algebra of a Leibniz algebra gives rise to a Hopf algebra object in the category LM of linear maps [16], see Section 2.3. We extend some results from Woronowicz's theory of bicovariant differential calculi [23], which are dual to Hopf algebra objects in LM. In particular, we show that one can construct braided Leibniz algebras as studied by V. Lebed [14] by generalising Woronowicz's quantum Lie algebras of finite-dimensional bicovariant differential calculi. This allows us to study racks and Leibniz algebras in the same language, which provides in particular a unified approach to [3, Proposition 3.1] and [3, Proposition 3.5], see Examples 4 and 5 at the end of the paper.
The paper is structured as follows: Section 2 recalls basic facts and definitions about the category LM of linear maps and the construction of the universal enveloping algebra of a Leibniz algebra. In Section 3 we explore analogues in LM of functors relating groups and Lie algebras to Hopf algebras, with a view towards the integration problem of Lie algebras in LM.
In particular we point out that the linearisation p : kX → kG of an augmented rack p : X → G is not a Hopf algebra object in LM, but instead a map of kG-modules and comodules, see Proposition 3. Section 4 recalls background on Yetter-Drinfel'd modules over bialgebras. The main section is Section 5 where we prove Theorem 1 and finish by discussing concrete examples.
Acknowledgements: UK and FW thank UC Berkeley where this work took its origin. FW furthermore thanks the University of Glasgow where this work was finalised. UK is supported by the EPSRC Grant "Hopf algebroids and Operads" and the Polish Government Grant 2012/06/M/ST1/00169.
ALGEBRAIC OBJECTS IN LM
In this section we recall the necessary background on the category of linear maps, algebraic objects therein, and the relevance of these for the theory of Leibniz algebras, mainly from [16,17]. Throughout we work with vector spaces over a field k, although the results can be generalised to other base categories. An unadorned ⊗ denotes the tensor product over k.
2.1. The tensor categories LM and LM*. The following definition goes back to Loday and Pirashvili [16]: The category of linear maps LM has linear maps f : V → W between vector spaces as objects, which are usually depicted by vertical arrows with V upstairs and W downstairs. A morphism φ between two linear maps (f : V → W) and (f′ : V′ → W′) is a pair of linear maps (φ₁ : V → V′, φ₀ : W → W′) such that f′ ∘ φ₁ = φ₀ ∘ f. The infinitesimal tensor product between f and f′ is defined to be the linear map f ⊗ f′ : (V ⊗ W′) ⊕ (W ⊗ V′) → W ⊗ W′, v ⊗ w′ + w ⊗ v′ ↦ f(v) ⊗ w′ + w ⊗ f′(v′). The infinitesimal tensor product turns LM into a symmetric monoidal category with unit object the zero map 0 : {0} → k.
Remark 1.
Alternatively, LM is the category of 2-term chain complexes with a truncated tensor product; one has just omitted the terms of degree two in the tensor product of complexes. One can analogously define categories LMₙ of chain complexes of length n and a tensor product which is truncated in degree n, so in this sense LM = LM₁ and Vect = LM₀. Taking the inverse limit, one passes from these truncated versions to the category of chain complexes with the ordinary tensor product Chain = LM∞. △ Interpreting LM as the category of cochain rather than chain complexes of length 1 and depicting them consequently by arrows pointing upwards results in a different monoidal structure ⊗* on LM, in which the tensor product of f : V → W and f′ : V′ → W′ has underlying object V ⊗ V′ → (V ⊗ W′) ⊕ (W ⊗ V′). The resulting tensor category will be denoted LM*.
Algebraic objects in LM.
In a symmetric monoidal tensor category, one can define associative algebra objects, Lie algebra objects and bialgebra objects. Loday and Pirashvili exhibit the structure of these in the tensor category LM. For this, they use that the inclusion functor [...] with T given in Sweedler notation by T(m) = −S(m₍₋₁₎) m₍₀₎ S(m₍₁₎). Thus T is uniquely determined by the antipode S on H and is not additional data.
Remark 3.
Dually, a bialgebra object f : H → M in LM‹ consists of a bialgebra H in Vect and an H-tetramodule M such that f is a derivation and bicolinear. If M = span_k{g f(h) | g, h ∈ H}, this structure is referred to as a first order bicovariant differential calculus over H [23], see e.g. [13] for a pedagogical account. Linear duality F : V ↦ V* yields a (weakly) monoidal functor F : LM → (LM‹)^op, which is strongly monoidal on the subcategory of finite-dimensional vector spaces. In Remark 7 below we will describe the class of bialgebras in LM that is under F dual to first order bicovariant differential calculi. △ 2.3. Universal enveloping algebras in LM. Loday and Pirashvili furthermore construct in [16] a pair of adjoint functors P (primitives) and U (universal enveloping algebra) associating a Lie algebra object in LM to a Hopf algebra object in LM, and vice versa, and prove an analogue of the classical Milnor-Moore theorem in this context. For a given Lie algebra object f : M → g, the enveloping algebra is φ : Ug ⊗ M → Ug, u ⊗ m ↦ u f(m). The underlying Ug-tetramodule structure on Ug ⊗ M is as follows: the right Ug-action on Ug ⊗ M is induced by (u ⊗ m) · x = ux ⊗ m + u ⊗ m·x for all x ∈ g, all u ∈ Ug and all m ∈ M, where m·x denotes the right g-action on M. The left action is by multiplication on the left-hand factor. The left and right Ug-coactions are given by the coproduct on the left-hand factor, that is, for x ∈ g, m ∈ M they are (x ⊗ m) ↦ 1 ⊗ (x ⊗ m) + x ⊗ (1 ⊗ m) and (x ⊗ m) ↦ (1 ⊗ m) ⊗ x + (x ⊗ m) ⊗ 1.
Leibniz algebras.
We finally recall from [16] that a particular class of Lie algebra objects in LM arises in a canonical way from Leibniz algebras: Definition 2. A vector space g together with a bilinear bracket [·, ·] : g ⊗ g → g is called a (right) Leibniz algebra, in case for all x, y, z ∈ g the identity [[x, y], z] = [x, [y, z]] + [[x, z], y] holds.
In particular, any Lie algebra is a Leibniz algebra. Conversely, for any Leibniz algebra g the quotient by the Leibniz ideal generated by the squares [x, x] for x ∈ g is a Lie algebra g_Lie, and the right adjoint action of g_Lie on itself lifts to a well-defined right action on g. So by construction, the canonical quotient map π : g → g_Lie is a Lie algebra object in LM. The universal enveloping algebra of g as defined in [17] is exactly the abelian extension of the associative algebra Ug_Lie in Vect that is defined by the universal enveloping algebra U(g → g_Lie), see [16, Theorem 4.7].
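The following small script (not part of the original text) illustrates the first claim of this paragraph numerically: for the commutator bracket of matrices, which is a Lie bracket, the right Leibniz identity of Definition 2 holds. The matrix size and the use of random test matrices are arbitrary choices made only for this sanity check.

```python
# Numerical check that a Lie bracket (here: matrix commutators) satisfies the
# *right* Leibniz identity  [[x, y], z] = [x, [y, z]] + [[x, z], y].
import numpy as np

def bracket(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(0)
for _ in range(100):
    x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
    lhs = bracket(bracket(x, y), z)
    rhs = bracket(x, bracket(y, z)) + bracket(bracket(x, z), y)
    assert np.allclose(lhs, rhs)
print("right Leibniz identity holds for matrix commutators")
```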
THE PROBLEM OF INTEGRATING LIE ALGEBRAS IN LM
In this section we discuss the direct analogues in LM of some functorial constructions that relate groups to Lie algebras, with a view towards the problem of integrating Leibniz algebras to some global structure. Augmented racks and their linearisations are one possible framework for these, so we end by recalling some background on racks.
From Lie algebras to groups. Consider the following diagram of functors:
Lie Here Lie is the category of Lie algebras over the field k, Grp is the category of groups, Hopf is the category of k-Hopf algebras, and ccHopf and cHopf are its subcategories of cocommutative respectively commutative Hopf algebras. The functor U is that of the enveloping algebra, and χ is the functor of characters, while H˝is the Hopf dual of a Hopf algebra H, that is, the Hopf algebra of matrix coefficients of finite-dimensional representations, see e.g. [13,20].
An affine algebraic group G over an algebraically closed field k of characteristic 0 can be recovered in this way from its Lie algebra g :" LiepGq as χpUg˝q provided G is perfect, i.e. G " rG, Gs. More generally, if G has unipotent radical, then G is isomorphic to the characters on the subalgebra of basic representative functions on Ug, see [10] for details.
Characters of Hopf algebra objects in LM.
The functor χ(−) (characters) can be extended to Hopf algebra objects in LM, hence one might attempt to use it to integrate Lie algebras in LM and in particular Leibniz algebras. By definition, a character χ of a Hopf algebra object f : M → H is an algebra morphism in LM from f : M → H to the unit of the tensor category LM, which is simply 0 : {0} → k. Such a character is a pair (χ₁ : M → {0}, χ₀ : H → k) making the obvious square commute, so one obtains just characters χ₀ of H, because χ₁ is forced to be the zero map. The same applies to Hopf algebra objects in LM‹, that is, the component of the character associated to the tetramodule vanishes. Thus we have: Proposition 1. The functor χ(−) (characters), applied to a Hopf object in LM or LM‹, results just in characters of the underlying Hopf algebra H.
Hence the integration of Lie algebra objects in LM (and thus in particular Leibniz algebras) along the lines outlined in the previous section must fail. One can associate to a Lie algebra object in LM its universal enveloping algebra, and then by duality some commutative Hopf algebra object in LM ‹ , but characters of this object will always be only characters of the underlying Hopf algebra.
Formal group laws in LM.
Another approach to the integration of Lie algebras is that of formal group laws, see [22]. Here one studies a continuous dual of Ug.
Recall that a formal group law on a vector space V is a linear map F : S(V ⊕ V) → V which is unital and associative, i.e. its extension to a coalgebra morphism F′ : S(V) ⊗ S(V) → S(V) is an associative product on the symmetric algebra S(V).
Mostovoy [21] transposes this definition into the realm of LM. Namely, a formal group law in LM is a map whose extension to a morphism of coalgebra objects is an algebra object in LM. Starting with a Lie algebra object M → g in LM, the product in the universal enveloping algebra U(M → g) composed with the projection onto the primitive subspace yields a formal group law, using the identification of U(M → g) with S(M → g) provided by the analogue of the Poincaré–Birkhoff–Witt theorem for Lie algebra objects in LM. Mostovoy [21] shows then: Proposition 2. The functor that assigns to a Lie algebra object M → g in LM the primitive part of the product in U(M → g) is an equivalence of categories of Lie algebra objects in LM and of formal group laws in LM.
An interesting problem that arises is to specify what this framework gives for the Lie algebra objects in LM coming from a Leibniz algebra, i.e. for those of the form π : g Ñ g Lie . Furthermore, one should clarify what the global objects associated to these formal group laws are. The results in the present paper are meant to motivate why augmented racks are a natural candidate, by going the other way and studying the Hopf algebra objects in LM that are obtained by linearisation from augmented racks.
3.4. Augmented racks. The set-theoretical version of LM is the category M of all maps X Ñ Y between sets X and Y . One defines an analogue of the infinitesimal tensor product in which the disjoint union of sets takes the place of the sum of vector spaces, and the cartesian product replaces the tensor product. This defines a monoidal category structure on M with unit object H Ñ t˚u. However, the latter is not terminal in M, thus one cannot define inverses, and a fortiori group objects.
One way around this "no-go" argument is to consider augmented racks: Definition 3. Let X be a set together with a binary operation denoted (x, y) ↦ x ✁ y such that for all y ∈ X, the map x ↦ x ✁ y is bijective, and for all x, y, z ∈ X, (x ✁ y) ✁ z = (x ✁ z) ✁ (y ✁ z). Then we call X a (right) rack. In case the invertibility of the maps x ↦ x ✁ y is not required, it is called a shelf.
The guiding example of a rack is a group together with its conjugation map pg, hq Þ Ñ g ✁ h :" h´1gh. Augmented racks are generalisations of these in which the rack operation results from a group action: Definition 4. Let G be a group and X be a (right) G-set. Then a map p : X Ñ G is called an augmented rack in case p satisfies the augmentation identity, i.e. for all g P G and all x P X (1) ppx¨gq " g´1 ppxq g.
In other words p is equivariant with respect to the G-action on X and the adjoint action of G on itself. The G-set X in an augmented rack p : X Ñ G carries a canonical structure of a rack by setting x ✁ y :" x¨ppyq.
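As a quick illustration (added here, not from the paper), the following brute-force check verifies the rack axioms for conjugation x ✁ y = y⁻¹xy in the symmetric group S₃, the guiding example above; the augmentation identity for p = id is built into the definition of the action used.

```python
from itertools import permutations

def compose(f, g):               # (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(f):
    inv = [0] * len(f)
    for i, v in enumerate(f):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))                       # the symmetric group S3
act = lambda x, g: compose(inverse(g), compose(x, g))  # right action x.g = g^-1 x g
rack = lambda x, y: act(x, y)                          # x <| y := x . p(y), p = id

for y in G:                                            # x -> x <| y is bijective
    assert len({rack(x, y) for x in G}) == len(G)
for x in G:
    for y in G:
        for z in G:
            # right self-distributivity: (x <| y) <| z = (x <| z) <| (y <| z)
            assert rack(rack(x, y), z) == rack(rack(x, z), rack(y, z))
print("conjugation makes S3 an augmented rack over itself (p = id)")
```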
Remark 4.
Any rack X can be turned into an augmented rack as follows: let AspXq be the associated group (see for example [6]) of X, which is the quotient of the free group on the set X by the relations y´1xy " x ✁ y for all x, y P X. Then there is a canonical map p : X Ñ AspXq assigning to x P X the class of x in AspXq which turns X into an augmented rack. △ A more conceptual point of view goes back to Yetter, confer [7]: a group is the same as a Hopf algebra object in the symmetric monoidal category Set withˆas monoidal structure. In this sense, right G-modules are just right G-sets while right G-comodules are just sets X equipped with a map p : X Ñ G. The augmentation identity (1) becomes the Yetter-Drinfel'd condition that we will discuss in detail in the next section. Thus augmented racks are the same as Yetter-Drinfel'd modules over G in Set, or, in other words, the category of augmented racks over G is the Drinfel'd centre of the category of right G-sets.
3.5. Linearised augmented racks. By linearisation, one obtains the group algebra kG of a group G which consequently is a Hopf algebra in Vect, see e.g. [11, p.51, Example 2]. Hence one might ask whether a linearisation of an augmented rack p : X → G defines a Hopf algebra object in LM. The functor k− (k-linearisation of a set) sends p : X → G to a linear map p : kX → kG. Consider kX as a kG-bimodule where kG acts on kX on the right via the given action and on the left via the trivial action. Consider further the two linear maps △_l : kX → kG ⊗ kX and △_r : kX → kX ⊗ kG given for x ∈ X by △_l(x) = p(x) ⊗ x and △_r(x) = x ⊗ p(x). Then we have: Proposition 3. The maps △_l, △_r turn kX into a kG-bicomodule such that p : kX → kG is a morphism of bicomodules and bimodules, where kG carries the left and right coaction given by the coproduct, the trivial left action, and the adjoint right action.
Proof. The augmentation identity p(x·g) = g⁻¹ p(x) g for all x ∈ X, g ∈ G shows that p is a morphism of bimodules. We have (p ⊗ 1)(△_r x) = p(x) ⊗ p(x) and (1 ⊗ p)(△_l x) = p(x) ⊗ p(x) for all x ∈ X, thus p is a morphism of bicomodules.
In particular, p : kX Ñ kG is not a Hopf algebra object in LM in general.
3.6. Regular functions on augmented racks. Taking the coordinate ring krXs of an algebraic set X is a contravariant functor, so applying it to an algebraic augmented rack p : X Ñ G gives rise to an algebra map p˚: krGs Ñ krXs which is most naturally considered in LM ‹ .
The right G-action on X induces a right krGs-comodule structure on krXs. Together with the trivial left comodule structure, krXs becomes a krGs-bicomodule. On krGs itself, we consider the bicomodule structure obtained from the trivial left coaction and the right adjoint coaction given in Sweedler notation by f Þ Ñ f p2q b Spf p1q qf p3q , and then obtain:
Proposition 4. p˚: krGs Ñ krXs is a morphism of bimodules and bicomodules.
Proof. For the augmented rack p : X → G, the augmentation identity expresses the commutativity of the square formed by the action map X × G → X, the conjugation action G × G → G, and the maps p × id and p. Applying the functor k[−] to this square yields a commutative diagram of coactions k[X] → k[X] ⊗ k[G]. This means exactly that p* is a morphism of right comodules. As the left coactions on k[G] and k[X] are trivial, it is a map of bicomodules.
The Yetter-Drinfel'd braiding.
It is well-known (see for example [11] p. 319) that the category of augmented racks over a fixed group G carries a braiding: Proposition 5. Define for augmented racks p₁ : X → G and p₂ : Y → G with respect to a fixed group G their tensor product X ⊗ Y by X × Y with the action (x, y)·g := (x·g, y·g) and the equivariant map p : X × Y → G being p(x, y) := p₁(x) p₂(y). Then the formula c_{X,Y} : X ⊗ Y → Y ⊗ X, c_{X,Y}(x, y) := (y, x·p₂(y)) defines a braiding on the category of augmented racks over G. This is just a special case of the Yetter-Drinfel'd braiding that we are going to study in detail next.
YETTER-DRINFEL'D MODULES
In this section we recall definitions and facts about Yetter-Drinfel'd modules over Hopf algebras in Vect that we need. For more information, the reader is referred to [11,13,19,20]. To a right H-module and right H-comodule M one associates the H-bimodule M_H := H ⊗ M, with actions and with left and right coactions built from the multiplication and comultiplication of H on the left-hand tensor factor and from the given action and coaction of M. These coactions and actions are compatible in the sense that M_H is a Hopf tetramodule if and only if M is a Yetter-Drinfel'd module: Definition 5. A Yetter-Drinfel'd module over H is a right module and right comodule M for which we have
(2)   (x · h_{(2)})_{(0)} ⊗ h_{(1)} (x · h_{(2)})_{(1)} = x_{(0)} · h_{(1)} ⊗ x_{(1)} h_{(2)}
for all x ∈ M and h ∈ H.
Remark 5.
If H is a Hopf algebra with antipode S, then the Yetter-Drinfel'd condition (2) is easily seen to be equivalent to
(x · h)_{(0)} ⊗ (x · h)_{(1)} = x_{(0)} · h_{(2)} ⊗ S(h_{(1)}) x_{(1)} h_{(3)}.
△
More precisely, H is a Hopf algebra if and only if M ↦ M_H defines an equivalence between the categories of Yetter-Drinfel'd modules and that of Hopf tetramodules. In this case, the inverse functor is given by taking the invariants with respect to the left coaction, N ↦ inv N, the subspace of elements sent by the left coaction to 1 ⊗ n. This is an equivalence of monoidal categories, where the tensor product of Hopf tetramodules is ⊗_H.
Example 1.
Let G be a group and M be a kG-Yetter-Drinfel'd module. Then M is in particular a kG-module, i.e. a G-module. The comodule structure of M is a G-grading of this G-module, M = ⊕_{h∈G} M_h. The Yetter-Drinfel'd compatibility condition, evaluated on a group element g = u ∈ G and a homogeneous element m ∈ M_h, reads (gm)_{(−1)} ⊗ (gm)_{(0)} = ghg⁻¹ ⊗ g·m. This means that the action of g ∈ G on M maps M_h to M_{ghg⁻¹}.
When the module M is a permutation representation of G, that is, is obtained by linearisation from a (right) G-set X, M » kX, then M is Yetter-Drinfel'd precisely when X carries the structure of an augmented rack. The full subcategory of the category of all Yetter-Drinfel'd modules over kG of these permutation modules has been studied first by Freyd and Yetter, see [7,Definition 4.2.3].
Example 2.
Recall from Section 2.3 that if f : M Ñ g is any Lie algebra object in LM, then the universal enveloping algebra construction in LM yields the Ug-tetramodule Ug b M. In this case, M is recovered as the Yetter-Drinfel'd module of left invariant elements, with trivial right coaction and right action being induced by the right g-module structure on M.
More generally, every right module over a cocommutative bialgebra H becomes a Yetter-Drinfel'd module with respect to the trivial right coaction.
The Yetter-Drinfel'd braiding revisited.
Every right H-module and right H-comodule M carries a canonical map
(4)   τ : M ⊗ M → M ⊗ M,   x ⊗ y ↦ y_{(0)} ⊗ x · y_{(1)}.
The following well-known fact characterises when τ is a braiding:
Proposition 6. The map (4) is a braiding on M if and only if M is a Yetter-Drinfel'd module.
One can view ker ε as a bicomodule with respect to the trivial left coaction h ↦ 1 ⊗ h and the right coaction △̃(h) := h_{(1)} ⊗ h_{(2)} − 1 ⊗ h, and then the inclusion map ι : ker ε → H is a coderivation. This is universal in the sense that every coderivation factors through ι. In particular, for a morphism of bimodules f : M → H which is also a coderivation one has:
(1) We have im f ⊆ ker ε.
(2) The restriction of f to f̃ : inv M → ker ε is right H-colinear with respect to the coaction △̃ on ker ε.
(3) If M is a tetramodule and f is H-bilinear, then f̃ is a morphism of Yetter-Drinfel'd modules.
Proof. (2) For left invariant m ∈ M, we have m_{(−1)} ⊗ m_{(0)} = 1 ⊗ m, so subtracting 1 ⊗ f(m) from the coderivation condition yields △̃(f(m)) = (f(m))_{(1)} ⊗ (f(m))_{(2)} − 1 ⊗ f(m) = m_{(0)} ⊗ f(m_{(1)}).
(3) The right action on inv M respectively ker ε is obtained from the bimodule structure on M respectively H by passing to the right adjoint actions, so f̃(m ◁ h) = f(S(h_{(1)}) m h_{(2)}) = S(h_{(1)}) f(m) h_{(2)} = f̃(m) ◁ h.
Remark 7.
In Remark 3 we mentioned that first order bicovariant differential calculi in the sense of Woronowicz are formally dual to certain bialgebras in LM. We can explain this now in more detail: given a first order bicovariant differential calculus over a Hopf algebra A, that is, a bicolinear derivation d : A → Ω with values in a tetramodule Ω which is minimal in the sense that Ω = span_k{a db | a, b ∈ A}, one defines R_{(Ω,d)} := {a ∈ ker ε | S(a_{(1)}) da_{(2)} = 0}.
It turns out that pΩ, dq Þ Ñ R pΩ,dq establishes a one-to-one correspondence between first order bicovariant differential calculi and right ideals in ker ε that are invariant under the right adjoint coaction a Þ Ñ a p2q b Spa p1q qa p3q of A, see [13,Proposition 14.1 and Proposition 14.7]. When A " krGs is the coordinate ring of an affine algebraic group, Ω are the Kähler differentials and da is the differential of a regular function a, then R pΩ,dq is just pker εq 2 and ker ε{R pΩ,dq is the cotangent space of G in the unit element.
Motivated by this example, one introduces the quantum tangent space T pΩ,dq :" tφ P A˚| φp1q " 0, φpaq " 0 @ a P R pΩ,dq u, where A˚" Hom k pA, kq denotes the dual algebra of A. Provided that Ω is finite-dimensional in the sense that dim k inv Ω ă 8, the quantum tangent space belongs to the Hopf dual H :" A˝of A and uniquely characterises the calculus up to isomorphism, see [13,Proposition 14.4] and the subsequent discussion. By definition, T pΩ,dq is then a subspace of ker ε Ă H which is by [13, (14)] invariant under the right coaction△ and as a consequence of [13,Proposition 14.7] it is also invariant under the right adjoint action of H on itself; in other words, the quantum tangent space is a Yetter-Drinfel'd submodule of ker ε, and if we equip M :" HbT pΩ,dq with the corresponding H-tetramodule structure we can extend the inclusion of the quantum tangent space into ker ε to a Hopf algebra object f : M Ñ H in LM. Thus first order bicovariant differential calculi should be viewed as structures dual to Hopf algebra objects f : M Ñ H in LM for which the induced mapf is injective. △
BRAIDED LEIBNIZ ALGEBRAS
The definition of a Leibniz algebra extends straightforwardly from Vect to other additive braided monoidal categories [14]. In this final section we discuss the construction of such generalised Leibniz algebras from Hopf algebra objects in LM which is the main objective of our paper.
Definition.
The following structure is meant to generalise both racks and Leibniz algebras in their role of domains of objects in LM:
Definition 6.
A braided Leibniz algebra is a vector space M together with linear maps ✁ : M ⊗ M → M and τ : M ⊗ M → M ⊗ M, where we write τ(y ⊗ z) = z⟨1⟩ ⊗ y⟨2⟩, satisfying
(5)   (x ✁ y) ✁ z = x ✁ (y ✁ z) + (x ✁ z⟨1⟩) ✁ y⟨2⟩   for all x, y, z ∈ M.
Remark 8.
We do not assume that τ maps elementary tensors to elementary tensors; the notation y⟨1⟩ ⊗ x⟨2⟩ should be understood symbolically, like Sweedler's notation △(h) = h_{(1)} ⊗ h_{(2)} for the coproduct of an element h of a coalgebra H, which is also in general not an elementary tensor. △
Remark 9. It is natural to ask for τ to satisfy the braid relation (Yang-Baxter equation), so that M is just a braided Leibniz algebra as studied e.g. in [14]. Instead of assuming this a priori we rather characterise this case in the examples that we study below, and later we investigate the consequences of this condition. △
Example 3. When τ is the tensor flip, y⟨1⟩ ⊗ x⟨2⟩ = y ⊗ x, we recover Definition 2 from Section 2.4 with x ✁ y =: [x, y], as the Leibniz rule (5) becomes the (right) Jacobi identity in the form [[x, y], z] = [x, [y, z]] + [[x, z], y]. △
Now let H be a bialgebra, M a right H-module and right H-comodule, and q : M → H a linear map, and define x ✁ y := x q(y). Then (M, τ, ✁) is a braided Leibniz algebra with respect to τ from (4), provided that
(6)   h_{(1)} q(x · h_{(2)}) = q(x) h   and
(7)   △(q(x)) = 1 ⊗ q(x) + q(x_{(0)}) ⊗ x_{(1)}
hold for all x ∈ M and h ∈ H.
Remark 10.
Observe that applying id_H ⊗ ε to (7) implies q(x) = ε(q(x))1 + q(x), so this condition necessarily requires im q ⊆ ker ε ⊂ H. If H is a Hopf algebra, then (6) is equivalent to the right H-linearity of q with respect to the right adjoint action of H on ker ε. Furthermore, the condition (7) can be stated also as saying that q : M → ker ε is right H-colinear with respect to the right coaction △̃ on ker ε from Section 4.3. △ Thus we can restate the above proposition also as follows: Corollary 1. Let H be a Hopf algebra, M a Yetter-Drinfel'd module over H, and q : M → ker ε a morphism of Yetter-Drinfel'd modules, where ker ε carries the right adjoint action and the right coaction △̃. Then x ✁ y := x q(y) turns M into a braided Leibniz algebra with respect to the Yetter-Drinfel'd braiding (4).
Leibniz algebras from Hopf algebra objects in LM.
Altogether, the above results provide a proof of our main theorem: Proof of Theorem 1. From the description of Hopf algebra objects in the category of linear maps LM in Section 2.1, it follows that f : M → H is the data of a Hopf algebra H, a tetramodule M and a morphism of bimodules f which is also a coderivation. Hence Lemma 1 proves the first part of the theorem. Now Corollary 1 applied to q := f̃ yields the structure of a braided Leibniz algebra on inv M.
Now we see that classical Leibniz algebras can be viewed as a special case of the constructions from this subsection: Example 4. Let pg, r¨,¨sq be a (right) Leibniz algebra in the category of kvector spaces with the flip as braiding as in Example 3. We have recalled in Section 2.2 how to regard g as a Lie algebra object in LM, and in Section 2.3 how to associate to it its universal enveloping algebra, which is a Hopf algebra object φ : Ug Lie b g Ñ Ug Lie in LM. The canonical quotient map π : g Ñ g Lie is given by πpxq " φp1 b xq.
Recall now from Example 2 that g is recovered as inv pUg Lie b gq (with trivial right coaction), and in this sense, π coincides withφ. The Yetter-Drinfel'd braiding thus becomes the tensor flip, and the generalised Leibniz bracket ✁ on g is the original one. This generalises the corresponding example for Lie algebras [19] p. 63, [3] Proposition 3.5, to Leibniz algebras. △ The above example should be viewed as an infinitesimal variant of the following one: Example 5. Let X be a finite rack and G :" AspXq be its associated group [6]. Then p : X Ñ G is an augmented rack, see Remark 4 above. We have seen in Proposition 3 that the linearisation p : kX Ñ kG is not a Hopf algebra object in LM, so we cannot apply Theorem 1 in this situation in order to obtain a Leibniz algebra structure on kX.
However, recall from Example 1 that kX is by the very definition of an augmented rack a Yetter-Drinfel'd module over the group algebra kG, and we obtain a morphism q : kX → ker ε ⊂ kG, x ↦ p(x) − 1, of Yetter-Drinfel'd modules. Now we can apply Corollary 1 to obtain a braided Leibniz algebra structure x ✁ y = x(p(y) − 1). This construction works for all augmented racks, so augmented racks can be converted into special examples of braided Leibniz algebras. In this way, we recover [3, Proposition 3.1]. △ Example 6. If T ⊂ H := A° is the quantum tangent space of a finite-dimensional first order bicovariant differential calculus over a Hopf algebra A and f : H ⊗ T → H is the corresponding Hopf algebra object in LM (recall Remark 7), then the generalised Leibniz bracket from Theorem 1 becomes x ✁ y = x · f̃(y) = S(y_{(1)}) x y_{(2)}. That is, the generalised Leibniz algebra structure is precisely the quantum Lie algebra structure of T, compare [13, Section 14.2.3].
Example 7.
We end by explicitly computing the R-matrix representing the Yetter-Drinfel'd braiding from Example 4 for the Heisenberg-Voros algebra g. This is the 3-dimensional Leibniz algebra spanned by x, y, z such that the only non-trivial brackets are [x, x] = z, [y, y] = z, [x, y] = z, [y, x] = −z. This Leibniz algebra can also be described as a 1-dimensional central extension of the abelian 2-dimensional Lie/Leibniz algebra, but rather than being antisymmetric, the cocycle has a symmetric and an antisymmetric part (in contrast to the Heisenberg Lie algebra).
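The following quick check (added for illustration) confirms that the bracket table above defines a right Leibniz algebra: since all brackets take values in the span of z, which brackets to zero with everything, both sides of the Leibniz identity vanish identically.

```python
from itertools import product

def bracket(a, b):
    # a, b are coefficient triples (coef_x, coef_y, coef_z);
    # only the z-component of a bracket can be non-zero
    return (0, 0, a[0]*b[0] + a[0]*b[1] - a[1]*b[0] + a[1]*b[1])

def add(u, v):
    return tuple(p + q for p, q in zip(u, v))

test = list(product(range(-1, 2), repeat=3))   # small grid of test vectors
for a in test:
    for b in test:
        for c in test:
            lhs = bracket(bracket(a, b), c)
            rhs = add(bracket(a, bracket(b, c)), bracket(bracket(a, c), b))
            assert lhs == rhs                  # both sides are (0, 0, 0)
print("the Heisenberg-Voros bracket satisfies the right Leibniz identity")
```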
The shelf structure on g is given for constants a, b, c, d, a′, b′, c′, d′ ∈ k by
(a + bx + cy + dz) ✁ (a′ + b′x + c′y + d′z) = aa′ + a′bx + a′cy + z(a′d + bb′ + bc′ − cb′ + cc′).
One computes the R-matrix representing τ explicitly in the induced basis; the resulting matrix does not square to 1.
Harbourne constants, pull-back clusters and ramified morphisms
We describe the effect of ramified morphisms on Harbourne constants of reduced effective divisors. With this goal, we introduce the pullback of a weighted cluster of infinitely near points under a dominant morphism between surfaces, and describe some of its basic properties. As an application, we describe configurations of curves with transversal intersections and $H$-index arbitrarily close to $-25/7\simeq -3.571$, smaller than any previously known result.
Introduction
The question whether, in every algebraic surface, self-intersections of irreducible and reduced curves are bounded from below has intrigued algebraic geometers for decades, and continues to do so. The so-called Bounded Negativity Conjecture (BNC for short) is an old folklore conjecture, now formally posed by Bauer et al. in [4] (where some of its history is also explained), that asserts an affirmative answer: Conjecture 1.1 (BNC). Let S be a smooth complex projective surface. Then there exists a positive integer b(S) ∈ Z such that for every irreducible and reduced curve C ⊂ S one has C² ≥ −b(S).
By curve in this paper we mean an effective (reduced) divisor on S. In recent years the question of bounded negativity has received considerable attention, especially via the approach of trying to determine classes of surfaces S which satisfy Conjecture 1.1 (see [4,8,9,12]). In particular, [3,Problem 1.2] raised the question whether bounded negativity is a birational property, which leads to the following question: Question 1.2. Let S be a smooth complex projective surface, and assume that b(S) ∈ Z is a positive integer such that for every irreducible and reduced curve C ⊂ S one has C² ≥ −b(S). Does every surface obtained from S by a finite sequence of point blowups also satisfy bounded negativity?
By curve in this paper we mean an effective (reduced) divisor on S. In recent years the question of bounded negativity has received considerable attention, especially via the approach of trying to determine classes of surfaces S which satisfy Conjecture 1.1 (see [4,8,9,12]). In particular, [3,Problem 1.2] raised the question whether bounded negativity is a birational property, which leads to the following question: Question 1.2. Let S be a smooth complex projective surface, and assume that b(S) ∈ Z is a positive integer such that for every irreducible and reduced emphasis on the plane case [3]. These indices can be viewed as the average intersection numbers of negative curves by the number of singular points that they possess. Definition 1.3. Let C ⊂ P 2 be a reduced curve of degree d, and let K ⊂ P 2 be a finite set. The Harbourne constant of C at K is defined as where |K| denotes the cardinality of K. The Harbourne index of a curve C with ordinary singularities (a curve singularity is ordinary if it consists of smooth branches meeting transversely) is the Harbourne constant of C at the set of singular points: h(C) = H(C, Sing(C)).
The most negative Harbourne index for curves with ordinary singularities found so far in the literature is provided by Wiman's configuration of lines W [3], which has h(W ) = −225/67 ≃ −3.358. In this work we provide more negative examples: Theorem B. There exist reduced curves C ⊂ P 2 with ordinary singularities and Harbourne indices h(C) arbitrarily close to −25/7 ≃ −3.571.
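For orientation, the following snippet (not from the paper) evaluates Definition 1.3 on two line arrangements. The combinatorial data used for Wiman's arrangement (45 lines with 120 triple, 45 quadruple and 36 quintuple points) is the commonly quoted count and is an assumption of this illustration.

```python
from fractions import Fraction

def harbourne_index(degree, multiplicities):
    # h(C) = (d^2 - sum of squared multiplicities) / (number of singular points)
    num = degree**2 - sum(m**2 for m in multiplicities)
    return Fraction(num, len(multiplicities))

# three concurrent lines: a single ordinary triple point
print(harbourne_index(3, [3]))                 # 0

# Wiman's configuration of 45 lines (assumed point counts)
wiman = [3]*120 + [4]*45 + [5]*36
print(harbourne_index(45, wiman))              # -225/67, about -3.358
```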
In sharp contrast with all previously known examples with very negative Harbourne index, the curves in Theorem B do not have a large stabilizer group in PGL 3 (C).
For curves with non-ordinary singularities (such as the examples mentioned above showing lim inf n→∞ h(P 2 , n) ≤ −4), it is natural to modify the definition of Harbourne constants and indices by allowing some of the points in K to be infinitely near. In section 2.1 we introduce the notion of Harbourne constant at a multi-cluster of infinitely near points and extend the definition of Harbourne index to arbitrary curves on smooth surfaces.
In order to prove Theorems A and B, we study pullbacks of suitable curves by ramified morphisms; in fact, the effect of ramified morphisms on H-constants is the main theme of this work. Our motivation for this study stems from [11], where we observed that the pullback of a reduced curve C ⊂ P 2 by a ramified morphism P 2 → P 2 may have a more negative H-index than the original curve C. Even if one is primarily interested in curves with unknown negative curves, or to Brian Harbourne, one of the main contributors in these developments. ordinary singularities, their pullbacks by ramified morphisms may acquire non-ordinary singularities; to understand these, we apply the methods of [6]. In particular, clusters of infinitely near points and the corresponding extension of Definition 1.3 become essential tools.
Let us stress that the idea to use pullback curves is rather natural in the context of negative curves. For instance, in positive characteristic it leads to a well-known counterexample to the BNC -using the powers of the Frobenius endomorphism on the product X = C × C, where C is a genus g(C) ≥ 2 curve, one can create an unbounded negativity phenomenon. In sharp contrast, T. Bauer et al. proved in [4] that over the complex numbers every surface admitting a surjective endomorphism which is not an isomorphism has bounded negativity. We expect that Theorem A actually holds on every smooth projective surface admitting a ramified endomorphism; such a surface S must have κ(S) = −∞ by [7,Lemma 2.3].
Given a surjective morphism f : S → S ′ of surfaces and a set K of proper and infinitely near points (more precisely, a multi-cluster, see section 2.1) on S ′ with assigned multiplicities, we define in section 3.1 a pull-back multicluster f * (K) with multiplicities, such that for every curve C going through the points of K with the assigned multiplicities, f * (C) goes through f * (K) with the pullback multiplicities. If f does not contract any curve to a point, we can control the number of points in f * (K) and their multiplicities using the local multiplicity ν p (f ) of f at each proper point p ∈ f * (K) (writing f in local coordinates as a pair of power series, ν p (f ) is the minimum of the orders of both power series, see section 3.1 or [6]). We obtain the following (cf. [12,Lemma 7]).
Theorem C. Let f : S → S′ be a finite morphism of smooth projective surfaces, C ⊂ S′ a reduced curve, and K a multi-cluster on S′. Assume that f*(C) is reduced and H(C, K) ≤ 0. Then H(f*(C), f*(K)) ≤ H(C, K).
The notion of pullback cluster and Theorem C form the technical core of this paper. Because of their generality we expect that their application will not be restricted to the bounded negativity conjecture, so these might be of independent interest.
Preliminaries
By a surface S we mean a connected 2-dimensional complex (analytic) manifold (so, smooth and irreducible). Unless otherwise stated we always work with the analytic topology.
A bimeromorphic map between surfaces is a proper holomorphic map π : S π → S such that there exist proper analytic subsets T ⊂ S and T ′ ⊂ S π such that π restricts to an isomorphism S π \ T ′ → S \ T . A bimeromorphic model dominating a given surface S is a surface S π with a bimeromorphic map π : S π → S.
Clusters and H-constants
Singularities of curves on a smooth surface S will be described in terms of their clusters of multiple points, in the spirit of [5], i.e., taking into account the infinitely near multiple points -which have to be blown up in every embedded resolution. This description will allow a convenient treatment of pullback curves and their H-constants. We begin by recalling the notions of infinitely near points and clusters.
. Given a surface S and a point p ∈ S, denote π p : S p → S the blowup of S at p. Points in the exceptional curve E p = π −1 (p) are called points in the first (infinitesimal) neighborhood of p. Iteratively, a point q in the k-th neighborhood of p is defined as a point in the first neighborhood of a point in the (k − 1)-th neighborhood of p. Note that in this case, the point in the (k − 1)-th neighborhood is uniquely determined; it is called the immediate predecessor of q. More generally, for every 1 ≤ i ≤ k − 1, q is in the i-th neighborhood of a unique point in the (k − i)th neighborhood of p. The point p itself can be considered to be its 0-th neighborhood.
On every blowup such as π p : S p → S, it is convenient and natural to identify each point q ∈ S, q = p, with its unique preimage in S p . To do such identifications consistently across different blowups, and more generally across bimeromorphic models dominating S, we shall rely on infinitelynear-ness, a pre-order relation between points on such models. Points in the infinitesimal neighborhoods of p ∈ S provide paradigmatic instances. The equivalence relation induced by the pre-order will provide the desired identification of points, so the set of equivalence classes inherits a partial ordering by infinitely-near-ness.
In particular, every point on a bimeromorphic model, q ∈ S_π → S, is infinitely near to a unique point on S, namely π(q). Obviously, if q is in the k-th infinitesimal neighborhood of p for some k ≥ 1, then q ≥ p. Denote ≈ the equivalence relation induced by the pre-order, so that q₁ ≈ q₂ if q₁ ≥ q₂ and q₂ ≥ q₁. Lemma 2.3. Let q₁ ∈ S_{π₁} and q₂ ∈ S_{π₂} be points in two bimeromorphic models of S. Then q₁ ≈ q₂ if and only if there exist open neighborhoods U_i ⊂ S_{π_i} of q_i and an S-biholomorphism ϖ : U₁ → U₂ with ϖ(q₁) = q₂. Proof. Let us prove the "if" part, as the "only if" part is obvious.
By definition, q_i ≥ q_j if and only if there is an open neighborhood U_i ⊂ S_{π_i} of q_i and a holomorphic map U_i → S_{π_j} over S sending q_i to q_j. Proposition 2.4. For every p ∈ S and every q ≥ p there is a unique n ≥ 0 and a unique point in the n-th neighborhood of p equivalent to q.
Proof. There is a bimeromorphic map π : S π → S with q ∈ S π and π(q) = p. First we observe that (1) q is equivalent to a point of S (which must be q ≈ p) if and only if π is a biholomorphism around q.
Factor π : S π → S as a finite sequence of point blowups, which is possible [1,III,4.4], and denote {p 1 , . . . , p m } the set of centers of blowups that are images of q. We will show, by induction on m, that q is equivalent to a point in the m-th neighborhood of p, and that m and the equivalence classes of p 1 , . . . , p m are independent of the factorization. The case m = 0 follows by (1), so assume m > 0 and π is not a biholomorphism around q. The points p i are totally ordered by infinitely-near-ness, i.e., p 1 < · · · < p n < q, with strict infinitely-near-ness < because of (1).
Since π(q) = p, p 1 is equivalent to p, and since blowups of distinct proper points commute [5, 4.3.1], we may rearrange the sequence of blowups so that p 1 = p is the center of the first blowup. This rearranging affects neither m nor the equivalence class of the p i . Now π factors through Bl p (S), and the image of q in Bl p (S) is a well defined point r. We have q ≥ r, and the bimeromorphic map S π → Bl p (S) factors as a finite sequence of point blowups, where the centers of blowups that are images of q are {p 2 , . . . , p m }. By induction it follows that q is equivalent to a point in the m-th neighborhood of p.
Now assume there is a second factorization, whose centers that are images of q arep 1 , . . . ,pm. As before,p 1 is equivalent to p and we may in fact assumep 1 = p. Because the blowup of p is bimeromorphic, the factorization through Bl p (S) is unique, and the image of q in Bl p (S) obtained from the second factorization is still r. By the induction hypothesis again,m = m and all centers are equivalent.
Definition 2.6. In the sequel, we shall identify equivalent points; thus for us a point infinitely near to p is by definition an equivalence class of points in bimeromorphic models of S mapping to p. Infinitely-near-ness is then a partial order on the set of points infinitely near to p. An infinitely near point of S is a point infinitely near to some p ∈ S. If q is a point in a bimeromorphic model of S, we will denote the infinitely near point it determines by the same symbol q, recalling that equality of infinitely near points means equivalence of points in models of S.
We also observe that it follows from the proof of the previous proposition that a point in the n-th neighborhood of p is infinitely near to exactly n + 1 points infinitely near to p (including p and q). Sometimes we call points p ∈ S proper points of S. Definition 2.7. A cluster based at p is a finite set of points K infinitely near to p such that, for every q ∈ K, if q ′ is a point infinitely near to p and q is infinitely near to q ′ , then q ′ ∈ K. A multi-cluster is a finite union of clusters based at distinct points of S.
By Proposition 2.4 and its Corollary 2.5, our notion of cluster agrees with the one in [5].
A curve C is said to go through the infinitely near point q ∈ S_π → S if its strict transform in S_π goes through q. The property is well defined, because clearly if q′ ∈ S_{π′} → S is equivalent to q, then the strict transform of C in S_π goes through q if and only if the strict transform of C in S_{π′} goes through q′; in the sequel we implicitly leave such routine checks to the reader. For instance, the multiplicity of C at q, denoted mult_q C, is well defined as the multiplicity of its strict transform. For every curve C on S, the set of all points infinitely near to p, where C has multiplicity > 1, is a cluster [5, 3.7.1], which we denote Mult_p(C), and the set of all points, proper and infinitely near, where C has multiplicity > 1, is a multi-cluster Mult(C).
Remark 2.8. The multi-cluster Mult(C) just defined may be strictly contained in the multi-cluster of singular points of C [5, Section 3.8], which is formed by all points that have to be blown up to obtain an embedded resolution (also called good resolution in the literature) of C.
Given a multi-cluster K on S, one may blow up all points in K, as follows. First blow up S with center at one of the proper points p of K, then perform successive blowups on the resulting surfaces, with centers which belong to K and are proper points of the surfaces obtained by previous blowups. Subsequent centers may be chosen in any order compatible with the natural ordering by infinitely-near-ness (if q₁ precedes q₂, then q₁ must be blown up first); the final surface and bimeromorphic map obtained as the composition of all blowups, which will be denoted π_K : S_K → S, are independent of the order of these blowups - up to unique S-biholomorphism (a detailed proof in the case of a single cluster can be found in [5, Proposition 4.3.2], the general case follows easily).
In fact, every bimeromorphic model of S is the blowup of all points in a convenient cluster: if π : S π → S is a bimeromorphic map, for every factorization of π as a finite sequence of point blowups, the centers of the blowups clearly form a multi-cluster K. The proof of Proposition 2.4 can be easily modified to show that this multi-cluster is independent of the factorization (i.e., two distinct factorizations consist of the same number of blowups, and the centers are equivalent) and there is a unique S-biholomorphism S π ∼ = S K .
We denote by E_q (respectively, Ẽ_q) the pullback or total transform (respectively, the strict transform) in S_K of the exceptional divisor of the blowup centered at q. It is not hard to see that q₁ precedes q₂ if and only if E_{q₂} − E_{q₁} is an effective divisor. Definition 2.9. Let C ⊂ S be a reduced curve, and let K be a multi-cluster on the surface S. The Harbourne constant of C at K is
H(C, K) = (C² − Σ_{q∈K} mult_q(C)²)/|K|.
Note that the strict transform of C on the blowup S_K of all points in K is C̃ = π_K*(C) − Σ_{q∈K} mult_q(C) E_q, so the numerator in the definition of H(C, K) is the self-intersection of C̃ in S_K. We define the Harbourne index of C as its Harbourne constant at its cluster of multiple points, i.e., h(C) = H(C, Mult(C)). If the singularities of C are ordinary, then the multi-cluster Mult(C) consists of proper points of S only, so this definition of Harbourne index extends the one recalled in the introduction.
Remark 2.10. Fix a reduced curve C on S. For every multi-cluster K and every point (proper or infinitely near) q of S, let K + q be the minimal multi-cluster which contains both K and q. Note that all points preceding q which are not in K belong to (K + q) \ K. Assume K is such that there is a point q ∈ Mult(C) \ K. Then C has multiplicity at least 2 at all points in (K + q) \ K, i.e., mult_q(C)² ≥ 4 for every such point. Therefore, if H(C, K) ≥ −4, then H(C, K + q) ≤ H(C, K). On the other hand, in the case of S = P² every known value H(C, K) is larger than −4, so in all known cases for plane curves the cluster that gives the smallest value for H(C, K) is K = Mult(C), and the value is h(C).
Singularities of curves in smooth surfaces
By assigning integral multiplicities ν = {ν q } q∈K to the points of a cluster K one gets a weighted cluster.
Definition 2.11. A weighted cluster K = (K, ν) is consistent if there exist germs of curve in S whose strict transform at each q ∈ K has multiplicity exactly ν q . The cluster Mult p (C) of multiple points on C infinitely near to p ∈ S, weighted with the multiplicity of C at each point, will be denoted by Mult p C.
Let K = (K, ν) be a given weighted cluster of points infinitely near to p, and π_K : S_K → S the blowup of S at all points of K, introduced above. Continuing to denote by E_q the total transform in S_K of the exceptional divisor above each q ∈ K, we associate to the weights ν an effective divisor on S_K, D_K = Σ_{q∈K} ν_q E_q. A curve C goes through K if π_K*(C) − D_K is an effective divisor (see [5, Chapter 4]). In particular, if the strict transform of a curve C at every q ∈ K has multiplicity equal to ν_q, then C goes through K. The complete ideal H_K = π_{K*}O_{S_K}(−D_K) ⊂ O_{S,p} is formed by the equations of germs of curve at p going through K. The self-intersection of K is defined as the opposite of the self-intersection of D_K: K² = Σ_{q∈K} ν_q². If K is consistent, then its self-intersection equals the intersection multiplicity at p of two sufficiently general germs of curve going through K with multiplicities equal to ν [5, 3.3.1 and 4.2.3].
The notions of weighted cluster, and hence of going through a weighted cluster, consistency and self-intersection carry over to the multi-cluster setting verbatim. We will denote Mult(C) the weighted multi-cluster obtained as the union of all weighted clusters Mult p C, where p ∈ S is a singular point of C. Proof. For every positive integer k, let kK = (K, kν) be the weighted cluster consisting of the same points as K and all weights multiplied by k. It satisfies the proximity inequalities, so it is consistent. By [ Lemma 2.16. Let S be a smooth projective surface, let p 1 , . . . , p r ∈ S, and for every i = 1, . . . , r we denote by K i = (K i , ν (i) ) a consistent weighted cluster of points infinitely near to p i . Let K = (K, ν) be the multi-cluster formed by all these clusters and C ⊂ S a reduced curve going through K. Then q the multiplicity of the strict transform of C at the point q ∈ K i . Then the clusters K ′ i = (K i , µ (i) ), for i = 1 . . . , r, are consistent. Moreover, the strict transform of C on the blowup π K : S K → S at all points of the multi-cluster K isC = π * (C) − D K ′ whereas, since C goes through all clusters with multiplicities ν (i) , the divisor π * (C) − D K is effective. It follows that for each i, D K ′ i ≥ D K i , and therefore there is an inclusion of ideals as wanted.
Harbourne constants under ramified morphisms
Our next goal is to describe the singularities of preimages of curves under ramified holomorphic surface maps, in enough detail to first show that H-constants can only drop under such a process on projective surfaces, and secondly to provide new examples of plane curve arrangements in the complex projective plane with very negative H-indices.
The pullback cluster
Fix for this section the following notation: S, S ′ are two smooth complex surfaces, f : S → S ′ is a dominant holomorphic map (i.e., f (S) is not contained in a curve of S ′ ), p ∈ S and p ′ = f (p) are points, and we are interested in the singularity at p of the pullback (or preimage) of a curve C ′ ⊂ S ′ whose singularity at p ′ is known. Take x, y and u, v as local coordinates on S and S ′ with origins at p and p ′ , respectively, and assume that on a suitable open neighborhood U at p, f : S → S ′ is given by the equalities u = f 1 (x, y), v = f 2 (x, y), where f i 's are non-invertible convergent series in x, y. The multiplicity of f at p, denoted by ν p (f ) or simply ν(f ) if there is no risk of ambiguity, is the minimum of the orders of vanishing of f 1 and f 2 at p.
Put d = gcd(f₁, f₂), as elements in O_{S,p}. The pencil of curves in U defined by {C_α : α₁f₁ + α₂f₂ = 0}, α = α₁/α₂ ∈ C ∪ {∞}, formed by the pullbacks of the curves α₁u + α₂v = 0, has a fixed part F : d = 0 (which might be empty) and a variable part {D_α : α₁f₁/d + α₂f₂/d = 0}. We call F the curve contracted to p′, as f(F) = p′. On the other hand, the variable part of the pencil has, like every pencil without fixed part, a weighted cluster of base points which consists of the points and multiplicities shared by all but finitely many curves in the pencil [5, 7.2]. This cluster is called the cluster of base points of f, and denoted BP_p(f), or simply BP(f) if no confusion is likely. By definition, BP(f) is a consistent cluster, and for every curve C′ ⊂ S′ through p′, the pullback f*(C′) goes through BP(f) (if F is nonempty, then f*(C′) − F goes through BP(f)). It may happen that only finitely many D_α go through p; in this case the cluster BP(f) is empty. The multiplicity of f satisfies ν(f) = ν_p + mult_p(F), where ν_p is the multiplicity of p in BP(f).
Remark 3.1. Given a dominant holomorphic map f : S → S ′ and a point p ′ ∈ S ′ . The set of points p ∈ S such that f (p) = p ′ and BP p (f ) is nonempty is discrete. Indeed, let p satisfy f (p) = p ′ , and write f in local coordinates as above, (f 1 (x, y), f 2 (x, y)) in a neighborhood U of p. Let d = gcd(f 1 , f 2 ). Since f 1 /d, f 2 /d have no common factor in O S,p , their common zeros in a possibly smaller neighborhood U ′ ⊂ U are a discrete set, and for q ∈ U ′ , the cluster BP q is nonempty if and only if q is a common zero of f 1 /d and f 2 /d.
Note that if d is invertible in O S,p , then p is an isolated preimage.
Let π p ′ : S ′ p ′ → S ′ be the blowup centered at p ′ . It is natural to describe BP(f ) as the cluster of points which need to be blown up to resolve the indetermination at p of the "meromorphic map"f = π −1 p ′ • f : S S p ′ ; we include a proof for completeness, since this characterization will be the starting point for our definition of the pullback cluster.
Lemma 3.2. Keeping the same notation as above, assume f(p) = p′ and let U be a neighborhood of p such that BP_q(f) is empty for all q ∈ U, q ≠ p.
There is a unique local lift f̃ : U_{BP(f)} → S′_{p′} of f to the blowup of the points of BP(f), i.e., satisfying π_{p′} ∘ f̃ = f ∘ π_{BP(f)}. Moreover, if π : U_π → U is a bimeromorphic model of U which admits a lift f̃ : U_π → S′_{p′}, then π factors through π_{BP(f)}. The weights ν̃_q of the base points are determined by the formula
(2)   f̃*(E_{p′}) = F̃ + Σ_{q∈BP(f)} ν̃_q E_q,
where F̃ collects the components which do not contract to p.
Remark 3.3. If q is a point infinitely near to p, which belongs as a proper point to the model S_π → S, then (f ∘ π)(q) = p′. It then follows from the definition that q ∈ BP_p(f) if and only if q ∈ BP_q(f ∘ π) (see also [5, section 7.2]).
Proof. Write f in local coordinates as above, (u, v) = (f 1 (x, y), f 2 (x, y)), and let d = gcd(f 1 , f 2 ). Assume first that BP(f ) is empty, which by definition means that either f 1 /d or f 2 /d does not vanish at p; without loss of generality we assume f 2 /d does not vanish. Consider the chart V of S ′ p ′ which admits (u/v, v) as local coordinates, and let U ′ ⊂ U be the open set where f is given by (f 1 , f 2 ) and f 2 /d does not vanish. Then the restriction f | U ′ lifts uniquely tof : U ′ → V , given in coordinates as (u/v, v) = (f 1 (x, y)/f 2 (x, y), f 2 (x, y)).
Conversely, if there is a neighborhood U ′ of p where f lifts tof : U ′ → S ′ p ′ , thenf (p) belongs either to the chart where (u/v, v) are local coordinates, or to the chart where (u, v/u) are local coordinates; without loss of generality we assume it is the first case. Then (u/v) •f = f 1 (x, y)/f 2 (x, y) is a regular function in a neighborhood of p, so f 2 /d does not vanish and BP(f ) is empty.
So BP(f ) is empty if and only if there is a liftf : U ′ → S ′ p ′ in some neighborhood U ′ of p. We now observe that uniqueness of a liftf if it exists is clear because π BP(f ) is bimeromorphic. Therefore, to show existence in the general case it is enough to prove it in a neighborhood of a point q ∈ U BP(f ) , because by uniqueness local lifts will match to the desiredf . Now, by assumption BP q (f ) is empty for all q ∈ U, q = p, so BP q (f • π BP(f ) ) is empty for all q ∈ U, f (q) = p, and by Remark 3.3, BP q (f • π BP(f ) ) is empty for all q ∈ U, f (q) = p. Therefore the previous paragraph shows that there is a liftf as claimed.
Moreover, if p ∈ BP(f ) then as observed above there is no lift of f to any neighborhood U ′ of p. Therefore, if π : U π → U is a bimeromorphic model of U which admits a liftf : U π → S ′ p ′ , then π is not biholomorphic onto any neighborhood of p, and so it factors through π p . The second claim now follows by induction on |BP(f )|.
Let now (ν q ) q∈BP(f ) be the weights determined by (2). It remains to show that ν q =ν q for each q, which we do by induction on |BP(f )|. To this end, let q be a point in the first neighborhood of p, not in BP(f ) nor on F ; this means that not all (strict transforms of) D α go through q, and without loss of generality we assume that the strict transform of f 2 /d does not go through q. Then a direct computation in coordinates shows that the pullback π * p (f 2 ) of E p ′ : v = 0 vanishes to order exactly ν p along E p at q, i.e., ν p =ν p . This in particular gives the desired equality if BP(f ) consists only of the point p. Finally, let q 1 , . . . , q r be the points of BP(f ) in the first neighborhood of p; the induction hypothesis applied to BP q i (f • π p ) finishes the proof.
According to [6], the local degree of f at p, denoted by deg_p(f), is the number of points in f^{−1}(q) that approach p when q approaches p′ along α₁u + α₂v = 0 for a general α. If the contracted curve F is empty, then this is simply the number of points in f^{−1}(q) that approach p when q approaches p′. In general, it satisfies
(3)   deg_p(f) = Σ_q ν_q² + Σ_q ν_q · mult_q(F),
where ν_q are the weights of the cluster of base points BP_p(f). Note that Σ_q ν_q² = BP_p(f)² is the intersection multiplicity at p of any two distinct curves D_α, D_{α′} in the pencil of variable parts [5, Ex. 7.2], whereas Σ_q ν_q · mult_q(F) is the intersection multiplicity at p of a general D_α with the fixed part F.
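The following toy computation (added for illustration, with an ad hoc example map not taken from the paper) makes these invariants concrete for f(x, y) = (x²y, xy²) at the origin: the fixed part is F : xy = 0, the variable pencil is {αx + βy = 0}, so BP(f) consists of the origin with weight 1, ν(f) = 1 + mult₀(F) = 3, and the displayed relation (3) gives deg₀(f) = 1² + 1·2 = 3.

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')
f1, f2 = x**2*y, x*y**2

def order_at_origin(poly):
    # order of vanishing at (0,0): minimal total degree of a monomial
    p = sp.Poly(sp.expand(poly), x, y)
    return min(sum(mon) for mon in p.monoms())

d = sp.gcd(f1, f2)                      # fixed part F : d = 0
D = sp.cancel(a*f1/d + b*f2/d)          # generic member of the variable pencil

nu_f   = min(order_at_origin(f1), order_at_origin(f2))   # multiplicity nu(f)
mult_F = order_at_origin(d)                               # mult_0(F)
nu_p   = order_at_origin(D.subs({a: 1, b: 2}))            # weight of the base point

print(d, D)                      # x*y   a*x + b*y
print(nu_f, nu_p, mult_F)        # 3 1 2   and indeed nu(f) = nu_p + mult_0(F)
print(nu_p**2 + nu_p*mult_F)     # 3, the local degree predicted by (3)
```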
Let K = (K, µ) be an arbitrary weighted cluster of points infinitely near to the target point p′ of f. Generalizing the cluster of base points of f, which describes the singularities of pullbacks of general curves smooth at p′, we next associate to K a pullback cluster f*(K) = (f*(K), f*µ) of points infinitely near to p in order to describe the singularities of pullbacks of general curves going through K. Following the characterization of BP(f) given as Lemma 3.2, let π_K : S′_K → S′ be the composition of the blowups centered at all points of K, and let D_K = Σ_{q∈K} µ_q E_q be the associated divisor on S′_K. Then we define f*(K) as the cluster of all points which need to be blown up to resolve the indeterminacy at p of the composition π_K^{−1} ∘ f, and its multiplicities f*µ as determined by
f̃*(D_K) = F_K + Σ_q (f*µ)_q E_q,
where F_K is formed by all components of f̃*(D_K) which do not contract to p in S. F_K can be called the fixed part of f*(H_K); all of its components map in S to components of the contracted curve F of f, with multiplicities depending on K. If the weighted cluster K is consistent, the pullback cluster f*(K) can be described as a suitable cluster of base points; the following lemma gives the precise statement.
Lemma 3.5. Let w, z ∈ O S ′ ,p ′ be local equations of two curves going through K with multiplicities exactly µ and sharing no further point (see [5,Section 4.2]). Let V ⊂ S ′ be an open neighborhood of p ′ such that (w, z) determine a dominant holomorphic map g : V → C 2 . Then F K is the pullback in S f * (K) of the contracted germ of the holomorphic map g • f , and f * (K) differs from the cluster of base points at p of g • f at most in some points of multiplicity 0.
Sketch of proof. The key point of the proof is to show that there is a lift U f * (K) → Bl 0 (C 2 ) of g • f , which follows from the straightforward fact that g lifts to Bl K (V ) → Bl 0 (C 2 ) (in fact by the definitions K is the cluster of base points of g).
Put d = gcd(f * (w), f * (z)). It follows from the lemma that the curve F K is given by d = 0, and the multiplicities of all but finitely many curves in the pencil {α 1 f * (w)/d + α 2 f * (z)/d} at the points of f * (K) are exactly the weights f * µ. More precisely, the cluster of base points of this pencil consists of the subcluster of f * K of the points with positive multiplicity. If K is consistent, then f * K is consistent as well. However, when K has points q whose excess D K ·Ẽ q is zero [5, Section 4.2], f * K may have points with multiplicity zero.
Corollary 3.6. Let f : S → S′ be a dominant holomorphic map between smooth complex surfaces, p ∈ S a point with f(p) = p′, and let K = (K, µ) be a consistent weighted cluster of points infinitely near to p′. If the curve contracted to p′ is empty, then f*(K)² = deg_p(f) · K².
Proof. Let w, z ∈ O_{S′,p′} be the local equations of two distinct curves going through K with multiplicities exactly µ, and sharing no further point. Consider the dominant holomorphic map g : V → C² determined by (w, z) as above. Since it has empty contracted curve, by (3), deg_{p′}(g) = K², and by the lemma, deg_p(g ∘ f) = f*(K)²; the claim follows.
Proof of Proposition 3.7. Note that p is an isolated preimage of p′, so in particular the curve contracted to p′ is empty. We argue by induction on |K|. If |K| = 1, then K = {p′} and f*(K) = BP(f). Since deg_p(f) = Σ_{q∈BP(f)} ν_q² and ν_q ≥ 1 for all q ∈ BP(f) by definition, the claims follow. Now assume |K| > 1, and let q′₀ be a maximal point for the partial ordering by infinitely-near-ness, so that K₀ = K \ {q′₀} is a cluster. By induction we may assume that the claims hold for the cluster K₀. Since p is an isolated preimage of p′, there exist open neighborhoods U ⊂ S of p and V ⊂ S′ of p′ such that f|_U : U → V is a surjective proper holomorphic map (see [10, §3.A]). Consider the blowups π_{f*(K₀)} and π_{K₀} of all points in the clusters f*(K₀), K₀, and the corresponding lift f̃ : Ũ → Ṽ of f. The point q′₀ belongs to Ṽ, more precisely to the preimage of p′, which is an effective divisor D in Ṽ, and f̃^{−1}(q′₀) is contained in the divisor f̃*(D), which is the preimage of p in Ũ.
Choose local coordinates u, v in an open neighborhood V ′ ⊂Ṽ of q ′ 0 , such that uv = 0 along D ∩ V ′ (this is possible because at most two prime components of D meet at q ′ 0 ). Then for a general member of the pencil {L α : α 1 u + α 2 v = 0}, every point on L α except q ′ 0 has exactly deg p (f ) preimages byf , (5). The setf −1 (q ′ 0 ) need not be finite, but it is easy to see that Notice that Q is also the set of indeterminacy points of π −1 is the blowup centered at q ′ 0 . Consider, for each q ∈ Q, the cluster BP q (f ). It is clear that blowing upŨ at all points of q∈Q BP q (f ) resolves the indeterminacies of π −1 q ′ 0 •f . Therefore, blowing up all points in the cluster so we are done. Note that if ν(f ) > 1, then (by the induction hypothesis) the inequality |f * (K) 0 | < deg p (f ) · (|K| − 1) is strict and hence also |f * (K)| < deg p (f ) · |K|.
Pullback of multi-clusters and H-indices
We are now going to apply the local results from the previous section in the global setting of finite morphisms f : P 2 → P 2 in order to study the behavior of H-constants under pullbacks. and Proposition 3.7 gives with a strict inequality if ν p f > 1 at some p with f (p) = p i . Taking into account that H(C, K) ≤ 0, the equation (6) now gives with a strict inequality if there exist some i and p with f (p) = p i , and ν p f > 1.
For plane curves and in terms of h-indices, we have the following corollary: Corollary 3.9. Let f : P 2 → P 2 be a finite morphism, and let C ⊂ P 2 be a reduced curve with f * (C) reduced. Theorem 3.8 means that H-constants of negative curves can only decrease under pullbacks. By using suitable ramified morphisms we can now prove Theorem A.
Proof of Theorem A. Let S_π → P² be the composition of n point blowups, and let K be the multi-cluster formed by the n points blown up. Let C be a reduced curve on S_π. We want to show that C²/|K| > h. By Lemma 2.16, we may assume that C is the strict transform of a reduced curve C′ on P². By Theorem 3.8, it will be enough to show that there exists f : P² → P² satisfying:
1. f*(C′) is reduced.
2. There exists p ∈ P² such that ν_p(f) > 1 and f(p) is a proper point of K.
This is obviously possible: let f : P² → P² be a Kummer cover given in suitable coordinates by f([x : y : z]) = [xⁿ : yⁿ : zⁿ] with n ≥ 2, where the coordinates are chosen such that no coordinate line is a component of C′ (hence f*(C′) is reduced) and at least one coordinate point belongs to K.
Furthermore, we prove that there is no minimal h-index in any sense:
Proposition 3.10. 1. There is no reduced curve C₀ ⊂ P² such that h(C₀) ≤ h(C) for every reduced curve C ⊂ P². 2. There is no curve C₀ ⊂ P² with ordinary singularities such that h(C₀) ≤ h(C) for every reduced curve C ⊂ P² with ordinary singularities.
Proof. We shall show that, given a particular reduced curve C ⊂ P² with r ≥ 2 singular points and h(C) < 0, there exists another curve C′ with h(C′) < h(C), and if C has ordinary singularities then C′ can be chosen with ordinary singularities too. So let p be a nonsingular point of C, and choose coordinates in P² such that p is a coordinate vertex, no coordinate line is a component of C, and no singular point of C lies on a coordinate line. Choose an integer k such that −k² < h(C) and consider the Kummer cover f : P² → P² given coordinate-wise by f([x : y : z]) = [x^k : y^k : z^k], which is a morphism of degree k² branched along the coordinate triangle and has multiplicity k at the (fixed) point p. The singularities of f*(C) are as follows: • For each singular point q of C there are k² locally isomorphic singularities in the k² distinct preimage points of q.
• There is an ordinary singularity of multiplicity k at p.
Note that, if C has ordinary singularities, then so does f*(C). Denote by K = Mult(C) the weighted multi-cluster of multiple points of C. We have

h(f*(C)) = ((k · deg C)^2 − k^2 Σ_{q∈K} µ_q^2 − k^2) / (k^2 |K| + 1) = (k^2 |K| h(C) − k^2) / (k^2 |K| + 1) < h(C),

where the last inequality holds because −k^2 < h(C), as claimed.
Note that Theorem A and Proposition 3.10 do not mean that the values of H-constants are not bounded from below: some examples of sequences of reduced curves with decreasing H-indices are known, which converge to finite limits. So the question whether the BNC holds for blowups of the complex projective plane is open. The method of Proposition 3.10 does mean that, if there is a uniform bound h(C) ≥ h, then for curves with a fixed number of singular points s a stronger bound than h can be given (Proposition 3.11):

Proof. Choose coordinates on P^2 such that the three coordinate vertices lie on smooth points of C, and each coordinate line meets C in exactly d = deg(C) distinct points. Consider the Kummer cover f([x : y : z]) = [x^k : y^k : z^k], and let K = Mult(C) as before. Then

h ≤ h(f*(C)) = ((dk)^2 − k^2 Σ_{q∈K} µ_q^2 − 3k^2) / (k^2 |K| + 3).

The limit of the right hand side for k → ∞ is h(C) − 3/|K|, whence the claim.
Fermat arrangements
We observe that some of the known curve arrangements with negative H-indices can be obtained as pullbacks of simpler arrangements by suitable ramified maps.
Let C ⊂ P^2 be a reducible cubic made up of three concurrent lines; for simplicity assume it is given by the homogeneous equation (x − y)(y − z)(z − x) = 0. Obviously Mult(C) = {p} is the single point p = [1 : 1 : 1] with multiplicity 3, and h(C) = 0. Let f_k : P^2 → P^2 be the Kummer cover f_k([x : y : z]) = [x^k : y^k : z^k]. The so-called k-th Fermat arrangement of lines is the reduced curve f_k^*(C), which has k^2 triple points and three points of multiplicity k; computing as in (7) we obtain h(f_k^*(C)) = −3k^2/(k^2 + 3).
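For the reader's convenience, the computation behind this value can be spelled out; it assumes the convention, used implicitly above, that for a reduced plane curve D the h-index is h(D) = (deg(D)^2 − Σ_q mult_q(D)^2)/s, with the sum running over the s points of Mult(D). The pullback f_k^*(C) has degree 3k, and its multiple points are the k^2 preimages of p = [1 : 1 : 1], each a triple point, together with the three coordinate vertices, each of multiplicity k, so

h(f_k^*(C)) = ((3k)^2 − k^2 · 3^2 − 3 · k^2) / (k^2 + 3) = −3k^2 / (k^2 + 3),

which tends to −3 as k → ∞.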
Curves with ordinary singularities
In the proof of Propositions 3.10 and 3.11 we used, for simplicity, morphisms which have multiplicity > 1 at smooth points of a given curve C. In practice, in the search for curves with very negative H-indices, using a morphism with multiplicity > 1 at singular points of C turns out to be more effective, and in this way we can obtain a more negative index. Consider a projective coordinate system which has its three coordinate points sitting at triple points of the Wiman configuration W of 45 lines [3] and such that none of the coordinate lines belongs to W. Then the intersection of each coordinate line with W consists of the two chosen triple points and 39 transverse intersections with the lines not going through the triple points (this is presumably well known; we checked it using Singular). Denote as before K = Mult(W), and apply the Kummer cover f([x : y : z]) = [x^k : y^k : z^k] to W. Each vertex p of the coordinate system is its unique preimage, and f*(W) has an ordinary singularity of multiplicity k · mult_p(W) there, so: By taking large values of k, we see that there exist reduced curves C ⊂ P^2 with ordinary singularities and Harbourne index arbitrarily close to the resulting limit. This proves Theorem B.
Klein-invariant configurations of higher degree
In [11], we described the singularities of the configuration of 21 reducible polars to the Klein quartic Φ_4 : x^3 y + y^3 z + z^3 x, computing in particular their H-constants, and we introduced additional very negative configurations of curves of higher degree. We next recall the construction and give an explicit description of some clusters of singular points of these configurations, leading to a bound on their h-indices. Denote by f : P^2 → P^2 the gradient map given by the partial derivatives of Klein's quartic equation, explicitly the 9:1 map

P^2 ∋ [x : y : z] ↦ [u : v : w] = [3x^2 y + z^3 : 3y^2 z + x^3 : 3z^2 x + y^3] ∈ P^2.
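As a quick sanity check (not part of the original argument), one can verify with a computer algebra system that the three components above are indeed the partial derivatives of the Klein quartic; since the quartic is smooth they have no common zero, so the map is a morphism of topological degree 3^2 = 9, matching the 9:1 above. The short sympy snippet below assumes only the equation of Φ_4 given in the text.

import sympy as sp

x, y, z = sp.symbols('x y z')
phi4 = x**3*y + y**3*z + z**3*x              # Klein's quartic

# gradient map [u : v : w] = [dPhi4/dx : dPhi4/dy : dPhi4/dz]
u, v, w = (sp.diff(phi4, t) for t in (x, y, z))
print(u, v, w)
# the three partial derivatives: 3x^2 y + z^3, x^3 + 3y^2 z, y^3 + 3z^2 x

# each component is a cubic; since the quartic is smooth the partials have
# no common zero, so the map is a morphism of topological degree 3**2 = 9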
We shall not attempt a complete description of the singularities of the configurations K_k : (f^k)^*(Φ_21) = 0, but we focus on the singularities lying on the preimage of the singular points of K_2 : Φ_63 = 0; these are enough to show that the Harbourne index h_k of K_k is a decreasing sequence whose limit is at most −1283/410 ≃ −3.123. This can be equivalently stated as follows: where X is a finite set of points with Σ_{p∈X} deg_p(f) = 6 · 42. Denote, for each k ≥ 2, S_k = ∪_{p∈O_42} S_{p,k} the multi-cluster of singular points of (f^k)^*(Φ_21) supported at O_42, and split its pullback as f^*(S_k) = S_k^42 ∪ S_k^X, where S_k^42 is the subcluster supported at O_42 and S_k^X is the subcluster supported at X.
Effect of Different Seed Rate and Spacing on Yield and Economics of Ginger (Zingiber officinale Rosc) Cultivation
Introduction
Ginger (Zingiber officinale Rosc) is widely used in food, beverage, confectionery and medicine. It is valued in medicine as a carminative and a stimulant of the gastrointestinal tract. Dry ginger is used for the manufacture of oil, oleoresin, essence, soft drinks etc. India is the largest producer, consumer and exporter of ginger in the world. The size of planting material and spacing are among the major factors influencing growth, yield and economics of ginger. Considering these, the present investigation was undertaken to study the effect of different spacing and seed rate, i.e. size of seed material, on yield, yield components and economics of ginger cultivation.
Materials and Methods
The experiment was carried out at H.R.S., Mondouri, Bidhan Chandra Krishi Viswavidyalaya, Nadia, in two consecutive years (2013-14 and 2014-15). The experiment was laid out in a split plot design with three replications. Five different spacings, i.e. P1 (20 x 15 cm), P2 (20 x 20 cm), P3 (25 x 20 cm), P4 (25 x 25 cm) and P5 (30 x 25 cm), as main plot and two seed rates (sizes of planting material), i.e. S1 (20 g) and S2 (30 g), as subplot treatments were included in this investigation. There were ten treatment combinations. Indofil M-45 (0.3%) treated rhizomes (cv. Gorubathan) were planted in the middle of April during both the years. Fertilizers were applied @ 125:100:100 kg NPK/ha. Entire P with 1/2 K and 1/3 N along with FYM @ 20 t/ha were given as basal application. 1/3 N was applied at 45 DAP, and 1/3 N and 1/2 K were applied at 90 DAP, followed by earthing up and mulching. The rhizomes were harvested at 210 DAP. The observations on different parameters were recorded from five randomly selected plants per replication.
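For orientation, the plant population and the seed rhizome requirement per hectare implied by these treatments follow directly from the spacing and the seed piece size; the short sketch below is only illustrative and is not part of the original analysis.

# plant population and seed requirement implied by each treatment
spacings_cm = {"P1": (20, 15), "P2": (20, 20), "P3": (25, 20),
               "P4": (25, 25), "P5": (30, 25)}
seed_sizes_g = {"S1": 20, "S2": 30}

for p, (row, plant) in spacings_cm.items():
    area_m2 = (row / 100) * (plant / 100)            # ground area per plant
    population = 10_000 / area_m2                    # plants per hectare
    for s, grams in seed_sizes_g.items():
        seed_t_per_ha = population * grams / 1_000_000   # tonnes of seed rhizome
        print(f"{p}{s}: {population:,.0f} plants/ha, {seed_t_per_ha:.2f} t seed/ha")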
Results and Discussion
The clump weight increased from 225.33 g to 321.66 g and from 188.50 g to 267.50 g with the increase in spacing from 20 x 15 cm to 30 x 25 cm in the respective years; thus an increasing trend in yield per plant was observed with wider spacing, i.e. with a decrease in plant population level. The plants raised from the bigger seed rhizome (30 g) produced bigger clumps of 276.13 g and 241.80 g in the two respective years. The yield increased with seed rate (Mohanty et al., 1988). Among the interactions, maximum clump weight (350.66 g) was recorded in plants raised under the widest spacing (30 x 25 cm) coupled with the bigger (30 g) rhizome (P5S2) in the 1st year, while minimum clump weight (184.00 g) was recorded in the P1S1 (20 x 15 cm, 20 g) treatment combination in the 2nd year. Data presented in Table 2 revealed that the maximum clump length of 21.03 cm was recorded under the widest spacing (30 x 25 cm) in the 1st year. The longest clump (22.40 cm) was recorded from plants at the widest spacing (30 x 25 cm), i.e. lower population, in combination with the bigger seed rhizome (30 g) in the 1st year. Maximum breadth of 14.80 cm was observed under 30 x 25 cm spacing with 30 g seed size in the 1st year and 12.63 cm in the 2nd year. Higher growth and yield were associated with greater size of planting material.
Greater finger length was recorded with wider spacing during both the years of the experiment. The maximum length of 10.75 cm and 10.68 cm and maximum breadth of 2.99 cm and 2.89 cm were recorded with 30 x 25 cm spacing in the respective years (Table 3). The bigger seed rhizome produced longer fingers compared with the smaller one. In case of interaction effect, the P4S2 (25 x 25 cm, 30 g) treatment produced the longest finger (10.97 cm). Maximum breadth (3.01 cm) was observed in the P5S2 (30 x 25 cm, 30 g) treatment combination (Table 4). Such a difference in production can be traced to the source-sink relationship in the plant; a bigger size of planting material constitutes a stronger sink than a smaller one. In an earlier trial on corms, interaction effects of spacing and corm size indicated that a 40 x 40 cm, 500 g combination produced maximum yields per plot of 74.52 kg, 68.37 kg and 71.45 kg during the respective years (1999 and 2000) and the pooled data, as compared with minimum yields of 20.26 kg, 17.29 kg and 18.78 kg with a 90 x 85 cm, 300 g combination in the years 1999, 2000 and the pooled data, respectively.
Hence, translocation and mobilization of assimilates and nutrients from the source are greater, thereby producing superior fingers. An increasing trend in finger weight was noticed with increase in spacing up to a certain limit. Yield increased linearly as the spacing was reduced, owing to the superior yield of high plant populations over low plant populations (Ghosh and Bandopadhya, 2008). The bigger seed rhizome (30 g) recorded the maximum finger weight (66.35 g). The bigger seed rhizome (30 g) coupled with 30 x 25 cm spacing (P5S2) recorded the maximum finger weight (71.51 g). These results are in good conformity with the observations of Korla et al. (1989), who reported mean pseudostem values of 5.1, 6.2, 6.3 and 7.1 from seed sizes of 5-10 g, 10-15 g, 15-20 g and 20-25 g, respectively, and also stated that mean yields of 33.3, 65.4, 79.7 and 122.5 g per plant were obtained from seed sizes of 5-10 g, 10-15 g, 15-20 g and 20-25 g, respectively. Closer spacing might affect the growth and development of individual plants due to competition among them for nutrients and other resources available per unit area, but at spacings above the optimum the utilization of the land may be lower and thereby the yield may be reduced. These results are in good conformity with the observations of Singh et al. (2000). The data presented in Table 1 revealed that yield per hectare of ginger was maximum with closer spacing in both the years. The increase in spacing from 20 x 15 cm to 30 x 25 cm showed a decreasing trend in total yield. Maximum yields of 13.74 t and 10.88 t were obtained under the closest spacing (20 x 15 cm) in the respective years. The plants raised from the bigger seed rhizome (30 g) recorded higher yields of 11.78 t and 10.00 t per hectare in the respective years as compared to 10.76 t and 9.31 t from the smaller seed rhizome (20 g). In case of interaction effect, the closest spacing (20 x 15 cm) in combination with the bigger seed rhizome (30 g) produced the highest yields of 13.95 t and 11.04 t, as compared to minimum yields of 9.00 t and 7.78 t with the widest spacing (30 x 25 cm) in combination with the smaller seed rhizome (20 g) in the respective years.
The results are in good agreement with Korla et al. (1989); Randhwa et al. (1972) and Pandey (1999) reported that closer spacing was optimum for obtaining maximum yield in mango-ginger and kacholam (family Zingiberaceae), respectively. The reduction in yield attributes under narrower spacing might be ascribed to comparatively poor growth and development of individual plants owing to competition for growth resources like space, sunlight, nutrients, moisture etc., which is supported by the earlier findings (Singh et al., 2000; Mohanty et al., 1993).
The cost of cultivation, gross return and net return decreased significantly with the increase in spacing (Table 5), i.e. with the decrease of plant population per unit area. The maximum cost of cultivation was recorded (Rs. 1,06,312/- and Rs. 1,01,146/-) in the two years, respectively, with 20 x 15 cm spacing and 30 g seed size. The gross returns (Rs. 1,75,597/- and Rs. 1,55,362/-) were also highest with this treatment (Table 6). The benefit:cost ratio was highest in P5S1 (2.16 and 2.29) followed by P4S1 (2.12 and 2.27) in the respective years.
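For orientation, the relationship between these economic indicators is simple arithmetic: net return is gross return minus cost of cultivation, and the benefit:cost ratio is read here as gross return divided by cost of cultivation. The snippet below applies this to the first-year figures quoted above for the 20 x 15 cm, 30 g treatment; it is only illustrative and is not from the original analysis.

# first-year economics of the 20 x 15 cm, 30 g treatment (figures quoted above)
cost_of_cultivation = 106312          # Rs/ha
gross_return = 175597                 # Rs/ha

net_return = gross_return - cost_of_cultivation      # 69285 Rs/ha
bc_ratio = gross_return / cost_of_cultivation        # about 1.65

print(net_return, round(bc_ratio, 2))
# the wider-spaced, small-seed treatments reach higher ratios (2.16 for P5S1),
# mainly because their cost of cultivation is much lower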
It may be concluded that, from the yield maximization and income enhancement point of view, the closest spacing (20 x 15 cm) in combination with the bigger seed rhizome (30 g) produced the highest yields of 13.95 t and 11.04 t/ha, and also the highest income of Rs. 1,75,597/- and Rs. 1,55,362/- during 2013-14 and 2014-15, respectively. This may be suggested as the most effective cultivation option in the Alluvial zone of West Bengal.
The fund for this research work received from Bidhan Chandra Krishi Viswavidyalaya has been duly acknowledged.
Effects of Process Parameters on the Corrosion Resistance and Biocompatibility of Ti6Al4V Parts Fabricated by Selective Laser Melting
Excellent biocompatibility and corrosion resistance of implants are essential for Ti6Al4V parts fabricated by selective laser melting (SLM) for biomedical applications. To achieve better corrosion resistance and biocompatibility of Ti6Al4V parts, the effects of SLM processing parameters on the corrosion resistance and the biocompatibility of Ti6Al4V parts are investigated by changing the scanning speeds and laser powers. The detailed influence mechanism of processing parameters on the properties of Ti6Al4V parts is studied from two aspects, including microstructure and defects. It is found that the corrosion resistance and biocompatibility of Ti6Al4V parts can be adjusted by changing the scanning speed and the laser power due to the constituent phase and the number and size of defect holes of Ti6Al4V parts. Compared with the laser power, the scanning speed has a stronger influence on the performance of the part, which can be used as “coarse tuning” based on the performance requirements. At the scanning speed of 1100 mm/s and the laser power of 280 W, Ti6Al4V parts with better corrosion resistance can be obtained. Ti6Al4V parts with better biocompatibility are fabricated at the scanning speed of 1200 mm/s and the laser power of 200 W.
INTRODUCTION
Nowadays, artificial implants are extensively applied to replace damaged or diseased parts of human bone tissue, and the global bone repair market has huge potential. 1,2 Metals used as implant materials are usually stainless steel, 3 cobalt−chromium alloy, 4 and titanium alloy. 5 Compared with stainless steel and cobalt−chromium alloys, titanium alloys exhibit a lower elasticity modulus, which avoids stress shielding. Titanium alloys are widely used in the medical field due to their excellent mechanical properties 6,7 and biocompatibility. 8,9 Due to the personalized characteristics of bone tissue engineering, the personalized manufacturing of implants is urgently needed. Due to its layer-by-layer processing principle, 10−12 three-dimensional (3D) printing technology has unique advantages in the personalized manufacturing of implants. Titanium alloy implants are usually fabricated by selective laser melting (SLM), which can effectively shorten the manufacturing cycles, improve material utilization, and fabricate complex personalized parts. 13−16 Excellent biocompatibility and corrosion resistance of implants are the essential requirements for medical applications. The corrosion resistance and biocompatibility of implants are also vital properties for Ti6Al4V parts fabricated by SLM for biomedical applications. The corrosion of implants caused by the physiological environment results in the precipitation of metallic ions and the destruction of implant surface morphology, leading to not only an inflammatory reaction but also organ damage. 17,18 The biocompatibility of implants is the most basic and important property after implantation. Hence, it is necessary to study the corrosion resistance and biocompatibility of implants. The corrosion resistance of Ti6Al4V parts has been investigated in various solutions. 19−22 Heakal et al. demonstrated that the increase in azide concentration in a solution accelerated the corrosion of Ti6Al4V parts. 19 Sharma et al. reported that better corrosion resistance of Ti6Al4V parts fabricated by SLM could be obtained in NaCl and NaOH than in H2SO4. 20 Corrosion resistance of cast Ti6Al4V parts could be improved due to the formation of a passive film. 21 However, there are relatively few studies on the corrosion behavior of Ti6Al4V parts fabricated by SLM in simulated body fluid. In addition, the processing parameters are also critical for the properties of Ti6Al4V parts fabricated by SLM. 12,23,24 Lu et al. found that Ti6Al4V parts fabricated by SLM had better corrosion resistance (a corrosion voltage of −0.352 V) 23 at the laser power of 200 W. Qian et al. found that the density of Ti6Al4V alloy decreased with the increase in the laser scanning speed, leading to the reduction of corrosion resistance. 12 The corrosion resistance of SLM-fabricated Ti6Al4V parts was anisotropic in different planes, and the corrosion resistance of the XY plane was better than that of the XZ plane in HCl solution. 24 Researchers have also done a lot of research on the influence of SLM processing parameters on the biocompatibility of Ti6Al4V parts. Ni et al. investigated the effects of TiN and TiCrN coating layers deposited on the surface of SLM Ti6Al4V on the mechanical properties and biocompatibility of 3D-printed Ti6Al4V. 25 Cox et al. demonstrated that Ti6Al4V parts processed by SLM changed the surface morphology, resulting in a direct impact on the adhesion of cells and biofilms. 26
The Ti-6Al-4V-6Cu alloy processed by SLM could inhibit the activity of proinflammatory cytokines and regulate angiogenesis, and Ni−Cr alloy processed by SLM presents lower human adipose stem cell proliferation and viability compared with Co−Cr. 27,28 Ran et al. evaluated the implications of porosity and pore size of Ti6Al4V scaffolds fabricated by SLM in vivo and in vitro. 29 Pore dimension was also considered for the dental area, for which a modulus of around 20−25 GPa, proximate to the cancellous bone modulus, was targeted. 14 A dense core associated with peripheral larger pores supports cellular proliferation and mechanical resistance. 30 Ghosh et al. conducted an experimental study in which an SLM-processed polymer was grafted onto a Ti6Al4V hip prosthesis to offset the surface roughness of untreated titanium. Surface roughness, which is conducive to the absorption of proteins, is conducive to the osseointegration of dental implants but not to hip joint reconstruction. 30 At present, there are few systematic studies on the effect of scanning speed and laser power on the corrosion resistance and biocompatibility of Ti6Al4V parts prepared by SLM, and there are few explanations of the influence mechanism. In this paper, the influence of SLM process parameters on the corrosion resistance and biocompatibility of Ti6Al4V parts is systematically studied, and the influence mechanism of the process parameters on the performance of Ti6Al4V parts is studied in detail from the two aspects of microstructure and defects.
In this study, the effects of SLM processing parameters (scanning speed and laser power) on the corrosion resistance and the biocompatibility of Ti6Al4V parts are investigated. The detailed influence mechanism of the processing parameters on the properties of Ti6Al4V parts is studied from the point of view of microstructure and cavity defects. At the scanning speed of 1100 mm/s and the laser power of 280 W, Ti6Al4V parts with better corrosion resistance are obtained. At the scanning speed of 1200 mm/s and the laser power of 200 W, Ti6Al4V parts show better biocompatibility and the cell proliferation rate is the largest. The corrosion resistance and biocompatibility of Ti6Al4V parts can be regulated by changing the scanning speed and the laser power. This can guide and promote the clinical application of Ti6Al4V parts fabricated by SLM.
MATERIALS AND METHODS
2.1. Materials and Sample Preparation. Ti6Al4V alloy spherical powder produced by the gas atomization method (EOS company, Germany) is used in our experiments. The particle size of the powder ranges from 25 to 57 μm and the average size is about 38 μm. The chemical composition of the material is shown in Table 1, and its morphology and particle size distribution are shown in Figure 1. Different Ti6Al4V parts fabricated by an EOS M280 system (EOS company, Germany) are obtained by changing the scanning speed and the laser power. The experimental parameters of the scanning speed (v), the laser power (P), the hatch distance (d), and the layer thickness (h) are shown in Table 2. Ti6Al4V samples (10 mm × 10 mm × 10 mm) are obtained by changing the scanning speed and the laser power. These are then abraded with silicon carbide (SiC) papers (grade from 240 to 2000), immersed in the Keller reagent (95 mL of water, 2.5 mL of HNO3, 1.5 mL of HCl, 1.0 mL of HF), and then ultrasonically cleaned with ethanol and deionized water for 10 min. The surface morphology is obtained by an optical microscope (OM) and a scanning electron microscope (SEM, FEI Quanta 200).
2.2. Corrosion Behavior. The electrochemical test is used to evaluate the corrosion resistance of Ti6Al4V parts fabricated by SLM. A CHI660D electrochemical workstation and 0.9% sodium chloride solution as the electrolyte are used in our experiments. Potentiodynamic polarization is tested in an electrochemical cell with the three-electrode system, which consists of a Ti6Al4V sample working electrode (10 mm × 10 mm × 10 mm), a platinum counter electrode, and a saturated calomel electrode with a Luggin capillary bridge. The experiment is performed at room temperature, and the distance between the Luggin capillary and the surface of the working electrode is fixed at 2 mm. According to the ISO 10271-2011 standard, the potentiodynamic current/potential curves are recorded by the C View software with a scan rate of 10 mV/s from −1.6 to 1.5 V. When the electrochemical reaction is in equilibrium, the sample is in a self-corrosion state, and no net current is accumulated. When the system is out of the equilibrium state, the amount of material released from the cathode is proportional to the current strength and conduction time. Hence, the corrosion rate is assessed using the corrosion current density. The potential−current density curve (as a logarithm of current in the form of a Tafel graph), open-circuit potential, and corrosion current density are obtained.
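For readers who wish to reproduce this step from exported data, the following is a minimal sketch of Tafel extrapolation, estimating the corrosion current density from straight-line fits to the anodic and cathodic branches of a potential/current-density curve. It is illustrative only: the variable names and the fitting window are assumptions, and it is not the commercial workstation software used in the experiments.

import numpy as np

def tafel_fit(E, i, E_corr, window=(0.05, 0.15)):
    """Estimate the corrosion current density (A/cm^2) by Tafel extrapolation.

    E      : electrode potentials in V (1-D array)
    i      : measured current densities in A/cm^2 (signed, same length as E)
    E_corr : self-corrosion (open-circuit) potential in V
    window : overpotential range in V used for the two linear fits
    """
    E = np.asarray(E, dtype=float)
    i = np.asarray(i, dtype=float)
    eta = E - E_corr
    log_i = np.log10(np.abs(i) + 1e-12)          # avoid log(0) near E_corr

    anodic = (eta > window[0]) & (eta < window[1])
    cathodic = (eta < -window[0]) & (eta > -window[1])

    # linear Tafel fits log10|i| = a*E + b on each branch
    a1, b1 = np.polyfit(E[anodic], log_i[anodic], 1)
    a2, b2 = np.polyfit(E[cathodic], log_i[cathodic], 1)

    # ideally both lines pass through (E_corr, log10 i_corr); average the two
    log_icorr = 0.5 * ((a1 * E_corr + b1) + (a2 * E_corr + b2))
    return 10.0 ** log_icorr

Values of the order of 10−5 A/cm2, as reported in Tables 3 and 4, would be the expected output for curves like those in Figures 2 and 6.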
2.3. Biocompatibility. In vitro cytotoxicity is used to characterize the biological properties of Ti6Al4V parts. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) method is used to assess the cytotoxicity of the material. A microplate reader is used to obtain the absorbance of the MTT formazan dissolved in dimethyl sulfoxide. The absorbance value reflects the number of surviving cells and the strength of cell metabolic activity. The relative growth rate (RGR) of the cells is calculated based on the absorbance value. The samples are extracted in a cell culture medium containing 10% calf serum for 24 h at a ratio of 1.25 cm2 : 1 mL (surface area of parts : extraction medium) at 37 °C. Mouse fibroblasts L929 are used, subculturing the vigorously growing cells for 48−72 h in the experiment. The prepared 1 × 105/mL cell suspension is inoculated on a 96-well plate, and blank control, negative control, positive control, and material groups are set up. Each group is given at least six wells, and 100 μL of the cell suspension is inoculated into each well. After being cultured in a 5% CO2 incubator for 24 h, the original culture medium is discarded. The blank control group receives fresh cell culture medium. The negative control group receives high-density polyethylene extract. The positive control group receives 5% dimethylsulfoxide (DMSO). The material group receives the Ti6Al4V part extract (divided into two groups, 100 and 50% extract); 100 μL of the extract is added to each well, which is then kept in a 5% CO2 incubator for 24 h. After discarding the culture medium in each well, 50 μL of MTT solution with a mass concentration of 1 g/L is added to each well. After 2 h of continuous culture, the liquid in each well is discarded. One hundred microliters of isopropanol is then added and mixed evenly. Finally, the absorbance at 570 and 650 nm is obtained with the microplate reader, and the relative growth rate is calculated according to the formula RGR = A/A_B × 100%, where RGR is the relative growth rate (%), A is the absorbance of the test group (negative and positive groups), and A_B is the absorbance of the blank control group.
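As a small illustration of this calculation (not the authors' code, and with made-up absorbance numbers), the RGR of one group can be computed from per-well readings as follows; using the 650 nm reading as a background reference is a common MTT convention that is assumed here rather than stated in the paper.

import numpy as np

def rgr_percent(a570_test, a650_test, a570_blank, a650_blank):
    """Relative growth rate RGR = A / A_B x 100%.

    Each argument is a sequence of per-well absorbances; the 650 nm reading
    is subtracted as a background reference (assumed convention).
    """
    a_test = np.mean(np.asarray(a570_test) - np.asarray(a650_test))
    a_blank = np.mean(np.asarray(a570_blank) - np.asarray(a650_blank))
    return 100.0 * a_test / a_blank

# hypothetical six-well readings for one material group and the blank control
print(rgr_percent([0.62, 0.60, 0.64, 0.61, 0.63, 0.59],
                  [0.05, 0.05, 0.06, 0.05, 0.05, 0.05],
                  [0.82, 0.80, 0.84, 0.81, 0.79, 0.83],
                  [0.05, 0.06, 0.05, 0.05, 0.05, 0.05]))
# prints roughly 74, i.e. a value in the range of the proliferation rates reported below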
3. RESULTS AND DISCUSSION

3.1. Effects of the Scanning Speed. In this experiment, Ti6Al4V parts are fabricated with SLM parameters as follows: the scanning speed v = 1000, 1100, 1200, 1300, and 1400 mm/s; the laser power P = 280 W; the hatch distance h = 0.14 mm; and the layer thickness d = 0.03 mm. The corrosion behavior of Ti6Al4V parts fabricated by SLM is evaluated by an electrochemical test. Figure 2 shows the self-corroding Tafel potentiodynamic polarization curves of five groups of Ti6Al4V parts. The self-corrosion potential is a stable potential when the system is not subjected to external polarization. The higher the self-corrosion potential, the smaller the corrosion tendency. At the scanning speed of 1100 mm/s, the self-corrosion potential of the Ti6Al4V part is about −146.2 mV, showing the lowest corrosion tendency. At a scanning speed of 1300 mm/s, the self-corrosion potential is −116.2 mV, and the related corrosion tendency is the greatest. The self-corrosion potential only reflects the stability of the system.
The performance of corrosion dynamics is characterized by the corrosion current density. Table 3 shows the measured values of the corrosion current density and self-corrosion potential of the five groups of Ti6Al4V parts. With the increase in the scanning speed, the corrosion resistance first improves and then degrades gradually. At the scanning speeds of 1000, 1300, and 1400 mm/s, the corrosion current density is relatively larger, 3.486 × 10−5, 3.477 × 10−5, and 3.439 × 10−5 A/cm2, respectively. At the scanning speeds of 1100 and 1200 mm/s, the corrosion current densities of the prepared parts are about 2.507 × 10−5 and 2.701 × 10−5 A/cm2, respectively. A larger corrosion current density means a larger corroded quantity. When the scanning speed increases from 1000 to 1100 mm/s, the corrosion current density reaches the minimum value (2.507 × 10−5 A/cm2). When the scanning speed exceeds 1100 mm/s, the corrosion current density increases. Hence, the corrosion resistance of Ti6Al4V parts can be adjusted by changing the scanning speed. To evaluate the biological performance of Ti6Al4V parts, cell culture and proliferation are also investigated in this study. As shown in Figure 4, when the scanning speed is 1300 mm/s, the cell proliferation rate is the largest (74.4%), and when the scanning speed is 1400 mm/s, the cell proliferation rate is the smallest (65.2%). Hence, the biological performance of Ti6Al4V parts can be adjusted by changing the scanning speed.
To understand the detailed influence mechanism of scanning speed on the corrosion resistance and biological performance, the SEM images of Ti6Al4V parts fabricated at different scanning speeds are shown in Figure 5. The above experimental phenomena are attributed to the constituent phase and the number and size of defect holes of Ti6Al4V parts. As shown in Figure 5, at the scanning speed of 1000 mm/s, more acicular α′-Ti phase and a small number of defect holes with a size of about 6 μm are observed on the surface of Ti6Al4V parts. It is known that the acicular α′-Ti phase is easily dissolved and favorable for cell growth and attachment. 31 In addition, the large defect holes not only increase the contact area between the etching solution and Ti6Al4V parts but also hinder the proliferation of cells and release toxic ions, resulting in a low relative growth rate and a high round shrinkage rate of cells on the surface of the part. At the scanning speed of 1100 mm/s, the β-Ti phase is most commonly observed and defect holes are relatively rare. It is known that the β-Ti phase plays an important role in resisting dissolution. 31 However, there is little α′-Ti phase conducive to cell growth and attachment. Hence, Ti6Al4V parts fabricated at 1100 mm/s scanning speed have better corrosion resistance but poor biocompatibility. At the scanning speed of 1200 mm/s, relatively more β-Ti phase and a small amount of α′-Ti phase are observed, and a certain number of defect holes with larger size are also obtained on the surface of Ti6Al4V parts. Due to the existing big holes and α′-Ti phase, the corrosion resistance of Ti6Al4V parts is reduced compared with the situation at 1100 mm/s. On the other hand, the α′-Ti phase leads to the acceleration of cell proliferation and an increase in the relative growth rate. At the scanning speed of 1300 mm/s, more acicular α′-Ti phase and tiny holes are observed, which is conducive to the adhesion and proliferation of the cells and increases the relative growth rate of the cells. At the scanning speed of 1400 mm/s, more acicular α′-Ti phase and a lot of defect holes (both large holes and tiny holes) are observed. The Ti6Al4V parts fabricated at the scanning speed of 1400 mm/s show the worst corrosion resistance and biocompatibility.
3.2. Effects of Laser Power. The Ti6Al4V parts are fabricated with SLM parameters as follows: the laser power P = 200, 240, 280, 320, and 360 W; the scanning speed v = 1200 mm/s; the hatch distance h = 0.14 mm; and the layer thickness d = 0.03 mm. The corrosion behavior is evaluated by an electrochemical test. Figure 6 shows the self-corroding Tafel potentiodynamic polarization curves of five groups of Ti6Al4V samples. The self-corrosion potential of Ti6Al4V parts sharply increases and then rapidly decreases with the increase in the laser power. The self-corrosion potential is −264.9 mV at the laser power of 320 W, which indicates the smallest corrosion tendency. At the laser power of 360 W, the self-corrosion potential is −102.5 mV, exhibiting the highest corrosion tendency. Table 4 shows the measured values of the corrosion current density and self-corrosion potential of five groups of Ti6Al4V parts. As the laser power increases, the corrosion current density shows oscillating behavior: decreasing sharply, then increasing quickly, and finally rapidly decreasing. At the laser powers of 200, 240, and 320 W, the corrosion current density is relatively larger (3.003 × 10−5, 3.171 × 10−5, and 3.356 × 10−5 A/cm2) and the corrosion resistance is relatively lower. At the laser power of 360 W, the relatively smaller corrosion current density (2.801 × 10−5 A/cm2) indicates better corrosion resistance. The corrosion current density reaches the minimum (2.701 × 10−5 A/cm2) at 280 W laser power, and the Ti6Al4V parts exhibit the best corrosion resistance. Hence, the corrosion resistance of Ti6Al4V parts can be adjusted by changing the laser power. Figure 7 depicts the cell morphology of L929 cells cultured in different extracts for 24 h. Figure 7a shows the cell morphology of the blank control. As shown in Figure 7b, the cell morphology of the negative control group is normal, which is similar to the cell morphology given in Figure 7a. The morphology of the cells of the positive control, in contrast, is rounded, as shown in Figure 7c. The cell morphology shown in Figure 7a−c proves that the experiment is effective. Figure 7d−h shows the difference in cell morphology on the surface of Ti6Al4V parts fabricated under different laser powers. When the laser power is increased from 200 to 360 W, the cell round shrinkage rates are about 11, 10, 8, 7, and 10%. The changes in cell proliferation rate in the five groups of different laser powers are investigated in this study, as shown in Figure 8. It is easily found that the changing trend of cell proliferation rate is consistent with that of cell morphology. With the increase in laser power, the cell proliferation rate first slowly decreases and then slowly increases. When the laser power is 280 W, the minimum cell proliferation rate is obtained (70.8%). When the laser power is 200 W, the maximum cell proliferation rate is 75.5%. Hence, the biological performance of Ti6Al4V parts can be adjusted by changing the laser power.
As shown in Figure 9, Ti6Al4V parts fabricated by SLM at different laser powers have different microstructures and defects, causing different corrosion resistance and biological properties. At the laser powers of 200, 240, and 320 W, relatively more α′-Ti phase and more hole defects appear on the surface of Ti6Al4V parts, which leads to a reduction of corrosion resistance. At the laser power of 280 W, relatively more β-Ti phase and relatively fewer defects are observed, and the Ti6Al4V parts show better corrosion resistance. Thus the corrosion resistance of Ti6Al4V parts is governed jointly by the β-Ti phase, the α′-Ti phase, and the hole defects.
CONCLUSIONS
Selective laser melting (SLM), as one of the typical additive manufacturing technologies, has been widely used in the medical field, especially for implant applications. The corrosion resistance and the biocompatibility of implants are vital properties for Ti6Al4V parts fabricated by SLM for biomedical applications. To achieve better corrosion resistance and biocompatibility of Ti6Al4V parts, the effects of SLM processing parameters on the corrosion resistance and the biocompatibility of Ti6Al4V parts are investigated by changing the scanning speeds (1000, 1100, 1200, 1300, and 1400 mm/s) and the laser powers (200, 240, 280, 320, and 360 W). The experimental results show that (1) the corrosion resistance and biocompatibility of Ti6Al4V parts can be regulated by changing the scanning speed and the laser power due to the constituent phase and the number and size of defect holes of Ti6Al4V parts; (2) a large number of defect holes leads to a relatively lower growth rate and a high round shrinkage rate of cells on the surface of the part, due to the increase of the contact area (between the etching solution and Ti6Al4V parts) and the release of toxic ions; (3) tiny holes are conducive to the adhesion and proliferation of the cells and the increase in the relative growth rate of the cells; (4) compared with laser power, the scanning speed has a stronger influence on the performance of the part; (5) at the scanning speed of 1100 mm/s, the laser power of 280 W, the hatch distance of 0.14 mm, and the layer thickness of 0.03 mm, Ti6Al4V parts show better corrosion resistance; (6) Ti6Al4V parts with better biocompatibility are fabricated at the scanning speed of 1200 mm/s, the laser power of 200 W, the hatch distance of 0.14 mm, and the layer thickness of 0.03 mm, and the cell proliferation rate is the largest (75.5%). The proposed research is applied to improve the biological activity of titanium alloy implants, which can increase the surgical success rate of implants and promote the clinical application of implants. However, a more detailed influence mechanism of the scanning speed on the properties of Ti6Al4V parts is not addressed in this study and remains under investigation.
Association of moderately elevated trimethylamine N-oxide with cardiovascular risk: is TMAO serving as a marker for hepatic insulin resistance
Elevated trimethylamine N-oxide is an established cardiovascular and metabolic risk factor

To date, at least five prospective cohort studies have concluded that increased plasma levels of trimethylamine N-oxide (TMAO) predict increased risk for major adverse cardiovascular (CV) events in patients with pre-existing coronary heart disease. [1][2][3][4][5] Moreover, though some epidemiology does not support a connection between plasma TMAO and CV risk, 6 7 a recent meta-analysis of 11 prospective cohort studies concludes that higher plasma TMAO correlates with a 23% increase in risk for CV events (HR 1.23, 95% CI 1.07 to 1.42), as well as a 55% increase in all-cause mortality. 8 The possibility that TMAO may be a mediating factor in this regard is raised by rodent studies in which plasma levels have been raised either by direct oral administration of TMAO, or by administration of very high doses (proportionately very much higher than would be employed in human supplementation) of its precursors phosphatidylcholine and carnitine; in these studies, in which the achieved plasma level of TMAO was at least an order of magnitude higher than commonly observed in humans, a proatherogenic effect was documented. [9][10][11][12][13][14] In vitro studies, likewise employing supraphysiological concentrations of TMAO, have demonstrated effects suggesting proatherogenic potential. 12 13 15-17 In case-control epidemiology, elevated TMAO has also been linked to substantially increased risk for type 2 diabetes and metabolic syndrome. [18][19][20] Indeed, the correlations between TMAO and diabetes risk appear to be stronger than those for CV risk.
Nutritional intakes of TMAO and its precursors do not correlate with CV risk

Yet the notion that TMAO acts as a human vascular toxin at the plasma concentrations seen in people with reasonably normal renal function is difficult to square with other recent findings. Preformed TMAO is notably high in fish, in which it serves to maintain osmotic balance; levels tend to be higher in deep-sea fish, which must survive at higher pressures. [21][22][23][24] This TMAO can be directly absorbed after fish consumption. 25 However, at least in those who do not ingest a very large amount of fish, a high proportion of their plasma TMAO arises from bacterial metabolism of dietary choline (usually ingested as phosphatidylcholine) and carnitine; trimethyllysine also makes a minor contribution in this regard. 9 10 26 Certain gut bacteria can metabolise these compounds to trimethylamine (TMA) via TMA lyase activity; inhibition of this lyase activity prevents induction of atherosclerosis in mice fed high-dose choline. 11 27 28 This TMA can then be absorbed; its subsequent oxidation by hepatic flavin-containing monooxygenases (FMOs) converts it to TMAO. 29 30 Unless choline has cardioprotective properties of which we are currently unaware, a diet relatively rich in choline would be expected to increase CV risk if physiological levels of plasma TMAO can indeed provoke CV disease or CV events. Yet a recent meta-analysis of prospective epidemiological studies concluded that dietary choline intake has no significant impact on risk for incident CV disease or CV mortality; with respect to CV mortality, only two pertinent studies were available, so the conclusion in this respect might not be definitive. 31 Likewise, a recent meta-analysis failed to associate consumption of eggs (a rich source of phosphatidylcholine) with increased CV risk. 32 With respect to carnitine and CV risk, a meta-analysis of prospective clinical trials in patients who had recently experienced a myocardial infarction (MI) concluded that carnitine supplementation is markedly protective with respect to total mortality, ventricular arrhythmias and new-onset angina; trends for lower incidence of reinfarction or heart failure did not achieve statistical significance, possibly owing to the modest size of the studies included. 33 Clinical trials have also reported favourable effects of supplemental carnitine or carnitine esters on angina, intermittent claudication and heart failure. [34][35][36][37][38][39] Moreover, rodent atherogenesis studies, in which carnitine has been administered in doses reasonably proportional to the supplementation doses used clinically, have found that carnitine is antiatherogenic, despite its propensity to raise TMAO. 22 40-42 With respect to fish, the primary dietary source of preformed TMAO, a meta-analysis found that fish consumption correlates dose dependently with CV protection, likely because of the long-chain omega-3 content of fish. 43 While it might be argued that the benefits of omega-3 ingestion are masking a genuine adverse impact of TMAO on CV risk, the impact of moderate supplemental intakes of fish omega-3 on this risk seems to be rather modest in the context of current drug therapy, primarily influencing risk for sudden death arrhythmias. [44][45][46] Hence, these benefits would seem unlikely to overwhelm the adverse effects of TMAO if these were of important magnitude.
In aggregate, these findings are difficult to square with the notion that TMAO is a mediating CV risk factor, at least at commonly occurring levels, since increased ingestion of choline, carnitine or fish would be expected to increase TMAO levels, but is not associated with increased CV risk.
It is, therefore, reasonable to suspect that moderately elevated TMAO, rather than being a mediator of the associated CV risk, is a marker for factors which both promote CV events and increase plasma TMAO. 47 48 Indeed, after a plethora of multicentre supplementation trials, we have learnt something precisely comparable about moderately elevated homocysteine and coronary risk. 49 50 Whereas the highly elevated homocysteine levels seen in genetic hyperhomocysteinaemia are evidently directly pathogenic to the vascular system, and homocysteine at comparably high levels exerts proinflammatory effects on vascular cells in vitro, we were never presented with evidence that the moderately elevated levels of homocysteine associated with increased CV risk (roughly an order of magnitude lower) exerted important effects in vitro. Currently, TMAO appears to be in an analogous position. 22

Diminished renal function can markedly elevate TMAO

Evidently, an increase in plasma TMAO can reflect an increase in TMAO synthesis or a reduction in its renal clearance. A promising lead is offered by the observation that plasma TMAO levels are highly dependent on renal function. A study examining plasma TMAO levels in patients with varying degrees of renal compromise and in healthy controls found that TMAO averaged 5.8 µM/L in the controls (with an average measured glomerular filtration rate [mGFR] of 83 mL/min), 14.6 µM/L in patients with stages 3-4 kidney disease (mGFR 28 mL/min) and 75.5 µM/L (mGFR 7 mL/min) in stage 5 patients. 51 Hence, as mGFR falls, plasma TMAO tends to rise almost proportionally.
Although it is well known that severe kidney disease is associated with a considerable increase in CV risk, a meta-analysis of general population cohort studies has found that even a mild reduction in estimated GFR (eGFR) is a risk factor for CV mortality. 52 Thus, whereas risk for CV mortality was found to be relatively flat for eGFRs in the range of 75-120 mL/min, a significantly higher risk was seen at eGFR 60 mL/min, and this mortality rose progressively as eGFR fell. Hence, even relatively modest reductions of eGFR sometimes considered to be within the 'normal' range of kidney function (eGFR of 60 mL/min or greater) are associated with increased CV risk. This increased risk could presumably reflect an impact of suboptimal kidney function per se (leading to increased levels of phosphate or other uraemic toxins), as well as of vasculotoxic factors inducing reduction of kidney function. These may not have been adequately corrected for in epidemiological analyses focusing on TMAO.
Nonetheless, there is good reason to believe that, whereas uncorrected correlations of TMAO with CV risk are explained in part by the CV risk associated with diminished renal function, this is not the sole explanation for the utility of TMAO as a risk factor. That is because the five cohort studies cited above included analyses which adjusted for eGFR as a covariate; while this correction markedly decreased the calculated CV risk associated with increased TMAO, it by no means eliminated it. [1][2][3][4][5] We can, therefore, conclude that factors which boost TMAO synthesis are mediators of some of the risk associated with elevated TMAO.

Are bad bacteria the culprit?

Two steps are involved in the synthesis of TMAO: generation of TMA from certain dietary precursors (most notably, choline and carnitine) by the TMA lyase activity of gastrointestinal (GI) bacteria; and oxidation of circulating TMA to TMAO by hepatic FMOs, by far the most active of which in this regard is FMO3. 29 With respect to GI bacteria, rodent studies have led to increasing awareness of the fact that microbiota can notably modulate metabolic health. [53][54][55][56] Is it possible that certain commonly occurring GI bacteria are quite proficient at generating TMA, while simultaneously increasing CV risk by certain mechanisms (for example, by suppressing incretin synthesis or maximising bile acid reabsorption, which might elevate LDL cholesterol)? Or could some dietary factor (soluble fibre, perhaps) suppress the capacity of GI bacteria to generate TMA, while simultaneously protecting CV health?
While this is an intriguing hypothesis that merits further follow-up, research studies to date provide little support for it. Controlled clinical studies of supplementation with probiotic micro-organisms linked to improved intestinal health (Lactobacillus casei Shirota, and another preparation providing a Lactobacillus, Bifidobacterium and Streptococcus thermophilus) have so far failed to demonstrate reductions in plasma TMAO. 57 58 Faecal microbiota transplantation from vegan donors to recipients with metabolic syndrome, while it did succeed in altering the latter's GI flora, did not lower their plasma TMAO levels. 59 Administration of the cholesterol-lowering prebiotic glucaro-1,4-lactone to rats fed a high-fat diet, which markedly boosted intestinal levels of Lactobacillus, Bifidobacteria and Enterococcus, while suppressing Escherichia coli, was associated with an increase of TMAO in urine. 60 Supplementation of mouse diets with either galacto-oligosaccharides/inulin or polydextrose and insoluble bran fibre increased serum TMAO levels, whereas supplementation with both simultaneously failed to influence TMAO. 61 In one mouse study, supplementation with soluble fibre from wheat bran did lower colonic TMA lyase activity as well as serum cholesterol. 62 However, it seems unlikely that an increased intake of protective soluble fibre explains the association of TMAO with vascular risk, since very ample intakes of soluble fibre are required to achieve a modest reduction in low-density lipoprotein (LDL) cholesterol (intakes which very few people ingest); and in any case the associated risk persists after adjustment for lipid risk factors such as LDL cholesterol. 63 While it is feasible to produce mice whose intestines have been colonised with bacteria with limited capacity to generate TMA, there so far is no evidence that this confers any special vascular protection on these mice when they eat normal diets. 27

Elevated hepatic FMO3 activity can reflect hepatic insulin resistance

Which brings us to the alternative thesis: that modulation of hepatic FMO3 activity by certain factors that can influence CV health can rationalise the epidemiology of TMAO. The regulation of hepatic FMO3 requires much further research, but several intriguing findings have emerged. Insulin suppresses FMO3 expression at both the messenger RNA (mRNA) and protein level; conversely, glucagon elevates FMO3 expression. 64 Also, the FXR receptor, for which many bile acids serve as activating ligands, stimulates transcription of the FMO3 gene. 29 65 With respect to the impact of insulin, genetically modified mice in which hepatic expression of the insulin receptor has been selectively ablated (liver-specific insulin receptor knockout mice) have greatly enhanced hepatic expression of FMO3. 64 These mice develop marked hypercholesterolaemia and are exceptionally prone to atherosclerosis when fed a proatherogenic diet, and also understandably have an elevated hepatic glucose output. 66 The pertinence of these findings to humans has been clarified by a study in which liver biopsies were obtained both from obese subjects and lean controls; mRNA expression of FMO3 was about twice as high in the obese subjects, likely reflecting hepatic insulin resistance in the context of hyperinsulinaemia. 64
Recent studies suggest that the hepatic insulin resistance associated with obesity and metabolic syndrome is mediated by increased hepatic influx of free fatty acids (FFAs), giving rise to increased levels of diacylglycerol; the latter promotes activation of protein kinase C-epsilon, which in turn hampers the tyrosine kinase activity of the insulin receptor by phosphorylating threonine-1160 of the beta-chain. [67][68][69] Other kinase or phosphatase activities stimulated by lipid overload may also impair insulin signalling at points downstream from the insulin receptor. 70 71 Excess FFA influx also drives increased triglyceride synthesis, giving rise to the hepatic steatosis often associated with hepatic insulin resistance. However, increased hepatic triglyceride levels per se may not promote hepatic insulin resistance; such resistance correlates with hepatocyte levels of diacylglycerol, rather than of triglycerides. 69 72 Hepatic insulin resistance and its common concomitant hepatic steatosis are associated with increased CV risk, as well as elevated risk for type 2 diabetes, risks likewise associated with elevated TMAO. 66 73-77 It is, therefore, straightforward to postulate that TMAO can serve as a marker for hepatic insulin resistance, and that this explains at least a portion of the risk for CV events and diabetes linked to TMAO. Although studies establishing TMAO as an independent CV risk factor have often corrected for certain correlates of obesity, such as body mass index or diabetes, it is unlikely that such corrections fully capture the impact of hepatic insulin resistance.
Correcting hepatic insulin resistance
This analysis suggests that healthful measures which tend to correct hepatic insulin resistance may favourably impact the vascular and metabolic health of subjects with high TMAO. Evidently, sustained remediation of the visceral obesity which often underlies hepatic insulin resistance should be helpful in this regard; nonetheless, it is easier to recommend this than to achieve it! By improving the insulin sensitivity of hypertrophied adipocytes, thiazolidinediones such as pioglitazone tend to improve hepatic insulin resistance in people with diabetes by quelling excessive fatty acid efflux from adipocytes, even though they tend to increase body fat mass somewhat. [78][79][80][81] Hormones and medications which boost hepatic AMPK activity tend to improve impaired hepatic insulin sensitivity. AMPK achieves this, at least in part, by downregulating mTORC1 activity, which acts indirectly to promote phosphorylations of insulin receptor substrate-1 that impede transmission of the insulin signal. 82 Also, by promoting oxidative disposal of FFAs while suppressing lipogenesis, AMPK could be expected to lessen hepatic diacylglycerol synthesis, thereby getting to the root of hepatic insulin resistance. 83 84 The favourable impact of metformin on hepatic insulin resistance in diabetes is thought to be mediated by activation of AMPK. [85][86][87][88] The phytochemical nutraceutical berberine, widely used in China for the management of type 2 diabetes, is likewise thought to improve glycaemic control via activation of AMPK, and has been shown to counter hepatic insulin resistance in diabetic hamsters. [89][90][91][92][93] Both adiponectin and glucagon-like peptide-1 (GLP-1) act on the liver to stimulate AMPK activity; moreover, they have been shown to combat hepatic insulin resistance, and work in various ways to promote vascular and metabolic health. [94][95][96][97][98][99][100][101][102][103][104][105][106] Hence, elevated TMAO may often be a marker for suboptimal adiponectin and/or GLP-1 activity. The antidiabetic drug pioglitazone tends to boost the diminished adiponectin secretion of hypertrophied adipocytes. 107 108 It seems likely that plant-based diets of rather low-protein content can increase adiponectin production, as these boost the liver's production of fibroblast growth factor-21, one of whose major functions is to promote adiponectin secretion by adipocytes. 109 110 Such diets are also useful for preventing or correcting the obesity that often underlies hepatic insulin resistance. [111][112][113] With respect to GLP-1, acarbose, dietary lente carbohydrate, bile acid sequestrants and certain prebiotics can boost GLP-1 production, drugs inhibiting plasma dipeptidyl peptidase-4 can prolong its half-life, and injectable GLP-1 receptor agonists can mimic its bioactivity. [114][115][116][117] PPARalpha agonists, such as fenofibrate, also promote hepatic fatty acid oxidation, owing to induction of a range of mitochondrial enzymes (including carnitine palmitoyl transferases-1a and -2, fatty acyl-CoA dehydrogenase, UCP-2) which catalyse such oxidation. 118 119 Moreover, PPARalpha agonism also acts indirectly to stimulate AMPK in the liver and other tissues by boosting adiponectin production in adipose tissue; PPARalpha enhances hepatic synthesis and release of fibroblast growth factor-21, which in turn stimulates adiponectin synthesis in adipocytes. [120][121][122][123]
[120][121][122][123] Not surprisingly, fenofibrate has been shown to decrease hepatic levels of diacylglycerol and alleviate hepatic insulin resistance in rodents fed diets high in fat and/or fructose. [124][125][126][127][128] Moreover, fenofibrate therapy has been shown to reduce risk for CV events in patients with metabolic syndrome. 118 There is recent evidence that the carotenoid antioxidant astaxanthin can also serve as a PPARalpha agonist, and, both in rodents and humans, alleviate the dyslipidaemia associated with metabolic syndrome. [129][130][131][132][133][134][135] In obese mice, astaxanthin has been reported to improve hepatic insulin resistance. 136 Krill oil provides esterified forms of astaxanthin which have superior bioavailability, as well as health-protective omega-3 fatty acids, oxidised forms of which likewise serve as PPARalpha agonists. [137][138][139][140] Moreover, krill oil supplementation has been found to beneficially modulate serum lipid profile-including, intriguingly, a reduction in LDL cholesterol-in controlled clinical trials. 141 Krill oil, even when compared with fish oil, suppresses hepatic steatosis in rodents. [142][143][144] This may be due to its astaxanthin content, which is not found in fish oil. Moreover, krill oil, but not fish oil, reduces diacylglycerol and ceramide content in the liver. 145 The phospholipid fraction of krill oil has also been noted to reduce hepatic glucose production, unlike fish oil. 146 Thus, krill oil, being a source of a highly bioavailable form of astaxanthin, appears to have additional advantages for reducing hepatic steatosis and hepatic insulin resistance compared with fish oil.
In brief, if this analysis is accurate, various measures which alleviate hepatic insulin resistance (correction of visceral obesity, activation of 5' adenosine monophosphate-activated protein kinase (AMPK) with metformin or berberine, activation of PPARalpha with fenofibrate or astaxanthin, amplification of adiponectin production with pioglitazone or plant-based diets, and clinical strategies which boost the production or bioactivity of GLP-1) could be expected to decrease elevated TMAO while also decreasing the risk for vascular events and diabetes associated with this risk factor. Figure 1 summarises these relationships.
FMO3 might also mediate risk associated with elevated TMAO
One intriguing observation to emerge from TMAO research is that elevated hepatic expression of FMO3 boosts hepatic lipogenesis and gluconeogenesis, independent of its impact on TMAO levels; this might reflect FMO3's ability to somehow support expression of FoxO1. 30 64 This raises the interesting prospect that drugs selectively targeting FMO3 might have some utility in diabetes and hyperlipidaemia, particularly when elevated TMAO levels suggest that hepatic FMO3 expression is high. However, since FMO3 plays a systemic role in catecholamine metabolism, suppressing its function might not prove to be innocuous; genetic absence of FMO3 activity has been associated with hypertension. 147
In any case, when hepatic insulin resistance is present, correcting this should lessen hepatic FMO3 expression.
Overview
Accumulating evidence points to elevated plasma TMAO as a risk factor for both atherosclerosis, CV events and type 2 diabetes, and rodent studies have found that extremely high dietary intakes of TMAO per se or its dietary precursors choline and carnitine are proatherogenic. Moreover, supraphysiological concentrations of TMAO exert proinflammatory effects in cell culture studies. These findings have led some observers to recommend that dietary or supplementary consumption of choline and carnitine should be minimised-although these analysts have rarely recommended abstinence from fish, the richest dietary source of preformed TMAO. In fact, a meta-analysis of pertinent nutritional epidemiology has failed to observe an impact of dietary choline on CV risk. Supplemental use of carnitine has been found to reduce mortality and diminish risk for arrhythmias and new-onset angina in patients who have suffered a previous MI, has shown clinical utility in angina, intermittent claudication and heart failure, and exerts antiatherogenic effects in rodents when fed at moderate levels comparable to human supplemental intake. And, fish consumption correlates dose dependently with favourable vascular outcomes. These findings point ineluctably to the conclusion that TMAO is not a mediating risk factor, at least in the concentrations seen in people whose renal function is not severely defective.
Hence, moderately elevated TMAO must be viewed as a marker for other factors that both raise TMAO and confer increased risk for vascular disease and diabetes. Plasma levels of TMAO are highly reflective of renal function, and hence a portion of the risk associated with elevated TMAO is mediated either by impaired renal function, or renotoxic factors that are also vasculotoxic or promote diabetes. Nonetheless, TMAO remains predictive of vascular risk after statistical correction for eGFR; factors influencing TMAO synthesis evidently mediate some of this risk. While it is theoretically possible that certain strains of GI bacteria possessing high TMA lyase activity exert adverse effects on vascular and metabolic health, this remains to be demonstrated, and efforts to lower plasma TMAO with probiotics thought to be health protective have so far failed.
Factors which upregulate hepatic expression and activity of FMO3, chiefly responsible for conversion of TMA to TMAO, therefore, fall under suspicion. In this regard, it is notable that subnormal hepatic insulin activity reflecting hepatic insulin resistance has been found to boost hepatic FMO3 expression. Hepatic insulin resistance is typically induced by the excessive FFA influx associated with metabolic syndrome and visceral obesity, well-known risk factors for vascular disease and diabetes. This excessive FFA influx also gives rise to hepatic steatosis; although excessive accumulation of triglycerides in the liver does not appear to mediate hepatic insulin resistance, it serves as a marker for the increased FFA influx that does. Subnormal activities of either adiponectin or GLP-1-both of which exert favourable vascular and metabolic effects-can also promote hepatic insulin resistance. It is, therefore, reasonable to speculate that lifestyle measures which reverse visceral obesity, or nutraceutical/drug/dietary measures which boost the production or bioactivity of adiponectin and/or GLP-1, will alleviate the risk associated with elevated TMAO by ameliorating hepatic insulin resistance. Activation of AMPK with metformin or berberine, or of PPARalpha with fenofibrate or astaxanthin, could also be expected to have a favourable impact in this regard, in part by accelerating the oxidative disposal of excessive hepatic FFAs. Finally, elevated FMO3 activity per se may mediate some of the risk associated with high TMAO via upregulation of hepatic lipogenesis and gluconeogenesis.
Importantly, this analysis does not exclude the possibility that TMAO might be directly pathogenic at the very elevated levels typically seen in severe kidney dysfunction. Indeed, cell culture studies suggest that TMAO can be proinflammatory in the plasma concentrations achieved during kidney failure. It generally is wise to minimise the consumption of nitrogenous compounds in this context.
In conclusion, there is a reason to suspect that the elevated risk for vascular events and type 2 diabetes associated with elevated TMAO, after correction for recognised risk factors, is mediated largely by hepatic insulin resistance and the metabolic factors which induce it. This implies that a range of measures which typically improve hepatic insulin sensitivity, as catalogued above, could be expected to decrease elevated TMAO-a proposition that is readily clinically testable-while ameliorating the vascular and metabolic risk associated with high TMAO.
Contributors All authors contributed to the final manuscript.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests JJD is the author of The Salt Fix and Superfuel. MM: owner and science director of NutriGuard Research, a nutraceutical company which, among other things, sells berberine and astaxanthin supplements. JO: chief medical officer and founder of CardioTabs, a nutraceutical company which sells omega-3 supplements, and has a major ownership interest in the company.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Repression of the PRELP gene is relieved by histone deacetylase inhibitors through acetylation of histone H2B lysine 5 in bladder cancer
Background Proline/arginine-rich end leucine-rich repeat protein (PRELP) is a member of the small leucine-rich proteoglycan family of extracellular matrix proteins, which is markedly suppressed in the majority of early-stage epithelial cancers and plays a role in regulating the epithelial–mesenchymal transition by altering cell–cell adhesion. Although PRELP is an important factor in the development and progression of bladder cancer, the mechanism of PRELP gene repression remains unclear. Results Here, we show that repression of PRELP mRNA expression in bladder cancer cells is alleviated by HDAC inhibitors (HDACi) through histone acetylation. Using ChIP-qPCR analysis, we found that acetylation of lysine residue 5 of histone H2B in the PRELP gene promoter region is a marker for the de-repression of PRELP expression. Conclusions These results suggest a mechanism through which HDACi may partially regulate the function of PRELP to suppress the development and progression of bladder cancer. Some HDACi are already in clinical use, and the findings of this study provide a mechanistic basis for further investigation of HDACi-based therapeutic strategies. Supplementary Information The online version contains supplementary material available at 10.1186/s13148-022-01370-z.
Background
Although cancer is often thought to be a disease caused by mutations in genes involved in growth and differentiation [1,2], epigenetic changes that alter the structure of chromatin and consequently affect gene transcription can also occur at any stage during cancer progression [3,4]. Chromatin consists of a complex of DNA and a series of histones in the nucleus of eukaryotic cells, and its basic unit is called a nucleosome. The nucleosome consists of 147 bp of DNA wrapped around an octamer of four core histones H2A, H2B, H3, and H4 [5,6]. The N-terminal tail of the core histones contains lysine and arginine residues, rendering them susceptible to posttranslational modifications. Among the modifications, acetylation of core histones neutralizes the positive charge of lysine residues, thereby weakening their interaction with negatively charged DNA molecules. Changes in acetylation are particularly dynamic and reversible mechanisms that are altered by a variety of stimuli [7]. Specifically, the transition from one state to another is catalyzed by histone acetyltransferases (HATs) and histone deacetylases (HDACs). HATs can be divided into three major families, whereas the HDACs are grouped into four families consisting of 18 different HDACs [8]. Currently, there is a growing interest in HDACi as potent anticancer therapeutics. In particular, the application of HDACi has been shown to be useful in hematological diseases [9]. For example, in cutaneous T cell lymphoma (CTCL), the dynamic chromatin architecture of CTCL explains the efficacy of monotherapy with HDACi [10]. In contrast, monotherapy for solid tumors has been shown to be largely ineffective, and therefore the focus has been directed toward combined inhibition strategies [11].
Bladder cancer is one of the most common cancers worldwide, accounting for over 500,000 new cases and 200,000 cancer-related deaths annually [12]. Low-grade non-muscle-invasive bladder cancer (NMIBC) rarely acquires invasive features but usually has the potential to recur and a 5-year survival rate of approximately 90%. In contrast, high-grade muscle-invasive bladder cancer (MIBC; stage T2 or higher) often progresses to metastatic cancer and has a poor prognosis, with a 5-year survival rate of < 50% [13,14]. In terms of genomic aberrations, the tumors are usually resistant to various therapeutic regimens because of the high frequency of somatic mutations and high molecular heterogeneity. Because chromatin-regulated genes are more frequently mutated in MIBC than in other epithelial tumors [15], targeted therapies for chromatin abnormalities in chemo-resistant clones may prove beneficial for this disease. To date, methotrexate-vinblastine-adriamycin-cisplatin and gemcitabine-cisplatin have been the backbone of systemic chemotherapy. However, despite initial chemosensitivity, the majority of treated patients eventually develop chemoresistance, resulting in significantly shortened survival [14]. Therefore, there is an urgent need to develop new systemic strategies for the clinical management of this disease.
Small leucine-rich proteoglycans (SLRPs) constitute a family of 17 proteoglycans that are secreted as extracellular matrix (ECM) proteins [16]. Members of SLRPs not only modify ECM tissues but also function as regulators of ligand-induced signaling pathways [16][17][18][19]. We previously showed that the expression levels of two SLRPs (secreted ECMs), osteomodulin (OMD) and PRELP, are strongly repressed in the majority of early-stage epithelial cancers and that they play a role in the regulation of epithelial-mesenchymal transition (EMT) by altering cell-cell adhesion [20]. Furthermore, they were shown to be important factors that negatively regulate the development and progression of bladder cancer [20]. Although we showed that chromosome 9q deletion, which is involved in the development of bladder cancer, is responsible for the loss of function of the OMD gene, the mechanism of PRELP gene repression remains unresolved.
In this study, we showed that PRELP gene repression is relieved by HDACi mediated by histone acetylation in bladder cancer cells. In addition, we showed that acetylation of lysine residue 5 of histone H2B (H2BK5ac) in the PRELP gene promoter region is a marker that relieves PRELP gene repression. These results provide mechanistic insights into HDACi-mediated inhibition of the development and progression of bladder cancer, partly via regulation of PRELP. Some of these inhibitors are already in clinical application, and our data provide a mechanistic basis for considering their action as a therapeutic option.
Mechanism of PRELP repression is not mainly mediated by a genetic mutation, copy number aberration (CNA), or DNA methylation
Previously, we showed that the mRNA expression of PRELP was strongly repressed in the majority of epithelial cancers [20]. To investigate the association between PRELP mRNA expression and genomic aberrations, we first analyzed the correlation between PRELP expression and somatic mutations and CNAs using a comprehensive genomic dataset of 412 MIBCs characterized in multiple TCGA platforms [21]. In 99.8% of the cases (407/408), PRELP retained its wild-type form, and there was no association between somatic mutations and PRELP mRNA expression (Fig. 1A, B). Deletions in the PRELP gene were found in 15.3% of cases, and PRELP gene amplification was found in 30.7% of cases. However, these alterations did not correlate with PRELP mRNA expression (Fig. 1C, D). These results indicate that somatic mutations and CNAs are not strongly associated with PRELP mRNA expression. We next examined the DNA methylation status of the PRELP gene region that has been suggested to be associated with gene silencing but found no apparent association (Fig. 1E). These results suggest that the repression of PRELP mRNA expression is not dependent on somatic mutations, CNAs, or DNA methylation, but rather on transcriptional regulatory mechanisms associated with protein posttranslational modifications, such as histone modification.
Elucidation of the mechanism of repression of PRELP gene using in vitro models of bladder cancer
Next, we used two bladder cancer cell lines, RT4 and J82 to investigate the relationship between histone modification and repression of PRELP gene expression [22][23][24]. As we have already shown that RT4 and J82 cell lines have significantly reduced mRNA expression of PRELP compared to normal tissues [20], they constitute suitable cell lines for histone modification-related functional analysis. To further investigate whether these cell lines faithfully reflect the antitumor effects of PRELP [20], we stably introduced the PRELP gene into these cells using a lentivirus. The transduced PRELP gene contained an inducible promoter (Tet-On system), which allowed us to control the timing of PRELP gene expression. Therefore, we added doxycycline to the established cell lines to induce PRELP protein expression. Western blotting results showed that PRELP protein was overexpressed following the addition of doxycycline ( Fig. 2A, B). Importantly, we observed that cell proliferation was significantly suppressed in the cell lines in which PRELP protein was induced (Fig. 2C, D), confirming that PRELP protein functions negatively in cell proliferation [20]. These results suggest that both RT4 and J82 cell lines are suitable for the analysis of PRELP protein expression status and function.
HDACi reverses the repression of PRELP gene expression
To investigate the relationship between histone modification and repression of PRELP gene expression, we treated RT4 and J82 cells with several inhibitors of histone modification-related enzymes and then analyzed PRELP mRNA expression by RT-PCR. The inhibitors used were (1) G9a/GLP inhibitors, BIX01294 and UNC0638, which are compounds that selectively inhibit an enzyme that methylates histone H3 lysine 9 and are known to negatively regulate transcription [25,26]; (2) Ezh2 and Ezh1/Ezh2 inhibitors: EPZ011989 and UNC1999, which are compounds that selectively inhibit enzymes that add up to three methyl groups on lysine 27 of histone H3, particularly H3K27me3, the most important epi-marker for cancer diseases [27,28]. None of the compounds promoted increased PRELP mRNA expression in RT4 cells (Fig. 3A). Furthermore, consistent with the TCGA database analysis (Fig. 1E), treatment with 5-azacytidine, an inhibitor of DNA methyltransferase 1 (DNMT1) [29,30], did not show consistent upregulation of PRELP mRNA expression (Fig. 3A). Finally, we used the pan-HDACi, trichostatin A (TSA), which inhibits histone deacetylation [31,32]. Strikingly, there was a marked increase in PRELP mRNA expression in RT4 cells (Fig. 3A). To further verify these results, we confirmed the PRELP mRNA expression by RT-PCR after treatment with another pan-HDACi, suberanilohydroxamic acid (SAHA), which is used clinically as an anticancer drug [33,34]. Indeed, there was a marked increase in the mRNA expression of PRELP following treatment with SAHA (Fig. 3A). Similar results were obtained in J82 cells (Fig. 3B). In summary, these results suggest that the repression of PRELP gene expression involves protein deacetylation and that HDACi reverse this repression.
Repression of PRELP mRNA expression is mediated by deacetylation of H2BK5
The regulation of gene expression involving protein acetylation and deacetylation is often mediated by modifications of the histones in the gene promoter region [7]. To investigate the acetylation/deacetylation status of histones in the PRELP gene promoter region, we performed ChIP followed by quantitative PCR (ChIP-qPCR) [35,36]. Chromatin was immunoprecipitated using histone acetyl group-specific antibodies. The DNA bound to the immunoprecipitated chromatin was purified, and PCR primers specific to the PRELP gene promoter region (Fig. 4A) were used for PCR amplification. The antibodies used in this study were specific to acetyllysine residues 9 and 27 of histone H3, acetyllysine residues 12 and 16 of histone H4, and acetyllysine residues 5, 12, and 15 of histone H2B (Fig. 4B). Although no increase in acetylation of histone H3 and H4 was observed following SAHA treatment, we found a noticeable increase in the acetylation of H2B. In particular, H2BK5ac expression was significantly increased in RT4 cells (Fig. 4B). Furthermore, ChIP-qPCR experiments using different PCR primer sets designed for the PRELP gene promoter region gave similar results (Additional file 1: Fig. S1). Considering histone acetyltransferases (HATs) as transcriptional activators, we next examined whether the p300/CBP protein complex might be involved in the transcriptional activation of PRELP via H2BK5 acetylation. To this end, RT4 cells were treated with p300/CBP inhibitors along with SAHA addition. Indeed, ChIP-PCR experiments showed that H2BK5 acetylation by SAHA was diminished by adding histone acetyltransferase inhibitors, C646 and SGC-CBP30 (Fig. 4D). These results suggest that p300/CBP can acetylate H2BK5 in the promoter region of the PRELP gene. Although not statistically significant, acetylation of H2BK15 was also observed (see Fig. 4B, C). These results are consistent with CBP/p300 being involved in the acetylation of H2B, including lysine 15 [37]. H2BK5ac was also significantly increased in the PRELP gene promoter region in J82 cells after SAHA treatment (Fig. 4E). Although H2BK5 acetylation was increased in the protein as a whole following SAHA treatment (Additional file 1: Fig. S2), our results indicate that repression of PRELP gene expression involves deacetylation of H2BK5ac in the promoter region, and SAHA treatment reverses the repression via H2BK5ac in bladder cancer cells.
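ChIP-qPCR signals of this kind are commonly quantified relative to the input chromatin. As an illustration only, a minimal sketch of the standard percent-input calculation is shown below; the Ct values and the choice of an IgG control are hypothetical and are not taken from this study.

```python
import math

def percent_input(ct_input, ct_ip, input_fraction=0.01):
    """Standard percent-input calculation for ChIP-qPCR.
    ct_input: Ct of the (diluted) input sample; ct_ip: Ct of the IP sample;
    input_fraction: fraction of chromatin kept as input (e.g. 0.01 for 1%)."""
    # Adjust the input Ct so that it represents 100% of the chromatin
    ct_input_adj = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adj - ct_ip)

# Hypothetical Ct values: enrichment of an acetyl mark relative to an IgG control
mark = percent_input(ct_input=24.1, ct_ip=27.8)
igg = percent_input(ct_input=24.1, ct_ip=32.5)
print(mark / igg)  # fold enrichment over the IgG background
```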
Antitumor effects of HDACi in combination with cisplatin on bladder cancer cells
As HDACi have been shown to have very little effect on solid tumors as single agents, combinatorial use of inhibitors has attracted considerable attention [11]. Therefore, we investigated the antitumor effects of cisplatin in combination with SAHA, an FDA-approved anticancer drug that has been shown to enhance PRELP gene expression as described above. The results showed that combination therapy with cisplatin and SAHA severely inhibited the growth of RT4 and J82 cells compared to either treatment alone (Fig. 5A, B). When the data were analyzed based on the combination index (CI), strong synergism existed, with CI less than 1 for the dose combinations tested (Fig. 5C, D), consistent with the synergistic effect of cisplatin and TSA [38]. In particular, when SAHA (at a concentration of 2.5 μM or higher) was combined with cisplatin, the anticancer effect of cisplatin at lower concentrations was significantly enhanced (Fig. 5A, B). This is consistent with the fact that the PRELP gene is derepressed at this concentration range and exerts an antitumor effect (Figs. 2C, D, 3A, B). It would be interesting to test other HDACi in this setting; indeed, entinostat, a class I-selective HDAC inhibitor, also induced PRELP expression (Additional file 1: Fig. S3) and showed a strong synergistic effect with cisplatin in RT4 and J82 cells (Additional file 1: Fig. S4) [39]. Although these results suggest that Class I HDACs are involved in suppressing PRELP expression, we further tested various selective HDAC inhibitors to understand the selectivity of HDACs for PRELP gene repression (Additional file 1: Fig. S5). Indeed, the selective class I HDAC inhibitor, tacedinaline, showed an increase in PRELP expression, albeit less pronounced in J82, indicating that class I HDACs contribute to the inhibition of PRELP expression. On the other hand, Santacruzamate A, a selective HDAC2 inhibitor, did not upregulate PRELP expression. Somewhat unexpectedly, we found marked upregulation of PRELP expression with LMK-235, a selective HDAC4/5 inhibitor. Given that LMK-235 is known to inhibit HDACs other than HDAC4/5 at relatively higher concentrations [40], it is unclear whether or not HDAC4/5 are involved in PRELP repression. These results suggest that HDAC1 is likely involved in the inhibition of PRELP expression, but further verification is needed because other HDACs may also be involved in the inhibition of PRELP expression. In sum, these results suggest that HDACi, which activate PRELP gene expression, enhance the inhibition of cell proliferation when combined with cisplatin. Of note, paclitaxel, a different class of chemotherapeutic agent, did not show synergistic anticancer activity with SAHA in RT4 and J82 cells (Additional file 1: Fig. S6).
Discussion
HDACs are aberrantly expressed in various tumors. For example, class I HDACs have been found to be overexpressed in bladder tumors [41], breast tumors [42], prostate tumors [43], and renal cells [44], and overexpression of HDAC2 and HDAC3 has also been shown to be associated with clinicopathological indicators of disease progression [42]. Many studies have reported the anticancer effects of HDAC inhibition with the induction of apoptotic cell death [45]. In particular, the synergistic anticancer effect of combination therapy with HDACi and other drugs has already been demonstrated in several carcinomas [11], and different mechanisms of action have also been reported for each type of carcinoma [46]. The mechanism of action of this synergistic effect is complex, and various scenarios are possible. For example, each mechanism of action may prevent the acquisition of drug resistance by acting on different pathways [9]. However, each mechanism of action may work in a complementary manner to elicit a robust anticancer effect [47].
This study presents an antitumor mechanism for HDACi, which are often thought to globally enhance gene transcription because they increase overall histone acetylation (Additional file 1: Fig. S2); however, approximately half of the genes with variable expression are negatively regulated, likely through the functions of nonhistone proteins [48]. However, in the case of the PRELP gene, HDACi positively affects PRELP gene expression, as it increases H2BK5 acetylation in its promoter region and activates gene transcription (Figs. 3, 4). We previously showed that PRELP gene overexpression inhibits cancer progression by blocking TGF-β and EGF pathways, reversing EMT, activating cell adhesion, and inhibiting various oncogenic pathways [20]. Then, do HDAC inhibitors also cause EMT reversal? Indeed, Tang et al. and Zhao et al. have identified multiple class I HDAC inhibitors that cause EMT reversal, consistent with our results of PRELP expression induced by class I HDAC inhibitors [49,50]. The fact that PRELP expression does not affect the expression levels of HDAC1, 2 (Additional file 1: Fig. S7) is consistent with the view that the PRELP gene is activated following acetylation of H2BK5 and orchestrates the EMT program in bladder cancer cells. Of note, the PRELP gene is strongly repressed in the majority of early-stage epithelial cancers [20]. Therefore, it will be interesting to test whether HDACi alleviates the repression of PRELP gene expression via acetylation of H2BK5 in various other epithelial cancers.
Recent data suggest that H2BK5ac is a reliable predictor of gene expression [51] and an important modifier in the orchestration of EMT programs. Mechanistically, MAP3K4-regulated chromatin modifiers CBP and HDAC6 each regulate thousands of genes during EMT by controlling promoter acetylation of H2BK5 [52,53]. Although the increase in acetylation levels of H2BK5 was the same in the two cell lines, the antiproliferative effects of PRELP overexpression (Fig. 2C, D) and restoration of PRELP gene expression by HDACi (Fig. 3A, B) were more pronounced in RT4 cells than in J82 cells. On the other hand, the combination of SAHA with cisplatin was much more effective in J82 cells than in RT4 cells (Fig. 5). These results suggest that the acetylation of H2BK5 may have different effects on open chromatinization and subsequent recruitment of transcription factors and on the phenotypic output of the cells, depending on the cell.
However, we would also like to clarify that the data presented in this study are insufficient to explain the molecular basis of these findings. First, we did not demonstrate whether the acetylation of H2BK5 is directly involved in the expression of the PRELP gene. Second, PRELP is a secreted ECM protein; therefore, it is unclear whether it has a direct role in the orchestration of the EMT program. Recent proteomic studies have suggested that PRELP interacts with two growth factor receptors, the insulin-like growth factor I receptor (IGFI-R) and the low-affinity nerve growth factor receptor (p75NTR) [54], and further showed that SAHA treatment enriched endogenous PRELP protein on the membrane fraction (Additional file 1: Fig. S8), supporting our hypothesis. Nevertheless, there is a need to elucidate the mechanism of the anticancer effects associated with the administration of PRELP as an ECM protein.
The clinical application of HDACi in solid tumors has been largely disappointing, mainly due to limited combination chemotherapy studies and a lack of patient stratification. In contrast, it has been reported that treatment with HDACi causes hyperacetylation of histones and relaxation of chromatin structure, leading to efficient DNA damage and cell death when cells are treated with DNA-interacting drugs such as cisplatin. Indeed, in some preclinical and clinical settings, the HDACi pretreatment approach has been reported to allow the use of lower doses of chemotherapeutic agents, consistent with the above explanation [55]. Although elevated PRELP gene expression during HDACi pretreatment may help determine the dose of HDACi with DNA-interacting chemotherapeutic agents needed to achieve a better therapeutic effect, further validation using animal studies is needed. Thus, activation of the PRELP gene with the relaxation of chromatin structure may be a good biomarker for the combination of HDACi and chemotherapy.
Conclusions
This study revealed that HDACi promotes the acetylation of H2BK5 leading to PRELP mRNA expression. We also found that the acetylation of H2BK5 in the promoter region of the PRELP gene was associated with the restoration of PRELP gene expression. Thus, the activation of PRELP is an indicator of anticancer activity associated with changes in chromatin structure accompanying histone acetylation and may be a useful biomarker in combination strategies using HDACi and chemotherapy.
Database analysis
RNA-seq, DNA promoter methylation, DNA copy number, gene mutation, and clinical data of 412 patients with MIBC in the Cancer Genome Atlas (TCGA) cohort [21] were sourced from the cBioPortal (http://www.cbioportal.org/) for cancer genomics [56,57]. The individual data used to generate the graphs are listed in Additional file 2: Tables S1-S3.
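As a sketch of how the expression-methylation comparison of Fig. 1E can be reproduced from such a download, the following few lines compute a Spearman correlation; the file name and column names are hypothetical placeholders, not the actual cBioPortal headers.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-sample table assembled from the cBioPortal downloads
df = pd.read_csv("tcga_blca_prelp.csv")
rho, p = spearmanr(df["PRELP_methylation_beta"],   # promoter methylation (beta-value)
                   df["PRELP_mrna_expression"],    # RNA-seq expression value
                   nan_policy="omit")
print(f"Spearman r = {rho:.2f}, p = {p:.3g}")
```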
Plasmids and siRNAs
The lentiviral packaging plasmids pMD2.G (#12259) and psPAX2 (#12260) were obtained from Addgene (Watertown, MA, USA). To generate lentiviral vectors for conditional PRELP gene expression, a modified vector was constructed using Edit-R inducible lentiviral hEF1a-Blast-Cas9 nuclease plasmid DNA (CAS11229, GE Healthcare, Chicago, IL, USA) as the backbone. To generate a unique restriction site, the NheI restriction site was mutated immediately downstream of the hEF1 promoter region using the Gibson Assembly System (E2611, New England BioLabs, Ipswich, MA, USA). The expression construct of PRELP-myc was derived from the pCS2-PRELP-myc vector [20]. The PRELP-myc cDNA was PCR-amplified with the following primers: PRELP-myc_F_NheI: 5′-ACC CAA GCT GGC TAG CCA CCA TGA GGT CAC CCC TCT GCT G-3′, PRELP-myc_R_NotI: 5′-CAG CAC AGT GGC GGC CGC TCG AGT CTA GAC TAT AGT TCT AGA GGC TCG A-3′, and cloned into the modified Edit-R inducible lentiviral plasmid at NheI and NotI sites. All plasmids were verified by Sanger sequencing.
Conditional protein expression of PRELP
A lentivirus transduction system was used to induce the conditional expression of PRELP. To produce lentiviruses, the viral vectors and packaging plasmids were co-transfected into 293T cells using Lipofectamine 3000 (L3000-008, Thermo Fisher Scientific, Waltham, MA, USA), according to the manufacturer's instructions. After 48 h, the cell culture medium containing lentiviruses (for conditional PRELP-myc expression) was collected and filtered through a 0.45-μm filter. Target cell lines were plated in 24-well plates and cultured with a lentivirus-containing medium for 3 days in the absence of polybrene. PRELP-myc-expressing cells were selected with blasticidin S (10 μg/mL) (029-18701, Fujifilm Wako Pure Chemical Co., Osaka, Japan). Conditional expression was induced by the addition of 1 μg/mL of doxycycline (DOX) (D9891, Sigma-Aldrich, St. Louis, MO, USA).
Cell viability assay
For the cell proliferation assays, following overexpression of PRELP, the cells were plated in 96-well plates at 500 cells/well for RT4 and J82 cell lines. Culture media with the respective treatment reagents were replaced every 3 days. For experiments on the combined effects of cisplatin and HDACi, cells were plated in 96-well plates at 1000 and 1500 cells/well for RT4 and J82, respectively. After 24 h, the inhibitors were added at the indicated concentrations and incubated for 48 h. At the indicated time points, 10 μl of Cell Counting Kit-8 (343-07623, Dojindo, Kumamoto, Japan) reagent was added to each well. After 2 h of reaction, cell viability was analyzed by measuring the absorbance at 450 nm using Multiskan FC (Thermo Fisher Scientific, Waltham, MA, USA).
Synergism determination
To assess the combined effects of SAHA or entinostat with cisplatin, cell viability assay data were converted to a fraction of growth inhibition by each drug alone or by the drug combinations. Isobologram analysis was performed using CompuSyn software (v1, ComboSyn, Inc., Paramus, NJ, USA), which enabled the calculation of a combination index (CI) according to the Chou-Talalay CI-isobologram theorem [59]. The CI indicates synergism at less than 1.0, antagonism at greater than 1.0, and an additive effect at 1.0.
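For readers without access to CompuSyn, the underlying median-effect arithmetic can be sketched in a few lines. This is a simplified illustration with hypothetical dose-response values; it is not the software's exact fitting procedure.

```python
import numpy as np

def median_effect_fit(doses, fa):
    """Fit the median-effect equation log(fa/fu) = m*log(D) - m*log(Dm)
    by least squares; returns (m, Dm)."""
    doses, fa = np.asarray(doses, float), np.asarray(fa, float)
    fu = 1.0 - fa
    m, intercept = np.polyfit(np.log(doses), np.log(fa / fu), 1)
    dm = np.exp(-intercept / m)
    return m, dm

def combination_index(d1, d2, fa_combo, fit1, fit2):
    """Chou-Talalay CI for a combination (d1, d2) producing effect fa_combo."""
    (m1, dm1), (m2, dm2) = fit1, fit2
    dx1 = dm1 * (fa_combo / (1.0 - fa_combo)) ** (1.0 / m1)  # dose of drug 1 alone giving fa
    dx2 = dm2 * (fa_combo / (1.0 - fa_combo)) ** (1.0 / m2)  # dose of drug 2 alone giving fa
    return d1 / dx1 + d2 / dx2

# Hypothetical single-agent dose-response data (fraction affected)
fit_saha = median_effect_fit([0.5, 1, 2.5, 5, 10], [0.10, 0.18, 0.35, 0.55, 0.75])
fit_cis = median_effect_fit([0.5, 1, 2, 4, 8], [0.08, 0.15, 0.30, 0.50, 0.70])
print(combination_index(2.5, 2.0, 0.60, fit_saha, fit_cis))  # CI < 1 indicates synergism
```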
Statistical analysis
Statistical analyses were performed using GraphPad Prism (v7, GraphPad Software, San Diego, CA, USA). P values are indicated in the figures and figure legends.
Cumulants of products of Normally distributed random variables
To find moments of various estimators related to Autoregressive models of Statistics, one first needs the cumulants of products of two Normally distributed random variables. The purpose of this article is to derive the corresponding formulas, and extend them to products of three or more such variables.
Introduction
The formulas presented in this article are crucial for finding moments (and, subsequently, approximate distributions) of various parameter estimators related to AR(k) models (see [1] and [2]).
Multivariate Normal distribution
Assume that $X_1, X_2, \dots$ are centralized (having zero mean), Normally distributed random variables. Their moment-generating function is given by
$$M(t_1, t_2, \dots) = \exp\Big(\tfrac{1}{2}\,\mathbf{t}^{\mathsf{T}} V\,\mathbf{t}\Big) \qquad (1)$$
where $V$ is the corresponding variance/covariance matrix. Based on (1), we can easily find the expected value of a product of any number of such random variables (which defines the corresponding moment), getting zero when this number is odd, and
$$E\big(X_{i_1} X_{i_2} \cdots X_{i_{2k}}\big) = \sum_{i \in P_{2k}} V_{i_1 i_2}\, V_{i_3 i_4} \cdots V_{i_{2k-1} i_{2k}} \qquad (2)$$
when this number is even. The summation is over all possible ways of dividing the $2k$ indices into $k$ unordered pairs $\{i_1, i_2\}, \{i_3, i_4\}, \dots$ Notationally (and rather symbolically), this has been indicated by $i \in P_{2k}$, where $i$ represents the $\{i_1, i_2\}, \{i_3, i_4\}, \dots$ indices and $P_{2k}$ the set of all such selections.
Example. For four variables, (2) yields
$$E(X_1 X_2 X_3 X_4) = V_{12} V_{34} + V_{13} V_{24} + V_{14} V_{23} \qquad (3)$$
Note that the resulting formulas are fully general; (3) can be read as $E(X_a X_b X_c X_d) = V_{ab} V_{cd} + V_{ac} V_{bd} + V_{ad} V_{bc}$, but using specific integers simplifies the notation; also, the formulas allow any duplication of indices, e.g. $E(X_1^2 X_2^2) = V_{11} V_{22} + 2 V_{12}^2$.
It is important to realize that, for random variables with zero mean (the variables of this section), there is no difference between simple and central moments; our µ thus stands for either.
Joint cumulants
Consider a collection of random variables (not necessarily centralized, nor Normally distributed), say $Y_1, Y_2, \dots$, and their joint moment-generating function, defined by
$$M(t_1, t_2, \dots) = E\big[\exp(t_1 Y_1 + t_2 Y_2 + \cdots)\big] \qquad (4)$$
The corresponding joint cumulant of $Y_1, Y_2, \dots, Y_\ell$ is defined by
$$\kappa_{1,2,\dots,\ell} = \frac{\partial^{\ell} \ln M(t_1, \dots, t_\ell)}{\partial t_1\, \partial t_2 \cdots \partial t_\ell}\bigg|_{t_1 = \cdots = t_\ell = 0} \qquad (5)$$
It is well known (and easy to derive) that $\kappa_i = E(Y_i)$, and that all higher-order cumulants can be expressed in terms of moments (of the $Y$ variables) thus:
$$\kappa_{1,2,\dots,\ell} = \sum_{\{j_1, j_2, \dots, j_m\} \in A(1,2,\dots,\ell)} (-1)^{m-1}\,(m-1)!\;\mu_{j_1}\,\mu_{j_2} \cdots \mu_{j_m} \qquad (6)$$
where $A(1, 2, \dots \ell)$ is the collection of all partitions of the $1, 2, \dots \ell$ indices, $m$ is the number of subsets in a given partition, and $\mu_{j}$ is the joint moment of the $Y$ variables whose indices belong to the subset $j$. A partition is a division of a set into an arbitrary number of non-empty and non-overlapping subsets (these are denoted $j_1, j_2, \dots$). There are two points to make about (6):
• the formula is correct regardless of whether the moments are central or simple (from now on, we denote central moments by $\mu_j$ and simple moments by $\tilde{\mu}_j$, respectively),
• using central moments simplifies the RHS substantially - any partition containing a single index can be omitted (the corresponding $\mu_j$ is zero).
We will spell out explicitly the first few of these formulas, first using simple moments and then using central moments (note the simplification). To continue, we consider only the special case of having all indices identical (the corresponding general formulas get too lengthy).
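For reference, the lowest-order instances of (6) are the familiar identities (displayed here for illustration). In terms of simple moments,
$$\kappa_{1,2} = \tilde{\mu}_{1,2} - \tilde{\mu}_1 \tilde{\mu}_2, \qquad \kappa_{1,2,3} = \tilde{\mu}_{1,2,3} - \tilde{\mu}_{1,2}\tilde{\mu}_3 - \tilde{\mu}_{1,3}\tilde{\mu}_2 - \tilde{\mu}_{2,3}\tilde{\mu}_1 + 2\,\tilde{\mu}_1\tilde{\mu}_2\tilde{\mu}_3,$$
while in the identical-index case, written with central moments, they reduce to
$$\kappa_2 = \mu_2, \qquad \kappa_3 = \mu_3, \qquad \kappa_4 = \mu_4 - 3\mu_2^2, \qquad \kappa_5 = \mu_5 - 10\,\mu_3\mu_2, \qquad \kappa_6 = \mu_6 - 15\,\mu_4\mu_2 - 10\,\mu_3^2 + 30\,\mu_2^3.$$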
We now proceed to derive explicit formulas for these cumulants when some of the Y variables are centralized, Normally distributed (the X's of the previous section - we call them 'singlets'), and the others are products of two such X's (doublets). This poses a bit of a notational challenge; we will use simple indices for singlets, two indices in parentheses for doublets. For example, $\kappa_{1,2,(3,4)}$ indicates a third-order cumulant of three random variables, $X_1$, $X_2$ and $X_3 X_4$.
Cumulants involving singlets and/or doublets
It is well known and easy to derive, by combining (1) and (5), that all cumulants involving only singlets are equal to zero, with the exception of the second-order one, $\kappa_{i,j} = E(X_i X_j) = V_{i,j}$. For doublets, one can derive (by combining (6) and (2), and using a routine, 'brute-force' computation - see the Appendix) that
$$\kappa_{(i_1,i_2),(i_3,i_4),\dots,(i_{2k-1},i_{2k})} = \sum_{\pi \in \tilde{P}_{2k}}\; \prod_{\{a,b\} \in \pi} V_{a b} \qquad (8)$$
where $\tilde{P}_{2k}$ is the subset of $P_{2k}$ consisting of those pairings in which no pair is taken from within a single doublet and the $k$ resulting covariances link all $k$ doublets into a single closed chain. One can thus see that the number of terms on the RHS of (8) is $(k-1)! \times 2^{k-1}$, which equals 2, 8, 48 when $k = 2$, 3 and 4 respectively (a fast-growing sequence).
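Written out explicitly, and consistent with the term counts just quoted, the two smallest cases read
$$\kappa_{(1,2),(3,4)} = V_{13}V_{24} + V_{14}V_{23},$$
$$\kappa_{(1,2),(3,4),(5,6)} = V_{13}V_{45}V_{26} + V_{13}V_{46}V_{25} + V_{14}V_{35}V_{26} + V_{14}V_{36}V_{25} + V_{23}V_{45}V_{16} + V_{23}V_{46}V_{15} + V_{24}V_{35}V_{16} + V_{24}V_{36}V_{15},$$
with 2 and 8 terms respectively, each term chaining the doublets into one closed loop of covariances.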
Mixed cases
Let us now investigate cumulants with a mixture of singlets and doublets. The rules for computing these prove to be quite simple. A cumulant involving
• one singlet (regardless of the number of doublets) equals zero,
• more than two singlets (and any number of doublets) is also equal to zero.
• When a cumulant contains two singlets, it equals the cumulant in which the two singlets are replaced by one doublet, i.e. $\kappa_{1,2,(3,4),\dots} = \kappa_{(1,2),(3,4),\dots}$.
Beyond doublets
The same approach enables us to develop formulas for cumulants which may also involve triplets, quadruplets, etc. These are not needed when dealing with parameter estimation related to AR(k) models, but may have a potential application elsewhere. Thus, for example, odd-order cumulants involving only triplets are all equal to zero; for even orders we get, following (7), expressions in terms of products of covariances. Note that the first of these cumulants turns out to be a sum of 15 terms of the type $V_{\cdot\cdot}V_{\cdot\cdot}V_{\cdot\cdot}$, while the second one already has 9720 terms!
Conjecture 1. It seems that each cumulant with all indices distinct (regardless of how they are grouped) can always be expressed in the form of the RHS of (8), where $\tilde{P}_{2k}$ is a specific subset of $P_{2k}$ (and $2k$ is the total number of indices, which must be even for all non-zero cumulants).
Appendix
For readers familiar with Mathematica, we now supply a few Mathematica functions to facilitate the computation of cumulants involving centralized, Normally distributed random variables and their products.
Finding moments
Computing the mean value (MV) of a product of centralized, Normally distributed random variables can be achieved by: To find the fifth-order cumulant of the random variables $X_3$, $X_1X_3$, $X_1X_3$, $X_1X_2X_3$ and $X_1X_2X_3^2$, one has to type (to simplify the answer, we have assumed that the X's are standardized, i.e. have a mean of zero and the variance equal to 1): where $C_{i,j}$ is the correlation coefficient between $X_i$ and $X_j$. Note that
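As an illustration of what such a moment-computing helper does, here is a minimal Python equivalent (not the authors' Mathematica code) that implements (2) directly by summing products of covariances over all pair partitions of the requested indices.

```python
def pair_partitions(indices):
    """Yield every way of splitting an even-length list of indices into unordered pairs."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for j, partner in enumerate(rest):
        remaining = rest[:j] + rest[j + 1:]
        for tail in pair_partitions(remaining):
            yield [(first, partner)] + tail

def gaussian_product_moment(indices, C):
    """E[X_{i1} * ... * X_{i2k}] for zero-mean jointly Gaussian X's with covariance
    matrix C (formula (2)); returns 0 for an odd number of factors."""
    if len(indices) % 2 == 1:
        return 0.0
    total = 0.0
    for pairing in pair_partitions(list(indices)):
        prod = 1.0
        for a, b in pairing:
            prod *= C[a][b]
        total += prod
    return total

# Example: E[X0*X1*X2*X3] = C01*C23 + C02*C13 + C03*C12
C = [[1.0, 0.3, 0.2, 0.1],
     [0.3, 1.0, 0.4, 0.2],
     [0.2, 0.4, 1.0, 0.5],
     [0.1, 0.2, 0.5, 1.0]]
print(gaussian_product_moment([0, 1, 2, 3], C))
```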
DEM- and GIS-Based Analysis of Soil Erosion Depth Using Machine Learning
: Soil erosion is a form of land degradation. It is the process of moving surface soil with the action of external forces such as wind or water. Tillage also causes soil erosion. As outlined by the United Nations Sustainable Development Goal (UN SDG) #15, it is a global challenge to “combat desertification, and halt and reverse land degradation and halt biodiversity loss.” In order to advance this goal, we studied and modeled the soil erosion depth of a typical watershed in Taiwan using 26 morphometric factors derived from a digital elevation model (DEM) and 10 environmental factors. Feature selection was performed using the Boruta algorithm to determine 15 factors with confirmed importance and one tentative factor. Then, machine learning models, including the random forest (RF) and gradient boosting machine (GBM), were used to create prediction models validated by erosion pin measurements. The results show that GBM, coupled with 15 important factors (confirmed), achieved the best result in the context of root mean square error (RMSE) and Nash–Sutcliffe efficiency (NSE). Finally, we present the maps of soil erosion depth using the two machine learning models. The maps are useful for conservation planning and mitigating future soil erosion.
Introduction
The United Nations General Assembly adopted 17 sustainable development goals (SDGs) in September 2015, which apply to all countries on the planet. Soil science is intertwined with a number of the SDGs. Among them, soils especially play an essential role in SDGs 2, 3, 6, 7, 12-15 [1].
Soil erosion is a form of land degradation and a severe threat to sustainable development. It is the process of moving surface soil with the action of external forces such as wind or water. Tillage also causes soil erosion. Among them, water erosion is the most tangible form of soil erosion in Taiwan. Soil erosion and sediment movement caused by rainfall and flooding, intense and persistent winds, agricultural activities, grazing, logging, mining, and construction result in significant damage to properties and potentially result in loss of lives, not to mention the livelihood support the land provides for communities. Therefore, it is a global challenge by 2030 to "combat desertification, and halt and reverse land degradation and halt biodiversity loss," as outlined by SDG 15. Although the soil erosion process may seem to be slow at times, it dramatically impacts soil fertility, agriculture, and the ecosystem. Globally, it is estimated that the average soil erosion from agriculture is 75 billion tons/year ([2,3], as cited in [4]). Other scholars point out that about 85% of the 2 billion hectares of worldwide surface soil degradation stem from wind and water erosion ([5], based on [6,7]). The economic costs of erosion and sedimentation are substantial. For example, the cost of removing sediments alone could be somewhere between USD 7 and USD 68 per cubic yard (or USD 9.16-88.94 per cubic meter) in the US ([8], as cited in [9]). In Iran, the economic costs associated with soil erosion are thought to be around 10 trillion rials or USD 23,750,148 ([10], as cited in [11]). As a result, soil erosion modeling is critical to understanding soil erosion processes and preventing future soil loss.
Materials
Shihmen Reservoir watershed is located in northern Taiwan, which plays a crucial role in the metropolitan and irrigation areas of Taipei and Taoyuan [14]. It is also the third-largest reservoir in Taiwan. Typhoons bring the majority of the annual rainfall of 2350 mm to the Shihmen Reservoir watershed between May and October [24].
Environmental Factors and Erosion Pin Measurements
The 10 environmental factors (or parameters, or features, or variables, or attributes) examined in this study are main subwatershed, distance to river, distance to road, type of slope, slope direction, rainfall amount, lithology, epoch, elevation, and slope class. Environmental factors were obtained from various GIS sources such as land use/land cover maps, geological maps, river maps, and road system maps. These factors and four additional factors (% sand, % silt, % clay, and % organic) were previously analyzed in Nguyen et al. [22]. However, the four additional factors were removed from this study because they were point data and could not be directly mapped to the entire study area (watershed). We used morphometric factors to replace the point data.
The erosion pin data used in this study came from field surveys conducted over three years (September 2008 to October 2011). The erosion pins were mounted on 55 slopes in 17 of the 93 subwatersheds of the study area ( Figure 1). Each slope had 10 erosion pins mounted, and the average value of the 10 pins represents the slope's erosion depth [25]. The measurements of erosion pins were taken with a caliper, as shown in Figure 2.
Morphometric Factors
Morphometric analysis is the "quantitative description and analysis of landforms as practiced in geomorphology that may be applied to a particular kind of landform or to drainage basins and large regions generally" [26]. It is a technique for determining the scale and shape of watersheds, including two types of descriptive numbers: linear scale measurements and dimensionless numbers [27]. This approach can quantify the erosional growth of streams and their drainage watersheds, and compare geomorphic characteristics [28,29].
For this study, the Shihmen Reservoir watershed was divided into 93 subwatersheds to calculate the morphometric factors (or parameters, or features, or variables, or attributes) using the Central Geological Survey (CGS) DEM of Taiwan (10 m resolution) and ArcGIS 10.4.1. First, the DEM was filled in order to create flow paths and flow accumulations. Then, the stream networks were generated based on the flow accumulations of individual cells with a threshold value of 500. Finally, ArcGIS's Stream Link and Watershed functions were used to construct the subwatershed polygons. A total of 26 morphometric factors were calculated and described below (also see Table 1).
Subwatershed area (A) is the total area of a subwatershed. It ranged from 2.88 km 2 to 26.84 km 2 in this study. Research has indicated that total runoff or sediment yield is primarily determined by the subwatershed area [27].
Subwatershed perimeter (P) is the length of the boundary that surrounds a subwatershed. Its value varied between 10.70 and 37.29 km in the study area.
Stream order (U) indicates the complexity of a stream drainage system. The trunk river has the highest stream order and defines the order of a subwatershed [28]. An example of the stream order of a subwatershed is shown in Figure 3.
Number of streams (Nu) is the number of streams of a given stream order in a subwatershed. Figure 3 shows an example of the number of streams. The total number of streams (ΣN u ) is the summation of the number of streams of all orders.
Stream length (Lu) is the total channel length of a given stream order in this study for compatibility with the definition of the number of streams. It is not the cumulative channel length of a given order that includes all lesser orders, as sometimes defined [27]. The total stream length (ΣL u ) is the summation of the stream length of all orders.
Mean subwatershed slope (S) is the average slope of a subwatershed. It is calculated by the Slope function of ArcGIS and characterizes the steepness of a subwatershed.
Mean stream length (Lsm) is defined as the ratio between the stream length and the number of streams of a given stream order in a subwatershed in this study. We computed the average of the mean stream lengths as the characteristic mean stream length of the subwatershed.
Subwatershed length (Lb) in this study is defined as "the longest dimension of the basin parallel to the principal drainage line," as in the definition of relief ratio below [29]. The length is determined by ArcGIS 10.4.1.
Stream frequency (Fs) is the number of streams per unit area [28]. This value ranged from 0.47 to 2.46 streams/km 2 in this study.
Drainage density (Dd) is defined as the sum of the stream lengths divided by the subwatershed area. It is a crucial indicator of the linear scale of landform elements in a subwatershed [27].
Constant of channel maintenance (C) is defined as the inverse of drainage density. Along with drainage density, this value compares soil's erodibility or other factors influencing surface erosion [29]. Here, metric units were used, and the conversion factor of 5280 (from miles to feet) was ignored.
Length of overland flow (Lo) ranged from 0.32 km to 0.64 km in the study area. It is the length of runoff over the ground surface until it concentrates in definite stream channels and is half the reciprocal of drainage density [28].
Infiltration number (If) is the product of stream frequency and drainage density ( [30], as cited in [31]). This value ranged from 0.44 to 2.98 in this study.
Subwatershed relief (H) is the difference in elevations between the lowest (h min ) and the highest (h max ) points in a subwatershed.
Relief ratio (R) is "the ratio between the total relief of a basin" and "the longest dimension of the basin parallel to the principal drainage line" [29]. For the study area, the relief ratio varied from 0.07 to 0.57.
Melton index (M), or the ruggedness of a subwatershed, is characterized by the dimensionless ratio between the subwatershed relief and the square root of the subwatershed area [32].
Ruggedness number (Rn) is known as the dimensionless product of drainage density and relief. As a result, high drainage density and low relief areas are just as rugged as low drainage density and high relief areas ( [33], as cited in [34]).
Bifurcation ratio (Rb) is the average number of branchings or bifurcations of streams. It is defined as the number of streams of a given stream order to that of streams of the next higher order [28]. For a subwatershed, there are different bifurcation ratios for different stream orders. Following the example of Jothimani et al. [35], we computed the average of the bifurcation ratios as the characteristic bifurcation ratio of the subwatershed. For the 93 subwatersheds in the study area, the bifurcation ratio ranged from 0.50 to 8.00.
Stream length ratio (Rl) is defined by the average length of streams of a stream order to the next lower order [28]. Various stream length ratios exist for various stream orders. Therefore, we computed the average of the stream length ratios as the characteristic stream length ratio of the subwatershed, similar to Jothimani et al. [35]. For the 93 subwatersheds in the study area, the stream length ratio ranged from 0.46 to 5.86.
Ratio Rho (ρ) is the stream length ratio divided by the bifurcation ratio [28]. Elongation ratio (Re) is the ratio between the diameter of a circle with the same area as the subwatershed and the longest dimension of the subwatershed parallel to the main drainage line [29], as determined for the relief ratio.
Circularity ratio (Rc) is the circumference of a circle with the same area as the subwatershed divided by the subwatershed perimeter [29].
Form factor (Ff) is the ratio of the width to the length of a subwatershed and is defined as the subwatershed area divided by the square of the length of the subwatershed [36]. The subwatershed length is "measured from a point on the watershed-line opposite the head of the main stream" [36]. Here, we used subwatershed length (Lb) to be the length of the subwatershed.
Shape factor (Bs) is defined as the square of the length of a subwatershed divided by the area of the subwatershed, although other definitions have also been proposed ( [37], as cited in [38]). The length of a subwatershed is defined as "the longest dimension from the mouth to the opposite side." Here, we used the subwatershed length (Lb) to represent the length of the subwatershed.
Compactness or compactness coefficient (Cc) is the ratio of the perimeter of the subwatershed to that of a circle with an equal area [36].
Texture ratio is the ratio of the number of crenulations on the contour with the maximum number of crenulations within the subwatershed to the length of the perimeter of the subwatershed [39]. Crenulations are chosen because they indicate streams too small to be shown on a topographic map [27]. The ratio is a measure of channel spacing closeness and thus is related to drainage density. For ease of computation, we used the total number of streams to replace the crenulations in this study. The texture ratio ranged from 0.16 to 1.27.
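For readers who want to reproduce these measures, the short R sketch below collects several of the derived morphometric factors exactly as they are defined verbally above (not necessarily as they were computed in the original GIS workflow). All input names are illustrative placeholders for per-subwatershed quantities, not the actual variable names used in this study.
# Sketch of derived morphometric factors, following the verbal definitions above.
# Inputs are per-subwatershed quantities with illustrative names (km and km^2).
morphometric_factors <- function(area, perimeter, relief,
                                 total_stream_length, n_streams, basin_length) {
  Dd <- total_stream_length / area                  # drainage density (1/km)
  list(
    drainage_density     = Dd,
    stream_frequency     = n_streams / area,        # streams per km^2
    channel_maintenance  = 1 / Dd,                  # constant of channel maintenance
    overland_flow_length = 1 / (2 * Dd),            # length of overland flow
    infiltration_number  = (n_streams / area) * Dd,
    relief_ratio         = relief / basin_length,
    melton_index         = relief / sqrt(area),
    ruggedness_number    = Dd * relief,
    form_factor          = area / basin_length^2,
    shape_factor         = basin_length^2 / area,
    elongation_ratio     = 2 * sqrt(area / pi) / basin_length,
    circularity_ratio    = 2 * sqrt(pi * area) / perimeter,   # as defined in the text
    compactness          = perimeter / (2 * sqrt(pi * area)),
    texture_ratio        = n_streams / perimeter
  )
}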
Methods
This study had five objectives: first, to identify and collect morphometric factors and environmental factors that affect soil erosion; second, to use feature selection to identify critical factors that can be used to model soil erosion depths; third, to apply machine learning algorithms to create models that can be used to predict soil erosion depth in the study area; fourth, to assess the validity of the models using statistical indices and threefold cross-validation; fifth and finally, to produce prediction maps of soil erosion depth for the study area. Figure 4 depicts the five research steps of this analysis. First, we created an input dataset of 36 independent factors by combining 26 morphometric and 10 environmental factors. Second, we divided the dataset into three folds of roughly the same size based on the main subwatershed attribute to balance the class distribution from the five main subwatersheds [40]. We also used the erosion pin measurement as the target variable. Each time one of the three folds was held as the test data for testing the model, the remaining two folds were used as the training data. The whole process was repeated three times. Third, we applied the random forest (RF) and gradient boosting machine (GBM) to create erosion models based on the training data. Fourth, we assessed the models with the test data. In the process, we eliminated the unessential factors and kept the best models. Finally, we created the spatial distribution maps of soil erosion depth of the study area using the machine learning models.
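As a rough illustration of the second step, the R sketch below assigns each erosion pin record to one of three folds within each main subwatershed so that the class distribution over the five main subwatersheds is balanced. The file and column names (erosion_pins.csv, main_subwatershed) are assumptions made for illustration, not the actual data files of this study.
# Illustrative threefold split balanced over the main subwatersheds.
pins <- read.csv("erosion_pins.csv")   # assumed file of erosion pin sites and factors
set.seed(1)
pins$fold <- NA
for (sw in unique(pins$main_subwatershed)) {
  idx <- which(pins$main_subwatershed == sw)
  pins$fold[idx] <- sample(rep(1:3, length.out = length(idx)))
}
# In each of the three rounds, one fold is held out as test data and the
# remaining two folds are used to train the RF and GBM models.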
Feature Selection
In order to identify the key factors that will generate the most credible soil erosion models, we used feature selection to rank the 36 morphometric and environmental factors in the study. Specifically, the Boruta algorithm was used to select the subsets of factors (predictors) for ML model building.
Boruta is a feature selection algorithm and feature ranking tool based on the RF algorithm and introduced by Kursa et al. [41]. It works by creating a randomized copy of the input dataset, merging it with the original dataset, and constructing the expanded system's classifier. Then, Boruta compares the importance of the factors in the original dataset to those of the randomized factors to identify the key factors. Only factors with greater importance than the randomized factors are considered essential. The advantage of Boruta is that it allows researchers to choose the most significant factors that influence the outcome. For this study, the Boruta package in the R software was used, and the maximum number of times the algorithm was run (maxRun) was set to the default value of 1000.
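A minimal R sketch of this step is given below, assuming a training data frame (train_data) that holds the erosion pin measurement (erosion_depth) and the 36 candidate factors; the object names are illustrative, and maxRuns is set to the value reported above.
# Boruta feature selection (sketch); train_data and erosion_depth are assumed names.
library(Boruta)
set.seed(1)
boruta_out <- Boruta(erosion_depth ~ ., data = train_data, maxRuns = 1000)
print(boruta_out)                                           # confirmed / tentative / rejected
confirmed   <- getSelectedAttributes(boruta_out, withTentative = FALSE)
nonrejected <- getSelectedAttributes(boruta_out, withTentative = TRUE)
plot(boruta_out, las = 2, cex.axis = 0.6)                   # importance plot with shadow factors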
Machine Learning Models
In this analysis, two machine learning methods were used. They are the random forest and the gradient boosting machine.
Random forest (RF) was proposed by Breiman [42]. It is a supervised ML method that combines all tree-based results into the most appropriate model for the application. The RF algorithm runs many iterations and divides the training dataset (in terms of data and attributes) into many subsets at random to create many trees and produce better results than individual decision trees. The randomForest() package in the R software was used to implement random forest in this analysis, which uses the Gini index to separate data in order to minimize impurity at each node. Tsai et al. [23] provided a more detailed overview of the Gini index and random forest.
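A minimal regression sketch with the randomForest() package is shown below, under the same assumed data frame names used earlier; the hyperparameters shown are common defaults rather than values tuned in this study.
# Random forest regression sketch (train_data / test_data are assumed objects).
library(randomForest)
set.seed(1)
rf_model <- randomForest(erosion_depth ~ ., data = train_data,
                         ntree = 500, importance = TRUE)
rf_pred  <- predict(rf_model, newdata = test_data)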
Friedman [43] proposed the gradient boosting machine as a simple and highly flexible machine learning tool. It is a widely used machine learning algorithm that has been shown to be effective in a variety of applications [44][45][46]. The basic idea behind GBM is to build a prediction model using a set of poor learning algorithms, most commonly decision trees. Unlike RF, which produces an ensemble of individual trees in parallel, GBM creates a sequenced tree ensemble. The knowledge gained from previously grown trees is used to grow new trees in a sequential manner. The GBM model was once used to model soil erosion [22]. It was implemented in this study using R software's "gbm" package.
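The corresponding gbm sketch follows; the Gaussian distribution is used because the target (erosion depth) is continuous, and the remaining hyperparameters are illustrative choices, not the tuned values of this study.
# Gradient boosting machine sketch (train_data / test_data are assumed objects).
library(gbm)
set.seed(1)
gbm_model <- gbm(erosion_depth ~ ., data = train_data,
                 distribution = "gaussian", n.trees = 1000,
                 interaction.depth = 3, shrinkage = 0.01)
gbm_pred  <- predict(gbm_model, newdata = test_data, n.trees = 1000)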
Assessment of Models
In this study, the ML model performance was evaluated using two statistical indices, the root mean square error (RMSE) and the Nash-Sutcliffe efficiency (NSE), as shown in Equations (1) and (2):
\mathrm{RMSE} = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(P_i - O_i)^2} \quad (1)
\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{n}(O_i - P_i)^2}{\sum_{i=1}^{n}(O_i - \bar{O})^2} \quad (2)
where P_i is the predicted value, O_i is the observed value, \bar{O} is the mean observed value, and n is the number of observations. Of the two indices, RMSE was used to compare the difference between the expected values (model outputs) and the observed values (erosion pin measurements), while NSE was used to determine the effectiveness of the model against the average observed value [20][21][22].
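In R, the two indices translate directly from Equations (1) and (2); P and O below are vectors of predicted and observed erosion depths.
rmse <- function(P, O) sqrt(mean((P - O)^2))
nse  <- function(P, O) 1 - sum((O - P)^2) / sum((O - mean(O))^2)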
Results
In this analysis, we used R version 4.0.5. In order to assess soil erosion in the Shihmen Reservoir watershed, this study employed two machine learning models, RF and GBM. To replace the four factors that were available only as point data, 26 morphometric factors were added to the original dataset of 14 environmental factors. In total, 36 variables were examined for their relationship with soil erosion depth (erosion pin measurement). The training data (used to create the ML models) made up two folds of the dataset, while the remaining fold was used to evaluate the models based on RMSE and NSE. Finally, through spatial mapping, machine learning models were used to predict the soil erosion depth for the entire Shihmen Reservoir watershed.
Feature Selection
Boruta was used as a feature selection tool to assess the relative importance of variables that influence soil erosion. Table 2 and Figure 5 depict the findings. It can be seen that Table 2 was divided into three categories based on decisions: rejected, tentative, and confirmed. They are also ranked by median importance. In total, 15 factors were identified as important, which includes texture ratio, subwatershed length, epoch, elongation ratio, lithology, subwatershed perimeter, form factor, relief ratio, total stream length, Melton index, the total number of streams, elevation, shape factor, subwatershed area, and type of slope. One factor was considered tentative, i.e., the main subwatershed. Moreover, 20 variables were ruled out, which consist of distance to river, mean stream length, ruggedness number, slope direction, ratio Rho, circularity ratio, distance to road, stream length ratio, stream frequency, rainfall, compactness coefficient, stream order, constant of channel maintenance, drainage density, length of overland flow, infiltration number, slope class, subwatershed slope, bifurcation ratio, and subwatershed relief. They should play no important role in the prediction of soil erosion. According to the Boruta analysis, the type of slope, subwatershed area, and shape factor are the three most significant variables among the factors that are shown to be important.
Boruta generates a corresponding "shadow" factor for each factor, whose values were obtained by shuffling the original factor's values across objects. The system then classifies these using all of the extended system's factors and calculates the importance of each factor [47]. Green is used to color the 15 factors listed as important in Figure 5. The 20 rejected factors are colored red, while the one tentative factor is colored yellow. To differentiate the variables, Figure 5 also shows the minimum, mean, and maximum of shadow factors. In general, factors ranked higher than the shadow maximum have been tested to be more significant than chance. Among the green (important) factors, 4 are environmental factors, while the remaining 11 are morphometric factors. The percentage of the environmental factors in the confirmed group (4/15 = 27%) is slightly less than the overall percentage of the environmental factors in the dataset (10/36 = 28%). On the other hand, the environmental factors account for 100% of the tentative factor (1/1) and 25% (5/20) of the rejected factors. Furthermore, the environmental factors selected in the confirmed group are the type of slope, elevation, lithology, and epoch. Compared to the study by Nguyen et al. [22], which also reported the relative importance of environmental factors, we can see some similarities. The top four factors from Nguyen et al. [22] were slope direction, type of slope, % organic, and elevation. Two (type of slope and elevation) were also selected for this study, while one (slope direction) was not, and the other (% organic) was not included in this study because it is point data. It is worth noting that Nguyen et al. [22] used 70% training and 30% test data, while this study used threefold cross-validation.
Machine Learning
Based on the results of feature selection, we performed machine learning on three sets of factors separately: (1) all 36 factors, (2) 15 confirmed factors, and (3) 15 confirmed factors plus 1 tentative factor. Using threefold cross-validation in each set of factors, the dataset was divided into three, roughly equal folds. Then, two folds were used as the training data, and the other fold was used as the test data. The process was repeated three times so that every fold was used as the test data in the analysis. Both RF and GBM were used to analyze the same data. Finally, the results (RMSE and NSE) of three attempts were averaged. They are shown in Table 3 and Figure 6.
Table 3. Performance comparison of machine learning models using threefold cross-validation.
The findings (Table 3) reveal that the ML models delivered good results. Both the average values of RMSE and NSE in Table 3 exhibit the same trend. The smaller the RMSE and the higher the NSE were, the better the model was. As shown in Figure 6, GBM consistently outperforms RF in both training data and test data. GBM also edges out RF in all three datasets that used different factors (all, confirmed, and nonrejected). For the training data, the best RF model result was obtained with the all-factor group, followed by the confirmed group and then the nonrejected group. However, for the test data, the confirmed group is the best, followed by the nonrejected group and then the all-factor group. This shows that the RF models were overfitted with more factors, and that feature selection indeed contributes to improving the ML models when facing unknown data.
On the other hand, the GBM model does not exhibit an overfit bias. For both the training and test data, the confirmed group is the best, followed by the nonrejected group and then the all-factor group.
Overall, the best test result obtained in this study is 1.50 mm/yr (GBM) and 1.91 mm/yr (RF). Both of them are from the confirmed group. Compared to the previous study [22], which used a 70/30 split and only 14 environmental factors, the results are mixed. In terms of RF, the Nguyen et al. [22] result was 1.75 mm/yr, which is better than the current study (1.91 mm/yr). However, in terms of GBM, the present study (1.50 mm/yr) is better than the previous study (1.72 mm/yr). If we only consider the best model, which is GBM in this case, this study is better than the previous study.
Model Prediction
Using the RF and GBM models, we predicted the soil erosion depth of the entire study area, as shown in Figure 7. The data of the whole Shihmen Reservoir watershed were investigated and then entered into the R software after the preparation of the machine learning models for predicting the soil erosion depth. The results were transferred to the ArcGIS software to create the soil erosion depth maps. Figure 7 showed the spatial distribution of soil erosion depth (in mm/yr) over the Shihmen Reservoir watershed produced by each model's three sets of factors: all, confirmed, and nonrejected. The red area represents a high erosion depth, whereas the blue area has a low erosion depth. Due to the morphometric factors used in the ML models, it is clear that the individual subwatershed has a significant impact on the soil erosion depth distribution. Figure 7 shows that the all-factor group's maps (a and b) have more variance within individual subwatersheds than the confirmed group's (c and d) and the nonrejected group's maps (e and f). This is most likely due to the fact that there are more variables used in the mapping of all factors (36). The confirmed and nonrejected maps, on the other hand, appear to be more uniform in color throughout each subwatershed. They both have a similar appearance because they used a similar number of variables (16 and 19).
The minimum, mean, and maximum erosion depths expected for the entire Shihmen Reservoir watershed from different model/factor combinations are shown in Table 4. The table also includes field measurements of erosion pins for comparison. The table shows that the averages of various model/factor combinations are quite similar to the average of erosion pins. However, no model/factor combination accurately forecasts the extreme values of real-world measurements. The predictions are too high for the minimum value and too low for the maximum value.
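For reference, the mapping step described above reduces to applying a fitted model to the factor table of every mapping unit in the watershed and exporting the result for ArcGIS. The sketch below assumes a fitted GBM model (gbm_model, as in the earlier sketch) and an illustrative file name for the watershed-wide factor table.
# Predict erosion depth (mm/yr) for every mapping unit and export for ArcGIS.
watershed <- read.csv("shihmen_watershed_factors.csv")       # assumed factor table
watershed$erosion_mm_yr <- predict(gbm_model, newdata = watershed, n.trees = 1000)
write.csv(watershed, "predicted_erosion_depth.csv", row.names = FALSE)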
Discussion
This study continues to model the soil erosion depth as measured by erosion pins in the Shihmen Reservoir watershed because of the watershed's significance and the degree to which it is affected by soil erosion [20][21][22]. Since the morphometric features of a watershed influence surface runoff and water erosion, they were included in this research to create a complete picture of the erosion activity in the study region and to improve the ML models. However, due to the overlapping (and sometimes conflicting) nature of some of the morphometric features, the overwhelming number of factors extracted from the morphometric analysis may be a deterrent to further analysis. As a result, feature selection was performed in this study before machine learning modeling. The widely used Boruta algorithm was used to separate the important from the nonimportant factors. In the end, 11 morphometric factors were identified as influential in estimating the soil erosion depth. They are texture ratio, subwatershed length, elongation ratio, subwatershed perimeter, form factor, relief ratio, total stream length, Melton index, total number of streams, shape factor, and subwatershed area. Overall, the morphometric factors were chosen in 42 percent (=11/26) of the cases. On the other hand, only four environmental factors (slope type, elevation, lithology, and epoch) were chosen as important. They account for 40% (=4/10) of the overall environmental factors.
Note that the point data (% sand, % silt, % clay, and % organic) in the original 14 environmental factors had to be removed because they were not available watershed-wide and could not be used for model prediction of the entire Shihmen Reservoir watershed. Therefore, the lower selection rate of the environmental factors compared to the morphometric factors in this study could be attributed to the removal of these point data, because some of them were shown to be important in the previous study [22].
Another aspect that distinguishes this study from the previous studies [20][21][22] is the use of threefold cross-validation instead of the 70/30 split with stratified random sampling. The threefold cross-validation divides the dataset into three roughly equal folds with a balanced class distribution. Therefore, each class (stratum) is adequately represented, as with the stratified random sampling. However, in the threefold cross-validation, two folds were used as the training data, and the third fold was used as the test data. The procedure was replicated three times so that the algorithm takes turns using two-thirds of the data as the training data, and each fold was used as the test data only once. The 70/30 split with stratified random sampling, on the other hand, did not rotate the training and test data. To find the average answer, the 70/30 split had to be repeated three times from the beginning using different random seeds.
Regardless of which set of factors was used (all, confirmed, or nonrejected), our analysis shows that GBM consistently outperforms RF in terms of RMSE and NSE. As compared to the previous study [22], the best RMSE value was noticeably reduced from 1.72 mm/yr to 1.50 mm/yr (GBM with confirmed factors). This demonstrates that, despite the elimination of potentially valuable point data, the inclusion of morphometric factors improves the soil erosion modeling.
Additionally, unlike the previous study that used point data [22], this study does not need to interpolate the modeling prediction for the entire research area. Instead, complete maps of the spatial distribution of soil erosion depth can be produced from the ML models directly. The resulting maps (Figure 7) show finer resolution of change with more features in color variation. There is densely packed information not present in the previous maps. It is a huge step forward for soil erosion control and prioritization.
Conclusions
To sum up, previous studies built machine learning models for the Shihmen Reservoir watershed using point data that were only available at individual slopes monitored with erosion pins. The current research improved upon past studies by incorporating new independent variables (morphometric factors) derived from the watershed digital elevation model and eliminating the dependence on the point data. A dataset of 36 predictive factors and one target factor was created. Feature selection was performed to remove redundant factors and to avoid the overfitting of models. In the end, 15 important factors were identified that include 4 environmental factors and 11 morphometric factors. Two ML algorithms, RF and GBM, were used in the analysis. Despite the removal of four environmental factors used in previous studies (point data that were not available watershed-wide), the new GBM model in this study shows an improvement in RMSE, which was reduced from 1.72 mm/yr to 1.50 mm/yr. Consequently, we were able to create the most accurate ML model to date of the distribution of soil erosion depth in the study area. This proves the value of adding morphometric factors to soil erosion analysis. Furthermore, the ML models were used to create prediction maps of soil erosion depth of the entire Shihmen Reservoir watershed, which were not possible, and only interpolation approximation was achieved previously (due to the point data issue). The new maps show great details of what needs attention for soil erosion control and prioritization. It is a valuable advancement of our understanding and future study of soil erosion modeling. Since the ML models are data-driven and rely on sufficient monitoring data, it is crucial to improve our data collection methods and use the latest technologies to record information. Solar-powered Internet of Things (IoT) devices that can monitor the change of slope surfaces are currently being experimented with in the Shihmen Reservoir watershed. The inexpensive and large amount of data generated by these devices will likely be the key driver for future research on this topic.
Author Contributions: Conceptualization, Walter Chen; data curation, Kieu Anh Nguyen and Walter Chen; formal analysis, Kieu Anh Nguyen and Walter Chen; funding acquisition, Walter Chen; investigation, Walter Chen; methodology, Walter Chen; project administration, Walter Chen; resources, Walter Chen; software, Kieu Anh Nguyen and Walter Chen; supervision, Walter Chen; validation, Walter Chen; visualization, Kieu Anh Nguyen and Walter Chen; writing-original draft preparation, Kieu Anh Nguyen and Walter Chen; writing-review and editing, Kieu Anh Nguyen and Walter Chen. All authors have read and agreed to the published version of the manuscript. Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest.
“Towards a Beautiful Country”: The Nationalist Project to Transform Japan
Japan is often regarded by scholarship as an example of what a healthy East Asian liberal democracy ought to look like. Despite its reputation for pacifism and liberal democracy, Japan has demonstrated a remarkable shift in political culture in the last decade, as successive governments have embraced decidedly nationalist policy choices. As the Abe Administration continues to push ahead with its plan for Constitutional Revision, a goal long advocated for by nationalist groups, Japan seems poised to enter a period of renewed nationalist discourses and policymaking. Existing scholarship presents these shifting political trends as having been facilitated by the political elite, and many scholars argue that elite driven, or top-down nationalism, is the driving force of political change in the modern Japanese political system. This paper challenges these assertions, instead arguing that resurgent nationalism in Japanese politics can be traced to the grassroots of society. Through a study of two non-government organizations, Nippon Kaigi 日本会議and Jinja Honchō 神社本庁, this paper clearly demonstrates the critical impact that grassroots organizing through non-government organizations has had on driving nationalist policymaking at the national level. The political success of these lobbying groups has been clearly evidenced in their presence at the highest level of Japanese government, as well as the remarkable similarities between their organizational goals and the political goals of the ruling Liberal Democratic Party. This paper demonstrates that the relationship between grassroots nationalist organizations and the Japanese government is one of influence and pressure, rather than a coincidental alignment of political ideals.
Introduction
Japan has entered an era of deep political change. The days of deep government factionalism and a laser-focus on economic development have since given way to shifts in mainstream Japanese political discourse. With the turn of the 21st century, Japan has faced new challenges and new political realities, as ideology is no longer taking a backseat to extreme economic growth. A nationalist revival is taking place in Japan, from the grassroots all the way up to the national Cabinet. This political shift towards nationalism carries important implications for both policy and public discourse. An important marker of these shifts has been the increasing embrace of nationalist discourse by politicians within the Liberal Democratic Party (LDP) and in the opposition parties. This embrace of nationalist tendencies has taken many forms, from repeated visits to the controversial Yasukuni Shrine by elected officials to an increasingly aggressive push toward constitutional revision, a goal long advocated for by those on the right of Japan's political spectrum. Contemporary literature on Japanese politics is in relative agreement that this nationalist shift is taking place, and scholars such as Giulio Pugliese and Margarita Estévez-Abe have argued that this phenomenon is elite driven. 46 This top-down approach to examining Japan's nationalist discourses is rooted in the idea that elected officials are the primary force for advocating meaningful political change. It is easy to come to such a conclusion, as Prime Minister Shinzō Abe and his Cabinet have increasingly led the call for more nationalist policy choices, such as key changes to the Constitution. However, this assertion leaves out key factors in understanding the changing Japanese political landscape. By arguing that the nationalist revival is being driven by Japan's elites, these scholars ignore the critical role that non-government organizations and private institutions have played in advocating change at both the national and grassroots level. This paper will challenge existing assertions of elite-driven nationalism and demonstrate the rising influence of non-government nationalist organizations on public policy outcomes. Through two in-depth case studies of Japan's most influential nationalist organizations, Nippon Kaigi (Japan Conference) and Jinja Honchō (The Association of Shintō Shrines), this project will clearly demonstrate the existence of a complex and influential network of nationalist activists that continue to exert significant influence on public officials and policymaking outcomes. 47 By examining the origins, organization, and goals of these two institutions, as well as their extensive connections to elected officials, this paper will highlight the extensive role that private organizations have played in driving nationalist policy outcomes in Japan since the turn of the century. This paper argues that such organizations have created an expansive network of influence extending from the grassroots deep into the highest echelons of the political office, resulting in significant shifts in political discourse and the formation of nationalist policy outcomes.
Defining Nationalism
Any discussion of ideological trends in society or in government is at risk of abstraction, especially when dealing with a topic as politically controversial as nationalism. It is therefore critical that we construct a clear working definition for what this paper refers to as 'nationalist policies' or 'nationalist discourses.' Such terms as 'nation' and 'nationalism' are all too commonly misused or loosely applied by both academics and news media alike, which propagates contradiction and misunderstanding. 48 To understand what is meant by the term nationalism, a clear definition of nation must first be ascertained. 46 . Giulio Pugliese, "The China Challenge, Abe Shinzo's Realism, and the Limits of Japanese Nationalism," SAIS Review of International Affairs 35, no. 2 (2015): 47. Pugliese argues that Abe has purposely fanned nationalist furor, coining the term "top-down nationalism"; Margarita Estévez-Abe, "Feeling Triumphalist in Tokyo: The Real Reasons Nationalism Is Back in Japan," Foreign Affairs 93, no. 3 (2014): 165. http://www.jstor.org/stable/24483416. Estévez-Abe argues that increased nationalist discourse has been promoted by Abe as a conscious policy choice. 2 . All translations are by author unless otherwise noted. 48 . Lowell W. Barrington, ""Nation" and "Nationalism": The Misuse of Key Concepts in Political Science," PS: Political Science & Politics 30, no. 4 (1997): 712. Lowell Barrington, in his extensive attempt at defining such terms, defines the nation as a collective that is "united by shared cultural features (myths, values, etc.) and the belief in the right to territorial selfdetermination." 49 In the context of Japanese studies, this definition is easily applied as Japan has historically existed as a relatively homogenous society with well-defined territorial borders. In addition, Japanese history is rife with references to a common creation myth, which has served as a collectively unifying principle under the Imperial Household. With this definition in mind, nationalism can therefore be characterized as, in Barrington's terms, "the pursuit of a set of rights for the self-defined members of the nation, including, at a minimum, territorial autonomy or sovereignty." 50 This definition implies that nationalism must define both territorial boundaries that the nation has a right to control, as well as the membership boundaries of the individuals that are thought to have a right to belong to the collective. 51 In contrast to this definition, many popular definitions, such as those used in mass media, refer to nationalism as "right-wing political thought and action aligned with militarism," and as Matthew Penney explains, "a whole complex of beliefs, assumptions, habits, representations, and practices that reinforce the concept of the nation." 52 With these definitions in mind, it therefore becomes possible to define nationalist policies and nationalist discourses as those policy decisions and accompanying discourses aimed at strengthening a sense of collective national unity through the strengthening and protection of territorial borders and the boundaries that define that collective nation. In terms of Japan, this refers to a set of policies and beliefs that view the Japanese people as a quantifiable collective, unified through shared historical experiences, values, and collective identity.
Shifting Political Discourses
Since the turn of the 21st century, there has been a notable shift in policy priorities and discourse at the highest levels of the Japanese government. Beginning with the election of Prime Minister Mori Yoshirō in 2000, who famously declared that Japan was "a divine nation centring around the Emperor," 53 along with his successor, Koizumi Jun'ichirō, who visited the controversial Yasukuni Shrine to pay homage to Japan's war dead an unprecedented six times, Japan's elected officials have grown increasingly bold in their embrace of nationalist discourses. 54 Under Prime Minister Abe Shinzō, these embraces of nationalist discourse have accelerated and taken the form of actual policy outcomes. Such policy outcomes include an expanded role for the Self-Defence Forces, continued revisions of history textbooks, the mandatory singing of the national anthem in schools, and the legalization of the Imperial Calendar. Abe's party, the LDP, has also released a draft constitution containing numerous proposed amendments favouring removal of pacifist clauses such as Article 9, which forbids Japan from maintaining the capacity to wage war. 55 Such developments have not gone unnoticed by scholars, the vast majority of whom have declared the trend toward nationalism as being driven by elites such as Abe and his Cabinet. Fabian Schäfer refers to Abe's "hidden nationalist agenda" and writes that the government is purposefully utilizing populist right-wing strategies to advance a nationalist agenda. 56 Similarly, Mike Mochizuki argues that Abe's recent electoral success is not due to his ideological positions, but is instead the result of the collapse of opposition parties. 57 He continues to explain that this situation has simply presented Abe with the opportunity to "pursue his nationalist agenda" without an opposition to stand in the way. 58 Taking this argument even further, Jeff Kingston writes that the recent nationalist trends in contemporary Japan are "a trend that is elite-driven and vigorously promoted by the nation's political leadership". 59 All of these scholars are correct in their assertions that nationalist policies and discourses are being promoted at the highest levels of Japanese government. There is little doubt that Abe and his Cabinet have voiced support for such policies, even if many policy goals have yet to be attained. What these scholars ignore, however, is the underlying explanation for such a dramatic shift in Japanese political discourse. The argument that Abe and his government are the primary drivers of nationalist change does not adequately account for the dramatic uptake of nationalist discourse into the mainstream of Japanese politics, a reality that would likely have been dismissed by scholars before the year 2000. As scholars of liberal democracies know, democratic governments are designed to be representatives of certain interests. Democratically elected politicians are not only held accountable to voters but are almost always held accountable to interest groups or lobbies that support them financially and/or politically. This is undoubtedly the case in Japan, where the influence of interest groups and lobbies has continued to flourish since the electoral reforms of the 1990s. 60
Through an examination of such interest groups, which are by definition non-government organizations, it becomes clear that the recent trends towards nationalism in the Japanese government are the direct result of specific interests and influence campaigns with the intent of explicitly influencing policymaking at the government level.
49. Barrington, 712-713. 50. Barrington, 714. 51. Barrington, 714.
Nippon Kaigi
The first of this paper's case studies examines the rise of Nippon Kaigi (Japan Conference) and its increasing activity at the highest levels of Japanese government. Nippon Kaigi is often described as Japan's most successful and most established right-wing advocacy group and lobbying organization. 61 The group was largely unknown outside of Japan until 2014, when the New York Times introduced it as "a nationalistic right-wing group that was all but unknown until recently," following a renewed media scrutiny on Nippon Kaigi's influence on politics after the 2014 Diet elections. 62 Nippon Kaigi was actually founded in 1997, as a merger of two existing right-wing nationalist organizations, the 56 National Conference to Protect Japan and the Society for the Protection of Japan. 63 Nippon Kaigi's origin in these other two groups is notable, as the use of the term "protection", or mamoru in Japanese, is clearly in line with this paper's definition of nationalism. The prevalence of the term mamoru implies a sense that there are territorial or societal boundaries that must somehow be protected from some perceived harm. Utilization of such a term in this context can therefore be interpreted as explicitly nationalist in the framework of this paper's definition. Since 1997, Nippon Kaigi has quickly established itself as an umbrella organization of right-wing groups, intellectuals, business leaders, and politicians, as well as a grassroots membership of 38,000 fee-paying members across all 47 Japanese prefectures. 64 Nippon Kaigi has a clear set of organizational objectives which guide its activities, including such goals as: "A new constitution suitable for a new era," "Politics that protect the country's reputation and the people's lives," "Creating education that fosters Japanese sensibility," and "Contributing to world peace by enhancing national security." 65 A list of goals such as these serves as a set of guiding ideological principles for the organization. In order to measure the actual influence of Nippon Kaigi, however, it is necessary to examine the way in which these abstract organizational goals translate to real policy outcomes.
Nippon Kaigi maintains a parliamentary division, the Parliamentary League for Nippon Kaigi
(Nippon kaigi kokkai giin kondankai), which serves as its direct connection to lawmakers. 66 Within the National Diet, Japan's parliament, 280 sitting lawmakers are listed as members of Nippon Kaigi's parliamentary league, including Prime Minister Abe himself, who serves as "special advisor" to Nippon Kaigi. 67 In addition to its influence in the Diet, Nippon Kaigi also claims 1,692 members elected to local councils across the country. 68 It is important to note that Nippon Kaigi did not obtain this substantial presence in politics by recruiting elected officials. Instead, as Thierry Guthmann notes in his overview of Nippon Kaigi, many of these politicians have maintained close personal ties with the nationalist lobby since the earliest days of their careers. 69 This implies that Nippon Kaigi members and sympathizers have actively sought out elected office, which challenges existing assertions made by some scholars that elected officials have gravitated toward the nationalist lobby for political purposes. 70 Nippon Kaigi, throughout its history, has demonstrated a multi-pronged approach at driving policy change at both the national and local level. This includes signature drives and a sustained grassroots effort at mobilizing both people and resources to enact political change and influence politicians. 71 for the Tokyo Metropolitan Government's passing of measures mandating punishment for teachers who refuse to stand, face the flag and sing the anthem during school ceremonies. 72 Nippon Kaigi's ability to mobilize at the grassroots level serves as the core of influence campaign, and members often hold "lectures and rallies to pressure local assemblies to submit resolutions to Tokyo by bombarding them with requests, petitions, and phone calls." 73 This type of grassroots mobilization has helped to drive the explosive growth that Nippon Kaigi has continued to enjoy across Japan.
In addition to this grassroots foundation, it can be argued that Nippon Kaigi's most successful approach to enacting change has been their extensive network of influence within the highest levels of Japanese government. As previously discussed, as many as 280 members of the Diet are associated with Nippon Kaigi's parliamentary group. Even more significantly, well over half of the 20 members of Cabinet are also Nippon Kaigi members. The fact that this organization has been able to create a network of politicians so vast that they hold the majority in the executive branch is a further indication of their growing influence. It is important to note, however, as James Babb points out, the presence of right-wing members in the government is not a new phenomenon, but rather "political dynamics now allow and even encourage them to express these views more clearly." 74 These political dynamics have largely been changed by the shifting political discourses around the idea of nationalism, which has largely been led by Nippon Kaigi. The group has facilitated the rise of a generation of politicians that appear to be less attached to post-war pacifism and are more willing to embrace significant change in the pursuit of the protection of the nation. The close relationship between these politicians has seen substantial policymaking achievements, such as Nippon Kaigi's successful lobbying for the reinterpretation of the constitution to allow for limited Japanese military action abroad. Nippon Kaigi has also led the lobbying for the introduction of revised history textbooks in schools that reinterpret Japan's role in the Second World War, and has helped to design the new LDP draft constitution, which contains several proposed amendments to the constitution that would enact sweeping changes on many aspects of life in Japan. 75 The LDP draft constitution is an almost perfect copy of the proposed constitution and calls for many of the same policy changes, such as the restoration of the Emperor as the head of state, and the rewriting of Article 9, which deals with the legal status of the Self Defense Forces. 76 Changes such as this have been the goal of nationalists and the Japanese right wing since the end of the war, but it has only been since the turn of the century that such reforms have been gained traction with the support of groups like Nippon Kaigi. In his book on Nippon Kaigi published in 2016, journalist Aoki Osamu wrote that the group only appears influential because the ideological tenets that they espouse are coincidentally aligned with that of the Abe Administration, concluding that there is no causal link between the operations of Nippon Kaigi and the noticeable shift in political discourse since the beginning of the Abe Administration. 77 Aoki insists that the relationship between Abe and Nippon Kaigi is one of sympathy and resonance, rather than 72 . McNeil,5. 73 . Mizohata,4. 74 . Babb,359. 75 . McNeil,4. 76 . The Japan Times, "The LDP's draft constitution," 24 August 2016.
https://www.japantimes.co.jp/opinion/2016/08/24/commentary/japan-commentary/ldps-draftconstitution/#.XAsNnKfMyYU. 77 . Shibuichi,191. influence and control. 78 What Aoki fails to consider is the clear material connection between Abe, his Cabinet, and Nippon Kaigi. As Guthmann explained in his assessment of the ideological foundations of Nippon Kaigi, Abe and many of his colleagues have been members and deep supporters of Nippon Kaigi since the beginning of their political careers, and were supporters of nationalist values well before advocating for change within the government. 79 Further, Aoki's dismissal of any causal link between the government and Nippon Kaigi in spite of evidence to the contrary is explained as simply being a coincidence. The ideological coherence between members of the Abe Cabinet and Nippon Kaigi run deep, as evidenced by their unity on the topics of constitutional reform and education reform, which casts serious doubt on Aoki's suggestions of coincidence. Nippon Kaigi's ideological foundation, its organizational structure, and its ability to mobilize at both the grassroots and government levels demonstrate its significant influence on enacting policy change and introducing nationalist discourse.
Jinja Honchō
The second case study this paper will examine is Jinja Honchō (The Association of Shintō Shrines), the expansive administrative organization responsible for overseeing the management of Japan's 80,000 Shintō shrines. Historically, Shintō was a belief system that existed as an extension of the Japanese creation myth, in which the Emperor was revered as a living God and spiritual leader of the Japanese nation. 80 This system, often referred to as State Shintō before 1946, reflected an attempt at unifying religion and state into a unitary Japanese identity; an identity that was based in the common belief that the Japanese people had descended from the gods, or kami , in Japanese. 81 According to this paper's previously established definition of nationalism, this attempt at unifying the Japanese people under a set of shared customs and myths is a critical element of nationalist discourse. While State Shintō no longer exists in an established political form, the impact of Shintō on Japanese identity is still noteworthy. Following Japan's defeat at the end of the Second World War, American occupying forces introduced what was called the Shinto Directive, aimed at dismantling the wartime influence of State Shinto and established a legal basis for secularism in Japan. 82 With the relegation of Shintō places of worship to the private realm, Jinja Honchō was established as a private, non-government association dedicated to the continued management of the Shrines that previously had been under the jurisdiction of the imperial government.
Despite its existence as an administrative organization, Jinja Honchō has proven to be one of the most influential and effective political lobbying organizations in Japan. 83 Through the establishment of its political arm, Shintō seiji renmei (Shintō Association of Spiritual Leadership), Jinja Honchō has successfully lobbied for several nationalist causes, such as the legalization of the National Flag, the reinstatement of the National Anthem, and the establishment of a national holiday on April 29th in honour of wartime Emperor Showa. 84 Jinja Honchō's lobbying arm boasts high membership levels in the national Diet, with some estimates suggesting that the group has more membership among politicians than even Nippon Kaigi. 85 In addition to its nationalist policy lobbying efforts, Jinja Honchō has also been a staunch advocate for continued visits by public officials to the controversial Yasukuni Shrine, an act that many of Japan's neighbours in Asia view as a way of celebrating Japan's wartime military activities. 86 Any comprehensive study of nationalist discourses in Japan cannot be divorced from the study of Shintō and its ability to organize politically. Through Jinja Honchō's Shinto Association of Spiritual Leadership, the organization has established an influential network of sympathizing politicians in the highest levels of government. After the 2016 Cabinet reshuffle, 19 of Abe's 20 Cabinet members were members of the Shinto Association of Spiritual Leadership, which led some scholars to conclude that Shintō-inspired elements have been a central element of the Abe government's ideological foundation. 87 The true influence of Jinja Honchō and political Shintō, however, lies in the organization's hand in building the nationalist coalition that has proved to be so influential in enacting policy change in Japan under Abe. The existence of Nippon Kaigi is directly tied to its ideological unity with Jinja Honchō, and the ties between these two organizations suggest that little separates the two groups organizationally. The ideological foundations of Nippon Kaigi's founding in 1997 have been closely linked with Jinja Honchō's political and religious syncretism. The two organizations are united by a profound resentment for the postwar order and share a deep nostalgia for the perceived "golden age" of Japanese political and cultural life. 88 Since Nippon Kaigi's founding in 1997, the board of directors has largely been staffed by representatives and leaders from within Jinja Honchō. 89 Ideologically speaking, these two organizations are highly synchronized as a result, and some scholars have suggested that Jinja Honchō continues to form the backbone of Nippon Kaigi both ideologically and organizationally. 90
78. Shibuichi, 191. 79. Guthmann, 214. 80. Guthmann, 209. 81. Guthmann, 209. 82. Guthmann, 211. 83. Guthmann, 208.
An Alliance of Nationalists
When viewed through the lens of nationalism, the policy proposals and discourses discussed throughout this paper reflect a deep concern with national identity, which in the Japanese context is profoundly reflected in Shintō. The alliance between Nippon Kaigi and Jinja Honchō is further indicative of the religious foundation of Japanese nationalism, even within the framework of a secular state. The goals of these two organizations are highly aligned, even if they are not stated to be explicitly religious. Both Nippon Kaigi and Jinja Honchō are fundamentally built on the idea that Japanese identity ought to be protected, and the way to accomplish this is to "rebuild" a Japan that is centred around the Imperial Household, which they view as the "essential constitutive element of the nation". 91 It is important to remember that despite the extensive involvement of these groups in the Diet and in the Cabinet, they are fundamentally private and non-governmental in nature. Both groups exist primarily as grassroots organizations that lead fundraising and signature drives in the pursuit of effecting policy change in the name of nationalism. The success that these groups have enjoyed in recent years is not the result of coincidentally aligned views between the grassroots and the elite. To the contrary, the evidence demonstrates the extensive inroads that Nippon Kaigi and Jinja Honchō have made in rallying elected officials to their causes, and it can be effectively argued that in many respects, these nationalist groups are the primary drivers of Japanese politics. 92 The effect of these groups on mainstream political discourse goes beyond the confines of the Abe government or even the LDP. Since 2016, Japanese politics has seen a spectacular collapse of the opposition parties and the further entrenchment of power by the LDP. During the lead-up to the 2017 Diet Elections, the LDP's main opposition, the Democratic Party, collapsed and announced that it would not contest the election. 93 In its place rose a new opposition party, Kibō no Tō (Party of Hope), led by Tokyo Governor Koike Yuriko. Interestingly, Koike herself had served as the Minister of Defense under Abe and was a member of both Nippon Kaigi and the Shintō Association for Spiritual Leadership. 94 In addition, Koike established a "litmus test" for politicians looking to join the Kibō no Tō, ensuring that the party was represented by politicians that supported Nippon Kaigi policies such as constitutional revision. 95 While this election resulted in a stunning defeat for the upstart party, it solidified an ideological trend in Japanese politics: the consolidation of nationalist ideology across party lines. Nippon Kaigi and Jinja Honchō, as evidenced by the 2017 election, have accrued influence across multiple parties, and crafted a political system in which nationalist ideology has become the dominant political discourse. While these shifts in political discourse are most visible within the elected elite, it is important to consider the driving ideology and influence of groups like Nippon Kaigi in facilitating this consolidation of ideological influence.
84. McNeil, 5. 85. Babb, 361. 86. Shibuichi, 182. 87. Mizohata, 10. 88. McNeil, 3; Guthmann, 207. 89. Guthmann, 214. 90. Guthmann, 215. 91. Guthmann, 216.
Conclusion
There is little doubt among scholars that Japan is experiencing foundational shifts in its political discourse and in the ideologies which constitute its government, resulting in some degree of uncertainty about where the country is headed in the years ahead. While it is generally agreed that nationalism and nationalist rhetoric have become more mainstream in Japanese political life in recent years, the mechanism by which these changes have taken place is more complex and cannot be attributed simply to the ideological leanings of a few elected elites. While scholars such as Pugliese, Aoki, and Glosserman have argued that this phenomenon is elite-driven and a coincidental partnership between like-minded politicians and interest groups, this paper has demonstrated that shifts in Japanese political discourse can be traced back to the actions of grassroots political and religious movements with their ideological origins in the postwar order. Non-government organizations such as Nippon Kaigi and Jinja Honchō have spent years building a complex system of influence from grassroots activists straight up into the highest echelons of elected government. These organizations embrace an ideology that can be defined as explicitly nationalist according to the definition put forward by this paper and have seen a high level of success in enacting meaningful policy change in line with their agenda. As this paper has explained, understanding the origins of these organizations and the ideological foundations on which they have been built is critical in crafting an accurate analysis of the mechanisms by which political change in Japan has been created. Nippon Kaigi and its ideological backbone Jinja Honchō have each created extensive political lobbying wings which rein in politicians at both the local and national levels in order to drive nationalist policy outcomes from the ground up. This is not a phenomenon that is primarily elite-driven, as evidence suggests that a nationalist movement has been built by these organizations from the grassroots of Japanese society. As elected officials in the Diet and Cabinet have continued to align themselves with the ideological platform of Nippon Kaigi and Jinja Honchō, these organizations will continue to consolidate power in the form of ideological unity across party lines. As Japan appears to be nearing a vote on constitutional revision, the activity of these groups will intensify, and the pressure placed on politicians to align themselves with a burgeoning 'nationalist movement' will continue to develop. Japan's increasing embrace of nationalist discourse has taken many forms, all with the goal of establishing a "new normal" in Japanese politics, and grassroots movements will continue to exist at the forefront of driving decision-making among Japan's elected elites. 96 Future scholarship in the field of nationalist political discourse in Japan ought to examine the foundations of such ideological shifts at the grassroots level, rather than viewing political change strictly through the lens of elite-driven political discourses.
92. Tawara Yoshifumi, "What is the Aim of Nippon Kaigi, the Ultra-Right Organization that Supports Japan's Abe Mizohata, 3. 95. Pekkanen, 32. 96. Catherine Wallace, "Japanese Nationalism Today-Risky Resurgence, Necessary Evil or New Normal?," Mejiro journal of humanities 12: 76.
Distribution and Speciation of Heavy Metal(loid)s in Soils under Multiple Preservative-Treated Wooden Trestles
The widespread use of wood preservatives, such as chromated copper arsenate (CCA), alkaline copper quaternary (ACQ), and copper azole (CA), may cause environmental pollution problems. Comparative studies on the effect of CCA-, ACQ-, and CA-treated wood on soil contamination are rarely reported, and the behavior of soil metal(loid) speciation affected by preservatives has been poorly understood. Soils under the CCA-, ACQ-, and CA-treated boardwalks were collected to investigate metal(loid) distribution and speciation at the Jiuzhaigou World Natural Heritage site. The results showed that the maximum mean concentrations of Cr, As, and Cu were found in soils under the CCA, CCA, and CCA plus CA treatments and reached 133.60, 314.90, and 266.35 mg/kg, respectively. The Cr, As, and Cu contamination was high in the top 10 cm of soil for all types of boardwalks and was limited in the horizontal direction, not exceeding 0.5 m. Cr, As, and Cu in soils were mainly present as residual fractions in all profiles and increased with depth. The proportion of non-residual As in soil profiles under CCA- and CCA plus CA-treatment and exchangeable Cu in CA- and CCA plus CA-treatment were significantly higher than those in the profiles under the other preservative treatments. The distribution and migration of Cr, As, and Cu within soils were influenced by the preservative treatment of trestles, in-service time of trestles, soil properties (e.g., organic matter content), geological disasters (e.g., debris flow), and elemental geochemical behavior. With the CCA treatment for trestles successively replaced by ACQ and CA treatments, the types of contaminants were reduced from a complex of Cr, As, and Cu to a single type of Cu, achieving a reduction in total metal content, toxicity, mobility, and biological effectiveness, thus reducing environmental risks.
Introduction
Wood is one of the most abundant sustainable biomaterials on Earth [1,2]. However, it is susceptible to biological or chemical decay by biotic and abiotic components [3][4][5]. To protect the structural integrity of wood from the harmful effects of fungi, termites, and various other pests [6][7][8], improve the efficiency of wood utilization, conserve wood resources, and increase the functionality of wood products, various wood preservatives have been developed over time [9]. Preservative-treated wood is widely used in decks, porches, utility poles, railroad sleepers, bridge piers, fence posts, and picnic tables, among other applications [10][11][12].
Chromated copper arsenate (CCA) was once the most widely used wood preservative worldwide [13]. CCA-C, consisting of 47.5% CrO3, 18.5% CuO, and 34.0% As2O5 [14], is an ideal wood preservative with excellent preservative efficiency and a low cost. However, since the beginning of this century, CCA has been banned in some countries due to the leaching of Cu, Cr, and As from CCA-treated woods, and the high ecological risks resulting from these metal(loid)s, particularly Cr and As. With a worldwide shift to alternative copper-based preservative-treated wood products [15], alkaline copper quaternary (ACQ) and copper azole (CA), both of which contain copper in the form of copper oxide mixed with organic cobiocides [16], are seeing increasing use. According to the ratio of the primary active ingredients copper oxide and quaternary ammonium salt and the use of different solvents (amine soluble or ammonia soluble), ACQ can be divided into four subtypes, i.e., ACQ-A, ACQ-B, ACQ-C, and ACQ-D [17], and the most widely used in China is ACQ-D. CA contains triazole instead of the quaternary ammonium salt used in ACQ. CA is divided into boron-containing formulations (CA-A) and boron-free formulations (CA-B). The current market basically uses boron-free formulations because boron is easily lost [18]. Currently, the preserved wood used in China mainly includes CCA-treated wood (accounting for ~85-95%), a smaller amount of ACQ-treated timber (10-15%), and a tiny amount of CA-preserved wood [19,20].
The continued release of Cr, As, and Cu from CCA-treated wood into the surrounding environment cannot be ignored [21][22][23][24][25]. This not only directly reduces the preservative effect, but the persistent Cr, As, and Cu also pose a risk and threat to environmental security and human health [26][27][28][29]. Although the new environmentally friendly copper-based preservatives exclude Cr and As, the leaching of Cu from the treated wood is still unavoidable [30][31][32]. It has been shown that the leaching rate of Cu from ACQ-treated wood is up to 15 times higher than that from CCA-treated timber [33]. Using X-ray fluorescence spectroscopy, 6.92-19.54% and 9.38-22.46% Cu leaching from ACQ-treated wood and CA-treated wood, respectively, were observed [34]. Despite Cu causing less harm than Cr and As, increased Cu content in the environment may cause damage to aquatic organisms [35,36].
It is well known that Cr, As, and Cu can accumulate and persist for a long time after entering soils and may harm human health through the food chain or airborne dust [37][38][39]. In recent years, numerous studies have characterized Cr, As, and Cu contamination in soils around CCA-treated wood, which is essential for understanding the soil contamination processes of CCA-treated wood and for contamination management [15,21,24,25,[40][41][42][43][44]. However, the contamination processes and behavior of ACQ and CA, the so-called "environmentally friendly preservatives", have yet to be determined. In addition, previous studies on metal(loid) contamination caused by a single type of preserved wood mainly focused on accumulation characteristics and spatial distribution without evaluating soil metal(loid) pollution or analyzing potential risks. Meanwhile, with the successive iterations of preserved wood, it remains unstudied whether cumulative effects occur when different wood preservatives are successively introduced into the soil environment.
Jiuzhaigou National Natural Reserve (JNNR) was listed as a World Natural Heritage site by UNESCO in 1992. Over 70 km of wooden plank roads have been built to preserve natural scenery in Jiuzhaigou since 2001. CCA-C, ACQ-D, and CA-B treated wood planks were constructed during 2001-2012, 2013-2017, and 2018-2022, respectively. Therefore, there is an urgent need to evaluate the metal(loid) contamination situation and the ecological risk associated with the presence of a large number of preservative-treated wooden trestles in the Jiuzhaigou scenic area to protect the natural ecosystem.
The As concentrations on the surfaces of in-service CCA-treated wood planks in the JNNR have been investigated using a portable XRF analyzer [45], but little attention has been paid to the contamination of soils triggered by CCA, ACQ, and CA preservatives. In this study, the content of major contaminants (Cr, As, Cu) and their speciation distribution in soil profiles and soil physicochemical properties (pH, organic matter content) were determined under CCA-, ACQ-, and CA-treated wood planks in the JNNR. The objectives of this study were: (1) to investigate the accumulation and distribution characteristics of soil heavy metal(loid)s in different types of preservative-treated wood at various in-service ages; (2) to determine the metal speciation distribution in the soil profile; and (3) to provide data to support the selection and management of preserved wood types in ecologically sensitive areas where preservative-treated wood panels are heavily used.
General Description of the Study Area
Jiuzhaigou National Natural Reserve, situated in northern Sichuan Province, Southwest China, mainly consists of highland travertine lakes, travertine waterfalls, and other typical karst landscapes, covering an area of 643 km² (Figure 1). The altitude of the tourist area is 2000-3100 m above sea level. The scenic route is shaped like a "Y" and is approximately 30 km long. The mean annual temperature and precipitation are 7.3 °C and 622 mm, respectively. Most of the yearly rainfall is concentrated from May to September (~150 days). The soil texture is mainly sandy loam [46]. Due to various geological disasters, e.g., landslides and debris flows, vegetation and soils in the area were damaged to varying degrees, resulting in rockfall accumulation or exposure of limestone bedrock.
As one of the most attractive scenic spots in China, the JNNR receives millions of domestic and international tourists [47]. To ensure the safety of tourists and reduce the impact of human activities on the ecosystem, more than 70 km of wooden plank roads have been set up in the JNNR since 2001 (Figure 1). Over twenty years, three types of preservative-treated planks have been successively replaced. The CCA-treated planks were installed in Jiuzhaigou from 2001 to 2012. Subsequently, ACQ-treated wood was used for building from 2013 to 2017. Since the JNNR was struck by a strong earthquake (Jiuzhaigou earthquake, Ms = 7.0) in 2017, increasing landslides, debris flows, and other natural disasters have resulted in severe damage to infrastructure and public facilities, including the boardwalk in the scenic area. Therefore, as one of the critical projects for restoration and reconstruction, ~73 km of plank roads with CA treatment were upgraded at the original site from 2018 to 2022.
Sample Collection
Surface samples were collected at both 0-2 cm and 0-10 cm depths, because collecting only 0-10 cm soil would dilute contaminant concentrations. In total, 135 topsoil samples (including 0-2 cm soils (N = 42) and 0-10 cm soils (N = 93)) were collected for the different types and service durations of preservative-treated boardwalks (Table 1). Each sample was a composite of five subsamples collected within a 10 m vicinity. One hundred and fifteen profile soil samples were collected under the preservative-treated wood at depths of 0-2 cm, 2-5 cm, 5-10 cm, 10-20 cm, 20-30 cm, and 30-40 cm. Some profiles were only excavated to the gravel horizon or bedrock within a depth of 30 cm. To determine the background levels of metal(loid)s in the JNNR, seven surface soil (0-2 cm), eleven surface soil (0-10 cm) and twenty-one soil profile samples at higher positions were collected from sites at least 20 m away from the boardwalks (Table 1). In addition, eight, one, and five CCA-, ACQ-, and CA-treated boards used in the JNNR were collected, respectively.
Soil and Wood Sample Analysis
The collected soil samples were air-dried at room temperature. Rock gravel, debris, and plant residues were picked out and weighed separately. A portion of the dried soil sample was weighed, ground, and passed through a 2 mm sieve for pH determination. A portion of the sample was ground until it passed through a 0.15 mm sieve for soil total heavy metal(loid) analysis, heavy metal(loid) speciation analysis, and organic matter content measurement. The boardwalks were sawed into small pieces and then placed in an oven at 60 °C for 48 h. They were ground into powder, passed through a 0.25 mm sieve, and preserved for use.
In the laboratory, the soil samples were digested with a microwave digestion (GT-400, Preekem) method using HCl-HNO3-HF-HClO4, and the contents of Cr, As, and Cu were determined using inductively coupled plasma mass spectrometry (ICP-MS, NexION 300, PerkinElmer). The wood samples were digested by HNO3 and the metal(loid) content was determined by ICP-MS. Soil pH was measured using potentiometry at a 1:2.5 (soil:water) ratio with a pH meter. The soil organic matter (SOM) content was analyzed by the Walkley and Black method through wet oxidation with K2Cr2O7 [48].
Heavy metal(loid)s speciation in profile soils was sequentially extracted with the BCR sequential extraction procedure [49,50]. The four sequential extraction fractions are the exchangeable fraction, reducible fraction, oxidizable fraction, and residual fraction. The heavy metal fractions are shown as a percentage of the total extractable content (%). The determination of the residual fraction was consistent with the total heavy metals. For quality control, the GSS-9 material published by the Institute of Geophysical and Geochemical Exploration, Beijing, China, was used simultaneously during the total and speciation analysis of heavy metals.
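As an aside, the arithmetic implied above (each BCR fraction expressed as a percentage of the total extractable content, plus a recovery check against the certified GSS-9 reference material) is straightforward; the short Python sketch below illustrates it with invented concentrations. The function names and numbers are hypothetical and not taken from the study.

```python
# Minimal sketch with invented values: express BCR fractions as percentages of the
# total extractable content and check recovery of a certified reference material.

def fraction_percentages(exchangeable, reducible, oxidizable, residual):
    """Each BCR fraction (mg/kg) as a percentage of the summed extractable content."""
    total = exchangeable + reducible + oxidizable + residual
    return {
        "exchangeable_%": 100.0 * exchangeable / total,
        "reducible_%": 100.0 * reducible / total,
        "oxidizable_%": 100.0 * oxidizable / total,
        "residual_%": 100.0 * residual / total,
    }

def recovery_percent(measured_total, certified_total):
    """Recovery (%) of a reference material such as GSS-9 (certified value assumed)."""
    return 100.0 * measured_total / certified_total

# Hypothetical Cu fractions (mg/kg) for one soil sample
print(fraction_percentages(exchangeable=2.1, reducible=5.4, oxidizable=18.7, residual=150.0))
print(recovery_percent(measured_total=24.3, certified_total=26.0))
```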
Statistical Analysis
ArcGIS 10.7 was used to map the spatial distribution of sampling sites. IBM SPSS Statistics 26 was used for statistical analysis of heavy metal(loid) concentrations and soil properties, one-way ANOVA (p < 0.05), and correlation analysis. In addition, the distributions of heavy metal contents, fractions, and soil physicochemical properties in the soil were drawn by Origin 2022.
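The statistical tests named above were run in SPSS; purely as an illustration of the same workflow, the following Python sketch performs a one-way ANOVA and a Pearson correlation test with scipy. All concentrations in it are invented placeholders, not the study data.

```python
# Illustrative one-way ANOVA and Pearson correlation, analogous to the SPSS workflow
# described above; the concentration values (mg/kg) are invented placeholders.
from scipy import stats

cr_cca = [120.3, 141.8, 98.7, 133.6]   # hypothetical Cr under CCA-treated boards
cr_acq = [67.4, 70.1, 61.2, 69.8]      # hypothetical Cr under ACQ-treated boards
cr_bkg = [32.3, 51.0, 48.9, 52.4]      # hypothetical background Cr

f_stat, p_anova = stats.f_oneway(cr_cca, cr_acq, cr_bkg)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")  # significant if p < 0.05

# Correlation between soil organic matter (%) and Cr (mg/kg), invented data
som = [29.5, 23.1, 14.6, 18.3, 39.7]
cr = [141.8, 69.8, 61.8, 92.6, 32.3]
r, p_corr = stats.pearsonr(som, cr)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")
```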
Cr, As, and Cu Concentrations in Various Preservative-Treated Wooden Boardwalks
Although the CCA-treated wooden boardwalks have been used for 10-19 years, a large amount of metal(loid)s were still retained in the woods in the JNNR ( Table 2). Studies have shown that the preservative retention rate in preservative wood is about 75~95% after 43~1 years of exposure [51][52][53][54]. In this study, the average contents of Cr, As, and Cu in CCA-treated trestles were 2782, 904, and 1561 mg/kg, respectively, indicating that the retention of As was the lowest and Cr was the highest. As is most easily leached from CCA-treated wood and Cr is relatively stable [23,55]. Low Cr and As concentrations were measured in the ACQ-treated trestles relative to the CCA-treated trestles, in contrast with the high Cu content of 5234 mg/kg in Sample W-ACQ-1 from the JNNR. The mean Cu content was 5415 mg/kg in the CA-treated boardwalks (W-CA-1 to W-CA-5) sampled from the JNNR with the lowest Cr and As contents relative to CCA-treated and ACQ-treated woods (Table 2), of which the highest Cu content exceeding 8000 mg/kg was found in fresh Sample W-CA-2.
High concentrations of Cr, Cu, and As were also observed in CCA-treated wood from other parts of the world, with averages of 5944, 6726, and 2742 mg/kg, respectively. Notably, the highest levels were observed in W-CCA-15, a wood pile erected at sea and approximately 25 years old, with Cr, Cu, and As contents of 14,500, 20,700, and 7300 mg/kg, respectively. The Cr, As, and Cu contents from CCA-treated wood are strongly heterogeneous due to various intended usages, realistic usage scenarios, and initial treatment concentrations of the wood. In addition, ACQ-and CA-treated wood from other regions had higher levels of Cu with significant variations in Cu concentration, while As and Cr remained relatively stable with low concentrations. In conclusion, except for the preserved wood used under harsh conditions, the contents of Cr, As, and Cu in different preserved stacks from the JNNR were comparable to those in other regions.
Physicochemical Properties and Metal(loid)s in Surface Soils
In surface soils at a depth within 0-2 cm, the mean values of SOM beneath the CCA, ACQ and CA-treated as well as CCA plus CA-treated trestles were 29.51%, 23.08%, 14.63%, and 18.29%, respectively, while the mean SOM in background samples was 39.74% (Table 3). In surface soils at a depth within 0-10 cm, SOM values under these four treatments were 30.07%, 25.56%, 18.33%, and 18.96%, respectively, while the mean background value was 34.84%. The SOM content at depths of 0-2 cm and 0-10 cm showed the same pattern under different trestles. The background SOM was significantly higher than that of CA- and CCA plus CA-treated soils (p < 0.05) and insignificantly different from that of the other preservative-treated soils. This phenomenon may be because the surface humus of soils was removed when constructing the wooden walkways, resulting in a reduction in the SOM, while the accumulation of biomass, such as dead leaves and mosses, increased the OM content gradually with the in-service duration of wood planks. SOM contents decreased with increasing soil depth in all profiles (Figure 2), including the background sample (more details are presented in Table S1). Likewise, the averaged SOM value also increased in surface soils at a depth of 10 cm or shallower within the profile with increasing in-service age of the boardwalks from CA-treatment (0-3 years) to CCA-treatment (8-19 years), and the content in background soils was the highest.
In surface soils at a depth within 0-2 cm, the mean pH values in soils under the CCA-, ACQ-, CA-, and CCA plus CA-treated wooden boards as well as background soils were 7.20, 7.26, 7.48, 7.49, and 6.08, respectively, and were 7.23, 7.13, 7.5, 7.52, and 6.23, respectively, in surface soils at a depth within 0-10 cm (Table 3). Most of the samples were neutral to alkaline in pH. The pH of the background was significantly lower than that in the rest of the soils at depths of 0-2 cm and 0-10 cm, and the difference in pH was insignificant between the soils under various preservative treatments. The pH value within the soil profiles generally increased with increasing soil depth (Figure 2). Detailed information was provided in the Supplementary Materials (Table S2). Similarly, the pH value in background soils within a depth of 10 cm was significantly lower (p < 0.05) than that in soils under the four wood treatments. The significantly low pH value in the background soils in the JNNR is because (1) a higher organic matter content, derived to a great degree with respect to the alpine elevation of the study area, appeared in the background soils [59,60]. Higher organic matter corresponds to high levels of humic acid, xanthic acid, and low molecular weight organic acids [61,62]; and (2) the use of alkaline materials, such as cement during the construction of trestles, might increase the soil pH [63].
The mean Cr concentrations in the surface soil samples within a depth of 2 cm under the CCA, ACQ, CA, and CCA plus CA treatments were 141.77, 69.79, 61.79, and 92.57 mg/kg, respectively, which were higher than the background value (32.34 mg/kg); the Cr content in the surface soils under the CCA and CCA plus CA treatments was significantly higher than the mean value in the background soils (p < 0.05). The background values of Cr, As, and Cu in surface soils at a depth of 0-10 cm were 52.39, 20.93, and 19.85 mg/kg, respectively. The average Cr levels in soil samples under the treatments of CCA, ACQ, CA, and CCA plus CA were 133.60, 67.43, 67.05, and 95.73 mg/kg, respectively, which were higher than those in background soil samples. The mean contents of Cr in CCA-treated and CCA plus CA-treated samples were significantly higher than those in the background soils (p < 0.01). The average concentration of As in each type of surface soil sample (314.9, 62.6, 31.67, and 74.90 mg/kg for CCA, ACQ, CA, and CCA plus CA treatments, respectively) was higher than that in the background soil and the risk control standards [64]. Among them, 100%, 100%, 81.82% and 100% of soil samples under the boardwalks treated by CCA, ACQ, CA and CCA plus CA exceeded the risk control standard, respectively. The average As concentration in the CCA-treated samples was the highest among all treatments and significantly higher than that in the CA, CCA plus CA, and background samples (p < 0.05). Unlike Cr and As, the highest Cu values were found in the soil samples affected by the CCA plus CA treatment. The average contents of surface soil Cu under the wooden boards treated with CCA, ACQ, CA, and CCA plus CA were 7.7, 9.5, 6.9, and 13.5 times higher than the background values, respectively.
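The enrichment ratios and exceedance rates quoted above are simple arithmetic over the measured concentrations; the Python sketch below shows one way such figures can be computed. The concentration list and screening value in it are invented placeholders, not the study data.

```python
# Illustrative arithmetic for enrichment ratios and exceedance rates; the values
# below are invented placeholders, not the measured data of this study.

def enrichment_ratio(mean_sample, mean_background):
    """How many times a mean sample concentration exceeds the background mean."""
    return mean_sample / mean_background

def exceedance_rate(concentrations, screening_value):
    """Percentage of samples exceeding a risk screening/control value."""
    over = sum(1 for c in concentrations if c > screening_value)
    return 100.0 * over / len(concentrations)

as_samples = [314.9, 120.5, 62.6, 31.7, 74.9, 15.0]               # hypothetical As values, mg/kg
print(enrichment_ratio(mean_sample=154.8, mean_background=20.9))   # ~7.4 times background
print(exceedance_rate(as_samples, screening_value=25.0))           # ~83.3 % of samples
```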
The Cr, As, and Cu contents in soils at a depth between 0 and 2 cm were generally higher than those in soils at depths between 0 and 10 cm under the corresponding treatments (Table 3). A higher concentration in the shallower horizon than in the deeper horizon within the soil profile is often observed when the heavy metal(loid)s in the soils originate mainly from the external environment [65]. This might be related to the high organic matter content of the surface layer. It was found that organic matter content was significantly and positively correlated with As and Cr contents in soils, and organic matter is prone to form complexes with heavy metal ions, thus reducing their ability to migrate downward [66]. This supports the concept that there are low levels of Cu and Cr in soluble or exchangeable form in soils with high organic matter and much higher levels tightly bound to organic matter [67]. Furthermore, Cr, As, and Cu contents exhibited the same variation among soils under different boardwalks in both surface soil samples, while the difference between the two types of surface soils with different depths was not significant (Table 3).
Lateral Distribution of Cr, As, and Cu in Surface Soils
The distribution of Cr in the soils within 0 to 1 m of all treated boardwalks in this study showed a decreasing trend in the horizontal direction (Figure 3a). The pattern of averaged Cr contents in surface soils near the CCA- and ACQ-treated boardwalks was similar, with the maximum content occurring at 0 m (beneath the boardwalks) and decreasing with increasing distance from the boards. The average Cr content in the surface soils at the horizontal distances of 0, 0.5, and 1 m for the CCA-treated boardwalks was 122.54, 74.28, and 62.56 mg/kg, respectively, and the soil Cr content at a distance of 1 m significantly decreased by 48.95% (p < 0.01) relative to that at 0 m. The Cr content in surface soils near the CA-treated boardwalks varied insignificantly with increasing horizontal distance and was close to the background Cr concentration due to the low Cr content in the CA-treated boards.
Figure 3. Cr (a), As (b), and Cu (c) concentrations in surface soils at a depth of 0-10 cm under CCA-, ACQ-, and CA-treated wooden walkways. *, ** indicate significant (p < 0.05) and highly significant (p < 0.01) differences.
The As content was highest under the CCA-treated boardwalks and decreased with increasing horizontal distance (0, 0.5, and 1 m), with average As contents of 211.48, 100.64, and 30.49 mg/kg, respectively; it decreased significantly by 52.41% (p < 0.01) and 85.58% (p < 0.01) at 0.5 and 1 m compared to 0 m (Figure 3b). The higher As content in soils under the ACQ treatment corresponds to the Cr content, most likely because the CCA influenced the sampling site. The As concentration in the soils under the CA-treated boardwalks was close to the background value, attributed to the low As contents in CA-treated boardwalks (Table 2).
The mean Cu content in the surface soils under different preservative-treated boardwalks decreased with increasing distance in the horizontal direction (Figure 3c). The averaged soil Cu content under the boards in descending order is CA > ACQ > CCA treatments, in accordance with the order of Cu contents in these preservative-treated boardwalks (Table 2) and the leaching of Cu from the different preservative-treated boardwalks [34]. If the two outliers at distances of 0.5 and 1 m away from the CCA-treated board were excluded, the average contents of surface soils near CCA-treated boardwalks at distances of 0, 0.5 and 1 m from the wooden plank were 129.66, 62.58 and 27.36 mg/kg, respectively, and they decreased significantly by 51.74% (p < 0.05) and 78.90% (p < 0.01) at 0.5 and 1 m relative to that under the CCA-treated boards, respectively, probably due to the horizontal migration (~0.5 m) of Cu after long-term entry into the soils, as with Cr and As. The distribution pattern of Cu under the ACQ and CA treatments was similar, in that extremely high Cu contents concentrated in surface soils under the boards, while a sharp reduction appeared at distances of 0.5 and 1 m away from the boards (Figure 3c).
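For clarity, the percentage decreases with distance reported in this subsection follow the usual relative-change formula; the short sketch below reproduces two of the figures quoted above from the published means (this is only a worked check, not part of the original analysis).

```python
# Relative decrease with horizontal distance, reproduced from the mean values
# reported above for the CCA-treated boardwalks (mg/kg at 0 m vs. at distance).
def percent_decrease(at_zero_m, at_distance):
    return 100.0 * (at_zero_m - at_distance) / at_zero_m

print(percent_decrease(122.54, 62.56))   # Cr at 1 m: ~48.95 %, as stated in the text
print(percent_decrease(211.48, 30.49))   # As at 1 m: ~85.58 %, as stated in the text
```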
Vertical Distribution of Metal(loid) Concentrations
Compared with the background profile, Cr was mainly enriched in the 0-5 cm soil layer and decreased with increasing soil depth in CCA-treated profiles ( Figure 4). Detailed information was provided in the Supplementary Materials (Table S3). The Cr content exhibited significant increases of 249% (p < 0.01), 154% (p < 0.01), and 56% (p < 0.05) at depths within 0-2, 2-5, and 5-10 cm relative to background samples at the same depth, respectively. The mean soil Cr content of each soil layer indicated that Cr might leach out from the CCA-treated woods into soils deeper than 30 cm. The average Cr content of the soils under ACQ-and CA-treated boardwalks gradually decreased with increasing soil depth in contrast with a gradual rise in the background soil profiles (Figures 4b and 4c). For the CCA plus CA treatment, the Cr contents decreased with the increasing soil depth within the profiles except for CCA plus CA-2.
It is obvious that the As content was mainly enriched at a depth of 0-10 cm in the soils under CCA-treated boards compared to that in the background profile and gradually decreased to the background value with increasing soil depth ( Figure 5). Detailed information was provided in the Supplementary Materials (Table S4). The average As content at depths of 0-2 cm, 2-5 cm, 5-10 cm, 10-20 cm, and 20-30 cm reached 20, 16, 13, 4, and 4 times as high as that in the background content at the corresponding depths, respectively. The mean As contents in the soils under the ACQ-and CA-treated boardwalks were close to the background value and did not change significantly with increasing soil depth, except for the ACQ-2 profile. The As content in the CCA plus CA-treatment profiles decreased with increasing soil depth, except for CCA plus CA-2. Compared with the soil risk screening values, the As contents in the profiles of the ACQ-and CA-treatments were close to each other except for the ACQ-2 profile, which was less hazardous to the environment. The As contents in the CCA-treatment and the CCA plus CA-treatment far exceeded the soil risk screening values, and therefore posed a high environmental risk.
In the background profiles, Cu concentrations varied slightly with soil depth, ranging from 4.95 to 23.89 mg/kg (Table S5). In general, the Cu concentration within the soil profiles under the four treatment boardwalks decreased with increasing soil depth and dominantly accumulated in the surface soil layer at a depth of 5 cm or shallower (Figure 6). In the soil profiles under the CCA-treated boardwalks, the Cu concentration in the 2-5 cm soil layer was higher than that in the 0-2 cm layer, indicating that Cu might migrate downward owing to prolonged rainwater leaching over 10 years. Among all profiles, the highest Cu concentration occurred within the CCA plus CA-1 profiles. Overall, the Cu concentrations in the profiles under all four treatments did not exceed the soil risk screening values and therefore had a low environmental hazard.
Metal Fractionation and Mobility in Vertical Directions
In the background profile, Cr was dominated by the residual fraction, accounting for 92.99-97.17% of the total content (Figure 7a), and the percentages of each fraction were constant across the profiles. Detailed information about the Cr content of each fraction in the soil profiles was presented in the Supplementary Materials (Table S6). Overall, the proportions of each fraction in the soil profiles with four preservative treatments were in the following order of magnitude: residual > oxidizable > reducible > exchangeable, indicating that the residual fraction was still dominant and expressed an increasing trend with increasing profile depth, although the Cr in the residual fraction decreased compared with the background profiles (Figure 7b-e). In addition, the oxidizable fractions in the CCA and CCA plus CA treatment soil profiles were 4.35-38.21% and 4.74-15.71%, respectively, which significantly increased compared to the background profile and other treatment profiles. The oxidizable Cr in the CCA treatment profile decreased with increasing soil depth.
The residual fraction was the main form of As within the background profile, accounting for 82.70-95.86% of the total content. The oxidizable fraction followed with 2.42-15.81%, while the reducible and exchangeable fractions accounted for a small percentage (Figure 8a). The speciation distributions of As in the profiles under the ACQ and CA treatments were similar to those in the background (Figure 8c,d). In contrast, the percentage of non-residual fractions in the profiles under the CCA and CCA plus CA treatments increased significantly (Figure 8b,e). In the soil profile under CCA treatment, 36.63-87.22%, 11.01-27.84%, 1.55-28.11%, and 0.22-17.59% of residual, oxidizable, reducible, and exchangeable fractions were observed, respectively. Detailed information about the As content of each fraction in the soil profiles was presented in the Supplementary Materials (Table S7). Overall, the proportion of residual fractions increased with increasing soil depth for all profiles, and the other fractions decreased with increasing depth.
In the background profiles, the Cu speciation in soil horizons was predominantly in the residual and oxidizable fractions, accounting for 83.64-93.88% and 5.46-15.82% of the total content, respectively, while the proportions of the reducible and exchangeable fractions were deficient: less than 1% (Figure 9a). The non-residual fractions in the four soil profiles under the preservative treatments decreased with increasing soil depth, and the distribution of each fraction in the profiles was variable: the fraction of Cu in soil profiles under the CCA and ACQ treatments was present in the residual and oxidizable states (38.67-91.54% and 8.37-59.14% (CCA-5); 67.66-91.91% and 7.94-31.03% (ACQ-3) of the total content, respectively). The distribution of reducible and exchangeable states in the CA-6 and CCA plus CA-3 profiles increased significantly (Figure 9b-e). More details about the Cu content of each fraction in the soil profiles were presented in the Supplementary Materials (Table S8).
In general, Cr, As, and Cu in soils mainly appeared as residual fractions in all profiles and increased with depth. The proportion of non-residual heavy metals increased in soils after the preservative-treated trestles had been established and decreased with increasing soil depth in the profiles. Meanwhile, the ratio of residual and non-residual fractions decreased with the in-service time of trestles.
Factors Affecting the Distribution of Cr, As, and Cu in Soils
Although the metal(loid)s content of the preservative-treated boardwalks differed in retention in diverse contexts, it was still observed that the treatment with the highest initial concentration had the highest retention, the preservative wood that had been used for a long time retained less metal than those with short-term use from the same batch, and the boardwalks that had been rained on contained less metal(loid) than those that had not been rained on. This result indicated that the loss of metal in the plank road was related to the initial treatment, the in-service time, and the use scenario [45]. In addition, the distribution of metal(loid)s in the soil along the edge of the preserved treatment boardwalk might be influenced by a combination of many factors.
Preservatives Used for Wood Treatment
Elevated levels of Cr, As, and Cu were observed in soils under different boardwalks compared to background concentrations. The differences in metal(loid) contents in the surface soils were mainly attributed to the differences in preservative types in boardwalks.
The levels of Cr and As in soils were significantly higher under planks with both CCA-and CCA plus CA-treatments than ACQ-and CA-treatments because ACQ-and CA-treated woods contained only trace amounts of Cr and As. Among the three metal(loid)s, As concentrations were the highest in soils under CCA-treated planks, indicating that As is readily leached from CCA-treated wood [41,68,69] and was retained in the soils. The highest values of Cu content were found in soils under the CCA plus CA treatments, likely as a result of the combined effect of the CCA and CA treatments. The Cu content in soils under the ACQ-treated boards was higher than that under the CCA-treated boards. This result is consistent with a previous study showing that Cu leaching rates were higher in ACQ preservative-treated wood than in CCA-treated wood [31]. In a study to evaluate the loss of different types of preserved wood under field conditions, the leaching rate of Cu was higher in ACQ-treated wood than in CCA-treated wood with different Cu retention levels [32]. The leaching of Cu was no lower in the CA boardwalks than in the other treatments, despite the shorter in-service time, indicating greater leaching of CA preservative-treated wood [34].
Compared to the background profiles, there was a concentration gradient of Cr and As only within the CCA and CCA plus CA profiles, and some of these heavy metals were present in the non-residual fraction. In contrast, the contents of Cr and As within the soil profiles under the ACQ and CA treatments were very low, mainly appearing in the residual fraction, and there was no significant trend with increasing soil depth. The differences in the types of heavy metals exhibited between the profiles were likewise attributed to the different preservative treatments on boardwalks.
Soil Properties
In the SS2 samples, SOM content was positively correlated with the contents of the three elements and reached a significant correlation with Cr and Cu contents (Table 4). It was found that organic matter content was significantly and positively correlated with As and Cr contents, respectively, which was attributed to the fact that organic matter in soils tends to form complexes with heavy metal ions, thus reducing their activity and leading to an increase in soil heavy metal content [66]. Cr, As, and Cu measured in this study exhibited extremely high retention in the surface soils. In contrast, samples collected in areas with sandy soil (average sand content of 95%) contaminated with CCA-treated wood for 5-10 years showed low retention [42]. Usually, Cu is present in a less mobile and biologically effective form in contaminated soils [70]. Only 1-20% of Cu in the soils is bioavailable; however, most Cu is bound to organic matter [71]. Speciation classification studies on Cu in acidic vineyard soils have shown that more than 50% of Cu is organically bound [72]. Therefore, the significant positive correlation between Cr and Cu content and SOM content in the SS2 samples should be due to the formation of complexes between SOM and Cr and Cu, thus reducing bio-effectiveness and mobility and leading to an increase in soil metal content. In the samples of soils at a depth within 0-10 cm, the correlations between metal(loid)s and SOM and soil pH values were not very strong, possibly due to the predominant effect of the extremely high concentration of As, Cr, and Cu in the preservative woods on the corresponding contents in soils. By observing the vertical distribution of Cr, As, and Cu in the profile under the CCAtreated stack, it was found that As was able to migrate significantly and was enriched at soil depths of 0-10 cm, while Cr and Cu were enriched only at 0-5 cm. Furthermore, the speciation distribution of As in the CCA-treated profiles dramatically differed from that of Cr and Cu. In this study, As was less susceptible to transform from an unstable to a stable fraction than Cr and Cu and was the most longitudinally mobile among the three elements. This finding corresponds with those of previous studies, which revealed that when Cr, As, and Cu entered the environment from CCA-treated wood, especially into the soil environment, As was more mobile than Cr and Cu [10,33,73,74].
CCA wood leaching experiments showed that As in leachate was favorable in the form of H 2 AsO 4 − and HAsO 4 2− . However, when As entered the soil, As in the leachate was present as As(III) [69]. Adsorption and oxidation reactions of As(III), which is more soluble and mobile than As(V) in soils, are two crucial factors affecting the fate and transport of As in the environment [75]. First, the addition of organic matter may enhance the release of As and increase the migration rate [75][76][77]. Second, As remains soluble in reducing environments, unlike other heavy elements. Anaerobic conditions in soils, as well as increasing pH and decreasing Eh, can promote both the release and migration of As [78,79]. The greater mobility of As in this study might be due to the high organic mass in soils. Once the organic matter increasingly decomposes and the Eh value might decrease, the quantity of As adsorbed initially on the oxide surface is desorbed, and its mobility is enhanced.
In this study, the residual fraction of Cr dominated all contaminated soil profiles, which was consistent with the results of a previous study [80]. After the entrance of exogenous Cr into the soils, the water-soluble and exchangeable fractions of Cr recovered to the control level after six weeks [81]. In this study, the vertical mobility of Cr was low, in accordance with previous studies that have also shown that Cr is the most stable element in preservative wood [82][83][84][85]. Most of the Cr(VI) was reduced to Cr(III) in aged wood [69]. In nature, Cr often exists as Cr(III) (Cr 3+ , CrO 2 − ) and Cr(VI) (Cr 2 O 7 2− and CrO 4 2− ) [86]. In soils, Cr(VI) is difficult for soil colloids to absorb, so it has high activity, while Cr(III) is easily adsorbed by soil colloids, so its activity is low [87]. Usually, Cr(VI) is readily converted to Cr(III) via biological and chemical reactions in the natural environment. In addition, natural substances such as soil organic matter, Fe(II), microorganisms, and decomposition products of peroxide compounds such as aldehydes, may reduce Cr(VI) [88]. Therefore, the higher the soil organic matter content is, the higher the capacity and rate of Cr(VI) reduction [89]. Here, Cr may likely be leached out from the CCA-treated wood as Cr(III). Furthermore, the surviving Cr(VI) would also be quickly converted to Cr(III) after entering the soil, so that Cr in the profile mainly exists as less reactive Cr(III) with a low migration rate.
In the profiles under CCA- and ACQ-treated woods, Cu existed mainly in residual and oxidizable fractions. Cu in preservative-treated wood took the form of Cu(II) regardless of the in-service time [90], and Cu mobility and bio-effectiveness in soils are largely controlled by the sorption-desorption behavior of organic and inorganic colloids [91]. The low rate of Cu(II) desorption in soils with high organic matter content is due to the ability of organic matter to form complexes with Cu, which enhances the stability of Cu in soils [67,[92][93][94]. The proportion of reducible and exchangeable Cu in soils under CA- and CCA plus CA-treated planks was significantly higher than that under CCA- and ACQ-treated planks, which was also related to the in-service time of the trestles, i.e., the aging time of heavy metal(loid)s.
The initial sorption reaction on heavy metal(loid)s was rapid after entering the soils, usually within minutes to hours, and was often followed by a long-term response with a slow decrease in leachability, exchangeability, bioefficiency, and toxicity. The whole process is called aging [95][96][97][98]. The exogenous water-soluble fractions of heavy metal(loid)s in soil would result in a decrease in the proportion of metal fractions weakly bound to the soil solid phase (i.e., exchangeable fraction) and an increase in the proportion of other more strongly bound fractions [95,99,100]. In this study, the soil profiles under the CA and CCA plus CA treatments were contaminated by the newly installed CA-treated boardwalks, while the CCA-and ACQ-treated boardwalks were used for much longer than the CA-treated trestles. Cu gradually transformed from a highly mobile to a less-migrated fraction in soils after a long aging period.
Debris Flow
In the JNNR, numerous debris-flow gullies have occurred across the Jiuzhaigou Valley [101]. These gullies are frequently impacted by destructive earthquakes, e.g., the Wenchuan earthquake (Ms = 8.0, 2008) and the Jiuzhaigou earthquake (Ms = 7.0, 2017), and a large number of additional geological hazards, e.g., debris flow gullies and landslides, have broken out in the JNNR [102][103][104]. Debris flows may destroy the natural scenery and ecosystems [101], and the original soils under the boards might be washed out or buried by the debris flows in the JNNR. Within the M2 profile, the soil at a depth within 0-10 cm is most likely composed of loose debris and mud brought by debris flows, i.e., the newly formed soil horizons (A and B). The soil at a 10-40 cm depth is brown in color and loamy in texture, of which the soils at depths of 10-20 cm and 20-40 cm are the buried surface horizon (2A) and subsurface horizon (2B), respectively. The 2A horizon has more organic matter accumulation and is darker than the 2B horizon, while the 2B horizon has less soil organic matter content, a finer texture and a more compact structure relative to the 2A horizon (Figure 10). Anomalies in Cr, As, and Cu contents were found in the subsurface soils (10-20 cm) within the M2 profile (Figure 10), which might be because this soil layer was originally a topsoil layer and was later buried by the debris flow deposits. The new topsoil formed on the debris flow materials would have been affected by the CCA- and CA-treated boardwalks, as reflected in the As and Cu enrichment in the topsoil at 0-2 cm depth. Meanwhile, it was presumed that the subsoil layer within the depth of 10-20 cm might have been impacted by the CCA-treated board because of the extraordinarily high Cr, As, and Cu contents (Figure 10). This presumption was also evidenced by the highest organic matter content (27.56%) and low pH (7.36) occurring in this layer within the profile (Figure 10), which are typical features of surface soils (see Section 3.2.1).
Environmental Risks and Biological Toxicity of Preservative-Treated Boardwalks
Cr and As contamination was present in the soils beneath the CCA and CCA plus CA treatment boardwalks, and the Cr content was significantly higher than that in the background soils (p < 0.05) in the JNNR. However, the environmental risk produced by soil pollution depends not only on the heavy metal(loid) concentration, but also on the metal speciation, ecotoxicity of the metals, aging time, and physical and chemical properties of soil.
The presence of Cr in the soil profiles under the CCA-and CCA plus CA-treated woods in this study was mainly in the residual fraction, and most of the Cr might be in the form of Cr(III), as speculated in the previous paper, both indicating the limited mobility and bio-effectiveness of Cr in the soils. Since Cr(III) is not well absorbed in any pathway, the toxicity of Cr is mainly attributed to Cr(VI) [105]. Cr(VI) is a toxic industrial pollutant classified as a human carcinogen [106]. Human exposure to Cr has been reported to occur through respiratory and dermal contact [107,108]. Inhalation of high Cr(VI) can irritate the nasal mucosa and cause nasal ulcers [109]. The main health issues following the ingestion of Cr(VI) compounds in animals are irritation and ulceration of the stomach and small intestine, anemia, sperm damage, and damage to the male reproductive system.
On the other hand, Cr(III) compounds are much less toxic and do not seem to cause these problems [105]. For plants, Cr promotes some plant growth when low doses of Cr are applied [110]. The toxicity of Cr to plants decreases significantly with increasing aging time when high doses are used, and SOM is one of the important factors that promote Cr aging [111]. The prolonged aging time could attenuate the toxicity of Cr on the potential nitrification rate of soil microorganisms [112]. Because Cr originated from CCA-treated boardwalks and the Cr aging time was longer than ten years, coupled with abundant SOM content in the soil in JNNR, it is inferred that the ecological risk of Cr in soils beneath CCA and CCA plus CA-treated boardwalks is low.
As concentrations in the surface soils (0-10 cm) under the CCA- and CCA plus CA-treated trestles were ~19.3 and 4.6 times higher than those in background soils, respectively. As-contaminated soils are ecotoxic and carcinogenic [113]. The toxic effects of As depend on several factors, of which chemical speciation (inorganic or organic) and oxidation state are the most important. Inorganic As is more toxic than organic As, and inorganic As(III) is 2-10 times more toxic than As(V) [114]. Furthermore, As(III) might account for up to 40% of the total As in surface and subsurface environments [115], and As(III) leached from CCA-treated wood has been found in soils [69]. In our study, As was more vertically mobile than Cr and Cu, with enrichment depths of up to 10 cm, and exchangeable fractions of 0.22-17.59% and 2.25-5.63% were observed in the profiles under the CCA- and CCA plus CA-treated boardwalks, so it is presumed that As was mainly present as As(III). As phytotoxicity is also affected by aging: the EC50 threshold for As phytotoxicity increased 1.76-fold for soils aged 5 years compared with those aged 0.25 years [97]. Furthermore, EC50 is highly dependent on soil properties [116]; acidic soils have greater EC50 values than moderately alkaline soils [97]. In this study, the soils under the CCA and CCA plus CA treatments were mostly moderately alkaline, so the corresponding EC50 values are smaller and the toxicity risk is greater. In the JNNR, dermal contact and inhalation of airborne dust from contaminated soil might be potential human exposure pathways. Low-dose, long-term exposure to As can also lead to "arsenic poisoning" in humans [117]. Chronic As poisoning can cause skin damage [118], and inorganic As has the potential to cause skin cancer in humans if ingested over a long period [119]. In addition, lung, liver, bladder, and kidney cancers have also been associated with chronic intake of As [120][121][122].
Cu is an essential trace element for plants and humans, but excess Cu plays a negative role in plant growth and human health [123][124][125]. Excess Cu can impair photosynthetic electron transfer in plant cells [126]. Free copper ions in the body's cells catalyze the production of damaging free radicals [127], causing chest tightness, hemoptysis, nasopharyngeal mucosal congestion, and memory loss in humans. The Cu content of the soils under the CCA plus CA-, ACQ-, CCA-, and CA-treated trestles was 14.7, 10.4, 8.4, and 7.5 times that of the background soil, respectively. As with other metals, the toxicity and bio-effectiveness of Cu mainly depend on the form of Cu rather than the total amount [128]. Cu in preservative-treated wood is Cu(II), regardless of the duration of use, for both conventional and micronized Cu-based preservatives [90]; similarly, Cu is present in soils in the form of Cu(II). In our study, the concentration and speciation distribution of Cu in the vertical profiles showed its limited mobility. In addition, the weakly alkaline soil pH and high organic matter content in the soils of the JNNR were not conducive to downward Cu migration [129]. Exchangeable Cu was only present in the profiles under the CA- and CCA plus CA-treated trestles (0.25-3.09% and 1.02-9.70% of the total, respectively). After a long period of aging, Cu in the soils under the CCA- and ACQ-treated trestles existed mainly in the residual and organic fractions, and the precipitated and organically bound fractions of Cu are generally nontoxic. Therefore, the ecological risk of Cu in the soils under the CCA and ACQ treatments in this study was low. However, since newly installed CA-treated boardwalks leach large amounts of Cu, concern should be raised about Cu contamination in areas with new boardwalks.
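As a quick aid for readers who wish to reproduce the simple ratio comparisons quoted above, a minimal sketch is given below; the helper name enrichment_factor and all concentration values are illustrative placeholders, not data from this study.

```python
def enrichment_factor(concentration_mg_kg: float, background_mg_kg: float) -> float:
    """Ratio of a metal(loid) concentration in soil under a treated trestle to the background soil."""
    return concentration_mg_kg / background_mg_kg

# Hypothetical Cu values under four preservative treatments versus one background value (mg/kg).
background_cu = 20.0
treated_cu = {"CCA plus CA": 294.0, "ACQ": 208.0, "CCA": 168.0, "CA": 150.0}

for treatment, cu in treated_cu.items():
    print(f"{treatment}: enrichment factor = {enrichment_factor(cu, background_cu):.1f}")
```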
In summary, the environmental risk of heavy metal(loid)s in soils under the CCA treatment was the highest compared with the ACQ and CA treatments in this study, largely owing to the significant toxicity and mobility of As; the risks of Cr and Cu were lower than that of As. For the CA treatment, due to the short in-service time, some Cu existed in the soil in a highly bioavailable fraction. Subsequent monitoring of Cu leaching from CA boardwalks and of its migration and transformation in the soil should be continued.
Conclusions
In this study, we determined the heavy metal(loid) content in the existing CCA-, ACQ-, and CA-treated trestles in the JNNR and analyzed the total contents and speciation distribution characteristics in soils. The CCA-treated trestles released large amounts of Cr, As, and Cu into the soils, while the ACQ- and CA-treated trestles released only Cu. The various preservative-treated trestles produce considerable pollution in the underlying surface soil (upper 10 cm), but this pollution is limited in extent and does not exceed 0.5 m in depth. In general, Cr, As, and Cu in the soil were mainly present as residual fractions in all profiles, and the residual fraction increased with depth. As was the most mobile element compared with Cr and Cu, and some As was still present in the profiles as an exchangeable fraction. In addition, the distribution characteristics and migration behavior of Cr, As, and Cu in the vertical direction are influenced by the in-service time of the boardwalks, soil properties (e.g., organic matter content), and geological disasters (e.g., debris flows). Based on the findings of this study, the types of contaminants are expected to narrow from Cr, As, and Cu to Cu alone as ACQ- and CA-treated boardwalks gradually replace CCA-treated boardwalks. This would decrease the total metal content, toxicity, mobility, and bioavailability of the contaminants, thus reducing the environmental risk. Therefore, early and proper disposal of abandoned and replaced old trestles, especially CCA trestles, is recommended. In the future, ACQ- or CA-treated trestles, which release fewer pollutants and pose lower ecological risks, are recommended instead of CCA-treated trestles. New trestles should be located at the sites of the original trestles to the greatest extent possible to avoid expanding the scope of contamination. In addition, long-term Cu contamination monitoring and assessment should be strengthened for new trestles.
|
2023-03-09T16:16:01.811Z
|
2023-03-01T00:00:00.000
|
{
"year": 2023,
"sha1": "a1a68c19c0f754b1025e7a78c3504a8b84311cab",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2305-6304/11/3/249/pdf?version=1678172329",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f36cb010ae86298a44d813d63ef88b308d1a9ee9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
234188420
|
pes2o/s2orc
|
v3-fos-license
|
Group Consensus of Heterogeneous Multiagent Systems with Time Delay
In this paper, a neighbour-based control algorithm for group consensus is designed for a class of hybrid-order heterogeneous multiagent systems with communication time delay. We consider static leaders and active leaders, respectively. The original systems are transformed into new error systems by a change of variables. On the basis of these error systems, applying Lyapunov stability theory and the linear matrix inequality method, sufficient conditions that guarantee the stability of the heterogeneous multiagent systems are obtained. To illustrate the validity of the theoretical results, some numerical simulations are given at the end of the paper.
Introduction
The consensus problem of multiagent systems (MAS) has always been a hot topic in the control field. In recent years, due to the huge leap in electronic technology, the applications of MAS have become extremely extensive, mainly involving mechanical engineering, unmanned aerial vehicles, robot formations, neural networks and so on [1][2][3][4]. In previous studies, each agent had the same dynamic system, and their attributes and traits were identical. But in today's ever-changing era, electronic devices are constantly improving, and the update cycle is becoming shorter and shorter. Therefore, in order to adapt to the development of the times, many scholars have focused on heterogeneity. In a heterogeneous multiagent system, the dynamic models of the agents are not exactly the same, and consensus issues for heterogeneous MAS have been investigated in [5][6][7].
Due to the complexity of practical network environments and the diversity of control systems, it is impossible for all agents to tend towards a single stable state. In view of this, the group consensus problem for heterogeneous MAS has attracted the attention of many academics at home and abroad; most commonly, the heterogeneous systems are composed of second-order MAS in continuous time [8]. In Hu et al.'s study [8], the authors considered heterogeneous MAS with uncertain parameters. There are also heterogeneous systems consisting of first-order and second-order MAS [9]. In Yu et al.'s study [9], the authors investigated group consensus for heterogeneous multiagent networks in a competitive setting. In Wen et al.'s study [10], the authors discussed group consensus for heterogeneous MAS under input saturation; the system was composed of two first-order MAS, with and without nonlinearity, and second-order multiagent systems. In addition to continuous time, there are a large number of works dealing with cluster and group consensus for discrete-time heterogeneous multiagent systems [11]. For example, in Shi et al.'s study [11], based on discrete time, the authors investigated asynchronous group consensus with dynamic interactions for heterogeneous MAS under the message-interaction topology. In Jiang et al.'s study [12], the authors studied couple-group consensus for heterogeneous MAS in discrete time by adopting cooperative-competitive interactions and time lags. In the study by Feng and Zheng [13], group consensus control for first-order and second-order heterogeneous MAS in discrete time was discussed in detail. At present, most research relies on the interactive messages of an agent's neighbours to obtain the relative status information of each agent, but physical factors in the information transmission process, such as transmission channels, may prevent messages between agents from arriving in time.
This requires scholars to take the important factor of communication delay into account in their research. In Wen et al.'s study [14], the authors researched dynamical group consensus for heterogeneous MAS with communication lags. Under a directed message-interaction structure, Li et al. considered group consensus for MAS with sampled and quantized data in detail in [15]. However, in practical applications, many engineering problems are not single linear systems; in order to make the work closer to actual applications, studies of group consensus for heterogeneous MAS must consider not only time lags but also the influence of nonlinear terms. Under parametric uncertainties, Hu et al. researched group consensus for heterogeneous multiagent systems in [8]. In Liu et al.'s study [16], the authors investigated consensus for heterogeneous MAS under fixed and switching topologies. These works enrich the study of heterogeneous multiagent systems.
Inspired by the above-mentioned results, this article studies the group consensus problem for mixed-order heterogeneous MAS with time delay, in which the agents are grouped into two or three portions. In the first part, in the presence of time lags, group consensus is investigated for second-order MAS with and without nonlinearity; in the second part, group consensus is studied for systems consisting of a second-order nonlinear multiagent system together with second-order and first-order MAS in a linear setting. The analysis and conclusions of both parts are obtained primarily by utilizing Lyapunov stability theory, and the linear matrix inequality approach is used in the proofs. Finally, the results are numerically simulated in Matlab, and the validity of the conclusions is further demonstrated. The construction of the remainder of the article is as follows. Section 2 presents the preparatory work, which contains two portions: graph theory and the description of the dynamical systems. There are two parts in Section 3: in the first, under the premise of active leaders in the multiagent systems, a sufficient criterion is provided to achieve group consensus; in the other, in the presence of static leaders, we also obtain a sufficient condition for group consensus of heterogeneous MAS with time lags. In the sequel, numerical modelling and conclusions are expounded in Sections 4 and 5, respectively.
Notations: for simplicity in the proofs of this paper, some mathematical notations are adopted throughout. Suppose R^{n×n} denotes the set of real n × n matrices. Let diag{···} denote a matrix whose elements are all zero except those on the main diagonal. Let R^T signify the transpose of a matrix R. Suppose 1_n denotes a column vector whose elements are all 1.
Graph Theory.
Suppose G = (I, T, C) is a weighted digraph with a set of nodes I = {1, ..., n}, a set of edges T ⊆ I × I, and a weighted adjacency matrix C = [c_ij] ∈ R^{n×n}. An edge of G is denoted by (i, j), which means information flows from j to i. If the edge (i, j) exists in the digraph, the element c_ij is positive, i.e., c_ij ≠ 0 ⟺ (i, j) ∈ T. Let c_ii = 0 for all agents i ∈ I. The set of neighbours of node i is expressed by N_i = {j ∈ I: (i, j) ∈ T}. Then, the Laplacian of the weighted digraph G is defined in the standard way as L = D − C, where D is the diagonal matrix of the row sums of C. By this construction, all row sums of L are zero; therefore, L has a zero eigenvalue corresponding to the right eigenvector 1_n.
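As a concrete illustration of these definitions, the sketch below builds the Laplacian of a small weighted graph and checks the zero row sums and the zero eigenvalue; the adjacency weights are arbitrary values chosen for illustration, not taken from the paper.

```python
import numpy as np

# Weighted adjacency matrix C of a 4-node graph: c[i, j] > 0 means node i receives
# information from node j. The weights below are illustrative.
C = np.array([
    [0.0, 1.0, 0.0, 0.5],
    [1.0, 0.0, 2.0, 0.0],
    [0.0, 1.5, 0.0, 1.0],
    [0.5, 0.0, 1.0, 0.0],
])

D = np.diag(C.sum(axis=1))   # diagonal matrix of row sums (in-degrees)
L = D - C                    # graph Laplacian

print(L.sum(axis=1))                      # every row sum is zero
print(np.round(np.linalg.eigvals(L), 6))  # one eigenvalue is zero (eigenvector 1_n)
```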
Dynamic Systems Description.
In this subsection, a generalized graph G is defined which contains the followers and two active leaders. Without loss of generality, the graph G is divided into two portions: the first subgroup G_1 contains the first m followers, which follow their leader l_1, and the remaining followers belong to the second group G_2, which follows l_2. Let G*_j = (I*_j, T*_j, C*_j) (j = 1, 2) be a subgroup of G* = (I*, T*, C*) with leader l_j for j = 1, 2. The dynamics of follower i can be written as follows, where p_i(t), q_i(t), and u_i(t) ∈ R are the position state, velocity state, and control input of agent i, and f(p_i(t), q_i(t)) denotes a nonlinear function. The dynamics of the leader of the heterogeneous MAS can be constructed as follows, where p*_1(t) ∈ R and q*_1(t) ∈ R are the position state and velocity state of the leader l_1.
The dynamical model of the other followers i can be depicted as follows, where p_i(t), q_i(t), u_i(t) ∈ R and f_i(p_i(t), t) are the position state, velocity state, control input, and the inherent nonlinear term of agent i. The dynamic state of the corresponding leader can be depicted as follows, where p*_2(t), q*_2(t) ∈ R and f(p*_2(t), t) are the position state, velocity state, and the continuous nonlinear function of the leader l_2.
Owing to the presence of time lags, agents may not receive messages from other agents and their leaders in time.
Hence, for agent i, a coupled control protocol is put forward in the following form, where the lag r(t) is time-varying and differentiable. In this subsection, the followers are divided into three groups, and we consider static leaders. The dynamical model of a second-order nonlinear agent i can be written as follows, where p_i(t), q_i(t), u_i(t) ∈ R and f_i(p_i(t), t) are the position state, velocity state, control input, and nonlinear part of agent i, respectively. The dynamical model of a second-order (linear) agent i can be described as follows, where p_i(t), q_i(t), and u_i(t) ∈ R are the position state, velocity state, and control input of agent i, respectively. The dynamical model of a first-order agent i can be described as follows, where p_i(t) ∈ R and u_i(t) ∈ R are the position state and control input of agent i. The control protocols for systems (6)-(8) are given below, where k_1, k_2, and k_3 are all positive constants standing for the positive coupling strengths, respectively, p_{σ_j} represents the consistent position equilibrium of the group in which the j-th agent resides, and σ_i = 1, 2, 3.
For simplicity of presentation, we focus on the one-dimensional case. However, the results can be generalized to any higher-dimensional space by applying the properties of the Kronecker product, denoted by ⊗. A simulation sketch of such a delayed leader-following protocol is given below.
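Because the displayed dynamics and protocol equations did not survive extraction, the sketch below simulates only a generic delayed, neighbour-based leader-following protocol for a single second-order group; the topology, gains and delay value are illustrative assumptions, not the protocol (5) of this paper.

```python
import numpy as np

# Three second-order followers tracking a static leader at position 1.0 using delayed
# neighbour and leader information (an illustrative protocol, not Eq. (5)).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # follower adjacency (assumed)
b = np.array([1.0, 0.0, 1.0])                                  # leader pinning gains (assumed)
k_p, k_v, tau, dt, steps = 2.0, 1.5, 0.1, 0.01, 4000
delay = int(tau / dt)
p_leader = 1.0

p = np.zeros((steps + 1, 3))
q = np.zeros((steps + 1, 3))
p[0] = np.array([0.5, -0.3, 0.2])                              # arbitrary initial positions

for t in range(steps):
    td = max(t - delay, 0)                                     # index of the delayed state
    u = np.zeros(3)
    for i in range(3):
        u[i] = sum(A[i, j] * (p[td, j] - p[td, i]) for j in range(3))
        u[i] += b[i] * (p_leader - p[td, i]) - k_v * q[td, i]
        u[i] *= k_p
    q[t + 1] = q[t] + dt * u
    p[t + 1] = p[t] + dt * q[t]

print(p[-1])   # for these gains and this small delay, positions settle near the leader at 1.0
```

Analogous loops with group-specific equilibria reproduce the two- and three-group behaviour illustrated by the simulation figures in Section 4.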
For the portion of agents consisting of second-order agents and second-order nonlinear agents, the adjacency matrix C is partitioned accordingly. Suppose L_{S1} and L_{S2} signify the Laplacian matrices of the second-order agents in the linear and nonlinear cases, respectively; then the overall Laplacian matrix can be obtained accordingly. Remark 2. For the portion of agents consisting of second-order agents with or without nonlinearity and first-order agents, C is partitioned analogously. Presume L_{s1}, L_{s2}, and L_f are the Laplacian matrices of the second-order agents with nonlinearity, the second-order linear agents, and the first-order agents, respectively; then L can be depicted accordingly. Next, we give the definitions of two-group and three-group consensus for heterogeneous multiagent systems; a similar definition can be given for the multigroup consensus problem.
Definition 1.
The group consensus of the heterogeneous multiagent systems is said to be achieved if the states of the followers tend to the states of the leaders in the corresponding limit sense. Assumption 1 (see [13]).
Lemma 2 (see [15]). For arbitrary constant vectors a, b ∈ R^n and a positive definite matrix R ∈ R^{n×n}, the following inequality holds. Lemma 3 (see [14]) (Schur complement). Given a symmetric block matrix Δ with blocks Δ_11, Δ_12, and Δ_22, the following conditions are equivalent:
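Since the displayed matrix of Lemma 3 was lost in extraction, the standard statement of the Schur complement lemma, which is presumably what is intended here, is recorded below for reference.

```latex
\[
\Delta=\begin{pmatrix}\Delta_{11} & \Delta_{12}\\ \Delta_{12}^{T} & \Delta_{22}\end{pmatrix}<0
\;\Longleftrightarrow\;
\Delta_{22}<0,\;\; \Delta_{11}-\Delta_{12}\Delta_{22}^{-1}\Delta_{12}^{T}<0
\;\Longleftrightarrow\;
\Delta_{11}<0,\;\; \Delta_{22}-\Delta_{12}^{T}\Delta_{11}^{-1}\Delta_{12}<0 .
\]
```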
Active Leaders.
In systems with active leaders, the models change as follows. Defining the error variables accordingly and using the characteristics of L, (1) and (2) can be rewritten in the error form given below.
Theorem 1. Group consensus of systems (24) and (25) can be achieved under control protocol (5) if the following condition holds:
where R_1 and R_2 are both arbitrary matrices; τ_M is a constant; * denotes the symmetric elements of the matrix.
Proof. According to Lyapunov functional theory, we construct Lyapunov functions as follows:
Here P_1, P_2, Q, and R are all positive definite matrices. Taking the derivative of V(t) along (26), we obtain the expression below. By Lemma 2, the cross terms can be bounded. According to Lemma 1, (32) becomes the corresponding inequality. Thus, the derivative of V_2(t) can be written accordingly. As is known from Lemma 2, the derivative of V(t) then takes the stated form. Therefore, we can see that the second-order heterogeneous MAS can attain group consensus if the above-mentioned systems satisfy the condition given in Theorem 1.
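The displayed functional did not survive extraction; the block below records only the generic shape of Lyapunov-Krasovskii functional typically used for delayed error systems of this kind, as an assumption about its general form rather than the exact functional of this proof.

```latex
\[
V(t)=x^{T}(t)Px(t)
+\int_{t-\tau(t)}^{t}x^{T}(s)Qx(s)\,\mathrm{d}s
+\int_{-\tau_{M}}^{0}\!\int_{t+\theta}^{t}\dot{x}^{T}(s)R\dot{x}(s)\,\mathrm{d}s\,\mathrm{d}\theta,
\qquad P,\,Q,\,R\succ 0 .
\]
```

Differentiating such a functional along the error dynamics and bounding the cross terms with Lemma 2 is what produces the linear matrix inequality conditions of Theorems 1 and 2.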
Static Leaders.
In systems with static leaders, the models change as follows. Defining the error variables accordingly and using the characteristics of L, (6)-(8) can be rewritten in the error form given below.
Theorem 2. Group consensus of systems (39)-(41) can be attained under control protocol (9) if the following criterion holds:
where R_5 and R_6 are both arbitrary matrices; P_1, P_2, P_3, Q_1, Q_2, Q_3, and R are positive definite matrices; τ_M is a constant; * denotes the symmetric elements.
Proof. Through Lyapunov functional theory, one sets the Lyapunov function V = V_1 + V_2 + V_3 + V_4 + V_5 + V_6 + V_7 as follows. Therefore, if the error system (43) satisfies the above condition given in Theorem 2, the heterogeneous MAS can reach group consensus.
Figure 1: The topology structure of the followers and leaders.
Active Leaders.
Under the conditions f_i(p_i(t), t) = 0.15 sin(p_i(t)), k = 2, and τ_M = 0.1, when the agents belong to the second-order linear and nonlinear heterogeneous MAS and the topology of the multiagent systems is given by Figures 1 and 2, the error curves of the positions and velocities are shown in the corresponding figures.
Static Leaders.
When k_1 = 1, k_2 = 2, k_3 = 1, τ_M = 0.1, and f_i(p_i(t), t) = 0.15 sin(p_i(t)), for the agents of the first-order, second-order linear, and second-order nonlinear heterogeneous MAS, the corresponding state trajectories are shown in the figures below. From Figures 3 and 4, it can be seen that all trajectories of the agents tend to the equilibrium position; in other words, every error curve goes to zero. From Figure 5, it can be observed that all trajectories of the agents belonging to the same group tend to an identical line. As can be seen from Figure 6, all curves converge to a straight line; these graphs therefore illustrate that the systems achieve group consensus.
Conclusions
Based on neighbours' state information, the group consensus problem of heterogeneous MAS with communication time lag was considered, and the corresponding control protocols were designed. For active and static leaders, we considered the two-group and three-group problems for the multiagent systems. Via Lyapunov stability theory and the linear matrix inequality method, sufficient conditions for the heterogeneous MAS with time delay to achieve group consensus were obtained. Finally, simulation examples were given to show the effectiveness of the obtained results.
Data Availability
The data used to support the findings of this study are available from the corresponding authors upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
|
2021-05-11T00:03:36.407Z
|
2021-01-19T00:00:00.000
|
{
"year": 2021,
"sha1": "40c3505f89cbc5d2bba10809723e4d4e6e1e27c4",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2021/8834882.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d1ce25eccd264f01b98527f0dcf8edabada8bb10",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
9628246
|
pes2o/s2orc
|
v3-fos-license
|
Self-catalysed aerobic oxidization of organic linker in porous crystal for on-demand regulation of sorption behaviours
Control over the structure and properties of synthetic materials is crucial for practical applications. Here we report a facile, green and controllable solid–gas reaction strategy for on-demand modification of a porous coordination polymer. Copper(I) and a methylene-bridged bis-triazolate ligand are combined to construct a porous crystal consisting of both enzyme-like O2-activation sites and an oxidizable organic substrate. Thermogravimetry, single-crystal X-ray diffraction, electron paramagnetic resonance and infrared spectroscopy showed that the methylene groups can be oxidized by O2/air even at room temperature via formation of the highly active Cu(II)-O2˙− intermediate, to form carbonyl groups with enhanced rigidity and polarity, without destroying the copper(I) triazolate framework. Since the oxidation degree or reaction progress can be easily monitored by the change of sample weight, the gas sorption properties of the crystal can be continuously and drastically (up to 4 orders of magnitude) tuned to give very high and even invertible selectivity for CO2, CH4 and C2H6. Controlling and selectively altering the structure of materials opens up the possibility of modulating their physical properties. Here, the authors report a method for altering the properties of a coordination polymer, where the metal centres activate molecular oxygen to oxidize the organic ligand.
Compared with conventional adsorbents, porous coordination polymers (PCPs) are unique for their diversified and tailorable coordination frameworks [1][2][3][4][5][6][7][8][9][10][11]. While tailoring structure/property generally refers to the design/synthesis of new ligands and frameworks, some PCP prototypes can adopt several different metal ions and/or ligands, providing a rational strategy for adjusting the structure/property, albeit only to a limited degree 5,8,[12][13][14]. A few of these prototypes can even form mixed-component (solid-solution) crystals with variable concentration/ratio of functional building blocks, which in principle allows the structure/property to be adjusted more continuously and precisely 15,16. However, so far there is no rational strategy to directly monitor/control the composition of solid-solution frameworks during the synthesis process, because complicated reaction environments involving solvents and/or liquid reactants are generally required for known direct-synthesis and post-synthetic modification (PSM) 3,7,[17][18][19][20][21] methods. It should be noted that the reaction time/feeding ratio can hardly be used as a parameter for precise control over the reaction progress/framework composition, since the two variables have complicated relationships. Single-crystal X-ray diffraction is straightforward for direct visualization of the physical adsorption and chemical reaction events in PCPs 22, but retaining the sample single crystallinity after physical/chemical changes is always a great challenge [22][23][24][25][26], and crystallography is not a technique for quantitative analysis. If a PCP crystal could react with a gas (O2 is a good yet difficult candidate) in the absence of an assistant solvent/liquid, the sample weight or gas pressure can serve as an easily measurable parameter directly and linearly associated with its framework composition.
The selective and 'green' oxidation of organic molecules by air or dioxygen (O 2 ) is fundamental in the biosystem and has long been pursued in industry and chemical sciences [27][28][29] . Because the energy barrier for electron transfer from singlet organic substrates to the triplet O 2 is generally very high 30 , harsh reaction conditions and/or efficient catalysts are usually necessary to activate the O 2 molecule for aerobic oxidization, so that the earth atmosphere can maintain a high O 2 concentration. Natural enzymes are well known as highly selective and efficient catalysts under mild conditions. For instance, the highly efficient O 2 activation centres in various copper proteins, such as galactose oxidase, tyrosinase and dopamine b-monooxygenase, have been extensively studied or used as structural models for synthetic catalysts.
Considering that PCPs generally lack sufficient robustness, reactivity and/or O 2 -activation ability to allow aerobic oxidation of themselves, we designed and synthesized a porous metal azolate framework (MAF) 4 on the basis of Cu(I) and a methylene-bridged bis-triazolate ligand (Fig. 1). Although Cu(I)-based PCPs are very scarce because Cu(I) can be easily oxidized as Cu(II) by air to destroy the original metal-ligand connectivity 31 , azolate derivatives have been demonstrated as suitable ligands for highly stable PCPs even with Cu(I) (ref. 4), and the coordination between Cu(I) and triazolate can be expected to give low-coordinated metal centres similar to those in the O 2 -activating copper proteins. Furthermore, the methylene bridge in a diarylmethane-type ligand is obviously flexible and oxidizable (activated by two aromatic rings) 32 , which could be oxidized in suitable catalytic conditions to form a more rigid and polar ketone group.
Results
Preparation and characterization of the porous crystal. Colourless crystals of the titled compound, MAF-42, were synthesized in its large-pore (lp) form as [Cu4(btm)2]·C6H6 (denoted as C6H6@MAF-42-lp, H2btm = bis(5-methyl-1,2,4-triazolate-3-yl)methane) by solvothermal reaction of H2btm and [Cu(NH3)2]OH in a mixed aqueous ammonia/methanol/benzene solvent. Single-crystal X-ray diffraction analysis (Supplementary Table 1) showed that MAF-42-lp is a three-dimensional (3D) porous coordination framework composed of five independent (two of them located at twofold axes with occupancies of 1/2) Cu(I) ions and two independent, fully deprotonated btm2− ligands in a 2:1 molar ratio. As expected, all btm2− ligands are six-coordinated and the average coordination number of the Cu(I) ions is three (ref. 4). Nevertheless, each Cu(I) ion adopts either the linear, distorted T-shaped or tetrahedral coordination geometry (Supplementary Fig. 1). The two independent ligands have different surrounding environments. The methylene group (C4) of one ligand is adjacent to one two- and two three-coordinated Cu(I) ions, while that (C11) of the other ligand is only adjacent to two three- and one four-coordinated Cu(I) ions (Supplementary Table 2). The two triazolate rings of btm2− are not coplanar because they are linked by an sp3-hybridized methylene C atom. The 3D coordination framework can be regarded as a cross-packing structure of ribbon-like fragments (Supplementary Fig. 2), which retains large 1D rhombic channels (void 37.8%, cross-section size 4.8 × 7.1 to 6.0 × 10.8 Å2) with disordered benzene molecules filled inside. It should be noted that the Cu(I) ions and methylene groups are fully exposed on the pore surface (Fig. 2a). Thermogravimetry (TG) and powder X-ray diffraction (PXRD) measurements of C6H6@MAF-42-lp showed that the benzene molecules can be completely removed below 260 °C to form the small-pore (sp) phase [Cu4(btm)2] (denoted as MAF-42-sp or simplified as MAF-42), which is stable up to 410 °C (Supplementary Figs 3 and 4). The single-crystal structure of MAF-42-sp (Supplementary Table 1) showed a significantly contracted unit cell (−22%) and channel size (void 14.1%, cross-section size 2.4 × 2.6 to 3.5 × 3.7 Å2) because the cross angle of the packed ribbons reduced from 72° to 53° (Fig. 2b) 5,8. In MAF-42-sp, the Cu(I) ions and methylene groups are less exposed on the pore surface; however, their separations are similar to those in MAF-42-lp (Supplementary Table 2). While the coordination bond lengths changed very little (Δmax = 0.022 Å), the structural variation mainly occurred in the ligand conformations and coordination angles (Supplementary Tables 2 and 3). Actually, the ligand-bending directions are reversed, that is, the average torsion angle between the two triazolate rings of btm2− changes from 167.6° in MAF-42-lp to −165.1° in MAF-42-sp (Supplementary Fig. 5). The structure transformation between C6H6@MAF-42-lp and MAF-42-sp can be reversibly triggered by adsorption/desorption of benzene vapour (Supplementary Fig. 4).
Self-catalysed aerobic oxidation. In air, colourless C6H6@MAF-42-lp turns brown and then black quickly (Fig. 3a), which should not originate from Cu(I) to Cu(II) oxidation because the Cu(II) complex is usually blue or green. Electrospray ionization mass spectrometry of the demetalated samples showed that fresh C6H6@MAF-42-lp has only a signal at m/z = 179 (H3btm+), while a new peak at m/z = 193 corresponding to the expected oxidation product bis(5-methyl-1,2,4-triazol-3-yl)methanone (H3btk+) appeared after the sample was exposed to air (Supplementary Fig. 6). The infrared spectrum of the oxidized sample exhibits a strong band at 1,641 cm−1 in the characteristic region of 1,750 ± 150 cm−1 for carbonyl groups (Supplementary Fig. 7). The single-crystal structure of a black crystal obtained by prolonged exposure (1 month) of C6H6@MAF-42-lp to air was measured (Supplementary Table 1). A residual electron peak appeared near one of the two crystallographically independent methylene groups, the one adjacent to the two-coordinated Cu(I) ion, which can be refined as an oxygen atom without any restriction, giving an occupancy of 0.28(5) and a C=O bond length of 1.20(5) Å (Supplementary Fig. 8) 33. These observations demonstrated that the btm2− ligands in C6H6@MAF-42-lp can be readily oxidized by air at room temperature, although the reaction rate is quite slow. The oxidation of MAF-42-sp (almost no colour change after exposure to air at room temperature for several days) is much slower, which may be ascribed to the even smaller pore size and less exposed active sites.
The TG analysis showed that heating microcrystalline MAF-42-sp in an O2 flow at 418 K could be an optimized reaction condition, in which complete oxidation can be achieved after ~8,000 min (Fig. 3b). Meanwhile, mass spectrometry analysis of the effluent showed that water is produced as a byproduct during the oxidation process (Supplementary Fig. 9). Therefore, on the basis of the simple reaction equation (Fig. 1b), the oxidation degree of the crystals can be simply calculated and controlled by the heating time or by monitoring the sample weight. The PXRD pattern of the fully oxidized sample is similar to that of MAF-42-sp (Supplementary Fig. 10), indicating retention of the original framework connectivity. X-ray photoelectron spectroscopy showed that, even on the particle surface of the oxidized sample, only very small amounts of Cu(I) ions have been oxidized to Cu(II) (Supplementary Fig. 11). While some oxidation reactions have been used to modify the organic linkers in PCP crystals, strong and environmentally unfriendly oxidants (such as nitric acid and dimethyldioxirane), as well as organic solvents, are necessary 17,18,[34][35][36]. Compared with the conventional oxidation methods, the solvent-free aerobic oxidation is obviously much greener yet more difficult. Under the same conditions, the free ligand H2btm cannot be oxidized (Supplementary Fig. 12), highlighting the difficulty of covalent redox PSM and the crucial role of the enzyme-like Cu(I) centres. Compared with known direct-synthesis and PSM methods, the aerobic oxidation is also noteworthy for its controllable, solvent-free, solid-gas reaction mechanism, which can be conveniently used to tailor the adsorbent properties. Interestingly, the oxidized samples can be completely reduced back to C6H6@MAF-42-lp by hydrazine (Supplementary Fig. 13).
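Because each oxidized methylene bridge gains one O atom and loses two H atoms (released as water together with the second O of O2), the oxidation degree can be estimated directly from the relative mass gain recorded by TG. The sketch below assumes this idealized stoichiometry, the ligand formula btm2− = C7H8N6 and standard atomic weights; it is an illustrative reconstruction, not the authors' calculation.

```python
# Estimate the oxidation degree of [Cu4(btm)2] from the fractional mass gain measured by TG,
# assuming the idealized reaction -CH2- + O2 -> -C(=O)- + H2O (solid gains O, loses 2 H).
M_CU, M_C, M_H, M_N, M_O = 63.546, 12.011, 1.008, 14.007, 15.999

M_BTM = 7 * M_C + 8 * M_H + 6 * M_N          # btm(2-) = C7H8N6 (assumed formula)
M_HOST = 4 * M_CU + 2 * M_BTM                # [Cu4(btm)2]: two oxidizable CH2 bridges per formula unit
DELTA_PER_LIGAND = M_O - 2 * M_H             # about +14 g/mol of solid per oxidized ligand

def oxidation_degree(relative_mass_gain: float) -> float:
    """Fraction of ligands oxidized, from the fractional mass gain (e.g. 0.024 for +2.4 %)."""
    return relative_mass_gain * M_HOST / (2 * DELTA_PER_LIGAND)

full_gain = 2 * DELTA_PER_LIGAND / M_HOST
print(f"Expected mass gain at 100 % oxidation: {100 * full_gain:.1f} %")       # about 4.6 %
print(f"A +2.4 % mass gain corresponds to ~{100 * oxidation_degree(0.024):.0f} % oxidation")
```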
The structure of a highly oxidized single crystal [Cu4(btm)0.7(btk)1.3] (denoted as O65 to reflect the oxidation degree, 65%) has been successfully measured (Supplementary Table 1). There are residual electron peaks near the two independent methylene groups, which were refined as oxygen atoms to give occupancies of 0.73(2) and 0.57(2) and C=O bond lengths of 1.236(10) and 1.234(11) Å, respectively (Fig. 3c,d). Again, the oxidation degree of the methylene near the two-coordinated Cu(I) is higher. In O65, weak Cu(I)-carbonyl interactions can be observed (Supplementary Table 2), which are consistent with the relatively low wave number of the carbonyl absorption in the infrared spectrum. Such unambiguous crystallography evidence is noteworthy because the harsh reaction condition for covalent PSM generally degrades crystallinity so that the modification degree is not high enough to be determined by crystal-structure analysis 23,37,38. O65 has a slightly smaller and distorted unit cell compared with MAF-42-sp (ΔV/V = −0.6%, Δβ = −1.0°), which can be ascribed to the generation of polar carbonyl groups on the pore surface and dipole-dipole attractive interactions within the host framework of O65. While the unit-cell volume follows C6H6@MAF-42-lp > MAF-42-sp > O65, their organic ligands deviate from the planar conformation by 12.4, −14.9 and −12.9°, respectively (Supplementary Fig. 5). Obviously, oxidation enhances the conjugation degree and planarity of ligands, which are disadvantageous for framework contraction. To further shrink from MAF-42-sp to O65, the coordination bond lengths (Δmax = 0.044 Å) and angles (Δmax = 9.6°) are forced to change more (Supplementary Tables 2 and 3). On the other hand, framework contraction also reduces the planarity of ligands in O65.
To gain more insight into the aerobic oxidation process, we tried to capture some reaction intermediates. In the single-crystal structure of MAF-42 loaded with O2 (denoted as O2@MAF-42) at a low temperature of −140 °C (Supplementary Table 1), although the O2 molecules are highly disordered, it can be seen that the primary adsorption sites are close to the two- and three-coordinated Cu(I) ions (Supplementary Table 2). The presence of Cu(II) species during the aerobic oxidation reaction was confirmed by the axially symmetric signal with g∥ = 2.292-2.348 and the perpendicular signal with g⊥ = 2.089 in the in situ electron paramagnetic resonance (EPR) spectroscopy measured at 200 °C (Fig. 3f) 39. Because the peak intensity at g ≈ 2.0-2.3 originating from the Cu(II) ion is vastly greater than that of the radical O2˙− at g ≈ 2, the latter is difficult to assign. On the other hand, in situ diffuse reflectance-Fourier transform infrared spectroscopy showed absorption bands at 1,140 cm−1 characteristic of the O2˙− species (Fig. 3g) and 458 cm−1 characteristic of the Cu-O coordination bond (Supplementary Fig. 14) 40. The observation of physical adsorption at low temperature and chemical adsorption at high temperature is similar to some other heterogeneous reactions involving gas reactants and solid catalysts 41. Therefore, some key stages of the aerobic oxidation mechanism have been observed, in which O2 molecules first attack the low-coordinated Cu(I) ions, then form the highly active intermediate Cu(II)-O2˙−, reactive enough to break the hydrocarbon C-H bond, and finally return to Cu(I), producing the carbonyl product and water. The reaction can be unambiguously assigned to the Cu(II)/Cu(I) redox couple and a metal-centred four-electron oxidation mechanism, which are commonly expected for natural enzymes and small molecular complexes but have been scarcely confirmed 18,19. The catalytic activity of the low-coordinated Cu(I) sites in MAF-42 for aerobic oxidation of guest reactants was further confirmed by its effectiveness and low activation energy for the corresponding guest reaction (Supplementary Figs 4 and 15).
Regulation of gas adsorption properties. To reveal the structure-property relationship, MAF-42 was oxidized to different degrees by the aerobic oxidation method established above. By virtue of the solid-gas reaction mechanism, the products were obtained in quantitative yields and were used without further workup. Solid-state 13C nuclear magnetic resonance (NMR) spectra showed that the methylene peak at 27 p.p.m. and the carbonyl peak at 169 p.p.m. gradually decrease and increase, respectively, as the oxidation degree increases (Fig. 3h). On the basis of the peak areas of the methylene (2.8 p.p.m.) and methyl (0.8 p.p.m.) groups observed in the solution 1H NMR spectra of the DCl-digested samples (Supplementary Fig. 16), the oxidation degrees of the oxidized samples (hereafter denoted as O53, O74 and O100, respectively) were calculated as 53%, 74% and 100%. PXRD and TG showed that oxidization enhances the hydrophilicity, decreases the thermal stability and reduces the unit-cell volumes of the samples (Supplementary Figs 17 and 18), consistent with the generation of more polar carbonyl groups 4. Single-component CO2 isotherms were measured for MAF-42, O53, O74 and O100 at 195 K to characterize their porosity and framework flexibility/rigidity; all exhibited hysteresis (Fig. 4a and Supplementary Fig. 19a).
The isotherm of MAF-42 showed a direct transition from the nonporous (np) phase (P/P0 < 0.25) to the lp phase (P/P0 > 0.40). The absence of the sp phase in the isotherm of MAF-42 can be explained by its extremely small aperture size (2.2 × 2.6 Å2) and inert pore surface. Differently, O53 and O74 exhibited the expected sp-to-lp transition, as their first-step saturation uptakes are coincident with the theoretical values calculated from the crystal structures of MAF-42 and O65 (Supplementary Table 4). The increased CO2-binding ability of the oxidized samples is consistent with their enhanced pore surface polarity. The second-step saturation uptake and the corresponding pore volume follow O74 (128 and 0.23 cm3 g−1) > O53 (116 and 0.21 cm3 g−1) > MAF-42 (105 and 0.19 cm3 g−1), indicating that oxidization can expand the lp phase of the host frameworks, which was further quantified by the unit-cell volumes of guest-saturated samples (Supplementary Fig. 20). Framework expansion of the lp phase could be ascribed to the enhanced rigidity of the oxidized ligand, which usually increases the bridging length. In the lp phase, the dipole-dipole attraction effect (causing the shrinkage of the sp phase 42) can be eliminated by insertion of guests in the channels 5,8.
More interestingly, as the oxidation degree increases, the gate-opening/closing pressure (defined as the intermediate pressure between two isotherm steps) increases during adsorption (P/P0 = 0.32, 0.35 and 0.74 for MAF-42, O53 and O74, respectively) but decreases during desorption (P/P0 = 0.25, 0.21 and 0.14 for MAF-42, O53 and O74, respectively), widening the hysteresis loop, which demonstrates that oxidization increases the ligand rigidity and the difficulty of the phase transition, and that the oxidized samples are solid-solution frameworks rather than mechanical mixtures 15,43. Further, O100 showed a one-step sorption isotherm with a small saturation uptake of ca 30 cm3 g−1 (Fig. 4a and Supplementary Fig. 19a), confirming that its sp phase has the most shrunken structure compared with the counterparts with lower oxidation ratios and is too rigid to undergo the sp-to-lp transition. Therefore, the oxidized framework exhibited a larger breathing amplitude, energy change and energy barrier during the sp-to-lp transition. Only a couple of examples have demonstrated the control of framework flexibility by PSM of PCPs 42,44, in which different functional building blocks were added into the coordination framework; this always reduced the pore volume, and the regulation was uncontrollable and discontinuous. In our case, the aerobic oxidation has a negligible size effect (methylene and carbonyl are of similar size) but directly increases the ligand/framework rigidity and the pore volume of the lp phase, so that the oxidized framework exhibits enhanced adsorption capacity. More importantly, the solid-gas reaction mechanism allows the modification ratio and gas sorption properties to be continuously and conveniently monitored/controlled.
High-pressure single-component CO2, C2H6 and CH4 sorption isotherms were measured at 298 K (Fig. 4b-d and Supplementary Fig. 19b-d). The room-temperature CO2 sorption isotherms are similar to those measured at 195 K, except that MAF-42 shows the sp-to-lp transition at room temperature. The abnormal temperature-dependent sorption behaviour of the sp phase of MAF-42 can be ascribed to thermal expansion (Supplementary Fig. 4) and/or a decreased diffusion barrier 45. The gradually increased gate-opening pressure (16, 18, 28 and >40 bar for MAF-42, O53, O74 and O100, respectively) and the decreased isotherm slope of the sp-to-lp transition are consistent with the increased framework rigidity of the oxidized samples (Fig. 4e). For CH4, MAF-42, O53 and O74 show type-I adsorption isotherms corresponding to the sp phase (Supplementary Table 4). Compared with CO2, CH4 has a very low boiling point and polarity (Supplementary Table 5), meaning that it interacts with the host framework very weakly and can hardly open the narrow channel or induce the sp-to-lp transition. The variation of the C2H6 sorption isotherms of MAF-42, O53 and O74 resembles that of CO2 at 195 K. The gate-opening pressure is 8 bar for the np-to-lp transition of MAF-42, which is higher than those of 5 and 6 bar for the sp-to-lp transitions of O53 and O74, respectively. The nonporous sp phase of MAF-42 towards C2H6 can be explained by the very large molecular size of the guest (Supplementary Table 5). Because of the over-shrinkage of the host framework, O100 is virtually nonporous for the low-polarity gases CH4 and C2H6.
The finely/drastically tunable gas sorption properties suggest their usefulness for on-demand gas separation. Mixed gas adsorption isotherms were measured to reveal the real gas adsorption selectivities of selected samples with promising isotherms. The CO2/CH4 selectivities of MAF-42 and O100 at 1-10 bar (measured by mixed CO2/CH4 with 40:60 molar ratio) were observed as 28-14 and 700-600, respectively (Supplementary Methods, Fig. 5a-c), meaning that the performance of the crystal can be significantly improved by the aerobic oxidation treatment. The extremely high CO2/CH4 selectivity of O100 should be suitable for purifying CH4 from biogases (CH4 45-65%, CO2 30-50%) 46 and landfill gases (CH4 35-55%, CO2 40-45%) 47 by selective adsorption of CO2.
More interestingly, the real CH4/C2H6 selectivities (measured by mixed CH4/C2H6 with 20:80 molar ratio) were observed to be 500-200 and 1/24-1/17 for MAF-42 and O74, respectively, indicating the catalytic oxidation can drastically invert the CH4/C2H6 selectivity by up to 4 orders of magnitude, which has not been realized by other adsorbents or methods (Supplementary Methods, Fig. 5d-f). The selective adsorption of CH4 by MAF-42 at low pressure is due to its nonporous nature for C2H6 according to the molecular sieving effect. On the other hand, O74 prefers adsorption of C2H6 because it is porous to both gases, and C2H6 has the larger molecular weight and quadrupole moment (Supplementary Table 5). This property may be useful for on-demand purification of different mixture gases (for example, natural gases usually contain CH4 69-96% and C2H6 1-14%) 2,48.
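The selectivities quoted in this and the previous paragraph follow the usual definition for a binary mixture, S(1/2) = (x1/x2)/(y1/y2), with x the adsorbed-phase and y the gas-phase mole fractions. A minimal sketch of that calculation is given below; the uptake numbers are placeholders, not measured values from this work.

```python
def selectivity(q1: float, q2: float, y1: float, y2: float) -> float:
    """Adsorption selectivity of component 1 over component 2 in a binary mixture:
    (adsorbed-phase mole ratio) divided by (gas-phase mole ratio)."""
    return (q1 / q2) / (y1 / y2)

# Hypothetical CO2 and CH4 uptakes (mmol/g) measured from a 40:60 CO2/CH4 mixture.
q_co2, q_ch4 = 1.8, 0.002
print(f"S(CO2/CH4) = {selectivity(q_co2, q_ch4, 0.40, 0.60):.0f}")

# A value below 1 simply means the second component is preferred, i.e., an inverted selectivity.
```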
Conventional adsorbents generally exhibit Langmuir-type isotherms and monotonically decreasing (versus pressure) adsorption selectivity, which is not suitable for some gas separation applications operating at high pressures 49. Some flexible PCPs showing gate-opening phenomena and stepped isotherms may be used to solve this problem; however, the real selectivities are usually much lower than those predicted by the single-component isotherms because the non-preferred guest can also enter the opened channel above the gate-opening pressure 50. We observed that the real C2H6/CH4 selectivity (measured by mixed C2H6/CH4 with 75:25 molar ratio) of O53 significantly increases from 92 to 140 as the C2H6 partial pressure increases from 4.7 to 5.5 bar, and remains 113 at a C2H6 partial pressure of 8.1 bar (Supplementary Methods and Supplementary Fig. 21). The abrupt increase in real adsorption selectivity occurs around the gate-opening pressure of the single-component C2H6 isotherm at 4.7-5.2 bar. This observation indicates that the gate opening is driven by filling of the C2H6 molecules into the newly generated space. In other words, there is no space left for CH4, preventing the commonly observed co-adsorption effect.
Discussion
In summary, we demonstrated that Cu(I) ions with enzyme-like O 2 activation ability can cooperate with a flexible and oxidizable bis-triazolate ligand to fabricate a porous crystal reactive towards molecular oxygen even at room temperature. The methylene bridge of the bis-triazolate ligand provides the key reactivity to molecular oxygen, as well as modifiable framework flexibility and pore surface polarity. Although the free ligand is inert to oxygen, the flexible methylene bridge near the low-coordinated Cu(I) centres in the coordination framework is reactive enough to be oxidized as a more rigid and polar carbonyl group. In situ structural and spectroscopic analyses confirmed that the lowcoordinated Cu(I) centres behave like those in copperproteins during the aerobic oxidation. Compared with known material synthesis and processing methods, the solvent-free aerobic oxidation reaction has a series of advantages, such as green, quantitative yield, easy and precise monitor/control of modification degree, work-up procedure free, and so on. While conventional adsorbents separate adsorbates with fixed selectivities by either sorption affinity difference or molecular sieving effect, the porous crystal consisting of a reactive organic linker with modifiable flexibility and polarity offers a possibility to utilize both mechanisms, as demonstrated by the very high, drastically tunable and even switchable gas sorption selectivities. These results may enlighten future design and construction of multifunctional and controllable porous materials.
Methods
Materials. Commercially available reagents and solvents were used without further purification. The ligand H 2 btm was synthesized according to the literature method 51 .
Measurements. Infrared spectra were obtained from KBr pellets on a Bruker TENSOR 27 FT IR spectrometer in the 400-to 4,000-cm -1 region. Diffused reflection Fourier transform infrared spectroscopy (DR-FTIR) was performed on a Bruker VERTEX 70 spectrometer in the 400-to 4,000-cm -1 region. TG-mass spectra were performed on a hyphenated apparatus of NETZSCH STA 449 F3 Jupiter and NETZSCH QMS 403C Aedo. Elemental analyses (C, H and N) were performed with a Vario EL elemental analyser. TG analyses were performed by using a TA Q50 system. Solid-state 13 C NMR measurements were carried out at ambient temperature on a Bruker AVANCE 400 spectrometer. Solution 1 H NMR measurements were carried out at ambient temperature on a VARIAN Mercury-Plus 300 spectrometer, for which B20 mg of solid samples were digested with sonication in 550 ml of DCl (20 wt% in D 2 O). PXRD patterns were collected (0.02°p er step, 0.06 s per step except for otherwise stated) on a Bruker D8 Advance diffractometer (Cu Ka) at room temperature. Mass spectra were measured by a SHIMADZU LCMS-2010A equipment using an electrospray ionization source with MeOH as the mobile phase. Electron paramagnetic resonance measurements were performed at 9.7 GHz (X-band) using a Bruker BioSpin A300 spectrometer. The spin concentrations in the samples were determined from the second integral of the spectra using CuSO 4 Á 5H 2 O as a standard. The as-synthesized sample (weight of B100 À 200 mg) was placed in the self-made quartz tube and dried for 8 h at 260°C under high vacuum to remove the remnant solvent molecules, sealed with back-filled O 2 before measurements.
Synthesis of materials. A mixture of aqueous ammonia (25%, 4 ml) solution of [Cu(NH 3 ) 2 ]OH (0.025 mol l À 1 ), H 2 btm (0.009 g, 0.05 mmol), methanol (3.0 ml) and benzene (1.0 ml) was sealed in a 15-ml Teflon-lined stainless reactor, which was heated in an oven at 160°C for 72 h. The oven was cooled to room temperature at a rate of 5°C h À 1 . The resulting colourless block crystals were filtered, washed by ethanol and dried in air to give C 6 H 6 @MAF-42-lp (yield ca 83%). Guest-free MAF-42 was obtained by heating C 6 H 6 @MAF-42-lp at 260°C under high vacuum for 24 h. The oxidized samples were obtained by heating MAF-42 at 418 K in the O 2 flow for different times.
X-ray crystallography. Single-crystal diffraction intensities were collected on a Bruker Apex CCD diffractometer with graphite-monochromated Mo Ka radiation or a Oxford Gemini S Ultra CCD diffractometer using mirror-monochromated Cu Ka radiation. Absorption corrections were applied by using the multiscan programme SADABS. The structures were solved by the direct method and refined with a full-matrix least-squares technique with the SHELXTL 6.10 programme package. The occupancies of carbonyl oxygen atoms and guest molecules were obtained by free refinement. Anisotropic thermal parameters were applied to all non-hydrogen atoms of host frameworks except for the oxygen atom in structure obtained by prolong exposure of C 6 H 6 @MAF-42-lp in air. Hydrogen atoms were generated geometrically. Crystal data for the complexes were summarized in Supplementary Table 1. PXRD data for Pawley refinement were collected in 0.02°per step and 3 s per step. Indexing and Pawley refinement of the PXRD patterns was carried out by using the Reflex module of Material Studio 5.0. The patterns were indexed by the TREOR90 method with the aid of unit-cell parameters from single-crystal data. Pawley refinements were carried out with the cell parameters obtained from indexing in space group C2/c. Peak profiles, zero-shifts and unit-cell parameters were refined simultaneously. The peak profiles were refined by the Pseudo-Voigt function with Berar-Baldinozzi asymmetry correction parameters.
|
2018-04-03T04:42:40.642Z
|
2015-02-23T00:00:00.000
|
{
"year": 2015,
"sha1": "092b1be6954ced2f329298a0955b0f2dca3185ea",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/ncomms7350.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "7f1ac9571cd55ba7f572187ffe17494a165c0d2f",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
119065114
|
pes2o/s2orc
|
v3-fos-license
|
Long live the Higgs portal!
In Higgs portal models of fermion dark matter, scalar couplings are unavoidably suppressed by strong bounds from direct detection experiments. As a consequence, thermal dark matter relics must coexist with mediators in a compressed spectrum of dark particles. Small couplings and small mass splittings lead to slow mediator decays, leaving signatures with displaced vertices or disappearing tracks at colliders. We perform a comprehensive analysis of long-lived mediators at the LHC in the context of a minimal dark matter model with a naturally small Higgs portal, also known as the wino-bino scenario in supersymmetry. Existing searches for disappearing charged tracks and displaced hard leptons already exclude tiny portal couplings that cannot be probed by current direct and indirect detection experiments. For larger portal couplings, we predict new signatures with displaced soft leptons, which are accessible with run-II data. Searches for displaced particles are sensitive to weakly coupling mediators with masses up to the TeV scale, well beyond the reach of prompt signals.
I. INTRODUCTION
The hypothesis of thermal Higgs portal dark matter had almost been declared a dead end. If dark matter interacts with the standard model through the Higgs boson, the portal strength is indeed strongly constrained by null results from direct detection experiments [1][2][3][4][5]. A dark matter relic from thermal freeze-out would thus be overabundant today, unless its annihilation rate in the early universe was enhanced by co-annihilation with mediator particles. In Higgs-portal scenarios with fermion dark matter, mediators are a necessary prediction of a UV-complete theory. In this work, we assume that dark matter is a fermion with no charges under the standard model gauge group. Fermionic mediators with renormalizable Higgs couplings induce generally strong dark matter-nucleon scattering through electroweak mixing [6][7][8][9][10]. Mediator triplets or higher multiplets of the weak gauge group lead to non-renormalizable Higgs couplings, which are naturally suppressed by a high cutoff scale [11]. The small portal coupling implies slow decays of the heavy dark states. This can result in a decay length of several centimeters or more, before the mediator decays into dark matter and leptons or hadrons. At the LHC, long-lived mediators thus leave characteristic signatures of displaced vertices or disappearing charged tracks in the detector.
Higgs-portal scenarios with small dark matter couplings are thus a perfectly viable option for thermal dark matter that can be tested at colliders. In this work, we explore the prospects to discover thermal Higgs-portal dark matter through signatures with long-lived mediators at the LHC. We focus on a non-renormalizable Higgs portal with a Majorana fermion singlet and a fermion triplet in the adjoint representation of the weak gauge group. The portal coupling is naturally small and thus evades direct detection bounds. Our model can be UV-completed for instance by a fermion doublet or a scalar triplet. The former construction is known as the wino-bino scenario in the context of supersymmetry [12], the latter is similar to type-II seesaw models for neutrino masses [13,14]. Due to its rich dark matter phenomenology and promising collider signatures, the small coupling regime of this and similar models has recently received increased interest [15][16][17][18][19][20]. We perform a comprehensive analysis of two scenarios: a scalar and a pseudo-scalar Higgs portal. While the scalar scenario has been the focus of most previous collider studies, the phenomenology of long-lived mediators in the pseudoscalar scenario is still largely unexplored. We point out the characteristic differences between both scenarios, regarding their dark matter and collider phenomenology. We propose new signatures with displaced particles and show that they can probe mediator masses well beyond the reach of prompt signatures at the LHC.
This article is organized as follows. In Section II, we introduce our model and discuss the relations between the scalar and pseudo-scalar Higgs portals. Section III is a brief interlude on the interpretation of our model in the minimal supersymmetric standard model (MSSM). In Section IV, we derive constraints on the portal strength from direct detection experiments. Section V is devoted to a detailed discussion of the relic dark matter abundance in different regimes of a small portal coupling. In Section VI, we explore the collider phenomenology of the two scenarios under the assumption of a thermal dark matter candidate. We conclude in Section VII.
II. THE SINGLET-TRIPLET HIGGS PORTAL
We extend the standard model by two self-adjoint fermion fields with vector-like weak interactions, χ S and χ T . Here χ S is a standard-model singlet Majorana fermion and χ T transforms under the weak gauge group as a triplet with zero hypercharge. We assume a discrete Z 2 symmetry, under which χ S and χ T are odd and all standard-model particles are even. The lightest fermion state is stable and a dark matter candidate. In this scenario, there are no renormalizable Higgs couplings to dark fermions. At energies below a cutoff scale Λ, the scalar sector is described by the effective Lagrangian in Eq. (2), where H is the Higgs doublet of the standard model. Gauge-invariant contractions of the fields are implicitly assumed. Higgs couplings to dark fermions are of mass dimension five and thus naturally small at energies well below Λ. Since we will focus on the parameter region with a high cutoff scale Λ ≫ M W , the impact of operators with higher mass dimensions can be neglected. We assume all parameters to be real in order to preserve CP invariance. The dark matter and collider phenomenology of the Higgs-portal couplings κ S and κ T have been extensively studied [21][22][23]. In general, a complete theory that induces a singlet-triplet Higgs portal can also generate singlet-singlet and/or triplet-triplet Higgs couplings. However, since κ S and κ T will play essentially no role in our phenomenological analysis, we neglect them by setting κ S = κ T = 0.
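For orientation, the operator structure described above can be sketched in a conventional normalization; the factors of 1/2 in front of the Majorana mass terms and the overall signs of the operators are our assumptions, not fixed by the description:

\mathcal{L}_{\rm eff} \;\supset\; -\frac{m_S}{2}\,\bar\chi_S\chi_S \;-\; \frac{m_T}{2}\,\bar\chi_T\chi_T \;-\; \frac{\kappa_{ST}}{\Lambda}\,(H^\dagger H)\,\bar\chi_S\chi_T \;-\; \frac{\kappa_{S}}{\Lambda}\,(H^\dagger H)\,\bar\chi_S\chi_S \;-\; \frac{\kappa_{T}}{\Lambda}\,(H^\dagger H)\,\bar\chi_T\chi_T ,

with H the standard-model Higgs doublet and gauge-invariant contractions of the triplet indices implied.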
After electroweak symmetry breaking, χ 0 S and χ 0 T mix through the Higgs portal. The scalar Lagrangian for the neutral fermions is given in Eq. (3). We can choose m T positive without losing generality. The sign of µ is not observable in tree-level processes, so we assume it to be positive as well. Furthermore, we always request m T > −m S , so that the singlet fermion is lighter than the triplet. Due to electroweak mixing, the gauge eigenstates are not mass eigenstates of our model. We define a mass matrix M in the basis of the neutral gauge eigenstates; by diagonalizing M , the physical eigenstates χ and χ h are readily obtained, together with the corresponding mass eigenvalues for the neutral states χ, χ h and the charged states χ ± . For m S > µ 2 /m T , the mass m χ of the lightest state is positive. The physical Lagrangian then follows in the mass eigenbasis (neglecting interactions with two Higgs bosons), with couplings weighted by the mixing angle θ and the weak mixing angle θ w . Singlet-triplet mixing induces scalar Higgs couplings and charged vector currents of the lightest state χ. Neutral currents of χ and χ h are absent due to the Majorana nature of the neutral fermions. In the limit of small mixing, the lightest state is mostly a weak singlet with suppressed couplings to the standard model.
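For reference, the two-state diagonalization described above takes the familiar form sketched below, with µ denoting the Higgs-induced singlet-triplet mixing mass; its normalization in terms of κ ST v 2 /Λ is an assumption:

\mathcal{M} \;=\; \begin{pmatrix} m_S & \mu \\ \mu & m_T \end{pmatrix},\qquad \tan 2\theta \;=\; \frac{2\mu}{m_T-m_S},\qquad m_{\chi,h} \;=\; \tfrac{1}{2}\Big[(m_S+m_T)\mp\sqrt{(m_T-m_S)^2+4\mu^2}\,\Big],\qquad m_c \simeq m_T\ \text{(tree level)} .

In particular, \det\mathcal{M} = m_S m_T - \mu^2 > 0 is equivalent to m_S > µ 2 /m T , which reproduces the positivity condition for the lightest mass m χ quoted above.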
A pseudo-scalar singlet-triplet Higgs portal can be obtained through a chiral rotation of the singlet fermion. Applying this transformation to the Lagrangian in Eq. (3) turns the scalar portal into a pseudo-scalar portal and flips the sign of m S . In the mass eigenbasis, the physical mass terms and interactions change accordingly: the Higgs interactions with one heavy and one light neutral fermion are now pseudo-scalar, while heavy-heavy and light-light couplings remain scalar. The gauge couplings of χ have an axial-vector structure. The chiral rotation also flips the sign of the lightest mass eigenvalue in the spectrum, see Eq. (10). The parameter space m S < µ 2 /m T with negative mass m χ < 0 in the scalar Lagrangian L S thus corresponds to positive mass m χ > 0 in the pseudo-scalar Lagrangian L P . We identify two physical scenarios: the scalar scenario, L S with m S > µ 2 /m T ↔ m χ > 0, and the pseudo-scalar scenario, L P with m S < µ 2 /m T ↔ m χ > 0.
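A minimal sketch of the chiral rotation referred to above, with phase conventions that are our assumption:

\chi_S \;\to\; i\gamma_5\,\chi_S \quad\Longrightarrow\quad \bar\chi_S\chi_S \to -\,\bar\chi_S\chi_S ,\qquad (H^\dagger H)\,\bar\chi_S\chi_T \to i\,(H^\dagger H)\,\bar\chi_S\gamma_5\chi_T ,

so the singlet mass term flips sign while the singlet-triplet Higgs coupling becomes pseudo-scalar; kinetic terms are left invariant, and the charged-current couplings of the light state inherit their axial-vector structure through the rotated mixing.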
It is instructive to study the features of our model in the limit of small singlet-triplet mixing. In this limit, the couplings of the lightest state χ are approximated by Eq. (13). For fixed values of m T and µ, the mixing θ in the pseudo-scalar scenario is smaller than in the scalar scenario, see Eq. (12). Gauge and diagonal Higgs couplings of the lightest state are thus weaker in the pseudo-scalar scenario. To leading order in the mixing, the lightest state χ is mostly a gauge singlet with mass close to m S , while the heavier states χ + and χ h approximately correspond to the charged and neutral components of a weak triplet with masses close to m T . In both scenarios, the mass splitting between the heavier states, ∆m hc , is given by Eq. (15): the first contribution is due to singlet-triplet mixing and the second contribution due to electroweak loop corrections [24]. The mass difference between the charged state and the lightest state, ∆m c , is different in the two scenarios, neglecting electroweak corrections and assuming small fermion mixing. In Fig. 1, we show two typical scenarios of dark fermion spectra. For very small singlet-triplet mixing, electroweak corrections dominate the mass splitting ∆m hc and the charged fermion is the heaviest state (right panel). For larger mixing, χ + and χ h become degenerate in mass and eventually flip their positions (left panel). The spectra in the scalar and pseudo-scalar scenarios look very similar, unless the mixing is large. The mass hierarchy plays a crucial role for the collider phenomenology of the dark fermions (see Section VI).
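In the small-mixing limit, the relations described in words above can be sketched as follows; the scenario-dependent denominators and the size of the electroweak term are assumptions chosen to be consistent with the numbers quoted later:

\theta_{\rm S}\;\simeq\;\frac{\mu}{m_T-m_S},\qquad \theta_{\rm P}\;\simeq\;\frac{\mu}{m_T+m_S},\qquad \Delta m_{hc}\;\simeq\;\theta\,\mu\;-\;0.16\ \text{GeV},

with the first term in ∆m hc the mixing contribution and the second the electroweak loop contribution. At tiny mixing the loop term dominates and ∆m hc ≈ −160 MeV, while at larger mixing the two terms compensate and χ h and χ + flip their mass ordering, as described above.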
III. IN CASE OF SUPERSYMMETRY
Our model can be interpreted as the so-called wino-bino scenario in the MSSM with conserved R parity, where the superpartners of the gauge bosons reside around the electroweak scale and higgsinos are much heavier. Here we briefly discuss this supersymmetric realization of our Higgs-portal scenario. We adopt the notation for the wino-bino scenario from Ref. [19] and relate it to ours. After integrating out the higgsinos, the phenomenology is described by an effective Lagrangian, where W A and B are the wino and bino fields and C denotes the Wilson coefficients of the dimension-five operators. 2 This Lagrangian is to be compared with the Higgs-portal Lagrangian in Eq. (2). The mass parameters m S and m T in our model correspond to the gaugino masses M 1 and M 2 . The Higgs-portal coupling µ/v is related to the higgsino mass parameter µ through the tree-level matching, where tan β is the ratio of vacuum expectation values of the two Higgs fields and M W is the mass of the W boson. The cutoff Λ can thus be interpreted as the higgsino mass µ, provided that κ ST and sin(2β) are not suppressed. For tan β = 1, a portal coupling of µ/v = 0.01 corresponds to a higgsino mass of µ = 1.4 TeV. As we will see in Section VI, collider searches with long-lived mediators probe parameter regions with µ/v ≲ 0.01 and thus higgsino masses well beyond resonant production at the LHC.
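As a quick numerical illustration of the matching quoted above, one can check that a portal coupling µ/v = 0.01 at tan β = 1 indeed corresponds to a higgsino mass near 1.4 TeV. The sketch below assumes the standard tree-level wino-bino mixing induced by higgsino exchange, µ ≈ M W ² tan θ W sin(2β)/µ_higgsino; this prefactor is our assumption and only illustrates the parametric scaling.

import math

# Electroweak inputs (GeV and dimensionless); standard PDG-like values.
M_W   = 80.4          # W-boson mass
v     = 246.0         # Higgs vacuum expectation value
sw2   = 0.231         # sin^2(theta_W)
tan_w = math.sqrt(sw2 / (1.0 - sw2))

def portal_coupling(mu_higgsino, tan_beta=1.0):
    """Dimensionless Higgs-portal coupling mu/v from the assumed matching relation."""
    sin2b = 2.0 * tan_beta / (1.0 + tan_beta**2)
    mu_mix = M_W**2 * tan_w * sin2b / mu_higgsino   # induced singlet-triplet mixing mass [GeV]
    return mu_mix / v

# Scan a few higgsino masses and print the induced portal coupling.
for mu_h in (500.0, 1400.0, 5000.0):
    print(f"mu_higgsino = {mu_h:6.0f} GeV  ->  mu/v = {portal_coupling(mu_h):.3g}")
# For mu_higgsino ~ 1.4 TeV and tan(beta) = 1 this gives mu/v ~ 0.01,
# consistent with the number quoted above.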
Using these relations, one can directly interpret the results of our work in the wino-bino scenario. The pseudo-scalar scenario is a special case of the wino-bino scenario in the complex MSSM with flipped bino mass M 1 → −M 1 . Signatures of long-lived winos have been discussed in the scalar scenario [18,25,26].
To the best of our knowledge, a comprehensive analysis of LHC signatures with displaced particles has not been performed yet. The phenomenology of mediator decays in the pseudo-scalar scenario has been much less explored [27]. Our work can serve as a framework to systematically search for supersymmetric gauginos in the limit of higgsino decoupling. Notice that in the MSSM the dark matter phenomenology and the resulting collider signatures can be altered by the presence of other light superpartners or additional scalars. The interpretation of our results in a specific scenario should thus be done with care.
IV. DARK MATTER SCATTERING OFF ATOMIC NUCLEI
From now on, we will interpret the lightest neutral fermion χ as a dark matter candidate. An important bound on our model is derived from searches for dark matter scattering off atomic nuclei in direct detection experiments. Since χ does not couple to the Z boson, spin-independent scattering is mediated by Higgs boson exchange via a scalar current. 3 Thanks to the small momentum transfer, the interaction of dark matter with the quarks inside the nucleon can be described by an effective Lagrangian, where G F is the Fermi constant and M h is the Higgs mass. Notice that the Higgs coupling to two dark matter states is not affected by a chiral rotation and is scalar, regardless of whether the Higgs portal has a scalar or pseudo-scalar structure. In terms of the effective interaction, the coupling of dark matter to protons can be expressed through the nucleon matrix elements, and analogously for neutrons with p → n. The contribution of a particular quark flavor to the proton mass is m p f (p) Tq = ⟨p| m q q̄q |p⟩, determined experimentally. The cross section for spin-independent dark matter scattering off a nucleus with Z protons and A nucleons in total is given in terms of the cross section σ n for spin-independent dark matter-nucleon scattering and the reduced dark matter-nucleon mass µ n . The currently strongest upper bound on nucleon scattering has been obtained by the Xenon1T collaboration [28]. In our model, this translates to a strong bound on the Higgs coupling to dark matter, which we show in Fig. 2 as a function of m χ . For instance, for a dark matter mass of 100 GeV we obtain a correspondingly strong bound on the coupling, as can be read off Fig. 2. For sizeable mass splittings m T − m S ≫ µ, fermion mixing is small. In this regime, the Xenon1T results can be interpreted as an upper bound on the mass splitting between the mediators; the mediator states are thus nearly degenerate in mass. For smaller mass splittings m T − m S ≈ 15 − 30 GeV, as favored by co-annihilation, µ/v must be suppressed to evade direct detection. In this regime, fermion mixing can still be close to maximal. As µ/v is lowered below the Xenon1T bounds, fermion mixing decreases and dark matter decouples from the standard model. Future direct detection experiments are expected to probe even smaller dark matter-nucleon scattering cross sections. If they were able to test rates comparable to coherent neutrino scattering (see Fig. 2), this would probe Higgs couplings at the permille level.
FIG. 2: Bounds from Xenon1T [28] on the Higgs coupling to dark matter, as a function of the dark matter mass. The grey region is excluded at 90 % confidence level. The results have been obtained using micrOMEGAs [29].
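For orientation, the nuclear cross section referred to above is conventionally expressed through the per-nucleon cross section; a sketch of the standard isospin form (an assumption, motivated by Higgs exchange coupling almost equally to protons and neutrons) is:

\sigma^{\rm SI}_A \;=\; \frac{\mu_A^2}{\mu_n^2}\,\frac{\big[Z f_p + (A-Z)\, f_n\big]^2}{f_n^2}\,\sigma_n \;\approx\; \frac{\mu_A^2}{\mu_n^2}\,A^2\,\sigma_n \qquad (f_p\approx f_n),

with µ A (µ n ) the reduced mass of the dark matter-nucleus (nucleon) system and f p,n the effective nucleon couplings defined above; the coherent A² enhancement is what makes xenon targets particularly sensitive.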
V. DARK MATTER ANNIHILATION AND RELIC ABUNDANCE
The interpretation of the lightest state as dark matter strongly depends on the Higgs-portal strength µ/v. We consider couplings that are large enough for dark matter to be in thermal equilibrium with the primordial plasma before freeze-out. The relic abundance is then determined by the freeze-out of processes that change the dark matter number density. In Table I, we show the annihilation and scattering processes relevant around the freeze-out temperature and their dependence on the model parameters µ/v and θ. Throughout our analysis, we consider dark matter masses below 1 TeV, corresponding to the region that can be probed at the LHC. In this mass range, the effect of Sommerfeld enhancement on pair annihilation is mild and will be neglected in our analysis [30]. Scattering off standard-model fermions keeps the dark fermions in thermal equilibrium. Co-scattering, mediator decays and inverse decays ensure chemical equilibrium among the dark fermions.
TABLE I: Annihilation and scattering processes relevant around freeze-out and their scaling with the model parameters µ/v and θ (columns: process, scaling).
A. Pair annihilation
Dark matter pairs can annihilate through the Higgs boson or through weak charged currents. Since direct detection results set stringent bounds on the Higgs coupling and also on the mixing of dark fermions below the TeV scale, pair annihilation is suppressed. In Figure 3, we show the parameter regions of our model that are excluded by Xenon1T in the ∆m c − m plane for three values of µ/v. The grey areas are excluded by searches for pair-produced charged fermions at LEP [31]. For µ/v = 0.2, Xenon1T excludes large parts of the parameter space, while for µ/v = 0.02 Xenon1T sets weaker bounds than LEP. Notice that in the pseudo-scalar scenario the bounds from direct detection are much weaker, since the mixing θ for fixed µ/v, m , and ∆m c is smaller than in the scalar scenario. The colored curves show the observed relic abundance Ω χ h 2 = 0.1199 [32] for fixed µ/v. To obtain these predictions, we have implemented our model in micrOMEGAs [29]. The vertical lines around m = 63 GeV correspond to the freeze-out of pair annihilations through the Higgs resonance. As can be seen in the figure, resonant annihilation is compatible with the observed relic abundance even for small µ/v. For larger dark matter masses, the relic abundance is determined by pair annihilation through gauge interactions, provided that fermion mixing is sizeable. In the scalar scenario, this is the case for µ/v = 0.2, while in the pseudo-scalar scenario pair annihilation is too small to provide the correct relic abundance.
B. Co-annihilation
For smaller µ/v, pair annihilation is suppressed by θ 4 and becomes irrelevant around the freeze-out temperature. The relic abundance is now set by co-annihilation processes like χ χ + → ff [33], which scale as θ 2 . Since the thermally averaged co-annihilation cross section is Boltzmann-suppressed, a moderate mass difference of ∆m c /m ≈ 10 % is required to prevent overabundance. In the scalar model, for large dark matter masses co-annihilation becomes relevant at larger µ/v, since the pair annihilation rate is smaller than for lighter dark matter. Similarly, in the pseudo-scalar model co-annihilation starts at larger µ/v than in the scalar model, because the mixing suppression of pair annihilation is stronger.
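The preference for a moderate splitting can be made quantitative with the Boltzmann factor that weights the co-annihilating states at freeze-out. The sketch below evaluates the standard relative weight (1+∆)^{3/2} e^{−x_f ∆}, with ∆ = ∆m c /m χ and x_f = m χ /T_f ; the choice x_f ≈ 25 and the neglect of degrees-of-freedom factors are simplifying assumptions.

import math

def coann_weight(delta_rel, x_f=25.0, g_ratio=1.0):
    """Boltzmann weight of a co-annihilation partner relative to dark matter at freeze-out.

    delta_rel : mass splitting Delta m_c / m_chi
    x_f       : m_chi / T_freeze-out (typically ~20-30)
    g_ratio   : ratio of internal degrees of freedom (set to 1 in this sketch)
    """
    return g_ratio * (1.0 + delta_rel) ** 1.5 * math.exp(-x_f * delta_rel)

for delta in (0.05, 0.10, 0.20, 0.30):
    print(f"Delta m_c / m_chi = {delta:.2f}  ->  relative weight ~ {coann_weight(delta):.3g}")
# A ~10% splitting still leaves a relative weight of order 0.1, so co-annihilation
# can be efficient, while a ~30% splitting suppresses it by roughly three orders
# of magnitude relative to an unsplit spectrum.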
C. Mediator annihilation
For even smaller µ/v, the co-annihilation rates become inefficient and dark matter decouples earlier than the heavier dark fermions. The relic density in this case is determined by mediator annihilation χ + χ − → ff (and similar processes involving χ h ), which can still change the dark matter number density through mediator decays χ + → χ ff . Mediator annihilation is not suppressed by fermion mixing (see Table I), but the thermally averaged rate is Boltzmann-suppressed by the mediator mass. Therefore χ + and χ h should not be much heavier than χ. Mediator annihilation thus predicts a compressed spectrum, similar to co-annihilation but with smaller dark matter couplings. In summary, direct detection sets an upper bound of µ/v ≲ 0.2 (0.6) on viable thermal relics below 1 TeV in the scalar (pseudo-scalar) scenario. In the pseudo-scalar scenario, all processes decouple for comparably larger Higgs-portal couplings, due to the smaller fermion mixing in this scenario. In particular, for all three displayed values of µ/v the relic abundance away from the Higgs resonance is set by mediator annihilation.
D. Co-scattering
In the region of mediator annihilation, dark matter is still in chemical equilibrium with the heavier dark fermions through co-scattering and decays. While decays, co-scattering and co-annihilation all scale as θ 2 or (µ/v) 2 , the latter is relatively suppressed by the number density of the non-relativistic mediators and thus decouples earlier. In this regime, determining the relic abundance requires solving the coupled system of Boltzmann equations for the number density evolution of all dark fermions, taking into account co-scattering and decay processes. This approach, however, is not included in automated programs such as micrOMEGAs, DarkSUSY and MadDM [29,34,35]. While a detailed numerical analysis of the non-equilibrium processes is beyond the scope of our work, we offer a qualitative discussion of the dark matter phenomenology in this regime.
Mediator annihilation determines the relic abundance, as long as mediator decays are prompt around the freeze-out temperature. For a fixed value of µ/v, decays are still rapid in our model when co-annihilation processes have already decoupled. Once the mediator decays drop below the Hubble rate, the dark matter number density can only decrease through co-scattering processes χ SM ↔ χ + SM and χ SM ↔ χ h SM, followed by mediator annihilation. This happens only for very small µ/v, where mediator decays decouple before co-scattering processes. Eventually, the latter decouple as well and dark matter departs from chemical equilibrium, while the mediators remain in equilibrium. The relic abundance is now driven by the freeze-out of co-scattering processes and thus very sensitive to the strength of the Higgs portal. Similar scenarios, dubbed conversion-driven freeze-out, have been identified in Refs. [36,37]. In non-standard cosmological scenarios with early matter domination, the observed relic abundance can also be obtained out of thermal equilibrium [38].
In Figure 4, we illustrate the different phases of dark matter freeze-out for small Higgs-portal couplings in the scalar scenario (blue) and the pseudo-scalar scenario (green). For concreteness, we choose a benchmark point with m = 500 GeV and a mass splitting of ∆m c = 30 GeV, as it is typical for efficient co-annihilation and mediator annihilation. For other dark matter masses, the main features of the phase diagram are very similar. In the pseudo-scalar scenario, each phase is reached at a larger coupling µ/v. This is due to the fact that the freeze-out of the relevant processes is very sensitive to the fermion mixing, which is smaller than in the scalar scenario [39]. The different Lorentz structure of Higgs and gauge couplings in both scenarios (cf. Eq. (13)) has only a subleading effect on the annihilation rates. In particular, all processes that dominate the dark matter abundance can proceed in an s-wave in either scenario (cf. Ref. [40]).
For large values of µ/v, the relic abundance is set mostly by pair annihilation and co-annihilation. Part of this region is excluded by Xenon1T (shaded in grey). For µ/v ≲ 0.02 (0.2) in the scalar (pseudo-scalar) scenario, the relic abundance is determined by mediator annihilation, mostly by χ h χ + → SM SM. Part of this region might be probed by future direct detection experiments, but the region below the neutrino floor (indicated by a dashed line) is not accessible with current methods. At µ/v ≈ 3 × 10⁻⁴ (10⁻³), the thermally averaged co-scattering rate becomes smaller than the (thermally averaged) mediator annihilation rate (see the dashed line). Mediator decays χ + → χ ff remain fast, so that co-scattering does not affect the relic abundance yet. At µ/v ≈ 2 × 10⁻⁷ (5 × 10⁻⁶), charged mediator decays drop below the Hubble rate and the relic abundance is determined by co-scattering. 4 Thermal equilibrium is preserved for couplings well below the range considered in this work. The regimes of mediator annihilation and co-scattering thus provide us with a thermal dark matter candidate that cannot be tested by direct detection experiments. Therefore colliders play an important role in probing Higgs portal dark matter with tiny portal couplings.
VI. LONG-LIVED MEDIATORS AT THE LHC
The hypothesis of a Higgs-portal dark matter relic with small couplings is directly testable at colliders. In this section, we investigate the LHC phenomenology of our model, constraining the parameter space with existing searches and predicting new observables that test regions that have not been explored yet. At the LHC, the mediators are pair-produced through Drell-Yan-like processes and subsequently decay into the lightest dark fermion. Two examples of such processes are shown in Figure 5. The production rate is set by the invariant mass of the mediator pair and the weak gauge coupling. 5 LEP has set a lower bound on the mass of the charged fermion, m c ≳ 100 GeV. Direct detection results imply that the Higgs-portal coupling must be small, µ/v ≲ 0.2 for mediators below the TeV scale. Furthermore, direct detection sets an upper bound on the mass splitting between mediators, ∆m hc . Viable scenarios of dark matter freeze-out favor a moderate mass splitting between dark matter and the mediators, ∆m c . These requirements determine the parameter region of interest as m c ≳ 100 GeV, ∆m hc ≲ a few GeV, ∆m c ≈ 15 − 30 GeV, and µ/v ≲ 0.2. In this parameter range, fermion mixing is small. At the LHC, we search for a compressed spectrum of dark fermions, featuring potentially long-lived mediators with soft decay products and missing energy. In what follows, we first determine the lifetimes of the mediators and then discuss the resulting signatures and how to test them.
A. Mediator decays
The lifetime of the mediators depends sensitively on the Higgs-portal coupling, µ/v, and on the mass difference between the dark fermions in the initial and final states, ∆m. Neglecting a potential phase-space suppression, the mediator decay width scales as Γ ∝ (µ/v)^x ∆m^y, where x and y depend on the decay process. For decays into the lightest state, χ + → χ and χ h → χ, the mass splitting is sizeable, ∆m c ≈ ∆m h ≈ 15 − 30 GeV. The mediator is thus long-lived on collider scales only if its decay is suppressed by a small portal coupling µ/v. In the decay χ + → χ h , in turn, ∆m hc can be arbitrarily small, as we discussed in Sec. II. In this case, the mediator decay is suppressed by the mass splitting. Let us first focus on the charged state χ + . Depending on the Higgs-portal strength, χ + can decay either via the two-body decay χ + → χ h π + or the three-body decays χ + → χ ℓ + ν and χ + → χ + hadrons. The two-body decay is kinematically allowed for mass splittings larger than the pion mass. In our model this is fulfilled for very small µ/v, where the mixing-induced mass difference is negligible and ∆m hc ≈ −160 MeV, see Eq. (15). For m π < |∆m hc | ≪ m c , the decay width is given by Eq. (28), where f π ≈ 130 MeV is the pion decay constant and V ud is a CKM matrix element. The decay rate is strongly suppressed by the small mass difference ∆m hc , as well as by the limited kinematic phase space. Since fermion mixing is tiny in this regime, cos θ ≈ 1 and the decay rates in the scalar and pseudo-scalar scenarios are the same. The nominal decay length cτ χ + , with τ χ + the proper lifetime of χ + , is given by Eq. (29) and reaches up to about 7 cm for |∆m hc | ≈ 160 MeV. Charged particles with a decay length in the centimeter range leave tracks in the inner layers of the ATLAS and CMS detectors, which we will discuss in more detail below. As µ/v increases, the mass splitting ∆m hc drops below the pion mass and the two-body decay is forbidden. In this regime the three-body decays χ + → χ ℓ + ν and χ + → χ + hadrons dominate the decay rate. In the scalar and pseudo-scalar scenarios, three-body decays proceed through vector currents (7) and axial-vector currents (11), respectively. For ∆m c ≪ M W , m c , the partial width of the leptonic decay χ + → χ ℓ + ν is given in Eq. (30) [41] 6 for the scalar and pseudo-scalar scenarios. The mass splitting ∆m c is very similar in both scenarios. Contrary to pion decay, three-body decays are not suppressed by the mass splitting, but by the small fermion mixing θ. Notice that in the pseudo-scalar scenario the three-body decays are smaller than in the scalar scenario, leading to a longer mediator lifetime. This is due to the smaller mixing angle, which reduces the weak coupling to dark matter, see Eq. (13). In the region where three-body decays dominate, the nominal decay length reaches up to cτ χ + ≈ 1.5 cm (4 cm) in the scalar (pseudo-scalar) scenario. For larger portal couplings, the decay length decreases. Leptonic three-body decays lead to displaced soft leptons, which can in principle be observable at the LHC for displacements larger than 200 µm. Hadronic decays lead to soft jets, which are difficult to detect. The lifetime of the heavy neutral state χ h depends on the mass hierarchy of the dark fermions. For sizeable µ/v, corresponding to a normal mass hierarchy (Figure 1, left), χ h decays dominantly via χ h → χ + π − , provided that the channel is kinematically allowed. In this case, the decay rate of χ h is the same as for χ + in the limit µ/v → 0, given by Eq. (28).
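The size of the two-body decay length quoted above can be checked with the standard width for a wino-like charged fermion decaying to a pion, Γ ≃ (2 G F ²/π) cos²θ C f π ² ∆m³ (1 − m π ²/∆m²)^{1/2}; using this expression here is our assumption (it is the usual pure-triplet result, which should coincide with Eq. (28) in the limit cos θ → 1).

import math

# Inputs in GeV-based natural units.
G_F      = 1.166e-5    # Fermi constant [GeV^-2]
f_pi     = 0.130       # pion decay constant [GeV]
m_pi     = 0.1396      # charged-pion mass [GeV]
cos_thC2 = 0.974**2    # |V_ud|^2
HBARC    = 1.973e-16   # conversion constant [GeV * m]

def ctau_pion_channel(delta_m):
    """Nominal decay length c*tau for chi+ -> chi_h pi+ with mass splitting delta_m [GeV]."""
    if delta_m <= m_pi:
        return float("inf")                        # channel kinematically closed
    width = (2.0 * G_F**2 / math.pi) * cos_thC2 * f_pi**2 \
            * delta_m**3 * math.sqrt(1.0 - (m_pi / delta_m) ** 2)
    return HBARC / width                           # decay length in meters

for dm in (0.150, 0.160, 0.165, 0.180):
    print(f"|Delta m_hc| = {dm*1e3:.0f} MeV  ->  c*tau ~ {ctau_pion_channel(dm)*100:.1f} cm")
# For |Delta m_hc| ~ 160 MeV this gives c*tau of roughly 7 cm, in line with the decay
# lengths quoted above for the regime of very small portal couplings.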
For smaller µ/v, the mass hierarchy is inverted (Figure 1, right), and χ h decays dominantly through χ h → χ bb via an off-shell Higgs boson. For ∆m h ≪ m h , the partial decay width in the respective scenarios is given by Eq. (31). The decay rate is not suppressed by mixing, but by the small Higgs-portal and bottom Yukawa couplings. 7 In the pseudo-scalar scenario, the decay proceeds through a pseudo-scalar current. Parity conservation requires that χ and the bb pair are emitted in a relative p-wave near the kinematic endpoint, resulting in an additional suppression of (∆m h /m h ) 2 . Therefore the heavy neutral fermion lives longer in the pseudo-scalar scenario than in the scalar scenario. Compared with χ + , the strong suppression of χ h decays results in a longer lifetime of the neutral mediator. For µ/v → 0, χ h can be arbitrarily long-lived. The typical signature of a long-lived neutral fermion is a pair of b-jets with displaced vertices, which is expected to be detectable by ATLAS and CMS in the range of 1 cm < cτ χ h < 1 m [19].
B. LHC signatures with displaced particles
We are now prepared to investigate the predicted LHC signatures in detail. In Figure 6, we present the parameter region of our model that can be tested with prompt and long-lived mediators in the m c versus µ/v plane. For each parameter point, the mass difference ∆m c is determined by requiring the observed dark matter abundance of Ω χ h 2 = 0.1199 ± 0.0022. In most of the parameter space, co-annihilation or mediator annihilation set the relic abundance (cf. Fig. 4). At the lower end of the plots, the relic abundance starts to be determined by co-scattering, as indicated by a dashed line. Null results of Xenon1T exclude the upper left corner in the scalar scenario. 8 In the pseudo-scalar scenario, direct detection bounds are weaker and do not appear in the figure. In either case, current direct detection experiments do not probe parameter regions with long-lived mediators yet. If future experiments became sensitive to scattering rates comparable to coherent neutrino scattering, they would test the region with µ/v ≳ 0.01. Indirect detection cannot probe small portal couplings, as dark matter pair annihilation is strongly suppressed. Colliders are thus the only terrestrial instruments to date that can test the hypothesis of dark matter from a tiny Higgs portal.
The various signatures are classified as follows. Green regions are already excluded by existing searches; orange regions correspond to predictions for the LHC with full run-II data (displaced b-jet pairs) or for the HL-LHC (disappearing charged tracks); red regions have not been explored yet, but can be probed with new signatures we predict in this work. In what follows, we discuss the signatures one by one, starting from tiny portal couplings and moving upwards in the parameter space.
C. Disappearing charged tracks
When the portal coupling is tiny, the mass difference between χ + and χ h is induced radiatively by electroweak corrections and decreases with increasing µ/v. In this parameter region, the spectrum is inverted and χ + is the heaviest particle in the dark sector (see Figure 1, right). If the splitting is saturated by electroweak corrections, the decay χ + → χ h π + is kinematically allowed and dominant. The charged mediator decays with a nominal length up to 7 cm (see Eq. (29)). It leaves a track in the innermost layers of the detector and decays before reaching the outer tracking layers [26,42,43]. The ATLAS collaboration has performed a dedicated search for supersymmetric winos with similar decay length [44], which is directly applicable to our case. Notice that the analysis assumes that the by-product of chargino production, the neutral wino, is stable at collider scales. In our model, χ h decays via χ h → χ bb, which is strongly suppressed by the tiny portal coupling in this regime. This assumption is thus fulfilled by our heavy neutral fermion.
Interpreting the ATLAS results in our scenarios, we find that they exclude mediator masses up to m c ≈ 460 (480) GeV and portal couplings of µ/v ≲ 10⁻⁶ (10⁻⁵) in the scalar (pseudo-scalar) scenario. Notice that parts of the parameter space with disappearing charged tracks correspond to co-scattering. In this region, the mediator's decay length can deviate from that predicted from co-annihilation. Since co-scattering saturates the relic abundance at smaller couplings µ/v, the mass splitting tends to be larger, but without exceeding |∆m hc | ≈ 160 MeV. The shown results are thus not expected to change much in the co-scattering phase. In particular, we expect the area below the plot to be excluded by disappearing charged track searches as well. At the upper edge of the excluded region, the rates for two- and three-body decays are equal, Γ(χ + → χ h π + ) = Γ(χ + → χ ℓ + ν) + Γ(χ + → χ + hadrons). In the pseudo-scalar scenario, this condition is met at larger µ/v, due to the smaller three-body decay rate (see Eq. (30)). Above this boundary, three-body decays dominate. In the semi-transparent regions, the branching ratio for χ + → χ h π + is less than 90 %, which may weaken the bounds derived assuming 100 % decay into pions.
In Ref. [42], a dedicated study of the HL-LHC prospects for disappearing charged tracks has been performed for supersymmetric higgsinos. Rescaling the production rate in these predictions for our wino-bino scenario, we estimate that with a luminosity of 3 ab −1 the HL-LHC can extend the search to mediator masses up to 1 TeV.
D. Displaced soft leptons
In the region where three-body decays dominate the lifetime of χ + , a soft charged lepton or soft jets are a typical signature of χ + → χ ff decays. Here we focus on signatures with displaced soft leptons. The dominant process with a final state of two soft leptons and missing energy is shown in the left panel of Figure 5. Due to the small mass splitting 15 GeV ≲ ∆m c ≲ 30 GeV, the transverse momenta of the leptons typically range around p T,ℓ ≈ 5 − 40 GeV. For a sufficiently small portal coupling, the charged mediator can be long-lived and the soft leptons originate from a displaced vertex. The longest decay length of χ + is obtained for small portal couplings, cτ χ + = 1.5 cm (4 cm) in the scalar (pseudo-scalar) scenario. The smallest decay length that can be observed, cτ χ + ≈ 200 µm, is limited by the vertex resolution of the detector [45]. The corresponding parameter region is shown in Fig. 6.
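A useful back-of-the-envelope quantity for such searches is the probability that a mediator with proper decay length cτ and lab-frame boost βγ decays inside a given radial window of the tracker. The sketch below uses the simple exponential decay law; the chosen boost and the 200 µm - 30 cm window are illustrative assumptions, not experimental acceptances.

import math

def decay_in_window(ctau_m, beta_gamma, r_min=200e-6, r_max=0.30):
    """Probability that a particle with proper decay length ctau_m [m] and boost
    beta*gamma decays at a lab-frame distance between r_min and r_max [m]."""
    lab_length = beta_gamma * ctau_m
    return math.exp(-r_min / lab_length) - math.exp(-r_max / lab_length)

# Illustrative scan over nominal decay lengths of the charged mediator at moderate boost.
for ctau_cm in (0.05, 0.5, 1.5, 4.0, 20.0):
    p = decay_in_window(ctau_cm * 1e-2, beta_gamma=2.0)
    print(f"c*tau = {ctau_cm:5.2f} cm  ->  P(decay in 0.2 mm - 30 cm) ~ {p:.2f}")
# Decay lengths from a few millimeters up to a few centimeters give the largest
# fraction of decays with a resolvable displaced vertex, which is why the
# c*tau ~ 1-4 cm regime discussed above is favorable for displaced soft leptons.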
Searches for displaced leptons have been performed by CMS at 8 TeV with a lepton momentum cut of p T,ℓ > 25 GeV [46] and at 13 TeV for p T,ℓ > 40 GeV [45]. While the momentum cut of the 13-TeV analysis is too strong for our model, a fraction of events with displaced soft leptons falls into the signal region of the 8-TeV analysis. In Ref. [15], this analysis has been recast for quintuplet dark matter, which leads to the same final state of displaced soft lepton pairs through a decay chain of doubly-charged fermions. We rescale the quintuplet event rates by a factor of 1/4 and derive the bounds for our model, requiring that the decay length of χ + is equal to the decay length of the doubly-charged quintuplet fermion. The resulting 95 % CL exclusion bound is shown as a green area in Fig. 6. Displaced lepton searches exclude mediator masses up to m c ≈ 200 GeV for portal couplings µ/v ≲ 3 × 10⁻⁵ (3 × 10⁻⁴) in the scalar (pseudo-scalar) model. The upper boundary on µ/v is determined by the minimal decay length that can be probed with the CMS search. The sensitivity of the displaced lepton search extends up to decay lengths of cτ χ + ≈ 7 cm (and thus to smaller µ/v), where disappearing charged tracks can be observed. In the light green region, the mass splitting ∆m c is up to 25 % smaller than in the quintuplet model, and only very few soft leptons still pass the p T,ℓ cut. The sensitivity to our model is thus strongly limited in this region.
We therefore suggest to extend searches for displaced lepton pairs to lower transverse momenta. As the lepton momenta spread over a certain range, it could be experimentally beneficial to first lower the threshold for one displaced lepton, i.e., to search for signals with one soft and one harder displaced lepton. Soft displaced leptons can potentially probe much larger mediator masses, as indicated by the red area. The highest accessible masses are experimentally limited by the signal-background discrimination efficiency for small event rates.
E. Displaced b-jet pairs
In the region where χ + decays are prompt, parameter regions with heavy mediators can still be probed by displaced signatures if χ h is long-lived. This is indeed the case, because the decay χ h → χ bb through an off-shell Higgs boson is small (see Eq. (31)). Produced via pp → χ + χ h , the slow χ h decay leaves a signature of a pair of displaced b-jets. Due to the small mass splitting ∆m h , the b-jets are rather soft. In Ref. [19], such a signature has been analyzed in the context of a supersymmetric wino-bino scenario. Projections for the LHC running at 14 TeV are made, assuming an integrated luminosity of 300 fb −1 . The detection criteria for displaced vertices are adapted from an ATLAS study based on 8 TeV data. Accordingly, a good detection efficiency can be achieved for a decay length in the range 1 cm ≲ cτ χ h ≲ 1 m, reaching its maximum around cτ χ h ≈ 10 cm. Reinterpreting these predictions for our scenarios, we derive the parameter region that can be probed with displaced b-jet pairs, as indicated by the orange area in Fig. 6. The lower edge of this area corresponds to the largest detectable decay length, cτ χ h = 1 m. The upper edge is set either by the sensitivity limit cτ χ h = 1 cm or, in the pseudo-scalar scenario for small m c , by m h = m c + m π , where the spectrum is inverted and χ h decays dominantly via χ h → χ + π − . The hatched regions indicate the (large) uncertainties on the displaced vertex reconstruction [19]. Overall, it is expected that mediator masses up to m h ≈ 800 GeV can be probed for appropriate couplings µ/v. Neutral mediators with longer lifetimes, corresponding to a smaller portal coupling µ/v, escape the LHC detector before decaying. Proposed surface detectors might be able to probe this scenario [17] and thus cover parts of the area below the yellow band in Figure 6.
F. Displaced + prompt soft leptons
For m h > m c , the mass splitting due to fermion mixing dominates over electroweak corrections. In this region, the portal coupling is too large to cause displaced χ + decays. However, χ h can be long-lived for m h ≳ m c + m π , due to a strong phase-space suppression by the small mass difference ∆m hc . From pair-produced mediators via pp → χ h χ + , one expects one prompt decay, χ + → χ ℓ + ν, and one slow decay, χ h → π − χ + or χ h → ff χ + , followed by χ + → χ ℓ + ν, which produces a displaced soft lepton (Figure 5, right). We thus predict a signature with large missing energy and two soft leptons, one displaced and one prompt. The leptons can be either of opposite or of same electric charge, appearing at equal rates. The pion from the decay χ h → π − χ + is very soft and not detected.
In Fig. 6, we show the parameter space for displaced + prompt soft lepton signals in red. The upper edge of the area is determined by requiring a minimal nominal decay length of cτ χ h ≳ 200 µm. Above this line, the mass splitting ∆m hc is larger and decays become rapid. The lower edge is set by the kinematic threshold for a two-body decay, m h = m c + m π . Below this threshold, the heavy neutral fermion decays mostly via χ h → χ bb, and no soft leptons are produced.
G. Prompt soft leptons
Mediators with a decay length cτ χ + < 200 µm leave signatures with prompt soft leptons. Both χ + and χ h decays can produce prompt soft leptons via χ + → χ ℓ + ν or χ h → π − (χ + → χ ℓ + ν), respectively. Signatures with two prompt leptons and missing energy are thus expected from pp → χ + χ − and pp → χ + χ h production. Which process dominates depends on the χ h decay branching ratio into leptons. If χ h is long-lived, pairs of prompt soft leptons can only be produced via pp → χ + χ − with subsequent prompt decays. Searches for pairs of prompt soft leptons have been performed by ATLAS [47] and CMS [48] in the context of pure wino production via pp → χ + χ h . The analyses assume a decay via χ + χ h → χ W * χ Z * → χ ℓ + ν χ ℓ + ℓ − to 100 %. However, in our model χ h → χ Z * → χ ℓ + ℓ − is loop- and mixing-suppressed and thus small compared to χ h → χ h * → χ bb. In Ref. [15], the CMS analysis [48] has been recast under the assumption of pure χ + χ − production for a triplet dark matter model, which corresponds to our scalar scenario. We reinterpret their results and derive a lower bound of m c ≳ 130 GeV for 5 × 10⁻⁵ < µ/v < 0.02 in the scalar scenario and 7 × 10⁻⁴ < µ/v < 0.07 in the pseudo-scalar scenario. The excluded parameter region is shown in Figure 6 as a green band.
In the case of a normal mass hierarchy, χ h decays are prompt for µ/v ≳ 0.02 (0.07) in the scalar (pseudo-scalar) scenario. Soft di-leptons are now produced from both pp → χ + χ − and pp → χ + χ h . Adding the contribution from the latter process enhances our signal rate. However, our pp → χ + χ h contribution favors a different kinematic regime and produces only two charged leptons in the final state, compared to three leptons in the experimental analyses. We therefore do not expect a significantly stronger bound than m c ≳ 130 GeV when including χ + χ h contributions. The sensitivity could be enhanced by optimizing the signal region for signatures with exactly two soft leptons in the final state. All in all, however, searches for prompt soft leptons are very limited in their mass reach, due to the small production rates and large backgrounds. Searches for displaced + prompt leptons or displaced b-jet pairs are expected to be sensitive to much higher mediator masses. In Figure 7, we summarize the various displaced signatures that are predicted from long-lived mediator decays at the LHC. They are classified according to the nominal decay length of the mediator χ + (salmon) or χ h (purple) in both scenarios. Colored areas correspond to regions of experimental sensitivity with the ATLAS and CMS detectors. Hatched regions could be probed with an extended sensitivity. All other edges are theory bounds, which were explained above for each individual signature. Disappearing charged track searches have already excluded the blue and green parameter regions in the scalar and pseudo-scalar scenario, respectively. It is apparent that our model predicts signatures with different final states in basically all accessible layers of the LHC detectors. The search for dark matter from a small Higgs portal is thus most efficiently done by gathering all these signatures in a combined interpretation.
VII. CONCLUSIONS
In this work, we have investigated singlet-triplet fermion dark matter interacting with the standard model through a scalar or pseudo-scalar Higgs portal. The nature of the Higgs portal also has implications on the weak charged currents, which are vector-like in the scalar scenario and axial-vector-like in the pseudo-scalar scenario. Both scenarios are related through a chiral rotation of the fermion singlet. Besides the different Lorentz structure of the Higgs and W -boson couplings, the fermion mixing in the pseudo-scalar scenario is generically smaller than in the scalar scenario. This leads to different lifetimes of the mediators, with observable consequences for the dark matter and collider phenomenology.
Due to the strong bounds on the portal coupling from direct detection experiments, the thermal relic dark matter abundance relies on co-annihilation with mediators. In the regime of very small portal couplings, co-scattering and mediator decays have a crucial impact on the dark matter number density during freezeout. For a reliable prediction of the relic abundance, co-scattering and decay processes thus need to be taken into account when solving the coupled system of Boltzmann equations. Since the phase of co-scattering is a common prediction in models with small portal couplings, we suggest to include co-scattering and mediator decays in existing automated tools for relic density calculations.
Thermal dark matter with tiny Higgs couplings and a compressed dark sector implies long-lived mediators. At the LHC, this leads to a plethora of signatures with both prompt and displaced vertices, as well as disappearing tracks. Due to the small electroweak production rates and the softness of the visible decay products, searches for prompt signatures are limited in their mass reach. Current searches for prompt soft leptons cannot probe mediator masses above about 150 GeV. Displaced decays, in turn, leave clear signatures that can be distinguished from the background with a handful of events. Disappearing charged track searches already exclude mediator masses up to m c ≈ 460 (480) GeV for tiny portal couplings of µ/v ≲ 10⁻⁶ (10⁻⁵) in the scalar (pseudo-scalar) scenario. Parameter regions with larger couplings can be probed with signatures of displaced b-jet pairs, displaced soft lepton pairs, or one displaced and one prompt lepton. Existing searches for displaced leptons probe portal couplings µ/v ≲ 3 × 10⁻⁵ (3 × 10⁻⁴). They exclude mediator masses up to m c ≈ 200 GeV, but are very limited in their sensitivity, due to the cut on the lepton transverse momentum. Lowering the momentum threshold will strongly enhance the sensitivity of displaced lepton searches to soft decay products, which are typical for compressed dark sectors. Notice that the parameter region with mediator masses not much larger than m c ≈ 150 GeV and larger portal couplings can be probed with prompt leptons, as well as displaced + prompt leptons and displaced b-jet pairs. A potential discovery of one of these signatures in this region can thus be confirmed by a complementary search for the other signature.
Our analysis shows that mediator masses of several hundred GeV can be accessible at the ATLAS and CMS detectors using the run-II data set. With the data set expected at the HL-LHC, the reach can be extended to probe mediator masses up to the TeV scale and even beyond with future colliders [25,42,[49][50][51]. While neither indirect nor direct detection will be able to test dark matter scenarios with tiny Higgs-portal couplings in the foreseeable future, collider signatures are perfectly characteristic probes. Displaced soft objects in association with missing energy are sensitive to the mass splitting and the coupling of the dark sector at the same time. We enthusiastically encourage the ATLAS and CMS collaborations to exploit the lifetime frontier in the search for Higgs-portal dark matter.
VIII. ACKNOWLEDGMENTS
We thank Felix Brümmer, Tao Han, Michel Tytgat and José Zurita for helpful discussions and Victor Ananyev for technical support. A warm acknowledgment goes to Nishita Desai for helping us to interpret the displaced lepton searches. We acknowledge support by the DFG Forschergruppe "New physics at the LHC" (FOR 2239). AF is funded by the DFG through the research training group "Particle physics beyond the Standard Model" (GRK 1940). SW acknowledges funding by the Carl Zeiss foundation through an endowed junior professorship (Junior-Stiftungsprofessur ).
|
2018-12-11T19:00:01.000Z
|
2018-12-11T00:00:00.000
|
{
"year": 2019,
"sha1": "cdad805bb162441a029ee44b894c2b6ea1304703",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP02(2019)140.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a0166bc7806bcfdc13d6ca1ac28c7251ca9a1845",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
10642443
|
pes2o/s2orc
|
v3-fos-license
|
Primary Prevention of Macroangiopathy in Patients With Short-Duration Type 2 Diabetes by Intensified Multifactorial Intervention
OBJECTIVE To explore whether intensified, multifactorial intervention could prevent macrovascular disease in patients with recently diagnosed type 2 diabetes. RESEARCH DESIGN AND METHODS A total of 150 type 2 diabetic patients, with disease duration of <1 year and without clinical arteriosclerotic disease or subclinical atherosclerotic signs confirmed by ultrasonographic scanning of three conducting arteries, were randomized into an intensive intervention group and a conventional intervention group. They then received intensive, multifactorial intervention or conventional intervention over 7 years of follow-up. The patients’ common carotid intima-media thicknesses (CC-IMTs) were measured every year. The primary outcome was the time to the first occurrence of CC-IMTs ≥1.0 mm and/or development of atherosclerosis plaques in the carotid artery. The secondary outcome was clinical evidence of cardiovascular disease. RESULTS A total of 70 patients in the intensive group and 68 patients in the conventional group completed the 7-year follow-up. Subclinical macrovascular (primary) outcomes occurred in seven cases in the intensive group and 22 cases in the conventional group for a cumulative prevalence of 10.00 and 32.35%, respectively (P < 0.05). No significant differences between the two groups were observed regarding the secondary outcome. CONCLUSIONS Primary prevention of macrovascular diseases can be achieved through intensified, multifactorial intervention in patients with short-duration type 2 diabetes. Type 2 diabetic patients should undergo intensive multifactorial interventions with individual targets for the prevention of macrovascular diseases.
The prevalence of diabetes, especially type 2 diabetes, is increasing markedly worldwide, including in China (1,2). The chronic complications of diabetes seriously affect quality of life and result in a significant decrease in life expectancy; they also impose a heavy economic burden. Therefore, the prevention and treatment of chronic diabetes complications have become a considerable medical problem attracting worldwide attention.
The macrovascular complications of diabetes, which can lead to cardiovascular diseases, are the major cause of death in patients with type 2 diabetes. A reduction in all-cause mortality among individuals with diabetes has occurred over time; however, the mortality rate from cardiovascular causes among individuals with diabetes remains approximately twofold higher than the rate in those without diabetes (3,4). In recent years, the results of several large-scale clinical trials have illustrated that interventions for the various atherosclerosis (AS) risk factors in patients with type 2 diabetes can reduce the risk of cardiovascular death by different degrees, although it remains controversial whether intensive glucose control can help prevent cardiovascular events. The Steno-2 study, which was conducted in patients with type 2 diabetes and microalbuminuria of any duration, demonstrated that target-driven, long-term, intensified interventions aimed at multiple risk factors can reduce the risk of cardiovascular and microvascular events by ~50% (5,6).
Thickening of the common carotid intima-media (CC-IMT) is considered a surrogate marker of early AS and vascular remodeling because it is correlated with all of the traditional vascular risk factors (7). Monitoring a combination of CC-IMT thickening and plaque formation could significantly improve the prediction of cardiovascular events (8). Moreover, these factors can be assessed quickly, noninvasively, and inexpensively with high-resolution ultrasound.
Thus, we designed a prospective study in which patients with short-duration type 2 diabetes without AS were assigned to receive a combined intervention targeting multiple risk factors of AS, and their CC-IMTs were measured to explore whether intensified, multifactorial intervention could prevent the occurrence of macrovascular disease over a 7-year period.
RESEARCH DESIGN AND METHODS
In brief, 150 patients with type 2 diabetes, diagnosed according to the World Health Organization criteria published in 1999, were recruited at the First Affiliated Hospital of Dalian Medical University. The enrollment took place from 1 April 2002 to 31 December 2002. The design of our parallel controlled study has previously been described (9).
The protocol for this study was in accordance with the Declaration of Helsinki and was approved by the ethics committee of the First Affiliated Hospital of Dalian Medical University. All of the patients provided written informed consent before enrollment and underwent a 7-year clinical follow-up.
The inclusion criteria were as follows: 1) age 35-70 years; 2) diabetes duration <1 year; 3) no previous histories or present characteristics of cardiovascular diseases, cerebral vascular diseases, or peripheral artery disease as assessed by thorough examinations before enrollment; and 4) IMT values in the conducting arteries (common carotid artery, femoral artery, and iliac artery) <1.0 mm and no AS plaques detected by ultrasonography (10).
Ultrasonographic scanning of the common carotid artery (between 5 cm upstream and 5 cm downstream of the carotid bulb), the femoral artery (within 10 cm upstream of the femoral artery bifurcation), and the iliac artery (within 10 cm downstream of the abdominal aorta bifurcation) was performed by designated physicians who were unaware of the clinical characteristics of the subjects.
The exclusion criteria included the following: 1) type 1 diabetes or other special type of diabetes; 2) acute diabetes complications within the previous 6 months, including diabetic ketoacidosis, hyperglycemic hyperosmolar status, lactic acidosis, and hypoglycemic coma; 3) renal failure (serum creatinine >106 µmol/L) or hepatic dysfunction (serum alanine aminotransferase >80 units/L); 4) diagnosis of coronary heart disease, cerebral vascular stroke, and/or peripheral artery disease; and 5) a conducting artery IMT ≥1.0 mm or AS plaques detected by ultrasonography.
Sex, age, BMI, waist-to-hip ratio, systolic blood pressure (SBP), diastolic blood pressure (DBP), and resting 12-lead electrocardiogram were recorded upon enrollment in the clinical trial. Fasting serum total cholesterol, triglyceride, HDL cholesterol (HDL-C), LDL cholesterol (LDL-C), creatinine, and alanine aminotransferase levels, along with plasma glucose, were measured by routine laboratory techniques. HbA 1c was measured by high-performance liquid chromatography.
A total of 268 patients underwent screening, and 150 patients met the inclusion criteria. The 150 patients were randomized into an intensive, multifactorial intervention group or a conventional intervention group as shown in Fig. 1. The total duration of the follow-up was 7 years.
Intensive treatment protocol Physical examination and plasma glucose (fasting plasma glucose [FPG] and 2-h plasma glucose [2hPG]) measurements were conducted monthly. HbA 1c , blood lipid, serum creatinine, and alanine aminotransferase levels were measured every 6 months. CC-IMTs and electrocardiograms were analyzed yearly. During the consultations, a healthy lifestyle (e.g., at least three 30-min sessions of light to moderate exercise per week) and diet (e.g., obtain 60-70% of daily caloric intake from carbohydrates from whole grains, fruits, and vegetables, together with monounsaturated fat) were recommended using one-to-one teaching or group counseling supplemented with audiovisual and printed materials monthly.
Hypoglycemic strategy
Overweight patients (BMI >24 kg/m 2 ) received metformin (starting at 0.25 g three times daily; maximum 0.5 g three times daily); nonoverweight patients received glipizide (starting at 2.5 mg three times daily; maximum 10 mg three times daily). At the next follow-up, if FPG was >7.0 mmol/L, 2hPG was >10.0 mmol/L, and/or HbA 1c was >7.0%, metformin was prescribed to the nonoverweight patients and glipizide to the overweight patients. Acarbose was prescribed to only those patients with 2hPG still >10.0 mmol/L after any type of hypoglycemic administration. Insulin supplementation was recommended for patients whose HbA 1c remained >7.0% on maximal doses of oral agents or drug combinations and in patients who had intolerable adverse reactions to oral drugs. Premixed, combined human insulin (30% short-acting insulin and 70% neutral protamine Hagedorn insulin) was the first choice.
Antihypertensive strategy
Patients primarily received ACE inhibitor and/or calcium channel blockers; if unsuccessful, a diuretic and/or b-blocker was added as a supplemental therapy. The blood pressure target was 130/85 mmHg.
Lipid-lowering strategy
Statins or a Chinese herb complex called Xue-Zhi-Kang was recommended to patients with hypercholesterolemia and/or high levels of serum LDL-C, and fenofibrate was prescribed to patients with hypertriglyceridemia. Total cholesterol within 4.66 mmol/L, triglyceride within 1.7 mmol/L, and LDL-C within 2.6 mmol/L were considered controlled. Low-dose acetylsalicylic acid (100 mg/day) was also recommended to all of the patients who did not exhibit contraindications. The dosages of the drugs were modulated every month based on the levels of FPG, HbA 1c , blood pressure, and blood lipid until target values were achieved. The patients were treated under the guidance of specialists, and all of the examinations and some of the drugs were freely provided.
Conventional treatment protocol
In the conventional group, loose outpatient management was performed without intensive intervention targets, and the drugs were not provided freely. These patients could go to any hospital at any frequency that they chose. The same research indices as those measured in the intensive group were measured each year free of charge in our center.
Primary and secondary outcomes
The primary outcome (subclinical AS) was the time to the first occurrence of CC-IMT ≥1.0 mm and/or development of AS plaques in the carotid artery. The secondary outcome (clinical AS) was clinical evidence of cardiovascular disease.
Statistical analyses
SPSS 13.0 was used for the statistical analysis. Normally distributed data are presented as means ± SD. An independent t test was adopted for between-group comparisons, and a paired t test was adopted for within-group comparisons. Numerical data are presented as absolute frequency or percentage, and the χ2 test was used for comparison between groups. Statistical significance was accepted at P < 0.05.
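For the primary outcome comparison reported below (7/70 events in the intensive group vs. 22/68 in the conventional group), the χ2 test can be reproduced approximately as follows; this is an illustrative re-computation rather than the authors' original analysis, and the exact P value depends on whether a continuity correction is applied.

from scipy.stats import chi2_contingency

# 2x2 table: rows = intensive / conventional group, columns = event / no event.
table = [[7, 70 - 7],
         [22, 68 - 22]]

chi2, p, dof, expected = chi2_contingency(table)   # Yates continuity correction applied by default for 2x2
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
# This yields P of roughly 0.003 (about 0.001 without the continuity correction),
# consistent with the significant difference (P = 0.002) reported for the primary outcome.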
RESULTS
A total of 268 patients who had type 2 diabetes for <1 year and no clinical AS underwent the screening, and 101 (37.69%) were found to have subclinical AS. One hundred and fifty patients who showed no signs of AS on ultrasound were randomly divided into an intensive group and a conventional group, with 75 cases in each group. Seventy patients in the intensive group and 68 patients in the conventional group finished the 7-year follow-up (6.67 and 9.33% lost to follow-up, respectively). The biochemical characteristics of the patients at baseline have previously been described (9). The data at every follow-up year and at the end of the follow-up period (7 years) are shown in Table 1 and Supplementary Fig. 1. The two study groups were similar at baseline but differed significantly at the end of the intervention period, indicating that intensive therapy was superior to conventional therapy in controlling the level of FPG, SBP, HbA 1c , and fasting serum total cholesterol.
After 7 years of follow-up, among the 68 patients in the conventional group, IMTs ≥1.0 mm and/or AS plaques in the carotid artery were observed in 22 patients; 1 patient developed myocardial infarction, 4 patients suffered from angina pectoris, 1 patient developed silent myocardial ischemia (electrocardiogram showed that the ST segment was descended, and the T wave was low and calm in contrast to baseline), 2 patients had a transient ischemic attack, and 1 patient developed intermittent claudication. Thus, clinical macrovascular events occurred in nine cases. Five of the nine patients who developed clinical macrovascular events also had increased CC-IMTs and/or AS plaques in the common carotid arteries. However, among the 70 patients in the intensive group, IMTs ≥1.0 mm and/or AS plaques in the carotid arteries were observed in only 7 patients. One patient developed myocardial infarction in addition to increased CC-IMT, two patients suffered from angina pectoris, and one of these patients also had increased CC-IMT. One patient had silent myocardial ischemia, and one patient died suddenly.
(No autopsy was performed; the cause of death was unknown and was considered relevant to diabetic macroangiopathy.) In total, final clinical macrovascular events occurred in five cases in the intensive group. Two of the five patients who developed clinical macrovascular events also had increased CC-IMTs and/or AS plaques in the common carotid arteries. The difference in the frequency of subclinical macrovascular outcomes between the two groups was significant (P = 0.002); however, no significant difference in the frequency of clinical macrovascular events was observed between the two groups (P = 0.271) (Table 2 and Fig. 2). Beneficial effects of multifactorial risk-factor intervention on cardiovascular outcomes have been reported previously (11-14), although studies on intensive glucose control alone in patients with type 2 diabetes have reached conflicting conclusions regarding the incidence of major cardiovascular events or death (15-17). However, only a delayed effect in reducing the incidence of cardiovascular events was observed in UKPDS (18), suggesting that long-term observation might be necessary for the study of macroangiopathy in recent-onset type 2 diabetes and that cardiovascular events or death cannot be taken as indicators if investigators want to draw conclusions about diabetes in the short term.
In our study, we implemented a multifactorial intervention aimed at primary prevention for patients with type 2 diabetes without any manifestation of AS, using macrovascular end points, including subclinical AS lesions, as the evaluation index. We measured the preventive efficacy after 4-7 years of intervention, expanding upon the results of UKPDS and the STENO-2 trial and strengthening their conclusions. Our approach achieved the primary prevention of diabetic macrovascular complications, implying that intensive, multifactorial intervention should be administered to type 2 diabetic patients as soon as possible to provide the most benefit. Recent results from UKPDS suggested that the effects of blood pressure- and glucose-lowering interventions might be additive; there was a trend toward a greater benefit with a combination of intensive blood pressure- and glucose-lowering interventions. Because only a small subset of hypertensive subjects received both interventions, UKPDS had insufficient power to determine conclusively whether the effects of the treatments were additive in this group or in the broader population with type 2 diabetes (19). The new results of the Action in Diabetes and Vascular Disease (ADVANCE) trial demonstrated that a combined approach of routine blood pressure-lowering interventions and intensive glucose control resulted in substantial reductions in major renal events and all-cause deaths, supporting and strengthening the results of the UKPDS trial and providing further evidence for the benefits of a multifactorial treatment approach in patients with type 2 diabetes (20). However, ADVANCE emphasized the control of only two risk factors for diabetic macroangiopathy. As demonstrated by the STENO-2 study, a target-driven, long-term, intensified intervention aimed at multiple risk factors in patients with type 2 diabetes and microalbuminuria can reduce the risk of cardiovascular and microvascular events by ~50%; furthermore, the benefits were maintained over the long term even after the randomized treatment period (5,6). However, the STENO-2 subjects differed from ours in that the status of their arterial intima was uncertain at baseline.
Ultrasonography to measure CC-IMT is a noninvasive test that can be used to infer the presence of coronary AS. IMT is an independent predictor of future cardiovascular events, and it is often used in research trials as a surrogate for the presence of cardiovascular disease (21-23). The 150 patients with a diabetes duration of <1 year included in our study had initial IMTs of <1.0 mm in the three conducting arteries (common carotid artery, femoral artery, and iliac artery) and no atherosclerotic plaques detected by ultrasonography, in addition to an absence of clinical manifestations or history of macrovascular diseases; these patients were considered not to have AS. They then underwent intensified or conventional treatment. The reduced incidence of subclinical outcomes in the intensive group indicates that these interventions reduced the incidence of macroangiopathy, which suggests that this intensified, multifactorial intervention can produce a marked effect on the primary prevention of macrovascular disease in patients with type 2 diabetes. No significant differences between the two groups were observed if only the secondary outcome was considered, irrespective of the primary outcome. Benefits emerged after only a relatively short period when IMTs and/or the occurrence of AS plaques were regarded as end points, implying that evidence of early-stage AS might be more important. These data also suggest that, as a chronic progressive disease, subclinical AS might be considered an important index in the study of diabetic macroangiopathy.
In contrast to the uncertain follow-up frequency of those in the conventional group, the subjects in the intensive group were followed up every month. These monthly visits may themselves represent an intervention and may have partially contributed to the final outcomes.
In addition, the incidence of macroangiopathy in our study decreased significantly when the HbA1c target of 7.0% was reached. However, during the 7-year follow-up, the mean HbA1c in the intensive group was actually ~6.5%; furthermore, no severe hypoglycemic events occurred, indicating that an HbA1c of 6.5%, rather than 7%, might be desirable in patients with short-duration type 2 diabetes without macroangiopathy who are younger than 60 years old. The HbA1c target in the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial was <6.0%, and the all-cause mortality and cardiovascular fatality rates in the intensive blood glucose therapy group were both significantly higher than those in the control group (24). Therefore, it might be reasonable to consider an HbA1c of 7.0% as the target for intensive blood glucose control in patients with relatively long durations of type 2 diabetes.
Because this study was performed in a small group of type 2 diabetic patients, there was insufficient information for a stratified analysis of the correlation between each hypoglycemic regimen and macrovascular end points. Additionally, the period of observation was only 7 years, and total clinical macrovascular events occurred in only 14 cases. We expect to observe the correlation between subclinical AS and clinical atherosclerotic disease, followed by increased clinical macrovascular events, as time progresses.
In conclusion, the primary prevention of macrovascular disease could be achieved through intensified, multifactorial intervention in patients with type 2 diabetes. Patients with short-duration type 2 diabetes should receive an intensive multifactorial intervention approach with individual targets for the prevention of macrovascular diseases.
An explicitly designed paratope of amyloid-β prevents neuronal apoptosis in vitro and hippocampal damage in rat brain
Synthetic antibodies hold great promise in combating diseases, diagnosis, and a wide range of biomedical applications. However, designing a therapeutically amenable, synthetic antibody that can arrest the aggregation of amyloid-β (Aβ) remains challenging. Here, we report a flexible, hairpin-like synthetic paratope (SP1, ∼2 kDa), which prevents the aggregation of Aβ monomers and reverses the preformed amyloid fibril to a non-toxic species. Structural and biophysical studies further allowed dissecting the mode and affinity of molecular recognition events between SP1 and Aβ. Subsequently, SP1 reduces Aβ-induced neurotoxicity, neuronal apoptosis, and ROS-mediated oxidative damage in human neuroblastoma cells (SH-SY5Y). The non-toxic nature of SP1 and its ability to ameliorate hippocampal neurodegeneration in a rat model of AD demonstrate its therapeutic potential. This paratope engineering module could readily implement discoveries of cost-effective molecular probes to nurture the basic principles of protein misfolding, thus combating related diseases.
Introduction
The deposition of amyloid fibrils is associated with numerous protein-misfolding diseases, including Alzheimer's, Parkinson's, and Huntington's disease, prion diseases, and type-2 diabetes. 1,2 The detailed molecular mechanism of Alzheimer's disease (AD) is not fully understood yet. However, growing lines of evidence suggest that the aggregation of the amyloid-β peptide (Aβ) from native non-toxic monomers to highly toxic amyloid fibrils in the extracellular space and the formation of neurofibrillary tangles (NFTs) in neurons are the principal hallmarks of the pathogenesis of AD. 3,4 In the past two decades, numerous strategies have been explored to find a cure for AD. 5 These strategies involve metal chelators, nanoparticles, the amyloidogenic core region (KLVFF) 6-8 or other fragments of the Aβ peptide, 9,10 chemical chaperones, 11,12 peptide-based inhibitors, 13-16 small molecules, 5,17 and conformation-selective antibodies. 18-21 Antibody-based drug design is the most intriguing, as antibodies engulf and eliminate the toxic Aβ species. Besides, antibodies have demonstrated the scope and potential of immunotherapy. Nevertheless, they are associated with severe adverse effects such as Fc-mediated pro-inflammatory immune responses. Recently, affibodies 22-26 have been shown to prevent the self-aggregation of Aβ by encapsulating the Aβ peptide and to reduce pro-inflammatory immune responses, which has led to a novel therapeutic approach against AD. 18-27 Among the mentioned strategies, rationally designed short peptides derived from a self-aggregation site of Aβ have shown promising results even in clinical trials, with superior bioavailability and less toxicity. 5,28 Here, we aimed to construct an explicitly designed synthetic paratope inspired by a peptide fragment of Aβ that could potentially be a clinical candidate for targeting Aβ. A paratope is the part of an antibody that selectively recognizes the epitope region of an antigen. 18-21 The knowledge from prior investigations by our group and numerous reports has enabled us to construct a flexible, parallel β-hairpin-like synthetic paratope (SP1, Fig. 1a and b). The designed SP1 is smaller than any existing antibody or affibody. We explored its efficiency in binding to Aβ using various spectroscopic techniques. An atomic-scale mechanistic study by NMR dissected the recognition mechanism. We show through different in vitro studies that SP1 remarkably disaggregates preformed Aβ aggregates and potentially dissolves Aβ plaques. Besides, SP1 reduces Aβ40-induced cytotoxicity, oxidative stress-mediated apoptotic events, and dysregulation of Ca2+ homeostasis in human neuroblastoma SH-SY5Y cells. 29,30 SP1 also ameliorates Aβ40-induced ROS generation and modulates apoptosis signalling in the cells. Notably, SP1 shows therapeutic potential in vivo through low toxicity and amelioration of hippocampal neurodegeneration.
Results and discussion
Design and synthesis of the paratope

The π–π stacking interactions play a central role in the self-assembly processes of most amyloidogenic proteins, leading to their aggregation and disease progression. 31,32 The central core hydrophobic region of the Aβ peptide (LVFFA), which acts as a self-recognition unit, was chosen as a strand in the designed hairpin-like SP1. Two strands were connected in parallel through a flexible unit (PEG) to construct the complete synthetic paratope molecule (Fig. 1a, b and ESI Scheme 1 †). We introduced N-methylation 33,34 at alternate amino acids in each strand, preventing self-aggregation by blocking intermolecular H-bonding (Fig. 1a). By the same principle, N-methylation should not allow further aggregation of the Aβ peptide captured by SP1. Since the central core hydrophobic region (epitope) 6-8 of Aβ is crucial for self-aggregation and senile plaque formation, we designed SP1 in such a way that it can selectively bind to the epitope and capture Aβ from both sides with its two strands, as proposed in Fig. 1c. We introduced a control β-breaker peptide (CBp) with only one N-methylated strand of SP1. In addition, five more peptides conjugated with suitable fluorophores were prepared to investigate the mechanism of interaction between SP1 and Aβ40 (ESI Table 1 and Fig. 1-7 †). In the beginning, we confirmed that SP1 and CBp are non-amyloidogenic using combined CD, FTIR, TEM, and birefringence analyses (ESI Fig. 8 and 9 †).
Inhibition of Aβ40 amyloid formation
To investigate the inhibitory effect of SP1 on Aβ40 fibrillization, we performed various biophysical assays in the presence of different doses (0.5, 1, 2, and 5-fold molar excess of SP1), with CBp used as a control. First, we monitored the kinetics of amyloid formation of Aβ40 (50 μM) by the thioflavin T (ThT) assay (Fig. 2a and b). The Aβ40 peptide alone aggregated with time, with a growth saturation point around 72 h, as evident from the surge in ThT fluorescence intensity. However, in the presence of SP1, the fluorescence intensity decreased in a dose-dependent manner (Fig. 2a), and specifically, a 5-fold molar excess produced ~66% inhibition (Fig. 2b). Likewise, SP1 disaggregated the characteristic fibrillar aggregates of Aβ40 under TEM (Fig. 2c) in a dose-dependent manner. We observed a few fibrillar aggregates at the lowest dose of SP1, demonstrating the essential requirement of an optimal concentration for Aβ40 disaggregation. Green-gold birefringence under cross-polarized light after staining with Congo red dye is a standard signature of Aβ40 aggregation; accordingly, aggregated Aβ40 appeared as green-gold birefringence (Fig. 2d). Upon treatment with various doses of SP1, this birefringence (Fig. 2d) disappeared, except at the lowest dose. In contrast, the control peptide (CBp) showed ~45% inhibition of Aβ40 peptide aggregation in the same set of experiments performed in parallel (ESI Fig. 10a-c †). We noticed that a 5-fold dose is the minimal requirement for CBp to inhibit aggregation, whereas equimolar SP1 completely prevents it.
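The percentage inhibition values quoted from the ThT assay follow from comparing plateau fluorescence with and without the inhibitor. A minimal sketch of that arithmetic is given below; the plateau readings are invented placeholders, not data from the paper:

```python
def percent_inhibition(f_abeta: float, f_abeta_plus_sp1: float, f_blank: float = 0.0) -> float:
    """ThT plateau fluorescence of Abeta alone vs. Abeta + inhibitor, blank-corrected."""
    return 100.0 * (1.0 - (f_abeta_plus_sp1 - f_blank) / (f_abeta - f_blank))

# Hypothetical plateau readings (arbitrary units): ~66% inhibition
print(percent_inhibition(f_abeta=1000.0, f_abeta_plus_sp1=340.0))
```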
Disruption of the preformed Aβ40 aggregates
Understanding the kinetics of fibrillization led us to design an in vitro experiment on the disaggregation of preformed fibrils. In this experiment, the 60 h aged Aβ40 peptide was incubated further for 180 h (total 240 h) with SP1 and CBp separately at varying doses (0.5, 1, 2, and 5-fold molar excess). The high ThT fluorescence intensity of the preformed Aβ40 fibrils was suppressed substantially with increasing doses of SP1 and the control peptide (Fig. 2e and ESI Fig. 11a †). Distinctly, we observed ~57% (Fig. 2f) and ~39% (Fig. 2f) disruption of the preformed fibrils by SP1 and CBp, respectively. We also performed TEM and Congo red staining experiments to examine the efficacy of SP1 in disrupting the preformed Aβ40 fibrils. Equimolar or higher doses of SP1 disrupted the preformed fibrillar assembly of Aβ40, as confirmed by TEM (Fig. 2g). In comparison, CBp disrupted the Aβ40 fibrils only when applied at a 2-fold or higher molar excess. Upon incubation with an equimolar concentration of SP1, a remarkable disappearance of the preformed Aβ40 aggregates was evident in the Congo red birefringence staining experiment (Fig. 2h). However, we observed a significant disappearance of birefringence only at a 5-fold molar excess of the control peptide (ESI Fig. 11b and c †). Notably, SP1 failed to demonstrate fibril-disrupting efficacy against Aβ40 at a 0.5-fold molar concentration. Collectively, these results strongly demonstrate that SP1 is more efficient than CBp in disaggregating fibrils of Aβ40.
Inhibition of Aβ42 amyloid formation
Among all existing isoforms of Aβ, the aggregation of Aβ42 causes the most significant neurotoxicity. 35 We therefore examined the efficacy of SP1 in inhibiting Aβ42 aggregation. We performed biophysical experiments similar to those described earlier for Aβ40. The ThT assay showed that the aggregation rate of Aβ42 was much faster than that of Aβ40 at a 50 μM concentration (Fig. 3a), with aggregation starting immediately and reaching a plateau within 20 h. ThT fluorescence intensity decreased in a dose-dependent manner when Aβ42 aggregates were treated with SP1 (Fig. 3a and b).
Around 84% inhibition of Aβ42 aggregation was observed when treated with a 5-fold molar excess of SP1 (Fig. 3b).
The aggregated Aβ42 in the absence of SP1 showed densely populated fibrillar structures under TEM (Fig. 3c), indicating the amyloid signature, as previously reported. 36 In the presence of SP1, the density of Aβ42 fibrils was reduced in a dose-dependent manner. Also, untreated Aβ42 aggregates exhibited green-gold birefringence upon staining with Congo red (Fig. 3d), and this signal was decreased in a dose-dependent manner by SP1. Collectively, these examinations indicate that the efficiency of SP1 in inhibiting Aβ42 aggregation is comparable to that observed with Aβ40.
SP1 reduces Aβ aggregate-induced dye leakage from LUVs
Smaller Aβ oligomers or protofibrils are more toxic than mature fibrils in AD progression owing to their ability to disrupt membranes via pore formation. 37,38 Therefore, it is essential to examine whether SP1 can convert the toxic oligomeric species of Aβ into a non-toxic one. To evaluate this, we performed a membrane leakage assay on carboxyfluorescein-loaded large unilamellar vesicles (LUVs). 38 The time required for Aβ40 oligomer and mature fibril formation is 12 h and 72 h, respectively (inferred from the ThT assay, black curve, Fig. 2a), which guided the set-up of the LUV leakage assay. Dye-loaded LUVs were incubated with the corresponding Aβ40 oligomers (12 h aged), mature fibrils (72 h), and Aβ40 fibrils freshly disaggregated in solution by SP1 or CBp (ESI Fig. 12b and c †). The fluorescence intensity of complete dye release from LUVs by Triton X-100 served as a positive control (100% leakage), and untreated dye-loaded LUVs served as a negative control. The Aβ40 oligomers (12 h aged) caused rapid dye leakage of ~40% within 100 min (ESI Fig. 12b and c †), whereas mature Aβ40 fibrils caused ~15% leakage and the untreated LUVs showed a minimal leakage of ~9% (ESI Fig. 12b and c †) during the same period. These results establish that the Aβ40 oligomers trigger more dye leakage than the mature fibrils, and hence are likely to be more toxic. 38 Membrane disruption by Aβ40 proceeds through a two-step mechanism. 39-41 In the first step, Aβ40 monomers self-assemble to form soluble oligomers, which bind to the lipid membranes to form small ion-selective channel-like pores. During pore formation, the oligomers of Aβ40 further self-assemble into larger aggregates that lead to the formation of mature fibrils, which are released from the membrane. In the second step, the onset of Aβ40 aggregation and fibril formation causes membrane disruption through a detergent-like mechanism. 39-41 Notably, the Aβ40 fibrils freshly disaggregated by SP1 or CBp did not considerably damage the LUV membrane, as evidenced by only ~10% leakage from the LUVs, which is comparable to that from untreated LUVs. These results collectively affirm the potential of SP1 for disassembling preformed fibrils and other oligomers of Aβ40 into an innocuous species.
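For reference, percent leakage in assays of this kind is usually scaled between the initial fluorescence of the intact vesicles and the reading after complete lysis with Triton X-100; a hedged sketch of that normalization is shown below (readings are placeholders, not the study's raw data):

```python
def percent_leakage(f_t: float, f_0: float, f_triton: float) -> float:
    """Dye leakage at time t, scaled so the initial reading is 0% and Triton X-100 lysis is 100%."""
    return 100.0 * (f_t - f_0) / (f_triton - f_0)

# Hypothetical readings (arbitrary units): oligomer-treated LUVs at 100 min
print(round(percent_leakage(f_t=420.0, f_0=100.0, f_triton=900.0)))  # 40 (%)
```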
Monitoring early events by DLS and TEM
The inhibition of Aβ42 oligomer or fibril formation by SP1 at different time intervals was further examined using DLS (Fig. 4c and d) and TEM (Fig. 4e and f). Untreated Aβ42 exhibited hydrodynamic diameters that grew with incubation time, reaching ~825 nm and ~940 nm at the later time points (Fig. 4c), indicating the formation of oligomers (at 1 h or 5 h) and mature fibrils (at 10 h and 20 h), as observed in previous reports. 42,43 SP1-treated samples exhibited hydrodynamic diameters of ~15 nm, ~340 nm, ~640 nm, and ~730 nm, respectively (Fig. 4d). These results indicated that incubation of Aβ42 with SP1 for 5 h or longer inhibited oligomer formation, as the smaller aggregates disappeared and were converted into larger, possibly amorphous aggregates. We further validated this phenomenon by TEM and observed that Aβ42 exhibited smaller aggregates at 1 h or 5 h. In contrast, dense fibrils appeared at 10 h or 20 h, consistent with oligomer formation at 1 h or 5 h, as observed in the DLS results. In the presence of SP1, we did not observe any fibrillar aggregates at any of the tested time intervals; instead, some amorphous aggregates were noted. These amorphous aggregates were non-toxic, as evident from the LUV experiments mentioned above and the cytotoxicity assay (vide infra). The formation of non-toxic amorphous species suggests that SP1 drives Aβ42 aggregation towards an off-pathway route, in line with a previous report. 43
Prevention of Aβ40-induced cytotoxicity
Since the protofibrils of Aβ species induce cytotoxicity in neuronal cells, 44,45 we investigated the potential of SP1 to inhibit Aβ40-induced neurotoxicity in human neuroblastoma SH-SY5Y cells as a cellular model system of AD. 29,30 Initially, we explored the toxicity of SP1 and did not observe any discernible cytotoxicity even at the maximum concentration (10 μM) used in the experiments (ESI Fig. 13a †). Further, the cells were incubated with 10 μM Aβ40 for 24 h in the absence or presence of graded concentrations of SP1 (0.5-10 μM). We observed a significant reduction in the viable cell population treated with Aβ40 alone compared to the negative control. However, co-incubation with SP1 ameliorated the toxic effect considerably at 5 μM (~82%), as determined by cell viability (ESI Fig. 13b †). We then explored the membrane damage induced by Aβ40 (ref. 45 and 46) using the lactate dehydrogenase (LDH) assay. Treatment with Aβ40 released a significant amount of cytosolic LDH into the culture medium of SH-SY5Y cells. Co-incubation of Aβ40 with SP1 at the respective concentrations (5 μM and 10 μM) revealed a substantial reduction in LDH leakage into the cell culture medium compared to cells treated with Aβ40 alone (ESI Fig. 13c †). These two findings indicate that SP1 at a molar ratio of 1 : 2 (SP1 : Aβ40) is sufficient to achieve maximum inhibition of Aβ40-mediated cytotoxicity. Interestingly, the protection against Aβ40-induced neuronal cell death was preserved for at least three days upon treatment with SP1 (5 μM) (ESI Fig. 13d and e †).
SP1 ameliorates oxidative stress injury, apoptosis, and Ca2+ dyshomeostasis
Condensed or fragmented nuclear bodies characterize the distinctive nature of apoptotic cells. To explore the anti-apoptotic and cytoprotective properties of SP1, we used Hoechst 33258 as a DNA-staining dye. A significant number of apoptotic cells were observed under a fluorescence microscope when the cells were treated with Aβ40 (10 μM) for 24 h (Fig. 5a and b) compared to untreated cells. Upon co-incubation with SP1 (5 μM), the number of apoptotic cells harbouring damaged DNA was markedly reduced (Fig. 5b). These findings illustrate the potency of SP1 in limiting Aβ40-induced DNA damage in SH-SY5Y cells. The underlying mechanism of this neuronal apoptosis and oxidative damage has been reported to be significantly influenced by ROS generation, which then triggers mitochondrial apoptotic events. 47,48 In another experiment, we observed that the intensity of a ROS-sensitive fluorescent marker in SH-SY5Y cells increased in the presence of Aβ40 compared to untreated cells, and co-incubation of the cells with Aβ40 and SP1 (ratio 1 : 2) significantly inhibited Aβ40-induced ROS production (Fig. 5c). Dyshomeostasis of Ca2+ is also responsible for the increased production of Aβ peptides, by which a degenerative feed-forward cycle is activated, resulting in accelerated apoptosis, synaptic dysfunction, and memory impairment. 49 To examine the effect of Aβ40 with or without SP1 on Ca2+ homeostasis, we measured intracellular free Ca2+ using the fluorescent Ca2+ indicator Fura-2AM. Our results demonstrated that Aβ40 (10 μM) significantly elevated intracellular Ca2+ levels in SH-SY5Y cells compared to untreated cells. Co-incubation at a molar ratio of 1 : 2 (SP1 : Aβ40) reduced Ca2+ levels to 98% compared to Aβ40-treated cells (Fig. 5d). These experiments showed that SP1 protects against Aβ40-induced Ca2+ dyshomeostasis by hindering the oligomeric conversion of Aβ40.
Effect of SP1 on Aβ40-induced apoptotic protein markers
Accumulation of Aβ triggers the generation of intracellular free radicals and leads to the activation of caspases via the release of cytochrome c from mitochondria. Bcl-2 family proteins, pro-apoptotic Bax, and caspases are well known to be involved in the mitochondrial apoptotic pathway. 50 Western blot analyses of SH-SY5Y cells suggested that Aβ40 upregulates the level of Bax and causes a slight change in the Bcl-2 level, resulting in a significant increase in the Bax/Bcl-2 expression ratio (~3.2-fold) compared to that in healthy cells. Interestingly, the expression of Bax protein was markedly downregulated by treatment with SP1 in SH-SY5Y cells for 24 h (Fig. 5e and f). Further, western blot analysis also revealed that the expression levels of cleaved caspase-9 and caspase-3 significantly decreased after incubation with SP1 for 24 h (Fig. 5e, g and h). In contrast, treatment with Aβ40 alone led to activation of caspase-3-directed DNA breakage, nuclear chromatin condensation, and neuronal apoptosis. These outcomes confirm the active suppression by SP1 of Aβ40-mediated mitochondrial apoptosis and cell death through inhibition of Aβ oligomer formation.
Evaluation of acute and sub-chronic toxicity of SP1 in vivo
We assessed the toxicity of SP1 in Sprague-Dawley rats as per our previous report. 51 A total of 24 rats were used in this study (n = 8/group) and divided into three groups: group 1, control, no treatment; group 2, received 100 mg kg−1 of SP1; and group 3, received 500 mg kg−1 of SP1. SP1 was administered into the tail vein once a day for 42 days. We did not observe any cytoarchitectural changes in the liver and kidney tissue after injection of the two different doses in the respective groups (Fig. 6a). Interestingly, SP1 caused neither mortality nor abnormal behavioral patterns in the rats. Besides, we did not observe any significant changes in the rats' body weight at the two doses (100 mg kg−1 and 500 mg kg−1) compared to the respective control groups on days 7, 21, and 42 of the sub-chronic study (ESI Fig. 14 †). Notably, we did not find any severe changes in the hematological and biochemical parameters in the group of rats treated with SP1 (100 mg kg−1) even after 42 days. Almost similar observations were made in rats treated with SP1 (500 mg kg−1), except for monocyte and SGOT levels (ESI Table 3 †). The tabulated biochemical profile (novel biomarkers of the liver and kidney) corroborates the safety of SP1 for further in vivo studies.
SP1 ameliorates hippocampal neurodegeneration in rat brain
The overproduction of Aβ damages hippocampal neurons and causes cognitive impairment in AD. The preceding data motivated us to explore the potential of SP1 in ameliorating hippocampal neurodegeneration. Cresyl violet staining was performed to identify Nissl granules in neurons and thereby reveal hippocampal neurodegeneration in this experiment. One-way ANOVA showed a significant contrast in the intensity of granules in the hippocampus between the groups [F (4, 25) = 102.4, P < 0.001].
Furthermore, Tukey's post hoc test suggested that intrahippocampal microinjection of toxic Aβ40 caused a significant (P < 0.01) decrease in the intensity of Nissl granules in hippocampal neurons (Fig. 6b) compared to the control and sham groups, indicating that the neurons had degenerated. However, SP1 treatment at both dosages (40 μM and 100 μM) in pre-Aβ40-injected rats significantly (P < 0.01) reduced the degeneration of hippocampal neurons, as demonstrated by the intensity of Nissl granules (Fig. 6b). Hence, we established that SP1 treatment exerts a neuroprotective function against Aβ40-induced neurotoxicity.
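The reported one-way ANOVA [F(4, 25) = 102.4] and Tukey post hoc comparisons correspond to the standard workflow sketched below. The group labels and intensity values are invented stand-ins for the Nissl-granule measurements; five groups of six rats reproduce the stated degrees of freedom.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical Nissl-granule intensity scores, six rats per group
groups = {
    "control":            [100, 98, 102, 99, 101, 100],
    "sham":               [99, 101, 98, 100, 102, 99],
    "abeta40":            [60, 62, 58, 61, 59, 60],
    "abeta40_sp1_40uM":   [80, 82, 79, 81, 83, 80],
    "abeta40_sp1_100uM":  [90, 92, 89, 91, 93, 90],
}

f_stat, p_value = stats.f_oneway(*groups.values())  # one-way ANOVA, df = (4, 25)

values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(f_stat, p_value)
print(pairwise_tukeyhsd(values, labels))             # pairwise post hoc comparisons
```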
Investigation of the interaction between Aβ and SP1
High-resolution 2D heteronuclear multiple quantum coherence (HMQC) NMR experiments were performed with 80 μM Aβ40 and increasing concentrations of SP1 (titrated up to a molar ratio of 1 : 10). The Aβ40 backbone amide resonances showed concentration-dependent, residue-specific chemical shift perturbations (CSPs) in the presence of SP1. At a molar ratio of 1 : 10, the molecular interaction resulted in notable CSPs, specifically for the central hydrophobic K16LVFFA21 region (Fig. 7a). Similar observations were also made for the C-terminal region, particularly the I31IGL34 stretch and the hydrophobic V36, V39, and V40 residues (Fig. 7b). These observations clearly indicated the specific involvement of these hydrophobic-rich segments in the molecular association with SP1. Recent studies have highlighted the K16LVFFA21 segment as essential for the fibrillation propensity of Aβ40. 52-55 Extensive reports have provided evidence that this segment is closely associated with the dock-lock mechanism underlying Aβ nucleation events. Thus, the SP1-mediated perturbation of this crucial domain suggests molecular interference in the dock-lock interactions of monomeric Aβ, explaining the altered fibrillation. 55,56 Alternatively, the association of SP1 with the hydrophobic K16LVFFA21 and the C-terminal segments also stands to explain the reduced membrane damage and subsequent toxicity. These hydrophobic segments have been shown to internalize within the hydrophobic acyl region of lipid membranes, disrupting membrane integrity. 57 Our recent studies have shown the crucial role played by the C-terminal residues in mediating cytotoxicity. Our mutation-based studies have suggested a role for the GxxxG motifs of the C-terminus in aiding helix-helix association and regulating the Aβ fibrillation pathway. 58 Thus, a direct molecular association of SP1 with these segments indicates the inaccessibility of segments necessary for wild-type Aβ amyloidogenesis.
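For context, backbone-amide CSPs of this kind are commonly reported as a weighted combination of the 1H and 15N chemical shift changes; a standard form is shown below, where the nitrogen weighting factor is the usual convention rather than a value taken from this paper:

$$\Delta\delta_{\mathrm{obs}} = \sqrt{\left(\Delta\delta_{\mathrm{H}}\right)^{2} + \left(\alpha\,\Delta\delta_{\mathrm{N}}\right)^{2}}, \qquad \alpha \approx 0.14$$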
Interestingly, very similar observations were obtained from the residue-specific interaction studies between Aβ42 and SP1 (Fig. 7c). The HMQC profiles showed significant CSPs, specifically involving the central K16LVFF19 segment and the C-terminal hydrophobic residues, including G29, G33, V36, I41, and A42 (Fig. 7d). The direct association of the terminal residues in Aβ42 is further reminiscent of the reduced cytotoxicity mediated upon SP1 interaction. Reports have found the increased C-terminal stability of Aβ42 to be entropically favorable for cytotoxic fibrillation. 59 Thus, the high CSPs for the C-terminal residues of Aβ42 corroborate well with the functional implication of SP1 in modulating the aggregation propensity of Aβ.
Next, singular value decomposition (SVD) was used to obtain the residue-specific binding affinity of SP1 for Aβ40. The CSPs for Aβ40 with SP1 were adjusted for both ΔδN and ΔδH to extract the dissociation constant (KD) for each residue (see the NMR method and ESI Fig. 15-17 † for details). Comparatively lower KD values of ~200 μM were obtained for the residues R5, L17, V24, K28, and G29 (ESI Table 4 †) of Aβ40, indicating their functional unavailability for fibrillation. Once again, these data support the inhibition of the dock-lock mechanism of Aβ40 by SP1.
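Residue-specific KD values of this kind are typically obtained by fitting the CSP titration to a 1:1 binding isotherm. The sketch below illustrates such a per-residue least-squares fit; the titration points are invented, the protein concentration is fixed at the 80 μM used in the HMQC titration, and the paper's actual analysis used SVD rather than this simple form.

```python
import numpy as np
from scipy.optimize import curve_fit

def csp_1to1(L_total, csp_max, kd, P_total=80e-6):
    """CSP for 1:1 binding in fast exchange; concentrations in mol/L."""
    b = P_total + L_total + kd
    bound_fraction = (b - np.sqrt(b**2 - 4.0 * P_total * L_total)) / (2.0 * P_total)
    return csp_max * bound_fraction

# Hypothetical SP1 concentrations (M) and observed CSPs (ppm) for one residue
L = np.array([0.0, 80e-6, 160e-6, 320e-6, 800e-6])
csp = np.array([0.0, 0.010, 0.017, 0.024, 0.031])

popt, _ = curve_fit(csp_1to1, L, csp, p0=[0.04, 200e-6], bounds=(0, np.inf))
print("K_D ~ %.0f uM" % (popt[1] * 1e6))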
The lack of transferred NOE peaks (trNOEs) restricted us from determining the three-dimensional structure of SP1 bound to Aβ40 (data not shown). Although the affinity of the designed paratope (SP1) is moderate, this is the first example of a synthetic paratope that prevents Aβ aggregation. However, we are working further to improve the molecular association.
We considered the two most plausible structural alignments of SP1, hairpin-like or linear (Fig. 8a and b). Since we observed that SP1 is non-amyloidogenic (ESI Fig. 8 †), self-aggregation of the hairpin-like conformation can be ruled out. If SP1 adopts a hairpin-like structure (Fig. 8a), it should exhibit intramolecular FRET, whereas a straight-chain alignment should give intermolecular FRET (Fig. 8b). Interestingly, SP1C showed a unique FRET event, in contrast to the mixture of equimolar SP1A and SP1B, which did not show a significant change in emission. The time-resolved fluorescence study further confirmed a similar observation (ESI Table 5 and Fig. 18 †). These combined pieces of information, together with the calculated Förster radius (R0) 60-63 of the donor/acceptor system of 27.9 Å (Section 1.2.j in the ESI †), further corroborate the U-shaped or hairpin-like structural alignment of SP1 (Fig. 1b and 8a). Also, we obtained direct evidence of interaction between SP1 and the homologous sequence of the Aβ40 peptide, resulting in substantial FRET events and a positive indication from time-resolved fluorescence (ESI Table 6 and Fig. 19 †). The aforementioned studies were conducted to probe the interaction of SP1A with LP1B and of SP1B with LP1A by incubating equimolar concentrations. The data revealed through all the present studies allowed us to propose two plausible modes of interaction, which account for the inhibition of amyloid formation and the disruption of preformed aggregates of Aβ40 by SP1 (Fig. 8c and d). The proposed models were further validated through the FRET and time-resolved fluorescence studies and the Förster radius (R0) calculations.
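For reference, the Förster radius enters the standard distance dependence of FRET efficiency; with R0 ≈ 27.9 Å calculated for this donor/acceptor pair, the efficiency falls to 50% at a separation of r = R0:

$$E = \frac{R_0^{6}}{R_0^{6} + r^{6}}$$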
Briefly, a significant FRET event was observed when SP1C was mixed into the solution of Aβ40 pre-captured by SP1, whereas no FRET events resulted when the mixture (SP1A + SP1B) was added to the solution of Aβ40 pre-captured by SP1 (ESI Fig. 20 and Table 7 †).
We have noted earlier that the hairpin-shaped conformation of SP1 remains unaltered in the presence of Aβ40 aggregates. Therefore, these experiments together clarify that the U-shaped synthetic paratope prevents amyloid oligomer formation, most probably through the zipping action proposed in Fig. 8c.
To target a particular epitope of various amyloidogenic proteins, including tau, Aβ, α-synuclein, and β2-microglobulin, and to antagonize their aggregation, Nowick and co-workers previously developed rationally designed 54-membered cyclic peptides, amyloid β-sheet mimics (ABSMs). 64-68 The ABSM peptide comprised two strands linked by two ornithine δ-linkages. One of the two strands was selected for recognition of the target amyloidogenic protein, whereas the other strand contained an unnatural amino acid, "Hao", used as a β-sheet breaker unit to inhibit the aggregation of the target disease protein. Owing to their cyclic structure, these peptides were less flexible yet effective in inhibiting the aggregation of various amyloidogenic proteins. In contrast, in the present study, we designed a hairpin-like flexible synthetic paratope comprising two epitope-binding peptides connected through a PEG linker. The synthetic paratope can bind to the target epitope of the Aβ peptide from both sides, and the presence of N-methylation on alternate amino acids does not allow Aβ monomers to self-assemble to form amyloids.
The structural design of ABSM containing a "Hao" unit prevented the cyclic peptide from aggregating in solution into a larger network of β-sheets; instead, it dimerized and further self-assembled into oligomers. 64 In contrast, the synthetic paratope (SP1) did not self-assemble to form oligomers or larger aggregates, owing to the presence of N-methylation at alternate amino acids. In addition, the presence of PEG groups in the synthetic paratope, in contrast to the hydrophobic side chain of ornithine in ABSM, increases aqueous solubility, which is an essential factor from a therapeutic perspective. Moreover, due to its hairpin-like structure, the synthetic paratope exhibits more flexibility and can possibly show better efficacy in inhibiting the aggregation of Aβ or other amyloidogenic proteins than the existing peptide probes.
Conclusion
In the present study, we have demonstrated the design, synthesis, and characterization of a synthetic paratope (SP1) that selectively binds to the epitope LVFFA, a vital amyloidogenic part of the Aβ peptide. A series of in vitro biophysical experiments, including NMR, support the inhibition of Aβ40 and Aβ42 aggregation by SP1 at atomic resolution. SP1 was also equally efficient in disaggregating the preformed fibrillar assembly of the Aβ40 peptide into non-toxic species. We speculate that the synthetic paratope may further enable the design of an affinity tag for AD diagnosis, a reporter of the Aβ40 peptide, and a PROTAC-type therapeutic against AD. The ability of SP1 to ameliorate Aβ40-induced neurotoxicity, ROS generation, and apoptosis and to maintain intracellular Ca2+ homeostasis is remarkable for the further construction of suitable anti-apoptotic and anti-inflammatory peptide probes. The designed, non-toxic synthetic paratope may gain considerable attention for ameliorating Aβ-induced hippocampal neurodegeneration, as corroborated by preliminary in vivo studies in a rat model of AD. In-depth animal investigations may be carried out after improving its binding affinity, which is currently in the micromolar range, to the nanomolar level. Further improvement of solubility and enzymatic stability is also required.
We believe that the dissected zipping mechanism for capturing the Aβ40 peptide by a synthetic paratope will significantly facilitate the design of a great variety of paratopes. Such a smartly designed molecular construct may also find applications in diverse directions spanning chemical biology, diagnostics, and therapeutics. Our findings suggest that, owing to its structural flexibility and moderate-to-weak affinity towards the target epitope, the synthetic paratope might lead to potential hit discoveries against Alzheimer's disease, extendable further to other amyloidoses.
Conflicts of interest
The authors declare no conflict of interest.
Determinants of adolescent childbearing in Ethiopia, analysis of 2016 Ethiopian demographic and health survey: a case-control study
Background Pregnancy and birth complications experienced by adolescents are also problems of older women, but they are more severe among the young owing to physical immaturity and social exclusion from basic reproductive health services. The study aimed to analyze determinants of adolescent childbearing in Ethiopia using the Ethiopian demographic and health survey. Method The data source for this study was the 2016 demographic and health survey. Records of 359 cases and 1436 randomly selected controls (1:4 ratio) were included in the analysis. Adolescent childbearing was the main outcome variable, and the independent variables were sociodemographic and sexual & reproductive factors. Multivariable logistic regression analysis was used to identify factors associated with adolescent childbearing. Result The mean age of girls at first cohabitation was 15.28 ± 1.64 years and the mean age at first birth was 16.47 ± 1.35 years. Adolescent childbearing was highest in the Afar region (34.8%) and lowest in Addis Ababa city (4.1%). Findings from the multivariable analysis showed that place of residence, age at the time of the survey, and age at first sexual intercourse were the factors associated with adolescent childbearing. The odds of childbearing were higher among rural residents (AOR = 1.74; 95% CI: 1.12, 2.72) and with early (<18 years) initiation of sexual intercourse (AOR = 12.5; 95% CI: 5.97, 25.18), and the risk was also higher among older teenagers (AOR = 7.92; 95% CI: 3.92, 15.90). Conclusion Place of residence, age, and timing of first sexual intercourse were found to be the influencing factors of adolescent childbearing. Our finding indicates that the place of residence of adolescent mothers must be considered in planning policies that attempt to disrupt successive cycles of socioeconomic deprivation. Public health interventions should be community-based and aim at the prevention of early sexual intercourse and marriage.
Introduction
Adolescence is a period of vulnerability in human development, as it represents the transition from childhood to physical and psychological maturity. It is in this period that adolescents learn and develop skills on critical aspects of their health and that their bodies mature. Childbearing during adolescence is not only a risk factor for adverse birth outcomes but also harms the future well-being of the mother and the child 1 . Due to a lack of adequate sexual and reproductive health (SRH) services, adolescents are often exposed to early and unprotected sexual intercourse, unintended pregnancy, unsafe abortion, HIV infection, substance abuse, child marriage, and other SRH problems 2 .
Poor maternal conditions are the leading cause of mortality among girls aged 15-19 globally. In 2018, the estimated adolescent birth rate worldwide was 44 births per 1,000 girls aged 15 to 19, and in West and Central Africa it was 115 births, the highest regional rate in the world. In developing regions, an estimated 21 million adolescent girls aged 15-19 years become pregnant every year, around 12 million of them give birth, and at least 777,000 births occur to adolescent girls younger than 15 years in these regions 3,4 . Adolescent pregnancy is a major reproductive health challenge in Ethiopia. The 2016 Ethiopia Demographic and Health Survey (EDHS) report showed an adolescent pregnancy rate of 13% 5 . A study in Ethiopia shows a slow decline in the adolescent childbearing rate from 2000 to 2016 (16.5% to 12.5%) 6 . According to the 2013 UNFPA report, Ethiopia was ranked among the top 10 countries with the highest number of women aged 20 to 24 years who gave birth by their eighteenth birthday 7 .
Adolescent pregnancy and childbearing have multidimensional implications for a nation, such as lost educational opportunity, population growth, and ill-health of women and children. As a result, prevention of early marriage and reduction of adolescent pregnancy have been the focus of attention of several governmental and non-governmental organizations 8 . Previous studies have shown that maternal and perinatal morbidity and mortality can be reduced by lowering the high rate of teenage pregnancy in developing countries 9 . Consequently, reducing the high rates of adolescent pregnancy and maternal mortality is considered a key target of the Sustainable Development Goals (SDGs). However, studies at the national level that identify the determinant factors for this high burden of adolescent childbearing in Ethiopia are limited. Therefore, this study was intended to investigate factors associated with adolescent childbirth using 2016 EDHS data.
Study setting
According to World Bank data, Ethiopia is the second-most populous country in Africa, with a population of 105 million in 2017 10 . According to the report from the 2016 EDHS, almost half (47%) of the Ethiopian population is in the age group of <15 years 5 . Administratively, Ethiopia is divided into nine regional states (Tigray, Afar, Amhara, Oromiya, Somali, Benishangul-Gumuz, Southern Nations Nationalities and People (SNNPR), Gambela, and Harari) and two city administrations (Addis Ababa and Dire Dawa).
Data set and population
The study used data from the Ethiopian demographic and health survey conducted in 2016, which is the fourth comprehensive survey. It was a community-based cross-sectional survey conducted from January 18, 2016, to June 27, 2016, across the country, and it is available from the MEASURE DHS database at https://dhsprogram.com/data/available-datasets.cfm. In this survey, data were collected on household characteristics among women aged 15-49. For our study, data were extracted from the women's questionnaire, particularly those of adolescents aged 15-19 years.
Eligibility criteria
Inclusion criteria
Cases: all adolescent girls aged 15-19 years who had given birth.
Controls: adolescent girls aged 15-19 years who had not given birth.
Exclusion criteria
Women who lacked critical information were excluded from the analysis.
Sample size and sampling procedure
The EDHS survey was designed to represent all regions and administrative cities in the country. The 2016 EDHS participants were selected in two stages. Initially, a total of 645 enumeration areas (202 urban and 443 rural) were randomly selected proportional to household size from the sampling strata, and in the second stage, 28 households per cluster were selected using systematic random sampling. A total of 18,008 sample households were selected, and of these, 16,650 households (98% response rate) were successfully interviewed in the 2016 EDHS. There were 16,583 eligible women, of whom 15,683 were interviewed, and 3,498 adolescents participated. For our analysis, we used 359 cases (adolescents aged 15-19 who had given birth) and 1,436 randomly selected controls among adolescents aged 15-19 who had not given birth at the time of the survey, considering a 1:4 case-to-control ratio.
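As a purely illustrative sketch of the 1:4 case-control selection described above (the data frame and column names are hypothetical, not the EDHS file layout):

```python
import pandas as pd

def select_case_control(adolescents: pd.DataFrame, ratio: int = 4, seed: int = 2016) -> pd.DataFrame:
    """Keep all cases and draw ratio-times-as-many controls at random."""
    cases = adolescents[adolescents["gave_birth"] == 1]
    controls = adolescents[adolescents["gave_birth"] == 0].sample(
        n=ratio * len(cases), random_state=seed
    )
    return pd.concat([cases, controls], ignore_index=True)

# e.g. 359 cases would yield 4 * 359 = 1436 randomly selected controls
```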
Study variables
The main outcome variable for this study was having given birth during the adolescence period. It was ascertained by asking the age of the woman at the time of her first birth, which is recorded as one variable in the survey data.
Independent variables
The predictor variables were examined by categorizing them into socio-demographic background and proximate determinants of teenage childbearing. The socio-demographic factors included place of residence (urban and rural), region (Tigray, Afar, Amhara, Oromia, Somali, Benishangul, SNNPR, Gambela, Harari, Dire Dawa, Addis Ababa), sex of the household head, household wealth (poorest, poorer, middle, richer, richest), and educational status of both the woman and her husband (no education, primary, secondary, higher). The proximate factors included the decision-maker for using contraception, age at first cohabitation, unmet need for contraception, knowledge of the ovulatory cycle, age at first sex, contraceptive use, and decision-maker to marry. Knowledge of ovulation was measured based on the response regarding the timing of ovulation; those who responded "at the middle" or "after menstrual bleeding" were considered knowledgeable.
Data processing and analysis
The data were obtained from the MEASURE DHS database, and the needed variables were extracted into a new SPSS file with appropriate modification of the data into a form suitable for our analysis. Summary descriptive statistics were computed for both cases and controls. Simple logistic regression analysis was done first to identify factors associated with the outcome variable (giving birth during the adolescence period) at the bivariate level. Factors with a P-value <0.25 in the simple logistic regression analysis were then candidates for the multivariable logistic regression analysis. Finally, variables with a P-value <0.05 in the multivariable analysis were declared statistically significantly associated with the outcome variable, with 95% confidence intervals of the odds ratios. Final model fitness was checked with the Hosmer-Lemeshow goodness-of-fit test (P > 0.05 indicating adequate fit), which yielded P = 0.918.
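A hedged sketch of the two-stage screening described above, using Python/statsmodels in place of SPSS; the variable names are placeholders, and the Hosmer-Lemeshow test (not built into statsmodels) is omitted:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

CANDIDATES = ["residence", "region", "wealth_index", "education", "age_group", "age_first_sex"]

def screen_then_model(df: pd.DataFrame) -> pd.DataFrame:
    # Stage 1: simple (bivariable) logistic regression; keep predictors with any P < 0.25
    kept = []
    for var in CANDIDATES:
        fit = smf.logit(f"gave_birth ~ C({var})", data=df).fit(disp=0)
        if fit.pvalues.drop("Intercept").min() < 0.25:
            kept.append(var)

    # Stage 2: multivariable model with the retained predictors
    final = smf.logit("gave_birth ~ " + " + ".join(f"C({v})" for v in kept), data=df).fit(disp=0)

    # Adjusted odds ratios with 95% confidence intervals (declare P < 0.05 as significant)
    table = np.exp(pd.concat([final.params, final.conf_int()], axis=1))
    table.columns = ["AOR", "CI 2.5%", "CI 97.5%"]
    return table
```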
Socio-demographic characteristics
A total of 1795 women aged 15-19 (359 who had at least one birth and 1436 who had not given birth) were included in the analysis. The mean age of the adolescents was 17.06 ± 1.36 years. About two-thirds of the adolescents included were urban residents. Nearly 19% of the adolescents were uneducated, and more than two-thirds (68.6%) were unmarried. Among those who were married (31.4%), around 44% had been attending school before marriage. Approximately 37% of the adolescents were married to husbands who could not read and write (Table 1). More than one-third (36.6%) of participants did not intend to use any contraceptive method, and 55% were non-users who intended to use one later. Almost 37% of adolescents did not know their source of family planning methods. The majority (73.5%) of them reported a joint decision on using contraceptive methods to achieve their desired family size. More than one-fourth (28.5%) of participants responded that beating is justifiable following refusal to have sex with their husband.
Determinants of adolescent childbearing
Household wealth index, educational status, place of residence, age at first cohabitation, and beating following refusal to have sex were included in the multivariable logistic regression. In the multivariable model, place of residence and age at cohabitation showed a significant association with adolescent childbearing. The odds of childbearing among adolescents who were rural residents were about 1.74 (AOR = 1.74; 95% CI: 1.12, 2.72) times higher than among those residing in urban areas. Age at the time of the survey was the other predictor that showed an association with adolescent childbearing. The odds of teenage childbearing were 12.5 (AOR = 12.5; 95% CI: 5.97, 25.18) times higher among older (≥18) teenagers.
In comparison to adolescents who had started sexual intercourse at ≥18 years, the estimated odds of childbearing were 7.92 (AOR = 7.92; 95% CI: 3.92, 15.90) times higher among those who had initiated sexual intercourse before eighteen (Table 3).
Discussion
This study was conducted to identify determinants of teenage childbearing in Ethiopia using the 2016 EDHS.
Teenage childbearing varied based on different sociodemographic factors; it was higher (34.8%) in the Afar region and lower (4.1%) in Addis Ababa. This indicates that much effort is needed to narrow such a gap and lower the high prevalence of teenage childbearing in pastoral regions. The possible reasons for such variation among geographical regions could be sociocultural differences towards early marriage and childbearing, and differences in the accessibility and utilization of sexual and reproductive health services.
Our study found that place of residence has a significant association with adolescent childbearing. Teenagers who reside in rural communities are at higher risk of giving birth before they turn nineteen compared to those who live in urban areas. This is in line with a finding from a community-based case-control study in Uganda and another study in Ethiopia 10,11 . This implies that comprehensive sexual and reproductive health issues are not addressed as well in rural areas as in urban ones. Living in rural areas may expose girls to early marriage and early childbirth, which may be followed by complications related to physiological and anatomical unpreparedness. A possible underlying reason for this is the lack of necessary reproductive health services, particularly for adolescents, who are usually a neglected group, and the issue is mostly taboo to discuss in such societies 12 .
The age of teenagers was also found to be a determinant of adolescent childbearing. Being an older teenager increases the odds of adolescent childbearing in comparison with younger teenagers. This is in line with other studies 13,14,15 . This indicates that the late teenage years have been wrongly perceived as an appropriate age for safe pregnancy and childbearing. It could be explained by the fact that, as age increases, the tendency to be involved in sexual activity is higher. This sexual activity might be followed by marriage, pregnancy, and childbirth, but it is consequential for them as it is early and they are not physiologically mature enough for a healthy conception. Early initiation of sexual intercourse was another predictor of teenage childbearing. Adolescent girls who started sexual intercourse before 18 years of age are more likely to give birth in their teens compared to those who initiated sexual intercourse after eighteen. This finding is consistent with other studies in Ethiopia and Brazil 11,16 . The implication here is that sexual practice at an earlier age is more likely to be unsafe and unprotected, and it is inevitably followed by conception and childbirth. Having early sexual intercourse might increase the risk of early pregnancy, whether intended or not, due to lack of information, unavailability and low utilization of SRH services by adolescents, and sexual abuse by intimate partners. This situation could endanger their sexual and reproductive health, not only through complications of pregnancy but also through associated sexually transmitted infections.
Conclusion
Our analysis showed that rural place of residence, being an older teenager, and early initiation of sexual intercourse are the influencing factors for the occurrence of adolescent childbearing. This suggests paying critical attention to adolescent girls living in rural parts of the nation and to sexual and reproductive health education for early adolescents. A reduction of teenage childbearing could thus be achieved by increasing the accessibility of adolescent-friendly SRH services for these neglected segments of reproductive-age girls.
In particular, rural residents need due attention, as they are threatened by underlying multidimensional socio-economic and cultural factors, and public health interventions should give great emphasis to such vulnerable groups of women.
Table 1: Socio-demographic characteristics of adolescent girls in Ethiopia using the 2016 DHS data

Sexual and reproductive characteristics of adolescents

About one-third (31.4%) of adolescents started sexual intercourse before they turned 18 years. The mean age of girls at first cohabitation was 15.28 ± 1.64 years and the mean age at first birth was 16.47 ± 1.35 years. Around 4.1% of adolescents were pregnant at the time of the survey. More than one-fourth of adolescents did not know when ovulation could occur, and only 17% of them correctly reported the possible time it could happen. The mean duration from marriage to first birth was 39.82 months (Table 2).
Table 2: Sexual and reproductive health history of adolescents in Ethiopia using the 2016 DHS

Table 3: Bivariate and multivariable analysis of factors associated with teenage childbearing using data from the Ethiopian 2016 DHS
The lung microbiome in patients with pneumocystosis
Background Pneumocystis jirovecii pneumonia (PCP) is an opportunistic fungal infection that is associated with high morbidity and mortality in immunocompromised individuals. In this study, we analysed the microbiome of the lower respiratory tract from critically ill intensive care unit patients with and without pneumocystosis. Methods Broncho-alveolar fluids from 65 intubated and mechanically ventilated intensive care unit patients (34 PCP+ and 31 PCP- patients) were collected. Sequence analysis of bacterial 16S rRNA gene V3/V4 regions was performed to study the composition of the respiratory microbiome using the Illumina MiSeq platform. Results Differences in the microbial composition detected between PCP+ and PCP- patients were not statistically significant at the class, order, family and genus levels. In addition, alpha and beta diversity metrics did not reveal significant differences between PCP+ and PCP- patients. The composition of the lung microbiota was highly variable between PCP+ patients and comparable in its variety with the microbiota composition of the heterogeneous collective of PCP- patients. Conclusions The lower respiratory tract microbiome in patients with pneumocystosis does not appear to be determined by a specific microbial composition or to be dominated by a single bacterial species. Electronic supplementary material The online version of this article (10.1186/s12890-017-0512-5) contains supplementary material, which is available to authorized users.
Pneumocystis jirovecii is an opportunistic human pathogenic fungus causing pneumocystosis, a severe pulmonary infection occurring mainly in immunosuppressed patients. In the 1980s and 90s, pneumocystosis predominantly developed in HIV patients with low CD4+ T cell counts and was classified as an acquired immunodeficiency syndrome (AIDS)-defining disease, associated with a high mortality rate [1]. Since the initiation of highly active antiretroviral therapy and the prophylactic administration of anti-Pneumocystis drugs to patients at risk, the disease frequency has decreased in this patient group [2]. In recent years, pneumocystosis became a serious matter of concern in patients with other types of immunosuppression such as solid organ transplant recipients, patients with haematological malignancies or connective tissue diseases [3]. Studies based on serological data show that most children have contact with the fungus within the first years of life [4][5][6]. Pneumocystosis occurs in most cases as an unapparent infection in immunocompetent children, and the fungus seems to colonize its host permanently or intermittently in low numbers [7].
The homeostasis of the composition of the normal respiratory tract flora is considered essential to prevent the expansion of pathogens. In the case of the fungus Aspergillus fumigatus, dysbiosis due to underlying pulmonary diseases or immune system dysfunction has been reported to cause uncontrolled fungal colonization, which may exacerbate into overt fungal disease [8]. Furthermore, it is widely accepted that microbial communities are a major regulator of the immune system and that alterations in the lung and/or gut microbiota may allow exacerbations of existing chronic lung diseases and can trigger susceptibility to new infections [9]. Besides CD4+ T cells, which play a major role in animal models of the host defence against Pneumocystis infection [10], other studies indicate that several other immune cells such as alveolar macrophages, dendritic cells, neutrophils and B lymphocytes are involved in the immunological response against this fungal pathogen [11]. In addition, the ecological determinants of the lung microbiome (immigration, elimination, and regional growth conditions) all change dramatically during acute or chronic lung infection [12]. Very recently, it was shown that respiratory infection with Pneumocystis murina influences the alpha and beta diversity of the gut microbiota of CD4-intact and CD4-depleted mice and also results in changes in taxa abundances, indicating the role of a gut-lung axis during Pneumocystis infection [13].
To the best of our knowledge, studies of the human lung microbiome during P. jirovecii pneumonia have been lacking so far. In this study, we evaluated the lung microbiota in broncho-alveolar lavages (BAL) from patients with pneumocystosis and from critically ill patients without Pneumocystis pneumonia (PCP) by sequencing bacterial 16S rRNA amplicons spanning the V3/V4 regions. Our aim was to determine whether a specific lung microbiome exists in pneumocystosis patients in comparison with the lung microbiome of other critically ill intensive care unit (ICU) patients.
Study design
In this retrospective observational study, we analysed the microbiome of BAL samples from intensive care unit patients treated in the University Hospital Essen, Essen, Germany. Thirty-four BAL samples of pneumocystosis (PCP+) patients treated between 2013 and 2016 were included in the analysis. Diagnosis of pneumocystosis was based on review of medical records, evaluation of radiological images, and a positive DNA result for P. jirovecii in real-time PCR (Sacace, Como, Italy). Furthermore, 31 BAL samples from patients with a negative Pneumocystis jirovecii PCR (PCP-) collected between 2015 and 2016 were used as controls. Only the first episode of pneumocystosis and one sample per patient was included in the analysis.
The study was performed in accordance with the Declaration of Helsinki and no written informed consent was necessary due to the retrospective design of the study. It was approved by the Ethics Committee of the Medical Faculty of the University of Duisburg-Essen (no. .
DNA isolation and sequencing
DNA was isolated from broncho-alveolar lavage samples as part of routine clinical practice using the Maxwell® 16 instrument (Promega, Madison, WI) with the Maxwell® 16 Tissue LEV Total RNA Purification Kit (Promega). DNA was stored at −20°C until further processing. The V3/V4 region of the 16S rRNA gene was amplified with the 341F forward and 785R reverse primers from Klindworth et al. [14], with an Illumina adapter overhang nucleotide sequence added 5′ of the locus-specific sequences. The sequences of the primer overhangs were taken from the Illumina 16S Metagenomic Sequencing Library Preparation Guide (www.illumina.com): forward primer overhang: 5′-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG and reverse primer overhang: 5′-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG. PCR was performed using the following steps: 95°C for 3 min and 30 cycles of 95°C for 30 s, 60°C for 30 s and 72°C for 30 s, with a final extension of 72°C for 5 min. PCR samples were run on a 1% agarose gel with a 100 bp ladder to check for amplification efficacy. PCR products were cleaned up using the Qiagen PCR purification kit (Hilden, Germany) and eluted in 30 μl TE buffer. 2.5 μl of the purified PCR product was used as a template for the second round of PCR using the N5XX and N7XX index primers of the Nextera XT Index Kit (Illumina, San Diego, CA). Each sample had a unique combination of N5XX and N7XX indices. PCR was performed with the following settings: 95°C for 3 min and 10 cycles of 95°C for 30 s, 55°C for 30 s and 72°C for 30 s, with a final extension of 72°C for 5 min. PCR samples were run on a 1% agarose gel. For purification of the PCR products with the QIAGEN PCR Purification Kit, 6 individual pools were generated containing similar amounts of PCR products as estimated by agarose gel electrophoresis. PCR products were eluted in 20 μl TE buffer and the DNA concentration of the sample pools was measured using the Qubit High-Sensitivity Assay (LifeTechnologies, Carlsbad, CA). All 6 sample pools were then combined to yield a single pool, which was quantified by qPCR using the NEBNext Library Quant Assay (NEB, Ipswich, MA, USA) and loaded on the flow cell at a concentration of 12 pM. A PhiX control library was spiked in at 3 pM concentration to increase sequence diversity, as recommended by Illumina. Sequencing was performed using the Illumina MiSeq 600-cycle reagent kit v3 (Illumina, San Diego, CA), with 301 cycles for reads 1 and 2 and 8 cycles for the two index reads.
Preprocessing and data analysis
Demultiplexed paired-end fastq files generated by CASAVA (Illumina) and a mapping file were used as input files. Sequences were pre-processed, quality filtered and analysed using QIIME2 version 2017.8 and QIIME1 version 1.91 [15]. We used the DADA2 software package [16], wrapped in QIIME2, for modelling and correcting Illumina-sequenced fastq files, including removal of chimeras with the "consensus" method. Fastq files were processed with the qiime dada2 denoise-paired command. Due to decreasing quality scores at the ends of the sequences, especially for the reverse reads, we truncated 20 bases of the forward and 80 bases of the reverse read, resulting in a remaining overlap of 35 bases in the merged sequences. Sample collection of PCP+ patients, including direct DNA extraction, was performed over a period of 4 years (2013-2016), whereas PCP- samples from the years 2013 and 2014 were lacking. We determined the sequence variants that were significantly differently distributed between PCP+ patients of the two periods (2013-2014 vs. 2015-2016) and excluded them from further analyses. To do so, we filtered the merged sequence output of PCP+ patients and identified the statistically significant sequence variants by Kruskal-Wallis one-way analysis of variance using the group_significance.py QIIME script with p < 0.05, without correction for multiple testing. These sequence variants were then removed from all samples using the QIIME2 qiime feature-table filter-features command. This step was done to reduce the probability of errors caused by contaminating bacterial species introduced during DNA extraction in the period for which PCP- patient DNA was not available.
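The following Python sketch (calling the QIIME2 command line via subprocess) illustrates these two steps. It is an illustration only: all file names are placeholders, the truncation positions (281 forward, 221 reverse) are inferred from the 301-cycle reads minus the 20 and 80 truncated bases described above, and the parameter names follow current QIIME2 CLI conventions, which may differ slightly from version 2017.8.

```python
import subprocess

def run(cmd):
    """Run a QIIME2 CLI command and fail loudly if it errors."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# DADA2 denoising of demultiplexed paired-end reads with consensus chimera removal.
# Truncation positions are inferred: 301-cycle reads minus 20 (forward) / 80 (reverse) bases.
run([
    "qiime", "dada2", "denoise-paired",
    "--i-demultiplexed-seqs", "demux-paired-end.qza",   # placeholder input artifact
    "--p-trunc-len-f", "281",
    "--p-trunc-len-r", "221",
    "--p-chimera-method", "consensus",
    "--o-table", "table.qza",
    "--o-representative-sequences", "rep-seqs.qza",
])

# Exclude the sequence variants flagged as differently distributed between the
# 2013-2014 and 2015-2016 PCP+ samples (feature IDs listed in a metadata file).
run([
    "qiime", "feature-table", "filter-features",
    "--i-table", "table.qza",
    "--m-metadata-file", "variants-to-exclude.tsv",     # placeholder list of feature IDs
    "--p-exclude-ids",
    "--o-filtered-table", "filtered-table.qza",
])
```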
For taxa comparisons, relative abundances based on all obtained reads were used. We used the QIIME2 q2-feature-classifier plugin with a Naïve Bayes classifier trained on the Greengenes 13.8 99% OTU full-length sequences. The QIIME2 qiime taxa barplot command was used for viewing the taxonomic composition of the samples.
Alpha and beta diversity analyses were performed with the q2-diversity plugin in QIIME2 at a sampling depth of 1000. One PCP+ sample was excluded from these analyses due to a sequence frequency of 251. Alpha diversity was calculated by Shannon's diversity index, observed OTUs, Pielou's measure of species evenness and Faith's phylogenetic diversity. Permutational multivariate analysis of variance (PERMANOVA) was used to analyse statistical differences in beta diversity with QIIME2. Principal coordinate analysis (PCoA) was performed based on unweighted and weighted UniFrac, Bray-Curtis and Jaccard distances in QIIME2 and visualized with the make_2d_plots.py script of QIIME 1.91. The Kruskal-Wallis test was used for taxa comparisons, calculated in QIIME 1.91. Benjamini-Hochberg false discovery rate (FDR) correction was used to correct for multiple hypothesis testing.
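To make the alpha diversity metrics used here concrete, the short Python function below computes the Shannon index, Pielou's evenness and the number of observed OTUs for a single sample's feature counts. It is a simplified stand-in for the q2-diversity implementations (which, for instance, may compute the Shannon index with log base 2), not the code used in the study.

```python
import numpy as np

def alpha_diversity(counts):
    """Shannon index, Pielou's evenness, and observed OTUs for one (rarefied) sample."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]                 # drop absent features
    observed_otus = counts.size
    p = counts / counts.sum()                   # relative abundances
    shannon = -np.sum(p * np.log(p))            # natural-log Shannon index
    pielou = shannon / np.log(observed_otus) if observed_otus > 1 else 0.0
    return shannon, pielou, observed_otus

# Toy example: an evenly distributed sample is more diverse than a dominated one.
print(alpha_diversity([100, 100, 100, 100]))    # high evenness
print(alpha_diversity([970, 10, 10, 10]))       # dominated by a single feature
```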
Results
BAL samples from 65 ICU patients (21 female, 44 male) were included in the study, each with one sample per patient. P. jirovecii-DNA was detected in 34 BAL samples from pneumocystosis patients and was undetectable in BALs from 31 PCP-ICU patients. All PCP+ patients exhibited lung infiltrates in chest radiography. In six BAL samples from the PCP+ group, cysts were found by immune fluorescence microscopy. The patient characteristics of all 65 patients are displayed in Table 1.
After processing of the demultiplexed fastq files with the DADA2 package, we excluded from all samples the amplicon sequence variants that were significantly differently distributed between PCP+ patients from 2013-2014 and from 2015-2016. We obtained 577,610 sequences with a total of 2750 amplicon sequence variants from the 65 filtered samples. The mean sequence frequency was 8886 (± 9615 SD). One PCP+ sample was excluded from diversity analyses due to a read number < 1000/sample after processing.
At the class, order, family and genus levels, no significant differences corrected for multiple testing were observed between PCP- and PCP+ patients. The microbial composition of each sample at the phylum and genus level is shown in Additional file 1: Figure S1 and Additional file 2: Figure S2. The within-sample phylotype richness and evenness (alpha diversity) and dissimilarity between samples (beta diversity) were calculated on a rarefied frequency-feature table with a minimum number of 1000 sequences per sample. No differences in alpha diversity metrics were detected between the two patient groups. The Shannon diversity index (p = 0.108), Pielou's measure of evenness (pielou_evenness) index (p = 0.825), observed_otus index (p = 0.076) and Faith's phylogenetic diversity metric (p = 0.506) were not statistically different between PCP+ and PCP- samples (Fig. 2 and Additional file 3: Figure S3). Samples of PCP- and PCP+ patients were not separated or clustered according to PCoA based on weighted and unweighted UniFrac phylogenetic distances, Bray-Curtis distances and Jaccard distances (Fig. 3 and Additional file 4: Figure S4). PCP+ and PCP- samples were not statistically different using permutational multivariate analysis of variance (PERMANOVA) with 999 permutations for all distance metrics used (p = 0.108 for unweighted UniFrac; p = 0.182 for weighted UniFrac; p = 0.495 for Bray-Curtis; p = 0.269 for Jaccard). Furthermore, we analysed whether a single species dominates the individual samples by calculating the relative species abundance (Level 7) in each sample. In 10 PCP+ patients and 14 PCP- patients, the microbiome was dominated by a single species with a relative abundance of at least 75% (Fig. 4). The dominating species of the 10 PCP+ samples represented the genera Enterococcus, Streptococcus, Staphylococcus, Acinetobacter, Escherichia, Citrobacter and Stenotrophomonas, and the dominating species of the PCP- samples included Serratia, Enterococcus, Neisseria, Escherichia, Pseudomonas, Stenotrophomonas, Legionella, Staphylococcus and Mycoplasma, resulting in a total of 12 different genera for these samples (Additional file 5: Table S1).

Fig. 1 a + b: Composition of the bacterial community at the phylum (a) and genus (b) level for PCP+ and PCP- samples. Phyla and genera with a minimum percentage of 1% are shown.
Discussion
Pneumocystosis is an opportunistic fungal infection that is associated with a high morbidity and mortality in immunocompromised patients. The natural habitat and the ways of acquisition and transmission of this organism in humans are poorly understood. It is so far unclear whether a specific dysbiosis of the lung microbiota may promote uncontrolled colonization and overt disease in immunocompromised patients, as has been suggested for Aspergillus fumigatus [8,17].

Fig. 2: Alpha diversity analysis. Within-sample diversity measured by the Shannon index (a), Pielou's measure of species evenness (b) and observed OTUs (c). Samples were rarefied to a sampling depth of 1000. The Kruskal-Wallis test was performed to analyse statistical significance.
Although we found differences in the mean relative abundance of the microbial composition of PCP+ and PCP- patients, these differences were not statistically significant, which was also due to the highly heterogeneous microbial composition of the individual PCP+ samples, indicated by high standard deviations for individual taxa and comparable in its extent to the heterogeneous group of PCP- patients. Healthy lungs are an ecologically unfavourable environment for most bacteria, accompanied by minimal bacterial reproduction [18]. The oral microbiome is usually the primary source of the bacterial microbiota of the lungs [19,20], with Prevotella, Veillonella and Streptococcus as the dominating genera. During critical illness, e.g. during bacterial pneumonia, the environmental conditions in the lungs shift abruptly, resulting in protein-rich fluids in the alveoli that serve as an energy source and contribute to the changing microbiome of the lungs [12]. A single dominant species usually composes the vast majority of sequences from BAL in bacterial pneumonia [21]. During chronic or acute lung disease other than bacterial pneumonia, a shift towards Proteobacteria of the gastrointestinal tract, including a loss of diversity, has been reported [12]. The composition of the lung microbiome in PCP+ patients, with Proteobacteria as the second most abundant phylum, indicates a shift towards the microbiome of critically ill patients with other diseases, although the average abundance of Proteobacteria tends to be lower in PCP+ patients compared with PCP- patients in our study. All patients included in our study received mechanical ventilation. Mechanical ventilation of critically ill patients alone was shown to be associated with changes in the respiratory microbiome [22,23], whereas an increased duration of ventilation resulted in a decreased alpha diversity [22,23]. In addition, the domination of a single taxon was reported in many patients [23]. This observation is in agreement with our data: 29% of the PCP+ and 45% of the PCP- patient samples were dominated by a single bacterial species (≥ 75% of the sequence reads). However, and more importantly, we could not find a single bacterial species dominating all PCP+ samples, suggesting that no bacterial co-factor is essential for successful P. jirovecii infection.
Study limitations
The study is limited by its retrospective design. Furthermore, BAL samples were obtained from different departments of the University Hospital Essen and by various physicians performing the sampling procedure, so that the procedure itself was not completely standardized. For example, it is unknown whether the bronchoscope was passed via the oral or nasal route, which may cause different contamination of the sample with the pharyngeal or nasal bacterial flora. We included BAL samples from PCP+ patients treated during 2013 and 2014, but did not include PCP- patients from these years. Also, in automated DNA extraction procedures, DNA contamination of reagents or eluate may influence microbiome analyses, especially in specimens with low bacterial biomass. Therefore, we excluded the sequence variants that were distributed significantly differently between the years 2013-2014 and 2015-2016, as DNA was extracted shortly after sampling. Nevertheless, the removal of these differently distributed sequence variants does not replace proper negative controls, and we cannot guarantee that all contaminants have been removed.
In addition, the study lacks group-matched analyses, which were not possible due to the heterogeneous patient collectives in both the PCP+ and PCP- groups. The underlying diagnosis, co-morbidities, the immune status and several other aspects of medical treatment can have meaningful effects on the pulmonary microbiome. Furthermore, we did not analyse the effects of antibiotic treatment, in particular the effects of cotrimoxazole administration (the standard therapy for pneumocystosis). However, a recent study showed that antibiotic administration in mechanically ventilated patients does not significantly affect the lung microbiome [22].
Conclusion
This study is the first report analysing the pulmonary microbial communities in intensive care unit patients with pneumocystosis and comparing them with the lung microbiome of intensive care patients with other diseases. Even though no significant differences in microbial composition between patients with and without pneumocystosis were observed, the current study may serve as a basis for further work on understanding the interaction between Pneumocystis and the lung microbial composition.
Defects in Nicotinamide-adenine Dinucleotide Phosphate Oxidase Genes NOX1 and DUOX2 in Very Early Onset Inflammatory Bowel Disease
Background & Aims Defects in intestinal innate defense systems predispose patients to inflammatory bowel disease (IBD). Reactive oxygen species (ROS) generated by nicotinamide-adenine dinucleotide phosphate (NADPH) oxidases in the mucosal barrier maintain gut homeostasis and defend against pathogenic attack. We hypothesized that molecular genetic defects in intestinal NADPH oxidases might be present in children with IBD. Methods After targeted exome sequencing of epithelial NADPH oxidases NOX1 and DUOX2 on 59 children with very early onset inflammatory bowel disease (VEOIBD), the identified mutations were validated using Sanger Sequencing. A structural analysis of NOX1 and DUOX2 variants was performed by homology in silico modeling. The functional characterization included ROS generation in model cell lines and in in vivo transduced murine crypts, protein expression, intracellular localization, and cell-based infection studies with the enteric pathogens Campylobacter jejuni and enteropathogenic Escherichia coli. Results We identified missense mutations in NOX1 (c.988G>A, p.Pro330Ser; c.967G>A, p.Asp360Asn) and DUOX2 (c.4474G>A, p.Arg1211Cys; c.3631C>T, p.Arg1492Cys) in 5 of 209 VEOIBD patients. The NOX1 p.Asp360Asn variant was replicated in a male Ashkenazi Jewish ulcerative colitis cohort. Patients with both NOX1 and DUOX2 variants showed abnormal Paneth cell metaplasia. All NOX1 and DUOX2 variants showed reduced ROS production compared with wild-type enzymes. Despite appropriate cellular localization and comparable pathogen-stimulated translocation of altered oxidases, cells harboring NOX1 or DUOX2 variants had defective host resistance to infection with C. jejuni. Conclusions This study identifies the first inactivating missense variants in NOX1 and DUOX2 associated with VEOIBD. Defective ROS production from intestinal epithelial cells constitutes a risk factor for developing VEOIBD.
Inflammatory bowel disease (IBD), a complex disease associated with genetic predisposition and environmental factors, is characterized by recurrent intestinal inflammation and microbial dysbiosis. Genome-wide association studies link adult IBD to alterations in genes involved in host-microbe interactions. 1,2 Nicotinamide adenine dinucleotide phosphate (NADPH) oxidase-generated reactive oxygen species (ROS) are intrinsic to the antimicrobial host defense system of professional phagocytes. Defective ROS production in patients with chronic granulomatous disease (CGD), a rare genetic disorder caused by inactivating alterations of genes required for formation of the penultimate phagocyte oxidase complex (CYBB, CYBA, NCF1, NCF2, NCF4), confers susceptibility to life-threatening bacterial and fungal infections. 3 Up to 40% of CGD patients develop inflammatory colitis that mimics Crohn's disease. 4 Genetic variants in NCF4 and NCF2 that lead to partial attenuation of phagocyte oxidase (NADPH oxidase 2, NOX2) function without causing CGD have been associated with adult and very early onset IBD (VEOIBD). 5,6 We have recently shown that single-nucleotide polymorphisms (SNPs) and rare hypomorphic variants in all components of the NOX2 NADPH oxidase complex are associated with VEOIBD. 7 A role for ROS production by intestinal epithelial cells in mucosal barrier function and intestinal homeostasis is just emerging. 8 The predominant sources of ROS in the lining of the gastrointestinal tract are the NADPH oxidases NOX1 (NADPH oxidase 1) and DUOX2 (dual oxidase 2), with NOX1 expression restricted mainly to the colon, caecum, and ileum, whereas DUOX2 can be found in all segments of the gut. 9 NOX1 and DUOX2 are the catalytic subunits of multimeric, membrane-bound enzymes that, upon stimulation, generate superoxide and hydrogen peroxide by transfer of electrons from NADPH to molecular oxygen. We 10 and others [11][12][13] have reported NOX1/DUOX2-mediated ROS production in the intestine and its effect on bacterial pathogenicity and barrier integrity. Here, we describe the identification and characterization of missense mutations in NOX1 (NM_007052.4, location Xq22) and in DUOX2 (NG_016992, location 15q15.3) in patients diagnosed with VEOIBD.
Study Design
All results are presented according to the STrengthening the REporting of Genetic Association Studies (STREGA) guidelines. 14 Fifty-nine IBD patients diagnosed under the age of 6 years were sequenced for NOX1 and DUOX2 by targeted exome sequencing using Agilent SureSelect target enrichment and sequencing (Agilent Technologies, Santa Clara, CA) on the Illumina HiSeq 2000/2500 (Illumina, San Diego, CA), with exon primer and sequencing protocols designed by Beckman Coulter Genomics (beckmangenomics.com; Beckman Coulter, Brea, CA) as described previously. 15 Sanger sequencing was used to verify all genetic defects identified using targeted sequencing of the NOX1 and DUOX2 genes at the Centre for Applied Genomics (TCAG; http://www.tcag.ca; Hospital for Sick Children, Toronto, ON, Canada).
Single-nucleotide and insertion/deletion (indel) variants identified by targeted exome sequencing and validated by Sanger sequencing were automatically scanned and manually verified. Furthermore, all variants were also validated using TaqMan assays performed by the Centre for Applied Genomics, Hospital for Sick Children. 15
Setting
Patients included in the study were recruited from the Inflammatory Bowel Disease Clinic at the Hospital for Sick Children, University of Toronto. They were diagnosed with VEOIBD between the years 1994 and 2012 and had a confirmed diagnosis of IBD before the age of 6 years. Although there is no consensus on the definition of VEOIBD, we have used the stricter definition based on our recent modification (diagnosis <6 years of age) 5,22,23 of the Paris classification. 24 Our definition, which is more stringent and includes more severe cases that are more likely to be caused by monogenic forms of the disease, has been used to identify risk variants in this age group. There were no exclusion criteria for patients diagnosed with VEOIBD; however, patients with a known immunodeficiency or a clinical diagnosis of CGD were excluded because these patients were not defined as VEOIBD. The five identified patients were screened and found negative for pathogenic mutations in IL10RA, IL10RB, IL10, XIAP, TTC7A, as well as genes involved in CGD (RAC1/2, NCF1/2/4, CYBB, and CYBA) 23,25 and NOD2 and ATG16L1 variants associated with IBD.
Participants
This was a cohort study that examined the genetics of VEOIBD patients. Fifty-five VEOIBD patients were recruited from the Hospital for Sick Children, Toronto, Canada. A second cohort of VEOIBD patients was recruited through NEOPICS (www.NEOPICS.org). The replication cohort comprised 1477 Crohn's disease cases, 559 ulcerative colitis cases, and 2614 healthy controls, all with genetically verified Ashkenazi Jewish ancestry by principal components analysis.
Standard quality control procedures were applied, and we performed association testing using Fisher's exact method, stratified by gender in 297 male ulcerative colitis (UC) cases, 262 female UC cases, 1708 male controls, and 906 female controls. Phenotypic information and DNA samples were obtained from the study participants with approval of the institutional review ethics board for IBD genetic studies at the Hospital for Sick Children and Mount Sinai Hospital Toronto.
Later onset UC cases were recruited through the National Institute of Diabetes and Digestive and Kidney Diseases Inflammatory Bowel Disease Genetics Consortium, the Cedars-Sinai Medical Center IBD Center in California and Mount Sinai Hospital in New York. Replication cohorts had ethics board approval for genetic and phenotypic studies at the individual institutions. Written informed consent was obtained from all participants/parents.
H&E and Periodic Acid-Schiff Staining in Patient Biopsy Samples
Colonic biopsy samples were fixed in 10% formaldehyde without methanol and afterward embedded in paraffin. For H&E staining, embedded paraffin tissues on slides were deparaffinized with xylene and afterward rehydrated with different percentages of ethanol. The slides were stained for 5 minutes with Meyer's hematoxylin (Fisher Scientific, Fair Lawn, NJ) for nuclei and counterstained with eosin-Y (Fisher Scientific) for cytoplasm. Slides were mounted in Entellan (EMD Millipore, Billerica, MA). Photographs were taken using an epifluorescence light microscope (Leica Microsystems, Buffalo Grove, IL) and adjusted for brightness, contrast, and pixel size in Adobe Photoshop CS5 version 12.0 (Adobe System, San Jose, CA).
Modeling and Docking Procedure
Three-dimensional (3D) models of the C-terminal domains of NOX1 and DUOX2 were generated using the homology modeling program Modeller 9v11 (http://www.salilab.org/modeller/). 26 A BLAST search of the PDB was performed with the NOX1 FAD-binding domain, and a combination of several homologous structures served, together with the 3D X-ray structure of the NOX2 NADPH-binding domain (PDB ID: 3A1F), as the initial template. The modeling was performed with default parameters using the "allHmodel" protocol to include hydrogen atoms and the "HETATM" protocol to include FAD and NADPH. To compare the FAD and NADPH binding interactions between wild-type (WT) and sequence-altered oxidases, docking runs were performed with HADDOCK. 27,28 Docking was performed with most parameters set to default using the Web server version of HADDOCK with a Guru interface. To obtain the Van der Waals, electrostatic, and desolvation energies for each enzyme-FAD or enzyme-NADPH model, HADDOCK automatically performed molecular dynamics before and after each docking trial by including water in the calculation (detailed modeling procedure, publication in preparation).
Cell Culture and Transfection
Model cell lines were employed because intestinal epithelial cell lines and primary colon cells express endogenous NOX1 and DUOX2. Cos7 cells are a suitable model system for NOX1-based oxidase reconstitution as they lack any functional NADPH oxidases, and NCI-H661 cells serve as a physiologically relevant model for DUOX oxidases. 29 Cos7 cells stably expressing p22phox 30 were maintained in Dulbecco's modified Eagle's medium with 10% fetal bovine serum; for NCI-H661 cells stably expressing DUOXA2, 29 RPMI 1640 medium with 10% fetal bovine serum was used. NOX1 was cloned into pcDNA3.1 with and without the N-terminal Myc epitope tag including a linker sequence. Influenza hemagglutinin (HA)-tagged DUOX2 in pcDNA3.1 was prepared by cloning the HA tag between amino acids D27 and A28. Mutations were introduced using site-directed mutagenesis and were verified by sequencing. NOX1 WT and missense variants were transiently transfected with NOXA1 and Myc-NOXO1 into Cos-p22phox cells (24 hours). HA-tagged DUOX2 WT and missense variants were transiently transfected into H661-DUOXA2 cells or together with DUOXA2 into Cos7 cells using X-tremeGENE (Roche Applied Science, Indianapolis, IN) (48 hours). For analysis of DUOX2 localization upon bacterial challenge, HT29 colon epithelial cells expressing endogenous NOX1 and NOD2 were stably transduced with lentivirus encoding HA-tagged DUOX2 WT, DUOX2 R1211C, and DUOX2 R1492C in combination with WT DUOXA2.
ROS Assays
Superoxide production (NOX1) was measured using luminol enhanced chemiluminescence and stimulation with 1 mg/mL phorbol 12-myristate 13-acetate (PMA) for 30 minutes. 33 Luminescence was measured on a Berthold Centro 960 LB in white 96-well plates. The chemiluminescence (relative light units, DRLU) readings were standardized against cellular protein (BCA assay).
H2O2 production (DUOX2) was measured using the homovanillic acid assay and addition of 1 mM thapsigargin. 34 H2O2 production was standardized to H2O2 standard curves and cell lysate protein concentration. Empty vector transfection served as the control. For crypt ROS assays, Nox1−/− mice (Jackson Laboratory, Bar Harbor, ME) were transduced with lentivirus encoding empty vector, NOX1, NOX1 D330N, and NOX1 P360S. Briefly, the lentiviral titer was determined relative to p24 particles (QuickTiter Lentivirus Titer Kit; Cell Biolabs, San Diego, CA), and equal amounts of each lentivirus were intrarectally administered to Nox1−/− mice. Crypts were isolated from the intestine of euthanized mice 24 hours after lentiviral administration.
PMA-stimulated superoxide production was measured using L-012 enhanced chemiluminescence, and standardization was performed against total crypt protein concentration, as measured by BCA assay. ROS generation by transduced crypts was performed in two independent experiments (n = 2-3). Animal experiments were performed with ethics approval and authorization by the regulatory authority.
Flow Cytometry
H661-DUOXA2 cells expressing DUOX2 WT or variants were incubated with anti-HA antibody (Covance Laboratories) in fluorescence-activated cell sorting buffer on ice for 30 minutes without cell permeabilization. After incubation with anti-mouse Alexa Fluor 647, the cells were fixed in 1.5% paraformaldehyde and analyzed on an Accuri C6 flow cytometer (BD Biosciences).
Immunofluorescence
Cos cells expressing Myc-NOX1 WT or variants were treated with TAMRA-labeled Campylobacter jejuni for 15 minutes to visualize localization of NOX1 as described elsewhere, 10 while DUOX2-DUOXA2-expressing H661 cells were not stimulated. Cells were fixed in 3% paraformaldehyde, permeabilized in 0.5% Triton X-100, and stained with anti-DUOX2 or anti-Myc antibody, followed by goat anti-rabbit or anti-mouse Alexa Fluor 488 (Invitrogen/Life Technologies, Carlsbad, CA). HT29 cells expressing DUOXA2 and DUOX2 WT or missense variants were seeded on glass coverslips and treated with 300 mL of a clinical isolate of enteropathogenic Escherichia coli (EPEC) at optical density OD600 = 1 for 5 hours. Slides were washed, fixed, and permeabilized with 0.1% Triton X-100 and probed with antibodies against the HA tag (Covance) and NOD2 (sc-30199, kind gift of P. Moynagh, National University of Ireland Maynooth), and 4′,6-diamidino-2-phenylindole (DAPI, blue). Images were acquired using a Zeiss LSM 700 microscope (Carl Zeiss, Thornwood, NY) and a 63× (oil) objective.
Colonic biopsies from control, disease control, and patients were fixed in 10% formaldehyde without methanol, embedded in paraffin, and processed for staining. Antigen retrieval was performed using high-pressure cooking with 1 mM EDTA at pH 9.0 containing 0.05% Tween 20. Afterward, slides were blocked for 1 hour at room temperature with 5% bovine serum albumin in 1x phosphate-buffered saline (PBS) without calcium and magnesium containing 15% goat serum. Primary antibody incubation was performed overnight at 4 °C. On the following day, the stained slides were washed three times for 10 minutes with 1x PBS without calcium and magnesium.
Secondary antibody incubation was performed at room temperature and in darkness for 1 hour. Slides were washed afterward three times for 10 minutes in darkness. Next, nuclear counterstaining with Hoechst 33342 Fluorescence Stain (Thermo Fisher Scientific, Waltham, MA) was performed at a dilution of 1:15,000. Finally, sections were mounted overnight with Vectorshield fluorescence mounting medium (Vector Laboratories, Burlingame, CA). Antibodies anti-beta catenin (BD Transduction Laboratories, BD Biosciences), anti-lysozyme (Abcam, Cambridge, MA), anti-CD24 (Abcam), and anti-EpCAM (Sigma-Aldrich, St. Louis, MO) were used at 1:100 dilution. Secondary antibodies were Alexa fluor 568 goat anti-rabbit and Alexa fluor 488 goat-anti mouse (both Invitrogen/Life Technologies). Images were acquired with an Olympus IX81 inverted fluorescence microscope (Olympus America, Center Valley, PA) equipped with a Hamamatsu C9100-13 back-thinned EM-CCD camera (Hamamatsu Photonics KK, Hamamatsu City, Japan) and Yokogawa CSU X1 spinning disk confocal scan head (Yokogawa Electric Corporation, Tokyo, Japan). Images were adjusted for contrast and brightness using the
Virulence Assay
Adherence and invasion of C. jejuni 81-176 were assessed in NOX1 complex- or DUOX2-DUOXA2-expressing Cos7 cells using the gentamicin protection assay. 35 Plate-grown C. jejuni 81-176 was washed and resuspended in tissue culture medium at OD600 = 0.4 and added to cells at a multiplicity of infection of 1000, followed by centrifugation at 250g for 5 minutes. After incubation for 3 hours at 37 °C, the nonadherent and cell-associated bacteria were collected. For invasion, the infected and washed monolayers were incubated further with and without gentamicin (400 mg/mL) and incubated for an additional 2 hours at 37 °C. The cells were lysed by the addition of 0.1% Triton X-100 in PBS for 10 minutes at 37 °C. Bacterial counts for each assay were enumerated by serial dilution plating. All parameters were calculated as the average of the total number of colony-forming units/total initial inoculum.
Statistical Analysis
All functional experiments were conducted in triplicate with three repeats (n = 3), followed by an unpaired Student's t test.
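As a minimal illustration of this analysis, the snippet below applies an unpaired Student's t test to two invented triplicate measurements (for example, a readout for wild-type versus variant); the numbers are placeholders and do not come from the study.

```python
from scipy import stats

# Hypothetical triplicate readings (arbitrary units); not data from the study.
wild_type = [105.2, 98.7, 110.4]
variant = [52.1, 47.8, 55.6]

t_stat, p_value = stats.ttest_ind(wild_type, variant, equal_var=True)  # unpaired t test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```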
Identification of NOX1 and DUOX2 Variants in VEOIBD
NOX1 and DUOX2 missense mutations were identified in five of 59 VEOIBD patients (age <6 years). All five patients presented with pancolitis without small bowel or perianal disease at diagnosis. None of the patients had systemic disease including thyroid disease or chronic infections, suggesting that defects were confined to the intestinal epithelium. SNPs and insertion/deletion variants were confirmed using Sanger sequencing and analyzed for potential function. Exon sequencing (Table 1-2) identified a novel NOX1 variant (c.988G>A; p.P330S) in one male patient. Another rare variant (c.967G>A; rs34688635; p.D360N) was found in one male and one female patient. The missense variant NOX1 p.P330S is potentially damaging (PolyPhen2 score: 0.995) and unique according to the Washington Exome Variant Server, while NOX1 p.D360N was predicted to be "probably damaging" by PolyPhen2 and was given a maximum evolutionary conservation score of 1 by the PhastCons program using 46 mammalian species.
Variants in DUOX2 were also identified in VEOIBD patients (Table 1-2). One of the patients was heterozygous for DUOX2 p.R1211C (c.4474G>A) and developed severe disease that necessitated colonic resection. The disease subsequently recurred at the resection site, a finding consistent with Crohn's disease. The second variant was detected in a very early onset UC patient heterozygous for DUOX2 p.R1492C (c.3631C>T; rs374410986), who presented with pancolitis.
In an independent replication cohort of 150 VEOIBD patients, none of the NOX1 and DUOX2 missense variants were identified. Similarly, none of the NOX1 and DUOX2 missense variants were identified in the publicly available International IBD Genetics Consortium (http://www.ibdgenetics.org) database, as this data set examines only common polymorphisms rather than rare variants, and the p.Asp360Asn variant is not analyzed by the Immunochip.
Therefore, we took an alternate approach employing array-based genotyping using the Illumina HumanExome v1.0 platform in 1477 Crohn's disease (CD) cases, 559 UC cases, and 2614 healthy controls, all with genetically verified Ashkenazi Jewish (AJ) ancestry by principal components analysis. Using this approach we detected an association between D360N NOX1 and male AJ UC (MAF case = 3.37%, MAF control = 0.82%; odds ratio 4.22; P = 1.25 × 10−3). The association was not detected in the female AJ UC cases (MAF case = 1.53%, MAF control = 0.99%; odds ratio 1.55; P = .343), although the trend was in the same direction as observed in the AJ male cases. However, this trend was not observed in Crohn's disease cases (MAF CD = 0.97%). The finding in an adult UC cohort suggests that pathways/processes involved in VEOIBD will have implications for adult IBD patients.
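The male AJ UC association can be illustrated by reconstructing approximate hemizygous carrier counts from the reported allele frequencies (NOX1 is X-linked, so each male contributes one allele). The counts below are rounded estimates derived from the stated MAFs, not the study's raw genotype data, and the snippet merely shows that a Fisher's exact test on such counts yields an odds ratio close to the reported 4.22.

```python
from scipy.stats import fisher_exact

# Approximate carrier counts among males, reconstructed from the reported MAFs.
cases_total, controls_total = 297, 1708
case_carriers = round(0.0337 * cases_total)         # ~10 male UC cases
control_carriers = round(0.0082 * controls_total)   # ~14 male controls

table = [
    [case_carriers, cases_total - case_carriers],
    [control_carriers, controls_total - control_carriers],
]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2e}")  # OR close to the reported 4.22
```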
Histologic Analysis of NOX1/DUOX2 Variants
Histopathology analysis using H&E and PAS staining (Figure 1A) was performed in biopsies from patients with the identified DUOX2 p.R1211C variant as well as a patient with the NOX1 p.D360N variant and compared with a healthy control and an IBD control biopsy. The disease control showed features of chronic and regenerative IBD, demonstrated by metaplastic Paneth cells within colonic crypts. The patient with the NOX1 p.D360N variant showed focal inflammation, with increased cellularity of inflammatory cells adjacent to normal areas of unaffected colonic mucosa. The patient with the DUOX2 p.R1211C variant demonstrated more severe morphologic changes, with severe inflammation and crypt damage in the colonic mucosa when compared with the NOX1 variant. Immunofluorescence staining was performed on colonic biopsy samples to determine whether Paneth cell metaplasia, a feature of chronic and regenerative change as a consequence of continuous inflammation within the colon, had occurred. Both markers, lysozyme and CD24, were highly positive in metaplastic Paneth cells of colonic crypt enterocytes in the disease control (see Figure 1B). Altered NOX1 appears not to progress cells into full metaplasia, as seen by the absence of CD24 within crypt cells of the patient harboring NOX1 p.D360N. In colonic crypts of the patient with the DUOX2 p.R1211C variant, both lysozyme and CD24 were expressed, albeit not as prominently as observed within metaplastic Paneth cells in the IBD control.
Topologic Models of NOX1/DUOX2 Variants
The NOX1 NADPH oxidase is formed by heterodimerization of NOX1 with p22phox, followed by assembly with the regulatory proteins NOXO1, NOXA1, and Rac1-GTP. 8 The cytosolic carboxyl terminus of NADPH oxidases harbors NADPH- and FAD-binding regions, which are required for electron transport across the membrane via hemes, where molecular oxygen is reduced to form superoxide. The identified NOX1 variants are located either just in front of FAD 1 (p.P330S) or inside FAD 2 (p.D360N) (Figure 2A). Pro330 and Asp360 are conserved in NOX1-4 proteins identified in vertebrates and lower organisms. 36 CYBB missense variants (X-CGD) leading to loss of or diminished ROS generation in neutrophils are located in close vicinity to the identified NOX1 variants (http://bioinf.uta.fi/CYBBbase). 37 Modeling of the NOX1 WT, NOX1 (p.P330S), or NOX1 (p.D360N) dehydrogenase domains was performed by combining the crystal structures of FAD-binding domains homologous to the NOX FAD domain with the partial structure of the dehydrogenase domain of NOX2 in the correct orientation (see Figure 2B).
FAD and NADPH were docked to each NOX/DUOX model by using HADDOCK. FAD binds to NOX1 WT mainly with electrostatic interaction to His339 in the FAD 1 domain and Asp360 in the FAD 2 domain. Based on the model, Pro330 will be important for stabilization of the antiparallel β-structure that creates the FAD 1 domain. Although Pro330 is not directly involved in FAD binding, the change Pro330Ser in NOX1 alters the position of His339 in the FAD 1 domain, which decreases binding affinity of this variant for FAD.
The second NOX1 residue altered in VEOIBD, Asp360, is directly involved in FAD binding, and therefore a change to asparagine (D360N) weakens the interaction between FAD and NOX1. FAD binds to NOX1 with binding affinity in mM range; therefore, we predict that small structural changes in both FAD domains will compromise catalytic activity of the NOX1 enzyme. Debeurme et al 38 reported disrupted FAD binding and diminished catalytic activity of NOX2 in selected CYBB variants.
Functional Characterization of NOX1 Variants
As structural analysis predicts that the catalytic activity of NOX1 variants will be compromised, we reconstituted WT and altered NOX1 complexes in an epithelial model cell system (Cos7) deficient in all NOX/DUOX isoforms. Both NOX1 p.P330S and NOX1 p.D360N variants displayed diminished catalytic activity (see Figure 2C). Basal and phorbol ester-stimulated ROS generation was significantly reduced for NOX1 missense variants (50%-60%), while the overall protein expression was comparable to WT NOX1 (see Figure 2D).
As patients could not be recalled for colon tissue evaluation, catalytic activity of NOX1 variants was also measured in a murine in vivo expression setting. Nox1 knockout mice were transduced with lentivirus encoding NOX1 WT and variants intrarectally, and ROS generation of isolated crypts was recorded 24 hours later. Similar to the results obtained in cell lines, ROS production in the crypts was reduced in the NOX1 variants when compared with NOX1 WT (see Figure 2E).
A reduction in epithelial ROS production will attenuate host protection from intestinal pathogens. Defective processing of responses to mucosal bacteria is recognized to play a central role in the development and perpetuation of intestinal inflammation in IBD. C. jejuni in particular has been associated with the initiation of IBD. 39 C. jejuni uptake was used to visualize infection-associated translocation of NOX1 to membrane ruffles and to assess the antibacterial response. 10 Stimulated membrane localization of NOX1 WT and NOX1 variants (NOX1 p.D360N shown) were comparable (see Figure 2F), but reduced ROS generation caused a 10-fold increase in bacterial invasion when cells harbored the NOX1 p.P330S or NOX1 p.D360N variants with reduced catalytic activity (see Figure 2G).
Functional Characterization of DUOX2 Variants
Inactivating mutations in DUOX2 or DUOXA2 have been linked to inherited permanent or transient congenital hypothyroidism, 40 and to date over 23 DUOX2 mutations have been described in this context (HGMD, www.hgmd.cf.ac.uk/ac/gene) (Figure 3A). The two VEOIBD-associated DUOX2 variants are novel; in contrast to most of the reported DUOX2 variants, they are not located in the peroxidase homology domain or the EF hand regions. DUOX2 p.R1211C is placed in a polybasic region within an intracellular loop, and Arg1492 in DUOX2 is an integral part of the highly conserved GRP sequence in the NADPH 3 domain (see Figure 3A).
As described for NOX1, the dehydrogenase domains of DUOX2 WT and DUOX2 p.R1492C were modeled onto the extended NOX2 structure; by use of HADDOCK, NADPH and FAD were docked to the structure (see Figure 3B). Structural analysis revealed that Arg1492 is part of the NADPH-binding pocket. NADPH binds to DUOX2 WT with strong electrostatic interactions to the residues Arg1421 and Arg1492, with a sum of −181.7 ± 76.4 kcal/mol, and with weak Van der Waals interactions to Gly1385, Thr1463, Pro1520, Gly1521, and Met1520, with a sum of −30.9 ± 7.8 kcal/mol. Replacing Arg1492 with cysteine as in the DUOX2 p.R1492C variant does not change the DUOX2 structure or the position of other NADPH-interacting residues. However, the change is predicted to weaken the interaction between NADPH and DUOX2 by a factor of 2. How replacement of Arg1211 with cysteine will directly affect DUOX2 catalytic activity cannot be predicted because suitable structures for modeling do not exist, but in both NOX2 and NOX4 the analogous D loop participates in ROS production. 41,42 Functional analysis of DUOX2 variants was performed in the H661 cellular model system, which represents a physiologic context for DUOX-DUOXA expression and is devoid of NOX1-5 activity. 29 Both DUOX2 variants, when coexpressed with their dimerization partner DUOXA2, produced significantly less H2O2 than WT DUOX2 (see Figure 3C), although protein expression and cellular localization were not altered (see Figure 3D and E). DUOX2 has been functionally associated with NOD2 in transient overexpression conditions. 43 HT29 colonic cells endogenously express a functional NOX1 complex and NOD2, and thus provide an appropriate context for analysis of putative DUOX2-NOD2 interactions.
DUOX2 or DUOX2 variants together with DUOXA2 were stably incorporated into HT29 cells, followed by exposure to enteropathogenic E. coli. DUOX2 WT or variants, localized on internal membrane structures before the challenge, translocated to the plasma membrane and cell-cell junctions. NOD2, on the other hand, remained in the intracellular compartment, albeit NOD2 protein expression was up-regulated ( Figure 4). Thus, DUOX2 and NOD2 were not recruited simultaneously upon E. coli challenge.
Stimulated H2O2 release in DUOX2 WT- or variant-expressing HT29 cells mirrored the results obtained with H661 cells (data not shown). DUOX2-mediated H2O2 release at apical membranes has been linked to antimicrobial host defense and decreased C. jejuni virulence. 10 Comparison of C. jejuni invasion in DUOX2 WT- or DUOX2 variant-expressing (DUOX2 p.R1211C, DUOX2 p.R1492C) epithelial cells showed increased invasion when ROS generation was diminished (see Figure 3F).
Discussion
We have identified inactivating missense variants in each of the epithelial NADPH oxidases NOX1 (p.P330S, p.D360N) and DUOX2 (p.R1211C, p.R1492C) in five VEOIBD patients. Variants in X-linked NOX1 were found in two male VEOIBD patients, and NOX1 p.D360N was associated with male UC in an AJ ancestry case-control cohort, likely leading to increased or sustained disease severity.
The identification of rare functional variants contributing to the pathogenesis of VEOIBD has been observed with other genes, including the NOX2 NADPH oxidase complex, 7 NOS2, 44 IL10R, 15 and XIAP. 45,46 The variants we identified in both NOX1 and DUOX2 are rare and not found in a replication VEOIBD cohort or data sets of common variants. However, all variants showed both pathologic and functional defects, indicating that these variants may contribute to disease susceptibility or pathogenesis. Further large-scale sequencing of pediatric-and adult-onset IBD may indicate a broader role of both NOX1 and DUOX2 in IBD pathogenesis, as observed in our AJ population.
Recently, altered DUOX2 expression was identified in ileum biopsies from pediatric Crohn's disease patients. 47 Further, ROS derived from NADPH oxidases is critical to control mucin granule accumulation in colonic goblet cells, 12 and NOX1 has been shown to control the balance between goblet and absorptive cell types in murine colon. 48 Interestingly, colonic biopsies from patients carrying either NOX1 p.D360N or DUOX2 p.R1211C variants showed abnormal CD24 and lysozyme expression (see Figure 1B), suggesting a role for these proteins in Paneth cell metaplasia.
The thyroid function of the two male VEOIBD patients harboring DUOX2 mutations was normal, although inactivating monoallelic and biallelic DUOX2 and DUOXA2 variants have been linked to hypothyroidism. 49 In contrast to adult onset IBD, VEOIBD frequently encompasses a unique clinical presentation, with severe disease limited to the colon and with poor response to standard therapies. 24 VEOIBD variants (NCF2, 50 NOS2, 44 IL10RA/B, 15 TTC7A 51 ) have usually been rare, suggesting that these patients may have a unique genetic susceptibility. Furthermore, we have recently shown that SNPs and rare variants in all components of the NOX2 NADPH oxidase complex are associated with VEOIBD. 7 Similar to our recent observations with NOX2 NADPH oxidase complex variants leading to decreased ROS production in neutrophils, 7 reduced mucosal ROS levels originating from NOX1 and DUOX2 variants play also a role in susceptibility to VEOIBD and perhaps other severe IBD phenotypes.
Intestinal NADPH oxidases connect to antibacterial autophagy and endosomal pathways important for mucus secretion and may modulate the interplay between commensal bacteria and pathogens. 12,13 Recent microbiome studies on a large pediatric cohort with new-onset Crohn's disease assigned a unique role to changes in the rectal mucosal microbiota for disease classification. 52 Changes in ROS generation at the mucosal surface will most likely result in dysbiosis, intestinal inflammation, and pathobiont development. Our functional studies provide strong support both for the pathogenic nature of the mutations identified in these VEOIBD patients and the role of epithelial ROS in protecting cells from bacterial attack.
Further phenotypic exploration of NOX/DUOX variants will be aided by studies in humans and improved animal models, as current IBD animal models seem often not to reflect human disease triggered by reduced ROS. For example, murine Cybb (Nox2) deficiency does not lead to spontaneous Crohn's disease-like intestinal disease and gut inflammation, both observed in many CGD patients. Although Cybb knockout mice exhibit several hallmarks of CGD upon fungal or bacterial challenge, they were slightly protected in the dextran sodium sulfate-induced colitis mouse model. 53 Similarly, Nox1 deficiency in the murine mucosa did not alter dextran sodium sulfate-colitis pathology, 54 although combined Nox1 and Il10 deficiency caused spontaneous colitis in mice. 55 Mice harboring an inactivating Duox2 variant or Duoxa deficiency showed severe hypothyroidism and increased colonization with Helicobacter felis. 11,56 In conclusion, our findings demonstrate that novel NOX1 and DUOX2 NADPH oxidase variants resulting in attenuated ROS production and impaired mucosal defense occur in children with VEOIBD. This may influence IBD pathogenesis beyond childhood.

Figure 4: Bacteria-induced translocation of DUOX2 and variants does not involve NOD2 in colonic cells. HT29 cells stably expressing DUOX2 WT, DUOX2 R1211C, and DUOX2 R1492C were exposed to enteropathogenic Escherichia coli (EPEC) for 5 hours. Immunofluorescence images of DUOX2 (green), NOD2 (red), and nuclei (blue). Scale bar: 15 mm.
Phytochemicals in Helicobacter pylori Infections: What Are We Doing Now?
In this critical review, plant sources used as effective antibacterial agents against Helicobacter pylori infections are carefully described. The main intrinsic bioactive molecules, responsible for the observed effects are also underlined and their corresponding modes of action specifically highlighted. In addition to traditional uses as herbal remedies, in vitro and in vivo studies focusing on plant extracts and isolated bioactive compounds with anti-H. pylori activity are also critically discussed. Lastly, special attention was also given to plant extracts with urease inhibitory effects, with emphasis on involved modes of action.
Introduction
Plant products, their enriched-derived extracts, and their isolated bioactive molecules have been increasingly studied due to their renowned health attributes, having been largely used in folk medicine over centuries for multiple purposes [1][2][3][4][5][6][7][8][9]. Indeed, phytomedicine is garnering much attention among the medical and scientific communities [10][11][12]. Commercially available synthetic drugs have often been criticized for their side effects and related toxicity [13]. In fact, many of the active molecules used in pharmaceutical formulations were originally derived from bioactive molecules extracted from plants and other living organisms [14]. Also, a growing number of studies have progressively underlined the multiple bioactive properties conferred by plant formulations [15,16]. Specifically, the antimicrobial effects of multiple plant preparations have been progressively confirmed and supported by both in vitro and in vivo studies and clinical trials [17][18][19][20][21]. Thus, their lower costs, high effectiveness, bioavailability, bioefficacy, and few to no adverse effects have led to intensive research on this topic [22][23][24][25][26][27][28].
Among the various opportunistic infections, those caused by Helicobacter pylori, a human opportunistic pathogen, are attracting much attention [29]. In fact, it is widely recognized that this bacterium plays an important role in the etiology of peptic and gastric ulcers and even of gastric cancers and gastric lymphomas [29]. About half of the worldwide population is colonized by this bacterium, but only about 20% manifest clinical symptoms, which has been linked to the ability of some H. pylori strains both to adapt to the host's immunological responses and to withstand an ever-changing gastric environment [29]. Relatedly, increasing rates of antibiotic-resistant H. pylori strains have been found, and therefore the search for new eradication strategies and effective antibiotic therapies has become an issue of crucial importance [30]. Hence, research effort is focused on exploring plants as sources of anti-H. pylori agents.
Based on these findings, the present report aims to provide an extensive overview of Helicobacter pylori infections, namely describing its involvement in triggering gastric cancer and the most common antimicrobials used in H. pylori eradication. Special attention is also given to medicinal plants and their corresponding extracts and isolated constituents used as anti-H. pylori agents and urease inhibitors. This review was performed by consulting the PubMed, Web of Science, Embase, and Google Scholar (as a search engine) databases; only full-text articles were considered, and articles published from 2008 to 2018 were prioritized. The search strategy combined the following keywords: "Helicobacter pylori", "anti-Helicobacter", "medicinal plant", "plant extract", "essential oil", "bioactive", "phytochemical", "antimicrobial", and "eradication".
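As an illustration only, the prose search strategy above could be scripted roughly as follows; this sketch assumes the Biopython Entrez client and a placeholder contact e-mail, neither of which is part of the original methodology (the authors describe a manual database search).

```python
# Illustrative sketch of the described PubMed search (2008-2018, keyword combinations).
# Assumes Biopython is installed; the e-mail address and retmax value are placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical contact address required by NCBI

keywords = [
    "Helicobacter pylori", "anti-Helicobacter", "medicinal plant", "plant extract",
    "essential oil", "bioactive", "phytochemical", "antimicrobial", "eradication",
]

# Combine the pathogen term with each secondary keyword, mirroring the prose strategy.
query = '"Helicobacter pylori" AND (' + " OR ".join(f'"{k}"' for k in keywords[1:]) + ")"

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    mindate="2008", maxdate="2018", datetype="pdat",  # prioritize 2008-2018 publications
    retmax=200,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records matched; first IDs: {record['IdList'][:5]}")
```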
Helicobacter pylori and Gastric Cancer
H. pylori infection has been implicated in the development of gastric cancer, a multifactorial disease and a leading cause of mortality. The risk factors for gastric cancer have been shown to include environmental factors and factors that influence host-pathogen interaction, as well as the complex interplay between these factors [31]. Modern lifestyle, high stress levels, smoking and excessive alcohol consumption, nutritional deficiencies, and prolonged use of non-steroidal anti-inflammatory drugs (NSAIDs) are amongst the most relevant etiological environmental factors [32].
This bacterial infection has been linked to the initiation of chronic gastritis that could later lead to adenocarcinoma of the intestine [33]. Several mechanisms have been proposed to explain the involvement of H. pylori infection in tumorigenesis. Several bacterial virulence factors, such as the cytotoxin-associated gene A (CagA) protein, encoded within the DNA insertion element Cag pathogenicity island (CagPAI), were found to be of prominent importance in carcinogenesis [34]. Likewise, bacterial peptidoglycan can be delivered into gastric epithelial cells, where it activates a phosphoinositide 3-kinase (PI3K)-Akt pathway leading to cell proliferation, migration, and prevention of apoptosis [35]. Furthermore, H. pylori-induced gastric inflammation involves the cyclooxygenase-2 (COX2)/prostaglandin E2 (PGE2) pathway and the inflammatory marker interleukin 1β (IL-1β), which are important factors triggering chronic active gastritis.

The susceptibility of H. pylori isolates and strains to 543 extracts from 246 plant species was tested by disc diffusion, agar diffusion, agar dilution, and broth microdilution assays. Activity ranged from 1.56-100,000 µg/mL for minimal inhibitory concentration (MIC) and 7-42 mm for inhibition zone diameters (IZDs). However, disparities were observed among the methods used and the tested concentrations: some extracts were tested at very high concentrations (100,000 µg/mL), which might have resulted in biased conclusions. Although many plants (246 species) showed anti-H. pylori activity in vitro, very few have been screened for activity in animal models.
Organic extracts of Carum carvi, Xanthium brasilicum, and Trachyspermum copticum have demonstrated antibacterial activity against 10 clinical isolates of H. pylori [46]. In addition, ethanolic extracts of Cuminum cyminum and propolis exhibited significant in vitro inhibitory effects against H. pylori and could therefore be considered a valuable support in the treatment of infection, even contributing to the development of new and safer agents for inclusion in anti-H. pylori therapy regimens [47]. Some popular plant species used in Brazilian cuisine and folk medicine for the treatment of gastrointestinal disorders were also investigated for their antibacterial effects, among which Bixa orellana, Chamomilla recutita, Ilex paraguariensis, and Malva sylvestris were the most effective against H. pylori [48].
In Vivo Findings
H. pylori colonization is increasingly being associated with a heightened risk of developing upper gastrointestinal tract diseases. Although many plant extracts have demonstrated prominent H. pylori inhibition in culture, assessing their in vivo efficacy is of crucial importance to ascertain their true antibacterial potency. However, relatively few medicinal plants have been investigated for in vivo activity to date, as discussed below.
Paeonia lactiflora root extract (100 µg/mL) showed complete inhibition of H. pylori colonization (4-5 × 10^5 colony forming units (CFU)), with an antibacterial potential equivalent to that of ampicillin (10 µg/mL) used as a positive control (2-4 × 10^5 CFU) [98]. Time-course viability experiments were also performed in simulated gastric environments to assess the anti-H. pylori activity of garlic (Allium sativum) oil (16 and 32 µg/mL). A rapid anti-H. pylori action in artificial gastric juice was found; nevertheless, the activity displayed by garlic oil was noticeably affected by food materials and mucin, although substantial activity remained under simulated gastric conditions [65]. Also, H. pylori-inoculated Swiss mice receiving 125, 250, or 500 mg/kg of Bryophyllum pinnatum or ciprofloxacin (500 mg/kg) for 7 days showed a significant reduction of H. pylori colonization of gastric tissue, from 100% to 17%. In addition, the highest B. pinnatum extract dose tested (85.91 ± 52.91 CFU) and the standard drug ciprofloxacin (25.74 ± 16.15 CFU) significantly reduced (p < 0.05) the bacterial load of the gastric mucosa compared with untreated infected mice (11,883 ± 1,831 CFU) [74]. Likewise, Eryngium foetidum methanol extract (381.9 ± 239.5 CFU) and the positive control ciprofloxacin (248 ± 153.2 CFU) significantly reduced the bacterial load in the gastric mucosa at the same dose (500 mg/kg) compared with untreated inoculated mice (14,350 ± 690 CFU) [73].
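For readers who want to see how such reductions translate into percentages, the short sketch below recomputes the relative reduction in mean gastric bacterial load from the CFU means quoted above; the calculation is purely illustrative and is not part of the cited studies.

```python
# Illustrative recalculation of percent reduction in mean gastric bacterial load (CFU)
# relative to untreated infected mice, using the means quoted in the text.
def percent_reduction(untreated_cfu: float, treated_cfu: float) -> float:
    """Return the percentage reduction of treated vs. untreated mean CFU counts."""
    return 100.0 * (untreated_cfu - treated_cfu) / untreated_cfu

groups = {
    "B. pinnatum (500 mg/kg)": (11883, 85.91),
    "Ciprofloxacin (B. pinnatum study)": (11883, 25.74),
    "E. foetidum (500 mg/kg)": (14350, 381.9),
    "Ciprofloxacin (E. foetidum study)": (14350, 248.0),
}

for name, (untreated, treated) in groups.items():
    print(f"{name}: {percent_reduction(untreated, treated):.1f}% reduction")
```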
Hippocratea celastroides hydroethanolic root-bark extract, from a plant widely used against gastric and intestinal infections, also showed anti-H. pylori efficacy in naturally infected dogs. In a study of 18 dogs treated with 93.5-500 mg/kg body weight of H. celastroides extract and 19 infected dogs receiving amoxicillin-clarithromycin-omeprazole (control treatment), effectiveness was 33.3% and 55% in the experimental and control groups, respectively [99].
On the other hand, Ye et al. [95], aiming to investigate the in vivo bactericidal effects of Chenopodium ambrosioides L. against H. pylori, randomly assigned H. pylori-infected mice to a plant extract group, a triple-therapy control group (lansoprazole, metronidazole, and clarithromycin), a blank control group, and an H. pylori control group. The eradication ratios, determined by rapid urease tests (RUTs) and histopathology, were 60% (6/10) by RUT and 50% (5/10) by histopathology for the test group, and 70% (7/10) by both methods for the control group. In addition, histopathologic evaluation revealed massive bacterial colonization of the gastric mucosa surface and slight mononuclear cell infiltration after H. pylori inoculation, but no obvious differences in inflammation or other pathologic changes in the gastric mucosa were observed between the C. ambrosioides-treated mice and those receiving standard therapy.
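The eradication ratios above are reported without a formal statistical comparison; purely as an illustration (this analysis is not in the cited study), a Fisher's exact test on the RUT-based counts (6/10 versus 7/10) could be run as follows.

```python
# Illustrative comparison of eradication ratios (RUT-based: 6/10 vs 7/10) with a
# Fisher's exact test. This analysis is not part of the cited study.
from scipy.stats import fisher_exact

extract_group = (6, 4)    # eradicated, not eradicated (C. ambrosioides)
triple_therapy = (7, 3)   # eradicated, not eradicated (lansoprazole-based control)

table = [list(extract_group), list(triple_therapy)]
odds_ratio, p_value = fisher_exact(table)

print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# With only 10 mice per group, the 60% vs 70% difference is far from statistical significance.
```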
Tinospora sagittata extract and its main component, palmatine, showed in vitro bactericidal effects on H. pylori strains: the extract had MIC and minimal bactericidal concentration (MBC) values of 6,250 µg/mL, whereas palmatine's MIC was 6.25 µg/mL against H. pylori SCYA201401 and 3.12 µg/mL against H. pylori SS1. The time-kill kinetic study evidenced a dose-dependent and progressive decline in the numbers of viable bacteria up to 40 h. H. pylori-infected mice treated with the extract, palmatine, or control therapy (omeprazole, clarithromycin, and amoxicillin) presented eradication ratios of 80%, 50%, and 70%, respectively. The anti-H. pylori activity found for T. sagittata extracts and their major constituent, palmatine, both in culture and in animal models, clearly highlights the antibacterial potential of this plant in the treatment of both infected humans and animals [42].
The activity of the total alkaloid fraction of Sophora alopecuroides L. (TASA), widely used in herbal remedies against stomach-associated diseases, was also investigated in H. pylori-infected BALB/c mouse gastritis (120 mice). A total of 100 infected mice were randomly assigned to 10 treatment groups: group I (normal saline); group II (bismuth pectin); group III (omeprazole); group IV (TASA 2 mg/day); group V (TASA 4 mg/day); group VI (TASA 5 mg/day); group VII (TASA + bismuth pectin); group VIII (TASA + omeprazole); group IX (bismuth pectin + clarithromycin + metronidazole); and group X (omeprazole + clarithromycin + metronidazole). The mice were sacrificed 4 weeks after treatment. Real-time PCR was used to detect H. pylori 16S DNA to assess both colonization and bacterial clearance for each treatment. Hematoxylin and eosin staining and immunostaining of the mouse gastric mucosa were also used to observe general inflammation and how the related factors IL-8, COX2, and nuclear factor-kappa B (NF-κB) changed after treatment. TASA combined with omeprazole or bismuth pectin showed promising antimicrobial activity against H. pylori, comparable to conventional triple therapy. Indeed, hematoxylin and eosin staining and immunostaining of the gastric mucosa evidenced that inflammation of the gastric mucosal membrane was clearly relieved by the TASA combination treatments and by conventional triple therapy compared with normal saline-treated mice. Accordingly, immunohistochemistry showed that H. pylori-induced IL-8, COX2, and NF-κB were consistently suppressed to a certain extent in groups VII, VIII, IX, and X [100].
Pastene et al. [101] investigated the inhibitory effects of a standardized apple peel polyphenol-rich extract (Malus pumila Mill., cited as Malus domestica) against H. pylori infection and vacuolating cytotoxin (VacA)-induced vacuolation and found that the preparation significantly prevented vacuolation in HeLa cells, with an IC50 value of 390 µg gallic acid equivalents (GAE)/mL, and exerted an in vitro anti-adhesive effect against H. pylori. A significant inhibition was also observed, with a 20-60% reduction of H. pylori attachment at concentrations between 0.250 and 5 mg GAE/mL. In a short-term infection model (C57BL6/J mice), doses of 150 and 300 mg/kg/day showed an inhibitory effect on H. pylori attachment. Orally administered apple peel polyphenols also showed an anti-inflammatory effect on H. pylori-associated gastritis, lowering malondialdehyde levels and gastritis scores.
Kim et al. [102] investigated the ability of GutGard™ (a flavonoid-rich Glycyrrhiza glabra root extract) to inhibit H. pylori growth in both Mongolian gerbil and C57BL/6 mouse models. Infected male Mongolian gerbils were orally treated once daily, 6 times per week for 8 weeks, with 15, 30, or 60 mg/kg GutGard™. Bacterial identification in gastric mucosa biopsy samples via urease, catalase, and ELISA assays, as well as immunohistochemistry, revealed a dose-dependent inhibition of H. pylori colonization of the gastric mucosa by GutGard™. Likewise, administration of 25 mg/kg GutGard™ to H. pylori-infected C57BL/6 mice significantly reduced H. pylori colonization of the gastric mucosa, suggesting its usefulness in preventing H. pylori infection.
Calophyllum brasiliense stem bark preparations are popular remedies for the treatment of chronic ulcers. A recent report evidenced gastroprotective and gastric acid-inhibitory properties as well as anti-H. pylori activity in culture (MIC = 31 µg/mL) [75]. Wistar rats ulcerated by acetic acid and inoculated with H. pylori showed a marked delay in ulcer healing; treatment with the hydroethanolic (50, 100, and 200 mg/kg) and dichloromethane (100 and 200 mg/kg) fractions reduced the ulcerated area in a dose-dependent manner [75]. While the dichloromethane fraction, at 200 mg/kg, increased PGE2 levels, both the hydroethanolic and dichloromethane fractions decreased the number of urease-positive animals, as confirmed by the reduced presence of H. pylori in the histopathological analysis. This suggests that the antiulcer activity of C. brasiliense is partly linked to its anti-H. pylori efficacy [75]. Also, phenolic-rich oregano (Origanum vulgare) and cranberry (Vaccinium macrocarpon) extracts showed a prominent ability to inhibit H. pylori through urease inhibition and through disruption of energy production by inhibition of proline dehydrogenase at the plasma membrane [103].
Urease Inhibition
Current therapies are challenged by the considerable number of emerging H. pylori-resistant strains. This fact has driven the need for alternative anti-H. pylori therapies, which should ideally have good stability, low toxicity, and the ability to inhibit urease activity [62]. It has been shown that H. pylori urease activity is crucial for bacterial survival and pathogenesis [104].
The urease-inhibitory potency of some anti-H. pylori medicinal plants has been reported [62], and some authors have even investigated the mechanisms of antibacterial action of these plant products [63]. Table 11 briefly summarizes the plant extracts studied for prominent anti-urease activity. Amin et al. [49] demonstrated that the methanolic and acetone extracts of some medicinal plants were able to inhibit urease activity. In fact, Acacia nilotica flower methanol and acetone extracts showed anti-H. pylori activity, with MIC values of 8-64 µg/mL and 4-64 µg/mL, respectively, and inhibited urease activity by 8.2-88.2% and 9.2-86.6%. Calotropis procera leaf and flower methanol and acetone extracts, with MIC values of 16-256 µg/mL, 32-256 µg/mL, and 8-128 µg/mL, also displayed urease inhibitory effects of 12.2-48.2% and 7.2-58.2% for the leaf extracts and 9.3-68.2% for the flower acetone extract, respectively [49]. While the A. nilotica extract exerted competitive inhibition, the C. procera extract displayed a mixed type of inhibition [49]. In addition, Casuarina equisetifolia fruit methanol extract, with MIC values ranging from 128-512 µg/mL, displayed 12.2-86.2% inhibition of urease activity [49].
In another study, Camellia sinensis young non-fermented and semi-fermented shoot extracts presented inhibition zone diameters (IZDs) and MBC values of, respectively, 22.5 mm at 20-60 µg/disk and 4 mg/mL, and 18 mm at 20-60 µg/disk and 5.5 mg/mL. Both inhibited UreA and UreB subunit production at 2.5 and 3.5 mg/mL [94]. Also, the Chamomilla recutita flower extract, which inhibited H. pylori growth with an MIC90 value of 125 mg/mL and an MIC50 value of 62.5 mg/mL, was able to inhibit urease production [105]. In the same line, the methanol fraction of Euphorbia umbellata bark extract inhibited both H. pylori growth (44.6% inhibition) at 256 µg/mL and urease activity (78.6% inhibition) at 1024 µg/mL [77]. Moreover, the Peumus boldus flower aqueous extract showed anti-adherent activity against H. pylori and inhibited urease activity with an IC50 value of 23.4 µg GAE/mL [61]. The aqueous extract of Terminalia chebula fruit showed activity with MIC and MBC values of 125 mg/mL and 150 mg/mL, respectively, and inhibited H. pylori urease activity at concentrations of 1-2.5 mg/mL [63].
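IC50 values such as the 23.4 µg GAE/mL quoted above are typically obtained by fitting a dose-response curve to inhibition measurements. The sketch below shows one common way to do this with a four-parameter logistic (Hill) model; the inhibition data points are invented for demonstration and do not come from the cited studies.

```python
# Illustrative IC50 estimation from urease-inhibition data using a four-parameter
# logistic (Hill) model. The data points below are invented for demonstration only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: percent inhibition as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])  # µg/mL (hypothetical)
inhibition = np.array([5, 12, 30, 55, 78, 90])          # % urease inhibition (hypothetical)

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0, 100, 20, 1])
bottom, top, ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.1f} µg/mL (Hill slope {hill:.2f})")
```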
Conclusions and Future Perspectives
Overall, this report suggests that the studied plant extracts possess anti-H. pylori activity, strengthening the claims made by traditional medicine practitioners about their putative anti-ulcerative properties. However, very few of them have been investigated for efficacy in animal models or for the ability to inhibit urease activity. Further work is warranted, including efficacy studies in animal models, elucidation of the effective modes of action (including urease inhibition), and clinical trials in humans.
Author Contributions: All authors contributed equally to this work. B.S., J.S.-R., P.V.T.F., and N.M. critically reviewed the manuscript. All the authors read and approved the final manuscript.
Funding: The APC was funded by N Martins.
Endoplasmic reticulum stress in the pathogenesis of alcoholic liver disease
The endoplasmic reticulum (ER) plays a pivotal role in protein synthesis, folding, and modification. Under stress conditions such as oxidative stress and inflammation, the ER can become overwhelmed, leading to an accumulation of misfolded proteins and ensuing ER stress. This triggers the unfolded protein response (UPR), designed to restore ER homeostasis. Alcoholic liver disease (ALD), a spectrum disorder resulting from chronic alcohol consumption, encompasses conditions from fatty liver and alcoholic hepatitis to cirrhosis. Metabolites of alcohol can incite oxidative stress and inflammation in hepatic cells, instigating ER stress. Prolonged alcohol exposure further disrupts protein homeostasis, exacerbating ER stress, which can lead to irreversible hepatocellular damage and ALD progression. Elucidating the contribution of ER stress to ALD pathogenesis may pave the way for innovative therapeutic interventions. This review delves into ER stress, its basic signaling pathways, and its role in alcoholic liver injury.
INTRODUCTION
Alcoholic liver disease (ALD) is a significant concern for hepatologists, clinical researchers, and healthcare professionals involved in liver disease management, ranking among the most common liver afflictions worldwide. Its manifestations, resulting primarily from chronic excessive alcohol consumption, range from asymptomatic fatty degeneration and alcoholic steatohepatitis to more severe conditions such as alcoholic liver fibrosis, cirrhosis, and even alcohol-related hepatocellular carcinoma (Hyun et al., 2022). Notably, in 2019, approximately 25% of cirrhosis-related mortalities worldwide were linked to alcohol consumption (Huang et al., 2023).
The endoplasmic reticulum (ER) serves as the cellular hub for protein folding. Accumulation of improperly folded proteins within the ER leads to rising misfolded protein levels; in response, the unfolded protein response (UPR) is initiated to restore ER equilibrium (Keerthiga, Pei & Fu, 2021). The UPR can be activated through three pathways: the IRE1a, ATF6a, and PERK pathways. Activation of the UPR pathways can increase the capacity of the ER and correct misfolded proteins. However, sustained ER stress can overwhelm the adaptive capacity of the UPR, culminating in cellular apoptosis. It is noteworthy that alcohol-induced liver injuries are characterized by UPR activation, which is concomitant with pathologies such as reactive oxygen species (ROS) generation and mitochondrial impairment (Xia et al., 2020).
Hepatocytes, the primary cellular units of the liver, fulfill intricate metabolic functions. These include the synthesis and secretion of plasma proteins, the secretion of lipoproteins such as very low-density lipoprotein, cholesterol synthesis, and detoxification of foreign substances (Yoon & Kim, 2023). Owing to these multifarious roles, hepatocytes are replete with both smooth and rough ER. Figure 1 illustrates the association between ER stress and ALD. The functions of the ER are well established. However, how ALD-related perturbations of ER stress and homeostasis modulate ER functionality, and how such disruptions influence hepatocyte operations, remains enigmatic. ER stress manifestations are discernible across a spectrum of ALD cases (Xia et al., 2020). Existing studies allude to the potential of ER stress to incite lipid metabolism anomalies, inflammation, and cell apoptosis (Yoon & Kim, 2023; Lebeaupin et al., 2018). Yet the exact mechanisms by which these anomalies induce hepatocyte injury and subsequently exacerbate ALD warrant comprehensive exploration.
This review, aimed at both budding researchers and seasoned experts in hepatology, cellular biology, and related fields, traces the origin of UPR under ER stress conditions, highlights the tendency of alcohol to prompt ER stress, and sheds light on ER stress's central role in ALD progression.
SURVEY METHODOLOGY
The PubMed database was searched for related literature using the keywords 'alcoholic liver disease', 'endoplasmic reticulum stress', 'unfolded protein response', and 'hepatocyte apoptosis'.
UPR pathways in hepatic ER stress
The liver stands as a pivotal organ, orchestrating crucial metabolic, secretory, and excretory functions. Hepatocytes in the liver are rich in ER, which supports processes such as the synthesis and secretion of very low-density lipoprotein and plasma proteins. In the liver, alcohol escalates oxidative stress, thereby impeding protein folding and modification and consequently triggering ER stress. ER stress in turn activates the UPR to maintain ER homeostasis. The activation of the UPR due to ER stress mainly depends on three pathways (Lebeaupin et al., 2018): (1) inositol-requiring enzyme 1 (IRE1), (2) activating transcription factor 6 (ATF6), and (3) protein kinase RNA-like ER kinase (PERK). IRE1, a universally conserved UPR pathway present in both yeast and mammalian cells (Ren et al., 2021), is a dual-functional transmembrane protein possessing both serine/threonine kinase and ribonuclease activities. Its N-terminal domain, oriented towards the ER lumen, is adept at detecting ER stress (Aghaei et al., 2020). Once ER stress is detected, IRE1 undergoes dimerization and autophosphorylation, thereby activating its RNase domain (Hughes & Mallucci, 2019). This aids in mitigating ER stress and enhancing protein folding and processing by activating the expression of downstream genes. Through the collaborative action of IRE1 and the tRNA ligase RTCB, the mRNA of the transcription factor X-box binding protein 1 (XBP1) undergoes cleavage and splicing into the active spliced form XBP1s (Han & Kaufman, 2016). XBP1s promotes ER protein folding and secretion, enhances ER-associated degradation (ERAD), and fosters lipid synthesis (Hetz et al., 2011). Recent research has demonstrated that inhibitors targeting the IRE1 signaling pathway effectively hinder TGFβ-induced fibroblast activation in vitro; this inhibition was subsequently found to lead to a marked reduction in liver fibrosis in vivo (Heindryckx et al., 2016).
PERK is a transmembrane protein with two structural domains: an N-terminal stress-sensing domain and a cytoplasmic kinase domain. During ER stress, PERK undergoes phosphorylation, leading to homodimer formation. To alleviate ER stress, PERK phosphorylates the alpha subunit of eukaryotic translation initiation factor 2 (eIF2a), inhibiting the assembly of the 80S ribosome and halting mRNA translation. Besides this inhibitory action, phosphorylated eIF2a also enhances the expression of the transcription factor ATF4. ATF4 regulates the expression of the growth arrest and DNA damage-inducible 34 (GADD34) protein. GADD34, functioning as a co-factor for phosphatases, responds to stress by promoting the dephosphorylation of eIF2a, thereby reinstating protein translation.
ATF6, another transmembrane protein, manifests in two isoforms: ATF6a and ATF6β (Macke et al., 2023). Under ER stress, ATF6a migrates to the Golgi apparatus, where it undergoes cleavage to liberate the cytoplasmic fragment ATF6f. ATF6f then relocates to the nucleus, where it regulates the cAMP response element and ER stress response elements, thereby controlling downstream targets such as GRP78, ERAD-related proteins, and XBP1 (Walter et al., 2018). Studies have shown that ATF6 gene-knockout mice, upon intraperitoneal injection of the ER stress inducer tunicamycin, exhibit pronounced liver dysfunction and steatosis compared with their wild-type counterparts (Yamamoto et al., 2010).
The various UPR pathways are illustrated in Fig. 2. Upon the onset of ER stress, cells activate the UPR via the pathways mentioned above, which alleviate ER stress by reducing protein translation, increasing the expression of protein-folding enzymes, and promoting protein degradation. However, chronic alcohol consumption can intensify ER stress, and persistent ER stress may exceed the adaptive threshold of the UPR, leading to further disturbance of hepatocyte physiology.
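As a reading aid, the three UPR branches described above can be summarized in a simple data structure; the entries below merely restate the relationships given in the text and are not an additional data source.

```python
# Schematic summary of the three UPR branches described above, encoded as a dict.
# This is a reading aid; the entries restate the relationships given in the text.
UPR_BRANCHES = {
    "IRE1": {
        "activation": "dimerization and autophosphorylation activate the RNase domain",
        "key_effectors": ["XBP1s (spliced with the help of RTCB)"],
        "main_outputs": ["ER protein folding/secretion", "ERAD", "lipid synthesis"],
    },
    "PERK": {
        "activation": "phosphorylation and homodimer formation",
        "key_effectors": ["phospho-eIF2a", "ATF4", "GADD34"],
        "main_outputs": ["global translation attenuation", "stress-gene transcription",
                         "eIF2a dephosphorylation (feedback via GADD34)"],
    },
    "ATF6": {
        "activation": "transit to the Golgi and cleavage to ATF6f",
        "key_effectors": ["ATF6f (nuclear fragment)"],
        "main_outputs": ["GRP78 induction", "ERAD-related genes", "XBP1 transcription"],
    },
}

for sensor, info in UPR_BRANCHES.items():
    print(f"{sensor}: {info['activation']}")
```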
Alcohol consumption and ER stress
Ethanol consumption has been directly implicated in inducing ER stress. Excessive alcohol intake disturbs the functioning of the ER, resulting in hepatic manifestations such as steatosis and liver inflammation. The mechanisms through which alcohol consumption induces ER stress are described below.
Alcohol metabolism and reactive oxygen species: catalysts for ER stress
As shown in Fig. 1, ingested alcohol is oxidized to acetaldehyde by the alcohol dehydrogenase (ADH) enzyme, which is predominantly present in liver cells. This enzymatic reaction requires NAD+ as a coenzyme (Yoon & Kim, 2023). Acetaldehyde is subsequently converted to acetic acid by the mitochondrial aldehyde dehydrogenase (ALDH) enzyme. Notably, ALDH2 knockout mice exhibit a marked accumulation of acetaldehyde (Li et al., 2019). Acetaldehyde is known to promote protein oxidation, diminish SOD enzyme activity, and decrease the glutathione/oxidized GSH ratio (Farfán Labonne et al., 2009). Li et al. (2017) found that decreased expression levels of ADH and ALDH could suppress ROS generation in mice with alcohol-induced liver injury. Another pathway of ethanol metabolism is the microsomal ethanol-oxidizing system (MEOS), in which the cytochrome P450 enzyme CYP2E1 operates with the coenzyme NADPH (Cioarca-Nedelcu, Atanasiu & Stoian, 2021). Research indicates a significant increase in the expression of the ROS-generating enzymes CYP2E1 and NOX4 after exposing mice acutely to ethanol for 24 h (Chen et al., 2021). Although ethanol can also be oxidized via a hydrogen peroxide-dependent peroxisomal pathway, this is not the primary route of its metabolism (Singh, Osna & Kharbanda, 2017). When ethanol is metabolized via the ADH and MEOS pathways, a significant amount of NADH or NADP+ is generated, causing ROS accumulation and aggravating ER stress (Teschke, 2019). In addition, studies have revealed that alcohol consumption diminishes the activity of several antioxidant enzymes, such as superoxide dismutase, catalase, selenium (Se)-dependent glutathione peroxidase, glutathione reductase, and glutathione S-transferase. This decline is particularly pronounced in older mice, which exacerbates oxidative liver damage (Mallikarjuna et al., 2010; Shanmugam, Mallikarjuna & Reddy, 2011). Moreover, reduced levels of vitamins E and C have been observed in patients with ALD (Masalkar & Abhang, 2005), highlighting the diminished antioxidative capacities of such individuals.
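As a rough quantitative illustration of why ethanol oxidation shifts the hepatocellular redox balance, the sketch below tallies the reducing equivalents generated when ethanol is fully oxidized to acetate via the ADH/ALDH route (one NADH per step, i.e., two per ethanol molecule); the 14 g dose is an arbitrary example and the calculation is not taken from the cited studies.

```python
# Illustrative redox bookkeeping for ethanol oxidation via the ADH/ALDH route.
# Each step (ethanol -> acetaldehyde -> acetate) reduces one NAD+ to NADH,
# i.e., two NADH per ethanol molecule. Dose values are arbitrary examples.
ETHANOL_MOLAR_MASS = 46.07   # g/mol
NADH_PER_ETHANOL = 2         # one from the ADH step, one from the ALDH step

def nadh_from_ethanol(grams_ethanol: float) -> float:
    """Moles of NADH generated if the given mass of ethanol is fully oxidized to acetate."""
    moles_ethanol = grams_ethanol / ETHANOL_MOLAR_MASS
    return moles_ethanol * NADH_PER_ETHANOL

# Example: roughly one "standard drink" worth of pure ethanol (hypothetical 14 g)
print(f"{nadh_from_ethanol(14):.2f} mol NADH produced from 14 g of ethanol")
```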
Alcohol-induced disruption of lipid metabolism: implications for ER stress
Excessive alcohol consumption disrupts lipid metabolism by altering the expression and activity of pivotal enzymes involved in lipogenesis, fatty acid oxidation, and lipoprotein secretion (Bolatimi et al., 2023). For instance, alcohol can elevate the expression of the lipogenic gene-regulating transcription factor sterol regulatory element-binding protein 1c (SREBP-1c), leading to increased lipogenesis and lipid accumulation in hepatocytes (Hyun et al., 2021). It also impairs fatty acid oxidation by inhibiting the activity of peroxisome proliferator-activated receptor alpha (PPARa), a nuclear receptor involved in the regulation of genes controlling fatty acid β-oxidation (Zhang, Liu & Yang, 2023). Alcohol also affects lipid transport by modifying the secretion of lipoproteins such as very-low-density lipoproteins (VLDL), which play a significant role in lipid export from hepatocytes (Wang et al., 2023). Studies utilizing C57BL/6J mice on long-term alcohol feeding, with rapamycin-induced inhibition of mTORC1 activation, revealed that hepatic free fatty acids can initiate the mTORC1 signaling pathway, subsequently leading to ER stress (Guo et al., 2021). Moreover, accumulated lipids resulting from alcohol metabolism can compromise the integrity of the ER membrane, thereby disturbing its regular functionality (Han & Kaufman, 2016). While the precise biological mechanism by which lipids instigate ER stress remains elusive, free fatty acids have been shown to obstruct protein folding, thereby precipitating ER stress (Lepretti et al., 2018). The lipid composition and quantity of the ER membrane could affect its fluidity and functionality, hindering the normal operation of membrane proteins and subsequently causing ER stress.
The ER serves as the primary site for lipid metabolism, owing to the presence of lipid-metabolizing enzymes within the ER. Alcohol-induced dysregulation of lipid metabolism can lead to the accumulation of lipids within the ER membrane, causing ER stress and activation of the UPR (Xia et al., 2020). It has been demonstrated that the UPR not only modulates ER homeostasis but also influences lipid metabolism. In the liver, lipogenesis relies on insulin-induced SREBP-1c activation. Recent findings revealed that CHI3L1 gene knockout decreased the mRNA levels of the transcription factor SREBP1 in the livers of mice modeled with the Lieber-DeCarli diet, mitigating the liver damage caused by ALD-related upregulation of SREBP-1 and highlighting the potential therapeutic implications of SREBP-1 regulation for ALD (Lee et al., 2019). In addition, studies have shown that overexpression of GRP78 in the mouse liver repressed the cleavage of SREBP-1c and the expression of target genes of both SREBP-1c and SREBP-2, indicating that ER stress contributes to hepatic steatosis in laboratory models (Kammoun et al., 2009). Furthermore, hepatocyte-specific knockout of the DGAT1 gene in mice subjected to chronic alcohol feeding led to lipid accumulation in hepatocytes, which in turn activated ATF4 and induced ER stress (Guo et al., 2021).
ER STRESS IN THE PATHOGENESIS OF ALD
ALD encompasses conditions such as fatty liver, alcoholic hepatitis, and cirrhosis. Recent research suggests that ER stress and the UPR may play significant roles in the pathogenesis of ALD. In ALD, persistent ER stress and UPR activation may lead to lipid metabolic disorder, hepatocyte inflammation, and even cell death, thereby exacerbating hepatic inflammation and fibrosis. This section provides a detailed overview of the mechanisms by which ER stress regulates hepatocyte lipid metabolism, inflammation, and apoptosis.
ER stress-induced hepatic lipid metabolic disorders
The enzymes required for lipid metabolism are widely distributed in the ER, making the liver a crucial hub for fat synthesis. Notably, the liver also dominates cholesterol synthesis in the body (Zhao et al., 2020). Under severe ER stress, the three main UPR pathways play crucial roles in the regulation of fatty degeneration. Each pathway may contribute to the progression of hepatic fatty degeneration by promoting fat breakdown and de novo fat generation, reducing fatty acid oxidation, and interfering with the secretion of lipoproteins and very low-density lipoproteins (Yu & Pajvani, 2023).
In hepatic fat metabolism, the IRE1a-XBP1 pathway significantly influences the assembly and secretion of VLDL and de novo fat production (Ding et al., 2023). IRE1a has been shown to inhibit vital metabolic transcription regulators, such as CCAAT/enhancer binding proteins (C/EBP) β and δ and peroxisome proliferator-activated receptor γ (PPARγ), as well as enzymes implicated in triglyceride biosynthesis. ER-stressed IRE1a-deficient mice exhibit severe fatty liver due to hampered regulation of lipid synthesis and obstructed VLDL secretion (Zhang et al., 2011). Furthermore, the Bax inhibitor-1 (BI-1) gene is expressed at lower levels in obese mice, and overexpression of BI-1 appears to stabilize lipid metabolism during the UPR by inhibiting IRE1a endonuclease activity (Bailly-Maitre et al., 2010). Remarkably, XBP1 ablation in hepatocytes led to a significant reduction in cholesterol and triglycerides by curtailing hepatic lipid production (Lee et al., 2008). In hepatocytes, XBP1s interacts with the promoters of lipid metabolism-related genes (SCD, DGAT2), modulating their expression levels (Lee et al., 2008). Moreover, a negative feedback loop exists between XBP1 and IRE1a: the absence of XBP1 triggers activation of IRE1a and degradation of downstream mRNAs related to lipid metabolism, inducing noticeable hypocholesterolemia in mice. Interestingly, disruption of regulated IRE1-dependent decay (RIDD) can reverse the hypocholesterolemia observed in XBP1-deficient mice (So et al., 2012).
The PERK/eIF2a pathway also plays a vital role in hepatic fat metabolism. When stimulated by palmitic acid ester-induced ER stress in hepatocytes, this pathway can promote steatosis by enhancing the expression of GADD153/C/EBP homologous protein (CHOP), reducing the secretion of apolipoprotein B, and leading to the accumulation of triglycerides and cholesterol in hepatocytes (Guo et al., 2022). Under stress conditions, phosphorylated eIF2a promotes the expression of ATF4, which subsequently induces the expression of the transcription factor CHOP. This chain of events disrupts the function of C/EBP, inhibiting the expression of genes related to lipid metabolism and thereby triggering disorders of fatty acid oxidation, lipoprotein secretion, and other lipid metabolic processes (Magee et al., 2022). In mice with ATF4 gene deletion, hepatic expression of peroxisome proliferator-activated receptor-γ (PPAR-γ), a nuclear receptor that promotes hepatic fat production, is significantly reduced. The absence of ATF4 can thus weaken hepatic fat production by downregulating PPAR-γ, without affecting hepatic triglyceride production or fatty acid oxidation (Xiao et al., 2013). Knockout of the ATF4 gene increases energy expenditure in mice and can inhibit diet-induced diabetes as well as hyperlipidemia and fatty liver (Seo et al., 2009). However, basal-level phosphorylation of eIF2a can prevent lipid accumulation caused by a direct challenge to ER stress, because inactivation of eIF2a in mice can lead to tetracycline-induced fatty liver (Rutkowski et al., 2008).
ATF6, conversely, acts protectively against hepatic fatty lesions. Overexpression of a dominant-negative form of ATF6 (dnATF6) elevates susceptibility to hepatic steatosis in mice that manifest insulin resistance due to a high-fat diet (Chen et al., 2016). Additionally, a direct physical interaction is observed between ATF6 and PPARa. This interaction amplifies the transcriptional activity of PPARa, subsequently activating PPARa target proteins such as CPT1a and MCAD in hepatocytes (Flister et al., 2018). These proteins promote hepatic fatty acid oxidation, a process critical for controlling liver fat accumulation and energy balance; however, excessive fatty acid oxidation may increase oxidative stress in the liver, potentially resulting in liver injury (Tu et al., 2020). Elevated ATF6 expression in the liver fosters hepatic fatty acid oxidation, protecting mice with high-fat-diet-induced insulin resistance from hepatic steatosis (Chen et al., 2016). Mice with ATF6a gene knockout exhibit significant liver dysfunction and fatty liver, with a notable accumulation of neutral lipids (e.g., triglycerides and cholesterol) in the liver. This accumulation stems from reduced mRNA levels of enzymes crucial for fatty acid β-oxidation, instability of apolipoprotein B-100 that hampers the formation of very low-density lipoproteins, and lipid droplet synthesis triggered by the transcription of lipid differentiation-related proteins (Yamamoto et al., 2010). The absence of ATF6a blocks fatty acid oxidation, further promoting the early development of fatty liver (Tsitrina et al., 2023). When exposed to tunicamycin, ATF6a-deficient mice persistently express CHOP, show inhibition of C/EBPa (CCAAT/enhancer-binding protein a), and develop hepatic steatosis (Lebeaupin et al., 2018). Both CHOP and C/EBPa are pivotal transcription factors that play crucial regulatory roles in hepatic lipid and glucose metabolism. Persistent CHOP expression might increase cellular sensitivity to stress, leading to cell death, while inhibition of C/EBPa could unbalance hepatic lipid and glucose metabolism, eventually leading to hepatic steatosis (Lebeaupin et al., 2018). In addition, ATF6 can inhibit the transcriptional activation of SREBP2, thereby suppressing the lipid-generating effects of SREBP2 in hepatic cells (Zeng et al., 2004).
The UPR acts as a counteractive mechanism to ER stress, striving to maintain ER homeostasis. Nonetheless, sustained ER stress can activate the UPR for a prolonged period, leading to disturbances in lipid metabolism. Each UPR pathway plays a distinct role in the lipid metabolic disorder induced by ER stress, and the interactions and specific mechanisms among these pathways still require further in-depth research. Overall, the UPR is indispensable for hepatic lipid metabolism. Deepening our understanding of its operation and regulatory mechanisms not only demystifies the intricacies of lipid metabolism but also paves the way for innovative treatments for hepatic diseases related to lipid metabolism.
ER stress and the hepatic inflammation
ER stress is associated with several cellular inflammatory response pathways, including the activation of NF-kB, JNK, ROS, interleukin-6 (IL-6), and tumor necrosis factor-a (TNF-a) (Malhi & Kaufman, 2011). Notably, NF-κB activation plays a predominant role in inflammatory responses, serving as a crucial mediator of hepatocellular damage, fibrosis, and the progression of hepatocellular carcinoma. Research indicates that phosphorylation of eIF2a is vitally important in driving NF-kB transcriptional activity; during ER stress, the PERK pathway can activate NF-kB by inhibiting the inhibitory protein IkappaB (Du et al., 2022). Pathogen-generated lipopolysaccharides in the intestine may exert toxic effects on hepatocytes and activate the NF-kB-mediated inflammatory response in stem cells. This response has a dual role, promoting inflammation while opposing apoptosis, highlighting a pivotal regulatory function of the NF-kB-mediated inflammatory response in hepatocytes. In this context, mild up-regulation of NF-kB can counteract hepatitis by inhibiting hepatocellular apoptosis via inflammatory responses, whereas excessive up-regulation of NF-kB can facilitate the release of inflammatory factors, intensifying the severity of hepatitis (Luedde & Schwabe, 2011). In addition, NF-kB can also be activated through phosphorylation of Akt in the ATF6 pathway (Yamazaki et al., 2009), as well as through the CHOP pathway (Willy et al., 2015).
ER stress and the hepatocyte apoptosis
ALD, a consequence of chronic and excessive alcohol consumption, encompasses a spectrum of liver conditions, from fatty liver and alcoholic hepatitis to cirrhosis and even liver cancer. Cell apoptosis is a significant factor in the progression of ALD, as depicted in Fig. 3. The primary pathways of cell apoptosis in ALD involve the extrinsic pathway, the mitochondrial pathway, and the ER stress pathway. Notably, the ER stress-induced apoptosis pathway further bifurcates into three significant routes (Beilankouhi et al., 2023): the IRE1a signaling pathway, CHOP-induced apoptosis, and activation of the caspase-12 pathway.
The role of IRE1α pathway in regulating the hepatocyte apoptosis
Under high ER stress, there is evidence indicating that IRE1 may promote the non-specific degradation of membrane-associated mRNAs through a mechanism termed regulated IRE1-dependent decay (RIDD) (Hollien & Weissman, 2006). While previous reports have identified RIDD's involvement in the degradation of ER stress-related mRNAs, limiting the synthesis of nascent proteins (Zhang et al., 2011) to relieve ER stress, it also appears to participate in the apoptotic pathway under acute ER stress conditions. RIDD cleaves RNA at a consensus site similar to that of XBP1, but its activity is distinct from XBP1 mRNA splicing; it has the capacity either to preserve ER equilibrium or to prompt cellular apoptosis (Maurel et al., 2014). Additionally, studies have found that mice with BAX and BAK gene knockout show an abnormal response to tunicamycin-induced ER stress, accompanied by extensive tissue damage; the expression of the IRE1 substrate X-box binding protein 1 and its target genes was also reduced. Co-immunoprecipitation experiments revealed an interaction between IRE1a and the BAK and BAX proteins, essential players in the mitochondrial apoptotic pathway, underscoring the potential involvement of BAX and BAK in activating the IRE1 signaling pathway and hepatic apoptosis (Hetz et al., 2006).
Within the IRE1 pathway, phosphorylated IRE1 also binds the adaptor protein tumor necrosis factor receptor-associated factor 2 (TRAF2). This interaction subsequently promotes a series of phosphorylation events, leading to the activation of Jun amino-terminal kinase (JNK) (Urano et al., 2000). Prolonged JNK activation can drive liver cell demise (Basha et al., 2023). Once activated, phosphorylated JNK activates the pro-apoptotic protein Bim while deactivating the anti-apoptotic protein Bcl2, orchestrating cell apoptosis. Additionally, overexpressed JNK can compromise ER membrane integrity, causing efflux of Ca2+ ions; this sequence activates caspase-12 via proteases, further pushing the cell towards apoptosis (Stillger et al., 2023). Wu et al. (2020) conducted a study in which mice were orally administered 10 mg/kg of copper sulfate, resulting in significant ER stress, elevated gene expression in the JNK and caspase-12 signaling pathways, and the onset of liver cell apoptosis. This study further elucidates the relationship between the JNK and caspase-12 signaling pathways, ER stress, and liver cell apoptosis.
The role of CHOP pathway in regulating the hepatocyte apoptosis

CHOP, an integral part of the C/EBP protein family, functions as a transcriptional regulator that facilitates apoptosis under the influence of ER stress. Investigations utilizing deletion mutations have elucidated the pivotal function of the C-terminal bZIP structural domain in CHOP-mediated apoptosis initiation (Ubeda et al., 1996). The apoptotic cascade mediated by CHOP is significantly associated with the PERK, ATF6, and IRE1 pathways (Oyadomari & Mori, 2004). During ER stress, phosphorylated eIF2a enhances the transcription of ATF4, leading to the upregulation of genes such as CHOP and GADD34 (Michel et al., 2015). Notably, mice lacking PERK and ATF4 are unable to activate CHOP-mediated apoptosis during ER stress (Harding et al., 2003). The eIF2a pathway triggers the expression of PERK and ATF4, amplifying protein synthesis, which in turn leads to ATP exhaustion, ROS generation, and subsequent apoptosis (Han et al., 2013). Within the ATF6 pathway, activated ATF6f migrates to the cell nucleus, promoting the transcription of several UPR-associated genes, including CHOP and XBP1 (Zimmermann et al., 2023). In the IRE1 pathway, XBP1, serving as a downstream transcription factor, can further augment the expression of the CHOP gene (Liu, Zhao & Rutkowski, 2023). Additionally, the phosphorylation cascade triggered by IRE1 encompasses not only JNK but also p38 mitogen-activated protein kinase (p38 MAPK); the latter further phosphorylates the serines at positions 78 and 81 of CHOP, thereby inducing apoptosis (Ron & Hubbard, 2008).
The pathways through which CHOP induces cell apoptosis include the following. (1) Inhibition of the expression of anti-apoptotic proteins of the BCL-2 family. McCullough et al. (2001) found that cell lines with heightened CHOP expression were especially vulnerable to ER stress, exhibiting decreased Bcl-2 expression; restoring Bcl-2 expression in these cells counteracted the CHOP-induced increase in ROS and cell apoptosis. How CHOP regulates the Bcl-2 promoter remains unclear, but studies have reported that CHOP can be transported into the nucleus through interaction with the bZIP protein C/EBPβ isoform LIP in the cytoplasm and nucleus; notably, mouse embryonic fibroblasts expressing LIP show enhanced ER stress-induced apoptosis (Chiribau et al., 2010). CHOP can cause cell apoptosis by reducing the expression of Bcl-2 and depleting cellular thiols, leading to ROS production and further disruption of oxidative balance (McCullough et al., 2001). (2) CHOP induces cell apoptosis through ROS. CHOP can oxidize the ER lumen via ER oxidase 1a (ERO1a) (Fujii, Ushioda & Nagata, 2023). In the absence of ER stress, ERO1a-mediated oxidation promotes the formation of disulfide bonds in proteins newly entering the ER; however, persistent ER stress can induce hyper-oxidation of the ER environment, leading to cell death. Cells lacking CHOP can reduce oxidative damage and achieve higher survival rates by reducing the expression of the ERO1a gene and ROS-related stress genes (Deng et al., 2023). Furthermore, CHOP gene knockout mice exhibit a significant reduction in cell apoptosis when exposed to agents that impair ER function (Zinszner et al., 1998). (3) The CHOP-GADD34-eIF2a pathway induces cell apoptosis. In the UPR, cells primarily mitigate ER stress and cell apoptosis by reducing protein translation rates through the phosphorylation of eIF2a (Hanson et al., 2022). Under these conditions, CHOP promotes cell apoptosis by enhancing GADD34 transcription, which facilitates the dephosphorylation of serine 51 of phosphorylated eIF2a, thereby reinvigorating protein translation. Gene knockout studies in mice by Marciniak et al. (2004) found that both GADD34- and CHOP-knockout mice were resistant to tunicamycin-induced ER stress, reinforcing this hypothesis.
The role of caspase-12 in regulating the hepatocyte apoptosis
Caspases, a group of cysteine proteases, are integral components in the orchestration of apoptosis. Within this family, caspase-12 has gained significant attention due to its participation in hepatocyte apoptosis, particularly in cell death triggered by ER stress. Under normal circumstances, tumor necrosis factor receptor-associated factor 2 (TRAF2) forms a complex with the caspase-12 precursor; under sustained ER stress, the precursor dissociates from TRAF2, thereby enabling caspase-12 activation (Nakagawa et al., 2000). Under sustained ER stress, caspase-7 cleaves the caspase-12 precursor at the Asp94 and Asp31 sites, thus activating caspase-12. Activated caspase-12 then activates caspase-9, which in turn activates caspase-3, ultimately inducing cell apoptosis (Pal et al., 2015). Additionally, procaspase-12 can also be cleaved by the calcium-regulated protease calpain to generate active caspase-12 (Bonsignore, Martinotti & Ranzato, 2023).
In addition, research by Ding et al. (2022) demonstrated that guanylate binding protein 5 (GBP5), a member of the guanosine triphosphate-binding protein family, exhibits abnormally elevated expression levels in cases of liver damage. Furthermore, inhibition of calpain activity or of caspase-3 can prevent GBP5-induced cell apoptosis (Ding et al., 2022). This implies a key role of GBP5 in regulating the caspase-12 cell apoptosis pathway.
ER stress and autophagy in ALD
It has been reported that, in a zebrafish model of ALD, alcohol metabolism leads to impaired ER function and activation of downstream targets of the UPR (Tsedensodnom et al., 2013). When hepatocytes face prolonged ER stress, UPR activation alone is inadequate to mitigate the stress, prompting the onset of autophagy to preserve ER stability (Senft & Ronai, 2015). In UPR-triggered autophagy, PERK plays a pivotal role by activating autophagy-related genes through the phosphorylation of eIF2a. Kouroku et al. (2007) found that mutations in the eIF2a phosphorylation site or knockout of the PERK gene diminish the cellular autophagy induced by ER stress. Additionally, in the IRE1 pathway, the binding of TRAF2 to IRE1 activates JNK, which further phosphorylates Bcl-2; this leads to the release of the autophagy-regulating protein Beclin-1 from Bcl-2, activating the phosphoinositide 3-kinase (PI3K) complex and autophagy (Deegan et al., 2013). Moreover, C/EBP-β is also implicated in this process (Parzych & Klionsky, 2014). In a study led by Lin et al. (2013), an enhancement of autophagic flux was observed in mice subjected to extended ethanol feeding. The investigation involved the administration of carbamazepine (an autophagy activator) to mice under both acute and chronic ethanol conditions; notably, carbamazepine alleviated hepatitis and hepatic injury in ALD mice by augmenting autophagic activity (Lin et al., 2013). Specific knockout of the DGAT1 gene in mouse hepatocytes revealed that these DGAT1-deficient mice exhibited an upregulation of ER stress and a downturn in LAMP2-mediated autophagy, which consequently led to liver damage (Guo et al., 2021). In a related study, Chao et al. (2018) reported that mice with knockout of the gene for transcription factor EB, a pivotal regulator of the transcription of autophagy-related genes, manifested exacerbated steatosis after ethanol treatment compared with control counterparts. It is evident from these findings that autophagy plays a protective role in the progression of ALD, and alcohol intake can stimulate cellular autophagy as a self-protective response (Ding et al., 2010). However, chronic alcohol consumption appears to suppress autophagy (Xia et al., 2020), thereby exacerbating hepatic injury.
Compounds regulating ER stress in ALD
In summary, ER stress is central to the pathogenesis of ALD. Our comprehension of the three pathways initiated by ER stress, coupled with insights into how alcohol induces such stress, provides fresh perspectives on the pathological mechanisms underlying ALD. This knowledge paves the way for the development of innovative treatment strategies, especially drugs that specifically target ER stress. With reference to the aforementioned IRE1a, PERK, and ATF6 pathways, the final section of this review summarizes some known and potential drugs, as detailed in Table 1. These drugs primarily influence ALD-related conditions, including fatty liver, alcoholic hepatitis, and liver cancer, through modulation of the aforementioned pathways, either mitigating or intensifying ER stress. By deepening our understanding of the mechanisms of action of these drugs, we aspire to identify more targeted and effective treatments to better combat the pathogenesis of ALD.
CONCLUSIONS
Intake of ethanol leads to elevated production of ROS in hepatocytes, intensifying the onset of ER stress. Mouse models indicate that this escalation is attributed to the decline in SOD levels and in the glutathione/oxidized GSH ratio caused by the metabolism of ethanol to acetaldehyde, combined with increased activity of ethanol-metabolizing enzymes (Li et al., 2019; Farfán Labonne et al., 2009; Li et al., 2017). Such findings underscore the potential of ROS reduction in the prevention and treatment of ALD. Additionally, ethanol can enhance lipid production by modulating the activity of SREBP-1c (Guo et al., 2021).
The accumulation of lipids in hepatocytes not only suppresses autophagy but also exacerbates ER stress (Lee et al., 2019), further contributing to liver damage.The activation of autophagy has been demonstrated to be a promising therapeutic strategy for ALD (Nissar et al., 2017).Prolonged ER stress that exceeds the regulatory scope of the UPR can trigger disorders in lipid metabolism, inflammation, and apoptosis in hepatic cells.
The pathological process of ALD is highly complex, encompassing more than just ER stress; it also involves the modulation of several other signaling pathways, including miRNAs.
Understanding the interconnections between these varied mechanisms and their ties with ER stress will be pivotal in future research endeavors.
Figure 3. Schematic representation of the pathways involved in ALD progression and the pivotal role of ER stress in cell apoptosis.
Table 1. Compounds that target ER stress-related pathways in ALD.
Implications of the Deprivation of Land Rights in the Public Interest on Citizens' Property Rights
The purpose of this paper is to explain the regulations for implementing land acquisition in the public interest, using a case study of land acquisition for the new State Capital in East Kalimantan. The author seeks to explain the form and mechanism of compensation for parties aggrieved by land acquisition under Law No. 2 of 2012. The method used is normative juridical analysis. The paper reaches two conclusions. First, the process of land acquisition for public purposes is carried out by the Head of the Regional Office of the National Land Agency and by the Head of the Land Office after receiving an assignment from the Head of the Regional Office of the National Land Agency, with all duties and responsibilities from the implementation stage to the delivery of results within the time limits specified in the Land Acquisition Law for the public interest and its implementing regulations. Second, assessment of the amount of compensation in land acquisition for the public interest is carried out plot by plot and covers the land, above-ground and underground space, buildings, plants, objects related to the land, and other losses that can be assessed.
INTRODUCTION
Revocation of land rights is a legal action taken by the government or an authorized institution to take back the ownership or use rights of land from its owner. Revocation of land rights can be carried out for several reasons, including use of the land for public or national interests, failure of landowners to fulfill their obligations as landowners or violation of applicable legal provisions, and land ownership disputes whose settlement requires revocation of land rights. In revoking land rights, the government or authorized institution must observe the applicable procedures and legal provisions so as not to cause unnecessary losses to landowners. Therefore, before revoking land rights, the authorities usually give prior notice or warning to the landowner.
In the case of revocation of land rights for the public interest, the government has established a public policy on its authority to revoke land rights for the public interest by issuing Government Regulation Number 19 of 2021 concerning land acquisition for the implementation of development in the public interest, which refers to Law Number 20 of 1961 concerning the Revocation of Rights over Land and the Objects on it. This law regulates a number of matters, including the reasons for revocation of land rights, the process of revoking land rights, the rights of landowners and of the parties involved in the revocation process, as well as sanctions for parties who violate the provisions of the law.
According to Government Regulation Number 19 of 2021 concerning Land Acquisition for the Implementation of Development in the Public Interest, the revocation of land rights is carried out by the President at the request of the National Land Agency (BPN), the minister of the agency that needs the land, and the Minister of Law and Human Rights. The regulation was created to establish conducive conditions for fulfilling the government's development agenda, accelerating the revitalization of rural agriculture, public housing, and infrastructure development that had been hampered by the land acquisition process. The issuance of Government Regulation Number 19 of 2021 drew criticism from various communities, who considered that the public did not understand it sufficiently as a result of a lack of outreach activities, resulting in erroneous understanding. The Department of Public Works, by contrast, responded favorably to the issuance of this regulation, explaining that the policy would facilitate and expedite the land acquisition process for infrastructure development.
Problem Formulation: What is the procedure for revoking land rights based on Government Regulation Number 19 of 2021 concerning Land Acquisition for Implementation of Development in the Public Interest for the protection of human rights? What is the responsibility of the government or authorized institution for losses from the revocation of land rights against land owners?
RESEARCH METHODS
This paper uses a descriptive analysis method, in which the author presents and describes the subject and object of the research based on the analysis and research conducted. The research uses a normative juridical approach, addressing the problem by examining the various legal aspects and provisions of positive law on the revocation of land rights in the public interest, and drawing on applicable statutory provisions, reference books and jurisprudence related to the topic of this paper. The types and sources of data used in this study consist of data obtained from various literary sources: reference books, print media, electronic media, journals, expert opinions and other sources of information.
The collected data were analyzed qualitatively, meaning a process of systematically searching for and compiling data obtained from the research literature by studying reading materials in the form of scientific books, newspapers, magazines and other library materials related to this paper.
Revocation of Rights on Land and Goods on it for the Public Interest
The conception of revocation of land rights is in fact already regulated in Law No. 5 of 1960 (the Basic Agrarian Law, UUPA), which states that "in the public interest, including the interests of the nation, the state and the common interests of the people, land rights may be revoked, by providing appropriate compensation according to the method stipulated in the law". The same point is made in Law No. 2 of 2012 on land acquisition for the public interest, which is considered capable of serving as a legal instrument that creates justice and upholds the human rights of citizens whose land is used for public purposes. Law No. 2 of 2012 also states that the procedures and steps for land acquisition for public purposes must be carried out in a transparent and open manner.
One example of land acquisition for public purposes in Indonesia is the acquisition in East Kalimantan for the new National Capital (IKN). The implementation process must be carried out in a structured, planned, transparent and open manner and be approved by all parties, especially the central government together with the government and the people of East Kalimantan. The central government is also obliged to explain transparently the impact of relocating the new State Capital to East Kalimantan on the welfare of local communities and of the nation. It is also worth noting the positive impacts of relocating the IKN, which are as follows. 1. Reducing gaps and equalizing development. Development in Indonesia is still unevenly distributed, and several islands outside Java still tend to lag behind, so one of the government's goals in moving the capital is to reduce inequality and promote equitable development between Java and Kalimantan. This plan will be balanced by development on the island of Kalimantan and by an economy, and other sectors, no longer centered only on Jakarta or Java. 2. Realizing a new National Capital (IKN) in accordance with national identity and an equitable society. Apart from being the center of government, the National Capital is also part of the nation's identity, so it is very important to choose a capital that reflects the character of the Indonesian nation. Kalimantan is considered highly representative of that character: it has abundant natural resources, is not prone to disasters, and remains relatively pristine. The transfer of the national capital is also meant to reflect Indonesia's growth and a more evenly distributed population: the infrastructure of the capital's buffer zone will grow along with the construction of the new capital, and a more even population distribution will follow from migration and urbanization as some urban residents move to Kalimantan around the new IKN. 3. More flexible activities and new business opportunities. With the relocation of the capital, Jakarta's traffic congestion will be reduced and land will be freed up. With less congestion, the activities of Jakarta's residents will be much more relaxed, and the vacant land can be used to open businesses for first-time entrepreneurs or for entrepreneurs wishing to open branches. Tourism in Jakarta will also benefit from the arrival of tourists once congestion is reduced. 4. Improvement of education and health facilities. With Kalimantan becoming the new IKN, its education and health facilities will also receive improvements from the government, because the area that becomes the capital will receive more attention in upgrading its facilities and infrastructure.
Several principles of land acquisition are regulated in Presidential Regulation No. 36 of 2005 jo. Presidential Regulation No. 65 of 2006 and Head of BPN RI Regulation No. 3 of 2007, which include the following: land acquisition for public purposes, such as the construction of a new National Capital City, must first ensure the availability of land and must not override the welfare of the local community of the East Kalimantan region; the basic rights of the community over land taken for the public interest must be guaranteed and protected; and the opportunity for land speculation to emerge must be minimized.
Once all the principles have been fulfilled, the procurement process for the new State Capital can be carried out. Based on Government Regulation No. 19 of 2021, there are several stages in carrying out land acquisition: planning, preparation, implementation and delivery of the results of the land acquisition project. The first stage is planning, in which land acquisition must be based on spatial planning and development priorities; in planning, the agency that requires the land must involve the ministry or other agencies in the land sector. Completion of the planning process is evidenced by the DPPT, the land acquisition planning document, whose contents include approval from the parties concerned, an agreement on compensation for the people whose land is taken over or who are otherwise harmed, the budget, the development plan and the like.
The next stage is preparation, in which the land acquisition planning document (DPPT) is submitted to the local government and the relevant technical agencies for verification. The verification process takes no more than 5 working days; once the document has been verified, the preparatory stage begins, in which public consultations are carried out to obtain approval and to ensure that no parties object. It should also be noted that the agency wishing to carry out land acquisition must submit an application for the implementation of land acquisition, completing several necessary documents, such as: the letter of determination of location; the DPPT; data on the parties entitled in the land acquisition; data on local communities affected by the land acquisition project; minutes of the land acquisition agreement; a statement letter for the installation of boundary markers on land parcels; a land release permit from the previous landowner; a statement letter on the readiness of land acquisition project funds; and a statement letter on compensation to the affected community.
If all the documents are complete, the agency can carry out the project. The last stage is the handover of the results of the land acquisition. Based on the regulation of the Minister of Agrarian Affairs and the Head of the Land Agency No. 19 of 2021, the results must be handed over within a maximum of 14 working days from the relinquishment of rights over the land acquisition object. The handover takes the form of minutes of the land acquisition results, together with the land certificates that have been submitted and the land acquisition implementation documents, which must also be integrated electronically.
The Government's Responsibilities to Communities Who Are Disadvantaged of Land Acquisition for Public Interests
The form of compensation for land acquisition is regulated in Law No. 2 of 2012, specifically in Article 40. Article 40 explains and repeatedly emphasizes that compensation for parties harmed by land acquisition must be given directly to the entitled party. However, if the entitled party is unable to attend, that party must give a power of attorney to another person, such as an heir or a person appointed through a power of attorney, and the attorney-in-fact may only receive power of attorney from one person entitled to compensation. The entitled parties are: holders of land rights; holders of management rights; nadzir of waqf land; customary land owners; local communities; parties who control state land in good faith; and owners of buildings on the land.
Article 41 paragraph 1 also explains that compensation for land acquisition is given to the entitled parties on the basis of the assessment results determined in the deliberations referred to in Article 37 paragraph 2. In addition, a Supreme Court decision explains that the party entitled to compensation must relinquish its rights and submit proof of control and ownership of the land object, which is transferred to the authority competent for land acquisition. This is done to minimize undesirable outcomes at any later time.
As explained above, the amount of compensation is determined based on the deliberations that have been carried out. If a party objects to or refuses the amount of compensation resulting from the deliberations, that party may submit an objection, and if the objection is ignored, the party may file a lawsuit with the court, as provided in Article 73 paragraph 1 of Presidential Regulation No. 71 of 2012 concerning the Implementation of Land Acquisition for Development in the Public Interest. Paragraph 2 of that article further explains that the District Court must decide on the form and/or amount of compensation within a maximum of 30 working days from receipt of the objection. Paragraph 3 explains that a party who objects to the decision of the District Court referred to in paragraph 2 may file a cassation appeal with the Supreme Court within a maximum of 14 working days, and paragraph 4 explains that the Supreme Court is required to render a decision within a maximum of 30 working days from receipt of the cassation request. The form of compensation is regulated in Article 36 of Law Number 2 of 2012, which states that compensation may be given in the form of: money; replacement land; resettlement; shareholding; or other forms agreed by both parties. The elucidation of Article 36 of Law Number 2 of 2012 explains that "resettlement" means the provision of replacement land to the entitled party at another location in accordance with the agreement reached in the land acquisition process, and that "compensation through share ownership" means participation through shares in the development activities for the related public interest and/or their management, based on an agreement between the parties.
The form and amount of compensation are further confirmed by Presidential Regulation Number 71 of 2012. In terms of definition, Article 1 of that regulation describes compensation as "proper and fair compensation to the party entitled in the Land Acquisition process". Under Article 65, the Appraiser is tasked with assessing the amount of compensation plot by plot, covering: land; above-ground and underground space; buildings; plants; objects related to the land; and other assessable losses. The forms of compensation that may be given in the land acquisition process for the public interest, based on Article 74 of Presidential Regulation Number 71 of 2012, are: money; replacement land; resettlement; and shareholding.
CONCLUSION
Based on the preceding discussion, the following conclusions can be drawn. Under Government Regulation No. 19 of 2021, there are four stages in land acquisition: planning, preparation, implementation and delivery of results. The implementation of land acquisition for public purposes must be carried out by the Head of BPN RI and is marked by the validity of a complete and verified DPPT. It should also be recognized that the land acquisition process involves many parties. In the land acquisition for the State Capital in East Kalimantan, for example, many indigenous peoples or local communities had their land, and even the buildings on it, cleared to make way for the IKN development project. Therefore, in order to deliver justice for all Indonesian people, a compensation program was created for parties harmed by the land acquisition, in which the assessment of the amount of compensation is carried out plot by plot, covering: land, above-ground and underground space, buildings, plants, objects related to the land, and other assessable losses.
Finally, the author would like to suggest that people whose land is to be used for the public interest be cooperative, because land ownership rights also have a social function, especially in view of the positive impacts of the land acquisition. The government, in turn, is expected to pay more attention to social welfare and to the principles of order and humanity, so as to minimize the problems of the land acquisition that is about to be carried out.
In writing this article, the author would like to express his sincere gratitude to Ms. Imelda as a lecturer in the Kapita Selekta Civil course for her guidance, support and invaluable contribution in the process of this research. The author also thanks family and friends who have
|
2023-07-11T16:09:38.874Z
|
2023-07-04T00:00:00.000
|
{
"year": 2023,
"sha1": "0ec2a6664ff38a01c160c7bfb6eddedbd7a4befe",
"oa_license": "CCBYNC",
"oa_url": "https://rayyanjurnal.com/index.php/aurelia/article/download/721/pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "fd8e3e6c40e00911fdd2a2cbbe8441451e03abac",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
}
|
212596858
|
pes2o/s2orc
|
v3-fos-license
|
Upcoding Fraud Discovery in the Economical Fields Using Block Chain Technology
Fraud detection is increasingly important nowadays, as fraud is observed in every sector, from banks to schools, so it is essential to discover such fraud so that losses can be minimized. Upcoding fraud is one kind of fraud in which a provider gains extra money by coding a service that was not actually provided, or in which a fraudster claims an insurance benefit again without the knowledge of the authorized person. Because of this, the legitimate claimant may lose the benefit even though he is the authorized person, since the fraudster has already claimed the money. With the help of artificial intelligence and data mining it is easier to recognize such scams, and with newer technologies like blockchain it is easier to detect fraud and also to trace its history. This paper concentrates on upcoding fraud discovery using blockchain technology.
I. INTRODUCTION
The term "fraud" refers to a person or thing intended to deceive others, typically by unjustifiably claiming or being credited with accomplishments or qualities [1]. Data released by the FBI state that the total cost of insurance fraud (non-health insurance) is estimated to be more than $40 billion per year. According to statistics from The Economic Times, about 85%-90% of life insurance frauds fall in the range of Rs. 1 lakh to Rs. 10 lakhs. These statistics are a grave alarm for the stewards of the insurance industry. Blockchain implementations can be a solution to the lack of interoperability within the insurance business, which reduces efficiency and also hinders progress towards the digital collaboration required to identify patterns, trends, and known actors in preventing fraud. These kinds of activities pose a risk both socially and economically. Using traditional data analysis techniques it is possible to find fraud, but the drawback is that the process is lengthy and time consuming. Fraud cases can be of many types: duplication of claims, or claiming of insurance by an unauthorized person. Fraud cases can be similar in substance as well as in the way they look [2]. Scams are more frequent with bank cards, misstated cheques, hacked accounts, claiming the insurance more than once, and so on. About 20% to 25% of cases involve some fraud, and of those about 10% suffer from upcoding fraud. This kind of fraud started in banks and schools, and because people were not aware of it, they lost their money unintentionally. Based on a neural network, the Falcon fraud assessment system from FICO was successfully deployed in the financial sector. Grocery stores, aware of fraud, try to avoid scams by installing digital closed-circuit TV combined with POS information, so that fraud can be prevented as far as possible. Transactions that take place over the internet face many more scams than in-store purchases do [3]. As fraud becomes more common in every field, it keeps pace with technology, which is updated day by day, so people need to be even smarter to prevent it from happening in their own lives. The main resources for dealing with fraud are knowledge discovery in databases, artificial intelligence, machine learning and statistics [4], and fraudsters likewise rely on a working knowledge of these technologies to commit their fraudulent activities.
II. RELATED WORK
Research on upcoding in the healthcare field is minimal, with only two primary documents: one from an IEEE International Conference on Data Mining and one from the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining [15]. Upcoding research has mainly been done in the area of supervised learning, with only two references using unsupervised, statistical and descriptive techniques, text mining, or graph algorithms for fraud detection. Suresh et al. [16] describe a patented fraud detection approach in which they created a hierarchical coded payment system, using an unspecified unsupervised approach to discover deviation from the norm among the various groups. They provided a general approach for upcoding detection by aggregating across or within each categorical attribute. Suresh et al. did not discuss the limitations of their technique and also lack evaluation and performance data on detection success rates. It is not clearly stated whether the system is useful for the healthcare industry or how it performs relative to other upcoding detection techniques and applications. Schonfelder et al. [17] discuss upcoding data to limit the number of checks and audits on genuine, non-fraudulent cases. This approach supports investigating an instance of upcoding if the cost of the investigation by the insurer is less than the costs recovered from following up with the upcoding investigation and possible prosecution. They used a logistic regression model to estimate the probability of upcoding using 8,500 inpatient claim bills. The authors did not report any model performance or error metrics, and an additional limitation is that no other models were used for comparison. Hsia et al. [18] describe an approach for payment-variation fraud detection to determine whether hospital or market-level factors influence these variations in payments or charges. Here the authors did not directly address upcoding fraud, but the data are very similar, and regression is used to explain charge differences in healthcare and market data. The drawbacks they pointed out are not capturing the available hospital and market variables and failure to account for all the charge differentials. They mention upcoding but state that the charge difference due to upcoding is not substantial enough to explain these charge variations. Chandola et al. [19] published a paper that analyzes healthcare data with social network analysis, text mining, and temporal analysis. They used a time-stamped dataset containing information on fraudsters, adopted with the help of the Texas Office of Inspector General's exclusion database. They emphasized the value of typical treatment information to link providers in order to examine abuses or errors in the treatment of particular diseases. Chandola explains the outcomes of the study without always connecting the different strategies and results. The strategies described in that research are close to the approaches used in other upcoding fraud detection papers.
III. UPCODING
Upcoding is an illegal activity in which a provider bills an insurance company or public insurer, with the help of CPT codes, for a more expensive service than was delivered, or re-claims insurance for a service whose claim has already been approved, without reference to the authorized person [5]. This act should be prevented so that the rightful person can receive the insurance benefit. It is carried out with the help of a medical biller, and it not only keeps the insurance claim alive but also allows earning even more money than could legitimately be claimed.
Upcoding is very costly, since the inflated amounts are ultimately paid by the insurance payers. To obtain a payout, fraudsters produce incorrect details with fake documents, which indirectly affects the future capacity of the insurance scheme. Because different illnesses need to be encoded, a coding system known as the International Classification of Diseases (ICD) was created by the WHO. ICD-10 is the most recent version, as ICD-9 has been discontinued since October 2015. CPT is a coding system of detailed terms and codes that are mainly used to describe the clinical services provided by health professionals. It captures information such as the physician's role and is used to bill public or private insurance policies [5] [6]. For work on upcoding cases, the ICD-10 codes most heavily used by insurance companies are listed in the table below [7].
IV. FRAUD DETECTION USING BLOCKCHAIN
A blockchain is a decentralized, distributed and public digital ledger that is used to record transactions across many computers so that any recorded entry cannot be altered retroactively without altering all subsequent blocks. Knowledge of a previous claim makes fraud easier, so the insurers' sensitive data should be protected in a way that does not allow fraud to happen. A blockchain, as a ledger for storing transactions, can provide that protection: once data are recorded in the ledger they are permanent and cannot be changed. The idea is that once the insured person claims the insurance, the claim is stored on the blockchain, so that when another person tries to claim the same insurance, no change to the ledger is allowed. All information recorded in the ledger is encrypted and every event is recorded, meaning it cannot be altered. By using blockchain, insurers could create records at various points in the claims process, resulting in an immutable, auditable record of all claims activities, which could be reviewed by all parties: customers, brokers, insurers, co-insurers and reinsurers, including the regulators. This could lead to lower transaction costs, lower transaction risks and trustless computation. Such an approach could help further reduce, if not entirely prevent, fraud.
Fig 1: Collaborative and streamlined claims processing on the Blockchain
In the figure shown, all the information related to the insurance claim, such as account information, claim history, reference data and identity, is placed on the blockchain. The reason for placing this information on the blockchain is to prevent alteration by a fraudster. The authorized persons who have access to the blockchain are verified by the insurance provider network and are then provided with a confidential key. Every insured person is given a unique FingerPrintID. Whenever the insured wants to make a claim, the FingerPrintID is mandatory, and the insurance provider identifies the claimant with its help. The data are encrypted using a hash code, for which reverse hashing is not possible. Whenever the data present on the blockchain are accessed, the information is sent to every participant, both the insurance provider and the claimant. So if a fraudster wants to reclaim the policy, they are not allowed to access the blockchain without the key, and as soon as the data are accessed a message is sent to the blockchain participants with the help of the Internet of Things, so that the legitimate claimant learns that the record was accessed without their intention. When the insurance provider receives and processes a claim request, the record is placed on the blockchain so that it cannot be altered, making it impossible for a fraudster to reclaim the insurance. Thus, upcoding fraud is avoided using blockchain. For an insurance provider, detecting fraudulent insurance claims is a crucial responsibility; ultimately 10% to 20% of insurance coverage faces fraudulent claims. So there is a need to introduce blockchain technology, which helps to reduce fraudulent cases. Blockchain technology not only reduces fraudulent cases but also reduces the cost of fraud discovery, since no external staff need to be hired. Combined with the Internet of Things, it is also useful for tracking fraud information such as location, time and date, and it can provide proof of fraud. Supervised fraud detection is a technique in which one variable is used for input and another for output, and a mapping from input to output, Y = f(X), is learned. The main aim of this technique is to flag the case where a fraudster tries to claim insurance that has already been approved: upcoding fraud is avoided using this mapping, since for a new input X it predicts the outcome Y. Because the blockchain does not allow alteration, upcoding fraud is not possible. In the blockchain, when a claim is approved it is placed on the chain and a FingerPrintID is generated using hashing, for which reverse engineering is not possible. The insured can check the status of their claim online using the FingerPrintID. Cryptographic hashing generates a different hash code for each claim approval. Popular supervised detection algorithms are linear regression and neural networks. Using the FingerPrintID, it is easy for the insurance provider to monitor the number of approved and pending claims. The main intention of using blockchain is to prevent alteration of the insurance claim and duplication of claims without the intended person.
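The mechanics described above, an append-only hash-chained ledger, a hashed FingerPrintID, and rejection of a duplicate claim, can be sketched in a few lines of Python. This is a single-machine illustration only, not the system proposed in this paper: the class and function names (ClaimLedger, fingerprint_id, add_claim) are hypothetical, and a real deployment would run on a distributed blockchain network.

```python
import hashlib
import json
import time

def fingerprint_id(claimant_name: str, policy_no: str) -> str:
    """Derive a unique, non-reversible FingerPrintID for an insured person."""
    return hashlib.sha256(f"{claimant_name}|{policy_no}".encode()).hexdigest()

class ClaimLedger:
    """Append-only, hash-chained record of approved claims (single-node sketch)."""
    def __init__(self):
        self.blocks = []  # each block stores its own hash and the previous block's hash

    def add_claim(self, fp_id: str, cpt_code: str, amount: float) -> dict:
        # Reject a re-claim of an already approved (FingerPrintID, CPT code) pair.
        if any(b["fp_id"] == fp_id and b["cpt_code"] == cpt_code for b in self.blocks):
            raise ValueError("Upcoding attempt: this claim is already approved")
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"fp_id": fp_id, "cpt_code": cpt_code, "amount": amount,
                   "timestamp": time.time(), "prev_hash": prev_hash}
        payload["hash"] = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.blocks.append(payload)
        return payload

    def verify(self) -> bool:
        """Detect any retroactive alteration by re-checking the hash chain."""
        prev = "0" * 64
        for b in self.blocks:
            body = {k: v for k, v in b.items() if k != "hash"}
            if b["prev_hash"] != prev or \
               hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != b["hash"]:
                return False
            prev = b["hash"]
        return True

ledger = ClaimLedger()
fp = fingerprint_id("Jane Doe", "POL-001")
ledger.add_claim(fp, "99215", 180.0)      # first claim is accepted
# ledger.add_claim(fp, "99215", 180.0)    # a second identical claim would raise ValueError
print(ledger.verify())                     # True as long as no block has been tampered with
```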
It is advantageous to digitize the claim data so that, once uploaded, it can be approved directly through the blockchain, as the network consists of authorized persons and delegates who are familiar with insurance. Usually, communication between the doctor and the insurance carrier is required. Since the source data are otherwise unprotected and easily reachable by everybody, there is a need to protect them from fraud, and blockchain helps in this respect: once the data are recorded they cannot be altered. Hybrid learning approaches best suit these needs [10]. UPCODING RECOGNITION: Using blockchain technology it is easy to verify whether a record has been altered or not, and with the help of the Internet of Things it is quite easy to track even the location of the fraudster. A clinical graph is used to represent the text connected to an insurance claim, and this information is then converted into its equivalent medical diagnoses with the help of ICD-9 or ICD-10 codes, as well as procedure groups such as CPT. Diagnosis Related Groups are formed by the combination of these diagnosis and procedure groups.
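To complement the ledger-based control, the supervised mapping Y = f(X) mentioned earlier can be illustrated with a toy classifier. The feature names and the handful of labelled claims below are purely hypothetical and are meant only to show the shape of such a model, here using logistic regression (one of the models cited in the related work), not to reproduce any result from this paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per claim:
# [billed_amount, visit_duration_minutes, n_claims_last_year, code_severity_level]
X = np.array([
    [120.0, 15, 1, 2],
    [950.0, 10, 6, 5],   # short visit billed at a high-severity code
    [200.0, 30, 2, 3],
    [870.0, 12, 7, 5],
    [150.0, 20, 1, 2],
    [990.0,  8, 9, 5],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = flagged as suspected upcoding

model = LogisticRegression().fit(X, y)          # learn Y = f(X)
new_claim = np.array([[910.0, 9, 8, 5]])
print(model.predict(new_claim), model.predict_proba(new_claim)[0, 1])
```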
V. OVERHEAD ANALYSIS
The core components introduced by BAD on top of the traditional Bitcoin protocol are the broadcast of new forks and their orphaned blocks, as well as the detection of malicious transactions on newly received blocks. In this section, we analyze the introduced bandwidth overhead to show that our solution is scalable and therefore deployable within the standard Bitcoin network. In particular, the results of our analysis show that our system has negligible bandwidth consumption in comparison with that consumed by standard nodes. A. Bandwidth overhead. We analyzed the overhead introduced by our solution in the worst-case scenario, i.e., the entire worldwide Bitcoin fork activity affecting one single node, named NX. Our overhead is then measured as the amount of bandwidth that NX consumes because of the fork broadcast introduced in BAD. To this end, and to be grounded on real data, we considered the maximum number of orphaned blocks discarded by the Bitcoin community during the last year. We are interested in the total number of orphaned blocks since it includes those used to attack the victims. Moreover, we assume this number to have little variance, since a smart adversary, in order to remain hidden in the network, would not create an abnormal number of orphaned blocks. A more abstract, and less constrained, analysis is given in Section VI-B. To investigate BAD's overhead, we then modeled the P2P network surrounding our NX node. By construction, nodes in the Bitcoin network form a random graph, the randomness being due to the choice of outgoing connections. In the vanilla Bitcoin protocol, every node tries to keep a minimum of 8 outgoing connections at all times. However, it has been observed that, on average, a Bitcoin node has 32 outgoing connections. Furthermore, the total number of orphaned blocks discarded during the last year (2016) was 141, with a maximum block size of 0.993201 MB. Accordingly, in our worst-case scenario, we consider all of those 141 orphaned blocks (of maximum size) to be collected and re-broadcast by NX. To broadcast all of these blocks with their transactions, NX would send broadcast messages to its neighbors, which sum up to a total size of 32 × 0.993201 MB × 141 = 4.481 GB per year. Note that the total number of orphaned blocks is independent of the node's bandwidth. Hence, our worst-case scenario can be applied to any node: from lightweight SPV clients to relay nodes or miners. Moreover, the total per-node monthly upload bandwidth may vary according to node capabilities and ISP resources; it can start at 150 GB/month (which is the minimum recommended upload bandwidth to run a Bitcoin full node) and reach values of 300 GB/month or more, where m denotes the average bandwidth consumption of a node per month. Fig. 4 shows the maximum overhead introduced in the case of 150 GB of upload bandwidth consumption, which is 0.248%. The result is a bandwidth overhead of only 0.248%, which demonstrates that BAD is easily deployable in the standard Bitcoin network.
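The arithmetic quoted above can be reproduced with a short back-of-the-envelope calculation; this is only a sanity check of the cited figures (141 orphaned blocks, 0.993201 MB maximum block size, 32 connections, 150 GB/month upload), not the authors' original analysis script.

```python
# Back-of-the-envelope check of the bandwidth overhead figures in this section.
orphaned_blocks = 141          # orphaned blocks discarded in 2016
max_block_mb = 0.993201        # maximum block size in MB
connections = 32               # average outgoing connections per node

# Extra traffic caused by re-broadcasting every orphaned block to every neighbor
# (decimal GB, i.e. 1 GB = 1000 MB, as in the figure quoted above).
yearly_broadcast_gb = connections * max_block_mb * orphaned_blocks / 1000

upload_gb_per_year = 150 * 12  # minimum recommended upload bandwidth: 150 GB/month

print(f"extra broadcast traffic: {yearly_broadcast_gb:.3f} GB/year")
print(f"overhead: {100 * yearly_broadcast_gb / upload_gb_per_year:.3f} %")
```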
Memory Utilization
Memory usage of the proposed MobiChain is shown in Fig. 3 under three different block sizes, i.e., one transaction per block, three transactions per block, and six transactions per block, where the content of each transaction is fixed at 20 characters. Specifically, if we store 3 or 6 transactions in a single block, the memory usage can be reduced by 33% or 55%, respectively.
Figure 3: The memory utilization when the number of blocks increases.
Memory Utilization = cb + ct·T + cd·D, (1) where cb, ct, and cd are constants representing the size of the block information, the size of one transaction, and the size of one digit, respectively. In (1), T is the number of transactions in a single block, and D is the number of digits of the block number.
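Equation (1) can be written as a small helper function. The constants below are placeholders chosen purely for illustration; the paper does not report the actual values of cb, ct and cd, so the printed numbers will not match the 33%/55% reductions quoted above.

```python
# Equation (1) with placeholder constants (c_b, c_t, c_d are assumptions, not paper values).
def memory_utilization(T: int, D: int, c_b: float = 80.0, c_t: float = 20.0, c_d: float = 1.0) -> float:
    """Memory per block: fixed block-info cost + per-transaction cost + per-digit cost."""
    return c_b + c_t * T + c_d * D

# Packing more transactions into one block amortises the fixed per-block cost c_b.
for tx_per_block in (1, 3, 6):
    blocks_needed = 6 // tx_per_block                      # store 6 transactions in total
    total = sum(memory_utilization(tx_per_block, len(str(n + 1))) for n in range(blocks_needed))
    print(tx_per_block, "tx/block ->", total, "units for 6 transactions")
```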
Chain Verification Process
In our test, we create 7,156 blocks and use the mobile phone to mine these blocks. In this case, it took 3.5 days to execute the Proof-of-Work process for all of the 7,156 blocks. The histogram of these blocks is shown in Fig. 7, which can be described as a gamma long-tail distribution. The analysis is restricted to show only 0 to 100 seconds. According to Fig. 4, 88.06% of blocks need 3 to 30 seconds to perform the Proof-of-Work process, while only 4.79% take longer than 100 seconds. At the peak points, 23.23% of the total blocks take 5 to 7 seconds. In our trials, 803 hashing iterations are executed per second, and therefore the peak points use around 4,015 to 5,621 hashing iterations before meeting the condition. The execution time and energy consumption of the chain verification process are presented in Fig. 5 and Fig. 6, respectively.
The execution time and energy consumption are measured from the beginning of the chain verification procedure until its completion. For multiple threads, the measurement is taken from the beginning until the last thread finishes. Two types of experiments were conducted, covering both single-thread and multi-thread executions, each performed with one, three, and six transactions per block. Each block contains 20 random characters. As expected, as the number of blocks in the chain increases, the execution time and energy consumption increase accordingly.
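The hashing iterations counted above come from a standard Proof-of-Work loop, which in its simplest form looks like the sketch below. The difficulty target and the block payload are placeholders and are unrelated to the difficulty actually used in the MobiChain experiments.

```python
import hashlib

def mine(block_payload: str, difficulty: int = 4):
    """Increment a nonce until the block hash meets the difficulty condition."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_payload}|{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest   # nonce == number of hashing iterations used
        nonce += 1

iterations, block_hash = mine("20 random characters go here")
print(iterations, block_hash)
```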
VI. CONCLUSION AND FUTURE WORK
Upcoding fraud causes heavy monetary losses to the insured and also reduces the amount that the insurance firm can pay out to legitimate claimants. There is therefore a need to develop technology to spot this fraud so that every field can benefit and be aware of this type of fraud. Blockchain is an emerging technology for identifying this kind of fraud and mitigating insurance claim fraud. It costs less when compared with human effort, because identifying fraud otherwise requires hiring staff externally, whereas with blockchain the budget is minimal. Machine learning can also be used to reduce fraud in insurance claims: linear regression, mixed logit, and Bayesian models are the supervised methods used for upcoding fraud detection, while a combination of subgroup discovery through decision trees and Fisher's Exact Test has been applied among the unsupervised learning methods. The scope of this fraud can also be detected with the help of machine learning as well as the Internet of Things.
|
2019-09-17T01:09:57.486Z
|
2019-08-10T00:00:00.000
|
{
"year": 2019,
"sha1": "f8ba79598832746838f983a0c40b4f4e7861e006",
"oa_license": null,
"oa_url": "https://doi.org/10.35940/ijitee.j9536.0881019",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f68c0110f29956bd690ef1695049703d59f60dd1",
"s2fieldsofstudy": [
"Business",
"Economics",
"Computer Science"
],
"extfieldsofstudy": []
}
|
92433309
|
pes2o/s2orc
|
v3-fos-license
|
Field Evaluation of LED Fluorescence Microscopy for Demonstration of Trypanosoma brucei rhodesiense in Patient Blood
Diagnosis of Trypanosoma brucei rhodesiense human African trypanosomiasis requires demonstration of parasites in body fluids by microscopy. The microscopy methods that are routinely used are difficult to deploy in resource-limited settings due to practical challenges, including lengthy and tedious procedures, and the need for specific equipment to centrifuge samples in glass capillary tubes. We report here on a study that was conducted in a rural region of eastern Uganda to evaluate new methods that take advantage of a field-deployable LED fluorescence microscope. Examination of acridine orange-stained blood smears by LED fluorescence microscopy resulted in a diagnostic accuracy that was similar to that of routine methods, while the time needed to identify parasites was shortened significantly. These findings make these new microscopy methods attractive alternatives to procedures that are currently used for diagnosis of T. b. rhodesiense human African trypanosomiasis.
Introduction
Human African trypanosomiasis (HAT, also known as sleeping sickness) manifests in two forms: as a chronic disease caused by Trypanosoma brucei gambiense that is prevalent in central and western Africa and an acute T. b. rhodesiense form in eastern and southern Africa [1]. These diseases are of importance in rural sub-Saharan Africa, although a progressive decrease in their incidence has put them in the spotlight for elimination as a public health problem [2]. Control of HAT largely relies on screening and treatment of infected individuals. For this, accurate diagnosis plays a central role, since case definition is based on demonstration of trypanosomes in body fluids, including blood, lymph node aspirates, and cerebrospinal fluid by microscopy [3]. In the absence of any serological test to screen for T. b. rhodesiense HAT, parasitological methods based on microscopy are solely relied upon to screen all individuals presenting with suggestive symptoms, in order to conclusively diagnose the disease, posing a huge burden to the strained healthcare system. As a result, most patients presenting with a fever are often tested for malaria, for which rapid diagnostic tests are available, and rarely for HAT, with the risk that many HAT cases can be missed. Current parasitological techniques are lengthy, elaborate and tedious, exerting undue strain on the usually understaffed rural health facilities where HAT is typically endemic. It is therefore pertinent that current microscopy methods be simplified in order to reduce the time taken to diagnose a single case.
A simple and rapid approach to reduce the laboratory workload and to increase the sensitivity of direct smear microscopy may now be possible through the introduction of a new fluorescence microscope based on the use of ultra-bright light emitting diodes (LED) developed jointly by Zeiss (Germany) and the Foundation for Innovative New Diagnostics (FIND), the Primo Star iLED [4]. By using an LED light source, Zeiss reduced the instrument's power consumption and increased the lifespan of the light source (up to 10,000 hours), making it much more affordable than classical fluorescence microscopes. The microscope can also be easily switched from bright-field light mode to fluorescence mode, and can be operated using a battery or solar power. For examination under fluorescent light, slides are stained with acridine orange (AO), while conventional light microscopy relies on Giemsa stain. AO staining has previously been shown to enhance parasite detection in the blood of HAT patients using the quantitative buffy coat (QBC) technique [5]. However, the use of QBC has been limited due to practical challenges in rural settings.
The performance of LED fluorescence microscopy methods to detect trypanosomes in thin or thick smears prepared with either whole or lysed blood was evaluated earlier using experimental infections at laboratories in the Democratic Republic of the Congo (DRC) and Uganda [6], and on T. b. gambiense HAT patients in the DRC [7]. However, the clinical performance of these new methods for the T. b. rhodesiense form of the disease has not been evaluated.
This study was carried out to compare the accuracy of LED fluorescence microscopy with routine parasitological methods in a district of eastern Uganda that is endemic for T. b. rhodesiense HAT. A previously described concentration method that differentially lyses red blood cells prior to preparation of smears [6] was also evaluated.
Study Sites and Ethical Considerations
This study was carried out at Lwala mission hospital, Kaberamaido district, in the T. b. rhodesiense HAT endemic region of eastern Uganda. The hospital has been treating the highest number of T. b. rhodesiense HAT cases in the country.
The study was carried out in conformity with the Declaration of Helsinki and guidelines for research involving human subjects outlined by the Uganda National Council for Science and Technology (UNCST). Ethical review was carried out by the Vector Control Division (Ministry of Health) followed by approval of the study by the UNCST.
Study Design and Execution
This was a prospective case-control study. All participants were enrolled after written informed consent in the presence of independent witnesses. Twelve ml of venous blood were collected into a heparinized Vacutainer from each participant and labelled with a blinding code by a nurse before handing over to two laboratory technicians for analysis. The first technician performed the haematocrit centrifugation technique (HCT [8], also known as capillary tube centrifugation), which was the most sensitive technique available on site, and prepared blood smears as described below. The second technician prepared and examined wet smears and performed the red blood cell (RBC) lysis and concentration procedure, as described below. Cases were defined as individuals with signs/symptoms suggestive of HAT, in whom trypanosomes were demonstrated by the HCT. A lumbar puncture was performed on all cases, and the cerebrospinal fluid (CSF) examined by microscopy to determine the stage of disease, a requirement to guide the treatment to be used [9]. All confirmed HAT cases were treated according to the national guidelines. Controls were defined as individuals presenting at the hospital with neither symptoms suggestive of HAT, nor history of HAT, and for whom the HCT was negative.
Preparation of Microscopy Slides
Microscopy slides were prepared by two technicians operating independently of each other, and uninformed of results obtained by the clinicians. The first technician prepared 6 thick smears using 5 μl blood for each, for subsequent staining as described below. The second technician prepared wet smears using 5 μl blood and examined them immediately for a duration of 10 minutes, or until the first parasite was seen. For positive samples, the time taken to see the first trypanosome was recorded. Slides were scored as negative if no trypanosomes were detected in 10 minutes. The second technician also prepared three 15-ml Falcon tubes per participant, each containing 3 ml of blood, and then treated them with 9 ml RBC lysis solution (Qiagen) as previously described [6]. After the lysis procedure, the samples were centrifuged as described in [6], the resultant cell pellets each re-suspended in 40 μl of supernatant, pooled, and 3 thick as well as 3 thin smears prepared using 20 μl of the suspension. To further blind the samples, all slides were labelled with randomly generated numbers, from a coding list specifically prepared for the study. The slides were stained for 3 (AO) or 45 minutes (Giemsa) as previously described [6], rinsed and dried, and subsequently stored in opaque slide boxes.
Reading of Stained Slides
Four stained slides from each participant were examined under the microscope by one technician soon after processing: 1 thick blood smear stained with Giemsa, 1 thick blood smear stained with AO, 1 thin blood smear stained with AO after RBC lysis, and 1 thick blood smear stained with AO after RBC lysis.
Statistical Analysis
Data analysis was done using IBM SPSS version 22 statistical software. Deviation from normality was tested using the D'Agostino-Pearson normality test. Numerical variables were summarized using mean and standard error of the mean. Comparison between the different methods of slide preparation was done using a one-way ANOVA followed by Tukey's multiple comparison test, set at a significance level of 5%.
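The comparison described above was performed in SPSS; for readers who prefer an open-source route, a roughly equivalent analysis could be run in Python as sketched below. The file name and column names are hypothetical placeholders, not the study dataset.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical layout: one row per slide, columns "method" and "time_to_first_tryp_s".
df = pd.read_csv("detection_times.csv")

groups = [g["time_to_first_tryp_s"].values for _, g in df.groupby("method")]
f_stat, p_value = stats.f_oneway(*groups)        # one-way ANOVA across slide methods
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's multiple comparison test at a 5% significance level
tukey = pairwise_tukeyhsd(endog=df["time_to_first_tryp_s"], groups=df["method"], alpha=0.05)
print(tukey.summary())
```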
Results
Sixty eight participants (41 cases and 27 controls) were included in the study between April and September 2011 (35 participants), and between December 2012 and June 2013 (33 participants). The male:female ratio for the cases was 0.28, with a mean age of 28.2 ± 3.1 years (Table 1).
Discussion
Diagnosis of T. b. rhodesiense HAT is complicated by the absence of a screening test. There is therefore a justifiable need to simplify and improve methods for trypanosome detection. While all the smear microscopy methods that were evaluated here performed equally in terms of diagnostic sensitivity and specificity, the analytical sensitivity was highest with thin smears stained with AO after lysis and concentration, which is consistent with results obtained using experimental infections [6]. This is also consistent with the finding that this method was the most sensitive smear microscopy method to diagnose T. b. gambiense patients [7], who present a better dynamic range to detect subtle sensitivity differences, since they exhibit lower densities of parasites in the blood than T. b. rhodesiense patients [9]. Similarly, the observation that thin smears stained with AO after lysis and concentration was associated with the shortest time to detect parasites was also reported using experimental infections [6]. AO is also more attractive than Giemsa due to a faster procedure for staining slides; while Giemsa staining usually requires incubation for 20 to 50 minutes, AO staining can be achieved in only 3 minutes.
Although in the present study LED fluorescence microscopy was not found to provide any advantage over HCT in terms of diagnostic accuracy, it could facilitate access to parasitological testing in peripheral sites, where equipment to centrifuge glass capillary tubes is often missing. In addition, unlike with methods relying on fresh samples such as HCT, samples prepared onto slides can be stored and sent to reference laboratories. Indeed, we have piloted a referral system in which a few cases, who would not have accessed Lwala mission hospital, have been detected by checking smears sent to the hospital (unpublished data).
Another advantage would be in a situation when the laboratory staff cannot immediately examine fresh blood to detect live trypanosomes, as we found that examining stained smears was as sensitive as other methods using fresh samples.
Since this study was completed, the LED fluorescence microscopy methods that were evaluated here have been introduced in five health facilities located around a conservation area in northern Malawi, where they have been used to enhance diagnosis of T. b. rhodesiense HAT, in combination with other routine methods. In addition, these methods have been introduced in multiple facilities in Uganda [10], Guinea, Chad, the Democratic Republic of the Congo, the Republic of the Congo, Angola, South Sudan and Nigeria, where they have contributed to improved diagnosis of T. b. gambiense HAT. Finally, considering the current low prevalence of T. b. rhodesiense HAT and trend toward integration of disease control activities with general health services, the microscopy methods evaluated in the present study may also provide useful options for diagnosing other infections. In particular, LED fluorescence microscopy has been shown to be more sensitive than conventional Ziehl-Neelsen microscopy and has been recommended by the World Health Organization for diagnosis of tuberculosis [11]. LED fluorescence microscopy using acridine orange has also been shown to be an accurate and fast method to diagnose Plasmodium falciparum infections [12]. Other parasites have been reported to be easily visualized by fluorescence microscopy after staining with acridine orange, such as Trichomonas vaginalis and Leishmania donovani [13] [14]. Thus, the utility of LED fluorescence microscopy in the diagnosis of other infections would deserve to be investigated further in order to determine how these novel tools could be best used to integrate diagnosis of multiple diseases in resource-limited settings.
Conclusion
While examination of acridine orange-stained blood smears by LED fluorescence microscopy did not result in any observable improvement in accuracy in comparison to the routine microscopy methods used to diagnose T. b. rhodesiense HAT, there was a significant reduction in the time needed to identify parasites when using these new methods. These methods could therefore be considered as alternatives to current diagnostic procedures for T. b. rhodesiense HAT, as they would improve throughput and free some time for the technicians to perform other routine tasks in their typically high-workload laboratories.
Since acridine orange is known to stain various parasites, the fluorescence microscopy methods that were evaluated here could also be of interest for use in diagnosis of other parasitic diseases.
Stained slides were not examined by the technician who prepared the slides, but by the other technician. The time taken to detect the first trypanosome was recorded. Any slide in which no trypanosome was detected within 10 minutes was considered negative. The remaining slides were stored as back-up. Slides stained with Giemsa were examined by bright field microscopy, while slides stained with AO were assessed by fluorescence microscopy. A Primo Star iLED microscope (Zeiss) was used for both bright field and fluorescence microscopy. Results were scored based on the number of parasites observed in 5 fields under a 400× magnification, as + for 1-4 trypanosomes observed in a slide, ++ for 5-9 trypanosomes, or +++ for 10 or more trypanosomes. After examination, the slides were stored in the slide box and kept in a cool, dry place, for monitoring and independent assessment.
Figure 1. Parasite density compared across different microscopy methods. AO: acridine orange; tryp.: trypanosome. The total height of each bar does not always correspond to 41 patients, as some cases could not be tested with all the methods. For HCT, the parasites recorded were those observed per capillary tube, rather than per field as has been presented for all techniques for simplicity.
Figure 2. Average time taken to see the first trypanosome on a slide across different slide preparation methods. Bars indicate standard error of the mean; AO: acridine orange.
The sensitivity of all the parasitological methods performed in the study was 100%, with all the 41 cases correctly diagnosed as positive. A large number of cases (32, 78.0%) were classified as late stage, with trypanosomes detected in the CSF of 29 (70.3%) of them. The time taken to detect the first trypanosome differed between methods (Figure 2). When thin and thick smears stained with AO after lysis and concentration were compared, time to detection of parasites in AO stained thick smears was longer (on average 241.6 ± 20.1 sec). It took longest to detect trypanosomes from Giemsa stained thick smears (294.2 ± 21.1 sec). Interestingly, there was no significant difference (p > 0.05) in the time taken to detect trypanosomes in thick AO stained smears of whole blood (273.1 ± 21.7 sec) as compared to wet smears (269 ± 23.1 sec). When the average time taken to see the first trypanosome on a slide was correlated with parasite density, there was an inverse relationship that was strongest with thin smears stained with AO after lysis and concentration (Pearson correlation −0.745, p < 0.0001).
Table 1. Characteristics of T. b. rhodesiense HAT cases. * indicates significant differences between disease stages. SEM: standard error of the mean; CSF: cerebrospinal fluid; WBC: white blood cell; HCT: haematocrit centrifugation technique; tryp: trypanosome.
|
2019-02-14T04:52:44.389Z
|
2019-01-30T00:00:00.000
|
{
"year": 2019,
"sha1": "1ec18973790e3697a2ce8460c56b8aeeae177311",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=90320",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "1ec18973790e3697a2ce8460c56b8aeeae177311",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
}
|
226045345
|
pes2o/s2orc
|
v3-fos-license
|
Seabed Topography Changes in the Sopot Pier Zone in 2010–2018 Influenced by Tombolo Phenomenon
Bathymetric surveys of the same body of water, performed at regular intervals, apart from updating the geospatial information used to create paper and electronic maps, allow for several additional analyses, including an evaluation of geomorphological changes occurring in the coastal zone. This research is particularly important in places where the shape of the coastal zone has been violently disturbed, including by human activity. Tombolo is such a phenomenon and it dynamically shapes the new hydrological conditions of the coastal zone. Apart from natural factors, it may be caused by the construction of hydrotechnical facilities in the littoral zone. It causes a significant disturbance in the balance of the marine environment, resulting in the bottom accretion and dynamic changes in the coastline. This has been the case since 2010 in Sopot, where the rapidly advancing tombolo is not only changing environmental relations but also threatening the health-spa character of the town by stopping the transport of sand along the coast. This paper analyses changes in seabed shape in the pier area in Sopot between 2010 and 2018. In the analysis, both archival maps and bathymetric surveys over a period of 8 years were used; based on these, numerical bottom models were developed and their geospatial changes were analyzed. The results showed that changes in the seabed in this area are progressing very quickly, despite periodic dredging actions organized by administrative bodies.
has become an additional and highly effective measurement method enabling the acquisition of data on the relief and land cover in the coastal strip [22,23]. Please note that the selection of the vessel and its equipment depends on several factors, such as the size of the probed area and distance from the shore, depth, and maneuverability. Therefore, unmanned vehicles [24] can be used with success for measurements in very shallow waters, close to the shore where maneuverability is limited, like in marinas between moored vessels. Their small size allows the use of a singlebeam echosounder, even though multibeam echosounders dedicated to unmanned vehicles are also available. Three major methods of bathymetric measurements are currently available for very shallow waters: 1. Using a typical hydrographic vessel with a very low draught of less than 30 cm (used in the research, Figure 1a,b). 2. Using an unmanned, directly controlled vessel (from the telemanipulator), which means that they are not able to navigate independently on survey profiles. 3. Using an unmanned vessel capable of automatically navigating along a set route. Another key element in the analysis of geospatial changes in the coastal zone includes modeling of geomorphological processes in coastal line changes [25], including those with an anthropogenic nature [26].
In the literature on the subject, the process of modeling three-dimensional sediment transport has been described in detail and extensively in many publications [27,28]. It includes both the impact on the sea shore, as well as the influence on infrastructure and marine structures [29,30]. The diverse research approach to the authors' problem results from different methods of modeling taking into account: evolution [31], fixed profile [32], coupling area-line [33], area-crossshore-alongshore transport coupling [27,34], diffusion [30], the wet-dry [35], hybrid [36], line numerical [37] and the cut-cell models [35]. It should be noted with reference to [38] that general wave-average area models may have merits to simulate shoreline evolution for arbitrary geometry. However, the difficulty in treatment of shoreward boundary has hindered the use of this type of approach for simulation of shoreline evolution. The major defect is that it is impossible to simulate morphologic change above the wave-average water level, although it is more or less hidden for macro-tidal environment.
For the sake of navigation safety, marine navigation areas require periodic bathymetric surveys with varied frequency. Navigation areas available for large vessels should be surveyed more frequently than those where only small vessels are operating. This is due to the limited vessel maneuverability, the small reserve under the bottom, and the more serious consequences of a potential collision. When vessels are maneuvering in harbor basins, additional lateral heels occur which reduce the depth reserve under the keel, increasing the demand for reliable bathymetric information in these basins. Based on this, institutions legally responsible for maritime safety determined the frequency of bathymetric surveys both internationally [39] and nationally [40,41]. The lowest measurement frequency has been determined for waters of the exclusive economic zone from the 10-m depth contour or 5 km from the coastline. It is typically every 10-20 years. The highest frequency (every 1-2 years) was established for harbor areas. Special regulations may apply to bathymetric surveys within the framework of inspection of underwater hydrotechnical structures [42]. Here, the greatest frequency (less than one year) was determined for water bodies and offshore extensively operated structures, where the bottom may become significantly shallower. This also applies to maritime structures being a part of passenger, ferry and fuel terminals.
Another area of application for bathymetric surveys includes geomorphology, which studies the relief of the Earth's surface and the processes creating and transforming them [43,44]. While in both cases it is important to determine the bottom shape, the intended use of this information may vary. Navigation maps provide up-to-date information on depths to guarantee safe navigation or for dredging works. Geomorphological surveys are primarily focused on the assessment of the Earth's surface relief changing under the influence of natural and anthropogenic processes, changes in morphogenetic processes in the coastal zone against the background of global changes and accelerated rise in ocean levels globally, and the monitoring of the natural environment of selected geo-ecosystems [45].
Morphological changes in the coastal zone and the coastline course are most often due to natural factors [46][47][48]. However, they can also be the result of human activity related to direct interference in the environment, as is the case here, where the marina construction caused a slowdown in the sediment transport along the coast, initiating the process of creating a tombolo. The transport of sediments in shallow zones is the main mechanism influencing erosion, beach accumulation, and bathymetry change [49]. The tombolo effect, which occurred near the pier in Sopot (a seaside resort city in Eastern Pomerania on the southern coast of the Baltic Sea in northern Poland, lying between the larger cities of Gdańsk to the southeast and Gdynia to the northwest), is an example of such a phenomenon (Figure 2). For many years, tombolos have been subject to research from various perspectives: geology, geomorphology, and the dynamics of relief-forming processes. Such research has been carried out, among others, in the north-eastern part of the Adriatic Sea [50,51], and on the Rhymittyla and Parainen islands and along the Salpausselka III ridge in Finland, to study the geology [52], the impact on the seashore, as well as the impact on marine infrastructure and structures [29,30].
In 2018, the first tombolo survey was carried out for research purposes and it showed significant changes in depth. Moreover, special attention was paid to marine environment protection related to overgrowth. In subsequent studies, the beach was surveyed using laser scanning, and unmanned aerial vehicles (UAV) were used to create a digital terrain model (DTM) [53] and to assess scan accuracy. Next, the methodology for conducting this type of research was developed [54]. The conclusion was that, to obtain a complete geospatial description of the tombolo phenomenon, it is necessary to analyze all archival materials originating from a given area, as they allow for a time-space analysis of the changes. The researchers asked maritime administration bodies for bathymetric archives, as these bodies, as part of their statutory activities, carry out periodic bathymetric measurements for Electronic Navigational Chart (ENC) updates and nautical publications. These archives were re-analyzed to assess geomorphological changes in this phenomenon. The following data were used for the tests:
• Bathymetric surveys from the period before the marina was built. They were performed by the Maritime Office in Gdynia (2010).
In this paper, for assessment of bottom relief changes, the digital sea bottom model (DSBM) methods known from hydrography were used, described in detail in [55], and were supplemented with an analysis of data reliability [56].
The results are presented in graphic 2D form, while the projection shows the changes in comparison with the measurements from previous campaigns. The paper ends with conclusions, which summarize the most important aspects of the study and set directions for further research.
Materials and Methods
Due to the very low depth, this area was not always completely surveyed during measurement campaigns. This was due to the size of the vessel and the type of echosounder used. These were SBES and MBES echosounders. Table 1 presents a summary of bathymetric surveys carried out in this area in 2010-2018, information on the echosounder used, and the exact sea area. To ensure the high reliability in DSBM creation, as many data as possible are needed. This can be achieved by using either MBES or SBES on measuring profiles spaced at small distances of several meters. Therefore, only measurements made in 2010, 2012, and 2015 with a singlebeam echosounder in shallow water proved useful for analyzing changes in bottom relief. Although surveys in 2011 were also taken with a SBES, they do not cover the southern part. Measurements using MBES, taken in 2013, 2015, 2017, and 2018 in deeper water, do not cover the tombolo phenomenon and are thus not useful for analysis. In 2018, the latest surveys in the northern and southern parts using the SBES echosounder were performed.
Archival Hydrographic Data: 2010 and 2012
The first source of geospatial data for analysis was archival materials from the surveys carried out by the Department of Hydrographic Surveys of the Maritime Office in Gdynia. The reporting documentation from bathymetric surveys consisted of hydrographic boards and digital data in the form of Cartesian coordinates and depth related to the chart datum. Graphical information contained on bathymetric boards shows the course of depth contour with spot wise depths and is usually complemented with land infrastructure elements, such as quays, breakwaters, piers. Based on probe arrangement (depth points), the arrangement and course of the measuring profiles, especially in terms of the distance between them, can be derived. It serves as a basis for assessing the value of the material, that was used in the next stage to develop a numerical bottom model. Figure 3 shows two bathymetric charts of the Sopot pier area. The first one comes from 2010, while the second one was created after the marina was completed in 2012. To ensure the high reliability in DSBM creation, as many data as possible are needed. This can be achieved by using either MBES or SBES on measuring profiles spaced at small distances of several meters. Therefore, only measurements made in 2010, 2012, and 2015 with a singlebeam echosounder in shallow water proved useful for analyzing changes in bottom relief. Although surveys in 2011 were also taken with a SBES, they do not cover the southern part. Measurements using MBES, taken in 2013, 2015, 2017, and 2018 in deeper water, do not cover the tombolo phenomenon and are thus not useful for analysis. In 2018, the latest surveys in the northern and southern parts using the SBES echosounder were performed.
Archival Hydrographic Data: 2010 and 2012
The first source of geospatial data for analysis was archival materials from the surveys carried out by the Department of Hydrographic Surveys of the Maritime Office in Gdynia. The reporting documentation from bathymetric surveys consisted of hydrographic boards and digital data in the form of Cartesian coordinates and depth related to the chart datum. Graphical information contained on bathymetric boards shows the course of depth contours with spot depths and is usually complemented with land infrastructure elements, such as quays, breakwaters and piers. Based on the probe arrangement (depth points), the arrangement and course of the measuring profiles, especially in terms of the distance between them, can be derived. It serves as a basis for assessing the value of the material that was used in the next stage to develop a numerical bottom model. Figure 3 shows two bathymetric charts of the Sopot pier area. The first one comes from 2010, while the second one was created after the marina was completed in 2012. The charts presented cover areas of similar size: in 2010, the surveys were taken on a body of water that measured 450 m × 630 m. Please note that both areas fully cover the area of occurrence of the tombolo phenomenon, which is limited by the marina and the beach and is 600 m long. The surveys under analysis were made with very similar equipment: DGPS receivers (precision of 2 m, p = 0.95) and a singlebeam echosounder with a depth measurement precision of 1 cm (rms) at 210 kHz were used (Table 2). The Electronic Navigational Chart (ENC) is the second source of geospatial data for the analysis of bottom relief near the marina in Sopot. Such data are used for the graphic presentation within a System of Electronic Navigational Chart (SENC). They are encoded in the international S-57 standard, containing geo, meta, collection and cartographic items. Items of type geo contain descriptive characteristics of real-world elements with attributes and acronyms assigned. From the available ENC cells from the studied period, ENC data from the years 2011, 2014, and 2018 were analyzed. Geospatial information contained in the coastline, depth contour and the survey was used to build the DSBM. Depth contours, which are linear items, contain variable horizontal coordinates and a constant vertical component of depth. Figure 4 shows the evolution of the content of the ENC maps over the years. The information was enriched by the marina external breakwater (2014) and the increased surveying density (2018).
Even a rough analysis indicates a change in the coastline course and its shift towards the sea. In addition, waters in the vicinity of the marina became significantly shallower.
Bathymetric Surveys Using SBES: 2018
In 2018, a comprehensive study of the tombolo phenomenon in Sopot began. In December 2018, the first survey of this reservoir was conducted using Gdynia Maritime University Navigator-One, a classic low draught hydrographic vessel. The vessel was equipped, among others, with the Ohmex SonarMite hydrographic singlebeam echosounder and Trimble R10 GNSS receiver with parameters shown in Table 3. In such measurements, in very shallow water, the survey time gains in importance. Therefore, measurement timing was determined based on a forecast predicting the highest possible water level, using the ecohydrodynamic model developed by the Polish Academy of Sciences, Institute of Oceanology. This enabled the vessel to approach the shore as much as possible, providing depth measurement in very shallow water and maximum data coverage for the body of water.
Comparative Analysis of Research Material
Bathymetric surveys of the water body adjacent to the pier in Sopot made in the years 2010-2018 differ both in terms of implementation technicalities and data processing methods. Furthermore, the data contained in the ENC, SBES and MBES differ not only in numbers but also in geometry, which is related to measurement profile arrangement.
Surveys made using MBES yielded the greatest amount of data. The large number of beams sent from the acoustic transducer in the starboard and port side traverse, the low speed of the survey vessel and the high ping rate ensure small distances between the signal reflection points from the bottom. To compare such diverse data, DSBM modeling methods known in hydrography were used. In the process of DSBM building, horizontally irregular data were used to create a regular grid (rectangles). Regular data can also be made available by exporting from the grd→xyz grid. Both the regular and irregular MBES grids are models with a large degree of data integration. This allows a highly reliable grid with a small cell size to be obtained, whereas reliability decreases when a high-resolution DSBM is built from a small set of geospatial data.
The ENC data contain a smaller set of geospatial information that is distributed differently. The depth contours contained in the ENC are parallel to the coastline and the SBES data are perpendicular to the depth contours, which results from the SBES measurement methodology. Additionally, the ENC cells contain geospatial information in SOUNDG objects with a different degree of integration depending on the update (more recent updates contain more data). A singlebeam echosounder usually provides more data than are contained in the ENC and depends on the distance between the profiles. Therefore, while the depth contour waveform is determined precisely on survey profiles, between them it needs interpolation. In principle, it does not have much influence on the depth contour accuracy in an area with low depth changes dynamics.
The use of SBES or MBES results in a different number of measurements made in the same body of water. For comparison purposes, Table 4 presents the number of data measurements included in ENC for the northern and southern water bodies. The southern reservoir, where the tombolo phenomenon occurs, is especially important. Because of the shallow depth, it was not possible to use the MBES echosounder for measurements there. Therefore, the data for the northern body were also presented, where SBES and MBES measurements for ENC cell updates were performed in different years. The density of survey profiles is another factor influencing the accuracy of DSBM development in this body of water. Figure 5 shows the data coverage of the water body in the vicinity of the pier in Sopot in the years 2010 and 2012, made with the use of the SBES and with distances between the profiles of 10 m. This ensured a dense data coverage as opposed to the surveys taken in 2015 when the distances between the profiles were 90-100 m. In 2018, we tried to keep the distance of 10 m in southern part (tombolo) and 20 m in northern part of the area.
Extraction of Geospatial Data from ENC
Three classes of geo objects were used to build the DSBM based on geospatial data contained in the ENC: the soundings, which are a set of points of different depths, the coastline, and the depth contours, whose depths are respectively constant. Table 5 presents those elements of the ENC maps that contain geospatial information used to develop the DSBM. These objects are shown in Figure 6. The SOUNDG object is a scattered point object, and its data density increased with the year the ENC map cell was issued. The two remaining objects, COALNE and DEPCNT, are linear objects with a large amount of data. Such redundant information is of little use and does not contribute to increasing the reliability of the developed DSBM.
Although the ENC cell is issued with a specific date, the age of the individual data may vary. Figure 7 shows SOUNDG geospatial data of the PL5SOPOT cell issued in 2018, with the time span between the oldest and the youngest data being 7 years. For the data marked in red, the year of acquisition is 2010 (southern part in the beach area, Figure 7a). Non-geospatial data include information on the data acquisition time for the two groups of surveys (Figure 7a,b).
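The extraction of SOUNDG and DEPCNT objects described above can be reproduced with any S-57-capable GIS library. The snippet below is only a minimal sketch, assuming the GDAL/OGR Python bindings with the S-57 driver are installed, that soundings are delivered as multipoints (the driver default), and that the cell file name is illustrative; COALNE vertices could be appended analogously with a depth of 0.

```python
# A minimal sketch (not the authors' workflow): reading depth-bearing S-57 objects
# from an ENC cell with GDAL/OGR. The cell file name is illustrative.
from osgeo import ogr

def vertices(geom):
    """Yield (x, y) vertices of a LINESTRING or MULTILINESTRING geometry."""
    if geom.GetGeometryCount() > 0:          # MULTILINESTRING: recurse into parts
        for i in range(geom.GetGeometryCount()):
            yield from vertices(geom.GetGeometryRef(i))
    else:                                    # simple LINESTRING
        for i in range(geom.GetPointCount()):
            yield geom.GetX(i), geom.GetY(i)

def extract_enc_depths(cell_path="PL5SOPOT.000"):
    """Collect (x, y, depth) triplets from SOUNDG and DEPCNT objects of one ENC cell."""
    ds = ogr.Open(cell_path)
    if ds is None:
        raise IOError("Cannot open ENC cell: " + cell_path)
    triplets = []

    soundg = ds.GetLayerByName("SOUNDG")     # soundings: depth stored in the z coordinate
    if soundg is not None:
        for feat in soundg:
            multipoint = feat.GetGeometryRef()
            for i in range(multipoint.GetGeometryCount()):
                pt = multipoint.GetGeometryRef(i)
                triplets.append((pt.GetX(), pt.GetY(), pt.GetZ()))

    depcnt = ds.GetLayerByName("DEPCNT")     # depth contours: constant depth in VALDCO
    if depcnt is not None:
        for feat in depcnt:
            depth = feat.GetField("VALDCO")
            for x, y in vertices(feat.GetGeometryRef()):
                triplets.append((x, y, depth))

    return triplets
```

The resulting x, y, z triplets can then be gridded in the same way as the SBES soundings, which is what makes the ENC-derived and survey-derived models comparable.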
Digital Sea Bottom Model by Inverse Distance Weighted Method
As with any surface, the description of the seabed bottom is generally a projection of a certain two-dimensional area into space, the coordinates of which are described by means of polynomials of two variables. These can be, for example, Bezier surface patches [57], rational rectangular Bezier patches [58], B-splines, Hermite polynomials, NURBS (Non-Uniform Rational B-Splines) functions [59] or Coons patches [60]. For building a DTM (here: DSBM), the method of a grid of rectangles with the use of interpolation is commonly used. Among the interpolation methods available in the ArcGIS environment, such as Kriging, natural neighborhood and splines, Inverse Distance Weighted (IDW) was used [61][62][63][64]. The value of the h(x,y) function at any point (here: in the grid node) is the weighted average of the known depth values from n interpolating points and can be presented in the basic form [64]:

h(x,y) = \frac{\sum_{i=1}^{n} w_i^{(k)} h_i}{\sum_{i=1}^{n} w_i^{(k)}}   (1)

where n is the number of interpolating points, h_i is the depth at the i-th interpolating point, and w_i^{(k)} is the weight of the i-th point (the k index refers to the type of weight). The w(x,y) weight is a function of distance and determines the magnitude of the influence of the i-th point on the interpolated value. Weight coefficients are now mostly calculated according to relation (2), in which the value of the weight is inversely proportional to the distance between the interpolated point and the measurement point [64]:

w_i^{(k)} = \frac{1}{d_i^{k}}   (2)

where d_i is the distance between the grid node and the i-th measurement point and k is the power exponent. Apart from the vertical component, determined for the node point of the grid, cell size is another parameter selected based on the source points system, resulting from measurements or extracted from ENC cells. Table 6 shows the default values of this parameter for particular bottom models created in the ArcGIS environment. For creating DSBMs, a 1 m cell size was set up.
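For readers who want to reproduce the gridding step outside ArcGIS, the sketch below implements relations (1) and (2) directly with NumPy. It is illustrative only: the brute-force loop, the power k = 2 and the example soundings are assumptions, not values or code taken from the study, and it omits the neighborhood/search-radius options that a GIS package would normally apply.

```python
# Illustrative IDW gridding of scattered (x, y, depth) soundings onto a regular grid,
# following relations (1) and (2): node depth = sum(w_i * h_i) / sum(w_i), w_i = 1 / d_i**k.
import numpy as np

def idw_grid(x, y, h, cell_size=1.0, k=2, eps=1e-6):
    """Interpolate scattered depths h at positions (x, y) onto a regular grid of cell_size."""
    x, y, h = map(np.asarray, (x, y, h))
    gx = np.arange(x.min(), x.max() + cell_size, cell_size)
    gy = np.arange(y.min(), y.max() + cell_size, cell_size)
    grid = np.empty((gy.size, gx.size))

    for r, yc in enumerate(gy):
        for c, xc in enumerate(gx):
            d = np.hypot(x - xc, y - yc)
            if d.min() < eps:                       # node coincides with a sounding
                grid[r, c] = h[d.argmin()]
                continue
            w = 1.0 / d**k                          # relation (2)
            grid[r, c] = np.sum(w * h) / np.sum(w)  # relation (1)
    return gx, gy, grid

# Example usage with three hypothetical soundings (coordinates in metres, depth positive down):
gx, gy, dsbm = idw_grid([0.0, 5.0, 10.0], [0.0, 5.0, 10.0], [1.2, 0.9, 0.6], cell_size=1.0)
```

Shrinking cell_size below the data spacing does not add information; as noted above, it only forces the nodes to be interpolated from increasingly distant points, which lowers the reliability of the model.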
Comparative Analysis of Charts from 2010 and 2012 (SBES)
Based on xyz measurement data from the years 2010 and 2012, DSBM was developed in the ArcGIS environment, which is presented in the 2D form in Figure 8. Only one year after the construction of the marina, the depth contour shifting towards the marina could be observed, which is indicative of an increasing shallowing near the beach.
3D imaging is an alternative form of map representation that is close to natural human perception, hence Figure 9 presents 3D models of the bottom. Figure 9b shows the later model (2012) in grey, which covers the older one (2010) as a result of the shallowing.
In general, waters get shallower throughout the entire presented area, both in the southern and northern parts. This is visible in Figure 9b, where the bottom area, determined based on the 2012 data and marked grey, completely covers the area from 2010. The area with depths above 0.6 m, for which measurement points (Figure 8) are visible, should be considered for analysis. It can be seen in the vicinity of the marina's southern breakwater that waters became shallower by ca. 1 m.
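Once two DSBMs from different epochs share the same grid, the shallowing visible in Figures 8 and 9 can be quantified cell by cell. The sketch below assumes two co-registered NumPy depth grids produced as in the earlier gridding example; the grids and the 0.1 m threshold are purely illustrative and are not the values reported in this study.

```python
# Illustrative comparison of two co-registered depth grids (depths positive down).
# A positive change means the bottom rose, i.e. the water became shallower.
import numpy as np

def depth_change(dsbm_old, dsbm_new):
    """Return the per-cell depth change between two epochs (old minus new)."""
    if dsbm_old.shape != dsbm_new.shape:
        raise ValueError("Grids must be co-registered and of equal size")
    return dsbm_old - dsbm_new

# Hypothetical 2010 and 2012 grids, for demonstration only.
dsbm_2010 = np.array([[3.5, 3.0], [2.0, 1.5]])
dsbm_2012 = np.array([[2.6, 2.4], [1.2, 1.0]])

change = depth_change(dsbm_2010, dsbm_2012)
shallower = change > 0.1            # cells that rose by more than 0.1 m (illustrative threshold)
print(change)
print(shallower.sum(), "cells became noticeably shallower")
```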
Comparative Analysis of the 2014 and 2018 (ENC)
For the 2014-2018 period, bottom models were compared based on the data contained in the ENC data obtained with a singlebeam echosounder. The number of data contained in the ENC is greater (Figures 4c and 6) compared to 2011 and 2014 (Figure 4a,b) and is enough for creating the DSBM on the basis of geo objects (Figure 10).
Figure 11 shows the shallow areas based on the two bottom models created on the basis of the ENC data. The bottom area, determined based on the 2018 ENC data and marked grey, covers the southern area, where the tombolo phenomenon takes place.
Comparative Analysis of 2012 and 2018 SBES Soundings
An analysis of the reporting documentation (reporting board with description included in the table describing the boards) of the Maritime Office in Gdynia from the survey carried out in 2012 and our last (2018) survey showed that the measurements were taken down to the draught limit of the survey vessels of 0.6-0.7 m. These measurements do not include shallow water from the 0.6-m depth contour to the coastline. In 2012, distances between the profiles were set at 10 m. For the survey in 2018, the distances were set at 10 m (Sth) to 20 m (Nth). Measurements were taken in a strip of ±400 m wider than the pier. The 2D bottom model based on the SBES surveys in 2018 is shown in Figure 12.
Cell Size and DBSM Reliability
Cell size is an important parameter in spatial surface modeling (here: DBSM bottom surface) [48,49]. It seems justified to reduce the distance between the grid nodes to obtain a better image. At low measurement density, depths in the nodes are interpolated from distant points, leading to a decrease in reliability of the constructed DBSM. This also affects the calculation of the volume under/over the surface to estimate the gains and losses. Both the volume of the bottom material to be dredged, and the loss, i.e., changes in the bottom shape, are important for the analysis of the tombolo phenomenon.
When measuring with a MBES, the measurement density is high due to the large number of beams sent by the transducer towards the starboard and port side traverse (transverse density). It also depends on the velocity of the survey vessel and pinging frequency (longitudinal distance). The pinging frequency (emission of acoustic impulses by the echosounder) and the transverse distance are affected by depth: as it increases, the transverse distance increases with constant beam separation, and the pinging frequency decreases with increasing MBES operation ranges. SBES depth measurements allow high resolution geospatial data to be obtained depending on the depth and velocity of the survey vessel, but the traverse distance depends on the distance between measurements on adjacent profiles. When this distance is minimized, the workload increases and is difficult to implement for the helmsman of a larger surveying vessel and under less favorable hydrometeorological conditions (wind, current).
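The volume of accumulated or dredged material mentioned above follows directly from a per-cell depth-change grid: each cell contributes its depth change multiplied by the cell area. The sketch below continues the previous hypothetical examples and is only an illustration of the bookkeeping, not the method used in this study; the cell size and difference grid are assumptions.

```python
# Illustrative accretion/erosion volume balance from a per-cell depth-change grid.
import numpy as np

def accretion_volume(change, cell_size=1.0):
    """Split the volume balance into accretion (bottom rose) and erosion (bottom dropped)."""
    cell_area = cell_size ** 2                                    # m^2 per grid cell
    accretion = np.nansum(np.where(change > 0, change, 0.0)) * cell_area
    erosion = np.nansum(np.where(change < 0, -change, 0.0)) * cell_area
    return accretion, erosion                                     # both in cubic metres

# Using the hypothetical difference grid from the previous sketch (1 m cells):
change = np.array([[0.9, 0.6], [0.8, 0.5]])
gain, loss = accretion_volume(change, cell_size=1.0)
print(f"accretion: {gain:.1f} m3, erosion: {loss:.1f} m3")
```

Such a balance is only as reliable as the underlying grids: with sparse profiles and a small cell size, interpolated nodes dominate the sums, which is why the profile spacing discussed above matters for any dredging estimate.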
Conclusions
For the analysis of changes in the relief of the area adjacent to the Sopot pier, especially between the marina and the beach, the available materials with geospatial information were used at the highest possible data density. These were bathymetric measurements taken in 2010, 2012, 2015, and 2018 and the ENC from 2018. This made it possible to build highly reliable numerical bottom models with respect to the actual bottom shape. Because the measurements were made with a singlebeam echosounder on a hydrographic motorboat, the measurements were to the 0.6-m depth contour. The data from the shallow water were interpolated to the shoreline, whose course was obtained based on geodetic field measurements and an Electronic Navigational Chart.
The reliability of the numerical bottom model is influenced by the method and its parameters. For building this numerical bottom model, the IDW method was used without interfering with the parameters such as power, smoothing or anisotropy (ratio and angle).
Construction of the marina boosted demand for periodic and frequent measurements, since sand began to deposit on the bottom, resulting in water shallowing and widening of the beach, i.e., a shift of the shoreline towards the water. These measurements can be used not only for engineering, i.e., determining how much bottom material has been deposited and thus how many thousands of cubic meters must be removed, but also for studying the sea dynamics. Contemporary bathymetric measuring systems and methods not only make the surveys more accurate but also faster and easier. This paper was written based on geospatial data obtained with a singlebeam echosounder on a hydrographic motorboat, but the use of the same echosounder on an unmanned surface vessel (USV) enables measurements to be made at a much smaller depth, even 0.2 m. Precise positioning and line keeping of the measuring vessel in automatic mode enable quick measurements on measuring profiles at distances of 2 m and even 1 m. At such shallow depths, it is not necessary to use a multibeam echosounder, since the swath width decreases in increasingly shallow water. Although the bottom is sandy and hard, it is justified to investigate the possibility of using other echosounder frequencies, because the grass and algae that started growing there with the emergence of the tombolo phenomenon cause interference with the high-frequency echosounder.
By 2012, within just two years, the breakwater area had become shallower by almost 1 m, decreasing its depth from 3.5 to 2.6 m and shifting the 1 m and 1.5 m isobaths by 90 m towards the marina. Continuous transfer of sand resulted in local shallowing, with a depth of 1.5 m, visible on the ENC map from 2018.
Further research on this phenomenon may include a quantitative analysis of the dredged and lost material. The presented research points to where the tombolo phenomenon caused water shallowing. These areas have been marked in grey in the drawings. Although it is possible to calculate the volume of drifted sand, to assess the volume of the dredged material it is necessary to model the target bottom shape after dredging. A decision is needed as to the course of the shoreline and whether the bottom should drop (the depth should increase) evenly, and up to what distance from the shore this should occur. The random character of transport constitutes a limitation in calculating the amount of dredged sand, as it depends on such factors as the diameter of the sediment grain, its weight (taking into account the buoyant force) and broadly defined structural features (packing, sorting, shape) and the roughness of the bottom.
|
2020-10-29T09:07:44.956Z
|
2020-10-24T00:00:00.000
|
{
"year": 2020,
"sha1": "d03681f4b1ecf2d8f25a869cb7d3314b393c50af",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/20/21/6061/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78756dc6d5d5c97d294a207a5943bdaa12aab5f9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geology",
"Computer Science",
"Medicine"
]
}
|
25967768
|
pes2o/s2orc
|
v3-fos-license
|
HPV and oral lesions: preventive possibilities, vaccines and early diagnosis of malignant lesions
Human Papilloma Virus (HPV) has been widely studied in genital infections, ranking first in the etiology of uterine cervical cancer. The wide diffusion of HPV, favored by its ease of transmission, has made it necessary to study papillomavirus in depth and to identify it in other body districts, including the oral cavity. The importance of HPV for world health is high; in fact, high-risk HPV types contribute significantly to virus-associated neoplasms, accounting for approximately 600,000 cases (5%) of cancers worldwide annually (1). Differently from cervical cancer, HPV plays a pathogenic role only in a small proportion of oral cancers, through the continued expression of the viral oncogenes necessary for the histopathologic progression of HPV-associated malignancies. Two HPV proteins (E6 and E7) are mainly responsible for cell transformation and the malignant progression of cancer, and they are therefore defined as oncogenic proteins (2). The action of the oncogenic E6 and E7 proteins resides in their ability to inactivate two important tumor suppressor proteins, p53 and the retinoblastoma protein, pRb (3, 4).
Introduction
Human Papilloma Virus (HPV) has been widely studied in genital infections, ranking first in the etiology of uterine cervical cancer. The wide diffusion of HPV, favored by its ease of transmission, has made it necessary to study papillomavirus in depth and to identify it in other body districts, including the oral cavity. The importance of HPV for world health is high; in fact, high-risk HPV types contribute significantly to virus-associated neoplasms, accounting for approximately 600,000 cases (5%) of cancers worldwide annually (1). Differently from cervical cancer, HPV plays a pathogenic role only in a small proportion of oral cancers, through the continued expression of the viral oncogenes necessary for the histopathologic progression of HPV-associated malignancies. Two HPV proteins (E6 and E7) are mainly responsible for cell transformation and the malignant progression of cancer, and they are therefore defined as oncogenic proteins (2). The action of the oncogenic E6 and E7 proteins resides in their ability to inactivate two important tumor suppressor proteins, p53 and the retinoblastoma protein, pRb (3,4). A systematic meta-analysis reviewed the current evidence on this association (6). Among the many high-risk HPV types, HPV-16 is the most common, found in almost 90% of the HPV(+) oropharyngeal cancers. At present, HPV-16 remains the only HPV type that is classified as cancer-causing in the head and neck (7). For fairness, the Authors want to emphasize that results regarding the presence of HPV in Oral Squamous Cell Carcinoma (OSCC) were not the same in all studies; in fact, in some studies the prevalence of HPV in OSCC was between 2-7% (8,9). The Human Papillomaviruses (HPVs) can be broadly grouped into cutaneous and mucosotropic types. The mucosotropic HPVs are typically found in the anogenital mucosa and oral mucosa. Genital infection with HPV can be transmitted to the oral mucosa through autoinoculation, oral sex, or oral contact. The virion is composed of a double-stranded, circular, 8,000-base pair DNA genome encased in a naked icosahedral capsid about 55 nm in diameter; the papillomaviruses comprise a heterogeneous family consisting of more than 130 different HPV types and 16 categories (10)(11)(12)(13).
In this article we analyze the various expressions of HPV in the oral cavity, both benign and malignant, their prevalence, and the importance of early diagnosis and prevention.
Diagnosis
Diagnosis of papillomavirus lesions is based on the histopathological appearance. Characteristic features include koilocytosis, acanthosis and papillomatosis which, coupled to the clinical appearance, suggest the infection (14). The classical oral lesions associated with human papillomavirus are squamous cell papilloma, condyloma acuminatum, verruca vulgaris and focal epithelial hyperplasia (15,16). Squamous cell papilloma is a cauliflower-like lesion with a narrow base. It is a small, pink exophytic growth of the oral mucosa (Figure 1). The lesion of condyloma acuminatum is similar, presenting multiple small, soft, pale lesions with a cauliflower-like surface (Figure 2). Histologically, both lesions have the same appearance, and human papillomavirus types 6 and 11 are involved. Verruca vulgaris or the common wart is a narrow exophytic growth, wider at the base, sessile and firm (Figure 3). The lesion is usually found on the gingiva, labial mucosa, commissure, hard palate or tongue. Human papillomavirus types 2 and 57 have been identified in the lesions. Treatment is by surgical excision. Focal epithelial hyperplasia (Heck's disease) usually presents as multiple plaque-like or papular lesions, flat or convex, in the mucosa mostly of children. The color may vary from red to gray to white. Lesions occur on oral mucosa exclusively. The lesions are benign and may resolve spontaneously (Figure 4) (17,18). Other oral lesions that have been associated with human papillomavirus include erythroplakia (HPV-16), proliferative verrucous leukoplakia (HPV-16), candidal leukoplakia, oral squamous cell carcinoma (HPV-16 and HPV-19) and lichen planus (HPV-6, HPV-11 and HPV-16). Overall, HPV types 2, 4, 6, 11, 13 and 32 have been associated with benign oral lesions while HPV types 16 and 18 have been associated with malignant lesions.
It is now clear that high-risk human papillomavirus genotypes, particularly human papillomaviruses 16 and 18, are important co-factors, especially in cancers of the tonsils and elsewhere in the oropharynx. A meta-analysis by the Fifth World Workshop on Oral Medicine reviewed 1,121 published studies of oral lesions. The odds ratio for the association between high-risk human papillomaviruses and oral cancer was 4.0 (2.62-6.02), and a similarly elevated odds ratio was reported for oral potentially malignant lesions. The odds ratio for tobacco use or heavy drinking and oral cancer was in a range between 3 and 9, so the odds ratio for HPV underlines that it is an important risk factor. Oral cancer shows highly variable clinical features. The most frequent appearance is a white lesion or a red or ulcerative area. The clinical morphology is a function of the tumor growth, so one can observe exophytic lesions of papillary or warty appearance, or endophytic growths that take the form of penetrating ulcers (Figure 5). The following list of oral cancer signs and symptoms considers both oral cancers from HPV and those from tobacco and alcohol: an ulcer or sore that does not heal within 2-3 weeks, difficult or painful swallowing, pain when chewing, a persistent sore throat or hoarse voice, a swelling or lump in the mouth, a painless lump felt on the outside of the neck which has been there for at least two weeks, a numb feeling in the mouth or lips, constant coughing, and an ear ache on one side (unilateral) which persists for more than a few days (19)(20)(21).
HPV contraction and course
Transmission of the virus can occur through direct contact, genital contact, and anal and oral sex; the latest studies also suggest salivary transmission and transmission from mother to child during delivery. The number of lifetime sexual partners is an important risk factor for the development of HPV-positive head and neck cancer. In case-control studies, the odds of HPV-positive throat cancer doubled in individuals who reported between one and five lifetime oral sexual partners. The risk increased five-fold in those patients with six or more oral sexual partners compared with those who have not had oral sex. The virus may be inactive for weeks, months and, for some people, possibly even years after infection. There is no cure for the virus. Most of the time, HPV goes away by itself within two years and does not cause health problems. It is only when HPV stays in the body for many years that it might cause these oral cancers. Even then, it is a very small number of people that will have an HPV infection cascade all the way into an oral malignancy, though that number is increasing every year by about 10%. It is not known why HPV goes away in most, but not all, cases. For unknown reasons there is a small percentage of the population whose immune system does not recognize this as a threat and the virus is allowed to prosper (22)(23)(24)(25)(26)(27).
Figure 5
HPV-positive oral cancer shows highly variable clinical features. The most frequent appearance is a white lesion or a red or ulcerative area. The clinical morphology is a function of the tumor growth, so one can observe exophytic lesions of papillary or warty appearance or endophytic growths that take the form of penetrating ulcers.
Fortunately, there is a difference between HPV-positive and HPV-negative oropharyngeal cancer: the loss of p16 expression by deletion, hypermethylation or mutation is common in OSCCs caused by alcohol and tobacco, producing lesions with a worse prognosis, as they do not respond to chemotherapy or radiotherapy in the same way. The HPV E6 oncoprotein can inactivate p53 too, although in these cases it is a functional inactivation, not a mutation as happens in cancer associated with tobacco and alcohol intake.
In fact, the rate of p53 mutations due to HPV is very low. All this supports the existing evidence that oral/oropharyngeal cancers etiologically associated with HPV have increased survival and a better prognosis (85-90% at five years); however, due to the mode of transmission, patients with HPV-associated HNSCC are younger (30-50 years old) (28)(29)(30)(31).
Vaccine
The HPV vaccine is the first vaccine explicitly designed to prevent virus-induced cervical cancer (32,33). HPVs 16 and 18 are the main targets of the currently approved vaccines, and the available data confirm their success in reducing the incidence of pre-cancerous cervical lesions caused by these types (34). The vaccine's efficacy is limited by two factors: not all cancers are caused by HPVs 16 and 18, and there seems to be a requirement to vaccinate young women before they become infected by these two types. To be effective, such vaccination should start before "sexual puberty". There are two commercially available prophylactic vaccines against HPV today: the bivalent (HPVs 16 and 18) Cervarix® and the tetravalent (HPVs 6, 11, 16 and 18) Gardasil®. Theoretically, there is no reason for these vaccines to fail to work against these same viruses in different localizations (such as the oral cavity, pharynx, larynx or the anogenital region). Proving that the vaccine also prevents oropharyngeal cancer would not only mean a landmark in the prevention of these diseases, but it would also provide the missing link in the chain of evidence, with the ultimate proof of the HPV-induced viral etiology of these tumors. Vaccination is approved in females aged 9 to 26.
The primary target population for vaccination should be females aged 11 and 12. However, the vaccine can also be administered to females as young as 9 years old and to those aged between 13 and 26 who have been sexually active (35)(36)(37)
Conclusions
Considering the importance of HPV in influencing both the risk and the course of oral cancer, the preventive diagnosis of HPV assumes fundamental importance. In particular, the clinical examination of precancerous or cancerous lesions is not sufficient; instrumental analysis is necessary. In addition to more traditional histological techniques performed on biopsies, viral DNA can now be searched for in exfoliated epithelial cells. The importance of early detection is thus evident, and dentists, more than other medical specialties, have a key responsibility for the timely diagnostic classification of potentially malignant oral lesions. The vaccine could be an important preventive strategy; in fact, the scientific community agrees on the hypothesis that blocking contagion may also limit late complications such as oropharyngeal cancer, so in our opinion HPV is an important risk factor with a certain future impact on world health.
|
2018-04-03T04:46:15.416Z
|
2015-04-01T00:00:00.000
|
{
"year": 2015,
"sha1": "ffb15c9372b2d4ad886d88679295c73db038a616",
"oa_license": null,
"oa_url": "https://doi.org/10.11138/orl/2015.8.2.045",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e25733d3b9c97a0d6beea91c47d7687df67786b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
11792818
|
pes2o/s2orc
|
v3-fos-license
|
Primary Diffuse Large B-Cell Lymphoma of the Ascending Colon
Abstract Primary colorectal lymphoma is a rare malignancy accounting for 3% of all gastrointestinal lymphomas and 0.1-0.5% of all colorectal malignancies. Among primary colorectal lymphomas, the most common histological subtype is diffuse large B-cell lymphoma. We report a case of an 84-year-old Caucasian female who was admitted to the hospital because of a 2-day history of altered mental status. In the emergency department the patient was found to have acute kidney injury and hypercalcemia. On physical examination a large lower quadrant abdominal mass was palpated. Computed tomography scan of the abdomen confirmed the presence of a mass along the cecum and proximal ascending colon. Colonoscopy showed a large ulcerated mass and biopsy was consistent with diffuse large B-cell lymphoma. The patient underwent colectomy but refused to receive chemotherapy.
Introduction
The gastrointestinal (GI) system is a common site for secondary spread of non-Hodgkin lymphomas (NHL). 1,2 However, primary involvement of the GI tract is significantly less common, representing only 10-15% of all NHLs and accounting for approximately 4% of all tumors arising in the GI system. 3,4 Primary colorectal lymphomas are even rarer entities, comprising 0.1-0.5% of all colorectal malignancies and 1.4% of all cases of NHL. 5 Dawson et al. were the first to describe colorectal lymphoma in 1961. 6 Lack of specific symptoms can lead to delayed diagnosis in 35-65% of patients, when surgical treatment options are either urgent or emergent. [7][8][9] In more than half of the cases, it is clinically possible to appreciate the lymphoma as a bulky mass on physical examination. 10 Treatment has a multidisciplinary approach with a combination of surgery, chemotherapy and radiation. Due to its rarity, there is a lack of randomized trials and most of the published information is based on individual case reports. Below, we present a case of an 84-year-old female with primary colorectal lymphoma who presented to the hospital with altered mental status secondary to hypercalcemia.
Case Report
An 84-year-old Caucasian female was sent to the hospital because of a two-day history of altered mental status. In the emergency department she was found to have acute kidney injury and hypercalcemia, with a total serum calcium level of 17 mg/dL (normal range: 8.5-10.3). Physical examination was significant for a right lower quadrant mass measuring 10 cm at the greatest diameter. The rest of the physical examination was unremarkable. A hypercalcemia workup was initiated, which showed elevation of lactate dehydrogenase, uric acid and 1,25-dihydroxyvitamin D, and a decreased level of parathyroid hormone. The rest of the laboratory parameters were within normal limits.
Computed tomography (CT) scan of the abdomen and pelvis was performed, which showed a 12.0 cm circumferential mass along the cecum and proximal ascending colon (Figure 1). Subsequent colonoscopy demonstrated an ulcerated, circumferential, rigid mass at the ascending colon (Figure 2). A gross pathological specimen is shown in Figure 3. A few days later, the patient's pathology report revealed diffuse large B-cell lymphoma (DLBCL) of the ascending colon. Microscopic examination of the biopsy sample revealed portions of colonic tissue which were infiltrated by the neoplasm. The neoplasm formed large sheets of cells without glandular formation or keratin production (Figures 4 and 5). The cells were monotonous with irregular nuclear membranes and prominent nucleoli, with easily found mitotic activity. Immunohistochemical staining was also performed and revealed the tumor to be CD45+, CD3+, CD20+, BCL6+ and MUM1-negative (Figure 6). Lymphoid survey was negative and there was no distal organ involvement. Upon classification using the Revised International Prognostic Index (R-IPI), the patient was classified in the poor risk group with a score of 3. The patient refused to receive chemotherapy but did undergo open right hemicolectomy with right oophorectomy and ileocolic anastomosis. CT scan of the abdomen and pelvis was done two months later, which showed a recurrent mass in the right lower quadrant, for which the patient underwent multiple sessions of radiation therapy. The course was complicated by radiation-induced colitis and deep venous thrombosis requiring hospitalization. The patient did not receive any chemotherapy and did not undergo any additional surgical intervention.
Discussion
Primary colorectal lymphoma is a rare malignancy accounting for 3% of all GI lymphomas and 0.1-0.5% of all colorectal malignancies. 10,11 The stomach is the most common location of GI lymphomas (50-60%), followed by the small bowel (20-30%) and the colorectum (10-20%). 12 The cecum is the most common site of involvement for colorectal lymphomas because of its abundance of lymphoid tissue. 10 The definition of primary GI lymphoma varies among authors; however, most classification systems refer to primary GI lymphomas as arising in any part of the GI tract, even in the presence of more disseminated disease, as long as the extranodal site is predominant. 13 The most common histological subtype of colorectal lymphoma is diffuse large B-cell lymphoma. 9 Other histologies include follicular lymphoma, Burkitt lymphoma and mantle cell lymphoma. 10 The etiology of DLBCL is unknown, but some risk factors and predisposing conditions have been identified, such as immunodeficiency states and inflammatory bowel disease. 5 The most common symptoms are abdominal pain, weight loss and altered bowel habits. 14 Males are affected more commonly, with a mean age at diagnosis of 55 years. 13,[15][16][17] Colonoscopy with subsequent biopsy is the preferred diagnostic approach. For diagnostic purposes it is crucial to define the morphology and immunophenotype. Morphologically, DLBCL consists of large atypical lymphoid cells with prominent nucleoli and basophilic cytoplasm that show a diffuse growth pattern obliterating the colonic gland architecture (Figures 4 and 5). Immunohistochemistry and flow cytometry confirm the immunophenotype of DLBCL. Tumor cells generally express pan-B-cell markers such as CD20, CD19, CD22, CD45 and CD79a (Figure 6). 18,22 DLBCL is often associated with genetic abnormalities in the BCL-6 gene, which lead to an uncontrolled cell cycle. 23,24 There are many prognostic systems, of which the International Prognostic Index (IPI) is the main clinical tool used in the prognostication of DLBCL. 25 Gene expression profiling (GEP) is an evolving approach to diagnose, classify and prognosticate DLBCL. 25 According to GEP, two prognostically significant types of DLBCL have been identified. The molecular subgroups include germinal center B-cell-like (GCB) and activated B-cell-like (ABC), which are associated with different chromosomal aberrations. The GCB group has a better prognosis than the ABC group. [26][27][28] In the literature, the treatment of colorectal DLBCL includes chemotherapy, radiation, surgical treatment, or a combination of these approaches. CHOP (cyclophosphamide, doxorubicin, vincristine, and prednisone) has been the mainstay of therapy for DLBCL for many years, providing long-term survival in 40-50% of patients. 29 Rituximab is the first monoclonal antibody approved for the treatment of DLBCL. 30 Randomized trials have shown that the combination of chemotherapy (CHOP) and rituximab results in significantly increased survival compared with chemotherapy alone. 31,32 It is important to note that the addition of rituximab to the CHOP regimen resulted in a 10-15% increase in survival with no increased risk of side effects. 33 Furthermore, rituximab monotherapy in patients with relapsed or refractory DLBCL can achieve complete or partial remission. 34 The Revised IPI has been developed to better predict outcome in patients treated with R-CHOP (Table 1). 35
A few randomized trials have investigated the role of radiation therapy and concluded that it is at least as effective as chemotherapy alone. [36][37][38][39][40][41] However, according to Quayle et al., radiation therapy may not be the preferred option for DLBCL involving the colon because of a high risk of complications involving the small and large bowel. 41 Because of the low incidence of primary colorectal lymphoma, treatment recommendations rest largely on individual case reports and small series rather than randomized data.
Conclusions
Primary colorectal lymphomas are rare malignancies. The most common histological subtype of colorectal lymphoma is diffuse large B-cell lymphoma. Due to the lack of randomized controlled trials, there is no clear treatment algorithm for these cases. However, combination chemotherapy along with rituximab appears to be a promising treatment.
Table 1 (note). Revised International Prognostic Index criteria: i) age >60 years; ii) serum lactate dehydrogenase concentration above normal; iii) Eastern Cooperative Oncology Group (ECOG, Zubrod, WHO) scale performance status ≥2; iv) Ann Arbor stage III or IV; v) number of extranodal disease sites >1. *One point is given for each of the above characteristics present in the patient, for a total score ranging from zero to five.
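As a quick illustration of how the R-IPI score summarised in the Table 1 note is tallied, the sketch below assigns one point per adverse factor and maps the total to the published risk groups (0 = very good, 1-2 = good, 3-5 = poor). The example values resemble the reported case but are assumptions for illustration only.

```python
# Minimal sketch of R-IPI scoring; the risk-group cut-offs follow the published
# Revised IPI, and the patient values below are illustrative assumptions.

def r_ipi_score(age, ldh_above_normal, ecog_ps, ann_arbor_stage, extranodal_sites):
    """Return (score, risk_group): one point per adverse factor, 0-5 total."""
    score = sum([
        age > 60,
        bool(ldh_above_normal),
        ecog_ps >= 2,
        ann_arbor_stage >= 3,
        extranodal_sites > 1,
    ])
    if score == 0:
        group = "very good"
    elif score <= 2:
        group = "good"
    else:
        group = "poor"
    return score, group

# Example resembling the reported case (assumed values for illustration):
print(r_ipi_score(age=84, ldh_above_normal=True, ecog_ps=2,
                  ann_arbor_stage=1, extranodal_sites=1))  # -> (3, 'poor')
```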
|
2016-05-04T20:20:58.661Z
|
2013-04-15T00:00:00.000
|
{
"year": 2013,
"sha1": "f4cc7734973891e14af2fd1d5a7188785533105b",
"oa_license": "CCBYNC",
"oa_url": "http://journals.sagepub.com/doi/pdf/10.4081/rt.2013.e23",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f6130ebd8dfb014e1146b58044526cb0719b943",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
56179342
|
pes2o/s2orc
|
v3-fos-license
|
Public health research in India in the new millennium: a bibliometric analysis
Background Public health research has gained increasing importance in India's national health policy as the country seeks to address the high burden of disease and its inequitable distribution, and embarks on an ambitious agenda towards universalising health care. Objective This study aimed at describing the public health research output in India, its focus and distribution, and the actors involved in the research system. It makes recommendations for systematically promoting and strengthening public health research in the country. Design The study was a bibliometric analysis of PubMed and IndMed databases for years 2000–2010. The bibliometric data were analysed in terms of biomedical focus based on the Global Burden of Disease, location of research, research institutions, and funding agencies. Results A total of 7,893 eligible articles were identified over the 11-year search period. The annual research output increased by 42% between 2000 and 2010. In total, 60.8% of the articles were related to communicable diseases, newborn, maternal, and nutritional causes, comparing favourably with the burden of these causes (39.1%). While the burdens from non-communicable diseases and injuries were 50.2 and 10.7%, respectively, only 31.9 and 7.5% of articles reported research for these conditions. The north-eastern states and the Empowered-Action-Group states of India were the most under-represented for location of research. In total, 67.2% of papers involved international collaborations and 49.2% of these collaborations were with institutions in the UK or USA; 35.4% of the publications involved international funding and 71.2% of funders were located in the UK or USA. Conclusions While public health research output in India has increased significantly, there are marked inequities in relation to the burden of disease and the geographic distribution of research. Systematic priority setting, adequate funding, and institutional capacity building are needed to address these inequities.
Although research is increasingly recognised as one of the driving forces behind global health and development, the research output from low- and middle-income countries (LMICs) such as India compares poorly with that of high-income countries (1-5). This phenomenon has been powerfully captured by what the Global Forum for Health Research popularised as the '10/90 gap': the fact that of the over $70 billion spent worldwide on health research each year, only about 10% is invested in research into 90% of the Global Burden of Disease (GBD). This inequity in the global distribution of health research is further compounded by regional inequities, for example, in the biomedical focus of research, and in geographical and population representation.
As a result, the knowledge generated by health research does not adequately address the needs of countries and hinders the implementation of evidence-based policy and practice. It is in this context that there are increasing calls for strengthening health research capacity in developing countries as a 'critical element for achieving health equity' (6,7).
The public health research situation in India is characteristic of the low priority given to public health more generally. A recent review by Dandona et al. (8) underscored that public health research in India needs far greater priority, capacity, and resource support if there is to be a positive change in the production of such research in the country and, by its application, the promotion of healthier lives for its population (9). A focus on addressing health inequalities, on evidence-based policy making, on universal health care, and on achievement of the Millennium Development Goals are notable public health goals of the new millennium, both globally and in India. In India, public health research has been emphasised as a core investment and tool to guide policy and practice as the country embarks on an ambitious agenda to universalise health care (10,11). The formation of the Department of Health Research is an example of a step by the government in this direction. This is an institution created in 2007 by the Indian government under the Ministry of Health and Family Welfare, which is the central ministry for health in India. The primary mandate of this department is to promote and co-ordinate basic, applied, operational, and clinical research; provide guidance on research governance; promote inter-sectoral and international collaborations; as well as advance training and grants in medical and health research (12).
It is in this context that we undertook a systematic situational analysis of public health research in India in the new millennium, with the aim of describing the public health research output, whether its focus reflects the current burden of disease, whether the research is equitably distributed across the country, and the research institutions, funders, and collaborations involved in public health research.
Methods
Bibliometric analysis is a method used to describe patterns of publication within a given field or body of literature (13-15). The methodology used in this study parallels other bibliometric studies undertaken to evaluate research production in specific scientific disciplines and/or world regions (16-18). Two data sources were selected: PubMed, an open-access international database of medical journals, and IndMed, an open-access database of Indian medical journals. The search strategy was determined by the operational definitions of the relevant terms, public health and public health research, which are the focus of this study. Several definitions of public health exist, notably those captured by Acheson in 1999 and by Last in 2000, which typically reflect the wide scope of public health itself (19,20). Definitions of both public health [as stated by the World Health Organization (WHO) in 1998] and of public health research (as stated by Strengthening Public Health Research in Europe) accept that the key common points are the population approach (public health) and the production of generalisable knowledge (research) (21,22).
In the case of PubMed (www.ncbi.nlm.nih.gov/pubmed), an 'advanced search' of the title, keywords, and the entire article was conducted with Medical Subject Headings (MeSH), a comprehensive vocabulary for the purpose of indexing journal articles in the life sciences. In the MeSH tree, health care is a 'major topic,' which includes public health as a sub-head (23). Since health care also included articles that were not related to public health, a combination of the two MeSH terms was used.
The search terms used were:
1. MeSH major topic: health care + public health, AND
2. Text word: India, AND
3. Publication date: from 2000/01/01 to 2010/12/31.
The search yield was 7,844 references. Selected abstracts were directly imported into an EndNote library. To ensure that all articles related to public health had been included, analyses to test the accuracy of the search terms were conducted for combinations of the MeSH major topic health care with the MeSH terms diseases, mental disorders, social sciences, and Anthropology, Education, Sociology, and Social Phenomena. For the first accuracy analysis, it was found that all relevant articles were included in the primary search (health care + public health). For the fourth accuracy analysis, 2,566 articles were found to be relevant to our study but were not included in the original search yield. These were added to make the total PubMed yield 10,410.
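For readers who wish to reproduce a query of this kind programmatically, the sketch below uses Biopython's Entrez wrapper for the PubMed E-utilities. The exact field tags, the MeSH heading spelling, and the e-mail placeholder are assumptions; the study itself ran the search through PubMed's advanced-search interface, not this API.

```python
# A minimal sketch of a PubMed search resembling the one described above,
# using Biopython's Entrez E-utilities wrapper.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # required by NCBI; placeholder address

query = (
    '("delivery of health care"[MeSH Major Topic] AND "public health"[MeSH Terms]) '
    'AND India[Text Word] '
    'AND ("2000/01/01"[Date - Publication] : "2010/12/31"[Date - Publication])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=10000)  # paging needed for larger yields
record = Entrez.read(handle)
handle.close()

print("Records found:", record["Count"])
pmids = record["IdList"]  # PMIDs can then be exported, e.g. into an EndNote library
```

Counts retrieved this way may differ slightly from the figures reported above, because PubMed indexing changes over time.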
IndMed is a database covering peer-reviewed Indian biomedical journals and complements PubMed. It covers 62 journals indexed from the publication year 1985 onwards. After reviewing the 'advanced search' option in IndMed with 'public health' in keywords and the year of publication (individually for each year from 2000 to 2010), we observed that the results were unlikely to be complete. For instance, only 19 abstracts were listed for the year 2000 with this search combination from all journals. Thus, we used a different strategy, searching each journal individually. Of the 62 journals, 9 were indexed in PubMed. Of the remaining 53, 17 journals were selected on the basis of a table-of-contents analysis revealing at least 5% of the articles per randomly selected set of issues on themes of public health research. The indexing of these 17 journals was incomplete for most journals. To address these gaps, additional searches were conducted. The first strategy involved web searches of the tables of contents from the journal websites (four journals had websites with archives of abstracts). For seven journals, external websites or databases were used to close data gaps. For the remaining six journals, hand searches were conducted in the following libraries: the National Medical Library and the B.B. Dikshit Library at the All India Institute of Medical Sciences, Delhi, and the Dorabji Tata Library at the Tata Institute of Social Sciences, Mumbai.
We screened abstracts of all identified articles from either of these two databases for inclusion for bibliometric
analysis. In the case of articles that did not have abstracts, the full text was screened. The following inclusion criteria were used: 1. Published in the English language. 2. Data-based (using primary and/or secondary data). 3. Undertaken in India, either exclusively or with India as one of the countries in a multi-country study.
To ensure reliability, two independent reviewers screened each paper and the two EndNote libraries were matched, thus leading to a reliability check of 100% of the selected abstracts. In addition, a randomly selected sample of 500 abstracts from across the 11 years was manually checked by a third reviewer.
Based on the inclusion criteria, 5,869 articles from PubMed and 2,024 articles from IndMed were found to be eligible, yielding a total sample of 7,893 articles. Each abstract (or full-text of papers without abstracts) of the 7,893 eligible papers were reviewed by two independent reviewers and categorised under biomedical disease focused papers or papers that described determinants, policy, and practice. Biomedical disease focused papers were further categorised into three categories based on the GBD Study definitions, viz., GBD 1 included studies on communicable diseases, maternal and neonatal health, and nutritional disorders; GBD 2 included studies on noncommunicable diseases and mental and behavioural disorders; and GBD 3 included studies on injuries. Articles that involved research on two or more GBD categories were classified under each of them. The non-disease category included articles on social determinants of health, history of medicine, ethics, policy, and programmatic research that is not related to specific disease burden categories. Abstracts were categorised independently by the two reviewers; discrepancies were addressed by consulting a third reviewer.
To analyse the disease focus and geographical distribution of public health research in India, data were extracted into a spreadsheet for the following parameters from each article: 1) disease focus, as per the GBD categories; 2) location of the research study across all states and union territories of India; 3) the corresponding author's institution (as a proxy for the research institution leading the study); and 4) the location of the corresponding author's institution across all states and union territories of India.
To analyse funding source and international collaborations, we randomly selected 1,600 articles (20% of the total sample) for more detailed analyses of the full manuscript. We also attempted to fill data gaps in any of these categories of information through web-based searches and direct communication with authors. This yielded 1,076 papers with information about collaborations (approximately 67% of the sub-sample, and 13.7% of the total sample), and 870 papers with funding sources (approximately 54% of the sub-sample and 11% of the total sample).
Descriptive analysis and frequencies were used to describe absolute outputs over time, examine outputs in different categories of GBD over time, geographical distribution of research/research institutions, collaborations, and funders.
Ethics statement
The study was reviewed and has been approved by the Institutional Review Board of Sangath (Sangath-IRB).
Absolute research output
The total number of eligible articles included in the bibliometric analysis from both PubMed and IndMed was 7,893 (5,869 from PubMed and 2,024 from IndMed). The process of data collection is shown in Fig. 1.
Distribution of public health research
Out of the 7,893 papers, 6,103 reported the topic of research as one or more of the GBD conditions. We observed that the majority of the papers with a biomedical focus were related to conditions in the GBD 1 category across all 11 years (60.8%, 3,711/6,103), compared with a burden of disease, as estimated at the mid-point of the decade in 2004, of 39.1% (Fig. 3). The proportion of lost DALYs (disability-adjusted life years) caused by conditions under the GBD 2 category for India was 50.2% in 2004. Compared with this burden, only 31.7% (1,933/6,103) of publications focused on conditions under this category. The proportion of research focused on diseases in GBD 3 was 7.5% (458 out of 6,103), which is slightly lower than the burden of disease in this category (10.7%) in India.
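The comparison reads more directly when the shares are computed side by side; the short sketch below reproduces the research-versus-burden gap using only the counts and 2004 burden percentages quoted in this paragraph (the layout of the printout is ours).

```python
# Compare the share of publications per GBD category with the 2004 burden share.
article_counts = {"GBD 1": 3711, "GBD 2": 1933, "GBD 3": 458}
burden_share_2004 = {"GBD 1": 39.1, "GBD 2": 50.2, "GBD 3": 10.7}  # % of DALYs

total = 6103  # papers with a biomedical (GBD) focus
for category, n in article_counts.items():
    research_share = 100.0 * n / total
    gap = research_share - burden_share_2004[category]
    print(f"{category}: {research_share:4.1f}% of papers vs "
          f"{burden_share_2004[category]:4.1f}% of burden (gap {gap:+.1f} points)")
```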
We observed a trend of reduced proportion of GBD 1 and a proportionate increase in those related to GBD 2 over time, although the proportionate distribution of research in the later years still does not match the burden of disease reported in the GBD 2010 (Fig. 4).
The geographical distribution of public health research output is skewed. To assess this, we considered the Empowered Action Group (EAG) states; the EAG was constituted by the Ministry of Health and Family Welfare in 2001 to facilitate area-specific interventions for the eight most populous and poorest states (viz. Bihar, Chhattisgarh, Jharkhand, Madhya Pradesh, Rajasthan, Orissa, Uttarakhand and Uttar Pradesh) (24). Although these states together account for 45.9% of India's population and 56.5% of the poor, they were the location of just 10% of publications (801/7,893). This is presented in Fig. 5.
The research actors
Out of our total sample of 7,893 papers, 7,706 papers reported corresponding addresses. From this sample, 78.4% (6,044/7,706) reported an Indian research institution. In total, 42.5% (2,572/6,044) of the papers were produced from research institutions located in just three states of Delhi, Maharashtra, and Tamil Nadu. Table 1 lists the 15 leading research institutions in India. Together these institutions produced 21% (1,258/6,044) of the research papers from India during the last decade; the majority of these institutions were located in Delhi and Maharashtra. Another observation was the disparity in production of research even among these top 15 institutions, which ranged from a maximum of 555 papers to a minimum of 13. The north-eastern seven states accounted for the least number of research institutions (1.4%,111/7,706), Table 2. Together, these institutions led 26.9% (442/1,662) of the papers and were involved in collaborations on 89% (187/210) of the papers.
Eight hundred and seventy papers of the sub-sample of 1,600 papers yielded information on funding sources. In total, 34.1% (297/870) listed an Indian funding agency and the remaining two-thirds (573/870) listed a foreign funding source. The main funding institutions supporting public health research in India are listed in Table 3. In total, 81.5% (709/870) of papers were funded by these 10 agencies. While all four Indian funders are governmental institutions, the international funding agencies represent a mix of multilateral and bilateral organisations (e.g., WHO and the Department for International Development, UK).
Discussion
This paper describes the results of an analysis of public health research in India in the new millennium. The data source was a bibliometric analysis of one of the largest international and the largest national databases of medical research. Our main findings were that while public health research output has increased substantially over the course of the first decade of the new millennium, there is considerable maldistribution of research in terms of the disease focus and the geographical focus. Most research is funded by international donors with relatively low levels of domestic public or private sector investment. International academic partners, particularly from the USA and the UK, play influential roles in research, with little evidence of south-south partnerships with other developing countries. In a country which bears a disproportionate amount of the GBD, it was reassuring to observe that the total number of publications based on public health research in India has substantially increased over the first decade of the millennium; however, this increase (of 72.3%) falls well below that of other middle-income countries such as South Africa (225% increase from 2000 to 2010) (25,26), Mexico (102% from 1995 to 2004) (27), and Brazil (241% increase from 1995 to 2004) (28). This absolute increase in the volume of publication masks striking inequities both in terms of the research focus and the research settings. Even according to the recent GBD estimates of 2010, while GBD 2 and 3 conditions accounted for 45 and 12% (together 57%) of the burden of disease, just 35 and 7% (42%) of papers focused on these conditions (29). These findings are consistent with the only other bibliometric study from India and those from other LMICs (2-5, 30). This skewed picture has been attributed to the misconceived notion of research agencies and donors regarding the association of these diseases with affluence (27, 31-34), even though the majority of GBD 2 and 3 conditions are more frequent among poorer populations in LMICs (27, 35-40).
In addition to the under-representation of research on leading causes of the burden of disease in India, there is a markedly inequitable representation of vulnerable contexts or population groups in India. Capacities exist, but are unequally distributed, as is evident from the concentration of research institutions in richer states of the country such as Delhi, Maharashtra, West Bengal, and Tamil Nadu. A number of factors contribute to these maldistributions: dependence on foreign funding and donor-driven research priorities, asymmetries in the capacities of researchers and institutions leading to a concentration of research in a few subject areas and geographies, and a policy and research-system vacuum. The lack of research institutions in the states contributing the highest proportions of poverty and disease burden in the country potentially contributes to a vicious cycle of low capacity to carry out public health research that is relevant to these populations.
International institutions, both donors and research partners, play a leading role in public health research in the country. Two-thirds of the publications were based on research funded by foreign donors. This compares unfavourably with other middle-income countries such as Brazil and China, where 74.3 and 78.6% of the total health research funding comes from domestic public sector agencies and only 2.2 and 8.8% comes from international funding agencies (41-44). This reliance on international funding may contribute to the inequities in the distribution of research, such as an undue focus on international goals like the MDGs. These issues of skewed priorities and funding need to be addressed through a significant increase in domestic investments in public health research that is transparent, accountable, and responsive to the burden of disease and the needs of diverse geographical regions and populations of the country. There is also a need for domestic private philanthropies to support public health research; in Brazil, for example, domestic private sector organisations contribute 23.3% of investments in public health research (43). Channelling private-sector support towards public health research assumes special relevance in the context of the recent Companies Bill that mandates 2% allocation of profits of listed companies towards corporate social responsibility (45).
Given the inequitable distribution of research institutions and focus areas in the country, a focus on capacity-strengthening efforts to build institutions, especially in resource-poor states and in neglected public health focus areas, is urgent. However, attracting and retaining researchers within institutions requires coordinated strategies that address familiar barriers such as the lack of academic liberty, absence of professional incentives, poor and non-transparent funding, bureaucratic obstacles, and unclear career pathways (9). The weak public health research environment in India needs strengthening through a comprehensive approach. There is often little communication and consultation between the producers of research and the users of research: policy-makers, health providers, civil society, the private sector, other researchers, and the general public. It is important to recognise that the health research process spans the entire spectrum of policies related to knowledge creation as well as its diffusion and use. Therefore, a well-coordinated, systematic approach to health research needs to involve all stakeholders. For instance, priority setting needs to underlie efforts to increase the quality, relevance, and production of research by considering whether there is a demand for this research. The paucity of forums to interact and share knowledge, the inaccessibility of existing global resources and information asymmetry, and the lack of systematic dissemination of research towards policy and practice all lead to a weak research ecosystem. Collaborations between domestic as well as international researchers and institutions can foster such exchange and access. Evidence from South Africa and Brazil suggests that international collaborations dramatically boost the volume of health research publications in high-impact peer-reviewed journals (46,47). To realise the potential of collaborative research, it is crucial that local capacities are strengthened and that relationships between domestic and international institutions are based on equal partnerships. An issue of note here is the dominance of the USA and the UK in collaborations for public health research in India. South-south collaborations, either with countries such as Brazil or South Africa with vibrant public health research cultures, or with other countries in South Asia which share similar public health priorities, were negligible. Steps need to be taken to encourage such cooperation, for example by facilitating discussions and the sharing of national experiences; supporting cross-border training; developing networks of researchers, policy-makers, and institutions; and increasing the political visibility of health (48-50).
The weakness of the governance systems that regulate and monitor public health research in the country often leads to insufficient coordination. Research activities in various health-related fields have been fragmented, isolated from each other, and wastefully duplicative. In a context like India, where both financial and human resources are scarce, this is inefficient and sub-optimal. While the Department of Health Research was set up under the Ministry of Health and Family Welfare by the Government of India in 2009-2010 (12), a policy for health research, a clear mandate and empowerment of the Department, and systems of convergence with existing departments and government institutions have yet to be clearly articulated. The current need in India is for the health research system to identify priorities; mobilise resources, both public and private, and maximise the use of existing ones; develop and sustain the human and institutional capacity necessary to conduct research; disseminate research results to target audiences; apply research results in policy and practice; and evaluate the impact of research on health outcomes. Good quality research can and must be generated to continuously address critical knowledge and practice gaps to advance innovation in, and improve implementation of, public health programmes. Such research cannot be viewed as an indulgence in resource-poor states but needs to be at its most creative and relevant in precisely those contexts.
The last decade has seen some positive developments in the area of health. The recommendations for universalisation of health coverage (10), increased investments in health in the 12th Five-Year Plan period (11), and the proposal for a comprehensive and convergent National Health Mission (11) are all desirable goals, which need evidence generation for their effective implementation. Public health research priorities and investments need to be convergent with, and not parallel to, these goals.
This study suffers from the typical limitations of bibliometric analyses, namely that articles or journals that are not indexed are missed. Another limitation is the risk of misclassification of articles (in particular regarding focus areas), despite our robust efforts to minimise this bias. Additionally, newer articles published from 2011 to date have not been included within the scope of this study, and we acknowledge that there might be changes in the trends of public health research in India in the last 4 years. Nevertheless, our findings represent the most comprehensive analysis of public health research in India in the current millennium and serve as a reference for the evaluation of future research production metrics.
Conclusions
While public health research output in India has increased significantly in the first decade of this millennium, there are marked inequities in relation to the burden of disease and the geographic distribution of research. Systematic priority setting, adequate funding, and institutional capacity building are needed to address these inequities. It is imperative that India invests adequately in developing a vibrant and rigorous ecosystem of public health research at the heart of its public health strategy.
Authors' contributions
VP and AK conceived the study. VP provided overall guidance. AK led the bibliometric analysis and SS led the stakeholder analysis. AK prepared the first draft. VP, AK and SS finalized the draft.
|
2018-04-03T01:18:28.716Z
|
2015-08-14T00:00:00.000
|
{
"year": 2015,
"sha1": "e6014a1335cb24ff0cbc6a09f54b3605f49618e6",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.3402/gha.v8.27576?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2f996574c090c5b0ecc0042e3e218d694a2e885",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
244434347
|
pes2o/s2orc
|
v3-fos-license
|
Three-Component Microseismic Data Denoising Based on Re-Constrain Variational Mode Decomposition
Microseismic monitoring is an important technology used to evaluate hydraulic fracturing, and denoising is a crucial processing step. Analyses of the characteristics of acquired three-component microseismic data have indicated that the vertical component has a higher signal-to-noise ratio (SNR) than the two horizontal components. Therefore, we propose a new denoising method for three-component microseismic data using re-constrain variational mode decomposition (VMD). In this method, it is assumed that there is a linear relationship between the modes with the same center frequency among the VMD results of the three-component data. Then, the decomposition result of the vertical component is used as a constraint on the denoising of the three-component data. On the basis of VMD, we add a constraint condition to form the re-constrain VMD, and derive the corresponding solution process. According to the synthetic data analysis, the proposed method can not only improve the SNR level of three-component records, but also improve the accuracy of polarization analysis. The proposed method also achieved a satisfactory effect on field data.
Introduction
Microseismic monitoring is a technology used to monitor the subsurface fracturing of rocks caused by human activities [1][2][3][4]. Recognizing microseismic event signals is the first step in microseismic monitoring. However, owing to their characteristics (e.g., low energy, high frequency, and short duration), microseismic event signals are easily disrupted by various factors, which increases the difficulty of subsequent processing and interpretation. Therefore, it is necessary to denoise the monitoring record.
The denoising process for microseismic data is mainly divided into two basic strategies according to the type of data: multi-channel denoising and single-channel denoising [5]. For the multi-channel strategy, microseismic data are denoised by the spatial distribution information of the geophone. Gan et al. [6] used a median filter based on the seismic profile structure to suppress blending noise. Chao et al. [7] used a method based on the correlation between the three components of microseismic to identify the 3D shearlet transform coefficient and effectively remove the noise. Bai et al. [8] established a model based on the least squares method to decompose seismic data using the spatial and temporal relationships among multi-channel seismic data. The single-channel strategy is mainly used for denoising via time-frequency transformation and signal decomposition. Chakrabort et al. [9] used wavelet time-frequency analysis to suppress the seismic noise. Mousavi et al. [10] combined the synchrosqueezed continuous wavelet transform and a detection function to remove noise from the signal.
Noise reduction methods based on decomposition and reconstruction are widely used in single-channel noise reduction [11], such as empirical mode decomposition (EMD) [12,13] which is used to process the real and imaginary parts of the frequency domain of the signal in the time window to reduce random and coherent noise. Chen et al. [14] applied an AR mode to process the results of EMD in the frequency domain to reduce random noise. However, EMD has some drawbacks in practical applications, such as mode mixing [15]. Han et al. [16] used ensemble empirical mode decomposition (EEMD), which was proposed by Wu et al. [17] to overcome the problem of mode mixing and to process the real and imaginary parts of the frequency domain of the signal in the time window to reduce the noise. Chen et al. [18] applied wavelet threshold filtering to process the high-frequency intrinsic mode function (IMF) of the EEMD, and then reconstructed the high frequency and low frequency to reduce the noise. Dong et al. [19] combined the complex curvelet transform and complementary ensemble empirical mode decomposition (CEEMD) which was proposed by Yeh et al. [20] to improve the effect of decomposition. Zuo et al. [21] selected the IMFs of the CEEMD using the self-correlation coefficient, then used the wavelet packet threshold to process the IMFs, and finally reconstructed the IMFs to suppress the noise. Peng et al. [22] selected the EEMD mode by the variance contribution rates (VCRs), which removed the modes with VCR < 0.01 and others retained. Then, each retained mode was constructed as a Hankel matrix, and it was processed by principal component analysis (PCA) to complete denoising.
Compared with EMD, EEMD, and CEEMD, VMD can further decrease the redundant modes and solve the mode-mixing problem, and has a more powerful anti-noise performance [23]. Since then, many studies have applied VMD to reduce noise. Liu et al. [24] used VMD to perform time-frequency analysis of seismic data to suppress noise and highlight the geologic characteristics. Li et al. [25] selected IMFs to reconstruct a signal based on detrended fluctuation analysis (DFA). Huang et al. [26] performed a correlation analysis on the IMFs of the VMD to suppress the noise. Zhou et al. [27] combined VMD and odd spectrum analysis to remove the remaining low-frequency noise of the VMD. Li et al. [28] applied time-frequency peak filtering for the IMFs of VMD and reconstructed the signal to reduce noise. Liu et al. [29] used VMD to suppress ground rolling waves to achieve denoising. Zhang et al. [30] used the Akaike information criterion to judge the results of the VMD and selected the threshold to suppress the noise. In the above application research, most scholars have focused on processing IMFs to achieve a better denoising effect while using VMD. Li et al. [31] applied DFA to determine whether the VMD modes of the water inrush signals were random noise or a valid signal, and eliminated the noise in the mode using DFA.
Analyses of the characteristics of the acquired microseismic data have found that the P-wave of the vertical component has a higher SNR than that of the horizontal components, which is not conducive to polarization analysis [32,33]. Therefore, a noise reduction method for the horizontal components is needed. When the three-component microseismic data are processed by VMD, the horizontal components are more easily affected by noise of similar frequency than the vertical component, because the level of random noise of the horizontal components is higher than that of the vertical component. Rodriguez et al. used a redundant dictionary to denoise the three components of microseismic data based on the sparse coefficient distribution of the three components [34]. Therefore, to improve the SNR of the horizontal components, we assume that each component consists of multiple IMFs and that there is a linear relationship between the modes with the same center frequency, so that the P-wave of the horizontal components can be constrained by the vertical component. In this study, we propose a new method that adds a new constraint to the original formulas of VMD.
The proposed method simultaneously processes three-component microseismic data based on the VMD. However, the formulas for processing are different between the vertical component and the horizontal components. Based on the linear relationship among the three components of microseismic data, and the non-linear relationship of the random noise in the three-component data, the vertical component is processed by the original formula of VMD, and the horizontal components are processed by the re-constrained formula of VMD.
The aim of this processing is to make the signal amplitude distribution of the horizontal components consistent with that of the vertical component as much as possible.
In the verification phase of the proposed method, synthetic three-component data are designed in which the SNR of the horizontal components is relatively low and that of the vertical component is relatively high. After denoising, the polarization characteristics of the data were analyzed to validate the effectiveness of the proposed method. In the practical phase, field microseismic data were processed using the proposed method. The corresponding results show that the proposed method can effectively improve the SNR of three-component data as well as the precision of the polarization information.
Variational Mode Decomposition
VMD is a signal decomposition method in which the IMFs and corresponding center frequencies are obtained through iteration. The method first assumes that the original signal is decomposed into K mode components. It then takes the minimum Gaussian smoothness of each mode component as the goal, with the sum of the modes being equal to the decomposed signal serving as the constraint for the optimization. Finally, the decomposition results are obtained. The VMD expression is defined as follows:

$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\} \quad \text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = f(t) \tag{1}$$

where K is the number of decomposition modes, u_k is the kth IMF, ω_k is the center frequency of the kth mode, f is the original signal, * is the sign of convolution, and δ(t) is the Dirac delta function, which indicates a unit impulse.
The Lagrange multiplier method is used to solve the problem. A Lagrangian multiplier is introduced to transform the constrained variational problem into an unconstrained variational problem. The update formulas are derived as follows:

$$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \frac{\hat{\lambda}(\omega)}{2}}{1 + 2\alpha(\omega - \omega_k)^2}, \qquad \omega_k^{n+1} = \frac{\int_0^{\infty} \omega \left| \hat{u}_k^{n+1}(\omega) \right|^2 d\omega}{\int_0^{\infty} \left| \hat{u}_k^{n+1}(\omega) \right|^2 d\omega} \tag{2}$$

where α is the penalty factor, f(ω) is the Fourier transform of the original signal, ω_k is the center frequency of the kth IMF, u_k(ω) is the Fourier transform of the kth IMF, and λ(ω) is the Fourier transform of the Lagrangian multiplier.
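To make the iteration concrete, the following is a compact, simplified Python/NumPy sketch of the alternating updates in Formula (2). It is not the authors' implementation: the published algorithm mirror-extends the signal against boundary effects and works on the analytic (one-sided) spectrum, whereas this sketch simply restricts the updates to non-negative frequencies, and all parameter values are illustrative.

```python
import numpy as np

def vmd(signal, K=3, alpha=2000.0, tau=0.0, n_iter=300, tol=1e-7):
    """Simplified VMD sketch: returns (modes [K x N], normalised centre frequencies)."""
    N = len(signal)
    f_hat = np.fft.fft(signal)
    freqs = np.fft.fftfreq(N)           # normalised frequency axis
    pos = freqs >= 0                    # update only the non-negative half
    u_hat = np.zeros((K, N), dtype=complex)
    omega = np.linspace(0.05, 0.45, K)  # initial centre frequencies (illustrative)
    lam = np.zeros(N, dtype=complex)    # Lagrangian multiplier spectrum

    for _ in range(n_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener-type update of the k-th mode spectrum (first part of Formula 2)
            u_hat[k, pos] = (f_hat[pos] - others[pos] + lam[pos] / 2) / \
                            (1 + 2 * alpha * (freqs[pos] - omega[k]) ** 2)
            # Centre-of-gravity update of the centre frequency (second part of Formula 2)
            power = np.abs(u_hat[k, pos]) ** 2
            omega[k] = np.sum(freqs[pos] * power) / (np.sum(power) + 1e-12)
        # Dual ascent on the reconstruction constraint (tau = 0 disables it, a common
        # choice for noisy records)
        lam[pos] = lam[pos] + tau * (f_hat[pos] - u_hat[:, pos].sum(axis=0))
        change = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-12)
        if change < tol:
            break

    # One-sided spectra -> real modes (factor 2 restores the negative-frequency energy)
    modes = 2 * np.real(np.fft.ifft(u_hat, axis=1))
    return modes, omega
```

Calling `vmd(trace, K=3)` on a single zero-mean trace returns the K narrow-band modes and their normalised centre frequencies; summing a subset of the modes gives a denoised reconstruction.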
Re-Constraint of Variational Mode Decomposition
In the ideal state, the relationship between the three components of the microseismic data can be regarded as linear:

$$H_1 = aV, \qquad H_2 = bV \tag{3}$$

where H_1 and H_2 are the horizontal components, V is the vertical component, and a and b are coefficients that represent the linear relationships between the components. Since the center frequencies of the P-wave and S-wave of the same microseismic event are different, this study assumed a linear relationship for the modes with the same (or nearly the same) center frequencies in the VMD decomposition results of the different components. Based on this relationship, we re-constrained the VMD and obtained a new formulation that adds a new constraint condition:

$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k=1}^{K} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\}$$
$$\text{s.t.} \quad \sum_{k=1}^{K} u_k(t) = f(t)$$
$$\qquad u_k(t) = a_k v_k(t), \quad k = 1, \ldots, K \tag{4}$$

where v_k is the kth IMF after the decomposition of the vertical component, u_k is the kth IMF after the decomposition of the horizontal component, and a_k is a constant that expresses the linear relationship between the kth IMFs. The third line of Formula (4) is the re-constrained condition, indicating that there is a linear relationship between the modes.
By using the Lagrange multiplier method, we get the corresponding updated formulas: where α is the penalty factor, f(ω) is the Fourier transform of the original signal, ω k is the center frequency of the kth IMF of the horizontal component, and u n+1 k is the Fourier transform of the kth IMF. λ(ω) and µ(ω) are the Fourier transforms of the Lagrangian multipliers.
Under the noise-free condition, the VMD results for three-component data share the same frequency distributions. However, owing to the influence of various factors, the frequency distributions of the VMD results will be different in field data. Because the characteristic of the SNR of the vertical component is better than that of the horizontal component, we updated the horizontal component IMFs by using the center frequency of the vertical component IMFs in the iteration processing. The premise of using this method to process three-component microseismic data is that the SNR of the vertical component is higher than that of the horizontal components. Thus, we obtained the new updated formulas: where α is the penalty factor, f(ω) is the Fourier transform of the original signal, ω zk is the center frequency of the kth IMF of the vertical component, and u n+1 k is the Fourier transform of the kth IMF. λ(ω) and µ(ω) are the Fourier transforms of the Lagrangian multipliers.
As shown in Figure 1, the processing flow of the re-constrain VMD is as follows (a simplified code sketch of this flow is given after the list):
1. Load the initial three-component microseismic data.
2. Extend the data by mirroring against the boundary effect, and perform the Fourier transform of the original data. The related parameters of the subsequent processing are initialized.
3. Update the vertical component variables using the VMD update formulas, as in Formula (2).
4. Update the parameters of the horizontal components using the re-constrain VMD update formulas, as in Formula (6).
5. Judge whether the end condition of the iteration is satisfied. The end condition is determined by the maximum number of iterations. If the condition is not satisfied, repeat from Step 3.
6. Perform the inverse Fourier transform on the previous output, and remove the mirrored data. The final result is then obtained.
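The sketch below illustrates the core of Steps 3-4 in simplified form: the vertical trace is decomposed with free centre frequencies, and the horizontal traces are then decomposed with their centre frequencies pinned to those of the vertical component. The Lagrangian multipliers, the explicit linear-relationship constraint of Formula (4), and the mirror extension are deliberately omitted, and the synthetic traces and parameters are assumptions for illustration only.

```python
import numpy as np

def vmd_core(signal, K=2, alpha=2000.0, n_iter=200, fixed_omega=None):
    """Wiener-filter mode updates; centre frequencies are free (vertical trace)
    or pinned to `fixed_omega` (horizontal traces), mimicking Step 3 vs Step 4."""
    N = len(signal)
    f_hat = np.fft.fft(signal)
    freqs = np.fft.fftfreq(N)
    pos = freqs >= 0
    u_hat = np.zeros((K, N), dtype=complex)
    omega = (np.linspace(0.05, 0.45, K) if fixed_omega is None
             else np.asarray(fixed_omega, dtype=float).copy())

    for _ in range(n_iter):
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            u_hat[k, pos] = (f_hat[pos] - others[pos]) / (1 + 2 * alpha * (freqs[pos] - omega[k]) ** 2)
            if fixed_omega is None:  # free centre frequencies (vertical component only)
                power = np.abs(u_hat[k, pos]) ** 2
                omega[k] = np.sum(freqs[pos] * power) / (np.sum(power) + 1e-12)
    modes = 2 * np.real(np.fft.ifft(u_hat, axis=1))
    return modes, omega

# Steps 3-4: decompose V with free centre frequencies, then constrain H1 to them.
rng = np.random.default_rng(0)
t = np.arange(2000) / 1000.0
clean = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 70 * t)
v = clean + 0.2 * rng.standard_normal(t.size)          # higher-SNR vertical trace (assumed)
h1 = 0.8 * clean + 1.0 * rng.standard_normal(t.size)   # lower-SNR horizontal trace (assumed)

_, omega_z = vmd_core(v, K=2)                          # centre frequencies from V
h1_modes, _ = vmd_core(h1, K=2, fixed_omega=omega_z)   # H1 pinned to V's centres
h1_denoised = h1_modes.sum(axis=0)                     # constrained reconstruction of H1
```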
Synthetic Data
We used an attenuated simple harmonic to verify the effectiveness of this method. Figure 2 shows pure attenuated harmonic three-component data with frequencies of 30 Hz and 70 Hz. Figure 3 shows the waves contaminated with random noise. The SNR of the horizontal components is set to −10 dB, and the SNR of the vertical component is set to 5 dB. In this study, we used the SNR, which is defined as:

$$\mathrm{SNR} = 10\log_{10}\frac{\sum_{t=1}^{N} x^{2}(t)}{\sum_{t=1}^{N} n^{2}(t)}$$

where x(t) is the pure synthetic signal, n(t) is random noise, and N is the number of sampling points. Figure 4 shows the VMD results of the three-component data. Since the parameter k of VMD only impacts the frequency representation performance, it has no noticeable impact on the reconstruction performance [24]. In addition, the proposed method only focuses on the frequency representation of the vertical component, which is viewed as a re-constraint condition. Therefore, the parameter k for each component was set to the same value in this study. Figure 5 shows the decomposition results of the three-component synthetic data obtained by the re-constrain VMD. As shown in Figure 5, the redundant amplitude of the nonlinear relationship information was removed. The remaining modes were also suppressed because they are all random signals with no direct correlation.
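For completeness, the short sketch below implements this SNR definition and shows how a noise realisation can be rescaled to hit a target SNR; the attenuated 30 Hz trace and its decay constant are assumptions, not the exact synthetic used in the paper.

```python
import numpy as np

def snr_db(clean, noise):
    """SNR = 10*log10( sum(x(t)^2) / sum(n(t)^2) ) over the N samples."""
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def scale_noise_to_snr(clean, noise, target_db):
    """Rescale a noise realisation so that clean + scaled noise has the requested SNR."""
    factor = np.sqrt(np.sum(clean ** 2) / (np.sum(noise ** 2) * 10 ** (target_db / 10.0)))
    return noise * factor

rng = np.random.default_rng(1)
t = np.arange(2000) / 1000.0
x = np.exp(-3 * t) * np.sin(2 * np.pi * 30 * t)           # attenuated 30 Hz harmonic (assumed)
n = scale_noise_to_snr(x, rng.standard_normal(t.size), -10.0)
print(round(snr_db(x, n), 1))                              # -> -10.0
```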
To verify that the re-constrained modes of the two horizontal components (whose SNR is low) were not an illusion of noise fitting (in other words, that noise was not being treated as an effective component of the signal and participating in the reconstruction), we removed the pure synthetic signal and performed the re-constrain VMD on these two components. Figure 6 shows the corresponding results. For the re-constrained modes of the two horizontal components, there was only a weak correlation with the vertical component after re-constrained decomposition, indicating that the re-constrained modes of the two horizontal components were also affected by random noise, but that the degree of influence was quite limited. The first and second modes were reconstructed as the result of simple noise reduction. As shown in Figure 7, compared with the VMD results, the re-constrain VMD results exhibited better polarization characteristics and removed more background noise.
To better evaluate the denoising effect, we analyzed the polarization characteristics of the 30 Hz signal in the three-component data. The length of the selected time window was an entire period, and the 230th sampling point was manually selected as the starting point of this window, as shown in Figure 8. In the synthetic data in Figure 8, the two red lines indicate the range of the selected window for polarization analysis. It is evident that the polarization directions were changed for the synthetic data contaminated with random noise. After denoising based on VMD and re-constrain VMD, the accuracy of the calculated polarization directions was greatly improved. In addition, the hodograms based on re-constrain VMD were more compact than those based on VMD.
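The paper does not spell out the polarization estimator it uses, so the sketch below shows one common choice as an illustrative stand-in: eigen-decomposition of the three-component covariance matrix within the picked window, whose principal eigenvector gives the dominant polarization direction. The window indices mirror the 30 Hz example (one period starting at the 230th sample, assuming a 1 kHz sampling rate), and the toy traces are assumptions.

```python
import numpy as np

def polarization_direction(h1, h2, v, start, length):
    """Return the dominant polarization vector and rectilinearity in a time window."""
    window = np.vstack([h1[start:start + length],
                        h2[start:start + length],
                        v[start:start + length]])
    window = window - window.mean(axis=1, keepdims=True)
    cov = np.cov(window)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    principal = eigvecs[:, -1]                  # direction of largest variance
    rectilinearity = 1.0 - (eigvals[0] + eigvals[1]) / (2.0 * eigvals[-1])
    return principal, rectilinearity

# Toy usage on assumed noisy 30 Hz traces sampled at 1 kHz:
rng = np.random.default_rng(2)
t = np.arange(2000) / 1000.0
s = np.sin(2 * np.pi * 30 * t)
h1 = 0.6 * s + 0.05 * rng.standard_normal(t.size)
h2 = 0.3 * s + 0.05 * rng.standard_normal(t.size)
v = 0.7 * s + 0.05 * rng.standard_normal(t.size)
vec, rect = polarization_direction(h1, h2, v, start=230, length=33)  # ~one 30 Hz period
```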
Field Microseismic Data
The field three-component microseismic event data were acquired during the hydraulic fracturing of a shale gas reservoir. The original waveforms of this microseismic event are shown in the first row of Figure 9. These microseismic data include the P-wave and S-wave of the microseismic event. The range of the P-wave is approximately from the 1000th to the 1450th sampling point, and the range of the S-wave is approximately from the 1450th to the 2000th sampling point. Because of the formation mechanism of microseismic events, there is a partial overlap between the P-wave and S-wave. The SNR of the vertical component is better than that of the two horizontal components. The noise reduction results obtained using VMD and re-constrain VMD are also shown in Figure 9. Comparing the waveforms of the original data and the denoising results, the random noise was effectively suppressed by both methods. As shown in Figure 9, these two methods could suppress random noise to a certain extent. According to the original waveform of the horizontal component H1, the SNR of the P-wave is relatively low. Comparing the H1 components of the two groups of denoising results, it can be seen that the P-wave waveforms of the microseismic data were not well recovered using VMD. Correspondingly, the waveforms of the P-wave were more obvious after using re-constrain VMD. It can also be seen that the background noise was better suppressed after using re-constrain VMD, although this effect was controlled by the background noise intensity of the vertical component.
Because the P-wave is commonly used to analyze the polarization direction of microseismic events, we selected from the 1000th point to the 1070th point as the time window range, which is indicated by the red lines in Figure 9. The data in the time window were used to analyze the polarization and to judge the denoising effect. After denoising using VMD and re-constrain VMD, the polarization characteristics of the denoising results are shown in Figures 10 and 11. In this study, the enhancement of polarization information by denoising was evaluated by determining whether the polarization characteristic of the curve of the hodogram was clear, compact, and self-consistent. It was largely straightforward to determine whether the polarization characteristic of the curve was clear, except for the H1-H2 hodogram in Figure 10. Regarding compactness, all hodograms in Figure 11 are better than those in Figure 10. Regarding self-consistency, according to the positive or negative correlation among the three components, the hodograms in Figure 11 are better than those in Figure 10.
Conclusions
The proposed method mainly focuses on solving the challenge of low-SNR horizontal components of microseismic data. Among the different components, there is no correlation for random noise, whereas there is a linear relationship for the signal. For most field three-component microseismic data, the SNR of the vertical component is better than that of the two horizontal components. Therefore, by adding a new constraint to the VMD and modifying the update formulas, we propose the re-constrain VMD. The new constraint is used to represent the linear relationship of the signal, and the lack of correlation of the random noise, among the three components of the data.
According to the denoising results for the synthetic data and field microseismic data, we found that the proposed method could suppress random noise to a certain extent, and the effect was better than that of VMD. The subsequent polarization analysis also showed that the proposed method could obtain better polarization characteristics.
|
2021-11-21T16:26:18.140Z
|
2021-11-19T00:00:00.000
|
{
"year": 2021,
"sha1": "f9615281820ad0d958ec3b83c2582eff0c2b5ca2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/22/10943/pdf?version=1637580680",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "446bd96370920acda5dd5846839d407f0f580135",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": []
}
|
256960505
|
pes2o/s2orc
|
v3-fos-license
|
‘Cyclical Bias’ in Microbiome Research Revealed by A Portable Germ-Free Housing System Using Nested Isolation
Germ-Free (GF) research has required highly technical pressurized HEPA-ventilation anchored systems for decades. Herein, we validated a GF system that can be easily implemented and portable using Nested Isolation (NesTiso). GF-standards can be achieved housing mice in non-HEPA-static cages, which only need to be nested ‘one-cage-inside-another’ resembling ‘Russian dolls’. After 2 years of monitoring ~100,000 GF-mouse-days, NesTiso showed mice can be maintained GF for life (>1.3 years), with low animal daily-contamination-probability risk (1 every 867 days), allowing the expansion of GF research with unprecedented freedom and mobility. At the cage level, with 23,360 GF cage-days, the probability of having a cage contamination in NesTiso cages opened in biosafety hoods was statistically identical to that of opening cages inside (the ‘gold standard’) multi-cage pressurized GF isolators. When validating the benefits of using NesTiso in mouse microbiome research, our experiments unexpectedly revealed that the mouse fecal microbiota composition within the ‘bedding material’ of conventional SPF-cages suffers cyclical selection bias as moist/feces/diet/organic content (‘soiledness’) increases over time (e.g., favoring microbiome abundances of Bacillales, Burkholderiales, Pseudomonadales; and cultivable Enterococcus faecalis over Lactobacillus murinus and Escherichia coli), which in turn cyclically influences the gut microbiome dynamics of caged mice. Culture ‘co-streaking’ assays showed that cohoused mice exhibiting different fecal microbiota/hemolytic profiles in clean bedding (high-within-cage individual diversity) ‘cyclically and transiently appear identical’ (less diverse) as bedding soiledness increases, and recurs. Strategies are proposed to minimize this novel functional form of cyclical bedding-dependent microbiome selection bias.
The importance of germ-free (GF) animals as a laboratory resource has exponentially grown with our expanded understanding of the complex role of microbes in disease modulation [1][2][3][4][5][6][7] especially in the complex context of personalized diets, microbiome variability and genetics 8,9 . Although the use of GF mice in scientific publications has tripled over the last decade, GF facilities remain relatively scarce due to their high technical costs. Improving current GF research efficiency and experimental capabilities will allow more laboratories to adopt GF infrastructure to conduct more complex and parallel studies of diverse microbiotas 10 , as numerous diseases could be better treated with an improved causal understanding of microbes-diet-genomic interactions [11][12][13] . Novel complementary strategies to promote paralleled microbiome research are also needed since cross-contamination of cages in standard multi-cage pressurized isolators is a common and difficult problem to control when cages are enclosed together [14][15][16] .
Although mechanically pressurized ventilation with high-efficiency particulate arresting (HEPA) filtration has existed for decades in GF multi-cage isolation systems, and more recently in individually ventilated cages 17,18 , pressurized systems require anchored (nonmobile/nontransportable) infrastructure. Although all HEPA-pressurized isolators are 'transportable', arguably they cannot be moved freely by one person through a facility.
Results
NesTiso cage set design and thermography. To prevent contact with airborne particles (the vehicle for environmental exposure to microbes) [22][23][24], NesTiso is technically a 'double-caging/triple-barrier' or 'nesting 3-layer isolation' system. The system was implemented using commercially available static cages: we housed cohorts of GF mice born in HEPA-pressurized isolators (SAMP1/YitFc [SAMP] 25,26 , C57BL/6 [B6], and Swiss Webster [SW]) by placing the mice in mouse cages and then nesting these cages inside larger rat cages. For air filtration, both nested cages had spunbonded polyester non-HEPA filter lids 22 , which were hermetically attached to the cage bottoms using stretch plastic film. As a third layer, NesTiso sets were placed on an autoclavable steel rack-cart safeguarded with breathable autoclavable curtains (Fig. 1a-c). Although mechanical ventilation efficiently exchanges air in standard cages, its use arguably causes cold stress and immune alterations in mice [27][28][29]. Natural ventilation in NesTiso is based on heat convection from the mouse causing infrared thermo-physical effects on the surrounding air. Thermal studies in mice [29][30][31], architectural ventilation laws 21, and NesTiso thermography (Supplementary Figs 1 and 2) 32 indicate that cage air, if set at temperatures lower than that of the mouse, warms up by convection near the mouse via respiration or infrared reflectivity and rises, creating a column of air moving upward (a 'chimney effect'). Rising warm, humid air currents then promote replacement with heavier, colder clean air moving inward, causing passive air filtration as currents move in both directions through the non-HEPA filters (Supplementary Fig. 3).
External aeration improves natural ventilation. Because moisture condensation was noticeable in high-animal-density NesTiso cages (4-5 mice/cage), we quantified the air humidity in NesTiso, and the effect of external aeration on 7-day moistened, soiled bedding in empty NesTiso cages. Under laboratory conditions of stable air humidity (26.5 ± 4.89%) and temperature (23.8 ± 0.57 °C), natural air humidity in NesTiso was 3.7 ± 0.9% higher than in static single caging (70.3 ± 1.5%), and cage humidity fluctuated in parallel over time in both NesTiso and single cages, indicating proper moisture exchange in NesTiso based solely on natural moisture-driven ventilation (Fig. 1d). Because external ventilation around the cages could improve air exchange with the cage interior, we then quantified whether aeration of the cage-holding rack with a household fan could improve natural ventilation and lower the humidity within the NesTiso cages. Experiments on the humidity of moistened corncob bedding demonstrated that external aeration was effective at reducing bedding moisture in the innermost cage. Measurement of moist bedding weight changes over a 12-day period demonstrated that aeration produced steep evaporation curves of moist bedding in NesTiso (Fig. 1e), indicating that external aeration improves natural ventilation and bedding dehydration.
Microbial screening confirms NesTiso GF status. Feed-indwelling microbes that survive sterilization (gamma irradiation, autoclaving) 14, airborne particulates [22-24,33], and human skin microbes are common sources of contamination of GF mice. To validate NesTiso as a GF system, we used fecal gram staining and quantified the test agreement between aerobic and anaerobic cultures to identify the most efficient microbial screening. For this purpose, we transported 32 GF mice in 19 NesTiso sets to a microbiology laboratory (a non-GF, non-HEPA facility), where cages were opened twice under a biosafety HEPA hood to feed the mice an SPF-grade irradiated diet over a 10-day period. Culture results showed that aerobic cultivation of feces correctly predicted (94.8%) either mouse colonization with facultative anaerobes (only one cage had strict anaerobes) or the absence of microbes in GF mice by day 10 (Supplementary Fig. 4a-d). Kappa statistics 34 confirmed optimal test agreement (89.5%) between aerobic and anaerobic screening (kappa = 0.78 ± 0.23; Z = 3.42; Prob > Z = 0.0003). The global performance of both cultures was further tested using receiver operating characteristic (ROC) regression and bootstrapping, in which the probability of culture results from randomly selected contaminated cages (sensitivity) was compared to that of randomly selected non-contaminated cages (specificity) 35, using both cultures interpreted 'in series' 34 as the comparator. ROC predictions showed that aerobic and anaerobic incubation have the same probability of differentiating GF from colonized mice (P = 1.0). Because most environmental contaminants are robustly aerobic, we recommend routine aerobic fecal cultures, incubation of soiled cages for fungi, fecal gram staining and weekly anaerobic cultures (Supplementary Fig. 4e,f). Mice were confirmed GF, on average, with nine negative tests. Serological health screening was also conducted in NesTiso mice, confirming the absence of reactive antibodies and of inadvertent exposure to 23 cultivable and uncultivable pathogens, including viruses [36-45] (Methods and Supplementary Table 1).
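For readers unfamiliar with the agreement statistic used above, the sketch below computes Cohen's kappa from a hypothetical 2 x 2 table of paired aerobic/anaerobic culture calls; the counts are invented for illustration only and are not the study data. The same formula, kappa = (p_o - p_e) / (1 - p_e), underlies the value reported in the text.

# Cohen's kappa for agreement between two screening tests (hypothetical counts, illustration only)
tab <- matrix(c(10, 1,    # aerobic positive: anaerobic positive / anaerobic negative
                1, 26),   # aerobic negative: anaerobic positive / anaerobic negative
              nrow = 2, byrow = TRUE)
n   <- sum(tab)
p_o <- sum(diag(tab)) / n                       # observed agreement
p_e <- sum(rowSums(tab) * colSums(tab)) / n^2   # agreement expected by chance
kappa <- (p_o - p_e) / (1 - p_e)
c(observed_agreement = p_o, kappa = kappa)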
Containment of microbes and 'quadrant infection control' in NesTiso.
Collectively, this study represents two years of monitoring mice for a total of >99,530 mouse-days, divided across three rooms (A, B, C). To determine whether contamination events could be halted in NesTiso, we tested two strategies: in rooms A/B (~65 cages), in the event of a contamination we simultaneously tested and replaced all cages of the entire mouse colony and eliminated all contaminated cages, using an 'all-in-all-out' strategy; in room C (~35 cages), without testing the entire colony, we only eliminated newly contaminated cages. Since implementation 14, rooms A/B housed 62,780 mouse-days, of which 40,880 were in NesTiso (twice the isolators' capacity; ~23,360 cage-days, ~1,987 cage-openings; Table 1 and Supplementary Table 2). On average, NesTiso in rooms A/B (2.5 mice per cage; 26.9 weeks old) required over 1,381 routine fecal cultures and 300 cage-fungal incubations to monitor GF sterility. Only two cages were contaminated in room A (50 mouse-days), once with a fungus (Penicillium spp.) and, 8 months later, with a bacterium (Bacillus spp.). With ~1,987 cage-openings (on average once every 10 days), the risk of cage contamination with every opening was 0-0.1% (room A: 2/1,220; room B: 0/767). At the animal level, estimates indicate that the daily risk of mouse contamination in NesTiso is 1 out of every 817 (50:40,830) days of housing using the 'all-in-all-out' strategy, an interval longer than the age of our oldest GF mouse born and housed in NesTiso (1.39 years).

Figure 1. (a) Illustration of ventilation and air filtration in housing systems commercially available for mice, and in our Nested Isolation system (NesTiso; non-HEPA air filtration occurs in inward/outward directions as air currents move by natural ventilation and external aeration). Mouse photograph and thermography demonstrate that mice are a source of heat that instantly affects the temperature of the bedding material and surrounding elements via infrared reflectivity. Circles illustrate the hottest spot near the eye (35.9 °C) and the instant infrared reflection (heat radiation) that warms up surrounding surfaces (e.g., +2.9 °C on the bench top; details in Supplementary Figs 1-3). (b) NesTiso setting in the ultrabarrier GF room. (c) Germ-free NesTiso cage set in a biosafety cabinet housing one 40-week-old GF mouse during a 7-day DSS experiment (day 72 in NesTiso). Filter lids are sealed to cage bottoms using plastic wrap. Notice the space between the cages to store materials for individualized, repeated aseptic handling and weighing of mice (small orange box). (d) Comparison of naturally occurring air humidity inside heavily soiled, empty GF mouse cages monitored over time in NesTiso or standard single caging (NesTiso labeled as DC, for double caging, in the illustration; SC, single caging; 3 cages/group). Notice that NesTiso ventilation dynamics parallel those of SC. Air humidity differences were stable for four days and noticeable immediately after soiled cages were set up as NesTiso (y-axis, oval). (e) Effect of external aeration with a household fan on the humidity (wet weight) of experimentally moistened, soiled corncob bedding material (replicate sets A and B; without mice). Inset, actual bedding weight in grams (four replicas/cage) over time. Notice the markedly improved ventilation and evaporation (bedding desiccation) in both NesTiso and SC. Paired t-test, 4-6 cages/4 replicas per cage.
At the cage level, comparatively, the effectiveness of NesTiso was similar to that of managing cages in isolators, which had no contaminations in 2 years (0/548 cage-openings; 1-sided Fisher's P = 0.61); however, NesTiso contaminations remained restricted to the affected cages (100% prevention of spread), in contrast with reports of extensive dissemination of microbes across cages in isolators 14,15. With 23,360 GF cage-days (equivalent to maintaining a GF cage for 64 years), the cumulative probability of a cage contamination event for every cage-opening (every 10 days) of NesTiso sets inside biosafety hoods was statistically identical to that of opening cages inside the multi-cage pressurized GF isolators (2 events/1,971 openings vs. 0/548, two-tailed Fisher's exact P = 1.0, Table 1).
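The cage-level comparison above can be reproduced in a few lines. A minimal sketch using the opening counts reported in the text (2 contamination events in ~1,971 NesTiso openings versus 0 in 548 isolator openings) is shown below; it yields a two-sided exact P close to 1, consistent with the value reported.

# Contamination events vs. clean openings: NesTiso (2 of 1,971) vs. isolators (0 of 548)
openings <- matrix(c(2, 1971 - 2,
                     0, 548),
                   nrow = 2, byrow = TRUE,
                   dimnames = list(c("NesTiso", "Isolator"),
                                   c("contaminated", "clean")))
fisher.test(openings)   # two-sided exact test; P is ~1, i.e., no detectable difference between systems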
Because simultaneous 'all-in-all-out' testing and cage replacement of an entire NesTiso colony can be stressful and laborious, we confirmed in room C (~35 cages, ~420 days, ~36,750 mouse-days) that eliminating only contaminated cages was ineffective at maintaining a low incidence of cage contaminations. We then validated that a NesTiso colony could be divided into quadrants for 'all-in-all-out' infection control (one quadrant/day; overnight disinfection), showing effectiveness comparable to the 'all-in-all-out' approach, while reducing technical stress.
Long-term phenotypes, survival and breeding are unaffected by NesTiso. As an alternative to measuring time-point cortisol levels as an indicator of animal adaptability, a procedure that itself stresses the animals and increases the risk of microbial contamination of GF mice during handling, we monitored murine morphological and breeding phenotypes to determine whether NesTiso is suitable for the study of long-term phenotypes. One of the functions of gut commensals is to aid digestion and modulate tissue morphology. By comparing organ dimensions from mice housed under GF-NesTiso, GF-isolator and SPF conditions, we determined that the organ biomass and hematocrit (as a surrogate for dehydration and erythrocythemia) of GF-NesTiso mice were similar to those of GF mice in isolators when compared to SPF mice (Fig. 2a-d). We then assessed whether NesTiso caging affected the spontaneous intestinal disease phenotype in SAMP mice, and found no effects on the natural three-dimensional occurrence of Crohn's disease-like ileitis (cobblestone) lesions 26 in GF-SAMP mice, on GF-SAMP survival (five months), or on SAMP body weight (three months) after transplantation with normal human fecal microbiota (Fig. 2c-e and Supplementary Fig. 5) 46. Furthermore, NesTiso did not induce signs of systemic, integument or intestinal disease in ileitis-free GF-B6 and GF-SW mice. A four-week breeding trial, conducted by cohousing males and females (2:3/cage; 2 cages/strain) for three days, also determined that breeding yields in NesTiso for the three mouse strains ranked as expected after 30 days: SAMP mice were the least productive strain (one pup from 1/6 females), B6 mice were intermediate (8 pups from 1/6 females), and SW mice were the most productive (67 pups from 6/6 females; Fisher's P < 0.05). Long-term survival in NesTiso was further documented in this study by breeding and maintaining GF mice for as long as 72 weeks of age, when animals were removed solely for experimental purposes or died of aging-associated complications. Other studies have followed GF animals outside isolators for only 2-3 or 12 weeks 19,20.

Table 1. Mouse colony inventory (22-month inventory snapshot): cage counts, animal density and age (weeks).

Human fecal microbiota transplants to mice in NesTiso. The containment of microbes in NesTiso makes the system ideal for studying the stability and colonizability of human fecal microbiota transplants (FMT) in GF mice. Because FMT mice often require BSL-2 isolation in facilities housing SPF mice, we tested the portability of NesTiso FMT mice to a BSL-2 room, sharing biosafety hoods with 20-30 SPF cages. We determined whether 12-week-old mice would maintain a stable FMT microbiota in NesTiso, and whether NesTiso FMT mice would acquire 16S rRNA gene microbiome signatures of SPF mice. Fecal DNA and quantitative real-time PCR analysis with four universal and taxon-specific 16S rRNA bacterial primers (Lactobacillaceae, Bacteroidaceae, Bifidobacteriaceae, segmented filamentous bacteria) 26,47 showed that the FMT in GF-SAMP mice was stable over 14 days in NesTiso (Fig. 2f). Slightly extending the study period to 21 days to encompass the establishment of adaptive immunity, 16S rRNA microbiome analysis of fecal samples randomly collected from 10 mice on days 2, 11 and 21 after FMT showed that FMT mice in NesTiso retained the healthy profile of the human donor (6/6 of 31 possible phylum-level taxa), which was rich in Firmicutes, whereas concurrent conventional SPF-mouse signatures in the same facility were distinct and rich in Bacteroidetes (Fig. 3a,b). These 16S rRNA microbiome data further support NesTiso as a suitable portable caging system that can prevent cage cross-contamination 14,15, facilitating the parallel study of diverse microbiotas and their effects on transplanted GF mice.
By categorizing read-count data as binary (presence/absence) and using probability-of-recovery statistics (the percentage of transplanted mice in which a taxon was detected), we also noticed that two analytical replicas interpreted 'in series' (taxon reads summed across both replicas) normalize the distribution of low-abundance taxa, making this approach preferable to interpretation 'in parallel' (only taxa positive in both replicas) or to using single aliquots. When using NesTiso in FMT experiments, it is thus also advisable to submit ≥2 donor aliquots for microbiome sequencing and to interpret these profiles 'in series' 34 (Fig. 3c and Supplementary Fig. 6), as illustrated in the sketch below. The results also suggest that it is not advisable to exclude taxa with low read numbers in a sample, as occasionally recommended in bioinformatic pipelines, but rather to consider using NesTiso to prevent contamination and to analyze all available high-quality data 'in series'.
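To make the 'in series' versus 'in parallel' interpretation concrete, the sketch below shows how presence/absence calls differ for two analytical replicas of the same donor sample; the taxa and read counts are illustrative placeholders, not study data. 'In series' sums the replicas before thresholding, whereas 'in parallel' requires a taxon to be positive in both replicas.

# Two hypothetical replicate read-count vectors for the same donor sample (one entry per taxon)
rep1 <- c(Firmicutes = 1200, Bacteroidetes = 300, Actinobacteria = 0, Verrucomicrobia = 3)
rep2 <- c(Firmicutes = 1100, Bacteroidetes = 280, Actinobacteria = 5, Verrucomicrobia = 0)

in_series   <- (rep1 + rep2) > 0        # taxon is called present if detected in either replica
in_parallel <- (rep1 > 0) & (rep2 > 0)  # taxon is called present only if detected in both replicas

rbind(in_series, in_parallel)           # low-abundance taxa are retained 'in series' but lost 'in parallel'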
NesTiso-independent enrichment of fecal Bacillales, Burkholderiales and Pseudomonadales in soiled bedding. Because NesTiso may increase air humidity if not aerated, we hypothesized that FMT studies performed in NesTiso could favor the selection of certain fecal microbes compared to conventional single caging. This was important because we noticed some contaminants thriving in soiled, humid bedding, while others (slow-growing environmental fungi) unexpectedly disappeared from GF mice in dry (frequently replaced) cages. In split-plot experimentation, we determined that the DNA microbiome profile of a freshly soiled SPF-SAMP bedding mixture (split into 40 petri dishes) was identical for NesTiso and single caging after incubation for 28 days at 23 °C, indicating that NesTiso double caging did not contribute to microbial bias (Fig. 4a-c). Collectively, however, bedding microbiomes were significantly enriched with Bacillales, Pseudomonadales and Burkholderiales when compared with fecal mouse microbiome studies from conventional single cages (Fig. 4d and Supplementary Fig. 7). More relevantly, an expanded comparison showed that, coincidentally, the same orders (Bacillales and Pseudomonadales) were markedly enriched in stereomicroscopically dissected mucosa-associated microbiomes 48, raising concerns for the first time about potential bias driven by the cyclical selection and enrichment of fecal microbes in soiled bedding (Fig. 4e and Supplementary Fig. 7). Interestingly, during this study we identified in our facility Bacillus spp., Staphylococcus petrassii/aureus, Paenibacillus woosongensis, and Pseudomonas alcaligenes as GF contaminants, supporting the relevance of Bacillales and Pseudomonadales enrichment, survival and adaptability to the bedding material and housing conditions in laboratory animal research.
Modeling and predictions of bacterial growth and extinction over cyclical enrichment-dilution events. The enrichment of certain microbes in the bedding material might depend on the type of substrate and lead to cyclical changes in the cage microbiome as cages become warm, humid and rich in organic matter over time (Fig. 4f). Quantitation revealed that the organic 'nutritious' enrichment of the bedding depends linearly on animal density, on the 'bedding cycle' interval (clean bedding becomes soiled and is then replaced with new bedding, usually every 7-10 days), and on grinding behavior (e.g., by day 10, the bedding of a 5-mouse cage contains 6.9% feces and 43.6% diet). To visualize the periodic dilutional effect of cage replacements at fixed intervals on microbial selection (both survival and extinction), we implemented a mechanistic mathematical model using a logistic function validated for bacterial growth in liquid medium, coupled with a customizable event function accounting for periodic dilution events (as a surrogate for bacterial and organic substrate replacement), using 'deSolve' 49 to run simulations in open-source R software (a minimal simulation sketch is shown below). Simulations illustrated how fast-growing microbes, depending on their rate of growth, persist in the model over several cycles, while slow-growing microbes become extinct (Fig. 4g); importantly, they also allowed the recognition of unaccounted periodicity mechanisms influencing cyclical microbial selection, herein referred to as basic Cyclical Bedding-dependent Microbiome Periodicity Rules (see brief description in Supplementary Materials). Guided by the mathematical visualization of differential microbial selection over time and recurrent dilution events (cage replacements) 49 and its inferred predictions, we next tested experimentally whether bedding soiledness directly influences the gut microbiota profile in mice.
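A minimal sketch of the type of simulation described above is given below, using the 'deSolve' package: logistic growth of a single bacterial population interrupted by a dilution event at every simulated cage change. The growth rate, carrying capacity, carryover fraction and 10-day cycle are illustrative assumptions, not fitted parameters from this study.

library(deSolve)

# Logistic growth of one bacterial population in the bedding
growth <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    dN <- r * N * (1 - N / K)
    list(c(dN))
  })
}

# Cage change: most of the population (and substrate) is removed; a small residual fraction carries over
cage_change <- function(t, state, parms) {
  state["N"] <- state["N"] * parms["carryover"]
  state
}

parms  <- c(r = 0.8, K = 1e9, carryover = 0.01)                  # per-day growth rate, carrying capacity, 1% carryover
times  <- seq(0, 60, by = 0.1)                                   # 60 simulated days
events <- list(func = cage_change, time = seq(10, 60, by = 10))  # cage change every 10 days

out <- ode(y = c(N = 1e3), times = times, func = growth, parms = parms, events = events)
plot(out)   # fast growers rebound between dilutions; rerunning with a small r shows slow growers dying out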
Co-streaking culture assay reveals that bedding soiledness cyclically influences the gut microbial profile.
We hypothesized that GF-NesTiso mice exposed to 1-day-soiled SPF bedding would have a different transmissible fecal microbiota profile compared with mice exposed to 10-day-soiled SPF bedding, and that over time their individual profiles would vary periodically with every bedding cycle. Because microbiome assays and data analysis are not real time and can be time consuming and technically intensive, we developed a rapid fecal culture assay (feces streaked on TSA blood agar and incubated overnight; herein referred to as 'co-streaking'; the assay results are reproducible across fecal pellets) that facilitated the enumeration of colony types and thus the cost-effective assessment of gut microbial dynamics in near real time. By streaking the feces of 'co-experimental' mice on the same agar plate, our 'co-streaking assay' became a semi-quantitative screening tool to visualize the periodic dynamics of the gut microbiota (Fig. 5a and Supplementary Fig. 8). Remarkably, in a 30-day (3-bedding-cycle) experiment, we found that overnight exposure of GF-SW (healthy) mice to the bedding of SPF-SAMP (ileitis-prone) mice yielded persistently different co-streaking patterns, depending primarily on whether the mice were exposed to 1-day-soiled or 10-day-soiled bedding (Fig. 5b-d). As hypothesized, co-streaking also showed more diversity (colony types) across cages and mice on day 3 after cage replacement, which remarkably disappeared (less diversity, primarily the same colony type) by days 8-10, a phenomenon that recurred with every bedding cycle. We also noticed that, within cages, some animals cyclically exhibited their own microbiota profile (individuality), which was markedly influenced (cyclically disappearing) as cages became soiled (Fig. 5b-d). Confirming the high occurrence of within-cage individuality, a cross-sectional screening of 80 adult SPF (AKR, B6, B6 TNFdeltaARE/+, SAMP) mice in 45 cages (without controlling for bedding soiledness) revealed that up to 70-82% of cages cohousing >2 genetically identical mice had >1-2 individual co-streaking patterns, contradicting the perception that cohoused SPF mice share the same microbiome profile (Fig. 5 and Supplementary Figs 8 and 9). This identifies a new, previously unrecognized form of intra-cage microbiome variability despite presumed homogeneous coprophagic behavior [50-54], and raises questions about cohousing as a preferred design in mouse microbiome research 52,55-57, especially since cohousing also alters numerous phenotypes of interest (e.g., metabolism, obesity, inflammation) [58-61]. A parallel shotgun metagenomic profiling study of fecal samples from male-female breeding pairs cohoused for 30 weeks since weaning (3 mouse lines) confirmed that, although cohoused mice clustered together (predicting cage allocation) in an unsupervised Euclidean heat-map analysis, binary (yes/no) discordance analysis showed that cohoused mice differed substantially in the number of detectable fecal bacterial families not shared by the paired mice in each cage (up to 33% discordant, not attributable to sex; mean 24 ± 5%; 51 bacterial families in the study) 9.
As an alternative approach, we have developed a protocol in which experimental mice are not all cohoused (studies require large numbers of animals, an ideal cohousing design would be impossible 56 and is not simple 62,63, and the IACUC-approved maximum animal density is 5 mice/cage); instead, mice are (i) gavaged with a composite of their collective fecal microbiota 26, (ii) allowed to establish a baseline collective microbiome, and then (iii) followed up to determine the functional relevance of the microbiota that the animals select as experiments progress 8,9,26,64,65. In the context of the well-known long-term microbiome stability and stable core microbiome in humans 66, our findings also raise questions about whether the assumption of 'difficult-to-control' temporal stochastic microbiome variability reported in mice 15,67 is truly biologically correct, or whether such 'temporal variability' reflects study results randomly confounded by an unrecognized technical artifact that arises when the timing of animal sampling for microbiome analysis is not controlled and accounted for as a function of bedding microbial selection. Therefore, we further assessed experimentally the effect of various degrees of bedding soiledness on the competitive growth and 9-day survival of three cultivable, abundant fecal microbes representing distinct bacterial families detectable in SPF mice.

Figure 5. (a) The 'co-streaking' assay (additional examples in Supplementary Figs 8 and 9). Fecal enumeration and single-colony Sanger sequencing indicate that abundant cultivable microbes contribute major fractions of the bacterial DNA in the mouse fecal microbiome. Under the assumption that cultivable and uncultivable microbes interact dynamically, the assay serves to monitor the comparative dynamics of fecal communities. (b) Experimental design to determine the effect of soiledness on colonization differences in GF-SW mice, and the dynamic effect over three cage replacements. (c) Aerobic incubation of 'co-streaked' fecal samples illustrates cultivable microbiota differences. Note the 'co-streaking' fecal profiles of 9 SW mice (labels 1-9): mice look similar on day 1, appear more distinct with 4 cultivable profiles on day 3, and look similar again on day 8 (two profiles). Inset line plot, number of 'co-streaking' fecal profiles over 33 days (3 bedding cycles). Note that the pattern of 'co-streaking' fecal profile variability oscillates cyclically over time with every new cage change (profiles are more alike when beddings are 10-day-soiled; more distinct when samples are collected three days after a cage change, i.e., from 3-day-soiled bedding). (d) Anaerobic hemolytic (virulence) fecal profiles on day 10. Note that 4 mice exposed to 1-day-soiled bedding have abundant hemolytic anaerobes (absent in mice exposed to 10-day-soiled bedding). Exposure to variably soiled bedding affects the collective virulence profile of microbes acquired/transmitted from the bedding. Because microbiota abundance and virulence variation may influence animal phenotypes, it is necessary to control for cyclical bedding-dependent (CyBeD) microbiome variability to improve scientific rigor during experiments, and also during breeding, since newborn pups from a single colony may be variably imprinted by the cyclically biased bedding microbiome.
Dose-effect study illustrates that soiledness favors gut Enterococcus faecalis over Escherichia coli and Lactobacillus murinus. Various bedding substrates are available for use with rodents, including corncob, paper products, aspen wood chips, cotton and grass fiber pellets; however, animal welfare regulations recommend bedding that allows foraging, burrowing, digging and nest building, and that absorbs urine, ammonia, humidity and feces [68-70]. Because autoclaved corncob is an efficient, common bedding material used for the routine rearing of laboratory rodents 68, we next tested and confirmed in vitro that the amount of 'soiledness' influences the microbial selection of highly abundant (10^5-10^8 CFU/g) gut aerobes inoculated into autoclaved corncob bedding. Experimentally, three distinct fecal bacterial 'co-streaked' types from a healthy SPF-AKR mouse (shiny-spreading Escherichia coli, small-gray Lactobacillus murinus, and domed-white Enterococcus faecalis, which inhibits L. murinus when in proximity) were added as a 1:1:1 mixture to NesTiso petri dishes containing either sterile clean bedding, GF 10-day-soiled bedding from a NesTiso cage, mixtures of clean bedding containing 10% or 50% of the GF-soiled bedding (surrogates for 1- and 5-day-soiled bedding based on the mathematical model), or GF diet. Remarkably, bacterial enumeration on TSA over time (23 °C for 9 days) demonstrated that each bedding condition resulted in very different bacterial growth ratio profiles (distinct from the inoculated 1:1:1 ratio), in most cases favoring the enrichment/selection of E. faecalis in soiled cages. Intriguingly, plain GF diet as a growth substrate inhibited and disfavored the survival and growth of the otherwise fast-growing E. coli and L. murinus, suggesting that certain types of (digested or undigested) diet might further favor bedding-enriched Enterococcus faecalis, arguably in the most proximal segments of the mouse gut (Fig. 6). Given the abnormal abundance of L. murinus in experimental environments, it is reasonable to expect that such an aerobic microbe could influence mouse physiology, as it has been demonstrated that its overgrowth causes biotin-dependent alopecia in mice 71. On the other hand, the overgrowth of E. faecalis can selectively inhibit a large number of other (mainly gram-positive) microbes via bacteriocin-like inhibitors that are common among the family Enterococcaceae, within the order Lactobacillales.

Figure 6. (e,f) Line plots illustrate that, when incubated as a 1:1:1 mixture, E. faecalis is highly resilient to soiledness and able to readily grow on the GF-grade rodent diet used. Unexpectedly, E. coli was the least adaptable fecal microorganism in the cage environment. Biologically and experimentally relevant, L. murinus, a widespread aerobic species, is best adapted to 5-day-soiled bedding, indicating that selection bias favors its abundant growth until it is inhibited by the overgrowth of E. faecalis towards bedding day 10 (Supplementary Fig. 10). These findings, derived from SPF-AKR mice, confirm the cyclical predictions illustrated in panels b-c, which derived from interpretation of the fecal co-streaking profiles of the SPF-SAMP microbiota that was transmissible to GF-SW mice.
Discussion
To date, with over 100,000 mouse-days of data and experimentation, this report illustrates the effectiveness of NesTiso as a portable GF animal housing alternative to pressurized systems. Major advantages of NesTiso include its potential for cost-effective scalability and the elimination of risks associated with back-flow ventilation problems and sterility-barrier failure, which are widely documented across isolation units in positive-pressure ventilation hospital settings 72 (and are also very likely to occur in GF facilities). Since implementation, NesTiso has allowed the portability of GF animals across research facilities and has improved efficiency in parallel gnotobiotic/FMT studies. As NesTiso implementation could 'democratize' GF-grade capabilities, our results provide immediate support to ongoing government efforts to improve rigor and reproducibility in microbiome research, as well as to efforts by the World Health Organization 73 to promote natural ventilation for hospital infection control. Further, owing to major human gut microbiota variability, studies with broad donor coverage and large numbers of GF animals and cages are needed to elucidate the mechanisms of establishment of human-derived microbiotas in mice, and their effect on biology and disease phenotypes. In this context, NesTiso has the potential to further improve research efficiency by preventing the environmental transfer of microbes between transplanted mice, which is known to confound research by affecting mouse metabolic traits when inadvertent contamination occurs 16.
After the implementation of NesTiso, we have not identified major disadvantages that would justify working solely with multi-cage isolators again. In some GF facilities, the replacement of soiled cages inside isolators is conducted by replacing only the dirty bedding material of each cage, and not the entire cage set. To improve efficiency, sterile material is introduced into the isolators in bulk, together with food and other supplies, to expedite husbandry. Because the risk of cross-contamination of cages and of the whole isolator is high when only the bedding material is replaced 14, in our facility we replace the entire cage set, moving the mice to a new clean cage every time we change cages, in both isolators and NesTiso. In this comparable context, NesTiso has several technical advantages over isolators: (i) the speed of husbandry flow is faster in NesTiso, with more cages changed per unit of time, because it is easier to work under biosafety hoods than inside isolators; (ii) the number of animals housed per floor area is higher, because cages can be stacked vertically and require less space than the transfer chambers and glove ports needed in front of each isolator; (iii) there are no maintenance costs associated with technical inspection and service of positive-pressure equipment; (iv) soiled, microbially contaminated or purposely gnotobiotic cages are easily handled and disposed of very efficiently in NesTiso, because there is no need to move cages in or out of transfer compartments; (v) the initial investment to begin a GF colony with NesTiso is virtually null, because there is little to no need to purchase or invest in specialized equipment; and (vi) because NesTiso ventilation is passive, there is no noise or vibration disturbance associated with ventilation equipment. Currently, NesTiso settings are being improved to further minimize the area footprint and to maximize natural ventilation. Following strict surgical-grade aseptic protocols that are widely available in the literature and have been successfully applied for surgery in GF mice 74, we anticipate that other laboratories could successfully implement and expand their GF research portfolios using NesTiso.
This report also highlights that important cyclical alterations exist within the fecal microbiome profile of experimental mice, presumably due to the selective enrichment of specific aerobic microbes within the mouse bedding material. If ignored, cyclical bedding-dependent microbiome alterations could have unpredictable confounding effects on the interpretation of phenotypic results across numerous fields of murine research, including unpredictable consequences for the microbiome imprinting of newborn pups in colony-breeding programs. As a potential solution, we earlier developed a fecal homogenization protocol 26,64 in which all experimental animals housed in various cages are exposed to the entire pool of gut microbes harbored in the feces of all experimental mice and in the bedding of all cages. Herein, we propose to apply that protocol only with fresh feces (and not bedding material), and to further improve scientific rigor either by designing novel cage flooring systems that prevent the permanent contact of mice with their feces and soiled bedding (contact that does not occur in humans), or by conducting experiments with frequent cage replacements and proper ventilation, accounting for animal density to minimize humidity/soiledness, and collecting samples for investigational purposes 2 days after animals have been transferred to new bedding/cages. Although we did not test all potential combinations and possibilities (e.g., animal density, body weight, drinking, grinding behaviour, etc.), the results indicate that microbiome experiments would benefit from being conducted with cages of comparably reduced animal density (e.g., 2 mice/cage), with animals sampled for analysis on day 2 post cage replacement (e.g., a '2 x 2 cage sampling rule'). Unless this is accomplished, each study should examine and control for the effect of cyclical bedding microbiome selection (which may vary widely) on the target investigational phenotypes. Together, our data also indicate that mouse cohousing 52,55-61 might not necessarily be robust in all scientific scenarios as a control for microbiome variability in mouse research, unless we also control for bedding-dependent cyclical microbiome selection bias and for sustained intra-cage mouse-to-mouse gut microbiota variability. Strategies to prevent coprophagia in nutritional studies have been explored since the 1960s 50,75,76. The simpler practices described here, however, could control for bedding soiledness as a potential source of cyclical microbial bias in modern mouse microbiome research.
Materials and Methods
Animals and germ-free facility. The portable static isolation strategy proposed herein was tested by housing inbred GF SAMP1/YitFc (SAMP) and C57BL/6J (B6) mice and outbred Swiss Webster (SW) mice, re-derived or obtained from Taconic Biosciences Inc. (Hudson, NY). All mice were maintained as GF colonies at the Animal Resource Center (ARC) at Case Western Reserve University School of Medicine (CWRU). SAMP mice are a sub-strain of AKR/J mice, originally developed in Japan, that spontaneously develops intestinal and extra-intestinal inflammatory disease 25,26,77,78 and has a polygenic genotype 79. Each standard multi-cage pressurized GF isolator allowed for the manipulation of mice and supplies via four sets of permanent gloves and a port of entry, which was opened as needed, usually once a week. Animals were housed in wire-topped polycarbonate shoebox cages (~30 cm L; 15 cm W; 15 cm H) on a 12 h:12 h light:dark cycle. Autoclaved GF-grade 40-50 kGy irradiated pellet food (PMI Nutrition Int'l., LLC., Labdiet® Charles River Vac-Pac Rodent 6/5 irradiated, 5% kcal fat) or autoclaved (Prolab RMH 3000; porcine animal-derived fat preserved with BHA; 6.8% fat content by acid hydrolysis) diets, and water in bottles, were provided ad libitum 26. Portability experiments in which NesTiso cages were taken out of the ultrabarrier facility were conducted in BSL-2-grade laboratories equipped with standard HEPA filtration vent systems on the ceiling, but not positively ventilated or pressurized, representing most standard clean laboratories. In those settings, HEPA-filtered air was readily available in the biosafety cabinets that were used to open and replace the cages. Protocols on animal handling, housing, and the transplant of human microbiota into GF mice were approved by the IACUC and the Institutional Review Board at CWRU, in accordance with the National Research Council Guide for the Care and Use of Laboratory Animals 70.
Nesting cages: static double-layer isolation setting and thermography. The cages and materials used are commercially available, to ensure that results are generalizable to other laboratories. In brief, the proposed housing strategy, referred to as 'double-caging/triple-barrier' or 'nesting 2-layer isolation' (NesTiso), was tested by housing cohorts of GF mice (produced in standard GF isolators) inside autoclaved static mouse cages 80, which were then placed (nested) inside larger static rat cages (Allentown Inc., Allentown, NJ; see Results and Fig. 1 for details). Animals and cages were microbiologically monitored and handled by trained personnel under strict GF-grade aseptic conditions and our routine GF practices, following stringent disinfection protocols and using complete isolation-grade impermeable fabric gowns, double gloves, hairnets and masks, or N95 respirators when deemed medically appropriate for personnel preferring not to be exposed to disinfectant vapors 14. Infrared thermography image analysis of mice and cages was conducted using standardized principles 29-32 and a thermal camera (FLIR E95 with Intelligent Autofocal Optics) capable of measuring 161,472 temperature pixels (range, −20 to +1,500 °C), allowing sensitive detection of spatially confined, minute thermal differences (464 × 348 native resolution, spectral range 7.4-14 µm). Differential quantitation of selected areas was conducted using the proprietary thermography camera software (FLIR Tools for Mac, v.2017).
Animal handling and disinfection. Disinfection protocols to ensure aseptic environmental conditions were based on a quaternary ammonium-based soap to remove organic matter, 70% ethanol to remove grease and dehydrate surfaces, and Spor-Klenz® (Steris Corp., Groveport, OH, 6525; 1% hydrogen peroxide, 10% acetic acid, 0.08% peracetic acid) on rust-sensitive equipment 14. Floors and other surfaces were disinfected with Spor-Klenz® and Clidox® (Pharmacal Research Laboratories, Inc., Waterbury, CT, 96120F; chlorine dioxide). Biosafety hoods equipped with new HEPA filters and sterilized daily or weekly with chlorine gas or Spor-Klenz vapors were used whenever cages or animals were manipulated (e.g., feces collection, body weight measurements). Autoclaved sterile gowns and hairnets, masks (N95 or cartridge half-facepiece) and impermeable plastic sleeves were worn by personnel to prevent exposure of the NesTiso cage sets and animals to human dust or microorganisms, and to reduce personnel exposure to disinfectants.
Husbandry and sanitation.
Although the deleterious effects associated with ammonia are critical in conventional mice, ammonia is not relevant in GF animals (owing to the lack of urea-utilizing, ammonia-producing gut microbes). For sanitation purposes, replacement of whole NesTiso cage sets during GF or fecal microbiota transplant experiments followed regulatory guidelines comparable to those for conventional housing 81, with husbandry compliance with the NRCG-CULA monitored daily by CWRU ARC personnel and the IACUC committee. NesTiso sets were replaced every 7-14 days based on animal density, production of soiled material, and animal grinding behavior 82,83. Every cage was routinely replaced under biosafety cabinets at least once weekly for animal densities of 3-5 mice/cage, and once every two weeks for 1-2 mice/cage. In compliance with static cage usage for conventional (SPF-microbiota) mice, we used corncob bedding because of its absorbent capacity to lower air humidity inside cages 68,84; this bedding material has been shown to minimally influence mouse core body temperature compared with other materials 29. In all cases, animals were handled using Spor-Klenz-disinfected, or autoclaved and rubberized, 12-inch-long forceps.
Microbiological monitoring of GF status and cage-cage cross-contamination. All mice inside both pressurized isolators and NesTiso sets were routinely tested using standard culture-based microbiological procedures and gram staining 85. Culture of feces and cage bedding material was conducted aerobically and anaerobically (10% CO2, 10% hydrogen, 80% nitrogen) using tryptic soy agar (TSA) supplemented with 5% defibrinated sheep blood. Luria Bertani, de Man Rogosa Sharpe, and MacConkey agars were also used (Becton, Dickinson and Company, Franklin Lakes, NJ). Nutritious brain heart infusion broth supplemented with 5% yeast extract was used to test feed sterility and rule out bacterial contamination as needed. To monitor the risk of fungal contamination, we tested selected cages at 1-3-week intervals using fresh feces and direct plating onto potato dextrose agar (PDA), Sabouraud, and Candida chromID agars (Oxoid, BBL, bioMérieux SA, France; 30 °C, 7 days). In addition, we also incubated 20-100% of soiled cages after adding 100 ml of water from the drinking water bottle (23 °C, aerobically, 21 days) to allow for fungal spore germination and the formation of vegetative aerial colonies, which aid in the confirmation and taxonomic classification of fungi 14.
In a culture-independent manner, we also gram-stained mouse feces to verify that animals were not colonized in vivo by microorganisms that may be uncultivable using the in vitro methods described 85. An expert board-certified microbiologist, who could distinguish microbes from dietary vegetable fibers, intestinal epithelial cells, inflammatory cells, dye crystals and artifacts, conducted the interpretation of gram stains. If analysis revealed the presence of suspect microorganisms, animals were quarantined, then gram stained and re-cultured 1-2 days later to verify mouse colonization (as indicated by an increased number of CFU and gram-stained microbes). Three consecutive negative gram stains or culture results were needed to declare a suspect NesTiso cage free of germs (GF), based on infectious-disease guidelines in veterinary medicine, where horses with infectious agents (e.g., Salmonella spp.) require three to five consecutive negative cultures to be deemed free of the pathogen 86,87. Our data indicate that two consecutive negative results are optimal to prove that mice were GF, and this is the approach we use before enrolling any GF mouse cohort into experimentation. PCR was not used to test GF mice, although a qPCR-amplicon RFLP method has recently been validated for GF testing 85, because DNA from dead and feed-indwelling microbes cannot always be differentiated from active colonization, and because PCR has been shown to be less sensitive than culture and gram staining in identifying intestinal colonization in gnotobiotic mice and poultry 88,89. Microbial DNA was also extracted from single purified colonies on TSA or PDA agars using the QIAamp Fast DNA extraction kit (Qiagen) with some modifications (bead-beating with Sigma-Aldrich 500-µm beads, MP FastPrep-24 homogenizer; two 20-s runs at 1,000 rpm; AS lysis buffer). Microbial identification was based on single-colony PCR amplification and Sanger sequencing, using 16S rRNA sequencing of the V1-2 regions and Earth microbiome primers 515F/860R 90. The ribosomal internal transcribed spacers 1 (ITS-1) and 2, and the 5.8S rRNA regions, were sequenced for fungi using the ITS1 (5′TCCGTAGGTGAACCTGCGG) and ITS4 (5′TCCTCCGCTTATTGATATGC) primers 14,91. Species designation was based on the NCBI bacterial 16S rRNA and fungal UNITE databases using BLASTn 92.
Cage air humidity and evaporation of soiled bedding experiments.
We hypothesized that adding an extra layer of static filtration around the static mouse cage would reduce air exchange 80, increasing humidity accumulation, which we measured using digital air humidity and temperature monitors (AcuRite 00613). Therefore, our first experiment involved the qualitative evaluation of water condensation within the cages with and without external ventilation (using a 20-cm-diameter table fan set two meters from the cages, ~1,750 revolutions/minute). We tested three conditions (SPF, GF-isolator and GF-NesTiso) and measured air humidity changes (%) over a 7-day period inside mouse-free cages containing bedding that had been soiled by housing five mice per cage for 7 days 80. Lastly, we quantified the rate of evaporation of soiled moist bedding (weight changes) over a 12-day period (longer than the 7 days recommended for regular husbandry of static cages), with and without ventilation. Experiments on cage humidity were conducted without mice to minimize uncertainty due to animal behavior (urine production, grinding). Experiments were conducted in a laboratory with stable room air temperature (23.8 ± 0.57 °C) and relative air humidity (26.5 ± 4.89%).
Mouse intestinal disease phenotype and survival analysis.
To understand the effects of NesTiso on the maintenance of mouse phenotypes, we used SAMP mice, which display a well-characterized intestinal inflammation phenotype with 100% penetrance that resembles the typical three-dimensional (3D) cobblestone lesions of Crohn's disease. Body weight was used as an indicator of animal health and welfare, and was monitored in 10-week-old mice (n = 10) for 90 days after their introduction to NesTiso. Post-mortem histological and stereomicroscopic 3D-pattern profiling 26 were conducted on terminal ilea to assess the persistence of the Crohn's-like intestinal phenotype in NesTiso cages. In another experiment, we compared mean cecum size (cecum weight ÷ body weight × 100) among mouse cohorts, since GF mice have relatively large ceca owing to the absence of microbiota. For this purpose, adult (>14-week-old) GF mice in NesTiso were compared with GF-SAMP mice in isolators, SPF-SAMP mice, and a second mouse line prone to developing Crohn's-like ileitis (B6 TNFare) 26. To determine whether NesTiso increased the risk of mortality in SAMP mice, we compared natural mortality across cohorts of GF mice housed in NesTiso or in GF isolators for up to 6 months using survival analysis.
Fecal material transfer experiment. To determine the suitability of NesTiso for housing moderate densities of mice harboring gut commensal microbiota (3-4 mice/cage, without external ventilation), we conducted a humanized fecal matter transplant experiment with 10 GF SAMP mice using frozen feces from a healthy (40-year-old) human donor. All methods were carried out in accordance with guidelines approved by the CWRU Institutional Review Board. Samples were obtained from the Cleveland Digestive Diseases Research Core Center Biorepository, which is also IRB approved and which obtains informed consent from all fecal matter donors following strict regulations. We manipulated the mice weekly for fecal collection, and monitored the stability of the transplanted microbiota in fresh murine feces at 2, 11 and 21 days post-transplant by performing qPCR to determine the relative abundance of five bacterial families 26. 16S microbiome analysis of fecal DNA samples from three mice per time point was conducted by amplifying the V1-V3 regions using Illumina TruSeq and HiSeq 4000 protocols. Bioinformatics analysis was conducted using Greengenes and default QIIME pipelines (http://qiime.org).
Soiled bedding microbiome analysis.
To determine the effect of NesTiso on 16S microbiome profiles, dry sterile corncob bedding material was experimentally inoculated with SPF mouse feces (20% of dry bedding weight), moistened with distilled water (25% volume/dry bedding-feces weight; ml/g), homogenized, and divided into aliquots that were placed in 10-cm sterile petri dish bottoms to achieve ~1-cm-thick layers (46.5 ± 2.28 grams of bedding/petri dish). Bedding humidity was adjusted to reach a water content comparable to levels in the naturally soiled bedding material of cages with breeding mice (i.e., 25% bedding moisture relative to autoclaved dry corncob bedding in cages with three adult breeders and one-week-old pups) after 7 days of housing in GF isolators. After 21 days of incubation of five dishes/cage inside each of six NesTiso sets and four standard static mouse cages (23 °C, no external ventilation), the bedding material was examined in situ for enumeration of fungal colonies and homogenized to extract a pooled DNA sample for 16S microbiome analysis.

Serology to assess inadvertent exposure to common rodent pathogens. Because certain pathogens (e.g., viruses and Mycoplasma pulmonis) cannot be detected by the described culture-based methods 33, we also collected serum samples from six sentinel GF mice housed for six to twelve months in NesTiso to confirm the absence of exposure to 23 rodent pathogens. Fresh sera collected from euthanized mice were independently submitted by veterinary personnel at our Animal Resource Center-CWRU for testing at an external diagnostic institution (IDDEX Laboratory, Worthington, OH). Concurrent testing of other SPF rodent colonies from our institution served as test controls.
Breeding potential of acutely humanized GF mouse lines in NesTiso. We next tested the breeding and early nursing capabilities of mice housed in NesTiso by comparing the breeding efficiency of GF-SAMP mice with that of commercial GF-B6 and GF-SW 12-week-old mice transplanted with human gut microbiota. Based on our records, predicted breeding efficiency would rank SAMP mice as the poorest breeders, followed by B6 mice, and then SW mice with the highest number of viable, healthy, nursed pups by 1 week of age. Following oral gavage with a 400 µL aliquot of human gut microbiota, nine 10-week-old mice were housed in NesTiso sets (5 mice/cage; 2 sets/strain; 2:3 male:female ratio) and left to mate for 3 days; males were then removed from the cages. The number of pups produced per pregnant dam was determined 30 days after the animals were set to breed.
Effect of exposure of NesTiso GF mice to soiled bedding of SPF SAMP mice. To determine the potential impact of mouse exposure to different degrees of soiled bedding material 33 on the gut microbiome, nine 20-week-old GF SW mice were exposed overnight to bedding from five SPF 19-week-old SAMP mice. The SPF bedding originated from a single cage housing a cohort of five SPF-SAMP mice. The bedding material from the SPF cage was sampled at the nesting site and at wetter sites on days 1, 3 and 10 for culture and DNA microbiome analysis. The remaining bedding from days 1 and 10 was homogenized manually (separately) within the cage and aliquoted to be used as SPF bedding for the cages that would house the GF mice. On average, each GF mouse was exposed to 40 grams of bedding for approximately 22 hours. Mice were assigned a priori to either 1-day- or 10-day-soiled SPF bedding, in sets of 1, 1, 1 and 2 for the 10-day aliquots and sets of 2 and 2 for the 1-day aliquots. After the exposure period, mice were transferred to GF-NesTiso sets, and feces were collected for culture and for DNA extraction for microbiota assays. To prevent confounders, mice were not handled for the following three days (NesTiso cages were sealed and maintained at room temperature), after which fecal samples and bedding material were collected for culture and DNA extraction, and mice initially caged singly were re-cohoused as a trio. Thereafter, during the follow-up phase of the experiment, animals were monitored as two pairs (as initially set) for the 1-day SPF bedding, and as one pair and a trio for the 10-day SPF bedding. During the following 10-day cage-changing cycles, mice and bedding were sampled on days 3 and 8-10 post cage change, for three cycles. Analysis of culture data derived from streaking fecal samples on TSA agar was conducted to assess the dynamics of the cultivable fecal microbiota over time. After aerobic and anaerobic incubation, photographs were taken, and representative colony phenotypes of each fecal profile were subcultured for purification and Sanger sequencing for species identification, as described above.
In vitro experiment for enumeration of a microbial cocktail in bedding material. Using NesTiso, the three most abundant bacteria from the co-streaked feces of cohoused SPF-AKR/J mice, and 10-fold serial dilutions in PBS with enumeration on TSA, we quantified to what extent bacteria would grow in NesTiso petri dishes containing moist soiled bedding material. Single-colony PCR identified the most abundant aerobic bacteria in the AKR fecal sample as Enterococcus faecalis, Lactobacillus murinus and Escherichia coli. After purification and subculture, we determined that the bacteria in 1:1:1 cocktail experiments, orally administered by esophageal gavage to three GF 20-week-old SAMP mice (10^6-10^7 CFU/mouse in 400 µL of phosphate-buffered saline), reproduced the proportions of the 20-week-old donor AKR/J mice (10:1:1). In split-plot experimentation, we then simultaneously inoculated the same 1:1:1 mixture onto five different sterile substrates (clean sterile corncob bedding, ground GF irradiated autoclaved diet, and three concentrations of soiled bedding; see experimental designs in Fig. 6b). The substrates were aerobically incubated in petri dishes within NesTiso sets for 9 days at 23 °C.
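The enumeration itself follows the usual serial-dilution arithmetic; a small sketch with invented colony counts shows how CFU per gram of bedding would be back-calculated from a countable plate (the specific numbers below are for illustration only).

# Back-calculate CFU/g from a 10-fold dilution series (all numbers are illustrative)
cfu_per_g <- function(colonies, dilution, plated_ml, sample_g, diluent_ml) {
  # colonies: colonies counted on the plate; dilution: 10^-n tube that was plated
  # plated_ml: volume spread on the plate; sample_g: bedding weight suspended in diluent_ml of PBS
  (colonies / plated_ml) / dilution * (diluent_ml / sample_g)
}
cfu_per_g(colonies = 48, dilution = 1e-5, plated_ml = 0.1, sample_g = 1, diluent_ml = 9)
# 48 colonies from 0.1 ml of the 10^-5 tube, 1 g of bedding in 9 ml PBS -> ~4.3e8 CFU/g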
Microbiome analysis.
Fecal and bedding microbiome analysis was conducted with sufficient coverage to infer the presence or absence of abundant taxa and to quantify the risk of cross-contamination of transplanted mice with murine SPF microbiota at the phylum level (a 2-3 log range difference between the 100-bp paired-end read counts of the most and least abundant bacteria in a sample). Total read counts were approximately 2,500 and 25,000-40,000 reads per sample for Figs 3 and 4, respectively. Binary interpretation of phylum data (presence/absence) indicated that recipient mice had a binary microbiome profile ('phylum signature') virtually identical to that of the human donor for at least 21 days, indicating microbiome colonizability/stability. DNA extraction was conducted using Qiagen reagents (Tissue and Blood kit). Library preparation, 16S rRNA microbiome sequencing and primary analysis were conducted using Illumina MiSeq protocols and standard QIIME-based bioinformatics pipelines at the Beijing Genomics Institute in Shenzhen, China. Statistical analysis of normalized OTU data tables (log2-transformed after adding an offset of 0.00017) was conducted using STATA v13.0 and R v3.4.0 packages.

Mathematical modeling. The mechanistic exploration of the microbiome-driven hypothesis was conducted using available mathematical modeling functions for discontinuous logistic growth of populations with discrete events in the R software (R-project, Vienna, Austria) package 'deSolve' 49. This package contains modules that allow the incorporation of customizable dilution-simulation dynamic events into differential equations. The rationale and detailed description of a novel set of mathematical rules governing the periodic dynamics of cyclical microbial bias, inferred from mechanistic interpretation of simulated data, are described in detail in the Supplementary Materials.

Statistics. Body weight curves and normally distributed continuous parameters were tested using repeated measures (areas under the curves, or univariate summary statistics of paired data points, as recommended 93) and parametric t-test statistics. When assumptions were not fulfilled, nonparametric methods were used 34. Right-censored survival data were analyzed by computing survival fractions using Kaplan-Meier statistics 46. Pointwise 95% confidence intervals of survival fractions were computed using the log-log transform approach. An alpha level of 0.05 was considered significant in all cases. 95% confidence intervals are reported as the primary measure of data dispersion to aid in the interpretation of p values larger than 0.05 and lower than 0.1. STATA (v.13; College Station, TX, USA), R (R-project, Vienna, Austria), and GraphPad Prism (La Jolla, CA, USA) software were used for statistical analysis and graphics.
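The survival approach described above can be reproduced with the 'survival' package in R; the sketch below is a minimal illustration in which the data frame is a placeholder (the follow-up times, censoring indicators and group labels are invented), using the same log-log transform for pointwise confidence intervals.

library(survival)

# Placeholder data: follow-up time (days), status (1 = died, 0 = censored), housing group
df <- data.frame(time   = c(150, 180, 120, 175, 160, 180),
                 status = c(1, 0, 1, 0, 0, 0),
                 group  = c("NesTiso", "NesTiso", "NesTiso", "Isolator", "Isolator", "Isolator"))

fit <- survfit(Surv(time, status) ~ group, data = df, conf.type = "log-log")
summary(fit)                                      # Kaplan-Meier survival fractions with pointwise 95% CIs
survdiff(Surv(time, status) ~ group, data = df)   # log-rank comparison between housing groups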
Ethical IRB and IACUC approvals. This study was carried out in accordance with the recommendations and principles set by the National Centre for the Replacement, Refinement and Reduction of Animals in Research. Experiments and protocols were approved by the IACUC and the Institutional Review Board (IRB) at Case Western Reserve University. Fecal specimens from humans were obtained following protocols and personnel approved by the IRB who obtained informed consent from adult participants that donated specimens for testing in mice.
Declaration of data availability. Data and detailed protocols are available upon request or freely available as Supplementary Materials.
Evaluation of silver nanoparticles synthetic potential of Couroupita guianensis Aubl., flower buds extract and their synergistic antibacterial activity
The present investigation demonstrates Couroupita guianensis flower bud extract-mediated synthesis of stable silver nanoparticles (AgNPs). Instant formation of AgNPs was initially confirmed by the appearance of a yellowish-brown colour and the characteristic silver surface plasmon resonance (SPR) band in the UV-visible spectrum. The elemental composition and crystalline nature of the AgNPs were identified from the EDX spectrum and XRD pattern, respectively. Spherical morphology and monodispersity were revealed by TEM and AFM images. The particle size ranged from 5 to 30 nm, and an average size of 17 nm was consistent across XRD, TEM and AFM measurements. Possible reducing and stabilizing agents, viz. phenolics, flavonoids and proteins, were identified from the characteristic FTIR peaks representing their functional groups. The strong antibacterial activity of the synthesized AgNPs against Gram-positive and Gram-negative bacteria demonstrates the potential for formulating synergistic bactericides, combining the antibacterial properties of Couroupita flower bud extract and silver salts, for biomedical applications.
Introduction
Phytosynthesis methods for silver nanoparticles (AgNPs) are emerging as a potential alternative to physical and chemical processes, ever since the first report on Geranium leaf cell-free extract-mediated synthesis (Shankar et al. 2003) and the demonstration of their applications in a number of fields including medicine. The properties and specific applications of AgNPs depend on their shape, size and dispersion. The cell-free extract method offers a number of possibilities and flexibility to control the various parameters determining the architecture of the nanoparticle. Live plants possess an innate nanoparticle-synthesizing ability as part of the biomineralization, fortification and metal-tolerance traits that evolved to colonize metalliferous soils, and this nanoparticle fabrication potential has been demonstrated in many plant systems (Marchiol 2012). Efficient and stable synthesis of AgNPs depends on the plant species employed and the nature of the extract.
Varied primary and secondary metabolites of plants, viz., proteins, enzymes, polysaccharides, amino acids and vitamins, antioxidants, flavonoids, flavones, isoflavones, catechins, anthocyanidins, isothiocyanates, carotenoids and polyphenols, have been identified as reducing and stabilizing agents in the synthesis of metallic nanoparticles (Park et al. 2011). Given the vast metabolite diversity available and the unexplored potential of exotic and rare plant systems, considerable attention is being paid to combining phytochemistry and nanotechnology to develop nanomaterials with desirable size and morphology.
Antimicrobial activity of AgNPs was most extensively evaluated for applications in the medical field to prepare medicines, devices, implants, polymers and dressing material to control infections. Biosynthetic AgNPs are preferred due to their enhanced antimicrobial activity over silver ions and biocompatibility. Antimicrobial activity of AgNPs varies with their size (Panacek et al. 2006), and phytosynthesis of AgNPs with varied size and morphology was reported earlier (Mohanpuria et al. 2008) by employing different plant species.
Couroupita guianensis (local name: cannonball tree), a threatened species, is one of the two species representing the family Lecythidaceae and is found rarely in botanical gardens and temples in India. Various plant parts, viz., leaves, flowers, fruit and bark, possess antibacterial, antifungal, antiseptic and analgesic qualities. The flowers are used to cure colds, stomach aches, intestinal gas formation and diarrhea (Prabhu and Subban 2012).
Generally, different parts of flower buds are loaded with antimicrobial and other compounds to protect the vital reproductive process. Several chemical constituents with novel structures and bioactive moieties, viz., aliphatic hydrocarbons, stigmasterol, alkaloids, phenolics, flavonoids, isatin and terpenoids, have been reported from the flowers (Rane et al. 2001; Wong and Tie 1995). Phytochemical principles similar to those reported from Couroupita flowers have been implicated in antimicrobial activity, reduction of metal ions and stabilization of AgNPs in other plant systems (Mohanpuria et al. 2008). In the light of the above background, we hypothesize that potential antimicrobial formulations can be developed by combining the antimicrobial activity of phytochemicals with their potential to synthesize AgNPs. Here, we report efficient synthesis of AgNPs using Couroupita flower bud extract and their synergistic antibacterial activity for the first time.
Preparation of aqueous extract and synthesis of AgNPs
Flower buds (5, 10 and 15 days old) were collected from the plant growing in the Botanical Garden of Sri Venkateswara University, Tirupati. The age of the flower buds, and the concentration and volume of extract relative to the silver nitrate solution, were optimized for the synthesis of AgNPs; based on preliminary results, 5-day-old buds, a 20% (FW/vol) extract and 0.01 ml of extract per 30 ml of AgNO3 were chosen. The aqueous extract was prepared by boiling 5 g of flower bud pieces in 25 ml of deionized water for 5 min, and the filtrate was used as the reducing and stabilizing agent. In a typical AgNPs synthesis reaction, 0.01 ml of flower bud broth was added to 30 ml of 10^-3 M AgNO3 and kept in the dark at room temperature. The reaction leading to AgNPs synthesis was monitored visually, i.e., by the appearance of a yellowish-brown color, and by its spectral characteristics using a UV spectrophotometer.
Characterization of AgNPs
The reduction of Ag+ to Ag0 was monitored periodically by UV-visible absorption spectroscopy (Jasco Corp.) in the range of 300-700 nm. For the spectral analysis, the reaction mixture was diluted with Milli-Q water (1:9 ratio) and measurements were carried out as a function of reaction time at room temperature. The colloidal solution of AgNPs was centrifuged for 15 min at 15,000 rpm. This process was repeated five times by redispersing the pellet into Milli-Q water; the final pellet was air dried, and this dried powder of AgNPs was used for further analysis.
A thin film of the dried AgNPs powder on a glass substrate was used to obtain X-ray powder diffraction data on a PANalytical X'Pert PRO diffractometer. Scanning was done in Bragg-Brentano geometry using the step-scan technique and a Johansson monochromator to produce pure Cu Kα radiation (1.5406 Å; 45 kV, 40 mA) in the range of 10° to 90° at a rate of 2° min−1. The crystallite size was calculated using the Debye-Scherrer equation.
TEM images were acquired and measurements were performed on a JEM-2100 (JEOL, Germany) operated at an accelerating voltage of 300 kV. The selected area electron diffraction (SAED) pattern was obtained by directing the electron beam perpendicular to one of the spheres.
A thin film of the dried AgNPs sample on a silicon cover glass was analyzed using an atomic force microscope (NT-MDT AFM NEXT, Germany). The elemental nature of the nanosilver sample was analyzed through EDX using an Oxford Inca Penta FET X3 EDX instrument connected to a Carl Zeiss EVO MA 15 scanning electron microscope.
FTIR measurements of powdered samples of AgNPs and flower bud extract were carried out on a VERTEX K-ALPHA FT-IR spectrophotometer (Bruker, Germany). Spectra were taken in the wavenumber range of 500-4000 cm−1 without KBr pellets. Antibacterial activity of silver nitrate, flower bud extract and AgNPs was assessed following the Kirby-Bauer disc diffusion method. Bacterial strains at log phase (10^8 cfu/ml) were standardized against McFarland's standard and swabbed onto Mueller Hinton Agar (MHA) plates. For the preparation of discs, 20 µl (50 µg/ml) of each test solution was used, along with the standard drug gentamicin (10 µg) for comparison. Cultures were incubated at 37 °C for 24 h and the zone of inhibition was measured with a MIC scale. Triplicates were maintained for each treatment. The results were subjected to one-way ANOVA followed by Dunnett's test (**P < 0.05).
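As an illustration of how this statistical step could be carried out, the sketch below fits a one-way ANOVA on zone-of-inhibition diameters and then runs Dunnett's comparisons against a reference treatment using the multcomp package; it is not the authors' code, and the data values, group names and choice of reference group are hypothetical.

```r
# Illustrative sketch: one-way ANOVA followed by Dunnett's test on
# hypothetical zone-of-inhibition data (mm).
library(multcomp)

zones <- data.frame(
  treatment = factor(rep(c("extract", "AgNO3", "AgNPs", "gentamicin"), each = 3)),
  zone_mm   = c(10, 11, 9,  12, 13, 12,  23, 24, 23,  22, 23, 22)  # hypothetical
)
# Relevel so Dunnett's test compares every treatment to the flower bud extract
zones$treatment <- relevel(zones$treatment, ref = "extract")

fit <- aov(zone_mm ~ treatment, data = zones)
summary(fit)                                             # one-way ANOVA
summary(glht(fit, linfct = mcp(treatment = "Dunnett")))  # Dunnett comparisons
```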
Results and discussion
Synthesis and UV-visible analysis

Synthesis of AgNPs by reduction of silver ions by the phytoconstituents of the extract was initially observed as the appearance of the characteristic yellowish-brown color, which is due to excitation of the surface plasmon resonance (SPR) phenomenon (Mulvaney 1996).
The intensity of the color increased as more and more silver ions were reduced with reaction time, and the solution attained a stable dark chocolate-brown color at 72 h.
The progress of silver ion reduction was monitored by UV-vis spectroscopy in the wavelength range from 300 to 700 nm at periodic intervals (1 min to 72 h) (Fig. 1); synthesis was further confirmed by the characteristic SPR band at 415-420 nm.
The height of the peak increased with reaction time and was stable at 72 h, with an absorption maximum at 420 nm. This absorption maximum is a characteristic feature of AgNPs, as reported for chemical (Kong and Jang 2006) and biological synthesis methods (Logeswari et al. 2012). Generally, the SPR bands are influenced by the size, shape, morphology, composition and dielectric environment of the prepared nanoparticles (Ahmad et al. 2010). A single, narrow absorption peak centered at 420 nm indicates uniform spherical AgNPs, as expected according to Mie's theory (Mie 1908).
There was no obvious change in colour intensity, spectral peak position and optical density of the colloid, when monitored at regular intervals over a period of 4 months. This confirms the colloidal stability and uniformity of the hydrosol, as reported earlier in green synthesis studies (Pasupuleti et al. 2013).
The efficiency of a plant extract in the synthesis of AgNPs depends on the phytochemical composition and concentration of the extract, the ratio of plant extract to AgNO3, the time taken for initiation and completion of synthesis, and the stability of the product. The extract of 5-day-old buds was found to be efficient in terms of the ratio of extract to silver solution (1:3000), which is severalfold lower than in earlier reports (Usha Rani and Rajasekharreddy 2011), and in terms of the rate of reduction. Hence, Couroupita flower bud extract is more efficient in the reduction of silver ions and stabilization of the resulting AgNPs than in many earlier green synthesis reports (Logeswari et al. 2012). This may be due to a high concentration of reducing agents, or to more efficient reducing compounds and stabilizing agents present in the extract at that particular developmental stage of the buds.
Morphological characterization
The powder XRD pattern of AgNPs revealed a total of 10 peaks (Fig. 2). The Bragg reflections of the four major peaks at 2θ values of 38.314, 44.405, 64.400 and 76.506 correspond to the (111), (200), (220) and (311) crystallographic planes. The remaining minor peaks are reflections of crystalline organic molecules adsorbed on the surface of the AgNPs. The XRD pattern obtained was consistent with earlier plant-based synthesis reports (Murugan et al. 2014). The sizes of the AgNPs were determined by estimating the full width at half maximum (FWHM) of the most prominent peaks in the XRD pattern using the Debye-Scherrer equation, and the average crystallite size was 17 nm.
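For reference, the Scherrer relation used for this estimate has the standard form

$$D = \frac{K\lambda}{\beta\cos\theta}$$

where D is the crystallite size, λ the X-ray wavelength (0.15406 nm for Cu Kα), β the FWHM of the peak in radians and θ the Bragg angle. As an illustrative, back-calculated example (the shape factor K ≈ 0.9 and the FWHM value are assumptions, not figures reported in the paper): for the (111) peak at 2θ ≈ 38.3° (θ ≈ 19.2°), a FWHM of roughly 0.5° (≈ 0.0087 rad) gives D ≈ (0.9 × 0.15406)/(0.0087 × cos 19.2°) ≈ 17 nm, of the same order as the reported average.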
The TEM micrograph showed that the phytosynthesized AgNPs were small, monodisperse and spherical in shape, which is consistent with the AFM image. The size of the AgNPs, as seen in the image, ranged from 5 to 30 nm with an average of approximately 17 nm. These particle size measurements are also in agreement with the values estimated from the AFM and XRD data. The SAED pattern (Fig. 3b) revealed the distribution and crystalline nature of the particles in the focusing zone.
The surface morphology of the synthesized AgNPs can be better visualized and understood from their three-dimensional topography. The AFM 3D image (Fig. 4a) of the AgNPs depicts uniform spherical morphology and colloidal nature (Fig. 4b). Most particles fell within the size range of 2.8 to 35 nm, with an average of 17.396 nm, as shown in the particle size histogram (Fig. 4b). The sizes of the AgNPs are in agreement with the TEM image and XRD pattern. This size range could be due to capping of the AgNPs by various compounds present in the flower bud extract. Previous studies have shown similar variation in the size of AgNPs (Nabikhan et al. 2010). The presence of elemental silver in the colloid was confirmed by the EDX peak pattern in the silver region (Fig. 5).
Phytochemical analysis by FTIR
Metallic nanoparticles generated through phytosynthesis are generally stabilized by phytochemicals through molecular interaction with metal surfaces. The nature of these molecular interactions can be studied using FTIR analysis, and various capping agents have been suggested based on the reference peaks of functional groups in the literature. As shown in Fig. 6b, there were only small changes in the spectrum of the AgNPs compared with the flower bud extract spectrum (Fig. 6a). The peaks observed in the flower bud extract (Fig. 6a) at 3740 (O-H, alcohol), 1563 (N-H, amide), 1181 and 1034 (C-O, ether), and 2312 (P-H, phosphine) were shifted (Fig. 6b) to lower wavenumbers of 3739 (O-H), 1553 (N-H), 1029 (C-O) and 2309 (P-H), respectively, in the FTIR spectrum of the AgNPs, whereas the peak at 1316 (C-N, amine) shifted to a higher wavenumber, i.e., 1327 (C-N). The FTIR peak at 1449 (C-H, alkane) disappeared and a new peak at 1512 (N=O, nitro) appeared in the FTIR spectrum of the AgNPs. The similarity of the two spectra and the subtle shifts observed suggest interaction of these functional groups with the nanoparticle surface. Phytochemicals carrying functional groups that give these characteristic FTIR peaks, namely phenols, flavonoids, stigmasterol and aliphatic hydrocarbons, have been reported from Couroupita guianensis flowers (Prabhu and Subban 2012; Rane et al. 2001; Wong and Tie 1995). Phenolic compounds are known to have a high electron-donating capacity, which results in the formation of H radicals that subsequently reduce silver ions (Ag+) to nanosized silver (Ag0). The reducing function of polyphenols in the synthesis of AgNPs has also been reported earlier (Dibrov et al. 2002). Phenolics possess hydroxyl and carboxyl groups and are able to bind to heavy metals.
The two peaks representing amide-I bands are characteristic of proteins, which are responsible for the reduction and stabilization of AgNPs, as reported earlier (Gole et al. 2001). Proteins bind to the AgNPs through the free amino group in cysteine residues (Sivaraman et al. 2009). The FTIR spectral characteristics of the stabilized AgNPs, together with the different sizes observed in TEM and AFM, suggest that several phytoconstituents, such as polyphenolic compounds, flavonoids and proteins, are involved in the synthesis and stabilization of the nanoparticles.
Possible mechanism of AgNPs synthesis
Quantitative analysis of the flowers recorded a high amount of quercetin (Prabhu and Subban 2012), and its high reducing potential has also been reported (Zhang et al. 2011). It is thus possible that quercetin acts as a reducing agent and is oxidized by AgNO3, resulting in the formation of silver nanoparticles. We therefore propose the overall reaction shown in Fig. 7, involving reduction of Ag(I) to Ag(0) coupled with catechol oxidation and subsequent crosslinking of the corresponding quinone and quercetin. The redox reaction shown in the reaction scheme indicates the production of two protons per catechol.
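A schematic way to write this proposed reaction, consistent with the stated release of two protons per catechol unit (the stoichiometry is shown for the catechol moiety of quercetin only and is an illustrative summary rather than the full scheme of Fig. 7), is

$$2\,\mathrm{Ag^{+}} + \text{catechol} \longrightarrow 2\,\mathrm{Ag^{0}} + o\text{-quinone} + 2\,\mathrm{H^{+}}$$

i.e., oxidation of the catechol group to the corresponding o-quinone donates two electrons, reducing two silver ions and liberating two protons.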
Antibacterial activity of silver nanoparticles
The antibacterial activities of the AgNPs, AgNO3, flower bud extract and gentamicin against the tested Gram-positive and Gram-negative bacteria are presented as average inhibition zone values (Table 1). The degree of sensitivity varied with the bacterial species and was comparable with that of the standard drug gentamicin at the test concentration (Fig. 8).
The AgNPs showed strong inhibitory activity against the four Gram-positive bacterial species compared with AgNO3 and the flower bud extract. The maximum inhibition zone, 24 ± 0.8 mm, was observed for Micrococcus luteus, closely followed by Bacillus subtilis (23 ± 0.5 mm) (Table 1). Moreover, for Micrococcus luteus and Bacillus cereus, the AgNPs displayed higher bactericidal activity than gentamicin, and they were equally effective against Staphylococcus aureus. Notwithstanding, for B. subtilis, gentamicin showed significantly higher inhibition than the AgNPs. Gram-negative bacteria were also significantly inhibited by the AgNPs compared with AgNO3 and the flower bud extract, while gentamicin displayed stronger bactericidal activity than the AgNPs against three of the Gram-negative bacteria. Only Klebsiella pneumoniae was less sensitive to gentamicin than to the AgNPs, AgNO3 and the flower bud extract.
Even though the antimicrobial activity of silver has been known for over a century, an exact mechanism of action has not yet been elucidated. Many possible mechanisms have been suggested in the literature, including disruption of the proton gradient by binding to phospholipids associated with the proton pump of bacterial membranes (Sivaraman et al. 2009), DNA unwinding, inhibition of cell division, damage to bacterial cell envelopes and interference with hydrogen-bonding processes, leading to a range of effects. The significant bactericidal activity of the flower bud extract against the tested bacterial species is due to the presence of a number of antibacterial compounds such as phenols, flavonoids, aromatic compounds, isatin and indirubin (Wong and Tie 1995). Varied mechanisms have been suggested in the literature for the antibacterial activity of similar compounds reported from the blossoms of Couroupita. Plant phenolic compounds have a range of bioactivities, such as antibacterial, fungicidal, antiviral, antimutagenic and anti-inflammatory activities (Jayaraman et al. 2010), which are attributed to their hydroxyl (OH) groups. Earlier reports of stigmasterol, aliphatic hydrocarbons and quercetin from the flowers (Prabhu and Subban 2012; Rane et al. 2001) also support the observed antimicrobial activity of the extract. Phenolic compounds are known to inhibit enzymes through interaction with sulphydryl groups and to bind proteins nonspecifically (Mason and Wasserman 1987). Quinones, major plant phenolic compounds, complex irreversibly with nucleophilic amino acids present in proteins, leading to inactivation and loss of their function (Stern et al. 1996). The most common targets are surface-exposed adhesins, cell wall polypeptides and membrane-bound enzymes. Quercetin, which belongs to the flavonoid class, is known to inhibit E. coli gyrase B by binding to the ATP-binding site (Plaper et al. 2003) and to DNA, inducing enzymatic DNA damage (Austin et al. 1992). Quercetin also increases the permeability of the bacterial inner membrane and causes loss of membrane potential (Mirzoeva et al. 1997). Multiple and overlapping modes of action of these compounds present in the extract are responsible for the antimicrobial activity. The enhanced antibacterial activity of the phytochemically stabilized AgNPs is due to the combined effect of the nanosilver form and the associated antimicrobial plant compounds. This can be explained by the increased surface-area-to-volume ratio of nanoparticles, which provides a larger contact area with bacteria. Hence, the bactericidal property of the nanoparticles is size-dependent: the larger the surface area, the greater the antibacterial activity (Jeong et al. 2005).
As with silver ions, a number of mechanisms have been offered in the literature for the disinfectant activity of AgNPs. The proposed mechanisms include depletion of intracellular ATP through destabilization of the outer membrane and rupture of the plasma membrane (Lok et al. 2006), and blocking of respiration by nanosilver reacting with sulfhydryl (-S-H) groups along the cell wall to form R-S-S-R bonds, causing bacterial cell death (Kumar et al. 2004; Morones et al. 2005). Further, on penetration, AgNPs inflict more damage to bacteria by interacting with sulfur- and phosphorus-containing compounds such as DNA (Sathish).
Phytochemicals are reported to have the capability of increasing the susceptibility of bacteria to various drugs (Jayaraman et al. 2010). For example, epigallocatechin gallate enhances the action of tetracycline against resistant Staphylococcus isolates by impairing tetracycline efflux pump activity and increasing intracellular retention of the drug (Pal et al. 2007). In the present investigation, the increased sensitivity of bacteria to the AgNPs is due to the overlapping actions of silver and phytochemicals such as quercetin, which bind to the cell membrane, inhibit enzymes, and bind to proteins and DNA, resulting in an additive mode of action. Damage to the cell membrane by AgNPs has been shown by TEM analysis (Pal et al. 2007). The difference in the antimicrobial activity of AgNPs between Gram-positive and Gram-negative bacteria is due to the difference in the molecular make-up of their cell walls. Because of the lipopolysaccharide barrier in Gram-negative bacteria, their general susceptibility is low, as it protects them against toxins and chemicals. The higher sensitivity of Gram-negative bacteria to gentamicin than to AgNPs may be due to their differential modes of action, as the antibiotic inhibits protein synthesis while silver fails to disrupt thick cell walls effectively. The difference in the antimicrobial activity of AgNPs among Gram-positive and Gram-negative bacteria may also be due to species/strain variation in uptake, tolerance mechanisms and general susceptibility. This is evident from the varying sensitivities of Bacillus subtilis and Bacillus cereus.
Thus, AgNPs fabricated by phytosynthetic methods enhance the antibacterial potential of low-toxicity phytochemicals and also reduce the toxicity of silver towards non-targets. Silver-phytochemical synergistic combinations in the form of nanoparticles have potential therapeutic value, as the antibacterial effect is achieved with lower concentrations of silver and phytochemicals. The overlapping and multiple mechanisms of bacteriostatic/bactericidal action of silver and phytochemicals certainly delay the emergence of resistant bacteria.
Conclusions
The phytochemicals, viz., phenolic compounds, flavonoids and proteins, present in the aqueous extract of Couroupita flower buds achieved efficient reduction of silver ions and stabilization of the nanoparticles. The average size of the spherical nanocrystals was 17 nm, and the monodisperse AgNPs, which combine the nano form of silver with a phytochemical coating, displayed synergism in antimicrobial activity.
A lesson from MMR: is choice of vaccine the missing link in promoting vaccine confidence through informed consent?
ABSTRACT A recent study suggests that vaccine hesitancy amongst key demographics – including females, younger individuals, and certain ethnic groups – could undermine the pursuit of herd immunity against COVID-19 in the United Kingdom. At the same time, the UK Joint Committee on Vaccination and Immunisation (JCVI) indicated that it will not facilitate the choice between available COVID-19 vaccines. This paper reflects upon lessons from the introduction of the UK's combined Measles, Mumps and Rubella (MMR) vaccine strategy of the 1980s, when Member of Parliament Miss Julie Kirkbride argued that had parents been allowed to choose between vaccine variants, then the crisis of low herd immunity – and subsequent outbreaks – could have been avoided. This paper explores this argument, as applied to the COVID-19 vaccination strategy, by considering how three key elements of informed consent – disclosure of risk, benefit, and reasonable alternatives – may be employed to tackle vaccine hesitancy and build vaccine confidence.
The novel and highly transmissible SARS-CoV-2 virus responsible for Coronavirus Disease 2019 has infected over 261 million people globally and claimed more than 5.2 million lives (Johns Hopkins University & Medicine, Coronavirus Resource Center, 2021). Symptoms range from mild disease to severe acute respiratory distress. The virus is also associated with "long-COVID" – a chronic, multi-systemic, vascular dysfunction linked to a range of conditions including chronic fatigue, dyspnea, insomnia, palpitations, impaired male fertility and mental health conditions (Huang et al., 2021). The United Kingdom (U.K.) government was the first to announce that it had granted temporary approval for a vaccine against SARS-CoV-2 in late 2020. Whilst vaccine uptake has been high amongst the general UK population, a large-scale study by Robertson and colleagues in 2021 indicates that vaccine hesitancy – " . . . [the] reluctance or refusal to vaccinate" (World Health Organisation, 2019) – remains prevalent amongst certain demographics (Mahase, 2020). Such hesitancy can threaten to undermine the high levels of community vaccine coverage required to reduce viral transmission, protect the vulnerable, and minimize the risk of outbreaks.
In the late 1990s, vaccine hesitancy peaked when the UK replaced individual vaccines with a combined, triple "Measles, Mumps and Rubella" (MMR) vaccine (World Health Organisation, 2019). At the time, Member of Parliament (MP) Miss Julie Kirkbride asserted that had parents been afforded a choice between the combined triple vaccine and equivalent single vaccines, then the "[subsequent] crisis in herd immunity" could have been averted (U.K. House of Commons, HC Deb, 2002). This paper looks at lessons from the MMR vaccine controversy and questions whether vaccine choice could help improve COVID-19 vaccine confidence. In doing so, it addresses the principles of informed consent which may support this proposition, namely that, as a "prophylactic" form of medical treatment, patients should be informed of risks, benefits, and reasonable alternatives (Public Health (Control of Disease) Act, 1984). Vaccines work by priming the immune system so that it can mount a stronger immune response should it encounter the actual pathogen (Chaplin, 2010). Three COVID-19 vaccines were initially granted temporary, accelerated authorization for use in the UK's vaccination program: those from AstraZeneca, Pfizer and Moderna (Regulation 174(a), Human Medicines Regulations, 2012; Medicines and Healthcare Products Regulatory Agency (MHRA), 2021a, 2021b, 2021c). Whilst the AstraZeneca vaccine re-deployed existing, previously licensed technology against the novel virus, the vaccines from Pfizer and Moderna introduced a new, previously unlicensed, form of "messenger RNA" (mRNA) vaccine technology. The mRNA vaccines direct the body's own cells to make harmless viral protein replicas to elicit a similar immune response (see Table 1; Medicines and Healthcare Products Regulatory Agency (MHRA), 2021c; Falconbridge & Sandle, 2020). However, some feared that this amounted to manipulation of DNA (Centres for Disease Control (CDC), 2021, November 3rd). The rapid and "temporary" authorization of these new COVID-19 vaccines – particularly those employing novel mRNA technology in lieu of long-term safety data – has raised the question of whether they amount to an experimental form of medical treatment (Anand & Stahel, 2021). The Medicines and Healthcare Products Regulatory Agency refutes this claim, maintaining that it " . . . [does] not consider these vaccines to be experimental [. . . as . . .] [t]he main efficacy and safety results for the Phase I, II and III trials have been submitted . . . [and deemed] sufficient"; however, concerns persist (Medicines and Healthcare Products Regulatory Agency, 2021). The MHRA's counterpart in the United States – the Food and Drugs Administration (FDA) – has similarly granted "emergency use authorisation" (EUA) for the vaccines, which "makes a product available to the public based on the best available evidence" (Food and Drugs Agency (F.D.A; U.S.), 2020). Whilst "temporary authorisation" and "EUA" status do not equate to experimental status, the public's mere perception that they do may have profound implications, particularly for minority groups. Sims and Lacks (2021) explain that the legacy of the Tuskegee experiments – when the U.S. government sponsored experiments conducted on Tuskegee men – continues to fuel justifiable vaccine hesitancy amongst Black Americans to this day (Sims & Lacks, 2021).
Such hesitancy is noted amongst ethnic minorities in both the US and the UK. Black and minority communities are not only disproportionately affected by COVID-19 infection but also have the lowest levels of trust in the COVID-19 vaccines (Laurencin, 2021; Robertson et al., 2021). According to the large-scale study by Robertson and colleagues, vaccine hesitancy is as high as 71.8% amongst Black demographics in the UK. Younger demographics are also up to six times more likely to be hesitant than those aged 75 or over, which may be linked to fears of infertility or miscarriage (Moodley et al., 2021; Robertson et al., 2021). Such hesitancy is perhaps fueled by a " . . . spike in conspiracy content . . . " on social media platforms, which shows that "[t]he dominant coronavirus vaccine narratives . . . [now focus upon] . . . discussion of political motives . . . and . . . impact [upon] personal liberties" and associated misinformation, rather than the protective benefits of vaccination (De Graaf et al., 2020; Sesa et al., 2021). Although there remains a "general willingness" to be vaccinated in the UK, it is estimated that vaccination rates must reach between 67% and 80% of the population for herd immunity to be attained (Randolph & Barriero, 2020).
Table 1. Summary of vaccine types and method of development in relation to the initial vaccines which were granted temporary authorization in the United Kingdom as of January 2021.
Pfizer/BioNTech (Medicines and Healthcare Products Regulatory Agency (MHRA), 2021a): a new generation of "messenger RNA" (mRNA) vaccine that works by delivering a set of instructions – mRNA – directly to host cells, directing them to produce SARS-CoV-2 surface proteins which elicit an immune response.
Oxford/AstraZeneca (Medicines and Healthcare Products Regulatory Agency (MHRA), 2021b): the Oxford/AstraZeneca COVID-19 vaccine – ChAdOx1-S – was developed using a traditional method by re-deploying existing research toward the COVID-19 effort. It uses a genetically modified chimpanzee adenovirus to express SARS-CoV-2 surface proteins which trigger immunity.
Moderna (Medicines and Healthcare Products Regulatory Agency (MHRA), 2021c): a new generation of "messenger RNA" (mRNA) vaccine that works by delivering a set of instructions – mRNA – directly to host cells.
Failure to reach these levels could lead to waves of COVID-19 re-emergence and further re-infection. Such a problem is only likely to get worse given the increasing likelihood of annual – or bi-annual – vaccination boosters (Torjesen, 2021a). Robertson and colleagues (2021) urge that strategies be developed to boost herd immunity amongst the identified demographics. Parmet (2005) asserts that vaccine choice – a central tenet of informed consent – can be a useful public health tool in promoting vaccine trust and confidence.
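As a point of reference for where figures in this range come from, the classical herd immunity threshold for a homogeneously mixing population with a fully protective vaccine (a simplifying assumption, not a claim made by the cited authors) is

$$p_c = 1 - \frac{1}{R_0},$$

so that a basic reproduction number of $R_0 \approx 3$ gives $p_c \approx 67\%$, while $R_0 = 5$ gives $p_c = 80\%$; lower vaccine effectiveness or waning immunity would push the required coverage higher still.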
Reinterpreting autonomy
As prophylactic treatment, vaccines are subject to the same informed consent requirements as any other medical treatment (s.45E(2), Public Health (Control of Disease) Act, 1984; Royal College of Surgeons of England, 2018). Informed consent is integral to upholding patient autonomy, the etymological root of which derives from the Greek "autonomos", meaning "self-law" or "self-governance", giving the impression of a purely individualistic principle. However, when interpreted as a relational concept, autonomy may be of public health utility. Indeed, whilst some liberals are literal in interpreting autonomy as that which is free from all external influence (Stanford Encyclopedia, n.d.), even liberal philosophers such as Kant concede a relational dimension to the principle. According to Kant, an autonomous decision must be rational in accordance with the Categorical Imperative, which holds that one must only act " . . . according to that maxim by which you can at the same time will that it should become a universal law" (Kant, 1785). Similarly, Mill – in his famed publication "On Liberty" – concedes that whilst individuals should be free to pursue their own interests, they may rely upon others to warn them of risk (Mill, 1998). Mill employs the "poison warning label" analogy and that of the dangerous bridge as examples of why it is reasonable to challenge an individual's seemingly irrational decision-making (Mill, 1998). Communitarians give greater recognition still to the relational aspects of autonomy, an approach that was evident in some US cities, such as New York, where greater emphasis was placed upon the collective benefit of vaccination (NYC.gov, 2020). This was also evident in the UK, with its long history of healthcare solidarity, where the vaccination campaign focused on slogans like "save the NHS" and "protect the elderly" (NHS Rotherham, Doncaster and South Humber, n.d.). However, individualistic interpretations of autonomy have become increasingly politicized during the pandemic. This was particularly evident in parts of the US, where partisanship played a key role in determining attitudes toward public health measures, with autonomy centered upon perceived coercion, restrictions on freedoms and implications for the individual alone (Ye, 2021). In contrast, by introducing a relational caveat, Kant's relational approach to Liberalism requires one's own actions to be applicable to all, so that individuals are held to the same standards as others in society during decision-making. It is in this way that autonomy can have public health utility – whilst maintaining the individual's right to come to their own decision, it incorporates wider considerations such as societal risk and community benefit (O'Neill, 2020).
Public health utility of informed consent
According to Parmet (2005), " . . . [by] respecting choices, however broad or limited they may be, informed consent provides individuals and communities with the respect and knowledge necessary for their acceptance and support of public health procedures" (Parmet, 2005, at 107). Parmet emphasizes that informed consent is a particularly important tool in circumstances whereby "emergency approval" underpins vaccine use in a public health emergency – as is the case with the COVID-19 vaccines. The WHO also recognizes the value of informed consent as a public health tool by asserting that practitioners " . . . remain the most trusted advisor[s] and influencer[s] of vaccination decisions and [therefore] they must be supported to provide trusted credible information on vaccines" (World Health Organisation, 2019; Figueiredo et al., 2020). The requirements of valid informed consent were determined in Montgomery v Lanarkshire Health Board (2015), when Lord Reed clarified that doctors are under a duty to " . . . take reasonable care to ensure that the patient is aware of any material risks involved in any recommended treatment, and of any reasonable alternative or variant treatments . . . " (at 87). This creates a legal duty to "involve patients in decisions relating to their treatment" which, in the case of vaccination, requires disclosure of benefits, material risks and reasonable treatment alternatives.
"Relational" benefits of vaccination
General information pertaining to the benefits, material risks and reasonable treatment alternatives is outlined by the MHRA and set out in Table 2 (MHRA, 2021a, 2021b, 2021c). In applying a relational approach to autonomy, practitioners should disclose both the individual and the collective benefits of vaccination. For the individual, vaccine efficacy in reducing severe infection and hospitalization stands at around 62-98% (see Table 2). The Oxford/AstraZeneca vaccine has been associated with the lowest efficacy of the three vaccines across some demographics, which has led some European countries to favor the other vaccine types – this may have a similar influence upon individuals who may seek greater coverage (European Medicines Agency, 2021; Kington, 2021). Disclosure of benefit would also extend to providing information about the link between immunization and the attainment of herd immunity, whilst recent pre-publication findings from Israel also suggest that the Pfizer vaccine reduces transmission of COVID-19 by up to 90%, which confers a societal benefit in protecting the vulnerable (Lubell, 2021).
If it is accepted that relational autonomy may be of public health benefit – by acknowledging that individuals depend upon healthcare practitioners for "support and assistance" – then this relational aspect of consent can potentially be enhanced to promote utility (MacLean, 2006). Whilst the law does not require decision-making to be rational, it does require that patients have fully understood the information provided. MacLean, however, argues that by questioning the rationality of decision-making, healthcare practitioners can ensure understanding and so better protect autonomous decision-making, which is founded upon concepts of capacity and competence (MacLean, 2006). It is to this end that MacLean proposes a model of mutual persuasion. Mutual persuasion involves information disclosure from both the patient (e.g., medical history or symptomology) and the practitioner (e.g., treatment information) that is not purely informative, but instead involves active dialogue that can challenge misconceptions – thus ensuring understanding – and provide a platform for patient and practitioner to persuade the other of their stance, ensuring that decision-making is both relational and informed (MacLean, 2006). Such persuasion is not to be confused with coercion – which would invalidate consent – as, with persuasion, the ultimate decision lies with the patient. Instead, it seeks to enhance understanding and so support and enhance informed consent.
"Relational" risks of vaccination
Whilst the benefits of treatment are more easily ascertainable, risk is determined according to a test of materiality. A material risk is that which " . . . in the circumstances of the particular case, a reasonable person in the patient's position would be likely to attach significance to the risk, or the doctor is or should reasonably be aware that the particular patient would be likely to attach significance to this risk" (Montgomery v Lanarkshire Health Board, 2015, at 87). The proviso that material risk may also pertain to that which the practitioner should "reasonably be aware of" necessitates dialogue rather than a mere monologue of information disclosure – and therefore further supports a relational interpretation of the benefits and risks of treatment. Disclosure of risk would likely involve common side effects, with the test of materiality used to determine whether a reasonable person would consider any rarer side effects to be relevant to the decision-making process. A discussion with the patient will also be needed to determine which additional risks the "particular patient" is likely to attach significance to (Montgomery v Lanarkshire Health Board, 2015, p. 81). This may involve disclosure of rarer side effects such as those identified during ongoing adverse drug reaction (ADR) monitoring (Torjesen, 2021b). For patients who have, for example, had cosmetic dermal fillers, it may be deemed material that the Moderna vaccine has been associated with an immunological reaction to fillers that resulted in peripheral facial paralysis (Munavalli et al., 2021). Long-term data for all of the vaccines are as yet unknown. Discussion pertaining to risk should also include the risk deriving from a failure to treat, which could include susceptibility to COVID-19 infection and its associated risks of long-term complications and mortality across a range of age demographics (Huang et al., 2021; John Hopkins University & Medicine, Coronavirus Resource Center, 2021).
COVID-19 vaccine alternatives
The third item for disclosure according to Montgomery is that of reasonable treatment alternatives; however, at law, ambiguity remains over the legal interpretation and application of "viable treatment alternatives" (Montgomery v Lanarkshire Health Board, 2015, at 87). The term was first used nearly a decade before Montgomery in Birch v University College. Cave and Milo (2020) caution that ambiguity continues to surround the requirement to disclose reasonable treatment alternatives. The leading authority on treatment selection is the case of Bolam v Friern Hospital Management Committee (1957) 1 WLR 582, which applies a test of professional judgment to questions of treatment suitability. The test considers the suitability of treatment according to the opinion of a body of medical opinion, which would, therefore, exclude patients from such matters (Bolam v Friern Hospital Management Committee, 1957, p. 587). Scholars such as Poole (2019) have cautioned against using the Bolam test in relation to questions of treatment choice, arguing that it undermines Montgomery's intent to facilitate greater patient-centric care (Bolam v Friern Hospital Management Committee, 1957; Montgomery v Lanarkshire Health Board, 2015; Poole, 2019). Indeed, Montgomery – which was a landmark departure from the Bolam standard on matters of informed consent – centered upon the negligent non-disclosure of treatment alternatives during labor (Bolam v Friern Hospital Management Committee, 1957; Montgomery v Lanarkshire Health Board, 2015). Cave and Milo (2020) therefore argue that the selection of treatment alternatives should be determined according to Montgomery's reasonable patient test, which promotes greater patient-centricity and therefore choice (Cave & Milo, 2020; Montgomery v Lanarkshire Health Board, 2015). Nonetheless, the recent case of Bayley v George Elliot Hospital NHS Trust (2017) applied a "Bolam gloss" to the issue of "reasonable alternatives" by suggesting that alternatives must be within the knowledge of a reasonably competent clinician, must be accepted practice and must be appropriate, not just possible (Bayley v George Elliot Hospital NHS Trust, 2017, at 99(5); Bolam v Friern Hospital Management Committee, 1957). On these grounds, patients would be informed of appropriate vaccine choices so that they might accept a vaccine in which they have more confidence. Facilitating choice would be particularly beneficial amongst minority demographics who have justifiable vaccine hesitancy owing to historical government-sponsored experimentation such as that seen at Tuskegee. It may also promote confidence amongst younger demographics who may have lingering fears about fertility, providing an opportunity for hesitant individuals to be presented with a "more trusted" option. For "on-the-fence" groups, it is arguable that the ability to select a preferred vaccine type is preferable to outright refusal (Williamson & Glaab, 2018). For determined refusers, the option of another vaccine type could present a golden opportunity to address persistent hesitancy.
Despite the strong legal case in favor of facilitating vaccine choice – and given that the UK Government has explicitly recognized that informed consent to vaccination is required – it is perhaps surprising that the JCVI recently indicated that choice of vaccine would likely not be available because, due to " . . . operational and programmatic reasons . . . [only] one vaccine [type] may be offered . . . " (Joint Committee on Vaccination and Immunisation, 2020). Whilst it is understandable that there are such logistical difficulties during a public health crisis and that, to some extent, individual rights of autonomy may be limited, it is important to recognize that failure to fully uphold informed consent may undermine " . . . efforts to build confidence in vaccination programmes in the longer term", and moves to restrict choice could generate a "counterproductive resistance" (Williamson & Glaab, 2018). Therefore, as far as is possible, public policy should aim to facilitate patient choice of vaccine and improve access.
Empirical data
There is a widespread lack of empirical data specifically addressing whether vaccine choice could influence confidence and uptake. A 2021 joint study by the University of Bristol and King's College London into "Vaccine Confidence, Concerns and Behaviours" suggests that over 50% of the UK population have a preferred choice of vaccine: Pfizer (28%), AstraZeneca (18%), Moderna (6%), and Johnson & Johnson (5%) (Allington et al., 2021a, p. 5). However, in the US – where vaccine choice is currently facilitated – only 65.4% of the population had completed the initial vaccine protocol by March 2022, compared to 72.3% of the UK population, where there is no such choice (Our World in Data, 2022). Whilst this could suggest that choice inadvertently impedes vaccine uptake, it is pertinent to note that, in the US, political views can strongly influence vaccine uptake: according to Albercht (2022), Trump supporters are far less likely to accept vaccination. In the UK, where the political scene is different and there is greater emphasis on healthcare solidarity, the most common reason for vaccine hesitancy is concern over vaccine side effects (60%; Sethi et al., 2021). Therefore, direct comparisons between the US and UK political and healthcare landscapes cannot easily be drawn. However, there are similarities. Vaccine hesitancy is high amongst ethnic minorities in both the US and the UK, and data suggest that the US model facilitating choice could promote increased uptake amongst these hesitant demographics. According to an analysis across 42 US states by Ndugga et al. (2022), 62% of the White, 52% of the Black, 64% of the Hispanic and 84% of the Asian population had received at least a single COVID-19 vaccine dose. This compares with a UK Office for National Statistics (ONS) study from December 2021, which showed that 49.9% of Black African, 66.6% of Black Caribbean and 39.7% of Mixed Ethnicity groups in the 18-29 year age bracket had not received a single COVID-19 vaccine (Office for National Statistics, 2022). There are no directly comparable data across all age groups in the UK at present; however, these preliminary data suggest that uptake may be lower amongst ethnic groups in the UK than in the US, where choice is facilitated. A study by Allington et al. (2021b) into UK "Vaccine Confidence, Concerns and Behaviours" found that those who did not respond to a vaccine invitation were more likely to have vaccine safety concerns (54%) than those who planned to attend their vaccine appointment (30%). Their data indicate that concerns may relate to specific vaccines, and they note a marked decline in confidence in the AstraZeneca vaccine since it was linked to blood clots, with only 15% now preferring this option (Allington et al., 2021a). Given that the majority of the UK population have vaccine preferences which may be influenced by safety or side-effect concerns, it may be argued that facilitating choice could have a positive impact on vaccine confidence. Nevertheless, there is a growing need for specific empirical studies into the impact that choice has on vaccine confidence to fill this evidential gap.
Facilitating choice and improving access
Strategies to improve informed consent and facilitate choice will require more engagement and better infrastructure. However, there are likely to be concerns raised over time and resource pressures. The British Society for Immunology addressed such concerns in launching its "vaccine engagement starts at home" campaign aimed at " . . . address [ing] common questions and concerns . . . " through webinars and social media (British Society for Immunology, 2021). Its aim -to address misinformation -could help lay the foundations of informed consent by ensuring patients have early access to information. Burgess et al. (2021) also encourage policymakers to recognize that community engagement can "accelerate dialogue" and represent a cost-effective way of promoting vaccine uptake. Whilst appointments should include adequate time for informed consent discussions to take place, implementing a process of early supported decision-making can, therefore, help ensure efficient use of time and resources (O'Neill, 2020).
There may be further concern as to the logistics of facilitating vaccine choice in the UK; however, an improved booking and stock management system could also address the already fragmented UK vaccine booking arrangements. During the pandemic, NHS England utilized an online appointment-booking system via an app, whilst NHS Scotland relied upon a letter- or call-based invitation system (Maishman, 2021). In Scotland, this meant that appointments may have been pre-arranged at "hard to reach" destinations, although NHS Scotland state that efforts are made to avoid this (NHS Inform, 2021). Other countries in Europe – such as the Republic of Cyprus – successfully introduced vaccination portals to facilitate both improved vaccine access and choice. Patients registered with the public "General Health System" (GHS or ΓΕΣΥ) could also directly contact a designated call center to seek advice and information on the available vaccine types to assist decision-making. Patients could then choose their vaccine appointment according to a suitable venue, time, and choice of vaccine (Government of Cyprus, Ministry of Interior, 2020). Notably, choice of appointment time and location can also help mitigate against missed appointments by allowing patients to schedule vaccination around work or childcare commitments. Text message alerts were also sent as a reminder prior to the appointment. Cypriot common law is largely based upon the English common law system, and so the relevant common law principles apply (Montgomery v Lanarkshire Health Board, 2015, p. 81). Since the Cypriot model requires the choice of vaccine to be made before the appointment, pre-appointment engagement is all the more crucial. The choice of vaccine has proved highly popular, and the system has been adapted to accommodate growing demand for choice (Chrysostomou, 2021; Rosenbaum, 2021). The software used allows the Ministry of Health to monitor vaccine availability and stock so as to ensure vaccine replenishment and "meet the needs of the population." In the first month of operation, Cyprus was one of the leading EU countries for vaccination (Our World in Data, 2021). Notably, when Denmark suspended use of the AstraZeneca vaccine on clotting fears, the Cypriot portal recorded a marked increase in requests for Pfizer vaccines, a trend suggesting that facilitation of choice may avert outright vaccine refusal when trust in one vaccine is undermined. Arguably, the facilitation of such choice avoided the cancellation of appointments on grounds of safety fears. By contrast, the UK vaccine strategy provides patients with whatever vaccine is available on the day, which could create anxiety and reduced confidence, fuelling hesitancy and subsequently leading to appointment cancellations. Recent reports suggest that vaccines are going to waste under parts of the UK system due to missed appointments, with 60,000 Scottish patients missing their COVID-19 vaccination appointments in March 2021 due to delayed postal deliveries (Tapper, 2021; PA Media, 2021). The benefit of an online booking system is that it can adapt to demand – when uptake drops amongst one cohort, the next can be given access to book their appointments, maintaining vaccine distribution. Whilst the Cyprus model is based on a much smaller population, it has already been upgraded and adapted to handle higher levels of use and aimed to facilitate 15,000 appointments per day by 2021 (University of Nicosia, 2021).
If a similar system could be adapted for the UK, it could complement the existing UK vaccination program, support the process of informed consent and promote increased vaccine uptake.
Conclusion
Greater patient engagement must be a priority for public health policymakers if an ongoing COVID-19 vaccination program is to maintain or improve rates of uptake. Studies suggest that vaccine hesitancy remains prevalent in key UK and US demographics, particularly amongst ethnic minorities. Whilst the US has a choice-based vaccine strategy, it has seen lower levels of overall vaccine uptake; however, this figure is likely to be influenced by the US' unique political landscape. Amongst ethnic groups, rates of vaccine uptake actually appear higher in the US than in the UK, which could indicate that choice improves vaccine confidence. In the UK, studies also indicate that most of the population have a vaccine preference and that their perception of vaccine safety may influence attendance at appointments. Informed consent is often considered a construct opposed to the collectivism of public health strategies; however, as a relational construct it provides an opportunity to address misinformation and to facilitate vaccine choice where appropriate. The combined MMR vaccine controversy suggests that vaccine safety fears are often long-lived and that, where choice is facilitated, there is the potential for increased uptake and mitigation of outbreaks. Now is the time to fully embrace informed consent to vaccination as a part of the public health vaccination strategy. The policy and infrastructure models available in other countries provide a template for facilitating vaccine choice in the UK. This could, in turn, promote greater efficiency, reduce vaccine waste, and maximize the roll-out so that herd immunity can be attained more readily.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Analgesic Efficacy of Non-Steroidal Anti-Inflammatory Drug Therapy in Horses with Abdominal Pain: A Systematic Review
Simple Summary The use of non-steroidal anti-inflammatory drugs, which act primarily via the inhibition of COX isoforms, is one of the most common therapeutic means to control abdominal pain in horses. However, these drugs can elicit gastrointestinal side effects. Drugs that are more selective for COX-2 inhibition are considered to cause fewer adverse effects. Despite some physiological effects of the COX-2 isoform, it is mainly induced by inflammatory processes, whereas the COX-1 isoform has a protective role and is considered to be constitutive. Despite the availability of several non-steroidal anti-inflammatory drugs with varying degrees of COX isoform inhibition, non-selective molecules remain the most frequently prescribed non-steroidal anti-inflammatory drugs. This is likely because the analgesic effect achieved by COX-2-selective drugs is not considered sufficient. To date, the scientific evidence concerning the analgesic efficacy of non-steroidal anti-inflammatory drugs in the treatment of abdominal pain in horses remains uncertain. This systematic review showed that the current scientific literature cannot adequately justify the therapeutic choice of one non-steroidal anti-inflammatory drug over another for the treatment of abdominal pain in horses. Therefore, prospective randomised blinded clinical trials are deemed necessary to elucidate the analgesic efficacy of non-steroidal anti-inflammatory drugs in the treatment of abdominal pain in horses. Abstract This systematic review aimed to identify the evidence concerning the analgesic efficacy of non-steroidal anti-inflammatory drugs for the treatment of abdominal pain in horses, and to establish whether one non-steroidal anti-inflammatory drug could provide better analgesia than others. This systematic review was conducted following the "Systematic Review Protocol for Animal Intervention Studies". Research published between 1985 and the end of May 2023 was searched using three databases, namely, PubMed, Embase, and Scopus, using the words equine OR horse AND colic OR abdominal pain AND non-steroidal anti-inflammatory drug AND meloxicam OR flunixin meglumine OR phenylbutazone OR firocoxib OR ketoprofen. Risk of bias was assessed with the SYRCLE risk of bias tool, and the level of evidence was scored according to the Oxford Centre for Evidence-based Medicine. A total of 10 studies met the inclusion criteria. Of these, only one study judged pain with a validated pain score, and a high risk of bias was identified due to the presence of selection, performance, and "other" types of bias. Therefore, caution is required in the interpretation of results from individual studies. To date, the evidence on analgesic efficacy to determine whether one drug is more potent than another for the treatment of abdominal pain in horses is sparse.
Introduction
The use of non-steroidal anti-inflammatory drugs (NSAIDs) is an indispensable aid in the treatment of visceral disorders in horses suffering from colic or post-castration abdominal pain. Over time, new molecules have been developed to limit NSAID-related side effects while preserving their analgesic and anti-inflammatory effects. NSAIDs differ in their degree of inhibition of COX isoforms depending on the type of molecule [1]. Recently, research has focused more on molecules with high selectivity towards the inhibition of COX-2 (the inducible isoform), which is responsible for triggering pain and inflammation in response to injury [2]. As such, the functionality of COX-1 (the constitutive isoform), which is responsible for maintaining protective and reparative physiological mechanisms, is preserved [3].
In horse medicine, colic, defined as acute paroxysmal abdominal pain, represents one of the most frequent conditions encountered in clinical practice [4].The treatment, which can evolve either as medical or surgical therapy, will most likely involve the use of NSAIDs [5,6].A recent study, examining the proportion of NSAID prescriptions in equine practice, found that the most frequently prescribed NSAIDs for the treatment of colic in the UK, USA, and Canada were flunixin meglumine and phenylbutazone (traditional NSAIDs) [7].Furthermore, earlier studies have confirmed similar findings in South Africa [8].
It is interesting to note that the prescribing trend is still formally linked to the use of traditional molecules (non-selective COX isoform) despite the availability of newer molecules such as meloxicam and firocoxib, which are designed to more specifically target the COX-2 isoform to avoid undesirable gastrointestinal side effects [9].Apparently, this derives from a certain degree of scepticism towards the analgesic potency of some newergeneration NSAIDs, particularly firocoxib (entirely COX-2-selective) [9].
Considering the high rate of prescriptions of NSAIDs and their role in the analgesia management of abdominal pain in horses, it is crucial to elucidate the analgesic efficacy of the different classes of NSAIDs in this species.Clinical practice should be guided by evidence-based research.Systematic reviews build a connection between medical research and health care practice, and answer clinically relevant questions based on the evidence of all relevant literature regarding a specific research question.
For this reason, the aim of this systematic review is (i) to identify, synthetise, and evaluate the evidence concerning the analgesic efficacy of the NSAIDs available to treat abdominal pain in horses, and (ii) to establish, if possible, whether there is a NSAID that could provide better analgesia compared to others.
Materials and Methods
This systematic review was conducted following the "Systematic Review Protocol for Animal Intervention Studies (SYRCLE)" [10].
Disease/Health Problem and Population/Species
Studies were included if they investigated the effect of NSAIDs in adult horses (aged >6 months) with naturally occurring or experimentally induced colic or post-castration abdominal pain.
Interventions/Exposure
For this review, the administration of at least one of the following NSAIDs was considered as the inclusion intervention: meloxicam and/or flunixin meglumine and/or phenylbutazone and/or firocoxib and/or ketoprofen, with specified dosage and route of administration, in horses suffering from experimentally induced or naturally occurring colic- or castration-related abdominal pain. The administration had to be in a controlled (versus placebo) manner or in a comparative manner (one NSAID of interest compared to another one).
Control Population
For ethical reasons, most modern clinical trials investigating analgesic drugs do not include a control group. Therefore, the current study also included studies with no control group. However, experimental studies conducted with a control group (absence of the selected intervention) were considered in this systematic review.
Outcome Measures
• Pain score after administration of NSAIDs (mandatory)
• Clinical parameters such as heart rate (HR) and respiratory rate (RR) (if present).
Search Method
The investigation was conducted on three databases according to SYRCLE guidelines.
The search strategy was conducted according to the step-by-step search guide [11], and consisted of the following string: (equine OR horse) AND (colic OR abdominal pain) AND non-steroidal anti-inflammatory drug AND (meloxicam OR flunixin meglumine OR phenylbutazone OR firocoxib OR ketoprofen).
This string was adapted according to the search rules/code of each database used, as illustrated in the sketch below. All publications from 1985 to the end of May 2023 were searched.
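To make the adaptation step concrete, the following minimal sketch shows one possible way the generic Boolean string could be assembled and wrapped in database-specific field syntax. It is not part of the original review; the field wrappers and term grouping are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' actual search scripts): building the
# Boolean search string and adapting it to a database-specific wrapper.

base_terms = {
    "species": ["equine", "horse"],
    "condition": ["colic", "abdominal pain"],
    "drug_class": ["non-steroidal anti-inflammatory drug"],
    "drugs": ["meloxicam", "flunixin meglumine", "phenylbutazone",
              "firocoxib", "ketoprofen"],
}

def or_block(terms):
    """Join synonyms with OR, quoting multi-word terms, and wrap in parentheses."""
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

def build_query(wrapper=None):
    """Assemble the AND-combined string; optionally wrap it in database syntax."""
    blocks = [or_block(base_terms[key]) for key in
              ("species", "condition", "drug_class", "drugs")]
    query = " AND ".join(blocks)
    if wrapper:  # e.g. a hypothetical Scopus-style TITLE-ABS-KEY(...) wrapper
        query = wrapper.format(query=query)
    return query

print(build_query())                              # generic Boolean string
print(build_query("TITLE-ABS-KEY({query})"))      # assumed Scopus-style adaptation
```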
Selection of Studies
Two reviewers independently screened the results of the search output. The first selection phase consisted of the evaluation of the titles and abstracts of the studies. Then, the second phase consisted of a careful reading of the full text of the selected papers.
The selected papers were analysed for their strength of evidence according to the Oxford Centre for Evidence-based Medicine [12]. The scoring to assess the quality of evidence consisted of 3 quality levels: the highest quality level (I-LoE) was awarded to papers including evidence from systematic reviews; quality level II (II-LoE) to papers with evidence obtained from properly designed, randomised controlled trials; and quality level III (III-LoE) to papers with evidence coming from non-randomised trials or experimental studies. Discrepancies between reviewers were resolved by including the judgement of a third person, and conclusions were drawn following a critical discussion between the reviewers.
Inclusion criteria:
We included controlled studies on either experimental or client-owned adult horses (aged >6 months) that compared the analgesic efficacy of two or more NSAIDs, or of one or more NSAIDs and a control group, in horses with acute colic/abdominal pain. Only publications in English with at least the abstract and title available were included. Only papers scoring I-LoE, II-LoE, or III-LoE for the quality of evidence were included in this systematic review.
Exclusion criteria: We excluded studies with models of chronic abdominal pain (>6 weeks). Studies with ponies, miniature horses, and donkeys were excluded due to the potential for differing pathophysiological responses to colic. Studies concerning the effect of paracetamol or metamizole were also excluded, as these drugs are identified as non-classical NSAIDs.
Data Extraction and Management
Details of the eligible studies were independently extracted by the two reviewers. Data extracted were the following:
• Authors, title, year of publication, and journal.
The two reviewers independently scored the selected studies regarding the risk of bias using a modified SYRCLE risk of bias tool [13]. Discrepancies were resolved by asking an additional person and with a critical discussion between the reviewers.
The modified tool used to assess the included studies consisted of 10 signalling questions defined to analyse 6 types of bias: selection bias, performance bias, detection bias, attrition bias, reporting bias, and other biases. The review authors' judgements about each risk of bias item, for each included study and each included variable, were expressed as follows: "yes" indicated a low risk of bias, "no" indicated a high risk of bias, and "unclear" indicated an unclear risk of bias. Questions number 4 and 9 were adapted to the needs of this systematic review, while question number 6 was judged as not applicable. A detailed description can be found in Table 1.
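As a rough illustration of how such per-question judgements translate into risk levels and how reviewer discrepancies might be flagged, the following sketch uses abridged placeholder questions; it does not reproduce the exact wording of the modified tool described above.

```python
# Minimal sketch of recording SYRCLE-style risk-of-bias judgements.
# The signalling questions below are abridged placeholders, not the exact
# wording of the modified tool used in this review.

JUDGEMENT_TO_RISK = {"yes": "low", "no": "high", "unclear": "unclear"}

QUESTIONS = {
    1: "Was the allocation sequence adequately generated?",   # selection bias
    2: "Were the groups similar at baseline?",                 # selection bias
    3: "Was allocation adequately concealed?",                 # selection bias
    5: "Were caregivers blinded to the intervention?",         # performance bias
    7: "Were outcome assessors blinded?",                      # detection bias
    # question 6 was judged not applicable; questions 4 and 9 were adapted
}

def risk_profile(answers):
    """Map per-question answers ('yes'/'no'/'unclear') to risk-of-bias levels."""
    return {q: JUDGEMENT_TO_RISK[a.lower()] for q, a in answers.items()}

def discrepancies(reviewer_a, reviewer_b):
    """Return question numbers where the two reviewers disagree (to be resolved
    by an additional person, as described above)."""
    return [q for q in reviewer_a if reviewer_a[q] != reviewer_b.get(q)]

example = risk_profile({1: "no", 2: "yes", 3: "unclear", 5: "no", 7: "yes"})
print(example)   # {1: 'high', 2: 'low', 3: 'unclear', 5: 'high', 7: 'low'}
```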
Results
The total number of papers found was 22 (PubMed), 147 (Scopus), and 116 (Embase). After removing the 41 duplicates, 244 papers were screened for eligibility. The first selection based on title and abstract included 18 papers. Of these, 10 were excluded. After full-text examination (Figure 1), another two studies that met the inclusion criteria were found from the references cited, not detected via the initial string. Finally, a total of 10 studies were included in this systematic review.
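For transparency, the screening-flow counts reported above can be recomputed as simple arithmetic; the following snippet is purely illustrative and adds no information beyond the numbers already stated.

```python
# Quick consistency check of the screening flow reported above (illustrative only).
records = {"PubMed": 22, "Scopus": 147, "Embase": 116}
duplicates = 41

identified = sum(records.values())          # 285 records identified
screened = identified - duplicates          # 244 records screened by title/abstract
title_abstract_included = 18
excluded_after_screening = 10
found_from_references = 2

included = title_abstract_included - excluded_after_screening + found_from_references
print(identified, screened, included)       # 285 244 10
```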
Characteristics of the Included Studies
A total of 10 studies were judged eligible for this systematic review; 4 of them referred to castration-related abdominal pain, and the rest were either experimental or clinical trials regarding colic-related abdominal pain. Detailed information is provided in Table 2. Of the castration studies, one compared the analgesic effects of flunixin, firocoxib, and meloxicam [14], and one those of flunixin, meloxicam, and ketoprofen [15]; another compared the analgesic efficacy of a meloxicam oral suspension with a control group [16], and the last one compared the analgesic effects of butorphanol tartrate and phenylbutazone administered alone and in combination [17].
The other six studies referred to colic-related abdominal pain. Of these, four were experimental studies and two were clinical trials. Three experimental studies investigated the recovery of the mucosal barrier and the analgesic effect in a model of ischemic-injured jejunum: one compared the effect of firocoxib and flunixin [20], another compared the effect of flunixin and etodolac [23], and a third investigated the effect of meloxicam and flunixin meglumine [21]. The last experimental study investigated the analgesic efficacy of meloxicam in a model of low-dose endotoxin-induced pain with lipopolysaccharide (LPS) [22].
Characteristics of the Excluded Studies
A total of eight studies were excluded after full-text evaluation. One was a narrative review [24], and another was an experimental study in ponies [25]. All other studies were excluded due to the lack of a pain score, which represents the main outcome of this systematic review.
Analgesic Effects of NSAIDs on the Treatment of Castration-Related Abdominal Pain
The characteristics and study design features of the castration studies are reported in detail in Table 3.
The study of Gobbi et al., 2020, reported increased stiffness and scrotal swelling scores for two horses in both the meloxicam and the firocoxib group from day 1 to day 3. The flunixin group showed a statistically significantly lower HR compared with the other two groups, while no difference was reported for the RR.
Lemonnier et al., 2022, used a modified post-abdominal surgery pain assessment scale (PASPAS) [26], and showed that there was no significant effect of the NSAIDs on overall pain scores; however, a higher pain score was reached at the second pain assessment, 3.1 h after ketoprofen, compared to flunixin and meloxicam. Physiological parameters were not included in the statistical analysis.
Olson et al., 2015, showed that the median behaviour and visual analogue scores (VAS), as well as stiffness and scrotal swelling scores, were significantly greater in control animals compared to meloxicam-treated animals at all time periods. Physiological parameters were not included in this study. Sanz et al., 2009, reported no significant difference in the numerical rating scale (NRS) and VAS data among groups. However, the VAS scores differed over time in the three groups, with the highest VAS scores evident at 4 and 8 h after surgery. HR and RR did not show significant differences over time or between the three groups.
Analgesic Effects of NSAIDs on the Treatment of Colic-Related Abdominal Pain
Of the colic-related abdominal pain studies, four were experimental and two were clinical trials. The characteristics and study design features of the colic studies are reported in detail in Table 3.
Experimental Studies
In the study by Cook et al., 2009, the behavioural pain scoring system [27] showed significantly higher scores at 4-8 h after surgery in horses in the saline group, compared with horses in the flunixin or firocoxib group. The pain scores of flunixin and firocoxib were not significantly different at any time point. The pain scores at 16 h after surgery were not significantly different between groups or from the scores before surgery. Physiological parameters were not included in this study. Tomlinson et al., 2004, showed that the median behavioural pain score [27] at 2 h was greater in the saline group compared with the others, and no difference was present between the etodolac and the flunixin group. At 18 h, the pain scores had decreased in all groups, with the saline group still showing a greater score compared with the others, and again no difference between the etodolac and the flunixin group. Physiological parameters were not included in this study. Little et al., 2007, reported no significant difference in the total behavioural pain scores [27] between the flunixin and the meloxicam group. The heart rate was not compared between the flunixin and the meloxicam group. In horses treated with flunixin, the heart rate was significantly increased at 8 h compared with preoperative values, while the heart rate was not significantly increased in horses treated with meloxicam at any time point after surgery. The respiratory rate at 16 h after surgery was significantly lower in horses treated with meloxicam, compared with horses treated with flunixin.
Urayama et al., 2018, showed that in the meloxicam group, the pain scores began to rise after 60 min and then remained constant. In the meloxicam group, the behavioural score [27] was significantly lower compared with the saline group at 60, 90, 120, and 180 min. No significant differences in heart rate or respiratory rate were recorded between the two groups (data not shown).
Clinical Trials
No significant difference was detected by Ziegler et al., 2019, in behavioural pain scores [27] between the two groups, and there was no significant difference in the use of additional pain control. However, in the firocoxib group, 9% of horses received additional analgesia, compared with 27% in the flunixin group. Although the relative risk increased threefold, the result was not statistically significant. No significant difference in heart rate was found between the two groups. No data were reported for the respiratory rate.
Naylor et al., 2014, reported that 16% (5/32) of horses receiving flunixin and 32% (9/28) of those receiving meloxicam were administered additional analgesia. There was no effect of the treatment on the behavioural or social pain score [27]. There was an effect of the centre on pain score, with fewer horses at one centre showing signs of gross pain and having significantly lower postural pain scores. When the pain score was broken down into its composite parts, the horses of the flunixin group experienced significantly less pain compared to the horses in the meloxicam group. There was no difference in heart and respiratory rates at admission to the hospital, but, unfortunately, no data were reported for the post-operative period.
Discussion
The present review of the current literature stems from the consideration of why, despite the availability of more recent NSAIDs with supposedly fewer side effects, flunixin is still the most widely used NSAID in the UK, USA, and South Africa [7,8]. Surprisingly, the review did not find scientific evidence supporting this, and research investigating the duration and efficacy of NSAIDs in horses is still sparse [28]. In this systematic review, a total of 184 horses were investigated in castration studies and a total of 175 horses suffering from colic. Despite the widespread use of NSAIDs worldwide and the lengthy research period examined (1989-2023), the total number of horses investigated seems relatively small. Looking at the study design of the selected studies, only 3 out of 10 reported a power analysis for the calculation of the sample size. This reduces the probability that the conclusion of a study reflects the true effect [29]. A practical example is given by the clinical trials of Naylor et al. (2014) and Ziegler et al. (2019), with sample sizes of 60 and 56 horses, respectively, where the authors highlighted that 164 and 500 horses would have been required to show the relevant difference between the groups investigated with 80% power. Indeed, a small sample size coming from single studies warrants a cautious interpretation of the results. Also, the type of study, whether experimental or not, prospective or retrospective, randomised or not, and the randomisation method should influence the interpretation of results. In this systematic review, two out of six studies on colic pain and all four castration pain studies were clinical trials. Of the castration studies, one (Gobbi et al., 2020), because of unclear blinding, was considered III-LoE according to the Oxford Centre for Evidence-based Medicine [12]. An absence of randomisation, or a lack of reliability of the randomisation method, such as the use of a flipped coin reported in Ziegler et al., 2019 (III-LoE), can lead to an overestimation as well as an underestimation of treatment effects [30]. Moreover, attention must also be paid to the question of whether experimental studies can reflect clinical practice. Indeed, castration as an elective non-corrective intervention correlates very well with clinical practice, while experimental models of colic pain are less reliable in mimicking naturally occurring colic pain. In four out of six studies, the researchers used models of induced disease, with vascular ligation of the small bowel (strangulation model) and injection of LPS for the septic colic model. Given the experimental nature of these studies, they probably represent only a vague approximation of natural disease, and therefore the results should be interpreted with extreme caution [31].
Concerning the intervention, 80% of the selected studies used flunixin as an NSAID. Indeed, over time, flunixin has come to be considered a sort of gold standard against which the efficacy of other drugs can be compared. In all castration studies, NSAIDs were used IV at licensed doses and ranges. Only Olson et al., 2015, administered meloxicam orally through an oral suspension, which provided, from 1 h after administration and for 24 h, a plasma concentration exceeding the established concentration for 50% of the maximum response (EC50) of 0.20 µg/mL [32,33]. Gobbi et al., 2020, adopted a firocoxib dose of 0.1 mg/kg once a day without the advised loading dose of 0.3 mg/kg on the first day [9], obtaining good results regarding post-operative analgesia. Sanz et al., 2009, observed no additional analgesic effect in the group receiving phenylbutazone compared to the group without phenylbutazone. This is probably due to the more pronounced analgesic effect of phenylbutazone for orthopaedic conditions [7,34].
In the colic pain studies, flunixin was used at double the licensed dose. In the study of Naylor et al. (2014), both drugs, flunixin and meloxicam, were administered off-label at double the dose indicated by the European Agency for the Evaluation of Medicinal Products. Ziegler et al., 2019, used a double dose of flunixin compared with firocoxib administered at 0.1 mg/kg IV after a non-licensed loading dose of 0.3 mg/kg. However, a pharmacokinetic reason supports this choice, as firocoxib does not reach steady-state concentrations within the first 72 h without a loading dose [35]. Also, in the experimental studies of Cook et al. (2009), Little et al. (2007), and Tomlinson et al. (2004), flunixin was used at an off-label dose of 1.1 mg/kg IV twice daily. Cook et al., 2009, found no difference in the post-operative pain score of firocoxib compared to flunixin, despite the fact that firocoxib was administered at a lower dosage (0.09 mg/kg IV once daily) and without any loading dose, compared to the clinical trial of Ziegler et al., 2019. This raises concern about how reliable the indicated licensed doses are for routine NSAID administration. Tomlinson et al., 2004, compared flunixin with etodolac 23 mg/kg IV twice a day, which is relatively COX-2-selective in horses, with more sustained efficacy for orthopaedic conditions [34]. However, to the authors' knowledge, the administration of etodolac has fallen into disuse over time. Another aspect to consider is the effect of additional analgesia, as shown in the prospective clinical trials of Naylor et al. (2014) and Ziegler et al. (2019), where an additional dose of flunixin before surgery could have influenced the pain score.
Regarding pain assessment, different scales, methods, and intervals were used in the selected studies. Behavioural scales, i.e., the NRS and VAS, were the most represented pain scales. On the other hand, Olson et al., 2015, used a scale adapted to assess post-castration pain [36], while in Lemonnier et al., 2022, horses were evaluated using an adapted post-abdominal surgery pain assessment scale (PASPAS) [26]. From a methodological point of view, the reproducibility, reliability, and validity of the pain scale used, as well as the choice of the observer, can influence the soundness of the results. The choice of the observers affects the reproducibility, which is strongly correlated with intra- and inter-observer reliability [37]. Higher values of intra- and inter-observer reliability indicate a higher precision of the measurements taken by each observer [38]. In fact, the PASPAS is reported to be a reliable tool with low inter-observer variability when expert observers are involved [26]; however, as shown by Lemonnier et al., 2022, inter-observer variability increases drastically when non-expert observers are involved. A high inter-observer variability negatively influences the inter-observer reliability and ultimately the reproducibility of the test. Only 30% of the studies reviewed reported the number of observers and their degree of experience, impairing the reliability of the pain score. Furthermore, only two out of four studies on castration and two out of six on colic pain reported blinding of the pain score assessors to the intervention, resulting in a remarkable increase in the risk of bias. Another important fact is the time interval of pain scoring, which was very variable between studies, with some studies giving a pain score only once a day. Validity represents the ability of a pain score to measure what it is supposed to measure, with minimal inter- and intra-observer variability [37]. To the best of the authors' knowledge, only Lemonnier et al. (2022) adopted a properly validated pain score. Urayama et al. (2019) described the behavioural pain score [27] used in their study as validated, but to the authors' knowledge, this pain score has not been validated. Physiological parameters, such as heart rate and respiratory rate, were also analysed in some studies, even though they are non-specific for the presence of pain, and studies have failed to establish a direct relation between heart rate and the presence or severity of pain [39]. Factors such as ambient temperature, dehydration, excitement, and cardiovascular and/or respiratory disease can trigger a physiological response and increase bias [40]. Bias in clinical trials can be defined as a systematic error that can promote one outcome over another and lead investigators to the wrong conclusions about the effects of selected interventions [41]. In the studies included in this systematic review, selection, performance, and "other" bias were the most frequently encountered types of bias. The first was due to the absence of a clear randomisation method in 70% of the studies, and to the lack of allocation concealment, which is an important step for adequate randomisation [41]. The detection bias arose because the assessors were not blinded to the outcome in 50% of the studies. Moreover, none of the studies specified whether caregivers were also blinded to the selected intervention, generating performance bias. Finally, the "other bias", which represents the main bias for the selected outcomes of this systematic review, was the absence of a validated pain score.
Several limitations are present in this systematic review, such as the choice of the SYRCLE RoB tool [13], the inclusion of experimental studies, and also the string used for the search strategy, which might have led to the loss of some relevant studies. The modified SYRCLE risk of bias tool was selected for continuity with the Systematic Review Protocol for Animal Intervention Studies. The SYRCLE RoB tool has been developed for experimental animal studies and is therefore not ideal for clinical trials. As other tools are based on human RCTs, their application to an animal setting could itself be a source of bias. Because experimental animal studies were included in this systematic review, the choice of the SYRCLE RoB tool was considered appropriate. As suggested by the SYRCLE RoB tool, the criterion questions were adapted to suit the needs of this systematic review. However, no mention of how such modification may result in the development of bias is present. Criterion questions number 4 and 9 were readjusted for the needs of this systematic review, and question number 6 was judged not to be applicable because it was highly linked to laboratory animal studies. However, it is the authors' impression that this represents a possible source of bias that could undermine the level of evidence at which a systematic review is aimed. Therefore, it would be desirable that specific tools for the assessment of bias soon become available for the evaluation of veterinary RCTs. In theory, clinical interventions should only be used if they have been proven safe and effective in well-structured studies. However, this systematic review shows how evidence-based decisions often result from underpowered randomised studies with unclear control of bias. Still, it is the clinicians who must decide whether they believe that the intervention should be used or not in clinical practice. The latter represents an interesting point, as it appears that over the years the focus of the scientific evaluation of NSAIDs has changed direction. In fact, in the selected studies, especially for colic pain, considerable attention was paid to the anti-inflammatory and pharmacological effects on the enteric mucosa rather than to analgesic efficacy. However, clinically, the use of NSAIDs is still primarily aimed at achieving an expected analgesic effect rather than selecting the best NSAID with regard to COX selectivity.
Conclusions
Experimental studies have clearly shown that, concerning mucosal interference, COX-non-selective NSAIDs are worse than COX-selective ones; however, COX-non-selective NSAIDs are still the most frequently used drugs in a clinical setting. Therefore, the present study aimed at answering the question: "What is the clinical efficacy of NSAIDs in terms of analgesia?". The answer is that, to date, the available studies cannot adequately address this question, as for many of them the pain score was not the main outcome but a secondary component. Therefore, new prospective randomised blinded clinical trials, focusing on addressing pain with a validated, easy-to-use pain score, are deemed necessary to elucidate the analgesic efficacy of NSAIDs in the treatment of abdominal pain in horses.
Table 1. Assessment of risk of bias and level of evidence.
Table 2. Study features of standardised methodological assessment of the included studies.
Table 3. Characteristics and study design features.
β-catenin in plants and animals: common players but different pathways
INTRODUCTION
A key node in a number of essential cellular processes in eukaryotes, Armadillo was originally characterized in Drosophila as a component of the Wingless/Wnt signal transduction pathway (Nusslein-Volhard and Wieschaus, 1980). β-catenin is the mammalian homolog of Armadillo, playing a dual role in structural and transcriptional regulation during embryonic development (Conacci-Sorrell et al., 2002). Even though initially characterized in animals, members of the Armadillo protein family are also known to exist in non-animals, including slime mold (Dictyostelium discoideum) and plants (Wang et al., 1998; Barelle et al., 2006; Veses et al., 2009). The existence of the Armadillo repeat family of proteins across species suggests an ancient evolutionary origin and functional conservation of these proteins in multicellular organisms (Coates, 2003). The intricate role of β-catenin raises several questions about the mechanism by which it mediates interactions with diverse partner proteins using a common interface, and how these interactions influence adhesion and transcription.
The ARM family proteins have been identified with multiple functional domains in more than one species. Genome-wide studies in plants have shown the existence of a large number of Armadillo homologs in Physcomitrella patens, Arabidopsis, and Oryza sativa (Mudgil et al., 2004; Sharma et al., 2014). One assumption is that the Armadillo family, being evolutionarily conserved, performs similar roles in all organisms. However, the existence of a multigene Armadillo family with various subfamilies indicates novel species-specific functions of these proteins in plants. Several recent studies have revealed the function of numerous ARM proteins in Arabidopsis and rice. Apart from their analogous role in the regulation of gene expression and developmental processes, various proteins were discovered to be predominantly involved in plant stress responses.
Thus, an intriguing and important question remains as to how the analogous effector proteins of the Wnt pathway function, and whether a similar canonical response is prevented or exists in plants.
Recent progress in studies of ARM proteins in plants has suggested some possible answers to this question. However, the Wnt signaling mechanism regulated by ARM repeat proteins is still unknown. In this regard, many underlying questions are just beginning to emerge and remain to be answered.
Wnt SIGNALING-DEVELOPMENTAL REGULATION IN PLANTS AND ANIMALS
Wnt proteins are among the foremost signaling molecules essential for cell polarity, embryonic development, and the determination of cell fate in metazoa (Cadigan and Nusse, 1997; Wodarz and Nusse, 1998; Logan and Nusse, 2004). A combination of molecular and genetic studies has provided evidence for how Wnt1, Wnt3a, and Wnt8 specifically induce the activation of the "canonical β-catenin" pathway in animals (Du et al., 1995; Shimizu et al., 1997; Kuhl et al., 2000). However, no evidence for Wnt, Frizzled (Fz), or low-density-lipoprotein-related protein receptors has been obtained in plants. Despite this, a few homologs of proteins that act as negative regulators of Wnt signaling have been unveiled in plants. Based on BLAST searches, the serine/threonine kinase GSK-3 (glycogen synthase kinase-3), CK1 (casein kinase 1), and APC (adenomatous polyposis coli), which together form a destruction complex to stimulate degradation of β-catenin in animals, were found to be conserved in plants (Figure 1) (Li et al., 2001). It has been shown in animals that the activity of the GSK3/CK1 complex is inhibited in response to Wnt signal perception at the cell surface, relieving its inhibitory effects on downstream β-catenin (He et al., 2004; Tamai et al., 2004; Nusse, 2005). The conservation of the β-catenin destruction complex in plants points toward novel targets and modulation of Wnt signaling.
POTENTIAL "Wnt-LIKE" SIGNALING FUNCTIONS FOR PLANT ARM FAMILY PROTEINS
Arabidopsis comprises a multigene SHAGGY-related protein kinase (ASK) gene family, which is 70% identical to glycogen synthase kinase-3 from mammals (Bourouis et al., 1990; Siegfried et al., 1990; Woodgett, 1990) and is classified into four distinct subfamilies (Jonak and Hirt, 2002). In the past few years, significant progress has been made in understanding how GSK3s perform their diverse functions in plants. The divergent biological functions of these members in signal transduction, cell patterning, cytokinesis, and determination of cell fate have been established and credited to their diversity within plants (Dornelas et al., 1998). Most of the plant GSKs are found to be involved in brassinosteroid signaling and the salt stress response (Dornelas et al., 2000; Kim et al., 2009). Brassinosteroids (BRs) are plant hormones that signal through a plasma membrane-localized receptor kinase, BRI1. BRI1 interacts with BAK1 (BRI1-associated receptor kinase 1) to mediate plant steroid signaling. BES1 has been identified as a suppressor of BRI1, which in turn is negatively regulated by the kinase BIN2 (Yin et al., 2002). Interestingly, the BR signaling pathway mechanism is analogous to the Wnt signaling pathway. In the proposed model, BIN2, which shares sequence homology with GSK-3, phosphorylates and destabilizes its substrate BES1. In response to brassinosteroids, BES1 is stabilized and accumulates in the nucleus to activate target gene expression (Yin et al., 2002).
It is important to note that BES1 and β-catenin do not share homology at the protein sequence level. Similarly, BRI1 and Wnt are two different receptors and do not belong to the same family (He et al., 2002; Yin et al., 2002; Zhao et al., 2002). However, it will be interesting to know whether any of the proteins in the multigene Armadillo family in plants is regulated in the same manner, or whether it is simply the way in which the pathway is conserved.
Meanwhile, several lines of evidence suggest a role for Wnt signaling proteins, i.e., Armadillo repeat-containing proteins, in developmental regulation in both animals and plants (Amador et al., 2001). p120ctn is an Armadillo repeat protein identified as a component of the E-cadherin-catenin cell adhesion complex (Daniel et al., 2002). The signaling and cell adhesion co-factor p120ctn is the only known binding partner for Kaiso, a novel BTB/POZ domain zinc finger transcription factor (Daniel et al., 2002). Another possible candidate mediating interaction between actin and microtubule filaments in plants is the ARK/MRH2 kinesin (ARM repeat kinesin/Morphogenesis of root hair). ARK/MRH2 interacts with the NIMA-related protein kinase NEK6 to regulate epidermal cell morphogenesis by modulating microtubule dynamics (Sakai et al., 2008).
In relation to this, Arabidopsis (AT5G13060) and rice (LOC_Os05G33050) also possess homologous proteins comprising ARM repeats and a BTB/POZ domain (Figure 1). The Arabidopsis BTB/POZ ARM protein, also known as ABAP1, has been shown to be involved in the control of DNA replication and gene transcription (Masuda et al., 2008).
Arabidillo-1/-2 and Oryzadillo are the closest homologs of β-catenin in Arabidopsis and Oryza sativa, respectively, consisting of an F-box motif near their N-terminus and several presumed sites for GSK-3 phosphorylation (Gagne et al., 2002; Kuroda et al., 2002; Coates, 2003). Remarkably, Arabidillos are closest to the β-catenin homolog of Dictyostelium, the Aar protein, which consists of an F-box domain and is required for the differentiation and expression of prespore-specific genes (Grimson et al., 2000). Besides, analogous to animals, the physical interaction of Arabidillo-1/-2 proteins through their F-box domain with ASKs (SHAGGY-like protein kinases), leading to the formation of SCF complexes that target various substrates for ubiquitin/26S proteasome-mediated proteolysis, has been demonstrated in plants (Changjun et al., 2010). This suggests an evolutionary conservation of signal transduction pathway elements and their sites of action in animals and plants.
BEYOND Wnt SIGNALING: ROLE OF PLANT ARM PROTEINS
Exposure to abiotic and biotic stress results in an alteration of cellular homeostasis in plants. The first response to stress factors is to activate the signal transduction pathways that stimulate cell defense and adaptive mechanisms. Ubiquitination is a unique protein degradation mechanism utilized by plants to effectively degrade detrimental cellular proteins and components specific to these stress signaling pathways. A majority of the U-box E3 ubiquitin ligase-encoding ARM proteins related to biotic and abiotic stress have been identified in plants. We can certainly anticipate new insight into the molecular mechanisms by which plant β-catenin-like proteins function in the context of abiotic stress signals.
There are 41 and 47 predicted U-box/ARM proteins in the genomes of Arabidopsis and rice, respectively (Mudgil et al., 2004; Sharma et al., 2014). A few of them have been functionally characterized in Arabidopsis. Many of these proteins have now been linked to specific stress and hormonal responses.
A biological role for the U-box/ARM protein AtPUB9 has been proposed in ABA (abscisic acid) signaling (Samuel et al., 2008). In Arabidopsis, AtPUB18 and AtPUB19 are two homologous proteins. Molecular analysis of AtPUB19 showed that it is upregulated in response to drought, salt, cold, and ABA (Liu et al., 2011). In the following year, a role for AtPUB18 as a negative regulator of ABA-mediated stomatal closure and drought responses was put forward (Seo et al., 2012). A different homologous pair of PUB proteins, AtPUB22 and AtPUB23, has been shown to play a combinatory role in the negative regulation of the drought stress response (Cho et al., 2008; Seo et al., 2012). A closely related ortholog of AtPUB22/23 in Capsicum annuum, known as CaPUB1, was found to be highly inducible in response to various abiotic stresses such as drought, cold, and salt (Cho et al., 2006).
Another report suggested a role for AtCHIP, an Arabidopsis U-box/ARM protein, in the response to extreme temperature conditions. Subsequently, AtCHIP was reported to be involved in the ABA stress signaling pathway by mediating an interaction with protein phosphatase 2A (Yan et al., 2003). In rice, SPL11 was identified as a U-box-containing ARM protein that functions as a negative regulator in the control of cell death and pathogen defense. The Arabidopsis ortholog of SPL11, AtPUB13, is a functionally conserved protein regulating plant defense, cell death, and flowering time (Li et al., 2012a,b). In Nicotiana, two U-box/ARM proteins, NtCMPG1 and tobacco ACRE276, and their functional homolog in Arabidopsis, AtPUB17, have been implicated as positive mediators of plant defense and stress signaling (Yang et al., 2006). Apart from this, expression analysis in rice has confirmed that many of the ARM proteins without any associated domain are differentially regulated under abiotic stress conditions, suggesting a role of ARM repeats in stress regulation (Sharma et al., 2014).
On the basis of the facts described above, it can be concluded that animal and plant ARM repeat proteins share many resemblances. Therefore, it is possible that at least some transcription effectors involved in Wnt signaling are evolutionarily conserved. These elements include nuclear accumulation in response to extracellular signals, phosphorylation, and degradation. Apart from this common response, plants possess specific signaling pathways mediated by ARM proteins. In plants, ubiquitination is critically involved in the function of ARM proteins. The proliferation of β-catenin-like ARM proteins in plants suggests their significance in the regulation of diverse biological functions. Further study of these proteins in plants will contribute to our understanding of the molecular factors involved in the response to abiotic stress.
Photoelectric synapses based on all-two-dimensional ferroelectric semiconductor heterojunction
Photoelectric synapses are attracting intensive attention due to their low power consumption and adaptive learning. However, traditional ferroelectric field-effect transistors are not conducive to integrated application in artificial intelligence systems. Here, we design an all-two-dimensional photoelectric synapse device based on a WSe2/MoS2/α-In2Se3 ferroelectric van der Waals heterojunction, which has a high memory capacity (memory on/off ratio of 10^5) and synaptic function. In addition, we simulate an artificial neural network for recognition of handwritten digits from the Modified National Institute of Standards and Technology (MNIST) dataset. In particular, the recognition rates are 92.4% and 93.6% for the electrical synapse and the photoelectric synapse, respectively. This work provides an effective strategy for achieving stable integration of neuromorphic computing.
Introduction
With the development of the information age, next-generation electronic devices need to process exponentially growing amounts of information [1-3]. Traditional computing technology based on complementary metal-oxide-semiconductor (CMOS) circuits and von Neumann architectures is facing the von Neumann bottleneck and cannot meet the demands of next-generation information technology [4,5]. The human brain works as the control center of the nervous system and is capable of efficient complex computation [6-8]. Hence, brain-like computation that mimics artificial synapses is expected to overcome the von Neumann bottleneck [9-11]. Recently, great efforts have been made to explore and simulate artificial synaptic devices, such as memristors [12-15], ferroelectric field-effect transistors (Fe-FETs) [16-18], and floating-gate transistors (FGTs) [19-24]. Among them, Fe-FETs have good non-volatility and are considered to have great potential in future electronic device applications [25-27].
Although remarkable progress has been made in simulating electrically controlled artificial synapses with Fe-FETs, the operation speed of electronic synapses is still limited by resistance delay, power loss, and high energy consumption. Studies show that introducing light into synaptic devices can overcome these limitations [28], and photoelectric synapses are more suitable for smart sensors in the Internet of Things and for biomedical electronics [29,30]. Semiconductor materials with a photoresponse are used as ferroelectric regulatory layers to realise photoelectric synaptic devices. However, the ferroelectric layers used so far are typically bulk materials, which is not conducive to the integration of Fe-FETs. Therefore, two-dimensional (2D) ferroelectric semiconductor materials are considered an ideal platform for small, integrated photoelectric synaptic devices. Among them, 2D α-In2Se3 has stable ferroelectricity at room temperature, which makes it suitable for the preparation of easily integrated ferroelectric photoelectric synaptic devices. In addition, MoS2 and WSe2 are 2D materials with excellent photoelectric properties [31,32], and combining them with α-In2Se3 is expected to yield a device with good performance.
In this work, we develop an all-2D photoelectric synapse device based on the WSe2/MoS2/α-In2Se3 van der Waals heterojunction (vdWH), in which α-In2Se3 and WSe2 act as the ferroelectric layer and the photonic gate layer, respectively. The device exhibits excellent characteristics, such as a high on/off ratio (10^5), paired-pulse facilitation (PPF), short-term plasticity (STP), long-term potentiation (LTP), and learning-forgetting-learning behavior. In particular, the recognition rate of the simulated artificial neural network is high: 92.4% for the electrical synapse and 93.6% for the photoelectric synapse. This work provides an effective strategy to realize the integration of artificial synaptic neural networks.
Device fabrication
The Si/SiO2 substrate was cleaned with alcohol and acetone for 1 h. The MoS2, α-In2Se3, and WSe2 flakes were obtained via the mechanical exfoliation method. With the help of a 2D material transfer platform, the three kinds of materials were stacked. Finally, standard electron beam lithography was used to define the electrodes, and the thermal evaporation method was used to prepare the 30 nm Au electrodes.
Electric measurement
The electrical characteristics and basic memory behaviors were measured with a semiconductor parameter analyzer (Keithley B1500) at room temperature. An optical microscope (Olympus BX51 M) was used to obtain the surface topography. The thicknesses of the MoS2, WSe2, and α-In2Se3 thin films were measured by AFM (Veeco Multimode). The quality of the MoS2, WSe2, and α-In2Se3 was assessed through Raman spectroscopy (Renishaw InVia, 532 nm excitation laser).
Results and discussion
In order to describe the device structure, figure 1(a) presents the structure diagram of the device; the channel of the device is MoS2. In addition, the WSe2 and α-In2Se3 are located in the upper and lower parts of the channel, respectively. The substrate of the device is a SiO2/Si substrate with 300 nm thick SiO2, and the gate connects to the Si layer.
The optical image of the device is shown in figure 1(b); the green, orange, and violet lines outline the WSe2, MoS2, and α-In2Se3, respectively, and the interface of the device is clean. The device is built in a clean room, so the interface defects and trapped charge at the junction of the device are few and do not affect the performance of the device. To confirm the presence of these components and the quality of the junction, Raman spectroscopic measurements were performed. The Raman spectra of WSe2, MoS2, α-In2Se3 and of the overlap region are shown in figure 1(c). Obviously, the Raman characteristic peaks of WSe2, MoS2, and α-In2Se3 are clearly observed in the WSe2/MoS2/α-In2Se3 heterojunction device, and correspond well with the characteristic peaks of the single materials, indicating that high-quality junctions have been formed. For two-dimensional (2D) layered materials with dangling-bond-free surfaces, different 2D layered materials can be combined to create van der Waals heterostructures (vdWHs) without the constraints of conventional lattice matching and processing compatibility [33,34]. Moreover, figure S1 presents the TEM image and high-resolution cross-sectional image of the α-In2Se3/MoS2/WSe2 heterostructure, which exhibits a sharp interface and little lattice mismatch. In addition, figure 1(d) shows the AFM image of the device, where the thickness of the MoS2 is 10 nm. Based on the reported literature [35,36], the proposed energy band diagrams of the heterojunction are shown in figures 1(e)-(f), in which the electron affinity (χ) and workfunction (φ) are also given. The electron affinity (χ) and workfunction (φ) are approximately 4.15 and 1.82 eV, respectively, for MoS2, and 3.55 and 2.61 eV, respectively, for WSe2 (figure 1(e)). WSe2 is a p-type material and MoS2 is an n-type material, and the energy band alignment shows a p-n junction (figure 1(f)). However, α-In2Se3 is the ferroelectric material, which generates a ferroelectric field. Therefore, the schematic diagram of the cross-sectional structure can more clearly show the movement routes of the electrons and holes; the working mechanism is explained with the cross-sectional structure in figure 2.
In figure 2, the memory characteristics of the device are investigated. Figure 2(a) shows the dual-sweep transfer curves of the device in the dark. The figure shows that the device has a memory window in which the switching ratio reaches 10^5, demonstrating that the device has memory properties. Also, figure S2 presents the dual-sweep transfer (I-V) curves of the device in the dark, showing that the related device performance is slightly better under vacuum conditions. Therefore, in order to further explore the memory ratio of the device, figure 2(b) shows the evolution with time of the device current in the erase state and in the program state. The current in both states can be maintained for a long time, and the memory ratio of the device (the ratio of the erase current to the program current) is 10^5. Here the program voltage is high due to the use of a 300 nm SiO2/Si substrate as the dielectric layer. Nevertheless, the operating voltage can be decreased by introducing high-k materials. In addition, figure 2(c) shows the current change of the device after a single laser pulse: the current is small when the device is programmed, and the current rises suddenly after light irradiation and can be maintained in a high-current state for a long time. The related optical synapses can be simulated by using these optical memory properties. The working mechanism of the device is explained as follows. The upper and lower parts of figure 2(d) show the carrier distribution of the device before and after the gate voltage is applied. In the original state, the ferroelectric domain orientation of the α-In2Se3 is disordered and does not affect the number of free carriers in the MoS2 channel. However, when a negative gate voltage is applied to the device, the ferroelectric domains of the α-In2Se3 point downward. The side in contact with the MoS2 gathers a large number of negative polarization charges, which attract the free holes in the MoS2. Hence, the concentration of free electrons in the channel increases, the current of the device rises, and the device is in the erase state. In contrast, in figure 2(e), the ferroelectric domains of the α-In2Se3 point upward when a positive gate voltage is applied to the device. The side facing the MoS2 gathers a large amount of positive polarization charge, which attracts the free electrons in the MoS2. Therefore, the concentration of free electrons in the channel is reduced, the current of the device decreases, and the device is in the program state. In addition, when the device is illuminated in the program state, the MoS2 and WSe2 generate photogenerated electron-hole pairs. In the MoS2, the electrons attracted by the upward-polarized ferroelectric layer are neutralized by photogenerated holes, and the photogenerated electrons remain in the MoS2. In the WSe2, the photogenerated electrons transfer into the MoS2, and the photogenerated holes remain in the WSe2, acting as a positive gate voltage. Therefore, after the light is removed, the electron concentration of the MoS2 rises, the current of the device becomes larger, and the device is in the optically programmed state.
In the human visual system, the light signal reflected from an object is transmitted to the retina, and the retina transmits the light signal to the brain through the neurons, so that the object can be observed. Thus, the behavior of visual neurons and synapses is stimulated by light signals, as shown in figure 3(a). In our device, light signals can be applied as stimulus signals to simulate the behavior and function of synapses. The most typical behavior in synaptic plasticity is paired-pulse facilitation (PPF). When two electrical pulses are applied consecutively with a time interval of 1.5 s, as shown in the inset of figure 3(b), the current after the first pulse increases to 16.84 pA, and the current after the second pulse is 28.7 pA, which is greater than the effect of the first pulse. In order to study the temporal correlation between the two pulses, the current amplitude of the first pulse is denoted A1 and that of the second pulse A2. The PPF index can be calculated using the following formula: PPF = A2/A1 × 100%. As shown in figure 3(b), keeping the amplitude and pulse width unchanged and changing the time interval between the two pulses to 1 s, 1.5 s, 3 s, 6 s, and 10 s, it is found that the larger the time interval, the smaller the PPF, and this reduction is non-linear, first fast and then slow. The law of attenuation can be described by a double-exponential decay function [37], PPF = C1 exp(−Δt/τ1) + C2 exp(−Δt/τ2), where τ1 and τ2 are the time constants of the fast and slow decaying terms, which are calculated to be 0.1 and 4.56 s by fitting the experimental data in figure 3(b). This attests that our device has short-term synaptic plasticity. As shown in figure 3(c), keeping the wavelength and the pulse width of the light pulse at 405 nm and 1 s, the current of the device increases with the intensity of the pulse; the intensities of the light pulses are 0.48 mW, 0.812 mW, 1.198 mW, 1.886 mW, and 2.59 mW. In addition, figure 3(d) shows the current changes under light pulses of different frequencies, where a single-frequency pulse sequence consists of 10 pulses. It can be found from the figure that the current becomes larger as the pulse frequency increases. Therefore, changing the intensity and frequency of the light pulse can regulate the synaptic connection weight. In addition, the human learning process is simulated by applying a series of light pulses (figure 3(e)). In the first learning phase, 20 consecutive light pulses are used to stimulate the synaptic weight. After 40 s, the same sequence of light pulses is applied to the device again; when the number of light pulses reaches 6, the same current level as in the first learning phase is obtained. After another interval of 40 s, the same current level requires an even smaller number of light pulses (n = 4). Past experiences influence subsequent learning, which is similar to the way the human brain learns and remembers.
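To make the PPF analysis concrete, the following minimal sketch computes the PPF index and fits a double-exponential decay. Only the two current amplitudes quoted above (16.84 pA and 28.7 pA at the 1.5 s interval) come from the text; the data used for the fit are synthetic, hypothetical values generated for illustration.

```python
# Illustrative sketch of the PPF index and a double-exponential decay fit.
import numpy as np
from scipy.optimize import curve_fit

def ppf_index(a1, a2):
    """PPF = A2 / A1 x 100% (amplitudes of the first and second pulse)."""
    return a2 / a1 * 100.0

print(ppf_index(16.84, 28.7))        # ~170% at the 1.5 s interval quoted above

def double_exp(dt, c1, tau1, c2, tau2):
    """PPF(dt) = C1*exp(-dt/tau1) + C2*exp(-dt/tau2)."""
    return c1 * np.exp(-dt / tau1) + c2 * np.exp(-dt / tau2)

dt = np.array([1.0, 1.5, 3.0, 6.0, 10.0])      # intervals used in the experiment
true_params = (60.0, 0.6, 140.0, 8.0)           # hypothetical constants
ppf_data = double_exp(dt, *true_params)          # synthetic "measurements"

popt, _ = curve_fit(double_exp, dt, ppf_data, p0=(50.0, 1.0, 120.0, 5.0))
print("tau1 = %.2f s, tau2 = %.2f s" % (popt[1], popt[3]))
```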
Finally, a convolutional neural network (CNN) is built for supervised learning in order to simulate an artificial neural network. As shown in figure 4(a), the CNN structure mainly consists of three parts: a convolution layer, a pooling layer, and a fully connected layer [38,39]. The supervised learning of the CNN uses the Modified National Institute of Standards and Technology (MNIST) handwritten data set, which contains many handwritten digit images. MNIST is often used by various image recognition systems to train and evaluate machine learning performance. The MNIST data set contains 60000 images for training the network and 10000 images for evaluating the recognition accuracy of the trained network. The left part of figure 4(a) presents an example of an image from the MNIST dataset, which is typically normalized to a black-and-white image containing 28 × 28 pixels. Therefore, for the simulation, a two-layer multilayer perceptron (MLP) neural network with 784 input neurons, 100 hidden neurons, and 10 output neurons is utilized. Through further experimental tests, the conductance changes of the device can be well controlled by the number of electrical and light pulses. As shown in figure 4(b), when Vg is −5 V, the conductance (G) continuously increases, which corresponds to long-term potentiation (LTP). In addition, when a positive pulse is applied (Vg = 0.01 V), G gradually declines, which corresponds to long-term depression (LTD). Based on the optical response characteristics of the device in figure 3, the light-induced LTP and electrically induced LTD of the device are measured in figure 4(c). LTP and LTD are important aspects of synaptic plasticity, meaning that the weight of the synapse is adjustable and the device can be used in artificial neural networks. For an ideal device, the LTP and LTD conductance curves should be linear. The nonlinearities (NL) of the LTP and LTD curves are extracted from the weight-update equations of [40], where Gn+1 and Gn represent the synaptic conductance of the device in the present and updated states, the parameters α and β denote the changing step size of the conductance and the nonlinearity, respectively, and Gmax and Gmin are the measured maximum and minimum values of G. To evaluate the synaptic performance of our device, we calculate the NL of the photoelectric synapses and electrical synapses by equations (3) and (4); the NL values are shown in figures 4(b) and (c).
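Since the explicit forms of equations (3) and (4) are not reproduced in the text above, the sketch below uses a commonly adopted exponential LTP/LTD weight-update model built from the same quantities (α, β, Gmax, Gmin). It is an illustrative assumption, not necessarily the authors' exact formulation, and all parameter values are hypothetical.

```python
# Sketch of a commonly used nonlinear conductance-update model for LTP/LTD.
# The exact equations (3)-(4) of the paper are not reproduced here, so this
# formulation and all parameter values are assumptions for illustration only.
import numpy as np

def potentiate(g, alpha, beta, g_min, g_max):
    """One LTP pulse: the step size shrinks as G approaches G_max."""
    return min(g + alpha * np.exp(-beta * (g - g_min) / (g_max - g_min)), g_max)

def depress(g, alpha, beta, g_min, g_max):
    """One LTD pulse: the step size shrinks as G approaches G_min."""
    return max(g - alpha * np.exp(-beta * (g_max - g) / (g_max - g_min)), g_min)

g_min, g_max, alpha, beta = 0.1, 1.0, 0.05, 3.0   # hypothetical device values
g = g_min
ltp_curve = []
for _ in range(50):                                # 50 potentiation pulses
    g = potentiate(g, alpha, beta, g_min, g_max)
    ltp_curve.append(g)
ltd_curve = []
for _ in range(50):                                # 50 depression pulses
    g = depress(g, alpha, beta, g_min, g_max)
    ltd_curve.append(g)
print(round(ltp_curve[-1], 3), round(ltd_curve[-1], 3))
```

In models of this kind, a smaller β yields a more linear conductance response, which is why a lower nonlinearity is expected to improve recognition accuracy, as discussed below.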
In the simulation, the accuracy of the electrical synaptic simulation reaches up to 92.4% [green circles in figure 4(d)] under training of the device-based neural network. In addition, the highest value of the photoelectric synapse accuracy is 93.6% [red circles in figure 4(d)]. In figures 4(b) and (c), the nonlinearity of the photoelectric synapses is lower than that of the electrical synapses; therefore, the recognition performance of the photoelectric synapses is better than that of the electrical synapses. This demonstrates both the high performance and versatility of our synaptic device and the high efficiency of our designed neural network.
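As a rough illustration of the recognition test, the sketch below implements a minimal 784-100-10 MLP whose weights are restricted to a finite number of conductance-like states. The data handling, update rule, and number of states are simplifying assumptions, not the authors' exact simulation framework.

```python
# Minimal sketch of the 784-100-10 MLP used for the MNIST recognition test
# (training details and the weight-to-conductance mapping are assumptions).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (784, 100))
W2 = rng.normal(0, 0.1, (100, 10))

def quantize(w, levels=50):
    """Map weights onto a finite number of conductance states (device-limited)."""
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

def forward(x):
    h = np.tanh(x @ W1)
    return h, h @ W2

def train_step(x, y_onehot, lr=0.05):
    """One SGD step with backprop; weights are re-quantized after each update."""
    global W1, W2
    h, logits = forward(x)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    d2 = (p - y_onehot) / x.shape[0]
    d1 = (d2 @ W2.T) * (1 - h**2)
    W2 = quantize(W2 - lr * h.T @ d2)
    W1 = quantize(W1 - lr * x.T @ d1)

# Usage (assuming `images` is N x 784 in [0, 1] and `labels` is length N):
# for xb, yb in batches(images, labels):
#     train_step(xb, np.eye(10)[yb])
# accuracy = (forward(test_images)[1].argmax(1) == test_labels).mean()
```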
Conclusion
In summary, we fabricated Fe-FETs based on an all-2D WSe2/MoS2/α-In2Se3 vdWH, in which α-In2Se3 is used as the ferroelectric material. With α-In2Se3 providing the ferroelectric field and WSe2 providing the gating action, the devices exhibit good memory performance and photoelectric synaptic plasticity, such as write/erase ratios above 10^5, STP and LTP, and conversion between STP and LTP. These operating modes can be selected by varying the intensity, pulse width, and number of input light pulses. In addition, the device also has the ability to simulate learning-forgetting-relearning. At the same time, the device can simulate an artificial neural network and has high image-recognition accuracy. This research has the potential to solve the integration problem of Fe-FETs in ferroelectric synaptic applications, which helps to achieve better performance of neural network integrated systems.
Figure 1 .
Figure 1. (a) Schematic diagram of the device structure. (b) Optical microscopy image. (c) Raman spectrum. (d) AFM topography image. (e) The band distribution before the material contact. (f) The band distribution after the material contact.
The Raman spectra of WSe2, MoS2, α-In2Se3 and the overlap region are shown in figure 1(c). The Raman characteristic peaks of WSe2, MoS2 and α-In2Se3 are clearly observed in the WSe2/MoS2/α-In2Se3 heterojunction device and correspond well with the characteristic peaks of the single materials, indicating that high-quality junctions have been formed. Because two-dimensional (2D) layered materials have dangling-bond-free surfaces, different 2D layered materials can be combined to create van der Waals heterostructures (vdWHs) without the constraints of conventional lattice matching and processing compatibility [33, 34]. Moreover, figure S1 presents the TEM image and high-resolution cross-sectional image of the α-In2Se3/MoS2/WSe2 heterostructure, which exhibits sharp interfaces and little lattice mismatch. In addition, figure 1(d) shows the AFM image of the device, where the thickness of MoS2 is 10 nm. Based on the reported literature
Figure 2 .
Figure 2. (a) Dual-sweeping transfer curves of the device in the dark. (b) Retention capability of the device in the erase state (Vg = 80 V for 0.2 s) and program state (Vg = −80 V for 0.2 s). (c) Optical response memory characteristics of the device under a single laser pulse. (d)–(f) Mechanism diagrams of device memory and optical erasing: (d) electrical erasing state, (e) electrical programming state, (f) light-erasing state.
Figure 3 .
Figure 3. (a) Schematic diagram of the human visual nervous system. (b) PPF index as a function of pulse interval (Δt); the curve is fitted with a double-exponential decay function. (c) The current and relaxation process triggered by light pulses of different intensities (0.48-2.59 mW). (d) The current evoked by ten light pulses at different pulse frequencies (1-20 Hz). (e) Simulation of multiple learning.
This attests that our device has the short-term synaptic plasticity property. As shown in figure 3(c), keeping the wavelength and the pulse width of the light pulse at 405 nm and 1 s, the current of the device increases as the intensity of the pulse increases; the intensities of the light pulses are set to 0.48 mW, 0.812 mW, 1.198 mW, 1.886 mW and 2.59 mW. In addition, figure 3(d) shows the current changes under light pulses of different frequencies, where a single-frequency pulse sequence consists of 10 pulses. It can be seen that the current becomes larger as the pulse frequency increases. Therefore, changing the intensity and frequency of the light pulse can regulate the weight of the synaptic connection.
Figure 4 .
Figure 4. (a) Schematic diagram of the convolutional neural network (CNN). (b) Long-term potentiation (LTP) and long-term depression (LTD). (c) The conductance change of the device under 15 light pulses and positive gate-voltage pulses. (d) The recognition accuracy based on the CNN simulation.
Detector location selection based on VIP analysis in near-infrared detection of dural hematoma
Detection of dural hematoma based on multi-channel near-infrared differential absorbance has the advantages of rapid and non-invasive detection. The location and number of detectors around the light source are critical for reducing the influence of individual characteristics on the prediction model of dural hematoma degree. Therefore, rational selection of detector numbers and their distances from the light source is very important. In this paper, a detector position screening method based on Variable Importance in the Projection (VIP) analysis is proposed. A preliminary model based on the Partial Least Squares (PLS) method for the prediction of the dural position μa was established using light absorbance information from 30 detectors located 2.0-5.0 cm from the light source with a 0.1 cm interval. The mean relative error (MRE) of the dural position μa prediction model was 4.08%. After VIP analysis, the number of detectors was reduced from 30 to 4, and the MRE of the dural position μa prediction was reduced from 4.08% to 2.06%. The prediction model after VIP detector screening still showed good prediction of the dural position μa. This study provides a new approach and an important reference for the selection of detector locations in near-infrared dural hematoma detection.
Introduction
Dural hematoma often occurs after traumatic brain injury. Inability to make accurate and timely diagnosis for a reasonable treatment plan can result in irreversible brain damage and endanger the life of the patient. Therefore, non-invasive detection of traumatic dural hematoma is always a research focus in the biomedical engineering field (Hitoshi et al., 2016).
Due to the high absorption of 650-900 nm near-infrared light by the hemoglobin molecules within the tissue, the optical properties of the tested brain tissue can be obtained by analyzing the emitted light, so that rapid and non-invasive detection of dural hematoma can be achieved (Wu et al., 2015; Gao et al., 2017). Near-infrared spectroscopy technology has been widely used in clinical applications such as functional neuroimaging (Nourhashemi et al., 2016), brain tumor imaging (Kim et al., 2016; Yildirim et al., 2017), cerebral blood flow measurement (Kato et al., 2015), and brain hematoma detection (Braun et al., 2015). Britton Chance at the University of Pennsylvania in the US proposed determining the presence of cerebral hematoma by comparing the optical contrast between two positions in the brain (Robertson et al., 1997). This research further extends that method: based on the correlation between the differences in near-infrared optical density (DOD) and the hematoma degree, the degree of hematoma can be analyzed. However, the thicknesses of the scalp and skull vary among individuals due to different growing-up environments, age, race, gender, and other factors. As the thickness of the scalp and skull varies, the location of the hematoma changes (Halim and Phang, 2017). Therefore, rational selection of the position and number of detectors is essential for reducing the impact of inter-subject variation and improving the accuracy of the model. Some researchers have used the tMCimg method to simulate the relationship between the effective detection depth and the distance between the detector and the light source, concluding that the distance between the light source and the detector should be about twice the effective detection depth (Wang et al., 2011). Variable selection methods such as Uninformative Variable Elimination (UVE), Synergy Interval Regression and Genetic Algorithms have wide applications in variable selection. The variable importance in the projection (VIP) method examines the importance of independent variables in modeling. This method has been widely applied in fields such as epidemiological analysis (Bergdahl and Bergdahl, 2000), remote sensing (Zeng et al., 2010; Shi et al., 2017), biochemical analysis (Broderick et al., 2006), and blood component testing (He et al., 2016). In this study, it is noted that the signals detected by the detectors show strong multicollinearity. Thus, by using the VIP analysis technique, the detectors with strong predictive and interpretative abilities for hematoma can be selected. Through the VIP analysis, the number of detectors was reduced from 30 to 4 in this study and the model's prediction capability was improved (Shamsudin et al., 2017). This study introduces a novel idea for the selection of the distance between the near-infrared light source and the detectors, upon which clinical prediction of hematoma based on differential near-infrared optical density can be built.
VIP analysis based on the partial least-squares regression
VIP is an auxiliary analysis technique based on the partial least squares method. It can be used for the determination and selection of important independent variables. When the correlation between variables is strong, it describes the explanatory power of the independent variables for the dependent variable through the principal components extracted from the independent variables. It can also be used to screen the independent variables based on their VIP values and to analyze the explanatory power of each independent variable on the dependent variable after a preliminary PLS analysis. For example, assume there is a dependent variable y and independent variables x1, x2, . . ., xk. Eq. (1) gives the VIP value of independent variable j:

VIP_j = sqrt( k · Σ_h [ r²(y, c_h) · w²_hj ] / Σ_h r²(y, c_h) ),  (1)

where k is the number of independent variables; c_h is the h-th principal component extracted from the independent variables; r(y, c_h) is the correlation coefficient between the dependent variable and the principal component, representing the explanatory power of the principal component for y; and w_hj is the weight of independent variable j on principal component h. Since the explanatory power of x_j for y is conveyed through the principal components c_h, if the explanatory power of c_h for y is very strong and the contribution of x_j to c_h is very significant, it can be considered that the explanatory power of x_j for y is strong.
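As an illustration, VIP scores of the form in Eq. (1) can be computed from a fitted PLS model. The sketch below uses scikit-learn and synthetic data, so the array names, shapes and threshold are assumptions rather than the detector data or settings of this study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls):
    """VIP score per predictor from a fitted PLSRegression model (Eq. (1))."""
    t = pls.x_scores_      # component scores, shape (n_samples, n_comp)
    w = pls.x_weights_     # predictor weights, shape (n_pred, n_comp)
    q = pls.y_loadings_    # response loadings, shape (1, n_comp)
    n_pred, n_comp = w.shape
    # Explained sum of squares of y per component (plays the role of r^2(y, c_h))
    ss = np.sum(t ** 2, axis=0) * q.ravel() ** 2
    w_norm = w / np.linalg.norm(w, axis=0, keepdims=True)
    return np.sqrt(n_pred * (w_norm ** 2 @ ss) / ss.sum())

# Synthetic example: 100 samples, 30 "detector" predictors, 2 PLS components
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = 2.0 * X[:, 3] - 1.5 * X[:, 12] + rng.normal(scale=0.1, size=100)
pls = PLSRegression(n_components=2).fit(X, y)
vip = vip_scores(pls)
print("Predictors with VIP above threshold:", np.where(vip > 1.0)[0])
```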
The VIP value of each independent variable represents the explanatory power of the independent variable on the dependent variable, also known as the importance level of the independent variable for the modeling. If a reasonable VIP threshold is set for selecting independent variables with high VIP values and excluding those with low values, the accuracy of the model is not affected much but the complexity of the model is greatly reduced. This has a great impact on system speed, ease of use and cost (Wu et al., 2015).

Differential absorption dural hematoma detection

Fig. 1 shows the diagram of brain hematoma detection using multi-channel differential absorption spectrophotometry. In the diagram, S and S' are the incident light sources. D1-Dn and D'1-D'n are the light detectors placed at equal intervals. The detection was conducted at symmetrical positions on the human head. First, detection was conducted on the right side of the head (R site), and the light intensities detected by the n detectors were I1-In, respectively. Then, detection was performed on the left side of the head (L site), and the light intensities detected by the n detectors were I'1-I'n, respectively. I0 was the incident light intensity on both the left and the right sides. By using Eqs. (2) and (3), the light absorbance at each location, OD1-ODn and OD'1-OD'n, can be obtained:

ODi = lg(I0/Ii),  (2)
OD'i = lg(I0/I'i).  (3)

By dividing the light absorbance from symmetrical locations, the differential absorbance can be calculated:

DODi = ODi/OD'i.  (4)
According to the anatomy of the human brain, the brain model consists of 5 layers, namely, the layers of scalp, skull, cerebrospinal fluid (CSF), grey matter, and white matter (Fig. 1). The brain structure and optical parameters at 840 nm wavelength are shown in Table 1 (Holmes et al., 1996; Umeyama and Yamada, 2009). The definitions of the brain parameters are as follows: n is the refractive index; μa (cm−1) is the absorption coefficient; μs (cm−1) is the scattering coefficient; g is the anisotropic factor; d (cm) is the tissue thickness.
Monte Carlo simulation was used to establish the brain model in this study. Monte Carlo simulation can describe the transmission path of photons within any tissue structure. It is considered the simulation closest to reality and is known as the "gold standard" for describing photon transmission trajectories in biological tissue. The simulated photon number in this study was set to 10^8. According to the study by Strangman et al., the thickness of the human scalp is 6.9 ± 3.6 mm and the thickness of the skull is 6.0 ± 1.9 mm. In this study, the combined thickness of the scalp and skull was set to 1.3 cm, of which the scalp was 7.0 mm and the skull 6.0 mm (Strangman et al., 2014).
Clinical cerebral hematoma can be categorized as subdural hematoma, epidural hematoma, intracerebral hematoma and subarachnoid hemorrhage. Most hematomas caused by traumatic brain injury are positioned extradurally or intradurally under the skull, which corresponds to the third layer, i.e. the cerebrospinal fluid layer, in the brain model (Bullock et al., 2006; Abbas et al., 2017). When traumatic brain hematoma occurs, the absorption coefficient at the dural position increases significantly. Clinical studies have shown that the hematoma absorption coefficient increases by more than 10 times, while the normal dural position absorption coefficient is 0.05 cm−1. In order to simulate different degrees of hematoma, the dural position μa was set in the range of 0.5-1.5 cm−1 with an adjustment step of 0.1 cm−1.
Data processing
According to the Monte Carlo simulation, the luminous fluxes from 30 detectors placed at different radial distances from the light source were obtained. The DOD values of the detectors located within the 2.0-5.0 cm range with a 0.1 cm interval were calculated using Eq. (4). White noise is random noise evenly distributed throughout the frequency spectrum with similar energy density, and it exists in every detector. In this study, white noise was added to the DOD value of each detector according to the signal-to-noise ratio of the spectrometer in order to verify the robustness of the model. The USB2000+ fiber-optic spectrometer from Ocean Optics is a classic and popular spectrometer suitable for many types of scientific studies and industrial applications; its signal-to-noise ratio is 250:1. This ratio was used as the basis for the added white noise in this study. The PLS model was built using the DOD values with added white noise from the detectors at each position as the input and the dural position μa as the output.
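The noise-injection and PLS-modeling step can be sketched as follows. The DOD matrix here is synthetic, and the 250:1 SNR scaling is implemented as one plausible interpretation (noise standard deviation = signal amplitude / 250); both are assumptions rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulated data: 11 hematoma levels (mu_a = 0.5..1.5 cm^-1),
# 30 detectors at 2.0-5.0 cm from the source
mu_a = np.arange(0.5, 1.51, 0.1)
dod_clean = np.outer(mu_a, np.linspace(0.2, 1.0, 30)) + 1.0   # illustrative detector response

# Add white noise according to an SNR of 250:1
dod = dod_clean + rng.normal(scale=np.abs(dod_clean) / 250.0)

# Leave-one-out cross-validation of a 2-component PLS model
errors = []
for train, test in LeaveOneOut().split(dod):
    pls = PLSRegression(n_components=2).fit(dod[train], mu_a[train])
    pred = pls.predict(dod[test]).ravel()[0]
    errors.append(abs(pred - mu_a[test][0]) / mu_a[test][0])
print(f"Mean relative error: {100 * np.mean(errors):.2f}%")
```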
VIP screening of detector position
The VIP values of the differential luminous flux DOD at each detector position versus the hematoma degree y were calculated based on the definition of variable importance in the projection in Eq. (1). The result is shown in Fig. 2.
The importance of each detector position for the modeling can be viewed clearly using the VIP radial distribution map between 2.0 cm and 5.0 cm. Based on the characteristics of the brain structure as well as the VIP values, the chosen screening criterion for the detector position was that the normalized VIP value should be greater than 0.14. As shown in the radial distribution map of VIP in Fig. 2, the detector positions at 2.1 cm, 2.4 cm, 3.4 cm and 4.2 cm met the screening criteria. After VIP screening, the DOD values with added white noise from the 4 detector positions were used as the input and the dural position μa as the output to establish the PLS model. The model verification method used was Leave-One-Out Cross-Validation (LOOCV) and the number of extracted components was 2.
Results and discussions
One experimental group and two control groups were used in this study. The experimental group was modeled using the VIP screening result. Control group 1 underwent PLS modeling using the DOD values from the 30 detectors positioned at 2.0-5.0 cm with a 0.1 cm interval as the input. Control group 2 underwent PLS modeling using the DOD values from 4 detectors located at 2.0 cm, 3.0 cm, 4.0 cm and 5.0 cm from the light source as the input and the dural position μa as the output. White noise was added in all models. The prediction results are shown in Fig. 3. Fig. 3(a) shows the results of the dural position μa prediction PLS model using the 4 VIP-selected detectors. The model correlation was 99.48%; the mean error was 0.0200 cm−1; and the maximum error was 0.0414 cm−1. Fig. 3(b) shows the results of the dural position μa prediction PLS model of control group 1 using 30 detectors.
The model correlation was 98.54%; the mean error was 0.0339 cm−1; and the maximum error was 0.0684 cm−1. Fig. 3(c) shows the results of the dural position μa prediction PLS model of control group 2 using 4 equally spaced detectors. The model correlation was 97.26%; the mean error was 0.0474 cm−1; and the maximum error was 0.0814 cm−1.
The comparison of the relative errors (RE) among the three model groups is shown in Fig. 4. The experimental group PLS model used the DOD from the 4 detectors at distances of 2.1 cm, 2.4 cm, 3.4 cm and 4.2 cm from the light source to predict the dural position μa; the RE was 2.06%. The control group 1 PLS model used the DOD from 30 detectors to predict the dural position μa; the RE was 4.08%. The control group 2 PLS model used the DOD from 4 equally spaced detectors at 2.0 cm, 3.0 cm, 4.0 cm and 5.0 cm from the light source to predict the dural position μa; the RE was 5.68%. These results show that the model with the 4 detectors selected through the VIP method has the smallest RE. Therefore, satisfactory prediction of the dural position μa can be obtained when VIP is used to select the detector positions and number.
Conclusion
In this study, the variable importance in the projection (VIP) method was used to simplify the model that uses differential near-infrared luminous flux for the detection of the dural position μa. A highly accurate model with a smaller relative error can be built with fewer detectors by using VIP screening of detectors at different radial distances from the light source. This study is important for the miniaturization and portability of brain hematoma detection devices. It plays an important role in promoting the application of portable near-infrared brain hematoma detectors in complex environments and provides an important reference and a new direction for research on the relationship between the source-detector distance and the effective depth of photons.
Verification of neuronavigated TMS accuracy using structured-light 3D scans
Objective. To investigate the reliability and accuracy of the manual three-point co-registration in neuronavigated transcranial magnetic stimulation (TMS). The effect of the error in landmark pointing on the coil placement and on the induced electric and magnetic fields was examined. Approach. The position of the TMS coil on the head was recorded by the neuronavigation system and by 3D scanning for ten healthy participants. The differences in the coil locations and orientations, and the theoretical error values for the electric and magnetic fields between the neuronavigated and 3D-scanned coil positions, were calculated. In addition, the sensitivity of the coil location to landmark accuracy was calculated. Main results. The measured distances between the neuronavigated and 3D-scanned coil locations were on average 10.2 mm, ranging from 3.1 to 18.7 mm. The errors in angle were on average two to three degrees. The coil misplacement caused on average a 29% relative error in the electric field, with a range from 9% to 51%. In the magnetic field, the same error was on average 33%, ranging from 10% to 58%. The misplacement of landmark points could cause a 1.8-fold error in the coil location. Significance. TMS neuronavigation with three landmark points can cause a significant error in the coil position, hampering research using highly accurate electric field calculations. Including 3D scanning in the process provides an efficient method to achieve a more accurate coil position.
Introduction
Transcranial magnetic stimulation (TMS) is a non-invasive technique for neural stimulation of the brain. It has both scientific uses in brain research and clinical applications, such as presurgical functional mapping (Jung et al 2019), and treatment of neuropsychiatric diseases, such as depression (Brunoni et al 2017), obsessive compulsive disorder (Fitzsimmons et al 2022), addiction (Petit et al 2022), and chronic pain (Stilling et al 2019). TMS operates through a magnetic coil placed on the scalp that induces an electric field within the brain tissue beneath. Changes in the coil positioning affect the induced electric field, potentially altering which cortical region is stimulated (de Goede et al 2018, Laakso et al 2013, 2018). This can cause issues in clinical TMS applications, such as errors in pre-surgical mapping and mislocation of the treatment. The effect is emphasized in a research setting where even a small inaccuracy can drastically alter the results. For instance, even a small difference in the coil position can change whether a sought response to stimulation is observed or not. Therefore, accurate coil positioning is essential.
Navigated TMS (nTMS) is a tool for placing the coil to stimulate the desired target.It provides image-based stereotactic neuronavigation, where the position of the coil is visualized on the brain in real time (Ruohonen and Karhu 2010).The navigation relies on a camera that tracks the locations of the TMS coil and the head reference marker attached to the head.This allows the positioning of the coil based on a pre-marked location on the structural magnetic resonance (MR) image of the brain.The navigation relies on a co-registration process, in which the reference points, typically the nasion and the pre-auricular points, are marked in the MR image and matched with the exact corresponding anatomical landmark locations on the real head.
However, the co-registration is prone to inaccuracies such as the instability of the head reference marker and slight errors in the manual pointing of the landmark locations (Nieminen et al 2022). In addition, the head is slightly distorted during an MRI scan due to the supine position, which pulls the lower face posterior, and cushioning that pushes the sides and back of the head slightly inwards. This causes minor discrepancies between the real head and the head shown in the MR images, which hampers the landmark pointing and the alignment of the corresponding anatomical points. Some of these inaccuracies could be alleviated by increasing the number of landmark points in co-registration. Still, it is common practice to use just three points in nTMS measurements and software (Souza et al 2018, Caulfield et al 2022, Matsuda et al 2023), as it often provides sufficient accuracy, and increasing the number of points makes the process more time consuming.
There have been several previous studies evaluating the accuracy of nTMS. A common strategy to study the accuracy of the coil placement is to calculate the distance from the target location displayed by the neuronavigation system (Schönfeldt-Lecuona et al 2005, Caulfield et al 2022). The problem with such an implementation is that it does not take into account the error in the co-registration of the head, which can cause misplacement of the target location. The errors related to the co-registration process have only been examined in a few studies. Nieminen et al (2022) used computer simulations to estimate navigation errors due to landmark- and surface-based head-to-MRI registrations. Yet, there seems to be no research that evaluates the co-registration errors with measured data.
In this study, we investigated the reliability and accuracy of the manual three-point co-registration of nTMS with 3D scanning. The aim was to examine the extent of the error in coil positioning, and its effect on the modeled induced electric and magnetic fields. We compared the coil locations from the nTMS to the 3D-scanned locations. In addition, we calculated the sensitivity of the coil location to landmark accuracy to demonstrate the error. To our knowledge, this is the first study to compare the coil location from nTMS with the actual coil location verified with 3D scanning.
Participants
The data were collected from 10 healthy right-handed participants (5 female, 5 male, mean age ± SD = 30.0 ± 4.2, age range: 26-40). All participants gave their written consent for participation. The study was approved by the Aalto University Research Ethics Committee (decisions D/574/03.04/2022 and D/1006/03.04/2022). All procedures were conducted in accordance with the Declaration of Helsinki.
The MR images were segmented into different tissue types using the FreeSurfer image analysis software (Dale et al 1999, Fischl et al 1999) for brain tissues and a semi-automatic procedure described in Laakso et al (2015) for non-brain tissues.The segmented images were voxelized into cubical elements with a spatial resolution of 0.5 mm to generate volume conductor models.The segmented MR images were also used to generate a triangular mesh surface model of the head (figure 1(A)).
Navigated TMS experiments
The TMS was performed with a monophasic Magstim 200² stimulator (Magstim Company, UK) and an eight-shaped TMS coil with two adjacent round wings of 9 cm diameter (D70 Alpha Flat Coil, Magstim Company, UK). The coil position was tracked and recorded with the Visor2 TMS neuronavigation system (ANT Neuro, Enschede, the Netherlands) and the associated Polaris Vicra optical tracking system (Northern Digital Inc., Canada). To validate the coil location measured by the navigation system, the participant's head together with the coil was 3D scanned (Artec Leo, Artec 3D, Luxembourg). The data were measured at Aalto TMS, Aalto NeuroImaging, Aalto University School of Science.
During the experiment, the participants were comfortably sitting on a chair with an individually shaped neck rest to support their head.The participants wore a tightly fitting neoprene cap to hold the neuronavigation head reference marker in place and to create a uniform surface for the 3D scan, as the 3D scanner cannot properly scan hair.The TMS coil was fixed with a coil holder.
First, the head location was registered by using a pointer tool to digitize the positions of three anatomical landmarks, nasion and ears.A 3D visualization of the head surface in which the landmarks had been premarked was used to aid the digitization process (figure 1(B)).After the registering process, the head and the coil on the scalp were 3D scanned.A zero-intensity TMS pulse was delivered before and after the 3D scan for Visor2 to register the coil and head location to the navigation system, and to validate that the head was not moved during the scan (figure 1(C)).
Visor2 returns the coil locations in the nasion-ear coordinate system, as described in its user manual. This coordinate system is established with the Y′-axis aligned along the line connecting the ear points (l1 and l2), so that its direction u_Y′ points from right to left. The origin (r0) is the point on the Y′-axis closest to the nasion point (l3). The X′-axis direction u_X′ is from the origin to the nasion point. The Z′-axis direction u_Z′ is orthogonal to both the Y′ and X′ axes, pointing in the superior direction. For analysis, coil locations in the nasion-ear coordinate system (r′ = (x′, y′, z′)) are transformed into the coordinates of the MR image (r) by

r = r0 + x′·u_X′ + y′·u_Y′ + z′·u_Z′.  (1)

The accuracy of the navigation system was verified prior to the experiment. First, three landmark points were registered at known locations on a flat surface. Then, the coil was positioned at 20 known reference points (measurement accuracy 0.5 mm) above the surface and the positions were measured with the navigation system. The navigation system was precise, as the mean difference between the reference and navigated positions was 0.20 ± 0.16 mm, which is smaller than the reference measurement accuracy.
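A minimal sketch of this transformation, assuming the three landmark coordinates are already expressed in MRI space, could look as follows. The function and variable names, the landmark values and the sign convention for the superior axis are illustrative assumptions, not the Visor2 implementation.

```python
import numpy as np

def nasion_ear_to_mri(r_prime, l_right_ear, l_left_ear, l_nasion):
    """Transform a coil location r' = (x', y', z') from the nasion-ear
    coordinate system to MRI coordinates, following equation (1)."""
    l1, l2, l3 = map(np.asarray, (l_right_ear, l_left_ear, l_nasion))
    u_y = (l2 - l1) / np.linalg.norm(l2 - l1)       # ear-to-ear axis, right to left
    r0 = l1 + np.dot(l3 - l1, u_y) * u_y            # closest point on the Y' axis to the nasion
    u_x = (l3 - r0) / np.linalg.norm(l3 - r0)       # towards the nasion
    u_z = np.cross(u_x, u_y)                        # orthogonal, taken here as the superior direction
    x, y, z = r_prime
    return r0 + x * u_x + y * u_y + z * u_z

# Illustrative landmark coordinates (mm) and a coil location in nasion-ear coordinates
coil_mri = nasion_ear_to_mri([20.0, 30.0, 90.0],
                             l_right_ear=[70.0, 0.0, 0.0],
                             l_left_ear=[-70.0, 0.0, 0.0],
                             l_nasion=[0.0, 90.0, 0.0])
print(coil_mri)
```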
Head to head registration
To effectively compare the location of the real 3D scanned coil and the location where the navigation system assumes the coil to be, the locations had to be transformed to the same coordinate system.We did this by transforming the coordinates of the 3D scanned coil to the MRI coordinates.
First, the 3D scan was registered to the MR images using MATLAB.For the registration, a facial area around the nose was extracted and co-registered to the MR image (figure 2(A)-(B)).This specific area was chosen as it is mostly immutable between different body positions (Hironaga et al 2019).The surfaces were matched using an iterative closest point (ICP) algorithm.Next, a transformation matrix for transforming the 3D scanner coordinates to MRI coordinates was obtained from the co-registration.Finally, the transformation matrix was used to transform the coordinates of the 3D scan to the MRI coordinates (figure 2(B)).To obtain the exact coil location from the 3D scan, a model coil (generated from a highly detailed 3D scan) was registered to the coil obtained from the 3D scan (figure 2(C)).This allowed a direct comparison of the navigation and 3D scan coil locations in the same coordinate system (figure 2(D)).
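The surface-matching step can be prototyped with a small point-to-point ICP loop. The sketch below (nearest-neighbour correspondences plus a Kabsch fit on random point clouds) is a simplified stand-in under these assumptions, not the MATLAB registration pipeline actually used in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Rigid transform (R, t) that best maps src points onto dst points."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

def icp(source, target, n_iter=50):
    """Iterative closest point: register `source` (scan patch) to `target` (MRI surface)."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(src)                 # nearest-neighbour correspondences
        R, t = kabsch(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Illustrative use with random point clouds standing in for the nasal surfaces
rng = np.random.default_rng(1)
target = rng.normal(size=(500, 3))
source = target + np.array([0.2, -0.1, 0.15])    # slightly offset copy of the surface
R, t = icp(source, target)
print("Translation mapping the scan onto the MRI surface:", np.round(t, 2))
```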
We also studied the accuracy of the co-registration (figure 2(A)-(B)) and the differences between the head shape in supine (in MRI) and upright (in neuronavigation) positions.For meaningful comparison of differences, the facial area was divided into horizontal thirds, where the bottom segment was limited to a line drawn under the nose and the top segment to a line drawn under the brow ridge.In addition, the nasal area that was used for the co-registration was examined separately.
Accuracy of neuronavigation
To study the accuracy of the neuronavigated coil position, we calculated the distance between the 3D scanned and navigated coil positions along the X, Y, and Z axes.The X, Y, and Z components are defined as the coordinates in left-right, posterior-anterior, and inferior-superior directions, respectively (figure 3).We also calculated the difference in the coil position in yaw, pitch and roll angles, where the yaw axis is perpendicular to the round wings, pitch axis is parallel to the wings and the roll axis is parallel to the coil handle (figure 3).
Sensitivity of the coil location on landmark accuracy
To study the sensitivity of the coil position to mispositioning during the neuronavigation landmark pointing, we created the Jacobian matrix of r from Eq. (1),

J = ∂r/∂(l1, l2, l3),  (2)

with the three landmark points (l1, l2, l3) as the input values and the location of the coil (r) as the output value. The coil location in nasion-ear coordinates (r′) was fixed at the individually measured location for each participant. Altogether, nine degrees of freedom were included in the matrix, as there were three coordinates for each landmark point.
Similarly, we used a Jacobian matrix to inspect the sensitivity to the coil rotation.The same three landmark points were used as the input values, but the output value was a vector including the coil rotation angles in yaw, pitch, and roll directions.
To calculate the maximal differences in the coil position that mislocation of the landmark points can cause, we calculated the norm of the Jacobian matrix, ∥J∥2, which gives the maximum factor by which a mislocation of the landmark points can stretch the output vector.
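Assuming the coordinate mapping of equation (1) is implemented as a function of the three landmarks (for example the nasion_ear_to_mri sketch above), the Jacobian of equation (2) and its spectral norm can be estimated numerically with finite differences. The step size and example inputs below are illustrative.

```python
import numpy as np

def coil_location(landmarks, r_prime):
    """Coil location in MRI coordinates as a function of the 9 landmark coordinates."""
    l1, l2, l3 = landmarks.reshape(3, 3)
    return nasion_ear_to_mri(r_prime, l1, l2, l3)   # from the earlier sketch

def landmark_jacobian(landmarks, r_prime, eps=1e-4):
    """Finite-difference Jacobian (3 x 9) of the coil location w.r.t. the landmarks."""
    J = np.zeros((3, 9))
    for i in range(9):
        d = np.zeros(9)
        d[i] = eps
        J[:, i] = (coil_location(landmarks + d, r_prime)
                   - coil_location(landmarks - d, r_prime)) / (2 * eps)
    return J

landmarks = np.array([70.0, 0.0, 0.0, -70.0, 0.0, 0.0, 0.0, 90.0, 0.0])  # l1, l2, l3 (mm)
r_prime = np.array([20.0, 30.0, 90.0])
J = landmark_jacobian(landmarks, r_prime)
print("||J||_2 =", np.linalg.norm(J, 2))   # maximum stretch factor of a landmark error
```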
Induced electric field
The induced electric fields in the brain were computationally estimated using a finite-element method (FEM) as described in Laakso et al 2018.The source code for the FEM solver is available at (https://version.aalto.fi/gitlab/ilaakso/vgm-fem).In the procedure, a computer model of an eight-shaped coil was placed on either the neuronavigated or the 3D scanned coil location over the individual volume conductor model of the head.The induced electric field originating from the coil was calculated in the whole head using the FEM with a uniform grid of first-order cubical elements with a 0.5 mm edge length.For this study, the field in the depth of 2 mm below the pial surface was selected to avoid the staircase approximation error at the tissue boundary between gray matter and cerebrospinal fluid.Electric conductivity values were assigned to the voxels similarly to Laakso et al (2018) (unit: S/m): gray matter (0.215), white matter (0.142), cerebrospinal fluid (1.79), compact and spongy bone (0.009 and 0.034), subcutaneous fat (0.15), scalp (0.43), muscle (0.18), dura mater (0.18), and blood (0.7). Volume conductor models were generated from the segmented MR images with the given conductivity values.Maximum stimulator output (MSO) intensity, corresponding to the dI/dt of 152 A μs −1 , was used for the computer simulated TMS pulses.
We simulated the differences in the induced electric fields and magnetic fields between the two coil positions. For both, we calculated an L2-norm relative error of the field from the neuronavigation system (E_nav and B_nav) with respect to the field from the 3D scan (E_scan and B_scan) as

err_E = ∥ΔE∥2 / ∥E_scan∥2,  (3)
err_B = ∥ΔB∥2 / ∥B_scan∥2,  (4)

where ΔE = E_nav − E_scan and ΔB = B_nav − B_scan. The maximum electric field magnitude (E_max) and the magnitude at the cortical location of the first dorsal interosseous (FDI) muscle (E_FDI) were determined for the two coil positions. The cortical FDI location was preregistered to be [−41, −7, 63] in MNI coordinates, which is a group-average activation site of the FDI muscle from Laakso et al (2018). The distances between the locations of E_max on the cortex were also calculated. Correlations between the L2-norm relative error and the coil distance, and between the L2-norm relative error and the E_max cortical location distance, were calculated using Pearson's correlation coefficient.
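For completeness, the relative-error metric of equations (3) and (4) reduces to a few lines once the field values have been sampled; the arrays below are placeholders for the FEM output, not the computed fields of this study.

```python
import numpy as np

def l2_relative_error(field_nav, field_scan):
    """L2-norm relative error between two sampled field distributions, Eqs. (3)-(4)."""
    diff = np.asarray(field_nav) - np.asarray(field_scan)
    return np.linalg.norm(diff) / np.linalg.norm(field_scan)

# Placeholder field samples (e.g. |E| at cortical nodes) for the two coil positions
rng = np.random.default_rng(2)
e_scan = rng.uniform(0.0, 100.0, size=10000)
e_nav = e_scan * rng.normal(1.0, 0.1, size=10000)
print(f"err_E = {100 * l2_relative_error(e_nav, e_scan):.1f}%")
```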
Co-registration accuracy and differences in head shape
The co-registration of the 3D scanned face and the head surface constructed from the MR images is visualized in figure 4. Inaccuracies of the co-registration were focused on soft tissues and mutable parts in lower face.The mean distances between the co-registered 3D scan and the MRI surface were 4.6 ± 1.7, 1.4 ± 0.4, and 1.4 ± 0.6 mm for the bottom, middle, and top segment of the face, respectively.For the nasal area, the distance was 0.5 ± 0.2 mm.
Sensitivity of the coil location on landmark accuracy
The norms of the Jacobian matrix for each participant are listed in table 2. The mean norm for the location was 1.82, i.e. an error of 1 mm in the landmark points (Euclidean norm) can lead to an approximately 1.8 mm error in the location of the coil center. The largest measured deviation was 18.7 mm, which could then be caused by a combined error of ∼10 mm in the landmark points; for a single landmark point this could mean a smaller error of around 4 mm. For the rotation, the mean norm was 0.7° mm−1. The largest deviation was 7°, which could likewise be caused by a combined landmark error of ∼10 mm. One-way ANOVA supports that the landmark point has a significant effect on the stretch level of the output vector (coil location) (F(2, 27) = 23.7, p < 0.005). When the coil is located closer to a landmark point, the vector is stretched more; e.g. mislocation of the left pre-auricular point has the strongest effect on the error of the coil position.
Effect of coil location error on the magnetic field and induced electric field
The L2-norm relative error between the neuronavigated and scanned coil positions was 29% ± 16% for the induced electric field (err_E) and 33% ± 17% for the magnetic field (err_B) (table 3). The err_E ranged from 9% to 51% and err_B from 19% to 58%. There was a clear correlation between the coil distance and err_E (p < 0.005, r = 0.92). The linear regression for the relationship was

err_E = 0.025·D + 0.033,  (5)

where D is the coil distance in millimeters.
For the majority of the subjects, the cortical location of E_max did not change or changed less than 1 mm. However, for three subjects the distance between the E_max locations on the cortex was notable, ranging from 11 to 30 mm. There was no evidence for a correlation between the E_max cortical location distance and the error level (p = 0.29, r = 0.37). The total differences in electric fields are visualized in figure 5. The differences in the maximum electric field magnitudes were on average 43 V m−1, ranging from 6 to 153 V m−1, and the differences in electric field magnitudes at the FDI cortical location were on average 33 V m−1, ranging from 10 to 101 V m−1.

Table 3. Columns 1 and 2: the relative error in the induced electric field (err_E) and the magnetic field (err_B) from the change in position between the neuronavigated and 3D-scanned coil. The third column presents the distance between the cortical locations with the maximum induced electric field magnitude for the two coil locations. Columns 4 and 5 present the maximum induced electric field magnitude on the cortex for the neuronavigated and 3D-scanned coil positions, and columns 6 and 7 the induced electric field at the cortical FDI location.
Discussion
We studied the accuracy of the three-point navigated TMS that is still a commonly used approach for neuronavigation (Souza et al 2018, Caulfield et al 2022, Matsuda et al 2023).We also performed computer simulations to detect its sensitivity to errors in landmark pointing.The landmark sensitivity simulations indicated that on average, the total error in the landmark points causes even a 1.8-fold error for the coil location.
The closer the landmark is to the coil, the bigger the impact of the error. Similarly, the effect on the rotation was 0.7° mm−1. According to previous studies, the induced electric field is less sensitive to errors in orientation than in location (Gomez et al 2021), and changes of less than 10 degrees do not drastically alter the induced electric field (Janssen et al 2015). The measured mean Euclidean distance between the neuronavigated and scanned coil locations was 10.2 mm, and the orientation difference in each direction was less than 3 degrees. The previous study of Nieminen et al (2022) reported the computer-simulated coil position error to be about 4 mm, which is smaller than the result in this paper. However, their study based the size of the landmark pointing error on studies that use the average of the intersession group mean variability of the pointed locations to estimate the error (Schönfeldt-Lecuona et al 2005). This approach does not properly consider the error in trying to match the landmark point with the point in the MR image, which can be significant due to various factors. MRI is a crucial part of co-registration, and potentially a substantial error source. If the quality of the MR image is poor or the image is heavily distorted, it can cause inaccuracies in the landmark point matching. As figure 4 shows, the surfaces from the MRI and the 3D scan are not perfect copies of each other. Generally, the largest difference can be observed in the jaw region, as the jaw moves when changing from the MRI supine position to the sitting position in TMS. In addition, the cushioning pillows squeeze the cheeks during an MRI, causing some differences especially in the middle facial area and the ears. The ears are also susceptible to deformation caused by ear muffs, and the digitizer pen used in landmark pointing can press the soft tissue of the ears by several millimeters. Slight changes in facial expression can also cause minor differences between the surfaces. To minimize the co-registration errors, it is important to consider which facial areas are prone to change when selecting the reference points. The best practice is to use clear targets that are as immutable as possible and are not affected by the measurement. No force should be applied when using the digitizer pen; the tip should only lightly touch the skin.
The coil misplacement has a great impact on the induced electric field, as the relative error in the electric field due to coil movement was on average 29% and up to 51%. This is much larger than other sources of error in TMS-induced electric fields. For example, the numerical error causes an error smaller than 2% (Gomez et al 2020), a 20% change in conductivity values generates a 5% error (Saturnino et al 2019, Stenroos and Koponen 2019), and the accuracy of the MRI segmentation can cause a 15% error (Puonti et al 2020). Additionally, brain movement due to the different posture has only a small effect (Mikkonen and Laakso 2019). To reduce the error from coil misplacement to 10% or less, the navigation inaccuracy should not be larger than 2.6 mm, estimated using equation (5). This would require the landmark points to be placed with approximately 1.5 mm accuracy, which cannot be easily achieved with a standard pointer tool. However, even though the severity of coil misplacement correlates with the relative error of the electric field, there is no correlation with the movement of the field maximum. The cortex has a complex anatomy, and it seems probable that individual differences affect the field maximum location more than the coil placement.
The error in the induced electric field can have a significant effect on research where highly accurate individual cortical models are used.The more accurate the model is, the more the effect is emphasized.With precisely replicated anatomy, even a few millimeter error in coil location may alter the results, and the repercussion of a centimeter-scale error is even larger.One recent example is novel research estimating the locations that TMS activates in the brain with anatomically accurate dosimetric models (Bungert et al 2017, Laakso et al 2018, Aonuma et al 2018, Weise et al 2020, Kataja et al 2021, Numssen et al 2021).Including these models to the neuronavigation system used with TMS would provide more realistic electric field estimations, but the system is also more vulnerable to coil misplacement.
There are several potential methods to improve the accuracy of nTMS, but none are without limitations. The simplest method is to increase the number of anatomical landmark locations matched with the MR image and to complement it with a surface-based approach by adding several hundred surface points for the co-registration. However, the surface-based approach requires that the points are collected from a rigid scalp to ensure optimal co-registration. In reality, the location is affected by the influence of soft skin, thick hair (e.g. braids) and different caps worn, for example, for TMS-EEG. Another option is to remove the markers from the neuronavigation and use computer-vision techniques to track the patient's head and the TMS coil (Matsuda et al 2023). This removes the inaccuracies from the misplacement of landmarks and the movement of the head marker. Yet, this technique relies on face detection and is vulnerable to changes in facial expression.
Our solution for minimizing the error in the TMS co-registration process is using a 3D scanner that can reliably locate the coil position relative to the head. The scanning eliminates the errors related to manual landmark pointing, and the actual co-registration is reliable and fairly straightforward, matching the two surfaces based on carefully chosen immutable parts of the head. Another advantage is that, unlike methods requiring surface points on the scalp, this method is not affected by hair or electrode caps. The downside of the scanning is that it involves an extra step in the preparation procedure, lasting approximately one minute per scan, but one scan is sufficient. Another disadvantage is that the coil needs to be fixed during the scanning to prevent the movement that could ensue from a handheld coil.
3D scanning is also usable without additional neuronavigation and is not restricted to the head, but can be used with other body parts as well.Besides TMS, 3D scanning can be used in other applications such as transcranial direct current stimulation (tDCS) to verify the location of the electrodes.
Conclusion
TMS neuronavigation with three landmark point co-registration has inaccuracies that could hinder research with highly accurate individual cortical models.The error in landmark pointing causes a severalfold misplacement of the coil location, which can be a major source of error in accurate electric field calculations.Complementing the procedure with 3D scanning provides a reliable way to record the actual coil location for the induced field.
Figure 1 .
Figure 1. (A) T1- and T2-weighted MRI was used to generate a surface model of the head. (B) Pre-marked landmark points on the nasion and both ears (black dots) on the surface head model aided the digitization process with the pointer tool. (C) Coil position on the head was recorded by neuronavigation with an optical tracker and by a 3D scan during a 0% intensity TMS pulse. The neuronavigated coil is visualized with the MRI surface model of the head.
Figure 2 .
Figure 2. 3D scanned head and coil matched with the MR image of the head and the neuronavigated coil with two example subjects.(A) Co-registration of the 3D scan (blue) to the MR image of the head (gray) and the neuronavigated coil (orange).The extracted facial area around the nose used for the co-registration is marked with a black border line.(B) Co-registered 3D scans on the MRI surface model of the head and the neuronavigated coil.(C) Modeled coil co-registered with the 3D scanned coil to provide a complete structure of the coil.(D) Neuronavigated (orange) and 3D scanned (blue) coil locations visualized with modeled coil at their corresponding locations.
Figure 3 .
Figure 3. The axes of the TMS coil and the head. The yaw axis is perpendicular to the round wings, the pitch axis is parallel to the wings and the roll axis is parallel to the coil handle. The X, Y, and Z components are defined as the coordinates in the left-right, posterior-anterior, and inferior-superior directions on the head.
Figure 4 .
Figure 4. Distances between the co-registered 3D scan and the head surface model reconstructed from MR images.(A) Boxplot visualization of the mean distances (mm) in different facial areas.Graphs represent the minimum, maximum, median, first quartile and third quartile in the data set.(B) The measured distances of two participants visualized on 3D scan over the head models.Facial areas are bordered with black lines.
Figure 5 .
Figure 5. Induced electric fields for neuronavigated (E nav ) and 3D scanned (E scan ) coil position and their subtraction (ΔE) on individual cortices for each subject.Locations of the electric field maximums are marked with black and white circles, and the number indicates the magnitude of the electric field in that location.
Table 1 .
Distance and rotation between the neuronavigated and the 3D scanned coil positions.
Table 2 .
The norm of a Jacobian matrix for the coil location and rotation.The value indicates the maximum scale by which the mislocation of the landmark points can stretch the error in coil position.For the location, the scale is presented for the total mislocation of all three landmarks as well as for each landmark separately.
Giving birth: A hermeneutic study of the expectations and experiences of healthy primigravid women in Switzerland
Switzerland experiences one of the highest caesarean section rates in Europe, but it is unclear why and when the decision is made to perform a caesarean section. Many studies have examined this from a medical and physiological point of view, but research from a women's standpoint is lacking. Our aim was to develop a model of the emerging expectations of giving birth and the subsequent experiences of healthy primigravid women across four cantons in Switzerland. This longitudinal study included 30 primigravidae from the German-speaking, 14 from the French-speaking and 14 from the Italian-speaking cantons, who were purposively selected. Data were collected by semi-structured interviews taking place around 22 and 36 weeks of pregnancy and six weeks and six months postnatally. Following Gadamer's hermeneutics, which in this study comprised 5 stages, a model was developed. Four major themes emerged: Decisions, Care, Influences and Emotions. Their meandering paths and evolution demonstrate the complexity of the expectations and experiences of women becoming mothers. In this study, women's narrated mode-of-birth expectations did not foretell how they gave birth or their lived experiences. A hermeneutic discontinuity arises at the 6-week postnatal interview mark. This temporary gap illustrates the bridge between women's expectations of birth and their actual lived experiences, highlighting the importance of informed consent, parent education and ensuring women have positive birth and immediate postnatal experiences. Factors other than women's preferences should be considered to explain the increasing caesarean section rates.
Background
The central European country of Switzerland is consistently ranked amongst the world's wealthiest countries, with residents enjoying a high quality of life [1]. In such contexts, women who feel confident around pregnancy and birth, and who do not fear birth, tend to expect to have a vaginal birth, whilst others would prefer a caesarean section [22]. Moreover, in countries with high caesarean section rates, the majority of studies agree that birth technologies make birth easier and that women should have the right to choose a non-medically indicated caesarean [23]. Sumbul et al. 2020, through a review of the literature, demonstrate that most published studies show cross-sectional pictures of women's expectations using predetermined questions [24]. Women's expectations of birth and how their life experiences influence them are particularly relevant issues for the present study. A study undertaken in Switzerland is of high relevance [25]. Although recruitment only took place after birth, the researchers investigated how 251 participants' views of their birth experiences changed in the first two years of their child's life. The study also sought to identify any particular groups of women at risk of developing a long-term negative memory of their birth experience. Women's birth experiences were collected and appraised within 48 to 96 hours postpartum, at three weeks and then in the second year after giving birth. The authors concluded that women at risk of developing a negative long-term memory of their birth experiences can be identified in the early postnatal period, when the overall birth experience and the perceived relationship to their intrapartum experience are considered. This study provides useful insights, but the varying parity of the participants and the lack of focus on their expectations leave some unanswered questions as to its validity.
Overall, there appears to be a lack of clarity as to why and when the decision is made to undertake a caesarean section and which factors influence this process [26]. This, in turn, prompts the questions of what expectations pregnant women have of their birth, how women make the decision for a particular mode of birth, and how women recollect their experience of giving birth in relation to their decision-making processes [27][28][29]. This study therefore focuses on healthy women becoming first-time mothers, and their expectations and subsequent experiences of giving birth, to provide a baseline understanding.
Qualitative approach and theoretical framework
This study seeks to generate an understanding of women's journeys to choosing a mode of birth and, subsequently, how they experienced the birth of their baby. The hermeneutics of Gadamer is well suited to this study [30]. This philosophy aims to generate an in-depth understanding rather than simply describing an experience, and to draw out new knowledge from the participants. From Gadamer's philosophy, a specific five-stage approach was adopted [31]. Table 1 presents these stages and the fact that they are not necessarily consecutive, but involve an engagement with the hermeneutic circle, going backwards and forwards between its whole and parts. Fig 1 shows the reflexive path we took.
Throughout this study, the research group members were aware of their own pre-understandings, gained through their own professional and personal experiences, and encounters [30]. These pre-understandings were recorded in the form of reflexive dialogues between team members and analysed alongside the acquired data. It was also acknowledged that these may change during the course of this study as shown in this statement, which emerged from a team discussion: "Well, midwives will have a different starting point from others such as me. We need to take all that into consideration when we do our analysis so that we are not letting professional biases interfere with our analysis"
Research questions
To build the model, we sought to answer the following questions:
• What are healthy primigravid women's expectations in early pregnancy about giving birth?
• How do these expectations change during pregnancy?
• What factors influence these expectations?
• What were the women's birthing experiences?
• How did the birthing experience match the antenatal expectations?
Setting and sampling strategy
Though a relatively small country, Switzerland comprises a heterogeneous population. The country consists of a confederation of 26 cantons, and each canton has its own culture and
customs. German is spoken in the northern and eastern cantons, French in the western cantons, and Italian in the southern canton of Ticino. To reflect the geographical distribution of the country and the national birth statistics, we sampled 30 women from the German-speaking, 14 from the French-speaking and 14 from the Italian-speaking cantons. We adopted a purposive sampling method (Table 2), appropriate for a hermeneutic study [32]. Our choice resonated with Gadamer's notion of cultural inclusiveness and maximising understanding of all elements of the data. It also took into account a minimum size required for generating data for each Swiss language [33]. Our sample was obtained from healthy primigravidae over the age of 18 with straightforward pregnancies. We recruited from settings which covered the full range of birth options, from home birth to a university hospital, resulting in a wide range of potential participants (Table 2). Because of the chosen philosophical paradigm, the participants had to be fluent in their canton's official language (i.e. German, French or Italian). The 75 recruited women provided written consent in their chosen language.
Data collection
We chose qualitative interviews as the most appropriate method for data collection. The interviewers were experienced, female researchers. Three were midwives, one a psychologist and one a sociologist. All were fluent speakers in the language in which they collected the data and in English. Participants were interviewed by the same allocated interviewer throughout the study.
We invited the 75 consenting women to participate in four guided interviews lasting approximately one hour at 20-24 weeks and 35-37 weeks antenatally, and six weeks and six months postnatally. Each interview began with the question "What are your expectations of birth?" or "How was your birth in relation to your earlier expectations?". Our longitudinal approach reflects the recommendations from reviews of both Gamble & Creedy [34] and McCourt et al. [35].
We collected data by semi-structured interviews at a place of each participant's choice. Interviews lasted between 45 and 75 minutes, were audio-recorded, transcribed verbatim and an initial thematic analysis was carried out before raising the themes to the next hermeneutic stage. During transcription all identifying details were removed and participants were given pseudonyms. MaxQDA © was used for data management and analysis. Transcripts were first entered in the interview's original language, before generating initial codes. Each researcher wrote memos in English pertaining to significant codes and to the interview in its entirety.
Analysis
An initial thematic analysis of the data was undertaken following Braun and Clarke's proposed method [36]. In hermeneutic studies, this method allows researchers to identify themes, in order to then draw out understandings of lived human experiences [37]. In addition, thematic analysis is a
useful method to analyse large datasets. The interview memos formed the first point of discussion among the team as to commonalities and differences among participants within the same canton. The senior researchers on each site then compared the themes generated in each canton before undertaking an in-depth hermeneutic analysis, allowing for the understandings from combined themes from each language region to emerge. These were discussed by the complete team, whose members generated an initial model, which was revised on several occasions as the analysis deepened. The completed analysis permitted the identification of the key themes, illustrated using the translations of participants' own words. The themes and their longitudinal evolution in the antenatal and postnatal periods will form the elaborated model of "birth expectation to birth experience".
Ethical considerations
The main ethical issues were informed consent, autonomy, confidentiality and anonymity. Primary permission to undertake the study was given by the Ethics Commission for Zürich (KEK-ZH-2014-0367). Secondary permission was granted by the ethics commissions of each of the other three cantons: Vaud, St Gallen and Ticino.
Findings
Four main themes emerged from this longitudinal hermeneutic study: Decisions, Care, Emotions, and Influences. Each theme evolved from one interview to the next. At various stages of the pregnancy, the themes were either present, strongly present, or absent. Some merged with other themes, as detailed below in Fig 2.
Decisions
In the first interviews, around 22 weeks antenatally (AN), women were experiencing a feeling of "being in limbo". At this particular stage of their pregnancy, women began to realise the enormity of the change that their pregnancy and arrival of their baby would bring to their lives: For some women, this feeling led to a sense of denial: During the first three months, I really. . .I forced myself not to build up any emotional relationship with this "egg". I called him "egg" at the beginning because I was really scared of having a miscarriage. [Scarlet; 22 weeks AN].
Others were more proactive and questioned their feelings. By the time of the second interview, around 36 weeks, there was more of a sense of purpose, which we identified as "negotiating the maze". As Nora, who was seeking an elective caesarean for breech presentation, notes: I'm more serene and less anxious thinking that it will be a caesarean, with a scheduled date. This theme was not present in the third interviews, around 6 weeks postpartum, but re-emerged in the final interviews at 6 months after birth, when women started focusing on the next pregnancy and "getting it right the next time". Lisa articulated: I think the next pregnancy will not be as intense. [. . .]. I will not be able to absorb myself in a second pregnancy, with another child around. [Lisa; 6 months PN]
Care
Care was strongly expressed during the two antenatal interviews, but not in the postnatal interviews. The first interviews showed a range of views that were categorised as "planning". At 22 weeks, women and their families start to think about the location in which they want to give birth. Birthplace locations include university hospitals, regional hospitals, birth centres and the woman's home. This is well illustrated by Katja, describing her thought process about choosing her place of birth: We chose this birthplace by not looking at too many hospitals and birthing centres and I do not know what the other various possibilities are. You simply say "ok, this is coincidentally close by, and you go for it. . . or that the birth centre is actually the place you want to go and that you trust it. [Katja; 22 weeks AN] Participants planning to give birth at home or in birth centres often reported receiving negative comments about these settings. Lana, who wished to give birth in a birth centre, recounts her conversation with her obstetrician: She told me that I would die in a birth centre. She told me "you'll have a haemorrhage and then you have to act fast. . . ." I tried to laugh about it because I knew it was just her being stupid, trying to justify her job in the sense that she wants me to give birth in her hospital because she makes money out of it. Anyway, it's her job and she believes in her job but still, she managed to scare me. [Lana; 22 weeks AN] By the time of the second interviews, this theme had fragmented into three distinct pathways. The first pathway merges with the Decisions theme "Negotiating the maze", which is outlined above. The second thread shows how either the initial choice was consolidated or, in some cases, participants were "Reacting to the unexpected". Bea, for example, disagreed with the idea of putting herself in the hands of others but nonetheless accepted that complications may occur: I find that if you go with the idea of having a natural birth. . . if you have it in your head. I find a little unfortunate that maybe at the last minute they suggest a caesarean. On one hand I tell myself if it is the only solution because there is a risk for me or the baby at that moment I think I'll feel bad for a while.
Emotions
The theme of Emotions was noted during the first, third and fourth interviews but not during the second interviews. The first interviews revealed a "continuum of emotions", which reflected the intense expectations the participants had of their forthcoming birth. This means that women reported emotions that they had already felt prior to their pregnancy, and which they expected to experience until birth or beyond. For some, the main emotion was fear: Fear of birth is already there, well fear of the pain. [Ronja; 22 weeks AN].
The fear of having to cope with a frightening experience can make women think about possible approaches to birth.
. . . so anyway, I understand one want to find an equilibrium. . . to have a birth that's as serene as possible, even if, in my opinion, it's going to be a battle field. . . especially the first time, when you don't know what to expect, so no matter how you represent it, how you imagine, no matter what you read or people tell you, it will be 10,000 times worse, 10,000 times different or 10,000 better than anything you can ever imagine or read. . . But it will be a battlefield. Other representations of childbirth were more balanced. They relied on the participants' faith that childbirth is a natural process, traditionally achieved vaginally. These women tended to feel confident in their own capacity to give birth and cope with pain. Though absent during the second interviews at 36 weeks, Emotions was extremely strong during the third interviews, at 6 weeks postpartum. Emotions, rather than following a well-ordered continuum, were "polarised", ranging from strong frustrations to intense happiness.
. . .Luckily, she (baby) is cool anyway, because since the childbirth. . .. Well, I'm telling you about childbirth, I am well now. . . but I debriefed it, I debriefed it with Nathalie the second midwife, I was in tears, I wasn't well. . . I was not well, not well, I debriefed again with my gynaecologist, I spoke about it again with the hospital, it is clear that I never want to set foot there again. . .. [Leanne, 6 weeks PN] In the final interviews Emotions were less to the fore but participants "Held on to powerful emotions". For Lisa this was a sense of disappointment: It was a deep wish that I give birth naturally. A primal wish somehow. And then the disappointment that I did not make it. [Lisa; 6 months PN] For most others, however, it was more positive. Olivia sums it up: Simply magical, really how this child grows in your body and then somehow magically comes out and you just do not know how it was possible to be inside. And then after four months we started with baby purées but until then, I looked at him and thought 'Everything that he is, his existence somehow is because it went through my body'. And it is wondrous. Also, for my husband. And also, for a couple, to see that something like this is even possible.
Influences
Birth is a widely discussed topic, and society highly influences women and families. In Audrey's case, a wider world view was adopted. She spoke about society's influence on birth and her reaction to it: I have the impression that this is what society sometimes makes us believe that it is possible when it is not. . .. . . and I think it has an influence on what people say when we hear women saying " anyway I do not want to give birth. I would like to be able to fall asleep completely and wake up when the baby is there" or things like that and I say to myself this is not how life and people in general work. [Audrey; 22 weeks AN] Some women's journeys to their expected mode of birth can also be influenced by their growing baby. Leanne, who was particularly anxious about birth during the first interview, articulated at 36 weeks that her views had radically changed. She explained how her baby was the biggest influence: So childbirth is going to be easy with this state of mind. I find it a remarkable evolution since the last time. . . yes it's thanks to the baby because I felt it, because I saw its development and because it said to me: "I'm here. . . I'm a person and you managed to create me, now it's going to be okay, you'll see, I can do everything, I've already turned head down". [Leanne; 36 weeks AN] The subtheme "Internal and external influences" showed a continuous progression to the second interviews, around 36 weeks, when participants became more active in their birth preparations. As the birth approaches, women start planning and organising helping hands or other forms of support from friends or family members. At this time, Influences is referred to as "Mobilising resources". During her second interview, Isabelle spoke of how she "mobilised resources": I've got a very good relationship with my husband's family. They're really interested, and we telephone one another a lot. They also help me and I think after the birth they'll be really helpful. As with the theme "Decisions", the theme "Influences" was absent in the immediate postnatal period, but in this instance, re-emerged in the last interviews, at 6 months, when participants "Identified the most important influences" as they looked back at their births.
Some women describe how their healthcare professionals were the greatest influencers:
So the choice [for my birth] was somewhat influenced by the obstetrician. [Nina; 6 months PN]
On the physical level I don't know [why it was so], but on an emotional level as I experienced it, was the information I had before from midwives; how they followed me, how they encouraged me, in my opinion. Although looking back, I never think about my obstetrician. He had such a tiny role in the whole thing, that I'm most grateful to the two midwives who helped in this positive experience. [Irene; 6 months PN] Nora, who during pregnancy wanted a caesarean but gave birth spontaneously, saw the main influences as her baby and her obstetrician: I think it is my son and my doctor [who were the most influential]. The thing that convinced me was that it was safer for the baby, and that even if it was against my own wishes, I had chosen the safest path for the baby. [Nora; 6 months PN] However, some women believed that women could merely be guided by external influences through their birth journey, but ultimately only internal influences matter: Everyone has to find their own path and then be content with it. Also how you do things after the birth. There are so many people trying to tell you what to do. And at some point you have to build up self-confidence. I now know how I want to handle things with my child. Many might do things the same way but others might say: that is completely wrong. But as long as one has the feeling it is the right path for the child, I think it is the right way to go. And you have to learn that. [Wendy; 6 months PN] Conversely, some women discussed their regret about their healthcare professionals' lack of influence, and how they would have preferred a stronger involvement in their birth journeys, especially when an intervention was offered (informed consent):
Discussion
Qualitative studies by nature rely on language to obtain information about subjective experiences. Van Nes describes this relation as follows: "The relation between subjective experience and language is a two-way process; language is used to express meaning, but the other way round language influences how meaning is constructed" [38] (page 314). This rings even more true in Gadamerian hermeneutic studies, as Gadamer's hermeneutics seeks to gain understanding through the spoken word. In Gadamerian studies, it is important not only to read transcripts, but also to read them whilst listening to the recording of the interview [31]. This hermeneutic study was carried out in Switzerland, and participants were interviewed in their own national language, which was German, French or Italian. As per recommendations made by some researchers, the recorded data were transcribed in the original language and memos pertaining to significant codes were written prior to translation into English, thus maintaining as much as possible of the meaning behind the participants' words [38,39].
The data we presented in the previous section show four hermeneutically derived themes. We developed them further in stage four of the research method, taking into account the pre-understandings each of us brought to the project, and our reflexive discussions throughout. Ratzinger identified a relevant hermeneutic of continuity and one of rupture [40]. Particularly noteworthy was that very few of the participants expressed a wish for caesarean section in the antenatal period; rather, their expectations centred on the need to have what they perceived as accurate information. Based upon that, they felt that they could make decisions that suited their own lifestyle preferences, although, as shown in the final interviews, this did not always happen. Foremost in the minds of many participants was the health of their babies, and if a caesarean was recommended for this reason they accepted it [15,41]. Our findings support this schema but additionally include a hermeneutic of discontinuity which, rather than being disruptive, merely created a temporary gap in the hermeneutic of continuity. Such gaps have been described more vividly in some of the literature [42]; Bergum, for example, suggested there is actually a rupture as a woman transforms into a mother during the birth process [43]. However, in this study, we have shown that the participants identified themselves as mothers before the birth of their babies, as acknowledged by a recent UNICEF report on the first 1000 days of life [44].
The absence of data in relation to Decisions, Care and Influences, revealed at the time of the third interviews, did not represent a complete break. This pause allows women to focus on getting to know their babies, to develop new routines, and to physically recover from the birth. The first weeks after birth are pivotal to ensure the flourishing of mother-infant bonding and attachment [45]. This has been described as a close emotional time when women often had little time or energy for anything apart from providing the necessary care for their babies [46]. It is now generally accepted that this is primarily due to the influence of various hormones [47,48]. This resonated particularly in this study, as the participants, being first-time mothers, were feeling their way through the early days of motherhood and getting to know their new baby. Yet, they were still reflecting on the birth of their baby when a sense of "coming of age" brought the themes to a turning point. By then, the themes had developed into "Getting it right next time", "Holding onto powerful emotions" and "Identifying the most important influences". The expectation of education at this time was clearly shown, and this was particularly evident in the final interviews when participants reflected on the major influences during their pregnancies. However, this has also been described as the time when their expectations were often unmet [49]. We speculate that the birth outcome is determined by these influences because, as demonstrated by the data, the participants were happy to have given birth to healthy babies; a finding that is supported by other research [2,25,49].
Especially relevant is the discontinuity in the participants' expectations, as reflected in the second interviews in the theme of "Emotions". This may be due to mid to late pregnancy being a time when pregnant women's emotions have stabilised [48]. Additionally, our findings show that most of the participants' preparations for the birth are complete at this time and, while there was still a sense of fear expressed by some participants, there was general acceptance that things were liable to change and, in some ways, no more advance planning could be done.
Finally, we use the term "hermeneutic of rupture" to define the three trajectories that we have outlined in our themes of "Care" and "Decisions". Unlike Ratzinger, who saw this phenomenon as negative, we found it to be more neutral, as the themes themselves did not disappear totally but, after the first interview, became partially absorbed into two other themes. While some women initially expressed the desire for a caesarean section, most changed their minds during the course of their pregnancy. Nora's past experience of working in maternity services in developing countries, for example, seems to have influenced her negatively towards having a vaginal birth, in relation to pain and complications. She did not trust herself and was also reassured by a precise date of birth. In the second interview, she still expected a caesarean but was contemplating the idea of vaginal birth, which she finally achieved, and she was happy with her decision. In the postpartum interview, however, these themes no longer existed. On reflection and further discussion, we believe this to be due to what women initially wanted to happen being balanced against acceptance of what actually happened. Our findings show this to be addressed by participants in other themes, though not fully absorbed into them.
Conclusion
This study sought to gain understandings around primigravid women's expectations of birth, how these flourish during the antenatal period, and how they influence the lived experiences of the birth of their baby, given that Switzerland is known for its high CS rate and its medicalized approach towards birth. From this longitudinal study, four themes were recorded: Decisions, Care, Influences and Emotions. The four themes and their intertwining paths during pregnancy and postpartum demonstrate the complexity of the expectations and experiences of women becoming mothers. Planning is strongly present during the antenatal period and naturally disappears with the birth. Emotions is present mid pregnancy, is muted around 36 weeks of pregnancy and holds a key part in women's narratives of their birth at 6 weeks and 6 months postpartum.
Primigravidas' expectations are greatly affected by internal and external influencers. External influencers may be healthcare professionals, friends and family, the media and society's culture, whilst internal influencers are women's own beliefs and desires. Influences evolve throughout the longitudinal study period. In mid pregnancy, these influences passively steer primigravidas towards one choice of birth or another, but by the end of the pregnancy, women actively seek and plan help and support for the time of birth and the early postpartum days. After 6 months, women reflect on their journey and identify the most important influencers, often healthcare providers.
The sample in this study was "healthy primigravid women", as we were particularly concerned about the high caesarean section rate in the country. Several women did have emergency caesarean sections. Those who retained negative feelings at the final interviews were not always those who had caesarean sections, as satisfaction was more to do with the choices they made throughout the journey and the control they were able to exercise. Therefore, women's experiences do not seem to be a strong factor contributing towards the rising caesarean section rates.
Strengths and limitations
The aim of the study, which was to develop a model of the emerging expectations of giving birth and the subsequent experiences of healthy primigravid women in four cantons in Switzerland, was achieved. It is the first study of its kind to be carried out in Switzerland. Even though Switzerland is a relatively small country, our findings may be transferable to other countries in Europe and beyond due to the country's multicultural facets. Our findings derive from three major language regions in the country, each with its own culture and customs. This, however, brought unique challenges for this Gadamerian hermeneutic study with its emphasis on language. The decision to analyse entirely in English eased the process of providing a "common culture", but we also acknowledge that it is not the participants' culture, and some meaning behind words may have been lost in translation.
Implications for further research and practice
With our focus on "healthy primigravid women" we add a dimension of new knowledge and provide a further layer to literature concerning the complex but under researched postnatal field. While our sampling strategy was intended to be as inclusive as possible, qualitative research can never be truly representative of the population. Thus, we plan to generate a questionnaire based on the findings, and once piloted and validated, administer this to a representative sample of first-time mothers in Switzerland.
In bringing together the data, the plan was not to compare the regions but to develop a model of the "Swiss" experience and the results of the analysis have focused on the commonalities. Nonetheless, it could also be of value in the future to consider the similarities and differences between the different regions of the country so that institutions such as insurers which cover the whole country can ensure they cover the most appropriate services.
The rising caesarean section rates seem to be related to factors other than women's preferences. Ambivalence towards a specific way of giving birth is common during pregnancy. This should be of concern for midwives and obstetricians during antenatal care. Information and counselling should be timely and comprehensive when discussing mode of birth. A negative birth experience may influence future preference for caesarean section. This should be considered by caregivers providing perinatal care.
Finally, since the study is limited to primigravid women, it would be interesting to see if multiparous women experience these stages in the same way and at the same intensity. All the participants drew on their experiences to learn from them and utilise them in their planning for future pregnancies. This has implications for health professionals, who for almost a century have placed much of their emphasis on antenatal care. It is worthy of consideration that, to make this more relevant, they assess the possibilities of providing a follow-up visit to first-time mothers at six months postpartum to enable the woman to reflect upon her birth journey.
Pseudoxanthoma Elasticum and its Rare Co-Existence with Comedones and Neurofibroma. Clin Dermatol J 2018, 3(3): 000151
A 68-year-old male patient presented with multiple asymptomatic elevated skin lesions over the face and V area of the neck for the last 40 years. Physical examination revealed numerous open comedones all over the face, forehead and V area of the neck. Numerous firm, skin colored to yellowish, non-follicular papules coalescing into plaques were seen around the neck, suprascapular region, both axillae, and the cubital and popliteal regions, with redundant lax skin folds. A single pedunculated, non-tender, firm swelling of 1 x 1 cm was seen over the left side of the chest. Histopathological examination of a punch biopsy from lesional skin showed irregularly clumped, faintly basophilic elastic fibres admixed with mucoid material in the mid and lower dermis. Excisional biopsy of the lesion over the chest showed skin with an underlying circumscribed, non-encapsulated dermal tumour composed of spindle cells arranged in fascicles. The patient improved with topical and systemic retinoids. We report this case because the co-existence of comedones and neurofibroma with pseudoxanthoma elasticum is a rare association.
Introduction
Pseudoxanthoma elasticum (PXE) is an inherited disorder of the connective tissue mainly involving the elastic fibers of the skin, eyes and cardiovascular system [1]. Skin lesions consist of yellowish papules or plaques with an associated increase in skin laxity. Histopathology of skin lesion shows calcification, alteration and fragmentation of elastic fibres in the mid and lower dermis. The diagnosis is most often made late in the second or third decade of life [1,2].
With a prevalence of 1 in 25,000 to 70,000, it results from mutations in a gene located at chromosome 16p13.1, which encodes for the transmembrane transporter protein adenosine triphosphate binding cassette C6 (ABC-C6) with an autosomal recessive inheritance pattern [2].
We hereby report an unusual case of Pseudoxanthoma elasticum and its co-existence with cutaneous neurofibroma and numerous open comedones, which is an extremely rare association.
Case Report
A 68-year-old male patient presented with multiple asymptomatic elevated skin lesions over the face and V area of the neck for the last 40 years. There was no history of similar complaints in the family, nor was there any consanguineous marriage. The patient was not on any drugs.
Physical examination revealed numerous open comedones all over the face, forehead and V area of the neck, with relative sparing of the forehead and nose (Figure 1). Numerous firm, skin colored to yellowish, non-follicular papules coalescing into plaques were seen around the neck, suprascapular region, both axillae, and the cubital and popliteal regions (Figures 4-7). These areas also showed lax, redundant, sagging folds of skin, mainly around the neck (Figures 1-3). A single pedunculated, non-tender, firm swelling of 1 x 1 cm was seen over the left side of the chest at the level of the left costal margin along the midclavicular line (Figure 8). The ophthalmologic examination was normal. The results of routine laboratory tests were within normal limits. Electrocardiography (ECG) and radiography of the chest were both within normal limits. Skin from the neck and left forearm was biopsied and subjected to staining with hematoxylin-eosin and Von Kossa stains. It showed irregularly clumped, faintly basophilic elastic fibres admixed with mucoid material in the mid and lower dermis (Figure 8). At one focus, dense basophilic calcified deposits were seen. There was no evidence of inflammatory cell infiltration.
Hematoxylin and eosin stained section from the excisional biopsy specimen of swelling over left chest showed skin with underlying dermal circumscribed, nonencapsulated tumour composed of spindle cells arranged in fascicles. Spindle cells showed thin wavy nuclei admixed with stellate cells in myxoid matrix, with wavy collagenous fibres.
On the basis of clinical and histopathologic findings, diagnosis of "sporadic" Pseudoxanthoma Elasticum was made.
Discussion
Pseudoxanthoma elasticum can be associated with considerable morbidity and mortality. The clinical variability is evident from observations that involvement of all three major organ systems, i.e., the skin, eyes and cardiovascular system, is encountered in some patients. Flexures and periumbilical skin are commonly affected. Ocular involvement is characterised by angioid streaks, breaks in Bruch's membrane, the choriocapillaris, etc. Lastly, patients may develop systemic manifestations such as increased risks of accelerated peripheral vascular disease, ischemic heart disease, hypertension and cerebrovascular disease [3,4]. In the present case, the lesions over the neck, limbs, suprascapular, supraclavicular, popliteal and cubital areas were clinically suggestive of pseudoxanthoma elasticum, which was consistent with the histopathological findings. The association of pseudoxanthoma elasticum, presenting with reticulate papules and plaques as seen in our case, with multiple open comedones and neurofibroma is extremely rare. A study of pseudoxanthoma elasticum cases in the Indian subcontinent showed that skin lesions and eye involvement are common but systemic involvement is relatively rare [5]. Eye involvement and systemic symptoms were not seen in our case. There is no specific treatment for pseudoxanthoma elasticum. This patient was treated with topical tretinoin 0.025%, systemic isotretinoin 10 mg per day and sunscreen. After 8 weeks of therapy, his lesions started showing improvement, with resolution of most of the comedones (Figure 9). The cutaneous neurofibroma was excised under local anaesthesia.
Ratio-First Order Derivative-Zero Crossing UV-Visible Spectrophotometric Method for Analysis of Amoxicillin, Levofloxacin and Lansoprazole Mixture
Mustafa Gülfen*, Yazgı Canbaz and Abdil Özdemir, Department of Chemistry, Faculty of Arts and Sciences, Sakarya University, 54187 Sakarya, Turkey. *Corresponding Author Email: mgulfen@sakarya.edu.tr. Received 09 March 2020, Revised 10 June 2020, Accepted 15 June 2020.
Abstract: A ratio-first order derivative-zero crossing UV-Visible (UV-Vis) spectrophotometric method has been developed for the simultaneous determination of amoxicillin (Amox), levofloxacin (Levo) and lansoprazole (Lanso). The method was validated for mixtures of Amox, Levo and Lanso in standard solutions and pharmaceutical tablets. Amox, Levo and Lanso solutions prepared in MeOH:H2O (50:50 v/v) solvent mixture were used in the measurements. The calibration graphs were prepared at the wavelengths of 248.9, 219.4 and 262.8 nm, respectively, which were determined as the zero crossing points of their ratio-first order derivative spectra. The calibration curves were linear over the concentration ranges of 1.22-15.0 mg/L for Lanso, 0.95-20.0 mg/L for Levo and 3.42-40 mg/L for Amox, with correlation coefficients (R) of 0.9996, 0.9993 and 0.9987, respectively. The limits of detection (LODs) of the proposed method were 1.03 mg/L, and the limits of quantification (LOQs) were 3.42 mg/L. The proposed ratio-first order derivative-zero crossing method was validated with good accuracies, as recoveries of 100.0%, 102.5% and 99.2%, and with high precisions, as RSD% values of 1.37, 2.04 and 2.64, for Amox, Levo and Lanso, respectively. It was shown that Amox, Levo and Lanso can be determined simultaneously, without any separation, in pharmaceutical mixture formulations.
In the present work, a ratio-first order derivative-zero crossing UV-Vis spectrophotometric method was applied to determine ternary mixtures of Amox, Levo and Lanso. The method is based on taking the derivative of ratio spectra in two successive steps. Firstly, the absorption spectra of the ternary mixtures were divided by the standard spectrum of one component to obtain the ratio spectra. Then, the first order derivative of the ratio spectra was taken and the calibration curve was formed at the zero crossing points of another component. By using a second divisor, all the components of the ternary mixtures can be determined similarly. A simultaneous UV-Vis absorption spectrophotometric method for the combination of Amox, Levo and Lanso has not been found in the literature available to us, although there are UV-Vis methods for combinations of these drugs with other agents. The novelty of this study is therefore the development of a UV-Vis absorption spectrophotometric method for the combination of Amox, Levo and Lanso. Hence, a ratio-first order derivative-zero crossing UV-Vis spectrophotometric method has been developed to determine simultaneously the ternary mixtures of the Amox, Levo and Lanso drug agents in pharmaceutical tablets. The validated method can be applied without any previous separation. After optimization of their calibration curves, their ternary mixtures were analyzed.
Materials and Methods
Materials
Amox trihydrate, Levo hemihydrate and Lanso were purchased from Neutec Pharmaceuticals (Sakarya, Turkey). They were used for the standard calibration solutions. In the validation measurements, Largopen, containing 1176.47 mg Amox trihydrate (equivalent to 1000 mg Amox) per tablet, was used. It was produced by Bilim Pharmaceuticals Company (Tekirdag, Turkey). Levo measurements were done using Tavanic drug tablets, which are produced by Sanofi Aventis Limited Company (Istanbul, Turkey). One Tavanic tablet contains 512.6 mg Levo hemihydrate, equivalent to 500 mg Levo. For the validation measurements of Lanso, Lansor pharmaceutical tablets were used. Lansor is produced by Sanovel Company (Istanbul, Turkey). Each Lansor tablet includes 30 mg of the active Lanso agent. The other chemicals were of analytical grade. In the preparation of the solutions, ultrapure water (18.2 MΩ) and high purity methanol were used.
Instrument
In the UV-Vis absorption measurements, a Shimadzu 2600 model UV-Vis spectrophotometer (Japan) was used. The spectrophotometer has a double beam system, and quartz cells with a 1.0 cm path length were used in the measurements. All the spectra were recorded over the wavelength range of 200-400 nm with a resolution of 0.1 nm.
Zero order UV-Vis absorption spectra
The standard solutions of Amox, Levo and Lanso were prepared separately in MeOH:H2O (50:50 v/v) solvent mixture.
Ratio-first order derivative-zero crossing UV-Vis absorption spectra
To obtain the calibration curves, the ratio spectra were obtained by dividing the zero order spectra of two components by the zero order spectrum of the third component. The spectra of the Amox1-6 (15-40 mg/L) and Lanso1-6 (2.5-15.0 mg/L) standard solutions were divided by Levo6 (20.0 mg/L), and these were assigned as 1) Amox1-6/Levo6 and Lanso1-6/Levo6. Next, the spectra of the Lanso1-6 (2.5-15.0 mg/L) and Levo1-6 (7.5-20.0 mg/L) standard solutions were divided by Amox6 (40 mg/L) as the second divisor, and this group was assigned as 2) Lanso1-6/Amox6 and Levo1-6/Amox6. Then, the first order derivatives of the ratio spectra were taken by selecting Δλ: 10.0 nm and scaling factor: 1. The zero crossing points of one component in the obtained spectra were determined.
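For illustration only, a minimal NumPy sketch of this ratio-first order derivative step is given below; the array names, the finite-difference approximation of the instrument's Δλ = 10.0 nm derivative setting, and the commented example call are assumptions rather than part of the original procedure.

```python
import numpy as np

# Wavelength axis: 200-400 nm recorded with 0.1 nm resolution, as in the measurements.
wavelengths = np.arange(200.0, 400.0, 0.1)

def ratio_first_derivative(spectrum, divisor_spectrum, delta_lambda=10.0, step=0.1):
    """Divide a zero order spectrum by the divisor standard spectrum and take the
    first order derivative of the ratio spectrum over an interval of delta_lambda nm
    (scaling factor 1), approximated here by a finite difference."""
    ratio = spectrum / divisor_spectrum                   # ratio spectrum
    offset = int(round(delta_lambda / step))              # points spanned by delta_lambda
    derivative = (ratio[offset:] - ratio[:-offset]) / delta_lambda
    return derivative                                     # shorter than the input by 'offset' points

# Hypothetical use: divide each Amox standard spectrum by the Levo6 (20.0 mg/L) divisor.
# d_amox = np.array([ratio_first_derivative(s, levo6_spectrum) for s in amox_standards])
```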
Calibration curves
After the ratio-first order derivative spectra were obtained, the zero crossing points of one component on these spectra were selected for the calibration curves of Amox, Lanso and Levo at the wavelengths of 248.9, 262.8 and 219.4 nm, respectively.
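As a sketch of how a zero crossing wavelength can be located and a calibration curve built from the derivative amplitudes read at that wavelength, the following assumes the derivative spectra of the standards are stacked in a 2D array sharing a common (shortened) wavelength axis; the function names, variable names and the example concentrations are illustrative.

```python
import numpy as np

def zero_crossing_wavelengths(derivative_spectrum, wavelength_axis):
    """Return the wavelengths at which a ratio-first order derivative spectrum changes sign."""
    signs = np.sign(derivative_spectrum)
    crossings = np.where(np.diff(signs) != 0)[0]
    return wavelength_axis[crossings]

def calibration_curve(derivative_spectra, wavelength_axis, target_wavelength, concentrations):
    """Read the derivative amplitude of each standard at the selected zero crossing
    wavelength of the interfering component(s) and fit amplitude versus concentration."""
    i = int(np.argmin(np.abs(wavelength_axis - target_wavelength)))
    amplitudes = derivative_spectra[:, i]                  # one amplitude per standard
    slope, intercept = np.polyfit(concentrations, amplitudes, deg=1)
    return slope, intercept

# Hypothetical call for the Amox calibration at 248.9 nm:
# slope, intercept = calibration_curve(d_amox, wl_axis, 248.9,
#                                      [15, 20, 25, 30, 35, 40])  # illustrative 15-40 mg/L spacing
```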
Validation
For the validation of the proposed ratio-first order derivative-zero crossing UV-Vis spectrophotometric method, accuracy (as recovery%) and precision (as percent relative standard deviation, RSD%) experiments were conducted on the ternary mixtures. Eighteen synthetic ternary mixture solutions of Amox, Levo and Lanso were prepared. Their concentrations were within the ranges of the calibration standard solutions. The UV-Vis absorption spectra were recorded while changing the concentrations of Amox, Levo and Lanso in their ternary mixtures. From the obtained spectra, the percent recoveries and percent relative standard deviations (RSD%) were calculated as the accuracy and precision of the proposed method.
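The recovery% and RSD% figures can be obtained with a short calculation of this kind; the sketch below uses hypothetical array names and assumes recovery% is computed per sample and RSD% across the replicate recoveries.

```python
import numpy as np

def recovery_and_rsd(found, added):
    """Mean percent recovery and percent relative standard deviation (RSD%)
    for one analyte across replicate synthetic mixtures."""
    found = np.asarray(found, dtype=float)
    added = np.asarray(added, dtype=float)
    recoveries = 100.0 * found / added                       # recovery% per sample
    rsd = 100.0 * np.std(recoveries, ddof=1) / np.mean(recoveries)
    return np.mean(recoveries), rsd

# Hypothetical use for Amox across the 18 synthetic mixtures:
# mean_recovery, rsd_percent = recovery_and_rsd(found_amox, added_amox)
```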
Analysis of pharmaceutical tablets
The pharmaceutical tablets were weighed and powdered with a mortar. Largopen pharmaceutical tablets were dissolved in 500 mL MeOH:H2O (50:50 v/v). One Largopen tablet contains 1000 mg Amox. Lansor tablets for the Lanso agent were dissolved in 250 mL MeOH. Each Lansor tablet has 30 mg of the Lanso drug agent. Tavanic tablets for Levo were dissolved in 250 mL MeOH:H2O (50:50 v/v). One Tavanic tablet includes 500 mg Levo. During the dissolution of the pharmaceutical tablets, the undissolved solid particles were removed with a 0.2 µm membrane filter in a vacuum filtration system. Then, six ternary mixture solutions containing 25 mg/L Amox, 7.5 mg/L Lanso and 12.5 mg/L Levo were prepared. Their UV-Vis spectra were measured and their concentrations were calculated using the corresponding calibration curves.
Zero order UV-Vis absorption spectra
The zero order spectra of the Amox, Levo and Lanso drug reagents in H2O:MeOH (50:50 v/v) solvent mixture were measured separately. The obtained spectra of Amox, Levo and Lanso at the different standard calibration concentrations are given in Fig. 2. It can be seen from the spectra that the maximum absorption bands of Amox were observed at the wavelengths of 205, 231 and 274 nm. The maximum absorption bands of Levo were seen at the wavelengths of 228, 258, 292 and 330 nm. For Lanso, the maximum bands were at the wavelengths of 205 and 286 nm. The absorption bands of all of Amox, Levo and Lanso overlapped at wavelengths below 292 nm, and the Levo and Lanso absorption bands overlapped at wavelengths below 314 nm. Levo can be determined independently from the others at wavelengths of about 315-345 nm, if it is calibrated there. Levo and Lanso can be determined in their binary mixtures between 290 and 370 nm. However, in any case, the absorption bands of Amox overlap with those of the other two components. Therefore, the determination of Amox requires the application of chemometric or graphical methods to resolve the spectra and to calculate the individual concentrations of the components. One of the methods applicable to ternary mixtures is the ratio-derivative-zero crossing UV-Vis spectrophotometric method. It is possible to determine simultaneously the ternary mixtures of Amox, Levo and Lanso at about 210-300 nm, where all of their absorption bands overlap with each other. Hence, in this study, a ratio-derivative-zero crossing UV-Vis spectrophotometric method was developed and validated for the ternary mixtures of Amox, Levo and Lanso.
Ratio-first order derivative-zero crossing UV-Vis absorption spectra
Salinas et al. [34] applied ratio-derivative spectrophotometry to binary mixtures. Then, Nevado et al. [35] developed the method further and applied it to ternary mixtures by using the first order derivative of the ratio spectra of the ternary mixtures. They calculated the calibration curves at the zero crossing wavelengths of the first order derivative of the ratio spectra [35]. In the ratio-first order derivative-zero crossing method, after the ratio spectra of two analytes to the third one are obtained, the two ratio spectra are differentiated. Any maximum or minimum of one analyte located at a zero crossing wavelength of the other analyte can be used for the calibration calculations. Many maxima or minima can be obtained, and this is an advantage of the method. The target analyte can be determined selectively with the ratio-first order derivative-zero crossing method even in the presence of other matrix compounds [36].
In this study, the spectra of the first and second analytes were divided by the spectrum of the third analyte standard solution. It is possible to obtain three new ratio spectra groups: 1) Amox/Levo and Lanso/Levo, 2) Amox/Lanso and Levo/Lanso, and 3) Lanso/Amox and Levo/Amox. By dividing these spectra, the three-component spectra were reduced to two-component spectra. Then, these ratio spectra were differentiated and, finally, single-variable calibration curves were formed at the zero crossing points of the second analyte. With the ratio-first order derivative-zero crossing procedure, many calibration points can be obtained for the three analytes. In this study, two new spectra groups, 1) Amox1-6/Levo6 and Lanso1-6/Levo6, and 2) Lanso1-6/Amox6 and Levo1-6/Amox6, were obtained by using the ratio-first order derivative. In total, two divisors, the Levo6 and Amox6 standard solutions, were used. The obtained ratio-first order derivative spectra are given in Fig. 3 and Fig. 4. After the ratio-first order derivative spectra were drawn, the zero crossing points were obtained for the third component. There is more than one zero crossing point in the obtained spectra from which a calibration point for a suitable component can be selected. The zero crossing points were selected according to where the varying component showed the larger changes in derivative amplitude. Thus, the calibration curves of Amox, Levo and Lanso were formed by using the ratio-first order derivative-zero crossing UV-Vis spectrophotometric method.
Calibration curves
The calibration curve of Amox was formed from the absorbance values at the wavelength of 248.9 nm in Fig. 3, at which the other components show constant values in these spectra. The obtained calibration curve for Amox is given in Fig. 5a. Similarly, the calibration curve of Lanso, as the second analyte, was formed from the absorbance values at the wavelength of 262.8 nm in Fig. 3. The calibration curve of Lanso is given in Fig. 5b. As the third analyte, the calibration curve of Levo was obtained from the absorption values at the wavelength of 219.4 nm in Fig. 4. The calibration curve of Levo is given in Fig. 5c. The calibration and regression data for Amox, Levo and Lanso are summarized in Table 1. The limit of detection (LOD) and limit of quantitation (LOQ) of the method were obtained by the standard deviation of the response and slope method. The LODs and LOQs were calculated using the equations LOD = 3.3 × (SD/S) and LOQ = 10 × (SD/S), where 'SD' represents the standard deviation of the response and 'S' is the slope of the calibration curve. From the calibration calculations of Amox, Levo and Lanso in their ternary mixtures, good linearity with high regression coefficients (R2: 0.9987, 0.9993 and 0.9996), and low LOD (1.03, 0.37 and 0.29 mg/L) and low LOQ (3.42, 1.22 and 0.95 mg/L) values were obtained for their working concentration ranges. Table 2 shows the results of the accuracy (recovery%) and precision (RSD%) obtained from the calibration measurements.
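The LOD and LOQ expressions above translate directly into a short calculation; the sketch below assumes the standard deviation of the calibration residuals is used for SD, and the function and variable names are illustrative.

```python
import numpy as np

def lod_loq(concentrations, responses):
    """Compute LOD = 3.3 x (SD/S) and LOQ = 10 x (SD/S) from a linear calibration,
    where S is the slope and SD the standard deviation of the calibration residuals."""
    concentrations = np.asarray(concentrations, dtype=float)
    responses = np.asarray(responses, dtype=float)
    slope, intercept = np.polyfit(concentrations, responses, deg=1)
    residuals = responses - (slope * concentrations + intercept)
    sd = np.std(residuals, ddof=2)            # two fitted parameters (slope, intercept)
    return 3.3 * sd / abs(slope), 10.0 * sd / abs(slope)

# Hypothetical use with a six-point calibration of one analyte:
# lod, loq = lod_loq(conc_amox, amplitudes_at_248_9_nm)
```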
Validation of method
The applicability of the developed ratio-first order derivative-zero crossing method for the ternary mixture of Amox, Levo and Lanso was tested in several synthetic mixtures of different proportions. The 18 prepared synthetic mixture samples were measured on the UV-Vis spectrophotometer and the concentrations of the drug analytes were calculated using the proposed method. The accuracy and precision of the method were determined for each analyte. The results are given in Table 3. The accuracies were found as recoveries of 100.0%, 99.2% and 102.5%, and the precisions as RSD% values of 1.37, 2.04 and 2.64, for Amox, Levo and Lanso, respectively. In our previous work [4], the ternary mixture analysis of Amox, Lanso and Levo was studied using an HPLC method. In this study, the same combination was studied by the ratio-first order derivative-zero crossing UV-Vis absorption spectrophotometric method. In the validation measurements using the HPLC method, the recoveries were 104.1%, 98.3% and 101.2%, and the RSD% values were 3.91%, 2.55% and 2.18%, for Amox, Lanso and Levo, respectively. These comparisons show that this UV-Vis absorption method has relatively better accuracy and precision values than the HPLC method. As a result, the ratio-first order derivative-zero crossing method can be applied to the ternary mixtures of Amox, Levo and Lanso with good accuracy and high precision.
Analysis of pharmaceutical tablets
The proposed ratio-first order derivative-zero crossing spectrophotometric method was applied to the analyses of Amox, Levo and Lanso in pharmaceutical tablets. Six sample solutions were prepared by dissolving Largopen, Lansor and Tavanic commercial drug tablets to give Amox (25 mg/L), Lanso (7.5 mg/L) and Levo (12.5 mg/L). After their dissolution, the final dilutions were made using MeOH:H2O (50:50 v/v) solvent mixture, similarly to the calibration solutions. The obtained analysis results of the tablets are given in Table 4. Good accuracy and precision values were found in the analyses of the pharmaceutical tablets. The ratio-first order derivative-zero crossing spectrometric method is also suitable for avoiding interferences from the matrices in drug tablets. The method is precise, accurate and simple. Also, no separation step is required [37,38]. As a result, the ratio-first order derivative-zero crossing spectrometric method can be used in the analyses of the ternary mixtures of Amox, Levo and Lanso.
Comparison with studies in the literature
No study on the simultaneous determination of the combination of Amox, Levo and Lanso by any UV-Vis absorption spectrophotometric method has been found. However, in the literature, there are studies on the simultaneous determination of Amox, Levo or Lanso together with other drug reagents by UV-Vis absorption spectrophotometry. Some examples of published studies on the simultaneous determination of Amox, Levo or Lanso in binary or ternary mixtures with UV-Vis absorption spectroscopic methods are given in Table 5. First order derivative, second order derivative, and ratio-first order derivative-zero crossing UV-Vis absorption spectroscopic methods have been used for binary or ternary mixtures. Studies on ternary mixtures are limited in number, although binary mixtures have been studied extensively in the literature.
Conclusion
In the present work, a ratio -first order derivative -zero crossing UV-Vis Spectrophotometric method has been developed and validated for the determination of amoxicillin, lansoprazole and levofloxacin in pharmaceuticals. The results showed that amoxicillin, lansoprazole and levofloxacin could be determined successfully and simultaneously. The proposed method does not require any prior separation step. The ratio -first order derivative -zero crossing UV-Vis spectrophotometric method can be used for the ternary mixtures of amoxicillin, lansoprazole and levofloxacin with good accuracy and precision in the synthetic mixtures or in the pharmaceutical dosage forms of them.
Review Postmating Female Control: 20 Years of Cryptic Female Choice
Cryptic female choice (CFC) represents postmating intersexual selection arising from female-driven mechanisms at or after mating that bias sperm use and impact male paternity share. Although biologists began to study CFC relatively late, largely spurred by Eberhard's book published 20 years ago, the field has grown rapidly since then. Here, we review empirical progress to show that numerous female processes offer potential for CFC, from mating through to fertilization, although seldom has CFC been clearly demonstrated. We then evaluate functional implications, and argue that, under some conditions, CFC might have repercussions for female fitness, sexual conflict, and intersexual coevolution, with ramifications for related evolutionary phenomena, such as speciation. We conclude by identifying directions for future research in this rapidly growing field.
[...] occur via mechanisms of CFC that bias fertilizations toward sperm with complementary alleles [91]. MHC-dependent gamete fusion has been demonstrated in different taxa (mice [50], salmon [92], and guppies [93]), but what is the specific mechanism driving MHC-based sperm selection? Contradictory reports on whether sperm signal their MHC haplotype suggest that expression might depend upon male infection status [50]. Strong linkage disequilibrium between testis-expressed MHC genes and MHC-linked olfactory receptor genes in some taxa could indicate which MHC alleles are carried by the sperm (the sperm receptor selection hypothesis [91]). The complexity involved with MHC-based sperm selection is apparent in the red junglefowl, in which females might use premating phenotypic cues to select MHC-dissimilar sperm and avoid fertilizations by related males [11]. This system requires that females 'know' their own MHC genes and be able to assess those of their partners both before and after mating. Clearly, more research is required to precisely establish the mechanisms explaining MHC-based CFC. Functional studies should also investigate macroevolutionary patterns of CFC-related traits, their coevolution with associated male traits, and the phylogenetic and ecological drivers of such patterns. A comparative approach would also help resolve the role of CFC in reproductive isolation (Box 3, main text) and diversification. Detecting the phylogenetic signature of CFC should be easier than for premating female choice, because CFC can be mediated by morphological or physiological traits that are easier to quantify and compare across species than are more plastic female preference traits. Yet, compared with male reproductive anatomy, female reproductive anatomy is distinctly underrepresented in evolutionary studies, even those investigating CFC! [100].
Trends
In 1996, Eberhard crystallized the idea of CFC as an engine of sexual selection and initiated the study of female-driven processes.
Demonstrating CFC, which is defined as female-mediated morphological, behavioral, or physiological mechanisms that operate to bias fertilization toward the sperm of specific male(s), requires dissecting male and female variance components of sperm retention or paternity.
Technologies developed over the past 20 years have helped elucidate the proximate mechanisms underpinning fertilization and have accelerated the field of CFC.
Females may bias sperm use at successive stages of the reproductive process, including shortly after mating, during sperm transit and/or storage, and at fertilization.
CFC can have fundamental repercussions for sexual selection on males, female fitness, and, consequently, sexual conflict and intersexual coevolution, with ramifications for related evolutionary phenomena (e.g., speciation).
[Figure 1 panel labels: Premating; Postmating; Male-female Competition (1871); Female Choice (1871); Mechanisms of CFC.] (a) Sperm ejection and dumping are often used interchangeably; however, we consider these two potential mechanisms of CFC as distinct (i.e., ejection: active expulsion of sperm from the FRT immediately after mating; dumping: active discharge from SSOs). (A) Premating mechanisms of sexual selection act on variance in male access to different females and their eggs (small ovals grouped into separate clutches). Polyandry creates potential for postmating sexual selection acting on variation in the paternity of the eggs within each clutch (represented by eggs of different colors). Sperm competition was recognized by Parker in 1970 [2], while CFC was identified as an engine of sexual selection in 1983 [4]. Several factors can explain why CFC was only appreciated so late. First, postmating mechanisms are inherently obscure and hard to study; often, they are mediated by subtle molecular interactions and in internal fertilizers occur within the female reproductive tract (FRT). Male-driven mechanisms, such as the number of sperm inseminated, appear, at least superficially, more obvious than patterns of differential sperm utilization by females. Finally, an inherent male-dominated cultural bias likely predisposed researchers to male-driven explanations of postmating patterns, reminiscent of the skepticism that met Darwin's idea of premating female choice a century earlier. (B) Postmating processes through which females can control competitive fertilization success after mating (listed in approximate order of occurrence during and after mating; color coded: at mating, shortly following insemination, over prolonged sperm storage, around the time of fertilization). We discuss empirical evidence of these mechanisms in the main text, and restrict our focus to prezygotic stages, excluding mechanisms of differential abortion and maternal investment, which influence offspring fitness rather than paternity share. The arrow on the right represents the proportion of the ejaculate neutralized at successive stages. Mechanisms closer to fertilization deal with fewer sperm and, consequently, must be more precise than mechanisms acting at earlier stages. Some of these mechanisms are more relevant to internal fertilizers than to other organisms (e.g., sessile broadcast spawners). Abbreviation: SSO, sperm storage organ.
factors such as phenotype or genotype. Box 1 outlines a general quantitative framework to test CFC; below, we review recent empirical approaches addressing (i) and (ii).
Experimental Manipulation of Male Quality and Compatibility
The causal relationship between a male trait and patterns of female sperm utilization can be illuminated by experimentally manipulating male phenotype while controlling for, or blocking, other factors. For context-dependent phenotypes, such as social status or relatedness, a powerful design involves allowing females to evaluate the same male in different contexts. Changes in female sperm utilization and/or fertilization success associated with such manipulations are consistent with CFC (e.g., [7]). However, plastic male responses (e.g., differential sperm allocation) must be controlled for, increasing the difficulty of demonstrating CFC. Artificial insemination (AI) or in vitro assays of sperm utilization and fertilization can be used to control ejaculate traits and eliminate the influence of premating mechanisms (e.g., [8][9][10][11]). A limitation of in vitro approaches is that they can remove some CFC mechanisms triggered by female assessment of male phenotype. However, AI can be used to experimentally manipulate female perception, such as by exposing a female to one male while inseminating her with the sperm of another [12].
Glossary:
Good genes: models of sexual selection that assume that extreme ornaments indicate the genetic quality of the bearer (usually males), defined as breeding value for fitness.
'Good sperm' hypothesis: predicts that females mate polyandrously to ensure that males of high genetic quality, and increased competitive fertilization success, sire their offspring.
P2: the paternity and/or fertilization success of the second of two males to copulate with a female, expressed as a proportion of total offspring and/or egg number.
Polyandry: mating systems in which females mate with more than one sexual partner. In the context of this review, we focus on situations in which polyandry creates the opportunity for the sperm of different males to co-occur for the fertilization of the same set of ova.
Polyspermy: the condition where multiple sperm enter the egg during fertilization (described as pathological when polyspermy has fatal consequences for the zygote).
Box 1. Defining and Demonstrating CFC
CFC is operationally defined as variation in fertilization success among males due to nonrandom, differential responses of females; thus, demonstrating CFC requires dissecting male and female variance components of sperm retention or paternity. Although demonstrating CFC has been historically debated [84], the approach outlined below is now widely accepted. The simplest case is a factorial design where females are exposed to sperm of individual males to distinguish consistent patterns of sperm utilization from random error. Each male-female combination is replicated using the same or genetically similar individuals (e.g., full-sibs, isogenic, or inbred lines). We partition the Sum of Squares within (SS_within, error) and between (SS_between) male-female combinations; SS_between is then partitioned across the male and female main effects and their male × female interaction. When SS_within > SS_between, variation is random across male-female combinations, while SS_within < SS_between indicates significant differences. A good example of this general approach is provided by a study of Drosophila melanogaster [85], in which the repeated use of individual males with individual females enabled the authors to estimate the repeatability of P1 and P2.
A significant female effect indicates consistent differences among females in sperm utilization (e.g., they might lose sperm from the SSO at a faster rate), regardless of male identity. This scenario can have interesting repercussions for sexual selection on males if males mate nonrandomly with respect to female type, but does not in itself represent CFC. A significant male effect indicates consistent variation among males independent of female identity, due to either male effects (e.g., variation in ejaculate fertilizing efficiency) or directional CFC for certain male traits. The two alternatives are not mutually exclusive, and special care is required to distinguish male and female mechanisms. One approach is to measure ejaculate phenotypes related to competitive fertilization success (e.g., sperm numbers or velocity) and generate expectations of paternity share based on the relative values of these male traits. Deviations from such expectations are inconsistent with sperm competition explanations and instead lend support to CFC. For example, Parker et al. [86] generated expectations for P 2 based on S 2, and this approach was later modified for non-normal data [87,88] and multiple SSOs [89]. Finally, a significant male × female interaction indicates nondirectional CFC, that is, consistent differences across male-female combinations in sperm utilization or fertilization success [90].
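As an illustration of this variance-partitioning logic, the following is a minimal sketch in Python; the data frame, column names, and the use of statsmodels are our own assumptions rather than part of the original framework, and the 'fair raffle' expectation is shown only as one possible sperm-competition null against which deviations could be assessed.

```python
# Sketch: partition variance in paternity share across male and female
# identities and their interaction (hypothetical data frame and columns).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def partition_paternity_variance(df: pd.DataFrame) -> pd.DataFrame:
    """df has one row per replicate trial with columns 'male' (male line/ID),
    'female' (female line/ID) and 'p2' (paternity share of the focal male).
    Returns an ANOVA table with male, female and male x female terms;
    the residual row corresponds to the within-combination sum of squares."""
    model = ols("p2 ~ C(male) * C(female)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)

def fair_raffle_expectation(s1: float, s2: float) -> float:
    """One possible sperm-competition null: expected P2 if fertilization is
    a 'fair raffle' over stored sperm numbers S1 and S2. Deviations from
    such expectations are what lend support to CFC."""
    return s2 / (s1 + s2)
```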
This approach can be expanded to include sperm competition between two males, attributing variation in P 2 to the female, first male, second male, or male × male and female × male × male interactions. We can also test hypotheses that certain factors influence CFC by including male or female genotype or phenotype as a main or random effect, depending on experimental design. In the case of directional CFC on a continuous variable, we can use selection analysis to express male fitness (fertilization success, W) as a function of the male phenotype, z, targeted by CFC (Equation I): W = bz + e, where e is an error term, W and z represent standardized male fitness and phenotype, respectively, and b represents the standardized gradient of postmating intersexual selection on z (i.e., b = S/σP, where S is the CFC selection differential). However, the causal relationship between male trait and female response can only be demonstrated through experimental manipulations.
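As a hedged illustration of Equation I, the sketch below estimates the standardized postmating selection gradient b by ordinary least squares; the function name, the standardization choices, and the use of NumPy are our assumptions, not the authors' analysis.

```python
# Sketch: estimate the standardized postmating selection gradient b in
# W = b*z + e by regressing relative fitness on the standardized trait.
import numpy as np

def cfc_selection_gradient(fitness, trait):
    """fitness: per-male fertilization success; trait: the male phenotype
    targeted by CFC (e.g., sperm length). Returns the slope b."""
    fitness = np.asarray(fitness, dtype=float)
    trait = np.asarray(trait, dtype=float)
    w = fitness / fitness.mean()                     # relative fitness W
    z = (trait - trait.mean()) / trait.std(ddof=1)   # standardized trait z
    b, _intercept = np.polyfit(z, w, deg=1)          # least-squares fit
    return b
```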
Glossary (continued)
S 2: the proportion of stored sperm from the second of two males to mate, relative to the total sperm.
Sexual conflict: a divergence in the fitness interests of males and females over reproductive decisions or the outcome of their reproductive interactions.
Sexual selection: selection for traits conferring an advantage in intrasexual competition over mating and fertilization; occurs before (premating), during, and after (postmating) copulation, either as competition between members of the same sex (usually males) for access to mates (intrasexual) or when members of one sex (usually females) choose individuals and/or sperm of the opposite sex (intersexual).
'Sexy sperm' hypothesis: predicts linkage disequilibrium between female polyandry and male sperm competitive efficiency, leading to polyandrous females having a selective advantage over monandrous females because their sons will be sired by males with competitively superior sperm and will inherit this trait.
Sperm chemotaxis: the movement of sperm toward eggs, following a gradient of chemoattractants released by unfertilized eggs.
Sperm competition: occurs when the ejaculates of two or more males compete to fertilize the eggs of a female.
Sperm precedence: when the sperm of one male has an advantage over another male, for example by being preferentially selected by the female during mating.
positioning in the female sperm storage organ (SSO) [13] or fertilization success [14]. Competitive PCR of microsatellites has been used to quantify S 2 for individual males within the SSOs of multiply mated females [15,16]. Differential labeling of sperm from multiple males has allowed high-resolution characterization of postmating mechanisms, including those related to CFC [8,9,17]. The recent development of transgenic males producing live sperm expressing green or red fluorescent proteins has enabled unprecedented insights into the behavior of sperm within the FRT, and CFC mechanisms [18][19][20][21].
Potential Mechanisms
Eberhard identified multiple proximate mechanisms through which females might bias fertilization at successive stages of the reproductive process [5]. Here, we focus on prezygotic mechanisms at and shortly after mating, mediating sperm storage in the SSO, and at fertilization ( Figure 1B).
Differential Responses at and Shortly after Mating
Females might first influence paternity by controlling the timing and order of competing inseminations. Females of the moth Ephestia kuehniella influence P 2 by remating sooner, through displacement of the first spermatophore from the SSO [22]. Moreover, the outcome of sperm competition is often mediated by the number of sperm inseminated by different males. While ejaculate size is largely under male control, females might influence sperm transfer through spermatophore acceptance or by actively terminating copulation. An elegant study in the guppy Poecilia reticulata showed that a male inseminates more sperm if his mate perceives him to be relatively attractive [7]. Although poorly investigated, female control over copulation duration represents an effective mechanism for mediating which sperm enter the fertilizing pool [23,24]. In several species, a proportion of the ejaculate is lost shortly following ejaculation, and female processes, such as differential sperm ejection, digestion, and incapacitation, influence which sperm are retained. In some invertebrates, differential sperm ejection is associated with male size [21,25], species identity [26], and courtship duration [27]. Similarly, sperm ejection by female feral fowl Gallus domesticus might disfavor inseminations by socially subdominant males [28,29] (Figure 2). Female kittiwakes Rissa tridactyla can use sperm ejection to reduce the risk of fertilization by sperm aging within the FRT from previous copulations, which compromises offspring viability [30]. Sperm ejection might be male-induced in the socially polyandrous dunnock Prunella modularis, where the male pecks the female cloaca before mating, which stimulates ejection of previously stored semen from other males [31], although the extent to which males control this female response remains unclear.
Mechanisms of sperm uptake can also create opportunities for CFC, such as contractions of the FRT that facilitate sperm passage from lower to upper FRT in red garter snakes Thamnophis sirtalis parietalis [32]. In some primates, the degree of sperm uptake has been linked to contractions associated with female orgasm, and female Japanese macaques Macaca fuscata are more likely to achieve orgasm-like responses when mating with socially dominant males [33], suggesting preferential sperm uptake for these males. Finally, sperm might be attacked by innate or acquired immune responses, phagocytosed, digested, or incapacitated within the FRT, such as by spermicidal action (e.g., Drosophila pseudoobscura [34]). Females might also exert their control by alleviating sperm incapacitation by rival ejaculates (e.g., bees and ants [35]).
Out of all of these prestorage female-mediated phenomena, evidence that they function as CFC appears clearer for differential sperm ejection in relation to male phenotypes, although even here the causal effect of female response on paternity share remains largely unresolved.
Differential Sperm Storage
If sperm reach storage having escaped ejection, digestion, or incapacitation, they might interact with sperm from other males through displacement, stratification, or mixing. Eberhard first suggested that FRT complexity can increase female control over sperm storage and paternity [5]. Indeed, SSO morphology can influence the degree to which sperm are stored and/or displaced. Female dung flies Scathophaga stercoraria with four SSOs might be better able to control paternity compared with females with only three SSOs [36].

Figure 2. (A) Selection experiments in D. melanogaster and new comparative evidence across Drosophila species indicate that directional CFC targets sperm size, promoting the evolution of giant sperm, one of the most exaggerated sexual ornaments [54]. This appears to be the result of a Fisherian-like process in which female seminal receptacle (SR) length is genetically correlated with sperm length as well as with ejection time, remating rate, and sperm displacement [54].
(B) Male coloration in guppies, Poecilia reticulata. Female guppies prefer to mate with more colorful males, particularly those sporting a relatively large carotenoid-based patch. Pilastro et al. [7] demonstrated a role of CFC by manipulating the perception of male attractiveness to females, who actively favored fertilization by brightly colored males, controlling the duration of the copula and, thus, the number of sperm transferred [24]. Females terminated copulation earlier and received fewer sperm with males that were perceived to be of lower quality through comparison with another, more colorful male [7]. (C) Male feral fowl, Gallus domesticus, competing for social status. Male social dominance appears to be favored by CFC in some populations. Females can eject ejaculates immediately following insemination, when approximately 89% is expelled on average [28,29]. Females of a feral population were found to vary predictably in the probability (risk) of sperm ejection and the proportion of ejaculate lost (intensity). Part of this variation is explained by mechanical properties, for example, larger ejaculates suffer a higher ejection risk, possibly because it is harder for females to take up these inseminations given the lack of intromission. However, other patterns suggest differential sperm ejection by females (e.g., risk increases as females accumulate successive matings, controlling for ejaculate volume); thus, socially subordinate males suffer higher ejection intensity [28,29]. (D) Nesting male ocellated wrasse, Symphodus ocellatus. In this externally fertilizing species, CFC favors fertilization by 'nesting' males. Males adopt alternative mating tactics: 'nesting' males attend nests where females lay their eggs, while 'sneaker' and 'satellite' males scrounge fertilizations by visiting the nests of nesting males. Nesting males produce faster sperm, while sneakers produce more sperm. Recent experimental evidence demonstrates that female ovarian fluid (OF) biases sperm competition dynamics to increase the relative importance of sperm velocity over sperm numbers, thus favoring the ejaculates of nesting males and reinforcing female premating preference for these males [44]. Reproduced, with permission, from Amy Hong (A), C. Gasparini (B), and H. Løvlie (C).
In Drosophila melanogaster, longer sperm are favored when stored in longer female seminal receptacles (SR) [37] due to their superior ability to displace, and resist displacement by, shorter sperm [38], exemplifying that mechanisms of CFC and sperm competition are not mutually exclusive and often work through a process of male-female interaction. Once stored, sperm can be lost from the SSOs in a process referred to as sperm 'dumping' [39] (Figure 1B). Dumping has been suggested to occur in several invertebrate taxa (e.g., [40]).
Selective Fertilization
Differential Mediation of Sperm Performance
As a key determinant of fertilizing efficiency, sperm swimming performance offers an important mechanism through which females can bias fertilization. Female reproductive fluids are emerging as widespread modulators of sperm swimming. Differential sperm chemotaxis was first demonstrated in a mussel Mytilus galloprovincialis, where chemoattractants in the fluid associated with the eggs differentially mediate the migration of sperm of individual males by changing sperm swimming behavior [41]. Sperm swimming velocity is determined by the interaction between the identity of the male sperm donor and the female 'ovarian fluid' (OF) donor in several external fertilizers (e.g., [42,43]). In the externally fertilizing ocellated wrasse Symphodus ocellatus, OF provides a mechanism by which females can bias the outcome of fertilization toward certain 'nesting' male phenotypes [44] (Figure 2), while, in the guppy, an internal fertilizer, in vitro evidence indicates that OF mediates sperm swimming velocity to bias paternity toward unrelated males [10].
FRT secretions can also mediate differential sperm activation, where sperm must undergo postmating transformations to achieve fertilization. In spiders, secretions from the SSO break sperm capsules to release them, creating the opportunity for females to selectively activate the sperm of different males [45]. In mammals, the pivotal role that FRT secretions have in sperm capacitation and hyperactivation might enable discrimination among sperm of rival males [46].
The differential effects of reproductive fluids offer significant potential for CFC.
Sperm-Egg Signaling
There is some evidence, largely from in vitro studies, that CFC can occur during sperm-egg interactions. External fertilization in sea urchins Echinometra mathaei and Echinometra oblonga is mediated by the sperm protein bindin, which is highly polymorphic within species [47]. This variation leads to assortative fertilization: in situations where all males have an equal opportunity to fertilize eggs, female sea urchins produce eggs that nonrandomly select sperm with a bindin genotype similar to their own [47]. The sea urchin egg glycoprotein EBR1 might facilitate gamete fusion by targeting sperm bindin through cell surface signaling [48]. Egg glycoproteins appear to have a similar function in house mice Mus musculus domesticus; a mismatch with sperm surface proteins leads to a significant reduction in litter size [49]. Gamete protein signaling in this species might also account for egg selection of specific sperm genotypes to avoid inbreeding [9] or to promote certain major histocompatibility complex (MHC) haplotypes [50] (Box 2).
In sea urchins and mice, 'egg defensiveness', a possible adaptation to the risk of pathological polyspermy, might also function as a means of filtering and selecting sperm that are compatible with the egg, or are of sufficient quality [8,[51][52][53]. Similarly, variation in the number or density of cells associated with the cumulus oophorus in mammals is a potential barrier by which females can control fertilization rates under different risks of pathological polyspermy [53]. Both within and between Mus species, the degree of gamete incompatibility is positively associated with sperm competition level, suggesting that the 'discriminatory' nature of eggs becomes greater as the intensity of postmating sexual selection increases [8,52]. Convergent patterns observed among internal and external fertilizers suggest that CFC mechanisms mediating sperm-egg fusion are a phylogenetically widespread phenomenon.
Evolutionary and Functional Implications of CFC
Functional Significance for Females
Nonrandom sperm utilization requires adaptive explanations, and several have been proposed over the past 20 years (Table 1). Resolving the adaptive significance of CFC for females is intrinsically tied to the adaptive significance of premating female choice and polyandry. As in premating female choice, adaptive explanations of CFC fall into two broad categories: (i) CFC is adaptive to females and has evolved specifically for the fitness benefits that controlling sperm utilization conveys to females; and (ii) CFC is not adaptive to females and represents either a side effect of other adaptive female traits (e.g., sensory bias) and/or male manipulation of female sperm utilization (e.g., males inducing females to bias sperm utilization in favor of their ejaculates even when this is against the fitness interest of the female; Table 1). Most explanations fall under (i), where CFC is seen as a means for polyandrous females to control paternity when premating choice is difficult or otherwise constrained. A key difference with premating female choice is that the fitness benefits of CFC are likely to occur exclusively through increased offspring fitness (i.e., genetic benefits). These genetic benefits may be shared across females (generating directional selection), or females may vary in their preferred criteria (generating nondirectional selection). In the former scenario, females might favor fertilization by males of certain phenotypes. Genetic mechanisms of good genes and Fisherian runaways are often invoked to explain the evolution of directional CFC (e.g., [54]; Figure 2A). Under the good sperm hypothesis [55], the postmating offshoot of the good genes hypothesis, polyandry selects for ejaculate traits correlated with male genetic quality. The related sexy sperm hypothesis [56] predicts that males successful in sperm competition sire sons with superior ejaculate traits. Both models require a genetic correlation between intrinsic sperm competitiveness and either genetic quality (good sperm) or female polyandry (sexy sperm). While neither model assumes a female role beyond mating multiply, CFC could catalyze both mechanisms.
Box 2. CFC and the Vertebrate MHC
The vertebrate major histocompatibility complex (MHC) is a highly polymorphic haplotype that primarily functions in immune regulation but also as a genetic compatibility system [91]. Given that MHC genes are critical for immune function, an increase in MHC heterozygosity, or the procurement of rare alleles within the MHC complex, is expected to lead to increased resistance among offspring. Consequently, mechanisms of CFC might be expected to favor the sperm of either dissimilar males or males with 'optimal' MHC similarity [91]. MHC-based disassortative fertilization might be a strategy to prevent inbreeding or maximize general genomic heterozygosity (enabling a wider recognition of pathogens; 'heterozygote advantage'), leading to increased offspring fitness [91]. The parasite Red Queen hypothesis posits that, when new combinations of genes are required to provide the best immune response in each generation, female choice for resistance genes that complement their own MHC genotype could, in theory, drive MHC diversity [91]. In the stickleback, individuals with an intermediate number of MHC alleles suffer lower levels of parasite infection, suggesting that MHC heterozygosity is optimized, rather than maximized, through female choice [91]. After mating, this process can occur via mechanisms of CFC that bias fertilizations toward sperm with complementary alleles [91].
MHC-dependent gamete fusion has been demonstrated in different taxa (mice [50], salmon [92], and guppies [93]), but what is the specific mechanism driving MHC-based sperm selection? Contradictory reports on whether sperm signal their MHC haplotype suggest that expression might depend upon male infection status [50]. Strong linkage disequilibrium between testis-expressed MHC genes and MHC-linked olfactory receptor genes in some taxa could indicate which MHC alleles are carried by the sperm (the sperm receptor selection hypothesis [91]). The complexity involved with MHC-based sperm selection is apparent in the red junglefowl, in which females might use premating phenotypic cues to select MHC-dissimilar sperm and avoid fertilizations by related males [11]. This system requires that females 'know' their own MHC genes and be able to assess those of their partners both before and after mating. Clearly, more research is required to precisely establish the mechanisms explaining MHC-based CFC.
Recent evidence indicates that genetic benefits of mate choice might be tempered by negative intersexual genetic correlations in fitness caused by intralocus conflict. Thus, adaptive CFC might enable females to ameliorate the costs of intralocus conflict by optimizing sex allocation based on paternity. In the lizard Anolis sagrei, selection on body size is sex specific: males but not females are selected to be large, and females, which are the heterogametic sex, bias fertilization so that male eggs are preferentially fertilized by the sperm of large males [57]. Furthermore, in the fruit fly Drosophila simulans, females discriminate against the sperm from males expressing a sex-ratio distorter [58]. By contrast, the evidence that CFC enables female house mice to avoid fertilization by males carrying the t haplotype meiotic driver is less conclusive [59]. Finally, CFC could, in principle, arise as a secondary consequence of viability selection on female fitness, not unlike sensory bias for premating preference evolution. For example, antimicrobial immune response in females might penalize ejaculates with higher microbial loads.

Table 1 (fragment): [12,16,60]; field cricket Gryllus bimaculatus [15,60]; orb-web spider Argiope lobata [62]; guppy Poecilia reticulata [10]; red junglefowl Gallus gallus [11]; house mouse M. musculus [9]. Hybridization avoidance: fruit flies Drosophila simulans × Drosophila mauritiana, nondirectional assortative fertilization [27]; crickets Gryllus bimaculatus × Gryllus campestris [82]; Atlantic salmon Salmo salar × brown trout Salmo trutta [83]. Genetic compatibility: mussel Mytilus galloprovincialis, nondirectional [41]. Offspring heterozygosity: Chinook salmon Oncorhynchus tshawytscha, nondirectional disassortative fertilization [43]. Offspring homozygosity: dung fly Scathophaga stercoraria, disruptive assortative fertilization [36].
CFC criteria that differ across male-female combinations are often explained by different mechanisms of genetic compatibility (Table 1). For example, heterozygosity can be increased when CFC favors male genotypes that are less similar to the female. Broadly consistent with this idea, a recent study in Chinook salmon Oncorhynchus tshawytscha found that the sperm-OF interaction predicted embryo survival better than did sperm competitive ability alone, providing evidence of the adaptive role of CFC [43]. In mussels, CFC promotes early embryonic viability in a way that is consistent with egg selection for genetically compatible sperm [41]. These effects could arise because CFC optimizes heterozygosity genome wide or at specific fitness-related loci, such as the MHC (Box 2). The benefits of the former are especially clear when considering inbreeding. In principle, females can avoid inbreeding depression by discriminating against the sperm of close relatives. A powerful approach for examining CFC in this context comes from experimental systems where: (i) females bear a cost of inbreeding depression; and (ii) male sperm competitiveness can be manipulated by experimentally controlling ejaculate size independent of female relatedness. Starting from studies of the field cricket Gryllus bimaculatus [60], evidence for CFC against inbreeding is accumulating. Female red junglefowl store fewer sperm following insemination by related than unrelated males (e.g., [11]). AI experiments in guppies showed that CFC biases paternity toward unrelated males in the absence of any premating cues [10,61]. Similarly, in vitro sperm selection against sperm from related males has been demonstrated in house mice [9]. More evidence comes from the Mediterranean orb-web spider Argiope lobata [62], and the Australian field cricket Teleogryllus oceanicus [12]. In other species, however, CFC against inbreeding is either absent or less consistent [25,63,64]. How can we explain this discrepancy? CFC is more likely to function as inbreeding avoidance under certain conditions, namely: (i) viscous population structure; (ii) intermediate levels of inbreeding depression (promoting male investment in, and female resistance against, inbreeding [11]); and/or (iii) limited opportunities for premating inbreeding avoidance (e.g., due to lack of kin discrimination or male coercion). Alternatively, females might benefit by favoring male genotypes that are similar to the female. Such an assortative pattern is observed in dung flies, where CFC favors males more similar to the female at the phosphoglucomutase (Pgm) locus [36], which modulates mobilization of glycogen reserves for flight and temperature-specific larval growth. Similarly, assortative CFC can help prevent hybridization (Box 3).
CFC can also result in female costs [65]. Given that CFC might hamper fertilization, females are expected to walk an evolutionary tightrope between the risk of producing offspring sired by suboptimal males and the risk of reduced fertility. Similarly, responses against the sperm of genetically similar or related males can be constrained by the risk of autoimmunity or of immune responses against embryos in viviparous taxa. However, the costs of CFC have seldom been quantified [65] (Box 4).
Evolutionary Consequences for Males
Variance in paternity share generates opportunity for postmating intersexual selection on males (Figure 1A), which can be directional or nondirectional (see above). While the latter is expected to maintain genetic variance and polymorphism, the former is expected to erode additive variance, particularly when directional CFC reinforces patterns of premating female choice (as seen in some of the examples in Figure 2). Conversely, when directional CFC works independently or even against other episodes of sexual selection, opportunity arises for alternative pathways through which males can attain reproductive success via alternative mating tactics. For example, territorial males might invest in traits such as ornaments, armaments, or paternal care that are important in premating sexual selection, while sneakers or satellites might invest in traits that increase fertilizing efficiency after mating, including traits favored by CFC. Alternatively, CFC might bias paternity toward territorial males (e.g., [44]).
Provided that CFC benefits females, there is inescapable sexual conflict between the female and the partners whose sperm she disfavors. Therefore, male evolutionary responses to CFC can comprise both strategies that meet female preferences and strategies that counteract CFC. Male courtship, mating, and postmating behaviors might ensure that females preferentially use the sperm of one male over those of others [66] and, therefore, can be under selection by CFC. For example, males can prevent or delay female remating through mate guarding, copulatory plugs, or accessory gland proteins that influence female remating behavior. When encountering paternity-biasing traits that allow female control over sperm transfer, males might seek to regain control through derived courtship and mating behaviors (e.g., traumatic insemination) and/or modifications in genital morphology (e.g., [67]). Sperm of some hermaphroditic Macrostomum flatworms have evolved bristle-like structures that help prevent them from being sucked out of the female antrum after mating [68]. Patterns of sperm neutralization by females will also influence male strategies of ejaculate expenditure. Males are selected to invest larger ejaculates when females indiscriminately neutralize a fixed number (but not a fixed proportion) of sperm for each insemination. When sperm neutralization is nonrandom and the ejaculates of a male are favored by some females but disfavored by others (nondirectional CFC), males are expected to allocate more when mating in the favored role.
Box 3. CFC and Reproductive Isolation
Rapid coevolution of male and female traits due to postmating sexual selection can lead to postmating-prezygotic (PMPZ) reproductive isolation mediated by competitive or noncompetitive gametic interactions [94]. Just as variation in fertilization success can derive from male (i.e., not CFC) or female (possibly directional CFC) effects or their interaction (nondirectional CFC), PMPZ alone does not automatically implicate CFC (Box 1, main text). In principle, CFC can promote speciation by disfavoring heterospecific fertilization in hybrid zones and secondary contact through conspecific or conpopulation sperm precedence. Evidence for assortative directional CFC in PMPZ isolation is most commonly found in cases of conspecific sperm precedence (CSP), in which progeny of females mating with both a heterospecific male and a conspecific male are sired predominantly by the conspecific male [94]. Indeed, CSP often occurs in systems where single matings yield viable offspring, and reproductive barriers become evident only under competitive conditions. CSP is thought to arise when divergent selection generates genetic incompatibilities between populations that effectively favor conspecific over heterospecific fertilizations. Although CFC due to genetic incompatibility is considered nondirectional in intraspecific matings, it becomes directional when selection consistently favors conspecific over heterospecific ejaculates.
Although CFC might mediate CSP at any stage from copulation to fertilization (Figure 1B, main text), the earliest and clearest examples come from studies of competitive gamete interactions. For example, variation at the bindin and lysin loci mediates species-specific fertilization in sea urchins and abalone, respectively, and both loci are under strong positive selection [95]. Furthermore, a highly controlled paired design involving in vitro sperm competition between Atlantic salmon and brown trout revealed CSP due to the enhancing effect of OF on conspecific sperm chemoattraction and motility [83]. There is some evidence that CSP mechanisms can also involve earlier stages of CFC, as well as multiple mechanisms within a system. CSP in competitive hybrid matings between the crickets Gryllus campestris and Gryllus bimaculatus is mediated by both preferential storage and a sperm-use bias toward conspecific sperm [82]. Moreover, Drosophila simulans females use differential ejection and use of alternative sperm storage organs to select against Drosophila mauritiana sperm [26].
Beyond generating divergent selection among populations, postcopulatory sexual selection can also affect the establishment and strength of CSP. In house mice, males from populations with high sperm competition outcompeted conpopulation sperm [96], and, in the yellow dung fly, sexually antagonistic coevolution within populations generated heterospecific sperm precedence [97]. Finally, the strength of CSP in mice covaried with the intensity of postmating sexual selection, such that eggs became more discriminatory against heterospecific sperm as the level of sperm competition increased [8].
Box 4. Going Forward: Key Directions for Future CFC Research
The first 20 years of research have brought us evidence that CFC can occur through several traits and mechanisms, building a platform for future studies of CFC at multiple levels. In keeping with the structure of this review, we use a classic categorization of both proximate and ultimate levels of analysis in biology to identify key challenges, summarized in Table I. Most of the effort so far has focused on categories (i) and (iv).
Mechanisms
Studies of mechanisms are arguably the most frequent and best developed of the four categories. However, much remains to be discovered even at this level. Despite recent progress (see main text), unambiguously distinguishing the effects of female- versus male-driven postmating mechanisms remains a key challenge in the study of CFC. Recent work in Drosophila melanogaster elucidated the intimate interaction between the effect of inseminated male accessory gland products and the response of the FRT to such effects, including potential for CFC [98]. In addition, it is becoming clear that multiple mechanisms of CFC might occur in the same organism. However, nothing is known about the temporal and spatial scales of these mechanisms and the way they interact with each other to influence paternity. Resolving individual mechanisms of CFC requires investigating more specific mechanisms. For example, in cases where CFC is based on stimuli (e.g., visual or olfactory) of the male phenotype or genotype, future research should determine how these stimuli can trigger a cascade of physiological, neurological, and endocrinological events that cause CFC. Similarly, little is known about mechanisms underpinning CFC when it is triggered by the phenotype of individual sperm cells. Among-sperm variation must exist to allow CFC mechanisms to act, but with few exceptions (e.g., Box 2, main text), such cues remain unidentified, and this area offers a wealth of future investigation. Sperm might convey molecular information to the FRT on which female sperm recognition mechanisms might act (i.e., the molecular sperm passport hypothesis) [46]. For example, the hyaluronic acid receptor CD44 on human sperm is a putative signal of sperm fertilizing potential and, therefore, sperm quality. The FRT, which is rich in hyaluronic acid (e.g., in cervical mucus, OF, and cumulus cells), can discriminate between sperm via the surface expression of CD44, suggesting that the FRT can 'read' information available on the sperm surface and accept or reject individual sperm [46]. Although unequivocal evidence is currently lacking, this type of sophisticated sperm discrimination is not unprecedented in other taxonomic groups. As we continue to characterize more mechanisms of CFC, a critical step is to identify the underpinning genetic, physiological, and biochemical processes. Future studies should consider what patterns of gene expression, nucleotide polymorphisms, and proteins explain variation in CFC mechanisms, and explore whether metabolic differences within the FRT mediate male × female interactions. The advent of genome-editing tools, such as CRISPR, appears particularly promising, because these tools allow the surgical deletion or replacement of candidate genes to establish the causal relationships among gene sequence, gene expression, and phenotype.
Development
This level of investigation remains almost entirely unexplored, because the majority of CFC work does not consider that patterns of CFC develop or change over the lifespan of a female. However, this is likely in several cases. In honey bees, Apis mellifera, the spermathecal fluid of the queen changes in protein composition, suggesting that the first ejaculate initially experiences a biochemical environment considerably different from that experienced by successive inseminations [99]. These ontogenetic changes have intuitive adaptive significance; virgin females might be less selective to reduce the risk that the eggs are not fertilized, and, as matings accumulate, both female choosiness and selectivity can increase. Similarly, CFC mediated by responses of the acquired immune system in vertebrates can change over time, as a female is repeatedly exposed to the sperm of the same male or genotype. Aging might also affect patterns of CFC; for example, in birds, older females can lose sperm from their sperm storage tubules at a faster rate than can younger females.
Function
Resolving the adaptive significance of CFC hinges on measuring fitness benefits and costs to females. While some benefits have been explored (see main text), we know next to nothing about the costs of CFC to females. Too-stringent CFC criteria and CFC-driven errors in sperm assessment might result in sperm limitation or in enduring unfavorable paternity outcomes. There are also likely to be immunological and physiological costs associated with developing and maintaining traits associated with CFC. Understanding how these costs modulate the intensity and choosiness of CFC, and selection on correlated traits, is an important area for future research: given that such costs alter the strength and direction of selection acting on both focal and correlated traits, filling this gap will enhance understanding of CFC at the population level (see below). Experimental evolution represents a powerful multigenerational approach for exploring the potential fitness implications, both costs and benefits, of CFC. This approach also creates the opportunity (and the need) to investigate the (co)evolution of male traits.
However, when individual males are always either favored or disfavored by all females (directional CFC), favored males are expected to always invest less than disfavored males [69]. Other male counter-adaptations can prevent sperm neutralization after mating. Spermatophores of the flatworm Dugesia gonocephala [70] and the snail Helix pomatia [71] might protect sperm from digestion in the copulatory bursa. In hermaphroditic land snails, love darts transfer an allohormone that delays sperm digestion by stimulating contraction of the copulatory canal, allowing more sperm to be stored [72]. In the heteromorphic D. pseudoobscura, nonfertilizing pseudosperm help counter the spermicidal action of the FRT [34]. Similarly, in many taxa, part of the male ejaculate, or even part of the intromittent organ, blocks the female genital opening. These traits have been interpreted as defensive adaptations to prevent female remating; a nonmutually exclusive function might be to prevent female sperm ejection.
Finally, if CFC is mediated by immunological responses, males might gain by reducing the bacterial load of their ejaculates. Consistent with this idea, the seminal fluid of several species is enriched with antibodies and other proteins and peptides with antimicrobial properties [73]. Seminal fluid can also contain vesicles, prostasomes, and exosomes with immunosuppressive properties [74], and one of their functions might be to inhibit the female immune response to sperm.
Male-Female Coevolution
Directional CFC and male responses can drive intersexual coevolution [5], divergence, and speciation (Box 3). Comparative studies have shown phylogenetic signatures of coevolution between FRT morphology and male reproductive traits (e.g., [75]). Genetic correlation between a male trait and CFC for that trait is required for Fisherian runaway selection and has been documented for only a few postmating traits, including in Onthophagus dung beetles [76] and Drosophila [54] (Figure 2A). These coevolutionary dynamics can often appear to be sexually antagonistic [77]. For example, across waterfowl species, more complex FRTs have evolved in response to male sexual coercion, seemingly to enable females to retain control over paternity, which has in turn driven the evolution of more complex male genitalia [67]. Similarly, egg responses mediating sperm attraction and/or entry (e.g., rapid divergence in signaling proteins and sperm performance-egg defensiveness) have been implicated in coevolution at the gametic level [8,51,52].
Phylogeny
Functional studies should also investigate macroevolutionary patterns of CFC-related traits, their coevolution with associated male traits, and the phylogenetic and ecological drivers of such patterns. A comparative approach would also help resolve the role of CFC in reproductive isolation (Box 3, main text) and diversification. Detecting the phylogenetic signature of CFC should be easier than for premating female choice, because CFC can be mediated by morphological or physiological traits that are easier to quantify and compare across species than are more plastic female preference traits. Yet, compared with male reproductive anatomy, female reproductive anatomy is distinctly underrepresented in evolutionary studies, even those investigating CFC! [100].

Table I (fragment). Key challenges include: characterizing ontogenetic and temporal patterns of variation in CFC; understanding macroevolutionary patterns of CFC and CFC-related traits.
Concluding Remarks
The idea of CFC has revolutionized the field of sexual selection by providing a critical counterpoint to male-driven sperm competition and illuminating the potential for female-mediated postmating processes. Evidence accumulated over the past 20 years confirms Eberhard's [5] intuition that multiple stages between gamete release and fertilization provide opportunities for CFC. However, evaluating this potential requires disentangling male and female effects, something that has been achieved to a degree only in a handful of organisms, and only at some of these stages. This is because the intimate correspondence of male stimuli and female responses that characterizes the cascade of events from insemination to fertilization often means that the very notion of disentangling male and female effects can be a misleadingly simplistic dichotomy.
If demonstrating CFC is difficult, understanding its functional significance is similarly challenging. Despite intense effort, evidence that polyandry and CFC benefit females remains remarkably elusive [78,79]. One reason for this is that multiple hypotheses have been proposed to explain the adaptive significance of CFC, and the multitude of these hypotheses makes it difficult to rule out the null hypothesis. Furthermore, adaptive CFC is likely driven by genetic benefits to the offspring, which are typically small [78]. We predict that CFC can have an important role under specific conditions, namely in highly polyandrous species, where premating female choice is difficult or severely constrained, such as broadcast spawners or internal fertilizers, where males can coerce females into mating. Investigations of such mating systems have been promising and suggest that, here, CFC can be an agent of evolutionary exaggeration and diversification through its role in sexual selection on males and intersexual coevolutionary dynamics.
Auditory Sensory Gating in Children With Cochlear Implants: A P50-N100-P200 Study
Background: While a cochlear implant (CI) can restore access to audibility in deaf children, implanted children may still have difficulty in concentrating. Previous studies have revealed a close relationship between sensory gating and attention. However, whether CI children have deficient auditory sensory gating remains unclear. Methods: To address this issue, we measured the event-related potentials (ERPs), including P50, N100, and P200, evoked by paired tone bursts (S1 and S2) in CI children and normal-hearing (NH) controls. Suppressed amplitudes for S2 compared with S1 in these three ERPs reflected sensory gating during the early (P50) and later (N100/P200) phases, respectively. The Swanson, Nolan, and Pelham IV (SNAP-IV) scale was administered to assess attentional performance. Results: Significant amplitude differences between S1 and S2 in N100 and P200 were observed in both NH and CI children, indicating the presence of sensory gating in the two groups. However, P50 suppression was only found in NH children and not in CI children. Furthermore, the duration of deafness was significantly positively correlated with the score of inattention in CI children. Conclusion: Auditory sensory gating can develop but is deficient during the early phase in CI children. Long-term auditory deprivation has a negative effect on sensory gating and attentional performance.
INTRODUCTION
There is a close link between cognitive decline and hearing loss (Dye and Hauser, 2014; Heinrichs-Graham et al., 2021). Patients with hearing loss face the risk of delays in multiple cognitive functions, such as working memory and executive function (Lieu et al., 2020). Specifically, attention-deficit disorders are more commonly reported in deaf children compared with normal-hearing (NH) peers (Hall et al., 2018). As one of the most successful neural prostheses developed to date, cochlear implants (CIs) help not only to restore hearing of deaf children, thereby supporting speech communication, but also to enhance their cognitive abilities (Kral et al., 2019). For example, CI children showed an improvement in non-verbal cognitive functions and working memory at 6 months after CI surgery (Shin et al., 2007). However, CIs still cannot ensure optimal cognitive outcomes (Kral et al., 2019). There is great variation in the attentional performance of CI children (Surowiecki et al., 2002). Both preschoolers and school-aged children with CIs were found to face a greater risk of deficits in the attention domain compared with NH children (Kronenberger et al., 2014). Nearly 40% of CI children attending mainstream classes could not pass a test of attention (Mukari et al., 2007), which may result in poor educational performance. However, the neural mechanism underlying poor attentional performance in CI children remains unclear.
Previous evidence has shown that sensory gating is involved in the early information processing of auditory attention (Wan et al., 2008). Sensory gating refers to the brain's ability to filter repetitive irrelevant stimuli (Chien et al., 2019), and it is mainly assessed by P50 suppression. As a "pre-attentive" process, P50 sensory gating manifests in the central nervous system modulating its sensitivity to incoming stimuli (Braff and Geyer, 1990), protecting the brain from information overload (Adler et al., 1982). The P50 is a positive component of auditory event-related potentials (ERPs) and usually occurs at about 50 ms after stimulus onset. It is thought to be generated from the thalamo-cortical projection to the auditory cortex (Sharma et al., 2009). In a paired-click paradigm, two successive P50 responses are evoked by an initial stimulus (S1) and a shortly following identical stimulus (S2) (Fruhstorfer et al., 1970). Normal P50 suppression is characterized by a reduction in P50 amplitude for S2 compared with S1. A higher ratio (S2/S1) or a smaller difference between these two P50 amplitudes suggests weaker sensory gating, associated with diminished cognitive functioning such as attention (Lijffijt et al., 2009).
Given that sensory gating is regarded as a multistage process (Boutros et al., 1999), previous studies have also paid attention to the later phases of auditory processing reflected by the N100 and P200 (Rosburg, 2018). The N100 is a negative component appearing about 100 ms after the onset of the auditory stimulus, and the P200 is a positive component appearing about 200 ms. The N100 and P200 components are mainly generated in the primary auditory cortex (Hegerl and Juckel, 1993). These two components have been proposed to involve distinct neural activities (Boutros et al., 2004;Chien et al., 2019) and thus be related to different functions (Lijffijt et al., 2009). Unlike the P50 involving the early phase of information processing, the N100 and P200 are considered to reflect triggering and allocation of attention, respectively (Shen et al., 2020). Thus, different phases of auditory information filtering should be investigated by the P50-N100-P200 complex.
There is a maturational course of sensory gating in typically developing children (Davies et al., 2009). Compared with adults, children always show immature sensory gating ability as revealed by longer P50 latencies (Hunter et al., 2012). With increasing age, young children (1-8 years of age) demonstrate a rapid decrease in latency (Freedman et al., 1987). The latency may stabilize at the pre-adolescent stage (9-12 years of age) and remain stable into adulthood. Brinkman and Stauder (2007) also found a negative correlation between age and the P50 amplitude ratio, indicating age-related sensory gating abilities. However, further analysis revealed that a significant difference in gating ratios was only found between the youngest children group (5-7 years of age) and the other three groups (8-9, 10-12, and 18-30 years of age) and not among the latter three groups. These findings imply that sensory gating may mature around the age of 8 years.
Sensory gating has been reported to be deficient in many neurological diseases (Gjini et al., 2011;Micoulaud-Franchi et al., 2015). Patients with schizophrenia (Smucny et al., 2013) or autism spectrum disorders (Crasta et al., 2021) showed reduced gating abilities reflected by abnormal P50, N100, and/or P200 amplitude ratios. This ineffective inhibitory modulation of sensory information may imply an imbalance of neuronal excitation/inhibition in this population (Culotta and Penzes, 2020). The inhibitory system is thought to be the underlying mechanism in modulating sensory gating (Adler et al., 1982). Evidence has also demonstrated that peripheral auditory deafferentation or sensorineural hearing loss negatively affects inhibitory mechanisms, reflected by a reduction of inhibitory inputs and subsequent imbalance between excitatory and inhibitory systems (Campbell et al., 2020a). The properties of the inhibitory synapses in the central auditory system are changed by auditory deprivation (Takesian et al., 2012). The inhibitory activity decreases, followed by an increase in the excitability of both midbrain and cortical neurons. Synaptic changes induced by early hearing loss contribute to auditory processing deficits and may be persistent even after auditory intervention (Takesian et al., 2009). Therefore, for deaf children who experience early auditory deprivation, it is unclear whether auditory sensory gating is deficient (no or reduced inhibition of repetitive irrelevant stimuli) after cochlear implantation.
In this study, we assessed auditory sensory gating in CI children by measuring the amplitude (gating) ratios of P50, N100, and P200 responses to paired tone bursts (S1 and S2). The attentional performance was also evaluated using the Swanson, Nolan, and Pelham IV (SNAP-IV) scale. We hypothesized that the sensory gating ability could develop after cochlear implantation but still be deficient because of long-term auditory deprivation. Therefore, we predicted that P50, N100, and/or P200 suppression would be poorer in CI children than in NH peers.
Participants
Twenty-four native Chinese children participated in this study, including 12 prelingually deafened children with unilateral Med-El CI devices [6 females, age range: 4-8 years; mean age ± standard deviation (SD): 6.01 ± 1.33 years] and 12 NH children (4 females, age range: 3.5-8.5 years; mean age ± SD: 6.59 ± 1.54 years). Eleven CI children did not pass the neonatal evoked otoacoustic emission test and were diagnosed with congenital sensorineural hearing loss. The other child was found to have profound sensorineural hearing loss before the age of 15 months. Two CI children had worn hearing aids before cochlear implantation. The auditory and speech abilities of CI children were evaluated with the Categories of Auditory Performance (CAP), Speech Intelligibility Rate (SIR), and Meaningful Auditory Integration Scale (MAIS; Peixoto et al., 2013). The scores of these three scales and more detailed information for the CI children are listed in Table 1. The NH children did not have a history of hearing loss. The two groups were matched in terms of years of education, family income, and levels of parental education. They had normal vision and no history of neurological or psychiatric illness. The protocols and experimental procedures in this study were reviewed and approved by the Anhui Provincial Hospital Ethics Committee. Each participant's guardians carefully completed an informed consent form before the experiment.
Sensory Gating Paradigm
In the electroencephalography (EEG) experiment, a tone burst (1,000 Hz, 30 ms duration, 4 ms linear rise/fall time) was used as the auditory stimulus to evoke the P50, N100, and P200 components. Tone bursts were presented in pairs: a conditioning stimulus (S1) and a testing stimulus (S2), with an interstimulus interval of 500 ms and an interpair interval of 8 s, through two loudspeakers placed at ±45° azimuth at a distance of 100 cm in front of the participants. The stimuli were delivered at an intensity of 80 dB SPL. For each participant, the experiment consisted of two blocks with 200 pairs of tone bursts in total and lasted for 30 min. The sound stimuli were generated with Adobe Audition 3.0 software (Adobe Systems Incorporated, San Jose, CA, United States) and presented with E-Prime 3.0 software (Psychological Software Tools, Pittsburgh, PA, United States).
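For illustration, a minimal Python sketch of such a paired tone-burst stimulus is given below; the sampling rate, the treatment of the 500 ms interval as onset-to-onset, and the function names are assumptions, since the original stimuli were generated in Adobe Audition.

```python
# Sketch of the paired tone-burst stimulus (sampling rate and the
# onset-to-onset reading of the 500 ms ISI are assumptions).
import numpy as np

FS = 44100  # assumed sampling rate in Hz

def tone_burst(freq=1000.0, dur=0.030, ramp=0.004, fs=FS):
    """1 kHz, 30 ms tone burst with 4 ms linear rise/fall ramps."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    n_ramp = int(ramp * fs)
    env = np.ones_like(tone)
    env[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)    # linear rise
    env[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)   # linear fall
    return tone * env

def paired_stimulus(isi=0.500, fs=FS):
    """S1 and S2 with a 500 ms onset-to-onset interstimulus interval."""
    burst = tone_burst(fs=fs)
    gap = np.zeros(int(isi * fs) - len(burst))
    return np.concatenate([burst, gap, burst])
```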
Attention Assessment
The Swanson, Nolan, and Pelham IV (SNAP-IV) scale was used to assess the attentional performance of NH and CI children. This rating scale is commonly used to evaluate attentional deficits in patients with ADHD (Swanson et al., 2001). The SNAP-IV includes 26 items divided into three subscales: inattention (9 items), hyperactivity/impulsivity (9 items), and oppositional (8 items) (Swanson et al., 2001). Parents were asked to rate the items according to the daily performance of their children by selecting one of four grades (not at all, just a little, quite a bit, very much). A higher score indicated more severe symptoms.
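A minimal scoring sketch is given below for illustration; the 0-3 coding of the four grades, the item ordering, and the use of mean item scores per subscale are assumptions, not details taken from the study.

```python
# Sketch of SNAP-IV subscale scoring (coding and aggregation are assumptions).
from statistics import mean

SUBSCALES = {                       # questionnaire positions (0-based)
    "inattention": range(0, 9),
    "hyperactivity_impulsivity": range(9, 18),
    "oppositional": range(18, 26),
}

def snap_iv_scores(ratings):
    """ratings: 26 integers, 0 = 'not at all' ... 3 = 'very much',
    in questionnaire order. Returns the mean item score per subscale;
    higher scores indicate more severe symptoms."""
    assert len(ratings) == 26 and all(0 <= r <= 3 for r in ratings)
    return {name: mean(ratings[i] for i in items)
            for name, items in SUBSCALES.items()}
```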
Electroencephalography Recording
The EEG was recorded from a cap with 64 Ag/AgCl electrodes (SynAmps RT, Curry, United States) placed on the scalp according to the international 10-20 system. Another two electrodes were located at the left and right mastoids. The reference and ground electrodes were placed on the tip of the nose and the forehead, respectively. Vertical and horizontal electrooculography (EOG) signals were obtained by bipolar electrodes above and below the left eye and lateral to the outer canthi of both eyes, respectively. The EEG data were sampled at 500 Hz and filtered online between 0.05 and 100 Hz. Electrode impedances were kept under 5 kΩ. Each child was asked to watch a silent cartoon sitting on a soft couch and to ignore the auditory stimuli.
Data Analysis
Offline analysis of EEG data was conducted with EEGLAB 13.0.0b in Matlab R2013b (The MathWorks, Natick, MA, United States). Data were filtered with a bandpass setting of 10-100 Hz for the P50 component and with a bandpass setting of 4-30 Hz for the N100 and P200 components. The epochs were set at 400 ms, starting at 100 ms before the onset of the stimulus. Baseline correction was performed relative to a baseline of −100 to 0 ms. Independent component analysis was used to remove eye movement, heartbeat, and CI artifacts from the EEG signals (Hongmei and Nan, 2017). Independent components reflecting these artifacts were identified and removed by visual inspection of the component's properties, including the waveform, 2-D voltage map, and spectrum (Gilley et al., 2006). After artifact removal, segments containing voltage deviations exceeding ±100 µV on any channels except for EOG channels were rejected. The ERPs evoked by S1 and S2 were calculated by averaging individual trials. The P50 component was defined as the most positive peak between 40 and 100 ms after stimulus onset. The N100 and P200 components were determined as the most negative and positive peaks after P50 between 80 and 150 ms and between 120 and 250 ms, respectively (Crasta et al., 2021). The amplitude of P50, N100, or P200 was determined by the peak-to-peak amplitude between the peak of P50, N100, or P200 and its preceding peak with reversed polarity. The gating ratio between the P50, N100, or P200 amplitude for S2 and that for S1 (S2/S1) was used to evaluate sensory gating ability: a lower gating ratio indicated robust gating, and a higher ratio indicated attenuated gating. The electrode Cz was selected for illustration.

FIGURE 1 | Grand average event-related potentials in response to S1 (blue solid line) and S2 (red dashed line) at site Cz. Both (A) children with normal hearing (NH) and (B) those with cochlear implants (CIs) showed robust P50, N100, and P200 components.
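The following simplified Python sketch illustrates the peak and gating-ratio measures described in the Data Analysis section; it omits the filtering, artifact-removal, and peak-to-peak steps, and the function names and array layout are assumptions rather than the authors' EEGLAB pipeline.

```python
# Simplified sketch: locate ERP peaks in the windows described above and
# compute the S2/S1 gating ratio. 'erp' is a 1-D average waveform (µV) and
# 'times' the matching time axis in seconds relative to stimulus onset.
import numpy as np

WINDOWS = {            # (t_min, t_max, polarity) per component
    "P50":  (0.040, 0.100, "pos"),
    "N100": (0.080, 0.150, "neg"),
    "P200": (0.120, 0.250, "pos"),
}

def peak_in_window(erp, times, t_min, t_max, polarity):
    """Return (latency, value) of the most positive ('pos') or most
    negative ('neg') point within [t_min, t_max]."""
    mask = (times >= t_min) & (times <= t_max)
    seg, seg_t = erp[mask], times[mask]
    i = np.argmax(seg) if polarity == "pos" else np.argmin(seg)
    return seg_t[i], seg[i]

def gating_ratio(amp_s1, amp_s2):
    """S2/S1 amplitude ratio: lower values indicate stronger suppression
    (more robust sensory gating)."""
    return amp_s2 / amp_s1
```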
Statistical Methods
One NH child and one CI child who had no robust N100 and P200 components were removed from further N100-P200 analysis. To assess whether auditory sensory gating existed in both groups, we compared the amplitudes of P50, N100, and P200 in response to S1 with those to S2 using a repeated measures analysis of variance (ANOVA) with stimulus (S1 and S2) as the within-subject factor. The differences in gating ratios, amplitudes, peak latencies, and SNAP-IV scores between the two groups were further evaluated by a one-way ANOVA with group (NH and CI) as the between-subject factor. Pearson's correlation was performed to assess the relationships among the gating ratios, scores of inattention, and onset or duration of deafness or CI use.
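For illustration, a minimal Python sketch of these statistics on hypothetical long-format data is given below; the column names, data layout, and the statsmodels/SciPy functions are our assumptions, not the original analysis code.

```python
# Sketch of the statistics described above on hypothetical data frames.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def stimulus_anova(df: pd.DataFrame):
    """Repeated-measures ANOVA with stimulus (S1/S2) as the within-subject
    factor; df needs columns 'subject', 'stimulus', 'amplitude'."""
    return AnovaRM(df, depvar="amplitude", subject="subject",
                   within=["stimulus"]).fit()

def group_anova(df: pd.DataFrame):
    """One-way ANOVA comparing NH and CI children on a per-child measure;
    df needs columns 'group' ('NH'/'CI') and 'value' (e.g., gating ratio)."""
    nh = df.loc[df["group"] == "NH", "value"]
    ci = df.loc[df["group"] == "CI", "value"]
    return stats.f_oneway(nh, ci)

def deafness_inattention_correlation(duration_of_deafness, inattention):
    """Pearson correlation between duration of deafness and inattention score."""
    return stats.pearsonr(duration_of_deafness, inattention)
```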
Higher P50 Ratio but Similar N100 and P200 Ratios in Cochlear Implant Children Compared With Normal Hearing Children
We further assessed whether the gating ratios, amplitudes, and peak latencies of P50, N100, and P200 differed between NH and CI children. CI children showed a significantly higher P50 gating ratio than NH children did [F (1,22) = 13.450, p = 0.001] (Figure 2A, middle). However, no significant difference in N100 [F (1,20) = 0.855, p = 0.366] or P200 [F (1,20) = 0.047, p = 0.831] gating ratios was found between these two groups (Figures 2B,C, middle). The amplitudes of N100 and P200 in response to S2 were significantly smaller than those to S1, indicating the presence of the auditory sensory gating in both NH and CI children. However, P50 suppression only existed in NH and not in CI children.

[Figure 2 caption fragment: (Middle) CI children showed similar N100 and P200 suppression ratios (S2/S1) but a higher P50 ratio compared with NH children. (Right) The P200 latencies in CI children were significantly shorter than those in NH children. Vertical bars represent the standard error. ***p < 0.001, **p < 0.01, and *p < 0.05.]
DISCUSSION
In this study, we assessed auditory sensory gating in CI children. CI children showed robust N100 and P200 suppression but no P50 suppression. Furthermore, the duration of deafness was positively correlated with the score of inattention. Our results demonstrate that auditory sensory gating can develop in CI children but is deficient during the early phase. Long-term auditory deprivation negatively affects the restoration of auditory sensory gating and attentional performance.
Cochlear implant children showed auditory gating as revealed by the N100 and P200 suppression, indicating that the CI helps to rehabilitate the auditory sensory gating abilities of deaf children. The precise organization of neuronal circuits in the mature brain is established by developmental processes that involve reorganization and fine tuning of immature synaptic networks (Kandler, 2004). The maturation of the auditory system requires stimulation. Auditory deprivation may keep the synapses immature until the cochlear implantation helps to restore hearing and get rid of this frozen state (Sharma et al., 2002a). These activity-dependent processes may include improvement in synaptic efficacy and increased myelination (Gordon et al., 2003). The auditory system may rapidly develop within a critical period of 3-6 months after cochlear implantation and enter a maturation period after 12 months (Ni et al., 2021). Most CI children in our study received implantation before 3.5 years old and still had high plasticity of the auditory cortex (Manrique et al., 1999). Therefore, sensory gating can develop and be functional in CI children, though its developmental trajectory may be delayed. Considered as an automatic and involuntary first part in the attentional processes, sensory gating may prevent limited attentional resources from being disturbed by repetitive irrelevant stimuli and protect CI children from later attentional dysfunction (Hutchison et al., 2017).
Interestingly, compared with NH children, CI children showed similar gating ratios at the N100 and P200 but no robust P50 suppression, indicating deficient sensory gating during the early phase. There are two functionally distinct generators related to P50 suppression: the temporal lobe and the frontal lobe (Weisser et al., 2001; Korzyukov et al., 2007; Campbell et al., 2020b). A considerable body of invasive and noninvasive research on sensory gating suggests that the auditory P50 response may be explained by contributions from the bilateral temporal lobes, including the left and right superior temporal gyri (STG; Lee et al., 1984; Liegeois-Chauvel et al., 1994; Knott et al., 2009; Mayer et al., 2009). In addition to the bilateral temporal lobes, the prefrontal source is usually attributed to the reduction of amplitudes to repeated stimuli (Grunwald et al., 2003; Korzyukov et al., 2007). In an MEG study on M50, the neuromagnetic counterpart of the P50 component, the prefrontal region was found to suppress the activity of the bilateral STG within the auditory M50 network (Josef Golubic et al., 2014). Similar to the P50 component, the N100 and P200 gating responses involve the activation of inhibitory frontal and temporo-prefrontal networks (Campbell et al., 2020b). However, the functions of these suppression-related regions may differ. P50 gating may work as a bottom-up process, while the N100 and P200 are mainly concerned with top-down processes (Boutros et al., 2013). Incoming sensory inputs first activate automatic central inhibitory mechanisms prior to top-down cognitive involvement (Javitt and Freedman, 2015). Evidence suggests that N100 and P200 gating may be more susceptible to attention compared with early P50 gating (Rosburg et al., 2009). Given the absence of P50 suppression and the presence of robust N100 and P200 suppression in CI children, we infer that the multistage inhibitory networks are damaged by auditory deprivation at the early stage but can be compensated for at the later stage by top-down modulation. We also found a positive correlation between the score of inattention and the duration of deafness. These findings suggest that long-term auditory deprivation has a negative effect on both early sensory gating and attentional functions. We did not find significant correlations between the gating ratio and the attention performance. A possible reason is that the Swanson, Nolan, and Pelham IV (SNAP-IV) scale, which relies on parents' daily observations to assess attentional performance, is relatively subjective. However, an objective and more accurate method for young children with hearing disabilities is still lacking.
Our previous study found that when dealing with complex speech sounds, CI children showed smaller and slower mismatch negativity (MMN) and even an absence of the late discriminative negativity (LDN) compared with NH children (Hu et al., 2021). Contrary to these late-latency ERPs, the robust P50-N100-P200 responses could be evoked by simple tone bursts, reflecting early processes of acoustic analysis. Compared with NH children, CI children showed similar P50 amplitudes but significantly different P50 amplitude ratios, suggesting that the brain can encode the acoustic features of novel sounds but has difficulty in inhibiting the neural response to repetitive irrelevant sounds (S2). The inhibitory system is thought to be the underlying mechanism in modulating sensory gating (Adler et al., 1982). Therefore, auditory deprivation may reduce the inhibitory activity, resulting in persistent higher excitability to repetitive irrelevant sounds during the early phase of information processing.
There are still some limitations to this study. First, although we tried to recruit CI children with consistent conditions (such as the brand of CI devices), heterogeneity among the CI children was still present. For example, two CI children had been fitted with hearing aids before cochlear implantation. We cannot separate the effect of early hearing aid fitting from that of CI use on the development of sensory gating. Therefore, a more detailed grouping method should be considered based on a larger sample size. Second, no children implanted with CI devices before the age of 12 months were included. Previous findings have shown the positive effect of early CI use on auditory rehabilitation (Sharma et al., 2002b; Dettman et al., 2007). Although we did not find correlations between the onset age of CI use and the P50-N100-P200 gating ratios, it is possible that earlier cochlear implantation (<12 months) may result in better rehabilitation of auditory sensory gating.
CONCLUSION
The CI helps to restore auditory sensory gating in prelingually deafened children. However, this gating ability is deficient in CI children during the early phase. Long-term auditory deprivation adversely affects auditory sensory gating and attentional performance.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Anhui Provincial Hospital Ethics Committee. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
Y-XC, J-QS, J-WS, and X-TG conceived and designed the experiments. Y-XC, X-RX, X-YH, R-RG, and J-WS recruited the participants. Y-XC and X-TG performed the data acquisition. Y-XC, SH, J-WS, and X-TG analyzed the data. All authors wrote the manuscript and approved the final article.
Heat-induced Proteome Changes in Tomato Leaves
Three tomato (Solanum lycopersicum) cultivars (Walter LA3465 (heat-tolerant), Edkawi LA2711 (unknown heat tolerance, salt-tolerant), and LA1310 (cherry tomato)) were compared for changes in leaf proteomes after heat treatment. Seedlings with four fully expanded leaves were subjected to heat treatment of 39/25 °C at a 16:8 h light-dark cycle for 7 days. Leaves were collected at 1200 HR, 4 h after the light cycle started. For 'Walter' LA3465, heat-suppressed proteins were geranylgeranyl reductase, ferredoxin-NADP(+) reductase, Rubisco activase, transketolase, phosphoglycerate kinase precursor, fructose-bisphosphate aldolase, glyoxisomal malate dehydrogenase, catalase, S-adenosyl-L-homocysteine hydrolase, and methionine synthase. Two enzymes were induced: cytosolic NADP-malic enzyme and superoxide dismutase. For 'Edkawi' LA2711, nine enzymes were suppressed: ferredoxin-NADP(+) reductase, Rubisco activase, S-adenosylmethionine synthetase, methionine synthase, glyoxisomal malate dehydrogenase, enolase, flavonol synthase, M1 family peptidase, and dihydrolipoamide dehydrogenase. Heat-induced proteins were cyclophilin, fructose-1,6-bisphosphate aldolase, transketolase, phosphoglycolate phosphatase, ATPase, photosystem II oxygen-evolving complex 23, and NAD-dependent epimerase/dehydratase. For cherry tomato LA1310, heat-suppressed proteins were aminotransferase, S-adenosyl-L-homocysteine hydrolase, L-ascorbate peroxidase, lactoylglutathione lyase, and Rubisco activase. Heat-induced enzymes were glyoxisomal malate dehydrogenase, phosphoribulokinase, and ATP synthase. This research resulted in the identification of proteins that were induced/repressed in all tomato cultivars evaluated (e.g., Rubisco activase, methionine synthase, adenosyl-L-homocysteine hydrolase, and others) and those differentially expressed (e.g., transketolase).
Temperature is a key factor determining optimal growth and productivity of plants. In recent decades, many parts of North America have been experiencing an increase in the number of unusually hot days and nights. Continued global warming is likely to result in an increase in frequency and intensity of heat waves and drought (U.S. Global Change Research Program, 2008). Leaves function as the primary manufacturer of many metabolites used during plant growth. Both the integrity of the machinery and functionality of enzymes associated with photosynthetic activity are sensitive to heat stress (Berry and Björkman, 1980;Murakami et al., 2000). Significant inhibition of photosynthesis occurs at temperatures only a few degrees above the optimum, resulting in a considerable loss of potential productivity.
Tomato is one of the most important vegetable species and cash crops in the world. Daytime temperatures consistently above 32°C and evening temperatures that stay above 24°C are considered excessive and are detrimental to tomato plant growth and fruit development. Tomato leaves exposed to prolonged heat stress conditions experience starch depletion as a result of enhanced hydrolysis and reduced biosynthesis activities (Dinar et al., 1983). Short-term heat stress affects pollination, resulting in unfertilized embryos and aborted fruits (Berry and Rafique-Ud-Din, 1988).
Previous research has demonstrated that there is a strong correlation between heat stress and fruit yield in tomato (Berry and Rafique-Ud-Din, 1988). Membrane thermostability and heat-induced increase in chlorophyll a:b ratio and decrease in chlorophyll:carotenoids ratio are directly associated with the level of thermotolerance in tomato cultivars (Camejo et al., 2005;Saeed et al., 2007). Individual genes encoding for heat stress transcription factors (Chan-Schaminet et al., 2009;Scharf et al., 1998;Schultheiss et al., 1996), heat shock and chaperonin proteins (Port et al., 2004), and other functional proteins [e.g., kinases, reactive oxygen species scavengers, enzymes associated with sugar metabolism (Frank et al., 2009;Link et al., 2002)] play key roles in modulating thermotolerance in tomato.
Damage resulting from heat stress is present in multiple forms such as oxidative burst (Camejo et al., 2006), metabolic toxicity, membrane disorganization, inhibition of photosynthesis, and altered nutrient acquisition (Ismail and Hall, 1999;Karim et al., 1999;Wahid et al., 2007). Tolerant plants have a higher capacity to maintain homeostasis under stress by the activation of stress perception and signaling pathways, antioxidant capacity, gene expression regulation pathways, and alteration of metabolic cycles (Bohnert et al., 2006, and references therein). As a result of the complexity of plant response to heat stress, very few tolerant cultivars have been produced using traditional breeding protocols. Genetic transformation techniques have been of little use as a result of limited knowledge and availability of genes with known effects on plant heat-stress tolerance (Foolad, 2005;Wahid et al., 2007). In this project, heat-induced changes in the whole proteomes of tomato leaves were identified using a proteomics approach. The objective of this research was to determine candidate genes and pathways that should be investigated when breeding tolerant tomato cultivars.
Materials and Methods
PLANT GROWTH AND HEAT TREATMENT. Three tomato cultivars {Walter LA3465 [heat-tolerant (Swift, 2008)], Edkawi LA2711 [unknown heat tolerance, salt-tolerant], and LA1310 [cherry tomato]} were compared. Seed stocks were obtained from the C.M. Rick Tomato Genetics Resource Center at the University of California, Davis. Seeds were propagated in a greenhouse at Tennessee State University (Nashville). For this project, tomato seeds were germinated in seed cubes (Smithers-Oasis, Kent, OH), and seedlings were grown to the four fully expanded mature leaf stage in a greenhouse. Before heat treatment, tomato seedlings were transferred into two illuminated incubators (Thermo Fisher Scientific, Pittsburgh, PA), which were programmed at 25°C with a light cycle of 16/8 h (day/night). After 1 week, the incubator for heat treatment was reprogrammed to 39/25°C (16/8 h day/night) and the other incubator, for the control, was kept at 25°C. Cool-white fluorescent tubes provided a photosynthetic photon flux of 500 µmol·m−2·s−1. Five plants from each cultivar were placed on each shelf, and three layers of shelves (removing the top and the third shelves) were used in an incubator. The upper three fully expanded leaves were collected at 1200 HR, 4 h after the light cycle was initiated. Leaf tissues collected from the five plants on each shelf were pooled as one biological sample (replicate); three samples were collected for the control and treatment, and these were the three biological replicates for proteomics analysis. Immediately after detachment from the plants, the leaf samples were frozen in liquid nitrogen.
PREPARATION OF PROTEIN SAMPLES AND DIFFERENTIAL TWO-DIMENSIONAL FLUORESCENCE GEL ELECTROPHORESIS. To extract protein, frozen leaf tissues were ground into a fine powder and mixed into acetone containing 10% trichloroacetic acid and 0.5% dithiothreitol (Sigma, St. Louis, MO). After incubation at -20°C overnight, protein was precipitated by centrifugation at 10,000 × g at 4°C for 10 min. Pellets were washed four times with pre-chilled 100% acetone to remove all residual acid. Protein pellets were dried in a Thermo Savant SpeedVac (Thermo Fisher Scientific) at low heat.
For gel analysis, the protein powder was reswollen at room temperature in two-dimensional (2D) protein rehydration buffer consisting of 7 M urea, 2 M thiourea, and 4% 3-[(3-cholamidopropyl)dimethylammonio]-propanesulfonic acid. Soluble proteins were separated by centrifugation at 14,000 × g for 10 min. The protein concentration was determined using Bradford Protein Assay Reagent (Bio-Rad, Hercules, CA).
To quantitatively compare the samples using differential 2D fluorescence gel electrophoresis (DIGE) analysis, three biological replicates were labeled with cyanine dyes Cy3 and Cy5 (GE Healthcare, Piscataway, NJ) according to the manufacturer's instructions. Cy-dye-labeled samples were grouped randomly during electrophoresis so that no two Cy3 and Cy5 pairs were run on duplicate gels, to eliminate statistical biases. A dye swap design was incorporated to control for labeling biases. A combined Cy2-labeled internal standard containing equal amounts of all the protein extractions used in the experiment was used to normalize across the multiple gels (Alban et al., 2003), which greatly reduces variation in the samples as a result of electrophoresis and loading. The dye:protein ratio for the experiments was 200 pmol dye:50 µg total protein. The analytical gels were run using 50 µg of protein from each labeled sample. A preliminary analysis on a limited number of samples was done to conduct a power analysis to facilitate the design of the large-scale analysis. The pilot study demonstrated that three biological replicates were sufficient to identify differentially expressed proteins with greater than a 1.5-fold change at a statistical power of 0.85 or greater. Therefore, subsequent experiments had three biological replicates per treatment.
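A calculation of this kind can be reproduced in outline with statsmodels, as sketched below; the within-group variability assumed for log2-transformed spot volumes is an illustrative figure chosen for the example, not a value reported in the study.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Assumed within-group standard deviation of normalized spot volumes on a
# log2 scale; this value is illustrative, not taken from the pilot study.
sd_log2 = 0.25

# A 1.5-fold change corresponds to log2(1.5) ≈ 0.585 on the log2 scale.
effect_size = np.log2(1.5) / sd_log2          # Cohen's d

power = TTestIndPower().power(effect_size=effect_size,
                              nobs1=3,         # three biological replicates per group
                              alpha=0.05,
                              ratio=1.0)
print(f"Approximate power with n=3 per group: {power:.2f}")
```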
For running the gels, a protein sample was first subjected to isoelectric focusing (IEF) on the 24-cm Immobiline DryStrip pH 3-10 NL (GE Healthcare). At the completion of the IEF run, proteins were reduced and alkylated (Zhang et al., 2003). Strips were transferred onto 12.52% acrylamide-sodium dodecyl sulphate gels, which were prepared using 41.75% (v/v) of protogel (National Diagnostics, Atlanta, GA), and the gels (255 × 196 × 1 mm) were run on a Hoefer SE900 vertical slab gel electrophoresis unit (Hoefer, Holliston, MA) using the following protocol: 20°C at 20 mA for 30 min and then 50 mA for 12 h until the bromophenol blue front dye reached the bottom of the gel.
Gels were scanned on the Typhoon 9300 Variable Mode Imager (GE Healthcare) at 100 dpi according to the manufacturer's specifications for CyDyes (GE Healthcare); Colloidal Coomassie Blue (Invitrogen, Carlsbad, CA)-stained gels were visualized with the 632.8 nm helium-neon laser with no emission filter. The gel images were analyzed using Progenesis SameSpots (Version 3.3; Nonlinear Dynamics, Newcastle Upon Tyne, UK). All images passed quality control checks for saturation and dynamic range and were cropped to adjust for positional differences in scanning. The alignment procedure was semiautomated. Fifty manual alignment seeds were added per gel (approximately 12 landmark spots per quadrant) and the gels were then autoaligned and grouped according to treatment. The SameSpots default settings for detection, background subtraction (lowest on boundary), normalization, and matching were used. Spots (picking lists) were selected as being differentially expressed if they showed greater than a 1.5-fold change in spot density and an analysis of variance score of P < 0.05.
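Expressed as a small filtering routine, a spot-selection rule of this kind (greater than 1.5-fold change and ANOVA P < 0.05) might look as follows; the pandas layout (rows = spots, columns = gels) and the column-name arguments are assumptions for illustration, not the actual SameSpots export format.

```python
import numpy as np
import pandas as pd
from scipy import stats

def select_differential_spots(volumes: pd.DataFrame,
                              control_cols, treated_cols,
                              fold_cutoff=1.5, alpha=0.05):
    """Return spots with >1.5-fold change and one-way ANOVA P < 0.05.

    `volumes` holds normalized spot volumes (rows = spots, columns = gels).
    """
    ctrl = volumes[control_cols]
    trt = volumes[treated_cols]

    ratio = trt.mean(axis=1) / ctrl.mean(axis=1)
    # Signed fold change: 2.0 = twofold increase, -2.0 = twofold decrease.
    fold = np.where(ratio >= 1, ratio, -1 / ratio)

    # One-way ANOVA per spot between control and treated gels.
    pvals = volumes.apply(
        lambda row: stats.f_oneway(row[control_cols], row[treated_cols]).pvalue,
        axis=1)

    mask = (np.abs(fold) > fold_cutoff) & (pvals.values < alpha)
    result = pd.DataFrame({"fold_change": fold, "p_value": pvals.values},
                          index=volumes.index)
    return result[mask]
```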
For protein identification, preparative picking gels were run in which 450 µg of protein was loaded. Gel preparation and electrophoresis were done following the same procedure as for the DIGE gels. The protein gels were stained with Colloidal Blue staining solution (Invitrogen) overnight and destained in double-distilled H2O. Protein spots were picked manually from the gels and digested in situ with trypsin (sequence-grade trypsin, 12.5 ng·µg−1; Promega, Madison, WI) overnight. The resulting peptides were extracted from the gel pieces and concentrated with ZipTip C18 pipette tips (Millipore, Bedford, MA). An aliquot of each digest was spotted (along with matrix) onto a matrix-assisted laser desorption/ionization-mass spectrometry (MALDI-MS) target.
The samples were subjected to MALDI analysis using a 4700 Proteomics Analyzer equipped with time-of-flight (TOF)-TOF ion optics (Applied Biosystems, Framingham, MA). Before analysis, the mass spectrometer was calibrated externally using a six-peptide calibration standard (4700 Cal Mix; Applied Biosystems). Most samples were calibrated internally using the common trypsin autolysis products (at m/z 842.51, 1045.5642, and 2211.1046 Da) as mass calibrants. The external calibration was used as the default if the trypsin autolysis products were not observed in the spectra of the samples. The instrument was operated in the 1 kV positive ion reflector mode. The laser power was set to 4500 for MS and 5200 for MS/MS with collision-induced dissociation off. MS spectra were acquired across the mass range of 850 to 4000 Da. MS/MS spectra were acquired for the 10 most abundant precursor ions provided they exhibited a signal-to-noise ratio of 25 or less. Calibration was external using the known fragments of angiotensin I (monoisotopic mass 1296.6853 Da). A maximum of 2000 laser shots was accumulated per precursor. The MS data were processed using Mascot Daemon (Matrix Science, Boston, MA) to submit searches to Mascot (Version 2.3; Matrix Science). The search parameters used were as follows: tryptic protease specificity, one missed cleavage allowed, 30 ppm precursor mass tolerance, 0.5-Da fragment ion mass tolerance with a fixed modification of cysteine carbamidomethylation, and a variable modification of methionine oxidation. Spectra were searched against an in-house tomato protein database (T.W. Thannhauser, unpublished data) created by combining 40,000 predicted proteins from the tomato UniGene build 2 release (25 Mar. 2009; National Center for Biotechnology Information, Bethesda, MD) and 9000 predicted proteins that to date had been annotated in the tomato genome release (3 May 2009; SOL Genomics Network, Ithaca, NY). Only peptides that matched with a Mascot score above the 95% confidence interval threshold (P < 0.05) were considered for protein identification. Only proteins containing at least one unique peptide (a sequence that had not been previously assigned to a different protein) were considered.

[Figure 1 caption fragment: samples were separated on 12.52% acrylamide-sodium dodecyl sulphate gels. The molecular weight markers (Mw) are shown on the y-axis; they were the Cy2-labeled Broad Range Protein Molecular Weight Markers (Bio-Rad, Hercules, CA). Gels were scanned on a Typhoon 9300 Variable Mode Imager (GE Healthcare) and the images were analyzed using the Progenesis SameSpots program (Version 3.3; Nonlinear Dynamics, Newcastle Upon Tyne, UK). Numbered spots are protein spots that showed changes above 1.5-fold between control and heat treatment at P < 0.05 by analysis of variance.]
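For reference, the database-search settings listed above can be collected into a single configuration mapping. This is simply a restatement of the reported parameters as plain data; it is not a Mascot Daemon input format.

```python
# Summary of the Mascot search parameters described above, kept as plain data.
MASCOT_SEARCH_PARAMS = {
    "enzyme": "trypsin",
    "missed_cleavages": 1,
    "precursor_tolerance_ppm": 30,
    "fragment_tolerance_da": 0.5,
    "fixed_modifications": ["carbamidomethyl (C)"],
    "variable_modifications": ["oxidation (M)"],
    "database": "in-house tomato protein database (UniGene build 2 + genome release)",
    "peptide_acceptance": "Mascot score above 95% confidence threshold (P < 0.05)",
    "protein_acceptance": "at least one unique peptide",
}
```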
Results
In this experiment, no visible damage was observed on the leaf surface after heat treatment. However, the number of protein spots that showed greater than 1.5-fold (P < 0.05) changes between control and treated samples differed among the cultivars. The highest number of protein spots was identified in 'Edkawi' LA2711 [86 protein spots (Fig. 1A)], followed by 'Walter' LA3465 [43 protein spots (Fig. 1B)]; cherry tomato LA1310 had the fewest protein spots [40 protein spots (Fig. 1C)] exhibiting significant changes after heat treatment.
In cherry tomato LA1310 (Table 3), heat-suppressed proteins included aminotransferase (spot 627: -6.1-fold) in lysine biosynthesis; S-adenosyl-L-homocysteine hydrolase (spot 940: -3.4-fold) and adenosylmethionine synthetase (spot 612: -5.8-fold) in the activated methyl cycle; the antioxidant L-ascorbate peroxidase; lactoylglutathione lyase; and Rubisco activase. Heat-induced proteins included glyoxisomal malate dehydrogenase, phosphoribulokinase, and ATP synthase.

[Table footnote: the gene code refers to each gene in the SOL Genomics Network (SGN) database. The fold change value is the ratio of the normalized volume of the same spot under heat-treated versus control conditions; for example, a value of 2.0 represents a twofold increase, whereas -2.0 represents a twofold decrease relative to the control.]
Discussion
Based on the heat-induced proteomes, proteins that exhibited changes in expression level following the same or contrasting patterns in two or three tomato cultivars were identified. Rubisco activase and S-adenosyl-L-homocysteine hydrolase were suppressed by heat stress in all three cultivars. Rubisco activase is a chaperone protein that modulates the activity of Rubisco (Portis, 2003; Portis et al., 2008; Spreitzer and Salvucci, 2002). Thermotolerance or heat lability of Rubisco activase is considered to play key roles in heat tolerance or susceptibility of a plant species (Kurek et al., 2007; Long and Ort, 2010; Salvucci et al., 2001). The endogenous level of Rubisco activase is an important determinant of plant productivity under heat stress conditions (Ristic et al., 2009). Rubisco activase is present in two isoforms of 41 to 43 kDa and 45 to 46 kDa that arise from one alternatively spliced transcript. The larger isoform may play an important role in photosynthetic acclimation to moderate heat stress in vivo, whereas the smaller isoform plays a major role in maintaining Rubisco initial activity under normal conditions (Wang et al., 2010). In heat-treated tomato leaves, both the large and small isoforms were suppressed by heat treatment. It is therefore necessary to continue testing other heat-tolerant cultivars or wild relatives to identify heat-stable Rubisco activase to increase heat tolerance of tomato cultivars.
S-adenosyl-L-methionine (SAM) has key functions as a primary methyl group donor and as a precursor for metabolites such as ethylene, polyamines, and osmoprotectants (Amir et al., 2002). Adenosyl-L-homocysteinase is a key enzyme for the regeneration of SAM through the activated methyl cycle. Expression of the protein was suppressed by heat treatment in all three tomato cultivars; however, it was induced by salt and aluminum stresses (Krill et al., 2010; Narita et al., 2004; Zhou et al., 2009a, 2009b). These results suggest that adenosyl-L-homocysteinase could play different roles in tolerance mechanisms to different stress factors. In tomato, the major NADP-malic enzyme (NADP-ME) is a cytosolic protein and is found in developing fruit, leaves, roots, and stems (Knee et al., 1996). Overexpression of cytosolic NADP-ME increased plant defense against salt stress (Cheng and Long, 2007). The NADP-ME protein was induced in both 'Walter' LA3465 and cherry tomato LA1310. This suggests that NADP-ME could have an important function in stress tolerance for tomato plants. In addition, several proteins were suppressed in 'Walter' LA3465 and 'Edkawi' LA2711, but not in cherry tomato LA1310. These proteins included ferredoxin-NADP(+) reductase (FNR), which is responsible for the reduction of NADP+ in the photosystem I complex (Hurley et al., 2002), and geranylgeranyl reductase, which affects the accumulation of geranylgeranylated chlorophyll a and hence the stability of photosynthetic pigment-protein complexes (Shpilyova et al., 2004; Tanaka et al., 1999). Methionine synthase was also suppressed in 'Walter' LA3465 and 'Edkawi' LA2711. Methionine synthase is a key enzyme for the synthesis of the aspartate-derived methionine (Met). Met is used at multiple levels in cellular metabolism: as a protein constituent, in the initiation of mRNA translation, and as a regulatory molecule in the form of SAM (Hesse et al., 2004). In addition, the glyoxisomal malate dehydrogenase in the glyoxylate shunt was also suppressed in 'Walter' LA3465 and 'Edkawi' LA2711, but it was induced in cherry tomato LA1310. Heat stress induces production of reactive oxygen species (ROS) such as superoxide radicals, hydrogen peroxide, and hydroxyl radicals. Excess production of ROS causes oxidative damage to cellular components. In tomato, both gene expression and enzymatic activity of superoxide dismutase (SOD) are induced by drought and heat stresses (Panchuk et al., 2002; Perl-Treves and Galun, 1991). In the present study, SOD was induced only in 'Walter' LA3465, whereas ascorbate peroxidase (also an antioxidant enzyme) was suppressed in cherry tomato LA1310. Neither enzyme changed in 'Edkawi' LA2711. Cyclophilins are a major group of drought- and heat-induced stress proteins (Sharma and Kaur, 2009). The cyclophilin (CYP2) chaperone protein was induced only in 'Edkawi' LA2711 and not in 'Walter' LA3465 or cherry tomato LA1310. Tomato 'Walter' LA3465 and 'Edkawi' LA2711 are domesticated tomato forms, and cherry tomato LA1310 most likely is an ancestor of cultivated tomato (Peralta and Spooner, 2007; Ranc et al., 2008). These results suggest that the impact of these proteins on heat tolerance could be affected by the genetic background of tomato cultivars.
Heat stress-induced depletion of starch in leaves, resulting from inhibition of starch formation, is more pronounced in sensitive cultivars than in tolerant tomato cultivars (Dinar et al., 1983). In addition to the Calvin cycle, transketolase (TK) also participates in the oxidative pentose phosphate pathway to produce erythrose-4-phosphate. One of its substrates, fructose-6-phosphate, is also the starting point for starch synthesis, and one of its products, erythrose-4-phosphate, inhibits phosphoglucose isomerase, which catalyzes the first reaction leading to starch biosynthesis (Henkes et al., 2001). Decreased TK activity hence alters photosynthate allocation in favor of starch biosynthesis (Weber, 2007). Suppression of the TK enzyme could therefore be a mechanism by which the heat-tolerant cultivar Walter LA3465 maintains starch concentration in leaf tissues.
In summary, heat stress suppressed the accumulation of Rubisco activase and S-adenosyl-L-homocysteine hydrolase in all tomato cultivars. Several enzymes in the glyoxylate shunt, photosynthesis, cell defense, and carbohydrate metabolism pathways were differentially expressed. This research provided the basic information needed to formulate the molecular regulatory mechanism for heat tolerance in tomato.
Vertical Tropia Following Horizontal Transposition Surgery
Aim: The aim of this study was to determine the prevalence of vertical tropia following horizontal transposition of both vertical rectus muscles (HToVR) in patients with Duane syndrome or sixth nerve palsy. Methods: This retrospective study included patients with Duane syndrome or sixth nerve palsy who had undergone HToVR. Data collected included: age, gender, diagnosis, laterality, pre-operative angle of deviation, type of surgery, and post-operative angle of deviation at one week, three months and six months. Information on the use of botulinum toxin (BT) to the ipsilateral medial rectus (MR), whether additional surgery was performed, and the presence of preoperative and postoperative binocular function and any vertical deviation was also collected. Results: There were 11 patients, eight with a diagnosis of Duane syndrome and three with a diagnosis of sixth nerve palsy. The mean age of the patients was 13 ± 14.79 years (range 5–55 years), and four were female. The prevalence of post-operative vertical tropia was 54%. The mean vertical deviation for distance was 7.6 prism dioptres (Δ) ± 2.94 (SD) (range 3Δ–9Δ). Stereoacuity was present in five patients preoperatively and in eight postoperatively. No patient developed diplopia or received further surgery for the vertical tropia. Of the six patients who had intraoperative BT at the time of the HToVR, four developed a vertical deviation. Conclusion: The prevalence of vertical deviation following HToVR was 54% in our series. None of the patients with an induced postoperative vertical deviation reported diplopia or required further surgery for it.
INTRODUCTION
Horizontal transposition of the vertical rectus (HToVR) muscles refers to transposing the whole or part of a muscle in order to change its primary or secondary action. It has been used to treat type 1 Duane syndrome or unrecovered sixth nerve palsies (Ansons and Davis 2001).
The HToVR has been shown not only to correct the esotropia associated with Duane syndrome or sixth nerve palsies but also increase the amount of abduction (Ansons and Davis 2001). Modification of the procedure with the placement of posterior fixation sutures has been reported to increase the amount of horizontal deviation corrected and improve the abduction forces (Foster 1997).
However, it has been shown that there is a risk of inducing a vertical tropia following transposition surgery (Ruth, Velez and Rosenbaum 2009) (Dagi and Elhusseiny 2020). Therefore, the aim of this study was to determine the prevalence of vertical tropia following horizontal transposition of the vertical rectus muscles (HToVR) in patients with Duane syndrome or sixth nerve palsy.
METHODS
Data were gathered from the notes of patients that had a diagnosis of type 1 Duane syndrome or unrecovered sixth nerve palsy, undergoing surgical intervention with HToVR muscles, between June 2003 and March 2012. The details of the surgical procedure have been previously described (Schillinger 1959). All patients had an identical surgical procedure by the same surgeon (PW); the vertical recti muscles were transposed to the lateral rectus along the spiral of Tillaux augmented with a Foster suture (Foster 1997). This suture improves the abducting vector forces of the transposed vertical recti. This involves two sutures: one for the transposed superior rectus (SR) and one for the transposed inferior rectus (IR). The suture includes a bite of inferior border of the SR, 7mm from the lateral rectus (LR) insertion and sutured to the sclera just above the LR superior border. Similarly, another suture includes a bite of the superior border of the transposed inferior rectus 7mm from the LR insertion and sutured to the sclera at the lower border of the LR. If botulinum toxin (BT) was used, it was injected via a trans-conjunctival route into the medial rectus at the time of the surgical procedure. BT is given to the medial rectus (MR) to reduce the pull of the MR while the transposed muscles heal into position.
Data collected included age at time of surgery, gender, diagnosis, laterality, preoperative angle of deviation (near and distance), type of surgery, postoperative angle of deviation (near and distance) at one week, three months and six months of follow-up, preoperative and postoperative binocular vision, and any additional postoperative surgery. All measurements were made using the prism cover test. Ethical approval was sought but deemed not necessary. The study complied with the principles of the Declaration of Helsinki.
RESULTS
There were 11 patients, with seven males and a mean age of 13 ± 14.79 years (range 5-55 years). There were eight adults and three children.
The prevalence of postoperative vertical deviation in this case series was 54% (6/11 patients) at the last follow-up: four patients had ipsilateral hypertropia, one patient had contralateral hyperphoria, and one had ipsilateral hypotropia. Of the eight patients with Duane syndrome, five (Patients 5, 6, 7, 9 and 11 in Table 1) had a vertical deviation (62%). In the group with sixth nerve palsy, Patient 1 (Table 1) already had an ipsilateral hypertropia prior to the transposition surgery.
The median vertical deviation in these patients was 6Δ (range 3Δ-9Δ) for near fixation and 7Δ (range 2Δ-12Δ) for distance fixation.
No patient reported diplopia following transposition surgery. Of the patients who developed a vertical deviation, one had a hyperphoria and was binocular (Patient 6), one was non-binocular (Patient 11), and the remaining patients (Patients 1, 3, 5 and 9) had a compensatory head posture. The compensatory head postures were reduced in most patients post-operatively. The degree of head posture was not measured preoperatively or post-operatively; only an observation by the clinician and patient was recorded in the clinical notes.
Eight patients (72%) achieved binocular single vision (BSV), determined by the presence of stereoacuity at their last follow-up appointment (range of follow-up time: 12-58 months after transposition surgery), with a median of 100 seconds of arc (range 40-400 seconds of arc). An additional three patients achieved BSV following HToVR. There were five patients who had BSV with a compensatory head posture prior to transposition surgery, and four of these patients retained BSV. Following further surgery (left MR recession) for the residual esotropia, Patient 2 (see Table 1) regained BSV.
Patients 1, 3, 5, 6, 8 and 9 also received intra-operative botulinum toxin during the surgery. One patient required further treatment, receiving further BT, and was subsequently lost to follow-up. Four patients (Patients 3, 5, 6 and 9) who received intraoperative BT developed an induced vertical deviation, compared with only one patient in the cohort who did not receive BT intraoperatively. None of these four patients required further treatment to correct the vertical deviation; only two patients (Patients 3 and 9) required treatment for the residual esotropia with further BT. The indication for botulinum toxin was to enhance the transposition effect during the early postoperative period. Of the patients who did not receive intraoperative BT (45%, 5/11 patients), three required further surgical treatment for the residual esotropia (ET). This suggests that the addition of intra-operative BT is associated with better outcomes in this group, owing to the lower risk of reoperation for the residual ET. One patient received HToVR on the other eye as the patient had bilateral Duane syndrome (Patient 11).
The re-operation rate was low following HToVR. Patients 2 and 10 had further adjustable recession of their medial rectus muscles on the same side as the transposition, for residual esotropia, which was done after an interval of six months to reduce the risk of anterior segment ischaemia as decided by the surgeon (PW).
Patients 2, 9 and 10 required postoperative BT (2.5 and 5.0 units); one adult and two children.
Prior to HToVR, four patients (Patients 1, 3, 8 and 11) had received BT injections and Patient 7 had undergone an MR recession. Patients 1 and 8, who had had previous BT, went on to receive another injection of BT intraoperatively.
Patient 3 developed a consecutive exotropia following HToVR. However, this patient did not require further surgery. At the last follow-up of six months, the patient had an intermittent exotropia measuring only 6Δ for near and distance fixation. Patient 4 developed a small exophoria measuring 6Δ and 5Δ for distance and near fixation, respectively.
DISCUSSION
The prevalence of postoperative vertical deviation in both groups of patients was 54% at the last follow-up: 62% (5/8) of patients with Duane syndrome developed a vertical deviation (three had hypertropia, one had hyperphoria and one had hypotropia). In patients with a sixth nerve palsy, 33% (1/3) had residual hypertropia. Elsewhere, 32% (Leiba et al. 2010) and 11% (Mehendale et al. 2012) of patients developed a vertical deviation. However, our study only had 11 patients in total compared to 22 (Leiba, Wirth, Amstutz and Landau 2010) and 17 (Mehendale, Dagi, Wu, Ledoux, Johnston and Hunter 2012) patients. This high rate of vertical deviation has also been reported more recently by Dagi and Elhusseiny (Dagi and Elhusseiny 2020), where 50% (4/8 patients) of their cohort developed a vertical deviation. Thirty-seven and a half percent of patients with a sixth nerve palsy (3/8) developed a vertical deviation and 12.5% (1/8) with Duane syndrome developed a vertical deviation following adjustable graded augmentation of superior rectus transposition with or without medial rectus recession (Dagi and Elhusseiny 2020).
The median vertical deviation in these patients was 6Δ (range 3Δ-9Δ) for near fixation and 7Δ (range 2Δ-12Δ) for distance fixation.
According to Leiba, Wirth, Amstutz and Landau (2010), the mean reduction in esotropic deviation was 30Δ ± 15.8Δ (range 6Δ-78Δ), compared with our study where the mean reduction was 11Δ and 6Δ for distance and near fixation, respectively. Elsewhere, a study demonstrated a reduction of 34Δ in the angle of the esotropia (Mehendale, Dagi, Wu, Ledoux, Johnston and Hunter 2012). They performed SR muscle transposition along with an MR recession on adjustable sutures. They augmented the SR muscle by placing a suture 8-12 mm from the SR muscle insertion. However, Leiba, Wirth, Amstutz and Landau (2010) opted for the more traditional option, performing a full tendon HToVR along with intra-operative BT injection to the ipsilateral MR, hence supporting the surgical technique used in our case series.
It would seem that transposing only the SR muscle has had an even greater effect on the reduction in the mean esotropia. This could be due to augmenting the transposition by also recessing the MR during the same procedure, as supported by Johnston, Crouch and Crouch (2006) and Dagi and Elhusseiny (2020).
Interestingly, no patient in our cohort reported vertical diplopia following their HToVR. Most patients had a residual compensatory head posture to account for their vertical deviation and one patient was not binocular. Similarly, Dagi and Elhusseiny (2020) described that their cohort of patients used a compensatory head posture; however, their patients did report vertical and torsional diplopia.
The limitations of our study are, firstly, the small number of patients from whom we were able to gather data, which may account for the higher incidence of vertical deviations, and secondly, the retrospective nature of the study, which limits the amount of information gathered from these patients at the time of treatment.
CONCLUSION
In our case series, we found the prevalence of vertical deviation following HToVR was 54%. None of the patients with an induced postoperative vertical deviation reported diplopia or required further surgery. Interestingly, we have shown that the postoperative BSV was restored as an additional three patients achieved BSV.
Coping strategies and health promotion through teaching-service integration in the context of the COVID-19 pandemic
In the current situation related to the 2019-nCoV β-coronavirus, the National Health Authorities have mandated the elaboration of contingency plans (CP) that minimize contagion and allow essential activities to continue functioning. The CP presented here defines a set of guidelines to adapt the response of a public university in the Northeast of Brazil linked to the Programa Mais Médicos para o Brasil. This is a descriptive and qualitative study, of the commentary type, based on the analysis of CP data for the definition of strategies for coping with public health emergencies. The CP consists of ten measures that include assistance via applications/social networks; monitoring of physicians who are at risk; screening of suspected/confirmed cases; production of guides/protocols; 24-h psychological/technical assistance to physicians working in primary health care; and provision of online courses. The proposed methodology provides models that differ from those usually presented in academia and is essential to promote health education.
Keywords: Coronavirus Infections; Health Promotion; Health Education.
The contingency plan (CP) for COVID-19 comprises ten intervention measures (Table 1), including the involvement of a psychologist as well as a partnership agreement with local psychology courses for the psychological support of the medical scholarship professionals directly linked to assistance.
Neto et al.6 demonstrated satisfactory results in reaching the public using social networks as a tool to promote health education. In this way, supervisors can technically prepare themselves to offer a second formative opinion to participating physicians, strengthening the teaching-service integration, which in health units consists of the integrated work of academics, professors, managers and professionals who make up health institutions, aiming to improve individual and collective care and to reorient the educational process and professional training in the health area.7 In addition, during pandemics, it is common for health professionals, scientists and managers to focus predominantly on the pathogen and biological risk, in an effort to understand the pathophysiological mechanisms involved and propose measures to prevent, contain and treat the disease.
In these situations, the psychological and psychiatric implications secondary to the phenomenon, both at the individual and collective levels, tend to be underestimated and neglected, creating gaps in coping strategies and increasing the burden of associated diseases.8 In this context, psychology plays an important role in the prevention of mental health problems among professionals; therefore, psychological shifts for guidance about mental health care, in addition to virtual psychological care, are of paramount importance.9 The sixth and seventh stages are the formation of partnerships with the local Regional Health Superintendence, as an intermediary between the municipalities and the PMMB, and with the Extension Project in Family and Community Medicine (Table 1), through which epidemiological bulletins are produced from the municipalities to the University, and guidelines for action in the UBS in response to COVID-19 are produced by University specialists for the municipalities. This approach facilitates the entry of clinical and demographic data and information, in order to combat fake news, myths and rumors about the outbreak of COVID-19. The advance in the use of social media as a means of information has brought with it the challenge of monitoring and responding quickly to false content disseminated on these channels. In this context, the growing movement to discredit traditional communication channels, which encourages adherence to alternative sources, also becomes a public health risk that must be faced. The communication of specialists cannot be restricted to the academic environment and professionals in the field.10 This alternative is also in line with the proposed guidelines for medical training in the SUS,11 because, by integrating an extension project directly with assistance, medical training is linked to a generalist, humanist, critical and reflexive stance, empowering physicians to work in the different health services at their different levels of care.

Table 1. Contingency plan (CP) intervention measures.

1. Conducting periodic virtual meetings involving tutors, supervisors and invited experts.
2. Training for supervisors regarding the management of COVID-19 and the flowchart for patient care in the UBS, provided by specialists in the region.
3. Adoption of a longitudinal supervision model offered to the program's participating physicians by the supervising institution.

Telemedicine as a way to support working groups:
4. Formation of a technical support team.
5. Formation of a psycho-emotional support team.

Formation of PMMB partnerships with local institutions:
6. Partnership with the Cariri Regional Health Superintendence as an intermediary with the municipalities linked to the program.

Production and summarization of knowledge about COVID-19 for local health departments:
7. Elaboration of guidelines for physicians and their health teams, subsidized by a technical note from the State Health Secretariat/CFM/AMB/ANVISA and adjusted by our team of specialists, for the care of users in the UBS in view of the risk of contamination with COVID-19.

Monitoring of physicians and health professionals who are in risk groups:
8. Mapping of the UBS where there are physicians from a risk group, and monitoring of cases of COVID-19 in the territory in which they operate, both among users and among health professionals.
9. Redistribution of supervisors in the territory in which they work, with the aim of improving the assistance provided to the physician students.
Regarding the last three stages (Table 1), the literature outlines that joint efforts by the State and the Universities are necessary to increase the hiring of professionals for support teams, institute action protocols for different social scenarios, ensure the inputs necessary to expand care, especially personal protective equipment, and guarantee training in the donning and doffing of protective equipment for all team professionals.13 It is worth noting that difficulties have been encountered in the application of the CP, such as a lack of resources for health promotion actions with the community; in some situations, there was difficulty in matching schedules with the students' curriculum and the supervisors' professional routine; in addition, the shift in operating scenarios is also challenging.
It is also important to note that there was no resistance to the offer of a second opinion from specialists. It is clear that, owing to the team's cohesion and longitudinal work over the past seven years, a collective commitment of tutors and supervisors to local primary care has been created, even though their training backgrounds differ. In fact, the second opinion system opened the way for the organization of pedagogical meetings using web conferences, with scheduling through a Google agenda, the sending of invitations, the moderation of activities between participants and speakers, and evaluation processes within the program.
At the same time, the local administrations received the support offered by the PMMB in a very comprehensive way because, in the current context of the COVID-19 pandemic, many protocols with different approaches and different institutions cause insecurity and make the preparation of contingency plans difficult. Thus, the support of a local team with technical knowledge of the demands of the teams assigned in line with the main national and international guidelines is essential.
The emergence of new diseases has impacts far beyond the cases and deaths it generates. It also creates a context that imposes on national public health systems the task of validating their health surveillance and assistance systems in terms of the opportunity for early detection and the capacity for a cascading response.10 Despite the state of pandemic and global alert, health surveillance actions based on the triple alliance between teaching, extension and management can be fundamental, especially when considering the particularities of actions in different regions of the world and in countries of large dimensions such as Brazil. Finally, it is noteworthy that the group continues to study possibilities to better assess the impacts of the CP in the assisted municipalities, and a research project aiming at this mapping is in preparation.
Organ Microcirculatory Disturbances in Experimental Acute Pancreatitis. A Role of Nitric Oxide
Summary

Microcirculatory disturbances are important early pathophysiological events in various organs during acute pancreatitis (AP). The aim of the study was to investigate the influence of L-arginine (a nitric oxide substrate) and N(G)-nitro-L-arginine (L-NNA, a nitric oxide synthase inhibitor) on organ microcirculation in experimental acute pancreatitis induced by four consecutive intraperitoneal cerulein injections (15 µg/kg/h). The microcirculation of the pancreas, liver, kidney, stomach, colon and skeletal muscle was measured by laser Doppler flowmetry. Serum interleukin 6 and hematocrit levels were analyzed. AP resulted in a significant drop of microperfusion in all examined organs. L-arginine administration (2x100 mg/kg) improved the microcirculation in the pancreas, liver, kidney, colon and skeletal muscle, and lowered hematocrit levels. L-NNA treatment (2x25 mg/kg) caused aggravation of edematous AP to a necrotizing form, and increased IL-6 and hematocrit levels. A further reduction of blood perfusion was noted in the stomach only. It is concluded that L-arginine administration has a positive influence on the organ microcirculatory disturbances accompanying experimental cerulein-induced AP. NO inhibition aggravates the course of pancreatitis.
Introduction
Hemodynamic shock is one of the initial events accompanying acute pancreatitis. Although the impairment of macrohemodynamic functions (cardiac output, mean arterial pressure) can easily be normalized by vigorous fluid replacement (Knol et al. 1987), persistent microcirculatory dysfunction may be detrimental in organs vulnerable to failure during shock, such as the liver, lung and kidney (Mulder et al. 1994), and may be a key element in the development of the pancreatitis-associated multiorgan dysfunction syndrome (Foitzik et al. 2000). The microcirculatory disturbances within the pancreatic capillary bed are believed to be a crucial factor in the evolution of pancreatitis from edema to necrosis (Zhou and Chen 2002, Strate et al. 2003).
Various vasoactive mediators, such as bradykinin, endothelin, thromboxane, the platelet activating factor, and nitric oxide participate in the development of microcirculatory failure (Zhou and Chen 2002). In the last decade, the beneficial effect of therapeutic strategies in acute pancreatitis, affecting vasoactive mediators, has been confirmed in several experimental studies. Recent evidence suggests that nitric oxide, due to its vasodilatory, anti-inflammatory, antiadhesive and anticoagulant properties (Werner et al. 1998a,b), appears to have a beneficial influence on the course of acute pancreatitis. The aim of this study was to evaluate the impact of L-arginine (a substrate for NO synthase) and N(G)-nitro-L-arginine (L-NNA, NO synthase inhibitor) on splanchnic malperfusion in experimental cerulein-induced acute pancreatitis.
Material and Methods
The study was carried out in 46 male Wistar rats weighing 180-200 g, kept on standard rat chow and fasted overnight before the experiment with water allowed ad libitum. Acute pancreatitis was induced by four intraperitoneal injections of cerulein (Cn) (Sigma, St. Louis, USA) (15 µg/kg) in 1 ml of saline at 1-hour intervals: at the beginning, and consecutively after the first, second and third hour of the experiment. Five hours after the first cerulein injection, rats were anesthetized with pentobarbital sodium (40 mg/kg). Following anesthesia, a laparotomy was performed, and a fiber optic probe of a laser Doppler flowmeter (Periflux 4001, Perimed Jarfalla, Sweden) was positioned against the surface of the pancreas, liver, kidney, stomach, colon and skeletal muscle of the thigh in order to investigate organ perfusion. Blood flow was measured in three different portions of each organ, and the mean values were calculated and expressed as a percentage of the basal values obtained in control rats (100 %). After the measurements, blood was aspirated from the inferior cava vein for hematocrit estimation and an interleukin 6 functional assay, as described previously (Dobosz et al. 1999), the pancreas was removed for microscopic evaluation, and the animals were exsanguinated.
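As a simple numerical illustration of this normalisation step (not from the original paper; the readings and control mean below are invented placeholders), a flow value can be averaged over the three measured portions and expressed relative to the control mean:

```python
def percent_of_basal(readings, control_mean):
    """Mean of repeated laser Doppler readings, expressed as % of the control group mean."""
    mean_flow = sum(readings) / len(readings)
    return 100.0 * mean_flow / control_mean


# Example: three flow readings (arbitrary perfusion units) from one pancreas,
# normalised against a hypothetical control mean of 210 units.
print(f"{percent_of_basal([78, 81, 75], control_mean=210):.1f} % of basal")
```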
The animals were randomly allocated into four groups: Group I (n=10) - control; Group II (n=12) - Cn-induced pancreatitis without treatment; Group III (n=12) - Cn-induced pancreatitis treated with L-arginine (Calbiochem, Lucerna), 2x100 mg/kg, given in the 1st and 2nd hour after the first Cn injection; Group IV (n=12) - Cn-induced pancreatitis treated with L-NNA (Calbiochem, Lucerna), 2x25 mg/kg, given in the 1st and 2nd hour after the first Cn injection.
Statistical analysis
Data are presented as means ± standard deviation (S.D.). The differences between the groups were analyzed by means of the ANOVA test. P<0.05 values were considered significant.
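For readers wishing to reproduce this type of between-group comparison, a minimal sketch of a one-way ANOVA in Python is shown below; the group values are invented placeholders, and the original analysis was not necessarily performed with this software.

```python
from scipy import stats

# Placeholder pancreatic perfusion values (% of basal) for the four groups.
control = [100, 98, 103, 101, 99]
ap_untreated = [36, 40, 35, 38, 41]
ap_l_arginine = [70, 74, 68, 75, 73]
ap_l_nna = [34, 37, 39, 33, 36]

f_stat, p_value = stats.f_oneway(control, ap_untreated, ap_l_arginine, ap_l_nna)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # P < 0.05 considered significant
```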
Results
Four intraperitoneal cerulein injections resulted in marked pancreatic edema with the collection of peritoneal exudates in all the animals. The microscopic examination revealed an edematous form of acute pancreatitis, with inter- and intralobular edema, vacuolization of parenchymal cells, and leukocyte infiltration within the pancreatic gland. No parenchymal necrosis was noticed. In the group of animals treated with L-arginine, besides vacuolization, glandular edema, and leukocyte infiltration, small foci of parenchymal necrosis were detected in two rats. In rats with AP receiving L-NNA, an aggravation of microscopic alterations, including necrosis and hemorrhages, was observed within the pancreatic gland.
Microcirculatory values
Cerulein-induced acute pancreatitis resulted in a significant drop of pancreatic microperfusion to 37±4 % of basal values. Administration of L-arginine significantly improved the microcirculation of the pancreas up to 72±10 %, whereas L-NNA did not further lower the pancreatic blood flow (Table 1). Hepatic perfusion in rats with pancreatitis receiving no treatment was decreased to 57±6 %; L-arginine injection raised this value to 76±7 %, while L-NNA had no effect on hepatic capillary flow. Renal blood flow in the group with pancreatitis was diminished to 45±6 %, which was improved significantly with L-arginine treatment up to 64±5 %; L-NNA administration did not influence renal perfusion. Microcirculatory values of stomach blood flow in group II with AP were reduced to 65±8 %; L-arginine treatment had no effect on this parameter, but L-NNA significantly decreased the stomach perfusion to 46±7 %. Cerulein-induced AP diminished the colonic blood flow to 70±6 %, which was augmented with L-arginine to 85±10 %; L-NNA injection had no effect on colonic microcirculation in AP. Skeletal muscle perfusion in animals with pancreatitis was significantly ameliorated after L-arginine administration from 59±3 % to 82±6 %; L-NNA did not change the value of skeletal muscle blood flow (Table 1).
Serum interleukin 6
Cn-induced acute pancreatitis caused a significant increase of serum IL-6 activity from 38±21 U/ml in control animals up to 359±66 U/ml. L-arginine administration had no effect on the IL-6 level, but L-NNA significantly increased this parameter to 409±44 U/ml (Table 2).
Discussion
In the present study, four intraperitoneal cerulein injections caused an edematous form of acute pancreatitis. The consequence of AP in rats was the reduction of capillary blood flow in the pancreas, measured by a laser Doppler flowmeter. The pancreatic microcirculatory disturbances which accompany experimental acute pancreatitis were confirmed by other authors, in both a mild edematous form of the disease and a severe necrotizing one (Konturek et al. 1994, Liu et al. 1995, Schmidt et al. 2002, Strate et al. 2003).
The disturbances of microcirculation in acute pancreatitis are not only confined to the pancreatic capillary bed, but are also observed in other organs (Skoromnyi and Starosek 1998, Foitzik et al. 2002). It was suggested that diffuse microcirculatory disorders may play a crucial role in the development of the pancreatitis-associated multiorgan dysfunction syndrome; some authors even define severe AP as a systemic dysfunction syndrome (Foitzik et al. 2000).
The present study confirms these data. Besides the pancreas, reduced capillary perfusion was observed in the liver, kidney, stomach, colon, and skeletal muscle; however, the drop in perfusion of other organs was not so pronounced as in the pancreas. This suggests that in pancreatitis, the pancreatic gland is especially susceptible to microcirculatory disorders. Kinnala et al. (2002) found that splanchnic malperfusion begins with pancreatic hypoperfusion before disturbances in gut microcirculation. On the other hand, Hotz et al. (1998) noted that in mild pancreatitis, pancreatic capillary perfusion remained unchanged, whereas mucosal and subserosal colonic capillary blood flow was significantly reduced. They also demonstrated that severe pancreatitis was associated with a marked reduction in both pancreatic and colonic capillary perfusion.
Microcirculatory disturbances in AP comprise many components: decreased capillary blood flow and capillary density, increased capillary permeability, and enhanced leukocyte-endothelial interaction (Foitzik et al. 2000). It is still not clear which of these factors is the initiating one or the most important. It seems to be logical that any effort to improve the microcirculation may be beneficial for all organs, irrespective of the underlying triggering mechanism.
Several studies documented a positive impact of various therapeutic agents on the AP course, improving tissue perfusion: dextran (Klar et al. 1993), pentoxifylline (Gomez-Cambronero et al. 2000), heparin (Dobosz et al. 1999), bovine hemoglobin (Strate et al. 2003), ICAM-1 monoclonal antibodies (Werner et al. 1998a,b), and endothelin receptor antagonist (Plusczyk et al. 2003). In the current study, the intraperitoneal L-arginine administration (substrate for NO synthase) significantly augmented capillary blood perfusion of all the examined organs, except the stomach. The improvement of pancreatic microperfusion should have a positive influence on microscopic alterations within the pancreas. Contrary to other observations (Konturek et al. 1994, Liu et al. 1995), we observed focal pancreatic necrosis in rats receiving L-arginine. In a recent study using the same model, we analyzed microscopic alterations within the pancreas by means of histological grading (Dobosz et al. 1999). The scoring revealed a slightly higher vacuolization rate of acinar cells, leukocyte infiltration and necrosis, although the differences were not significant in comparison to the acute pancreatitis group without treatment. The deterioration of morphological changes of pancreatic parenchyma in rats receiving L-arginine could be explained by the intraperitoneal drug administration, which might result in an excessive local NO concentration and cytotoxic peroxynitrite production (Beckman et al. 1990). This phenomenon could also explain why we did not observe a decrease of IL-6 concentration in spite of improved pancreatic blood flow.
It was shown that a significant number of adherent leukocytes had been observed in hepatic microcirculation two hours after AP induction (Chen et al. 2001). The L-arginine administration, due to the antiadhesive properties of NO (Werner et al. 1998a,b), could prevent neutrophil adhesion to hepatic capillaries and improve hepatic perfusion noted in our study. It was suggested in another study that hepatic microcirculatory improvement ameliorated phagocytic Kupffer cell function in the liver (Forgacs et al. 2003).
The pathophysiology of renal insufficiency, which is an often observed complication of acute pancreatitis, is heterogeneous. The improvement of renal blood flow observed after L-arginine treatment in rats with pancreatitis could prevent this complication. It was found in severe acute pancreatitis that endothelin receptor blockade, besides the enhancement of pancreatic perfusion, also improved renal function (Foitzik et al. 2000).
Decreased capillary blood flow in the colonic mucosa is associated with impaired gut barrier function and increased translocation of live bacteria through the morphologically intact colonic wall (Foitzik et al. 1997).
The present study revealed that L-arginine treatment in the group with pancreatitis significantly improved the altered microperfusion of the colonic wall. This suggests that nitric oxide may play a role in the prevention of secondary pancreatic infection. It was shown that NO substrates limit bacterial translocation and pancreatic inflammation associated with AP, probably by their bactericidal actions and ability to improve pancreatic blood flow (Cevikel et al. 2003).
Besides organ microcirculatory improvement, L-arginine administration diminished the hematocrit level. The beneficial influence of L-arginine on hematocrit levels in group III may also suggest that nitric oxide restricts capillary permeability not only in the pancreatic gland and prevents fluid escape into the extracellular space. Therefore, nitric oxide therapy may make it possible to avoid the vigorous fluid resuscitation observed in patients with AP. It was shown that L-arginine concentrations are depleted in the serum of patients with acute pancreatitis (Sandstrom et al. 2003), so this kind of therapy seems to be justified.
A protective effect of nitric oxide in acute pancreatitis has also been observed in other recent studies. It was suggested that NO protects against tissue injury in AP, acting indirectly via microcirculatory changes, including inhibition of leukocyte activation and preservation of capillary perfusion (Werner et al. 1998b). A protective role of endogenous NO against oxidative damage to subcellular fractions was noted by Sanchez-Bernal et al. (2004). Other authors found that glyceryl trinitrate and L-arginine treatment significantly attenuated damage of the pancreatic gland and augmented cell proliferation after AP (Jurkowska et al. 1999).
Although the microcirculatory values in rats with AP after L-NNA injection decreased significantly in the stomach only, inhibition of NO synthase in the current study resulted in aggravation of the course of acute pancreatitis. Microscopic examination revealed the development of a severe necrotizing form of the disease, with serum interleukin 6 and hematocrit levels being increased. Similar observations concerning a negative impact of NOS suppression were made by other authors, including reduction in pancreatic blood perfusion, decreased pancreatic tissue oxygenation, deterioration of inflammatory changes and growth of proinflammatory cytokine levels (Konturek et al. 1994, Liu et al. 1995, Werner et al. 1998a,b).
In summary, these data suggest that in the early period of AP, nitric oxide, by maintaining the splanchnic microcirculation, plays an important role in the pathophysiological events of the disease. However, the short time of observation does not allow clear conclusions about the beneficial effect of L-arginine on AP. The inhibition of NO seems to be deleterious and enhances the progression of the disease.
Table 1. Microcirculatory values of the pancreas, liver, kidney, stomach, colon and skeletal muscle. Mean values ± SD. a P<0.05 in comparison to the control group; b P<0.05 in comparison to the acute pancreatitis group.
Table 2. Interleukin 6 (U/ml) and hematocrit (%) values in the experimental groups. Mean values ± SD. a P<0.05 in comparison to the control group; b P<0.05 in comparison to the acute pancreatitis group.
The Currency Carry Trade: Selection Skill or Behavioral Bias
Many attempts have been undertaken to solve the forward premium puzzle with little to no success. The global currency market is considered the most information-efficient and transparent of all financial markets since it demonstrates a balance between over- and under-reaction to information with remarkable consistency. The Efficient Market Hypothesis espouses that investors cannot systematically outperform a benchmark since all investors have access to the same information. Therefore, the expected long-term rate of return for currencies is essentially zero. The Arbitrage Pricing Theory asserts that investment returns are random. As such, traders cannot avail themselves of mispriced currencies. The assertion of Uncovered Interest Rate Parity is that the bi-national interest rate variance is equal to the expected differential in exchange rates. This paper asks the following questions: does alpha persistence exist in currency carry trade funds, or are its excess returns merely a collection of behavioral biases?
Introduction
For investors, risk is unequivocally linked to the behavioral trait of loss aversion; that is, investors are more conscious of losses than gains (Berkelaar, Kouwenberg, & Post, 2004). It is also associated with downside risk; that is, the loss of any portion of the initial investment (Ang, 2006). Despite extreme price volatility and a noisy trading environment, proponents of the currency carry trade believe the high level of risk is justified by its return. Fund managers seek to minimize downside risk and earn returns which are the result of either beta or alpha. Beta is merely the return granted from exposure to the market (Kung & Pohlman, 2004). Alpha is selection skill-the result of successful exploitation of market inefficiencies and behavioral biases. Currency carry fund managers actively trade long and short futures and forward contracts on various currencies to capitalise on currency price and interest rate volatility (Hudson, 2008). Therefore, the following must be considered: Do currency carry trade funds manifest evidence of alpha or, as Holmes (2009) posits, are these funds merely a collection of risky biases with associated downside risk?
This overview contributes to the existing literature as it places the evidence in context and provides a survey of current literature and discussion of important theories. Additionally, it conducts, presents and reviews the results of an investigative study of two currency carry trade funds, the PowerShares G-10 Currency Harvest Fund Exchange Traded Fund and the iPath Optimized Currency Carry Exchange Traded Note, in an attempt to determine the existence of performance persistence present in either fund.
The remainder of this paper is structured as follows: Section Two surveys the relevant literature; Section Three describes the data and provides descriptive statistics; Section Four presents a comparative analysis of the empirical results; and Section Five offers a summary and concluding remarks.
Literature Survey
The currency market is the largest and most liquid financial market in the world (Harvey & Huang, 1991; Lequeux & Acar, 1998) with average daily turnover of USD 5.3 trillion. No other financial market better meets Fama's (1970, 1998) isomorphic requirements of market efficiency: at any given time, prices reflect information available to all market participants with no natural short-selling constraints. Currency trading is the quintessential example of a zero-sum game; for every long position, there is a short one. Yet if true, how can traders earn systematic profits?
Copeland (2014) describes the currency carry trade as the ability to lend in a high interest rate currency such as the New Zealand Dollar (NZD) financed by borrowing in a low interest rate currency such as the Japanese Yen (JPY) at rates approaching zero; that is, borrow low, lend high. 'Carry' is the result of a positive interest rate differential between the two currencies; that is, the decline in the value of the low-yield currency relative to the high-yield currency. Currency traders are not rational investors but instead rational economic actors (Hardie & MacKenzie, 2007) desiring wealth but not the work necessary to attain it. They are essentially arbitrageurs seeking riskless profits at no cost. Their actions result in a forced endurance of very high downside risk and interest rate fluctuations, both of which threaten profits and significantly increase the chance of forced position unwinding. Lustig, Roussanov, and Verdelhan (2011) insist investors are not exposed to any country- or currency-specific risk as a result of carry trades. Instead, the investor bears foreign exchange risk, not sovereign risk (Daniel, et al., 2014). The risk/return profile of currency trading is determined by the prime interest rate listed by various central banks, the result of which is forward rate bias. In November 2014, the Bank of Japan announced a significant increase in its aggressive quantitative easing program with the aim of maintaining near-term interest rates at zero percent. Coupled with the current monetary policies of the Reserve Bank of New Zealand, the NZD/JPY currency pair exists as a synthetic, risk-free asset. Profit occurs when the New Zealand Dollar rises against the Japanese Yen. Trade in the NZD/JPY pair is now consistent with Siegel's Paradox (1972) as risk management is no longer the prime motivator.
Pojarliev and Levich (2008) define alpha in the context of currency trading as returns in excess of "transparent and readily implemented currency trading strategies". Alpha measures the selection risk assumed by a currency carry trade fund manager-it measures risk-adjusted performance. Risk factoring is due to the specific currency pair traded, rather than the overall market. Positive alpha is the additional return awarded for the assumption of additional risk rather than accepting market returns. Currencies are considered to be a zero-beta asset (Burnside, et al., 2007) with erratic returns minimally correlated with stocks and bonds. Unlike debt and equity securities where profits depend solely on price appreciation, opportunities for excess returns in currency trading exist in both rising and falling markets (Liang, 2004).
The Efficient Market Hypothesis (EMH) postulates that, absent inside information, alpha generation is impossible (Fama, 1970). In essence, forward rate bias is a clear rejection of the EMH. The very existence of successful currency carry trades is a violation of Ross's (1976a, 1976b) Arbitrage Pricing Theory (APT), which is based on the law of one price and contends that no security exists which has a zero price and a non-negative payoff. According to the implication of no-arbitrage, profitable trades on the NZD/JPY currency pair are a monetary illusion; the carry premiums should disappear. Yet arbitrage opportunities do exist and are exploited by irrational traders, leading to what is termed by DeLong, et al. (1990) as noise trader risk; that is, the risk that arbitrage opportunities exploited by irrational traders disappear, leading to large losses.
In keeping with interest rate parity, significant abnormal returns do not occur from lending or borrowing a currency at a foreign or domestic interest rate (Egbers, 2013). Therefore, a rational investor would be indifferent to the inevitable convergence of available interest rates. Should the foreign interest rate be higher than the domestic interest rate, the interest rate differential is compensated by a lower forward exchange rate. The mathematical statement of Covered Interest Parity is F/S = (1 + i_d)/(1 + i_f), where F is the forward exchange rate, S is the spot exchange rate, i_d is the domestic interest rate and i_f is the foreign interest rate. There is general agreement amongst academic finance researchers that uncovered interest rate parity does not hold (Alexius, 2001; Anker, 1999; Chinn & Meredith, 2004; Chortareas & Driver, 2001; Frachot, 1996). The failure of Uncovered Interest Rate Parity is one of the primary inducements for the currency carry trade and has come to be known as the forward premium puzzle: the notion that currencies of high interest rate countries appreciate relative to currencies of low interest rate countries (Jylha & Suominen, 2009). Furthermore, academic finance research has shown UIP tends to fail at time horizons less than five years (Gyntelberg & Remolona, 2007).
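To make the parity condition and the carry mechanism concrete, the following minimal sketch computes the CIP-implied forward rate and a naive carry return; all rates and the spot move are illustrative placeholders, not values used in this study.

```python
def cip_forward_rate(spot, i_domestic, i_foreign, tenor_years=0.25):
    """Forward rate implied by covered interest parity.

    spot is quoted as domestic currency per unit of foreign currency;
    interest rates are annualised decimals; tenor_years is the contract tenor.
    """
    return spot * (1 + i_domestic * tenor_years) / (1 + i_foreign * tenor_years)


def naive_carry_return(i_high, i_low, spot_change, tenor_years=0.25):
    """Approximate carry return: interest differential plus the spot appreciation
    of the high-yield currency (transaction costs and margin are ignored)."""
    return (i_high - i_low) * tenor_years + spot_change


# Hypothetical 3-month JPY-funded NZD position; spot quoted as JPY per NZD.
forward = cip_forward_rate(spot=80.0, i_domestic=0.001, i_foreign=0.035)
carry = naive_carry_return(i_high=0.035, i_low=0.001, spot_change=0.01)
print(f"CIP-implied 3-month forward: {forward:.3f} JPY per NZD")
print(f"Naive 3-month carry return: {carry:.4%}")
```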
Leading economic models fail to explain the forward premium puzzle (Bansal, 1997). In practice, currency markets contain pockets of inefficiency, arbitrage opportunities exist (Sarno & Taylor, 2002), and uncovered interest parity does not hold. Therefore, an important question is whether or not alpha persistence exists in the currency carry trade.
Data
A backtest was performed using time series data collected from 2 October 2006 to 28 November 2014, the longest period for which data were readily available. The study used daily time series returns for the following: Australian Dollar (AUD), British Pound Sterling (GBP), Canadian Dollar (CAD), New Zealand Dollar (NZD), and United States Dollar (USD) relative to the Japanese Yen (JPY). The interest rate of each currency was the 3-month London Interbank Offered Rate (LIBOR) from the British Bankers Association.
Returns for these currencies were retrieved from Oanda.
The alternative investment industry often argues performance should not be compared to an absolute return target (Anson, 2001). Nonetheless, for valuation purposes it is reasonable to compare the returns of currency carry trade funds versus a performance index. The currency carry trade index reviewed here (AFX Currency Management Index) represents the average performance of active, trend-following currency managers. The index replicates the trading actions of an active manager and provides a more realistic benchmark for active currency traders (Laws, n.d.). The AFX Currency Management Index uses moving averages of 32, 61, and 117 days. It serves as proxy for a currency carry trade benchmark.
The Deutsche Bank DB G10 Currency Future Harvest Index - Excess Return (ticker symbol: DBCFHX) employs a strategy which is long three currency futures contracts with the highest interest rates and short three currency futures contracts with the lowest interest rates. The currencies considered are: British Pound Sterling (GBP), New Zealand Dollar (NZD), Canadian Dollar (CAD), Australian Dollar (AUD), United States Dollar (USD), Japanese Yen (JPY), the Euro (EUR), Norwegian Krone (NOK), Swedish Krona (SEK), and Swiss Franc (CHF). The PowerShares G10 Currency Harvest Fund Exchange Traded Fund (ticker symbol: DBV) directly tracks DBCFHX and serves as proxy for the first carry factor. Nominal results were collected from Macroaxis.
The iPath Optimized Currency Carry Exchange Traded Note (ticker symbol: ICI) directly tracks the Barclays Optimized Currency Carry Index and serves as a second proxy for the currency carry factor. Nominal results were collected from Macroaxis.
Empirical Strategy
The time series data were analysed using the following statistical tests.
Skewness
A test for skewness was performed to determine return distribution characteristics. Skewness is a measure of the degree of asymmetry of a distribution around the mean. A normal (that is, Gaussian) distribution is symmetric with a skewness value of zero.
Kurtosis
A test for kurtosis was performed to determine the normality of the data. Kurtosis measures the concentration of data at the tails of the distribution. It compares the relative flatness or peakedness of a particular distribution with that of a normal distribution. Positive kurtosis is characterized by a peaked, or leptokurtic, distribution; negative kurtosis indicates a relatively flat distribution. Distributions with high levels of kurtosis are known as fat-tailed and are non-Gaussian (Fung & Hsieh, 1997).
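The two diagnostics can be computed directly from a daily return series; the sketch below uses scipy.stats on a randomly generated placeholder series rather than the funds analysed here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, size=2000)  # placeholder return series

# Under the Fisher definition, a normal distribution has skewness 0 and excess kurtosis 0.
skew = stats.skew(daily_returns)
excess_kurtosis = stats.kurtosis(daily_returns, fisher=True)

print(f"skewness: {skew:.3f}")
print(f"excess kurtosis: {excess_kurtosis:.3f}")
```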
Tracking Error
As defined by Tobe (1999), tracking error is used in evaluating active manager risk. It is most commonly calculated using the standard deviation of the difference between index and portfolio returns, i.e. the standard deviation of excess returns. Low tracking error suggests the fund manager is closely following the index. One could surmise high tracking error is not an indicator of a fund manager's ability to consistently generate a positive alpha. Instead, it is indicative of his attempts to maximize alpha. The reverse is true of a passively managed portfolio; high tracking error is neither desired nor acceptable. Tracking error is represented mathematically as TE = σ(R_p - R_b), where R_p is the portfolio return, R_b is the benchmark return, and σ denotes the standard deviation of the excess returns.
Sharpe Ratio
Standard deviation is the most common measure of risk. It measures the variability of returns from the average return-that is, the volatility of the return stream. The assumption is that the higher the volatility, the higher the risk. Its usefulness as a comparative measure of risk is predicated on the assumption the investments being compared share similar return distributions. It assumes a Gaussian distribution, interpreting any difference from the mean, above or below, as risk. As a result, upside volatility, which is used to accomplish investment objectives, is penalized because it is equated with value-destroying downside volatility. The Sharpe Ratio is a univariate measure commonly applied to analyse returns in conjunction with the risks taken to achieve those returns; it quantifies excess return per unit of risk and is represented mathematically as (R_p - R_f)/σ_p, where R_p is the expected portfolio (asset) return, R_f is the risk-free rate of return, and σ_p is the portfolio (or asset) standard deviation. The ratio thus considers mean and variance statistics appropriate for a Gaussian distribution.
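A minimal sketch of how tracking error and the Sharpe Ratio could be computed from aligned periodic fund and benchmark returns follows; the annualisation convention and the risk-free rate are assumptions for illustration only.

```python
import numpy as np


def tracking_error(portfolio, benchmark, periods_per_year=252):
    """Annualised standard deviation of the excess returns over the benchmark."""
    excess = np.asarray(portfolio) - np.asarray(benchmark)
    return excess.std(ddof=1) * np.sqrt(periods_per_year)


def sharpe_ratio(portfolio, risk_free_rate=0.0, periods_per_year=252):
    """Annualised excess return over the risk-free rate per unit of total volatility."""
    r = np.asarray(portfolio)
    excess = r - risk_free_rate / periods_per_year
    return excess.mean() / r.std(ddof=1) * np.sqrt(periods_per_year)
```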
Sortino Ratio
Sharpe's measure is widely seen as an oversimplification of risk. Unlike the Sharpe Ratio, which utilizes standard deviation, the Sortino Ratio quantifies the risk-adjusted return of a portfolio through the use of downside risk. Downside risk is considered to be the standard deviation of the returns below a minimum acceptable return. When return distributions are near symmetrical and the target return is close to the distribution median, the two measures will yield similar results. However, as skewness increases and target returns vary from the median, the results are very different. The Sortino Ratio is represented mathematically as (R_p - MAR)/DD, where R_p is the expected portfolio (asset) return, MAR is the required (minimum acceptable) rate of return, and DD is the downside deviation (that is, the square root of the semi-variance).
Jensen's Alpha
According to Michael Jensen (1968), portfolio managers who accurately predict major changes in the market or identify undervalued assets earn higher returns. Jensen's alpha quantifies the extent to which an investment contributes value added relative to a benchmark. A positive alpha implies a manager has the ability to earn excess returns as opposed to returns due purely to random selection (Jensen, 1969). Jensen's measure is calculated as α = R_p - [R_f + β(R_m - R_f)], where R_f is the risk-free rate in the corresponding period, R_m is the market return in the period, R_p is the portfolio return in the period, and β is the portfolio's sensitivity to the market.
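The following sketch shows one way the Sortino Ratio and Jensen's alpha could be computed; the OLS beta estimate, the minimum acceptable return and the annualisation factor are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np


def sortino_ratio(returns, mar=0.0, periods_per_year=252):
    """Excess return over the minimum acceptable return, divided by downside deviation
    (the square root of the semi-variance of returns below the MAR)."""
    r = np.asarray(returns)
    downside = np.minimum(r - mar, 0.0)
    downside_deviation = np.sqrt(np.mean(downside ** 2))
    return (r.mean() - mar) / downside_deviation * np.sqrt(periods_per_year)


def jensens_alpha(portfolio, market, risk_free=0.0):
    """alpha = mean(Rp - Rf) - beta * mean(Rm - Rf), with beta estimated by OLS."""
    rp = np.asarray(portfolio) - risk_free
    rm = np.asarray(market) - risk_free
    beta = np.cov(rp, rm, ddof=1)[0, 1] / np.var(rm, ddof=1)
    return rp.mean() - beta * rm.mean()
```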
Information Ratio
A large, positive information ratio is evidence that a fund manager consistently achieves excess returns. The information ratio is essentially a measure of risk-adjusted alpha. The Information Ratio establishes whether or not a reported positive alpha is statistically significant from zero or merely a random occurrence. It offers a summary of the mean-variance qualities of a portfolio (Markowitz, 1952; 1959). Gupta, Prajogi, and Stubbs (1999) consider it the strongest predictor of performance persistence. The Information Ratio is represented mathematically as IR = (R_p - R_b)/TE, where R_p is the portfolio return, R_b is the benchmark return, and TE is the tracking error (the standard deviation of the excess returns).
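As a small sketch under the same illustrative conventions as above, the Information Ratio can be obtained by dividing the mean active return by the tracking error:

```python
import numpy as np


def information_ratio(portfolio, benchmark, periods_per_year=252):
    """Annualised mean active return divided by annualised tracking error."""
    excess = np.asarray(portfolio) - np.asarray(benchmark)
    te = excess.std(ddof=1) * np.sqrt(periods_per_year)
    return excess.mean() * periods_per_year / te
```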
Empirical Results and Analysis
A nominal listing of all empirical results is located in Tables 1 and 2. The risk-adjusted return for the DB G10 Currency Future Harvest Index was 0.05% compared to the PowerShares G10 Currency Harvest Fund Exchange Traded Fund (DBV), which was 2.95882%. The tracking error was 2.7322%, indicative of the fund manager's attempt to outperform the DB G10 benchmark. The risk-adjusted return of the iPath Optimized Currency Carry Exchange Traded Note (ICI) is -0.12, with a tracking error of 2.9%, indicative of the fund manager's attempt to outperform the Barclays Optimized Currency Carry Index benchmark. The graphical representations of the skewness and kurtosis values are contained in Figures 1, 2, and 3 in the Appendix. In a negatively skewed distribution, scores are concentrated at the high end of the scale. The currency carry trade exhibits negative skewness (the returns to the left of the mean are fewer yet lie a greater distance from the mean). For the study period, the returns of the currency pairs were negative or close to zero, a finding consistent with Brunnermeier, Nagel, and Pedersen (2008) as well as Burnside, et al., (2011). The skewness result for DBV is -0.57284 and -0.29 for ICI. These skewness values are acceptable, denoting an approximately Gaussian return distribution. The kurtosis statistic is 0.28968 for DBV, which is close to zero and suggestive of a normal distribution with a platykurtic shape. Kurtosis for ICI is 4.76, suggesting a leptokurtic, non-Gaussian distribution.
Currency carry trades are generally known to exhibit high Sharpe Ratios (Burnside, et al., 2011). Yet the Sharpe Ratio results of -0.5309 for the PowerShares G10 Currency Harvest Fund Exchange Traded Fund (DBV) and -0.4255 for the iPath Optimized Currency Carry Exchange Traded Note (ICI) differ from that notion. According to the Sharpe Ratio, both currency carry trade funds performed worse than the risk-free security; that is, the 3-month U.S. Treasury Bill. The Sortino Ratio results are -0.4723 for DBV and 0.067 for ICI. The cumulative results of negative Sharpe and Sortino ratios, negative skewness, and positive, near-zero kurtosis imply both funds have significant downside risk with minimal upside gain. Jensen's Alpha is measured at -0.26 for DBV and -0.18 for ICI. Adjusting these results for negative market beta confirms that the risk assumed by both fund managers is unjustified.
The Information Ratio result is -0.45 for DBV and -0.12 for ICI, revealing that there is, at best, below average skill demonstrated by the managers of both currency carry trade funds. Investor sentiment was 0% for ICI; when compared to other exchange traded funds, 100% of investors do not wish to have the iPath Optimized Currency Carry Exchange Traded Note (ICI) in their portfolios. The PowerShares G10 Currency Harvest Fund Exchange Traded Fund (DBV) did not fare much better. At an investor sentiment rating of 0.030, 97% of investors would rather purchase another exchange traded fund. These results are indicative of a significant agency problem.
Investors in both funds are reliant on the respective fund managers for financial expertise, yet the results achieved do not demonstrate positive performance persistence.
Biases
The information retrieved from the database vendors is assumed accurate. However, survivorship and instant history biases (that is, backfilling or backtesting) are present in the dataset. Since currency carry trade funds report results to database vendors on a voluntary basis, primarily for their own marketing purposes, these biases are inevitable.
Retrodiction is a form of hindsight bias prevalent in backtesting and has an influence on the analysis of results.
The researcher essentially sees what he believes he knew all along. He then makes predictions based on the past and then uses those results to predict future investment returns. Since the outcome has not yet occurred, retrodictions cannot be measured; it is not possible to determine distinct chances of initial occurrences from knowledge of the final state.
The spread in the returns of the five currency pairs is not consistent with traditional risk factors. It is consistent with the noisy signal of over- and under-reaction to new information, a behavioral trait rampant in trend-following/momentum trading strategies.
Limitations of the Study
No consensus exists within the literature as to which economic variables are most highly correlated with changes in currency exchange markets. Meese and Rogoff (1983) reason currency forecasting models cannot outperform a coin toss. As such, it is difficult to identify appropriate quantitative variables to measure. Unlike the stock market, there is no consensus amongst currency traders as to a proper benchmark for the currency market-essentially, no market portfolio exists, just a collection of long and short positions. This study utilized the AFX Currency Management Index as a proxy benchmark for currency carry trades, which is, unfortunately, incorporated in only a few empirical studies. The Index makes subjective decisions regarding its currency composition, weighting schemes, and rebalancing criteria. As such, the robustness of this study will be difficult to measure against other relevant academic finance research.
Concluding Remarks
Currency carry trades create new pricing opportunities and further ensure market transparency. Whether an investor purchases available currency carry funds or buys currency pairs directly, the amount of profit generated will be limited due to high transaction costs, further exacerbated by the constant rebalancing required to implement and maintain a momentum strategy. Standard risk measures cannot account for the excess returns seen in currency carry trading.
The currency carry trade is seen as a skill-based strategy. As such, traders choose not to compare their performance to an index, which is understandable due to the lack of a generally accepted index similar to the FTSE 100 or S&P 500. The general notion of portable alpha and its overlay potential relative to the currency carry trade is flawed. This study finds what appear to be occurrences of 'incidental' alpha, perhaps the result of the peso problem (Burnside, et al., 2011); that is, a small probability of a large change in interest rates and vice versa. Backtesting the five currency pairs yielded either negative alpha or positive alpha of no significance. An investor would have fared no better using the PowerShares G10 Currency Harvest Fund Exchange Traded Fund or the iPath Optimized Currency Carry Exchange Traded Note than choosing either fund's constituent currency pairs on his own.
An investor in either fund submits himself to significant noise trader risk. Fund managers invest on behalf of their clients; their own capital is not at risk, resulting in what Shleifer and Vishny (1997) deem a "separation of brains and capital". Instead of alpha persistence, this paper concludes there is instead evidence of wishful thinking (Weinstein, 1980) from the managers of both currency carry trade funds. Consistent with the aforementioned behavioral bias, both funds extrapolate past results as a guide to future performance-a clear portrayal of hindsight bias. These biases lead fund managers to believe they possess extraordinary skills, which in reality do not exist.
This investigation yielded no evidence of either performance persistence or portable alpha. The PowerShares G10 Currency Harvest Fund Exchange Traded Fund and the iPath Optimized Currency Carry Exchange Traded Note both yielded overall negative alpha, indicating neither manager manifested skill. These findings could be due to data selection. Nevertheless, the results of this study are consistent with the generalized notion that currencies are a zero sum game; excess returns are incidental and attributable to market beta. The return characteristics of currency carry trades are such that excess systematic returns are earned in exchange for enduring significant downside risk and high volatility.
The forward premium puzzle remains unresolved.
SUMO and SUMO-Conjugating Enzyme E2 UBC9 Are Involved in White Spot Syndrome Virus Infection in Fenneropenaeus chinensis
In previous work, small ubiquitin-like modifier (SUMO) in hemocytes of the Chinese shrimp Fenneropenaeus chinensis was found to be up-regulated post-white spot syndrome virus (WSSV) infection using a proteomic approach. However, the role of SUMO in viral infection is still unclear. In the present work, full-length cDNAs of SUMO (FcSUMO) and SUMO-conjugating enzyme E2 UBC9 (FcUBC9) were cloned from F. chinensis using the rapid amplification of cDNA ends approach. The open reading frame (ORF) of FcSUMO encoded a 93 amino acid peptide with a predicted molecular weight (MW) of 10.55 kDa, and the UBC9 ORF encoded a 160 amino acid peptide with a predicted MW of 18.35 kDa. By quantitative real-time RT-PCR, higher mRNA transcription levels of FcSUMO and FcUBC9 were detected in hemocytes and ovary of F. chinensis, and the two genes were significantly up-regulated post WSSV infection. Subsequently, the recombinant proteins of FcSUMO and FcUBC9 were expressed in Escherichia coli BL21 (DE3) and employed as immunogens for the production of polyclonal antibodies (PAbs). Indirect immunofluorescence assay revealed that the FcSUMO and UBC9 proteins were mainly located in the nuclei of hemocytes. By western blotting, a 13.5 kDa protein and an 18.7 kDa protein in hemocytes were recognized by the PAb against SUMO or UBC9, respectively. Furthermore, gene silencing of FcSUMO and FcUBC9 was performed using RNA interference, and the results showed that the number of WSSV copies and the viral gene expression were inhibited by knockdown of either SUMO or UBC9, and the mortalities of shrimp were also reduced. These results indicated that FcSUMO and FcUBC9 play important roles in WSSV infection.
Introduction
Small ubiquitin-like modifiers (SUMO) are a family of small proteins that can be covalently attached to and detached from other proteins in cells to modify their functions. The process of covalent and reversible attachment of a SUMO moiety to a target protein is known as SUMOylation, which is an important post-translational modification involved in various cellular processes [1][2][3]. Although the amino acid sequence of SUMO is similar to that of ubiquitin, SUMOylation does not typically lead to degradation of the substrate and instead has a more diverse array of effects on substrate function, such as nuclear-cytosolic transport, transcriptional regulation, apoptosis, protein stability, response to stress and antiviral defense [4,5]. In mammalian cells, four SUMO family members have been identified, namely SUMO-1, -2, -3 and -4, whereas in invertebrates there is only a single SUMO gene [6,7]. The conjugation of SUMO to target proteins involves three classes of enzymes, the E1 activating enzyme, the E2 conjugating enzyme and the E3 target specificity enzyme [8], and Ubc9 is the only SUMO E2 enzyme known to conjugate SUMO to target substrates [9][10]. Ubc9 serves as a lynchpin in the SUMO conjugation pathway, interacting with the SUMO E1 during activation, with thioester-linked SUMO after E1 transfer, and with the substrate and SUMO E3 ligases during conjugation [11].
White spot syndrome virus (WSSV) is one of the most devastating viral pathogens in shrimp and has caused considerable economic losses to the shrimp culture industry worldwide [12]. In our previous research, SUMO in hemocytes of the Chinese shrimp Fenneropenaeus chinensis was found to be significantly up-regulated at both the mRNA and protein levels post WSSV infection [13]. A recent study demonstrated that WSSV immediate early (ie) proteins could be modified by crayfish SUMOylation, and that this modification would benefit WSSV replication [1]. All these results implied that SUMO and UBC9 play important roles in WSSV infection. Up to now, SUMO and UBC9 cDNAs have been cloned in several crustaceans, including Litopenaeus vannamei [14], Procambarus clarkii [1], Eriocheir sinensis [15] and Scylla paramamosain [16]. However, knowledge on the role of shrimp SUMO and UBC9 in viral infection is still limited.
In the present work, full length cDNAs of SUMO (FcSUMO) and UBC9 (FcUBC9) in F. chinensis were cloned and characterized, and their distribution characteristics were both determined at gene and protein levels. Moreover, the potential roles of SUMO and UBC9 in WSSV infection were further investigated in vivo by RNA interference (RNAi).
Shrimp and sample preparation
The Ministry of Agriculture of China allows Chinese shrimp to be caught from the Yellow Sea of China before and after the fishing-moratorium period, and the shrimp used in the present study were caught after the fishing-moratorium period. Apparently healthy Chinese shrimp with an average body length of 15-17 cm were caught from the Yellow Sea of China, all of which were negative for WSSV by PCR assay according to a previously described method [17]. Eight tissues, including hemocytes, lymphoid organ, ovary, heart, intestine, muscle, gill and hepatopancreas, were sampled from 12 healthy shrimp. For the WSSV challenge experiment, shrimp were acclimatized for 5 days at 25°C. Each shrimp was intra-muscularly injected with 100 μl WSSV inoculum (10^7 copies) prepared according to the previous method [13]. Shrimp injected with 100 μl phosphate-buffered saline (PBS, pH 7.4) served as control. The hemocytes and ovary were sampled from 6 randomly selected shrimp in each group before infection and at 6, 12, 24, 36, 48, 60 and 72 h post infection (hpi) as previously described [18].
Cloning and sequencing of FcSUMO and FcUBC9 cDNA
The partial cDNA fragments of SUMO and UBC9 were amplified by RT-PCR from shrimp hemocyte RNA using their respective degenerate primers, which were designed based on the conserved regions of other known SUMO or UBC9 sequences. The PCR amplification and the purification, cloning and sequencing of PCR products were performed according to the previous method [19].
To obtain the full-length cDNA sequences, gene-specific primers for SUMO and UBC9 were designed based on their respective partial cDNA sequences, and rapid amplification of cDNA ends (RACE) was performed using the SMART RACE cDNA Amplification Kit (Clontech, USA) according to the manufacturer's instructions. The RACE products were purified, cloned, and sequenced. The full-length cDNAs of FcSUMO and FcUBC9 were obtained by ligation of their overlapping cDNA fragments. All of the primers used in the present work are listed in Table 1.
Tissue distribution of FcSUMO and FcUBC9 mRNA
Total RNA was extracted from different tissues of shrimp using TRIzol reagent (Invitrogen, USA) and then treated with RNase-free DNase I (TaKaRa, Japan). The first-strand cDNA was synthesized from 2 μg of DNA-free total RNA by M-MLV reverse transcriptase (Promega, USA) according to the manufacturer's protocol. Quantitative real-time RT-PCR (qRT-PCR) was used to analyze the mRNA expression levels of FcSUMO and FcUBC9 in different tissues. The gene-specific primers SUMO-F4 and SUMO-R4 were used to amplify a 150 bp product of FcSUMO, and the specific primer pair UBC9-F4 and UBC9-R4 was used to amplify a 121 bp product of FcUBC9, while the 18S rRNA primer pair 18S-F and 18S-R was used for amplification of the internal control fragment for qRT-PCR. PCR was carried out using SYBR Premix Ex Taq™ (Takara) in a Thermal Cycler Dice 1 Real Time System (Eppendorf, Germany) with the following conditions: 95°C for 2 min, followed by 40 cycles of 95°C for 10 s, 58°C for 10 s, and 72°C for 20 s. A dissociation curve with a single peak was used to monitor the amplified product. The data were calculated according to the 2^-ΔΔCt method.
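A minimal sketch of the 2^-ΔΔCt calculation is given below; the Ct values are invented placeholders, with 18S rRNA as the internal control and the uninfected sample as the calibrator, mirroring the design of this study.

```python
def relative_expression(ct_target, ct_reference, ct_target_calibrator, ct_reference_calibrator):
    """Relative mRNA level by the 2^-ddCt method.

    ct_target / ct_reference: Ct of the target gene and the internal control in the sample.
    ct_target_calibrator / ct_reference_calibrator: the corresponding Ct values in the calibrator.
    """
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_calibrator - ct_reference_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)


# Example with placeholder Ct values: FcSUMO vs. 18S rRNA, infected vs. uninfected shrimp.
print(relative_expression(22.1, 12.0, 24.5, 12.2))  # about 4.6-fold up-regulation
```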
Detection of FcSUMO and FcUBC9 mRNA expressions post WSSV infection
Total RNA from hemocytes and ovary was prepared from the WSSV-infected shrimp sampled at the various time points described above, and qRT-PCR was performed to investigate the effects of WSSV infection on FcSUMO and FcUBC9 transcript levels, respectively. The qRT-PCR was carried out according to the methods described above.
Recombinant expression and purification of FcSUMO and FcUBC9
The FcSUMO and FcUBC9 open reading frame (ORF) genes were amplified by PCR using the specific primer pairs rSUMO-F/rSUMO-R and rUBC9-F/rUBC9-R, respectively. After confirmation by sequencing, the purified PCR products were cloned into the pET-28a vector to obtain the recombinant plasmids (pET-28a-SUMO and pET-28a-UBC9), which were then transformed into Escherichia coli BL21 (DE3) (Novagen). Positive clones were screened by PCR and confirmed by sequencing, then incubated in LB medium and induced with isopropyl β-D-thiogalactopyranoside (IPTG).
Table 1. Primers used in this work.
Production of rSUMO and rUBC9 antisera
Purified rSUMO and rUBC9 fusion proteins were used to immunize BALB/c mice to obtain the respective antisera. The immunization procedure was performed as previously described [22]. The reactivities of the PAbs were determined by western blotting. Briefly, the purified rSUMO and rUBC9 were separated by SDS-PAGE and then transferred onto a PVDF membrane (Millipore, USA). After blocking with 4% bovine serum albumin (BSA) in PBS for 1 h at 37°C, the membrane was incubated with PAb against rSUMO or rUBC9 for 1 h at 37°C. After washing thrice with PBST (PBS containing 0.05% Tween 20), goat-anti-mouse Ig-alkaline phosphatase antibody (1:4000, Sigma) was added for 1 h incubation at 37°C. Positive bands were developed with substrate solution (100 mM NaCl, 100 mM Tris and 5 mM MgCl2, pH 9.5) containing 5-bromo-4-chloro-3-indolyl phosphate (BCIP, Sigma) and nitroblue tetrazolium (NBT, Sigma) for 20 min, and the reaction was stopped by washing with distilled water. The PAbs were replaced by sera of unimmunized mice as control.
Characterization of SUMO and UBC9 in hemocytes of F. chinensis by western blotting and indirect immunofluorescence assay (IIFA)
For western blotting, collected hemocytes were lysed in Western and IP buffer (Beyotime, China), and the cell lysate was then centrifuged at 4°C for 20 min at 13,000 rpm. The supernatant was collected and subjected to SDS-PAGE. The samples were then transferred onto a PVDF membrane and subjected to the procedures described above for detection of SUMO and UBC9 in hemocytes of F. chinensis. For IIFA, the hemocytes were suspended in PHPBS (377 mM NaCl, 2.70 mM KCl, 8.09 mM Na2HPO4, 1.47 mM K2PO4, pH 7.4, 780 mOsm/L), allowed to settle onto glass slides for 30 min, and then fixed with acetone for 15 min. The slides were overlaid with PAb against rSUMO or rUBC9. After incubation for 1 h at 37°C in a moist chamber, the slides were rinsed thrice with PHPBS for 5 min each time and incubated with goat-anti-mouse Ig-FITC (1:256, Sigma), containing Evans blue dye (EBD) as the counterstain, for 1 h at 37°C in the dark, and DAPI staining (blue) was used to visualize cell nuclei. Finally, the slides were rinsed again and observed by fluorescence microscope. The PAbs were replaced by sera of unimmunized mice as control.
RNA interference and WSSV infection
Double-stranded RNAs (dsRNA) corresponding to the FcSUMO, FcUBC9 and green fluorescent protein gene (GFP) sequences were generated by in vitro transcription. DNA templates for dsRNA preparation were amplified by PCR using specific primers, SUMOi-F and SUMOi-R for FcSUMO, UBC9i-F and UBC9i-R for FcUBC9, and EGFPi-F and EGFPi-R for GFP (Table 1), which were designed with E-RNAi (http://www.dkfz.de/signaling/e-rnai3/idseq.php). The PCR products were purified, and 1 mg of each template was used in an in vitro transcription reaction (MBI Fermentas, USA) according to the manufacturer's protocol. The sense and antisense single-stranded RNAs were then mixed at equimolar amounts and annealed to construct the dsRNAs. The quality of the dsRNAs was verified by agarose gel electrophoresis, and the dsRNAs were quantified by NanoDrop spectrophotometer (Thermo Scientific, USA). The concentration of the dsRNAs was then adjusted to 600 μg/ml. Shrimp were divided into four groups, the SUMOi group, the UBC9i group, a negative control group and a blank control group, and injected with 100 μl dsSUMO, dsUBC9, dsGFP and TNE buffer, respectively. qRT-PCR was carried out to confirm the effect of target gene interference over 5 days at 12-h intervals.
To investigate the effects of suppression of the FcSUMO and FcUBC9 transcripts on WSSV infection, shrimp were grouped and injected with dsRNA or TNE buffer as described above, and at 48 h after the injection, shrimp were injected with 100 μl WSSV inoculum (10^7 copies); shrimp injected with an equal volume of PBS served as the blank control group. Each treatment was replicated with three batches of 50 shrimp. Total hemocyte genomic DNA was extracted using a DNA extraction kit (Takara) before WSSV infection and at 6, 12, 24, 36, 48 and 72 h post infection. An equal quantity of DNA (50 ng) was added into the SYBR Green Premix with the WSSV primer set (VF and VR) for quantitative real-time PCR (qPCR). The number of WSSV copies in hemocytes of F. chinensis at the various time points was determined according to our previous work [18]. Furthermore, the expression levels of 10 WSSV genes, including 3 ie genes (wsv051, ie1 and ie2), 2 early genes (wsv477 and dnapol) and 5 late genes (vp28, vp26, vp24, vp19 and vp15), at 48 hpi were measured by semi-quantitative RT-PCR. Mortality of each experimental group was recorded daily.
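The absolute quantification step can be illustrated with a standard-curve sketch: Ct values from a dilution series of a WSSV standard are regressed against log10 copy number, and sample Ct values are then converted to copies. All values below are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical standard curve: Ct values measured for 10-fold dilutions of a
# WSSV plasmid standard (log10 copies per reaction).
log_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0, 2.0])
ct_values = np.array([14.1, 17.5, 20.9, 24.3, 27.8, 31.2])

# Linear fit of log10(copies) against Ct.
slope, intercept = np.polyfit(ct_values, log_copies, 1)


def copies_from_ct(ct):
    """Estimate the WSSV copy number in a sample from its Ct value."""
    return 10 ** (slope * ct + intercept)


print(f"{copies_from_ct(22.0):.2e} copies per reaction")
```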
Statistics
Data are given as arithmetic mean values. The statistical analysis was performed using the software SPSS 19.0. One-way ANOVA and Duncan's multiple comparisons of the means were used to compare the data obtained. Differences were considered significant at P<0.05.
Sequence analysis of FcSUMO and FcUBC9
Blast homology analysis showed that the deduced amino acid sequence of FcSUMO is highly conserved among different species, especially the UBQ domain and the C-terminal double Gly motif (Fig 1A). Moreover, FcSUMO was fairly close to the SUMOs of four other Decapoda crustaceans, L. vannamei, E. sinensis, P. clarkii and Penaeus monodon (99-100% identities). The phylogenetic relationships between FcSUMO and other SUMOs were also analyzed by a neighbor-joining tree using full amino acid sequences. As shown in Fig 1C, SUMOs of Decapoda crustaceans, artemia and nematode formed one major cluster, and SUMOs of fish and mammals formed another cluster. Multiple sequence alignment and phylogenetic analysis of UBC9 proteins was also performed. UBC9 shared high similarity with other species, and all UBC9 sequences contained the conserved Cys93 residue, which is indispensable for binding SUMO (Fig 1B). The sequence identity between FcUBC9 and three other Decapoda crustaceans, L. vannamei, Macrobrachium nipponense and P. clarkii, was 98-100%. In the phylogenetic tree, the UBC9s of arthropod species grouped together, and the UBC9s of vertebrates clustered together (Fig 1D).
Tissue distribution of FcSUMO and FcUBC9 mRNA
qRT-PCR was employed to detect the FcSUMO and FcUBC9 mRNA expression in different tissues of healthy shrimp. The FcSUMO and FcUBC9 mRNA expression profiles were highly similar; the highest transcription level was detected in hemocytes, high expression levels of FcSUMO and FcUBC9 were also detected in ovary, whereas low expression levels were detected in muscle and hepatopancreas (Fig 2).
Expression kinetics of FcSUMO and FcUBC9 post WSSV challenge
The time courses of FcSUMO and FcUBC9 expression in hemocytes and ovary post WSSV infection were investigated by qRT-PCR. The results showed that FcSUMO mRNA expression levels in hemocytes and ovary were significantly up-regulated post WSSV infection, and reached the highest level at 24 and 36 hpi, respectively (Fig 3A). The FcUBC9 mRNA expression levels in hemocytes and ovary were also significantly up-regulated post infection, reached their peak levels at 24 hpi, and then decreased to the control level at 72 hpi (Fig 3B). Of note, the extent of up-regulation of these two genes in hemocytes was much higher than that in ovary (Fig 3).
Characterization of SUMO and UBC9 in hemocytes of F. chinensis
SDS-PAGE revealed that the His-tagged SUMO and UBC9 were successfully expressed in E. coli BL21 (DE3) with expected MWs of 19.7 kDa and 25.6 kDa, respectively (Fig 4A and 4B, lane 2). After purification with a Ni-NTA column, high-purity rSUMO and rUBC9 were obtained (Fig 4A and 4B, lane 3). PAbs against rSUMO or rUBC9 were obtained from the immunized mice, and they specifically reacted with rSUMO or rUBC9, respectively, in the lysate of induced E. coli BL21 (Fig 4A and 4B, lane 4). The results of western blotting showed that the PAb against rSUMO reacted strongly with a 13.5 kDa protein (Fig 4C, Lane 2) in the lysate of F. chinensis hemocytes, and the anti-UBC9 antibodies reacted with an 18.7 kDa protein (Fig 4C, Lane 3). No reactive protein bands were observed in the control (Fig 4C, Lane 4). The results of IFA showed that the PAbs against rSUMO and rUBC9 mainly reacted with proteins in the nuclei of hemocytes (Fig 4D and 4E), and no green positive signals were observed in the control (Fig 4F).
The effects of silencing of FcSUMO and FcUBC9
The FcSUMO and FcUBC9 mRNA expressions were significantly down-regulated at each sampling time after injection of dsRNA (data not shown). The relative expression levels of FcSUMO and FcUBC9 both decreased to their minimum values at 48 h post injection, with silencing efficiencies of 77.8% and 67.7%, respectively, compared to the dsGFP groups, whereas the expressions of FcSUMO and FcUBC9 were not significantly affected by dsGFP or TNE injection (Fig 5A). At 48 h prior to WSSV challenge, shrimp were injected with dsRNA to characterize the roles of SUMO and UBC9 in regulating viral replication, and the number of WSSV copies in hemocyte samples was calculated. The results showed that the changes in WSSV copies in the different groups displayed a similar tendency. The viral copies were maintained at a low level at the early stage post infection, then significantly increased to a high level at 24 hpi, and displayed a stable increase afterward. However, the WSSV copies in the dsSUMO and dsUBC9 injection groups were significantly lower than those in the dsGFP and TNE injection groups at each sampling time, and the viral copy number in UBC9-silenced shrimp was slightly lower than that in SUMO-silenced shrimp (Fig 5B). In addition, the expression levels of ten WSSV genes at 48 hpi were monitored by semi-quantitative RT-PCR. As shown in Fig 5C, the expressions of viral ie genes, early genes and late genes were significantly inhibited in SUMO- and UBC9-silenced shrimp compared to the control.
To investigate the effects of SUMO and UBC9 knockdown on mortality in WSSV-infected shrimp, the cumulative mortality of shrimp in each group was calculated. The results showed that silencing of the two genes both delayed shrimp mortality. Mortality increased steadily post infection and reached 100% on day 9 in the SUMO-silenced group and on day 10 in the UBC9-silenced group. By contrast, in the two positive control groups, 100% cumulative mortality was observed at 7 days post infection. There was almost no mortality in the negative control group (Fig 5D).
Discussion
In the present work, a SUMO cDNA and a UBC9 cDNA were cloned from hemocytes of F. chinensis by the RACE technique. Multiple sequence alignment of the deduced proteins showed that the two molecules both had significant homology with those from various species, and the amino acid sequence of FcSUMO was even completely identical to the SUMOs of L. vannamei and P. clarkii, indicating that they are highly evolutionarily conserved. This finding is consistent with the fact that SUMO and UBC9 polypeptides are conserved from yeast to human [23]. Like the genes identified from other species, FcSUMO and FcUBC9 have their respective active sites, the double Gly in SUMO and Cys93 in UBC9, which are crucial in SUMOylation. During SUMOylation, an inactive SUMO is converted to its active form by exposing the C-terminal double Gly residues, which then form a thioester bond with a cysteine of the E1 activating enzyme; subsequently, SUMO is transferred onto the active Cys93 of UBC9 and finally passed to the ε-amino group of substrate lysine residues on the target proteins [24]. We speculate that FcSUMO and FcUBC9 work in a similar way to the SUMOs and UBC9s of other organisms in the SUMOylation process.
Tissue expression profiling by qRT-PCR revealed that FcSUMO and FcUBC9 are ubiquitously expressed in the examined tissues, with the highest expression in hemocytes and ovary. Similarly, several previous studies have reported high expression of SUMO and UBC9 in the gonads of other crustaceans, and a function of SUMOylation in testis and ovary development has been described [15,16]. However, high expression of SUMO and UBC9 in hemocytes has rarely been reported in other species. In the WSSV infection experiment, FcSUMO and FcUBC9 were up-regulated post infection, and the extent of up-regulation of these two genes in hemocytes was much greater than that in ovary. Similarly, a previous study showed that SUMO and UBC9 mRNA expression was significantly up-regulated in the hepatopancreas and intestine post WSSV infection [1]. These results indicate that SUMOylation plays an important role in the immune response of hemocytes to viral infection.
The localization of FcSUMO and FcUBC9 in hemocytes was analyzed by IFA, and positive signals were mainly observed in the nuclei of hemocytes. This result is consistent with observations in mammals [25][26][27]. Furthermore, SUMO modification has been implicated in many important cellular processes, including the control of genome stability, signal transduction, targeting to and formation of nuclear compartments, the cell cycle and meiosis, and these processes occur mainly in the nuclear region [28][29][30]. We speculate that the nuclear distribution of the two proteins is essential for their functions in these cellular processes.
To date, proteins from many virus families have been shown to be modified by SUMO conjugation, and this modification appears critical for viral protein function. Conversely, viruses can also alter the sumoylation of host proteins to create a cellular environment that facilitates viral survival and reproduction [31]. Gene knockdown using dsRNA is a powerful tool for investigating gene function in crustaceans and has been widely and effectively employed to inhibit host genes involved in viral infection in shrimp [1,[32][33]. In the present research, suppression of SUMO or UBC9 transcript levels inhibited the increase in WSSV copy number and viral gene expression and reduced shrimp mortality, indicating that SUMO and UBC9 are involved in WSSV replication. This finding is consistent with the results reported in crayfish [1] and suggests that sumoylation is closely linked to WSSV infection. However, the specific effects of sumoylation on WSSV remain unclear, so further research is needed to gain an in-depth understanding of the relationship between WSSV infection and host sumoylation, which might prove useful for antiviral therapeutics.
Long-term intra-individual variability of albuminuria in type 2 diabetes mellitus: implications for categorization of albumin excretion rate
Background Diabetic kidney disease (DKD) is the leading cause of end-stage renal disease in the Western world. Early and accurate identification of DKD offers the best chance of slowing the progression of kidney disease. An important method for evaluating risk of progressive DKD is abnormal albumin excretion rate (AER). Due to the high variability in AER, most guidelines recommend the use of more than or equal to two out of three AER measurements within a 3- to 6-month period to categorise AER. There are recognised limitations of using AER as a marker of DKD because one quarter of patients with type 2 diabetes may develop kidney disease without an increase in albuminuria and spontaneous regression of albuminuria occurs frequently. Nevertheless, it is important to investigate the long-term intra-individual variability of AER in participants with type 2 diabetes. Methods Consecutive AER measurements (median 19 per subject) were performed in 497 participants with type 2 diabetes from 1999 to 2012 (mean follow-up 7.9 ± 3 years). Baseline clinical characteristics were collected to determine associations with AER variability. Participants were categorised as having normo-, micro- or macroalbuminuria according to their initial three AER measurements. Participants were then categorised into four patterns of AER trajectories: persistent, intermittent, progressing and regressing. Coefficients of variation were used to measure intra-individual AER variability. Results The median coefficient of variation of AER was 53.3%, 76.0% and 67.0% for subjects with normo-, micro- or macroalbuminuria at baseline. The coefficient of variation of AER was 37.7%, 66% and 94.8% for subjects with persistent, intermittent and progressing normoalbuminuria; 43%, 70.6%, 86.1% and 82.3% for subjects with persistent, intermittent, progressing and regressing microalbuminuria; and 55.2%, 67% and 82.4% for subjects with persistent, intermittent and regressing macroalbuminuria, respectively. Conclusion High long-term variability of AER suggests that two out of three AER measurements may not always be adequate for the optimal categorisation and prediction of AER.
Background
Diabetic kidney disease (DKD) is the leading cause of end-stage renal disease (ESRD) in the Western world. Current interventions for DKD do not arrest, but only delay the progression to ESRD [1]. The early and accurate identification of DKD, followed by early interventions may therefore offer the best chance of slowing the progression of kidney disease. One important method for evaluating the risk of progressive DKD involves identifying abnormal albumin excretion rate (AER), however the limitations of relying solely on AER as a marker of DKD are being increasingly recognised [2,3].
The variability of 24 h urinary AER has been a topic for discussion due to its unpredictable nature. Rather than an abrupt transition from normal to abnormal values, albumin excretion often increases slowly over several years [4,5]. The average increase in AER ranges from 10% to 30% per year until overt nephropathy develops, with some subjects showing slower rates of increase in AER after the development of microalbuminuria [6]. Regression from microalbuminuria to normoalbuminuria may also occur due to tight glycaemic control or the use of Renin-Angiotensin system inhibitors (RASi) [7]. Furthermore, the phenomenon of spontaneous regression from microalbuminuria to normoalbuminuria is now well recognised [5,[8][9][10]. Despite the above, the strong relationship of progression of albuminuria in type 2 diabetes mellitus (T2DM) to declining glomerular filtration rate (GFR), ESRD and cardiovascular (CV) disease emphasises the importance of accurately classifying AER patterns [11][12][13].
Several studies have demonstrated a wide range of intra-individual variability of albuminuria in diabetes, with the majority reporting a coefficient of variation in the range of 28% to 47% [5,14]. Factors which affect the wide variation in AER include the type of urine sample analysed (e.g. 24 h, timed overnight, first morning, random), the concentration of urinary albumin, the time period over which the samples were collected (days, weeks, months), the clinical characteristics of participants, as well as the pre-analytical handling and storage of the urine samples [9,14,15]. The inherent variability of AER in people with diabetes also needs to be considered. Most previous studies investigating AER variability have been small, with a short follow-up period, and conducted in subjects with type 1 diabetes mellitus (T1DM) [16][17][18]. Only a few small studies of the variability of albuminuria in T2DM have been previously reported [19,20]. Furthermore, few studies have documented the variability of albumin excretion over a prolonged period, with the follow-up period in most studies being less than 1 year.
Due to the high variability in AER, most guidelines [14,21] recommend the use of at least two out of three AER measurements within a 3- to 6-month period to categorise AER. Even with this recommendation, misclassification of AER categories can occur because the long-term variability of AER may be greater than that reported in shorter-term studies. The aim of this study was therefore to investigate the long-term intra-individual variability of albuminuria over several years. Furthermore, we sought to identify the relationship of the variability of AER with various clinical and biochemical parameters.
Study design
Consecutive AER measurements in 617 participants with T2DM were recorded from 1999 to 2012. These patients attended the diabetes clinics at Austin Health, Melbourne and provided 24 h urine samples prior to each clinic visit, at intervals of 3-12 months. Patients were asked to discard their first urine void of the day and collect urine for the next 24 h. AER was measured in each urine sample by the Department of Biochemistry at Austin Health. Over this time period, the Beckman method was used to measure urinary albumin; the laboratory coefficient of variation of this method is approximately 5%. We utilised a modified protocol based on Kania et al. [22], using a 10 ml aliquot of a 24 h urine collection, pH-adjusted with NaOH to a final concentration of 25 mM and stored at −20 °C to prevent degradation of albumin.
Baseline clinical and biochemical characteristics were collected in 2000 (Table 1). These included sex, age, body mass index (BMI), disease duration, glycated haemoglobin (HbA1c), estimated glomerular filtration rate (eGFR), the use of RASi agents, smoking status, total cholesterol levels, high-density lipoprotein (HDL) cholesterol levels, and systolic blood pressure (SBP).
A minimum of five AER measurements per subject was an inclusion criterion. Participants were first classified into baseline albuminuria categories: normoalbuminuria (<20 mcg/min), microalbuminuria (20-200 mcg/min) and macroalbuminuria (>200 mcg/min), when at least two out of three of their first three samples fell within the respective ranges [3] (see the sketch after the pattern definitions below). The first three samples were collected over an average period of 0.8 years. Participants were further independently subcategorized by two of the authors (AL and CN) into four AER pattern groups, illustrated by representative AER plots for each pattern in Fig. 1:

Persistent (normo-, micro-, macro-): Participants with all AER values within the range of their respective baseline albuminuria categories.

Intermittent (micro-, macro-): Participants with one or more AER values above/below that of their respective baseline AER groups, and a return to their baseline albuminuria category at study completion (when at least two of the last three values were within the baseline category).
Regressing: Participants with a decrease in serial AER values, with two or more values in the range below their baseline categories at study completion.
Progressing: Participants with an increase in serial AER values, with two or more values above their baseline albuminuria categories at study completion.
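To make the two-out-of-three baseline rule and the cut-points above concrete, the following sketch classifies a participant's baseline albuminuria category from their first three AER measurements. It is a minimal illustration of the rule as described, not the authors' code; the function names and example values are hypothetical.

```python
# Minimal illustration of baseline AER categorisation (not the study's actual code).
# Cut-points follow the text: normo <20, micro 20-200, macro >200 mcg/min,
# with the category assigned when at least two of the first three samples agree.

def aer_category(aer_mcg_per_min):
    """Map a single 24 h AER value (mcg/min) to an albuminuria category."""
    if aer_mcg_per_min < 20:
        return "normo"
    elif aer_mcg_per_min <= 200:
        return "micro"
    else:
        return "macro"

def baseline_category(first_three_aer):
    """Apply the 'at least two out of three' rule to the first three samples.

    Returns the majority category, or None if no category reaches two votes
    (in which case a classification cannot be made from these samples).
    """
    assert len(first_three_aer) == 3
    votes = [aer_category(v) for v in first_three_aer]
    for cat in ("normo", "micro", "macro"):
        if votes.count(cat) >= 2:
            return cat
    return None

# Hypothetical examples: two of three samples fall in the microalbuminuric range.
print(baseline_category([18.0, 45.0, 110.0]))   # -> "micro"
print(baseline_category([12.0, 25.0, 250.0]))   # -> None (no majority)
```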
These AER patterns were similar to those described in the study by Steinke et al. [23], which identified four patterns of AER trajectories (persistent, temporary persistent, intermittent and progression) in normo- and microalbuminuric patients with T1DM.
It should be noted that urinary albumin-creatinine ratio (ACR) was not measured in this study.
Statistical analysis
The coefficient of variation is defined as the ratio of the standard deviation (SD) to the mean and was used as a measure of intra-individual AER variability. Median coefficients of variation of AER were reported because the data were not normally distributed. The first three AER measurements were used to classify individuals into normo-, micro- and macroalbuminuria groups. Multivariate regression was used to examine the effects of baseline demographic variables on the intra-individual coefficient of variation of AER: baseline albuminuria group, HbA1c, age, gender, duration of diabetes, total cholesterol, HDL, systolic BP, BMI, RASi use at baseline and smoking. Multivariate regression was also performed to compare coefficients of variation among baseline albuminuria groups and AER pattern groups, with adjustment for clinical and biochemical characteristics. Wilcoxon rank-sum tests were performed to test the equality of coefficients of variation between participants who had and had not received RASi treatment at baseline. All analyses were carried out using Stata Statistical Software, Release 12 (StataCorp LP, College Station, TX, 2011).
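As a concrete illustration of the variability measure used throughout the Results, the sketch below computes each participant's intra-individual coefficient of variation (SD divided by the mean of all their AER values) and summarises each baseline group by its median CV. This is a schematic re-implementation for clarity, with made-up data, and is not the Stata analysis actually performed.

```python
# Schematic computation of the intra-individual coefficient of variation (CV)
# of AER and the median CV per baseline albuminuria group. Data are invented.
import statistics

# participant id -> (baseline group, list of serial AER measurements in mcg/min)
participants = {
    "p1": ("normo", [12, 15, 9, 18, 14, 11]),
    "p2": ("micro", [35, 80, 22, 150, 60, 45]),
    "p3": ("micro", [25, 30, 190, 40, 15, 210]),
    "p4": ("macro", [250, 400, 800, 320, 500]),
}

def coefficient_of_variation(values):
    """Intra-individual CV as a percentage: SD / mean * 100."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)   # sample SD, as is conventional
    return sd / mean * 100.0

cv_by_group = {}
for group, aer_values in participants.values():
    cv_by_group.setdefault(group, []).append(coefficient_of_variation(aer_values))

for group, cvs in cv_by_group.items():
    print(f"{group}: median CV = {statistics.median(cvs):.1f}% (n = {len(cvs)})")
```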
Patient characteristics
There were 617 potential participants with baseline and consecutive AER measurements. However 120 had insufficient follow-up AER data (i.e., less than five AER measurements) to permit categorization of temporal AER patterns. A total of 497 participants with sufficient AER data was therefore available for inclusion in the study. Participants were categorized into normo-, micro-or macroalbuminuria groups at baseline, and subsequently into one of the four longitudinal AER pattern groups.
Of the 497 participants in the study, 289 (58%) had normoalbuminuria, 157 (32%) had microalbuminuria and 51 (10%) had macroalbuminuria at baseline. The median number of urine samples for each of the 497 subjects was 19 (range 5-43), collected over 2-13 years. The number of samples was 19 ± 8, 21 ± 8 and 16 ± 8 (mean ± SD) for participants with baseline normo-, micro- or macroalbuminuria, respectively, and the mean follow-up period for all participants was 7.9 ± 3 years. For those with baseline normo-, micro- and macroalbuminuria, the follow-up periods were 8.5 ± 3.0, 8.7 ± 3.1 and 6.5 ± 2.8 years, respectively. Baseline clinical characteristics according to albuminuria categories are shown in Table 1. As expected, a greater proportion of those who had micro- or macroalbuminuria at baseline were treated with RASi agents (normo 52%; micro 78%; macro 80%).
Relationship between baseline AER variability and participant characteristics
Using multivariate regression, there was no evidence of a significant relationship between median coefficient of variation of baseline AER measurements for all participants and each of the following variables: HbA1c, age, gender, duration of diabetes, total cholesterol, HDL-cholesterol, SBP, BMI, and smoking.
The intra-individual variability of AER was compared between participants on (n = 312) and not on (n = 185) RASi agents at baseline, regardless of albuminuria group. The median coefficient of variation for AER was significantly higher in participants on RASi therapy at baseline compared to those not on RASi agents (66% vs. 55%, p = 0.003). After adjustment for baseline albuminuria categories, the coefficient of variation was 1.13 times greater in treated participants versus those not treated with RASi agents (p = 0.013).
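The unadjusted comparison above corresponds to the Wilcoxon rank-sum test described in the statistical analysis section. A minimal illustration of such a comparison is sketched below; the CV values are randomly generated stand-ins, not the study data.

```python
# Minimal illustration of comparing intra-individual CVs of AER between
# participants on and not on RASi agents at baseline using a Wilcoxon rank-sum
# test. The CV values below are invented for demonstration only.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
cv_on_rasi = rng.lognormal(mean=np.log(66), sigma=0.4, size=312)    # hypothetical CVs (%)
cv_off_rasi = rng.lognormal(mean=np.log(55), sigma=0.4, size=185)   # hypothetical CVs (%)

stat, p_value = ranksums(cv_on_rasi, cv_off_rasi)
print(f"median CV on RASi:  {np.median(cv_on_rasi):.1f}%")
print(f"median CV off RASi: {np.median(cv_off_rasi):.1f}%")
print(f"Wilcoxon rank-sum statistic = {stat:.2f}, p = {p_value:.4f}")
```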
At study completion, 98 out of 157 (62.4%) participants with microalbuminuria had used RASi agents during the study. Eighty-nine (90.8%) of the 98 participants were on RASi treatment by the end of the study. Of the remaining 9 participants, 6 were never treated with a RASi agent and 3 had ceased RASi treatment by the end of the study.
Comparison of AER variability among normo-, micro- and macroalbuminuria groups
Coefficients of variation of each baseline albuminuria category are demonstrated in Table 2. After adjusting for baseline characteristics, the median coefficient of variation was 53% for participants with normoalbuminuria, 76% for those with microalbuminuria and 67% for those with macroalbuminuria. This coefficient of variation was significantly different among the three baseline albuminuria categories (p = 0.027). The median coefficient of variation was significantly lower in the normoalbuminuria group (67.7%) compared to the microalbuminuria group (81.6%) after adjustment for RASi use (p = 0.007). There was no evidence of a difference in coefficient of variation between the macroalbuminuria group (75.4%) and the normoalbuminuria group (p = 0.41).
Comparison of AER variability for temporal AER patterns according to baseline AER

Normoalbuminuria at baseline

Coefficients of variation of the persistent, intermittent and progressing patterns of AER are demonstrated in Table 2. There was a difference in coefficients of variation across the three patterns after adjustment for baseline characteristics (p < 0.001). The median coefficient of variation was significantly lower in the persistent pattern (38%) than in the intermittent pattern (79%; p < 0.001), and also lower than in the progressing pattern group (95%; p < 0.001), as expected by definition, since AER was changing in the non-persistent groups.
Microalbuminuria at baseline
Coefficients of variation of the persistent, intermittent, progressing and regressing patterns of AER for participants with baseline microalbuminuria are shown in Table 2. There was an overall difference in coefficients of variation across the four AER patterns after adjusting for baseline characteristics (p = 0.001). The median coefficient of variation in the persistent pattern (43%) was similar to that in the intermittent pattern (71%; p = 0.064), but lower than in the progressing pattern (86%; p = 0.002) and the regressing pattern (82%; p = 0.008) groups. There was also evidence of a difference in coefficients of variation between the intermittent and progressing pattern groups (p = 0.003), as well as between the intermittent and regressing pattern groups (p = 0.033). In the present study, 28% of participants showed remission from micro- to normoalbuminuria independently of RASi use.
Macroalbuminuria at baseline
Coefficients of variation of the persistent, intermittent and regressing patterns are shown in Table 2. There was no evidence of a significant difference in coefficients of variation among the three AER patterns (p = 0.071).
Comparison of variability of AER according to temporal patterns of AER with or without RASi treatment
There was no evidence of a difference in the coefficients of variation of AER between patients treated with or without RASi agents within any of the temporal AER patterns.
Theoretical effect of increasing AER measurements
The theoretical effect of increasing the number of AER measurements per participant on the 95% confidence limits for intra-individual coefficients of variation of AER, arbitrarily set at 50% (normoalbuminuria) and 75% (microalbuminuria) respectively, is shown in Fig. 2. If a patient with normoalbuminuria at baseline provides three urine samples with a median AER of 20 mcg/min and a coefficient of variation of 50%, the 95% confidence interval would be approximately 50-150%. With seven or more samples, the confidence interval plateaus at approximately 60-130%. Similarly, if a participant with microalbuminuria at baseline provides three urine samples with a median of 100 mcg/min and a coefficient of variation of 75%, the 95% confidence interval would be approximately 20-180%. With ten or more samples, the confidence interval plateaus at approximately 50-150%. It appears that a larger number of AER measurements is needed for participants with microalbuminuria than with normoalbuminuria in order to achieve similar confidence intervals.
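The plateau behaviour described above can be reproduced with a simple approximation. The sketch below assumes that the 95% confidence limits for an individual's mean AER, expressed as a percentage of the true value, are approximated by 100% ± 1.96 × CV/√n; the paper does not state the exact formula behind Fig. 2, so this is an illustrative reconstruction rather than the authors' calculation.

```python
# Illustrative reconstruction of how 95% confidence limits for an individual's
# mean AER narrow as more samples are collected, assuming the approximation
# limits (%) = 100 +/- 1.96 * CV / sqrt(n). Not the paper's actual Fig. 2 code.
from math import sqrt

def ci_percent(cv_percent, n_samples):
    half_width = 1.96 * cv_percent / sqrt(n_samples)
    return (100.0 - half_width, 100.0 + half_width)

for label, cv in (("normoalbuminuria (CV 50%)", 50.0), ("microalbuminuria (CV 75%)", 75.0)):
    print(label)
    for n in (3, 7, 10, 15):
        lo, hi = ci_percent(cv, n)
        print(f"  n = {n:2d}: approx. 95% limits {lo:5.0f}% - {hi:5.0f}%")
# With CV = 50% the limits are roughly 43-157% at n = 3 and 63-137% at n = 7;
# with CV = 75% they are roughly 15-185% at n = 3 and 54-146% at n = 10,
# close to the ranges quoted in the text.
```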
Discussion
The major finding of this study is that the long-term intra-individual coefficient of variation of AER is high, implying that more than three AER measurements may be necessary to accurately categorise albuminuria. In this study, we categorized albuminuria into normo-, micro- or macroalbuminuria groups according to participants' initial three AER measurements and then subsequently into groups according to four patterns of AER trajectories: persistent, intermittent, progressing and regressing. The coefficient of variation of AER was 37.7%, 66% and 94.8% for subjects with normoalbuminuria who had persistent, intermittent and progressing AER patterns, respectively. It is recognised that, by definition, patients with persistent AER measurements within the normo-, micro- and macroalbuminuria groups will have a lower coefficient of variation than patients with intermittent, progressing or regressing patterns of AER. The coefficients of variation of AER for micro- and macroalbuminuria patients, regardless of the subsequent AER pattern, fell within the above boundaries. In addition, of all the baseline clinical and biochemical characteristics analysed, only the use of RASi agents significantly influenced the variability of AER. However, this effect was only evident for baseline AER variability, and the data were insufficient to estimate the influence of RASi on the long-term temporal patterns of AER used in this study. Kropelin et al. [24] evaluated data from randomized intervention trials (BENEDICT, DIRECT, ALTITUDE and the Irbesartan in Patients with Type 2 Diabetes and Microalbuminuria (IRMA-2) study) and demonstrated that increasing the number of urine collections per study visit and the number of visits does not change the average drug effect estimate. It was therefore suggested that a single urine collection per study visit is sufficient to define transition of albuminuria as an end-point in clinical trials [24]. The median intra-individual coefficients of variation of AER reported here are higher than the intra-individual variability seen in previous studies in T2DM (31-43%) [4,20,25]. These previous studies were typically conducted under stricter conditions (i.e., as part of a clinical investigation rather than an outpatient setting). However, one study conducted under routine clinic conditions, involving 1391 participants with T1DM and T2DM, in which an average of 2 to 3 samples per participant per visit (range 2 to 18) were collected over 2 years, reported a coefficient of variation of 58%-82% [26]. The differences in coefficients of variation between our study and previous reports may be explained by differences in study design: many of the studies mentioned were conducted over shorter time intervals, on smaller numbers of patients, with smaller numbers of urine samples collected per participant and in specific study settings. By contrast, the current study investigated the intra-individual variability of AER for patients with T2DM undergoing routine clinical assessment, with a mean follow-up of 7.9 years [4,25].
There have only been a few previous reports of AER variability for patients with T2DM according to baseline AER categories of normo-, micro- and macroalbuminuria. In one study of 87 participants in which only 3 overnight samples were collected per participant, the overall variability of AER was 25.7%, compared with 36.1%, 24.8% and 22.3% for normo-, micro- and macroalbuminuric participants, respectively [27]. In contrast, the present study showed that the median coefficient of variation was significantly lower in those with normoalbuminuria (53.3%) compared to those with microalbuminuria (76%) after adjustment for multiple baseline characteristics, including RASi use (p = 0.007). This study also showed a median coefficient of variation of 67% for those with macroalbuminuria. The higher intra-individual coefficient of variation in the albuminuria groups in the current study could be attributed to the longer duration of this study and the larger number of samples collected. Other factors that could have contributed include the progression of disease, the effect of treatment on the disease, and a fall in the number of participants as the years progressed.
The Renal Insufficiency and Cardiovascular Events (RIACE) study has also reported on the variability of albuminuria in T2DM [20]. The investigators determined AER in a subset of participants: 833 subjects had AER and 3229 participants had ACR measured at different laboratories. They found that the concordance rate between a single urinary albumin excretion (UAE) measurement and the geometric mean of multiple measurements depended on the degree of albuminuria: 94.6% for normoalbuminuria, 83.5% for microalbuminuria, and 91.1% for macroalbuminuria. It is difficult to directly compare our results with the RIACE study, as the RIACE study used both ACR and AER, whereas the current study used AER only. In the RIACE study, only 20.5% of participants had an AER measurement and 79.5% had an ACR measurement. Another limitation of the RIACE study is that participant characteristics were not equally distributed among albuminuria classes. In the RIACE study, the distribution of participants according to AER categories was 71.9% with normoalbuminuria, 23.2% with microalbuminuria and 4.9% with macroalbuminuria; in our study, 58% had normoalbuminuria, 32% had microalbuminuria and 10% had macroalbuminuria. Furthermore, compared to the RIACE study, we used AER measurements in all our participants. It is possible that using ACR could alter the number of samples required to categorise albuminuria; however, the main purpose of the current study was to demonstrate AER variability. An important aim of the present study was to observe temporal AER patterns and to determine baseline predictors of those patterns. Interestingly, we found that only 5 out of 157 participants with microalbuminuria at baseline were persistently microalbuminuric throughout the average follow-up period of 7.9 ± 3.1 years. The current study therefore highlights the highly variable nature of microalbuminuria in patients with T2DM. Historically, it has been assumed that the development of microalbuminuria signalled inevitable progression to macroalbuminuria [14]. However, it is increasingly recognised that the development of microalbuminuria can no longer be viewed as a committed and irreversible stage of DKD, with spontaneous remission being frequently reported [2,11]. Approximately 60% of patients with T1DM have displayed spontaneous remission of microalbuminuria, independent of the use of RASi agents, over 5-10 years of follow-up [14]. However, in a cohort study of T1DM patients followed for over 30 years, spontaneous remission of microalbuminuria to normoalbuminuria was not associated with a reduction in CV or renal risk compared to sustained microalbuminuria, despite adjustment for RASi use [13]. It is possible that an association between remission of microalbuminuria and CV or renal risk was missed [28], as changes in albuminuria may have been too small to detect any clinical significance despite long-term follow-up. In the present study of participants with T2DM, 28% showed remission from micro- to normoalbuminuria independent of RASi agent use. Other studies of T1DM and T2DM have reported rates of spontaneous remission from microalbuminuria to normoalbuminuria ranging from 39% to 64% [9,16,17,23].
We found that the baseline characteristics of sex, age, BMI, disease duration, HbA1c, smoking status, total cholesterol levels, HDL-cholesterol, SBP, and even use of RASi agents had no significant effect on the variability of AER. One limitation of this study was the inability to compare changes over time in medications, HbA1c, eGFR, systolic blood pressure and cholesterol/HDL levels with changes over time in AER. A definitive answer as to whether a relationship exists between clinical and biochemical parameters and AER variability therefore requires a study relating temporal changes in these parameters to temporal changes in AER. Unfortunately, we were not able to account for factors such as blood pressure, glycaemic control, dietary salt consumption, physical activity and inflammation, as information pertaining to these variables was not collected in a longitudinal fashion for this study. We recognise this as a limitation of the current study. Despite this, several studies have shown that there is no association between the variability of AER and sex [18,25], age [18], BMI [18], total cholesterol [18,19,29] or SBP [18].
In this study, we also examined the theoretical effect of increasing the number of AER measurements per participant on the 95% confidence limits for the intra-individual coefficient of variation. As seen in Fig. 2, seven to ten measurements of AER mark the beginning of the plateau of the 95% confidence intervals for the normoalbuminuric and microalbuminuric groups, respectively. It can therefore be inferred from the current study that the commonly accepted definition of two out of the first three samples is insufficient to categorize albuminuria at baseline [3]. The RIACE study also suggests that although two AER samples can provide a robust classification of albuminuria status (sensitivity of 90.6% and specificity of 94.6%), disease progression and the efficacy of reno-protective treatment such as the use of RASi agents cannot be accurately monitored unless albuminuria is measured at frequent intervals over a prolonged period of time [20]. Similarly, 388 T2DM patients treated with the RAS inhibitors losartan or irbesartan in the RENAAL and IDNT trials showed a 30% reduction in albuminuria 3 months after commencement of RAS inhibition, and a further decrease of 44.8% was seen in 174 patients after 12 months [30]. The variability of albuminuria within patients suggests that incorporating multiple measurements improves risk algorithms and the assessment of treatment effects over time [30].
In this study, 120 out of 617 (19%) participants were excluded due to the lack of sufficient AER samples (a minimum of five samples was required for inclusion). The exclusion of this sizeable proportion of participants is a potential source of selection bias, as patients with fewer samples may have been those who were poorly compliant with follow-up requirements. A limitation of the current study, and of studies investigating albuminuria in general, is that there is no reference laboratory method for measuring urine or serum albumin levels [31]. Urinary albumin assays are calibrated to a serum albumin reference material diluted to the concentrations measured in urine, and no standard procedure for the dilution and diluent used in measuring albuminuria has been developed [31]. Furthermore, in the current study we focused on the albumin excretion rate, as it has traditionally been accepted as the reference method for determining albuminuria, and did not specifically study the albumin-to-creatinine ratio; however, it is appreciated that the albumin-to-creatinine ratio is now usually the preferred method for assessing albuminuria.
It is appreciated that it is often impractical to obtain multiple AER measurements in the clinical setting before deciding on treatment. Furthermore, it is important to appreciate that the relationship between AER and renal/vascular outcomes is continuous [5]. Attempts to classify AER into categories are made to provide a simple framework for researchers and clinicians to interpret the results of interventions that alter albuminuria and to stratify the risk that individual patients have for the development and progression of renal and vascular disease. Clinicians should be aware of the wide variability of urinary albumin excretion and balance theoretical recommendations for the number of AER measurements against other risk factors. For instance, increases in AER in participants with hypertension are an indication for intervention, whereas short-term increases in AER in normotensive participants are not necessarily an indication for intervention. During the time that the current study was conducted, many centres were using AER routinely as the way of determining albuminuria; over the years, ACR has increasingly been used instead. As the current study addressed variability in AER, we are not able to comment on whether the findings will change the way patients are managed, as very few centres now use AER to measure albuminuria.
Conclusions
The current study highlights an important finding: there is a high degree of variability of AER in people with diabetes, and this high long-term variability suggests that two out of three AER measurements may not always be adequate for the optimal categorisation and prediction of AER.
Spontaneous Current Generation in Cosmic Strings
It is shown that in models including the standard electroweak theory, and for some particular values of the underlying parameters, electric currents can be spontaneously generated in cosmic strings, without the need for any external field (e.g., electric or magnetic) as is required in most models. This mechanism is then shown to spontaneously break the Lorentz invariance along the initially Goto-Nambu string. The characteristic time needed for the current to build up is estimated and found, to lowest order, to depend only on the mass of the intermediate $W$ vector boson and the fine structure constant.
INTRODUCTION
Cosmic strings [1] are linear vortex defects predicted to be formed at a cosmological phase transition during which the vacuum manifold is not simply connected. The first interest in studying them comes from the fact that, since a typical grand unified theory (GUT) predicts a few phase transitions (whose order, as far as the strings are concerned, does not actually matter [2]), and because the vacuum structure needed to form strings happens to be generically realized, one can, following Vilenkin [3], reasonably assume that cosmic strings have an existence probability of at least 1/2. Although they are not the only possible topological defects that could be formed in such phase transitions, they have the advantage, as compared for instance with domain walls and monopoles which must be somehow inflated away [1,4], of being at present compatible (as are the textures [5]) with all existing cosmological data [6], while also being possibly responsible for the formation of large-scale structure [7] and the observed anisotropies in the cosmic microwave background (CMBR) [8]. Most models based on these strings assume that they are generated at the GUT phase transition, so that the dimensionless parameter GU, with G the Newton constant and U the energy per unit string length, which gives the expected relative order of magnitude of any gravitational effect due to these strings (e.g., light deflection [9] or CMBR temperature fluctuations [6]), was assumed to be ∼ 10^{-6}.
Another kind of strings was proposed by Witten [10] in 1985 who pointed out the possibility that bosonic or fermionic superconducting currents could be trapped in the strings, thereby inducing many electromagnetic effects, such as, for instance, a new scenario for structure formation [11]. Shortly thereafter, it was shown by Davis and Shellard [12] and independently by Carter [13] that, although the regular strings cannot be potentially responsible for a cosmological catastrophe (i.e., the remnant mass density would not exceed the critical density) because of gravitational radiation and the absence of any stabilising mechanism, the situation was completely different for current-carrying string loops since in the latter case, there exists centrifugally supported equilibrium configurations (called vortons [12] or rings [13]) which would overfill the Universe [14] by many orders of magnitude if they were stable (a point which still demands further clarification and is presently under investigation [15]), this stabilizing mechanism being enhanced when electromagnetic corrections are taken into account [16], and should not be confused with the much less efficient "spring", or magnetostatic support mechanism [11,17,18].
The Witten mechanism to produce currents in cosmic strings has been studied by many authors, interested in particular by their internal microscopic structure [17], and who exhibited clearly the characteristic features of what should be expected in these objects, such as the existence of a maximum (spacelike) current (or the current quenching phenomenon) and a phase frequency threshold (for timelike currents). Independently, a "macroscopic" formalism [19] was derived that allows one in principle to evaluate the dynamics of any current-carrying string configuration once its equation of state, relating the energy per unit length U to the tension T , is given. This equation of state, for the Witten simple model describing strings, and whose properties are believed to be qualitatively (if not quantitatively) similar to more complicated (and realistic) strings models, was indeed obtained (albeit unfortunately only numerically), so that it has now become possible to study realistically the cosmological importance of superconducting cosmic strings, and their astrophysical, gravitationally induced, signature, since the structure of the spacetime surrounding a string of any kind is also known [20].
The purpose of this article is to show that there exists yet another mechanism by which a string can become not only superconducting but also current-carrying without invoking any extra external field (e.g., electromagnetic), hence the name spontaneous current generation for this mechanism, comparable in many respects to a similar mechanism existing in ³He vortex lines [21]. This phenomenon occurs for particular values of the underlying microscopic parameters, when the current-carrier field is neither a scalar nor a fermionic field (these cases being essentially equivalent due to the two-dimensional description of the vortex), but a charged, coupled vector field such as the intermediate W± of the electroweak theory. To illustrate this mechanism in the most realistic possible way, we shall consider a simple string-forming extension [22] of the standard electroweak theory [23] (the latter model being string-free since its vacuum manifold, isomorphic to the 3-sphere S^3, is simply connected). Other motivations [24], such as supersymmetry or superstring-inspired models, also lead, generally as a low-energy limit, to the model we wish to consider, namely one in which an extra U(1) is gauged, this new symmetry being spontaneously broken. It is interesting to notice that, because of the large number of experimentally testable phenomenological consequences of such a model, the energy scale of the symmetry breaking involved, E_{Z'} say, and thus the energy per unit length of the corresponding strings, is in fact constrained to exceed 300-500 GeV (depending on the couplings) [25], which is actually close to the upper limit provided by the vorton mechanism [12,14], namely E_{Z'} ≲ 10 TeV. We shall first introduce our string-forming model, as well as the vortex solutions themselves, beginning with the simple case where no current is flowing in the strings, i.e., the so-called Nielsen-Olesen [26] solutions, or Kibble-type vortices. We then turn to the spontaneous current generation itself, which is shown to be due to an electromagnetic instability of the vacuum for a massless W field at zero temperature, and finally discuss how the phenomenon is responsible for a spontaneous breaking of the Lorentz boost symmetry along the initially Kibble-like string.
I. KIBBLE VORTICES BEYOND THE ELECTROWEAK MODEL.
The electroweak theory [23] is based on the spontaneous breaking of the SU(2)_L × U(1)_Y symmetry down to the electromagnetic U(1) symmetry by means of an SU(2) doublet Higgs field H. This means that the vacuum has the topology of the quotient group SU(2) × U(1)/U(1), which is isomorphic to SU(2), i.e., it has the topology of the 3-sphere and is therefore simply connected. As a result, topologically stable cosmic strings are not present in this model (and in fact, due to the experimental bound on the Higgs mass M_H ≳ 65 GeV [25], even string-like solutions in this model are dynamically unstable [27,28]). In order to investigate the structure of cosmic strings in a realistic model that takes the electroweak theory into account, it is thus necessary to modify this theory first. There are basically two different approaches that can be followed to extend this model. The first consists in assuming that the Higgs realisation of the symmetry breaking is not fundamental, and in considering instead dynamical symmetry breaking, such as in the chiral approach [29] involving the SU(2)_L × SU(2)_R symmetry. This leads to the existence of semi-topological defects [because only one direction of SU(2)_R is actually gauged], which may be shown [30] to be dynamically stable and moreover superconducting. The second approach, the one we shall follow, is to regard the Higgs representation, and thus the Higgs field H itself, as fundamental, and to extend the gauge group. It turns out that the simplest such extension one can think of, consisting in an extra U(1), also generates topologically stable (and superconducting) cosmic strings [22].
The string-forming model we shall now examine is the following (this section essentially serves to fix the notation used throughout): initially, the symmetry SU(2)_L × U(1)_Y × U(1)_F (with F the extra hypercharge) is broken down to SU(2)_L × U(1)_Y by means of a Higgs field Φ, and this is followed by the usual electroweak phase transition. The model is minimal in the sense that we assign a vanishing F hypercharge to the H field, and symmetrically assume Φ to be an SU(2) singlet. We shall altogether neglect the fermionic sector of the model, but it may be remarked that the hypercharge F, with the previous assignment made on the Higgs fields, coincides with B − L in this sector (up to a normalisation factor absorbable in the fermionic fields), and that even though U(1)_F is broken, the baryonic and leptonic numbers B and L are conserved. The model is also anomaly-free provided one includes a right-handed neutrino.
We therefore start with the Lagrangian density (again, without the fermions), where the (classical) potential between the Higgs fields is as follows (we assume that both phase transitions are second order, so we neglect the logarithmic corrections [2,31] in this zero-temperature effective theory; see, however, Ref. [32] on that point), and we have set the covariant derivative accordingly, with T_i the generators of SU(2)_L in the representation of the particle upon which the derivative acts, and g, g′ and q the gauge coupling constants of SU(2)_L, U(1)_Y and U(1)_F respectively; the kinetic terms of the gauge vectors are expressed through the usual field strengths. The Higgs doublet is understood as H = (H⁺, H⁰), and its vacuum expectation value (VEV) is experimentally known. We are now interested in a vortex solution of this model, of the kind proposed by Nielsen and Olesen [26], for the Φ field. Since we are concerned with classical solutions, it is necessary to fix the gauges. For the string-forming fields, there is a particularly convenient gauge choice: if the vortex solution is taken to be aligned along the z axis (which is always possible since curvature effects can be locally neglected), we can choose a cylindrical coordinate system in which the phase of the Φ field is identified with the angular coordinate θ. The Nielsen-Olesen vortex solution then takes a simple form with n the winding number [1,2,4].
Let us now turn to the electroweak fields. Because of the disjoint structure of the initial invariance, we have not lost any freedom in going to the vortex gauge. We can thus choose the most convenient gauge with regard to the subsequent interpretation, namely the unitary gauge, in which only the neutral component of H is considered. Before going any further in the resolution of the Euler-Lagrange equations for this system, we wish to examine in more detail what occurs in the string's core.
The string solution is defined as the set of points in space where Φ = 0. Moreover, the vacuum (or the false vacuum in the case of the string's core) should represent a minimum of the potential (2). Varying this potential with respect to h and ϕ, we see that the extremization yields two different possibilities: far from the string's core, i.e., in the usual vacuum, the fields take their standard values, whereas in the string's core, where ϕ = 0, h should satisfy Eq. (8) (not taking the kinetic terms into account for the moment), from which we can conclude that two cases may occur in principle. The first case, already studied elsewhere [22], is for f > f_crit, which corresponds to a shift in the SU(2) doublet Higgs VEV at r = 0. The second case, to which we now turn definitively, is for f ≤ f_crit. If the underlying parameters are such that this inequality is satisfied, then there is no real solution to Eq. (8). Thus, one finds that the real minimum of the potential is now at h = 0 as long as ϕ ≤ ϕ_min, where ϕ²_min is given by inserting a nonzero value for σ into Eq. (8) and solving for h = 0. Fig. 1 illustrates the internal string structure obtained when the kinetic terms are included. This figure represents a solution of the field equations derived from the Lagrangian (1) under the gauge assumptions and with the vortex solution (6), with vanishing vector fields A_iµ and B_µ. This solution was obtained by means of a successive over-relaxation method [34], and the distances are in units of the inverse Φ mass (λ_φ v_φ)^{-1}. More details concerning the numerical procedure itself and the stability of the solution can be found in Ref. [22]; here, and in particular in the next section, we shall be mainly interested in what occurs close to the string's core, namely the symmetry restoration. For the time being, let us just remark that since the Higgs field h is real, there is no current associated with it, so the fact that it is trapped in the string, its VEV varying from r = 0 to r → ∞, merely changes the actual value of the string's energy per unit length, but otherwise does not break the Lorentz boost invariance along the string. Therefore, writing the stress-energy tensor in terms of u and v, two unit timelike and spacelike vectors respectively, tangent to the string worldsheet, with U the energy per unit length and T the tension, Lorentz invariance requires, whether there is a Higgs condensate or not, that the equation of state be that of Goto-Nambu, i.e., U = T = const [4,19]. This is important because once the current-generation mechanism which we investigate in the next section has been at work, this degeneracy in the stress-energy tensor eigenvalues is spontaneously lifted, so the Lorentz invariance is spoiled.
The electroweak vacuum surrounding a cosmic string of the kind we have just investigated is in fact not stable. This can be seen as follows: in the standard vacuum, the W± particles are charged and massive because of the VEV of the Higgs field H. Close to the string core, as we have just seen, this VEV vanishes, so the W± particles become charged and massless. As a result, they can be created in pairs through any fluctuation of the electromagnetic field, but since they are charged, they can themselves be considered as the sources of these electromagnetic fluctuations. More precisely, as will be shown in this section, fluctuations in the W field yield corresponding nonvanishing A and Z fields with nonzero gradients. This implies nonzero electric and magnetic fields, which act as negative mass terms for the W particles. The vacuum surrounding a cosmic string is thus unstable, and there is a spontaneous generation of current in the form of W bosons flowing along the strings.
To see how this phenomenon actually occurs, let us concentrate on the stress-energy tensor T_µν, and in particular the energy density U = T_tt. Setting as usual [17] Q(r) = n − (1/2) q C_θ, W±_µ = (A_{1µ} ∓ i A_{2µ})/√2, Z_µ = c A_{3µ} − s B_µ, A_µ = s A_{3µ} + c B_µ, with s ≡ sin θ_W, c ≡ cos θ_W and tan θ_W ≡ g′/g, and using the vortex ansatz (6) gives the energy density, if one considers only a configuration where radial electric and orthoradial magnetic fields are present (i.e., with A_z(r), A_t(r), Z_z(r) and Z_t(r) the only nonvanishing components of the photon and Z fields), where a prime means differentiation with respect to the radial coordinate r. Let us now consider the quadratic terms in W± that are present in Eq. (12), i.e., the effective mass matrix M²_ij. Because of the coupling between the W fields and the gradients of the photon and Z fields, this mass matrix is in fact nondiagonal, and one can easily derive its eigenvalues and, to first order in g, its eigenvectors, where we have set β_a = g(s A′_a + c Z′_a), a = z, t, the radial (respectively orthoradial) component of the electric (resp. magnetic) field.
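To make the role of the nondiagonal coupling explicit, the following schematic 2×2 calculation is included; it is an assumed illustrative form, not the paper's exact M²_ij (whose entries are not reproduced in this extraction), and simply shows how a gradient-induced off-diagonal term β shifts the eigenvalues so that one becomes negative once the Higgs contribution to the diagonal mass vanishes in the core.

```latex
% Schematic 2x2 illustration (assumed form, not the exact M^2_{ij} of the paper):
% a common diagonal mass term m^2 and a gradient-induced off-diagonal coupling \beta.
\[
  M^2 \;=\;
  \begin{pmatrix} m^2 & \beta \\ \beta & m^2 \end{pmatrix},
  \qquad
  \det\!\left(M^2 - \lambda\,\mathbb{1}\right) = 0
  \;\Longrightarrow\;
  \lambda_\pm = m^2 \pm \beta ,
  \qquad
  v_\pm = \tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \pm 1\end{pmatrix}.
\]
% In the symmetry-restored core the Higgs contribution to m^2 vanishes, so
% \lambda_- = -|\beta| < 0 for any nonzero field gradient: the corresponding
% combination of W components acquires a nonpositive-definite (tachyonic) mass,
% which is the instability exploited in the text.
```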
It can be seen from Eqs. (14), (15) and (16) that W_3 has a nonpositive-definite mass, which is not the case for the other components of this gauge field. Therefore, since we are seeking a minimum-energy configuration, it seems safe to assume W_1 = W_2 = 0 = W_θ. Setting, for simplicity, W⁺_3 ≡ W, one obtains a configuration in which β_z and β_t are now arbitrary. We recover the previous results [22], W⁻_r = ±iW⁻_z or W⁻_r = ±iW⁻_t, for the purely magnetic or electric cases respectively (i.e., when only one component of A and Z is explicitly considered).
Under the assumptions given by Eqs. (20), (21) and (22), it is now simple to derive the effective potential for the W field, so this field effectively behaves as a Higgs "scalar" in the region of parameter space where its squared mass is negative. This is indeed possible close to the string's core since there, still in the case where f < f_crit, the Higgs field h vanishes, so to first order in g, Eq. (16) implies that m²_3 is actually negative. Thus, any fluctuation in W, the latter being coupled with the photon, will generate a small fluctuation in A_z, A_t, Z_z and Z_t, which, if the initial perturbation was axisymmetric, will produce an electric or a magnetic field. Since W is effectively massless in the core of the string, the energy in the electromagnetic perturbation is already sufficient to create a W⁺W⁻ pair, which can in turn be seen as the source for the electromagnetic fields. Since these electromagnetic fields are necessary to support the W condensate in the string's core, one is led to the conclusion that a current has been spontaneously generated. We shall now investigate this mechanism in more detail.
III. SPONTANEOUS CURRENT GENERATION.
To exhibit the instability of the electroweak vacuum surrounding a Nielsen-Olesen string (6), we turn to the Euler-Lagrange equations derivable from the Lagrangian (1), which we expand to first order in the coupling constant g and to lowest order in the various fields involved, in order to consider the case of a perturbation in W_3 [given by Eq. (19)]. For the photon we obtain a field equation, and a similar equation applies for the Z field with sin θ_W replaced by cos θ_W, so similar conclusions can be drawn for both fields close to the string's core, in the symmetry-restoration region where h = 0. In fact, because h = 0 there, and neglecting a possible backreaction due to outer-region couplings, the background Nielsen-Olesen string is essentially unaffected by the inclusion of the W, A and Z fields.
We shall now examine a perturbation of W_3 = W of the form W = |W(r)| e^{iωt}. Inserting this form into Eq. (24) yields equations in which we shall assume from now on that ∂_z is ignorable; we will denote differentiation with respect to the radial coordinate r by a prime, whereas a dot will mean a derivative with respect to time. We shall also work in the Lorentz gauge ∇_µ A^µ = 0, since in this gauge the equations for the various components of A are decoupled. Actually, the gauge condition together with the field equation (26), with ∂_z ignorable, yields, upon differentiation with respect to r, an expression for Ȧ; inserting this into Eq. (26) yields an equation for Ä, so that setting A_r = f(t)g(r) gives a separated equation for f, with α a constant and ∆_2 the two-dimensional Laplacian in the transverse plane. Thus f ∝ e^{iαt}, and the radial part obeys a Schrödinger equation with a positive-definite potential ∝ r^{-2}. Therefore, the eigenvalues α² of Eq. (32) are all positive and the string state is stable against A_r perturbations. Thus, neglecting A_r in the resulting stationary configurations is justified, and we shall consistently set A_r = 0 in what follows.
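The stability claim for A_r rests on the radial operator being positive definite. As a quick numerical illustration (a schematic check, not the paper's Eq. (32): the potential is taken to be of the assumed form c/r² with c > 0 on a finite interval with Dirichlet boundary conditions), one can discretize the operator and verify that all eigenvalues α² come out positive.

```python
# Schematic finite-difference check that -g'' + (c/r^2) g = alpha^2 g with c > 0
# has only positive eigenvalues (Dirichlet boundary conditions on a finite box).
# This assumes a potential of the form c/r^2; the exact Eq. (32) is not reproduced here.
import numpy as np

c = 1.0            # assumed positive coefficient of the 1/r^2 potential
R = 20.0           # outer radius of the box (arbitrary units)
N = 1000           # number of interior grid points
r = np.linspace(R / (N + 1), R - R / (N + 1), N)
h = r[1] - r[0]

# Tridiagonal matrix for -d^2/dr^2 plus the diagonal potential c/r^2.
main = 2.0 / h**2 + c / r**2
off = -1.0 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigenvalues = np.linalg.eigvalsh(H)
print("smallest eigenvalue alpha^2 =", eigenvalues[0])
print("all eigenvalues positive? ", bool(np.all(eigenvalues > 0)))
# Both terms of the operator are positive (semi)definite, so every alpha^2 > 0:
# perturbations in A_r oscillate rather than grow, as stated in the text.
```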
Inserting Eq. (29) into Eq. (28) is the last step toward the definite equations describing the dynamics of A_z and A_t when a perturbation in W_3 is applied, and from the resulting equations the spontaneous current mechanism can be clearly exhibited. The linearized field equations (33) and (34) have a first immediate consequence, namely that β_t A_z = β_z A_t. It turns out that this relation is still valid when the whole set of classical field equations is used, so the overall field configuration is in fact determined by the value of the ratio β_z/β_t. We will return to that point later. Moreover, since Eqs. (33) and (34) are inhomogeneous because of the source term due to the W field [a direct consequence of the nonabelian nature of SU(2) × U(1)], the configuration (A_z = 0 and A_t = 0) is not a solution of the field equations, and more generally there is no solution with vanishing gradient. But this is precisely the condition for the squared mass m²_3 to be nonpositive definite. Thus, we know that there exist unstable modes in Eq. (25). These modes will grow exponentially, as we shall now show, as will A_t and A_z, until they reach an equilibrium configuration where the quadratic terms become large enough to stop the instability.
To examine the W instability, we note first that Eq. (25) is not well adapted, since it was written for the usual components of W_µ. Instead, we write an equivalent linearized action [obtained by retaining only the lowest-order terms in the Lagrangian (1)] for the field W = W⁺_3 alone, whose variation yields the corresponding Euler-Lagrange equation (36). Although it is impossible to solve Eq. (36) exactly, some information regarding the transition from the nonconducting state to the current-carrying state can be obtained from it if one makes an "adiabatic" hypothesis, namely by assuming that the transition is slow enough that the time derivatives of the gauge fields can be neglected compared to their spatial gradients. Under this hypothesis, setting W = ξ(t)ρ(r) into Eq. (36) yields an equation in which the function ζ depends on the radial coordinate r only. Thus, the oscillatory modes ξ ∝ e^{iωt} satisfy a dispersion relation whose solutions have an imaginary part for ζ > (c/r + 2ǫ)²/(4b), i.e., in the low-frequency limit (thereby justifying the "adiabatic" hypothesis). In the limit of zero frequency (ω → 0), we can estimate roughly, in order of magnitude, the expected value of the timescale necessary for the string to become current-carrying (namely τ ∼ ω^{-1}): assuming the current-carrier field to have an amplitude [17] |W| ∼ M_W, and taking r_0 to be the typical distance over which the fields vary, Eqs. (33) and (34) give A_z ∼ g s r_0 M_W², and we find a characteristic time independent of the string thickness, an expected result since the background string and the electroweak fields are decoupled in the core (again, neglecting backreaction). It should be mentioned that for coupling values f > f_crit the same mechanism actually applies, but in this case the initial configuration is only metastable, and although the current will definitely be spontaneously generated, it will be through tunnelling. As a result, the lifetime of the non-current-carrying configuration is in fact increased by a factor depending on the string's thickness and the value of h at r = 0, these giving the order of magnitude of the expected potential barrier and its width: the standard WKB approximation then gives an extra exponential factor in Eq. (42).
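The statement that the build-up time depends, to lowest order, only on M_W and the fine structure constant suggests a simple dimensional estimate. The sketch below assumes τ ∼ (e M_W)⁻¹ with e² = 4πα in natural units, i.e. it drops all order-one factors and does not reproduce the explicit expression of Eq. (42); it merely converts such an estimate to seconds.

```python
# Back-of-the-envelope estimate of the current build-up timescale, assuming
# tau ~ 1 / (e * M_W) in natural units. Order-one factors and the exact form of
# Eq. (42) are not reproduced here; this is a dimensional-analysis sketch only.
from math import sqrt, pi

alpha = 1.0 / 137.036          # fine structure constant
e = sqrt(4.0 * pi * alpha)     # electromagnetic coupling in natural units
M_W_GeV = 80.4                 # W boson mass in GeV
hbar_GeV_s = 6.582e-25         # hbar in GeV*s, to convert GeV^-1 to seconds

tau_natural = 1.0 / (e * M_W_GeV)          # in GeV^-1
tau_seconds = tau_natural * hbar_GeV_s

print(f"tau ~ {tau_natural:.3e} GeV^-1 ~ {tau_seconds:.1e} s")
# Any estimate of this kind is microscopic (of order 1e-26 s), so on
# cosmological scales the current switches on essentially instantaneously.
```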
IV. INTERNAL STRING STRUCTURE.
The spontaneous current generation mechanism we have just discussed has in fact many interesting consequences, including, we believe, cosmological ones (notably in the framework of the vorton problem [12], which becomes even more unavoidable in this context), and in this section we wish to emphasize a particular effect, namely that generating a current this way spontaneously breaks the Lorentz boost invariance along the string. The basic reason this occurs is that the particle that gets trapped in the string is a W ± µ , i.e., a vector particle, and a nonvanishing VEV for a vector picks out a privileged direction in spacetime. We shall also exhibit the internal microscopic structure of the string and compare it with what is obtained in the simple Witten [10,17] bosonic toy model.
As was already said, the current generation phenomenon spontaneously breaks the Lorentz boost symmetry along the string: before the W condenses in the string's core, the energy per unit length and the tension are equal, so a boost in the z direction does not change the physics of the system. Now, when a perturbation in W is applied, as we have just seen, the VEVs of the W and A fields increase exponentially, so the degeneracy of the stress-energy tensor is lifted exponentially as time passes, until the system again reaches a stationary configuration. It is therefore a genuinely spontaneous mechanism, because it does not require any external field, nor does it even require an expanding Universe (as is the case for the usual symmetry breaking Higgs mechanism).
A stationary configuration obtained this way consists of a W field together with, as argued before, any value of the ratio β z /β t . In fact, it turns out that the only thing one needs to know in order to determine the configuration (nearly) entirely is whether this ratio is less than or greater than unity, for once this is known it suffices to apply a boost along the string to remove one of the fields A z or A t . As a result, the only interesting cases are the magnetic case, for which one can always set β t = 0 and A t = Z t = 0, the electric case, having β z = 0 and A z = Z z = 0, and the null or lightlike case with β z = β t ≡ β, A z = A t ≡ A and Z z = Z t ≡ Z, as can be seen from Eqs. (33) and (34). It can then be shown [17,22] that the most general configuration takes a form in which, without loss of generality [17], the phase function ψ can be chosen as ψ = ωt − kz. The "energy per unit length" then follows, and of the initial 18 field functions (ϕ, h, C µ , W ± µ , A µ , and R µ ), the internal microscopic structure is fully determined by the knowledge of 6 field functions [ϕ(r), h(r), Q(r), Υ(r), P (r) and R(r)] and two free parameters, namely the winding number n and the state parameter ν. This is to be compared with the original Witten bosonic model, whose structure already requires the knowledge of 4 field functions and the same two free parameters, with very similar field equations. Thus we believe that, apart from the spontaneous current generation mechanism discussed in the present article, most qualitative conclusions regarding this simple model should apply as well to this more realistic model.
CONCLUSIONS.
In examining the internal structure of cosmic strings arising in the simplest string-forming extension of the standard electroweak model, we have found that, because of the nonabelian nature of SU (2) × U (1), the field W can condense spontaneously in the string's core if the coupling constant between the string-forming Higgs field and the usual SU (2) doublet Higgs is less than a critical value. This phenomenon can be understood in the following way: for certain values of the coupling constants between the string-forming Higgs field Φ and the SU (2) doublet Higgs field H, the latter has a vanishing VEV close to the string's core, so the initial SU (2) × U (1) symmetry is restored there. Therefore, the intermediate vector bosons, just like the photon, remain massless in this region. Any excitation of the photon field will thus have enough energy to generate a W + W − pair through vacuum fluctuations. In turn, the effectively massless W particle, being charged, is responsible for the existence of a nonvanishing electromagnetic field. This turns out to be an unstable fluctuation mode, and nonzero VEVs for W and the photon therefore build up spontaneously.
By using the field equations for the electroweak fields in the symmetry-restored region, we have been able to exhibit this instability explicitly, and to estimate what we believe to be a lower bound on the time necessary for the current to be generated. This timescale is, as expected, independent of the underlying string parameters, provided the latter are such that the symmetry restoration mechanism actually occurs. It should be remarked that, because the phenomenon described here is essentially electromagnetic and involves only the W particle, the timescale found could have been deduced on dimensional grounds, namely τ ∼ (eM W ) −1 . Although this is a huge time compared to the characteristic length of the string if the underlying string-forming theory is at the GUT scale, it is still sufficiently short to be irrelevant for cosmological considerations. Thus, current-carrying strings seem quite generic in string-forming GUT models, since the potential (2) is in fact very general even as a low energy limit and, as we have seen, the current formation mechanism is independent of the background string structure.
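As a rough numerical illustration of this dimensional estimate (the coupling, mass and GUT-scale values below are standard reference numbers assumed for the example, not taken from this work), one can convert τ ∼ (eM W ) −1 to seconds and compare it with the light-crossing time of a GUT-scale string core:

# Order-of-magnitude check of tau ~ 1/(e*M_W) in natural units (hbar = c = 1).
# All numerical inputs are standard reference values assumed for illustration.
HBAR_GEV_S = 6.582e-25      # hbar in GeV*s, converts 1/GeV to seconds
e = 0.31                    # electromagnetic coupling, e = sqrt(4*pi*alpha)
M_W = 80.4                  # W boson mass in GeV
M_GUT = 1e16                # illustrative GUT scale in GeV

tau = HBAR_GEV_S / (e * M_W)       # current-generation timescale, ~2.6e-26 s
t_core = HBAR_GEV_S / M_GUT        # core light-crossing time, ~6.6e-41 s

print(f"tau ~ {tau:.1e} s, core crossing time ~ {t_core:.1e} s, ratio ~ {tau/t_core:.1e}")

The ratio of some 10^14 illustrates the statement above: the delay is enormous on the scale of the string core, yet utterly negligible on cosmological timescales.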
Considering the results of Ref. [22] and the present calculation, it can be concluded that, for any value of the coupling between the string-forming theory and the electroweak fields, the resulting strings are superconducting in the sense of Witten, whether the current builds up through tunneling (high-frequency metastability [22]) or through instability. Thus, if cosmic strings exist, and if they are not arbitrarily decoupled from the low energy physics (a requirement of "naturalness"), then they are superconducting. Since the present knowledge in high energy physics tells us that approximately half of the plausible GUT theories contain cosmic strings, it means that we can estimate the existence probability of superconducting cosmic strings to be also of the order 1/2.
A final remark seems appropriate at this point: the current generation we have exhibited here relies entirely on the nonabelian nature of SU (2). This means that for a string-forming GUT the mechanism will exist as well, since GUT models usually involve large unifying groups with nonabelian couplings, and various Higgses. Therefore, gauge bosons having masses of the order of the GUT scale should spontaneously condense in the string core, within a timescale given this time by τ ∼ (gM GUT ) −1 , where g is the GUT group coupling constant. The cosmological relevance of such effects is then obvious, since many of these gauge bosons are responsible for baryon number violation, so that in particular these strings would enhance the primordial baryon number asymmetry.
DEPDC1 and KIF4A synergistically inhibit the malignant biological behavior of osteosarcoma cells through Hippo signaling pathway
The treatment of osteosarcoma (OS) is still mainly surgery combined with systemic chemotherapy, and gene therapy is expected to improve the survival rate of patients. This study aimed to explore the effect of DEP domain 1 protein (DEPDC1) and kinesin super-family protein 4A (KIF4A) in OS and understand the underlying mechanism. The expression of DEPDC1 and KIF4A in OS cells was detected by RT-PCR and western blot. The viability, proliferation, invasion and migration of OS cells and tube formation of human umbilical vein endothelial cells (HUVECs) after the indicated treatments were in turn detected by CCK-8 assay, EdU staining, wound healing assay, transwell assay and tube formation assay. The interaction between DEPDC1 and KIF4A was predicted by STRING and confirmed by co-immunoprecipitation. The expression of epithelial-mesenchymal transition (EMT)-related proteins, tube formation-related proteins and Hippo signaling pathway proteins was detected by western blot. As a result, the expression of DEPDC1 and KIF4A was increased in U2OS cells. Down-regulation of DEPDC1 suppressed the viability, proliferation, invasion and migration of U2OS cells and tube formation of HUVECs, accompanied by increased expression of E-cadherin and decreased expression of N-cadherin, Vimentin and VEGF. DEPDC1 was confirmed to interact with KIF4A. Upregulation of KIF4A partially reversed the effect of DEPDC1 interference on the above biological behaviors of U2OS cells. Down-regulation of DEPDC1 promoted the expression of p-LATS1 and p-YAP in the Hippo signaling pathway, which was reversed by upregulation of KIF4A. In conclusion, down-regulation of DEPDC1 inhibited the malignant biological behavior of OS cells through the activation of the Hippo signaling pathway, which could be reversed by upregulation of KIF4A.
Introduction
Osteosarcoma (OS) is a kind of malignant tumor derived from mesenchymal stem cells and most common in children and adolescents [1]. About 80% of osteosarcomas occur in the long bones of the extremities, most often in the long metaphyseal region around the knee joint [2]. The annual incidence of osteosarcoma is 1-4 per million [3,4]. With the progress of medical treatment, neoadjuvant chemotherapy can effectively improve the 5-year survival rate of patients, but there are still 30-40% of patients with tumor recurrence and metastasis, especially those with lung metastasis, which often lead to respiratory failure and poor prognosis [5,6]. Therefore, further research on the pathogenesis of OS will help identify new diagnostic and therapeutic targets, improve the prognosis and improve the survival rate of patients.
The DEP domain 1 protein (DEPDC1) is an oncoprotein containing a DEP domain; it has not been detected in 24 normal human tissues, including normal lung tissue, with the exception of the testis [7][8][9], and was first reported in bladder cancer [10]. Studies have shown that DEPDC1 is involved in a variety of cell functions, such as promoting cell proliferation and inhibiting apoptosis [11][12][13]. The expression of DEPDC1 is markedly upregulated in several cancers, and a high expression level of DEPDC1 is closely related to cancer progression, including in hepatocellular carcinoma [14], bladder cancer [15], lung adenocarcinoma [16] and gastric cancer [17]. A recent study indicated that DEPDC1, one of the hub genes, is highly expressed in osteosarcoma and that its high expression is associated with poor prognosis [18]. At present, its specific role and mechanism in OS have not been studied.
Dysregulation of kinesin super-family protein 4A (KIF4A) expression can induce mitosis and aneuploid cell formation [19]. The high expression of KIF4A could be used as a diagnostic and prognostic marker of OS, and the silencing of KIF4A could inhibit the invasion and migration of OS cells, and induce their apoptosis and cell cycle arrest [20]. High expression of KIF4A in OS predicted poor prognosis and promoted tumor growth by activating the MAPK pathway [21]. KIF4A promoted cell proliferation and migration of esophageal squamous cell carcinoma through Hippo signaling pathway [22]. Hippo signaling pathway was also affected by the changes in gene expression and DNA methylation in cholangiocarcinoma [23]. Hippo signaling pathway was involved in the promotion effect on cell proliferation and invasion of OS [24].
Therefore, this study aimed to explore the effect of DEPDC1 and KIF4A in OS and to understand the underlying mechanism.

Cell transfection

To knock down DEPDC1 and overexpress KIF4A in osteosarcoma cells, short hairpin (sh)RNAs targeting DEPDC1 (sh-DEPDC1#1 and sh-DEPDC1#2), pcDNA3.1-KIF4A, as well as the corresponding negative controls (sh-NC and pcDNA3.1-NC), were obtained from RIBIO (Guangzhou, China). When OS cells reached about 80% confluence, they were transfected with the different vectors using Lipofectamine 3000 according to the manufacturer's protocol. The transfection efficiency was confirmed by RT-qPCR and western blot 48 h later.
RT-qPCR
After the indicated treatment, total RNA was extracted from cells using Trizol reagent (Invitrogen), and the PrimeScript® RT reagent Kit (Takara) was used to reverse-transcribe the RNA into cDNA according to the manufacturer's instructions. Quantitative real-time PCR (RT-qPCR) was performed with SYBR Green qPCR Master Mix on a 7500 Thermocycler (Applied Biosystems). The relative expression levels of DEPDC1 and KIF4A, normalized to GAPDH, were calculated by the 2^−ΔΔCt method [25].
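The fold-change computation referred to here follows the standard 2^−ΔΔCt scheme; the short sketch below illustrates it, with the Ct values being hypothetical placeholders rather than data from this study.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene versus control by the 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize target to GAPDH (reference)
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                  # treated relative to control
    return 2 ** (-dd_ct)

# Hypothetical Ct values for illustration only
print(relative_expression(22.1, 18.0, 24.6, 18.1))  # ~5.3-fold higher than control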
Western blot
The treated cells were lysed with ice-cold RIPA buffer for 30 min (Beyotime) to obtain the proteins, the concentration of which was determined by a bicinchoninic acid protein assay kit (Beyotime). Proteins (20 μg) were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred to a polyvinylidene fluoride (PVDF) membrane (Millipore, Bedford, MA, USA). After blocking with 5% nonfat milk at room temperature, the membrane was incubated with primary antibodies against DEPDC1, PCNA, E-cadherin, N-cadherin, Vimentin, VEGF, KIF4A, p-LATS1, LATS1, p-YAP, YAP, p-MST1, MST1 and GAPDH overnight at 4 °C and then with a specific horseradish peroxidase (HRP)-conjugated secondary antibody at room temperature for 1 h. Finally, the proteins were detected using enhanced chemiluminescence reagents and band density was analyzed with ImageJ 1.8.0 software.
CCK-8 assay
Cells (1 × 10^5 cells/well) were seeded into 96-well plates and cultured for 24 h. After the respective treatment for 24, 48 or 72 h, cells were incubated with CCK-8 solution (10 μL) at room temperature for another 4 h. Finally, the absorbance was determined at 450 nm using a microplate reader.
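Absorbance readings from such an assay are typically converted to relative viability against untreated controls; the minimal sketch below assumes a blank-corrected scheme and hypothetical OD450 values, neither of which is described in this study.

def relative_viability(a_sample, a_control, a_blank):
    """Percent viability from CCK-8 absorbance at 450 nm, blank-corrected."""
    return 100.0 * (a_sample - a_blank) / (a_control - a_blank)

# Hypothetical OD450 readings for illustration
print(relative_viability(a_sample=0.62, a_control=0.95, a_blank=0.08))  # ~62%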
EdU staining
Cells received the respective treatment, were seeded into 96-well plates (1 × 10^5 cells/well) and incubated for about 24 h. Then, cells were incubated with EdU labeling agent for another 4 h. Next, cells were fixed with 4% paraformaldehyde for 15 min at room temperature and then incubated with 0.5% Triton X-100 in PBS for 15 min at room temperature.
Wound healing assay
After the indicated treatment, cells were seeded into 6-well plates (5 × 10^4 cells) and cultured to nearly 100% confluence. A sterile plastic micropipette tip was used to scratch a wound along the center line of the well, followed by incubation for 24 h. Images of the cells were photographed at 0 and 24 h using an optical microscope (Olympus Corporation).
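Wound-healing images such as these are commonly quantified as percent closure of the scratch area; the minimal sketch below assumes wound areas have been measured at 0 and 24 h (for example in ImageJ), a quantification step that is not detailed in this study.

def wound_closure_percent(area_0h, area_24h):
    """Percent of the initial scratch area closed after 24 h."""
    return 100.0 * (area_0h - area_24h) / area_0h

# Hypothetical wound areas in pixels for illustration
print(wound_closure_percent(area_0h=120_000, area_24h=45_000))  # 62.5%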
Transwell assay
After the respective treatment, cells were digested with pancreatin and suspended in serum-free medium. A 200 μL aliquot of the cell suspension (5 × 10^3 cells) was added to the upper chamber coated with Matrigel, and the lower chamber was filled with 600 μL DMEM containing 10% FBS. After incubation for 24 h at 37 °C, the invaded cells were fixed with 4% formaldehyde and stained with 0.1% crystal violet for 30 min (Beyotime). Subsequently, the invading cells were observed with an optical microscope (Olympus Corporation).
Tube formation assay
After respective treatment, the supernatant of cells was obtained and used for the incubation of HUVECs in 96-well plates coated with Matrigel for 48 h. The tubes formed by the HUVEC cells were observed by an optical microscope (Olympus Corporation).
Co-immunoprecipitation
After respective treatment, total proteins in cells were extracted through immunoprecipitation buffer for 30 min on ice. Then, proteins were incubated with anti-DEPDC1 or anti-KIF4A overnight at 4 °C, followed by incubation with protein A/G magnetic beads (Thermo Scientific) at 4 °C for another 3 h. Subsequently, bead-antibody complexes were washed by immunoprecipitation buffer. The above mixture was then centrifuged at 3000g to collect the immunoprecipitates, which was analyzed by western blot.
Statistical analysis
All data were presented as mean ± SD, and GraphPad Prism 8.0.1 software was used to analyze the experimental data. A Student's t test was performed for comparisons between two groups, and ANOVA followed by Tukey's post hoc test was carried out for comparisons among multiple groups. P < 0.05 was considered statistically significant.
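For readers who prefer an open-source equivalent of these GraphPad tests, the minimal sketch below uses SciPy and statsmodels; the replicate values and group labels are placeholders, not data from this study.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate measurements for illustration only
sh_nc     = np.array([1.00, 0.95, 1.05])
sh_depdc1 = np.array([0.45, 0.50, 0.40])
rescue    = np.array([0.80, 0.75, 0.85])   # sh-DEPDC1 + Ov-KIF4A

# Two-group comparison: Student's t test
t_stat, p_two_groups = stats.ttest_ind(sh_nc, sh_depdc1)

# Multi-group comparison: one-way ANOVA followed by Tukey's post hoc test
f_stat, p_anova = stats.f_oneway(sh_nc, sh_depdc1, rescue)
values = np.concatenate([sh_nc, sh_depdc1, rescue])
groups = ["sh-NC"] * 3 + ["sh-DEPDC1"] * 3 + ["sh-DEPDC1+Ov-KIF4A"] * 3
print(p_two_groups, p_anova)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))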
DEPDC1 was highly expressed in OS cells, and down-regulation of DEPDC1 inhibited cell proliferation.
The expression of DEPDC1 was upregulated in osteosarcoma cells (U2OS, SaOS-2 and MG-63) compared with that in hFOB1.19 group, and the highest DEPDC1 expression was observed in U2OS cells ( Fig. 1A and B). After U2OS cells were transfected with sh-DEPDC1#1 and sh-DEPDC1#2, the expression of DEPDC1 was down-regulated, and the lower expression of DEPDC1 was observed in sh-DEPDC1#2 group ( Fig. 1C and D). Therefore, sh-DEPDC1#2 was selected for the subsequent experiment. Down-regulation of DEPDC1 suppressed the viability and proliferation of U2OS cells ( Fig. 1E and F). The expression of PCNA was also inhibited by down-regulation of DEPDC1 (Fig. 1G).
Down-regulation of DEPDC1 inhibited metastasis of OS cells.
When U2OS cells were transfected with sh-DEPDC1, down-regulation of DEPDC1 inhibited the migration and invasion of U2OS cells ( Fig. 2A and B). The expression of E-cadherin was upregulated while the expression of N-cadherin and Vimentin was down-regulated by the down-regulation of DEPDC1 in U2OS cells (Fig. 2C).
Down-regulation of DEPDC1 inhibited angiogenesis in OS cells.
The images of tube formation of HUVECs are presented in Fig. 3A. The number of tubes was reduced by the down-regulation of DEPDC1 (Fig. 3B). The expression of VEGF was also reduced in U2OS cells transfected with sh-DEPDC1 (Fig. 3C).

DEPDC1 interacted with KIF4A in OS cells.

STRING predicted a potential interaction between DEPDC1 and KIF4A (Fig. 4A). The expression of KIF4A in U2OS cells was higher than that in the hFOB1.19 group (Fig. 4B and C). KIF4A or DEPDC1 was detected in U2OS cell protein samples immunoprecipitated with anti-DEPDC1 or anti-KIF4A, indicating that DEPDC1 could interact with KIF4A (Fig. 4D and E). Down-regulation of DEPDC1 suppressed KIF4A expression, while KIF4A overexpression had no obvious effect on DEPDC1 expression (Fig. 4F and G).
Upregulation of KIF4A partially reversed the effect of DEPDC1 interference on proliferation of OS cells
The expression of KIF4A was upregulated in U2OS cells transfected with Ov-KIF4A ( Fig. 5A and B). Upregulation of KIF4A improved the decreased viability and proliferation of U2OS cells induced by the down-regulation of DEPDC1 (Fig. 5C and D). The expression of PCNA was upregulated in U2OS cells co-transfected with sh-DEPDC1 and Ov-KIF4A (Fig. 5E).
Upregulation of KIF4A partially reversed the effect of DEPDC1 interference on metastasis and angiogenesis of OS cells
Upregulation of KIF4A promoted the migration and invasion of U2OS cells transfected with sh-DEPDC1 ( Fig. 6A and B) by decreasing the expression of E-cadherin and increasing the expression of N-cadherin and Vimentin (Fig. 6C). Upregulation of KIF4A also increased the number of tube and promoted the expression of VEGF in U2OS cells transfected with sh-DEPDC1 (Fig. 6D-F).
DEPDC1 and KIF4A synergistically regulated Hippo signaling pathway
Down-regulation of DEPDC1 increased the expression of p-LATS1 and p-YAP, while decreased the YAP expression in U2OS cells, which was reversed by upregulation of KIF4A. The expression of LATS1, p-MST1 and MST1 was not obviously changed in U2OS cells whether or not receiving transfection (Fig. 7).
Discussion
In the current study, we found that DEPDC1 was upregulated in OS cell lines. The down-regulation of DEPDC1 inhibited the viability, migration, invasion, and proliferation of OS cells and tube formation of HUVECs. Our further study clarified that DEPDC1 interacted with KIF4A. Upregulation of KIF4A could weaken the effect of DEPDC1 interference, restoring the viability, migration, invasion, and proliferation of OS cells and tube formation of HUVECs. DEPDC1 has been shown to be highly expressed in most tumors. Amisaki et al. found that DEPDC1 expression was upregulated in hepatocellular carcinoma tissues compared with normal livers, and the high expression of DEPDC1 in tumor tissues was associated with tumor progression and poor prognosis [26]. DEPDC1 was reported to be overexpressed at both mRNA and protein levels in nasopharyngeal carcinoma tissues compared with normal or non-tumor tissues, and siRNA-mediated deletion of DEPDC1 significantly inhibited the proliferation of the nasopharyngeal carcinoma cell lines CNE-1 and HNE-1 [27]. DEPDC1 interference suppressed hepatocellular carcinoma cell proliferation, colony formation and invasion in vitro, as well as HUVEC angiogenesis [28]. Overexpression of DEPDC1 induced EMT of HepG2 cells, with upregulated expression of N-cadherin and Vimentin, down-regulated expression of E-cadherin and Slug, and promoted capillary tubule formation [14]. In this study, we found that DEPDC1 expression was increased in OS cell lines. After knockdown of DEPDC1, the proliferation, invasion and migration of U2OS cells and HUVEC tube formation were all obviously inhibited. Previous studies indicated that enhanced KIF4A expression predicted poor prognosis and promoted tumor growth in OS, and that down-regulation of KIF4A could suppress the colony formation, invasion, migration and cell cycle of OS cells [20,21]. Here, overexpression of KIF4A counteracted the effect of DEPDC1 interference, promoting the proliferation, invasion and migration of U2OS cells and HUVEC tube formation. The Hippo signaling pathway is an evolutionarily highly conserved pathway that regulates organ size and maintains tissue homeostasis by controlling cell proliferation and apoptosis [29]. In recent years, more and more studies have found that the Hippo pathway plays an important role in bladder cancer, lung cancer, breast cancer, liver cancer and colorectal cancer [30,31]. Hsa_circ_0005273 could upregulate the expression of YAP1 through miR-200a-3p, thus promoting the progression of breast cancer [32]. The oncoprotein CagA could promote YAP expression, which promoted the EMT of gastric cancer [33]. Verteporfin (VP), a YAP-specific inhibitor, inhibited YAP-induced bladder cancer cell growth and invasion [34]. It has been found that the Hippo signaling pathway can regulate the proliferation, apoptosis, invasion and metastasis of OS cells [35]. YAP and TAZ are two important transcriptional co-activators that are negatively regulated by the Hippo signaling pathway. High expression of YAP/TAZ could promote cancer development, and inhibition of YAP and TAZ might be useful to treat tumors with high YAP and/or TAZ activity [36]. The present study indicated that down-regulation of DEPDC1 activated the Hippo signaling pathway to increase p-LATS1 and p-YAP, thereby inhibiting YAP and suppressing the proliferation of OS cells. Also, overexpression of KIF4A could reverse the effect of down-regulation of DEPDC1 on the Hippo signaling pathway.
In conclusion, the expression of DEPDC1 and KIF4A was increased in OS cells. Down-regulation of DEPDC1 inhibited the proliferation, invasion and migration of OS cells and HUVECs tube formation through the activation of Hippo signaling pathway, which could be reversed by upregulation of KIF4A. The expression of DEPDC1 and KIF4A in osteosarcoma tissues and the correlation of them will be investigated in future study.
Inflammatory, Oxidative Stress and Small Cellular Particle Response in HUVEC Induced by Debris from Endoprosthesis Processing
We studied inflammatory and oxidative stress-related parameters and the cytotoxic response of human umbilical vein endothelial cells (HUVEC) to a 24 h treatment with milled particles simulating debris involved in sandblasting of orthopedic implants (OI). We used different abrasives (corundum (Al2O3), used corundum retrieved from removed OI (u. Al2O3), and zirconia/silica composite (ZrO2/SiO2)). Morphological changes were observed by scanning electron microscopy (SEM). Concentrations of the interleukins IL-6 and IL-1β and Tumor Necrosis Factor (TNF)-α were assessed by enzyme-linked immunosorbent assay (ELISA). Activities of cholinesterase (ChE) and glutathione S-transferase (GST) were measured by spectrophotometry. Reactive oxygen species (ROS), lipid droplets (LD) and apoptosis were measured by flow cytometry (FCM). Detachment of the cells from glass and budding of the cell membrane did not differ between the treated and untreated control cells. Increased concentrations of IL-1β and IL-6 were found after treatment with all tested particle types, indicating an inflammatory response of the treated cells. Increased ChE activity was found after treatment with u. Al2O3 and ZrO2/SiO2. Increased GST activity was found after treatment with ZrO2/SiO2. Increased LD quantity but not ROS quantity was found after treatment with u. Al2O3. No cytotoxicity was detected after treatment with u. Al2O3. At the concentrations added to the in vitro cell cultures, the tested materials were found to be non-toxic but bioactive, and therefore prone to induce a response of the human body to OI.
Introduction
With the increased use of orthopedic implants (OI) due to increased life expectancy, biocompatibility and relevance of materials are gaining interest. OI materials must be biologically acceptable to minimize adverse local tissue reactions and robust enough to support weight bearing during common activities of daily life [1]. Modern materials for joint replacement are well tolerated and accepted by the body if they are in bulk form, mechanically stable and sterile [1]. However, it was found that fibrotic tissue often surrounds surgical implants, which was connected to metallic wear particles released into the tissue that surrounds OI [2]. Tissue damage can trigger inflammation which can result in fibrosis through different pathways [3]. Excessive wear of OI that produces particle debris (and consequently, osteolysis) results in aseptic loosening of the OI. Pain and reduced mobility indicate a revision surgery [4,5] that represents an additional risk to patients, also regarding thromboembolic events, infection, dislocation and death [6]. The choice of the materials used and surface elaboration are therefore key to the longevity of the endoprosthesis.
The main materials used for endoprostheses are metal, polyethylene and ceramic. Follow-up studies of populations of hips with an implanted prosthesis showed different survival rates and different problems for different materials and their combinations (reviewed in [7]). Ceramics showed promising results regarding wear and related loosening; a 6-year (mid-term) follow-up study including 310 hips with ceramic head and liner prostheses showed that 99.0% of the hips had not been associated with re-operation and there was no radiological evidence of osteolysis or loosening [8]. It was reported that alumina−alumina ceramic OI generated 400 times fewer wear particles than metal-polyethylene OI, which resulted in a lower rate of periprosthetic osteolysis in alumina−alumina OI [9]. Zirconia-toughened alumina ceramic was found to release 1 µg/year of wear debris into circulation and surrounding tissue, which is considered very low [10].
The quality of the prostheses could be improved by focusing on the microscopic properties of the interfaces (between the prosthesis parts and between the prosthesis and tissues). In ceramics, most of the debris was found to consist of particles sized between 0.1 and 10 µm, but larger particles sized up to 1 mm were also observed [11]. Besides toxic effects, inflammatory and oxidative stress responses are also of importance. A review of the cellular response summarized that, upon biomaterial implantation, a sequence of events is initiated with an injury, followed by blood−material interactions, provisional matrix formation and an acute innate inflammatory response acting on monocytes, fibroblasts, osteoblasts, osteoclasts and mesenchymal stem cells [12]. The activation of macrophages was suggested as the dominant mechanism in periprosthetic inflammation [13]. It begins with the interaction of particles with membrane receptors (such as CD14 and toll-like receptors), followed by the release of pro-inflammatory cytokines (e.g., tumor necrosis factor (TNF)-α, IL-1β, IL-6, prostaglandin E (PGE)-2), growth factors (macrophage colony stimulating factor 1, M-CSF), pro-osteoclastic factors (receptor activator of nuclear factor kappa B ligand, RANKL) and chemokines (e.g., IL-8, macrophage inflammatory protein MIP-1α, monocyte chemoattractant protein MCP-1). Moreover, phagocytosis of wear debris takes place [12]. The suggested underlying mechanisms are the upregulation of the transcription factor NF-κB and the activation of inflammasome danger signaling. This leads to decreased osteoblast function and increased osteoclast activity [14]. It was suggested that inefficient phagocytosis with excessive production of inflammatory mediators may lead to sustained inflammation and, eventually, fibrotic changes [3,15].
In a study involving murine macrophages (RAW264.7) exposed to corundum micro- and nanoparticles, aliquots of cell culture supernatants were tested for different cytokines, growth factors and nitric oxide [16]. Exposure to corundum particles led to a decrease in the number of vital macrophages, an increase in the number of giant cells and the formation of micron-sized aggregates in the cell culture medium [16].
Human articular chondrocytes were shown to attach to composite Al 2 O 3 , SiO 2 , CaAl 2 Si 2 O 8 , Ca 3 (PO 4 ) 2 , Ca 2 Al 4 O 7 and NaAlSiO 4 surfaces [17]. However, elaboration of the surface turned out to be of importance. It was shown that corundum sandblasting of surfaces significantly increased surface wettability, MG63 cell attachment and proliferation and alkaline phosphatase activity in comparison with the control surface [18]. Corundum contamination was found on and under the surface of new and retrieved dental implants [19]. The study of new and retrieved dental implants and restorative materials (commercially pure titanium (cpTi), the Ti 6 Al 4 V alloy and CoCrMo) by light microscopy, SEM and energy-dispersive spectroscopy showed that the surfaces of the Ti and Ti 6 Al 4 V implants were affected by corundum blasting [20,21]. Moreover, contamination of new and early removed femoral components made of the alloy Ti 6 Al 7 Nb with corundum wear particles was found also 5 to 20 µm below the surface [22], as the hard corundum wear particles were embedded into the softer matrix of the Ti 6 Al 7 Nb alloy during the sandblasting process. The microstructural analysis of the cross-sections showed that the cracks around the built-in particulate matter extend to the surface. Such cracks allow corrosion and represent sites for the attachment and colonization of bacteria, the formation of biofilm, periprosthetic infection and implant failure. Therefore, in addition to aseptic loosening of the implant, the corundum particles can cause periprosthetic infection [22]. As unavoidable generation of wear debris from any part of a prosthesis leads to prosthesis failure, histological analysis of the tissue obtained during implant revision surgery is considered important for wear-particle identification, and for the classification of biological reactions to wear particles [23].
It is therefore important to study the effect of debris on cells. In a preliminary study, we observed morphological changes in HUVEC exposed to corundum particles [24]. With the aim of better understanding the mechanisms underlying the effects of sandblasting debris contamination on osteointegration, we here address the effect of three types of particles: Al 2 O 3 , white alumina; u. Al 2 O 3 , white alumina previously used in the process of sandblasting of OI; and ZrO 2 /SiO 2 , zirconia/silica composite, on the morphology, inflammatory response, oxidative stress response and cytotoxicity of HUVEC.
Cell Culture
HUVEC were a kind gift of Snežna Sodin Šemrov from the Immunology Laboratory, Department of Rheumatology, University Medical Centre Ljubljana, Ljubljana, Slovenia. They were purchased from Lonza, Basel, Switzerland, No. 480242. HUVEC were confirmed to be mycoplasma negative using the MycoAlert™ Kit (Lonza, Basel, Switzerland). Cells were cultured with an initial concentration of 3 × 10 4 cells/cm 2 and allowed to attach and grow for 24 h in Dulbecco's modified Eagle's medium (Sigma Aldrich, St. Louis, MO, USA), supplemented with 4 mM L-glutamine and 5% (v/v) fetal bovine serum (FBS) (Sigma Aldrich, St. Louis, MO, USA) at 37 • C in an incubator with a humidified atmosphere containing 5% CO 2 . For ROS assay positive controls, cells were treated with 5 mM H 2 O 2 for 30 min. For LD assay positive controls, cells were treated with 75 µM oleic acid for 24 h. For apoptosis positive control, cells were treated with Staurosporine 10 µM for 24 h. Experiments with cells were made in a duplicate or triplicate.
Preparation of Particles
Three different types of particles, Al 2 O 3 (white alumina), u. Al 2 O 3 (white alumina previously used in the process of sandblasting of OI) and ZrO 2 /SiO 2 , were obtained from FerroECOBlast, Dolenjske Toplice, Slovenia. The original-sized particles were milled into smaller particles by shaking the samples with 2 cm diameter steel beads at 1500 Hz for 10 min in a Milimix 20 shaker (Domel, Slovenia). Prior to being added to the cell culture media, the particles were sterilized using UV light.
Measurement of Zeta Potential
The suspensions of particles were monitored with electro-kinetic measurements of the ζ-potential [25] by using a Litesizer™ 500 (Anton Paar GmbH, Graz, Austria). The values of the ζ-potential were measured in particle suspension containing either fresh or conditioned cell medium at final particle concentration of 100 µg/mL. Before each individual measurement, the pH value of the suspension was determined.
Dynamic Light Scattering (DLS)
The hydrodynamic diameter of particles (D h ) in fresh and conditioned cell media was determined by dynamic light scattering (DLS) [26] using a Litesizer™ 500 (Anton Paar GmbH, Graz, Austria). The D h values were obtained from the diffusion coefficients (D) that were assessed from the correlation function of the scattered electric field (g 1 (t)) obtained from the correlation function of the scattered light intensity g 2 (t) by applying the Siegert relation. To convert D to D h , the Stokes−Einstein equation was used (D h = kT/3πηD, where k is the Boltzmann constant, T is the absolute temperature and η is the viscosity of the medium in which the particles diffused). The viscosity of the medium was approximated to the viscosity value of water at 25 • C. The scattered light was measured at an angle θ = 90 • .
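A minimal sketch of the Stokes−Einstein conversion used here is given below; the diffusion coefficient is a hypothetical placeholder, and the viscosity is approximated by that of water at 25 °C, as stated in the text.

import math

def hydrodynamic_diameter(D, T=298.15, eta=0.89e-3):
    """D_h = kT / (3*pi*eta*D); D in m^2/s, eta in Pa*s, returns D_h in meters."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (3 * math.pi * eta * D)

# Hypothetical diffusion coefficient for illustration
D = 4.0e-12  # m^2/s
print(f"D_h ~ {hydrodynamic_diameter(D) * 1e9:.0f} nm")  # ~123 nm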
Characterization of Abrasives
The abrasive particles for sand blasting were characterized by a combination of X-ray powder diffraction [27] (XRD, PANalytical X'Pert Pro MPD diffractometer), scanning electron microscopy (SEM, Thermo Fisher Quanta 650) operated at 5 kV [28] and transmission electron microscopy (TEM, JEOL, JEM 2010 F, Akishima, Tokyo, Japan) operated at 200 kV [28] coupled with energy-dispersive X-ray spectroscopy (EDXS) [29]. For the SEM investigations, the particles were deposited on a conductive carbon tape and sputtered with 6 nm layer of Au-Pd. For the TEM investigations, the particles were deposited by drying a drop of suspension on a copper-grid-supported perforated transparent carbon foil.
Treatment of Cells with Ceramic Particles
For treatment, the cell culture medium was replaced with respective media containing different concentrations of different particles (i.e., 10, 50 and 100 µg/mL). Cells were further grown under the same conditions for 24 h.
Measurements of Inflammation Processes by IL-6, IL-1β and TNF-α

IL-6, IL-1β and TNF-α were measured as described in [30]. The solution was composed of equal volumes of conditioned medium and Sample Diluent Buffer A from the enzyme-linked immunosorbent assay (ELISA) kits (Sigma Aldrich, St. Louis, MO, USA; catalog numbers SI-RAB306 for IL-6, SI-RAB0273 for IL-1β and SI-RAB0476 for TNF-α). A 100 µL sample was assessed spectrophotometrically by measuring the absorbance at 450 nm with a BioTek instrument (Cytation 3, Bad Friedrichshall, Germany). The results were expressed in pg/mL of the sample.
Cholinesterase Activity Assay
Treated cells (ca. 3 × 10^4 cells/cm^2) were evaluated for ChE activity following Ellman's method [31]. Firstly, cell homogenates were prepared. Briefly, cells were detached from the surface of the 12-well plate with a cell scraper and centrifuged along with the cell culture medium at 300× g for 10 min at room temperature (RT) in a Centric 322B centrifuge (Domel, Železniki, Slovenia). After removing the supernatants, cells were resuspended in 310 µL 0.1% Triton X-100 and put on ice. After that, cells were centrifuged at 10,000× g for 10 min at 4 °C in a Sigma 3-30 KS centrifuge (Sigma Aldrich, St. Louis, MO, USA) to separate the membranes and (nano)particles from the sample (supernatant) used for ChE measurements. For the ChE assay, 90 µL of the supernatant (100 mM potassium phosphate (K-P) buffer, pH 8, with 0.1% Triton X-100 for the blank) was transferred into a 96-well microtiter plate with 90 µL of Ellman's reagent (5,5′-dithiobis-(2-nitrobenzoic acid), DTNB) in 250 mM potassium phosphate buffer (P-P buffer, pH 7.4) and left for 20 min, by which time the endogenous reaction of cell substrates usually present in the sample was completed. Then, 20 µL of 1 mM substrate acetylthiocholine chloride was added to each well, and absorbance values were measured at 420 nm using a spectrophotometer (BioTek, Cytation 3, Bad Friedrichshall, Germany) for 20 cycles (at 1 min intervals, for 20 min). All measurements were performed at RT in triplicate. The experiment was performed in three repetitions. The activity of ChE was expressed as nmol/min/mg of protein; therefore, the protein concentration was also measured in each sample using the Pierce™ BCA Protein Assay Kit (Thermo Fisher Scientific, Waltham, MA, USA).
For the protein assay, 20 µL of sample (100 mM K-P buffer, pH 8, with 0.1% Triton X-100 for the blank) was transferred into a 96-well microtiter plate with 200 µL of a mixture of reagents A:B (50:1) from the kit and incubated at 37 °C for 30 min. After incubation, the absorbance at 560 nm was measured in triplicate using a spectrophotometer (BioTek, Cytation 3, Bad Friedrichshall, Germany). The protein concentration in mg/mL was calculated using a standard curve obtained by measuring BSA standards (2, 1.5, 1, 0.75, 0.5, 0.25, 0.125 and 0 mg/mL).
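A minimal sketch of how such a BSA standard curve can be fitted and used to normalize enzyme activity to protein content is shown below; all absorbance values and the sample readings are hypothetical placeholders, not measurements from this study.

import numpy as np

# BSA standards (mg/mL) and their hypothetical A560 readings
bsa_conc = np.array([0.0, 0.125, 0.25, 0.5, 0.75, 1.0, 1.5, 2.0])
a560     = np.array([0.05, 0.11, 0.17, 0.29, 0.41, 0.52, 0.75, 0.98])

# Linear standard curve: A560 = slope * conc + intercept
slope, intercept = np.polyfit(bsa_conc, a560, 1)

def protein_mg_per_ml(a560_sample):
    return (a560_sample - intercept) / slope

# Specific activity: substrate turnover rate normalized to protein content
def specific_activity(nmol_per_min, sample_volume_ml, a560_sample):
    protein_mg = protein_mg_per_ml(a560_sample) * sample_volume_ml
    return nmol_per_min / protein_mg          # nmol/min/mg protein

print(specific_activity(nmol_per_min=3.2, sample_volume_ml=0.09, a560_sample=0.33))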
Glutathione S-Transferase Activity Assay
The sample was prepared the same way as for the ChE assay. For the GST assay, we followed the Mannervik method [32]. A total of 50 µL of the supernatant (100 mM K-P buffer, pH 8 with 0.1% Triton-X 100 for blank) was transferred into a 96 well microtiter plate with 50 µL of 4 mM 1-Chloro-2,4-dinitrobenzene (CDNB) (Sigma Aldrich, St. Louis, MO, USA), prepared in absolute ethanol and 50 µL of 4 mM L-glutathione reduced (GSH) (Sigma-Aldrich, St. Louis, MO, USA), prepared in K-P buffer, pH 8. Absorbance values were measured at 340 nm using a spectrophotometer Cytation 3 (BioTek, Bad Friedrichshall, Germany) for 20 cycles (at 1 min intervals, for 20 min). The activity of GST was expressed as nM/min/mg of proteins, therefore also the concentration of proteins was measured in each sample using the same protocol as described for the ChE assay. All measurements were performed at RT in triplicates. The experiment was performed in three repetitions.
Detection of Reactive Oxygen Species (ROS), Lipid Droplets (LD) and Apoptosis via Flow Cytometry
We followed the procedures described in [33]. Positive control cells for LD and apoptosis were treated with 75 µM oleic acid (Cayman Chemical, Ann Arbor, MI, USA) or 10 µM Staurosporine for 24 h, respectively. Cells were then harvested and centrifuged at 300× g for 10 min at RT. Subsequently, cells were re-suspended in PBS (Sigma-Aldrich) with 5 mM 2′,7′-dichlorodihydrofluorescein diacetate (CM-H2DCFA) (Thermo Fisher Scientific, St. Louis, MO, USA) (for ROS) or 0.5 µg/mL boron dipyrromethene (BODIPY) 483/503 (for LD) and incubated for 30 min at 37 °C (for ROS) or RT (for LD). Then, 1 drop/0.5 mL of Annexin V-Pacific Blue (Annexin V-Pacific Blue Ready Flow Reagent, Thermo Fisher Scientific, St. Louis, MO, USA) was added to the mixture. After 15 min, the cell fluorescence was measured using a FACS Melody flow cytometer (Becton Dickinson Biosciences, Franklin Lakes, NJ, USA) equipped with violet (405 nm), blue (488 nm) and yellow/green (561 nm) lasers. Cells for the ROS positive control were also incubated with 5 mM H 2 O 2 for 30 min after staining and before measurement. For cytotoxicity, we used the Annexin V-Pacific Blue Ready Flow Reagent (Thermo Fisher Scientific, St. Louis, MO, USA) staining kit to monitor apoptosis by flow cytometry. Samples were stained for cytotoxicity when prepared for ROS and LD detection.
Statistical Analysis
The data from IL-6, IL-1β, TNF-α, ChE and GST measurements were expressed as arithmetic means ± standard deviations (SD) and were statistically analyzed with one-way analysis of variance (ANOVA), followed by Dunnett's multiple comparison test.
All the statistical analyses were made with Prism 5.03 Software (GraphPad Software, Boston, MA, USA).
Characterization of Particles
XRD for the Al 2 O 3 and u. Al 2 O 3 samples appeared very similar. The XRD patterns showed strong, sharp reflections corresponding to corundum and very weak reflections, which could not be identified (Figure 1a). Moreover, the morphology of both Al 2 O 3 and u. Al 2 O 3 samples was similar. SEM showed particles of irregular shapes with sizes ranging from several tens of nm to several tens of µm (Figure 1b,c). TEM showed that the smallest nanoparticles from the Al 2 O 3 and u. Al 2 O 3 samples were approximately 50 nm in size (Figure 1d). EDXS showed the presence of only Al and O in the unused Al 2 O 3 , whereas in the u. Al 2 O 3 sample, P, Ca, Na, Cl and Ag were also detected as minor elements. In the ZrO 2 /SiO 2 , monoclinic zirconia was the only crystalline phase detected by XRD (Figure 1a). Moreover, the ZrO 2 /SiO 2 sample consisted of irregularly shaped particles with a very broad size distribution ranging from less than hundred nm to several tens of µm; however, the particles had much rougher surfaces compared to the Al 2 O 3 and u. Al 2 O 3 abrasives (Figure 1e). TEM showed that that the particles of the ZrO 2 /SiO 2 abrasive were composed of elongated crystalline zirconia particles embedded in the amorphous silica matrix (Figure 1f). EDXS analysis showed only Zr and O at the crystalline areas and Si, O, Al and Zr at the amorphous areas. Table 1 shows the parameters of the characterization of particle suspensions: zeta potential, pH and average hydrodynamic particle diameter (D h ) measured in fresh and conditioned cell media. Zeta potential was slightly negative in all samples measured. Two populations of particles corresponding to two peaks of the I(D h ) curve were detected: a population of small particles with average D h below 10 nm and a population of particles with average D h larger than 1 µm (Table 1).
Morphological Changes of the Treated Cells
SEM analysis revealed that cells treated with the three types of material did not differ in morphology or surface coverage when compared to untreated cells. In contrast, as expected, the positive control treated with the apoptosis inducer staurosporin was strongly affected in both parameters ( Figure 2).
Inflammatory Response of HUVEC Cells
To assess the inflammatory response of HUVEC, IL-6, IL-1β and TNF-α were measured in conditioned media of HUVEC after 24 h exposure of the cells to three different types of particles at three concentrations (10, 50 and 100 µg/mL). An increase in IL-6 with respect to control was observed in samples treated with u. Al 2 O 3 and ZrO 2 /SiO 2 at all three concentrations ( Figure 3A). In samples treated with unused Al 2 O 3 the effect was the least; for the lowest concentration the effect was within the experimental error of the control, while the higher concentrations of Al 2 O 3 likewise increased the concentration of IL-6 in the conditioned medium ( Figure 3A). These results indicate that all types of particles tested induced an increase of IL-6 concentration in the conditioned media. The effect of the particles on the concentration of IL-1β in the conditioned media was less pronounced than that of IL-6; it stayed within the experimental error for u. Al 2 O 3 particles with concentrations 10 and 50 µg/mL ( Figure 3B). However, at higher concentrations of ZrO 2 /SiO 2 and unused Al 2 O 3 , an increase in the concentration of IL-1β was noted ( Figure 3B). TNF-α concentration increased in samples treated with ZrO 2 /SiO 2 at 50 µg/mL and 100 µg/mL and in samples treated with u. Al 2 O 3 at all three concentrations ( Figure 3C). For both ZrO 2 /SiO 2 and u. Al 2 O 3 , a concentration-dependent trend was observed. TNF-α concentration was not increased in samples treated with unused Al 2 O 3 ( Figure 3C).
Oxidative Stress Response of HUVEC Cells
To assess the oxidative stress response of HUVEC, activities of ChE and GST and quantities of ROS and LD were measured after 24 h exposure of HUVEC to three different types of particles at three concentrations (10 µg/mL, 50 µg/mL and 100 µg/mL). ChE and GST activities were expressed as activity in nmol/min/mg of protein. ROS and LD production were expressed by the fold change of median fluorescence intensities of the respective dyes in comparison to control cells. The average ChE activities of all tested samples, except 50 µg/mL and 100 µg/mL of unused Al 2 O 3 , were higher than the average ChE activity of the control; however, the experimental errors were rather large. The only statistically significant difference with respect to the control was observed with 100 µg/mL of u. Al 2 O 3 ( Figure 4A). The activity of GST was higher in samples treated with u. Al 2 O 3 in a concentration-dependent way ( Figure 4B). A slight increase of GST activity was observed when cells were treated with 100 µg/mL of ZrO 2 /SiO 2 and with u. Al 2 O 3 at higher concentrations ( Figure 4B). The amount of ROS was increased in all samples except for two (samples treated with u. Al 2 O 3 at 10 µg/mL and with ZrO 2 /SiO 2 at 50 µg/mL) ( Figure 4C). For u. Al 2 O 3 , a concentration-dependent trend was observed ( Figure 4C). The number of LDs was increased in all treated samples in comparison to untreated samples ( Figure 4D). No clear LD concentration-dependent trend was observed ( Figure 4D). A clear increase in ROS and LD production was successfully induced in positive control samples after treatment with 5 mM H 2 O 2 and 75 µM oleic acid.
To assess the oxidative stress response of HUVEC, activities of ChE and GST and quantities of ROS and LD were measured after 24 h exposure of HUVEC to three different types of particles at three concentrations (10 µg/mL, 50 µg/mL and 100 µg/mL). ChE and GST activities were expressed as activity in nmol/min/ng/proteins. ROS and LD production were expressed by the fold change of median fluorescence intensities of the respective dyes in comparison to control cells. The average ChE activities of all tested samples, except 50 µg/mL and 100 µg/mL of unused Al2O3, were higher than the average ChE activity of the control; however, the experimental errors were rather large. The only statistically significant difference with respect to the control was observed with 100 µg/mL of u. Al2O3 Since u. Al 2 O 3 particles were shown to be the most bioactive in terms of ChE, GST and IL-6 production, we chose them to study the ROS production and apoptosis in cells. The histograms ( Figure 5) show the fluorescence of cells stained with CM-H 2 DCFA (ROS) and BODIPY 483/503 (LD). Gates (horizontal bars) indicate the % of cells treated with u. Al 2 O 3 that were positive for ROS or LD in comparison to the untreated control. Untreated cells were 20% positive for ROS and slightly positive for LD ( Figure 5A,B). For the ROSpositive control, samples were treated with 5 mM H 2 O 2 resulting in 30.4% of positive cells ( Figure 5C). For the LD-positive control, 75 µM oleic acid was used, resulting in 97.0% of positive cells ( Figure 5D). A total of 27% of cells treated with u. Al 2 O 3 were ROSpositive when particles were used at a concentration of 50 µg/mL. In contrast, at 10 µg/mL concentration of u. Al 2 O 3 , fewer treated cells than control cells were ROS positive. All particle-treated samples were positive for LD production in comparison to negative control samples (~20% vs. 2%), although in a concentration-independent manner ( Figure 5B,F,H,J). cells were treated with 100 µg/mL of ZrO2/SiO2 and with u. Al2O3 at higher concentration ( Figure 4B). The amount of ROS was increased in all samples except for two (sample treated with u. Al2O3 at 10 µg/mL and with ZrO2/SiO2 at 50 µg/mL) ( Figure 4C). For u Al2O3, a concentration-dependent trend was observed ( Figure 4C). The number of LDs wa increased in all treated samples in comparison to untreated samples ( Figure 4D). No clea LD concentration-dependent trend was observed ( Figure 4D). A clear increase in ROS an LD production was successfully induced in positive control samples after treatment wit 5mM H2O2 and 75 µM oleic acid. ( Figure 5C). For the LD-positive control, 75 µM oleic acid was used, resulting in 97.0% o positive cells ( Figure 5D). A total of 27% of cells treated with u. Al2O3 were ROS-positiv when particles were used at a concentration of 50 µg/mL. In contrast, at 10 µg/mL concen tration of u. Al2O3, fewer treated cells than control cells were ROS positive. All particle treated samples were positive for LD production in comparison to negative control sam ples (∼20% vs. 2%), although in a concentration-independent manner ( Figures 5B,F,H,J).
Cytotoxicity
Apoptosis may play an important role in the regulation of inflammation or be the result of inflammation in cells. Figure 6 shows both FSC/SSC dot plots and Annexin V fluorescence histograms (marker for apoptosis) of untreated cells (negative control), Staurosporine-treated cells (positive control) and u. Al 2 O 3 -treated cells. By comparing untreated controls (Panel A) with the positive control (Panel B), it can be seen that Staurosporine effectively expanded the apoptotic population (69% vs. 5%) during the 24 h treatment. In contrast, no significant differences are visible between the negative control (Panel A) and particle-treated samples (Panels C−E), indicating that no apoptosis was induced upon 24 h of incubation with u. Al 2 O 3 . In the dot plots shown in Figure 6, a dose-dependent increase of the SSC signal can be observed. Increase in cell granularity could be a consequence of particle endocytosis or LD/intracellular vesicle formation. This would be in line with the data presented in Figures 4 and 5, which show an increase in LD formation.
Discussion
We have treated the HUVEC with three types of particles (Al 2 O 3 , u. Al 2 O 3 (retrieved from removed OI) and ZrO 2 /SiO 2 ). We observed no morphological changes of cells treated with Al 2 O 3 , ZrO 2 /SiO 2 and u. Al 2 O 3 compared to untreated control cells (Figure 2). These findings were further supported by the data obtained via flow cytometry showing that no significant changes in cell viability were visible in samples treated in comparison to the negative control samples. We observed an increase in inflammation parameter IL-6 after treatment with all types of particles and there was a concentration-dependent trend after treatment with u. Al 2 O 3 and ZrO 2 /SiO 2 . We observed an increase in the inflammation parameter IL-1β with a concentration-dependent trend after treatment with ZrO 2 /SiO 2 ( Figure 3) and after treatment of cells with 50 µg/mL Al 2 O 3 . We detected an increase of the oxidative stress parameters: ChE and GST activities were significantly higher in samples treated with 100 µg/mL u. Al 2 O 3 (Figure 4). We found a trend of increasing quantities of LD in samples treated with all concentrations of all types of particles (Figures 4 and 5), which could be a consequence of oxidative stress. LD can play a protective role against ROS [24]. ROS, which are commonly produced during cell metabolism, were not significantly increased in monitored cells (Figures 4 and 5). Moreover, we observed no cytotoxicity effects. OIs are implanted for years, indicating a release of wear debris in the surrounding tissue and in blood circulation. In contrast, the time of treatment of cells in our study was 24 h, which could show an acute response of the cells to particle exposure only. For times shorter than 24 h, the expected effect would be smaller. For longer maintenance of cells the medium should have been changed, which would remove the particles and disturb the effect that we wished to observe. Nevertheless, our results show that particles are not inert as regards the cellular response. This was the aim of the present work, however, further study (including the time dependence) of the effect of the particles is indicated. To decisively point to inflammation, oxidative stress and increased cell vesiculation, experiments should include time dependence of the effects in vitro in different types of cells, and observation of short-and long-term effects in vivo. Another limitation of the study is that the size of the particles has not been taken into account.
In the literature, eventual implant loosening due to aseptic osteolysis has been attributed to local inflammatory responses to wear and corrosion products that are produced by articulating implant interfaces [20,21]. The response to implant debris is dominated by local immune activation, e.g., by macrophages [11,12,16,35]. Generally, to produce an in vitro inflammatory response, particles need to be less than 10 µm in size, i.e., prone to being phagocytosed. Immune reactivity has been shown to depend on the number of particles produced or the dose (i.e., the concentration of phagocytosed particles per tissue volume, which can be characterized by knowing the size distribution and the number of debris particles) [12]. Evidence involving cells, model particles and pathogenic microbes indicates that particle size, shape, rigidity and surface roughness are important parameters for cellular uptake and subsequent immune responses [36]. In the interaction of particles with the membrane, the (mis)match between the membrane curvature and the intrinsic curvature of the particle is a key factor that dictates how likely it is that the particle will be taken up by the cell [37]. Elongated particles (fibers) are generally more pro-inflammatory than round particles, and there is a growing consensus that metallic particles are more pro-inflammatory than polymers in vivo [12]. It was found that titanium dioxide nanoparticles (TiO2 NPs) induced oxidative stress, reduced osteogenesis and impaired the antioxidant defense system [38]. Jamieson et al. (2021) reported that ceramic oxide nanopowders were phagocytosed in vitro by THP-1 macrophages, which resulted in a cellular inflammatory response [39]. A significant increase in IL-1β secretion by the cells following ceramic treatment was observed [39]. Bertrand et al. (2018) reported fibrotic changes and the presence of ceramic wear particles in periprosthetic tissue around ceramic-coated OI [40]. They found a correlation between increased tissue fibrosis and implantation time and therefore assumed that the induction of fibrosis was connected to the release of debris from the OI. This indicated that ceramic OI produces wear that could be biologically active [40]. They suggested that the reason for tissue fibrosis might have been the long-term inflammatory response of peripheral blood mononuclear cells and the inflammatory response of fibroblasts to ceramic OI. Moreover, during two days of incubation they observed in vitro effects of ceramics on peripheral blood mononuclear cells from healthy donors [40]: a short-term inflammatory response reflected in increased IL-1β and IL-6 concentrations. This agrees with our results (Figure 3). In contrast, Bylski et al. [41] found no increase in TNF-α concentration in THP-1 monocytic cells treated with aluminum oxide.
Toxicity of the released particles to cells can have a chemical basis (due to released soluble ions or molecules) or a mechanical basis (due to the mechanical impact of the insoluble particles). It was found that the cytotoxicity of ceramic particles to macrophages was lower than that corresponding to metal ions and that it did not depend on their chemical species [42]. Moreover, while differences in particle size did not affect their mechanical toxicity, the shape of the particles mattered: dendritic particles had a higher cytotoxicity than spindle and globular particles [42]. Ceramics might induce different cytotoxic effects in different cell types, and some cell types may be more sensitive when encountering ceramic particles. The cytotoxicity of insoluble wear debris (e.g., Al2O3 particles) was reported to be lower than that of soluble metal ions [39], which were also shown to cause oxidative stress [43,44]. Moreover, Yamamoto et al. (2004) reported that particle shape influenced particle cytotoxicity [42]. The most toxic were dendritic-shaped particles, followed by spindle- and globular-shaped particles [42]. The cytotoxicity of Al2O3 nanoparticles was previously discussed by Radziun et al. [45], who reported that Al2O3 nanoparticles could penetrate the membranes of L929 mouse fibroblasts and BJ human fibroblasts, yet a decrease in cellular viability was not detected. Similar results were reported by Jamieson et al. [39] for THP-1 human macrophages. It was concluded that, even though the cell lines could phagocytose Al2O3 nanoparticles, no significant cytotoxic effects were observed [39]. Moreover, our results did not suggest cytotoxic effects of Al2O3 nanoparticles on the HUVEC cell line after 24 h of exposure (Figure 6). In contrast, Catelas et al. (1999) reported a higher rate of apoptosis of J774 macrophages for larger Al2O3 particles (4.5 µm in diameter), while smaller particles (0.6, 1.3 and 2.4 µm in diameter) had less effect [46]. Olivier et al. (2003) tested the cytotoxicity of Al2O3 particles to J774.2 macrophages and L929 fibroblasts [47]. The particles were more cytotoxic for macrophages than for fibroblasts, having an impact on apoptosis and necrosis of the cells [47]. A significant decrease in the viability of 3T3-E1 mouse osteoblast-like cells after treatment with zirconia nanoparticles was reported by Ye et al. (2018) [48]. Xie et al. (2019) observed that Al2O3 was unable to achieve good chemical bonding with tissues [49]. To overcome this problem, they prepared a new material, SiAlON-Al2O3 ceramics, with a porosity and compression strength that are more suitable for the proliferation and survival of cells [49].
With SEM (Figure 2) we wanted to observe budding of the plasma membrane of HUVEC, yet there was no visible plasma membrane budding in treated or untreated cells. Budding precedes the formation of small cellular particles (SCPs) that become free to move in the surrounding medium and are, in principle, able to reach distant cells if they are transported by body fluids. Although the mechanisms of SCP uptake by cells are not completely understood, it has been reported that SCPs may affect the phenotype and function of the recipient cells [50,51]. Methods have been developed to harvest SCPs from cell culture media [52-54], e.g., differential centrifugation, size exclusion chromatography, immune-capture-based methods and microfluidic device-based methods. SCPs are heterogeneous in shape and composition; furthermore, if they are membrane enclosed, their identity is not fixed and they are subject to transformations of shape, size and composition during the processing of samples [55,56]. Although much effort has been invested in the elaboration of SCP harvesting from different media and body fluids, there are still fundamental issues associated with these procedures [57]. We isolated SCPs from the media in which the cells had grown by differential ultracentrifugation, using a protocol that is common for harvesting small extracellular vesicles [58]. We assessed the samples by interference light microscopy and visualized them by cryogenic electron microscopy [59]. The signal was under the detection limit of interference light microscopy, and we observed no membrane-enclosed particles by cryogenic electron microscopy (not shown). The SEM images (Figure 2) are consistent with our inability to detect free SCPs with the other methods used. As SCPs have recently been gaining interest for their biological roles, budding and vesiculation of cells induced by particles should be further studied in the future.
Conclusions
We observed changes in morphology, in inflammation markers (IL-6 and IL-1β concentrations) and in oxidative stress parameters (ChE and GST activities and LD quantities) in HUVEC after 24 h of treatment with some types of particles. Within the range of the studied parameters, we found no evidence of cytotoxic effects. We did not observe increased membrane budding in the treated cells, and we found no small cellular particles in the isolates from the media. Further studies of the effect of particles are indicated, in particular observation of longer treatment times and exploration of the non-local effects arising from small cellular particles that may be shed differently from treated cells.
|
2023-04-26T15:03:20.522Z
|
2023-04-22T00:00:00.000
|
{
"year": 2023,
"sha1": "adaa54e59cdb246f00e0a773a01a3383d248e3f0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/16/9/3287/pdf?version=1682228814",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ae54a864fbb5ec8e8212426913431aaa2e50e1cf",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
979
|
pes2o/s2orc
|
v3-fos-license
|
Learning Features that Predict Cue Usage
Our goal is to identify the features that predict the occurrence and placement of discourse cues in tutorial explanations in order to aid in the automatic generation of explanations. Previous attempts to devise rules for text generation were based on intuition or small numbers of constructed examples. We apply a machine learning program, C4.5, to induce decision trees for cue occurrence and placement from a corpus of data coded for a variety of features previously thought to affect cue usage. Our experiments enable us to identify the features with most predictive power, and show that machine learning can be used to induce decision trees useful for text generation.
Introduction
Discourse cues are words or phrases, such as because, first, and although, that mark structural and semantic relationships between discourse entities. They play a crucial role in many discourse processing tasks, including plan recognition (Litman and Allen, 1987), text comprehension (Cohen, 1984; Hobbs, 1985; Mann and Thompson, 1986; Reichman-Adar, 1984), and anaphora resolution (Grosz and Sidner, 1986). Moreover, research in reading comprehension indicates that felicitous use of cues improves comprehension and recall (Goldman, 1988), but that their indiscriminate use may have detrimental effects on recall (Millis, Graesser, and Haberlandt, 1993).
Our goal is to identify general strategies for cue usage that can be implemented for automatic text generation. From the generation perspective, cue usage consists of three distinct, but interrelated problems: (1) occurrence: whether or not to include a cue in the generated text, (2) placement: where the cue should be placed in the text, and (3) selection: what lexical item(s) should be used.
Prior work in text generation has focused on cue selection (McKeown and Elhadad, 1991; Elhadad and McKeown, 1990), or on the relation between cue occurrence and placement and specific rhetorical structures (Rösner and Stede, 1992; Scott and de Souza, 1990; Vander Linden and Martin, 1995). Other hypotheses about cue usage derive from work on discourse coherence and structure. Previous research (Hobbs, 1985; Grosz and Sidner, 1986; Schiffrin, 1987; Mann and Thompson, 1988; Elhadad and McKeown, 1990), which has been largely descriptive, suggests factors such as structural features of the discourse (e.g., level of embedding and segment complexity), intentional and informational relations in that structure, ordering of relata, and syntactic form of discourse constituents. Moser and Moore (1995; 1997) coded a corpus of naturally occurring tutorial explanations for the range of features identified in prior work. Because they were also interested in the contrast between occurrence and non-occurrence of cues, they exhaustively coded for all of the factors thought to contribute to cue usage in all of the text. From their study, Moser and Moore identified several interesting correlations between particular features and specific aspects of cue usage, and were able to test specific hypotheses from the literature that were based on constructed examples.
In this paper, we focus on cue occurrence and placement, and present an empirical study of the hypotheses provided by previous research, which have never been systematically evaluated with naturally occurring data. We use a machine learning program, C4.5 (Quinlan, 1993), on the tagged corpus of Moser and Moore to induce decision trees. The number of coded features and their interactions makes the manual construction of rules that predict cue occurrence and placement an intractable task.
Our results largely confirm the suggestions from the literature, and clarify them by highlighting the most influential features for a particular task. Discourse structure, in terms of both segment structure and levels of embedding, affects cue occurrence the most; intentional relations also play an important role. For cue placement, the most important factors are syntactic structure and segment complexity.
The paper is organized as follows. In Section 2 we discuss previous research in more detail. Section 3 provides an overview of Moser and Moore's coding scheme. In Section 4 we present our learning experiments, and in Section 5 we discuss our results and conclude.
Related Work
McKeown and Elhadad (1991; 1990) studied several connectives (e.g., but, since, because), and include many insightful hypotheses about cue selection; their observation that the distinction between but and although depends on the point of the move is related to the notion of core discussed below. However, they do not address the problem of cue occurrence.
Other researchers (Rösner and Stede, 1992; Scott and de Souza, 1990) are concerned with generating text from "RST trees", hierarchical structures where leaf nodes contain content and internal nodes indicate the rhetorical relations, as defined in Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), that exist between subtrees. They proposed heuristics for including and choosing cues based on the rhetorical relation between spans of text, the order of the relata, and the complexity of the related text spans. However, (Scott and de Souza, 1990) was based on a small number of constructed examples, and (Rösner and Stede, 1992) focused on a small number of RST relations. (Litman, 1996) and (Siegel and McKeown, 1994) have applied machine learning to disambiguate between the discourse and sentential usages of cues; however, they do not consider the issues of occurrence and placement, and approach the problem from the point of view of interpretation. We closely follow the approach in (Litman, 1996) in two ways. First, we use C4.5. Second, we experiment first with each feature individually, and then with "interesting" subsets of features.
Relational Discourse Analysis
This section briefly describes Relational Discourse Analysis (RDA) (Moser, Moore, and Glendening, 1996), the coding scheme used to tag the data for our machine learning experiments. RDA is a scheme devised for analyzing tutorial explanations in the domain of electronics troubleshooting. It synthesizes ideas from (Grosz and Sidner, 1986) and from RST (Mann and Thompson, 1988).
Coders use RDA to exhaustively analyze each explanation in the corpus, i.e., every word in each explanation belongs to exactly one element in the analysis. An explanation may consist of multiple segments. Each segment originates with an intention of the speaker. Segments are internally structured and consist of a core, i.e., that element that most directly expresses the segment purpose, and any number of contributors, i.e., the remaining constituents.
For each contributor, one analyzes its relation to the core from an intentional perspective, i.e., how it is intended to support the core, and from an informational perspective, i.e., how its content relates to that of the core. The set of intentional relations in RDA is a modification of the presentational relations of RST, while informational relations are similar to the subject matter relations in RST. Each segment constituent, both core and contributors, may itself be a segment with a core:contributor structure. In some cases the core is not explicit. This is often the case with the whole tutor's explanation, since its purpose is to answer the student's explicit question.
As an example of the application of RDA, consider the partial tutor explanation in (1). The purpose of this segment is to inform the student that she made the strategy error of testing inside part3 too soon. The constituent that makes the purpose obvious, in this case (1-B), is the core of the segment. The other constituents help to serve the segment purpose by contributing to it. (1-C) is an example of a subsegment with its own core:contributor structure; its purpose is to give a reason for testing part2 first.
The RDA analysis of (1) is shown schematically in Figure 1. The core is depicted as the mother of all the relations it participates in. Each relation node is labeled with both its intentional and informational relation, with the order of relata in the label indicating the linear order in the discourse. Each relation node has up to two daughters: the cue, if any, and the contributor. Coders analyze each explanation in the corpus and enter their analyses into a database. The corpus consists of 854 clauses comprising 668 segments, for a total of 780 relations. Table 1 summarizes the distribution of different relations, and the number of cued relations in each category. Joints are segments comprising more than one core, but no contributor; clusters are multiunit structures with no recognizable core:contributor relation. (1-B) is a cluster composed of two units (the two clauses), related only at the informational level by a temporal relation. Both clauses describe actions, with the first action description embedded in a matrix ("You should"). Cues are much more likely to occur in clusters, where only informational relations occur, than in core:contributor structures, where intentional and informational relations co-occur (χ2 = 33.367, p < .001, df = 1). In the following, we will not discuss joints and clusters any further.
An important result pointed out by (Moser and Moore, 1995) is that cue placement depends on core position. When the core is first and a cue is associated with the relation, the cue never occurs with the core. In contrast, when the core is second, if a cue occurs, it can occur either on the core or on the contributor.
The algorithm
We chose the C4.5 learning algorithm (Quinlan, 1993) because it is well suited to a domain such as ours with discrete valued attributes. Moreover, C4.5 produces decision trees and rule sets, both often used in text generation to implement mappings from function features to forms. Finally, C4.5 is both readily available, and is a benchmark learning algorithm that has been extensively used in NLP applications, e.g. (Litman, 1996; Mooney, 1996; Vander Linden and Di Eugenio, 1996).
As our dataset is small, the results we report are based on cross-validation, which (Weiss and Kulikowski, 1991) recommends as the best method to evaluate decision trees on datasets whose cardinality is in the hundreds. Data for learning should be divided into training and test sets; however, for small datasets this has the disadvantage that a sizable portion of the data is not available for learning. Cross-validation obviates this problem: the data is partitioned into N subsets, and the algorithm is run N times, each time training on the remaining N-1 subsets with the Nth used as the test set. The error rate of a tree obtained by using the whole dataset for training is then assumed to be the average error rate on the test set over the N runs. Further, as C4.5 prunes the initial tree it obtains to avoid overfitting, it computes both actual and estimated error rates for the pruned tree; see (Quinlan, 1993, Ch. 4) for details. Thus, below we will report the average estimated error rate on the test set, as computed by 10-fold cross-validation experiments.
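To make this evaluation protocol concrete, the sketch below runs 10-fold cross-validation with an entropy-based decision tree. It is only a rough stand-in: C4.5 itself is not available in scikit-learn, and the feature matrix and labels are randomly generated placeholders rather than the coded corpus.

```python
# A minimal sketch of the 10-fold cross-validation protocol described above,
# assuming scikit-learn. An entropy-based decision tree stands in for C4.5,
# and the data are random placeholders: one row per core:contributor relation
# (406 in the corpus), 11 coded features, and a binary "cue present" label.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(406, 11))   # hypothetical encoded feature values
y = rng.integers(0, 2, size=406)         # 1 = cue occurs, 0 = no cue

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
fold_accuracy = cross_val_score(tree, X, y, cv=10)

# Report the average error rate over the 10 held-out folds.
print("average error rate: %.3f" % (1.0 - fold_accuracy.mean()))
```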
The features
Each data point in our dataset corresponds to a core:contributor relation, and is characterized by the following features, summarized in Table 2; a small encoding sketch follows the feature list below.
Segment Structure. Three features capture the global structure of the segment in which the current core:contributor relation appears.
• (Con)Trib(utor)-pos(ition) captures the position of a particular contributor within the larger segment in which it occurs, and encodes the structure of the segment in terms of how many contributors precede and follow the core. For example, contributor (1-D) in Figure 1 is labeled as B1A3-2after, as it is the second contributor following the core in a segment with 1 contributor before and 3 after the core.
• Inten(tional)-structure indicates which contributors in the segment bear the same intentional relations to the core.
• Infor(mational)-structure. Similar to intentional structure, but applied to informational relations.
Core:contributor relation. These features more specifically characterize the current core:contributor relation.
• Infor(mational)-rel(ation). About 30 informational relations have been coded for. However, as preliminary experiments showed that using them individually results in overfitting the data, we classify them according to the four classes proposed in (Moser, Moore, and Glendening, 1996): causality, similarity, elaboration, temporal. Temporal relations only appear in clusters, thus not in the data we discuss in this paper.
• Syn(tactic)-rel(ation). Captures whether the core and contributor are independent units (segments or sentences); whether they are coordinated clauses; or which of the two is subordinate to the other.
• Adjacency. Whether core and contributor are adjacent in linear order.
Embedding. These features capture segment embedding, Core-type and Trib-type qualitatively, and Above/Below quantitatively.
• Core-type/(Con)Trib(utor)-type. Whether the core/the contributor is a segment, or a minimal unit (further subdivided into action, state, matrix).
• Above/Below encode the number of relations hierarchically above and below the current relation.
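As a concrete illustration, the sketch below shows one way a single coded core:contributor relation could be represented as a data point before tree induction. The field names mirror the feature list above; the particular values shown are hypothetical, not taken from the corpus.

```python
# A minimal sketch of one coded core:contributor relation as a data point.
# Field names follow the feature list above; the values are hypothetical.
from dataclasses import dataclass

@dataclass
class RelationInstance:
    # Segment structure
    trib_pos: str         # e.g., "B1A3-2after"
    inten_structure: str  # which contributors share the same intentional relation
    infor_structure: str  # which contributors share the same informational relation
    # Core:contributor relation
    inten_rel: str        # e.g., "convince", "enable"
    infor_rel: str        # "causality", "similarity" or "elaboration"
    syn_rel: str          # independent / coordinated / subordinated
    adjacency: bool       # are core and contributor adjacent in linear order?
    # Embedding
    core_type: str        # "segment", "action", "state" or "matrix"
    trib_type: str
    above: int            # relations hierarchically above this one
    below: int            # relations hierarchically below this one
    cued: bool            # class label: does a cue occur?

example = RelationInstance("B1A3-2after", "same-as-1D", "unique",
                           "enable", "causality", "subordinated", True,
                           "action", "segment", 1, 2, cued=True)
print(example)
```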
The experiments
Initially, we performed learning on all 406 instances of core:contributor relations. We quickly determined that this approach would not lead to useful decision trees. First, the trees we obtained were extremely complex (at least 50 nodes). Second, some of the subtrees corresponded to clearly identifiable subclasses of the data, such as relations with an implicit core, which suggested that we should apply learning to these independently identifiable subclasses. Thus, we subdivided the data into three subsets:
• Core1: core:contributor relations with the core in first position
• Core2: core:contributor relations with the core in second position
• Impl(icit)-core: core:contributor relations with an implicit core
While this has the disadvantage of smaller training sets, the trees we obtain are more manageable and more meaningful. Table 3 summarizes the cardinality of these sets, and the frequencies of cue occurrence.
We ran four sets of experiments. In three of them we predict cue occurrence and in one cue placement.
Cue Occurrence
Table 4 summarizes our main results concerning cue occurrence, and includes the error rates associated with different feature sets. We adopt Litman's approach (1996) to determine whether two error rates E1 and E2 are significantly different. We compute 95% confidence intervals for the two error rates using a t-test. E1 is significantly better than E2 if the upper bound of the 95% confidence interval for E1 is lower than the lower bound of the 95% confidence interval for E2 (a short sketch of this check follows the list below). For each set of experiments, we report the following:
1. A baseline measure obtained by choosing the majority class. E.g., for Core1 58.9% of the relations are not cued; thus, by deciding to never include a cue, one would be wrong 41.1% of the time.
2. The best individual features whose predictive power is better than the baseline: as Table 4 makes apparent, individual features do not have much predictive power. For neither Core1 nor Impl-core does any individual feature perform better than the baseline, and for Core2 only one feature is sufficiently predictive.
3. (One of) the best induced tree(s). For each tree, we list the number of nodes, and up to six of the features that appear highest in the tree, with their levels of embedding. Figure 2 shows the tree for Core2 (space constraints prevent us from including figures for each tree). In the figure, the numbers in parentheses indicate the number of cases correctly covered by the leaf, and the number of expected errors at that leaf.
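The significance check described above can be sketched as follows. The per-fold error rates are hypothetical, and scipy is assumed only for the t-distribution quantile.

```python
# A minimal sketch of the comparison of two cross-validated error rates:
# compute a 95% confidence interval for each mean error rate, and call E1
# significantly better than E2 when the upper bound of E1's interval lies
# below the lower bound of E2's interval. Fold errors here are hypothetical.
import numpy as np
from scipy import stats

def ci95(fold_errors):
    """95% confidence interval for the mean error rate over the folds."""
    errs = np.asarray(fold_errors, dtype=float)
    mean = errs.mean()
    sem = errs.std(ddof=1) / np.sqrt(len(errs))
    half = stats.t.ppf(0.975, df=len(errs) - 1) * sem
    return mean - half, mean + half

e1 = [0.26, 0.31, 0.29, 0.24, 0.30, 0.27, 0.33, 0.25, 0.28, 0.29]  # induced tree
e2 = [0.40, 0.43, 0.38, 0.45, 0.41, 0.39, 0.44, 0.42, 0.37, 0.46]  # baseline

lo1, hi1 = ci95(e1)
lo2, hi2 = ci95(e2)
print("E1 is significantly better than E2:", hi1 < lo2)
```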
Learning turns out to be most useful for Core1, where the error reduction (as percentage) from baseline to the upper bound of the best result is 32%; error reduction is 19% for Core2 and only 3% for Impl-core.
The best tree was obtained partly by informed choice, partly by trial and error. Automatically trying out all the 2^11 = 2048 subsets of features would be possible, but it would require manual examination of about 2,000 sets of results, a daunting task. Thus, for each dataset we considered only the following subsets of features.
1. All features. This always results in C4.5 selecting a few features (from 3 to 7) for the final tree.
2. Subsets built out of the 2 to 4 attributes appearing highest in the tree obtained by running C4.5 on all features.
3. In Table 2, three features - Trib-pos, Inten-struct, Infor-struct - concern segment structure, eight do not. We constructed three subsets by always including the eight features that do not concern segment structure, and adding one of those that does. The trees obtained by including Trib-pos, Inten-struct, Infor-struct at the same time are in general more complex, and not significantly better than other trees obtained by including only one of these three features. We attribute this to the fact that these features encode partly overlapping information.
Finally, the best tree was obtained as follows. We build the set of trees that are statistically equivalent to the tree with the best error rate (i.e., with the lowest error rate upper bound). Among these trees, we choose the one that we deem the most perspicuous in terms of features and of complexity. Namely, we pick the simplest tree with Trib-Pos as the root if one exists, otherwise the simplest tree. Trees that have Trib-Pos as the root are the most useful for text generation, because, given a complex segment, Trib-Pos is the only attribute that unambiguously identifies a specific contributor.
Our results make apparent that the structure of segments plays a fundamental role in determining cue occurrence. One of the three features concerning segment structure (Trib-Pos, Inten-Structure, Infor-Structure) appears as the root or just below the root in all trees in Table 4; more importantly, this same configuration occurs in all trees equivalent to the best tree (even if the specific feature encoding segment structure may change). The level of embedding in a segment, as encoded by Core-type, Trib-type, Above and Below, also figures prominently.
Inten-rel appears in all trees, confirming the intuition that the speaker's purpose affects cue occurrence. More specifically, in Figure 2, Inten-rel distinguishes two different speaker purposes, convince and enable. The same split occurs in some of the best trees induced on Core1, with the same outcome: i.e., convince directly correlates with the occurrence of a cue, whereas for enable other features must be taken into account. Informational relations do not appear as often as intentional relations; their discriminatory power seems more relevant for clusters. Preliminary experiments show that cue occurrence in clusters depends only on informational and syntactic relations. Finally, Adjacency does not seem to play any substantial role.
Cue Placement
While cue occurrence and placement are interrelated problems, we performed learning on them separately. First, the issue of placement arises only in the case of Core2; for Core1, cues only occur on the contributor. Second, we attempted experiments on Core2 that discriminated between occurrence and placement at the same time, and the derived trees were complex and not perspicuous. Thus, we ran an experiment on the 100 cued relations from Core2 to investigate which factors affect placing the cue on the contributor in first position or on the core in second; see Table 5.
We ran the same trials discussed above on this dataset. In this case, the best tree - see Figure 3 - results from combining the two best individual features, and reduces the error rate by 50%. The most discriminant feature turns out to be the syntactic relation between the contributor and the core. However, segment structure still plays an important role, via Trib-pos. While the importance of Syn-rel for placement seems clear, its role concerning occurrence requires further exploration. It is interesting to note that the tree induced on Core1 - the only case in which Syn-rel is relevant for occurrence - includes the same distinction as in Figure 3: namely, if the contributor depends on the core, the contributor must be marked, otherwise other features have to be taken into account. Scott and de Souza (1990) point out that "there is a strong correlation between the syntactic specification of a complex sentence and its perceived rhetorical structure." It seems that certain
Discussion and Conclusions
We have presented the results of machine learning experiments concerning cue occurrence and placement. As (Litman, 1996) observes, this sort of empirical work supports the utility of machine learning techniques applied to coded corpora. As our study shows, individual features have no predictive power for cue occurrence. Moreover, it is hard to see how the best combination of individual features could be found by manual inspection.
Our results also provide guidance for those building text generation systems. This study clearly indicates that segment structure, most notably the ordering of core and contributor, is crucial for determining cue occurrence. Recall that it was only by considering Core1 and Core2 relations in distinct datasets that we were able to obtain perspicuous decision trees that significantly reduce the error rate.
This indicates that the representations produced by discourse planners should distinguish those elements that constitute the core of each discourse segment, in addition to representing the hierarchical structure of segments. Note that the notion of core is related to the notions of nucleus in RST, intended effect in (Young and Moore, 1994), and of point of a move in (Elhadad and McKeown, 1990), and that text generators representing these notions exist.
Moreover, in order to use the decision trees derived here, decisions about whether or not to make the core explicit and how to order the core and contributor(s) must be made before deciding cue occurrence, e.g., by exploiting other factors such as focus (McKeown, 1985) and a discourse history.
Once decisions about core:contributor ordering and cue occurrence have been made, a generator must still determine where to place cues and select appropriate lexical items. A major focus of our future research is to explore the relationship between the selection and placement decisions. Elsewhere, we have found that particular lexical items tend to have a preferred location, defined in terms of functional (i.e., core or contributor) and linear (i.e., first or second relatum) criteria (Moser and Moore, 1997). Thus, if a generator uses decision trees such as the one shown in Figure 3 to determine where a cue should be placed, it can then select an appropriate cue from those that can mark the given intentional / informational relations, and are usually placed in that functional-linear location. To evaluate this strategy, we must do further work to understand whether there are important distinctions among cues (e.g., so, because) apart from their different preferred locations. The work of Elhadad (1990) and Knott (1996) will help in answering this question.
Future work comprises further probing into machine learning techniques, in particular investigating whether other learning algorithms are more appropriate for our problem (Mooney, 1996), especially algorithms that take into account some a priori knowledge about features and their dependencies.
Figure 3 :
Decision tree for Core2 - placement
Table 3 :
Distributions of relations and cue occurrences
Table 4 :
Summary of learning results
|
1997-10-21T17:19:35.000Z
|
1997-07-07T00:00:00.000
|
{
"year": 1997,
"sha1": "4fe40ec15f2156e3cbda2c0419e213e52ba6e1a9",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=979628&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "c5b2e949e7ad88c7d2922b8e5c5f016e178db493",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
222133805
|
pes2o/s2orc
|
v3-fos-license
|
Relationship of Sarcopenia with Microcirculation Measured by Skin Perfusion Pressure in Patients with Type 2 Diabetes
Background Few studies have examined the relationship of sarcopenia with the microcirculation. The current study investigated the relationship of sarcopenia with microcirculatory function, as assessed by skin perfusion pressure (SPP), in type 2 diabetes mellitus (T2DM) patients. Methods In total, 102 T2DM patients who underwent SPP measurements and bioelectrical impedance analysis (BIA) were enrolled in this cross-sectional study. SPP was assessed using the laser Doppler technique. Sarcopenia was defined as low height-adjusted appendicular muscle mass (men, <7 kg/m2; women, <5.7 kg/m2) using BIA. We divided the participants into two groups based on SPP (≤50 and >50 mm Hg), and an SPP below 50 mm Hg was considered to reflect impaired microcirculation. Results Fourteen patients (13.7%) were diagnosed with impaired microcirculatory function of the lower limb based on SPP. The prevalence of sarcopenia in all subjects was 11.8%, but the percentage of patients with an SPP ≤50 mm Hg who had sarcopenia was more than triple that of patients with an SPP >50 mm Hg (28.6% vs. 9.1%, P=0.036). A significant positive correlation was found between SPP and appendicular muscle mass adjusted for height (P=0.041 for right-sided SPP). Multiple logistic regression analysis showed that patients with sarcopenia had an odds ratio of 4.1 (95% confidence interval, 1.01 to 24.9) for having an SPP ≤50 mm Hg even after adjustment for confounding factors. Conclusion These results suggest that sarcopenia may be significantly associated with impaired microcirculation in patients with T2DM. Nonetheless, the small number of patients and wide CI require cautious interpretation of the results.
INTRODUCTION
Diabetes may affect the microcirculation throughout the body [1]. Chronic vascular complications of diabetes are related to the function of the microcirculation [2]. In addition, an association between dysfunction of the microcirculation and macrovascular disease has been suggested [3,4]. Evidence supports the possibility that microcirculatory dysfunction can eventually lead to more severe microvascular and macrovascular complications; therefore, it is necessary to detect dysfunction of the microcirculation promptly and to identify patients with diabetes at risk of microcirculatory dysfunction [5][6][7].
Previous studies have used the skin microvasculature as a model to estimate the microvascular complications of diabetes [8,9] and to investigate the relationship between cardiovascular (CV) risk and microcirculatory function [6,7]. Several studies have reported associations of microcirculatory changes in the retinal and renal systems with cardiovascular disease (CVD) events [5,10]. However, studies regarding the relationship of microcirculatory changes in the feet and CVD are limited.
Measuring skin perfusion pressure (SPP) using laser Doppler is a noninvasive, easily performed method that measures the microcirculatory pressure of the artery at the skin level [11]. The SPP is valuable for evaluating microcirculatory function. Most previous studies regarding SPP have focused on limb ischemia. Several studies have revealed that SPP measurements can be used to accurately diagnose peripheral artery disease (PAD) and diabetic foot disease (DFD) compared with other methods such as the ankle-brachial index (ABI). In addition, previous studies have investigated whether SPP can predict wound healing in critical limb ischemia (CLI) after reconstruction or medication, especially in patients with diabetes or end-stage renal disease on hemodialysis [12][13][14].
Previous studies have suggested that vascular lesions might be associated with sarcopenia. The reported prevalence of sarcopenia was as high as 15% in patients with type 2 diabetes mellitus (T2DM), reflecting a higher prevalence than was observed in controls [15]. Sarcopenia, which has been proposed to be a prognostic factor in diabetes, is associated with poor outcomes such as a higher hospitalization rate, CV events, and mortality in patients with T2DM [16,17]. Until now, only a few studies have shown that sarcopenia in T2DM patients was related with PAD, DFD, CLI, and mortality after leg amputation [18,19]. Moreover, studies regarding sarcopenia as a prognostic factor in patients with impaired microcirculation remain very limited.
If sarcopenia is associated with poor microcirculation, this information could help select a treatment plan, and interventions to modify sarcopenia may improve the prognosis of patients with impaired microcirculation. However, to the best of our knowledge, no study has yet evaluated the association of the microcirculation, as assessed by SPP, with sarcopenia in T2DM patients. Therefore, we aimed to investigate whether sarcopenia is associated with microcirculatory function in patients with T2DM.
Study design and subjects
Among 187 participants with T2DM who underwent SPP measurements to evaluate complications of diabetes at the Endocrinology Division of Soonchunhyang University Bucheon Hospi-tal from September 2018 to October 2018, those with a history of PAD or type 1 diabetes, who were older than 80 years of age, and who did not have bioelectrical impedance analysis (BIA) data were excluded. Finally, 102 participants were included for analysis in this cross-sectional study. We reviewed patients' demographic, biochemical, and clinical data and treatment history in detail using their medical records. Subjects were classified by smoking status as non-smokers or current smokers. All participants were informed of the purpose of the study, and their consent was obtained. The study was approved by the Institutional Review Board of Soonchunhyang University School of Medicine, Bucheon Hospital (IRB number: 2019-08-021-001).
Anthropometric and biochemical measurements
The participants' weight and height were measured to the nearest 0.1 kg and 0.1 cm. Body mass index (BMI) was calculated as body weight (kg) divided by height (m) squared. Blood samples were collected from all patients after overnight fasting. Glycated hemoglobin (HbA1c) was measured by ion-exchange high performance liquid chromatography (Bio-Rad, Hercules, CA, USA). The methodology was aligned with the Diabetes Control and Complications Trial and National Glycohemoglobin Standardization Program standards. A liquid enzymatic method was used to measure total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), and triglyceride (TG; 7600-110, Hitachi Inc., Tokyo, Japan) levels. The selective inhibition method was used to measure high-density lipoprotein cholesterol (HDL-C) levels. The estimated glomerular filtration rate (eGFR) was calculated by the Modification of Diet in Renal Disease study equation. Serum fasting insulin was measured using an immunoradiometric assay kit (DIAsource, Ottignies-Louvain-la-Neuve, Belgium). Insulin resistance was evaluated by the homeostasis model assessment of insulin resistance (HOMA-IR) index. The HOMA-IR was calculated by the following formula: [fasting insulin (µIU/mL)×fasting plasma glucose (mmol/L)]/22.5.
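The two derived quantities above, BMI and HOMA-IR, follow directly from their formulas; a small sketch with hypothetical patient values is shown below.

```python
# A minimal sketch of the BMI and HOMA-IR calculations described above.
# The example values are hypothetical; fasting insulin is in uIU/mL and
# fasting plasma glucose in mmol/L, as the formula requires.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def homa_ir(fasting_insulin_uiu_ml: float, fasting_glucose_mmol_l: float) -> float:
    """HOMA-IR = [fasting insulin (uIU/mL) x fasting glucose (mmol/L)] / 22.5."""
    return fasting_insulin_uiu_ml * fasting_glucose_mmol_l / 22.5

print(round(bmi(72.5, 1.67), 1))      # 26.0 kg/m2
print(round(homa_ir(10.4, 7.2), 2))   # 3.33
```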
Arterial brachial-ankle pulse wave velocity (PWV) and the ABI were measured using an automated device (VP-1000, Colin, Komaki, Japan). Measurements of abdominal fat thickness were made using high-resolution B-mode ultrasonography. Visceral fat thickness (VFT) and subcutaneous fat thickness (SFT) were measured 1 cm above the umbilicus using a 12-MHz linear-array probe and a 3.5-MHz convex-array probe, respectively. VFT was defined as the distance between the anterior wall of the aorta and the posterior aspect of the rectus abdominis muscle perpendicular to the aorta. SFT was defined as the maximal thickness of the fat tissue layer between the skin-fat interface and the linea alba.
SPP measurements
SPP was measured with a laser Doppler probe using a Sensi-Lase PAD-IQ (Vasamed, Eden Prairie, MN, USA) on both the dorsal and plantar sides. According to previously published SPP reference means, we considered that SPP values of less than 50 mm Hg indicated impaired microcirculation. An SPP of 50 mm Hg has been suggested as a cut-off value for PAD in patients with conditions such as diabetes mellitus (DM) and chronic kidney disease who have a high probability of calcification in lower leg arteries [14]. We analyzed the lower of the two SPP values obtained from the plantar and dorsal aspects of each foot as a marker of impaired microcirculation.
Body composition and definition of sarcopenia
We used BIA to assess sarcopenia. Appendicular skeletal muscle mass (ASM) was calculated by summing the lean mass in the arms and legs, which primarily represents skeletal muscle mass in the extremities. We defined sarcopenia, or low muscle mass, based on ASM divided by height squared (ASM/Ht2; kg/m2). The ASM/Ht2 cut-off values for low muscle mass were 7 kg/m2 in men and 5.7 kg/m2 in women [20].
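A minimal sketch of this sarcopenia criterion is shown below; the cut-offs are those given above, while the ASM and height values are hypothetical BIA measurements.

```python
# A minimal sketch of the sarcopenia (low muscle mass) criterion used here:
# ASM/Ht^2 below 7.0 kg/m2 in men or 5.7 kg/m2 in women. Inputs are hypothetical.

ASM_HT2_CUTOFF = {"male": 7.0, "female": 5.7}  # kg/m2

def asm_ht2(asm_kg: float, height_m: float) -> float:
    """Appendicular skeletal muscle mass adjusted for height squared."""
    return asm_kg / height_m ** 2

def has_sarcopenia(asm_kg: float, height_m: float, sex: str) -> bool:
    return asm_ht2(asm_kg, height_m) < ASM_HT2_CUTOFF[sex]

print(has_sarcopenia(18.2, 1.70, "male"))    # 18.2 / 2.89 = 6.3 kg/m2 -> True
print(has_sarcopenia(17.5, 1.58, "female"))  # 17.5 / 2.50 = 7.0 kg/m2 -> False
```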
Statistical analysis
Data were reported as mean±standard deviation or median (interquartile range) for continuous variables and number (%) for categorical variables. The correlations of ASM/Ht 2 with SPP and other clinical variables were assessed by Spearman rank correlation coefficients. The lower of the two SPP values measured at the dorsal and plantar surfaces of each foot was taken as the SPP.
Differences in demographic and clinical characteristics according to whether SPP was below 50 mm Hg on either side and sarcopenia were evaluated using the Student t-test and chisquare test for categorical variables. Odds ratios (ORs) were used as a measure of the association between the SPP and the presence of sarcopenia in multivariate logistic regression analysis. Multiple logistic regression analysis with the presence or absence of an SPP below 50 mm Hg as the dependent variable was performed.
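As a rough illustration of this model (the analysis itself was done in SPSS), the sketch below fits a logistic regression with low SPP as the outcome and sarcopenia plus two confounders as covariates, then converts the coefficients to odds ratios with 95% confidence intervals. The data are synthetic, so the resulting numbers only demonstrate the procedure.

```python
# A rough stand-in for the multiple logistic regression described above:
# low SPP (<=50 mm Hg) as the dependent variable, sarcopenia and confounders
# as covariates. The dataset below is synthetic and purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 102
df = pd.DataFrame({
    "sarcopenia": rng.integers(0, 2, n),
    "age": rng.normal(56, 10, n),
    "hba1c": rng.normal(7.5, 1.2, n),
    "low_spp": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["sarcopenia", "age", "hba1c"]])
fit = sm.Logit(df["low_spp"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)      # adjusted odds ratios
ci = np.exp(fit.conf_int())           # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```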
A two-tailed P value less than 0.05 was considered to indicate statistical significance. All statistical analyses were performed using SPSS version 14.0 (SPSS Inc., Chicago, IL, USA).
Baseline clinical characteristics of study subjects
The general characteristics of the study participants are presented in Table 1. The mean age of the participants was 55.9 years, and their mean BMI was 26.0±3.1 kg/m 2 . Eighty (78.4%) were treated with statins and 53 (52%) were treated with antiplatelet agents. In total, 14 (13.7%) patients had either a dorsal or plantar SPP below 50 mm Hg on either side. The prevalence of sarcopenia in all subjects was 11.8%. Median age was used to create two age groups (<57 years vs. ≥57 years), and patients' general characteristics were compared between these two groups (Supplemental Table S1). The younger group had a higher ASM/Ht 2 and contained a greater percentage of men and current smokers. The prevalence of sarcopenia and right-side SPP below 50 mm Hg were significantly higher in the older group (19.2% vs. 4% and 19.2% vs. 6%, respectively). We also compared patients' characteristics according to sex (Supplemental Table S2). Men were younger and had a shorter duration of DM, higher right dorsal SPP, higher TG levels, and higher ASM/Ht 2 than women. The prevalence of sarcopenia was comparable in both sexes. However, the prevalence of right-side SPP below 50 mm Hg and any-side SPP below 50 mm Hg were significantly higher in women.
Bivariate correlations of SPP and ASM/Ht 2 with clinical variables
The correlations of SPP and ASM/Ht 2 with clinical variables are shown in Table 2. The lower of the two SPP values measured at the dorsal and plantar surfaces of each foot was used as the SPP. Right-side SPP was positively correlated with ASM/Ht 2 (r=0.198, P=0.041). Left-side SPP was positively correlated with TC and LDL-C levels (r=0.242, P=0.018 and r=0.241, P=0.043, respectively). Right-side SPP was positively correlated with leftside SPP (r=0.512, P<0.001). ASM/Ht 2 was positively correlated with BMI (r=0.458, P<0.001) and negatively correlated with age (r=-0.359, P=0.001) and HDL-C (r=-0.304, P=0.005). ASM/ Ht 2 showed borderline significant correlations with VFT and SFT. However, SPP was not correlated with HbA1c, ABI, or PWV.
When we analyzed dorsal and plantar SPP on each side separately, right dorsal SPP and right plantar SPP levels were positively correlated with ASM/Ht 2 (r=0.279, P=0.008 and r=0.238, P= 0.016, respectively) and right dorsal SPP and left dorsal SPP were negatively correlated with age (r=-0.208, P=0.037 and r= -0.032, P=0.001, respectively) (data not shown).
Comparisons of clinical characteristics according to the presence of low SPP
The clinical characteristics and laboratory findings according to the presence of SPP levels below 50 mm Hg on either side are presented in Table 3. Higher SPP was more prevalent in men than women (men, 68%; women, 32%). Patients with higher SPP were more likely to consume alcohol than those with lower SPP (P=0.001). The prevalence of sarcopenia was significantly higher in patients with lower SPP than in those with higher SPP (28.6% vs. 9%, P=0.036). The mean values of ASM/Ht2 were borderline significantly lower in patients with lower SPP than in those with higher SPP (7.3 vs. 8.0, P=0.056). There were no significant differences in age, duration of DM, BMI, fat thickness, HbA1c, lipid profiles, eGFR, ABI on each side, medication history (e.g., statins or anti-platelet agents), and treatment modality according to whether patients' SPP was above or below 50 mm Hg. Table 4 presents comparisons of clinical characteristics and laboratory findings according to the presence of sarcopenia.
DISCUSSION
Our study showed that sarcopenia was significantly associated with the presence of impaired microcirculatory function in patients with T2DM after adjustment for confounding factors. The prevalence of sarcopenia was higher in those with an SPP ≤50 mm Hg than in those with an SPP >50 mm Hg, and significant positive correlations were found between SPP and appendicular muscle mass adjusted for height. The microcirculation is a system of blood vessels less than 150 μm in diameter, comprising arterioles, capillaries, and venules [21]. This system is responsible for the primary function of the vasculature [21]. Diabetes may affect the microcirculation in diverse parts of the body, from the eyes to the kidney and skin [1]. Chronic vascular complications in diabetes are related to the microcirculation [2][3][4]. Diabetes-related microcirculatory dysfunction can eventually lead to more severe complications; therefore, it is necessary to detect microcirculatory dysfunction promptly and to identify patients at risk.
Skin microcirculation is an accessible model for estimating diabetes-associated vascular complications [9]. SPP utilizes a laser Doppler probe and pressure cuff to evaluate reactive hyperemia in the skin [11]. This technique can be performed easily, noninvasively, and simply, and it only takes a few minutes. This method does not cause any discomfort to the patient, and it can provide precise information about vascular status and potential ischemic areas [22,23].
Previous research investigated whether laser Doppler SPP can be used to evaluate the severity of limb ischemia in diabetes and/or hemodialysis patients [23]. In addition, studies evaluated whether SPP measurements in the foot were comparable to ABI and whether SPP could be used as a screening test for limb ischemia [14,24]. They found that SPP measurements are a valuable method for diagnosing PAD and that SPP can more easily be used to assess the severity of limb ischemia than the ABI. Although the ABI is the most widely recommended initial screening test for diagnosing PAD and assessing the severity of the obstruction in patients with PAD in the legs, the test has limitations in subjects with diabetes, in whom arterial calcificationassociated stiffness is highly prevalent and may cause false results due to incompressibility [25].
In contrast, SPP is not affected by arterial wall calcification or skin temperature [23]. Because SPP measures the final pathway of capillary flow through the skin with a laser Doppler probe, it has the potential to determine severe limb ischemia status in the setting of calcification. Moreover, SPP can assess the effectiveness of revascularization or medical therapy and can predict successful wound healing. Studies have shown that SPP can assess microcirculation more effectively than macrocirculatory tests such as the ABI in patients with diabetes [24]. A study suggested that SPP values of 40 mm Hg or higher may be a reasonable treatment target for ischemic wounds [13]. Other studies showed that an SPP <50 mm Hg was associated with the highest sensitivity for detecting PAD and morbidity [14,26]. Okamoto et al. [14] reported the superiority of SPP measurements for detecting PAD in hemodialysis patients with a cutoff value of 50 mm Hg. In this study, we defined normal SPP values as >50 mm Hg, and using this cutoff, 13.7% of patients were diagnosed with impaired microcirculatory function in the lower limb. This prevalence rate is lower than those reported in other studies [18,26]. Our study did not include patients diagnosed with PAD, and the mean ABI of study population was 1.15, representing a lower probability of significant limb ischemia. Differences in the normal reference range and heterogeneity of the participants of each study may explain the different prevalence rates reported for microcirculatory dysfunction in the lower limbs. In addition, whereas previous studies have reported a positive correlation between the ABI and SPP, no significant correlation was found between the ABI and SPP in the present study. Although we cannot provide a definitive explanation for the lack of a correlation between the ABI and SPP values in this study, we may speculate regarding some possible reasons. Most previous studies analyzed populations with PAD or other serious conditions such as CLI. In contrast, we only included T2DM patients without PAD. It therefore seems likely that differences in study populations (e.g., stable PAD vs. serious PAD vs. CLI vs. no evidence of PAD) may influence this result. Ishioka et al. [27] suggested some possible explanations for discrepancies between ABI and SPP values. In patients with normal or high ABI values (≥0.9) and low SPP values (<50 mm Hg), non-compressible vessels caused by high arterial calcification, as shown in hemodialysis or DM patients, might pseudo-normalize or even elevate the ABI value. Another possibility is the existence of below-ankle arterial lesions. ABI is measured using ankle pressure, but SPP examines the microcirculation of below-ankle locations such as the instep and sole. If a patient has impaired microcirculation only in the arteries below the ankle, normal ABI and abnormally low SPP values might be expected. In patients with a low ABI and normal SPP values, poor microcirculation below the knee and above the ankle might explain the discrepancy between the ABI and SPP.
Previous literature showed that the prevalence of sarcopenia was as high as 15% in Korean patients with T2DM, a proportion three times higher than that of control subjects [15]. Those researchers used the dual-energy X-ray absorptiometry (DXA) skeletal muscle index to diagnose sarcopenia. In another Korean study of 414 T2DM patients aged 65 years and older, the risk for low muscle mass was two to four times higher in patients with diabetes than in the control group [28]. In this study, we used BIA, and the prevalence of sarcopenia in all subjects was about 12%. We defined sarcopenia as a low ASM/Ht2 of less than 7.0 kg/m2 (in men) or 5.7 kg/m2 (in women). The prevalence of sarcopenia varies considerably, even within the same cohort, according to the different instruments and cut-off values that have been applied to define low muscle mass. In 2014, Chen et al. [20] presented a consensus statement from the Asian Working Group for Sarcopenia and proposed instruments and cut-off values for Asian countries. Both BIA and DXA were identified as appropriate for determining body composition using criteria for Asians.
Sarcopenia occurs earlier in patients with T2DM than in those without T2DM, and is strongly related to increased frailty in patients with DM [16]. Patients with both sarcopenia and diabetes are more likely to be hospitalized and to experience poor clinical outcomes [17]. Although previous studies have suggested that vascular lesions might be associated with sarcopenia, only a few studies have shown that sarcopenia in T2DM patients is related with PAD, CLI, and mortality after leg amputation due to DFD [18,19]. Kim et al. [28] showed that the mortality rate in patients with sarcopenia was higher than that in those without sarcopenia in patients who underwent amputation for diabetic foot. These results imply that preventing sarcopenia in patients with diabetes is important for maintaining high survival rates. The presence of sarcopenia can be a predictor of the outcomes of leg amputation.
Cheng et al. [18] investigated the associations of sarcopenia with DFD in a cross-sectional study. They found that sarcopenia was independently associated with DFD (OR, 2.05; 95% CI, 1.15 to 3.89) after controlling for confounders including age, DM duration, and chronic vascular complications. A worse prognosis was seen in patients with DFD accompanied by sarcopenia. Compared to patients without sarcopenia, patients with sarcopenia exhibited a higher proportion of PAD (8.1% vs. 3.1%). The percentage of sarcopenia in DFD patients was more than double than that in patients without DFD (35.3% vs. 16.4%). The skeletal muscle index was significantly lower in patients with DFD (6.79±1.20 kg/m 2 vs. 7.21±1.05 kg/m 2 ).
To the best of our knowledge, this is the first study to investigate the associations of microcirculation, as assessed by SPP, with sarcopenia in patients with T2DM. The present study showed that individuals with sarcopenia had an OR of 4.1 for low SPP compared to those without sarcopenia after adjusting for sex, age, smoking, HbA1c, duration of DM, statin treatment, and antiplatelet agent treatment. If sarcopenia is associated with poor microcirculation, as seen in our results, this information could help select treatment plans to improve sarcopenia at earlier stages in patients with poor microcirculation.
The present study has several limitations that should be mentioned. First, because of the cross-sectional nature of this study, we could not determine the causality of the relationship between impaired microcirculation, as assessed by reduced SPP, and sarcopenia. Second, the sample size was too small to clarify the association of SPP with sarcopenia. The number of patients with sarcopenia in this study was only 12. As a result, the CI was very wide regardless of adjustment. We cannot exclude type 2 error because of the small sample size. A larger patient sample is needed to confirm our results. Third, there was a potential for selection bias because our study population consisted of individuals who underwent assessments for DM complications at a single university hospital; therefore, the present study subjects are not fully representative of all patients with DM. A larger sample size may allow generalization of our results. An additional limitation is that we assessed sarcopenia using only BIA. The definition of sarcopenia includes low muscle mass and low muscle strength. However, we did not evaluate muscle strength or physical performance. Nevertheless, a major strength of the present study is that it is the first study to investigate the potential association between microcirculation, as assessed using SPP, and sarcopenia.
In conclusion, we found that sarcopenia was significantly associated with impaired microcirculation, as assessed by SPP, in patients with T2DM. This is the first study to evaluate SPP and sarcopenia in T2DM patients, making our results meaningful. However, future prospective studies with a larger number of patients are required to establish a direct relationship between impaired microcirculation and sarcopenia in patients with T2DM. Early detection of peripheral hypoperfusion in patients with sarcopenia may be a valuable strategy for detecting and improving complications in patients with T2DM.
CONFLICTS OF INTEREST
No potential conflict of interest relevant to this article was reported.
Faster and Enhanced Inclusion-Minimal Cograph Completion
We design two incremental algorithms for computing an inclusion-minimal completion of an arbitrary graph into a cograph. The first one is able to do so while providing an additional property which is crucial in practice to obtain inclusion-minimal completions using as few edges as possible : it is able to compute a minimum-cardinality completion of the neighbourhood of the new vertex introduced at each incremental step. It runs in $O(n+m')$ time, where $m'$ is the number of edges in the completed graph. This matches the complexity of the algorithm in [Lokshtanov, Mancini and Papadopoulos 2010] and positively answers one of their open questions. Our second algorithm improves the complexity of inclusion-minimal completion to $O(n+m\log^2 n)$ when the additional property above is not required. Moreover, we prove that many very sparse graphs, having only $O(n)$ edges, require $\Omega(n^2)$ edges in any of their cograph completions. For these graphs, which include many of those encountered in applications, the improvement we obtain on the complexity scales as $O(n/\log^2 n)$.
Introduction
We consider the problem of completion of an arbitrary graph into a cograph, i.e. a graph with no induced path on 4 vertices. This is a particular case of graph modification problem, in which one wants to perform elementary modifications to an input graph, typically adding and removing edges and vertices, in order to obtain a graph belonging to a given target class of graphs, which satisfies some additional property compared to the input. Ideally, one would like to do so by performing a minimum number of elementary modifications. This is a fundamental problem in graph algorithms, which corresponds to the notion of projection in geometry: given an element a of a ground set X equipped with a distance and a subset S ⊆ X, find an element of S that is closest to a for the provided distance (here, the number of elementary modifications performed on the graph). This is also the meaning of modification problems in algorithmic graph theory: they answer the question to know how far is a given graph from satisfying a target property.
Here, we consider the modification problem called completion, where only one operation is allowed: adding an edge. In this case, the quantity to be minimised, called the cost of the completion, is the number of edges added, which are called fill edges. The particular case of completion problems has been shown very useful in algorithmic graph theory and several other contexts. These problems are closely related to some important graph parameters, such as treewidth [2], and can help to efficiently solve problems that otherwise are hard on the input graph [6]. They are also useful for other algorithmic problems arising in computer science, such as sparse matrix multiplication [50], and in other disciplines such as archaeology [37], molecular biology [7] and genomics, where they played a key role in the mapping of the human genome [26,36].
Unfortunately, finding the minimum number of edges to be added in a completion problem is NP-hard for most of the target classes of interest (see, e.g., the thesis of Mancini [42] for further discussion and references). To deal with this difficulty of computation, the domain has developed a number of approaches. This includes approximation [45], restricted input [8,9,12,38,39,44], parameterization [13,22,35,43,54] and inclusion-minimal completions. In the latter approach, one does not ask for a completion having the minimum number of fill edges but only ask for a set of fill edges which is minimal for inclusion, i.e. which does not contain any proper subset of fill edges whose addition also results in a graph in the target class. This is the approach we follow here. In addition to the case of cographs [41], it has been followed for many other graph classes, including chordal graphs [29], interval graphs [20,46], proper interval graphs [49], split graphs [30], comparability graphs [28] and permutation graphs [19].
The rationale behind the inclusion-minimal approach is that minimum-cardinality completions are in particular inclusion-minimal. Therefore, if one is able to efficiently sample the space of inclusion-minimal completions, one can compute several of them, pick the one of minimum cost and hope to get a value close to the optimal one. One of the reasons for the success of inclusion-minimal completion algorithms is that this heuristic approach was shown to perform quite well in practice [4,5]. The second reason for this success, which is a key point for the approach, is that it is usually possible to design algorithms of low complexity for the inclusion-minimal relaxation of completion problems.
Related work. Modification problems into the class of cographs have already received a great amount of attention [27,31,32,40,41], as well as modification problems into some of its subclasses, such as quasi-threshold graphs [10] and threshold graphs [23]. One reason for this is that cographs are among the most widely studied graph classes. They have been discovered independently in many contexts [15] and they are known to admit very efficient algorithms for problems that are hard in general [11]. Moreover, very recently, cograph modification was shown to be a powerful approach for solving problems arising in complex network analysis, e.g. community detection [34], inference of phylogenomics [32] and modelling [18]. The modification problem into the class of quasi-threshold graphs has also been used and it revealed that complex networks encountered in some contexts are actually very close to being quasi-threshold graphs [10], in the sense that only a few modifications are needed to transform them into quasi-threshold graphs. This growing need for treating real-world datasets, whose size is often huge, calls for more efficient algorithms, both with regard to the running time and with regard to the quality (number of modifications) of the solution returned by the algorithm.
Our results. Our main contribution is to design two algorithms for inclusion-minimal cograph completion. The first one (Section 4) is an improvement of the incremental algorithm in [41]. It runs in the same O(n + m′) complexity, where m′ is the number of edges in the completed graph, and is in addition able to select one minimum-cardinality completion of the neighbourhood of the new incoming vertex at each incremental step of the algorithm, thereby positively answering an open question of [41] (Question 3 in their conclusion). It must be clear that this does not guarantee that the completion computed at the end of the algorithm has minimum cardinality, but this feature is highly desirable in practice to obtain completions using as few fill edges as possible.
When this additional feature is not required, our second algorithm (Section 5) solves the inclusion-minimal problem in O(n + m log² n) time, which only depends on the size of the input. Furthermore, we prove that many sparse graphs, namely those having mean degree fixed to a constant, require Ω(n²) edges in any of their cograph completions. This result is worthy of interest in itself and implies that, for such graphs, which have only O(n) edges, the improvement of the complexity we obtain with our second algorithm is quite significant: a factor of n/log² n.
Preliminaries
All graphs considered here are finite, undirected, simple and loopless. In the following, G is a graph, V (or V(G)) is its vertex set and E (or E(G)) is its edge set. We use the notation G = (V, E); n = |V| stands for the cardinality of V and m = |E| for the cardinality of E. An edge between vertices x and y will be arbitrarily denoted by xy or yx. The neighbourhood of x is denoted by N(x) (or N_G(x)) and, for a subset S ⊆ V, N(S) denotes the set of vertices of V \ S having at least one neighbour in S.

For a rooted tree T and a node u ∈ T, we denote parent(u), C(u), Anc(u) and Desc(u) the parent and the sets of children, ancestors and descendants of u respectively, using the usual terminology and with u belonging to Anc(u) and Desc(u). The lowest common ancestor of two nodes u and v, denoted lca(u, v), is the lowest node in T which is an ancestor of both u and v. The subtree of T rooted at u, denoted by T_u, is the tree induced by node u and all its descendants in T. We use two other notions of subtree, which we call upper tree and extracted tree. The upper tree of a subset of nodes S of T is the tree, denoted T^up_S, induced by the set Anc(S) of all the ancestors of the nodes of S, i.e. Anc(S) = ⋃_{s ∈ S} Anc(s). The tree extracted from S in T, denoted T^xtr_S, is defined as the tree whose set of nodes is S and whose parent relationship is the transitive reduction of the ancestor relationship in T. More explicitly, for u, v ∈ S, u is the parent of v in T^xtr_S iff u is an ancestor of v in T and there exists no node v′ ∈ S such that v′ is a strict ancestor of v and a strict descendant of u in T.

Cographs. One of their simplest definitions is that they are the graphs that do not admit the P_4 (path on 4 vertices) as an induced subgraph. This shows that the class is hereditary, i.e., an induced subgraph of a cograph is also a cograph. Equivalently, they are the graphs obtained from a single vertex under the closure of the parallel composition and the series composition. The parallel composition of two graphs G_1 = (V_1, E_1) and G_2 = (V_2, E_2) is their disjoint union, i.e., the graph G_par = (V_1 ∪ V_2, E_1 ∪ E_2). The series composition of G_1 and G_2 is their disjoint union plus all possible edges between vertices of G_1 and vertices of G_2, i.e., the graph G_ser = (V_1 ∪ V_2, E_1 ∪ E_2 ∪ {xy | x ∈ V_1, y ∈ V_2}). These operations can naturally be extended to an arbitrary finite number of graphs.
This gives a nice representation of a cograph G by a tree whose leaves are the vertices of G and whose internal nodes (non-leaf nodes) are labelled //, for parallel, or S, for series, corresponding to the operations used in the construction of G. It is always possible to find such a labelled tree T representing G such that every internal node has at least two children, no two parallel nodes are adjacent in T and no two series nodes are adjacent. This tree T is unique [15] and is called the cotree of G, see example in Fig. 1. Note that the subtree T u rooted at some node u of cotree T also defines a cograph, denoted G u , whose set of vertices is the set of leaves of T u , denoted V (u) in the following. The adjacencies between vertices of a cograph can easily be read on its cotree, in the following way.
Remark 1
Two vertices x and y of a cograph G having cotree T are adjacent iff the lowest common ancestor u of leaves x and y in T is a series node. Otherwise, if u is a parallel node, x and y are not adjacent.
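To make Remark 1 concrete, here is a minimal sketch in Python of a pointer-based cotree and of the adjacency test it implies. All identifiers (CotreeNode, label, adjacent) are ours and not taken from the paper, and lca is a naive linear-time placeholder rather than the O(log n) query used later.

class CotreeNode:
    def __init__(self, label, parent=None):
        self.label = label            # 'series', 'parallel', or a vertex name for leaves
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def ancestors(node):
    """Return the ancestors of node, from node up to the root."""
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return path

def lca(u, v):
    """Naive lowest common ancestor (linear time, for illustration only)."""
    anc_u = set(id(a) for a in ancestors(u))
    for a in ancestors(v):
        if id(a) in anc_u:
            return a
    return None

def adjacent(leaf_x, leaf_y):
    """Remark 1: x and y are adjacent iff lca(x, y) is a series node."""
    return lca(leaf_x, leaf_y).label == 'series'

# Small example: a series root with leaves a, b and a parallel node over c, d.
root = CotreeNode('series')
a, b = CotreeNode('a', root), CotreeNode('b', root)
p = CotreeNode('parallel', root)
c, d = CotreeNode('c', p), CotreeNode('d', p)
assert adjacent(a, b) and adjacent(a, c) and not adjacent(c, d)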
The incremental approach. Our approach for computing a minimal cograph completion of an arbitrary graph G is incremental, in the sense that we consider the vertices of G one by one, in an arbitrary order (x_1, . . . , x_n), and at step i we compute a minimal cograph completion H_i of G_i = G[{x_1, . . . , x_i}] from a minimal cograph completion H_{i−1} of G_{i−1}, by adding only edges incident to x_i. This is possible thanks to the following observation, which holds for all hereditary graph classes that are also stable under the addition of a universal vertex, and in particular for cographs. Lemma 1 (see e.g. [46]). Let G = (V, E) be an arbitrary graph and let H be a minimal cograph completion of G. Consider a new vertex x ∉ V adjacent to an arbitrary subset N(x) ⊆ V of vertices and denote G′ = G + x and H′ = H + x the graphs obtained by adding x to G and H respectively. Then, there exists a subset M ⊆ V \ N(x) of vertices such that H″ = (V ∪ {x}, E(H′) ∪ {xy | y ∈ M}) is a cograph. Moreover, for any such set M which is minimal for inclusion, H″ is an inclusion-minimal cograph completion of G′. We call such completions (minimal) constrained completions of G + x.
For any subset S ⊆ V of vertices, we say that we fill S in H if we make all the vertices of S \ N (x) adjacent to x in the completion H of G + x. The edges added in a completion are called fill edges and the cost of the completion is its number of fill edges.
The new problem. From now on, we consider the following problem, with slightly modified notations. G = (V, E) is a cograph, and G + x is the graph obtained by adding to G a new vertex x adjacent to some arbitrary subset N(x) of vertices of G. Both our algorithms take as input the cotree of G and the neighbourhood N(x) of the new vertex x. They compute the set N′(x) ⊇ N(x) of neighbours of x in some minimal constrained cograph completion H′ of G + x, i.e. obtained by adding only edges incident to x (cf. Lemma 1). Then, the cotree of G is updated under the insertion of x with neighbourhood N′(x), in order to obtain the cotree of H′, which will serve as input in the next incremental step.
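The following sketch illustrates the incremental scheme of Lemma 1 on an adjacency-set representation, with the per-step routine left as a trivial placeholder: it returns N′(x) = V(H), which is always valid because cographs are stable under the addition of a universal vertex, but is of course far from minimal. The algorithms of Sections 4 and 5 replace this placeholder by a minimal constrained completion computed on the cotree; all identifiers below are ours.

def complete_neighbourhood(H, Nx):
    """Placeholder: return some N'(x) >= N(x) keeping H + x a cograph.
    Trivial (and far from minimal) choice: make x universal in H."""
    return set(H)

def incremental_cograph_completion(vertices, neighbours):
    """vertices: list of vertices of G in insertion order; neighbours[x]: set of all
    neighbours of x in G (symmetric adjacency). Returns a cograph supergraph H of G,
    represented as a dict of adjacency sets."""
    H = {}
    for x in vertices:
        Nx = neighbours[x] & set(H)          # neighbours of x already inserted
        Nx_completed = complete_neighbourhood(H, Nx)
        assert Nx <= Nx_completed            # only edge additions are allowed
        H[x] = set(Nx_completed)
        for y in Nx_completed:
            H[y].add(x)
    return H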
We now introduce some definitions and characterisations we use in the following.
Definition 1 (Full, hollow, mixed). Let G be a cograph and let x be a vertex to be inserted in G with neighbourhood N(x) ⊆ V(G). A subset S ⊆ V(G) is full if S ⊆ N(x), hollow if S ∩ N(x) = ∅, and mixed otherwise. When S is full or hollow, we say that S is uniform.
We use these notions for nodes u of the cotree as well, referring to their associated set of vertices V (u). We denote C nh (u) the subset of non-hollow children of a node u.
Theorem 1 below gives a characterisation of the neighbourhood of a new vertex x so that G + x is a cograph.
Theorem 1 ( [16,17]). (Cf. Fig. 2) Let G be a cograph with cotree T and let x be a vertex to be inserted in G with neighbourhood N (x) ⊆ V (G). If the root of T is mixed, then G + x is a cograph iff there exists a mixed node u of T such that: 1. all children of u are uniform and 2. for all vertices y ∈ V (G) \ V (u), y ∈ N (x) iff lca(y, u) is a series node.
Moreover, when such a node u exists, it is unique and it is called the insertion node.
Remark 2 In all the rest of the article, we do not consider the case where the new vertex x is adjacent to none of the vertices of G or to all of them. Therefore, the root of the cotree T of G is always mixed wrt. x.
The reason for this is that the case where the root is uniform is straightforward: the only minimal completion of G + x adds an empty set of edges and the update of cotree T is very simple. By definition, inserting x in G with its neighbourhood N (x) in some constrained cograph completion H of G+x results in a cograph, namely H. Therefore, to any such completion H we can associate one insertion node which is uniquely defined, from Theorem 1 and from the restriction stated in Remark 2.
Definition 2. Let G be a cograph with cotree T and let x be a vertex to be inserted in G. A node u of T is called a completion-minimal insertion node iff there exists a minimal constrained completion H of G + x such that u is the insertion node associated to H.
From now on and until the end of the article, G is a cograph, T is its cotree, x is a vertex to be inserted in G, and we consider only constrained cograph completions of G + x. We therefore omit to state this systematically.

Fig. 2. Insertion of a vertex x so that G + x is a cograph. The nodes and triangles in black (resp. white) correspond to the parts of the tree that are full wrt. x (resp. hollow wrt. x). The insertion node u, which is mixed, appears in grey.
Characterisation of minimal constrained completions
The goal of this section is to give necessary and sufficient conditions for a node u of T to be a completion-minimal insertion node. From Theorem 1, the subtrees attached to the parallel strict ancestors of the insertion node u must be hollow. As we can modify the neighbourhood of x only by adding edges, it follows that if u is the insertion node of some completion, then u is eligible, as defined below.
Definition 3 (eligible). A node u of T is eligible iff for all the strict ancestors v of u that are parallel nodes, all the children of v distinct from its unique child u′ ∈ C(v) ∩ Anc(u) are hollow.
When a node u is eligible, there is a natural way to obtain a completion of the neighbourhood of x, which we call the completion anchored at u. Definition 4 (Completion anchored at u). Let u be an eligible node of T . The completion anchored at u is the one obtained by making x adjacent to all the vertices of V (G) \ V (u) whose lowest common ancestor with u is a series node and by filling all the children of u that are non-hollow.
The completion anchored at some eligible node u may not be minimal but, on the other hand, all minimal completions H are completions anchored at some eligible node u, namely the insertion node of H.
Lemma 2.
For any completion-minimal insertion node u of T , there exists a unique minimal completion H of G+x such that u is the insertion node associated to H and this unique completion is the completion anchored at u.
Proof. The neighbourhood of x outside V(u), which we denote N_ū(x), is given by Theorem 1 and is the same for every completion having u as insertion node. Moreover, as in any such completion the children of u in T are uniform, any non-hollow child v of u must be filled. Then, the completion H_min defined by the modified neighbourhood N_min(x) = N_ū(x) ∪ ⋃_{v ∈ C(u), v non-hollow} V(v) of x is included in every completion having u as insertion node. As there exists some minimal completion having u as insertion node, then from Theorem 1, u is left mixed after completion and so u has some hollow child with regard to N(x). Consequently, u is also mixed with regard to N_min(x). Finally, since the insertion of x with neighbourhood N_min(x) satisfies conditions 1 and 2 of Theorem 1, the completion H_min has u as insertion node. And since H_min is included in all such completions, it follows that H_min is the unique minimal completion having u as insertion node. □

To characterise completion-minimal insertion nodes, we will use the notion of forced nodes. Their main property (see Lemma 4 below) is that they are full in any completion of G + x.
Definition 5 (Completion-forced). Let G be a cograph with cotree T and let x be a vertex to be inserted in G. A completion-forced (or simply forced) node u is inductively defined as a node satisfying at least one of the three following conditions: 1. u is full, or 2. u is a parallel node with all its children non-hollow, or 3. u is a series node with all its children completion-forced.
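Definitions 1 and 5 translate directly into the following naive recursive checks, reusing the CotreeNode sketch above; N(x) is given as a set of leaf objects. This brute-force version recomputes leaf sets and is only meant to make the case analysis explicit; the algorithm of Section 4 obtains the same flags bottom-up in O(d′) total time. All names are ours.

def leaves(u):
    return [u] if not u.children else [l for c in u.children for l in leaves(c)]

def is_full(u, Nx):
    return all(l in Nx for l in leaves(u))

def is_hollow(u, Nx):
    return all(l not in Nx for l in leaves(u))

def is_completion_forced(u, Nx):
    """Definition 5, checked recursively (naive, for illustration only)."""
    if is_full(u, Nx):                                       # condition 1
        return True
    if u.label == 'parallel':                                # condition 2
        return all(not is_hollow(c, Nx) for c in u.children)
    if u.label == 'series':                                  # condition 3
        return all(is_completion_forced(c, Nx) for c in u.children)
    return False                                             # a non-full leaf is not forced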
Lemma 3. Let G be a cograph with cotree T and let x be a vertex to be inserted in G. A node u of T is completion-forced iff there exists a unique cograph completion of G u + x, which is the one where all missing edges between x and V (u) are added.
Proof. Let us show the result by induction on |V (u)|. First, consider a completionforced node u of T and a completion H of G u + x. If u satisfies Condition 3 of Definition 5, then, by induction hypothesis, all its children are full in H (as H is also a cograph completion of G v +x, for any child v of u) and so is u. If u satisfies Condition 1, then since u is full before completion, it is also full after. Consider now the case where u is completion-forced because it satisfies Condition 2 of Definition 5, i.e. u is parallel and all its children are non-hollow.
Assume for contradiction that H does not fill u. Then, denote u′ the insertion node associated to H in T_u. Theorem 1 implies that u′ is eligible, and since all the children of u are non-hollow, it follows that u′ is not a strict descendant of u. Consequently, u′ = u and since all the children of u are non-hollow, Lemma 2 implies that H fills all of them, and so H fills u as well: a contradiction. Thus, u is filled in any completion H of G_u + x and therefore, there exists a unique such completion.
Conversely, consider a non-completion-forced node u of T. If u is a series node, then u has at least one non-completion-forced child v. By induction hypothesis, there exists a completion H′ of G_v + x that does not fill v. Then, the completion H of G_u + x that coincides with H′ on V(v) and that fills all the other children of u is a cograph completion of G_u + x that does not fill u. Now, if u is a parallel node, then u has at least one hollow child v. As u is clearly eligible in T_u, the cograph completion H anchored at u is properly defined. Since H leaves v hollow, H does not fill u, which achieves the proof. □

Lemma 4. Any completion-forced node u of T is filled in all the completions of G + x.
Proof. This is a direct consequence of Lemma 3. Indeed, any completion of G + x restricted to V(u) is a completion of G_u + x. Moreover, from Lemma 3, there exists a unique cograph completion of G_u + x and this completion makes V(u) full. □

The next remark directly follows from Theorem 1 and Lemma 2.
Remark 3
The insertion node u of any minimal completion of G+x has at least one hollow child and at least one non-hollow child. Therefore, u is non-hollow and non-completion-forced.
We now characterise the nodes u that contain some minimal-insertion node in their subtree T u (including u itself). In our algorithms, we will use this characterisation to decide whether we have to explore the subtree of a given node.
Lemma 5.
For any node u of T , T u contains some completion-minimal insertion node iff u is eligible, non-hollow and non-completion-forced.
Proof. If u is eligible, non-hollow and non-completion-forced, consider such a node v of T_u which is as low as possible in T_u. If v is a series node, as v is eligible, so are all its children. It follows that all the children of v are either completion-forced or hollow. Since v is non-completion-forced, at least one of its children is hollow and since v is non-hollow, at least one of its children is non-hollow. The same holds if v is a parallel node: since v is non-completion-forced, at least one of its children is hollow and since v is non-hollow, at least one of its children is non-hollow. Then, in both cases, in the completion H anchored at v, v is mixed and so is u. Consequently, there exists a minimal completion H′ included in H and necessarily u is mixed in H′ as well. From Theorem 1, it is straightforward to see that all minimal completions having an insertion node out of T_u leave u full or hollow. It follows that the insertion node associated to H′ belongs to T_u. Now, conversely, if there exists v ∈ T_u which is a completion-minimal insertion node, let us denote H the minimal completion anchored at v. From Remark 3, v is non-hollow in G + x, and so is u. Moreover, from Theorem 1, it is straightforward to see that v is eligible and so is u. From Theorem 1 again, v is mixed in H and so is u. Then, Lemma 4 implies that u is non-completion-forced, which achieves the proof of the lemma. □

Lemma 6 below gives additional conditions for u itself to be an insertion node.

Lemma 6. A node u of T is a completion-minimal insertion node iff u is eligible, non-hollow and non-completion-forced and u satisfies in addition one of the two following conditions: 1. u is a series node and u has at least one hollow child, or 2. u is a parallel node and u has no eligible non-completion-forced child.
Proof. We first show that if the conditions of the lemma are satisfied, then u is a completion-minimal insertion node. From Lemma 2, if u is a completion-minimal insertion node, then there exists a unique minimal completion H such that u is the insertion node associated to this completion. From Lemma 2 again, this completion H is the completion anchored at u, which is properly defined here as u is eligible, see Definition 4. We will now show that H is minimal.
If u is a parallel node, as u is non-completion-forced, u has at least one hollow child v, and the same holds if u is a series node because of Condition 1. From Definition 4, v is hollow in H. Let H′ be a minimal completion of G + x and let u′ be its insertion node. We will show that H′ is not strictly included in H. From Lemma 2, if u′ = u, then H′ = H and therefore, from now on, we consider only the case where u′ ≠ u. Note that, from Theorem 1, the only nodes of T that remain mixed after completion into H′ are the ancestors of u′. All the non-hollow nodes of T that are not ancestors of u′ are filled in H′. Then, if u′ is not a descendant of u, node u is filled in H′ and so is node v. It follows that, if u′ is not a descendant of u, H′ is not included in H.
Now, consider the case where u′ is a strict descendant of u (remember that u′ ≠ u) and suppose for contradiction that u is a parallel node. Lemma 5 implies that u′ is eligible. Since u′ is a strict descendant of u, all the children of u, except its child w that is an ancestor of u′, are hollow. Then, from Condition 2 of the present lemma, it follows that w must be completion-forced. Lemma 4 implies that w, and so u′, is filled in H′. This contradicts the fact that u′ is the insertion node, as from Theorem 1, this node remains mixed after completion. Thus, u is not a parallel node, but a series node. From Remark 3, u′ is non-hollow in G + x and consequently, u′ is not a descendant of v (the hollow child of u). Since u is a series node, it follows that v is filled in H′, which is therefore not included in H. This achieves the proof that the conditions of the lemma are sufficient.
Let us now show that they are necessary. Consider a completion-minimal node u and let us show that it satisfies the conditions of the lemma. Firstly, because T u contains some completion-minimal insertion node, namely u, Lemma 5 implies that u is mixed, eligible and non-completion-forced. Let H be the completion anchored at u. From Theorem 1, u is mixed in H. Then, from Lemma 2, it follows that u has at least one hollow child. Condition 1 is satisfied.
We now show that if u is parallel and does not satisfy Condition 2, then the completion H anchored at u is not minimal, which implies that u is not a completion-minimal insertion node. Since u is mixed, it has at least one non-hollow child v. Moreover, since u does not satisfy Condition 2, v is the unique non-hollow child of u (then v is eligible) and v is non-completion-forced. As v is eligible, non-hollow and non-completion-forced, it follows from Lemma 5 that T_v contains some completion-minimal insertion node. The corresponding minimal completion H′ is included in H and even strictly included, as H′ leaves v mixed while H fills it (since v is not hollow). Thus, H is not minimal. By contraposition, if H is minimal, Condition 2 is satisfied. This achieves the proof of the lemma. □
An O(n + m′) algorithm with incremental minimum
In this section, we design an incremental algorithm whose overall time complexity is O(n + m′), where m′ is the number of edges in the output completed cograph. We concentrate on one incremental step, whose input is the cotree T of some cograph G (the completion computed so far) and a new vertex x together with the list of its neighbours N(x) ⊆ V(G). Each node u ∈ T stores its number |C(u)| of children and the number |V(u)| of leaves in T_u. One incremental step takes time O(d′), where d′ is the degree of x in the completion of G + x computed by the algorithm. Within this complexity, our algorithm scans all the minimal completions of the neighbourhood of x and selects one of minimum cardinality. Our description is in two steps.
First step: collecting information on nodes of T. In this step, for each non-hollow node u of T we determine the following information: i) the list of its non-hollow children C_nh(u), ii) the number of neighbours of x in V(u) and iii) whether it is completion-forced or not. To this purpose, we perform two bottom-up searches of T from the leaves of T that are in N(x) up until the root of T. Consequently, each of these searches discovers exactly the set NH(T) of non-hollow nodes of T (for which we show later that their number is O(d′)).
In the first search, we label each node encountered as non-hollow, we build the list of its non-hollow children and count them. The nodes that are not visited, and therefore not labelled are exactly the hollow nodes of T .
In the second search, for each non-hollow node u we determine the rest of its information, that is ii) the number of neighbours of x in V (u) and iii) whether it is completion forced or not.
It is straightforward to get this information for the leaves l of T that belong to N (x): there is exactly one neighbour of x in V (l) and l is forced. Then, all the leaves in N (x) forward their information to their parents in an asynchronous way. Along this process, each non-hollow node u of T is able to know whether it has received the information from all its non-hollow children, as we determined their number in the first search. When it happens, when u has received the information from all its non-hollow children, u is able to determine its own information: u makes the sum of |V (v) ∩ N (x)| for all its non-hollow children v, and u determines whether it is completion-forced as follows. If u is parallel, then u is completion-forced iff all its children are non-hollow, and if u is series, then u is completion-forced iff all its children are completion-forced. Then, u forwards its information to its parent and the process goes on until the root of the tree itself has determined its information. At that time, the process ends as all the non-hollow nodes of T have already determined their information.
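The two bottom-up passes of the first step can be sketched as follows, again on the pointer-based cotree. We replace the asynchronous propagation described above by a recursive traversal of the non-hollow part of the tree, which visits the same nodes; dictionaries keyed by nodes play the role of the per-node fields, and all identifiers are ours.

from collections import defaultdict

def collect_info(Nx_leaves):
    """Nx_leaves: set of cotree leaves that are neighbours of x (non-empty, cf. Remark 2).
    Returns the set of non-hollow nodes, their non-hollow children lists,
    |V(u) ∩ N(x)| and the completion-forced flag for every non-hollow node u."""
    nh_children = defaultdict(list)   # non-hollow children of each non-hollow node
    nbrs_in = {}                      # |V(u) ∩ N(x)|
    forced = {}                       # completion-forced flag

    # First pass: walk up from each leaf of N(x); a node is non-hollow iff it is reached.
    non_hollow = set()
    for leaf in Nx_leaves:
        node = leaf
        non_hollow.add(node)
        u = node.parent
        while u is not None:
            nh_children[u].append(node)       # node has just been discovered non-hollow
            if u in non_hollow:
                break                          # the ancestors of u were already processed
            non_hollow.add(u)
            node, u = u, u.parent

    # Second pass: compute the counters bottom-up on the non-hollow part of the tree.
    def fill(u):
        if not u.children:                     # a leaf of N(x)
            nbrs_in[u], forced[u] = 1, True
            return
        for c in nh_children[u]:
            fill(c)
        nbrs_in[u] = sum(nbrs_in[c] for c in nh_children[u])
        if u.label == 'parallel':              # forced iff all children are non-hollow
            forced[u] = len(nh_children[u]) == len(u.children)
        else:                                  # series: forced iff all children are forced
            forced[u] = (len(nh_children[u]) == len(u.children)
                         and all(forced[c] for c in nh_children[u]))

    root = next(iter(Nx_leaves))
    while root.parent is not None:
        root = root.parent
    fill(root)
    return non_hollow, nh_children, nbrs_in, forced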
Second step: finding all completion-minimal insertion nodes of T. We search the set of all non-hollow, eligible and non-completion-forced nodes of T . For each of them, we determine whether it is a minimal insertion node and, in the positive, we compute the number of edges to be added in its associated minimal completion. Then, at the end of the search we select the completion of minimum cardinality.
Since all the ancestors of a non-hollow, eligible and non-completion-forced node also satisfy these three properties, it follows that the part of T we have to search is a connected subset of nodes containing the root. Then, our search starts by determining whether the root is non-completion-forced. In the negative, we are done: there exists one unique minimal completion of G + x, which is obtained by adding all missing edges between x and the vertices of G.
Otherwise, if the root is non-completion-forced (it is always eligible, by definition, and non-hollow, from Remark 2), we start our search. For all the nonhollow children of the current node (we built their list in the first step), we check whether they are eligible and non-completion-forced and search, in a depth-first manner, the subtrees of those for which the test is positive (cf. Lemma 5).
During this depth-first search, we compute for each node u encountered the number of edges, denoted cost-above(u), to be added between x and the vertices of V(G) \ V(u) in the completion anchored at u. This can be computed during the search as follows: if the parent v of u is a parallel node (necessarily eligible, since we parse only eligible nodes), then cost-above(u) = cost-above(v); and if the parent v of u is a series node, then cost-above(u) = cost-above(v) + Σ_{u′ ∈ C(v), u′ ≠ u} |V(u′) \ N(x)|. We also determine whether u is a minimal insertion node by testing whether it satisfies Condition 1 or 2 of Lemma 6. This can be done thanks to the information collected in the first step. Importantly for the complexity, note that Condition 2 of Lemma 6 can be tested by scanning only the non-hollow children of u. In the positive, if u is a minimal insertion node, then we determine the number of edges, denoted cost(u), to be added in the completion anchored at u, as cost(u) = cost-above(u) + Σ_{v ∈ C_nh(u)} |V(v) \ N(x)|. From Lemma 6, minimal insertion nodes are non-hollow, eligible and non-completion-forced. Therefore, our search discovers all the completion-minimal insertion nodes, and computes the cost of their associated minimal completion. We keep track of the minimum cost completion encountered during the search and output the corresponding insertion node at the end. Finally, we need to update the cotree T for the next incremental step of the algorithm (as depicted in Figure 3). To this purpose, we use the algorithm of [16] as explained below.
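A sketch of the second step is given below: a depth-first search restricted to eligible, non-hollow, non-completion-forced nodes that maintains cost-above(u), tests the two conditions of Lemma 6 and keeps the cheapest insertion node found. It consumes the dictionaries of the previous sketch together with n_leaves[u] = |V(u)|; in the real algorithm these are fields stored on the cotree nodes and the whole search runs in O(d′) time. All identifiers are ours.

def find_min_insertion_node(root, nh_children, nbrs_in, forced, n_leaves):
    """Return (cost, insertion node) of a cheapest minimal constrained completion,
    or None if the root is completion-forced (unique completion: fill everything)."""
    if forced.get(root, False):
        return None

    def missing(v):                     # |V(v) \ N(x)|
        return n_leaves[v] - nbrs_in.get(v, 0)

    best = (float('inf'), None)
    stack = [(root, 0)]                 # (node u, cost-above(u))
    while stack:
        u, cost_above = stack.pop()
        nh = nh_children[u]
        # Lemma 6: is u itself a completion-minimal insertion node?
        if u.label == 'series':
            is_insertion = len(nh) < len(u.children)      # condition 1: a hollow child exists
        else:                                             # u is a parallel node
            sole = nh[0] if len(nh) == 1 else None        # unique non-hollow child, if any
            is_insertion = not (sole is not None and not forced.get(sole, False))  # condition 2
        if is_insertion:
            cost = cost_above + sum(missing(v) for v in nh)
            best = min(best, (cost, u), key=lambda t: t[0])
        # Lemma 5: descend only into eligible, non-hollow, non-forced children.
        for c in nh:
            eligible = (u.label == 'series') or (len(nh) == 1)
            if eligible and not forced.get(c, False):
                extra = missing(u) - missing(c) if u.label == 'series' else 0
                stack.append((c, cost_above + extra))
    return best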
Complexity. The key to the O(d′) time complexity is that we search and manipulate only the set NH(T) of non-hollow nodes of T. For each of them, say u, we need to scan the list of its non-hollow children C_nh(u) and to perform a constant number of tests and operations that can all be done in O(1) time (thanks to the information collected in the first step). For example, when we need to test the number of hollow children of u, we avoid counting them by computing their number as |C(u)| − |C_nh(u)|. The computation of cost-above(u) can also be done in O(1) time by noting that the sum Σ_{u′ ∈ C(v), u′ ≠ u} |V(u′) \ N(x)| can rather be computed as |V(v) \ N(x)| − |V(u) \ N(x)|. Let v be a non-hollow child of one ancestor of u: then v is filled, and it follows that the sum of the sizes of T_v for all such v is bounded by d′. The number of ancestors of v is also linearly bounded by d′, as half of these ancestors are series nodes and therefore have a child v′ which is filled.
When the insertion node u has been determined, the completed neighbourhood N′(x) of x can be computed in extension by a search of the part of T that is filled, which takes O(d′) time. Then, the cotree of the completion H′ of G + x is obtained from the cotree of G (as depicted in Figure 3) in the same time complexity thanks to the algorithm of [16]. Overall, one incremental step takes O(d′) time and the whole running time of the algorithm is O(n + m′), where m′ is the number of edges in the output cograph.
An O(n + m log² n) algorithm
Even though it is linear in the number of edges in the output cograph, the O(n + m′) complexity achieved by the algorithm in [41] and the one we presented in Section 4 is not necessarily optimal, as the output cograph can actually be represented in O(n) space using its cotree. We then design a refined version of the inclusion-minimal completion algorithm that runs in O(n + m log² n) time, when no additional condition is required on the completion output at each incremental step. This improvement is further motivated by the fact that, as we show below, there exist graphs having only O(n) edges which require Ω(n²) edges in any of their cograph completions. For such graphs, the new complexity we achieve also writes O(n log² n) (since m = O(n)) and constitutes a significant improvement over the O(n²) complexity of the previous algorithm (since m′ = Ω(n²)).
Worst-case minimum-cardinality completion of very sparse graphs
In this section, we show that there exist graphs that have only O(n) edges and that require Ω(n²) edges in any of their cograph completions. Actually, we show that this even holds in the more general case where the target graph class has bounded rank-width (see [47] for a definition), which includes the class of cographs as well as the class of distance hereditary graphs (see [52] for a definition). Furthermore, although it is not necessary for the purpose of this article, we also show that the same behaviour occurs for chordal completion, as we believe that this fact is interesting in itself. Our proofs are based on the notion of vertex expander graphs (see [33] for a survey on the topic). We first show that these graphs require Ω(n²) edges in any of their cograph completions, as stated by Theorem 2 below, and we conclude by pointing out that there exist constructions of vertex expander graphs with only O(n) edges.
In our proof of Theorem 2, we will use the fact that cographs are graphs of bounded rank-width, for which we have Proposition 1 below. Roughly speaking, it states that if a graph G has rank-width at most r, then there exists a cut of G of rank at most r such that both parts of the cut are large.
Proposition 1 ([48]). Let r be an integer and let G be a graph whose rank-width is at most r. Then there exists a subset S ⊆ V(G) of vertices such that n/3 ≤ |S| ≤ n/2 and cutrank(S) ≤ r.
We remark that Proposition 1 is stated by Oum and Seymour [48] in terms of symmetric submodular functions. Also see [47] for definitions of rank-width and cutrank. We will need the following proposition, which shows that if a cut (S, V \ S) of a graph has a small rank, say r, then there can be only a small number of equivalence classes of vertices in S according to their neighbourhood in V \ S.

Proposition 2. Let G be a graph and let S ⊆ V(G) be such that cutrank(S) ≤ r. Then there exists a partition of S into at most 2^r classes such that any two vertices in the same class have the same neighbourhood in V \ S.

We are now ready to state and prove Theorem 2, regarding completions in graph classes H of bounded rank-width.
Theorem 2. Let c > 0 be a positive real number and r be a positive integer. Let also G be a c-expander and H be a class of graphs whose rank-width is at most r. Then, there exists a positive real number K_{c,r}, depending only on c and r, such that any completion of G into a graph in H has at least K_{c,r} · n² edges.
Proof. Let H be a completion of G into a graph in H. Since H is a supergraph of G, it follows immediately from the definition that H is a c-expander. Moreover, since H has rank-width at most r, from Propositions 1 and 2, there exists a subset S ⊆ V(G) of vertices such that n/3 ≤ |S| ≤ n/2 and there exists a partition S = S_1 ∪ S_2 ∪ . . . ∪ S_t, with t ≤ 2^r, such that for every i ≤ t and any pair u, v of vertices in S_i, N(u) \ S = N(v) \ S. Assume, without loss of generality, that the S_i's are ordered by increasing cardinality. We denote U_i = S_1 ∪ . . . ∪ S_i for every i ∈ [1, t]. Consider first the case where |S_1| > (c/2) |S \ S_1|; this gives |S_1| > (c/(2+c)) |S|. And since the S_i's are ordered by increasing size, we conclude that the inequality holds for all indices: for all i ∈ [1, t], we have |S_i| > (c/(2+c)) |S|. In the complement case, i.e. if |S_1| ≤ (c/2) |S \ S_1|, then consider the largest index i such that |U_i| ≤ (c/2) |S \ U_i|. Note that necessarily we have 1 ≤ i < t. We now prove that |S_{i+1}| = Ω(|S|), where the hidden factor depends only on c and r. By definition of i, we have |U_{i+1}| > (c/2) |S \ U_{i+1}|, which gives, as above, |U_{i+1}| > (c/(2+c)) |S|. On the other hand, because the S_i's are ordered by increasing cardinality, we have that |U_i| ≤ i |S_{i+1}| ≤ 2^r |S_{i+1}|, and so |U_{i+1}| = |U_i| + |S_{i+1}| ≤ (1 + 2^r) |S_{i+1}|. By injecting this inequality in the one above we obtain |S_{i+1}| > (c/((2+c)(1+2^r))) |S|. As a partial conclusion, we have either (i) for all i ∈ [1, t], |S_i| > (c/(2+c)) |S|, or (ii) there exists i ∈ [1, t−1] such that |U_i| ≤ (c/2) |S \ U_i| and for all j ∈ [i+1, t], we have |S_j| > (c/((2+c)(1+2^r))) |S| (because the S_i's are ordered by increasing cardinality). Besides this, because of the expansion property of S, we have |N(S)| ≥ c |S|, meaning that there are at least c · |S| vertices out of S that are adjacent to at least one vertex of S. Moreover, note that from the definition of the S_i's, if a vertex x ∈ V \ S is adjacent to some vertex y ∈ S_i, for some i ∈ [1, t], then x is adjacent to all the vertices of S_i. In case (i) of the alternative above, where |S_i| > (c/(2+c)) |S| for all i ∈ [1, t], we obtain that there must be at least c |S| · (c/(2+c)) |S| = (c²/(2+c)) |S|² edges between S and V \ S in graph H. Thus, in this case, because |S| ≥ n/3, the conclusion of the theorem holds. In the other case, i.e. case (ii) of the alternative above, we have |U_i| ≤ (c/2) |S \ U_i| for some i ∈ [1, t−1] and for all j ∈ [i+1, t], |S_j| > (c/((2+c)(1+2^r))) |S|.
The expansion property applied to S \ U_i gives |N(S \ U_i)| ≥ c |S \ U_i|. Since |U_i| ≤ (c/2) |S \ U_i|, at least c |S \ U_i| − |U_i| ≥ (c/2) |S \ U_i| of these vertices lie outside S, and since |S \ U_i| ≥ (2/(2+c)) |S|, we get |N(S \ U_i) \ S| ≥ (c/(2+c)) |S|. Moreover, each of the vertices in N(S \ U_i) \ S is adjacent to all the vertices of S_j for some j ∈ [i+1, t]. And since |S_j| > (c/((2+c)(1+2^r))) |S|, we obtain that there are at least (c/(2+c)) |S| · (c/((2+c)(1+2^r))) |S| = (c²/((2+c)²(1+2^r))) |S|² edges between S and V \ S in H. Since |S| ≥ n/3, this achieves the proof of the theorem. □
Remark 1. The result of Theorem 2 holds in particular for cographs and distance hereditary graphs, which both have rank-width at most 1.
It is also worth noting that in the particular cases of cographs and distance hereditary graphs, the proof above can be greatly simplified as follows. For a cut (S, V \ S) of rank at most 1, all the vertices of S having some neighbour in V \ S have exactly the same neighbours in V \ S. This corresponds to the fact that there are at most 2 equivalence classes S_1, S_2 in Proposition 2 (r = 1): the vertices of S that have some neighbour in V \ S and those that do not have any. Moreover, the expansion property for S and for V \ S (recall that from Proposition 1 we have n/3 ≤ |S| ≤ n/2) implies that the numbers of vertices in S and in V \ S that have some neighbour on the other side of the cut are both Ω(c·n), which proves the statement of Theorem 2.
The results above hold for any input graph that is a c-expander. Nevertheless, in order to achieve our goal, we still need the existence of very sparse c-expanders. This has already been established as there exist deterministic constructions of very sparse graphs that are c-expanders, see for example the construction of 3-regular c-expanders by Alon and Boppana [1], for some fixed c. Such graphs have only O(n) edges but, from Theorem 2, require Ω(n²) edges in any of their cograph completions (as well as in any of their completions in a graph class H of bounded rank-width). More generally, it is part of the folklore that, for any constant a > 1, there exist c > 0 and p > 0 such that, for any n ∈ N sufficiently large, the proportion of graphs on n vertices and a·n edges that are c-expanders is at least p. This means that many graphs of fixed mean degree have the vertex expansion property and therefore require Ω(n²) edges in any of their cograph completions. Motivated by this frequent worst case for the O(n + m′) complexity, we will design an O(n + m log² n)-time algorithm for inclusion-minimal cograph completion of arbitrary graphs.
A similar behaviour for chordal completion. The fact that some very sparse graphs, having O(n) edges, may require Ω(n²) edges in any of their completions also occurs for other target classes, whose rank-width is unbounded. In particular, we now show that the very popular chordal completion problem also exhibits such a behaviour, which we believe is worthy of interest in itself, though unnecessary for the strict purpose of this article. Our proof is, as previously, based on vertex expander graphs, for which we have the following result.
Proposition 3 ([24]). If G is a c-expander for a constant c > 0 independent of n, then the treewidth of G is Ω(n).
In addition, it is well known (see [2]) that the treewidth of a graph G is the minimum size (minus 1) of the maximum clique among all chordal completions of G. Consequently, Proposition 3 immediately gives an Ω(n²) lower bound on the number of edges in any chordal completion H of a c-expander G, since H must have a clique of size Ω(n). To conclude, recall that, as mentioned above, there exist constructions, both deterministic and random, of c-expanders having only O(n) edges.
We now turn to the description of our O(n + m log² n)-time algorithm for inclusion-minimal cograph completion.
Data structure
Our data structure is composed of two copies of the cotree: one stored in a basic data structure and one using the advanced dynamic data structure of [51] named dynamic trees. We note that we could use only the advanced data structure of [51], as it can be patched to contain the additional information that we store in the basic data structure. But to avoid questions about the compatibility of such a patch with the performances of the data structure of [51], we prefer to store the additional information we need, and to perform the related operations, independently in another structure. This is the reason why we describe our algorithm using two structures.
In the first copy of T (the basic data structure), each node u stores its parent, the list of its children and their number |C(u)|, as well as a bidirectional pair of pointers to the node corresponding to u in the second copy of T, so that we can move from one element in one copy of the cotree to the same element in the other copy in O(1) time. In addition, we enhance this basic data structure storing the cotree with one additional feature: given a node u and two of its children u_1, u_2, this feature allows us to determine which of u_1, u_2 appears first in the list of children of u in O(1) time. To this purpose, the set of children of a node u is not only stored in a doubly linked list, as in the classical version of the tree, but a copy of this list is also stored using the order data structure of [3,21]. This data structure makes it possible to answer order queries, i.e. to determine which of two given elements of the list precedes the other one, and supports two update operations, insert and delete. The delete operation removes a given element from the order data structure, while the insert operation inserts a new element in the order data structure just after a specified element. The order query and the two update operations all take O(1) worst-case time.
Dynamic trees [51]. In addition to the classical data structure described above, we also use the data structure developed in [51] to store a copy of the cotree T and maintain it at each incremental step. This data structure maintains a dynamic forest rather than a single tree. This will be useful for us as we will cut a part of the cotree and attach it to another node during the update of the cotree under the insertion of a new vertex. The dynamic trees of [51] allow us to answer the two following kinds of query:
lowest-common-ancestor? Given two nodes u and v of T, provide the lowest common ancestor lca(u, v) of u and v.
next-step-to-descendant? Given a node u of T and one of its strict descendants v, provide the (unique) child of u which is an ancestor of v.
These two kinds of query are handled in O(log n) worst-case time in the data structure of Sleator and Tarjan [51]. To be precise, the second operation is not described in [51], but it can be obtained as a combination of other operations they provide. Indeed, their data structure also supports, in the same complexity: an update operation called evert(u) which, given a vertex u of T , makes u become the root of T , and a query operation named root?(u) that provides the root of the tree T to which node u belongs.
Then, the query next-step-to-descendant?(u, v) we use here can be resolved by the sequence of operations (two updates and two queries): r =root?(u), evert(v), parent?(u), evert(r), which takes O(log n) time.
Along our incremental algorithm, we need to maintain the dynamic data structure of [51], which can be done thanks to the following update operations:
cut. Given a node u in a tree T of the forest F such that u is not the root of T, remove the edge between u and parent(u). Then, u becomes the root of its new tree T′ in forest F.
link. Given a node u in a tree T of the forest F such that u is not the root of T, and given the root v of a tree T′ ≠ T, make u the parent of v.
Note that operations cut and link are converse of each other. As for queries, all update operations take O(log n) worst-case time.
Algorithm
Our algorithm determines the set W of the nodes that are simultaneously eligible, non-hollow and non-completion-forced and that are minimal for the ancestor relationship among nodes having these three properties (i.e. none of their descendants satisfies these three properties). Then, it picks any of them to be the insertion node of the minimal completion returned at this incremental step. Indeed, since nodes of W satisfy the conditions of Lemma 5 and none of their children does (because nodes of W are minimal for the ancestor relationship), it follows that nodes of W are completion-minimal insertion nodes. In order to get the improved O(n + m log² n) complexity, we avoid completely searching the upper tree T^up_{N(x)} to determine W. Instead, we use a limited number of lowest-common-ancestor? queries.
Clearly, if a parallel node u of T is the lca of two leaves in N(x), then T_u \ {u} contains no eligible node. Let P_max be the set of parallel common ancestors of vertices of N(x) that are maximal for the ancestor relationship and let us denote W′ = P_max ∪ N_out, where N_out is the set of vertices of N(x) that are not descendants of any node in P_max, i.e. N_out = N(x) \ ⋃_{p ∈ P_max} V(p). Note that all the nodes w′ ∈ W′ are eligible, and so are their ancestors. It follows that the set W that we want to compute is the set of the non-completion-forced nodes in the upper tree T^up_{W′} that are minimal for the ancestor relationship (i.e. none of their strict descendants in T^up_{W′} is non-completion-forced).
Finding an inclusion-minimal insertion node. In order to compute W, we start by computing the tree T′ = T^xtr_{N(x) ∪ A_x} extracted (see Section 2) from the leaves that belong to N(x) and the set A_x of their lowest common ancestors, i.e. the nodes u such that u = lca(l_1, l_2) for some leaves l_1, l_2 ∈ N(x). Then, we search T′ to find its parallel nodes P_max that are maximal for the ancestor relationship and we remove their strict descendants. The leaves of the resulting tree are exactly the nodes of W′. Finally, for each node w′ ∈ W′ we determine its lowest non-completion-forced ancestor nfa(w′) in T and we keep only the nfa(w′)'s that are minimal for the ancestor relationship: this is the set W. It is worth noting from the beginning that, since T′ has exactly d leaves and since all its internal nodes have degree at least 2, the size of T′ is O(d).
Let us now show how to compute T′ in O(d log² n) time. To this purpose, we sort the neighbours of x according to a special order of the vertices of the cograph G called a factorising permutation [14]. A factorising permutation is the order in which the vertices of G (which are the leaves of the cotree) are encountered when performing a depth-first search of the cotree T. There are as many different factorising permutations as different depth-first searches of T. Here, we use the factorising permutation π which is obtained by visiting the children of a node u of T in the order they are stored in the list of the children of u used in the implementation of the cotree. To determine whether a vertex y_1 is before or after a vertex y_2 in the factorising permutation π, we can proceed as follows: 1) find u = lca(y_1, y_2) and find the two children u_1 and u_2 of u that are respectively ancestors of y_1 and y_2, and 2) determine whether u_1 is before or after u_2 in the list of children of u. Operation 1) can be executed in O(log n) time thanks to the data structure of [51] by performing one lowest-common-ancestor? query and two next-step-to-descendant? queries. Operation 2) can be executed in O(1) time using the order data structure of [3,21]. Then, comparing the order of occurrence of two vertices y_1 and y_2 in π takes O(log n) time and, in total, sorting all the neighbours of x with respect to the order π takes O(d log d log n) = O(d log² n) time.
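The comparison of two neighbours of x with respect to π can be sketched as follows. Here next_step_to_descendant and child_precedes are naive placeholders for, respectively, the O(log n) dynamic-tree query of [51] and the O(1) order query of [3,21]; lca is the one from the first sketch, and all identifiers are ours.

import functools

def next_step_to_descendant(u, v):
    """Child of u on the path towards its strict descendant v (naive walk-up)."""
    while v.parent is not u:
        v = v.parent
    return v

def child_precedes(u, c1, c2):
    """Does c1 appear before c2 in the children list of u? (O(1) with [3,21])"""
    return u.children.index(c1) < u.children.index(c2)

def pi_compare(y1, y2):
    """Negative iff leaf y1 occurs before leaf y2 in the factorising permutation π.
    Assumes y1 and y2 are distinct leaves of the cotree."""
    u = lca(y1, y2)
    c1 = next_step_to_descendant(u, y1)
    c2 = next_step_to_descendant(u, y2)
    return -1 if child_precedes(u, c1, c2) else 1

def sort_neighbours(Nx_leaves):
    return sorted(Nx_leaves, key=functools.cmp_to_key(pi_compare))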
The benefit of doing so is that, once the neighbours of x are sorted in the order x_1, x_2, . . . , x_d in which they appear in π (we say from left to right), we can build T′ efficiently. We consider the neighbours of x one by one in this order and at each step we compute the tree T_i extracted from {x_1, . . . , x_i} and their lowest common ancestors. Then, at the end of the computation, when i = d, we obtain T_d = T′. For each i between 2 and d, we obtain T_i from T_{i−1} as follows: we compute v_i = lca(x_{i−1}, x_i) and we insert it at its correct position in the tree T_{i−1} built so far.
Note that, since we consider the x_i's from left to right in the order of the factorising permutation π, the newly computed common ancestor v_i is the only node that may be in T_i but not in T_{i−1}. Moreover, for the same reason, if v_i is not yet a node of T_{i−1} then v_i has to be inserted on the rightmost branch of the tree T_{i−1}, and if v_i is already a node of T_{i−1} then v_i already belongs to this branch, and so we discover it when we try to insert it on this branch. In order to do so, we climb up the rightmost branch of T_{i−1}, starting from the parent of x_{i−1}, and for each node v encountered on this branch we determine whether v_i is higher or lower than v in the tree (or possibly equal) by computing lca(v, v_i). The total number of comparisons (treated by lca queries) made along the computation of T_d is O(d). Indeed, as explained in [25], every time we pass above a node v on the rightmost branch, v leaves the rightmost branch for ever and will never participate again in any comparison. Then, the total number of lca queries we need to build T′ (including the d − 1 queries made on the pairs of neighbours of x appearing consecutively in the order of the factorising permutation) is proportional to its size, that is O(d). Since each of these queries takes O(log n) time thanks to the data structure of [51], the complexity of building T′ from the sorted list of neighbours of x is O(d log n).
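The left-to-right construction of T′ can be sketched with the classical rightmost-branch stack. For brevity, the comparisons on the branch are done here with explicit depths (a naive depth function), whereas the paper performs them with O(log n) lca queries on the dynamic tree and never stores depths; all identifiers are ours.

def depth(u):
    d = 0
    while u.parent is not None:
        u, d = u.parent, d + 1
    return d

def build_extracted_tree(sorted_leaves):
    """Return the parent relation of T', the tree extracted from N(x) and the lowest
    common ancestors of consecutive leaves in the factorising permutation."""
    parent = {}                       # parent in T' (keys and values are cotree nodes)
    stack = [sorted_leaves[0]]        # current rightmost branch of the partial T'
    for x in sorted_leaves[1:]:
        l = lca(stack[-1], x)         # lca from the first sketch
        while len(stack) >= 2 and depth(stack[-2]) >= depth(l):
            parent[stack[-1]] = stack[-2]
            stack.pop()
        if depth(stack[-1]) > depth(l):
            parent[stack[-1]] = l
            stack.pop()
        if not stack or stack[-1] is not l:
            stack.append(l)
        stack.append(x)
    while len(stack) >= 2:            # attach what remains of the rightmost branch
        parent[stack[-1]] = stack[-2]
        stack.pop()
    return parent                     # stack[0] is the root of T'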
Once T′ is built, a simple search starting from its root determines the set P_max of its parallel nodes that are maximal for the ancestor relationship, and we cut off from T′ all the subtrees rooted at the children of nodes in P_max. The leaves of the resulting tree are precisely the nodes of W′ that we wanted to determine. As T′ has size O(d), this step takes O(d) time.
We now need to find the non-completion-forced nodes of the upper tree T^up_{W′} that are minimal for the ancestor relationship. To that purpose, for each w′ ∈ W′, we determine its lowest non-completion-forced ancestor nfa(w′) in T. From the definition of P_max, the lowest parallel ancestor of w′ is non-completion-forced. Therefore, nfa(w′) cannot be higher in T than the grandparent of w′. It follows that we have to check the non-completion-forced condition only for w′ and its parent, which can be done as follows. If w′ is a leaf of T′, i.e. w′ ∈ N_out, then w′ is forced. If w′ is a parallel node of T′, i.e. w′ ∈ P_max, then w′ is forced iff its number of children in T′ equals its number of children in T. Now, for the parent v of w′, if v is a parallel node, as noted above, v is necessarily non-completion-forced. Otherwise, if v is a series node, v is completion-forced iff i) v belongs to T′ and ii) its number of children in T′ equals its number of children in T, and iii) all its children in T′ belong to W′ and iv) all its children in T′ are forced (cf. conditions given above for w′). If neither w′ nor parent(w′) is non-completion-forced, then necessarily parent(parent(w′)) is. As testing these conditions for one node u takes O(|C_nh(u)|) time, determining the nfa(w′)'s for all nodes w′ ∈ W′ takes O(d) time. Finally, we determine the nfa(w′)'s that are minimal for the ancestor relationship, i.e. the nodes of W, by searching T upward over at most two levels, starting from each of the nodes in W′. This also takes O(d) time. Then, we arbitrarily pick one node w in W and the minimal completion of the neighbourhood of x returned by the algorithm is the one anchored at w. Therefore, the total complexity of finding one completion-minimal insertion node in one incremental step of the algorithm is O(d + d log n + d log² n) = O(d log² n).
Updating the data structure. In the previous section, we showed how to determine the insertion node w and the list of its children to be filled. Then, depending on whether w is a parallel or a series node, the cotree T must be modified as shown in Figure 3, and the data structure of [51] associated to the cotree must be updated accordingly. The key for doing so while preserving the O(d log² n) time complexity is to perform operations involving only the non-hollow children of w. Indeed, their number is O(d), while the number of the hollow children of w can be up to Ω(n) and arbitrarily large compared to d.

Fig. 3. Modification of the cotree under the insertion of x at insertion node w. The triangles in black (resp. white) correspond to the parts of the tree that are filled (resp. that remain hollow) in the completion anchored at w.
After the insertion of x, the insertion node w is replaced by three nodes, see Figure 3. Two of them have the same label as w: one w h has for children the hollow children of w and the other one w nh has for children the non-hollow children of w. In order to preserve the complexity, it is important to form these two nodes as follows. We cut from w its non-hollow children and its parent, we then obtain w h , still linked to its correct children. Then, we link all the nonhollow children of w to a new node w nh . This takes O(d log n) as it requires O(d) cut and link operations, each of which is supported in O(log n) time by the data structure of [51], and the corresponding delete and insert operations in the order data structure storing the lists of children of the nodes in the tree take O(1) time. The rest of the transformations in order to get the new tree as depicted in Figure 3 only requires 4 link operations. Thus, the time complexity of updating the data structure in one incremental step is O(d log n).
As a conclusion, the complexity of one incremental step of the algorithm is O(d log² n) and, overall, the complexity of the whole algorithm is O(n + m log² n).
Conclusion and perspectives
We designed two incremental algorithms for computing an inclusion-minimal cograph completion of an arbitrary graph G. The first one has a time complexity of O(n + m′), where m′ is the number of edges in the output completion, which matches the complexity of [41]'s algorithm. The specificity of our algorithm is that, within this complexity, it is able to compute a minimum-cardinality completion of the neighbourhood of the new vertex x introduced at each step of the incremental algorithm, which is a highly desirable feature in practice to obtain inclusion-minimal completions of small cardinality. The way we achieved this is by scanning, at each incremental step, the set of all possible minimal completions of the neighbourhood of x. This is particularly interesting as, besides the minimum-cardinality criterion, it opens the possibility of choosing the completion selected by the algorithm using any criterion one wishes.
Our second algorithm improves the time complexity of computing an inclusion-minimal cograph completion of G to O(n + m log² n). This improvement is motivated by the fact that, as we gave evidence for, many graphs (namely those having the expansion property) that have only O(n) edges require Ω(n²) edges in any of their cograph completions. Unfortunately, we obtained this improved complexity at the price of giving up the additional feature of the first algorithm, namely computing a minimum-cardinality completion of the neighbourhood of x at each incremental step. Therefore, the first open question arising from our work is whether it is possible to provide this functionality within the O(n + m log² n) time complexity, or at least within a time complexity of the form O(n + m polylog(n)).
The question of further improving the time complexity of computing an inclusion-minimal cograph completion, when expressed with regard to the size of the input graph, is also open. Although it seems difficult to reach a linear complexity with the techniques we use here, nothing indicates that the O(n + m log² n) complexity we obtained could not be improved further, say, for example, to O(n + m log n). Such an improvement would be very valuable both in theory and in practice for dealing with very large real-world networks [34,32,18].
Another appealing perspective is to design algorithms that are able to use not only addition of edges but also deletion of edges in order to minimally modify an arbitrary graph into a cograph. What is the best complexity that can be achieved for the general cograph editing problem (where both addition and deletion of edges are allowed)? Is it possible, in this case as well, to design an incremental algorithm that provides a minimum-cardinality modification of the neighbourhood of x at each incremental step? The behaviour of the general cograph editing problem seems quite different from that of the pure completion (or pure deletion) problem. Therefore, answering these questions would significantly advance our understanding of graph modification problems.
Calculation and Analysis of Permanent Magnet Eddy Current Loss Fault with Magnet Segmentation
This paper investigates the problem of calculating and analyzing the effect of the permanent magnet eddy current loss fault due to magnet segmentation. An inverter-supplied interior permanent magnet synchronous motor with a rated power of 2.2 kW is taken as an example. A three-dimensional finite-element model was first established in finite-element software. Then, the model mesh and boundary conditions were handled specially, and the permanent magnet eddy current loss fault was calculated and analyzed theoretically with magnet segmentation, considering space harmonics and time harmonics, respectively. Finally, the calculation results were compared and explained. A useful conclusion for permanent magnet synchronous motor design has been obtained.
Introduction
With the development of power electronic devices and the improvement of motor control technology, the permanent magnet synchronous motor has attracted more and more attention for its advantages of high efficiency, wide speed regulation, and high power density. But with the increase of power density, it is worth considering how to keep the motor temperature rise within the allowable limit. Reducing the temperature rise of the motor should start from two aspects: how to improve the cooling capacity of the motor and how to reduce the losses of the motor. Because of the high speed and large carrier frequency of the permanent magnet synchronous motor, the eddy current loss of the permanent magnet is large.
In order to reduce the eddy current loss of the permanent magnet, researchers have conducted a lot of research [1][2][3][4][5]. Among these studies, the axial segmentation method of the permanent magnet is widely accepted. Based on the previous research, the eddy current loss of an interior permanent magnet synchronous motor is studied in this paper. Taking an interior permanent magnet synchronous motor with a rated power of 2.2 kW as an example, a three-dimensional (3D) finite-element model is established. The permanent magnet eddy current loss fault was calculated and analyzed theoretically with magnet segmentation, considering space harmonics and time harmonics, respectively; at the same time, the finite-element method is verified by an analytic method.
One of the main problems of NdFeB permanent magnet synchronous motors is thermal demagnetization, which is caused by the permanent magnet eddy current loss. In particular, permanent magnet AC servomotors mostly use fractional-slot concentrated windings, whose magnetomotive force (MMF) harmonic content is very rich [6]. Thus, reducing the eddy current loss of the permanent magnet has attracted more and more attention, and axial segmentation of the pole is used to reduce the eddy current loss of the permanent magnet [7][8][9]. It is widely adopted by motor designers.
There are two reasons for the permanent magnet to generate eddy current losses [10][11][12]. One is the uneven distribution of MMF and the space harmonics caused by the slotted stator and the stator winding distribution. The second is the nonsinusoidal time harmonics of the stator current caused by the inverter power supply. Permanent magnet eddy current losses can be expressed as
Analysis and Calculation of Eddy Current Loss of Permanent Magnet of Interior Permanent Magnet Motor
Suppose there is an infinite neodymium iron boron permanent magnet and the external magnetic field is parallel to the surface in the direction of the axis; the vortex at a certain point in the permanent magnet can be decomposed into two mutually perpendicular eddy current densities. It can be determined by where is the eddy current density, is the electrical conductivity of the permanent magnet material, is the voltage between the nodes in the calculation unit, and is the distance between nodes in a computing unit. As shown in Figure 1, suppose the potential difference between two points on the two sides of the permanent magnet block is 1, 2. For easy calculation, it can be set as 1 = 2 =. The potential difference of the permanent magnet in the thickness direction is again ignored. The total voltage of the 4 points on the surface of the permanent magnet, which is composed of the two ends of the loop, is where is the loop area and () is the magnetic flux density in the circuit; it can be expressed as where 0 is the static magnetic flux density and is the dynamic magnetic flux induced by the stator armature current. It is regarded as a nonsinusoidal periodic function and is decomposed into a Fourier series: where and are the amplitudes of the dynamic magnetic flux generated by the harmonic components of the armature current in the permanent magnet. Then, the effective value of () is obtained. According to formula (7), the eddy current density in the permanent magnet is obtained, where ℎ is the permanent magnet magnetization direction length, so the eddy current loss power density of the permanent magnet can be obtained. The eddy current loss density of the permanent magnet is then integrated to obtain the eddy current loss in the permanent magnet.
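The equations of this derivation are not reproduced above, but the computation it describes, decomposing the flux density seen by the magnet into harmonics and summing the resulting losses, can be illustrated with the classical thin-plate eddy-current loss approximation, p = σh²(ωB)²/24 per harmonic. The sketch below uses that textbook approximation with hypothetical symbol names and material values; it is not the paper's exact model.

```python
import math

# Illustrative only: classical thin-plate eddy-current loss per unit volume,
# summed over flux-density harmonics (skin effect neglected). This is a
# textbook low-frequency approximation used to show the harmonic-summation
# idea, not the formulas of the paper.

def eddy_loss_per_volume(sigma, h, harmonics):
    """sigma: conductivity [S/m]; h: magnet thickness along the loss path [m];
    harmonics: list of (frequency_hz, B_amplitude_T) pairs."""
    loss = 0.0
    for f, b in harmonics:
        omega = 2.0 * math.pi * f
        loss += sigma * h**2 * (omega * b)**2 / 24.0   # W/m^3 per harmonic
    return loss

# Hypothetical example values (not taken from the paper):
sigma_ndfeb = 6.7e5                      # S/m, typical NdFeB conductivity
thickness = 5e-3                         # m
harmonics = [(100.0, 0.02), (3700.0, 0.002)]   # fundamental and 37th harmonic
print(eddy_loss_per_volume(sigma_ndfeb, thickness, harmonics))
```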
Effect of Pole Axial Section on the Eddy Current Loss Fault of Permanent Magnet Produced by Space Harmonic Generation
The basic parameters of the interior permanent magnet synchronous motor with a rated power of 2.2 kW considered in this paper are shown in Table 1. The 3D model of the motor is constructed using finite-element analysis software. In order to save computing resources, and taking advantage of the cyclic magnetic field distribution, one unit of the motor is modeled and calculated.
Figure 2 shows the 3D finite-element mesh model of the 2.2 kW permanent magnet synchronous motor. In order to keep the meshing consistent each time, the 3D model is set up with insulation boundary conditions and zero excitation. The eddy current loss of the permanent magnets divided into different numbers of segments is calculated under the no-load condition. The results are shown in Figure 3 and Table 2.
From Figure 3, the vortex lines are cut off by the axial segmentation of the permanent magnet, and they form locally within each segment of the magnetic pole. In comparison with Table 2, the mean eddy current loss increases with the number of magnetic poles.
Effect of Pole Axial Section on the Eddy Current Loss of Permanent Magnet Produced by Time Harmonic Generation
Permanent magnet eddy current loss is mainly generated by time harmonics. Literature [13] shows that the eddy current loss is largest when the axial length of the permanent magnet is equal to 2.3 times the penetration depth. The penetration depth of the permanent magnet can be defined as the depth to which the magnetic field acts inside the permanent magnet. The magnetic field intensity decreases exponentially with increasing depth. The penetration depth can be calculated as [14-17] where is the permanent magnet penetration depth, is the sinusoidal frequency, is the absolute permeability, and is the conductivity. Figure 4 is the current waveform measured in the inverter power supply test at the rated frequency of 100 Hz. Figure 5 shows the harmonic amplitudes after Fourier decomposition. From Figure 4, the harmonic amplitude of the current waveform is small at the rated speed of the motor.
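The penetration depth referred to here is the standard skin depth, δ = 1/√(π f μ σ). As a quick illustration, the sketch below evaluates it for the 37th harmonic of the 100 Hz fundamental using typical NdFeB material constants, which are assumptions rather than values from the paper; the result is of the same order as the roughly 11 mm figure quoted in the next paragraph.

```python
import math

def skin_depth(f_hz, mu_r, sigma):
    """Standard skin depth: delta = 1 / sqrt(pi * f * mu * sigma), in metres."""
    mu = mu_r * 4.0e-7 * math.pi            # absolute permeability [H/m]
    return 1.0 / math.sqrt(math.pi * f_hz * mu * sigma)

# Illustrative NdFeB values (assumed, not from the paper):
mu_r_ndfeb = 1.05        # relative recoil permeability
sigma_ndfeb = 6.25e5     # electrical conductivity [S/m]

f_fund = 100.0                               # rated fundamental frequency [Hz]
f_37 = 37 * f_fund                           # 37th time harmonic [Hz]
print(skin_depth(f_37, mu_r_ndfeb, sigma_ndfeb))   # ~0.010 m, i.e. about 10 mm
```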
According to the conclusion of literature [13], this paper selects the 37th harmonic of the current waveform, for which the permanent magnet penetration depth is about 11.2 mm [17][18][19][20][21]. When the permanent magnet eddy current loss was calculated using the ANSOFT software, in order to ensure the synchronous operation of the motor, the eddy current losses in the permanent magnets under the fundamental current plus the 37th harmonic current were calculated first, and then the permanent magnet eddy current loss generated by the fundamental alone was calculated. The difference between the two results is the permanent magnet eddy current loss caused by the 37th harmonic.
The eddy current losses of permanent magnets segmented into two and three pieces are calculated, and the eddy current losses produced by the 37th time-harmonic current source are shown in Table 3. From Figure 6, the eddy current losses in the permanent magnet with different numbers of segments can be seen for the combined action of the fundamental wave and the 37th harmonic, for the fundamental wave only, and for the 37th harmonic only. It can be seen from Table 3 that the permanent magnet eddy current loss was not reduced as the number of permanent magnet segments increased. When the permanent magnet is segmented into three pieces, the ratio of the pole axial length to the penetration depth is 1.8; the eddy current loss generated by the 37th harmonic in the permanent magnet not only failed to decrease but even increased.
Calculation and Result Analysis of the Fault
For the 2.2 kW permanent magnet synchronous motor, the permanent magnet eddy current loss caused by the space harmonics increased as the number of segments decreased, whereas the permanent magnet eddy current loss caused by the time harmonics did not increase as the number of segments decreased. The reason why the permanent magnet eddy current loss generated by the fundamental plus the 37th harmonic increased as the number of segments decreased is that the 37th harmonic current amplitude is small and the permanent magnet eddy current loss is mostly generated by the fundamental current waveform.
Why did the permanent magnet eddy current loss caused by the space harmonics increase as the number of segments decreased? The first reason is that the axially segmented magnetic pole is equivalent to a skewed pole, so the air-gap flux density waveform is improved; the other reason is that the segmented pole blocks the formation of eddy current loops, so the permanent magnet eddy current loss decreases as the number of segments increases. The excitation source provided by the inverter contains a large number of harmonic components with relatively high harmonic amplitudes. The penetration depth of low-order harmonics is large and exceeds the magnetization-direction length, so they are not considered. However, the penetration depth of high-order harmonics is small and the skin effect is very strong, so the permanent magnet eddy current loss produced by time harmonics has a maximum value.
Conclusions
The permanent magnet eddy current loss caused by the space harmonics increased as the number of segments decreased, while the permanent magnet eddy current loss caused by the time harmonics did not increase as the number of segments decreased. So, when the motor is designed, especially a high-speed motor, if magnet segmentation is used to reduce the eddy current loss in the magnets, first consider the output current waveform of the inverter and pay attention to the ratio between the pole axial length and the penetration depth of the high-order current harmonics. Lastly, the compromise between the cost of segmenting the magnetic pole and the magnitude of the reduction of the eddy current loss also needs to be considered; generally, the number of segments should not exceed 4.
Figure 2: 3D sectional drawing of 2.2 kW permanent magnet synchronous motor (1 is the stator, 2 is the windings, 3 is the permanent magnet, and 4 is the rotor).
Figure 3: (a) Eddy current density of a permanent magnet. (b) Eddy current density of a two-segment permanent magnet. (c) Eddy current density of a three-segment permanent magnet. (d) Eddy current density of a four-segment permanent magnet. (e) Eddy current density of a five-segment permanent magnet. (f) Eddy current density of a six-segment permanent magnet.
Figure 6: Eddy current loss of permanent magnet at different stages.
Table 1: Parameters of 2.2 kW motor and permanent magnet.
Table 2: Eddy current losses in different segments of a magnet.
Table 3: Eddy current losses of permanent magnet under 37th harmonic currents at different stages.
Counselling Psychology : Concept , trend and medical setting
The significant role and contributions of counselling are now well recognized in remedial and preventive areas. Different models of healing and human functioning have appreciated the incredible efforts of counselling in the relevant fields. The medical setting has always been supported by counselling, where counselling has proved its worth through vital contributions in primary care, dealing with various issues and problems related to patient perception, diagnosis, treatment and care. Health awareness, prevention and developmental issues are also covered by counselling in medical care. The contributions of counselling to medical care are enormous. This paper explores the integrity of counselling in the medical setting along with the issues of patients' perception.
INTRODUCTION
Counselling can be classified into remedial, preventive and the developmental approach (Jordaan et al., 1968).All these interventions may include individual and social counselling organized on different levels or groups; it also contains the ability to tackle future difficulties and to make balance between plans and enhancement of relationship whether it could be between couples, parent-child, individuals and within communities (Kagan et al., 1998).Benefited inputs of counselling has been reported by many studies in several areas.Bio-psychosocial model of healing and remedial aspects of human functioning has appreciated the counselling contributions in the approachable field.The definition of a bio-psychosocial perspective by Engel's was to incorporate the patient's psychological experiences and social or cultural context into a more comprehensive framework for understanding disease, illness and health (Matarazzo, 1982).Since literature shows many debates in connection to health, illness and well-being, well-defined concept of disease says that it Is not only associated with physical factors but also influenced by mind.
Significant changes were seen in health and health care systems.Important aspects of psychological science had been beautifully explored by Gentry (1984) who stated that the following points like life-style factors in the manifestation, maintenance of health problems and the development of psychological theories about health had facilitated the formation of new initiatives.After emergence of behavioral medicine and health psychology, a new explanation of health, illness and researches on health related factors were increased so far (Karademas, (2009).As an outcome, patients were increasingly referred to psychologists for treatment and symptom management, where psychologists were involved in the efforts for health maintenance and health promotion in almost every population (Ayers et al., 2007;Bennett, 2000;Sarafino, 1999).
Role of counselling in health care
Comprehensive knowledge and skills to deal with issues in health care has always been offered by counselling.The counselling plays an important role on the person's strengths, treating persons with respect and care.The approach of counselling has also incorporated the environmental factors, resources as well as psychoeducation in treatment, bio-psychosocial model for understanding health and being familiar with interdisciplinary collaborations (Altmaier and Johnson, 1992;Roth-Roemer et al., 1998).Counselling psychology was directly related to health care treatments for certain health problems, including pain and insomnia (Krumboltz et al., 1979).In 1998, the first edition was exclusively dedicated to the role of Counselling Psychology in health care and this edition focused on variety of topics, like professional issues, areas of practice and interventions in special populations (Roth-Roemer et al., 1998).Medical professionals also acknowledge the significant contributions of psychological applications in the context of health and illness.It is recognized by the professional that the patient behavior is vitally important in preventive medicine and chronic illness where the active involvement of the patient is required (Corney, 1997).
Literature suggests that in 1987, the British Association for Counselling sponsored a survey conducted by Glenys Breakwell to map the extent of counselling provision in the National Health Service outside primary care, and found that the authorities considered the approach of counselling, although they had fewer counselling actions with various interpretations (Corney, 1997). During illness, when the patient's life is affected by anxiety or distress, this is the most negative factor identified for slow recovery from illness. Ley (1998) suggested that no evidence was available for increased anxiety or depression when patients were told of their diagnosis. Literature suggests that adequate information and preparation before surgery have a positive impact on patients, and preparation has been shown to affect postoperative pain and symptoms (Corney, 1997). Similarly, Maguire (1991) noted that it is not necessary to give straightforward information to the patients; however, in highly sensitive issues it is required to explore what the individual already understands and how they cope over time. The importance of counselling has gradually increased and it has made significant contributions. Similarly, it has also been reported that emotional distress can be reduced by active participation of the patient in decision making in the treatment. Thus, patients may become actively involved in dealing with their illness or disease rather than just feeling dependent or passive. It is also helpful for a counselor or a doctor to facilitate the concerned patient by providing more information about the various options (Corney, 1997). Evidence reports that counselling psychologists have heightened attention towards health and well-being in implementing dynamic models that express hope for human growth and change even under conditions of disease and adversity (Elliott and Shewchuk, 1996). It was also reported that various issues and problems in medical settings are dealt with by counselling, such as post-traumatic stress, coping and adjustment, pain management, pre- and post-operative stress, HIV disease, adjustment to coronary heart disease, substance misuse, renal disease, treatment noncompliance, infertility, anxiety, and helping sick children and their families (Bor and Allen, 2007). The growing role of counselling in primary health care shows that it is not only significant for working with the patient with regard to diagnosis, treatment and care but also related to health education and prevention counselling (Bor and McCann, 1999). The counselling session helps patients to express their feelings about loss of abilities, roles and self-esteem; further, the counselors or doctors can assist them in understanding terminology and/or coping with problems, gaps or other changes.
Generally, mental distress is accounted higher due to depression and anxiety because of the various factors and negative life events.Majority of referred patients are depressed, having anxiety and psychosocial problems, adding to the necessity of counselor or counselling in the primary care of patients with physical illness (Corney and Jenkins, 1993).Although, the major contributions of counselling has been documented as such in supporting patients, motivation for adherence towards treatment, continuation and positive life style.Number of behavioral risk factors has come into account to care as a threat to health, including smoking, diet, lack of exercise and drug abuse.Thus, individual or group counselling would be a significant addition to medical intervention (Corney, 1997).The role of counselling is not only significant to mental health concerns but also for overall health.
Rectitude of counselling psychology in medical context
Today, the world has become more complex day by day, and different risk factors or problems have emerged as serious causes of illness. Behavioral issues like smoking, sexual behavior, psychological stress or failure to engage in positive self-care are more evident among the most common causes of death than bacterial infection, which has claimed millions of lives (Berman and James, 2012). The existence of counselling psychologists in the field of medical care has been an interesting question for decades, along with questions about its identity and interface (Scott, 1980); yet the fact is that counselling psychology has made an important contribution to human health. Authors report that counselling psychologists have contributed a lot, including holistic attention to the ecologies or systems in which an individual's health is embedded; focus on human strengths, well-being and the concept of positive health rather than the absence of disease; interest in diverse and underserved populations; and a developmental lifespan perspective (Alcorn, 1991).
It is important to determine the efficacy of counselling in medical setting (Tolley and Rowlands, 1995).The efficacy has been proven by various studies, Milne and Souter (1988) assessed the effect of counselling on the level of stress and found significant increases in the use of coping skills and decreased in the levels of stress.Additionally, Maes (1992) examined the psychosocial interventions with counselling and explored that it may affect cardiac rehabilitation in such a way that intervention may facilitate psychosocial recovery and aid return to everyday activities.Secondly, it may play an important role in secondary prevention by improving compliance with medical advice concerning medication and lifestyle changes.Davis and Fallowfield (1991) highlighted after the review of studies that counselling has been employed as an adjunct to physical treatment for many other medical conditions ranging from diabetes mellitus to spinal cord injury.In addition to Maes (1992), Davis and Fallowfield (1991) also reviewed and quoted that -considerable evidence on counselling and related forms of intervention can have beneficial effects on reported stress levels, professional re-integration, necessary lifestyle changes and perhaps in morbidity and mortality as well.Moreover, the contributions of counselling in medical care are enormous and it has played an important role in the respective fields; still we need to know some specific or detailed programs and training for counselling intervention.
Counselling psychologist and medical setting
The main expertise of Counselling Psychologists involves dealing with behavior through the executions of different roles, responsibilities along with different evaluations.A Counselling Psychologist is known as an expert in behavior, since psychologists practice for several things such as understanding of psychological process of human behavior, mental and physical functioning of the patients, advisors towards care, training experts and organizer of projects and different psychosocial interventions.BPS Division of Counselling Psychology (2007) and Kagan et al. (1998) quoted that -counselling psychologists has significant role as they can deliver the information regarding the psychological well-being to the patients; however, it also offers specific recommendations which will be helpful to assist medical staff''.Under the proficiencies of counselling, psychologist may provide a portion of advice or instructions on how to manage everyday difficulties, work load, on decision making, dealing with death and dying patients (BPS Division of Counselling Psychology, Earll and Bath, 2004).Additionally, psychologists also gives information on every significant parts of the medical milieu such as patients and their needs, medical personnel, the environment and any special conditions (Carmin and Roth-Roemer, 1998).The role of Counselling Psychologists is very important and significant as it concerns patients' health because despite physical strain, there is another crucial aspect of patients' life where requisite counselling and psychologists are needed.According to identical concept defined by Bennett (2000) and Belar and Deardorff (1995), a psychologist can offer large help to the patients functioning at every important level such as physical level, emotional level, cognitive and behavioral level.Variety of collection of techniques are available in order to use as individual and group counselling, therapies, training, crisis intervention, stress management, motivational interview, guided imagery, behavior analysis and modification, cognitive restructuring and many more as per the obligatory conditions (Karademas, 2009).Cognitive behavioral models are the main base of these techniques which have been associated in the effectiveness of many health conditions including cardio-vascular disorders (e.g., Bellg, 2004;Gidron et al., 1999); diabetes mellitus (e.g., Norris et al., 2001), HIV/AIDS (e.g., Bor et al., 2004;Chesney and Antoni, 2002); sexual health (e.g., Aarø et al., 2006) and surgical procedures (e.g., Lang et al., 2000;Petry, 2000) and many more.
Interestingly, Gatchel and Oordt (2003) have created four major models of the roles of the psychologist in primary care. The roles are: working in a psychological clinic not integrated into the health care clinic, working as a provider who collaborates with the medical personnel, and acting as a behavioral health consultant involved in different tasks. The final model refers to the staff adviser who consults only with the medical staff about defining and treating the problem, although these models are not equally absolute. However, this shows how closely counselling and medical settings are connected and likewise shows the significance of counselling for the guidance of patients. Similarly, the participation of counselling has also been highlighted in prevention and promotion of health. An interesting summarization of counselling psychology has been given by the American Psychological Association (APA), which states that "Counselling psychology is a general practice and health service provider specialty which centers on typical or normal developmental issues as well as atypical or disordered development too and helps people improve their well-being" (American Psychological Association (APA), n.d., Paragraph 1), and also states that counseling psychology interventions may be "preventive, skill-enhancing or remedial" (Paragraph 8) (Berman and James, 2012).
Similarly, detailed guidelines to define best practice in prevention, research, training and social advocacy to improve the well-being of individual and community has been designed.These best practices will assist the psychologist in evaluating their preparation for the participation in prevention work and their understandings (Hage et al., 2007).Another significant point has also been highlighted in relation to the importance of qualities of counselors which shows that counselors do not only place emphasis on methodology, but it is suggested that counselors who offer warmth, genuineness and empathy have been shown to be consistently effective (Corney, 1993).
Counselling and communication skills
Counselling psychologists works to provide an opportunity which focuses on health education and preventative counselling (Bor and McCann, 1999).Important to mention, medical professionals or other fields equally feel the necessity of having positive communication and basic counselling skills.To fulfill the requirement of patients, the awareness of basic counselling skills and positive communication is a must among professionals.Evidence suggests that counselling and communication skills are inter-connected as counselling fulfills their goals by following the steps and skills of communication.
Issues and counselling
The evidences suggest that majority of patients' cases are surrounded by the mental illness, behavioral risk factors and psychosocial issues.Apart from the importance of counseling, few significant issues were also pin-pointed to be explored, which would always be vital in future and crucial to consider.Undoubtedly, training and improvement of skills has always been needed in the context of counselling.It is important to determine the criteria for trainings, required skills, experiences as well as the qualifications of the counselor in the medical settings.It was said by Roslyn Corney (1997) that systematic training is required specifically for those counselors who were working in medical sectors with the proper accreditations and registration procedures.Purposed literature suggests that counselors working in practice are diverse in the qualifications and experiences they possess (Sibbald et al., 1993).Another issue which is very important and critical as reported by Corney (1997) is collaboration and confidentiality to maintain by counselors.To maintain the confidentiality is very important in any official case; however, in the case of counselling in medical setting, it is important to share relevant information to the doctors, but to some extent counselors do not share all information with the medical doctors for the sake of the confidentiality which may lead to difficulties.Similarly, Tyndall (1993) purposed that an interdisciplinary study programme found that counselors were often less able to form collaborative relationship than other health workers or social service personnel.Few awareness criteria for counselors were suggested regarding the provisions in the community such as social services personnel, community mental health teams, voluntary and self-help groups as well as to have good understanding of the medical model and the side effects of the drugs that their clients receive, even if they do not use the model themselves (East, 1995).Increasing role of counselors in medical settings provided brief, evidence based counselling sessions focusing on symptom control or alleviation and helping to enhance patient autonomy and coping (Bor et al., 2004).However, counselors have played a considerable role in the making of team resource management and patient safety programmes.
Patient's awareness, perception and satisfaction towards counselling
Patient's perception and utilization of counselling is important to explore because if the patient has any prolonged illness where guidance is important, these are not taken as recommended professional advices.Lifestyle factors like diet, exercise, sleep and smoking behavior in some cases are difficult to change due to the required time, considerable effort and motivation.Likewise, ambivalence about behavior change is a common problem in health care consultations (Rollnick et al., 1992).As an outcome, no improvement and low medication adherence were seen in patients.It is reported that the barriers to the utilization and perception toward counselling or any lifestyle change is underlying the attitude of health care professional or counselors towards the personal costs, and the patient looking closely at the personal implications of change and the immediate costs while minimizing future benefits (Tuckett et al., 1985).Therefore, the patient's resistance to change is increased (Miller and Rollnick, 1991).Different models on health behavior shows the three common concepts (Doherty et al., 2000) that may create impact on patient perception and behavior; these include the patient's expectations about the consequences of engaging in the behavior, the influence of the patient's perception of, or beliefs about, personal control over the behavior, and the social context of the behavior.Generally, in the light of these models, emphasis has been placed on a different set of techniques of counselling to be applied in various hospitals and in private practices for intervention and as a part of treatment.
The existence of counselling in medical context has been proven by the evidences as well as by scholars' views.The aspect worth considering about patient awareness, perception and satisfaction towards counselling has been sophisticatedly handled by the researches, as one study concerned about how GP's counselling or other counselling should be conducted into practice; however, general practitioners (GPs) perceived counselling as difficult (Mann and Putnam, 1989).Similarly, patients characterized that counselling are as insensitive and rushed in few cases (Malterud and Ulriksen, 2010;Brown et al., 2006); also, low patient compliance (Oldridge and Stoedefalke, 1984;Graves and Miller, 2003) were also experienced in lifestyle counselling in general practice.On the other hand, it was suggested that shared decision-making, an integral aspect of patient centered medicine increases the patients' expectations as to their own compliance (Edwards et al., 2004).
An examination of the effects of verbal and written counselling given to patients, and a Cochrane review, show that the combination of both verbal and written health information improves patients' knowledge and satisfaction (Johnson and Sandford, 2008). However, on the contrary, many patients reported dissatisfaction with the information they received. A new initiative of counselling has been taken by pharmacists in the management of chronic illness, but patients still seem to be unfamiliar with the concept (Van Geffen et al., 2009; Chewning and Schommer, 1996; Gastelurrutia et al., 2006). A review showed that the rates of counselling provided in pharmacies reported by consumers ranged from 8 to 56% (Puspitasari et al., 2009). However, patients' needs for and satisfaction with information are likely to fluctuate over time and with their experience of treatment (Van Geffen et al., 2009; Dickinson and Raynor, 2003). Similarly, several studies described interventions or counselling techniques to enhance medication adherence. Likewise, Omran et al. (2012) conducted a review on pharmacists' interventions to improve medication adherence. Different methods of improving adherence, including telephone follow-ups, handing out leaflets and face-to-face sessions, were recorded. Studies showed that adherence rates were higher in patients receiving some sort of intervention. However, a study by Geffen et al. (2011) examining patients' satisfaction with information on cardiovascular medications received from pharmacies reported that 58% were unsatisfied with the information they received. Interesting information was also revealed from the patients regarding side effects, as many patients preferred to receive as much side-effect information as possible. However, some patients did not wish to receive side-effect information, as they were concerned that it may impact negatively on their adherence to medication (Borgsteede et al., 2011). Another strand reported by Hamrosi et al. (2013) showed that patients want written information; however, they are generally not supplied with it. Time constraints, possible creation of patient anxiety, low literacy, and perceived length and complexity of the information were common reasons for not providing it. Moreover, effectiveness and lower demands may be among the main factors that strengthen the relationship between counselling professionals and patient behavior.
Conclusion
Overall, the investigation of effectiveness of counselling in any sector is hard to determine in a specific way although it has shown countless constructive outcomes.This article expands to understand the role of counselling in medical setting.The process was completed through the different opinions, evidences and inputs of researches on the significant contribution of counselling psychology.Results suggest that client or patient centered approach had a significant impact on medical practice for patient care.Practitioners reported that sufficient understanding of the patient views is important in the process of consultation and medical treatment and decision has to be developed on the basis of shared involvement.It was also coded that to facilitate this into practice, various skills of counselling such as active listening, empathy, responding and reflection has to be improved which is very important for counselors and medical practitioners.Counselling may not only be related to the improvement in patient well-being and emotional well-being but also improves the compliance with treatment, patient satisfaction and recovery from surgery or other medical treatment.The central role of counselling is to provide benefit for patient in risk reduction or illness (physical and mental both) and it is also helpful for family members, community and for safety programmes.Interesting spotlight on the patient awareness and perception towards counselling revealed that a number of risk factors may create impact on adherence to follow the recommended action such as difficult counselling, insensitive and highly cost effective.As was well documented, psychological counselling includes meditation, relaxation and other methods of stress management (Young and Jacqueline, 2007) which will be very sustainable to any chronic health issues.Evidences also show the considerable efforts of counselling in prevention and promotion of health in lifespan development and changes.The significance of counselling has been widely accepted by all over the world.Thus, different sectors are applying the counselling skills to improve team management in the related fields.However, the need of training, research, practices and proper education in counselling still remain the same, but to be improved.It is mandatory to provide adequate training progammes or awareness workshops for counselors to do proper justice to the emerging requirement.The contributions of counselling would always prove to be a better model for human growth, health and well-being under any adversity.
Ultrasound imaging of male urethral stricture disease: a narrative review of the available evidence, focusing on selected prospective studies
Purpose To synthetize the current scientific knowledge on the use of ultrasound of the male urethra for evaluation of urethral stricture disease. This review aims to provide a detailed description of the technical aspects of ultrasonography, and provides some indications on clinical applications of it, based on the evidence available from the selected prospective studies. Advantages and limitations of the technique are also provided. Methods A comprehensive literature search was performed using the Medline and Cochrane databases on October 2022. The articles were searched using the keywords “sonourethrography”, “urethral ultrasound”, “urethral stricture” and “SUG”. Only human studies and articles in English were included. Articles were screened by two reviewers (M.F. and K.M.). Results Our literature search reporting on the role of sonourethrography in evaluating urethral strictures resulted in selection of 17 studies, all prospective, even if of limited quality due to the small patients’ number (varied from 28 to 113). Nine studies included patients with urethral stricture located in anterior urethra and eight studies included patients regardless of the stricture location. Final analysis was based on selected prospective studies, whose power was limited by the small patients’ groups. Conclusion Sonourethrography is a cost-effective and safe technique allowing for a dynamic and three-dimensional urethra assessment. Yet, because of its limited value in detecting posterior urethral strictures, the standard urethrography should remain the basic ‘road-map’ prior to surgery. It is an operator-dependent technique, which can provide detailed information on the length, location, and extent of spongiofibrosis without risks of exposure to ionizing radiation.
Introduction
Successful treatment of urethral stricture disease requires not only adequate surgical experience but also appropriate preoperative diagnosis. The basic tools widely used for the initial evaluation of patients with suspicion of urethral stricture (US) are uroflowmetry, supplemented with the IPSS (International Prostate Symptom Score) questionnaire. However, these non-invasive tests remain only supplements to the available imaging methods. Currently, the standard imaging of the urethra includes urethroscopy, cystourethrography (CUG) with voiding cystourethrography (VCUG), and the increasingly used sonourethrography (SUG) and magnetic resonance urethrography (MRU) [1][2][3]. Comprehensive data collection is of utmost importance prior to an operation, because factors such as stricture length, location, and extent of periurethral pathology have a key impact on the choice of surgical approach, reconstruction technique, and the final outcome. The implementation of SUG was already described more than 30 years ago, yet importantly, this method is still evolving. Compared to the first data provided by McAninch in 1988, who was the first to describe the implementation of SUG in US diagnosis, the currently widely available high-quality ultrasound devices offer incomparable image quality and detail in the assessment of pathological tissue [4]. The main limitations of the ultrasound technique include operator dependence and lower sensitivity for evaluation of the posterior urethra, a limitation that, according to some authors, can partly be overcome by the use of transrectal ultrasound [5]. Sonourethrography has shown significant value in several studies and, in the light of the growing interest in the application of this method, this narrative review provides a summary of the available literature on the diagnostic role of SUG in the management of urethral strictures. The aim of this review is a thorough analysis of SUG, including technical aspects of the procedure, operator dependency, advantages, and limitations.
Pathophysiology of urethral stricture disease
The pathophysiology of urethral stenosis is linked to excessive fibrotic growth at the level of the corpus spongiosum.The result of this pathological process is known as "spongiofibrosis".In contrast to the normal urethral wall, the epithelial layer at the site of stricture is much thicker.Dense packing of elastin fibers around the narrowed urethra causes the loss of natural elasticity of the urethra until they finally prevent proper urination [6].Fluid's irritative effect at the site of urethral damage may theoretically intensify the process, but this mechanism has not been practically explored in human studies [7].It is yet noteworthy, that the first murine model for urinary extravasation revealed that mesenchymal spongiofibrosis can be induced by urethral injury with subsequent extravasation.Understanding of this cause-andeffect sequence explains the need to look for more accurate diagnostic methods that provide information on pathology beyond the urethral lumen [8].
Conventional urethral imaging techniques: urethrocystography and urethroscopy
Cystourethrography and voiding cystourethrography have been the oldest and most used imaging modalities for patients with US, still being the "gold standard".The examination is widely accessible and the location and length of the stricture can be evaluated instantly and at a relatively low cost.A great advantage of this method is the ability to assess the entire length of the urethra including the posterior urethra.In the case of complete obliteration of the urethra, in patients who are already on a suprapubic catheter-the proximal segment can be visualized by performing the antegrade urethrogram.Furthermore, CUG/VCUG also detects presence of diverticula, stones, fistula or false path.The main limitation is lack of information about the tissue beyond the lumen of the urethra; thus, information on spongiofibrosis cannot be obtained.Some comparable studies suggest that CUG/VCUG underestimates stricture length [9][10][11][12][13][14].Moreover, both the patient and physician may be exposed to ionizing radiation during the procedure, unless an infusion line is used to fill the urethra and bladder.The impact of radiation can be especially significant when repeated examinations are necessary.On the other hand, urethroscopy enables a real-time endoscopic visualization of the urethral lumen without exposure to harmful radiation.Noteworthy, urethrocystoscopy and CUG/VCUG have been considered the preferred tools in post-urethroplasty follow-up protocols to detect a recurrent stricture [15,16].Yet, urethroscopy rarely enables assessment of the stricture length as the caliber of symptomatic strictures is usually narrower than the standard cystoscopes used [17].Moreover, urethroscopy is limited in providing a clear diagnosis in complex cases such as multiple strictures, or complete urethral obliteration.
Novel urethral imaging technique: magnetic resonance urethrography
Magnetic resonance urethrography stands out among the methods used in the diagnosis of urethral stricture, because it provides three-dimensional images of urethral stricture disease, including data on the tissue surrounding the urethra. One of the major differences, which also determines the choice of one of these methods, is the range of urethra evaluation. Magnetic resonance urethrography was found to be accurate in the assessment of both the anterior and posterior urethra. The value of MRU is particularly emphasized for the evaluation of the posterior urethra, because preoperative assessment of these strictures correlated more closely with operative findings compared to RUG/VCUG [18][19][20].
Materials and methods
A comprehensive literature search was performed using the Medline and Cochrane databases in October 2022. Studies that evaluated the use of SUG in the diagnosis of urethral stricture disease were included in the analysis. Prospective studies were selected for this review to obtain the most informative data possible. This exclusion of case reports, editorials, and commentaries, while potentially limiting the scope of the review, was deemed necessary to ensure the highest quality and clinical relevance of the findings. Articles were screened by two reviewers (M.F. and K.M.) who followed the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) statement. The selection process is presented in the PRISMA flowchart (Fig. 1) [21,22]. The articles were screened using the keywords "sonourethrography", "urethral ultrasound", "urethral stricture", and "SUG". Only human studies and articles in English were included. Case reports, conference abstracts, editorials, and comments were excluded from detailed analysis.
Results
Seventeen papers were selected as a result of our literature review on the use of SUG in assessing urethral strictures.Final analysis is based on prospective studies, the majority of which are limited by a small patient population (number of patients varied from 28 to 113).Nine studies included patients with urethral stricture located in anterior urethra and eight studies included patients regardless of the stricture location.As most of the available literature assesses the value of ultrasound based on comparing this method to other modalities and/or surgical findings, the collected data are presented in the Table 1.As shown in the table, the diagnostic accuracy of SUG was compared to RUG/VCUG, MRU, and sonoelastography.The accuracy of SUG was generally high, with most studies reporting a sensitivity of over 80% and a specificity of over 90%.However, there was some variability between studies, with the accuracy of SUG being lower for strictures in the posterior urethra.
Sonourethrography technique
The technique of the procedure itself has not changed much since its introduction, and most authors follow the same steps. The ultrasound transducer should be positioned on the perineal area, and high-frequency ultrasound waves are directed into the urethral tissue. The ultrasound frequencies should be adjusted for different parts of the urethra: 15-18 MHz for the penile urethra (from the meatus to the distal bulbar urethra) and 9-12 MHz for the bulbar urethra (up to the external urethral sphincter). Special attention should be paid to the pressure created by the examiner with the transducer against the skin, as too much pressure may generate the impression of a false stricture. Moreover, to avoid artefacts and to evaluate the dynamic view of the intraurethral flow, the urethra should be filled during the examination.
The following is a general description of the steps involved in injecting contrast for sonourethrography:

1. Patient preparation: The patient is positioned in the lithotomy position, with legs supported and separated. The perineal area is cleansed with an antiseptic solution.
2. Filling the urethra: The tip of a thin catheter is inserted into the urethral meatus, and saline is prepared in a syringe. In case of a distal urethral stricture, a blunt plastic cannula can be used. Saline is slowly administered into the urethral lumen, typically in small portions. Any discomfort or adverse reactions should always be noted.
3. Ultrasound imaging: The ultrasound transducer is positioned and moved from the urethral meatus toward the perineal area, and the saline-filled urethra is imaged in real time. The physician should carefully assess the images for any abnormalities or areas of narrowing. The direction of examination is not relevant; however, the entire length of the urethra available for examination should always be assessed.
4. Post-procedure: Once the imaging is completed, the catheter or cannula is removed, and the patient is instructed to void.
Anatomy of male urethra on sonourethrography
The normal urethra, as seen in Fig. 2, presents as an anechoic tubular area with a smooth outline, usually 8-10 mm in diameter [23]. If saline is introduced, small hyperechoic echoes may be visible within the urethral lumen (Fig. 3).
Alterations in the course of spongiofibrosis present as hyperechogenic areas in comparison to the normal echogenicity of the corpus spongiosum (Fig. 4). Calcifications may be encountered. Ultrasound also allows the evaluation of the mucosa and its abnormalities, lumen abnormalities such as diverticula, Cowper glands, paraurethral soft tissues and/or perineal masses, posttraumatic changes, etc., as well as imaging of the bladder (which may show a thickened, trabeculated bladder wall in case of high-pressure voiding due to the presence of a stricture).
Additional ultrasound techniques
Sonoelastography also known as virtual or electronic palpation is a novel technique used for measurement of tissue stiffness.Talreja et al., in a study on 77 patients with clinical features of anterior urethral stricture disease concluded that sonoelastography estimates stricture site and length better in comparison with RUG/VCUG and SUG.It estimates the degree of spongiofibrosis which serves as an important prognostic factor for stricture recurrence more accurately than SUG.Despite several subsequent studies, it is not widely used [24][25][26][27][28][29].Bosio described contrast-enhanced voiding urosonography (CE-VSUG) via the transperineal approach in a pediatric population after catheter filling of the bladder with ultrasound contrast diluted in serum, and its use for assessing posterior urethral anomalies and the degree of vesicoureteral reflux in children has become widespread [30].
Diagnostic accuracy of sonourethrography compared to other methods and surgical findings
Most of the studies compared SUG findings with that of RUG/VCUG in the diagnosis of urethral stricture.In two studies SUG was found to be more accurate at diagnosing stricture presence and estimating the stricture length compared to RUG [9,10].Yet, the sensitivity in detecting the stricture and estimating its length using the SUG largely depends on the part of the urethra where the stricture is located.In six studies, SUG has been found to be superior to RUG for anterior urethral strictures [9,10,26,[31][32][33] The highest correlation for stricture length at operation was for strictures located in the penile urethra [6].Another early study comparing SUG to conventional RUG found that RUG tended to underestimate actual stricture length as compared to SUG [32,34].Tembhekar and colleagues evaluated the role of SUG in 70 male patients referred to the urology department for symptoms suggestive of urethral stricture disease.This study diagnosed 39 strictures in 33 patients.RUG/VCUG and SUG were equally efficacious in diagnosing anterior urethral strictures; however, only one of three (33.3%)posterior urethral strictures were adequately visualized on SUG.The group also concluded that SUG was superior in evaluating spongiofibrosis; however, this appeared to be subjective, based on authors' opinion.Interestingly, 61 of the 70 (87%) of patients involved in this study preferred SUG over conventional RUG, as it was felt to be less invasive and caused less discomfort [35,36].Only in one study, SUG was the least accurate method compared with RUG/VCUG and MRU with average overestimation of 2 mm as related to the operative measure [18].Despite high accuracy of SUG in most patients, the authors of this study experienced some notable outliers in the SUG measurements.None of these problems occurred in the penile urethra; instead, they were all exclusive to the bulbar or membranous urethra.This accurately depicts the technical challenges of performing SUG in the posterior urethra, which is nearly impossible despite optimal patient placement and considerable operator expertise [18,37].Also, it was discovered that in 44 out of 232 (19%) patients undergoing anterior urethral reconstruction included in the study, the results of the intraoperative SUG changed the planned reconstructive technique (based on the preoperative RUG).The authors of this study described criteria to perform an anastomotic urethroplasty based on the intraoperative urethral ultrasonogram findings demonstrating a bulbar urethral stricture length of < 25 mm on aggressive urethral distension [38].
Sonourethrography for the assessment of spongiofibrosis
Most authors concluded that SUG enables the evaluation of spongiofibrosis in the anterior urethra and provides accuracy similar to MRU. MRU's principal benefit is greater anatomical detail, which is offset by the cost of the modality and the difficulty of image interpretation. A qualitative and quantitative evaluation of spongiofibrosis may also be provided by SUG incorporating real-time elastography [26, 30, 37]. Whether determining the exact extent of spongiofibrosis before surgery has significant clinical value is still unknown and remains to be investigated in further research. However, most authors agree that it influences the choice of surgical technique, as excision of the fibrotic fragment and end-to-end anastomosis is preferred in the case of extensive spongiofibrosis [38]. In a study by Ravikumur et al. [31], SUG appeared to more accurately depict stricture length, stricture diameter, and degree of spongiofibrosis when correlated with cystoscopic and intraoperative findings.
Sonourethrography as a sole imaging technique
Most of the articles that have been published demonstrate the value of SUG as an auxiliary modality in addition to the standard methods of diagnosing urethral strictures such as RUG or urethroscopy. However, in a recent study, Bryk and colleagues evaluated the viability of using SUG as the sole imaging technique for diagnosing urethral strictures prior to surgical treatment. This study demonstrates that, in a high-volume center with an experienced team, SUG may be the sole imaging modality needed to plan a definitive urethral reconstruction. It should be highlighted that this study only included patients with anterior urethral strictures. In comparison to RUG, which was 90% accurate in this study of 30 men who underwent both procedures, SUG was 100% accurate for anterior urethral strictures, but only 60% accurate for posterior urethral strictures. Hence, as the authors concluded, it is not recommended to extend these findings to the posterior urethra. In light of the available data on SUG, because of its limited value in detecting posterior urethral strictures, standard urethrography should remain the basic 'road map' prior to surgery, particularly in patients with suspected urethral stricture undergoing initial diagnosis [39].
Highlights and clinical indications
Retrograde urethrography has historically been the gold standard for identifying urethral strictures; however, because of its drawbacks, novel imaging techniques have been investigated and evaluated. Before deciding on surgical intervention, it is crucial to thoroughly consider the length, location, number, and morphology of the strictures, since each may affect the choice of treatment method. Modern high-resolution ultrasound is widely available; thus, the quality of data provided by this diagnostic method has improved significantly since its first description several decades ago. Sonourethrography has nowadays become a viable supplement to the standard modalities and provides additional valuable information. Fibrous scarring of the corpus spongiosum leading to a decrease in the urethral lumen is the fundamental theory explaining the pathogenesis of urethral stricture disease. Sonourethrography provides data on spongiofibrosis with satisfactory accuracy, making this method widely used, mostly in specialized reconstructive urology centers. As a high-resolution, multi-planar, and cost-effective technique that can be performed in an outpatient setting, SUG has found its place in the new standards of diagnostics of anterior urethral strictures. It is safe for both the patient and the physician because neither is exposed to radiation. Moreover, the possibility of using saline instead of iodine contrast makes it applicable also for allergic patients.
However, knowing in which clinical situations SUG is of the greatest value is crucial. As proven in numerous publications, the satisfactory accuracy of SUG refers primarily to the penile urethra. Some authors question the value of the radiological assessment of strictures of the distal urethra and its impact on the choice of surgical technique. These strictures are often extensive or multiple, rather than single, as mostly observed in iatrogenic bulbar strictures. Thus, regardless of length and extent of spongiofibrosis, these strictures often require onlay urethroplasty with opening of the urethral lumen, in which case the most accurate assessment of the pathology can be achieved during the surgery. On the other hand, in these cases SUG seems to be the best method to show the periurethral pathology up to the urethral opening with high accuracy, and it allows discussion of the surgical plan with the patient before the surgery. Moreover, SUG can be of particular use to calculate the flap width in the pendulous urethra, where fasciocutaneous flaps are frequently used for reconstruction. For this purpose, Morey and McAninch proposed a straightforward formula, flap width = 26 - 3D, where D is the urethral diameter in mm [40]. The lumen diameter can be measured with satisfactory accuracy with ultrasonography. This prevents excessive flap width from causing urine pooling and enables the fasciocutaneous flap to be harvested before the urethra is opened.
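The arithmetic behind this rule of thumb is easily scripted. The short sketch below simply evaluates the formula as quoted above (flap width = 26 - 3D); the function name and the example diameters are illustrative assumptions, not values from the cited study.

# Illustrative calculation of fasciocutaneous flap width from the quoted
# Morey and McAninch rule of thumb: width (mm) = 26 - 3 * D,
# where D is the sonographically measured urethral diameter in mm.
def flap_width_mm(urethral_diameter_mm: float) -> float:
    width = 26 - 3 * urethral_diameter_mm
    return max(width, 0.0)  # guard against non-physical negative widths

for d in (2.0, 4.0, 6.0):  # example diameters only, not patient data
    print(f"D = {d} mm -> suggested flap width ~ {flap_width_mm(d):.0f} mm")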
Furthermore, SUG can be particularly valuable in cases when conventional ascending urethrography is challenging or impossible due to the anomalous anatomy of the distal urethra. This is particularly the case in patients with hypospadias, in whom both the native and reconstructed urethra are often extremely difficult to evaluate. While descending SUG avoids the need to inject a contrast agent, micturating SUG, although challenging, is feasible even in very complex cases and does not require catheterization of the urethra. The use of SUG in these patients should also be particularly considered as a follow-up tool, as it does not expose the patient to radiation. Thus, future research should investigate the accuracy of sonourethrography in the follow-up of patients after urethral stricture surgery. This could be a way to detect early recurrence of the stricture. In addition, research is ongoing to develop new ultrasound techniques that can improve the accuracy and clinical utility of sonourethrography. For example, researchers are exploring the use of three-dimensional ultrasound and contrast-enhanced ultrasound.
One of the significant limitations of SUG is operator dependency. Although statistical analysis of this issue is scarce or non-existent, nearly all papers stress that, despite the wide availability of ultrasound and the inclusion of the technique in both urethral stricture diagnostic algorithms and guidelines, it has not yet been entirely incorporated into everyday urological practice [41-43]. The long learning curve, the limitations in evaluating the posterior urethra, and technical aspects of the examination, such as patient preparation and the length of the examination itself, are also raised as issues [43, 44].
Conclusion
Sonourethrographic assessment of the male anterior urethra in patients with anterior urethral strictures is a safe, well-tolerated, minimally invasive and cost-effective diagnostic modality. For the posterior urethra, this technique cannot be recommended based on the available published evidence. While more studies are needed to better characterize SUG, it could be proposed as an additional diagnostic modality, especially in severe and recurrent cases. More evidence on SUG and more data from studies with larger patient groups need to be collected in the near future, as so far no randomized clinical trials have been published. Although SUG might in the future replace RUG/VCUG as the investigation of choice in the diagnosis of anterior urethral strictures, at present the combination of RUG/VCUG still remains the gold standard in evaluating urethral stricture disease.
Fig. 2 Ultrasound image of bulbar urethra in longitudinal scan, shown within the white box (figure provided by the authors). U: urethra; CS: corpus spongiosum; BSM: bulbospongiosus muscle. Thin white line: urethral epithelium; thick white line: Buck's fascia; dotted line: deep perineal fascia (DPF)
Table 1 Diagnostic accuracy of sonourethrography compared to other modalities and surgical findings, if applicable
Author's contribution MF: protocol/project development, data collection, manuscript writing/editing. MV: protocol/project development, data collection, manuscript writing/editing. KM: data collection, manuscript writing/editing. JA: data collection, manuscript writing/editing. FC-J: protocol/project development, data collection. AC: protocol/project development, data collection. CMR: data collection, manuscript writing/editing. WV: protocol/project development, data collection, manuscript writing/editing. MW: protocol/project development, data collection. GM: protocol/project development, data collection, manuscript writing/editing. MM: protocol/project development, data collection, manuscript writing/editing. Funding Open Access funding enabled and organized by Projekt DEAL.
Exploring pest mitigation research and management associated with the global wood packaging supply chain: What and where are the weak links?
Global trade continues to increase in volume, speed, geographic scope, diversity of goods, and types of conveyances, which has resulted in a parallel increase in both quantity and types of pathways available for plant pests to move via trade. Wood packaging material (WPM) such as dunnage, pallets, crates, and spools, is an integral part of the global supply chain due to its function in containing, protecting, and supporting the movement of traded commodities. The use of untreated solid wood for WPM introduces the risk of wood boring and wood-infesting organisms into the supply chain, while the handling and storage conditions of treated WPM presents risk of post-treatment contamination by surface-adhering or sheltering pests. The wood-boring and -infesting pest risks intrinsic to the solid wood packaging pathway were addressed in the 2002 adoption and 2009 revision of ISPM 15, which was first implemented in 2005–2006 in North America. Although this global initiative has been widely implemented, some pest movement still occurs due to a combination of factors including fraud, use of untreated material, insufficient or incomplete treatment, and post-treatment contamination. Here we examine the forest-to-recycling production and utilization chain for wood packaging material with respect to the dynamics of wood-infesting and contaminating pest incidence within the environments of the international supply chain and provide opportunities for improvements in pest risk reduction. We detail and discuss each step of the chain, the current systems in place, and regulatory environments. We discuss knowledge gaps, research opportunities and recommendations for improvements for each step. This big picture perspective allows for a full system review of where new or improved pest risk management strategies could be explored to improve our current knowledge and regulations.
Introduction
The first International Year of Plant Health (IYPH) in 2020 and early 2021 was proposed to renew and reinvigorate a global focus on tactics to protect the environment, conserve biodiversity, slow climate change, and protect the world's food production and natural resources (IPPC 2021a). The IYPH was necessary because invasive pests in all ecosystems are estimated to cost the world tens of billions of dollars annually (Bradshaw et al. 2016;Diagne et al. 2021) and have an annual cost of over $22.1 billion in North America alone (Crystal-Ornelas et al. 2021;Rico-Sánchez et al. 2021) although it is widely accepted the true costs are likely greater. Many invasive species are pests of trees and forests, where their damage can also affect forest productivity, ecosystem services, and carbon sequestration (Jones 2016;Fei et al. 2019;Quirion et al. 2021). Since potential plant pests can be inadvertently moved via international trade (Allen and Humble 2002), finding ways to reduce the risk of plant pest movement in traded commodities and their conveyances is one important step toward these goals.
The expansion of global trade over the past several decades has opened new markets and increased the speed at which goods move around the world. This rapid increase in trade volume is partly due to a growing global population with increased purchasing power and a corresponding increased demand for goods. One consequence of these phenomena is a global increase in plant, animal, and microorganism movements associated with trade commodities that can lead to the introduction and establishment of invasive pests (Banks et al. 2015). The global phytosanitary community works collectively to develop solutions to address the risks associated with trade such as those posed by invasive pests and the movement of untreated solid wood packaging material (WPM, Fig. 1).
While earlier (mid-1800s to early-mid 1900s) introductions of invasive species to North America often occurred via trade in infested plants, most recent introductions (e.g., Anoplophora glabripennis, Agrilus planipennis) are thought to have been introduced via WPM. International standards for phytosanitary measures (ISPMs-developed under the auspices of the International Plant Protection Convention) and regional standards for phytosanitary measures (RSPMs-developed under the auspices of regional plant protection organizations like the North American Plant Protection Organization) are guidelines used for phytosanitary trade. These guidelines can be used by countries or the global community to develop regulations to reduce the risk of non-indigenous species introductions associated with specific commodities or conveyances. The standard addressing the risk of wood boring and wood infesting pests in WPM is ISPM 15: Regulation of wood packaging material in international trade which sets the requirements for acceptable treatments of WPM (IPPC 2019). Other ISPMs provide guidelines for reducing the risk of infesting organisms in plants (ISPM 36;IPPC 2016), on vehicles, machinery, and equipment (ISPM 41;IPPC 2017b), and integrating measures in a systems approach for pest risk management (ISPM 14;IPPC 2021c and RSPM 41;NAPPO 2018).
International awareness of the risk posed by contaminating organisms has resulted in a renewed interest to determine how and why these potential invasive pests are moved within global supply chains (e.g., NAPPO 2022). Integral to the understanding of contaminating (also referred to as "hitchhiking") organisms and their control is differentiating among the functional niches of non-indigenous species. Infesting organisms infect or invade plant tissues whereas contaminating organisms lack this physical or physiological relationship with the article on which they are found. Because contaminating organisms can be found on any packaging material or conveyance, their presence in the WPM supply chain is not unique. However, this risk can only be addressed when we understand how the regulatory and logistical conditions affecting supply chains already influence the risk posed by contaminating organisms. With this information, the phytosanitary community can then develop new and more effective strategies to minimize the risks present in the global supply chain.
To understand how WPM may affect the risk of introducing organisms we created a schematic representation of a "typical" global supply chain. Our example supply chain follows a commodity created overseas that will be delivered to a consumer somewhere in North America (Fig. 2). We then use the example to review how the activities at different steps influence where, when, and how unwanted organisms can potentially enter and exit a supply chain. We focus on WPM and present a supply chain that terminates in North America as an example; however, this is globally relevant as many goods are moved with WPM and many of the phytosanitary risk reduction principles we review are relevant for other commodities, conveyances, and supply chains.
The literature on preventing and managing the spread of organisms from one place to another uses a range of terms to describe these species (Iannone et al. 2021). Numerous attempts have been made to standardize the terminology within theoretical frameworks and models (e.g., Richardson et al. 2000; Kolar and Lodge 2001; Colautti and MacIsaac 2004; Catford et al. 2009) but not without some controversy (e.g., Sagoff 2005; Boltovskoy et al. 2018). Most of these frameworks focus on the steps of invasion as a species transitions from its first introduction to a new place, through its establishment, spread, and integration with the local ecosystem and the attendant impacts. Within those frameworks, we focus on the transport and entry of species (e.g., Catford et al. 2009) and how phytosanitary measures can prevent their establishment, spread, and impact. Consistent with its use in invasion ecology, we use the term non-indigenous species to refer to species introduced beyond their native range due to human activity (Kolar and Lodge 2001) and indigenous species to refer to species within their native range.

Fig. 1 Wood packaging material (WPM) is the term for solid wood products used to aid in the transportation, protection, or containment of commodities. Wooden pallets (A) are the most common WPM used to facilitate the movement of packaged and bulk goods within warehouses, in trucking, and in shipping containers. Wooden packaging like spools (B), crates (C), cases or frames that contain or protect a commodity are also WPM. Blocks, strapping, and other wooden materials used to secure loose goods from damage or unwanted movement while in transport are also WPM and collectively are referred to as dunnage (D). Dunnage can be used within conveyances like sea containers to prevent the movement of goods, or within the holds of ships either as a counterbalance for ship stability or to restrain break-bulk cargo (commodities too large or otherwise unsuitable for shipping containers). Photo credits: LF Greenwood (A, C), DR Coyle (B), Susan C Usman (D)
WPM IS PACKAGED AND PACKED
WPM is packaged with a commodity and packed into a cargo transport unit (CTU). Cross contamination during packaging or packing is possible. Trained workers can remove or reject contaminated or unmarked WPM.
SHIP IS EN ROUTE IN INTERNATIONAL WATERS
Cross contamination may occur in breakbulk and CTUs. Workers can be trained to decontaminate exposed surfaces.
CTUs ARE OPENED AND PACKAGED GOODS REMOVED AT DISTRIBUTION CENTER
Pests may escape and establish. Risk of escape increases when WPM or goods stored outdoors, or in poorly controlled indoor settings; less so when goods stored indoors.
WPM IS TREATED PER ISPM 15
Wood borers and microorganisms remaining after debarking are significantly reduced in WPM treated to ISPM 15 standards. The ISPM 15 mark is applied to signal treatment; fraud or treatment failure at this stage may result in marked WPM harboring viable pests.
TREES ARE HARVESTED
Pests (e.g., wood borers, phloem feeders, microorganisms) may enter the supply chain if they are present in or on trees destined for use in wood packaging material (WPM).
WPM IS STORED PRIOR TO USE
Infestation or contamination can occur if WPM stored improperly (e.g., poor storage yard management, wet conditions, outdoor storage under lights).
CTUs AND BREAKBULK ARE OFFLOADED INTO THE PORT OF THE DESTINATION COUNTRY
Inspection may identify marked WPM harboring viable pests, unmarked WPM, and contaminated WPM. Detection of pests or non-compliant WPM may result in CTUs and break-bulk commodities being rejected, or in penalties assigned to importers, commodities, and ships. Dunnage may be collected and reused by the port of entry for loading outbound trade.
CTUs AND BREAK BULK SIT AT PORT OF ENTRY
Further inspections may occur. Beachhead contamination possible.
PACKAGED GOODS ARE STORED IN WAREHOUSES OR DISTRIBUTION CENTERS
Pests may escape and establish. Risk of escape increases when WPM or goods stored outdoors, or in poorly controlled indoor settings; less so when goods stored indoors. Trained workers can notice and report pest sightings.
WPM IS MANUFACTURED
Phloem feeders and contaminating pests are removed from WPM manufactured to ISPM 15 standards, though some risk of pest infestation remains after manufacture. WPM not manufactured to ISPM 15 standards may retain external pests or tissue (e.g., bark) susceptible to reinfestation.
IF DAMAGED, WPM MAY BE REPAIRED OR REMANUFACTURED
Damaged WPM may be repaired or remanufactured with untreated wood, introducing new risk of pest infestation or contamination by both native and non-native pests. To remain ISPM 15 compliant, depending on type and extent of repairs, treatment may need to be reapplied.
WPM IS STORED PRIOR TO REUSE OR DISPOSITION
Pests may escape and establish. Risk of escape increases when WPM stored outdoors. Survey and detection efforts can prioritize stored WPM that was associated with higher risk commodities. Trained workers and community members can notice and report pest sightings. Beachhead contamination possible.
Fig. 2 Flow chart identifying where infesting and contaminating pests may enter and escape wood packaging material in the supply chain. The chart shows movement of WPM from source to destination: WPM as it is produced in its source country, enters the supply chain, becomes associated with goods, is transported to North America, is disassociated from its goods, and then either disposed of or reused

This use is also consistent with International Plant Protection Convention (IPPC) and ISPM usage (e.g., IPPC 2021b). We use the term 'pest' to refer to a species that has harmful impacts (e.g., to the environment or to an economy) in its native or introduced range.
This review is presented in sections that correspond to our example global supply chain which follows a piece of WPM as it is produced, enters the supply chain, becomes associated with goods, is transported to North America, is disassociated from its goods, and then either disposed of or reused (Fig. 2). We review some of the challenges associated with mitigating the risk of invasive pests along supply chains and provide suggestions for areas of research that could address these challenges.
Trees are harvested (Fig. 2, Box 1)
The first step in the creation of WPM occurs when a tree is harvested. Insects, fungi, nematodes, and many other organisms use trees as a resource, most commonly for food, shelter, or as a substrate for oviposition. These organisms can potentially be present in WPM and be transported anywhere in the world if they are not removed or rendered infertile, inactive, unable to complete development or reproduce, or killed.
The types of organisms that use trees and the tree tissues they use vary among species and groups. For example, live trees may contain bark beetles (Scolytinae), which consume live phloem tissue and are found on the phloem-inner bark interface (Lieutier et al. 2004; Vega and Hofstetter 2015). Cerambycidae and Buprestidae larvae usually consume phloem but can also be found feeding and living in the sapwood and heartwood (Lieutier et al. 2004; Haack et al. 2017). Siricidae larvae and some Scolytinae (e.g., ambrosia beetles) live in the sapwood and heartwood (Schiff et al. 2012; Hulcr and Stelinski 2017). Other insects (e.g., Lymantria dispar, Lycorma delicatula) attach eggs or pupae to surfaces, including standing or downed trees (Elkinton and Liebhold 1990; Liu 2019) and some insects (e.g., Halyomorpha halys, Orchestes fagi) may overwinter within the bark of trees (Lee et al. 2014; Morrison et al. 2017). Fungi and other microorganisms may be introduced to trees via insect or mechanical damage, and windblown spores may infect foliage or wounds on a tree. Pest population densities increase and decrease over time and the periodic or episodic outbreaks experienced by some pests may be caused by natural or anthropogenic factors, such as climate change or monocultures. These outbreaks can result in the increased probability that pests may be in wood destined to become WPM. For example, the planting of Populus monocultures for windbreaks in China led to elevated A. glabripennis populations in the 1960s-1990s (Haack et al. 2010; Yan and Qin 1992) which may have contributed to their introduction to the United States sometime prior to their first discovery in 1996 (Haack et al. 1996). Non-indigenous pest species may also have elevated populations in their invaded range, like H. halys in the U.S. (Valentin et al. 2017) and Pityophthorus juglandis in Italy (EPPO 2015). These are sometimes referred to as beachhead or bridgehead populations (Lombaert et al. 2010; Bertelsmeier and Keller 2018), which may result in increased infestation or contamination of WPM in the new range and subsequent export to other countries.
The timing of the harvest process also affects the number of pests and other organisms that can enter the WPM production chain. If trees are harvested when pests are not present or are present in low numbers, the risk of pest introduction is lower. Likewise, if trees are harvested in an area with a high pest density (e.g., salvage logging due to a bark beetle outbreak) the risk of that pest's presence in the WPM chain is greater. Insects can also be attracted to fallen trees associated with blow-down events (e.g., hurricanes, windstorms; Vogt et al. 2020) resulting in significant population increases. However, harvesting activities can also mitigate this threat, as debarking round wood (i.e., logs) at the harvest site removes most of the pests that live on or in the bark and phloem layer (e.g., Thorn et al. 2016) and some pests may be dislodged from round wood as it is transported to a processing facility. Organisms living inside the sapwood and heartwood have a higher likelihood of surviving harvest, and harvesting processes that minimize bark disturbances will increase the survivorship of organisms that have colonized the outer surfaces of round wood.
Round wood is often piled and stored at the harvest site or at processing facilities (Fig. 3A), and organisms that are attracted to recently cut trees may enter the wood and thus the WPM chain. Several insect and fungal species attack milled untreated wood and lumber and can persist inside this material for some time (Verrall 1945;Gray and Borden 1985;McLean 1985;Peters et al. 2002; but see Haack and Petrice 2009). Various management tactics (e.g., rapid removal of harvested round wood immediately after harvest, harvesting during seasons when pests are not active, application of pesticides, anti-aggregation pheromones, or water) may prevent or reduce pest infestation of stored round wood.
WPM is manufactured (Fig. 2, Box 2)
Wood packaging material is defined by the IPPC as "wood or wood products…used in supporting, protecting or carrying a commodity" (IPPC 2021b). This definition excludes paper products (like cardboard boxes) but includes dunnage (IPPC 2021b). Within ISPM 15, crates, boxes, packing cases, dunnage, pallets, cable drums, spools and reels are all considered WPM, but the standard exempts WPM made from thin wood (less than or equal to 6 mm thickness) and processed wood material (e.g., plywood, particle board, etc.). While WPM is often referred to as solid wood packaging material to differentiate it from WPM made of processed wood, we refer here to WPM in keeping with the IPPC definition and ISPM 15 understanding. WPM is typically constructed from sawn wood, i.e., rectangular pieces of different dimensions that have been sawn from round wood, or what is more commonly referred to in North America as 'lumber'.

Fig. 3 In the sawn wood production process, round wood (A) is first mechanically debarked, using machinery such as a rotary-head debarker (B). Debarked wood (C) may show evidence of insect damage (arrow 1) and can still retain some patches of bark (arrow 2). Debarked wood is then milled to produce sawn wood (D) of different dimensions and grades, some of which may be used for solid wood packaging material. Photo credits: CJK MacQuarrie
Debarking is a part of most sawn wood production processes. In debarking, round wood (Fig. 3A) is subjected to a physical process to remove the bark (e.g., using a rotating instrument or scraping with hand tools; Fig. 3B). Most organisms that live in and just under the bark will be removed from the round wood during this process (Jones et al. 2013; MacQuarrie et al. 2020). In practice, debarking often does not remove all the bark from a piece of round wood (Fig. 3C). Trees do not grow perfectly round and excessive debarking would be required to address the natural variations in the shape of the tree stem, which could damage the underlying wood and reduce the yield. Debarking is only intended to remove most of the bark; any remaining bark is removed from sawn wood during subsequent steps of the milling process (see MacQuarrie et al. 2020 for discussion). Thus, wood that has been through this process is referred to as "debarked" and not "bark free." After debarking, the round wood is processed into sawn wood of various dimensions (Fig. 3D).
In a sawmill, wood may be processed for multiple uses, with better quality sawn wood (i.e., wood without visual and structural defects) allocated to the production of high-value goods or construction materials. Lower quality wood (with more visual and structural defects) can include the outer sawn wood or slabs (i.e., wood that may still have some rounded profile) or edges (i.e., the waste created when sawn wood is cut longitudinally to achieve the desired dimension) that are a by-product of creating higher grades of sawn wood. This lower quality wood is often used for other purposes, including WPM. WPM can be constructed from lower quality wood, or wood that is not suitable for other uses due to structural defects. This lower quality wood may be sourced from poor quality or low value tree species, trees impacted by disturbances such as windthrow or fire, or trees that have been killed by pests. Sawn wood can then be used to manufacture items such as pallets, reels, or crates ( Fig. 1A-C). Dunnage (Fig. 1D) is another class of WPM; it is most often single pieces of whole wood in standardized or custom cut shapes and sizes. Dunnage is used primarily for stabilizing cargo during transit. There are also processed wood or paper products (e.g., oriented strand board, particle board, molded wood fiber, cardboard) used in the construction of WPM (Fig. 4). Creating processed wood products (e.g., chipping) kills most of the insects present in the wood (McCullough et al. 2007;Allen et al. 2017). Additional processing steps such as compression, heating, and gluing further reduce the phytosanitary risks of processed wood products.
WPM is treated per ISPM 15 (Fig. 2, Box 3)
Sawn wood intended for use in WPM destined for international trade must be compliant with ISPM 15 (IPPC 2019). The process to produce compliant sawn wood has three primary components: (1) a specific criterion for debarking, (2) an approved application of heat or other treatment (e.g., fumigation) of the wood, and (3) a mark to indicate the WPM has been subjected to an approved phytosanitary treatment. Prior to 2009, the goal of compliance with ISPM 15 was to render the risk of woodborne pests "practically eliminated"; in 2009 the standard was amended to "significantly reduced" (IPPC 2019). The ISPM 15 standard does not specify an acceptable survival or mortality rate for any taxonomic group exposed to treatment, nor does it state a number of viable pests that are allowed to be present by any defined measure (i.e., not per individual piece of wood, per unit of packaging, or per consignment). The quantification and administration of measures and treatments is instead the responsibility of the National Plant Protection Organization (NPPO) of each of the IPPC's contracting parties.
Debarking is intended to prevent pest infestation of bark or underlying phloem (Haack and Petrice 2009) following heat or fumigation treatment. However, wood used to construct WPM can still retain some bark and the WPM will still be compliant with ISPM 15, provided that any retained bark piece is less than 3 cm wide or, if the piece is wider than 3 cm, smaller than 50 cm² in area (IPPC 2019). Wood boring insects or microorganisms may still be living in these small bark pieces or deeper in the wood; if present, these organisms should be addressed by the subsequent heat or fumigation treatments. Allowing a small amount of bark to be retained means that there is a risk that post-treatment pests might re-infest the WPM. However, these allowances in ISPM 15 are based on the low probability of bark and wood boring insects completing development if they infest the WPM after it has been treated (Haack and Petrice 2009).
The approved treatments are intended to kill organisms that remain in or on wood after the debarking and milling process (e.g., Mayfield et al. 2014;Mackes et al. 2016). Wood destined for use as WPM must undergo one of three currently approved treatments: heat treatment at 56 °C for 30 min where the temperature is measured throughout the entire profile of the wood, dielectric heating (microwaving) to 60 °C for 1 min where the temperature is measured at the surface of the material, or fumigation by sulfuryl fluoride or methyl bromide to a minimum concentration-time product and residual concentration over 24 h (IPPC 2019). The heating methods are intended to damage cell contents and structures of pests, thereby rendering them inactive, unable to complete development or reproduce, or dead (NAPPO 2014). The use of methyl bromide is being reduced or phased out by many countries because of its negative impact on the ozone layer (Besri 2010;IPPC 2008) as well as due to human health concerns. It is explicitly stated in the standard's documentation that none of these approved treatments are designed to provide post-treatment protection from contaminating pests (IPPC 2019).
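The debarking allowance and the conventional heat-treatment schedule described above can be expressed as a simple check. The sketch below is purely illustrative and assumes simplified inputs (measured bark-piece dimensions and a logged core-temperature profile at a fixed sampling interval); it is not an implementation of any NPPO certification or auditing procedure.

# Illustrative check of two ISPM 15 criteria described above (not an official tool).
def bark_piece_compliant(width_cm: float, area_cm2: float) -> bool:
    # A retained bark piece is allowed if it is < 3 cm wide or,
    # when wider, has a surface area < 50 cm^2.
    return width_cm < 3.0 or area_cm2 < 50.0

def heat_treatment_compliant(core_temps_c: list[float], interval_min: float) -> bool:
    # Conventional heat treatment requires >= 56 deg C held for at least
    # 30 continuous minutes throughout the wood profile; here we check a
    # single logged core-temperature profile.
    run_min = 0.0
    for temp in core_temps_c:
        run_min = run_min + interval_min if temp >= 56.0 else 0.0
        if run_min >= 30.0:
            return True
    return False

print(bark_piece_compliant(width_cm=2.5, area_cm2=80.0))          # True: narrow piece
print(bark_piece_compliant(width_cm=4.0, area_cm2=60.0))          # False: wide and large
print(heat_treatment_compliant([57.0] * 35, interval_min=1.0))    # True: 35 min at 57 deg C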
The final component of ISPM 15 is the application of an official mark to the WPM (Fig. 5A, B). WPM lacking this mark, or with the mark incompletely or incorrectly applied, is considered non-compliant. The mark (also called a stamp) allows for visual confirmation that the WPM has been treated and gives the specific treatment (e.g., HT for heat treatment), the country of origin and the treatment facility. The mark is applied under the authority of the national plant protection organization (NPPO) of the country where the WPM is manufactured and the mark must conform to ISPM 15 specifications (IPPC 2019; Sela et al. 2017). To ensure that WPM is properly treated, NPPOs certify and audit ISPM 15 treatment facilities. When non-compliant WPM is found outside of its country of origin, the NPPO of the importing country is responsible for notifying the NPPO of the exporting country (if WPM is not marked) or country of WPM origin and certification (if the ISPM 15 mark is present; this is often, but not always, the exporting country). The exporting or certifying NPPO is then responsible for tracing the source of the non-compliant material and taking necessary and/or appropriate actions with the certifying facility. Zahid et al. (2008) found that non-compliance due to improper marks varied widely, based on year, origin of the WPM, and commodity. As countries implemented the standard in the years following ISPM 15 adoption, presence and quality of marks improved.

Fig. 4 Manufactured wood and paper products used in the transport, protection, and containment of commodities not considered to be solid wood packaging material (A: oriented strand board, B: particle board, C: compressed fiber, D: cardboard). The manufacturing process used to create these products reduces wood to a dimension that is functionally nonsurvivable for organisms present at the time of manufacturing. Photo credits: DR Coyle (A, C, D), CJK MacQuarrie (B)
At this point in the supply chain, following fully compliant treatment, WPM has significantly reduced risk of spreading wood infesting pests. The following sections therefore address the risks associated with use in the supply chain of untreated WPM, inadequately or insufficiently treated WPM, fraudulently stamped untreated WPM, or WPM that has been exposed to contaminating organisms.
WPM is stored, then packaged and packed (Fig. 2, Boxes 4 and 5)
Newly treated WPM produced at a WPM manufacturer is often stored for a period before being loaded with goods. During this storage period treated WPM may become infested or contaminated with wood-specific post-treatment pests, such as bostrichids (Haack and Petrice 2009), and surface or crevice contaminating pests (e.g., L. dispar, L. delicatula, H. halys). This can happen at the manufacturing facility, or at the location where the WPM is associated with commodities (e.g., boxes placed on a pallet, wire reeled on a spool; this process is referred to as 'packaging'). During the packaging process, WPM is at risk of becoming contaminated with pests if it comes into contact with other infested or contaminated objects or environments (e.g., commodity, piece of equipment, surface, or other contaminated WPM). Packaged WPM is also at risk of being contaminated if it is stored in an open environment. For example, stone, tile, and heavy machinery parts are often packaged on WPM before being stored together outdoors until an order is received for those goods. During this period, WPM can become contaminated with soil, egg cases (e.g., L. delicatula, see: Barringer et al. 2015), or by organisms that shelter in crevices (e.g., wasps, Rau 1930; terrestrial snails, Chen et al. 2016). The risk of packaged WPM bearing contaminating pests can be mitigated by using practices that decrease the risk of it being colonized during storage, including adhering to good yard management practices (IPPC 2020; IMO ILO UNECE 2014) and by post-storage inspection of the packaged WPM.

Fig. 5 Each mark is required to show the International Plant Protection Convention (IPPC) logo (1); the country code (2); the facility code where treatment was applied (3) and the type of treatment applied (4); versions of that mark (B) as applied to pallets; and a pallet showing pre-milling insect damage (C). Illustration by CJK MacQuarrie; Photo credits: LF Greenwood
Packaged WPM is at risk of being contaminated while being prepared for shipment overseas. WPM may encounter contaminating organisms before it is placed into a cargo transport unit (CTU). This process is referred to as 'packing' in the shipping industry. Shipping containers are the most common CTU, but other conveyances such as railcars are also considered CTUs. We use 'container' to refer to CTUs in general. Transferring a container to a conveyance (e.g., a ship) is called 'loading'. WPM can be contaminated by equipment used to assist with the packing or loading process, or by pests already in the container.
Materials are moved from origin to port, loaded onto a ship, and en route in international waters (Fig. 2, Boxes 6, 7, & 8)

As most internationally traded goods are transported by ship, most containers will spend time stored at a seaport prior to departure (Kaluza et al. 2010). In a typical scenario, containers are packed and sealed at a manufacturing or production facility or at a warehouse (Box 5), then transported to a seaport via rail or truck (Box 6) where they sit at the shipyard prior to being loaded onto a vessel (Box 7) and shipped (Box 8).
Both unloaded WPM and containers are at risk of external contamination during storage at ports prior to packaging, packing, and loading, especially when stored near exposed lights or on vegetated surfaces. Many insects are attracted to lights (Mazhkin-Porshnyakov 1960; Owens and Lewis 2018) and some may land on or crawl to materials stored near lights, increasing the chance that individuals or egg masses are transported. Even without the influence of light, any time an implement of trade sits outdoors or in open storage near a population of potentially contaminating organisms there is an opportunity for pest contamination. WPM in packed and sealed containers is at risk of contamination if organisms enter the container via cracks or air vents (Koch and Galvan 2008;Lee et al. 2014).
The risk of contamination varies with the length of time the containers are present in the exposed environment, the population density and life stage of the potentially contaminating organism, and the ecological conditions in both close proximity (e.g., grass present in a dirt storage yard) and general vicinity (e.g., forested port environment). If WPM or containers are contaminated prior to departing the port of origin, cross-contamination of WPM or containers may occur en route to the cargo's destination. For example, soil contamination on any packaging material or container can harbor spores, insects, microorganisms, or seeds. During a voyage, these contaminating organisms may mature or become motile and contaminate nearby surfaces.
Once loaded onto a ship, containers, WPM, and break-bulk commodities (i.e., large items like steel beams or heavy machinery) are very difficult to inspect. As such, the process of ship loading is an opportunity for inspection and mitigation of contaminating pests. After loading, however, all container surfaces that are adjacent to a neighboring container or the ship superstructure cannot be visually inspected. This means that less than 10% of the surface area of all the containers is visible on the smallest classes of container ships, and less than 5% is visible on the more commonly used larger ships. Further, only a fraction of loaded surfaces or exposed WPM are low, close, or accessible enough in the stacks to be visually inspected without the use of drones, binoculars, or other instruments. To arrive at these values we assumed a stack of 504 forty foot containers arranged on a small Feeder class ship in a 7L × 9W × 8H block. This configuration has 9% of its total surface area visible for inspection. We further estimated that for the more common Panamax class of container ships, with 3001-5000 containers, only 5% of surface area is visible. WPM used in the securing of breakbulk cargo within the hull of the ship often cannot be accessed at all once loaded due to safety and access issues.
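The visibility estimate above can be reproduced with a short calculation. The sketch below assumes standard external dimensions for a forty-foot container (roughly 12.19 x 2.44 x 2.59 m) and treats the stack as a solid 7 x 9 x 8 block whose bottom face rests on the deck; these dimensions and the simplification are our assumptions, used only to show how the quoted percentage arises.

# Rough reproduction of the "~9% visible" estimate for a 7 x 9 x 8 block of
# forty-foot containers (assumed external dimensions, in metres).
L, W, H = 12.19, 2.44, 2.59           # one container: length, width, height
n_long, n_wide, n_high = 7, 9, 8      # block arrangement (7L x 9W x 8H = 504 boxes)

per_container = 2 * (L * W + L * H + W * H)
total_area = n_long * n_wide * n_high * per_container

block_l, block_w, block_h = n_long * L, n_wide * W, n_high * H
# Visible exterior of the block: the top plus four sides (the bottom sits on the deck).
visible = block_l * block_w + 2 * block_l * block_h + 2 * block_w * block_h

print(f"{visible / total_area:.1%} of container surface area is inspectable")  # ~9%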
It is the shipper's responsibility to ensure containers are "clean, free of cargo residues, noxious materials, plants, plant products and visible pests" (IMO ILO UNECE 2014) before being loaded on the ship. The CTU code (IMO ILO UNECE 2014) provides guidance and recommendations for the shipper, but these are not mandatory. In the southern Pacific Region the Sea Container Hygiene System is used whereby countries shipping in this region implement a container cleaning regime, which includes cleaning the interior and exterior of containers, and external treatment with insecticide before containers are packed at the loading port. Under this system, the level of inspections of containers from a destination is adjusted using a risk-based sampling approach that takes into account how frequently they are compliant with the regulations (Australian Government 2019).
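A risk-based sampling scheme of this kind can be illustrated with a toy calculation. The sketch below scales an inspection rate by how often recent consignments from an origin were found compliant; the base rate, maximum rate, and linear scaling are invented for illustration and do not represent the actual Sea Container Hygiene System rules.

# Toy illustration of risk-based inspection-rate adjustment (invented parameters).
def inspection_rate(recent_results: list[bool],
                    base_rate: float = 0.10,
                    max_rate: float = 1.00) -> float:
    if not recent_results:
        return max_rate                          # no history: inspect everything
    compliance = sum(recent_results) / len(recent_results)
    # Poor compliance pushes the rate toward max_rate; good compliance toward base_rate.
    return base_rate + (max_rate - base_rate) * (1.0 - compliance)

print(inspection_rate([True] * 18 + [False] * 2))    # mostly compliant -> low rate
print(inspection_rate([True] * 5 + [False] * 15))    # mostly non-compliant -> high rate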
Ship arrives at a North American port of entry (Fig. 2, Box 9)

Ports of entry are a critical control point for inspection and mitigation of non-indigenous organisms. Arrival at the port of entry presents the first opportunity for non-indigenous organisms to escape and the first domestic opportunity for inspection of shipments by the receiving NPPO. Also, the regulatory status of an organism can change during transport; for instance, a non-regulated organism in the country of origin becomes a regulated pest when the cargo enters the receiving country's waters or crosses a land border.
Before containers and break-bulk cargo are offloaded (Box 10) ships may be inspected by the country in which they are arriving. In North America, ships and cargo are initially under the control of national border protection services: Canadian Border Services Agency (CBSA) in Canada, U.S. Customs and Border Protection (USCBP, housed within U.S. Department of Homeland Security) in the U.S., and Procuraduría Federal de Protección al Ambiente (PROFEPA) in Mexico. These agencies and their associated NPPOs facilitate the flow of trade and are responsible for enforcement activities. Inspection efforts have variable foci, ranging from illegal drugs, human trafficking, contraband items, to biological threats such as those discussed in this paper. Inspection rates and modes relevant to plant-pest protection vary among North American countries and are influenced by various factors (e.g., country, port of origin, port of arrival, time of year, containerized or breakbulk, type of commodity). Decision support systems have also been proposed to aid in deciding to inspect certain ships (e.g., Gray 2016). Ships containing break-bulk cargo and other cargo (such as "roll on, roll off" wheeled cargo, known as 'ro-ro') commonly secured with dunnage are subject to increased inspections in United States ports, as these cargo types have had high rates of non-compliance associated with dunnage in the past (J. Sagle pers. comm.). In Canada and the U.S., these inspections typically happen before the ship is at dock, whereas in Mexico officials may not board ships prior to docking; instead, inspection of the cargo and conveyances is completed in the port yard.
CTUs and break bulk are offloaded and kept in a controlled area (Fig. 2, Boxes 10 and 11)

Organisms have their first opportunity to escape when materials arrive in their destination country. Those that do escape and successfully establish in a port area can create beachhead populations (Lombaert et al. 2010; Bertelsmeier and Keller 2018) which can act as a source of new species introductions and may potentially lead to the additional spread of unwanted species into surrounding areas. For instance, Harmonia axyridis spread to multiple continents from an established beachhead population in North America, not from its native Asia (Lombaert et al. 2010). In the case of A. glabripennis, introductions from its native range, beachhead populations, and human-mediated intra-continental movement have all likely contributed to its spread throughout Europe and the U.S. (Javal et al. 2019).
The offloading of containers, ro-ro, and breakbulk commodities at the destination port (Box 10) represents a significant opportunity for organisms to escape. Offloading is also a time for external contamination to be observed, and for indigenous and non-indigenous organisms present in the importing country's port to become newly associated with commodities. If unrecognized infested or contaminated containers or goods sit outside after initial offloading in terminals or yards (Box 11), contaminating organisms may escape into the local environment. For motile organisms (e.g., mobile life stages of invertebrates) this escape may be prompted by any number of abiotic or biotic factors (e.g., a stoppage of movement, completion of dormant period, a change in light or temperature) while sessile organisms or stages (e.g., egg masses, pupae) or soil that contains organisms may be dislodged by the movement of containers and goods within the port or direct exposure to wind or rain. Small organisms associated with WPM within a container can escape otherwise sealed containers via vents, cracks, or along door frames. Loaded containers generally spend 3-5 days in a port (Steenken et al. 2004) but empty containers or dunnage can reside in port for much longer, giving pests additional time to develop or escape. Some containers may also be opened at facilities within the port, providing additional opportunities for pests to escape.
Some proportion of WPM is inspected at all North American ports, but the rate of inspection varies widely according to country of entry, port of entry, country of origin, and commodity. Inspections are often focused on shipments from higher risk origin areas or commodities, similar to the decision metrics for pre-arrival inspections of ships. Work et al. (2005) estimated the annual inspection rate in the U.S. at approximately 2% of all WPM. USDA APHIS estimates risk-based sampling currently yields an annual average of 300 wood boring and bark beetles found in wood packaging material (USDA 2021). Containers or WPM that are determined to be non-compliant after initial offloading are not allowed out of the controlled port areas in the U.S. or Canada. In the U.S., non-compliant containers or packed WPM may be required to be re-sealed and re-exported at the expense of the carrier by U.S. Customs and Border Protection, or less commonly, non-compliant materials may be destroyed or treated on site. If the dunnage is determined to be non-compliant while associated with a break-bulk commodity, the entire consignment (non-compliant dunnage and its associated commodity) may be rejected and subject to re-export, which may include the vessel. In Canada, the NPPO will order the non-compliant WPM to be removed from the country and may treat material that poses an immediate risk prior to doing so (CFIA 2023). WPM not in compliance with Mexico's standards (SEMARNAT 2018) is not allowed to leave a Mexican port and is subjected to a quarantine protocol (i.e., fumigation) prior to its destruction.
Dunnage removed from ships during the unloading process is often stored within the controlled area of the port. Dunnage represents a significant risk by harboring both infesting and contaminating pests if it is untreated, undertreated, or was handled in a way that allowed post-treatment contamination. As dunnage often has little to no associated chain of custody information, it is more difficult to determine who is responsible for the disposition of non-compliant dunnage. Non-compliant dunnage may be destroyed on site (Box 10), loaded back on a ship and re-exported, treated and allowed to be taken from the port, or illegally deposited within port property.
In North America (and likely elsewhere) illegal deposition of used and untreated dunnage is an increasingly serious issue. Due to these challenges, in 2016 the U.S. revised its regulations to allow for the more rapid destruction of illegally deposited dunnage via incineration at ports of entry (USDA 2017). Since 2008 all shipborne dunnage arriving in Canada has been treated as non-compliant and measures have been taken to treat it as such, regardless of the presence of an ISPM 15 stamp. In 2021 Canada's NPPO recommended allowing dunnage to be reused, as long as it is, or was, rendered ISPM 15 compliant before reuse (CFIA 2021a). In the largest Mexican ports, dunnage that is unloaded is fumigated before its destruction. However, dunnage may remain for considerable periods of time in open-space storage within Mexico's port environs before being destroyed, thereby increasing the risk of organisms maturing to a motile life stage and escaping into the port environs.
Importing non-compliant WPM can have significant logistical and monetary consequences. The U.S. may issue fines or other monetary penalties to shippers of non-compliant WPM and may require the re-export of goods, containers, or conveyances associated with non-compliant WPM. They may also revoke the participation of offending shippers in voluntary programs such as the USCBP Trade Partnership Against Terrorism (C-TPAT) program (USCBP 2022) that speed imports through the inspection process. In Canada, the NPPO can take enforcement actions on violations and issue fines to the entity that is responsible for the non-compliant material. Records of non-compliance are kept by CFIA in Canada, USCBP and USDA APHIS in the U.S., and PROFEPA and SEMARNAT in México. These data are used to develop inspection protocols for commodities, ports, or ships that may present at higher risk of non-compliant material being present.
CTUs and breakbulk leave the port and are moved to distribution centers (Fig. 2, Box 12); WPM is transported with commodities to point of sale or separated at point of distribution (Fig. 2, Boxes 13-16)

Once a container or packaged WPM leaves a port it can be transported by a wide range of conveyances anywhere in the receiving country. During this time there are many opportunities for associated pests to disperse into the local environment. Packed containers and WPM are often stored for variable periods, from days to months, at railyards and distribution centers (Boxes 13, 14, 16, 17). At distribution centers, some containers are opened, unpacked, and the commodities may be unpackaged (i.e., separated from the original WPM). Some goods will remain packaged with their original WPM as they are moved to a retailer (e.g., large appliances, tile, plastic-wrapped palletized bulk goods such as multiple individual sacks of rice). This point in the supply chain presents a high-risk opportunity for pests to leave the unpackaged WPM and disperse into the area surrounding distribution centers (Box 14). Similar to the pest escape context present at ports, the unpacking process introduces potential stimuli for a pest to emerge (e.g., environmental changes such as light, temperature, and humidity) and removes barriers to escape that may have been present in a sealed container; the risk increases with the amount of time spent in a single location.
Distribution centers are located at shipping hubs around the continent, increasing the number of places that could become a first point of introduction (e.g., Krishnankutty et al. 2020) or beachheads for the domestic spread of newly arrived invasive species. Distribution centers and warehouses may be far removed from coastal ports where, historically, most introductions first occurred. However, the storage of large amounts of WPM at distribution centers, or at points of sale for historically high-risk commodities, allows NPPOs and other entities to conduct focused surveillance and target analyses to increase the likelihood of early detection at these types of locations (Rabaglia et al. 2019;Krishnankutty et al. 2020;Morisette et al. 2020).
WPM is separated from goods and stored (Fig. 2, Boxes 16 and 17)

When WPM arrives at a distribution center, the distributors handle WPM and the packed commodities in a variety of ways. They may unpackage the WPM received from the manufacturer and re-package it onto a different unit of WPM (sometimes onto reused WPM; Box 19) or they may leave the commodities packaged and send the WPM to retailers or direct to consumers. Once unpackaged commodities arrive at their final destination they are separated from any remaining WPM and the disposition of any WPM becomes the responsibility of the retailer or consumer (Boxes 17 and 20). The management and storage of WPM can be unprofitable or inconvenient once it reaches homes or businesses in rural or residential areas, with little incentive for best management practices that could reduce pest-related risks.
WPM separated from goods is often stored for some time prior to the WPM entering a reuse pool, being recycled, or destroyed. During this storage time, as before, the risk of stored WPM becoming contaminated is contingent on local pest presence and the storage environment, and the risk of pest escape from stored WPM is dependent on duration of storage, seasonality, storage area conditions, and other environmental factors.

WPM is disposed of or recycled (Fig. 2, Boxes 18 and 20) or enters reuse pool (Fig. 2, Box 19)

Pallets, dunnage, crates, spools, and other types of WPM likely each have different rates of entering the reuse markets in North America. One of the most frequently reused types of WPM is the wood pallet. The lifespan of a typical pallet includes multiple periods of use across 2 to 10 years (Gnoni and Rollo 2010; Deviatkin et al. 2019; Brad Gething, National Wooden Pallet & Container Association, pers. comm.) and is influenced by factors like its construction, what commodities it has been used to transport, and how many times it was handled during a trip. Pallets and other WPM can be remanufactured or repaired by replacing damaged components (Box 18). In the U.S., recycled and remanufactured pallets make up 42% of the pallet pool (Gerber et al. 2020) but we could find no data on the proportion of the recovery market that is occupied by pallets initially manufactured overseas. Similarly, we found no data on the frequency of reused pallets used for the export of North American goods. To maintain ISPM 15 compliance, pallets that have been repaired or remanufactured must adhere to the manual's specified guidelines on marks and retreatment.
The risk associated with primary infesting pests in repaired and remanufactured WPM could increase if ISPM 15 repair guidelines are not followed and untreated wood is used in the repair process. In the U.S., the majority of repair is done with components from reclaimed pallet pieces, so a failure to adhere to repair guidelines would be unusual (Brad Gething, National Wooden Pallet & Container Association, pers. comm.), but it remains possible. Domestic- and international-origin pallets moving into reuse pools could present a risk of transporting invasive pests either domestically or internationally if contaminated or infested while in storage prior to reuse (Box 19). As WPM ages, different types of pests may be attracted to the material (Naves et al. 2019), so the profile of post-treatment infestation risk is variable. In the U.S., at least one jurisdiction regulates the movement of WPM and other high-risk articles to prevent the spread of the non-specific contaminating pest, L. delicatula (Pennsylvania Department of Agriculture 2018).
WPM not destined or suitable for reuse is either destroyed in controlled settings (i.e., solid waste facilities, wood processing facilities, or landfills), used in recycling or downcycling markets, or reclaimed (Box 20). WPM that is destroyed may be chipped or otherwise mechanically broken down and sold as other products (e.g., mulch, soil amendment, animal bedding) or enter commercial fiber markets and be manufactured into other wood products (e.g., paper, chipboard, fuel pellets) (Shiner et al. 2021). The final disposition of WPM in these settings likely presents very low pest risk, due to the final dimensions of the wood products being too small to sustain pest development in most cases. Some microorganisms and very minute arthropods-such as fungi, nematodes, or ambrosia beetles-may persist even on chipped or shredded material.
WPM destined for disposal may represent a risk for transporting pests to the immediate area around a given facility; for example, some U.S. regions may be net importers of used WPM for the disposal industry (Shiner et al. 2021). The eventual fate of the fraction of WPM that is neither reused nor destroyed is unknown and the material disappears from the supply chain-this may represent use as fuel wood, conversion to handicraft materials, or other less common final dispositions. There is a paucity of data regarding the final disposition of WPM globally.
Discussion
Managing the phytosanitary risk associated with every piece of WPM used in the international supply chain is a complex, multi-step process involving multiple entities and countries. We have reviewed the various stages in the supply chain to identify distinct areas of phytosanitary risk and determined that the greatest pest risk reduction occurs in the steps up to and including processing, construction, and full compliance with ISPM 15 treatment. Our review also suggests that the risk posed by WPM after ISPM 15 treatment may be due to heat- or fumigant-tolerant organisms surviving treatment, systematic failures in the application of treatments, and post-treatment contamination by contaminating pests. This last cause, however, is shared among WPM and non-wood conveyance materials (e.g., plastic, metal).
Several biosecurity tactics, including ISPM 15, are used to help mitigate potential phytosanitary risks (Epanchin-Niell et al. 2021). WPM is a significant pathway by which pests are moved in global commerce, and while the implementation of ISPM 15 is documented to have reduced the observed infestation rate of WPM by approximately half, live wood pests are still found in ISPM 15-marked WPM (Haack et al. 2014; Franklin 2021). No current research details what proportion of these findings is due to fraud, undertreatment, insufficient treatment level, or other causes. The overall risk posed by these continuing live interceptions will be unknown until we have a better understanding of their actual frequency and the ecological potential of the intercepted organisms to establish and form reproducing populations.
Current tactics used to mitigate risks from WPM in global supply chains are mostly focused on those parts of the supply chain that occur before the commodity departs its port of origin. Much less is known about how WPM is handled in the receiving countries' port and warehouse environments, and how that relates to pest risk mitigation after WPM is in transport to its final destination. Evaluating each step in the WPM supply chain, as we have done here, can identify areas of high risk or high opportunity, where information is lacking and further research, data collection, transparency, or analysis are required, and therefore where to focus future mitigation and research efforts. We discuss these opportunities in the following paragraphs.
The effectiveness of ISPM 15 hinges on treatment levels, compliance, and implementation
The most significant measure mitigating the risk of WPM is ISPM 15. ISPM 15 was first adopted in 2002 and is now implemented by nearly 100 countries. By 2009 there was a measurable correlation between the implementation of ISPM 15 and a 36-52% decline in the percentage of infested WPM intercepted at U.S. ports (Haack et al. 2014). However, a lack of baseline international interception data and the fact that different countries implemented ISPM 15 at different times have continued to limit the ability to quantify declines and to accurately measure the change in interception rate over the twenty years since implementation (Haack et al. 2014). Audits by the European Union concluded that non-treatment and fraudulent ISPM 15 marks were the biggest risks related to wood packaging material, and that where full compliance with ISPM 15 occurred it would be effective (EC 2013).
As written, and if fully implemented by all 184 contracting parties, the ISPM 15 standard is a powerful tool to mitigate risk; however, the differences in economic, governmental, cultural, and commercial environments among countries create substantial hurdles to achieving the full mitigation of risk. Each contracting party has an obligation and responsibility to administer the requirements of the approved treatments within ISPM 15 at all certified facilities under its authority, and receiving NPPOs may audit the administration of those treatments in the respective source facilities, but the ultimate details of implementing the treatment requirements are up to the individual NPPO. The hurdles presented by variations in implementation worldwide may be significant enough to create inconsistencies in the effectiveness of treatments, which then generates a significant phytosanitary imbalance between how WPM is treated at points of origin and how it is inspected and received at the port of entry.
Determining efficacy of treatments under both laboratory and real-world conditions is challenging. Ormsby (2022) has proposed a measure of efficacy and representative taxa against which proposed treatments could be developed and tested for ISPM 15, which could address some of the data deficiencies we have identified. This approach could be combined with an ISPM 15-specific experimental design protocol which would test the real-world efficacy of treatments. Such an approach would give greater clarity by creating an objective measure of phytosanitary treatments that would allow stronger evaluations of plant health protection efforts. Future experimentation aligned with Ormsby's recommended level of efficacy could provide stakeholders with the data necessary to evaluate concerns that conventional heat treatment parameters outlined in ISPM 15 may be inadequate and therefore the direct causal factor driving some of the findings of non-compliance in apparently treated WPM.
The phytosanitary measures described in ISPM 15 do not, by design, provide permanent protection against all types of pests. Much has yet to be learned regarding the incidence and risks associated with pests that become associated with WPM following ISPM 15 treatment. Responding to these risks, if deemed necessary, would also likely require the development and implementation of new policies. ISPM 15 treatment, as conceived, should decrease the pest risk of WPM to a level similar to that of processed wood products (e.g., oriented strand board). Understanding how and whether treated WPM obtains and maintains this low-risk profile over its entire lifespan, and what the level of concern for pests such as dry wood borers is in different countries, would require additional consideration. Countries can implement management strategies and prescribe handling activities wherever WPM, containers, and conveyances may encounter contaminating pests. Research is also required to develop new methods to efficiently treat or retreat WPM that is suspected or known to have become contaminated within the supply chain. These treatments could potentially be applied within the closed environment of a container (e.g., a fumigant, trap, or bait) before it leaves a controlled area.
The success of ISPM 15 relies on the effectiveness and complete application of approved treatments. Unfortunately, very little data exist on the frequency of accidental inadequate treatment or intentional treatment fraud, making it difficult to determine how consistently phytosanitary treatments are appropriately applied. In some cases, even the consistent application of accepted treatments to WPM may be insufficient, such that some heat-tolerant organisms survive and are transported in treated and marked material (Haack et al. 2014; Wu et al. 2017; Eyre et al. 2018). Saprophytic fungi also play a role in suppressing pathogenic fungi that may survive treatments (Uzunovic et al. 2008), though the real-world implications of this are not fully understood. The mechanisms underlying the effect of the ISPM 15 heat treatment on the physiological processes of pests are also not understood, nor are the implications of sub-lethal effects on pests that survive treatment. Understanding these phenomena could lead to better approaches for assessing and predicting risk from potential pest species and to the development of new or modified WPM treatments. An additional challenge to the development of new treatments is the testing needed to determine whether those measures are sufficiently effective. Measuring effectiveness has sometimes required exposing thousands, or tens of thousands, of insects to the new treatment (e.g., precisely 93,616 insects in the case of Probit 9; Baker 1939), which may not be practical or possible with wood-infesting pests (see Ormsby 2022 for discussion). To address this issue, Ormsby (2022) has proposed that lower numbers of insects can be tested when assessing the effectiveness of treatments against wood- and phloem-feeding insects.
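For context, the 93,616 figure follows from a simple zero-survivor argument: Probit 9 security corresponds to 99.9968% mortality, and the test must be large enough that observing no survivors gives high confidence the treatment is at least that effective. A minimal back-of-envelope check (illustrative only, not a prescribed testing protocol) is sketched below.

```python
import math

# Probit 9 security corresponds to 99.9968% mortality (at most 32 survivors per
# million). With a zero-survivor test, the number of treated insects n must
# satisfy 0.999968**n <= 0.05, so that observing zero survivors among n insects
# implies, at 95% confidence, that the true mortality is at least 99.9968%.
n = math.ceil(math.log(0.05) / math.log(0.999968))
print(n)  # 93616, matching the figure attributed to Baker (1939)
```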
There are also non-biological issues that can impact the effectiveness of ISPM 15. Although it is in violation of the international treaty, infested WPM does enter export chains of custody with fraudulent marks (which falsely indicate the WPM has been treated to ISPM 15 standards; Haack et al. 2014) or lacking in marks altogether (Eyre et al. 2018). This illegal activity may remain undetected as the volume of trade is high while inspection rates are low, and even if inspected, the stamping process is not complemented by additional security or independent confirmation. There is no secondary verification process of a mark's validity or completeness of treatment beyond the presence of a compliance agreement between the treatment facility and the country of origin's NPPO; no chemical or physical indicators are currently known that could be used to provide verification that treatment occurred. As the application of fraudulent marks to WPM is an issue that has trade and legal consequences for trading partners, as well as serious invasive species movement risks, the development of tools or technologies to determine whether marked packaging is non-compliant, whether due to fraud or undertreatment, would be an asset to ISPM 15 implementation.
Issues with fraud and illegal behavior are not unique to WPM. Standardized certification marks are used in other industries (e.g., plumbing fixtures, electrical components, computer parts) where fraud also occurs. Ensuring WPM is ISPM 15 compliant is the responsibility of the NPPO in the country where the WPM originates. Undertreatment (whether accidental or purposeful) and deliberate fraud that go undetected before export are serious issues that can result in fines (e.g., NWPCA 2017; USCBP 2004, 2017). Some countries are very stringent with ISPM 15 requirements (e.g., European Commission 2013), yet in North American ports of entry findings of noncompliance are not uncommon.
Incomplete, insufficient, or improper application of treatment presents financial and legal risk across supply chains; procuring apparently-compliant WPM does not protect private entities from legal, financial, and logistical consequences if that WPM is found to be non-compliant or otherwise infested with live actionable pests. Understanding why these findings occur would better equip the international community to address these issues; effective interventions to reduce non-compliance due to fraudulent markings are different from those necessary to reduce the use of unmarked or undertreated WPM. More studies which assess non-compliance among or across categories of WPM or determine the proportion of findings due to fraud, undertreatment, pest survivorship to treatment, and/or lack of treatment in non-compliant WPM could guide where education, guidance, or policy actions may be needed. Making these determinations with intercepted non-compliant WPM would be difficult and determining true causality would require international cooperation. Research has also not examined the social and economic motivations around compliance and its implications to forest health (Williams et al. in press), or examined how the complex chains of custody common to international supply chains might influence management of WPM.
In some countries, across economic and social spectrums, issues with compliance may arise from a lack of information or infrastructure to properly treat WPM and apply the ISPM mark. Additionally, the resources for verification of treatment facilities, expertise to build facilities, and infrastructure to audit and verify treatments may not be available. In many countries, NPPOs have capacity challenges; for example, Papyrakis and Tascioti (2019) found that communication between treatment facilities and the NPPO is lacking in several African countries, and the ISPM treatment mark and treatment facility verification is not available. In response to this study the IPPC created an expert working group to compile global guidance repositories and create an ISPM 15 implementation manual (IPPC 2017a).
Integrating systems approaches
Our objective was to present a detailed outline of steps involved in the international WPM supply chain as it relates to preventing the entry and spread of forest pests and pathogens into and within North America. A potential future step is to conduct a Hazard Analysis Critical Control Point (HACCP) assessment of this supply chain. HACCP principles are based on using risk assessment to determine how to reduce risk along a production line. Such an assessment could identify how a systems approach might be used to mitigate risks of WPM in supply chains.
Systems approaches consider the combined effects of independent and combined dependent measures on reducing overall pest risk rather than the effect of a single intervention. For instance, harvesting wood for WPM outside the active season for a potential pest and milling that wood in such a way as to remove the tissue where the pest resides are two separate methods that could reduce the specific pest risk for a piece of WPM similar to what a single treatment might accomplish. Without an assessment, the effects of interventions on the pest risk associated with the WPM supply chain are not possible to quantify. Currently, systems approaches are used to mitigate risks of international pest movement for many global commodities, particularly fruits and vegetables (Quinlan et al. 2020;IPPC 2021c) and ash sawn wood (EU 2016). More recently, a standard has been written for the forest product industry and NPPOs with guidance on how to design and implement systems approaches for wood commodities (NAPPO 2018).
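To make this intuition concrete, the arithmetic below uses illustrative numbers only (not values from any risk assessment) to show how two independent measures can stack to match the effect of a single strong treatment.

```python
# Illustrative numbers only: two independent measures, each removing 90% of the
# pest risk on a piece of WPM, leave a residual risk comparable to a single
# 99%-effective treatment, because independent reductions multiply.
measure_a = 0.90   # e.g., harvesting outside the pest's active season
measure_b = 0.90   # e.g., milling away the tissue where the pest resides
single_treatment = 0.99

residual_combined = (1 - measure_a) * (1 - measure_b)
residual_single = 1 - single_treatment
print(round(residual_combined, 4), round(residual_single, 4))  # 0.01 vs 0.01 of baseline risk
```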
One area where systems approaches may be most effective is in reducing the risks of contaminating organisms on WPM. Most guidelines for wood pests in commodities address infesting pests closely associated with their host tree species. An added variable present in WPM, packed commodities, containers, and conveyances is pest contamination that is not specific to host species; nor is the potential for contamination limited to the commodities listed in the consignment. We previously outlined numerous places in the WPM supply chain where contaminating organisms and pests can contaminate WPM. Many terrestrial pests can spread via contamination (Meurisse et al. 2019), and external pest contamination on shipping containers can vary from ~0.1% (NZMAF 2006) up to 5% (Gadgil et al. 2000). Recognition of the role containers play in this pathway has resulted in cleanliness programs, e.g., the North American Sea Container Initiative (NAPPO 2020), the IPPC Sea Container Task Force (SCTF; FAO 2008; IPPC 2018, 2020), and the Australian Sea Container Hygiene System (Australian Government 2019). It is likely that WPM used in environments similar to those of containers, such as crates and dunnage (Fig. 1C, D), experiences a similar range of surface contamination, and thus presents similar opportunities for risk mitigation. For instance, one approach to reduce external contamination is to use filters to render lights in storage areas less attractive to insects (Pawson and Bader 2014; Justice and Justice 2016). Implementing a systems approach to reduce contamination of WPM would require additional research to develop a suite of complementary and effective pest mitigation tactics.
Enforcement challenges
Effective enforcement of rules governing the use of ISPM 15 compliant WPM can promote the use of this lower-risk material in supply chains. Improving inspection program data collection and conducting targeted studies would help determine the incidence of ISPM 15 non-compliant or untreated WPM, and the incidence of ISPM 15 compliant WPM bearing contaminating organisms (Nodar 2021). Considering that this risk is shared regionally, the ideal scenario would be for the U.S., Canada, and Mexico to have harmonized phytosanitary guidelines and enforcement protocols wherever feasible. One approach to begin answering questions about the frequency and types of noncompliance may be to adopt harmonized risk-based sampling regimes. This method identifies and ranks non-compliant imports, then uses those data to identify high-risk commodities and to predict how many inspections are needed to achieve a desired probability of detection (NAPPO 2021b). Another approach may be to use artificial intelligence methods to incrementally or continuously improve the effectiveness of survey and inspection regimes. Doing so would require a large amount of data on the contents of containers, including commodities and their packaging as well as origin and destination, in order to inform a model developed using machine learning approaches. Such a model could be rapidly updated with new interception data and would permit real-time targeting, which would be advantageous when shipments of commodities have one origin but several destinations in different countries.
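The sample-size logic behind risk-based sampling can be illustrated with a simple binomial calculation; the sketch below is a generic illustration rather than the NAPPO (2021b) method, and the prevalence and confidence values are hypothetical.

```python
import math

def inspections_needed(detect_prob, prevalence):
    """Random inspections needed so that, if a fraction `prevalence` of
    consignments is non-compliant, at least one non-compliant consignment is
    found with probability `detect_prob` (assumes perfect inspections)."""
    return math.ceil(math.log(1.0 - detect_prob) / math.log(1.0 - prevalence))

# Hypothetical example: 95% chance of catching at least one non-compliant
# consignment when 1% of consignments in the pathway are non-compliant.
print(inspections_needed(0.95, 0.01))  # ~299 inspections
```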
In comparison to multi-piece constructed types of WPM such as pallets and spools, we know much less about the risk profile and enforcement of dunnage in North America. Specifically, the propensity of dunnage to harbor organisms that have survived treatment (as it is often much larger in two dimensions than any component piece of a pallet or crate) and the proportion of dunnage that is destroyed after inspection are two significant knowledge gaps. There are no public statistics for the amount of dunnage that arrives at or exists in North American ports, the volume that is destroyed, the length of time between seizure or offloading and destruction, the incidence of findings of non-compliance, or the final destination of offloaded dunnage. This includes dunnage that was loaded in non-North American ports and offloaded by ships before leaving a port in North America, sometimes illegally or without authorization. Canada's NPPO recently reviewed and updated its shipborne dunnage program and created a new risk management document to provide more options for segregating compliant and non-compliant dunnage and to develop disincentives for non-compliant dunnage (CFIA 2021a). In response to increased frequency of enforcement and findings of non-compliance in apparently ISPM 15 compliant dunnage in U.S. ports of entry, some importers have begun exploring options for additional private inspection at the exporting port, beyond solely requiring the use of ISPM 15 compliant materials (Lovett and Davila 2021).
Risk management of dunnage represents an immense challenge in North American ports. It is therefore necessary to develop phytosanitary guidelines accepted and enforced by all relevant governmental and private authorities that administer and operate in ports. The existing differences among the risk management tactics of the three largest North American nations present risks that could be reduced or resolved by harmonizing approaches to the management of dunnage arriving at ports. To prevent the entry of infested or contaminated dunnage into supply chains that lead to North American ports, additional or more stringent phytosanitary requirements or inspections of dunnage could be carried out at the exporting ports to prevent the initial loading of noncompliant pieces. In addition, limited inspections could be conducted on ships while at sea. If dunnage were determined en route to be non-compliant, its discharge for treatment could be pre-authorized when appropriate, with mitigating measures such as fumigation, heat treatment, or destruction (e.g., incineration, chipping) in authorized facilities within the port areas. The threats to North American ports posed by non-compliant dunnage need to be better managed as part of a holistic approach to risk reduction from all dunnage. Actions that allow for the post-arrival treatment of dunnage could, however, create unintentional incentives for shippers to use non-compliant materials, with the net effect of increasing pest and pathogen presence in dunnage supply chains. One strategy would be to develop third-party approaches to inspecting dunnage before it leaves an exporting port (Lovett and Davila 2021).
A significant part of the enforcement challenge with dunnage is caused by its lack of chain of custody, especially to the commodities it is physically associated with during its primary period of use. Because dunnage is not a multipiece manufactured WPM type, it structurally serves its purpose equally well if cut or salvaged from other wooden materials found at the shipping or loading site. Dunnage is required to be ISPM 15 compliant and should be stored and handled in the same way as other WPM. However, in practice, these use case scenarios allow dunnage or blocking pieces to be added immediately prior to shipping by entities other than the owners or brokers for the commodities being shipped, thereby decoupling the owner of the commodity from the ability to use preferred or proven suppliers of treated dunnage. Dunnage may also be added or loaded by entities other than those responsible for other commodity associated WPM in or adjacent to the same container. There is often no clearly identified responsible party for the presence of a given piece of dunnage (J. Sagle pers. comm.). Without a line of clear ownership for non-compliant dunnage when it is intercepted at the port of entry, enforcement actions and penalties leveled may not impact the most relevant parties.
One incremental improvement to the enforcement challenge around WPM is to use existing programs that incentivise shippers to consistently use fully compliant material, giving the shippers access to streamlined movement of goods. In 2019, the U.S. C-TPAT program added compliance with ISPM 15 for all participating trade partners (USCBP 2020). The C-TPAT program is a voluntary program that provides defined benefits to trade partners who engage in trade security best practices, including adherence and compliance to all relevant international regulations. Canada has two similar programs to C-TPAT in the U.S.: Customs Self Assessment (CBSA 2022a) and Partners in Protection (CBSA 2022b) but they do not have apparent explicit incentives to engage in phytosanitary best practices. Mexico does not have a similar program. The U.S. and Canadian programs may be effective at reducing the amount of unmarked untreated dunnage from entering the supply chain, which could reduce overall pest presence in the supply chain. However, the presence of fraudulently stamped, insufficiently treated, or undertreated dunnage would not be decreased through these mechanisms, as those materials may have apparently valid marks-and thus no visual cue they are in violation of ISPM 15. It is difficult for the users of dunnage to recognize they are purchasing or loading non-compliant WPM if it appears properly marked. Transparency to buyers regarding what facilities have a recent documented history of selling marked dunnage subsequently found to be non-compliant would enable private parties to make informed procurement decisions, which in turn would enable a market-based feedback loop reducing the amount of forest pests entering the supply chain in marked dunnage.
Detecting and removing contaminating organisms on conveyances
Ships, trains, trucks, and other conveyances represent a significant risk of introducing organisms to new locations (e.g., Short et al. 2020). While we focused on WPM in supply chains, we acknowledge these materials are one part of a multifaceted transport system in which contamination can occur. WPM is placed into containers, loaded on ships, and transported by airplanes, trucks, and trains. Along the way these conveyances can become contaminated and, in turn, contaminate WPM that was free of organisms when it left its exporting country. Mitigating external contamination on conveyances is challenging, especially during the part of the supply chain where sea containers are transported and stored before being loaded onto a vessel (Fig. 2, Boxes 4 and 5). Mitigating these risks requires the cooperation of multiple trade partners to maintain lower-risk yards, equipment, and facilities, as well as visual inspections by trained port personnel. Unfortunately, these facilities and personnel may be subject to constraints on time, staffing, space, and safety protocols in the port environment that can impede best practices and pre-departure inspections.
While there are hundreds of cargo ports in North America, 15 major ports handle 97% of incoming cargo trade on the continent (Mwaniki 2018). Ports may make decisions based on balancing risk and cost effectiveness to determine if inspection and mitigation activities are efficacious and economically sound; these decisions may differentially affect high and low throughput ports due to conveyance volume. Some guidance and recommendations do exist (e.g., IMO ILO UNECE 2014), but they are not mandatory. Research into risk versus cost effectiveness is needed, as models and on the ground testing might help identify areas of improvement. As well, the development of new tools to allow better inspection of more containers and conveyances (e.g., drones, AI-assisted inspection systems) could allow ports of entry to conduct more post-arrival inspections.
Research to document the real-world incidence of contamination in different storage scenarios would be beneficial to determine the propensity of contaminating pests to become associated with WPM, other commodities, and their conveyances. What evidence exists is largely limited to a species complex of Lymantria that are contaminating pests of particular concern (Stewart et al. 2016). Adult Lymantria moths are attracted to certain wavelengths of light produced by the bulbs commonly used in the lights at some ports. Studies on the specific wavelengths Lymantria moths are attracted to (Wallner et al. 1995) led to international guidance on reducing the risk of transporting invasive moth species on shipping containers (NAPPO 2017, 2021a). Other programs require vessels that have been present near infested ports during the flight season of some species (e.g., L. dispar asiatica) to be certified free of the insect before departure. Ships that pass inspection are issued certificates prior to departure (Mastro et al. 2021). These programs have proven successful, as 98% of ships arriving at Canadian ports were certified Lymantria-free (Mastro et al. 2021). Similar efforts to evaluate the impact of pest biology and ecology and their interactions with climate, environment, type of conveyance, or other conditions could lead to the development of similar tactics for other organisms of concern.
Acknowledging that appropriate sanitary measures are not always successfully deployed, the Lymantria complex guidance for ships contaminated with egg masses provides clear recommendations for how noncompliant ships should be addressed (NAPPO 2017) via RSPM 33. For example, in Canada, a ship directly contaminated with egg masses may be required to leave the port for cleaning in international waters, redirected to another destination for decontamination, and subject to penalties (CFIA 2021b). Additionally, they may be refused entry for up to 2 years during the risk period for Canada (NAPPO 2021a). Non-compliant wood packaging material on board a ship may be refused entry (USCBP 2021) and in other cases the WPM may be removed and treated (CFIA 2021a).
The contamination and reuse of WPM
The risk of contamination after ISPM 15 compliant WPM treatment can be mitigated by how the WPM is stored. If stored indoors, it will be less likely to be contaminated with pests that contaminate surfaces in the vicinity of host trees (e.g., L. delicatula, L. dispar). WPM stored outdoors in areas with tall grass is at elevated risk for contamination by terrestrial snails (Cowie and Robinson 2003). WPM stored in the vicinity of bright lights is at elevated risk for contamination by light-attracted pests (Mastro et al. 2021). We know of no research that has examined the likelihood of WPM being infested and/or contaminated during storage or under different storage conditions. Though guides exist for best practices for preventing spread of some organisms on substrates, including WPM (e.g., PDA 2018 for L. delicatula), guides are not yet available for all contaminating pests.
Pallets are a commonly reused type of WPM. Damaged pallets can be repaired and reused; the risk of pest movement associated with untreated or contaminated repair components could be problematic if the guidelines in ISPM 15 are not followed. The risk that these repaired pallets, which may contain untreated or contaminated components, act as vectors in the domestic movement of non-indigenous organisms has not been investigated. Domestically produced pallets, and pallets moving between countries that are not subject to ISPM 15 requirements (e.g., between Canada and the United States), could pose a risk for movement of pests within a country. For example, in North America, A. glabripennis and A. planipennis have both undergone substantial movements mediated by cargo transport and other human activities (Shatz et al. 2013; Short et al. 2020). Understanding how and where this type of pest movement occurs is essential for adequate intra-continental management. To address this problem, some countries use domestic movement regulations to minimize the risk associated with untreated WPM (CFIA 2021c). Pathway analyses of the movement of invasive species via WPM within North America are difficult because not all domestically moved pallets are treated according to ISPM 15 requirements, and, if found, the origin of contaminating pests is difficult to trace back. Pallet leasing and pooling may provide information on the history of WPM before it becomes associated with a commodity in domestic distribution.
Plastic pallets and processed wood pallets (pressed, ply, oriented strand, Fig. 5) have been proposed as alternatives to solid wood pallets due to their different risk profile for wood boring and wood infesting organisms. Both plastic pallets and processed wood pallets have different structural properties and reuse profiles than wooden pallets. Plastic pallets require higher energy costs to manufacture and transport than wood (Anil et al. 2020) and require redistribution systems within supply chains, which introduces different logistical and energy costs (Tornese et al. 2018). The use of these WPM alternatives presents a complex set of far-reaching implications, costs, and benefits not explored in this paper, and all remain subject to the same issues relating to the transport of contaminating organisms.
Conclusion
Wood packaging material has been a significant pathway for the introduction of non-indigenous forest pests to North America. ISPM 15 was designed to reduce pest risk from major woodborne pests on this pathway to acceptable levels; however, these pests continue to be intercepted in association with international supply chains and trade activities. These interceptions may occur because of fraud or inadequate application of treatment, failure to treat, pest survivorship of treatment, or other factors that have not been explored. Concrete data on the relationship between these factors and the continued presence of pests in WPM is lacking, and should therefore be an area of renewed research effort. WPM is also a pathway for contaminating pests. This paper follows the supply chain of WPM and identifies areas for improvement and data collection opportunities, and highlights areas in need of additional research which can help improve and inform pest risk reduction strategies for industry, shippers, and NPPOs. Gaps in knowledge highlighted here fall under three major topics: more accurate quantification of different sources of risks, improved treatment application and implementation, and expanded education and training opportunities.
Data that are more accurate or complete would improve risk management models, contribute to more informed management decisions, and benefit both public and private partners. For instance, greater transparency regarding the origin of improperly treated WPM pieces could help improve private procurement decisions and source facility education. Analysis of the many NPPO led strategies and tactics designed to decrease pest risk relies on the collection of pre-and post-intervention data. Unfortunately, many missing data elements have combined to hamper any meaningful analysis of these interventions' effectiveness. These include a lack of baseline data, differences in when various policies were implemented around the world, shifts in those policies, lack of incidence and volume data, and changes in enforcement. Our knowledge of the rates of inspection at ports and at the final destination for cargo are still based on estimates.
Accurate quantification of pest incidence in different types of WPM at each step in the supply chain is still needed, as is the rate or amount of WPM that is reused and/or recycled, and thus exiting the supply chain. More complete datasets could contribute to the analyses suggested by this paper, including systems approaches to reduce pest risk along the WPM supply chain, risk-based sampling approaches to improve biosecurity, and mitigating pest risk associated with different types of WPM (e.g., dunnage). Such analyses would help improve pest risk assessments, guide inspection efforts, and increase the efficacy and efficiency of inspection at specific points in the supply chain.
Despite ISPM 15's universal treatment guidelines, there is still variation in how, and whether, treatments are being applied. We lack data on the proportion of new WPM entering global supply chains each year that is fully compliant with ISPM 15 treatment requirements. Importantly, we also lack data on the causes of the non-compliance found in the remainder of new WPM; fraud and incomplete treatment contribute in unknown proportions to the flow of untreated and undertreated WPM into supply chains. To address and reduce non-compliance, various incentives (e.g., streamlined trade programs) and disincentives (e.g., fines) are in place around the world, but we do not have data showing how they may or may not be contributing to improvements in ISPM 15 compliance rates. Without measures of effectiveness, we cannot focus industry and governmental efforts on the programs that will most efficiently reduce pest presence in the supply chain. We also lack data on post-treatment contamination rates of WPM; for instance, there is no available research on how often fully ISPM 15 compliant WPM is subsequently contaminated while in use or storage. These data could help develop mitigation strategies to reduce the risk of pests in or on WPM throughout the global supply chain.
Private entities, NPPOs, and tree protection advocates would all benefit from improved education and training of the manufacturers, users, and handlers of WPM. The development of best practices for the handling and care of WPM in manufacturing, storage, use, and recycling facilities could decrease the rate of WPM contamination. Our review highlights several opportunities to increase the knowledge and technical capacity of inspection systems worldwide in the service of global plant health.
Collaborative efforts among industry stakeholders, scientific institutions, government agencies, nonprofit entities, NPPOs and academia are required to increase awareness and address these knowledge gaps, and preventative actions along the supply chain are key to maintaining safe trade. Data on North American pest interceptions, quantifying the number and guilds of organisms moving with trade, as well as the commodities on which they move, would elucidate many of the outstanding questions posed here, including where pest risk is highest and where opportunities to implement interventions to reduce pest risks would be most effective. Combining robust interception and treatment data with knowledge of biological characteristics of pests and a practical knowledge of trade pathways will enable us to better determine how plant pests move in and on commodity-specific pathways and take informed actions to avert their continued entry and potential spread. Interception data, improved traceability, and new science-based tools to evaluate non-compliant WPM are needed to measure and distinguish between fraudulent and accidental under-treatment. This data-driven and science-based approach, combined with an improved understanding of the social and economic factors that will increase proper treatment application and implementation of ISPM 15, will best protect North American trees from infesting and contaminating pests while promoting safe trade using WPM.
An Assessment of Indoor Acoustic Condition in Students Hostels within Obafemi Awolowo University, Nigeria
It has been hypothesized that objective assessment of building acoustic conditions alone may not always be representative of the users' perception in occupied indoor spaces. This study objectively and subjectively examined the indoor acoustic condition in rooms within students' hostels in Obafemi Awolowo University, Nigeria. The objective assessment considered the physical measurement of sound pressure level in the rooms in relation to the rooms' physical characteristics, such as the window to external wall area and window to floor area ratios. The subjective assessment considered the occupants' perception of the acoustic condition in the rooms in relation to their personal characteristics, such as age, gender, body mass index, metabolic rate, and body skin area. The sound pressure level was measured in each of the 44 randomly selected rooms at 15-minute intervals between 07:00 and 19:00 daily over a period of eight weeks. The measurement was done with High Accuracy Digital Sound Noise Level Data Loggers placed at the work plane at the centre of the rooms. The geometry of the rooms was documented through physical measurements. All the occupants of the selected rooms as well as the two adjoining rooms, amounting to 696 respondents, were purposively selected to fill a questionnaire regarding activities carried out in the rooms, the frequency of fenestration opening, the personal characteristics of the occupants, and the rooms' occupancy ratio. This study established a strong correlation between the objective and subjective assessments of the acoustic condition in the spaces. Moreover, out of all the occupants' personal characteristics considered, it was age that had a relationship with the occupants' perception of the acoustic condition that was closest to the significance level. The relationship between their perception and measured sound pressure level was slightly more pronounced among the male gender than the female, with correlation coefficients of 0.115 and 0.096 respectively. This study concluded that none of
Introduction
It has been established that the indoor acoustic condition is one of the main factors that determine the quality of the indoor environment in general (Bluyssen, 2010) [1]. It is therefore strongly related to the performance, comfort, and health of occupants of indoor spaces within the built environment. An indoor space with an acceptable acoustic environment is one in which excess noise pollution from both indoor and outdoor sources is controlled. Hence, acoustic comfort has been defined as a state of contentment with acoustic conditions. Furthermore, the acoustic condition has been defined by ISO 12913-1 (2014) [2] as the acoustic environment as perceived or experienced, and/or understood by a person or people, in context. This aligns with a robust definition by Rasmussen and Rindel (2005) [3], whose study remarked that acoustic comfort is a concept characterized by the absence of unwanted sound, desired sounds at the right level and quality, and opportunities for acoustic activities without annoying other people.
In engineering, however, the acoustic condition is mostly associated only with sound pressure levels low enough not to cause discomfort or annoyance (Nikolaos-Georgios Vardaxis et al., 2018) [4]. Hence, many scholars have assessed it purely through objective measurements in laboratories and not in the field, where occupants of the built environment experience real-life situations (Osasona et al., 2011 [5]; Kim et al., 2013 [6]; Spah et al., 2013 [7]; Kylliainen et al., 2016 [8]).
However, studies have shown that using only objective assessment of building acoustic conditions may have limited potential in guiding building design, because it may not always be representative of the perception of users of occupied indoor spaces (Ljunggren et al., 2014) [9]. This suggests that a synergy between objective and subjective assessment may produce more credible results with which to improve available standards and to accurately predict the responses of occupants of indoor spaces.
The quality of the acoustic environment is linked to numerous physical parameters, which include both the physical properties of sound itself and the physical properties of the indoor space (Osasona et al., 2011 [5]; Vermeir and Van der Bergh, 2003 [10]; Iordache et al., 2013 [11]). Sound is characterized by the sound pressure level over short-term and long-term periods and by sound frequency. The acoustic environment is influenced by such physical room properties as sound insulation, absorption, and reverberation time (Frontczak and Wargocki, 2011) [12]. Generally, acoustic comfort assessments are based on objective acoustic models. Fundamental to these models is the reverberation time, which is the time required for the sound pressure level in a room to decrease by 60 dB after the source emission has stopped (Harris and Shade, 1994 [13]; Jian, 2006 [14]).
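A quick way to estimate the reverberation time of a room of this kind is the classical Sabine relation, RT60 = 0.161·V/A, where V is the room volume and A the total absorption. The sketch below uses purely illustrative dimensions and absorption coefficients, not values measured in this study.

```python
def rt60_sabine(volume_m3, surfaces):
    """Classical Sabine estimate: RT60 = 0.161 * V / A, where A is the total
    absorption, summed as surface area (m^2) times absorption coefficient."""
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Illustrative 4 m x 3 m x 3 m bare room with plastered walls/ceiling and a
# hard floor (assumed coefficients); furnishings and occupants would add
# absorption and shorten the estimate considerably.
walls_and_ceiling = 2 * (4 * 3) + 2 * (3 * 3) + 4 * 3   # 54 m^2
floor = 4 * 3                                            # 12 m^2
print(rt60_sabine(4 * 3 * 3, [(walls_and_ceiling, 0.03), (floor, 0.02)]))  # ~3.1 s
```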
There are many other models used in various acoustic comfort studies, each with its own strengths and weaknesses. First is the Geometric model, which assumes that rays leave the acoustic source evenly and mirror off the surfaces of the environment. This leads to reflections, and every surface the rays come into contact with attenuates them. The acoustic ray theory is based on the idea that sound is propagated in the form of a ray, with properties similar to those found in geometrical optics (Gerges, 2000) [15]. Such an assumption may only be reliable when the wavelength is much smaller than the dimensions of the room where it occurs. Hence, geometric model evaluations may not be satisfactory at low frequencies (Vieira and de Sousa Costa, 2012) [16].
Second is the Image-Source model, which treats each sound reflection as a virtual source lying outside the environment and consisting of the image of the source mirrored across the wall. According to Vieira and de Sousa Costa (2012) [16], it is the model most commonly used in rectangular environments such as schools, offices, and homes. However, it does not take into account the diffusion effects of reflections or the mirroring caused by irregular surfaces. The third is the Ray-Tracing model, which takes into account diffuse reflections and requires a computational time only proportional to the length of the impulse response, but does not provide results with good temporal resolution (Vieira and de Sousa Costa, 2012) [16].
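To illustrate the Image-Source idea for the rectangular rooms it suits best, the sketch below computes the six first-order image sources of a "shoebox" room and the arrival time and relative amplitude of each reflection at a receiver. The room dimensions, positions, and reflection coefficient are hypothetical, and a full model would also include higher-order images.

```python
import math

def first_order_images(src, room):
    """Six first-order image sources for a rectangular ('shoebox') room.
    src: (x, y, z) source position; room: (Lx, Ly, Lz) dimensions in metres."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]  # mirror the source across the wall plane
            images.append(tuple(img))
    return images

def arrivals(src, rcv, room, c=343.0, reflection=0.9):
    """Arrival time (s) and relative amplitude of the direct path plus the six
    first-order reflections (amplitude taken as reflection coefficient / distance)."""
    paths = [(src, 1.0)] + [(img, reflection) for img in first_order_images(src, room)]
    return sorted((math.dist(p, rcv) / c, g / math.dist(p, rcv)) for p, g in paths)

# Hypothetical 4 m x 3 m x 3 m room, source and receiver along the room axis.
for t, a in arrivals((1.0, 1.5, 1.2), (3.0, 1.5, 1.2), (4.0, 3.0, 3.0)):
    print(f"arrival {t * 1e3:6.2f} ms, relative amplitude {a:.3f}")
```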
Other acoustic quality models include Speech Privacy index which is mainly applied for open plan offices and relates to the degree of speech disturbance between two individuals who are not in conversation with each other; Speech Intelligibility index which evaluates the degree of understanding or non-understanding of speech in rooms; and Articulation Index which is a signal-to-noise ratio assessment and reflects the degree to which intruding speech contents, from adjacent work stations, exceeds the ambient sound pressure level at the listener's ear in an indoor space (Osasona et al., 2011 [5]; Andersson and Chigot, 2004 [17]).
In recent times, however, the instrumentation for the measurement and evaluation of acoustic quality has been aided by developments in the field of sound recording as well as the development of laptops (Andersson and Chigot, 2004) [17]. Moreover, Horrall, Pirn, and Markham (2003) [18] concluded that a portable computer with an integrated soundboard, a suitably amplified loudspeaker, and a test microphone are all that are needed to perform in-situ measurements of the Articulation Index or other accepted indices. In line with this, Andersson and Chigot (2004) [17] remarked that such instrumentation allows technicians to survey a large number of working places economically. There are cost-efficient tools meeting the requirements for testing in most common environments where oral privacy is likely to be required. In view of the foregoing, this study employed sound level data logging instruments in conjunction with relevant computer software to objectively assess the acoustic quality in relation to the overall indoor environmental quality in indoor spaces within the study area.
Much of the research regarding the assessment of the acoustic condition within the built environment has been done either at the urban scale or in non-domestic buildings such as worship centres, learning and teaching environments, and care facility centres (Astolfi and Pellerey, 2008 [19]; Ana et al., 2009 [20]; Osasona et al., 2011 [5]; Yilmazer and Acun, 2018 [21]; Aletta et al., 2018 [22]). Some of the studies carried out in domestic buildings, including residences, were done within contexts far different from the ones existing in residential neighbourhoods in Africa as a whole and in Nigeria in particular (Nikolaos-Georgios Vardaxis et al., 2018 [4]). This is noteworthy because it has been established that occupants' response to aspects of the indoor environment is linked to their personal and socio-cultural characteristics, and that this relationship has not been adequately examined (Frontczak and Wargocki, 2011 [12]). This is further corroborated by the definition of the acoustic condition according to ISO 12913-1 (2014) [2], which emphasized the "context" of its assessment or perception.
The main aim of this study was, therefore, to assess the acoustic condition in the rooms within occupied students' hostels in Obafemi Awolowo University, Ile-Ife, Nigeria, both objectively and subjectively. The peculiarity of this context is evident in its location and the primary tasks for which the rooms were designed. First, the spaces were sleeping cum reading rooms within the campus of a university community. They do not require an acoustic condition exactly the same as that expected of a classroom or a library, but still need an acoustic condition acceptable for reading while other domestic activities are carried out within the same space. Second, the spaces were largely occupied by occupants from a socio-cultural background whose perception has not received adequate examination in acoustic comfort studies (Nikolaos-Georgios Vardaxis et al., 2018 [4]). Hence, the specific objectives of this study were, first, to examine the measured sound pressure level in the hostel rooms in relation to the physical characteristics of the rooms; second, to examine the occupants' perception of the acoustic condition in the rooms in relation to their personal characteristics; and finally, to analyse the relationships between the two.
The Study Area
The studied students' hostels are within the campus of Obafemi Awolowo University, which is located in Ile-Ife, a small city in South-western Nigeria situated between latitudes 7°28'N and 7°34'N and longitudes 4°27'E and 4°35'E, with an elevation of about 275 m above sea level. There are nine main hostel buildings within the neighbourhood, with a combined capacity of 10,344 students. These are the Murtala Mohamed Post-graduate hall, Adekunle Fajuyi hall, Moremi hall, Ladoke Akintola hall, Alumni hall, ETF hall, Angola hall, Awolowo hall, and Mozambique hall.
Each hostel building has study cum sleeping rooms as the main spaces, with other ancillary spaces like kitchenettes, bathrooms and laundry at one end of each block of rooms. Observation revealed that the walls are of sandcrete blocks rendered on both sides with cement and sand plaster, and painted with a matte finish. The windows are made of glass louvers to achieve natural ventilation. With the exception of Angola and Mozambique halls, which have ceiling fans installed in all the rooms, the rooms within most of the hostel buildings were designed with no mechanical ventilation system. Their doors are timber flush doors. The roofs are made of corrugated asbestos with asbestos ceilings.
Among the different design and layout features characterizing the hostel buildings which might influence the quality of the acoustic environment in the spaces are terraces and balconies, as well as vegetation and green spaces that serve as a buffer from street noise (Zhao et al., 2009 [23]; Dzhambov and Dimitrova, 2014 [24]).
Material and Methods
The indoor sound pressure level in the selected rooms was measured with DT-173 High Accuracy Digital Sound Noise Level Data Loggers (Plate 1). The selected rooms for measurement were 44 in all. These were randomly selected such that at least four rooms represented each of the nine main room layout types identified within the students' hostels. All the selected rooms have the same wall, window and ceiling material finishes. The geometry of the rooms was documented through physical measurement using a measuring tape. This was used to generate data on the window area, the wall area, the floor area, the window to floor area ratio, and the window to wall area ratio. Data regarding the floor area per occupant in each room were also collected, for this determines the amount of sound-absorbing furniture in each room. All the occupants in the selected rooms, as well as the two adjoining rooms, were purposively selected to fill a questionnaire. This amounted to 696 respondents. The questionnaire elicited information regarding the activities carried out in the rooms and the frequency at which occupants opened the fenestrations. The same questionnaire was used to capture other data about the occupants' gender, age and complexion, as well
as the occupants' perception of the acoustic condition in the rooms. Other personal characteristics like weight and height were measured using a generic height and weight scale. These were used to calculate the Body Mass Index (BMI), the Body Metabolic Rate (BMR), and the Body Skin Area (BSA). The BMI was calculated using Equation (1) (United States Department of Health and Human Services); the BMR was calculated using Equations (2a) and (2b) for males and females respectively (Frankenfield, Roth-Yousey and Compher, 2005 [25]); while the BSA was calculated using Equation (3) (Farlex Partner Medical Dictionary, 2012 [26]).
Body Mass Index = Weight (kg) / [Height (m)]²   (1)

Each respondent was asked to indicate the most prominent indoor and outdoor noise source and to rank the extent to which they were satisfied with the acoustic condition in the room. The data collected were subjected to statistical analysis using IBM SPSS Statistics 22. The mean measured sound pressure levels in each room layout type were objectively assessed against the standards of the Nigerian Federal Environmental Protection Agency (FEPA, 1990 [27]). The Agency stipulated that the maximum permissible exposure limit for indoor acoustic comfort is 90 dB within an eight-hour period, and that exposure to impulsive or impact noise should not exceed 140 dB. Further analysis was also carried out to determine which of the physical characteristics of the rooms can best predict the sound pressure levels. Furthermore, a regression analysis was carried out to determine which of the occupants' personal characteristics can best predict their perception of the acoustic conditions in the rooms.
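For reference, the anthropometric indices can be computed as below. Only the BMI formula (Equation 1) survives in the text; Equations (2a), (2b) and (3) could not be recovered, so the BMR and BSA helpers use two widely cited formulas (Mifflin-St Jeor and Mosteller) purely as stand-ins, which may differ from the equations actually used in the study.

```python
def bmi(weight_kg, height_m):
    """Equation (1): Body Mass Index = weight (kg) / [height (m)]^2."""
    return weight_kg / height_m ** 2

# Stand-in formulas (assumptions, not the study's lost Equations 2a/2b and 3):
def bmr_mifflin_st_jeor(weight_kg, height_cm, age_yr, male=True):
    """Basal metabolic rate in kcal/day; divide by 24 for kcal/hour as reported."""
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_yr + (5 if male else -161)

def bsa_mosteller(weight_kg, height_cm):
    """Body surface area in m^2."""
    return ((height_cm * weight_kg) / 3600) ** 0.5

print(bmi(70, 1.75))                          # ~22.9, inside the 18.5-25 "normal" band
print(bmr_mifflin_st_jeor(70, 175, 21) / 24)  # ~70.6 kcal/hour, inside the 58.33-75 band
print(bsa_mosteller(70, 175))                 # ~1.84 m^2, inside the 1.7-2.0 m^2 band
```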
Results
Physical measurement revealed that the nine different room layouts studied have window to floor area ratios that ranged from 0.08 to 0.52. Their window to external wall area ratios ranged from 0.08 to 1.01, with 35.7% of the respondents occupying the room layout type that has the lowest window to floor ratio and window to external wall ratio of 0.08.
Out of the 696 administered questionnaires, 576 were returned. After the questionnaires were sorted, however, 462 were usable for the analysis, resulting in a 66.38% response rate. All the respondents were either undergraduate or postgraduate students of the University, with 62.8% being males while 37.2% were females. The Body Mass Index (BMI) distribution of respondents showed that 67.3% fell within the normal range of 18.5 to 25, while 30.9% were either under-weight or over-weight, and 1.8% were obese. Regarding the metabolic rate (BMR), 57.6% of the respondents had metabolic rates between 58.33 and 75 kcal/hour, 38.7% were below that range, while 3.8% were above the range.
Regarding the Body Skin Area (BSA), 43.9% of the respondents were within the average range of between 1.7 m² and 2.0 m², 52.5% were below the range, while 3.6% were above the range. Moreover, only 9.8% of the respondents were not adults (below 18 years of age), while a majority, 61.8%, were between ages 18 and 23 years.
The mean measured sound pressure levels in each room layout ranged from 27.75 dBA to 56.29 dBA, with an overall mean value of 48.77 dB.The highest measured sound pressure level during the entire study period was far lower than 90 dB which was the maximum allowable limit value according to FEPA (1990) [27], Figure 1 and Figure 2 show the average distribution of the mean measured sound pressure level during the day in the room layouts with the lowest window to external wall area ratio and the highest window to external wall area ratio respectively.
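As an aside on how such per-room means can be obtained from the logged data, the sketch below aggregates hypothetical 15-minute samples from one room. The arithmetic mean corresponds to the kind of value reported above; the energy-equivalent level (Leq) is shown alongside as an alternative averaging convention that weights louder events more heavily, and its use here is an assumption rather than the procedure stated in the paper.

```python
import math

def arithmetic_mean(levels_db):
    """Plain average of the logged 15-minute sound pressure levels."""
    return sum(levels_db) / len(levels_db)

def energy_equivalent_level(levels_db):
    """Leq: average the sound energy (10^(L/10)) and convert back to decibels.
    This weighting emphasises loud events more than the arithmetic mean does."""
    mean_energy = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# Hypothetical 15-minute samples logged in one room between 07:00 and 19:00
samples_dba = [41.2, 43.5, 47.8, 52.1, 56.3, 50.9, 48.4, 45.0]

print(f"arithmetic mean = {arithmetic_mean(samples_dba):.2f} dBA")
print(f"Leq             = {energy_equivalent_level(samples_dba):.2f} dBA")
```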
This study found that the main contributors to indoor sound pressure levels in the spaces, as rated by the occupants, were indoor noise sources. Furthermore, Table 1 and Table 2 reveal the contributions of the different identified external and internal noise sources to the acoustic environment of the spaces as rated by the occupants. Table 1 shows that 76% of the occupants regarded roommates chatting as the most prominent indoor sound source, followed by noise from electronic gadgets, while Table 2 shows that 54.1% of the occupants regarded noise from people walking along the adjoining corridor as the most prominent outside noise source, followed by activities in the hostel common room. This is similar to Wang and Jan (2014) [28], who found that the highest percentage of occupants rated "talking within the space" as the most prominent source of noise affecting them. Moreover, as shown in Table 3, only 9% of the occupants were dissatisfied with the mean sound pressure levels in the spaces.
Discussions
This study established a strong correlation between the objective and subjective assessments of the acoustic condition in the spaces. The objective analysis revealed that the mean measured sound pressure levels in the nine room layouts were lower than the maximum allowable in the spaces by between 37.46% and 69.17%. This showed that all the room layouts met the recommended standard in Nigeria and hence should provide a significantly high level of acoustic comfort.
This was confirmed by the subjective assessment by the occupants, which revealed that over 80% of the occupants were satisfied with the acoustic condition in the spaces. This showed that objective measurements could be effectively used to predict the responses of occupants to acoustic conditions in the students' hostels.
Further analysis revealed a direct significant relationship between the mean measured sound pressure levels and the window to floor area ratio, as well as the window to external wall area ratio of the spaces, both at p < 0.01. This showed that the higher the ratios, the higher the measured sound pressure level. However, the correlation coefficient was higher for the window to external wall area ratio (0.45) than for the window to floor area ratio (0.36). While Shield and Dockrell (2004) [29] could not arrive at a conclusive confirmation of such a relationship, several other studies, such as Aasvang et al. (2008) [30], carried out in bedrooms exposed to railway noise, and Tong et al. (2015) [31], carried out in unoccupied test rooms, established a similar relationship between indoor sound levels and building characteristics related to window area and type. This suggests that a direct relationship between indoor sound pressure levels and building characteristics related to window sizes exists in the different outdoor/indoor acoustic contexts within which a building is located.
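For readers wishing to reproduce this kind of analysis, the sketch below computes the correlation between the window to external wall area ratio and the mean measured sound pressure level. The data arrays are hypothetical placeholders (the study used IBM SPSS Statistics 22), and Pearson's coefficient is assumed, since the text does not state which correlation statistic was used.

```python
from scipy import stats

# Hypothetical per-room data: window to external wall area ratio and the
# corresponding mean measured sound pressure level (dBA).
wall_ratio = [0.08, 0.15, 0.32, 0.47, 0.63, 0.80, 1.01]
mean_spl_dba = [42.1, 44.8, 47.3, 49.0, 51.6, 53.2, 55.9]

# Pearson correlation (assumed); Spearman would be the non-parametric analogue.
r, p_value = stats.pearsonr(wall_ratio, mean_spl_dba)
print(f"r = {r:.2f}, p = {p_value:.3f}")  # a positive r means higher ratios go with higher levels
```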
Furthermore, according to Frontczak and Wargocki (2011) [12], little is known regarding the potential influence of building type (including its physical characteristics) on the acoustic comfort of occupants in indoor spaces.
However, studies such as Leder et al. (2015) [32] and Sakellaris et al. (2016) [33] suggest that the floor area is strongly related to occupants' satisfaction with the acoustic environment. In fact, Leder et al. (2015) [32] concluded that satisfaction with acoustics and privacy was most strongly affected by workstation size and office type. While this study did not establish an exactly similar relationship, it nevertheless found an inverse relationship at a statistically significant level (correlation coefficient of −0.102) between the floor area per occupant and the measured sound pressure level, which means that the higher the floor area per occupant, the lower the measured sound pressure level. This therefore shows that the relationship between building characteristics and the quality of the acoustic environment is evidently significant not only in offices but also in residences such as students' hostels.
The relationships between the occupants' perception of the acoustic condition in the spaces and some of their personal characteristics were analysed. The personal characteristics considered were gender, age, Body Mass Index, Body Metabolic Rate and Body Skin Area. The analysis revealed that it was the occupants' age that had the relationship closest to the significant level with their perception of the acoustic condition as measured by the sound pressure level. It was an inverse relationship, which showed that the higher the occupant's age, the better their level of satisfaction with the acoustic condition. No other personal characteristic considered had a relationship close to the significant level with their perception of the indoor acoustic condition. This is similar to the findings of Sakellaris et al. (2016) [33]. Moreover, a regression analysis showed that a change in age among occupants between the ages of 21 and 23 years, as well as in the metabolic rate among occupants with less than 58.33 kcal/hr, is statistically significant in predicting the occupants' perception of the indoor acoustic condition, the former being more significant.
Although Frontczak and Wargocki (2011) [12] remarked that very few studies provide convincing evidence regarding the impact of the personal characteristics of occupants on their level of satisfaction with indoor conditions, the findings of the few available studies were not entirely the same as those of this study. While Kim et al. (2013) [34] gave evidence of gender differences in noise level and sound privacy satisfaction, Sakellaris et al. (2016) [33] showed that the relationship between indoor comfort and noise was higher among the male gender than the female, and that age was a significant determinant of occupants' perception of the acoustic environment. Although this study found no statistically significant relationship between occupants' gender and their perception of the indoor acoustic condition, it nevertheless found that the relationship between their perception and the measured sound pressure level was slightly more pronounced among the male gender than the female, with correlation coefficients of 0.115 and 0.096 respectively. However, the studies of Kim et al. (2013) [34] and Sakellaris et al. (2016) [33], along with most others with similar conclusions, were carried out within office environments with specified tasks and with occupant age distributions far different from those in this study. This may account for the difference in the findings of this study.
Conclusion
This study established a significant correlation between the objective and subjective assessments of the indoor acoustic condition using the measured sound pressure levels in the rooms within the students' hostels. This showed that physical measurements of indoor sound pressure levels in the rooms can be used to effectively predict occupants' perception of the indoor acoustic condition in the spaces. It also showed that the physical characteristics of indoor spaces are major determinants of their acoustic condition. Moreover, out of all the occupants' personal characteristics considered, it was only age that had a relationship with their perception of the measured indoor sound pressure level closest to a statistically significant level (with a correlation coefficient of −0.04). This study concluded that none of the considered occupants' personal characteristics can effectively predict their response to the indoor acoustic condition in the spaces. However, because of the very narrow age distribution of the respondents, this relationship may have to be further explored among respondents with a wider age distribution exposed to the same range of indoor sound pressure levels. It may also be expedient for future research to carry out a similar study among respondents with more varied personal characteristics.
Plate 1. DT-173 High accuracy digital sound noise level data logger.
Figure 1 .
Figure 1. Mean measured sound pressure levels in the room layout with the lowest window to external wall area ratio.
Figure 2 .
Figure 2. Mean measured sound pressure levels in the room layout with the highest window to external wall area ratio.
Table 1 .
Most prominent indoor noise sources in the spaces.
Table 2 .
Most prominent outside noise source in the spaces.
Table 3 .
Occupants' perception of the indoor sound pressure level.
|
2019-05-14T09:57:46.320Z
|
2019-05-10T00:00:00.000
|
{
"year": 2019,
"sha1": "24c0cc9420aca6d2a29d1c64b58e3706057fcf26",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=92341",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "24c0cc9420aca6d2a29d1c64b58e3706057fcf26",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
5501252
|
pes2o/s2orc
|
v3-fos-license
|
Urban Lighting Project for a Small Town: Comparing Citizens and Authority Benefits
The smart and resilient city evolves by slow procedures of mutation without radical changes, increasing the livability of its territory. The value of the city center in a Smart City can increase through urban lighting systems: its elements on the territory can collect and convey data to increase services to city users; the electrical system becomes the so-called Smart Grid. This paper presents a study of smart lighting for a small town, a touristic location inside a nature reserve on the Italian coast. Three different approaches have been proposed, from minimal to more invasive interventions, and their effect on the territory has been investigated. Based on street typology and its surroundings, the work analyzes the opportunity to introduce smart and useful services for the citizens starting from a retrofitting intervention. Smart city capabilities are examined, showing how it is possible to provide new services to the cities through ICT (Information and Communication Technology) without deep changes and simplifying the control of basic city functions. The results evidence an important impact on annual energy costs, suggesting smart grid planning not only for metropolis applications, but also in smaller towns, such as the examined one.
Introduction
Energy demands represent a global issue that calls for innovative local energy solutions, such as the ones generally proposed in Sustainable Energy Action Plans (SEAP). EU countries have set goals to reduce greenhouse gas emissions by 20%, to increase energy efficiency by 20% and to increase the share of renewable energy sources to 20% by 2020. Public Administrations are called upon to reduce CO2 levels and the impact of energy production on the environment. For this reason, Municipalities can join the Covenant of Mayors [1], the most important global movement at the local level, and receive support during the transition period to find the appropriate resources. By the end of February 2015, more than 6000 cities around Europe (3000 in Italy) had signed the agreement and started working on their SEAPs [2].
This research focuses on energy efficiency development in urban areas, combining public lighting management and city resources. Cities around the world are proceeding with projects such as energy-efficient public lighting and CHP (combined heat and power) projects, as well as energy-efficient buildings satisfying high thermal standards and using eco-materials. The rapid development of renewable energy technologies represents great future potential [3,4] and gives public and private buildings the possibility of becoming energy producers [5]. Cities around the globe could become important players in the future CO2 trading market, as they emit large quantities of greenhouse gases caused by public lighting, urban transportation and public building energy usage.
The public lighting systems in our cities are basic and vital services for city users and public administrations. Citizens demand high-quality services, but urban lighting is a significant consumer of energy. The maintenance cost of street lighting is a challenging energy and financial burden for governments around the world [6].
In Italy, as well as across the EU, urban and street lighting systems are old, obsolete and inefficient, often not complying with regulations [7] and sometimes damaged; for all municipalities they therefore represent a potential source of energy-efficiency gains and CO2 reduction. The need to meet targets for energy saving and reduction of CO2 production matches perfectly with the need for the renovation of urban lighting.
A recent study carried out by the European Commission [8] has shown that between 30% and 50% of the electricity used for lighting could be saved by investing in energy-efficient lighting systems. In many cases, such investments are not only profitable and sustainable but also improve lighting quality [9]. However, urban lighting is an issue consisting of several balanced aspects: standards, minimum requirements, valorization of the city, structure and composition of the urban environment, and requalification of urban spaces.
This paper shows a study for a new lighting system of a small town on the Italian coast, an attractive touristic location inside a nature reserve. Three different approaches, from minimal to more invasive actions, are described, and their effects on the town are investigated by comparing energy consumption and lighting quality. Technical characteristics and energy consumption data regarding the existing system have been provided by the municipality. The streets considered in this case study do not currently comply with law requirements. The proposed systems were modeled using commercial software, and compliance with the standards was set as a minimum requirement. The first level is lamp retrofitting to achieve the minimum illumination required by current standards. The second level approach, introducing deeper changes than the first one, is the design of a new system that complies with the laws, maximizes energy savings and achieves the requalification of urban spaces, which could thus become more attractive [10]. The third approach studies the lighting system as a smart grid opportunity; the systems are improved with more contextualized services for city users, making the lighting system a multi-service system and therefore a "smart" infrastructure [11]. The new lighting system, designed following the definition of Smart Cities, achieves better results from an economic, social and environmental point of view [12].
This approach can be considered a model for other cities to develop a comparison among different public lighting solutions and opportunities. The sensors can be distributed over the territory by installing them on existing urban infrastructures; in the case study, the grid is the lighting system, which has widespread coverage in the town. The distribution of the sensors was studied based on the analysis of the urban characteristics and needs [13]. Furthermore, in this paper, the main advantages of the proposed smart grid, including quantifiable and non-quantifiable benefits, are discussed [14].
Current Regulations
Road lighting requirements are described in the standards UNI 11248-2012 [15] and UNI EN 13201-2004 [16]; the classification of these requirements is related to the road type category defined by the Codice della strada (Highway Code) [17]. UNI 11248 defines the range of lighting classes based on the main user type (motorized traffic, cyclists and pedestrians) for each road type; a group of variable parameters, such as the traffic flux, determines the selection of the lighting class; UNI EN 13201-2004 lists the lighting requirements for each class. One or more lighting classes of the range can be attributed to the same road depending on the condition parameters; for example, the class changes when moving from the rush hour to later evening hours, when there are fewer users (see Figure 1); in this case, dimming control is important to satisfy the different lighting needs and to maintain an appropriate illuminance and/or luminance for the users [18]. Furthermore, in Italy, regional laws control emissions towards the sky and light pollution, implementing lighting restrictions for energy savings. The next revision of the Italian standard will introduce adaptive installation and control strategies to satisfy the European standard requirements [19]. The case studies presented in this paper satisfy the standards.
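A minimal sketch of this class-selection logic is given below. It encodes only the rule of thumb mentioned later in the paper (relaxing the class when traffic falls below 50% or 25% of the peak, per UNI 11248); the class indices and the exact downgrade steps are illustrative simplifications, not the normative procedure of the standard.

```python
def adjusted_lighting_class(base_class: int, traffic_fraction: float) -> int:
    """Illustrative selection of a lighting class within the range allowed for
    a road category: relax the class by one step when traffic falls below 50%
    of the peak flow and by two steps below 25%. A higher index corresponds to
    lower lighting requirements in this simplified scheme."""
    if traffic_fraction < 0.25:
        return base_class + 2
    if traffic_fraction < 0.50:
        return base_class + 1
    return base_class

# Example: a road designed for class index 3 at rush hour
for flow in (1.0, 0.4, 0.2):
    print(f"traffic fraction {flow:.1f} -> class index {adjusted_lighting_class(3, flow)}")
```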
New Technology: Source and Control System
The designer is asked to choose a suitable lighting technology considering the surroundings of the installation. New optics technologies are designed to reduce the dispersion of luminous flux; light pollution causes unnecessary energy consumption and has environmental impacts. Moreover, recent research demonstrates the role and effect of light on humans [20], especially in relation to the spectrum and exposure to high levels of illumination [21,22]. For example, the Light Emitting Diode (LED) spectrum has a spike in the blue region, which appears to act on human beings, affecting both their circadian and attentive systems.
From previous considerations, an environmentally friendly lighting system should minimize the lighting levels and at the same time guarantee safety conditions for drivers and pedestrians; it should also be energy efficient and respectful of humans, the environment, and all kinds of living beings.
Another aspect of design concerns the selection of the lighting source based on its characteristics: LED lights are able to maintain their characteristics even with the flux dimmed to 50% and have very long life cycles; metal halide lamps have a high color rendering index but present a high CCT; and high pressure sodium lamps have high luminous efficiency but a very low color rendering index. Adding control systems to the design can assure point-by-point control, switching and managing the fluxes at preset times, and thereby controlling and dimming the lighting power used. The market offers different types of regulators; the basic bi-power system has a preset power regulation set by the hour. Another possible solution is the use of remote control systems that automatically adapt the luminous flux and manage the entire lighting grid. Additional control sensors included in the panel board or in each light point send data to the remote control system; the software analyzes the information (for example, how many users are in the area) and adjusts the luminous flux. Furthermore, a dimmed lamp ages more slowly than a continuously operated lamp [23].
However, technology may go further: the poles can be transformed into nodes of a smart grid transmitting information about users, weather and operating diagnostics; automation saves energy and decreases costs by dimming the lighting flux when the street is not used. Studies show [24] that it is important to encourage the installation of smart dimmable electronic ballasts capable of receiving switching and dimming commands from a streetlight lane, so that the controller can also be used to auto-detect lamp and electrical failures. The advantage for municipalities is that devices and sensors are applied to the lighting system, which is an existing network grid. The main advantage of implementing such a lighting system is reflected in the amount of energy saved and the reduction in both operational and capital costs. Smart lighting systems can also help the maintenance and management of the system itself.
Case Study
This work considered four streets in a small town with 10,000 citizens (data from the Italian National Institute of Statistics) in the north of Italy; it is a seaside town with seasonal tourism. The lighting class is designed for the evaluated quantity of users, but during the summer the population increases because of tourism; therefore, the lighting systems have to supply an appropriate service. The town is an attractive touristic location inside a nature reserve, where lighting legislation is more restrictive in order to preserve the natural environment.
The study is divided into three main phases: analysis, design and comparison. In the first phase, the authors analyzed the streets, their surroundings and the existing lighting systems. The second phase concerns the three-level lighting design approaches, and in the third phase the results are evaluated and compared to define the most suitable solution (see details in Table 1). The street selection was based on their different characteristics (see Tables 2 and 3 and Figure 2): S1 is a seafront street and S2 connects the seafront with the town center, whereas S3 and S4 are residential streets with a peak of users only during the rush hours.
S1 consists of a two-way street with two sidewalks and a cycle path. The speed limit is 50 km/h and the traffic flow is constant during the whole day, with a reduction late at night. The carriageway is 7 m wide, the sidewalk section varies from 1.5 m to 5 m, and the cycle path is 2 m wide; the lighting has a single-sided disposition, with 7 m high poles 15 m apart, and the source is high pressure sodium (HPS) lamps with a power of 150 W.
The weak features of the existing system are the inadequate color rendering index and the lack of luminous uniformity caused by the excessive distance between poles, resulting in a violation of the standard UNI EN 13201-2004.
S2 is a town center street. It is a two-way traffic street with on-street parking and two sidewalks that are shaded by small trees. The carriageway, parking zones included, is 13 m wide and the sidewalks are 1.5 m wide; an opposite lighting disposition is realized with poles 4.5 m high placed at a distance of 15 m and with HPS lamps having a power of 100 W. The speed limit is 50 km/h and the traffic flow is constant during the whole day.
The S2 lighting system has the same problems as S1, but this street also has tree foliage obstructing the luminous flux, producing shadows on both the road and the sidewalks.
Even though it is located in the central part of the town, S3 is used only by cars and has no sidewalks along the road; moreover, the vehicular traffic flow is never intense, either during the day or at night. It is a two-way street 9 m in width; the single-sided lighting is provided with HPS 150 W lamps on 8.3 m high poles with 1.3 m bracket arms and a spacing of 30 m.
The use of HPS lamps with a low color rendering index causes a poor perception of the environment, and the pole distribution causes a low luminance uniformity value with zones in complete darkness.
S4 is a residential street that is mainly used by local traffic, and the flow is scarce during the day as well as during the night. It is a one-way street with an on-street parking area and two sidewalks. The whole carriageway is 8 m wide, the sidewalks are 2.5 m in width, and the two lanes measure 2.5 m each. The lighting poles are only on the left side of the street, at a mean distance of 25 m; many of them are 8.2 m high and the HPS 150 W lamps are mounted on bracket arms of 1.4 m length.
Table 2 shows the lighting classes and Table 3 synthetically describes all the characteristics of the four selected streets and their energy consumption; the total consumption has been calculated by summing lamp power, device consumption and hours of use. A 150 W HPS lamp consumes approximately 750 kWh per year, and a 100 W lamp consumes approximately 500 kWh.
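The per-lamp figures can be checked with a back-of-the-envelope calculation such as the one below; the annual burning hours and the auxiliary-device losses used here are assumptions, not values taken from the paper, chosen so that the result lands near the quoted ~750 kWh and ~500 kWh per year.

```python
def annual_energy_kwh(lamp_w: float, device_loss_fraction: float, burning_hours: float) -> float:
    """Annual energy of one light point: lamp power plus auxiliary-device
    losses, multiplied by the yearly burning hours."""
    return lamp_w * (1 + device_loss_fraction) * burning_hours / 1000

# Assumed figures (not from the paper): ~4,350 burning hours/year and ~15%
# ballast/driver losses. With these, a 150 W HPS point lands close to the
# ~750 kWh/year quoted in the text, and a 100 W point close to ~500 kWh/year.
print(round(annual_energy_kwh(150, 0.15, 4350)))  # ~750 kWh
print(round(annual_energy_kwh(100, 0.15, 4350)))  # ~500 kWh
```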
Level 1: Retrofitting
The first level approach aims to improve the urban street lighting. Retrofitting consists of the replacement of existing luminaires with new ones chosen for their technological properties; light sources and optics are replaced but the geometry of the existing system (distance and height) remains the same. This is the most common action since it requires fewer financial resources. Results, though, remain less successful than with the other approaches.
LED lamps are used to replace the HPS lamps in the four streets because of their easy luminous flux control. Seasonal traffic influences S1 and S2 during the summer, when they get crowded with both vehicles and pedestrians; compared with the other light sources for street applications, LED allows a lighting level to be designed for rush hours and the light emission to then be dimmed in the winter period when the number of users is lower. Moreover, these streets are located in the city center, where a high chromatic quality is required for heritage places: even if both LED and MH lamps have a high CRI, LED was preferred for the large variety of CCTs available; MH lamps have the same power consumption as the existing HPS but a lower luminous efficacy. On the contrary, S3 and S4 are streets with low vehicular traffic all year long, and the choice of LED allows easy dimming during the late night hours.
Table 4 shows the possible retrofitting intervention proposed in the four studied streets, presenting the main features of the new road lighting. With respect to the current lighting system, the use of LED technology represents an improvement in the light quality, particularly in the color rendering; economically, the durability of LED, higher than that of the other lighting sources, allows the reduction of maintenance costs, even though HPS are considerably energy efficient lamps.
The annual energy saving in terms of absorbed power, after replacing the HPS lamps with LED, in all the four streets is shown in Table 5; the results show that the retrofit produces a considerable reduction, of about 33%, in the energy consumption for street lighting. Compliance with the standards on street lighting (UNI EN 13201-2004) was verified with commercial lighting software using photometric data provided by the manufacturers; a virtual 3D model of the street has been realized, and the lamp geometry has been reproduced along the road. The software calculates the luminous flux derived from the photometric data. The lighting systems are designed to comply with the standards for carriageways, sidewalks and cycle paths. The use of asymmetrical optics has improved the luminance uniformity, even while maintaining the pole spacing, and facilitates a reduction in light pollution. This intervention has a positive effect on the environment, decreasing the undesired light on buildings and towards the sky.
Level 2: New Lighting System
The second level approach consists of a new lighting system with different heights and spacings of the light points. The choice of fixtures depends on the surroundings: the presence of trees along S2, the cycle path in S1 and the sidewalks were taken into account in the lighting design. Lighting levels and luminous distribution affect pedestrians' sensations [25], who feel less safe in lower lighting conditions [26]. A dynamic road lighting is proposed that could adapt lighting conditions on the street only when and where they are needed [27]: studies examine appropriate lighting levels designed to meet pedestrians' need for a sense of security [28].
The LED lamps proposed for all four streets consume less power compared to the first level approach: between 80 W and 100 W in S1 and S2, respectively, and 50 W and 38 W in S3 and S4, respectively. The annual consumption is 480 kWh in S1, 25% lower compared to the existing system; in S2 it is 230 kWh, with a saving of 53%; in S3 the consumption is 400 kWh and the saving is 47%; and in S4 there is a 76% reduction. The saving in S4 is 62% and is higher compared with both the existing system and the first level approach; the results of the analysis highlight that the existing system is unsuitable, especially in S3 and S4, where the proposals could achieve greater energy savings; the S1 comparison shows a small difference but, in these results, the different control system that can increase savings is not considered.
The poles are 6 m high in S1 because of the trees. In S2 the position of the poles is bilateral, with a height of 6 m. In S3 and S4, the poles are 8 m high. The on-off switch could be a traditional and economical device, or automatic equipment could be installed. In a new lighting system, it is possible to manage the luminous flux level as well as the on-off switching mode; therefore, new road lighting can be more efficient and less polluting. The solution considers a remote control with sensors on each pole to satisfy the different lighting needs during winter and summer in S1 and S2, which are main roads with sidewalks; S2 connects schools, public spaces and the theater with S1, the seafront road. This control manages the flux level and maximizes the service with energy and human capital savings, but it is an expensive system; therefore, the S3 and S4 roads have no remote control but a new generation ballast, which dims the power at specific times. The solution considers four power slots (see Table 6); a sketch of the corresponding energy calculation is given below. Standards recommend choosing a different lighting class when the users are less than 25% or 50% of the peak (UNI 11248), and sometimes the road has three different classes, as is the case in S3 and S4. Both actions consider LED, but in the second approach the lamp power is lower. In this case, the energy efficiency and the longitudinal uniformity are higher than with the retrofitting solution (see Figure 3 and Table 7), and energy efficiency also improves by means of the system control.
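The energy effect of such a slot-based dimming profile can be estimated as sketched below. The schedule and the nominal lamp power are hypothetical placeholders, not the actual values of Table 6, and serve only to show how the annual consumption and the saving against full-power operation are computed.

```python
# Hypothetical four-slot dimming schedule for one LED light point:
# (hours per night, fraction of nominal power). The real slots are in Table 6.
schedule = [(2, 1.00), (3, 0.75), (5, 0.50), (2, 0.75)]
nominal_power_w = 50  # e.g. a lamp of the second-level approach

nightly_wh = sum(hours * fraction * nominal_power_w for hours, fraction in schedule)
annual_kwh = nightly_wh * 365 / 1000
full_power_kwh = sum(hours for hours, _ in schedule) * nominal_power_w * 365 / 1000

print(f"dimmed:     {annual_kwh:.0f} kWh/year")
print(f"full power: {full_power_kwh:.0f} kWh/year")
print(f"saving:     {100 * (1 - annual_kwh / full_power_kwh):.0f}%")
```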
Level 3: Smart Lighting
The third level approach considers the same lighting system as the second level but equipped with a complete remote control. Each pole has sensors detecting user flow, weather and the real flux emitted by the system; one pole, the "head", sends signals to the others, manages the flux and controls malfunctions in real time. Especially in S1 and S2, this management greatly increases energy savings and extends the average life of the lighting systems, since they work at full power only during summer rush hours, when the users are both citizens and seasonal tourists. The system management, based on the presence of users, also controls the traffic lights or the crosswalk lighting with additional energy savings. Different manufacturers promote integrated systems of lamps, user-detection sensors, wireless communication and lighting control. Lighting management can be achieved by controlling each single lamp point or an electrical line. The most common sensors are passive infrared sensors (PIR) or cameras. The lighting control device is a luxmeter, which verifies the amount of light and sends a simple message when the detected illuminance does not reach the expected value because of the natural decay of a lamp that is old or dirty; otherwise, it can detect line dysfunctions.
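A minimal sketch of the per-pole logic described above (presence-based dimming plus a luxmeter check against the expected output) is given below. The class, thresholds and messages are hypothetical and do not correspond to any specific manufacturer's product.

```python
class PoleNode:
    """Illustrative controller for one smart light point: it dims when no users
    are detected and flags a fault when the luxmeter reading falls well below
    the expected value (old/dirty lamp or line dysfunction)."""

    def __init__(self, full_output_lux: float, dim_fraction: float = 0.5):
        self.full_output_lux = full_output_lux
        self.dim_fraction = dim_fraction

    def target_output(self, users_detected: bool) -> float:
        # Full flux only when users are present; otherwise a dimmed fraction.
        return 1.0 if users_detected else self.dim_fraction

    def is_faulty(self, measured_lux: float, target: float, tolerance: float = 0.8) -> bool:
        expected = self.full_output_lux * target
        return measured_lux < tolerance * expected

pole = PoleNode(full_output_lux=30.0)
target = pole.target_output(users_detected=False)            # street empty -> dim to 50%
print("fault" if pole.is_faulty(9.0, target) else "ok")       # 9 lx < 0.8 * 15 lx -> fault
```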
This grid could also convey information about the city, monitoring and/or managing the environment in several ways useful for citizens (for example, real-time traffic flow news or waiting times for buses) and for the public administration (building energy consumption) [29]. The lighting system becomes a smart grid, the infrastructure for a Smart City in which the nodes are the poles with integrated functions. Other accessories/functions can be added, such as microphones for gunshot identification or dB sensors for loud noises that directly inform the local law enforcement; a parking meter could be installed, providing data about free parking lots and also controlling the user flux. Most of the information for users could be sent to public screens, while other information collected by the remote control software could be sent to the municipality. Countless possibilities and operations could be achieved by the smart lighting grid on demand of the local authorities, and the context services provided differ by street. S1 is a seafront road whose users are citizens and tourists; therefore, the supplied services are a free Wi-Fi Internet connection and a set of easily accessible information about weather, public transportation and the neighborhood. The benefits for street users are focused on touristic services but, on the other hand, the municipality can gain useful information about the waiting times of public transportation, and the authorities can take prompt actions or collect data to plan a more functional service. S2 is a citizens' road; the system distributes data about the theater schedule and traffic information. There are also public schools, so the municipality can use the new grid to activate building control systems that manage and check the energy consumption and verify the buildings' security during the night hours. In addition, teachers and headmasters can use the same database and easily promote and manage, for example, the interschool exchange of books and documents. S3 and S4 pass through a residential district; the services are general information about city life (see Figure 4), traffic information useful for citizens who can then choose the best route, information on the available parking, or any recommendations by the authorities such as works in progress or holidays. The management system dims the luminous flux based on the presence of users, not only for lighting control but also for demand-based systems. The smart city approach has the aim of improving the quality of life in urban areas; ICT considerably stimulates the way of enjoying the city. This set of technologies increases the interconnection between networks. The innovative multi-functional system manages the city in a more sustainable way from different points of view: energy, environmental, functional and social. The approach analyzes the city and the territory as a grid of nodes. The goal is a multilevel approach to urban lighting as a smart grid. The smart lighting becomes the infrastructure of the city, a Smart City.
Discussion
The approaches show different energy savings depending on the level of action (see Figure 5), but each action presents a lower energy consumption than the existing lighting system. In S1, the third level approach has the highest energy savings compared to the existing system, which does not achieve the standards and has old fixtures. The new smart lighting system can manage the seasonal tourist flux and achieves higher energy savings while providing services to the users at the same time.
S2 is a touristic road too and, for this reason, the third level approach with the remote control system is advisable to manage the seasonal flux.
S3 needs new infrastructure but can achieve energy savings and a decrease in light pollution by installing bi-power regulators.
For S4, the existing system has a geometry that allows retrofitting to achieve all the established aims.
The third level approach is chosen in S1 and S2, which are main roads with numerous users; in S3 and S4, the second and the first levels, respectively, improve the service. The lighting system becomes a smart grid with new services for users in S1 and S2, whereas in S3 and S4 the new management technologies yield higher energy savings (see Table 8). The choice of the level of intervention depends on the characteristics of the environment, and the choice of the various relevant and applicable services of the third level has to be considered in any case.
Conclusions
Energy demands represent a global issue, and Public Administrations should have a role in reducing energy requirements and CO2 levels. Municipalities joining the Covenant of Mayors receive support during the transition period, while studying and acting to reduce their impact on the environment.
In Italy, urban and street lighting systems are old, obsolete and inefficient; they often do not comply with regulations and are sometimes damaged; they therefore represent an easy and fast way to increase energy efficiency and CO2 reduction.
The need to meet the targets for energy saving and reduction of CO2 production matches perfectly with the need for the renovation of urban lighting. Moreover, urban lighting appears to be the best candidate for the first step towards Smart City concepts.
This paper shows a three-level approach applied to a northern Italian town sited in a protected area. The approaches, from a retrofitting solution to smart city applications, are investigated with the aim of improving energy savings while guaranteeing full compliance with the standards. The different approaches used, depending on the context, and the choice of which level is proposed for adoption are explained. In this paper, we have presented methods to improve services for a medium-size town through the lighting systems. Every street has its own characteristics, and the success of the actions depends on the initial analysis.
The results, in terms of energy efficiency and lighting quality, show that the approaches can be feasible and environmentally friendly at the same time. There are about 5630 Italian towns with fewer than 5000 inhabitants where smart lighting can be used to improve services, as shown in the case study.
This approach could be used as a model for other cities to compare different public lighting analyses.
Slowly but firmly, all European cities and towns are bound to take action in energy management planning in order to guarantee their sustainable development, and road lighting system replacement is an easy and quick solution.
Figure 1 .
Figure 1. How to select the lighting classes.
Figure 3 .
Figure 3. S1 new lighting system luminance value rendering in false color.
Table 3 .
Dimensions of the four streets.
Table 4 .
Dimensions of the new lamps.
Table 5 .
Annual energy consumption before and after the intervention of lamp substitution.
Table 6 .
Annual energy consumption of the new systems lamps.
|
2016-03-14T22:51:50.573Z
|
2015-10-21T00:00:00.000
|
{
"year": 2015,
"sha1": "8d55fcfb547b7da814f2af4332c6cb047b5bca77",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/7/10/14230/pdf?version=1445423513",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "8d55fcfb547b7da814f2af4332c6cb047b5bca77",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
264474058
|
pes2o/s2orc
|
v3-fos-license
|
Molecular landscape of the JAK2 gene in chronic myeloproliferative neoplasm patients from the state of Amazonas, Brazil
JAK2V617F (dbSNP: rs77375493) is the most frequent and most-studied variant in BCR::ABL1 negative myeloproliferative neoplasms and in the JAK2 gene. The present study aimed to molecularly characterize variants in the complete coding region of the JAK2 gene in patients with BCR::ABL1 negative chronic myeloproliferative neoplasms. The study included 97 patients with BCR::ABL1 negative myeloproliferative neoplasms, including polycythemia vera (n=38), essential thrombocythemia (n=55), and myelofibrosis (n=04). Molecular evaluation was performed using conventional PCR and Sanger sequencing to detect variants in the complete coding region of the JAK2 gene. The presence of missense variants in the JAK2 gene including rs907414891, rs2230723, rs77375493 (JAK2V617F), and rs41316003 were identified. The coexistence of variants was detected in polycythemia vera and essential thrombocythemia. Thus, individuals with high JAK2V617F variant allele frequency (≥50% VAF) presented more thrombo-hemorrhagic events and manifestations of splenomegaly compared with those with low JAK2V617F variant allele frequency (<50% VAF). In conclusion, individuals with BCR::ABL1 negative neoplasms can display >1 variant in the JAK2 gene, especially rs2230722, rs2230724, and rs77375493 variants, and those with high JAK2V617F VAF show alterations in the clinical-laboratory profile compared with those with low JAK2V617F VAF.
Introduction
The BCR::ABL1 negative chronic myeloproliferative neoplasms (MPN) represent a heterogeneous group of clonal diseases of the hematopoietic progenitor cell, of which the most classic are polycythemia vera (PV), essential thrombocythemia (ET) and primary myelofibrosis (PMF) (1,2). In the 5th Classification of Hematolymphoid Tumors, published in 2022, the World Health Organization (WHO) revised certain aspects of the MPN category (1), establishing as diagnostic criteria for PV an elevated hemoglobin concentration and/or hematocrit, accompanied by panmyelosis and detection of JAK2V617F or exon 12 variants in JAK2.
Most of the variants identified in JAK2 result in a gain of function and are characterized as somatic missense types that lead to unregulated production of hematopoietic cells in the bone marrow and accumulation of mature cells in the peripheral blood (7). JAK2V617F (dbSNP: rs77375493) is the most commonly identified variant in MPN and is found in up to 95% of cases of PV and between 50-60% of cases of ET and PMF (8). This variant is located in exon 14 of the JAK2 gene and is characterized as a missense variant. It is a product of the substitution of a guanine by a thymine at position 1,849, which leads to a substitution of valine with phenylalanine at amino acid position 617 (V617F) of the protein structure (9,10), a position that belongs to the pseudokinase domain, which is a region of primary positive and negative regulation of the protein (10,11).
Variants in exon 12 of the JAK2 gene are identified in ~3% of JAK2V617F-negative patients diagnosed with PV (12). Genetic alterations in this exon include missense and indel variations (13), which confer a marked erythrocytic picture in individuals with PV and appear at younger ages when compared to the JAK2V617F variant (14).
The presence of coexisting non-driver variants can modulate the JAK2V617F variant allele frequency (VAF). In MPN, the determination of the JAK2V617F VAF is pivotal when evaluating laboratory and clinical implications. It is worth mentioning that, in PV, a high VAF (≥50%) is associated with fibrotic progression and positively associated with total white blood cell count (WBC), neutrophil count, and thrombosis events, especially in the presence of coexisting non-driver variants (15), while in ET, a high VAF is correlated with increased thrombo-hemorrhagic events, hypercoagulable status, and low quantitation of hemostasis factors (16,17).
Sanger sequencing and next-generation sequencing have allowed the identification of variants in other JAK2 exons (18,19). Several variants have been identified in the complete coding region of the JAK2 gene, which affect other domains of the JAK2 protein (19,20) and lead to constitutive activation of the JAK/STAT pathway; most of the described variants are somatic, with only a small fraction of them being germinal. This finding suggests that certain patients may develop a non-clonal myeloproliferative phenotype, with variable penetrance at the familial level (21).
Certain variants acquired in the coding region of JAK2 are described as benign or of uncertain clinical significance, and the primarily affected exons are 6 (22), 9-10 (23), 11-15 (19), and 19 (24). According to certain studies, some variants in these regions have been found in coexistence, presenting cytokine-independent signaling (25), and are even associated with leukemic transformation and the development of non-hematological solid tumors (23,24,26). Thus, the present study aimed to molecularly characterize variants in the complete coding region of the JAK2 gene in individuals with BCR::ABL1 negative chronic myeloproliferative neoplasms.
Materials and methods
Patients. In the present study, 97 patients from the state of Amazonas, Brazil, diagnosed with PV (n=38), ET (n=55) and MF (n=04), who were treated between July 2021 and March 2023 at the Hospital Foundation for Hematology and Hemotherapy of Amazonas (which is the only reference institution in the state of Amazonas for the diagnosis and treatment of hematological diseases), were included. Participants showed an absence of BCR::ABL1 transcripts. Additionally, all the patients with an MF diagnosis who agreed to participate in the investigation were included.
The present study was performed in accordance with the Declaration of Helsinki and Resolution 466/12 of the Brazilian Ministry of Health. This study was approved by the National Ethics Committee, which is responsible for approving relevant human studies in Brazil (approval no. 4.450.813). Written informed consent was obtained from all subjects involved in the study.
Clinical and laboratory data. Clinical data were obtained from medical records, which included data regarding sex, age, splenomegaly, history of thrombotic or hemorrhagic events, and treatments administered. Laboratory data were obtained from blood samples and included red blood cell count (RBC), hematocrit (Ht), hemoglobin (Hb), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), WBC, percentage of segmented neutrophils, monocytes and lymphocytes, platelet count, prothrombin time-International Normalized Ratio (PT-INR), activated partial thromboplastin time (aPTT), fibrinogen (FIB), lactate dehydrogenase (LDH) and uric acid (UA). UA and LDH analyses were performed after diagnosis and during treatment; it should be noted that several patients included in the study had received several years of hydroxyurea administration. The median time on treatment in PV patients was 4 years (100-500 mg/day of hydroxyurea or 2 mg/day of Anagrelide), in ET patients it was 10.5 years (100-300 mg/day of hydroxyurea or 2 mg/day of Anagrelide), and in MF patients it was 2 years (2 mg/day of Anagrelide). Of note, the administration of hydroxyurea can significantly alter the laboratory analyses.
Blood-sample processing and RNA extraction. Total RNA was extracted from peripheral blood samples collected with EDTA anticoagulant using TRIzol® (Ambion; Thermo Fisher Scientific, Inc.), according to the manufacturer's protocol. cDNA was synthesized using SuperScript™ III Reverse Transcriptase (Promega Corporation). Reverse transcription was used to obtain cDNA using the following thermocycling parameters: 5 min at 25˚C and 60 min at 42˚C. After the reaction, the cDNA was stored at -80˚C until used for PCR.
PCR and Sanger sequencing analysis. Amplifications were performed using a total volume of 25 µl. Reaction products were visualized using electrophoresis on a 1.5% agarose gel stained with ethidium bromide. PCR products were purified with the DNA precipitation and purification protocol using polyethylene glycol 8000 (Promega Corporation), as described previously (27-29). A Sanger sequencing reaction (in both directions) was performed using BigDye® Terminator v3.1 (Applied Biosystems; Thermo Fisher Scientific, Inc.), according to the manufacturer's protocol. The sequences of the primers used are listed in Table I; they were designed using Primer-BLAST (NCBI) and the OligoAnalyzer Tool (IDT) to evaluate the GC percentage, Tm, hairpin-forming capacity and ΔG index, and to flank the complete coding region of JAK2, spanning from exon 3 to exon 25 (Fig. 1). The products of the sequencing reaction were purified using the EDTA/ethanol protocol and were subsequently evaluated in an automatic sequencer (3500xL Genetic Analyzer; Applied Biosystems; Thermo Fisher Scientific, Inc.) using the POP-7 polymer.
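As an illustration of the primer-quality checks mentioned above, the sketch below computes the GC percentage and a rough Wallace-rule melting temperature for a candidate oligonucleotide. The sequence shown is hypothetical and is not one of the primers in Table I; OligoAnalyzer applies more sophisticated nearest-neighbour thermodynamics than this simplified estimate.

```python
def gc_percent(primer: str) -> float:
    """Fraction of G/C bases in the primer, as a percentage."""
    p = primer.upper()
    return 100 * (p.count("G") + p.count("C")) / len(p)

def wallace_tm(primer: str) -> float:
    """Rough melting temperature by the Wallace rule:
    2 degC per A/T and 4 degC per G/C base."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

# Hypothetical 20-mer, NOT one of the primers listed in Table I
candidate = "ATGCGTACCTGAGGTCCATG"
print(f"GC = {gc_percent(candidate):.0f}%, Tm (Wallace) = {wallace_tm(candidate):.0f} degC")
```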
Data analysis. The sequences obtained were initially analyzed using the Sequencing Analysis software (Applied Biosystems; Thermo Fisher Scientific, Inc.); only high-quality sequences were used for variant analysis (Q score ≥30). Geneious software 6.0.6 (Biomatters, Inc.) was used to obtain contigs and compare them to the Homo sapiens JAK2 reference sequence, transcript 2, mRNA (NCBI: NM_001322194.2). Samples with rare variants were sequenced and confirmed at least twice. VAF was measured in JAK2V617F-positive individuals using Minor Variant Finder (Applied Biosystems; Thermo Fisher Scientific, Inc.) and EditR software (moriaritylab.shinyapps.io/editr_v10). The clinical significance of the variants identified in the research was analyzed using the PolyPhen-2 tool and the ClinVar-NCBI site (https://www.ncbi.nlm.nih.gov/clinvar/).
Statistical analysis. Categorical variables are presented as the frequency (n, %). Continuous numerical variables are presented as the median and interquartile range (IQR). The distribution of continuous numerical variables was verified using a Shapiro-Wilk test. Statistical analysis of categorical variables was performed using a χ² test. Kruskal-Wallis and Mann-Whitney U tests were used to analyze numerical variables, when appropriate. Data from individuals with MF were excluded from the statistical analysis between groups due to the small number of patients with MF. P<0.05 was considered to indicate a statistically significant difference. Statistical analysis of the data was performed using GraphPad Prism version 8.2.1 (GraphPad Software, Inc.).
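For orientation, the statistical tests named above can be reproduced with open-source tools; the sketch below uses scipy analogues of the Shapiro-Wilk, Mann-Whitney U and χ² tests. The VAF vectors are hypothetical, and the contingency table uses approximate counts derived from the splenomegaly percentages reported in the Results; the actual analysis was performed in GraphPad Prism 8.2.1.

```python
import numpy as np
from scipy import stats

# Hypothetical JAK2V617F VAF values (%) for two diagnostic groups
vaf_pv = np.array([62.0, 55.4, 71.2, 48.9, 66.3])
vaf_et = np.array([31.5, 42.7, 28.9, 50.1, 36.8])

print(stats.shapiro(vaf_pv))                # normality check (Shapiro-Wilk)
print(stats.mannwhitneyu(vaf_pv, vaf_et))   # non-parametric two-group comparison

# chi-squared test on a 2x2 contingency table, e.g. splenomegaly by diagnosis
table = np.array([[9, 29],    # PV: with / without splenomegaly (approximate counts)
                  [9, 46]])   # ET: with / without splenomegaly (approximate counts)
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```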
Results
Clinical and laboratory characteristics of patients. Samples from 97 patients diagnosed with MPN were evaluated, and these were distributed among PV (n=38), ET (n=55), and MF (n=04). During the length of the study, none of the patients showed transformation to acute leukemia, post-PV MF, or post-ET MF. Clinically, ET showed a predominance of females (P=0.0276), compared with PV and MF. All individuals were between the fifth and sixth decade of life (P=0.565; comparing the ages of the PV and ET groups). Splenomegaly was detected more frequently in MF than in PV and ET patients (75, 23.6, and 16.3%, respectively; P=0.0212).
Thrombotic and hemorrhagic events were more often observed in ET cases (16.3 and 21.8%; P=0.6406 and P=0.0205, respectively) when compared to PV cases. The thrombotic events included deep venous thrombosis, thrombosis of the splenic vein, esophageal varices, and miscarriage, and the following hemorrhagic events were evaluated in the study: hypermenorrhagia, ocular and gingival hemorrhage, and hemorrhage of the gastrointestinal tract. All medical records of the patients included in this study were reviewed, and none of them reported acquired von Willebrand syndrome.
In the blood count, an increase in the erythrocyte lineage was observed in individuals with PV compared to those with ET and MF, with an increased RBC (5.03 x 10⁶/mm³, P<0.0001), a finding that is complemented by the Ht values
(48%, P<0.0001) and Hb concentration (15.2 g/dl, P<0.0001). Hemometric values were found to be increased in ET cases [mean corpuscular volume (MCV): 103.9 fl, P=0.0013; mean corpuscular hemoglobin (MCH): 33.5 pg, P=0.006; and mean corpuscular hemoglobin concentration (MCHC): 32.5 g/dl, P=0.1160] when compared to PV and MF cases. The white blood cell count was within normal ranges in PV and ET cases, compared with those with MF (P=0.0134). However, the percentage of neutrophils was higher in MF patients (76.4%) when compared to ET and PV patients (P=0.0232), and the lymphocyte count was slightly higher in ET than in PV and MF patients (29.2%, P=0.0005). In ET patients, a high platelet count was observed when compared to PV and MF patients (470,500/mm³, P<0.0001). Erythropoietin measurements were not available in the present study.
Values in the hemostasis tests of individuals with PV, ET, and MF were closely related; however, a slight increase in fibrinogen concentration was observed in individuals with MF (321 mg/dl, P=0.400). Biochemical analyses demonstrated higher concentrations of LDH and UA in subjects with MF (904.5 U/l, P=0.0295 and 6.8 mg/dl, P=0.006, respectively) compared with PV and ET patients. Clinical and laboratory values are described in Table II.
Variants detected in chronic MPN patients. In this study, missense variants were identified in the FERM domain (rs907414891), the FERM-SH2 linker region (rs2230723), the pseudokinase domain (rs77375493), and the kinase domain (rs41316003). This totals 4 missense variants identified in the complete coding region of the JAK2 gene, as described in Table III. In addition, other synonymous and benign variants were detected in the complete coding region of the JAK2 gene (rs2230722, rs576746768, rs2230728, rs2230724, and rs55930140). Conversely, the rs10119726 variant is a synonymous variant and does not have a description of its clinical significance on ClinVar. These variants are shown in Figs. 2-11.
Frequency and distribution of missense variants in patients with variant alleles of the JAK2 gene. The frequency of variants was estimated in the population (PV=38, ET=55, and MF=04), and it was noted that most of them were located in the first protein domains, especially in the FERM domain, followed by the pseudokinase domain. The variant rs77375493 (JAK2V617F) showed a higher frequency in individuals with PV when compared to those with ET (65.7 and 38.1%, respectively; P=0.0116). Variant rs2230723 was found in sporadic cases of PV and ET. Interestingly, rs907414891 and rs41316003 were found only in cases of ET, but not in cases of PV or MF. The frequency of missense variants is presented in Table IV.
Mutational landscape of the JAK2 gene in individuals with chronic MPN. After estimating the frequency of the variants in the complete coding region of the JAK2 gene, the mutational profile of the individuals was mapped. It was observed that patients with variant alleles in JAK2 simultaneously presented with 1-3 variants. Among the primary variants found simultaneously in the three types of MPN were rs2230724, rs2230722, and rs77375493; notably, more individuals with PV presented with all three variants than those with ET (P=0.0023). In contrast, individuals with ET showed a predominance of two variants (rs2230722 and rs2230724) compared to those with PV (P=0.0253). The mutational landscape of the patients is presented in Table V. Individuals with four variants were not found.
JAK2V617F VAF in patients with PV and ET. Of the 97 patients included in this study, the allele burden of JAK2V617F was measured in 46 individuals who were JAK2V617F-positive (PV, n=25 and ET, n=21). The allele burden of JAK2V617F was compared between individuals with PV and ET. In each disease, two groups were considered to describe the VAF of JAK2V617F: high VAF (≥50%) and low VAF (<50%). Individuals with ET more often showed a low JAK2V617F VAF (P<0.0001), whereas those with PV more often showed a VAF of ≥50% (P=0.0477). Individuals with MF were excluded from this comparison. The comparison of the VAF of JAK2V617F among the groups is presented in Table VI. Regarding the clinical profile in individuals with PV, thrombotic and hemorrhagic events were evenly distributed between both groups. However, in the PV patients, splenomegaly was more frequent in individuals with a high VAF. The clinical data of the individuals with PV according to the VAF of JAK2V617F are presented in Table VII.
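The sketch below illustrates, in simplified form, how a VAF can be estimated from Sanger chromatogram peak heights and then dichotomized at the 50% threshold used in this study. The peak heights are hypothetical, and dedicated tools such as EditR or Minor Variant Finder apply additional base-calling corrections beyond this simple ratio.

```python
def vaf_from_peaks(mutant_peak: float, wildtype_peak: float) -> float:
    """Simplified variant allele frequency estimate from chromatogram peak
    heights at the c.1849 G>T position (percentage of the mutant signal)."""
    return 100 * mutant_peak / (mutant_peak + wildtype_peak)

def vaf_group(vaf_percent: float) -> str:
    """Dichotomization used in the study: high VAF (>=50%) vs. low VAF (<50%)."""
    return "high (>=50%)" if vaf_percent >= 50 else "low (<50%)"

# Hypothetical peak heights for two patients
for mutant, wildtype in [(820, 410), (300, 700)]:
    vaf = vaf_from_peaks(mutant, wildtype)
    print(f"VAF = {vaf:.1f}% -> {vaf_group(vaf)}")
```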
Comparison of the clinical and laboratory profile according to the VAF of JAK2V617F in patients with PV.
The comparison of laboratory profiles in individuals with PV, according to their VAF of JAK2V617F, showed an increase in hematimetric values (RBC, 4.7 x 10⁶/mm³; Ht, 46.9%; Hb, 14.9 g/dl) in individuals who presented a JAK2V617F VAF of ≥50% compared with those with a VAF of <50%. WBC and platelet counts were slightly augmented in individuals with a VAF of ≥50%. Likewise, LDH was elevated in individuals with a JAK2V617F VAF of ≥50% (486.5 U/l). Hemostasis tests were relatively equivalent between both groups in PV patients. The laboratory profiles of the individuals with PV, according to the VAF of JAK2V617F, are presented in Table VII.
Comparison of the clinical and laboratory profiles according to the VAF of JAK2V617F in patients with ET.
In the individuals with ET, the clinical and laboratory profiles were also described based on the VAF of JAK2V617F. Regarding the clinical characteristics in individuals with ET, thrombo-hemorrhagic episodes were the most commonly recorded clinical events in the patients, especially in those with a JAK2V617F VAF of ≥50%; however, this difference was not statistically significant. Just as in the PV individuals, splenomegaly was more frequent in individuals with a high VAF. The clinical data of the individuals with ET according to the VAF of JAK2V617F are presented in Table VII.
The laboratory profiles of individuals with ET, according to the VAF of JAK2V617F, showed an increase in hematimetric values (RBC, 5.1 x 10⁶/mm³; Ht, 47.0%; and Hb, 15.5 g/dl) in individuals who presented a JAK2V617F VAF of ≥50% when compared to those with a VAF of <50%. The WBC showed equivalence in both groups. Interestingly, the platelet count was increased in individuals with a VAF of <50%. Likewise, for individuals with ET, LDH was elevated in individuals with a JAK2V617F VAF of ≥50% (412.1 U/l). Hemostasis was slightly prolonged in individuals with a JAK2V617F VAF of ≥50%. The laboratory profiles of individuals with ET, according to the JAK2V617F VAF, are presented in Table VII.
Discussion
MPNs are generally characterized by an increase in blood cell counts, which can lead to clonal evolution and disease progression. Despite investigations in other Brazilian states (30-32), this study is the first to address JAK2V617F mutation detection and the hematologic profile according to JAK2V617F VAF in patients from the state of Amazonas diagnosed with MPN.
Regarding the proportion of MF patients, which is a multifactorial issue, previous studies in Brazil have shown a lower proportion of MF patients compared with PV and ET (30-32); it is noteworthy that MF is the most aggressive MPN and shows a high rate of leukemic transformation. Silva et al (32) determined the prevalence of JAK2V617F in MPN in Pernambuco, Brazil, and found that few patients had an MF diagnosis compared with those with PV and ET. Similarly, Macedo et al (30) investigated the association between the JAK2 46/1 haplotype and acquisition of JAK2V617F and observed the lowest number of MF cases. Furthermore, they concluded that the JAK2 46/1 haplotype was present in JAK2V617F-positive individuals and associated with the MPN phenotype in Brazilian patients. Likewise, in another study, Macedo et al (31) assessed the association of TNF
The present study showed that the increase in the erythrocyte lineage was in fact a characteristic of individuals with PV and that the increase in the platelet count was an indicator suggestive of ET, according to the criteria established by the WHO (1). RBC counts are directly related to Hb and Ht concentrations; it is hypothesized that these two hematological parameters are reliable indices for the diagnosis of PV (33).
Currently, erythropoietin measurement is considered a major diagnostic criterion for PV (1,34). In the present study, these measurements were not available; however, MCV is considered a marker that can be used to differentiate between PV and ET (33). In the present study, MCV was found to be lower in patients with PV than in those with ET. This finding may reflect iron deficiency and the accelerated turnover of red blood cells in these patients (33,35).
The role of the lymphocyte count in MPN is not well described. Stefaniuk et al (36) found little evidence for the prognostic significance of the neutrophil-lymphocyte ratio and lymphocyte-monocyte ratio in MPN, but both may be higher in patients with PMF than in healthy individuals and may be associated with chronic inflammation and tumorigenesis. Likewise, Mulas et al (37) reported that a high neutrophil-lymphocyte ratio has been described in JAK2-positive patients and that this parameter could be used as an indicator of chronic inflammation in MPN.
In addition, Vannucchi et al (38) reported that individuals with MPN have an increased risk of developing lymphoproliferative neoplasms, particularly those who are JAK2V617F-positive. Similarly, Garcia-Gisbert et al (39) found that certain patients with a diagnosis of MPN showed CD3+ JAK2V617F-positive lymphocytes. These findings may support the hypothesis that JAK2V617F-positive lymphocytes are related to leukemic transformation.
Furthermore, it has been highlighted that MPN is associated with a higher risk of thrombotic and thromboembolic events than in the general population, as well as with increased hematopoietic counts (40), which was also observed in the present study. This may be explained by the presence of a high JAK2V617F VAF (≥50%), which likely drives dysregulated signaling in hematopoietic progenitor cells and may be potentiated by the presence of other variants in genes such as CALR and MPL; these are directly implicated in platelet activation and an increased platelet count (40).
Administration of hydroxyurea is frequently used in cases of PV and ET to normalize hematological counts (41,42). The results of the present study showed that the high platelet count observed in individuals with ET was directly related to an increased frequency of thrombo-hemorrhagic events, which indicates that platelets could in fact be the primary mediators of thrombotic activation in these patients. Accordingly, the study by Buxhofer-Ausch et al (43) demonstrated that platelet count normalization is an important factor in reducing thrombotic risk, regardless of the leukocyte count. However, further studies are needed to establish the platelet count cut-off at which these risks are triggered.
Esophageal and gastric complications are often described in patients with a diagnosis of myeloproliferative neoplasms (44), typically due to portal system hypertension or von Willebrand syndrome resulting from excessive thrombocytosis. However, in the present study, bleeding complications were relatively frequent, especially in patients with ET. This may be due to an increased platelet count combined with functional platelet disorders, such as an impaired platelet aggregation response to collagen and a reduced number of dense granules in platelets (45). In addition, the current literature notes that ET is more common in females, and bleeding and thrombotic risks are the major complications in MPN patients (40,46). Nevertheless, female biology may play a role in the development of bleeding and thrombotic events, likely because pregnancy and the use of contraceptives interfere with the interactions of platelets and other molecules in the endothelium.
Other variants in the JAK2 gene have been reported, and most of these are somatic (21,22). The existence of germline variants in MPN has also been described, including variants showing patterns of erythropoietin (EPO) hypersensitivity and weak constitutive signaling of the JAK2/STAT5 pathway compared with JAK2V617F (47).
Therefore, by applying Sanger sequencing to the complete coding region of the JAK2 gene, the present study demonstrated the existence of somatic and germline variants other than JAK2V617F in individuals with MPN, with somatic variants being the most frequent. This is corroborated by previous studies (19,48,49). Moreover, germline variants have been described in individuals with MPN who present at an earlier age and have a familial predisposition, compared with those carrying somatic variants (50). Age differences between patients with somatic and germline mutations were not investigated in the present study and will form a future research direction.
JAK2V617F is the most common variant in BCR::ABL1-negative MPN (51), with constitutive activity of the JAK2/STAT5/STAT3 pathway, and it is highly associated with the development of cardiovascular and thrombotic complications (15). In the present study, JAK2V617F was identified in 65.7% of the patients with a diagnosis of PV. This proportion may be related to the treatment regimens received, as these individuals had been treated with cytoreductive therapy for several years.
The effects of the JAK2 VAF are well established; however, the specific populations affected are poorly understood. Through the comparison of the JAK2V617F VAF, it was shown that patients from the state of Amazonas with PV had a higher JAK2V617F VAF than those diagnosed with ET, and that individuals with a VAF of ≥50% had more thrombo-hemorrhagic events and a slight prolongation of coagulation tests, especially PT-INR and aPTT, when compared with those with a VAF of <50%, which is not dissimilar to previous studies (40,46). This suggests that individuals with a high JAK2V617F VAF exhibit increased intracellular signaling, cellular activation, and possible alterations in coagulation factors, thus contributing to the dysregulation of hemostasis.
Furthermore, the results of the present study are in agreement with those of Hu et al (16), who demonstrated that individuals with PV had a higher JAK2V617F VAF (≥50%) than those with ET. In addition, the results of the present study demonstrated that patients from the state of Amazonas with a diagnosis of PV had a more complex mutational landscape than individuals with ET from the same state. This landscape showed at least three concomitant mutations in the JAK2 gene, suggesting genomic instability and, subsequently, instability of regulatory mechanisms at the protein level and possibly in the myeloproliferative phenotype of individuals with MPN.
According to data available on the ClinVar-NCBI website, a number of the acquired variants located along the JAK2 coding region are either benign or of uncertain clinical significance. Most of the variants reported to date lie in the FERM domain, the kinase domain, and the binding regions (19,22-24), a finding relevant to the present study, since the detected variants are located in these regions.
Thus, it is highlighted that the presence of variants in the FERM domain may result in increased basal activity of JAK2 (52,53), a phenomenon that may explain the myeloproliferative phenotype in JAK2V617F-negative individuals who carry other variants in the JAK2 gene, and could possibly be related to the clinical phenotype in the different neoplasm subtypes, which remains poorly understood. The present study identified the rs907414891 variant, located in the FERM domain, which results in the exchange of isoleucine for valine at position 166 of the JAK2 protein (p.Ile166Val). Currently, there is no description in the literature of the clinical impact of this variant. However, the exclusive presence of rs907414891, rs576746768, rs413160003, and rs55930140 in individuals with ET suggests that they may represent novel clonal biomarkers in ET. Nevertheless, additional molecular and functional tests are necessary to verify their possible association with MPN.
The SNV rs2230722, located in exon 6 of JAK2, was frequently observed in the present study and showed a higher predominance in females, in agreement with Sokol et al (22). This variant was more frequent in women with platelet aggregation syndrome than in men and was significantly associated with deep vein thrombosis. As such, the variant could be correlated with the clinical picture of MPN, especially in individuals with thrombotic complications. The SNV rs2230724, a variant present in exon 19 of JAK2, was detected in the present study in the JH2-JH1 linker region. Although variants in this region are not frequently described in MPN, alterations in the JH1-JH2 interaction may dysregulate the inhibition of catalytic activity and, therefore, alter JAK2 function. This SNV, together with rs2230728, has been reported in hematologic cancers and associated with progression to acute leukemia, especially in individuals older than 45 years (23); they may thus serve as genetic markers of leukemic progression in MPN. The coexistence of JAK2 variants is not often described in MPN; however, it could have greater repercussions on the individual's clinical picture (50). In the present study, up to three variants were observed concomitantly, in the presence of JAK2V617F, and these individuals presented laboratory profiles with slight increases in cell counts, including red blood cell and platelet counts, which indicates that these variants may confer genomic instability and increase intracellular signaling of the JAK/STAT, PI3K, MAPK, NF-κB, and HIF1-α pathways, thereby inducing tumorigenesis and facilitating the acquisition of other variants within the same gene (50,54).
Using Sanger sequencing, Lanikova et al (55) demonstrated the presence of the SNV rs2230723 in coexistence with JAK2V617F and, in this case, described normalized hematological counts after administration of hydroxyurea. In other experiments, both variants showed increased STAT1, STAT3, and STAT5 signaling, which suggested the potential of both variants to predispose to malignancy. Likewise, other variants in JAK2 may confer weak constitutive signaling of the JAK/STAT pathway, resulting in a 'more attenuated' myeloproliferative phenotype with slightly altered cell counts. However, further studies are needed to assess the functional behavior of these variants, both individually and in combination.
Although the present study highlights the importance of detecting other variants across the entire coding region and the coexistence of variants in the same gene, with possible repercussions on the clinical and laboratory status of individuals with MPN, it has several limitations. Foremost is the small sample size due to the lack of patients from multiple centers; only patients from the Hospital Foundation of Hematology and Hemotherapy of Amazon, the sole reference institution in the state of Amazonas for the diagnosis and treatment of hematological diseases, were included. Future studies will aim to recruit a larger cohort from several centers to confirm the results. Other limitations are the lack of functional studies confirming the myeloproliferative activity of these variants, the lack of allelic association of variants with outcomes, which might explain possible predispositions to the development of MPN, and the fact that JAK2 analysis was performed only once during the study. Likewise, the individuals included in the present study were treated with hydroxyurea and anagrelide, which decreases the probability of detecting JAK2V617F mutations. The results may also be affected by the low sensitivity of Sanger sequencing.
Table VII. Clinical data in individuals with PV according to the VAF of JAK2V617F.

Figure 1. Structure of the JAK2 gene. Gray boxes correspond to coding exons 3-25, which were analyzed using Sanger sequencing. Arrows indicate the primers used in the reactions. Text in bold type in the boxes indicates the size of the fragments.

Figure 2. Chromatogram of the JAK2V617F variant by DNA sequence analysis.

Figure 3. Chromatogram of the rs2230724 variant by DNA sequence analysis.

Figure 4. Chromatogram of the rs2230722 variant by DNA sequence analysis.

Figure 5. Chromatogram of the rs2230728 variant by DNA sequence analysis.

Figure 6. Chromatogram of the rs907414891 variant by DNA sequence analysis.

Figure 7. Chromatogram of the rs2230723 variant by DNA sequence analysis.

Figure 8. Chromatogram of the rs10119726 variant by DNA sequence analysis.

Figure 9. Chromatogram of the rs41316003 variant by DNA sequence analysis.

Figure 10. Chromatogram of the rs55930140 variant by DNA sequence analysis.

Figure 11. Chromatogram of the rs576746768 variant by DNA sequence analysis.
Table II. Demographic, clinical, and laboratory characteristics of patients.

Table III. Missense variants detected by Sanger sequencing in the entire coding region of the JAK2 gene in patients with myeloproliferative neoplasms. a Clinical significance was described according to ClinVar reports. b Variant type was described according to dbSNP reports.

Table IV. Frequency and distribution of missense variants in patients.

Table V. Frequency and distribution of variants in patients.

Table VI. JAK2V617F variant allele frequency in patients with PV and ET. a P<0.05, b
Positron-emission-tomography in tubercular lymphadenopathy: A study on its role in evaluating post-treatment response
Lymph node tuberculosis is one of the most common forms of extrapulmonary tuberculosis worldwide. The study aimed to evaluate the role of positron emission tomography-computed tomography (PET-CT) in determining post-treatment response in lymph node tuberculosis. A PET-CT was done in all treatment naïve tubercular lymphadenitis adults at baseline and after six months of therapy. The post-treatment clinical response was compared with the metabolic response on PET-CT. Of the 25 patients with tubercular lymphadenitis, 9/25 patients showed a complete metabolic response (CMR) at six months, while 16 patients had a partial metabolic response (PMR). All patients with CMR had a good clinical response. However, discordance between clinical and PET findings was noticed in those with PMR. The role of PET-CT in evaluating post-treatment response in patients with tubercular lymphadenitis needs further evaluation with a larger sample size.
Introduction
Tuberculosis has affected humanity for ages and continues to be a significant global public health problem even today (1). Extrapulmonary tuberculosis (EPTB) accounts for a substantial proportion of the total number of cases (2-4). Lymph node tuberculosis (LNTB) has been reported as the most frequent form of EPTB, with an incidence of 30.8 per 100,000 population (3). The management of lymph node tuberculosis is challenging, as many patients continue to have persistent swelling despite the prescribed duration of treatment, and the endpoint of therapy is often difficult to determine objectively (4). The present guidelines for LNTB have been mainly extrapolated from experience with pulmonary tuberculosis, and the duration of treatment in tubercular lymphadenitis is usually guided by clinical response alone. There is a need for a more accurate and objective tool for evaluating the response to therapy in patients with LNTB. This study aimed to determine the role of positron emission tomography-computed tomography (PET-CT) in assessing treatment responses in patients with LNTB.
Materials and Methods
This is a prospective study in which treatment-naïve adults (>14 years of age) diagnosed with LNTB were enrolled after providing written informed consent. Patients with significant comorbidities (malignancy, immunosuppression) were excluded from the study. Pregnant and lactating females were also excluded. Approval from the Institute Ethics Committee was obtained before the study was initiated (IECPG/384/29.06.2016). Clinical findings and baseline laboratory investigations were noted for all patients. A PET-CT scan (pre-therapy) was done at baseline before the initiation of treatment. The treatment for lymph node tuberculosis was guided by the national guidelines for lymph node tuberculosis (2). A repeat PET-CT (post-therapy) was done after six months for all recruited patients. Before the post-therapy scan, patients were classified as having a "complete clinical response (CCR)" or a "partial clinical response (PCR)" based on the resolution of clinical symptomatology, by a team of expert clinicians not involved in the treatment process. After the post-therapy scan, both images were analyzed and reported by two nuclear medicine physicians blinded to the clinical response. Anatomical regions and the total number of sites of abnormal tracer accumulation were noted. Based on the pre-therapy and post-therapy scan parameters, the patients were classified as having a "complete metabolic response (CMR)" or a "partial metabolic response (PMR)". Treatment was stopped in patients with CCR. The decision to continue or stop treatment in patients with PCR was taken by the treating clinician not involved in the study.
Lymph nodes with the most intense FDG-uptake were carefully identified on both the scans. The maximum standardized uptake value (SUV-max) of tracer in the nodes was assessed using a circular region of interest. A receiver operating characteristic (ROC) curve analysis was performed to find the sensitivity and specificity of delta SUV-max (% change between the SUV-max pre-scan and post-scan) in predicting clinical response. The differences in the percentage change in SUV-max were assessed using an unpaired nonparametric Wilcoxon test. A p-value of less than 0.05 was considered significant.
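As a rough illustration of this analysis, the sketch below computes the percentage change in SUV-max per patient and derives sensitivity/specificity from an ROC curve against the clinical response, alongside an unpaired nonparametric comparison. It is a minimal Python sketch with hypothetical variable names and made-up example values; it is not the statistical code used in the study.

```python
import numpy as np
from scipy.stats import mannwhitneyu          # unpaired nonparametric test
from sklearn.metrics import roc_curve

# Hypothetical example data: SUV-max before and after therapy, and the
# clinical response label (1 = complete clinical response, 0 = partial).
suv_pre = np.array([8.2, 6.5, 9.1, 7.4, 5.9, 10.3])
suv_post = np.array([1.1, 3.9, 1.8, 5.2, 1.0, 6.4])
clinical_response = np.array([1, 0, 1, 0, 1, 0])

# Delta SUV-max: percentage decrease from the pre-therapy scan.
delta_suv = (suv_pre - suv_post) / suv_pre * 100

# Compare delta SUV-max between responders and non-responders.
stat, p_value = mannwhitneyu(delta_suv[clinical_response == 1],
                             delta_suv[clinical_response == 0])

# ROC curve of delta SUV-max as a predictor of clinical response;
# each threshold yields a sensitivity (TPR) and specificity (1 - FPR).
fpr, tpr, thresholds = roc_curve(clinical_response, delta_suv)
for thr, se, fp in zip(thresholds, tpr, fpr):
    print(f"cut-off {thr:6.1f}%  sensitivity {se:.2f}  specificity {1 - fp:.2f}")
print(f"Unpaired nonparametric p-value: {p_value:.3f}")
```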
Results and Discussion
Of the 62 patients who were screened, 37 patients were recruited for the study. After a baseline scan, 12 patients did not return for the repeat scan. A total of 25 patients were included in the final analysis. All patients were diagnosed based on the clinical and radiological features. Concurrent histopathological and microbiological evidence was present in 80% (n = 20) and 40% (n = 10) of the patients respectively. The study group consisted of 13 males and 12 females with a median age of 29.7 years (range 14-75 years). A total of 80% (n = 20) of the patients had mediastinal lymphadenopathy, while cervical and abdominal lymphadenopathy was seen in 76% (n = 19) and 40% (n = 10), respectively. Multiple lymph node sites were involved in 76% (n = 19) of the patients. In nine patients, PET demonstrated additional involvement sites, including liver, spleen, ileocecal region, pleura, pericardium, and dorsal spine. The clinical, radiological, pathological and microbiological features are summarized in Table 1.
The visual analysis of FDG PET-CT showed a CMR in 9/25 patients, all of whom showed a CCR (Table 2). Figure 1 shows a CMR in one of the recruited patients. A PMR was seen in 16/25 patients; nine of these showed a CCR and their treatment was stopped. The remaining seven patients with PMR and PCR were continued on therapy for a variable duration. All patients were followed up until the end of the study period. None of the patients with CCR (including 9 with CMR and 9 with PMR) relapsed by the end of the follow-up period. The mean duration of follow-up after completion of treatment was 127 +/- 57 days.

The SUV-max of lymph nodes was computed for both scans. The percentage change in standardized uptake value (∆SUV-max) was analyzed between the baseline PET and the scan after six months of treatment. The changes in SUV-max were statistically significant for the cervical and mediastinal lymph node stations, with p-values of 0.043 and 0.006, respectively. However, abdominal lymph nodes did not show a statistically significant change. A receiver operating characteristic (ROC) curve analysis was performed to assess the diagnostic utility of the percentage change in SUV-max as a marker of clinical response to treatment in patients with cervical/mediastinal lymph nodes. A decrease of 77% or more in the SUV-max of cervical lymph nodes had a sensitivity of 75% (95%CI: 42.8-94.5%) and a specificity of 100% (95%CI: 47.8-100%) in predicting clinical response. A decrease of 52% or more in the SUV-max of mediastinal lymph nodes had a sensitivity of 75% (95%CI: 47.6-92.7%) and a specificity of 100% (95%CI: 54.1-100%) in predicting clinical response.

FDG PET-CT is increasingly being used for diagnosis and for guiding therapeutic decisions in patients with infectious diseases. The available literature on FDG PET-CT for assessing treatment response in patients with pulmonary TB is limited but suggests a good correlation between the two (5). However, there is a paucity of literature evaluating the role of FDG PET-CT in EPTB in particular. PET-CT has been hypothesized to predict the need for treatment intensification or prolonged therapeutic strategies (6). Experimental animal models have shown that FDG PET-CT activity appears to correlate directly with the bactericidal activity of anti-tuberculosis treatment (7,8).

It is pertinent to note that FDG uptake reflects the glycolytic activity of neutrophils, lymphocytes and macrophages and therefore represents inflammation (9,10). The absence of uptake thus suggests a decrease in inflammation, indicating that the patient has responded to treatment. This is why all patients with no metabolic uptake on the post-therapy scans had responded clinically. It is also evident from our results that persisting activity may not indicate active disease; in such cases, clinical assessment should be given more weight in deciding the course of action (11-18). Similar to a previously published study, our study showed that the change in SUV-max could be used as a surrogate for predicting clinical response as well (15). We found that a percentage decrease of 77% for cervical lymph nodes and of 52% for mediastinal lymph nodes had excellent specificity and considerable sensitivity. However, a larger sample size is needed to validate the cut-offs that we derived.

In conclusion, although PET may be useful as an adjunct to clinical response in guiding the duration of therapy in some cases of tubercular lymphadenitis, its routine use as a stand-alone guide for treatment response needs to be ascertained in further studies with larger sample sizes.
Reevaluating claims of ecological speciation in Halichoeres bivittatus
Abstract Allopatry has traditionally been viewed as the primary driver of speciation in marine taxa, but the geography of the marine environment and the larval dispersal capabilities of many marine organisms render this view somewhat questionable. In marine fishes, one of the earliest and most highly cited empirical examples of ecological speciation with gene flow is the slippery dick wrasse, Halichoeres bivittatus. Evidence for this cryptic or incipient speciation event was primarily in the form of a deep divergence in a single mitochondrial locus between the northern and southern Gulf of Mexico, combined with a finding that these two haplotypes were associated with different habitat types (“tropical” vs. “subtropical”) in the Florida Keys and Bermuda, where they overlap. Here, we examine habitat assortment in the Florida Keys using a broader sampling of populations and habitat types than were available for the original study. We find no evidence to support the claim that haplotype frequencies differ between habitat types, and little evidence to support any differences between populations in the Keys. These results undermine claims of ecological speciation with gene flow in Halichoeres bivittatus. Future claims of this type should be supported by multiple lines of evidence that illuminate potential mechanisms and allow researchers to rule out alternative explanations for spatial patterns of genetic differences.
In a pioneering study, Rocha et al. (2005) presented evidence supporting the possibility of ecological speciation in coral reef fishes, presenting two possible cases of parapatric speciation in Atlantic Halichoeres. One of these case studies focused on Halichoeres bivittatus, in which they demonstrate a deep (3.6%) divergence in cytochrome B (cytb) sequences between a northern "subtropical" lineage (spanning the northern Gulf of Mexico, peninsular Florida, and the eastern coast of the United States) and a southern "tropical" lineage (spanning the Yucatan peninsula, Cuba, the eastern Bahamas, and all points south including the southern Caribbean and coastal Brazil).
Finding a deep divergence at a locus with geographic structure is not in itself evidence of speciation. For example, such a divergence can be expected even under neutral processes (Irwin, 2002), in particular with respect to mitochondrial loci such as cytochrome B (Irwin, 2002;Neigel & Avise, 1993;Taylor & Hellberg, 2006). Rocha et al. (2005) presented evidence that the two haplotypes were preferentially associated with different types of habitat in the Florida Keys and Bermuda, with individuals of the northern lineage being found in inshore areas that experience colder minimum temperatures, while the southern lineage dominated in populations that experienced warmer and more stable temperature regimes.
They also pointed to the long pelagic larval stage of H. bivittatus and apparent connectivity between populations separated by large geographic distances to suggest that there was significant potential for gene flow between northern and southern H. bivittatus populations.
In light of this, they argued that the genetic divergence seen between lineages represented by these major haplotype groups was unlikely to be explained by geographic distance. The finding of genetic divergence in the face of gene flow, combined with habitat partitioning in the contact zone between the two haplotypes, led the authors to conclude that ecological processes either had driven or were in the process of driving parapatric speciation in this system. If true, this represents a departure from the more common pattern of speciation in this clade (Wainwright et al., 2018) and other Caribbean fishes, in which new species seem to primarily arise from vicariance or long-distance dispersal events (Choat et al., 2012; Robertson et al., 2006).
At present, the support for parapatric ecological speciation in H. bivittatus hinges almost entirely on the demonstration of habitat segregation in areas where the two lineages overlap. However, support for habitat segregation in the Florida Keys was based on samples of only two populations and included larval samples, which may demonstrate patterns of microhabitat segregation due to reasons unrelated to speciation. Here, we present the results of an attempt to further explore patterns of habitat segregation for H. bivittatus in the Florida Keys by sampling additional populations of adults and conducting more extensive statistical analyses.
METHODS
To test habitat partitioning among Halichoeres bivittatus haplotypes, we analyzed the same mitochondrial cytochrome B fragment as Rocha et al. (2005) for thirteen additional populations/collection sites. We sampled eight populations in the Florida Keys including four populations on the edge of the continental shelf (Sombrero Light, 11 Foot Mound, XMuta, and Tennessee Reef), two populations on patch reefs in the inshore channel (East Washerwoman and East Turtle Shoal), and two grass beds located directly offshore in water <2m in depth (near mile marker 62 on Long Key and behind Keys Marine Lab (KML) on Vaca Key). For a broader geographic context, we also sampled fishes from two sites further north on the Gulf Coast of Florida, two sites in the Bahamas, and one site from Belize.
Florida and Bahamas specimens were collected in 2005 and 2006, and Belize specimens were collected in 2006. In addition to comparing fore reef and inshore patch reef, we included the grass bed habitat as it experiences even greater seasonal and diurnal fluctuations in temperature than the inshore patch reef and as such provides an additional test of the proposed habitat segregation.
All animal handling procedures were approved by the University of California, Davis Institutional Animal Care and Use Committee.
Fish were caught using a combination of hand nets, barrier nets, and otter trawls. Specimens were euthanized using MS-222 dissolved in seawater, and samples were taken from muscle tissue and preserved in 95% ethanol. We extracted DNA using DNeasy™ (Qiagen) columns and PCR-amplified a 723-base pair fragment of the mitochondrial cytochrome B gene using the L14768 and H15496 primers from Rocha et al. (2005). PCR products were cleaned using ExoSap-IT (USB Corp.). Purified templates were dye-labeled using BigDye (ABI) and sequenced on an ABI 3077 automated DNA Sanger sequencer.
Of the 225 individuals of H. bivittatus sampled for Rocha et al. (2005), only 12 sequences have been made available on GenBank (Benson et al., 2013), accession numbers AY823558.1 to AY823569. All of these are included in our analyses. We aligned sequences using ClustalW (Thompson et al., 2002) and inferred a population phylogeny using BEAST v.2.6.3 (Bouckaert et al., 2014).
All cytb sequences were imported into BEAUTi and partitioned by codon position. All partitions had trees and clocks linked, while site models were allowed to vary. We used ModelTest with "transition-TransversionSplit" (Bouckaert & Drummond, 2017) to infer site models. BEAST analyses were run twice, with 50,000,000 steps of the Markov chain, sampling every 1,000 generations. After removing 10% of trees for burn-in and combining the two runs in LogCombiner, a maximum clade credibility tree was generated in TreeAnnotator with median node heights. For consistency with Rocha et al. (2005), we also conducted a separate analysis using the TN93 model. All analytical results from the trees inferred with this model were functionally identical to those from the full Bayesian procedure, however, and will not be presented here. We implemented a strict molecular clock and a constant coalescent tree model, as is appropriate for population genetic data when not inferring population size changes (Drummond & Rambaut, 2007). We constructed a strict consensus tree using the "contree" function in the APE R package, and used it to assign individuals to either "northern" or "southern" haplotypes for visualization and further analysis.
We conducted population genetic analyses using the R packages adegenet and hierfstat (Goudet, 2005;Jombart, 2008). Because some sites were represented by only a few individuals, we pooled sites by habitat types: "offshore reef," "inshore reef," or "inshore grass bed." To assess whether haplotypes were segregating between different populations, we measured pairwise Fst (Nei, 1987) between all pairs of habitat types in the Florida Keys. To evaluate the statistical significance of these patterns, we compared the observed genetic distance between habitat types with that expected if the assortment of haplotypes was random. The expected patterns under this null hypothesis were estimated using a permutation test in which sequences were randomly assigned to habitat types, keeping sample sizes consistent with those from the empirical data. In order to test whether the results were robust to our assignment of populations to habitat types, we repeated the analyses without pooling sites. Further details of the analysis and all code are provided in the supplemental materials.
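To make the logic of this permutation test concrete, the sketch below implements a simplified haploid, two-haplotype analogue in Python (the study itself used the R packages adegenet and hierfstat). The sample data, group labels and the Fst-style estimator shown here are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

def fst_two_groups(hap_a, hap_b):
    """Gst-style Fst = (Ht - Hs) / Ht for haploid two-haplotype data (1 = north, 0 = south)."""
    p_a, p_b = np.mean(hap_a), np.mean(hap_b)
    p_tot = np.mean(np.concatenate([hap_a, hap_b]))
    h_t = 2 * p_tot * (1 - p_tot)                               # pooled gene diversity
    h_s = np.mean([2 * p_a * (1 - p_a), 2 * p_b * (1 - p_b)])   # mean within-group diversity
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# Hypothetical haplotype calls per habitat type (1 = northern, 0 = southern).
offshore = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
inshore = np.array([0, 1, 0, 1, 1, 0, 0, 1, 1, 0])

observed = fst_two_groups(offshore, inshore)

# Null distribution: randomly reassign individuals to habitats, keeping sample sizes fixed.
pooled = np.concatenate([offshore, inshore])
n_perm, count = 10_000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    if fst_two_groups(pooled[:len(offshore)], pooled[len(offshore):]) >= observed:
        count += 1

print(f"observed Fst = {observed:.3f}, permutation p = {count / n_perm:.3f}")
```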
Additionally, we used a single-threshold generalized mixed Yulecoalescent (GMYC) model (Pons et al., 2006) for single-locus species delimitation analyses. The GMYC model uses an ultrametric tree to infer a shift between Yule speciation and coalescent processes, using this shift to delimit species. The GMYC method was implemented using the "splits" package in R and the consensus tree inferred using BEAST. We then used a likelihood-ratio test to test the hypothesis that more than one species was present in our dataset.
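The likelihood-ratio test used here can be summarized in a few lines. The sketch below assumes the log-likelihoods of the null (single coalescent) model and the GMYC model are already available, and treats the degrees of freedom as an input, since the appropriate value depends on how the two models are parameterized; the numbers are placeholders, not results from the study.

```python
from scipy.stats import chi2

def gmyc_lrt(loglik_null, loglik_gmyc, df=2):
    """Likelihood-ratio test: 2 * (lnL_GMYC - lnL_null) compared to a chi-square distribution."""
    lr = 2.0 * (loglik_gmyc - loglik_null)
    return lr, chi2.sf(lr, df)

# Placeholder log-likelihood values for illustration only.
lr_stat, p_value = gmyc_lrt(loglik_null=-154.2, loglik_gmyc=-153.1)
print(f"LR = {lr_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no support for more than one species
```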
RESULTS
Phylogenetic and broad-scale biogeographic patterns were concordant with those seen in Rocha et al. (2005), showing a deep (~5%) divergence between a broadly northern and a broadly southern lineage (Figures 1 and 2). We found an approximately equal mix of the two haplotypes in the Florida Keys. In contrast, the Bahamas were dominated by the southern haplotype, with only one individual out of the forty having the northern haplotype. We note that this individual was the only Bahamas specimen obtained from GenBank and that no fine-scale locality information was available for it. Given that the authors providing the original data (Rocha et al., 2005) report the Bahamas as being home to the southern lineage of H. bivittatus, however, it is possible that this sequence was misidentified when it was posted to GenBank. Similarly, we find that of the two examples from the Virgin Islands in the original study that were available on GenBank, one was from the southern lineage and one was from the northern lineage.
Our GMYC analysis showed no evidence for more than one species in our dataset (LRT: p = .313). Moreover, as shown in Figure 3, the finer-scale analyses of haplotypes in the Florida Keys do not support the hypothesis of habitat segregation presented in Rocha et al. (2005). Under that hypothesis, we would expect to see a significant association between haplotype group and collection site, with inshore patch reef and grass bed populations primarily represented by the northern "subtropical" haplotype and continental shelf populations dominated by the southern "tropical" haplotype. With higher power to detect differences as a consequence of sampling more individuals in this region (78 specimens vs. 36 specimens), more populations (8 vs. 2), and more diverse habitats (i.e., with the inclusion of fore reef, inshore patch reef, and shallow grass bed populations), we find no strong evidence for the hypothesis that there are differences in allele frequencies between habitat types or individual populations. Comparison of sites grouped by habitat type showed no significant differences ( Figure S1.1). For the site-level analysis, the only statistically significant difference between any pairs of populations was between the fore reef site XMuta and the single individual from the KML grass bed site. This result is likely an artifact of permutation tests conducted with a small sample size (4 samples from one population and 1 from the other; see Figure S1.2). Moreover, in this sole exception, the direction of the difference was opposite to that expected: The specimens sampled from the fore reef were of the northern haplotype, while the lone individual from the shallow grass bed was of the southern haplotype (Figure 3).
DISCUSSION
In the current study, we attempted to replicate a widely cited study of parapatric ecological speciation in marine fishes. Our results do not support the hypothesis that the northern and southern lineages of Halichoeres bivittatus represent a product of either cryptic or incipient ecological speciation. We find no evidence that these two lineages represent different species. Further, we find no evidence for habitat partitioning between inshore patch reefs, grass beds, and reefs on the edge of the continental shelf in the Florida Keys. On the contrary, our study finds that northern and southern lineages are randomly distributed among habitat types and populations in this region. The only site-by-site comparison in the Florida Keys that was significantly different from random assortment was in the opposite direction to that predicted.
Demonstrating speciation with gene flow is notoriously difficult.
For these purposes, we find lists of criteria such as those presented by Potkamp and Fransen (2019) to be of particular value; they allow us to quickly quantify the strength of evidence for a given process and adjust our level of confidence accordingly. They suggested six criteria that needed to be addressed. The co-occurrence of both cytochrome B haplotype groups in some locations strongly supports the presence of geographically overlapping populations. However, this pattern could also come about if the divergence seen in cytochrome B were entirely due to allopatric divergence followed by secondary contact, or due to neutral processes (Irwin, 2002), and as such is not sufficient to support any mechanism of speciation.
Consideration of barriers to north/south dispersal that may contribute to allopatric speciation may be particularly relevant, as the broad geographic distribution of haplotypes presented in Rocha et al. (2005, figure 1) and the current study very closely matches one of the most significant faunal breaks in the region for fishes (Robertson & Cramer, 2014, figure 2) and other marine groups (summarized in Robertson & Cramer, 2014, figure 1), suggesting that geography and current regimes in the region may create barriers to gene flow that are relatively consistent across a broad range of taxa.
Examining the other criteria, we find that questions 1 and 3 have not been addressed in any study, while the remainder are supported only by verbal arguments based on the dispersal capability of the group (criterion 5) and the previous finding of habitat assortment in the Keys and Bermuda (criteria 2 and 4). We note that in parapatric ecological speciation, substantial differentiation in mitochondrial haplotypes would only be expected to arise well after strong selection linking ecological divergence and assortative mating, and as such, we should expect to see fairly strong evidence for criteria 1-4 in this system.
As we could not replicate the sampling of Rocha et al. (2005) in Bermuda or Key Largo, it is still possible that habitat partitioning is occurring in those localities. In light of our findings and the lack of any demonstration of morphological differentiation or assortative mating between northern and southern lineages, however, we find it difficult to see how such highly localized habitat partitioning could be considered evidence for either ecological speciation or speciation with gene flow in the rest of the Caribbean and Gulf of Mexico.
Instead, we caution that the apparent differences in haplotype frequencies in these populations demonstrated by Rocha et al. (2005) could be driven by a number of processes that are not necessarily associated with speciation, including lottery recruitment and postrecruitment selection related to local conditions (Bernardi et al., 2012;Grorud-Colvert & Sponaugle, 2011;Searcy & Sponaugle, 2001;Selkoe et al., 2006). The use of sequences from larvae in Rocha et al. (2005) in some populations makes it particularly difficult to eliminate these processes as alternative explanations for patterns seen in H. bivittatus. As such, we would suggest that even those localized results should be viewed with extreme caution until they have been replicated with adult fish over a longer timescale.
There is a growing body of evidence that ecological factors play an important role in structuring the genetic diversity of marine populations and promoting speciation (Prada & Hellberg, 2020; Taylor & Hellberg, 2005; Whitney et al., 2018; Momigliano et al., 2017; Teske et al., 2019; Holt et al., 2020; Bird et al., 2011; Choat et al., 2012; Potkamp & Fransen, 2019; Faria et al., 2021; Rüber et al., 2003), and failure to replicate one study is not sufficient cause to question the growing consensus that ecological speciation and speciation with gene flow play an important role in generating marine biodiversity. Likewise, there is an abundance of evidence that allopatry has also promoted speciation in marine settings (Chenuil et al., 2018; Ekimova et al., 2019; Holt et al., 2020; Laakkonen et al., 2021; Wainwright et al., 2018). We should not be surprised that in such a species-rich and unique environment, there is evidence for a variety of speciation mechanisms. The question is when we should conclude that the weight of evidence supports a given scenario. While many in the field might suggest that our default position should be one of assuming allopatric speciation until proven otherwise, we are less confident that this is the appropriate stance to take for marine environments. Rather, we suggest that when we do not know the answer to four of the six criteria for demonstrating speciation with gene flow, or to any of the three criteria for demonstrating ecological speciation, the most appropriate position is to simply acknowledge that we have insufficient evidence to argue for any mechanism of speciation in this system. "We don't know" is a deeply unsatisfying answer, but it is the only one that accurately reflects the currently available evidence.

Figure 2. Full study area including Rocha et al. (2005) data from GenBank and new collections. Pie charts indicate the relative frequency of haplotypes in different study areas. The pie chart for novel collection data from the Florida Keys is across all newly sampled sites combined; a detailed view of localities within the Keys is given in Figure 3. *Data from Rocha et al. (2005) submissions to GenBank. These were typically one sequence per locality and do not necessarily represent frequencies reported in the original manuscript. Circles are colored by haplotype: blue = northern; orange = southern.
ACKNOWLEDGMENTS
The authors would like to thank the generous assistance of the
The authors are also grateful to Luiz Rocha, who provided helpful feedback on an earlier version of the manuscript.
Site sample sizes: East side of Tennessee Reef (4); XMuta (2); MM62 (13); 11 Foot Mound (22); East Turtle Shoal (24); East Washerwoman (4); East of Sombrero Light (8).
Author contributions: writing-review & editing (equal). Teresa L. Iglesias: Conceptualization (supporting); data curation (supporting); funding acquisition (sup-
DATA AVAILABILITY STATEMENT
The DNA alignment used here is available on Dryad, https://
Utilizing Downdraft Fixed Bed Reactor for Thermal Upgrading of Sewage Sludge as Fuel by Torrefaction
A lab-scale downdraft fixed bed reactor was used to study the torrefaction of sewage sludge, a non-lignocellulosic biomass, in order to enhance its thermochemical properties as a fuel. The torrefaction was carried out over a temperature range of 200-350 °C and a residence time of 0-50 min. Degree of torrefaction, torrefaction index, chemical exergy, gas analysis, and molar ratios were taken into account to analyze the torrefied product with respect to torrefaction temperature. The effect of torrefaction temperature was very pronounced, and the temperature range of 250-300 °C was considered the optimum torrefaction temperature range for sewage sludge. Chemical exergy, calorific value, and torrefaction index were significantly influenced by the change in relative carbon content, resulting in a decrease in the O/C and H/C molar ratios.
Introduction
Biomass is globally recognized as a renewable energy source and is abundantly available on Earth; it can be transformed into biofuels or energy using various thermal, physical, or biological processes. Not only the depletion of fossil fuels but also the negative environmental impacts associated with them, such as greenhouse gases, acid rain, and climate deterioration, have drawn interest toward exploiting renewable energy sources such as biomass as fuel [1]. A study by Daniel et al. [2] examined integrating biomass into combined cooling, heating and power (CCHP) systems to provide energy for buildings. Utilizing biomass or waste can lessen the environmental issues caused by high carbon dioxide emissions, which are chiefly produced by fossil fuels. Hence, clean and renewable energy sources are in high demand [3], and sewage sludge is acknowledged as a low-cost material for biomass combustion [4]; yet these wastes are disposed of in landfills or the ocean for economic reasons. As sewage sludge contains a high amount of organic matter, it can be a promising biomass resource for energy recovery. Utilizing sewage sludge to generate heat through incineration and combustion can be a good alternative, but the emission of heavy metals has led to various disagreements [5]. Due to the low hemicellulose and cellulose contents and the high ash content of sewage sludge, its combustion behavior is entirely different from that of lignocellulosic biomass [6]. For these reasons, the quality of raw sludge needs improvement before it can be used to obtain useful forms of energy. Energy recovery therefore plays a vital role in the management of sewage sludge. Various comparative studies have analyzed sewage sludge for energy recovery using alternatives such as life-cycle assessment (LCA) [7] and SWOT (strengths, weaknesses, opportunities, and threats) analysis [8].
For decades, thermochemical technologies such as combustion, gasification, and pyrolysis have been employed for biomass conversion [9]. A promising alternative is provided by torrefaction, which alters sludge into coal-like solid fuel particles. During torrefaction, the biomass is heated to a temperature range of 200-350 °C in the absence of oxygen. During the process, moisture is lost along with a partial loss of volatiles (approximately 20%), as a result of which the characteristics of the original raw biomass are altered [10]. With the removal of the light-fraction volatiles, the heating value of the torrefied biomass gradually increases [10]. This thermal treatment converts oxygen to carbon monoxide and carbon dioxide, which in turn reduces the O/C ratio but elevates the energy density and hydrophobicity of the sludge [11]. Torrefaction also lessens the moisture content of the biomass, an important benefit that inhibits biological degradation [12], increases the energy density [13], and enhances the combustion efficiency [14]. Despite these advantages, the study of non-lignocellulosic biomass is still limited. Various reactors have been used to carry out torrefaction, such as muffle furnaces [15], fixed bed reactors [16], auger reactors [17], and fluidized bed reactors [18]. To the best of our knowledge, a downdraft fixed bed has not yet been used for the torrefaction of biomass, although downdraft fixed bed reactors have been utilized for the gasification of biomass [19]. A study by Kou et al. [19] reveals the benefit of using a downdraft fixed bed over updraft and cross-draft bed reactors for gasification. This study therefore explores the use of a downdraft fixed bed reactor for torrefaction under different operating parameters. Higher heating value (HHV) or calorific value, different molar ratios, chemical exergy, torrefaction index, and severity factor were used to evaluate the optimum temperature range for torrefaction of sewage sludge using the downdraft fixed bed reactor.
Experimental Apparatus
A laboratory-scale downdraft fixed bed made of stainless steel was used for the torrefaction of sewage sludge. The bed height was 600 mm with an internal diameter of 30.7 mm. The schematic diagram of the bed is provided in Figure 1. Thermocouples were used to measure the temperatures of the inlet gas, the bed, and the reactor wall. Calorific value was measured using a bomb calorimeter (Parr Instrument Co., Model 1672, Moline, IL, USA), whereas for elemental analysis a Thermo Fisher Scientific Thermo FLASH 200 (Hudson, NH, USA) was used. In addition, an MK9000 (Eurotron Instruments, Chelmsford, UK) gas analyzer was used to analyze the gas emitted during torrefaction. Temperatures of 200, 250, 300, and 350 °C together with residence times of 0-50 min were used for this study. The American Society for Testing and Materials (ASTM) D3172 method was used for the proximate analysis of raw and torrefied sludge.
Materials
Sewage sludge was obtained from a wastewater treatment plant in Pocheon, South Korea. The sample was homogeneously mixed and dried at 105 °C for 24 h. The samples were then ground into powders and sieved into separate size fractions before torrefaction; the 250-355 µm fraction was selected for this study. Table 1 lists the properties of the sample, where 'others' indicates the inorganic components present in the sewage sludge.
Methods
A sample load of 15 g (dry weight) and a volumetric nitrogen flow of 300 Nm³/min were used for this study. Nitrogen helps maintain the inert atmosphere and control the rapid temperature rise within the downdraft fixed bed. The bed was heated to the desired temperature using a heating jacket. Once the desired temperature was reached, the weighed sample was fed from the hopper situated at the top of the reactor. The sample was torrefied at the desired torrefaction temperature (200-350 °C) and residence time (0-50 min) in the presence of a predetermined volumetric flow rate of nitrogen acting as an inert material and heat carrier. After completion of the test, the sample was taken out immediately.
Torrefied Product Analysis
All of the results obtained in this study are obtained prior to densification.The degree of torrefaction plays an important role in determining the quality and composition of the torrefied products.The degree of torrefaction is the ratio between the calorific value of the torrefied biomass to that of the raw biomass, which was calculated using the following equation:
$$\text{Degree of torrefaction} = \frac{\text{Calorific value of torrefied biomass (MJ/kg)}}{\text{Calorific value of raw biomass (MJ/kg)}} \tag{1}$$

The HHV, or calorific value, was measured in megajoules per kilogram of biomass. The energy density enhancement was quantified using the (non-dimensional) torrefaction index, which expresses the enhancement in energy density through the torrefaction of sewage sludge, Equation (2):

$$\mathrm{TI} = \frac{\text{Energy density enhancement at the design condition } (t_p)}{\text{Energy density enhancement at the reference condition (ref)}} \tag{2}$$

Chemical exergy has also been introduced in this study; the chemical exergy of the sewage sludge and of the torrefied product was calculated with the help of Equations (3) and (4) [20]:

$$e_{\mathrm{ch}}\ (\mathrm{MJ/kg}) = \frac{\mathrm{HHV}_{\mathrm{obtained}}\,\lambda_{\mathrm{biomass}} + 9417\,S}{1000} \tag{3}$$

$$\lambda_{\mathrm{biomass}} = \frac{1.0412 + 0.2160\,(\mathrm{H/C}) - 0.2499\,(\mathrm{O/C})\left[1 + 0.7884\,(\mathrm{H/C})\right] + 0.0450\,(\mathrm{N/C})}{1 - 0.3035\,(\mathrm{O/C})} \tag{4}$$

where HHV_obtained is the experimentally obtained calorific value, λ_biomass is a dimensionless coefficient relating the chemical exergy and the heating value of the biomass, and C, H, N, and O are the elemental compositions in wt.%. H/C is the ratio of hydrogen mass to carbon mass, and N/C and O/C are defined correspondingly for nitrogen and oxygen. Using Equations (3) and (4), the chemical exergy was calculated for all torrefied sewage sludge samples at the various temperatures and residence times.
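For concreteness, the following minimal sketch evaluates Equations (1)-(4) for a single sample. The numerical inputs are hypothetical placeholders rather than measurements from this study, the λ_biomass correlation is assumed to take the Szargut-type form of Equation (4), and the unit conventions in the exergy formula are likewise assumed for illustration.

```python
# Minimal sketch of Equations (1)-(4); all input values below are
# hypothetical placeholders, not measured data from this study.

def degree_of_torrefaction(hhv_torrefied, hhv_raw):
    """Equation (1): ratio of torrefied to raw calorific value (MJ/kg each)."""
    return hhv_torrefied / hhv_raw

def torrefaction_index(enh_design, enh_reference):
    """Equation (2): energy density enhancement relative to a reference case."""
    return enh_design / enh_reference

def lambda_biomass(h_c, o_c, n_c):
    """Equation (4): dimensionless exergy/heating-value coefficient
    (Szargut-type correlation, elemental mass ratios assumed)."""
    return (1.0412 + 0.2160 * h_c
            - 0.2499 * o_c * (1.0 + 0.7884 * h_c)
            + 0.0450 * n_c) / (1.0 - 0.3035 * o_c)

def chemical_exergy(hhv_kj_per_kg, lam, sulfur_fraction):
    """Equation (3): chemical exergy in MJ/kg; HHV in kJ/kg and S as a mass
    fraction are assumed unit conventions for illustration."""
    return (hhv_kj_per_kg * lam + 9417.0 * sulfur_fraction) / 1000.0

# Hypothetical sample: raw HHV 14.0 MJ/kg, torrefied HHV 16.5 MJ/kg,
# mass ratios H/C = 0.13, O/C = 0.45, N/C = 0.12, sulfur fraction 0.008.
lam = lambda_biomass(0.13, 0.45, 0.12)
print(f"degree of torrefaction = {degree_of_torrefaction(16.5, 14.0):.3f}")
print(f"lambda_biomass         = {lam:.3f}")
print(f"chemical exergy        = {chemical_exergy(16500.0, lam, 0.008):.2f} MJ/kg")
```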
Degree of Torrefaction
The HHV, or calorific value, describes the potential energy content of a biomass. One of the major advantages of torrefaction is the elevated energy content per unit mass of the torrefied yield. The degree of torrefaction can be used as an indicator of the relative energy gain in the torrefied product [21]; Nitthitron and Suthum [21] further note that a degree of torrefaction exceeding unity indicates a greater energy gain per unit mass.
Figure 2 illustrates the degree of torrefaction as a function of torrefaction residence time at various torrefaction temperatures. Increasing the torrefaction residence time and temperature significantly increased the HHV, which in turn increased the degree of torrefaction of the sewage sludge, except at 350 °C. At 200-300 °C, the degree of torrefaction increased with residence time; this increase is likely a result of the gain in calorific value due to the removal of oxygen. In contrast, pyrolysis reactions may have occurred at the higher torrefaction temperature, causing the degree of torrefaction at 350 °C to decrease with increasing residence time. A study by Martin et al. [22] suggested that the effect of torrefaction temperature is greater than that of residence time. Similarly, Barta et al. [23] found that torrefaction temperature was dominant over residence time up to 275 °C, whereas at 300 °C the residence time had a significant effect on the torrefied yield. The decrease at 350 °C may be attributed to the fact that sewage sludge is a non-lignocellulosic biomass containing thermally degradable organic components, which degrade easily at elevated temperatures and thereby deteriorate the degree of torrefaction. The increase was best realized between 200 and 300 °C, although the overall net gain in the degree of torrefaction was most pronounced at 250 °C. Increases in calorific value with torrefaction temperature were likewise reported by Zanzi et al. [24], Iroba et al. [25], and Nimlos et al. [26].
Other important parameters contributing to the characterization of a solid fuel are the volatile matter and fixed carbon, which are also partly responsible for the alteration of the calorific value [27]. As demonstrated in Figure 3a,b, torrefaction has opposite effects on these two parameters: with increasing torrefaction temperature and residence time, the fixed carbon and ash contents increase whilst the volatile fraction decreases. A torrefied product with less volatile fraction and more fixed carbon is desirable, as this amplifies its calorific value. At 300 °C there was a decrease of 13.77% in the volatile content, whereas an increase of 68.95% was observed for fixed carbon; the least changes in both were observed at 200 °C. Although the highest fixed-carbon value was obtained at 350 °C, the net gain is minimal compared to 250 and 300 °C. The ash content increased with torrefaction residence time and temperature and was highest at 350 °C; this increase is primarily due to the mass loss during the torrefaction process. Similar results were obtained by Park et al. [28]. An improved fixed-carbon value of torrefied biomass can be an advantage, as it aids the heat of combustion and improves the efficiency of overall combustion applications [29]. In contrast, an increase in ash content can negatively impact the usage of the torrefied product, as higher ash content is associated with fouling, slagging, and agglomeration of the bed [30]. A study by Deng et al. [31] demonstrated a decrease of 38.88% in the volatile content, whereas Mani [32] achieved only a 7.8% decrease; the difference stems from the types of biomass used in the pretreatment process, the former value being high due to the high volatile content of the agricultural residue considered, compared to the woody biomass considered in the latter [33].

The molar ratios of hydrogen and oxygen with respect to the relative carbon content, i.e., the O/C and H/C ratios, are substantial parameters for the characterization of fuel composition. As shown in Figure 4, torrefaction at 200 °C did not have a significant effect on either molar ratio, but increasing the torrefaction temperature from 250 to 300 °C produced a significant decrease. At 350 °C, a steep decrease was observed for both the H/C and O/C molar ratios, whereas at 250 and 300 °C the H/C molar ratio decreased gradually. For O/C at 300 °C, a slight increase was observed until a 20 min residence time, after which the fall in the O/C value was noticeable. These alterations in the molar ratios are primarily due to the release of bound water and the fractional removal of oxygen by decarboxylation and dehydration reactions [34]. This decrease consequently increased the gross calorific value, which in turn decreased the volatile content. In addition, with increasing torrefaction temperature and residence time, the hydrogen and oxygen contents of the torrefied product decreased, reducing the H/C and O/C ratios. The increase in torrefaction temperature and residence time thus provoked a reduction in the O/C and H/C molar ratios through the loss of mass in the form of carbon dioxide, light hydrocarbons, and water. A similar reduction trend in O/C was observed by Sandeep et al. [35] in their study of lignocellulosic biomass. The reduction in oxygen and hydrogen content strengthens the conclusion that hydroxyl groups deteriorate during torrefaction. Moreover, from an energy density perspective, C-C bonds carry higher energy than C-O or C-H bonds, so the relative increase in carbon content results in a better calorific value. The increase in relative carbon content and the decrease in other elements, such as O, N, H, and S, may also result in a decrease in the volatile fraction.
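As a quick reference for how such molar ratios are obtained, the sketch below converts an ultimate analysis (elemental wt.%) into atomic H/C and O/C ratios; the compositions shown are hypothetical placeholders, not data from this study.

```python
# Convert ultimate analysis (wt.%) to atomic H/C and O/C molar ratios.
# Composition values below are hypothetical placeholders.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_ratios(c_wt, h_wt, o_wt):
    """Return (H/C, O/C) atomic ratios from elemental weight percentages."""
    c_mol = c_wt / ATOMIC_MASS["C"]
    h_mol = h_wt / ATOMIC_MASS["H"]
    o_mol = o_wt / ATOMIC_MASS["O"]
    return h_mol / c_mol, o_mol / c_mol

# Hypothetical raw vs. torrefied compositions (wt.%):
samples = {"raw": (38.0, 5.5, 22.0),
           "torrefied 300 C, 50 min": (42.0, 4.2, 12.0)}
for label, (c, h, o) in samples.items():
    hc, oc = molar_ratios(c, h, o)
    print(f"{label}: H/C = {hc:.2f}, O/C = {oc:.2f}")
```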
Increasing the torrefaction temperature also favors the generation of gases such as carbon monoxide (CO), carbon dioxide (CO₂), total hydrocarbons (THC), and methane (CH₄), as demonstrated in Figure 5. The gas analysis was carried out independently in order to study the gases emitted during the torrefaction of sewage sludge. Gas emission was not studied for different residence times; rather, a continuous emission test from 50 to 350 °C was conducted to study the emission properties during torrefaction. CO₂ and CO were released above 250 °C, whereas CH₄ was released above approximately 270 °C, and only subtle traces of THC were generated up to 240 °C. The increase in CO₂ is associated with decarboxylation of the acid groups, as explained by White and Dietenberger [36], whereas methane is produced by cracking and depolymerization reactions [31]. As mentioned above, the decarboxylation reaction occurs at elevated temperature, which is also supported by the production of CO₂ and CO [37]. Various researchers have proposed 300 °C as the optimal torrefaction temperature [38,39]. From the results obtained in this study, it can be concluded that 250-300 °C can be considered the optimum temperature range for the torrefaction of sewage sludge. The difference between the optimum temperature range obtained here and in other research may be due to the different types of biomass used, i.e., lignocellulosic versus non-lignocellulosic biomass.
Torrefaction Index (TI)
A study by Prabir Basu et al. [40] suggests that the degree of torrefaction or energy densification alone cannot determine the quality of the torrefied product and proposes the use of the torrefaction index. Lee and Lee [41] likewise note that the energy yield alone cannot define the optimum operating condition for torrefaction. As a matter of fact, as the severity of the process escalates, the energy yield falls, which in turn reduces the net usable energy of the raw material. Providing an optimized condition for torrefaction would therefore limit the weight loss while boosting the calorific value. One effective way to assess energy quality is to analyze the exergy of the biomass. Pavelka [42] defines exergy as the maximal theoretical useful work acquired when a system is brought into thermodynamic equilibrium with its surroundings via a reversible process. In the context of biomass, exergy plays a vital role in evaluating the energy potential stored in the chemical bonds of its compounds.
From Figure 6 it can be seen that the chemical exergy increases with increasing torrefaction residence time and temperature. At 200 °C there is a gradual increase in exergy, but a significant increase is observed at 300 °C, followed by 250 °C, with the lowest values at 350 °C. The chemical exergy trend can be correlated with the O/C molar ratio and the degree of torrefaction. The decrease in the O/C molar ratio reflects the increase in relative carbon content, which in turn increases the chemical exergy. This may be because the elemental oxygen present in the biomass acts as an oxidizing agent rather than combustible matter, in contrast to the elemental carbon, which undergoes oxidation during the chemical reaction, releasing heat that can be converted to work. In addition, the trends followed by the degree of torrefaction and the chemical exergy are comparable. For 200-300 °C, both the degree of torrefaction and the chemical exergy increase with residence time. At 350 °C, the degree of torrefaction decreases with residence time, whereas the chemical exergy shows only a negligible increase. This again can be associated with the increase in the calorific value of the biomass due to the increase in relative carbon content with residence time. The highest exergy value obtained in this study was 17.05 MJ/kg at 300 °C and 50 min residence time. The greater the exergy, the greater the useful work of the torrefied biomass, which in turn can enhance thermochemical processes such as pyrolysis, combustion, and gasification. On the basis of chemical exergy, it can be concluded that 250-300 °C is the optimal torrefaction temperature range for sewage sludge.
Conclusions
The torrefaction properties of sewage sludge were investigated using a downdraft fixed bed reactor with respect to torrefaction temperature and residence time. The degree of torrefaction increased at higher torrefaction temperature and residence time. The total net gain in the degree of torrefaction was highest at 250 °C, whilst an adverse effect was seen at 350 °C with increasing residence time. Reductions in the O/C and H/C molar ratios and in the volatile content, together with increases in the fixed carbon and ash contents, were observed at escalated torrefaction temperatures. The increase in the degree of torrefaction and the decrease in O/C can be associated with the increase in carbon content, resulting in an increment in the chemical exergy of the torrefied biomass. This increase in exergy might improve the thermodynamic properties of the biomass; however, an economic analysis must be considered and further investigation made in order to gain a clear understanding. From all of the results obtained in this study, it can be concluded that torrefaction above 300 °C is not desirable; to obtain maximum yield, a temperature range of 250-300 °C should be considered for the torrefaction of sewage sludge.
This torrefaction temperature range of 250-300 °C for sewage sludge may vary depending on the origin of the sludge, its chemical composition, the reactor type, and other experimental parameters. Furthermore, an in-depth study of energy consumption and losses during torrefaction should be undertaken in future research to provide a better understanding of the torrefaction characteristics of sewage sludge.
Figure 1. Schematic diagram of the downdraft fixed bed reactor used in this study.
Figure 2. The degree of torrefaction of the torrefied sample as a function of torrefaction residence time.
Figure 4. Changes of (a) H/C and (b) O/C molar ratios as a function of torrefaction residence time.

Figure 5. Gas analysis from torrefaction of sewage sludge as a function of the temperature.
Figure 6. Chemical exergy of sewage sludge as a function of torrefaction residence time.
Table 1. Properties of raw sewage sludge.
Table 2 compares the experimental torrefaction index obtained in this study with the torrefaction index for different torrefaction regimes according to Prabir Basu et al. [40]; the values obtained in this study align well with the values provided there. At 350 °C, little improvement in the torrefaction index is demonstrated; therefore, torrefaction of sewage sludge above 350 °C is not required. If the polymeric composition of the biomass is known for specific torrefaction temperatures, then the TI can be used as a potential tool for preliminary design or for the selection of biomass ahead of operating an absolute test [40].
Table 2. Torrefaction index in different regimes.
LEX-EFT: The Light Exotics Effective Field Theory
We propose the creation of a Light Exotics Effective Field Theory (LEX-EFT) catalog. LEX-EFT is a generic framework to capture all interactions between the Standard Model (SM) and all (or at least a large class of) theoretically allowed exotic states beyond the Standard Model (bSM), indexed by their SM and bSM charges. These states are light enough to be on or nearly on shell in some collider processes. This framework, which subsumes beyond the Standard Model paradigms as generally as possible, is meant to extend recent successful implementations of bSM EFTs and complement e.g. the Standard Model Effective Field Theory (SMEFT), which can capture the off-shell effects of exotic fields. In this work, we review a general method for the construction of a complete list of gauge-invariant operators involving SM interactions with light exotics via iterative tensor product decomposition, up to the desired order in mass dimension. Each operator is characterized by specific Clebsch-Gordan coefficients determined by the charge flow; we show how this charge flow affects the range of EFT validity and cross sections associated with an effective operator. We create an example catalog of exotic scalars coupling to SM gauge boson pairs, and we highlight some operators with exotic weak $\mathrm{SU}(2)_{\text{L}}$ charges that can produce spectacular LHC phenomenology. We further demonstrate the utility of the LEX-EFT approach with several examples of effects on kinematic distributions and cross sections that would not be captured by EFTs agnostic to the exotic degrees of freedom and may evade the main inclusive collider searches tailored to the existing preferred set of standard bSM theories.
Introduction
As the LHC era continues, it is important to leave no stone unturned in the search for new phenomena beyond the Standard Model (bSM). The space of all possible phenomenological signatures is vast, and new methods are needed to ensure that a maximal region of this space is being explored. In recent years, new ground has been opened up by the introduction of general approaches. Computational techniques like anomaly detection, for example, remain model agnostic [1]. Along different lines, the Standard Model Effective Field Theory (SMEFT) attempts to capture new physics by enumerating a complete set of general operators that govern Standard Model interactions, with all new states assumed heavy (off shell) and integrated out of the theory [2].

The use of effective field theories (EFTs) has proven to be a phenomenological strategy of great utility in the LHC era, as it promises to cast the widest possible net in the pursuit of new physics. EFTs offer a simple formalism for cataloging the interactions of SM and bSM states. Much work has been done within the SMEFT framework, mentioned just above, which at the time of writing has a complete catalog of SM operators up to dimension eight and includes Higgs interactions. This approach has led to a plethora of collider analyses intended to measure and constrain the SMEFT Wilson coefficients in order to search for new physics. Discoveries of or constraints on bSM states based on SMEFT analyses require matching between the effective theory and a full bSM model. Recent applications of EFTs beyond the Standard Model include a diverse and vibrant dark matter (DM) program [3-7] and interactions of axion-like particles [8]. The DM EFT catalog, for example, has led to the exploration of many new possible dark matter discovery channels at the LHC [9].

Motivated by these successes, we seek to expand the theoretical coverage of the phenomenological landscape by introducing the Light Exotics Effective Field Theory (LEX-EFT), a systematized general approach to interactions of exotic states with the SM. Light exotic fields are those light enough not to be totally integrated out of the theory. Since LEX particles can appear on shell in at least some considered phenomenological processes, LEX-EFT is complementary to the consideration of off-shell processes carried out in e.g. the SMEFT.
In this work, to be precise, we propose the creation of a comprehensive effective operator catalog of interactions between exotic states and the Standard Model, indexed by the specific gauge representations of the exotic fields. Ideally, LEX-EFT keeps the concision and generality that makes the EFT approach to phenomenology advantageous. In particular, we view the advantages of LEX-EFT as follows:

• LEX-EFT offers a complete list of all possible interactions between light exotics and the Standard Model up to the desired order in effective cut-off (mass dimension). It is thus a guide for bSM precision and collider searches, it allows for the analysis of new event topologies, and it offers a comprehensive map of event kinematics without the burden of specifying UV-complete models.

• A complete LEX-EFT catalog would subsume other classes of exotic bSM models including supersymmetry, exotic Higgs models, and dark matter EFTs. Such a complete catalog may illuminate new interactions in these theories and thus new phenomenological channels for study.

• The LEX-EFT catalog would also bring to theoretical consideration bSM states that have not received model-building attention. It would thus cast a wider net over all of theory space in a systematic manner, accomplishing a goal that in the past few years has crystallized and started to receive attention from the theory community [10-12].

We imagine the LEX-EFT approach would be closely followed by a simplified-model-building approach, which would spark new theoretical innovation.
The LEX-EFT approach removes some of the model agnosticism of other general approaches to phenomenology, yet it allows the capture of many phenomenological features of collider processes that would not be possible otherwise. We highlight the following distinctive features in this work:

• Kinematics and collider cross sections: Using unique LEX operators allows one to keep track of process kinematics, which are vital in constructing collider searches for new physics. It also allows for the accurate computation of collider production cross sections, scaled by the relevant effective operator coefficients, up to the validity limits of the EFT. This allows full consideration of all processes involving production and decay of exotic states in collider searches.

• Charge flow and validity of parameter space: Constructing effective operators that are singlets under all gauge groups requires specification of the Clebsch-Gordan coefficients in operators linking light exotic fields to the SM. For any given set of fields, there may be multiple ways to perform charge contractions. Each of these contractions then corresponds to a unique operator, which gives a picture of the charge flow of the process involved. There may be naturally large coefficients associated with some operators, which drastically affects predictions of production cross sections in the theory. Moreover, we find that the range of validity of an effective operator may vary widely based on the choice of charge contraction, even if the fields involved in the operators are the same.
Even though the LEX-EFT approach focuses on the collider production of on-shell new states, the proposed operator catalog does have implications for loop-level processes, which should be explored in future work. In particular:

• Operator correlations: A theory containing a specific LEX state has operators that may have correlations based on gauge invariance or other theoretical considerations. This approach works even with LEX states that are totally off shell. The operator catalog for off-shell states leads to a specific list of correlated SMEFT operators that could be measured once the bSM states are integrated out.

• Precision measurements at loop level: Specifying the light exotic state appearing in a theory facilitates the computation of precision quantities such as electroweak oblique parameters, lepton anomalous magnetic dipole moments, b → sγ, etc., which may not be obvious from other operator catalogs.

This paper is organized as follows. In Section 2 we review the iterative construction of Lorentz- and gauge-invariant operators including light exotic fields. We moreover introduce an example catalog of operators featuring scalars in higher-dimensional representations of SU(2)_L. In Section 3 we explore the idea of exotic charge contraction and demonstrate how the quantum numbers of light exotics can affect both LHC cross sections for (b)SM processes and the valid experimentally accessible EFT parameter space. In Section 4, we provide two phenomenological examples of light-exotics models that produce identical final states at the LHC but exhibit totally distinct kinematics that cannot be captured in any EFT that excludes bSM degrees of freedom. Section 5 summarizes this work and suggests avenues of future research within the LEX-EFT framework.
An iterative tensor product method to construct new singlet operators
The LEX-EFT framework is underpinned by a straightforward group-theoretic procedure for obtaining a complete operator list of novel gauge-singlet operators up to a specified dimension. We therefore begin this work by describing the procedure in general and providing some reasonably self-contained examples. We start with a new LEX state denoted by Φ_i that lies in a specified representation of SM and bSM gauge/global groups. The goal is to create a complete catalog of singlet operators that couple these LEX fields and SM fields ψ_i up to a given order in an EFT cutoff Λ. More precisely, each effective operator has the form

$$\mathcal{O}^{(4+d)} = \frac{\lambda}{\Lambda^{d}}\;\Phi_{i_1}\cdots\Phi_{i_m}\,\psi_{j_1}\cdots\psi_{j_n}, \tag{2.1}$$

plus first derivatives of such fields, and we must find a complete list of all charge singlets composed of SM and LEX fields up to the desired mass dimension 4 + d. Further, the coupling coefficients λ contain the group-theoretic information about how to complete the charge contraction of the fields in the operator. As we discuss in Section 3, there may be more than one way to contract the charges, and in general there are many distinct charge contractions of multi-field operators. The operator coefficients are then different, and we must consider these separate operators.
In order to make sure our list of charge singlets is complete, we follow an iterative procedure that exploits the known group-theory tensor products of irreducible representations of semisimple Lie groups. We consider the fields Φ_i and ψ_j to be in some irreducible representation(s) of these groups, with an r-dimensional representation denoted in general by r. We can then consider the representation of the direct product of pairs of fields,

$$\mathbf{r}_i \otimes \mathbf{r}_j = \bigoplus_k \mathbf{r}_k. \tag{2.2}$$

For any given group there exists a list of such tensor products. We now discuss the construction of invariants from this list of bilinear tensor products.
Observation. If there exist invariant combinations of n + 1 and m + 1 fields transforming in the direct-product representations $\mathbf{r}_1 \otimes \cdots \otimes \mathbf{r}_n \otimes \mathbf{p}$ and $\bar{\mathbf{p}} \otimes \mathbf{r}'_1 \otimes \cdots \otimes \mathbf{r}'_m$ of a group, then there exists an invariant combination of n + m fields in the reducible representation $\mathbf{r}_1 \otimes \cdots \otimes \mathbf{r}_n \otimes \mathbf{r}'_1 \otimes \cdots \otimes \mathbf{r}'_m$ [15].
Example. Suppose that two distinct tensor products contain the same irreducible representation e; that is, that

$$\mathbf{a} \otimes \mathbf{b} \supset \mathbf{e} \quad \text{and} \quad \mathbf{c} \otimes \mathbf{d} \supset \mathbf{e}. \tag{2.3}$$

In this case we immediately infer the existence of the two trilinear invariants

$$(\mathbf{a} \otimes \mathbf{b} \otimes \bar{\mathbf{e}})_{\mathbf{1}} \quad \text{and} \quad (\mathbf{c} \otimes \mathbf{d} \otimes \bar{\mathbf{e}})_{\mathbf{1}}, \tag{2.4}$$

and we can also create a new iterated invariant by exploiting the fact that e ⊗ ē contains a singlet:

$$\big([\mathbf{a} \otimes \mathbf{b}]_{\mathbf{e}} \otimes [\bar{\mathbf{c}} \otimes \bar{\mathbf{d}}]_{\bar{\mathbf{e}}}\big)_{\mathbf{1}}. \tag{2.5}$$

This singlet contains the direct product of four irreducible representations, which can be mapped back to an operator containing SM and/or LEX fields. We note that the "intermediate representation" e need not be a representation corresponding to the fields in the theory. It may, however, be useful in determining the flow of charge.
This process can be iterated with further nesting of bilinear tensor products in order to create singlets containing five representations, then six, and so on. By continuing in this manner and mapping onto (b)SM fields, we can create gauge-invariant operators with more states. The iterative insertion of tensor products may be systematized to create complete lists of invariants which contain a specified number N of LEX/SM fields in irreducible representations. We refer to such singlets as "N-field invariants". The lists of N-field invariants will be complete as long as all possible intermediate states (representations) are accounted for. The complete list of invariants can then become a list of effective operators with predetermined field content up to the desired order in an effective field theory expansion (e.g. in powers of the cutoff, Λ⁻¹).

For example, suppose we wanted to create a complete list of invariants containing four fields. We would begin by noting the representations of the SM or LEX fields, thus mapping {Φ_i, ψ_i} → r_i. We would then determine all possible bilinear tensor products that involve these representations. From here we can create a list of three-state invariants. We then follow the iterative expansion process above, inserting all possible bilinears in intermediate states, to create a complete list of singlet products containing four terms. These can then be mapped back into the SM or LEX states to create a complete list of operators. The operators obtained by mapping back onto the states {Φ_i, ψ_i} will be proportional to the Clebsch-Gordan coefficients that contract the specific charge indices of the four-state invariants.

Though the list of N-field invariants is complete, the resulting terms contain fields of various mass dimension (gauge field-strength tensors, scalars, spin-1/2 fermions, etc.) and therefore may map to operators of various effective dimension. Nevertheless, a similar iterative process may be employed to create lists with five fields, six fields, and so on. Eventually all operators up to the desired dimension in the EFT will be found. We give a brief argument below concerning the completeness of this process.
Regarding completeness. There exist in the theory a finite number M of SM/LEX fields transforming in irreducible representations r_i with i ∈ {1, ..., M}. An invariant of interest contains a specified number N of these fields, and since we are concerned with invariants involving N fields, we must contract all indices. The intermediate sub-product of a LEX state in representation r_LEX with any other states of the theory will be in some intermediate representation r′. The sub-product of the remaining states must be contracted in the conjugate representation r̄′, so that singlets take the form

$$\big([\mathbf{r}_{\rm LEX} \otimes \mathbf{r}_{i_1} \otimes \cdots]_{\mathbf{r}'} \otimes [\mathbf{r}_{j_1} \otimes \cdots]_{\bar{\mathbf{r}}'}\big)_{\mathbf{1}}.$$

There will be a maximal representation size for any sub-product of states within the N-field invariant, hence a maximum representation size of any intermediate representation r′.
We now argue from induction. To build three-field invariants involving a LEX field, we need only consider the m possible bilinear tensor products of the LEX state with other representations allowed in the theory, [r_LEX ⊗ r_i]_{r′_j}, to obtain the finite list of irreducible representations r′ in the direct product. If any single field in the theory is in the conjugate representation r̄′_j, then we can directly contract indices to form an invariant:

$$\big([\mathbf{r}_{\rm LEX} \otimes \mathbf{r}_i]_{\mathbf{r}'_j} \otimes \bar{\mathbf{r}}'_j\big)_{\mathbf{1}}.$$

With a list in hand of all m possible bilinear products r_LEX ⊗ r_i in representations r′_j, we can proceed to construct the four-field invariants. We find the direct products of the allowed representations r_k ⊗ r_l that are in a given conjugate representation r̄′_j and contract these fields according to

$$\big([\mathbf{r}_{\rm LEX} \otimes \mathbf{r}_i]_{\mathbf{r}'_j} \otimes [\mathbf{r}_k \otimes \mathbf{r}_l]_{\bar{\mathbf{r}}'_j}\big)_{\mathbf{1}}$$

to obtain singlets. To proceed to five fields, we now consider all possible trilinear products of the form r_LEX ⊗ r_i ⊗ r_j. We note that we have already found by exhaustion the representations of the bilinear products of the first two fields in the previous step; those bilinears were in representations r′_j such that r_LEX ⊗ r_i ⊃ r′_j. We can thus iterate the bilinear tensor products r′_j ⊗ r_j ⊃ r′_k to find the representations r′_k of all trilinear products. We then find the remaining bilinear representations r_k ⊗ r_l that are in the conjugate representation r̄′_k and contract these fields to form the five-field invariant. This process can be repeated indefinitely and will ultimately produce all possible terms; we only need to know the list of bilinear tensor products that involve the relevant SM/LEX fields and the intermediate representations r′_j, r′_k, and so on. We note that this method can be applied not only to constructing gauge singlets but also, straightforwardly, to representations of the Lorentz group, since fields (and their first derivatives) in irreducible representations of the Lorentz group can be characterized by SU(2) × SU(2) quantum numbers.
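As a concrete toy version of this iterative procedure for a single group factor, the sketch below works with SU(2), whose tensor products follow the usual angular-momentum addition rule (cf. Equation (2.13) below). It counts the number of independent contraction paths by which a given list of multiplets can be folded down to a singlet, caching the intermediate representations exactly as in the induction argument above; this is an illustrative aid, not the full multi-group machinery of the catalog.

```python
from functools import lru_cache

def su2_product(n, m):
    """Dimensions appearing in the SU(2) tensor product n (x) m."""
    return list(range(abs(n - m) + 1, n + m, 2))

def singlet_paths(dims):
    """Count contraction paths of the listed SU(2) multiplets (by dimension)
    down to a singlet, folding in one field at a time and tracking all
    intermediate representations."""
    @lru_cache(maxsize=None)
    def fold(i, current):
        if i == len(dims):
            return 1 if current == 1 else 0
        return sum(fold(i + 1, nxt) for nxt in su2_product(current, dims[i]))
    return fold(1, dims[0])

# Two weak triplets (W field strengths) and a quintuplet contract to a
# singlet in exactly one way, via the intermediate 5: [3 (x) 3]_5 (x) 5.
print(su2_product(3, 3))            # [1, 3, 5]
print(singlet_paths((3, 3, 5)))     # 1
print(singlet_paths((2, 6, 3, 3)))  # doublet + sextet + two triplets -> 1
```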
We can choose different ways to build the effective operator catalog. One way is, given a specific field content, to build all operators containing a certain number of fields. Another is to build all operators up to a certain dimension in the EFT cutoff Λ. Yet another way is to build all operators that interact through a certain portal. There are some good applications of this in studies of dark matter, where a DM candidate may interact, for example, through a quark portal, so that all possible gauge-singlet DM-quark operators should be specified.

Note that in order for a model to be a true EFT, it must contain every possible operator up to a specified mass dimension. Generating complete bases of independent operators for EFTs has been the subject of much research [2, 16-18], involving both traditional group theory constructions and the Hilbert series method, and there exist computational tools [19-21] for listing invariant products of fields. There are some complications when (covariant) derivatives are present; namely, some seemingly different operators may be related via integration by parts, and some operators may vanish when equations of motion for the fields are enforced. In the following sections, we consider extending the SM by adding scalars in many different representations of the SM gauge group. A full list of invariant operators for each model is beyond the scope of this work. Instead, we take a signal-based approach, and we choose to outline only those operators that can result in diboson resonances. Any of these models could be promoted to a complete EFT in order to take full advantage of the formalism, and we leave such efforts to future work.
Example: A catalog of exotic scalars in the diboson portal
The rest of this section is devoted to providing an example LEX-EFT catalog of operators that produce novel phenomenology. This operator list extends to mass dimension seven. We begin with a simple phenomenological idea: we wish to catalog all couplings between a CP-even spin-0 field and pairs of SM gauge bosons. These new LEX scalars ϕ can carry various SM quantum numbers but are restricted to be singlets under any bSM gauge groups. Our example catalog serves two demonstrative purposes. First, it gives further practice in using the tensor product technique to produce novel singlet operators; second, it demonstrates that a simple idea, the diboson portal coupling to a single scalar, can give rise to disparate and novel event topologies.

The complete list of operators is found in Tables 1-4, which are organized by the mass dimension of the operators. Again, we list operators up to dimension seven, including insertions of Higgs fields. For any operator that contains a Higgs insertion, the Higgs field may be set to its vacuum expectation value, lowering the effective mass dimension of the operator at the cost of a v/Λ suppression. We have written CP-preserving terms only. In the left columns we list the LEX scalar field quantum numbers under the SM gauge group SU(3)_c × SU(2)_L × U(1)_Y. The right columns contain the effective operators falling under each category.
Let us first discuss LEX states with SU(3)_c quantum numbers. In order to maintain gauge invariance, operators that contain a single gluon field-strength tensor G_{μν} must necessarily contain a LEX field ϕ in the adjoint representation (8) of SU(3)_c. So-called color octets appear in many bSM scenarios, such as SUSY and minimal flavor violation (MFV), and exhibit interesting and varied phenomenology [22-25]. Diboson couplings of color octets, in particular, do appear in the literature [26-28] but are under-discussed, with most attention on the digluon coupling and the resultant dijet resonances. A color-octet LEX state might be a singlet under SU(2)_L or have nontrivial weak quantum numbers. Color octets with SU(2)_L quantum numbers are quite interesting but have received far less phenomenological attention than weak-singlet color octets. For instance, a weak-doublet color octet with SM quantum numbers (8, 2, 1/2) was proposed in the Manohar-Wise model [22] and produces some interesting collider signatures [28, 29]. Yet this model is still understudied, as the masses of these fields remain largely unconstrained by collider searches. LEX fields with these quantum numbers may couple to W_{μν}G^{μν} with the addition of one Higgs insertion to create an SU(2) singlet; namely,

$$\frac{1}{\Lambda^{2}}\,H^{\dagger}_{i}\,(\sigma^{a})_{ij}\,\phi^{A}_{j}\;W^{a}_{\mu\nu}G^{A\,\mu\nu} + \text{h.c.},$$

with σ^a (at least proportional to) the generators of the fundamental representation of SU(2)_L, such that a is a weak adjoint index and i, j are fundamental indices (see Table 1 and following for index conventions). Similarly, the biadjoint field with SM quantum numbers (8, 3, 0) has only been studied, to our knowledge, in the context of electroweak oblique corrections [30]. Within the LEX-EFT framework, such a field may couple to W^{a}_{μν}G^{A μν} through the dimension-five operator

$$\frac{1}{\Lambda}\,\phi^{aA}\,W^{a}_{\mu\nu}G^{A\,\mu\nu}.$$

We do note that at dimension seven, even the standard weak-singlet color-octet (8, 1, 0) scalar may couple to W_{μν}G^{μν} through the operator

$$\frac{1}{\Lambda^{3}}\,\phi^{A}\,(H^{\dagger}\sigma^{a}H)\,W^{a}_{\mu\nu}G^{A\,\mu\nu}.$$

It is also possible for color octets in the quadruplet (4) and quintuplet (5) representations of SU(2)_L to couple to the diboson pairs W^{a}_{μν}G^{A μν} through operators with additional Higgs insertions. These operators are of particular interest because they contain multiply-electrically-charged states.
LEX fields that couple to a pair of gluon field strengths G_{μν}G^{μν} may be in various representations. With the decomposition

$$\mathbf{8} \otimes \mathbf{8} = \mathbf{1} \oplus \mathbf{8} \oplus \mathbf{8} \oplus \mathbf{10} \oplus \overline{\mathbf{10}} \oplus \mathbf{27},$$

we see the LEX state may be in a singlet, adjoint, decuplet (10), or 27 of SU(3)_c. The LEX states in these operators may appear at dimension five as SU(2)_L × U(1)_Y singlets (1, 0); they may appear at dimension six with SU(2)_L × U(1)_Y quantum numbers (2, 1/2) via the insertion of one Higgs field, and they may appear at dimension seven with two Higgs insertions in the weak triplet (3, 0) or singlet (1, 0) representations. We note that a field in the 10 representation can be written as a symmetric tensor with three fundamental indices, and one in the 27 can be written as a symmetric tensor with two fundamental and two anti-fundamental indices. We make use of this notation in Tables 1 and 4.
Higher-dimensional representations of SU(2) L
To elaborate upon this example, we now discuss the construction of operators involving various representations of the weak SU(2)_L gauge group. It is well known that the representations of SU(2) may be mapped onto simple spin algebra from quantum mechanics, where the n-dimensional representation maps onto objects of spin J with n = 2J + 1. Thus, for example, a field in the five-dimensional representation maps to a spin-2 object with five possible spin projections: J₃ ∈ {−2, −1, 0, 1, 2}. We may then infer the tensor product relations among operators containing fields charged under SU(2). Recall that the tensor product of objects with spins J and L with J ≥ L follows

$$J \otimes L = (J - L) \oplus (J - L + 1) \oplus \cdots \oplus (J + L). \tag{2.13}$$

As an example, consider the tensor product of two three-dimensional representations of SU(2), 3 ⊗ 3. The triplets of SU(2) map to J = L = 1, so the possible spin-product states are J ∈ {0, 1, 2}, corresponding to the one-, three-, and five-dimensional representations of SU(2). We therefore arrive at the tensor product relation

$$\mathbf{3} \otimes \mathbf{3} = \mathbf{1} \oplus \mathbf{3} \oplus \mathbf{5}$$

in SU(2). From here, we can use the iterative tensor product method to construct the singlet operators that couple LEX states in higher-dimensional representations of SU(2) to pairs of gauge bosons.
All possible representations can be constructed by taking successive products of the fundamental. These higher-dimensional representations may be denoted as symmetric tensors. Totally symmetric tensors of dimension d and rank r have $\binom{d+r-1}{r}$ independent components. For SU(2), d = 2, so n = r + 1. Thus the n-dimensional representation is a rank-(n − 1) symmetric tensor. As an example, the 6 of SU(2) may be
represented as a rank-five tensor ϕ_{ijklm}. We can write the covariant derivative acting on a general SU(2)_L n-multiplet Φ as

$$D_\mu \Phi = \left(\partial_\mu - i g\,W^{a}_{\mu}\,\tau^{a}_{n} - i g'\,Y B_\mu\right)\Phi,$$

where τ^a_n are the generators of the n-dimensional SU(2)_L representation.

Table 4. Dimension-seven exotic operators that couple boson pairs to color-charged bSM fields ϕ with specified SM quantum numbers. Indices are as shown in Tables 1-3.

As is typical, the third generator of the group is diagonalized, and the eigenstates are those states with definite electric charge after electroweak symmetry breaking. Thus, it may be worth explicitly stating the relation between these states and the symmetric (n − 1)-tensors described above. If we label the isospin values as {−J, ..., J − 1, J}, then we define

$$\phi_{\underbrace{1\cdots1}_{k}\,\underbrace{2\cdots2}_{n-1-k}} = \frac{\Phi_{J_3}}{\sqrt{\binom{n-1}{k}}}, \qquad J_3 = k - J,$$

where the ϕ_{i₁...i_{n−1}} are totally symmetric. This ensures that e.g. ϕ^{†ijkl}ϕ_{ijkl} is a canonically normalized mass term. Writing all higher representations in terms of symmetric tensors makes the construction of invariant operators much simpler, as it only requires all indices to be contracted. We define the higher representations as having all lower indices, and when required we raise indices with the invariant Levi-Civita symbol. Once the operators are written, the charged-state interaction terms may be extracted via the above relation.
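The sketch below implements this symmetric-tensor embedding numerically for an SU(2) quintuplet and verifies that the binomial normalization yields a canonically normalized mass term; the specific index convention (the number of indices equal to 1 fixing J₃) and the normalization are our assumptions for illustration, chosen to be consistent with the canonical-normalization requirement stated above.

```python
import itertools, math

def embed(n, charge_amps):
    """Map the n isospin-eigenstate amplitudes Phi_{J3} onto a totally
    symmetric rank-(n-1) tensor with indices in {1, 2}, normalized so the
    mass term is canonical (assumed binomial normalization)."""
    rank = n - 1
    tensor = {}
    for idx in itertools.product((1, 2), repeat=rank):
        k = idx.count(1)  # number of '1' indices fixes J3 = k - J
        tensor[idx] = charge_amps[k] / math.sqrt(math.comb(rank, k))
    return tensor

n = 5                                   # SU(2) quintuplet
phi_amps = [0.3, -1.2, 0.7, 2.0, -0.5]  # hypothetical Phi_{J3} amplitudes
t = embed(n, phi_amps)

# Contract all indices: sum |phi_{i1...i4}|^2 over every index assignment.
mass_term = sum(abs(v) ** 2 for v in t.values())
print(math.isclose(mass_term, sum(a * a for a in phi_amps)))  # True
```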
While models with additional weak doublets or triplets have been studied extensively, fields in higher-dimensional representations of SU(2)_L have received less attention. Of particular interest is the five-dimensional representation. Explicitly, a scalar quintuplet of SU(2)_L has isospin components Φ = (Φ^{++}, Φ^{+}, Φ^{0}, Φ^{−}, Φ^{−−}). Here we consider the quintuplet to have zero hypercharge, so the electric charges of the states are integral and range from −2 to +2. The 5 of SU(2)_L is a real representation, so we may enforce Φ^{−−} = (Φ^{++})^† and Φ^{−} = (Φ^{+})^†. We express the quintuplet as a rank-four symmetric tensor ϕ_{ijkl}, and we use group theory to outline the possible singlet operators.
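Since the electric charge of each component follows Q = T₃ + Y, the charge content of any candidate multiplet can be tabulated immediately; a minimal sketch, using the (n, Y) assignments that appear as examples in this section:

```python
from fractions import Fraction

def multiplet_charges(n, hypercharge):
    """Electric charges Q = T3 + Y for an SU(2)_L n-plet, T3 = J, ..., -J."""
    j = Fraction(n - 1, 2)
    return [j - k + Fraction(hypercharge) for k in range(n)]

# Quintuplet with Y = 0 and (complex) quadruplet/sextet with Y = 1/2:
for n, y in [(5, 0), (4, Fraction(1, 2)), (6, Fraction(1, 2))]:
    qs = multiplet_charges(n, y)
    print(f"n = {n}, Y = {y}: Q = {[str(q) for q in qs]}")
```

For the quintuplet this reproduces the charges (+2, +1, 0, −1, −2) quoted above.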
Filling out the catalog; phenomenological observations
With some formalism in hand for higher-dimensional representations of SU(2)_L, we now describe some illustrative examples of operators, which will also help demonstrate the use of tensor products to find all singlets. As written above, we have the SU(2)_L tensor product decomposition 3 ⊗ 3 = 1 ⊕ 3 ⊕ 5. We may identify the fields in the three-dimensional representation of SU(2)_L with the gauge bosons, so that the product 3 ⊗ 3 corresponds to the bilinear W_{μν}W^{μν}. We can now find a singlet product of LEX states that can marry this bilinear to make a weak singlet.
The 5 may correspond to a single LEX state in the five-dimensional representation of SU(2), denoted as above by ϕ_{ijkl}; thus the quintuplet may couple to two W field-strength tensors. Expanding this operator in terms of the charged components of the exotic scalar shows that it allows the scalar to decay into boson pairs. Also of interest are the four- and six-dimensional representations, which must be complex in order for the states to have integral electric charges. Their weak hypercharges must be equal or opposite to that of the SM Higgs, and they may couple to boson pairs via dimension-six invariant operators. Explicitly, with Y = 1/2 the quadruplet has states Φ = (Φ^{++}, Φ^{+}, Φ^{0}, Φ^{−}) and the sextet has states Φ = (Φ^{+++}, Φ^{++}, Φ^{+}, Φ^{0}, Φ^{−}, Φ^{−−}). Moving on, we note that we can compose more singlets by inserting Higgs fields to absorb SU(2) tensor indices. For example, we have the tensor product

$$\mathbf{2} \otimes \mathbf{6} = \mathbf{5} \oplus \mathbf{7} \supset \mathbf{5},$$

with spins 1/2 ⊗ 5/2 = 2 ⊕ 3. Identifying the SU(2) doublet with the Higgs and the sextet with the LEX field ϕ_{ijklm}, we can construct the singlet ϕ_{ijklm}H^{i}W^{jk}_{μν}W^{lm\,μν}. This corresponds to the iterated tensor product invariant 2 ⊗ 6 ⊗ 3 ⊗ 3. The tensor product of a quadruplet with a single Higgs doublet, meanwhile, contains a triplet and a quintuplet; that is,

$$\mathbf{2} \otimes \mathbf{4} = \mathbf{3} \oplus \mathbf{5},$$

where in the latter the exotic state is "promoted" to a 5, and in the former it is "demoted" to a 3. Thus, we consider two operators, one for each contraction, to form singlets that couple W pairs to these exotic fields at dimension six or seven.
We note that in addition to coupling directly to pairs of gauge field-strength tensors, LEX states may couple to pairs of weak gauge bosons by coupling to the Higgs kinetic term D_μH (D^μH)^† [31]. For example, the LEX SU(2)_L triplet appears in the dimension-five operator

$$\frac{1}{\Lambda}\,\phi^{a}\,(D_\mu H)^{\dagger}\sigma^{a}(D^{\mu}H).$$

Once Higgs VEVs are inserted, this becomes an operator of effective dimension three that couples the triplet directly to weak gauge boson pairs with a coefficient proportional to v²/Λ. With additional Higgs insertions, LEX states in the four- and five-dimensional representations of SU(2)_L may also appear in this manner in operators up to mass dimension seven.

Finally, operators may have a combination of gauge field-strength tensors and covariant derivatives acting upon Higgs doublets that precipitate the diboson couplings. An interesting example is a dimension-six operator involving the color-octet weak-doublet scalar, one gluon field-strength tensor, and covariant derivatives acting on the Higgs doublet. If the Higgs VEV is inserted into this operator, it couples the octet to a gluon and a weak gauge boson. However, if the Higgs VEV is not inserted, it has the interesting property of coupling the octet to a gluon-Higgs pair.
The phenomenology of the diboson portal operators is very interesting and complex. One production process common to all diboson operators is associated production, pp → ϕV, where a LEX field is produced along with one SM gauge boson. Additionally, any non-singlet LEX field may be pair-produced via gluon fusion or vector boson fusion. What happens after production of these LEX states can be spectacular. As mentioned above, the LEX multiplets in larger representations of SU(2) contain multiply charged states. In order to ensure that all operators preserve electromagnetism, the weak hypercharge assignments of LEX states always guarantee integer electric charges of these states. We then find a characteristic multi-gauge-boson topology for the collider production of these states. We have previously studied some models with gauge boson final states [27, 32], but there is a lot of ground left to cover. One interesting example is the associated production of a multiply charged SU(2)_L quintet state via the effective quintuplet operator introduced above. Here the doubly charged state is produced in association with a W; namely, pp → Φ^{++}W^{−}, where the charged state decays to same-sign Ws via Φ^{++} → W^{+}W^{+}. Thus the entire process contains three Ws, two of which produce a mass resonance, and the other of which may be significantly boosted. The full process can be written as

$$pp \to [W^{+}W^{+}]\,W^{-},$$

where the brackets indicate the mass resonance. An even more complex example is the production and decay of a weak-sextet scalar through e.g. a dimension-six operator of the type discussed above. This state contains the fields Φ = (Φ^{+++}, Φ^{++}, Φ^{+}, Φ^{0}, Φ^{−}, Φ^{−−}). We may then have a simple electroweak quark-fusion process like qq̄′ → Φ^{+++}Φ^{−−}. Provided that there is a mass-splitting term for the multiplet, the multiply charged states will cascade decay via the electroweak interaction; for example, Φ^{+++} → Φ^{++}W^{+} → Φ^{+}W^{+}W^{+}. The singly charged states may then decay to WZ through the effective operator. Thus the entire process contains seven gauge bosons:

$$pp \to \big[\big[[W^{+}Z]\,W^{+}\big]\,W^{+}\big]\;\big[[W^{-}Z]\,W^{-}\big],$$

where brackets once more indicate the mass resonances in the decay chains. It happens [33] that such a mass splitting can be introduced via an interaction of the form (Φ^{†}τ^{a}Φ)(H^{†}σ^{a}H). After electroweak symmetry breaking, the Higgs VEV results in differing mass contributions to the different isospin states of the bSM multiplet. The triply charged state would be the heaviest, and this would allow for the above decay chain.
Charge flow and cross sections
In this section, we examine the utility of specifying the light-exotic content of a theory through the lens of some simple examples. In particular, we introduce two families of toy models with various bSM color-charged fields and demonstrate how the SU(3)_c representation alone, with all else being equal, can dramatically affect LHC cross sections. These examples are part of a class totally separate from the LEX scalars communicating with the Standard Model through the diboson portal; we hope that the change of tack serves to demonstrate the breadth of the theory space that can be explored with LEX-EFT.
Completing an exotic operator with more exotics
Consider first an operator governing an SU(3)_c color-sextet scalar φ together with quarks and gluons but no leptons. The electric charge of φ is Q = 1/3 if the two quarks are of unequal type (i.e., one up and one down). The coefficients Π^as_ij are Clebsch-Gordan coefficients that project out a color singlet from the direct-product representation 3 ⊗ 3 ⊗ 6 ⊗ 8. The operator (3.1) produces multijet events at the LHC through single sextet production, and a cursory investigation of a similar operator is carried out in Section 4. Here, we simply wish to observe that the group-theoretic coefficients Π^as_ij - and the size of the resulting cross sections - are not unique. In particular, there is an operator of the form (3.1) for each independent color singlet that can be formed from the given direct-product color representation. How to proceed depends on one's point of view. On one hand, the number of such independent singlets is finite, and it is possible to construct a single operator whose color factor is expressed as a linear combination of all of the independent Clebsch-Gordan coefficients. On the other hand, operators with linearly independent Clebsch-Gordan coefficients are certainly distinct operators, and it may be reasonable - and phenomenologically interesting - to investigate an individual infrared operator while assuming that its particular Clebsch-Gordan coefficients depend on the representation of the ultraviolet degree(s) of freedom that have been integrated out. To demonstrate this portal-based approach, which extends the line of thinking begun in Section 2, we provide three toy UV completions of this operator in Figure 1.
The top diagram in Figure 1 shows an s-channel completion by way of a color-triplet fermion Ψ_3. This completion produces the desired color structure by contracting a 3 with a 3 in 3 ⊗ 3 ⊗ 8 and 3 ⊗ 3 ⊗ 6 (3.2). The vertex corresponding to the first color invariant, which couples a gluon to two different quarks, arises at loop level in minimal extensions of the Standard Model with compactified extra spatial dimensions, which preserve Kaluza-Klein (KK) parity [34, 35], and at tree level in KK parity-violating models [36] (in either case, the role of the color-triplet fermion Ψ_3 is played by, e.g., a level-one KK quark). Explicitly, the coefficients in (3.1) in this scenario take the form (3.3), where t^a_3 are the generators of the fundamental representation of SU(3) and the Clebsch-Gordan coefficients K^s_ij, which are symmetric in SU(3) fundamental indices, uniquely project the color singlet out of the direct product 3 ⊗ 3 ⊗ 6 [37]. The squared norm of the array (3.3), which would for instance be computed as part of the cross section σ(qg → φ q^c), is given by (3.4). The operator (3.1) could instead be completed by another sextet scalar Φ_6, as shown in the middle diagram of Figure 1, in which case a 6 is contracted with a 6 in 6 ⊗ 6 ⊗ 8 and 3 ⊗ 3 ⊗ 6 (3.5). While exotic, this model can be envisioned as a color-sextet analog of the extra-dimensional model with SU(3)_c KK excitations considered above. In this case, the group-theoretic factors take an analogous form, where the only difference in this simple example is in the generators t^a_6 of the six-dimensional representation of SU(3) [15]. The factor (3.7) that arises in the computation of cross sections follows accordingly. Finally, the operator (3.1) can be completed yet again at loop level (and without nondiagonal gauge interactions) by introducing two color-charged degrees of freedom in the sextet and adjoint representations of SU(3)_c, as shown in the bottom diagram of Figure 1. This case is interesting because it does not correspond to a single heavy color-charged degree of freedom, and so the color flow must be tracked more carefully. In particular, the loop is built of two color invariants, and the group-theoretic factors in (3.1) take the form (3.9), with square (3.10). The square of (3.9) is therefore smaller than both (3.4) and (3.7). We therefore find that cross sections computed within the framework of the effective operator (3.1) differ by a factor of up to 9, ignoring all other differences including loop factors, depending on the color representation of the UV degree of freedom.
A family of toy models for hh production
For some variety, consider next a toy model in which a color-charged scalar ϕ enjoys a renormalizable coupling to the SM Higgs doublet H, so that its dynamics are captured by the Lagrangian (3.11). This model generates loops of the form displayed in Figure 2 that could enhance the cross section of Higgs pair production at the LHC, which is encoded in the Wilson coefficient of a dimension-six operator involving a trace over SU(3)_c adjoint indices. The effect of the exotic scalar ϕ on σ(gg → hh) can be significant and depends strongly on the scalar's charge(s). To demonstrate this, we show in Figure 3 the leading-order (one-loop) LHC dihiggs production cross section as a function of the exotic scalar mass m_ϕ in two well motivated scenarios: one with a color-triplet scalar, à la squarks, and another with a color-octet scalar. In both scenarios, the scalar-Higgs coupling λ_ϕH in (3.11) is set to 0.1. The enhancement becomes negligible before either scalar reaches 1 TeV in mass, but in the light-scalar regime where the bSM contribution dominates the SM loops, we find a factor-of-O(10) difference between the two scenarios. While the interference between SM (t, b) loops and ϕ loops is intricate, the dashed curves in Figure 3 show that the ϕ loops by themselves are responsible for most of this discrepancy. The responsible group-theoretic term is simply tr(t^a_r t^b_r), with r the SU(3)_c representation of ϕ. It is well known that for the Lie groups SU(N) this trace is proportional to δ^ab, with the factor of proportionality defining the Dynkin index T_r of the representation; we provide in Table 5 the Dynkin indices of a handful of representations of SU(3). We use these results to extend the ϕ-only (interference-free) dihiggs enhancements beyond the color-flow limitations of MG5_aMC to toy models with ϕ in higher representations of SU(3)_c. These rescaled results are displayed in Figure 4 in close analogy with the dashed curves in Figure 3. In order to demonstrate the growth of the new-physics enhancements with the scalar-Higgs coupling λ_ϕH, we adopt a larger value of 1.5 in this latest figure. In this benchmark, ϕ contributions remain significant into the TeV scale for some color representations: see the inset of Figure 4 for a detailed view of the contributions from low-dimensional representations.
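The rescaling to higher color representations described above can be summarized in a short Python sketch (our illustration; the reference cross section below is a placeholder, not a value from the paper): the ϕ-only amplitude carries tr(t^a_r t^b_r) = T_r δ^ab, so the interference-free contribution scales as (T_r/T_3)^2.

def su3_irrep(p: int, q: int):
    """Return (dimension, Dynkin index T_r) of the SU(3) irrep with Dynkin labels (p, q)."""
    dim = (p + 1) * (q + 1) * (p + q + 2) // 2
    casimir = (p * p + q * q + p * q + 3 * p + 3 * q) / 3.0
    dynkin = dim * casimir / 8.0          # T_r = dim(r) * C2(r) / (N^2 - 1) for N = 3
    return dim, dynkin

sigma_triplet = 1.0                       # hypothetical phi-only cross section for the 3 (placeholder units)
_, T3 = su3_irrep(1, 0)                   # T_3 = 1/2

for labels in [(1, 0), (2, 0), (1, 1), (3, 0), (2, 1), (2, 2)]:
    dim, T = su3_irrep(*labels)
    print(f"{dim:>3}-dim irrep: T_r = {T:6.2f}, phi-only sigma ~ {sigma_triplet * (T / T3) ** 2:9.2f} x reference")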
Also visible in Figure 4 are some suggestive, but by no means comprehensive, lower bounds on the mass of ϕ in the color representations that have been probed by experiment. This applies certainly to the 3 and 8, and to a lesser extent to the 6 and 10 (and higher). We take care to note the intrinsic limitation on these bounds imposed by model dependence: it is well known that experimental bounds can shift by O(100) GeV upon consideration of a new production or decay channel. We therefore highlight in Figure 4 a set of conservative limits that apply to the preponderance of well motivated channels. In particular, we take
• m_ϕ > 500 GeV for a color-triplet scalar, still allowed by Run 2 searches for pair-produced squarks q̃ decaying to q χ̃0 with relatively heavy χ̃0 [38];
• m_ϕ > 600 GeV for a color-sextet scalar, allowed for scalars coupling to same-sign quark pairs with reasonable O(0.1) coupling strengths [39];
• m_ϕ > 800 GeV for a color-octet scalar, the most conservative bound derived fairly recently [40] for the CP-even color octet in the Manohar-Wise model [22];
• m_ϕ ≳ 1900 GeV for color-decuplet/quindecuplet scalars that may appear as R hadrons at the LHC in the absence of an efficient decay channel [41].
Figure 4 shows that these limits - which we reiterate can be strengthened at the expense of model independence - can be used to probe large scalar-Higgs couplings λ_ϕH of O(1). By the same token, we conclude that light - even sub-TeV - color-charged scalars are still viable in this λ_ϕH regime.
Group theory impacts on EFT validity and collider reach
Before we move on, we make some observations about the impact of charge flow on the range of validity of an effective field theory and, relatedly, on the self-consistent, experimentally accessible parameter space of such theories. Just above (viz. Figure 3), we offered a simple example of two models identical in all respects except for the color charge of the light exotic field ϕ. The dihiggs production cross sections in these models suggest that the characteristic scale of the EFT obtained by integrating out ϕ may vary by O(100) GeV. This line of thinking can be extended to the EFT cutoff Λ, even though one might suppose that the range of validity of an effective field theory should be unaffected by non-kinematic factors.
We make this notion more concrete by computing the perturbative unitarity bound [42, 43] on the cutoff Λ of the operator (3.1), which permits processes of the form qg → φ q^c. This particular process has angular momentum J = 1/2. In the massless-quark limit, the perturbative unitarity bound on Λ, derived by computing the definite-helicity transition amplitude, is given by (3.13), with m_φ the mass of the light sextet φ and the group-theoretic factor Π^as_ij (Π^as_ij)* given by (3.4), (3.7), and (3.10) in the three cases studied in Section 3.1. Since we found there that this numerical factor can differ by a factor of up to 9, we see from (3.13) that the perturbative unitarity bound on the operator (3.1) can vary by a factor as large as √3 based purely on the SU(3)_c representation of the intermediate exotic field (viz. Figure 1). This significant effect, which we reemphasize is completely independent of the kinematics of the physical process generated by the operator, has been observed in only a few disparate contexts [45, 46] - as far as we are aware - and deserves greater appreciation, since it can void potentially wide swaths of EFT parameter space on self-consistency grounds. By the same token, though, higher minimum EFT cutoffs due to the above mechanism may be neutralized by larger cross sections, resulting in net gains in (potentially) accessible parameter space, provided that the effective operator is of high enough mass dimension. The previous example is illustrative: if the cross section σ(qg → φ q^c) rises by a factor of 9, but the unitarity-bounded cutoff rises by a factor of √3, then in principle the experimental reach along the Λ axis of EFT parameter space is greater for any given m_φ despite the higher minimum cutoff. Altogether, therefore, we conclude that even O(1) numerical factors affect both cross sections and unitarity bounds in effective field theories, ultimately determining how much parameter space should be considered valid and accessible. This observation adds yet another motivation for theorists to comprehensively explore the space of light-exotics models.
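A schematic way to see this scaling (our sketch of the power counting, not the expression (3.13) itself): if the definite-helicity amplitude generated by the dimension-six operator grows linearly with the Clebsch-Gordan factor and with the center-of-mass energy squared,

\[
  |\mathcal{M}(qg \to \phi q^c)| \;\sim\; \frac{|\Pi|\, s}{\Lambda^2} \;\lesssim\; \mathcal{O}(1)
  \quad\Longrightarrow\quad
  \Lambda_{\min} \;\propto\; \sqrt{|\Pi|}\,,
\]

then a factor of 9 in |Π|^2 (a factor of 3 in |Π|) translates into a factor of √3 in the minimum self-consistent cutoff, while the cross section at fixed Λ grows by the full factor of 9.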
Unique kinematics of light-exotics processes
We now provide some examples of unique kinematic features that may appear in processes at the LHC and that can only be captured theoretically by specifying the intermediate states that may be (nearly) on shell in such processes. We first consider a pair of models that both produce final states with two hard jets and a pair of charged leptons (jj ℓ+ℓ−).
We then highlight two models that produce the (related) final states with hard jets and missing transverse energy (jj + E_T^miss).
4.1 jj ℓ+ℓ−: sextet vs. leptoquark
Scalar leptoquarks (LQs) [47], novel spin-zero SU(3)_c triplets carrying both baryon and lepton number, have a long history both in unified theories [48-50] and in phenomenological models that accommodate lepton flavor universality violation [51-53]. A so-called first-generation scalar leptoquark Φ_LQ with electric charge Q = −1/3 minimally couples at mass dimension four to electrons and up quarks through Yukawa-type interactions, where {U, E} are the first-generation quark and lepton SU(2)_L doublets with indices a, b ∈ {1, 2}, and {u, e} are the corresponding weak singlets. The Yukawa-type couplings y_L, y_R are considered independent in this analysis. The quantum numbers of this leptoquark are specified in Table 6. In this model, the relevant LHC production process for the jj ℓ+ℓ− channel is QCD pair production, gg → Φ†_LQ Φ_LQ, followed by the decay Φ_LQ → u e− and its conjugate. A representative diagram for this process is shown in the lower panel of Figure 5. The CMS Collaboration conducted a search [54] for this specific process using L = 35.9 fb−1 of Run 2 data and, in the absence of a signal, excluded first-generation leptoquarks with masses m_LQ < 1435 GeV at 95% confidence level (CL) [55].
The same final state can be produced at the LHC by a cousin of the operator (3.1) containing dimension-six interaction(s) between SM fermions and a Q = 1/3 color-sextet scalar Φ [37, 39, 44, 56-59]. The couplings λ_uℓ are elements of a matrix in quark and lepton generation space, with I or X = 3 labeling the heavy generation(s). The coefficients J are the generalized Clebsch-Gordan coefficients [37] required to construct gauge-invariant contractions of the direct-product representation 3 ⊗ 6 ⊗ 8 in SU(3) [15]. Here and below, the 6 is indexed by s, r, ... and the 3 by i, j, ... . The quantum numbers of this color-sextet scalar are specified in Table 6. Here the relevant LHC process is single sextet production with an associated lepton (for first-generation SM fermions, u g → Φ† e+) and subsequent decay through the same operator, Φ† → u g e−. The sole diagram for this process is displayed in the upper panel of Figure 5.
In simple scenarios where both exotic scalars couple only to first-generation SM fermions, the final states of these two processes are indistinguishable at the LHC. But the kinematics of these processes are quite different. Some illustrative distributions are compared in Figure 6 for exotic scalars set to a common mass of 1.5 TeV, this mass having been chosen in view of the 1.44 TeV limit on m_LQ mentioned above. The simulated event samples were produced in MadGraph5_aMC@NLO (MG5_aMC) version 3.3.1 [60, 61], showered and hadronized using Pythia 8 version 8.244 [62], and analyzed with MadAnalysis 5 version 1.9.20 [63-65] after performing object reconstruction using its inbuilt simplified fast detector simulator (SFS) [66] and an interface to FastJet version 3.3.3 [67]. Jets were reconstructed according to the anti-k_t algorithm [68] with the radius parameter set to R = 0.4.
In the top panel of Figure 6, we show that the transverse momentum (p_T) of the leading lepton is expected to be significantly higher in the sextet model than for LQ pair production. This is because in the former model, the leading lepton is the one that recoils off of the sextet when it is produced, whereas in the latter model it is a product of one of the decaying leptoquarks. The middle panel shows the leading-jet p_T, which is likewise expected to be higher in the sextet model but more sharply peaked for LQ pairs. The bottom panel shows the invariant mass m_{ℓ2 j1 j2} of the two hardest jets and the second lepton. In the sextet model, the system ℓ2 j1 j2 corresponds to the decay products of the exotic scalar, and so the invariant mass can be used to reconstruct the sextet [44]. No such identification can be made within the leptoquark model, and indeed the distribution is much broader and certainly not peaked at the LQ mass.
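For concreteness, a minimal Python sketch (ours, not the analysis code behind Figure 6) of the invariant-mass reconstruction from the two hardest jets and the second-hardest lepton, given (E, px, py, pz) four-momenta in GeV; the example four-vectors are hypothetical.

import math

def invariant_mass(*p4s):
    """Invariant mass of the system formed by summing the given four-momenta."""
    E = sum(p[0] for p in p4s)
    px = sum(p[1] for p in p4s)
    py = sum(p[2] for p in p4s)
    pz = sum(p[3] for p in p4s)
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

# Hypothetical reconstructed objects from one event (E, px, py, pz) in GeV:
jet1 = (620.0, 310.0, -250.0, 470.0)
jet2 = (480.0, -180.0, 260.0, 360.0)
lep2 = (410.0, -120.0, -30.0, 390.0)
print(invariant_mass(jet1, jet2, lep2))   # expected to peak near m_Phi for the sextet signal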
4.2 jj + E_T^miss: sextet vs. squark
Final states with jets and missing transverse energy have long provided the quintessential search channel for supersymmetry at the LHC, since squarks are copiously pair produced in many models (for example, q q̄ → q̃† q̃) and usually decay to quarks and neutralinos (q̃ → q χ̃0 and the conjugate). This is certainly the case in the Minimal Supersymmetric Standard Model (MSSM), which we focus on for simplicity in this work. A representative diagram for this process is displayed in Figure 7. Meanwhile, an extension of the sextet scalar model (4.2) can produce similar final states at the LHC. Suppose, in particular, that the quarks and leptons are left handed and interact with Φ through an operator with notation similar to (4.2). Then, instead of single sextet production in association with a charged lepton, we have production with an associated neutrino (d g → Φ† ν) and the decay(s) Φ† → d g ν. These processes notably involve down-type quarks, but aside from the differing SM fermions, the relevant diagram - displayed in Figure 7 - is identical to the one in Figure 5.
As for the previous pair of models, we explore the kinematics of these jj + E_T^miss processes in Figure 8. We again only consider couplings between exotic scalars and first-generation SM fermions, so that for instance the MSSM process is left-handed up-squark production, pp → ũ†_L ũ_L. As mentioned in Section 3.2, ATLAS and CMS have released comparable limits on light-flavor squarks q̃ based on the full Run 2 dataset, L ≈ 139 fb−1: ATLAS excludes m_q̃ < 1210 GeV assuming one non-degenerate light-flavor squark and a light neutralino (m_q̃ < 1850 GeV for eight degenerate squarks) [38], and CMS excludes m_q̃ < 1250 GeV (1710 GeV) in the same scenarios [69]. For the purposes of this discussion, we do not take a firm position on the MSSM squark spectrum and suggest as a starting point some m_q̃ lying between the aforementioned limits. m_ũ = 1500 GeV happens to reside in this neighborhood, so we use the same scalar masses as in the leptoquark comparison. The samples were produced and analyzed using the same toolchain as before; the squark sample relied on the MSSM implementation shipped with MG5_aMC [70].
The top panel of Figure 8 shows the E_T^miss distributions in both models. We see a characteristic peak around m_ũ/2 in the MSSM and a broader distribution for the sextet, both because the electron neutrino is lighter than the neutralino in our MSSM benchmark and because one of the neutrinos recoils off of the color-sextet scalar. The middle panel plots the transverse momentum of the leading jet; these distributions are quite similar to the leading-jet p_T distributions in Figure 6 since the neutralino is still fairly light. Finally, the bottom panel shows the invariant mass m_{j1 j2} of the two leading jets, which - in contrast to the missing energy - is sharply peaked at m_Φ/2 in the sextet model but much broader in the MSSM. Altogether, we again have a pair of scenarios with light exotic particles that must be specified in a renormalizable or effective theory in order to capture the LHC kinematics.
Conclusions
We have introduced the Light Exotics Effective field Theory (LEX-EFT) to study the phenomenology of on-shell or nearly on-shell exotic particles. We presented a general method for constructing a complete catalog of operators coupling these new states to the Standard Model. The LEX states are categorized by their SM quantum numbers, and we outlined a general iterative tensor product method to create a complete list of gauge singlets up to the desired mass dimension of the effective operator. We described the effect of charge flow on the operator coefficients, which are comprised of distinct products of Clebsch-Gordan coefficients. We demonstrated through some simple examples that these are important to determining the range of validity of the effective operator, even as they strongly affect production cross sections within the EFT framework.
We also discussed the distinct kinematics of LEX-EFT operators via an example model of SU(3)_c color sextets coupling to the Standard Model through dimension-six operators. We showed how several kinematic observables within this model are strongly dependent on the specific LEX state by way of comparison with another model producing in-principle identical final states at the LHC. Such distinctive kinematics allow for tailored collider searches more powerful than inclusive searches tuned to models of supersymmetry or leptoquarks. We think this highlights the need for a wider array of collider searches driven by more general models. Finally, we created an example LEX-EFT operator catalog detailing the couplings of a CP-even scalar to pairs of SM gauge bosons up to mass dimension seven. This demonstrated the use of the iterative tensor product method and hinted at the wide array of nonstandard particles that may be accessed through this portal, some of which depart greatly from well trodden bSM paths. We note that in previous work, we presented a catalog of SU(2)_L singlet color-sextet spin-0 and spin-1/2 fields up to mass dimension six [15]. It is remarkable that though both of these endeavors represent only a tiny fraction of the full possible operator catalog, they still yield interesting and surprising interactions between SM and bSM states and spectacular collider phenomenology.
Opportunities for further work in the LEX-EFT framework are manifold, and we take this opportunity to lay out a long-term plan for the in-depth study of this paradigm. The first and most obvious step is to build out the operator catalog with new LEX states. Approaches to building the operator catalog may follow several systematic paths. One of these, as suggested in this work, is to index the catalog by the SM quantum numbers of the exotic state(s); that is, to specify the representations of the exotic state and use the iterative tensor product procedure to obtain all singlet operators up to the desired mass dimension. Another possible way to aggregate LEX-EFT operators is a portal-based approach. For example, in this work, we specified SM gauge boson pairs as the portal to new physics, and we built a catalog of all possible CP-even scalar LEX fields that can be accessed through this portal through mass dimension seven. These approaches are complementary: the portal-based exercise gives an idea of which exotic states can be produced through certain processes, and the full study of such states then requires the construction of a complete EFT.
More immediate work might follow directly from topics brought up in this paper. For example, it might be interesting to follow up with collider studies on any of the effective operators in our example catalog in higher-dimensional representations of SU(2)_L, since that promises to yield complex collider signatures. Another route might be to construct the complete operator list up to dimension seven for any of these states, considering all couplings to the SM beyond the diboson portal.
More generally, the kinematic landscape of possible collider final states is vast. We have demonstrated the unique kinematic features that appear in particular models even when final-state particles are the same or at least indistinguishable in a detector. In the continuing absence of definitive collider evidence for physics beyond the Standard Model, a systematic search through possible event topologies is needed. Once complete, the LEX-EFT operator catalog can be mined for new collider phenomenology. As demonstrated by the diboson portal presented here and previous work on color sextets, collider final states for the LEX-EFT catalog can have nonstandard collider signatures and striking event topologies that would not be predicted without the catalog. Working through the LEX-EFT catalog scans the landscape of possible collider signatures for new physics, taking a "leave no stone unturned" approach.
Finally, we reiterate that it will eventually be incumbent upon theorists to complete the EFTs for classes of models with particularly compelling phenomenology. This will likely follow the path of recent developments in dark matter studies with simplified models and then next-generation models [71]. This will no doubt offer the benefit of expanding the theoretical landscape of bSM paradigms beyond the standard fare, and may lead to the discovery of new theoretical mechanisms or paradigms.
Figure 1: Diagrams representing toy UV completions of the operator (3.1) by way of (top) a heavy quark Ψ_3, (middle) a heavy sextet scalar Φ_6, and (bottom, at loop level) a heavy sextet scalar Φ_6 and octet scalar Φ_8.
Figure 2: Representative diagrams containing some color-charged scalar ϕ resulting in enhanced rates of dihiggs (hh) production at the LHC.
Figure 3: Leading-order LHC dihiggs (hh) production cross sections at √s = 13 TeV. Leading loops consist of third-generation SM quarks and additional (red) color-triplet or (blue) color-octet scalars ϕ. Scalar-Higgs coupling λ_ϕH is set to 0.1. Only the solid curves correspond to observable cross sections; new-physics contributions are displayed to compare group-theory factors.
Figure 4: Enhancements to leading-order LHC dihiggs (hh) production cross sections at √s = 13 TeV, ignoring interference between SM and scalar loops. These new-physics contributions are displayed to compare group-theory factors for increasingly exotic color representations. Scalar-Higgs coupling λ_ϕH is set to 1.5 to demonstrate scaling (viz. Figure 3). Also displayed are robust lower limits on m_ϕ for selected representations.
Table 5: Factors of proportionality (Dynkin indices) in the generator trace tr(t^a_r t^b_r) = T_r δ^ab for physically interesting irreducible representations of SU(3).
Figure 6: Distributions of (top) hardest-lepton p_T, (center) hardest-jet p_T, and (bottom) invariant mass of the hardest jets and second-hardest lepton for (red) dimension-six sextet scalar production and (green) scalar leptoquark pair production at the LHC.
Figure 8: Distributions of (top) missing transverse energy, (middle) hardest-jet p_T, and (bottom) invariant mass of the two hardest jets for (red) dimension-six sextet scalar production and (green) first-generation squark pair production at the LHC.
Table 1: Dimension-five operators that couple boson pairs to bSM fields ϕ with specified SM quantum numbers. Here SU(2)_L indices (i, j, ... fundamental and a, b, ... adjoint) are lowercase and SU(3)_c indices are capital letters.
Table 2: Dimension-six exotic operators that couple boson pairs to bSM fields ϕ with specified SM quantum numbers. Indices are as shown in Table 1.
Table 3: Dimension-seven exotic operators that couple boson pairs to color-singlet bSM fields ϕ with specified SM quantum numbers. Indices are as shown in Tables 1 and 2.
(2.21), which allow the quadruplet to couple to two SU(2)_L gauge bosons or to one SU(2)_L and one weak-hypercharge U(1)_Y gauge boson, respectively. Similarly, we may use alternate tensor products such as 2 ⊗ 2 ⊗ 3 ⊃ 5 and 2 ⊗ 2 ⊗ 7 ⊃ 5 (2.22).
2023-02-06T06:42:36.260Z | 2023-02-02T00:00:00.000 | {"year": 2023, "sha1": "aabf12dd85de9f83114e669839d072616d364e0a", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP08(2023)050.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "aabf12dd85de9f83114e669839d072616d364e0a", "s2fieldsofstudy": ["Physics"], "extfieldsofstudy": ["Physics"]}
8507669 | pes2o/s2orc | v3-fos-license
Vertical parasagittal hemispherotomy for Sturge–Weber syndrome in early infancy: case report and literature review
Introduction The authors present a rare case of a 3-month-old infant with unilateral Sturge-Weber syndrome (SWS) who had excellent seizure control and no aggravation of pre-existing neurological deficits after vertical parasagittal hemispherotomy (VPH). To our knowledge, this patient with SWS is the youngest to have received VPH. Case description The use of VPH resulted in successful treatment of intractable epilepsy in a patient with seizure onset in early infancy. At follow-up, the patient's neurodevelopmental status had improved since the surgery. Discussion It is generally accepted that early-onset seizures in children with SWS are associated with worse neurological and developmental outcomes. However, when surgical treatment should be considered and how it should be performed remain a longstanding controversy. We promote early surgery in children with SWS and early-onset epilepsy. Conclusion We suggest that VPH may be a useful adjuvant in the management of SWS with refractory epilepsy in early infancy and that this procedure carries low neurological risk.
Background
Sturge-Weber syndrome (SWS) is a rare congenital neurocutaneous disorder characterized by facial port wine stains and associated intracranial leptomeningeal angiomatosis. Seizures are the most common manifestation of the disease, and two thirds of patients with SWS present with seizures during the first year of life (Bachur and Comi 2013; Jagtap et al. 2013; Alkonyi et al. 2011). It is well established that early-onset seizures are more difficult to control and may lead to neurological and developmental deterioration. The appropriate treatment depends on the available natural history, and early surgery is advocated to attain seizure control in those with severe medically intractable epilepsy (Schramm et al. 2012; Honda et al. 2013; Kramer et al. 2000). Introduced by Delalande in 1992, vertical parasagittal hemispherotomy (VPH) has been described as an effective surgical technique for hemispheric disconnection. This technique allows complete disconnection of the hemisphere through a cortical window with good seizure control and achieves maximal preservation of vessels within the disconnected hemisphere, reducing the risk of ischemic cerebral edema (Delalande et al. 2007; Delalande and Dorfmuller 2008). Although VPH is a relatively newly developed hemispherotomy technique, it can be a useful adjuvant in the management of epilepsy for SWS patients in early infancy and may help preserve neurological function by preventing progressive neurological deterioration and intellectual impairment.
The patient, a female infant, presented with a facial nevus flammeus (Fig. 1a). She did not show any neurologic manifestations until 51 days after birth, when she developed her first seizure. She had no family history of seizures or epilepsy, and her development was age-appropriate. Her brother and mother had developed febrile convulsions during their infancy. The diagnosis of SWS was established by the presence of the facial nevus flammeus and seizures. The patient's legal parent gave consent to publish this case report and any accompanying images.
Computerised tomography (CT) revealed cortical calcification in the left cerebral hemisphere (Fig. 1b). Magnetic resonance (MR) imaging demonstrated unilateral cerebral atrophy and a leptomeningeal venous angioma (Fig. 1c). Initially, her seizures were well controlled by combination therapy with phenobarbital and clonazepam. However, the patient developed mild right hemiparesis and tonic seizures with focal features at a frequency of 4-5 times per day, which were refractory to medication. She was admitted to our hospital and underwent a thorough general physical and neurological examination at 3 months of age. The interictal scalp electroencephalography (EEG) (Fig. 2a) showed obvious asymmetry, with spikes detected over the left frontal and anterotemporal areas, and the ictal scalp EEG revealed changes in the background activity and bilateral rhythmic slow waves with no localizing value (Fig. 2b). Immediate postictal positron emission tomography (PET) demonstrated significant hypermetabolism in the left frontal lobe (Fig. 1d).
Operation and postoperative follow-up
Preoperatively, a central venous catheter for blood transfusion was placed in this infant. Intraoperative blood loss was estimated, and blood component transfusion including packed red blood cells and fresh frozen plasma (FFP) was used during surgery. As our previous paper suggested, for small infants weighing less than 7 kg, FFP of 10 ml/kg was routinely transfused preoperatively for hemispheric surgeries. The total blood loss for hemispheric surgeries was 150-250 ml, and the total operative time was 5-6 h.
The patient underwent VPH to disconnect the entire cortex from the underlying diencephalic structures. A surgical specimen was sent to pathology, and histological examination of the resected material confirmed the diagnosis of SWS. She developed mild diabetes insipidus (DI) and edema in the left frontal lobe immediately after surgery, but soon recovered, and no replacement treatment was needed. Postoperative MR imaging demonstrated that a complete disconnection of the affected hemisphere had been achieved (Fig. 3c-e).
The patient's seizures ceased immediately after the surgery. At the last follow-up, she had been free of seizures for more than 3 years on one anti-epileptic drug (zonisamide, ZNS; 30 mg/day). She can walk without assistance and enjoys talking with her mother. Her right hand functions at an auxiliary-hand level, and she can eat by herself using her left hand and a spoon. The patient's developmental quotient (DQ) improved from 81 (2014) to 85 (2015).
Discussion
This case demonstrates the utility of VPH as an effective surgical treatment for SWS patients with medically refractory epilepsy. VPH is an established treatment for intractable epilepsy due to diffuse hemispheric disease. Delalande (2007) reported the first series of 83 patients who underwent VPH for refractory epilepsy, with shorter operative times and fewer early postoperative complications (Delalande et al. 2007). This surgical technique can achieve complete disconnection of the affected hemisphere while preserving an intact vascular supply (Delalande and Dorfmuller 2008). However, the application of VPH to SWS patients in early infancy is rarely reported. To our knowledge, only five cases younger than 1 year of age who underwent VPH have been previously reported (Delalande et al. 2007; Dorfer et al. 2013).
In SWS, 75 % of patients develop seizures during the first year of life. In addition, previous authors have reported that early-onset seizures occurring in patients younger than 1 year of age may be more difficult to control and are associated with worse neurological and developmental outcomes (Alkonyi et al. 2011; Jagtap et al. 2013; Thomas et al. 2012). Numerous authors advocate extra-early hemispherotomy for hemispheric epileptic etiologies including SWS, because early-onset epilepsy starting in infancy often indicates a worse prognosis, including medical intractability of the seizures, progressive hemiparesis, and mental retardation (Bourgeois et al. 2007; Tuxhorn and Pannek 2002; Honda et al. 2013). It is worth noting that, in younger children undergoing hemispheric surgery, significant intraoperative bleeding is common and is frequently seen in patients with malformations of cortical development, which may be associated with hypovolemia and death (Bourgeois et al. 2007; Kossoff et al. 2002; Schramm et al. 2012). In a series of 27 patients with SWS, 3 required cerebrospinal fluid shunt placement and 6 had more than one operation because of residual lesions or the risk of massive hemorrhage (Bourgeois et al. 2007). The surgical technique of VPH provides a smaller skin incision and bone flap, which reduces blood loss and avoids exposure of the large venous sinuses. It allows complete disconnection of the hemisphere through a cortical window with good results in terms of seizure outcome and a relatively low complication rate (Delalande and Dorfmuller 2008). We believe that this kind of disconnective technique can help reduce the potential complications associated with large brain excisions (Delalande et al. 2007; Delalande and Dorfmuller 2008).
In the present case, VPH was performed in a 3-month-old infant with SWS and refractory seizures. Although PET showed some focal features, we performed VPH owing to the clinical hemispheric syndrome associated with a congenital hemispheric cerebral pathology. After surgery, she became seizure free and experienced improvement in her developmental status and motor performance that has lasted up to her most recent follow-up (3 years after hemispherotomy). Our case concurs with many studies that have shown that hemispherectomy performed early in life is associated with minimal hemiparesis and better intellectual development (Table 1) (Bourgeois et al. 2007; Kossoff et al. 2002; Honda et al. 2013; Dorfer et al. 2013).
Our patient developed transient DI and brain edema in the ipsilateral frontal lobe, which produced mild midline shift. It is likely that the edema and DI were caused by obstruction of venous drainage, but the underlying mechanism for such an association remains unknown. Previous data showed that manipulation and retraction of the fragile brain and vessels of an infant may result in marked postoperative brain edema (Dorfer et al. 2015).
We advocate that all infants should be extensively monitored for blood volume imbalance, peripheral temperature, and serum electrolytes. Based on our experience and the literature review, we suggest that this kind of surgery should be performed in a pediatric epilepsy center where age-appropriate anesthesia and postoperative intensive care are available (Jagtap et al. 2013; Thomas et al. 2012; Bachur and Comi 2013; Alkonyi et al. 2011).
Jonas et al. reported that earlier surgery (<1 year of age) resulted in 76 % of patients being free of seizures, compared with 58 % for patients operated on after 5 years of age, demonstrating that surgery before the age of 1 year is favorable for a good surgical outcome (Jonas et al. 2004). However, these data came from various etiologies, not solely SWS. We suggest that VPH can reduce blood loss and avoid exposure of the large venous sinuses, which is a particular concern in infant surgery. Furthermore, VPH exhibits good outcomes in terms of seizure control and improvement of hemiparesis in children with SWS and early-onset seizures. Future research in the form of more case reports and case series may add evidence to the literature about the use of VPH in the management of refractory epilepsy in infants with SWS.
Conclusion
We have reported VPH performed on a 3-month-old infant with a unilateral congenital nevus flammeus. VPH exhibits good outcomes in terms of seizure control and improvement of neurodevelopmental status and hemiparesis in children with SWS and early-onset seizures.
2018-04-03T02:22:32.671Z | 2016-08-30T00:00:00.000 | {"year": 2016, "sha1": "d62a9f8d3f4c932f0c188999ab31a326f61874fd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s40064-016-3096-2", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d62a9f8d3f4c932f0c188999ab31a326f61874fd", "s2fieldsofstudy": ["Medicine"], "extfieldsofstudy": ["Medicine"]}
259164766 | pes2o/s2orc | v3-fos-license
Regression-Based Model Error Compensation for Hierarchical MPC Building Energy Management System
One of the major challenges in the development of energy management systems (EMSs) for complex buildings is accurate modeling. To address this, we propose an EMS which combines a Model Predictive Control (MPC) approach with data-driven model error compensation. The hierarchical MPC approach consists of two layers: an aggregator controls the overall energy flows of the building from an aggregated perspective, while a distributor distributes heating and cooling powers to individual temperature zones. The controllers of both layers employ regression-based error estimation to predict and incorporate the model error. The proposed approach is evaluated in a software-in-the-loop simulation using a physics-based digital twin model. Simulation results show the efficacy and robustness of the proposed approach.
I. INTRODUCTION
The increasing penetration of renewable energy sources (RESs) in the public power grid leads to a demand for intelligent energy management systems (EMSs) for buildings. The most popular method for controlling EMSs is Model Predictive Control (MPC). However, for MPC to be effective, an appropriate model of a building's energy behavior is necessary.
There are several approaches to building such a model, which can be categorized as white-box, gray-box, and black-box modeling. White-box models, mostly developed using building energy performance simulation tools such as EnergyPlus or TRNSYS, can be very accurate, but are usually too complex to be used directly in the MPC's optimal control problem (OCP). Gray-box models, such as state space or resistor-capacitor (RC) models, are less accurate, but can be utilized well in an OCP [1]. Both white- and gray-box modeling of buildings is very complex and requires building-specific expert knowledge, i. e. models cannot be easily transferred to other buildings [1]. Thus, data-driven black-box modeling has experienced an increase in interest [2], e. g. using Gaussian Processes (GPs) or artificial neural networks (ANNs). While their biggest advantage is the comparatively low modeling effort, they require a large amount of data, the acquisition of which is again challenging [3]. At the same time, known dynamics or behaviors are difficult to incorporate directly and may also have to be approximated. Therefore, a hybrid approach combining these modeling paradigms is likely necessary to succeed in employing building EMSs on a larger scale in the real world.
One option is to replace (a part of) the building's model by a data-driven surrogate model. In [4], [5], a machine learning model is trained with simulation data from a physics-based model. Then, the machine learning model is included in the MPC's OCP. Data-driven surrogate models are also frequently used for real-world buildings. In [6], ANNs are trained with historical data from a test building located at the University of L'Aquila, Italy to predict both energy consumption and temperature development. The ANNs are then utilized as the sole model in the MPC. In [7], recurrent neural networks are used to approximate a nonlinear thermal model of an airport check-in hall. The check-in hall's temperature is then controlled using MPC to both follow a reference trajectory and not violate comfort boundaries by solving a linear OCP. However, ANNs can also be used as part of the objective function, instead of replacing model equations in the constraints. In [8], radial basis function (RBF)-based ANNs are used to approximate both the thermal dynamics and the occupant comfort for 4 university office rooms. For more examples of data-driven control approaches, the reader is referred to the review [9]. Notably, only very few studies consider multi-zone buildings.
A second option for a hybrid model approach is a data-driven error estimator (or residual estimator). Here, the goal is not to replace a part of the gray-box model, but to reduce the model error by augmenting it with a residual value, estimated by a data-driven regression model. However, applications in the building sector are sparse. In [10], a physics-based model of a single-office building in Stuttgart, developed in TRNSYS and MATLAB, is first simplified to a RC gray-box model. Then, a GP model is trained to predict the error of the RC model, using simulation data from the physics-based model as ground truth. However, it was not applied to any control purposes. Applications of error estimators in combination with MPC can be found in different areas. In [11], GPs are used to learn the model error for an autonomous racing car. Training data is received from simulation without MPC. The GPs are explicitly used in the OCP as part of the model dynamics. In [12], a RBF-based disturbance estimator for a nonholonomic robot is used for event-triggered MPC. The disturbance is assumed to be dependent on the system state and control input only, and could thus be interpreted as a model error. In [13], a GP based error estimation is combined with an extended Kalman filter to achieve offset-free tracking of a 6 degrees of freedom robotic arm.
In this work, we use a hierarchical setup for the MPC of the energy system of a medium-sized office building in Offenbach, Germany. An aggregator is used to control the total energy flows, which are then allocated to the 9 individual temperature zones by a distributor. Gray-box state space models are used on both levels. A physics-based digital twin serves as a surrogate model of the actual building. To compensate the model errors of both the aggregator and the distributor, we train two regressionbased error estimators. As features, only signals which are easily obtainable both online and offline are used. Training data is derived from a software-in-the-loop (SiL) simulation of the digital twin with real-world measurement data. The main contributions are the development of the data-driven estimators for a multi-zone building using a digital twin and real-world measurement data, and their application for error compensation in a hierarchical MPC approach.
The rest of the paper is structured as follows. The building itself as well as its digital twin and the simplified gray-box models are described in Section II. The hierarchical MPC setup is explained in Section III. The data-driven error estimators and their training process is discussed in Section IV. The successful error compensation by combining the error estimators with the hierarchical MPC approach is shown by long-term simulation results in Section V. Finally, we conclude with a discussion on the impacts and necessary further steps in Section VI.
II. BUILDING MODELS
In this section, we will first give a brief description of the actual building. Then, we will explain the different models used in this study, i. e. 1) the digital twin, 2) a state-space model with only a single temperature zone used by the aggregator, and 3) a state-space model of the 9 temperature zones used by the distributor.
A. Building Description
The building used in this study is a medium-sized company building located in Offenbach, Germany. It has a footprint of approx. 13,000 m² and can be separated into 9 different temperature zones, which include offices, halls, some workshops and, as a peculiarity, an emissions lab. Besides the connection to the public power grid, the main energy sources are a gas-fired combined heat and power plant (CHP) for co-production of electricity and heat with 199 kW_el and a fairly large photovoltaic (PV) plant with 750 kWp, which together serve an average load demand of approx. 250 kW. The building further has gas-fired heating boilers and an electric heating, ventilation, and air conditioning (HVAC) system. A stationary second-life battery with a capacity of 98 kWh can be used as electric storage.
B. Digital Twin
A Modelica-based simulation model implemented in SimulationX is used as a digital twin [14]. It covers the 9 different temperature zones, their couplings, heat losses to both the ambient air and the ground, internal heat gains from electrical consumption and occupants, and the above-mentioned energy producers and consumers, including various constraints on the power production. The CHP has a minimal power output of 50 %, below which it cannot be modulated. Furthermore, its power-up and power-down times, as well as nonlinear efficiencies, are considered. The SimulationX model uses historic measurement data for the ambient air temperature, the electric power demand (per zone), solar irradiation, and PV power production.
C. Aggregator Model
As discussed in the introduction, the physics-based digital twin is not suited to be used in an OCP. Thus, we use a simplified state space model representing the most important entities. Note that the hierarchization, i. e. the use of an aggregator and a distributor, is done to ensure the scalability of the control approach. This also allows the integration of additional components, e. g. charging stations for electric vehicles [15]. Of the total 9 temperature zones, 7 are aggregated as a single 'building zone' with an average temperature ϑ_b (in °C). The remaining 2 zones are server rooms and are aggregated with an average 'server zone' temperature ϑ_s (in °C). The stationary battery's stored energy E (in kWh) completes the state vector x_agg. The inputs u_agg to the system consist of the grid power P_grid, the (electrical) CHP power P_chp, the gas heating power Q̇_rad, and the HVAC cooling power Q̇_cool.
As disturbances d_agg, the PV power P_PV, the building's electrical power demand P_dem, the ambient air temperature ϑ_air (in °C), (constant) losses to the ground Q̇_other,b, and (constant) internal heat gains Q̇_other,s are considered. All powers are given in kW. The time-continuous state space model is parameterized by C_th,b and C_th,s, the thermal capacities of the building and server zone, respectively, in kWh/K; H_air,b and H_air,s, the heat transfer coefficients to the ambient air of the building and the server zone, respectively, in kW/K; β_bs, the heat transfer coefficient between the two zones in kW/K; and c_cur, the ratio of the CHP's electrical to thermal power. Numerical values are given in Table I. In the following, we only use its discretized state space form. Furthermore, we respect the model errors for the building and the server zone temperatures, i. e.
the model error enters additively in the discretized state update, with ϵ_agg(k) = [0, ϵ_b(k), ϵ_s(k)]^⊺ and T_s being the sampling time in h. Note that we can respect the model error ϵ_agg(k) only in the discretized form, since it has to be estimated from discretely sampled data points. For more details on the modeling itself, the reader is referred to [16].
D. Distributor Model
The distributor models the 9 temperature zones individually, while neglecting the electrical part of the aggregator model. The dynamics of the temperature ϑ_i of a single zone i are governed by C_th,i, the thermal capacity of zone i in kWh/K; β_ij, the heat transfer coefficient between zones i and j in kW/K; H_air,i, the heat transfer coefficient between zone i and the outside air in kW/K; and Q̇_heat,i and Q̇_cool,i, the heating and cooling powers (in kW) allocated to zone i. Q̇_other,i is an uncontrollable disturbance, which is assumed constant and either represents heat losses to the ground (for the building zones 1-7) or internal heat gains (for the server zones 8 and 9).
Using the 9 temperatures ϑ_i as states x_dis, Q̇_heat,i and Q̇_cool,i as inputs u_dis, Q̇_other,i as disturbances d_dis, and again discrete model errors ϵ_dis(k) = [ϵ_1(k), ..., ϵ_9(k)]^⊺, the zone dynamics are expressed as the discrete state space model x_dis(k+1) = A_dis x_dis(k) + B_dis u_dis(k) + S_dis d_dis(k) + ϵ_dis(k), where A_dis is the system matrix, B_dis the input matrix, and S_dis the disturbance matrix. Again, T_s denotes the sampling time, and the numerical values are given in Table I. For brevity, the reader is referred to [16] for more details on the state space model.
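A minimal Python sketch of one prediction step of such a discrete model with the additive error estimate (matrix contents are placeholders, not the identified values from Table I):

import numpy as np

n_zones = 9
A_dis = np.eye(n_zones)                    # placeholder system matrix
B_dis = np.zeros((n_zones, 2 * n_zones))   # placeholder input matrix (heating/cooling per zone)
S_dis = np.eye(n_zones)                    # placeholder disturbance matrix

def predict_step(x, u, d, eps_hat):
    """x(k+1) = A x(k) + B u(k) + S d(k) + eps_hat(k)."""
    return A_dis @ x + B_dis @ u + S_dis @ d + eps_hat

x_k = np.full(n_zones, 22.0)               # zone temperatures in deg C
u_k = np.zeros(2 * n_zones)                # allocated heating/cooling powers in kW
d_k = np.zeros(n_zones)                    # constant disturbances Q_other,i
eps_hat_k = np.zeros(n_zones)              # output of the per-zone error estimators
x_next = predict_step(x_k, u_k, d_k, eps_hat_k)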
III. CONTROL APPROACH
In this section, we describe the OCPs solved by the MPC on both the aggregator and distributor level.
A. Aggregator Control
The aggregator's goal is to regulate the building temperatures while minimizing the monetary costs. This is expressed as a weighted sum of cost terms. (Parameter relations for Table I: H_air,s = H_air,8 + H_air,9, β_bs = β_29 + β_58 + β_68, β_ij = β_ji, and all other β_ij not listed are zero, e. g. β_12 = 0.)
First, deviations of the building zone temperature ϑ_b from the comfort reference of 22 °C are penalized over the prediction horizon. The notation ϑ_b(n|k) refers to the value of ϑ_b(k+n) predicted at time step k, and N_pred is the number of steps in the prediction horizon. Second, the monetary costs are captured by a stage cost ℓ_mon describing the costs arising from gas usage and from buying (selling) electrical energy from (to) the public grid. Note that we consider German industry pricing, in which different prices for buying and selling as well as high peak costs apply. Details on the numerical values and on how J_mon can be reformulated using an epigraph formulation, which results in a linear programming problem, can be found in [16, pp. 24]. Third, the server zone is only kept within an acceptable temperature range by the cost term J_s,agg. The inputs and states are subject to constraints. Note that ϑ_b and ϑ_s are unconstrained to avoid infeasibilities in the later co-simulation without error compensation; both are only regulated through the respective cost functions.
Usually, the weights are chosen such that a reasonable compromise is determined [17]. Alternatively, multi-objective optimization can be used [18, 19], since the aggregator's OCP is always solvable quickly enough due to the hierarchization. However, here we choose w_comf = 0.99, w_mon = 0.01, w_s,agg = 0.99 to ensure that the controller tries to achieve ϑ_b = 22 °C and 15 °C ≤ ϑ_s ≤ 21 °C at all times. This simplifies the evaluation of the error compensation later on. Note that in the actual implementation, additional slack variables are used due to the reformulation of J_mon and of the max-terms of J_s,agg.
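As a sketch of the kind of slack reformulation alluded to here (our illustration, not the exact implementation in [16]), a max-type penalty on the server zone temperature can be replaced by a slack variable s(n|k) and linear constraints:

\[
  \min\; w_{s,\mathrm{agg}} \sum_{n} s(n|k)
  \quad \text{s.t.} \quad
  s(n|k) \ge \vartheta_s(n|k) - 21\,^{\circ}\mathrm{C}, \quad
  s(n|k) \ge 15\,^{\circ}\mathrm{C} - \vartheta_s(n|k), \quad
  s(n|k) \ge 0,
\]

which reproduces the original penalty at the optimum (only one of the two violation terms can be positive at a time) and keeps the OCP a linear program.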
Since we want to assess the compensation of the model error, we simulate with no prediction error. Namely, we assume perfect predictions for the PV power, the building's load and the ambient air temperature. For an assessment of the influence of real predictions for the facility under study, the reader is referred to [20] and [21].
B. Distributor Control
In the distributor, the total heating and cooling powers determined by the aggregator are split (distributed) between the individual zones. To this end, we use individual weights w_th,i, chosen proportional to the thermal capacities, with Σ_{i=1}^{9} w_th,i = 1.
The temperature goals are the same as in the aggregator, i. e. we punish temperature deviations from 22 °C in the 7 building zones. For the 2 server zones, the same temperature range applies as in the aggregator, and temperature deviations outside of this range are punished. The inputs are subject to box constraints which stem from the building's internal infrastructure, 0 ≤ Q̇_heat,i(k) ≤ 893.95 kW ∀ i = 1, ..., 7 (14a). Note that the server zones 8 and 9 have no heating systems, since they have to be cooled at all times. Furthermore, the total powers are constrained by the powers allocated by the aggregator, e. g. Σ_i Q̇_cool,i(k) = Q̇_cool,s(k) (15).
As in the aggregator, the zone temperatures have no hard constraints to avoid infeasibilities in the co-simulation with no error compensation.
Together, the distributor's OCP minimizes these cost terms subject to (4), (14) and (15), with u_dis = (u_dis(0|k), ..., u_dis(N_pred − 1|k)) being the sequence of control inputs and the same prediction horizon as in the aggregator. Again, the time step notations (k) and (k+1) in (4), (14) and (15) are to be read as (n|k) and (n+1|k), respectively.
IV. ERROR COMPENSATION METHODOLOGY
As previously described, the control approach is aware of a model error ϵ(k) in both the aggregator and the distributor. We aim to perform error compensation, i. e. we want to find an estimator ε̂(k) that approximates this error, such that ε̂(k) ≈ ϵ(k). Incorporating the estimator to approximate the model error should improve control performance. We use machine learning regression models to build these estimators. We train 9 estimators ε̂_i(k), i = 1, ..., 9, i. e. one for each temperature zone in the distributor. The estimators in the aggregator are obtained as weighted sums of the individual zone estimators. This is analogous to ϑ_b and ϑ_s being the weighted averages of the individual zone temperatures ϑ_i. The estimators predict the model error for only one time step at a time, i. e. N_p separate predictions are made to calculate ε̂_i(n|k) over the horizon n = 0, ..., N_p − 1 at each time step k.
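A sketch of how the zone-level estimates could be evaluated over the horizon and mapped onto the aggregator states (our illustration; the capacity-proportional weights are an assumption by analogy with the averaged temperatures, and the capacity values shown are hypothetical):

import numpy as np

C_th = np.array([123.0, 22.0, 18.6, 31.6, 51.9, 12.5, 34.6, 2.0, 2.0])  # hypothetical kWh/K
w_building = C_th[:7] / C_th[:7].sum()
w_server = C_th[7:] / C_th[7:].sum()

def aggregate_errors(eps_zone):
    """Map the 9 zone error estimates onto the aggregator states (0, eps_b, eps_s)."""
    eps_zone = np.asarray(eps_zone)
    eps_b = float(w_building @ eps_zone[:7])
    eps_s = float(w_server @ eps_zone[7:])
    return np.array([0.0, eps_b, eps_s])

def predict_errors_over_horizon(estimators, feature_fn, n_pred):
    """One-step estimator calls for n = 0 .. n_pred-1; each estimators[i] exposes .predict()."""
    eps = np.zeros((n_pred, 9))
    for n in range(n_pred):
        feats = feature_fn(n)                         # feature vector for prediction step n
        eps[n] = [est.predict(feats.reshape(1, -1))[0] for est in estimators]
    return eps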
A. Feature selection
The first step in training a regression model is feature selection. The target variables (i. e. labels) of the regression model are the measured model errors. In principle, these can be calculated as the difference between the observed state and the predicted state, i. e. ϵ(k) = x(k+1) − x(1|k). Generally, the resulting difference may also include prediction errors of the disturbances. This can be circumvented by recalculating x(1|k) using the state space model and measurements of the disturbances d(k) and inputs u(k).
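A minimal sketch of this label construction, recomputing the one-step prediction from measured inputs and disturbances so that disturbance forecast errors do not enter the labels:

import numpy as np

def model_error_labels(x_meas, u_meas, d_meas, A, B, S):
    """eps(k) = x_meas(k+1) - (A x_meas(k) + B u_meas(k) + S d_meas(k)) for all k."""
    eps = []
    for k in range(len(x_meas) - 1):
        x_pred = A @ x_meas[k] + B @ u_meas[k] + S @ d_meas[k]
        eps.append(x_meas[k + 1] - x_pred)
    return np.asarray(eps)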
For selecting the features, we want to ensure that the resulting estimators are easily (re)trainable and deployable. This means that we should only use features that are readily available and both measurable and predictable. Therefore, all disturbances of the MPC controller are good candidate features, as they are both measurable and predictable in the case of the proposed EMS. From a brief correlation analysis between measured errors ϵ(k) and measured disturbances (omitted for brevity), we deduced that a set of 5 features should provide a good basis for training, i. e.
1) the current ambient temperature ϑ_air(k), for unaccounted heat flows to/from the environment (e.g. inaccurate heat transfer coefficients; warm/cold air from ventilation), 2) the past values of the ambient temperature ϑ_air(k − 1), ..., ϑ_air(k − n_hist), for heat diffusion from other zones, 3) the total building load P_dem(k), as electrical consumption is transformed into heat and is correlated to occupancy, 4) the time of day ToD(k), and 5) the day of year DoY(k), which capture the daily and seasonal patterns used by the models described below. A sketch of how such a feature vector could be assembled is given after this list.
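The following sketch shows one way such a feature vector could be assembled for a given time step; the function name and array layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_features(theta_air, p_dem, tod, doy, k, n_hist=2):
    """theta_air, p_dem, tod, doy: 1-D arrays of measured/predicted series indexed by time step.
    Returns the feature vector for time step k."""
    lagged_air = [theta_air[k - j] for j in range(1, n_hist + 1)]  # past ambient temperatures
    return np.array([theta_air[k], *lagged_air, p_dem[k], tod[k], doy[k]])
```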
B. Models
Based on these features, we propose two candidate regression models as estimators: 1) A linear regression model and 2) an XGBoost regression model. XGBoost (eXtreme Gradient Boosting) is an open-source software library that provides an efficient and effective implementation of the gradient boosting framework for machine learning [22]. It uses gradient boosting [23] to improve the performance of decision trees, which can be used for both regression and classification problems.
For the first estimator, we propose a linear model in the above features for each zone i, with n_hist = 2. The parameters α_i, β_{i,j}, γ_{i,l}, δ_{i,l}, κ_i are fitted through least-squares regression. The features ToD(k) and DoY(k) are transformed using a cyclical transformation that maps them uniquely to values between −1 and 1, preserving the cyclical nature of the time of day and the seasons.
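A common way to implement such a cyclical transformation is a sine/cosine encoding; the sketch below is a generic illustration of this idea and is not necessarily the exact transformation used in the paper.

```python
import numpy as np

def cyclical_encode(value, period):
    """Map a cyclical quantity (e.g. hour of day with period 24,
    day of year with period 365) to a unique point on the unit circle."""
    angle = 2.0 * np.pi * value / period
    return np.sin(angle), np.cos(angle)   # both components lie in [-1, 1]

# Example: 23:00 and 01:00 end up close together, unlike with a raw hour feature.
print(cyclical_encode(23, 24), cyclical_encode(1, 24))
```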
For the second estimator, ε̂_i^xgb(k), we train an XGBoost regressor for each zone i. For this estimator, we use all aforementioned features and n_hist = 2. Contrary to the linear model, we do not apply a cyclical transformation to the time features, as this is not needed with tree-based regression models and can actually be detrimental.
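As a minimal sketch of how the two per-zone estimator types could be fitted with the libraries named in Section IV-C (scikit-learn and the XGBoost Python package), one might write the following; the hyperparameters and variable names are illustrative assumptions.

```python
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor

def fit_zone_estimators(X_train, Y_train, kind="xgb"):
    """X_train: (n_samples, n_features); Y_train: (n_samples, 9) model errors per zone.
    Returns one fitted regressor per temperature zone."""
    models = []
    for i in range(Y_train.shape[1]):
        if kind == "linear":
            model = LinearRegression()
        else:
            model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
        model.fit(X_train, Y_train[:, i])
        models.append(model)
    return models
```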
C. Training and evaluation
To generate the training data for training the estimators, we use the digital twin model described in Section II-B. We simulate a full calendar year using the digital twin in a SiL setup together with the described control approach and without compensation, i.e. ε̂(k) = 0. In this setup, the digital twin running in SimulationX is connected through an FMU (functional mock-up unit) to a Python bridge, linking it to the MPC controller implemented in MATLAB using the PARODIS framework [24]. At each discrete time step k, the controller receives the updated system states from the digital twin model, determines the control input u(k) and applies it to the digital twin. For the simulation in the digital twin and the predictions for the MPC controller, we use measurement data for the weather and electrical demands collected for the year 2021 at the Honda R&D facility in Offenbach, Germany. During the simulation, we collect both the states predicted by the MPC as well as the realized (i.e. measured) states. From these we calculate the model error ϵ(k) for training. In a real-world setting, one would run a baseline controller in parallel to a controller with an estimator pre-trained in a digital twin setting, to be able to calculate raw model errors and retrain estimators on new data.
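Conceptually, the data-collection loop of this software-in-the-loop setup can be summarized as in the sketch below; the objects `twin` and `mpc` and their methods are placeholders for the actual FMU/PARODIS interfaces, which are not reproduced here.

```python
def collect_training_data(twin, mpc, forecasts, n_steps):
    """Sketch of the SiL data-collection loop; `twin` and `mpc` stand for the
    FMU-wrapped digital twin and the MPC controller (interfaces are hypothetical)."""
    records = []
    for k in range(n_steps):
        x_k = twin.get_states()                      # updated states from the digital twin
        u_k, x_pred = mpc.solve(x_k, forecasts[k])   # control input and predicted next state
        twin.apply_input(u_k)                        # apply input, advance the twin one step
        records.append((x_k, u_k, x_pred))           # stored for model-error labels later
    return records
```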
One of the main error sources between digital twin models of buildings and reality, next to occupant behavior, is the estimated heat capacities of the building zones [25]. Therefore, we benchmark the robustness of our proposed error compensation against this error by creating additional simulation scenarios in which we change the heat capacities in the model of the MPC, while keeping them the same in the digital twin model. Overall, we examine four scenarios, namely 1) heat capacities in the MPC model are exact, 2) heat capacities in the MPC are 50 % of the digital twin values, 3) heat capacities in the MPC are 150 % of the digital twin values, 4) the total heat capacity of the building is exact, but the individual capacities are shifted randomly according to Algorithm 1.
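Algorithm 1 itself is not reproduced in this excerpt; purely as an illustration, one simple way to shift individual heat capacities randomly while keeping the building's total heat capacity exact could look like this.

```python
import numpy as np

def shift_capacities(c, rel_spread=0.3, rng=None):
    """Randomly perturb zone heat capacities, then rescale so the total is preserved.
    This is an illustrative stand-in for Algorithm 1, not the paper's procedure."""
    rng = np.random.default_rng() if rng is None else rng
    perturbed = c * rng.uniform(1.0 - rel_spread, 1.0 + rel_spread, size=len(c))
    return perturbed * (np.sum(c) / np.sum(perturbed))
```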
To train the estimators, we use a 70/30 train-test split on the collected data, and use scikit-learn [26] to fit the linear regressor and the scikit-learn interface of the XGBoost Python library to train the XGBoost estimator, respectively. Table II shows the performance of the trained estimators on the training and test data sets in terms of the mean absolute error (MAE) of the residual model error ε̂_i(k) − ϵ_i(k). We calculate the overall MAE as the weighted sum of the MAEs of the individual temperature zones over the data set, i.e. MAE = ∑_{i=1}^{9} w_{th,i} MAE_i.
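A short sketch of this evaluation, assuming (an assumption on our part) that the thermal-capacity weights w_th,i are used as the zone weights, could look as follows.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

def weighted_mae(models, X, Y, w_th):
    """Overall MAE as the weighted sum of the per-zone MAEs of the residual error."""
    zone_mae = [mean_absolute_error(Y[:, i], m.predict(X)) for i, m in enumerate(models)]
    return float(np.dot(w_th, zone_mae)), zone_mae
```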
The linear estimator shows fair performance on both data sets. The XGBoost estimator shows very good performance on the training set and similar performance on the test set. This suggests that the estimator is not overfitting.
V. SIMULATION RESULTS
To evaluate the performance of the proposed error compensation approach, we applied the trained estimators in the previously described SiL simulation. First, we ran the baseline 2021 simulation with active compensation. Figure 1 shows the resulting average temperature of the building zones for the baseline case compared to linear compensation and XGBoost compensation. This shows that the control performance regarding the comfort costs in the aggregator (i.e. deviation from the setpoint of 22 °C) is significantly improved with active error compensation. Figure 2 shows the residual errors, i.e. the left-over model error, for zones 1 and 9 in the baseline case compared to the active compensation using the XGBoost estimator. This suggests that the estimator manages to approximate the actual model error also in the SiL simulation. This is confirmed by the overall MAE of the residual errors, as shown in Table II. The performance of the estimators is further illustrated in Figure 3, where the MAE of each of the 9 temperature zones is shown and compared between the three cases. This again shows that both estimators manage to approximate the model error well in all zones, with XGBoost exhibiting the best performance. As motivated in Section IV-C, we want to test the robustness of the approach against errors in the estimated heat capacities in the controller. We therefore simulated the three described scenarios, i.e. 1) heat capacities at 50 %, 2) heat capacities at 150 %, and 3) randomly shifted heat capacities in the MPC, with error compensation. We only simulated with the better performing XGBoost estimator. The results are again shown in Table II. In all cases, the estimator manages to significantly reduce the residual model error, while yielding only slightly worse performance than with exact heat capacities. Overall, the results suggest that the approach is reasonably robust against this type of error.
VI. CONCLUSION & OUTLOOK
We have shown that our proposed data-driven error compensation approach can significantly reduce the residual model error between the proposed hierarchical MPC controller and a digital twin building model in a SiL simulation. We have proposed two simple regression-based error estimator models, which achieve an error reduction of up to 56 % (linear model) and 80 % (XGBoost) in a baseline full calendar year simulation. We have shown that the proposed approach is robust against model errors of heat capacities in the controller. Furthermore, we have shown that the regression-based estimators generalize reasonably well by applying the estimators trained on 2021 measurement data to a simulation based on 2022 measurement data. Despite significant change in occupant behavior between 2021 and 2022, both estimators exhibit good performance.
While the proposed error compensation approach achieves significant model error reduction in all zones and improved control performance of the overall building temperature, as shown in Figure 1, the control performance in individual zones is still lacking. This is illustrated in Figure 4, where the temperature of zone 1 over the course of the year 2021 is shown, with and without error compensation. The control performance is only marginally better in the case with active error compensation. This is due to the structure of the hierarchical control approach: The aggregator derives a heating and cooling budget by considering the weighted sum of the individual zone errors of the distributor. Thereby, positive and negative components cancel out. In turn, not enough heating and cooling budget is allocated for compensation in the distributor. This problem could be resolved in future work by extending the control scheme by introducing additional communication between the two layers.
For this paper, we have used data from a full calendar year for training. However, further investigation could be conducted to understand how the amount of training data relates to the performance of the error compensation, to determine how much data is needed to (re)train compensators in a real life setting.
|
2023-06-16T01:15:38.174Z
|
2023-06-15T00:00:00.000
|
{
"year": 2023,
"sha1": "85acd0545ae16875b2294b5f740ed651a03a3b3e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "d233565ede87dea224beab01f25cd28559d4d751",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
}
|
229041024
|
pes2o/s2orc
|
v3-fos-license
|
THE ROLE OF BIOLOGICALLY ACTIVE SUBSTANCES AND SHORT AT-FRAGMENTS OF NUCLEIC ACIDS IN THE GENETIC TRANSCRIPTION PROCESS
… Abstract N 352A. P. 175. 6. Kurchii B.A. On the molecular nature of the lock and the key for the eukaryal genes/clusters. Proceedings of the IX Ukrainian Biochemical Congress (24–27 October 2006b, Kharkiv, Ukraine). P. 49–50. 7. Bernard V., Brunaud V., Lecharny A. TC-motifs at the TATA-box expected position in plant genes: a novel class of motifs involved in the transcription regulation. BMC Genomics. 2010. Vol. 11. P. 166. URL: http://www.biomedcentral.com/14712164/11/166. 8. Reidt W., Wohlfarth T., Ellerstrom M., Czihal A., Tewes A., Ezcurra I., Rask L., Baumlein H. Gene regulation during late embryogenesis: the RY motif of maturation-specific gene promoters is a direct target of the FUS3 gene product. Plant J. 2000. Vol. 21. P. 401–408. 9. de Pater S., Katagiri F., Kijne J., Chua N.-H. bZIP proteins bind to a palindromic sequence without an ACGT core located in a seed-specific element of the pea lectin promoter. Plant J. 1994. Vol. 6. P. 133–140. 10. Lefevre J.-F., Lane A.N., Jardetzky O. A temperature dependent transition in the Pribnow box of the trp promoter. FEBS Lett. 1985. Vol. 190. P. 37–40. 11. Chou K.-C., Kézdy F.J., Reusser F. Kinetics of processive nucleic acid polymerases and nucleases. Anal. Biochem. 1994. Vol. 221. P. 217–230.
KURCHII B.O. Irpin Economic College, Ukraine, 08200, Irpin, vul. Gagarina 9, e-mail: kurchii@ukr.net. THE ROLE OF BIOLOGICALLY ACTIVE SUBSTANCES AND SHORT AT-FRAGMENTS OF NUCLEIC ACIDS IN THE PROCESS OF GENETIC TRANSCRIPTION. Specific fragments termed functionally reactive groups (or descriptors) in the molecules of biologically active substances (BASs) are described. These fragments are characterized by the presence of an active hydrogen atom or of an unsaturated function. It is concluded that BASs are the most important factors that cooperate with gene keys and gene locks to initiate transcription. Gene keys are nucleic acids, not proteins. It is postulated that the cell membrane may serve as a depot for gene keys at the onset of oxidative stress. During the recession phase of oxidative stress, new gene keys are synthesized for new transcription acts. Key words: BASs, descriptors, gene keys, gene locks, transcription.
The presence of functionally active groups (or descriptors) in biologically active substances (bioregulators, BASs) is mandatory for the manifestation of their in vivo biological activity. The most typical descriptors of BASs are presented in Fig. 1 and 2 [1][2][3].
Substances given as drugs or pesticides that are inactive must be activated in vivo. Formation of active metabolites of BASs, also called biotransformation, may occur in several ways. Thereby, the inactive BASs require in vivo transformation by endogenous free radicals, or by enzymes, into free radicals. BASs in the free radical state can react with essential cellular constituents such as lipids, DNA, RNA and proteins. Hence, an in vivo activated BAS (the free radical of a BAS) can act as an initiator (In˚) of free radical chain reactions in the cells. Polyunsaturated fatty acids (LH) present in the cellular membranes can be easily oxidized, and membrane destruction is the result of this action. I have proposed that the transcription process is carried out with the participation of gene keys (analogs of primers in PCR), which can be synthesized in advance and stored within the cell membranes that serve as their depot [3], and which can be liberated from the membranes under oxidative stress in the cells. Membrane destruction results in the liberation of the gene key from the cell membranes into the cytoplasm, from where it can move to the nucleus (Fig. 3). This scenario may be characteristic of the onset of oxidative stress caused by free radicals of BASs. New gene keys are synthesized at genes for new transcription acts during the recession phase of oxidative stress.
In this paper, I do not consider the currently known mechanisms of RNA synthesis on a DNA template. Only one stage of the RNA synthesis process is described here: recognition of the required gene (or cluster of genes) from which RNA will be synthesized. RNA synthesis requires single-stranded DNA, whereas the gene consists of double-stranded DNA, on which RNA synthesis is not known to occur. In order to start the transcription process, RNA polymerase must find the necessary gene. The gene key assists RNA polymerase in selecting the appropriate gene lock at the gene promoter [3-6]. Gene promoters are characterized by the presence of nucleotide fragments such as the TATA box PLM and TATA-PLM motifs [7], the CATG-CATG motif [8], the C-box (ATGACGTCAT) and G-box (GACACGTGTC) [9], and the Pribnow box (consensus sequence TATAAT) [10]. These structures are disposed at both ends of the DNA strand.
The gene key that contains TA (or TATA box) nucleotide pairs is represented in vivo by single-stranded DNA or RNA (which must contain adenine nucleotides). The TA nucleotide pairs of the gene key can form a hydrogen bond with one of the DNA strands. The bond is formed between O(C2) of thymine within the gene lock and H(C6) of adenine within the gene key (Fig. 4 and 5).
Analysis of the structures in Figures 4 and 5 indicates the presence of all four DNA nucleotides (A-T and G-C). At first glance there is nothing special, except for the lack of a hydrogen bond at one of the thymine nucleotide oxygens. But this feature is unique: the hydrogen of the H2N-group at C6 of adenine can form a hydrogen bond with the oxygen at C4 or C2 of thymine. Moreover, only a single-stranded DNA having an adenine nucleotide in its structure can form this hydrogen bond with the double-stranded DNA (the gene). Note that neither thymine nor uracil can form this hydrogen bond with the double-stranded DNA, owing to the bond already formed at the N1 and C6 atoms of adenine. However, uracil of single-stranded RNA can form the hydrogen bond with the adenine of single-stranded DNA.
Thus, hydrogen bonds of the adenine nucleotide (A) are most easily formed with thymine (T) nucleotides, which must be present in the DNA of the gene in at least two places. Ideally, the linear length of the gene key should match the linear length of the gene lock. Accordingly, at least two adenine (A) nucleotides must be present at the beginning and at the end of the gene key.
Theoretically, the gene key can be attached either to the 5′ end (Fig. 4) or to the 3′ end of the DNA strands (Fig. 5). It is believed that the 3′-end variant may be preferred [11]. However, if the gene key is needed to separate the DNA strands, then it can also attach to the strand at the 5′ end of the DNA (Fig. 5).
Conclusions
In conclusion, it should be emphasized that when considering transcription of DNA with the participation of receptors, the role of BASs in the initiation of oxidative processes in the cell membranes should be taken into account. Unfortunately, in the scenarios of DNA transcription involving protein receptors described in the literature, the role of BASs in the initiation of oxidative processes in membranes is absent. Moreover, in the described mechanisms of the participation of protein receptors in transcription, the role of BASs is not mentioned at all. The role of BASs and protein receptors in the processes of transcription inhibition, and in the death of living beings under the influence of large concentrations of BASs, is likewise not mentioned.
|
2020-11-05T09:08:42.360Z
|
2020-09-01T00:00:00.000
|
{
"year": 2020,
"sha1": "08206fa53e59b3411b71647d9b156092ae0b1886",
"oa_license": null,
"oa_url": "http://utgis.org.ua/journals/index.php/Faktory/article/download/1250/1324",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9f217e3be344db1c61593b50ee90ac8b4b3b49b2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
30095625
|
pes2o/s2orc
|
v3-fos-license
|
Human Tissue Inhibitor of Metalloproteinases 3 Interacts with Both the N- and C-terminal Domains of Gelatinases A and B
We compared the association constants of tissue inhibitor of metalloproteinases (TIMP)-3 with various matrix metalloproteinases with those for TIMP-1 and TIMP-2 using a continuous assay. TIMP-3 behaved more like TIMP-2 than TIMP-1, showing rapid association with gelatinases A and B. Experiments with the N-terminal domain of gelatinase A, the isolated C-terminal domain, or an inactive progelatinase A mutant showed that the hemopexin domain of gelatinase A makes an important contribution to the interaction with TIMP-3. The exchange of portions of the gelatinase A hemopexin domain with that of stromelysin revealed that residues 568–631 of gelatinase A were required for rapid association with TIMP-3. The N-terminal domain of gelatinase B alone also showed slower association with TIMP-3, again implying significant C-domain interactions. The isolation of complexes between TIMP-3 and progelatinases A and B on gelatin-agarose demonstrated that TIMP-3 binds to both proenzymes. We analyzed the effect of various polyanions on the inhibitory activity of TIMP-3 in our soluble assay. The association rate was increased by dextran sulfate, heparin, and heparan sulfate, but not by dermatan sulfate or hyaluronic acid. Because TIMP-3 is sequestered in the extracellular matrix, the presence of certain heparan sulfate proteoglycans could enhance its inhibitory capacity.
The tissue inhibitors of metalloproteinases (TIMPs) 1 are specific protein inhibitors of the matrix metalloproteinases (MMPs), a group of zinc-dependent enzymes that include collagenases, gelatinases, and stromelysins. Four forms of human TIMP have been cloned: TIMP-1 (1), TIMP-2 (2), TIMP-3 (3)(4)(5)(6), and, more recently, TIMP-4 (7). TIMP-1 and TIMP-2 are secreted by many cell types in culture and are found in body fluids and tissue extracts. TIMP-3 is unique in that it appears to be a component of the extracellular matrix (8 -10) and occurs in relatively small amounts, possibly being expressed during specific cellular events (11).
The TIMPs have comparable abilities to inhibit the active forms of the MMPs when assessed using macromolecular substrates (12, 13) and have been shown to make tight binding noncovalent complexes with active MMPs with a 1:1 stoichiometry (14-17). The inhibitors have related primary and secondary structures, consisting of an N-terminal subdomain of three disulfide bonded loops and a smaller C-terminal region also containing three loops (18-20). The N-terminal domain of TIMP-1 and TIMP-2 can act as a functional inhibitor (19, 21, 22), interacting with the catalytic domain of the enzymes such that competition with low molecular weight substrate analogue inhibitors can be observed (23). Using peptide substrate assays, it has been possible to demonstrate that TIMP-MMP complexes interact with Ki values of 10^-9 to 10^-12 M (24). Comparative studies of the association rates of TIMP-1 and TIMP-2 with different members of the MMP family in our laboratory have shown exceptionally strong C-terminal domain interactions between TIMP-1 and gelatinase B and between TIMP-2 and gelatinase A, suggesting that complexes between the respective pro forms of these enzymes, the active sites of which are inaccessible, and the inhibitors can also occur (20, 25, 26). This supports other biochemical studies of these complexes (27-30).
In this study, we have assessed the ability of TIMP-3 to associate with active MMPs using a kinetic method, and we have compared this with TIMP-1 and TIMP-2. We have also investigated the contribution of the C-terminal domains of both gelatinase A and gelatinase B to the interaction with TIMP-3, because this has important implications for the regulation of proenzyme activation. We have tested the effect of heparin and other polyanions on TIMP-3 activity in our soluble kinetic assay to determine whether interaction with similar components of the extracellular matrix could affect the capacity of TIMP-3 to inhibit MMPs.
Kinetic Studies—Active enzymes were active site titrated against a standard preparation of TIMP-1 (20). TIMP-2 and TIMP-3 were active site titrated with stromelysin-1 that had been titrated against the standard TIMP-1. Assays were performed at 25 °C for gelatinase A and gelatinase B or at 37 °C for stromelysin-1 and matrilysin in a buffer containing 50 mM Tris-HCl, pH 7.5, 100 mM NaCl, 10 mM CaCl2, and 0.05% Brij 35 (fluorometry buffer). Hydrolysis of 1 μM substrate Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2 for the gelatinases and matrilysin or Mca-Pro-Leu-Ala-Nva-Dpa-Ala-Arg-NH2 for stromelysin-1 was followed using a Perkin Elmer LS 50B fluorescence spectrometer (20, 25, 38). Inhibition of the matrix metalloproteinases by TIMPs was analyzed under pseudo-first-order conditions using suitable enzyme:inhibitor ratios as described previously (20, 25). Association rate constants (kon) were estimated from the progress curves using published equations (20, 25) and the Enzfitter (Biosoft) or Grafit (Erithacus Software) program. The effect of ionic strength was analyzed by increasing the concentration of NaCl in the standard buffer from 0.1 M to 0.25 M and 0.5 M. For competition assays, various concentrations of (Δ1-414)gelatinase A or proE375A-gelatinase A were added to the cuvette with the gelatinase A before the addition of TIMP-2 or TIMP-3. Because the Ki values for the TIMP:gelatinase A interaction are unknown, the Ki and the Kd for the TIMP:competitor interaction are expressed as relative values using an arbitrary value of 1 for the Ki. The relationship between the two dissociation constants is given in Equation 1, Ki/Kd = ([Ef][FI])/([EI][Ff]), in which EI and Ef are the TIMP:gelatinase A complex and free gelatinase A, respectively, whereas FI and Ff are the TIMP:competitor complex and free competitor, respectively. Equation 1 can be rewritten in terms of the total reagent concentrations Ft, Et, and It (Equation 2). In our assays, Ft >> FI, and If is negligible, so Equation 2 can be simplified to Equation 3, from which the relative Kd can be readily calculated.
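The published progress-curve equations themselves are not reproduced in this excerpt; purely as an illustrative sketch, the standard pseudo-first-order analysis of slow-binding inhibition progress curves (product versus time fitted to an exponential approach to a steady-state rate, with kon = kobs/[I] when [I] >> [E]) could be implemented as follows. The function and variable names are our own.

```python
import numpy as np
from scipy.optimize import curve_fit

def progress_curve(t, v0, vs, k_obs, offset=0.0):
    """Standard slow-binding progress curve:
    P(t) = vs*t + (v0 - vs)*(1 - exp(-k_obs*t))/k_obs + offset."""
    return vs * t + (v0 - vs) * (1.0 - np.exp(-k_obs * t)) / k_obs + offset

def estimate_kon(t, signal, inhibitor_conc):
    """Fit a single progress curve and convert k_obs to an apparent association rate constant."""
    (v0, vs, k_obs, offset), _ = curve_fit(progress_curve, t, signal,
                                           p0=[1.0, 0.1, 1e-3, signal[0]])
    return k_obs / inhibitor_conc   # k_on in M^-1 s^-1 if [I] is given in molar units
```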
The effect of various polyanions on the rate of association was carried out using a constant amount of enzyme and inhibitor (concentrations similar to those used to calculate the k on values listed in Table I) with increasing concentrations of each test polyanion in the fluorometry buffer.
Binding of TIMP-3 to Progelatinases-TIMP-3 was incubated in the presence or absence of progelatinases in TCABN for 1-2 h at 25°C. Complexes with progelatinases were isolated on gelatin-Sepharose that had been blocked with 0.2 mg/ml bovine serum albumin in TCABN. The column was washed with TCABN, and bound material was eluted with TCABN containing 15% dimethyl sulfoxide. Eluates were analyzed by rabbit collagenase diffuse collagen fibril assays (39) and reverse zymography (40).
Binding of TIMPs to Heparin-Agarose—Approximately 1 μg of each TIMP was applied to heparin-agarose (blocked with 0.2 mg/ml bovine serum albumin) in TCABN buffer. Columns were washed with TCABN, and proteins were eluted stepwise with the same buffer containing 0.5 M NaCl and then 2 M NaCl. Bound and unbound fractions were analyzed for TIMP content by SDS-polyacrylamide gel electrophoresis and silver staining and by rabbit collagenase diffuse collagen fibril assay (39).
Deglycosylation of TIMPs—5 μg of TIMP-3 or TIMP-1 were incubated for 4 h at 37 °C in the presence or absence of 1250 units of PNGase F (New England Biolabs). TIMPs were diluted in fluorometry buffer and used in assays as above.
RESULTS
We analyzed the inhibition of active gelatinase A, gelatinase B, stromelysin-1, and matrilysin by TIMP-3 using continuous fluorometric assays with the appropriate fluorescent peptide substrate (see "Experimental Procedures"). As discussed previously for TIMP-1 and TIMP-2 (20, 25), we were unable to obtain accurate values of Ki (<200 pM). Our measurements were therefore limited to the association rate constants (kon) at low reagent concentrations, over a range where the observed rate was linear with TIMP concentration. In Table I, the data are compared with kon values for TIMP-2 that were re-assayed at the same time and kon values for TIMP-1 derived from our previous work (25, 26). All three TIMPs bound relatively slowly to stromelysin-1 and matrilysin. In general, we found that TIMP-3 was more like TIMP-2 than TIMP-1, showing rapid binding to gelatinase A and slower association with gelatinase B. The contribution of the C-terminal domains of gelatinase A and gelatinase B to TIMP-3 binding was assessed by measuring the association rate of the isolated catalytic domains, (Δ418-631)gelatinase A and (Δ426-688)gelatinase B. Whereas TIMP-2 binding was only affected by the loss of the gelatinase A C-terminal domain, TIMP-3 association was slower in the absence of the C-terminal domains of both gelatinase A and gelatinase B (1400-fold and 12.5-fold, respectively).
The effect of ionic strength on the rate of association of gelatinase A and TIMP-3 was analyzed at increasing NaCl concentrations. Similar to TIMP-2 (20), there was a marked decrease in kon from 16.0 × 10^6 M^-1 s^-1 in 0.1 M NaCl to 9.6 × 10^6 M^-1 s^-1 (0.25 M NaCl) and 7.3 × 10^6 M^-1 s^-1 (0.5 M NaCl), suggesting that ionic interactions are involved in the association of gelatinase A and TIMP-3.
The contribution of the C-terminal domain of gelatinase A to TIMP-3 binding was assessed by measuring the effect of adding increasing amounts of (Δ1-414)gelatinase A (the isolated C-terminal domain) or proE375A-gelatinase A (an inactive form of progelatinase A) to the inhibition assay and observing the effect on the association rate for active full-length gelatinase A. The effect on inhibition by TIMP-2 was also measured for comparison (Table I legend: association rate constants (kon) were estimated from the inhibition progress curves using equations described previously (20, 25); the data for TIMP-1 were taken from our previous work (25, 26)). The increase in the final steady-state velocity and the decreased rate of inhibition observed with increasing concentrations of (Δ1-414)gelatinase A and proE375A-gelatinase A were deduced to be due to an effective decrease in TIMP-3 concentration by binding to the C-terminal domain, as was seen for TIMP-2 (20). The data were analyzed as described under "Experimental Procedures" to obtain an estimate for Kd, the dissociation constant, relative to the Ki for the appropriate TIMP:gelatinase A interaction (Table II). The interaction of TIMP-3 with (Δ1-414)gelatinase A was significant but was around 16-fold weaker than the interaction of TIMP-2. The interaction between TIMP-3 and proE375A-gelatinase A was about five times weaker than that for TIMP-2. In both cases, the interaction of the TIMPs with proE375A-gelatinase A was stronger than that with the isolated C-terminal domain, which suggests that additional sites of interaction exist in the proenzyme-TIMP complex.
To further characterize the region of gelatinase A responsible for the C-terminal domain interaction, we used two C-terminal domain mutants: regions of the C-terminal domain of gelatinase A were exchanged for the corresponding regions of the C-terminal domain of stromelysin-1, which does not interact significantly with the TIMPs (25). As was the case for TIMP-2 (36), replacement of residues 418-474 in N-G.C-SGG did not affect the rate of association with TIMP-3 (kon = 17.0 × 10^6 M^-1 s^-1, compared with 16.5 × 10^6 M^-1 s^-1 for gelatinase A). However, the additional substitution of residues 568-631 in N-G.C-SGS reduced the rate of association of TIMP-3 with gelatinase A by a factor of 100, to 0.1 × 10^6 M^-1 s^-1, suggesting that residues 568-631 of gelatinase A are crucial for the interaction with TIMP-3.
Because the kinetic data suggested that TIMP-3 has significant interactions with the hemopexin domains of gelatinase A and gelatinase B, we assessed the ability of TIMP-3 to bind to various pro form constructs of gelatinases A and B, in which normal catalytic domain interactions are precluded due to the presence of the propeptide domain (Table III). A small amount of TIMP-3 alone bound to the gelatin-Sepharose matrix. Enhanced retention of TIMP-3 was observed after preincubation with progelatinase A or progelatinase B, suggesting that TIMP-3 shows significant binding to both proenzymes. TIMP-3 was recovered in the unbound fraction after incubation with pro(Δ418-631)gelatinase A or pro(Δ426-688)gelatinase B. TIMP-3 bound to gelatin-Sepharose after preincubation with proN-G.C-SGG but did not bind if proN-G.C-SGS or proN-GL.C-SL were used. TIMP-2 was retained on the gelatin-Sepharose after incubation with progelatinase A but not after incubation with progelatinase B.
The Effect of Heparin on the Rate of Association of TIMPs with Gelatinase A—Increasing concentrations of heparin in the fluorometry buffer reproducibly resulted in a bell-shaped distribution for the association rate of TIMP-3 with gelatinase A (Fig. 1a). As the heparin concentration was increased to 100 μg/ml, the association rate increased 3.7-fold compared with the kon measured in the absence of heparin. Further increases in the amount of heparin resulted in a decrease in the rate of association to levels approaching that observed in the absence of heparin. The addition of heparin to TIMP-2 and gelatinase A had a negligible effect on the association rate. The association rate of TIMP-1 and gelatinase A was increased 4.6-fold with 800 μg/ml heparin, but the distribution was not bell-shaped, as it was for TIMP-3. Although TIMP-1 appears to be more dramatically affected than TIMP-3 due to the manner in which the data are presented, the kon for TIMP-3 increased to 10^8 M^-1 s^-1 and exceeds the maximum rate accurately measurable using this system, whereas values for TIMP-1 plateaued at around 2 × 10^7 M^-1 s^-1. Preincubation of either TIMP-3 or gelatinase A with heparin or the addition of heparin to the buffer did not affect the association rate obtained. Increasing amounts of heparin did not affect the association rate of (Δ418-631)gelatinase A and TIMP-3 (data not shown). SDS-polyacrylamide gel electrophoresis and silver staining (data not shown) and a rabbit collagenase diffuse collagen fibril assay revealed that TIMP-1 and TIMP-2 did not bind at all to heparin-agarose in 0.15 M NaCl, whereas TIMP-3 did bind; 95% was eluted by 0.5 M NaCl and 5% was eluted by 2 M NaCl.
Table II. Binding of (Δ1-414)gelatinase A or proE375A-gelatinase A to TIMP-3 and TIMP-2. Inhibition assays were carried out at 25 °C with increasing amounts of either (Δ1-414)gelatinase A or proE375A-gelatinase A. The dissociation constant (Kd) was estimated as described under "Experimental Procedures" and is given as a relative value compared to that of the appropriate TIMP:gelatinase A interaction. n is the number of assays carried out.
Table III. Analysis of complex formation between TIMP-3 and progelatinases A and B using gelatin-Sepharose. Each proenzyme was incubated for 1-2 h at 25 °C in an approximate 1:1 molar ratio with 1 μg of TIMP-2 or TIMP-3. Complexes of TIMP bound to proenzyme were isolated on gelatin-Sepharose that had been blocked with bovine serum albumin. TIMP activity in the bound and unbound fractions was measured in a rabbit collagenase diffuse fibril assay (39), and values are expressed as the percentage of total activity recovered for each incubation.
The TIMPs are differentially glycosylated by our NS0 cell expression system: TIMP-2 is nonglycosylated, TIMP-1 is glycosylated, and TIMP-3 is produced in glycosylated and nonglycosylated forms. The potential role of glycosylation in binding to the polyanions was investigated by comparing the effect of heparin on the inhibition of gelatinase A by TIMP-1 and TIMP-3 in their glycosylated and deglycosylated forms. After treatment of TIMP-1 and TIMP-3 with PNGase F, which cleaves off the carbohydrate at its link with asparagine, there was a decrease in the apparent molecular weight of both TIMP-3 and TIMP-1, giving a distinct band on a silver-stained 12% polyacrylamide gel, but no decrease in apparent molecular weight where the inhibitors were incubated under the same conditions without PNGase F (data not shown). This suggests that the carbohydrate had been removed. Using the collagenase fibril assay, we found that 98% of both glycosylated and deglycosylated TIMP-3 bound heparin-agarose in 0.15 M NaCl and both were eluted by 0.5 M NaCl, whereas neither form of TIMP-1 bound significantly. In the fluorimetric assay, PNGase F had no activity against Mca-Pro-Leu-Gly-Leu-Dpa-Ala-Arg-NH2, and neither gelatinase A activity nor the rate of inhibition of gelatinase A by TIMP-1 was affected by the addition of PNGase F (data not shown). Deglycosylation of TIMP-1 and TIMP-3 had no effect on the rate of inhibition of gelatinase A in either the presence or absence of heparin (data not shown).
Hence, it appears that the carbohydrate component of TIMP-1 and TIMP-3 is not responsible for the effect seen with heparin.
To confirm that the effect of heparin is mediated by ionic interactions, the ionic strength of the fluorimetry buffer was increased, and the association rate of TIMP-3 and gelatinase A was measured. Increasing the NaCl concentration from 0.1 M to 0.25 M or 0.5 M in the presence of 10 μg/ml heparin abolished the effect of heparin on the association rate: in 0.1 M NaCl, heparin increased the kon 1.4-fold, whereas in 0.25 M NaCl and 0.5 M NaCl, the kon was identical in the presence and absence of heparin. TIMP-3 was also eluted from heparin-agarose by 0.5 M NaCl. Raising the ionic strength had an identical effect on deglycosylated TIMP-3 in the presence or absence of heparin (data not shown). This indicates that the effect of heparin is mediated by ionic interactions, probably between its negatively charged sulfate groups and the positively charged residues in TIMP-3 and gelatinase A.
The Effect of Other Polyanions on the Inhibition of Gelatinase A by TIMP-3—The effect of various polyanions on the rate of association of TIMP-3 with gelatinase A was tested using the standard fluorometric assay. Like heparin, dextran sulfate resulted in a bell-shaped distribution for the association rate over the concentration range studied, with an increase in kon of 4.4-fold at 50 μg/ml dextran sulfate (Fig. 1b). Heparan sulfate resulted in a slight increase in the association rate over the relatively small concentration range studied (Fig. 1b). Hyaluronic acid and dermatan sulfate had no effect, although the former did result in an increase in the steady-state rate, probably due to increasing viscosity (data not shown). There was no effect on the rate of association of TIMP-3 and gelatinase A when de-N-sulfated heparin was used (Fig. 1c).
The Effect of Heparin on the Rate of Association of TIMP-3 and Other MMPs—The association rate of TIMP-3 and stromelysin-1 was not affected by heparin over the concentration range of 0-800 μg/ml (data not shown), probably because stromelysin does not bind to heparin (34). The rate of interaction of TIMP-3 with matrilysin was increased 2-fold by heparin, but the rate did not decrease with high heparin concentrations, as it did for gelatinase A, and de-N-sulfated heparin also increased the association rate slightly (Fig. 2a). There was also a slight increase in the rate of association with heparan sulfate, hyaluronic acid (as well as an increase in the steady-state rate, as for gelatinase A), and dermatan sulfate (data not shown). Dextran sulfate increased the association rate 15-fold, and the distribution was bell-shaped, as it was for gelatinase A (Fig. 2b). However, the pattern of these results differed from those of TIMP-3 and gelatinase A, suggesting a different mode of action for the effect on TIMP-3 and matrilysin.
DISCUSSION
Our data show that TIMP-3 is able to associate with the MMPs stromelysin-1 and matrilysin with rates similar to TIMP-1 and TIMP-2: the association rate is relatively slow (10^5 M^-1 s^-1), presumably because stromelysin-1 has negligible C-terminal domain interactions with TIMPs (17, 25), and matrilysin comprises solely a catalytic domain. The interaction of TIMP-3 with active gelatinase A (10^7 M^-1 s^-1) and gelatinase B (10^5 M^-1 s^-1) is more similar to that of TIMP-2. The rate of association of TIMP-3 with these gelatinases is enhanced by the hemopexin domains of the enzymes. The apparent Kd data suggest that significant interactions occur between the C-terminal domain of gelatinase A and TIMP-3. The interaction between TIMP-3 and the C-terminal domain of gelatinase A is slightly weaker than that of TIMP-2, probably due to the absence in TIMP-3 of the highly negatively charged C-terminal tail that is present in TIMP-2 (these last 8 residues of TIMP-2 have been shown to be highly significant in the interaction with the C-terminal domain of gelatinase A (20)), but it nevertheless serves to increase the rate of association of TIMP-3 and gelatinase A 1000-fold. As for TIMP-2 (36), C-terminal domain interactions with residues 568-631 are particularly important for rapid association of TIMP-3 and gelatinase A. The decrease in the rate of inhibition of gelatinase A by TIMP-3 with increasing ionic strength suggests the involvement of charged residues in the interaction, as seen for TIMP-2 (20). We also reported previously that TIMP-3 inhibition of the catalytic domains of MT1 MMP and MT2 MMP was similar to that of TIMP-2 (41, 42). It is known that TIMP-2 and TIMP-4 bind to progelatinase A via C-terminal domain interactions (20, 43). Here we demonstrate that TIMP-3 is also able to bind to progelatinase A. Complex formation between TIMP-3 and progelatinase A involves C-terminal domain interactions: the binding of progelatinase A to TIMP-3 was reduced by removal of the hemopexin domain or by its replacement with the C-terminal domain of stromelysin-1. As with the active enzyme, residues 568-631 but not residues 418-474 of the hemopexin domain play an important role in the association of progelatinase A and TIMP-3. These residues constitute part of blade 3 and the whole of blade 4 in the four-bladed propeller structure determined by x-ray crystallography (44, 45) and border a surface patch of lysine residues (residues 566, 567, and 568) that may be important for the electrostatic interaction. This region was also important for the association of gelatinase A with TIMP-2 (36), which suggests that TIMP-2 and TIMP-3 share common features of the binding site for progelatinase A. Although TIMP-3 is able to bind progelatinase A and MT1 MMP (41) like TIMP-2, we have been unable to convincingly demonstrate its involvement in progelatinase A activation as we did for TIMP-2 (36). It is unclear whether this is due to technical difficulties caused by the adherent nature of TIMP-3, to an alternative pericellular activation pathway involving TIMP-3 bound to the matrix, or to the binding of TIMP-3 to progelatinase A not being strong enough to support the formation of a membrane receptor (36).
Removal of the C-terminal domain of gelatinase B significantly reduced the rate of inhibition of this enzyme by TIMP-3. Hence, as for TIMP-1 (26), the hemopexin domain of gelatinase B is important for association with TIMP-3, although it contributes little to the association with TIMP-2. Like TIMP-1 (26), TIMP-3 can also bind to progelatinase B, but not to pro(Δ426-688)gelatinase B, indicating that these C-terminal domain interactions are sufficient and necessary to yield a stable proenzyme-inhibitor complex. The precise biological role of this property of the TIMPs is not yet known, although a role in progelatinase B activation is possible.
The assays of TIMP-3 inhibitory activity described above were carried out in solution. Although these studies are valuable from a comparative point of view, it must also be borne in mind that TIMP-3 is apparently largely extracellular matrixbound in vivo, although the components to which it binds remain to be determined. Proteoglycans consist of core proteins with numerous attached glycosaminoglycan chains: the latter are negatively charged polysaccharides composed of repeating disaccharides (for reviews, see Refs. 46 and 47). TIMP-3 possesses 9 positively charged residues (8 lysines and 1 arginine) that are not present in TIMP-1 or TIMP-2 (see alignment, Fig. 3 in Ref. 3) and that are generally conserved in TIMP-3 from different species. It is likely that these charged residues may be involved in the interaction of TIMP-3 with cell surface or extracellular matrix glycosaminoglycans. We therefore tested the effect of some commercially available polyanions on the inhibition of various MMPs by TIMP-3.
The rate of inhibition of gelatinase A, but not of (Δ418-631)gelatinase A, by TIMP-3 was increased by heparin. Both gelatinase A and TIMP-3 bind to heparin, but there is no heparin binding site in (Δ418-631)gelatinase A (48), suggesting that a heparin binding site is required in both interacting proteins. The bell-shaped distribution of the association rate over the heparin concentration range studied is reminiscent of similar curves for the effect of heparin on progelatinase A autoactivation (48) or for the inhibition of thrombin by antithrombin III (49). The curve suggests a bimolecular mode of binding to heparin that increases the local concentration of reactants, thereby increasing their rate of interaction, rather than a conformational effect.
The effects of the polyanions tested on the rate of inhibition of gelatinase A by TIMP-3 appear to correlate with negative charge density. The sulfated compounds dextran sulfate, heparin, and heparan sulfate (4-5 O-linked, 2-3 O- and N-linked, and 1 O- or N-linked sulfates per disaccharide, respectively (50)) enhanced the rate of interaction, whereas dermatan sulfate (1 O-linked sulfate per disaccharide), de-N-sulfated heparin, and hyaluronic acid (unsulfated) had no effect, suggesting that the interaction of enzyme and inhibitor with these polyanions is based on charge density as well as structure. A specific recognition domain in heparin has been described for basic fibroblast growth factor and antithrombin (51). The existence of such a domain for TIMP-3 would be compatible with a surface concentration mechanism like that of antithrombin and thrombin (51). TIMP-3 does not contain any of the reported linear heparin binding motifs, but a motif defined by the three-dimensional structure could exist (52). It is likely that TIMP-3 interacts with cell surface and extracellular matrix glycosaminoglycans via the large number of positively charged residues in TIMP-3, and that this is the basis for its location in the extracellular matrix both in vivo and in cell culture. Hence, colocalization of TIMP-3 with proenzymes in the pericellular environment may be a mechanism for increasing the rate of inhibition of MMPs and regulating extracellular matrix breakdown during morphogenetic processes.
Heparan sulfate proteoglycans such as perlecan (53) and syndecans (54) are also implicated in binding growth factors that promote angiogenesis. A recent study demonstrated that TIMP-3 can inhibit endothelial cell migration and angiogenesis in response to the angiogenic factors basic fibroblast growth factor and vascular endothelial growth factor (55). TIMP-2 had similar effects upon endothelial cell migration in vitro, but TIMP-1 was ineffective (55,56). This implicates MT1 MMP in the angiogenic process, an enzyme that can degrade matrix components and initiate the autoactivation of gelatinase A (36, 40, 57). The study presented here suggests that the effects of TIMP-3 and TIMP-2 might be due to the specific ability of these inhibitors to bind to progelatinase A as well as to inhibit MT1 MMP. Colocalization of TIMP-3 in the pericellular environment via binding to the extracellular matrix, including heparan sulfate proteoglycans, would place this inhibitor in a key position to inhibit MMPs produced by endothelial cells, thus regulating degradation of the extracellular matrix and release of the angiogenic factors required for migration and angiogenesis.
|
2018-04-03T01:35:09.457Z
|
1999-04-16T00:00:00.000
|
{
"year": 1999,
"sha1": "9104dd958887394dd133ecb30a19d152c075cbe5",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/274/16/10846.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "13423c2a4eb1c5b1c735a098391c55cea3cb47ed",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
220265583
|
pes2o/s2orc
|
v3-fos-license
|
The Hubble constant and sound horizon from the late-time Universe
We measure the expansion rate of the recent Universe and the calibration scale of the baryon acoustic oscillation (BAO) from low-redshift data. BAO relies on the calibration scale, i.e., the sound horizon at the end of drag epoch $r_d$, which often imposes a prior of the cosmic microwave background (CMB) measurement from the Planck satellite. In order to make really independent measurements of $H_0$, we leave $r_d$ completely free and use the BAO datasets combined with the 31 observational $H(z)$ data (OHD) and GW170817. For the two model-independent reconstructions of $H(z)$, we obtain $H_0=69.66^{+5.88}_{-6.63}$ km s$^{-1}$ Mpc$^{-1}$, $r_d=148.56^{+3.65}_{-4.08}$ Mpc in the cubic expansion, and $H_0=71.13\pm2.91$ km s$^{-1}$ Mpc$^{-1}$, $r_d=148.48^{+3.73}_{-3.74}$ Mpc in the polynomial expansion, and we find that the values of sound horizon $r_d$ are consistent with the estimate derived from the Planck CMB data assuming a flat $\Lambda$CDM model.
I. INTRODUCTION
In the past few years, cosmological parameters have been measured with unprecedented precision. In particular, the cosmic microwave background (CMB) experiments, such as WMAP and Planck, played a key role. The Planck Collaboration presents the strongest constraints to date on key parameters, such as the Hubble constant $H_0$. $H_0$ cannot be measured by CMB experiments directly, but can be inferred once the other cosmological parameters are determined by global fitting. In the ΛCDM model, Planck found a lower value of $H_0$ in the first data release [1], and reports the updated result, $H_0 = 67.27 \pm 0.60$ km s$^{-1}$ Mpc$^{-1}$, in the final data release [2]. The constraint on $H_0$ from CMB measurements relies on the choice of cosmological model. At present, although the ΛCDM model is basically successful in fitting available cosmological data, it is still challenged by some compatibility tests at low and high redshifts. Recently, the discrepancy in the Hubble constant measured from low- and high-redshift probes has attracted a lot of attention. In particular, the SH0ES (Supernovae and $H_0$ for the Equation of State) project [3] constructed a local distance ladder approach from Cepheids to measure $H_0$. The local measurement of $H_0$ is model-independent, as it does not depend on cosmological assumptions. They improved the accuracy of $H_0$ and published the updated result $H_0 = 74.03 \pm 1.42$ km s$^{-1}$ Mpc$^{-1}$ [4], which increases the tension with the final result of Planck to 4.4σ.
In the absence of systematic errors in both measurements, the model-dependent CMB measurement should be consistent with the model-independent local measurements if the standard cosmological model is correct. The tension could provide evidence of physics beyond the standard model. With this clear motivation, extensive research has been done on extended models beyond the standard model to alleviate the inconsistencies between data sets; for example, see Refs. [5-18]. On the other hand, a growing number of other measurements independently provide measurements of the Hubble constant. The H0LiCOW collaboration [19] presents another independent approach to measure $H_0$, using the time delay from lensing. In a flat ΛCDM cosmology, they provide the latest value $H_0 = 73.3^{+1.7}_{-1.8}$ km s$^{-1}$ Mpc$^{-1}$ (2.4% precision) [20]. It is consistent with the local measurement of $H_0$ by the distance ladder, but in 3.2σ tension with respect to the CMB data from the Planck satellite. This method is independent of both the distance ladder and other cosmological probes. In addition, the Advanced LIGO and Virgo collaborations report a gravitational-wave measurement of the Hubble constant, $H_0 = 70^{+12}_{-8}$ km s$^{-1}$ Mpc$^{-1}$, using the gravitational wave signal from the merger of a binary neutron-star system [21]. The red giant branch method provides one of the most accurate means of measuring the distances to nearby galaxies; recently, using the revised measurement, Ref. [22] reports $H_0 = 69.6 \pm 0.8$ km s$^{-1}$ Mpc$^{-1}$.
The baryon acoustic oscillation (BAO) surveys provide measurements of three types, $D_A(z)/r_d$, $D_V(z)/r_d$ and $H(z)r_d$, where $r_d$ is the comoving size of the sound horizon at the end of the baryon drag epoch [23,24]. The Hubble constant $H_0$ and the sound horizon $r_d$ are closely related and link late-time and early-time cosmology. If we measure $H_0$ using the BAO data, an independent distance calibration is required. In other words, $r_d$ is the standard ruler which calibrates the distance scale measurements of BAO. In general, $r_d$ relies on the physical properties of the early universe, which can be constrained by precise CMB observations. The CMB measurement relies on the assumption of a ΛCDM model to constrain the cosmological parameters. In almost all BAO measurements, a Gaussian prior on $r_d$ from the CMB is imposed. In this sense, the constraint on the Hubble constant obtained using BAO data, for example in Ref. [25], is not completely independent of the CMB data. Instead of an early-time physical calibration of $r_d$, an alternative approach is to combine BAO measurements with other low-redshift observations. Planck's publicly available MCMC chains give $r_d = 147.05 \pm 0.30$ Mpc in the ΛCDM model. This is a model-dependent theoretical expectation determined from the CMB measurement. Assuming the cold dark matter model with a cosmological constant, Ref. [26] takes the sound horizon at radiation drag as a ruler and determines $r_d = 142.8 \pm 3.7$ Mpc by adding clocks and the local $H_0$ measurement to the SNe and BAO. They find excellent agreement with the derived quantity of the sound horizon deduced from Planck data. In spline models for the expansion history H(z), Bernal et al. obtain $r_d = 136.8 \pm 4.0$ Mpc, and $r_d = 133.0 \pm 4.7$ Mpc when $\Omega_k$ is left as a free parameter, from the BAO, SNe Ia and local measurement without a CMB-derived $r_d$ prior [27]. Combining the data sets from clocks, SNe, BAO and the local measurement of $H_0$, Verde et al. found $r_d = 143.9 \pm 3.1$ Mpc with flat curvature [28]. Then, using BAO measurements and SNe Ia calibrated with the time delay from H0LiCOW, Aylor et al. infer the sound horizon $r_d = 139.3^{+4.8}_{-4.4}$ Mpc in the ΛCDM model [29]. Using the inverse distance ladder method, the Dark Energy Survey Collaboration finds $r_d = 145.2 \pm 18.5$ Mpc from SNe Ia and BAO measurements [30]. In their analysis, they adopt a prior on $r_d$ taken from Planck 2018. Using the supernovae Ia and BAO measurements combined with $H_0$ from H0LiCOW, Ref. [31] provides the sound horizon at recombination $r_d = 137.0 \pm 4.5$ Mpc in the polynomial expansion of H(z). This apparent discrepancy comes from fitting the BAO measurements with or without a CMB prior from Planck. Comparing the sound horizon obtained from the low-redshift data with the value derived from Planck may give us a better understanding of the discordance between the data sets or reveal new physics beyond the standard model.
The main motivation of this paper is to address the discrepancy in $H_0$ and $r_d$ between the early and late universe by using recent low-redshift data to constrain the sound horizon of the early universe. In our analysis, we consider the ΛCDM model and two model-independent reconstructions of H(z). Without any assumption about the early-time physics, we set the standard ruler $r_d$ of BAO as a free parameter. Combining BAO with observational H(z) data and the gravitational wave measurement, we measure the Hubble constant $H_0$ and the sound horizon $r_d$ regardless of the early-time physics. In section II we introduce the reconstruction of H(z). The data sets and methodology used in this paper are described in section III. In section IV we present the results for the sound horizon without assuming any early-time physics. We summarize the conclusions in the last section.
II. THE RECONSTRUCTION OF H(z)
We will perform our analyses with the following three different forms of H(z). Firstly, in the flat ΛCDM model the Hubble parameter can be expressed as $H(z) = H_0 \sqrt{\Omega_m (1+z)^3 + 1 - \Omega_m}$, where $\Omega_m$ is the present matter density parameter. Secondly, in order to avoid working within a specific cosmological model, we try to reconstruct H(z) in a model-independent way: the Hubble parameter is expressed as a cubic expansion in the scale factor, i.e. a third-order polynomial in $(1-a)$ normalized so that $H(z=0) = H_0$, and we can easily determine $H_0$ from the corresponding reconstructed H(z). The third one is a polynomial expansion of H(z). We follow [32] and Taylor expand the scale factor with respect to cosmological time. The Hubble parameter H(t), deceleration parameter q(t), jerk parameter j(t) and snap parameter s(t) are defined as $H = \frac{\dot{a}}{a}$, $q = -\frac{\ddot{a}}{aH^2}$, $j = \frac{\dddot{a}}{aH^3}$ and $s = \frac{\ddddot{a}}{aH^4}$. Using these parameters, the Hubble parameter can be parameterized as a polynomial expansion in redshift whose coefficients are given by $q_0$, $j_0$ and $s_0$, where the subscript "0" indicates the parameters at the present epoch (z = 0).
III. DATA
We use the observational datasets including the measurements of the BAO, observational H(z) data (OHD) and GW170817. For the BAO measurements, the angular diameter distance $D_A$ and the volume-averaged scale $D_V$ are related to H(z) by $D_A(z) = \frac{1}{1+z}\int_0^z \frac{c\,dz'}{H(z')}$ (for a flat geometry) and $D_V(z) = \left[(1+z)^2 D_A^2(z)\,\frac{cz}{H(z)}\right]^{1/3}$. The sound horizon is given by $r_d = \int_{z_d}^{\infty} \frac{c_s(z)\,dz}{H(z)}$, where $c_s(z)$ is the sound speed and $z_d$ is the redshift at the end of the drag epoch. The sound horizon $r_d$ is the standard ruler used to calibrate the BAO observations [26,28]; a prior from the CMB measurement of the Planck satellite is often imposed on it. In this paper, we remove the Planck prior and set $r_d$ as a free sampling parameter. We use the constraints on BAO from the following galaxy surveys: the 6dF Galaxy Survey [33], the SDSS DR7 Main Galaxy sample [34], the BOSS DR12 [35], and the eBOSS DR14 quasars [36]. We also include eBOSS DR14 Lyα [37]. The datasets are listed in Table I. In the current analysis we do not make use of the OHD extracted from BAO measurements; we only consider the OHD from the differential age method, proposed in Ref. [38], which can be used to measure the expansion rate of the Universe [47]. The quantity measured in the differential age method is directly related to the Hubble parameter, $H(z) = -\frac{1}{1+z}\frac{dz}{dt}$, so this method can be used to determine the Hubble constant $H_0$. Table II shows an updated compilation of OHD accumulating a total of 31 points given by the differential age method [39]. Recently, the Advanced LIGO and Virgo detectors observed the gravitational-wave event GW170817, a strong signal from the merger of a binary neutron-star system [21]; the measurement of GW170817 reports $H_0 = 70^{+12}_{-8}$ km s$^{-1}$ Mpc$^{-1}$ [21]. To perform joint analyses of the three data sets, we explore the cosmological parameter space with a likelihood function $\mathcal{L}$ satisfying $-2\ln\mathcal{L} = \chi^2$, where the total $\chi^2$ is the sum of the individual contributions, $\chi^2 = \chi^2_{\mathrm{BAO}} + \chi^2_{\mathrm{OHD}} + \chi^2_{\mathrm{GW}}$.
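As an illustration of how such a joint likelihood could be evaluated numerically for a given parameterized H(z) with $r_d$ treated as a free parameter, one might write something like the sketch below; it is a generic outline under our own assumptions (Gaussian, uncorrelated data points and a flat geometry), not the paper's actual pipeline.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def comoving_distance(z, H):             # Mpc, flat geometry assumed
    return quad(lambda zp: C_KM_S / H(zp), 0.0, z)[0]

def d_v(z, H):                           # volume-averaged BAO distance scale
    d_a = comoving_distance(z, H) / (1.0 + z)
    return ((1.0 + z) ** 2 * d_a ** 2 * C_KM_S * z / H(z)) ** (1.0 / 3.0)

def chi2_total(H, r_d, bao, ohd):
    """bao: list of (z, value, sigma, kind) with kind in {'DV_over_rd', 'Hrd'};
    ohd: list of (z, Hval, sigma) from the differential age method."""
    chi2 = 0.0
    for z, val, sig, kind in bao:
        model = d_v(z, H) / r_d if kind == "DV_over_rd" else H(z) * r_d
        chi2 += ((val - model) / sig) ** 2
    for z, hval, sig in ohd:
        chi2 += ((hval - H(z)) / sig) ** 2
    return chi2
```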
All presented results are computed using the publicly available Monte Carlo Markov Chain code CosmoMC [48].
IV. RESULTS
In the ΛCDM model, we set $\Omega_m$, $H_0$ and $r_d$ as free parameters. FIG. 1 shows our results, including the contours of $\Omega_m$-$H_0$ and $H_0$-$r_d$ for the BAO, OHD+GW and BAO+OHD+GW datasets, respectively. We find that the resulting constraints are consistent with the Planck result. The constraints on the parameters of the cubic expansion and the polynomial expansion are illustrated in FIG. 2 and 3. The blue contours show the 68% and 95% constraints in the cubic expansion using the BAO+OHD+GW datasets without a prior on $r_d$; the orange contours show the corresponding constraints, likewise obtained from the BAO+OHD+GW datasets without a prior on $r_d$.
We cannot provide a strict constraint on the Hubble constant in these two model-independent reconstructions, and the results are consistent with both Planck 2018 and SH0ES 2019. The sound horizon $r_d$ is nicely consistent with the Planck results as well. In the ΛCDM model and in the reconstructions of H(z), the results for $r_d$ are basically the same, in both the mean and the uncertainty. We can conclude that the sound horizon $r_d$ is more robust than $H_0$ to the different parameterizations. In other words, the result for $r_d$ is nearly free from dependence on the expansion history H(z).
V. SUMMARY AND CONCLUSIONS
In this paper, we provide a new model-independent measurement of the Hubble constant using observational datasets including the measurements of the Baryon Acoustic Oscillations, observational H(z) data and GW170817. In order to avoid imposing a prior on the sound horizon $r_d$ from the CMB measurement, we remove the Planck prior and set $r_d$ as a free sampling parameter in the BAO distance measures, and we find $H_0 = 69.66^{+5.88}$
Automatic Manufacturing Cell in Cyber-physical System
In manufacturing cells that need to produce workpieces in high quantity and variety, automatic workpiece handling is indispensable to stay competitive. The simulation of a real manufacturing cell in a cyber-physical system can support decision-making in production, as the simulation of the scenarios provides realistic data. Our manufacturing cell consists of a 6-axis robot, 2 CNC milling machines and 4 conveyors. We tested 10 different scenarios, during which we analyzed the usage of the robot and the milling machines. By upgrading the tools used for milling, we could reduce the cycle time of the workpieces. Furthermore, one more milling machine could be integrated into the cell due to the low usage of the robot, or the speed of the robot could be lowered. Besides, filling the pallets with identical workpieces was the most effective approach; with mixed workpieces, however, we reached better results when we used the cycle-time optimization. In these cases, reducing the order quantity to a daily amount did not cause any capacity reduction.
Introduction
Industrial robots are important parts of manufacturing systems; therefore, they are used in a wide variety of applications due to the advantages they offer, for example welding, material handling, assembly and painting. The benefits of industrial robots are high reliability, precision and repeatability. Essentially, once the robot's program is written correctly, the given operations can be performed rapidly and precisely [1].
However, programming these industrial robots is still a hard and time-consuming task. For instance, manually programming a robotic arc welding system for the manufacture of a large vehicle hull takes several months [2].
Nowadays, programming industrial robots can be categorized into two main methods: online programming and Offline Programming (OLP). Generally, in online programming, the teach pendant is used to manually move the robot to the desired position and orientation, and the relevant movements and points are saved by the robot controller. After recording these relevant robot configurations, the next step is writing the robot's program, which can refer to the previously saved points and movements. However, the online programming method is only suitable for programming easier, uncomplicated tasks. The OLP method, which is based on the complete 3D modelling of the robot's environment (e.g. with the help of a 3D scanner), is used in manufacturing systems that produce high-quantity workpieces. One of the advantages of this method is that there is no need for the robot's physical presence. Furthermore, collision detection can be done safely thanks to the 3D modelling of the robot's environment [2][3][4].
Cyber-Physical Systems (CPS) are systems of collaborating computational entities, which can be physical, computing, and communication elements. They are in intensive connection with the surrounding physical world and its on-going processes. These elements are connected via a communication network over which the system can be overseen. In our case, the software controlling the robot will be the cyber-physical system itself, where the sensors will be present as memory bits. Thus, the manufacturing cell created in the software will be a Cyber-Physical Production System (CPPS), which consists of autonomous and cooperative elements that get into connection in certain scenarios [5][6][7].
Our goal is to create an automatic manufacturing cell that offers as much information to the user as possible and in which the change of workpiece types can be handled flexibly. Moreover, identifying the bottleneck is a further benefit that helps future improvements.
Material and methods
For the robot's offline programming and for the creation of the virtual manufacturing cell we use the EPSON RC+ graphical software. Programs created in the software use a BASIC-like programming language (SPEL+), which is used for programming Epson robots. The SPEL+ programming language mostly consists of commands responsible for controlling and checking the robot and for creating the connection between the robot and its environment [8].
We created the virtual manufacturing cell using the software's simulator. In the manufacturing cell, the material handling is done by an Epson C8XL series 6-axis robot, which serves two CNC milling machines (Fig. 1). The workpieces arrive on the outer conveyors. After the milling process is finished, the workpieces are placed into the pallets on the inner conveyors. The 3D models that we imported into the simulator cannot be moved. Furthermore, there is a built-in collision-detection system in the software with which we can test whether a movement causes a collision or not.
In the robot programs we created memory bits which replace real sensors. Each memory bit can be treated as both an input and an output bit [8]. By changing the value of these variables and bits in the proper situations with commands in the robot program (e.g. "MemOn"), the robot receives automatic feedback [8]. One of these feedbacks indicates, for example, whether there is a workpiece in the CNC milling machine or not. In this case, the program considers several variables: whether there is a workpiece in the milling machine, whether milling is still in progress, or whether the milling process has ended. Thus, the robot receives permission from these variables to replace the finished workpiece with a new one.
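The handshake between a milling machine and the robot described above can be sketched in a few lines. The following is a minimal, hypothetical Python simulation of the memory-bit states; names such as piece_in_machine are illustrative and are not taken from the actual SPEL+ program.

```python
from dataclasses import dataclass

@dataclass
class MillingStation:
    """Virtual memory bits standing in for the real sensors of one CNC milling machine."""
    piece_in_machine: bool = False   # a workpiece is clamped in the machine
    milling_running: bool = False    # the milling cycle is in progress
    milling_finished: bool = False   # the milling cycle has ended

    def robot_may_replace(self) -> bool:
        """The robot only gets permission when a finished part is waiting and milling has stopped."""
        return self.piece_in_machine and self.milling_finished and not self.milling_running

    def load_workpiece(self):
        self.piece_in_machine, self.milling_running, self.milling_finished = True, True, False

    def finish_milling(self):
        self.milling_running, self.milling_finished = False, True

    def unload_workpiece(self):
        self.piece_in_machine, self.milling_finished = False, False


station = MillingStation()
station.load_workpiece()
print(station.robot_may_replace())   # False: milling is still in progress
station.finish_milling()
print(station.robot_may_replace())   # True: the robot may replace the workpiece
station.unload_workpiece()
```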
The goal of these tests is to measure the robot's abilities and to be able to create an optimal operation for the cell. In order to achieve the optimal operation, we need to be able to try the cell's operation with different technological parameters. To create these different technological parameters, we designed three distinct pallet types (1-, 2- and 4-block pallets) and three workpieces (Fig. 2) with the help of the Autodesk Inventor Professional 2020 software. We measured these workpieces' milling cycle times with different tool combinations and used them to analyze the operation of the manufacturing cell. During the tests, the pallets were designed to hold 100 workpieces, regardless of the workpiece type. When we worked with mixed workpieces, we filled one of the pallets with the A and B models, while the other pallet contained the B and C workpieces.
To measure the milling cycle time of the models, we created the milling programs for every workpiece using SinuTrain 4.7. Once the programs were ready, we used an Akira-Seiki Performa V2.5 XP CNC milling machine to measure the cycle times. We tested three different cases: one with a low-, one with a medium- and one with a high-priced tool combination. Within these three cases, we created 10 distinct scenarios which we tested on the manufacturing cell (Table 1).
The CNC milling machines' waiting time for the robot (not including the workpiece replacement) also plays a big role in the decision-making regarding the production, as these waiting times increase the whole scenario's time. We calculated the waiting time for the robot according to Eq. (1):
I_r = I_mc - I_ct, (1)
where I_r is the waiting time for the robot, I_mc is the manufacturing cell's operating time and I_ct is the cycle time of all the workpieces, including workpiece replacements.
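As a small illustration of Eq. (1), the snippet below computes the waiting time for the robot and the resulting machine utilization; the example values are invented for demonstration and are not the measured scenario data.

```python
def robot_waiting_time(cell_operating_time_h: float, total_cycle_time_h: float) -> float:
    """Eq. (1): waiting time for the robot = cell operating time - total cycle time."""
    return cell_operating_time_h - total_cycle_time_h

# Hypothetical example values in hours, not the measured scenario data.
I_mc = 34.9          # manufacturing cell operating time
I_ct = 33.5          # cycle time of all workpieces, including replacements
I_r = robot_waiting_time(I_mc, I_ct)
utilization = I_ct / I_mc
print(f"waiting time for the robot: {I_r:.1f} h, machine utilization: {utilization:.1%}")
```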
Graphical user interface and operating principles
When writing the robot programs, we were constantly trying to create a user-friendly and easy-to-use Graphical User Interface (GUI) which can handle the different variations easily. The graphical user interface, which appears during the automatic operation, was created in the GUI builder of EPSON RC+. The main menu (Fig. 3) is the only interface which can be reached without a password. We limited access to the other interfaces because of the multiple settings we created; therefore, these settings are available only to engineers and maintenance personnel.
We can choose between two priorities: cycle time or quantity. The cycle-time priority means that the program ranks the workpieces' cycle times in descending order and assigns indexes to the workpieces. The program responsible for the automatic cycle makes its decisions according to these indexes: the algorithm matches the workpiece with the lowest index with the workpiece with the highest index. If one of them runs out, the program continues the matching. This cycle continues until the ordered quantity is finished. This setting only makes sense if we fill the manufacturing cell with at least two workpiece types.
Using the quantity priority means that the program ranks the ordered quantities in descending order and assigns indexes to the workpieces here, too. According to the indexes, the robot starts with the workpiece which has the biggest demand in the order quantity.
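To make the two priority rules concrete, here is a minimal Python sketch under the assumption that "matching" means pairing the currently longest-cycle-time and shortest-cycle-time workpiece types on the two machines; the data structure and the order values are invented for illustration and are not taken from the robot program.

```python
def rank_by_cycle_time(workpieces):
    """Cycle-time priority: rank workpiece types by descending milling cycle time."""
    return sorted(workpieces, key=lambda w: w["cycle_time_min"], reverse=True)

def rank_by_quantity(workpieces):
    """Quantity priority: rank workpiece types by descending ordered quantity."""
    return sorted(workpieces, key=lambda w: w["quantity"], reverse=True)

def next_pair_cycle_time(workpieces):
    """Pair the type with the longest cycle time with the type with the shortest one,
    so that the two milling machines finish at roughly the same time."""
    remaining = rank_by_cycle_time([w for w in workpieces if w["quantity"] > 0])
    if not remaining:
        return None, None
    return remaining[0], remaining[-1]

# Hypothetical order data, not the measured values from the scenarios.
order = [
    {"name": "A", "cycle_time_min": 5.2, "quantity": 100},
    {"name": "B", "cycle_time_min": 4.1, "quantity": 150},
    {"name": "C", "cycle_time_min": 3.0, "quantity": 50},
]
print([w["name"] for w in rank_by_quantity(order)])   # ['B', 'A', 'C']
first, second = next_pair_cycle_time(order)
print(first["name"], second["name"])                  # A C
```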
After deciding on the priority, we must set the details of the pallet. Besides the type of the pallets, we must set the types of the workpieces (Fig. 4) too.
To further improve productivity, we created a program with which we can analyze the bottleneck of the manufacturing cell. Additionally, we can see the usage of the robot and the CNC milling machines. Here, the goal is to create a balanced cell; in other words, we must minimize the waiting times as much as possible. Naturally, the cycle times can be changed only within a certain range, therefore some waiting will always occur.
Results
In order to handle the changes of workpiece types flexibly, we created the "ROBOT POINTS" program (Fig. 5). Our goal with this program was to make the robot points manageable during the cell's automatic operation. Therefore, we created several functions with which the robot points can be changed if needed (due to different workpieces or pallets). The points used by the robot can be found in the point table (Fig. 5). Here, the user can load the desired point and change its position. The movement of the robot can be controlled with the buttons (short, medium or long stepping mode), or its coordinates can simply be changed. Throughout the tests, the ease of use proved itself based on our experiences.
The cycle times of the workpieces with the different tool combinations are shown in Table 2. We loaded the manufacturing cell with these cycle times and then ran the simulations according to the scenarios (detailed in Section 4). Table 3 shows the results of the scenarios. These results come from the program which monitors the usage of the manufacturing cell.
Discussion
During the simulation we tested 10 different cases. The milling machines were the bottlenecks in every scenario.
In the 1st, 2nd and 3rd cases, we filled up the cell with only one workpiece type. As the cell works with only one workpiece type, the longer the cycle time is, the more the robot stands idle. The solution to this problem can be a better tool combination with which we can reduce the milling cycle time of the workpiece. The low-, medium- and high-priced tool combinations resulted in 34.9 hours, 29.6 hours and 24.4 hours, respectively. This means that with a medium-priced tool combination the manufacturing cell is 15 % faster, and with a high-priced tool combination the cell is more productive by 30 %. The faster milling cycles mean that there are more workpiece replacements in less time; therefore, the usage of the milling machines decreased, but by not more than 1 %.
In these scenarios the usage of the robot was 4 %, 5 % and 6 %. We separately tested the shortest cycle time (the C model with the high-priced tool combination), but the usage of the robot was not critical in this case either; it resulted in 9 %. Based on these values, the installation of another CNC milling machine is reasonable, thus increasing the cell's production capacity.
The development of the manufacturing cell requires a big investment, so it cannot be carried out immediately. It was also important to determine the impact of the robot's speed reduction on the production capacity. By lowering the robot's speed, we can spare the robot and therefore reduce the running expenses. In the 2nd scenario, we could lower the robot's speed to 35 %. At this point, the scenario's time increased by 15 minutes (0.8 %). We did a further test on the 1st scenario based on the same principles, where the robot worked the least. The cycle time of the workpiece is more than 5 minutes, therefore there is no need to operate the robot at 100 % speed. We reduced the speed to 25 %. The slow-down of the robot reduced the milling machines' usage in this case too (Fig. 6). The scenario's time increased by 18 minutes due to the deceleration, which meant a 0.9 % decrease in the cell's performance. By analyzing these two cases, it is apparent that we can greatly reduce the speed of the robot, which leads to less than 1 % decrease in the cell's capacity. This capacity loss stands against the robot's lower running expenses, which is worth considering in some cases.
When we used mixed workpiece types to complete the order (4th, 5th and 6th scenarios), we changed the tool combination as in the 1st, 2nd and 3rd scenarios. With the worst tool combination, the completion of the order quantity took 49.4 hours. By improving the tool combination, we reached 42.3 hours (medium) and 36.2 hours (high) production time with the same workpiece quantity. Thus, with a better tool combination, the production time can be decreased by 14 % and 27 %. In these scenarios, the usage of the two CNC milling machines is not the same, because the milling machines work with different workpiece types (Fig. 2 and Table 2); therefore, the usage changes commensurately with the workpiece replacement. Furthermore, the CNC milling machine that works with the shortest-cycle-time workpiece will statistically wait more for the robot.
If we set the priority to quantity, the program will not consider the cycle times, so it will prioritize the order quantity. The 7th scenario represents this case. The only difference between the 5th and 7th scenarios was the changed priority, which caused a 1.1 % production reduction (the production took 30 minutes longer to complete). In this case, the production of the most demanded workpiece is prioritized in order to ship it earlier. On the other hand, this caused unfavorable workpiece matchings which increased the milling machines' waiting time for the robot.
Based on the 1st, 2nd and 3rd scenarios, we created a scenario in which we used the 5th scenario's order quantity; however, in this case, we worked with only one workpiece type at a time. The waiting time of the CNC machines for the robot decreased drastically and the production was completed 54 minutes earlier. This scenario shows the ideal case when both milling machines are available for a certain workpiece type. If the pallets contain mixed blank workpiece types, then we can work with the previously presented priorities; however, this reduces the manufacturing cell's capacity. In the 9th scenario, we commensurately reduced the order quantity of the 5th scenario until it matched a one-day quantity. Using the daily production quantity, the completion of the 5th scenario's order quantity took 69 seconds longer. If the contract demands daily shipping, breaking down the order quantity into daily quantities will not mean considerable time loss. If we change the priority to order quantity using daily quantities (10th scenario), in other words, if we need to ship the most demanded workpiece within one day, the production takes 1.1 % more time, as explained for the previous scenarios.
Conclusion
The (virtual) robot controller is responsible for controlling our automatic manufacturing cell and organizes the production through the robot programs, the different settings and the technological parameters. Thanks to the different settings, there are several possible scenarios, of which we tested 10 specific cases. We were constantly trying to create simple and easy-to-use programs in order to speed up the tests of the scenarios.
We tested the manufacturing cell with different settings and analyzed the usage of the robot and the CNC milling machines. The price of the tools used for the milling process fundamentally affects the cycle times, which we measured with the help of the Akira-Seiki Performa V2.5 XP CNC milling machine. The cell's performance increases drastically (14-30 %) with the better tools, while the robot still does not become the bottleneck, as its usage does not exceed 6 %. In the case of the production of mixed workpiece types (4th, 5th and 6th scenarios) we also greatly decreased the scenario times (49.4 hours, 42.3 hours, 36.2 hours).
The running expenses of the robot can be decreased by reducing the robot's speed. Even though the performance of the manufacturing cell decreases (0.8-0.9 %) because of the slower workpiece replacement, the running expenses will be lower too, as mentioned before. So, if the order demand is not urgent, this option can be utilized.
Moreover, we pointed out the importance of the priority, because there is a 1.1 % difference between prioritizing the cycle time and prioritizing the quantity, which does not bring any cost reduction. Thus, if there is no need for the earlier completion of the workpiece with the biggest demand in the order, then the priority should be the cycle time. Besides, the best way to reduce the waiting time caused by the robot is to duplicate the milling machines; however, this means a further increase in the costs.
The results prove that the digital duplication of our manufacturing cell helps decision-making in production. Furthermore, cyber-physical systems contribute to a more economical operation through conscious, data-based decisions.
ECD promotes gastric cancer metastasis by blocking E3 ligase ZFP91-mediated hnRNP F ubiquitination and degradation
The human ortholog of the Drosophila ecdysoneless gene (ECD) is required for embryonic development and cell-cycle progression; however, its role in cancer progression and metastasis remains unclear. Here, we found that ECD is frequently overexpressed in gastric cancer (GC), especially in metastatic GC, and is correlated with poor clinical outcomes in GC patients. Silencing ECD inhibited GC migration and invasion in vitro and metastasis in vivo, while ECD overexpression promoted GC migration and invasion. ECD promoted GC invasion and metastasis by protecting hnRNP F from ubiquitination and degradation. We identified ZFP91 as the E3 ubiquitin ligase that is responsible for hnRNP F ubiquitination at Lys 185 and proteasomal degradation. ECD competitively bound to hnRNP F via the N-terminal STG1 domain (13-383aa), preventing hnRNP F from interacting with ZFP91, thus preventing ZFP91-mediated hnRNP F ubiquitination and proteasomal degradation. Collectively, our findings indicate that ECD promotes cancer invasion and metastasis by preventing E3 ligase ZFP91-mediated hnRNP F ubiquitination and degradation, suggesting that ECD may be a marker for poor prognosis and a potential therapeutic target for GC patients.
Introduction
Gastric cancer (GC) is a prevalent malignancy in East Asian countries, including China, and is the second leading cause of cancer-related mortality worldwide, with an overall 5-year survival rate of less than 25% 1,2 . Most GCs are diagnosed clinically at an advanced disease stage and thus present with distant metastases, which are the most important cause of cancer-associated death in GC patients. Although surgical resection is considered the gold standard for treating GC patients, GC patient prognosis remains poor due to the high incidence of tumor recurrence and distant metastasis. Conventional chemotherapy has limited effects on GC, especially metastatic GC. Targeted small molecule or antibody therapies designed to inhibit a specific oncogene are promising therapeutic strategies. Anti-HER2-targeted antibody therapies improve the overall survival of HER2-positive GC patients when combined with chemotherapy; however, HER2-positive patients comprise only 7-17% of GC patients. Therefore, new therapeutic targets are urgently needed.
The ecdysoneless (ECD) gene was originally named by authors studying Drosophila melanogaster ECD mutants, which exhibited defective development due to reduced production of the steroid hormone ecdysone, required for insect molting 3. Subsequent studies showed that the ECD protein is required for cell-autonomous processes in Drosophila development and oogenesis 4. The human ECD homolog was initially identified in a complementation assay conducted to rescue yeast mutants lacking the glycolysis regulation 2 (Gcr2) gene 5. ECD gene deletion in mouse embryonic fibroblasts led to cell-cycle arrest at the G1/S checkpoint, suggesting ECD is a novel cell-cycle regulator 4,6. ECD is overexpressed in pancreatic and HER2/ErbB2-overexpressing breast cancers 7,8. Our previous studies showed that ACK1 promotes GC metastasis through the AKT-POU2F1-ECD pathway and that ECD is a potential key downstream effector of ACK1 1,9. However, the roles and molecular mechanisms of ECD in cancer progression and metastasis remain unknown.
hnRNP F belongs to the hnRNP family, a large family of RNA-binding proteins that regulate multiple aspects of nucleic acid metabolism, including alternative splicing, transcription, translation, and mRNA stabilization 10 . hnRNP expression is altered in many cancers 10,11 , and these proteins are crucial in cancer cell proliferation, invasion, and metastasis 10,[12][13][14][15] . hnRNP F/H regulate alternative splicing of the apoptotic regulator, Bcl-x, and the tumor-associated NADH oxidase, ENOX2 [16][17][18] . hnRNP F is a potential marker for colorectal cancer progression 19 ; however, the regulatory mechanism of hnRNP F expression upregulation in cancers remains unknown.
Ubiquitination is a well-studied post-translational modification involved in proteasomal degradation, protein-protein interaction, protein trafficking, and protein activity. Protein ubiquitination is mediated by three enzyme families (E1, E2, and E3). Ubiquitination system activity depends on E3 ubiquitin ligase specificity [20][21][22] . To date, a direct connection between hnRNP F and the ubiquitination pathways remains unobserved, as an hnRNP F-specific E3 ligase that can bind to hnRNP F and induce ubiquitination and proteasomal degradation of hnRNP F has not been identified.
In this study, we found that ECD was overexpressed in GC, especially in metastatic GC, and ECD promotes GC invasion and metastasis by stabilizing hnRNP F. We further found that ZFP91 is the E3 ligase responsible for hnRNP F ubiquitination at Lys 185 and degradation. ECD blocks the interaction between ZFP91 and hnRNP F and the subsequent ubiquitination- and degradation-inducing effects of ZFP91 on hnRNP F by competitively binding to hnRNP F. Our findings indicate that ECD facilitates cancer migration and invasion by stabilizing hnRNP F, and ECD may be used as a novel prognostic GC biomarker, as well as an anti-cancer therapeutic target.

Fig. 1 ECD expression is increased in GC and is correlated with a poor prognosis in GC patients. a The indicated protein levels in six pairs of primary GC samples (T) and matched adjacent non-tumoral gastric tissue samples (N) were detected. b ECD mRNA expression levels were increased in GC compared with those in gastric mucosa, as determined by analyses of three gastric databases from Oncomine. c Representative IHC images of ECD expression in the GC tissue specimens and the corresponding N tissue specimens. d The differences in ECD expression scores between the GC tissue specimens and the corresponding N tissue specimens are presented as a box plot. e ECD expression score differences between the non-metastatic and metastatic GC tissue specimens are presented as a box plot. f The associations between ECD levels and patient death percentages were analyzed. g Kaplan-Meier survival analysis of patients with GC by their ECD levels.
ECD overexpression is correlated with aggressive GC phenotypes
To investigate the role of ECD in gastric progression, we analyzed ECD protein levels in six pairs of primary GC tissue samples and matched adjacent non-tumoral gastric tissue (N) samples. We found elevated ECD protein levels in all GC tissue samples compared to those in the N samples (Fig. 1a). We also investigated ECD mRNA expression in gastric mucosal tissues and GC tissues using three microarray gene expression datasets deposited in the Oncomine database. ECD mRNA levels were higher in gastric intestinal-type adenocarcinoma and gastric adenocarcinoma tissues than in gastric mucosal tissues (Fig. 1b), indicating that ECD expression is upregulated in GC.
To investigate the correlation between ECD levels and prognosis, we performed an extensive tissue microarray analysis of 186 GC tissue samples and 154 adjacent nontumoral gastric tissue (N) samples (including 149 GC tissue pairs and matched non-tumoral gastric tissues) using an immunohistochemical (IHC) assay (Fig. 1c). We detected higher ECD levels in GC tissues than in N tissues (Fig. 1d), as well as higher ECD levels in metastatic GC tissues than in non-metastatic GC tissues (Fig. 1e).
ECD overexpression was positively associated with pT status, pN status, lymph node metastasis, more advanced histological grades and clinical stage in GC (Table 1). GC patients with high ECD levels were at higher risk for cancer-related death than those with low ECD levels ( Fig. 1f, g). The mean overall survival time for GC patients with high ECD levels was 19.5 months, while for GC patients with low ECD levels, it was 45.5 months (p = 0.0005, log-rank test). Therefore, ECD overexpression was correlated with poor prognosis in GC patients.
ECD promotes GC invasion and metastasis
To investigate the role of ECD in GC invasion and metastasis, we either silenced or overexpressed ECD in two GC cell line models. ECD silencing suppressed GC cell migration and invasion in SGC-7901 and MGC-803 cells (Fig. 2a), while ECD overexpression promoted GC cell migration and invasion in those cells (Fig. 2b).
Smaller metastatic nodules developed in mouse lungs after injection with luciferase-tagged SGC-7901 cells with stable silencing of ECD expression than those in mouse lungs after injection with SGC-7901 cells (Fig. 2c). Smaller metastatic lung nodules were also confirmed by histological analysis (Fig. 2d). Collectively, these results indicate that ECD promotes GC cell migration and invasion in vitro and metastasis in vivo.
ECD interacts with hnRNP F via the SGT1 domain
To investigate the mechanism by which ECD promotes cancer invasion and metastasis, we identified the proteins that interact with ECD by performing Co-IP and mass spectrometry analyses (Fig. 3a). We identified that hnRNP F might interact with ECD. We further confirmed that ECD interacts with hnRNP F (Fig. 3b).
ECD consists of a conserved SGT1 domain and transactivation region 5 . To determine which domains interact with hnRNP F, we generated truncated ECD constructs with an N-terminal Flag tag (Fig. 3c). When these constructs were co-expressed with HA-hnRNP F in cells, only the constructs containing the ECD N-terminal STG1 domain (13-383aa), but no other ECD regions, could interact with hnRNP F, indicating that the N-terminal STG1 domain is essential for hnRNP F binding (Fig. 3d).
ECD increases hnRNP F protein levels by inhibiting hnRNP F polyubiquitination and degradation
How does ECD affect hnRNP F given that ECD can bind to hnRNP F? To address this, we investigated the effects of ECD on hnRNP F protein and mRNA levels. ECD silencing decreased hnRNP F protein levels, but not hnRNP F mRNA levels ( Fig. 4a, b), while ECD overexpression increased hnRNP F protein levels, but not hnRNP F mRNA levels, in a dose-dependent manner (Fig. 4c, d), indicating that ECD upregulated hnRNP F protein levels at the post-transcriptional level.
Because ECD may upregulate hnRNP F protein levels, and the ubiquitination/proteasome pathway is the quickest known mechanism through which proteins are irreversibly degraded 20,23,24 , we surmised that ECD upregulated hnRNP F by preventing its ubiquitination. To test this hypothesis, we determined the effect of ECD overexpression on the half-life of hnRNP F. As shown in Fig. 4e, ECD overexpression resulted in a longer hnRNP F half-life. We also performed an in vivo ubiquitination assay, which showed that silencing ECD in the presence of MG132 (a specific proteasome inhibitor) increased hnRNP F polyubiquitination levels ( Fig. 4f). Collectively, our results strongly indicate that ECD inhibits hnRNP F protein polyubiquitination and proteasomal degradation, thus increasing hnRNP F protein levels.
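As a rough quantitative illustration (not the authors' analysis pipeline), a protein half-life from a cycloheximide-chase experiment such as the one in Fig. 4e can be estimated by fitting a first-order exponential decay to the band densitometry values; the numbers below are invented for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, k):
    """Fraction of protein remaining after t hours, assuming first-order decay."""
    return np.exp(-k * t)

# Hypothetical densitometry values (hnRNP F normalized to t = 0), not the paper's data.
t_hours   = np.array([0.0, 2.0, 4.0, 8.0])
remaining = np.array([1.00, 0.72, 0.51, 0.27])

(k,), _ = curve_fit(exp_decay, t_hours, remaining, p0=[0.1])
half_life = np.log(2) / k
print(f"decay constant k = {k:.3f} /h, half-life = {half_life:.1f} h")
```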
Furthermore, we investigated the correlation between ECD and hnRNP F protein levels in GC tissues. We found that hnRNP F protein levels were upregulated in GC tissues compared with those in matched N tissues, consistent with the results of the ECD expression experiments (Fig. 1a). Extensive tissue microarray analysis showed that hnRNP F protein levels were higher in GC tissues than in N tissues (Fig. 4g); however, hnRNP F mRNA levels were unchanged between GC tissues and N gastric tissues (Supplementary Fig. S1). ECD protein levels were also positively correlated with hnRNP F protein levels in GC tissue samples (R = 0.314, p < 0.0001) (Fig. 4h). These findings indicate that ECD increases hnRNP F protein levels by stabilizing hnRNP F. (Results in (a) and (b) are shown as means ± SEM of three independent experiments. *p < 0.05 or **p < 0.01 was considered statistically significant.)

ECD stimulates GC migration and invasion by stabilizing hnRNP F

Next, we investigated the role of hnRNP F in ECD functions. We found that silencing of hnRNP F inhibited GC cell migration and invasion, similar to the effects induced by ECD knockdown (lane 3 in Supplementary Fig. S2B). Silencing hnRNP F also attenuated the enhanced migration and invasion induced by ECD overexpression (Supplementary Fig. S2), indicating that ECD promotes GC migration and invasion by regulating hnRNP F stability.
ZFP91 is an E3 ubiquitin ligase responsible for ubiquitinating hnRNP F

As ECD is not an E3 ligase, and the E3 ligase responsible for hnRNP F ubiquitination remains unknown, we sought to identify the E3 ligase that regulates hnRNP F protein ubiquitination and degradation. E3 ligases must bind to their substrates to facilitate their ubiquitination. Thus, we identified the proteins that interacted with hnRNP F by performing Co-IP and a mass spectrometry assay to identify the E3 ligase responsible for ubiquitinating hnRNP F. We found that ZFP91 is the potential E3 ligase because it can bind to hnRNP F (Fig. 5a). We further confirmed that ZFP91 interacts with hnRNP F (Fig. 5b, c), thus suggesting that ZFP91 may be the E3 ligase responsible for polyubiquitinating hnRNP F.
To determine whether ZFP91 is the E3 ligase responsible for hnRNP F ubiquitination, we investigated the effects of ZFP91 on hnRNP F mRNA and protein levels, half-life and ubiquitination. We found that silencing ZFP91 increased hnRNP F protein levels, but not hnRNP F mRNA levels, while ectopically expressing ZFP91 decreased hnRNP F protein levels, but not hnRNP F mRNA levels (Fig. 5d, Supplementary Fig. S3). Moreover, ZFP91 overexpression resulted in a shorter hnRNP F protein half-life (Fig. 5e), and our in vivo ubiquitination assay results showed that ZFP91 overexpression increased hnRNP F polyubiquitination levels in the presence of the specific proteasome inhibitor, MG132 (Fig. 5f). This indicates that ZFP91 is the E3 ligase responsible for ubiquitinating and degrading hnRNP F.
ZFP91 induces hnRNP F ubiquitination at Lys 185
To identify the specific ubiquitination modification lysine sites in the hnRNP F protein, we searched the PhosphoSitePlus post-translational modification resource. Four potential ubiquitination sites at lysine residues were found in the hnRNP F protein (Supplementary Fig. S4A). We subsequently generated hnRNP F mutants in which these lysine residues were replaced with arginine. We found that ZFP91 overexpression did not change the protein levels in the hnRNP F mutant with K185R, but it decreased the protein levels in the hnRNP F mutants with K87R, K167R and K171R (Fig. 5g). ZFP91 overexpression also increased the hnRNP F mutant polyubiquitination levels with K87R, K167R, or K171R, but not in the mutant with K185R (Fig. 5h). These data indicate that hnRNP F ubiquitination at Lys 185 was regulated by ZFP91 E3 ligase.
We further investigated the effects of hnRNP F ubiquitination at Lys 185 on ZFP91-mediated cancer cell migration and invasion. We found that ZFP91 overexpression inhibited GC cell migration and invasion, thus exerting effects that contrast with those of hnRNP F (Supplementary Fig. S4B). ZFP91 co-expression also attenuated the enhanced migration and invasion induced by overexpressing the hnRNP F mutants with K87R, K167R, or K171R, as ZFP91 polyubiquitinated these hnRNP F mutants and facilitated their subsequent degradation by the proteasome; however, ZFP91 co-expression did not block the enhanced migration and invasion induced by overexpressing the hnRNP F mutant with K185R, because this protein was not polyubiquitinated and degraded by ZFP91 (Supplementary Fig. S4B). Collectively, our results indicate that ZFP91 polyubiquitinated hnRNP F at Lys 185.
ECD blocks ZFP91 from binding to hnRNP F
We further investigated, and subsequently found, an interaction between ECD and ZFP91 (Fig. 6a, b). The constructs containing the ECD N-terminal STG1 domain (13-383aa), but not other ECD regions, retained interactions with ZFP91 (Fig. 6c), suggesting that the same ECD domain that bound to hnRNP F interacted with ZFP91. Because ECD binds to ZFP91 and hnRNP F, and ZFP91 binds to hnRNP F, we surmised that ECD inhibits the interaction between ZFP91 and hnRNP F. We cotransfected ZFP91, hnRNP F, and ECD plasmids into HeLa cells and assessed the protein interactions. We found that ECD overexpression dose-dependently blocked the ZFP91 and hnRNP F interactions (Fig. 6d, e). Furthermore, we found that the constructs containing the ECD N-terminal STG1 domain (13-383aa), but not other ECD regions, blocked the ZFP91 and hnRNP F interactions (Fig. 6f). This indicates that ECD blocked ZFP91 from binding to hnRNP F through the N-terminal STG1 domain.
ECD blocks ZFP91-mediated hnRNP F ubiquitination and degradation
Because ZFP91 is an E3 ubiquitin ligase that acts on hnRNP F, and ECD blocks the interaction between ZFP91 and hnRNP F and inhibits hnRNP F degradation, we surmised that ECD blocks ZFP91-mediated hnRNP F ubiquitination and degradation. We co-transfected ZFP91, hnRNP F, and ECD plasmids into HeLa cells and investigated hnRNP F protein and ubiquitination levels. ECD reversed the decreased hnRNP F protein levels induced by ZFP91 overexpression (Fig. 7a). In addition, ECD dose-dependently blocked the enhanced hnRNP F ubiquitination levels induced by ZFP91 overexpression (Fig. 7b, c).

Fig. 5 a The proteins that interacted with Flag-hnRNP F (Flag-F) were identified by combining Co-IP and mass spectrometry assays. E3 ligase ZFP91 interacted with hnRNP F. b, c The Flag-F plasmid and HA-ZFP91 vector were co-transfected into SGC-7901 cells, and the resulting Flag-hnRNP F (b) or HA-ZFP91 (c) complexes were co-immunoprecipitated by anti-Flag antibodies. ZFP91 or hnRNP F presence in these complexes was detected using HA or Flag antibodies, respectively. d SGC-7901 cells were transfected with two anti-ZFP91 siRNAs (upper panel) or ZFP91 plasmids (lower panel), and the indicated protein levels were determined. e SGC-7901 cells were transfected with HA-ZFP91 plasmids for 36 h and then incubated with CHX for the indicated times. The half-life of the hnRNP F protein was analyzed as described in Fig. 4e. f Cells were co-transfected with the indicated plasmids for 36 h, then treated with or without MG132 (10 μM) for 4 h. The polyubiquitination pattern of hnRNP F was analyzed as described in Fig. 4f. g The indicated Flag-hnRNP F ubiquitin-lysine mutants and HA-ZFP91 plasmid were co-transfected into HeLa cells, and the indicated protein expression levels were detected. h The indicated Flag-hnRNP F mutants and HA-ZFP91 plasmid were co-transfected into GC cells, and cell migration and invasion ability were assessed.

We further investigated whether the ECD N-terminal STG1 domain can block ZFP91-mediated hnRNP F ubiquitination, as this domain blocked the interaction between ZFP91 and hnRNP F. The mutants containing the ECD N-terminal STG1 domain exerted effects similar to wild-type ECD and blocked the increased hnRNP F polyubiquitination levels induced by ZFP91, while the mutants containing other ECD regions did not (Fig. 7d). This indicates that ECD blocks ZFP91-mediated hnRNP F ubiquitination and degradation through the N-terminal STG1 domain.
Discussion
We found that ECD induces cancer invasion and metastasis, and we elucidated the novel mechanism underlying ECD's effect on cancer progression. Specifically, we found that ECD stabilizes hnRNP F, by blocking the interaction between ZFP91 and hnRNP F and the subsequent ubiquitination and degradation of hnRNP F by ZFP91. ZFP91 is an E3 ubiquitin ligase that ubiquitinates hnRNP F at Lys 185.
Initial studies demonstrated that the Drosophila ECD is required for embryonic development 4,78,25. Here, we demonstrated that human ECD homolog levels were significantly increased in GC tissues, especially in metastatic GC tissues, compared with those in adjacent non-tumoral gastric tissues. ECD mRNA levels were also increased in GC tissues compared with those in the gastric mucosa tissues in three gastric cancer datasets deposited in the Oncomine database. These results indicated that ECD upregulation in GC was induced at the transcriptional level, consistent with our previous study in which we observed that ACK1 stimulated the transcription factor POU2F1 to induce ECD transcription 1. ECD overexpression was significantly associated with aggressive GC phenotypes, as GC patients with high ECD levels exhibited higher death rates and shorter survival times than GC patients with low ECD levels. Our findings suggest that ECD may be a novel independent prognostic factor in GC patients. Silencing ECD significantly suppressed GC metastasis and invasion in vitro and in vivo, indicating that ECD may be a novel therapeutic target for GC.

Fig. 6 a, b The Flag-ECD plasmid and HA-ZFP91 vector were co-transfected into SGC-7901 cells, and the resulting Flag-ECD (a) or HA-ZFP91 (b) complexes were co-immunoprecipitated by anti-Flag antibodies. ZFP91 or hnRNP F presence was detected using anti-HA or Flag antibodies, respectively. c The Flag-ECD mutants and HA-ZFP91 plasmids were co-transfected into HeLa cells, and the resulting HA-ZFP91 complexes were co-immunoprecipitated by anti-HA antibodies. Presence of Flag-ECD mutants in these complexes was detected by anti-Flag antibodies. d The indicated plasmids were co-transfected into HeLa cells, and the resulting Flag-hnRNP F complexes were co-immunoprecipitated by anti-Flag antibodies. ZFP91 presence in this complex was detected using anti-HA antibodies. e The indicated plasmids and ECD plasmids were co-transfected into HeLa cells at the indicated doses, and the resulting Flag-hnRNP F complexes were co-immunoprecipitated by anti-Flag antibody. ZFP91 presence in this complex was detected using anti-HA antibodies. f The Flag-ECD mutants and indicated plasmids were transfected into HeLa cells, and the resulting HA-hnRNP F complexes were co-immunoprecipitated by anti-HA antibodies. ZFP91 presence in this complex was detected using anti-Flag antibodies.
Previous studies reported that ECD promotes cell proliferation by regulating RB/E2F pathway-dependent cell-cycle progression and GLUT4-dependent glycolysis in breast and pancreatic cancer, respectively 4,8. Here, we elucidated a novel mechanism underlying the effects of ECD on GC invasion and metastasis. We showed that ECD promoted GC metastasis and invasion by stabilizing hnRNP F. We found that ECD bound to hnRNP F to prevent E3 ligase ZFP91 from interacting with hnRNP F, thus blocking ZFP91-mediated hnRNP F polyubiquitination at Lys 185 and increasing hnRNP F protein levels to promote cancer metastasis and invasion. Previous studies showed that hnRNP F mRNA levels were increased by OKI-6 in myelinating glia 26. In this study, we discovered a novel mechanism through which hnRNP F is upregulated by ECD blocking the ubiquitin/proteasome pathway. Double bands for hnRNP F were detected; possible causes include a different hnRNP F isoform and cleaved hnRNP F. ZFP91 is a novel E3 ligase that activates the NF-κB-inducing kinase (NIK) via Lys63-linked ubiquitination in the noncanonical NF-κB signaling pathway 27. Here, we noted that in addition to activating substrates via ubiquitination, ZFP91 also interacted with and promoted the ubiquitination of hnRNP F at Lys 185, as well as its subsequent degradation by the proteasome. Here, we report for the first time that ZFP91 is an E3 ligase that ubiquitinates and degrades hnRNP F.
In conclusion, we found that ECD is overexpressed in GC, especially in metastatic GC, and that ECD overexpression is correlated with a malignant phenotype and poor prognosis for GC patients. We elucidated the novel molecular mechanism underlying the effects of ECD in cancer progression, as we determined that ECD promotes invasion and metastasis by stabilizing hnRNP F. We further demonstrated that ECD competitively bound to hnRNP F through the N-terminal STG1 domain to prevent ZFP91-mediated hnRNP F ubiquitination and degradation. We discovered the E3 ligase, ZFP91, which is responsible for hnRNP F ubiquitination and degradation at Lys 185. Our findings indicate that ECD may be a new prognostic factor and potential anti-cancer target for GC.

Fig. 7 ECD blocks ZFP91 E3 ligase-mediated hnRNP F protein ubiquitination and degradation. a The indicated plasmids were transfected into HeLa cells, and the indicated protein expression levels were determined. b HeLa cells were co-transfected with the indicated plasmids for 36 h, then treated with 10 μM MG132 for 4 h. The hnRNP F polyubiquitination pattern was analyzed as described in Fig. 4f. c His-ECD vectors and the indicated plasmids were transfected into HeLa cells at the indicated doses. The hnRNP F polyubiquitination pattern was analyzed as described in Fig. 4f. d The indicated Flag-ECD mutants and plasmids were transfected into HeLa cells. The hnRNP F polyubiquitination pattern was analyzed as described in Fig. 4f. e A proposed model illustrating that ECD promotes cancer metastasis by blocking the ZFP91 E3 ligase from binding to hnRNP F, and the subsequent ubiquitination and degradation of hnRNP F by ZFP91 E3 ligase.
Materials and Methods
Cell culture and tissue samples

HeLa cells were obtained from and authenticated by ATCC. The GC cell lines, SGC-7901 and MGC-803, were obtained from and authenticated by isoenzyme assay by the Institute of Biochemistry and Cell Biology, Chinese Academy of Sciences (Shanghai, China) 28. The cells were cultured as previously described 1,9. All cell lines were treated with Plasmocin and tested with Mycoplasma PCR Detection Kit (Sigma-Aldrich, USA). Primary GC tissue samples and matched adjacent non-tumoral gastric tissue samples were collected at The Third Affiliated Hospital of Guangzhou Medicine University. Samples with a clear pathological diagnosis were selected and obtained from GC patients who had not received preoperative anticancer treatment. Informed consent was obtained from each patient who participated in the study, and tissue sample collection was approved by the Internal Review and Ethics Boards of The Third Affiliated Hospital of Guangzhou Medicine University. Tissue microarray chips containing GC tissue samples and matched adjacent non-tumoral gastric tissue samples along with their corresponding clinicopathological data were obtained from Shanghai OUTDO Biotech Co., Ltd. (Shanghai, China).
Immunohistochemistry (IHC) staining assay
IHC staining assays were performed on tissue microarray chips with anti-ECD and anti-hnRNP F antibodies, as previously described, with minor modifications 29 . All IHC staining results were assessed by two independent pathologists blinded to the sample origins and the corresponding patient outcomes. ECD and hnRNP F expression was evaluated using a previously described semi-quantitative German scoring system, which assesses protein expression based on staining intensity and area. Scores of 0-5 indicated low ECD and hnRNP expression (low) in the GC tissue, and scores of 6-7 indicated high ECD and hnRNP expression (high) in the GC tissue.
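As an illustration of how such a semi-quantitative score might be dichotomized, the sketch below assumes an additive intensity-plus-area score in the range 0-7 together with the cut-off stated above; the composition of the score into its components is not specified in this section, so the individual scales are assumptions.

```python
def ihc_score(intensity: int, area: int) -> int:
    """Combine staining intensity (assumed 0-3) and stained-area category (assumed 0-4)."""
    if not (0 <= intensity <= 3 and 0 <= area <= 4):
        raise ValueError("intensity must be 0-3 and area category 0-4")
    return intensity + area

def expression_group(score: int) -> str:
    """Dichotomize: 0-5 is 'low', 6-7 is 'high' (cut-off from the scoring system used here)."""
    return "high" if score >= 6 else "low"

print(expression_group(ihc_score(intensity=3, area=4)))  # high
print(expression_group(ihc_score(intensity=2, area=2)))  # low
```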
Migration and invasion assays
In vitro migration and invasion assays were performed using Transwell chambers, as previously described 1 .
Western blotting
Western blotting was performed as previously described 1 .
Experimental in vivo metastasis model
Male NOD-SCID mice (aged 4-5 weeks) were obtained from Charles River Laboratories in China (Beijing). The mice were bred and maintained under defined conditions at the Animal Experiment Center of the College of Medicine (SPF grade), Jinan University. The mice were randomly allocated into experimental groups by researchers blinded to the subject outcomes. In vivo metastatic assays were performed as previously described, with minor modifications 30. Briefly, five NOD-SCID mice in each experimental group were injected with luciferase-labeled SGC-7901-Luc-NC or SGC-7901-Luc-ECD shRNA-transduced cells (2 × 10^6 cells in 0.2 ml of PBS) via their tail veins. The resultant metastatic foci in the lungs were visualized 2 months after tumor implantation. Animal experiments were approved by the Laboratory Animal Ethics Committee of Jinan University and conformed to the legal mandates and national guidelines for the care and maintenance of laboratory animals.
Co-immunoprecipitation (Co-IP)
Cells were transfected with Flag-ECD, Flag-hnRNP F (Flag-F), HA-ZFP91, or Flag (negative control) vectors for 48 h. Co-IP was subsequently performed using anti-Flag or anti-HA antibodies (MBL), and the immune complexes were captured on protein A/G agarose beads (Santa Cruz, USA) and separated by SDS-PAGE. The SDS-PAGE gels were stained with silver. Protein expression was detected by western blotting using the indicated antibodies.
Protein identification by mass spectrometry
The differential gel bands and their corresponding negative gel bands were excised and digested using in-gel trypsin. The extracted peptide mixtures were analyzed using nano-LC-MS/MS, as previously described 31 . Proteins were identified using the Mascot (v2.3.02) program against the Uniprot human protein database (released Dec. 2014) using the default settings. Protein scores were ≥40, and unique peptides were ≥2.
In vivo ubiquitination assay
This procedure was performed as previously described 22 . Briefly, the cells were co-transfected with the indicated plasmids for 36 h, then treated with 10 μM MG132 for 4 h prior to harvesting. The cells were lysed in RIPA buffer with protease inhibitor cocktail (Roche, Switzerland). Flag-hnRNP F was immunoprecipitated using anti-Flag antibodies and protein A/G agarose beads. Polyubiquitinated hnRNP F was detected using anti-HA antibodies.
Statistical analysis
Two-tailed Student's t-tests or Mann-Whitney U tests were used for comparisons between two groups. Survival analysis was performed using the Kaplan-Meier method and the log-rank test. Statistical analyses were performed using Prism 5.0 software. Data are presented as the mean ± SEM unless otherwise stated. *p < 0.05 or **p < 0.01 was considered statistically significant.
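The analyses described above were performed in Prism; they could be reproduced in Python roughly as in the hedged sketch below, which uses scipy and the third-party lifelines package and invented example data rather than the study's measurements.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical measurements for two groups (e.g. control vs. ECD-silenced), not the paper's data.
group_a = np.array([1.00, 0.95, 1.10, 1.05])
group_b = np.array([0.60, 0.55, 0.70, 0.65])
print("t-test p =", ttest_ind(group_a, group_b).pvalue)
print("Mann-Whitney p =", mannwhitneyu(group_a, group_b).pvalue)

# Hypothetical survival data: months of follow-up and event indicator (1 = death).
time_high, event_high = [10, 14, 19, 22, 30], [1, 1, 1, 0, 1]
time_low,  event_low  = [25, 40, 45, 50, 60], [0, 1, 0, 1, 0]
kmf = KaplanMeierFitter()
kmf.fit(time_high, event_high, label="ECD high")
print("median survival (ECD high):", kmf.median_survival_time_)
result = logrank_test(time_high, time_low, event_observed_A=event_high, event_observed_B=event_low)
print("log-rank p =", result.p_value)
```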
The strategic role of civil society organisations in handling climate change: A case of Riau in Indonesia
Although many local civil society organisations (CSOs) are involved in reducing the impact of climate change, the issue has been reported in only a limited number of studies. This paper addresses the gap by investigating the role of local CSOs in helping to anticipate the effects of climate change. The study examines the case of Riau because it has many CSOs concerned with environmental problems. The objectives of this research are to classify the environmental CSOs in Riau and analyze their contribution to climate change resilience. Using a qualitative approach, the data were collected through interviews with a series of participants, including CSO activists, government officials, academicians, and community leaders. We reveal that local CSOs can be classified as working on conservation, advocacy, empowerment, and conflict resolution. Along with their own and government programs, the CSOs have contributed to tackling climate change by ensuring forest and peatland preservation. The theoretical and practical contributions of the study are elaborated.
Introduction
Climate change, defined as a change in the composition of the atmosphere and in the variability of climate over comparable time spans, triggered actively or passively by anthropogenic activities [1], is a serious problem faced by many countries around the globe. Climate change is driven by the greenhouse effect, the process by which greenhouse gases retain solar radiation in the atmosphere and thereby raise global warming. The gases included in the greenhouse gas category are carbon dioxide (CO2), nitrous oxide (N2O), hydrofluorocarbons (HFCs), methane (CH4), perfluorocarbons (PFCs), and sulfur hexafluoride (SF6) [2]. Human activities such as the use of fossil fuels, deforestation, and land use are the primary causes of greenhouse gas accumulation in the atmosphere.
Climate change has a wide range of consequences for people's lives. The warming of the earth's surface affects not only the surface temperature but also the climate system, impacting various aspects of natural change and human life such as water quality and quantity, biodiversity, forests, health, agricultural land, and coastal ecosystems. Rainfall that is too heavy degrades the consistency of water supplies. Furthermore, as the temperature rises, the amount of chlorine in clean water rises [3]. Global warming increases the amount of water in the atmosphere, which in turn increases rainfall. Although the increase in rainfall can increase the number of clean water sources, too much rainfall results in a high possibility of water returning directly to the sea without having time to be stored in clean water sources for human use [4], [5]. Climate change is also believed to be the cause of rising global temperatures, rising sea levels, and increasing flooding and landslides, and to affect harvests and water supplies [6].
Many actors have contributed to reducing the impact of climate change in a country. The participation of multiple actors in reducing climate change risks establishes climate change governance, which is the process of formulating and implementing policies, regulations, and development priorities that refer to the goals of sustainable development [7], [8]. The actors can be categorized as state and non-state actors. The state actors include government organisations, both central and local governments. The non-state actors are actors outside government related to climate change programmes, such as the business sector, international institutions, and civil or non-governmental organisations [9]. State and non-state actors have equal responsibility in reducing and mitigating climate change impacts.
The present research focuses on the role of non-state actors, particularly civil society organisations (CSOs), in mitigating the effect of climate change. CSOs are non-governmental organisations (NGOs) including non-profit organizations, associations, foundations, forums (formal and informal), labour unions, professional associations, and educational and research institutions. Therefore, CSOs are also commonly acknowledged as NGOs. Among the many functions of civil society, the main functions of CSOs are considered to be the monitoring of and advocacy on public policies and community empowerment. Through these functions, civil society ensures that development carried out by the government and the bureaucracy is able to anticipate climate change, including climate change adaptation and mitigation.
The growing body of literature has asserted the important role of non-state actors in overcoming the impact of climate change. Several studies have shown the significant role of CSOs in climate change campaigns. Poutiainen et al. investigated the adaptation strategies conducted by Canadian CSOs to reduce climate change impacts in the health sector [10]. Numerous efforts were carried out by environmental CSOs in Canada, including maintaining water and air quality, ecosystem interventions, and helping vulnerable groups. Granderson studied the role of CSOs in providing climate change adaptation planning in Vanuatu [11]. She found that CSO activists used a technocratic approach based on climate science and concentrated on the seriousness of risks as well as the possibility of external intervention. Kristanti et al. pointed out the role of international donor CSOs in assisting the Indonesian government to reduce the effect of climate change through various programmes associated with central and local government [1]. Research on the role of CSOs in mitigating climate change still leaves a variety of empty spaces to be further explored. The first is that prior studies have not explained the involvement of local CSOs in a region or country. The second is that explanations of the pattern, activity, and strategic role of CSOs in climate change issues are extremely scarce. To address these research gaps in the current body of the literature, the paper has two purposes. Firstly, this work intends to describe the significant role of local CSOs in tackling climate change risk in developing countries. Secondly, the present research explains how local CSOs take part in reducing climate change impact. We analyze the case of Riau, Indonesia to fulfil the aims of the study. Riau was chosen as the locus of this study for several important reasons. First, Riau is one of the regions in Indonesia with plentiful natural resources, including forests and peatland. Second, along with this, the destruction of these natural resources has also occurred massively. Accordingly, the study addresses the following research questions: (1) how can the environmental CSOs in Riau be classified, and (2) what are the CSOs carrying out to reduce the impact of climate change in Riau?
Methods
This was empirical research conducted using a qualitative approach. We employed a qualitative method because we needed to understand the role of CSOs in tackling climate change. This is in line with the purpose of qualitative research, which is to generate an understanding of a phenomenon rather than to generalize [12]. Rather than aiming at broad generalization, we attempted to promote an understanding of the practices of CSOs in developing programs and activities to reduce the effect of climate change in a developing country. The research was carried out in Riau, Indonesia, from October to December 2020. The case of Riau was used to highlight how the CSOs took part in and contributed to the process of reducing climate change impact through their programs.
The data were collected from primary sources by interviewing a series of informants, grouped into CSO activists, government institutions, media, and academics (Table 1). CSO activists came from various organisations in Riau, such as Jikalahari (Jaringan Kerja Penyelamat Hutan Riau), Wahana Lingkungan Hidup (Walhi), Scale-up Riau, World Resources Institute (WRI), Kaliptra Andalas, Bahtera Alam, Forum Indonesia untuk Transparansi Anggaran (Fitra), and Elang. The respondents from the Provincial Government of Riau included the Regional Agency of Political Affairs and the Forestry and Environmental Office. Lembaga Adat Melayu (LAM) Riau was chosen as the representative of cultural organizations. We chose scholars from Universitas Riau and Universitas Lancang Kuning to represent academics, while Antara Riau, catatanriau.com, and Green Radio Line represented media and journalists. The interviewees were selected through purposeful sampling, with each informant referring us on to other individuals familiar with the research problem. All informants were chosen because they understood the involvement of CSOs in handling climate change effects in Riau. To ensure validity and reliability, we cross-checked the data using triangulation, a technique ordinarily used in qualitative research to validate and verify data and information collected in the field [13]. Specifically, we used triangulation of sources and triangulation of methods. Triangulation of sources was applied by comparing data taken from one informant with data from other informants; triangulation of methods was performed by comparing the interviews with secondary data. After verifying the data, we coded and grouped them into specific themes. The data are displayed in the results and analysis.
Current situations of environmental CSOs in Riau
CSOs in Riau have developed over a long period, historically rooted in the tradition of Malay society, which favours discussing and criticizing social reality [14]. However, the rise of the New Order under Soeharto's authoritarian regime in 1966 marked a dark age for CSOs and freedom of expression in Indonesia [15]. To ensure political stabilization, the New Order restricted and controlled the existence of CSOs, including in Riau; as a result, CSOs were pressured and blocked from performing their functions. After the fall of the New Order, CSOs grew rapidly again alongside the freedom of speech brought by the Indonesian reform era. Figure 1 shows the number of CSOs in Riau by orientation. According to these data, there are 188 CSOs in Riau. The majority focus on socio-cultural issues (120 organizations); the rest are environmental CSOs (32 organizations), politically oriented CSOs (22 organizations), and education-oriented CSOs (14 organizations). Unfortunately, these figures understate the real number of CSOs in Riau, because not all CSOs are registered with the Political Affairs Agency of Riau Province as a consequence of a new regulation on CSOs: under the new rule, a CSO that has registered with the Ministry of Home Affairs is no longer required to register with the political affairs agency of regional or local government. The total number of CSOs in Riau is therefore estimated to be almost three times the figure shown in Figure 1. In terms of home base and operating area, CSOs in Riau can be classified into three types, as shown in Table 2: local, national, and transnational. CSOs that receive funds from international donors and operate in Riau and other countries are called transnational CSOs; examples in Riau include WRI and the World Wide Fund for Nature (WWF). National CSOs are funded by a national body and operate in Riau, such as Walhi and Fitra. Local CSOs are located in a city or regency in Riau and funded by national or international institutions, such as Jikalahari, Kaliptra Andalas, Scale-up, Elang, and Bahtera Alam. In practice, all forms of CSO can collaborate and cooperate in implementing programs in the field to reach their goals (Table 3). Collaborative actions among CSOs frequently take the form of networks, coalitions, and alliances, such as Eyes on the Forest (EoF), a national alliance among environmental CSOs, including Jikalahari, WWF, and Walhi, focused on monitoring forest destruction in Riau. EoF operates not only in Riau but also in Kalimantan, and has recently been expanded to Papua to address environmental damage there.
The role of CSOs in climate change
In Riau, the CSOs concerned with reducing climate change impacts are largely environmental CSOs. In fact, classifying CSOs into specific cohorts is difficult because they are complex and fluid; despite their institutional separation, CSOs do not perform a single, fixed set of operations. We recognize three CSO roles that are important to reducing climate change: advocacy, conservation, and conflict resolution. Advocacy is action that provides active support in the form of endorsement, backing, or recommendations; it can also be described as a method of attempting to influence public policy through various forms of persuasive communication [16]. Advocacy by CSOs is a strategic and integrated effort carried out by individuals or groups to put an issue on the policy agenda and, ultimately, to find a solution by pressing for the enactment and implementation of public policy that addresses the issue. Face-to-face campaigns, social media campaigns, marches, filing petitions, and persuading others to take action are all examples of advocacy tactics, which strive to mobilize facts, attention, and social action in order to effect substantive change. The CSOs in this category in Riau include Jikalahari and Walhi. Conservational CSOs aim to preserve the environment while also considering the benefits that can be obtained by maintaining each environmental component for future use; conservation may also be referred to as preservation or protection, an effort made by humans to conserve nature [11]. The aim of conserving living natural resources and their habitats is to ensure that these resources and the balance of their ecosystems are preserved so that they can sustain efforts to enhance community health and the quality of human life. Jikalahari, Elang, WRI, and WWF are among the conservation CSOs in Riau. Conflict resolution CSOs are dedicated to resolving conflicts; Bahtera Alam, Scale-up, Kaliptra Andalas, and Elang are committed to this role. Conflicts usually occur between corporations and communities over control of land. Resolving such disputes is a difficult task, with many parties opposed to it, so a strong mediator is required to assist in the resolution process. Once the dispute resolution process is completed, CSOs typically run empowerment programs to strengthen the communities involved. The roles and programs of the environmental CSOs in Riau are summarized in Figure 2 and Table 3. The data illustrate that all types of CSOs run programs jointly, and these programs are directly or indirectly connected to the reduction of climate change risks in Riau. Among the conservational CSOs, the Green Siak Programme is conducted jointly by Walhi, Jikalahari, Fitra, and the Government of Siak Regency; Green Siak aims to promote the principles of sustainable development in the use of natural resources and to increase the local economy in Siak Regency. In their advocacy function, Jikalahari, Fitra, and Walhi also monitor corruption in the forest and environmental sectors in Riau, including land-use change practised by regional heads of government. One of their success stories is their advocacy in the cases of Rusli Zainal and Annas Maamun (former governors of Riau), which led to both being arrested by the Corruption Eradication Commission (KPK).
In their conflict resolution role, several CSOs such as Scale-up, Bahtera Alam, and Kaliptra Andalas attempt to resolve land-use conflicts in Riau, mainly conflicts over peatland use between local or customary communities and corporations. Through conflict resolution, CSOs in Riau intend to realize fair and equal land ownership in Indonesia.
This research contributes theoretically to the current body of literature because our findings extend the discussion on the role of CSOs in addressing climate change to the local context of Indonesia. CSOs play a pivotal part in addressing climate change in many countries, and our results also support previous findings that highlighted the emergence of societal engagement in climate change governance [17]-[22]. Practically, our work is also useful for the governance of CSOs in Indonesia: we suggest that the government should strengthen CSOs and develop mutual cooperation and understanding with them to reduce the impact of climate change, since collaboration between CSOs and the government can enhance the effectiveness of climate change programs.
Conclusion
The current study investigates the contribution of CSOs to overcoming the impact of climate change. By analyzing the case of CSOs in Riau, we identify the strategic roles that CSOs play in mitigating climate change.
|
2021-07-30T20:05:05.050Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "8a3bab79ac75fcd3846ed72107f10aa8a241bf5d",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/824/1/012104/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8a3bab79ac75fcd3846ed72107f10aa8a241bf5d",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
204836306
|
pes2o/s2orc
|
v3-fos-license
|
Symbiotic bacteria motivate the foraging decision and promote fecundity and survival of Bactrocera dorsalis (Diptera: Tephritidae)
Background The gut bacteria of tephritid fruit flies play prominent roles in nutrition, reproduction, maintenance and ecological adaptations of the host. Here, we adopted an approach based on direct observation of symbiotic or axenic flies feeding on dishes seeded with drops of full diet (containing all amino acids) or full diet supplemented with bacteria at similar concentrations to explore the effects of intestinal bacteria on the foraging decision and fitness of Bactrocera dorsalis.
Results The results show that intestinal probiotics elicit beneficial foraging decisions and enhance female reproductive fitness and survival of B. dorsalis (symbiotic and axenic), yet preferences for probiotic diets were significantly higher in axenic flies, which also responded to them faster than to the full diet. Moreover, females fed diet supplemented with Pantoea dispersa and Enterobacter cloacae laid more eggs but had shorter lifespans, while females fed Enterococcus faecalis and Klebsiella oxytoca enriched diets lived longer but had lower fecundity compared to the positive control. Conversely, flies fed sugar diet (negative control) were not able to produce eggs, but lived longer than those from the positive control.
Conclusions These results suggest that intestinal bacteria can drive the foraging decision in a way which promotes the reproduction and survival of B. dorsalis. Our data highlight the potential of gut bacterial isolates to control the foraging behavior of the fly and to empower the sterile insect technique (SIT) program through mass rearing.
Background The gut bacteria of tephritid fruit flies play prominent roles in nutrition, reproduction, maintenance and ecological adaptations of the host. Here, we adopted an approach based on direct observation of symbiotic or axenic flies feeding on dishes seeded with drops of full diet (containing all amino acids) or full diet supplemented with bacteria at similar concentrations to explore the effects of intestinal bacteria on foraging decision and fitness of Bactrocera dorsalis. Results The results show that intestinal probiotics elicit beneficial foraging decision and enhance the female reproduction fitness and survival of B. dorsalis (symbiotic and axenic), yet preferences for probiotic diets were significantly higher in axenic flies to which they responded faster compared to full diet. Moreover, females fed diet supplemented with Pantoea dispersa and Enterobacter cloacae laid more eggs but had shorter lifespan while female fed Enterococcus faecalis and Klebsiella oxytoca enriched diets lived longer but had lower fecundity compared to the positive control. Conversely, flies fed sugar diet (negative control) were not able to produce eggs, but lived longer than those from the positive control. Conclusions These results suggest that intestinal bacteria can drive the foraging decision in a way which promotes the reproduction and survival of B. dorsalis. Our data highlight the potentials of gut bacterial isolates to control the foraging behavior of the fly and empower the sterile insect technique (SIT) program through the mass rearing.
Background
Insects are capable of shaping their foraging behavior and food consumption in a way that favors their growth and reproduction [1][2][3]. In this light, substrate-specific chemoreceptors are key factors in the responses of insects to environmental stimuli such as food, and their latency to respond depends on the nutritional status of the insect [4][5][6].
Many insects are associated with diverse extracellular microorganisms that can be found, among other sites, on the exoskeleton, in the hemocoel, or in the gut lumen [7], and with intracellular microorganisms that populate specialized tissues or organs such as bacteriocytes [8]. Their relationships with their hosts are often linked to their status as intra- or extracellular symbionts and range from parasitic to mutualistic [9][10][11]. The intracellular symbionts are often considered obligate, for they cannot live outside the host, and so are transmitted vertically (from mother to progeny) [12]. Both intra- and extracellular symbionts perform a variety of functions in host biology, survival and foraging activity [13][14][15][16][17][18]. For example, the gut microbiome of the vinegar fly Drosophila melanogaster was shown to indirectly influence the foraging behavior of the host by modulating its immune system, lipid and carbohydrate accumulation [19][20][21] and olfactory sensitivity for the bacteria's own benefit [22]. Gut bacteria were shown to be implicated in the resistance and susceptibility of Callosobruchus maculatus to dichlorvos and essential oil [23]. The intestinal probiotic Klebsiella oxytoca (a member of the Enterobacteriaceae family) restored the ecological fitness of irradiated B. dorsalis males by promoting food intake and metabolic activities [24].
Furthermore, some insects possess the ability to cultivate and digest their own gut bacteria as an additional protein source to fuel their metabolic functions [25,26]. This evidence indicates that gut bacteria and the host's nutritional status are intimately associated in driving the fitness and foraging behavior of the insect [18].
Tephritid fruit flies (Diptera: Tephritidae) harbour bacterial communities dominated by species of Enterobacteriaceae [27]. In other fruit flies, these microbes have been shown to be involved in host longevity [28,29], nitrogen fixation [30], reproductive success [31,32], protection from pathogens [33] and detoxification of xenobiotics [27,[34][35][36]. In order to survive and reproduce, these flies must acquire nutrients (carbohydrate and protein) from the environment through their foraging activity. The presence of a gut microbiome in adult flies contributes to their nutrition by providing essential amino acids missing from their diets. For example, symbiotic olive flies Bactrocera oleae have been able to produce eggs when fed only non-essential amino acids, while aposymbiotic flies have been unable to do so [28,29]. Moreover, bacteria-supplemented diets have been shown to increase the life expectancy and fecundity of the flies in comparison to normal diets. For instance, female olive flies fed a sugar diet inoculated with Pseudomonas putida laid more eggs than those fed the sugar diet only [37]. Similarly, Enterobacter agglomerans and Klebsiella pneumoniae improved the dietary outcomes of yeast-based foods, positively affecting the longevity and female reproductive capacity of the Mediterranean fruit fly Ceratitis capitata [38].
The oriental fruit fly Bactrocera dorsalis (Diptera: Tephritidae) is a serious pest which causes considerable loss of cultivated crops worldwide and attacks over 350 host species [39,40]. The bacterial populations inhabiting the gut and reproductive organs of this pest were shown to play important roles in host physiology and behavior [39,[41][42][43].
In the present study, we assessed the effects of four bacterial isolates on the foraging choice and fitness of B. dorsalis. We hypothesized that intestinal bacteria motivate the foraging decision and enhance the reproductive fitness and survival of the fly. The method consisted of offering protein-starved flies, symbiotic or axenic, a choice between full diets (containing all amino acids, sugar and minerals) or full diets supplemented with individual bacterial isolates (Pantoea dispersa, Enterobacter cloacae, Enterococcus faecalis and Klebsiella oxytoca). In the first experimental setting, we evaluated the effects of the presence or absence of bacteria on the responses of the flies (landing latency, food choice and ingestion) to the diets presented, while in the second experiment, we evaluated the fecundity and longevity of females fed full and probiotic diets, respectively. We predicted that, when deprived of their gut bacteria (axenic), the flies would consistently choose the most profitable diet to sustain their maintenance and reproductive fitness.
Results
Effects of bacterial isolates (probiotics) and symbiotic status on the foraging decision of B. dorsalis
Here, we evaluated the overall response of the flies (symbiotic and axenic) to the diets, irrespective of their quality (full diet only or full diet + bacterial isolates). Then we compared the landing events on probiotic diets of axenic flies to those of symbiotic ones, together with the time elapsed to the first landing decision (latency) in both flies.
In general, axenic flies responded faster in the experimental chambers than the symbiotic flies, and landings on the full diets inoculated with bacteria occurred faster than landings on the full diets only (Chi-square test, χ² = 7.93, R² = 0.998, P = 0.001) (Fig. 1). Axenic females and males landed within 1 and 3 min post-presentation on probiotic diets, respectively. Conversely, latency to land on full diets was longer in axenic females than in the symbiotic ones (ANOVA, F = 11.834, df = 1, 59, P < 0.0001) (Fig. 1).
Bacterial effects on fitness parameters
Female fecundity
Bacterial isolates of P. dispersa and E. cloacae significantly increased egg production in symbiotic and axenic B. dorsalis females from the fifth feeding day onwards compared with the positive control (ANOVA, F = 111.351, df = 5, 55, P < 0.0001 and F = 177.404, df = 5, 55, P < 0.001, respectively) (Fig. 3 a & b). Conversely, symbiotic and axenic females fed E. faecalis and K. oxytoca enriched diets laid drastically fewer eggs over their lifetimes compared to the positive control (F = 45.297, df = 5, 55, P < 0.0001 and F = 177.404, df = 5, 55, P < 0.0001, respectively), and egg production remained lower throughout the experimental period (Fig. 3 a & b). When B. dorsalis females were fed only sugar diet (negative control), they were not able to produce eggs irrespective of their symbiotic status. The fecundity of symbiotic females fed E. faecalis and K. oxytoca enriched diets was not different […]
Discussion
The relationships between the flies and their gut microbiome are interlaced in complex webs in which gut bacteria provide essential nutrients to the host and enhance its reproductive capacity and survival [1,37]. The alteration of gut bacteria generally results in the disruption of physiological functions of the host fly, and in order to survive and reproduce under such conditions, the flies optimally forage on diets which offer higher profitability in terms of nutrient intake [1,3]. In a similar experimental setting using symbiotic and aposymbiotic flies, the suppression of the gut microbiome by antibiotic treatment disrupted the foraging behavior of the oriental fruit fly B. dorsalis and constrained the fly to consume many food droplets at the cost of extending the foraging duration [18]. In our experiment, by creating different feeding environments, we demonstrated how supplementing the normal diet with gut bacterial isolates significantly affected the foraging behavior, diet ingestion and fitness parameters of B. dorsalis. Axenic flies (males and females) showed a significant preference for the probiotic meal, to which they responded faster (compared to the full diet), and maximized diet ingestion to ensure their maintenance and reproductive fitness. Previously, gut bacterial isolates (Enterobacter cloacae, Citrobacter freundii, Bacillus cereus, Enterobacter, Klebsiella etc.) were demonstrated to produce chemical substances which attract Bactrocera dorsalis and Bactrocera cucurbitae toward available food sources [44][45][46]. In addition to being highly attracted toward the probiotic diets, axenic flies were compelled to ingest as many food drops as possible to become satiated (Fig. 2 A & B). This finding suggests that bacterial isolates may facilitate the access to and assimilation of the available nutrients from the diets [32], either by increasing food palatability or by positively modulating digestive enzymes [29]. Previous studies also demonstrated that alterations of the gut microbiota may change feeding behavior and may constrain the host fly to make rational decisions toward diets with higher rewards [13,18], and that the use of intestinal bacteria as dietary supplements may help to restore the initial fitness of the host. For example, the commensal bacterial isolate Klebsiella oxytoca (BD177) was able to reinstate the ecological and foraging fitness of irradiated B. dorsalis lines by improving diet ingestion [18] and increasing food metabolism (haemolymph sugar and amino acid contents) [24].
The amount of food (full diet supplemented with Pantoea dispersa and Enterobacter cloacae) ingested by the symbiotic flies (which already harbour bacteria in their gut) and by the axenic ones (which do not harbour any of the given bacteria) was similar in the foraging experiment (Fig. 2a). This could be due to the ability of the bacterial isolates to recolonize their natural habitat in the gut of the axenic flies and revive appetitive behaviors that had been disrupted during bacterial suppression. On the other hand, since the symbiotic flies retain their intact gut microbiota, the effects of these bacterial isolates could be synergistic with those already present in the gut of normal flies. In addition, quantifying the four bacterial isolates in the fly or the fly gut after the flies have been provided with full or probiotic diets would provide stronger evidence that those bacteria, rather than any other factor, produced the phenotype and foraging behavior observed here.
Supplementing the full diet with P. dispersa and E. cloacae resulted in an improvement of female fecundity compared to the positive control (Fig. 3 A & B). The quality of diets and bacteria have been shown to interact in modulating the fecundity of many fruit flies [37,47,48]. The host fly can either use nutrients from the full diets directly to improve its reproductive fitness [28,29], or, simultaneously, the bacterial isolates can use amino acids from the diets to support their own proliferation before being digested by the host fly and used as an additional source of nutrients for egg production [25,26]. Conversely, when the diets were supplemented with E. faecalis and K. oxytoca, the number of eggs laid was reduced by more than 60% in comparison with the positive control, and no eggs were recorded from the negative control in which flies were fed the white sugar diet only (Fig. 3 A & B). Two implications can be drawn from these observations. First, E. faecalis and K. oxytoca may be deleterious by negatively affecting B. dorsalis sexual maturity and oogenesis, and the few eggs produced were sustained solely by the full diet. Second, the absence of eggs in sugar-fed flies indicates that amino acid residues are primary precursors of egg production in B. dorsalis. Taken together, the association between gut microbiome and diet quality has a nutritional and life history basis [21,37,49,50]. In the same vein, the establishment of a bacterial isolate (for example, Acetobacter thailandicus) in the gut of Drosophila melanogaster accelerated host development and enhanced the fertility of the offspring, and its removal repressed oogenesis in comparison to normal flies [51,52].
The nutrient content of diets has significant impacts on adult longevity [53,54]. For example, varying the concentrations of carbohydrate, protein and a phenolic compound (resveratrol) extended the lifespan of Drosophila melanogaster [55]. When flies forage in an environment with varying protein availability, they generally make compromises between fitness parameters based on life history tradeoffs: either they sacrifice reproduction and prolong their lifespan, or they maximize energy for reproduction at the cost of shortening their life expectancy. Irrespective of the compromises made along this process, the gut microbiome may come into play to facilitate peptide synthesis or protein metabolism to sustain host development and survival [21,49,50].
The presence of E. faecalis and K. oxytoca in adult diets extended the B. dorsalis lifespan in comparison with the positive control (Fig. 4 A & B). A possible reason could be the ability of these bacterial isolates to reduce biomarkers of physiological and oxidative stress and inflammation, which are considered the main causes of early death in flies [56]. Another mechanism of lifespan extension in B. dorsalis could be the indirect repression by the bacteria of genes involved in aging pathways [55]. A growing number of studies indicate the ability of intestinal probiotics to extend the fly's life; for example, the inoculation of adult diets with gut bacteria (such as Enterococcus phoeniculicola and members of Enterobacteriaceae) prolonged the lifespan of Ceratitis capitata, B. dorsalis and D. melanogaster [32,56,57]. However, the incorporation of P. dispersa and E. cloacae in the full diet resulted in a reduction of lifespan compared to the positive control (Fig. 4 A & B). These bacteria might have obstructed access to nutrients from the full diets by decreasing food palatability and/or host appetite and by promoting oxidative stress enzymes, as previously suggested for Citrobacter braakii, Klebsiella pneumoniae and Pseudomonas dispersa in B. minax [58].
Conclusion
The specific functions of gut bacteria in the foraging activity and fitness of B. dorsalis have remained elusive to date. Here, we evaluated the effects of four bacterial isolates on the foraging choice and fitness of B. dorsalis. Our results show that the axenic flies consistently chose diets inoculated with bacteria, to which they responded faster and of which they consumed more droplets than the normal (full) diet. Consequently, diets supplemented with Pantoea dispersa and Enterobacter cloacae enhanced female fecundity, while Enterococcus faecalis and Klebsiella oxytoca enriched diets extended the female life expectancy by far compared to the control. Although further studies are needed to elucidate the molecular mechanisms underlying the symbiont-host interactions (for example, comparative genomics and transcriptomics, microarray, RT-qPCR etc.), our results show, to some extent, the potential of E. faecalis, K. oxytoca, P. dispersa and E. cloacae to drive the foraging behavior and, in turn, improve the lifespan and reproduction of B. dorsalis. Since this fly can be controlled by the sterile insect technique (SIT), the intestinal probiotics evaluated in this study could be useful in mass rearing and longevity extension.
Fly rearing and maintenance
The wild-strain larvae of Bactrocera dorsalis were collected from infested orange fruits in the experimental orchard of Huazhong Agricultural University (30°4′N, 114°3′E) (Wuhan, Hubei Province, China) in September 2014 and were reared as previously described [18]. Briefly, the third instar larvae were allowed to exit the fruits, pupate and eclose in sterile sand under controlled laboratory conditions (12:12 light-dark photoperiod; temperature 25 ± 3 °C, and 67 ± 5% relative humidity). The resulting adults were maintained on an artificial diet consisting of tryptone (25 g/L), yeast extract (90 g/L) (Oxoid LP0021, RG24 8PW, UK), sucrose (120 g/L), agar powder (7.5 g/L), methyl p-hydroxybenzoate (4 g/L), cholesterol (2.3 g/L), choline chloride (1.8 g/L) and ascorbic acid (5.5 g/L) in 1 L of distilled water [18]. Water was provided ad libitum in cotton wool. The larval diet consisted of all the above ingredients mixed with 250 g/L of wheat bran and autoclaved before use [18]. Twenty (20) adult flies aged 8-10 days were removed from the laboratory culture and anesthetized at −20 °C for 5 min prior to dissection and isolation of individual guts. A culture-dependent technique was employed to isolate gut bacteria from the anesthetized flies, from which four isolates were later used in bioassays.
Production of experimental flies
Symbiotic flies
Symbiotic flies were collected from the established laboratory colony (as described above). A total of 690 newly emerged flies (1 day old) were fed sugar diet and water for seven days prior to bioassays (to starve them of protein), with the diet presented on cotton wool in 9 cm Petri dishes. One hundred and fifty flies were used for foraging tests (75 males and 75 females) (Experiment 1). Three hundred and sixty males collected from the lab culture were used to fertilize the 180 females assigned to fecundity and longevity assays (Experiment 2). For Experiment 1, the flies were divided by sex (75 males and 75 females) and held separately in 45 × 30 × 30 cm cages; each individual fly was then transferred to a 15 × 15 × 15 cm cage for bioassays. For Experiment 2, 180 females were held separately in 6 cages of 30 flies each. Then, 60 males (same age) from the lab culture were added to each cage (from the fourth day of sugar treatment) to mate with the experimental females. Mating couples (duration ≥ 10 min) were retrieved, held in a separate cage and later used for bioassays.
Axenic flies
Axenic flies were obtained from sterilized eggs (embryos) collected from our lab culture and grown on sterile diets as previously described [51,59]. Briefly, 300 collected eggs (aged 4 h) were individually surface sterilized twice in 70% ethanol and then rinsed twice in deionized distilled water (ddw) before being immersed in phosphate-buffered saline (PBS) solution for 5 min. The resulting embryos were dechorionated in 2.7% sodium hypochlorite solution for 2 min and then washed twice in sterile ddw before being transferred to autoclaved larval diet and allowed to develop for about nine days (to reach the third instar). The third instar larvae were allowed to pupate and eclose in sterile sand under lab conditions (12:12 L:D; 25 ± 3 °C, and 67 ± 5% RH). The axenic state of disinfected embryos was validated by PCR of the 16S rRNA gene on ten individual egg homogenates using universal primers (27F/1492R) and by a culturing technique in which ten individual egg homogenates were dilution-plated on agar medium. The PCR reactions were performed in a programmed thermal cycler under the following conditions: initial denaturation at 95 °C for 5 min, followed by 30 cycles of 94 °C for 1 min, annealing at 53 °C, 54 °C, 55 °C or 58 °C for 1 min 30 s, and 72 °C for 1 min, with a final extension at 72 °C for 5 min. Any disinfected sample yielding grown colonies or agarose bands was discarded, and the procedure was repeated until no bacterial colonies or DNA bands were seen. The allocation of axenic flies and the procedures in both experimental settings (1 and 2) were the same as for the symbiotic flies described in the previous section, the only difference being that axenic flies were used here.
Insect dissection
The 20 flies previously anesthetized (see the fly rearing section above) were individually washed in 70% ethanol for 2 min and rinsed three times in sterile distilled water before dissection. The dissection was carried out aseptically with two pairs of sterilized forceps on a sterilized glass slide spotted with 50 μL of sterile distilled water, under a stereomicroscope. The intact guts were individually transferred into separate Eppendorf tubes containing 750 μL TE buffer (10 mM Tris-Cl, pH 8; 1 mM EDTA, pH 8) and manually crushed with an adapted pestle. Homogenized gut suspensions were serially diluted (10⁻⁴–10⁻⁸), and 50 μL of each dilution was plated onto Luria Bertani (LB) agar medium (Table 1) and incubated at 37 °C for 24-48 h. After the incubation, representative bacterial colonies were randomly pooled based on the size, color, opacity and morphology of each colony. The pre-selected colonies were purified through repeated sub-culturing before being used for DNA extraction and sequencing, or preserved at −80 °C in 50% glycerol (v/v) for future use. The whole dissection procedure was performed in a laminar flow hood to avoid aerial contamination.
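The dilution-plating arithmetic above can be made concrete with a short worked example. The sketch below (Python; the colony counts are hypothetical, as the paper does not report plate counts) shows how counts from the 10⁻⁴–10⁻⁸ dilution plates and the 50 μL plating volume would convert back to an estimated bacterial load per gut homogenate.

```python
# Estimate bacterial load in the original gut homogenate from plate counts.
# Assumptions (not from the paper): the colony counts per plate are hypothetical,
# and only plates in the conventional countable range (30-300 colonies) are used.

PLATED_VOLUME_ML = 0.050      # 50 uL spread per LB plate
HOMOGENATE_VOLUME_ML = 0.750  # each gut was crushed in 750 uL TE buffer

# hypothetical counts: {dilution factor: colonies counted on that plate}
plate_counts = {1e-4: None, 1e-5: 2300, 1e-6: 212, 1e-7: 18, 1e-8: 1}

def cfu_per_ml(colonies: int, dilution: float, plated_ml: float) -> float:
    """CFU/mL of the undiluted homogenate = colonies / (dilution * volume plated)."""
    return colonies / (dilution * plated_ml)

for dilution, colonies in plate_counts.items():
    if colonies is None or not 30 <= colonies <= 300:
        continue  # too crowded to count, or too few colonies to be reliable
    per_ml = cfu_per_ml(colonies, dilution, PLATED_VOLUME_ML)
    per_gut = per_ml * HOMOGENATE_VOLUME_ML
    print(f"dilution {dilution:.0e}: {per_ml:.2e} CFU/mL -> {per_gut:.2e} CFU per gut")
```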
Bacterial DNA extraction
The DNA was extracted following the CTAB (cetyltrimethylammonium bromide) method. Briefly, a single purified colony was cultured in LB broth for ~7 h. 1.5 mL of the bacterial suspension was centrifuged at 10,000 rpm for 10 min and the recovered pellet was resuspended in 557 μL of TE buffer. 10 μL of lysozyme (5 mg/mL) was added to the suspension and incubated at 37 °C for 20 min. Then, 3 μL of proteinase K (20 mg/mL) and 30 μL of SDS (10%) were added and the mixture was incubated again at 37 °C for 40 min, after which 100 μL of NaCl (5 mol/L) and 80 μL of CTAB/NaCl were added and the solution was incubated at 65 °C for 10 min. Phenol/chloroform/isoamyl alcohol (25:24:1) was then added to the upper-phase solution, and the mixture was centrifuged at 13,400 g for 4 min. Finally, isopropyl alcohol was added to precipitate the DNA pellet, which was rinsed in 70% ethanol, re-suspended in TE buffer and kept at −20 °C until use for PCR analysis.
Polymerase chain reactions (PCR)
PCR reactions of the 16S rRNA gene were performed using the bacterial universal primers 27F: 5′-AGAGTTTGATCMTGGCTCAG-3′ and 1492R: 5′-GGTTACCTTGTTACGACTT-3′. A total reaction volume of 50 μL was prepared, containing 1 μL of DNA template, 1 μL of F/R primers, 5 μL of High Fidelity DNA buffer (×10), 4 μL of dNTPs (2.5 mM), 1 μL of Hifi DNA polymerase and 38 μL of deionized distilled water. The amplification was carried out in a programmed thermal cycler, and the resulting bacterial isolates are listed in Table 2.
Table 1. Ingredients and preparation of the standard Luria Bertani (LB) agar media (yeast extract 2.5 g; …; water 500 mL). Preparation: ingredients 1-4 were placed in an Erlenmeyer flask containing 250 mL of distilled water and mixed with a magnetic stirrer; distilled water was then added to a total volume of 500 mL and the solution transferred to a 1 L flask. The liquid was autoclaved for 20 min at 115 °C and allowed to cool to ~55 °C before pipetting 25 mL onto each Petri dish plate. Note: the preparation of LB broth follows the same procedure but without agar powder.
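Because the 50 μL reaction recipe above is fixed per sample, scaling it to a batch of isolates is straightforward arithmetic. The following sketch is a minimal helper (not part of the original protocol); the ~10% pipetting overage and the batch size of 20 are illustrative assumptions.

```python
# Scale the per-reaction 16S rRNA PCR recipe (50 uL total) to a master mix.
# The 10% pipetting overage and the batch size are illustrative assumptions.

PER_REACTION_UL = {
    "DNA template":                1.0,   # added per tube, not to the master mix
    "27F/1492R primers":           1.0,
    "High Fidelity buffer (x10)":  5.0,
    "dNTPs (2.5 mM)":              4.0,
    "Hifi DNA polymerase":         1.0,
    "ddH2O":                      38.0,
}

def master_mix(n_reactions: int, overage: float = 0.10) -> dict:
    """Return uL of each component for n_reactions, excluding the DNA template."""
    factor = n_reactions * (1.0 + overage)
    return {name: round(vol * factor, 1)
            for name, vol in PER_REACTION_UL.items()
            if name != "DNA template"}

mix = master_mix(n_reactions=20)  # e.g. one reaction per dissected fly gut
for component, volume in mix.items():
    print(f"{component:<28} {volume:>7.1f} uL")
print(f"{'master mix total':<28} {sum(mix.values()):>7.1f} uL  (+1 uL DNA per tube)")
```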
Preparation of experimental diets
Full and sugar diet
Two different diets were prepared: a full diet (F) containing all amino acids (essential and non-essential), sucrose, and minerals required for optimal maintenance and reproductive development of adult flies [29]; and a sugar diet consisting of 60% sucrose and minerals, used to maintain flies during the seven days of protein starvation (Table 3). The diets were composed and prepared as previously described [29] and filtered before use.
Probiotic diets
Four bacterial isolates (Pantoea dispersa, Enterobacter cloacae, Enterococcus faecalis and Klebsiella oxytoca) (Table 2) were separately grown in LB broth (Table 1) and centrifuged at 10,000 rpm for 5 min. The harvested pellets were washed twice and resuspended in sterile distilled water. The concentration of bacteria in the solvent was adjusted to an optical density (OD) of 0.5 at 550 nm wavelength using a spectrophotometer (Eppendorf AG, Germany) [32]. 500 μL of each bacterial suspension was added to 50 g of full diet (treatment groups), while 500 μL of sterile distilled water only was added to the full and sugar diets (positive and negative controls, respectively). For the foraging experiment, two Petri dishes were used, seeded respectively with 5 drops of 5 μL of the full and probiotic diets (randomly arranged within each dish). The flies assigned to the fecundity and longevity assays were fed 1 mL daily of each experimental diet, presented in Petri dishes seeded with autoclaved cotton wool.
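Adjusting each washed suspension to OD₅₅₀ = 0.5 is a simple proportional dilution. The sketch below is illustrative only: the measured OD values are hypothetical, and OD is assumed to scale approximately linearly with cell density over this range.

```python
# Dilute each washed bacterial suspension to the target OD550 of 0.5 using
# C1 * V1 = C2 * V2. OD is assumed to be roughly proportional to cell density
# in this range; the measured OD values below are hypothetical examples.

TARGET_OD = 0.5

def water_to_add(measured_od: float, current_volume_ml: float,
                 target_od: float = TARGET_OD) -> float:
    """Volume of sterile distilled water (mL) needed to reach target_od."""
    if measured_od <= target_od:
        raise ValueError("suspension is already at or below the target OD")
    final_volume = measured_od * current_volume_ml / target_od
    return final_volume - current_volume_ml

suspensions_od550 = {   # hypothetical readings for 5 mL of resuspended pellet
    "Pantoea dispersa":      1.8,
    "Enterobacter cloacae":  2.1,
    "Enterococcus faecalis": 1.2,
    "Klebsiella oxytoca":    0.9,
}

for isolate, od in suspensions_od550.items():
    add_ml = water_to_add(od, current_volume_ml=5.0)
    print(f"{isolate}: add {add_ml:.2f} mL sterile water to 5.0 mL of suspension")
```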
Experimental procedures
Experiment 1: foraging assays
Following the seven-day preparatory period during which flies were fed only sugar (as described above), each individual fly was transferred to a 15 × 15 × 15 cm cage and allowed to acclimatize for 20 min before a pair of Petri dishes containing the full diet and a bacteria-supplemented diet, at similar volumes and densities, was introduced simultaneously (Fig. 5a). Five treatment groups were set up, representing the four bacterial isolates and a control group (Fig. 5a). Female and male flies were tested separately and, to motivate their foraging responses, were all starved for 24 h before the experimental trials. Each observation event was replicated 15 times for males and females of both symbiotic and axenic flies, representing a total of 300 observation events (15 replicates × 2 symbiotic statuses × 2 sexes × 5 treatments). Each replicate consisted of observing a protein-starved individual male or female (symbiotic or axenic) for one hour and recording the latency (time from diet exposure to the initial landing), the number of flies which landed on a food drop, the choice of diet made and the number of drops consumed. All the data collected were then analyzed within and between treatments and symbiotic statuses.
Experiment 2: fecundity and longevity assays
A total of 720 newly emerged 1-day-old flies were preselected for this experiment: 360 females (180 symbiotic and 180 axenic) and 360 exclusively symbiotic males (60 males × 6 treatments). Symbiotic and axenic flies were separately held in six 45 × 30 × 30 cm cages of 90 flies each, containing 30 females and 60 males (a 1:2 ratio). Symbiotic and axenic flies were starved for 24 h to obtain homogeneous populations before being separately fed autoclaved sugar diet and water ad libitum (soaked in cotton wool presented in Petri dishes) for seven days. Thirty individual mating couples, symbiotic and axenic, were retrieved from each cage, individually held in 15 × 15 × 15 cm cages, and the mating duration was recorded; only matings lasting ≥ 10 min were considered, otherwise the couple was returned to its initial cage. Six treatment groups containing 60 flies each (30 couples) were set up, representing the different types of diet with which mated couples were maintained (Fig. 5b). On the seventh day, the sugar diet was replaced by either full diet (positive control) or probiotic diets (treatments with each of the four bacterial isolates), or maintained as the same sugar diet (negative control) (Fig. 5b). Agar medium inoculated with orange juice served as the oviposition substrate. Female fecundity was evaluated by counting the daily number of eggs laid by each female maintained in a 100 mL transparent plastic cup throughout her life expectancy. A yellow circular paraffin residue (Ø ≈ 6 cm, width ≈ 1 cm) was placed at the bottom of each cup and used as an oviposition substrate to collect eggs daily [29]. Female survival was assessed by daily cage inspection and counting of dead flies until all flies had died in all treatment groups. The data were pooled and analyzed within and between treatments and symbiotic statuses.
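The daily dead-fly counts collected here are the raw input for the survival analysis described in the next section (a Cox regression with diet type and symbiotic status as effects). A minimal sketch of how such data could be analyzed, using the lifelines package and a hypothetical data frame, is shown below.

```python
# Hedged sketch: analyzing daily survival records with the lifelines package.
# The data frame is hypothetical (one row per female: days survived, whether
# death was observed, diet group, symbiotic status); it is not the authors' data.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "days":      [34, 41, 55, 52, 47, 60, 38, 66, 45, 63],
    "dead":      [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],    # 1 = death observed (no censoring)
    "probiotic": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],     # 1 = probiotic diet, 0 = full diet
    "axenic":    [0, 1, 0, 1, 0, 0, 1, 0, 1, 0],     # symbiotic status
})

# Descriptive Kaplan-Meier estimate per diet group
km = KaplanMeierFitter()
for label, grp in df.groupby("probiotic"):
    km.fit(grp["days"], grp["dead"], label=f"probiotic={label}")
    print(f"probiotic={label}: median survival = {km.median_survival_time_} days")

# Cox proportional hazards model with diet type and symbiotic status as effects
cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="dead")
cph.print_summary()
```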
Statistical analysis
A one-way analysis of variance (ANOVA) was performed to separately test the effects of symbiotic status and diet type on foraging behavior (landing latency and diet consumption, for males and females), female fecundity and longevity. All data were tested for homogeneity of variances using Levene's test, and only the data on the effects of diet type on female fecundity were log-transformed. The non-parametric Kruskal-Wallis H test was used when ANOVA assumptions were violated (for example, for the survival data, F = 18.68, df = 5, 45, P < 0.0001). To assess the responses of the experimental flies (symbiotic and axenic), the marginal means of all flies which landed on either the full diet or a probiotic diet were analyzed using a chi-square test; Student's t-test was used to determine the statistical difference between the two experimental diets, and the corresponding latencies (irrespective of treatments) were pooled and analyzed by ANOVA. One-way ANOVA was used to analyze food consumption between males and females using OriginPro. To determine the importance of the factors that shape the foraging behavior of B. dorsalis, the overall response and latency variables were analyzed using an ordinary least squares regression model (IBM SPSS 20.0 software) with sex, symbiotic status, diet type and treatment (see Fig. 5a) as effects. Similarly, to determine the crucial factors that shape survival, the percentages of daily living flies were analyzed using Cox's regression model (SPSS 20.0 software) with diet type and symbiotic status as effects. The Pearson chi-square test was used to assess the association between the full and probiotic diets in modulating the foraging behavior, fecundity and longevity of experimental flies. Multiple comparisons between treatments were based on Tukey's post hoc tests at P ≤ 0.05. IBM SPSS software 20.0 (SPSS Inc., Chicago, IL, U.S.A.) was used to analyze all datasets, which are expressed as means with standard errors (SE), except the data on overall responses. OriginPro software version 8.5.1 was used to draw curves and graphs.
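The univariate workflow described above (Levene's test for homogeneity of variances, one-way ANOVA with a Kruskal-Wallis fallback, and Tukey's post hoc comparisons) can be reproduced outside SPSS. The following is a minimal Python equivalent using SciPy and statsmodels; the column names and example data are assumptions, not the authors' dataset.

```python
# Minimal re-implementation of the paper's univariate workflow in Python:
# Levene's test -> one-way ANOVA (or Kruskal-Wallis if variances are unequal)
# -> Tukey HSD post hoc. Column names and data below are illustrative assumptions.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_diets(df: pd.DataFrame, value_col: str, group_col: str = "diet"):
    groups = [g[value_col].values for _, g in df.groupby(group_col)]

    # Homogeneity of variances (the paper used Levene's test)
    lev_stat, lev_p = stats.levene(*groups)

    if lev_p >= 0.05:
        stat, p = stats.f_oneway(*groups)   # parametric one-way ANOVA
        test = "one-way ANOVA"
    else:
        stat, p = stats.kruskal(*groups)    # non-parametric fallback
        test = "Kruskal-Wallis H"
    print(f"Levene p={lev_p:.3f}; {test}: stat={stat:.2f}, p={p:.4f}")

    if p < 0.05:
        # Tukey's post hoc comparisons between diet treatments
        print(pairwise_tukeyhsd(df[value_col], df[group_col], alpha=0.05))

# Hypothetical fecundity table: one row per female, eggs laid over her lifetime
df = pd.DataFrame({
    "diet": ["full"] * 10 + ["P_dispersa"] * 10 + ["E_faecalis"] * 10,
    "eggs": [182, 190, 175, 201, 168, 177, 195, 188, 172, 180,
             240, 252, 231, 260, 244, 238, 249, 255, 228, 246,
              70,  64,  81,  59,  77,  73,  68,  62,  80,  66],
})
compare_diets(df, value_col="eggs")
```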
|
2019-09-17T00:40:27.286Z
|
2019-08-13T00:00:00.000
|
{
"year": 2019,
"sha1": "8e6cf5103e971644509d6b93e701438b9cf4d7ad",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/s12866-019-1607-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df26b1b7a73711d2af58b01dd35187fd15229981",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|