| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
| 255866741 | pes2o/s2orc | v3-fos-license |
Tetraspanin 7 promotes osteosarcoma cell invasion and metastasis by inducing EMT and activating the FAK-Src-Ras-ERK1/2 signaling pathway
Tetraspanins are members of the 4-transmembrane protein superfamily (TM4SF) that function by recruiting many cell surface receptors and signaling proteins into tetraspanin-enriched microdomains (TEMs), which play vital roles in the regulation of key cellular processes including adhesion, motility, and proliferation. Tetraspanin 7 (Tspan7) is a member of this superfamily with documented roles in hippocampal neurogenesis, synaptic transmission, and malignant transformation in certain tumor types. How Tspan7 influences the onset or progression of osteosarcoma (OS), however, remains to be defined. This study therefore aimed to explore the relationship between Tspan7 and the malignant progression of OS and its underlying mechanism of action. The levels of Tspan7 expression in human OS cell lines were evaluated via qRT-PCR and western blotting. The effect of Tspan7 on proliferation was examined using CCK-8 and colony formation assays, while the metastatic role of Tspan7 was assessed by functional assays both in vitro and in vivo. In addition, mass spectrometry and co-immunoprecipitation were performed to verify the interaction between Tspan7 and β1 integrin, and western blotting was used to explore the mechanisms by which Tspan7 drives OS progression. We found that Tspan7 is highly expressed in primary OS tumors and OS cell lines. Downregulation of Tspan7 significantly suppressed OS growth and metastasis and attenuated epithelial-mesenchymal transition (EMT), while its overexpression had the opposite effects in vitro. Furthermore, mice injected with Tspan7-depleted OS cells exhibited reduced pulmonary metastases compared with control mice in vivo. Additionally, we showed that Tspan7 interacts with β1 integrin to facilitate OS metastasis through activation of the integrin-mediated downstream FAK-Src-Ras-ERK1/2 signaling pathway. In summary, this study demonstrates for the first time that Tspan7 promotes OS metastasis by interacting with β1 integrin and activating the FAK-Src-Ras-ERK1/2 pathway, which could provide a rationale for a new therapeutic strategy for OS.
Background
Osteosarcoma (OS) is one of the most prevalent forms of bone malignancy, arising from immature bone stromal spindle cells and primarily affecting the epiphyseal regions of long bones in children and adolescents [1], with an estimated annual incidence of 3 per 1,000,000 [2]. Historically, patients diagnosed with OS before the 1970s exhibited a poor 5-year survival rate of just 15% owing to amputation being the only available treatment option [3], although these rates have risen to 60-70% with the advent of neoadjuvant chemotherapeutic drugs including cisplatin, doxorubicin, and methotrexate [4]. However, many patients still succumb to this disease, in large part owing to the rapidity with which it progresses and metastasizes, with the lungs being the most common site of distant OS tumor metastasis. Further refinement of surgical approaches and neoadjuvant chemotherapy regimens has not significantly improved the survival of OS patients. As such, there is a clear need for further studies exploring relevant molecular targets associated with the mechanistic basis for OS metastatic progression.
Tetraspanins are members of the four-transmembrane protein superfamily (TM4SF), of which 33 have been identified to date in mammals, exhibiting diverse tissue- and organ-specific expression patterns [5]. All tetraspanins exhibit a high degree of structural homology, including four transmembrane domains (TM1-4), a large extracellular loop (LEL), a small extracellular loop (SEL), and a small intracellular loop [6]. In functional contexts, tetraspanins bring together a variety of membrane and cytosolic proteins such as integrins, kinases, and receptors within cells in clusters known as tetraspanin-enriched microdomains (TEMs) that orchestrate downstream signaling to regulate proliferation, migration, differentiation, and adhesion in both physiological and pathological contexts such as tumor cell metastasis [7,8]. The epithelial-mesenchymal transition (EMT) is a key process whereby tumor cells acquire a more migratory and aggressive phenotype conducive to metastasis. Several studies have highlighted roles for tetraspanins in the induction or regulation of this EMT process. For example, one recent analysis demonstrated that non-small cell lung cancer (NSCLC) cells overexpressing tetraspanin 7 (Tspan7) exhibited enhanced migratory activity attributable to more robust EMT induction [9]. In contrast, CD82 (Tspan27) inhibits fibronectin-induced EMT progression by interacting with the α3β1/α5β1 integrins, which form the fibronectin receptor, to disrupt downstream focal adhesion kinase (FAK)/Src and ILK pathway activation [10]. Other members of the TM4SF family, including Tspan8, CD63, and CD151, have also been shown to regulate key steps in EMT in either an oncogenic or tumor-suppressive capacity in cancers such as melanoma, colorectal cancer, and renal cell carcinoma [11][12][13]. In light of their structural homologies and the above evidence, we hypothesized that Tspan7 may serve as an important EMT regulator in multiple cancer types. Integrins are α/β heterodimeric adhesion receptors capable of binding to specific molecules within the extracellular matrix (ECM) and on cell surfaces, particularly tetraspanins, and thereupon activating a range of signaling pathways, including the FAK pathway, to modulate cellular proliferation, survival, migration, and EMT induction [14][15][16]. Many different integrin heterodimers including α3β1, α4β1, α6β1, and αvβ3 have been shown to interact with TM4SF proteins including Tspan1, CD9, CD53, CD63, CD81, and CD82 in the context of oncogenesis [17][18][19]. Currently, it remains poorly understood whether Tspan7 is able to interact with specific integrin partners to regulate tumor progression.
Tspan7 (also known as TM4SF2, CD231, and A15) is encoded on chromosome Xp11.4 in humans and is expressed at high levels by non-hematopoietic cells, with pronounced expression evident in the hippocampus and cerebral cortex [20,21]. This protein plays a vital role in normal synaptic transmission and the development of hippocampal neurons [22], with Tspan7 mutations having been linked to intellectual disabilities including X-linked mental retardation [23]. Autoantibodies specific for Tspan7 can also aid in the identification of adults with type 1 diabetes mellitus, and may offer value for the immunotherapeutic treatment of certain latent forms of this autoimmune condition [24,25]. Furthermore, Tspan7 has been identified as a promising biomarker and functional regulatory protein associated with several cancers such as multiple myeloma [26], clear-cell renal cell carcinoma [27], head and neck squamous cell carcinoma [28], primary uterine leiomyosarcoma [29], and desmoplastic small round-cell tumors [30]. The complex roles played by this tetraspanin have been explored at length in certain oncogenic settings. For example, the overexpression of Tspan7 in liver cancer and multiple myeloma cells markedly enhances their metastatic potential [26,31], whereas it exerts anti-tumor effects in bladder tumors, suppressing cancer cell growth through the PTEN/PI3K/Akt signaling pathway [32]. Wang et al. [9] found that Tspan7 plays a pro-oncogenic role in NSCLC. As such, Tspan7 may play pro- or anti-tumorigenic roles in a context-dependent manner. Nevertheless, no studies have clearly explored the effect of Tspan7 on OS progression.
Consequently, we explored the expression of Tspan7 in OS cell lines and tumor tissue samples and found it to be elevated relative to corresponding controls. Knocking down Tspan7 was sufficient to suppress the proliferation of OS cells. We thus explored the functional impact of Tspan7 expression on the metastatic progression of OS tumors both in vitro and in vivo, revealing that it contributes to tumor cell migratory activity through both the induction of EMT and an interaction with β1 integrin that ultimately results in activation of the FAK-Src-Ras-ERK1/2 pathway. Together, these results offer new insights regarding the mechanistic role of Tspan7 in OS onset and progression, and they further highlight Tspan7 as a promising therapeutic target in the management of patients with OS.

Tspan7 knockdown was achieved by synthesizing two siRNA duplexes specific for this tetraspanin (siTspan7#1 and siTspan7#2) or a corresponding negative control (siNC) construct (Biolino Nucleic Acid Technology Co., Ltd). HOS cells were transfected with the appropriate siNC or siTspan7 constructs (100 nM) using Lipofectamine® RNAiMAX (Thermo Fisher Scientific, Inc.) based upon provided directions.
Microarray data collection and analysis
U2OS cells stably expressing the OE-Tspan7 or mock constructs were treated with the Ras inhibitor Salirasib (#HY-14754; 50 μM) for 48 h, and then either assessed with respect to their migratory and invasive activities or analyzed for protein expression via western blotting. A CCK-8 kit (Dojindo Molecular Technologies, Inc., China) was used to assess cellular viability. For colony formation assays, HOS cells were seeded in 6-well plates (5,000 cells/well) and treated with the appropriate siRNA constructs. Cells were then incubated for 10-14 days, after which colonies were fixed using methanol and stained using 0.1% crystal violet. Experiments were repeated in triplicate.
qRT-PCR
The MiniBEST Universal RNA Extraction kit (Takara, Dalian, China) and the First-Strand cDNA Synthesis Kit (Takara) were used to extract total RNA and obtain cDNA from cell lines. All qRT-PCR reactions were prepared with a SYBR Premix Ex Taq kit (Takara) and run under the following settings: 95 °C for 30 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 30 s, on a StepOnePlus RT-PCR instrument (Applied Biosystems, Shanghai, China). Relative gene expression was assessed via the 2^−ΔΔCq method, with GAPDH as a normalization control. All primers and siRNA sequences used herein are listed in Table 1.
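As an illustration of this quantification step, the following is a minimal sketch of the 2^−ΔΔCq calculation with GAPDH as the reference gene; the Cq values are invented placeholders and are not taken from the study.

```python
# Minimal sketch of relative expression via the 2^-ΔΔCq method, using GAPDH
# as the reference gene. All Cq values below are illustrative placeholders.
def relative_expression(cq_target_sample, cq_ref_sample,
                        cq_target_control, cq_ref_control):
    delta_cq_sample = cq_target_sample - cq_ref_sample      # ΔCq (e.g. siTspan7)
    delta_cq_control = cq_target_control - cq_ref_control   # ΔCq (e.g. siNC)
    delta_delta_cq = delta_cq_sample - delta_cq_control     # ΔΔCq
    return 2 ** (-delta_delta_cq)                           # fold change vs. control

# Example: Tspan7 Cq rises after knockdown, so the fold change drops below 1
print(relative_expression(26.1, 18.0, 23.9, 18.1))  # ≈ 0.20, i.e. roughly fivefold lower
```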
Plasmid transfection
Tspan7-specific short hairpin RNAs (shTspan7#1, shTspan7#2) and a corresponding negative control (shNC) were inserted into the LV-3 (pGLVH1/GFP + Puro) vector (GenePharma, Shanghai, China). Lipofectamine 3000 was then used to transfect these plasmids into HEK293T cells based on the provided directions. After 24 h, the supernatants containing lentiviral particles were collected and used to transduce HOS and Saos2 cells (40-70% confluent) in the presence of polybrene (8 μg/mL). Puromycin (2 mg/mL) was used to select for cells with stable Tspan7 knockdown. The FLAG-tagged Tspan7 expression vector was cloned and used to prepare lentiviral particles as above. U2OS cells were then transduced with the resultant lentiviral particles, and blasticidin (2 mg/mL) was used to select for stably transduced cells. Western blotting and qRT-PCR were used to confirm knockdown or overexpression of Tspan7. The shRNA sequences employed in this research are listed in Table 1.
RNA-sequencing
RNA-seq analyses were conducted with an Illumina HiSeq 2000 instrument (Illumina, Inc., USA). Briefly, total RNA isolated from HOS cells expressing shNC or shTspan7 was collected, and the integrity thereof was confirmed with an Agilent Bioanalyzer 2100 instrument (Agilent Technologies, Inc., USA). Following sequencing, genes that were significantly differentially expressed were identified (fold change ≥ 2 and P < 0.05), and pathway analyses of these differentially expressed genes (DEGs) were conducted using the Gene Ontology (GO) (http://www.geneontology.org) and Kyoto Encyclopedia of Genes and Genomes (KEGG) (https://www.genome.jp/kegg) database tools.
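For illustration only, the following is a minimal sketch of the DEG selection criteria described above (fold change ≥ 2 and P < 0.05), assuming a hypothetical pandas DataFrame with 'gene', 'fold_change' (shTspan7 vs. shNC), and 'p_value' columns; the values shown are invented, not the study's data.

```python
# Minimal sketch of the DEG filtering criteria (fold change >= 2 and P < 0.05).
import pandas as pd

def select_degs(df, fc_cutoff=2.0, p_cutoff=0.05):
    """Return up- and down-regulated genes passing the thresholds."""
    significant = df[df["p_value"] < p_cutoff]
    up = significant[significant["fold_change"] >= fc_cutoff]
    down = significant[significant["fold_change"] <= 1.0 / fc_cutoff]
    return up, down

# Illustrative values: FN1 appears down-regulated upon Tspan7 knockdown
df = pd.DataFrame({
    "gene": ["FN1", "GAPDH", "SNAI1"],
    "fold_change": [0.31, 1.05, 0.48],
    "p_value": [0.001, 0.80, 0.02],
})
up, down = select_degs(df)
print(down["gene"].tolist())  # ['FN1', 'SNAI1']
```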
Wound healing and transwell assays
Wound healing assays were employed to assess the migratory ability of OS cells. When cells were 90-100% confluent, a sterile micropipette tip was used to generate a scratch wound in the monolayer surface. Cells were cultured in serum-free media, with the wound imaged via light microscopy after 0, 24, and 48 h. ImageJ software was used to measure the wound area in 10 random fields of view, and the percentage of wound closure was calculated using the following formula: [1 − (wound area at 24 h or 48 h/wound area at 0 h)] × 100%.
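A minimal sketch of this wound-closure calculation is shown below; the areas are arbitrary pixel counts standing in for ImageJ measurements, not study data.

```python
# Minimal sketch of the wound-closure percentage defined above.
def wound_closure_percent(area_t, area_0):
    """Percentage of wound closure at time t relative to time 0."""
    return (1.0 - area_t / area_0) * 100.0

print(wound_closure_percent(area_t=42_000, area_0=100_000))  # 58.0 (% closed)
```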
Transwell filter inserts (#3464, Corning, USA) were additionally used to assess OS cell migration. Briefly, 2 × 10^4 cells in 150 μL of serum-free media were added to the upper chambers of Transwell inserts placed in 24-well plates containing 600 μL of media with 10% FBS. After incubation for 16-18 h, cells were rinsed with PBS, fixed using methanol, and stained with crystal violet for 1 h, and migratory cells were then counted after gently removing non-migrated cells from the inner surface of the inserts with a cotton swab. For invasion assays, 1 × 10^5 cells in 500 μL of serum-free media were added to the upper portion of Matrigel invasion chambers (#354480, BD, USA) that were placed into 24-well plates containing 700 μL of complete media per well. Following an 18-20 h incubation, a cotton swab was used to remove non-invasive cells, while the remaining cells were fixed using methanol and stained for 1 h with crystal violet. All migratory and invasive cells in six random fields of view per sample were counted with a light microscope, and all experiments were conducted in triplicate.
Western blotting and immunoprecipitation (IP)
RIPA buffer containing a protease inhibitor cocktail (Roche Applied Science, Penzberg, Germany) was used to lyse cells with sonication. The supernatants were collected after centrifugation, and protein concentrations were quantified using a BCA assay kit (Beyotime Biotechnology, Shanghai, China). Proteins were then separated via SDS-PAGE and transferred onto 0.45 μm PVDF membranes (Millipore, USA). Blots were blocked with 5% non-fat milk in TBST and then incubated overnight at 4 °C with antibodies specific for the following: Tspan7. Primary antibody dilution buffer (#P0256, Beyotime Biotechnology, China) was used to prepare all antibodies. For immunoprecipitation, U2OS cells expressing FLAG-tagged Tspan7 or control constructs were grown in 10 cm dishes. Anti-FLAG M2 agarose beads (#A2220, Sigma) were used to purify FLAG-tagged Tspan7 proteins. Precipitates were then rinsed thrice with PBS, boiled in sample loading buffer, separated via SDS-PAGE, and analyzed via western blotting as above. Proteins interacting with Tspan7 were identified by mass spectrometry.
Animal experiments
Female nude mice (5-6 weeks old) from the Qinglong Mountain Animal Breeding Center (Nanjing, China) were housed under controlled conditions (18-23 °C, 12 h light/dark cycle) with free access to food and water. Briefly, a model of OS cell lung metastasis was established by injecting mice with 2 × 10^6 HOS cells (shNC or shTspan7#2) in sterile PBS via the lateral tail vein. Four weeks later, lungs were resected, fixed using 4% formaldehyde for 24 h, and metastatic nodules visible on the lung surface were counted. Samples were then paraffin-embedded, cut into 5 μm sections, and subjected to hematoxylin and eosin (H&E) staining. Prepared sections were imaged using a microscope (magnification, 5× and 10×). Euthanasia was performed with an intravenous injection of 150 mg/kg pentobarbital sodium. The Animal Care Committee of the Third Affiliated Hospital of Soochow University approved this animal study, which was performed in a manner consistent with institutional care guidelines.
Statistical analysis
Data were analyzed using SPSS v21.0 (IBM Corp., NY, USA). The qRT-PCR data were presented as the mean ± standard error of the mean, and other data were presented as the mean ± standard deviation. Comparisons between two groups were made using Student's t-test, with P < 0.05 as the significance threshold.
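As a minimal sketch of such a two-group comparison (assuming SciPy rather than SPSS; the replicate values are invented, not study data):

```python
# Minimal sketch of an unpaired Student's t-test with significance at P < 0.05.
from scipy import stats

control = [1.00, 0.96, 1.04]      # illustrative normalized values (e.g. siNC)
knockdown = [0.42, 0.47, 0.39]    # illustrative normalized values (e.g. siTspan7)

t_stat, p_value = stats.ttest_ind(control, knockdown)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```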
Human OS tumor tissues and cell lines exhibit Tspan7 upregulation
To assess patterns of Tspan7 expression in OS, we evaluated the microarray data available through the GEO database, revealing Tspan7 to be significantly upregulated in OS tissues relative to osteoblasts (OBs) in the GSE14359 (P = 0.0031, fold change = 11.9436) and GSE12865 (P = 0.0048, fold change = 8.3701) datasets (Fig. 1A, B). According to the GSE33383 dataset, it was upregulated in OS tissues compared to normal mesenchymal stem cells (MSCs) (P = 0.0001, fold change = 2.6007) and OBs (P = 0.0079, fold change = 2.6563) (Fig. 1C). Consistent with these results, Tspan7 was also expressed at higher levels in OS cell lines as compared with MSCs (P = 0.0039, fold change = 1.4743) according to the GSE42353 dataset (Fig. 1D). To confirm these results, Tspan7 mRNA and protein expression were assessed via qRT-PCR and western blotting in Mg63, HOS, Saos2, and U2OS cell lines (Fig. 1E, F). This analysis revealed Tspan7 to be expressed at notably higher levels in HOS and Saos2 cells, whereas expression was slightly lower in U2OS cells. Further receiver operating characteristic (ROC) curve analyses based on the GSE33383 and GSE42352 datasets revealed Tspan7 to be a valuable biomarker capable of differentiating between healthy tissues and OS tumor tissues and cell lines (Fig. 1G). Tspan7 has previously been shown to play either pro- or anti-oncogenic roles in different cancer types. The above data, however, suggest that Tspan7 expression is enhanced in OS, highlighting a likely role for this tetraspanin in OS onset and/or progression.
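For illustration, a minimal sketch of such a ROC/AUC analysis using scikit-learn is given below; the labels and expression values are invented and are not taken from the GSE33383/GSE42352 data.

```python
# Minimal sketch of discriminating tumor from non-tumor samples by a single
# expression value and summarizing the separation with the area under the
# ROC curve (AUC). Values are illustrative placeholders.
from sklearn.metrics import roc_auc_score

labels = [1, 1, 1, 1, 0, 0, 0, 0]                      # 1 = OS, 0 = OB/MSC control
tspan7_expression = [8.2, 7.9, 6.5, 7.1, 4.2, 5.0, 3.8, 4.6]

auc = roc_auc_score(labels, tspan7_expression)
print(f"AUC = {auc:.2f}")   # values near 1 indicate good discrimination
```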
Tspan7 silencing impairs OS cell viability
Given that HOS cells exhibited higher levels of Tspan7 expression relative to other cell lines, we next knocked down this tetraspanin in HOS cells using two siRNA constructs (siTspan7#1 or siTspan7#2), confirming successful knockdown relative to siNC transfection via qRT-PCR (Fig. 2A). In a CCK-8 assay conducted at 72 h post-transfection, we found that the silencing of Tspan7 led to a significant decrease in HOS cell viability relative to siNC (Fig. 2B). Consistently, in a colony formation assay we observed significantly fewer and smaller colonies in the siTspan7 groups on day 12 post-transfection as compared to the control group (Fig. 2C). Together, these data suggested that the knockdown of Tspan7 was sufficient to impair OS cell proliferation, implying a potential role for this protein in the context of OS progression.
Downstream genes of Tspan7 were identified by RNA-seq analysis
To better understand the functional roles of Tspan7 as a regulator of OS development and progression, we next performed RNA-seq analysis aimed at identifying differentially expressed genes upon Tspan7 knockdown (shTspan7#1 and shTspan7#2) in HOS cells. Using standardized significance criteria (fold change ≥ 2 and P < 0.05), 764 Tspan7-regulated genes were identified by comparing these groups, including 348 downregulated and 416 upregulated genes (Fig. 2D, E). Of note, fibronectin 1 (FN1), a mesenchymal marker associated with EMT induction and cellular adhesion, was markedly reduced in the shTspan7 group relative to the shNC group, suggesting that Tspan7 may positively regulate FN1 (Table 2) and be involved in the EMT process. GO and KEGG pathway enrichment analyses were then conducted to understand the biological roles of Tspan7-related DEGs at a P < 0.05 cutoff threshold. Enriched GO terms indicated that Tspan7 was associated with the regulation of developmental growth (GO:0048638), positive regulation of cell adhesion (GO:0045785), regulation of the ERK1 and ERK2 cascade (GO:0070372), and the activation of protein kinase activity (GO:0032147) (Fig. 2F). KEGG pathway enrichment analyses similarly revealed these DEGs to be enriched in the MAPK signaling (hsa04010), oxytocin signaling (hsa04913), and ECM-receptor interaction (hsa04512) pathways (Fig. 2G).
Tspan7 regulates the migration and invasion of OS cells
Several other tetraspanins, including CD82 and CD151, have been identified as tumor suppressors or oncogenes in multiple cancer types owing to their ability to regulate tumor cell metastasis [33,34]. Tspan7-mediated enhancement of EMT induction was recently reported to play a central role in lung cancer metastasis [9]. Based on the GO and KEGG analyses, Tspan7 was also likely to be involved in OS cell metastasis. Thus, we sought to determine how Tspan7 controls the metastatic progression of OS. We stably knocked down Tspan7 in HOS and Saos2 cells using shRNA constructs (shTspan7#1 or shTspan7#2), confirming successful knockdown via qRT-PCR and western blotting (Fig. 3A), as well as fluorescence microscopy (Fig. 3B). In wound healing and Transwell assays, depleting Tspan7 was found to markedly impair the in vitro migration of HOS and Saos2 cells (Fig. 3C, D). Similarly, when Matrigel-coated invasion chambers were used to evaluate the invasive activity of OS cells, Tspan7 knockdown was confirmed to suppress such invasive activity (Fig. 3E). To evaluate the impact of Tspan7 overexpression on such migratory and invasive activities, U2OS cells with low endogenous Tspan7 expression were engineered to overexpress this tetraspanin, as assessed by qRT-PCR and western blotting, as well as fluorescence microscopy (Fig. 3F). Wound healing and Transwell assays then revealed that these OE-Tspan7 U2OS cells exhibited markedly enhanced migratory activity as compared to mock control cells (Fig. 3G, H), with a concomitant enhancement in invasiveness (Fig. 3I). These findings thus provided robust evidence that Tspan7 can promote the in vitro invasive and migratory potential of OS cells.
Tspan7 downregulation inhibits EMT induction and OS cell metastasis in vivo
Tspan7 was confirmed to be closely associated with cellular adhesion in our GO enrichment analyses. Therefore, we next sought to establish a link between Tspan7 and the EMT process in the context of OS cell metastasis. Western blotting showed that OE-Tspan7 cells exhibited significantly increased expression of FN1, an EMT biomarker and a candidate downstream target of Tspan7, relative to control cells (Fig. 4A, B). We further found that the knockdown of Tspan7 in HOS cells reduced the expression of the mesenchymal markers Vimentin and N-cadherin and of the EMT-related transcription factors Snai1 and Slug, consistent with impairment of the EMT process (Fig. 4A, B). Conversely, the overexpression of Tspan7 in U2OS cells was linked to increases in the expression of all four of these EMT-related proteins (Fig. 4A, B). Together, these results suggested that Tspan7 might promote OS cell metastasis via positive regulation of the EMT process.
To more fully clarify the link between Tspan7 and the metastatic progression of OS in vivo, an OS pulmonary metastasis model was established. Following the injection of tumor cells into the tail vein of mice, only a subset of animals ultimately developed pulmonary metastases, at a rate proportional to the overall invasive and metastatic ability of the injected cells. Four animals in the control group developed metastatic lung foci, whereas only two did in the shTspan7 group. There was a clear reduction in the total number of metastatic nodules in the shRNA group relative to the shNC group, and these results were further supported by histopathological analyses of lung sections following H&E staining (Fig. 4C-E). In addition, there was no obvious change in the body weight of the shTspan7-treated mice compared with the control group (Fig. 4F), whereas the lung weight of the shTspan7 group decreased slightly (Fig. 4G). Collectively, these data indicated that Tspan7 knockdown was sufficient to impair the in vivo metastatic ability of OS cells.

Fig. 3 Tspan7 contributes to OS cell metastasis in vitro. A Cells in which Tspan7 was stably knocked down using the indicated shRNA constructs exhibited reduced Tspan7 mRNA and protein levels as compared to the shNC group in qRT-PCR (left panel) and western blotting (right panel) assays. B Cells were imaged at 10× magnification, with successfully infected cells exhibiting GFP expression, demonstrating near 100% transduction efficiency. C, D The effect of Tspan7 downregulation on HOS and Saos2 cell migration was assessed by wound healing and Transwell assays. E Matrigel-coated invasion assays were used to assess the invasiveness of HOS and Saos2 cells stably expressing shTspan7 or shNC. F Fluorescence microscopy (upper panel), qRT-PCR (lower left panel), and western blotting (lower right panel) were used to gauge the degree of Tspan7 overexpression in OE-Tspan7 and mock U2OS cells. G Wound healing assays were conducted using U2OS cells stably transfected with OE-Tspan7 or mock constructs. H U2OS cells stably transfected with OE-Tspan7 or mock constructs were utilized in Transwell migration assays. I Matrigel-coated invasion assays were used to assess the invasiveness of U2OS cells stably expressing OE-Tspan7 or mock constructs. Representative images are shown, and all migratory and invasive cells were counted. Experiments were repeated in triplicate, and data are means ± SD. shRNA, short hairpin RNA; GFP, green fluorescent protein. *P < 0.05, **P < 0.01, ***P < 0.001, ****P < 0.0001. Student's t-test.
Tspan7 interacts with integrin β1 to activate FAK-Src-Ras-ERK1/2 pathway
In order to induce signal transduction and thereby regulate cell functionality, tetraspanins form plasma membrane complexes with specific integrins. Mass spectrometry analyses indicated that Tspan7 was likely to interact with integrin β1 (Additional file 1: Table S1). Subsequently, in co-IP assays, Tspan7 was confirmed to interact with integrin β1 in OS cells (Fig. 5A). Western blotting analyses further indicated that Tspan7 knockdown resulted in reduced integrin β1 protein levels in OS cells, whereas Tspan7 overexpression led to an increase in the expression of this integrin relative to the levels in control cells (Fig. 5B). These results demonstrated the interaction of Tspan7 and integrin β1, suggesting that Tspan7 may govern OS metastatic progression through signaling events downstream of integrin β1.
FAK is the central mediator of canonical integrin-dependent signaling activity. Following ligand binding, integrins recruit FAK to their β subunit, leading to FAK Tyr397 autophosphorylation and subsequent association with Src. This leads to further kinase activation and the induction of downstream signaling such as the Ras-MAPK pathway and other signaling mechanisms. Ras-MAPK signaling downstream of FAK is facilitated by the generation of SH2 binding sites upon FAK autophosphorylation [35,36]. Integrin β1 interactions with FAK have reportedly been linked to pancreatic tumor metastatic progression through Ras and ERK1/2 signaling pathway activation [37]. To confirm the relationship between Tspan7 and integrin β1 signaling in OS cells, we analyzed the protein levels of FAK, Src, Ras, and ERK1/2 in OS cell lines in which Tspan7 had been downregulated or upregulated (Fig. 5C, D). This experiment revealed that Tspan7 positively regulated the levels of p-FAK (Y397), p-FAK (Y925), p-Src, Ras, and p-ERK1/2. To confirm a role for Tspan7 in the activation of these signaling pathways, U2OS cells overexpressing Tspan7 were treated with the Ras inhibitor Salirasib. Such treatment markedly suppressed ERK1/2 phosphorylation, which occurs downstream of Ras, in these cells (Fig. 5E). In addition, Ras inhibition was sufficient to impair the enhanced invasive and migratory activities of Tspan7-overexpressing cells as compared with untreated cells (Fig. 5F). In light of these results, it appears likely that Tspan7 can promote OS cell metastasis by forming a complex with β1 integrin and thereby inducing FAK-Src-Ras-ERK1/2 pathway activation.
Discussion
This study is the first to our knowledge to have explored the mechanistic role of Tspan7 in OS. Although combinations of surgery and chemotherapy afford positive outcomes to a large proportion of patients with this form of cancer, outcomes still remain unsatisfactory for those with recurrent, unresectable, or metastatic disease. It is thus critical that the biological basis of OS be further elucidated to identify novel approaches to treat this debilitating disease. In the present study, we found that abrogation of Tspan7 resulted in impaired proliferation of OS tumor cells relative to control cells, suggesting that Tspan7 may function as a key regulator of OS development and/or progression.
Through RNA-seq analyses, we sought to develop a more mechanistic understanding of how Tspan7 shapes OS cell biology. GO term analyses indicated that the DEGs upon Tspan7 knockdown were enriched for biological processes such as fibroblast migration, protein kinase activity, ERK1/2 signaling, developmental growth, and cell adhesion, all of which are central to metastatic progression. In total, we identified 416 upregulated and 348 downregulated genes in Tspan7-knockdown OS cells, with FN1 being of particular interest in this context owing to its status as a mesenchymal marker of EMT associated with adhesion, migration, and invasion [38,39]. These results thus supported a key role of Tspan7 in OS metastasis and underscored a likely link between this protein and the EMT process. Metastatic progression, particularly to the lung, is one of the primary causes of poor OS patient outcomes [40]. Early-stage cancer patients exhibit a 5-year survival rate of > 50%, but this rate drops to < 20% when tumors undergo metastasis via the lymphatic or circulatory systems [41], with such metastases being linked to 90% of cancer-associated mortality. EMT is a dynamic and reversible process that facilitates the migration, dissemination, and metastatic growth of OS cells at distant tissue sites [42,43], and EMT inhibition can thus suppress tumor metastasis. EMT is characterized by the upregulation of mesenchymal markers including FN1, N-cadherin, and Vimentin together with a loss of the epithelial markers E-cadherin and β-catenin [44]. Several transcription factors, such as Slug and Snai1, have also been identified as key regulators of OS cell invasion and metastasis owing to their ability to control the EMT process [45,46]. The Wnt/β-catenin, Ras/ERK, TGF-β, and PI3K-AKT pathways have also been shown to shape EMT induction in oncogenic contexts [47][48][49][50]. To explore the functional importance of Tspan7 in this context, we knocked down or overexpressed this protein in OS cells and assessed their migration and invasion. Tspan7 knockdown in Saos2 and HOS cells markedly reduced the migratory and invasive activities of these cells, whereas its overexpression in U2OS cells enhanced these capabilities. We also found that FN1 was upregulated in U2OS cells overexpressing Tspan7. Intriguingly, Tspan7 knockdown reduced the expression of markers consistent with EMT induction, including N-cadherin, Vimentin, Snai1, and Slug, whereas the opposite effects were observed upon Tspan7 overexpression. Overall, these data suggested that Tspan7 can promote OS development by inducing EMT and thereby driving cellular migration and invasion. Changes in ECM interactions and adhesion molecule expression play central roles in EMT induction [51]. TM4SF proteins broadly regulate integrin-mediated cellular migration through interactions with integrins, adhesion receptors that control the interactions between cells and the ECM [52]. Lee et al. [10], for example, found that in prostate cancer cells, CD82 was able to interact with the α3β1/α5β1 integrins to suppress FAK/Src and integrin-linked kinase (ILK) signaling and thereby disrupt the EMT process. Conversely, the CD151-α3β1 complex can synergize with EGFR to enhance glioblastoma cell metastasis via the activation of FAK Y397 and GTPase signaling pathways [53]. For the present study, we thus primarily focused on key integrins that play a stimulatory role in the context of OS cell metastatic progression. Integrin β1 was shown by Li et al.
[54] to suppress the apoptotic death of OS cells and to enhance cell migration, while Jiang et al. [55] similarly found that integrin β1 upregulation was linked to OS cell invasiveness and EMT induction. Ren et al. [56] determined that integrin αvβ3 was linked to OS cell metastasis through the induction of ERK1/2 signaling, and the interactions of tetraspanins and β1 integrin have been shown to be key mediators of tumor cell metastasis [8]. β1 integrin recruitment to the ECM induces FAK Y397 autophosphorylation, which in turn facilitates Src-family kinase (SFK) recruitment and the subsequent phosphorylation of FAK at the Tyr407 and/or Tyr925 residues to trigger Ras-ERK signaling [57,58]. In this research, through mass spectrometric analysis, we identified a series of potential Tspan7-interacting proteins, among which integrin β1 was particularly noteworthy. Next, we performed a co-IP assay and determined that Tspan7 and β1 integrin directly interact with each other. Interestingly, we found that the reduction of Tspan7 reduced integrin β1 protein levels, and Tspan7 overexpression promoted β1 expression. Nevertheless, no change in the mRNA level of integrin β1 was observed according to our RNA-seq analysis. The enhanced β1 integrin protein expression without alteration at the mRNA level suggests that Tspan7 may upregulate β1 integrin at the post-transcriptional level. Although the mechanisms by which Tspan7 modulates β1 integrin expression remain unknown, our data indicate that Tspan7 contributes to OS development through integrin-mediated downstream signaling. Through a series of western blotting assays following Tspan7 transfection, we were able to detect a role for this tetraspanin in the FAK-Src-Ras-ERK1/2 signaling pathway, consistent with our RNA-seq data indicating the enrichment of Tspan7-related DEGs in the regulation of the ERK1/2 cascade (GO:0070372). Lastly, we found that the Ras inhibitor Salirasib was able to inhibit Tspan7 overexpression-induced ERK1/2 phosphorylation, migration, and invasion in OS cells, implying a role for Tspan7 as a regulator of OS cell metastasis. Of note, because Ras-ERK signaling is a vital regulator of EMT [59], we hypothesized that the Tspan7-regulated EMT process is ERK-dependent. As depicted in Fig. 6, Tspan7 may contribute to the metastatic progression of OS cells through interacting with β1 integrin and subsequently activating the FAK-Src-Ras-ERK1/2 pathway, thus resulting in EMT induction. Nevertheless, the mechanisms whereby Tspan7 regulates the EMT process in OS cells require further exploration.

Fig. 5 Tspan7 complexes with β1 integrin to promote FAK-Src-Ras-ERK1/2 pathway activation. A Co-immunoprecipitation assays revealed interactions between Tspan7 and integrin β1. B Integrin β1 levels were assessed in HOS-shTspan7 and U2OS-Tspan7 cells via western blotting. C Proteins associated with the FAK-Src-Ras-ERK1/2 pathway (FAK, pFAK Y397, pFAK Y925, Ras, ERK1/2, and pERK1/2) were assessed by western blotting, with β-actin as a loading control. D Quantification of the western blotting results presented in C. E Signaling downstream of Ras in Tspan7-overexpressing U2OS cells treated with Salirasib (50 μM) was assessed via western blotting. F U2OS-Tspan7 cells treated with or without Salirasib (50 μM) were assessed by migration and invasion assays. Representative cell images are shown, and cells were counted. All experiments were repeated two or three times. *P < 0.05, **P < 0.01, ***P < 0.001.
Conclusion
In summary, our data provide new evidence indicating that Tspan7 serves as a key oncogenic factor that drives OS cell proliferation, EMT induction, and metastasis in vitro and in vivo. Mechanistically, we find that Tspan7 is able to directly interact with β1 integrin to augment FAK-Src-Ras-ERK1/2 signaling within OS cells and thereby drive enhanced cell migration and invasion. As such, Tspan7 may represent a promising therapeutic target amenable to pharmacological intervention aimed at improving OS patient outcomes.
| 2023-01-17T14:28:12.018Z | 2022-05-06T00:00:00.000 |
{
"year": 2022,
"sha1": "1d44e756d87c5448cb1ef19de2811dc57ac99281",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12935-022-02591-1",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "1d44e756d87c5448cb1ef19de2811dc57ac99281",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
}
| 244459953 | pes2o/s2orc | v3-fos-license |
A comparison of systematic reviews and guideline-based systematic reviews in medical studies
The question of how citation impact relates to academic quality has accompanied bibliometric research for decades. Although experts have employed more complex conceptions of research quality for responsible evaluation, detailed analyses of how impact relates to dimensions such as methodological rigor are lacking. The increasing number of formal guidelines for biomedical research, however, offers not only the potential to understand the social dynamics of standardization, but also their relations to scientific rewards. Using data from Web of Science and PubMed, this study focuses on systematic reviews from biomedicine and compares this genre with those systematic reviews that applied the PRISMA reporting standard. Besides providing an overview of growth and location, it was found that the latter, more standardized type of systematic review accumulates more citations. It is argued that instead of reinforcing the traditional conception that higher impact represents higher quality, highly prolific authors could be more inclined to develop and apply new standards than more average researchers. In addition, research evaluation would benefit from a more nuanced conception of scientific output which respects the intellectual role of various document types.
Introduction
In contemporary science policies, much emphasis is put on the evaluation of the producers of biomedical knowledge. The most recent trend in science evaluation, the incorporation of seemingly objective tools to measure publication output or citation impact, gave rise to the idea that the former represents productivity, while the latter indicates quality, professionalism, or excellence in biomedical research (de Rijcke et al., 2016; Jappe, 2020; Petersohn et al., 2020). Biomedical researchers and other scientists have reacted to such incentives, not only by dividing their work into more publishable units, but also by incorporating such evaluative categories deeply into their epistemic practices (Müller & de Rijcke, 2017). As a result, a growing number of publications, often lacking novelty or with little to add, forced leading experts to proclaim a crisis in biomedical research. Besides the increasing efforts of practitioners and researchers to cope with the growing stock of information, experts criticized the decreasing quality of biomedical research, such that findings are less reliable, less credible or just "false" (Ioannidis, 2005, 696).
Ironically, the biomedical research community in particular has pursued a century of improving the credibility of its research outputs. Facing the outputs of the "clinical trial industry" due to the 1960s FDA regulation (Meldrum, 2000, 755), experts demanded more systematic assessments of medical knowledge in order to improve the health and life of patients (Chalmers et al., 2002). As a result, medical disciplines increasingly employed meta-analyses and systematic reviews in order to combine multiple studies into an overall and reliable result that can be considered as 'evidence' for applicatory contexts and the development of medical treatment guidelines (McKibbon, 1998; Moreira, 2005). In addition, systematic reviews became not only a more common genre in academic periodicals, but also the main focus of multinational organizations that followed the ideal of evidence-based practice, such as the Cochrane Collaboration or the Campbell Collaboration (Simons & Schniedermann, 2021).
Method experts constantly improve the recipes for systematic reviews in order to cope with newly discovered varieties of bias and preserve the epistemic credibility of this genre. For example, in a more recent trend, demands for more transparent reporting of reviews were voiced and manifested in new standards, most notably the "Preferred Reporting Items for Systematic Reviews and Meta-Analyses" (Moher et al., 2009). Because PRISMA has been implemented in many editorial policies and is highly cited by systematic reviews, its developers continually monitor actual compliance in order to further improve the guideline. For example, in their recent evaluation of PRISMA, Matthew Page and David Moher listed 57 different studies that attempt to analyze to what extent systematic reviews comply with the guideline's rules (Page & Moher, 2017). They conclude that, although there is room for improvement, the guideline has already positively affected the reporting quality of systematic reviews and meta-analyses.
Systematic reviews and their PRISMA-improved counterparts are often placed at the top of the hierarchy of biomedical evidence (Goldenberg, 2009). As such, this genre is situated at the intersection of what is perceived as a high level of epistemic quality and what is actually measured by the wide palette of bibliometric indicators. For example, because review articles generally accumulate higher citation counts than reports of primary research, an observation long known to bibliometricians, they are suspected to be used strategically by some academic journals and individual researchers (Blümel & Schniedermann, 2020). Some bibliometric assessments found that systematic reviews and meta-analyses inherit this characteristic and are also used strategically by impact-sensitive actors (Barrios et al., 2013; Colebunders et al., 2014; Ioannidis, 2016; Patsopoulos et al., 2005; Royle et al., 2013). But allegations of strategic behavior in academia are hard to prove solely on the basis of bibliometric measurements. Nevertheless, bibliometric assessments can point towards potential misuses and inform more profound investigations (Krell, 2014; Wyatt et al., 2017).
Several studies have elaborated the bibliographic characteristics of systematic reviews, often with a focus on specific domains. Usually, medical researchers and method developers investigate the "epidemiology of [..] systematic reviews" by focusing on characteristics such as language, inclusion criteria, or whether a protocol was pre-registered or published. Other observations concern the high growth rates and publication counts of systematic reviews and the resulting problems (Ioannidis, 2016; Bastian, 2010). Other analyses focus on national affiliations, number of authors, funding sources, or citation impact. For example, a study by Alabousi et al. found a positive correlation between the Journal Impact Factor and systematic reviews published in the field of medical imaging (Alabousi et al., 2019).
Studies that take methodological standards such as PRISMA into account often focus on specific sub-fields, or use bibliometric indicators that are inappropriate for the assessment of individual articles. For example, one study found a positive correlation between the quality of reporting of systematic reviews and absolute citation rates in radiology (Pol et al., 2015). A study by Nascimento et al. found a positive correlation between the endorsement of PRISMA and the Journal Impact Factor for systematic reviews about low back pain (Nascimento et al., 2020). Similarly, Mackinnon et al. found a weak correlation between reporting quality and the 5-Year Journal Impact Factor in dementia biomarker studies (Mackinnon et al., 2018). Molléri et al. conclude that the rigor of software engineering studies is related to normalized citations, but argue that, at the same time, this may come at the cost of relevance (Molléri et al., 2018).
In order to provide a broader overview of the characteristics and dissemination of systematic reviews in biomedicine, this study focuses especially on the role and impact of (reporting) standards such as PRISMA on systematic reviewing. Although standardization is a common phenomenon in medical research and practice, the willingness and speed in applying new standards differ by medical subfield (cf. Timmermans & Epstein, 2010). For this reason, this study highlights the fields, nations, and topics that usually deal with systematic reviews and sheds some light on their dynamics in effectively implementing PRISMA. By using different methods to identify systematic reviews and separating them from those that use the PRISMA guideline, this study extends existing research by understanding PRISMA-based systematic reviews as standardized and highly appraised forms of scientific output. Therefore, by analyzing impact measures, this study attempts not only to provide scores irrespective of sub-discipline and publication year, but also to contrast them with standardized and more rigorous forms of systematic reviews.
Corpus construction
To compare the bibliometric characteristics of systematic reviews and PRISMA-based systematic reviews, corpora were constructed by combining bibliographic data from PubMed and Web of Science. Document types in Web of Science lack accuracy and necessary detail, especially in the case of review articles (Donner, 2017). In contrast, PubMed contains several document types related to secondary research and introduced the "Systematic Review" type in 2019 because of this genre's importance for medical decision making (NLM, 2018). In a first step, publication types and MeSH keywords were retrieved for 31 million PubMed items via the Entrez API during March and April 2020 and stored in a PostgreSQL database. In contrast to Web of Science, each of these records is linked to 1.73 different document types on average, with up to 10 assignments. The reason for this is a much richer classification system in PubMed, consisting of 80 different types which also incorporate methodological information such as 'Randomized Controlled Trial' and funding information such as 'Research Support, Non-U.S. Gov't' (Knecht & Marcetich, 2005).
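The retrieval step can be illustrated with a minimal sketch using Biopython's Entrez module (an assumption; the study does not specify its client library), fetching the publication types assigned to a set of PMIDs. The e-mail address and PMID below are placeholders, and no batching or error handling is shown.

```python
# Minimal sketch of retrieving PubMed publication types via the Entrez API
# with Biopython. Not the study's actual pipeline; values are placeholders.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

def fetch_publication_types(pmids):
    """Return {pmid: [publication type labels]} for a list of PubMed IDs."""
    handle = Entrez.efetch(db="pubmed", id=",".join(pmids), retmode="xml")
    records = Entrez.read(handle)
    handle.close()
    result = {}
    for article in records["PubmedArticle"]:
        citation = article["MedlineCitation"]
        pmid = str(citation["PMID"])
        ptypes = citation["Article"]["PublicationTypeList"]
        result[pmid] = [str(pt) for pt in ptypes]
    return result

print(fetch_publication_types(["12345678"]))  # placeholder PMID
```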
After matching the retrieved records with the in-house version of the Web of Science at the German Kompetenzzentrum Bibliometrie and restricting the publication years from 1991 to 2016 in order to account for changing indexing policies at MEDLINE (NLM, 2017), the resulting set contained 10 million records.
In comparison with the Web of Science classification, which consists of "Article", "Review", "Editorial", "Letter", and "News", precision and recall were calculated (Baeza-Yates and Ribeiro-Neto, 2011). While sufficiently high for articles (precision = 0.94/recall = 0.95), they are rather low for reviews (0.85/0.52), which means that while 85% of items labeled "Review" in Web of Science carry the same type in PubMed (true positives), only 52% of items labeled "Review" in PubMed are also labeled "Review" in Web of Science, leaving the remainder as false negatives. These results confirm earlier analyses (Donner, 2017), and show the huge differences between the document type classification systems of Web of Science and PubMed (see also Harzing, 2013).
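A minimal sketch of this precision/recall comparison, treating the PubMed label as the reference set and the Web of Science label as the retrieved set; the ID sets are tiny illustrative placeholders, not the matched corpus.

```python
# Minimal sketch of precision/recall between two document-type labelings.
def precision_recall(wos_reviews, pubmed_reviews):
    true_positives = wos_reviews & pubmed_reviews
    precision = len(true_positives) / len(wos_reviews)
    recall = len(true_positives) / len(pubmed_reviews)
    return precision, recall

wos = {"r1", "r2", "r3", "r4"}           # items labeled "Review" in Web of Science
pubmed = {"r1", "r2", "r3", "r5", "r6"}  # items labeled "Review" in PubMed
print(precision_recall(wos, pubmed))     # (0.75, 0.6)
```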
To build the guideline-based systematic review set, all items that cite PRISMA or one of its descendants have been separated. PRISMA documents have been identified via DOI and title searches, and the relevant DOIs have been downloaded from the PRISMA website (www.prisma-statement.org) as well as the EQUATOR network website (www.equator-network.org).
Statistical analysis
For the comparison of systematic reviews and PRISMA-based systematic reviews, several variables have been calculated. For the annual and compound growth rates, basic publication outputs were used. All variations other than document types are based on metadata from the Web of Science database. As such, field variations are based on extended subject categories. National variations are based on the fractional assignment of items to the authors' affiliations, taking all affiliations into account. In addition, the country list has been restricted to those countries that published at least 500 reviews, or that have at least 10% systematic reviews or 2% PRISMA-based systematic reviews over the whole timespan. Furthermore, countries that belong to the Commonwealth have been separated (cf. Encyclopaedia Britannica, 2021), due to their stronger commitment to evidence-based medicine (Groneberg et al., 2019).
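For illustration, a minimal sketch of the annual and compound growth rate calculations, assuming simple publication counts per year; the counts are invented, not the study's data.

```python
# Minimal sketch of year-over-year and compound annual growth rates (CAGR)
# from raw publication counts. The example counts are illustrative.
def annual_growth_rates(counts_by_year):
    """Year-over-year growth rates in percent, keyed by the later year."""
    years = sorted(counts_by_year)
    return {
        y1: 100.0 * (counts_by_year[y1] - counts_by_year[y0]) / counts_by_year[y0]
        for y0, y1 in zip(years, years[1:])
    }

def compound_annual_growth_rate(counts_by_year):
    """CAGR in percent over the whole timespan."""
    years = sorted(counts_by_year)
    first, last = counts_by_year[years[0]], counts_by_year[years[-1]]
    n = years[-1] - years[0]
    return 100.0 * ((last / first) ** (1.0 / n) - 1.0)

example = {2009: 120, 2010: 210, 2011: 380, 2012: 700}  # illustrative counts
print(annual_growth_rates(example))
print(round(compound_annual_growth_rate(example), 1))   # about 80% per year
```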
For the impact assessment, all publications in the set have been compared by the major document types. To employ a 3-year citation window, citation analysis is restricted to the publication years between 2009 and 2015. For comparison, the mean citation impact of each group was calculated in terms of absolute citations, mean normalized citation scores (MNCS), and mean cumulative percentile ranks (CPIN). Both synthetic indicators, MNCS and CPIN, are based on normalization by Web of Science's subject categories.
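A minimal sketch of the MNCS calculation, under the assumption of a flat list of records carrying field, publication year, and 3-year citation counts; field/year averages serve as the normalization baseline.

```python
# Minimal sketch of mean normalized citation scores (MNCS): each paper's
# citations are divided by the average citations of all papers in the same
# subject category and publication year, then averaged over the set of interest.
from collections import defaultdict
from statistics import mean

def mncs(papers, selection):
    """papers: list of dicts with 'id', 'field', 'year', 'citations'.
    selection: set of paper ids whose MNCS is wanted."""
    baseline = defaultdict(list)
    for p in papers:
        baseline[(p["field"], p["year"])].append(p["citations"])
    expected = {key: mean(vals) for key, vals in baseline.items()}
    scores = [
        p["citations"] / expected[(p["field"], p["year"])]
        for p in papers
        if p["id"] in selection and expected[(p["field"], p["year"])] > 0
    ]
    return mean(scores)
```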
In contrast to traditional mean normalized citation scores, percentile measures are more robust against high-impact outliers, which skew the citation distributions of all fields and are a common issue in bibliometric data (Bornmann, 2013). At the same time, however, they reduce the data's level of measurement to an ordinal ranking scale, which prohibits any conclusion about the magnitude of differences. This does not, however, prohibit conclusions about mean differences between the document types.
Generally, percentile ranks reflect the citation impact group to which a publication belongs. Groups can be delineated at will; for example, a common set is the top 10% percentile, meaning that these papers are cited more often than 90% of the rest within their field and publication year (Rousseau, 2012). But citation scores are discrete values, which complicates pre-defined rank classes because the percentages are based on the number of publications while the borders are based on the citation values. For example, if ten publications are cited (0,0,0,0,1,2,2,3,3,3) times, the top ten percent of these would amount to one third of the whole set. In order to preserve pre-defined percentile ranking sets, fractional assignments have been introduced which, in turn, give up the binary nature of the ranks (Waltman and Schreiber 2012).
For an optimal percentile measure that can be further aggregated and is also interpretable, Lutz Bornmann and Richard Williams suggested excluding cumulative percentiles, CPEX (2), and including cumulative percentiles, CPIN (1) (Bornmann & Williams, 2020). These are calculated for each combination of classification and year of publication. For these sets, all possible citation scores j, k ∈ I are based on a three-year citation window (I), with c_j denoting the number of papers with j citations and t the total number of papers:

CPIN(k) = 100 · (Σ_{j ≤ k} c_j) / t    (1)

CPEX(k) = 100 · (Σ_{j < k} c_j) / t    (2)
After CPIN or CPEX has been calculated, each focal paper can be assigned such values based on field classification, publication year, and citation window. Although both represent cumulative percentages, their interpretation differs. CPEX represents the share of other publications that are cited less than the focal one within the field and year of publication. CPEX always has zero as its lowest value because no publications are cited less than zero times. CPIN represents the share of other publications that are cited equally or less than the focal one. CPIN always has 100 as its highest value because no publication is cited more than the most cited ones.
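Following the definitions above, a minimal sketch that assigns CPEX and CPIN values within one field/year set, using the ten-paper example from the text.

```python
# Minimal sketch of including (CPIN) and excluding (CPEX) cumulative percentiles
# for a single field/year set of citation counts.
from collections import Counter

def cumulative_percentiles(citation_counts):
    """Return {citation_value: (CPEX, CPIN)} for one field/year set."""
    freq = Counter(citation_counts)       # c_j: number of papers with j citations
    t = len(citation_counts)              # total number of papers
    result = {}
    for k in sorted(freq):
        below = sum(c for j, c in freq.items() if j < k)
        at_or_below = below + freq[k]
        result[k] = (100.0 * below / t, 100.0 * at_or_below / t)
    return result

example = [0, 0, 0, 0, 1, 2, 2, 3, 3, 3]
for k, (cpex, cpin) in cumulative_percentiles(example).items():
    print(f"{k} citations: CPEX = {cpex:.0f}, CPIN = {cpin:.0f}")
# The 3-citation papers get CPEX = 70 and CPIN = 100, since no paper is cited more.
```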
Results and discussion
The growth and dissemination of systematic reviews

As displayed in Table 1, the analyzed dataset consists of 9.9 million total items. While most of the items are primary articles (8.1 million), substantial parts of the scientific literature are secondary research, or review articles. These consist mostly of non-systematic reviews (950 k), systematic reviews (95 k), as well as PRISMA-based systematic reviews (11 k). Comparing the timespans before (2002-2009) and after (2009-2016) the publication of PRISMA, one can witness a decrease of the publication inflation from 52% to 39% overall growth. Besides the demise of genres such as "News" items, whose annual output shrinks by 2.3%, especially the growth rates of primary research articles slowly decrease from 5.9% annually before, to 4.8% after, the publication of PRISMA.
In contrast, the set shows a growing output of research syntheses. Besides a slight increase in the annual growth rates of reviews (from 4.5 to 4.7%), especially systematic reviews and PRISMA-based systematic reviews have increasing outputs. While systematic reviews increased by over a fifth every year before 2009 (22%) and by over 14% annually since then, PRISMA-based systematic reviews almost doubled (82%) every year since the publication of the guideline, leading to an overall growth of 2675% until 2016. In addition, the overall growth of systematic reviews that do not comply with PRISMA has halved since its inception (from 300 to 148%).

Systematic reviews and PRISMA-based systematic reviews play important intellectual roles for various medical fields and communities. Figure 1 shows Web of Science's subject categories that are commonly assigned to systematic reviews or PRISMA-based systematic reviews. "Surgery", with rather moderate rates of systematic reviews (18%) and PRISMA-based reviews (2.4%), accounts for the highest total number of PRISMA-based reviews (756 assignments).
Whenever "Business & Economics" was assigned to a publication in PubMed, it was a systematic review in 42% of the cases which is the top value in the dataset. High rates are also found in "Information Science & Library Science" (38%) and "Medical Informatics" (37%). On the other site, of all assignations of "Chemistry" only 0.5% go to systematic reviews which is the lowest value. Others are "Biophysics" (1.1%) or "Cell Biology" (1.5%).
While "Evolutionary Biology" has a rather low rate of systematic reviews (7.6%), it even was never assigned to PRISMA-based systematic reviews, representing the lowest value in this category. On the other side, "Surgery", a field in which the guideline was published, tops this variable with 756 assignations and a rate of 2.4% among all research syntheses, topping even broader categories like "General & Internal Medicine" (688 assignments, 27% rate) or "Science & Technology-Other Topics" (681 assignments, 23% rate). Not surprisingly, fields in which the guidelines have been published offer higher rates in both types of assignments.
How the field assignments to PRISMA-based systematic reviews changed over time is shown in Fig. 2. While fields such as "Surgery" raised their share from 2.3% in 2010 to 7.3% in 2015, fields like "Obstetrics & Gynecology" decreased from 14% of all assignments to 6.6% in 2015 but still mark the second rank. Although the ranks change over the years, Fig. 3 shows that the differences in portions and counts between the fields are much smaller compared to what can be observed for systematic reviews in general. The data visualizes some popular narratives about the dissemination and occurrence of systematic reviews and their standardized counterparts. Generally, the differences show that although the guideline developers attempted to make PRISMA applicable to a broad range of fields and academic cultures, the actual levels of implementation and compliance differ substantially. Accepting a new standard means that established practices have to change. Although standardization and the change of standards is a rather regular phenomenon in biomedicine (Whitley, 2000), such attempts are sometimes accompanied by criticism and reluctance (Solomon, 2015; Timmermans & Berg, 2003; cf. Hunt, 1999). For example, communities with a strong tradition of qualitative research turned out to be rather critical of systematic reviewing (Cohen & Crabtree, 2008).
One reason for strong objections is the more procedural and prescriptive way in which systematic reviews or meta-analyses create medical evidence (Stegenga, 2011). Besides the objection that this employs a rather narrow perspective on scientific methodology and neglects the plurality in medical research (Berkwits, 1998), a proper execution of systematic review methods requires substantial skills in information retrieval, data cleaning, or statistics (Moreira, 2007). Not surprisingly, the data shows that when disciplines such as "Information Science & Library Science" or "Medical Informatics" have been assigned to secondary articles, it was often a systematic review or PRISMA-based systematic review.
Besides objections against systematic reviews in some fields, this genre has developed a substantial footprint in fields that are committed to evidence-based medicine. For example, assignments of "Public, Environmental & Occupational Health" and "Health Care Sciences & Services" regularly go to systematic reviews (30% and 33%, respectively) or PRISMA-based systematic reviews (3.5% and 4.2%, respectively), accumulating totals of 499 and 323 assignments, respectively, to this genre. These values correspond to the idea that systematic reviews are important for medical decision making, the development of clinical guidelines or their overall role for public health assessments, contexts in which systematic reviews proved to be fruitful in the past (Moreira, 2005; Hunt, 1999). Similarly, systematic reviews are suggested to reach final conclusions on certain topics by aggregating all available evidence. While this function is crucial for disciplines such as surgery, where a multitude of treatments have to be appraised (Maheshwari & Maheshwari, 2012), scientific disputes are not always solvable in this way (Vrieze, 2018).
Fig. 3 (caption): Overall number of research syntheses, portions of systematic reviews and PRISMA-based systematic reviews by country, and national association with the Commonwealth. Based on fractional assignment of authors' affiliations. Labels are restricted to countries that published at least 500 reviews, have at least 10% systematic reviews or 2% PRISMA-based reviews, respectively.
Systematic reviews by countries
Although the biomedical sciences are an international community, national variations in the production of research syntheses are observable, as Fig. 3 suggests. While top producers like the United States (195,164 items), the United Kingdom (50,033 items), Germany (33,166 items), and China (29,940 items) publish the majority of research syntheses, they differ in their commitment to systematic reviews and PRISMA-based systematic reviews. While in the U.S. and Germany less than a tenth of published reviews are systematic reviews (6.8% and 7%), this genre accounts for greater portions of reviews in the U.K. (18%) and especially China (39%). All four countries differ in their commitment to PRISMA-based systematic reviews in a similar manner. While the U.S. and Germany have the lowest rates (both < 1%), higher rates can be found in the U.K. (2.1%) and China (4.0%). Notably, authors from Egypt published only 587 review articles, of which only 11% were systematic reviews but 6.6% were PRISMA-based systematic reviews, which is the highest rate in the dataset. The annual ranks of national assignments to PRISMA-based systematic reviews are displayed in Fig. 4. In the first year after the inception of the guideline, the leading producers were the United Kingdom (23%), the United States (21%) and Canada (12%). While the U.S. took first position in 2013, accounting for 20% of all PRISMA-based systematic reviews in 2015, China in particular showed even higher growth rates. While initially ranking seventh with 4.8%, its portion grew steadily year over year, reaching second rank in 2015 with over 1000 items and an 18% share overall. Groneberg et al. have shown that strong commitments towards evidence-based medicine can be associated with the Commonwealth (Groneberg et al., 2019). Similarly, the analysis of document types shows that such countries, especially the U.K., Canada, Australia or New Zealand, produce higher rates of systematic reviews and their PRISMA-based counterpart than other leading science nations (Figs. 3 and 4). Other countries with high rates of these genres, most notably the Netherlands or Denmark, are also among the top ten adopters of evidence-based medicine (ibid.). Although the "evidence-based" movement promotes and comes with a variety of knowledge generation practices, systematic reviewing remains an important cornerstone of EBM, as other studies have indicated (Blümel & Schniedermann, 2020; Ojasoo et al., 2001). In that sense, the data provided here corresponds with other studies that mentioned the link between evidence-based paradigms and the emergence of systematic reviews (Timmermans & Berg, 2003; Straßheim & Kettunen, 2014).
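The country attributions above rely on fractional assignment of author affiliations: each publication contributes a total weight of 1, split evenly over the countries of its affiliations. A minimal sketch of such fractional counting, with hypothetical records:

```python
# Minimal sketch of fractional country counting; the records are hypothetical.
from collections import defaultdict

def fractional_counts(publications):
    """publications: iterable of lists of country codes (one per affiliation)."""
    counts = defaultdict(float)
    for affiliations in publications:
        if not affiliations:
            continue
        share = 1.0 / len(affiliations)   # each paper carries a total weight of 1
        for country in affiliations:
            counts[country] += share
    return dict(counts)

pubs = [["US", "US", "GB"], ["CN"], ["DE", "NL"]]   # hypothetical records
print(fractional_counts(pubs))   # approx. {'US': 0.67, 'GB': 0.33, 'CN': 1.0, 'DE': 0.5, 'NL': 0.5}
```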
Besides being fourth in the absolute number of reviews, China shows substantially higher rates of systematic reviews (39%) as well as PRISMA-based systematic reviews (4.0%) than all other countries. Although the exact causes may be complex and difficult to identify, two potential causes are proposed. First, being a rising science leader, the majority of Chinese scientists are latecomers to the international community (see Liang et al., 2020). As such, this group may be biased towards a more recent education in the conduct and reporting of biomedical studies. At the same time, Chinese science policy heavily incentivized the production of scientific literature, which recently made China the global leader in scientific publishing (Stephen & Stahlschmidt, 2021). Second, due to its recent attempts at healthcare reform, China turned more explicitly towards evidence-based medicine.
Citation impact of systematic reviews and PRISMA-based systematic reviews
Differences in the citation impact of the analyzed document types are displayed in Fig. 5. All three citation indicators reveal the same resulting ranks, with the lowest values for articles and the highest values for PRISMA-standardized systematic reviews. The latter receive 14 absolute citations (mean = 22), 1.57 normalized citations (mean = 2.67) and an 87% percentile rank (mean = 81%), meaning that 87% of the other publications in the same field receive equal or fewer citations in the three years after publication. In comparison, other systematic reviews receive 12 absolute citations (mean = 19.5), 1.41 normalized citations (mean = 2.36) and reach an 83% percentile rank (mean = 76.2%). Reviews and articles rank lower correspondingly. Mean differences are highly significant with Kruskal-Wallis (p < 0.001) for all three sets as well as all pairwise Wilcoxon rank sums (p < 0.001). In Fig. 6, the development of the annual mean CPINs shows the dynamics of the impact ranks and corresponding errors for three different citation windows (see Wang, 2013). While 85% of all publications were cited equally or less than PRISMA-based systematic reviews in a 3-year window, systematic reviews went from 80% in 2009 to 74% in 2015 for all three citation windows. Note that since citation data up to 2018 is used, the 10-year window basically represents the whole available citation data. Citation windows seem to play only a minor role. Compared with the 10-year window, the 3-year window bears higher values for articles and reviews, while having slightly lower values for the standardized review formats. Focusing on items published in 2009, the shorter span reveals CPIN differences of +1.84% for articles, +2.62% for reviews, -0.03% for systematic reviews and -0.4% for PRISMA-standardized systematic reviews. Rankings are the same for all three citation windows. The structurally higher impact of systematic reviews and PRISMA-based systematic reviews is constantly decreasing, since both genres are growing faster in publication counts than ordinary articles or reviews. In addition, the results show greater error values for PRISMA-based systematic reviews since this group has the fewest item counts (6 items in 2009).
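As a rough illustration of the relative indicators compared here, the sketch below computes a field-normalized citation score and a percentile rank against a hypothetical field/year reference set. It is a generic illustration of such indicators, not the exact CPIN implementation used in the study.

```python
# Minimal sketch of two relative citation indicators, under simplifying
# assumptions: "normalized citations" divides a paper's citation count by the
# mean of its field/year reference set, and the percentile rank is the share
# of papers in that set cited equally or less.
from statistics import mean

def normalized_citations(cites: int, reference_set: list[int]) -> float:
    return cites / mean(reference_set)

def percentile_rank(cites: int, reference_set: list[int]) -> float:
    return sum(c <= cites for c in reference_set) / len(reference_set)

field_year_set = [0, 1, 2, 3, 5, 8, 13, 40]       # hypothetical citation counts
print(normalized_citations(13, field_year_set))   # ~1.44
print(percentile_rank(13, field_year_set))        # 0.875, i.e. 87.5th percentile
```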
The differences in the changes of CPIN values indicate that the document types have different citation patterns, with articles and reviews being cited more consistently in the long run. At first glance, phenomena like delayed recognition (Ke et al., 2015) or the "kiss of death" (Lachance et al., 2014) may influence the long-term citation impact of primary research and syntheses differently. However, the different intellectual function of the systematic review, whether standardized or not, in comparison with what scholars now call the "narrative review" (Ferrari, 2015), may explain why the former has a shorter citation lifespan. Since systematic reviews are usually designed to achieve consensus on a very particular research question, they draw much more on recent research and become outdated by new findings. Ideally, they are thought to be updated by the same or other authors whenever new trial results or review methods occur (Elliott et al., 2017). This limits their potential citation lifespan, since medical experts are supposed to rely on the most recent findings. In contrast, common reviews are usually of broader scope and may serve a wider array of intellectual functions, such as providing introductory or educational material, defining a field's current missions or trends, or even contributing to field formation (Grant & Booth, 2009).
The results of the citation analysis provided here show that PRISMA-standardized systematic reviews have a higher citation impact than systematic reviews, reviews or primary articles. In the following, two lines of discussion are proposed.
First, research that complies with reporting standards is credited with higher citation impact afterwards because it is of higher quality (1). While the conception that 'citation indicates quality' still persists in biomedicine and elsewhere (for example, Abramo & D'Angelo, 2011; Durieux & Gevenois, 2010), authors from other domains at least uphold the idea that citations correlate with some specific epistemic values such as rigor or relevance (Molléri et al., 2018). Such a rather traditional conception of impact was common during the early phase of evaluative bibliometrics, but can only be found occasionally today. Speaking of a Kuhnian revolution in bibliometrics, Bornmann and Haunschild argue that in contemporary evaluative bibliometrics citation impact should be considered as only one aspect of a multidimensional concept of research quality, especially since there are many different citation behaviors (Bornmann & Haunschild, 2017). For example, authors cite publications if they value the latter's solidity and plausibility, originality and novelty, scientific value such as topical relevance, or societal value, which are complex dimensions that sometimes even conflict with each other (Aksnes et al., 2019).
Based on the interpretation that greater quality leads to higher citation impact, there is an important limitation to this study. In the analysis, all systematic reviews that cite the guideline and comply with its first requirement are assigned equally to the set of standardized systematic reviews. But meta-studies in biomedicine have shown that the level or quality of guideline compliance varies considerably (Pussegoda et al., 2017). Therefore, to uphold a rather mechanistic relation between quality and citation impact, observed citations in the created set would have to be normalized by the level of guideline compliance in further research.
Second, besides the interpretation provided above, the results from this study may provide useful input to another interpretation of standards and scientific excellence: whatever makes research high impact also makes it more open towards standardization and more likely to adopt new methodological standards (2). Since biomedicine can be understood as what Richard Whitley has called a "professional adhocracy" (Whitley, 2000, 187), standards emerge to establish new research practices or tools and further enable formal communication. Authoritative organizations like Cochrane or leading experts in the development of standards form networks in which they negotiate the content and domain of a new standard (Solomon, 2015). By finding consensus, those networks redefine the borders of what counts as a properly reported systematic review, which excludes those reviews that do not comply with the standard (for example, Yuan & Hunt, 2009; see also Gieryn, 1999). In this setup, high-impact researchers also serve as first movers in adopting the standard. After the leading scientists have defined these standards, other researchers consent in order to be part of those who publish 'the good' systematic reviews. Similar dynamics have also been observed for academic communities consenting on theories or methods (cf. Luetge, 2004; Zollman, 2007), or for the role of reviews in the formation of new disciplines (Blümel, 2020).
With respect to the two different interpretations, the contribution of this analysis to bibliometric research is twofold. Generally, the provided results have revealed some basic characteristics and dynamics of a specific standardization in biomedical research. Together with further research questions about the dissemination and application of PRISMA, for example the characteristics of its developers or adopters, this analysis represents an informative usage of bibliometric data in studying the social aspects of biomedicine or science in general (Gläser & Laudel, 2001; Wyatt et al., 2017).
But identifying standardized research can also contribute to the evaluation of scientific outputs or the development of institutional quality frameworks. As mentioned above, reporting guidelines such as PRISMA have strengthened conceptions of transparency as a fundamental aspect of what has to be considered good research. Not surprisingly, reporting guidelines play an important role in the Hong Kong principles for assessing researchers (Moher et al., 2020). Bibliometric data offers the potential to interpret citations in a more diverse form, for example by examining the role of method papers, software packages or standards (cf. Li et al., 2017). Similarly, bibliometric analyses of reporting guidelines can enable research evaluations in this respect. In addition, more fine-grained discriminations of document types can ensure that published items are evaluated according to their intellectual contribution.
Conclusion
This study provided a comparative analysis of different document types in biomedical research. In order to understand the relation between methodological quality and citation impact, indexing data from PubMed was used to differentiate between ordinary systematic reviews and those that comply with the PRISMA reporting standard. Besides providing a general overview about the growth and dissemination of systematic reviews and their standardized counterparts, their citation impact was compared by using different indicators.
The results show that although the growth rates of all biomedical publications decrease, there is still a strong annual growth in systematic reviews and PRISMA-based systematic reviews. Focusing on subject categories, both types of systematic reviewing occur especially in fields that are related to epidemiology and public health. Although the number of assigned subject categories grew, a great portion of PRISMA-based systematic reviews are assigned to fields in which the original guideline was published. While the top producers of scientific literature also dominate the numbers of published systematic reviews, especially countries with a strong focus on evidence-based medicine achieve higher portions of systematic reviews and PRISMA-based reviews. In addition, China is the fourth biggest producer of systematic reviews and also provides substantially higher rates than other leading nations.
Ranking the citation impact of the different document types has revealed that PRISMA-based systematic reviews dominate irrespective of indicator and citation window. It was discussed that this dominance could represent the idea that methodological quality leads to higher citation impact and could explain why this conception still persists although it has been dismissed in contemporary science studies. In contrast, the results may show that whatever makes authors achieve high citation impact also leads them to willingly apply new methodological standards. Irrespective of which interpretation one favors, bibliometric research could benefit from a more nuanced differentiation of document types in order to evaluate them with respect to their intellectual roles.
Funding Open Access funding enabled and organized by Projekt DEAL. Funded by the German Federal Ministry of Education and Research as part of the program "Quantitative research on the science sector" (01PU17017). Additional support by the German Kompetenzzentrum Bibliometrie (01PQ17001).
Conflict of interest
The authors declared that they have no conflict of interest.
Data availability All relevant material is provided in the Text.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Interaction Analysis between HLA-DRB1 Shared Epitope Alleles and MHC Class II Transactivator CIITA Gene with Regard to Risk of Rheumatoid Arthritis
HLA-DRB1 shared epitope (SE) alleles are the strongest genetic determinants for autoantibody positive rheumatoid arthritis (RA). One of the key regulators in expression of HLA class II receptors is MHC class II transactivator (CIITA). A variant of the CIITA gene has been found to associate with inflammatory diseases. We wanted to explore whether the risk variant rs3087456 in the CIITA gene interacts with the HLA-DRB1 SE alleles regarding the risk of developing RA. We tested this hypothesis in a case-control study with 11767 individuals from four European Caucasian populations (6649 RA cases and 5118 controls). We found no significant additive interaction for risk alleles among Swedish Caucasians with RA (n = 3869, attributable proportion due to interaction (AP) = 0.2, 95%CI: −0.2–0.5) or when stratifying for anti-citrullinated protein antibodies (ACPA) presence (ACPA positive disease: n = 2945, AP = 0.3, 95%CI: −0.05–0.6, ACPA negative: n = 2268, AP = −0.2, 95%CI: −1.0–0.6). We further found no significant interaction between the main subgroups of SE alleles (DRB1*01, DRB1*04 or DRB1*10) and CIITA. Similar analysis of three independent RA cohorts from British, Dutch and Norwegian populations also indicated an absence of significant interaction between genetic variants in CIITA and SE alleles with regard to RA risk. Our data suggest that risk from the CIITA locus is independent of the major risk for RA from HLA-DRB1 SE alleles, given that no significant interaction between rs3087456 and SE alleles was observed. Since a biological link between products of these genes is evident, the genetic contribution from CIITA and class II antigens in the autoimmune process may involve additional unidentified factors.
Introduction
Rheumatoid arthritis (RA) is a relatively common disease of poorly understood aetiology that affects approximately 1% of the world's population. Even though the pathophysiology of the disease is well studied, only a limited number of risk factors with low to moderate effect have been described [1]. Even the strongest genetic risk factor for RA, the variants in the HLA-DRB1 gene suggested by the shared epitope (SE) hypothesis [2], confer only a moderate risk increase for RA, with an odds ratio (OR) of 4-6 in European Caucasians with regard to anti-citrullinated protein antibodies (ACPA) positive RA [3,4]. Also, these variations are quite common in the normal population and their predictive value is very low. Therefore, the major fraction of RA risk remains unexplained by existing information, and interaction between known risk factors may account for putative "missing" risk factors. Indeed, some risk factors for RA have been shown to moderate the risk for disease in the context of the SE [3,5,6], suggesting that interactions with SE may play an important role in the development of RA [7].
We have earlier reported on a variant of the MHC class II transactivator (CIITA) gene, the −168A/G promoter SNP (rs3087456), which associates with inflammatory diseases and also with the expression of CIITA and downstream HLA expression [8]. This association was not consistently replicated in different populations [9,10]. However, variants in the CIITA locus other than rs3087456 have been reported in association with autoimmune disease [11,12,13], warranting further exploration of this locus in the context of RA.
Also, due to the involvement of HLA class II in RA, the study of CIITA may reveal more detailed mechanisms of disease development, since the protein is known to be a key regulator of MHC class II expression and therefore may be involved in the development of RA in combination with SE alleles [14,15]. A complete lack of expression of CIITA leads to the bare lymphocyte syndrome with a complete abolishment of classical MHC class II gene expression [16]. This is unlikely to be relevant to RA, but less severe changes in the efficiency of CIITA expression might be important for the development of autoimmunity, and statistical evaluation of the genetic interaction of CIITA and shared epitope alleles may reveal "missing" risk factors.
With this as a background, we set out to define a possible gene-gene interaction between HLA-DRB1 and the CIITA locus in the development of RA in a study population of 11767 individuals from four European Caucasian cohorts (6649 RA cases and 5118 controls).
Results
First, we tested the hypothesis of an interactive effect between risk alleles of HLA-DRB1 SE and rs3087456 for the development of RA in the Swedish cohort (Cohort I, Table S1). Interaction was estimated between SE positivity and the risk allele G of rs3087456 in a homozygous state (GG) [8]. The analysis demonstrated no significant evidence of interaction in this model (attributable proportion (AP) = 0.2, 95% CI: −0.2–0.5). Since SE is primarily a risk factor for ACPA positive disease, we stratified data according to the ACPA status of RA cases. Still, no significant evidence for interaction was found, although a tendency was apparent (AP = 0.3, 95% CI: −0.05–0.6 for ACPA positive status in RA cases, Table 1). Similarly, no significant interaction in additive and multiplicative models was found in the British and the Dutch cohorts. In the Norwegian cohort, however, a significant interaction was detected, both in the total material and in the ACPA positive RA cases (AP = 0.4, 95% CI: 0.03–0.7 for RA in total and AP = 0.4, 95% CI: 0.05–0.7 for ACPA positive status in RA cases, Table 1).
To investigate the interaction between HLA-DRB1 SE alleles and rs3087456 in depth, we used a more detailed description of the HLA-DRB1 SE alleles by introducing the allelic groups DRB1*01, DRB1*04 and DRB1*10 as separate risk factors. These analyses did not reveal an SE subgroup allele specific interaction with SNP rs3087456 ( Table 2).
Since other variants in the CIITA locus have been reported in association with autoimmune disease, we genotyped an additional 22 SNPs across the CIITA locus in the Swedish cohort (Chr16: 10842650-10931606; details of the RA association tests for these SNPs can be found in Tables S2 and S3). Of these 22 SNPs, including rs3087456, only rs8048002 was significantly associated with RA after correction for multiple comparisons (ACPA negative patients, adjusted for 44 tests: p = 0.013, data submitted elsewhere, Eike et al. [17]). Rs3087456 was significantly associated with RA before adjusting for multiple comparisons. To exhaust the possibility of an interaction between CIITA variants and SE, we screened all CIITA SNPs for interaction in the Swedish cohort, using the additive model and two alternative models of dominance: with the minor and with the major allele of each SNP. In these analyses the SNP rs4781019 showed significant interaction with SE for the ACPA positive subgroup (Table 3, see Table S4 for detailed results). However, the statistical significance did not hold after correction for multiple testing (nominal p = 0.02 for RA in total, nominal p = 0.03 for ACPA positive RA; the significance threshold for 44 tests is p = 0.0011, Bonferroni correction). In addition, the SE interaction with this SNP could not be confirmed in the independent Norwegian cohort (Table 3).
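For reference, the Bonferroni threshold quoted above follows directly from the number of tests:

```python
# Bonferroni-corrected per-test threshold for 44 interaction tests at a
# family-wise alpha of 0.05; nominal p-values of 0.02-0.03 do not survive it.
alpha = 0.05
n_tests = 44
threshold = alpha / n_tests
print(round(threshold, 4))                    # 0.0011
print(0.02 < threshold, 0.03 < threshold)     # False False
```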
Discussion
To assess the combinatorial risk of CIITA and HLA-DRB1, we have investigated the interaction between HLA-DRB1 SE alleles and the CIITA −168A/G polymorphism rs3087456, which was previously found to be associated with RA [8]. It may be that the risk for disease is only detectable in certain combinations and also in certain populations. This may be the underlying reason why rs3087456 has been shown to be a genetic risk factor for immunological disease in some cohorts [8,18,19], but has not been consistently replicated in other cohorts [20,21,22]. This was why we set out to define a more specific role of this polymorphism through genetic interaction with HLA-DRB1 SE alleles. However, a straightforward interaction model of SE and the rs3087456 G allele did not reveal significant interaction with regard to RA. In addition, we performed a detailed analysis with the specific SE alleles DRB1*01, DRB1*04 and DRB1*10 for a more strict allelic interaction. From this we could conclude that the interaction trend stayed with DRB1*04, but it did not reach statistical significance. We observed that the models with DRB1*01 and DRB1*10 showed negative interaction, which led us to remove individuals with these genotypes from the dataset for a fairer measure of the effect of DRB1*04. This analysis resulted in a significant interaction, but it was not replicated in the Norwegian cohort (Table S5). A reason for this could be the reduction in size of the dataset and decreased statistical power.
Although the involvement of SNP rs3087456 was the main focus of our study, we also addressed genetic variability in this locus on a broader scale by scrutinizing the CIITA locus for other putative risk markers in the Swedish RA cohort. In a recent article addressing the influence of CIITA and HLA-DRB1 in multiple sclerosis [11], a complex risk relationship between these loci is presented. The polymorphism rs4774 is described as the major risk variant in the CIITA locus instead of rs3087456 and with an increased risk in individuals carrying the DRB1*1501 allele. Rs4774 is also reported to be associated with the production of donor-specific HLA antibodies in renal allograft recipients [23]. Indeed, we found some evidence for interaction with another polymorphism than rs3087456 (rs4781019), but this could not be replicated in the independent Norwegian cohort.
In general, it is difficult to estimate the statistical power for measuring low effects of interaction. Also, except for the Swedish cohort, no particular measures were taken to match controls with the RA patients, which may reduce the power of our study and increase its genetic heterogeneity. Therefore, it is not possible to conclude that an interaction between CIITA and HLA-DRB1 SE alleles is completely absent in the development of RA. However, the absence of convincing results in four large, independent cohorts makes it highly unlikely that any strong interaction is present or that a sizeable subgroup of the disease could be explained by this interaction.
Our study is not directly comparable to association studies of CIITA in RA, since the major aim was to discover a hypothetical interaction between CIITA and HLA-DRB1 SE. Even so, this and previous studies indicate that CIITA plays an ambiguous role in RA, where association signals are difficult to replicate. In a recent article by Eike et al. (unpublished) [17], an updated meta-analysis supports the association of CIITA with RA, in particular in the Scandinavian populations. Also, it seems likely that CIITA is involved in other autoimmune diseases, with multiple sclerosis being the most pronounced [11,18,24,25]. According to our data, the previously found association of CIITA variations with RA that could not be replicated in different Caucasian populations is not due to a putative interaction between CIITA and SE alleles. Thus, the missing genetic risk factors for RA remain to be discovered.
To conclude, we did not observe any significant interaction between rs3087456 or 22 other SNPs in the CIITA locus and HLA-DRB1 SE alleles with regard to risk of RA.
Description of cohorts
Interaction between HLA-DRB1 SE alleles and rs3087456 in CIITA was primarily investigated in a cohort consisting of 2520 incident RA cases and 1349 matched controls from the Swedish EIRA study, which is described elsewhere [3,26]. The analysis was repeated in three other cohorts: from the UK (1916 cases and 1270 controls), the Netherlands (1260 cases and 346 controls) and Norway (953 cases and 2153 controls), with overall 11767 individuals in the study (6649 RA cases and 5118 controls). All RA patients met the American College of Rheumatology 1987 (ACR-87) revised criteria for RA [27]. For the Swedish cohort, patients were recruited to the study by practitioners not responsible for the study, who, after informing the patient, registered verbal consent in the patient's journal. If consent was given, an extensive questionnaire was filled in by the patient. Controls were invited by letter and were also asked to fill in and send back an extensive questionnaire and to visit the closest primary care unit to leave samples. This active participation was the foundation for informed consent. All subjects can withdraw from the study at any moment. This procedure is in line with the ethical permit and regulations in Sweden. All individuals in the Dutch, British and Norwegian cohorts gave their written informed consent to participate. The local regional ethical review boards approved this study (Regionala etikprövningsnämnden i Stockholm, Sweden; The Leiden institutional review board, Commissie Medische Ethiek, Netherlands; The North-West Multi-Centre Research Ethics Committee and University of Manchester Committee on the Ethics of Research on Human Beings, United Kingdom; Regionale komiteer for medisinsk og helsefaglig forskningsetikk (REK) sør-øst, Oslo, Norway).
In the EIRA study, only a minor number of controls were detected to be ACPA positive (1.8%, n = 24), while controls from other cohorts were not tested. When stratifying by ACPA status, controls were always considered as a whole group and not divided in strata.
Dutch cohort
Patient characteristics have been described previously [28]. The healthy controls were randomly selected by the Immunogenetics and Transplantation Immunology section of the Leiden University Medical Center.
British cohort
All subjects were white Caucasians and all patients satisfied ACR-87 criteria modified for genetic studies [30].
Genotyping for the UK cohort was performed using the Sequenom MassARRAY iPLEX system in accordance with the manufacturer's instructions (www.sequenom.com). ACPA was tested using the Axis-Shield DIASTAT kit according to the manufacturer's instructions (positivity: concentration > 5 U/ml). HLA-DRB1 SE copy number (0, 1 or 2 copies) was determined using a semi-automated reverse hybridization method (Dynal Biotech, Wirral, UK).
Norwegian cohort
Norwegian RA patients were from the Oslo RA Registry (ORAR) and the European Research on Incapacitating Disease and Social Support (EURIDISS) cohorts [31,32]. Healthy Norwegian control samples were collected from the Norwegian Bone Marrow Donor Registry (NBMDR), Oslo University Hospital, Rikshospitalet (Controls-1, n = 1121) and from blood donors recruited at Oslo University Hospital, Ullevål (Controls-2, n = 1032). An ELISA kit assay (INOVA Diagnostics, San Diego, California, USA) was used to measure ACPA concentrations in the RA samples, with a positivity cut-off defined as levels > 25 U/ml. Genotyping of CIITA SNPs in the Norwegian cohort was performed with TaqMan predesigned assays. Genotyping of HLA-DRB1 in the Norwegian RA patients and controls from the NBMDR was done by sequence-based genotyping [33], whereas blood donors were genotyped by a PCR-based sequence-specific oligonucleotide probe system [34].
Statistical analyses
Additive interaction was defined as departure from additivity of effects, as originally described by Rothman [35], and was estimated by calculating the attributable proportion due to interaction (AP) [36]. For each individual, variables were defined for having none, either or both risk factors. The statistical tool R was used for logistic regression and for estimating ORs for the variables, and AP was calculated with 95% confidence intervals (version 2.9.0, http://www.r-project.org/). This procedure was facilitated by scripts developed by Kallberg et al., 2006 [37]. Interaction was calculated between SNP rs3087456 (homozygous for the G allele) and HLA-DRB1 SE alleles, defined as any of DRB1*01, DRB1*04 and DRB1*10. For the other polymorphisms in the CIITA locus both dominant and recessive models for the risk allele were used. For calculating multiplicative interaction and performing allelic association analysis we used the software PLINK (version 1.06, http://pngu.mgh.harvard.edu/purcell/plink/). We used Bonferroni correction for multiple testing when appropriate.
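A minimal sketch of the AP point estimate under Rothman's additive-interaction framework (AP = RERI/OR11 with RERI = OR11 − OR10 − OR01 + 1). The odds ratios below are hypothetical, and the confidence intervals reported in this study were obtained with the dedicated R scripts cited above.

```python
# Attributable proportion due to interaction from three odds ratios
# (ORs used as approximations of relative risks, as in case-control designs).
def attributable_proportion(or11: float, or10: float, or01: float) -> float:
    """or11: both risk factors present; or10/or01: only one factor present (vs. none)."""
    reri = or11 - or10 - or01 + 1.0     # relative excess risk due to interaction
    return reri / or11

# hypothetical ORs for SE positivity, rs3087456 GG, and their combination
print(round(attributable_proportion(or11=6.0, or10=4.0, or01=1.5), 2))   # 0.25
```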
Supporting Information
Table S1 Description of the cohorts included in the study. (DOC)
Table S4 A summary of the interaction analysis for HLA-DRB1 SE with the CIITA locus for the Swedish cohort. *For recessive models the complement of the recessive risk allele is used for calculation due to a low allele frequency. AP = attributable proportion; SE = shared epitope; ACPA = anti-citrullinated protein antibodies. Positions with missing data (−) were not possible to calculate. (DOC)
Table S5 Additive interaction between rs3087456 and HLA-DRB1 SE subgroups. #Exclusion of individuals with DRB1*01 or DRB1*10 alleles. Interaction between rs3087456 and DRB1*10 could not be calculated for the Norwegian cohort. AP = attributable proportion; SE = shared epitope; ACPA = anti-citrullinated protein antibodies. (DOC)
A de novo complex chromosome rearrangement associated with multisystematic abnormalities, a case report
Complex chromosomal rearrangements (CCRs) are constitutional structural rearrangements that involve three or more chromosomes or that have more than two breakpoints. Here, we describe a four-way CCR involving chromosomes 4, 5, 6 and 8. The patient had mild multisystematic abnormalities during his development, including defects in his eyes and teeth, exomphalos and asthenozoospermia. His wife had two spontaneous abortions during the first trimester. The translocations at 4q27, 5q22, 6q22.3, and 8p11.2 were diagnosed by conventional cytogenetic analysis and confirmed by fluorescence in situ hybridization (FISH). After analysis using an SNP array, we defined three microdeletions, including 0.89 Mb on chromosome 4, 5.39 Mb on chromosome 5 and 0.43 Mb on chromosome 8. His mother had a mosaic karyotype of 47,XXX[5]/45,X[4]/46,XX[91]; the other chromosomes were normal. After one cycle of in vitro fertilization (IVF) treatment followed by preimplantation genetic diagnosis (PGD), they obtained two embryos, but neither was balanced. The patient's phenotype resulted from the CCR and the microdeletions on chromosomes 4, 5 and 8. The couple decided to use artificial insemination by donor (AID) technology.
Background
Complex chromosome rearrangements (CCRs) are structural aberrations involving more than two chromosome breaks with exchanges of several chromosomal fragments [1]. The phenotype of individuals with CCRs can be normal; this largely depends on whether or not the CCR is balanced or whether developmentally important gene(s) are disrupted at the breakpoints. Balanced CCRs contain unchanged amounts of genes but unbalanced CCRs do not. According to the literature, approximately 70% of CCRs are detected in people without a phenotype, 20-25% are detected in patients with congenital abnormalities and/or mental retardation and 5-10% are detected during prenatal diagnosis [2].
Occasionally, cases have been detected because of psychiatric trouble [3,4]. In addition, CCRs are frequently observed in tumor cells, especially in hematological malignancies [5]. Among phenotypically normal CCR carriers, most suffer reproductive failures, including spontaneous abortions, stillbirths, the delivery of children with congenital malformations, and male infertility [6,7].
There are various classifications of CCRs due to their complex nature. CCRs can be considered familial or de novo, according to the mode of transmission [8]. Based on the number of chromosome breaks, CCRs are divided into two groups: those with four or fewer breaks and those with more than four breaks [9]. CCRs are also divided into three classes according to their structure [10]: 1) three-way rearrangement, which refers to three chromosome breaks and exchanges of chromosomal segments; 2) exceptional CCRs, involving rearrangements in which there is more than one breakpoint per chromosome; and 3) double two-way translocations, which indicates two or three independent, simple reciprocal or Robertsonian translocations that co-exist in the same carrier.
Here, we describe a de novo CCR case that involves four chromosomes and four breakpoints. The patient displayed mild multisystematic abnormalities, which were identified by conventional cytogenetics and molecular genetic technologies. After a failure to obtain normal embryos with PGD, they chose to accept AID with donor spermatozoa.
Case presentation
A 25-year-old man and his 26-year-old wife were referred to our reproductive medical center due to two spontaneous abortions in the past 3 years after their marriage. The abortions occurred at the sixth and seventh week of gestation for the first and second time, respectively.
The physical examination of the husband showed that his eyes had refractive errors; his left eye displayed congenital amblyopia and his vision was 0.2 (Fig. 1a). He also had bilateral primary open angle glaucoma (POAG) and the intraocular pressure of his eyes was more than 40 mmHg. His eyes were subjected to a trabeculotomy and the intraocular pressure was well controlled. He also had exomphalos (Fig. 1b). His two central incisors were congenitally missing and had been replaced with implants, his lower right primary canine was retained (Fig. 1c), and his lower left permanent canine was congenitally missing (Fig. 1d). He had graduated from high school and is now employed. He can communicate normally. His routine semen analysis demonstrated a sperm deformity rate of 99%, a sperm viability rate of 9.56%, a DNA fragmentation index (DFI) of 13.58%, and high DNA stainability (HDS) of 15.36%. He is the second child of non-consanguineous parents and has two sisters. His parents and both sisters are healthy. After discovering his chromosome abnormalities, his family members underwent genetic testing (except his older sister, who was abroad).
After genetic counseling, the couple insisted on preimplantation genetic diagnosis (PGD). Initially, they obtained two embryos to undergo PGD; both were unbalanced. After counseling, they decided to accept artificial insemination with donor spermatozoa.
Methods and results
Metaphase chromosomes obtained from colchicine-stimulated cultures of peripheral blood lymphocytes and fibroblast cultures were used for GTL-banding and fluorescence in situ hybridization (FISH) analysis. FISH was performed according to the method described by Wieczorek et al. [11]. An SNP array was performed using the Cyto12 genechip (Illumina, USA) according to the manufacturer's instructions.
To confirm the FISH results and to determine the presence of microdeletions arising during the chromosome rearrangement, we examined DNA from the patient's peripheral blood using an SNP-array assay according to the manufacturer's instructions (Illumina). FISH showed that material from der(4) is present on der(5), material from der(5) is present on der(6), and material from der(6) is present on der(8) (panels e-f: FISH with terminal probes, 4pter green and 4qter red, 5pter green and 5qter red; panels g-i: metaphase FISH with the EGR (5q31) probe). The SNP-array results revealed three microdeletions, on chromosomes 4, 5 and 8 (Fig. 3). The genes involved in these regions are shown in Table 1. They included five OMIM genes: phospholipase A2 group XIIA (PLA2G12A), ELOVL fatty acid elongase 6 (ELOVL6), solute carrier organic anion transporter family member 4C1 (SLCO4C1), diphosphoinositol pentakisphosphate kinase 2 (PPIP5K2, also known as HISPPD1) and nudix hydrolase 12 (NUDT12). These genes are not associated with known disorders. Among them, SLCO4C1 is an organic anion transporter, HISPPD1 is a kinase (which acts as a cell signaling molecule), and the remaining genes are enzyme-encoding genes that are involved in several metabolic processes, including phospholipid, fatty acid and nucleotide metabolism. Interestingly, two genes, RRH and LRIT3, were related to his ocular disorder. RRH (retinal pigment epithelium-derived rhodopsin homolog) belongs to the seven-exon subfamily of mammalian opsin genes [12]; mutation of this gene has been linked to retinitis pigmentosa and allied diseases [13]. The protein encoded by LRIT3 (leucine rich repeat, Ig-like and transmembrane domains 3) may regulate fibroblast growth factor receptors and affect the modification of these receptors, which are glycosylated differently in the Golgi and endoplasmic reticulum. Mutations in this gene are associated with congenital stationary night blindness, type 1F [14]. Our results demonstrate that although these genes are not associated with known disorders, they show haploinsufficiency in the patient.
To determine whether the patient's microdeletions were inherited from his parents or whether they appeared de novo, the parents were subjected to a SNP-array analysis. The results demonstrated that the parents do not have microdeletions in the three chromosomes mentioned above (Fig. 4), indicating that the loss of chromosome fragments was derived from rearrangement.
Discussion and conclusion
Here, we describe a four-way CCR involving several microdeletions on chromosomes 4, 5, 6 and 8. The patient had mild multisystematic abnormalities during development, including defects in his eyes and teeth, exomphalos and asthenozoospermia. After completing a cycle of PGD, he did not obtain normal embryos and decided to use AID.
CCRs are rare events with an estimated frequency of 0.1% [15]. Most CCR cases are unknown to the carriers or their families. (Fig. 4: the CCR patient's pedigree; I represents the patient's parents, II represents the patient and his sisters, and II-2 is the patient.) Some chromosomes, including 2, 3, 4, 7, and 11, are implicated in CCRs more frequently than would be expected. This is the first CCR case to involve chromosomes 4, 5, 6, and 8 [16,17]. CCRs can involve up to 15 breakpoints. According to a 2011 summary, cases that included four breakpoints accounted for 29.1% of all 251 CCR cases [2].
However, this CCR occurred de novo; the patient's mother's karyotype was 47,XXX[5]/45,X[4]/46,XX[91], i.e. she had a low level of mosaic 47,XXX and 45,X, each less than 10%. She had three children at 19, 21 and 35 years of age and had no fertility issues during her childbearing years. Her mosaic karyotype is possibly due to a gain or a loss of X chromosomes as she aged [18,19] or to chromosomal nondisjunction during the culture of peripheral blood lymphocytes.
Breakpoint analysis of a growing number of complex rearrangements has revealed that translocations involving three or more chromosomes are likely formed via chromothripsis [20][21][22][23]. Most constitutional chromothripsis events occur de novo and those investigated thus far have been verified as paternal in origin [20][21][22][23]. Alternatively, mitotic errors in the early embryo [24] or the pulverization of micronuclei [25] could be responsible for numerous DNA breaks. We speculate that this de novo CCR is due to chromothripsis.
According to the literature, a three-way CCR would theoretically form 64 different gametes: one normal, one balanced, and the rest unbalanced [2]. A four-way CCR, as in this case, therefore has a probability of producing normal or balanced gametes of less than 1/32. We disclosed this risk and, as a result of genetic counseling, the couple opted for PGD. After a failure to obtain normal embryos with PGD, they chose to accept AID with donor spermatozoa.
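The gamete counts quoted above can be reproduced with a simple combinatorial sketch, under the simplifying assumption that each chromosome of the meiotic multivalent (the derivatives plus their normal homologs) segregates independently to one of the two poles:

```python
# Count the fraction of gamete contents that are either fully normal or
# fully balanced (all derivatives), assuming independent segregation of
# each chromosome of the multivalent to pole A or pole B.
from itertools import product

def normal_or_balanced_fraction(n_chromosomes: int) -> float:
    normals = [f"chr{i}" for i in range(n_chromosomes)]
    derivatives = [f"der{i}" for i in range(n_chromosomes)]
    members = normals + derivatives
    good = total = 0
    for poles in product("AB", repeat=len(members)):
        gamete = {m for m, pole in zip(members, poles) if pole == "A"}
        total += 1
        if gamete == set(normals) or gamete == set(derivatives):
            good += 1
    return good / total

print(normal_or_balanced_fraction(3))   # 2/64  = 0.03125   (three-way CCR)
print(normal_or_balanced_fraction(4))   # 2/256 = 0.0078125 (four-way CCR, < 1/32)
```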
In conclusion, we systematically investigated this CCR and the accompanying microdeletions and were able to characterize the genetic defects that resulted in the patient's multisystematic abnormalities, which had bothered him for many years. After receiving genetic counseling, the couple understood that they could not conceive a chromosomally balanced child because the husband had microdeletions in three chromosomes. They chose to undergo AID.
The Hamiltonian H=xp and classification of osp(1|2) representations
The quantization of the simple one-dimensional Hamiltonian H=xp is of interest for its mathematical properties rather than for its physical relevance. In fact, the Berry-Keating conjecture speculates that a proper quantization of H=xp could yield a relation with the Riemann hypothesis. Motivated by this, we study the so-called Wigner quantization of H=xp, which relates the problem to representations of the Lie superalgebra osp(1|2). In order to know how the relevant operators act in representation spaces of osp(1|2), we study all unitary, irreducible star representations of this Lie superalgebra. Such a classification has already been made by J.W.B. Hughes, but we reexamine this classification using elementary arguments.
Introduction
The suggestion that the zeros of the Riemann zeta function might be related to the spectrum of a self-adjoint operator H goes back to Hilbert and Pólya in the early 20th century. It was not until the works of Selberg [1] and Montgomery [2] that this conjecture gained much credibility. Due to papers by Connes [3] and Berry and Keating [4,5] in the late 1990s, it appears that the Hilbert-Pólya conjecture might be related to the classical one-dimensional Hamiltonian H = xp. More precisely, Berry and Keating suggest that some sort of quantization of this Hamiltonian might result in a spectrum consisting of the values t_n, where the t_n are the heights of the non-trivial Riemann zeros 1/2 + it_n. A proper quantization revealing such a correspondence is, however, not known. These interesting observations stimulated us to perform a different quantization of the Hamiltonian H = xp. In Wigner quantization one abandons the canonical commutation relations and instead imposes compatibility between Hamilton's equations and the Heisenberg equations as operator equations. The result is a set of compatibility conditions that are weaker than the canonical commutation relations. This was applied for the first time in a famous paper by Wigner [6]. Wigner's approach has been applied to many different Hamiltonians, leading to various connections with Lie superalgebras [7][8][9]. In the present text, Wigner quantization will lead to the Lie superalgebra osp(1|2). Since it is our interest to determine the spectrum of the operators Ĥ and x̂, one needs the action of these operators in representation spaces of osp(1|2). We present a classification of all irreducible *-representations of this Lie superalgebra, thus reconstructing and improving some results by Hughes [10].
Wigner quantization of H = xp
The simplest Hermitian operator that corresponds to our Hamiltonian is given by Ĥ = (x̂p̂ + p̂x̂)/2. Without assuming any commutation relations between the position and momentum operators x̂ and p̂, one can still compute Hamilton's equations, ẋ = ∂H/∂p = x and ṗ = −∂H/∂x = −p, and the Heisenberg equations, dx̂/dt = i[Ĥ, x̂] and dp̂/dt = i[Ĥ, p̂], and impose that they are equivalent. The resulting compatibility conditions (we choose ℏ = 1), [Ĥ, x̂] = −i x̂ and [Ĥ, p̂] = i p̂ (2), are weaker than the usual canonical commutation relation [x̂, p̂] = i. We wish to find self-adjoint operators x̂ and p̂ such that the compatibility conditions (2) are satisfied. For that purpose we define new operators b⁺ and b⁻ as linear combinations of x̂ and p̂, in terms of which the Hamiltonian Ĥ can be rewritten; evidently the operators x̂ and p̂ can conversely be expressed as linear combinations of the b±. Even the compatibility conditions can be reformulated: they are equivalent to [Ĥ, b±] = −ib∓, which in turn can be written entirely in terms of the operators b⁺ and b⁻. These equations are recognized to be the defining relations of the Lie superalgebra osp(1|2), generated by the elements b⁺ and b⁻. So we have found expressions of all relevant operators in terms of Lie superalgebra generators. A question one might ask is to find the spectrum of Ĥ and x̂ in an osp(1|2) representation space, which is only possible once these representation spaces are known. The spectral problem will be tackled in a subsequent paper. Right now, we wish to present a straightforward way of classifying the irreducible *-representations of osp(1|2).
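As a consistency check of the reformulation above, the following sympy snippet assumes one concrete choice of linear combinations, b± = (x̂ ∓ ip̂)/√2 (an assumption made here for illustration, since the explicit definition is not reproduced in this text), and verifies, using only linearity of the commutator together with the compatibility conditions, that [Ĥ, b±] = −ib∓.

```python
# Minimal sympy check (a sketch under the stated assumption on b±).
# [H, x] = -i*x and [H, p] = i*p are substituted coefficient-wise into
# [H, b±], and the results are compared with -i*b∓.
import sympy as sp

x, p = sp.symbols("x p")   # stand-ins for the operators x^ and p^

def comm_with_H(expr):
    """Extend [H, .] by linearity, using [H, x] = -i x and [H, p] = i p."""
    expr = sp.expand(expr)
    return expr.coeff(x) * (-sp.I * x) + expr.coeff(p) * (sp.I * p)

b_plus = (x - sp.I * p) / sp.sqrt(2)    # assumed definition of b+
b_minus = (x + sp.I * p) / sp.sqrt(2)   # assumed definition of b-

print(sp.simplify(comm_with_H(b_plus) - (-sp.I) * b_minus))    # 0
print(sp.simplify(comm_with_H(b_minus) - (-sp.I) * b_plus))    # 0
```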
Classification of irreducible * -representations of osp(1|2)
Although we are aware of the classification by Hughes in [10], we think it is possible to achieve his results in a more accessible way, based on [11]. In addition we will be able to identify some equivalent representation classes. Before giving the details of our classification, we provide the readers with the necessary definitions and a general outline of how we will construct all irreducible * -representations of osp(1|2).
Basic introduction and outline
We will be dealing with the Lie superalgebra osp(1|2), generated by two operators b⁺ and b⁻ that are subject to the relations (3). The generating operators b⁺ and b⁻ are the odd elements of the algebra, while the even elements, denoted h, e and f, are built from products of the odd generators. Among others, commutation relations between these elements can be computed from the defining relations (3). One can define a *-structure on osp(1|2), which is an anti-linear antimultiplicative involution X → X*. For X, Y ∈ osp(1|2) and a, b ∈ C we have that (aX + bY)* = āX* + b̄Y* and (XY)* = Y*X*. Our *-structure is provided by the dagger operation X → X†, so we have (b±)* = b∓ and therefore h* = h, e* = −f and f* = −e. Once we have constructed such a *-algebra, we need to define representations.
Definition 1 Let
A be a *-algebra, let H be a Hilbert space and let D be a dense subspace of H. A *-representation of A on D is a map π from A into the linear operators on D such that π is a representation of A regarded as an ordinary algebra, together with the condition ⟨π(X)v, w⟩ = ⟨v, π(X*)w⟩ (4) for all X ∈ A and v, w ∈ D. The representation space D, together with the representation π, is called an A-module. A submodule of D is a subspace that is closed under the action of A. The representation π is said to be irreducible if the A-module D has no non-trivial submodules.
The even operators h, e and f, together with the previously defined *-structure, form the Lie algebra su(1,1). Both su(1,1) and osp(1|2) possess a Casimir operator, denoted by Ω and C respectively. The Casimir elements generate the center of the respective (enveloping) algebras, so Ω commutes with every element of su(1,1) and similarly for C. Moreover, we have Ω* = Ω and C* = C.
We will construct all possible irreducible *-representations of osp(1|2) starting from one assumption: h has at least one eigenvector in the representation space with eigenvalue 2µ, i.e. π(h)v_0 = 2µ v_0 for some nonzero vector v_0. Starting from this one vector, we will build other basis vectors of the representation space V by letting operators of osp(1|2) act on it. After having determined the actions of all osp(1|2) operators on all basis vectors of V, we will extend the representation π to a *-representation. This is done by defining a sesquilinear form ⟨·,·⟩ : V × V → C, which is to be an inner product that satisfies (4). The stipulation that ⟨·,·⟩ should be an inner product will be crucial in limiting the possible representation spaces. However, we will postpone the details of this discussion to the point where we have enough arguments for this end. So let us start with the actual construction of the representation space V.
Construction of the representation space
In this section, the *-structure is of no importance. We will construct an ordinary osp(1|2) representation space that we will extend to a *-representation in the next section. The embedding of su(1,1) in osp(1|2) implies that any irreducible representation of osp(1|2) is a representation of su(1,1), the latter being not necessarily irreducible. V can therefore be written as a direct sum of irreducible representation spaces of su(1,1), V = ⊕_i W_i. Without loss of generality, we can regard v_0 as an element of W_0. Since W_0 is a representation space of su(1,1), we know that v_{2k} = π(e)^k v_0 and v_{-2k} = π(f)^k v_0 must be elements of W_0. All these vectors span the space W_0, which is generated by the single vector v_0. The action of b⁺ on any vector of W_0 must be a vector outside W_0, provided that this action differs from zero. Let us define v_1 = π(b⁺)v_0; we can say that v_1 is an element of W_1. Similarly, we can look at the action of b⁻ on v_0 and define v_{-1} = π(b⁻)v_0. Since we can neither say that π(f)v_1 is a nonzero multiple of v_{-1}, nor that π(e)v_{-1} is a multiple of v_1, we must regard v_{-1} as an element of a different subspace W_{-1}. Note that W_1 and W_{-1} are the same space when either π(b⁻)v_1 or π(b⁺)v_{-1} differs from zero. These actions are zero simultaneously only when µ = 0. We denote the generating vectors of W_{-1} as v_{-2k-1} = π(f)^k v_{-1} and the generating vectors of W_1 as v_{2k+1} = π(e)^k v_1.
Lemma 2
The vectors of W 0 , W −1 and W 1 are connected by the actions of b + and b − in the following manner for every positive integer value of k.
It is clear that this can be generalized to the stated formula for v_{2k+1}. The result for v_{-2k-1} can be found analogously. Figure 1 helps to visualize how the representation space is constructed. We emphasize that the relationship between v_1 and v_{-1} is not yet determined. The action of h on the entire representation space V can already be determined.
Lemma 3 The action of h on V is given by π(h)v_k = (2µ + k)v_k for all k ∈ Z.
Proof: For even values of k, this follows just from the relations [h, e] = 2e and [h, f] = −2f. For the odd values of k, we need [h, b±] = ±b±, which is an instant consequence of equation (3). From this, we obtain π(h)v_{2k+1} = (2µ + 2k + 1)v_{2k+1}, and similarly for v_{-2k-1}.
We would like to determine the actions of b⁺ and b⁻ on every vector of W_0, W_{-1} and W_1. Our method involves defining the action of the Casimir operators on the representation space. We write the respective diagonal actions as π(C)v = λ v on V and π(Ω)v_{2k} = −δ(δ + 1)v_{2k} on W_0. We will argue that the choice of λ is not independent of δ. It is a nice exercise to show with the help of equation (3) that C² = (1 − 4Ω)(2C + 4Ω). If we let both sides of this equation act on a vector v_{2k}, we get a quadratic equation in λ. The two possible solutions are λ_1 = 2δ(2δ + 1) and λ_2 = 2(δ + 1)(2δ + 1).
We choose λ = λ 1 and remark that the results for the choice λ = λ 2 can be reproduced with the transformation δ → −δ − 1.
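The relation between λ and δ can be checked symbolically. The sketch below assumes that the su(1,1) Casimir Ω acts on W_0 with eigenvalue −δ(δ+1) (an assumption consistent with the two solutions quoted above) and verifies that both λ_1 and λ_2 satisfy the equation obtained from C² = (1 − 4Ω)(2C + 4Ω).

```python
# Minimal sympy check (a sketch under the stated assumption on Omega).
import sympy as sp

lam, delta = sp.symbols("lambda delta")
omega = -delta * (delta + 1)                 # assumed eigenvalue of Omega on W_0

lam1 = 2 * delta * (2 * delta + 1)           # lambda_1 as quoted in the text
lam2 = 2 * (delta + 1) * (2 * delta + 1)     # lambda_2 as quoted in the text

for candidate in (lam1, lam2):
    residual = candidate**2 - (1 - 4 * omega) * (2 * candidate + 4 * omega)
    print(sp.simplify(residual))             # 0 for both candidates
```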
In order to be able to determine the actions of b + and b − on every vector of V , we still need the action of the su(1, 1) Casimir operator Ω on W −1 and W 1 .
Lemma 4
The Casimir operator Ω acts on W_{-1} and W_1 as given by equation (8). As desired, the su(1,1) Casimir operator is constant on the subspaces W_{-1} and W_1 as well. Moreover, the actions on both subspaces are the same.
Proof: To prove equation (8), we will calculate π(Ω)v_{2k+1} as π(Ωb+)v_{2k}. From (3) we can immediately derive a commutation relation for Ω and b+. Using this and twice the definition of the Casimir element C, we obtain an expression for Ωb+ in terms of operators whose action on W_0 is known; the same formula holds if we change b+ into b− on both sides of the equation. All of the operators on the right-hand side can be applied to vectors of W_0. So now π(Ωb+)v_{2k} can be easily calculated, with equation (8) as a result.
It has now become straightforward to find the actions of b + and b − on all the vectors of V .
Proposition 5
The actions of the operators b+ and b− on the vectors of V are given by the relations (9). After the choice λ = λ_2 one would find these actions by means of the transformation δ → −δ − 1.
Since the actions of h, e and f follow directly from these relations, we have now constructed all representations of osp(1|2) generated by a weight vector v 0 . It remains to investigate irreducibility and the * -condition.
Extension to * -representations
Recall that V is the space spanned by all the vectors v_k, k ∈ Z. We introduce a sesquilinear form ⟨·, ·⟩ : V × V → C such that ⟨π(X)v, w⟩ = ⟨v, π(X*)w⟩ for all X ∈ osp(1|2) and for all v, w ∈ V. We see that h* = h implies that ⟨v_k, v_l⟩ = 0 for k ≠ l. This means that the set S = {v_k | k ∈ Z, v_k ≠ 0} forms an orthogonal basis for V. We denote by I the index set such that v_k ∈ S for all k ∈ I. The form ⟨·, ·⟩ is defined by putting ⟨v_k, v_l⟩ = a_k δ_{kl}, k, l ∈ I, with a_k to be determined and a_0 = 1. The definition of a *-representation requires that the representation space is a Hilbert space, so our sesquilinear form needs to be an inner product. Hence, we want a_k > 0 for k ∈ I.

From the action of h and from h* = h we obtain that µ must be a real number. Similar calculations for the actions of Ω and C reveal that both δ(δ + 1) and δ(2δ + 1) are real. These two conditions together imply that δ must be real. From the actions of b+ and b− and from (b±)* = b∓, we derive recursion relations for the a_k; in the same way we find the analogous relations for negative indices. Some readers might care for a closed expression for the a_k, which can be written in terms of the classical Pochhammer symbol (x)_k.

We wish to determine under which conditions ⟨·, ·⟩ is an inner product. Alternatively put, for which parameter values is a_k > 0 for all k ∈ I? Starting from a_0 = 1 this can be derived inductively using the two previous recursions. We find that all a_k can be positive only if µ − δ > 0 and µ + δ + 1 > 0. A similar reasoning should yield a positivity condition for the a_k for negative k. However, the resulting conditions µ ± δ + k > 0 can never be satisfied for all negative values of k. Hence, the representation π must have a lowest weight vector, because otherwise it would not be possible to define an inner product on the entire representation space. In this case, the restriction of π to an su(1, 1) subspace is known as a positive discrete series representation.

There are two choices for δ to obtain a lowest weight representation. One choice is to have v_0 as a lowest weight vector, which will arise when δ = −µ, as one sees from the actions (9). For δ = µ − 1 we obtain π(b+)v_{−2} = 0, in which case v_{−1} is the lowest weight vector. After one of these choices Proposition 5 must obviously be rewritten. Before we do this, let us make use of the inner product ⟨·, ·⟩ to construct an orthonormal basis {e_k}: e_k = v_k/√(a_k) for k ∈ I. We can now investigate all irreducible *-representations of osp(1|2).
Proposition 6 For µ > 0 there is an irreducible *-representation of osp(1|2) for which the actions of the generators on the basis vectors {e_k | k = 0, 1, 2, . . .} are given by (10). For µ > 1/2, this representation can occur alongside another one, for which the actions of the generators on the basis vectors {e_k | k = −1, 0, 1, 2, . . .} take an analogous form. The actions of the other generators follow immediately from these relations and are left for the reader to calculate.
Proof: For δ = −µ, we get the first representation, which is a lowest weight representation since π(b − )e 0 = 0. It is clear that µ must be strictly positive so that all the given actions are well defined. The case µ = 0 is excluded to be sure that π(b + )e 2k differs from zero.
In the case of the second representation, for δ = µ − 1, we must add the condition µ > 1/2 to guarantee that π(b+)e_{−1} is well defined and different from zero. We end up with the desired classification.
Note that if we were to choose λ = λ 2 in the discussion preceding Lemma 4, we would find exactly the same class of irreducible * -representations. Indeed, these two representations would pop up for the choices −δ−1 = −µ or −δ−1 = µ − 1. It immediately follows that the other actions remain the same in this case.
Finally, we notice an equivalence between both representation classes in Proposition 6. Thus, we end up with only one class of irreducible representations of osp(1|2).
Theorem 7
The only class of irreducible * -representations of osp(1|2) is a direct sum of two positive discrete series representations of su(1, 1), determined by a parameter µ > 0. The actions of the generators on the basis vectors {e k | k = 0, 1, 2, . . .} of the representation space are determined by (10).
Conclusions and further results
In this text we have obtained a classification of all irreducible *-representations of osp(1|2). The latter Lie superalgebra showed up naturally in the Wigner quantization of the considered Hamiltonian H = xp. Our main concern, however, was to investigate the spectrum of the operators Ĥ and x̂. Since these operators are written in terms of generators of osp(1|2), we felt the need to explore representations of this Lie superalgebra. They provide us with a suitable framework in which we know how the crucial operators act.
Results about the spectrum of Ĥ and x̂ have already been found and the details will be published in a subsequent paper, but it is interesting to summarize the results here. In order to find all eigenvalues of one of the operators, one defines a formal eigenvector for a specific eigenvalue t,

v(t) = Σ_n α_n(t) e_n,

where the e_n are the eigenvectors of the osp(1|2) representation space V and the α_n(t) are unknown coefficients depending on the eigenvalue t. Demanding that v(t) is an eigenvector of the operator in question gives us a three-term recurrence relation for the coefficients α_n(t). These coefficients are then identified with the orthogonal polynomials that comply with the same recurrence relation. The spectrum of the operator is then equal to the support of the weight function of this type of orthogonal polynomials. Concretely, we have that the spectrum of Ĥ is related to Meixner-Pollaczek polynomials and is equal to R with multiplicity two. Generalized Hermite polynomials are connected with the spectrum of x̂, which is simply R. Recall that Wigner quantization is a somewhat more general approach than canonical quantization. This means that one should be able to recover the canonical case from the results after Wigner quantization. Indeed, our results prove to be compatible with the well-known canonical case for the representation parameter µ = 1/4.
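To make the mechanism explicit (a generic sketch on our part; the actual matrix elements of Ĥ and x̂ in the basis {e_n} are not reproduced here), suppose the operator under consideration acts tridiagonally on the basis,

Ĥ e_n = A_n e_{n+1} + B_n e_n + C_n e_{n−1}.

Substituting v(t) = Σ_n α_n(t) e_n into Ĥ v(t) = t v(t) and collecting the coefficient of e_n then yields

A_{n−1} α_{n−1}(t) + B_n α_n(t) + C_{n+1} α_{n+1}(t) = t α_n(t),

which is precisely a three-term recurrence relation of the type satisfied by a family of orthogonal polynomials in the variable t.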
Modelling a new, low CO2 emissions, hydrogen steelmaking process
In an effort to develop breakthrough technologies that enable a drastic reduction in CO2 emissions from the steel industry (ULCOS project), the reduction of iron ore by pure hydrogen in a direct reduction shaft furnace was investigated. After experimental and modelling studies, a 2D, axisymmetrical, steady-state model called REDUCTOR was developed to simulate a counter-current moving bed reactor in which hematite pellets are reduced by pure hydrogen. This model is based on the numerical solution of the local mass, energy and momentum balances of the gas and solid species by the finite volume method. A single-pellet sub-model was included in the global furnace model to simulate the successive reactions (hematite → magnetite → wustite → iron) involved in the process, using the concept of additive reaction times. The different steps of mass transfer and possible iron sintering at the grain scale were accounted for. The kinetic parameters were derived from reduction experiments carried out in a thermobalance furnace, at different conditions, using small hematite cubes shaped from industrial pellets. Solid characterizations were also performed to further understand the microstructural evolution. First results have shown that the use of hydrogen accelerates the reduction in comparison with the CO reaction, making it possible to design a hydrogen-operated shaft reactor considerably smaller than current MIDREX and HYL units. Globally, the hydrogen steelmaking route based on this new process is technically and environmentally attractive: CO2 emissions would be reduced by more than 80%. Its future is linked to the emergence of the hydrogen economy.
Introduction
The increase in the CO2 level of the troposphere, mainly due to emissions from fossil fuel combustion, is very likely the cause of the climate change observed over the past several decades, characterized by the rapid growth of global average temperatures (IPCC, 2007). Hence, to prevent global warming, many countries signed the Kyoto protocol to decrease their greenhouse gas (GHG) emissions. Among these GHG emissions, the steel industry is responsible for 19% of those of the industrial sector (GIEC, 2001). In this context, the main European steelmakers launched, in 2004, a European research and development project called ULCOS ("Ultra-Low CO2 Steelmaking") to reduce the carbon dioxide emissions of today's best routes (1850 kg of CO2 per ton of steel) (Birat et al., 2008) by at least 50%. To achieve this target, breakthrough technologies for making steel were studied by the ULCOS partners. In this paper, we are interested in the use of pure hydrogen (H2) as the reducing agent of iron ores in the direct reduction (DR) process, which could be the core process of a new, cleaner way to produce steel with lower CO2 emissions, as shown below.
In the best hydrogen-based steelmaking breakthrough route studied in ULCOS ( Fig.1), H 2 would be produced by water electrolysis using hydraulic or nuclear electricity. Iron ore would be reduced to direct reduced iron (DRI) by H 2 in a shaft furnace, and C-free DRI would be treated in an electric arc furnace (EAF) to produce steel. This route exhibits promising performance regarding CO 2 emissions: less than 300 kg CO 2 /ton steel , including the CO 2 -cost of electricity (Wagner, 2008), the emissions from the DR furnace itself being almost zero. This represents an 84% cut in CO 2 emissions as compared to the current 1850 kg CO 2 /ton steel of the best blast furnace route. This new route would thus be a more sustainable way for making steel. However, its future development is largely dependent on the emergence of a so-called H 2 economy, when this gas would become available in large quantities, at competitive cost, and with low CO 2 emissions for its production. This should be possible with a significant increase on H 2 demand from other industrial sectors, which could be the case of energy and transportation industries. In ULCOS, this scenario was regarded as a mid-term option. Therefore, to anticipate its possible development, it was decided to check the feasibility of using 100% H 2 in a Direct Reduction shaft furnace. The aim of our work was to develop a mathematical model for simulating a future DR shaft furnace operated with pure H 2 in order to evaluate the process.
Fig.1-Hydrogen-based route to steel
Most of the current DR shaft furnace models proposed in the literature deal with reduction by the gas mixture H2/CO, and they also greatly simplify the description of the physical-chemical and thermal phenomena involved. Yu and Gillis (1981), Takenaka et al. (1986) and Negri et al. (1995) considered in their models one-directional (vertical) fluxes of solid and gas species inside the reactor. However, in current industrial DR furnaces, the reducing gas is mainly fed radially, through holes in the side wall of the lower part of the shaft, which makes the gas flow two-dimensional. Concerning the kinetic mechanisms which control the reduction of iron ore, the most common simplification is to adopt the classical unreacted shrinking core model (USCM) to describe the reaction, as done for instance by Kam and Hughes (1981) and Parisi and Laborde (2004). In the present model, a single-pellet sub-model is included in the shaft furnace model to simulate the successive reactions, the different steps of mass transfer and possible iron sintering at the particle scale. The kinetic parameters of the model were derived from experiments (thermogravimetry and sample characterization).
Experimental work
Although numerous studies were devoted to one or another aspect of the kinetics of iron ore reduction by CO or H 2 , neither general kinetic model nor unique kinetic parameter values can be directly used for predicting the reduction rate in variable conditions of gas composition and temperature. Not only do reported rate constants differ by orders of magnitude, but the activation energies also vary over a wide range. For example, the activation energy for the wustite-iron reduction by H 2 varies from 200kJ/mol (Gaballah et al, 1972), to 117kJ/mol (Takenaka et al., 1986) and 92kJ/mol (Tsay et al., 1976). Inconsistencies can also be found in works of the same authors (Valipour et al., 2006;Valipour and Khoshandam, 2009). These discrepancies may be attributed to the wide diversity of experimental conditions (reduction temperature and pressure, gas flow and gas composition) and to the starting material (mineralogical composition, crystal size, porosity and pore distribution of the ore).
To clarify the kinetic mechanisms of the reduction of hematite pellets by pure H2, we used thermogravimetry (weight-loss technique) for accurate determination and continuous recording of the weight loss of iron ore during the reduction process as a function of time. The experiments were carried out using a SETARAM TAG 24 thermobalance, which features a pair of symmetric furnaces. This arrangement enables high precision on mass variation measurements (µg accuracy) since it eliminates buoyancy and drag force effects. A specific steam generator was coupled to the furnace to possibly add a controlled water content to the reaction gas. Not to exceed the maximum weight loss imposed by the thermobalance (400 mg), we shaped small hematite cubes (3.5-5 mm side, 200-550 mg weight) from industrial pellets (4 g weight) as thermogravimetric samples. The pellets used were Brazilian CVRD hematite pellets composed of approximately 96% Fe2O3 and 4% other oxides (CaO, SiO2, Al2O3, MgO, MnO, etc.). The tests were run under isothermal conditions. A single Fe2O3 cube wrapped in a Pt wire was directly hung to the balance beam, placed inside the furnace (18 mm inner diameter) and preheated to a chosen temperature under an inert gas atmosphere of He. When the desired temperature was reached and stabilized, H2 was introduced into the furnace and maintained until the end of the reaction, when the weight loss of the sample was no longer significant. The reduced sample was then cooled to room temperature under an inert He atmosphere. Experiments were made at temperatures between 500 and 990 °C. Gas compositions were 100 vol.% H2 and 60 vol.% H2 in He. In some cases, up to 4 vol.% H2O was added to study the effect of water in the reducing gas. Experiments showed that a gas flow of 200 mL/min was sufficient to ensure good external mass transfer and not affect the reduction curves. In addition to normal complete experiments, partially reduced samples were prepared by interrupting the H2 flow inside the furnace before complete reduction took place. The complete and partially reduced samples were analyzed by X-ray diffraction (XRD), scanning electron microscopy (SEM) and Mössbauer spectrometry to characterize the morphological evolution during the reduction and also to identify the mechanisms that control the global reaction rate.
3.1 Influence of the temperature

The reduction curves obtained experimentally by thermogravimetry at different temperatures are given in Fig. 2. The conversion plotted on the y-axis represents the fractional oxygen lost by the iron ore in the course of the reduction from hematite to metallic iron (hematite/magnetite/wustite/iron).
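For reference (our formulation of the standard definition used in weight-loss experiments of this kind), the conversion X at time t follows directly from the recorded sample mass m(t):

X(t) = (m_0 − m(t)) / (m_0 − m_f),

where m_0 is the initial mass of the sample and m_f the mass after complete removal of the reducible oxygen.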
Fig. 2. Influence of the temperature on the reduction kinetics of small hematite cubes (5 mm side, 550 mg weight) by H2 (200 mL/min of total gas flow, H2/He 60/40 vol.%).

These curves show that 800 °C seems to be the optimum temperature to reduce hematite cubes by H2 in the least time (Fig. 2). For experiment temperatures under 800 °C (Fig. 2, left), the higher the temperature, the faster the reaction, except at 700 °C, where the reaction slows down at high conversions. At 850 °C, 900 °C and 990 °C (Fig. 2, right), even if an increase in temperature first accelerates the reduction, a significant slowing down of the rate at the end of the reaction increases the total time of reduction. This slowing down of the rate starts earlier and earlier as the temperature increases. In order to clarify this effect of the temperature on the reduction rate, cubes completely reduced at different temperatures were observed by SEM (Fig. 3).
Fig. 3. Influence of the temperature on the morphology of the iron obtained after reduction of hematite cubes (5 mm side, 550 mg weight) by H2 (200 mL/min of total gas flow, H2/He 60/40 vol.%).
It can be seen from Fig. 3 that the higher the temperature, the noticeably denser the iron formed. The iron obtained at high reduction temperatures (990 °C) presents fewer, bigger and isolated pores, with a smoother surface, as compared with the iron obtained at 600 °C. We attributed this densification to the tendency of the freshly formed iron to sinter. Sintering is a mass transfer phenomenon, activated by temperature and time-dependent, which decreases the specific surface area of the material. Its consequences are a decrease in pore volume, variation of the pore geometry and growth of the grains. At high temperatures, which promote sintering, the pores get thinner and eventually disappear, causing a densification of the iron obtained, as observed in Fig. 3. This makes gas-phase diffusion through the iron pores more difficult. Therefore, solid-phase diffusion of oxygen, a slower process, is the only possible mass transfer mechanism to reach the wustite that remains entrapped inside this dense iron layer and to complete the reaction. This could explain the slowing down of the rate observed at high temperatures, where sintering is favoured, and at high conversions, when a significant quantity of iron phase is already present in the sample.
Microstructure evolution during the reduction: Interrupted experiments.
With the aim of understanding the morphological evolution of the samples during the reduction, some experiments carried out at 800°C were interrupted before completion. Partially-reduced samples were observed by SEM (Fig.4), and XRD and Mössbauer spectrometry were used to quantify the intermediary oxides present in the samples.
The initial hematite sample is made up of relatively large, dense grains, with angular sides (Fig.4-a). After a few seconds, the surface, which is mainly magnetite and wustite, becomes covered with small pores (Fig.4-b) but the main structure of the initial sample remains almost unchanged. Pores slightly enlarge when wustite appears (Fig.4-c), but no significant change is observed in the microstructure. Around 50% of conversion, when the iron content in the sample is quite significant (66% Fe and 34% FeO) ( Fig.4-d), the initial grainy microstructure breaks up into smaller particles, which we termed "crystallites", and is progressively replaced by a molten-like structure, characteristic of the sponge iron ( Fig.4-e).
One of the partially reduced cubes (experiment at 800 °C, interrupted at the stage of wustite) was impregnated with resin and polished. A cross-section of this sample observed by SEM (Fig. 4-f) further reveals the structure of wustite: very porous and sub-divided into smaller grains, the crystallites.

We developed a mathematical model of a shaft furnace for the reduction of iron ore by hydrogen and its corresponding numerical code, called REDUCTOR. Our objective was to build a valuable simulation tool that would help us to optimize the operating conditions and the design of a future, clean, direct-reduction reactor operated with pure hydrogen.
In an industrial DR process, like MIDREX or HYL, iron oxide, in the form of pellets or lump ore, is introduced at the top of the shaft furnace, through a hopper, descends by gravity and encounters a counter-current flow of syngas, a mixture of mostly CO and H 2 , produced by reforming of natural gas. This reducing gas heats up the descending solids and reacts with the iron oxide, converting it into metallic iron (DRI) at the bottom of the cylindrical upper section of the reactor (i.e., the reduction zone). CO 2 and H 2 O are released. For production of cold DRI, the reduced iron is cooled and carburized by counterflowing cooling gases in the lower conic section of the furnace (cooling zone). The DRI can also be discharged hot and fed to a briquetting machine for production of HBI (Hot Briquetted Iron), or fed hot directly to an EAF (Electric Arc Furnace).
The REDUCTOR model was developed to simulate the reduction zone of a shaft furnace, similar to a MIDREX one. Thus, a counter-current moving bed cylindrical reactor, 9 m in height and 6.6 m in diameter, is described (Fig. 5). The iron ore is fed at the top of the furnace as pellets. The gas is mainly injected laterally into the bed, through a 30-cm high ring-like inlet situated 20 cm above the bottom of the reactor. A small part (2%) of the gas flow is injected from the bottom of the furnace. The reducing gas is composed of H2, H2O and N2. For the sake of simplicity, we consider that the pellets are spherical (1.2-cm diameter) and composed of 100% hematite.
The mathematical model itself is two-dimensional, axisymmetrical and steadystate. It is based on the numerical resolution of local mass, energy and momentum balances using the finite volume method (Patankar, 1980), and on a single-pellet sub-model for calculating the reduction kinetics. Calculated results include all of the relevant variables of the process (local solid and gas temperatures, compositions and velocities, reactions rates, conversion, etc.).
Fig. 5.
Principle of the REDUCTOR model
Kinetic sub-model of a single pellet
A kinetic sub-model was built according to the experimental findings to simulate the reduction of a single pellet by H2. This model is used as a subroutine of the shaft furnace model, and it predicts the reaction rate as a function of the local reduction conditions (temperature and gas composition) inside the reactor. It is based on the law of additive reaction times (Sohn, 1978). This law states that the time required to attain a certain conversion is approximately the sum of characteristic times (τ_i) corresponding to each elementary step of mass transfer. In our case, these steps are: the chemical reactions, H2 and H2O gas diffusion through the pores of the pellets, oxygen solid diffusion through the dense iron layer formed, and H2 and H2O mass transfer through the boundary layer surrounding the pellet. The characteristic time of step i, τ_i, is the time necessary to attain complete conversion in the case of a system controlled only by this elementary step i. This is a useful approximation in the case of gas-solid systems, where the different kinds of mass transfer can be considered as a series of resistances. Its great advantage is its ability to represent intermediate (mixed) kinetic regimes in a closed-form equation, drastically reducing the computation time, particularly in the case of complex reactor models (Patisson et al., 2006).
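To make the law of additive reaction times concrete, here is a small, self-contained numerical sketch (our illustration, not the REDUCTOR code itself): for a spherical grain, the time to reach a conversion X under mixed control is approximated by summing the individual-regime times, each weighted by its classical conversion function; the characteristic time values below are placeholders.

import numpy as np

# Classical conversion functions for a sphere of constant size:
#   external mass transfer control:   g_ext(X)  = X
#   product-layer diffusion control:  p_diff(X) = 1 - 3(1 - X)^(2/3) + 2(1 - X)
#   surface chemical reaction control: g_chem(X) = 1 - (1 - X)^(1/3)
def g_ext(X):
    return X

def p_diff(X):
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

def g_chem(X):
    return 1.0 - (1.0 - X) ** (1.0 / 3.0)

def time_to_conversion(X, tau_ext, tau_diff, tau_chem):
    """Law of additive reaction times: t(X) is about the sum of the single-regime times."""
    return tau_ext * g_ext(X) + tau_diff * p_diff(X) + tau_chem * g_chem(X)

# Placeholder characteristic times (s); in REDUCTOR they depend on temperature,
# gas composition, pellet size, etc.
tau_ext, tau_diff, tau_chem = 30.0, 400.0, 150.0
for X in (0.25, 0.50, 0.75, 0.99):
    t = time_to_conversion(X, tau_ext, tau_diff, tau_chem)
    print(f"X = {X:.2f} -> t approx {t:.0f} s")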
Based on the SEM images of partially reduced samples (Fig. 4), assumptions were made concerning the morphology of the pellets during the reduction. A pellet at the hematite and magnetite stages is supposed to be an agglomerate of dense spherical grains of the same diameter (25 µm), separated by inter-grain porosity (0.1). At the stage of wustite, however, the grains of the pellet break up into crystallites (Fig. 4-f) and become porous. Thus, they can be considered as a combination of dense spherical crystallites (2 µm diameter) and pores (intra-grain porosity 0.53). As the wustite crystallites reduce to metallic iron, they become themselves porous due to the molar volume difference between wustite and iron. Therefore, besides the inter-grain porosity, we also attribute intra-grain and intra-crystallite porosities to the pellets at the wustite stage.
In the case of the hematite to iron transformation, three reactions (Eq. 1 to 3) are involved, each one with a characteristic time τ_ch,i. Moreover, considering the different porosity levels inside the pellet, gas diffusion must be taken into account through inter-grain, intra-grain and intra-crystallite pores, the latter appearing only at the wustite stage. In the case of intra-grain and intra-crystallite pores, Knudsen-type diffusion is not negligible compared to molecular diffusion. Oxygen solid diffusion through the dense iron layer formed around the crystallites is also considered at high temperatures and conversions (wustite-iron reaction). The densification of the iron is also described as a function of temperature and time by introducing a characteristic time for sintering (τ_sint). Detailed equations to calculate the characteristic times (τ_i) and reaction rates (r_i) are given in the work of Wagner (2008) and summed up in Table 1. Notation is given at the end of the paper. As shown by Fig. 6-left, this kinetic model represents fairly well the experimental curves obtained for the reduction of hematite cubes with hydrogen. By plotting the variation of the mass fractions of each oxide with time (Fig. 6-right), it appears that the first two reactions are very fast and that the wustite-iron reduction controls the overall transformation. One can also notice, on these curves at 900 °C, the effect of sintering around 80% of conversion, when the reaction rate slows down. It is important to emphasize that the kinetic parameters used in this kinetic model of a single pellet, as well as in the multiparticle reactor model described below, were obtained from the experimental tests carried out with the small cubes. First, the extrapolation to an entire pellet was made changing only the size and shape of the particle. This seems a reasonable approximation since cubes and pellets are made of the same raw material; in any case, further thermogravimetric experiments should be undertaken with industrial pellets to confirm this hypothesis. Second, from a pellet in the thermobalance to the pellets in the multiparticle reactor, only the external transfer, between the bulk gas and the pellet outer surface, changes. The external transfer coefficients k_ext are thus calculated from different correlations in both cases (Culham et al., 2001, for a cube in the thermobalance; Wakao and Kaguei, 1982, for a pellet in the multiparticle bed).
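As an illustration of the kind of correlation involved (a sketch under our assumptions; the exact form used in REDUCTOR is not reproduced here), the Wakao and Kaguei (1982) packed-bed correlation Sh = 2 + 1.1 Re^0.6 Sc^(1/3) gives the external mass transfer coefficient as follows:

import numpy as np

def k_ext_wakao_kaguei(D_m, d_p, u, rho, mu):
    """External gas-solid mass transfer coefficient (m/s) in a packed bed,
    from the Wakao & Kaguei (1982) correlation Sh = 2 + 1.1 Re^0.6 Sc^(1/3).

    D_m : molecular diffusivity of the transferring species (m^2/s)
    d_p : pellet diameter (m)
    u   : superficial gas velocity (m/s)
    rho : gas density (kg/m^3)
    mu  : gas dynamic viscosity (Pa.s)
    """
    Re = rho * u * d_p / mu          # particle Reynolds number
    Sc = mu / (rho * D_m)            # Schmidt number
    Sh = 2.0 + 1.1 * Re ** 0.6 * Sc ** (1.0 / 3.0)
    return Sh * D_m / d_p

# Example with rough, placeholder gas properties at high temperature:
print(k_ext_wakao_kaguei(D_m=1.0e-4, d_p=0.012, u=2.0, rho=0.25, mu=4.0e-5))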
The Main equations of the REDUCTOR model
The main equations of the REDUCTOR model are local mass, energy and momentum balances. While considering a 2D flow for the gas and 1D flow for the solid within a cylindrical coordinate system, the following assumptions were made: steady-state, axisymmetry, heat of reaction released in the solid phase. The main equations of the model are presented in (4-14). Where are the equations? Do the only appear in the PDF version?
The main equations of the REDUCTOR model are the local mass, energy and momentum balances, presented in equations (4)-(14). Considering a 2D flow for the gas and a 1D flow for the solid within a cylindrical coordinate system, the following assumptions were made: steady state, axisymmetry, and heat of reaction released in the solid phase. In the mass balance for a gaseous species i (mol m⁻³ s⁻¹), the axial convection term ∂(c_i u_g,x)/∂x and the radial convection term (1/r) ∂(r c_i u_g,r)/∂r are balanced by the chemical source terms. The boundary conditions at the side wall (except at the gas inlet) are zero fluxes.
Main simulation results: parametric study
First of all, a reference case was simulated. As mentioned above (section 4.1), the geometry of the problem is that of the reduction zone of a conventional MIDREX shaft furnace. The dimensions of the reactor and the known characteristics of the inlet gas and solid streams are indicated in Fig. 7-a. The inlet solid flow rate used in this reference case corresponds to an annual production of 1 Mton of iron.
To keep the driving force for the reduction high and, above all, to bring the necessary heat for the reduction, 3.8 times the stoichiometric gas flow is injected inside the bed. Hematite pellets (d_p = 12 mm) are fed at the top of the furnace at 25 °C, while the gas (H2/H2O, 98/2 vol.%) is injected at 800 °C. Fig. 7-b to Fig. 7-e show the evolution of the solid mass fractions throughout the reactor at these reference conditions. As expected, the first two reactions are very fast compared with the wustite-iron transformation. It should be pointed out that complete conversion to metallic iron is attained 2 m above the bottom of the furnace (solid outlet). Thus, in these conditions, a 4-m high reactor would suffice to achieve the reduction. This is an important result because, in a conventional MIDREX process, a 9-m high reduction zone is needed to attain 92% conversion at the solid outlet, using syngas as reducing agent in the same conditions. Other simulations were carried out to test the influence of some operating conditions and physical-chemical parameters. The plots in Fig. 8 show the iron mass fractions calculated in different conditions and are to be compared with Fig. 7-e. Fig. 8-a and Fig. 8-b illustrate the influence of the gas inlet temperature on the iron mass fraction inside the furnace. When the reducing gas is injected at 600 °C (Fig. 8-a), the solid leaves the reactor with a low mean metallic iron content and the conversion is not uniform along the radial axis (2D effect). The higher degrees of reduction can be found only near the lateral gas inlet. In a large zone of the bed, the gas temperature is not high enough to promote the reduction. On the other hand, when the gas inlet temperature is 1000 °C (Fig. 8-b), the conversion is not complete either. Compared with the reference case where gas is injected at 800 °C (Fig. 7-e), the reduction at 1000 °C is noticeably slower. This is in agreement with the kinetic study (800 °C was found to be the optimum reduction temperature for hematite cubes) and results from the occurrence of sintering at high temperatures (here, 1000 °C). With smaller pellets, complete conversion is attained within a shorter bed height (Fig. 8-c), while the simulation of the reference case showed that 4 m were necessary to complete the reduction of 12-mm diameter pellets (Fig. 7-e). Conversely, with larger pellets (d_p = 24 mm) (Fig. 8-d), it is only possible to obtain a mean metallic iron content of 75% at the bottom of the reactor. These results show that diffusion inside the pellets is one of the controlling mechanisms of the hematite-iron transformation and also reveal that decreasing the pellet size could be an interesting option to accelerate the reaction.
Conclusions
The process of direct reduction of iron ore in a shaft furnace operated with pure H2 was evaluated as a promising mid-term breakthrough technology to produce steel with a dramatic (more than 80%) reduction in CO2 emissions as compared to the current blast furnace route. We developed a two-dimensional, steady-state model of this future process to evaluate a priori its performance. This model is based on the numerical solution of local balance equations using the finite volume method. A kinetic sub-model, built using the concept of additive reaction times, was incorporated in the furnace model to simulate the successive reactions involved in the reduction of a single pellet by pure H2. The kinetic laws were derived from thermogravimetric experiments performed with small hematite cubes shaped from industrial pellets. An original feature of this model is the description of iron densification by sintering, at temperatures higher than 800 °C. The kinetic parameters obtained for small cubes were extrapolated to full-size pellets assuming similar kinetic behaviour. However, further experiments should be performed with industrial pellets to confirm this assumption and to validate the kinetic model for full-size pellets.

The first results from the model showed that complete conversion to metallic iron using pure H2 could be obtained in a more compact reactor than current industrial DR furnaces, which use syngas (CO + H2) as a reducing agent. This confirms the fact that reduction by H2 is faster than that by CO. A parametric study showed that the size of the pellets and the temperature of the inlet gas have a strong influence on the reduction rate: the smaller the pellet diameter, the faster the reduction and thus the more compact the reactor. The optimum temperature was found to be 800 °C. At higher temperatures, the densification of iron due to sintering causes the reaction to slow down at high conversions. The next steps of this work should be to verify whether the kinetics used are valid for entire pellets and to adapt the model to the reduction of iron ore by H2/CO mixtures, so as to validate it against operating data of existing industrial DR processes.

Finally, the results obtained so far show the technical feasibility and the environmental interest of the hydrogen-based steelmaking route. If the so-called hydrogen economy emerges, this new hydrogen steelmaking route would become a cleaner, more sustainable way of making steel.
Accuracy of Genomic prediction for fleece traits in Inner Mongolia Cashmere goats
The fleece traits are important economic traits of goats. With the reduction of sequencing and genotyping costs and the improvement of related technologies, genomic selection for goats has become possible. This research collected pedigree, phenotype and genotype information for 2299 Inner Mongolia Cashmere goat (IMCG) individuals. We estimated fixed effects and compared the estimates of variance components, heritability and genomic predictive ability of fleece traits in IMCGs when using pedigree-based Best Linear Unbiased Prediction (ABLUP), Genomic BLUP (GBLUP) or single-step GBLUP (ssGBLUP). The fleece traits considered were cashmere production (CP), cashmere diameter (CD), cashmere length (CL) and fiber length (FL). It was found that year of production, sex, herd and individual age had highly significant effects on the four fleece traits (P < 0.01). All of these factors should be considered when the genetic parameters of fleece traits in IMCGs are evaluated. The heritabilities of FL, CL, CP and CD with the ABLUP, GBLUP and ssGBLUP methods were 0.26~0.31, 0.05~0.08, 0.15~0.20 and 0.22~0.28, respectively. It can therefore be inferred that genetic progress for CL will be relatively slow. The predictive ability of fleece traits in IMCGs with the GBLUP (56.18% to 69.06%) and ssGBLUP (66.82% to 73.70%) methods was significantly higher than that of ABLUP (36.73% to 41.25%). The predictive ability with the ssGBLUP method is significantly (29%~33%) higher than with ABLUP and slightly (4%~14%) higher than with GBLUP. ssGBLUP will therefore be a superior method for genomic selection of fleece traits in Inner Mongolia Cashmere goats. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-024-10249-7.
Introduction
By the end of 2021, the number of goats in stock in China had reached 133.32 million, and the cashmere yield was about 15,102.18 tons (http://www.stats.gov.cn). The Inner Mongolia Cashmere goat (IMCG) is an important cashmere goat breed in China, famous for its high cashmere yield and excellent cashmere quality. It is a dual-purpose breed producing cashmere and meat. Reducing cashmere diameter and increasing cashmere yield are the breeding objectives for IMCGs. With the development of quantitative genetics and molecular biology, the selection methods for livestock have gradually improved [1]. A central methodology is the BLUP method proposed by Henderson in 1975 [2]. Here, genetic parameters can be estimated based on the so-called mixed model equations, in which covariance matrices need to be defined. In the standard approach, the pedigree-based relationship matrix (A) is used and the method is referred to as ABLUP. Several studies have demonstrated that the BLUP method can achieve higher genetic gains in pigs compared to individual phenotype selection [3,4]. The BLUP method was used to estimate the breeding value of litter size traits of Landrace pigs, which indicated that selection by BLUP is feasible for the improvement of litter size in swine [5]. Jang et al. (2019) assessed the effect of progeny numbers and pedigree depth on the accuracy of the estimated breeding value (EBV) of Hanwoo beef cattle using the BLUP method; the results showed that EBVs are more precise with more progeny [6].
In 2001, the idea of genomic selection (GS) was proposed by Meuwissen. The method can improve the estimation accuracy of breeding values and increase genetic gain, in particular by shortening the generation interval, and reduce breeding costs [7-10]. Genomic best linear unbiased prediction (GBLUP) utilizes genomic relationships to estimate the genetic merit of an individual [11,12]. The genomic relationship matrix (G) defines the covariance between individuals based on observed similarity at the genomic level, rather than on the expected similarity (A) based on pedigree. Thus, more accurate predictions of merit can be obtained. The GBLUP method assigns the same variance to all loci and essentially treats them all as equally important. The single-step genomic BLUP (ssGBLUP) was proposed by Legarra et al. [13]. The core idea of the ssGBLUP method is to combine the pedigree relationship matrix (A) and the genomic relationship matrix (G) to construct a new relationship matrix (H) [14-17]. Both GBLUP and ssGBLUP use the same equations as ABLUP, but with different covariance, that is, relationship matrices.
This approach is beneficial for traits that are difficult to measure and traits with low heritability. It has been successfully applied to other livestock, such as dairy cattle [18], beef cattle [19], pigs [20], chickens [21] and sheep [22]. It was demonstrated that accuracies of breeding values for milk fatty acids of dairy cattle were low to high, ranging from 0.13 to 0.72 and from 0.18 to 0.74 when considering pedigree and genomic information, respectively, confirming that including genomic information gives more accurate predictions for milk yield than the ABLUP methodology [18]. Zhao (2019) estimated genetic parameters and conducted genomic prediction for five types of sperm morphology abnormalities in a large Duroc boar population using the GBLUP and ssGBLUP methods, and showed that the predictive ability of breeding values with ssGBLUP outperformed that with GBLUP [20]. Zhu (2021) evaluated the effect of statistical model, heritability and marker density on genomic prediction of six wool traits of sheep; the results showed that the prediction ability of the GBLUP model was better for traits with low heritability [22]. Muir (2015) reported that, with simulated data, the accuracy of GEBV was higher than that estimated using the ABLUP method when enough training generations were provided [23].
Genomic selection has been widely applied in animal breeding programs. However, due to the limitations of sequencing costs and economic benefits, the application of genomic selection in goats has not yet fully developed. With the construction of reference populations and the development of 70K commercial SNP genotyping chips for goats, routine application of GS is in sight. In this study, the phenotype, genotype and pedigree records of 2299 IMCGs were used, and genomic prediction of fleece traits in Inner Mongolia Cashmere goats (IMCGs) was performed using pedigree-based Best Linear Unbiased Prediction (ABLUP), Genomic BLUP (GBLUP) and single-step GBLUP (ssGBLUP). This study will provide a reference for genomic selection breeding of Inner Mongolia Cashmere goats.
Phenotypic data
The phenotypic data were collected from the Inner Mongolia YiWei White Cashmere Goat Limited Liability Company, Wulan Town, Etuoke Banner, Ordos City, Inner Mongolia Autonomous Region, China (39°12′N; 107°97′E). In this study, a total of 33,623 production performance records of fleece traits for 2256 individuals (372 males and 1884 females) at ages of 1 to 8 years old were collected from 2011 to 2021. All animal pedigrees can be traced back three generations. The fleece traits included cashmere production (CP), cashmere diameter (CD), cashmere length (CL) and fiber length (FL). The basic statistics of the phenotypic data were analyzed with Microsoft Excel 2021 (https://www.microsoft.com/zh-cn/microsoft-365/excel) and R 4.2.2 (https://www.r-project.org/).
Genotype data
The 2299 individuals were genotyped using the Illumina GGP_Goat_70K BeadChip (Illumina, San Diego, CA). Markers on the sex chromosome were discarded. SNPs were selected based on minor allele frequency (MAF > 0.05), proportion of missing genotypes (missing < 0.05) and Hardy-Weinberg equilibrium (HWE > 10⁻⁵); unqualified SNPs were removed. Moreover, individuals with more than 10% missing genotypes were excluded. PLINK 1.9 software was used to perform quality control on the genotype data. The genotype data after quality control were used to draw SNP density maps with the CMplot package in R.
Estimation of genetic parameters and genomic breeding value
In this study, the fixed effects including sex, year of production, herd (1 to 11), individual age, dam age and birth type were determined by a generalized linear model (GLM). The generalized linear model formula was as follows:

y_ijklmno = µ + S_i + Y_j + H_k + I_l + D_m + B_n + e_ijklmno,

where y_ijklmno is the vector of observations of the animal, µ is the mean value of the observations, S_i is the effect of sex, Y_j is the effect of year of production, H_k is the effect of herd, I_l is the effect of individual age, D_m is the effect of dam age, B_n is the effect of birth type and e_ijklmno is the residual effect.
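As an illustration of this kind of fixed-effects screening (a hedged sketch: the column names, file name and the use of Python/statsmodels are our assumptions, not the authors' actual workflow), an equivalent analysis could look like this:

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical record file with one row per performance record.
df = pd.read_csv("fleece_records.csv")  # assumed columns: CP, sex, year, herd, age, dam_age, birth_type

# Linear model with all candidate fixed effects coded as categorical factors.
model = smf.ols(
    "CP ~ C(sex) + C(year) + C(herd) + C(age) + C(dam_age) + C(birth_type)",
    data=df,
).fit()

# Type-II ANOVA F-tests for each fixed effect.
anova = sm.stats.anova_lm(model, typ=2)
print(anova)
print(anova[anova["PR(>F)"] < 0.01])  # effects significant at the 1% level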
After determining the fixed effects, a repeatability animal model was used to estimate the genetic parameters and genomic breeding values with the ABLUP, GBLUP and ssGBLUP methods. All analyses were performed with the ASREML software [24].

In this study, the model was the same for ABLUP, GBLUP and ssGBLUP:

y = µ + Xb + Za + Wc + e,   (2)

where y is the vector of observations, µ is the mean value vector of the observations, b is the vector of fixed effects, a is a vector of additive genetic effects, c is a vector of permanent environmental effects and e is a vector of residuals. The matrix X is the incidence matrix for the fixed effects, Z is the incidence matrix relating additive genetic effects and W is the incidence matrix relating permanent environmental effects.
In ABLUP, additive genetic effects are sampled from the distribution N(0, Aσ²_a); σ²_a is the additive genetic variance and A is the identity-by-descent (IBD) relationship matrix constructed from pedigree information. In GBLUP, the pedigree relationship matrix is replaced by the genomic relationship matrix (G), so that additive genetic effects are sampled from N(0, Gσ²_a) [12, 25]. In ssGBLUP, additive genetic effects are sampled from N(0, Hσ²_a), where the H matrix combines pedigree and genomic relationships. The inverse of the H matrix for ssGBLUP is constructed as

H⁻¹ = A⁻¹ + [[0, 0], [0, G⁻¹ − A₂₂⁻¹]],

where A⁻¹ is the inverse matrix of all pedigree relations, G⁻¹ is the inverse matrix of genomic relationships, and A₂₂⁻¹ is the inverse matrix of pedigree relations for the genotyped individuals.
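For readers who want the matrix algebra spelled out, here is a minimal NumPy sketch (our illustration only: it uses VanRaden's common construction of G and the standard H⁻¹ assembly, and omits any blending or scaling of G toward A₂₂ that the authors may have applied inside their ASREML workflow):

import numpy as np

def vanraden_g(M, p):
    """Genomic relationship matrix G = ZZ' / (2 * sum(p_j * (1 - p_j))).

    M : (n_individuals, n_snps) genotype matrix coded 0/1/2.
    p : allele frequencies of the counted allele at each SNP.
    """
    Z = M - 2.0 * p                      # center genotypes by twice the allele frequency
    denom = 2.0 * np.sum(p * (1.0 - p))
    return Z @ Z.T / denom

def h_inverse(A_inv, A22_inv, G_inv, genotyped_idx):
    """Assemble H^-1 = A^-1 + [[0, 0], [0, G^-1 - A22^-1]] for single-step GBLUP."""
    H_inv = A_inv.copy()
    ix = np.ix_(genotyped_idx, genotyped_idx)
    H_inv[ix] += G_inv - A22_inv
    return H_inv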
Accuracy of genetic evaluation
In this study, five-fold cross-validation was used to evaluate the accuracy of genomic prediction. First, the individuals were randomly divided into five groups; each group was then used in turn as the validation population, with the other four groups used as the training population. The accuracy of genomic prediction was evaluated by calculating the correlation between the estimated and the true phenotypic values in the validation population, divided by the square root of the heritability:

accuracy = r(ŷ, y) / √(h²).

The unbiasedness of genomic prediction was evaluated by the regression coefficient of the true phenotypic value on the estimated phenotypic value, b_y,ŷ = cov(y, ŷ) / Var(ŷ).
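A minimal sketch of the cross-validation loop (illustrative only: the actual GEBVs were obtained from ABLUP/GBLUP/ssGBLUP fits in ASREML, so the prediction step below is left as a placeholder function):

import numpy as np
from sklearn.model_selection import KFold

def predictive_ability(y, y_hat, h2):
    """Predictive ability as defined above: cor(y_hat, y) / sqrt(h2)."""
    r = np.corrcoef(y, y_hat)[0, 1]
    return r / np.sqrt(h2)

def five_fold_cv(y, predict_fn, h2, seed=42):
    """Five-fold cross-validation.

    predict_fn(train_idx, test_idx) must return predictions for test_idx
    (e.g. GEBVs from a model fitted with the test phenotypes masked).
    """
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in kf.split(y):
        y_hat = predict_fn(train_idx, test_idx)
        scores.append(predictive_ability(y[test_idx], y_hat, h2))
    return np.mean(scores), np.std(scores)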
Results
Basic statistical analysis of phenotypic data

Minimum (Min), mean, maximum (Max), standard deviation (SD) and coefficient of variation (CV) values of the fleece traits are presented in Table 1.
Analysis of genotype data
In total, 43 individuals and 16,360 SNPs were deleted from the raw genotype data. Finally, 2256 individuals and 50,728 markers were used in the analysis. The numbers of SNPs on each chromosome before and after quality control are shown in Fig. 1. The SNP density after quality control was similar across the 29 autosomes (Fig. 2).
Determination of fixed effects
The results demonstrated that year of production, sex, herd and individual age had highly significant effects on the fleece traits (P < 0.01), whereas birth type and dam age had no significant effect on FL, CL and CD (P > 0.05) (Table 2). Therefore, year, sex, herd and individual age should be considered when the genetic parameters of fleece traits in IMCGs are evaluated.
Estimation of genetic parameters
The residual plots of fleece traits for each method are shown in Figures S2-S4 [See Additional file 1, Figure S2, Figure S3, Figure S4]. All of these indicated that the models fit well. The variance components and genetic parameters of fleece traits in IMCGs are shown in Table 3. The heritabilities of FL (fiber length), CL (cashmere length), CP (cashmere production) and CD (cashmere diameter) estimated with the ABLUP method were 0.27, 0.06, 0.15 and 0.24, respectively, and the corresponding repeatabilities were 0.51, 0.08, 0.35 and 0.37. With the GBLUP method, the heritabilities of FL, CL, CP and CD were 0.31, 0.08, 0.20 and 0.28, and the repeatabilities were 0.48, 0.08, 0.34 and 0.36, respectively. With the ssGBLUP method, the heritabilities were 0.26, 0.05, 0.15 and 0.22, and the repeatabilities were 0.39, 0.05, 0.26 and 0.24, respectively. Because genomic information is considered, the heritabilities estimated with GBLUP are higher than those with ABLUP, while the estimated repeatabilities are slightly lower. The standard errors of the genomic-based methods are lower than those of the pedigree-based method.
Accuracy of GEBV in each method
Akaike's Information Criterion (AIC) and Schwarz's Bayesian Criterion (BIC or SBC) were used to evaluate the goodness of model fit. The models fitted with the ssGBLUP and GBLUP methods fitted better than the model fitted with ABLUP [See Additional file 1, Figure S5]. The accuracies of GEBV obtained with the GBLUP and ssGBLUP methods are shown in Table 4 and Fig. 3.
The results demonstrated that the prediction accuracies of the four fleece traits using ssGBLUP and GBLUP were significantly higher than those using ABLUP. The predictive abilities of the fleece traits using ABLUP, GBLUP and ssGBLUP ranged over 36.73%~41.25%, 56.18%~69.06% and 66.82%~73.70%, respectively. Except for CL, there was no significant difference in prediction accuracy between the GBLUP and ssGBLUP methods for the other three fleece traits. Numerically, the prediction accuracy of fleece traits with the ssGBLUP method is slightly higher than with the GBLUP method.
Discussion
In this study, sex, year, herd and animal age had highly significant effects on the fleece traits in IMCGs, which is similar to the findings of most studies.
Wang (2013) reported that year of production, sex and herd had highly significant influences on all fleece traits [26]. Salehi (2010) evaluated the effect of some environmental factors on fiber characteristics of Raeini Cashmere goats, and the results of that study indicated that fixed effects including age and sex should be considered in breeding programs [27]. Differences between studies may be explained by differences in rearing conditions, rainfall, and grassland quality. In this study, the results demonstrated that dam age and birth type had no significant effect on the fleece traits of Inner Mongolia Cashmere goats. Newman (1996) found that dam age had no significant effect on cashmere diameter and cashmere length in New Zealand cashmere goats [28]. Snyman reported the non-genetic factors affecting the growth and fleece traits of Afrino sheep, in which dam age had no significant effect on fiber diameter [29]. Bromley used the REML method to estimate the genetic parameters of prolificacy, weight and wool traits of Columbia, Polypay, Rambouillet and Targhee sheep, which illustrated that birth type had no significant effect on fleece traits [30]. However, Zhou reported that birth type had a significant impact on yearling cashmere length, but no significant impact on cashmere diameter, which is inconsistent with our results [31]. This may be due to the data collection period and the size of the phenotypic data set. Therefore, year, sex, herd and individual age should be considered when the genetic parameters of fleece traits in IMCGs are evaluated. Many methods, including GBLUP, ssGBLUP and Bayesian methods, have been used to perform genomic selection in plants and animals. To some extent, the method affects the prediction accuracy. The results of this study show that the estimation accuracy of ssGBLUP and GBLUP is significantly higher than that of the ABLUP method, which is basically consistent with other studies [32,33]. Mrode (2021) reported that the estimates of heritability for daily milk yield from GBLUP and ssGBLUP were essentially the same [34], which is similar to this study. Lourenco reported that the prediction accuracy of GEBV for growth traits and calving ease when using single-step genomic BLUP (ssGBLUP) in Angus cattle was higher than when using BLUP [35]. Teissier (2019) illustrated that the accuracy of GEBV for milk production traits, udder type traits, and somatic cell scores in French dairy goats was higher with ssGBLUP than with other methods; similarly, in our study the accuracy of GEBV with ssGBLUP for fiber diameter and live body weight was higher than with other methods [36]. Wei (2020) compared estimates of genetic parameters and the accuracy of breeding values for wool traits in Merino sheep between pedigree-based best linear unbiased prediction and single-step genomic best linear unbiased prediction; the results showed that the heritabilities of wool traits with ssGBLUP were slightly higher than those obtained with pedigree-based best linear unbiased prediction [37]. The accuracies of estimated breeding values were low to moderate, ranging from 0.362 to 0.573 for the whole population. Compared with ABLUP, GBLUP and ssGBLUP have relatively better prediction ability. Therefore, the ssGBLUP method is suggested for genomic selection of goats. With the continuous progress of breeding work, more efficient and simpler models will be optimized and developed. Applying these methods to genomic selection of important traits in livestock and poultry will inevitably accelerate the breeding progress of these populations.
Conclusions
In this study, the genetic parameters and genomic breeding values of fleece traits in IMCGs were estimated using the ABLUP, GBLUP and ssGBLUP methods. Regardless of the method used, the heritability of cashmere length is low, while the heritabilities of the other three fleece traits are medium, or low to medium. The prediction accuracy of GEBV for fleece traits using GBLUP and ssGBLUP is significantly higher than that with the ABLUP method, and the prediction accuracy with ssGBLUP is slightly higher than with GBLUP. The accuracy of GEBV with the ssGBLUP method for fleece traits ranged from 66.82% to 73.70%, which is 29.03%-33.97% higher than with the ABLUP method. Therefore, ssGBLUP is recommended as the method for genetic evaluation of fleece traits in IMCGs.
In ssGBLUP, the matrix relating additive genetic effects is the H matrix, which combines A and G [12,25]. Here, the individuals are divided into two parts: Part 1 contains the individuals whose genotypes are not available and Part 2 consists of the genotyped individuals. Thus, A₁₁ denotes the entries of A that provide the relationships within Part 1, A₁₂ and A₂₁ the relationships between the individuals of the two parts, and A₂₂ the pedigree relationships within Part 2. Moreover, A₂₂⁻¹ denotes the inverse of A₂₂.
Fig. 2 The distribution of SNP density on each chromosome
Fig. 3 Comparison of the accuracy of GEBV for fleece traits with the three methods
The average values of the four fleece traits (fiber length, cashmere length, cashmere production and cashmere diameter) are 18.89 cm, 6.23 cm, 740.3 g and 15.23 μm, respectively, and the corresponding coefficients of variation are 25.94%, 17.60%, 29.07% and 5.32%. The four fleece traits approximately follow a normal distribution [See Additional file 1, Figure S1].
Table 1
The basic statistics of phenotype values of fleece traits in IMCGs. FL: Fiber Length; CL: Cashmere Length; CP: Cashmere Production; CD: Cashmere Diameter.

Fig. 1 Comparison of SNP numbers on each chromosome before and after quality control
Table 2
The fixed effects of fleece traits in IMCGs. P < 0.01: the difference is extremely significant; P < 0.05: the difference is significant; P > 0.05: the difference is not significant. DF: Degree of Freedom; SS: Sum of Squares; MS: Mean Square.
Table 3
Estimation of genetic parameters of fleece traits in IMCGs. σ²_a: additive genetic variance; σ²_c: permanent environmental variance; σ²_e: residual variance; SE: standard error; h²: heritability; rep: repeatability.
Table 4
The accuracy and unbiasedness of GEBV for fleece traits with the three methods. Note: a, b denote significant differences; values with different letters differ significantly.
Efficacy and safety of intrathecal morphine for post cesarean section analgesia
Background: The Government of Nepal has been conducting Cesarean sections under the "Safe Motherhood" program all over the country. The purpose of this study was to evaluate the efficacy and safety of intrathecal morphine for post-cesarean analgesia under spinal anesthesia. Methods: A total of 300 parturients posted for Cesarean section under spinal anesthesia were divided into two groups of 150 each in this prospective randomized case-control study. The morphine group received 0.15 mg of intrathecal morphine mixed in 12 mg of 0.5% bupivacaine heavy, while the control group received 12 mg of 0.5% bupivacaine heavy alone, after proper preparation for spinal anesthesia. The parturients were assessed for time to first request of analgesic as per the Visual Analog Scale, frequency of analgesics required within 24 hr, nausea, vomiting, pruritus, sedation and respiratory depression. Results: Postoperative analgesia was significantly longer in the morphine group as compared to the control group (12.1 ± 7.6 vs 3.7 ± 2.9 hr). The frequency of analgesic requirements was also significantly lower in the morphine group (1.7 ± 2.0 vs 3.4 ± 8.1). The Visual Analog Scale score was below 4 at most times in the morphine group. The incidences of nausea, vomiting and pruritus were higher in the morphine group as compared to the control group, but without any respiratory depression. There was no significant difference in APGAR score between the groups' neonates. Conclusion: Mixing a low dose of intrathecal morphine into a standard dose of spinal anesthetic effectively prolongs the duration of post-cesarean analgesia and decreases the frequency of analgesic requirement without any major complication in parturients or fetuses.
Introduction
Cesarean section is the most common operation in obstetrics. It is now covered under the "Safe Motherhood" program running all over the country under the Government of Nepal, which makes it totally free to the patient. It is also a well-known fact that mothers have to bear severe postoperative pain because better analgesics and modern pain-control techniques are not available at all centres, let alone free of cost, in the current era of cost containment.
Spinal anesthesia is the most common anesthetic technique used for cesarean section in Nepal and across the world. Intrathecal opioids are administered along with a local anesthetic during spinal anesthesia for cesarean delivery to provide postoperative analgesia.1 The effectiveness of intrathecal morphine is well established, but there have been controversial reviews regarding its safety.2,3,4 This study was conducted to review the efficacy and safety of intrathecal morphine for post-cesarean section analgesia under spinal anesthesia, as established in the past.
Material and Method
After IRC approval, this prospective, randomized, case-control study was conducted in 300 parturients scheduled for Cesarean section, elective or emergency, under spinal anesthesia from September to November 2013 at Nepalgunj Medical College. They were randomly divided into two groups (case and control) of 150 each by the envelope method. Parturients with any contraindication to spinal anesthesia, a history of hypersensitivity to morphine, chronic pain syndrome, or current regular opioid use were excluded from the study.
All parturients received injection (Inj) Ranitidine 50 mg and Inj Metoclopramide 10 mg once the obstetricians decided on Cesarean section, preferably half an hour before surgery. Informed consent was taken from every parturient. All of them received one litre of Ringer lactate and were attached to monitors for NIBP, SpO2, and ECG. The attending obstetrician and the paediatrician were well informed about the use of intrathecal morphine but were blinded to group allocation. Parturients were allocated to receive either 0.15 mg of morphine mixed with 12 mg of 0.5% bupivacaine heavy (Group M) or 12 mg of 0.5% bupivacaine heavy only (Group C) under aseptic spinal anesthesia technique. The APGAR score was recorded at 0 and 5 minutes of birth by the pediatrician, and Neonatal Intensive Care Unit (NICU) admission was noted if required. Any patient converted to general anesthesia or requiring a supplementary analgesic intraoperatively was excluded from the study. After surgery, patients were transferred to the postoperative ward, where nursing staff monitored vital signs, the first request of analgesic, the frequency of analgesics required, nausea and vomiting, pruritus, respiratory depression, and sedation. Observations were made at 5, 15, and 30 minutes, then hourly for 12 hours, and then 4-hourly for the next 12 hours. Patients were asked to rate their current level of pain on a Visual Analog Scale (VAS) of 0 (no pain) to 10 (worst pain imaginable). A pain score >4 was treated with analgesia. The indwelling catheter remained in situ for 24 hours.
Statistical analysis
All values are expressed as the mean ± standard deviation. SPSS 15.0 was used for the statistical analysis. The independent t-test was used for analyzing differences between the groups. P values less than 0.05 were considered statistically significant.
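As an illustration only, the between-group comparison described above can be sketched with an independent t-test; the patient-level data are not available here, so the vectors below are simulated using the reported group means and standard deviations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated times (hours) to first analgesic request; the actual patient data are not reproduced.
morphine = rng.normal(12.1, 7.6, 150)
control = rng.normal(3.7, 2.9, 150)

t_stat, p_value = stats.ttest_ind(morphine, control)
print(f"morphine {morphine.mean():.1f} ± {morphine.std(ddof=1):.1f} h vs "
      f"control {control.mean():.1f} ± {control.std(ddof=1):.1f} h, p = {p_value:.4f}")
```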
Results
The results for each group are shown in Table 1. There were no significant differences between the groups in patient age, body weight, or operative time. The time of first request of analgesic was 12.1 ± 7.6 hours for the morphine group and 3.7 ± 2.9 hours for the control group (p<0.001). The number of times patients required supplemental analgesics within 24 hours after surgery was 1.7 ± 2.0 in the morphine group and 3.4 ± 8.1 in the control group (p<0.05).
There was no significant difference in APGAR scores at 0 and 5 minutes of birth between the groups. Side effects were observed more often in the morphine group than in the control group and were managed with medication and counselling. No major side effect such as respiratory depression occurred in either group. Sedation scores were 0 or 1 in all cases in both groups.
The level of satisfaction among parturients, obstetricians, pediatricians, and nursing staff was very good.
Discussion
The purpose of this study was to investigate the efficacy of intrathecal morphine added to a standard dose of spinal anesthesia in terms of postoperative analgesia, supplemental analgesic drugs required, and side effects observed among parturients undergoing cesarean section. The study shows that intrathecal morphine adds a further postoperative analgesic effect, but with some minor side effects. Morphine was chosen for the study because of its wide availability at most centres in Nepal.
Behar et al5 and Wang et al6 in 1979 reported that intrathecal and epidural opioids were effective for acute and severe pain in humans. Despite the early reports regarding the analgesic efficacy of intrathecal morphine,7,8,9 it failed to gain widespread use due to the high incidence of respiratory depression related to the use of large doses of morphine. Wang et al6 achieved 15-22 hours of analgesia with 0.5 and 1.0 mg of intrathecal morphine without respiratory depression, whereas others10,11,12 reported a high frequency of delayed respiratory depression with doses of 2-15 mg. Subsequently, the mini-dose concept of intrathecal morphine had promising results.13,14,15,16 In 1994, Blitt et al17 cited "avoidance of subarachnoid opiates" as a strategy to improve perioperative safety. Responding to it, Abouleish18 challenged these guidelines as being unsubstantiated by the scientific evidence, and warned of the legal consequences of making avoidance the standard of care. In 1999, Gwirtz et al19 published high patient satisfaction and a low incidence of side effects in over 6000 patients.
Baraka et al1 achieved effective labour analgesia with 1 mg of intrathecal morphine, but with a high incidence of pruritus, somnolence, and nausea/vomiting (85-100%). Intrathecal morphine acts by binding to dorsal horn receptors. In 1988, Abboud et al13 reported that 0.25 and 1.0 mg doses of intrathecal morphine reduced VAS pain scores by 50% or more for a mean of 27.7 and 18.6 hours respectively. In this study, 12.1 hours of analgesia was achieved with a low dose of 0.15 mg of intrathecal morphine, whereas Abouleish et al18 found 27 hours of analgesia, but with 0.2 mg of intrathecal morphine. This clearly signifies that increasing the dose of intrathecal morphine increases the duration of analgesia, but at the cost of a higher incidence of side effects. In this study, a major side effect such as respiratory depression was not observed in any case, probably because of the use of a low dose of morphine. We did notice more minor side effects, such as nausea and vomiting and pruritus, especially around the trunk and face, in the morphine group than in the control group, which is similar to others' findings. These side effects are caused by the drug gaining access to the spinal cord and brain stem from the cerebrospinal fluid.20,21 Hence, intrathecal morphine requires appropriate postoperative care.
Epidural opioids have established their popularity in the present era of analgesia. Advantages of intrathecal opioids in comparison to epidural analgesia include technical ease of administration, simplicity of postoperative management, and low cost. The failure rate of spinal injection is much lower than that of epidural placement.22 Recent changes in healthcare economics have placed cost control at the forefront of medical care and patient management. Gwirtz and associates reported that intrathecal opioids cost less than one third as much as epidural opioids.19 The Government of Nepal has been conducting Cesarean sections under the "Safe Motherhood" program with limited resources and expertise. Better modern pain-control techniques such as epidural analgesia and patient-controlled analgesia, or even a regular supply of strong opioids, are poorly available at most centres. Anesthetic assistants are allowed to provide anesthesia for Cesarean section under an Obstetrician or an MD in General Practice, but they fail to provide a pain-free postoperative period. Hence, in this study 0.15 mg of intrathecal morphine was added to a standard dose of 12 mg of bupivacaine in spinal anesthesia so that it could do justice to the parturients for the control of their postoperative pain, cost-effectively.
Conclusion
This study establishes that 0.15 mg of intrathecal morphine safely prolongs postoperative analgesia in parturients undergoing Cesarean section under spinal anesthesia, despite a higher incidence of minor side effects such as nausea, vomiting, and pruritus, but without any major complication to the parturient or fetus. This cost-effective technique also decreases the frequency of further analgesic requirements.
Acknowledgement
I am grateful to all my anesthetic assistants, nursing staffs and crew who helped me with this study.
Fig. 1. Visual Analog Scale.
The level of sedation was assessed by the following ordinal scale: 0 - awake, alert; 1 - drowsy, responds to call; 2 - drowsy, responds to tactile stimuli; 3 - deep sedation, unresponsive.
Management of side effects:
1. Respiratory depression or sedation: if the respiratory rate is <8 or the conscious state is 3, give oxygen 6 l/min via Hudson mask, call the anesthetist, administer naloxone 0.4 mg IV (repeating every 2 minutes to a maximum of 8 doses), and provide intermittent ventilation with a bag and mask, or mechanical ventilation, if necessary.
2. Pruritus: itching is common across the face, chest, and abdomen. Managed with counselling, a 5-hydroxytryptamine-3 antagonist such as ondansetron 4 mg, Inj pheniramine, Inj propofol 0.25 mg/kg, or IV/SC naloxone 50-100 mcg in severe cases.
3. Nausea and vomiting: managed with Inj ondansetron.
Fig. 2. Comparison of Visual Analog Scale (VAS) scores in the morphine and control groups.
Table 1. Details of the patients in the morphine and control groups.
The VAS scores at different time intervals are depicted in Fig. 1. The score was significantly lower, with a mean value of 2.4 ± 34 for the morphine group versus 7.2 ± 11 for the control group (p<0.001), at 4 hours after surgery. The VAS score remained lower in the morphine group for 12 hours compared to the control group (p<0.05). The VAS remained below 4 for most of the period in the morphine group. There were fluctuations in the VAS scores of the control group because of breakthrough pain between analgesic doses. There was no significant difference between the two groups after 24 hours.
The rate of NICU admission was also non-significant between the groups (Table 2). None of the neonates required naloxone for resuscitation.
Table 2. Details of the fetuses in the morphine and control groups.
Table 3. Details of side effects in the morphine and control groups.
Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data
The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this article, we first show how the so-called Skellam distribution arises from the fact that Haar wavelet and filterbank transform coefficients corresponding to measurements of this type are distributed as sums and differences of Poisson counts. We then provide two main theorems on Skellam shrinkage, one showing the near-optimality of shrinkage in the Bayesian setting and the other providing for unbiased risk estimation in a frequentist context. These results serve to yield new estimators in the Haar transform domain, including an unbiased risk estimate for shrinkage of Haar-Fisz variance-stabilized data, along with accompanying low-complexity algorithms for inference. We conclude with a simulation study demonstrating the efficacy of our Skellam shrinkage estimators both for the standard univariate wavelet test functions as well as a variety of test images taken from the image processing literature, confirming that they offer substantial performance improvements over existing alternatives.
1. Introduction. Real-world information sensing and transmission devices are subject to various types of measurement noise; for example, losses in resolution (e.g., quantization effects), randomness inherent in the signal of interest (e.g., photon or packet arrivals), and variabilities in physical devices (e.g., thermal noise, electron leakage) can all contribute significantly to signal degradation. Estimation of a vector-valued signal f ∈ R^N given noisy observations g ∈ R^N therefore plays a prominent role in a variety of engineering applications such as signal processing, digital communications, and imaging.
At the same time, statistical modeling of transform coefficients as latent variables has enjoyed tremendous popularity across these diverse applications; in particular, wavelets and other filterbank transforms provide convenient platforms. As is by now universally acknowledged, such classes of transform coefficients tend to exhibit temporal and spectral decorrelation and energy compaction properties for a variety of data. In this setting, the special case of additive white Gaussian noise is by far the most studied scenario, as the posterior distribution of coefficients is readily accessible when the likelihood function admits a closed form in the transform domain.
The twin assumptions of additivity and Gaussianity, however, are clearly inadequate for many genuine engineering applications; for instance, measurement noise is often dependent on the range space of the signal f , effects of which permeate across multiple transform coefficients and subbands [12]. For instance, the number of photoelectrons g i accumulated by the ith element of a photodiode sensor array-an integrating detector that "counts photons"-is well modeled as a Poisson random variable g i ∼ P(f i ), where f i is proportional to the average incident photon flux density at the ith sensor element.
Recall that for $g_i \sim \mathcal{P}(f_i)$ we have $\mathbb{E}[g_i] = \operatorname{Var}[g_i] = f_i$, and so in the case at hand $f_i$ reflects (up to quantum efficiency) the ith expected photoelectron count, with the resultant "noise" in the form of variability being signal-dependent and hence heteroscedastic. Indeed, the local signal-to-noise ratio at the ith sensor element is seen to grow linearly with signal strength as $\mathbb{E}[g_i^2]/\operatorname{Var}[g_i] = 1 + f_i$, implying very noisy conditions when dealing with inefficient detectors or low photon counts.
Classical variance stabilization techniques dating back to Bartlett and Anscombe [1,2,7,8,34,35] yield an approach to Poisson mean estimation designed to recover homoscedasticity, with [9] providing a summary of more recent work. Here one seeks an invertible operator $\gamma : \mathbb{Z}_+^N \to \mathbb{R}^N$, typically by way of a compressive nonlinearity such as the component-wise square root, that (approximately) maps the heteroscedastic realizations of an inhomogeneous Poisson process to the familiar additive white Gaussian setting. Standard techniques may then be used to estimate γ(f) directly, with the inverse transform γ^{-1}(·) applied post hoc.
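One concrete instance of such a stabilizing operator γ is the classical Anscombe square-root transform; the sketch below illustrates the general idea and is not the specific stabilizer analyzed later in the paper:

```python
import numpy as np

def anscombe(g):
    """Approximate variance stabilization for Poisson counts: Var[anscombe(g)] is roughly 1/4."""
    return 2.0 * np.sqrt(np.asarray(g, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(z):
    """Simple algebraic inverse, applied post hoc after Gaussian-domain estimation."""
    return (np.asarray(z, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

# Poisson counts with inhomogeneous means: variance equals the mean before stabilization.
rng = np.random.default_rng(0)
f = np.linspace(5.0, 50.0, 1024)
g = rng.poisson(f)
z = anscombe(g)           # approximately Gaussian with roughly constant variance
f_hat = inverse_anscombe(z)
```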
Inhomogeneous Poisson data can also be treated directly. For instance, empirical Bayes approaches leverage the independence of Poisson variates via their empirical marginal distributions [29,30], while multiparameter estimators borrow strength to improve upon maximum-likelihood estimation [4,10,16]; however, this ignores potential correlations amongst elements of f. To address such concerns, multiresolution approaches to Poisson intensity estimation were introduced to explicitly encode the dependencies between the Poisson variables in the context of Haar frames [22,24,27,33]. The relative merits of the various methods described above are well documented [3,19,34,35,37] and will not be repeated here.
In this paper, we address Poisson rate estimation directly in the Haar wavelet and Haar filterbank transform domains by way of the Skellam distribution [31], whose use to date has been limited to special settings [17,18,20,21,38]. After briefly reviewing wavelet and filterbank coefficient models in Section 2, we then describe in Section 3 new Bayesian and frequentist transform-domain estimators for both exact and approximate inference. Here we first derive posterior means under canonical heavy-tailed priors, along with analytical approximations to the optimal estimators that we show to be both efficient and practical. We then show how inhomogeneous Poisson variability leads to a variant of Stein's unbiased risk estimation [32] for parametric estimators in the transform domain. Simulation studies presented in Section 4 verify the effectiveness of our approach, and we conclude with a brief discussion in Section 5.
Wavelet and Filterbank Coefficient Models.
2.1. Haar Wavelet and Filterbank Transforms. Consider a nested sequence of closed subspaces $\{V_k\}_{k\in\mathbb{Z}}$ of $L^2(\mathbb{R})$ satisfying the axioms required of a multiresolution analysis [26]. Then there exists a scaling function $\varphi \in L^2(\mathbb{R})$ such that the family $\{2^{-k/2}\varphi(2^{-k}\cdot - i)\}_{i\in\mathbb{Z}}$ is an orthonormal basis of $V_k$ for all $k \in \mathbb{Z}$. There also exist a corresponding conjugate mirror filter sequence $\{h_i\}_{i\in\mathbb{Z}}$ and admissible wavelet $\psi$, with Fourier transforms $\hat h, \hat\psi$ respectively, satisfying the standard conjugate mirror filter relations. Moreover, for any fixed scale $2^k$ the wavelet family $\{2^{-k/2}\psi(\cdot/2^k - i)\}_{i\in\mathbb{Z}}$ forms an orthonormal basis of the orthogonal complement of $V_k$ in $V_{k-1}$, and for all $(i, k) \in \mathbb{Z}^2$ the wavelet families together comprise an orthobasis of $L^2(\mathbb{R})$.
Recursively expanding the above K times, we see that any $f \in L^2(\mathbb{R})$ admits an orthobasis expansion in terms of its wavelet and scaling coefficients $\{s_{K,i}, x_{k,i}\}$. The mapping $f \to \{s_{K,i}, x_{k,i}\}$ is termed a K-level continuous wavelet transform, with an analogous discrete wavelet transform defined for sequences in $\ell^2(\mathbb{Z})$. For the special case of a Haar wavelet transform, we take as our scaling function $\varphi = I_{[0,1]}$ (the unit indicator), with $h_i = \langle 2^{-1/2}\varphi(\cdot/2), \varphi(\cdot - i)\rangle$ yielding $h_0 = h_1 = 2^{-1/2}$ as the only nonzero conjugate mirror filter values. This in turn induces the recursive relationship (1) between coefficients at successive scales. In fact, this one-level transform is a version of a filterbank transform, a canonical multirate system of the type used for time-frequency analysis in digital signal processing. That is, $\hat h$ satisfies the perfect reconstruction condition [26] $\hat h^*(\omega)\hat h(\omega) + \hat h^*(\omega - \pi)\hat h(\omega - \pi) = \text{const}$. In the formulation of (1), each sequence $\{s_{k-1,i}\}_i$ is decomposed into lowpass and highpass components $\{s_{k,i}, x_{k,i}\}_i$ in turn. A recursive application of the map $\{s_{k-1,i}\} \to \{s_{k,i}, x_{k,i}\}$ yields the Haar wavelet transform, whereas the same transform applied to the highpass component $x_{k-1,i}$ further decomposes it into narrower bands. Recursive decomposition of both lowpass and highpass sequences in this way yields the Hadamard transform, otherwise known as the Haar filterbank transform.
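For concreteness, a minimal sketch of a single (unnormalized) Haar analysis step of this kind, producing pairwise sums and differences of adjacent samples, might look as follows; the unnormalized convention keeps the coefficients integer-valued, which is what makes the Skellam model below applicable:

```python
import numpy as np

def haar_analysis_level(s):
    """One level of the (unnormalized) Haar transform: pairwise sums and differences."""
    s = np.asarray(s)
    assert s.size % 2 == 0, "length must be even for a single decimation step"
    even, odd = s[0::2], s[1::2]
    scaling = even + odd      # lowpass: sums of adjacent samples
    wavelet = even - odd      # highpass: differences of adjacent samples
    return scaling, wavelet

def haar_synthesis_level(scaling, wavelet):
    """Invert the unnormalized Haar step exactly (sums and differences share parity)."""
    even = (scaling + wavelet) // 2
    odd = (scaling - wavelet) // 2
    out = np.empty(scaling.size * 2, dtype=scaling.dtype)
    out[0::2], out[1::2] = even, odd
    return out

counts = np.array([3, 1, 4, 1, 5, 9, 2, 6])
t, y = haar_analysis_level(counts)        # t: Poisson sums, y: Skellam-distributed differences
assert np.array_equal(haar_synthesis_level(t, y), counts)
```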
The low computational requirements of these transforms make them attractive alternatives to other joint time-frequency analysis techniques possessing better frequency localization. The Haar transforms enjoy orthogonality, compact spatial support, and computational simplicity, with the Haar wavelet transform satisfying the axioms of a multiresolution analysis. We later demonstrate how their simplicity serves to admit analytical tractability that in turn enables efficient inference and estimation procedures.
As a final note, we omit subband index k in the sequel, as wavelet coefficients x k,i are always aggregated within a given scale 2 k ; for notational clarity in the finite-dimensional setting, further suppression of subscript i will be used to indicate a generic scalar coefficient x (·) , as distinct from vector-valued quantities (e.g., x) indicated in bold throughout.
Transform-Domain Denoising.
Turning to the problem of transform-domain denoising, consider the case in which a vector of noisy orthobasis coefficients $y \sim \mathcal{N}(x, \sigma^2 I_N)$ is observed, with x deterministic but unknown. Writing an estimator for x as $\hat X(Y) = Y + \theta(Y)$, Stein's Lemma [32] may be used to formulate an unbiased estimate of the associated $\ell_2$ risk $\mathbb{E}\|\hat X - x\|_2^2$ as follows.
Hence, by replacing the latter expectation of (2) with an evaluation over the vector y of observed transform coefficients, one may directly optimize parameter choices for nonlinear shrinkage estimators, for example the soft thresholding rule of (3). As an example that we shall return to later, SUREShrink [6] is obtained from (2) and (3), with τ chosen to minimize the resulting empirical risk estimate.
2.3. The Skellam Distribution. In contrast to the above setting of additive white Gaussian noise, the distribution of inhomogeneous Poisson data $g : g_i \sim \mathcal{P}(f_i)$ is not invariant under orthogonal transformation, and so transform-domain denoising ceases to be as straightforward in the general setting [12]. However, for the special cases of the Haar wavelet and filterbank transforms described in Section 2.1, we may characterize their coefficient distributions in closed form as sums and differences of Poisson counts.
To this end, let the matrix $W \in \{0, \pm 1\}^{N \times N}$ denote an (unnormalized) Haar filterbank transform. Taking x := Wf to be the transform of $f \in \mathbb{Z}_+^N$, the resultant wavelet and scaling coefficients comprise sums and differences of elements of f, as in (6) and (7). An analogous definition with respect to the observed data $g_i \sim \mathcal{P}(f_i)$ and its Haar filterbank transform y := Wg implies that the empirical wavelet and scaling coefficients themselves comprise sums and differences of Poisson counts, as in (8) and (9). Thus the empirical coefficients defined by (8) are effectively corrupted versions of those in (6). While the sum of Poisson variates $y_i^+$ and $y_i^-$ is again Poisson, as indicated by the expression of (9) for the empirical scaling coefficient $t_i$, the distribution of their difference also admits a closed-form expression, first characterized by Skellam [31] using generating functions.
Proposition 2.1. Fix $x^+, x^- \in \mathbb{R}_+$, and let the random variable $Y \in \mathbb{Z}$ denote the difference of two Poisson variates $y^+ \sim \mathcal{P}(x^+)$ and $y^- \sim \mathcal{P}(x^-)$. Defining $I_y(\cdot)$ to be the yth-order modified Bessel function of the first kind, we have that
$$\Pr(Y = y) = e^{-(x^+ + x^-)}\left(\frac{x^+}{x^-}\right)^{y/2} I_{|y|}\!\left(2\sqrt{x^+ x^-}\right), \qquad y \in \mathbb{Z}.$$
Proof. A direct verification is provided by series representations of Bessel functions [11]. First, note that via correlation of Poisson densities we obtain directly the convolution expression (11). By a change of variables in the summation index of (11) according to max(y, 0) = (|y| + y)/2, we obtain a summand that is symmetric in $y \in \mathbb{Z}$. The result then follows from the observation that $I_\nu(\cdot)$ admits, for positive argument and order, a real-valued Taylor expansion, coupled with the fact that $I_{-\nu}(\cdot) = I_\nu(\cdot)$ for $\nu \in \mathbb{N}$.
We have thus proved that the distribution of each empirical coefficient (8) may be described as follows.
Definition 2.1 (Skellam Distribution [31]). Let $Y \in \mathbb{Z}$ denote a difference of Poisson variates according to (6)-(9), with index i suppressed for clarity as in Proposition 2.1. Then $Y \sim \mathcal{S}(x, s)$, with mean $x = x^+ - x^-$ and variance $s = x^+ + x^-$, where $s \geq |x|$, and the variate y takes the Skellam distribution of Proposition 2.1.
Remark 2.1 (Support and Limiting Cases). As the difference of two Poisson variates, a Skellam variate ranges over the integers unless either $x^+$ or $x^-$ equals 0, in which case a direct appeal to the discrete convolution of (11) recovers the limiting Poisson cases. On the other hand, as both $x^+, x^- \to \infty$, it follows from the Central Limit Theorem that the distribution of a Skellam variate tends toward that of a Normal.
Remark 2.2 (Skewness and Symmetry).
The skewness of a Skellam random variable is easily obtained from its generating function as $s^{-3/2}x$ [31], and hence is proportional to the difference in Poisson means $x^+$ and $x^-$, with a rate that grows in inverse proportion to their sum. Indeed, when x = 0 the distribution is symmetric, with variance s proportional to the geometric mean of $x^+$ and $x^-$ according to (10). A standard S(0, 1) Skellam random variable is shown in Fig. 1(a), with Fig. 1(b) detailing the tail behavior of other symmetric cases S(0, s); examples illustrating skewness as a general function of mean and variance are shown in Fig. 1(c).
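As a small numerical illustration of this distribution, the pmf of a difference of two Poisson counts can be evaluated through the modified Bessel function form or with SciPy's built-in Skellam law, and checked against Monte Carlo draws; the parameter values below are arbitrary:

```python
import numpy as np
from scipy.stats import skellam
from scipy.special import ive

def skellam_pmf(y, x_plus, x_minus):
    """p(y) = exp(-(x+ + x-)) (x+/x-)^(y/2) I_|y|(2 sqrt(x+ x-)), via the scaled Bessel ive."""
    y = np.asarray(y)
    root = 2.0 * np.sqrt(x_plus * x_minus)
    return (np.exp(root - (x_plus + x_minus)) * (x_plus / x_minus) ** (y / 2.0)
            * ive(np.abs(y), root))

y = np.arange(-15, 16)
x_plus, x_minus = 6.0, 2.5                      # so x = 3.5 and s = 8.5
np.testing.assert_allclose(skellam_pmf(y, x_plus, x_minus),
                           skellam.pmf(y, x_plus, x_minus), rtol=1e-7, atol=1e-12)

# Monte Carlo sanity check: difference of two independent Poisson draws.
rng = np.random.default_rng(0)
diff = rng.poisson(x_plus, 200_000) - rng.poisson(x_minus, 200_000)
print(diff.mean(), diff.var())                  # close to x = 3.5 and s = 8.5
```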
Returning now to our context of Haar transforms, we next observe that the density of empirical coefficient y i depends only on the corresponding wavelet and scaling coefficients x i and s i (and similarly for the coarsest Haar wavelet subband).
Let x := Wf be a vector of Haar filterbank transform coefficients, and y that of the empirical coefficients; then each empirical coefficient satisfies $y_i \sim \mathcal{S}(x_i, s_i)$. Proof. The relation is a straightforward consequence of the choice of transform. From the definitions in (7), let $v_i$ and $w_i$ be row vectors from W such that $s_i = v_i f$ and $x_i = w_i f$, respectively. It is easily verified that the jth entry of $(v_i + w_i)/2$ is nonzero if and only if $W_{ij} = 1$, and hence the result. In the sequel we work throughout with the generic scalar quantity $Y \sim \mathcal{S}(x, s)$, where the Haar scaling coefficient s is given and the Haar wavelet coefficient x is a latent variable, assumed to be random or deterministic depending on context. Although the scaling coefficient is not directly observed in practice, this standard wavelet estimation assumption amounts to using the empirical scaling coefficient $t_i$ of (8) as a plug-in estimator of $s_i$ in (6). As Haar scaling coefficients constitute sums of Poisson variates in this context, their expected signal-to-noise ratios are likely to be high, in keeping with the arguments of Section 1, and moreover they admit asymptotic Normality.
3.1. Key Properties of the Skellam Likelihood Model. We first develop some needed properties of the Skellam likelihood model; while these follow from standard recurrence relations for Bessel functions of integral order, probabilistic derivations can prove more illuminating. We begin with expressions for partial derivatives of the Skellam distribution.
Property 3.1 (Derivatives of the Skellam Likelihood). Partial derivatives of the Skellam likelihood p(y ; x, s) admit finite-difference expressions in y. Proof. Recall from Definition 2.1 that a Skellam variate $Y \sim \mathcal{S}(x, s)$ comprises the difference of two Poisson variates with respective means $x^+$ and $x^-$. Denoting by F the (conjugate) Fourier transform operator acting on the corresponding probability measure, its characteristic function in ω follows in closed form, and hence, invoking linearity, the partial derivative in x may be computed directly; the partial derivative of p(y ; x, s) in s follows similarly. Property 3.1 implies that (∂/∂x)p(y ; x, s) is the normalized first central difference of the likelihood on its domain y, and that (∂/∂s)p(y ; x, s) is one-half the normalized second central difference. Hence slope and curvature of the likelihood are encoded directly in the Skellam score functions.
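These finite-difference relations are easy to check numerically; the sketch below uses SciPy's Skellam pmf under the (x, s) parameterization and compares numerical partial derivatives with the corresponding central differences in y (the signs follow this parameterization, in which increasing x shifts probability mass toward larger y):

```python
import numpy as np
from scipy.stats import skellam

def pmf(y, x, s):
    """Skellam pmf parameterized by wavelet coefficient x (mean) and scaling coefficient s (variance)."""
    return skellam.pmf(y, (s + x) / 2.0, (s - x) / 2.0)

x, s, y = 1.5, 20.0, np.arange(-10, 11)
eps = 1e-6

# Numerical partial derivatives with respect to x and s.
dp_dx = (pmf(y, x + eps, s) - pmf(y, x - eps, s)) / (2 * eps)
dp_ds = (pmf(y, x, s + eps) - pmf(y, x, s - eps)) / (2 * eps)

# Central differences in y: first difference for the x-derivative, half the second for the s-derivative.
fd_x = (pmf(y - 1, x, s) - pmf(y + 1, x, s)) / 2.0
fd_s = (pmf(y - 1, x, s) - 2.0 * pmf(y, x, s) + pmf(y + 1, x, s)) / 2.0

np.testing.assert_allclose(dp_dx, fd_x, atol=1e-7)
np.testing.assert_allclose(dp_ds, fd_s, atol=1e-7)
```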
This property lends itself to easy calculation of the Skellam likelihood, as fixed initial values may be tabulated and used to initialize the recursion, thus avoiding the evaluation of Bessel functions.
Combining Properties 3.1 and 3.2, we have our final result.
The Skellam likelihood p(y ; x, s) satisfies a linear, first-order hyperbolic partial differential equation in (x, s), for fixed y; this is the relation (13) invoked below.
Prior Models and Posterior Inference via Shrinkage.
Having developed needed properties of the Skellam likelihood p(y ; x, s) above, and with s assumed directly observed, we now consider the setting in which each underlying transform coefficient x : |x| ≤ s is modeled as a random variable. While determining the most appropriate choice of prior distribution for different problem domains remains an open area of research, with examples ranging from generalized Gaussian distributions through discrete and continuous scale mixtures, we make no attempt here to introduce new insights on prior elicitation. Rather, we focus on optimal estimation for general classes of prior distributions having compact support.
The problem being univariate, exact inference is realizable through numerical methods; however, the requisite determination of prior parameters, possibly from data via empirical Bayes, renders this approach infeasible in practice, as posterior values cannot be easily tabulated in advance. To this end, the main result of this section is an approximate Skellam conditional mean estimator with bounded error, obtained as a closed-form shrinkage rule.
Theorem 2 (Skellam Shrinkage). Consider a Skellam random variable $Y \sim \mathcal{S}(X, s)$, with s fixed but X a random variable that admits a density with respect to Lebesgue measure on [−s, s]. Define the Bayes point estimator (14), a projection of the score function in x via conditional expectation. Its squared approximation error, relative to the conditional expectation $\hat X_{\mathrm{MMSE}} := \mathbb{E}(X \mid Y ; s)$, then satisfies a Cauchy-Schwarz bound whose factors are analyzed below. Proof. Bayes' rule applied to the differential equation of (13) yields the necessary conditional expectations, after which Cauchy-Schwarz serves to bound its latter term.
While we cannot control the second moment of X conditioned on Y in the bound above, its latter term admits by Property 3.1 an equivalent expression in which $\delta_Y^2(\cdot)$ denotes the normalized second central difference in Y, analogous to a second derivative. This term therefore goes as the square of the normalized local curvature in the likelihood at Y = y, averaged over the posterior distribution of X; it will be small on portions of the domain over which the likelihood remains approximately linear for sets of X having high posterior probability.
Theorem 2 thus provides a means of obtaining Bayesian shrinkage rules under different choices of prior distribution p(X; s), via evaluation of the expectation of (14) as in (15). While the above formulation is amenable to further approximation via Taylor expansion (akin to Laplace approximation), we focus here on a direct evaluation of $\mathbb{E}[\tfrac{\partial}{\partial X}\ln p(X; s) \mid Y ; s]$. Discounting the former term of (15), which simply measures the difference in posterior tail decay at x = ±s and goes to zero with increasing s, the derivative on [−s, s] is easily computed for the so-called generalized Gaussian distribution for p > 0, with location parameter µ and scale parameter $\sigma_x$, with Γ(·) the Gamma function and $\zeta(p) = [\Gamma(1/p)/\Gamma(3/p)]^{p/2}$. This distribution being unimodal and symmetric about its mean, we obtain for µ = 0 an expression from which the Gaussian (p = 2) and Laplacian (p = 1) cases admit straightforward evaluation.
Proposition 3.1 (Truncated Normal and Laplace Priors). Let g(x) denote a generalized Gaussian distribution with exponent p > 0 having mean zero and variance $\sigma_x^2$, and set p(X; s) to be its truncation to [−s, s], with γ(·, ·) the lower incomplete Gamma function entering the normalization. The resulting conditional-mean shrinkage rules are illustrated in Fig. 2; their asymptotic behavior in turn enables a simple soft-thresholding rule (20) to be fitted. The soft-thresholding estimator of (20) can in turn be adapted to yield a piecewise-linear estimator whose slope matches that of (19) at the origin. To accomplish this, note that for any prior distribution with even symmetry, (12) of Definition 2.1 implies odd symmetry of the posterior expectation functional; i.e., E(X | Y = y ; s) = − E(X | Y = −y ; s). Therefore the slope of any shrinkage estimator at the origin may be computed from the slope term E(X | Y = 1 ; s), which may in turn be pre-computed to arbitrary accuracy using numerical methods and indexed as a function of s and prior variance $\sigma_x^2$, yielding the piecewise-linear shrinkage estimator (21). (Figure caption excerpt: the exact rule corresponding to E(X | Y ; s), computed numerically, together with the soft-thresholding estimator of (20) and the piecewise-linear (SBL) approximation of (21).) The ideas above can be straightforwardly extended to the multivariate case [14], owing to conditional independence properties of the Skellam likelihood; derivatives may also be computed for the case of mixture priors, though no efficient solution is yet known to compute the mixture weights.
Parameter and Risk Estimation for Skellam Shrinkage.
Having derived Bayes estimators for the class of unimodal, zero-mean, symmetric priors considered above, we now turn to parameter and risk estimation for Skellam shrinkage. With only a single observation of each Haar coefficient in this heteroscedastic setting, maximum-likelihood methods will simply return the identity as a shrinkage rule. However, by borrowing strength across multiple coefficient observations we may improve upon the risk properties of this approach; as we now detail, this is equally attainable in a frequentist or Bayes setting. Here we consider coefficient aggregation within a given scale, with the notation $\sum_i(\cdot)_i$ below indicating summation over location parameter i within a single Haar subband.
The main result of this section is the following theorem, which yields a procedure for unbiased $\ell_2$ risk estimation in the context of soft thresholding and other shrinkage operators.
Proof. The risk $\mathbb{E}\|\hat X(Y, T) - x\|_2^2$ may be expanded directly. To evaluate the final term in (23), recall the definitions of (6) and (8), and furthermore that $Y^{\pm} = (T \pm Y)/2$. By conditioning on $Y^+$ or $Y^-$ we in turn obtain Poisson variates, and thus general results for discrete exponential families [10,16] apply, yielding the final relations needed to complete the proof of Theorem 3. Parameters of any chosen estimator form $\hat X(Y, T)$ may thus be optimized by minimizing the unbiased risk estimate of Theorem 3 with respect to observed data vectors y and t. As an important special case, we obtain the following corollary.
Recall Stein's unbiased risk estimate (SUREShrink) result [6] for soft thresholding in the case of additive white Gaussian noise of variance σ², as described in (3)-(5) of Section 2.2. Recasting the objective function of (5) for SUREShrink threshold optimization as (25), we see that $t_i$ in (24) plays a role analogous to σ² in the homoscedastic SUREShrink setting represented by (25), with the dependence on coefficient index i reflecting the heteroscedasticity present in the Skellam likelihood case.
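As a schematic of this style of empirical risk minimization (not the paper's exact estimator (24), whose form is not reproduced here), the snippet below implements the standard Gaussian SURE objective for soft thresholding and the heteroscedastic analogue obtained by substituting the per-coefficient t_i for σ², then selects the threshold by grid search:

```python
import numpy as np

def sure_soft_gaussian(tau, y, sigma2):
    """Donoho-Johnstone SURE for soft thresholding under homoscedastic Gaussian noise."""
    y = np.asarray(y, dtype=float)
    return (y.size * sigma2
            - 2.0 * sigma2 * np.count_nonzero(np.abs(y) <= tau)
            + np.sum(np.minimum(y ** 2, tau ** 2)))

def sure_soft_heteroscedastic(tau, y, t):
    """Schematic Skellam-flavoured analogue: each sigma^2 term replaced by the local t_i."""
    y, t = np.asarray(y, dtype=float), np.asarray(t, dtype=float)
    return np.sum(t - 2.0 * t * (np.abs(y) <= tau) + np.minimum(y ** 2, tau ** 2))

def best_threshold(risk_fn, y, *args):
    """Grid search over candidate thresholds |y_i| (the risk is piecewise smooth between them)."""
    candidates = np.unique(np.abs(np.asarray(y, dtype=float)))
    risks = [risk_fn(tau, y, *args) for tau in candidates]
    return candidates[int(np.argmin(risks))]
```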
3.3.1. SkellamShrink with Adjusted Thresholds. We may also consider a generalization of the SkellamShrink soft thresholding estimator of Corollary (3.1), inspired by the Bayes point estimator sgn(Y i ) max(|Y i | − τ (s i ), 0) of Theorem 2, in which individual coefficient thresholds depend in general on the corresponding scaling coefficient. By treating the quantity σ x appearing in the Bayesian estimators of Section 3.2 not as a prior variance parameter, but simply as part of a parametric risk form to be optimized, we may appeal directly to the unbiased risk estimation formulation of Theorem 3. Since a priori knowledge limitations may well preclude exact prior elicitation in practice, this flexible approach provides a degree of robustness to prior model mismatch, as borne out by our simulation studies below.
As an example, consider a shrinkage estimator $\hat X_i(Y_i, T_i) = y_i + \theta(Y_i, T_i; \sigma_x)$ that depends on $T_i$ and an unknown parameter $\sigma_x$ as per the soft thresholding formulation of (20), with $\lfloor\cdot\rfloor$ and $\lceil\cdot\rceil$ denoting the floor and ceiling operators, respectively, and $c(t_i)$ and $d(t_i)$ adjusting for the singularity at $|y_i| = \tau_i \pm 1$.
Unbiased Risk Estimates for Variance-Stabilized Shrinkage. The strategy outlined above naturally generalizes to any form of parametric estimator via the unbiased risk estimation formulation of Theorem 3, enabling an improvement over the variance-stabilization strategies of Section 1 by direct minimization of empirical risk. As a specific example, consider the Haar-Fisz estimator of [9], in which each empirical Haar wavelet coefficient $y_i$ is scaled by the root of its corresponding empirical scaling coefficient as $\tilde y_i := y_i / \sqrt{t_i}$ in order to achieve variance stabilization, after which standard Gaussian shrinkage methods such as SUREShrink are applied and the variance stabilization step inverted.
For the case of nonlinear shrinkage operators, of course, neither the resultant estimators nor the risk estimates themselves will in general commute with this Haar-Fisz strategy, leading to a loss of the unbiasedness property of risk minimization, in contrast to the direct application of Theorem 3. Taking Haar-Fisz soft thresholding with some fixed threshold τ as an example, the equivalent Skellam shrinkage rule is seen to scale its threshold with $\sqrt{T_i}$, in contrast to the scaling of $T_i$ implied by Theorem 2, as in the adjusted-threshold approach of (26) above. In an analogous manner, the corresponding exact unbiased risk estimate for this shrinkage rule can in turn be derived directly by appeal to Theorem 3, rather than relying on the heretofore standard Haar-Fisz approach of SUREShrink empirical risk minimization via (25), applied to the variance-stabilized coefficients $\tilde y_i$.
Empirical Bayes via Method of Moments.
We conclude this section with a simple and effective empirical Bayes strategy for estimating scaling coefficients $\{s_i\}$ and prior parameter $\sigma_x$ for the Bayesian shrinkage rules derived in Section 3.2 above. Recall from (9) that $s_i = \mathbb{E}[T_i]$, implying the use of the empirical scaling coefficient $t_i$ as a direct substitute for $s_i$ in the Bayesian setting. Note that $s_i = \sum_{j : |W_{ij}| = 1} f_j$ for Haar transform matrix W, with $T_i$ a corresponding sum of Poisson variates with means $f_j$ representing the underlying intensities of interest to be estimated. In turn, as the sum $s_i$ increases, the relative risk $\mathbb{E}|T_i - s_i|^2 / s_i^2$ of the plug-in estimator $\hat s_i = T_i$ will rapidly go to zero, precisely at rate $1/s_i$.
Next note that under the assumption of a unimodal, zero-mean, and symmetric prior distribution p(X; s), only Var X remains to be estimated. A convenient moment estimator is available, since $T_i \sim \mathcal{P}(s_i)$ and $Y_i \sim \mathcal{S}(x_i, s_i)$ together imply that $\operatorname{Var} T_i = \operatorname{Var} Y_i = \mathbb{E}[Y_i^2] - X_i^2$, and hence we obtain $\widehat{\operatorname{Var}}\,X = (1/N)\sum_i (y_i^2 - t_i)$. Once the estimates $\widehat{\operatorname{Var}}\,X$ and $\{\hat s_i\}$ are obtained for the coefficient population of interest, the implicit variance equations of (16) and (17) may be solved numerically to yield the scale parameter $\sigma_x$ of the truncated generalized Gaussian distribution considered earlier, with $\sigma_x^2 = \operatorname{Var} X$ in the limit as s grows large. In our simulation regimes, we observed no discernible difference in overall wavelet-based estimation performance by setting $\sigma_x^2 = \widehat{\operatorname{Var}}\,X$ directly.
4. Simulation Study. We now evaluate the estimators derived above. We considered exact Skellam Bayes (SB) posterior mean estimators, computed numerically with respect to a given prior; the Skellam Bayes Gaussian approximation (SBG) linear shrinkage of (18); the Skellam Bayes Laplacian soft-thresholding (SBT) approximation of (20); the Skellam Bayes Laplacian piecewise-linear (SBL) approximation of (21); the SkellamShrink (SS) soft-thresholding estimator with empirical risk minimization of Corollary 3.1; and the SkellamShrink hybrid (SH) adjusted-threshold shrinkage of (26). Estimators were implemented using a 3-level undecimated Haar wavelet decomposition, with empirical risk minimization or the moment methods of Section 3.3.3 above used to estimate parameters for the corresponding shrinkage rules. As a first comparison of relative performance, Figs. 4 and 5 tabulate results in mean-squared error (MSE) for Skellam likelihood inference in cases when the latent variables of interest are drawn from Normal and Laplacian distributions with known parameter $\sigma_x^2 \in \{32, 64, 128\}$. The accompanying box plots are shown on a log-MSE scale for visualization purposes, in order to better reveal differences between estimator performance. These figures confirm that exact Bayesian estimators (SB) outperform all others, but indicate that the prior-specific Skellam Bayes approximations SBG and SBL are comparable, respectively, for the Gaussian and Laplacian cases over the range of prior parameters shown here. Among soft-thresholding approaches, the frequentist SkellamShrink methods SS and SH in turn offer improvements over the Bayesian soft-thresholding estimator SBT.
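For concreteness, the moment-based parameter estimates of Section 3.3.3 on which these shrinkage rules rely can be sketched as follows; the toy subband below is synthetic:

```python
import numpy as np

def skellam_moment_estimates(y, t):
    """Plug-in estimates for one Haar subband: scaling coefficients and prior variance of X."""
    y, t = np.asarray(y, dtype=float), np.asarray(t, dtype=float)
    s_hat = t                                     # s_i = E[T_i], so t_i is the plug-in estimate
    var_x_hat = max(np.mean(y ** 2 - t), 0.0)     # since Var Y_i = s_i and E[Y_i^2] = s_i + x_i^2
    return s_hat, var_x_hat

# Toy subband: Poisson counts from a piecewise-constant intensity with a few jumps.
rng = np.random.default_rng(0)
f = np.repeat([20.0, 35.0, 20.0, 30.0], 64)
g = rng.poisson(f)
t, y = g[0::2] + g[1::2], g[0::2] - g[1::2]       # one level of unnormalized Haar coefficients
print(skellam_moment_estimates(y, t)[1])
```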
Evaluation via Standard Wavelet Test Functions.
We next consider the standard set of univariate wavelet test functions: "smooth," "blocks," "bumps," "angles," "spikes," and "bursts," as illustrated in Fig. 6. A thorough comparative evaluation of several Poisson intensity estimation methods using these test functions is detailed in [3], and here we repeat the same set of experiments using the estimators outlined above, along with the best-performing methods reviewed in [3], including variance stabilization techniques currently in wide use as well as the more recent methods of [22,33]. To retain consistency with the experimental procedure of [3], all methods except for [22] were implemented using a 5-level translation-invariant wavelet decomposition; the implementation of [22] provided by [3] employs a decomposition level that is logarithmic in the data size, which we retained here.
As can be seen from Fig. 7, the Skellam-based techniques we propose here measure well against alternatives despite the diversity of features across these test functions and the corresponding possibilities of model mismatch with respect to any assumed prior distribution of wavelet coefficients. (Fig. 6 caption: prototype intensity functions and corresponding Poisson-corrupted versions [3].) Overall, it can be seen that only the multiscale model of [22] offers comparable performance.
4.2. Error and Perceptual Quality for Standard Test Images. We now consider an image reconstruction scenario using a test set of well-known 8-bit gray scale test images that feature frequently in the engineering literature: "Barbara," "boat," "clown," "fingerprint," "house," "Lena," and "peppers." Corresponding pixel values are considered as the true underlying intensity function of interest; both noise level characterization and reconstruction results are reported in terms of signal-to-noise ratio (SNR) in decibels, a quantity proportional to log-MSE. By way of competing approaches we consider [6,22,28,33], with [6,28] used in conjunction with the variance stabilization methods of [1,9]. Implementations were set at an equal baseline comprising a 3-level undecimated Haar wavelet decomposition, with no a priori neighborhood structures assumed amongst the coefficients.
The performance of the Skellam methods proposed here offers noticeable improvements over alternative approaches, in terms of visual quality (Fig. 8), mean-squared error (Table 1), and perceptual error (Table 2). In terms of visual quality, we have generally observed that the proposed Skellam Bayes approaches yield restored images in which the spatial smoothing is appropriately locally adaptive; for example, these methods yield effective noise attenuation in both bright (see forehead) and dark (see black background) regions of the example image shown in Fig. 8. (Abbreviations used in the comparisons: [1] with hard/soft universal thresholding [5]; CH/CS denotes corrected hard/soft thresholding [23]; K indicates the multiscale model of Kolaczyk [22]; and TN is the multiscale multiplicative innovation model of Timmermann & Nowak [33].) A comparison of Figs. 8(c) and (f) reveals the importance of incorporating the scaling coefficient s explicitly in the estimator; images processed via SBT tended to be similar to those for which SH was used, but with softer edges. In comparison, methods based on variance stabilization typically fail to completely resolve the heteroscedasticity of the underlying process, as evidenced by the under- and over-smoothed noise in bright regions such as the forehead and hair textures of Fig. 8(g). The Bayesian method of [33] typically yields far smoother output images, in which texture information is almost entirely lost; see, for example, the hair in Fig. 8(h). With the exception of SS, Skellam-based estimation methods suffer considerably less from the reconstruction artifacts typically associated with wavelet-based denoising, as can be seen in the cheek structure of the "clown" image. We also report numerical evaluations of estimator performance in this setting, by way of both SNR in Table 1 and the widely-used perceptual error metric of the Structural Similarity Index (SSIM) [36] in Table 2, for input SNRs of 0, 3, and 10 dB. The results readily confirm that Skellam-based approaches outperform competing alternatives, with only that of [33] remaining competitive, though as described above, its oversmoothing results in a great deal of loss of texture. The SkellamShrink adjusted-threshold hybrid (SH) method measures the best in terms of both SNR and SSIM, with other Skellam-based approaches generally outperforming all alternatives save for [33].
5. Discussion. In this article we derived new techniques for wavelet-based Poisson intensity estimation by way of the Skellam distribution. Two main theorems, one showing the near-optimality of Bayesian shrinkage and the other providing for a means of frequentist unbiased risk estimation, served to yield new estimators in the Haar transform domain, along with low-complexity algorithms for inference. A simulation study using standard wavelet test functions as well as test images confirms that our approaches offer appealing alternatives to existing methods in the literature, and indeed subsume existing variance-stabilization approaches such as Haar-Fisz by yielding exact unbiased risk estimates, along with a substantial improvement for the case of enhancing image data degraded by Poisson variability. We expect further improvements for specific applications in which correlation structure can be assumed a priori amongst Haar coefficients, in a manner similar to the gains reported by [28] for the case of image reconstruction in the presence of additive noise.
RECONSTRUCTION OF THE SLIP DISTRIBUTION ALONG THE WEST HELANSHAN FAULT, NORTHERN CHINA BASED ON HIGH-RESOLUTION TOPOGRAPHY
The increasing wealth of high-resolution topographic data allows for remotely measuring and analysing offset features and their associated surface slip distributions at very high resolution and along a significant length of a fault, hence providing important insights into many aspects of fault behaviour. The West Helanshan Fault is a Holocene active fault located at the junction of the Tibetan Plateau, Alashan, and Ordos blocks. Despite its special tectonic location, it has rarely been studied before. In this study, a 2 m-resolution DEM of the West Helanshan Fault was built from high-resolution (0.5 m) WorldView-3 stereo satellite imagery based on the photogrammetry method, and a total of 181 strike-slip offsets and 201 vertical displacements were acquired along different segments of the fault. By statistical analysis of the offset observations, we conclude that at least six large paleoearthquakes have ruptured the fault, producing a minimum rupture length of ~50 km, and that the paleoearthquakes have followed a characteristic slip pattern with a coseismic strike slip of ~3 m and a vertical slip of ~1 m, corresponding to a geologic moment magnitude of 7.1~7.5.
INTRODUCTION
Detailed mapping and surveying of the offset geomorphic features and their associated surface slip distribution along a fault can guide understanding of many aspects of the fault behaviour, including the fault kinematics and mechanics, earthquake recurrence patterns, and earthquake magnitudes (Wells, Coppersmith, 1994; Zielke et al., 2010, 2012; Klinger et al., 2011; Manighetti et al., 2015; Perrin et al., 2016; Haddon et al., 2016). Following the early pioneering work of Wallace (1968, 1990), a number of studies have been conducted to measure the offsets of geomorphic markers and constrain the along-fault surface slip distributions (Sieh, 1978, 1981; Sieh, Jahns, 1984; Schwartz, Coppersmith, 1984). However, most of these early studies were based on conventional field mapping, which is particularly susceptible to vegetation cover and access conditions. More recently, technologies that enable acquisition of high-resolution topographic data have become increasingly available, e.g., terrestrial and airborne light detection and ranging (LiDAR) (Cunningham et al., 2006; Arrowsmith, Zielke, 2009; Lin et al., 2013), high-resolution satellite optical imagery (e.g., QuickBird, WorldView, and Pleiades), and the Structure from Motion (SfM) approach for DEM generation (Westoby et al., 2012; Bemis et al., 2014; Bi et al., 2017). These technologies have provided us an unprecedented opportunity to remotely measure and analyse offset features at very high resolution and along a significant length of a fault, thereby greatly improving both the reliability and density of the offset measurements. A number of studies have been conducted in the last few years to constrain the along-fault slip distribution of the most recent earthquake ruptures as well as the slip accumulation due to multiple earthquakes based on the LiDAR technology (Zielke et al., 2010, 2012; Salisbury et al., 2012; Manighetti et al., 2015; Haddon et al., 2016; Chen et al., 2018). However, the high costs and logistical demands of LiDAR surveys have limited its extensive application in active faulting studies (Johnson et al., 2014; Zhou et al., 2015; Bi et al., 2018).
Recently, a few studies have demonstrated that the photogrammetry method based on high-resolution satellite stereo imagery can provide topographic data of comparable resolution and precision to LiDAR surveys, but at significantly lower cost and with greater availability, thus becoming a very promising way to reconstruct the surface slip distribution along a fault (Zhou et al., 2015; Middleton et al., 2016; Bi et al., 2018). The Helan Mountain (Helanshan) has a very special tectonic location. It is not only situated at the northern section of the North-South Seismic Belt in China, but also located at the junction of the Tibetan Plateau, Alashan, and Ordos blocks (Molnar, Tapponnier, 1975; Tapponnier, Molnar, 1977; Zhang et al., 1990; Deng et al., 1996; Zhang et al., 1998; Liu et al., 2010) (Figure 1). The Helan Mountain is bounded at its eastern margin by a normal fault, the East Helanshan Fault, while on the western side of the mountain it is bounded by a strike-slip fault, the West Helanshan Fault. However, previous studies have mostly focused on the faults located on the eastern side of the Helan Mountain or the northeastern margin of the Tibetan Plateau (Deng et al., 1984; Zhang et al., 1986; Deng, Liao, 1996; Middleton et al., 2016), while the fault on the western side of the mountain has rarely been studied. Previous studies have reported that the West Helanshan Fault is a Holocene active fault (Lei, 2016; Lei et al., 2017). The fault strikes nearly north-south and is distributed on the piedmont alluvial fan at the western foot of the Helan Mountain. It extends straight and connects with the Sanguankou-Niushoushan Fault across a right-stepping zone to the south, and disappears into the Tengger Desert to the north, with a total length of about 90 km (Figure 1). According to its geometric structure, the fault has been divided into three different segments, i.e., north, middle, and south segments. It passes through the Alxa League, a relatively densely populated area in Northwestern China, and is also less than 40 km from Yinchuan city, capital of the Ningxia Hui Autonomous Region. Therefore, it is of great significance to study the behaviour of this fault for future seismic hazard assessment and loss mitigation. In this study, WorldView-3 satellite imagery (0.5 m) was used to derive high-resolution topographic data of the faulted landforms along the fault based on the photogrammetry method. Combined with the topography derived from Unmanned Aerial Vehicle (UAV) images based on the SfM method, we carefully identified and measured displaced geomorphic markers along different segments of the fault. By statistically analysing both the strike-slip and vertical offset measurements along the fault, we restored the rupture histories and recurrence patterns of paleoearthquakes on the fault.
Figure 1 (caption excerpt). After Tapponnier, Molnar (1977), Zhang et al. (1986), and Deng, Liao (1996); the seismic data are from the China Seismic Information Network (http://www.gopi.com.cn/). (c) Geometry of the West Helanshan Fault, which has been divided into three different segments, i.e., north, middle, and south. The base map is the Google Earth image. The blue polygon indicates the coverage area of the WorldView-3 satellite imagery.
High-resolution Topography
Since no archived high-resolution satellite imagery is available for the West Helanshan Fault zone, we ordered one stereo pair of WorldView-3 panchromatic images (0.5 m), acquired on 5 July 2018 and covering a total area of 315 km². It should be noted that, because the north segment of the fault is covered by desert, offsets there are not well preserved in the landscape; our satellite imagery therefore only covers the middle and south segments of the fault (Figure 1c). The WorldView-3 satellite was launched on August 13, 2014 by Digital Globe. It operates at an altitude of 617 km with an inclination of 97.2° and an average revisit time of <1 day, providing 0.31 m panchromatic resolution, 1.24 m multispectral resolution, 3.7 m short-wave infrared resolution, and 30 m CAVIS (Clouds, Aerosols, Vapors, Ice, and Snow) resolution. It is the first multi-payload, super-spectral, high-resolution commercial satellite, and it provides significant improvements in both image resolution and geo-positioning accuracy (Digital Globe, 2014). It should be noted that although the original spatial resolution of the panchromatic images is 0.31 m, they have been down-sampled to 0.5 m for commercial sale. The stereo WorldView-3 images were processed using the Leica Photogrammetry Suite module of the ERDAS IMAGINE 2015 software. First, since the rational polynomial function (RPF) model that high-resolution satellites always use to approximate the relationship between the image space and the ground space is not a rigorous model, a total of 235 tie points, evenly distributed over the imagery, were generated to compensate for the orientation errors of the RPF model. Second, a pixel-by-pixel matching procedure was performed with a search window size of 9 × 9 pixels and a correlation coefficient of 0.3 to 0.7. Many combinations of parameters were tried, and this combination was found to be a good compromise between the density of matching points and the smoothing of topographic details. With the image coordinates of the matching points, the three-dimensional coordinates of their corresponding ground points were derived from the refined RPF model, resulting in a dense ground point cloud. Afterwards, the ground point cloud was filtered by averaging within a block of 2 m and then gridded with a spacing of 2 m using continuous curvature splines in tension and a tension factor of 0.75. The WorldView-3 imagery was finally ortho-rectified based on the generated DEM. Both the DEM and the orthophoto were geometrically corrected and projected to the WGS-84 UTM Zone 48N system. The hill-shaded DEM and the corresponding orthophoto are shown in Figure 2a and Figure 2b, respectively. It can be clearly seen that two areas along the fault are heavily covered by clouds in the WorldView-3 imagery (Region A and Region B in Figure 2a). Thus, we used a small four-rotor UAV, the Motoarsky MS670, to acquire high-resolution topographic data of these two areas based on the SfM method. The UAV is approximately 67 cm in diameter and has a maximum load of 5 kg and a flying duration of about 45 minutes. It is equipped with both a GPS and an inertial navigation system for positioning and navigation during the flight. It also has a stabilized camera mount fitted with a Sony ILCE-QX1 camera with a fixed focal length of 16 mm. The photographs were acquired on 27 September 2018, and the flying height of the UAV was set to 120 m, with a 70% forward and side overlap.
The total number of collected photographs was 982 for region A, covering an area of about 0.5 km × 3.7 km, while for region B the flight area was about 1.0 km × 4.3 km and a total of 2225 images were finally acquired. For scaling and georeferencing of the point cloud, 25 self-made markers (each 60 cm × 60 cm in size) were evenly placed in the two areas as ground control points (GCPs) before the flight. The three-dimensional coordinates of each GCP were measured after the flight using a Trimble R8 real-time kinematic (RTK) differential GPS system. After image acquisition, the UAV images were processed using the Agisoft PhotoScan software package. We finally obtained DEMs with a high spatial resolution for both areas.
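A simplified sketch of the block-average-and-grid step described above is given below; for illustration it substitutes ordinary linear interpolation for the continuous curvature splines in tension actually used, so it is a stand-in rather than a reproduction of the processing chain:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.stats import binned_statistic_2d

def grid_point_cloud(x, y, z, cell=2.0):
    """Block-average a ground point cloud into `cell`-metre bins, then interpolate to a grid."""
    x_edges = np.arange(x.min(), x.max() + cell, cell)
    y_edges = np.arange(y.min(), y.max() + cell, cell)
    z_mean, x_edges, y_edges, _ = binned_statistic_2d(x, y, z, statistic="mean",
                                                      bins=[x_edges, y_edges])
    xc = 0.5 * (x_edges[:-1] + x_edges[1:])
    yc = 0.5 * (y_edges[:-1] + y_edges[1:])
    gx, gy = np.meshgrid(xc, yc, indexing="ij")
    valid = ~np.isnan(z_mean)
    dem = griddata((gx[valid], gy[valid]), z_mean[valid], (gx, gy), method="linear")
    return xc, yc, dem

# Synthetic example: a scattered point cloud sampled from a gentle alluvial-fan-like slope.
rng = np.random.default_rng(0)
px, py = rng.uniform(0, 200, 50_000), rng.uniform(0, 200, 50_000)
pz = 0.05 * px + 0.02 * py + rng.normal(0, 0.1, px.size)
xc, yc, dem = grid_point_cloud(px, py, pz, cell=2.0)
```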
Fault Offset Measurement
The West Helanshan Fault is dominantly a strike-slip fault, but it also has a small component of vertical slip. In this study, we measured both the strike-slip and vertical displacements of this fault. The strike slip of the fault was measured using the so-called "back-slipping" method, which aims at reconstructing the pre-earthquake morphology of the geomorphic markers (stream channels, terrace risers, etc.) that have been displaced by the fault (Figure 3). This method has been employed to measure cumulative lateral offsets on strike-slip faults for decades (Sieh, 1978; Zielke et al., 2010, 2012; Klinger et al., 2011; Manighetti et al., 2015). To obtain the offset, we first identified well-preserved displaced geomorphic markers (stream channels, terrace risers, etc.) along the fault based on the ortho-rectified high-resolution WorldView-3 imagery (0.5 m). Then the piercing lines of the geomorphic markers on both sides of the fault trace were mapped and projected onto the fault. Afterwards, the offsets between the projections of the upstream and downstream piercing lines onto the fault were measured. To determine a plausible range of the offset, different portions of the geomorphic marker were measured, e.g., the thalweg as well as both edges of a stream channel. Thus, each offset measurement has an optimal value (Opt.offset), a minimum value (Min.offset), and a maximum value (Max.offset). The range of uncertainties might be asymmetric with respect to the optimal value, and the plus and minus uncertainties were calculated as Max.offset − Opt.offset and Opt.offset − Min.offset, respectively. The long-term vertical slip of the fault has displaced different geomorphic surfaces and formed clear fault scarps of different heights along the fault. To obtain the vertical displacement, well-preserved fault scarps that have experienced little erosion or degradation were first identified based on the generated high-resolution WorldView-3 DEM. Topographic profiles were then manually extracted across the fault scarps on the different geomorphic surfaces along the fault. The topographic profile should follow the strike of the geomorphic surface, and vegetation and gullies should also be avoided. Afterwards, the topographic data of the hanging wall and footwall of the fault were fitted by two separate straight lines, which were then projected to the fault trace to estimate the vertical displacement. The root mean square error (RMSE) in fitting the lines was considered as the error of the displacement, as shown in Figure 4.
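As a concrete illustration of the vertical-displacement measurement, the sketch below fits straight lines to user-selected hanging-wall and footwall portions of a profile, projects both to the fault position, and returns the throw together with an error estimate. The function name, the choice of fault position as an input, and the combination of the two RMSE values in quadrature are illustrative assumptions rather than the exact procedure used in this study.

```python
import numpy as np

def vertical_offset(dist, elev, hw_mask, fw_mask, fault_x):
    """Scarp height from a profile: fit lines to both walls and difference them at the fault.

    dist, elev : along-profile distance and elevation arrays
    hw_mask, fw_mask : boolean arrays selecting hanging-wall and footwall segments
    fault_x : along-profile position of the fault trace
    """
    heights, rmses = [], []
    for mask in (hw_mask, fw_mask):
        slope, intercept = np.polyfit(dist[mask], elev[mask], 1)
        heights.append(slope * fault_x + intercept)          # project the fitted line to the fault
        resid = elev[mask] - (slope * dist[mask] + intercept)
        rmses.append(np.sqrt(np.mean(resid ** 2)))           # RMSE of the fit
    return abs(heights[0] - heights[1]), float(np.hypot(*rmses))
```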
RESULTS AND DISCUSSIONS
In this study, a total of 181 strike-slip offsets and 201 vertical displacement measurements were finally acquired along the middle and south segments (about 50 km) of the West Helanshan Fault. The strike-slip offsets vary from 1.7 m to 112.8 m, while the vertical displacements lie between 0.5 m and 66.9 m. To analyse a large amount of data with variable values and uncertainties, a probabilistic method is often adopted (e.g., Manighetti et al., 2015). Following many previous studies, we adopted a Gaussian probability density function (PDF) to constrain the physically plausible range of an offset measurement (McGill, Sieh, 1991; Zielke et al., 2012; Scharer et al., 2014; Manighetti et al., 2015; Haddon et al., 2016). In this representation, each measured offset value defines the mean of a Gaussian PDF, while the standard deviation σ of the PDF is given by the uncertainty of the measurement. Summing the individual PDFs forms a cumulative offset probability distribution (COPD), whose dominant peaks indicate the most frequent values within the entire dataset. Provided that the peaks in the COPD correspond to frequent values of surface slip during past large earthquakes, offsets produced during the most recent event typically contribute to the first, smallest strong peak, while the subsequent peaks in the COPD may reflect the cumulative slip of multiple preceding events (McGill, Sieh, 1991; Klinger et al., 2011; Zielke et al., 2012; Manighetti et al., 2015). The resulting COPD plots are shown in Figure 8. Overall, the COPD peaks roughly tend to decay exponentially with increasing offset amount. This decay has also been observed in many previous works (Wallace, 1968; Klinger et al., 2011; Zielke et al., 2012), and may be caused by the ongoing degradation or erosion of geomorphic features over time, so that larger offsets are generally not well preserved in the landscape. As in many previous studies, the COPD peaks were used to distinguish paleoearthquakes along the West Helanshan Fault. However, utilizing the geomorphic record to constrain the rupture history along a fault rests on the fundamental assumption that the geomorphic markers used to measure displacement form more frequently than the surface-rupturing earthquakes that offset them (McCalpin, 2009; Zielke et al., 2012; Manighetti et al., 2015). The relatively arid climate and well-preserved landscapes, combined with the comparatively slow slip rate and the millennial recurrence time of great events on the West Helanshan Fault, suggest that the landscape may provide a nearly continuous geomorphic record of recent large earthquakes along this fault (Lei, 2016; Lei et al., 2017). Thus, the smallest offset peak of the COPD plot can be attributed to the coseismic slip of the most recent event, while the successive larger offset peaks may represent the cumulative slip of multiple preceding earthquakes. In this study, we mainly focused on observations of less than 25 m for the strike slip and 8 m for the vertical displacement, since these values are most likely reliable and well preserved in the landscape. On the middle and south segments, the COPD plots of the strike-slip offsets both display six pronounced offset peaks (2.5/2.6 m, 5.5/6.1 m, 8.8/9.0 m, 11.6/11.9 m, 14.8/15.4 m, and 19.7/21.3 m), and these peak values are separated by a similar amount of incremental slip, averaging ~3.0 m (except between the fifth and sixth peaks, where the slip increment is about twice 3.0 m).
This incremental slip is also very similar to the slip produced by the most recent large earthquake on the fault (2.5/2.6 m). Similarly, the vertical displacements show six prominent peaks at 0.9/1.0 m, 2.2/1.8 m, 3.4/3.2 m, 4.6/4.2 m, 5.6/5.3 m, and 6.7/6.5 m on the two segments. The clearly separated COPD peaks suggest that the vertical displacements associated with the several recent events are repeated in increments of about ~1.0 m, similar to the coseismic vertical slip of the most recent event (0.9/1.0 m). Therefore, we interpret that at least six large earthquakes have occurred repeatedly on the West Helanshan Fault to produce these measured cumulative offsets, and that these large paleoearthquakes have been fairly characteristic in terms of coseismic slip along this fault, with a strike slip of ~3.0 m and a vertical slip of ~1.0 m. Apart from the six smallest pronounced offset peaks, we also observed several relatively large-magnitude offset peaks along both segments of the fault; these are, however, more poorly constrained due to the limited number of offset observations, and may have developed over the course of many seismic cycles. Furthermore, previous studies have found that strike-slip ruptures generally do not propagate across stepovers larger than ~4 km (Wesnousky, 2006, 2008). Since the step-over zone between the middle and south segments is only about 2.5 km wide, it is unlikely to stop the propagation of earthquake ruptures on the West Helanshan Fault. Moreover, it can be observed from the COPD plots that the offset observations show very similar peaks on the middle and south segments; in particular, the six smallest peaks are highly consistent between the two segments, further suggesting that the two segments may have ruptured together in the same paleoseismic events. Following the relationship between moment magnitude (M) and surface rupture length (SRL) for strike-slip faults, i.e., M = 5.16 + 1.12 × log(SRL) (Wells, Coppersmith, 1994), the moment magnitude of these paleoearthquakes is estimated to be ~7.1. However, this moment magnitude may be a minimum value, since our satellite imagery only covers the middle and south segments of the fault. Furthermore, the COPD plots suggest that the average coseismic strike slip of paleoearthquakes along the West Helanshan Fault is about ~3.0 m. Based on the relationship between moment magnitude (M) and average displacement (AD), i.e., M = 7.04 + 0.89 × log(AD) (Wells, Coppersmith, 1994), we estimate the moment magnitude of these paleoearthquakes to be ~7.5. Thus, the moment magnitude of the paleoearthquakes that occurred along the West Helanshan Fault may be in the range of 7.1–7.5.
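The COPD analysis and the magnitude scaling used above can be sketched numerically as follows. Each measured offset contributes a Gaussian PDF centred on its optimal value (a single symmetric σ is assumed here for simplicity, whereas the measurements in this study carry asymmetric plus/minus bounds), the individual PDFs are summed, and the dominant peaks are read off; the Wells and Coppersmith (1994) strike-slip regressions quoted above are included as helper functions.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import norm

def copd(offsets, sigmas, x_max=25.0, dx=0.05):
    """Cumulative offset probability distribution: sum of per-measurement Gaussians."""
    x = np.arange(0.0, x_max, dx)
    density = np.zeros_like(x)
    for mu, sigma in zip(offsets, sigmas):
        density += norm.pdf(x, loc=mu, scale=sigma)
    peaks, _ = find_peaks(density)
    return x, density, x[peaks]              # grid, summed PDF, candidate peak offsets

def magnitude_from_srl(srl_km):              # M = 5.16 + 1.12 log(SRL), strike-slip regression
    return 5.16 + 1.12 * np.log10(srl_km)

def magnitude_from_ad(ad_m):                 # M = 7.04 + 0.89 log(AD), strike-slip regression
    return 7.04 + 0.89 * np.log10(ad_m)

print(magnitude_from_srl(50.0), magnitude_from_ad(3.0))   # roughly 7.1 and 7.5
```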
CONCLUSIONS
Reconstructing the surface slip distribution along a fault can guide understanding of many aspects of fault behaviour. In recent years especially, high-resolution topographic data sets, such as LiDAR and photogrammetric DEMs, have provided an unprecedented opportunity to remotely measure and analyse offset features at very high resolution and along a significant length of a fault, thereby greatly improving both the reliability and density of offset measurements and providing new insights into the earthquake slip behaviour of faults. However, previous works have mainly focused on reconstructing the horizontal slip distribution of strike-slip faults. In this study, both the horizontal and vertical slip distributions along the West Helanshan Fault have been constrained based on high-resolution topographic data derived from WorldView-3 stereo satellite imagery and UAV images. A total of 181 strike-slip offsets and 201 vertical displacements were acquired along the middle and south segments (~50 km) of the fault. By statistical analysis of the offset observations, we conclude that at least six large paleoearthquakes have occurred along the West Helanshan Fault. The earthquakes ruptured the middle and south segments of the fault together, producing a minimum rupture length of ~50 km, and the coseismic strike slip and vertical slip of these paleoearthquakes are about ~3 m and ~1 m respectively, corresponding to a moment magnitude of 7.1–7.5. Furthermore, both the strike-slip and vertical cumulative offsets are approximately multiples of the coseismic slip of the most recent event, suggesting that the paleoearthquakes may have followed a characteristic slip pattern along the West Helanshan Fault.
Sentiment analysis of tweets on alopecia areata, hidradenitis suppurativa, and psoriasis: Revealing the patient experience
Background Chronic dermatologic disorders can cause significant emotional distress. Sentiment analysis of disease-related tweets helps identify patients’ experiences of skin disease. Objective To analyze the expressed sentiments in tweets related to alopecia areata (AA), hidradenitis suppurativa (HS), and psoriasis (PsO) in comparison to fibromyalgia (FM). Methods This is a cross-sectional analysis of Twitter users’ expressed sentiment on AA, HS, PsO, and FM. Tweets related to the diseases of interest were identified with keywords and hashtags for one month (April, 2022) using the Twitter standard application programming interface (API). Text, account types, and numbers of retweets and likes were collected. The sentiment analysis was performed by the R “tidytext” package using the AFINN lexicon. Results A total of 1,505 tweets were randomly extracted, of which 243 (16.15%) referred to AA, 186 (12.36%) to HS, 510 (33.89%) to PsO, and 566 (37.61%) to FM. The mean sentiment score was −0.239 ± 2.90. AA, HS, and PsO had similar sentiment scores (p = 0.482). Although all skin conditions were associated with a negative polarity, their average was significantly less negative than FM (p < 0.0001). Tweets from private accounts were more negative, especially for AA (p = 0.0082). Words reflecting patients’ psychological states varied in different diseases. “Anxiety” was observed in posts on AA and FM but not posts on HS and PsO, while “crying” was frequently used in posts on HS. There was no definite correlation between the sentiment score and the number of retweets or likes, although negative AA tweets from public accounts received more retweets (p = 0.03511) and likes (p = 0.0228). Conclusion The use of Twitter sentiment analysis is a promising method to document patients’ experience of skin diseases, which may improve patient care through bridging misconceptions and knowledge gaps between patients and healthcare professionals.
Introduction
Twitter, with over 320 million users, allows close to real-time exchange of ideas about current affairs through microblogging that consists of up to 280 characters (1,2). The use of sentiment analysis on Twitter posts in medicine was first published in 2009 (3). This technique is a subfield of natural language processing whose aim is to automatically classify the expressed sentiment in texts (4). Since then, it has been widely applied to predict disease outbreaks (5)(6)(7)(8), prescription of drugs and adverse drug reactions (9)(10)(11)(12)(13), patient satisfaction (14), public perceptions (15), and many others (16,17). Other features of Twitter such as "likes" and "retweets" enable users to share, to show appreciation, and to propagate information that can be used to monitor trends in public perceptions. Sentiment analysis on this large dataset can provide an overview of the moods and emotional outcomes that are associated with specific diseases and physiological status. This method has the advantage of covering larger populations and geographic areas compared to traditional questionnaire-based methods (18).
As more and more people are turning to social media for health advice, understanding the sentiments of social media posts has become increasingly relevant (19,20), as patients frequently report a lack of opportunity to express their psychosocial needs (21,22). However, analysis of social media data in dermatology remains underutilized. Because dermatologic diseases are linked to numerous mental, physical, and emotional stressors that may not be easily captured during clinical visits, we believe that leveraging social media posts can help elucidate the subjective experience of dermatologic disorders. Thus, the objectives of this study were (1) to analyze the expressed sentiments in tweets related to alopecia areata (AA), hidradenitis suppurativa (HS), and psoriasis (PsO) (9-25); (2) to compare the sentiments related to skin disorders with that related to fibromyalgia (FM), a chronic musculoskeletal disease (26) without cutaneous manifestations; and (3) to validate the use of social media analysis for disease surveillance.
Data collection
We used the standard Twitter application programming interface (API) to collect tweets containing keywords or tags for the diseases of interest. For HS, these included #Hidradenitis, #Suppurativa, #HidradenitisSuppurativa, #HSawareness, and "Hidradenitis Suppurativa"; for AA, "Alopecia areata", #AlopeciaAreata, Areata, and AAAwareness; for PsO, Psoriasis and #Psoriasis; and for FM, Fibromyalgia, #Fibromyalgia, and #ChronicFatigueSyndrome. Searches using the Twitter API were case insensitive. There was a limitation of 180 requests per minute with the standard API, which was considered sufficient for this study. Requests to the Twitter API were made through the "rtweet" package in RStudio. Tweets that were publicly available and written in English were collected every day for 1 month (from April 1st, 2022, to April 30th, 2022). For each tweet, we obtained data on the date and time of creation, the user's publicly displayed name, device type, tweet body text, and like and retweet status. A subgroup analysis of private/individual vs. public/organizational accounts (both types of accounts were open to public access) was carried out to determine whether discrepancies in illness experience exist.
Sentiment analysis
To determine the expressed tones in each tweet, we used the AFINN lexicon developed by Finn Arup Nielsen and downloaded from the R "tidytext" package (27). The AFINN lexicon assigned a score between −5 (e.g., "bastard" and "twat") and + 5 (e.g., "breathtaking" and "superb") to each word, with negative scores suggestive of negative sentiment.
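To make the scoring step concrete, the snippet below reproduces it in Python with a small, hypothetical excerpt of an AFINN-style dictionary (the study used the full lexicon via R's tidytext; the words and scores shown here are illustrative only): each tweet's sentiment is simply the sum of the scores of its scored words.

```python
import re

# Hypothetical excerpt of an AFINN-style lexicon: word -> integer score in [-5, +5].
AFINN = {"pain": -2, "bad": -3, "anxiety": -2, "loss": -3, "love": 3, "support": 2}

def tweet_sentiment(text):
    """Sum the lexicon scores of the words in a tweet; unscored words contribute 0."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(AFINN.get(w, 0) for w in words)

print(tweet_sentiment("Living with psoriasis is hard, but the love and support help"))  # -> 5
```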
Statistical analysis
The sentiment of each post was determined by the summation of the sentiment score of each word in the post. Independent t-tests were used to compare the means and standard deviations (SD) of sentiment scores between the diseases of interest. A p-value less than 0.05 suggested statistical significance. The data were collected and analyzed with RStudio 2022.07.1 + 554 for Mac (Boston, USA).
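The group comparison then reduces to an independent two-sample t-test on the per-tweet scores. A minimal SciPy sketch is shown below; the study itself used R, and the score vectors here are made up purely for illustration.

```python
from scipy.stats import ttest_ind

scores_skin = [-1, 0, 2, -4, 0, 1, -2, 3]      # per-tweet scores, e.g. for a skin disorder
scores_fm = [-3, -2, 0, -5, -1, -2, -4, 0]     # per-tweet scores for fibromyalgia

t_stat, p_value = ttest_ind(scores_skin, scores_fm)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```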
Results
We identified 243, 186, 510, and 566 tweets related to AA, HS, PsO, and FM, respectively. The mean sentiment score was −0.239 ± 2.90. The median and mode were 0. The average scores [mean ± SD (range)] for AA, HS, and PsO were −0.021 ± 3.29 (−10 to +10), −0.341 ± 2.41 (−10 to +6), and −0.308 ± 2.86 (−17 to +14), respectively (Figure 1). There was no significant difference among the three disorders (p = 0.482). There were 2-3 times more tweets from private accounts than from public accounts for all diseases. Posts from public accounts were significantly more positive (−0.128 ± 2.95) than those from private accounts.

Words in negative and positive tweets on dermatologic disorders

Figure 2 displays the most frequent positive or negative words used for each disease. "Pain", "bad", and "hard" were used frequently in negative posts about HS and PsO, while "loss" was overwhelmingly present in negative tweets on AA. "Anxiety", "fear", "wrong", and "burden" were seen in posts on AA but not in posts on HS and PsO. Words expressing negative internal emotions such as "crying" were observed in posts on HS; words connoting external influence like "contagious" and "hate" were more commonly observed in posts on PsO than in posts on HS or AA. "Care" and "natural" were found in positive tweets related to all three diseases. Words describing a supportive system, including "help", "love", "support", and "god", were most frequently identified in positive posts on PsO.

The sentiment of tweets on dermatologic disorders and fibromyalgia

510 tweets about FM were identified. As with the skin disorders, "pain", "bad", and "hard" were commonly seen in negative posts on FM. In addition, emotional terms used in AA tweets, like "anxiety", "suffering", "guilt", and "sucks", contributed to a significant portion of negative posts on FM. The average sentiment score was significantly lower for FM (−1.11 ± 33.47) than for the three skin disorders AA, HS, and PsO (−0.239 ± 2.90, p < 0.0001). Unlike for the skin disorders, tweets from public (−0.953 ± 3.70) and private (−1.170 ± 3.39) accounts expressed similarly negative sentiment (p = 0.5969).
Discussion
The findings of the present study provide an effective and efficient approach to measuring sentiments surrounding AA, HS, and PsO via analysis of tweets. Words that have been given negative polarity, like "anxiety", "pain", and "crying", are common in tweets related to AA, HS, and PsO. Sentiment regarding these three skin diseases is slightly polarized to the negative side, with less negative polarity compared to FM.
This study utilizes posts from a popular social media platform to understand sentiments related to dermatologic disorders. The results seem to correlate well with previously documented psychiatric comorbidities. "Anxiety" was the most common emotional word in posts on AA. Patients with AA are particularly susceptible to generalized anxiety disorder (GAD) (28). A systematic review reported a 39-62% lifetime prevalence of GAD in patients with AA, giving an odds ratio (OR) of 7.28 compared to the general population. The ratio was higher than that for major depressive disorder (MDD) (OR = 5.87-6.77), social phobia (OR = 1.59-3.89), and paranoid disorder (OR = 4.4) (28). In contrast, words like "crying", as well as aggressive words like "fuck", were commonly seen in tweets on HS, possibly reflecting the prevalence of bipolar disorders and MDD in this population. One meta-analysis on the psychiatric comorbidities of HS concluded that, among the investigated psychiatric disorders, bipolar disorders (OR = 1.96) and MDD (OR = 1.75) were the most significantly increased comorbidities in patients with HS. Also in contrast to AA, for posts on PsO we did not identify "anxiety" or "bad" among the top 10 negative words. A recent meta-analysis reported a hazard ratio (HR) of 1.29-1.31 for anxiety in patients with PsO; these ratios were slightly lower than those found for AA and HS (29). The same study found that the OR for depression was 1.57 in patients with PsO (29). For comparison with all three skin disorders, tweets on FM were also examined. "Anxiety" and "guilt" were commonly used in negative posts on FM, and patients with FM display a higher prevalence of GAD (20-80%) and MDD (13-63.8%) (30,31). Thus, the approach adopted in the present study may be a powerful tool to conceptualize the real-time emotional experience of dermatologic disorders, which may be used to predict or reflect their psychiatric comorbidities. Analyzing the psychological foundations of the affective lexicon allows for a better understanding of the emotional impact of diseases from patients' perspectives and can direct psychosocial interventions (32)(33)(34). Interestingly, the overall sentiment scores were neutral for the three dermatologic diseases and did not differ from one another. Despite their various health impacts, previous studies suggested that quality of life in dermatologic diseases is affected most by the severity of disease rather than by the type of disease (35)(36)(37). Our data may support this finding, although we were not able to stratify sentiment scores by disease severity. Besides emotional words, the dataset provided insight into other patient priorities. "Natural" and "care" were recurrent themes in all three diseases, suggesting growing interest in non-pharmacologic options. Words like "contagious" in tweets on PsO hinted at common misconceptions and could guide the development of future campaigns. Finally, a sentiment gap appeared between public and private accounts in tweets about skin disorders but not about FM. While a strong association between FM, depression, and anxiety is widely reported by lay media, many skin diseases are considered largely "cosmetic" and their emotional impact is ignored. Thus, this gap may reflect a failure of physicians and public organizations to identify occult emotional burdens.
An empathetic and systematic approach may be beneficial and should be encouraged when caring for patients with dermatologic diseases. Furthermore, a previous study on tweets related to HS concluded that analysis of social media data allowed the identification of some treatment beliefs not easily detected by traditional surveys (38). Collectively, these findings call for medical professionals and institutions to monitor and validate educational information on social media (39).
Despite continuous data collection for one month, the sample sizes were still small. In addition, we only analyzed one social media platform (i.e., Twitter), and therefore the external validity of the findings might be limited. A limitation specific to the Twitter API is the random selection of a number of tweets (set by users) during a period of time (set by users) from the pool of tweets using the specified hashtags/keywords; Twitter does not allow access to all qualified tweets with one search. Second, microblog posts on social media are usually used to express temporary emotions and may not adequately reflect long-term psychological status, and patients may be reluctant to publicly share either negative or positive experiences. Sentiment classification might also fail when negation or irony is used. For example, profanity can be used to modify a positive term, reversing its original polarity (14). Although irony may be indicated by emojis, previous studies did not show a significant improvement in sentiment classification with emoticons (40); therefore, we did not include emojis in the analysis. Some people may use text embedded in images to bypass the 280-character limit; these longer posts, which may be more personal, may be missed by the algorithm. Lastly, different sentiment lexicons can yield different results depending on their sensitivity and specificity. SentiStrength is another lexicon commonly used in health-related sentiment analysis (41,42). That said, since AFINN was shown to have similar or higher accuracy than SentiStrength, it was preferred in this study (41).
Conclusion
The use of sentiment analysis on tweets is a promising method that can reflect psychological comorbidities, illness experience, and public perceptions of patients with dermatologic disorders. This technique has the potential to improve patient care by bridging misconceptions and knowledge gaps between patients and medical professionals.
Data availability statement
In the current study, the dataset regarding the reported results could be accessed via the publicly archived tool of Twitter application programming interface (https://help.twitter. com/en/rules-and-policies/twitter-api).
Author contributions
ITL, SEJ, STC, CK, and KSM contributed to the conceptualization, data analysis, writing, and editing of the manuscript. All authors contributed to the article and approved the submitted version.
Spectroscopic Analysis of Ancient Ceramic From Agror Valley Khyber Pakhtunkhwa, Pakistan
The study of material remains is essential for reconstructing past human lifestyles. Archaeologists use different scientific techniques to analyze the elemental composition of material remains in order to locate raw materials, discover production sites, and understand ancient manufacturing technologies. Of these, the Laser Induced Breakdown Spectroscopy (LIBS) method has been extensively used for the compositional analysis of different crystalline and non-crystalline materials of archaeological, historical and artistic interest over the last two decades. The present study was carried out to investigate the elemental composition of ceramic potsherds from the historic period. The paper focuses on the major and minor elements identified through LIBS in ceramic samples collected from different archaeological sites located in Mansehra, the easternmost district of the Khyber Pakhtunkhwa Province of Pakistan. The LIBS results show the presence of Fe, Mg, Ca, and Na as major elements in the ancient ceramics, along with traces of Si, Ti, Al and K. The LIBS results also show differences in the concentration of each element among the selected ceramic potsherds, which indicates that the source of raw material, the production strategies and the time periods of these objects were related to each other.
Introduction
The study of ceramics is an important aspect of prehistoric and historic archaeology. Pottery was probably the first synthetic material [1] made by humans in the past, and broken pottery fragments are among the most common artifacts found at archaeological sites around the world.
The study of ancient ceramics provides valuable information about different aspects of past human behavior, such as trade and commerce, rituals, technologies, and the exploitation of resources [2]. The visual properties of ancient ceramics, such as size, shape, and surface decoration, are mostly used as cultural and chronological indicators. The chemical properties, including the texture of the paste (the clay and temper combination), help in understanding manufacturing techniques and can also be used to locate the source and provenance of the objects, while the oxidation state of iron-bearing constituents can be used to reconstruct firing conditions [3]. The primary purpose of compositional analysis of ceramics is to understand whether the objects were made locally or imported; a source of raw material, i.e. clays and temper, that does not overlap with the place of discovery can also be indicated by the compositional analysis [4]. This can be achieved using modern technology, which is now playing an important role in the preservation, interpretation and promotion of archaeological objects [5].
The literature shows that various analytical techniques have been used extensively in the study of archaeological objects. Of these, scanning electron microscopy (SEM) [6], X-ray diffraction (XRD) [7], X-ray fluorescence (XRF) [8], atomic absorption spectrometry (AAS), proton induced X-ray emission (PIXE) [9], inductively coupled plasma coupled to optical emission or mass spectroscopy (ICP-OES) [10] and Raman spectroscopy are among the most widely used techniques [11]. However, these techniques share some limitations: sample preparation and handling procedures, loss of originality, and the need to transport archaeological objects. These limitations are effectively overcome by the more recently developed technique of Laser Induced Breakdown Spectroscopy (LIBS) [12].
Laser-induced breakdown spectroscopy, an elemental analysis technique, is based on the characteristic emission of atomic lines emitted after laser ablation of the selected sample [6]. LIBS results for an ancient Roman sculpture were compared with those of other analytical techniques, and it was found that LIBS is a valid and accurate technique for quantitative analysis which can also be used for studies of ancient jewelry [13]. Another important similar work was carried out with LIBS analysis on a corroded bronze gun which was found on the Adriatic seabed and dates back to the 16th to 17th century AD [14]. Double-pulse LIBS has also been used for the analysis of ancient objects such as iron, copper-based alloys, and precious alloys found under water, and has provided suitable quantitative results [15]. Laser Induced Breakdown Spectroscopy has been applied to ancient Roman pottery sherds (Hispanic Terra Sigillata) recovered from different archaeological sites and dated to the 1st to 5th century AD; it is possible to classify ceramic objects by their location of production by looking at the element content and linear spectral correlation analysis [16]. The compositional analysis of ceramic sherds from eastern Turkey, recovered during excavations and dated to the Middle Iron Age, has also been carried out using laser induced breakdown spectroscopy [17].
Unfortunately, studies on the compositional analysis of ancient pottery from Pakistan are rare; so far only a single article has been found, in which samples were collected from the historic site of Aziz Dheri and the percentages of elements present in the samples were measured using different elemental techniques including XRD, XRF and SEM [18].
In the current research, the elemental analysis of ceramic potsherds collected from the Agror Valley, Khyber Pakhtunkhwa province of Pakistan, was carried out by the authors. The samples collected from each site are pieces of body sherds, rim sherds and base sherds. The elemental analysis was performed using Laser Induced Breakdown Spectroscopy. The important features of LIBS which make it a distinctive analytical technique include its simple and easy process, the availability of instruments which are easy to use, the rapidity of analysis, analysis without any sample preparation, the capability of performing the analysis on site, and the ability to analyze any type of material, i.e. solid, liquid or gaseous. These features make LIBS a more reliable technique for the current study as compared to other techniques which are commonly used in archaeological science for elemental analysis.
Area of Study
The history of the Agror Valley (Fig. 1) is not well known until the British arrived in the area. It might have been under the sway of the Abhisaries when Alexander invaded India. Many archaeological remains have been found around the ravines in the neighbouring Tanawal Valley, which are generally related to a side branch of the Buddhist cultures commonly prevalent in the region. This is testified by the Shadore inscription on the eastern side of the valley, an inscription in Kharoshthi script on a rock reminiscent of the Asokan Rock Edicts of Mansehra in the adjacent Pakhli plain. During the visit of Hiuen Tsang in 630 CE, it was probably part of Urasha (Hazara), as his Man-ki-lu (the current Mangli Town between Mansehra and Abbottabad) lies very close to Agror [19]. The area was subsidiary to the Pakhli Plain under the Mughals, and in the 17th or 18th century all of the Hazara region saw great waves of invaders from the north; the Swatis in particular came during this time [20]. Currently these Swatis form the majority and the elites. The Khan of Agror (the ruler) during British times, until the merging of the state into Pakistan, was also a Swati.
James Abbot, the first British-period officer and later Deputy Commissioner of Hazara District, describes this area as an abode of large dacoit groups taking refuge here after raids in the surrounding valleys [21]. Baron Charles Hugel [22] also describes the same fear in the Sikh period. The British gave a detailed account of their skirmishes with the tribes of Tor Ghur, i.e. Akazai, Hassanzai, Chigharzai and Syeds, which generally settled in the Agror Valley [23].
Archaeological Samples Selection
The survey report [24] shows that the selected archaeological sites given in Table 1, based on their architecture and some other features, are assigned to the historic period. Each site revealed large numbers of potsherds, but a single sample was selected randomly from each of six historic sites of the region. All the ceramic objects were recorded with an initial description, followed by the processes of cleaning, labeling, detailed description and proper drawing before the analysis.
Siddique Mound
With an elevation of 1129 m, the site presents a very fine collection of pottery, including all general categories and types found in the area; the most interesting sherds are perforated, as shown in the corresponding figure.
Qalagai Mound
Locally the site is known as the place built by the Prophet King Suleiman (A.S.) of the Israelites; they say that he ordered giants to do the job. A local tradition about Raja Salu attests to the possibility of an ancient king's attack; there is a similarly named Raja Rasalu, a mythical king of the Taxila region who fought demons/giants to subdue them. Some large round stones are recognized in the local traditions as the broken bangles of his sister while she was being kidnapped by giants. Similarly, some large/lengthy stones in the surrounding graveyards are said locally to be his arrows. The pottery is very fine and includes rims, bases, handles and body sherds, as shown below in Fig. 3.
Chinkot Mound
The
Bela Mound
The site is located 1229 m above sea level to the south of the well-known Kharoshthi inscription site of Shahdore, on a small spur leading steeply uphill, with the newly discovered rock inscription of a few aksharas just to its east, further uphill. The site is badly disturbed, having been cut through by antiquity hunters. Pottery sherds including channels, lugs, bases and rims were found, as shown in Fig. 6.
Qalagia Mound
It is on the top of a steep mountain overlooking the Agror plain in Khabbal village, 1880 m above sea level. It is accessible from a jeepable road (Pham Gali-Khabbal Road) which reaches Nakkah to the north of this mound; from there one can walk a furlong to the south to reach the site. The site is a stepped mound exposing remains of ruined walls all around. The surface is littered with pottery sherds including bases, rims, and channels.
Experimental Work
LIBS Setup
The experimental arrangement consists of an Nd:YAG laser and a LIBS 2000 detection system, as shown in Fig. 8. The Nd:YAG laser, with a pulse duration of 5 ns and a repetition rate of 10 Hz, is operated in Q-switched mode and is capable of delivering 400 mJ at the fundamental wavelength (1064 nm) and 200 mJ at the second harmonic (532 nm). The laser is focused through a 20 cm focal length lens perpendicular to the target ceramic sample. The LIBS 2000 system consists of a spectrometer covering the range between 200 and 700 nm. The gate delay time and the gate width were optimized for single pulse (SP) experiments.
When the laser pulse strikes the sample surface, it provides energy to the constituent particles and ignites a plasma. When the laser pulse energy exceeds the breakdown threshold of the sample surface, the material is heated and ablated through bond breaking, evaporation and atomization. After laser ablation and vaporization, a plasma plume is generated on a small spot; the nature of the plume and the spot size depend on the laser parameters, atmospheric conditions and surface properties. The laser pulses emitted from the Nd:YAG laser are deflected using a prism or mirror and focused on the sample through a focusing lens with a focal length of 75 mm. The spot size on the sample surface is comparable to the diameter of the laser beam, which is 6 mm to 8 mm. The sample is placed on a rotating stage in front of the focusing lens. The stage is a rotator system that presents the sample surface to the incident laser beam, and its uniform rotation provides a fresh surface each time for better measurement.
Spectral lines emitted from the plasma are collected by a collimating lens with a focal length of 100 mm.
A fiber optic cable is attached to the lens for transmission of the LIBS signal to the detection system. In the given detection system, the spectrometer has a spectral resolution of 0.05 nm. The delay time recorded for the data is 3.5 µs and the integration time is 2 ms. The plasma light, consisting of different wavelengths, is dispersed and diffracted by the spectrometer and detected by a photosensor with an ICCD detector. The delay generator is used for the resolution of spectral lines, and the LIBS signal is converted into an electronic signal using gating electronics. Finally, the atomic lines are obtained on the computer screen in the form of a spectrum.
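The peak-picking and line-identification step can be sketched as follows. The wavelengths in the short reference list are well-known emission lines quoted only for illustration; in the study each candidate line was checked against the full NIST atomic spectra database, and the prominence threshold used here is an arbitrary assumption.

```python
import numpy as np
from scipy.signal import find_peaks

# Illustrative reference lines (element, wavelength in nm); real work uses the NIST database.
REFERENCE = [("Ca II", 393.37), ("Ca II", 396.85), ("Fe I", 404.58),
             ("Mg I", 285.21), ("Na I", 588.99)]

def strong_lines(wavelength, intensity, min_prominence=0.05):
    """Return the wavelengths of prominent emission peaks in a LIBS spectrum."""
    peaks, props = find_peaks(intensity, prominence=min_prominence * intensity.max())
    order = np.argsort(props["prominences"])[::-1]          # strongest peaks first
    return wavelength[peaks][order]

def identify(peak_wavelengths, tol=0.1):
    """Assign peaks to reference lines within a wavelength tolerance (nm)."""
    return [(el, ref) for el, ref in REFERENCE
            for p in peak_wavelengths if abs(p - ref) < tol]
```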
Calculation of Element's Concentration by Intensity Ratio Method
For the analysis of elemental concentrations in an ancient ceramic object, various analytical methods have been used, such as the calibration curve method, the calibration-free method and the intensity ratio method. The intensity ratio method is considered a useful and rapid method, through which the relative concentration of each element can be estimated from the ratio of its summed emission-line intensities to the total intensity of all detected lines.
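A minimal sketch of this normalization is shown below. It assumes that the relative concentration of an element is taken as its summed assigned-line intensity divided by the total intensity of all assigned lines, expressed as a percentage, and it ignores matrix effects and calibration standards; the intensities in the example are made up.

```python
def intensity_ratio_concentrations(line_intensities):
    """line_intensities: dict mapping element -> list of intensities of its assigned lines.

    Returns relative concentrations (%) from the simple intensity-ratio normalization.
    """
    totals = {el: sum(vals) for el, vals in line_intensities.items()}
    grand_total = sum(totals.values())
    return {el: round(100.0 * t / grand_total, 2) for el, t in totals.items()}

print(intensity_ratio_concentrations(
    {"Fe": [820, 610], "Na": [900, 480], "Mg": [750], "Ca": [640], "Si": [450]}))
```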
Results And Discussion
Samples collected from all the selected archaeological sites were subjected to the LIBS process in the laboratory, where various strong emission lines of elements were detected. Analysis of the samples shows the presence of elements in both high and low concentrations in the ceramics collected from Bela Mound (BA04), Titoli Mound (TTA23), Chinkot Mound (CKA09) and Siddique Mound (SA03). These samples contain atomic lines of Calcium (Ca), Iron (Fe), Silicon (Si) and Sodium (Na) as key constituents, and their wavelength positions, concentrations and numbers of strong emission peaks closely resemble each other, as shown in Fig. 9.
For the analysis of elemental concentrations in ancient ceramic objects, the intensity ratio method is considered a useful and rapid method. As the LIBS data revealed a large number of spectral lines of the same elements in a sample, in this research only those lines are considered which have large intensity and can provide the clearest information, as shown in Fig. 9. Each spectral line in the selected samples was verified by its wavelength with the help of the National Institute of Standards and Technology (NIST) database. The LIBS results demonstrate three major elements with the same concentration in sample CKA09: the concentrations of Fe, Na and Mg are the same, at 17.27%, in this ancient ceramic potsherd from Chinkot Mound, as shown in Table 2. The fourth major element found in this sample is Calcium (Ca), with a concentration of 14.20%, whose graph is close to those of the other three major elements. Silicon (Si) was also found in good quantity in this sample, with 11.51% Si detected, as in the other samples. Titanium (Ti), Aluminum (Al) and Potassium (K) were also found in this sample in similar concentrations, with only minor differences.
The sample TTA23 shows a leading concentration of Sodium (Na), at 29.02%, indicating that Na was a major constituent in the preparation of the raw material for ceramic production. Iron (Fe), with a concentration of 26.13%, appears very close in the graph. A 10.72% concentration of Magnesium (Mg) was observed in this sample, which is a low percentage of Mg compared to the other samples. The 13.91% concentration of Silicon is the highest percentage among all samples from the selected sites. The concentration of Titanium is below 10% in almost all the samples, but in this sample its graph is at its lowest point, with a concentration of 9.56%. The concentrations of Aluminum (Al) and Potassium (K) are similar to each other; the graphs of Al at 4.74% and K at 4.09% are shown in Fig. 10.
Spectral analysis of BA04 reveals an elemental composition containing Calcium (Ca), Iron (Fe), Silicon (Si) and Sodium (Na) as key constituents, with similarities in wavelength positions, concentrations and numbers of strong emission peaks that closely resemble the other samples from the selected sites. As in the other samples from archaeological sites at high altitude and latitude in District Mansehra, elements such as Magnesium (Mg), Titanium (Ti), Aluminum (Al) and Potassium (K) have also been observed in the samples obtained from Bela Mound. Strong emission lines at various wavelength positions, with the highest and lowest intensities, were detected in this sample. Looking at the concentrations of the elements, it is found that Fe, Mg, Ca and Na were the major ingredients of ceramic production in the selected region, present in good percentages as shown in Fig. 9. Other, minor elements detected include Si, Ti, Al and K, whose concentrations varied from sample to sample. The concentrations of two elements, Al and K, are very low compared to the other elements, but their presence was still observed in every sample. Figure 10 shows the randomly selected samples used for the present research; in this table the elemental concentrations of the various elements are given. It is noticeable that the concentrations of elements are not the same in all the samples, which suggests that during the production of ceramic objects in the selected region there was no concept of systematic selection of raw materials, or that the potters were not highly skilled but rather followed the traditional methods handed down from their ancestors.
Conclusion
This study was carried out to investigate the elemental composition of ceramic potsherds collected from historic sites in district Mansehra, KP, Pakistan. To investigate the major and minor elements present in the selected samples, laser induced breakdown spectroscopy (LIBS) using an Nd:YAG laser at the fundamental (1064 nm) and second (532 nm) harmonics was employed. The LIBS data revealed a large number of spectral lines of the same elements in a sample; here only those lines are considered which have large intensity and can provide the clearest information. Each spectral line was verified by its wavelength and intensity with the help of the NIST database and also confirmed through a spectrum analyzer, an instrument that graphically displays signal amplitude against frequency in the frequency domain, showing which frequency components are present in a signal and how strong they are. The intensity and wavelength of every peak were recorded individually. The concentrations of Iron (Fe), Magnesium (Mg), Sodium (Na), Calcium (Ca), Silicon (Si), Titanium (Ti), Aluminum (Al) and Potassium (K) were calculated in the selected samples. The constituents Fe, Na, Mg, Ca and Si are the major elements that were used in the production of the analyzed ancient ceramic potsherds. The presence of Calcium in considerable amounts in every sample reveals that the raw material for the production of ceramic objects in the selected region was composed of calcareous clay. The elemental analysis of the selected samples indicates that all the sherds belong to the same historic time period of the region and that the source of raw material for the production of ceramic objects is associated with the surrounding Tanawal valley and Balakot valley, as studied in [25]. The elemental analysis can play a vital role in the preservation of these archaeological ceramics housed in different museums in Pakistan.
Declarations
Ethics approval and consent to participate
Not applicable
Consent for publication
Not applicable
Availability of data and materials
The authors declare that the ceramic sherds analyzed in this manuscript were selected from the collections of an archaeological survey by the Department of Archaeology, Hazara University Mansehra, Pakistan.
Competing interests
He works in a laboratory in which plasma and solar cell devices are designed for laboratory use. He has the expertise to design and operate liquid discharges and to diagnose them for optimum conditions. He also processes industrial tools for surface hardness using different sources such as DC, pulsed DC, and RF sources. He is also working on plasma processing of foodborne and diarrheal pathogens.
4. Tahir Iqbal (tahirawan2@gmail.com) is an Assistant Professor of Physics in the Department of Physics at the University of Gujrat. His main areas of scientific interest are nano-photonics and plasmonics, nano-fabrication and nano-characterization, near-field optics, scanning probe microscopy, optical properties of metallic nanostructures, and nanotechnology.
Abdul Hameed (hameedarch2014@gmail.com) is currently working as Assistant Professor at the Department of Archaeology, Hazara University Mansehra. He is a specialist in the Buddhist archaeology of South Asia.
Figure 1. Area of sample collection. Note: The designations employed and the presentation of the material on this map do not imply the expression of any opinion whatsoever on the part of Research Square concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. This map has been provided by the authors.
Step-by-step analysis of a solid ancient ceramic potsherd using LIBS.
Figure 10. LIBS results of selected samples.
Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal
Sorted $\ell_1$ Penalized Estimator (SLOPE) is a relatively new convex regularization method for fitting high-dimensional regression models. SLOPE allows one to reduce the model dimension by shrinking some estimates of the regression coefficients completely to zero or by equating the absolute values of some nonzero estimates of these coefficients. This makes it possible to identify situations where some of the true regression coefficients are equal. In this article we introduce the SLOPE pattern, i.e., the set of relations between the true regression coefficients which can be identified by SLOPE. We also present new results on the strong consistency of SLOPE estimators and on the strong consistency of pattern recovery by SLOPE when the design matrix is orthogonal, and illustrate the advantages of SLOPE clustering in the context of high-frequency signal denoising.
Introduction and motivations
The Linear Multiple Regression concerns the model $Y = X\beta + \varepsilon$, where $Y \in \mathbb{R}^n$ is an output vector, $X \in \mathbb{R}^{n \times p}$ is a fixed design matrix, $\beta \in \mathbb{R}^p$ is an unknown vector of predictors and $\varepsilon \in \mathbb{R}^n$ is a noise vector. The primary goal is to estimate $\beta$. In the low-dimensional setting, i.e., when the number of predictors $p$ is not larger than the number of observations $n$ and $X$ is of full rank, the ordinary least squares estimator $\hat{\beta}_{OLS}$ has the exact formula $\hat{\beta}_{OLS} = (X^\top X)^{-1} X^\top Y$. For practical reasons there is an urge to avoid the curse of high dimensionality, and therefore we want the estimate to be sparse, i.e., describable by a smaller number of parameters. Several solutions have been proposed to deal with this problem. One of them, the Least Absolute Shrinkage and Selection Operator (LASSO [7,24]), involves penalizing the residual sum of squares $\|Y - X\beta\|_2^2$ with the $\ell_1$ norm of $\beta$ multiplied by a tuning parameter $\lambda$. The LASSO estimator is not unbiased, but it is a shrinkage estimator which shrinks some $\hat{\beta}^{LASSO}_j$ completely to zero, resulting in a sparser estimate. In the case of $X$ being an orthogonal matrix, i.e. $\frac{1}{n} X^\top X = I_p$, the exact formula for $\hat{\beta}_{LASSO}$ introduced by Tibshirani [24] is obtained by soft thresholding the coordinates of $\hat{\beta}_{OLS}$. Another approach to reduce the dimensionality is the Sorted $\ell_1$ Penalized Estimator (SLOPE [3,2,25]), which not only generalizes the LASSO method but also allows one to cluster similar coefficients of $\beta$. In SLOPE, the $\ell_1$ norm is replaced by its sorted version $J_\Lambda$, which depends on the tuning vector $\Lambda = (\lambda_1, \ldots, \lambda_p) \in \mathbb{R}^p$ with $\lambda_1 \geq \ldots \geq \lambda_p \geq 0$: $J_\Lambda(\beta) = \sum_{i=1}^p \lambda_i |\beta|_{(i)}$, where $\{|\beta|_{(i)}\}_{i=1}^p$ is the decreasing rearrangement of the absolute values $|\beta_1|, \ldots, |\beta_p|$, i.e. $|\beta|_{(1)} \geq \ldots \geq |\beta|_{(p)}$. The case of $\Lambda$ being an arithmetic sequence was studied by Bondell and Reich [5] and called the Octagonal Shrinkage and Clustering Algorithm for Regression (OSCAR). The special case of SLOPE with $\lambda_1 = \lambda_2 = \ldots = \lambda_p > 0$ is LASSO, and for $\Lambda = (0, \ldots, 0)$ we obtain the OLS estimator. Clustering the predictors allows for additional dimension reduction by identifying variables with the same absolute values of their regression coefficients. One may recently observe a rise of interest in methods which cluster highly correlated predictors [6,12,14,17,18,23]. SLOPE is ideal for this task, since it is capable of identifying the low-dimensional structure called the SLOPE pattern, defined by Schneider and Tardivel via the subdifferential of the SLOPE norm $J_\Lambda$ [20]. For the convention of this article we let $\mathrm{sign}(0) = 0$. By $k$ we denote the number of clusters of $\mathrm{patt}(b) = (m_1, \ldots, m_p)$, i.e., the number of different nonzero components of $|b|$.
By $\mathcal{M}_p$ we denote the set of all possible SLOPE patterns of $b \in \mathbb{R}^p$.
Remark 1.0.1 (Subdifferential description of the SLOPE pattern [20]). Let $\Lambda = (\lambda_1, \ldots, \lambda_p)$ satisfy $\lambda_1 > \ldots > \lambda_p > 0$. Then the SLOPE pattern of $b$ can be characterized in terms of the subdifferential of the SLOPE norm, where $\partial_f(b)$ denotes the subdifferential of a function $f : \mathbb{R}^p \to \mathbb{R}$ at $b$. The subdifferential approach may be applied to a wider class of penalizers, namely polyhedral gauges, cf. [22]. Definition 1.0.2 (Pattern recovery by SLOPE). We say that the SLOPE estimator $\hat{\beta}_{SLOPE}$ recovers the pattern of $\beta$ when $\mathrm{patt}(\hat{\beta}_{SLOPE}) = \mathrm{patt}(\beta)$. The clustering properties of SLOPE have been studied before, cf. [5,11], but those works consider strongly correlated predictors, which are used in financial mathematics to group assets with respect to their partial correlation with hedge fund return time series [13]. In our article we suppose the orthogonal design $\frac{1}{n} X^\top X = I_p$. This is a classical and natural assumption in the case where $n$ and $p$ are fixed, cf. [24]. Moreover, in the asymptotic case, where $n \to \infty$ and $p$ is fixed, it is usually supposed that $X^\top X / n \to C > 0$, cf. [26,27]. In (1) the design matrix $X$ is orthogonal; then the Euclidean norm of each $n$-dimensional column of $X$ equals $\sqrt{n}$. If it were 1, the terms of $X$ would vanish to zero for large $n$, which is not natural. Such a class of matrices is widely used in signal analysis [19,8]. For general $X$ the problem is considered in our parallel article [4].
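To make the pattern notion concrete, the sketch below computes patt(b) under the convention assumed here: $m_i = \mathrm{sign}(b_i)$ times the rank of $|b_i|$ among the distinct nonzero values of $|b|$, so that the largest cluster receives rank $k$. This convention is consistent with the cluster counts used later in the paper, but the code is only an illustration and not the authors' exact definition or implementation.

```python
import numpy as np

def slope_pattern(b, tol=1e-10):
    """SLOPE pattern of b: sign(b_i) * rank of |b_i| among the distinct nonzero |b_j|."""
    b = np.asarray(b, dtype=float)
    mags = np.where(np.abs(b) > tol, np.abs(b), 0.0)       # treat tiny entries as exact zeros
    distinct = np.unique(mags[mags > 0.0])                  # sorted increasing, length k
    rank = {float(v): i + 1 for i, v in enumerate(distinct)}
    return np.array([int(np.sign(x)) * rank.get(float(abs(x)), 0) for x in mags * np.sign(b)])

print(slope_pattern([4.0, 0.0, -4.0, 1.5]))   # -> [ 2  0 -2  1 ], with k = 2 clusters
```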
To study the properties of SLOPE we often use the closed unit ball $C_\Lambda$ in the norm dual to $J_\Lambda$, which was studied e.g. by Zeng and Figueiredo [25]. This dual ball is described explicitly as a signed permutahedron, see e.g. [16,20].
In this article we prove novel results on the strong consistency of SLOPE, both in estimation and in pattern recovery. We also introduce a new approach, based on minimaxity, to the relations between $\hat{\beta}_{SLOPE}$ and $\hat{\beta}_{OLS}$.
Outline of the paper
In Section 2 we derive the connections between $\hat{\beta}_{SLOPE}$ and $\hat{\beta}_{OLS}$ in the orthogonal design. We use the minimax theorem of Sion, cf. [1]. In Section 3 we focus on the properties of $\hat{\beta}_{SLOPE}$. We use the geometric interpretation of SLOPE to explain its ability to identify the SLOPE pattern, and provide new theoretical results on the support recovery and clustering properties using a representation of SLOPE as a function of the ordinary least squares (OLS) estimator. A similar approach for LASSO was used by Ewald and Schneider, cf. [10]. To analyze the asymptotic properties of the SLOPE estimator, e.g. its consistency, we have to assume that the sample size $n$ tends to infinity. Therefore, in Section 4 we define a sequence of linear regression models $Y^{(n)} = X^{(n)}\beta + \varepsilon^{(n)}$. In this sequence, the response vector $Y^{(n)} \in \mathbb{R}^n$, the design matrix $X^{(n)} \in \mathbb{R}^{n \times p}$ and the error term $\varepsilon^{(n)} \in \mathbb{R}^n$ vary with $n$. The error term $\varepsilon^{(n)} = (\varepsilon^{(n)}_1, \varepsilon^{(n)}_2, \ldots, \varepsilon^{(n)}_n) \in \mathbb{R}^n$ has the normal distribution $N(0, \sigma^2 I_n)$. We make no assumptions about the relations between $\varepsilon^{(n)}$ and $\varepsilon^{(m)}$ for $n \neq m$. In this paper we consider the specific, but statistically important model in which $n \geq p$ and the columns of $X$ are orthogonal. The orthogonality assumption allows us to derive, by simple techniques, relatively precise results on the SLOPE estimator (e.g. Theorem 3.1), which seem unavailable when the columns of $X$ are not orthogonal. Substantially more difficult techniques based on subdifferential calculus are developed in [4]. These techniques are used in [4] to establish the properties of the SLOPE estimator in the general case, where the columns of $X$ are not orthogonal and $p$ may be much larger than $n$. In the asymptotic theorem proved in [4], under different assumptions on $X_n^\top X_n$, stronger restrictions on the tuning parameters are considered. We provide the conditions under which the SLOPE estimator is strongly consistent. Additionally, in the case when for each $n$ the design matrix is orthogonal, we provide conditions on the sequence of tuning parameters such that SLOPE is strongly consistent in pattern recovery. In Section 5 we show applications of the SLOPE clustering in terms of high-frequency signal denoising and illustrate them with simulations. The Appendix covers the proofs of technical results.
Saddle point
Let the function r : M × C_Λ → R be defined as in (3). As an immediate consequence of (3) and Proposition 2.0.1 we obtain the corresponding min-max representation of the SLOPE problem. It turns out that the order of the maximization over π ∈ C_Λ and the minimization over b ∈ M can be switched without affecting the result. To see this, note that both C_Λ and M are convex and compact. Moreover, for each fixed π ∈ C_Λ, r(b, π) is a convex continuous function with respect to b ∈ M and, for each fixed b ∈ M, r(b, π) is concave with respect to π ∈ C_Λ (in fact, it is linear). Therefore, all assumptions of Sion's minimax theorem are fulfilled (see [1, p. 218]) and thus there exists a saddle point. In the next section we shall see that the first coordinate of any saddle point (β*, π*) is the SLOPE estimator.
SLOPE solution for the orthogonal design
Since for each fixed π ∈ C_Λ the function r(b, π) is convex with respect to b ∈ M, any point b_π ∈ M at which the gradient vanishes is a minimizer. Differentiating r(b, π) with respect to b and equating the gradient to 0 gives an equation for the optimum point b_π. Substituting this into the expression for r(b_π, π), we find the value of the inner minimum. Let p_j = |{i : |m_i| = k + 1 − j}| be the number of elements of the j-th cluster of β, P_j = Σ_{i≤j} p_i and P_{k+1} = p.
and let β* = (β*_1, . . . , β*_p)' be the corresponding point from M. Then (π − π*)'β* ≤ 0 for all π ∈ C_Λ. The proof is given in the Appendix. An immediate consequence of the lemma is the following result.
Lemma 2.2. The point (β*, π*) defined as in Lemma 2.1 is a saddle point of the function r(b, π).
The proof is given in the Appendix. We use the last lemma to prove the main result of this section. Corollary 2.3.1. In the linear model satisfying (1/n)X'X = I_p, the difference β̂^OLS − β̂^SLOPE is the proximal projection of β̂^OLS onto C_{Λ/n}.
Projections onto C_Λ are widely used in [15] in the study of the notion of degrees of freedom; however, Corollary 2.3.1 is not stated there explicitly. In the orthogonal design, β̂^SLOPE is the image of β̂^OLS under the proximal operator of the SLOPE norm, and this operator has a closed-form formula [2, 21, 9]. This explicit expression gives an analytical way to see that the SLOPE solution is sparse and built of clusters. As (6) is not proven in [3, Equation (1.14)], we give the proof in the Appendix. The next theorem gives a sufficient condition for the clustering effect of the SLOPE estimator in the orthogonal design.
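Before turning to that theorem, a small illustrative sketch of this closed-form proximal operator may help. The code is not taken from the paper; it implements the well-known stack-based (pool-adjacent-violators style) evaluation of the prox of the sorted-L1 norm. In the orthogonal design, β̂^SLOPE can then be obtained by applying it to β̂^OLS with the tuning vector rescaled accordingly (Λ/n under the normalization used here; the exact scaling depends on the convention for the loss).

```python
import numpy as np

def prox_sorted_l1(y, lam):
    """Proximal operator of the sorted-L1 (SLOPE) norm.

    Solves argmin_b 0.5*||b - y||^2 + sum_i lam_i * |b|_(i), where
    |b|_(1) >= ... >= |b|_(p) and lam is non-increasing.  Stack-based
    pool-adjacent-violators style algorithm.
    """
    sign = np.sign(y)
    z = np.abs(y)
    order = np.argsort(z)[::-1]          # sort |y| in decreasing order
    diff = z[order] - lam                # soft-threshold candidates
    blocks = []                          # each block: [sum, length]
    for d in diff:
        blocks.append([d, 1])
        # merge blocks whose averages would violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] <= blocks[-1][0] / blocks[-1][1]:
            s, l = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += l
    x_sorted = np.concatenate([np.full(l, max(s / l, 0.0)) for s, l in blocks])
    x = np.empty_like(z)
    x[order] = x_sorted                  # undo the sorting
    return sign * x

# Hypothetical usage: adjacent OLS coordinates get clustered by the prox.
beta_ols = np.array([2.0, 1.95, 0.3, -0.1])
lam = np.array([0.30, 0.20, 0.10, 0.05])     # plays the role of Lambda/n
print(prox_sorted_l1(beta_ols, lam))         # -> [ 1.725  1.725  0.2   -0.05]
```

The merging step is exactly what produces the clustering effect discussed below: when two sorted coordinates are close relative to the gap between consecutive tuning parameters, their fitted values are averaged and become equal.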
Theorem 3.2. Consider a linear model with orthogonal design (1/n)X'X = I_p and let π be a permutation of (1, 2, . . . , p) sorting the coordinates of β̂^OLS by decreasing absolute value; under condition (12) on adjacent coordinates, the corresponding coordinates of β̂^SLOPE coincide. Proof. By Lemma 3.1, in the orthogonal design β̂^SLOPE is the image of β̂^OLS under the proximal operator of the SLOPE norm. In the following theorem we derive necessary and sufficient conditions under which SLOPE in the orthogonal design recovers the support of the vector β = (β_1, . . . , β_p)', i.e. the set {i : β_i ≠ 0}.
Asymptotic properties of SLOPE
In this section we discuss several asymptotic properties of SLOPE estimators in the low-dimensional regression model in which p is fixed and the sample size n tends to infinity. For each n ≥ 1 we consider a linear model Y^(n) = X^(n)β + ε^(n), where Y^(n) ∈ R^n is a vector of observations, X^(n) ∈ R^{n×p} is a deterministic design matrix with rank(X^(n)) = p, β = (β_1, β_2, . . . , β_p)' ∈ R^p is a vector of unknown regression coefficients and ε^(n) = (ε^(n)_1, ε^(n)_2, . . . , ε^(n)_n)' ∈ R^n is a noise term, which has the normal distribution N(0, σ²I_n). We make no assumptions about the dependence between ε^(n) and ε^(m) for n ≠ m; in particular, ε^(n) does not need to be a subsequence of ε^(m). When defining the sequence (β̂^SLOPE_n) of SLOPE estimators, we assume that the tuning vector varies with n; more precisely, for each n ≥ 1 its coefficients λ^(n)_1 ≥ . . . ≥ λ^(n)_p may depend on n.
Strong consistency of the SLOPE estimator
Let us recall the definition of a strongly consistent estimator β̂^SLOPE_n of β: for all β ∈ R^p we have β̂^SLOPE_n → β almost surely.
Below we discuss the consistency of the sequence (β̂^SLOPE_n) of SLOPE estimators defined by (8); in particular, when λ^(n)_1/n converges to a positive constant, the SLOPE estimator is not strongly consistent for β.
Before proving the above theorems we state a simple technical lemma. It follows quickly from the Borel-Cantelli lemma and the Gaussian tail inequality. Lemma 4.2. Assume that (Q_n)_{n∈N} is a sequence of Gaussian random variables, defined on the same probability space, which converges in distribution to N(0, σ²) for some σ ∈ (0, ∞). Then, for any δ > 0, lim_{n→∞} Q_n/(log n)^{1/2+δ} = 0 a.s.
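A quick numerical look, illustration only and not part of the proof, can make the growth rate in Lemma 4.2 tangible. The setup below is an assumption chosen for simplicity (i.i.d. standard Gaussians as the sequence Q_n); it shows that the ratio |Q_n|/(log n)^{1/2+δ} drifts to 0 along the path even though the raw maxima of |Q_n| keep growing.

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.1
N = 10**6
q = rng.standard_normal(N)                  # hypothetical Q_n, i.i.d. N(0, 1)
n = np.arange(2, N + 2)                     # start at n = 2 so log n > 0
ratio = np.abs(q) / np.log(n) ** (0.5 + delta)
for k in [10, 10**3, 10**5, 10**6 - 1]:
    # the tail supremum of the ratio shrinks, mirroring a.s. convergence to 0
    print(f"n={n[k]:>8}  ratio={ratio[k]:.3f}  sup of ratio beyond n: {ratio[k:].max():.3f}")
```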
Our proof of the strong consistency of SLOPE is based on the strong consistency of the OLS estimator. The latter result is folklore, and we prove it in our setting. Proof.
The assumption that rank(X^(n)) = p implies that the matrix (X^(n))'X^(n) is invertible, and hence the least squares estimator of β is unique and has the form β̂^OLS_n = ((X^(n))'X^(n))^{-1}(X^(n))'Y^(n). Combining (9) with the fact that β̂^OLS_n → β a.s. and with the assumption on the tuning sequence, we conclude that β̂^SLOPE_n → β a.s.
For the negative result, suppose to the contrary that β̂^SLOPE_n → β a.s. and that lim_n n^{-1}(X^(n))'X^(n) = C. For λ_0 > 0 this yields a contradiction, since the inequality λ_0‖β‖_∞ ≤ (1/2)β'Cβ does not hold when β is sufficiently close to 0. However, the definition of strong consistency requires convergence for every value of the parameter β. We therefore prove that if the true parameter β satisfies λ_0‖β‖_∞ > β'Cβ/2 and lim_n λ^(n)_1/n = λ_0 > 0, then β̂^SLOPE_n does not converge to β.
Asymptotic pattern recovery in the orthogonal design
We again consider a sequence of linear models (7), but this time we assume that for each n the deterministic design matrix X^(n) of size n × p satisfies (X^(n))'X^(n) = nI_p.
Let β̂^(n) be the SLOPE estimator defined by (8). With the above notation we present the main result of this section: if the tuning sequence satisfies the stated growth conditions and there exists δ > 0 such that condition (11) holds, then patt(β̂^SLOPE_n) → patt(β) almost surely.
Since the space of models is discrete, we have to show that for large n, patt(β̂^SLOPE_n) = patt(β) a.s. We divide the proof into four parts, (a)-(d). The points (b) and (d) follow quickly from the strong consistency of β̂^SLOPE(n). To prove (a) and (c) we reduce the problem to the orthogonal design case, in which the minimized criterion depends on the data only through an OLS-type vector of the form β + ε̃^(n)/√n. Let π_n be a permutation of (1, 2, . . . , p) sorting the coordinates of β̂^OLS(n) by decreasing absolute value. By the strong consistency of the OLS estimator, taking n sufficiently large, we may ensure that the clusters of β do not interlace in β̂^OLS(n). We will show that if π_n(k), π_n(k + 1) ∈ S_i, then (12) holds for large n, and thus β̂^SLOPE_j(n) = β̂^SLOPE_k(n) for j, k ∈ S_i, which finishes the proof of (a). Now assume that π_n(k), π_n(k + 1) ∈ S_i. Then, by Theorem 3.2, the condition (12) is satisfied if (13) holds for large n and both β̂^OLS_{π_n(k)}(n) and β̂^OLS_{π_n(k+1)}(n) have the same sign. The latter is ensured by the strong consistency of the OLS estimator and the fact that β_i > 0. If π_n(k), π_n(k + 1) ∈ S_i, then we have the bound (14). Take any j ∈ S_i. Since both β̂^OLS_j(n) and β̂^OLS_i(n) have normal distributions with the same mean, Lemma 4.2 applies. In view of (14) and (11), this implies that (13) holds true for large n. Hence, (a) follows. It remains to establish (c). Assume that β_{p_0} > 0 = β_{p_0+1} = . . . = β_p. Clearly, condition (a) from Theorem 3.3 is satisfied thanks to the strong consistency of the OLS estimator. For (b), the right-hand side converges to 0 for k = 1, 2, . . . , p_0, while the left-hand side of (b) converges a.s. to Σ_{i=k}^{p_0} β_i, which is positive. Thus, condition (b) from Theorem 3.3 holds for large n. Condition (c) from Theorem 3.3 follows from Lemma 4.2 applied for δ > 0 and k = p_0 + 1, . . . , p. Thus, all assumptions of Theorem 3.3 are verified and the proof is complete.
Applications and simulations
Below we present an application of SLOPE in signal denoising. In our example X ∈ R^{300×100} is an orthogonal system of trigonometric functions, i.e. X_{i,2j−1} = sin(2πij/n) and X_{i,2j} = cos(2πij/n) for i = 1, . . . , 300 and j = 1, . . . , 50. Here β ∈ R^p is a vector consisting of two clusters: 20 coordinates with absolute value 100 and 20 coordinates with absolute value 80. The absolute values of the coordinates of β are sorted in a decreasing way, and the signs of the nonzero coordinates are chosen independently and uniformly at random. To avoid the large bias caused by the shrinkage nature of LASSO and SLOPE, we debias them by combining them with the OLS method. For that purpose we use the pattern matrix U_M and the clustered design matrix X_M, which are based on the SLOPE pattern. To perform the debiased SLOPE, we begin by recovering the support and clusters of the true vector β with SLOPE. Then, using the obtained SLOPE pattern M, we replace the design matrix with its clustered version X_M = XU_M and perform ordinary least squares regression for the model Y = X_M b + ε, where b has one coordinate for each distinct absolute value of β̂^SLOPE. Analogously we proceed with the debiased LASSO; in this method, however, we use the LASSO pattern matrix, defined in the following way. For LASSO the pattern is a sign vector, cf. [22]. For S ∈ {−1, 0, 1}^p, ‖S‖_1 denotes the number of nonzero coordinates. If ‖S‖_1 = k ≥ 1, then we define the corresponding pattern matrix U_S ∈ R^{p×k} by U_S = diag(S)_{supp(S)}, i.e. the submatrix of diag(S) obtained by keeping the columns corresponding to indices in supp(S). Then we define the reduced matrix X̃_S by X̃_S = XU_S.
Equivalently, we have X̃_S = (S_i X_i)_{i∈supp(S)}. The notion of a pattern matrix also appears in [4]. In our example ε ~ N(0, σ²I_n) and σ = 30. We compare the mean square error and the signal denoising of the classical OLS estimation, the LASSO with the tuning parameter λ_cv minimizing the cross-validated error, the debiased version of LASSO with λ = 5λ_cv, and the debiased version of SLOPE with the tuning vector Λ chosen as the scaled arithmetic sequence λ_i = 3.5(p + 1 − i). We also compare debiased SLOPE with debiased LASSO, as shown in Figure 4. The horizontal lines correspond to the true values of β. As one may observe, in the presented setting LASSO does not recover the true support, while debiased SLOPE perfectly recovers the support, signs and clusters.
Appendix
Proof of Lemma 3.1. Observe that the two optimization problems differ by (1/2)(Y'Y − Y'XX'Y), which does not depend on b; this implies their equivalence.
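As a concrete complement to the simulation study in Section 5, the following minimal Python sketch illustrates the pattern-matrix debiasing step. It is illustrative rather than the authors' implementation: the construction of U_M (one signed indicator column per cluster, ordered from the largest cluster level down) is an assumption standing in for the display that defines it, and the toy data and dimensions are hypothetical.

```python
import numpy as np

def pattern_matrix(M):
    """Hypothetical construction of the SLOPE pattern matrix U_M.

    M is an integer pattern vector (signed cluster ranks, zeros for the
    null coordinates).  U_M has one column per cluster, with entries
    sign(M_i) on the coordinates belonging to that cluster.
    """
    M = np.asarray(M, dtype=int)
    k = int(np.max(np.abs(M)))
    cols = [np.where(np.abs(M) == level, np.sign(M), 0)
            for level in range(k, 0, -1)]          # largest cluster first
    return np.column_stack(cols) if cols else np.zeros((M.size, 0))

def debiased_fit(X, y, M):
    """Debiasing step of Section 5: OLS on the clustered design X U_M."""
    U = pattern_matrix(M)
    XM = X @ U                                      # clustered design matrix
    b, *_ = np.linalg.lstsq(XM, y, rcond=None)      # one coefficient per cluster
    return U @ b                                    # expand back to R^p

# Toy usage with a hypothetical pattern assumed to be recovered by SLOPE:
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 6))
beta = np.array([100, -100, 80, 80, 0, 0], dtype=float)
y = X @ beta + 30 * rng.standard_normal(300)
M = np.array([2, -2, 1, 1, 0, 0])                   # patt(beta)
print(debiased_fit(X, y, M).round(1))               # close to beta, no shrinkage bias
```

In the actual experiment the same steps would be applied with the trigonometric design and with M obtained from a SLOPE solver rather than supplied by hand.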
Biomechanical Comparison of the Simple Suture Technique, Meniscal Matrix-Assisted Repair, and a Novel Meniscus Cap Suture Technique for Complex Meniscal Repair
Background: Meniscal repair is the gold standard for simple morphology tears. However, when the morphology and chronicity of the tear are less favorable, the success of the standard techniques is reduced. Purpose/Hypothesis: To compare meniscal repair augmented by a new bioresorbable implant (Meniscus Cap) versus a traditional simple suture technique and the currently available augmented repair collagen matrix meniscus wrapping technique. It was hypothesized that the Meniscus Cap suture technique would provide a higher ultimate failure load and less displacement during cyclic loading. Study Design: Controlled laboratory study. Methods: A total of 80 fresh porcine menisci were harvested. Complex tears were created in 60 menisci, and 20 intact menisci were tested as the control group. Repairs were performed on the 60 meniscal tears using 1 of the 3 techniques (20 menisci each): an inside-out H-suture group (SS), the collagen matrix wrapping technique (CMW), and the Meniscus Cap bioresorbable implant group (CM). The menisci were subjected to 500 loading cycles from 4 to 20 N at a frequency of 1 Hz, and the total displacement was recorded. Then, the specimens underwent load to failure testing at a rate of 3.15 mm/s, and the failure mode was noted. Results: After 500 cycles of cyclic loading, there were no significant differences in displacement between the controls and the CM group (0.524 vs 0.448 mm; P = .95). The displacement after the CM was significantly smaller compared with the CMW and the SS (0.448 vs 1.077 mm [P = .0009] and 0.448 vs 0.848 mm [P = .04], respectively). The ultimate load to failure was significantly greater for the controls and the CM group compared with the SS and CMW groups (controls, 1278.7 N and CM, 628.5 N vs CMW, 380.1 N and SS, 345.1 N; P < .05). The failure mode was suture breakage (suture failure) for all repairs. Conclusion: In a porcine specimen meniscal repair model, the biomechanical properties of a novel Meniscus Cap repair technique were superior to those of the simple suture and CMW techniques. Clinical Relevance: The results suggest that the Meniscus Cap repair technique may provide sufficient primary stability of the meniscal fixation even in cases of complex meniscal tears.
annually, comprising 10% to 20% of all orthopaedic surgical procedures each year.1 Simple suturing of the meniscus utilizing various devices and materials is the best-known surgical technique for meniscal repair, but its indications are limited.20,23 Among the key factors influencing the healing of the meniscus after repair are the mechanical properties of the device or the suturing technique.20,23,15,28 Similar testing was performed for artificial meniscus scaffolds such as the Collagen Meniscus implant, marketed in Europe by Ivy Sports Medicine (also known as Menaflex), or the Actifit polymer (polyurethane) meniscus implant (Orteq).14,17 Because of the known benefits of meniscal repair, there is a growing tendency to expand the indications for meniscal repair utilizing sutures.5,6,19,21,22,24 Such procedures commonly include biological factors to boost the regeneration and healing of the tissue, for example fibrin clots or bone marrow products. In the past 20 years, various strategies have been developed for biologically supporting meniscus healing, sometimes with augments.3,16 An example of such a strategy is the suturing method of Jacobi and Jakob,18 which involves wrapping the meniscal repair site with a collagen membrane. While initial results were satisfactory, the technique proved difficult to recreate by other surgeons. In 2010, this technique was modified by Piontek et al26 into an arthroscopic procedure, placing a suture on the damaged meniscus and wrapping it with the collagen membrane, with bone marrow aspirate being injected between the collagen membrane and the meniscus. Follow-up results after 2 and 5 years demonstrated that the arthroscopic matrix meniscal repair (AMMR) technique is safe and that such augmented repair techniques may expand the indications for the repair of previously nonsalvageable menisci.29 Despite efforts to encourage meniscal repair, there is still a difference of opinion between higher- and lower-volume surgeons on which meniscal tears are potentially repairable and which need meniscectomy.4 Perhaps more education is needed, as was the conclusion in the article by Bąkowski et al4; nonetheless, an alternative solution would be a simpler surgical technique for augmented repair. In our opinion, this approach requires a new implant to facilitate simple arthroscopic introduction over the damaged meniscus. The implant forms a scaffold for cells to regenerate the injured meniscus, potentially leading to a full recovery because of the restoration of meniscal function. The implant mechanically stabilizes the fragments of the injured meniscus immediately after the surgical procedure to physically support the healing processes and facilitate early rehabilitation. The Meniscus Cap (Meniscus Cap; Sp. z o.o.)
is a new collagen-polycaprolactone meniscus covering comprising a set of 2 wings in the shape of the lateral or medial meniscus. Each plate is a composite structure of collagen membrane and a bracing polymer (polycaprolactone) skeleton, connected by a flexible hinge along the inner curvature of the implant. The polycaprolactone skeleton incorporated into the collagen membrane contributes to the stability of the repaired construct to a far greater extent than just the collagen membrane. Theoretically, forces generated in the meniscal repair area may be partially transferred to the intact edges of the meniscus through the augmentation graft. This load shielding by the graft may directly protect the underlying repair from gap formation. The morphological structure of the collagen membrane, together with the polycaprolactone skeleton fibres, is shown in Figure 1.
This study aimed to determine the biomechanical properties of 3 meniscal repair techniques (simple suture, repair with collagen membrane, and repair with the novel Meniscus Cap implant) for a complex tear of the medial meniscus compared with the intact meniscus (control group). In particular, we wanted to assess the biomechanical strength, stiffness, and ultimate mode of failure for each repair technique. It was hypothesized that the Meniscus Cap suture technique would provide increased ultimate failure load, increased stiffness, and less displacement during cyclic loading.
Overview of the Research Design
This was an in vitro biomechanical study on fresh human-sized porcine medial menisci from pigs aged 18 months. The porcine knees were obtained from a local supplier. The trial was performed in accordance with all recommendations established by Good Laboratory Practice regulations. Ethical approval was not required for this study.
Specimen Preparation
A total of 80 specimens were harvested intact from young adult pigs by resecting the tissue at the meniscocapsular junction together with the 2 insertional roots. All the medial menisci were inspected and exhibited no macroscopic signs of meniscal tear or degeneration. The resected menisci were wrapped with normal saline-soaked gauze before testing.
Complex tears were then formed in 60 of the menisci with a No. 11 surgical blade in the midbody section, equidistant from the anterior and posterior horns. The radial tears extended from the central margin to 1 mm from the peripheral meniscus rim. Next, the vertical tear was created perpendicular to the radial tear, extended for 10 mm, and crossed the radial tear halfway. For reproducibility of the radial and vertical tear pattern, a template was prepared to define the position of the meniscal transection (Figure 2, A and B). Finally, the horizontal tear was created on both sides of the radial tear, and the depth of the horizontal tear was 5 mm. The tear was a defect in 3 planes, through vascular involvement zones 1 to 3 of the meniscus, and was thus designated a complex tear (Figure 2C).
Randomization, Allocation, and Blinding
After preparation, the 80 meniscal specimens were randomly assigned to 4 equal groups of 20, with 1 control group and 3 groups of different meniscal repair techniques: the control group of intact porcine menisci; the H-suture without scaffold group (SS); the suture with collagen matrix group (CMW); and the suture with Meniscus Cap group (CM). For all specimens, the sutures were tied manually in an open fashion (Figure 3). SS Group. A rip-stop meniscal technique was employed using an "H-suture configuration" performed with 2-0 absorbable meniscal sutures (PDS No 2.0) (Figure 3B). The radial elements were performed first, analogous to an inside-out technique in knee surgery, with a vertical mattress suture configuration serving to reduce both vertical and horizontal components. These vertical radial sutures function as a rip-stop style reinforcement for the 2 horizontal mattress sutures that follow, performed as an inside-out technique, perpendicular to and over the top of the vertical mattress sutures at the radial tear area. One horizontal suture was placed on the femoral surface of the meniscus and another on the tibial surface.
CMW Group. The tear area was wrapped with a wet collagen matrix (Evolution; Osteobiol-Tecnoss) before suturing. The suture configuration was exactly as described for the SS group (Figure 3C).
CM Group. The tear area was covered with a wet Meniscus Cap before suturing. The suture configuration was exactly as described for the SS group (Figure 3D). The specimen to be tested was then mounted on a dedicated, custom-made test device that was connected to a material testing machine (Figure 4). Distracting loads were applied perpendicular to the radial meniscal tear to simulate a worst-case scenario of maximal traction across the tear/repair zone, which may be experienced at maximal axial loading in physiological conditions.
Assessments
Before cyclic testing, each repaired specimen was inspected to check for any suture slippage or tissue damage that may have been incurred during the repair and mounting process. Specimens with visible suture damage or unsecured knots were discarded; 3 of 60 were excluded.
Cyclic Load Testing
The intact and repaired menisci were securely fastened to the dedicated, custom-made universal tissue clamps with
textured surfaces to prevent tissue slippage. The menisci were aligned perpendicular to the radial tear and subsequently mounted to a mechanical testing system (Insight 50 kN; MTS Systems) (Figure 4). Because of the wide measuring range of the standard Insight 50 kN testing machine header, the pilot tests were performed with the simultaneous use of a Hottinger Messtechnik sensor; its measuring range made it possible to measure the traction force on the meniscus with the required degree of accuracy and precision. During pilot testing, the specimens were tested and subjected to the above-described roughening technique to confirm negligible slippage at the interface between the tissue and the clamps.
Primary Outcomes
The primary outcomes of this study were the amount of displacement during cyclic loading and the ultimate failure load. After a preload of 2 N was applied to the specimen, cyclic loading from 4 to 20 N was performed at 1 Hz. Both the intact and repaired menisci are characterized by different rigidity; therefore, to obtain a deformation frequency of 1 Hz, the displacement velocity of the testing machine traverse was set at 2 to 5 mm/s. The load and frequency were chosen based on previous studies and reflect in vivo postsurgical rehabilitation.2,8,9,11,12,14,15,17,28 Specimens underwent 500 submaximal loading cycles, and the MTS device was programmed to execute a 45-second pause at 500 cycles to facilitate data collection. Normal saline was applied to preserve the moisture of the specimens. The displacement value and corresponding load were recorded continuously in the software (TestWorks 4.0; MTS Systems). Gap formation, the increased distance between the clamps, was measured and recorded as the mean distance across the tear at 20 N at 500 cycles. Cycle 0 was a reference point for reporting the displacement in the subsequent 500 cycles. After completion of cyclic loading, load to failure testing was performed at a rate of 3.15 mm/s.15
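To illustrate how these two primary outcomes could be extracted from a recorded load-displacement trace, the sketch below is hypothetical: the file layout and column names are assumptions for illustration, not the actual TestWorks export format. It reports the displacement at the 20 N peak of cycle 500 relative to cycle 0, and the ultimate failure load as the peak force of the final ramp.

```python
import pandas as pd

def primary_outcomes(csv_path):
    """Hypothetical post-processing of one specimen's test record.

    Assumes a CSV with columns 'phase' ('cyclic' or 'failure'), 'cycle',
    'force_N' and 'displacement_mm'; this layout is an illustrative
    assumption, not the machine's native export.
    """
    df = pd.read_csv(csv_path)
    cyclic = df[df["phase"] == "cyclic"]

    def disp_at_peak(cycle):
        rows = cyclic[cyclic["cycle"] == cycle]
        # displacement recorded closest to the 20 N peak of that cycle
        return rows.loc[(rows["force_N"] - 20.0).abs().idxmin(), "displacement_mm"]

    gap_formation = disp_at_peak(500) - disp_at_peak(0)                # mm, cycle 0 as reference
    ultimate_load = df.loc[df["phase"] == "failure", "force_N"].max()  # N, peak of the ramp
    return gap_formation, ultimate_load
```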
Secondary Outcomes
The mode of failure was documented after careful inspection. The 3 possible failure modes were tissue failure (suture pulled through the tissue), suture failure (breakage of the suture material), or knot failure (usually knot slippage).
Statistical Analysis
One-way analysis of variance, together with post hoc Tukey analysis, was performed to determine the mean differences in the load to failure across the 4 groups and between any 2 groups. A similar analysis was performed for the results of specimen stiffness measurement during the load cycles (500 cycles total) to determine the differences between specimen groups. Data were analyzed using Statistica software Version 13.3 (TIBCO Software Inc). All comparisons were 1-tailed tests, and P < .05 was considered statistically significant. Before the analysis of variance, the hypothesis assuming a normal distribution of the ultimate failure load test results was verified with the Shapiro-Wilk test. The Shapiro-Wilk result for each examined group (n = 20 per group), with a P value > .95, supported the hypothesis that the test results in each population followed a normal distribution. In the next step, the Brown-Forsythe test was used to test the equality of group variance. The P value was > .05, confirming the assumption that there was equality of result variance across groups. Successful confirmation of the initial hypotheses then allowed analysis of variance employing the post hoc Tukey test. The resulting differences between individual populations were statistically significant, except for the SS versus CMW pairing.
Analogous to the description provided earlier, the test results for extension after cyclic loading were analyzed in a similar fashion. In the Shapiro-Wilk test, the W parameter had a P value > .90 for each group (n = 20), whereas the Brown-Forsythe test had a P value > .29. Thus, the test results confirmed the hypotheses that the populations had a normal distribution and that group variance was uniform. In the next step, the post hoc Tukey test was performed.
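For readers who want to reproduce this style of analysis outside Statistica, a minimal Python sketch is shown below. The group data are placeholders loosely echoing the reported group means, not the study's measurements; the Brown-Forsythe test is obtained as Levene's test centered at the median, and the post hoc comparisons use the Tukey HSD implementation from statsmodels. Note that these are the standard omnibus and two-sided versions, not the 1-tailed comparisons reported in the study.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
# Placeholder ultimate-failure-load samples (N), 20 specimens per group (simulated data).
groups = {
    "Control": rng.normal(1279, 150, 20),
    "CM":      rng.normal(628, 100, 20),
    "CMW":     rng.normal(380, 80, 20),
    "SS":      rng.normal(345, 80, 20),
}

# Normality within each group (Shapiro-Wilk)
for name, x in groups.items():
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(x).pvalue, 3))

# Homogeneity of variance (Brown-Forsythe = Levene with median centering)
print("Brown-Forsythe p =", stats.levene(*groups.values(), center="median").pvalue)

# One-way ANOVA followed by Tukey HSD post hoc comparisons
print("ANOVA p =", stats.f_oneway(*groups.values()).pvalue)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 20)
print(pairwise_tukeyhsd(values, labels))
```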
Primary Outcomes
Table 1 and Figure 5 show the mean ultimate failure load value for the given populations, with minimum and maximum values, and first, second, and third quartile values. The P values for each pairing of ultimate failure load in the compared populations are recorded in Table 2.
Table 3 and Figure 6 provide each study group's mean displacement after cyclic loading. P values for each pairing of mean displacement values are provided in Table 4. The differences in results between individual populations were statistically significant for the following pairings: Control versus CMW, CM versus CMW, and CM versus SS.
Secondary Outcomes
In all repairs, the failure mode was suture breakage (suture failure).
DISCUSSION
The present study is the first to report on the biomechanical behavior of menisci repaired with the Meniscus Cap implant compared with intact menisci and with menisci repaired with simple sutures or collagen matrix wraps. The most important finding was that after 500 cycles of cyclic loading, there were no significant differences in
displacement between the intact meniscus and the Meniscus Cap repair method. We found that the mechanical characteristics of the complex meniscal tear repair can be significantly improved by using an additional Meniscus Cap scaffold. Consequently, the mechanical characteristics of the complex meniscal tear repaired and augmented with the Meniscus Cap implant can be similar to the biomechanical properties of the intact porcine medial meniscus. The primary stability of the repaired meniscus is the most important goal when aiming for a successful meniscal repair.7,20,23,26 To this end, numerous suture techniques have been developed and biomechanically tested. It was proven that sutures oblique to circumferential collagen fibrils showed better fixation than those parallel to circumferential fibrils.28,30 Buckley et al,9 in a cadaveric comparison of 3 radial repair techniques, showed no particular benefit of 1 technique; nevertheless, the use of vertical mattress sutures, as a rip-stop device, significantly reduced the likelihood of the sutures pulling through the meniscal tissue during ultimate failure testing in any radial repair method. This is an important part of any radial repair technique but is particularly relevant for complex or 3-zone radial tears or where any meniscal adjuncts are being considered, perhaps in patients with suboptimal meniscal tissue. Furthermore, as opposed to previous studies of radial repair techniques, which reported failure strengths ranging from 62 N to 250 N,2,8,9,11-13,15,28 all 3 repair techniques tested in this study had failure strengths of greater than 267 N. The results obtained for our SS study group (simple suture) samples are promising in the context of mechanical stabilization of complex (3-zone radial) meniscal tears.
The second possible way to enhance the mechanical properties of the suturing area is augmentation of the treated area with scaffolds; such a technique was initially proposed by Henning et al.16 This technique entails wrapping the meniscus with autologous fascia harvested from the pes anserinus area. This procedure was further developed to employ the collagen membrane as the material for meniscus wrapping, leading to what is presently referred to as the AMMR procedure.10,18,25,26 Although it seems possible to alter the mechanical characteristics of the sutured area by employing a scaffold, no scientific evidence is available to confirm this. The double-sided scaffold-augmented repair technique that we propose in this study has integrated an additional structural element into the collagen membrane, creating a stronger repair construct, which is proven in our biomechanical testing. Sutures supported with scaffolds offered better primary stability and improved the strength of the repair construct. The Meniscus Cap scaffold has greater potential for biomechanical enhancement of meniscal suturing than the simple collagen wrap (CMW), which is employed in the previously reported AMMR technique. The superior strength of the Meniscus Cap and suture construct was also proven after 500 loading cycles. Perhaps the most striking result of our study was that the Meniscus Cap and suture group showed less displacement even than the intact meniscus group; in other words, the length change of an experimentally divided medial meniscus that was then repaired with an H suture and a Meniscus Cap was less than, but in the same order as, the stretch of an intact meniscus. The necessary failure strength to resist displacement at the repair site to allow for an optimal healing environment for the meniscus has not been determined. However, the enhanced apposition of each meniscal tear end is likely beneficial, and a stronger construct that can resist displacement is likely to be favorable, as long as the construct is not too stiff. Given that the overall length changes are similar in the control group and the Meniscus Cap group, we are cautiously optimistic that the tensile strength of the Meniscus Cap is in the correct therapeutic range such that, when applied to a damaged meniscus, it produces a physiological response of the composite construct to loading.
The failure mode in the present study did not differ across all 4 groups.5,28 This contrasts with the failure mode of currently available meniscal scaffolds. In all specimens tested by Gwinner et al,14 the failure mode of the Collagen Meniscus Scaffold was a complete disruption of the scaffold integrity. The same results were observed by Hoburg et al17 for the Actifit implant. The mechanical stabilization of both artificial meniscus implants depends on the suture materials and the biological environment of the knee joint. The main load to failure was 36.2 ± 13.1 N for the Collagen Meniscus Scaffold and 53.3 ± 6.5 N for the Actifit implant. Consequently, their primary stability when employed in the meniscus as a bridging technique (in the study by Gwinner et al14) was not favorable compared with the Meniscus Cap and direct meniscal suture (in this study).
Limitations
There are several limitations to the present study. Although porcine menisci are similar in shape and function to human tissue, they are not perfect surrogates. Porcine menisci are thicker, denser, and smaller than human menisci and therefore may not serve to accurately mimic human meniscal biomechanical properties. However, 1 benefit was that the porcine menisci were harvested from same-aged pigs, allowing us to test their biomechanical behavior in a standardized fashion, without the confounding factors of highly variable degenerative menisci harvested from cadaveric donors. Porcine menisci were used in previous studies and found to be a good biomechanical model.11,12,15,30 Although human meniscus tissue would be the most representative explant model for meniscus injury healing, live tissue specimens from human sources are very limited in availability and would never be available at the same time as a fresh specimen for biomechanical testing, meaning that there would be an additional variable for freezing, storing, and thawing. Typical human patients who sustain meniscus injuries are members of the young and active population. Only fragmented explant is likely available from young patients undergoing partial meniscectomy. Whole meniscus explants, possibly obtained from total knee joint replacement surgeries, are likely undergoing an age- or osteoarthritis-associated degradation process. Therefore, menisci obtained from various animals are the primary sources for explant injury healing models. Menisci harvested from bovine, porcine, canine, equine, and caprine specimens are commonly used as explants to study injury repair and healing.29 This study aimed to determine the biomechanical properties of repair techniques for complex meniscal lesions. To eliminate confounding factors, complete radial, horizontal, and longitudinal tears were made with axial force applied perpendicular to the radial tear. However, such a setup did not fully reflect the actual physiological conditions in which compression, tension, and shear forces are applied to the meniscus simultaneously. Furthermore, the repair knots were tied manually, in an open fashion, and uniformly for all specimens, leading to very small gaps forming after cyclic loading. This is just not achievable in the arthroscopic all-inside or inside-out suture techniques that are contemporary knee surgical practice today. This study design simulates the immediate postsurgical rehabilitation, where there is no healing and the repair is more vulnerable to damage. Since the authors could not find studies on suture strength in complex meniscal lesions in the literature, test protocols for radial lesions were adopted for this study.2,15,28 The results were also compared with studies on the biomechanical evaluation of sutures available on the market for meniscal scaffolds.14,17 However, as a control group, we used undamaged porcine menisci harvested from the same specimen series to test the suturing methods. In our opinion, such an approach allowed us to draw scientifically valid and clinically relevant conclusions regarding the tested methods of meniscal suture with or without augmentation by a scaffold. Despite all 3 techniques demonstrating significantly improved strength and stiffness, it is still unknown to what degree the strength and stiffness of the repaired construct contribute to the ideal healing environment to achieve the best clinical outcome.
CONCLUSION
In a porcine specimen meniscal repair model, the biomechanical properties of a novel Meniscus Cap repair technique were superior to those of the simple suture and CMW techniques. Future studies of biomechanical and clinical outcomes in human meniscal repairs with this device are warranted to explore whether this repair method is valuable to clinical practice and patient outcomes.
Figure 1. (A) A macroscopic view of the Meniscus Cap implant comprising a set of 2 wings in the shape of the meniscus, each in the form of a composite layer of collagen membrane (black star) and bracing polymer (polycaprolactone) skeleton (black arrow) connected by a flexible hinge along the inner curvature of the implant. (B) A scanning electron microscopy image showing the polycaprolactone fiber diameter of the Meniscus Cap scaffold (white arrow) and the collagen membrane (white star).
Figure 2. Template of a meniscal tear. (A) Meniscal samples: a medial meniscus placed in the template. (B) The template of radial and vertical tears created with a No. 11 surgical blade. (C) A cross-section of the complex meniscal tear created in 3 planes across vascular involvement zones 1 to 3, indicating that the tear crossed the varying vasculature of the meniscus.
Figure 3. Meniscal specimens. (A) The medial meniscus harvested intact from adult pigs without intervention (control group). (B) The medial meniscus repaired without scaffolding by the "H" suture technique. (C) The medial meniscus repaired with a collagen membrane with the same suture pattern. (D) The medial meniscus repaired with a Meniscus Cap scaffold with the same suture pattern.
Figure 4. (A) Mechanical testing setup. (B) A repaired meniscus securely fastened to the dedicated, custom-made universal tissue clamps with a textured surface to prevent tissue slippage.
Figure 5. A box-and-whisker plot of the ultimate failure load according to study group. The X and the horizontal line represent the mean and the median, the top and bottom of the box are the first and third quartiles, and error bars represent the range. CG, control group; CM, Meniscus Cap bioresorbable implant; CMW, collagen matrix wrapping technique; SS, inside-out H-suture.
Figure 6. A box-and-whisker plot of displacement after cyclic loading. The X and the horizontal line represent the mean and the median, the top and bottom of the box are the first and third quartiles, and error bars represent the range. CG, control group; CMW, collagen matrix wrapping technique; CM, Meniscus Cap bioresorbable implant; SS, inside-out H-suture.
TABLE 1. Ultimate Failure Load by Study Group. Data are reported in N. CM, Meniscus Cap bioresorbable implant; CMW, collagen matrix wrapping technique; SS, inside-out H-suture.
TABLE 2. P Values for Each Pairing of Ultimate Failure Load Between Study Groups.
TABLE 3. Displacement After Cyclic Loading by Study Group. Data are reported in mm. CM, Meniscus Cap bioresorbable implant; CMW, collagen matrix wrapping technique; SS, inside-out H-suture.
TABLE 4. P Values for Each Pairing of Displacement After Cyclic Loading. Bold P values indicate a statistically significant difference between groups (P < .05). CM, Meniscus Cap bioresorbable implant; CMW, collagen matrix wrapping technique; SS, inside-out H-suture.
Rhetorics of the Visual: Graphic Medicine, Comics and its Affordances
Affordances, in the context of comics, denote the general attributes of the medium such as temporality, spatiality, gestures, tone/handwriting and economy. Although comics evinces a dynamic relationship among these elements, it is possible to delineate the functional and rhetorical roles of these affordances on a conceptual and technical level. Taking these cues, the paper, after briefly reviewing the definition and scope of graphic medicine, aims to demonstrate the functional and rhetorical role of the aforementioned affordances in communicating illness and illness-related experiences. Among other issues, this article also seeks to address the following: how do comics engage in the visual and verbal translation of the experiences of chronic illness? How do the affordances of comics facilitate the readers' haptic experience of an author's subjective trauma? Despite its juvenile legacy, comics capacitates graphic medicine to represent the physical and emotional aspects of narrating subjective illness experiences within the medium. The paper concludes that comics is a uniquely suited communicative medium as it diagrams the interiority of illness experience and, in the process, evolves into a locus of tacit knowledge through its translation and mediation of emotional truths and affective states altered by illness.
Graphic Medicine: Definition and Scope
Before addressing the aforementioned issues, it would be instructive to examine the definition and scope of graphic medicine.1 There has been over the past decade a burgeoning of a new breed of comics dealing with the patient experience of illness or caring for others with an illness. Christened 'graphic medicine' by the British doctor and graphic novelist Ian Williams, these graphic illness narratives allude to "the intersection of the medium of comics and the discourse of health care" (Czerwiec, 2015, p. 1). Emphasizing the uniqueness of these narratives, Sathyaraj Venkatesan defines graphic medicine as "comics' distinctive engagement with and performance of illness experience" (2016, p. 93). Although these narratives are predominantly autobiographical, they also address various socio-cultural issues impinging on health care, such as medical negligence; the vexed doctor-patient relationship; the industrialism of health care; patient identity; the role of insurance providers; the challenges of caretaking; and the demands of being a doctor in a commercialized healthcare sector, among others.
Although, as early as 1972, Justin Green's Binky Brown Meets the Holy Virgin Mary explored the mental torments of Obsessive Compulsive Disorder (OCD), it was Al Davison's The Spiral Cage (1990) and Harvey Pekar and Joyce Brabner's Our Cancer Year (1994) which launched graphic medicine definitively. Since then there has been a plethora of graphic medical narratives which deliberate on various illness experiences ranging from AIDS to Asperger Syndrome, narrated by doctors, professional caregivers or patients themselves. Prominent graphic medical narratives include Marisa Acocella Marchetto's autobiographical Cancer Vixen (breast cancer), Brian Fies's memoir Mom's Cancer (lung cancer), MK Czerwiec's Comic Nurse, Sarah Leavitt's Tangles (Alzheimer's), Stan Mack's Janet and Me (breast cancer), among others. While the pedagogical role of comics within healthcare (a.k.a. education comics) has been acknowledged, these graphic illness narratives, mostly from the patient's perspective, constitute a new form of knowledge which challenges a merely medical and diagnostic approach to disease.
"Choreographing and Shaping Time": Spatio-temporal Dimensions of Comics
Comparing comics with other media, McCloud contends thus: "[i]n every other form of narrative that I know of, past, present, and future are not shown simultaneously-you are always in the now. And future is something that you can anticipate, and the past is something you can remember. Comics is the only form in which past, present, and future are visible simultaneously" (McCloud, 2007). Discerned as one of the seminal features of comics, temporality allows cartoonists to "choreograph and shape time" in immobile frames in that it juxtaposes past, present and imagined future within a single visual design (Spiegelman, 2005, p. 4). Lefevre exalts this dexterous palimpsesting of different temporalities on the same visual scheme as "multitemporality" (2011, p. 24). In the context of illness narrative, this juxtaposition of past, present and future in a single frame enables diseased subjects to visually construct their lived body experience while mapping their traumatic progression from past to present. Especially in cases of chronic illnesses like AIDS and breast cancer, the sufferer's yearning for the past healthy body is often juxtaposed with the trauma and stigma of the present sick body (usually in a single panel) to differentiate and identify their lived experience before and during/after the illness. In effect, the spatio-temporal affordances of comics help cartoonists to delineate the "hybrid subjectivities" (Chute, 2010, p. 5) caused by illness.
Further reinforcing the faculty of comics, Lefevre's idea of "temporal flexibility" (2011, p. 24) denotes the stretching of a moment in time to reproduce varied vignettes of an experience. Acknowledging that there is no objective way to determine the time encapsulated within a panel, Lefevre accents the flexibility of the comics medium in that it can disrupt the spatio-temporal conventions of traditional narrative, which follows a linear succession of events. Attesting to the complex relationship between "the sequence of events happening (chronology) and the sequence in which they are narrated (narrative line)," Bredehoft argues that comics offers the possibility of a narrative mode that disrupts the uni-directional and irreversible nature of time. The two-dimensional architecture of the comics page "allows comics narration to break the linearity of a time-sequenced narrative line" (2006, p. 872). It is this diversion from linearity, coupled with temporal flexibility, that makes comics the preferred medium for narrating auto/biographical, historical, and illness experiences.
In fact, comics allows the author to recount memories experientially in narrative/subjective time sequence, as against the ideal of chronological/objective time, and thus remains faithful to the experiences of the self. Invoking Henri Bergson's philosophy of clock time and subjective/psychic time, Elisabeth El Refaie differentiates between chronological and narrative time sequence in comics. Accordingly, distinct from clock time which consists of "abstract, homogenous, infinitely divisible segments," la durée or psychic time is a single indivisible temporal entity which is felt "intuitively, from our actual experience" (2012, p. 96). A segment in clock time may contract or extend according to an individual's actual experience of time. In cases of trauma and illness this experience of time can be truthfully and authentically represented by stretching it across multiple panels. As the multiple self-portraits and avatars in illness narratives depict the yearnings and threats of a diseased body, spatio-temporality aids the cartoonist in suturing the various layers of a person's self which are fragmented by disease. For instance, Michael Green and Ray Reick's medical intern narrative Missed It (2013) demonstrates the specific potentials of temporality. Green recollects in his five-page comic a clinical error that he committed during his internship and projects the lasting effects of the traumatic guilt that haunted him for years. Juxtaposing two panels that trace Green's transformation from an intern to a doctor, Missed It culminates with an image that palimpsests two phases of Green economically: the present self engaging the major portion of the panel and the past self in the inset, both manifesting the same expression of regret over the clinical error (see fig. 1). In spite of the temporal distance of twenty years, the guilt in his eyes remains unaltered, signifying the enduring effect of the past in the present.
In a different vein, Marisa Acocella Marchetto's Cancer Vixen: A True Story recounts the author's cancer diagnosis in subjective time sequence. While the author reiterates the specific moment "10:12 am" in the caption (see fig. 2), she also, in seriating panels, narrates her subjective experience of trauma as being siphoned into a black hole (see fig. 2). This surreal experience of illness trauma, detailing Marisa's sense of shock and despair, extends over pages as opposed to one second of objective clock time. Elsewhere her subjective experience of time freezes for a moment as she remarks: "I was alone, afraid, frozen in time for an eternity in a vast expanse of nothingness, surrounded by dark matter" (Marchetto, 2006, p. 9). The subjective time which Marchetto converges and sedates in her narrative is, as Miller reminds us, "cancer time" that pervades the everyday experience of time (2014, p. 219). Marchetto vindicates her savouring of each moment in cancer time as a veritable expression of her subjective experience of illness. As a matter of fact, it is the dexterous manipulation of the comics medium's temporal flexibility which aids the artist in retrieving and translating her subjective experience of trauma in la durée/psychic time.
Gutter Space and Closure
Gutter, as Scott McCloud asserts, "plays host to much of the magic and mystery that are at the very heart of comics" by enabling "the human imagination [to synthesize] two separate images and transform[] them into a single idea" (1994, p. 66). Put differently, readers understand episodes of a comic story through the gutter space and by imagining the action that happens between the panels. This process of filling in by "observing the parts but perceiving the whole [is] called closure" (1994, p. 63). Closure in comics not only aids the readers in understanding the narrative but also makes them active collaborators in the process of making meaning. Moreover, the gutter simulates time and motion between two panels as it performs a cross-panel function despite the space which separates them. Here the movement and experience are recreated in the gutter, where the reader shares the author's creative vision in fragments of time.
Graphic pathographies trace the bizarre shift in the routine and order of life caused by illness. Invoking the theoretical postulates of Heidegger, Carla Willig remarks how the experience of illness reminds one of one's own existence and mortality. Particularly in the case of chronic illness, one experiences unpleasant "throwness" "apparently without rhyme or reason, without warning or preparation, without explanation and without any choice in the matter" (2009, p. 182). Such traumatic conditions that constitute the illness experience are the major theme of pathographies; to quote Ann Hawkins, "[l]ife becomes filled with risk and danger as the ill person is transported out of the familiar everyday world into the realm of the body that no longer functions . . . life in all its myriad dimensions is reduced to a series of battles against death" (1999, p. 1). This rupture and disruption in the normal order of events parallels the intrinsically fragmented structure of comics. Otherwise stated, graphic medical narratives make use of the fragmented layout of comics contained in "boxes of time" (Chute, 2010, p. 6) in order to faithfully capture a patient's fractured sense of time and (corrupted) memory. Trauma is a "comparable liminal moment, which sharply demarcates a before and an after and which eludes both representation and interpretation" (Versluys, 2006, p. 986). In a conventional comics page layout, the gutter functions as a liminal space between the fractured pieces of traumatic memory wherein the reader sutures the disjointed "wounds" (Chaney, 2011, p. 5) and provides a desired closure. The closure thus achieved provides a textual completion and conclusion to trauma. Put differently, Versluys, borrowing Huyssen's notion of "mimetic approximation," outlines how "traumatic experience [which] is inaccessible to language" (2006, p. 988) and characterized by a complete rupture of symbolic resources is mimetically approximated in the comics medium by means of gutter space and layout. Although trauma resists exact representation, the comics medium, aided by its basic page structure and gutter space, grants a mediated authentication which in turn enables graphic pathographers to revive traumatic moments from the past and reconstitute the disrupted self out of its inherent meaninglessness.
In Billy, Me & You: A Memoir of Grief and Recovery, Nicola Streeten reconstructs the memories of Billy, her two-year-old son, his death and the subsequent trauma in her life. As the panels recollect and record the traumatic vignettes of the past, the reader seams the grief caused by the untimely death and loss of the child by providing closure to the fragmented pieces of memory (see fig. 3). Though the time span between each panel extends to hours of sorrow and relief, the reader absorbs the intensity of trauma which saturates the gutter space. In fact, the author, and subsequently the reader, participates in the mimetic approximation of a traumatic event and provides the much-desired closure of the wound. Illness, apart from disrupting the biographical continuum, fragments and sometimes corrupts the memory of a sick person. The intrinsically fragmented structure of comics facilitates the artist in recounting episodes of trauma caused by illness. Michael Chaney in Graphic Subjects characterizes this as "a serial recuperation of trauma on a structural level . . . the way gutters (or wounds) separating one pictorial panel from another are routinely resolved in order to create meaning and coherence [through] . . . 'closure'" (2011, p. 5). In effect, the untranslatable trauma that the author leaves in the gutter space is transformed into an affective experience as the reader moves across the panels. Engaging the readers thus in the fragmented traumatic moments or experiences, "comics offer a window into the subjective realities of other sufferers and provide companionship through shared experience in a more immediate manner" than any other medium (Williams, 2012, p. 5).
"Expressive anatomy": Gestures
Identified as one of the seminal aspects of the comics medium, gestures embody human emotions and facilitate the effectual and eloquent narration of the "internal feelings" (Eisner, 2008, p. 105) of the characters. Comics, as Squier attests, "direct our attention to the meaning conveyed by the body and its movements, gestures, and postures" (Czerwiec, 2015, p. 49), and, in being so, it emerges as a performative and dynamic medium. As Berninger puts it in Comics as a Nexus of Cultures: Essays on the Interplay of Media, Disciplines and International Perspectives, comics is a "collection of gestures, complicated assemblages of bits and pieces which can be dissembled, labelled and examined carefully" (2010, p. 244). Depending on how well the readers relate to a certain image, gestures "invoke a nuance of emotion" in the character (Eisner, 2008, p. 106). Providing leeway for the author to establish a visceral bond with the reader, gestures render immediacy by inviting the reader into the diegetic premises. As Eisner explains, "[i]n comics, body posture and gesture occupy a position of primacy over text. The manner in which these images are employed modifies and defines the intended meaning of the words" (2008, p. 106). In graphic medical narratives, especially while depicting trauma, gestures accentuate ethereal emotions when pain erodes verbal language. Elaborating on the erasure/erosion of language caused by pain, Elaine Scarry contends thus: "[p]hysical pain does not simply resist language but actively destroys it, bringing about an immediate reversion to a state anterior to language, to the sounds and cries a human being makes before language is learned" (1985, p. 4). In kindred states of verbal deficit, comics, through utilizing facial movements, eyebrows, eyes, eyelids, lips, jaws and cheeks, "amplif[ies] meaning" and articulates the emotional aspect of human experiences that usually circumvents even the nuances of verbal language (Quesenberry, 2016, p. 77). Elsewhere Squier states thus: "[i]n their attention to human embodiment, and their combination of both words and gestures, comics can reveal unvoiced relationships, unarticulated emotions, unspoken possibilities, and even unacknowledged alternative perspectives" (2008, p. 130). Stated otherwise, through its "expressive anatomy,"2 comics embodies subjective illness experiences and visualizes subtle unspeakable emotions, thereby deepening the reader's involvement in the diegesis.
Fies' Mom's Cancer is an account of the author's mother, Barbara's struggle with metastatic lung cancer and her subsequent recovery from the same. Created in the latticed traditional format of comics, Mom's Cancer evinces how gestures facilitate the lucid expression of inscrutable human emotions and trauma. Towards the end of her cancer treatment in chapter 32, titled 'The Five Percent Solution,' Fies pictures Mom facing the readers with a melange of emotions as she realizes that only five percent of cancer patients survive the treatment. Mom absorbs this medical fact with horror, befuddlement and disbelief. For a moment Barbara becomes intensely fervid with conflicting emotions of contentment and trepidation. Accordingly, Mom's eyes are dilated with panic, tears overflowing, mouth agape with disbelief, depicting illness trauma (see fig. 4). Succinctly capturing the moment when language is eroded by emotional excess, the panel demonstrates the communicative faculty of gestures.
In a typical graphic medical narrative, the meaning and implications of medical treatment and illness experience are distilled through "movements, posture, and gestures of the body" and, in most cases, gestures take "precedence over the words in a text" (Squier, 2015, p. 49). In short, the incoherence, uncertainty, pain and other such extralinguistic phases of illness experience often assume the form of gestures and body movements. Put differently, the inadequacy of verbal structures is leveled by "the alternative cognitive structures of the visual" (Hirsch, 2004, p. 1211). Gestures, which constitute this alternative structure, thus become inevitable for graphic pathographies in relaying human emotions and the feelings of pain and trauma of illness.
"The mark of its maker": Handwriting
The comics medium engages with and concretizes the abstract subjective experience of illness also through lettering, coloring, and handwriting. Eisner asserts the pictorial quality of letters in Comics and Sequential Art thus: "[l]etters are symbols that are devised . . . out of familiar forms, objects, postures and other recognizable phenomena" (2008, p. 8). When an artist engages in the skillful manipulation of this "seemingly amorphous structure," she is in fact infusing her stories with deeper significance and relevance. In short, handwriting in its own terms provides comics a visual dimension.
In the context of graphic pathographies, handwriting, through the marks/traces on the paper, engages the reader with the realism of the text, accords immediacy to the narrative and also authenticates the author's subjective experience. Although the rise of printing technology allowed experimenting with typeface and color, Eisner contends that "[t]ypesetting does not have a kind of inherent authority" of the written expression (2008, p. 26). That said, unlike textual narratives, comics engages with the reader not only through the images but also through the visual attributes of the word. Johanna Drucker, while tracing the effect of print technology in relation to graphic novels, associates handwritten language in comics and graphic novels with "voice, authorial enunciation, and character" (2008, p. 42). Handwriting is thus part of the "extra-semantic information" that the reader receives, making, in one sense, all graphic narratives "autographic" (Chute and DeKoven, 2006, p. 767). Unlike the experience of reading a verbal text, "the reader is urged to feel a more personal connection to the text" through the inclusion of handwriting and its varied styles (Cour, 2010, p. 52). Friedrich Kittler's distinction between the "private exteriority" of handwriting and the "anonymous exteriority" of print also explains the reader's sense of immediacy to the author's experience (as cited in Cour, 2010, p. 52). Put differently, handwriting renders access to the subjective world of the author as against the objective quality of print. Sarah Leavitt's Tangles: A Story About Alzheimer's, My Mother, and Me, which meditates on the degenerative neurological disease of Midge, her mother, deploys handwriting as a visual tool to diagram the degradation of her mother from personhood to patienthood. Leavitt deploys handwriting as a reflection of the "cruel relentless progression of losses" (n.p) caused by Alzheimer's, as she herself admits: "[i]t took me a while to decipher her new handwriting" (2012, p. 40). Vaidehi Ramanathan's notion of identity erosion, postulated as "meet[ing] their own absence" (2009, p. 78), finds a philosophical resonance in Midge's loss of linguistic identity. In a chapter titled "Diagnosis," Midge's transmuted personhood, characterized by a loss of linguistic competence and an obliterated identity, is traced through her "new handwriting" (n.p) (see fig. 5).
In reproducing the handwritten letters/notes of her mother, or the "tiny notes" (2012, p. 34) as Leavitt calls them, the author not only divulges the personal identity crisis and the loss of linguistic identity but also unravels the degeneration of Midge's cognitive and motor skills. Reflective of her deranged mind, Midge's struggle with herself is evident in her handwriting, which resists the norms of writing neatly, without errors, and in a straight line. Tangled with emotions, the handwriting as a technique of embodiment relays the angst of self-expression. Similarly, notes that are written and crossed out signify Midge's endless struggle to reclaim her prior self. Interestingly, beyond the aforementioned uses, handwriting as the subjective mark of the creator is also a strategic tool by which the author wins the reader's trust.
Eloquence of Visual Expression: Economy
Yet another feature of comics is its economy of expression, which aids readers to comprehend the intricacies of illness more lucidly than a written text. As comics spatializes time, it also has the potential to narrate life stories in a succinct manner. Randy Duncan refers to "reductive devices" (2015, p. 112) such as synecdoche, metonymy, symbols and sequence metaphors as various means to accomplish economy in comics. When these devices function to generate a distinct meaning from juxtaposed images or parts of them, they economise the comics page and add eloquence to its expression. For instance, in Ellen Forney's Marbles: Mania, Depression, Michelangelo, and Me (2012) the memoirist depicts her bipolar disorder through the visual iconography of her riding a carousel horse in a single panel (see fig. 6). Here the image poignantly captures the sufferer's varying mood as a ride on a carousel: in the manic state, she is at a great height, balancing on the horse's back with one leg; in the depressed condition, she slides down from her horse and is curled up on the floor. Although such an emotional shift might span a week or a month in real time, Forney encapsulates her particular mental state of bipolar disorder in a single panel.
Similarly, Fies modulates colour, focus, and panel size to illustrate his mom's absence seizure and its effect on her daughter (see fig. 7). From the first panel, the background of the panel and other features alter as Mom assumes a static pose. While Mom's eye colour changes from black to white, like that of someone slipping out of consciousness, the background colour shifts across shades of grey to black and the panel size contracts to foreground Mom's blank expression. The effect of the seizure, and the anxiety and fear that grip her daughter, is commendably economised in these accordion-like panels.
Implications of Comics Affordances in Graphic Medicine
Confirming the potency of comics as opposed to verbal language, Green asserts thus: "comics can give voice to the unsettling worries and concerns that may be difficult to articulate through words alone" (2015, p. 774). The affordances unique to the medium of comics provide multiple ways for artists to combine "subjective feelings and perceptions with the objective visual representation" (Czerweic, 2015, p. 19). Although comics establishes and accomplishes a performative and vital relationship among all its attributes within a diegetic space, it is possible to delineate their functional/rhetorical roles on a technical and conceptual level. Accordingly, if the spatio-temporal aspect enhances the efficacy of the medium to map fragmented illness experiences, then gestures and economy evoke an affective response in the reader. In a different way, gestures make comics more dynamic through their realistic relaying of human emotions, while handwriting reveals the interiority and emotional state of the artist. Put differently, the exhaustiveness of all these attributes makes comics highly expressive, economical yet succinct, and an ideal medium to represent illness experience.
To conclude, comics is a befitting medium for the cogent encapsulation and translation of trauma/illness into "imagetexts" in that the affordances of the medium, such as temporality, spatiality, gestures, handwriting and economy of expression, enable graphic pathographers to articulate emotional accounts of subjective illness experiences and their attendant phenomenological truths. While visibilizing the intricate intrapsychic experiences of illness, these affordances also allow a unique reincarnation of the author into the intradiegetic realms of the narrative through embodiment. In essence, the structural singularity and formal affordances of the medium render comics singularly suited to the articulation of illness and trauma.
A Case of Significant Coagulopathy Due to Vitamin K Deficiency Caused by the Administration of Cefazolin and Rifampicin and Hyponutrition After a Postoperative Infection of the Lumbar Spine
Postoperative surgical site infection in the lumbar spine is one of the serious complications that sometimes results in death. Herein, we describe a case in which a patient was found to have coagulopathy due to vitamin K deficiency when he was transferred to a hospital for treatment for a postoperative infection of the lumbar spine. The coagulation disorder was caused by antimicrobial agents administered to the patient, who was suffering from hyponutrition. The patient was a 70-year-old man with a history of diabetes mellitus. He was diagnosed with lumbar spinal canal stenosis and underwent posterior decompression of the L2-L5 and S1 laminae at a previous hospital five months before transfer to our hospital. Four months before transfer, purulent discharge was observed from the wound, and methicillin-susceptible Staphylococcus aureus was detected in the wound culture. Cefazolin was administered for two weeks, resulting in initial improvement. However, one month before the transfer, the wound infection recurred, accompanied by bacteremia and a psoas abscess. He had been treated with cefazolin, levofloxacin, rifampicin, trimethoprim, and sulfamethoxazole, but the antibiotics' effects were insufficient. Upon transfer for debridement surgery due to uncontrolled infection, his coagulation parameters were as follows: prothrombin time (PT) 74.0 sec, PT-international normalized ratio (INR) 6.69, PT% 9.0, activated partial thromboplastin time (APTT) 138 sec, fibrinogen (FIB) 664 mg/dl, fibrin degradation products (FDP) 7.1 μg/ml, and protein induced by vitamin K absence-II (PIVKA-II) 34400 mAU/ml. Because we suspected vitamin K deficiency, vitamin K 40 mg was administered as a test dose, and coagulation function improved to PT 16.4 sec, PT-INR 1.30, PT% 65.2, and APTT 79 sec after four hours. The diagnosis of vitamin K deficiency was confirmed, vitamin K was administered for four days, and the coagulation status normalized five days after transfer. Debridement was performed for the left psoas abscess. Cefazolin was administered for eight weeks, and administration was completed. The coagulation abnormality did not recur due to careful attention to his nutrition. We experienced a case of coagulopathy due to vitamin K deficiency caused by antimicrobial agents administered to a hyponourished patient with a postoperative infection of the lumbar spine. The cause of vitamin K deficiency, in this case, was thought to be low nutrition, suppression of vitamin K-producing bacteria by cefazolin and rifampicin, and the use of cefazolin with a methyl-thiadiazole thiol group. It should be kept in mind that severe coagulopathy due to vitamin K deficiency caused by antimicrobial treatment with hyponutrition can occur in postoperative infections.
Introduction
The incidence of lumbar spinal canal stenosis in Japan is about 9.3% [1], and it is a common disease in the elderly. Posterior decompression is one of the standard procedures for lumbar spinal canal stenosis, and the incidence of postoperative infection after surgery for lumbar spinal canal stenosis is 2.1%~3.1% [2]. The most common organism causing postoperative wound infection is Staphylococcus aureus, followed by Staphylococcus epidermidis [3]. Delayed treatment of postoperative lumbar infections can progress to serious infections such as iliopsoas abscesses and epidural abscesses. Because a serious postoperative infection is often complicated by osteomyelitis of the vertebrae, the duration of antimicrobial therapy for postoperative infection is similar to that for pyogenic spondylitis. For pyogenic spondylitis, a minimum of six weeks of antimicrobial therapy is recommended [4]. If the infection is difficult to control with antimicrobial agents alone, as when abscess formation leads to deterioration of the general condition, surgical treatment may be necessary to save the patient's life.

The patient was a 70-year-old man with a history of diabetes mellitus who had undergone posterior decompression of the L2-L5 and S1 laminae for lumbar spinal canal stenosis at a previous hospital five months before transfer to our hospital (Figure 1). Purulent discharge was found to drain from the wound four months before transfer to our hospital, and methicillin-susceptible Staphylococcus aureus was detected in the wound culture. The patient was treated with cefazolin 3 g/day for two weeks in the previous hospital, and no additional antimicrobial therapy was given. However, his back pain worsened. He visited the previous hospital and was readmitted 37 days before transfer to our hospital. Blood biochemistry tests revealed a white blood cell (WBC) count of 12,000/mm3, C-reactive protein (CRP) of 45.97 mg/dl, and serum albumin (Alb) of 2.9 g/dl. Blood culture revealed methicillin-susceptible Staphylococcus aureus, and the patient was diagnosed as having a postoperative infection with bacteremia. Cefazolin 3 g/day was started, but the patient developed septic shock the day after readmission. Emergency debridement of the lumbar surgical site was performed 35 days before transfer to our hospital, but complete debridement was not achieved due to the extent of the abscess. After readmission, the patient was treated with cefazolin (3 g/day) for seven days, and he developed mucous and bloody stools. His antibiotics were changed to levofloxacin 500 mg/day, rifampicin 450 mg/day, trimethoprim 320 mg/day, and sulfamethoxazole 1600 mg/day for 14 days due to the side effects of cefazolin.
After the mucous stools and hematochezia disappeared, he was treated with cefazolin 6 g/day, rifampicin 450 mg/day, trimethoprim 320 mg/day, and sulfamethoxazole 1600 mg/day (Figure 2, Table 1). From the time of readmission to the previous hospital until transfer to our hospital, the patient had a loss of appetite, and his intake of food continued to decline, so intravenous fluids were administered. Because of an uncontrollable, extended epidural abscess and an iliopsoas abscess, the patient was transferred to our hospital for additional treatment.
At the time of the initial examination in our hospital, the patient was 163 cm tall and weighed 60 kg. He could not remain sitting or standing due to low back pain. There was residual numbness in both legs, although there was no motor paralysis. His weight before lumbar surgery was about 75 kg, and he looked gaunt at the initial presentation at our hospital. The MRI showed posterior decompression between the second lumbar vertebra and the first sacral vertebra, with bone marrow edema in the vertebral bodies of the second and third lumbar vertebrae and an epidural abscess. Contrast-enhanced CT showed a left psoas abscess and endplate destruction between the second and third lumbar vertebrae (Figure 3). We diagnosed osteomyelitis of the second and third lumbar vertebrae, an epidural abscess, and a left psoas abscess. The first treatment plan was to perform debridement surgery after transfer. However, the first blood test in our hospital revealed a WBC count of 5000/mm3, a CRP of 12.95 mg/dl, and an Alb of 2.7 g/dl. As for the coagulation system, his blood test revealed the following: prothrombin time (PT) 74.0 sec, PT-international normalized ratio (INR) 6.69, PT% 9.0, activated partial thromboplastin time (APTT) 138 sec, fibrinogen (FIB) 664 mg/dl, fibrin degradation products (FDP) 7.1 μg/ml, and D-dimer 1.5 μg/ml. Since significant coagulation abnormalities were observed, it was determined that a thorough examination was necessary, and the scheduled debridement surgery was canceled. The acute disseminated intravascular coagulation (DIC) score was only one point, which was negative for DIC. Liver and renal function were within the standard range. To search for the cause, protein induced by vitamin K absence or antagonist-II (PIVKA-II) was measured and found to be high at 34,400 mAU/ml.
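For reference, the PT-INR reported here is not measured directly but is derived from the prothrombin time. The case report does not state the laboratory's reagent parameters, but the standard relationship is

INR = (PT_patient / PT_mean normal)^ISI,

where PT_mean normal is the mean prothrombin time of healthy subjects for that laboratory and ISI is the international sensitivity index of the thromboplastin reagent. Assuming, purely for illustration, a typical mean normal PT of about 11 seconds and an ISI close to 1.0 (neither value is given in this report), a PT of 74.0 seconds corresponds to an INR of roughly 74/11 ≈ 6.7, which is consistent with the reported PT-INR of 6.69.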
Suspecting vitamin K deficiency, we administered 40 mg of vitamin K as a test dose, and coagulation function improved to PT: 16.4 sec, PT-INR: 1.30, PT%: 65.2, and APTT: 79.0 sec after four hours (Figure 4). Since the diagnosis of vitamin K deficiency was confirmed, vitamin K was administered for four days (Table 2). We also considered the patient's nutritional status, and he took the supplemental foods suggested by the nutritionist.
FIGURE 4: Coagulation test changes around vitamin K dosage
We administered vitamin K after coagulation testing on day two.
TABLE 2: Coagulation function changes in response to vitamin K administration
We administered vitamin K after coagulation testing on day two.
PT: prothrombin time; INR: international normalized ratio; APTT: activated partial thromboplastin time

Debridement was performed six days after transfer to our hospital, but only for the left psoas muscle abscess, which appeared to be refractory (Figure 5). Because the epidural abscess was small and did not compress the dural sac or cause major neurologic symptoms, we decided to treat it conservatively with antimicrobial therapy instead of surgical treatment.
FIGURE 5: Intraoperative view shows left psoas abscess
The patient was discharged from the hospital after the completion of administration of cefazolin (6 g/day) for eight weeks, as his general condition had improved (Figure 6, Table 3). He had recovered to a body weight of 75 kg, was able to walk without a cane, and the numbness had resolved nine weeks after his transfer to our hospital. There was no recurrence of infection or coagulation abnormalities.
FIGURE 6: Changes in white blood cell (WBC) and C-reactive protein (CRP) after the patient's transfer to our hospital
Laboratory findings were as follows: WBC count 6900/mm3, CRP 0.10 mg/dl, Alb 4.0 g/dl, PT 11.4 sec, PT-INR 0.93, PT% 112.8, and APTT 31.6 sec one year after surgery. The last observation was three years after surgery. Imaging findings included an MRI showing residual bone marrow edema in the second/third lumbar spine, but the iliopsoas abscess had resolved (Figure 7).
Discussion
In this study, we experienced a case of postoperative infection of the lumbar spine that resulted in a significant coagulation disorder caused by vitamin K deficiency due to the administration of antibiotics in hyponutrition. Although vitamin K deficiency due to cefazolin administration for spontaneous pyogenic spondylitis has been reported [6], vitamin K deficiency due to cefazolin administration for postoperative spinal infection is very rare.
Many antibiotics list vitamin K deficiency as a potential side effect, and it is a common cause of coagulation abnormalities during antibiotic treatment. For the diagnosis of vitamin K deficiency, diagnostic therapy is very common. If PT and APTT improve within two to four hours after intravenous vitamin K administration, the patient is considered to have responded to the trial, confirming the diagnosis of vitamin K deficiency. In this case, PT and APTT were significantly improved following vitamin K administration, thereby confirming the diagnosis.
The three possible causes of vitamin K deficiency in this patient are (i) a marked decrease in vitamin K intake due to poor nutrition; (ii) decreased vitamin K production due to changes in the intestinal microflora; and (iii) impairment of the vitamin K reduction cycle.
However, vitamin K deficiency is very rarely caused by any one of these factors alone because the daily requirement of vitamin K is very low. Several of these causes had to act in combination to produce a vitamin K deficiency significant enough to result in a coagulation disorder.
Poor nutrition is one of the most common causes of vitamin K deficiency [7]. In the present case, the patient had poor nutritional intake due to inadequate intake of hospital food as a result of the sustained postoperative infection and septic shock. Additionally, because of the COVID-19 outbreak, the previous hospital prohibited anyone, even family members, from visiting the patient during hospitalization, and no one could bring him his favorite foods. By the time of his first visit to our hospital, the patient had lost 20% of his body weight compared with his weight before the surgery for canal stenosis. His weight loss indicated that his food intake was inadequate, suggesting that his vitamin K intake had decreased.
Decreased vitamin K production due to changes in the intestinal microflora is also a cause of vitamin K deficiency [8]. Normal intestinal microflora produce adequate vitamin K. However, antibiotic treatment can reduce the intestinal microflora or shift its composition toward non-vitamin K-producing bacteria. Some bacteria, such as Staphylococcus aureus, Escherichia coli, and Bacteroides spp., have been reported to be vitamin K-producing bacteria [8]. Cefazolin has been reported to have antibacterial activity against Staphylococcus aureus and Escherichia coli, although some Escherichia coli are resistant to cefazolin. Rifampicin has been reported to have antibacterial activity against Bacteroides spp. In this case, the patient had been treated with a combination of cefazolin and rifampicin at the previous hospital. This may have decreased the number of vitamin K-producing bacteria such as Staphylococcus aureus, Escherichia coli, and Bacteroides spp., leading to vitamin K deficiency.
Disruption of the vitamin K reduction cycle is another cause of vitamin K deficiency [9]. A factor that can lead to vitamin K deficiency is that antibiotics themselves can impair the vitamin K reduction cycle.
Cephalosporin antibiotics containing an N-methyl-tetrazole-thiol (NMTT) group are well known to interfere with the vitamin K reduction cycle [9]. Likewise, antibiotics containing a methyl-thiadiazole-thiol (MTD) group, such as cefazolin, are known to disrupt the coagulation cycle [9]. The MTD group has also been shown to inhibit coagulation factor activation [9]. As the use of first-generation cephalosporins has increased with the spread of antimicrobial guidelines, an increasing number of cases have been reported in which the MTD group inhibits the activation of coagulation factors [6,10]. In the present case, cefazolin, which contains an MTD group, was used for treatment.
The causes of vitamin K deficiency in this case were considered to be (i) prolonged low nutritional status for more than one month, (ii) suppression of vitamin K-producing bacteria by treatment with cefazolin and rifampicin in the past week, and (iii) the use of cefazolin, which contains an MTD group. We speculate that the patient's coagulation was not evaluated at the previous hospital, which was focused on infection control. This patient developed severe coagulopathy. It is important to recognize that severe coagulopathy due to vitamin K deficiency caused by antimicrobial administration can occur not only in postoperative infections but also in spontaneous infections. As a preventive measure, coagulation tests should be performed periodically during antimicrobial therapy in patients with anorexia, and patients should be monitored for purpura and other signs of a bleeding tendency.
Conclusions
In this study, we reported a case of postoperative infection of the lumbar spine that resulted in vitamin K deficiency due to the administration of antibiotics in a patient with hyponutrition. In this case, vitamin K deficiency was attributed to low nutrition, suppression of vitamin K-producing bacteria with cefazolin and rifampicin, and the use of cefazolin, including MTD groups. It should be kept in mind that severe coagulopathy due to vitamin K deficiency caused by antibiotic therapy can occur in postoperative infections.
FIGURE 1 :
FIGURE 1: The patient's preoperative images. a: A plain x-ray (anteroposterior view) of the lumbar spine shows multiple bony spurs and no osteolysis; b: a plain x-ray (lateral view) of the lumbar spine shows multiple bony spurs and no osteolysis; c: an MRI (T2-weighted, sagittal view) of the lumbar spine shows multiple canal stenoses (white arrows).
FIGURE 2 :
FIGURE 2: Changes in white blood cell (WBC) and C-reactive protein (CRP) before the patient's transfer to our hospital
FIGURE 7 :
FIGURE 7: Images three years after the treatment. a: An MRI short tau inversion recovery (STIR) (sagittal view) shows slight bone marrow edema of L2/3 (white arrow); b: an MRI STIR (coronal view) shows slight bone marrow edema of L2/3 (white arrows); c: an MRI STIR (axial view) shows no epidural abscess; d: a plain X-ray (anteroposterior view) of the lumbar spine shows bony fusion of L2/3; e: a plain X-ray (lateral view) of the lumbar spine shows bony fusion of L2/3.
Substrate tRNA Recognition Mechanism of tRNA (m 7 G46) Methyltransferase from Aquifex aeolicus *
Transfer RNA (m 7 G46) methyltransferase catalyzes the methyl transfer from S-adenosylmethionine to the N 7 atom of the guanine 46 residue in tRNA. Analysis of the Aquifex aeolicus genome revealed one candidate open reading frame, aq065, encoding this gene. The aq065 protein was expressed in Escherichia coli and purified to homogeneity on 15% SDS-polyacrylamide gel electrophoresis. Although the overall amino acid sequence of the aq065 protein differs considerably from that of E. coli YggH, the purified aq065 protein possessed a tRNA (m 7 G46) methyltransferase activity. The modified nucleoside and its location were determined by liquid chromatography-mass spectrometry. To clarify the RNA recognition mechanism of the enzyme, we investigated the methyl transfer activity toward 28 variants of yeast tRNA Phe and E. coli tRNA Thr.
Transfer RNAs have now been shown to contain Ͼ80 modified nucleosides (1)(2)(3). All of the modified nucleosides of tRNA are generated post transcriptionally by specific tRNA modification enzymes (3)(4)(5)(6). The substrate recognition is an important feature of these enzymes. Transfer RNA modification enzymes must distinguish specific tRNAs from the other RNA species as substrates and recognize the target site for modification. In general, tRNA modification enzymes act at a single specific site (4,5), although some enzymes possessing multi-site specificity have been reported (7)(8)(9)(10)(11). Recent genome-wide research and in vitro transcription techniques have accelerated the study of RNA recognition mechanisms (12)(13)(14)(15)(16)(17)(18)(19)(20)(21)(22)(23). Genetic analysis and microinjection techniques have allowed the in vivo tRNA recognition mechanisms of several tRNA modification enzymes to be elucidated (24 -30). Furthermore, crystal structure studies of some enzymes have begun to elucidate the interaction between RNA and the protein (31)(32)(33)(34)(35). N 7 -Methylguanosine at position 46 (m 7 G46) of tRNA is widely found in eukaryotes and bacteria as well as some Archaea. This modification is found in almost class I tRNAs that have the semi-conserved G46 residue (1)(2)(3). In addition, there are a limited number of examples where the m 7 G modification is found at other positions, for example, G34 in mitochondria tRNA Ser from starfish (36) or squid (37) and G36 in chloroplast tRNA Leu (38). The m 7 G46 modification is catalyzed by tRNA (m 7 G46) methyltransferase (tRNA (guanine-N 7 -)-methyltransferase, EC 2.1. 1.33). This enzymatic activity was first detected in a cell extract of Escherichia coli (39) and then purified more than 1000-fold (40). The enzyme activity has also been purified from Salmonella typhimurium (41) and Thermus flavus (42). Furthermore, the tRNA m 7 G46 modification activity has been detected in the crude extracts from higher eukaryotes (i.e. Xenopus laevis (43), human (44), and plant (45)). Recently, it has been reported that yeast tRNA (m 7 G46) methyltransferase is composed of two protein subunits (Trm8 and Trm82), and their genes have been identified (46). Furthermore, an E. coli gene encoding a tRNA (m 7 G46) methyltransferase activity (yggH) has been reported (47). We independently identified yggH as a responsible gene for m 7 G46 modification in E. coli by using a systematic gene disruption system. 1 We searched for yggH homologs in several genomes of thermophilic organisms, since proteins from these sources are generally more stable than their mesophilic counterparts (15, 23, 49 -51). After some preliminary trials we selected Aquifex aeolicus, a hyper-thermophile eubacterium, as the target species. A. aeolicus, which was isolated from a hot spring in Yellowstone National Park, can grow at nearly 95°C (52). The 16 S rRNA gene of A. aeolicus has been analyzed from the perspective of molecular evolution, and it was suggested that this bacterium is the earliest diverging eubacterium (53). The complete genome sequence of this organism was determined in 1998 (54). Characterization of the yggH homolog from A. aeolicus should help clarify the molecular evolution of m 7 G46 modification in tRNA.
In this paper, we report that A. aeolicus open reading frame aq065, which shares relatively poor homology with E. coli yggH, encodes a tRNA (m 7 G46) methyltransferase. We have studied the substrate RNA recognition mechanism of the enzyme.
* The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
Construction of A. aeolicus aq065 Protein Expression System in E. coli-The aq065 gene was amplified by PCR from A. aeolicus genomic DNA using the following primers: AYGGHN, 5′-GGG GCA TAT GCT CTG TTA CGT AAA TTA CAA AAG-3′; AYGGHC, 5′-GGG GGT CGA CTT AAC TCA GCA ATT GAG CCA CCG TT-3′. The recognition sites of the restriction enzymes NdeI and SalI are underlined. The amplified DNA was cloned into the pET30a expression vector (Novagen) utilizing the restriction enzyme sites. The resulting expression construct, pET30a-AYGGH, was introduced into the E. coli BL21(DE3)-Codonplus-RIL strain (Stratagene) for expression.
Expression and Purification of the aq065 Protein-The expression of the aq065 protein in E. coli cells was carried out according to the manufacturer's manual (Novagen). After the isopropyl 1-thio-β-D-galactopyranoside induction, the cells were collected by centrifugation at 6500 × g for 20 min, snap-frozen in liquid nitrogen, and stored at −80°C until required. The cells (5 g) were suspended in 25 ml of buffer A (50 mM Tris-HCl (pH 7.6), 5 mM MgCl2, 6 mM 2-mercaptoethanol, and 50 mM KCl) and disrupted with an ultrasonic disruptor model UD-200 (Tomy, Japan). The cell debris was removed by centrifugation (8000 × g, 20 min), and the supernatant fractions were heated at 70°C for 30 min. The denatured proteins were removed by centrifugation (8000 × g, 30 min), and the supernatant fractions were applied onto a DE52 column (column volume, 10 ml). The column was washed with 35 ml of buffer A and then 40 ml of buffer A containing 150 mM KCl. The enzymatic activities were eluted by the addition of 40 ml of buffer A containing 200 mM KCl. The relevant fractions were pooled and then dialyzed against buffer B (50 mM Hepes-KOH (pH 6.8), 5 mM MgCl2, and 6 mM 2-mercaptoethanol). The dialyzed sample was applied onto a CM-Toyopearl 650M column (column volume, 20 ml). The column was washed with 60 ml of buffer B and then 60 ml of buffer B containing 150 mM KCl. The enzyme was eluted by the addition of 60 ml of buffer B containing 300 mM KCl. The fractions were combined, dialyzed against buffer A, and concentrated with an Amicon Ultra centrifugal filter device (10,000 Mr cut-off) (Millipore). Glycerol was added to the sample to give a final concentration of 50% v/v. Aliquots were then frozen using liquid nitrogen and stored at −80°C until required. Protein was quantified using a Bio-Rad protein assay kit with bovine serum albumin as the standard.
Measurements of the Enzymatic Activities-The standard assay for the purification was carried out by measuring the incorporation of a radiolabeled methyl group from [methyl-14C]AdoMet into the yeast tRNA Phe transcript. The assay mixture (300 ng of the protein, 13.5 μM transcript, and 38 μM [methyl-14C]AdoMet) in 30 μl of buffer C (50 mM Tris-HCl (pH 7.6), 5 mM MgCl2, 6 mM 2-mercaptoethanol, and 50 mM KCl) was incubated for 5 min at 60°C. An aliquot (20 μl) was then used for the conventional filter assay. The tRNA transcripts were prepared as reported previously (23) and purified by 10% polyacrylamide gel electrophoresis (7 M urea). Each transcript was annealed (cooling down from 80 to 40°C over 40 min) in buffer C before use. Apparent kinetic parameters, Km and Vmax, were determined from a Lineweaver-Burk plot of the methyl transfer reaction with [methyl-3H]AdoMet by the filter assay. Briefly, the wild-type transcript and tRNA precursor were assayed at 60°C, and the other variants were assayed at 50°C. Methyl transfer was visualized after gel electrophoresis using a Fuji Photo Film BAS2000 imaging analyzer system. A mixture of 100 ng of purified enzyme, 38 μM [methyl-14C]AdoMet, and 13.5 μM transcript in 30 μl of buffer C was incubated at 30, 50, or 60°C for 10 min. The relevant assay temperature of each experiment is stated in "Results." The reaction mixture (5 μl) was then analyzed by 10% polyacrylamide gel electrophoresis (7 M urea). The gel was stained with methylene blue and dried. The incorporation of the 14C methyl group was monitored with a Fuji Photo Film BAS2000 imaging analyzer system. Methyl transfer using poor substrates required an extended incubation period and a large amount of the enzyme. Specifically, the reaction mixture comprised 300 ng of enzyme, 38 μM [methyl-14C]AdoMet, and 13.5 μM transcript in 30 μl of buffer C. The mixture was incubated at 50°C for 30 min, and then an aliquot (5 μl) was loaded onto the gel for analysis.
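For readers unfamiliar with the analysis, the Lineweaver-Burk treatment mentioned above is the standard double-reciprocal linearization of the Michaelis-Menten rate law; the fitting itself is not described in further detail here, but the underlying relation is

1/v = (Km/Vmax)(1/[S]) + 1/Vmax,

so that plotting 1/v against 1/[S] (with [S] here the tRNA transcript concentration at a fixed AdoMet concentration) gives the apparent Km and Vmax from the slope (Km/Vmax) and the y-intercept (1/Vmax) of the fitted line.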
Mass Spectrometry-Yeast tRNA Phe (50 μg) was methylated with an excess amount of enzyme and cold AdoMet for 4 h at 60°C in 100 μl of buffer C. The RNA was extracted with phenol, recovered by ethanol precipitation, and loaded onto a 10% polyacrylamide gel (7 M urea). After electrophoresis, the RNA was visualized by UV (254 nm) irradiation on a thin layer plate (Funacell P-254, Japan), excised, and extracted with 400 μl of gel elution buffer (0.5 M ammonium acetate, 10 mM MgCl2, 1 mM EDTA, and 0.1% SDS). The extracted sample was passed through a Steradisc 13 filter unit (0.2 μm, Kurabo, Japan), and the RNA was recovered by ethanol precipitation. For nucleoside analysis, the sample was completely digested with nuclease P1 and then treated with bacterial alkaline phosphatase. For fragment analysis, methylated or unmethylated control tRNA (25 μg of each) was digested with RNase T1 (2.5 units) in 25 mM ammonium acetate (pH 5.3) at 37°C for 1 h and subjected to mass spectrometric analysis. Oligonucleotides produced by RNase T1 digestion were separated and analyzed by LC/MS in negative ion mode as described previously (49,55).
Expression and Purification of A. aeolicus aq065 Protein-To investigate the origin of m 7 G46 modification in tRNA, we searched for homolog(s) of E. coli yggH in the A. aeolicus genome (NC_000918) by BLAST search. One candidate open reading frame, aq065, was found. The aq065 protein has three amino acid sequence motifs that resemble those of E. coli YggH (Fig. 1A). However, the N-terminal portion of the aq065 protein is much shorter than that of E. coli YggH, and the C-terminal portion does not share homology (Fig. 1B). To characterize this protein, we cloned the target gene by PCR and engineered it for expression in the E. coli BL21(DE3)-Codonplus-RIL strain/pET30a vector system as described under "Experimental Procedures." The purified recombinant protein appeared to be homogeneous by 15% SDS-polyacrylamide gel electrophoresis (Fig. 1C).
The aq065 Protein Has a tRNA (m 7 G46) Methyltransferase Activity-We tested the methyl transfer activity of the purified recombinant protein using the yeast tRNA Phe transcript by the conventional filter assay. The time-dependent experiment at 60°C clearly showed that the 14C methyl group was incorporated into the transcript (data not shown). The kinetic parameters for the yeast tRNA Phe transcript were determined at 50 and 60°C (Table I). The initial velocity of the enzyme is comparable with other tRNA methyltransferases from A. aeolicus, such as tRNA (Gm18) methyltransferase (23) and tRNA (m 1 G37) methyltransferase. However, the kinetic analysis revealed that this enzyme has relatively small Km and Vmax values for this transcript. The small Km may suggest that many more amino acid residues are involved in the substrate RNA recognition as compared with the other tRNA methyltransferases. This assumption is in line with the results using the mutant RNA variants described under "Results." Furthermore, the slow Vmax may be caused by the relatively poor reactivity of the N 7 atom of the guanine (6).
To identify the modified nucleoside and the precise position of the methylated site, we employed LC/MS using electrospray ion trap mass spectrometry (Figs. 2 and 3). Yeast tRNA Phe transcript was methylated by the aq065 protein using unlabeled AdoMet at 60°C for 4 h. For nucleoside analysis, the methylated transcript was completely digested with nuclease P1 and bacterial alkaline phosphatase and then subjected to mass spectrometric analysis as described previously (49,55). As shown in Fig. 2, the m 7 G nucleoside was eluted at 18.8 min by LC. Mass signals of the protonated molecule of the m 7 G nucleoside (m/z = 298) and the fragment ion derived from the base moiety (m/z = 166) were clearly detected. The position of the modified site was determined by RNase T1 fragment mapping (Fig. 3). The methylated RNA was completely digested with RNase T1 and subjected to mass spectrometric analysis. The fragments were separated on LC (Fig. 3A, top). When the guanine base was modified to m 7 G, RNase T1 did not cleave the site. As shown in Fig. 3A, the methylated fragment m 7 GUCCUGp (fragment 11 at 31 min) was clearly identified as a triple-charged negative ion (m/z = 1944.2). Fragment 11 corresponds to the nucleotides at positions 46-51 in the transcript (Fig. 3B). We also detected trace amounts of the unmodified fragment UCCUGp (fragment 9 at 30 min), indicating an incomplete m 7 G46 modification of the substrate tRNA (Fig. 3A). The methylation efficiency was estimated to be around 90% in this experiment. Furthermore, all other fragments derived from yeast tRNA Phe were analyzed (Fig. 3B), and no m 7 G modification was detected in any of them. Based on these results, we concluded that the aq065 protein has a tRNA (m 7 G46) methyltransferase activity. According to the nomenclature established for the E. coli gene encoding tRNA (m 7 G46) methyltransferase, we tentatively renamed aq065 as A. aeolicus yggH (accession number AB167817), although there is still some debate concerning the gene name of the E. coli enzyme as described by De Bie et al. (47).

FIG. 4. Although the 5′-leader and 3′-trailer sequences are drawn as single-stranded RNAs to clarify the cleavage sites in RNA processing, UGU in the 5′-leader and ACA in the 3′-trailer RNAs probably form double-stranded RNA. B, methyl group incorporations into the precursor were monitored by the imaging analyzer system. The mature size transcript had the 5′-leader sequence removed and a CCA sequence added at the 3′-terminus instead of the trailer sequence. The RNAs treated with the purified YggH protein and [14C]AdoMet at 60°C for 10 min were analyzed by 10% polyacrylamide gel (7 M urea) electrophoresis. The gel was stained with methylene blue (left, MB staining) and the corresponding autoradiogram is shown (right, autoradiogram). The bars show the bands of the precursor and the mature size transcript, respectively.
5Ј-Leader and 3Ј-Trailer Sequences in tRNA Precursor Are Not Required for the Recognition of A. aeolicus YggH-In vivo, m 7 G46 modification occurs in the tRNA precursor before removal of the 5Ј-leader and 3Ј-trailer RNAs (57). First, we tested the effect of these RNAs on A. aeolicus YggH activity. We chose E. coli tRNA Thr (GGU) precursor as a model because m 7 G46 modification of this precursor in living E. coli cells has been reported (57). We prepared E. coli tRNA Thr precursor and the mature size transcript; a CCA sequence was added at 3Ј-end of the mature size transcript instead of the 3Ј-trailer sequence (Fig. 4A). The methyl transfer activities to them were compared with each other. As shown in Fig. 4B, both RNAs were efficiently methylated, and kinetic parameters for them were determined by the filter assay (Table II). Although the initial velocity for the precursor was slightly slower due to an increase in K m , it was not significantly different. The increase of K m for the precursor may be caused by nonspecific interaction of the enzyme with the 5Ј-leader and/or 3Ј-trailer RNAs. Thus, our results demonstrate that the 5Ј-leader and 3Ј-trailer RNAs are not required for substrate recognition of A. aeolicus YggH. Nevertheless, this modification to the precursor RNA probably occurs in vivo.
Substitution of the Variable Loop of Yeast tRNA Phe Transcript by the E. coli tRNA 2 Gly Variable Loop-The three-dimensional structure of yeast tRNA Phe is now well established (Refs. 58 and 59 and see Fig. 9). Therefore, we selected yeast tRNA Phe as a model substrate instead of E. coli tRNA Thr for further studies. For comparison of the A. aeolicus YggH with the E. coli protein, we tested one yeast tRNA Phe variant whose variable loop is substituted by the E. coli tRNA 2 Gly variable loop (Fig. 5).
E. coli tRNA2Gly has a special variable loop composed of only four nucleotides. In living E. coli cells, unlike other class I tRNAs, tRNA2Gly does not undergo m 7 G modification (1, 2). Thus, E. coli YggH does not recognize tRNA2Gly as a substrate. As shown in Fig. 5B, this variant was not methylated at all by A. aeolicus YggH, demonstrating that the A. aeolicus enzyme strictly recognizes the size of the variable loop. This recognition mechanism of the A. aeolicus YggH is common with that of E. coli YggH.
Truncated Variants of Yeast tRNA Phe Transcript-To clarify the essential region in tRNA, we prepared seven truncated yeast tRNA Phe variants (Fig. 6). The experiments were carried out at 30 and 50°C after annealing of the RNAs, since temperature-induced structural changes were expected. To our surprise, methyl group incorporations into the truncated RNAs at 30°C apparently coincided with those at 50°C (data not shown). In addition to the conventional filter assay, we employed gel electrophoresis and imaging analysis to detect very slow methyl transfer (Fig. 7). Fig. 7 shows the results at 50°C. Kinetic parameters are given in Table III. The RNA fragment corresponding to nucleotides at positions 34-48 (Fig. 6B) of the full-length tRNA was not methylated, suggesting that A. aeolicus YggH does not simply recognize the sequence of the variable loop. Thus, the three-dimensional structure of the RNA molecule is a critical factor for the methyl transfer reaction to occur. When the anticodon- and T-arms are formed (Fig. 6C), the methyl transfer reaction into the transcript can be observed. The imaging analyzer system detected very slow methyl transfer under the conditions described under "Experimental Procedures" (Fig. 7, C and D). The kinetic parameter analysis showed that the decrease of the activity was caused by a decrease of Vmax. The small Km value means that the enzyme effectively captures and releases the truncated RNA in the turnover of the reaction complex. Furthermore, this result clearly shows that the structure of the variable loop inserted between the two stems is an absolute requirement for the methyl transfer reaction. In contrast, when the anticodon and aminoacyl stems are formed (Fig. 6D), recovery of the methyl acceptance activity was not observed (Fig. 7, C and D). This result suggests that the T-arm structure plays a key role in the recognition of the substrate RNA. To confirm this idea, we individually deleted four domain structures of tRNA: the aminoacyl stem, D-arm, anticodon-arm, and T-arm (Fig. 6, E-H). Intriguingly, not all truncated variants lost methyl acceptance activity, although deletion of the T-arm did cause a dramatic decrease in activity (Fig. 7). These results suggest the existence of multiple recognition sites dispersed on the tRNA structure, of which the T-arm is clearly important. It is also clear that the L-shaped structure of tRNA is not required for the methyl transfer reaction. This is in line with the RNA recognition mechanisms of several tRNA modification enzymes, which modify the nucleotide(s) in the three-dimensional core of tRNA (15,22,23,29). Furthermore, several truncated variants (Fig. 6, C and E) showed small Km values as compared with the full-length transcript. We also prepared two stem-disrupted variants: one has a disrupted T-stem (Fig. 8A), and the other has a disrupted anticodon-stem (Fig. 8B). The methyl acceptance activity of the variant with the disrupted T-stem (Fig. 8A) was not detectable by the imaging analyzer system. Together with the results obtained from the truncated variants (Figs. 6 and 7), the most important site for the methyl transfer reaction appears to be on the T-stem. In contrast, the variant with the disrupted anticodon-stem (Fig. 8B) was clearly methylated, although the initial velocity was considerably decreased via an increase of the Km value. These results indicate that YggH recognizes the G46 residue from the T-stem side.

FIG. 7. Lane 1, the wild-type transcript; 2, the transcript shown in Fig. 6B; 3, the transcript shown in Fig. 6C; 4, the transcript shown in Fig. 6D; 5, the transcript shown in Fig. 6E; 6, the transcript shown in Fig. 6F; 7, the transcript shown in Fig. 6G; 8, the transcript shown in Fig. 6H. Panels A (methylene blue staining) and B (autoradiogram of the same gel) show the results of the standard assay using 100 ng of enzyme for 10 min (see "Experimental Procedures"). Panels C (methylene blue staining) and D (autoradiogram) show the results of assays using 300 ng of enzyme for 30 min. An asterisk indicates the methyl transfer into the transcript shown in Fig. 6C.
Disruptions of Tertiary Base Pairs in the Three-dimensional Core-As described above, we have demonstrated the importance of the T-stem. Nevertheless, the variant with a deleted T-arm was still methylated (Figs. 6H and 7). This apparent discrepancy can be resolved by considering the formation of tertiary base pairs between the D-arm and the extra loop in the variant with the deleted T-arm, as found in mitochondrial tRNAs from nematoda (48,56). Thus, the variable loop structure allows the formation of tertiary base pair(s), which probably compensates for the lack of the T-arm. In yeast tRNA Phe, the m 7 G46 residue is located in the three-dimensional core of tRNA (Fig. 9A) and forms a m 7 G46-C13-G22 tertiary base pair (Fig. 9B). This tertiary base pair is stacked among the A9-U12-A23, G15-C48, and m 2 G10-C25-G45 tertiary base pairs in the core (Fig. 9C). Although the N 7 atom of the G46 residue is located on the surface of yeast tRNA Phe, disruption of the core is necessary for the recognition of the entire G46 base. This is in line with the results from the truncated variants, where several variants showed small Km values as compared with the full-length transcript. Therefore, we tested the effect of the disruption of the tertiary base pair(s) on the methyl acceptance activity. We made 10 variants possessing a point mutation; eight were either disrupted or had altered tertiary base pairing, and two had disrupted base pairing in the stem (Fig. 10A). In addition, we made five variants possessing double mutations that had disrupted base pairing (Fig. 10B). The results are summarized in Table IV. These results reveal that no tertiary base pair is essential for the RNA recognition. However, the mutations around G46 have an important effect on the methyl acceptance activity. The variants (9A→U, 9A→C, 15G→C, and 13C→G/22G→C) have relatively poor methyl acceptance activities. Because these mutation sites are nearest to G46 in the core (Fig. 9C), the decrease in methyl acceptance activities seems to be caused not only by disruption of the tertiary base pairs but also by the alteration of the stacking effect between the mutation site and G46. In contrast, disruptions of the C25-G10-G45 tertiary base pair had no significant effect on the methyl acceptance activity. Furthermore, the disruption of the interaction between the D- and T-loops (54U→A/55U→A variant) had only a small effect on the methyl acceptance activity. Furthermore, it was found that the 65G→C variant, which disrupted the C49-G65 base pair in the T-stem, displays low methyl acceptance activity, consistent with the importance of the T-stem structure.

Table footnotes: a, the relative Vmax/Km values are expressed with respect to that of the full-length transcript, which was taken as 100%; b, "Not detectable" means that the methyl transfer was not detectable by the gel electrophoresis and imaging analyzer system, as shown in Fig. 7.

DISCUSSION

In this paper we have demonstrated that the open reading frame aq065 from A. aeolicus encodes a protein that has a tRNA (m 7 G46) methyltransferase activity. The purified protein catalyzed methyl transfer to the N 7 atom of G46 in a tRNA transcript, as confirmed by LC/MS. Based on the experimental results, we renamed aq065 as A. aeolicus yggH (accession number AB167817) according to the nomenclature established for the E. coli gene. However, comparison of the A. aeolicus and
E. coli YggH proteins highlighted considerable differences in terms of the overall size of the two enzymes and the locations of the conserved sequences (Fig. 1, A and B). During the course of this study, De Bie et al. (47) reported that E. coli yggH encodes the tRNA (m 7 G46) methyltransferase activity in E. coli. They compared the amino acid sequence of E. coli YggH with those of the other methyltransferases and found that E. coli YggH has several motifs conserved among the Rossmann-fold methyltransferases (47). These conserved motifs probably constitute parts of the catalytic domain. The key sequences used for our BLAST search corresponded to the conserved motifs highlighted by De Bie et al. (47). As described under "Results," we found only one candidate open reading frame when screening the A. aeolicus genome. The A. aeolicus and E. coli YggH proteins probably have the same catalytic mechanism because they are homologous within the catalytic core. However, the protein structure involved in the RNA recognition may differ between the two enzymes.

FIG. 9. The location of m 7 G46 in the L-shaped tRNA structure and the tertiary base pairs. The Protein Data Bank accession number of yeast tRNA Phe is 1EHZ. This figure was generated by RasMac Version 2.6 with slight modifications. A, the m 7 G46 nucleotide in the L-shaped structure of yeast tRNA Phe is indicated in red. B, the C13-G22-m 7 G46 tertiary base pair is shown by the ball and stick model. Carbon, nitrogen, oxygen, and phosphorus atoms are indicated in white, blue, red, and yellow, respectively. C, the location of the C13-G22-m 7 G46 tertiary base pair in the three-dimensional core is shown by a wire-frame model. The tertiary base pairs G15-C48, A9-U12-A23, and m 2 G10-C25-G45 are indicated in green, blue, and light blue, respectively. The C13-G22 base pair and m 7 G46 are indicated in brown and red, respectively.
In this paper we investigated the RNA recognition mechanism of A. aeolicus YggH. The 5′ and 3′ RNAs in the precursor tRNA are not required for the methyl transfer reaction. Likewise, the L-shaped structure of tRNA is not essential for the reaction, since the enzyme can catalyze the methylation of truncated RNA fragments. We also demonstrated that the recognition sites are dispersed on the tRNA structure. The most important site is located on the T-stem. The D-arm compensates for deletion of the T-arm through the formation of tertiary base pairs with the variable loop. Our results reveal that the tertiary base pairs in the three-dimensional core are not essential for methylation, although substitutions of the nucleotides around G46 cause a marked decrease in methyl acceptance activity. The tertiary base pairs and the stacking of the bases probably permit entry of the G46 base into the catalytic pocket of the enzyme.
Our results seem to explain a general mechanism of bacterial m 7 G46 modification in tRNA. Although the primary amino acid sequences of YggH proteins from A. aeolicus and E. coli differ considerably, the RNA recognition patterns are similar. Substitution of the yeast tRNA Phe variable loop by the E. coli tRNA 2 Gly variable loop causes a complete loss of the methyl acceptance activity. Our results are consistent with the modification patterns of tRNAs not only from eubacteria but also from eukaryotes (1)(2)(3)(43)(44)(45). However, the marked differences in the amino acid sequences of the two enzymes indicate structural variation among bacterial YggH proteins. Some differences derived from the protein structure may influence substrate recognition mechanisms of YggH enzymes. Structure analysis of both A. aeolicus and E. coli YggH proteins will be necessary to determine the precise nature of the RNA recognition process. A detailed understanding of the molecular recognition process in these two enzymes will help to explain the evolution of the m 7 G46 modification of tRNA.
The Stumbling Stone of Sharia in a Post-Westphalian Order.
Introduction to the special issue, Sharia and the Scandinavian Welfare States
sy' condition, this special issue sets out to explore some of the complex problems of 'producing sharia' in Scandinavia today by investigating the production of sharia through the prism of the modern state. The state has always been a producer of religion, and the argument is that the pluralisation of the post-Westphalian state-and-religion relations facilitates breaking the monopoly on goods of salvation by establishing sharia as a competitor in the services of salvation, through free-schooling, religious welfare services, legal and quasi-legal religious mediation and more.
These assumptions and questions are some of those that inform the research project on "Producing Sharia in Context" (Danish Independent Research Fund, 0163-00070B, PI Vinding, 2021-2024, at the University of Copenhagen). The argument is that a reinvention and transformation of sharia is currently ongoing (or rather still and continuously happening) and that state institutions still require Muslims to transform sharia into a domesticated relationship across all sectors of society, as was the case in the Westphalian Order. Our task in the research project is to see sharia in the context of the modern Western state and society, not as a sole product of pious Muslim intellectuals but as a co-product of social generative structures, with the modern welfare state as the first and strongest amongst such structures.
This relatively recent focus on state-and-religion interdependence gives a renewed perspective on strategies of domestication, governmentality (Dean 2010), institutionalization (Scott 2008), and production and subjectivation (Foucault 1982), and may be argued using the frame of generative structuration (Calhoun 1991), which Bourdieu (1985) and Giddens (1984) have demonstrated and analyzed. In Bourdieu's analysis of the power relations in religion vis-à-vis the state (Bourdieu 1991: 23; Kühle 2009), he speaks of the administration of the goods of salvation as the source of symbolic religious power, which a church or religious entity may only be allowed to administer if loyal to the governing polity. This is the insight on the power of the state and politics as regards religion that is studied in terms of sharia in the post-Westphalian context of contemporary Scandinavia.
Sharia as a new case for old problems
The problem of sharia seems to rest on the scholarly, legally and politically contested questions of what sharia is and the public moral questions of what sharia in society should be. Two self-reinforcing circumstances make the issue of what sharia in society is or should be very difficult and acutely important to answer. Firstly, there is very little scholarly agreement on the definition of sharia, and even considerable disagreement over the reasons for this. Secondly, because of this ambiguity, the political fictions and legal conceptualizations of what sharia is, which are being produced in an increasingly mainstream political environment, are taking over. In this light, our project pursues a subjective-reflective turn in the scholarly conceptualization of sharia, namely, one that presents sharia as inherently flexible, modulable and primarily grounded in living Muslim practice in the relational contexts of state and society, rather than a stale discursive tradition defined by Islamic legal dogmatists as well as contemporary scholars. In modern welfare states, this renders sharia a social, political and moral co-product of state and society rather than of something alien or outside society. This is not only due to the historically, empirically and normatively demonstrable modulability of sharia, but also of the hegemonic governmentality and socially generative nature of the modern welfare state.
The important change comes with the domestication of sharia in the colonial and modern governance of religion, as Wael Hallaq has examined in several major volumes over the past 15 years (Hallaq 2009; 2012). As the pre-modern, organic, knowledge-and-justice system that is sharia (in Hallaq's view) encounters the modern colonial state, sharia is fundamentally transformed. What is left from such a devastating encounter, he argues, is merely the textual residue of what was once the systemic nature of sharia, to be applied and reproduced as fits the state governmental logics (Hallaq 2009, 547). As Hallaq has argued, along with Talal Asad amongst others, much postcolonial/post-oriental scholarship is inherently outraged by the encounter between the state and sharia and repeatedly highlights the incompatibility of the two. Such a sentiment seems to be accepted and taken on board by post-oriental, Western sociologists and anthropologists, including Mauritz Berger (2018) and John Bowen (2013). However, while Hallaq's point is a historical one in which he laments the devastation of systemic sharia, the observation is highly appropriate today. His identification of the power of the state is accurate and still relevant, yet he never takes the observation to its logical conclusion by investigating the full extent of the power relations between the modern state and sharia; nor does he transcend the banal moral judgment of bad state versus good sharia.
However, as Leon Buskens has historically demonstrated - thus echoing Cavanaugh's (1995) and Beyer's (2013) observations from European history - the encounter between the state and traditional society did not just collapse the epistemic system of sharia romanticized by Hallaq; it also created and produced it in its modern expressions. Buskens famously (re-)discovered that there were also contemporary arguments for aligning state and sharia institutions, writing: "The dichotomy between Westernization and local reception is misleading. Muslim scholars, civil servants, and politicians were active participants… in transforming sharia" (Buskens 2014, 211). Such critical analysis might lead to "a radical epistemological critique of our contemporary scholarly study of Islamic law". Indeed, the consequence for the study of sharia is that it is dependent on appreciating the entirely dialectical relationship between European state modernity and Islamic forms of law, ethics and practice, which has been missing from much of the prevailing 'understandings of sharia' in both public and current research.
Scandinavia as a laboratory for the study of sharia.
This is our starting point as we examine sharia in Scandinavia from sociological, anthropological, linguistic and political perspectives - in the research project, but most significantly also here in the articles of this volume. In our particular Scandinavian context, there are telling instances of research that come thematically close to the themes explored in this current issue. Ulrika Mårtensen's introductory article for the special issue of the Danish Islamic Studies Journal Vol. 8 No. 1 (2014) - "'Public Islam' and the Nordic Welfare State: Changing Realities?" - is written about the dynamic relationship between public and Islamic institutions and values (Mårtensen 2014, 4). In it she notes the problematic role of Islam in public debates, and considers the discussions of Islam as resistant to secularization and inimical to the established Nordic Lutheran division of power between church and state (Mårtensen 2014, 5). In such a context, she investigates the "institutionalization of Islam in the Nordic context with reference to both the theses of de-secularization and studies of change within the Nordic welfare state" (ibid.), tracing her argument in broad strokes back to the lasting impact of the Reformation on Nordic welfare administration. She demonstrates how schools, hospitals, charity and poor relief were surrendered to the Danish and Swedish monarchs, enabling them to expropriate the Church's lands and with them the ultimate responsibility for education, care, the poor and infirm. This clearly demonstrates how the conjunction or agreement of state and church in the Lutheran state model is one that rests on sovereignty first and foremost, with the actual organization of affairs to be sorted out incidentally. As such, a 'Scandinavian model' is distinct from other European models or the American model, which "represents the logical opposite of the Nordic state model and its way of organizing welfare, civil society, and religion" (Segdwick & Mårtensen 2014, p. 1).
In positively defining the Scandinavian welfare states as highly relevant contexts for studying sharia in the public moral negotiations, three arguments may briefly be outlined. Firstly, Scandinavia makes for an interesting 'laboratory' for the study of sharia because of the far reach of the state and the attuned welfare governmental technologies and logics, especially in regard to church and religion (Iversen 2006;Christoffersen et al. 2012). Secondly, and this is related, the Scandinavian welfare states are known for their spirit of social development, with inclusion in the labor force as the main objective, combined with an ethos of social class and gender equality (Martikainen 2014, 79). In this sense, the mechanisms of inclusion in welfare institutions work to challenge segregating or disintegrating ideas, seeking deliberately to challenge Islamic law, ethics and practices -or perceptions thereof.
Thirdly, Danish and Scandinavian scholarship seems to be at the forefront in approaching the interpretive and co-productive understanding of sharia in state and societal contexts. A case in point is Christoffersen's and Nielsen's Sharia as Discourse (2010), wherein the authors, while not moving much beyond the perspectives of comparative law (Modéer 2010, 89ff) or the nature of sharia (Christoffersen 2010, 57), nevertheless explore sharia as a norm system that explicitly addresses the relation between state, law and religion in Nordic contexts. At the most recent forefront of Scandinavian research on relations between sharia and the state, Marianne Bøe has investigated Norwegian legislation as it addresses Islam (Bøe 2018) and Torkel Brekke in Norway directed a cross-Scandinavian research project FINEX on "Financial exclusion, Islamic finance and housing" to see if Islamic norms about money result in the exclusion of Muslims from the financial system (Brekke et al., 2019). Sayed in Sweden investigates The Impact of Islamic Family Law on the Swedish Legal Landscape (Sayed 2016), and al-Sharmani and Mustasaari (2018)
Contributions to this issue.
In this historically grounded, theoretically established and geographically scoped context, we present the five contributions to our thematic issue.
In her article, titled "The concept of sharia in Scandinavia in Quran translation and literature", Nora Eggen writes about the concept from many angles and across contexts, yet the text follows a distinct logic. Her starting point is a linguistic, grammatical and exegetic analysis, which she then applies to a selection of Scandinavian Qur'an translations. Eggen traces these linguistic uses into the more technical terms developed in the 19th and 20th centuries, which are closely related to the growing body of codified national legislation. As she writes: "In this period, one often comes across the adjectivized form al-sharīʿa al-islāmiyya, which links the term both descriptively and normatively to a specific tradition. As a borrowed technical term, the link between law and the specific tradition and religion becomes so clear that the more detailed definition becomes redundant, and the loan word in European languages simply takes the short form sharia." (Eggen 2022, 28) An important further contribution to this special issue is that Eggen continues her critical probe into the growing Scandinavian literature as well as the application and reception of sharia concepts. A major point made by Eggen is that the translation of both words and concepts associated with sharia does not just entail a loss of meaning, but rather speaks to the addition of meaning. She argues that the added meaning is colored by the conventions and discourses of the Scandinavian languages, both concrete semantic alterations and more diffuse additions of meaning such as value loading and nuances. She demonstrates that two dominant translation and application strategies have been to reproduce a basic semantic content 'way', or a terminological content from the legal domain. In the mix of this, words such as 'norm', 'regulation' and 'law' can also be understood in very different ways. As such, understanding and defining a concept like sharia is closely linked to the legally regulated relationships between the state and the individual, as well as all types of moral, social and ritual relationships, and it is a term, Eggen concludes, over which many actors want defining power.
Continuing with a deep conceptual discussion is Jesper Petersen in the second contribution to the issue, titled "Parallelsamfundseffekten: Sprog, følelser og diskurs i aeresrelaterede konflikter" (The parallel society effect: language, emotion and discourse in honor-related conflicts). As a pilot study with five Muslim informants, he investigates politically inspired concepts such as 'parallel society' and 'negative social control' and how the informants adopt these in their self-narratives in what he calls the 'parallel society effect' . Petersen focuses on a number of expressions of this in relation to the wide association of ideas and discourses linked to sharaf. In addition, he pursues a similar argument regarding the headscarf and the theologies of heaven and hell as they are applied in upbringing, as related by the five informants. Common to these are the emotional effects that the behavior of parents or others in their context instigate. One informant says that she wore the headscarf as a "fuck you to mainstream society". Another relates how the hypocrisy of their parents in honor-related questions revealed that religion was reduced to a social game to protect and produce a certain perception in their immediate social field. To Petersen, this is the most obvious explanation for why four out of five informants did not internalize religion to the degree that they experienced feelings of sin in connection with violating religious injunctions and prohibitions. The article demonstrates that while Islam plays a role -in upbringing, for example -the informants see this as associated with their parents' honor or society's perception, and not as something that corresponds very well to their understanding of Islam as religion and practice. As such, and very much related to the theme of the issue, the article shows us how the context of minority religious honor or majority political culture has significant direct and indirect effects on the feelings, thinking and practices of the Muslim informants.
The third contribution is by Anika Liversage, who writes about Muslim women and divorce in a Danish context, in "Muslimske kvinder og skilsmisse i en dansk kontekst" (Muslim Women and Divorce in a Danish Context). The article investigates the great diversity in ethnic minority women's divorce processes, with a particular focus on Muslim practices linked to terminating a nikah. Liversage's article builds on her and Jesper Petersen's major study from 2020, Ethnic Minority Women and Divorce -Focusing on Muslim Practices, published by the Danish Center for Social Science Research (Liversage & Petersen 2020). The case studies are drawn from more than 80 interviews with professionals, Islamic authorities and ethnic minority women. As she reviews the divorce processes of four Muslim women, Liversage follows their paths through marriage, marital conflicts and divorce, exemplifying both the wide range of divorce experiences and the serious challenges some women experience. She elegantly uses the theoretical device of 'gendered geographies of power' , which is a "framework for analyzing people's social agency -corporal and cognitive -given their own initiative as well as their positioning within multiple hierarchies of power operative within and across many terrains" (Liversage citing Mahler & Pessar 2001, 447).
Some of the most problematic cases arise when husbands maintain their sole right to declare the Talaq divorce and no one is able to mediate or help the women. Liversage argues that while religious family courts in the country of origin might be able to help, there are no such Islamic courts in Denmark. In their absence, the women in their need turn to persons who can be immediately recognized as Islamic authorities -imams in the local mosque, for example. However, more often than not, this leaves the women disappointed, as they often get no help from such sources. In light of the question of the role of the welfare state posed by this issue, a major point is that social policies on social control, Danish law or a moral panic facing Islam do not necessarily determine the outcomes for Muslim women navigating their divorces. Rather, Liversage argues that an additional three factors seem to come into play, namely: 1) where and how women entered into their nikah; 2) the women's own resources; and 3) the degree of support the women can mobilize from others.
The fourth contribution is by Mikele Schultz-Knudsen and Janet Janbek, who investigate "The role of religion in health promotion: How the Danish health authorities use arguments from Islam". The context of the article is the debate that recently emerged in Denmark when the Danish Health Authority decided to cooperate with seven Islamic organizations in creating the pamphlet, "About Islam and vaccination against COVID-19". It was not well received, politically, and the Minister of Health argued that he considered it to be a mistake. Responding to parliamentary critique, the Minister held the view that the expertise of government agencies should not be mixed with religious arguments.
Through a chain of telling cases, Schultz-Knudsen and Janbek investigate how Danish health authorities have used or considered Islam in their efforts and in communications. After introducing public health theory and the existing literature on the interplay between religion and health, they discuss examples of how Danish health authorities have interacted with Islam before the pamphlet on Covid-19 vaccines, beginning with how the authorities have worked with Islamic authorities to deal with female genital mutilation. Secondly, Islamic arguments are considered in a case involving health workers' dress. Thirdly, it turns out the Danish health authorities have also referred to the opinions of religious leaders in the case of a vaccine which contained gelatin. Additional cases discuss diabetes in Ramadan, circumcision and burials, and a number of other practical situations between patients and health care personnel.
Taken together, these cases raise questions regarding the extent to which religion can and should be considered in such interactions, and whether religion should be taken into account in health promotion. However, the authors conclude that, considering that religion is a core part of many people's lives and is an important health determinant, the initiatives by the Danish health authorities may fruitfully contribute to sustainable interactions between health authorities and religion for all involved parties. They support their case by citing employees of the health authorities who observe that they "would do the same again, also for other religions, when a health problem had its roots in theology".
The fifth and final contribution is by Olav Elgvin, and investigates "Regulations in flux: Theology, politics and halal slaughter in Norway". The article describes for the first time the history of the halal debate among Muslims in Norway, showing that halal regulations have been influenced by both theology and politics in bids for influence and status. The approach of the Islamic Council of Norway, the organization mainly in charge of halal regulations, has shifted no less than four times: from acceptance of stunning, to skepticism, to acceptance, to skepticism, and finally to renewed acceptance. He further argues that theological concerns among Muslims have clearly played a role in this process, but politics and power have also mattered. The juridical and political opportunity structures in Norwegian society have laid out the limits for possible approaches to Islamic slaughter, but halal regulations as a field have also been influenced by internal power struggles among the various actors and factions of Islamic institutions in Norway, who have contested each other's interpretations in bids for influence and status.
The argument that the state co-produces sharia, as it is briefly and theoretically sketched in this introduction and as exemplified in the five contributions of the volume, will be further explored in the ongoing "Producing Sharia in Context" research project. However, this introduction and the volume are also an open invitation to colleagues across academic fields to get involved in the exploration of state co-productions of sharia, or to discuss and challenge the assumptions and observations we have made herein and which will continue to unfold.
Protection patterns of tRNAs do not change during ribosomal translocation.
The translocation reaction of two tRNAs on the ribosome during elongation of the nascent peptide chain is one of the most puzzling reactions of protein biosynthesis. We show here that the ribosomal contact patterns of the two tRNAs at A and P sites, although strikingly different from each other, hardly change during the translocation reaction to the P and E sites, respectively. The results imply that the ribosomal micro-environment of the tRNAs remains the same before and after translocation and thus suggest that a movable ribosomal domain exists that tightly binds two tRNAs and carries them together with the mRNA during the translocation reaction from the A-P region to the P-E region. These findings lead to a new explanation for the translocation reaction.
Ribosomes contain three tRNA binding sites, the A, P, and E site, viz. the A site where the decoding takes place, the P site, where the peptidyl-tRNA is located before peptide bond formation, and the E site, which is specific for deacylated tRNA (1-6). During elongation of the nascent peptide chain, each tRNA passes through the ribosomal binding sites in the sequence A → P → E. To elongate the nascent peptide chain by one amino acid, the ribosome goes through a cycle of reactions, the so-called elongation cycle. The three basic reactions of an elongation cycle are 1) occupation of the A site by an aminoacyl-tRNA according to the corresponding codon at the A site, 2) peptide-bond formation, which transfers the already synthesized peptidyl residue to the aminoacyl-tRNA so that the resulting peptidyl-tRNA, prolonged by an amino acid, now resides at the A site, and 3) the translocation reaction, which moves the peptidyl-tRNA to the P site and the deacylated tRNA to the E site.
Cross-linking and footprinting studies have identified components of the ribosome that interact with the tRNAs in the specific sites (reviewed in Refs. 7 and 8). Recently, a technique developed by Eckstein and co-workers (9) has been used to investigate the interactions of tRNAs with ribosomal binding sites (10). This method exploits the fact that the addition of iodine (I 2 ) causes a breakage of the sugar-phosphate backbone of phosphorothioated RNA (here tRNA). The cleavage works equally well with phosphates in single or double strands. However, if tight contacts of a distinct phosphate group of a thioated tRNA with a synthetase or a ribosomal component prevent the access of iodine, the phosphate group under observation is protected, and a cleavage is not observed at this position. Highly differentiated and distinct cleavage patterns were found for A and P site-bound tRNAs in the pre-translocational (PRE) state (10). Here we show that the protection patterns of both tRNAs at the A and P sites in the PRE state hardly change when translocated to P and E sites, respectively. Because probably most of the protection patterns are caused by interactions of the tRNA with the respective ribosomal binding sites, the data suggest the existence of a movable ribosomal domain that tightly binds the two tRNAs before, during, and after translocation.
EXPERIMENTAL PROCEDURES
The sources of chemicals and plasmids are described by Dabrowski et al. (10).
Preparation of Thioated tRNA Phe and AcPhe-tRNA-The thioated tRNA Phe was obtained by in vitro T7-dependent transcription (10). The tRNAs were charged, N-acetylated, and purified by reverse-phase HPLC as described in Bommer et al. (11). The specific activity of Ac[14C]Phe-tRNA Phe was 1000-1200 dpm/pmol for different preparations.
tRNA Binding to Ribosomes-The binding of thioated tRNA Phe to the P i, A, P Pre, P Post, and E positions was performed under the ionic conditions of the binding buffer (20 mM Hepes/KOH, pH 7.8 (0°C), 6 mM MgCl2, 150 mM NH4Cl, 2 mM spermidine, 0.05 mM spermine, and 4 mM 2-mercaptoethanol) essentially as described (11), with the following changes. The binding of deacylated thioated tRNA to the P i position was performed in step one with 80 pmol of 70 S ribosomes, 80 pmol of thioated tRNA Phe, and 50 µg of poly(U) mRNA in a total volume of 100 µl. The mixture was incubated for 10 min at 37°C. Binding of deacylated thioated tRNA Phe to the P Pre position was performed by taking a 50-µl aliquot from the P i binding assay and adding 60 pmol of native Ac-[14C]Phe-tRNA Phe in step two. The incubation of this step was for 30 min at 37°C. For location of the deacylated tRNA Phe in the E position, half of the ribosomal complexes from step two were further processed in step three, where the translocation reaction took place.
Binding of thioated Ac-[14C]Phe-tRNA Phe to the P i position was performed by incubating 80 pmol of 70 S ribosomes, 104 pmol of thioated Ac-[14C]Phe-tRNA Phe, and 50 µg of poly(U) in step one. Note that we used a slightly higher amount for AcPhe-tRNA than for deacylated tRNA in the corresponding experiment described in the preceding paragraph to account for the slightly lower affinity of the P site for AcPhe-tRNA as compared with tRNA Phe (see Ref. 1). Thioated Ac-[14C]Phe-tRNA Phe was bound to the A position by incubating 80 pmol of 70 S ribosomes, 104 pmol of native deacylated tRNA Phe, and 50 µg of poly(U) in step one and adding 104 pmol of thioated Ac-[14C]Phe-tRNA Phe in step two. Half of the aliquot obtained in step two was further processed in step three, yielding ribosomal complexes with thioated Ac-[14C]Phe-tRNA Phe in the P Post position and a native deacylated tRNA in the E site.
The translocation efficiency for both complexes was at least 70%, as determined by puromycin reaction (11) under the ionic conditions of the binding buffer. None of the bound thioated AcPhe-tRNA that was added in step two to occupy the A site reacted with puromycin, whereas more than 70% of the AcPhe-tRNA bound to the P i position reacted with puromycin under the conditions applied.
Footprinting Experiments with Ribosome-bound tRNA-The method used for footprinting experiments followed that described in Dabrowski et al. (10) except that the iodine cleavage was started by adding to the mixture with the thioated tRNAs 1/50 volume of 50 mM iodine in ethanol solution (2% ethanol final concentration). After 1 min at 0°C, the nonbound tRNA was removed from the ribosomal complexes by gel-filtration over a Sephacryl-S300 cDNA Spun column (2 min, 4°C, 1500 rpm, HB4 rotor). We included in Fig. 1A (left panel) the cleavage pattern of nonthioated, transcribed tRNA in the presence of iodine as a control. Practically no cleavage was observed with thioated tRNA bound to ribosomes in the absence of iodine (Fig. 1B, right panel). We also tried to add I 2 after the gel-filtration step instead of before. However, the resulting cleavage pattern was less defined. A possible explanation is that the reaction time of iodine was well terminated by the gel-filtration if added before, thus preventing secondary reactions.
Processing of the Data-The relative intensities of the free tRNAs in solution have been determined previously by vertical scanning relative to a set of control bands (Table I in Ref. 10). The intensities of all As are given relative to that of A9. Likewise G10, C11, and U12 were the reference bands for all Gs, Cs, and Us, respectively. The control bands have an intensity comparable with most of the other bands and do not seem to be affected by the tertiary structure of the tRNAs in solution. The data shown in Table I were obtained in the following way. The values of the horizontal comparison (the intensity of a band of a bound tRNA relative to the corresponding band of a tRNA in solution) were multiplied by the relative intensities of the corresponding bands derived from the vertical scanning of free tRNA. The result of the multiplication gives the intensity of a band of ribosome-bound AcPhe-tRNA or tRNA Phe relative to the respective control band of AcPhe-tRNA or tRNA Phe in solution, respectively (vertical comparison). For example, U16 of AcPhe-tRNA in solution has an intensity of 0.24 relative to U12 of AcPhe-tRNA in solution (vertical scanning); U16 of AcPhe-tRNA in the P Post site has a relative intensity of 4.33 compared with U16 of AcPhe-tRNA in solution (horizontal comparison). The product of both numbers, 0.24 × 4.33 = 1.04, gives the intensity of U16 of AcPhe-tRNA in the P Post site relative to U12 of AcPhe-tRNA in solution (vertical comparison). U16 of AcPhe-tRNA shows an enhanced accessibility when bound to the ribosome, because iodine cleavage at U16 of the free AcPhe-tRNA is restricted because of the conformation of the tRNA. A conformational change of AcPhe-tRNA upon binding relieves this restriction.
The average difference between the intensities of any corresponding bands of two patterns was calculated in the following way: the absolute values of the differences for all corresponding bands were taken, and the mean of these values was calculated. AcPhe-tRNA in the P Post site has only 4 differences out of 63 positions when compared with AcPhe-tRNA in the P i site (not shown), and the average deviation is 0.14. If the P i and P Pre site of tRNA Phe are compared, the average deviation (0.11) is also low.
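For concreteness, the band-intensity normalization and the average deviation described above can be written out as a short calculation. The sketch below is illustrative only; the dictionary inputs and function names are hypothetical and are not the authors' software. It reproduces the worked U16 example from the text.

# Minimal sketch (not the authors' code) of the data processing described above.
# Inputs are hypothetical dictionaries keyed by band position (e.g. "U16"):
#   vertical_free[pos] - intensity of the free-tRNA band relative to its reference
#                        band (A9, G10, C11 or U12), from vertical scanning
#   horizontal[pos]    - intensity of the ribosome-bound band relative to the same
#                        band of the tRNA in solution (horizontal comparison)

def vertical_comparison(horizontal, vertical_free):
    """Intensity of each bound-tRNA band relative to the reference band in solution."""
    return {pos: horizontal[pos] * vertical_free[pos]
            for pos in horizontal if pos in vertical_free}

def average_deviation(pattern_a, pattern_b):
    """Mean absolute difference over all bands present in both patterns."""
    common = [p for p in pattern_a if p in pattern_b]
    return sum(abs(pattern_a[p] - pattern_b[p]) for p in common) / len(common)

# Worked example from the text: U16 of AcPhe-tRNA,
# 0.24 (vertical scanning) x 4.33 (horizontal comparison) = 1.04
print(vertical_comparison({"U16": 4.33}, {"U16": 0.24}))  # {'U16': ~1.04}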
Some Remarks to the Methods Applied-Phosphorothioated tRNAs were the product of an in vitro transcription. About four phosphorothioates were randomly incorporated at either the A, G, U, or C positions during transcription. The resulting tRNAs could be easily acylated with phenylalanine and were active in binding to ribosomal binding sites and poly(Phe) synthesis (10). The thioated deacylated tRNA Phe and AcPhe-tRNA were bound to poly(U)-programmed ribosomes to either P or A sites, respectively, and the corresponding contact patterns were analyzed before and after translocation.
After forming the ribosomal complexes, the thioated (5′-32P)tRNA was cleaved with iodine as described previously, and the RNA was applied to a denaturing polyacrylamide sequencing gel (10). The band intensity of each position was assessed by scanning and compared with the corresponding position of a tRNA in solution (see "Experimental Procedures").
A weaker band intensity resulting from binding to the ribosome means a restriction of the cleavage reaction because of steric hindrance, thus impairing the access of I 2 so that a close ribosomal contact can be assumed at this position of the tRNA. An enhancement of the band intensity of a ribosome-bound tRNA compared with the one in solution indicates a conformational change of the tRNA upon binding to the ribosome; phosphates in bent regions of tRNA seem to be more accessible to iodine (12).
There is still another possibility why a band intensity could be weakened. Thioates at certain positions could hinder the binding of the tRNA to the ribosome so that these tRNAs are removed by gel-filtration and are not applied to the sequencing gel. Control experiments have demonstrated that deacylated tRNA Phe could not be bound to ribosomes when thioated at position A9; the same was true for AcPhe-tRNA when it was thioated at position U8 (10). These two positions and those in the gel extremes (~1-5 and ~69-76) could not be identified (because of their localization) and were classified as "not determinable" in Table I and Figs. 2 and 3.
Ribosomal Contact Patterns of AcPhe-tRNA and tRNA Phe -The elongating ribosome contains at least two tRNAs, either at the A and P sites before the translocation reaction or at the P and E sites after (for review see Ref. 13). Only the initiating ribosome contains a single tRNA at the P site. Therefore we distinguish the initiating ribosome with one tRNA at the P i site (i for initiating) and the elongating ribosome with tRNAs at the A and the P Pre sites (PRE for pre-translocational state) or at the P Post and the E sites (POST for post-translocational).
PRE complexes were prepared with a thioated AcPhe-tRNA at the A site or a thioated tRNA Phe at the P site. The second tRNA in the complex was a native tRNA Phe or AcPhe-tRNA, respectively. According to the puromycin reaction, 100% of the bound AcPhe-tRNA was present at the A site. The cleavage patterns of both complexes were analyzed before and after an elongation factor-G-dependent translocation. More than 70% of the AcPhe-tRNA bound to the A site was translocated to the P Post site as indicated by the puromycin reaction. Furthermore, the cleavage pattern of AcPhe-tRNA and tRNA Phe was analyzed in the P i site. An autoradiogram of such a footprinting experiment for AcPhe-tRNA in the three investigated sites (A, P Post , and P i ) is shown in Fig. 1A. The cleavage patterns for tRNA Phe in P Pre , E sites, and P i is shown in Fig. 1B.
The gels were scanned, and the relative intensities of the bands were compared with the corresponding bands of AcPhe-tRNA or tRNA Phe in solution. The error spread between independent experiments was around ±15% as in previous experiments (10). Fig. 2 shows a summary of the results. Strongly protected nucleotides (intensity < 30% that of the corresponding band of the tRNA in solution) and nucleotides with enhanced accessibility (intensity > 130%) are indicated within the tertiary model of the tRNA.
Comparison of the Protection Patterns-Previously we have shown that the protection patterns of AcPhe-tRNA in the A site and deacylated tRNA Phe in the P site are different, whereas there is hardly a difference between tRNA Phe in the P i and P Post sites (10). The surprising result in this study is that there is a striking difference between P site-bound AcPhe-tRNA (P Post or P i) and P site-bound tRNA Phe (P Pre or P i), whereas the three protection patterns of AcPhe-tRNA in the various states seem to be very similar. The same is true for the three protection patterns obtained with tRNA Phe. All are very similar to each other (Fig. 2, A-F). In all three states of tRNA Phe (P i, P Pre, E sites) the protection is much stronger than the protection patterns observed with AcPhe-tRNA. tRNA Phe shows more than 30 strongly protected positions in each ribosomal binding state, in contrast to the maximal 9 positions found in the AcPhe-tRNA (A and P Post). On the other hand, only a few positions of enhanced accessibility (maximum 3) are found with tRNA Phe, whereas for AcPhe-tRNA these positions are numerous (minimum 13). The stronger protection of the deacylated tRNA at the P and E sites probably reflects a more compactly folded structure. The data obtained by horizontal comparison (intensity of a position relative to the corresponding band of the tRNA in solution, Fig. 2) indicate the changes of the reactivities upon binding to a ribosomal site compared with the reactivities of the tRNAs in solution. We have shown previously that tRNA Phe and AcPhe-tRNA in solution show a different cleavage pattern, probably because there is a conformational change in the tRNA upon aminoacylation (10). An alternative explanation, namely that the differences are caused by selection by the aminoacyl-tRNA synthetase, is unlikely because (i) it was not observed in binding studies of thioated tRNAs to the corresponding synthetase (9), and (ii) the levels of aminoacylation of native tRNAs and phosphorothioated tRNA were almost the same (10) but should be severely reduced with phosphorothioated tRNA if such a selection would take place. Indeed, a conformational change caused by aminoacylation was directly observed in fluorescence studies (14,15) and revealed by electrophoresis through perpendicular denaturing gradient gels (16). Therefore, the data derived from the horizontal comparison might not properly reflect the differences between the patterns of AcPhe-tRNA and tRNA Phe. To eliminate the influence of the conformation of the free tRNA, the reactivities of the bound tRNAs were normalized to a fixed set of reference bands of the corresponding free tRNAs, namely A9, G10, C11, or U12 (see "Experimental Procedures"). These values are given in Table I (vertical comparison) and were used for the difference calculations (Fig. 3). We consider differences between the intensities of two positions as significant if they are different by at least a factor of two and if the higher value is not below 0.3. Furthermore, we calculated an average deviation between two patterns (see "Experimental Procedures") to be independent from arbitrarily chosen criteria of differences. If AcPhe-tRNA in the A site is compared with AcPhe-tRNA in the P Post site (Fig. 3A) or the P i site (not shown), 8 of 61 positions are different. The average deviation of a band from an AcPhe-tRNA at the P Post site to that at the A site is only 0.16.
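The significance criterion and the difference counts quoted here (e.g., 8 of 61 positions) follow a simple rule that can be stated explicitly. The sketch below is an assumed illustration, not the authors' code; only the threshold values (a factor of two and a floor of 0.3) are taken from the text.

# Minimal sketch of the significance criterion described above: two corresponding
# bands count as different if their normalized intensities differ by at least a
# factor of two and the higher of the two values is not below 0.3.

def is_significant_difference(a, b, factor=2.0, floor=0.3):
    hi, lo = max(a, b), min(a, b)
    return hi >= floor and (lo == 0.0 or hi / lo >= factor)

def count_differences(pattern_a, pattern_b):
    """Number of corresponding bands that differ significantly between two patterns."""
    common = [p for p in pattern_a if p in pattern_b]
    return sum(is_significant_difference(pattern_a[p], pattern_b[p]) for p in common)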
Practically no difference is observed when the patterns of tRNA Phe in P Pre and E site, respectively, are compared (Fig. 3C), and the average deviation is 0.05. It follows that the protection patterns of AcPhe-tRNA are very similar before and after translocation. The deacylated tRNA before and after translocation at the P Pre and the E sites, respectively, shows practically an identical pattern.
In contrast, when P Post -site bound AcPhe-tRNA is compared with P Pre -site bound tRNA Phe , 40 of 60 positions differ in their protection pattern (Fig. 3B), and the average deviation (0.44) of a band is much larger. Similarly, a comparison of the pattern of AcPhe-tRNA with that of tRNA Phe in the P i sites reveals a large number of differences (35 of 60; not shown); the average deviation is 0.40. Therefore, the tRNAs seem to interact with different binding sites, although conventional understanding assigns them to the same binding site, the P site.
A pattern specific for deacylated tRNA Phe was found regardless of whether the tRNA Phe was present at the P or E sites. Similarly, a pattern specific for AcPhe-tRNA was found when bound to either A or P sites. Do the two distinct contact patterns depend on the charging state of the tRNA rather than reflect different features of ribosomal binding sites?
We analyzed this problem in the following control experiment. Ribosomes were programmed with the MF-mRNA (17), which is a heteropolymeric mRNA of 46 nucleotides and carries the two unique codons AUG-UUC in the middle. A deacylated and thioated tRNA Phe was bound either to the A site after prefilling the P site with a deacylated tRNA f Met or to the P i site. When the thioated tRNA Phe was present at the P i site, a pattern known for deacylated tRNA was observed (not shown). In contrast, tRNA Phe in the A site showed a different pattern (Table I). 30 of 51 positions differed in their intensities as compared with tRNA Phe in the P Pre site (Fig. 3D). 19 of these 30 positions approach a reactivity of the corresponding positions of AcPhe-tRNA in the A site. It seems that tRNA Phe in the A site adopts a protection pattern similar to that of AcPhe-tRNA in the same site. Some nucleotides still show a protection state typical for the tRNA Phe pattern (especially the anticodon, e.g. A35, A36). In contrast, many others show a typical AcPhe-tRNA protection (Table I). Calculating the average difference between the corresponding nucleotides of two patterns revealed the following. The pattern of a deacylated tRNA Phe at the P Pre site is very different from that of a deacylated tRNA Phe at the A site (average difference 0.31). The difference between P Post site-bound AcPhe-tRNA and P Pre site-bound tRNA Phe (0.44) is significantly larger than the corresponding value of A site-bound tRNAs (0.26), i.e. the difference between AcPhe-tRNA and tRNA Phe bound to the A site. Thus, the pattern of a tRNA Phe at the A site deviates sharply from that of a tRNA Phe at the P site but approaches that of an AcPhe-tRNA at the A site, although significant differences are observed. It is clear that both the charging state of a tRNA and the features of a ribosomal binding site severely affect the corresponding protection pattern of the tRNA.
DISCUSSION
It has been shown previously that strikingly different contact patterns were observed with deacylated tRNA Phe at the P site and AcPhe-tRNA at the A site of ribosomes before translocation (10). Here we demonstrate that the distinct pattern of either tRNA hardly changes during translocation to the P and E sites, respectively.
Two ways were used to calculate and compare the relative intensities of a band. 1) The intensity of a position of a bound tRNA was calculated relative to the corresponding band of the tRNA in solution (horizontal comparison). 2) The intensities of, for example, all A nucleotides of a bound tRNA were related to the reference band A9 of the corresponding tRNA in solution (vertical comparison, see "Experimental Procedures," "Processing of the Data"). Both methods of data processing clearly reveal that the protection patterns of AcPhe-tRNA in the investigated binding sites (A, P Post , P i ) are similar as well as those of tRNA Phe in the P Pre , E, and P i sites, whereas both types of patterns are strikingly different. The intensities of the corresponding nucleotide positions of an AcPhe-tRNA at either the A or the P Post site differ by 0.16 on average. A difference of only 0.05 is found, if correspondingly, the patterns of deacylated tRNA Phe at P Pre and E sites are compared. However, the average difference is about 0.4 between an AcPhe-tRNA pattern and that of a tRNA Phe .
The type of pattern observed with AcPhe-tRNA at the A site before and at the P site after translocation is termed α (α, because only the α-like pattern is found at the A site). Correspondingly, we define the type of pattern seen for deacylated tRNA at the P and the E sites before and after translocation, respectively, as the ε pattern (ε, because only the ε-like pattern is observed at the E site). A few positions at the 5′ and the 3′ ends could not be judged in our analysis, but at least the CCA-3′ end has been tested already by chemical probing. The protection pattern at the CCA end did not change during translocation from the A to the P site (18), in agreement with our findings.
Although the conformation of both tRNA species in solution seems to be different, there is evidence that binding to the ribosome equalizes the conformations of deacyl-tRNA and AcPhe-tRNA (10). Furthermore, a control experiment demonstrated that the protection pattern is not mainly determined by the charging state of the tRNA, because tRNA Phe, when bound to the A site, definitively does not show an ε pattern (Fig. 3D) but approaches the α pattern. Therefore we prefer the view that most of the protections of the ribosome-bound tRNAs are because of direct contacts with the ribosome, and that the differences in the α and ε protection patterns reflect interactions with different ribosomal components. This interpretation seems to be justified because the contact patterns of thioated tRNAs in a complex with synthetase correlate well with x-ray analysis data (12,19), i.e. the protection pattern is directly related to the binding site. Even if some protections are caused by a conformational change upon binding, this change is caused by interactions with the ribosome, viz. reflects interactions between the tRNA and the binding site.
The data suggest that the micro-environment of the tRNAs does not change during translocation, i.e. components of the ribosome, which are in contact with the tRNAs and are responsible for the protection patterns, are the same before and after translocation. This is surprising, because according to previous models of translocation, differences would be expected between A site-and P site-bound AcPhe-tRNA as well as between P site-and E site-bound tRNA Phe , whereas the patterns of P site-located deacyl-tRNA and AcPhe-tRNA would be expected to be similar.
Our results are supported by other studies. (i) AcPhe-tRNA protects a similar set of nucleotides of 23 S rRNA in the A and in the P site, whereas a different set of nucleotides was protected by P site- or E site-located tRNA Phe (20). (ii) Recently, the 23 S rRNA neighborhood of deacylated tRNAs was analyzed in the P and E sites (21). The tRNAs contained up to five randomly incorporated 4-thiouridines to which a phenanthroline was attached. This residue cleaved nucleic acids in the presence of Cu2+ ions within about 20 Å of the site of the attachment. 118 cleavage sites were detected, 85% of which were identical at P and E sites, also supporting the view that the ribosomal micro-environment of the tRNAs in the two sites does not change. (iii) Finally, our observation that tRNA Phe and AcPhe-tRNA at the P i site show highly different patterns is in agreement with results from energy transfer studies using fluorescent probes. The fluorescence signals from deacylated tRNA and AcPhe-tRNA at the P i position were so different that the authors concluded that these tRNAs must be present at different sites (14,15). Those results also suggested that AcPhe-tRNA did not change the site when translocated from the A to the P position, again in accord with the findings reported here.
(FIG. 2 legend, fragment: the patterns are taken from Ref. 10 because the results were the same, within the error borders, and are shown again to facilitate a comparison. Gray, not determinable. A red cross on a tRNA in a ribosome insert indicates that the respective tRNA is thioated, the contact pattern of which is shown.)
On one hand, we find that the micro-environment of both tRNAs on the elongating ribosome does not change during translocation. On the other hand, it is known that the mRNA moves three nucleotides through the ribosome in the course of translocation (22), and also the mass centers of gravity of both the mRNA (23) and the two tRNAs move 12 Å within the ribosome (24), in good agreement with the length of one codon. Most important, the two tRNAs turn by an angle of at least 20° during translocation (25,26). Such a turn means that the elbows of the two L-shaped tRNAs change their positions considerably and move even more than 12 Å during translocation. This paradox can be resolved if one assumes that a domain of the ribosome moves together with the tRNAs and the mRNA during the translocation reaction. This assumption gives a simple explanation for the translocation reaction: the tRNAs do not dissociate from A and P sites to diffuse to P and E sites, respectively; instead the tRNAs are tightly bound to the movable conveyor before, during, and after the translocation reaction. Translocation can be achieved by the ribosome via a movement of this conveyor from the A and P positions to the P and E positions, respectively; thereby the bound tRNAs are translocated. The movable ribosomal domain that binds two tRNAs tightly and transports them during the translocation reaction is termed the α-ε domain according to the corresponding protection patterns. According to this model the critical step of the elongation cycle is not the translocation as in previous models but rather the A site occupation. During this reaction, the conveyor has to be moved back to A and P positions, whereas the peptidyl-tRNA has to stay at the P site, and the deacyl-tRNA has to be released from the E site. The view that the A site occupation is characterized by a major rearrangement between the (tRNA)2·mRNA complex and the ribosome is consistent with the observations that the occupation of the A region and not the translocation reaction is the rate-limiting step of the elongation cycle (27,28).
A candidate for a movable α-ε domain is the so-called "bridge 2," which is the most massive connection between the ribosomal subunits, going from the decoding region of the small subunit to the region of the peptidyltransferase on the large ribosomal subunit (29). Indeed, there are close contacts between P i site-bound fMet-tRNA f Met and this intersubunit bridge, as revealed by cryo-electron microscopy (30). Interestingly, those regions of the AcPhe-tRNA exposed at the P i site (part of the anticodon stem and the D loop, see Fig. 2C) seem to be also freely accessible in the cryo-electron microscopy analysis, whereas higher densities of protection sites are found at regions of contacts between the fMet-tRNA and the ribosomal matrix (30). Therefore, a good correspondence between protections and contact sites exists with tRNA-ribosome complexes as has been already observed with tRNA-synthetase complexes mentioned above.
(FIG. 3 legend, fragment: Red, positions where the respective relative intensities of the two states to be compared differed at least by a factor of two, and the higher intensity of the two values was ≥0.3; green, no difference according to this criterion. A, differences in the protection patterns of AcPhe-tRNA before and after translocation. B, differences of tRNA Phe in the P Pre site and AcPhe-tRNA in the P Post site. C, differences in the protection patterns of tRNA Phe before and after translocation. D, differences in the protection patterns of deacylated tRNA Phe in the P Pre site and in the A site (A(deac)).)
The conclusions drawn from the protection patterns are not easy to reconcile with hybrid states of tRNA binding. The hybrid-site model suggests that in the PRE state, AcPhe-tRNA is bound in an A/P and deacyl-tRNA in a P/E hybrid state (20). After translocation, the AcPhe-tRNA should be bound in the P/P state and the deacyl-tRNA in the E site. In the case of true hybrid states, one would expect striking changes in the protection mainly of the anticodon stem loop structure of the tRNAs. In contrast, we observe that the few differences between the patterns of AcPhe-tRNA before and after translocation at the A and P site, respectively, are scattered over the whole molecule (Fig. 3A) and are not clustered at the anticodon domain. In contrast, the experimental data on which the hybrid-site model is based can be reconciled with an elongation model that incorporates the concept of a movable, ribosomal α-ε domain translocating the tRNAs, i.e. the α-ε model (for discussion see Refs. 31 and 32).
In summary, our results suggest that the same ribosomal components are in contact with the tRNAs before and after translocation, viz. are moving together with the tRNAs during the translocation reaction, thus providing a new explanation for this reaction. The mRNA is connected with the two tRNAs on the ribosome via codon-anticodon interactions. It seems that the mRNA passively follows the movement of both tRNAs during the translocation reaction, because a similar analysis with thioated mRNA has revealed that no contacts exist between phosphate groups of an mRNA and the ribosome outside the decoding region (33). In this view, essentially the tRNAs are pulling the mRNA through the ribosome, underscoring the importance of maintenance of codon-anticodon interactions before and during translocation (13,33). It turns out that it is not the translocation reaction but rather the molecular mechanism of A-site occupation that is puzzling.
Joint state and parameter estimation with an iterative ensemble Kalman smoother
Both ensemble filtering and variational data assimilation methods have proven useful in the joint estimation of state variables and parameters of geophysical models. Yet, their respective benefits and drawbacks in this task are distinct. An ensemble variational method, known as the iterative ensemble Kalman smoother (IEnKS), has recently been introduced. It is based on an adjoint-model-free variational, but flow-dependent, scheme. As such, the IEnKS is a candidate tool for joint state and parameter estimation that may inherit the benefits from both the ensemble filtering and variational approaches. In this study, an augmented-state IEnKS is tested on its estimation of the forcing parameter of the Lorenz-95 model. Since joint state and parameter estimation is especially useful in applications where the forcings are uncertain but nevertheless determining, typically in atmospheric chemistry, the augmented-state IEnKS is tested on a new low-order model that takes its meteorological part from the Lorenz-95 model, and its chemical part from the advection-diffusion of a tracer. In these experiments, the IEnKS is compared to the ensemble Kalman filter, the ensemble Kalman
Introduction
Data assimilation in geophysics is often concerned with the estimation of the state of the system (e.g. atmosphere, ocean). Yet, non-observed parameters of the model can also be seen as control variables. They can indirectly be estimated through the assimilation of observations. In such a context, data assimilation can be a powerful inverse modeling tool.
With the progress in techniques as well as the rise in popularity of data assimilation in geosciences, this topic has become of increasing interest. Parameter estimation is useful because it can account for model error through a parametric representation of the uncertain processes, and could serve as a tool to enhance the system state estimation. For instance, it is now accepted that air quality forecasting can benefit considerably from the online estimation of forcing parameters. Parameter estimation is also a fundamental tool per se in the estimation of the parameters which are often of physical or societal interests. For instance, again regarding air quality, data assimilation can help assess effective kinetic rates of interest to chemists, or it can help assess regulated pollutant emissions of interest to policy makers.
Data assimilation techniques for parameter estimation
As is the case with data assimilation for state estimation, two types of approach have been used for parameter estimation: filtering methods and variational methods. The estimation of parameters by the filtering approaches is based on the augmentation of the state vector with the parameter variables. If the state space has dimension M and if the number of parameters is P, then the augmented control vector has dimension M + P. Through the assimilation of observations, the joint analysis of the state variables and the parameters aims at building covariances (or higher-order dependencies for non-Gaussian filters) between them; these are crucially needed because of the non-observability of most parameters. The augmented-state principle can be used with any type of filter: extended Kalman filters (e.g., Kondrashov et al., 2008), ensemble Kalman filters (e.g., Aksoy et al., 2006; Wirth and Verron, 2008; Barbu et al., 2009), particle filters (e.g., Vossepoel and van Leeuwen, 2007; Weir et al., 2013), and stochastic sampling and genetic algorithms (e.g., Jackson et al., 2004; Liu et al., 2005; Bocquet, 2012; Posselt and Bishop, 2012). In an enlightening review, Ruiz et al. (2013) have discussed the use of ensemble Kalman filters (EnKFs) for parameter estimation. When the filtering method accounts for asynchronous observations by building covariances between parameter errors defined at distinct times, the method is usually referred to as a smoother (Evensen, 2003; Hunt et al., 2004; Sakov et al., 2010; Cosme et al., 2010).
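To make the augmented-state idea concrete, the sketch below shows one possible forecast-analysis cycle in which a state ensemble and a parameter ensemble are stacked and updated jointly through their ensemble-estimated cross-covariances. It is a schematic illustration under stated assumptions: the generic analysis routine enkf_analysis and the model wrapper forecast are placeholders, not functions of any particular software package.

# Minimal sketch of an augmented-state EnKF cycle (illustrative only).
import numpy as np

def augment(X, Theta):
    """Stack an (M, N) state ensemble and a (P, N) parameter ensemble into (M+P, N)."""
    return np.vstack([X, Theta])

def split(Z, M):
    """Undo the augmentation after the analysis."""
    return Z[:M, :], Z[M:, :]

def augmented_enkf_step(X, Theta, y, H, R, enkf_analysis, forecast):
    M = X.shape[0]
    # Forecast: propagate each member with its own parameters; the parameters
    # themselves follow a persistence model.
    Xf = np.column_stack([forecast(X[:, i], Theta[:, i]) for i in range(X.shape[1])])
    Zf = augment(Xf, Theta)
    # Analysis on the augmented ensemble; only the state part is observed,
    # so the augmented observation operator is [H, 0].
    H_aug = np.hstack([H, np.zeros((H.shape[0], Theta.shape[0]))])
    Za = enkf_analysis(Zf, y, H_aug, R)
    return split(Za, M)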
The estimation of parameters with the variational approach is based on the explicit dependence of the cost function on not only the state variables but also the parameters. If the dependence is not explicit, one should at least be able to compute the gradient of the cost function with respect to the parameters. The four-dimensional variational method, or 4D-Var (Le Dimet and Talagrand, 1986; Talagrand and Courtier, 1987; Rabier et al., 2000), has the distinct advantage of being a natural smoother since it works within a temporal window to assimilate asynchronous observations. However, it requires the use of the adjoint evolution model to compute gradients of the cost function. Computing the gradient with respect to the model parameters requires the same adjoint model, plus the extra effort of expressing the explicit derivative of the cost function with respect to the parameters in terms of the adjoint variables. It has been used for parameter estimation by, e.g., Pulido and Thuburn (2006), Bocquet (2012), and Kazantsev (2012).
This list of contributions to the field is far from being exhaustive and merely illustrates some of the methodologies used in atmospheric, ocean and climate sciences. In particular, there is a vast literature in atmospheric chemistry dedicated to the inversion of sources of pollutants and tracers. The extended and ensemble Kalman filters and variational methods (3D-Var and 4D-Var) have been employed in this field for over two decades (Zhang et al., 2012, and references within). Owing to the (quasi-)linearity of some chemical species, simpler four-dimensional smoothing analyses merely using a Best Linear Unbiased Estimator (BLUE) have also been extensively used to estimate sources.
The iterative ensemble Kalman smoother
The iterative ensemble Kalman smoother (IEnKS) has recently been proposed (Bocquet and Sakov, 2013) as an extension of the iterative ensemble Kalman filter (Bocquet and Sakov, 2012). It is meant to solve the variational problem of 4D-Var with the help of a 4D ensemble. As such, it is a 4D ensemble variational method of the type used in the work by Buehner et al. (2010), Chen and Oliver (2012) and Fairbairn et al. (2013), and has (more remote) connections with ensembles of variational methods (Raynaud et al., 2009; Bowler et al., 2013). It does not require the use of the adjoint observation and evolution models since the sensitivities are estimated with the ensemble (Gu and Oliver, 2007; Liu et al., 2008). Moreover, the IEnKS generates the posterior ensemble using Gaussian assumptions and forecasts the ensemble to the next update step in the same way an ensemble square root Kalman filter does. Note that the scheme is not a hybrid method since it does not combine two distinct methods.
Because the IEnKS fundamentally solves a variational problem, it may require iterations for the cost function minimization. The number of iterations depends on the nonlinearity of the system. This number is expected to be small (1 or 2) for weak nonlinearity (typical of synoptic scale meteorology).
Using perfect model assumptions, Bocquet and Sakov (2013) have tested the IEnKS on two low-order models in different regimes representing different nonlinearities and lengths of the data assimilation window (DAW). The IEnKS (often significantly) outperforms the EnKF and the standard ensemble Kalman smoother (EnKS) in all these regimes, not only regarding the smoothing performance (retrospective state estimation) but also regarding the filtering performance (state estimation at present and future time). Here, we will show that the IEnKS also outperforms 4D-Var in this context.
In addition, the IEnKS has been shown on these models to be able to handle long DAWs, especially when assimilating observations several times (in a mathematically consistent manner).
Because the IEnKS offers the advantages of both filtering and variational methods, and because it is capable of operating on long DAWs, it has considerable potential as an efficient parameter estimation method.
Objective and outline
The objective of this article is to introduce a straightforward extension of the IEnKS to joint state and parameter estimation, and to test the potential of the approach on low-order models. The physical context is that of chaotic geophysical models, and of atmospheric chemical/tracer models, in which a joint state and parameter estimation is, in our opinion, a key to successful forecasts.
The algorithm of the IEnKS will be described in Sect. 2, in a compact but comprehensive manner. The method will then be generalized to joint state and parameter estimation. In Sect. 3, the capabilities of the IEnKS on the Lorenz-95 model (Lorenz and Emanuel, 1998) will be reported. Additional tests will be performed: a comparison with the state-of-the-art EnKF and standard EnKS, as well as with a 4D-Var, and with a new cycling of the IEnKS DAWs. Then the IEnKS will be tested for joint state and parameter estimation on the Lorenz-95 model. In Sect. 4, an original extension of the Lorenz-95 with the advection of a tracer will be introduced. It is meant to represent the dynamics of an online atmospheric chemistry model, or of meteorological models with a constituent such as moisture, with two unobserved parameters: the Lorenz-95 forcing parameter and the emission flux. The IEnKS, the EnKF/EnKS, and a 4D-Var will be tested and compared in this context. The results will be discussed in Sect. 5. Conclusions will be drawn in Sect. 6.
2 The iterative ensemble Kalman smoother for joint state and parameter estimation
The algorithm
A Bayesian derivation of the IEnKS can be found in Bocquet and Sakov (2013). However, we would like to introduce the IEnKS comprehensively in this article: reference to Bocquet and Sakov (2013) will only be made regarding details that are not directly relevant to this study. Here, we describe the algorithm with its main justifications, and then provide its pseudo-code.
The core algorithm
Observation vectors y ∈ R^d are assumed to be collected at every time step Δt. Time is discretized into the times t_k when the observations are collected. The number d of scalar observations within y can be time-dependent. The observations are related to the state vector through a possibly nonlinear, possibly time-dependent observation operator H_k. The observation errors are assumed to be Gaussian-distributed, unbiased, and uncorrelated in time, and to have an observation error covariance matrix R_k. The analysis step of the assimilation scheme is performed over a window of length L Δt in time units. Unless otherwise stated, the time index k is relative to present time. With this convention, present time is always t_L, so that the initial condition of the DAW is conveniently always at t_0.
Let us first describe the update step. At t_0 (i.e. L Δt in the past), the background is obtained from an ensemble of N state vectors of R^M: x_{0,[1]}, . . . , x_{0,[n]}, . . . , x_{0,[N]}. Index 0 refers to time, while [n] refers to the ensemble member index. They can be stored in a matrix E_0 = [x_{0,[1]}, . . . , x_{0,[N]}] ∈ R^{M×N}. One can equivalently represent the ensemble by its mean x̄_0 = (1/N) Σ_{n=1}^{N} x_{0,[n]} and its anomaly matrix A_0 = [x_{0,[1]} − x̄_0, . . . , x_{0,[N]} − x̄_0]. As in the ensemble Kalman filter, this background is approximated as a Gaussian distribution of mean x̄_0 and covariance matrix A_0 A_0^T/(N − 1), the first- and second-order empirical moments of the ensemble. The background is rarely full rank since the anomalies of the ensemble span a vector space of dimension smaller than or equal to N − 1, and in a realistic context N ≪ M. Therefore, one solves for the analysis state vector x_0 in the ensemble space x̄_0 + Vec{x_{0,[1]} − x̄_0, . . . , x_{0,[N]} − x̄_0}, which can be written x_0 = x̄_0 + A_0 w, where w ∈ R^N is a vector of coefficients in ensemble space. The analysis of the IEnKS over [t_0, t_L] is obtained from a cost function. The restriction of this cost function in state space to the ensemble space yields

J̃(w) = (1/2) Σ_{k=1}^{L} β_k δ_k(w)^T R_k^{-1} δ_k(w) + ((N − 1)/2) w^T w,  with  δ_k(w) = y_k − H_k ∘ M_{k←0}(x̄_0 + A_0 w).   (1)

The tilde symbol signifies that J̃ is a mathematical object defined in ensemble space, M_{k←0} is the possibly nonlinear transition operator from t_0 to t_k, and δ_k(w) is the innovation vector at t_k. The {β_k}_{1≤k≤L} are scalars in [0, 1] that weight the observations within the DAW. The choice of the β_k can be made mathematically consistent and can have dramatic consequences on the performance of the data assimilation system. We refer to Bocquet and Sakov (2013) for a justification and numerical tests. Nonetheless, the rationale for the choice of the {β_k}_{1≤k≤L} will be discussed later.
This cost function is iteratively minimized in ensemble space following the Gauss-Newton algorithm,

w^{(j+1)} = w^{(j)} − H̃_{(j)}^{-1} ∇J̃(w^{(j)}),   (2)

using the gradient ∇J̃ and an approximate Hessian H̃ of the cost function:

∇J̃(w^{(j)}) = (N − 1) w^{(j)} − Σ_{k=1}^{L} β_k Y_{k,(j)}^T R_k^{-1} δ_k(w^{(j)}),
H̃_{(j)} = (N − 1) I_N + Σ_{k=1}^{L} β_k Y_{k,(j)}^T R_k^{-1} Y_{k,(j)}.

H̃_{(j)} is an approximation of the full Hessian because it disregards the contribution of the second-order derivatives of the innovation vectors δ_k(w) to the cost function. The notation (j) refers to the iteration index of the minimization. At the first iteration one sets w^{(0)} = 0. Y_{k,(j)} = [H_k ∘ M_{k←0}]'_{x_0^{(j)}} A_0 is the tangent linear of the operator from ensemble space to the observation space, evaluated at x_0^{(j)} = x̄_0 + A_0 w^{(j)}. The estimation of this sensitivity using the ensemble is what allows one to avoid the use of the model adjoint. Two implementations, referred to as the transform and the bundle variants, have been put forward (Bocquet and Sakov, 2012). With the bundle scheme, for instance, the ensemble is rescaled closer to the mean trajectory by a factor ε. It is then propagated through the model and the observation operators, after which it is rescaled back by the inverse factor ε^{-1}. The operation reads

Y_{k,(j)} ≈ (1/ε) [H_k ∘ M_{k←0}(x_0^{(j)} 1^T + ε A_0)] (I_N − 11^T/N),

where 1 = (1, . . . , 1)^T ∈ R^N. Note that each iterative update, Eq. (2), solves the inner quadratic variational problem

w^{(j+1)} = argmin_w { (1/2) Σ_{k=1}^{L} β_k ‖δ_k(w^{(j)}) − Y_{k,(j)}(w − w^{(j)})‖²_{R_k} + ((N − 1)/2) ‖w‖² },

where ‖z‖²_G = z^T G^{-1} z. The iteration is stopped when ‖w^{(j)} − w^{(j−1)}‖ becomes smaller than a predetermined threshold e. Let us denote by w^⋆ the solution of the cost function minimization. The symbol ⋆ will be used with any quantity obtained at the minimum. Subsequently, a posterior ensemble can be generated at t_0:

E_0^⋆ = x_0^⋆ 1^T + √(N − 1) A_0 H̃_⋆^{-1/2} U,

where U is an orthogonal matrix that is arbitrary but satisfies U1 = 1 (meant to keep the posterior ensemble centered on the analysis), and x_0^⋆ = x̄_0 + A_0 w^⋆. The Gauss-Newton minimization scheme shown in Eq.
(2) can easily be replaced by a quasi-Newton scheme that avoids the computation of the Hessian, or by a Levenberg-Marquardt algorithm that guarantees convergence of the minimization. These alternatives have been suggested and successfully tested in Bocquet and Sakov (2012). In the context of the standard models tested in Sects. 3 and 4, the nonlinearity is mild enough that a Levenberg-Marquardt scheme is unnecessary, and the Gauss-Newton scheme is very efficient.
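To make the above concrete, the following Python sketch outlines one analysis step of the bundle variant (the Gauss-Newton iteration of Eq. 2, with the ensemble-estimated sensitivities and the posterior ensemble generation). The propagation and observation operators M and H, their calling conventions, the default numerical values, and the choice U = I_N are illustrative assumptions rather than the exact implementation used here.

import numpy as np

def ienks_bundle_analysis(E0, obs, M, H, R_inv, betas, eps=1e-4, e=1e-6, jmax=10):
    """One IEnKS analysis over a DAW (bundle variant, Gauss-Newton in ensemble space).

    E0    : (Mdim, N) background ensemble at t0
    obs   : list of observation vectors y_k, k = 1..L
    M     : M(E, k) propagates an ensemble of states from t_{k-1} to t_k (assumed interface)
    H     : H(E, k) maps an ensemble of states to observation space at t_k (assumed interface)
    R_inv : list of observation-error precision matrices R_k^{-1}
    betas : observation weights beta_k within the DAW
    """
    Mdim, N = E0.shape
    x0 = E0.mean(axis=1)
    A0 = E0 - x0[:, None]                          # anomaly matrix
    w = np.zeros(N)                                # w^(0) = 0
    for _ in range(jmax):
        E = (x0 + A0 @ w)[:, None] + eps * A0      # rescaled ensemble around the current iterate
        grad = (N - 1) * w
        hess = (N - 1) * np.eye(N)
        for k, y in enumerate(obs, start=1):
            E = M(E, k)                            # propagate from t_{k-1} to t_k
            HE = H(E, k)
            y_mean = HE.mean(axis=1)
            Yk = (HE - y_mean[:, None]) / eps      # ensemble-estimated sensitivity Y_k
            dk = y - y_mean                        # innovation delta_k
            grad -= betas[k - 1] * (Yk.T @ R_inv[k - 1] @ dk)
            hess += betas[k - 1] * (Yk.T @ R_inv[k - 1] @ Yk)
        dw = np.linalg.solve(hess, grad)           # Gauss-Newton increment
        w -= dw
        if np.linalg.norm(dw) < e:
            break
    # Posterior ensemble at t0, using the approximate Hessian at the minimum and U = I
    vals, vecs = np.linalg.eigh(hess)
    hess_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    x0_star = x0 + A0 @ w
    return x0_star[:, None] + np.sqrt(N - 1) * (A0 @ hess_inv_sqrt)

The Hessian retained for the posterior ensemble is the one computed at the last completed iteration, a common simplification in such sketches.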
This ends the part of the analysis step that is required to cycle the data assimilation scheme. An optional analysis step is needed when a state estimation is desired at times t_1, . . . , t_L, i.e. up to present time, or when a forecast to future times is desired. This additional step depends on the choice of the β_k and on whether the DAWs are overlapping. In the simplest case, when observations are assimilated once and only once, this subsequent analysis takes the form of a forecast of the mean state, x_k^⋆ = M_{k←0}(x_0^⋆), or a forecast of the full ensemble, E_k^⋆ = M_{k←0}(E_0^⋆), if one is additionally interested in estimating the forecast uncertainty. During the forecast step of the scheme cycle, not to be confused with the forecast of the analysis step we just mentioned, the ensemble is propagated for S Δt, with S an integer:

E_S^⋆ = M_{S←0}(E_0^⋆).

Fig. 1. Chaining of the SDA IEnKS cycles. The schematic illustrates the case L = 5 with a shift of S = 2 time intervals Δt applied between two updates. The method performs a smoothing update throughout the window but only assimilates the newest observation vectors (those that have not already been assimilated), marked by black dots. Note that the time indices of the dates and the observations are absolute, not relative, in this schematic.
If the optional analysis step implied forecasting the ensemble to or beyond t S , then there is no need to forecast it again. This ensemble at t S will form the background for the next analysis.
A typical chaining of the analysis and forecast steps is schematically displayed in Fig. 1.
A pseudo-code of the IEnKS is displayed in Algorithm 1. It does not show the optional analysis step, since the cycling of data assimilation does not depend on it. It is the same as the one presented in Bocquet and Sakov (2013), except that here it is given in the general case, 1 ≤ S ≤ L, rather than the specific case S = 1. The pseudo-code accounts for the possible use of inflation (lines 20, 21).
In summary, the IEnKS solves the variational problem of 4D-Var in the ensemble range. Because the variational problem is solved in a reduced space, there is no need for the adjoint evolution and observation models. The IEnKS generates and propagates the posterior perturbations following the scheme of the ensemble Kalman filter. As such, it uses sampled errors of the day.
Single and multiple assimilation of observations
There are some degrees of freedom in the choice of L, S and the {β k } 1≤k≤L . Let us just mention a few legitimate choices.
Firstly, for any L and S, such that 1 ≤ S ≤ L, the most natural choice for the {β k } 1≤k≤L is β k = 1 for k = L − S + 1, . . . L, and β k = 0 otherwise. That way, the observations are assimilated once and only once. We call this the single data assimilation scheme (SDA IEnKS). It is simple, and the optional analysis of the update step is merely a forecast of the analyzed state at t 0 , or possibly a forecast of the full ensemble from t 0 . When S = L, the DAWs do not overlap, but they do so when S < L. The chaining of the data assimilation cycles in the SDA case is displayed in Fig. 1.
Algorithm 1 (pseudo-code of the IEnKS). Required inputs: t_L is present time; the transition models M_{k+1←k} and the observation operators H_k at t_k; the algorithm parameters ε, e, j_max; E_0, the ensemble at t_0, and y_k, the observation at t_k; λ, the inflation factor; U, an orthogonal matrix in R^{N×N} satisfying U1 = 1; and β_k, 1 ≤ k ≤ L, the observation weights within the DAW.

For very long data assimilation windows, the use of multiple assimilation (or splitting) of observations, denoted MDA in the following, can prove numerically efficient (Bocquet and Sakov, 2013). An observation vector y is said to be assimilated with weight β (0 ≤ β ≤ 1) if the following Gaussian observation likelihood is used in the analysis:

p(y^β | x) ∝ |2πR|^{-β/2} exp( −(β/2) (y − H(x))^T R^{-1} (y − H(x)) ),

where |R| is the determinant of R. The upper index of y^β refers to the partial assimilation of y with weight β. The prior errors attached to the several occurrences of one observation are chosen to be independent. In that light, the {β_k}_{1≤k≤L} are merely the weights of the observation vectors {y_k}_{1≤k≤L} within the DAW. Statistical consistency necessitates that a unique observation vector is assimilated in such a way that the sum of all its weights in the data assimilation experiment is 1. For instance, if 1 = S ≤ L, consistency requires that Σ_{k=1}^{L} β_k = 1. In the more general case in which the observation vectors all have the same number of non-zero weights, L is a multiple of S: L = QS, where Q is an integer. As a result, consistency requires Σ_{q=0}^{Q−1} β_{Sq+l} = 1 for l = 1, . . . , S. In the MDA case (except the SDA subcase), the optional analysis step is more complex since it requires re-weighting the observations within the DAW to obtain the correct analyses for the states at t_1 to t_L and beyond. More details that are not directly relevant to this study can be found in Bocquet and Sakov (2013). Note that when the constraint Σ_{k=1}^{L} β_k = 1 is not satisfied, the underlying smoothing probability density function (pdf) will not be the one targeted but, with well-chosen {β_k}_{1≤k≤L}, could be a power of it (Bocquet and Sakov, 2013).

Fig. 2. Chaining of the MDA IEnKS cycles. The schematic illustrates the case L = 5 and S = 2. The method performs a smoothing update throughout the window, potentially using all observations within the window (marked by black dots), except for the first observation vector, which is assumed to be already entirely assimilated. Note that the time indices of the dates and the observations are absolute for this schematic, not relative.
These MDA approaches are mathematically consistent in the sense that they are demonstrated to be correct in the linear model, Gaussian statistics case. A heuristic argument based on Bayesian ideas justifies the use of the method in the nonlinear case (Bocquet and Sakov, 2013).
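The consistency requirement on the weights can be illustrated with a short sketch that builds uniform MDA weights for a DAW of length L = QS and checks that each observation vector receives a total weight of one over the cycles in which it is assimilated; the uniform choice is only one admissible option.

import numpy as np

def uniform_mda_weights(L, S):
    """Uniform MDA weights for a DAW of length L = Q*S (Q an integer).

    Each observation enters Q successive windows with weight 1/Q, so that
    sum_q beta_{S*q + l} = 1 for l = 1..S (statistical consistency).
    """
    assert L % S == 0, "L must be a multiple of S"
    Q = L // S
    betas = np.full(L, 1.0 / Q)
    for l in range(S):                      # check the total weight of each observation slot
        assert np.isclose(betas[l::S].sum(), 1.0)
    return betas

print(uniform_mda_weights(L=4, S=2))        # [0.5 0.5 0.5 0.5]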
The chaining of the data assimilation cycles in the MDA case is displayed in Fig. 2.
In the experimental Sects. 3 and 4, both SDA and MDA schemes will be used.
Augmented state formalism
We wish to estimate a set of model parameters θ ∈ R^P along with the state variables. To do so, the state vector is augmented with the parameters, so that the control vector z = (x^T, θ^T)^T ∈ R^{M+P} is an element of the joint state and parameter space. From the mathematical point of view, the analysis step of the IEnKS is unchanged.
As is usual in a parameter estimation context, a forward model needs to be introduced for the parameters. This model could be, for instance, the persistence model (θ_{k+1} = θ_k), or some jittering such as a Brownian motion (θ_{k+1} = θ_k + ε_k). Depending on the constraints on the parameters, this jittering could also be constrained.
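A minimal sketch of the two parameter forward models just mentioned, applied to the parameter block of an augmented ensemble; the function name and the jitter amplitude are illustrative placeholders.

import numpy as np

def advance_parameters(theta, model="persistence", sigma=0.01, rng=None):
    """Forward model for the parameter part of the augmented state.

    theta : (P, N) array of parameter values for the N ensemble members
    """
    if model == "persistence":
        return theta.copy()                                        # theta_{k+1} = theta_k
    if model == "brownian":
        rng = rng if rng is not None else np.random.default_rng()
        return theta + sigma * rng.standard_normal(theta.shape)    # theta_{k+1} = theta_k + eps_k
    raise ValueError(f"unknown parameter forward model: {model}")

Constraints such as positivity could be enforced after this step, for instance by clipping or by a change of variables.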
Technically, there is nothing more in the joint state and parameter IEnKS than in the state IEnKS. As opposed to the EnKF and EnKS, the objective of the joint state and parameter IEnKS is not to build covariances to help estimate hidden parameters, but instead to minimize a cost function that depends on the full augmented state. In a strongly nonlinear context, this approach could prove superior to the standard EnKF and EnKS.
As mentioned in the introduction, the estimation of model parameters within 4D-Var requires the adjoint model. Besides, the computation of the derivative of the cost function with respect to the parameters in terms of the adjoint field can be tedious. Parameter estimation with the IEnKS avoids this time-consuming task.
A potential advantage of the IEnKS over 4D-Var is that the errors of the day are by construction estimated within the IEnKS for all types of variables or parameters, whereas the 4D-Var modeling of background statistics of heterogeneous variables and parameters can be complex (see, for instance, Elbern et al. (2007), relating the modeling of inter-species correlation in a 4D-Var applied to air quality, or Montmerle and Berre (2010) in a meteorological convective scale context).
Similarly to state estimation, joint state and parameter estimation with the IEnKS in theory combines appealing features of both variational and ensemble Kalman filtering techniques. The purpose of the following numerical exploration is to investigate whether this holds true in experiments with low-order models.
Numerical experiments with the Lorenz-95 model
The Lorenz-95 one-dimensional model (Lorenz and Emanuel, 1998) represents a mid-latitude zonal circle of the global atmosphere. It has M = 40 variables {x_m}_{m=1,...,M}. Its dynamics is given by the following set of ordinary differential equations:

dx_m/dt = (x_{m+1} − x_{m−2}) x_{m−1} − x_m + F,

for m = 1, . . . , M, and the domain is periodic (circle-like). F is chosen to be 8 so that the dynamics is chaotic and has 13 positive Lyapunov exponents. A time step of Δt = 0.05 is meant to represent a time interval of 6 h in the real atmosphere. Unless otherwise stated, the time interval between observational updates will be Δt = 0.05, meant to be representative of a data assimilation cycle of global meteorological models. With such a value for Δt, the data assimilation system is considered weakly nonlinear, leading to statistics of errors that diverge only weakly from Gaussianity. This model is integrated using the fourth-order Runge-Kutta scheme with a time step of 0.05.
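For completeness, a minimal Python implementation of this tendency and of its fourth-order Runge-Kutta integration with the time step quoted above; the short spin-up loop at the end is only illustrative.

import numpy as np

def lorenz95_tendency(x, F=8.0):
    """dx_m/dt = (x_{m+1} - x_{m-2}) x_{m-1} - x_m + F on a periodic domain."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05, F=8.0):
    """One fourth-order Runge-Kutta step of length dt."""
    k1 = lorenz95_tendency(x, F)
    k2 = lorenz95_tendency(x + 0.5 * dt * k1, F)
    k3 = lorenz95_tendency(x + 0.5 * dt * k2, F)
    k4 = lorenz95_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Illustrative spin-up of a 40-variable state from a perturbed fixed point
x = 8.0 * np.ones(40)
x[0] += 0.01
for _ in range(1000):
    x = rk4_step(x)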
Setup
Twin experiments are conducted. The truth is represented by a free model run (nature run), meant to be tracked by the data assimilation system. The system is assumed to be fully observed (d = 40) every Δt, so that H_k = I_d, with the observation error covariance matrix R_k = I_d. The related synthetic observations are generated from the truth and perturbed according to the same observation error prior. The performance of a scheme is measured by the temporal mean of a root mean square difference between a state estimate (x^a) and the truth (x^t). Typically, one averages the following analysis root mean square error (RMSE) over the data assimilation cycles:

RMSE = sqrt( (1/M) Σ_{m=1}^{M} (x^a_m − x^t_m)² ).

When this RMSE concerns the system state at present time, i.e., the state at the end of the DAW, we call it the filtering RMSE. When this RMSE concerns the state defined L Δt in the past, i.e., at the beginning of the DAW, we call it the smoothing RMSE. All data assimilation runs will extend over 10^5 cycles after a burn-in period of 5 × 10^3 cycles. This guarantees a sufficient convergence of the error statistics. Unless otherwise stated, the size of the ensemble used with the ensemble methods will be N = 20, which is greater than the size of the unstable subspace and, in the case of this model, makes localization unnecessary.
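A short sketch of how the analysis RMSE and its temporal average over cycles can be computed; the array shapes and the burn-in value mirror the setup described above but are otherwise illustrative.

import numpy as np

def analysis_rmse(x_analysis, x_truth):
    """Root mean square error between one analysis state and the truth."""
    return float(np.sqrt(np.mean((x_analysis - x_truth) ** 2)))

def time_averaged_rmse(analyses, truths, burn_in=5000):
    """Average the per-cycle RMSE over a long run, discarding the burn-in cycles.

    analyses, truths : (n_cycles, M) arrays of analysis and true states
    """
    scores = [analysis_rmse(a, t) for a, t in zip(analyses[burn_in:], truths[burn_in:])]
    return float(np.mean(scores))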
In this context, we have chosen to implement the inflation using the finite-size counterparts of the filters/smoothers (Bocquet et al., 2011). For this model, except in quasi-linear conditions (Δt ∼ 0.01), this inflation leads to performances that are quantitatively very close to those of the same filter/smoother with optimally tuned uniform inflation (Bocquet et al., 2011; Bocquet and Sakov, 2012). In the following, methods like the EnKF/IEnKS/EnKS should be understood as the EnKF/IEnKS/EnKS with optimally tuned uniform inflation, and they will actually be implemented with a single run of the finite-size variants, i.e. the EnKF-N/IEnKS-N/EnKS-N, which is much more economical. Any reader not interested in implementing the finite-size IEnKS (whose pseudo-code is presented in Algorithm 2), or IEnKS-N, can alternatively optimally tune the uniform inflation of an EnKF/IEnKS/EnKS to attain very similar results.
New experiments with the IEnKS
This section is meant to recall, and to extend to 4D-Var and to the case S = L, several numerical tests of Bocquet and Sakov (2013), before considering joint state and parameter estimation.

Algorithm 2 (pseudo-code of the finite-size IEnKS): same requirements as Algorithm 1, with ε_N = 1.

The following five systems are compared:

- The EnKF and the standard EnKS.

- The SDA IEnKS, S = 1.
- The MDA IEnKS, S = 1. The {β_k}_{1≤k≤L} are chosen to be uniform in the DAW and constant in time.
- The SDA IEnKS, with S equal to the length of the DAW (S = L), so that the DAWs do not overlap. This approach is meant to be computationally cheap: it is much more economical than the quasi-static case S = 1, since there is no overlapping of DAWs.
- 4D-Var with a shift S = 1, corresponding to overlapping DAWs and quasi-static conditions. The gradient is obtained by finite differences, which is affordable and precise enough in this low-dimensional context. The performance of 4D-Var strongly depends on the background statistics. Since the correlations in the Lorenz-95 system are rather short-ranged, the B-matrix is chosen diagonal. The performance of 4D-Var does not vary much if we introduce some correlation and off-diagonal terms. However, the scaling of the B-matrix is crucial in this context (Kalnay et al., 2007). The longer the DAW is, the smaller the scaling factor should be, since the first guess becomes more accurate. For each experiment, we tuned this scaling so as to obtain the best filtering analysis RMSE.
To avoid tuning the inflation, the finite-size variants of the filters and smoothers are employed (SDA IEnKS-N, MDA IEnKS-N, EnKS-N). All the EnKF and EnKS schemes, and their finite-size variants, in this article are based on the ensemble transform square root Kalman filter (Bishop et al., 2001; Hunt et al., 2007; Bocquet et al., 2011). These five data assimilation systems are compared in weakly nonlinear conditions (Δt = 0.05), chosen to roughly represent synoptic scale meteorology dynamics (Lorenz and Emanuel, 1998), and in more nonlinear conditions (Δt = 0.20 between updates). The time-averaged analysis RMSE is plotted in Fig. 3 for the former case, and in Fig. 4 for the latter case, as a function of the length of the DAW.
Let us first notice that the filtering performance of the EnKS is, by construction, given by that of the EnKF, whatever the length of the DAWs. This explains why the filtering RMSE of EnKS is constant, modulo statistical noise. When comparing the filtering performances of the EnKF/EnKS and 4D-Var, the conclusions of Kalnay et al. (2007) are reinforced. 4D-Var does not perform as well for short DAWs and performs better for long DAWs. In addition, we note that the same conclusion applies to the smoothing performance, even though the crossover point might be different.
Considering filtering as well as smoothing, the MDA IEnKS S = 1 significantly outperforms 4D-Var and the EnKF/EnKS in all regimes. The SDA IEnKS S = 1 also performs very well, but its performance wanes with longer DAWs, which is why the MDA IEnKS was introduced by Bocquet and Sakov (2013). For very short DAWs (L = 1, 2 in the case Δt = 0.05), the performances of the SDA IEnKS S = 1 and the MDA IEnKS S = 1 are equal (L = 1) or very close (L = 2). For intermediate DAW lengths, the SDA IEnKS S = 1 can slightly outperform the MDA IEnKS S = 1. This is not surprising, since the SDA IEnKS algorithm is meant to be optimal for sufficiently short DAWs, whereas the MDA IEnKS algorithm is only guaranteed to be optimal in linear/Gaussian conditions. Practically, in weakly nonlinear conditions (Δt = 0.05), the IEnKS S = 1 only requires one to two propagations of the ensemble within the DAW. Consistently, it was shown in Bocquet and Sakov (2013) that a linearized variant of the algorithm, requiring one propagation of the ensemble within the DAW to compute the sensitivity, performed just as well in these conditions. It is nevertheless tempting to check whether this cost can be reduced by using non-overlapping windows (S = L) and performing the analysis every L Δt. This would divide the cost of model runs by L, but this gain might nevertheless be offset by a higher number of iterations required for the analysis.
Quite surprisingly, the SDA IEnKS S = L performs very well for DAWs of length smaller than 0.80 (about twice the doubling time of the Lorenz-95 model). It is useless beyond that length, which was to be expected since the background at the beginning of the DAW then results from a long forecast within the DAW, as opposed to a forecast of only Δt in the quasi-static S = 1 case.
In more strongly nonlinear conditions, the variational methods (4D-Var and the IEnKS) easily outperform the EnKF/EnKS. In particular, 4D-Var outperforms the EnKF/EnKS as soon as the DAW reaches L = 2.
Joint state and forcing F estimation
A twin experiment is conducted in a situation where F is unknown. The true model (nature run) has forcing F = 8. The model used for assimilation and forecast has the initial value F = 7.
In addition to the state variables, the forcing parameter F will be estimated as well. Hence, the state vector x ∈ R M with M = 40 will be extended to the joint vector of size M + P = 41, with its 41st entry being the forcing parameter. The persistence model will be assumed for the evolution of the model parameter.
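A sketch of how such an augmented ensemble can be built, with the 41st entry of each member set around the erroneous first guess F = 7; the state-part initialization, the spread value, and the function name are illustrative assumptions.

import numpy as np

def init_augmented_ensemble(N=20, M=40, F_guess=7.0, F_spread=0.1, seed=0):
    """Build an (M + 1, N) augmented ensemble: M state variables plus the forcing F."""
    rng = np.random.default_rng(seed)
    E_state = 8.0 + rng.standard_normal((M, N))                   # placeholder state initialization
    E_param = F_guess + F_spread * rng.standard_normal((1, N))    # F = 7 + eps per member
    return np.vstack([E_state, E_param])

E0 = init_augmented_ensemble()
print(E0.shape)   # (41, 20)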
Because the filters and smoothers used here are all deterministic, the only source of stochasticity to generate the variability in F comes from the initialization of the ensemble. The forcing parameter of a member is initialized to 7 + ε, where ε is independently drawn from a normal distribution of standard deviation 0.1. The augmented state IEnKS will be compared to several augmented state alternatives. Specifically, we shall consider in this experiment:

- The ensemble Kalman filter (EnKF).

- The ensemble Kalman smoother (EnKS), with a DAW length that corresponds to the optimal performance for the smoothing estimation of F by the EnKS.

- The SDA IEnKS, S = 1.

- 4D-Var.

The forcing parameter is plotted in Fig. 5, over a 5 × 10^3-cycle-long segment of the experiment. In the case of the EnKS, the smoothing estimator of F (at the beginning of the DAW) is plotted because it is better than the filtering estimate of F (at the end of the DAW). Because the persistence model is assumed for F, the smoothing and the filtering estimates of F are the same for the IEnKS and 4D-Var. In addition, because the true F is static, the smoothing and filtering RMSEs should coincide. From Fig. 5, it is clear that the IEnKS significantly outperforms the EnKF and the EnKS.
The time-averaged analysis root mean square errors (RMSEs) are computed over a much longer run of 10^5 cycles. The scores for the state variables are reported in Fig. 6. The filtering RMSEs (i.e., the RMSEs at present time) of the EnKF or of the EnKS for any L are, by construction, the same. The estimation of the forcing F is good enough that the performance is indistinguishable from the EnKF performance when F = 8 is known. Nevertheless, even in this weakly nonlinear regime, the IEnKS with L ≥ 1 outperforms the EnKF and EnKS. Confirming the results of Bocquet and Sakov (2013), the gap in the smoothing performance between the EnKS and the IEnKS significantly increases as L increases. In this weakly nonlinear regime, the number of iterations required by the IEnKS is close to one, and its performance equals that of the linearized IEnKS (Bocquet and Sakov, 2013).
The scores for the estimation of the forcing parameter are reported in Fig. 7. By construction, the filtering performance of the EnKF and of the EnKS at any L is the same, approximately 0.018. The parameter smoothing RMSE for the EnKS is approximately 0.015 and is optimal for L ∼ 100. By construction, the analysis at present time and the retrospective analysis of F by the IEnKS are the same. Even in the case L = 1, the so-called iterative ensemble Kalman filter (IEnKF) outperforms the EnKS, with an RMSE of 0.013. With increasing L, this performance keeps improving, reaching an RMSE of 7.5 × 10^{-4} for L = 50.
The estimate from 4D-Var only becomes better than that of the EnKF for DAWs of length L = 50. This underperformance can only be explained by a poor specification of the background error covariance matrix. Indeed, the scaling of the background error statistics for the state variables should be different from the scaling of the background error statistics for the parameter. However, the separate tuning of these scalings requires additional work that the IEnKS does not require. This hypothesis will be checked in Sect. 5.
Numerical experiments with a coupled Lorenz-95-tracer model
In this section we introduce a simple extension of the Lorenz-95 model with a tracer field advected by the Lorenz-95 field to represent an advective wind. This is meant to test the ability of the IEnKS to carry out joint state and parameter estimation in the dynamical context of an online atmospheric chemical model, with heterogeneous variables.
Extending the Lorenz-95 model
We shall think of the variables x_m of the Lorenz-95 as wind speed and direction variables defined on the circle. A tracer field c_{m+1/2}, m = 1, . . . , M = 40, is added to the model variables, for a total of 80 variables. These variables are defined on the circle using a C-grid, with the wind variables x_m located at the cell interfaces and the tracer variables c_{m+1/2} at the cell centers. The tracer is advected by the wind field of the Lorenz-95 model. We have chosen to use the simple Godunov upwind scheme, which is positive and conservative. It is quite diffusive, but this diffusion could be seen as a feature of the modeled physics. The equations read

dc_{m+1/2}/dt = −(φ_{m+1} − φ_m)/Δx + E_{m+1/2} − λ c_{m+1/2},  with  φ_m = x_m c_{m−1/2} if x_m ≥ 0 and φ_m = x_m c_{m+1/2} otherwise,

where φ_m is the upwind tracer flux through interface m and Δx = 1 is the grid spacing. The tracer is emitted over the whole domain, and the emission fluxes are denoted E_{m+1/2}. It is deposited over the whole domain, using a simple scavenging scheme parameterized by a scavenging ratio λ. A stationary point of the dynamics is x_m = F and c_{m+1/2} = E_{m+1/2}/λ. This provides orders of magnitude for the wind and concentration variables.
For simplicity, the emission flux will be made constant and uniform: E_{m+1/2} ≡ E. Obviously, however, a more complex setting with urban/rural/sea emission types and a diurnal/nocturnal cycle could be chosen. The values of our reference simulation's parameters are λ = 0.1 and E = 1, so that the typical concentration value is 10.
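The following sketch illustrates a tracer tendency of the kind described above (Godunov upwind advection by the Lorenz-95 wind on a periodic C-grid, uniform emission E and linear scavenging λc); the indexing convention and the helper name are our own, and the exact discretization used here may differ in detail.

import numpy as np

def tracer_tendency(c, x, E=1.0, lam=0.1, dx=1.0):
    """dc_{m+1/2}/dt for a tracer advected by the Lorenz-95 'wind' x (Godunov upwind).

    c : (M,) tracer concentrations at the cell centers m+1/2
    x : (M,) wind values at the cell interfaces m
    """
    c_upwind_left = np.roll(c, 1)                        # c_{m-1/2}, upstream cell when x_m >= 0
    flux = np.where(x >= 0.0, x * c_upwind_left, x * c)  # upwind flux through interface m
    advection = -(np.roll(flux, -1) - flux) / dx         # -(phi_{m+1} - phi_m) / dx
    return advection + E - lam * c

With uniform fields x_m = F and c_{m+1/2} = E/λ, the advection term vanishes and the emission balances the scavenging, consistent with the stationary point noted above.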
The Courant-Friedrichs-Lewy (CFL) condition is almost always satisfied: the Lorenz-95 model variables |x_m| very rarely exceed 15 and, by construction, one has Δt = 0.05 and Δx = 1, so that the Courant number satisfies CFL ≤ 0.75 < 1. Free run simulations help one to understand some dynamical characteristics of the model. The model exhibits features of a realistic tracer model. For instance, consider two distinct trajectories of the model in which the wind fields are the same. It turns out that the concentrations c_{m+1/2} of the two trajectories converge to each other. The positive part of the Lyapunov spectrum of the model is consistently very close to that of the Lorenz-95 model, with a number of positive Lyapunov exponents equal to 13, and a much broader negative part of the Lyapunov spectrum. However, we observed that the relaxation time of two such trajectories is quite long (typically τ = 10), so that it seems difficult to break the system down into fast and slow dynamics.
A free run (after spin-up) is displayed in Fig. 8. The peaks of the tracer are correlated with the waves of the Lorenz-95, though not in an obvious way (see Sect. 5).
The causality and the propagation of information in this model are particular, and presumably similar to those of much more complex online atmospheric chemistry models. This impacts the effectiveness of data assimilation. For instance, measuring a tracer plume at t_0 (actually a peak in this one-dimensional context) does not enable one to detect a swift change in the local wind at t_0. Only future observations of the tracer concentrations will enable a diagnosis of this change in the local wind. As a consequence, variational schemes such as 4D-Var and the IEnKS that work over larger DAWs appear to be ideal tools in this context.
Numerical tests
We have performed data assimilation tests of the IEnKS using this model in order to estimate the winds and concentrations, and the unknown parameters F and E. Initially, we had carried out the same test but estimating E and λ instead of F and E. Parameters E and λ are typical of the kind one would like to control in an atmospheric chemistry model to improve forecasts and re-analyses, when they are not themselves the focus of interest (Bocquet, 2012). The results were quite similar to those presented here. Yet, because deposition and emission are antagonistic processes, the inverse problem of estimating them is very ill posed, requiring a specific prior distribution for those two parameters. In the absence of such a strongly constraining prior, 4D-Var's performance would be hampered. That is why we chose to estimate F and E instead.
One of the potential difficulties in data assimilation with this model is the positivity of the concentrations c_{m+1/2} ≥ 0 and of the parameters F ≥ 0 and E ≥ 0. This problem can be dealt with straightforwardly with 4D-Var, since the positivity of the variables can be enforced by the minimizer, or by a change of variables that is easy to implement in this context. The problem is more severe with the EnKF, since the Best Linear Unbiased Estimator (BLUE) analysis and the ensemble generation that are at the heart of the method will generate negative concentrations or parameters. A simple but fairly effective trick is to perform clipping by setting all negative variables of the analysis to zero; this, however, is suboptimal and could also induce imbalances and harmful positive biases. A more elegant solution is to perform an analytical anamorphosis (Cohn, 1997; Bocquet et al., 2010; Simon and Bertino, 2012), so that the BLUE analysis and the ensemble generation are carried out in a space where the variables are defined on R and their statistics are closer to Gaussian. For instance, in our case one could perform the state augmentation using the extended state vector (x, ln c, ln F, ln E). Note that in practice this problem does not apply to F, which is well estimated and close to F = 8 because of a strong sensitivity of the model to F. The choice of ln(F) as the parameter to be estimated is only justified by the need for a homogeneous error metric for the two parameters.
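A minimal sketch of this change of variables for the positive parameters, assuming the Gaussian analysis is carried out on ln F and ln E and mapped back to physical values afterwards; the function names are illustrative.

import numpy as np

def to_analysis_space(F, E):
    """Map the positive parameters to the unbounded space used by the Gaussian analysis."""
    return np.log(F), np.log(E)

def from_analysis_space(logF, logE):
    """Map back to physical (positive) parameter values after the analysis."""
    return np.exp(logF), np.exp(logE)

# An analysis increment applied in log space always yields positive parameters
logF, logE = to_analysis_space(8.0, 1.0)
F_new, E_new = from_analysis_space(logF - 0.3, logE + 0.1)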
Our numerical tests (with the EnKF as well as the IEnKS) showed that the anamorphosis on the concentration variables is useless for improving precision, and can even lead to instability. This is at variance with the findings of Simon and Bertino (2012), who applied an anamorphosed analysis to a 1D ocean ecosystem model and who also found benefit in using anamorphosis on the state variables. Choosing a more complex gamma or lognormal distribution for the anamorphosis function would avoid favoring large concentration values, as does the instability-prone logarithm anamorphosis (L. Bertino, personal communication, 2013). Aside from the choice of the anamorphosis function, this difference can also be explained by the fact that static anamorphosis is more efficient on distributions that are not too dynamical.
In addition, we found that occurrences of negative concentrations in the analysis and the posterior ensemble are extremely rare in the present case. By contrast, we found that the anamorphosis on E is useful and avoids instabilities. Because parameters F and E are not observed, their anamorphosis is a mere change of variables (as in 4D-Var) that does not require much work. Therefore, in the following the extended state vector will be (x, c, ln F, ln E). A twin experiment, similar to that described in Sect. 3, is performed with Δt = 0.05. The winds and the concentrations are fully observed, with R_k = I_d and d = M = 40 in the wind observation space as well as in the tracer concentration space. The observations are generated from the truth and perturbed according to these error statistics. All runs are performed over 10^5 cycles after a burn-in period of 5 × 10^3 cycles. The following methods are compared:

- The EnKF and the standard EnKS.

- The SDA IEnKS, S = 1.
- The MDA IEnKS, S = 1. The {β_k}_{1≤k≤L} are chosen to be uniform in the DAW and constant in time.
- 4D-Var with S = 1, corresponding to overlapping windows and quasi-static conditions. The scaling of the background is tuned so as to minimize the global (over all 82 extended variables) RMSE.
To avoid tuning the inflation, the finite-size variants are employed: the SDA/MDA IEnKS-N and the EnKS-N. The time-averaged analysis RMSEs on the wind and concentration variables are plotted in Fig. 9 as a function of the DAW length. Both the mean filtering and smoothing RMSEs are reported. Again, the results are consistent with those of Kalnay et al. (2007). 4D-Var is not as precise as the EnKF/EnKS for short DAWs (L ≤ 20), but it outperforms the EnKF/EnKS for large DAWs, in both filtering and smoothing. Moreover, the IEnKS significantly outperforms the EnKS/EnKF in all regimes and for both filtering and smoothing. In terms of performance, the difference between the SDA IEnKS and the MDA IEnKS is very similar to that reported in Sect. 3. However, the RMSE differences are much weaker, which may be explained by the doubled number of observations. The RMSEs of the logarithm of the two parameters, i.e. the time-averaged root mean square differences between ln F and ln F^t and between ln E and ln E^t, where F^t = 8 and E^t = 1 are the true values, are plotted in Fig. 10 as a function of the DAW length. The filters and smoothers perform significantly better than 4D-Var. The EnKF/EnKS and 4D-Var remain quite far from the performance of the SDA and MDA IEnKS. This shows that smoothing over a large window and flow-dependent error statistics are both crucial for the parameter estimation.
Discussion
One noteworthy outcome is that, in all of the configurations tested (1 ≤ S ≤ 15), the IEnKS performs well, and better than the EnKF/EnKS and 4D-Var. In the case of weak nonlinearity, Δt = 0.05, only one iteration of the minimization is required on average for the computation of the sensitivities Y_{k,(j)}. Additionally, accounting for the propagation of the ensemble in the (ensemble) forecast step, an average of two propagations of the ensemble through the DAW is required. A further exploration of the computational performance of the IEnKS is beyond the scope of this article, but it seems quite promising for the success of the IEnKS with complex models.
In the numerical experiments, the parameters F and E were chosen to be static. Parameters of this type are frequent in geophysical models. Furthermore, static parameters make 4D-Var and the IEnKS with a long DAW ideal tools. When the parameters evolve in time, the variational methods may not perform as well as the EnKF, EnKS and IEnKS with smaller DAWs. In particular, the persistence model for the parameters becomes imperfect. We have repeated the same experiment as in Sect. 3.3, but with F varying in time according to a sinusoid and to a step-wise function, within the interval [7.5, 8.5], with a period of one year (1456 time units of the model). Not only is model error made intrinsic by incorporating the parameter F in the control variables, as has been done so far, but model error also becomes extrinsic because the assumed persistence model for F is wrong (permanently in the sinusoid case and intermittently in the step-wise case). Some results are displayed in Fig. 11. The evolution of the retrospective analysis of F is shown for the EnKF-N, the EnKS-N L = 50, the MDA IEnKS-N L = 50 S = 1, and 4D-Var L = 50 S = 1. The RMSEs are indicated in parentheses in the legends. Although the IEnKS-N L = 50 remains the best performer in both cases, the gap in performance is narrower, because of the incorrect persistence assumption within the DAW. Let us remark that, in these cases, the RMSE of the retrospective analysis of the IEnKS is different from the RMSE of the filtering analysis because the truth that serves as a point of comparison changes within the DAW. Note also that, because of the imperfection of the persistence model, a multiplicative inflation of 1.01 of the ensemble anomalies has been applied to the finite-size methods, since they are not meant to intrinsically account for extrinsic model error (Bocquet et al., 2011), whereas the EnKF requires an inflation of 1.05 here to account for both model and sampling errors.
The last and main point of the discussion is dedicated to the improvement of the 4D-Var background and its comparison to the IEnKS. The background error statistics that determine the prior of variational methods, such as 4D-Var and the IEnKS, have less impact with longer data assimilation windows. These methods estimate the background state by a forecast of the analysis or of the posterior ensemble. Nevertheless, in the context of our low-order models, the IEnKS outperforms 4D-Var, especially in the joint state and parameter case. Therefore, the difference should lie in the specification of the background error covariance matrix. It could be that the time-dependence of the background error statistics remains essential in the limit of long DAWs. Or it could be that the climatological statistics of the background in our implementation of 4D-Var are poorly specified.
To explore those hypotheses, we derived climatological statistics of the errors on the initial state of the DAW inferred from the SDA IEnKS-N. We first considered the Lorenz-95 model without parameter estimation. Since the system is statistically homogeneous, the error covariance matrix is circulant, so that it can be represented by a one-dimensional structure function that depends on the distance between sites on the circle. The correlation structure function is plotted in Fig. 12 instead of the full correlation matrix. When L is varied, the differences are small, except in the case L = 1, which shows slightly modified next-to-nearest correlations. The related covariance matrix, defined up to an optimally tuned scaling parameter, is used in 4D-Var as a new prior in place of the identity matrix. The new 4D-Var scores barely change from those obtained using a covariance matrix proportional to the identity. This is consistent with the findings of Kalnay et al. (2007), who also tried, in a similar experimental context, to improve the performance of 4D-Var with a finer error covariance structure.
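Assuming statistical homogeneity, the correlation structure function can be estimated from a sample of background errors as sketched below; the sample itself is a placeholder, and the averaging over pairs at equal (periodic) separation mirrors the circulant structure exploited above.

import numpy as np

def correlation_structure_function(errors):
    """Homogeneous (circulant) correlation structure function from error samples.

    errors : (n_samples, M) array of background errors on the M grid points.
    Returns the correlation as a function of the (periodic) separation between sites.
    """
    n, M = errors.shape
    centered = errors - errors.mean(axis=0)
    cov = centered.T @ centered / (n - 1)
    struct = np.array([np.mean([cov[i, (i + d) % M] for i in range(M)]) for d in range(M)])
    return struct / struct[0]          # normalize by the (average) variance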
In a second experiment, we derived the climatological statistics of the errors on the initial extended state vector, in the Lorenz-95 case when F is unknown and estimated. The error covariance matrix turns out to be almost identical to that of the Lorenz-95 case with a fixed F = 8. The only difference is in the covariances that involve F. As expected, the correlation between the error on F and the error on any state variable is uniform. The covariance matrix proportional to the identity that was used in Sect. 3.3 is clearly not a good model for this case. Furthermore, the errors on F and on the state variables are not homogeneous, so, again, choosing the error covariance matrix proportional to the identity matrix is not ideal. The climatological statistics of the errors have been inferred from the SDA IEnKS-N, when the state vector is augmented to incorporate F. This was done for each L because the ratio of the error variance of a typical state variable to the error variance of F is a non-uniform but increasing function of L. Using this procedure, we did not obtain a real improvement in the state variable RMSE. However, the precision of the parameter estimation was remarkably improved, to the level of the IEnKS-N. This shows that a fine specification of the background statistics is very helpful in the estimation of the static parameter F. Nevertheless, these statistics are static, and they do not significantly aid the estimation of the rapidly changing state variables.
In the last experiment, we applied the same procedure to the online tracer model based on the Lorenz-95 model. The error covariance matrix derived from the SDA IEnKS, for several L, entails significant covariances between the wind field and the concentration field. Again, because of the statistical homogeneity of the subsystems, one can represent the correlations of the winds and of the concentrations, and the cross-correlations between the winds and the concentrations, by using structure functions. These structure functions are displayed in Fig. 13, obtained from the IEnKS-N, L = 20.
We believe that the structure function of the cross-correlations is non-symmetric because of the preferred orientation of the winds. The waves in the Lorenz-95 preferentially travel westward and create fronts of tracer on one preferred side of the wave, yielding a non-trivial cross-correlation structure function. For 4D-Var, results similar to those of the previous experiment were obtained. Using the climatological priors, the errors on the state variables barely decrease. The fine correlations that build up between the errors of the wind and concentration variables are dynamical and seem to be of little use when averaged in the climatological background error covariance matrix. However, the parameters are much better estimated. Nevertheless, unlike in the previous experiment, they do not quite match the precision of the IEnKS. For instance, in the L = 1 case, the 4D-Var RMSE of the logarithm of the parameters is reduced from 1.5 × 10^{-1} to 1.3 × 10^{-2}, but is still far from the 1.0 × 10^{-3} of the IEnKS.
Conclusion
In this article, the iterative ensemble Kalman smoother (IEnKS) has been explored numerically. Using the Lorenz-95 low-order model, it has been compared to the ensemble Kalman filter (EnKF) and the standard ensemble Kalman smoother (EnKS). It has also been compared to a 4D-Var, for a wide range of data assimilation window (DAW) lengths. The IEnKS systematically outperformed the EnKF, the EnKS and 4D-Var. This conclusion holds true even when the background error covariance matrix of 4D-Var is better specified and tuned. The IEnKS has been extended to joint state and parameter estimation, using the augmented state formalism. Endowed with the assets of both 4D-Var and ensemble Kalman methods, the IEnKS appeared to us to be an ideal candidate method to tackle such problems.
It was applied to the joint estimation of the Lorenz-95 state vector and its forcing parameter F. The IEnKS outperformed the EnKF, EnKS and 4D-Var for a wide range of DAW lengths. In addition, the estimation of F was shown to be markedly more precise than with the standard methods.
Motivated by future applications of the IEnKS to atmospheric chemistry models where the estimation of the forcings is crucial, we introduced an extension of the Lorenz-95 model, adding a tracer field advected by the Lorenz-95 field. Key parameters of the tracer emission and deposition were also meant to be estimated. Again, the IEnKS managed to finely estimate the parameters without any tuning.
A better specification of the error covariance matrix of 4D-Var (obtained from the IEnKS) led to a spectacular improvement in the estimation of the static parameters. Yet, it did not help improve the joint estimation of the state variables. This stresses the importance of the time-dependence of the error covariance matrix for rapidly varying variables. By contrast, the IEnKS completely avoids the need to build any background error covariance matrix.
One way to account for model error is to parameterize this error and estimate the related parameters, as was suggested in this study. Another way is to implement a weak constraint formulation of the underlying variational problem. However, this formulation remains to be defined for the IEnKS, whereas it is already implemented in the standard EnKS.
Following this study, we are planning to test the IEnKS on a more complex low-order model with several reactive species and test the estimation of the concentration variables as well as some parameters such as kinetic constants. If this is successful, our eventual plan is to implement the method on a high-dimensional air quality model. However, the definition of a satisfying implementation of localization in the IEnKS context will first be needed (work in progress).
Evaluation of Apparent Diffusion Coefficient Repeatability and Reproducibility for Preclinical MRIs Using Standardized Procedures and a Diffusion-Weighted Imaging Phantom
Relevant to co-clinical trials, the goal of this work was to assess repeatability, reproducibility, and bias of the apparent diffusion coefficient (ADC) for preclinical MRIs using standardized procedures for comparison to performance of clinical MRIs. A temperature-controlled phantom provided an absolute reference standard to measure spatial uniformity of these performance metrics. Seven institutions participated in the study, wherein diffusion-weighted imaging (DWI) data were acquired over multiple days on 10 preclinical scanners, from 3 vendors, at 6 field strengths. Centralized versus site-based analysis was compared to illustrate incremental variance due to processing workflow. At magnet isocenter, short-term (intra-exam) and long-term (multiday) repeatability were excellent at within-system coefficient of variance, wCV [±CI] = 0.73% [0.54%, 1.12%] and 1.26% [0.94%, 1.89%], respectively. The cross-system reproducibility coefficient, RDC [±CI] = 0.188 [0.129, 0.343] µm²/ms, corresponded to 17% [12%, 31%] relative to the reference standard. Absolute bias at isocenter was low (within 4%) for 8 of 10 systems, whereas two high-bias (>10%) scanners were primary contributors to the relatively high RDC. Significant additional variance (>2%) due to site-specific analysis was observed for 2 of 10 systems. Base-level technical bias, repeatability, reproducibility, and spatial uniformity patterns were consistent with human MRIs (scaled for bore size). Well-calibrated preclinical MRI systems are capable of highly repeatable and reproducible ADC measurements.
Introduction
Water mobility, quantified via the apparent diffusion coefficient (ADC), is being utilized in preclinical and clinical studies as a quantitative MRI biomarker that is sensitive to tissue alteration due to disease evolution and response to treatment [1][2][3][4]. ADC measurement has the desirable features of being largely independent of magnet field strength, being derived from a simple mathematical model of monoexponential MRI signal decay as a function of diffusion weighting (b-value), and being widely available as a standard technique on preclinical and clinical MRI systems. Despite these advantages, disparity in diffusion measurement across sites and scanner platforms has hampered the adoption of ADC as a reliable objective readout of disease/tissue status in (pre-)clinical trials and routine medical practice [5][6][7]. Aside from biological variability attributable to the subject/patient being scanned, technical sources undermining ADC reproducibility include variable acquisition protocols, scanner manufacturer and platform capabilities, gradient calibration, and the software that converts diffusion-weighted images (DWI) to ADC. Ideally, base-level technical sources of variability are identified, characterized, and mitigated independently of incremental patient-related variability [7][8][9]. Once an overall level of variability is estimated, realistic confidence thresholds can be established for use of the quantitative biomarker in disease detection, progression, or response to treatment. The degree of variability relative to the anticipated effect size has a major impact on study design, feasibility, and financial cost, as well as on scientific expense due to underpowered studies [8][9][10]. Given this, there is a strong incentive to identify and minimize all technical sources of variability and bias in both clinical and preclinical settings.
Physical phantoms with known properties are essential for technical performance assessments in quality control (QC) programs [11][12][13][14]. Several diffusion phantom materials have been developed over the years, although aqueous solutions of polyvinylpyrrolidone (PVP) are popular and comprise the diffusion coefficient standards within homemade and commercially available phantoms [15][16][17][18]. PVP is stable and exhibits monoexponential diffusion that is tunable over the full tissue ADC range, although the internal phantom temperature must be known and controlled to ≈0.5 °C to measure diffusion coefficients to within 1% accuracy [15,19]. Ice-water-based diffusion phantoms provide an effective, inexpensive means for absolute temperature control and a precisely known true diffusion value for MRI system bias assessment [20][21][22]. Ice-water DWI phantoms have been employed in multicenter clinical studies [22][23][24] and demonstrate generally good repeatability/reproducibility, reasonable platform and field strength independence, and low absolute bias (≈3%) at magnet isocenter on human scanners [22]. Gradient nonlinearity was identified as the main source of inter-scanner variability and of spatial bias patterns as a function of location from isocenter [22,25,26]. Overall good repeatability/reproducibility was also noted previously on preclinical systems, though significant positive bias relative to ground truth was reported [27]. Despite the phantom materials being characterized by specific diffusion coefficients, as opposed to apparent diffusion, the nomenclature "ADC" will be used in this article for consistency with most prior publications.
A central goal of the NCI Co-Clinical Imaging Research Resource Program (CIRP) [28] is to develop quantitative imaging biomarkers applicable to both the human and corollary preclinical domains, to advance state-of-the-art translational quantitative imaging methodologies from mouse to human. Given its independence of field strength, water diffusion in reference standards should be equivalent on human and mouse MRI systems. The goals of this work were to measure, on CIRP preclinical MRI scanners, (1) the ADC bias at isocenter; (2) short- and long-term repeatability and cross-system reproducibility; (3) ADC spatial uniformity; and (4) the degree of agreement between site-generated and central-lab-generated ADC values. To achieve these goals, the CIRP image acquisition data processing (IADP) working group (WG) performed a round-robin study of an ice-water-based DWI phantom using a detailed phantom preparation procedure and a standardized DWI acquisition protocol, with both site- and core-lab-generated ADC measurements being derived from common DWI datasets.
DWI Phantom
The phantom shown schematically in Figure 1a was constructed from a 50 mL plastic centrifuge tube with a 29 mm outer diameter (OD) lined with a 3 mm thick closed-cell insulation foam and a 100 mm long (8 mm OD) glass measurement tube centrally held in place by foam end plugs. As detailed in the phantom preparation instructions [29], the distilled-water-filled measurement tube was replaced by an air-filled 8 mm OD glass tube, while the phantom interstitial space was filled with water and then frozen overnight in a conventional freezer (−18 °C). The foam insulation lining and end plugs allowed the ice to expand without cracking the plastic centrifuge tube and served to extend the ice hold time. Immediately prior to scanning, the air-filled glass tube was flushed with 50-60 mL of room-temperature water to melt a thin layer of water so that the air-filled tube could be removed and quickly replaced with the water-filled measurement tube. For RF coils that could accommodate a 45 mm diameter object, the phantom was scanned within an outer foam sleeve (provided with the phantom kit) to further extend the phantom thermal hold time; otherwise, the 29 mm diameter phantom was scanned without the outer foam insulation. Benchtop measurements of temperature versus time following insertion of the measurement tube (initially at room temperature) into the frozen phantom were performed using a 1.37 mm OD optical temperature probe (OTP-M, OPSens, Quebec QC, Canada) located in the center of the measurement tube. The plot of temperature versus time in Figure 1c indicates that the water in the measurement tube quickly achieves thermal equilibrium (<0.5 °C in ≈5 min) and holds this temperature for at least 90 min, which is sufficient to position the phantom and acquire two sequential DWI scans using the standardized protocol.
DWI Acquisition Protocol
To eliminate potential variability due to acquisition protocol, the CIRP IADP WG achieved consensus on a standardized DWI test procedure that was within the capabilities of all preclinical MRI systems at participating sites. Details of the DWI scan procedure are provided elsewhere [29], though key parameters include: Stejskal-Tanner [30] spin-echo DWI sequence (∆ = 10 ms; δ = 5 ms); field-of-view, FOV = 32 mm × 32 mm; acquisition matrix = 64 × 64; 29 axial slices (2 mm thick, 0 mm gap); three orthogonal DWI directions; target b-values = 0, 1000, 2000 s/mm²; number of averages, NSA = 1; and repetition time/echo time, TR/TE = 2000/30 ms for a nominal scan duration of 15 min.
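As a quick arithmetic check (an illustration, not part of the published protocol description), the nominal 15 min duration follows directly from the listed parameters if one assumes a conventional multi-slice spin-echo readout in which all 29 slices are acquired within each TR and a single shared b = 0 volume; the sketch below uses descriptive placeholder names.

```python
# Illustrative sanity check of the standardized protocol's nominal scan time.
# Assumptions: one shared b = 0 volume; all slices acquired within each TR.
TR_s = 2.0                 # repetition time, 2000 ms
phase_encode_lines = 64    # 64 x 64 acquisition matrix
nsa = 1                    # number of averages
n_b_nonzero = 2            # b = 1000 and 2000 s/mm^2
n_directions = 3           # three orthogonal diffusion directions
n_volumes = 1 + n_b_nonzero * n_directions   # 7 volumes total

scan_time_s = TR_s * phase_encode_lines * n_volumes * nsa
print(f"{n_volumes} volumes -> {scan_time_s:.0f} s = {scan_time_s / 60:.1f} min")  # ~14.9 min
```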
Participating Site Procedures
Participating sites were asked to (1) prebuild the standardized DWI acquisition protocol on their MRI system(s); (2) prepare the phantom and scan it twice within a given scan session for intra-exam (short-term) repeatability; (3) repeat the prep/scan process on a second day for inter-exam (long-term) repeatability; (4) provide reconstructed DWI and ADC data in MRI vendor-native format and an insight tool kit (ITK)-compatible format (e.g., DICOM, NIFTI, or MHD) [31][32][33][34]; and (5) use its own preferred workflow to generate ADC maps and perform ROI measurements using a 4 mm diameter circular ROI defined within the measurement tube on each slice. This allowed sites to either use scanner-vendor-generated ADC maps or their own in-house software for off-scanner conversion of DWI into ADC maps, although site-specific workflow details were not the focus of this study. DWI and ADC maps in vendor-native and ITK-compatible formats from each site were uploaded to the core lab via a shared network storage account (DropBox, per institutional policy), along with each site's ROI measurements. Seven CIRP institutions participated in the study. DWI data were acquired on 10 preclinical MRI systems, from 3 vendor platforms at 6 field strengths. A summary of MRI system demographics and the data provided for each system is shown in Table 1. Systems 8 and 9 did not provide the second scans on both days and were excluded from the short-term repeatability evaluation. Inspection of DWI received from all sites (data not shown) confirmed that an adequate cylinder of ice surrounded the measurement tube (Figure 1b), indicating the water was at ≈0 °C, so absolute bias was measurable on all systems.
Core Lab Processing
To mitigate variability in the data processing workflow, core lab Matlab version R2019b (Mathworks Inc., Natick, MA, USA) scripts were adapted to convert all sites' vendor-native DWI into ADC maps using a pixelwise linear fit of log DWI signal intensity versus b-value, where slope (ADC) and intercept were the fit parameters. Each of the three orthogonal DWI directions was fit independently using vendor-provided b-values (when available), then averaged for the mean diffusivity (i.e., ADC). Trace DWI (b = 0 and the geometric mean of the three orthogonal b > 0 DWI) and ADC maps were output in the MHD format. While the data input and sorting elements of the core-lab scripts were tailored to each site's datasets, the ADC fit routine was held essentially constant. 3D Slicer (version 4.6.2) [35] was used to inspect DWI/ADC MHDs for definition of a 4 mm circular ROI within the measurement tube on each slice independently, then export ROI statistics of the ADC and trace DWI as a function of location along the MRI system z-axis. Additional Matlab scripts were used to convert each site's ITK-compatible DWI into ADC, as well as for conversion of the site-generated ADC maps for output as MHDs. Analysis of the core-lab-generated ADC derived from vendor-native-format DWI was used to measure baseline repeatability/reproducibility for the studied systems, whereas the ADC derived from ITK-compatible DWI aided interpretation of the potential differences between site-generated and core-lab-generated ADC maps. Low signal-to-noise ratio (SNR) can bias the ADC calculation [36,37]; therefore, noise was estimated by the standard deviation (SD) of an ROI drawn in a signal-free background on the first slice, scaled by 1.53 since noise is Rician on magnitude DWI [38]. DWI SNR was estimated by the mean ROI DWI signal in the measurement tube divided by the noise SD, plotted as a function of location along the z-axis (Supplemental Figure S1).
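A minimal sketch of this pixelwise log-linear fit is given below in Python rather than the study's Matlab; the array layout and function names are assumptions made for illustration only. Each diffusion direction is fit independently (slope of log-signal versus b-value), and the directional ADC maps are then averaged to obtain the mean diffusivity.

```python
import numpy as np

def adc_loglinear(dwi, b_values):
    """Pixelwise log-linear ADC fit for one diffusion direction.

    dwi      : ndarray of magnitude DWI signal, shape (n_b, nz, ny, nx)
    b_values : sequence of n_b b-values in s/mm^2
    returns  : ADC map in mm^2/s (multiply by 1e3 for µm^2/ms)
    """
    b = np.asarray(b_values, dtype=float)
    logs = np.log(np.clip(dwi, 1e-6, None))          # guard against log(0)
    # Least-squares slope of  log S = log S0 - ADC * b  =>  ADC = -slope
    b_ctr = b - b.mean()
    slope = (b_ctr[:, None, None, None] * (logs - logs.mean(axis=0))).sum(axis=0) / (b_ctr ** 2).sum()
    return -slope

# Mean diffusivity: fit each orthogonal direction independently, then average, e.g.:
# adc = np.mean([adc_loglinear(d, b_values) for d in (dwi_x, dwi_y, dwi_z)], axis=0)
```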
Site-generated ROI mean ADCs (as a function of z-location) were averaged over all available scans (intra- and inter-day exams) from each site's MRI system(s). Likewise, the core-lab ROI mean ADC was calculated from the average of core-lab results derived from the corresponding vendor-native-format DWI. Core-lab versus site processing workflows were compared graphically by plotting the relative difference, 100% × (ADCsite − ADCcorelab)/Dtrue, as a function of z-axis location for each system, where Dtrue = 1.1 µm²/ms is the known diffusion coefficient of water at 0 °C [39].
Statistics
The difference in ROI mean ADC from a pair of consecutive DWI scans within each scan session was used to calculate the short-term repeatability. Each site was also instructed to repeat phantom preparation and paired DWI scan acquisition on a second day, yielding two short-term ADC differences (except Systems 8 and 9, Table 1) and two day-to-day differences used to estimate long-term repeatability. For a given pair of ROI mean ADC values (ADC_1, ADC_2) from the ith scanner, the mean (M_i) and variance (V_i) were constructed as [10,40]

M_i = (ADC_1 + ADC_2)/2,  V_i = (ADC_1 − ADC_2)²/2.

Repeatability representative of N MRI systems was quantified by estimates of the within-system standard deviation (wSD), coefficient of variation (wCV), and repeatability coefficient (RC), defined as [10,40]

wSD = √((1/N) Σ_i V_i),  wCV = wSD/((1/N) Σ_i M_i),  RC = 2.77·wSD.

For cross-system reproducibility, the available ROI mean ADC values for each system were first averaged across all scans and days for the given system, then the mean and standard deviation (SD) across the N systems were calculated. Analogous to the repeatability coefficient, the reproducibility coefficient (RDC) was assessed as 2.77 SD. All repeatability and reproducibility metrics were derived as a function of location along the z-axis relative to the magnet isocenter, defined as z = 0. Graphical display of the percent ADC bias was plotted relative to the known diffusion coefficient of water at 0 °C, Dtrue = 1.1 µm²/ms [39], as 100% × [(ADC − Dtrue)/Dtrue]. Likewise, wSD and SD were scaled by 100%/Dtrue on plots so that the degree of variability could be directly compared relative to the systematic bias. Unrealistic ADC values < 0.5 µm²/ms were automatically dropped from the plots and analysis. Each system's absolute ADC bias was measured at the isocenter by averaging the ADC in the measurement tube over the three central slices.
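The sketch below illustrates these repeatability/reproducibility summaries, assuming each system contributes pairs of ROI mean ADC values (one pair per scan session); it is an illustration of the formulas above rather than the study's analysis code, and the input numbers are made up.

```python
import numpy as np

def repeatability(pairs):
    """pairs: list of (ADC1, ADC2) tuples, one per system/session, in µm^2/ms."""
    pairs = np.asarray(pairs, dtype=float)
    means = pairs.mean(axis=1)                              # M_i = (ADC1 + ADC2)/2
    variances = np.diff(pairs, axis=1)[:, 0] ** 2 / 2.0     # V_i = (ADC1 - ADC2)^2 / 2
    wSD = np.sqrt(variances.mean())
    wCV = wSD / means.mean()
    RC = 2.77 * wSD                                         # 95% repeatability coefficient
    return wSD, wCV, RC

def reproducibility(system_means):
    """system_means: per-system ADC averaged over all scans/days."""
    sd = np.std(system_means, ddof=1)
    return sd, 2.77 * sd                                    # SD and RDC

# Example with fabricated values near Dtrue = 1.1 µm^2/ms:
print(repeatability([(1.10, 1.11), (1.12, 1.12), (1.09, 1.10)]))
print(reproducibility([1.105, 1.12, 1.095, 1.23]))
```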
Results
Measured SNR at the isocenter for low b-value exceeded 150 on all systems evaluated in our study (Supplemental Figure S1a,b). Numerical simulation of noise-induced error using the standardized protocol indicates that bias (i.e., ADC underestimation) would occur for low b-value SNR below 20 (Supplemental Figure S1c). Figure 2 illustrates the median and range of the absolute ADC bias measured at the isocenter over all scans for each system, with respect to the true diffusion coefficient of water at 0 °C (Dtrue = 1.1 µm²/ms). The small error-bar range relative to the offset from the truth indicates that the system absolute bias was measurable and repeatable on each system. Eight of ten MRI systems were within ±2.5% bias (mostly positive) and the remaining two exceeded +10% bias.

All systems are displayed on the same scale to aid visual comparison, and the solid horizontal line represents 0% bias relative to Dtrue. Automatic rejection of unrealistic ADC values (<0.5 µm²/ms) only occurred outside |z-offset| > 20 mm. Most systems display the pattern of maximum ADC at the isocenter, with lower ADC as the |z-offset| distance increases, which is consistent with gradient nonlinearity patterns for horizontal-bore gradients on human scanners. Systems 9 and 10 showed >10% bias for ADC at the isocenter, while others had low isocenter bias (comparable to measurement error). These two systems also showed the greatest gradient nonlinearity over the central region, within ±15 mm of the isocenter.

Plots of the relative bias (with respect to Dtrue), repeatability, and reproducibility of all 10 systems combined are illustrated in Figure 4.
Note, the relative bias (solid blue line) was unchanged in Figure 4a-c to display the degree of variability (width of the shaded region denotes ±100·wSD/Dtrue) relative to bias for short-term repeatability (Figure 4a), long-term repeatability (Figure 4b), and cross-system reproducibility (Figure 4c). The aggregate bias of the ten CIRP systems at the isocenter was within 5% of truth (marked by dashed lines). Short-term repeatability (Figure 4a) wCV relative to Dtrue was <1% for all 10 systems at the isocenter and was fairly uniform with respect to location on the z-axis. There was a slight increase in wSD observed at the isocenter for long-term repeatability (Figure 4b) that further increases with distance from the isocenter. As expected, reproducibility across all systems (Figure 4c) shows the greatest variance (shaded region denotes ±100·SD/Dtrue), with a standard deviation (SD) < 7% across systems for locations within ±15 mm of the isocenter, increasing to 10-15% for greater |z-offset| locations. Summary statistics (with 95% confidence intervals) relevant to ADC measurements at the isocenter on all systems are provided in Table 2. The Bland-Altman analysis for isocenter ADC measurements excluding the outliers Sys9 and Sys10 is summarized in Supplemental Figure S2. Without the two outlier systems, the short-term and long-term repeatability were comparable, and cross-system reproducibility was within 3%, with 1% average bias (Supplemental Figure S2).

Figure 4. (a) Short-term (intra-exam) repeatability relative to Dtrue, plotted as a function of z-axis location; systems 8 and 9 did not provide short-term repeatability data (Table 1). (b) Corresponding plots for long-term (inter-exam) repeatability. Shaded regions in (a) and (b) represent bias ± 100% wSD/Dtrue. (c) Cross-system reproducibility, where the shaded region represents bias ± 100% SD/Dtrue. The green line denotes ideal 0% bias. (d) Difference between site-generated and core-lab-generated ADC relative to Dtrue; the difference for core-lab systems 1 and 10 is zero (not plotted). Plots are on the same scale to aid visual comparison of bias, short- and long-term repeatability, reproducibility, and the difference between site- versus core-lab ADC generation routines.

All data in Figure 4a-c were derived from the core-lab-generated ADC maps. Figure 4d illustrates the percent difference between the core-lab-generated and the site-generated ADC values relative to Dtrue. Sys1 and Sys10 are not plotted in Figure 4d since these were core-lab MRI systems and would thus show a "0%" difference. There were large random differences at peripheral slices (|z-offset| > 20 mm), potentially due to how various site algorithms deal with low-SNR conditions. Of greater interest and significance were the clear ADC discrepancies within ≈15 mm of the isocenter, since the measurements were derived from the very same good-quality DWI data. Root-mean-square differences within ±16 mm of the isocenter between core-lab and site processing were negligible (<0.6%) for five systems (Sys3, Sys4, Sys5, Sys8, and Sys9); ≈1% for Sys7; ≈3% for Sys2; and ≈5% for Sys6.
Discussion
Standardization of DWI acquisition and data processing protocols is essential to identify and mitigate technical sources of variance to enhance the scientific yield in preclinical and clinical studies. Even with consensus on primary acquisition parameters, platform and scanner-specific idiosyncrasies in protocol implementation can lead to unanticipated variance from system instability and chronic effects such as gradient nonlinearity and amplitude miscalibration. The pattern of peak ADC at/near the isocenter (z = 0) that falls off with |z| distance along the bore axis, as observed on these preclinical systems, is consistent with gradient nonlinearity observed on horizontal-bore clinical MRIs [26,41,42] due to known physical characteristics of gradient coils. In the context of quantitative ADC, gradient nonlinearity results in a spatially variable b-value. Secondary acquisition factors (e.g., shim routine and subject positioning) add yet more variance.
In this study, the CIRP IADP WG sought to determine the base-level technical variance in performing ADC measurements on preclinical MRIs. To achieve this, a shared temperature-controlled DWI phantom with known diffusivity was used along with a detailed phantom preparation and scan procedure. To reduce variance due to data processing, centralized analysis was used to assess bias, short-term and long-term repeatability, and cross-system reproducibility. A key finding of this work was that the performance of preclinical MRIs at isocenter resembles clinical MRIs [22] in terms of low average bias at isocenter (<4%), good repeatability (short-term wCV = 0.73%; long-term wCV = 1.26%), and cross-system reproducibility (SD = 0.068 µm²/ms, or 6.2% of Dtrue). Spatial nonuniformity of ADC measurements along the z-axis on preclinical MRIs also resembles the gradient nonlinearity observed on human MRIs [26,42], though scaled for bore size. While reasonable ADC uniformity over the central region (within ≈10 mm of the isocenter) was observed for most systems, the importance of repeatable subject (mouse) positioning of the organ/lesion of interest at/near isocenter must not be overlooked.
In terms of bias at the isocenter, it is clear that systems 9 and 10 are the dominant contributors to bias in this study. The aggregate CIRP system bias of <4% reported here would be reduced to <1.5%, and cross-system variability improved from RDC = 17% to 3% if these two outlier systems were excluded from analysis. Systems 9 and 10 happened to be at field-strength extremes, though we expect this is incidental and not the source of their bias. Elevated phantom temperature and data processing were eliminated as sources of bias since ample ice surrounded the measurement tubes and there was excellent agreement between site-and core-lab-generated ADC results for system 9. Of the CIRP scanners evaluated, system 9 operated on the oldest Bruker software version and system 10 was the only MR Solutions platform, which may be contributing factors along with gradient amplitude miscalibration. Vendor-provided directional b-values were used for core-lab ADC map generation, although only system 10 (MR Solutions) b-values were numerically identical to nominal b-values, suggesting that calibrated values are perhaps unknown for this system.
Multiple studies of bias/repeatability/reproducibility of ADC on clinical MRIs using DWI phantoms are available [22][23][24][25][26]. Spatial nonuniformity of ADC as a function of x-y offset, as well as z-location on clinical scanners, has been studied using specialized phantoms and procedures [43][44][45][46] within FOVs much larger than typical for preclinical scanners. In our study, we limited ADC nonuniformity measurements to small offsets (±25 mm) relevant for mouse DWI along the bore axis (z-direction), since this typically is the most nonuniform direction on clinical scanners, as predicted by horizontal bore gradient coil design specifications [42][43][44][45]. The only other prior work on multisystem preclinical MRIs [27] did not specifically address spatial nonuniformity. In addition, while the repeatability and reproducibility results of our study were comparable to the prior work [27], the overall system bias was much lower in our study. Reduced overall bias may be the result of improved system calibration procedures, updated scanner software, and/or use of a standardized acquisition protocol with centralized data processing. Our study used higher maximum b-value than the previous work [27], and inadequate SNR at high b-values is known to lead to ADC underestimation. However, our analysis showed that the average ADC bias at isocenter was positive with respect to true diffusion value, suggesting that low SNR was not a major contributor to the measured bias. Furthermore, our simulations for the observed SNR > 150 (at low b-value) indicated that our ADC results could not be significantly biased by Rician noise. Lastly, ADC was overestimated for systems 9 and 10, which dominated bias (as opposed to underestimating ADC, as predicted by low-SNR simulation). This suggested gradient miscalibration as a more likely source of the detected bias, possibly similar to systems in the prior multi-scanner study [27].
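To make the SNR argument concrete, the following sketch (an illustration under simplified assumptions, not the study's simulation code) adds Rician noise to a two-point signal decay with ADC = 1.1 µm²/ms and b = 0 and 2000 s/mm², then fits the ADC. In this simplified model the fitted ADC drifts downward once the b = 0 SNR drops toward 20, while at SNR well above 100 the noise-induced bias is negligible, consistent with the text above.

```python
import numpy as np

rng = np.random.default_rng(0)
D_true = 1.1e-3                     # mm^2/s  (= 1.1 µm^2/ms)
b = np.array([0.0, 2000.0])         # s/mm^2, two-point simplification of the protocol

def fitted_adc(snr0, n=20000):
    s = np.exp(-b * D_true)                                   # noise-free signal, S0 = 1
    sigma = 1.0 / snr0                                        # noise level set by b=0 SNR
    noise_r = rng.normal(0.0, sigma, (n, b.size))
    noise_i = rng.normal(0.0, sigma, (n, b.size))
    mag = np.sqrt((s + noise_r) ** 2 + noise_i ** 2)          # Rician magnitude signal
    adc = (np.log(mag[:, 0]) - np.log(mag[:, 1])) / (b[1] - b[0])
    return adc.mean() * 1e3                                   # convert to µm^2/ms

for snr in (150, 50, 20, 10, 5):
    print(f"b=0 SNR {snr:>3}: mean fitted ADC = {fitted_adc(snr):.3f} µm^2/ms")
```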
Observed discrepancies between a few site-generated and the core-lab-generated ADC values were greater than those explainable by noise or a slight shift in ROI location. Fit algorithm details (e.g., log-linear versus nonlinear fit with possible accommodation of noise), use of nominal versus scanner-specific calibrated b-values, and/or incorrect scaling of scanner output are potential contributors to the detected deviations. In this study, core-lab processing utilized directional b-values discovered within the Bruker and Agilent native data formats, although only nominal b-values were available in MR Solutions output data files. Except for system 4's enhanced DICOM, which contained the b-matrix, the other ITK DWI datasets did not contain diffusion b-values and direction information; thus, one would need to know and use nominal b-values for quantitative ADC map generation. Site processing of system 6 was the most disparate relative to core-lab processing, particularly in terms of high slice-by-slice variability in ADC. Inspection of the system 6 ADC maps provided in classic DICOM revealed that the rescale slope (DICOM tag (0028,1053)) varied substantially (by ≈50%) with the slice number. Ignoring the rescale slope was the likely source of the high slice-by-slice variability observed in the system 6 site-based measurements [47]. These observations underline the importance of a standardized ADC generation workflow, along with standardized acquisition protocols and metadata (b-value and scale) recording.
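As an illustration of this rescale-slope pitfall, the sketch below (using pydicom; file names are placeholders) converts stored pixel values to real-world ADC units on a per-slice basis. Skipping the per-slice RescaleSlope/RescaleIntercept step would reproduce exactly the kind of slice-to-slice variability observed for system 6.

```python
import numpy as np
import pydicom

def real_world_adc(dicom_paths):
    """Apply per-slice DICOM rescale so stored values become quantitative ADC."""
    slices = []
    for path in dicom_paths:                                     # one classic-DICOM file per slice
        ds = pydicom.dcmread(path)
        slope = float(getattr(ds, "RescaleSlope", 1.0))          # tag (0028,1053)
        intercept = float(getattr(ds, "RescaleIntercept", 0.0))  # tag (0028,1052)
        slices.append(ds.pixel_array.astype(float) * slope + intercept)
    return np.stack(slices)

# adc_volume = real_world_adc(["adc_slice_001.dcm", "adc_slice_002.dcm"])  # placeholder paths
```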
Conclusions
Well-calibrated preclinical MRI systems are capable of highly repeatable and reproducible ADC measurements with low bias using standardized DWI data acquisition and processing protocols. Base technical-level repeatability and reproducibility metrics and spatial uniformity patterns are comparable to those observed on human systems using similar phantoms and test procedures.
Equilibrium States and SRB-like measures of C1 Expanding Maps of the Circle
For any C1 expanding map f of the circle we study the equilibrium states for the potential -log |f'|. We formulate a C1 generalization of Pesin's Entropy Formula that holds for all the SRB measures if they exist, and for all the (necessarily existing) SRB-like measures. In the C1-generic case Pesin's Entropy Formula holds for a unique SRB measure which is not absolutely continuous with respect to Lebesgue. The result also stands in the non generic case for which no SRB measure exists.
Introduction
For any map f on a compact manifold, if no invariant measure is equivalent to Lebesgue, or if f is non-ergodic with respect to the invariant measures that are equivalent to Lebesgue, many substitute notions of natural invariant measures have been defined. They translate the statistical asymptotic behavior of Lebesgue-positive sets of orbits into spatial probabilities. Nevertheless, except under specific conditions in the C^{1+α} scenario, those statistically good measures do not necessarily coincide and, moreover, they do not necessarily exist. For instance, in [5] and [14] the natural measures are defined as the weak* limit (if it exists) of the averages (1/n) Σ_{j=0}^{n−1} (f_*)^j ν for any probability ν ≪ m, where m is the Lebesgue measure and f_* denotes the pull-back operator in the space of Borel probabilities. Similarly, SRB or physical measures are defined from the weak* limit (if it exists) of the averages σ_n(x) := (1/n) Σ_{j=0}^{n−1} (f_*)^j δ_x for a Lebesgue-positive set of initial states x. In [14] a method is exhibited to construct C^0 non-singular expanding maps of the circle S^1 for which there exists a unique natural measure with respect to m and, nevertheless, for m-almost all points x ∈ S^1 the averages σ_n(x) are non-convergent. Thus, even when topological expansion is exhibited, the notions of SRB and natural measures differ widely. In [12], [22], [10] diverse maps are constructed without any natural limit measure, but with other good ergodic properties (for instance, the existence of a mixing probability). In a general context, neither the existence of natural measures nor of SRB measures is required for a map f to exhibit statistically good properties with respect to the Lebesgue measure [8].
A third notion of good measure from the statistical viewpoint arises from the thermodynamic formalism when considering, if it exists, a probability µ for which Pesin's Entropy Formula holds [15]. If the hypothesis of C^2 (or C^{1+α}) regularity is added, many tight relations have been proved among SRB measures, absolute continuity with respect to Lebesgue (of the conditional measures along the unstable manifolds), and Pesin's Entropy Formula. See for instance [13], [18], [20], [1], [3]. But to prove those results, the C^1-plus-Hölder regularity is essential. In the C^1 scenario, generic volume-preserving diffeomorphisms still have an invariant measure satisfying Pesin's Entropy Formula [21], [7]. But contrary to the situation for C^{1+α} maps, C^1-generic dynamical systems, under some hyperbolic-like assumptions, have no invariant measure µ such that µ (or its conditional unstable measures) is absolutely continuous with respect to Lebesgue [2], [6]. Nevertheless, those C^1-generic systems still have a unique SRB measure [9], [16].
Throughout this paper we consider the family E^1 of all the C^1 expanding maps of the circle S^1. We recall that a C^1 map f : S^1 → S^1 is expanding if |f′(x)| > 1 for all x ∈ S^1, and that f ∈ E^{1+α} if and only if f ∈ E^1 and, in addition, f′ is α-Hölder continuous. We will focus with particular emphasis on the systems in E^1 \ E^{1+α}. Our purpose is to state and prove a reformulation of Pesin's Entropy Formula for all those systems, including the non-generic ones for which no SRB measure exists.
For C^1 systems, Ruelle's Inequality [19] states that for any f-invariant probability measure µ on the Borel σ-algebra of S^1, the corresponding measure entropy h_µ(f) satisfies

h_µ(f) ≤ ∫ log |f′| dµ.   (1)

Therefore, h_µ(f) − ∫ log |f′| dµ ≤ 0. By definition, Pesin's Entropy Formula holds if the latter difference is equal to zero:

h_µ(f) = ∫ log |f′| dµ.   (2)

For any map f ∈ E^{1+α}, [15] and [13] prove that Formula (2) holds if and only if µ ≪ m, where m is the Lebesgue measure. On the contrary, as said above, if f is only C^1, then E^1-generically f has no invariant measure µ such that µ ≪ m [9]. Thus, two questions arise. First, do there exist, for any f ∈ E^1, invariant probability measures satisfying Formula (2)? From the thermodynamic formalism the answer is known to be affirmative, since f is topologically expansive. Second, what statistical properties do those probabilities exhibit with respect to the (non-invariant) Lebesgue measure? In Theorem 2.3 of this paper we give a simple statistical description of a nonempty subset of invariant measures that satisfy Formula (2). We call that description the SRB-like property [8]. As a corollary, if the measure that satisfies Formula (2) is unique, then it is SRB (i.e., physical), even when, E^1-generically, it is mutually singular with respect to the Lebesgue measure. Besides, if there exist physical measures, all of them satisfy Formula (2). Finally, if SRB measures do not exist, there still exist (uncountably many) probabilities that are distinguished from general invariant measures by a weak physical condition, similar to the statistical property of SRB measures, and that, in addition, satisfy Pesin's Entropy Formula.
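As a concrete illustration (added for clarity; not part of the original text), the smooth linear expanding map of degree two already realizes equality in (1): Lebesgue measure is invariant, its entropy is log 2, and so is the integral of log |f′|.

```latex
% Worked example: the doubling map f(x) = 2x (mod 1) on S^1, for which |f'| = 2 everywhere.
% Lebesgue measure m is f-invariant and
\[
  h_m(f) \;=\; \log 2 \;=\; \int_{S^1} \log |f'| \, dm ,
\]
% so m attains equality in Ruelle's Inequality (1), i.e., it satisfies Pesin's
% Entropy Formula (2); here m is also the (unique) SRB measure of f.
```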
Although we conjecture that the results are also true for C^1 expanding maps in any dimension, the proofs in this paper work only on one-dimensional compact manifolds. In fact, in Lemma 4.1 we use the fact that there exists a partition of the ambient manifold with arbitrarily small norm such that the measure of the union of the boundaries of its pieces is zero for all the invariant probability measures. This property is trivially satisfied by any one-dimensional map whose set of periodic orbits is, at most, countable.
Definitions and Statement of the Result.
The classic thermodynamic formalism defines the pressure P_f with respect to the potential ψ := − log |f′| by

P_f := sup_{µ ∈ M_f} { h_µ(f) + ∫ ψ dµ },

where M_f is the set of all f-invariant Borel probabilities on S^1. For any f ∈ E^1 the pressure P_f is equal to zero (see [17]). Let us denote by ES_f the (a priori possibly empty) set of all the f-invariant probability measures µ that realize the pressure P_f as a maximum equal to zero. Precisely:

ES_f := { µ ∈ M_f : h_µ(f) + ∫ ψ dµ = 0 }.   (3)

Namely, ES_f is the set of invariant measures that satisfy Pesin's Entropy Formula (2). The thermodynamic formalism (see for instance [11]) for expansive maps states that ES_f is weak*-compact and convex in the space M of all Borel probabilities on S^1 and, if nonempty, its extremal points are ergodic measures. The measures in ES_f are called equilibrium states of f for the C^0 real function ψ = − log |f′|.

Let us recall some definitions from the statistical viewpoint. Consider, for each initial point x ∈ S^1, the following sequence of probabilities {σ_n(x)}_{n≥1}, called empirical probabilities; in general they are not f-invariant:

σ_n(x) := (1/n) Σ_{j=0}^{n−1} δ_{f^j(x)}.   (4)

In the equality above, δ_y denotes the Dirac-delta measure supported on y. A measure µ is called physical or SRB (Definition 2.1) if its basin of statistical attraction B(µ) := {x ∈ S^1 : lim_{n→+∞} σ_n(x) = µ} has positive Lebesgue measure; it is standard to check that any physical measure is f-invariant. After the definition above, if there exist physical measures, then they describe the spatial probabilistic distribution on S^1 of the asymptotic behavior of the empirical distributions in Equality (4), for a Lebesgue-positive set B(µ) ⊂ S^1 of initial states. This is the physical role of the SRB measures from the statistical viewpoint. As said in the Introduction, the existence and uniqueness of an SRB measure µ are generic for f ∈ E^1, but µ is mutually singular with respect to the Lebesgue measure [9]. On the other hand, for any f ∈ E^{1+α}, the existence and uniqueness of the SRB measure µ are guaranteed (Ruelle's Theorem). Besides, in this case µ is equivalent to Lebesgue and is the unique equilibrium state for the potential ψ = − log |f′|; namely, it is the unique probability that satisfies Pesin's Entropy Formula (2). In Theorem 2.3 we prove a generalization of Ruelle's Theorem and of Pesin's Entropy Formula (2) for any C^1 expanding map of the circle. We apply the definition of SRB-like (weakly physical) measure, instead of considering only physical or SRB measures. The gain in this generalization is that SRB-like measures always exist, they still have a physical-like meaning (see Proposition 2.2), and they are equilibrium states for − log |f′| for any C^1 expanding map of S^1, regardless of whether physical measures exist and whether such an equilibrium state is unique.
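As a simple illustration (an addition for clarity, not part of the original text), consider a periodic initial point: the empirical probabilities then converge to the uniform measure on its orbit, so the limit set of the sequence {σ_n(x)} introduced below reduces to a single invariant measure.

```latex
% Illustration: if x is a periodic point of f with period p (f^p(x) = x), then
\[
  \sigma_n(x) \;=\; \frac{1}{n}\sum_{j=0}^{n-1} \delta_{f^j(x)}
  \;\xrightarrow[\;n\to+\infty\;]{\ \text{weak}^*\ }\;
  \frac{1}{p}\sum_{j=0}^{p-1} \delta_{f^j(x)} ,
\]
% an f-invariant probability supported on the orbit of x. For Lebesgue-typical
% initial states, however, the sequence \sigma_n(x) need not converge, which is
% why limit sets of this sequence, rather than limits, are considered below.
```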
Before stating the precise result, we need to revisit the definition of SRB-like measure. In brief, the nonempty set O_f of the SRB-like probability measures (defined for any continuous map acting on a compact manifold) is the minimal weak*-compact nonempty set of M that contains all the limits of the convergent subsequences of (4) for Lebesgue-almost all initial states x ∈ S^1 (see Definition 3.2). Immediately, if there exist SRB measures, they are SRB-like; and there exists a unique SRB-like measure if and only if there exists a unique SRB probability µ and its basin B(µ) has full Lebesgue measure. But besides, in the cases where no SRB measure exists, the SRB-like measures still exist and preserve the statistical role that the nonexisting SRB measures would exhibit. Although the construction above is global, each SRB-like measure µ ∈ O_f preserves an individual weakly physical meaning, independently of the other measures in the set O_f. This is stated in the following Proposition 2.2, which gives a characterization of the SRB-like measures. To state Proposition 3.3, and to argue along the paper, the space M of all the Borel probabilities on S^1 is endowed with the weak* topology. For each point x ∈ S^1 we denote

pω(x) := { µ ∈ M : there exists n_i → +∞ such that lim_{i→+∞} σ_{n_i}(x) = µ },   (6)

where σ_n(x) is the empirical probability defined in Equality (4). The set pω(x) is the limit set in M of the empirical sequence with initial state x. We call pω(x) the p-limit set of x. We fix any weak*-metric in M. We denote this metric by dist*.
Proposition 2.2. A probability measure µ is SRB-like if and only if for all ǫ > 0 the following set A_ǫ(µ) ⊂ S^1 (called the basin of ǫ-weak attraction of µ) has positive Lebesgue measure:

A_ǫ(µ) := { x ∈ S^1 : dist*(pω(x), µ) < ǫ }.   (7)

For the sake of completeness, and although Proposition 2.2 can be easily obtained from the results in [8], we give an independent proof in paragraph 3.3 of this paper. Let us now state our main result. The following assertions are immediate consequences or restatements of Theorem 2.3, for all C^1 expanding maps that are not necessarily C^{1+α}: Besides, O_f = {µ} if and only if µ is SRB and its basin B(µ) has full Lebesgue measure.
Any SRB-like measure µ (and in particular any SRB measure, if it exists) satisfies Pesin's Entropy Formula (2).
2.3.4 There exist f-invariant probability measures µ such that m(A_ǫ(µ)) > 0 for all ǫ > 0, where m denotes the Lebesgue measure and A_ǫ(µ) denotes the basin of ǫ-weak attraction of µ defined by (7). All those measures satisfy Pesin's Entropy Formula.
We prove Theorem 2.3 in Section 5. It is a stronger version of Theorem 6.1.8 of the book of Keller [11], which states that observable measures belong to ES_f. In fact, the definition of SRB-like measures in Section 2 of this paper is non-trivially weaker than the definition of observable measures in [11]. While SRB-like measures do exist for any f ∈ E^{1+α}, the stronger observable measures according to [11] may not exist. Nevertheless, some of the arguments that we use to prove Theorem 2.3 are taken from the proof of Theorem 6.1.8 in [11]. The different idea when using SRB-like measures instead of the observable measures defined in [11] resides in the proof of Lemma 4.3: we have to deal with sets of probabilities (neighborhoods of the SRB-like measures) instead of fixed probabilities (the observable measures according to [11]). Statement 2.3.2 can be equivalently reformulated, substituting the assumption #ES_f = 1 by the following condition (see Lemma 2.4 of [16]): For any expansive map (on any finite-dimensional manifold) the above condition is C^1-generic (see Corollary 2.5 and Proposition 3.1 of [16]). Thus, Statement 2.3.2 provides a new proof of a remarkable result in [9]: C^1-generically, the expanding maps of the circle have a unique ergodic SRB measure whose basin covers Lebesgue-almost all the orbits.
Let us state three Corollaries of Theorem 2.3. We say that a probability measure is atomic if it is supported on a finite set.
There is no atomic SRB-like measure of a C 1 expanding map in S 1 .
We prove this Corollary in paragraph 5.1. Besides, if the conditions above hold, then µ is SRB and its basin B(µ) has full Lebesgue measure.
We prove this Corollary in paragraph 5.1 at the end of this paper. This corollary has a similar version for natural measures, when they exist, instead of SRB-like measures (Theorem 2.4, Part (3) of [10]). From the definition of SRB-like measure, it is immediate that if there exists some ergodic SRB-like measure µ such that µ ≪ m, then it is SRB. Nevertheless, there may exist non-ergodic invariant measures µ ≪ m that are neither SRB nor SRB-like (see [17]). In this latter case the weak*-closure of the ergodic components of µ is composed of SRB-like measures, but they are not necessarily absolutely continuous with respect to m. In spite of the considerations above, we have:

Corollary 2.6. Let f be a C^1 expanding map of S^1. Let µ be a non-ergodic f-invariant probability such that µ ≪ m, where m is the Lebesgue measure. Then µ satisfies Pesin's Entropy Formula.
The proof of Corollary 2.6 is given in paragraph 5.5. This corollary has a similar formulation for C^1-diffeomorphisms in any dimension with a dominated splitting (see [20]).
SRB-like measures.
We revisit the definition and properties of the SRB-like (weakly physical) measures. The content of this section is a reformulation of a part of [8]. Proof: Consider the family Υ of all the nonempty, weak*-compact sets A ⊂ M such that pω(x) ⊂ A for a full Lebesgue-measure set of initial states x ∈ S^1. The family Υ is not empty, since trivially M ∈ Υ. Define in Υ the partial order A_1 ≤ A_2 if and only if A_1 ⊂ A_2. Each countable chain {A_n}_{n≥0} ⊂ Υ (namely, A_{n+1} ≤ A_n for all n ≥ 0) has a minimal element A = ∩_{n=0}^{+∞} A_n ∈ Υ. In fact, if m denotes the Lebesgue measure on S^1 and if, for each fixed n ≥ 0, the Borel set B_n ⊂ S^1 contains all the points x such that pω(x) ⊂ A_n, then m(B_n) = 1, m(∩_{n=0}^{+∞} B_n) = 1, and pω(x) ⊂ A for all x ∈ ∩_{n=0}^{+∞} B_n; thus A ∈ Υ. Therefore, by Zorn's Lemma there exist minimal elements in Υ, namely, minimal nonempty weak*-compact sets O ⊂ M such that pω(x) ⊂ O for Lebesgue-almost all x ∈ S^1. Finally, this minimal O ∈ Υ is unique, since the intersection of two of them also belongs to Υ. It is immediate that any SRB-like measure is f-invariant. In fact, the set of f-invariant Borel probabilities is nonempty, weak*-compact and contains pω(x) for all x ∈ S^1. It is also immediate that all the physical or SRB measures (according to Definition 2.1), if they exist, are SRB-like measures. In fact, if µ ∉ O_f, then, since pω(x) ⊂ O_f for Lebesgue-almost all x ∈ S^1, the set B(µ) = {x ∈ S^1 : pω(x) = {µ}} has zero Lebesgue measure, and thus µ is not SRB.
Proof of Proposition 2.2
As said in Section 2, this Proposition gives an individual (weakly) physical meaning to each of the SRB-like measures.
Proof. In the equality above, P^q := ⋁_{j=0}^{q−1} f^{−j}(P), where for any pair of finite partitions P and Q one defines P ∨ Q := {X ∩ Y : X ∈ P, Y ∈ Q, X ∩ Y ≠ ∅}. It is a classic result that the limit defining h(P, µ) exists. Finally, the measure-theoretic entropy h_µ of an f-invariant measure µ is defined by h_µ := sup_P h(µ, P), where the sup is taken over all Borel-measurable finite partitions P of the space.
A well known result (see Proposition 2.5 of [4]) states that if f is expansive (in particular if f ∈ E 1 ), and if P is a partition with norm smaller than the expansivity constant, then h µ = h(µ, P) for all µ ∈ M f . Applying this result, in the sequel we will consider only finite partitions with norm smaller than the expansivity constant α of f ∈ E 1 . So, we will compute the measurable theoretic entropy by For any (non necessarily f -invariant) Borel probability ν, denote f * ν to the probability defined by f * ν(B) := ν(f −1 (B)) for any Borel-measurable set B. For a given finite partition P denote ∂P := X∈P ∂X, where ∂X denotes the topological boundary of the piece X. The only step along the proof of Theorem 2.3 (which is one of the key-points in its proof) for which we use the dimension one of the space, resides in the application of the following lemma, in particular in its statements (ii) and (iii): Proof: Take any finite covering U of S 1 with open intervals with length smaller than δ. Denote ∂U := X∈U ∂X. It is a finite set. Therefore µ(∂U) = 0 for all µ ∈ M f if and only if ∂U does not contain periodic points of f . Since f ∈ E 1 , the set of periodic points is countable. Then, changing if necessary the open intervals X ∈ U to slightly smaller ones such that they still cover S 1 and their boundary points are non periodic, we get a new covering U ′ = {Y i } 1≤i≤p such that µ(∂U ′ ) = 0 for all µ ∈ M f . Therefore, the partition P = {X i } 1≤i≤p defined by X 1 := Y 1 ∈ U ′ , X i+1 := Y i+1 \ (∪ i j=1 X i ), satisfies the assertions (i) and (ii). Let us prove that (i) and (ii) imply (iii). Fix the integer numbers q ≥ 1, and n ≥ q. Write n = Nq + j where N, j are integer numbers such that 0 ≤ j ≤ q − 1 Fix a (non necessarily invariant) probability ν. From the properties of the entropy function H of ν with respect to the partition P, we obtain To obtain the inequality above recall that H(P, ν) ≤ log p ∀ ν ∈ M, where p is the number of pieces of the partition P. The inequality above holds also for f −l P instead of P, for any l ≥ 0, because it holds for any partition with exactly p pieces. Thus: Adding the above inequalities for 0 ≤ l ≤ q − 1, we obtain: On the other hand, for all 0 ≤ l ≤ q − 1 Therefore, adding the above inequalities for 0 ≤ j ≤ q − 1 and joining with the inequality (9), we obtain: Recall that n = Nq + j with 0 ≤ j ≤ q − 1. So Nq + q ≤ n + q and then In the last inequality we have used that the number of nonempty pieces of P q is at most p q . Now we put ν = ν n and divide by n. Recall that the convex combination of the function H for a finite set of probability measures is not larger than the function H for the convex combination of the measures. We deduce: For any fixed ǫ > 0 (and the natural number q ≥ 1 still fixed), take n ≥ n(q) := max{q, 9 q log p/ǫ} in the inequality above. We deduce: q n H(P n , ν n ) ≤ qǫ 3 + H(P q , µ n ) ∀ n ≥ n(q) ∀ q ≥ 1.
The inequality above holds for any fixed q ≥ 1 and for any n large enough, depending on q. By hypothesis, µ is f-invariant and equal to the weak*-limit of a convergent subsequence of µ_n. After Equality (8) there exists q ≥ 1 such that Fix such a value of q. Since µ(∂P) = 0 for all µ ∈ M_f: Therefore, there exists i_0 such that for all i ≥ i_0: Joining the last assertion with Inequalities (10) and (11), we deduce (iii), as wanted.
Notation: For any f ∈ E^1 denote ψ := − log |f′| < 0, and for all r ≥ 0 construct

K_r := { µ ∈ M_f : h_µ(f) + ∫ ψ dµ ≥ −r }.   (12)

The notation above is taken from the book [11]. The set K_r is nonempty, weak*-compact and convex. In fact, join the proof of Theorem 4.2.3 of the book [11] with Theorem 4.2.4 and Remark 6.1.10 of the same book. Joining the assertions (1), (3) and (12), we deduce that ES_f = K_0 is weak*-compact and convex.
For every integer n ≥ 1 and all x ∈ S^1, recall the definition of the empirical probability σ_n(x) in Equality (4), and the definition of the p-limit set pω(x) in the set M of Borel probabilities, according to Equality (6). In M fix the following weak* metric:

dist(µ, ν) := Σ_{i≥0} (1/2^i) | ∫ ψ_i dµ − ∫ ψ_i dν |,   (13)

where ψ_0 := − log |f′| and {ψ_i}_{i≥1} is a countable family of continuous functions, dense in the space C^0(S^1, [0, 1]). Trivially, with this distance, for any µ_0 ∈ M and any ǫ > 0 the ball B := {ν ∈ M : dist(µ_0, ν) < ǫ} is convex.
Lemma 4.3
Let f be a C 1 expanding map on S 1 . Let m be the Lebesgue measure on S 1 . Fix r > 0 and let K r be defined by Equality (12). Consider the weak * distance defined in (13). Then, for all 0 < ǫ < r/2 there exists n 0 ≥ 1 such that Proof: Fix 0 < ǫ < r/2. Observe that the set {µ ∈ M : dist(µ, K r ) ≥ ǫ} is weak * compact, so it has a finite covering {B i } 1≤i≤k , for a minimal cardinal k ≥ 1, with open balls B i ⊂ M of radius ǫ/3. For any fixed n ≥ 1 denote Therefore, to prove the lemma it is enough to find n 0 such that m(C n ) ≤ e n(ǫ−r) for all n ≥ n 0 . Fix 1 ≤ i ≤ k. We claim: ∃ n i such that m(C n,i ) ≤ e n(−r+(ǫ/2)) ∀ n ≥ n i (to be proved) (15) First, let us see that it is enough to prove Assertion (15) to end the proof of the lemma. In fact, if Assertion (15) holds, define: n 0 := max{2(log k)/ǫ, max k i=1 n i }. Then, we deduce the following inequalities for all n ≥ n 0 , as wanted: m(C n,i ) ≤ k e n(−r+ǫ/2)) = e n(−r+(ǫ/2)+(log k/n)) ≤ e n(ǫ−r) .
Second and last, let us prove Assertion (15). Consider an expansivity constant α > 0 of f . Take ǫ/10 and for such value, fix a continuity modulus 0 < δ < α of the function ψ = − log |f ′ |. Namely |ψ(x) − ψ(y)| < ǫ/6 if dist(x, y) < δ. Take a finite partition P = {X i } 1≤i≤p of S 1 with norm smaller than δ and satisfying also the conditions (ii) and (iii) of Lemma 4.1. The map f is conjugated to a linear expanding map. Therefore, if the norm of the partition P is chosen small enough, the restricted map f n | X : X → f n (X) is a diffeomorphism for all X ∈ P n and for all n ≥ 1. Thus, for all X ∈ P n : Either C n,i = ∅, and Assertion (15) becomes trivially proved, or the finite family of pieces {X ∈ P n : X ∩ C n,i = ∅} = {X 1 , . . . , X N } has N = N(n, i) pieces for some N ≥ 1. In this latter case, take a unique point y k ∈ X k ∩ C n,i for each k = 1, . . . , N. Denote by Y (n, i) = {y 1 , . . . , y N } the collection of such points. Due to the construction of δ > 0, and since the partition P has norm smaller than δ, we deduce: (ψ(f j (y k )) + ǫ/6) ∀ y, y k ∈ X k , ∀ k = 1, . . . , N.
Therefore m(C n ) ≤ e nǫ/6 N k=1 e n−1 j=0 ψ(f j (y k )) m(f n (X k ∩ C n,i )), and thus: ψ(f j (y k )) = n ψ dµ n , N k=1 λ k log λ k = H(P n , ν n ), and then m(C n ) ≤ exp nǫ 6 + log L = exp n ǫ 6 + ψ dµ n + H(P n , ν n ) n Take a subsequence n l → +∞ such that • lim l→+∞ m(C n l ) = lim sup n→+∞ m(C n ) and • the sequence {µ n l } l≥1 is weak * -convergent. Denote µ = lim i→+∞ µ n i . After Assertion (iii) of Lemma 4.1 and the definition of the weak * topology, there exists n i ≥ 1 such that By construction y k ∈ C n,i for all k = 1, . . . , N. Thus σ n (y k ) ∈ B i . Since the ball B i is convex and µ n is a convex combination of the measures σ n (y k ) (recall that N k=1 λ k = 1), we deduce that µ n ∈ B i . Therefore, the weak * limit µ of any convergent subsequence of {µ n } n≥1 belongs to B i . Since the ball B i has radius ǫ/3 and intersects {µ ∈ M : dist(µ, K r ) ≥ ǫ}, we have µ ∈ B i ⊂ M \ K r . Therefore, by the definition of the set K r , we have: h µ + ψ dµ < −r. Substituting this last inequality in (16) we conclude (15) ending the proof.
End of the proof of Theorem 2.3
For any r > 0 consider the compact set K_r ⊂ M defined by Equality (12). Since {K_r}_r is decreasing with r, we have ∩_{r>0} K_r = K_0. From (1) and (3) and from the definition of K_0 in Equality (12), we have ES_f = K_0. So, to prove Theorem 2.3 we must prove that the set O_f of SRB-like measures satisfies O_f ⊂ K_r for all r > 0. Since K_r is weak*-compact, we have K_r = ∩_{ǫ>0} B(r, ǫ), where B(r, ǫ) := {µ ∈ M : dist(µ, K_r) ≤ ǫ}, with the weak* distance defined in (13). Therefore, it is enough to prove that O_f ⊂ B(r, ǫ) for all 0 < ǫ < r/2 and for all r > 0. After Proposition 3.1, and since B(r, ǫ) is weak*-compact, it is enough to prove that the basin of attraction of B(r, ǫ), namely the set of points x ∈ S^1 such that σ_n(x) ∈ B(r, ǫ) for all n large enough, has full Lebesgue measure. In other words, for m-a.e. x ∈ S^1 there exists n_0 ≥ 1 such that σ_n(x) ∈ B(r, ǫ) for all n ≥ n_0. Hence pω(x) ⊂ B(r, ǫ) for m-almost all points x ∈ S^1, as wanted.
5 Proofs of the Corollaries
To prove Corollaries 2.5 and 2.6 we will use the following definition.

Definition 5.2. For any f-invariant probability measure µ, the weak*-closure K(µ) of the ergodic components of µ is the minimal nonempty weak*-compact set of probabilities such that lim_{n→+∞} (1/n) Σ_{j=0}^{n−1} δ_{f^j(x)} ∈ K(µ) for µ-a.e. x ∈ S^1.
Proof of Corollary 2.5
Trivially, (b) implies (a) and also implies that µ is SRB and that its basin of attraction has full Lebesgue measure. So, it is only left to prove that (a) implies (b). Assume (a). Since µ is SRB-like, it is invariant. Since m ≪ µ, applying Birkhoff's Theorem and the Ergodic Decomposition Theorem, for m-a.e. x ∈ S^1: pω(x) = {µ(x)}, where µ(x) is an ergodic component of µ. Applying Proposition 2.2 to the SRB-like measure µ, for all ǫ > 0 there exists an m-positive set A_ǫ(µ) on which dist(pω(x), µ) < ǫ. Joining with the assertion above, we deduce that µ is in the weak*-closure of the set of its ergodic components. As proved in Lemma 5.3, any non-ergodic invariant measure is isolated from the weak*-closure of the set of its ergodic components. So, we deduce that µ is ergodic. Therefore, for m-a.e. x ∈ S^1, pω(x) = {µ}. This implies that µ is the unique SRB-like measure, and so it is SRB and its basin of attraction covers Lebesgue-almost all points. To end the proof of (b) it is left to check that µ ≪ m. Take any Borel set B ⊂ S^1 such that µ(B) > 0 and construct the set C = ∪_{n≥0} f^{−n}(B), which satisfies f^{−1}(C) ⊂ C. Since µ is ergodic and µ(C) ≥ µ(B) > 0, we have µ(C) = 1. As m ≪ µ, we deduce m(C) > 0. Therefore m(f^{−n}(B)) > 0 for some n ≥ 0. Recall that f is absolutely continuous with respect to Lebesgue (because f is C^1). We conclude that m(B) > 0. This shows that m(B) > 0 whenever µ(B) > 0, or in other words µ ≪ m, ending the proof.
Proof of Corollary 2.6
Proof: After the Birkhoff ergodic theorem and the ergodic decomposition theorem we have lim_{n→+∞} (1/n) Σ_{j=0}^{n−1} δ_{f^j(x)} = µ_x (where µ_x is an ergodic component of µ) for µ-almost all points x. Denote by K the weak*-closure of the ergodic components of µ, as defined in 5.2. We claim that any ν ∈ K is SRB-like. In fact, arguing by contradiction and applying Proposition 2.2, if there existed a weak* neighborhood B of ν such that m({x ∈ S^1 : pω(x) ∩ B ≠ ∅}) = 0, then, since µ ≪ m, we would have µ({x ∈ S^1 : pω(x) ⊂ K \ B}) = 1, contradicting the minimality of K. Applying Theorem 2.3 and recalling Assertion (3): K ⊂ ES_f. From the ergodic decomposition theorem, µ is in the weak*-compact convex hull of K ⊂ ES_f. Since ES_f is weak*-compact and convex (because f is expansive), µ ∈ ES_f, as wanted.
Surgery and radiotherapy in the treatment of cutaneous melanoma
Adequate surgical management of primary melanoma and regional lymph node metastasis, and rarely distant metastasis, is the only established curative treatment. Surgical management of primary melanomas consists of excisions with 1–2 cm margins and primary closure. The recommended method of biopsy is excisional biopsy with a 2 mm margin and a small amount of subcutaneous fat. In specific situations (very large lesions or certain anatomical areas), full-thickness incisional or punch biopsy may be acceptable. Sentinel lymph node biopsy provides accurate staging information for patients with clinically unaffected regional nodes and without distant metastases, although survival benefit has not been proved. In cases of positive sentinel node biopsy or clinically detected regional nodal metastases (palpable, positive cytology or histopathology), radical removal of lymph nodes of the involved basin is indicated. For resectable local/in-transit recurrences, excision with a clear margin is recommended. For numerous or unresectable in-transit metastases of the extremities, isolated limb perfusion or infusion with melphalan should be considered. Decisions about surgery of distant metastases should be based on individual circumstances. Radiotherapy is indicated as a treatment option in select patients with lentigo maligna melanoma and as an adjuvant in select patients with regional metastatic disease. Radiotherapy is also indicated for palliation, especially in bone and brain metastases.
radical surgery
Before the 1970s, margins of therapeutic excision ranged from 3 to >5 cm. However, since then, six randomised, prospective trials (Table 1) have evaluated the effect of width of excision margins on local recurrence rates and survival. Rates of local control have been similar in five of these studies, but one has shown a 25% increase in locoregional recurrence in patients with narrow margins. None have shown a survival disadvantage for narrower compared with wider radial excision margins in melanoma of any thickness, although most were not powered to detect this, and one has shown a trend toward reduced survival [5][6][7].
Three trials were conducted for melanomas thinner than 2 mm: the French Cooperative Group Trial [8] and the Scandinavian Melanoma Group Study [9] compared 2 cm with 5 cm margins, and the World Health Organization (WHO) Melanoma Program Trial 10 compared 1 cm and 3 cm margins [10]. None of these trials demonstrated a benefit for wider margins (Table 1).
For melanomas of intermediate thickness (1-4 mm), 486 patients in the Intergroup Trial [11,12] were randomised to 2 cm or 4 cm margins. Local recurrence rates (2.1% compared with 2.6%, respectively) and overall survival (OS) rates were similar (79% compared with 81%, respectively). In the group of patients with 2 cm margins, skin grafts were necessary in only 11% of cases, as compared with 46% of cases in the group with 4 cm margins (P < 0.001). Two large trials were conducted in patients with melanomas thicker than 2 mm: the UK Melanoma Study Group (MSG) trial of 900 patients compared 1 cm and 3 cm margins and the Scandinavian trial of 936 patients compared 2 cm and 4 cm margins [13,14].
In the UK MSG trial, OS was similar in both groups, although a 25% higher rate of locoregional recurrence was noted in the group with the narrow margin [hazard ratio (HR) 1.26, P = 0.05]. The Scandinavian trial, which randomised patients with melanomas >2.0 mm (pT3, pT4) between 2 cm and 4 cm margins, reported no differences in outcome for disease-free survival (DFS) or OS [5,14].
There are fewer data for melanomas thicker than 4 mm, since disease in this thickness band is uncommon. A large, but non-randomised study [15] showed that excisions with margins wider than 2 cm do not have any impact on local recurrence rates, DFS and OS. The UK MSG and Scandinavian studies included patients with melanomas >4 mm [13,14].
For melanoma not thicker than 1 mm, excisions with a 1 cm margin are sufficient. Recommendations about 1-2 mm thick invasive melanoma are less clear; however, many national guidelines indicate that a 1-2 cm margin is sufficient, especially in regions of anatomic constraint associated with anticipated functional or cosmetic deformities (e.g. face, distal part of limbs). For melanomas >2 mm, a 2 cm margin is appropriate. In all surgical trials of primary melanoma, depth of excision has always been to at least muscle fascia, and this is the recommended deep margin, since more superficial excision has not been shown to be equivalent. Deeper excision has not been shown to improve outcome [16,17].
In summary, this means that in general, surgical management of primary melanoma consists of excision and primary closure.
special melanoma types
In certain subgroups of patients, recommendations on excision margins are based only on opinion.
For melanoma in situ, the recommended margin is 0.5 cm. Although thought to have no risk of metastasis, it can recur as in situ and then progress to invasive melanoma, and some data indicate that 1 cm margins may be required [18,19].
Wider margins have been recommended for desmoplastic melanoma because of its increased tendency toward local recurrence. If this is due to contiguous subclinical spread, micrographically controlled excision may reduce risk. Margin size should probably be determined by tumour thickness [19].
technical aspects
The long axis of the excision should be in the direction of the lymphatic drainage and parallel to the long axis of the limb. This decreases the risk of lymphoedema (especially in the case of subsequent lymph node dissection). Primary closure without dog ears usually requires that the longest axis of an elliptical incision be at least three times longer than the short axis. Excision should also include subcutaneous tissue down to, but not including, the underlying muscle fascia. The majority of wounds with 1-2 cm margins of excision can be closed primarily. Split skin grafting or local random-pattern flaps are used in a minority of cases. Full-thickness grafts are commonly used on the face or hands for better aesthetic and cosmetic results and may be taken from behind the ear, from the supraclavicular or inguinal region, or from a site of sentinel node biopsy (SNB). Free-tissue transfer with microvascular reconstruction is used mainly for extensive disease on the head and neck. Mohs' micrographic surgery is not appropriate for treating primary melanoma, since the purpose of wide excision margins is removal of local micrometastases, which, by definition, are discontiguous from the primary lesion. Mohs' surgery may be useful for extensive contiguous disease such as large, clinically ill-defined in situ melanoma of the lentigo maligna type, and possibly desmoplastic melanoma.
primary melanoma at specific sites
Melanoma on the palms and soles, nail unit, and head and neck should probably be treated as usual on the basis of tumour thickness. Patients with such lesions have generally been excluded from surgical trials. There are few adequate data on surgical margins and adjuvant chemoradiotherapy for mucosal and anogenital melanoma.
Mucosal and anogenital melanoma. Primary melanoma located on mucosal surfaces represents <3% of all melanoma but is aggressive, with only 20% of patients alive at 5 years [20]. Among mucosal sites, the most frequent are the head and neck (≥50%), female genital tract (mostly vulva, 20%) and anorectal region (20%) [20,21]. The rarest are primary melanomas originating from the urinary tract sites and stomach/bowel. Early detection is unlikely because of the occult anatomic locations. The diagnosis must be established after a full thickness biopsy of the suspicious lesion with the exception of small lesions suitable for excisional biopsy. Incisional biopsy should include a representative sample from the border of the lesion to help the pathologist in differentiating a primary mucosal melanoma from mucosal melanoma metastasis.
Head and neck mucosal melanoma affects mainly the nasal and oral cavity. The primary approach to treatment of mucosal melanoma is wide surgical resection; however, 5-year OS is only 13-22% [22,23]. While many cases of mucosal melanoma are treated with surgery alone, radiotherapy or chemotherapy as an adjuvant therapy or even the only modality (radiotherapy) is employed more frequently than in cutaneous melanoma, although the benefit of this is unclear. The most frequent primary site of genital melanoma is the vulva [24]; there is a high incidence of local and distant metastasis. Multiple studies of more than 350 cases of vulvar melanoma indicated that radical vulvectomy (with or without lymphadenectomy) does not improve OS and DFS compared with more limited resection (wide local excision or partial vulvectomy) [25][26][27]. Radical vulvectomy, in contrast to wide local excision, is associated with very high morbidity and is not recommended. Most cases of melanoma of the penis are treated by amputation [28]. In genital melanoma, staging with SNB may be considered. The majority of melanoma of the anorectal region arises below the dentate line in the squamous mucosa, and so often presents late. No significant differences between abdominoperineal resection and local excision either in OS or DFS have been found [29]. The procedure of choice is a wide local excision with histologically clear margins (ultrasound can be helpful in delineating lesions) that avoids permanent colostomy.
Subungual melanoma. Subungual melanoma accounts for <1% of tertiary referral cases [30]. Amputation of a finger or a toe is no longer the only option to be considered; distal, function-preserving amputations or even non-amputational approaches are now the usual practice [31].
Melanoma of the face and scalp. For melanoma of the face, normal excision margins may have to be compromised to preserve aesthetic features and functions. There are no data to quantify any adverse outcome of this practice [32]. Melanoma of the ear is treated by wedge excision, or by partial or complete pinnectomy, depending on tumour thickness and patient preference for reconstruction or prosthesis.
Lentigo maligna and lentigo maligna melanoma. Lentigo maligna (LM) is a type of in-situ melanoma, and occurs on the head and neck usually in patients >50 years old. Risk of progression to invasive lentigo maligna melanoma (LMM) is well recognised but poorly quantified. Lesions may grow to 5-10 cm or larger. Biopsy is prone to sampling error and may incorrectly indicate benign disease or miss early invasion. Clinical definition may be poor but can be helped by illumination under a Wood's light. The surgical margin required for LM has not been confirmed by any randomised controlled trial: 5 mm or more is usual and gives cure rates of 90-95% [33]. A recent retrospective study of 117 cases of LM and LMM treated with a staged, margin-controlled excision technique found that the mean total surgical margin required for excision of LMM was 10.3 mm [33]. However, it is difficult to distinguish between LM melanocytes and atypical melanocytes on sun-exposed skin. Orthovoltage radiotherapy using 7-10 mm margins can give cure rates similar to surgery, but there are fewer data to support this and it is generally only suitable when surgery is not feasible [34].
surgery of regional lymph nodes
sentinel node biopsy
In the last 10 years, the experimental procedure of SNB has been increasingly used. This technique replaced elective lymph node dissection, a method previously recommended for early treatment of the regional nodal basin despite no effect on OS in several randomised trials (WHO-1 and WHO-14, the Mayo Clinic Surgical Trial and the Intergroup Melanoma Surgical Trial [35][36][37][38][39][40]) and significant morbidity. SNB allows identification of the first draining lymph node; if this is positive for melanoma, subsequent completion lymphadenectomy might improve survival. The Melanoma Selective Lymphadenectomy Trial (MSLT-I) [41] was designed to test this idea. The study confirmed the value of SNB as a staging procedure [42], but failed to detect a difference in survival between patients in the SNB with early lymphadenectomy cohort and those treated later after clinically detected lymph node relapse. However, among the subgroup of patients with intermediate-thickness melanoma (1.2-3.5 mm), the 5-year survival rate did appear to be increased by SNB and early lymphadenectomy (72.3% compared with 52.4%). The 5-year survival rate for sentinel node (SN)-negative patients was 90.2 ± 1.3% [41]. An ongoing MSLT-II study is designed to test whether completion lymphadenectomy is required in patients with a positive SNB. Lymphoscintigraphy and lymphatic mapping is an essential part of the SNB procedure, since lymphatic drainage cannot be accurately predicted [43,44]. Lymphoscintigraphy provides topographic information about the number of lymph node basin(s) and SN(s). One day or 2-4 h before surgery, dynamic lymphoscintigraphy is performed by intradermal injection of 99mTc-labelled colloid particles of human serum albumin (as lymphoscint, nanocoll or albu-Res) into both sides of the melanoma excision scar. Different types and doses of labelled solution can be injected; we recommend the use of 99mTc-labelled colloids with 10-200 nm particles. For the trunk, and head and neck, an anterior-posterior view and a lateral view must be obtained to localise all SNs [45]. If only one lymphatic basin is involved in the axilla or groin, SNB may be feasible under local anaesthesia, but in the neck or popliteal fossa, or when multiple basins are involved, it is generally better to operate under general anaesthesia.
Following lymphoscintigraphy and 10 min after intradermal injection of Vital Blue dye into the same point(s) as the colloid, the surgical procedure can be conducted. Vital Blue should never be used in the head and neck because of the risk of leaving a permanent tattoo on a visible part of the skin [46], or during pregnancy. During the SNB procedure, a γ-detector probe (γDP) is used to track the radiolabelled tracer towards a single or multiple SNs. This permits a safe, minimal dissection towards the SN. If no vital dye is visible, the γDP should be used immediately after the incision of the superficial fascia in order to reduce the surgical dissection. Devices combining an intraoperative γ detector with an intraoperative γ camera have recently reached the market, and with these the percentage of SNs detected approaches 100%.
SNB provides accurate staging information, but at present is not known to have any therapeutic value. It is generally used in patients with primary melanomas ≥1.0 mm in thickness, although some investigators question its utility in melanomas thicker than 4 mm. Ulceration, Clark level IV and V, mitotic rate per mm², and patient choice can also be considered for melanomas <1 mm Breslow thickness.
Histopathology of the sentinel node. The histopathology of the SN is crucial, and the extent of the procedure determines the positivity rate [47][48][49]. Topography of the SNB metastases [50,51], and their volume, determine prognosis [52]. Metastases <0.1 mm have only a 2% positivity rate for non-SN on completion lymphadenectomy and the same DFS, distant metastasis-free survival and OS rates as SN-negative patients [52,53]. However, increasing size of the metastases in the SNB is associated with increasing risk of a positive completion lymphadenectomy and reduced survival.
therapeutic lymph node dissection
The most frequently affected basins are the neck, axilla and groin; involvement of popliteal fossa or epitrochlear lymph nodes is rare. Lymphadenectomy for melanoma has two goals: it may be curative, or it may simply prevent further relapse at that site. Both can only be achieved by meticulous and thorough removal of all involved and at-risk nodes. In general, this means dissection of all five levels of lymph nodes in the neck plus superficial parotidectomy if the primary site is thought to drain to parotid nodes, all three levels in the axilla, and the superficial, deep inguino-femoral and ilio-obturator nodes. Pelvic lymph nodes should always be included if enlarged on preoperative imaging.
Although no convincing data support selective lymphadenectomy, in clinical practice some compromises are sometimes made. For example, when the metastases lie in the posterior triangle nodes (level 5), submandibular (level 1) nodes might be conserved. The ilio-obturator nodes might not be excised unless clinically involved [54], although the greater the burden of superficial inguinal disease the greater the risk of their involvement. Some surgeons carry out a frozen section examination of Cloquet's node during the inguinal-femoral dissection; if positive, a deep pelvic dissection is carried out, although a negative Cloquet's node does not guarantee negative pelvic nodes. The dissection of pelvic nodes does not increase long-term postoperative complications. These are decisions made on the basis of opinion and experience, and should only be made by melanoma specialists.
surgery of locoregional recurrences
local metastases
The terminology of metastasis between the primary melanoma and draining lymph nodes is confusing, inconsistently defined and unhelpful. In adequately treated primary melanoma, the terms local recurrence, local metastasis, in-transit metastasis and satellite metastasis are all likely to reflect the same biological process of intralymphatic spread beyond the site of therapeutic excision [55]. Since all are characterised by poor prognosis, they should be treated similarly, and are best collectively referred to as in-transit metastases (ITM). It is important to point out that in primary melanoma where adequate surgical treatment has not been carried out, recurrence of melanoma in or adjacent to the scar might represent regrowth of residual primary disease rather than metastasis. In this situation it would be wise to treat the lesion as a thick primary melanoma, since this might offer a chance of cure.
in-transit metastases
Prevention: prophylactic isolated limb perfusion. Early adjuvant treatment of high-risk primary limb melanoma with regional chemotherapy might effectively treat subclinical ITM and improve survival. Although retrospective studies of isolated limb perfusion (ILP) with melphalan indicated improved outcome in high-risk primary melanoma, a prospective randomised study of wide excision compared with wide excision plus adjuvant ILP in 832 patients [56] did not show any benefit. Rates of progression to systemic metastases and OS were unchanged with only a small improvement in locoregional control.
Treatment of apparent in-transit metastases. Treatment of ITM of the limb depends on their number, site and size [57]. Resectable ITM should be treated surgically with narrow but clear margins. Amputation is not indicated and does not improve survival. With multiple dermal ITM, carbon dioxide laser ablation can be used, but the recurrence rate is very high and this technique is limited to lesions <1 cm in diameter. Other local modalities including radiotherapy, cryotherapy, intralesional injections and electrochemotherapy may be used in specific situations. Regional chemotherapy with ILP or isolated limb infusion (ILI) is the preferred method of treating multiple and frequently recurrent ITM. It treats the whole limb below the point of tourniquet isolation, can achieve 20-50 times higher concentrations of melphalan compared with systemic therapy, and can be performed with minimal locoregional toxicity and minimal systemic leakage [58]. ILP with melphalan can be used in combination with tumour necrosis factor (TNF)-a [59,60], especially in the case of bulky lesions [60,61] or after failure of a prior ILP or ILI using melphalan alone [62,63]. Iliac ILP has the advantage of treating the whole limb up to the groin; ILI only treats to the upper third of the thigh. ILI is probably slightly less effective than ILP, but is less invasive and easier to repeat.
Electrochemotherapy can be indicated for palliation of superficial metastatic lesions when ILP or ILI is not indicated because of the general condition of the patient; a 90% response rate in the superficial metastases has been reported [64,65].
surgery of distant metastases
The purpose of treatment of distant metastases is palliation. Surgery is the most effective means of providing this if it is technically feasible, if risk of morbidity and mortality is low and if the patient is likely to live long enough to accrue benefit. A positron emission tomography scan may be used to confirm the finding of computed tomography scanning of a locoregional or distant lesion that is surgically treatable. Good examples are single or localised metastases to the brain, bowel, lung or spinal cord. After careful consideration it may be reasonable to resect a single or localised liver metastasis. Completely resected single distant metastases may occasionally be associated with long survival [66,67]. More common examples are symptomatic soft-tissue metastases. No prospective study compares surgical with medical approaches to treatment of melanoma patients with a single or very few distant metastases.
radiotherapy
Radiotherapy (RT) is a cancer treatment modality that contributes to the cure or palliation of cancer patients. Cutaneous melanoma has long been considered a relatively radioresistant tumour, due to a distinctly broad shoulder in the low-dose portion of the survival curve [68].
Early studies in melanoma demonstrated that the response rate depended on the size of the dose per fraction; complete response rates were 82% (range 67-92%) for fractions of >4 Gy, but only 36% (range 21-46%) for those of <4 Gy [69][70][71][72][73]. However, recent studies on cell lines show characteristics similar to those of acutely and late-responding normal tissue with a broad variation of intrinsic radiosensitivity [74,75].
The only randomised study that evaluated the effectiveness of the high-dose-per-fraction irradiation in the treatment of melanoma was planned by the Radiation Therapy Oncology Group (RTOG) in 1983. One hundred and thirty-seven patients without abdominal or brain metastases and with 50% of the lesions >5 cm were randomised to four fractions of 8 Gy or 20 fractions of 2.5 Gy. In both arms, the overall and complete response rates were 59% and 24%, respectively [76]. Conventional fractionation schedules should be preferred, since they are equally effective in tumour control. In some situations, such as palliation of bone metastases or relief of metastatic lesions in patients with a short life expectancy, a larger dose per fraction is more convenient.
primary melanoma
Surgical resection has proved effective and carries low risk, so radiotherapy is not a primary treatment for invasive cutaneous melanoma. Radiotherapy should be considered in lentigo maligna, especially in elderly patients with extensive or unresectable disease [33,69]. It has not been shown to be effective in lentigo maligna melanoma. It may also be used in desmoplastic melanoma, but only when adequate surgical margins are not obtainable [70,71]. No data support the utility of adjuvant radiotherapy in other forms of cutaneous melanoma. It may rarely be used by melanoma specialists in the presence of positive or close margins where re-resection is difficult to carry out, and local failure could jeopardise the probability of cure.
Radiotherapy can be successfully used in the treatment of mucosal melanoma of the nasal cavity and paranasal sinuses. In contrast to other forms of mucosal melanoma, lesions of the head and neck have the tendency to fail locally before systemic spread, and a radical resection is often difficult to achieve in these regions.
In mucosal melanomas, primary radiotherapy techniques lead to regression rates of 80% [77,78]. Postoperative radiotherapy appears more efficacious than surgery alone [79,80], and some authors consider surgery with postoperative radiotherapy a current standard of treatment for malignant mucosal melanoma of the head and neck [81]. However, prospective randomised trials are needed in the adjuvant setting in order to assess the real impact of radiotherapy on local control, quality of life and OS.
regional lymph nodes
The regional recurrence rate after lymph node dissection can be as high as 20-50% [72]. Many factors have been related to an increased risk of regional recurrence including the number of involved lymph nodes, their size (>3 cm), location (cervical) and the presence of extracapsular extension, which remains the single most important risk factor for relapse. Regional recurrence in the dissected lymph node basin may become unmanageable and can have a serious adverse impact on quality of life and survival. Several phase II studies observed an increase in locoregional control (87-95%) after irradiation with 30-36 Gy in five or six fractions or 50-60 Gy in 25-30 fractions, depending on the site, risk and patient [73,[82][83][84][85][86]. The only published, randomised study, which used 50 Gy in 28 fractions, five fractions/week, found no effect of postoperative radiotherapy on either OS or DFS [87]. The cohort of patients, however, was insufficient to detect small differences in survival and was not stratified for significant prognostic variables.
disseminated and recurrent melanoma
Radiotherapy has an important role in the palliation of many symptoms in melanoma patients. A short course of radiotherapy is generally preferred, and good palliation can be obtained in approximately two-thirds of cases; however, the exact degree of the tumour response depends greatly on the tumour size at the time of irradiation [76,77,88]. Pain relief and/or decompression in 67% of patients with bone metastases and good palliation in 80-85% of similarly treated patients have been reported using 30 Gy in 10 fractions or 20 Gy in 5 fractions [78,79,89]. The overall response rate reported with different fractional doses ranges from 9% to 92%, with a median of 50% [80,81,88]. The same percentage was achieved in the RTOG 83-05 randomised study, confirming that radiotherapy still represents the best palliation whenever surgery is not applicable.
brain metastasis
Brain (CNS) metastases affect 10-40% of melanoma patients in clinical studies and are associated with a sharp decrease in quality of life and survival. The CNS is the first site of recurrence in 15-20% of patients with stage IV melanoma. In the majority of patients with multiple lesions, surgery is rarely indicated, and chemotherapy alone is largely ineffective [90]. The median survival in untreated patients has been reported to be as low as 1 month [91], and despite early detection of frequently asymptomatic metastatic disease using conventional imaging modalities, the prognosis remains poor with reported median survival ranging from 2 to 8 months.
Multiple brain metastases. The median survival of symptomatic patients with multiple brain lesions treated with anti-oedema therapy (corticosteroids and osmotic diuretics) is only 2 months and can be extended to 4-6 months after whole-brain radiation therapy (WBRT). With both treatments, 60-70% of patients experience improvement in neurological symptoms and performance status with no significant differences between various conventional fractionation schemes (20 Gy in 5 fractions, 30 Gy in 10 fractions, 40 Gy in 20 fractions) [92]; however, the procedure is not without morbidity (hair loss, brain oedema, lethargy, cognitive impairment).
Single or few brain metastases. Treatment options for patients with just one or a few, smaller brain metastases include neurosurgical resection and stereotactic irradiation. The feasibility of resection depends on the lesions' number, size and location, neurologic symptoms and deficits, and also on the presence of extracranial disease, age and performance status. Patients with multiple, but resectable brain lesions may have a prognosis similar to that of patients with solitary brain lesions [93] and may benefit from surgical resection of a symptomatic or life-threatening brain lesion [94]. Surgery followed by WBRT improved survival compared with WBRT alone [95,96]. Stereotactic radiosurgery (SRS) is a highly effective local treatment of brain metastases that provides targeted high-dose irradiation of one to six lesions with a diameter not exceeding 3-4 cm, in a single or multiple sessions [97]. Recent publications indicate that its efficacy using either multiple cobalt sources (gamma-knife) or a linear accelerator (Linac) is similar to that of surgical resection. The reported local control rates from uncontrolled studies range from 80% to 96% with a median survival in the range 7-12 months; however, in patients with multiple lesions, median survival decreases to 4 months [98]. Adjuvant WBRT was found to decrease the distant brain failure in SRS-treated patients from 64% to 17% after 6 months [99].
The Study of Clinico-Pathological Condition of Acute Appendicitis In Srikakulam
Objective: To study the clinical and pathological presentation, management and outcome of acute appendicitis. Methodology: During a 2-year study period, we studied 100 cases of acute appendicitis admitted to Rajiv Gandhi Institute of Medical & General Hospital, Srikakulam; of these, 97 cases were appendicitis and 3 had other causes. A detailed history and thorough clinical examination were carried out, and the diagnosis of appendicitis was based on the Alvarado score, total W.B.C. count, ultrasonography and histopathological examination. Results: The study group consisted of 100 patients. The majority (62%) of patients were male and 38% were female. The most common symptoms were pain in the right iliac fossa (98%), anorexia (88%), nausea (87%) and vomiting (83%). The total leucocyte count was >10,000 in 50% of patients, USG showed localized adynamic ileus in 88%, an Alvarado score of 7 or more was found in 90%, and histopathology confirmed the diagnosis in 89%. The overall negative appendicectomy rate was 16.7% in females and 3.8% in males. Conclusion: From the above findings it can be concluded that early diagnosis and appendicectomy are mandatory for a better outcome of the patients; appendicectomy is the definitive line of management.
Introduction
It is a well-known adage that the abdomen is a temple of surprises and a magic box as well. Since the abdomen accommodates innumerable viscera and other anatomical components, diseases of the abdomen constitute a topic full of clinical curiosity. A meticulous examination of the abdomen is one of the most rewarding diagnostic procedures available to the doctor, especially the surgeon, and guides an ideal plan of treatment. As Bailey said, "A correct diagnosis is the handmaiden of successful operation". Despite the advancements in the field of diagnosis, the surprises never cease (1). The appendix, a cul-de-sac, is crudely referred to as the "worm of the bowel" in ancient medical books and is also called the "abdominal tonsil". Acute appendicitis is the most common acute surgical condition of the abdomen (2).
Approximately 7 percent of the population will have appendicitis in their lifetime (3), with the peak incidence occurring between 10 and 30 years of age (4). Despite technological advances, the diagnosis of appendicitis is still based primarily on the patient's history and the physical examination; prompt diagnosis and surgical referral may reduce the risk of perforation and prevent complications (5). The mortality rate in non-perforated appendicitis is less than 1 percent, but it may be as high as 5 percent or more in young and elderly patients in whom the diagnosis is delayed (2). Preoperative diagnosis of acute appendicitis is sometimes challenging in young women, children and the aged, despite all-round improvements in the medical field and in ultrasonography. Diagnostic scores are useful, easy methods which help in decision-making. Delay in diagnosis leads to complications, which increase morbidity, whereas overzealous diagnosis may increase the negative appendicectomy rate (6).
This study aims to correlate clinically diagnosed acute appendicitis with histopathological examination of the specimen, and to assess the role of ultrasound in the early diagnosis of acute appendicitis and in excluding negative appendicectomy, in patients admitted to Rajiv Gandhi Institute of Medical & General Hospital, Srikakulam.
Results
In this series of 100 cases, all the patients who presented with acute symptoms and were diagnosed to have acute appendicitis were included in the study. The age and sex distribution was as follows:

Age group (years)   Males   Females
1-10                 2       1
11-20                26      16
21-30                28      18
31-40                 6       3
41-50                --      --

Acute appendicitis is more common in males than in females. Boyd, discussing acute appendicitis, says it is more than twice as common in males as in females, and explains that this may be due to the fact that the young male is more subject to strain and trauma and that his diet is usually richer in protein than that of the female. In our series the male to female ratio was 3.1:1.9. In the series of 1000 cases by Levis et al, the incidence of acute appendicitis was found to occur most commonly in the age group of 20-30 years in both males and females, and the male to female ratio was 3:2. In our series, the maximum incidence was found in the age group of 20 to 30 years. Of the 90 patients with an Alvarado score of 7 or more, 54 were males and 36 were females; all of them were subjected to surgery, with the diagnosis confirmed in 52 out of 54 males (96.2%) and 30 out of 36 females (83.3%). The negative appendicectomy rate was thus 3.8% in males and 16.7% in females. Women with a normal appendix who underwent operation had pelvic inflammatory disease in 5 cases and a ruptured follicular cyst in 1 case. One of the males with a normal appendix had Meckel's diverticulitis, while the other had regional ileitis. 90 patients were given spinal anesthesia and 7 were given general anesthesia. Incision: the incision commonly employed was the grid-iron incision, extended whenever difficulties were encountered and better exposure was needed. In one case, the appendix was normal and a Meckel's diverticulum was present; appendicectomy with excision of the Meckel's diverticulum was done. The position and condition of the appendix were noted intra-operatively.
Discussion
The discussion is based on the observations and analysis of the results in the study of 100 cases with regard to incidence, age, sex, symptoms, signs, investigations, operative findings and histopathological examination, using the Alvarado scoring system.
Clinical Features
Age incidence:
In the present study the most common age group was 20-30 years (46%), the median age being 24 years; Gallindo Gallego et al (7) similarly found 20-30 years (52%). Sex incidence: it has been established beyond doubt by several authors that the male sex predominates over the female in the incidence of acute appendicitis; Levis et al reported a male to female ratio of 3:2. Pain: pain was a complaint in all the cases in this study. The initial location of pain in most cases (59%) was around the umbilicus, followed by the right lower quadrant in 41%, and 98% of the patients later presented with pain in the right iliac fossa, which adds a diagnostic point for acute appendicitis. Tenderness in the right iliac fossa was present in 94% of cases. Rebound tenderness: in the present series, rebound tenderness was present in 44% of the cases; it is noted when there is local peritoneal involvement and depends upon the time of presentation. The corresponding figures in previous studies were 60% (P.K. Owen et al (14)) and 56% (Gallindo Gallego et al (13)).
Fever: fever was present in 48 cases (48%) in the present series; in the majority of cases the fever was low grade and continuous. The incidence of fever in the present series (48%) compares with 40% reported by Kallan M et al (15) and 74% by Gallindo Gallego et al (13). Leucocyte count: a W.B.C. count of more than 10,000 cells/cumm was found in 50% of cases, and in only 2% was it raised above 20,000 cells/cumm; other reported rates of leucocytosis include 60.0% (16), 65.0% (Gallindo Gallego et al (13)) and 80.0% (Elangovan (17)), and this feature has also been examined by Doraiswamy (18). In the present series 12% of the patients had a normal ultrasound study, so ultrasound also has a role in excluding the diagnosis of acute appendicitis. USG specificity and sensitivity in the diagnosis of acute appendicitis: in the present study, USG findings showed 88% sensitivity and 88% specificity in diagnosing acute appendicitis. Alvarado score: in this series, 87% of males and 94.7% of females had a score of 7 or more. To prove the accuracy of the scoring and the sensitivity and specificity of ultrasound, histopathological confirmation is needed.
Negative appendicectomy rate: the present study shows a negative appendicectomy rate of 16.7% in females and 3.8% in males. The negative appendicectomy rate is higher in females, probably due to pelvic inflammatory disease and ruptured follicular cysts; these conditions are not always properly diagnosed on ultrasound and can mimic acute appendicitis.
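As a quick arithmetic check of the rates quoted above, the sketch below recomputes the negative appendicectomy rate from the operated and histologically confirmed counts reported in the Results; the function name is illustrative and the counts are those of this series.

```python
def negative_appendicectomy_rate(operated: int, confirmed: int) -> float:
    """Fraction of operated cases in which the appendix proved histologically normal."""
    return (operated - confirmed) / operated

# Counts from the Results: 54 males operated (52 confirmed), 36 females operated (30 confirmed).
rate_males = negative_appendicectomy_rate(54, 52)
rate_females = negative_appendicectomy_rate(36, 30)

print(f"males:   {rate_males:.1%}")    # ~3.7% (quoted as 3.8% in the text)
print(f"females: {rate_females:.1%}")  # 16.7%
```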
Conclusion
The Alvarado scoring system combined with ultrasound can therefore be used as an inexpensive way of confirming acute appendicitis, thus reducing the negative appendicectomy rate. History and clinical examination were the most diagnostic. Ultrasonography increases the diagnostic accuracy in patients with suspected acute appendicitis to the tune of 90-95%. An Alvarado score of less than 6 leads to a negative appendicectomy rate of more than 25%, whereas if the score is 7 or above, the overall accuracy of the diagnosis of acute appendicitis is up to 90%.
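For readers unfamiliar with the score relied on throughout this paper, the sketch below implements the standard Alvarado (MANTRELS) components with the operative cutoff of 7 used in this study. The component list and weights are not reproduced in this paper and are taken from the original Alvarado description, so treat them as an assumption of this illustration rather than part of the present study.

```python
# Standard Alvarado (MANTRELS) components and weights (maximum score 10);
# these are quoted from the original Alvarado description, not from this paper.
ALVARADO_ITEMS = {
    "migration_of_pain": 1,
    "anorexia": 1,
    "nausea_or_vomiting": 1,
    "tenderness_rif": 2,
    "rebound_pain": 1,
    "elevated_temperature": 1,
    "leukocytosis": 2,
    "shift_to_left": 1,
}

def alvarado_score(findings: dict) -> int:
    """Sum the weights of the findings recorded as present."""
    return sum(weight for item, weight in ALVARADO_ITEMS.items() if findings.get(item))

# Hypothetical patient: migratory RIF pain, anorexia, fever and leukocytosis.
patient = {"migration_of_pain": True, "anorexia": True, "tenderness_rif": True,
           "elevated_temperature": True, "leukocytosis": True}
score = alvarado_score(patient)
print(score, "-> appendicectomy advised" if score >= 7 else "-> observe / investigate further")
```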
Summary
A study of 100 cases presenting with pain in the right iliac fossa was conducted at Rajiv Gandhi Institute of Medical & General Hospital, Srikakulam, during the period February 2015 to February 2017. Emergency appendicectomy constituted 23.3% of the total abdominal surgeries. Acute appendicitis is more common in males than females and the highest incidence is in the 2nd and 3rd decades of life. The patients presented with symptoms of pain in the RIF, vomiting or nausea and anorexia, and signs of RIF tenderness, rebound tenderness and rise in temperature. The patients were examined clinically and thoroughly using the Alvarado scoring system, and were subjected to investigations such as total count and ultrasonography, which are considered in the score. Ultrasonography diagnosed 88% of cases as acute appendicitis. The 90 cases with a score of 7 or more were managed surgically; of the remaining 10 patients, with scores of 6 and 5, 7 were operated and 3 were managed conservatively. 90% of the cases were confirmed intra-operatively and 89% of histopathological examinations confirmed the diagnosis of acute appendicitis. Complications such as wound infection were seen in only 5% of the patients.
Obliquely propagating ion-acoustic shock waves in degenerate quantum plasma
A theoretical investigation has been carried out on the propagation of nonlinear ion-acoustic shock waves (IASHWs) in a collisionless magnetized degenerate quantum plasma system composed of inertial non-relativistic positively charged light and heavy ions, and inertialess ultra-relativistically degenerate electrons and positrons. The reductive perturbation method has been employed to derive the Burgers' equation. It has been observed that, under the conditions considered, our plasma model supports only a positive potential shock structure. It is also found that the amplitude and steepness of the IASHWs are significantly modified by the variation of ion kinematic viscosity, oblique angle, number density, and charge state of the plasma species. The results of our present investigation will be helpful for understanding the propagation of IASHWs in white dwarfs and neutron stars.
The presence of positrons in white dwarfs and neutron stars has been extensively discussed in the Refs. [17,18,19,20]. Sultana and Schlickeiser [16] investigated IA solitary waves in DQP containing degenerate electrons, light ions, and inertial mobile non-degenerate heavy ions. Gill et al. [21] investigated the IA shock waves (IASHWs) in relativistic DQP composed of electrons, positrons and ions, and found that the height of the potential is maximum for the lower positron density. Ata-ur-Rahman et al. [22] considered an unmagnetized DQP containing inertial ions, and inertialess electrons and positrons, and demonstrated that the amplitude of the IAWs decreases with the increase of positron number density. Hossen et al. [23] studied IASHWs in a four-component plasma system having inertialess electrons and positrons, and inertial heavy and light ions, and reported that the amplitude of the shock profile decreases with positron number density. Mamun and Shukla [24] investigated electrostatic solitary waves propagating in ultrarelativistic plasma medium consisting of degenerate electrons and cold mobile ions, and reported that the wave amplitude increases with the increase of ion number density.
The strong magnetic field (i.e., about 1 Mega Gauss) in white dwarfs was predicted by Blackett [25] and observed by Zeeman spectroscopy [26,27]. El-Taibany et al. [28] analyzed solitary waves in a magnetized degenerate electron-positron plasma, and found that the wave amplitude increases with the oblique angle which is the angle between the external magnetic field and the direction of wave propagation. Shaukat [29] investigated IA solitary waves in the presence of magnetic field, and observed that the solitary wave amplitude increases with increasing oblique angle.
The energy dissipation of the shock wave, which is governed by the Burgers' equation [30], may arise due to the kinematic viscosity of the medium. Hafez et al. [31] studied IASHWs in weakly relativistic plasma containing electrons, positrons, and ions, and noticed that the steepness of the IASHWs decreases with the increase in the value of viscosity of plasma species but the amplitude of the IASHWs does not change. Abdelwahed et al. [32] analyzed IASHWs in a pair-ion plasma, and also found that the shock steepness decreases with increasing ion viscosity.
Recently, Saini et al. [33] investigated heavy nucleus acoustic periodic waves in DQP. Haider [34] examined the shock profiles in the presence of degenerate inertial ions, and inertialess electrons and positrons. Atteya et al. [35] studied IASHWs in a DQP which contains ion fluids, degenerate electrons, and stationary heavy ions. To the best of the authors' knowledge, no one has yet investigated IASHWs in a magnetized DQP having inertialess ultra-relativistically degenerate electrons and positrons, and inertial positively charged non-relativistic light and heavy ions. In this manuscript, we derive the Burgers' equation and use its associated solution to examine the basic features of IASHWs in DQP.
The manuscript is organized in the following way: the governing equations are described in section 2. The Burgers' equation and associated shock solution are presented in section 3. The results and discussion are presented in section 4. A brief conclusion is presented in section 5.
Governing Equations
We consider a magnetized DQP system consisting of inertial positively charged light ions (mass m_l; charge eZ_l; number density N_l), positively charged heavy ions (mass m_h; charge eZ_h; number density N_h), inertialess electrons (mass m_e; charge −e; number density N_e), and positrons (mass m_p; charge e; number density N_p), where Z_l (Z_h) is the charge state of the light (heavy) ion. A uniform external magnetic field B exists in the direction of the z-axis (B = B_0 ẑ, where ẑ is the unit vector). The propagation of IAWs in the DQP system is governed by the continuity and momentum equations of the light and heavy ions together with Poisson's equation [Eqs. (1)-(5)], where U_l (U_h) is the fluid speed of the light (heavy) ion, Φ is the electrostatic wave potential, P_l (P_h) is the pressure of the light (heavy) ion, and η_l = µ/m_l N_l (η_h = µ/m_h N_h) is the kinematic viscosity of the light (heavy) ion. The degenerate pressure equations for electrons and positrons are given by Eqs. (6) and (7), respectively. We now introduce the normalizing parameters n_h → N_h/n_h0, n_l → N_l/n_l0 and n_e → N_e/n_e0, with the remaining variables normalized by the ion-acoustic wave speed C_h = (Z_h m_e c²/m_h)^(1/2), the plasma frequency ω_ph = (4πZ_h² e² n_h0/m_h)^(1/2) and the Debye length λ_Dh = (m_e c²/4πZ_h e² n_h0)^(1/2); for simplicity we take η = η_l = η_h. At equilibrium, the charge neutrality condition can be written as n_e0 = n_p0 + Z_l n_l0 + Z_h n_h0. Using these normalizing parameters, Eqs. (1)-(5) can be expressed in the normalized form of Eqs. (8)-(12). By normalizing and integrating Eqs. (6) and (7), the number densities of the inertialess electrons and positrons can be obtained in terms of the electrostatic potential φ as Eqs. (13) and (14), where K_3 = n_e0 γ_e^(−1) K_e/m_e c² and K_4 = n_p0 γ_p^(−1) K_p/m_e c². Expanding the right-hand sides of Eqs. (13) and (14) up to second order in φ and substituting into Eq. (12) yields Eq. (15).
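As a rough numerical illustration of the characteristic scales defined above, the short sketch below evaluates C_h, ω_ph and λ_Dh in Gaussian (CGS) units. The heavy-ion mass, charge state and equilibrium number density used here are arbitrary illustrative assumptions, not parameter values taken from this paper.

```python
# Characteristic scales of the heavy-ion dynamics (Gaussian/CGS units).
# The parameter values below are illustrative assumptions only.
import math

e   = 4.8032e-10          # elementary charge, statC
m_e = 9.1094e-28          # electron mass, g
c   = 2.9979e10           # speed of light, cm/s

Z_h  = 6                  # assumed heavy-ion charge state
m_h  = 12 * 1.6726e-24    # assumed heavy-ion mass (~carbon), g
n_h0 = 1.0e30             # assumed heavy-ion equilibrium number density, cm^-3

C_h    = math.sqrt(Z_h * m_e * c**2 / m_h)                          # IAW speed, cm/s
om_ph  = math.sqrt(4 * math.pi * Z_h**2 * e**2 * n_h0 / m_h)        # plasma frequency, rad/s
lam_Dh = math.sqrt(m_e * c**2 / (4 * math.pi * Z_h * e**2 * n_h0))  # Debye length, cm

print(f"C_h       = {C_h:.3e} cm/s")
print(f"omega_ph  = {om_ph:.3e} rad/s")
print(f"lambda_Dh = {lam_Dh:.3e} cm")
print(f"consistency check: C_h/omega_ph = {C_h / om_ph:.3e} cm (= lambda_Dh)")
```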
Derivation of the Burgers' Equation
To study IASHWs, we derive the Burgers' equation by employing the reductive perturbation method (RPM) [36,37]. The stretched coordinates for the independent variables can be written as [38,39,40]

ξ = ǫ(l_x x + l_y y + l_z z − v_p t),    τ = ǫ² t,

where v_p is the phase speed and ǫ is a smallness parameter measuring the weakness of the dissipation (0 < ǫ < 1). Here l_x, l_y and l_z are the directional cosines of the wave vector k along the x, y and z-axes, respectively (i.e., l_x² + l_y² + l_z² = 1). The dependent variables are then expanded in power series of ǫ [39]. Substituting the stretched coordinates and these expansions into Eqs. (8)-(11) and (15), and collecting the terms containing the lowest order of ǫ, yields the first-order equations, from which the phase speed v_p of the IASHWs is obtained; the x- and y-components of the first-order momentum equations follow in the same way. Taking the next higher-order terms of the equation of continuity, the momentum equation and Poisson's equation, and making use of the first-order relations, finally provides the Burgers' equation

∂Φ/∂τ + A Φ ∂Φ/∂ξ = B ∂²Φ/∂ξ²,

where Φ = φ^(1) for simplicity, and A and B are the nonlinear and dissipative coefficients, respectively. We look for a stationary shock wave solution of this Burgers' equation by considering ζ = ξ − U_0 τ (where ζ is a new space variable and U_0 is the speed of the ion fluid). This allows us to write the stationary shock wave solution as

Φ = Φ_0 [1 − tanh(ζ/∆)],

where the amplitude is Φ_0 = U_0/A and the width is ∆ = 2B/U_0. It is clear from these expressions that IASHWs, formed due to the balance between nonlinearity and dissipation, exist because B > 0, and that shocks with positive (negative) potential Φ correspond to A > 0 (A < 0).
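To make the shape of the stationary shock concrete, the sketch below evaluates Φ(ζ) = Φ_0[1 − tanh(ζ/∆)] for illustrative values of A, B and U_0; these numerical values are arbitrary assumptions chosen for demonstration and are not coefficients computed from the present model.

```python
# Minimal numerical sketch of the stationary Burgers shock profile discussed above.
# The coefficient values A, B and the fluid speed U0 are illustrative assumptions.
import numpy as np

A, B, U0 = 0.5, 0.3, 0.1      # nonlinear coefficient, dissipative coefficient, fluid speed
phi0  = U0 / A                # shock amplitude
delta = 2.0 * B / U0          # shock width

zeta = np.linspace(-10 * delta, 10 * delta, 9)
phi  = phi0 * (1.0 - np.tanh(zeta / delta))

for z, p in zip(zeta, phi):
    print(f"zeta = {z:8.2f}   Phi = {p:6.3f}")
# Phi rises monotonically from ~0 (zeta >> delta) to ~2*phi0 (zeta << -delta),
# i.e. a positive-potential shock, consistent with the A > 0 case discussed below.
```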
Results and Discussion
The oblique angle δ is the angle between the direction of propagation of the IASHW and the direction of the external magnetic field, which is parallel to the z-axis. The variation of the electrostatic shock potential structure (i.e., Φ > 0) associated with A > 0 with δ, under consideration of non-relativistic light and heavy ions (i.e., α = 5/3) and ultra-relativistic degenerate electrons and positrons (i.e., γ_e = γ_p = 4/3), can be observed in Fig. 2. It is obvious from this figure that the electrostatic shock potential (i.e., Φ > 0) associated with A > 0 increases with increasing δ. Physically, the interaction between the electrostatic shock potential associated with A > 0 and the external magnetic field clearly increases with δ. Figure 3 describes the effects of the kinematic viscosity of the positively charged non-relativistic light and heavy ions (via η) on the electrostatic shock potential structure (i.e., Φ > 0) associated with A > 0. We have noticed that the amplitude of the electrostatic shock profile is independent of the ion kinematic viscosity, but the steepness of the profile depends strongly on it: the steepness of the electrostatic shock potential structure (i.e., Φ > 0) associated with A > 0 decreases with η, and this result agrees with the results of Refs. [31,32].
The amplitude of the positive electrostatic shock structure (i.e., Φ > 0) associated with A > 0 is highly sensitive to changes in the charge state of the non-relativistic light and heavy ions. Figures 4 and 5 show that the amplitude of the positive shock structure associated with A > 0 increases with the charge state of the non-relativistic light and heavy ions. Similarly, an increasing number density of non-relativistic light and heavy ions enhances the positive electrostatic shock structure associated with A > 0. It can easily be seen from Figs. 6 and 7 that as we increase n_l0 and n_h0, the amplitude of the positive electrostatic shock structure associated with A > 0 increases. Physically, the charge state and number density of the non-relativistic light and heavy ions control the dynamics of the DQP system in a similar way.
We have studied the characteristics of IASHWs for different values of positron number density (via n p0 ) under consideration of non-relativistic light and heavy ions (i.e., α = 5/3) and ultra-relativistic degenerate electrons and positrons (i.e., γ e = γ p = 4/3) in Fig. 8. It can be highlighted from this figure that the amplitude of electrostatic shock structure (i.e., Φ > 0) associated with A > 0 decreases with positron number density, and this finding is analogous to the result of Refs. [21,22] .
Conclusion
We have studied the basic characteristics of IASHWs in an extremely dense DQP containing non-relativistic light and heavy ions, and inertialess ultra-relativistic degenerate electrons and positrons in the presence of external magnetic field. The RPM [41] has been utilized to derive Burgers' equation. The results that have been found from our present investigation can be summarized as follows: • Our plasma model supports only positive potential shock structure (i.e., Φ > 0) associated with A > 0 under consideration of non-relativistic light and heavy ions (i.e., α = 5/3), and ultra-relativistic degenerate electrons and positrons (i.e., γ e = γ p = 4/3).
• The amplitude of the electrostatic shock structure (i.e., Φ > 0) associated with A > 0 is found to increase with the charge state and number density of the non-relativistic light and heavy ions.
• The increasing positron number density decreases the height of the positive shock profile.
It may be noted that the self-gravitational effect of the plasma species would be important to include in our governing equations, but this is beyond the scope of the present work. Overall, the outcomes of our present investigation will be helpful for understanding IASHWs in white dwarfs and neutron stars.
Is weight a predictive risk factor of postoperative tonsillectomy bleed?
Objective To determine if a correlation exists between weight‐for‐age percentile and post‐tonsillectomy hemorrhage in the pediatric population. Study Design Retrospective study. Methods 1418 patients under the age of 15 who underwent tonsillectomy with or without adenoidectomy at a tertiary children's hospital between June 2012 and March 2015 were included in this retrospective study. Patient demographic information, operative and postoperative variables, as well as category and day of postoperative tonsillectomy bleed, if one occurred, were recorded. Fisher's exact and ordinal logistic regression analyses were performed on the full cohort. Results The overall post‐tonsillectomy hemorrhage prevalence was found to be 2.2%, with primary and secondary rates of 0.78% and 1.34%, respectively. Weight‐for‐age percentile, sex, indication for or method of tonsillectomy, or postoperative use of NSAIDs, antibiotics or narcotics were not significantly associated with post‐tonsillectomy hemorrhage. There was a significant relationship between postoperative use of dexamethasone and higher rate of Category 3 post‐tonsillectomy hemorrhage (P = .028). Conclusion The post‐tonsillectomy hemorrhage rate in our study is consistent with that cited in the literature. No correlation was demonstrated between weight‐for‐age percentile and occurrence of post‐tonsillectomy hemorrhage. Postoperative administration of dexamethasone was associated with a significant increased rate of post‐tonsillectomy hemorrhage requiring surgical intervention, a novel finding. Level of Evidence 4
INTRODUCTION
Tonsillectomy is one of the most common surgeries in otolaryngology, with roughly half a million performed annually in the United States, and a majority of these surgeries are performed on children under 15 years old. 1,2 Indications for tonsillectomy can include tonsillitis, peritonsillar abscess, streptococcal carriage, hemorrhagic tonsillitis, tonsillar asymmetry, and obstructive sleep apnea, 3 with the last accounting for 80% of tonsillectomy indications currently. 4 Despite its common practice, risks are associated with the procedure, including pain, respiratory difficulty, dehydration and bleeding. 5 Post-tonsillectomy hemorrhage (PTH) is one of the most common and concerning risks, accounting for one-third of complications, and the proportion of post-tonsillectomy deaths attributed to PTH in the literature ranges from 16% to 54%. 6,7 The incidence of PTH, regardless of primary or secondary bleed status, ranges from 0.12% to 18%. [8][9][10][11][12][13][14][15][16][17][18][19][20][21][22] There are numerous studies investigating causative relationships between PTH and the use of perioperative steroids, non-steroidal anti-inflammatory drugs (NSAIDs), antibiotics and narcotics, as well as the surgical method, indication, and sex. 8,[23][24][25][26] However, the literature lacks adequate studies on the relationship of obesity to PTH. Approximately 35% of adults and 17% of children are obese within the United States. 27,28 To the authors' knowledge, there has been only one study, from Austria, researching this association, wherein no association between obesity and PTH was found. However, that study, in a country where only 12% of the adult and 7% of the child population are obese, included both children and adults. 9 Obesity is known to result in chronic inflammation. Within the general surgery literature, it has been shown to cause increased rates of surgical complications such as delayed wound healing and infection, leading to higher morbidity and mortality rates. 29
With the increasing rate of obesity, in addition to the high number of tonsillectomies performed each year, it is important to study the effects obesity has on post-tonsillectomy complications, including hemorrhage. Our population is an ideal place to study the relationship of obesity and PTH, as Mississippi has one of the highest rates of obesity in the nation. 27,28 Obesity in children is defined as a body mass index (BMI) greater than or equal to the 95th percentile for sex and age, while overweight is defined as a BMI greater than or equal to the 85th percentile but less than the 95th percentile for sex and age. 30 Unfortunately, less than 50% of primary care physicians assess BMI regularly in children, citing time constraints as a main barrier. 31 Weight-for-age percentiles can be used as a screening tool for childhood obesity in lieu of BMI. Gamliel et al demonstrated that the 90th and 75th weight-for-age percentile cutoffs for identifying obese and overweight children were highly sensitive (94.3% and 93.2%, respectively) and corresponded appropriately with BMI in the pediatric population. 30 This study aims to elucidate whether a correlative relationship exists between weight stratified for age and PTH within the pediatric population. Our null hypothesis was that no correlation exists. Secondarily, we sought to identify if any other modifiable factors previously studied in the literature were predictive of PTH, including perioperative NSAID, narcotic, antibiotic, or steroid use, surgical method, tonsil size, sex, age, resident postgraduate year or indication.
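A minimal sketch of the screening rule described above, using the 90th and 75th weight-for-age percentile cutoffs reported by Gamliel et al, is given below. The function name and the hard-coded thresholds are illustrative; in practice the percentile itself would be looked up from the CDC growth-chart tables for the child's sex and age.

```python
# Screening rule based on weight-for-age percentile (cutoffs from Gamliel et al, cited above).
# The percentile is assumed to be pre-computed from the CDC growth-chart tables.
OBESE_CUTOFF = 90.0       # >= 90th percentile screens as obese
OVERWEIGHT_CUTOFF = 75.0  # >= 75th percentile screens as overweight

def weight_for_age_category(percentile: float) -> str:
    if percentile >= OBESE_CUTOFF:
        return "obese"
    if percentile >= OVERWEIGHT_CUTOFF:
        return "overweight"
    return "not overweight"

# The study cohort's median weight-for-age percentile (75.26) would screen as overweight.
print(weight_for_age_category(75.26))
```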
Study Setting and Design
This retrospective study was performed at the University of Mississippi Medical Center (UMMC), a tertiary, academic hospital. The Institutional Review Board at UMMC approved this study. For the study, 1418 patients under the age of 15 who underwent tonsillectomy with or without adenoidectomy at UMMC between June 2012 and March 2015 were identified for inclusion. No cases meeting inclusion criteria were excluded from the study.
Study Definitions
The category of bleed was divided into Category 1 (reported bleeding), Category 2 (bleeding requiring ER visit/hospitalization) and Category 3 (bleeding requiring control in the operating room). Bleeds were also recorded as primary (within 24 hours of surgery) or secondary (more than 24 hours after surgery).
Data Collection
Charts were reviewed to collect the following data points for each patient: patient age; sex; primary indication for surgery; weight in kilograms; weight percentile for age in accordance with the Centers for Disease Control and Prevention (CDC) guidelines; operation (tonsillectomy +/- adenoidectomy); method (Bovie electrocautery, Snare, Coblator); surgeon; resident postgraduate year; tonsil size; adenoid size; prescribed postoperative narcotics, antibiotics, steroids or Carafate; instructions for use of NSAIDs for postoperative pain control; occurrence of PTH; and category and day of PTH occurrence if applicable. Operative charts, discharge instructions, postoperative visits, telephone communications and nursing notes were reviewed to gather the above data.
All chart reviews were performed by two of the authors, a chief otolaryngology resident (AO) and a senior medical student (AJH). The chart reviews were performed together and consecutively over a several-day period to counteract fatigue and variability, respectively. All chart review was overseen by AO. Inter-rater variability was not felt to be significant given the objective nature of the variables recorded, the short time course over which the review was performed, and the limited number of reviewers.
Statistical Analysis
Patient characteristics are shown as median (interquartile range) for continuous variables or n (%) for categorical variables, as the skewness of the data required nonparametric statistical methods of analysis. Continuous variables are compared using the Wilcoxon two-sample t-test approximation and categorical variables are compared using Fisher's exact test. Unadjusted logistic regressions were used to model the relationship between tonsillectomy bleeds and predictors of interest. P-values less than .05 are considered statistically significant. Analyses were conducted using SAS software, Version 9.4 (SAS Institute, Inc., Cary, North Carolina, U.S.A.).
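As an illustration of the unadjusted comparisons described here, the sketch below runs Fisher's exact test on a hypothetical 2×2 table of postoperative dexamethasone use versus occurrence of any PTH; the cell counts are invented for demonstration and are not the study's actual data.

```python
# Hedged illustration of a Fisher's exact test on a 2x2 table, as used for the
# categorical comparisons described above. The counts below are hypothetical.
from scipy.stats import fisher_exact

#                 bleed   no bleed
table = [[20,     690],   # received postoperative dexamethasone
         [11,     697]]   # did not receive postoperative dexamethasone

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.3f}")
```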
Demographic Information
Table I provides the demographic information of our study population. Sex distribution was fairly equal with a slight male predominance of 53% (748 of 1418). The average age was 5.45 years with a range of 1.26-14.97 years. A majority of the tonsillectomies were performed using Bovie electrocautery (85.4%). Multiple indications were listed for many of the surgeries. Approximately half (50.1%) of the patients received a prescription for postoperative dexamethasone, while three-quarters received a narcotic prescription. A total of six attending surgeons performed tonsillectomies over this time with the majority of cases (82%) performed or overseen by our fellowship trained pediatric otolaryngologists with participation of 27 residents over the time span. There was no significant difference in bleed rate between different levels of post-graduate year or attending.
Association of Variables with Post-tonsillectomy Hemorrhage
Postoperative dexamethasone use was the only variable that showed a statistically significant relationship with Category 3 PTH (P = .028). Dexamethasone use did not show a significant relationship with Category 1 or Category 2 bleeds (P = .078 and P = .082, respectively). Overall, patients receiving postoperative dexamethasone were 1.8 times more likely to have a tonsil bleed than those not given dexamethasone, but were 10.1 times more likely to have a tonsil bleed requiring surgical intervention (Category 3), as seen in Tables II and III. Increased weight-for-age percentile was not associated with PTH. This was true whether weight for age was treated as a continuous variable or categorized in percentile brackets (0-10%, 10-20%, etc.). Overall, the bleed rate was low and there was no significant difference between surgical methods in the ratio of bleeds (P = .657). Similarly, sex, indication for tonsillectomy, and postoperative use of NSAIDs, antibiotics or narcotics were not significantly associated with post-tonsillectomy hemorrhage, as demonstrated in Table III.
DISCUSSION
As tonsillectomy is one of the most common operative procedures in children in the United States, many studies have examined major complication rates, and the presence of causative factors, following tonsillectomy. While much of the data in the literature is conflicting, these studies are important in order to improve the safety of the procedure as well as limit the cost burden of visits associated with these complications. Recent studies show the rate of unplanned return visits to the hospital following adenotonsillectomy ranged from 6.3% to 11.4%, 12,13,32 with PTH accounting for 24.2-37.4% of these visits. 13,14 Curtis et al found the average cost associated with PTH to be $1502. 14 Furthermore, PTH is described as one of the most common complications following tonsillectomy resulting in malpractice claims and death, accounting for 33.7% of claims and 52.4% of deaths in one study. 7 However, a questionnaire from 2013 distributed through the Academy of Otolaryngology-Head and Neck Surgery showed PTH to be the cause of post-tonsillectomy mortality only 16% of the time. 6 Despite the variability, it is important to note that PTH can be fatal, and thus it remains an important topic of study. The overall PTH occurrence in our study was found to be 2.2%, with primary and secondary PTH accounting for 0.78% and 1.34%, respectively. These rates are comparable to those currently described in the literature of 0.2-2.2% and 0.1-3%, respectively. 2 After observing several consecutive tonsil bleeds in obese children at the authors' institution, our study aimed to determine if a correlative relationship exists between weight-for-age percentile and PTH. It is the only study in the English literature specifically investigating this association. Weight-for-age percentile was chosen over BMI as the study variable, as it has been shown to be a sensitive screening tool for obesity in children that corresponds well with BMI. 30 Furthermore, in our institution, as with most, weight is necessary prior to proceeding with surgery and an accurate measurement is taken on the day of surgery, while height was not invariably recorded.
We chose to use the CDC weight-for-age percentiles in our study and not the World Health Organization (WHO) reference. The decision to do so was based primarily on the current recommendation to use the CDC data for children over the age of 24 months, accounting for 98% of the patients in our study. Furthermore, the WHO curves were created to describe the growth of healthy children under optimal conditions, excluding infants not breastfed for more than 12 months. The CDC data comes from a sample population where only 50% were ever breastfed. 33 According to the most recent breastfeeding data collected by the CDC, approximately 50.5% of Mississippi children are ever breastfed. 34 Therefore, we felt the weight for age percentile would be most accurate using CDC data in our patient population.
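As a side note on how such percentiles are computed, the CDC growth charts are built on the LMS method, in which an age- and sex-specific Z-score is obtained from tabulated L, M, and S parameters. The sketch below assumes illustrative L, M, and S values rather than actual CDC table entries.

```python
# Sketch of converting a measured weight to a weight-for-age percentile via the LMS
# method underlying the CDC growth charts. L, M, S below are made-up placeholders;
# real values are age- and sex-specific and come from the published CDC tables.
import math
from scipy.stats import norm

def weight_for_age_percentile(weight_kg: float, L: float, M: float, S: float) -> float:
    """Return the percentile (0-100) for a weight given LMS parameters."""
    if L == 0:
        z = math.log(weight_kg / M) / S
    else:
        z = ((weight_kg / M) ** L - 1.0) / (L * S)
    return 100.0 * norm.cdf(z)

# Example with made-up LMS values for a hypothetical 5-year-old:
print(weight_for_age_percentile(22.0, L=-0.5, M=18.5, S=0.11))
```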
Our study did not show a positive correlation between weight-for-age percentile and occurrence of post-tonsillectomy hemorrhage, similar to the previously mentioned study performed in Austria. 9 However, ours is the first to demonstrate a lack of correlation in a primarily overweight and obese population, with a median weight-for-age percentile of 75.26%. Therefore, our study suggests further precautions in the obese patient population are likely unnecessary, at least in regard to post-tonsillectomy hemorrhage. We also found no statistically significant correlation of PTH with postoperative use of NSAIDs, antibiotics, or narcotics; tonsil or adenoid size; indication; method; resident post-graduate year; sex or age. In general, the findings in the literature regarding the causative relationship of these variables with PTH are mixed.
Our study revealed a statistically significant increased risk of Category 3 PTH with postoperative administration of dexamethasone on multivariate analysis. Routine prescription of a single dose of dexamethasone on postoperative day three following tonsillectomy was started at the authors' home institution in 2014 in an effort to reduce postoperative pain and dehydration. Routine prescription of narcotics slowed around the same time in response to the Food and Drug Administration's (FDA) black box warning recommending against the prescription of codeine for the management of post-tonsillectomy pain, secondary to the danger in patients deemed "ultra-rapid metabolizers". 35 While 75% of our population received a narcotic prescription, these were skewed toward the beginning of the study period, prior to the FDA warning. The current clinical practice guidelines set forth by the American Academy of Otolaryngology-Head and Neck Surgery in 2011 strongly recommend a single, intraoperative dose of intravenous dexamethasone be given to children undergoing tonsillectomy. 2 Of note, it is common practice in our institution to give a single intraoperative dose of dexamethasone, as recommended by the AAO-HNS. This recommendation was made based on evidence from several randomized controlled trials that have shown a single perioperative dose of dexamethasone significantly lowers rates of postoperative nausea and vomiting (PONV) and pain and facilitates a quicker return to oral intake, without increasing rates of significant PTH. 36,37 Since the publication of the guideline, Gallagher et al showed support for the guideline in their RCT, which showed no significantly increased rate of PTH requiring hospitalization or re-operation following a single perioperative dose of dexamethasone in post-tonsillectomy children. 24 To the authors' knowledge, no studies exist examining postoperative steroid use after tonsillectomy, and thus our finding is novel. However, the authors acknowledge that this finding is both incidental and preliminary given the wide confidence interval, and it highlights the need for further studies on this matter.
In designing our study, inconsistent classification of PTH within the literature was evident, with no single accepted method to date. One method categorizes bleeds based on history of bleeding, bleeding requiring direct pressure or electrocautery under local anesthesia, or reoperation under general anesthesia, 10 while another substituted "requiring non-invasive treatment" for historical bleeding as the first category. 38 Other studies used Windfuhr and Seehafer's classification, which uses five grades of bleeding: spontaneous cessation, infiltration anesthesia, treatment under general anesthesia, ligature of the external carotid artery, and lethal outcome. 11 Walner and Karas proposed standardization of a very similar five-category system ranging from reported bleed to death. 39 We chose our PTH classification based on several factors. One, we wanted to make sure those with reported bleeds were included, as they are not in every classification system. Two, children do not tolerate direct pressure or electrocautery under local anesthesia as adults do, and many of the proposed classification systems are based on the adult population. Three, we felt it pertinent to separate those requiring re-operation from those requiring hospitalization/ER visit in order to determine a re-operation rate. Our classification most closely resembled that used in the Gallagher et al RCT, except we included ED visits along with hospitalizations in Category 2. 24 Future efforts to standardize and gain acceptance of a single PTH classification would help greatly when comparing results among studies.
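A minimal sketch of the three-category scheme described above is given below; the field names are illustrative and simply encode the hierarchy of reported bleed, ED visit/hospitalization, and reoperation.

```python
# Minimal sketch of the study's PTH categories; field names are hypothetical.
from dataclasses import dataclass

@dataclass
class BleedEvent:
    reported_bleed: bool       # any bleeding reported by the family
    ed_or_admission: bool      # ED visit or hospitalization for bleeding
    reoperation: bool          # return to the operating room for control of bleeding

def pth_category(event: BleedEvent) -> int:
    """Return 0 (no PTH) or the highest applicable category (1-3)."""
    if event.reoperation:
        return 3
    if event.ed_or_admission:
        return 2
    if event.reported_bleed:
        return 1
    return 0

print(pth_category(BleedEvent(True, True, False)))  # -> 2
```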
There are several limitations in our study. First, the overall low prevalence of tonsil bleeds makes the outcome difficult to study. The retrospective nature limited the variables studied and the ability to control for confounders. For instance, postoperative antibiotics were prescribed to 45% of patients due to surgeon preference prior to 2014; the authors recognize that such use is not recommended by the AAO-HNS. Furthermore, recall bias by patients' families may have influenced the number of reported (Category 1) bleeds at postoperative visits or telephone calls. Additionally, there was a significant number of patients who missed their follow-up visit or telephone call, though all were given specific postoperative discharge instructions encouraging them to call with any witnessed bleeding. Therefore, we assume a majority of those lost to follow-up had a normal postoperative course. We also cannot control for the possibility that patients presented to another hospital with bleeding; however, as the only academic tertiary care center in the state, these patients are most frequently transferred to our center. Furthermore, we do not know if postoperative medications prescribed were taken as directed, or even filled. Lastly, the decision for reoperation is a subjective one made by the attending surgeon on call, and therefore could not be controlled in this study.
CONCLUSIONS
The post-tonsillectomy hemorrhage rate in our study is consistent with that cited in the literature. No correlation was demonstrated between weight-for-age percentile and occurrence of post-tonsillectomy hemorrhage. Postoperative administration of dexamethasone was associated with a significantly increased rate of post-tonsillectomy hemorrhage requiring surgical intervention, a novel finding. In light of our results, future prospective and larger-scale multi-institutional studies are necessary to investigate the correlation between PTH and additional postoperative steroid use.
KAPPA: A Package for the Synthesis of Optically Thin Spectra for the Non-Maxwellian κ-Distributions. III. Improvements to Ionization Equilibrium and Extension to κ < 2
The KAPPA package is designed for calculations of optically thin spectra for the non-Maxwellian κ-distributions. This paper presents an extension of the database to allow calculations of the spectra for extreme values of κ < 2, which are important for accurate diagnostics of the κ-distributions in the outer solar atmosphere. In addition, two improvements were made to the ionization equilibrium calculations within the database. First, the ionization equilibrium calculations now include the effects of electron impact multi-ionization (EIMI). Although relatively unimportant for Maxwellian distribution, EIMI becomes important for some elements, such as Fe and low values of κ, where it modifies the ionization equilibrium significantly. Second, the KAPPA database now includes the suppression of dielectronic recombination at high electron densities, evaluated via the suppression factors. We find that at the same temperature, the suppression of dielectronic recombination is almost independent of κ. The ionization equilibrium calculations for the κ-distributions are now provided for a range of electron densities.
Detection of the κ-distributions in the low solar corona or transition region (including source regions of the solar wind) relies on analyses of emission spectra. In principle, electron κ-distributions can be detected from ratios of emission-line intensities, while ion κ-distributions can be detected by analysis of well-resolved emission-line profiles. Indications of the presence of electron κ-distributions from line intensity ratios have been obtained, for example, by Dudík et al. (2015), Lörinčík et al. (2020), and Del Zanna et al. (2022), who showed that in active regions, the distribution is likely strongly non-Maxwellian with a very low value of κ ≈ 2. Similarly, emission lines from flaring plasma also show strong departures from Maxwellian distributions (Dzifčáková et al. 2018). Contrary to that, the quiet Sun or even bright-point plasma are Maxwellian (Lörinčík et al. 2020; Del Zanna et al. 2022; Savage et al. 2023). In Del Zanna et al. (2022), the quiet Sun was observed within the same data set directly in the vicinity of the active region. In Lörinčík et al. (2020), the quiet Sun was observed at a date similar to the active region. Therefore, in both cases, the degradation of the instrument sensitivity with time could have been neglected. This is important, since the spatial variations of κ are therefore independent of the instrument calibration used. Thus, the veracity of detections of κ-distributions in active region coronae can be established. Emission-line profiles from the transition region, corona, and flaring plasma are also consistent with being strongly non-Maxwellian, with low values of κ (Jeffrey et al. 2016, 2017, 2018; Dudík et al. 2017a; Polito et al. 2018). In addition, some continuum bremsstrahlung emission from flaring plasma has also been found to be consistent with κ-distributions in some instances (Kašparová & Karlický 2009; Oka et al. 2013, 2015; Battaglia et al. 2015, 2019). Finally, other indications of the presence of κ-distributions in solar flares have been obtained from hydrodynamic modeling of various spectral properties (see Allred et al. 2022).
Recently, Mondal et al. (2020) reported the presence of weak radio bursts (in the mSFU and sub-picoflare range) originating in the solar corona. Such weak bursts are likely associated with small EUV brightenings (Mondal 2021). Subsequently, Sharma et al. (2022) showed that such radio events are ubiquitous, occur both in the quiet Sun and active regions, carry sufficient energy to heat the solar corona, and are associated with the acceleration of electrons to 0.4-4 keV. The more energetic events were found in the vicinity of active regions. We note that these electron energies are similar to the electron energies possessed by the high-energy tail of the κ-distributions detected from analysis of emission-line spectra (see, e.g., the top panel of Figure 10 in Del Zanna et al. 2022). We also note that the energies of the bursts detected by Mondal et al. (2020) are possibly too low to be detected with the dedicated hard X-ray instrumentation used to observe energies only above 2-4 keV, depending on the instrument (see, e.g., Hannah et al. 2010, 2016; Marsh et al. 2017; Buitrago-Casas et al. 2022; Paterson et al. 2023), while focusing on constraining the nonthermal electrons in the quiet Sun.
The spectroscopic diagnostics of κ-distributions in the outer solar atmosphere rely on the availability of spectral synthesis, which in turn relies on the availability of the atomic data sets containing rates for individual processes for κ-distributions. This task is accomplished by the KAPPA database and software. In Paper I (Dzifčáková et al. 2015), the basic concept was established and presented. In Paper II (Dzifčáková et al. 2021), the database was updated to be compatible with the latest release of CHIANTI, version 10 (Dere et al. 1997; Del Zanna et al. 2021), including additional processes such as the two-ion model. Here, we provide improvements to the existing database and software by including calculations for values of κ < 2 (Sections 3.1 and 4), as well as processes such as electron impact multi-ionization (EIMI; Section 3.2) and density suppression of dielectronic recombination (Section 3.3). Finally, we note that since 2020, the synthetic spectra for non-Maxwellian κ-distributions can also be calculated using the AtomDB project and its extensions (see Smith et al. 2001; Foster et al. 2012; Cui et al. 2019; Foster & Heuer 2020). The AtomDB project relies on the Maxwellian decomposition method (Hahn & Savin 2015a; see also our Section 3.3), while the KAPPA package relies on its own calculations of the individual non-Maxwellian rates (see Papers I and II for details) and follows the CHIANTI database formats and procedures.
The Non-Maxwellian Electron κ-Distributions
The KAPPA database allows for calculations of the synthetic optically thin spectra for the non-Maxwellian electron κ-distributions (Olbert 1968; Vasyliunas 1968a, 1968b; Livadiotis & McComas 2009). We note that several definitions of the κ-distributions exist (see, e.g., Livadiotis & McComas 2013; Lazar et al. 2016; Livadiotis 2017; Lazar & Fichtner 2021). At present, the KAPPA database uses a simple formulation for the isotropic electron distribution f_κ(E)dE, which depends only on the electron kinetic energy E and has two parameters, κ and T:

$$ f_\kappa(E)\,\mathrm{d}E = A_\kappa \, \frac{2}{\sqrt{\pi}} \left(k_\mathrm{B} T\right)^{-3/2} \sqrt{E} \left(1 + \frac{E}{(\kappa - 3/2)\, k_\mathrm{B} T}\right)^{-(\kappa + 1)} \mathrm{d}E\,, \qquad (1) $$

where k_B is the Boltzmann constant, A_κ is a normalization constant, and the mean kinetic energy ⟨E⟩ is given by the expression ⟨E⟩ = (3/2) k_B T. The 3/2 < κ < ∞ is an independent parameter describing the degree of departure from the Maxwellian, which corresponds to the limit of κ → ∞ (see Figure 1). We also note that Equation (1) corresponds to a "Kappa-A" distribution as defined and discussed by Lazar et al. (2016).
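For illustration, the distribution of Equation (1) can be evaluated numerically as sketched below. The explicit normalization constant follows one common formulation (e.g., Dzifčáková & Dudík 2013); if a different convention is adopted, the constant differs, and the temperature and κ values used here are illustrative only.

```python
# Minimal numerical sketch of the isotropic kappa-distribution in electron energy.
import numpy as np
from scipy.special import gamma

K_B = 8.617e-5  # Boltzmann constant in eV/K

def f_kappa(E_eV, T_K, kappa):
    """Kappa-distribution of electron kinetic energy, normalized per eV."""
    kT = K_B * T_K
    A = gamma(kappa + 1.0) / (gamma(kappa - 0.5) * (kappa - 1.5) ** 1.5)
    core = 2.0 / np.sqrt(np.pi) * kT ** (-1.5) * np.sqrt(E_eV)
    tail = (1.0 + E_eV / ((kappa - 1.5) * kT)) ** (-kappa - 1.0)
    return A * core * tail

E = np.logspace(-1, 4, 500)                   # 0.1 eV to 10 keV
for kappa in (1.7, 2.0, 5.0, 100.0):          # large kappa approaches the Maxwellian
    f = f_kappa(E, T_K=4e6, kappa=kappa)
    print(kappa, np.trapz(f, E))              # should integrate to ~1
```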
At present, such a definition of the κ-distribution is sufficient to evaluate the effects of the non-Maxwellian electron distributions (NMEDs) characterized by a high-energy power-law tail on the optically thin spectra. This is because for the κ-distributions, the tail is relatively strong (see Figure 1), compared, for example, to the distributions composed of a core Maxwellian and a power-law tail (Dzifčáková et al. 2011), used to describe the electron distribution derived from X-ray bremsstrahlung in flares. Still, even with a relatively strong tail, the changes in intensity of most emission lines with κ-distributions are of the order of several tens of percent, and rarely more than a factor of 2 compared to the Maxwellian (see, e.g., Dzifčáková & Kulinová 2010; Mackovjak et al. 2013; Dudík et al. 2015; Lörinčík et al. 2020; Del Zanna et al. 2022; Savage et al. 2023). The reasons for this choice are chiefly twofold. First, the observational uncertainties in the emission-line intensities are relatively large, due to the radiometric calibration and its degradation, with the precision of the calibration of spectroscopic instruments such as Hinode's Extreme-Ultraviolet Imaging Spectrometer being about 20% (see, e.g., Culhane et al. 2007; BenMoussa et al. 2013; Del Zanna 2013). Second, the κ-distributions (and any other NMEDs) are usually detectable using line intensity ratios involving two lines with different sensitivity to κ, such as an allowed and a forbidden line (see, e.g., Dudík et al. 2017b; Dzifčáková et al. 2018; Lörinčík et al. 2020; Del Zanna et al. 2022). Forbidden lines are usually much weaker than allowed ones, which means their photon noise uncertainty also limits the determination of κ only to a range of values. Therefore, resolving different types of κ-distributions or indeed differentiating between κ-distributions and other NMEDs with a high-energy power-law tail in the outer solar atmosphere is very difficult at present.
Finally, we note that we do not repeat here the equations for the calculation of the line intensities or individual ionization, recombination, and excitation coefficients, as these have been presented and discussed at length in previous literature, and are summarized in the previous Papers I and II. In the following, we describe the upgrades to the existing database and software.
Ionization and Recombination Rates for κ < 2
Previous versions of the KAPPA database contained individual rates for discrete values of κ = 2, 3, 4, 5, 7, 10, 15, 25, and 33 (see Paper I). However, recent diagnostics of the electron distribution in the solar corona showed that the value of κ can be lower than 2 (see Dudík et al. 2015, 2017a; Dzifčáková et al. 2018; Polito et al. 2018; Lörinčík et al. 2020; Del Zanna et al. 2022). Previously, the value of κ = 2 was considered a relatively extreme one and was the lowest one available in the KAPPA database. We have now extended our calculations of individual rates, as well as ionization equilibria, for values of κ < 2. Namely, the newly available values are κ = 1.9, 1.8, and 1.7. We note that the asymptotic value of κ is κ → 1.5. However, our choice of κ = 1.7 is about the lowest detected (Dudík et al. 2017a; Polito et al. 2018) and should be sufficient to illustrate the effects of such low κ-distributions on the spectra, taking into account the increasing uncertainty of the approximations used for the calculation of the recombination and excitation rates (Dzifčáková et al. 2015). The value of κ = 2.5 was also added to bridge the relatively large gap between κ = 2 and 3. This value of κ also corresponds approximately to one of the critical κ indices in nonextensive thermodynamics (see Livadiotis & McComas 2010).
We calculated the ionization rates directly from the cross sections, similar to other values of κ (see Paper II). The cross sections we used are those from the compilation of Hahn et al. (2017), similar to the latest version 10.1 of CHIANTI (Dere et al. 2023). The recombination rates were calculated using the approximate method of Dzifčáková (1992) and Dzifčáková & Dudík (2013), and we subsequently obtained the ionization equilibria. We note that the ionization equilibria for such low κ values were for the first time calculated by Hahn & Savin (2015a), and we checked that our calculations are in good agreement with those using the method of Hahn & Savin (2015a).
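As a schematic illustration of how rate coefficients follow from cross sections, the sketch below integrates a toy cross section (not the Hahn et al. 2017 fits) over a Maxwellian energy distribution; substituting the κ-distribution from the previous sketch yields the corresponding κ rates.

```python
# Schematic numerical evaluation of a rate coefficient R(T) = int sigma(E) v(E) f(E) dE.
# The cross section is a toy placeholder; the Maxwellian is used here so the sketch is
# self-contained, and f_kappa from the earlier sketch can be passed in its place.
import numpy as np

K_B = 8.617e-5               # Boltzmann constant, eV/K
M_E_EV = 510998.95           # electron rest mass energy, eV
C_CM = 2.99792458e10         # speed of light, cm/s

def f_maxwellian(E_eV, T_K):
    """Maxwellian distribution of electron kinetic energy, per eV."""
    kT = K_B * T_K
    return 2.0 / np.sqrt(np.pi) * kT ** (-1.5) * np.sqrt(E_eV) * np.exp(-E_eV / kT)

def velocity_cm_s(E_eV):
    """Non-relativistic electron speed for kinetic energy E (eV)."""
    return C_CM * np.sqrt(2.0 * E_eV / M_E_EV)

def sigma_toy(E_eV, E_threshold=100.0):
    """Toy ionization cross section (cm^2): zero below threshold, ~ ln(u)/u above."""
    u = E_eV / E_threshold
    return np.where(u > 1.0, 1e-18 * np.log(np.maximum(u, 1.0)) / u, 0.0)

def rate_coefficient(T_K, distribution):
    E = np.logspace(0, 5, 4000)        # 1 eV to 100 keV
    return np.trapz(sigma_toy(E) * velocity_cm_s(E) * distribution(E, T_K), E)

print(rate_coefficient(2e6, f_maxwellian))   # cm^3 s^-1
```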
The changes in the ionization equilibrium of iron for low κ in comparison with κ = 2 and other values of κ are shown in Figure 2. For transition-region ions such as Si IV and Fe VIII (top row), that is, for temperatures below about log(T [K]) ≈ 5.7, the changes in the peaks of the relative ion abundance for κ = 1.7 are small compared to κ = 2. The peaks are widened and shifted to slightly higher temperatures. Note that this shift is in the reverse direction to the shifts for higher κ values (see Dzifčáková & Dudík 2013). That is, with decreasing κ, the peaks are first shifted to progressively lower T until about κ = 2; then, for κ < 2, the peaks shift to slightly higher T. This effect is well visible for Fe VIII (see the top right panel of Figure 2).
At coronal temperatures and higher degrees of ionization, a small change of κ means substantial changes in the ionization peaks (bottom row of Figure 2). The shape of the peaks is wider, but the relative ion abundances are shifted significantly to higher temperatures. For κ = 1.7, the peak temperatures can be up to a factor <2 higher than for κ = 2. This happens both for Fe XII, where the shift of log(T_max [K]) is from 6.35 to 6.55 (a factor of ≈1.6 higher), and for Fe XVII, where the ionization peak shifts from log(T_max [K]) = 6.65 for κ = 2 to 6.85 for κ = 1.7 (again a factor of ≈1.6). This means that the peak of the relative ion abundance continues its shift to higher temperatures with decreasing κ (a fact previously described up to κ = 2 by Dzifčáková 1992, 2002; Dzifčáková & Dudík 2013). Reliable detection of such extremely low values of κ in the solar corona would have significant consequences for the thermal energy content of the solar corona and thus coronal heating requirements.
Electron Impact Multi-ionization
A single electron-ion collision can lead not only to ionization or excitation, but also to multiple ionization of the target ion. For example, it is possible to produce Fe XIV directly from Fe XII if the impacting electron has sufficient energy. Triple ionizations are also possible, although the cross sections fall rapidly with each additional ejected electron (see Hahn et al. 2017). Such EIMIs are typically neglected for equilibrium (Maxwellian) plasmas, because the EIMI becomes important only at very high temperatures, where the abundance of the target ion is already small. For iron, EIMI changes the ionization equilibrium (charge state distribution) by less than about 5%; see Figure 3 of Hahn & Savin (2015b). However, EIMI becomes important both in situations when the plasma is rapidly heated (where EIMI reduces the time the plasma needs to reach ionization equilibrium) and when the electron distribution is non-Maxwellian with high-energy tails (Hahn & Savin 2015b). For a κ-distribution with extremely low κ, the EIMI changes the relative ion abundances of Fe by a much larger amount, up to a factor of 2-6, depending on the ion and T (see Figure 9 of Hahn & Savin 2015a). Clearly, EIMI becomes an important process for such NMEDs and needs to be taken into account in calculating the ionization equilibrium.
Denoting I_km and R_mk as the ionization and recombination rate coefficients for ions in the kth and mth ionization state, k < m ≤ Z, and N_k = N(X^+k)/N(X) the relative abundance of the ion X^+k, with Z being the proton number, the ion populations in equilibrium should satisfy the set of linear equations:

$$ \sum_{k<m} I_{km} N_k + \sum_{n>m} R_{nm} N_n - \left( \sum_{n>m} I_{mn} + \sum_{n<m} R_{mn} \right) N_m = 0\,, \qquad 0 \le m \le Z\,, $$

together with the normalization condition Σ_k N_k = 1. Ionization rates for single ionization and for EIMI were calculated using the approximation formulae for ionization cross sections provided by Hahn et al. (2017). The new ionization equilibria in the KAPPA database now include the effects of EIMI for all elements and values of κ.
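Schematically, the equilibrium charge-state distribution follows from solving this linear system together with the normalization condition; the sketch below uses random placeholder rate coefficients in place of the actual κ-dependent ionization (including EIMI) and recombination rates.

```python
# Schematic solver for the ionization-balance equations above.
import numpy as np

def equilibrium_fractions(I, R):
    """I[k, m]: ionization k->m (k<m, multi-ionization allowed); R[m, k]: recombination m->k."""
    n = I.shape[0]
    A = np.zeros((n, n))
    for k in range(n):
        for m in range(k + 1, n):
            A[m, k] += I[k, m]    # gain of ion m by ionization from k
            A[k, k] -= I[k, m]    # loss of ion k by ionization
            A[k, m] += R[m, k]    # gain of ion k by recombination from m
            A[m, m] -= R[m, k]    # loss of ion m by recombination
    # The equations are linearly dependent; replace the last one by sum(N) = 1.
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
n_ions = 5
I = np.triu(rng.random((n_ions, n_ions)), k=1)      # includes multi-ionization k -> k+2, ...
R = np.diag(rng.random(n_ions - 1), -1)             # single-step recombination only
N = equilibrium_fractions(I, R)
print(N, N.sum())
```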
As pointed out by Hahn et al. (2017), the effect of multiple ionization on the ionization equilibrium for a Maxwellian distribution is small, but it can become important for the elements with high Z and low κ values or high T. These effects are shown in Figure 3, where the ionization equilibrium of iron, including EIMI, is plotted for a Maxwellian distribution and several values of κ. These Fe ionization equilibria are compared to calculations without EIMI and in the low-density limit (see Section 3.3). The effects of EIMI are of importance for coronal Fe ions such as Fe X-Fe XVIII between log(T [K]) ≈ 6-7 (see Figure 9 of Hahn & Savin 2015a), although the details depend on the ion. Generally, for the affected ions, EIMI shifts the peak formation temperature T_max toward lower values. These shifts of T_max toward lower values of T occur since additional ionization leads to the increase of the ionization state at a given electron kinetic energy. For some ions, the changes in the relative ion abundances occur at T > T_max; for others, they occur at all temperatures (for example, Fe XIV; see Figure 3), and for other ions still, such as Fe XVII, the EIMI dominantly affects temperatures below T_max. Therefore, the EIMI is also a potentially significant process affecting the X-ray spectra, such as those observed by MaGIXS (Savage et al. 2023), should the X-ray lines of Fe XVII-Fe XVIII be formed in non-Maxwellian conditions. We note that for coronal ions, the shifts in the ionization equilibrium with κ occur in the opposite direction to the shifts due to the inclusion of EIMI. That is, with decreasing κ, the peaks are shifted toward higher T compared to Maxwellian calculations. This shift with κ is a result of two competing processes. The increase of the ionization rate due to high-energy electrons pushes the ionization peaks toward lower T, while the increase of the radiative recombination rates due to the excess of electrons at low energies and low κ (see Figure 1) pushes them to higher temperatures. In addition, changes in dielectronic recombination rates with κ also affect the shift. The resulting net effect of κ-distributions on coronal Fe ions is the shift of their peaks toward higher temperatures, as the changes in the total recombination rate are larger than the changes in the ionization rate at the temperatures where the ion peak occurs (see Figure 2 of Dzifčáková & Dudík 2013). When EIMI is added, the total ionization rate is increased, resulting in the shift of the ionization peak to lower T, i.e., in the opposite direction to the shifts with κ.
Density Suppression of Dielectronic Recombination
Recombination in the outer solar atmosphere consists of two processes, radiative and dielectronic recombination. Burgess (1964) has shown that the latter can be the more important one. Therefore, the accurate implementation of dielectronic recombination is critical for analysis of the observed transition region and coronal spectra. The KAPPA database contains recombination rates for κ-distributions calculated using the approximate methods developed by Dzifčáková (1992) and summarized in Section 3.1 of Dzifčáková & Dudík (2013). These rates are valid, similar to the corresponding Maxwellian ones in CHIANTI, only in the limit of low electron densities log(N_e [cm^-3]) → 0.
However, in relatively high-density plasmas, additional electrons within the plasma can lead to electron-ion collisions and ionization from the doubly excited resonance states. Thus, the radiative rates from the doubly excited states are diminished and the total dielectronic recombination rate R_DR is suppressed. This suppression of dielectronic recombination was initially studied by Burgess & Summers (1969) and Summers (1972, 1974), then later by Summers & Hooper (1983), Badnell et al. (1993, 2003), Nikolić et al. (2013, 2018), and most recently by Dufresne & Del Zanna (2019) and Dufresne et al. (2020, 2021). In the last three of these works, the generalized collisional-radiative modeling was employed for carbon (Dufresne & Del Zanna 2019), oxygen (Dufresne et al. 2020), and then for low charge states generally (Dufresne et al. 2021). The generalized collisional-radiative modeling incorporates not only density suppression of dielectronic recombination, but also other processes affecting the ion population, such as photoinduced processes and charge transfer. Although these processes can be of importance for transition-region ions, such modeling relies on a vast quantity of reliable atomic data that are not yet readily available. Implementing the generalized collisional-radiative modeling would also mean significant divergence from the methods of calculating synthetic spectra employed in the present version 10.1 of the CHIANTI database, on which the KAPPA package is based. Therefore, for the present, we focus on studying the behavior of the density suppression of dielectronic recombination with κ. Although this process is important, we caution the reader that the calculations presented below are not a substitute for generalized collisional-radiative modeling.
Generally, the suppression of dielectronic recombination can be expressed through a dimensionless factor S_M(T, N_e, q) (Nikolić et al. 2013, 2018):

$$ R_\mathrm{DR}(N_\mathrm{e}, T, q, M) = S_M(T, N_\mathrm{e}, q)\, R_\mathrm{DR}(T)\,, \qquad (3) $$

where R_DR(T) is the dielectronic recombination rate in the log(N_e [cm^-3]) → 0 limit, R_DR(N_e, T, q, M) is the density-suppressed rate, q is a parameter depending on the ion, and M is the isoelectronic sequence. Nikolić et al. (2013) used earlier generalized radiative-collisional models to develop an approximation formula for the suppression factor as a function of isoelectronic sequence, charge, electron density, and temperature. Nikolić et al. (2018) presented improved fits to calculate the suppression of dielectronic recombination at intermediate electron densities, but only for the Maxwellian electron distribution. These authors have shown that the suppression factor depends on the atomic parameter q of the ion, on the electron density N_e, and, through the activation density, on T^(1/2) (Equation (3) of Nikolić et al. 2018). Therefore, the effect of electron density dominates over the effect of electron temperature.
As no attempt to calculate the suppression factors of dielectronic recombination for any other distribution than the Maxwellian has been made so far, we calculated the suppression of dielectronic recombination S_κ(T, N_e, q) for κ-distributions. To do that, we employed the Maxwellian decomposition approach of Hahn & Savin (2015a). These authors approximated κ-distributions by a sum of several Maxwellians with different temperatures T_j:

$$ f_\kappa(E, T) \approx \sum_j a_j\, f_\mathrm{M}(E, T_j)\,. \qquad (4) $$

This allowed them to use the linearity property and thus calculate the non-Maxwellian rates R_κ(T) for any collisional process as a weighted sum of the corresponding Maxwellian rates (see Equation (12) of Hahn & Savin 2015a):

$$ R_\kappa(T) = \sum_j a_j\, R_\mathrm{M}(T_j)\,, \qquad (5) $$

where the coefficients a_j and temperatures T_j are tabulated in Hahn & Savin (2015a). It follows that the suppression factor for κ-distributions can then also be calculated as a weighted sum of the Maxwellian suppression factors:

$$ S_\kappa(T, N_\mathrm{e}, q) = \sum_j a_j\, S_M(T_j, N_\mathrm{e}, q)\,. \qquad (6) $$

The suppressed dielectronic recombination rate for κ-distributions is then simply given by the expression analogous to Equation (3):

$$ R_\mathrm{DR,\kappa}(N_\mathrm{e}, T, q, M) = S_\kappa(T, N_\mathrm{e}, q)\, R_\mathrm{DR,\kappa}(T)\,, \qquad (7) $$

where R_DR,κ is the dielectronic recombination rate for κ-distributions in the low-density limit. We note that although the present method is based on the fits of Nikolić et al. (2018), and thus not exact, it allows us to estimate the effect of the electron density on the ionization equilibrium for κ-distributions.
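The decomposition can be sketched numerically as below; the weights and temperature ratios are placeholders standing in for the coefficients tabulated by Hahn & Savin (2015a), and the Maxwellian rate is a toy function.

```python
# Sketch of the Maxwellian-decomposition estimate: a kappa rate (or suppression
# factor) is approximated as a weighted sum of Maxwellian values evaluated at the
# decomposition temperatures T_j. All coefficients below are placeholders.
import numpy as np

def kappa_rate_from_maxwellian(rate_maxwellian, T_K, a_j, c_j):
    """a_j: weights; c_j: ratios T_j / T, so that T_j = c_j * T."""
    return sum(a * rate_maxwellian(c * T_K) for a, c in zip(a_j, c_j))

def rate_maxwellian(T_K, E0_eV=50.0):
    """Toy Maxwellian rate with an Arrhenius-like temperature dependence."""
    kT = 8.617e-5 * T_K
    return 1e-10 * np.exp(-E0_eV / kT) / np.sqrt(kT)

a_j = [0.2, 0.5, 0.3]          # placeholder decomposition weights (sum to 1)
c_j = [0.4, 1.0, 3.0]          # placeholder temperature ratios T_j / T
print(kappa_rate_from_maxwellian(rate_maxwellian, 1e6, a_j, c_j))
```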
We found that the suppression factors S_κ(T, N_e, q) for κ-distributions are usually very close to the Maxwellian ones at the same temperature. The differences reach only about 10% in most cases, although larger differences can occur, as discussed below. Generally, the differences are largest for low values of κ. Examples of the behavior of the suppression factors S_κ with κ are shown in Figure 4. The cases shown include dielectronic recombination from Si IV to Si III, as well as Fe VIII → Fe VII, Fe X → Fe IX, and Fe XV → Fe XIV. Where the Maxwellian suppression factor increases with temperature, the changes with κ are small, and the suppression factor for low κ is slightly smaller than for a Maxwellian distribution. This behavior with κ is independent of electron density. For ions such as Fe VIII and Fe X, this behavior of the suppression factor occurs in the entire range of T (see Figure 4), while for ions such as Fe XV, this behavior occurs at temperatures log(T [K]) ≳ 6, where the ion is formed. Clearly, the suppression factor in these cases depends primarily on log(N_e [cm^-3]) and only weakly on κ. These small differences in the suppression factors with κ are however comparable with the precision of the fits to the suppression factors themselves as obtained by Nikolić et al. (2018).
Larger differences with κ in the suppression factors occur in cases where the Maxwellian suppression factors decrease with T. With decreasing κ, the suppression factor progressively increases, i.e., the dielectronic recombination rate becomes progressively less suppressed (see Equation (7)). This behavior of the suppression factor is important for transition-region ions, such as Si IV, at temperatures where the ion is formed, and it occurs for a range of electron densities (see the top panel of Figure 4). It also occurs at high densities for ions such as Fe XV, but only at relatively low temperatures below log(T [K]) ≈ 6, where the ion is not formed in ionization equilibrium.
Figure 5 shows the resulting density-suppressed dielectronic recombination rates for Maxwellian (left panels) and κ-distributions with κ = 2 (right panels). Other κ-distributions are not shown, as in most cases the suppression coefficient is not strongly sensitive to κ. It can be seen that the density suppression strongly depends on log(N_e [cm^-3]), the atomic parameters of the ions (with Fe VIII being much more strongly affected than Fe X), and only slightly on T.
How the resulting relative ion abundances are affected varies depending on all parameters, mostly on the ion and N_e, but also on T and κ, as the changes in the low-density ionization equilibrium with T and κ (see Dzifčáková & Dudík 2013) are compounded or reduced by changes with N_e for the individual ion. Generally, the peaks of the relative ion abundances are shifted to lower log(T [K]) for higher electron densities. Elements with higher Z are typically more affected, and the largest changes can occur at transition-region temperatures, i.e., for log(T [K]) ≲ 6.
A comparison of the resulting ionization equilibria (i.e., relative ion abundances) for Si and Fe is shown in Figures 6, 7, and 8. Figure 6 shows the behavior of the relative ion abundance for several important ions, Si IV, Fe VIII, and Fe X, while Figures 7 and 8 show multiple ions of Si and Fe, respectively. Aside from the density-dependent dielectronic recombination, these ionization equilibria also contain the effects of EIMI (Section 3.2). As already noted, however, other processes could still influence the ionization equilibrium, especially for low charge states (see, for example, Dufresne et al. 2020).
For the elements Si and Fe, the resulting ionization equilibrium in the transition region is sensitive to the electron density regardless of the electron distribution, with some ions, such as Si IV and Fe VI-Fe VIII, being affected more than others. For example, at electron densities log(N_e [cm^-3]) ≈ 10-11, typical of the transition region, the peak of Fe VII is shifted to lower T by a factor of 0.7 both for the Maxwellian and κ = 2, and the peaks get wider with progressively lower κ (a well-known effect; see Figure 8 and Dzifčáková & Dudík 2013). The relative abundance of Si IV is very sensitive to N_e, but more so for Maxwellian distributions than for κ = 2. For the Maxwellian distribution, the peak of Si IV is shifted slightly, to about log(T [K]) = 4.8, and at densities of log(N_e [cm^-3]) = 10, it is nearly 2 times higher than for the low-density limit (see also Figure 13 of Polito et al. 2016). For κ = 2, however, the peak of Si IV is almost independent of N_e. This is due to the increase in the suppression factor for such low κ (see the top panel of Figure 4).
At temperatures and electron densities typical of the solar corona, the effects of suppression of dielectronic recombination at higher N_e are much more subtle. Regardless of the value of κ, for log(N_e [cm^-3]) = 9-10, the shift of the ionization peaks with N_e is small and the peaks almost do not change their shapes (see Figure 9). This is important for diagnostic purposes, as the previous diagnostics of κ (Dudík et al. 2015; Lörinčík et al. 2020; Del Zanna et al. 2022) were based on the low-density ionization equilibrium calculations. For flare temperatures and densities, the density suppression of dielectronic recombination is negligible and thus does not affect the diagnostics of flaring plasma (see Dzifčáková et al. 2018).
We note that the CHIANTI software and database do not yet contain ionization equilibria with density-suppressed dielectronic recombination or EIMI. Therefore, the respective Maxwellian ionization equilibria are also included in the present version of the KAPPA database. The naming conventions for the respective ionization equilibrium file names are detailed in Appendix A. Finally, we caution the reader that the effects of finite density due to the suppression of dielectronic recombination and the effects of κ-distributions may not be readily distinguishable, especially in the transition region, and that caution should be exercised in interpreting the observed spectra arising from plasma at electron densities where these effects play a role.
Excitation Rates
We endeavor to maintain the database compatible with the latest version of CHIANTI, currently version 10.1. Following that, there are no major changes in the excitation data within KAPPA since Paper II. The only exception is that the KAPPA database now includes the requisite files containing collisional excitation and de-excitation rates for the respective values of κ = 1.7, 1.8, 1.9, as well as 2.5 (see also Section 3.1), so that the respective synthetic spectra can be calculated. The naming conventions for the corresponding files are described in Appendix B.
We note that the atomic data sets are huge, with some ions containing hundreds of energy levels. Therefore, maintaining compatibility with CHIANTI is a huge task. As the atomic data within CHIANTI can change at any time, KAPPA has, since Paper II, contained its own branch of Maxwellian excitation cross sections for spectral synthesis. This is done so that the corresponding Maxwellian calculations are always available. Should the compatibility of the NMED data within KAPPA with respect to the atomic data in CHIANTI be broken at any time, users are encouraged to contact the KAPPA team with a request to update our atomic data for the κ-distributions.
As an example of the synthetic spectrum calculated for low values of κ for multiple ions, in Figure 10 we show a portion of the X-ray spectrum at 14-18 Å observable by the MaGIXS instrument (Savage et al. 2023), currently scheduled for second launch in 2024. There, three spectra are shown: a Maxwellian one in black, a κ = 2 one in red, and a κ = 1.7 spectrum in violet. The spectra are calculated for a constant log(T [K]) = 6.6 and scaled in emission measure so that the ratio of the two Fe XVII lines, 15.01 Å / 15.26 Å, is kept constant. This approach follows the example spectrum provided in Figure 3 of Dudík et al. (2019), and enables one to immediately recognize which spectral lines are sensitive to κ if the value of log(T [K]) is held constant. Figure 10 shows that at this temperature typical of active region cores, the O VII and O VIII lines become strongly enhanced with respect to the neighboring Fe XVII ones, especially for extremely low κ = 1.7. However, we note that the present spectra are calculated for a simple case of constant log(T [K]) = 6.6; a value where the Fe XVII abundance is relatively low for κ = 1.7 (see Figure 2). The temperature is typically a parameter determined from observations using a range of synthetic spectrum calculations where the log(T [K]) can vary (see, for example, Figures 14 and 16 of Savage et al. 2023 and the discussion therein). Nevertheless, our example calculations demonstrate that at least in principle, it could be possible to distinguish the extreme case of κ = 1.7 even from κ = 2 using the optically thin spectra of the solar corona.
Summary
We performed an update of the ionization equilibrium calculations in the KAPPA database together with several improvements. These include:
1. extension of the calculations toward low values of κ < 2 and the addition of κ = 2.5;
2. addition of EIMI; and
3. addition of density suppression of dielectronic recombination.
The extension of the KAPPA database to extremely low values of κ < 2 was also done for the excitation rates, so that full calculations for such values of κ can now be performed. This extension of the database was prompted by the recent results indicating that the value of κ in the solar transition region, corona, and flares can be quite low (Dudík et al. 2017a; Dzifčáková et al. 2018; Lörinčík et al. 2020; Del Zanna et al. 2022).
The process of EIMI is generally not important for the Maxwellian electron distribution, but it becomes important for strongly non-Maxwellian distributions, such as those with low values of κ. Therefore, its inclusion is necessary for proper diagnostics of the electron distribution using emission-line intensities formed in neighboring ionization stages (see Lörinčík et al. 2020; Del Zanna et al. 2022). In agreement with Hahn & Savin (2015a), we find that the ionization balance for Fe is significantly modified at log(T [K]) ≈ 6-7, where multiple ions are affected if the value of κ is low.
To evaluate the density suppression of dielectronic recombination for κ-distributions, we followed the approach of Nikolić et al. (2018) by calculating the respective suppression factors for κ-distributions. To do that, we employed the Maxwellian decomposition method of Hahn & Savin (2015a). We find that the suppression factors are in most cases very close to Maxwellian. At the same temperature, they are within 10% of the respective Maxwellian ones, although larger differences can occur, especially for transition-region ions such as Si IV. The ionization equilibria were subsequently calculated for all values of κ and a range of electron densities, and made available within the KAPPA database. Notably, the suppression of dielectronic recombination affects the coronal iron ions, as well as several transition-region ions, such as Si IV. For the latter, the density suppression of dielectronic recombination becomes relatively unimportant for extremely low values of κ.
We note that the present improvements are not a substitute for the generalized collisional-radiative modeling that is required for some emission lines, such as those from the transition region (see, e.g., Dufresne et al. 2020 and references therein for a detailed discussion). Nevertheless, the processes listed above should serve to increase the accuracy of diagnostics of κ-distributions, especially low values of κ, in the solar corona and possibly in other astrophysical environments.
Figure 1. Electron κ-distributions as a function of electron kinetic energy E, plotted for various values of κ and log(T [K]) = 6.6.

Figure 2. Relative ion abundances and their changes with decreasing κ. The individual colors denote the values of κ. Violet stands for κ = 1.7, while red denotes κ = 2. A variety of ions are shown, from the transition-region Si IV and Fe VIII to the coronal Fe XII and Fe XVII.

Figure 3. The effect of multi-ionization on the Fe ionization equilibrium for a Maxwellian distribution (top) and κ-distributions with κ = 5, 2, and 1.7 (rows 2-4). Calculations for all distributions, including the Maxwellian, were based on the ionization cross sections of Hahn et al. (2017) and recombination rates from CHIANTI version 10.1.

Figure 4. Examples of the behaviors of the suppression factors for dielectronic recombination for a selection of ions. Individual values of κ and log(N_e [cm^-3]) are indicated.

Figure 5. Dielectronic recombination rate coefficients, where the effects of finite density on dielectronic recombination are shown for Maxwellian distributions (left) and κ-distributions with κ = 2 (right). Several ions are shown: Si V, i.e., recombination from Si V to Si IV (top), Fe VIII (middle), and Fe X (bottom).

Figure 6. Relative ion abundances of Si IV (top), Fe VIII (middle), and Fe X (bottom), where the effects of finite density on dielectronic recombination are shown for Maxwellian distributions (left) and κ-distributions with κ = 2 (right). The calculations for all distributions, including the Maxwellian, were based on the ionization cross sections of Hahn et al. (2017) and recombination rates from CHIANTI version 10.1.

Figure 7. Silicon relative ion abundances (ionization equilibrium) for a Maxwellian distribution (top) and κ-distributions with κ = 5, 2, and 1.7 (rows 2-4). The individual line styles correspond to different values of log(N_e [cm^-3]). The calculations for all distributions, including the Maxwellian, were based on the ionization cross sections of Hahn et al. (2017) and recombination rates from CHIANTI version 10.1. Note that the effect of EIMI was included.

Figure 8. The same as Figure 7, but for iron.

Figure 9. Fe Maxwellian ionization equilibrium (top) and κ = 2 (bottom) in the vicinity of T = 10^6 K for the low electron density (full lines) and density-dependent equilibria for log(N_e [cm^-3]) = 9 (dotted-dotted-dotted-dashed lines) and 10 (dashed lines). Calculations for all distributions, including the Maxwellian, were based on the ionization cross sections of Hahn et al. (2017) and recombination rates from CHIANTI version 10.1.

Figure 10. Simulated MaGIXS spectra in the 14-18 Å range showing multiple Fe XVII, Fe XVIII, O VII, and O VIII lines at constant log(T [K]) = 6.6. Maxwellian spectra are denoted by the black lines, while κ = 2 and 1.7 are denoted by red and violet, respectively. Note the spectra are scaled to keep the Fe XVII 15.01 Å / 15.26 Å ratio constant, to highlight changes in other spectral lines.
Changes of apoptosis regulation in the endometrium of infertile women with tubal factor and endometriosis undergoing in vitro fertilization treatment
Objective To establish the relationship between endometrial apoptosis parameters and endometrial receptivity in infertile women undergoing IVF treatment. Methods 73 women with tubal infertility, 27 infertile women with endometriosis and 13 healthy fertile women (control group) were recruited into the study. 53 women with tubal factor infertility and 17 patients with endometriosis later entered the IVF protocol. Samples of endometrial tissue were used as the material for investigation. Endometrium was collected on days 20-24 of a non-conceptual cycle, using the Pipell suction curette. XIAP, PTEN and HSP27 mRNA expression in the endometrial tissue was assessed by real-time RT-PCR. Results In women with tubal infertility, high levels of XIAP, HSP27 and PTEN mRNA expression were found in the endometrium. In infertile women with endometriosis, increased expression of XIAP and HSP27 mRNAs was noted. A successful IVF outcome in women with tubal infertility was associated with the maximal level of PTEN synthesis, and in the endometriosis group pregnancy after IVF treatment was achieved in women with lower expression of XIAP mRNA. Conclusion A high level of pro-apoptotic factor synthesis in the endometrium during the window of implantation is associated with the readiness of the endometrium for implantation.
INTRODUCTION
Today in vitro fertilization (IVF) is widely used for the treatment of infertility in couples with unexplained subfertility, male subfertility, endometriosis or tubal pathology (Van Loendersloot et al., 2010). But analysis of the data received from different clinical centers shows that the pregnancy rate after IVF treatment is still rather low and does not exceed 40-50% per embryo transfer cycle (Strowitzki et al., 2006). The impact of several clinical factors such as infertility etiology, female age, duration of infertility, number of 10-14-mm follicles, and progesterone level on the day of hCG injection on the outcome of IVF treatment has been proven (Van Loendersloot et al., 2010; Coccia et al., 2011; Kummer et al., 2011; Cai et al., 2011). Endometrial receptivity is also estimated as an important factor of IVF success, because to achieve a successful implantation, pregnancy and subsequent live child birth, the endometrium should be able to accept the embryo (Strowitzki et al., 2006; Garrido et al., 2002). It is known that during the menstrual cycle the human endometrium undergoes morphologic and biochemical modifications until a receptive endometrium is developed (Strowitzki et al., 2006). Apoptosis helps to maintain the cellular homeostasis in the endometrium during the menstrual cycle. During the proliferative phase of the menstrual cycle, which is characterized by active cell proliferation and angiogenesis, apoptotic cells are practically absent in the endometrium (Garrido et al., 2002). Supposedly the high level of Bcl-2 (B cell lymphoma/leukaemia-2) expression leads to the inhibition of apoptosis in the endometrium during this phase of the cycle (Harada et al., 2004). In the secretory phase of the cycle, namely during the short period of endometrial receptivity known as the "window of implantation", the first signs of apoptosis in the glandular epithelium take place (Szmidt et al., 2010). In humans the endometrium becomes receptive to blastocyst implantation at 6-8 days after ovulation and remains receptive for approximately 4 days (cycle days 20-24) (Szmidt et al., 2010). Normally apoptosis of endometrial cells is significantly increased at this period, providing the successful invasion of the blastocyst by locally induced cell death at the site of contact between the embryo and the endometrial surface (Rashid et al., 2011). With progressing of implantation, the regression of decidual cells allows a restricted and coordinated invasion of trophoblast cells into the maternal compartment due to the balanced expression of Bax, Bcl-2 and caspase-9 proteins in the decidual compartment and the high level of caspase-3 synthesis in the apoptotic uterine epithelium (Joswig et al., 2003). So, apoptosis regulation is crucial for embryo implantation. But it is obvious that many apoptosis-regulating factors directly implicated in the cross-talk between the embryo and endometrium preceding implantation have not yet been elucidated. The relationships between endometrial apoptosis, the readiness of the endometrium for implantation, and IVF outcome are not yet clear.
In our work we attempted to: a) define the level of expression of mRNAs of factors with pro- and anti-apoptotic activity in the endometrium of women with tubal infertility and with endometriosis-associated infertility; b) compare the rate of IVF success in groups of women with tubal infertility and endometriosis-associated infertility; and c) retrospectively analyze the character of apoptosis regulation in the endometrium of infertile women with different outcomes of IVF treatment to elucidate possible new endometrial markers of IVF success.
Patients
Women undergoing IVF treatments at the Center of Family Planning of the Ivanovo State Research Institute of Maternity and Childhood between 2009 and 2011 were recruited into the study. (The study was presented at the 13th International Symposium for Immunology of Reproduction, 22-24 June 2012, Varna, Bulgaria.) The first study group consisted of 73 women with tubal factor infertility, aged 33.13±0.42 years. The second study group consisted of 27 women with infertility associated with endometriosis, aged 33.14±1.02. Diagnosis of endometriosis was earlier confirmed by laparoscopic investigation. 13 gynecologically healthy women with proved fertility, aged 30.60±1.66, who were admitted for tubal ligation, were taken into the investigation as the control group. Samples of endometrial tissue were used as the material for investigation. Biopsies of endometrial tissue were obtained at the time of the "window of implantation" (cycle days 20-24) of a non-conceptual cycle. On the same day as endometrial sampling, serum levels of FSH and LH were determined. In infertile women, samples of the endometrial tissue were taken 1-4 months before IVF treatment. Biopsy was performed using the Pipell (De Cornier, Laboratoire C.C.D., France) suction curette. All patients had regular menstrual cycles and were not taking any hormone therapy at the time of biopsy. The Ethics Committee of the Ivanovo State Research Institute of Maternity and Childhood approved this study and each woman participating in our study gave informed consent. 53 women with tubal factor infertility and 17 patients with endometriosis later entered the IVF protocol. Ovarian hormonal stimulation was conducted according to a standard "long" protocol using a gonadotrophin-releasing hormone (GnRH) agonist (Diphereline daily; Ipsen, France) and recombinant follicle-stimulating hormone (Gonal-F; Merck Serono, Switzerland). Serum hCG was checked 14 days after embryo transfer and patients with an hCG level >50 IU/L were considered pregnant. An ultrasound scan was performed one week later and then again three weeks later in order to determine the number of intrauterine gestational sacs present and fetal viability, respectively. All the pregnant women subsequent to IVF treatment were followed until the termination of pregnancy or live birth. A living child one week after delivery is defined as a live birth.
Real-time RT-PCR
Total RNA was isolated from the whole endometrial tissue using the standard guanidinium thiocyanate-phenol-chloroform method (Van Velden et al., 2003). RNA was converted to complementary DNA (cDNA) using random hexamers and murine leukaemia virus reverse transcriptase (Promega, USA). For real-time quantitative RT-PCR, commercial sets of gene-specific primers and probes for β2-microglobulin (housekeeper gene), X-linked inhibitor of apoptosis (XIAP), phosphatase and tensin homolog deleted on chromosome 10 (PTEN) and heat shock protein 27 (HSP27) (Sintol, Moscow, Russia) were used. For amplification, the iCycler iQ real-time PCR detection system (BIO-RAD Laboratories, California, USA) was used. The copy numbers of cDNAs of specific genes were assessed using the control cDNA dilution series. For each sample the copy numbers of β2-microglobulin and the specific genes were determined from the appropriate standard curve generated by the iCycler iQ software. The amount of the specific gene was subsequently divided by the housekeeper gene amount to obtain the normalized specific gene value, and results were presented as the ratio in a sample × 10^1 per μl for PTEN and as the ratio in a sample × 10^4 per μl for HSP27 and XIAP.
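The relative quantification described above can be sketched as follows; all Ct values, dilution copy numbers, and the resulting ratio are illustrative placeholders, not study data.

```python
# Sketch of standard-curve quantification with normalization to a housekeeper gene.
import numpy as np

def fit_standard_curve(copies, ct):
    """Fit Ct = slope * log10(copies) + intercept; return (slope, intercept)."""
    slope, intercept = np.polyfit(np.log10(copies), ct, 1)
    return slope, intercept

def copies_from_ct(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

# Standard dilution series (copies/ul) and measured Ct values (placeholders).
std_copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
ct_target = np.array([18.1, 21.5, 24.9, 28.2, 31.6])
ct_b2m = np.array([16.0, 19.4, 22.8, 26.1, 29.5])

s_t, i_t = fit_standard_curve(std_copies, ct_target)
s_h, i_h = fit_standard_curve(std_copies, ct_b2m)

sample_ct_target, sample_ct_b2m = 26.0, 20.5
normalized = copies_from_ct(sample_ct_target, s_t, i_t) / copies_from_ct(sample_ct_b2m, s_h, i_h)
print(normalized)   # ratio of target to housekeeper copy numbers
```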
Statistics
Data were analyzed using STATISTICA 6.0 software. Results for the levels of mRNA expression were presented as the mean ± standard error. Statistical analysis was performed using Student's t-test for parametric variables and the chi-square test for categorical variables. A P value <0.05 was considered statistically significant.
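For illustration, the same comparisons can be reproduced in Python as sketched below with placeholder data.

```python
# Placeholder illustration of the statistical comparisons described above.
import numpy as np
from scipy import stats

xiap_tubal = np.array([4.2, 5.1, 3.8, 6.0, 4.9])        # placeholder expression values
xiap_control = np.array([2.1, 1.8, 2.5, 2.0])
t_stat, p_ttest = stats.ttest_ind(xiap_tubal, xiap_control)

# Pregnancy after IVF (yes/no) in tubal-factor vs endometriosis groups (placeholder counts).
table = np.array([[20, 33], [3, 14]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(p_ttest, p_chi2)
```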
Expression of pro-and anti-apoptotic factors mRNAs in the endometrium of infertile women with tubal factor and endometriosis
Our results show that the regulation of apoptosis was impaired in the groups of infertile women with tubal factor infertility and with endometriosis (Table 1). In women with tubal factor infertility, increased expression of mRNA for the pro-apoptotic factor PTEN and for the anti-apoptotic factors XIAP and HSP27 was noted compared with healthy women (P<0.05 in all cases). In the group of infertile women with endometriosis, significantly higher levels of XIAP and HSP27 mRNA expression were found compared with the control group (P<0.05 in both cases). Comparative analysis of the endometrial gene expression profiles in the two groups of infertile women showed lower PTEN mRNA expression in the endometriosis group than in the group of women with tubal infertility (P<0.05).
Apoptosis regulation in the endometrium of infertile women with different IVF treatment outcome
To establish the relationship between the character of endometrial apoptosis regulation and endometrial receptivity in infertile women, we compared the endometrial apoptosis parameters of women who later entered the IVF protocol according to the outcome of IVF treatment. It was found that the presence of endometriosis is a factor limiting successful implantation and pregnancy outcome after IVF (Table 2). The groups of infertile women were comparable in age and in the numbers of oocytes retrieved and embryos transferred, but the pregnancy rate after IVF was significantly lower in infertile women with endometriosis than in infertile women with tubal factor (P<0.05). The endometrial gene expression profiles also differed between women with endometriosis and women with tubal factor who achieved ongoing pregnancy after IVF (Table 3). In women with tubal factor infertility, implantation success was associated with an initially high level of PTEN mRNA expression in the endometrium (P<0.05). In infertile women with endometriosis, we did not find statistically significant differences in PTEN mRNA expression between women with different IVF outcomes, although ongoing pregnancy was seen in women with higher PTEN mRNA expression (P>0.05) (Table 3). A lower level of XIAP mRNA expression was associated with IVF treatment success in women with endometriosis (P<0.05).
(Table 1. The character of mRNA expression of genes regulating apoptosis in the endometrium of healthy women and patients with tubal infertility and endometriosis. * Differences between the control group and the groups of women with tubal factor and endometriosis are statistically significant (P<0.05); † differences between the groups of women with tubal factor and endometriosis are statistically significant (P<0.05).)
DISCUSSION
According to our results, the regulation of apoptosis in the endometrium of infertile women during the window of implantation is significantly impaired. We also found that the different types of infertility (tubal factor and endometriosis) were accompanied by different changes in the expression of apoptosis-regulating genes.
In women with tubal factor infertility, high levels of expression of mRNAs for both anti-apoptotic (XIAP and HSP27) and pro-apoptotic (PTEN) factors were noted compared with the fertile controls. It is well known that XIAP is one of the important inhibitors of apoptosis, able to bind caspase-9, -3 and -7 and inhibit their activity (Schimmer et al., 2006). XIAP expression is strongly elevated in patients with different types of tumors, and a high level of XIAP production is associated with poor prognosis (Schimmer et al., 2006). Elevated expression of HSP27 effectively protects cells from apoptosis (Schmitt et al., 2007). HSP27 has been shown to interact with and inhibit components of both stress- and receptor-induced apoptotic pathways. It has been demonstrated that HSP27 can prevent the activation of caspases; it does so by directly sequestering cytochrome c when it is released from the mitochondria into the cytosol (Schmitt et al., 2007). According to the literature, apoptosis in endometrial tissue increases during the window of implantation (Szmidt et al., 2010). It has been suggested that this phenomenon regulates the establishment of endometrial receptivity and provides for adequate invasion of the implanted blastocyst into the endometrial stroma (Szmidt et al., 2010). Thus, a high level of production of anti-apoptotic factors might be regarded as a negative factor that can reduce the receptivity of the endometrium of women with tubal factor infertility. Surprisingly, we found that synthesis of the pro-apoptotic factor PTEN was also significantly increased in the endometrium of women with tubal factor infertility. This factor was discovered as a tumor suppressor in 1997 (Zhang & Yu, 2010). Somatic mutations of PTEN have been identified as a prevalent event in different types of tumors, particularly those of the endometrium, brain, skin and prostate (Zhang & Yu, 2010). PTEN has been shown to be a non-redundant, evolutionarily conserved dual-specificity phosphatase that is capable of removing phosphates from protein and lipid substrates (Bononi et al., 2011). The primary target of PTEN is the lipid second messenger intermediate PIP3 (phosphatidylinositol 3,4,5-trisphosphate). PTEN removes the phosphate from the three-position of the inositol ring to generate PIP2 (phosphatidylinositol 4,5-bisphosphate), thereby directly antagonizing intracellular signaling through the PI3K/Akt pathway (Zhang & Yu, 2010). It is known that Akt is the major downstream effector of PI3K (phosphoinositide 3-kinase) signaling; it can phosphorylate a wide array of substrates and thus stimulates cell growth, proliferation and survival (Zhang & Yu, 2010). It is now well documented that PTEN function affects diverse cellular processes such as cell-cycle progression, cell proliferation, apoptosis, aging, the DNA damage response, angiogenesis, muscle contractility, chemotaxis, cell polarity and stem cell maintenance (Bononi et al., 2011). The requirement of PTEN for embryonic development has also been established (Bononi et al., 2011).
(Table 3. The level of mRNA expression of genes regulating apoptosis in the endometrium of infertile women with different IVF treatment outcomes.)
Recently, the role of PTEN in the mammalian uterus, as well as its requirement for proper trophoblast invasion and decidual regression, was demonstrated (Laguë et al., 2010). Taking these properties of PTEN into account, we suggest that the high level of PTEN mRNA expression in the endometrium of patients with tubal factor infertility might be regarded as a positive mechanism that compensates for the overexpression of anti-apoptotic factors and facilitates the readiness of the endometrium for implantation. The endometrium of women with endometriosis was also characterized by high levels of XIAP and HSP27 mRNA expression. However, we did not find any changes in the expression of pro-apoptotic factor mRNAs in this group, and the level of PTEN mRNA expression in the endometriosis group was significantly lower than in infertile women with tubal factor. Thus, high synthesis of anti-apoptotic factors is characteristic of the endometrium of women with endometriosis. It was suggested earlier that there are some fundamental differences in the endometrium of women with endometriosis, such as resistance to apoptosis, which could contribute to the survival of endometrial cells regurgitated into the peritoneal cavity and to the development of endometriotic lesions (Harada et al., 2004). Likely, high synthesis of XIAP and HSP27 in the endometrium can also decrease apoptosis of endometrial cells and facilitate their viability and growth in the peritoneal cavity. On the other hand, a high level of anti-apoptotic activity in the endometrium during the implantation window, without a counterbalancing elevated expression of pro-apoptotic factors, can possibly lead to decreased endometrial receptivity in women with endometriosis. This suggestion is supported both by literature data (Coccia et al., 2011) and by our results on the association of endometriosis with poor IVF outcome.
To elucidate the possible interaction between the character of endometrial apoptosis regulation and endometrial receptivity, we analyzed apoptosis-associated gene expression in the endometrium of women with different outcomes of IVF treatment.
In women with tubal factor infertility the pregnancy achievement was associated with the maximal level of PTEN mRNA expression.
These results allow us to suggest that PTEN is essential for endometrial receptivity and successful implantation. The assessment of PTEN synthesis in the endometrium might therefore be used as a predictor of endometrial receptivity, at least in infertile women with tubal factor. In women with endometriosis, a positive IVF outcome was associated with a low level of XIAP mRNA expression in the endometrium. Thus, in this group the same association between high endometrial apoptosis during the window of implantation and IVF success was traced.
CONCLUSION
In summary, assessment of apoptotic activity in the endometrium during the window of implantation provides important information about endometrial receptivity.
A high level of synthesis of apoptosis-inducing factors in this period provides optimal conditions for preparing the endometrium for implantation. A high level of PTEN mRNA expression in the endometrium of infertile women with tubal factor is associated with successful IVF treatment. The lower expression of PTEN mRNA in the endometrium of women with endometriosis might be responsible for the lower rate of IVF success in this group. Further investigation of this problem may allow the development of new predictors of IVF outcome.
|
2019-03-13T13:30:45.823Z
|
2014-03-27T00:00:00.000
|
{
"year": 2014,
"sha1": "c7dc52413c541f6321975813be436cfea9014578",
"oa_license": null,
"oa_url": "https://doi.org/10.5935/1518-0557.20140084",
"oa_status": "BRONZE",
"pdf_src": "Adhoc",
"pdf_hash": "995645478f9cb0d18c3f6dc3825129a28763d3b7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
229694704
|
pes2o/s2orc
|
v3-fos-license
|
Fossil leaf wax hydrogen isotopes reveal variability of Atlantic and Mediterranean climate forcing on the southeast Iberian Peninsula between 6000 to 3000 cal. BP
Many recently published papers have investigated the spatial and temporal manifestation of the 4.2 ka BP climate event at regional and global scales. However, questions with regard to the potential drivers of the associated climate change remain open. Here, we investigate the interaction between Atlantic and Mediterranean climate forcing on the south-eastern Iberian Peninsula during the mid-to late Holocene using compound-specific hydrogen isotopes from fossil leaf waxes preserved in marine sediments. Variability of hydrogen isotope values in the study area is primarily related to changes in the precipitation source and indicates three phases of increased Mediterranean sourced precipitation from 5450 to 5350 cal. BP, from 5150 to 4300 cal. BP including a short-term interruption around 4800 cal. BP, and from 3400 to 3000 cal. BP interrupted around 3200 cal. BP. These phases are in good agreement with times of prevailing positive modes of the North Atlantic Oscillation (NAO) and reduced storm activity in the Western Mediterranean suggesting that the NAO was the dominant modulator of relative variability in precipitation sources. However, as previously suggested other modes such as the Western Mediterranean Oscillation (WeMO) may have altered this overall relationship. In this regard, a decrease in Mediterranean moisture source coincident with a rapid reduction in warm season precipitation during the 4.2 ka BP event at the south-eastern Iberian Peninsula might have been related to negative WeMO conditions.
Introduction
In recent years much effort has been made in reconstructing and understanding the socioenvironmental dynamics associated with the 4.2 ka BP event.Initially, the 4.2 ka BP event was described as an "archaeological event" in the Near East, where the Akkadian Empire potentially collapsed due to an increase in regional aridity [1].Similar climatic related collapses or transformations within ancient societies at that time have also been documented in different regions across the northern hemisphere [2][3][4] including southern Iberia [5,6].However, regional heterogeneity in both, climatic conditions and social developments, have been indicated within the Mediterranean region [6][7][8].Still, the narrative of a megadrought affecting ancient societies across Asia, the Mediterranean, and northeast Africa arose [9,10].
Such a first evidence of climate related collapses or transformations in ancient societies have promoted intense investigations of the "climatic" 4.2 ka BP event, which resulted in a variety of associated paleoclimatic studies from the Mediterranean area [6,7,11,12], Asia [13][14][15], North America [16], the northern North Atlantic region [17][18][19], and the southern hemisphere as well [20,21].Altogether, paleoclimatic studies point to a series of climatic anomalies between 4400 and 3800 cal.BP, which across the Mediterranean region are often registered as dry and cool events [6,7].But, climatic conditions may have been variable on a regional and seasonal scale with humid conditions prevailing occasionally [6,11,22].
However, potential drivers of the more widespread drier and cooler climate periods across the mid-latitudes of the northern hemisphere are not yet understood.Since the North Atlantic Oscillation (NAO) is modulating winter precipitation across large parts of the Mediterranean region, in particular the Western Mediterranean [23,24], it is often regarded as one important driver for drought associated with the 4.2 ka BP event [11,25,26].At that time, a major NAOlike forcing is also suggested by modelling studies [27].On the other hand, recent studies have indicated that the 4.2 ka BP event in the Mediterranean region could have been more pronounced during the summer season [6,7,28].Thus, the search for a potential driver of the climatic 4.2 ka BP event particularly in the Western Mediterranean region has to include seasonal variability with Atlantic winter versus Mediterranean summer forcing.
To shed further light on the potential driver of climate variability around 4200 cal.BP, we investigated the temporal variability of the interaction between Atlantic and Mediterranean climatic regimes.Therefore, we analysed compound-specific hydrogen and carbon isotopes from fossil leaf waxes, i.e. land-derived n-alkanes, as tracer of past atmospheric circulation patterns in a well-suited marine sediment core from the Alboran Sea-an area actually located at the interface of Atlantic and Mediterranean climate regimes (Fig 1).
Study area
The terrestrial organic compounds investigated in this study are mainly sourced from the catchment areas of the Guadiaro and Guadalhorce rivers (Fig 1). Both catchment areas drain the Thermo- and Mesomediterranean vegetation belts (Fig 1). These vegetation belts in the southern Iberian Peninsula are generally characterized by scrubland and grassland with only patchy forests in favourable locations such as water courses [29]. While the main tree species are different Quercus species, water courses are dominated by Salix, Fraxinus, and Ulmus [29]. Important shrub species in the study area are Juniperus, Phillyrea, Erica, Olea, and Arbutus unedo [29].
Most of the aforementioned species are adapted to the high seasonality of the study area, which is currently under a Mediterranean climate influence characterized by a cool and rainy winter season and a hot and dry summer season. Based on data from the Spanish State Meteorological Agency (AEMET), modern winter conditions in the study area range between a minimum air temperature of 12.1˚C at Málaga airport and 13.0˚C at Tarifa close to Gibraltar. At these stations, precipitation reaches a winter maximum of up to 100 and 118 mm at Málaga and Tarifa, respectively. In contrast, summer conditions are characterized by maximum temperatures between 22.3 and 26.0˚C at Tarifa and Málaga. Between June and August there is almost no precipitation at either station.
The temporal variability of precipitation in the study area is mainly controlled by the North Atlantic Oscillation (NAO), which primarily transports moisture from the Atlantic during the winter season [24,43,44]. During positive NAO (NAO+; i.e. a high difference in sea level pressure between the Azores and Iceland) the study area experiences drier conditions, because the main storm track of the westerlies leads towards northern and central Europe (Fig 2), and vice versa. A secondary mode responsible for temporal climate variability in the area is the Western Mediterranean Oscillation (WeMO) [45]. In its positive phase (WeMO+) the Western Mediterranean Oscillation is associated with relatively cool and dry north-westerly winds, while during its negative phase (WeMO-) relatively warm and humid easterly winds prevail (Fig 2). The WeMO is active throughout the year, but WeMO- conditions are particularly associated with increased winter precipitation along the Catalonian and Valencian coasts [45,46]. The scarce precipitation during the summer season in the study area, when the NAO-driven influence is less pronounced [47], is mainly driven by mesoscale synoptic patterns and local convective systems [48]. These are responsible for short torrential rainfall events, which are particularly evident along the Mediterranean coast during times of a high land-sea temperature contrast [48]. Such a high land-sea temperature contrast may be promoted by the advection of cool northerly air masses under WeMO+ conditions during summer [46].
Overall, both atmospheric modes, the NAO and the WeMO, modulate the temporal variability of the two main precipitation sources at the Iberian Peninsula, which are an Atlantic source dominant during winter and a Mediterranean source during summer [49]. These two precipitation sources are also reflected in the spatial distribution of the hydrogen isotopic composition of the amount-weighted long-term annual mean precipitation (δD prec). In the Atlantic-dominated inland areas, amount-weighted annual mean δD prec values are typically below -40 ‰ VSMOW, while coastal areas, which are more intensely affected by local precipitation sources, vary between -15 and -35 ‰ VSMOW (Fig 1). Moreover, at Gibraltar meteorological station amount-weighted δD prec values show no significant correlation with precipitation amount or temperature on the annual scale (Fig 1). However, on the monthly scale amount-weighted δD prec values are highly correlated with precipitation amount (R = 0.849; p < 0.001) and temperature (R = 0.756; p = 0.004) (not shown). This is because the precipitation source, which generally varies on a seasonal scale (Atlantic in winter and Mediterranean in summer) [49], might be responsible for the observed seasonal variability of amount-weighted δD prec values in the study area [48].
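The monthly-scale correlations quoted above can be reproduced with a few lines of code once the GNIP station data are at hand. The sketch below is a minimal example; the file name and column names are assumptions and do not correspond to the actual GNIP export format.

```python
import pandas as pd
from scipy.stats import pearsonr

# Assumed input: a CSV of monthly station data with columns
# "dD_prec" (amount-weighted, permil VSMOW), "temperature" (degC)
# and "precipitation" (mm); rows with missing dD (e.g. July) are dropped.
df = pd.read_csv("gibraltar_monthly_gnip.csv").dropna(subset=["dD_prec"])

for column in ("temperature", "precipitation"):
    r, p = pearsonr(df["dD_prec"], df[column])
    print(f"dD_prec vs {column}: R = {r:.2f}, p = {p:.3f}")
```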
n-Alkane data as paleoclimatic proxy
n-Alkanes are synthesized by terrestrial plants as leaf waxes in order to protect themselves against water loss due to evapotranspiration [50].After eolian or riverine transport, leaf waxes are deposited in soils, lakes, and marine sediments [51][52][53].After deposition in the sediments, n-alkanes are relatively resistant to diagenesis [54,55] and may serve as paleoclimatic indicator.Potential alteration of n-alkanes can be investigated by the carbon preference index (CPI), which illustrates the ratio of odd versus even chain lengths [56].Usually, a CPI above 2 indicates that n-alkanes are not altered and, thus, can be reliably used as paleoclimatic indicator [57].The average chain length (ACL) of n-alkanes might be used as such a paleoclimatic indicator since it has been found to record regional aridity in Mediterranean settings [58,59].This is based on the observation, that plants produce on average longer n-alkane homologues resulting in an increasing ACL, when water availability is reduced [60].Also, isotopic analyses of individual n-alkanes, have been used extensively during the last years in tropical [61][62][63][64] but also in Mediterranean regions [65][66][67][68] in order to assess terrestrial environmental and climatic parameters.
Analysing individual n-alkane homologues for carbon isotopes (δ 13 C Cx ) provides important information on the distribution of C3 and C4 plants [69].Due to their different photosynthetic pathway, C4 plants typically exhibit elevated δ 13 C Cx values varying between -20 to -15 ‰ compared to those from C3 plants, which vary between -45 to -30 ‰ [69,70].In environmental settings, which are characterized by stability of the vegetational record with regard to C3 vs. C4 plant distribution, δ 13 C Cx values may also record plant water stress [71].Increasing δ 13 C Cx values would point to isotopic enrichment within the plant's source water pool due to enhanced evapotranspiration.In Mediterranean settings, δ 13 C Cx data of the long chain nalkanes have been suggested to be most suitable for studying changes in humidity [58].
In settings mixed with C3 and C4 plants, the δ 13 C Cx values are further needed for potential correction of the hydrogen isotopic data of the individual n-alkanes (δD Cx ) [72,73].Despite of potential alteration through the vegetation type, the δD Cx data is highly correlated with that of the water source, i.e. precipitation (δD prec ) during the plants growing season [74].Additional factors controlling δD prec and thus, also δD Cx are atmospheric temperature, the amount of rainfall, evapotranspiration, and precipitation source [74][75][76].
Sediment core and age model
Sediment core ODP-161-976A (36˚12.320'N; 4˚18.760'W; 1108 m water depth) was retrieved in the Alboran Sea during the JOIDES RESOLUTION cruise in 1995 [77]. The sampling of this sediment core was already described in a previous paper [11]. To achieve multi-decadal resolution, the section from 100.0 to 149.0 cm was continuously sampled at 0.5 cm intervals in the IODP (International Ocean Discovery Program) core repository at MARUM (Center for Marine Environmental Sciences) in Bremen (Germany). The age model of sediment core ODP-161-976A has also been published in earlier publications [6,11]. The final age model of ODP-161-976A is based on 11 AMS 14C dates. The sediment core encompasses an analysed time period between ca. 5750 and 3000 cal. BP with a temporal resolution varying between 8 and 114 years.
Sample preparation and calculations
The sample preparation of sediment core ODP-161-976A followed the protocol of the biomarker laboratory at Kiel University and has already been described in an earlier study [11]. In short, n-alkanes were extracted from the freeze-dried and finely ground sediment samples with an accelerated solvent extractor (ASE-200, Dionex) at 100 bar and 100˚C using a 9:1 (v/v) mixture of dichloromethane (DCM) and methanol. After extraction, samples were desulfurized by stirring for 30 minutes with activated copper. The desulfurized n-alkanes were subsequently separated by silica gel column chromatography using activated silica gel and hexane. n-Alkanes were further separated using silver nitrate (AgNO3)-coated silica gel. Subsequently, individual n-alkane homologues were identified with an Agilent 6890N gas chromatograph equipped with a Restek XTI-5 capillary column (30 m x 320 μm x 0.25 μm) based on the comparison of their retention times with an external standard containing a series of n-alkane homologues of known concentration. On this basis, n-alkanes were also quantified using the FID peak areas calibrated against the external standard. The concentrations of odd terrestrial n-alkanes are provided in the supplement of this article.
Based on the quantified n-alkane concentrations, the carbon preference index (CPI) has been calculated in order to assess potential alteration of the sedimentary n-alkanes and thus their reliability as a paleoclimatic indicator:

CPI27-33 = 0.5 × [ Σ n-C27-33 (odd) / Σ n-C26-32 (even) + Σ n-C27-33 (odd) / Σ n-C28-34 (even) ]

As a paleoclimatic indicator, the average chain length (ACL) has been calculated following Norström et al. [58]:

ACL29-35 = Σ (x · n-Cx) / Σ n-Cx, with x = 29, 31, 33, 35

In both equations n-Cx refers to the concentration of the n-alkane with x carbon atoms. For final evaluation of the δD and δ13C data, the weighted-mean averages (WMAs) of all three individual isotopic records have been calculated according to the following equation:

δD WMA = Σ (δDx · n-Cx) / Σ n-Cx, with x = 29, 31, 33

with δDx and n-Cx representing the hydrogen isotopic value and the concentration of the n-alkane with x carbon atoms, respectively. The same equation has been applied to calculate the δ13C WMA data.
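A compact way to compute these indices from measured homologue concentrations is sketched below. The functions simply restate the formulas above; the concentration and isotope values are placeholders, not data from core ODP-161-976A.

```python
def cpi_27_33(conc):
    """Carbon preference index for odd n-alkanes n-C27..n-C33.
    `conc` maps chain length -> concentration."""
    odd = sum(conc[x] for x in (27, 29, 31, 33))
    even_low = sum(conc[x] for x in (26, 28, 30, 32))
    even_high = sum(conc[x] for x in (28, 30, 32, 34))
    return 0.5 * (odd / even_low + odd / even_high)

def acl_29_35(conc):
    """Average chain length over odd n-alkanes n-C29..n-C35."""
    chains = (29, 31, 33, 35)
    return sum(x * conc[x] for x in chains) / sum(conc[x] for x in chains)

def weighted_mean_average(values, conc, chains=(29, 31, 33)):
    """Concentration-weighted mean of isotope values (dD or d13C)."""
    return sum(values[x] * conc[x] for x in chains) / sum(conc[x] for x in chains)

# Placeholder concentrations (ng/g sediment) and dD values (permil VSMOW).
conc = {26: 5, 27: 20, 28: 8, 29: 60, 30: 10, 31: 80, 32: 9, 33: 45, 34: 4, 35: 15}
dD = {29: -150.0, 31: -152.0, 33: -130.0}

print(f"CPI 27-33 = {cpi_27_33(conc):.1f}")
print(f"ACL 29-35 = {acl_29_35(conc):.2f}")
print(f"dD_WMA    = {weighted_mean_average(dD, conc):.1f} permil")
```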
Compound-specific isotope analysis
For this study, terrestrial-sourced n-alkane homologues of sufficient concentration (i.e. n-C29, n-C31, and n-C33) have been analysed by gas chromatography-isotope ratio mass spectrometry (GC-IRMS) for δD and δ13C at the Leibniz Laboratory for Radiometric Dating and Stable Isotope Research at Kiel University. Samples have been measured on an Agilent 6890 gas chromatograph equipped with a Gerstel KAS 4 PTV injector and an Agilent DB-5 capillary column (30 m x 250 μm x 0.25 μm) coupled to a Thermo Scientific MAT 253 isotope ratio mass spectrometer (IRMS). Depending on the n-alkane concentration, between 5 and 30 μl of each sample has been injected 2-4 times in order to achieve a statistically robust analytical error for each n-alkane homologue. The δD and δ13C values are reported relative to the Vienna Standard Mean Ocean Water (‰ VSMOW) and Vienna Pee Dee Belemnite (‰ VPDB) scales based on Arndt Schimmelmann's A6 reference mixture from 2015 and A7 reference mixture from 2017, respectively.
Regional analysis
The regional analysis of past seasonal precipitation development is based on a compilation of various climatic proxies from speleothems, marine, lacustrine, and terrestrial archives from the Iberian Peninsula published in a previous publication [6].For this study, this compilation has been regionally subsampled for the Thermo-and Mesomediterranean vegetation belts in the southeast of the Iberian Peninsula providing new analysis on a regional scale.Furthermore, the ACL 29-35 calculated in this study has been included into the compilation.Altogether, 14 records are reflecting annual, 11 records are reflecting cold season, and 2 records are reflecting warm season precipitation variability (Table 1; Fig 1).The interpretational background of the used proxies is explained in detail in a previous publication [6] or in the individual publications listed in Table 1.However, because the majority of archives are based on pollen percentages, in the following their interpretational background is briefly recalled.Based on the modern relationship between arboreal pollen and cold season precipitation on the Iberian Peninsula [78], we interpreted decreasing arboreal pollen percentages as indicator for decreasing cold season precipitation.This is further in line with the application in other paleoclimatological studies from the area [26,40].On the other hand, xerophytic species including Chenopodiaceae, Amaranthaceae, and Artemisia have been shown to be indicative of prolonged annual dry periods [79].Consequently, an increase in xerophytic pollen percentages have been interpreted as indication of dry annual conditions.The z-scores [80] of all paleoclimatic proxies reflecting either annual, cold, or warm season precipitation have been combined to regional time-series of qualitative precipitation change.Prior to analysis, the speleothem data of El Refugio Cave [37], which has an average temporal resolution of 3 years, has been downscaled to a temporal resolution of 50 years.The calculation of 50-year means prevents the over-representation of this archive in the regional time-series since it had a much higher temporal resolution compared to all other archives.The regional time-series have then been smoothed using a gaussian LOESS smooth with a 2 nd order polynomial and a smoothing parameter of 0.2.
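As a rough illustration of how such a regional stack can be assembled, the sketch below standardizes two synthetic proxy series and applies a gaussian locally weighted quadratic smoother. The series, ages, and span value are placeholders, and real proxy records would first have to be interpolated onto a common age scale.

```python
import numpy as np

def zscore(x):
    """Standardize a proxy record to zero mean and unit variance."""
    x = np.asarray(x, dtype=float)
    return (x - np.nanmean(x)) / np.nanstd(x)

def gaussian_loess(t, y, span=0.2, degree=2):
    """Gaussian-weighted local polynomial smoother (LOESS-like).
    `span` sets the kernel width as a fraction of the full time range."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    width = span * (t.max() - t.min())
    smoothed = np.empty_like(y)
    for i, t0 in enumerate(t):
        w = np.exp(-0.5 * ((t - t0) / width) ** 2)
        # Weighted least-squares fit of a local polynomial centred at t0.
        coeffs = np.polyfit(t - t0, y, deg=degree, w=np.sqrt(w))
        smoothed[i] = coeffs[-1]  # polynomial value at t = t0
    return smoothed

# Hypothetical ages (cal. BP) and two proxy records reflecting cold-season precipitation.
ages = np.arange(3000, 6001, 50)
rng = np.random.default_rng(0)
proxy_a = np.sin(ages / 400.0) + 0.3 * rng.normal(size=ages.size)
proxy_b = np.sin(ages / 400.0 + 0.5) + 0.4 * rng.normal(size=ages.size)

# Stack the standardized records and average them into a regional time-series.
regional = np.nanmean(np.vstack([zscore(proxy_a), zscore(proxy_b)]), axis=0)
regional_smooth = gaussian_loess(ages, regional, span=0.2, degree=2)
```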
Results
The carbon preference index for odd n-alkanes between 27 and 33 carbon atoms (CPI 27-33) varies around a mean of 6.1 with a minimum of 2.9 and a maximum of 8.7 (Fig 3). The average chain length (ACL) has been calculated for odd n-alkanes with carbon numbers between 29 and 35. The ACL 29-35 reveals no long-term trend and varies between 31.2 and 30.6 with a mean of 30.9 (Fig 3).
Carbon and hydrogen isotopic data of three n-alkane homologues (n-C 29 , n-C 31 , and n-C 33 ) are presented for the period between ca.5800 and 3000 cal.BP.In this interval, the carbon isotopic values of the n-C 29 homologue (δ 13 C C29 ) vary between -31.4 and -32.3 ‰ (Fig 3).δ 13 C C31 values vary between -30.5 and -31.7 ‰, while δ 13 C C33 values range from -29.5 to -31.6 ‰.There is no obvious trend in any of the carbon isotopic time series, but average values progressively increase with increasing chain length.δ 13 C C29 values exhibit an average of -31.9 ‰, while δ 13 C C31 values vary around a mean of -31.3 ‰ and δ 13 C C33 values are on average -30.5 ‰ (Fig 3).The weighted-mean average of the carbon isotopes δ 13 C WMA varies between -31.8 and -30.9 ‰ with a mean value of -31.3 ‰ (Fig 3).
Within the studied timespan, the hydrogen isotopic values vary between -137.3 ‰ and -157.5 ‰ for the n-C 29 homologue (δD C29 ), between -141.7 ‰ and -160.8 ‰ for δD C31 , and between -113.8 ‰ and -139.9 ‰ for δD C33 (Fig 3).While the absolute values and the amplitude of δD C29 and δD C31 are similar, δD C33 values are slightly higher and variability is of larger amplitude.Apart from these differences, hydrogen isotopic values of all three homologues reveal a slightly increasing trend and significantly covary in the analysed period (Table 2).Based on the weighted-mean combination of all three δD records, periods of high and low isotopic values can be distinguished in the δD WMA data (Fig 4).High δD WMA values (-144.5 ‰ on average) are evident around 5400, from 5100 to 4300 with a short-term decrease around ca. 4800 cal.BP, from 3400 to 3300, and at 3000 cal.BP.In contrast, low δD WMA values averaging -149.4 ‰ are noticed at 5500, at 5300, and from 4200 to 3400 cal.BP.
Drivers of hydrogen isotopic variability
Variability of hydrogen isotopic data from individual n-alkane homologues is potentially driven by five major parameters, which are changes in (1) vegetation types, (2) precipitation amount, (3) atmospheric temperature, (4) evapotranspiration, and (5) precipitation source [73][74][75].In the following, we conclude that the variability of hydrogen isotopic values from individual n-alkanes originating from southeast Iberia and deposited in marine sediment core ODP-161-976A is primarily driven by changes in the source of precipitation.
Several studies have highlighted the dependence of n-alkane hydrogen isotopic composition on the distribution of C3 and C4 plants in modern [81] and paleoclimatic studies [72,82,83].This is due to the different photosynthetic regulatory pathways of these plant types, which results in a different apparent fractionation between the hydrogen isotopic value of precipitation (δD prec ) and that of n-alkanes (δD Cx ) [73].Any variation in the C3 vs. C4 plant distribution can be tested in parallel through n-alkane δ 13 C values.Typically, C4 plants exhibit δ 13 C Cx values between -20 to -15 ‰, while δ 13 C Cx values of C3 plants vary between -45 to -30 ‰ [69,70].In our case, the similarity and low variability in absolute values for all three compoundspecific carbon isotopic data (δ 13 C C29 , δ 13 C C31 , and δ 13 C C33 ) in sediment core ODP-161-976A suggest a dominant C3 vegetation cover throughout the entire period (Fig 3).This is in line with various pollen records from southeast Iberia, which show a dominance of C3 vegetation and no major change towards increased C4 vegetation between 6000 and 3000 cal.BP [26,39,41].Furthermore, δ 13 C and δD values within each n-alkane homologue, including the weighted-mean average (WMA) isotopic records, are not correlated at a significant level, with the exception of the n-C 31 homologue revealing a significant but moderate correlation (Table 3).This rules out any significant changes in plant types that may have affected the hydrogen isotopic data at the core site.
Other crucial parameters driving the hydrogen isotopic signal of n-alkanes are changes in atmospheric temperature, precipitation amount, and related evapotranspiration [73,74,84].Meteorological data from Gibraltar implies a general connection of δD prec values to modern changes in the amount of precipitation and atmospheric temperature on a monthly scale (Fig 1).However, no significant statistical correlation of δD prec values with precipitation amount (R = 0.228, p = 0.112) and atmospheric temperature (R = 0.200, p = 0.166) exists during modern times on the annual scale (Fig 1).Moreover, there is also no significant correlation between all three compound-specific hydrogen isotope records and their concentration as well as with other ODP-161-976A data such as the ACL 29-35 (reflecting changes in regional aridity [58]) and alkenone-based annual mean SST (Table 3), which are closely coupled to atmospheric temperature variability in southeast Iberia [6].However, correlations between the δD Cx records appear to increase with chain length (Table 3).Overall, a moderate correlation of ACL 29- 35 and the δD WMA record suggests that regional aridity may have had a minor influence on the hydrogen isotopic records.Aridity is closely related to evapotranspiration, which is potentially an important parameter to be considered, when interpreting δD Cx data in semi-arid climates such as southeast Iberia.Evapotranspiration can be studied using δ 13 C values, which are claimed to record plant water stress [71,85].As noted before, all individual δ 13 C records suggest a constant dominance of C3 plant species, enabling the interpretation of the δ 13 C WMA record as indicator of plant water stress [71].We also note, that δ 13 C WMA values are moderately correlated with ACL 29-35 values (R = 0.442, p = 0.001), further corroborating the use of the δ 13 C WMA record as recorder of past plant water stress in this study.However, δ 13 C WMA values are not significantly correlated to δD WMA values (Table 3) arguing against an influence of plant water stress and further, aridity on the hydrogen isotopic composition within the studied n-alkane homologues.Altogether, the ACL 29-35 and δ 13 C WMA records indicate a minor, if any, influence of aridity and related evapotranspiration on the δD WMA values.
Thus, we assume that these parameters as well as precipitation amount and atmospheric temperature have not been dominant drivers for the observed n-alkane hydrogen variability in ODP-161-976A during the analysed period.Another potential driver is the source of precipitation [75].Indeed, modern analyses of meteorological data from the Western Mediterranean indicate that δD prec values in the area depend on the source of precipitation with Atlantic derived precipitation exhibiting significantly lower δD prec values compared to Mediterranean derived precipitation [48,86].Since Atlantic sourced precipitation in the study area is much more prominent during the winter season [30,48], changes in precipitation source likely account for the apparent correlation of monthly amount-weighted δD prec values with the monthly variability of atmospheric temperature and precipitation amount (Fig 1) . n-Alkane δD values were also considered to reflect changes in the precipitation source by previous paleoclimatic studies from the Iberian Peninsula [65,67,68].Accordingly, we conclude that the dominant parameter driving hydrogen isotopic variability of individual n-alkanes in marine sediment core ODP-161-976A is the source of precipitation with low δD WMA values reflecting increasing Atlantic derived precipitation and high δD WMA values reflecting an increase in Mediterranean sourced precipitation.
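The pairwise correlation screening summarized in Tables 2 and 3 can be illustrated with a short routine; the series below are synthetic placeholders, and the routine simply prints Pearson R and p-values for each pair of records.

```python
import numpy as np
from scipy.stats import pearsonr

def correlation_table(records):
    """Pairwise Pearson R and p-values for equally sampled proxy records."""
    names = list(records)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r, p = pearsonr(records[a], records[b])
            print(f"{a} vs {b}: R = {r:.3f}, p = {p:.4f}")

# Placeholder series standing in for the measured down-core records.
n = 60
rng = np.random.default_rng(0)
base = rng.normal(size=n)
records = {
    "dD_WMA": base + 0.5 * rng.normal(size=n),
    "d13C_WMA": rng.normal(size=n),
    "ACL_29_35": 0.4 * base + rng.normal(size=n),
}
correlation_table(records)
```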
Over-regional driver of climate variability
Based on high δD WMA values we define three major periods of overall enhanced Mediterranean sourced precipitation (Fig 4).These range from ca. 5450 to 5350 cal.BP, from 5150 to 4300 cal.BP including a short-term decrease in δD WMA values around 4800 cal.BP, and from 3400 to 3000 cal.BP with an interruption around 3200 cal.BP.The latter two periods of enhanced Mediterranean sourced precipitation correspond well with times of dominant positive modes of the North Atlantic Oscillation (NAO + ) [87] and reduced Western Mediterranean storminess [88,89] (Fig 4).The Western Mediterranean storminess record, however, shows higher variability, which might be related to its distant location in the Gulf of Lions.But generally, NAO + conditions would have favoured a northward shift of the Atlantic storm track towards northern and central Europe [43] (Fig 2).Consequently, this resulted in reduced storminess across the Western Mediterranean as generally evidenced by an increased clay mineral ratio from the Gulf of Lions during dominant positive NAO modes [88].Along with the northward displacement of the Atlantic storm track, the majority of Atlantic sourced precipitation was shifted to northern and central Europe [90].This results in high δD WMA values as Atlantic sourced precipitation in southeast Iberia was reduced and the Mediterranean sourced precipitation gained more importance.One exception is observed at ca. 3350 cal.BP, when the NAO reconstruction indicates a prominent change to a negative mode (NAO -).These NAO - conditions, however, represent a very abrupt, event-like feature, which might not be recorded in our data.However, based on the modern NAO-precipitation relationship one would expect an increase in cold season precipitation when the Atlantic influence increases [24,44].Therefore, it is interesting to note that the rapid shift in δD WMA values towards relatively increased Atlantic moisture source at 4300 cal.BP is not coincident with elevated cold season precipitation levels (Fig 4).In fact, after 4300 cal.BP cold season precipitation at the south-eastern Iberian Peninsula reveals a prominent decreasing trend.Regarding the 4.2 ka BP event, which occurred during this period, previous studies indicate that this period was characterized by decreasing summer precipitation [6,7,28].This is in line with our reconstruction of regional warm season precipitation, which indicates a rapid decrease between ca.4200 to 3700 cal.BP (Fig 4).Taken into account that modern warm season precipitation in the area has a dominant Mediterranean source, a significant reduction in warm season precipitation might be able to explain the relative increase in Atlantic sourced precipitation at 4300 cal.BP, even though cold season precipitation was gradually decreasing.
According to the previous finding, a dominant role of the NAO-active during the cold season-as driver for the observed reduction in warm season precipitation during the 4.2 ka BP event is not plausible.However, based on n-alkane hydrogen isotopic analysis the Western Mediterranean Oscillation (WeMO) has recently been suggested as potential driver for changes in the precipitation source across southern Iberia during the studied period [68].During modern times warm season precipitation at the southeast Iberian Peninsula is mainly derived from local convective systems [48], which benefit from a high land-sea temperature contrast (i.e.warm ocean and cool land).Such a high land-sea temperature contrast during the summer months is favoured by a positive mode of the Western Mediterranean Oscillation (WeMO + ) due to the advection of cool air masses from the north [46,48] (Fig 2).In contrast, WeMO -conditions would favour a rather low land-sea temperature contrast because warm, easterly winds prevail [46].Thus, our generally decreased δD WMA data between 4300 and 3400 cal.BP along with the observed reduction in warm season precipitation could have been promoted by persistent WeMO -conditions at that time.
Altogether, the NAO appears to be the dominant modulator of δD WMA values, and thus, relative variability in moisture sources in the studied area between 6000 and 3000 cal.BP.This is plausible because annual precipitation is dominated by cold season precipitation on the Iberian Peninsula, which is also the season when the NAO is active [24].However, secondary modes active during different seasons such as the WeMO may alter this overall relationship.
Conclusion
In order to investigate the interaction between Atlantic and Mediterranean climate at the south-eastern Iberian Peninsula during the mid- to late Holocene, compound-specific hydrogen (δD Cx) and carbon isotopic records (δ13C Cx) from fossil leaf waxes (i.e. n-alkanes) have been analysed. Detailed comparison with δ13C Cx values, sea surface temperature variability, and changes in precipitation amount indicates that the δD Cx values and their weighted-mean average (δD WMA) analysed in this study are related to changes in the precipitation source. While low δD WMA values are indicative of increased Atlantic origin, high δD WMA values indicate a Mediterranean precipitation source. Overall, δD WMA variability appears closely related to variability of the North Atlantic Oscillation (NAO) between 6000 and 3000 cal. BP, with positive NAO modes resulting in reduced storm activity across the Western Mediterranean and a relative increase of Mediterranean sourced precipitation in southeast Iberia. However, secondary atmospheric modes such as the Western Mediterranean Oscillation (WeMO) may alter this overall relationship. An example is the period of the 4.2 ka BP event, which in the southeast of the Iberian Peninsula was identified as a period of rapid reduction in warm season precipitation between ca. 4200 and 3700 cal. BP. According to the low δD WMA values, these reductions in warm season precipitation were related to a decreasing Mediterranean influence during the summer months. While it is not plausible that the NAO served as the driver of the reduced warm season precipitation, a hypothesized influence of the WeMO, lowering the land-sea temperature contrast, is in line with a previous study [68].
Fig 1 .
Fig 1. Study area.A) Major vegetation belts at the south-eastern Iberian Peninsula are shown along with main rivers Guadiaro (a) and Guadalhorce (b) in the study area.Vegetation belts have been calculated after [29] based on 0.5g ridded climate data from WorldClim V2.0 [30].Star indicates the location of marine sediment core ODP-161-976Aand red crosses show locations of additional archives used for regional paleoclimatological analysis (see Table1for more detailed information): Navarre ´s[31], Villena[32], Elx[33], Antas[34], Padul[26], Sierra de Ga ´dor[35], Roquetas de Mar[34], San Rafael[34], Cabo de Gata[36], El Refugio [37], TTR14-300G [38], ODP-161-976A [11, 39] and, MD95-2043 [7, 40-42].B) Map showing the spatial distribution of amount-weighted long-term (1961-2016) annual mean hydrogen isotopic composition of precipitation (δD prec ).The raw data was downloaded from the Global Network of Isotopes in Precipitation (GNIP) database and interpolated using an inverse distance weighted (IDW) approach (unlimited search radius and power value = 3.0).Red square indicates area shown in A).Triangle denotes the location of Gibraltar meteorological station, which data is shown to the right.C) Average monthly air temperature (T), precipitation (P) and, amount-weighted δD prec of Gibraltar meteorological station for the period 1961-2016 (bottom) as well as the correlation of their annual means (top).All raw data from Gibraltar meteorological station have been downloaded from the GNIP database.Please note, that no δD prec data is available for July due to the scarce precipitation during that month.
Fig 2 .
Fig 2. Main atmospheric drivers of temporal precipitation variability on the Iberian Peninsula.Map showing areas on the Iberian Peninsula, where temporal winter (October-March) precipitation variability is dominated by the NAO (dotted red) and the WeMO (hatched blue).Areas have been redrawn from [45].Red arrows indicate dominant wind direction under NAO + and NAO -conditions, respectively.Blue arrows indicate the same for WeMO + and WeMO - conditions.Star indicates the location of marine sediment core ODP-161-976A.https://doi.org/10.1371/journal.pone.0243662.g002
Fig 4.
Fig 4. Atlantic versus Mediterranean influence. a) Regional z-score of annual precipitation variability, b) regional z-score of cold season precipitation variability, and c) regional z-score of warm season precipitation variability. Numbers of individual archives (n) included in the regional data are also shown. d) Weighted-mean average hydrogen isotopic data (δD WMA) is displayed together with means of periods characterized by prolonged high (P high) and low δD WMA values (P low) for orientation (see results section). e) Reconstructed NAO from lake SS1200 in Greenland [87] and f) clay mineral ratio of Smectite (Sm), Illite (Ill), and Chlorite (Chl) from Gulf of Lions sediment core PB06, indicative of Western Mediterranean storminess [88], are shown for interpretation. Vertical grey bars indicate periods of increased Mediterranean precipitation in southeast Iberia based on the hydrogen isotopic data from ODP-161-976A. https://doi.org/10.1371/journal.pone.0243662.g004
Table 3. Correlations are given for the hydrogen isotopes of every individual n-alkane and their carbon isotopic values, their individual concentrations, the cumulative concentration of long-chained odd n-alkanes, and the alkenone-based SST. Cross-plots of individual parameters are provided in the supplement of this article. https://doi.org/10.1371/journal.pone.0243662.t003
Table 1. Dataset used for the regional precipitation analysis. Columns: Site, Season, Archive, Proxy, Reference.
MAT = modern analogue technique; WAPLS = weighted-average partial least squares regression technique. https://doi.org/10.1371/journal.pone.0243662.t001
|
2020-12-24T09:04:25.726Z
|
2020-12-23T00:00:00.000
|
{
"year": 2020,
"sha1": "5bc1ba57c871d538225b1ea15aef2a588f4f7423",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0243662",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e0dfd62f5f055696a711fb5d6bb2ff176788f130",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
101041762
|
pes2o/s2orc
|
v3-fos-license
|
Characterization of dielectric materials by the extension of voltage response method
The voltage response method measures the decay and return voltages on a charged and shorted dielectric and from the slopes of these voltages the specific conductivity and the polarization conductivity of the material can be determined. These two numbers can characterize the main dielectric processes, namely the conductivity and the polarization, however the phenomenon of polarization is the sum of numerous elementary polarization processes with different intensities and time constants. By changing the shorting time of a charged insulation the elementary polarization processes can be examined separately and the whole polarization spectrum can be investigated more precisely. This paper introduces how the voltage response measurement technique is extended by the variation of discharging times and the mathematical model of the measurement is described. Measurements on a model dielectric and on a dielectric material were carried out and the results have been compared by the results of the calculations.
Introduction
The voltage response method measures the decay and return voltages on a charged dielectric. The timing diagram and arrangement of the measurement are shown in figures 1 and 2, respectively. The decay voltage (V d (t)) can be measured after the relatively long (t ch = 100−1000 s) charging period of an insulation, while the return voltage (V r (t)) appears after a few seconds of shorting (t dch) of the charged arrangement. The initial slope of the decay voltage (S d) is directly proportional to the specific conductivity (γ) of the material, and the slope of the return voltage (S r) is directly proportional to the polarization conductivity (β) of slow polarization processes having time constants higher than the discharging time (t dch) [1]. These two quantities (S d, S r) are successfully used for condition monitoring of the insulation of electric power equipment and cables [2,3,4,5]; however, the weakness of this method is that the whole polarization spectrum of the material is characterized by only one number (S r), whereas the distribution of elementary polarization processes by time constant is not uniform. The different ranges of the polarization spectrum can be investigated by changing the charging (t ch) and discharging (t dch) times, and in this way the original voltage response method can be extended [6]. In this investigation, the charging time was kept constant (1000 s) and the discharging times were increased step by step. The relationship between the variation of charging-discharging times in the return voltage measurement and the peaks of the return voltages has been published elsewhere [7]; however, the use of the slopes of the return voltages for characterization of dielectric materials has not yet been introduced.
The initial slope of the decay voltage can be expressed as

S d = (γ / ε 0) · V ch,    (1)

where V ch is the charging voltage, γ is the specific conductivity of the material and ε 0 is the permittivity of free space. Similarly, the slope of the return voltage is directly proportional to the polarization conductivity of the material (β):

S r = (β / ε 0) · V ch = (V ch / ε 0) · Σ(i=1..n) β pi.    (2)

As equation (2) shows, in real dielectrics the polarization conductivity is the sum of the polarization conductivities of the elementary processes (β pi) that remain in an activated state after charging and discharging of the insulation. The relation between the polarization conductivity and the polarizability (α pi) of a given elementary process is β pi = α pi / τ pi, where τ pi is the time constant of the process. For the calculation of return voltages the extended Debye model of the insulation is used (figure 3). In this model C 0 represents the capacitance of the electrode arrangement without the insulating material (vacuum capacitance), R 0 is the d.c. resistance of the insulation and the R pi − C pi branches (Debye elements) represent the elementary polarization processes of the insulation. The elementary processes can be characterized by their time constants τ pi = R pi C pi and intensities, namely the polarizability (α pi = C pi / C 0). The relation between the equivalent circuit of an insulation (figure 3) and the parameters provided by the voltage response measurement can be expressed as

S d = V ch / τ 0,    (3)

where τ 0 = R 0 C 0. The initial slope of the return voltage (S r) can be expressed as the sum of the gradients of the return voltages generated by the Debye (elementary polarization) processes:

S r = Σ(i=1..n) V Cpi / τ ri,    (4)

where V Cpi is the remaining voltage of the C pi capacitor in a Debye element after charging and discharging of the insulation and τ ri = R pi C 0. The relation between the extended Debye circuit and the material properties can be expressed by the susceptibility (κ = Σ(i=1..n) C pi / C 0), which represents the intensity of the polarization processes in the material (κ = ε r − 1) at zero frequency.
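Assuming the proportionalities in equations (1) and (2), the material parameters can be recovered from the measured initial slopes as in the following sketch; the numerical values are purely illustrative.

```python
EPS_0 = 8.854e-12  # permittivity of free space, F/m

def conductivities_from_slopes(s_d, s_r, v_ch):
    """Estimate specific conductivity (gamma) and polarization conductivity (beta)
    from the initial decay- and return-voltage slopes, assuming
    S_d = (gamma / eps0) * V_ch and S_r = (beta / eps0) * V_ch."""
    gamma = EPS_0 * s_d / v_ch
    beta = EPS_0 * s_r / v_ch
    return gamma, beta

# Illustrative measured values: 1 kV charging voltage, slopes in V/s.
gamma, beta = conductivities_from_slopes(s_d=2.0, s_r=0.05, v_ch=1000.0)
print(f"gamma = {gamma:.3e} S/m, beta = {beta:.3e} S/m")
```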
Computation of return voltages on extended Debye model of dielectrics
The aim of the extension of the voltage response measurement is to investigate the elementary polarization processes and to determine the parameters of the extended Debye model of the measured insulation. For this reason the measurement technique was developed further: the charged insulation is not discharged only once, but several shortings are executed, so that different polarization processes remain activated after the different discharging times and several return-voltage slopes are measured. For the calculations and the measurements, thirteen discharging times between 2 and 200 s were chosen, while the charging time was 1000 s in all cases. In the calculations, the dielectric processes were modeled in the 0.1−10000 s time-constant range and the intensities of the processes were varied until the calculated slopes of the return voltages became acceptably close to the measured values.
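A minimal numerical sketch of this procedure is given below. It assumes the simple charging and discharging behaviour of independent Debye branches described above (each branch charging and discharging with its own time constant, with the loading effect of R 0 neglected); the polarizabilities and time constants are placeholders, not the fitted parameters of a real insulation.

```python
import numpy as np

def return_voltage_slope(t_ch, t_dch, alphas, taus, v_ch=1000.0):
    """Initial slope of the return voltage for an extended Debye model.

    Each elementary process i is described by its polarizability alpha_i = C_pi/C_0
    and time constant tau_i = R_pi*C_pi.  After charging for t_ch and shorting for
    t_dch, the remaining branch voltage drives the return voltage with
    S_r = sum_i V_Cpi * alpha_i / tau_i  (a sketch of the model described above).
    """
    alphas, taus = np.asarray(alphas, float), np.asarray(taus, float)
    v_cpi = v_ch * (1 - np.exp(-t_ch / taus)) * np.exp(-t_dch / taus)
    return np.sum(v_cpi * alphas / taus)

# Hypothetical model with four Debye elements (time constants 1, 10, 100, 1000 s).
alphas = [0.05, 0.10, 0.20, 0.30]
taus = [1.0, 10.0, 100.0, 1000.0]

# Sweep the discharging time as in the extended measurement (2 - 200 s).
for t_dch in (2, 5, 10, 20, 50, 100, 200):
    s_r = return_voltage_slope(t_ch=1000.0, t_dch=t_dch, alphas=alphas, taus=taus)
    print(f"t_dch = {t_dch:5.0f} s  ->  S_r = {s_r:.3f} V/s")
```

Varying the assumed intensities until the calculated slopes match the measured ones, as described above, then amounts to a simple parameter fit around this forward model.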
Measurement on EPR insulation
The measurements were carried out on the insulation of a four-core low-voltage ethylene propylene rubber (EPR) insulated cable. After the measurement, the model of the EPR insulation was determined by varying the parameters of an extended Debye model containing four Debye elements. The time constants of the elements were chosen as 1, 10, 100 and 1000 s. The calculated parameters of the model are given in table 3. The result shows that the slow polarization processes (time constants higher than 1 s) of such a complex dielectric as an EPR-insulated cable can be characterized by only four Debye elements (table 4).
Conclusion
The voltage response measurement is a tool developed for measuring the slow polarization processes of an insulation. The original technique characterizes the polarization processes by only one figure (S r), but by systematically changing the discharging times the voltage response method has been extended. With this extension of the method, parts of the polarization spectrum can be investigated more precisely, and from the results provided by the extended method the equivalent circuit of the insulation can be determined. The results have been validated on a model dielectric with two Debye processes. Measurements have also been carried out on the insulation of a low-voltage EPR cable and the parameters of a Debye circuit have been calculated. The result shows that the slow polarization processes of such a complex dielectric as an EPR-insulated cable can be characterized by only four Debye elements.
|
2019-04-07T13:03:49.017Z
|
2015-10-26T00:00:00.000
|
{
"year": 2015,
"sha1": "9746132024d482bcb5fa3d81fc619ce3d2fa1958",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/646/1/012043",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "dab598f01dc6c02d9e08902a6a50c69f48db1cf5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
}
|
238533508
|
pes2o/s2orc
|
v3-fos-license
|
Multi-Domain Touchscreen-Based Cognitive Assessment of C57BL/6J Female Mice Shows Whole-Body Exposure to 56Fe Particle Space Radiation in Maturity Improves Discrimination Learning Yet Impairs Stimulus-Response Rule-Based Habit Learning
Astronauts during interplanetary missions will be exposed to galactic cosmic radiation, including charged particles like 56Fe. Most preclinical studies with mature, “astronaut-aged” rodents suggest space radiation diminishes performance in classical hippocampal- and prefrontal cortex-dependent tasks. However, a rodent cognitive touchscreen battery unexpectedly revealed 56Fe radiation improves the performance of C57BL/6J male mice in a hippocampal-dependent task (discrimination learning) without changing performance in a striatal-dependent task (rule-based learning). As there are conflicting results on whether the female rodent brain is preferentially injured by or resistant to charged particle exposure, and as the proportion of female vs. male astronauts is increasing, further study on how charged particles influence the touchscreen cognitive performance of female mice is warranted. We hypothesized that, similar to mature male mice, mature female C57BL/6J mice exposed to fractionated whole-body 56Fe irradiation (3 × 6.7cGy 56Fe over 5 days, 600 MeV/n) would improve performance vs. Sham conditions in touchscreen tasks relevant to hippocampal and prefrontal cortical function [e.g., location discrimination reversal (LDR) and extinction, respectively]. In LDR, 56Fe female mice more accurately discriminated two discrete conditioned stimuli relative to Sham mice, suggesting improved hippocampal function. However, 56Fe and Sham female mice acquired a new simple stimulus-response behavior and extinguished this acquired behavior at similar rates, suggesting similar prefrontal cortical function. Based on prior work on multiple memory systems, we next tested whether improved hippocampal-dependent function (discrimination learning) came at the expense of striatal stimulus-response rule-based habit learning (visuomotor conditional learning). Interestingly, 56Fe female mice took more days to reach criteria in this striatal-dependent rule-based test relative to Sham mice. Together, our data support the idea of competition between memory systems, as an 56Fe-induced decrease in striatal-based learning is associated with enhanced hippocampal-based learning. These data emphasize the power of using a touchscreen-based battery to advance our understanding of the effects of space radiation on mission critical cognitive function in females, and underscore the importance of preclinical space radiation risk studies measuring multiple cognitive processes, thereby preventing NASA’s risk assessments from being based on a single cognitive domain.
A thorough review of the literature, however, does not support a uniformly negative impact of HZE particles on rodent brain and behavior (Kiffer et al., 2019; Britten et al., 2021b). Two even more recent studies highlight that 56 Fe particle exposure can have seemingly beneficial effects on the mouse hippocampus, a brain region critical for memory and mood regulation. One study showed exposure to whole-body 56 Fe particle irradiation (IRR) improves hippocampal-dependent spatial learning 12 and 20 months (mon) post-IRR in male and female mice (Miry et al., 2021). Another study exposed 6-mon-old ("astronaut-age") male mice to whole-body 56 Fe particles (Whoolery et al., 2020) and used a rodent touchscreen platform to probe the functional integrity of brain circuits (Oomen et al., 2013; Hvoslef-Eide et al., 2016; Kangas and Bergman, 2017), drawing a parallel to the way astronauts undergo touchscreen testing (Moore et al., 2017). This study found mice exposed to 56 Fe particles had better discrimination learning (location discrimination, LD) vs. Sham mice, suggesting astronauts may show an improvement in this mission-critical skill. However, 56 Fe mice were not different from Sham mice in many other tasks (pairwise discrimination, PD; visuospatial/associative learning, Paired Associates Learning, PAL; stimulus-response habit or "rule-based" learning, visuomotor conditional learning, VMCL; cognitive flexibility, PD reversal) (Whoolery et al., 2020). Taken together, these studies suggest caution in concluding that HZE particle exposure decreases rodent cognition; the reality is likely that there are time-, task-, species-, dose-, and energy-dependent (among other) effects (Miry et al., 2021). In addition, these studies point out the importance, recently underscored for the space radiation field (Britten et al., 2021b), of measuring multiple cognitive processes in rodents, thereby preventing NASA's risk assessment from being based on a single cognitive domain.
Another important factor to consider in assessing the impact of HZE particle exposure on cognition is biological sex. As of 2019, <10% of preclinical studies assessing the cognitive effects of HZE particle exposure used female rodents (Kiffer et al., 2019). Human research is only slightly better; the available data from astronauts are heavily skewed in favor of males (n = 477) vs. females (n = 57). Thus, there is an urgent need for studies to determine the role of biological sex in the body's response to space flight stressors (Mark et al., 2014), including exposure to GCR and HZE particles. While some work suggests female rodents are more susceptible than males to space radiation exposure, other preclinical work suggests the female rodent brain may be protected from radiation-induced immune and cognitive deficits (Villasana et al., 2006, 2010; Cherry et al., 2012; Krukowski et al., 2018; Hinkle et al., 2019; Liu et al., 2019; Parihar et al., 2020). Considering the astronaut class is 40% women (Mark et al., 2014) and the translational relevance of the rodent touchscreen platform (Oomen et al., 2013; Hvoslef-Eide et al., 2016; Kangas and Bergman, 2017), it is striking that the touchscreen platform has not yet been used to assess how female rodent cognition is influenced by whole-body exposure to an HZE particle, such as 56 Fe.
To address this knowledge gap, mature "astronaut aged" C57BL/6J female mice received either Sham or whole-body 56 Fe particle IRR (3 × 6.7 cGy 56 Fe, 600 MeV/n) and were assessed on a battery of operant touchscreen and classical behavior tasks to probe aspects of cognition. These radiation exposure parameters are identical to those that reportedly improve location discrimination in mature male mice (Whoolery et al., 2020). This dose was chosen as it is submaximal to that predicted for a Mars mission (Cucinotta and Durante, 2006). The fractionation regimen was selected based on its relevance for a Mars mission and its known impact on rodent brain structure and function (Rivera et al., 2013; Whoolery et al., 2017, 2020). For the present study, female Sham and IRR mice were tested for touchscreen performance of instrumental learning, discrimination learning, extinction learning, and stimulus-response habit (rule-based) learning. Given literature suggesting the female rodent brain may be spared from the negative impact of HZE particle exposure (Rabin et al., 2013; Krukowski et al., 2018), we hypothesized that whole-body 56 Fe IRR would spare or even improve their performance in touchscreen-based behaviors, especially hippocampal-reliant discrimination learning. This touchscreen battery revealed an unexpected finding: improved discrimination learning, but worse stimulus-response habit learning, in 56 Fe-irradiated vs. Sham mice. We also tested several classical behaviors to examine anxiety-like, stress-response, and repetitive behaviors, but Sham and IRR mice performed similarly in those tests. Taken together with pre-planned, in-depth analysis of key aspects of their touchscreen performance on this and many other tasks, these data suggest whole-body exposure to 56 Fe particle IRR in mature female mice may support or enhance hippocampal tasks like discrimination learning, but may diminish striatum-dependent tasks like stimulus-response habit learning.
MATERIALS AND METHODS
The ARRIVE 2.0 guidelines were used to design and report this study (Percie du Sert et al., 2020). A protocol was prepared for this study prior to experimentation, but this protocol was not registered.
Animals
Two-month-old female C57BL/6J mice were purchased from Jackson Laboratories (stock #000664), housed at Children's Hospital of Philadelphia (CHOP, Figure 1A) or UT Southwestern Medical Center (UTSW, Figure 1B), and shipped to Brookhaven National Laboratories (BNL) for IRR at 6-mon of age. Experiments were performed at these two different institutions due to the Eisch Lab moving institutions. Data collected at the two institutions are both presented here given the well-documented reliability of the operant touchscreen platform (Beraldo et al., 2019; Dumont et al., 2020; Sullivan et al., 2021). Housing conditions at all facilities were 3-4 mice/cage, lights on 06:00, lights off 18:00; UTSW/CHOP: room temperature 68-79 °F, room humidity 30-70%; BNL: room temperature 70-74 °F and room humidity 30-70%. During shipping and housing at BNL, mice were provided Shepherd Shacks (Bio-Serv); no other enrichment was provided during housing. After IRR, mice were transferred to either UTSW (Sham n = 16, IRR n = 16) or CHOP (Sham n = 11, IRR n = 9). At both facilities, food (CHOP and BNL: LabDiet #5015; UTSW: Envigo Teklad global 16% protein) and water were provided ad libitum except during the appetitive behavior tasks. When placed in CHOP quarantine after return from IRR (below
Particle Irradiation
Mice received whole-body HZE particle IRR at BNL's NASA Space Radiation Laboratory (NSRL) during NSRL campaigns 17B and 18A. The 56 Fe ion beams were produced by the AGS Booster Accelerator at BNL and transferred to the experimental beam line in the NSRL. Dosimetry and beam uniformity were provided by NSRL staff. Delivered doses were ±0.5% of the requested value. All mice, regardless of whether control (Sham) or experimental (56 Fe), were singly placed for 15 min in modified clear polystyrene rectangular containers (AMAC Plastics, Cat #100C, W 5.8 × L 5.8 × H 10.7 cm; modified with ten 5-mm air holes). Although confined to a container, mice had room to move freely and turn around during confinement. A maximum of six containers were placed perpendicular to the beam for each cave entry. Mice received either Sham exposure (placed in cubes Monday, Wednesday, Friday, but received no 56 Fe exposure) or Fractionated (Frac) 20 cGy 56 Fe IRR (600 MeV/n, LET 174 keV/µm, dose rate 20 cGy/min; placed in cubes and received 6.7 cGy on Monday, Wednesday, and Friday). Post-IRR, mice were returned to UTSW (Figure 1B) or CHOP (Figure 1A) and housed in quarantine for 1-1.5 mon prior to initiation of touchscreen behavior testing (Figure 1). Body weights (Figure 2A) were taken multiple times: prior to IRR, at IRR, and at least weekly post-IRR until collection of brain tissue. This dose of 56 Fe was selected as it is submaximal to that predicted for a Mars mission (Cucinotta and Durante, 2006), and the fractionation interval (48 h) was determined by the inter-fraction period for potential repair processes (Thames, 1985).
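The fractionation scheme reduces to simple arithmetic, sketched below purely as an illustration (this is not part of the study's analysis code); the values are taken directly from the exposure parameters stated in this section.

```python
# Illustrative arithmetic only (not study code): total dose and approximate
# beam-on time per fraction implied by the exposure parameters stated above.
fraction_dose_cgy = 6.7          # cGy delivered per fraction (Mon, Wed, Fri)
n_fractions = 3
dose_rate_cgy_per_min = 20.0     # stated NSRL dose rate

total_dose_cgy = fraction_dose_cgy * n_fractions              # ~20.1 cGy, reported as 20 cGy
beam_on_s = fraction_dose_cgy / dose_rate_cgy_per_min * 60.0  # ~20 s of beam time per fraction

print(f"total dose ~ {total_dose_cgy:.1f} cGy; beam-on ~ {beam_on_s:.0f} s per fraction")
```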
Overview of Behavioral Testing
Mice exposed to HZE particles in NSRL campaigns 18A and 17B were divided into parallel groups (Figures 1A,B, respectively) that underwent touchscreen behavioral testing 1-3 mon post-IRR. Touchscreen experiments were performed between 08:00 and 14:00 on weekdays. As is standard in most rodent touchscreen experiments, mice were food restricted during touchscreen experiments. Mouse chow was removed from each cage at 17:00 the day prior to training or testing. Each cage was given ad libitum access to chow for 3 h (minimum) to 4 h (maximum) immediately following daily touchscreen training/testing, and from completion of training/testing on Friday until Sunday 5 p.m. Mice were weighed each Wednesday to ensure weights remained >80% of initial body weight. While weights below this threshold merited removal of the mouse from the study, zero mice reached this threshold (Oomen et al., 2013). Luminescence is emitted from the touchscreen chamber screen and reward magazine, and from the house light during one stage of general touchscreen training, and thus the mice were not performing in darkness. In one group of mice (Figure 1A), mice began touchscreen behavioral testing 3-mon post-IRR. Operant touchscreen platform procedures included general touchscreen training (with 2 × 6 window grid), Location Discrimination Reversal (LDR, Train and Test), and Extinction (Ext, training/"Acquisition" and testing). Total beam breaks, as a measure of baseline locomotor activity, were gathered in touchscreen operant chambers during general touchscreen training (Habituation 1). After all animals completed LDR Test, mice received unrestricted food pellets for 2 weeks before beginning Ext to allow them to recover from potential stress associated with food restriction. Following Ext testing, mice were tested in a variety of non-touchscreen tests classified here as "classical behavior tests" (Figures 1A, 2A). In the other group of mice (Figure 1B), mice began touchscreen behavioral testing 1-mon post-IRR. Operant touchscreen platform procedures performed on this group were general touchscreen training (with 1 × 3 window grid) and Visuomotor Conditional Learning (VMCL) Train and Test. Subject number for each group in each figure panel is provided in Supplementary Table 1.

FIGURE 1 | Timeline of experimental groups and overview of behavior tests. Six-month-old C57BL/6J female mice received whole-body exposure to 56 Fe [0-Month (Mon) Post-Irradiation (IRR)] and subsequently were run on a variety of touchscreen and non-touchscreen behavioral tests, including (A) touchscreen training with a twelve-window (2 × 6) grid followed by LDR Train and Test, Acquisition and Extinction of stimulus-response habit learning, and classic behavior tests (EPM, MB, OF, SI, and FST) or (B) touchscreen training on a 3-window (1 × 3) grid followed by VMCL Train and Test. Acq, acquisition; Ext, extinction; LDR, location discrimination reversal; Test, testing; Train, training; TS, touchscreen; VMCL, visuomotor conditioning learning.
General Touchscreen Training
General Touchscreen Training (prior to LDR) consists of six stages, as previously published (Whoolery et al., 2020): Habituation 1, Habituation 2, Initial Touch, Must Touch, Must Initiate, and Punish Incorrect (PI). Methods for each stage are described in turn below. Mice went through general touchscreen training with twelve windows (2 × 6) for the LDR experiment.
Habituation
Mice are individually placed in a touchscreen chamber for 30 min (max) with the magazine light turned on (LED light, 75.2 lux). For the initial reward in each habituation session, a tone is played [70 decibel (dB) at 500 Hz, 1,000 ms] at the same time as a priming reward (150-µl Ensure Original Strawberry Nutrition Shake) is dispensed to the reward magazine. After a mouse inserts and removes her head from the magazine, the magazine light turns off and a 10-s delay begins. At the end of the delay, the magazine light is turned on and the tone is played again as a standard amount of the reward (7-µl Ensure) is dispensed. If the mouse's head remains in the magazine at the end of the 10-s delay, an additional 1-s delay is added. A mouse completes Habituation training after they collect 25 rewards (25 × 7 µl) within 30 min. Mice that achieve habituation criteria in <30 min are removed from the chamber immediately after their 25th reward in order to minimize extinction learning. The measure reported for Habituation is days to completion.

FIGURE 2 | Weights, locomotion, and general touchscreen training learning are generally unaffected in 6-mon-old female mice exposed to whole-body 56 Fe IRR compared to Sham. (A) No gross weekly weight difference was detected between Sham or 56 Fe mice during touchscreen testing. (B) Beam breaks measured in the novel TS operant chambers in Habituation 1 revealed no gross baseline difference in locomotion after exposure to Sham or 56 Fe IRR. (C) Sham and 56 Fe IRR groups performed similarly in each of the first six steps of general touchscreen training with twelve windows: Habituation 1 and 2, Initial Touch, Must Touch, Must Initiate, and Punish Incorrect. (D-L) During the Punish Incorrect stage of general TS training, 56 Fe IRR female mice had a longer first, but not last, training session vs. Sham mice (D). However, Sham and 56 Fe IRR mice did not differ in the number of completed trials (E), % correct (F), total ITI touches (G), correct touch latency (H), and correct left or right touch latency (I,J). (K) 56 Fe IRR female mice were ∼5 s faster vs. Sham mice to touch a blank window in the final session of testing. Aside from these differences, IRR and Sham mice had similar reward collection latency (L). Error bars depict mean ± SEM. Mixed-effects analysis was used in panel (A); main effects: Time F(36,1067) = 66.05, p < 0.0001 and Treatment F(1,30) = 0.02450, p = 0.8767; interaction: Treatment × Time F(36,1067) = 1.586, p = 0.0161; post hoc: all p > 0.05. Unpaired t-test was used in panel (B); p = 0.6979. Two-way RM ANOVA was used in panels (C-L): main effect *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001, Bonferroni's post hoc analysis a' p < 0.05. In panel (D), main effects: Session F(1,29) = 0.5688, p = 0.4568 and Treatment F(1,29) = 4.532, p = 0.0419 (post hoc: a' p = 0.0228, in Sham vs. 56 Fe).
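Returning to the Habituation schedule described above, the sketch below simulates its reward loop (priming reward, 10-s inter-reward delay with a possible 1-s extension, completion at 25 rewards or 30 min). The head-entry and collection timings are invented placeholders for illustration; this is not the ABET II schedule itself.

```python
# Minimal simulation of the Habituation reward schedule described above.
# Collection times and head-in-magazine events are random placeholders,
# purely to illustrate the schedule logic; this is not the ABET II schedule.
import random

SESSION_MAX_S = 30 * 60   # 30-min session cap
REWARD_GOAL = 25          # rewards required to pass Habituation
DELAY_S = 10.0            # delay between rewards
EXTENSION_S = 1.0         # added if the head is still in the magazine at delay end

def simulate_habituation(seed=0):
    rng = random.Random(seed)
    t, rewards = 0.0, 0
    while t < SESSION_MAX_S and rewards < REWARD_GOAL:
        t += rng.uniform(2.0, 20.0)       # time until the mouse collects the reward
        rewards += 1
        delay = DELAY_S
        if rng.random() < 0.2:            # head still in magazine at end of delay
            delay += EXTENSION_S
        t += delay                        # magazine light off, wait before next reward
    return {"rewards": rewards, "session_min": round(t / 60.0, 1),
            "passed": rewards >= REWARD_GOAL}

print(simulate_habituation())
```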
Initial touch
A 2 × 6 window grid is placed in front of the touchscreen for the remaining stages of training. At the start of the session, an image (a lit white square) appears in a pseudo-random location in one of the 12 windows on the touchscreen. The mouse has 30 s to touch the lit square (typically with their nose). If the mouse does not touch the image, it is removed, a reward (7 µl Ensure) is delivered into the illuminated magazine on the opposite wall from the touchscreen, and a tone is played. After the reward is collected, the magazine light turns off and a 20-s intertrial interval (ITI) begins. If the mouse touches the image while it is displayed, the image is removed and the mouse receives three times the normal reward (21-µl Ensure, magazine is illuminated, tone is played). For subsequent trials, the image appears in another of the 12 windows on the touchscreen, and never in the same location more than three consecutive times. Mice reach criteria and advance past Initial Touch training when they complete 25 trials (irrespective of reward level received) within 30 min. Mice that achieve Initial Touch criteria in <30 min are removed from the chamber immediately after their 25th trial. The measure reported for Initial Touch is days to completion.
Must touch
Similar to Initial Touch training, an image appears, but now the window remains lit until it is touched. If the mouse touches the lit square, the mouse receives a reward (7-µl Ensure, magazine is illuminated, tone is played). If the mouse touches one of the blank windows, there is no response (no reward is dispensed, the magazine is not illuminated, and no tone is played). Mice reach criteria and advance past Must Touch training after they complete 25 trials within 30 min. Mice that achieve Must Touch criteria in <30 min are removed from the chamber immediately after their 25th trial. The measure reported for Must Touch is days to completion.
Must initiate
Must Initiate training is similar to Must Touch training, but a mouse is now required to initiate the training by placing its head into the already-illuminated magazine. A random placement of the image (lit white square) will then appear on the screen, and the mouse must touch the image to receive a reward (7-µl Ensure, magazine lit, tone played). Following the collection of the reward, the mouse must remove its head from the magazine and then reinsert its head to initiate the next trial. Mice advance from Must Initiate training after they complete 25 trials within 30 min. Mice that achieve Must Initiate criteria in <30 min are removed from the chamber immediately after their 25th trial. The measure reported for Must Initiate is days to completion.
Punish incorrect
PI training builds on Must Initiate training, but here if a mouse touches a portion of the screen that is blank (does not have a lit white square), the overhead house light turns on and the lit white square disappears from the screen. After a 5-s timeout period, the house light turns off and the mouse has to initiate a correction trial where the lit white square appears in the same location on the screen. The correction trials are repeated until the mouse successfully presses the lit white square; however, correction trials are not counted toward the final percent correct criteria. Mice reach criteria and advance past PI training and onto Location Discrimination Reversal Train/Test after they complete 30 trials within 30 min at ≥76% (≥19 correct) on day 1 and >80% (>24 correct) on day 2 over two consecutive days. Mice that achieve PI criteria in <30 min are removed from the chamber immediately after their 30th trial. As with the other stages, a measure reported for PI is days to completion (to reach criteria). However, since the PI stage also contains a metric of accuracy, more measures were analyzed relative to the other five stages. Therefore, other measures reported for PI are session length, trial number, percent correct responses, ITI, latency to make a correct touch (for total touches, left touches, and right touches) and an incorrect touch (touching a blank window), and latency to collect a reward.
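Expressed as code, the advancement rule above reduces to a check over consecutive sessions; the sketch below only illustrates that rule (using the stated percentage thresholds), and is not the software used to run the task.

```python
# Sketch of the Punish Incorrect advancement rule described above: two
# consecutive sessions of 30 completed trials, the first at >=76% correct and
# the second at >80% correct. Illustrative only.
def passes_punish_incorrect(sessions):
    """sessions: chronological list of dicts like {'trials': 30, 'correct': 24}."""
    for day1, day2 in zip(sessions, sessions[1:]):
        ok1 = day1["trials"] >= 30 and day1["correct"] / day1["trials"] >= 0.76
        ok2 = day2["trials"] >= 30 and day2["correct"] / day2["trials"] > 0.80
        if ok1 and ok2:
            return True
    return False

sessions = [{"trials": 30, "correct": 20},
            {"trials": 30, "correct": 24},
            {"trials": 30, "correct": 26}]
print(passes_punish_incorrect(sessions))  # True: days 2 and 3 satisfy the rule
```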
Location Discrimination Reversal
Location Discrimination Reversal (LDR; program LD1 choice reversal v3; ABET II software, Cat #89546-6) tests the ability to discriminate two conditioned stimuli that are separated by either a large or small distance. The reversal component of LDR, used here and in classic LDR studies (Clelland et al., 2009; Oomen et al., 2013), tests cognitive flexibility; prior work showing space radiation improved LD function in male mice used LD, not LDR (Whoolery et al., 2020). Taken together, LDR is a hippocampal-dependent task (Clelland et al., 2009; Oomen et al., 2013) which allows assessment of both discrimination ability and cognitive flexibility. In our timeline (Figure 1A), mice received one additional training step ("LDR Train") prior to the actual 2-choice LDR Test.
Location discrimination reversal train
In LDR Train, mice initiated the trial, which led to the display of two identical white squares (25 × 25 pixels, Figure 3A) presented with two blank (unlit) squares between them, a separation termed "intermediate" (8th and 11th windows of the bottom row of the 2 × 6 grid). Either the left (L) or right (R) square location was rewarded (i.e., L+) and the other was not (R-), with the initially rewarded location (left or right) counterbalanced within-group. On subsequent days, the rewarded square location was switched based on the previous day's performance (L+ becomes L- and R- becomes R+, then L- becomes L+ and R+ becomes R-, etc.). A daily LDR Train session was complete once the mouse touched either L+ or R- 50 times or when 30 min had passed. Once 7 out of 8 trials had been responded to correctly, on a rolling basis, the rewarded square location was switched (L+ becomes L-, then L+, then L-, etc.); this is termed a "reversal." Once the mouse reached >1 reversal in 3 out of 4 consecutive testing sessions, the mouse advanced to the LDR Test. A daily training is considered a "session." Measures reported for LDR Train are: percent of each group reaching criteria over time (survival curve), days to completion, trial number, and percent correct during trials to the 1st reversal.
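The rolling 7-of-8 rule that triggers a reversal can be pictured with the sketch below; resetting the window after each reversal is an assumption made here for illustration, and this is not the ABET II scheduler.

```python
# Sketch of the rolling reversal rule: once any window of the last 8 trials
# contains >= 7 correct responses, the rewarded location flips (L+ <-> R+).
# Resetting the window after each reversal is an illustrative assumption.
def count_reversals(outcomes):
    """outcomes: iterable of booleans, True = correct touch on that trial."""
    window, reversals = [], 0
    for correct in outcomes:
        window.append(correct)
        if len(window) > 8:
            window.pop(0)
        if len(window) == 8 and sum(window) >= 7:
            reversals += 1
            window.clear()
    return reversals

print(count_reversals([True] * 8 + [False] + [True] * 8))  # 2 reversals
```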
Location discrimination reversal test
In LDR Test, mice initiated the trial, which led to the display of two identical white squares, either with four black squares between them ["large" separation, two at maximum separation (7th and 12th windows in the bottom row of a 2 × 6 grid)] or directly next to each other ["small" separation, two at minimum separation (9th and 10th windows in the bottom row of a 2 × 6 grid; Figure 3F)]. As in LDR Train, only one of the square locations (right-most or left-most) was rewarded (L+, same side for both large and small separation, and counterbalanced within-groups). The rewarded square location was reversed based on the previous day's performance (L+ becomes L-, then L+, then L-, etc.). Once 7 out of 8 trials had been correctly responded to, on a rolling basis, the rewarded square location was reversed (becomes L-, then L+, then L-, etc.). Each mouse was exposed to only one separation type during a daily LDR Test session (either large or small) and the separation type changed every 2 days (2 days of large, then 2 days of small, 2 days of large, etc.). A daily LDR Test session was completed once the mouse reached the daily trial limit or once 30 min had passed.

FIGURE 3 | On an appetitive, touchscreen discrimination learning task, female mice exposed to whole-body 56 Fe IRR at 6-mon of age perform better than Sham mice in discriminating the location of two identical visual cues.

Extinction Learning (Ext; ABET II Software, Cat #89547)

Acquisition of simple stimulus-response learning (schedule name: Extinction pt 1)

Acquisition of simple stimulus-response learning (schedule name: Extinction pt 1) is the first part of the extinction test, and is a task that involves the amygdala (Fernando et al., 2013). The start of acquisition was marked by the magazine light turning on and the delivery of a free reward. Mice initiated the trial, which led to the display of an image (lit white square stimulus) in the center window (middle square in the 1 × 3 grid; Figure 4A; Mar et al., 2013). The mouse had to touch the stimulus displayed in the center window to elicit the tone/food response. The two side windows were left blank throughout the experiment. No response ensued if the mouse touched a blank part of the screen. A daily acquisition session was complete once the mouse touched the center window 30 times or when 30 min had passed. Once the mouse completed 30 trials within 15 min on each of five consecutive sessions (criteria for acquisition), the mouse advanced to the extinction test. Measures reported for Acquisition are: percent of each group reaching criteria over time (survival curve), days to completion, session length, and number of correct responses.
Extinction test (schedule name: extinction pt 2)
Extinction test (schedule name: Extinction pt 2) is a test that involves the prefrontal cortex (PFC). A 5-s ITI marked the start of extinction. Following the initial ITI, an image (the lit white square stimulus) was presented in the center window (middle window in the 1 × 3 grid; Figure 4F).
The stimulus display was held on the screen for 10 s, during which the mouse could elicit or omit its learned response to the square. The two side windows were left blank throughout the experiment. If the mouse touched a blank window of the screen, no response occurred. If the white square was touched, no food delivery was made but the image was removed, the magazine light was illuminated, a tone was played, and the ITI period (10 s) was started; this is the "correct" action, even though no reward is provided. If the white square was not touched, then the image was removed and the ITI period started. Mouse entry into the reward magazine during the ITI would turn off the magazine light. Following an ITI, the magazine light was turned off and the next trial began automatically. A daily extinction session was complete once the mouse was presented with the white square stimulus 30 times. When the mouse reached ≥80% response omissions on each of at least three out of four consecutive sessions, the mouse was considered to have reached daily criteria. Mice that reached criteria first continued to be tested daily until the performance of all mice was synchronized and complete, before advancing to classical behavior battery testing. Measures reported for Extinction are: percent of each group reaching criteria over time (survival curve), days to completion, and number of omissions across testing days, as well as session length, number of touches and latency to touch a blank part of the touchscreen, number of touches during the ITI, and latency to make a correct response on the last day of the first, 8th, and last testing session.
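The daily extinction criterion (≥80% omissions on at least three of four consecutive sessions) can likewise be written as a short check; the function below is a sketch of the stated rule, not the analysis code used for the study.

```python
# Sketch of the extinction criterion described above: criterion is met once
# any 4 consecutive daily sessions include >= 3 sessions with >= 80% response
# omissions (>= 24 of 30 stimulus presentations omitted).
def reached_extinction_criterion(omissions_per_session, trials_per_session=30):
    meets = [n / trials_per_session >= 0.80 for n in omissions_per_session]
    return any(sum(meets[i:i + 4]) >= 3 for i in range(len(meets) - 3))

print(reached_extinction_criterion([10, 22, 25, 26, 27]))  # True (sessions 2-5)
```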
General touchscreen training (prior to VMCL)
General Touchscreen Training (prior to VMCL) occurred as in section "General Touchscreen Training (Prior to LDR)" with two differences: mice went through only one habituation stage (Habituation 2) and training occurred with a three-window grid (1 × 3). As in section "General Touchscreen Training (Prior to LDR)," the measure reported for these five general touchscreen stages is days to completion (to reach criteria). Due to computer issues, "days to completion" was the only metric extracted for this group, precluding in-depth accuracy analyses of Punish Incorrect as was done in the LDR cohort. After these five training stages, mice then went through VMCL Train and finally VMCL Test.
Visuomotor conditional learning train (schedule name: punish incorrect II)
Visuomotor Conditional Learning Train was designed to teach the mouse to touch two images (both lit white squares) on the screen in a specific order and in rapid succession. The first touch must be to an image presented in the center of the screen, and the second touch must be to an image presented either on the left or right of the screen. Specifically, after trial initiation, the mouse must touch a center white square (200 × 200 pixels; Figure 5A), which then disappears after it is touched. A second white square immediately appears on either the left or right side of the screen in a pseudorandom style, such that a square was located on each side 5 out of 10 times, but not more than three times in a row. If the mouse selected the location with the second white square, a reward (7 µl) was provided, and a 20-s ITI began.
However, if the mouse selected the location without a lit white square, then the second stimulus was removed, the house light was illuminated for 5 s to indicate a timeout period, and then finally a 20-s ITI occurred. Then the mouse was presented with a correction trial which must be completed prior to a new set of locations being displayed. VMCL Train was complete when the mouse completed 2 out of 3 consecutive days of 25 trials in 30 min with ≥75% correct. Measures reported for VMCL Train are: days to completion and distribution of percent of mice which reach criteria over time (survival curve), as well as session length, trial number, % correct responses, and number of correction trials on the first and last VMCL Train session.
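The constrained pseudorandom presentation of the second stimulus (each side on 5 of every 10 trials, never more than three times in a row) can be generated with a sketch like the one below; this is only an illustration of the stated constraint, not the ABET II scheduler itself.

```python
# Sketch of the constrained pseudorandom VMCL side schedule described above:
# each block of 10 trials has the second square on each side 5 times, and no
# side is scheduled more than three times in a row. Illustrative only.
import random

def vmcl_side_schedule(n_blocks=3, seed=0):
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        while True:
            block = ["L"] * 5 + ["R"] * 5
            rng.shuffle(block)
            candidate = schedule + block
            # reject any arrangement that creates a run of 4+ of the same side
            runs_of_four = any(len(set(candidate[i:i + 4])) == 1
                               for i in range(len(candidate) - 3))
            if not runs_of_four:
                schedule = candidate
                break
    return schedule

print("".join(vmcl_side_schedule()))  # e.g., 'LRLRRLLRRL...'
```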
Visuomotor conditional learning test
Mice were provided with one of two black-and-white images (spikes or horizontal bars, Figure 5C) placed in the center window. Once touched, the center image disappeared and white squares simultaneously appeared on the right and left of the screen. For this task, the center image of the spikes signaled that the mouse should touch the right square, while the center image of the horizontal bars signaled that the mouse should touch the left square. The two center images were presented pseudorandomly an equal number of times, and the mice had 2 s to touch the white square on either the right or left side of the central image, depending on the center image type. If the mouse touched the appropriate image (right or left side), this was considered a correct trial, and the mouse received a reward (7 µl) and then a 20-s ITI occurred. If the mouse touched the inappropriate image (right or left side), this was considered an incorrect trial, and the house light was illuminated for a 5-s timeout period followed by a 20-s ITI. If the mouse did not touch the white square (right or left side) within 2 s, this was considered a "missed" trial, and the house light was illuminated for a 5-s timeout period followed by a 20-s ITI. After either an incorrect or missed trial, a correction trial was run to protect against side bias.

Marble burying

Bedding was evenly distributed to cover the bottom of the cage and 20 glass marbles were laid gently on top of the bedding (four rows with five marbles in each row, evenly spaced). Mice were placed in the marble burying arena and given 20 min to explore and interact with the marbles. After 20 min of testing, marbles were scored by two independent observers and only marbles that were two-thirds or more covered in bedding were counted. The measure reported for this test is percent of marbles buried.
Open field
Open Field (OF) is a test for exploration and anxiety-like behavior (Yun et al., 2018; Tran et al., 2020). The open field arena measured 42 cm × 42 cm × 42 cm (opaque white Plexiglas, custom design, Nationwide Plastics). The center zone was established in EthoVision as 14 cm × 14 cm and corner periphery zones were set as 5 cm × 5 cm each. Each mouse was placed in the arena and allowed to freely explore the novel environment while being recorded for 5 min.
The parameters of open field (total distance of movement, entries and duration in the center zone, entries and duration in the periphery zone) were scored via EthoVisionXT software (Noldus Information Technology) using nose-center-tail tracking to determine position.
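For illustration, time in and entries into the center zone can be computed from tracked body-center coordinates roughly as below; the column names, the coordinate convention (arena center at the origin), and the 30-Hz sampling rate are assumptions for this sketch, not the EthoVision export format.

```python
# Sketch: compute center-zone duration and entries from tracked body-center
# coordinates. Assumes a 42 x 42 cm arena with a centered 14 x 14 cm zone and
# a fixed sampling rate; column names and rate are illustrative assumptions.
import pandas as pd

ARENA_CM, CENTER_CM, FPS = 42.0, 14.0, 30.0
c = CENTER_CM / 2.0

def center_zone_stats(track):
    """track: DataFrame with 'x_cm' and 'y_cm' columns (arena center at 0, 0)."""
    in_center = (track["x_cm"].abs() <= c) & (track["y_cm"].abs() <= c)
    prev = in_center.shift(1, fill_value=False).astype(bool)
    entries = int((in_center & ~prev).sum())          # transitions into the zone
    return {"center_s": float(in_center.sum() / FPS), "center_entries": entries}

# toy track: mouse mostly in the periphery, one brief center visit
demo = pd.DataFrame({"x_cm": [18, 17, 3, 2, 16, 15], "y_cm": [18, 16, 1, 0, 17, 18]})
print(center_zone_stats(demo))
```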
Social interaction
Social Interaction (SI) is a test of exploration, locomotion, and sociability (Yun et al., 2018). Test mice were individually placed in a white open-field chamber (42 cm × 42 cm × 42 cm) that had a discrete interaction zone against one wall (26 cm × 14 cm) inside of which there was an empty plastic and wire mesh container (10 cm × 6 cm). For the first trial, the mouse was placed randomly into either corner of the box opposite to the interaction zone, and the movements of the mouse were tracked using Ethovision software (Noldus Information Technology). Specifically, the time the mouse spent either in the interaction zone or in corners opposite to the interaction zone during a 2.5 min trial was quantified. For the second trial, which began ∼5 min after the first trial, an unfamiliar age and sex-matched C57BL/6J mouse was placed into the plastic and wire mesh container and the container was placed in the interaction zone. Again, the time the mouse spent either in the interaction zone or in corners opposite to the interaction zone during a 2.5 min trial was quantified. Measures reported for Social Interaction are time spent in the interaction zone without and with another mouse placed inside the plastic and wire mesh container.
Forced swim test
Forced Swim Test (FST) is a test of despair-like responses (Yun et al., 2018). The FST was performed to evaluate behavioral withdrawal induced by stress. Mice were placed in a 5-L beaker (Corning Inc. Life Sciences, Lowell, MA, United States) filled with 4 L of 25 ± 2 °C water. The movements of the mouse were tracked using Ethovision software (Noldus Information Technology) for the entirety of the 6-min session. Mice each went through two 6-min sessions on consecutive days. Immobile time was measured and the last 4 min of data are reported.
Rigor, Sex as a Biological Variable, Additional ARRIVE 2.0 Details, and Statistical Analysis

The experimental unit in this study is a single mouse. For behavioral studies, mice were randomly assigned to groups. Steps were taken at each experimental stage to minimize potential confounds. For example, mice from the two experimental groups (Sham and 56 Fe IRR) were interspersed throughout housing racks at UTSW, CHOP, and BNL (to prevent effects of cage location) and were interdigitated for all weighing and behaviors (to prevent an order effect). In regard to sex as a biological variable, this study only used females due to equipment limitations. Specifically, only eight touchscreen chambers were available (enabling us to train/test a maximum of 56 mice/day) and the chambers were in use 5-6 days/week for several months. Mice had to be trained/tested continuously (preventing us from alternating sexes on different days or running females in the first few months and males in the later months). Male mice irradiated at the same time were tested in classical behavior tests and are the focus of another study. Thus, this study design was intended to examine the impact of space radiation on female mice, not to examine sex differences in response to space radiation. Sample sizes were pre-determined via power analysis and confirmed on the basis of extensive laboratory experience and consultation with CHOP and PennMed statisticians, as previously reported (Whoolery et al., 2020). Exact sample number for each group in each figure panel is provided in Supplementary Table 1. Data for each group are reported as mean ± SEM. All analyses were hypothesis-based and therefore pre-planned, unless otherwise noted. Testing of data assumptions (normal distribution, similar variation between control and experimental groups, etc.) and statistical analyses were performed in GraphPad Prism (ver. 9.0.0). Normality was tested via Quantile-Quantile (Q-Q) plots followed by the Shapiro-Wilk test if needed. Since all data were found to be normally distributed, parametric tests were used for further statistical analysis. Analyses with two groups were performed using an unpaired, two-tailed Student's t-test. Analyses with more than two variables were performed using two-way ANOVA or Mixed-effects analysis with Bonferroni post hoc test; repeated measures (RM) were used where appropriate. Analysis of the distribution of subjects reaching criteria between control and experimental groups (survival curve) was performed with the Mantel-Cox test, and significance was defined as *p < 0.05. Following best practices to move beyond null hypothesis significance testing and reliance on the p-value, and to incorporate estimation statistics (Lakens, 2013; Halsey et al., 2015; Born, 2019; Calin-Jageman and Cumming, 2019; Makin and de Xivry, 2019; Wasserstein et al., 2019; Drummond, 2020), statistical approaches and results, including statistical significance (p-values) and effect size (when RM two-way ANOVA p < 0.05: partial omega-squared ω²p, where ≤0.05 is small, ≥0.06 medium, and ≥0.14 large), are provided in Supplementary Table 1. All data that are helpful for interpreting touchscreen performance are provided in main figures to enable consideration of "positive" data (p < 0.05) in the context of "negative" data. Detailed statistical results are also provided in figure legends for panels in which there is a main effect of Treatment and/or an interaction.
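As a hypothetical Python analogue of this analysis pipeline (the published analyses were run in GraphPad Prism), a Treatment x Session comparison with a follow-up effect size might look like the sketch below; the column names, the use of the pingouin package, and the partial omega-squared approximation from F and degrees of freedom are all assumptions made for illustration.

```python
# Hypothetical analogue of the Treatment x Session analysis described above:
# a mixed (between x within) ANOVA via pingouin plus an approximate partial
# omega-squared computed from F and the degrees of freedom. Illustrative only.
import pingouin as pg

def partial_omega_squared(F, df_effect, df_error):
    # Approximation: w2_p ~ df_eff*(F - 1) / (df_eff*(F - 1) + N), with N
    # taken as df_effect + df_error + 1 (exact only for simple between designs).
    num = df_effect * (F - 1.0)
    return num / (num + df_effect + df_error + 1.0)

def treatment_by_session_anova(df):
    """df columns (assumed): 'mouse', 'Treatment' (Sham / 56Fe), 'Session', 'score'."""
    aov = pg.mixed_anova(data=df, dv="score", within="Session",
                         between="Treatment", subject="mouse")
    aov["omega_p2"] = [partial_omega_squared(row["F"], row["DF1"], row["DF2"])
                       for _, row in aov.iterrows()]
    return aov
```

A Mantel-Cox style comparison of days-to-criterion curves could similarly be run in Python with a log-rank test (e.g., lifelines.statistics.logrank_test), though again this is only a possible substitute for the Prism analysis actually used.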
A total of n = 8 mice (n = 5 Sham, n = 3 IRR) were outliers based on a priori established experimental reasons (n = 1 Sham did not complete the "Punish Incorrect" stage even by Day 53; n = 1 Sham and n = 1 IRR did not complete "Acquisition" even by Day 58; n = 3 Sham and n = 2 56 Fe did not complete "Extinction") and the data from these mice were excluded from LDR, Acquisition, and Extinction analyses, respectively. Due to health issues, n = 2 Sham and n = 1 IRR mice were not run on classical behavior tests. Experimenters were blinded to treatment until analysis was complete.
Scripts
Prior to statistical analysis, extinction and acquisition data were sorted and extracted. We used a custom Python 3.8.3, SQLite3 2.6.0, and Tkinter 8.6 script developed by the Eisch Lab to extract, calculate needed values, and compile the data into a database. Extracting the data into an output CSV file was managed with another custom script, and these outputs were verified manually. Following this verification, the data were analyzed using GraphPad Prism 8 according to the tests detailed in the Statistical Analysis section. These scripts along with sample data files are available at https://github.com/EischLab/18AExtinction.
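As a rough picture of what such an extraction step looks like, the sketch below pulls per-session trial counts from a SQLite file into a CSV; the table and column names are placeholders invented here, since the actual ABET II database schema is not described in this text (the lab's real scripts are at the GitHub link above).

```python
# Minimal sketch of pulling per-session trial counts out of a SQLite database
# and writing them to CSV. Table and column names are illustrative placeholders,
# not the actual ABET II schema used by the Eisch Lab scripts.
import csv
import sqlite3

def export_session_summary(db_path, out_csv):
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT mouse_id, session_date, COUNT(*) AS trials, "
            "SUM(correct) AS correct_trials "
            "FROM trials GROUP BY mouse_id, session_date ORDER BY session_date"
        ).fetchall()
    finally:
        con.close()
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["mouse_id", "session_date", "trials", "correct_trials"])
        writer.writerows(rows)

# export_session_summary("touchscreen_data.db", "session_summary.csv")
```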
RESULTS

Whole-Body 56 Fe Particle Irradiation Does Not Change Body Weight or Locomotor Activity and Has Modest Effects on Operant Learning in Female Mice
Six-month-old female C57BL/6J mice received either Sham IRR or Frac whole-body 20 cGy 56 Fe (3 exposures of 6.7 cGy every other day, total 20 cGy; Figure 1). This total dose is submaximal to that predicted for a Mars mission, and the fractionation interval (48 h) was used to allow potential repair processes to occur (Thames, 1985; Cucinotta and Durante, 2006). Consistent with a prior report (Whoolery et al., 2020), this dose and fractionation interval of 56 Fe do not interfere with normal weight gain between groups (Figure 2A) or locomotor activity (Figure 2B, unpaired t-test, t(30) = 0.3919, P = 0.6979; Supplementary Table 1).
Beginning 3 mon post-IRR, Sham and 56 Fe IRR female mice began training on a touchscreen platform extensively validated in rodents (Figure 1A; Horner et al., 2013; Oomen et al., 2013; Hvoslef-Eide et al., 2016). Mice initially went through six stages of general touchscreen training (Figure 1A), with performance reflecting operant learning. Sham and 56 Fe IRR mice completed all stages of general touchscreen training in similar periods of time (Figure 2C and Supplementary Table 1). Thus, there was no gross difference in operant performance between these Sham and 56 Fe IRR mice.
The final stage of general touchscreen training, Punish Incorrect (PI, where an incorrect trial results in a timeout), was next analyzed in greater depth for two reasons. First, PI is the sole general touchscreen training stage that has an accuracy criterion (% correct). Second, PI is the stage that takes the longest to learn. We measured metrics relevant to accuracy on the first vs. last PI day [including individual session length, trial number, % correct, correct touches and latency, intertrial interval (ITI) touches, blank window touches, and reward latency; Figures 2D-L and Supplementary Table 1] to see how 56 Fe influenced operant learning (via changes in speed, impulsivity, motivation, side bias, etc.) (Swan et al., 2014). 56 Fe IRR mice took longer than Sham mice to complete a maximum of 30 trials on the first, but not the last, PI session (Figure 2D). However, on the first and last PI day, Sham and 56 Fe IRR mice performed similarly in many other PI metrics, including number of trials completed (Figure 2E), percent correct (Figure 2F), ITI touches (Figure 2G), latency of total correct touches to the left and right side of the screen (Figure 2H), left correct touch latency (Figure 2I), and right correct touch latency (Figure 2J). In regard to touching a blank window when the stimulus was presented, 56 Fe IRR mice had a nearly 5-s shorter latency vs. Sham mice during the last session of PI (Figure 2K), suggesting late-developing impulsivity in 56 Fe IRR mice. Motivation did not appear changed, as Sham and 56 Fe IRR mice had similar reward collection latencies (Figure 2L). This further analysis of the PI training stage suggests that 56 Fe IRR has modest effects on certain aspects of operant learning (IRR mice were slower to finish the first daily PI session and may be more impulsive late in PI), but in many other ways IRR mice performed indistinguishably from Sham IRR mice in PI and all other operant learning stages.
Female Mice Given Whole-Body 56 Fe Particle Irradiation Perform Better Than Sham Particle Irradiation Mice in an Appetitive-Based Location Discrimination Reversal Touchscreen Task
As previously reported, male mice given whole-body 56 Fe IRR perform better than Sham IRR mice in an appetitive-based location discrimination (LD) touchscreen task, suggesting improved behavioral discrimination or behavioral pattern separation (Whoolery et al., 2020). The effect of 56 Fe IRR exposure on translationally-relevant female mouse touchscreen performance is unknown, and specifically its effect on discrimination and cognitive flexibility is unknown. These are important knowledge gaps, as the proportion of United States female astronauts is increasing and some preclinical work shows the female rodent brain may be less susceptible after charged particle exposure vs. the male rodent brain (Krukowski et al., 2018;Parihar et al., 2020).
To test if whole-body 56 Fe IRR improves discrimination learning and impacts cognitive flexibility in adult female mice, Sham and 56 Fe female mice were trained and tested on LDR performance (Figure 1). In the LDR Train sessions (Figures 3A-E and Supplementary Table 1; presentation of a two-choice stimulus response with lit squares intermediately separated), there was a visual difference in the percent of Sham and 56 Fe subjects reaching criteria, but this difference was not supported by survival curve analysis (Figure 3B). Sham and 56 Fe mice also had similar average days to complete LDR Train (Figure 3C), completed a similar number of LDR Train trials (Figure 3D), and had similar accuracy before the first reversal (Figure 3E), indicating similar competency during the overall LDR Train sessions.
Sham and 56 Fe IRR female mice were then assessed on overall LDR performance (LDR Test, Figures 3F-V), considering performance when the LDR Test squares were maximally separated (Large separation, Figures 3G,I,K,M,O,Q,S,U) or minimally separated (Small separation, Figures 3H,J,L,N,P,R,T,V); data were analyzed by block (1 block = 4 days of LDR counterbalanced with two Large and two Small separation daily sessions, Supplementary Table 1). Sham and 56 Fe IRR mice took a similar amount of time to complete sessions for both the Large (Figure 3G) and Small separation LDR Test trials (Figure 3H), and completed a similar number of trials for both Large (Figure 3I) and Small separation LDR Test trials (Figure 3J). The performance of Sham and 56 Fe IRR mice was also assessed for location discrimination reversal learning, which provides insight into both discrimination learning and cognitive flexibility (Vivar et al., 2012; Swan et al., 2014; Graf et al., 2018). First, we assessed discrimination learning by analyzing the percent of correct trials made in each group prior to the 1st reversal. In Large, but not Small, separation trials, 56 Fe IRR female mice were ∼34% more accurate vs. Sham mice in percent correct trials prior to the 1st reversal (Large: Figure 3K; Small: Figure 3L). However, Sham and 56 Fe IRR mice had a similar number of reversals (an index of cognitive flexibility) in both Large (Figure 3M) and Small separation LDR Test trials (Figure 3N). Together these data suggest 56 Fe IRR mice have better discrimination learning than Sham mice, but 56 Fe IRR and Sham mice have similar cognitive flexibility.
We next probed how 56 Fe IRR mice were achieving greater accuracy in the Large separation LDR Test sessions (via possible changes in impulsivity, motivation, side bias, etc.) (Swan et al., 2014). 56 Fe IRR female mice touched the blank, non-stimulus window more than Sham mice during the 6th (last) block of the Large separation LDR Test (Figure 3O) and the 4th and 6th blocks of the Small separation LDR Test (Figure 3P), implying IRR-induced increased impulsivity. 56 Fe IRR and Sham mice had similar reward collection latencies in both the Large (Figure 3Q) and Small separation LDR Test blocks (Figure 3R). 56 Fe IRR and Sham mice took just as long to press the correct selection on the display in the Large (Figure 3S) and Small separation LDR Test blocks (Figure 3T), as well as the incorrect selection in the Large (Figure 3U) and Small separation LDR Test blocks (Figure 3V). Together these data suggest 56 Fe IRR mice have improved accuracy (yet increased impulsivity) in Large separation trials late in LDR Test, but no change in attention or motivation vs. Sham mice. 56 Fe IRR mice also have increased impulsivity in Small separation trials late in LDR Test, but no change in performance. These results suggest 56 Fe IRR mice are better than Sham IRR mice in key aspects of discrimination learning, despite showing impulsivity.
Whole-Body 56 Fe Particle Irradiation Does Not Change Acquisition or Extinction Learning of a Simple Stimulus-Response Task
To determine whether the observed IRR-induced increase in cognitive performance was limited to hippocampal-dependent tasks, we next tested for PFC-dependent executive function (Figure 4 and Supplementary Table 1). The same cohort of Sham and 56 Fe IRR female mice underwent simple stimulus-response learning (acquisition; Figures 1, 4A) followed by extinction of the acquired learning (Figure 4F). Stimulus-response learning was similar between the groups (Figure 4A), as >75% of mice in both Sham and 56 Fe IRR groups reached criteria by 60 days (Figure 4B) and both groups completed the task in a similar number of days (Figure 4C). In addition, when looking at general performance over the course of acquisition, Sham and 56 Fe IRR mice gave a similar number of correct responses to a stimulus (Figure 4E) in comparable times (Figure 4D), again suggesting similar simple stimulus-response learning between groups.
In extinction testing (Figure 4F), Sham and 56 Fe IRR mice took a similar number of days to reach criteria, indicating no difference in the rate of extinction learning (Figure 4H). Sham and 56 Fe IRR mice also had similar individual session lengths (Figure 4I) and reached a stable omission criterion (>24 out of 30 response omissions) over the course of testing (Figure 4J). To assess whether these same touchscreen-experienced mice differed in measures of potential impulsivity or general engagement with the screen, we analyzed blank touches, blank touch latency, and ITI touches (Figures 4K-M). Sham and 56 Fe IRR mice had a similar number of blank touches (Figure 4K), speed to make a blank touch (Figure 4L), and number of ITI touches (Figure 4M). Together these results suggest no effect of 56 Fe IRR on task-specific impulsivity. However, in the last extinction session, 56 Fe IRR mice took ∼1.5 s longer to give a correct response vs. Sham mice (Figure 4N). This small but significant increase in latency for 56 Fe IRR mice to give a correct response did not influence extinction performance. Therefore, taken together, these data suggest 56 Fe IRR does not influence extinction performance.
Female Mice Given Whole-Body 56 Fe Particle Irradiation Took Twice as Long as Sham Particle Irradiation Mice to Reach Stimulus-Response Habit Learning Criteria
It has been suggested that systems of "declarative" vs. "habit" memory, which rely on the medial temporal lobe (e.g., hippocampus) and basal ganglia (e.g., caudate-putamen), respectively, compete with one another during behavioral tasks (Poldrack and Packard, 2003). These systems are proposed to be identifiably separable and to function simultaneously, "overriding" one another during various learning tasks. To assess whether the observed IRR-induced increase in hippocampal-dependent discrimination learning occurs at the expense of striatal memory circuit functional integrity, a parallel group of mice was used to assess the influence of 56 Fe IRR on visuomotor conditional learning (VMCL; Figures 1, 5 and Supplementary Table 1). VMCL reflects stimulus-response habit or "rule-based" learning and relies on intact circuits of the striatum and posterior cingulate cortex.
Similar to what was seen with the parallel cohort of mice (Figure 2C), during general touchscreen training this cohort of Sham and 56 Fe IRR mice completed each training phase in a similar number of days ( Figure 5A). Thus, in two parallel cohorts -one assessed 3-mon post-IRR and the other 1-mon post-IRR -there was no overt effect of 56 Fe IRR on operant learning. Unfortunately in-depth accuracy analysis of the last stage (Punish Incorrect) was not possible due to computer file inaccessibility. In VMCL Train -an intermediate training phase prior to VMCL Test -Sham and 56 Fe IRR mice also did not differ in completion days (26.27 vs. 25 days, respectively, Figures 5B,D). However, in the VMCL Test (Figure 5C), 56 Fe IRR mice took nearly twice as many days vs. Sham to reach criteria ( Figure 5D). These data suggest a 56 Fe IRR-induced impairment in the rate of striatal-mediated learning.
To assess whether a slower rate of VMCL learning in 56 Fe IRR mice could be explained by behavioral deficits evident in earlier training stages, we analyzed VMCL Train performance in depth (Figures 5B,E-I). In VMCL Train, a similar proportion of Sham and 56 Fe IRR mice reached criteria over time (50% of subjects reached criteria at 31 days in both Sham and 56 Fe IRR mice; Figure 5E). Sham and 56 Fe IRR mice also had similar lengths of training sessions (Figure 5F), number of training trials (Figure 5G), response accuracy (Figure 5H), and number of correction trials following an incorrect response (Figure 5I). Taken together, these results suggest no gross impact of 56 Fe IRR on the ability to complete VMCL Train.
In VMCL Test (Figure 5C), a similar proportion of Sham and 56 Fe IRR mice reached criteria over the entire VMCL Test period (Figure 5J). Sham and 56 Fe IRR mice had similar VMCL Test performance as indicated by similar session length (Figure 5K), accuracy (Figure 5L), percentage of trials missed due to inactivity (Figure 5M), and number of incorrect trials made during the initial choice stage (Figure 5N). Therefore, while 56 Fe IRR mice took more days than Sham mice to complete VMCL Test at the set accuracy criterion, this was not due to a difference in other VMCL Train and Test performance measures.
Whole-Body 56 Fe Particle Irradiation Does Not Change Measures Relevant to Anxiety, Depression, Repetitive Behavior, and Sociability

56 Fe IRR-induced improvements in discrimination learning and the increased number of blank touches during LDR Test could be explained by increased compulsivity or other stereotypic behaviors, or by alterations in anxiety- or despair-like behaviors. To assess these possibilities, the same touchscreen-experienced Sham and 56 Fe IRR mice were run on a variety of classic non-touchscreen behavior tests including elevated plus maze, marble burying, open field, social interaction, and forced swim test (Figures 1, 6A).
To analyze and compare anxiety-like behavior between Sham and 56 Fe IRR groups, mice were exposed to the elevated plus maze and open field, both well-validated anxiety tests in rodent models. In the elevated plus maze, Sham and 56 Fe IRR mice spent a similar amount of time in both the open and closed arms (Figures 6B,C and Supplementary Table 1). In the open field, Sham and 56 Fe IRR mice traveled a similar total distance and spent a similar amount of time in both the predefined center and corner areas (Figures 6E-G).
An additional test for anxiety-like behavior that also provides an index of repetitive, compulsive-like behavior is marble burying (Thomas et al., 2009;Angoa-Pérez et al., 2013;de Brouwer et al., 2019). Thus, the same female mice also were assessed in a marble burying test. Both Sham and 56 Fe IRR mice buried a similar percentage of marbles during a 30-min session, which implies a lack of potentially pathological 56 Fe IRR-induced stereotypic and compulsive behavior ( Figure 6D).
In certain transgenic rodent models of autism, mice show improved behavioral "pattern separation" alongside social deficits (Benevento et al., 2017). To assess whether 56 Fe IRR-induced improvements in discrimination learning shown here were accompanied by social deficits, Sham and 56 Fe IRR female mice were exposed to an age-, sex-matched conspecific in a social interaction test. Sham and 56 Fe IRR mice spent a similar amount of time in a predefined interaction zone in the absence and presence of a conspecific target (in a plastic and wire mesh enclosure), and spent relatively more time in the interaction zone in the presence vs. absence of a conspecific ( Figure 6H). These data indicate no effect of 56 Fe IRR on sociability.
We finally examined whether the 56 Fe IRR-induced improvement in learning was related to behavior under conditions that mimic despair (Lezak et al., 2017) by exposing the mice to the forced swim test, which is often used to assess anti-depressive-like efficacy and stress coping (Armario, 2021). Both Sham and 56 Fe IRR mice spent a similar amount of time immobile during the 6-min test (Figure 6I), suggesting no 56 Fe IRR-induced despair-like phenotype.
DISCUSSION
Here we provide a behavioral profile of female C57BL/6J mice that received fractionated exposure to a Mars mission-relevant dose of whole-body 56 Fe IRR at 6-mon of age. From 7- to 18-mon of age, 56 Fe IRR mice and their Sham counterparts, which received every experimental manipulation except placement in front of the beam line, were examined via touchscreen and classical behavior tests to assess a range of cognitive abilities. We leveraged the power of the touchscreen platform to provide a holistic, multi-dimensional perspective on mouse behavior, performing in-depth analyses that have become the gold standard in the field (Oomen et al., 2013; Beraldo et al., 2019). Six differences emerged between Sham and 56 Fe IRR mice. Relative to Sham female mice, 56 Fe IRR female mice (1) took longer (27% longer) to complete the first session of the last stage of general training; (2) took ∼5 s less (71% less time) to touch a blank window during the last stage of general training; (3) had a greater percent of correct trials (34% more) when distinguishing conditioned stimuli separated by a large (but not small) distance, specifically prior to their first reversal and late in location discrimination reversal testing; (4) touched a blank window more when distinguishing conditioned stimuli separated by a large or small distance (range of 43-63% more), specifically late in location discrimination reversal testing; (5) took ∼1 s longer (34% longer) to touch the lit window (an incorrect response) in the last extinction session; and (6) took more than twice as many days (57% as many) to reach criteria in the visuomotor conditioned learning test. Sham and 56 Fe IRR female mice were similar in the many other touchscreen and classical behavior metrics collected. Below we discuss these findings in mice as they relate to cognitive domains, indicate the strengths and limitations of our work, and speculate on what these findings mean for NASA's risk assessment for female astronauts in future deep space missions.
Our touchscreen data show that 56 Fe IRR female mice had ∼34% more correct trials relative to Sham mice during Large separation trials prior to their first reversal in LDR Test. The effect size for this 56 Fe IRR-induced increase in location discrimination is small (Supplementary Table 1), and it is only seen in trials that use a Large, not Small, separation. In fact, both Sham and 56 Fe IRR female mice perform just above chance (∼50% correct prior to first reversal, which is expected given the challenge of the task and the age of the mice at testing), and it is only in the last block that the 56 Fe IRR mice perform better. These caveats aside, these data suggest 56 Fe IRR improves location discrimination reversal learning in female mice vs. Sham female mice. Given the well-described role of the hippocampus in LD and LDR (Clelland et al., 2009; McTighe et al., 2009; Swan et al., 2014), one interpretation of these data is that 56 Fe IRR improves hippocampal function or perhaps integrity. There are three notable aspects of this interpretation. First, while we cannot make any direct sex comparisons or claim sex-specific effects of fractionated 56 Fe exposure on cognition, it is appropriate to mention prior work performed with male mice. Prior work with male mice reported that the same 56 Fe IRR parameters used here improved discrimination learning in an LD test (Whoolery et al., 2020); LDR was not assessed in that study. In that study, male mice had a higher percentage of accurate responses and reached LD criteria in fewer days relative to Sham male mice. While there are additional distinctions between these studies, it is notable that both female and male mice exposed to 56 Fe IRR show indices of improved LD or LDR, and thus perhaps improved hippocampal function, vs. Sham mice. Second, it is interesting to compare the interpretation of the present data (56 Fe IRR female mice have improved LDR and hippocampal function) to prior literature on the impact of space radiation on hippocampal function. Many rodent studies suggest HZE particle exposure is detrimental to brain physiology and functional cognitive output, with noted negative impact on hippocampal function and also on operant behavior (Rabin et al., 2004, 2007b; Blakely and Chang, 2007; Rabin, 2012; Cucinotta et al., 2014; Kokhan et al., 2016; Nelson, 2016; Cekanaviciute et al., 2018; Jandial et al., 2018; Cucinotta and Cacao, 2019; Kiffer et al., 2019; Limoli, 2020; Britten et al., 2021b). It is only relatively recently that rodents have been irradiated at "astronaut age," that low doses of space radiation have been used to perturb hippocampal function, and that female rodents have been more commonly studied. Indeed, some work suggests female rodents are more susceptible than males to space radiation, while other preclinical work suggests the female rodent brain may be protected from radiation-induced immune and cognitive deficits (Villasana et al., 2006, 2010; Cherry et al., 2012; Krukowski et al., 2018; Hinkle et al., 2019; Liu et al., 2019; Parihar et al., 2020).
FIGURE 6 | Timeline and results of classical behavior tests show anxiety-, compulsive-like behavior, and despair-like behavior and sociability are generally unaffected in female mice exposed to whole-body 56 Fe IRR at 6-mon-old compared to Sham.
With these studies in mind, it is notable that touchscreen analysis of both female and male mice (of "astronaut age" at time of exposure) shows 56 Fe IRR improves LDR (present results) or LD (Whoolery et al., 2020), respectively, vs. Sham exposure without influencing other cognitive domains (exceptions in female mice are discussed below). However, in contrast to our present work in female mice, another study with mature male mice exposed to a single bolus of whole-body low dose 56 Fe showed a dose- and time-dependent impact on a non-touchscreen hippocampal-dependent task: novel object recognition (Impey et al., 2016). Specifically, 2 weeks post-IRR, male mice exposed to 0.1 or 0.4 cGy spent a similar percentage of time investigating a familiar and novel object, while mice exposed to 0.2 cGy spent a greater percentage of time investigating a novel object. Twenty weeks post-IRR, all IRR mice spent a greater percentage of time investigating a novel object. Future studies are needed to assess whether female mice have a similar disruption of hippocampal-dependent function soon after IRR, and to understand how results from a behavioral test that takes relatively few days to run (novel object) relate to a behavioral test that requires weeks to months to run (touchscreen training and testing). A third perspective on these data with 56 Fe IRR improving LDR in female mice is highlighted in recent work showing that the rodent brain has a compensatory, dynamic, time-dependent response to 56 Fe IRR (Miry et al., 2021). More longitudinal studies are needed to clarify the time course of the LDR "improvement" reported here in 56 Fe IRR female mice.
Another outcome of our touchscreen data is that 56 Fe IRR female mice take more days to reach criteria relative to Sham mice in VMCL Test, suggesting impaired stimulus-response habit learning. Given the reliance of stimulus-response rule-based habit learning on intact striatal circuits (Delotterie et al., 2015), our data suggest 56 Fe IRR female mice have striatal/basal ganglia dysfunction. Of note, while we did not observe behavioral changes that are indicative of gross striatal dysfunction (normal locomotor and marble burying behavior, etc.), both high and low doses of HZE particles can produce maladaptive striatal plasticity and/or compromise the dopaminergic system of rodents (Joseph et al., 1992, 1993, 1994; Kiffer et al., 2019). Our finding that 56 Fe IRR female mice have impaired stimulus-response rule-based habit learning in an operant touchscreen task may call to mind work in retired breeder male rats where exposure to ≤15 cGy of 600 MeV/n 56 Fe particles impairs the acquisition, but not the long-term memory, of rules in the attentional set-shifting assay (Jewell et al., 2018). However, the operant nature of the touchscreen task used in the present work is in contrast to the associations that must be made in attentional set-shifting, thereby limiting comparisons between these studies. On a more comparable level, our finding that 56 Fe IRR female mice have impaired stimulus-response habit learning is distinct from what is seen in male mice, as 56 Fe IRR male mice perform similarly to Sham mice on VMCL (Whoolery et al., 2020). These results are of course not directly comparable since the interval between IRR and touchscreen training/testing was 2 mon for females but 4 mon for males.
A 56 Fe IRR-induced deficit in stimulus-response rule-based habit learning in female mice is actually the opposite of our hypothesis, which was fueled by studies suggesting the female rodent brain may be spared from the negative impact of HZE particle exposure (Rabin et al., 2013; Krukowski et al., 2018). More specifically, when exposed to space radiation, female mice do not show deficits in social interaction or novel social and object recognition memory, do not show anxiety-like phenotypes, and do not have the microglia activation and hippocampal synaptic losses seen in IRR male mice (Krukowski et al., 2018; Parihar et al., 2020). Thus, our data presented here add to the growing literature that whole-body exposure to HZE particles, such as 56 Fe, affects cognition of female mice in a circuit-specific manner.
While there are many manuscripts that report the influence of HZE particle exposure on rodent operant behavior (Rabin et al., 2002, 2005a,b,c, 2007a, 2011, 2013, 2014a,b, 2015a,b, 2018, 2019a; Shukitt-Hale et al., 2007; Cahoon et al., 2020; Whoolery et al., 2020), few use Mars-relevant doses of HZE (<1 Gy) and delivery (e.g., whole-body exposure) (Rabin et al., 2014b; Cahoon et al., 2020; Whoolery et al., 2020), and only one uses rodents that are "astronaut age" at irradiation (∼6 mon old at time of exposure) (Whoolery et al., 2020). "Operant behavior" is an umbrella term, and these three publications test different types of operant behavior. Despite this and other distinct experimental parameters (e.g., rodent species, radiation particle used), it may be useful to compare the impact of space radiation on operant behavior as reported in these three studies. In studies with young male rats (Rabin et al., 2014b; Cahoon et al., 2020), whole-body exposure to doses of 16 O (1, 5, 10, and 25 cGy) decreased performance of a striatal-dependent operant task (responding on an ascending fixed-ratio reinforcement schedule) 2-mon post-IRR, while 56 Fe had dose- and time-post-IRR-dependent effects; both 25 and 50 cGy decreased performance 3-mon post-IRR, and 25 cGy actually improved performance 11-mon post-IRR. In the study with mature ("astronaut age") male mice (Whoolery et al., 2020), whole-body fractionated exposure to 56 Fe did not change performance on either an operant task that engages PFC-perirhinal cortex-striatal circuits (pairwise discrimination) 2-mon post-IRR or a task that engages the striatum (VMCL) 4-mon post-IRR, but improved performance on LD. The present work is the only study to test operant behavior in mature female rodents after whole-body exposure to a Mars-relevant space radiation regimen. Clearly more work is needed to understand how space radiation influences the very broad spectrum of operant performance in both female and male rodents.
Taken together with our data presented here on the performance of female mice on LDR, it is notable that female mice perform "worse" on a striatal-dependent task, VMCL, but "better" on a hippocampal-dependent task, LDR. The present VMCL Test results are therefore interesting in regard to the theory of multiple memory systems (Nadel, 1992;Poldrack and Packard, 2003). Human and non-human memory studies suggest memory formation and consolidation are dependent on both hippocampal and non-hippocampal (i.e., basal ganglia or striatal) cooperative networks or memory systems. These systems encode for different memory types, with hippocampal circuits encoding relational memory for declarative past events and striatal circuits encoding for acquisition of stimulus-response rule-based habit learning and some forms of Pavlovian conditioning (Poldrack and Packard, 2003). These networks also compete. In amnesic patients with partial temporal lobe damage, hippocampal-reliant recognition memory is decreased while striatal-mediated motor learning is spared (Tranel et al., 1994). Conversely, in patients with basal ganglia damage, striatal memory function is decreased while hippocampal memory function is spared (Heindel et al., 1989). Here we report an 56 Fe IRR-induced improvement in hippocampal-based discrimination learning in female mice yet deficits in striatal-dependent rule-based learning. We advocate for more specific evaluation of striatal-reliant behavioral patterns after HZE exposure using other touchscreen (i.e., autoshaping) or other operant paradigms (rodent psychomotor vigilance test) (Davis et al., 2016), as such studies may clarify whether the improved discrimination learning shown here is accompanied by general basal ganglia-learning deficits. Additional study is also needed to determine if the results presented here in female mice-improved hippocampal-based LDR, worse striatal-based VMCL-are a result of memory system competition.
In addition to the improved performance in LDR and decreased performance in VMCL, there are two other aspects of our data worth discussing. One, 56 Fe IRR and Sham female mice in general performed similarly in Acquisition and Extinction, suggesting no difference in prefrontal cortical function. An exception is response latency; 56 Fe IRR female mice take ∼1.5 s longer than Sham female mice to press the image. While this difference did not influence any other metric in Extinction, it is notable since the "correct" response in Extinction is to not touch the image. Here the 56 Fe IRR female mice still press the image (which is an incorrect response) but take slightly longer to press it. Future studies will be needed to assess whether this longer latency means the 56 Fe IRR mice are "in conflict" about making a response (but press it anyways). Two, there are indications that 56 Fe IRR mice may be more impulsive. In the last and longest stage of general touchscreen training (PI), 56 Fe IRR mice tested 3-mon post-IRR took 71% less time to touch blank windows in the last session vs. Sham mice. This is notable in that 56 Fe IRR mice initially had 27% longer sessions early in PI. This faster blank touch in 56 Fe IRR mice is not due to changes in attention, locomotor ability, or motivation since there are no differences between 56 Fe IRR and Sham mice in the latency to touch the correct image or collect the reward. We interpret the shorter latency to touch a blank window in the last PI session 3-mon post-IRR as 56 Fe IRR-induced impulsivity. It is unclear if the faster blank touch latency in 56 Fe IRR mice late in PI is due to time post-IRR; we were unable to assess latency and other accuracy metrics in Sham and 56 Fe IRR mice tested 1-mon post-IRR due to computer file issues. Another suggestion of impulsivity from our data is that 56 Fe IRR mice touched blank windows more in both Large and Small separation trials near the end of LDR Test. As research suggests striatal circuits can also be involved in impulsive as well as habit behaviors (Fineberg et al., 2010;Lipton et al., 2019), it will be interesting for future space radiation studies to more specifically target assessment of impulsivity as it relates to striatal function and integrity.
The mechanism underlying the 56 Fe IRR-induced improvement in discrimination learning and decrement in stimulus-response rule-based habit learning is unknown, although the hippocampus and striatum, respectively, are linked to these functions (Clelland et al., 2009; McTighe et al., 2009; Horner et al., 2013; Oomen et al., 2013; Delotterie et al., 2015). Interestingly, a recent study in both female and male mice reports 56 Fe IRR-induced changes in hippocampal cellular, synaptic, and behavioral plasticity 2-mon post-IRR normalize 6-mon post-IRR, and are actually enhanced 12-mon post-IRR (Miry et al., 2021). Therefore a reasonable hypothesis is that the improved hippocampal-dependent discrimination learning and decreased striatal-based habit learning shown here in female mice are due to dynamic and compensatory processes post-IRR that are brain-region specific. Future assessment of this hypothesis ideally would continue to include behavioral tests reliant on other brain regions (such as the PFC) as we have done here and as others have done as well (e.g., Britten et al., 2016; Parihar et al., 2018; Acharya et al., 2019; Liu et al., 2019; Allen et al., 2020; Whoolery et al., 2020; Miry et al., 2021).
There are limitations to the present study. The first limitation is our use of a fractionated exposure. In principle, in vivo whole-body fractionated exposure to single (or mixed) particles of space radiation has translational relevance even beyond NASA (Held et al., 2016; Simonsen et al., 2020). In the brain, there is a limited literature on the effect of fractionated vs. non-fractionated in vivo whole-body exposure to a space radiation-relevant single particle; more studies have been done in other systems, such as the cardiac system (Leith et al., 1982; Chang et al., 2007; Rivera et al., 2013; Whoolery et al., 2017, 2020; Mao et al., 2021). An increasing number of in vivo studies on space radiation and cognition deliver fractions of mixed rather than single beams (Kiffer et al., 2018; Krukowski et al., 2018; Raber et al., 2019; Holden et al., 2021). However, HZE particle exposure is stochastic, making it challenging for the field to agree on an in vivo fractionation exposure paradigm. This challenge likely has contributed to fractionation being underutilized in in vivo basic science experiments, which raises another obstacle: the difficulty in comparing data from fractionation experiments to studies where a similar dose and energy of radiation are given in a non-fractionated manner (e.g., a single bolus to the whole body). On a related note, the present work uses a fractionation interval (48 h) that putatively allows potential repair processes to occur (Thames, 1985; Cucinotta and Durante, 2006). While we and others have published indices of DNA damage (e.g., 53BP1) in brain tissue after exposure to space radiation (DeCarolis et al., 2014) and DNA damage indices are evident in normal brain tissue and after injury or radiation (Robbins et al., 2012; Watson and Tsai, 2017; Davis and Vemuganti, 2021), further research is needed to determine if this 48 h fractionation interval is applicable to the brain and if a model of DNA damage and repair that is highly influential in radiobiology has relevance to brain tissue and cognitive function (Curtis, 1986). While no fractionation regimen will suit all scientists, for relevance to deep space missions future in vivo studies examining the influence of space radiation on the brain may benefit from using protracted low-dose-per-fraction regimens or just chronic exposure, as have been used in rodents (Brown et al., 2005; Acharya et al., 2019; Borak et al., 2019; Britten et al., 2021a; Krishnan et al., 2021). A second limitation is that the two cohorts of female mice assessed here (tested on LDR/Ext/classical behaviors vs. VMCL, Figure 1A vs. 1B) underwent touchscreen testing at two different institutions due to the lab moving institutions. Since the experiments were distinct between the two institutions, it was not possible to perform an analysis in which institution was a covariate. Ideally the behavioral experiments here would be performed again in the future at a single institution, and with sufficient resources and equipment to enable several dependent measures to be assessed in both groups at the same time post-IRR. However, the reliability (including inter-institutional reliability) of the operant touchscreen platform is well documented (Beraldo et al., 2019; Dumont et al., 2020; Sullivan et al., 2021), and therefore we felt it appropriate to present these data in the same work. Third, due to equipment limitations, this study focuses only on female mice.
It is inappropriate for us to compare the female performance reported here with our prior work on male mice irradiated with similar exposure parameters (Whoolery et al., 2020), as that prior work used a distinct LD paradigm relative to the one used here. For example, in that prior work with male mice, the CS+ alternated at the beginning of each day, while in the present work the CS+ "reversed" throughout each test after criteria were met. Also, those male mice received a random mix of large and small separation stimuli within each day of testing, while in the present work female mice received 2 days of large separation and two subsequent days of small separation. While the goal of the present study was to examine the impact of space radiation on female mice (not to examine sex differences in response to space radiation) and thus only female mice were examined, ideally future mechanistic studies would assess both female and male mice in parallel. Fourth, the touchscreen and radiation work shown here in mice and elsewhere in rats are appetitive tasks, typically employing food (or water) restriction to increase the rodent's motivation to perform (e.g., Davis et al., 2014; Hadley et al., 2015; Jewell et al., 2018; Cahoon et al., 2020). In male mice, food restriction and touchscreen training transiently increase corticosterone levels in fecal boli (Mallien et al., 2016). Notably, levels of this stress hormone return to baseline after 2-6 weeks, a period of time during which general touchscreen training can be completed. While the benefits of a reward-based test (where aversive stimuli are avoided but food restriction is employed) may outweigh the drawbacks (Bussey et al., 2012; Horner et al., 2013), the influence of food or water restriction on stress is a limitation that should be kept in mind when interpreting these touchscreen behavior results (as well as results from any appetitive tasks that use food restriction). A final limitation is that the classical behavior tests in the present work were performed months after the touchscreen testing. The length of time between these two types of testing makes it difficult to know what effect space radiation would have on classical behavior if tested at the same time post-IRR as touchscreen testing. Ideally future experiments will test if there is indeed a lack of IRR-induced change in classical behavior performance by examining parallel groups of mice tested in classical vs. touchscreen behaviors.
In conclusion, we have used a translationally-relevant rodent touchscreen battery to analyze the functional integrity of female mouse cognitive domains and associated brain circuits following exposure to the HZE particle 56 Fe, a major component of space radiation that is a potential threat to the success of future crewed interplanetary missions. Our data in female mice: (1) suggest an IRR-induced competition between memory systems, as we see improved hippocampal-dependent memory and decreased striatal-dependent memory, (2) show that IRR induces sex-specific changes in cognition, (3) suggest the power that extensive multimodal behavioral analyses would have in helping standardize reporting of results from disparate behavioral experiments, and (4) underscore the importance of measuring multiple cognitive processes in preclinical space radiation risk studies, thereby preventing NASA's risk assessments from being based on a single cognitive domain.
DATA AVAILABILITY STATEMENT
Data will be made available on written request to the corresponding authors.
ETHICS STATEMENT
The animal study was reviewed and approved by three Ethics committees [the Institutional Animal Care and Use and a 2021 NASA HERO grant (80NSSC21K0814). CW was supported by an NIH Institutional Training grant (DA007290, PI AE). YR was supported by an Undergraduate Translational Research Internship Program under Penn's Institute for Translational Medicine and Therapeutics which is supported by an NIH Institutional Clinical and Translational Science Award TR001878, PI: G. A. Fitzgerald). AS was supported by the American Heart Association (14SDG18410020), NIH/NINDS (NS088555), the Dana Foundation David Mahoney Neuroimaging Program, and The Haggerty Center for Brain Injury and Repair (UTSW). FK was supported by the Translational Research Institute for Space Health (TRISH) through NASA cooperative agreement NNX16AO69A. This research was also supported by NASA grants NNX07AP84G (co-I AE), NNX12AB55G (co-I AE), and NNX15AE09G (PI AE), NIH grants DA007290, DA023555, DA016765, and MH107945 to AE and R15 MH117628 (PI K. G. Lambert), and a pilot grant from the University of Pennsylvania Perelman School of Medicine Department of Radiation Oncology (co-PI with Y. Fan). The content of this work is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or NASA.
A model of the twin-cam compound bow with cam design options
A mathematical model of an archery twin-cam compound bow is introduced. The deflection of the limb tip is based on the modified Hickman model of the traditional bow. The cams are modelled with the help of cubic splines, and the derived non-linear equations are simplified and solved numerically. The force-draw curve of a commercial twin-cam compound bow is measured and compared to the prediction of the model. As an example, the cams of the compound bow are virtually modified in order to improve the effectiveness of the bow. The model presented here can be used when adjusting the twin-cam compound bow and also when designing the cams.
Introduction
The archery compound bow is a bow with pulleys at the tips of the bow limbs, which offer mechanical advantage to the archer. When drawing a typical compound bow, the required force increases to the maximum more or less rapidly, but then decreases to the local minimum value in the full draw [1]. This nonlinear behaviour can be controlled by careful design of the shape and size of the pulleys (or cams).
In the earlier studies concerning compound bow models, the pulleys at the tips of the limbs are either similar systems both consisting of two round eccentrics [2], similar systems both consisting of one round wheel and one non-round cam [3], or a combination with one round wheel at one tip of the limb and a system of three non-round cams at the other [4,5]. In the models including non-round cams, the cams are approximated as components with mixed properties of a circle and a changing lever at the same time, in order to simplify the admittedly complex system of the compound bow. However, from the viewpoint of cam design, a more detailed treatment of cams is needed.
This paper provides a more accurate solution to the problem of modelling the compound bow with non-round cams. The considerations are restricted to bows with similar two-cam systems at the tips of the limbs. We may call this bow type a twin-cam compound bow.
Mathematical model
Let us consider a compound bow with similar cam systems at the tips of the limbs. Let us assume that the bow is symmetric with respect to some vertical line, which is also the line in which the arrow moves when the bow string is drawn or released from the center of the string; the reader should note that this is not always the case for real twin-cam bows. We further assume that the bow is in a horizontal position so that the grip and riser are above the cables and the string, and the line between the axle points of the cams is horizontal, as in Fig. 1.
The cam system at the tip of the limb consists of two cams, which are firmly attached to each other, and the system can rotate only around the axle, from which the system is connected to the tip of the limb. The string is wrapped around the string cams of both the left-hand and the right-hand limb. From one end, the upper cable is twisted around the cable cam of the left-hand limb, and the other end of this cable is connected to the axle of the right-hand cam system, as presented in Fig. 1. The lower cable is similar with respect to the right-hand cam system. In real bows there is also a cable guard, which shifts both cables slightly aside to clear the way for the arrow. However, for the sake of simplicity and symmetry we shall ignore the cable guard and assume that the cables are straight and on the same plane with the string. We shall also suppose that the cables and the string are inextensible, and that they do not slide with respect to the cam systems.
Let us also further assume that the Hickman [6] model, with the modification presented in [2], can be used to approximate the bending of the limbs, where the limb is assumed to be a rigid rod which bends only about one hinge point located somewhere between the tip and the bottom of the undeflected straight limb. Due to symmetry, we may restrict our considerations to the left-hand limb and cam system of Fig. 1. The symbols used are: a, the angular coordinate of the cable cam, i.e., the angle between the ray from the axle point to the edge point of the cable cam and the polar axis (the positive horizontal x-axis with the axle point as origin). Moreover, a dot is used when referring to the derivative of the cable or string cam radius function with respect to its variable, and the subscript ''0'' is used when referring to the value of the respective variable in the initial position.
Let us choose a conventional Cartesian coordinate system and the axle point of the left-hand cam system of Fig. 1 as its origin. For the polar coordinates (a, r c (a)) and (b, r s (b)), let us also choose a polar coordinate system with the same axle point as its pole and the positive x-axis of the Cartesian system as its polar axis in the initial position. However, we shall fix the polar coordinates to the cam system, so if the cam system rotates with respect to the initial position the polar coordinate system rotates with it. We assume the initial values e 0 , L, A, k, g, h U are given. If r c = r c (a) and r s = r s (b) are differentiable and known in polar coordinates, the slopes of the tangent lines of these cam curves in our Cartesian system can be expressed [7] as Eqs. (1) and (2). Using the tangent angle addition formula, it is easy to derive the equations for the slopes of the tangent lines in our Cartesian system after the cam system has rotated counter-clockwise by the angle u. The following equivalences can be seen from Figs. 2, 3, 4, 5 and 7. In the initial position (Figs. 2, 4 and 6) the polar axis of the polar coordinate system coincides with the Cartesian x-axis, hence a 0 = d 0 and b 0 = e 0 . Then Eqs. (7) and (8) simplify accordingly. Further, the following relations can also be seen from Figs. 3 and 7. In the initial position the rotation angle u = 0, then n c (a 0 , 0) = m c (a 0 ) and n s (b 0 , 0) = m s (b 0 ) = 0.
Remembering that a 0 = d 0 and b 0 = e 0 , Eqs. (11) and (2) give, in the initial position, the equations from which a 0 and b 0 can be obtained. There is no analytical solution for these equations, so for iteration the Brent-Dekker (BD) method [8] was chosen. From Eqs. (13) and (14) we get, by using Eqs. (3), (5) and (10) in the initial position, where u = l = 0, the equations from which the initial values c 0 and s 0 can now be solved. The lengths of the straight cable and of the straight half-string then follow, so c can now be calculated immediately. By using Eqs. (3), (13) and (9) one obtains the equation from which b can be solved, using the BD method for example. After this, the unknowns m s , n s , f, e, l and s can be calculated from the corresponding equations.
so the angle h and its initial value h 0 can be solved. The draw, as defined in this paper, is measured according to Fig. 7. The rest of the equations needed for completing the model can be taken from paper [2], where k = aAL is the spring constant of the elastic portion of the bow limb (N/rad).
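Since the tangency conditions above have no closed-form solution, any bracketing scalar root finder can be used in practice. The sketch below (in Python, using SciPy's implementation of Brent's method) is a minimal illustration of this numerical step only; the residual function and the cam data in it are hypothetical stand-ins, not the paper's actual Eqs. (15) and (16).

```python
# Minimal sketch: solving a scalar tangency condition g(a0) = 0 with the
# Brent-Dekker method, as done in the paper for the initial cam angles.
# NOTE: tangency_residual below is a hypothetical placeholder, not the
# paper's actual Eq. (15)/(16); it only illustrates the numerical workflow.
import numpy as np
from scipy.optimize import brentq
from scipy.interpolate import CubicSpline

# Hypothetical cable-cam radius function r_c(a), built from made-up data.
a_data = np.linspace(0.0, 2.0 * np.pi, 25)
r_data = 0.03 + 0.01 * np.cos(a_data)            # metres, illustrative only
r_c = CubicSpline(a_data, r_data, bc_type='periodic')

def tangency_residual(a0, g=0.40):
    """Placeholder residual: the true condition couples r_c, its derivative
    and the limb geometry; here a simple stand-in expression is used."""
    return r_c(a0) * np.cos(a0) + r_c(a0, 1) * np.sin(a0) - 0.01 * g

# Bracket the root and solve with Brent's method (SciPy's brentq).
a_lo, a_hi = 0.1, 3.0
a0 = brentq(tangency_residual, a_lo, a_hi, xtol=1e-10)
print(f"initial cam angle a0 = {a0:.6f} rad")
```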
Results of model testing
The model was first tested by using the parameters of the round-wheel compound bow measured in [2]. The string and the cable cam radius functions were obtained by cubic spline interpolation of the polar transformations of the known eccentrics of paper [2]. The first and the second derivatives of the cubic spline are also continuous, which is an important minimum requirement for the displacement diagrams of high-speed cam systems [9].
Using the parameter values of Table 1 in [2], the values of a 0 and b 0 were first obtained from Eqs. (15) and (16). The domain for the prime variable a was chosen to be a 0 − 260 ≤ a ≤ a 0 , and 2000 evenly distributed values of a were selected from that domain. After this, the procedure described in Sect. 2 was executed separately for every value of a, also yielding the respective values of D and F.
In order to compare the force values at the same value of draw, a cubic spline function was fitted to the calculated (D, F) values. Then, using the same parameter values and the same draw domain, the force-draw (FD) curve was calculated with the model of [2], also using 2000 evenly distributed knot points for the prime variable. The force differences between this model and the model presented in [2] at the same draw values were less than 3 × 10^−3 N.
Another test was made by measuring the FD curve of the twin-cam compound bow ''Smoke'' by Hoyt Archery. The parameters of the bow are presented in Table 1.
The values e 0 , g and h U were measured as in [2]. The limbs of the bow in question are slightly recurved, hence A and L were estimated together from the measured limb tip data using the Levenberg-Marquardt algorithm [10] for curve fitting, as described in [2].
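As a rough illustration of this curve-fitting step, the sketch below estimates two parameters from synthetic limb-tip data with SciPy's Levenberg-Marquardt routine. The simple hinged-rod tip model and the data are assumptions for illustration only and do not reproduce the exact limb model of [2].

```python
# Minimal sketch: estimating the hinge position A and effective limb
# length L from limb-tip data with Levenberg-Marquardt.
# The tip model below (tip moving on a circular arc of radius L about a
# hinge at height A) is an illustrative assumption, not the exact model
# of [2]; the "measured" data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def tip_y(theta, A, L):
    """Vertical tip coordinate for a rigid limb hinged at height A with
    tip arm L, as a function of the limb deflection angle theta (rad)."""
    return A + L * np.sin(theta)

# Synthetic "measurements": tip height vs. deflection angle, with noise.
rng = np.random.default_rng(1)
theta_obs = np.linspace(0.0, 0.6, 15)
y_obs = tip_y(theta_obs, 0.25, 0.30) + rng.normal(0.0, 1e-3, theta_obs.size)

# method='lm' selects the Levenberg-Marquardt algorithm (unbounded fit).
popt, pcov = curve_fit(tip_y, theta_obs, y_obs, p0=(0.2, 0.25), method='lm')
A_fit, L_fit = popt
print(f"A = {A_fit:.4f} m, L = {L_fit:.4f} m")
```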
The FD curve of the compound bow was measured with the arrangements explained in [2]. The value of the constant k was set only after the measurements of the FD curve and the string and cable cams. The constant k was chosen so that the model fits to the measurements as well as possible in the sense of least squares.
In order to find the cable and the string cam radius functions r c (a) and r s (b), the cams were photographed, and the radii of the string and the cable cam at different cam angles were measured from the enlarged photo. The grooves for the cable and the string were also taken into account as in [2]. The cable and the string cam radius functions were then obtained by cubic spline interpolation of the angle-radius data of the cams. It turned out that the data needed additional processing. We shall discuss this later on. The shapes of the cams, which can also be concluded from the aforementioned cam radius functions using Cartesian conversion, are presented in Fig. 8.
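A minimal sketch of this interpolation step is given below: tabulated angle-radius points (invented here) are turned into a smooth, twice-differentiable cam radius function with a cubic spline, which can then be evaluated, differentiated, and converted to Cartesian coordinates.

```python
# Minimal sketch: building a smooth cam radius function r(a) from measured
# angle-radius pairs by cubic spline interpolation, then converting the
# profile to Cartesian coordinates. The data points are invented.
import numpy as np
from scipy.interpolate import CubicSpline

angle = np.linspace(0.0, 2.0 * np.pi, 19)                            # rad
radius = 0.030 + 0.012 * np.cos(angle) + 0.004 * np.sin(2 * angle)   # metres

# Periodic boundary conditions keep r, r' and r'' continuous at the seam.
r = CubicSpline(angle, radius, bc_type='periodic')

a_fine = np.linspace(0.0, 2.0 * np.pi, 721)
r_fine, dr_fine, d2r_fine = r(a_fine), r(a_fine, 1), r(a_fine, 2)

# Cartesian cam profile for plotting or export.
x, y = r_fine * np.cos(a_fine), r_fine * np.sin(a_fine)
print(f"min radius = {r_fine.min()*1e3:.2f} mm, "
      f"max radius = {r_fine.max()*1e3:.2f} mm")
```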
The cams in Fig. 8 are in the same initial position as the left-hand cams in Figs. 1, 2, 4 and the cam in Fig. 6. Using the cable and the string cam radius functions and the bow parameters of Table 1, the FD curve of the model was calculated as before using 2000 knot points for the prime variable a in the domain a 0 − 235 ≤ a ≤ a 0 , where a 0 was first obtained from Eq. (15). The calculated FD curve of the model and the measured FD data are presented in Fig. 9. We notice from Fig. 9 that the model's match to the FD data is good. We also see that while drawing the bow, the force values are systematically bigger when compared to the values of relaxing the bow. Evidently this is caused by the friction of the wheels and the hysteresis of the limbs, string and cables. In the full draw the calculated curve is a bit aside from the measured FD data. The reason for this is probably the elongation of the string and the cables, and the small measurement errors of the cams.
The area between the FD curve of the compound bow and the draw-axis also defines the energy stored in the bow limbs, E = ∫ F dD. We may also write the potential energy stored in the limbs as in [2], which can be used for checking the computations.
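Given tabulated (D, F) points, this stored energy can be approximated by simple numerical quadrature; the sketch below uses trapezoidal integration over an invented force-draw table, not the measured data of this paper.

```python
# Minimal sketch: stored energy as the area under the force-draw curve,
# E = integral of F dD, computed by trapezoidal integration.
# The (D, F) table below is invented for illustration.
import numpy as np
from scipy.integrate import trapezoid

D = np.linspace(0.0, 0.764, 200)               # draw (m)
# Illustrative compound-bow-like force curve: steep rise, plateau, let-off.
F = 300.0 * np.tanh(8.0 * D) * (1.0 - 0.4 * np.exp(-((D - 0.70) / 0.05) ** 2))

E = trapezoid(F, D)                             # joules
print(f"stored energy ~ {E:.1f} J")
```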
With the parameter values presented in Table 1
Other results
With the model introduced, an individual twin-cam compound bow can now be virtually adjusted by the user. After the parameters of Table 1 and the parameters concerning the cams are measured, it is possible, for example, to virtually rotate the cams or change the initial limb angle and check the effects on the FD curve and on the string or cable forces. In the following we shall consider the interesting possibility of cam design. Let us take the rather ''aggressive'' cam measured before as our starting point and try to modify it in such a way that the FD curve is even more energetic, with the following restrictions: 1. The original full draw may not change significantly, 2. The maximum force of the FD curve may not change significantly.
The shape of the FD curve which satisfies these restrictions and cannot be made more energetic is a perfect rectangle. In this sense our example bow is not quite optimal, for according to Fig. 9 there seems to be some room in both the rising and the descending part of the curve. When manipulating the cam data by trial and error it was found that even one slightly divergent data point in the cam data may cause serious computational problems. This is due to the fact that both the cable and the string cam must be convex everywhere, otherwise the contact point of the cable (or string) and the cam will ''jump'' on the cams in an impractical way. The respective condition equations for the cable and string cam radius functions, obtained after differentiating Eqs. (1) and (2) and simplifying, can be expressed as r_c^2 + 2(r_c')^2 − r_c r_c'' ≥ 0 and r_s^2 + 2(r_s')^2 − r_s r_s'' ≥ 0 (Eqs. (34) and (35)). It was not a simple task to fulfil these condition equations everywhere on the domain of the cam radius functions. This was also the reason why the original cable and string cam angle-radius data measured in Sect. 3 had to be smoothed by measuring and inserting some additional data points and leaving out some troublesome data points.
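Once the cam radius functions are represented by splines, these convexity conditions are easy to check numerically; the sketch below evaluates r^2 + 2(r')^2 − r r'' on a fine grid and reports any violation, again using invented cam data.

```python
# Minimal sketch: checking the convexity condition r^2 + 2 r'^2 - r r'' >= 0
# everywhere on the cam, using the spline's analytic derivatives.
# The cam radius data are invented for illustration.
import numpy as np
from scipy.interpolate import CubicSpline

angle = np.linspace(0.0, 2.0 * np.pi, 19)
radius = 0.030 + 0.012 * np.cos(angle) + 0.004 * np.sin(2 * angle)
r = CubicSpline(angle, radius, bc_type='periodic')

a = np.linspace(0.0, 2.0 * np.pi, 2000)
convexity = r(a) ** 2 + 2.0 * r(a, 1) ** 2 - r(a) * r(a, 2)

if np.all(convexity >= 0.0):
    print("cam profile is convex everywhere")
else:
    bad = a[convexity < 0.0]
    print(f"convexity violated at {bad.size} grid points, "
          f"e.g. near a = {bad[0]:.3f} rad")
```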
Finally, by trial and error the more ''aggressive'' cable and string cam radius functions which satisfy the restrictions 1 and 2 and also Eqs. (34) and (35) were found. The respective modified cable and string cams are presented in Fig. 10, where the cams are again in the initial position as in Fig. 8.
The calculated FD curves with both the original and the modified cams are presented in Fig. 11. The bow parameters of Table 1 were used in the calculations of both curves. From Fig. 11 we notice that the FD curve with the modified cams is more energetic mainly because of the more pronounced front part. Note also that with the modified cams, the initial draw is reduced. The calculated energy from the initial position to the draw D = 0.764 m is 96.5 J for the bow with the original cams and 106.5 J for the bow with the modified cams, so the increment of energy is 10%. Moreover, this was only one example of modifying the cams while seeking the maximum energy of the bow with the restrictions 1 and 2 mentioned before. Yet, with real cams it must always be checked whether the minimum radii of the cable and the string cams are sufficient from the viewpoint of material strength, for the axle has dimensions as well.
Conclusion
A mathematical model of an archery compound bow with non-round cams is introduced. The shapes of the string and the cable cams were modelled with the help of cubic splines. The non-linear equations of the model were simplified and solved numerically. The consistency and the validity of the model were checked in several ways, and the model was found to be accurate.
The path of the limb tip can be approximated with the modified Hickman model also in the case when the limbs are slightly recurved. However, if the profile of the undeflected limb differs from a straight rod, it is reasonable to estimate the parameter L (the value of the ''straight'' limb length) together with the constant A by curve fitting.
It turned out that the model is particularly sensitive to the cam radius functions r c and r s . When modelling the cams, it must be checked that Eqs. (34) and (35) are valid everywhere on the domain of the cam radius functions.
With the help of the model, the cams of an individual bow can be virtually modified, for example to make the FD curve of the bow more effective. Depending on the desired shape of the FD curve this may be a hard job in practice.
The model presented here offers interesting possibilities when adjusting the twin-cam compound bow and when designing the cams. The reader is still reminded that the model is static only.
Preparation and Crystallization Property of Ternary Composites of WBG/AA-RCC/PP
A series of ternary composites of WBG/AA-RCC/PP were prepared and their crystallization behaviors were investigated by XRD and DSC. The results indicate that WBG and AA-RCC affect the crystallization of the PP matrix with a mutual inhibition effect. This effect is enhanced with increasing content of WBG and induces the growth of β2 crystal. AA-RCC promotes the heterogeneous nucleation of α crystal, offers a template for α crystal growth along a specific lattice plane, and promotes the epitaxial crystallization of the PP matrix, while the specifically arranged α crystal could induce the formation of β crystal.
Introduction
Polypropylene (PP) is a kind of commercial polymer with good performance. However, some defects such as low notch impact strength and low-temperature brittleness restrict further application of PP. For this reason, some fillers or reinforcements have been introduced in order to develop PP composites with desired properties [1][2][3][4][5][6][7][8][9][10]. CaCO 3 is among the most frequently studied objects in modification research. As a typical rigid particle, CaCO 3 does indeed play an important role in the performance improvement of PP. In particular, with the development of powder technology, the use of nanoscale CaCO 3 particles has received much attention for polymer-based nanocomposites. Compared to conventional microcomposites, the addition of CaCO 3 nanoparticles can ameliorate the mechanical, thermal, and wear properties of thermoplastics [6][7][8][9][10]. However, there is still a dearth of data about the influence of CaCO 3 nanoparticles with different shapes (rhombohedral or spherical) on the crystallization behavior of the nanocomposites.
At present, PP modification products are emerging. Nevertheless, there are deficiencies in each product, which need to be improved. Studies have shown that β-nucleated PP has excellent impact resistance and creep resistance. However, the development and study of β nucleating agents are far less advanced than those of α nucleating agents. Research on new β nucleating agents and compound modification systems therefore deserves attention.
Material.
The isotactic polypropylene T30S with a melt flow index of 2.6 g/10 min was supplied by Lanzhou Petrochemical Company (China National Petroleum Corporation). The rhombohedral CaCO 3 was provided by Nano Materials Technology Co., Ltd. (Ruicheng, Shanxi, China), and will be denoted as "RCC" throughout the paper. The rare earth nucleating agent (WBG) was provided by Guangdong Winner New Materials Technology Co., Ltd. (WINNER). The adipic acid was provided by Chengdu Kelong Chemical Co., Ltd. All other materials were of analytical grade and used without further purification. Water was distilled and deionized.
Surface Treatment of the CaCO 3 Particles.
To improve the interfacial adhesion between the filler and the polymer matrix, the CaCO 3 nanopowders were coated with 8 wt% adipic acid; the coated filler will be denoted as "AA-RCC" throughout the paper. The coating method was as follows. Firstly, 15 g of RCC was put into a single-neck flask and mixed with a 60 mL
Characterization.
Wide-angle X-ray diffraction (WAXD) analyses were performed on an XRD-6000 diffractometer (Shimadzu, Japan) with a 3 kW X-ray generator, a graphite monochromator, and Cu Kα radiation (wavelength = 1.5406 Å), operated at 40 kV and 20 mA. The samples were scanned at room temperature from 10° to 50° at a scanning rate of 2°/min. The relative content of the β-crystal in the crystalline part, K_β, was calculated according to the standard procedure as follows [1]:

K_β = H_β(300) / [H_β(300) + H_α(110) + H_α(040) + H_α(130)],  (1)

where H(hkl) denotes the intensity of the respective (hkl) peak belonging to phase α or β.
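As a worked illustration of this formula, the short sketch below computes K_β from a set of hypothetical peak intensities; the numbers are not data from this study.

```python
# Minimal sketch: Turner-Jones style beta-content calculation from XRD peak
# intensities. The intensity values are hypothetical, not data from this work.
H_beta_300 = 1250.0   # beta(300) peak intensity, arbitrary units
H_alpha_110 = 90.0
H_alpha_040 = 60.0
H_alpha_130 = 40.0

K_beta = H_beta_300 / (H_beta_300 + H_alpha_110 + H_alpha_040 + H_alpha_130)
print(f"K_beta = {K_beta:.2f}")   # ~0.87 for these illustrative intensities
```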
The crystallization behavior of the PP composites was studied with a DSC Q2000 (TA Instruments, USA). For nonisothermal crystallization, the samples were first treated isothermally at 200 °C for 3 min to erase the previous thermal history and quenched to 20 °C, then heated up to 200 °C at 10 °C/min, held at 200 °C for 1 min, and cooled to 20 °C at a constant cooling rate of 10 °C/min. The endothermic and exothermal traces were recorded for the later data analysis. The crystallinities of the two phases were estimated as X_α = ΔH_α/ΔH°_1 and X_β = ΔH_β/ΔH°_2. Here ΔH_α and ΔH_β are the fusion heats of the α and β crystal from the DSC thermograms, respectively, ΔH°_1 is the fusion heat of a hypothetically 100% crystalline α-PP (177 J/g), and ΔH°_2 is the fusion heat of a hypothetically 100% crystalline β-PP (168.5 J/g) [11].
Polarized optical microscopy (POM) studies were carried out with an ECLIPSE LV100POL microscope (Nikon, Japan) in conjunction with a HSC621 V hot stage (Instec, USA).The specimens were heated to 230 ∘ C on a hot stage and held at this temperature for 3 min to eliminate the thermal history and then quenched to the fixed crystallization temperature (130 ∘ C).The photographs were provided by a digital camera.
Results and Discussion
3.1. Wide-Angle X-Ray Diffraction Analysis. Figures 1 and 2 show the XRD patterns of the composites, and the values calculated according to (1) are listed in Table 2. It is clear that the main diffraction peaks are at 2θ = 14.1°, 16.9°, and 18.6°, which are attributed to the reflections of the (110), (040), and (130) lattice planes of the α crystal, respectively. The peaks at 2θ = 16.1° and 21.3° are assigned to the (300) lattice plane of the β1 crystal and the (301) lattice plane of the β2 crystal, respectively. According to the XRD curve 1-1, the strongest diffraction peak is at 2θ = 16.1° and is assigned to the (300) lattice plane of the β1 crystal. The XRD results show that the content of β-crystal is 0.87 when the WBG addition content is 0.2 phr, which is in accord with a previous report [12]. As seen in the 2-1 line for 4 phr AA-RCC/PP, the strongest diffraction peak is at 2θ = 16.9° and is assigned to the (040) lattice plane of the α crystal. It can be concluded that, with the addition of WBG or AA-RCC, the composites crystallize along a specific lattice plane and orientation. Therefore, WBG acts as a kind of β-crystal nucleating agent, while AA-RCC acts as a kind of α-crystal nucleating agent. It can be seen from Figure 1 that when the WBG content is 0.2 phr, the peak intensity at 2θ = 16.1° decreases sharply with increasing AA-RCC, while the peak intensity at 2θ = 16.1° increases. As reported in Table 2, quantitative analysis shows that the β1 crystal content falls to the lowest value of 0.55, and the β2 crystal content increases from 0.26 to 0.63. In Figure 2, when the AA-RCC content remains constant (4 phr), the peak intensity at 2θ = 16.1° firstly increases and afterwards decreases with increasing WBG, and the β1 crystal content decreases from 0.70 to 0.49. However, the peak intensity at 2θ = 16.9° decreases sharply but the peak at 2θ = 21.3° increases, indicating the emergence and increase of the β2 crystal content. There is no doubt that WBG and AA-RCC act with a mutual inhibition effect in the crystallization of the PP matrix, and the effect increases with increasing content of WBG. Moreover, the mutual inhibition effect induces the growth of the β2 crystal.
Nonisothermal Crystallization and Melting Behavior Characterization
Figure 3 presents the crystallization ((a) and (c)) and melting ((b) and (d)) DSC curves of WBG/AA-RCC/PP. The corresponding data are listed in Table 3. It is known that the crystallization peak temperature (T cp ) of neat PP is 118∼119 °C [9,13]. When 0.2 phr WBG or 4 phr AA-RCC is added to PP, either DSC curve has a single peak, and T cp is 126.81 °C and 123.89 °C, respectively. When the content of WBG remains at the 0.2 phr level, with increasing AA-RCC, two peaks emerge in the curves of the composites; one peak appears at lower temperature and the other at higher temperature, corresponding to the two crystal forms. When the content of AA-RCC remains at the 4 phr level, with increasing WBG, one of the two peaks gradually disappears, resulting in a single peak at 118.54 °C. This indicates that WBG and AA-RCC affect the nucleation of PP and promote its crystallization. With increasing content of WBG, a highly effective nucleating agent of the β crystal, the number of β crystal nuclei and the crystallization rate increase, resulting in an increase of the peak area of the β crystal [12].
Figures 3(b) and 3(d) show the melting curves of the composites, and the relevant data are listed in Tables 2 and 3. According to DSC curve 2-1, when AA-RCC was added to PP, the melting curve has a single peak at 161.78 °C. However, with only WBG added to PP, the DSC curve 1-1 has two peaks, with peak temperatures of 152.38 °C and 167.23 °C, respectively. When both WBG and AA-RCC were added to PP, the curves of the ternary composites have three peaks, with peak temperatures of about 130 °C, 149 °C, and 162 °C, respectively. It is also noted that the three peaks move to lower temperature and their peak areas increase with increasing content of WBG and AA-RCC. It is known that the melting peak of the α crystal appears in the temperature range from 160 °C to 170 °C, while it appears at a lower temperature for the β crystal [13]. There are two kinds of β crystal melting peaks for the WBG/AA-RCC/PP composites according to Figures 3(b) and 3(d). The peak at lower temperature was attributed to the β1 crystal and the other peak at higher temperature to the recrystallization melting peak of the β1 and β2 crystal [14]. From Figures 3(b) and 3(d), it is clear that the melting peak temperatures of the β1 and β2 crystal vary remarkably. The peak area of the β1 crystal decreases gradually with increasing content of WBG and AA-RCC. It could be concluded that the two peaks suggest two kinds of crystals which might be assigned to the (301) lattice plane of the β1 crystal and the (300) lattice plane of the β2 crystal. These results are in accord with the XRD analysis.
Crystal Morphology of WBG/AA-RCC/PP by POM.
Figure 4 shows the POM micrographs of 0.2 phr WBG/4 phr AA-RCC/PP at 130 °C. Just as shown in Figure 4(b), a line-shaped crystal nucleus emerges at the beginning of crystallization. The line-shaped crystal nucleus becomes thicker gradually by growing outwards symmetrically over time, resulting in a tightly arranged network structure with uniform density. Because the α crystal occurs before the β crystal, AA-RCC promotes the heterogeneous nucleation of the α crystal and RCC offers a template for α crystal growth along a specific lattice plane and promotes the epitaxial crystallization of the PP matrix, as shown in the XRD analysis, while the specifically arranged α crystal could induce the formation of the β column crystal. When the β spherulite grows faster than the α column crystal, the α crystal gradually transforms into the β crystal [15]. Therefore, the final morphology is a β crystal with a tightly arranged network structure.
Conclusions
In this paper, ternary composite materials of WBG/AA-RCC/PP were prepared and their crystallization behaviors were investigated by XRD and DSC. WBG and AA-RCC promote the formation of the β crystal; the β2 crystal content increases while the β1 crystal content decreases, suggesting that there is a competition between α and β crystal growth which is in favor of the formation of the β crystal.
Figure 1: WAXD spectra of PP with different AA-RCC content.
Figure 2: WAXD spectra of PP with different WBG content.
Table 1: The compositions of the composites (weight ratio).
Table 2: The content of the β-crystal with different measurement methods.
DiskFit: a code to fit simple non-axisymmetric galaxy models either to photometric images or to kinematic maps
This posting announces public availability of version 1.2 of the DiskFit software package developed by the authors, which may be used to fit simple non-axisymmetric models either to images or to velocity fields of disk galaxies. Here we give an outline of the capability of the code and provide the link to downloading executables, the source code, and a comprehensive on-line manual. We argue that in important respects the code is superior to rotcur for fitting kinematic maps and to galfit for fitting multi-component models to photometric images.
INTRODUCTION
The code DiskFit may be used to fit simple non-axisymmetric models either to images or to velocity fields of disk galaxies. If the fit is successful, DiskFit provides quantitative estimates of the non-circular flow speeds and an estimate of the mean circular speed when run on velocity fields (Spekkens & Sellwood 2007; Sellwood & Zánmar Sánchez 2010), and fractions of the galaxy light in a bar, disk, and bulge when run on images (Reese et al. 2007; Kuzio de Naray et al. 2012).
The kinematic branch of the code differs fundamentally from the frequently-used rotcur algorithm (Begeman 1987) since the minimization fits a global model to the complete map, rather than separate tilted rings, and it is superior to reswri (Schoenmakers et al. 1997) because it does not employ the epicyclic approximation to fit departures from circular motion. The photometric branch of the code also differs fundamentally from popular algorithms such as galfit (Peng et al. 2010) in that it fits non-parametric disk and bar light profiles rather than specified functional forms. Furthermore, it is superior to all three algorithms because it is capable of providing statistically valid, but realistic, estimates of the uncertainties in the fit. Kuzio de Naray et al. (2012) illustrate the functionality of DiskFit on high-quality kinematic and photometric data for the nearby galaxy NGC 6503.
A single code is provided to fit both photometric images and kinematic maps because, for both applications, DiskFit employs the same basic minimization algorithm originally described in the Appendix of Barnes & Sellwood (2003). The first applications (Barnes & Sellwood 2003) were to fit axisymmetric models; the extension to non-circular flows was described by Spekkens & Sellwood (2007) and by Sellwood & Zánmar Sánchez (2010), while Reese et al. (2007) extended the code to include barred models when fitting photometric images. DiskFit is also an extension of the publicly available velfit 2.0.
Note that DiskFit does not fit photometry and kinematics simultaneously; the same code simply fits either type of data depending on the user's choice of inputs. Of course, if the user makes separate fits to both types of data from the same galaxy, the fitted values will likely differ.
CAPABILITIES OF THE CODE
DiskFit minimizes a χ 2 estimate of the differences between a projected model and the data. The data can be either a 2D velocity map derived from Doppler shifts of spectral lines obtained using an IFU in the optical or aperture synthesis in the radio, or a photometric image. The user can supply a map of uncertainties in the data and a mask image to indicate only good pixels to be fitted.
Fitting an axisymmetric model
Aside from an optional simple warp in the outer parts, the model presented to the data is a flat disk with inclination, i, and position angle, φ d , that are assumed to be the same at all radii. Furthermore, the position of the center, (x c , y c ) and, for kinematic fits only, the systemic velocity, v sys , are parameters fitted to the entire 2D data set. A simple axisymmetric model will therefore fit any or all these parameters to determine global estimates that best fit the data.
In addition, DiskFit estimates either the circular speed, for kinematic data, or the mean intensity for a photometric image, at a set of radii specified by the user. Model values at data points that lie between the specified radii are computed by linear interpolation. It is important to note that this implies the model is simply a tabulated set of values over a range of radii and has no pre-specified functional profile, such as an exponential disk, etc.
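To make the idea of a non-parametric, tabulated profile concrete, the sketch below evaluates a rotation curve specified only at a handful of radii by linear interpolation at arbitrary data radii. The parameter names and values are hypothetical and are not taken from DiskFit's source.

```python
# Minimal sketch: a non-parametric rotation curve tabulated at a few radii
# and evaluated at arbitrary data radii by linear interpolation, in the
# spirit of DiskFit's tabulated models. All values are hypothetical.
import numpy as np

r_knots = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 30.0])        # arcsec
v_knots = np.array([40.0, 90.0, 140.0, 160.0, 165.0, 168.0])   # km/s

r_data = np.linspace(0.5, 30.0, 12)
v_model = np.interp(r_data, r_knots, v_knots)   # linear interpolation

for r, v in zip(r_data, v_model):
    print(f"R = {r:5.1f} arcsec  ->  Vc = {v:6.1f} km/s")
```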
Uncertainties
Uncertainties in the parameters, and in the intensity or circular speed at each radius, are estimated by a bootstrap method. The residuals from a simple model are generally correlated at neighboring pixels, because the model ignores spirals and other sources of correlated turbulence.
The bootstrap algorithms employed attempt to preserve these correlated residuals (see Spekkens & Sellwood 2007;Sellwood & Zánmar Sánchez 2010, for a fuller discussion), which lead to larger and more realistic estimates of the uncertainties in the model.
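The sketch below is a schematic residual-resampling bootstrap for a single fitted parameter. It is emphatically not DiskFit's algorithm, which resamples residuals so as to preserve their spatial correlations on the map; the toy version here only shuffles contiguous residual blocks to convey the general idea.

```python
# Schematic sketch of a residual-resampling bootstrap for one fitted
# parameter. This is NOT DiskFit's algorithm: DiskFit preserves correlated
# residuals on the 2D map, whereas this toy version shuffles whole residual
# blocks of a 1D data set as a crude stand-in.
import numpy as np

rng = np.random.default_rng(0)

def fit_slope(x, y):
    """Stand-in 'model fit': least-squares slope through the origin."""
    return np.sum(x * y) / np.sum(x * x)

x = np.linspace(1.0, 10.0, 200)
y = 2.5 * x + rng.normal(0.0, 1.0, x.size)       # synthetic data

best = fit_slope(x, y)
residuals = y - best * x

# Resample residuals in contiguous blocks to retain short-range correlation.
block = 20
n_blocks = x.size // block
slopes = []
for _ in range(500):
    order = rng.permutation(n_blocks)
    resampled = np.concatenate([residuals[i*block:(i+1)*block] for i in order])
    slopes.append(fit_slope(x, best * x + resampled))

print(f"slope = {best:.3f} +/- {np.std(slopes):.3f} (bootstrap)")
```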
Non-axisymmetric models
The most powerful aspect of DiskFit is that it can include simple non-axisymmetric features into the model and fit for their parameters. The most useful capability is to fit for a bar, which is a bi-symmetric distortion having a fixed position angle that is, in general, not aligned with, or perpendicular to, the major axis of projection. DiskFit allows for an underlying axisymmetric model on which a non-axisymmetric feature having a fixed position angle in the disk plane is superposed, and returns an estimate of the angle of its principal axis to the disk major axis. A bar that is almost aligned with the major or minor axis of projection may require that the fit is smoothed (see §2.7), but the bar cannot be separated from the disk by this algorithm when the alignment is exact; note that in such a case, an axisymmetric fit will be no worse than that obtained by other algorithms.
For kinematic fits, the non-circular flows have two m-fold symmetric components (m = 2 for a bar): a radial part that is the mean flow away from and towards the model center, and an azimuthal part that is the departure above and below the mean streaming speed. Each component varies in azimuth in the disk plane as a cos(mθ) or a sin(mθ) function, respectively, with zero phase on the bar major axis. These additional velocities are fitted at the same radii as those used to tabulate the circular speed, although the user can specify that the distortion has a smaller radial extent than the entire disk. DiskFit does not impose any relation between the radial and azimuthal velocity distortions, which can be arbitrarily large compared with the mean circular speed, i.e. it is not restricted to a small amplitude distortion. If the distortions turn out to be small, Sellwood & Zánmar Sánchez (2010) give formulae that can relate the fitted velocity distortions to the ellipticity of the potential.
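For orientation, the line-of-sight velocity of such a model can be written as in the sketch below, which follows the bisymmetric-flow form of Spekkens & Sellwood (2007); the exact sign and phase conventions should be taken from that paper, so treat this as indicative only, with the variable names chosen here for clarity.

import numpy as np

def noncircular_velocity(r, theta, vsys, inc_deg, m, phi_b_deg,
                         r_tab, vt_tab, vmt_tab, vmr_tab):
    """Line-of-sight velocity including an m-fold non-circular flow.

    theta     : disk-plane azimuth measured from the major axis (radians)
    phi_b_deg : angle of the distortion's principal axis to the disk major axis
    vt_tab    : mean tangential (circular) speed at the tabulated radii
    vmt_tab   : m-fold tangential distortion amplitudes at the same radii
    vmr_tab   : m-fold radial distortion amplitudes at the same radii
    """
    inc = np.radians(inc_deg)
    theta_b = theta - np.radians(phi_b_deg)        # azimuth relative to the bar major axis
    vt = np.interp(r, r_tab, vt_tab)
    vmt = np.interp(r, r_tab, vmt_tab)
    vmr = np.interp(r, r_tab, vmr_tab)
    return vsys + np.sin(inc) * (vt * np.cos(theta)
                                 - vmt * np.cos(m * theta_b) * np.cos(theta)
                                 - vmr * np.sin(m * theta_b) * np.sin(theta))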
For photometric fits, the bar represents a light component that increases the fitted intensity above the axisymmetric mean along the bar major-axis, with a corresponding reduction along the bar minor axis. The bar light profile is again tabulated at the same radii as the mean axisymmetric light profile.
In principle, DiskFit can fit for distortions having other rotational symmetries, such as m = 1 (lopsided) or m = 3 (trefoil) distortions, although they could not be spiral in form as the algorithm restricts the non-axisymmetric component to having a fixed position angle in the disk plane at all radii.
Less usefully, DiskFit can also fit for axisymmetric radial flows. However, radial flow velocities would need to be unrealistically large (at least a few percent of the circular speed) to be detectable. Axisymmetric flow speeds of this magnitude would indicate the galaxy is in a transitional state and that extensive rearrangement of the mass distribution is taking place on a dynamical time-scale.
Spiral distortions
The largest residuals in fitted models generally arise from spiral arms, which are non-circular flows in kinematic maps and coherent features in photometric images. DiskFit does not attempt to fit these distortions, and merely treats them as sources of error that are allowed for in the bootstraps.
The reason is that these features are hard to model. Unlike bars, which are strong, clearly bisymmetric, and long-lived, mild spiral distortions are transient and probably result from multiple, superposed modes having different pattern speeds, and rotational symmetries.
Warp fitting
DiskFit allows the model to be warped in a simple, parametric manner. The code assumes that the line of nodes of the warp is at a fixed position angle, the warp begins at a certain radius, and increases in amplitude as a quadratic function of radius to some maximum amplitude at the last measured point. Since the kinematic signature of a warp closely resembles that of an in-plane bar, DiskFit will not allow the user to select both options in the same fit.
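A warp amplitude of this parametric form could be tabulated as in the short sketch below; the function name and the choice of returning zero interior to the warp radius are ours.

import numpy as np

def warp_amplitude(r, r_warp, r_max, a_max):
    """Warp amplitude rising quadratically from r_warp to a_max at the last measured point r_max."""
    r = np.asarray(r, dtype=float)
    amp = a_max * ((r - r_warp) / (r_max - r_warp)) ** 2
    return np.where(r <= r_warp, 0.0, amp)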
2.6. Bulge fitting
Photometric images can be fitted with a disk, bar and bulge model if desired. DiskFit makes the (highly questionable) assumptions that the bulge is both axisymmetric and symmetric about the disk mid-plane, and has a flattening that is constant with radius. It also assumes the parametric form of a Sérsic profile for the bulge, and will fit, if desired, for the Sérsic index, n, effective radius, R e , central intensity, I 0 , and flattening ε b . A very high spatial resolution image is generally required to fit for all these parameters, and it is usually safer to hold at least n fixed at some reasonable value.
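As a reference for the parameters listed above, a Sérsic profile written with a central intensity, together with a generalised radius for a constant flattening, might look like the following; the b_n approximation is the common one, and the exact parameterisation used internally by DiskFit may differ, so this is a sketch rather than the code's definition.

import numpy as np

def sersic_profile(r, n, r_e, i0):
    """Sersic surface-brightness profile written with a central intensity I0.

    I(r) = I0 * exp(-b_n * (r / r_e)**(1/n)), using the common approximation
    b_n ~ 1.9992 n - 0.3271 so that r_e encloses half the light.
    """
    b_n = 1.9992 * n - 0.3271
    return i0 * np.exp(-b_n * (r / r_e) ** (1.0 / n))

def flattened_radius(x, y, xc, yc, eps_b):
    """Generalised (elliptical) radius for a bulge with constant flattening eps_b."""
    q = 1.0 - eps_b
    return np.hypot(x - xc, (y - yc) / q)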
The user of this capability should bear in mind that the fitted values provided by the code are meaningful only if the above listed assumptions about the bulge light profile are valid for his/her data.
Seeing corrections
If the user requests, DiskFit will blur the model, by convolving it with a point spread function, before comparing it with the data, which can be done for either photometric or kinematic fits. The blurring function is a Gaussian of specified width; note that the FWHM cannot be greater than 3 pixels. The code to compute these seeing corrections had a bug in version 1.1, which has been fixed in the present release.
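A simple, unweighted version of such a seeing correction is sketched below: the model is convolved with a circular Gaussian of the requested FWHM before being compared with the data. This only illustrates the operation and is not DiskFit's implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_model(model, fwhm_pix):
    """Convolve a model image or map with a circular Gaussian PSF.

    fwhm_pix : FWHM of the PSF in pixels; sigma = FWHM / (2 * sqrt(2 * ln 2)).
    """
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(model, sigma)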
Smoothing penalties
In general, DiskFit places no restrictions on the tabulated values of radial variation of the light profile, rotation curve, bar distortion amplitudes, etc. We note that DiskFit has an option to apply a smoothing penalty to the radial variation of these tabulated functions, if desired. Since the smoothing penalty will affect the fitted values, it should never be large, and no smoothing is recommended in most cases. However, Sellwood & Zánmar Sánchez (2010) found that when fitting for the flow velocities of a bar that was inclined by just a small angle to the projected major axis, the velocity distortions became absurdly large and variable, and some smoothing was necessary to obtain meaningful fits.
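One common way to impose such a penalty, shown here purely as an illustration rather than as DiskFit's exact scheme, is to add the squared second differences of the tabulated values to the χ², scaled by a user-chosen weight; setting the weight to zero recovers the unpenalized fit.

import numpy as np

def penalized_chi2(chi2, tabulated_values, lam):
    """Add a smoothness penalty on a tabulated radial function.

    Penalises squared second differences of the tabulated values
    (e.g. the bar distortion amplitudes at the fitted radii),
    scaled by a weight `lam`; lam = 0 recovers the plain chi-squared.
    """
    v = np.asarray(tabulated_values, dtype=float)
    second_diff = v[2:] - 2.0 * v[1:-1] + v[:-2]
    return chi2 + lam * np.sum(second_diff ** 2)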
OBTAINING DiskFit
The code is available from http://www.physics.rutgers.edu/~spekkens/diskfit. This website includes links to a comprehensive manual, giving full details of the procedure to use the code, data requirements, and illustrative examples, as well as the software update history, executables, and the source code. Versions 1.0 and 1.1 were released previously, in September 2012 and May 2013 respectively, and this posting is to announce version 1.2. The improvements at this version are that arrays are dimensioned dynamically, so that there are no software limits to the size of the dataset that can be fitted, and several bugs have been fixed.
The authors encourage feedback from users, and will make every effort to correct bugs and inconsistencies. Requests for additional capabilities will be considered and may be provided in future releases, but the authors cannot undertake to meet every possible request.
COPYRIGHT AND LICENSE ISSUES
DiskFit is free software and comes with ABSOLUTELY NO WARRANTY. It is distributed under the GNU General Public License. This implies that the software may be freely copied and distributed. It may also be modified as desired, and the modified versions distributed as long as any changes made to the original code are indicated prominently and the original copyright and no-warranty notices are left intact. Please read the General Public License for more details.
Note that the authors retain the copyright to the code and documentation. Those publishing papers that use the code are requested to acknowledge this arXiv posting as the source.
Novel Pathways of Ponatinib Disposition Catalyzed By CYP1A1 Involving Generation of Potentially Toxic Metabolites
Ponatinib, a pan-BCR-ABL tyrosine kinase inhibitor for the treatment of chronic myeloid leukemia (CML), causes severe side effects including vascular occlusions, pancreatitis, and liver toxicity, although the underlying mechanisms remain unclear. Modifications of critical proteins through reactive metabolites are thought to be responsible for a number of adverse drug reactions. In vitro metabolite screening of ponatinib with human liver microsomes and glutathione revealed unambiguous signals of ponatinib-glutathione (P-GSH) adducts. Further profiling of human cytochrome P450 (P450) indicated that CYP1A1 was the predominant P450 enzyme driving this reaction. P-GSH conjugate formation paralleled the disappearance of hydroxylated ponatinib metabolites, suggesting the initial reaction was epoxide generation. Mouse glutathione S-transferase p1 (mGstp1) further enhanced P-GSH adduct formation in vitro. Ponatinib pharmacokinetics were determined in vivo in wild-type (WT) mice and mice humanized for CYP1A1/2 and treated with the CYP1A1 inducers 2,3,7,8-tetrachlorodibenzodioxin or 3-methylcholanthrene. Ponatinib exposure was significantly decreased in treated mice compared with controls (7.7- and 2.2-fold for WT and humanized CYP1A1/2, respectively). Interestingly, the P-GSH conjugate was only found in the feces of CYP1A1-induced mice, but not in control animals. Protein adducts were also identified by liquid chromatography–tandem mass spectrometry analysis of mGstp1 tryptic digests. These results indicate that not only could CYP1A1 be involved in ponatinib disposition, which has not been previously reported, but also that electrophilic intermediates resulting from CYP1A1 metabolism in normal tissues may contribute to ponatinib toxicity. These data are consistent with a recent report that CML patients who smoke are at greater risk of disease progression and premature death.
Introduction
Ponatinib (Iclusig) is an orally available pan-BCR-ABL tyrosine kinase inhibitor that has been approved for treatment of resistant chronic myeloid leukemia (CML) and Philadelphia chromosome-positive acute lymphoblastic leukemia (Zhou et al., 2011; Cortes et al., 2012; Razzak, 2013; Miller et al., 2014). Ponatinib was structurally designed with a carbon-carbon triple-bond linkage to target the T315I mutation (BCR-ABL T315I ), which confers resistance to other approved tyrosine kinase inhibitors (O'Hare et al., 2009; Huang et al., 2010). Ponatinib has shown high efficacy against multiple resistant forms, including the T315I mutation, in a phase I clinical trial in refractory Philadelphia chromosome-positive acute lymphoblastic leukemia (Cortes et al., 2012) and in T315I tyrosine kinase inhibitor-resistant CML cells (O'Hare et al., 2009). However, serious side effects have been reported; thrombocytopenia (in 37% of patients) and neutropenia (19%) are the most common hematologic events of ponatinib treatment (Cortes et al., 2012). Hepatotoxicity has also been reported, including alanine aminotransferase and aspartate aminotransferase elevation in patients in clinical trials (10.5% and 8.2%, respectively). Cardiovascular toxicity, including both arterial and venous thromboembolism and severe systemic hypertension, together with dose-limiting toxic effects, limits its broad application in the clinic (Frankfurt and Licht, 2013; ARIAD Pharmaceuticals, Cambridge, MA, 2014 Iclusig package insert). However, the mechanism of ponatinib toxicity remains unclear.
The metabolic profile of ponatinib has been evaluated in preclinical studies. The disposition of ponatinib is a result of both esterase/amidase activity to an inactive carboxylic acid (AP24600) and metabolism by the cytochrome P450 (P450)-dependent monooxygenase system [ARIAD Pharmaceuticals, 2013 Iclusig (ponatinib) tablets for oral use: prescribing information (http://www.accessdata.fda.gov/drugsatfda_docs/label/2013/203469s007s008lbl.pdf); Narasimhan et al., 2013]. Ponatinib is metabolized by P450 CYP3A4 (and to a lesser extent by CYP3A5, CYP2C8, and CYP2D6) to N-oxide and N-desmethyl metabolites, the latter being 4-fold less potent than ponatinib. Although the hydrolysis of ponatinib has been reported to be the major pathway of disposition, a recent study showed that concurrent therapy with ponatinib and ketoconazole (a CYP3A4 inhibitor) significantly increased the ponatinib area under the curve (AUC 0-∞ and AUC 0-t ) values and the C max values (Narasimhan et al., 2013).
Levels of drug exposure, as well as the generation of chemically reactive metabolites, can play a major role in drug side effects (Baillie, 2006;Kalgutkar and Dalvie, 2015). To investigate this possibility, we carried out a detailed analysis of ponatinib metabolism in a panel of purified recombinant human P450 enzymes. In addition to identifying CYP3A4 as a major pathway of ponatinib disposition (Narasimhan et al., 2013), we have also found that CYP1A1, an inducible enzyme in tissues such as lung and lung tumors, is highly active toward this compound. Intriguingly, metabolism by CYP1A1 resulted in the formation of an electrophilic metabolite, which formed a glutathione (GSH) conjugate in a reaction catalyzed by the glutathione S-transferases (GSTs). The significance of these data in relation to the efficacy and side effects of ponatinib is discussed.
Materials and Methods
Chemicals and Reagents. All reagents, unless otherwise stated, were purchased from Sigma-Aldrich (Poole, United Kingdom). NADPH was obtained from Melford Laboratories (Ipswich, United Kingdom), ponatinib was purchased from LC Laboratories (Woburn, MA), and 2,3,7,8-tetrachlorodibenzodioxin (TCDD) was purchased from Toronto Research Chemicals (Toronto, Canada). The Nanosep centrifugal device with Omega ultrafiltration membrane molecular weight cutoff of 10 kDa was obtained from Pall Corporation (East Hills, NY).
Animal Lines and Husbandry. The generation and characterization of hCYP1A1/1A2 mice will be described elsewhere (manuscript in preparation). All animals were maintained under standard animal house conditions, with free access to food (RM1 diet; Special Diet Services, Essex, United Kingdom) and water, and a 12-hour light/12-hour dark cycle. All animal work was carried out in accordance with the Animal Scientific Procedures Act (1986), as amended by European Union Directive 2010/63/EU, and after local ethical review. Subcellular Fractionation. Livers were harvested, excised, and snap frozen in liquid nitrogen for storage at -80°C until processing. Briefly, a small piece (∼200 mg) in three volumes of KCl-phosphate buffer (10 mM KHPO 4 , 20 mM EDTA, and 150 mM KCl, pH 7.4) was homogenized (POLYTRON PT 2100; Kinematica, Lucerne, Switzerland). The homogenate was centrifuged for 80 minutes at 100,000g (30,000 rpm in an F50L rotor) at 4°C. After removal of the fatty layer, the resultant supernatant was centrifuged for 1 hour at 100,000g (30,000 rpm in an F50L rotor; Thermo Scientific, Perth, UK). After ultracentrifugation, the supernatant (cytosolic fraction) was retained and the pellet (microsomal fraction) was resuspended in KCl buffer containing 0.25 M sucrose. The protein content of the microsomal and cytosolic fractions was quantified by bicinchoninic acid assay (Thermo Scientific).
Glutathione-Affinity Chromatography. The recombinant mouse glutathione S-transferase p1 (mGstp1) and GST mixtures from liver cytosol (mouse and human) were purified using a GSTrap FF affinity column (1 ml; Amersham Biosciences AB, Uppsala, Sweden) according to the manufacturer's protocol. Briefly, the column was equilibrated with five column volumes of binding buffer (phosphate-buffered saline, pH 7.3). The sample was filtered through a 0.45 μm filter and loaded onto the column at a flow rate of 1 ml/min, followed by washing with five column volumes of binding buffer, and then the bound protein was eluted by adding five column volumes of elution buffer (50 mM Tris-HCl, 10 mM reduced GSH, pH 8.0, at a flow rate of 1 ml/min). The eluted protein was collected and analyzed by SDS-PAGE. The proteins were concentrated and buffer exchanges were performed using a 10 kDa Nanosep centrifugal device (previously conditioned by wetting with 0.5 ml of distilled water followed by drying by centrifugation at 14,000g for 10 minutes at 4°C) to 50 mM Tris-HCl, pH 7.4. The retained protein residues were quantitatively dislodged from the membranes by using 100 μl aliquots of 50 mM Tris-HCl, pH 7.4. Protein concentration was then quantified by a bicinchoninic acid assay using the Pierce BCA Protein Assay Kit (Thermo Fisher Scientific, Somerset, NJ) according to the manufacturer's instructions.
In Vitro Metabolism. All incubations were performed at 37°C and 300 rpm in a thermomixer. Human cDNA-expressed P450 isoenzymes coexpressed with human NADPH P450 oxidoreductase (see Supplemental Fig. 1 for CYP:oxidoreductase ratio data) were carefully thawed on ice prior to the experiment. Ponatinib (50 μM, with 0.25% dimethylsulfoxide) was mixed with human P450 isoenzymes (100 nM) in 100 mM potassium phosphate buffer (pH 7.4) containing 3.3 mM MgCl 2 , supplemented with or without GSH and a GST mixture from mouse or human liver cytosol. After 5-minute preincubation at 37°C, the incubation reactions were initiated by addition of a NADPH regenerating system (final concentration: 1.3 mM NADPH, 4 mM glucose-6-phosphate, and 2 unit/ml glucose-6-phosphate dehydrogenase). After 5-120 minutes, an aliquot of 100 μl of reaction mixture was quenched with the same volume of cold acetonitrile containing 400 ng/ml of triazolam as the internal standard. The samples were then centrifuged at 3000g for 55 minutes at 4°C. An aliquot of the supernatant fraction was analyzed by liquid chromatography (LC)-tandem mass spectrometry (MS), i.e., LC-MS/MS.
In Vivo Studies. All animal work was carried out on 8- to 12-week-old male mice. 3-Methylcholanthrene (3-MC) was suspended in corn oil at 4 mg/ml for administration by intraperitoneal injection at 40 mg/kg (10 μl/g body weight) daily for 3 days for hCYP1A1/2 mice. TCDD was suspended in corn oil at 1 μg/ml for administration by intraperitoneal injection at 10 μg/kg (10 μl/g body weight) daily for 3 days for wild-type (WT) mice. For pharmacokinetic (PK) analyses, ponatinib was first dissolved in 25 mM citrate buffer (pH 2.5) at a concentration of 4 mg/ml for immediate administration by oral gavage at a final dose of 40 mg/kg. For blood sample collection, 10 μl of whole blood was withdrawn from the tail vein at the indicated time points. Samples were immediately added to a tube containing heparin solution (10 μl, 15 IU/ml), put into liquid nitrogen, and then stored at -80°C until processing. On the day of analysis, acetonitrile (80 μl containing 200 ng/ml triazolam) was added to thawed samples, which were shaken for 15 minutes, centrifuged for 10 minutes at 16,000g, and analyzed. Plasma ponatinib concentrations were determined by an internal standard LC-multiple reaction monitoring (MRM) method using calibration standards prepared in blank mouse plasma. Feces were collected in Tecniplast mouse metabolic cages (Tecniplast, Leicester, UK); mice were housed singly with free access to food and water for up to 24 hours after administration of the ponatinib dose. To measure fecal GSH ponatinib metabolites, each sample was diluted with water (1:3 v/v) and homogenized with a mortar and pestle. An aliquot of 0.25 g was extracted with 1 ml of 80% methanol for 30 minutes on a vortex evaporator (Labconco, Kansas City, MO). After centrifugation (30 minutes at 4000g), the solvent of an 800 μl aliquot of the supernatant was removed by speed vacuum, followed by reconstitution in 500 μl of methanol and water (50/50, v/v), and the sample was analyzed by LC-MRM.
LC-MS-MRM Analysis. Analysis of in vitro incubation and in vivo blood PK samples was carried out by ultra-performance LC-MS/MS using the Waters Acquity UPLC (Micromass, Manchester, United Kingdom) and Micromass Quattro Premier Mass Spectrometer with electrospray ionization. The chromatography was performed using a C18 column (Kinetex, 1.7 μm, 100 Å, 50 × 2.1 mm; Phenomenex, Macclesfield, United Kingdom) at a temperature of 45°C with mobile phases of 0.1% formic acid (A) and acetonitrile with 0.1% formic acid (B). A gradient at a flow rate of 0.5 ml/min was run over 3 minutes as follows: 0-0.5 minutes, 95% A; 0.5-0.75 minutes, 95%-67.5% A; 0.75-1.50 minutes, 67.5% A; 1.50-2.00 minutes, 67.5%-50% A; 2.00-2.50 minutes, 50%-5% A; and then returning to the initial conditions for a final 0.5 minutes. The cone voltage and collision energy were optimized for each substrate and MRM data were acquired (Supplemental Table 1).
Liquid Chromatography Electrospray Ionization/Multistage Mass Spectrometry (LC-MS-MS n ) Analysis. The LC-MS/MS system consisted of a Waters Alliance 2690 high-performance LC system (Waters, Milford, MA) and an LTQ-Orbitrap XL mass spectrometer (Thermo Fisher Scientific) with an electrospray ionization interface. The chromatography was performed on an XB-C18 column (Kinetex, 50 × 2.1 mm, 2.6 μm particle size; Phenomenex) at a temperature of 40°C. Analytes were eluted from the column with 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B). A gradient at a flow rate of 0.4 ml/min was run over 18 minutes as follows: 0-0.5 minutes, 95%-80% A; 0.5-10 minutes, 80%-5% A; 10-12.5 minutes, 5% A; 12.5-14 minutes, 5%-95% A; and 14-18 minutes, 95% A. The mass spectrometer was tuned to optimal conditions for ponatinib and was operated in a data-independent acquisition mode, which consisted of two scan events: a survey full scan (150-600 m/z) at a resolution of 30 K in the Orbitrap and a multi-stage mass spectrometry (MS n ) scan in the ion trap. The product ions were generated in collision-induced dissociation mode.
Bioactivation of Ponatinib by Human Recombinant CYP1A1 in the Presence of mGstp1. To generate protein adducts of oxidative ponatinib metabolites with mGstp1, incubations were conducted with a reaction mixture of human recombinant CYP1A1 (100 nmol) in Escherichia coli membrane, 10 mg/ml mGstp1, and 50 μM ponatinib (0.5 μl of 20 mM in acetonitrile) as substrate, in a final volume of 200 μl of 100 mM potassium phosphate buffer (pH 7.4). The incubation was started by the addition of a NADPH-regenerating system (final concentration: 1.3 mM NADPH, 4 mM glucose-6-phosphate, 2 U/ml glucose-6-phosphate dehydrogenase, and 3.3 mM potassium phosphate) and allowed to proceed at 37°C for 80 minutes. The reaction was terminated by cooling on ice, and the E. coli membrane containing human recombinant CYP1A1 was removed by ultracentrifugation at 41,000g at 4°C for 80 minutes. The supernatant was further filtered through a Pall Nanosep 10 K OMEGA (PN OD010C34) filter to remove unbound ponatinib and its metabolites. The solvent was exchanged to 50 mM ammonium bicarbonate and further concentrated to 100 μl.
Aliquots of the solution (10 μl) were mixed with 10 μl of 4× NuPAGE lithium dodecyl sulfate loading buffer (Invitrogen, Carlsbad, CA) by incubation at 95°C for 5 minutes and then subjected to SDS-PAGE on a NuPAGE Novex 10% Bis-Tris mini gel (Invitrogen NP0341BOX). After electrophoresis at a constant 180 V for 30 minutes, gels were stained with Coomassie Blue. Bands (20-25 kDa) containing mGstp1 were cut from the gel, and after removing the Coomassie Blue from the gel by adding 100 mM ammonium bicarbonate/acetonitrile (1:1, v/v) and subsequently dehydrating with 100% acetonitrile, the solvent was removed from the gel pieces under vacuum. The residue was resuspended in 0.01 mg/ml MS grade trypsin (Promega, Madison, WI) in 25 mM ammonium bicarbonate and incubated at 37°C overnight. Peptides were extracted by adding buffer [1:2 (v/v) 5% formic acid/acetonitrile] and evaporated by speed vacuum (Shevchenko et al., 2006). Samples were redissolved for LC-MS/MS analysis in 20 μl of 5% acetonitrile containing 0.1% trifluoroacetic acid.
Data Analysis. In the present study, the absolute concentrations of the metabolites in the incubation mixture could not be determined because of the lack of authentic standards. Therefore, the relative activity of the P450s was presented by comparing the concentration of the metabolite in a sample with the concentration of the same metabolite in the sample that had the highest response under the same experimental condition (Yu et al., 2010). The relative activity of the P450s toward metabolite formation could be calculated according to the ratio of the metabolite concentration to that in the sample with the highest mass response. The PK parameters for the in vivo data were calculated with a simple noncompartmental model using the WinNonlin software, version 4.1 (Pharsight/Certara, Cambridge, UK) and are shown with the S.D. values. The P values were calculated using an unpaired t test; *P < 0.05, **P < 0.01, ***P < 0.001.
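To make the noncompartmental analysis concrete, the sketch below shows how AUC(0-t) and the terminal half-life are typically obtained (linear trapezoidal rule and a log-linear fit to the terminal concentrations). The study itself used WinNonlin, so this is only illustrative; the choice of three terminal points and the example concentrations are invented.

import numpy as np

def auc_trapezoidal(t, c):
    """AUC(0-t) by the linear trapezoidal rule (t in hours, c in ng/ml)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    return np.trapz(c, t)

def terminal_half_life(t, c, n_terminal=3):
    """Half-life from a log-linear fit to the last n_terminal time points."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    tt, cc = t[-n_terminal:], c[-n_terminal:]
    slope, _ = np.polyfit(tt, np.log(cc), 1)   # ln(C) = ln(C0) + slope * t
    kel = -slope                               # elimination rate constant
    return np.log(2.0) / kel

# Illustrative use with made-up sampling times (h) and concentrations (ng/ml):
times = [0.25, 0.5, 1, 2, 4, 8]
conc = [120, 300, 450, 380, 210, 90]
print(auc_trapezoidal(times, conc), terminal_half_life(times, conc))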
Results
Ponatinib Biotransformation and Formation of a GSH Adduct by Human Recombinant P450 Enzymes. The contribution of individual P450 enzymes to ponatinib biotransformation was evaluated using a panel of 11 human recombinant P450 enzymes (CYP1A1, CYP1A2, CYP1B1, CYP2A6, CYP2B6, CYP2C8, CYP2C9, CYP2C19, CYP2D6, CYP3A4, and CYP2E1). Ponatinib was principally metabolized via CYP3A4 and CYP1A1, as measured by the disappearance of the parent drug (Fig. 1A). CYP3A4 had the highest activity in the formation of both the N-desmethyl (AP24567) and N-oxide (AP24734) metabolites (Fig. 1, B and C), with lesser contributions from CYP2C8 and CYP2D6. It should be noted that in the case of CYP2C8 and CYP2D6, the P450/P450 oxidoreductase ratio was considerably lower than for CYP3A4. In the event that P450 oxidoreductase is rate limiting in these samples, the actual rates of metabolism could be higher. In the case of CYP1A1, metabolism of ponatinib resulted in four different monohydroxylated products (P-OH, m/z 549) with retention times of 5.00 minutes (P-OH-1), 5.60 minutes (P-OH-2), 5.94 minutes (P-OH-3), and 7.07 minutes (P-OH-4), as well as a number of dihydroxylated products (see Fig. 2 and Supplemental Material for a more detailed description of the metabolites formed). The major metabolites were 2- and 4-hydroxyponatinib. These metabolites were also produced, to a lesser degree, by CYP1A2 and CYP1B1. The formation of hydroxylated metabolites could be a consequence of epoxidation reactions, a characteristic activity of CYP1A1. For this reason, we also carried out incubations in the presence of GSH. Indeed, this resulted in the formation of a ponatinib-GSH (P-GSH) conjugate. The P-OH-2 and P-OH-4 metabolites were not detected in incubations containing GSH, whereas P-OH-1 and P-OH-3 were unchanged (Fig. 2A, middle), suggesting that epoxidation had taken place at the site of these hydroxylation reactions. In addition, the formation of the dihydroxylation products was significantly reduced when GSH was added to the incubation (Fig. 2B, middle). Generation of P-GSH was not detected with any other P450 isoforms.
The MS spectrum of P-GSH revealed a molecular ion [M+2H] 2+ at m/z 419.7, suggesting a GSH adduct formed by addition of the sulfhydryl nucleophile to ponatinib (Supplemental Fig. 2A). Fragmentation of P-GSH molecular ions resulted in neutral losses of 75 and 129, corresponding to elimination of the glycine and pyroglutamate of GSH, respectively (Supplemental Fig. 2B), but yielded no information on the specific site of ponatinib modification. MS analysis of the fragments of the four monohydroxylated products clearly indicated that CYP1A1-mediated oxidation had occurred on the partial structure of ring A, linker B, or ring C, and that GSH conjugation had occurred on ring C.
Glutathione S-Transferases Catalyze the Formation of P-GSH Conjugates. The reaction of electrophiles with GSH is catalyzed by GSTs (Hayes et al., 2005; Tew and Townsend, 2012). Therefore, we investigated this possibility through the addition of recombinant GST or GST mixtures to the incubation system. The formation of P-OH-2 and P-OH-4 was linear for at least 20 minutes and with differing concentrations of recombinant CYP1A1 protein up to 10 nM at 3 μM ponatinib. Incubations were therefore routinely carried out for 20 minutes with 10 nM recombinant CYP1A1 protein. The formation of the P-GSH conjugate was significantly increased by the addition of recombinant mGstp1 protein up to 20 mg/ml (Fig. 3A). The fact that this reaction was saturated at 20 mg/ml suggests that the rate of hydroxylation was rate limiting. As in Fig. 2, the formation of P-OH-2 and P-OH-4 was inversely related to the formation of the P-GSH conjugate (Fig. 3, B and C), whereas the formation of P-OH-1 was not affected by the addition of mGstp1 (Fig. 3D).
The dependence of GST-mediated P-GSH conjugate formation on GSH concentration is shown in Fig. 3E. At a GSH concentration of 5 mM, the rate of the mGstp1-catalyzed P-GSH conjugate formation was approximately double the rate of the nonenzymic reaction. However, at a GSH concentration of 0.05 mM the rate of enzyme-catalyzed conjugate formation was increased more than 40-fold compared with the nonenzymic control (68% versus 1.4%, respectively). These differences were also reflected in an inverse manner by the formation of P-OH-2 and P-OH-4 (data not shown). Therefore, the effect of mGstp1 on the rate of P-GSH conjugate formation is greatest at the lowest GSH concentration, consistent with a previous report that the covalent binding rate of an acetaminophen-GSH conjugate is greatest at low GSH concentrations (Rollins and Buckpitt, 1979).
To further investigate the role of GSTs in the formation of the P-GSH conjugate, Gst/GST mixtures were purified from Gstp1 WT and Gstp1/2 NULL mice and human liver through a GSTrap FF affinity column and incubated with ponatinib. Compared with the activity of the Gst mixture from mouse Gstp1 WT liver cytosol (100%), the formation of P-GSH was significantly reduced in Gstp1/2 NULL mice (56%), indicating that, in agreement with the data obtained with the recombinant enzyme, Gstp is responsible for ∼60% of the murine hepatic conjugating activity. The activity of human liver GSTs was also lower (28%), which could be ascribed to the fact that human hepatocytes from healthy individuals do not express glutathione S-transferase P (GSTP) (Fig. 3F).
Induction of CYP1A1 Markedly Alters Ponatinib Exposure and Induces P-GSH Conjugate Excretion In Vivo. To study the contribution of CYP1A1 to the metabolism and disposition of ponatinib in vivo, WT and humanized hCYP1A1/1A2 mice were administered the aryl hydrocarbon receptor activators TCDD or 3-MC for 3 days prior to ponatinib administration. In WT mice treated with TCDD, an 87% reduction in the AUC 0-8 hours value and a significant decrease (62%) in the ponatinib half-life were observed (Fig. 4A; Table 1). In hCYP1A1/1A2 mice treated with 3-MC, significant reductions in both the AUC 0-8 hours (54%) and half-life (67%) values for ponatinib were also observed (Fig. 4B; Table 1). In addition, significant amounts of the P-GSH conjugate were detected in fecal samples from 3-MC-treated hCYP1A1/1A2 mice (Fig. 4F). This metabolite was not detectable in control (vehicle-treated) animals treated with ponatinib (Fig. 4E). The absolute level of the P-GSH conjugate formed could not be determined because relevant standards were not available.
[Table 1. Pharmacokinetic parameters of ponatinib in WT mice and hCYP1A1/2 mice pre-treated with TCDD or 3-MC; table body not reproduced. The PK parameters were calculated as outlined in Materials and Methods. The P values were calculated using an unpaired t test: *P ≤ 0.05; **P ≤ 0.01; ***P ≤ 0.001.]
Ponatinib Forms an Adduct with mGstp1 in the Presence of CYP1A1. It has often been observed that the conjugation of electrophiles by Gstp1 also results in covalent modification of Gstp1 itself. To investigate this possibility, tryptic peptides obtained following incubations of ponatinib with CYP1A1 and Gstp1 were analyzed and two different adducts of Gstp1 were identified (Supplemental Table 2). Further MS analysis demonstrated that ponatinib-Gstp adducts had been formed on Cys-14 and Cys-47, but not on Cys-167.
Discussion
Ponatinib is used to treat CML patients who have relapsed following imatinib therapy (Holyoake and Vetrie, 2017). Although effective, the use of this drug is compromised by a number of serious side effects including vascular occlusions, pancreatitis, and hepatotoxicity (Cortes et al., 2013; Lipton et al., 2016). To obtain greater insights into pathways of ponatinib metabolism and toxicity, we investigated the enzymes involved in its disposition and the metabolites produced by different human P450 enzymes (Fig. 5). Studies were carried out in vitro and in vivo using humanized mouse models. In agreement with the published literature, CYP3A4 was the major hepatic P450 enzyme involved in the production of the N-desmethyl (AP24567) and N-oxide (AP24734) metabolites (Narasimhan et al., 2013). However, a number of different metabolites were produced by CYP1A1. Surprisingly, this potentially important pathway of ponatinib metabolism has, to our knowledge, not been previously reported. The metabolites produced by CYP1A1 were reduced by the addition of GSH to the incubation medium, suggesting that reactive electrophilic products had been formed, probably epoxides. Epoxides can be inactivated by a number of enzymes, including GSTs and epoxide hydrolases (Seidegård and Ekström, 1997). In support of epoxide formation, we have shown that the formation of the GSH conjugate of ponatinib is greatly increased by the addition of both murine and human GST. In particular, we have demonstrated that Gstp has significant activity in this regard. If an epoxide was the electrophilic intermediate, it might have been anticipated that two GSH adducts would have been formed. This may be the case, but not observed, if the two adducts were not separated by ultra-performance LC. Also, the MS fragmentation pattern would unfortunately not distinguish the sites of glutathionylation. In addition, it is feasible that one site of glutathionylation was preferred due to steric differences, or that the reaction catalyzed by Gstp generates a site-specific product. The formation of reactive epoxides from ponatinib and their inactivation by Gstp has a number of possible implications. Epoxides are chemically reactive and can react covalently with both DNA and proteins to cause mutations and toxicity (Hinson and Roberts, 1992; Boelsterli, 1993; Swenberg et al., 2011). The nucleophilic sites in proteins include cysteine residues, consistent with our finding here that metabolic activation of ponatinib resulted in cysteine conjugates of Gstp1. The cysteine residues, especially Cys-47 of the human enzyme GSTP1-1, have been found to be targeted by many electrophilic compounds resulting from P450 metabolism, such as acetaminophen, clozapine, and troglitazone (Orton and Liebler, 2007; Tew, 2007; Boerma et al., 2011, 2012; Zhang et al., 2014). When ponatinib was incubated with Gstp1 in the presence of recombinant human CYP1A1, adducts of ponatinib were observed on Cys-14 and Cys-47. The fact that no adducts of ponatinib to Cys-169 were observed in this study may be due to its low reactivity, or perhaps more likely to the relative insensitivity of our methodology in detecting Cys-169-containing tryptic peptides.
It would have been interesting to establish whether the addition of epoxide hydrolase to the incubation reduced the formation of the GSH conjugate. However, we have been unable to source any functionally active epoxide hydrolase, and while we believe that epoxide formation is the initial oxidation step, this remains to be formally established. Therefore, it is possible that the formation of reactive epoxides from ponatinib contributes to the side effects associated with its clinical use. CYP1A1 is a highly inducible enzyme and, unlike many other P450s, can be expressed in most tissues (Hussain et al., 2014). Metabolic activation could therefore, in principle, occur in all of the reported ponatinib target tissues. CYP1A1 levels are constitutively very low, but are highly inducible on activation of the aryl hydrocarbon receptor by compounds including polycyclic aromatic hydrocarbons found in cigarette smoke (Schiwy et al., 2015). It is interesting to note that hypertension or vaso-occlusive disease observed following ponatinib administration has been associated with smoking (Jain et al., 2015). This potential association is therefore worthy of further epidemiologic studies. If reactive metabolites of ponatinib are involved in its toxicity, this could be reduced using inhibitors of CYP1A1. It is worthy of note that the reactive metabolites of ponatinib are also inactivated by conjugation with GSH, mediated at least in part by GSTP. GSTP is expressed in most cell types and indeed in many tumors resistant to anticancer drugs (Henderson et al., 2011, 2014; Tew and Townsend, 2011). Therefore, cellular GSH and GSTP status may also be an important determinant of ponatinib side effects.
In addition to possible involvement in ponatinib side effects, or indeed as a determinant of intratumoral drug concentration, CYP1A1 and CYP1A2 may also contribute to ponatinib exposure since plasma levels and half-life were significantly changed in CYP1A1/1A2 humanized mice treated with aryl hydrocarbon receptor activators (Fig. 4). The PK parameters obtained using this model showed a plasma half-life (t 1/2 = 13.9 hours) comparable to that in humans (∼24 hours), whereas the t 1/2 value was 6.2 hours in WT mice, demonstrating the strengths of using humanized models for such studies. CYP1 family enzymes, especially CYP1A1 and CYP1B1, are overexpressed in myeloblastic and lymphoid cell lines such as U937, BALL-1, and HL-60 (Nagai et al., 2002). The potential expression of these enzymes in CML cells may also be a factor in drug efficacy, or indeed drug resistance. Indeed, our findings provide a mechanistic rationale for the recent report that CML patients who are smokers are at increased risk of disease progression and premature death (Lauseker et al., 2017).
Deciphering the Effects of Astragaloside IV on AD-Like Phenotypes: A Systematic and Experimental Investigation
Astragaloside IV (AS-IV) is an active component in Astragalus membranaceus with the potential to treat neurodegenerative diseases, especially Alzheimer's disease (AD). However, its mechanisms are still not known. Herein, we aimed to explore the systematic pharmacological mechanism of AS-IV for treating AD. Drug prediction, network pharmacology, and functional bioinformatics analyses were conducted. Molecular docking was applied to validate the reliability of the interactions and binding affinities between AS-IV and related targets. Finally, experimental verification was carried out in mice with AβO infusion-induced AD-like phenotypes to investigate the molecular mechanisms. We found that AS-IV works through a multitarget synergistic mechanism, including inflammation, nervous system, cell proliferation, apoptosis, pyroptosis, calcium ion, and steroid. In docking simulations, AS-IV interacted strongly with PPARγ, caspase-1, GSK3Β, PSEN1, and TRPV1. Meanwhile, PPARγ interacts with caspase-1, GSK3Β, PSEN1, and TRPV1. In vivo experiments showed that AβO infusion produced AD-like phenotypes in mice, including impairment of fear memory, neuronal loss, tau hyperphosphorylation, neuroinflammation, and synaptic deficits in the hippocampus. In particular, the expression of PPARγ, as well as BDNF, was also reduced in the hippocampus of AD-like mice. Conversely, AS-IV improved AβO infusion-induced memory impairment, inhibited neuronal loss and the phosphorylation of tau, and prevented the synaptic deficits. AS-IV prevented the AβO infusion-induced reduction of PPARγ and BDNF. Moreover, the inhibition of PPARγ attenuated the effects of AS-IV on BDNF, neuroinflammation, and pyroptosis in AD-like mice. Taken together, AS-IV could prevent AD-like phenotypes and reduce tau hyperphosphorylation, synaptic deficits, neuroinflammation, and pyroptosis, possibly via regulating PPARγ.
Introduction
Alzheimer's disease (AD) is a neurodegenerative disease characterized by cognitive decline and behavioral impairment. The incidence of AD is increasing as the world population ages. According to the World Alzheimer Report 2018 [1], there are more than 50 million people suffering from AD worldwide and it is predicted that by 2050, the number of AD patients will increase to 152 million. Currently, the pathogenesis and etiology of AD have not been fully elucidated, and there is no effective treatment for AD [2]. Remarkable efforts are made in developing strategies to resist mechanisms that lead to neuronal damage, synaptic deficits, neuroinflammation, and cognitive impairment [3][4][5]. Especially, amyloid-β (1-42) oligomers (AβO) accumulating in AD brains are linked to synaptic failure, neuroinflammation, and memory deficit [2,6,7].
Astragaloside IV (AS-IV), one of the major effective components purified from Astragalus membranaceus, has been documented in the treatment of diabetes and diabetic nephropathy [8,9]. AS-IV has been reported to play a variety of beneficial roles in the prevention and treatment of neurodegenerative diseases with cognitive impairment [10]. Especially, AS-IV, as a selective natural PPARγ agonist, inhibited BACE1 activity by increasing PPARγ expression and subsequently reduced Aβ levels in APP/PS1 mice [11]. In addition, other studies pointed out that AS-IV could inhibit Aβ 1-42 -induced mitochondrial permeability transition pore opening, oxidative stress, and apoptosis [12,13].
PPARγ activation regulates the response of microglia to amyloid deposition, thereby increasing phagocytosis of Aβ and reducing cytokine release [14,15]. In addition, PPARγ agonists are able to improve the memory deficits in AD models [16,17], effects that have been further confirmed in clinical trials [18,19]. In a previous study, we reported that AS-IV prevented AβO-induced hippocampal neuronal apoptosis, probably by promoting the PPARγ/BDNF signaling pathway [20]. However, those findings were limited to in vitro experiments, and the systemic mechanisms have not been clearly disclosed.
In this study, we adopted a systematic study of the multiscale mechanism to investigate the treatment effect of AS-IV for AD, which combined the drug prediction, network pharmacology, functional bioinformatics analyses, and molecular docking. Subsequently, experiments were carried out to validate the potential mechanisms from the target of PPARγ. This study would provide important implications for the treatment of AD.
Target Prediction.
To obtain the molecular targets of AS-IV, we used SysDT, a computational model based on random forest (RF) and support vector machine (SVM) algorithms [21] that integrates large-scale genomic, chemical, and pharmacological information, to predict potential targets, using an RF score ≥ 0.8 and an SVM score ≥ 0.7 as thresholds. In addition, we also combined pharmacophore modeling [22] and structural similarity prediction methods to predict the targets of AS-IV [23].
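The screening step amounts to keeping only predicted targets whose scores meet both cut-offs; a minimal sketch is given below, in which the record structure and the example scores are invented for illustration and do not reflect SysDT's actual output format.

# Hypothetical prediction records; SysDT's real output format will differ.
predictions = [
    {"target": "PPARG", "rf_score": 0.91, "svm_score": 0.84},
    {"target": "CASP1", "rf_score": 0.86, "svm_score": 0.75},
    {"target": "GSK3B", "rf_score": 0.78, "svm_score": 0.90},  # fails the RF cut-off
]

RF_CUTOFF, SVM_CUTOFF = 0.8, 0.7

candidate_targets = [p["target"] for p in predictions
                     if p["rf_score"] >= RF_CUTOFF and p["svm_score"] >= SVM_CUTOFF]
print(candidate_targets)   # ['PPARG', 'CASP1']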
Network Construction.
To visualize and analyze the relationship between the targets of AS-IV and their related biological functions, we screened the relevant functions corresponding to the targets, introduced them into Cytoscape, and constructed the networks. In this section, three networks, namely the compound-target (C-T), compound-target-function (C-T-F), and protein-protein interaction (PPI) [24] networks, were constructed to disclose the multitarget and multifunction therapeutic effect of AS-IV in combating AD (Figure 1).
Gene Ontology (GO) Enrichment Analysis.
To further investigate the vital biological processes connected with the AS-IV-related targets, we mapped these targets to DAVID for analysis of their biological meaning. The GO terms of biological process were utilized to represent gene function. Finally, those GO terms with P ≤ 0.05 and FDR ≤ 0.05 were selected for subsequent research.
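DAVID reports enrichment P values and FDR values directly; purely for illustration, the sketch below shows a Benjamini-Hochberg FDR adjustment and the dual P ≤ 0.05 / FDR ≤ 0.05 filter applied to invented P values.

import numpy as np

def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR adjustment (illustrative; DAVID computes its own FDR)."""
    p = np.asarray(p_values, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / (np.arange(n) + 1)
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    fdr = np.empty(n)
    fdr[order] = np.clip(ranked, 0, 1)
    return fdr

p_vals = [0.001, 0.012, 0.030, 0.20]          # invented enrichment P values
fdr = benjamini_hochberg(p_vals)
keep = [(p <= 0.05) and (q <= 0.05) for p, q in zip(p_vals, fdr)]
print(fdr, keep)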
Molecular Docking.
To validate the C-T network, AS-IV was docked to its predicted targets (PPARγ, caspase-1, GSK3Β, PSEN1, and TRPV1) by the AutoDock software version 4.1 package with default settings based on a powerful genetic algorithm method [25]. The X-ray crystal structures of targets (5GTN, 5IRX, 6IYC, 6PZP, and 6GN1) were taken from the RCSB Protein Data Bank. Each protein was prepared using methods such as adding polar hydrogens, partial charges, and defining the rotatable bonds. Finally, the results were analyzed in the AutoDock Tools.
2.6. Animals and Treatments. Male C57BL/6 mice (5-6 weeks old, 20-25 g) were obtained from the Beijing Weishang Lituo Technology Co., Ltd (SCXK (Beijing) 2016-0009). The mice were housed in groups of six per cage with controlled room temperature and humidity, under a 12 h light/dark cycle, with free access to food and water. The mice were adapted for one week before administration. All protocols were approved by the Animal Ethics Committee of Anhui University of Chinese Medicine (approval No. AHUCM-mouse-2019015), and the procedures involving animal research were in compliance with the Animal Ethics Procedures and Guidelines of the People's Republic of China.
The mice were randomly divided into the following groups (N = 8 per group) (Figure 1): a sham group, an AβO group, AβO plus AS-IV (10, 20, and 40 mg/kg/day, i.g.) groups, an AβO plus donepezil (5 mg/kg/day, i.g.) group, and an AβO plus AS-IV (20 mg/kg/day, i.g.) with GW9662 (1 mg/kg/day, i.p.) group. Drugs were administered once per day for one week before intrahippocampal infusion of AβO, and the mice then continued to receive AS-IV once per day for another four weeks. The doses of AS-IV and GW9662 were selected and modified based upon a previous study [11].
2.7. Preparation and Infusion of AβO. AβO were prepared from synthetic Aβ 1-42 and incubated at 37°C for 1 week in a stock solution of 10 μg/μL, then routinely characterized by size-exclusion chromatography, as previously described [26,27], and stored at -80°C until use after subpackaging. AβO were perfused at a final concentration of 2.5 μg/μL in aCSF.
For intrahippocampal infusion of AβO, mice were anesthetized with 5% isoflurane using a vaporizer system (RWD Life Science Co., Ltd, Shenzhen, China) and maintained at 1% during the injection procedure, as previously described [26,28]. AβO (5 μg per site) were bilaterally delivered into the hippocampal CA1 region (stereotaxic coordinates relative to bregma: 2.3 mm anteroposterior, ±1.8 mm mediolateral, and 2.0 mm dorsoventral). Injections were performed in a volume of 2 μL infused over 5 min, and the needle was left in place for 1 min to prevent backflow. Then, the mice were treated with penicillin to prevent infection. After the operation, the mice were kept under standard conditions with free access to food and water. Mice that showed signs of misplaced injections or any sign of hemorrhage were excluded from further analysis. Seven days before the AβO infusions, AS-IV (10, 20, and 40 mg/kg, once/day) was administered intragastrically in mice. Behavioral and pathological studies were performed 4 weeks postinjection of AβO.
Figure 1: Experimental design. First, we screened the relevant targets via a comprehensive procedure; second, compound-target (C-T) and compound-target-function (C-T-F) networks were established to reveal the underlying molecular mechanisms; third, protein-protein interaction (PPI) network analysis and Gene Ontology (GO) enrichment analysis were performed to predict related targets; fourth, we also studied the regulatory effect and specific mechanism of AS-IV on AD via molecular docking and dynamics simulation; last, we investigated the effects of AS-IV on AD phenotypes in AβO-infused mice and further assessed the potential mechanisms. The mice were intragastrically administered daily with AS-IV (10, 20, and 40 mg/kg) for 7 days followed by intrahippocampal infusion of AβO or vehicle (aCSF) and then received AS-IV intragastrically, or AS-IV plus GW9662 (1 mg/kg) intraperitoneally, or donepezil (5 mg/kg), or the same volume of saline intragastrically for 28 continuous days once per day. After that, the behavioral tests were conducted in the following week, then the animals were sacrificed, and brain samples were collected. GW9662 (1 mg/kg) was coadministered with 20 mg/kg of AS-IV or vehicle in the AβO-treated or aCSF-treated animals.
Fear Conditioning. FC was evaluated as previously described [29]. On the adaptation day, mice were allowed to freely explore the conditioning chamber (UgoBasile, Gemonio, Italy), with a camera that was connected to the ANY-Maze™ software (Stoelting, NJ, USA, RRID:SCR_014289), for 5 min. On the conditioning day, mice were placed into the same test chamber, and then an 80 dB audio tone (conditioned stimulus: CS) was presented for 30 s with a coterminating 1.0 mA, 2 s long foot shock (unconditioned stimulus: US) three times at a 73 s interval. Then, mice were removed from the cage. The next day (contextual test), mice were put back into the conditioning chamber for 5 min, but without any audio tone or foot shock. On day 4 (cued test), the cover of the back and side chamber walls was removed. The mice were returned to the chamber, followed by three CS presentations (without a foot shock) of 30 s each. The freezing time was recorded for each test using the software.
2.9. Preparation of Hippocampal Tissue. Twenty-four hours after behavioral tests, some mice were anesthetized with 5% isoflurane and decapitated, and the hippocampi were then rapidly dissected on ice and snap-frozen in liquid nitrogen before storing at -80°C for biochemical tests. Others received transcardial perfusion with 4% paraformaldehyde (PFA), and then, the hippocampi were rapidly dissected and postfixed with 4% PFA overnight at 4°C followed by immersions in a solution containing 30% sucrose at 4°C for graded dehydration. Parts of the hippocampi were then cut into serial coronal frozen slices (20 μm) for immunofluorescence assay, and other hippocampus samples were sliced into 4 μm thick coronal slices for histopathological analysis.
Hematoxylin and Eosin (HE) Staining.
After fixation in 4% paraformaldehyde for 24 h at room temperature, the hippocampal tissues were embedded in paraffin and coronally cut into 4 μm thick slices (three slices per mouse). The tissues were dewaxed and successively rehydrated with graded alcohol (70%, 85%, 95%, and 100%), and then the slices were stained with hematoxylin solution for 3 min followed by eosin solution for 2 min at room temperature. The slices were finally dehydrated with graded alcohol, cleared with xylene, mounted, and sealed with neutral gum. Representative photographs were captured by a light microscope with the DP70 software.
2.11. Enzyme-Linked Immunosorbent Assay. Hippocampal tissues were collected and homogenized with ice-cold saline, supplemented with protease and phosphatase inhibitor cocktails. The supernatants were collected for further analysis. The levels of endogenous Aβ 1-42 , IL-1β, IL-6, and TNF-α were determined using ELISA kits according to the manufacturer's instructions. The absorbance was recorded at 450 nm using a microplate reader (SpectraMax M2/M2e; Molecular Devices, Sunnyvale, CA, USA), and the concentrations of Aβ 1-42 , IL-1β, IL-6, and TNF-α were calculated from standard curves. Results were expressed as picograms per milliliter. Data were generated from 6-8 mice per group.
2.12. Immunofluorescence. Mice were sacrificed, and the hippocampi were snap-frozen in optimal cutting temperature (OCT) compound (Sakura Finetechnical, Japan). For immunofluorescence staining, the OCT-embedded hippocampi were cut into serial coronal 20 μm thick slices and mounted on adhesive microscope slides. The slices were fixed with ice-cold acetone for 10 min and then blocked in 10% goat serum (containing 0.04% Triton X-100) for 90 min at room temperature. Subsequently, the slices were incubated with primary antibodies to MAP-2 (1 : 200), PSD95 (1 : 200), SYN (1 : 400), GAP43 (1 : 200), and GFAP (1 : 200) overnight at 4°C followed by incubation with Alexa-conjugated secondary antibodies (Thermo Fisher Scientific) for 2 h at room temperature. After counterstaining with DAPI solution in the dark, fluorescent images of the slices were acquired using a confocal scanning microscope (FV1000, Olympus, Japan). At least six representative images were taken from each mouse for analysis by the Image J software (NIH, USA, RRID:SCR_003070).
2.13. Immunohistochemistry. Hippocampal slices were deparaffinized and rehydrated as described above. After antigen retrieval, slices were incubated with 3% H 2 O 2 for 15 min and blocked in goat serum (containing 0.1% Triton X-100) for 30 min followed by incubation overnight at 4°C with primary antibodies to PPARγ (1 : 200) and BDNF (1 : 200). Then, the slices were washed three times with PBS and incubated with the horseradish peroxidase (HRP) conjugated goat anti-rabbit or anti-mouse IgG (1 : 100) secondary antibody for 2 h at room temperature followed by incubation with 50 μL 3,3 ′ -diaminobenzidine (DAB) substrate (DAKO, Denmark) at room temperature for 10 min. The number of immunoreactive cells in the hippocampus was assessed using light microscopy (DP70; Olympus, Japan). At least three different fields (200 × 200 μm) per slice were randomly selected for visualization. The mean optical density in the hippocampus region was calculated and used to determine PPARγ and BDNF expression levels.
2.14. Golgi-Cox Staining. Golgi-Cox staining was performed to assess changes in dendrites and dendritic spines within hippocampal neurons using the FD Rapid GolgiStain™ Kit (FD NeuroTechnologies, USA) according to the manufacturer's instructions. Briefly, mice were anaesthetized with 5% isoflurane and decapitated, and the brains were rapidly removed and immersed in the impregnation solution (A : B = 1 : 1, total 2 mL/mouse) at room temperature in the dark; the solution was replaced with new impregnation solution after 2 days. Two weeks later, brains were transferred into solution C and stored at 4°C for three days and then rinsed 3 times with PBST (containing 0.3% Triton X-100). Brains were then cut serially into 100 μm coronal slices on a vibration microtome, and each slice was transferred to a gelatin-coated slide with solution C and then dried at room temperature in the dark for up to 3 days. Then, the slices were placed in a mixture consisting of solution D, solution E, and distilled water (1 : 1 : 2) for 15 min followed by a dehydration series consisting of 50%, 70%, 85%, 95%, and 100% ethanol, for 3 applications at 5 min each. The slices were then cleared with xylene and sealed with neutral gum for light microscopic observation. At least 3-5 dendritic segments of apical dendrites per neuron were randomly selected in each slice, and 5 pyramidal neurons were analyzed per mouse. For each group, the number of spines per dendritic segment of at least 3 mice was analyzed using the Image J software (NIH, USA, RRID:SCR_003070). Results are expressed as the mean number of spines per 10 μm.
… at 4°C for 4 h followed by fixation with 1% osmium tetroxide for 1.5 h. After a series of gradient ethanol dehydrations, the tissues were immersed in propylene oxide for 30 min and then infiltrated with a mixture of propylene oxide and epoxy resin overnight. Then, the tissues were embedded in epoxy resin and placed in an oven at 60°C for 48 h and then cut into serial ultrathin slices (70 nm thickness) and stained with 4% uranyl acetate for 20 min followed by 0.5% lead citrate for 5 min. The synaptic ultrastructures were observed under TEM (HT7700; Hitachi, Tokyo, Japan). In this study, at least 10 micrographs were randomly taken from each mouse and analysis of synaptic density was performed using the Image J software (NIH, USA, RRID:SCR_003070).
2.17. Statistical Analysis. All analyses were performed with the GraphPad Prism 5.0 software (GraphPad Prism, San Diego, CA, USA, RRID: SCR_002798), and data were expressed as mean ± standard deviation (SD). The statistical significance of differences between groups was evaluated using one-way ANOVA followed by the Tukey test. P values of <0.05 were considered statistically significant.
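The group comparison described above can be reproduced with standard libraries, as in the sketch below; the freezing-time values are placeholders, and statsmodels' Tukey HSD is used here in place of GraphPad's implementation.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder freezing-time data (seconds) for three illustrative groups.
sham = [120, 135, 128, 140, 125, 132]
abo = [70, 65, 80, 75, 68, 72]
abo_asiv = [100, 95, 110, 105, 98, 102]

f_stat, p_value = stats.f_oneway(sham, abo, abo_asiv)   # one-way ANOVA

values = np.concatenate([sham, abo, abo_asiv])
groups = (["sham"] * len(sham) + ["AbO"] * len(abo)
          + ["AbO+AS-IV"] * len(abo_asiv))
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)    # post hoc Tukey test
print(f_stat, p_value)
print(tukey.summary())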
C-T Network.
In this study, we used a comprehensive method to screen AS-IV targets. Figure 2(a) shows that there are 64 targets with binding capacity to AS-IV. All these observations provide strong evidence that AS-IV works through a multitarget synergistic mechanism. Figure 2(b) depicts the global view of the C-T-F network, in which the diamond, circle, and hexagon nodes represent AS-IV, targets, and the corresponding functions of the targets, respectively. Further observation of this network shows that these 64 targets are related to 7 functions, including inflammation, nervous system, cell proliferation, apoptosis, pyroptosis, calcium ion, and steroid.
PPI Network.
Proteins do not exert their functions independently of each other but interact together in the PPI network [30]. It is very helpful to understand the functions of proteins through analyzing the topological characteristics of proteins in PPI networks. Here, we constructed the PPI network of the 64 target proteins obtained from AS-IV and calculated the degree of each node. As shown in Figure 2(c), the degree of ADRA2A, ADRA2B, ADRA2C, CHRM2, S1PR5, S1PR2, DRD3, and HRH3 was the highest (degree = 7), followed by APH1B, PSENEN, PSEN1, PSEN2, and NCSTN (degree = 4), demonstrating that these proteins are hub targets and may be responsible for bridging other proteins in the PPI network.
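Degree counts of this kind can be obtained with any standard graph library; the toy example below (with an invented edge list, not the actual 64-protein network) illustrates the calculation.

import networkx as nx

# Toy protein-protein interaction edges (illustrative only).
edges = [("ADRA2A", "ADRA2B"), ("ADRA2A", "CHRM2"), ("ADRA2A", "DRD3"),
         ("PSEN1", "PSENEN"), ("PSEN1", "NCSTN"), ("PSEN1", "APH1B")]

g = nx.Graph(edges)
degree = dict(g.degree())                       # node -> number of interactions
hubs = sorted(degree, key=degree.get, reverse=True)
print(hubs[:3], degree)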
GO Enrichment Analysis.
Through the GO enrichment analysis (Figures 2(d)-2(f)), the targets were related to the following biological processes: G-protein-coupled acetylcholine receptor signaling pathway (count = 2, 3, 6), protein kinase B activity (count = 1), Notch receptor processing (count = 5), protein processing (count = 7), and inflammatory response (count = 4). These processes are usually related to cell proliferation, gene transcription, differentiation, and development.

In the molecular docking analysis, the binding pocket of caspase-1 with its ligand showed large hydrophobic interactions formed by residues Trp340, Pro343, and Ala284; for GSK3β, the hydrophobic interactions were formed by residues Val110, Leu188, Ala83, Leu132, Val70, Phe67, and Ile62; in PSEN1, they were formed by residues Phe14, Ile408, Ile135, Phe6, Trp404, Leu142, and Ala98; and in TRPV1, by residues Phe543, Phe522, Met547, Val518, Leu515, Ile573, Ala566, Leu553, and Ile569. AS-IV interacted with many residues in the active site of caspase-1, and three H-bond networks were formed (Figure 3). AS-IV formed H-bond networks with GSK3β at Lys85, Val135, Lys60, Tyr134, Arg141, and Asn64, H-bond interactions with PSEN1 at Ala139, and with TRPV1 at Asn551, Thr550, Arg557, and Ser512 (Figure 3). AS-IV is well suited to the receptor binding pockets, binding tightly and deep within the cavities. The binding free energies of AS-IV with caspase-1, GSK3β, PSEN1, and TRPV1 were -5.30, -4.85, -6.41, and -6.07 kcal/mol, respectively, indicating that AS-IV shows high binding affinities for its targets. For the target PPARγ, AS-IV is directed toward the binding site and stabilized by hydrogen-bonding interactions with Gln343, Cys285, and Ser289. Five critical proteins in the network, including PPARγ, caspase-1, GSK3β, PSEN1, and TRPV1, were selected to further validate the PPI. As shown in Figure 4(d), these five proteins showed close interactions.
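To give a rough physical sense of these docking scores, the reported free energies can be converted into predicted dissociation constants through ΔG = RT ln K_d; the snippet below does this for the four values listed above, assuming T = 298.15 K. This is only an order-of-magnitude interpretation, not part of the original analysis.

```python
# Convert docking free energies (kcal/mol) to predicted dissociation constants.
import math

R = 1.987e-3      # gas constant in kcal/(mol*K)
T = 298.15        # assumed temperature in K
binding_dG = {"caspase-1": -5.30, "GSK3B": -4.85, "PSEN1": -6.41, "TRPV1": -6.07}

for target, dg in binding_dG.items():
    kd = math.exp(dg / (R * T))          # Kd in mol/L
    print(f"{target}: dG = {dg:.2f} kcal/mol -> predicted Kd ~ {kd * 1e6:.1f} uM")
```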
3.7. Effect of AS-IV on AβO-Induced Memory Impairment and Pathological Changes. The fear conditioning (FC) task was further performed to assess the effects of AS-IV on fear memory in AβO-infused mice, using the intensity of freezing to the context and to the auditory cue as readouts. During the adaptation session, there was no difference in freezing time among experimental groups (data not shown). Upon exposure to the context and the auditory cue, the freezing response was higher in sham mice than in AβO-infused mice (Figures 5(a) and 5(b)). This reduction in freezing time was attenuated in AβO-infused mice after administration of AS-IV (10, 20, and 40 mg/kg) or donepezil, a positive control drug. These results suggested that AS-IV prevented AβO-induced contextual and cued fear memory impairments. HE staining showed that the pyramidal cells in the CA1 region of the hippocampus of sham mice had intact cell bodies and round nuclei with a tight arrangement, and no cell loss was found. In AβO-infused mice, however, the pyramidal layer was disintegrated and neuronal loss was observed in the CA1 region. Additionally, neurons with shrunken or irregularly shaped cell bodies and degenerated nuclei were also found in the hippocampus of AβO-infused mice (Figure 5(c)). It is worth mentioning that AS-IV (10, 20, and 40 mg/kg) administration attenuated this structural damage and neuronal loss to some extent relative to AβO-infused mice, indicating a neuroprotective effect of AS-IV.
Next, the levels of Aβ1-42 and phosphorylated tau were measured in the hippocampus. There was no difference in the hippocampal Aβ1-42 level among experimental groups (Figure 5(d)). Compared with sham mice, phosphorylated tau expression was significantly increased in AβO-infused mice, and AS-IV treatment reduced hippocampal phosphorylated tau expression compared with AβO-infused mice (Figure 5(e)).
We also examined MAP-2 expression in the hippocampus of mice by immunofluorescence assay. In sham mice, there were a large number of MAP-2+ cells, with regularly arranged neurons and distinct neurites organized in bundles. Compared with sham mice, the number of MAP-2+ cells was markedly reduced, the arrangement of dendrites was disordered, and the length of the neurites was significantly shortened in the hippocampus of AβO-infused mice. In contrast, AS-IV (20 mg/kg) administration reversed the inhibitory effects of AβO on the growth of MAP-2+ neurites (Figure 5(f)). Taken together, AS-IV administration alleviated AβO-induced neuronal injury and reduced tau phosphorylation in the hippocampus, but had no effect on the endogenous Aβ1-42 level in AβO-infused mice.
AS-IV Suppresses AβO-Induced Synaptic Deficit in the Hippocampus. The effects of AS-IV on synaptic protein expression were investigated by determining the expression of PSD95, SYN, GAP43, and ARC. Immunofluorescence assays showed that the synaptic proteins PSD95, SYN, and GAP43 were all significantly reduced in hippocampal regions after AβO infusion compared with sham mice. In contrast, AS-IV administration increased the immunoreactivity of PSD95, SYN, and GAP43 compared with AβO-infused mice (Figures 6(a) and 6(b)).
Immunoblotting assays also showed a significant decrease in the expression of PSD95, SYN, and GAP43 in response to AβO infusion, while AS-IV administration significantly ameliorated the AβO-induced downregulation of these synaptic proteins in the hippocampus (Figures 6(c) and 6(d)). By contrast, there was no difference among these groups of mice regarding ARC expression (Figures 6(c) and 6(d)).
We next detected the density of dendritic spines in hippocampal neurons among experimental groups by Golgi-Cox staining assay. Results showed that the density of dendritic spines in hippocampal neurons of AβO-infused mice was significantly lower than that in sham mice, but these AβO infusion-induced changes in dendritic spine densities were significantly ameliorated by AS-IV (20 mg/kg) administration (Figures 6(e) and 6(f)).
We further used transmission electron microscopy to examine the synaptic ultrastructure of hippocampal neurons. Our data showed that AβO infusion resulted in a significant decrease in the number of hippocampal synapses compared with sham mice, whereas AS-IV (20 mg/kg) administration significantly ameliorated this synaptic loss (Figures 6(g) and 6(h)). Overall, the results indicate that AS-IV affords protection against AβO-induced synaptic deficits.

Hippocampal PPARγ expression was reduced after AβO infusion (Figure 7(a)). By contrast, AS-IV attenuated the decrease of PPARγ in AβO-infused mice. A specific PPARγ antagonist, GW9662, was used to suppress PPARγ activation in AβO-infused mice. Interestingly, the effect of AS-IV was blocked by GW9662 in the hippocampus of AβO-infused mice (Figure 7(b)).
AS-IV Inhibits AβO-Induced BDNF Reduction via Promoting PPARγ Expression in Mouse Hippocampi. To further explore the underlying neuroprotective mechanism of AS-IV in AβO-infused mice, the levels of PPARγ and BDNF in the hippocampus were detected by immunohistochemistry. Compared with the sham group, PPARγ and BDNF immunoreactivity was decreased in the hippocampus of AβO-infused mice, whereas hippocampal immunoreactivity of PPARγ and BDNF was higher in AS-IV-treated mice than in AβO-infused mice (Figures 8(a)-8(f)). Additionally, the effect of AS-IV on the expression of BDNF and PPARγ was blocked by GW9662 in the hippocampus of AβO-infused mice (Figures 8(a)-8(f)).
AS-IV Inhibits AβO-Induced Neuroinflammation via Promoting PPARγ Expression. Our data showed significant differences among the experimental groups in the number of astroglia in the DG region of the hippocampus, as detected by immunofluorescence (Figure 9(a)). Infusion of AβO induced a remarkable activation of astroglial responses in the hippocampus, which was prevented by AS-IV (20 mg/kg) administration. Consistently, infusion of AβO also increased GFAP expression as determined by immunoblotting assay, while AS-IV (20 mg/kg) administration significantly suppressed GFAP expression in AβO-infused mice (Figure 9(b)). Furthermore, we asked whether PPARγ mediated the beneficial anti-inflammatory effect of AS-IV in AβO-infused mice. Interestingly, PPARγ inhibition by GW9662 blocked the inhibitory effects of AS-IV on GFAP immunoreactivity and expression in the hippocampus of AβO-infused mice (Figures 9(a) and 9(b)).
We measured the hippocampal IL-1β, IL-6, and TNF-α levels in AβO-infused mice by ELISA. AβO infusion led to an upregulation of IL-1β, IL-6, and TNF-α levels in the hippocampus compared with sham mice, but AS-IV administration suppressed the upregulation of these cytokines following AβO infusion. In line with the above findings, this effect of AS-IV was blocked by GW9662 (Figure 9(c)). These results suggest that AS-IV prevented the inflammatory response in the hippocampus via PPARγ.

AS-IV Inhibits AβO-Induced Pyroptotic Cell Death via Promoting PPARγ Expression. As shown in Figures 10(a)-10(c), the protein expression of NLRP3 and cleaved caspase-1 was significantly elevated in the hippocampus of AβO-infused mice compared with sham mice. In contrast, AS-IV (20 mg/kg) administration suppressed the AβO-induced expression of NLRP3, as well as cleaved caspase-1, in the hippocampus of AβO-infused mice.
As shown in Figures 10(a)-10(d), AβO infusion significantly increased the level of IL-1β in the hippocampus, which was inhibited by AS-IV administration. To further confirm the role of PPARγ in the AS-IV-mediated suppression of AβO-induced pyroptosis, the specific PPARγ antagonist GW9662 was used to suppress PPARγ activation in AβO-infused mice. Interestingly, the effects of AS-IV against the AβO-induced expression of NLRP3 and cleaved caspase-1 were blocked by GW9662. Moreover, the blockade of PPARγ significantly reversed the effect of AS-IV on AβO-induced overexpression of the proinflammatory cytokine IL-1β (Figures 10(a)-10(d)).
Discussion
In this study, we applied systems pharmacology strategies and in vivo experiments to probe the mechanism of AS-IV in the treatment of AD. AS-IV could interact with 64 targets, and those targets had multiple pharmacological properties relevant to the nervous system, inflammation, cell proliferation, apoptosis, pyroptosis, calcium dysregulation, and steroids. Molecular docking suggested that AS-IV could regulate AD-like phenotypes by binding to caspase-1, GSK3β, PSEN1, and TRPV1. Furthermore, in vivo experiments showed that AS-IV promoted the expression of PPARγ and BDNF in hippocampal neurons of mice infused with AβO and prevented synaptic deficits, inflammation, and memory impairments in AD-like mice. Consistent with the bioinformatics data, the in vivo data also verified that AS-IV could suppress AβO infusion-induced neuronal pyroptosis. This systematic analysis provides new insights into the therapeutic use of AS-IV for AD.
AS-IV Prevents AD Phenotypes through Multiple Mechanisms. In the present study, we screened 64 related targets of AS-IV, and these targets together play important roles in the pathogenesis of AD, possibly through regulating cell proliferation, calcium dysregulation, inflammation, pyroptosis, and apoptosis [20,[31][32][33]. Specifically, the G-protein-coupled acetylcholine receptor signaling pathway and the protein kinase B/GSK3β axis are involved in AD pathogenesis, resulting in cognitive dysfunction [34][35][36]. Besides, decreasing the response to hypoxia and the dysregulation of vasoconstriction could effectively ameliorate vascular dementia [37,38]. Furthermore, neuroinflammation caused by the generation of caspase-1-mediated IL-1β and IL-18 is involved in the development and progression of AD [32]. GSK3β plays an important role in the hyperphosphorylation of tau, one of the pathological features of AD [35]. PSEN1 mutation is a risk factor for AD [39]. Additionally, TRPV1, a nonselective cation channel, is involved in synaptic plasticity and memory [40]. Our molecular docking results demonstrate that AS-IV could bind to caspase-1, GSK3β, PSEN1, and TRPV1. The binding affinity of AS-IV arises mainly from electrostatic, H-bond, and hydrophobic interactions, supporting the reliability of the docking model. Therefore, AS-IV may improve cognitive impairment by binding to AD-related proteins such as caspase-1, TRPV1, PSEN1, and GSK3β, reducing cell death, and ultimately inhibiting AD phenotypes.
AS-IV Reduces Tau Hyperphosphorylation in AD Model.
AβO accumulates in the brains of AD patients and induces AD-like cognitive dysfunction [41]. Therefore, AβO-induced AD-like phenotypes may be a promising model for finding treatments [41,42]. In this study, we investigated the impact of AβO in the brains of mice, confirmed the effect of AS-IV on memory formation in mice infused with AβO, and assessed the underlying mechanisms. Our results demonstrated that intrahippocampal infusion of AβO impaired both contextual and cued fear memory, which is consistent with a previous study [43]. Conversely, AS-IV prevented AβO-induced contextual and cued fear memory impairment. Considering that the hippocampus is an important brain region involved in the formation and expression of fear memory, our findings suggest that AβO infusion damaged the structure and function of the hippocampus and subsequently blocked learning and memory, which could be prevented by AS-IV administration.
Similar to previous studies, our findings showed that AβO infusion induced neuronal loss as well as increased tau phosphorylation, suggesting that the pathological changes of the hippocampus induced by AβO infusion may be the basis of AD-like behavioral changes [3,44]. In contrast, AS-IV inhibited the AβO-induced pathological changes of hippocampal neurons and tau phosphorylation, which may contribute to memory improvement in AD-like mice. It is speculated that Aβ pathology in the AD brain appears earlier than tau pathology, and that neurofibrillary tangles develop downstream of Aβ-induced toxicity and eventually lead to neuronal death. Moreover, the mutual promotion between them accelerates the pathogenesis of AD, consistent with previous reports [44][45][46]. We also note that AβO infusion had no effect on endogenous Aβ1-42 content in the hippocampus, suggesting that it may not cause the accumulation of Aβ or the formation of amyloid plaques in the brain. Bioinformatics prediction indicated that AS-IV binds GSK3β tightly. As GSK3β is largely responsible for the hyperphosphorylation of tau, the tight interaction of AS-IV with GSK3β might contribute to the effect of AS-IV on reducing tau hyperphosphorylation.
AS-IV Prevents AβO-Induced Synaptic Deficit. Consistent with previous studies [47,48], our findings demonstrated that AβO exerted neurotoxicity and synaptic toxicity before plaque formation in the brain, causing brain damage and eventually leading to AD-like behaviors. Given the mounting evidence that AβO causes synaptic deficits [3,49,50], elucidating the precise molecular pathways has important implications for treating and preventing the disease. Here, we demonstrate that AβO infusion reduced the immunoreactivity and expression levels of PSD95, GAP43, and SYN, which is similar to previous results [51,52]. It has been shown that SYN immunoreactivity density in the brains of transgenic mice is negatively correlated with Aβ levels but unrelated to plaque load, indicating that Aβ exerts synaptic toxicity before plaques are formed [6]. We further found that AS-IV increased the immunoreactivity and expression levels of PSD95, GAP43, and SYN in the hippocampus of AD-like mice. PSD95, GAP43, and SYN are important markers of synaptic plasticity and are positively correlated with hippocampal learning and memory function [14,53]. Furthermore, ARC plays a key role in synaptic plasticity and memory consolidation [54,55]. Surprisingly, there was no significant difference in ARC expression among the experimental mice, suggesting that AβO infusion did not target ARC. Our Golgi-Cox and TEM results further showed that AS-IV increased the density of dendritic spines and the number of synapses in hippocampal neurons, suggesting that AS-IV improved synaptic structural damage and alleviated synaptic toxicity in the hippocampus of mice infused with AβO.
In a previous study, we reported that AS-IV promoted PPARγ expression in cultured cells and activated the BDNF-TrkB signaling pathway [20]. Our in vivo findings further showed that PPARγ expression in the hippocampus of mice infused with AβO was significantly decreased along with a reduction of BDNF expression, while AS-IV significantly prevented the AβO-induced inhibition of PPARγ and BDNF expression. Considering the important role of the BDNF-TrkB signaling pathway in synaptic function [29], these data further support that AS-IV prevented AβO-induced synaptic deficits.
AS-IV Prevents AβO-Induced Neuroinflammation and Pyroptosis. Numerous studies have confirmed that neuroinflammation accelerates the pathogenesis of AD [46,56,57]. In this study, we found that AβO infusion increased the immunoreactivity and expression of GFAP and upregulated IL-1β, IL-6, and TNF-α levels in the hippocampus, and these changes were reversed by AS-IV. These results suggest that AS-IV inhibited AβO-induced neuroinflammation in the brain, which was beneficial for improving cognitive function and further supports the network screening results. PPARγ plays a neuroprotective role by reducing brain inflammation and Aβ production [58,59]. Our findings showed that AβO infusion inhibited PPARγ expression in mice, suggesting that PPARγ participates in the inflammatory response of AD-like mice. Furthermore, AS-IV blocked the AβO-induced inhibition of PPARγ expression. Pyroptosis is an inflammatory form of programmed cell death that has been reported in neurological pathogenesis [60]. Reducing pyroptosis was shown to alleviate cognitive impairment in AD animal models [61] and the progression of Parkinson's disease [62]. Interestingly, NLRP3 has been reported to initiate neuronal pyroptosis [63,64]. Indeed, NLRP3 inhibition has been shown to exert neuroprotective effects through the suppression of pyroptosis [65] and to improve neurological functions in a transgenic mouse model of AD [63]. In this study, we demonstrated that AS-IV could inhibit AβO-induced pyroptotic neuronal death, whereas the PPARγ antagonist GW9662 blocked this beneficial effect of AS-IV. In the systematic analyses, we also found that AS-IV had a high binding capacity for caspase-1, which might indicate a potential function of AS-IV in pyroptosis.
AS-IV Reduces Tau Hyperphosphorylation, Synaptic Deficit, Neuroinflammation, and Pyroptosis via Regulating PPARγ. In this study, we found that AβO administration progressively reduced PPARγ expression in the hippocampus from 2 h to one day and kept the PPARγ level relatively low from one day to 28 days. These data suggest that the reduction of PPARγ is an early event after AβO administration. AS-IV could prevent the AβO-induced reduction of PPARγ. The effects of AS-IV on brain inflammation, pyroptosis, and synaptic deficit in AβO-induced AD phenotypes might therefore be PPARγ-dependent. On the one hand, the PPARγ antagonist blocked the effects of AS-IV on PPARγ expression, brain inflammation, and pyroptosis, as well as on BDNF expression. On the other hand, PPI indicated that
Cross-Validation of a Multiplex LC-MS/MS Method for Assaying mAbs Plasma Levels in Patients with Cancer: A GPCO-UNICANCER Study
Background: Different liquid chromatography tandem mass spectrometry (LC–MS/MS) methods have been published for quantification of monoclonal antibodies (mAbs) in plasma but thus far none allowed the simultaneous quantification of several mAbs, including immune checkpoint inhibitors. We developed and validated an original multiplex LC–MS/MS method using a ready-to-use kit to simultaneously assay 7 mAbs (i.e., bevacizumab, cetuximab, ipilimumab, nivolumab, pembrolizumab, rituximab and trastuzumab) in plasma. This method was next cross-validated with respective reference methods (ELISA or LC–MS/MS). Methods: The mAbXmise kit was used for mAb extraction and full-length stable-isotope-labeled antibodies as internal standards. The LC–MS/MS method was fully validated following current EMA guidelines. Each cross validation between reference methods and ours included 16–28 plasma samples from cancer patients. Results: The method was linear from 2 to 100 µg/mL for all mAbs. Inter- and intra-assay precision was <14.6% and accuracy was 90.1–111.1%. The mean absolute bias of measured concentrations between multiplex and reference methods was 10.6% (range 3.0–19.9%). Conclusions: We developed and cross-validated a simple, accurate and precise method that allows the assay of up to 7 mAbs. Furthermore, the present method is the first to offer a simultaneous quantification of three immune checkpoint inhibitors likely to be associated in patients.
Introduction
Currently, more than 25 monoclonal antibodies (mAbs) have been approved for treating cancer by the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA). Usually, "classical" monoclonal antibodies (e.g., bevacizumab, cetuximab, rituximab, trastuzumab) are distinguished from immune checkpoint inhibitors (e.g., ipilimumab, nivolumab, pembrolizumab, atezolizumab) because of their mechanism of action. mAbs usually target circulating or membrane antigens involved in tumor proliferation, such as EGFR (epithelial growth factor receptor), VEGF (vascular endothelial growth factor), CD20, or the HER-2 receptor. Over the last decade, the use of mAbs able to modulate the anti-tumor immune response has been spreading. These immunotherapies, such as checkpoint inhibitors, are directed against targets involved in silencing the anti-tumoral immune response, such as PD-1 (programmed cell death receptor), PD-L1 (programmed death-ligand 1), or CTLA4 (cytotoxic T-lymphocyte antigen-4). As such, they do not interfere directly with the proliferation and differentiation of cancer cells, but rather aim at harnessing tumor immunity to trigger some kind of immune-related cell death. Although these mAbs have usually shown clinical benefit with acceptable safety in daily clinical practice in paradigmatic settings such as lung cancer, melanoma, head and neck, or renal cancer, the variability in clinical outcomes remains largely unpredictable. In the context of personalized medicine, the determinants of this clinical variability should be identified to optimize response or to propose other treatment modalities.
The inter-individual variability observed in clinical response could be, at least in part, attributed to the pharmacokinetic variability of mAbs. Indeed, exposure levels or, more rarely, pharmacokinetic parameters such as total clearance have already been associated with pharmacodynamic endpoints (i.e., overall and progression-free survival, efficacy) for bevacizumab [1], rituximab [2,3], and cetuximab [4,5]. For instance, a 34 µg/mL plasma level threshold for efficacy was proposed for cetuximab in head and neck cancer [4] and 15.5 µg/mL for bevacizumab in metastatic colorectal cancer [1]. Given the large inter-individual pharmacokinetic variability reported with most biologics [6,7], predicting whether a patient will be adequately exposed to ensure maximal target engagement can be tricky. Regarding immunotherapies, the data are less convincing thus far. It has been clearly documented that exposure-response relationships exist for anti-CTLA4 ipilimumab [8], as shown in both phase II [9,10] and phase III studies [11]. By contrast, pharmacokinetic/pharmacodynamic (PK/PD) data on nivolumab and pembrolizumab are more contradictory: most studies reported that the PK/PD relationships are flat for both mAbs, whereas a single study evidenced an exposure-efficacy relationship with nivolumab [12][13][14]. Some authors argue that PK/PD relationships do exist with immune checkpoint inhibitors but that, given the extremely high dosing approved for those drugs, plasma levels usually largely exceed the threshold concentration required to ensure maximal target engagement [15]. However, to what extent these theoretically large amounts of mAbs are sufficient to ensure proper target engagement despite the PK variability remains unclear. In addition to the possible impact on drug efficacy, several groups have suggested that patients treated with immune checkpoint inhibitors could be overdosed with respect to the efficacy thresholds. This calls for developing therapeutic drug monitoring (TDM) to possibly customize the frequency of administration, i.e., by adapting the scheduling to the decay in plasma levels [16].
Regardless of the context, plasma monitoring of mAbs could be a useful tool for clinical decision making. To achieve TDM with biologics, robust, specific, and validated bioanalytical methods are required. As of today, most of the methods for mAb quantification in plasma, such as those used in phase I studies, are based upon ELISA methods [5,[17][18][19][20][21][22][23], which do not necessarily meet the time- and cost-effectiveness requirements of routine drug monitoring, especially in real-world patients. In addition, their limitations in terms of specificity have led to the development of alternative analytical strategies that should be both time- and cost-effective. Finally, ELISA methods are not easily adaptable for multiplexed assays. Recently, some multiplex assays based on liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) were proposed [23][24][25]. Most of these methods can simultaneously quantify several biologics in a single run, but thus far none allowed the simultaneous quantification of several immune checkpoint inhibitors. The increasing number of mAbs used in monotherapy or in combination with other mAbs supports the need for multiplex mAb assays to meet the time- and cost-effectiveness requirements of routine TDM.
Here, we developed and validated a multiplexed LC-MS/MS method using a ready-to-use kit for the simultaneous plasma quantification of up to 7 mAbs frequently used in oncology (i.e., bevacizumab, cetuximab, nivolumab, ipilimumab, pembrolizumab, rituximab, and trastuzumab), over a large range of concentrations in order to be used for drug monitoring as well as PK/PD studies. Then, this method was cross-validated with published reference methods (i.e., ELISA or LC-MS/MS).

Figure 1. Panels (1), (2), and (3) present chromatographic profiles of double-blank drug-free plasma matrix, the lower limit of quantification (LLOQ), and the upper limit of quantification (ULOQ), respectively. Int: interference. These chromatograms were obtained using a BioZen™ 2.6 µm Peptide XB-C18 LC column, and the LC-MS parameters were those described in the Materials and Methods section. Two proteotypic peptides were selected for trastuzumab because both are unique and give intense signals in LC-MS. For the other mAbs, the list of proteotypic peptides was limited because of the sequence homology of mAbs with endogenous IgG (other peptides are not specific); in this context, a single peptide was selected for the quantification of all mAbs except trastuzumab. Panels (4), (5), (6), (7), (8), and (9) display chromatograms obtained from plasma samples of patients treated with bevacizumab (Beva) (4), cetuximab (Cetux) (5), pembrolizumab (Pem) (6), rituximab (Rit) (7), trastuzumab (Trastu1 & Trastu2) (8), and combination therapy nivolumab (Nivo) + ipilimumab (Ipi) (9). The retention times of mAbs shown in these panels differ from those observed in panels (1), (2), and (3) because different chromatographic systems and columns were used in each laboratory using the kit. Panels (1), (2), and (3) were analyzed on a SCIEX 6500 QTRAP by Promise Proteomics. Panels (4) and (9) were analyzed on a SCIEX 6500 QTRAP by Promise Proteomics with distinct LC parameters. Panels (5), (6), (7), and (8) were analyzed on a Thermo TSQ Altis at Cochin Hospital (Paris, France). Importantly, and as visible in panels (1), (2), and (3), an interfering signal (Int) is present for the nivolumab peptide when analyzing plasma samples with a QQQ mass spectrometer; it elutes very close to the peak of interest and can be separated with an appropriate gradient.

Lower Limit of Quantification and Linearity
No interference was observed at the retention times of the analytes and IS in blank samples extracted from the five tested human samples. Regarding Nivo peptide detection (i.e., peptide ASGI), based on our experience, an interference can be observed in most patient samples, and it was crucial to separate this interference correctly using adequate LC parameters. Regarding the matrix effect (Figure 3), the mean for the 7 mAbs was −12.7% (CV = 35.5%); the lowest value was −54% for Nivo and the highest +33% for Ipi.

Figure 3. Matrix effect for the different mAbs. Black histograms represent the total area measured for the peptide (mean of MRM transitions) in the absence of plasma, while the grey histograms represent the total area measured for the same peptide at the same concentration (mean of MRM transitions) in the presence of plasma. Samples without and with plasma were analyzed in 6 replicates from a pool of human plasma samples. CV% between replicates were consistent and below 9% in the absence of matrix and below 11% in the presence of matrix, except for Nivo, where the CV% was 19% in the presence of matrix.
Within-Run and Between-Run Accuracy and Precision
Dilution Effect
The accuracy and precision of the 5-fold diluted plasma samples (n = 6 for each mAb) ranged from 92.4 to 106.8% and from 1.4 to 9.3%, respectively (Table S2).
Discussion
Over the last decade, the literature on the pharmacokinetics and pharmacodynamics of mAbs used in oncology has significantly expanded [32][33][34]. A large inter-individual variability in pharmacokinetic parameters and subsequent exposure levels is usually reported, regardless of the type of mAb considered. By contrast, inconsistencies were observed when reporting on PK/PD relationships with mAbs. For instance, trough levels or clearance values were repeatedly associated with the efficacy of anti-EGFR cetuximab in both colorectal and head and neck cancers [4,5,35,36]. Similar exposure-effect relationships were found with anti-VEGF bevacizumab [1] or anti-HER2 trastuzumab [19]. With immune checkpoint inhibitors, both efficacy and toxicity endpoints seemed to be associated with plasma concentrations of anti-CTLA4 ipilimumab [9]. Conversely, contradictory findings were published with anti-PD1 nivolumab in lung cancer patients [35,36], and flat relationships were suggested with anti-PD1 pembrolizumab [37]. Overall, the very existence of PK/PD relationships with mAbs remains a controversial issue in clinical oncology. Several explanations can help in understanding these erratic findings. First, the clearance of mAbs can be influenced by target-mediated drug disposition (TMDD), a phenomenon that makes PD endpoints such as tumor burden a relevant covariate for predicting the clearance of mAbs: the higher the antigenic mass, the higher the clearance and the lower the drug plasma levels [38]. Therefore, whether low concentrations of mAbs are the cause or the consequence of an increase in tumor burden is tricky to determine. Of note, TMDD does not apply to anti-CTLA4 or anti-PD1 mAbs, since target engagement is related not to the tumor burden but rather to the immune system. Another possible confounding factor is that low albumin levels (e.g., in the cachexia frequently observed in patients with progressive disease) are likely to increase mAb clearance [35], further blurring the picture when trying to understand whether PK is the cause or the consequence of disease evolution. In addition, to better understand PK/PD relationships and given that most, if not all, immune checkpoint inhibitors are now administered at flat doses yielding plasma levels that largely exceed the threshold required for target engagement [16,39,40], tools for evaluating exposure to mAbs need to be developed. For example, TDM of immune checkpoint inhibitors (ICIs) could be interesting, not necessarily as an attempt to tailor dosing to increase efficacy, but at least to customize the frequency of administrations from a drug-cost-saving perspective [16]. Indeed, TDM-based determination of individual PK parameters could allow simulation of the time to reach the efficacy threshold and determination of when the next dose should be administered for a given patient [41]. A new bioanalytical method for routine TDM application should meet various analytical requirements, such as sensitivity, precision, and accuracy, in addition to ease of use and cost- and time-effectiveness considerations. Furthermore, the increasing use of combination therapy with multiple mAbs calls for multiplex assays. As far as we know, this is the first report of a validated LC-MS/MS method able to simultaneously assay up to 7 therapeutic mAbs, including three checkpoint inhibitors (i.e., ipilimumab, nivolumab, and pembrolizumab). Co-administration of ICIs such as Ipi plus Nivo or Pembro has become a common practice.
To date, only 10 bioanalytical methods [18,20,28,39,40,[42][43][44][45][46], including 5 LC-MS/MS methods [28,39,40,43,44], have been published for these 3 mAbs, but none offers a simultaneous assay like ours. The present method could therefore become the bioanalytical method of choice to explore PK/PD relationships of ICIs, especially in patients treated with combinations.
According to the EMA guidelines for bioanalytical method evaluation [47], the present multiplex LC-MS/MS method meets all the current validation criteria, except for the matrix effect evaluation. The different MRM transitions used as "quantifiers" gave consistent quantification data, regardless of the mAb. In this context, we preferred calculating the mAb concentration by averaging the signals of multiple quantifiers to gain sensitivity and reliability. The validation showed satisfying intra- and inter-day accuracy (90.1-111.1%) and precision (<14.6%) for all mAbs. Regarding Nivo quantification, an interference was observed using the QQQ mass spectrometer. However, this interference would not be expected with an HRMS mass spectrometer because of its greater precision in m/z measurement compared with QQQ (~10 ppm vs. ~0.6-0.7 Da, respectively). During the analytical validation steps, this interference was correctly separated and always eluted a few seconds before the peak of nivolumab (Figure 1, panel 1). In case of insufficient separation, the accuracy at the LLOQ and low IQC for Nivo did not meet the acceptance criteria, thus affecting low plasma concentrations (i.e., <6.0 µg/mL). This interference is probably due to a peptide from a plasma protein, such as physiological IgG, with an m/z and a sequence very close to those of the peptide of interest when assaying Nivo. To be sure that the interference is correctly separated from Nivo, users should therefore analyze a double-blank plasma sample and a blank plasma sample (i.e., matrix spiked with the stable-labeled internal standard, which does not show any interference). The contaminating peak visible in the double-blank should have a different retention time than the peak of the labeled peptide visible in the blank sample.
To the best of our knowledge, the mAbXmise kit is the first solution that includes stable-labeled mAbs as well as the reagents and consumables required for therapeutic mAb quantification. The use of full-length stable-isotope-labeled antibodies is an asset in comparison with most other LC-MS/MS methods, which use a labeled reference peptide [4,[48][49][50]. Indeed, adding a full-length stable-isotope-labeled antibody at the very beginning of the extraction procedure is known to compensate for recovery and matrix effects better than labeled peptides or a universal stable-labeled mAb [48]. In the present study, the matrix effect was significant for all mAbs, especially for nivolumab, for which a previous study already reported a high matrix effect [28]. Given that the matrix effects of the full-length stable-isotope-labeled antibodies were not assessed and therefore not taken into account in the estimation of the matrix effect of the mAbs, these results should be interpreted cautiously. However, one can expect that the use of full-length stable-isotope-labeled antibodies should significantly minimize the matrix effect. Finally, the satisfying cross-validation results for all mAbs (except Ipi) suggest the absence of a significant impact of the matrix effect on the present LC-MS/MS assay.
LC-MS/MS methods decrease the inter-batch and inter-operator analytical variability compared with canonical ELISA methods. Here, the use of a ready-to-use industrial kit is a further asset ensuring better inter-batch and inter-laboratory consistency in the context of TDM or multicentric PK studies. A large range of plasma concentrations (i.e., 2-100 µg/mL) was covered by the assay, including the concentrations reported during clinical PK studies [1,4,9,12,18,24,25]. Furthermore, the evaluation of dilution accuracy showed that concentrations up to 5-fold above the ULOQ were adequately quantified. Our 2 µg/mL LLOQ is generally higher than those previously reported with ELISA methods [5,[17][18][19][20][21][22][23], indicating a poorer sensitivity of the LC-MS/MS technique. However, with respect to the plasma concentrations usually observed upon mAb administration, this 2 µg/mL LLOQ was considered sensitive enough in a routine setting because the assay range is consistent with the concentrations expected in daily clinical practice. In the present study, many samples had concentrations above the ULOQ, especially for Beva, Cetux, Ritux, and Trastu. For those mAbs, blood samples were collected from patients enrolled in clinical trials whose methodology included measuring peak plasma levels, which explains why many samples were above the ULOQ. This could be fixed by a systematic dilution of all samples withdrawn at the end of the infusion. According to a recent review of the literature [6], target trough concentrations >15.5, 33.8, 25, and 20 µg/mL are proposed for Beva (colorectal cancer), Cetux (head and neck cancer), Ritux (lymphoma), and Trastu (breast cancer), respectively. Therefore, the range of plasma concentrations (2-100 µg/mL) covered by our multiplex LC-MS/MS method is fully suitable for drug monitoring in daily clinical practice.
Finally, this multiplex LC-MS/MS method outperforms standard ELISA methods from a time- and cost-saving perspective, thus fully meeting the requirements for implementing routine TDM in oncology.
The LC-MS/MS methods previously published for measuring plasma mAb levels were appropriately validated following the EMA guidelines [47]. However, very few of them were cross-validated against another method [22,24,28,30,[51][52][53]. In the context of TDM, cross-validation is critical for determining whether the obtained data are reliable and whether they can be compared with and used alongside data from the literature. The present LC-MS/MS method was successfully cross-validated for all mAbs (except Ipi), as demonstrated by the consistent results between our multiplex LC-MS/MS method and the reference bioanalytical methods. Indeed, the Cusum test was not statistically significant for any mAb, thus confirming the linear relationships between the methods. Furthermore, the under- or overestimation of the results from this multiplex LC-MS/MS method ranged from −17.1% to 13.2%, which was satisfying. The intercept of the Passing-Bablok regression for Nivo was higher than those for the other mAbs. However, two PK/PD studies reported that the trough plasma concentration of nivolumab ranges from 10 to 25 µg/mL after a single infusion and from 45 to 80 µg/mL at steady state [35,36]. Consequently, this higher intercept should not have any significant consequence on drug monitoring and subsequent decision making. Bland-Altman analysis showed that the mean estimated bias for each mAb was acceptable with respect to the plasma concentrations observed in patients and should have no impact on result interpretation either. Following the EMA guidelines for bioanalytical method validation [47], more than 67% of individual concentration differences must be lower than 20% for each mAb when comparing bioanalytical methods; this condition was verified here.
Altogether, our data suggest that the present multiplex LC-MS/MS method could be used instead of the reference methods for routine TDM purposes. Five French clinical PK laboratories were involved in the cross-validation campaign, reinforcing the robustness of our results. As previously mentioned, we could not compare the performance of our method for Ipi. However, Ipi plasma concentrations were assayed in patients treated at different doses (i.e., 1 or 3 mg/kg) for melanoma or lung cancer, and these concentrations were consistent with those previously reported for these indications [7,49], suggesting good reliability of our method. A cross-validation for Ipi should nevertheless be conducted in the future.
Over the last decade, biosimilars of several mAbs (i.e., Beva, Cetux, Ritux, and Trastu) have become available in most countries. Since 2018, health authorities worldwide have promoted the use of biosimilars in oncology as a means to cut drug costs. However, some practitioners are concerned about the switch from princeps (originator) mAbs to biosimilar mAbs because of a possible loss of efficacy, as the PK of a biosimilar may not perfectly match that of the princeps mAb. The use of TDM before and after such a switch could help ensure that plasma mAb levels remain stable with biosimilars. In this context, a versatile LC-MS/MS method capable of assaying both originator and biosimilar mAbs would be very useful. The present multiplex LC-MS/MS method exhibited similar performance (accuracy and precision) when assaying plasma concentrations of originator Herceptin® and biosimilar Trastu. This result suggests the analytical reliability of our method regardless of the mAb. However, this reliability should be further confirmed with biosimilars of Beva, Cetux, and Ritux to fully confirm that our technique enables assaying both originator and biosimilar mAbs.
Despite the relatively long run time of our assay, its versatility is a major asset when implementing routine TDM for various reasons: samples can be gathered in a single laboratory and results can be released more quickly. Thus, the time lost to a longer run is offset by the large benefits of multiplexing. Overall, this may help spread the monitoring of mAbs in cancer patients as a daily clinical practice. Furthermore, the treatment paradigm of some cancers, such as melanoma and lung cancer, has dramatically changed in recent years with the introduction of immunotherapy. A better understanding of the PK/PD relationships of ICI therapies could contribute to optimizing individual treatment in the era of personalized medicine [7,15].
Sample Preparation with mAbXmise Kit
Samples were prepared according to the manufacturer's instructions. Briefly, samples were prepared as follows (Figure 5): 20 µL of plasma sample (calibration standard, IQC, or patient sample) were loaded into wells of the mAbXmise plate and diluted with 80 µL of the Buffer A solution provided in the kit, then agitated for 1 h at room temperature. The 7 mAbs as well as their full-length isotopically labelled forms (SIL-mAbs) were extracted by immunocapture on the PuriXmise plate. After an elution step, extracted samples were dried in a speed vacuum (Martin Christ, Osterode am Harz, Germany). Samples were re-solubilized and then digested with the CutXmise enzyme overnight at 37 °C. Digestion was stopped with CutXStop, and 20 µL of digested sample was injected into the LC-MS/MS system.

Figure 5. Summary of the mAbXmise process: collected plasma samples, as well as the calibrators and QC samples provided in the kit, are loaded on the mAbXmise plate. Full-length isotopically labelled mAbs, coated on the plate, are resuspended in the plasma samples and serve as internal quantification standards. Total IgG is purified, recovered, and then digested. At the end of the process, the collected samples are ready to be injected.
Selection of Peptides for Quantification
The final list of selected proteotypic peptides and their corresponding MRM transitions is given in Table 3. For all listed peptides, the MRM transitions were used as "quantifiers", as all gave consistent quantification data. In the presented data, the mean of multiple MRM transitions was used to calculate the final mAb concentration.
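As a sketch of how such a quantifier-averaging scheme could be applied in practice, the snippet below averages hypothetical MRM transition areas for an analyte peptide and its SIL internal standard, then reads a concentration off an assumed linear calibration; the peak areas and calibration parameters are illustrative, not values from the kit.

```python
# Hypothetical per-mAb quantification from averaged MRM "quantifier" transitions.
import numpy as np

analyte_transitions = np.array([152000.0, 148500.0, 150200.0])   # quantifier peak areas
sil_transitions = np.array([98000.0, 96500.0, 97800.0])          # matching SIL-IS areas

# response = mean analyte signal normalized to the co-eluting SIL internal standard
response = analyte_transitions.mean() / sil_transitions.mean()

# assumed linear calibration: response = slope * concentration + intercept
slope, intercept = 0.031, 0.002
concentration = (response - intercept) / slope                    # ug/mL
print(f"Response ratio = {response:.3f} -> concentration ~ {concentration:.1f} ug/mL")
```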
Selectivity, Carry-Over and Matrix Effect
Selectivity was evaluated by analyzing plasma samples (n = 6) from treatment-naïve cancer patients, and carry-over by analyzing the signal intensities of the peptides (from mAbs and IS) in a blank sample (mobile phase) injected just after the CAL5 sample (100 µg/mL of each mAb). To determine the matrix effect, a mix of the 7 pure mAbs was digested with CutXmise. This mix was then divided into two fractions (2 × 30 µL). One fraction was supplemented with 30 µL of mobile phase A, while the second fraction was supplemented with 30 µL of digested blank plasma from a pool of human plasma samples (n = 6). The peak areas of each peptide were determined in both conditions, and the matrix effect for each mAb was determined by dividing the peak area in the presence of plasma by the peak area in the absence of plasma.
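Following the formula given above (peak area with plasma divided by peak area without plasma), a minimal matrix-effect calculation could be sketched as follows; the six replicate areas are hypothetical.

```python
# Illustrative matrix-effect calculation, expressed as percent deviation from 100%.
import numpy as np

area_with_plasma = np.array([81000.0, 79500.0, 83200.0, 80400.0, 82100.0, 78900.0])
area_without_plasma = np.array([94000.0, 95500.0, 93200.0, 96100.0, 94800.0, 95000.0])

ratio = area_with_plasma.mean() / area_without_plasma.mean()
matrix_effect_pct = (ratio - 1.0) * 100.0        # negative values indicate ion suppression

cv_with = area_with_plasma.std(ddof=1) / area_with_plasma.mean() * 100.0
print(f"Matrix effect: {matrix_effect_pct:.1f}%  (CV with plasma: {cv_with:.1f}%)")
```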
Method Validation
The method was fully validated according to the EMA guidelines for bioanalytical method validation [47] for linearity, accuracy, carry-over, dilution integrity, matrix effect, and selectivity. For the linearity assessment, double-blank, zero, and CAL samples (between 2 µg/mL and 100 µg/mL) were prepared and analyzed on 6 different days. Samples were prepared by spiking different known concentrations of the pure mAb solution into drug-free plasma samples as described above. The response for each mAb was evaluated with respect to the theoretical concentration of each calibration standard. Linear regression with 1/x weighting was applied to fit the calibration curves (peak area ratio vs. concentration). The five calibration levels in each run should be within ±15% of the nominal value, except the LLOQ, which must be within ±20% of the nominal value. The regression coefficient was calculated for each analytical run and should be over 0.99. These tests were replicated six times as independent experiments. Inter-run accuracy and precision were determined in four separate validation runs by injecting IQC samples (n = 4) at low (6 µg/mL), medium (15 µg/mL), and high (75 µg/mL) concentrations, together with LLOQ samples (2 µg/mL). For intra-run tests, six replicates of IQC and LLOQ samples were injected on the same day. Intra-run and inter-run accuracies were expressed as the relative bias, and intra-run and inter-run precisions were calculated as the coefficient of variation (CV). At each IQC concentration level, the bias should be within ±15% and the precision <15%; for the LLOQ, both bias and precision should be within ±20%. Instrument carry-over was tested by injecting three blank samples after an ULOQ sample and was calculated as the ratio of the peak area in the blanks to the peak area of the LLOQ; carry-over was considered acceptable if the analyte signal was <20% of the LLOQ in each blank. Dilution integrity was demonstrated by diluting a plasma sample (at a concentration 2.5 times higher than the ULOQ of each mAb) 5-fold with drug-free plasma or 1X PBS. Six aliquots of each dilution were processed. Both accuracy and precision should be within ±15% of the nominal value.
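A minimal sketch of the 1/x-weighted calibration fit and the bias/CV acceptance checks described above is given below; the calibrator responses and IQC replicates are hypothetical and are only meant to illustrate the calculations, not reproduce the validation data.

```python
# 1/x-weighted linear calibration plus accuracy (bias) and precision (CV) checks.
import numpy as np

nominal = np.array([2.0, 10.0, 25.0, 50.0, 100.0])    # calibrator concentrations, ug/mL
response = np.array([0.065, 0.31, 0.78, 1.55, 3.12])  # hypothetical area ratios

# weighted least squares with weights w = 1/x (polyfit expects sqrt of the weights)
w = 1.0 / nominal
slope, intercept = np.polyfit(nominal, response, deg=1, w=np.sqrt(w))

# back-calculate a medium IQC level (nominal 15 ug/mL) measured in replicate
iqc_resp = np.array([0.47, 0.49, 0.46, 0.48])
back_calc = (iqc_resp - intercept) / slope
bias_pct = (back_calc.mean() - 15.0) / 15.0 * 100.0    # acceptance: within +/-15%
cv_pct = back_calc.std(ddof=1) / back_calc.mean() * 100.0  # acceptance: < 15%
print(f"bias = {bias_pct:.1f}%, CV = {cv_pct:.1f}%")
```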
Cross Validation
All patients were treated with mAbs for solid cancers, except for Ritux; patients receiving Ritux were treated for vasculitis. All whole-blood samples were collected as part of clinical trials, and all patients provided written informed consent for blood sampling. Whole blood was collected in lithium-heparin tubes just prior to the next infusion (trough concentration) or at peak (end of the infusion). After centrifugation at 3000 rpm for 15 min, plasma was aliquoted into polypropylene tubes and stored at −20 °C until analysis.
A cross-validation between the multiplex LC-MS/MS method and a published reference method was conducted for all mAbs except Ipi. The multiplex LC-MS/MS method was applied in French laboratories of pharmacology: Cochin Hospital (Paris, France) for Ipi, Beva, Ritux, and Trastu; La Timone University Hospital (Marseille, France) for Nivo, Pembro, and Cetux. The reference techniques [26][27][28][29][30][31] for Beva, Cetux, Nivo, Pembro, Ritux, and Trastu were performed in other French laboratories of pharmacology in Grenoble, Lyon, and Tours according to previously published protocol recommendations. All participating laboratories are GPCO-Unicancer members.
Statistical Analysis
All statistical analyses were performed using the MedCalc statistical package version 19.2.6 (MedCalc Software, Mariakerke, Belgium). Passing-Bablok regression was used to estimate the relationship between the multiplex LC-MS/MS method and the reference method [50]. The regression equation was expressed with the 95% confidence interval (95% CI) for the estimates of slope and intercept. The Bland-Altman plot was used to evaluate method agreement [54]. The numerical results were reported as both the mean bias and the limits of agreement, with their respective 95% confidence intervals (95% LoA).
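For illustration, a minimal Bland-Altman computation (mean bias, 95% limits of agreement, and the EMA-style check that most paired differences fall within 20%, as discussed above) could be written as follows; the paired concentrations are hypothetical.

```python
# Bland-Altman agreement sketch between the multiplex method and a reference method.
import numpy as np

multiplex = np.array([18.2, 34.5, 52.1, 40.8, 25.6, 61.0])    # ug/mL (hypothetical)
reference = np.array([19.0, 33.1, 55.4, 39.2, 27.0, 58.3])    # ug/mL (hypothetical)

diff = multiplex - reference
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

# fraction of pairs whose relative difference is below 20% of the pair mean
within_20 = np.mean(np.abs(diff) / ((multiplex + reference) / 2) < 0.20) * 100
print(f"bias = {bias:.2f} ug/mL, LoA = [{loa_low:.2f}, {loa_high:.2f}], "
      f"{within_20:.0f}% of pairs within 20%")
```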
Ethics Committee Approval
The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the following ethics committees: the Ministry of Research and Innovation (number DC2016-2739) for bevacizumab, Sud-Méditerranée III (number 2012.03.02) for cetuximab, Sud-Est IV (number DC-2008-72) for ipilimumab, CLEC (number 2442) for nivolumab, and CPP Île de France (MAINRITSAN2 study, ClinicalTrials.gov NCT02119559) for rituximab. The collection of blood samples during a regular medical visit was approved by the local review board of Oncology (Assistance Publique des Hôpitaux de Paris) for patients treated with pembrolizumab or trastuzumab.
Conclusions
We described here a fully validated multiplexed LC-MS/MS method for the quantification of seven mAbs. To the best of our knowledge, this work is the first to present a method that has been cross-validated across several laboratories. Moreover, it is the first approach to allow the simultaneous determination of immune checkpoint inhibitors (ipilimumab, nivolumab, pembrolizumab).
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the following ethics committees: the Ministry of Research and Innovation (number DC2016-2739) for bevacizumab, Sud-Méditerranée III (number 2012.03.02) for cetuximab, Sud-Est IV (number DC-2008-72) for ipilimumab, CLEC (number 2442) for nivolumab, and CPP Île de France (MAINRITSAN2 study, ClinicalTrials.gov NCT02119559) for rituximab. The collection of blood samples during a regular medical visit was approved by the local review board of Oncology (Assistance Publique des Hôpitaux de Paris) for patients treated with pembrolizumab or trastuzumab.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Video Smoke Detection Method Based on Change-Cumulative Image and Fusion Deep Network
Smoke detection technology based on computer vision is a popular research direction in fire detection. This technology is widely used in outdoor fire detection fields (e.g., forest fire detection). Smoke detection is often based on features such as color, shape, texture, and motion to distinguish between smoke and non-smoke objects. However, the salience and robustness of these features are insufficiently strong, resulting in low smoke detection performance in complex environments. Deep learning technology has improved smoke detection performance to a certain degree, but extracting smoke detail features is difficult when the number of network layers is small. With no effective use of smoke motion characteristics, indicators such as the false alarm rate are high in video smoke detection. To enhance the detection performance of smoke objects in videos, this paper proposes the concept of a change-cumulative image, obtained by converting the YUV color space of multi-frame video images into a change-cumulative image, which can represent the motion and color-change characteristics of smoke. Then, a fusion deep network is designed, which increases the depth of the VGG16 network by arranging two convolutional layers after each of its convolutional layers. The VGG16 and Resnet50 (deep residual network) network models are also combined in the fusion deep network to improve feature expression ability while increasing the depth of the whole network. Doing so can help extract additional discriminating characteristics of smoke. Experimental results show that using the change-cumulative image as the input image of the deep network model yields smoke detection performance superior to that of the classic RGB input image; the smoke detection performance of the fusion deep network model is better than that of the individual VGG16 and Resnet50 network models; and the smoke detection accuracy, false positive rate, and false alarm rate of this method are better than those of the current popular methods of video smoke detection.
Introduction
Of all disasters, fire is one of the most frequent and widespread threats to public safety and social development. Fire not only destroys property and disrupts social order, but also directly or indirectly endangers lives. Timely fire detection helps decrease disaster risks; therefore, studying fire detection techniques is necessary. When a fire occurs, its main visible manifestations are smoke and flame, and smoke generally appears before flame. Thus, understanding smoke detection techniques is conducive to early fire detection and fire hazard reduction.
At present, smoke sensors are popular in indoor environments. Smoke sensors prevent fire by monitoring the smoke concentration. However, smoke sensors require smoke to enter the sensors and then detect it when the concentration reaches a certain level. Using such sensors in outdoor open spaces is difficult.
In outdoor environments, computer vision-based smoke detection technology is often used because it is not limited by space; it also offers a large coverage area at low cost, making it one of the main directions of outdoor smoke detection research [1]. By considering the motion variation and color change of smoke, and building on advances in deep learning, this paper proposes a video smoke detection method based on a change-cumulative image and a fusion deep network. The main contributions are as follows. First, the concept of the change-cumulative image is put forward as a time-domain transformation of the YUV image: by calculating the cumulative inter-frame variation image of the Y channel and the cumulative mean-filtered image of the U and V channels, the change-cumulative image can represent the motion and color-change characteristics of smoke.
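A rough sketch of how such a change-cumulative image could be computed is shown below, assuming OpenCV and NumPy; the per-channel accumulation, the 5×5 mean filter, the window length, and the final normalization are our assumptions for illustration, not the exact settings used in the paper.

```python
# Illustrative change-cumulative image: accumulate Y-channel inter-frame changes
# and mean-filtered U/V channels over a short window of consecutive frames.
import cv2
import numpy as np

def change_cumulative_image(frames_bgr):
    """frames_bgr: list of consecutive BGR frames of identical size."""
    yuv = [cv2.cvtColor(f, cv2.COLOR_BGR2YUV).astype(np.float32) for f in frames_bgr]
    acc_y = np.zeros(yuv[0].shape[:2], np.float32)
    acc_u = np.zeros_like(acc_y)
    acc_v = np.zeros_like(acc_y)
    for prev, curr in zip(yuv[:-1], yuv[1:]):
        acc_y += np.abs(curr[..., 0] - prev[..., 0])    # accumulated Y-channel change (motion)
        acc_u += cv2.blur(curr[..., 1], (5, 5))         # accumulated mean-filtered U (color)
        acc_v += cv2.blur(curr[..., 2], (5, 5))         # accumulated mean-filtered V (color)
    n = len(yuv) - 1
    out = np.stack([acc_y, acc_u / n, acc_v / n], axis=-1)
    # scale to an 8-bit image so it can be fed to a CNN like an ordinary picture
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```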
Second, a fusion deep network is designed. On the basis of the classic VGG16 network, the network depth is increased by cascading multiple convolutional layers. By cascading the VGG16 and Resnet50 networks, the number of network layers is increased and the expressive ability of the model is effectively improved, thus extracting additional discriminative characteristics of smoke.
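To make the fusion idea concrete, the Keras sketch below runs VGG16 and ResNet50 feature extractors on a change-cumulative input and fuses their pooled features for binary smoke classification. Note that the paper deepens VGG16 with extra convolutional layers and cascades the two backbones, whereas this sketch simply fuses their pooled features in parallel, so it is an illustrative approximation rather than the authors' exact architecture.

```python
# Simplified parallel feature-fusion of VGG16 and ResNet50 for smoke vs. non-smoke.
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import VGG16, ResNet50

inp = Input(shape=(224, 224, 3))                       # change-cumulative image as input
vgg_features = VGG16(include_top=False, weights=None)(inp)
res_features = ResNet50(include_top=False, weights=None)(inp)

fused = layers.Concatenate()([
    layers.GlobalAveragePooling2D()(vgg_features),
    layers.GlobalAveragePooling2D()(res_features),
])
out = layers.Dense(1, activation="sigmoid")(fused)     # smoke probability

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```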
Related Work
From a visual perspective, previous studies often used traditional features, such as color, texture, and shape, to detect smoke. Chen et al. [2] pointed out that the gray values of smoke color in the RGB model are close across the three channels, mainly distributed within the range of 80-220. Krstinić et al. [3] proposed the HS'I model, which reflects the characteristics of smoke better than several other color models, including RGB, YCbCr, CIELab, and HSI. In addition, texture features are often used to distinguish smoke from non-smoke objects. Gubbi et al. [4] suggested a video smoke detection method based on wavelet transformation and support vector machine (SVM), which extracts a total of 60 characteristics, such as arithmetic mean, geometric mean, deviation, gradient, peak, and entropy, for describing smoke on all sub-band images of a three-level wavelet decomposition. Russo et al. [5] used the local binary pattern (LBP) and SVM to detect smoke in images; LBP values and histograms are calculated from the pixels of motion regions to form a feature vector describing the texture of smoke. Yuan et al. [6] introduced a smoke detection algorithm based on the multi-scale features of the LBP and LBPV pyramids. Yuan et al. [7] eliminated the shape dependency introduced by AdaBoost learning by using rules to divide the detection window, thus presenting a robust video smoke feature. However, the unfixed shape and unremarkable color and texture of smoke often lead to extremely high false alarm rates when smoke is detected using these traditional features. Recently, deep learning technology has developed rapidly. A deep network model can adaptively extract features with strong discriminating ability, which helps improve smoke detection performance [8]. Xu et al. [9] proposed a video smoke detection method based on a deep saliency network, which highlights the most important object regions in an image using visual saliency detection; to extract an informative smoke saliency image, they combined pixel- and object-level salient convolutional neural networks. Frizzi et al. [10] used CNNs (convolutional neural networks) to automatically extract discriminative smoke characteristics, which generalize better than hand-selected features such as LBP and wavelets. Zhang et al. [11] used Faster R-CNN to detect smoke and produced synthetic smoke images by inserting real or simulated smoke into forest backgrounds to address the lack of training data. Yin et al. [12] proposed a deep normalized CNN for image smoke detection (DNCNN). This network replaces the convolutional layers of a traditional CNN with batch-normalized convolutional layers, which effectively alleviates gradient dispersion and overfitting during training, speeds up the training process, and improves detection. In addition, the training data are augmented to address the imbalance between positive and negative samples and the shortage of training samples. Although deep learning methods greatly improve the accuracy of smoke detection, false alarms remain for objects such as cloud and fog that look particularly similar to smoke.
For video smoke detection, the motion characteristics of smoke are a main basis of detection. Toreyin et al. [13] studied the fuzzy and fluctuating characteristics of smoke over time by using wavelet transformation. Yuan et al. [14] proposed an improved fast Horn-Schunck optical flow algorithm to obtain the optical flow field, from which suspected smoke motion areas are detected; smoke and other interference sources are then distinguished by the mean direction and average velocity of the optical flow vectors. Kopilovic et al. [15] extracted the distributed entropy of the direction of the optical flow and identified the irregular characteristics of smoke movement to detect smoke. Tung et al. [16] presented a four-stage video smoke detection algorithm: in Stage 1, the motion area is extracted using an approximate median method; in Stage 2, the motion area is clustered to obtain candidate smoke regions using the fuzzy c-means method; in Stage 3, the space-time characteristics of the candidate area are extracted; and in Stage 4, an SVM makes the final judgment. In general, the use of smoke motion characteristics can reduce the false alarm rate to a certain extent. However, existing methods mostly combine traditional smoke features with motion characteristics, and smoke detection performance is still not high enough.
Our Method
To enhance the detection accuracy of smoke objects in a video, this paper proposes a video smoke detection method based on a change-cumulative image and a fusion deep network. First, we convert the YUV color space of multi-frame video images into a change-cumulative image, which can represent the motion and color-change characteristics of smoke. Second, we design a fusion deep network. On the one hand, the network cascades two convolutional layers after each convolutional layer of the VGG16 network to enhance feature extraction by increasing the network depth. On the other hand, the network cascades the VGG16 network with the ResNet50 network to improve the expression ability of the model while deepening the network.
Change-Cumulative Image
Analyzing the characteristics of smoke shows that its brightness varies greatly with its concentration and composition: smoke may be bright or dark. The color of smoke changes only slightly, and in general its color saturation is very small. Regarding its occurrence and development, smoke diffuses significantly upward and outward: it starts at a fire point and continues to spread up and around. On the basis of these characteristics, this paper proposes the concept of the change-cumulative image. Its basic idea is as follows: in the YUV color space, change detection is performed between adjacent frame images in the Y space, and the variation is accumulated to obtain a change-cumulative image that reflects the motion diffusion characteristics of smoke; the cumulative images of the U and V spaces are mean filtered to obtain cumulative images that reflect the color change of smoke. The reasons for choosing the YUV color space to build the change-cumulative image are as follows: (1) this method is mainly used to detect smoke objects in videos, and the color space output by camera systems is usually YUV, which avoids a color space transformation and reduces time consumption; (2) since smoke may be bright or dark, the brightness needs to be separated when selecting the color space, and the Y component of the YUV color space is the luminance component, which is easy to separate; (3) the accumulation of color components is mainly used to reduce the influence of clutter, noise, and foreground moving objects, which also accumulate absolute differences in the Y space; in the YUV color space, the U and V components of the change-cumulative image allow deep features reflecting the color characteristics of smoke to be extracted, while other color spaces such as HSI usually have only one channel suitable for extracting smoke color characteristics (for example, the H component of HSI cannot be used because the chromaticity of smoke is not significant); (4) to facilitate the subsequent construction of the deep network, the change-cumulative image should be similar to an RGB image at the data level, i.e., it should have three channels with gray levels up to 255, and the YUV color space meets this need.
In the Y space, the inter-frame difference method is used to compute a binary change image. Specifically, for the kth vector image in the Y space, Y_k, the binary image of the inter-frame difference is

D_k(x, y) = 1 if |Y_k(x, y) − Y_{k−1}(x, y)| > T, and D_k(x, y) = 0 otherwise,

where T is a fixed threshold. This paper takes the empirical value T = 10.
Then, the adjacent N binary images are summed to obtain the change-cumulative image Y_k^(a):

Y_k^(a)(x, y) = Σ_{i=k−N+1}^{k} D_i(x, y).

In this paper, N is smaller than 255; thus, the change-cumulative image Y_k^(a) can be regarded as a grayscale image. This paper takes the empirical value N = 100.
In the U space, the U-space vector images of the adjacent N frames are cumulatively summed and averaged to obtain a cumulative image U_k^(a):

U_k^(a)(x, y) = (1/N) Σ_{i=k−N+1}^{k} U_i(x, y).

Similarly, in the V space, the V-space vector images of the adjacent N frames are cumulatively summed and averaged to obtain a cumulative image V_k^(a):

V_k^(a)(x, y) = (1/N) Σ_{i=k−N+1}^{k} V_i(x, y).

Thus, the kth YUV image can be converted into a change-cumulative image, whose Y-space component reflects the motion diffusion characteristics of smoke, while the U- and V-space components reflect the color change of smoke. Compared with the original YUV image, the change-cumulative image better highlights the changing characteristics of smoke, so significant and robust smoke characteristics can be extracted from it. For the subsequent deep network model, the change-cumulative image is used as the input image, in which the three component spaces Y, U, and V serve as the three channels of the input image, corresponding to the three color channels of a conventional RGB image. Figure 1 shows the three YUV component images and the corresponding change-cumulative image of the 108th frame in the video "wildfire_smoke_4.avi" [17]. In the Y component, the difference between the smoke object and the background is not obvious; in the change-cumulative image, however, the smoke region is much more distinct from the background.
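As an illustration of how the change-cumulative image can be built, the following is a minimal NumPy/OpenCV sketch. The function name and per-pixel formulation are ours; only the parameter values T = 10 and N = 100 come from the text above.

```python
import cv2
import numpy as np

def change_cumulative_image(frames_bgr, T=10, N=100):
    """Build a 3-channel change-cumulative image from the last N+1 video frames.

    frames_bgr: list of consecutive BGR frames (N differences need N+1 frames).
    Returns an image whose channels are (Y-change accumulation, mean U, mean V).
    """
    # Convert every frame to the YUV color space (channel order Y, U, V).
    yuv = [cv2.cvtColor(f, cv2.COLOR_BGR2YUV).astype(np.float32)
           for f in frames_bgr[-(N + 1):]]

    y = np.stack([img[:, :, 0] for img in yuv])       # (N+1, H, W)
    u = np.stack([img[:, :, 1] for img in yuv[1:]])   # (N, H, W)
    v = np.stack([img[:, :, 2] for img in yuv[1:]])   # (N, H, W)

    # Binary inter-frame change images D_i in the Y space, thresholded at T.
    d = (np.abs(y[1:] - y[:-1]) > T).astype(np.float32)

    y_acc = d.sum(axis=0)    # change-cumulative Y channel (values in [0, N], N < 255)
    u_acc = u.mean(axis=0)   # mean-filtered cumulative U channel
    v_acc = v.mean(axis=0)   # mean-filtered cumulative V channel

    return np.stack([y_acc, u_acc, v_acc], axis=-1).astype(np.uint8)
```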
Fusion Deep Network
Deep networks have exhibited good performance in image classification and object detection. VGG16 is a commonly used network with a simple structure that easily forms a deep network: its convolutional and pooling layers use the same kernel function, and both are stacked to form convolutional block structures. The maximum pooling used between the blocks of the VGG16 network, however, may discard certain important details of the original smoke images. To compensate for the lost features, this paper improves the VGG16 network by introducing cascading convolutional layers, which enhance the extraction of detailed features. We also cascade the ResNet50 network with the improved VGG network to extract more detailed features. The ResNet50 network uses skip connections to form residual blocks, thereby conveying image information to the deep layers of the network and avoiding the loss of important smoke features. Meanwhile, this approach avoids the under-fitting problem caused by vanishing gradients and effectively improves the expression ability of the model while deepening the network. Compared with a single deep network such as VGG16 or ResNet50, the fusion deep network proposed in this paper can extract richer detailed features of smoke images and better distinguish smoke from non-smoke images.
(1) Cascading Convolutional Layer
Although the VGG16 network increases the extraction of detailed features through the combination and stacking of 3 × 3 filters, its ability to distinguish smoke from smoke-like images such as cloud and fog is not strong enough. To further strengthen the significance of the features, this paper draws on the convolutional layer enhancement idea in [18] and cascades two convolutional layers after each convolutional layer of the VGG16 network.
As illustrated in Figure 2a, after the traditional convolutional layer operations (convolution, batch normalization, and ReLU activation) are applied to the input data, the output data have the same feature dimensions as the input data.
As shown in Figure 2b, the output data after the convolutional layer operation are averaged with the input data. The averaged output then serves as the input of the cascading convolutional layer, and a convolutional layer operation is performed again. By analogy, three convolutional layers are cascaded, which avoids the loss of the original detailed features during the convolution process and helps distinguish smoke from smoke-like images such as cloud and fog. At the same time, cascading increases the network depth, which enhances the extraction of object features and improves recognition performance. Furthermore, because of weight sharing, no additional parameters need to be learned to increase the network depth, which avoids difficulties of deep training such as over-fitting.
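A possible Keras realization of this cascading structure is sketched below. The layer hyper-parameters (filter count, 3 × 3 kernels) and the interpretation that the two cascaded layers share one set of weights are our reading of Figure 2 and the text above, so treat them as assumptions rather than the authors' exact configuration.

```python
from tensorflow.keras import layers

def conv_bn_relu(x, conv, bn):
    """Convolutional layer operation: convolution + batch normalization + ReLU."""
    return layers.ReLU()(bn(conv(x)))

def cascaded_conv_layer(x, filters):
    """A VGG16 convolutional layer followed by two cascaded layers (cf. Figure 2b).

    The two cascaded layers reuse the same weights (weight sharing), so depth is
    added without extra parameters; after each convolutional layer operation the
    output is averaged with its input so that original details are preserved.
    """
    # Original VGG16 convolutional layer at this position (its own weights).
    y = conv_bn_relu(x, layers.Conv2D(filters, 3, padding="same"),
                     layers.BatchNormalization())

    # Shared layers reused by both cascaded convolutions.
    shared_conv = layers.Conv2D(filters, 3, padding="same")
    shared_bn = layers.BatchNormalization()

    z = layers.Average()([y, conv_bn_relu(y, shared_conv, shared_bn)])
    z = layers.Average()([z, conv_bn_relu(z, shared_conv, shared_bn)])
    return z
```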
(2) Fusion Deep Network
The VGG16 network consists of 13 convolutional layers and three fully connected layers; its structure is displayed in Figure 3. Its most notable characteristic is that features are extracted by the combination and stacking of 3 × 3 filters, which gives the features strong discriminating ability. The ResNet50 network consists of 49 convolutional layers and one fully connected layer; its structure is shown in Figure 4. Because the network includes identity mapping layers, shallow layers are connected directly to deep layers, which ensures that the network does not degrade as its depth increases and that convergence remains good.
In this paper, the VGG16 and ResNet50 networks are cascaded to form the fusion deep network, as illustrated in Figure 5; the details of this structure are given in Table 1. The VGG16 feature extractor is the five-block convolutional structure shown in the dashed box of Figure 3, in which the convolutional layers use the cascading convolutional layers described above. The ResNet50 feature extractor is the residual block structure in the dashed box of Figure 4. The fusion deep network uses both feature extractors. For an input image of size 224 × 224, the features extracted by the VGG16 and ResNet50 extractors are denoted F1 and F2, respectively; the dimension of F1 is 7 × 7 × 512 = 25,088 and that of F2 is 7 × 7 × 2048 = 100,352. F2 is appended after F1 to construct a mixed feature F3 of dimension 25,088 + 100,352 = 125,440. This feature passes through three fully connected layers and two dropout layers before the result is output. Specifically, each dimension of the mixed feature F3 is regarded as a neuron node, and the extracted features are passed through a fully connected (FC) layer that outputs 1024 neuron nodes. To prevent over-fitting of the convolutional neural network, the dropout method randomly discards neural units with probability P (P = 0.3 in this paper). The remaining units are connected by another FC layer that outputs 128 neuron nodes, and the dropout operation is performed again.
Finally, the remaining neural units are passed through the Sigmoid activation function. If the output value is greater than or equal to 0.5, the input is determined to be a smoke object; otherwise, it is determined to be a non-smoke object. The loss function used in this paper can be denoted as follows.
loss(t, x) = −[t · log(σ(x)) + (1 − t) · log(1 − σ(x))],

where t is the label value and σ(x) is the Sigmoid output of the network for the input data x. The fusion deep network not only extracts the detailed features of smoke images by using the small filters of VGG16, but also alleviates the feature loss and under-fitting problems of VGG16 by employing the ResNet50 network to extract deeper features.
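The overall assembly can be sketched in Keras roughly as follows. The feature extractors, the FC(1024)-dropout(0.3)-FC(128)-dropout-Sigmoid head, and the 224 × 224 input follow the description above; for brevity the sketch uses the stock VGG16 extractor rather than the cascading-convolution variant, and details such as the ReLU activations on the FC layers and the optimizer are assumptions.

```python
from tensorflow.keras import Input, Model, layers
from tensorflow.keras.applications import VGG16, ResNet50

def build_fusion_network(input_shape=(224, 224, 3)):
    """Fusion deep network: VGG16 and ResNet50 feature extractors concatenated,
    followed by FC(1024) -> dropout(0.3) -> FC(128) -> dropout -> Sigmoid."""
    inp = Input(shape=input_shape)

    # Pre-trained ImageNet weights are used for transfer learning (see below).
    vgg = VGG16(include_top=False, weights="imagenet", input_shape=input_shape)
    res = ResNet50(include_top=False, weights="imagenet", input_shape=input_shape)

    f1 = layers.Flatten()(vgg(inp))      # 7 x 7 x 512  = 25,088 features
    f2 = layers.Flatten()(res(inp))      # 7 x 7 x 2048 = 100,352 features
    f3 = layers.Concatenate()([f1, f2])  # mixed feature, 125,440 dimensions

    x = layers.Dense(1024, activation="relu")(f3)
    x = layers.Dropout(0.3)(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(1, activation="sigmoid")(x)   # >= 0.5 means smoke

    model = Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```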
In summary, Algorithm 1 of this paper is listed as follows.
(1) Experimental Dataset
We test the performance of the algorithms on two public smoke video datasets. One is the dataset provided by Prof. Yuan Feiniu, which includes three smoke and three non-smoke videos [19]; the other is the CVPR Lab video dataset, which includes four wild smoke videos and 10 non-smoke videos [17]. In our experiments, we select two smoke videos and four non-smoke videos from the two datasets to build the training dataset and use the remaining videos to build the testing dataset; the details are shown in Table 2. The training dataset contains 5984 frames in total, of which 3526 are smoke frames; the testing dataset contains 46,568 frames, of which 17,084 are smoke frames. All subsequent experimental results are based on the dataset described in Table 2. Table 2. Description of the dataset used in this paper.
(3) Experimental Environment
The experiments are carried out on a Windows 10 system with Python 3.6.2, TensorFlow 1.11.0, and Keras 2.2.4. On the hardware side, the graphics card is an NVIDIA GeForce GTX1080Ti and the CPU is an Intel Core i7-8700K. The implementation is written in Python using the Keras neural network library with GPU acceleration.
Model Training
The training of the fusion deep network proceeds as follows. First, we extract the change-cumulative images from the videos in the training dataset. Second, we manually select change-cumulative images of smoke and non-smoke objects to construct the positive and negative sample sets for training the fusion deep network model. The number of change-cumulative images obtained this way is limited, however, so it is difficult to train a well-performing fusion deep network model on these images alone. This paper adopts transfer learning to address this small-sample training problem, specifically feature-based transfer learning in an isomorphic space. At present, few datasets and few public network models are available for smoke detection. As the network deepens, the high-level feature layers capture fairly universal shape and texture information; therefore, this paper uses the large amount of annotated data in the object detection domain for transfer learning. Specifically, the VGG16 and ResNet50 models in the Keras library are used, and the pre-trained weights [20] from the ImageNet dataset are used for transfer learning. Figure 6 illustrates the loss and accuracy curves during the training of three models: the fusion deep network, the VGG16 network, and the ResNet50 network. We use Adam (adaptive moment estimation) for optimization; the training-relevant hyper-parameters are shown in Table 3. As displayed in Figure 6a, the loss curves show the gap between the different models under transfer learning on the actual data; the loss curve of the fusion deep network model proposed in this paper stabilizes quickly. Figure 6b shows that the training accuracy of the fusion deep network model is superior to that of the single networks and becomes stable by the 20th iteration. The performance of the fusion deep network proposed in this paper is better than that of the VGG16 and ResNet50 single models.
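A minimal training loop consistent with this setup might look as follows. The batch size, learning rate, and epoch count are placeholders, since the actual values live in Table 3, and the data-loading helper is hypothetical.

```python
from tensorflow.keras.optimizers import Adam

# Hypothetical helper that loads change-cumulative images and 0/1 labels.
x_train, y_train, x_val, y_val = load_change_cumulative_dataset("training_videos/")

model = build_fusion_network()  # see the sketch above; ImageNet weights pre-loaded

# Placeholder hyper-parameters; the paper's actual values are given in Table 3.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    batch_size=32,
                    epochs=30)
```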
Performance Comparison
(1) Performance Comparison with Different Input Images and Network Models
The main innovations of this paper are (1) the change-cumulative image, used as the input of the deep network model, and (2) the fusion deep network model. To verify the benefit of each for smoke detection, we compare smoke detection performance with different input images and network models. Figure 7 shows the ROC (receiver operating characteristic) curves of the methods with different input images and network models, where "fusion deep network 1" denotes the fusion deep network without pre-trained ImageNet weights and "fusion deep network 2" denotes the fusion deep network with pre-trained ImageNet weights. Clearly, using the change-cumulative image together with the fusion deep network with pre-trained ImageNet weights yields the largest AUC (area under the curve).
The detailed results are presented in Table 4, where the classification threshold is set to 0.5. Under the same network model, the three indicators AR, FPR, and FAR are all better when the change-cumulative image is used as the network input; in particular, the FAR drops significantly. For the same input image, the smoke detection indicators of the fusion deep network are significantly better than those of VGG16 or ResNet50 alone. Compared with the fusion deep network without pre-trained ImageNet weights, the one with pre-trained weights obtains a higher AR value together with lower FPR and FAR values. Some detection results are shown in Figure 8: most smoke and non-smoke images are classified correctly, but a few smoke-like objects and visually insignificant smoke objects are still misclassified. In general, using the change-cumulative image and the fusion deep network with pre-trained ImageNet weights greatly improves the AR value and decreases the FPR and FAR values, thereby enhancing overall smoke detection performance.
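For reference, the indicators can be computed from frame-level confusion counts as sketched below. The particular definitions used here (AR as accuracy, FPR as the false positive rate on non-smoke frames, FAR as the fraction of smoke detections that are not smoke) are assumptions, since the paper's own formulas are defined elsewhere; the example numbers are purely illustrative.

```python
def detection_indicators(tp, tn, fp, fn):
    """Frame-level indicators from confusion counts (assumed definitions).

    AR  - accuracy rate: correctly classified frames / all frames
    FPR - false positive rate: non-smoke frames wrongly called smoke
    FAR - false alarm rate: fraction of smoke detections that are not smoke
    """
    total = tp + tn + fp + fn
    ar = (tp + tn) / total
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    far = fp / (tp + fp) if (tp + fp) else 0.0
    return ar, fpr, far

# Illustrative example on a 46,568-frame test set with 17,084 smoke frames.
print(detection_indicators(tp=17000, tn=29184, fp=300, fn=84))
```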
(2) Performance Comparison with Different Methods
We compare the smoke detection performance of our method with that of other popular smoke detection methods from recent years, as presented in Table 5. The training and testing datasets used by all methods are the same, as shown in Table 2, and the training methods and classifier parameters are taken from the corresponding literature.
Evidently, the AR value of our method is the largest, and the FPR and FAR values are the smallest. Specifically, the methods using deep learning (including CNN, DNCNN, and our method) always obtain higher AR values and lower FPR values, because the discriminating ability of deep features is stronger than that of traditional features such as color, texture, and motion. From the FAR indicator, the smoke detection methods that incorporate motion characteristics (including our method and that of [14]) have a low false alarm rate, which reflects the unremarkable color, shape, and texture of smoke. Our method not only combines the motion, color, shape, and texture features of smoke in the change-cumulative image, but also increases the network depth and the extraction of detailed features through the fusion deep network. Furthermore, the method uses transfer learning to solve the problem of insufficient training caused by the small sample size, which improves the AR of smoke detection and decreases the FPR and FAR. However, the dfps (detected frames per second) of our method is the lowest among these methods, which shows that its computational efficiency is low and needs to be improved.
Conclusions
The use of computer vision technology to detect smoke in videos has been a popular research direction in fire detection in recent years. Given the uncertainty of smoke objects, traditional smoke detection methods are impractical because of their low performance, especially their high false alarm rates. Deep learning-based methods have improved smoke detection performance to a certain degree, but relying solely on deep features makes it difficult to distinguish smoke from similar objects such as clouds and fog. This paper proposes the new concept of the change-cumulative image, which converts the YUV color space of multi-frame video images into a single change-cumulative image that describes the motion characteristics and color change of smoke. Experiments show that the change-cumulative image strengthens smoke characteristics and improves smoke detection performance. Furthermore, a fusion deep network is designed, whose main characteristics are an improved convolutional layer for the VGG16 network and the cascading of two network models; this effectively improves the expression ability of the model while deepening the network and helps extract additional identifiable smoke features. The experimental results show that the new method improves the AR of smoke detection and, in particular, reduces the FPR and FAR, which is of great significance for the practical application of smoke detection. However, the method extracts smoke characteristics using two network models with complex calculation processes, so its computational efficiency is low; improving the efficiency of the algorithm is a direction for future research.
Unavoidable Gapless Boundary State and Boundary Superfluidity of Trapped Bose Mott States in Two-Dimensional Optical Lattices
We study the boundary nature of trapped bosonic Mott insulators in optical square lattices by performing quantum Monte Carlo simulations. We show that a finite superfluid density generally emerges in the incommensurate-filling (IC) boundary region around the bulk Mott state, irrespective of the width of the IC region. Both off-diagonal and density correlation functions in the IC boundary region exhibit a nearly power-law decay. The power-law behavior and superfluidity are well developed below a characteristic temperature. These results indicate that a gapless boundary mode always emerges in any atomic Mott insulator on optical lattices. This further implies that if a topological insulating state is considered in Bose or Fermi atomic systems, its boundary possesses at least two gapless modes (or coupled modes): the IC edge state described above and the intrinsic topologically protected edge state.
In the Supplemental Material, we define and explain the local helicity modulus and local superfluid density. The contents mainly concern the numerical techniques of quantum Monte Carlo (QMC) simulation. Readers who are interested only in the physical results may skip this Supplemental Material.
The helicity modulus Y_tot and the superfluid density ρ_s^homo of a d-dimensional uniform system [1] are defined through the winding number vector W [2], where t is the "averaged" short-range hopping amplitude and L is a linear size of the system. The α component W_α of W stands for the number of winding particles along the α axis in the path-integral formalism and can be evaluated by the QMC method. The value Y_tot is usually insensitive to details of the system, but ρ_s^homo may depend strongly on them, such as the values of the hopping amplitudes and interactions. In fact, it is well known, for instance, that the helicity modulus exhibits a universal jump at a Kosterlitz-Thouless (KT) transition point in two dimensions. Keeping in mind these facts about uniform systems, let us discuss the local superfluidity and local helicity modulus of spatially inhomogeneous systems. To see the universal nature of the systems, we mainly use the local helicity modulus rather than the local superfluid density. In order to define the helicity modulus theoretically, we have to impose a periodic boundary condition along at least one spatial direction, because the modulus is defined by the energy variation after slightly twisting the boundary condition. By definition, therefore, it is impossible to define a "truly" local helicity modulus at an arbitrary position x in d-dimensional space. However, when d ≥ 2, we can introduce a local helicity modulus defined at each position along one special x direction by imposing a periodic boundary condition along a direction perpendicular to the x axis. For a d-dimensional system with d ≥ 2, we define the local helicity modulus Y(x) on the x axis such that the total helicity modulus is given by the summation of Y(x) along the x direction, Y_tot = Σ_x Y(x). Following Eqs. (1) and (2), we then define the relation between the local superfluid density ρ_s(x) and Y(x). In this relation, ρ_s(x) stands for the superfluid density along a direction perpendicular to the x axis. For example, if we consider a two-dimensional system with an open boundary condition along the x axis, ρ_s(x) is the superfluid density along the y direction at x. Next we explain how the local helicity modulus is related to numerically evaluated quantities such as W. Similarly to the local helicity modulus, winding (topological) numbers including W have a non-local nature, and a periodic boundary condition along at least one direction is necessary to define them. Paying attention to these properties, we can introduce the partial winding number vector w(x) at each position on the x axis, whose summation over the x axis equals the original winding-number vector, W = Σ_x w(x). The x component w_x(x) of w(x), which can be evaluated by QMC, does not satisfy the property of a winding number, since information about the whole range of the x axis is necessary to define a winding number along the x axis; however, the sum of them, i.e., W_x, can be a winding number if we impose a periodic boundary condition along the x direction. Each remaining component w_α(x) (α ≠ x) is the number of winding particles along the α direction at x, and thus it has a topological nature. From Eqs. (1), (2), and (4), we find the relation that w(x) and ρ_s(x) should satisfy. When an open boundary condition is applied along the x axis, as in our setup in the main text, the x component of the winding vector becomes zero and ρ_s(x) represents the superfluid density along the y axis at the position x.
In this way, we can evaluate the local helicity modulus and local superfluid density (see Fig. 2 of the main text).
Introduction.− Topological insulators (TIs) [1,2] and, more broadly, symmetry-protected topological (SPT) states [3-5] have been studied intensively as new quantum many-body states over the last decade. These gapful states cannot be characterized by any local order parameter, while they generally possess a gapless edge/surface mode. Each SPT phase is protected by certain symmetries; namely, it is stable against any perturbation that keeps those symmetries. A complete classification of TIs and the relationship between their bulk symmetry and the corresponding surface/edge state have been established for free fermion models [6-8]. Several TI materials have been synthesized and their surface/edge states have been observed [1,2].
Many physicists, stimulated by the study of fermionic TIs, have been exploring SPT states in spin and boson systems. The Haldane-gap state [9,10] of one-dimensional (1D) spin-1 antiferromagnets is a typical SPT state in quantum spin systems, and it is indeed realized in several quasi-1D magnets [11,12]. In addition to the Haldane state, several SPT phases in 1D fermion, boson, and spin systems have been discussed. In fact, a way to classify 1D bosonic SPT phases has been proposed using tensor product representations [13].
On the other hand, two- or three-dimensional (2D or 3D) bosonic SPT phases and their edge/surface states remain poorly understood. Several theorists have discussed the possibility of higher-dimensional bosonic TIs [5,14-18] and proposed ways to classify them: bosonic TIs in spatial dimension d can be distinguished by a technique based on the (d + 1)th group cohomology [15-18]. In those studies, some models for bosonic TIs were predicted, but it is difficult to realize them in real materials because the corresponding Hamiltonians contain various finely tuned coupling constants.
For the realization of 2D or 3D SPT phases in boson or spin systems, strong interactions among bosonic particles or spins are generally necessary. The interaction usually makes it quite difficult to analyze the systems, and this is a main reason why the theory of 2D or 3D bosonic TIs has not developed as far as that of fermionic TIs. For the same reason, even non-topological (i.e., trivial) gapped phases and their boundary nature have not been well understood in strongly interacting boson and spin systems. In a sense, the boundary nature of gapped states is more important than the classification of ground states, because physical phenomena at the boundary can be observed, and their information often provides an experimental way to characterize the bulk state.
Recently, we studied an edge state of 2D spin-Peierls states [19] by quantum Monte Carlo (QMC) calculations [20-22]. The Peierls state is a typical trivial gapped state in quantum spin systems, and it does not accompany any spontaneous breaking of basic symmetries (such as spin rotation and time-reversal symmetries). We showed that if we prepare a sufficiently clean edge of the Peierls state with a large enough length (∼ 50 sites), we can observe gapless Tomonaga-Luttinger-liquid (TLL)-like behavior [23] along the edge, and the edge spin-spin correlation function decays in an almost algebraic fashion. We proposed some experimental ways of detecting these gapless edge excitations. In this paper, we explore the fundamental nature of boundary states of 2D Bose Mott insulators on optical lattices by QMC computations. Similarly to the spin-Peierls state and fermionic TIs, no spontaneous symmetry breaking occurs and a finite bulk excitation gap exists in Bose Mott states. They have already been realized in ultracold-atomic systems on optical lattices [24-28]. Therefore, their boundary nature could be an important research subject as a point of comparison for that of bosonic TIs. An important feature of trapped ultracold-atom systems is that their boundary is always clean. This contrasts considerably with solid systems, whose boundary is usually dirty. As a result, we always observe a clean and homogeneous boundary region in Bose Mott states. We will numerically clarify the boundary properties of 2D Bose Mott states, which could in principle be detected experimentally. Our findings are also useful for deeply understanding boundary states of cold-atom TIs as well as those of Bose Mott states.
Model.− In this paper, we focus on the 2D soft-core Bose-Hubbard model with confinement potentials. To discuss the finite-size effects systematically, we consider systems in the quasi-1D geometry shown in Fig. 1(b). This geometry would be hard to realize in optical lattices, but it can be regarded as a boundary part of a circularly or elliptically shaped trapped system [see Fig. 1(a)]. The Hamiltonian for the quasi-1D geometry is given as

H = −t Σ_{⟨(x,y),(x',y')⟩} (b†_{x,y} b_{x',y'} + H.c.) + (U/2) Σ_{x,y} n_{x,y}(n_{x,y} − 1) + Σ_{x,y} V(x) n_{x,y},

where b†_{x,y} (b_{x,y}) is a boson creation (annihilation) operator at position (x, y) and n_{x,y} = b†_{x,y} b_{x,y}. Parameters t, U, and V(x) denote the hopping amplitude, on-site repulsion, and axial confinement potential along the x-axis direction, respectively. To realize the quasi-1D geometry, we impose the periodic (open) boundary condition along the y (x) direction. In our computations, we fix U/t = 20, at which the bulk system can belong to the Mott-insulating state with a single boson per site [24-28]. We also fix the x-direction length L_x = 48, but tune the y-direction length L_y. The confinement potential is characterized by μ_0 and the curvature α; an increase of α means a steeper potential slope between the bulk Mott and vacuum (empty) regions.
Boundary state in the T → 0 limit.− In order to understand the boundary nature of the Bose Mott state, we study particle densities, superfluidity, and correlation functions of the model (1) by QMC computations. In Fig. 2, we first show the local density profile n(x) = ⟨n_{x,y}⟩ and the local helicity modulus Y(x), which is proportional to the local superfluid density [29,30], for different values of μ_0 and the curvature α at very low temperatures T (k_B is set to unity). We show QMC results for the three parameter settings (μ_0, α) in Table I as representatives. In all of CASEs I, II, and III, an incommensurate (IC) filling region (e.g., 0.21 < x/L_x < 0.6 in CASE I) appears between the filling-one Mott and vacuum (empty) states. In the IC region, the local superfluid density takes a finite value. The size of the IC region and the superfluid density profile are almost independent of the length L_y when L_y is sufficiently large. We note that finite-size effects along the x direction can be ignored for the present parameter settings. The survival of a superfluid density sharply contrasts with the case of the purely 1D Bose system, whose superfluid density is known to disappear in the thermodynamic limit [33-35]. We confirmed that an even narrower IC region still survives when a larger value of α is applied. Furthermore, we observe an IC region when we apply other types of confinement potentials with non-harmonic curvatures [36,37]: V(x) = μ_1 + α_1 x^10 and V(x) = μ_2 + α_2 exp(−x/ξ_2). These results indicate that an IC superfluid region between the Mott and vacuum areas generally appears, and a special potential V(x) with an extremely large curvature would be necessary to remove the IC region. In Fig. 3, we present the equal-time one-particle (off-diagonal) correlation function C_o(y) = ⟨b†_{x_0,y} b_{x_0,0}⟩ and the density correlation function C_d(y) = ⟨n_{x_0,y} n_{x_0,0}⟩ − ⟨n_{x_0,y}⟩² at the boundary position x = x_0, where ρ_s(x_0) takes its largest value. As a comparison, we also show the QMC results of a purely 1D Bose-Hubbard model [Fig. 3(g) and (h)] and a spatially uniform 2D Bose-Hubbard model [Fig. 3(i) and (j)] in an IC-filling case. The 1D and 2D Bose systems of Fig. 3 belong to the TLL and Kosterlitz-Thouless (KT) phases, respectively. At the position x_0, power-law decays along the y direction are observed in both the off-diagonal and density correlations at long distances. Their critical exponents are evaluated by assuming the form C_z(y) = const.
× y^{−η_z}, and they are summarized in Table II (η_o = 0.069(2), 0.12(4), and 0.25(1) for CASEs I, II, and III, respectively). The emergence of the algebraic decay is independent of the width of the IC region. This clearly indicates that at least one gapless edge mode around the bulk Mott-insulating region always appears at very low temperatures. The algebraic decay of the density correlation is quite different from that of the KT phase in uniform 2D Bose systems: in fact, Fig. 3(j) shows that the density correlation decays exponentially in the 2D case. In addition, it is known [23] that the two critical exponents satisfy η_o η_d = 1 in the purely 1D TLL (see Table II), while this relation is clearly broken in the present boundary gapless mode. These results show that the boundary IC region possesses properties intermediate between those of 1D and 2D Bose systems. Boundary state at finite temperatures.− Next, we discuss the temperature dependence of the boundary IC states. In the present model, as well as in real experimental systems, effects of finite size and spatial inhomogeneity may spoil true phase-transition phenomena in the thermodynamic limit, but their remnants might still survive. Figure 4 shows the temperature dependence of the local helicity modulus Y(x_0) at a typical IC position x = x_0 in CASE I. The figure shows that the L_y dependence of Y(x_0) becomes negligibly small below T/t ∼ 0.075, where the off-diagonal and density correlation functions decay algebraically. This small L_y dependence implies the existence of a gapless KT-like phase around the bulk Mott state.
To quantitatively determine a KT-transition-like temperature in our finite inhomogeneous system, we simply apply the standard finite-size analysis for the KT transition in spatially uniform 2D systems, which is expected to be reliable if the width of the IC region is sufficiently large, as in CASE I. In the uniform system, Y approaches 2k_B T_KT/π at the KT transition temperature T = T_KT. For each finite system of size L, the KT transition temperature T*(L) is known to obey a finite-size scaling relation containing a fitting parameter C. Since we have the data of Y(L_y) for different sizes L_y in Fig. 4, the temperature T*(L_y) can be estimated as the crossing point between the numerically determined Y(L_y) in Fig. 4 and the straight line Y(T) = 2k_B T/π. In Fig. 5(a), we plot the evaluated T*(L_y). In its inset, we determine the value of T*(L_y → ∞) by combining T*(L_y) with the scaling relation (2) (T*(L_y → ∞) corresponds to the KT transition temperature T_KT in the case of uniform systems). The characteristic temperature T*(∞) is determined as T*(∞)/t ∼ 0.11(1).
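To make the procedure concrete, the sketch below estimates T*(L) as the crossing of Y(T; L) with the line 2k_B T/π and then extrapolates with the commonly used KT finite-size form T*(L) = T*(∞) + C/(ln L)². This explicit scaling form and the interpolation details are our assumptions and are not necessarily identical to the relation (2) used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def crossing_temperature(temps, helicity):
    """Estimate T* where Y(T) crosses the universal line 2*T/pi (k_B = 1)."""
    diff = np.asarray(helicity) - 2.0 * np.asarray(temps) / np.pi
    i = np.where(np.diff(np.sign(diff)))[0][0]   # index of the first sign change
    # Linear interpolation between the two bracketing temperatures.
    t0, t1, d0, d1 = temps[i], temps[i + 1], diff[i], diff[i + 1]
    return t0 - d0 * (t1 - t0) / (d1 - d0)

def extrapolate_tstar(sizes, tstars):
    """Fit T*(L) = T*(inf) + C / (ln L)^2 and return (T*(inf), C)."""
    def form(L, t_inf, C):
        return t_inf + C / np.log(L) ** 2
    popt, _ = curve_fit(form, np.asarray(sizes, float), np.asarray(tstars, float))
    return popt
```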
As an alternative way to determine T*(∞), we can use correlation functions in our inhomogeneous system. It is well known that the critical exponent η_o of the off-diagonal correlator increases from zero and reaches one quarter at the KT transition as the temperature grows. We simply apply this property to fix the KT-like temperature in our IC region. The inset of Fig. 5(b) shows the T dependence of the exponent η_o; we see that η_o indeed crosses one quarter around T = T*(∞), the value estimated above from Y(L_y).
We stress that the temperature T*(∞) is much lower than the true KT transition temperature T_KT of the uniform 2D system. From the QMC calculation, we obtained T_KT/t ∼ 0.92(3) for U/t = 20 and μ/t = −0.74, where the averaged particle number per site is almost the same as that at the position x = x_0 in CASE I. This must be because the development of the off-diagonal correlation along the x direction is suppressed owing to the presence of the Mott-insulating and vacuum regions. When the width of the IC region is small, as in CASE III, it is hard to determine T* quantitatively. However, even in such a case, a KT-like power law in the correlations appears at very low temperatures (see Fig. 3).
Structure factors.− From all the discussions above, we see that a gapless IC state always appears around the Bose Mott state if the temperature is low enough. Finally, we discuss an experimental way to detect the gapless edge mode. In cold-atomic systems, the momentum distribution of correlation functions [38] can in principle be observed; for example, the time-of-flight (TOF) method and light-scattering spectroscopy have been applied for this purpose. In Fig. 6, we show the momentum-q_y distribution S_o(x, q_y) = (1/L_y) Σ_y C_o(x, y) e^{−i q_y y} for the IC region at x = x_0 and for the bulk Mott region at x = x_1, where C_o(x, y) = ⟨b†_{x,y} b_{x,0}⟩. In a realistic experimental setup, the number of sites along the y axis is less than ∼100; we therefore set L_y = 64 in all the panels of Fig. 6. As the temperature decreases, the momentum distribution at zero wave number q_y = 0 develops well in the IC region, while that of the Mott region is suppressed for all wave numbers q_y. The q_y = 0 peak reflects the development of superfluidity in the IC region. In Fig. 6(d), as a comparison to the IC region, we show the momentum distribution of a finite-size 1D Bose-Hubbard model under a uniform chemical potential with almost the same filling as the IC boundary region. In the 1D case, we also observe a q_y = 0 peak structure, which is the contribution of the TLL. From Fig. 6, we find that the momentum distributions in both the Mott and IC regions exhibit a T dependence similar to that of the finite-size 1D system with the same filling. This is further evidence for the existence of a gapless edge mode in the IC region, and it also indicates the difficulty of distinguishing the IC gapless state from the 1D TLL.
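As an illustration, the momentum distribution defined above can be evaluated from the measured correlator by a discrete Fourier transform, for example as follows; the 1/L_y normalization follows the expression in the text, while the array layout is an assumption.

```python
import numpy as np

def momentum_distribution(C_o_row):
    """S_o(x, q_y) = (1/L_y) * sum_y C_o(x, y) * exp(-i q_y y).

    C_o_row: 1D array C_o(x, y) for y = 0, ..., L_y - 1 at fixed column x.
    Returns (q_y values, |S_o| at each q_y).
    """
    L_y = len(C_o_row)
    s = np.fft.fft(C_o_row) / L_y          # FFT uses exp(-i q_y y), matching the definition
    q = 2.0 * np.pi * np.fft.fftfreq(L_y)  # allowed momenta q_y = 2*pi*n/L_y
    return np.fft.fftshift(q), np.abs(np.fft.fftshift(s))
```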
Our result naturally indicates that a similar gapless edge mode generally emerges in any kind of 2D cold-atomic Bose or Fermi insulating state. Therefore, if we consider a topological insulating state in 2D cold-atom systems, we can expect at least two gapless edge modes: the edge state in the IC region and an intrinsic topologically protected edge state, as shown in Fig. 7. There might be a relevant coupling between these two edge states. Thus, when we discuss a way to detect a topological edge mode in 2D cold-atom systems, we should generally take into account the effects of the non-topological (but universal) edge mode in the IC region around the bulk insulating area.
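As a supplement to the structure-factor discussion above, the momentum distribution S o (x, q y ) can be obtained from the column correlator by a discrete Fourier transform. The following sketch is not the authors' code; it uses a toy algebraically decaying correlator (with a placeholder exponent) in place of the QMC data, simply to illustrate how a q y = 0 peak emerges in the IC region.

```python
import numpy as np

L_y = 64                                   # number of sites along y, as in Fig. 6
y = np.arange(L_y)

# Toy off-diagonal correlator C_o(x, y) for a fixed column x in the IC region:
# algebraic decay with a placeholder exponent eta_o = 0.25, symmetrized on the ring.
d = np.maximum(np.minimum(y, L_y - y), 1)
C = d ** -0.25
C[0] = 1.0

# S_o(x, q_y) = (1/L_y) * sum_y C_o(x, y) exp(-i q_y y), with q_y = 2*pi*k/L_y
S = np.fft.fft(C).real / L_y               # real for a symmetric correlator
q = 2.0 * np.pi * np.arange(L_y) / L_y

print("q_y = 0 peak:", S[0])               # dominant zero-momentum weight
print("largest weight at nonzero q_y:", S[1:].max())
```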
A novel method for the evaluation of proximal tubule epithelial cellular necrosis in the intact rat kidney using ethidium homodimer
Background Ethidium homodimer is a cell-membrane impermeant nuclear fluorochrome that has been widely used to identify necrotic cells in culture. Here, we describe a novel technique for evaluating necrosis of epithelial cells in the proximal tubule that involves perfusing ethidium homodimer through the intact rat kidney. As a positive control for inducing necrosis, rats were treated with 3.5, 1.75, 0.87 and 0.43 mg/kg mercuric chloride (Hg2+, intraperitoneal), treatments which have previously been shown to rapidly cause dose-dependent necrosis of the proximal tubule. Twenty-four h after the administration of Hg2+, ethidium homodimer (5 μM) was perfused through the intact left kidney while the animal was anesthetized. The kidney was then removed, placed in embedding medium, frozen and cryosectioned at a thickness of 5 μm. Sections were permeabilized with -20°C methanol and then stained with 4',6-diamidino-2-phenylindole (DAPI) to label total nuclei. Total cell number was determined from the DAPI staining in random microscopic fields and the number of necrotic cells in the same field was determined by ethidium homodimer labeling. Results The Hg2+-treated animals showed a dose-dependent increase in the number of ethidium labeled cells in the proximal tubule, but not in other segments of the nephron. Other results showed that a nephrotoxic dose of gentamicin also caused a significant increase in the number of ethidium labeled cells in the proximal tubule. Conclusion These results indicate that this simple and sensitive perfusion technique can be used to evaluate cellular necrosis in the proximal tubule with the three-dimensional cyto-architecture intact.
Background
As a highly perfused organ having unique filtration, secretory and reabsorptive functions, the kidney is susceptible to toxic injury. Of the various segments of the nephron, the proximal tubule is especially prone to toxic injury. As a result of its location adjacent to the glomerulus and the presence of specific secretory systems for organic acids and bases, the proximal tubule is frequently exposed to higher levels of toxicants than other segments of the nephron and is, therefore, often a primary site of nephrotoxic injury. In light of its importance as a site of toxic injury, considerable attention has been focused on the development of sensitive and accurate methods for quantifying cell injury and cell death in the proximal tubule. Obviously, a key issue in attempting to quantify toxic injury in the proximal tubule is to identify dead or dying cells. The techniques that have been used to accomplish this fall into three categories: those based on changes in general cell morphology; those based on biochemical markers in the cascade of events leading to oncotic/necrotic or apoptotic cell death; and those based on a loss of cell membrane integrity [1][2][3][4].
Morphological analysis of whole kidney tissue is a simple and widely used method to determine the nephrotoxic effects of xenobiotics while keeping the three dimensional cyto-architecture of the kidney intact [5,6]. However this technique is semi-quantitative at best, with a necrotic/ apoptotic score or index assigned to a certain degree of cell death or tissue damage. Ultimately this method is largely based on an observer's judgment as to whether a cell is dead or alive.
Several methods of identifying cell death in the kidney are based on specific markers of key biochemical events in the process of necrosis/oncosis or apoptosis, for reviews see [2,3,7]. For example, DNA fragmentation is one defining characteristic of apoptotic cell death and assays based on the Terminal Deoxynucleotidyltransferase-Mediated UTP End Labeling (TUNEL) and electrophoretic analysis of DNA have been primarily used to detect apoptotic cell death [8][9][10], although the TUNEL assay does not always discriminate apoptotic from necrotic or autolytic cell death [11]. Other assays have utilized specific markers of key events in the process of apoptosis or necrotic cell death. Some of the markers on which these assays are based include the mitochondrial membrane permeabilization, the translocation of phosphatidyl serine from the inner to the outer cell surface, and the activation of caspase or calpain enzymes [12][13][14]. These assays have the advantage of indicating the exact stage of progression of cell death. However, these assays can not always be used in the intact kidney and they can be difficult to interpret. Furthermore, the assays may be subject to interferences by the toxic substances themselves [15,16].
Some of the most widely used methods for assessing viability are based on the loss of cell membrane integrity that occurs prior to cell death [17]. Many of these methods based on cell membrane permeability involve the detection of cytosolic enzymes such as lactate dehydrogenase (LDH) and alkaline phosphatase that appear in the urine as the apical membrane of the epithelial cells of the proximal tubule begin to break down. These enzymatic assays are rapid, quantitative, and relatively easy to perform. However, many of the enzymatic urinary markers of cell death are not cell-or even organ-specific. For example, with toxic agents such as cadmium, that cause hepatic as well as renal damage, urinary enzymatic markers of cell death are not an ideal method to determine renal damage. Furthermore, certain substances can inhibit or interfere with the actual enzyme activity. For example, the nephrotoxic metal Hg 2+ directly inhibits LDH activity [18]. Recently, several commercially-available ELISA-based assays have been developed for the determination of urinary alpha-glutathione-S-transferase, a urinary marker that is thought to be specific for proximal tubule damage [19]. However these products are rather expensive and have yet to be fully validated for use in assessing the effects of most nephrotoxic substances.
Other techniques to assess cell necrosis through the loss of cell membrane integrity involve the use of cell membrane-impermeable dyes such as trypan blue, or fluorochromes such as various ethidium or propidium compounds [15,20]. These agents are normally excluded from live cells but can label dead or dying cells in which cell membrane integrity is compromised. While these methods have been widely used for assessing necrotic cell death in in vitro model systems, they have been less widely used for in vivo studies.
While considering the assessment of renal viability in an intact kidney it is important to acknowledge work done by Molitoris and colleagues and Peti-Peterdi and co-workers, who have pioneered the use of multiple photon fluorescence microscopy to study renal physiological and biochemical processes in vivo [20][21][22]. This powerful technique involves the use of nuclear fluorochromes and allows for the real time quantification of apoptotic and necrotic renal cell death in the intact kidney in a live animal [23]. However this technique requires considerable technical expertise along with the use of very expensive equipment. Moreover, the optical constraints limit its use to the visualization of functional changes only in the outermost areas of the exposed regions of the kidney.
As part of our ongoing research to identify the mechanism underlying the nephrotoxic effects of cadmium (Cd 2+ ) and other metals, we have developed a novel in situ renal cell viability assay that involves perfusing the fluorescent cell viability marker, ethidium homodimer through the intact rat kidney. Ethidium homodimer is a nuclear fluorochrome with high affinity for DNA (Ka = 2 × 10 8 M -1 ), a molecular weight of 856.8 and low membrane permeability [24]. Traditionally, ethidium homodimer has been widely used as an indicator of necrosis of cells in culture. However it has also been used as a marker of necrosis in specific populations of cells within isolated corneae [25] and in whole lung preparations [26][27][28].
To validate the sensitivity and utility of this assay, rats were treated with known nephrotoxic doses of Hg 2+ . Hg 2+ primarily accumulates within and targets the epithelial cells of the proximal tubule, especially at the junction of the cortex and medulla [29][30][31]. Exposure to nephrotoxic concentrations of Hg 2+ results in renal dysfunction as indicated by elevations in blood urea nitrogen (BUN) and serum creatinine [32]. At the cellular level, Hg 2+ exposure is associated with oxidative stress, decreased reduced glutathione levels [33], alterations in stress protein expression [29,34] and alterations in cell adhesion molecule expression and localization [32]. Acute exposure to nephrotoxic doses of Hg 2+ results primarily in necrosis with apoptotic cell death occurring to a lesser extent at very high doses of Hg 2+ [35]. In order to further evaluate the utility of this technique for assessing pathological changes in the proximal tubule, additional animals were treated with the aminoglycoside antibiotic gentamicin at a dose (100 mg/kg for 8 consecutive days) that has previously been shown to cause necrosis of the proximal tubule [36]. In addition, other animals were treated with the nephrotoxic metal Cd 2+ , using a subchronic dosing protocol (0.6 mg/kg 5 days per week for 6 weeks) that causes dysfunction of the proximal tubule without causing extensive tubular necrosis [37][38][39].
The results of these studies show that perfusion of the whole kidney with ethidium homodimer allows for the accurate determination of cell death in conditions where the three-dimensional cyto-architecture of the kidney remains intact.
Results
No animals died in any of the treatment groups. However, animals in the 3.5 mg/kg Hg 2+ treatment group appeared lethargic with orange-colored bristly fur, especially around the neck at 24 h following Hg 2+ exposure. Others have noted similar responses in rats using similar doses of Hg 2+ [29]. In general, the abdominal cavity of animals dosed with 3.5 mg/kg HgCl 2 tended to have visceral connective tissue that appeared to be dehydrated or "sticky". Subsequently the clearing of connective tissue from the aorta and kidneys for the in situ viability assay was more difficult and required slightly more time in the higher Hg 2+ -treatment groups, as compared to control animals. Macroscopic observations of the unperfused, right kidney demonstrated a pale coloration of the cortex and medulla in animals treated with 3.5 mg/kg Hg 2+ .
Effects of Hg 2+ on serum and urinary parameters
Hg 2+ treatment had a biphasic, dose-dependent effect on urine volume (Fig. 1A). Moderate doses of Hg 2+ (0.875 mg/kg) caused an increase in urine volume. However, the highest doses of Hg 2+ (3.5 mg/kg) resulted in a trend of decreasing mean urine volume values that was not statistically significant. The urinary excretion of protein also showed a trend toward this type of dose-dependent, biphasic response, although differences in mean values did not reach levels of statistical significance (Fig. 1B). Treatment with the highest dose of Hg 2+ caused a significant decrease in the urinary excretion of creatinine (Fig. 1C). This high dose of Hg 2+ also caused a significant increase in serum creatinine and BUN (Figures 2A and 2B). The results in Figures 1(A-C) and 2(A, B) are similar to results that have been reported by other investigators using similar Hg 2+ -treatment protocols in rats [32,40] and indicate that after 24 h the highest doses of Hg 2+ caused severe acute renal failure and the lower doses caused less severe impairment of renal function.

Figure 3 shows images of comparable fields from the perfused left kidneys from the same animals from which panels A-C (non-perfused right kidney) were obtained. As expected, the perfused left kidneys all contained fewer blood cells than the corresponding non-perfused right kidneys. However, in all other respects, the perfused kidneys showed the same morphologic properties as the corresponding non-perfused kidneys; the samples from the Hg 2+ -treated animals showed evidence of dose-dependent necrosis in the proximal tubule. Furthermore, perfused (3D) and non-perfused (3A) kidneys of control animals showed no signs of cell death or damage, indicating that the perfusion technique did not cause any morphologic changes in the proximal tubules. Images taken with a 10× objective (Fig. 3H) show that at a dose of 0.875 mg/kg Hg 2+ the boundary between the cortex and outer medulla is more distinct than at the 3.5 mg/kg Hg 2+ dose (Fig. 3I). In images taken with a 40× objective, samples from the 0.875 mg/kg Hg 2+ -treated animals (Fig. 3B, 3E) showed some swelling of epithelial cells in the proximal tubule coupled with a modest loss of nuclei; however, the epithelial cells appeared to remain attached to the basement membrane. In samples from the 3.5 mg/kg Hg 2+ -treated animals (Fig. 3C, 3F), the epithelial cells throughout the proximal tubule were swollen and appeared to slough off from the basement membrane. Furthermore, the 3.5 mg/kg Hg 2+ -treated animals demonstrated a loss of nuclei with widespread degeneration and necrosis. In general, the glomeruli and distal tubules from the Hg 2+ -treated animals showed normal morphology. These effects of Hg 2+ are similar to those reported in other studies [32,34,41] and indicate that Hg 2+ caused extensive necrosis of the proximal tubule but had little effect on other segments of the nephron.

Figure 1. Effects of Hg 2+ on urine volume (A), urinary protein (B) and urinary creatinine (C). Animals received single, ip injections of varying Hg 2+ doses (0.437, 0.875, 1.75 and 3.5 mg/kg) and 24 h urine samples were collected and analyzed for creatinine and protein as described under Methods. The values for each data point represent the mean ± SE for a total n ≥ 6 for each treatment group. An asterisk (*) indicates significant difference from control (one-way ANOVA, P < 0.05, post-hoc Tukey's test).

Figure 2. Effects of Hg 2+ on serum creatinine (A) and BUN (B). Animals received single, ip injections of varying Hg 2+ doses (0.9% NaCl vehicle control, 0.437, 0.875, 1.75 and 3.5 mg/kg) and 24 h blood samples were collected and analyzed for BUN and serum creatinine. The values for each data point represent the mean ± SE for a total n ≥ 6 for each treatment group. An asterisk (*) indicates significant difference from control (one-way ANOVA, P < 0.05, post-hoc Tukey's test).
Effects of Hg 2+ on ethidium-labeling in the renal cortex
Cryosections of the ethidium-perfused kidneys from control and Hg 2+ -treated animals were fixed, permeabilized and stained with DAPI, as described in the Methods section. Figure 4 shows representative images that were captured from random fields of the renal cortex of control and Hg 2+ -treated animals. Figure 4A-C show the sample fields under phase contrast imaging, Figure 4D-F show the DAPI labeling and Figure 4G-I show the ethidium labeling. As expected, there was almost a complete absence of ethidium labeling in the nuclei of the samples from control animals (Fig. 4G). However, animals treated with 3.5 mg/kg Hg 2+ showed a significant increase in the number of ethidium-labeled cells in the proximal tubule (Fig. 4H). Low-power (10× objective) images of 3.5 mg/kg Hg 2+ -treated animals (Fig. 4I) show that the ethidium labeling was confined to the renal cortex and outer medulla but was absent in the inner medulla (arrow). These results are in agreement with previous studies showing that the proximal tubules are the primary site of Hg 2+ toxicity [30,31,35].
Dose-response relationship for the effects of Hg 2+ on proximal tubule epithelial cell viability
To test the sensitivity of the in situ viability assay, animals were treated with varying doses of Hg 2+ (0.4375, 0.875, 1.75 and 3.5 mg/kg). The results are summarized in Figure 5. Figure 5A-C shows the sample fields under phase contrast imaging, Figure 5D-F shows the DAPI labeling and Figure 5G-I shows the ethidium labeling. There was almost a complete absence of ethidium labeling in the nuclei of the samples from control animals (Fig. 5G). However, at Hg 2+ doses of 0.875 (Fig. 5H) and 1.75 mg/kg (Fig. 5I) there was an increase in the number of ethidium-labeled nuclei of proximal tubule epithelial cells in the renal cortex compared to images from control animals (Fig. 5G). The dose-response relationship of Hg 2+ -induced renal cell death is shown graphically in Figure 6. Note that the lowest dose of 0.4375 mg/kg Hg 2+ did not cause a significant increase in the number of ethidium-labeled cells, but the higher doses (0.875, 1.75 and 3.5 mg/kg) showed significant dose-dependent increases in the percent of ethidium-labeled cells. This dose-response of Hg 2+ -induced renal necrosis is consistent with the morphological observations described in Fig. 3 and is similar to those reported in other studies using non-quantitative or semi-quantitative morphological analyses [31,33,35,42].
Effects of gentamicin and Cd 2+ on renal cell viability
In order to further evaluate the utility of this technique for assessing pathological changes in the proximal tubule, additional animals were treated with the aminoglycoside antibiotic gentamicin (n = 2) at a dose that has previously been shown to cause necrosis of the proximal tubule [36]. Animals treated with gentamicin for 24 h excreted larger volumes of urine (mean values of 34.1 and 140.9 ml/kg/24 h for control and gentamicin-treated, respectively) and displayed elevations in urinary protein content (mean values of 34.8 and 172.9 mg/kg/24 h for control and gentamicin-treated, respectively). These findings are similar to those reported in the literature [43] and are characteristic of gentamicin nephrotoxicity. In addition, other animals were treated with the nephrotoxic metal Cd 2+ , using a sub-chronic dosing protocol (0.6 mg/kg, sc, 5 days per week for 6 weeks) that has previously been shown to cause dysfunction of the proximal tubule without causing tubular necrosis [37][38][39]. Urine samples from 6-week Cd 2+ -treated animals showed significant proteinuria (38.6 ± 4.1 and 50.5 ± 3.8 mg/kg/24 h for control and Cd 2+ -treated (n = 12), respectively) and decreased body weight gain. However, there were no changes in urine volume or creatinine excretion (data not shown). Figure 7 shows the results of the renal cell necrosis assay in samples from animals that had been treated with either gentamicin (Fig. 7J) or Cd 2+ (Fig. 7L). Animals treated with the proximal tubule-specific nephrotoxicant gentamicin demonstrated a significant increase in the percent of ethidium homodimer labeled cells (Fig. 7J). In the samples from the gentamicin-treated animals, 21 ± 9% of cells were labeled with ethidium as opposed to < 1.0% in the control samples (n = 3 fields from 2 control and 2 gentamicin-treated animals). None of the samples from the Cd 2+ -treated animals exhibited signs of necrosis in the renal cortex (Fig. 7L). This is in agreement with previous studies using the identical treatment protocol [38].
Discussion
In examining the nephrotoxic effects of xenobiotics, one of the most important endpoints that investigators must evaluate is cell death. Unfortunately, the tasks of identifying and quantifying dead or dying cells in a complex organ such as the kidney are not as easy as they may appear to be at first glance. In the current report, we describe a relatively simple procedure for identifying and quantifying necrotic cells in the proximal tubule that involves the perfusion of the intact kidney with ethidium homodimer and the subsequent observation of fluorescence of cryosections from the renal cortex. Furthermore, we show that this technique can be used to detect changes in cell viability resulting from other site-specific nephrotoxicants such as gentamicin. This would indicate the possible utility of employing this in situ viability assay for a wide variety of potential nephrotoxicants.
This method offers a couple of significant advantages over many of the other techniques that have been used to assess the cell viability/cell death in the intact kidney. First, while the method does require some surgical preparation of the animal, it is relatively easy to perform and it does not require the use of elaborate or expensive equipment. Secondly, the labeling of cell nuclei with ethidium homodimer provides a very clear-cut end point for identifying necrotic cells. A random analysis of the pixel density indicated that the intensity of light emission of almost all of the ethidium homodimer labeled cells was at least 10-100 times the intensity of background labeling (data not shown). There were few, if any, cells that exhibited intermediate levels of labeling. This makes it easy for the observer to interpret the results; the nuclei are either labeled or they aren't and the delineation is very clear cut. Also, since the delineation between live and necrotic cells is so clear cut, it should be possible to interface this technique for use with quantitative image analysis and statistical programs.
The renal perfusion technique that was used in these studies is similar to renal perfusion procedures that have been used by others to harvest renal epithelial cells for cell culture or to infuse markers for the evaluation of renal function [44-46]. In the procedure described in this study, the ureter is sectioned and the perfusion pressure (50-100 mmHg) is below that of reported rat systolic blood pressure [47]. This low-pressure tubular perfusion procedure has little effect on general renal morphology (see Fig. 3). Since the perfusion procedure washes most of the blood cells from the kidney, the morphology and cyto-architecture of the proximal tubules in the perfused kidneys are actually clearer and better preserved than in non-perfused kidneys. Since the morphology of the kidney is so well preserved, it should be possible to adapt this procedure to include fixatives and labels or markers of specific molecules and physiologic functions of interest.

Figure 3. The effects of Hg 2+ on the general morphology of the outer renal cortex. Rats were treated with Hg 2+ (0.9% NaCl vehicle control, 0.875 and 3.5 mg/kg, i.p.). 24 h later, the kidneys were removed and processed for H & E staining. Panels A-C are representative sections from the non-perfused right kidney and panels D-...
One caveat to keep in mind regarding this method is that it is based on the loss of cell membrane integrity, which is a relatively late event in the process of cell death [48]. It does not assess the early stages of cellular injury. This is a significant issue because an increasing volume of evidence indicates that some nephrotoxic substances can have profound effects on renal function, without causing necrosis. One example of such a substance is Cd 2+ . The results of our studies utilizing this new technique confirm results of previous studies from this and other laboratories [37][38][39] indicating that subchronic exposure to Cd 2+ for 6 weeks causes proximal tubule dysfunction without causing significant necrosis of tubular epithelial cells. It should be noted that some studies have shown that this Cd 2+ treatment protocol causes a limited level of apoptosis in the proximal tubule [37,39]. While we did not specifically address the issue of Cd 2+ -induced apoptosis in the present study, we feel that this method could eventually be adapted for this purpose. In this procedure, ethidium homodimer, which is infused through the intact kidney, labels necrotic cells preferentially. However the DAPI staining occurs after the cells are permeabilized with methanol and labels all necrotic, apoptotic and unaffected cell nuclei. Apoptotic nuclei appear to have a distinctive fragmented or diffuse appearance that distinguishes them from necrotic or normal cells [5,49,50]. Therefore through close examination of nuclei labeled with DAPI [51], it may be possible to determine the ratio of necrotic to apoptotic cells using this method.

Figure 4. Effects of Hg 2+ on ethidium-labeling in the renal cortex. Animals were treated with Hg 2+ (3.5 mg/kg, ip) for 24 h and the left kidneys were infused with ethidium homodimer. Cryosections of the kidneys were then fixed, permeabilized and labeled with DAPI as described in the Methods section. Panels A, D, G are high-powered (HP) images from control animals and panels B, E, H are from 3.5 mg/kg Hg 2+ -treated animals. Panels C, F, I represent low-powered (LP) images from 3.5 mg/kg Hg 2+ -treated animals. Panels A-C show phase-contrast images of the same fields in D-F and G-I, respectively. Panels D-F show total nuclei labeled in each field by DAPI, G-I show labeled nuclei by ethidium homodimer. The white arrow in panel I indicates the boundary between inner medulla and the renal cortex. The scale bar in the top left image represents 100 μm and the scale bar in the top right image represents 500 μm.
It should also be noted that while the present studies focused on the use of ethidium homodimer to evaluate cell death in the proximal tubule, the technique has the potential to identify cellular necrosis in other segments of the nephron.
Another notable limitation of this technique is that it essentially provides a "snapshot" image of cellular necrosis in the kidney at a single point in time. Obtaining data for multiple time points requires the use of multiple animals. This is in contrast to the more sophisticated multiple photon imaging techniques that allow for real time imaging of the kidney of a single animal over time [20][21][22]. Despite this shortcoming, the ethidium homodimer labeling technique is relatively easy to perform, inexpensive and should provide a useful means for quantifying necrotic cell death in the intact kidney.
Conclusion
In this study, we describe a novel renal assay to detect necrosis that involves the infusion of ethidium homodimer into the intact rat kidney. Results of this study indicate that this simple and sensitive perfusion technique can be used to evaluate necrosis in the proximal tubule with the three-dimensional cyto-architecture intact.
Animals, Hg 2+ , Cd 2+ and gentamicin treatment protocols
Adult male Sprague-Dawley rats were purchased from Harlan (Indianapolis, IN) and were randomly assigned to the Hg 2+ -treated or control (sterile 0.9% NaCl) groups (n ≥ 6 for all treatment groups). Rats were given a single intraperitoneal (ip) injection of Hg 2+ at doses of 0.4375, 0.875, 1.75 or 3.5 mg/kg in sterile 0.9% NaCl solution. The varying Hg 2+ doses correlate to 1.6 μmol, 3.2 μmol, 6.5 μmol and 12.9 μmol Hg 2+ /kg body wt, respectively. Control animals received a single ip injection of the 0.9% NaCl vehicle alone. The total volume of fluid injected for Hg 2+ and control animals was less than 1 ml. After animals were dosed with either Hg 2+ or vehicle NaCl, they were placed in individual metabolic cages, and 24 h urine samples were collected. To further validate the utility of this in situ viability assay, we examined changes in proximal tubule cell viability in male Sprague-Dawley rats after 6-week sub-chronic Cd 2+ (n ≥ 6) exposure and acute gentamicin dosing (n = 2). Briefly, animals were treated by subcutaneous injection (sc) of CdCl 2 at a Cd 2+ dose of 0.6 mg/kg in sterile 0.9% NaCl, 5 days per week, for 6 weeks. This dose correlates to 5 μmol Cd 2+ /kg body wt and has been shown to cause proximal tubule dysfunction without causing significant necrosis of the proximal tubule epithelium [38]. Animals in the Cd 2+ -control group received daily, equal volume, sc injections of 0.9% NaCl alone. Gentamicin-treated animals were given daily injections of gentamicin at 100 mg/kg (71.9 μmol/kg) for eight consecutive days. Gentamicin control animals were given ip injections for 8 days of vehicle alone (physiological saline solution, PSS). This dosing protocol has been shown to cause proximal tubule damage in rats [36].

Figure 6. Dose-dependent increase in the percent ethidium-labeled proximal tubule epithelial cells in the renal cortex following treatment with Hg 2+ . Digital images from three fields per microscope slide with a total of four slides from each animal were quantified for necrosis analysis. Each animal represents one n value or replicate in each treatment group. The values for each data point represent the mean ± SE for a total of n ≥ 6 for each treatment group. Asterisk (*) indicates significant differences from the 0.9% NaCl vehicle control treatment group (one-way ANOVA, p < 0.001, post-hoc Tukey's Test).
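The mass-to-molar dose conversions quoted in the treatment protocols above can be checked with a short script. This is only a sketch of the arithmetic; it assumes that the mg/kg values for mercury refer to HgCl2 (MW ≈ 271.5 g/mol) and that the Cd 2+ dose refers to the elemental ion (MW ≈ 112.4 g/mol), since those molecular weights reproduce the μmol/kg figures given above to within rounding.

```python
# Check mg/kg -> umol/kg conversions for the dosing protocols described above.
MW_HGCL2 = 271.50      # g/mol, assumed basis of the Hg doses
MW_CD = 112.41         # g/mol, elemental Cd2+

for dose in (0.4375, 0.875, 1.75, 3.5):                         # mg/kg
    # prints ~1.61, 3.22, 6.45 and 12.89, matching the quoted 1.6, 3.2, 6.5 and 12.9
    print(f"{dose} mg/kg Hg -> {dose / MW_HGCL2 * 1000:.2f} umol/kg")

print(f"0.6 mg/kg Cd2+ -> {0.6 / MW_CD * 1000:.2f} umol/kg")    # ~5 umol/kg
```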
Urinalysis
Following each of the treatment protocols the animals were placed in individual metabolic cages and 24 h urine samples were collected. The urine volumes were recorded and the urine samples were aliquoted and stored at -80°C until analyzed for protein and creatinine content. Urinary creatinine was determined by a modified form of the colorimetric method of Shoucri and Pouliot [52]. Briefly, 3 ml of a working reagent that consisted of 0.21 M NaOH and 2.4 mM picric acid was added to 20 μl urine samples. After 10 min, the absorbance at a wavelength of 505 nm was measured. Urine creatinine values were determined based on the resulting absorbances of known creatinine standards and were expressed as mg/kg/24 h. Urinary protein was determined by the method of Bradford [53] using the Coomassie® Plus Protein Assay kit (Pierce #23236).

Figure 7. The effects of gentamicin and Cd 2+ on cell membrane integrity. Animals were treated with either Cd 2+ (0.6 mg/kg, sc, 5 days a week for 6 weeks) or gentamicin (100 mg/kg, ip, per day for 8 days) and the left kidneys were infused with ethidium homodimer. Cryosections of the kidneys were then fixed, permeabilized and labeled with DAPI as described in the Methods section. Panels A-D are phase contrast images corresponding to DAPI-labeled panels E-H and ethidium homodimer-labeled nuclei in panels I-L. Gentamicin treatment resulted in increased ethidium homodimer labeling (J) in the renal cortex as compared to control (I). No differences in ethidium homodimer labeling were detected in 6-week Cd 2+ -treated images (L) compared to control (K). The scale bar in the top left image represents 100 μm.
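The creatinine calculation described above amounts to reading concentrations off a standard curve and normalizing to urine volume and body weight. The sketch below is illustrative only: the standard concentrations, absorbances and sample values are invented placeholders, not data from this study.

```python
import numpy as np

# Placeholder standard curve for the 505-nm creatinine assay.
std_conc_mg_dl = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # known standards
std_abs_505nm  = np.array([0.02, 0.11, 0.21, 0.40, 0.79])    # measured absorbances

slope, intercept = np.polyfit(std_abs_505nm, std_conc_mg_dl, 1)

def urinary_creatinine_mg_kg_24h(absorbance, urine_ml_24h, body_wt_kg):
    """Convert a sample absorbance to creatinine excretion in mg/kg/24 h."""
    conc_mg_dl = slope * absorbance + intercept
    total_mg = conc_mg_dl * urine_ml_24h / 100.0   # 1 dl = 100 ml
    return total_mg / body_wt_kg

# Hypothetical sample: A505 = 0.35, 12 ml of urine collected in 24 h, 0.30 kg rat.
print(round(urinary_creatinine_mg_kg_24h(0.35, 12.0, 0.30), 1), "mg/kg/24 h")
```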
Determination of blood urea nitrogen (BUN) and plasma creatinine
Serum creatinine was assayed using the same method as described above for urinary creatinine. BUN was assayed following the protocol of Talke and Schubert [54].
In situ evaluation of cell viability
The animals were anesthetized [ketamine/xylazine, 67/7 mg/kg]. The abdominal cavity was opened and the left kidney was isolated from surrounding connective tissue. The aorta was freed from the surrounding connective tissue and from the adjacent vena cava. A 4.0 silk ligature (Roboz Surgical, Gaithersburg, MD) was positioned around the aorta as far caudally as possible (approximately 2 cm below the left renal artery) and tied immediately. A second ligature was positioned around the aorta just above the left renal artery and tied just prior to the insertion of the catheter. In order to allow for an isolated perfusion of the left kidney, a third ligature was tied around the right renal artery and vein to prevent perfusion of the right kidney, just prior to insertion of the perfusion catheter. The perfusion catheter was fashioned from a 23 gauge stainless steel needle and connected to polyethylene (PE 50) tubing filled with a solution of 5 μM ethidium homodimer in PSS which contained (mM): 115.0 NaCl, 5.5 glucose, 16.0 NaHCO 3 , 1.0 MgCl 2 , 0.2 Na 2 HPO 4 , 0.8 NaH 2 PO 4 , 1.0 CaCl 2 , and 5.0 KCl. An incision was made approximately half way through the aorta just above the lower ligature and the catheter was inserted into the aorta and advanced to a position just below the left renal artery. The catheter was tied and secured in place with a new ligature. At this time a blood sample was taken from the inferior vena cava. Once the catheter was secured, the ureter was sectioned and ethidium homodimer was perfused through the kidney at a flow rate of 1 ml/min for 5 min, then increased to 3 ml/min for 5 min. The pressure of the perfusate was 50 mmHg at a flow rate of 1 ml/min and 100 mmHg at 3 ml/min, as measured in preliminary experiments with a low pressure transducer (Grass P10EZ) attached to a Model 7 Grass polygraph (Grass Technologies/Astro-Med, Inc., West Warwick, R.I.). The left kidney was then perfused with PSS at 3 ml/min for 10 min to wash out any residual or unbound ethidium homodimer. The perfused left kidney was removed, decapsulized, cut through the transverse plane into three sections and immediately frozen separately at -80°C for later cryosectioning. The non-perfused right kidney was also removed and processed for histological and biochemical analyses. Animals were then euthanized by exsanguination and pneumothorax while under anesthesia.
The frozen left kidney samples were cryosectioned at a thickness of 5 μm. The sections were then mounted on glass slides, fixed and permeabilized in -20°C methanol, and stained with 0.3 μM DAPI to label all nuclei. The labeled sections were then covered with Aqua Polymount (Polysciences, Warrington, PA) and viewed with a Nikon Eclipse 400 fluorescence microscope within 24 h of DAPI staining. Fields were viewed under both phase contrast and fluorescent illumination using a 40× high-power objective or a 10× low-power objective. The number of total nuclei and ethidium-labeled nuclei within the field of view of a 40× objective were determined from random fields within the renal cortex. Each field contained approximately 300 cells in an area of 9.1 × 10 4 μm 2 . Digital images from three fields per slide with a total of four slides from each animal were quantified for viability analysis. Digital images were captured with a Spot digital camera (Diagnostic Instruments, Sterling Heights, MI) using automated exposure times and gain settings for the brightfield images. The dark-field fluorescence images of ethidium homodimer (excitation = 528 nm; emission = 617 nm) and DAPI (excitation = 358 nm; emission = 461 nm) stained slides were captured at a gain setting of 4 and 1 s exposure for green, red and blue. The digital images were processed using the Image-Pro Plus imaging analysis software package (Media Cybernetics, Silver Spring, MD). In performing these studies, we found that once the tissue is frozen in embedding medium and stored at -80°C, the ethidium labeling is very stable with no loss of fluorescence over at least 2 months. However, once the tissue sections are cut with a cryotome, washed, stained with DAPI and then stored at room temperature, there was a subsequent loss of fluorescence over 1-4 weeks. Accordingly, all samples were examined within 48 hours following sectioning and DAPI labeling.
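The per-field quantification described above reduces to counting bright nuclei in the DAPI and ethidium channels and expressing the latter as a percentage of the former. The study used Image-Pro Plus for this; the sketch below is only a generic illustration of the same counting logic, with arbitrary threshold values and synthetic images standing in for real fields.

```python
import numpy as np
from scipy import ndimage

def count_nuclei(channel, threshold):
    """Count connected bright objects (nuclei) above an intensity threshold."""
    labeled, n_objects = ndimage.label(channel > threshold)
    return n_objects

def percent_ethidium_labeled(dapi_img, ethidium_img, thr=50):
    total = count_nuclei(dapi_img, thr)            # all nuclei (DAPI)
    necrotic = count_nuclei(ethidium_img, thr)     # membrane-compromised nuclei
    return 100.0 * necrotic / max(total, 1)

# Synthetic 8-bit images standing in for one 40x field (~300 nuclei per field).
rng = np.random.default_rng(0)
dapi = (rng.random((512, 512)) > 0.9988).astype(np.uint8) * 255
ethidium = np.zeros_like(dapi)
ethidium[dapi > 0] = (rng.random((dapi > 0).sum()) < 0.1) * 255   # ~10% "labeled"

print(f"{percent_ethidium_labeled(dapi, ethidium):.1f}% ethidium-labeled nuclei")
```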
Statistical Analysis
Data were analyzed using the SigmaStat ® statistical program (SPSS Inc., Chicago, IL). Statistical differences were determined using one-way or two-way analysis of variance (ANOVA) where appropriate. If significant differences between sample means were detected (p < 0.05), a post-hoc Tukey-Kramer test was performed to ascertain which mean values were significantly different from control values.
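The statistics described above (one-way ANOVA followed by a Tukey-type post-hoc comparison) were run in SigmaStat; an equivalent open-source sketch, using invented placeholder numbers rather than the study's data, would look like this:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder percent-necrosis values for three treatment groups.
groups = {
    "control":  [0.3, 0.5, 0.2, 0.4, 0.6, 0.3],
    "Hg 0.875": [4.1, 5.3, 3.8, 6.0, 4.7, 5.5],
    "Hg 3.5":   [38.0, 45.2, 41.1, 36.7, 43.9, 40.5],
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2e}")

if p_value < 0.05:   # post-hoc comparisons only if the overall ANOVA is significant
    values = np.concatenate(list(groups.values()))
    labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```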
First national-scale evaluation of temephos resistance in Aedes aegypti in Peru
Background The development of resistance against insecticides in Aedes aegypti can lead to operational failures in control programs. Knowledge of the spatial and temporal trends of this resistance is needed to drive effective monitoring campaigns, which in turn provide data on which vector control decision-making should be based. Methods Third-stage larvae (L3) from the F1 and F2 generations of 39 Peruvian field populations of Ae. aegypti mosquitoes from established laboratory colonies were evaluated for resistance against the organophosphate insecticide temephos. The 39 populations were originally established from eggs collected in the field with ovitraps in eight departments of Peru during 2018 and 2019. Dose–response bioassays, at 11 concentrations of the insecticide, were performed following WHO recommendations. Results Of the 39 field populations of Ae. aegypti tested for resistance to temephos , 11 showed high levels of resistance (resistance ratio [RR] > 10), 16 showed moderate levels of resistance (defined as RR values between 5 and 10) and only 12 were susceptible (RR < 5). The results segregated the study populations into two geographic groups. Most of the populations in the first geographic group, the coastal region, were resistant to temephos, with three populations (AG, CR and LO) showing RR values > 20 (AG 21.5, CR 23.1, LO 39.4). The populations in the second geographic group, the Amazon jungle and the high jungle, showed moderate levels of resistance, with values ranging between 5.1 (JN) and 7.1 (PU). The exception in this geographic group was the population from PM, which showed a RR value of 28.8 to this insecticide. Conclusions The results of this study demonstrate that Ae. aegypti populations in Peru present different resistance intensities to temephos, 3 years after temephos use was discontinued. Resistance to this larvicide should continue to be monitored because it is possible that resistance to temephos could decrease in the absence of routine selection pressures. Graphical Abstract Supplementary Information The online version contains supplementary material available at 10.1186/s13071-022-05310-x.
which has an arid temperate climate with an annual rainfall of only 2 mm [6,7]. These adaptations have led to a wide dispersal of the vector, which in turn has led to the presence of arboviruses that often follow the same pattern of dispersal.
It is estimated that about 2.5 billion people, representing 40% of the human population worldwide, live in areas at risk of dengue transmission [8] and that 390 million cases of dengue occur per year in tropical and subtropical areas [3]. In addition, in the last 5 years, Ae. aegypti has been responsible for the spread of chikungunya and Zika to regions of the Americas, placing a significant burden on healthcare systems and also causing social and economic disruption [9]. While Ae. albopictus is widely distributed in the Americas (present in 21 of 44 countries), it has not been detected in Peru. However, Peru shares borders with countries reporting widespread distribution of Ae. albopictus, such as Brazil, Colombia, Bolivia and Ecuador (where Ae. albopictus was first detected in 2017) [10].
In Peru, the vector of dengue, chikungunya and Zika is Ae. aegypti. Aedes aegypti was eradicated locally in 1958, but it subsequently recolonized the country in 1984 [11]. The first outbreak of dengue occurred 6 years later, attributed to dengue virus serotype 1 (DENV-1) [12]. Low-incidence outbreaks occurred thereafter up to 2001, when a major outbreak occurred, with 23,304 cases. This outbreak confirmed the circulation of the four serotypes of DENV [13]. Significant outbreaks occurred in 2017 and 2020, with 68,290 and 52,826 cases, respectively [14]. The first reported cases of chikungunya and Zika occurred in 2015 and 2016, respectively [15,16].
In Peru, Aedes-borne arboviruses are present in three ecological regions: the coast, the Amazon jungle and the Andes Mountains [17,18]. The latter region (Andes Mountains) presents such a diversity of altitudes that it is differentiated into two areas: the Andean highlands (> 2300 m a.s.l.) and the high jungle (400-1400 m a.s.l.; located on the eastern flank of the Andes) [19]. Aedes-borne arboviruses only affect areas of lower altitude. Aedes aegypti is widely distributed in 21 of Peru's 24 departments and the constitutional province of Callao, and has been identified in 527 districts [20], where approximately 22 million people live, putting 70.4% of the Peruvian population at risk of contracting arboviruses transmitted by this vector.
The main dengue control strategy implemented in Peru until 2016 was focal treatment of larval habitats with the organophosphate (OP) insecticide temephos due to its easy dosage, application and acceptability by the community. For the same reasons, the adulticides used in the 1990s were the OPs fenitrothion and malathion [21]. Since the beginning of this century, pyrethroid (PY) insecticides have also been used (cyfluthrin, deltamethrin, alpha-cypermethrin and cypermethrin) [17,22], applied using thermal and cold fog equipment that can be manually carried or mounted on trucks. However, in 2015, Pinto et al. [23] detected resistance against PYs in field-caught Ae. aegypti populations in association with knockdown resistance (kdr) mutations (Phe1534Cys, Val1016Ile). This led the Peruvian Ministry of Health (MINSA) to implement a control strategy in which PYs were rotated with the OP malathion. For larval control, the OP temephos was switched with the insect growth regulator (IGR) pyriproxyfen because temephos shares the same mode of action as the OP malathion. This switch away from using temephos, however, was made without any knowledge of the susceptibility status of Ae. aegypti populations to this insecticide.
Surveillance of insecticide resistance in arthropod vectors in Peru is an activity that falls under the Instituto Nacional de Salud (INS; National Institute of Health) and Regional Reference Laboratories (LRRs) which, in the context of a decentralized healthcare system, requires coordinated work. However, many of the LRRs do not have entomology laboratories or insectaries and thus do not have the facilities or knowledge to routinely perform the various tasks/responsibilities placed on them [22,24]. This situation results in a weakening of the vector surveillance and control programs, despite ongoing increases in the numbers of cases of Aedes-borne arboviruses. The continuous use of insecticides results in selective pressures that in turn drive physiological and/or behavioral adaptation, a phenomenon known as resistance [25]. Insecticide resistance can lead to operational failures, ultimately requiring rotations of insecticides with different modes of action or mosaic treatments to manage insecticide resistance [26]. Ideally, these strategies should be carried out preemptively to preserve insecticide efficacy, but also reactively to mitigate or reverse resistance [25].
The aim of the present study was to determine the levels of resistance to temephos in 39 Peruvian field populations of Ae. aegypti from the three ecological regions of Peru after having used temephos for > 25 years, following an interruption in its use of 3 years.
Sampling and study area
The mosquito population tested comprised the F1 or F2 generations of Ae. aegypti from colonies maintained at INS that were established from eggs collected in 39 localities in the Peruvian departments of Tumbes, Piura, La Libertad, San Martín, Loreto, Ucayali, Junin and Madre de Dios during April 2018 and January 2019 (Fig. 1). The colonies were obtained from eggs collected with ovitraps, which were distributed every 200 linear meters covering the urban area of the locality, according to the parameters established by MINSA [27]. The ovitraps were installed in intra-and peri-domestic areas and contained a strip of paper towel as oviposition substrate [27,28] and 10% hay infusion as attractant [29]. The localities sampled are located in three ecological regions showing distinct weather patterns [6], variations in the incidence of dengue [17,18] and Ae. aegypti populations with differing insecticide resistance profiles [23].
Mosquito collection and laboratory rearing
The paper strips containing eggs were left to dry for 1 week and then soaked in water to hatch the F0 generation [30]. The F1 and F2 generations were subsequently established in the insectary of the Laboratorio de Referencia National de Entomología (LRNE), INS, Lima, Peru (Table 1). Humidity and temperature conditions in the insectary were maintained at 70 ± 10% relative humidity and 26 ± 2 °C, respectively. Aedes aegypti of the susceptible Rockefeller reference strain were used as a control for the resistance tests [31].
Larvicide susceptibility testing
Dose-response susceptibility tests were performed with temephos following WHO recommendations [32]. The larvae were exposed to a wide range of concentrations of the insecticide with the aim of evaluating larvicidal activity and thereby determining the 50% and 95% lethal concentration values (LC 50 and LC 95 ) for each study population. Eleven concentrations were evaluated, with four replicates per concentration and 20 third-stage larvae (L3) in 100 ml of insecticide solution per replicate. Insecticide solutions were prepared with ethanol solvent and temephos active ingredient (Chem Service, West Chester, PA, USA), using a concentration range of 0.004 to 0.324 mg/ml for the field populations, and 0.002 to 0.012 mg/ml for the control Rockefeller strain. Simultaneously, a control group with four replicates exposed only to 600 µl of ethanol solvent was evaluated [32]. Each test was carried out on three different days to ensure the reproducibility of the method and consistency of the results [32,33].
Data analysis
The values of LC 50 and LC 95 were calculated from a log dosage-probit mortality regression line using probit analysis (Polo-PC statistics package [34]) for each population. Resistance ratios (RRs; i.e. LC of field population/LC of susceptible population) were also calculated to define the intensity of resistance in the field populations. Specifically, Eq. 1 was used to calculate the RR 50 (where i = 50) and the RR 95 (where i = 95), as shown:

RR i = LC i of the field population / LC i of the susceptible Rockefeller strain    (1)

The population was considered to be susceptible when the RR < 5; when the RR was between 5 and 10, the population was considered to have moderate resistance; and when the RR ≥ 10, the population was considered highly resistant.
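The study fitted the log dose-probit mortality lines with the Polo-PC package; the sketch below is not that software, but it shows the same kind of calculation using a binomial GLM with a probit link, applied to invented placeholder mortality counts. The LC 50, LC 95 and RR values it prints are illustrative only.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def lc_values(conc, n_dead, n_total):
    """Fit probit(mortality) = b0 + b1*log10(conc) and return (LC50, LC95)."""
    X = sm.add_constant(np.log10(conc))
    y = np.column_stack([n_dead, np.asarray(n_total) - np.asarray(n_dead)])
    fit = sm.GLM(y, X, family=sm.families.Binomial(sm.families.links.Probit())).fit()
    b0, b1 = fit.params
    lc50 = 10 ** (-b0 / b1)
    lc95 = 10 ** ((norm.ppf(0.95) - b0) / b1)
    return lc50, lc95

# Hypothetical bioassay counts: 4 replicates x 20 L3 larvae = 80 larvae per concentration.
conc = np.array([0.004, 0.008, 0.016, 0.032, 0.064, 0.128])   # placeholder temephos concentrations
field_dead = np.array([2, 8, 22, 45, 68, 78])
rock_dead  = np.array([12, 35, 62, 76, 80, 80])
n_total = np.full(conc.shape, 80)

lc50_f, lc95_f = lc_values(conc, field_dead, n_total)
lc50_r, lc95_r = lc_values(conc, rock_dead, n_total)
print(f"RR50 = {lc50_f / lc50_r:.1f}, RR95 = {lc95_f / lc95_r:.1f}")
```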
Results
Third-stage larvae from the F1 and F2 generations of 39 Peruvian populations of Ae. aegypti mosquitoes from localities with different incidences of arboviruses and located in three ecological regions of Peru were tested. A total of 37,440 L3 were tested across all bioassays. Table 2 and Fig. 2 show the values of the LC 50 and LC 95 , the resistance ratios (RR 50 and RR 95 ) and the slope of the probit regression lines for the insecticide temephos in the 39 populations of Ae. aegypti. The differences between the RR 95 of all populations were analyzed according to the criteria of Mazzarri and Georghiou [35] and Sá et al. [33]. The results showed that the three regions could be grouped into two groups: coastal and jungle (the latter including the Amazon and high jungles). Most of the populations in the coastal group showed high levels of resistance to temephos (RR > 10), with three populations showing values > 20 (AG 21.5, CR 23.1, LO 39.4; for abbreviations of localities/populations, see Table 1 and Fig. 1). Some populations in the coastal group (PG, MI, TA, BE, CS, ES and LA) showed moderate levels of resistance (5 ≤ RR < 10) and only four populations (JO, CH, CA and VI) were considered to be susceptible (RR < 5). In the jungle group, most of the populations showed moderate levels of resistance (5 ≤ RR < 10), with RR values ranging from 5.1 in JN to 7.1 in PU, while some populations were considered to be susceptible (RR < 5), with values ranging from 2.2 in MO to 4.9 in YA and PA; the notable exception in the jungle group was the PM population, which was considered to be highly resistant (RR = 28.8).
However, the RRs of the populations were heterogeneous, even within the same department. In Tumbes, for example, most populations were highly resistant to temephos (RR > 10), except for the PG population (RR = 8.9) which showed moderate resistance to this insecticide. Piura, in comparison, had two populations (MA and LO) that exhibited high RR values (RR > 19), but also had two susceptible populations with low values (RR < 5). In La Libertad, the RRs varied between 2.1 in CA and 11.3 in FL, showing that the mosquito populations in this department also showed heterogeneous resistance RRs; however, there were also two populations in which the lowest RRs were detected (RR < 2.5). The departments of San Martin, Junín, Loreto and Ucayali were more homogeneous, with RR values ranging from 2. were considered to be more susceptible (RR < 5) in both the RR 50 and RR 95 . In general, jungle localities were more homogeneous.
The slope values of the probit regression lines of the field populations were lower than those obtained with the Rockefeller lineage, except in the CA population (slope = 7.18) (Table 2; Fig. 3). This result confirms the heterogeneity of temephos resistance in the field strains in relation to the reference strain.
Discussion
As part of a national strategic plan to eliminate dengue, chikungunya and Zika, a national survey to ascertain the susceptibility of Ae. aegypti to temephos was undertaken. The Peruvian INS, in coordination with Regional Health Directorates, surveyed 39 dengue endemic localities from April 2018 to January 2019. Overall, Peruvian populations of Ae. aegypti showed variable levels of resistance to temephos. Based on the results, these populations were clustered into two groups: (i) coastal populations with moderate to high levels of resistance to temephos, and (ii) jungle populations, which showed a moderate level of resistance to temephos, and some susceptibility, except for the Puerto Maldonado population which showed a very high level of resistance. Interestingly, from 2015 to 2017, the dengue cases reported in the coastal departments accounted for between 54 and 86% of the total number of dengue cases reported in Peru, while from 2018 to 2021, dengue cases reported in the jungle regions accounted for between 55 and 87% of the total number of dengue cases reported [36,37] (Additional file 1: Table S1). Higher levels of resistance to temephos were detected in coastal populations than in jungle populations, despite the northern coast having been recolonized by Ae. aegypti 10 years later than the Peruvian Amazon [38,39]. The reasons underlying this difference in susceptibility are likely diverse. One possible explanation is based on the notion that Ae. aegypti entered the country along its border with Ecuador [11]. It is known that Ae. aegypti populations in Ecuador have variable levels of resistance to temephos, with populations from Huaquillas (bordering Peru) and Arenas showing susceptibility, while the populations from San Lorenzo and Nueva Loja show high levels of resistance. This variability could be due to variations in the intensity and frequency of temephos use across Ecuador [40]. Another possible explanation is climate differences. The departments along the northern coast of Peru were the most affected by the El Niño climatic phenomenon (1997)(1998), with heavy rains that caused an increase in malaria and dengue cases [41,42]. This situation led to the establishment of prevention and control activities, including insecticide space spraying and indoor residual spraying, mapping and treatment of larval habitats (including chemical control with temephos) and campaigns to eliminate potential larval habitats. These activities were carried out periodically in 333 localities in the departments of Tumbes, Piura, Lambayeque and La Libertad, with the goal of eliminating larval habitats in urban, peri-urban and rural areas.
In addition, the urban population on the coast is growing, but there is a lack of planning and organization and thus insufficient basic sanitation facilities due to, among other reasons, constant migration from other regions due to violence, lack of job opportunities, education and access to technological resources, as well as poverty [42]. This rapid, unplanned urbanization favors the introduction and establishment of Ae. aegypti. However, the west coast of Peru is a long desert strip interrupted by valleys, and the main characteristic of the region is a scarcity of rainfall [6], which should serve as a limiting factor for the transmission of arboviruses in this region, with the one important exception being the north coast, which experiences high temperatures and rainfall in the summer due to its proximity to the equator [18]. Overall, therefore, the potential benefit of the mainly arid climate is partially or totally negated by the presence of larval habitats inside homes as a consequence of poor water storage facilities due to an inadequate piped water supply in urban centers [17,26]. Evidence of this can be seen in the types of larval habitats that predominate in and around homes, including water storage drums, cylinders, wells and flower vases [7]. As a result, the lack of access to and availability of water, as well as the poor quality of water, create conditions for the proliferation of vectors and transmission of arboviruses [43], as recognized by the Ministry of Health of Peru, which reported in 2016 that inadequate access to water was associated with 41.2% of cases of dengue [44]. The ecosystems of the Peruvian Amazon differ from those on the coast. These jungle areas typically include humid and rainy tropical forests [6]. In this region, due to the constant rainfall, there is a greater abundance of potential larval habitats, especially those known as "los inservibles" (discarded objects, passively filled with rainwater). Unlike along the coast, these larval habitats are typically peri-domestic and are not used intentionally to hold water [45]. In contrast to the urbanization along the coast, the jungle is the least populated region of the country, accounting for only 10% of the national population, and the infrastructure of Amazonian villages often reflects inadequate access to basic services, education and health [46]. The city of Puerto Maldonado is the capital of the Amazonian department of Madre de Dios, located in the southeastern part of the country. The Ae. aegypti population in this city was the only one in the Peruvian Amazon that presented a high level of resistance to temephos. The vector was introduced into this city in 1999 [47], 15 years after it was first recorded in the Peruvian jungle in 1984 [11]. Dengue cases were sporadic in the period 2000-2016, with only one major outbreak (2952 recorded cases) reported in 2010 [48]. This low burden of dengue is supported by the findings of Salmón-Mulanovich et al. [49], who found low seroprevalence to DENV in a retrospective study conducted in Puerto Maldonado in 2018. This low disease burden suggests that vector control activities were not routine or intense and that the local Ae. aegypti population has not experienced strong selection pressure from the larvicide temephos. The explanation for the high levels of temephos resistance detected in the present study could be due to the vector having spread across the border from Brazil and Bolivia. On the Brazilian side, the Ae. aegypti population from Rio Branco (Acre State) is highly resistant to temephos [50] and similarly, on the Bolivian side, the Ae. aegypti population in the border city of Cobija (department of Pando) shows moderate levels of resistance to temephos [51].
It is also important to consider that the use of pesticides in the agricultural sector in Peru is intense at the national level, with the most widely used insecticides being OPs [52]. In a historical context, organochlorine (OC) pesticides were used between 1940 and 1950 (dichlorodiphenyltrichloroethane [DDT], benzene hexachloride [BHC] and toxaphene), followed by the introduction of the OPs parathion and methamidophos in 1950 and their use for several decades, with parathion used until 2005 [52] and methamidophos until 2018-2019 [53]. In the 2000s, pyrethroids (deltamethrin, cypermethrin and alpha-cypermethrin) and carbamates (CAs; carbofuran, methomyl, carbosulfan and carbaryl) were introduced [52,53]. A study conducted in 2012 reported that 43% of farmers preferred to use OPs because they had a broad spectrum of action (contact, ingestion and fumigant effects) [54]. It is important to note that these pesticides were widely used on cotton, corn and potato crops in the northern and central coastal valleys, as well as in the Andes Mountains. Another important consideration is the illegal trade in pesticides through smuggling (mainly along the northern border), street sales and the adulteration and counterfeiting of products, especially on the northern coast and in the central and southern parts of the Andes Mountains [55]. The continuous use of pesticides in agriculture has led to pest resistance to OCs and OPs, due to the misuse of pesticides and a lack of pesticide management [52]. The agricultural sector uses the same classes of insecticides as public health programs, so mosquitoes and other non-target insects may experience selection pressure from insecticides used in agriculture, resulting in the selection of populations that exhibit resistance to multiple insecticides [56].
The results of the present study demonstrate that Peruvian Ae. aegypti populations show diverse levels of resistance to the OP temephos, and are consistent with resistance patterns observed in other field populations that have been subjected to intense selection pressure from temephos [57]. In Peru, temephos was used for > 25 years; it is thus not surprising that resistance to this insecticide has reached very high levels in some areas. However, it is not possible to estimate or quantify the evolution of this resistance due to the lack of baseline information; the absence of data from Peru is also noted in the review of resistance prepared by Moyes et al. [58]. Resistance to temephos has been reported worldwide, with high levels of resistance reported in Tamil Nadu (India) [59], Caldas (Colombia) [60], Pernambuco (Brazil) [61], Martinique [62] and Bahia (Brazil) [63]; moderate levels of resistance reported in Tocantins (Brazil) [33], Laos [64], Paraná (Brazil) [65], Quindío (Colombia) [66], Delhi (India) [67], and São Paulo and the Northeast Region (Brazil) [68]; and susceptibility reported in Malaysia [69], Santiago Island (Cape Verde) [70] and Phitsanulok Province in Thailand [56].
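For reference, the resistance ratio (RR) values quoted throughout this discussion compare the LC50 of a field population with that of a susceptible reference strain. The short sketch below shows that calculation together with a qualitative classification using commonly cited thresholds (RR < 5 susceptible, 5-10 moderate, > 10 high resistance); both the thresholds as stated here and the example LC50 values are illustrative assumptions, not data from this survey.

```python
# Illustrative resistance ratio (RR) calculation and a qualitative category
# based on commonly cited thresholds (assumed here, not taken from this
# survey): RR < 5 susceptible, 5-10 moderate, > 10 high resistance.

def resistance_ratio(lc50_field: float, lc50_reference: float) -> float:
    """RR = LC50 of the field population / LC50 of the susceptible reference strain."""
    return lc50_field / lc50_reference

def classify_rr(rr: float) -> str:
    if rr < 5:
        return "susceptible / low resistance"
    if rr <= 10:
        return "moderate resistance"
    return "high resistance"

# Hypothetical LC50 values (mg/L) for a field population and the reference strain.
rr = resistance_ratio(lc50_field=0.095, lc50_reference=0.008)
print(f"RR = {rr:.1f} -> {classify_rr(rr)}")   # RR = 11.9 -> high resistance
```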
Chemical control of Ae. aegypti with temephos in Peru was continuous until 2015, when it was replaced by pyriproxyfen [71]; this replacement could explain the moderate and low levels of resistance found in the populations of Pampa Grande, Bellavista and Comunidad Saludable, among others. This finding suggests that resistance to temephos is unstable in the absence of continuous selection pressure. The notion of unstable resistance is supported by the findings of various researchers. Wirth and Georghiou [57] detected significant decreases in the levels of resistance to temephos in the British Virgin Islands (Tortola) following an interruption of > 10 years in its use, from a RR of 46.8 in 1985 [24] to a RR of 12.1 in 1992-93 [72] and then to a RR of 6.3 in 1995-96 [73]. Similarly, in Colombia (Caldas), Conde et al. [60] observed a reduction in the level of resistance to temephos at > 4 years after discontinuation of its use, from RRs of 13.27 and 11.48 in 2007 to RRs of 4.75 and 5.61 in 2011. Likewise, in Brazil (Juazeiro do Norte) [74], a decrease in resistance to temephos was observed, from a RR of 10.4 to a RR of 7.2, 7 years after this larvicide was replaced by Bacillus thuringiensis israelensis (Bti). Also in Brazil, Rahman et al. [75] observed a reduction in the level of resistance to temephos in Rio de Janeiro, in the municipality of Campos dos Goytacazes, 15 years after discontinuation of its use, from a RR of 7.8 in 2001 to a RR of 2.6 in 2016. These authors also detected a significant decrease in the levels of resistance to temephos in the municipality of Itaperona, from a RR of 25.6 in 2011 to a RR of 7.3 in 2016, after this larvicide was substituted by an insect growth regulator (IGR) [75]. Consequently, rotating to a new insecticide with a different mode of action could be advantageous in terms of temephos resistance management. The WHO recommends the following compounds as alternative larvicides: Bti, diflubenzuron, methoprene, novaluron, pyriproxyfen and spinosad.
Following the observed resistance of Peruvian Ae. aegypti to pyrethroids, the OP malathion is being reintroduced [71]. It is uncertain how long this insecticide will remain effective if resistance to temephos has already been demonstrated or if there is cross-resistance between temephos and malathion. Wirth and Georghiou [57] suggested that resistance to malathion did not increase significantly under selection pressure with temephos and that adulticides exerted lower selection pressure than larvicides [28]. If this is indeed the case, it is reasonable to believe that malathion may still be effective in Peru, which is important considering the few chemical options available in Peru for vector control.
An important limitation of this study was the lack of additional evaluations that would allow us to better understand the evolution of resistance to temephos, as well as the characterization of the observed resistance by molecular and enzymatic methods. On this last point, Rodriguez et al. [76], in a study of Latin American populations, found that a Peruvian Ae. aegypti population presented variations in the intensity of resistance to different OPs (temephos, RR = 30; malathion, 1.5; fenthion, 6.6; pirimiphos-methyl, 10; fenitrothion, 1.1; chlorpyrifos, 4.3), with elevated activity of esterases related to resistance to temephos, while mono-oxygenases were associated with resistance to pirimiphos-methyl and chlorpyrifos.
Future X-ray Polarimetry of Relativistic Accelerators: PWNe and SNRs
Supernova Remnants and Pulsar Wind Nebulae are among the most significant sources of non-thermal X-rays in the sky, and the closest laboratories where relativistic plasma dynamics and particle acceleration can be investigated. Being strong synchrotron emitters, they are ideal candidates for X-ray polarimetry, and indeed the Crab nebula is to date the only object where X-ray polarization has been detected with a high level of significance. Future polarimetric measurements will likely provide crucial information on the level of turbulence that is expected at the particle acceleration sites, together with the spatial and temporal coherence of the magnetic field geometry, enabling us to set stronger constraints on our acceleration models. In PWNe, such measurements will also allow us to estimate the level of internal dissipation. I will briefly review the current knowledge of polarization signatures in SNRs/PWNe and illustrate what we can hope to achieve with future missions such as IXPE/XIPE.
Introduction
Pulsar wind nebulae (PWNs) are bubbles of relativistic particles (mostly pairs) and magnetic fields that form when the relativistic pulsar wind interacts with the ambient medium (interstellar medium (ISM) or supernova remnant (SNR)). They shine in non-thermal (synchrotron and inverse Compton) radiation in a broad range of frequencies from radio wavelengths to γ-rays (see [1] for a review). PWNs are at present one of the more promising astrophysical environments where relativistic outflows and relativistic shock acceleration can be investigated. They are, above all, one of the most efficient antimatter factories present in the galaxy and have been advocated as a possible source of the so-called "positron excess" [2,3].
In X-rays, many PWNs exhibit an axisymmetric feature known as a jet-torus structure. This feature has by now been observed in a number of PWNs, among which are the Crab nebula [4], Vela [5], and MSH 15-52 [6], to name just a few. It is now commonly accepted that this structure arises from the interplay between the anisotropic energy flux in the wind and the compressed toroidal magnetic field in the nebula, as confirmed by a long series of numerical simulations [7-9], and that its shape and properties can be used to probe the structure of the otherwise unobservable pulsar wind and the acceleration properties of the wind termination shock [10].
Shell SNRs trace the ejected layers of the parent star, launched during the supernova explosion, as they propagate into the ISM, driving a high Mach number forward shock where the ambient matter is heated and compressed, and particles are accelerated [11,12]. While the stellar ejecta are mostly revealed as thermal line emission, the forward shock is seen as a bright non-thermal limb, shining in synchrotron from radio waves to X-rays. SNRs are commonly thought to be at the origin of the bulk of the galactic cosmic rays (CRs): diffusive shock acceleration, consisting of repeated crossings of the surface of the shock, is capable of raising the particle energy up to ~0.1-1 PeV [13].
In the last decade, it has become clear that the acceleration process can strongly modify the dynamics of the shock, and in particular can substantially amplify the magnetic field upstream of the shock itself, driving the development of magnetic turbulence [14-17]. This has crucial implications for the particle spectra and the maximum energy that can be achieved. The synchrotron X-rays seen in young SNRs are due to accelerated electrons with typical energies of 1-10 TeV in a magnetic field of a few hundred µG [15]. Such high values of the magnetic field strength cannot be explained by shock compression alone, but can be produced by instabilities in the upstream driven by the accelerated particles themselves [14].
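To connect the electron energies and field strengths just quoted, the following order-of-magnitude sketch evaluates the characteristic synchrotron photon energy using the textbook critical frequency ν_c ≈ (3/2) γ² ν_L (pitch angle taken as 90°); the specific electron energy and field strength plugged in are illustrative assumptions.

```python
# Order-of-magnitude characteristic synchrotron photon energy for an electron
# of energy E_e in a magnetic field B, with nu_c ~ (3/2) gamma^2 * e*B/(2*pi*m_e)
# (pitch angle ~ 90 deg). The input values are illustrative.
import numpy as np
from scipy import constants as k

def sync_photon_energy_keV(E_e_TeV: float, B_microgauss: float) -> float:
    gamma = E_e_TeV * 1e12 * k.e / (k.m_e * k.c**2)     # electron Lorentz factor
    B_tesla = B_microgauss * 1e-10                      # microgauss -> tesla
    nu_L = k.e * B_tesla / (2.0 * np.pi * k.m_e)        # cyclotron frequency [Hz]
    nu_c = 1.5 * gamma**2 * nu_L                        # critical frequency [Hz]
    return k.h * nu_c / k.e / 1e3                       # photon energy [keV]

# A 10 TeV electron in a 300 microgauss field radiates at ~2 keV, i.e. in the
# X-ray band, consistent with the numbers quoted above.
print(f"{sync_photon_energy_keV(10.0, 300.0):.1f} keV")
```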
Radio & Optical Polarization
Radio polarimetry of PWNs and SNRs has a long history. In PWNs, as radio emission is dominated by the outer regions of the nebula, where the effects of the interaction with the SNR are stronger and where the Rayleigh-Taylor instability operates, radio polarimetry provides at best an estimate of the degree of ordered versus disordered magnetic field. This means that it cannot be used to probe the conditions in the region close to the termination shock, where most of the variability and the acceleration processes take place. In the Crab nebula, which constitutes a case study for the entire class, the radio polarized fraction is ~16% on average [18-21], with peaks up to 30%, which is lower than the average optical polarized fraction of ~25% [20]. Interestingly, the polarized flux in radio anti-correlates with the position of the bright X-ray torus.
In other systems, the interpretation of the radio polarized morphology can be quite challenging. Vela shows a clear toroidal pattern, consistent with the orientation of the double ring that is observed in X-rays [22]. A similar, highly ordered toroidal pattern is also seen in G106.6-29 [23]. This is consistent with the general expectation of a synchrotron bubble in which a highly wound-up magnetic field is inflated by the wind coming from a rapid rotator. Other systems clearly show a far more complex morphology, ranging from a highly turbulent structure [24], typical of old systems that have gone through a strong interaction phase with the SNR known as the reverberation phase [25,26], to one that is mostly radial (or dipole-like) [27]. There is at the moment no framework to interpret these differences or to relate them consistently to the dynamics of the PWNs.
For the Crab nebula, high-resolution HST observations in polarized light of the inner region, in particular of the brightest optical features, namely the knot and the wisps, show typical polarized fractions of about 60% and 40%, respectively [28]. The results in the Crab nebula are consistent with the general idea of a mostly toroidal magnetic field just downstream of the termination shock, with a possible hint of developing turbulence: the polarized fraction of the wisps is lower than that of the knot, and at present, emission maps based on numerical simulations suggest that the former are located slightly more downstream than the latter. It is interesting to note that, while it is the brightest feature in total light, the torus has a lower surface brightness than the wisps in polarized light [29]. There is no optical counterpart to Vela, either in total or in polarized light [30,31]. As of today, optical polarization is limited to the brightest features of the brightest nebula. Moreover, the optical light usually suffers from large foreground contamination, which is often itself polarized, and the jet-torus structure, which is quite prominent in X-rays, is much fainter in the optical.
The radio polarization of shell SNRs shows an interesting dichotomy between young systems, where the magnetic field appears to be predominantly radial, and old systems, where it looks tangential to the shock front [32]. This is commonly interpreted as evidence for a stronger level of instability in young and more energetic systems, most likely related to the formation of Rayleigh-Taylor fingers [33,34] at the contact discontinuity between the shocked ISM and the shocked SN ejecta, which act to preferentially stretch the field in the radial direction. Alternative models invoking other kinds of instabilities, such as Richtmyer-Meshkov, have also been presented [35], as well as models with magnetic-field-dependent acceleration [36]. In older systems, where these instabilities are expected to be less effective, the field has the geometry that one would naively expect from shock compression (note that shock compression will amplify the tangential component of the field, producing a strong polarization pattern even if the upstream field is strongly turbulent). This dichotomy appears independently of the progenitor type of the SNR.
On the other hand, there are lines of evidence suggesting that some signatures of the upstream mean field are preserved. In particular, there appears to be a correlation between the orientation of bipolar SNRs and the galactic plane [37,38], and there is further evidence in SNRs from Type II SNe of an expansion into a magnetized wind bubble [39]. It was found in SN 1006 that the polarized fraction anti-correlates with the radio emission, suggesting that those sites along the shock front that are more likely to accelerate particles have a more turbulent field [38]. It is well known that the level of turbulence and the orientation of the field are pivotal for particle acceleration models: perpendicular shocks are thought to be more efficient accelerators, while parallel shocks tend to be more efficient injectors. Moreover, turbulence is likely required to explain the high magnetic field strengths needed in SED fitting of shell SNRs [12]. As in PWNs, radio polarization in SNRs traces particles with lifetimes longer than the age of the remnant, which fill the shell volume. In Cas A, for example, high-resolution radio observations show evidence of a polarization angle swing at the location of the X-ray rim, which is where particles are accelerated [40].
Polarization Models
In the last decade, several multidimensional models have been put forward to investigate the magnetic field structure and the geometry of the flow in PWNs. Much of this work has focused on trying to reproduce the jet-torus structure and on using it as a probe of the properties of the otherwise unobservable pulsar wind [7-9]. These works have shown the importance of Doppler boosting effects and have enabled us to locate the possible origin of many of the primary axisymmetric features observed in PWNs, including the knot and wisps of the Crab nebula. On the other hand, in present-day numerical models the torus tends to be under-luminous with respect to the wisps, and, despite being a strong dynamical feature, the jet is hard to reproduce. What is missing in current models is the possible presence of magnetic turbulence, at scales that are too small to be resolved by our numerical tools but are sufficiently large to affect the emission. Several theoretical arguments suggesting that a non-negligible amount of magnetic turbulence should be present in PWNs have been put forward in recent years. For example, the extent of the diffuse X-ray halos has been noted to be larger than what is expected from synchrotron cooling and advection [41-43], it has been suggested that radio-emitting particles are accelerated in the bulk of the nebula [10,44], and the recurrent γ-ray flares have been interpreted as dissipation in localized strong current sheets [45]. The possible origin of such turbulence is unclear: it could simply be the magnetic cascade of the large-scale turbulence injected at the termination shock [46]; it could be due to residual reconnection taking place downstream of the shock in a striped wind [47]; or it could be related to the current-driven instability of the compressed toroidal field [48,49].
Based on the idea that small-scale turbulence can be present, we have developed a formalism to include it, as a sub-grid effect [50-52], in large-scale models of the global structure of the field: either simplified toy models along the lines of [53], which are easy and fast to compute and allow us to scan the possible parameter space in depth in order to optimize the agreement with observations, or more sophisticated time-dependent numerical models that can take into account the interplay between the pulsar wind and the environment. These models have recently been applied to the Crab and Vela nebulae [52]. It has been shown that, in order to recover the correct relative brightness between the wisps and the torus, as well as the correct luminosity profile of the torus in Crab and of the inner and outer ring in Vela, a substantial fraction of the magnetic energy, ~50%, must be in the form of a turbulent small-scale field. This corresponds to a typical integrated polarized fraction for the Crab nebula of 17%, consistent with existing measurements.
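As a rough, back-of-the-envelope companion to these numbers (and not the actual sub-grid formalism of [50-52]), the sketch below combines the textbook maximum synchrotron polarization for a power-law electron distribution of index p, Π_max = (p+1)/(p+7/3), with a naive dilution factor equal to the fraction of magnetic energy in the ordered field component; the spectral index and energy fractions used are illustrative assumptions.

```python
# Back-of-the-envelope polarized fraction: textbook maximum synchrotron
# polarization for an electron power law of index p, diluted by the fraction
# f_ord of magnetic energy in the ordered (toroidal) component. The linear
# dilution is a naive assumption, not the sub-grid treatment cited above.

def pi_max(p: float) -> float:
    """Maximum linear polarization degree for electron index p."""
    return (p + 1.0) / (p + 7.0 / 3.0)

def pi_diluted(p: float, f_ordered: float) -> float:
    return pi_max(p) * f_ordered

# With p ~ 3 and half of the magnetic energy in small-scale turbulence this
# crude estimate gives ~37%, noticeably higher than the ~17% integrated value
# obtained by the full models: geometry, Doppler boosting and line-of-sight
# averaging further depolarize the emission.
for f_ord in (1.0, 0.5, 0.25):
    print(f"f_ord = {f_ord:.2f}: Pi ~ {pi_diluted(3.0, f_ord):.2f}")
```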
SNRs have a long history of polarization modeling in the radio band, and such models have attempted to constrain the origin of the observed polarization dichotomy, including recipes to relate it to the physics of acceleration [15,36]. More recently, the same technique used to include the effects of a turbulent component in the emission of PWNs has been applied to shell SNRs, in an attempt to derive observational constraints that locate the regions where the turbulence is higher and to assess its correlation with particle acceleration sites [51]. One of the most interesting aspects of X-ray emission in SNRs is that it takes place close to the cut-off regime: this implies that the emission weights regions of higher magnetic field more heavily, and large differences in the polarized emission pattern are therefore expected for shallow versus steep magnetic turbulence spectra. The authors of [54] used a simplified model that takes into account the typical emissivity expected in shell SNRs to evaluate the level and structure of the polarized emission expected from different turbulent spectra, and found that even the simple detection of a polarized signal is enough to rule out the shallower cascades.
More interestingly, a polarized emission model has recently been presented to explain the striped zone observed in X-rays in the Tycho SNR [55]. It has been suggested that such stripes might trace turbulent magnetic fields generated by accelerated particles streaming upstream of the shock itself. The orientation of the magnetic field with respect to the stripes might enable us to constrain the kind of instability driving the amplification of the field, given that different mechanisms produce different polarization patterns [14,56].
Prospects for Future Observations
Ideally, one would like to probe these systems using X-ray polarimetry [57], and there is great interest in the scientific community in such an objective [58,59]. At the moment, the Crab nebula is the only object with a polarization detection in X-rays [60]. The Crab nebula has more recently been observed in X-ray polarized light by NuSTAR [61] and by PoGO+ [62,63]. High-energy measurements by INTEGRAL are also available [64], and there are suggestions of a possible time variation in the polarization angle [65]. A polarization detection at high energies has also recently been reported by AstroSat [66].
In recent years, renewed interest in modeling spatially resolved polarimetric measurements has come as a result of the great efforts made to develop the IXPE and XIPE missions [58,59]. Simulations using the baseline combined telescope effective area and point spread function (PSF) were performed for both instruments.
For Crab and Vela, IXPE will be able to measure the polarization in a number of different spatial resolution elements, thus providing the first spatially resolved X-ray polarimetry of a PWN (Figure 1). For Crab, it is estimated that a 7.3-day observation can detect a polarized fraction well below 2% at 99% confidence in each of five distinct spatial regions, including one centered on the jet. This takes into account the fact that 50% of the flux might originate in neighboring zones and be unpolarized. For Vela, a polarization of the entire nebula of 3% may be detected in a 4.6-day observation, also allowing for some spatially resolved imaging with a higher threshold of ~5-10%. For other bright PWNs powered by young pulsars, such as PSR B1509-58 and J1833-1034, a few days of observation will provide enough statistics to obtain an integrated polarized fraction and, perhaps in the case of B1509-58, also the polarization of the bright jet.
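These sensitivity estimates are conventionally expressed through the minimum detectable polarization at 99% confidence, MDP99 = 4.29/(μ R_S) √((R_S + R_B)/T), with μ the modulation factor, R_S and R_B the source and background count rates, and T the exposure time. The sketch below simply evaluates this expression; the numbers used are placeholders, not the actual IXPE instrument response.

```python
# Evaluate the standard minimum detectable polarization at 99% confidence.
# The modulation factor and count rates below are placeholders, not the
# actual IXPE instrument response.
import numpy as np

def mdp99(mu: float, rate_src: float, rate_bkg: float, t_exp: float) -> float:
    """mu: modulation factor; rates in counts/s; t_exp in seconds."""
    return 4.29 / (mu * rate_src) * np.sqrt((rate_src + rate_bkg) / t_exp)

t_exp = 7.3 * 86400.0                                   # a 7.3-day exposure
print(f"MDP99 = {100 * mdp99(0.3, 5.0, 0.1, t_exp):.2f} %")
```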
For XIPE, various simulations of different scenarios with magnetic field orientations based on simplified toy models [52] (see Figure 2) were carried out for both Crab and MSH 15-52 [67]. With just a 0.2 ks observation for Crab and a 2 Ms observation for MSH 15-52, these showed that the polarized patterns are reconstructed with errors of less than 0.1% and at more than 10σ within the instrument PSF (see the contribution by J. Vink in these same proceedings for details on the modeling of these observations, the instrumental response, and the robustness of the results).
The main targets for polarization measurements among shell SNRs are the few that are young and close by, have sufficient surface brightness, and are large enough to be resolved. XIPE can detect polarized X-ray emission from Tycho's SNR with enough resolution to allow us to set constraints on models of diffusive shock acceleration with efficient magnetic field amplification in SNRs, with a typical integration time of ~1 Ms. In Tycho, the emission in the 4-6 keV range is expected to be of synchrotron origin. The current Monte Carlo methods for constructing simulated observations are based on the theoretical models for polarized emission constructed by [67].
Another primary target among SNRs will be Cas A, where thermal line emission associated with the interior filaments is present. Good energy resolution is pivotal for selecting those parts of the emitted X-ray radiation that are of non-thermal origin (far from the lines). In Cas A, X-ray emission is also detected from the putative reverse shock. Simulated observations suggest that, with XIPE, it is possible to disentangle the polarization signature of the reverse shock from that of the forward shock, as long as typical values are ~10% (see the contribution by J. Vink in these same proceedings). Among other possible targets are SN 1006, RX J1713.7-3946, Kepler's SNR, and RCW 86.
Figure 2: Panels (a), (b) and (c) show the expected intensity map when three different polarization models are applied to the Chandra intensity map: (a) a fully ordered radial B-field, (b) a fully disordered B-field, and (c) a fully ordered perpendicular B-field. From the XIPE Yellow Book; see also [67].
Conflicts of Interest:
The authors declare no conflicts of interest.
The Sparkler Evolved High-redshift Globular Cluster Candidates Captured by JWST
Using data from JWST, we analyse the compact sources ("sparkles") located around a remarkable z_spec = 1.378 galaxy (the 'Sparkler') that is strongly gravitationally lensed by the z = 0.39 galaxy cluster SMACS J0723.3-7327. Several of these compact sources can be cross-identified in multiple images, making it clear that they are associated with the host galaxy. Combining data from JWST's Near-Infrared Camera (NIRCam) with archival data from the Hubble Space Telescope (HST), we perform 0.4-4.4 μm photometry on these objects, finding several of them to be very red and consistent with the colors of quenched, old stellar systems. Morphological fits confirm that these red sources are spatially unresolved even in the strongly magnified JWST/NIRCam images, while the JWST/NIRISS spectra show [O III] λ5007 emission in the body of the Sparkler but no indication of star formation in the red compact sparkles. The most natural interpretation of these compact red companions to the Sparkler is that they are evolved globular clusters seen at z = 1.378. Applying DENSE BASIS spectral energy distribution fitting to the sample, we infer formation redshifts of z_form ∼ 7-11 for these globular cluster candidates, corresponding to ages of ∼3.9-4.1 Gyr at the epoch of observation and a formation time just ∼0.5 Gyr after the Big Bang. If confirmed with additional spectroscopy, these red, compact sparkles represent the first evolved globular clusters found at high redshift, which could be among the earliest observed objects to have quenched their star formation in the universe, and may open a new window into understanding globular cluster formation. Data and code to reproduce our results will be made available at http://canucs-jwst
INTRODUCTION
Despite being the subject of very active research for decades (see, e.g., reviews by Harris & Racine 1979; Freeman & Norris 1981; Brodie & Strader 2006; Forbes et al. 2018), we do not know when, or understand how, globular clusters form. We do know that most globular clusters in the Milky Way, and those around nearby galaxies, are very old. The absolute ages of the oldest globular clusters, determined by main sequence fitting and from the ages of the oldest white dwarfs, are about 12.5 Gyr, corresponding to formation redshifts of z_form ∼ 5. However, the uncertainties in age estimates are relatively large compared to the cosmic age of the Universe at high redshifts, and absolute ages corresponding to z_form ∼ 3 (at Cosmic Noon) at the low end, and extending well into the epoch of reionization at the high end (z_form ≳ 6), are plausible (Forbes et al. 2018). There are two general views on how globular clusters formed. In the first, globular cluster formation is a phenomenon occurring predominantly at very high redshift, with a deep connection to initial galaxy assembly. Ideas along these lines go back to Peebles & Dicke (1968), who noted that the typical mass of a globular cluster is comparable to the Jeans mass shortly after recombination. In this view, globular clusters are a special phenomenon associated with conditions in the early Universe, and their formation channel is different from that driving present-day star formation. The second view associates globular clusters with young stellar populations seen in nearby starbursting and merging galaxies (Schweizer & Seitzer 1998; de Grijs et al. 2001). In this case, globular cluster formation might be a natural product of continuous galaxy evolution in systems with high gas fractions, and globular cluster formation would peak at lower redshifts (Trujillo-Gomez et al. 2021).
We are on the cusp of distinguishing observationally between these two globular cluster formation channels. The JWST is capable of observing routinely down to nJy flux levels at wavelengths beyond two microns, and thus of observing globular cluster formation occurring at high redshift (Carlberg 2002; Renzini 2017; Vanzella et al. 2017, 2022). In this paper, we use newly released data from JWST to analyze the nature of the point sources seen around a remarkable multiply imaged galaxy at z = 1.378 that we fondly named the 'Sparkler'. One image of this galaxy is strongly magnified by a factor of ∼10-100 (Mahler et al. 2022; Caminha et al. 2022) by the z = 0.39 galaxy cluster SMACS J0723.3-7327 (hereafter SMACS0723). Our goal is to determine whether these point sources are (1) globular clusters, (2) super star clusters in the body of the galaxy, or (3) the product of global star formation in this galaxy being driven by some other mechanism.
DATA
The imaging and wide-field slitless spectroscopy data used for this work are from JWST ERO program 2736 ("Webb's First Deep Field"; Pontoppidan et al. 2022). The galaxy cluster was observed with all four instruments on JWST. Only Near Infrared Camera (NIRCam; Rieke et al. 2005) imaging and Near Infrared Imager and Slitless Spectrograph (NIRISS; Doyon et al. 2012) spectroscopy are used in this paper. NIRCam imaging is available in six broad-band filters: F090W, F150W, F200W, F277W, F356W and F444W. Shallow NIRISS wide-field spectroscopy was obtained in the F115W and F200W filters with the two orthogonal low-resolution grisms to mitigate contamination (Willott et al. 2022). Only the F115W grism data are used in this study, because it is the only filter containing a strong emission line, [OIII]λ5007. These JWST data are supplemented with HST/ACS imaging in F435W and F606W from the RELICS program, drizzled to the same pixel grid (Coe et al. 2019).
We reduced all imaging and slitless spectroscopic data together using the Grizli (Brammer & Matharu 2021) grism redshift and line analysis software package for space-based spectroscopy. We first obtained uncalibrated ramp exposures from the Mikulski Archive for Space Telescopes (MAST), and ran a modified version of the JWST pipeline stage Detector1, which makes detector-level corrections for, e.g., ramp fitting, cosmic ray rejection (including extra "snowball" artifact flagging) and dark current, and calculates "rate images". Our modified version of the pipeline also includes a column-average correction for 1/f noise. Subsequently, we used the preprocessing routines in Grizli to align all exposures to HST images, subtracted the sky background, and drizzled all images to a common pixel grid with a scale of 0.04″ per pixel. For the NIRCam F090W, F150W and F200W images we created another data product on a 0.02″ pixel scale. The context for the JWST Operational Pipeline (CRDS CTX) used for reducing the NIRISS (NIRCam) data was jwst_0932.pmap (jwst_0916.pmap). This is a pre-flight version of the NIRCam reference files, so the NIRCam fluxes should be treated with caution. One consequence of this is that the NIRCam photometric zeropoints calculated from our reductions may be incorrect for in-flight performance, so we used EAZY (Brammer et al. 2008) to derive zeropoint offsets consistent with photometric redshift fitting of the full source catalog. Bright cluster galaxies and the intracluster light were modelled and subtracted using custom code (N. Martis et al., in preparation). To enable measurement of accurate colors, our analysis was done after convolution with a kernel to match the point spread function (PSF) of F444W.
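For orientation, the fragment below sketches how the public jwst pipeline's detector-level stage can be invoked and what a crude column-median 1/f correction on the resulting rate image might look like. This is a generic illustration only: the authors' modified pipeline (extra snowball flagging, their 1/f handling, Grizli preprocessing) is not reproduced here, and the file names are placeholders.

```python
# Generic illustration of the JWST detector-level stage and a crude
# column-median 1/f correction on the resulting rate image. This is NOT the
# authors' modified pipeline; file names are placeholders.
import numpy as np
from astropy.io import fits
from jwst.pipeline import Detector1Pipeline

# Standard Detector1 processing (ramp fitting, jump detection, dark, ...)
# on an uncalibrated ramp file, producing a *_rate.fits count-rate image.
Detector1Pipeline.call("jw02736001001_02101_00001_nrca1_uncal.fits",
                       save_results=True)

# Crude 1/f mitigation: subtract the median of each detector column.
# (Source pixels would normally be masked before taking the median.)
with fits.open("jw02736001001_02101_00001_nrca1_rate.fits") as hdul:
    sci = hdul["SCI"].data
    sci -= np.nanmedian(sci, axis=0, keepdims=True)
    hdul.writeto("nrca1_rate_1overf_corrected.fits", overwrite=True)
```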
Figure 1 shows images of the Sparkler. Coordinates for the three images of the background galaxy are presented in the caption accompanying the figure. The Sparkler was first identified as multiply imaged in HST imaging combined with ESO MUSE integral-field spectroscopy that shows all three images having [OII]λ3727 emission (Golubchik et al. 2022). We adopt the spectroscopic redshift of z = 1.378 ± 0.001 from the MUSE [OII] line (Mahler et al. 2022; Caminha et al. 2022; Golubchik et al. 2022). The magnifications of the three images (labeled as 1, 2, and 3 in Figure 1) in the lensing model of Mahler et al. (2022, their IDs 2.1, 2.2, and 2.3) are 3.6 ± 0.1, 14.9 ± 0.8 and 3.0 ± 0.1, respectively. In the lensing model of Caminha et al. (2022, their IDs 3a, 3b, and 3c) the magnifications are significantly higher: 9.2 (+1.3, −1.2), 103 (+153, −47), and 6.1 (+0.7, −0.7), respectively. Based on measured flux ratios between the three images, we consider the Caminha et al. (2022) model to better fit the properties of this galaxy. As shown in Figure 1, there may be critical curves and/or high-magnification contours crossing image 2 (magnification 5-10 in the Mahler et al. 2022 model and magnification 30-100+ in the Caminha et al. 2022 model), suggesting strong differential magnification in the image. Figure 2 shows a multi-band montage of Image 2 of the Sparkler, using data from HST/ACS, HST/WFC3, and the JWST/NIRCam short- and long-wavelength cameras at observed wavelengths spanning 0.4-4.4 µm. Circles in the lower left of each panel show the full width at half maximum of the point spread function. The exquisite resolution of JWST/NIRCam SW best reveals the compact sources surrounding the galaxy, which were not resolved by HST in earlier observations even at similar wavelengths.
METHODS
In this letter we focus our attention on twelve compact candidates in and around the Sparkler. In this preliminary exploration, we selected candidates by eye, focusing mainly on compact objects ('sparkles') in uncontaminated regions of the image. A few compact sources in the galaxy itself were also added to our sample to allow us to compare objects in the body of the Sparkler to objects in the periphery of the galaxy. Objects were selected using the very deep 0.02″ pixel scale F150W image, and were chosen to be broadly representative of the compact sources in this system. As described below, 2D modeling confirms that the objects chosen are unresolved. We emphasize that the objects analyzed in this letter are not a complete sample. Construction of a complete sample will require detailed background subtraction and foreground galaxy modelling, which is deferred to a future paper.
Aperture Photometry
Photometry is challenging in crowded fields, and in the case of the Sparkler the challenges are compounded by contamination from the host galaxy and from other nearby sources. This contamination can significantly alter the shape of the SED of the individual compact sources. In a future paper we will present a full catalog of compact sources around the Sparkler that attempts to account for these effects by subtracting contamination models and using PSF photometry. For simplicity and robustness, in the present paper we used aperture photometry, as this technique is relatively insensitive to variations in the local background. Photometry was done on images that (i) are on the 0.04″ pixel scale, (ii) have the bright cluster galaxies and ICL subtracted, and (iii) are F444W PSF-convolved F435W, F606W, F090W, F150W, F200W, F277W, F356W and F444W images.
Using photutils (Bradley et al. 2021), circular apertures with radii of 0.12″, 0.16″ and 0.20″ were defined using the centroided positions of the twelve sparkles in the F150W image. An annulus starting at the edge of the aperture and with a width of 0.08″ was used to estimate the median local background, which was subtracted from the aperture flux. An aperture correction was applied by multiplying by the F444W PSF growth curve. To determine contamination corrections, we injected simulated point sources of various fluxes around the galaxy to determine how well our procedure recovered the intrinsic total flux of the compact sources. We found that the precision of the photometry varied widely across the different filters, environments, and intrinsic brightnesses of the sources, but that these variations could be quantified by simulations. For every sparkle, we identified a location proximate to it in which we injected simulated point sources to model the measurement accuracy.
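A minimal version of this aperture-plus-annulus measurement, using photutils, is sketched below; the cutout array, source positions and the growth-curve aperture correction factor are stand-ins for illustration (0.04″/pixel, so the 0.20″ radius corresponds to 5 pixels).

```python
# Minimal sketch of the aperture photometry described above, using photutils.
# `data` is a stand-in for a real, BCG/ICL-subtracted cutout on the 0.04"/pix
# grid; positions and the aperture correction factor are placeholders.
import numpy as np
from photutils.aperture import (ApertureStats, CircularAnnulus,
                                CircularAperture, aperture_photometry)

pixel_scale = 0.04                                   # arcsec per pixel
data = np.random.normal(0.0, 0.01, (400, 400))       # stand-in cutout
positions = [(143.2, 201.7), (150.9, 188.3)]         # placeholder (x, y) centroids

aper = CircularAperture(positions, r=0.20 / pixel_scale)          # 0.20" aperture
annulus = CircularAnnulus(positions, r_in=0.20 / pixel_scale,
                          r_out=0.28 / pixel_scale)               # 0.08"-wide annulus

bkg_per_pix = ApertureStats(data, annulus).median    # median local background
phot = aperture_photometry(data, aper)
aper_corr = 1.3                                      # placeholder growth-curve correction
flux = (phot["aperture_sum"] - bkg_per_pix * aper.area) * aper_corr
print(flux)
```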
For a sparkle at a given wavelength, we injected 20 point sources of total flux varying between 0.1 and 10 times the measured flux of the source and measured their fluxes using the same techniques used to analyze the original sources. We then fit the intrinsic flux as a function of the measured flux with a second-order polynomial, which we used to determine local aperture corrections. This process was repeated across 20 different locations around the galaxy to estimate the uncertainty in the flux measurement. We selected the 0.20″ aperture for our final photometry, as the corrected flux recovered > 99% of the intrinsic flux across all environments. The procedure was performed for all twelve sparkles in all eight filters to construct the final SEDs of the sources. For sources that are undetected, we assigned an upper limit of three times the noise of the image.
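The local contamination correction just described boils down to fitting injected (intrinsic) flux against recovered (measured) flux and inverting the relation; a bare-bones version of that step is sketched below, with placeholder flux arrays.

```python
# Bare-bones version of the local contamination correction: fit the injected
# (intrinsic) flux as a second-order polynomial of the recovered (measured)
# flux, then apply the fit to the real measurement. Arrays are placeholders.
import numpy as np

measured = np.array([0.8, 1.9, 4.2, 8.7, 17.5, 35.0])    # recovered fluxes (nJy)
injected = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])    # intrinsic fluxes (nJy)

correct = np.poly1d(np.polyfit(measured, injected, deg=2))

flux_measured = 6.3                                       # placeholder measurement
print(f"corrected flux: {correct(flux_measured):.2f} nJy")
```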
SED fitting and estimating physical properties
Spectral energy distributions (SEDs) derived from our aperture photometry were analyzed using the Dense Basis method (Iyer & Gawiser 2017; Iyer et al. 2019) to determine non-parametric star-formation histories (SFHs), masses, ages, metallicities and dust extinction values for our compact sources. The Dense Basis fits were run with a single t50 parameter, following the prescription in Iyer et al. (2019), with the full methodology and validation tests presented in Iyer et al. (2018, 2019) and Olsen et al. (2021). The primary advantage of using non-parametric SFHs is that they allow us to account for multiple stellar populations, robustly derive SFH-related quantities including masses, SFRs and ages, and set explicit priors in SFH space to prevent outshining due to younger stellar populations that could otherwise bias estimates of these properties (Iyer & Gawiser 2017; Leja et al. 2019; Lower et al. 2020).
However, Dense Basis, by design, implements correlated star formation rates over time, to better encode the effects of physical processes in galaxies that regulate star formation and to better recover complex SFHs containing multiple stellar populations (Iyer et al. 2019). The formalism smooths out star formation histories that are instantaneous pulses, and has an age resolution of about 0.5 Gyr. We therefore also undertook SED fits based on simple luminosity evolution of simple stellar populations (SSPs). As will be seen below, in several cases the Dense Basis fits return SFHs that are as close to instantaneous pulses as the method allows. In such cases, SSP fits may give comparably good results with fewer assumptions. SSP fits also have the benefit of returning unambiguously defined ages. Since the Dense Basis fits provide a full SFH posterior, we define the 'age' from these fits to be the time at which the SFR peaks (t_peak). Using validation tests fitting synthetic SSP sources injected into the field and mock photometry with similar noise properties to the observed sources, we find that this can robustly recover the age of the corresponding SSP within uncertainties, with a bias and scatter of (µ, σ)_t50 ≡ (0.15, 1.00) Gyr, (µ, σ)_tpeak ≡ (0.13, 0.86) Gyr and (µ, σ)_ageSSP ≡ (−0.20, 0.86) Gyr for the three metrics tested.
Grism extraction and fitting
Before extracting individual NIRISS spectra, we constructed a contamination model of the entire field using Grizli. We modeled sources at both grism orientations. This model was built using a segmentation map and photometric catalog created with SEP (Barbary 2016; Bertin & Arnouts 1996). We initially assumed a flat spectrum, normalized by the flux in the photometric catalog, in our models. Successively higher-order polynomials were then fit to each source, iteratively, until the residuals in the global contamination model were negligible.
After the spectral modelling of the full field for contamination removal, we extracted the 2D grism cutouts of the three images of the Sparkler and fitted their spectra using the Grizli redshift-fitting routine with a set of FSPS and emission line templates. Grizli forward-models the 1D spectral template set onto the 2D grism frames based on the source morphologies in the direct imaging. Based on the grism data alone, Grizli identified multiple redshift solutions for the Sparkler, including a solution at z = 1.38 based on the identification of [OIII]λ5007 at 1.2 µm in the F115W grism data. This is consistent with the identification of the complementary [OII] line previously reported in the MUSE data, and securely confirms the spectroscopic redshift of the source as z = 1.38. As a product of the fitting, emission-line maps of the [OIII]λ5007 line were created for the three images of the Sparkler.
RESULTS
The fluxes and associated uncertainties for the twelve compact sources ('sparkles') in and around the Sparkler are presented in Table 1. Panel (C) in Figure 3 shows the colors of the individual sparkles in the rest-frame urJ color-color space (measured directly from the F090W, F200W, and the average of the F277W and F356W fluxes), overplotted on the distribution of z ∼ 1.4 galaxies from the COSMOS2020 catalog (Weaver et al. 2022). The body of the Sparkler galaxy (blue point) is in the star-forming blue cloud, as are 7 of the 12 sparkles (orange points). However, five of the sparkles have red colors (u*−r > 1.5) consistent with those of quiescent systems (the so-called red cloud). Panel (B) in Figure 3 shows two-dimensional fits of the point-spread function to these reddest five sources (obtained using GALFIT; Peng et al. 2010). Residuals from the fits are negligible, confirming the original visual impression that these compact red sources are unresolved. These five red, unresolved objects constitute our sample of globular cluster candidates throughout this paper, and are color-coded in pink in all figures.
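Because the rest-frame colors in panel (C) are measured directly from broadband fluxes, the quenched/star-forming split reduces to simple flux ratios; the sketch below illustrates that arithmetic with made-up flux values, using the u*−r > 1.5 threshold quoted above.

```python
# Rest-frame urJ colors measured directly from observed-frame fluxes at
# z = 1.378 (u ~ F090W, r ~ F200W, J ~ mean of F277W/F356W), as described
# above. The flux values are made up for illustration.
import numpy as np

def color(f_blue: float, f_red: float) -> float:
    """AB color m_blue - m_red from two fluxes in the same units."""
    return -2.5 * np.log10(f_blue / f_red)

fluxes = {"F090W": 4.0, "F200W": 28.0, "F277W": 45.0, "F356W": 51.0}   # e.g. nJy

u_minus_r = color(fluxes["F090W"], fluxes["F200W"])
r_minus_j = color(fluxes["F200W"], 0.5 * (fluxes["F277W"] + fluxes["F356W"]))
is_red = u_minus_r > 1.5                              # quenched-candidate cut
print(f"u-r = {u_minus_r:.2f}, r-J = {r_minus_j:.2f}, red candidate: {is_red}")
```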
SEDs and the SFHs inferred from our modelling are shown in Figure 4. The physical properties corresponding to the models shown in this figure are also given in Table 1. The table contains effective ages of the globular cluster candidates from both the Dense Basis and SSP fitting methods, which generally agree within the uncertainties. Of the objects under consideration, six (IDs 1, 2, 4, 8, 9, 10) are consistent with SFHs that peaked at early formation times. Note that we do not include Object 9 in our list of globular cluster candidates because of its low SNR, coupled with possible contamination from the nearby diffraction spike and the extended tail visible in Figure 1. Objects 11 and 12, which are in the bulk of the galaxy, show recent star formation, consistent with the [OIII]λ5007 emission in Figure 3.
Panel (A) of Figure 3 shows the emission line maps at the redshifted wavelength of [OIII]λ5007 for all three images of the Sparkler. Individual columns show the direct F115W image (the broadband filter within which the redshifted [OIII] emission lies), and a NIRCam F090W, F150W, and F200W color composite for each Sparkler image. There is clear evidence of [OIII]λ5007 emission in all three images, which we interpret as related to star-formation activity in the Sparkler. Note that the line emission is spatially co-located with the two blue regions in the color composite, consistent with this interpretation. Most importantly, there is no evidence of line emission at the locations of those sparkles that we have previously identified as globular cluster candidates (IDs 1, 2, 4, 8, and 10), and this adds confidence to our conclusion that these objects consist of old stellar populations and are devoid of ongoing star formation.
Much can be learned from inter-comparing the images shown in Panel (A) of Figure 3, and in particular from comparing the properties of the sparkles we identified in Image 2 with their counterparts in Images 1 and 3. We leave such analysis, as well as the construction of a full lens model of the system, to future papers; for now, we simply highlight a few tentatively matched features in the third column of this panel, focusing on the globular cluster candidates (pink labels) and the two most prominent star-forming regions (cyan labels).
We close this section with some preliminary discussion of the mass of the Sparkler. Fits to the integrated photometry of images 1 and 3 using Dense Basis recover log stellar masses of 9.67 (+0.08, −0.09) M⊙ and 9.51 (+0.08, −0.08) M⊙ respectively for the host galaxy (uncorrected for magnification), and star formation histories that show a recent rise over the last ∼Gyr. We do not fit image 2 due to the strong differential magnification. Assuming magnifications of ∼5 for these images (much lower than for image 2), the stellar mass of the Sparkler would be around 10^9 M⊙, which is similar to that of the Large Magellanic Cloud (Erkal et al. 2019), which has ∼40 globular clusters (Bennet et al. 2022).
DISCUSSION
We are at the earliest stages of understanding how best to calibrate data from the in-orbit JWST, so SED modeling is best approached with a degree of caution. For this reason, we emphasize that our most important conclusions spring from observations that are independent of detailed SED modeling. Firstly, many of the compact sources in and around the Sparkler are unresolved (panel B of Figure 3) and several can be cross-identified in multiple images (Figure 1 and panel A of Figure 3), so they are clearly associated with the host galaxy, placing them at z = 1.378. The colors of these systems are consistent with the expected positions of quiescent sources at z = 1.378 on a rest-frame urJ diagram (panel C of Figure 3). Independently of any modeling, these facts suggest an identification of the red sparkles with evolved globular clusters.
Going further than this requires modeling. At face value, the reddest compact clumps (five of the twelve in Table 1 and Figure 4) surrounding the Sparkler show SFHs consistent with simple stellar populations formed at very high redshifts (z ≳ 9). Another two objects, mainly in the bulk of the galaxy, show SFHs consistent with younger (∼0.03-0.3 Gyr) stellar populations.
The quiescent nature of the reddest point sources in and around the Sparkler effectively rules out the possibility that they are active star formation complexes of the kind seen in many 1 < z < 3 galaxies, such as those associated with dynamical instabilities in gas-rich turbulent disks (Genzel et al. 2006; Förster Schreiber et al. 2006). A number of studies examining clumps in high-redshift systems with strong gravitational lensing have been able to explore the clump size distribution at physical spatial resolutions below 100 pc (e.g., Livermore et al. 2012; Wuyts et al. 2014; Livermore et al. 2015; Johnson et al. 2017; Welch et al. 2022a). These report a broad range of sizes (50 pc - 1 kpc), but because of the high magnification of the Sparkler, most such clumps would be expected to be resolved in the JWST data we study. As already noted, pioneering work by Johnson et al. (2017) and Vanzella et al. (2017) suggests that HST observations of strongly lensed active star-formation complexes in galaxies at 2 < z < 6 may already have captured the earliest phases of globular cluster formation. More recent work on lensed z ∼ 6 galaxies has revealed even smaller complexes, e.g. in the Sunrise Arc (Welch et al. 2022b). This work is exciting, but the association of young massive clusters at high redshift with proto-globular clusters remains indirect, and the future evolution of these star formation complexes is unclear.
The most interesting interpretation of the clumps in and around the Sparkler is that the bulk of them are evolved (maximally old, given the 4.6 Gyr age of the Universe at the epoch of observation) globular clusters. If this interpretation is correct, JWST observations of quiescent, evolved globular clusters around z ∼ 1.5 galaxies can be used to explore the formation history of globular clusters in a manner that is complementary to searching directly for the earliest stages of globular cluster formation (e.g., by examining young massive star-formation complexes at z ∼ 6 and higher). Young star formation complexes may, or may not, eventually evolve into globular clusters, but there can be little doubt about the identity of an isolated and quiescent compact system if its mass is around 10^6 M⊙ and its scale length is a few parsec. JWST observations of evolved globular clusters at z ∼ 1.5 are also complementary to explorations of the ages of local globular clusters, as models fit to local globular clusters cannot distinguish between old and very old systems. For example, distinguishing between an ∼11.5 Gyr old stellar population that formed at z = 3 and a 13.2 Gyr old stellar population that formed at z = 9 is not possible with current models and data, because they are degenerate with respect to a number of physical parameters (Ocvirk et al. 2006; Conroy et al. 2009, 2010). JWST observations of evolved globular clusters, seen when the Universe was about one third of its present age, provide an opportunity for progress by 'meeting in the middle', because population synthesis models of integrated starlight from simple stellar populations can distinguish rather easily between the ages of young-to-intermediate-age stellar populations. This is because intermediate-mass stars with very distinctive photospheric properties are present at these ages. At z = 1.378, the lookback time to the Sparkler is 9.1 Gyr, and the age of the Universe at that epoch is 4.6 Gyr. Distinguishing between z = 3 and z = 9 formation epochs for the globular cluster system corresponds to distinguishing between 2.4 Gyr- and 4.1 Gyr-old populations, which is relatively straightforward for population synthesis models in the JWST bands. In the case of the Sparkler, the striking conclusion is that at least 4 of its globular clusters have likely formed at z > 9.
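The age arithmetic in this paragraph can be reproduced with a standard flat ΛCDM calculator; the sketch below uses astropy's Planck18 parameters (an assumption, since the adopted cosmology is not restated here) and recovers the ~4.6 Gyr age of the Universe at z = 1.378, together with the roughly 2.4-2.5 Gyr and ~4.1 Gyr population ages for formation at z = 3 and z = 9.

```python
# Reproduce the age arithmetic above with astropy; Planck18 is an assumed
# parameter set, so the numbers can shift slightly with other cosmologies.
from astropy.cosmology import Planck18 as cosmo

z_obs = 1.378
print(f"lookback time to z={z_obs}: {cosmo.lookback_time(z_obs):.1f}")
print(f"age of the Universe at z={z_obs}: {cosmo.age(z_obs):.1f}")

for z_form in (3, 9):
    age_pop = cosmo.age(z_obs) - cosmo.age(z_form)
    print(f"population formed at z={z_form}: {age_pop:.1f} old at z={z_obs}")
```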
Our identification of the 'sparkles' in Figure 1 with evolved globular clusters relies on an assumption of very strong magnification of the Sparkler. Strong magnification occurs only in narrow regions near lensing caustics, so there are strong magnification gradients in the source plane. This makes it difficult to invert lens models to compute accurate luminosity functions for the putative globular cluster population. Based on Figure 1, we assume the overall magnification of the system is large (at least a factor of 15), but handling the strong magnification gradients across the local environment of the Sparkler is beyond the scope of this paper. Assuming magnifications of 10-100, the stellar masses of these point sources fall in the range ∼10^6-10^7 M⊙, which is plausible for metal-poor globular clusters seen at ages of around 4 Gyr, although most lie at the high end of the local globular cluster mass range. Since a critical curve may be running through the system, we emphasize again that the magnification (and hence the masses) of the clusters is very uncertain.
If lens models can be determined with the accuracy needed to compute source-plane luminosity functions and mass distributions, then the Sparkler may place interesting constraints on globular cluster dissolution. Physical processes slowly dissolve globular clusters, and luminosity evolution is significant, so distant globular clusters are expected to be both more massive and more luminous than their local counterparts. The most relevant physical processes are stellar evolution coupled with relaxation and tidal effects, and in some models significant mass loss is expected. For example, with a standard Kroupa IMF (Kroupa 2001) about 30% of the mass of a star cluster is expected to be lost due to stellar evolution alone in the first few Gyr (Baumgardt & Makino 2003), and this fraction is much higher for top-heavy IMFs. Dynamical processes would compound this loss, though they are likely to be most significant for lower mass clusters (Baumgardt 2006). In any case, unless globular cluster dissolution processes are operating far more quickly than expected, very high magnifications are certainly needed to explain the point sources surrounding the Sparkler as globular clusters.
CONCLUSIONS
In situ investigations of evolved globular cluster systems at z ∼ 1.5 present us with a golden opportunity to probe the initial formation epoch of globular clusters with a precision unobtainable from studying local systems. Magnified red point sources seen at this epoch are old enough to be unambiguously identified as globular clusters, but young enough that their ages can be determined quite reliably. We applied this idea to JWST and HST observations of a z = 1.378 galaxy (which we refer to as the Sparkler), which is strongly lensed by the z = 0.39 galaxy cluster SMACS J0723.3-7327. At least five of the twelve compact sources in and around the Sparkler are unresolved and red, and the most likely interpretation is that they are evolved globular clusters seen at z = 1.378. By modeling the colors and spectra of these compact sources with the Dense Basis method, four (33%) are found to be consistent with simple stellar populations forming at z > 9, i.e., in the first 0.5 Gyr of cosmic history and more than 13 Gyr before the present epoch. If these ages are confirmed, at least some globular clusters appear to have formed contemporaneously with the large-scale reionization of the intergalactic medium, hinting at a deep connection between globular cluster formation and the initial phases of galaxy assembly. Data and code to reproduce our results will be made available at http://canucsjwst.com/sparkler.html.
Figure 1. Color images of the Sparkler and its environs made by combining F090W, F150W, and F200W images at native spatial resolution. The left panel shows the region around the three images of the Sparkler, with lines of lensing magnification from the Mahler et al. (2022, solid curves) and Caminha et al. (2022, dashed curves) models overlaid. Note that regions of very strong magnification (µ ∼ 10-100) cross image 2 of the Sparkler. The remaining three panels zoom in on the three images of this galaxy. Images are centered on the following positions. Image 1: RA = 110.83846, Dec = -73.45102; Image 2: RA = 110.84051, Dec = -73.45487; and Image 3: RA = 110.83614, Dec = -73.45879. Note the compact sources, many of them red, surrounding the body of the galaxy; these are most prominent in image 2, but are also discernible in images 1 and 3.
Figure 2. Image 2 of the Sparkler from the HST Advanced Camera for Surveys (HST/ACS), HST Wide Field Camera 3 (HST/WFC3), and JWST Near Infrared Camera (JWST/NIRCam) Short Wavelength (SW) and Long Wavelength (LW) channels at observed wavelengths from 0.4-4.4 µm. Circles in the lower left of each panel show the full width at half maximum of the point spread function. Note the exquisite resolution of JWST/NIRCam SW reveals the compact sources surrounding the galaxy, which were not resolved by HST in earlier observations at similar wavelengths.
Figure 3. (A): The globular cluster candidates are associated with the main galaxy. F115W images (left column), [OIII]λ5007 emission line maps derived from the NIRISS grism data in the F115W band (middle column), and NIRCam color composite images (right column). Sparkle IDs are shown for Image 2, with tentative counterparts identified in Images 1 and 3. The lower part of the [OIII] map of Image 2 suffers from significant contamination. [OIII] emission is a classic signature of ongoing star formation; here, it is present in the star-forming regions of the host galaxy, but its absence at the locations of the globular cluster candidates supports the hypothesis that at the epoch of observation these are quiescent systems. (B): The globular cluster candidates are unresolved. Fits to the globular cluster candidates with point sources on the 0.02″ F150W images using GALFIT (Peng et al. 2010) show that the residuals are consistent with noise. (C): The globular cluster candidates have colors of quenched stellar systems. urJ colors (measured directly from F090W, F200W, and the average of F277W and F356W fluxes) compared with z ∼ 1.4 galaxies in the COSMOS2020 catalog (Weaver et al. 2022): the integrated colors of the Sparkler galaxy (blue circle labeled Sp) are in the star-forming blue cloud, as are our other point sources (orange), but the globular cluster candidates (pink) have u*−r > 1.5 and are consistent with the colors of quenched systems.
Figure 4. Non-parametric SFHs derived from fitting the photometric SEDs of the individual sparkles. Pink points and curves show the locations and colors (top left), SFHs (marked panels) and SED fits (inset panels) of the individual globular cluster candidates, while orange is used to show fits and SFHs for objects that are extended sources, heavily contaminated by light from the galaxy, nearby objects or ICL, or in the body of the main galaxy. Even though object 9 is consistent with an early SFH, we exclude it as a globular cluster candidate due to low SNR and possible contamination by a nearby diffraction spike. SEDs are shown in Fν units, with the spectra corresponding to the best-fit model from Dense Basis. SFR values are not corrected for lensing magnification, which could make them ∼10-100 times smaller. z_peak corresponds to the redshift at which the posterior SFH peaks in SFR. Overall, the globular cluster candidates show SFHs consistent with very early epochs of star formation ranging over 7 < z < 11.
A complete description of thermodynamic stabilities of molecular crystals
Significance Predicting stable polymorphs of molecular crystals remains one of the grand challenges of computational science. Current methods invoke approximations to electronic structure and statistical mechanics and thus fail to consistently reproduce the delicate balance of physical effects determining thermodynamic stability. We compute the rigorous ab initio Gibbs free energies for competing polymorphs of paradigmatic compounds, using machine learning to mitigate costs. The accurate description of electronic structure and full treatment of quantum statistical mechanics allow us to predict the experimentally observed phase behavior. This constitutes a key step toward the first-principles design of functional materials for applications from photovoltaics to pharmaceuticals.
Molecular crystals are ubiquitous in the pharmaceutical industry (1) and show great promise for applications in organic photovoltaics (2); gas adsorption (3); and the food, pesticide, and fertilizer industries (4). Their tendency to exhibit polymorphism, i.e., to exist in multiple crystal structures, on one hand provides a mechanism to tune properties by controlling crystal structure (5) and on the other hand introduces the challenge of synthesizing and stabilizing crystal structures with desired properties (6). While thermodynamic stability at the temperature and pressure of interest is sufficient (although not necessary, since kinetics may protect thermodynamically metastable structures from decaying almost indefinitely) to ensure long-term stability, simply understanding thermodynamic stability already poses a formidable challenge. This is particularly true for pharmaceuticals, where free energy differences between drug polymorphs are often smaller than 1 kJ/mol (7), leading to the risk of the drug transforming into a less soluble and consequently less effective form during manufacturing, storage, or shelf life (8,9). Indeed, the problem of late-appearing drug polymorphs is widespread (10,11).
The pharmaceutical industry therefore spends considerable resources on high-throughput crystallization experiments to screen for polymorphs (12), into which the target structure may decay. However, crystallization experiments do not probe thermodynamic stability, and conclusive studies of the impact of temperature changes after crystallization on the stability of polymorphs (i.e., their monotropic or enantiotropic nature) (13) are often prevented by limited sample quantities. Hence, there is the appeal of theoretical crystal structure prediction (CSP) (14) based
on the thermodynamic stability, which promises to complement crystallization experiments (15) by exhaustively searching for competing polymorphs.
Despite the demonstrable value of CSP for many classes of materials (16)(17)(18)(19)(20)(21), and the continuing progress evidenced by a series of blind tests (15), the success of CSP for molecular crystals has been limited by the inability to routinely predict the relative stability of competing candidate structures (22). This is largely because the methods used for stability rankings typically ignore or approximate the subtle interplay of several effects, such as intricate intermolecular interactions (23), the (quantum) statistical mechanics of the nuclei (24) and the unit cell (25), and thermal expansion (26), thereby incurring errors larger than the free energy differences of interest. The importance of each of these effects has been demonstrated in isolation, but predictive stability rankings must also comprehensively account for their interplay.
Recent implementations of advanced path-integral (PI) approaches (27,28) allow exactly accounting for the quantum statistical mechanics of the nuclei and the unit cell (29,30) for arbitrary potential energy surfaces (PESs). At the same time, modern machine-learning potentials (MLPs) (31) permit accurately reproducing ab initio PESs and dramatically reduce the cost of performing simulations approaching ab initio accuracy (32).
Despite these advances, calculations of rigorous thermodynamic stabilities for general molecular materials have been complicated by the absence of an integrated framework, which facilitates both the rapid development of MLPs and free energy calculations including all physically relevant effects, while ensuring universal applicability to diverse systems.
In this work we present an efficient framework for ranking candidate structures of arbitrary compounds using rigorous ab initio Gibbs free energy calculations, based on the streamlined development of MLPs and their integration with PI methods. Our approach builds upon our previous work on combining PI approaches with MLPs for ice polymorphs (29,33), but greatly enhances its accuracy, efficiency, and robustness for out-of-the-box applications to general compounds. In particular, we simplify the development of MLPs using a straightforward and inexpensive protocol for compiling ab initio reference data, which is designed to work for general organic compounds and accounts for (the often-neglected) cell flexibility and quantum nuclear motion. Additionally, robust data-driven techniques minimize the human effort involved in training the MLPs. In contrast to previous CSP ranking methods that use MLPs (34,35), we exactly account for the quantum statistical mechanics of the nuclei and the cell and use MLPs only as a stepping stone for computing ab initio Gibbs free energies, eliminating all dependence on the MLPs and their limitations.
The reliability and general applicability of our approach are showcased by the rapid development of MLPs and correct stability predictions for crystal polymorphs of three prototypical compounds: benzene, glycine, and succinic acid. These bear the hallmarks of more complex biomolecular systems-molecular flexibility, competing polymorphs, and intermolecular interactions ranging from weak dispersive to hydrogen bonded and ionic. Importantly, the relative stability of their polymorphs is well established (36)(37)(38). We further assess the temperature and pressure dependence of relative stabilities based on gradients of Gibbs free energies, which correspond to indicators widely used by experimentalists to predict the monotropic or enantiotropic nature of the polymorphs.
Our work complements state-of-the-art CSP methods, which efficiently survey structural space to extract small sets of promising candidate structures using ab initio calculations and/or MLPs (34,35), but struggle to reliably resolve subtle differences in stability among them (22). Combining rigorous free energy calculations, as demonstrated here, with structure searching and inexpensive CSP ranking methods constitutes an avenue to predictive CSP for complex molecular crystals of industrial importance.
Computational Framework and Systems
To predict rigorous relative stabilities, we combine PI thermodynamic integration (29) (referred to as quantum thermodynamic integration [QTI]) in the constant pressure ensemble (thereby accounting for anharmonic quantum nuclear motion and the fluctuations and thermal expansion of the cell) with density-functional-theory (DFT) calculations with the hybrid PBE0 functional (39,40) and the many-body dispersion (MBD) correction of Tkatchenko et al. (41) and Tkatchenko and coworkers (42) (referred to as PBE0-MBD). PBE0-MBD provides an accurate description of intermolecular interactions, as benchmarked using experimental and coupled cluster theory with singlets, doublets, and perturbative triplets [CCSD(T)] lattice energies for various molecular crystals, including form I of benzene and α-glycine (43,44). Since direct calculation of Gibbs free energies using ab initio QTI is prevented by the cost of the required energy and force evaluations (29), ab initio Gibbs free energies are calculated in a four-step process, as depicted schematically in Fig. 1 and detailed further in SI Appendix.

Fig. 1. Schematic representation of the workflow for computing ab initio, quantum anharmonic Gibbs free energies for candidate crystal structures. Upper section shows the main steps: 1) generating ab initio reference data on which to 2) train a combined MLP, which can then be used to 3) compute MLP Gibbs free energies, which one can finally 4) promote to ab initio Gibbs free energies. Lower section (shaded in blue) details the key aspects of how each of these steps is performed in practice.
First, we use a simple strategy to generate a minimal but exhaustive set of unit cell "training configurations," for which we then perform PBE0-MBD calculations: We perform PI simulations based on density-functional tight binding (DFTB) (45) theory for unit cells with perturbed cell parameters. This allows us to gather a large number of configurations, which incorporate quantum nuclear fluctuations and cell flexibility and from which we can distill the most distinct ones using a data-driven approach (46). This strategy leverages the low cost of DFTB and its qualitative accuracy for diverse molecular crystals (47) to avoid the bottleneck that is PBE0-MBD-based configurational sampling. Due to the versatility of DFTB, it can be used to generate robust training data for almost any compound of interest.
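As a rough illustration of this distillation step, the sketch below performs greedy farthest-point sampling on a generic descriptor matrix; the descriptors, distance metric, and selection size are placeholders rather than the settings actually used in this work.

```python
import numpy as np

def farthest_point_sample(descriptors, n_select, seed=0):
    """Greedy farthest-point sampling: pick the configurations that are
    maximally distinct in descriptor space (Euclidean distance)."""
    rng = np.random.default_rng(seed)
    n = descriptors.shape[0]
    selected = [int(rng.integers(n))]            # random starting configuration
    dmin = np.linalg.norm(descriptors - descriptors[selected[0]], axis=1)
    for _ in range(n_select - 1):
        idx = int(np.argmax(dmin))               # farthest point from the current set
        selected.append(idx)
        d_new = np.linalg.norm(descriptors - descriptors[idx], axis=1)
        dmin = np.minimum(dmin, d_new)           # distance to the enlarged set
    return selected

# toy usage: 5000 snapshots, each described by a 64-dimensional feature vector
X = np.random.rand(5000, 64)
train_ids = farthest_point_sample(X, n_select=500)
```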
The subsequent training of the MLPs hinges on identifying the most important "features" of the configurations, fed to the MLPs as input. These features are usually abstract functions quantifying the local density of atoms and require the careful tuning of multiple parameters (46). Here, we render training MLPs for general compounds accessible to nonexperts by automating this procedure using a "size-extensive" data-driven approach, which avoids the manual selection of features based on "prior experience." Combining these first two steps with a "tried and tested" neural network architecture (48-50) greatly simplifies and speeds up the generation of MLPs, while remaining agnostic to the system of study.
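One simple data-driven criterion of this kind, shown purely for illustration, ranks candidate features by leverage scores computed from a truncated SVD of the feature matrix; it is a generic stand-in, not the specific selection scheme used in this work.

```python
import numpy as np

def select_features_by_leverage(F, n_keep, rank=16):
    """Rank feature columns by leverage scores built from the top
    right-singular vectors of the centered feature matrix F."""
    Fc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    V = Vt[:rank].T                      # (n_features, rank) top right-singular vectors
    leverage = np.sum(V**2, axis=1)      # leverage score of each feature column
    return np.argsort(leverage)[::-1][:n_keep]

# toy usage: 2000 atomic environments described by 512 candidate symmetry functions
F = np.random.rand(2000, 512)
kept = select_features_by_leverage(F, n_keep=128)
```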
In a third step, we exploit the orders-of-magnitude lower cost of the resultant MLPs compared to the ab initio reference method, to compute Gibbs free energies for much larger simulation supercells using QTI (29). We account for anisotropic fluctuations of the simulation cell, which are important for flexible functional materials (51), and directly calculate the free energy difference between the harmonic reference systems and the physical, anharmonic system at the PI level, which substantially reduces the complexity and cost compared to the multistep integration performed in ref. 33. We note that the affordability of MLP free energies comes at the price of residual errors with respect to the ab initio reference values due to the imperfect reproduction of the reference PES. These may arise from the short-ranged nature of the MLPs (52), from information lost during the "featurization" of the configurations (53), or from insufficient training data. The typical errors in MLP predictions of configurational energies (Table 1) are small but comparable to the subtle free energy differences between polymorphs. Therefore, in a fourth and final step, we eliminate the associated errors to obtain true ab initio Gibbs free energies by computing the difference between the MLP and PBE0-MBD free energies using free energy perturbation (FEP) (33). All calculations and simulations are performed using readily available and well-documented software, and Jupyter notebooks for analysis are provided in SI Appendix.
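As a concrete illustration of this final promotion step, the sketch below evaluates the standard free energy perturbation estimator ΔG = −(1/β) ln⟨exp[−β(U_ref − U_MLP)]⟩_MLP from paired energy evaluations on MLP-sampled configurations; the units, constants, and toy data are illustrative and not taken from the paper.

```python
import numpy as np

KB = 0.0083144621  # Boltzmann constant in kJ/(mol K)

def fep_correction(u_mlp, u_ref, temperature):
    """Free energy perturbation from the MLP to the reference potential:
    dG = -kT ln < exp[-beta (U_ref - U_mlp)] >_MLP, evaluated in a
    log-sum-exp form for numerical stability."""
    beta = 1.0 / (KB * temperature)
    du = np.asarray(u_ref) - np.asarray(u_mlp)           # per-configuration energy gap
    log_terms = -beta * du
    log_mean = np.logaddexp.reduce(log_terms) - np.log(len(du))
    return -log_mean / beta                               # correction in kJ/mol

# toy usage: 50 configurations sampled with the MLP and re-evaluated with the reference method
rng = np.random.default_rng(1)
u_mlp = rng.normal(0.0, 1.0, 50)
u_ref = u_mlp + rng.normal(0.2, 0.3, 50)
print(fep_correction(u_mlp, u_ref, temperature=300.0))
```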
As an exposé of the universal applicability of this scheme, we predict the relative stabilities of a set of prototypical systems, whose small number and size belie how representative they are of general organic molecular crystals: Benzene is the archetypal rigid, van der Waals bonded molecular crystal, while succinic acid represents general hydrogen-bonded systems, and glycine serves as the prototype of flexible zwitter-ionic systems. This small, "irreducible" set of prototypical systems covers not only the three different types of bonding, but also the chemical space that includes pharmaceuticals such as aspirin and paracetamol. Moreover, molecular flexibility and the large-amplitude curvilinear motion of the amino group in glycine trigger the same pathologies of approximate free energy methods as more complex systems exhibiting free rotation of molecular units (24,29) and serve as a stringent test for stability predictions.
For each compound we compute the free energy differences between the stable ambient-pressure polymorph and its closest experimentally established competitor(s): We consider forms I and II of benzene (36) and α-and β-succinic acid (37) at 100 K and α-, β-, and γ-glycine (54) at 300 K to compare with available calorimetric data (38,55). The nearly orthorhombic simulation supercells shown in Fig. 2, which contain equivalent numbers of molecules for all polymorphs of the same compound, ensure near cancellation of center-of-mass free energies and suffice to converge stabilities with respect to finite-size effects to within 0.1 kJ/mol (SI Appendix).
Ab Initio Thermodynamic Stabilities
As shown in Fig. 3A, the final ab initio Gibbs free energies (shown in red) reproduce the greater stability of form I over form II of benzene and of β-over α-succinic acid, the metastability of β-glycine, and the near degeneracy of α-and γ-glycine (55). Moreover, our Gibbs free energy differences are in agreement with available calorimetry data (38,55) to within statistical and experimental uncertainties.
The QTI approach also yields gradients of Gibbs free energies, including the molar volume, entropy, and heat capacity, which provide an indication of pressure- and temperature-driven changes in relative stability and thus the monotropic or enantiotropic nature of compounds. For instance, since molar volumes are derivatives of the free energy with pressure, we can predict form II of benzene to become thermodynamically stable over the ambient pressure form I at 1.4 GPa (at 100 K), which is in good agreement with the experimentally determined transition pressure of 1.5 GPa (56). Similarly, we determine the entropy of β-succinic acid to be smaller than that of α-succinic acid, making the latter the preferred high-temperature polymorph, in agreement with the experimental phase behavior (57). While in the case of glycine we are able only to predict near degeneracy of α- and γ-glycine at ambient conditions, molar volumes suggest α-glycine to be the most stable phase at high pressures, which is in line with experiments showing that it remains stable up to 23 GPa (54).
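These transition-point estimates follow from a first-order expansion of the Gibbs free energy difference around the simulated state point; the expressions below are our own sketch of that extrapolation, not equations quoted from the paper.

```latex
% Linear extrapolation of the free energy difference between two polymorphs
% from a reference state point (P_0, T_0), using the computed \Delta V and \Delta S:
\Delta G(P, T_0) \approx \Delta G(P_0, T_0) + \Delta V\, (P - P_0)
  \quad\Rightarrow\quad P_c \approx P_0 - \frac{\Delta G(P_0, T_0)}{\Delta V},

\Delta G(P_0, T) \approx \Delta G(P_0, T_0) - \Delta S\, (T - T_0)
  \quad\Rightarrow\quad T_c \approx T_0 + \frac{\Delta G(P_0, T_0)}{\Delta S}.
```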
By comparing rigorous free energies with estimates that exclude nuclear quantum effects (NQEs), anharmonicity, and cell expansion and flexibility, we are able to understand the extent to which these effects and their interplay contribute toward the stability of molecular crystals. Crucially, as shown in Fig. 3B, the size and sign of these effects depend entirely on the compound and the polymorphs at hand, highlighting that rigorous QTI is indispensable for predicting phase stabilities and that molecular crystals are typically stabilized by a nontrivial interplay of different physical effects, whose individual importance is belied by the subtle resultant free energy differences. For instance, the greater stability of form I of benzene hinges on an accurate description of the electronic structure, while NQEs and anharmonicity cancel out almost perfectly and thermal expansion affects both forms similarly. In contrast, in succinic acid NQEs and anharmonicity cooperatively stabilize the α form and thermal expansion differentiates the two polymorphs. In glycine NQEs and thermal expansion differently affect the stability of the α-and β-polymorphs with respect to the γ form, and neglecting any of the three effects would lead to large errors on the scale of the experimental free energy differences.
Meanwhile, the MLP-based stability predictions (shown in blue in Fig. 3A) are only limited by the accuracy, with which the MLPs reproduce the ab initio PES (Table 1), and consequently correctly reproduce the greater stability of form I of benzene compared to form II. At the same time, the incorrect MLP-based stability predictions for succinic acid and glycine highlight the critical importance of the final FEP step. Promoting MLP free energies to the ab initio level by FEP incurs only the cost of a few tens of ab initio energy and force evaluations for configurations sampled by the MLPs. We note that the cost of this step is comparable to that of common equation-of-state calculations and thus constitutes a reliable and computationally efficient means of predicting the relative stability of polymorphs.
Given that errors of 1 kJ/mol are often considered to be within "chemical accuracy," it is worth emphasizing that the compounds considered here are not hand-picked, "pathological" examples, but expected to be representative of many biomolecular compounds. The small free energy differences between polymorphs, which are smaller than k_B T but can be resolved experimentally (38,55) due to the kinetic suppression of interconversion between polymorphs, constitute a very stringent test of our framework and its ability to accurately capture phase stability. By matching the subkilojoule per mole accuracy of calorimetry experiments, it provides a robust foundation for studying transition temperatures, pressures, and rates and permits benchmarking sophisticated electronic structure theories against experiment.

Fig. 3. (A) ... α-, β-, and γ-glycine (G-α, G-β, and G-γ) and α- and β-succinic acid (S-α and S-β) calculated using PBE0-MBD-based MLPs (blue) with the QTI approach and corrected to the ab initio PBE0-MBD DFT level using free energy perturbation (red). Experimental data (38,55) are shown in green. (B) Contributions of quantum nuclei (olive), anharmonicity (gray), and cell expansion and flexibility (pink) to the relative stabilities of the said polymorphs. These have been respectively obtained by comparing Gibbs free energy differences to estimates from a classical thermodynamic integration, a harmonic approximation, and a quantum thermodynamic integration using a fixed 0-K optimized cell.
Comparison with Approximate Approaches
To further highlight the advantages of the approach proposed here over established approximate methods for ranking stabilities in CSP, we assess the limitations of the most widely used approximate methods, prefaced by acknowledging their successes for a wide range of applications (58). We note that the MLPs reproduce the ab initio PESs with sufficient accuracy to assess the impact of approximations to nuclear motion and compare the respective approximate (free) energy differences between polymorphs to the corresponding exact MLP Gibbs free energies.
The current state of the art is to correct (free) energies on the basis of a single-point hybrid-functional DFT calculation for the structure relaxed using semilocal DFT (57). Thermal and quantum nuclear effects are included within a harmonic approximation (HA) (59), while thermal expansion is modeled by relaxing the cell within a quasi-harmonic approximation (QHA) (58). These corrections are generally computed at the semilocal DFT level. As shown in Fig. 4, these approaches neither universally predict the most stable form (as they exhibit errors larger than 1 kJ/mol) nor systematically converge to the full hybrid-functional QHA reference. This highlights the need to go beyond a single-point hybrid-functional DFT correction to semilocal configurational or (quasi-)harmonic free energies to consistently deliver correct stability orders and free energy differences with subkilojoule per mole accuracy. We further note that the above results benefit substantially from the fortuitous cancellation of errors (29), but the residual errors cannot be estimated and apparent physical insights may be misleading.
Since the hybrid-functional-based QHA seems to be competitive with the rigorous PI approach, it is further worthwhile to put the cost of the calculations into perspective. For glycine, as the most costly example, the 4,000 PBE0-MBD calculations on unit cells constituting the reference data for the MLP, the MLP-based PI thermodynamic integration, and the 50 PBE0-MBD calculations on supercells required for the FEP contribute roughly equally to the total cost of around 148,000 core hours per polymorph. For comparison, computing PBE0-MBD HA free energies for the same simulation supercells using finite differences and nondiagonal supercells to probe individual k-points (60), but not leveraging the MLP, would require about three times the core hours. A PBE0-MBD QHA free energy calculation would be an order of magnitude higher in computational cost. Although (Q)HA free energies may also be computed inexpensively using MLPs, they cannot be promoted to their first-principles counterparts as straightforwardly and cost-effectively as exact MLP free energies. Despite a focus on universal applicability over efficiency, the cost of the above rigorous Gibbs free energies is thus small compared to the estimated cost of calculating free energies within the (Q)HA using hybrid-functional DFT.
Discussion
The ability of our approach to predict free energy differences with subkilojoule per mole accuracy renders it valuable in identifying "competing" polymorphs with similar lifetimes to the most stable form. It bridges the gap between theory and experiments by allowing direct comparison of free energy differences with calorimetric data-a significant improvement over current approaches, which require error-prone ad hoc extrapolations to 0 K (61). Moreover, 1) rigorous predictions of the entropy, molar volume, and heat capacity and 2) robust MLPs with ab initio accuracy are complements to our approach. The former are directly related to "thermodynamic rules of thumb," which are widely used by experimentalists to assess stability trends (13), while the latter enable structure determinations for experimental samples based on NMR (62) and vibrational spectra (63). Furthermore, a rigorous account of thermally induced phase transitions can be obtained without repeating the procedure at every state point. The combination of QTI with parallel tempering (64) can enhance the efficiency of performing single-temperature (or pressure) sweeps, yielding full phase diagrams, while also sampling "slow" degrees of freedom such as conformational transitions, inaccessible to approximate methods (29). All these features are highly sought after by the pharmaceutical industry, as they are made possible at a manageable computational cost.
Our protocol easily extends to stability predictions for other complex molecular crystals, as its data-driven nature accelerates MLP development, irrespective of the material under consideration. As a proof of this, we have developed MLPs for polymorphs of three complex pharmaceuticals-aspirin, paracetamol, and XXIII, the most complicated system (58) from the latest blind test of organic crystal structure prediction methods (22)-and tested them by performing PI simulations in the constant pressure ensemble, as required for QTI (SI Appendix). Although these MLPs have been trained on DFTB data as a proof of concept and consequently lack chemical accuracy, they remain robust and capture the molecular flexibility of these systems. Given that dynamic disorder, thermal expansion, conformational relaxation of the molecular units, and potential (dynamic) instabilities of candidate polymorphs are automatically accounted for within the QTI approach, we expect stability predictions to be very robust with respect to the nature of the candidate polymorphs and thus directly applicable to said pharmaceutical and blind-test systems.
In applications involving large numbers of polymorphs or polymorphs with large unit cells, suitable sets of reference configurations can be generated based on configurations of liquid or amorphous states at different pressures (65). This exploits that the accuracy of MLPs, which predict energies and forces on the basis of local contributions, rests on having reference data for all distinct local atomic environments (65), rather than for all polymorphs of interest. The computational cost of building the training set then remains largely independent of number and unit cell size of the polymorphs of interest. For large numbers of polymorphs the cost per polymorph thus effectively reduces to that of the MLP-based thermodynamic integration and of FEP. In practice, the computational cost of FEP can be reduced by running only as many ab initio calculations as required to reduce the statistical error to below the predicted free energy differences between polymorphs. For instance, fewer than a handful of PBE0-MBD calculations would have sufficed to conclusively establish that form I of benzene is more stable than form II. Indeed, subject to estimates of the uncertainty of the MLP predictions (66,67), it may be possible to omit FEP altogether. Recent work on the use of higher body-order correlations in atomistic representations (68) and on including long-ranged interactions (52) promises to enable subkilojoule per mole accuracy, eliminating the need for FEP even in applications involving subtle free energy differences.
Finally, the empiricism involved in selecting the exchange-correlation functional and dispersion correction used in the DFT calculations can be removed by using PESs evaluated using beyond-DFT electronic structure theory. Crucially, our scheme extends naturally to predictions of Gibbs free energies based on quantum-chemical electronic structure methods (61,69) such as second-order Møller-Plesset perturbation theory, random-phase approximation, coupled cluster, or quantum Monte Carlo, some of which are systematically improvable and can thereby be rendered truly ab initio (70). While these come at an increased computational cost per calculation, recent developments in machine learning for materials science (71) promise to minimize the number of quantum-chemical calculations required to train accurate MLPs and thus to keep the overall costs in check. Indeed, recent work demonstrates the corresponding construction of robust and accurate MLPs for CCSD(T) reference data (70).
In conclusion, marrying state-of-the-art electronic structure, free energy, and machine-learning methods in a widely applicable framework enables rigorous, predictive free energy calculations for complex (organic) molecular crystals at general thermodynamic conditions. The unprecedented accuracy of our approach sets the stage for future studies of kinetic effects as well as full p-T phase diagrams in a reliable and computationally efficient manner, paving the way for guiding experimental synthesis of such materials. The protocol and the scripts provided in SI Appendix permit its application practically out of the box. Determining the relative stability of generic polymorphic compounds is a recurrent problem across different domains of science and engineering-from nucleation theory to the practical design of pharmaceuticals-and we hope that the robust and easy-to-use nature of our end-to-end protocol will facilitate reliable, accurate free energy calculations beyond those of the computational chemistry community.
Methods
Machine-Learning Potentials. We have constructed Behler-Parrinello-type neural network potentials (48) for benzene, glycine, and succinic acid using the n2p2 code (72). In this framework, structures are encoded in terms of local atom-centered symmetry functions (SFs) (48). Initial sets of SFs were generated following the recipe of ref. 73. Based on the same reference structure-property data subsequently used for training, the 128 (benzene and succinic acid) and 256 (glycine) most informative SFs were extracted via principal covariates CUR selection (74).
Our data are based on Langevin-thermostated PI NVT simulations at 300 K, performed using the i-PI force engine (28) coupled to DFTB+ (75) calculations with the 3ob parameterization (76). For each polymorph multiple cells were simulated, rescaling the experimental cell lengths and angles by up to 10 and 5%, respectively. The trajectories of PI replicas for all polymorphs of a given compound were concatenated and farthest-point sampled (77)(78)(79) to extract the most distinct configurations for feature selection and MLP training. Subsequently, ab initio reference energies and forces were evaluated for said configurations.
To minimize the computational cost of the reference calculations, the MLPs are composed of a baseline potential trained to reproduce energies and forces from more affordable PBE-DFT (80) calculations with a Tkatchenko-Scheffler (TS) dispersion correction (81) (PBE-TS) and a Δ-learning (82) correction trained (on 10 times fewer training data) to reproduce the difference between the baseline and more expensive calculations with the hybrid PBE0 functional (39,40).

Free Energy Methods. For each polymorph the average cell was determined using MLP-based PI NST simulations (88) at the desired inverse temperature β, accounting for anharmonic quantum nuclear motion and anisotropic cell fluctuations. The difference between the Gibbs and Helmholtz free energies is then computed from an MLP-based PI NPT simulation based on this average cell, in terms of ρ(V|P_ext, β), the probability of observing the cell volume V at external pressure P_ext and inverse temperature β (Eq. 1). A standard Kirkwood construction (89) that transforms the Hamiltonian from a harmonic to an anharmonic one provides the difference between the anharmonic and the harmonic quantum Helmholtz free energies (Eq. 2), where Ĥ_λ is the Hamiltonian of the MLP alchemical system with the potential U_λ ≡ λ U_MLP + (1 − λ) U_MLP^har, and ⟨·⟩_λ is the ensemble average computed from a PI NVT simulation. The reference absolute harmonic Helmholtz free energy is obtained from a harmonic approximation (Eq. 3), where ω_i is the frequency of the i-th phonon mode. In a final step, the ab initio Gibbs free energy is obtained from its MLP counterpart by free energy perturbation. For systems exhibiting large-amplitude curvilinear motion, the harmonic-to-anharmonic thermodynamic integration can be performed efficiently using a Padé interpolation formula (24).
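For orientation, the standard forms of the relations just described read approximately as follows; this is our sketch in conventional notation, and the exact expressions of the original work (including path-integral estimators and prefactors) may differ.

```latex
% Harmonic-to-anharmonic (Kirkwood) thermodynamic integration (Eq. 2-type relation):
F_{\mathrm{anh}} - F_{\mathrm{har}} =
  \int_0^1 \mathrm{d}\lambda\,
  \big\langle U_{\mathrm{MLP}} - U^{\mathrm{har}}_{\mathrm{MLP}} \big\rangle_{\lambda},
  \qquad U_\lambda \equiv \lambda U_{\mathrm{MLP}} + (1-\lambda)\, U^{\mathrm{har}}_{\mathrm{MLP}}

% Quantum harmonic reference free energy (Eq. 3-type relation):
F_{\mathrm{har}} = \frac{1}{\beta} \sum_i \ln\!\left[ 2 \sinh\!\left( \frac{\beta\hbar\omega_i}{2} \right) \right]

% Free energy perturbation from the MLP to the ab initio potential:
G_{\mathrm{DFT}} \simeq G_{\mathrm{MLP}}
  - \frac{1}{\beta} \ln \Big\langle e^{-\beta\,( U_{\mathrm{DFT}} - U_{\mathrm{MLP}} )} \Big\rangle_{\mathrm{MLP}}
```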
Understanding the Role of Different Effects. We disentangle the role of anharmonicity directly from Eq. 2 and that of thermal expansion by comparing the Helmholtz free energies from Eq. 2 for the variable-cell geometry-optimized and mean PI NST cells. The role of the quantum nature of nuclei is quantified by comparing the classical and quantum Gibbs free energies. We calculate the former using the Helmholtz free energy of the classical harmonic oscillator as a reference and evaluating Eqs. 1 and 2 using classical molecular dynamics.
Free Energy Gradients. Volume and entropy are related to gradients of the free energy, V = (∂G/∂P)_{N,T} and S = −(∂G/∂T)_{N,P} [4]. Differences between equilibrium (molar) volumes of polymorphs can directly be observed in PI NPT simulations. Meanwhile, entropic differences can be computed from S = (H − G)/T, with G from Eq. 1 and the enthalpy H from the associated PI NPT simulation. Linear extrapolation then permits estimating whether and at which pressures P_c and temperatures T_c the Gibbs free energy difference between polymorphs will vanish and a phase transition should be expected.

Data Availability. Anonymized data (structures, scripts, and codes to reproduce all the results) have been deposited in https://github.com/venkatkapil24/data_molecular_fluctuations. All other study data are included in this article and/or SI Appendix.
Weak interactions of kaons from lattice QCD
The methods of lattice QCD and available computer resources are now sufficient that predictions for many of the Standard Model properties of kaons can be made from first principles with accurately bounded uncertainties. We discuss two relatively new areas where lattice methods are having or will soon have a large impact: calculation of the two complex amplitudes A0 and A2 describing the decay of the kaon into two-pions with I = 0 and 2 and calculation of long-distance contributions to second-order electro-weak processes including the KL – KS mass difference ΔMK, the CP-violating parameter ϵK and certain rare kaon decays.
Introduction
Over the past thirty years lattice QCD has evolved from providing a first non-perturbative demonstration of confinement and chiral symmetry breaking in QCD to allowing much of low energy QCD to be accurately computed from first principles with controlled uncertainties. By replacing the space-time continuum by a regular, four-dimensional, hyper-cubic lattice and using powerful Monte Carlo methods to evaluate the Euclidean Feynman path integral which defines QCD, the previously intractable complexities of low-energy, strongly-coupled QCD are reduced to the problem of obtaining very large computational resources and developing increasingly powerful and efficient numerical strategies -two areas in which advancing technology and the growing importance of computational methods throughout science work to our advantage.
In the past two years, lattice QCD calculations have crossed an important threshold and can now determine directly the physics of the up, down and strange quarks using their physical masses, or in the case of most calculations, a single, isospin-symmetric average mass of ≈ 3 MeV (expressed in the MS renormalization scheme at 3 GeV) for the up and down quarks, giving three degenerate pions of mass 135 MeV. Earlier calculations used light quark masses as large as 30 or 40 MeV and relied on chiral perturbation theory to extrapolate to the physical value, introducing significant, uncontrolled errors. This ability to work at a physical value for the light quark mass is especially advantageous for calculations performed with chiral fermions where the light quark mass can be the largest source of unphysical, chiral symmetry breaking. For example, the calculations of the RBC and UKQCD collaborations typically use the domain wall fermion formulation in which an extra fifth dimension, whose size varies between 12 and 32 lattice units, literally separates the right- and left-handed fermion chiralities. While the Wilson and staggered fermion formulations achieve physical chiral symmetry in the continuum limit, the breaking of chiral symmetry at short distances complicates the definition of the ∆S = 1 weak Hamiltonian where the number of operators whose mixing must be controlled is dramatically reduced by chiral symmetry.
In addition, the explicit breaking of chiral symmetry by lattice artifacts for Wilson and staggered fermions is often the largest source of finite lattice spacing errors, errors which are absent when a chiral formulation is used. This is illustrated in Figure 1, where dimensionless ratios for a number of quantities are compared between two calculations, with inverse lattice spacings of 1/a = 1.73 GeV and 2.28 GeV. These two calculations give results which agree on the ≈ 1% level for lattice spacings which are not especially small by current standards.

Figure 1. Comparison of a series of dimensionless ratios obtained from calculations at two inverse lattice spacings: 1/a = 1.73 GeV and 2.28 GeV. The ratio of each ratio is plotted on the y axis. The subscripts indicate the masses of the quarks used, where h indicates a near strange quark mass while l is a less massive quark. Further explanation can be found in Ref. [1].
This combined ability to work at physical quark masses and the empirically small finite lattice spacing errors found with chiral fermions implies that results accurate on the percent level can be obtained from a single lattice QCD calculation without chiral or even continuum extrapolations. For example, in a recent calculation on a 48 3 × 96 lattice with 1/a = 1.73 GeV and light and strange quark masses chosen close to their physical values, we have obtained the result f π = 130.7(2) MeV directly from the computer with no extrapolations or corrections beyond the normalization factor for the axial current. The 0.2% error is statistical and a satisfactory agreement with the experimental value of f π = 130.4 MeV is seen even for this simple, direct result. While added calculations with a smaller lattice spacing and heavier quark masses allow sub-percent corrections to be made that adjust for the small mismatch of the input and physical quark masses and O(a 2 ) discretization errors [2], a result for f π with percent accuracy can be obtained from a direct calculation of this quantity on a single lattice QCD ensemble.
Since the methods and available resources for lattice QCD are now sufficient to allow a basic quantity such as f π to be so easily computed, it is natural to turn to more complex quantities which are more difficult to compute using lattice methods but also less well known and possibly of greater fundamental interest. In this paper we describe progress in two such directions: the calculation of the complex ∆I = 3/2 and 1/2, K → ππ decay amplitudes A 2 and A 0 and the calculation of the long-distance contributions to K 0 − K 0 mixing and rare kaon decays.
Computing A_0 and A_2 using lattice QCD

Three ingredients are needed for a first-principles lattice QCD calculation of amplitudes contributing to K → ππ decay: the properly normalized, four-quark, ∆S = 1 weak Hamiltonian H_W^{∆S=1}; the ability to exploit the energy quantization of two-pion, finite-volume states to create a final π − π state with energy equal to M_K; and an understanding of the finite volume effects which must be removed to obtain a physical, infinite-volume decay amplitude. Fortunately, reliable techniques are now available to address each of these issues. The effective ∆S = 1 weak Hamiltonian which should describe K → ππ in the Standard Model is known from pioneering work in the 1970's and has been described in a comprehensive way [3] that can be accurately adapted to support lattice QCD calculations. The Rome-Southampton non-perturbative renormalization scheme [4] can be used to express the lattice-regularized versions of the seven independent four-quark operators which enter H_W^{∆S=1} in terms of operators that have a well-defined continuum limit and can be related using continuum QCD perturbation theory to the MS renormalization scheme used to determine the Wilson coefficients which appear in continuum expressions for H_W^{∆S=1}. These methods have been used successfully in recent lattice calculations [5] and are expected to be accurate at the 10-20% level.
This uncertainty is caused by the use of QCD perturbation theory at the scale of ≈ 2 GeV and the use of perturbation theory to remove the charm quark to obtain operators appropriate for a three-flavor theory. While this level of accuracy may be appropriate for present calculations, it can be improved to whatever extent is required by simply increasing the energy scale at which the lattice and continuum calculations are compared. A first step is to include the charm quark in the lattice calculation. By using a four-flavor theory we will avoid the problem of using perturbation theory at the charm quark scale and the uncertain validity of assuming that the charm quark mass is much larger than the energy scale relevant for K meson decay. Of course, if the charm quark is to be included in a lattice calculation, a lattice spacing must be used that is sufficiently small that O ( (m c a) 2 ) errors can be controlled. The second step in eliminating potential errors caused by the use of perturbation theory is to exploit "step-scaling" [6] and carry out the Rome-Southampton operator renormalization at a series of smaller lattice spacings and corresponding smaller lattice volumes until the scale of momentum employed is sufficiently large to guarantee sufficiently small perturbative errors. Note, this is much less demanding than performing the entire K → ππ calculation at such small lattice spacings.
The next ingredient in this calculation is the creation of a physical two-pion final state. This can be done in a lattice calculation by following Lellouch and Lüscher [7], exploiting the finite-volume quantization of the two-pion energy and adjusting the volume so that the energy of a finite-volume, excited two-pion state matches M K . In a Euclidean-space Green's function calculation the contribution of an excited two-pion state falls exponentially with increasing time relative to states with lower energy. This difficulty can be partially overcome if we introduce boundary conditions chosen to select the pion momentum that is present in the two-pion state of interest [8,9]. For example, Figure 2 suggests the effects on the pion wave function of boundary conditions which are anti-periodic in one of the three spatial directions.
Of course, in a lattice calculation we can only impose boundary conditions on the underlying quarks and not directly on the pions. For the case of the I = 2, two-pion final state this is not difficult since we can use isospin symmetry to relate the amplitude of interest to the unique state in which both pions have charge +1 and then impose anti-boundary conditions on the down anti-quark, leaving the up quark to obey periodic boundary conditions. While this condition breaks isospin symmetry it cannot alter the I = 2 character of the finite-volume state in question because that state is uniquely determined by its charge.
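To make the role of the boundary conditions concrete, the following schematic (our illustration, not a result quoted from this paper) shows how anti-periodic boundary conditions in one direction raise the lowest two-pion energy toward the kaon mass:

```latex
% Anti-periodic boundary conditions in one direction of a box of side L
% quantize that momentum component in odd multiples of \pi/L:
p_k = \frac{(2k+1)\,\pi}{L}, \qquad k = 0, 1, 2, \dots

% The lowest back-to-back two-pion state, with each pion carrying |p| = \pi/L,
% is tuned to the kaon mass by the choice of volume:
E_{\pi\pi} \simeq 2\sqrt{m_\pi^2 + (\pi/L)^2} = M_K
\quad\Rightarrow\quad
L \simeq \frac{\pi}{\sqrt{M_K^2/4 - m_\pi^2}}\ \ \text{(up to interaction corrections)}.
```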
The problem of imposing boundary conditions which will insure that the lowest energy, I = 0, π − π state will have an energy equal to that of the K meson is much more challenging. Since the I = 0 state has the same electric charge as that with I = 2 we must impose boundary conditions which are consistent with isopsin symmetry to avoid mixing these two states. This can be done by imposing G-parity boundary conditions [8,10] which have the unusual feature of mixing particle and anti-particle. This is illustrated in Figure 3.
In a calculation in which the effects of finite volume are being critically used to create a final two-pion state with the proper energy, we should be concerned that other finite-volume effects may introduce significant systematic errors. Fortunately, in the paper [7] pointing out the utility of exploiting finite volume to create a physical two-pion state, Lellouch and Lüscher also provide a concrete formula for removing the leading finite-volume effects, so that for volumes of linear extent L with m_π L ≥ 4 one expects sub-percent residual finite-volume corrections.

Figure 3. Diagram illustrating the unusual features of G-parity boundary conditions: a u quark passes through the boundary on the right and emerges on the left as a d while the u − d, π+ state in the center of the box is represented by a d d pair when that pion state straddles the boundary.
While the easier calculation of the ∆I = 3/2 amplitude was a pioneering effort in 2012 [11], it is now a well developed and routine part of the computational package run by the RBC and UKQCD Collaborations at increasingly small lattice spacing [12]. Our present preliminary result for A_2 has been obtained at physical quark mass in the continuum limit. The real part of A_2 agrees reasonably well with the experimental value of 1.436(4) × 10^−8 GeV, while Im(A_2) cannot be directly measured in experiment and has not been previously computed.
Calculation of the ∆I = 1/2 amplitude is much more difficult because the quantum numbers of the I = 0, π − π final state are the same as those of the vacuum and the G-parity boundary conditions still allow the vacuum as a final state. Although this state can be subtracted, the noise left behind grows exponentially as e^{M_K t} relative to the π − π signal, when the final state propagates for the time t.
However, after much preparation we have begun a realistic calculation of A_0 using G-parity boundary conditions in all three spatial directions on a 32³ × 64 lattice with 1/a = 1.37 GeV and physical values for the light and strange quark masses. All seven operators entering H_W^{∆S=1} are being evaluated. The two pions are absorbed on time slices separated by four time units and hydrogen atom wave functions with a radius of two lattice units are used for each of the pion states, an arrangement which is realized by using all-to-all propagators. These two features of the calculation have been shown in earlier studies [13,14] to give improvements which reduce the noise coming from the vacuum subtraction by more than a factor of four. This calculation is now underway and we expect to have results with 20-30% errors within the coming year. If successful, this will give the first Standard Model prediction of the direct CP-violating parameter ϵ′. For the real parts of A_0 and A_2 we can already compare unphysical results for Re(A_0) with the physical calculation of Re(A_2) and recognize an emerging explanation for the ∆I = 1/2 rule [15]: the two amplitudes which add to give Re(A_0) cancel when combined to form Re(A_2).
Long distance contributions to second-order weak processes
Given the ability to work with physical quark masses and to control all systematic errors it is natural to ask if there are additional areas in kaon physics where lattice QCD might advance our understanding of Standard Model processes. One such new and promising direction is the use of lattice QCD to calculate what are often referred to as long distance, second-order weak phenomena. Such phenomena include the K L − K S mass difference ∆M K , the contribution of the up and charm quark loops to the indirect CP-violation parameter ϵ K and certain rare kaon decays where the change of quantum numbers requires that the decay occur at second order in the weak interactions. Such second-order weak processes are of great interest because their small size implies increased sensitivity to other, non-Standard-Model phenomena. Such second order weak phenomena usually involve internal loops containing W bosons and receive contributions from both short distances on the order of the inverse W or top quark mass and long distances on the order of the inverse charm quark mass 1/m c or the QCD scale 1/Λ QCD . Here the notions of short and long distances are best defined as the scales at which QCD perturbation theory is or is not applicable. While in the past, the charm quark scale has been included in the short-distance category, recent perturbative studies of ∆M K [16] which is dominated by distances of order 1/m c suggest that non-perturbative methods are required even at this scale to obtain reliable results.
The largest contribution to ϵ_K comes from a top-quark loop, which implies that at long distances this second-order weak process can be expressed as a local ∆S = 2 operator multiplied by a Wilson coefficient which can be accurately computed in perturbation theory. However, up and charm quarks contribute at the few-percent level and such amplitudes involve two local W exchanges (each accurately represented by an H_W^{∆S=1} or H_W^{∆C=1} vertex), which may be separated by distances of the order of 1/Λ_QCD. For ∆M_K the top quark is suppressed by the relatively small size of the real parts of its CKM matrix elements and the dominant contribution comes from the long-distance contributions of up and charm quarks.
While non-trivial, the computation of such long-distance mixing effects is possible using lattice QCD [17]. The challenge of such calculations arises both from the complexities of the secondorder amplitudes involved and the fact that the calculation is performed in Euclidean space. The quantum mechanical strategy underlying the calculation is straight-forward. We compute the matrix element between K 0 and K 0 as the product of two, four-quark, weak operators integrated over a volume of time extent T as is illustrated in Figure 4. In such a Euclidean time calculation there will be a number of physical process that contribute. The process of interest is simple propagation of a K-meson state with the exponential time dependence e −(M K +∆M K )T which when evaluated at second order in ∆H W will give the term ∆M K T , linear in T , from which ∆M K can be easily extracted. Unfortunately this term of interest must be distinguished from exponentially larger terms in which the factors of H W allow the energy-non-conserving decay of the kaon state to an intermediate vacuum, single pion state or π − π state with energy below M K . For example, the contribution of an intermediate, single-pion state will fall much less rapidly with increasing T as e −MπT . However, these larger terms can be computed independently and subtracted in a correlated fashion, yielding a linear term which can be accurately identified as shown in Figure 5. The data presented in this figure comes from a complete calculation [18], including all graphs, which obtains ∆M K with a ≈ 15% statistical error but which must be repeated at smaller lattice spacing if the O ( (m c a) 2 ) discretization errors associated with the charm quark mass are to be controlled. A similar calculation of the long-distance contributions to ϵ K has now been started. This calculation is more difficult because there is reduced GIM suppression and a nonperturbative subtraction combined with a perturbative correction is needed to properly join the non-perturbative long-distance with the perturbative short-distance results.
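Schematically, and in our own notation rather than necessarily that of the original analysis, the second-order perturbation-theory expression for ∆M_K and the structure of the integrated Euclidean correlator described above are:

```latex
% Second-order weak contribution to the K_L - K_S mass difference
% (sum over intermediate states n; a principal value is understood near E_n = M_K):
\Delta M_K = 2 \sum_n
  \frac{\langle \bar K^0 | H_W | n \rangle \langle n | H_W | K^0 \rangle}{M_K - E_n}

% Double time integral of the four-point function over a window of length T:
\mathcal{A}(T) \;\propto\;
  \sum_n \frac{\langle \bar K^0 | H_W | n \rangle \langle n | H_W | K^0 \rangle}{M_K - E_n}
  \left( -T + \frac{e^{(M_K - E_n) T} - 1}{M_K - E_n} \right)

% Intermediate states with E_n < M_K give exponentially growing pieces that must be
% subtracted before the coefficient of the term linear in T is identified
% (up to normalization) with \Delta M_K.
```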
For rare kaon decays such as K_L → π0 ℓ+ℓ− or K+ → π+νν̄, such non-perturbative, long-distance contributions are important at the few percent level and their calculation using lattice methods promises to extend the physics reach of experiments studying these decays. Calculation of such decays using lattice QCD [19] is now practical and exploratory calculations are now being carried out by RBC and UKQCD. These decays are more accessible to lattice QCD than the K → ππ calculations described in the previous section because of the absence of final state interactions, allowing the final state particles to be directly assigned the momenta carried by the physical decay products. However, the diagrams that must be evaluated are more complex and the presence of additional, Euclidean-time processes which are exponentially larger than the transition of interest (described above for the calculation of ∆M_K) must be overcome.
Conclusions
The ability to work directly with physical quark masses and to employ a fermion formulation which respects chiral symmetry makes possible the calculation of a variety of important quantities in kaon physics such as the complex, two-pion decay amplitudes A 0 and A 2 , the K L − K S mass difference and the long distance contributions to ϵ K and rare kaon decays. Current calculations and future calculations at sufficiently small lattice spacing to allow accurate treatment of the charm quark, open the possibility to discover new phenomena beyond those predicted by the Standard Model at increasingly high energy and with increasing precision.
Nonparametric Sample Splitting
This paper develops a threshold regression model, where the threshold is determined by an unknown relation between two variables. The threshold function is estimated fully nonparametrically. The observations are allowed to be cross-sectionally dependent and the model can be applied to determine an unknown spatial border for sample splitting over a random field. The uniform rate of convergence and the nonstandard limiting distribution of the nonparametric threshold estimator are derived. The root-n consistency and the asymptotic normality of the regression slope parameter estimator are also obtained. Empirical relevance is illustrated by estimating an economic border induced by the housing price difference between Queens and Brooklyn in New York City, where the economic border deviates substantially from the administrative one.
Introduction
This paper develops a regression model whose coefficients can vary over different regimes or subsamples (i.e., a threshold regression model), where the subsample classes are determined by some unknown relation between two variables. More precisely, we consider a model given by y_i = x_i'β_0 + x_i'δ_0 1[q_i ≤ γ_0(s_i)] + u_i for i = 1, . . . , n, in which the marginal effect of x_i on y_i switches between β_0 and (β_0 + δ_0) depending on whether q_i ≤ γ_0(s_i) or not. The threshold function γ_0(·) is unknown, and the main parameters of interest are β_0, δ_0, and γ_0(·). The novel feature of this model is that the sample splitting is determined by an unknown relation between two variables q_i and s_i, and their relation is characterized by the nonparametric threshold function γ_0(·).
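To fix ideas, here is a minimal simulation of the sample-splitting model as reconstructed above, with a nonparametric threshold γ_0(s) = sin(s)/2, the same shape used in the paper's Monte Carlo design. The coefficient values and error variance below are illustrative choices, not the paper's.

```python
# Minimal simulation of y_i = x_i'beta0 + x_i'delta0 * 1[q_i <= gamma0(s_i)] + u_i
# with gamma0(s) = sin(s)/2 (a sketch; parameter values are illustrative).
import numpy as np

rng = np.random.default_rng(1)
n = 500
beta0  = np.array([1.0, 1.0])   # hypothetical regime coefficients
delta0 = np.array([0.0, 1.0])   # hypothetical coefficient change

s = rng.uniform(-2, 2, n)                 # "longitude"-type running variable
q = rng.uniform(-2, 2, n)                 # "latitude"-type running variable
x = np.column_stack([np.ones(n), rng.normal(size=n)])
u = rng.normal(scale=0.5, size=n)

gamma0 = np.sin(s) / 2
regime = (q <= gamma0).astype(float)      # 1[q_i <= gamma0(s_i)]
y = x @ beta0 + (x @ delta0) * regime + u
print("share of observations below the threshold:", regime.mean())
```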
In contrast, the classical threshold regression or structural break models assume γ_0(·) to be a constant and consider the sample splitting induced by whether a scalar running variable exceeds a certain constant threshold. Examples include, among others, Andrews (1993), Andrews and Ploberger (1994), Bai (1997), Bai and Perron (1998), Bai, Lumsdaine, and Stock (1998), Qu and Perron (2007), Elliott and Müller (2007), Chen and Hong (2012), Elliott and Müller (2014), Elliott, Müller, and Watson (2015), and Eo and Morley (2015) for structural break models; and Hansen (2000), Caner and Hansen (2004), Seo and Linton (2007), Lee, Seo, and Shin (2011), Li and Ling (2012), Yu (2012), Lee, Liao, Seo, and Shin (2018), and Yu and Fan (2019) for threshold regression models. This paper contributes to the literature in two ways. First, we formulate the threshold through an unknown relation among variables. The existing literature on sample splitting typically assumes γ_0(·) to be a fixed constant or a linear combination of variables. In contrast, we leave the threshold function γ_0(·) fully nonparametric as long as it is a smooth function. This specification can cover interesting cases that have not been studied. For example, we can consider the case in which the threshold is heterogeneous and specific to each observation i, if we set γ_0(s_i) = γ_0i; or the case in which the threshold is determined by some conditional moment, γ_0(s_i) = E[q_i|s_i]. Apparently, when γ_0(s) = γ_0 or γ_0(s) = γ_0 s for some parameter γ_0 and s ≠ 0, it reduces to the standard threshold regression model, where the threshold is determined by the ratio q_i/s_i in the latter case.
Second, we let the variables be cross-sectionally dependent, which has not been considered in the threshold model literature. More precisely, we consider strong-mixing random fields as in Bolthausen (1982). This generalization allows us to study sample splitting over a random field. For instance, if we let (q_i, s_i) correspond to the geographical location (i.e., latitude and longitude on the map), then the threshold 1[q_i ≤ γ_0(s_i)] identifies the unknown border yielding a two-dimensional sample splitting. In more general contexts, the model can be applied to identify social or economic segregation over interacting agents.
The main results of this paper can be summarized as follows. First, we develop a two-step estimator of this semiparametric model, where we estimate γ_0(·) using local constant estimation. Second, under the shrinking-threshold setup (e.g., Bai (1997), Bai and Perron (1998), and Hansen (2000)) with δ_0 = c_0 n^{-ε} for some c_0 ≠ 0 and ε ∈ (0, 1/2), we show that the nonparametric estimator γ̂(·) is uniformly consistent and that (β̂, δ̂) is root-n consistent. The uniform rate of convergence and the pointwise limiting distribution of γ̂(·) are also derived. In particular, we find that γ̂(·) is asymptotically unbiased even if the optimal bandwidth is used. This feature is novel in comparison with the existing literature on kernel estimation. Since the nonparametric function γ_0(·) enters an indicator function, it causes additional technical challenges and the proofs are nonstandard. We also develop a pointwise specification test of γ_0(s) for given s (i.e., a test for the null hypothesis H_0 : γ_0(s) = γ*(s)). Simulation studies show its good finite-sample performance. Third, we extend the basic threshold regression model to estimate threshold contours or sample-splitting circles by combining estimates of γ_0(·) over artificially rotated coordinates. Fourth, as an empirical illustration, we consider (q_i, s_i) as geographic location indices (i.e., latitude and longitude) and examine the border between the Brooklyn and Queens boroughs in New York City. In particular, we estimate an unknown economic border that splits these two boroughs by different levels of housing-price elasticity. The economic border turns out to be substantially different from the current administrative one.
The rest of the paper is organized as follows. Section 2 summarizes the model and our estimation procedure. Section 3 derives asymptotic properties of the estimators and develops a likelihood ratio test of the threshold function. Section 4 describes how to extend the main model to estimate a threshold contour. Section 5 studies small-sample properties of the proposed statistics by Monte Carlo simulations. Section 6 applies the results to housing price data to identify an unknown economic border. Section 7 concludes the paper with some remarks. The main proofs are in the Appendix and all the omitted proofs are collected in the supplementary material.
We use the following notation. Let →_p denote convergence in probability, →_d convergence in distribution, and ⇒ weak convergence of the underlying probability measure as n → ∞. Let ⌊r⌋ denote the largest integer smaller than or equal to r and 1[A] the indicator function of a generic event A. Let ‖B‖ denote the Euclidean norm of a vector or matrix B, and C a generic constant that may vary over different lines.
Note that γ̂(s) is basically a constant threshold estimator in a neighborhood of the given point s. Therefore, γ̂(s) can be naturally seen as a local version of the standard (constant) threshold regression estimator. Second, to estimate the parametric components β_0 and δ_0, we estimate β_0 and δ*_0 = β_0 + δ_0 by least squares using only the observations whose q_i lies at least ∆_n away from the estimated threshold on the respective side of γ̂_{-i}(s_i), for some constant ∆_n > 0 satisfying ∆_n → 0 as n → ∞, which is defined later. We use the leave-one-out estimator γ̂_{-i}(s) in the first-step estimation. The change size δ_0 can then be estimated as δ̂ = δ̂* − β̂. We first assume the conditions for identification. Let Q be the support of q_i.
(iv) q i is continuously distributed and its conditional density f (q|s) is bounded away from zero for all (q, s) ∈ Q × S.
Assumption ID-(i) excludes endogeneity. Assumption ID-(ii) is the rank condition that yields the identification of the parameters β_0 and δ_0. Assumption ID-(iii) requires that the threshold γ_0(s) lie in the interior of the support of q_i for any s ∈ S and that δ_0 ≠ 0, which yields that the nonparametric threshold function γ_0(·) is identified pointwise under Assumption ID-(iv).

Theorem 1 Under Assumption ID, the threshold function γ_0(·) and the parameters (β_0, δ_0) are uniquely identified.
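The two-step procedure described above can be sketched in code. The implementation below is only illustrative: it uses the model form reconstructed earlier, a plain grid search over observed q values for the local-constant threshold step, and a Gaussian kernel; the authors' exact objective function and tuning choices are not reproduced, and the assignment of β to the q > γ̂ regime follows the reconstructed indicator 1[q ≤ γ_0(s)].

```python
# Rough sketch of the two-step estimator (illustrative, not the authors' code).
import numpy as np

def gauss_kernel(t):
    return np.exp(-0.5 * t**2) / np.sqrt(2 * np.pi)

def local_threshold(y, x, q, s, s0, bn):
    """Local-constant threshold estimate at s0: grid over observed q values, fit
    kernel-weighted least squares in each regime, keep the SSR-minimizing split."""
    w = gauss_kernel((s - s0) / bn)
    best_gamma, best_ssr = None, np.inf
    for gamma in np.unique(q):
        left = q <= gamma
        ssr = 0.0
        for mask in (left, ~left):
            if mask.sum() < x.shape[1] + 1:      # skip degenerate splits
                ssr = np.inf
                break
            Xw = x[mask] * np.sqrt(w[mask])[:, None]
            yw = y[mask] * np.sqrt(w[mask])
            coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
            ssr += np.sum((yw - Xw @ coef) ** 2)
        if ssr < best_ssr:
            best_gamma, best_ssr = gamma, ssr
    return best_gamma

def second_step(y, x, q, s, bn, delta_n):
    """Estimate beta (regime q > gamma_hat + Delta_n) and delta = delta* - beta
    (regime q <= gamma_hat - Delta_n), using leave-one-out threshold estimates."""
    gam = np.array([local_threshold(np.delete(y, i), np.delete(x, i, 0),
                                    np.delete(q, i), np.delete(s, i), s[i], bn)
                    for i in range(len(y))])
    lo = q <= gam - delta_n
    hi = q >  gam + delta_n
    beta_hat,  *_ = np.linalg.lstsq(x[hi], y[hi], rcond=None)
    dstar_hat, *_ = np.linalg.lstsq(x[lo], y[lo], rcond=None)
    return beta_hat, dstar_hat - beta_hat

# Example with the simulated data from the earlier sketch:
# beta_hat, delta_hat = second_step(y, x, q, s, bn=0.5 * n**-0.5, delta_n=0.05)
```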
We allow for cross-sectional dependence in (x_i, q_i, s_i, u_i) in this paper so that we can apply the main results to spatial (or two-dimensional) sample splitting. More precisely, we suppose α-mixing over a random field, similarly as in Bolthausen (1982) and Jenish and Prucha (2009). We consider samples over a random expanding lattice N_n ⊂ R², endowed with the metric λ(i, j) = max_{1≤ℓ≤2} |i_ℓ − j_ℓ| and the corresponding norm max_{1≤ℓ≤2} |i_ℓ|, where i_ℓ denotes the ℓth component of i. We denote |N_n| as the cardinality of N_n and ∂N_n = {i ∈ N_n : there exists j ∉ N_n with λ(i, j) = 1}. We let |N_n| = n, and then the summation in (3) can be written as Σ_{i∈N_n}. We also define a mixing coefficient α(m). We suppose additional conditions for deriving asymptotic properties of the semiparametric estimators. Let f(q, s) be the joint density function of (q_i, s_i), and define D(q, s) and V(q, s) for (q, s) ∈ Q × S ⊂ R².
4 Since the last condition in Assumption ID-(iii) does not require the strict positive definiteness of E[x_i x_i' | q_i = q, s_i = s], q_i or s_i can be one of the elements of x_i (e.g., the threshold autoregressive model, Tong (1983)) or a linear combination of x_i, even when x_i includes a constant term.
Assumption A (i) The lattice N_n ⊂ R² is countably infinite; all the elements in N_n are located at distances of at least λ_0 > 1 from each other, i.e., for any i, j ∈ N_n: λ(i, j) ≥ λ_0; and lim_{n→∞} |∂N_n|/n = 0.
(vii) D (q, s), V (q, s), and f (q, s) are bounded, continuous in q, and twice continuously differentiable in s with bounded derivatives.
(x) K(·) is uniformly bounded, continuous, symmetric around zero, and satisfies standard integrability conditions. Many of these conditions are similar to Assumption 1 of Hansen (2000). Note that λ_0 in Assumption A-(i) can be any strictly positive value, but we can impose λ_0 > 1 without loss of generality. It is well known that a constant change size leads to a complicated asymptotic distribution of the threshold estimator, which depends on nuisance parameters (e.g., Chan (1993)). In Assumption A-(ii), we adopt the widely used shrinking-change-size setup, as in Bai (1997), Bai and Perron (1998), and Hansen (2000), to obtain a simpler limiting distribution. The conditions in Assumption A-(iii) are required to establish the central limit theorem (CLT) for spatially dependent random fields. The condition on the mixing coefficient is slightly stronger than that of Bolthausen (1982). This is because we need to control for the dependence within the local neighborhood in kernel estimation. When α(m) decays at an exponential rate, these conditions are readily satisfied. When α(m) decays at a polynomial rate (i.e., α(m) ≤ C_α m^{−k} for some k > 0), we need some restrictions on k and ϕ to satisfy these conditions, such as k > 3(2 + ϕ)/ϕ. Note that f(γ_0(s), s) and the marginal density f_s(s) are both strictly positive for all s ∈ S from Assumption A-(viii). In practice, we choose S such that their estimates are bounded away from zero in finite samples. Assumptions A-(ix) and (x) are standard in the kernel estimation literature, except that the magnitude of the bandwidth b_n depends not only on n but also on ε. The conditions in A-(x) hold for most kernel functions, including the Gaussian kernel and kernels with bounded support.
It is important to note that we suppose γ_0 is a function from S to Γ in Assumption A-(vi), which is not necessarily one-to-one. For this reason, sample splitting based on 1[q_i ≤ γ_0(s_i)] can be different from that based on 1[s_i ≥ γ̃_0(q_i)] for some function γ̃_0.
Instead of restricting γ 0 be one-to-one in this paper, for the identification purpose, we presume that we know which variables should be respectively assigned as q i and s i from the context. In Section 4, however, we discuss how to relax this point and to identify a convex threshold contour as an extreme case.
Asymptotic Results
We first obtain the asymptotic properties of the nonparametric estimator γ (s). The following theorem derives the pointwise consistency and the pointwise rate of convergence.
Theorem 2 For a given s ∈ S, under Assumptions ID and A, γ̂(s) →_p γ_0(s) as n → ∞.

The pointwise rate of convergence of γ̂(s) depends on two parameters, ε and b_n. It is decreasing in ε, as in the parametric (constant) threshold case: a larger ε reduces the threshold effect δ_0 = c_0 n^{−ε} and hence decreases the effective sampling information on the threshold. Since we estimate γ_0(·) using the kernel estimation method, the rate of convergence depends on the bandwidth b_n as well. As in the standard kernel estimator case, a smaller bandwidth decreases the effective local sample size, which reduces the precision of the estimator γ̂(s). Therefore, in order to achieve a sufficient rate of convergence, we need to choose b_n large enough when the threshold effect δ_0 is expected to be small (i.e., when ε is close to 1/2).
Unlike the standard kernel estimator, there appears to be no bias-variance trade-off in γ̂(s), as we further discuss after Theorem 3. It thus may seem that we can improve the rate of convergence by choosing a larger bandwidth b_n. However, b_n cannot be chosen so large that n^{1−2ε} b_n² → ∞, because n^{1−2ε} b_n (γ̂(s) − γ_0(s)) is no longer O_p(1) in that case. Therefore, we can use the restriction that n^{1−2ε} b_n² converges to a finite positive limit to obtain the optimal bandwidth.
Under the choice that n^{1−2ε} b_n² converges to a finite positive limit, the optimal bandwidth can be chosen as b*_n = n^{−(1−2ε)/2} c* for some constant 0 < c* < ∞. This b*_n provides the fastest convergence rate. Using this optimal bandwidth, the optimal pointwise rate of convergence of γ̂(s) is then given as n^{−(1−2ε)/2}. However, such a bandwidth choice is not feasible in practice since the constant term c* is unknown, and it also depends on the nuisance parameter ε that is not estimable. In practice, we suggest cross-validation, as we implement in Section 6, although its statistical properties need to be studied further. The next theorem derives the limiting distribution of γ̂(s). We let W(·) be a two-sided Brownian motion, defined as W(r) = W_1(−r) for r < 0 and W(r) = W_2(r) for r ≥ 0, where W_1(·) and W_2(·) are independent standard Brownian motions on [0, ∞).
Theorem 3 Under Assumptions ID and A, for a given s ∈ S, if n 1−2 b 2 n → ∈ (0, ∞), as n → ∞, where µ (r, ; s) = − |r| ψ 1 (r, ; s) + ψ 2 (r, ; s) , The drift term µ (r, ; s) in (11) depends on , the limit of n 1−2 b 2 n = (n 1−2 b n )b n , and |γ 0 (s)|, the steepness of γ 0 (·) at s. Interestingly, it resembles the typical O(b n ) boundary bias of the standard local constant estimator even when s belongs to the interior of the support of s i , which is from the inequality restriction in the indicator function of the threshold regression.
However, having this non-zero drift term in the limiting expression does not mean that the limiting distribution of γ (s) itself has a non-zero mean even when we use the optimal bandwidth b * n = O(n −(1−2 )/2 ) satisfying n 1−2 b * 2 n → ∈ (0, ∞). This is mainly because the drift function µ (r, ; s) is symmetric about zero and hence the limiting random variable arg max r∈R (W (r) + µ (r, ; s)) is mean zero. In particular, we can show that the random variable arg max r∈R (W (r) + µ (r, ; s)) always has zero mean if µ (r, ; s) is a non-random function that is symmetric about zero and monotonically decreasing fast enough. This result might be of independent research interest and is summarized in Lemma A.9 in the Appendix. Figure 1 depicts the drift function µ (r, ; s) for various kernels when |γ 0 (s)| = 1 and = 1.
Since the limiting distribution in (11) depends on unknown components, such as the limit of n^{1−2ε} b_n² and γ̇_0(s), it is hard to use this result for further inference. We instead suggest undersmoothing for practical use. More precisely, if we suppose n^{1−2ε} b_n² → 0 as n → ∞, the limiting distribution simplifies and appears the same as in the parametric case of Hansen (2000), except for the scaling factor n^{1−2ε} b_n. The distribution of arg max_{r∈R} (W(r) − |r|/2) is known (e.g., Bhattacharya and Brockwell (1976) and Bai (1997)) and is also described in Hansen (2000, p.581). The term ξ(s) determines the scale of the distribution at a given s: it increases in the conditional variance E[u_i²|x_i, q_i, s_i], and decreases in the size of the threshold constant c_0 and the density of (q_i, s_i) near the threshold.
Even when n^{1−2ε} b_n² → 0 as n → ∞, the asymptotic distribution in (12) still depends on the unknown parameter ε (or equivalently c_0) through ξ(s), which is not estimable. Thus, this result cannot be directly used for inference on γ_0(s). Alternatively, for any fixed s ∈ S, we can consider a pointwise likelihood ratio test statistic LR_n(s) for testing H_0 : γ_0(s) = γ*(s) against H_1 : γ_0(s) ≠ γ*(s) (13). (We let ψ_1(r, 0; s) = ∫_0^∞ K(t) dt = 1/2.) The following corollary obtains the limiting null distribution of this test statistic, which is free of nuisance parameters. Using the likelihood ratio statistic inversion approach, we can form a pointwise asymptotic confidence interval of γ_0(s).
Corollary 1 Suppose n^{1−2ε} b_n² → 0 as n → ∞. Under the same conditions as in Theorem 3, for any fixed s ∈ S, the test statistic in (14) converges in distribution as n → ∞ under the null hypothesis (13). When the conditional variance is locally constant, which is the case of local conditional homoskedasticity, the scale parameter ξ_LR(s) simplifies to κ_2, and hence the limiting null distribution of LR_n(s) becomes free of nuisance parameters and is the same for all s ∈ S. Though this limiting distribution is still nonstandard, the critical values in this case can be obtained using the same method as Hansen (2000, p.582), with the scale adjusted by κ_2. More precisely, the distribution function of ζ = max_{r∈R} (2W(r) − |r|) is P(ζ ≤ z) = (1 − e^{−z/2})², and ζ*, the limiting random variable of LR_n(s) given in (15) under local conditional homoskedasticity, is ζ rescaled by κ_2. By inverting the distribution function, we can obtain the asymptotic critical values given a choice of K(·). For instance, the asymptotic critical values for the Gaussian kernel are reported in Table 1, where κ_2 = (2√π)^{−1} ≈ 0.2821 in this case. In general, we can estimate ξ_LR(s) by plugging in δ̂ from (5) and (6) together with σ̂²(s), D̂(γ̂(s), s), and V̂(γ̂(s), s), the standard Nadaraya-Watson estimators. In particular, σ̂²(s) is computed using some bivariate kernel function K(·, ·) and a pair of bandwidth parameters. Finally, we show the √n-consistency of the semiparametric estimators β̂ and δ̂* in (5) and (6). For this purpose, we first obtain the uniform rate of convergence of γ̂(s).
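Before turning to the uniform-rate results, the critical-value calculation just described can be made concrete. The snippet below assumes, as stated above, that the null distribution under local conditional homoskedasticity is κ_2·ζ with P(ζ ≤ z) = (1 − e^{−z/2})², the form given in Hansen (2000); the resulting numbers are illustrative and need not reproduce the paper's Table 1 exactly.

```python
# Illustrative asymptotic critical values for the pointwise LR test under local
# conditional homoskedasticity, assuming P(zeta <= z) = (1 - exp(-z/2))^2 for
# zeta = max_r (2W(r) - |r|) and a limiting distribution kappa2 * zeta.
import numpy as np

kappa2 = 1.0 / (2.0 * np.sqrt(np.pi))   # Gaussian kernel: int K(v)^2 dv ~= 0.2821

def lr_critical_value(alpha, kappa2=kappa2):
    """Invert (1 - exp(-z/2))^2 = 1 - alpha and rescale by kappa2."""
    z = -2.0 * np.log(1.0 - np.sqrt(1.0 - alpha))
    return kappa2 * z

for alpha in (0.10, 0.05, 0.01):
    print(f"{1 - alpha:.0%} critical value: {lr_critical_value(alpha):.3f}")
```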
Theorem 4 Under Assumptions ID and A, sup_{s∈S} |γ̂(s) − γ_0(s)| = O_p(log n/(n^{1−2ε} b_n)).
Apparently, the uniform consistency of γ̂(s) follows provided log n/(n^{1−2ε} b_n) → 0.
Theorem 5 Suppose the conditions in Theorem 4 hold and log n/(n 1−2 b n ) → 0 as n → ∞. If we let ∆ n > 0 such that ∆ n → 0, {log n/(n 1−2 b n )}/∆ n → 0 as n → ∞, Note that we do not use the conventional plug-in semiparametric least squares estimators, arg min β,δ . The reason why we propose an alternative estimation approach here is that this conventional semiparametric least square estimators may not be asymptotically orthogonal to the first-step nonparametric estimator when n 1−2 b 2 n → ∈ (0, ∞) as n → ∞, though they are still consistent. This is because the first-step nonparametric estimator γ (s) could have very slow rate of convergence, and the estimation error will affect the limiting distribution of the second stage parametric estimators. The new estimation idea above, however, only uses the observations that are not affected by the estimation error in the first-step nonparametric estimator. This is done by choosing a large enough ∆ n in (5) and (6) such that the observations are outside the uniform convergence bound of | γ (s) − γ 0 (s)|. Thanks to the threshold regression structure, we then can estimate the parameters on each side of the threshold even using these subsamples. However, we also want ∆ n → 0 fast enough so that more observations are included in estimation.
The estimator ( β , δ * ) or equivalently ( β , δ ) thus satisfies the Neyman orthogonality condition (e.g., Assumption N(c) in Andrews (1994)), that is, replacing γ by the true γ 0 in estimating the parametric component has an effect at most o p (n −1/2 ) in their limiting distribution. Though we lose some efficiency in finite samples, we can derive the asymptotic normality of ( β , δ ) that has mean zero and achieves the same asymptotic variance as if γ 0 (·) was known.
Using the delta method, we can readily obtain the limiting distribution of θ̂ = (β̂, δ̂). The asymptotic variance expressions in (16) and (17) allow for cross-sectional dependence as they take the long-run variance forms Ω* and Ω. They can be consistently estimated by the spatial HAC estimator of Conley and Molinari (2007). The terms Λ* and Λ can be estimated by their sample analogues.
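For intuition, a schematic spatial HAC calculation is sketched below. It uses a Bartlett taper in the max-coordinate distance λ(i, j) defined earlier with a cutoff of 5 spatial lags, applied to generic moment contributions g_i (e.g., x_i u_i); the exact kernel and weighting scheme in Conley and Molinari (2007) may differ in detail, so this is an illustration of the idea rather than their estimator.

```python
# Schematic Conley-type spatial HAC long-run variance estimator with a Bartlett
# taper in spatial distance (illustrative; not the exact Conley-Molinari weights).
import numpy as np

def spatial_hac(scores, coords, cutoff=5.0):
    """scores: (n, k) moment contributions g_i; coords: (n, 2) lattice locations."""
    n, k = scores.shape
    omega = np.zeros((k, k))
    for i in range(n):
        # Max-coordinate (Chebyshev) distance, matching lambda(i, j) in the text.
        dist = np.max(np.abs(coords - coords[i]), axis=1)
        w = np.clip(1.0 - dist / cutoff, 0.0, None)      # Bartlett taper
        omega += np.outer(scores[i], scores.T @ w)        # sum_j w_ij g_i g_j'
    return omega / n
```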
Threshold Contour
When we consider sample splitting over a two-dimensional space (i.e., q_i and s_i respectively correspond to the latitude and longitude on the map), the threshold model (1) can be generalized to a nonparametric contour threshold model, y_i = x_i'β_0 + x_i'δ_0 1[m_0(q_i, s_i) ≤ 0] + u_i, where the unknown function m_0 : Q × S → R determines the contour on a random field. Interesting examples include identifying an unknown closed boundary over the map, such as a city boundary relative to some city center, or an area of a disease outbreak or airborne pollution. In social science, it can identify a group boundary or a region in which the agents share common demographic, political, or economic characteristics.
To relate this generalized form to the original threshold model (1), we suppose there exists a known center at (q*_i, s*_i) such that m_0(q*_i, s*_i) < 0. Without loss of generality, we can normalize (q*_i, s*_i) to be (0, 0) and re-center all other observations {q_i, s_i}_{i=1}^n accordingly. In addition, we define the radius distance l_i and angle a°_i of the ith re-centered observation, and each of (I_i, II_i, III_i, IV_i) respectively denotes the indicator that the ith observation is located in the first, second, third, and fourth quadrant.
We suppose that there is only one breakpoint at any angle and that the threshold contour is convex. For each fixed a° ∈ [0°, 360°), we rotate the original coordinates counterclockwise and implement the least squares estimation (4) using only the observations in the first two quadrants after rotation. Note that using the observations in the first two quadrants ensures that the threshold mapping after rotation is a well-defined function.
In particular, the angle relative to the origin is a°_i − a° after rotating the coordinates by a° degrees counterclockwise, which gives the new location (q_i(a°), s_i(a°)) after the rotation. After this rotation, we estimate a nonparametric threshold model of the form (1) using only the observations satisfying q_i(a°) ≥ 0, where γ_{a°}(·) serves as the unknown threshold line, as in model (1), in the a°-degree-rotated coordinates. Such a reparametrization guarantees that γ_{a°}(·) is always positive, and we estimate its value pointwise at 0. Figure 2 illustrates the idea of such rotation and pointwise estimation over a bounded support, so that only the red cross points are included for estimation at different angles. Thus, the estimation and inference procedure developed before is directly applicable, though we expect some efficiency loss as we only use a subsample in the estimation at each rotated coordinate. This rotating-coordinate idea can be a quick solution when we do not know which variables should be assigned as q_i versus s_i in the original model (1). As an extreme example, if γ_0 is a vertical line, the original model does not work. In this case, we can check whether γ_0 is (nearly) a vertical line by comparing the estimates across different rotations; when γ_0 is suspected to be a vertical line or to have a very steep slope, we can switch q_i and s_i in the original model (1) to improve the local constant fitting. In addition, this idea can also be used as a robustness check of a threshold function estimate. As we demonstrate in Section 6, if γ_0 is a well-behaved function as in Assumption A, we should obtain similar estimates after rotations to various angles when we use the entire sample for each rotation. In the robustness check, the rotation angle has to be within ±90° unless the mapping from s_i to q_i is deemed to be one-to-one.
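A small sketch of this coordinate rotation is given below. Since the paper's display for the rotated coordinates is not reproduced here, the assignment of the sine and cosine components (q as the "vertical" coordinate measured against the s-axis) is an assumption about orientation; the key point is simply that, after rotating by a°, only points with q_i(a°) ≥ 0 (the first two quadrants) enter the local threshold estimation.

```python
# Sketch of the coordinate rotation used for contour estimation (orientation
# convention is assumed, not taken from the paper's display).
import numpy as np

def rotate(q, s, a_deg, q_center=0.0, s_center=0.0):
    qc, sc = q - q_center, s - s_center
    length = np.hypot(qc, sc)                        # radius l_i
    angle = np.arctan2(qc, sc) - np.deg2rad(a_deg)   # a_i - a, measured from the s-axis
    return length * np.sin(angle), length * np.cos(angle)   # (q_i(a), s_i(a))

# Example: random re-centered locations, rotate by 30 degrees, keep the upper half-plane.
rng = np.random.default_rng(2)
q = rng.uniform(-1, 1, 200)
s = rng.uniform(-1, 1, 200)
q_rot, s_rot = rotate(q, s, 30.0)
keep = q_rot >= 0
print("observations entering the rotated estimation:", keep.sum())
```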
between the ith and jth observations. The diagonal elements of Σ are normalized as Σ_ii = 1. This m-dependent setup follows the Monte Carlo experiment in Conley and Molinari (2007) in the sense that roughly at most 2m² observations are correlated with each observation. Within distance m, the dependence decays at a polynomial rate. The parameter ρ describes the strength of cross-sectional dependence in the sense that a larger ρ leads to stronger dependence relative to the unit standard deviation. In particular, we consider the cases with ρ = 0 (i.e., i.i.d. observations), 0.5, and 1. We consider the sample sizes n = 100, 200, and 500. First, Tables 2 and 3 report the small-sample rejection probabilities of the LR test in (14) for H_0 : γ_0(s) = sin(s)/2 against H_1 : γ_0(s) ≠ sin(s)/2 at the 5% nominal level at three different locations, s = 0, 0.5, and 1. In particular, Table 2 examines the case with no cross-sectional dependence (ρ = 0), while Table 3 examines the case with cross-sectional dependence that decays slowly, with ρ = 1 and m = 10.

Note to Table 4: Entries are coverage probabilities of 95% confidence intervals for β_02 and δ_02 based on asymptotic normality and plugging in γ̂(s_i) for γ_0(s_i). Data are generated from (20) with γ_0(s) = sin(s)/2, where the dependence structure is given in (21) with ρ = 0.5 and m = 3. The results are based on 1000 simulations.

Note to Table 5: Entries are coverage probabilities of 95% confidence intervals for β_02 and δ_02 based on asymptotic normality, plugging in γ̂(s_i) for γ_0(s_i), and a small-sample adjustment of the LRV estimator. Data are generated from (20) with γ_0(s) = sin(s)/2, where the dependence structure is given in (21) with ρ = 0.5 and m = 3. The results are based on 1000 simulations.
For the bandwidth parameter, we normalize s_i and q_i to have mean zero and unit standard deviation and choose b_n = 0.5 n^{−1/2} in the main regression. This choice implies undersmoothing, since n^{1−2ε} b_n² is proportional to n^{−2ε} → 0. To estimate D(γ_0(s), s) and V(γ_0(s), s), we use rule-of-thumb bandwidths from standard kernel regression, of order O(n^{−1/5}) and O(n^{−1/6}) respectively. All the results are based on 1000 simulations. In general, the test for γ_0 performs better as (i) the sample size gets larger; (ii) the coefficient change gets more significant; (iii) the cross-sectional dependence gets weaker; and (iv) the target gets closer to the mid-support of s. When δ_0 and n are large, the LR test is conservative, which is also found in the classic threshold regression case.
Second, Table 4 shows the finite-sample coverage properties of the 95% confidence intervals for the parametric components β_02, δ*_02 = β_02 + δ_02, and δ_02. The results are based on the same simulation design as above with ρ = 0.5 and m = 3. Regarding the tuning parameters, we use the same bandwidth choice b_n = 0.5 n^{−1/2} as before and set the truncation parameter ∆_n = (n b_n)^{−1/2}. Unreported results suggest that the choice of the constant in the bandwidth matters particularly with small samples like n = 100, but this effect quickly decays as the sample size gets larger. For the lag number required for the HAC estimator, we use a spatial lag order of 5 following Conley and Molinari (2007). Results with other lag choices are similar and hence omitted. The results suggest that the asymptotic normality is better approximated with larger samples and larger change sizes. Table 5 shows the same results with a small-sample adjustment of the LRV estimator for Ω*, which divides it by a sample-based ratio. This ratio enlarges the LRV estimator and hence the coverage probabilities, especially when the change size is small. It only affects the finite-sample performance, as it approaches one in probability as n → ∞.
Empirical illustration
As an illustration of the nonparametric threshold, we study the economic border between the Queens and Brooklyn boroughs in New York City. The current administrative border was determined in 1931 using coordinates suggested by multiple federal agencies but ignores the rapid development in the city. Some parts of it now even run through houses, causing trouble for policy makers and local residents. We collect the single-family house sales data for the year 2017 and examine an economic border induced by the nonparametric threshold regression model. In particular, we consider the model (1) with the housing price as the outcome and hedonic covariates including the Gross Square Footage and the house age. For the pair (q_i, s_i), we consider two cases: the original latitude-longitude on the map; and the "rotated" latitude-longitude relative to the middle point of the administrative border. The rotation method is described in Section 4, where we choose the rotation angle as the slope of the linear regression line approximating the administrative border. We focus on single-family houses under property tax Class 1, accounting for 57.9% of the original sample, and drop duplicate observations. The sample size is n = 8121, including 5966 observations in Queens and 2155 observations in Brooklyn. Figure 3 depicts the nonparametric threshold function estimate γ̂ based on the rotated coordinates, which is the "unknown" economic border that splits the Queens and Brooklyn boroughs in New York City based on the threshold in housing prices. The estimated border (black solid line) is found to be substantively different from the administrative border between these two boroughs (orange dotted line). Somewhat surprisingly, the 95% pointwise confidence interval (blue dashed lines) contains the Forest Park and the Long Island Rail Road (LIRR) route to the east of Jamaica Center Station. As a robustness check, we also estimate the model by setting (q_i, s_i) as the original (unrotated) latitude and longitude on the map. The estimated border is very close to the depicted results. We choose the bandwidth b_n in the main regression as c n^{−1/2}, and we obtain the constant c by cross-validation. In particular, we choose c that minimizes a leave-one-out cross-validation criterion computed from the leave-one-out estimates described in Section 2. Here, S includes the observations between the 15th and 85th percentiles of the sample {s_i}_{i=1}^n. Table 6 summarizes the coefficient estimates for the parametric components, β̂ and δ̂. The standard errors reported in the parentheses are computed using the spatial HAC estimator with 5 spatial lags (e.g., Conley and Molinari (2007)). The average housing price elasticity on the southern side of the economic border is lower than that on the northern side. The (semi-)elasticity of the Gross Square Footage and the effect of the house age are slightly larger on the southern side. These patterns are quite robust to whether we use the rotated coordinates or not. As a comparison, we also run the experiment using the current administrative border as γ_0(·). In particular, the last two columns in Table 6 suggest that there does not exist a significant coefficient change if the sample splitting is based on the current administrative border.

Table 6 notes: Columns of "Estimated Border" are based on the nonparametric threshold estimates; columns of "Admin Border" are based on the current administrative border as the threshold function. ** and * denote significance at 1% and 5%, respectively.
Concluding Remarks
In this paper, we propose a general approach to sample splitting, where multiple variables can jointly determine the unknown separation boundary. We develop a semiparametric threshold regression model over a random field, in which the threshold is determined by a nonparametric function between two variables. Our approach can be easily generalized so that the sample splitting depends on more than two variables, though such an extension is subject to the curse of dimensionality, as usually observed in the kernel regression literature. The main interest is in identifying the threshold function resulting in sample splitting, and thus the model developed in this paper should be distinguished from the smoothed threshold regression model or the random coefficient regression model. This new model has high applicability in broad areas studying sample splitting (e.g., segregation and group formation) and heterogeneous effects over different subsamples. The potential areas include economics, political science, sociology, and marketing science, where agent-specific heterogeneity and social segregation are important; and regional science and urban economics, where the identification of unobserved/unknown boundaries is of interest, for example using satellite data.
In practice, we may need a testing procedure to check whether or not the classic constant threshold model is sufficient to describe a sample splitting phenomenon. In a companion project, the authors are developing a test for a constant threshold, based on which the nonparametric threshold developed in this paper can be supported. Unlike the existing studies that focus on testing no change (i.e., δ 0 = 0 in (1)) against one change, or testing on a fixed number of changes (e.g., Bai and Perron (1998)), we are developing a test that works for a general null hypothesis of any number of changes versus nonparametric alternatives.
A Appendix
A.1 Proof of Theorem 1 Proof of Theorem 1 First, any given γ 0 (·) = γ ∈ Γ, the parameters β 0 and δ 0 are well identified as the unique minimizer of . Second, the function γ 0 (·) is pointwisely identified as the minimizer of for each s ∈ S. This is because for any γ(s) = γ 0 (s) at s i = s and given (β 0 , δ 0 ) , Note that the last probability is strictly positive because we assume f (q|s) > 0 for any (q, s) ∈ Q × S and γ 0 (s) is not located on the boundary of Q as ε(s) <
A.2 Proof of Theorem 2
Throughout the proof, we denote . We let C ∈ (0, ∞) stand for a generic constant term that may vary, which can depend on the location s. We also let a n = n 1−2 b n . All the lemmas in the proof assume the conditions in Assumptions ID and A hold. Omitted proofs for some lemmas are all collected in the supplementary material.
For a given s ∈ S, we define a mean-zero Gaussian process indexed by γ.
Proof of Lemma A.1 For expositional simplicity, we only present the case of scalar x i . We first prove the pointwise convergence of M n (γ; s). By stationarity, Assumptions A-(vii), (x), and Taylor expansion, we have where D(q, s) is defined in (8). For the variance, we have where the order of the first term is from the standard kernel estimation result. For the second term, we use Assumptions A-(v), (vii), (x), and Lemma 1 of Bolthausen (1982) to obtain that n for some finite ϕ > 0, where α (m) is the mixing coefficient defined in (7) and the first equality is by the change of variables t i = (s i −s)/b n in the covariance operator. Hence, the pointwise convergence is established. For given s, the uniform tightness of M n (γ; s) in γ follows similarly as (and even simpler than) that of J n (γ; s) below, and the uniform convergence follows from standard argument. For J n (γ; s), since E [u i x i |q i , s i ] = 0, the proof for sup γ∈Γ |(nb n ) −1/2 J n (γ, s) | p → 0 is identical as M n (γ; s) and hence omitted. Next, we derive the weak convergence of J n (γ; s). For any fixed s and γ, the Theorem of Bolthausen (1982) implies that J n (γ; s) ⇒ J (γ; s) under Assumption A-(iii). Because γ is in the indicator function, such pointwise convergence in γ can be generalized into any finite collection of γ to yield the finite dimensional convergence in distribution. By theorem 15.5 of Billingsley (1968), it remains to show that, for each positive η(s) and ε(s) at given s, there exist > 0 such that if n is large enough, for any γ 1 . To this end, we consider a fine enough grid over [γ 1 , γ 1 + ] such that γ g = γ 1 +(g−1) /g for g = 1, . . . , g+1, where nb n /2 ≤ g ≤ nb n and max 1≤g≤g γ g − γ g−1 ≤ Then for any γ ∈ γ g , γ g+1 , In what follows, we simply denote h i (s) = x i u i K i (s) 1 γ g < q i ≤ γ k for any given 1 ≤ g < k ≤ g and for fixed s. First, for Ψ 1 (s), we have where each term's bound is obtained as follows. For Ψ 11 (s), a straightforward calculation and Assumptions A-(v) and (x) yield Ψ 11 (s) ≤ C 1 (s)n −1 b −1 n + O(b n /n) = O(n −1 b −1 n ) for some constant 0 < C 1 (s) < ∞. For Ψ 12 (s), similarly as (A.3), Then, by the stationarity, Cauchy-Schwarz inequality, and Lemma 1 of Bolthausen (1982), we have for some constant 0 < C < ∞. Using the same argument as the second component in (A.4), we can also show that Ψ 13 (s) = O(n −1 ) + O(b 2 n ). For Ψ 14 (s), by stationarity, similarly as Billingsley (1968), p.173. By Assumptions A-(v), (vii), (x), and Lemma 1 of Bolthausen (1982), where the first equality is by the change of variables t i = (s i − s)/b n . It follows that the first term in (A.5) satisfies by Assumption A-(iii). However, we select ϕ small enough such that which holds for ϕ ∈ (0, 2) in Assumption A-(iii). Then (A.6) becomes o(1) because nb Using the same argument, we can also verify that the rest of terms in (A.5) are all o(1) and hence Ψ 14 (s) = o(1). For Ψ 15 (s), we can similarly show that it is o(1) as well because By combining these results for Ψ 11 (s) to Ψ 15 (s), we thus have for some constant 0 < C 1 (s) < ∞ given s, and Theorem 12.2 of Billingsley (1968) yields which bounds Ψ 1 (s).
We let φ 1n = a −1 n , where a n = n 1−2 b n and is given in Assumption A-(ii). For a given s ∈ S, we define Lemma A.4 For a given s ∈ S, for any η(s) > 0 and ε(s) > 0, there exist constants 0 < C T (s), C T (s), C(s), r(s) < ∞ such that for all n, if n 1−2 b 2 n → < ∞.
A.3 Proof of Theorem 3 and Corollary 1
For a given s ∈ S, we let γ n (s) = γ 0 (s) + r/a n with some |r| < ∞, where a n = n 1−2 b n and is given in Assumption A-(ii). We define , s) κ 2 as n → ∞, where κ 2 = K(v) 2 dv and W (r) is the two-sided Brownian Motion defined in (10).
Proof of Lemma A.6 Let ∆ i (γ n ; s) = 1 i (γ n (s)) − 1 i (γ 0 (s)). First, for A * n (r, s), consider the case with r > 0. Note that δ 0 = c 0 n − = c 0 (a n / (nb n )) 1/2 . By change of variables and Taylor expansion, Assumptions A-(v), (viii), and (x) imply that = a n γ 0 (s)+r/an where the third equality holds under Assumption A-(vi). Next, we have Similarly as (A.25), Taylor expansion and Assumptions A-(vii), (viii), and (x) lead to Furthermore, by change of variables t i = (s i − s)/b n in the covariance operator and Lemma 1 of Bolthausen (1982), where the last line follows from the conditions that ϕ ∈ (0, 2) in Assumption A-(iii) and n 1−2 b 2 n → < ∞. Hence, the pointwise convergence of A * n (r, s) is obtained. Since rc 0 D (γ 0 (s) , s) c 0 f (γ 0 (s) , s) is strictly increasing and continuous in r, the convergence holds uniformly on any compact set. Symmetrically, we can show that E [A * n (r, s)] = −rc 0 D (γ 0 (s) , s) c 0 f (γ 0 (s) , s) + O (a −1 n + b 2 n ) when r < 0. The uniform convergence also holds in this case using the same argument as above, which completes the proof for A * n (r, s). For B * n (r, s), Assumption ID-(i) leads to E [B * n (r, s)] = 0. Then, similarly as for A * n (r, s), for any i = j, we have Cov c 0 x i u i ∆ i (γ n ; s)K i (s) , c 0 x j u j ∆ j (γ n ; s)K j (s) ≤ Cb 2 n a −1 n (A.27) for some positive constant C < ∞, by the change of variables in the covariance operator and Lemma 1 of Bolthausen (1982). It follows that, similarly as (A.25), where κ 2 = K(v) 2 dv. Then by the CLT for stationary and mixing random field (e.g. Bolthausen (1982); Jenish and Prucha (2009)), we have as n → ∞, where W (r) is the two-sided Brownian Motion defined in (10). This pointwise convergence in r can be extended to any finite-dimensional convergence in r by the fact that for any r 1 < r 2 , Cov [B * n (r 1 , s) , B * n (r 2 , s)] = V ar [B * n (r 1 , s)] + o (1), which is because (1 i (γ 0 + r 2 /a n ) − 1 i (γ 0 + r 1 /a n )) 1 i (γ 0 + r 1 /a n ) = 0 and (A.27). The tightness follows from a similar argument as J n (γ; s) in Lemma A.1 and the desired result follows by Theorem 15.5 in Billingsley (1968).
Lemma A.8 For a given s ∈ S, let r be the same term used in Lemma A.6. If as n → ∞, whereγ 0 (·) is the first derivatives of γ 0 (·) and for j = 0, 1.
Lemma A.9 Let τ = arg max r∈R (W (r) + µ(r)), where W (r) is a two-sided Brownian motion in (10) Proof of Corollary 1 From (A.13) and (A.15), we have where f s (s) is the marginal density of s i . In addition, from Theorem 3 and the proof of Lemma A.7, we have since θ ( γ (s)) − θ (γ 0 (s)) = o p ((nb n ) −1/2 ). Similar to Theorem 2 of , the rest of the proof follows from the change of variables and the continuous mapping theorem because (nb n ) −1 n i=1 K i (s) → p f s (s) by the standard result of the kernel density estimator.
A.4 Proof of Theorem 4
We let φ 2n = log n/a n , where a n = n 1−2 b n and is given in Assumption A-(ii).
Lemma A.11 For a given s ∈ S, let γ(s) = γ 0 (s) + r(s)φ 2n for some continuously differentiable r(s) satisfying 0 < r = inf s∈S r(s) ≤ sup s∈S r(s) = r < ∞. Then there exist constants 0 < C T , C T < ∞ such that for any η > 0, if n is large enough.
Lemma A.12 For a given s ∈ S, let γ(s) = γ 0 (s) + r(s)φ 2n , where r(s) is defined in Lemma A.11. Then there exists a constant 0 < C L , C L < ∞ such that for any η > 0, if n is large enough.
A.5 Proof of Theorem 5 Proof of Theorem 5 We simply denote the leave-one-out estimator γ −i (s i ) as γ (s i ) in this proof. We let 1 S = 1[s i ∈ S] and consider a sequence ∆ n > 0 such that ∆ n → 0 as n → ∞. Then, where Ξ n02 , Ξ n03 , Ξ n12 , and Ξ n13 are all o p (1) from Lemma A.15 below. Therefore, and the desired result follows since as n → ∞. First, by Assumptions A-(v) and (ix), (A.39) can be readily verified since we have with ∆ n → 0 as n → ∞. More precisely, given Theorem 4, we consider γ (s) in a neighborhood of γ 0 (s) with distance at most rφ 2n for some large enough constant r. We define a non-random function γ (s) = γ 0 (s) + rφ 2n and ∆ i ( from Theorem 4, Assumptions A-(v), (vii), and (ix). (A.40) can be verified symmetrically. Using a similar argument, since 1 S ] = 0 from Assumption ID-(i), asymptotic normality in (A.41) follows by the Theorem of Bolthausen (1982) under Assumption A-(iii), which completes the proof.
By Yoonseok Lee and Yulong Wang
This supplementary material contains omitted proofs of some lemmas.
Proof of Lemma A.2 We first show the pointwise convergence. For expositional simplicity, we only present the case of scalar x i . Similarly as (A.1), we have which is non-zero only when (i) γ 0 (s) < q < γ 0 (s + b n t) if γ 0 (s) < γ 0 (s + b n t); or (ii) γ 0 (s + b n t) < q < γ 0 (s) if γ 0 (s) > γ 0 (s + b n t). We suppose γ 0 (·) is increasing around s. Then, for the case (i), since 0 < γ 0 (s + b n t) − γ 0 (s), it restricts t > 0. For the case (ii), however, it restricts t < 0. Therefore, if we let m(q, s) = D(q, s)f (q, s) < ∞, by Taylor expansion, Given the pointwise rate, it suffices to show ∆M n (s) is uniformly tight. This is implied by the tightness of M n (s) in Lemma A.1 since γ 0 (·) is continuous. The proof is complete.
Proof of Lemma A.5 Using the same notations in Lemma A.3, (A.12) yields For the denominator Θ A1 (s), we have where M n ( γ(s); s) → p M (γ 0 (s); s) < ∞ from Lemma A.1 and the pointwise consistency of γ(s) in Lemma A.3. In addition, (nb n ) −1 from from Lemma A.1 and the pointwise consistency of γ(s) in Lemma A.3. Note that the standard kernel estimation result gives (nb n ) −1/2 where the second inequality is from (A.14) and the last equality is because M n (γ; s) → p M (γ; s) is continuous in γ and γ(s) → p γ 0 (s) in Lemma A.3. Since (1) from (B.11), we have Θ A3 (s) = o p (1) as well, which completes the proof.
Therefore, by substituting this into (B.17), we have by the Borel-Cantelli lemma.
Next, we consider T * 1n . Note that For the first item in (B.18), using a similar derivation as Lemma A.6 yields that if n is sufficiently large, for some constant C 5 < ∞. For the second item in (B.18), without loss of generality, consider that γ(s) < γ(s k ) and γ 0 (s) < γ 0 (s k ). Then by choosing the covering interval length C S /m n smaller than φ 2n , we have where the last line follows from Taylor expansion and Assumption A-(vi). This bound does not depend on k and hence T * 1n = O p (τ n /m n ). Similarly for T * 2n , Taylor expansion yields that ≤ C 7 τ n /m n for some C 7 < ∞, where the last line follows by choosing the covering interval length C S /m n smaller than φ 2n . This bound is also uniform in k and hence T * 2n = O(τ n /m n ) as well. Therefore, by choosing m n = [(φ 2n (log n)/nb n ) 1/2 /τ n ] −1 , we have that T * 1n and T * 2n are both the order of (φ 2n (log n)/nb n ) 1/2 . It follows that P T 3n ≤ η −1 C(φ 2n (log n)/nb n ) 1/2 for some C ∈ (0, ∞) by Markov's inequality. Finally, if we choose τ n such that τ n = O(φ 1/2 2n ((log n)/nb n ) −1/2 ), we have both P T 1n and P T 2n are also bounded by η −1 C(φ 2n (log n)/nb n ) 1/2 . A possible choice of τ n is n or larger. This completes the proof.
from (B.20) and (B.23). This probability is arbitrarily close to 0 if r is large enough. Following a similar discussion after (B.5), this result also provides the maximal (or sharp) rate of φ 2n as log n/a n because we need (log n/a n )/φ 2n = O(1) but φ 2n → 0 as log n/a n → 0 with n → ∞.
Proof of Lemma A.14 For a given γ, since all the convergence results in Lemma A.5 hold uniformly by Lemma A.1, we only need to show sup s∈S | γ(s) − γ 0 (s)| → p 0. To this end, denote Γ and Γ as the upper and lower bounds of Γ, respectively, and let d Γ = Γ − Γ. Since S is compact, it can be covered by the union of a finite number of intervals {I k } m k=1 with length d Γ /m and center points {s k } m k=1 . On the event E * n that γ(s) is continuous with probability approaching to one, we can choose a large m such that sup s∈I k | γ(s) − γ(s k )| ≤ η for any η and all k. Such a choice is also valid for γ 0 (·) since it is also continuous by Assumption A-(vi). Then on the event E * n , using triangular inequality and Lemma A.3, for any η > 0 and any ε > 0, there is a large enough m such that where the last line follows from that P(E * n ) > 1 − ε for any ε. This is because γ(·) is a step function taking values in {q i } n i=1 ∩ Γ and hence is piecewise continuous with countable jump points.
Proof of Lemma A.15 We prove Ξ n02 = o p (1) and Ξ n03 = o p (1). The results for Ξ n12 and Ξ n13 can be shown symmetrically. As in the proof of Theorem 5, we denote the leave-one-out estimator γ −i (s i ) as γ (s i ) in this proof. For expositional simplicity, we only present the case of scalar x i .
|
2019-05-30T16:07:46.000Z
|
2019-05-30T00:00:00.000
|
{
"year": 2019,
"sha1": "dccb96136d98f85ad00ec88dd89abd1bd1c049ba",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "dccb96136d98f85ad00ec88dd89abd1bd1c049ba",
"s2fieldsofstudy": [
"Mathematics",
"Economics"
],
"extfieldsofstudy": [
"Mathematics",
"Economics"
]
}
|
253024465
|
pes2o/s2orc
|
v3-fos-license
|
Comprehensive pan-cancer analysis of N7-methylguanosine regulators: Expression features and potential implications in prognosis and immunotherapy
Although immunotherapy has made great strides in cancer therapy, its effectiveness varies widely among individual patients as well as tumor types, and there is an urgent need to develop biomarkers for effectively assessing immunotherapy response. In recent years, RNA methylation regulators such as N6-methyladenosine (m6A) and 5-methylcytosine (m5C) regulators have been demonstrated to be novel potential biomarkers for the prognosis as well as immunotherapy of cancers. N7-methylguanosine (m7G) is a prevalent RNA modification in eukaryotes, but the relationship between m7G regulators and prognosis as well as the tumor immune microenvironment is still unclear. In this study, a pan-cancer analysis of 26 m7G regulators across 17 cancer types was conducted based on a bioinformatics approach. On the one hand, a comprehensive analysis of expression features, genetic variations and epigenetic regulation of m7G regulators was carried out, and we found that the expression tendencies of m7G regulators differed among tumors and that their aberrant expression in cancers could be affected by single nucleotide variation (SNV), copy number variation (CNV), DNA methylation and microRNA (miRNA), separately or simultaneously. On the other hand, the m7Gscore was modeled based on single sample gene set enrichment analysis (ssGSEA) for evaluating the relationships between m7G regulators and cancer clinical features, hallmark pathways, tumor immune microenvironment, immunotherapy response as well as pharmacotherapy sensitivity, and we illustrated that the m7Gscore exhibited tight correlations with prognosis, several immune features, immunotherapy response and drug sensitivity in most cancers. In conclusion, our pan-cancer analysis revealed that m7G regulators may exert critical roles in tumor progression and the immune microenvironment, and have potential as biomarkers for predicting prognosis, immunotherapy response as well as candidate drug compounds for cancer patients.
Introduction
Immunotherapies such as immune checkpoint blockade (ICB) have exerted a revolutionary influence on cancer treatment in the last decade, particularly in end-stage patients. Immunotherapies have shown strong antitumor activity in multiple solid tumors, such as non-small cell lung cancer, prostate cancer and melanoma, by means of re-awakening and enhancing anti-tumor immunity (Guha et al., 2022). Nevertheless, the effectiveness of immunotherapy varies greatly among different populations, tumors, and individuals, and only a fraction of patients benefit from the treatment. Studies have demonstrated that the tumor immune microenvironment (TIME) plays essential roles in the pathogenesis of cancer, and its heterogeneity determines the immunotherapeutic effect to a certain extent (Bai and Cui, 2022). Therefore, significant efforts have been made to identify reliable predictive biomarkers of response and resistance to immunotherapy.
RNA epigenetic modifications, such as N6-methyladenosine (m6A) and 5-methylcytosine (m5C), are closely related to tumor genesis and progression, and the corresponding regulators have performed well in predicting tumor prognosis and immunotherapy response (Pan et al., 2021; Huang et al., 2022). N7-methylguanosine (m7G) is another pattern of RNA modification, in which the N7 atom of guanine (G) is methylated by methyltransferases such as METTL1 (Cheng et al., 2022). The m7G modification often occurs in the 5′ cap and internal positions of messenger RNA, or internally within ribosomal RNA and transfer RNA (Shoombuatong et al., 2022). Besides, m7G modification has recently been found in primary microRNA (pri-miRNA) and long noncoding RNA (lncRNA) (Pandolfini et al., 2019; Wang et al., 2022). M7G modification mainly exerts its biological functions by regulating RNA processing and metabolism, involving transcription elongation, translation, splicing, polyadenylation, nuclear export, tRNA stability, rRNA maturation and miRNA biosynthesis (Luo et al., 2022). To date, several m7G regulators (mainly m7G methyltransferases) have been revealed to be aberrantly expressed in cancers and to regulate tumor-related biological functions through mediating m7G modification of tRNA or miRNA, suggesting that m7G modification may exert fundamental effects on tumor genesis and progression like other RNA methylation modifications such as m6A and m5C. For example, the aberrant upregulation of the m7G methyltransferase METTL1 in tumors has been found to be linked to end-stage tumors and worse prognosis, and METTL1 can drive oncogenic transformation and accelerate tumor progression by promoting m7G tRNA modification (Chen et al., 2021; Ma et al., 2021; Orellana et al., 2021). Notably, recent studies have uncovered that risk models based on m7G-associated miRNAs and lncRNAs have potential value in predicting tumor prognosis and immunotherapy outcomes (Hong et al., 2022; Zhang et al., 2022). However, the potential of m7G regulators to serve as predictive biomarkers of prognosis and immunotherapy response across cancers remains unclear.
In the present study, we performed a pan-cancer analysis of 26 m7G regulators across 17 cancer types using The Cancer Genome Atlas (TCGA) datasets. Initially, a comprehensive analysis of expression features, genetic variations and epigenetic regulation of m7G regulators was carried out; next, the m7Gscore was modeled based on the single sample gene set enrichment analysis (ssGSEA) and the relations between m7G regulators and cancer clinical features, tumor immune microenvironment, immunotherapy response as well as pharmacotherapy sensitivity were then dissected. In brief, our integrative analysis may provide a new perspective into molecular mechanisms of m7G modification and lay a theoretical support for m7G regulators as biomarkers for prognosis, and response to immunotherapy and chemotherapy in human cancers.
Source and datasets
The mRNA expression raw counts data and corresponding clinical data were downloaded from TCGA (https://portal.gdc.cancer.gov/), and data on different normal tissues from healthy subjects were obtained from the GTEx dataset (https://commonfund.nih.gov/GTEx), including 7,862 samples and 31 tissues. The TPM-normalized gene expression data, copy number variation data estimated using the GISTIC2 threshold method, DNA methylation data (Methylation450K), somatic mutation data and miRNA expression data of pan-cancer tumor and normal patients were obtained from the UCSC Xena browser (https://portal.gdc.cancer.gov/). The immunophenoscore was calculated based on The Cancer Immunome Atlas (TCIA, https://www.tcia.at/home), which can also be queried for the cellular composition of immune infiltrates, cancer-germline antigens and the expression of specific immune-related gene sets. The dysfunction and exclusion scores of patients were calculated based on Tumor Immune Dysfunction and Exclusion (TIDE, http://tide.dfci.harvard.edu/) to predict anti-PD1 and anti-CTLA4 response. The Genomics of Drug Sensitivity in Cancer (GDSC, https://www.cancerrxgene.org/) database was used to explore drug sensitivity based on 1,000 human cancer cell lines and hundreds of related compounds.
Gene expression pattern in normal tissues
The baseline expression levels of m7G regulators in 31 normal tissues were examined based on the GTEx data. The 31 normal tissues included heart, blood, brain, kidney, liver, pancreas, muscle, stomach, colon, pituitary, blood vessel, small intestine, adrenal gland, salivary gland, adipose tissue, lung, prostate, esophagus, breast, ovary, nerve, fallopian tube, bladder, spleen, thyroid, uterus, vagina, cervix uteri, testis, skin, and bone marrow. Raw counts were normalized by the method of transcripts per million (TPM).
Differential analysis of gene expression in cancers
Among all the cancers, the number of tumor subjects ranged from 36 to 1,091, while the number of normal subjects ranged from 0 to 113; we included only the 17 cancers that had more than ten subjects in both the tumor and normal groups. Then, we performed differential analysis using the R package "DESeq2" and obtained the fold change (FC) and adjusted p-value (FDR) of each gene in all 17 cancers. Differentially expressed genes with FDR < 0.05 were screened for the following analysis.
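The differential analysis itself was run in R with DESeq2; as a small illustration of the downstream filtering step only, the Python sketch below applies Benjamini-Hochberg adjustment to per-gene p-values (e.g., exported from DESeq2) and keeps genes with FDR < 0.05. The gene symbols and numbers are toy values.

```python
# Downstream filtering illustration: BH adjustment and FDR < 0.05 screening.
import pandas as pd
from statsmodels.stats.multitest import multipletests

res = pd.DataFrame({
    "gene":   ["METTL1", "WDR4", "NUDT10", "EIF4E"],   # toy results table
    "log2FC": [1.8, 1.1, -0.4, 0.9],
    "pvalue": [1e-6, 3e-4, 0.40, 0.06],
})
res["FDR"] = multipletests(res["pvalue"], method="fdr_bh")[1]
deg = res[res["FDR"] < 0.05]
print(deg)
```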
Survival analysis of m7G regulators across cancers
After filtering out uncensored data, we performed survival analysis of m7G regulators for the subjects which had both expression data and clinical data in 17 cancers.
Tumor subjects were categorized into high-expression and low-expression groups according to the optimal cut-off of the gene TPM value, calculated with the "surv_cutpoint" function of the R package "survminer". Kaplan-Meier survival analysis based on the log-rank test was conducted for each m7G regulator using the R package "survival". Genes with a p-value < 0.05 were retained for the following analysis.
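A minimal sketch of this optimal-cutoff Kaplan-Meier analysis is given below; the data frame "df" with columns time, status and METTL1 (TPM) is a hypothetical example, not the study's data.

library(survival)
library(survminer)
# Find the optimal expression cut-off and split samples into high/low groups
cut <- surv_cutpoint(df, time = "time", event = "status", variables = "METTL1")
df2 <- surv_categorize(cut)
# Log-rank test and Kaplan-Meier curves for the two groups
fit <- survfit(Surv(time, status) ~ METTL1, data = df2)
survdiff(Surv(time, status) ~ METTL1, data = df2)
ggsurvplot(fit, data = df2, pval = TRUE)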
Single nucleotide variation and mutation analysis of m7G regulators across cancers
Single nucleotide variation (SNV) and mutation data (n = 9,104) in pan-cancer were obtained from the UCSC Xena browser. After filtering out non-coding region mutations, such as 3′UTR, 5′UTR, Silent, 3′Flank, and 5′Flank variants, the mutation frequency of each m7G regulator in the 17 cancers was computed. An oncoplot was drawn to show the mutation pattern of m7G regulators using the R package "maftools".
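The snippet below sketches how such a mutation summary could be produced with maftools; the MAF file name and the gene subset are hypothetical placeholders, and read.maf() by default keeps silent/flanking variants separate from the main summary.

library(maftools)
m7g_genes <- c("METTL1", "WDR4", "AGO2", "NUDT16", "NCBP2")   # illustrative subset
maf <- read.maf(maf = "pan_cancer_mutations.maf")             # hypothetical input file
oncoplot(maf = maf, genes = m7g_genes)                        # mutation landscape of the regulators
getGeneSummary(maf)[Hugo_Symbol %in% m7g_genes]               # per-gene mutation counts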
Copy number variation analysis of m7G regulators across cancers
The pan-cancer gene-level copy number variation (CNV) data (n = 10,845) estimated by the GISTIC2 method were downloaded from the UCSC Xena browser. CNV values were classified as amplification or deletion according to a threshold of 0.05, and the percentage of each CNV type was then calculated. Homozygous amplification and deletion data were used to assess the relationship between CNV and the expression level of m7G regulators in the 17 cancers using Spearman's correlation analysis.
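A minimal sketch of the CNV classification and the CNV-expression correlation for one gene in one cancer is shown below; "cnv" and "expr" are hypothetical per-sample vectors of GISTIC2 values and TPM expression.

# Classify CNV values and tabulate the percentage of each type
cnv_type <- cut(cnv, breaks = c(-Inf, -0.05, 0.05, Inf),
                labels = c("deletion", "neutral", "amplification"))
prop.table(table(cnv_type))
# Spearman's correlation between CNV and expression
cor.test(cnv, expr, method = "spearman")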
DNA methylation analysis of m7G regulators across cancers
The Methylation450K data (n = 9,639) across cancers were downloaded from the TCGA database. The DNA methylation value of each gene was calculated as the median of all beta-values obtained from mapped CpG islands in promoter regions, such as TSS1500 and TSS200. Differential analysis was performed with the R package "edgeR" to compare the methylation levels of m7G regulators between cancer and normal subjects across the 17 cancers. Hypomethylated and hypermethylated genes were screened at a threshold of p-value < 0.05. In addition, Spearman's correlation between the methylation and expression levels of m7G regulators was evaluated.
MicroRNA regulatory network of m7G regulators across cancers
The potential interactions between miRNAs and m7G regulator mRNAs were evaluated with starBase (https://starbase.sysu.edu.cn/). The pan-cancer miRNA expression data (n = 10,818) were obtained from the UCSC Xena browser. Spearman's correlation between the expression of m7G regulators and the predicted miRNAs was computed and filtered at thresholds of p-value < 0.01 and R < −0.25. Subsequently, the miRNA-mRNA regulatory network of m7G regulators was visualized with Cytoscape software.
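The filtering rule can be sketched as follows for one miRNA-mRNA pair; "mirna_expr" and "mrna_expr" are hypothetical expression vectors across the tumor samples of one cancer.

ct <- cor.test(mirna_expr, mrna_expr, method = "spearman")
# Keep the pair only if it is significantly and negatively correlated
keep <- ct$p.value < 0.01 & unname(ct$estimate) < -0.25
keep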
Multivariate regression analysis of m7G regulator gene expression
The contributions of CNV alteration, DNA methylation and miRNAs dysregulation to the aberrant expression of m7G regulators were assessed by multivariate regression analysis. The expression of m7G regulators was modeled based on the median miRNA expression, median methylation levels, and CNV values of each m7G regulator.
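A minimal sketch of this regression for one m7G regulator follows; "d" is a hypothetical per-sample data frame holding the gene's expression together with its CNV value, median promoter methylation and median expression of its targeting miRNAs.

fit <- lm(expression ~ cnv + methylation + mirna, data = d)
summary(fit)$coefficients   # relative contribution of each regulatory layer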
Establishment and evaluation of the m7Gscore
To establish an index to represent the role of m7G regulators, we conducted single-sample gene-set enrichment analysis (ssGSEA) based on the expression of m7G-related gene set. The enrichment scores (ES) of the m7G-related gene set in each subject across cancers were calculated using R package "GSVA". The m7Gscore between normal and tumor subjects in 17 cancers was estimated and the differential analysis was performed using t-test. The p-value was adjusted by FDR.
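A minimal sketch of the m7Gscore calculation with the classic gsva() interface is shown below; the expression matrix "expr", the partial gene set and the grouping factor "group" are hypothetical placeholders.

library(GSVA)
m7g_set  <- list(m7G = c("METTL1", "WDR4", "AGO2", "NCBP2", "EIF4E"))   # illustrative subset
m7gscore <- gsva(as.matrix(expr), m7g_set, method = "ssgsea")
# Compare the m7Gscore between tumor and normal samples
t.test(m7gscore[1, ] ~ group)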
Clinical relevance of the m7Gscore
We stratified the tumor subjects into the high-risk and low-risk groups according to the median of m7Gscore in each tumor. Kaplan-Meier survival analysis was performed to explore the survival difference in overall survival (OS) and disease-specific survival (DSS) between the high-risk and low-risk groups in each cancer. For cancers with p-value < 0.05, the survival independence of the m7Gscore as well as clinicopathologic factors (gender, age, race, grade, T, N, M, and tumor stage) was evaluated by performing univariate and multivariate Cox regression analysis. To further evaluate the prognostic value of m7Gscore in cancers with p-value < 0.05, we stratified the clinicopathologic features according to grade, stage, T, N, and M and analyzed the difference of DSS between the two risk groups.
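The Cox models can be sketched as below; "clin" is a hypothetical data frame with DSS time and status, the m7Gscore risk group and the main clinicopathologic covariates (the column names are assumptions).

library(survival)
uni   <- coxph(Surv(dss_time, dss_status) ~ m7Gscore, data = clin)
multi <- coxph(Surv(dss_time, dss_status) ~ m7Gscore + age + gender + stage +
                 t_stage + n_stage + m_stage, data = clin)
summary(multi)$conf.int   # HR and 95% CI for each factor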
Pathway analysis of the m7Gscore
To clarify the pathways related to m7G regulators, we stratified the tumor subjects in each cancer into the m7G-high group (top 30%) and the m7G-low group (bottom 30%) according to m7Gscore. Then we performed gene set enrichment analysis (GSEA) between the m7G-high and m7G-low groups and analyzed the enrichment of hallmark gene sets.
Immune microenvironment and immunotherapy response analysis of the m7Gscore
To explore the role of the m7Gscore in the tumor immune microenvironment (TIME), we analyzed Spearman's correlation between the m7Gscore and immune parameters, such as immune cell types, immune checkpoint molecules and immunophenoscores (IPSs). For immune cell types, we conducted ssGSEA to quantify the infiltration degree of 28 immune cell types in each subject across cancers using the R package "GSVA", and analyzed the immune cell composition among cancers based on the TCIA database. For immune checkpoint molecules, including PDCD1, CD274, PDCD1LG2, CTLA4, CD276, TNFRSF9, TNFRSF4, TGFB1, CXCR4, LAG3, ADORA2A, ICOSLG, IL1A, IL6, CCL2, IL10, TNFSF4, HAVCR2, CD4, ICOS, TIGIT, and SIGLEC15, expression differences between the m7Gscore high-risk and low-risk groups were evaluated. For IPS, we downloaded the IPS data from The Cancer Immunome Atlas (TCIA) and assessed their correlation with the m7Gscore among cancers. To further investigate the role of the m7Gscore in predicting tumor immunotherapy response, the dysfunction and exclusion scores of patients were calculated with the Tumor Immune Dysfunction and Exclusion (TIDE) database, and Spearman's correlation analysis between the m7Gscore and the T cell dysfunction and exclusion scores was conducted.
The potential pharmacotherapy sensitivity prediction of the m7Gscore
The Genomics of Drug Sensitivity in Cancer (GDSC) database was used to explore drug sensitivity, and the half-maximal inhibitory concentration (IC50) of each compound for patients was calculated with the R package "pRRophetic". To identify novel candidate drug compounds, we performed correlation analysis between the m7Gscore and the IC50 of each compound for patients across the 17 cancers. Potential pharmacotherapy-sensitive drugs were filtered at thresholds of p-value < 0.05 and R < −0.2 or R > 0.2.
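A minimal sketch of this step is given below; the drug identifier must match the naming used in the GDSC training data shipped with pRRophetic, and "expr" and "m7gscore_vec" are hypothetical inputs.

library(pRRophetic)
ic50 <- pRRopheticPredict(testMatrix = as.matrix(expr), drug = "PD.0332991")
# Keep the drug if p < 0.05 and |R| > 0.2
cor.test(m7gscore_vec, ic50, method = "spearman")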
Results
Aberrant expression and clinical relevance of m7G regulators across cancers
The overall workflow of this pan-cancer analysis of m7G regulators is shown in Figure 1. Initially, we identified 26 m7G regulators and categorized them into 3 groups, comprising 2 writers, 9 erasers, and 15 readers, and their distribution on human chromosomes was displayed in a circular plot (Figure 2A). To explore the collaboration among m7G regulators, we constructed a protein-protein interaction network and found that writer, reader and eraser proteins interacted extensively with each other, especially among readers and erasers (Figure 2B). The expression status of m7G regulators among normal tissues was also explored using GTEx data, and the results showed that EIF4A1 and EIF3D had the highest expression among different tissues, while EIF4E1B had the lowest expression (Figure 2C). Moreover, a correlation analysis was performed to investigate the co-occurrence among m7G regulators, and the results showed that most m7G regulators had positively correlated expression patterns, especially among readers and erasers (Figure 2D). For instance, the eraser NUDT3 was highly correlated with readers such as EIF4G3, GEMIN5 and LARP1, and the reader EIF4E was highly correlated with readers such as EIF4G3, GEMIN5, LARP1, and NCBP1.
Next, to investigate the aberrant expression pattern of m7G regulators across cancers, the expression differences of m7G regulators between tumor and corresponding normal tissues across the 17 TCGA cancer types were analyzed. We found that most m7G regulators were significantly upregulated in pan-cancers, especially in LUAD, LUSC, BRCA and UCEC (Figure 2E). Moreover, the expression of some writer genes (METTL1 and WDR4) and reader genes (AGO2 and NCBP2) was significantly upregulated in most cancers, while the expression of some eraser genes (NUDT12 and NUDT16) and reader genes (EIF4E3) was downregulated in most cancers (Figure 2E).
The obvious aberrant expression of m7G regulators prompted us to investigate their clinical relevance across cancers (Figure 2F). Several cancers showed consistent clinical significance of m7G regulators: most m7G regulators were associated with poor survival in BRCA, KICH, LIHC, LUAD and THCA, whereas most were protective in KIRC and READ. Furthermore, the m7G regulators had heterogeneous, cancer type-specific clinical relevance. For example, EIF4A1 was associated with poor survival in several tumors including HNSC, KICH, KIRC, LIHC and LUAD, but was protective in READ. Of note, most of the risk-associated m7G regulators were upregulated in LIHC, and most of the protective m7G regulators were downregulated in KIRC. Taken together, these results demonstrate that the collaborating m7G regulators are dysregulated across cancers and may play essential roles in tumorigenesis and progression.
Since CNV alteration can affect gene expression and plays fundamental roles in cancers (Shao et al., 2019), we also analyzed the CNV data of m7G regulators and found that the CNV alteration frequency was higher than 5% in most cancers (Figure 3C). Moreover, different m7G regulators had diverse CNV alteration patterns: METTL1, NUDT16, NUDT17, AGO2, and NCBP2 were characterized by heterozygous amplification, while WDR4, NUDT2, NUDT12, NUDT15, CYFIP1, EIF4E, EIF4E3, EIF3D, EIF4A1, EIF4G3, and NCBP3 showed heterozygous deletion. The distribution of the main CNV alteration patterns across cancers is shown in pie plots (Supplementary Figure S1B-D). Correlation analysis demonstrated that the expression of m7G regulators was positively correlated with their CNV alterations, especially for LSM1 in most cancers, whereas the correlations for most m7G regulators in THCA were weak (Figure 3D). Thus, the above results indicate that the CNV alteration pattern in most cancers may contribute to the aberrant expression of m7G regulators.
Apart from CNV alteration, DNA methylation, a critical epigenetic code, can also contribute to tumorigenesis and progression by governing gene expression (Nishiyama and Nakanishi, 2021). Here, we observed that the methylation pattern of m7G regulators differs across cancers (Figure 3E). Most genes were hypomethylated in BLCA, HNSC, KIRP, LIHC, PRAD, THCA, and UCEC, while most genes were hypermethylated in KIRC and LUSC. Moreover, correlation analysis demonstrated that the expression levels of half of the m7G regulators, such as DCPS, NUDT12, EIF3D, and LARP1, were negatively correlated with their methylation levels in most tumors, while the expression of DCP2, EIF4E, EIF4E3, and GEMIN5 showed a positive correlation (Figure 3F). These results indicate that DNA methylation may contribute to the abnormal expression of some m7G regulators in tumors.
The regulatory network between m7G regulators and microRNAs in cancers
Besides DNA methylation, miRNAs, another important epigenetic regulatory mechanism, can modulate gene expression at the post-transcriptional level. To identify potential m7G regulator-related miRNAs, an m7G regulator-miRNA network was constructed: 56 potential miRNAs targeting 21 m7G regulators were screened based on the ENCORI database, which can identify miRNA-mRNA interactions in pan-cancer analyses and check whether their expression is negatively correlated (Li et al., 2014). Here, we screened the potential miRNA-m7G regulator interactions present in more than six cancer types and then obtained their regulatory network. The network demonstrated that most m7G regulators could be regulated by miRNAs and that some regulators could be targeted by multiple miRNAs, such as NUDT12, EIF4E3, EIF4A1, WDR4, AGO2, and EIF4E2 (Figure 4A). Moreover, differential analysis of the potential miRNAs from the miRNA-mRNA interactions across cancers indicated that most miRNAs had diverse regulation patterns in various cancers. For example, hsa-miR-224, which targets DCP2, was upregulated in 10 cancers and downregulated in only 1 cancer. Furthermore, hsa-miR-99a, which targets AGO2, was exclusively downregulated in 8 cancers, while hsa-miR-93, which targets CYFIP1, was exclusively upregulated in 12 cancers (Figure 4B).
To further identify different contributions of CNV alteration, DNA methylation and miRNAs dysregulation to the aberrant expression of m7G regulators, we applied multivariate regression analysis and the results indicated that the expression of m7G
Differential analysis and clinical relevance of the m7Gscore across cancers
To further explore the importance of m7G regulators in pan-cancers, we modeled the m7Gscore with ssGSEA by calculating the normalized enrichment score (NES) of the m7G regulator gene set. Differential analysis of the m7Gscore between tumor and normal patients across cancers showed remarkable differences in most cancers, except for ESCA, CHOL, and HNSC (Figure 5A). Of note, the m7Gscore was significantly downregulated only in KIRC, THCA, and KIRP, and upregulated in the remaining cancers.
The m7Gscore is an independent prognostic factor in LIHC and LUAD
To identify potential independent prognostic factors in LIHC and LUAD, we performed univariate and multivariate Cox regression analyses including the m7Gscore and the main clinicopathologic features, such as gender, age, race, grade, T, N, M, and stage. The results showed that the m7Gscore (HR: 1.981, 95% CI = 1.021-3.846, p = 0.043) was significantly related to DSS and could be a potential independent prognostic factor in LIHC (Figures 6A,B). Moreover, for LUAD patients, the m7Gscore (HR: 1.752, 95% CI = 1.078-2.849, p = 0.024), T stage (HR: 2.063, 95% CI = 1.045-3.846, p = 0.043) and N stage were potential independent negative prognostic factors for DSS (Figures 6C,D). We subsequently stratified the main clinicopathologic features and investigated the prognostic difference in DSS in LIHC and LUAD patients; the results indicated that the m7Gscore performed well in the subgroups of grade 3-4, stage Ⅰ-Ⅱ, stage Ⅲ-Ⅳ, T3-4, N0 and M0 in LIHC patients, and in the subgroups of stage Ⅲ-Ⅳ, T3-4, N0 and M0 in LUAD patients (Figures 6E,F). Taken together, high-risk patients had poorer prognostic outcomes than low-risk patients.
Relationships between m7Gscore and hallmark pathways among cancers
To further explore the relationship between the m7Gscore and hallmark pathways among cancers, we performed gene set enrichment analysis (GSEA) between two tumor groups, defined as the top 30% and bottom 30% of the m7Gscore in each cancer. The hallmark pathways could be categorized into four types, involving cell growth, metabolism, cancer signaling and immune signaling, and we observed that the different types of pathways had distinct expression patterns (Figure 7A). For cell growth, pathways related to cell proliferation were enriched in the high-m7Gscore group, while pathways related to cell death were enriched in the low-m7Gscore group. For example, the G2M checkpoint pathway was positively correlated with the m7Gscore in 14 cancers, while the apoptosis pathway was negatively correlated with the m7Gscore in 15 cancers. For metabolism, the oxidative phosphorylation, glycolysis and fatty acid metabolism pathways were activated in the high-m7Gscore group in most cancers. For cancer signaling, the MTORC1 and PI3K-AKT-MTOR pathways were significantly enriched in the high-m7Gscore group, while the KRAS and hypoxia pathways were enriched in the low-m7Gscore group. For immune signaling, we observed that most immune pathways were inactivated in the high-m7Gscore group; for example, IL2-STAT5 signaling, IL6-STAT3 signaling and TNFα signaling were negatively correlated with the m7Gscore among cancers. Furthermore, the correlation between the expression of m7G regulators and the NES of each hallmark pathway in pan-cancers was also analyzed. As shown in Supplementary Figure S2A, METTL1, WDR4, NUDT15, AGO2, EIF4E2, EIF4A1, and NCBP2 had the same expression pattern as the m7Gscore in cell growth signaling pathways, while NUDT2, NUDT3, NUDT12, NUDT16, NUDT17, EIF4E1B, LSM1, and NCBP2 had the same expression pattern as the m7Gscore in immune signaling pathways.
Association between m7Gscore and tumor immune microenvironment, immunotherapy response among cancers
As there was a negative correlation between the m7Gscore and immune signaling pathways, we further explored the roles of the m7Gscore in the tumor immune microenvironment (TIME) among cancers. For the immune cell types among cancers, we revealed that the m7Gscore was inversely correlated with most immune cells, except for activated CD4 T cells, memory B cells and Th2 cells (Figure 7B), and differential analysis indicated that the infiltration scores of activated CD4 T cells, memory B cells and Th2 cells were higher in the high-risk patients (Supplementary Figure S3A). For example, the m7Gscore was positively correlated with memory B cells and activated CD4 T cells in LIHC, and with memory B cells, activated CD4 T cells and Th2 cells in LUAD (Supplementary Figures S2B,C). Moreover, we also investigated the relationship between the expression of m7G regulators and each immune cell type in LIHC and LUAD; the results showed that Th2 cells, memory B cells and activated CD4 T cells were positively associated with most m7G regulators, and EIF4E3 was positively related to most immune cells in both LIHC and LUAD (Supplementary Figures S2D,E). To further validate those results, we also analyzed the immune cell composition among cancers using the TCIA database and again observed that only CD4 T cells were positively correlated with the m7Gscore in most cancers (Supplementary Figure S3B). For the immune checkpoint molecules, differential analysis showed that most were significantly differentially expressed between the m7Gscore high-risk and low-risk groups, except for TNFSF4 (Figure 7C). Among these differentially expressed molecules, most were downregulated in the high-risk group, while only CD276 and IL-1A were upregulated. Moreover, correlation analysis between the m7Gscore and immune checkpoint molecules in each cancer showed that most immune checkpoint molecules were negatively correlated with the m7Gscore across cancers, except for CD274 in KIRP, KIRC and KICH, CD276 in PRAD and ESCA, ICOSLG in KIRP and KIRC, IL-1A in HNSC, SIGLEC15 in KICH and TNFSF4 in PRAD (Supplementary Figure S3C). Previous research found that the immunophenoscore (IPS) can be used to evaluate tumor immunogenicity and predict the response to immune checkpoint inhibitors; the IPS comprises four categories: MHC molecules (MHC), immunomodulators (CP), effector cells (EC) and suppressor cells (SC) (Charoentong et al., 2017). Herein, our results indicated that the m7Gscore was negatively correlated with the IPS in half of the cancer types, such as UCEC, THCA, STAD, PRAD, LUSC, LUAD, KIRC, and BRCA. Moreover, the m7Gscore was positively correlated with SC, but negatively correlated with MHC and EC in most cancers (Figure 7D). The Tumor Immune Dysfunction and Exclusion (TIDE) database can be used to predict tumor immunotherapy response based on a gene expression matrix (Fu et al., 2020). To further investigate the role of the m7Gscore in predicting tumor immunotherapy response, we conducted correlation analysis and found that the m7Gscore was negatively correlated with the T cell dysfunction score in most cancers, while there was no consistent correlation between the m7Gscore and the T cell exclusion score (Figure 7E). Moreover, patients in the immunotherapy responder group had a lower m7Gscore than non-responders in most cancers, except for LIHC (Supplementary Figure S3D), and patients in the m7Gscore high-risk group had a lower immunotherapy response rate than the low-risk group, except for CHOL and LIHC (Supplementary Figure S3E).
(Figure 7. Association between m7Gscore and hallmark pathways, tumor immune microenvironment, and immunotherapy response among cancers. (A) GSEA for hallmark pathways between the top 30% and bottom 30% of m7Gscore in each cancer. (B) The correlation between m7Gscore and immune cells among cancers based on ssGSEA. (C) The differential analysis of immune checkpoint molecules between m7Gscore high-risk and low-risk groups in pan-cancers. (D) The correlation between m7Gscore and immunophenotypes calculated by the TCIA database across cancers. (E) The correlation between m7Gscore and T cell dysfunction/exclusion score by the TIDE database across cancers.)
The pharmacotherapy sensitivity prediction based on the m7Gscore
To evaluate the association between m7G regulators and drug sensitivity and to identify novel candidate drug compounds, the correlation between the m7Gscore and the half-maximal inhibitory concentration (IC50) of each compound across cancers was examined based on the GDSC database. The results demonstrated that the IC50 values of 62 drugs were significantly associated with the m7Gscore, with 43 drugs positively correlated and 19 negatively correlated (Supplementary Figure S4). Moreover, the contribution of each m7G regulator to drug sensitivity was investigated, and we found that the m7G regulators could be categorized into two groups according to their correlation patterns. On the one hand, most m7G regulators had a uniform correlation with the IC50 of some drugs; for example, the IC50 of PD-0332991 (alias Palbociclib, a CDK4/6 inhibitor) and AZD0530 (alias Saracatinib, a Src inhibitor) was uniformly positively correlated with most m7G regulators. On the other hand, some m7G regulators had heterogeneous correlations with the IC50 of drugs; for example, NUDT12 and NUDT16 were positively correlated with A-443654 (an Akt inhibitor) and BI-2536 (a PLK inhibitor), while other m7G regulators, including NUDT19 and NCBP2, showed negative correlations. In conclusion, the m7Gscore might be a potential biomarker for predicting candidate drug compounds across cancers.
Discussion
Growing evidence indicates that RNA epigenetic modification plays fundamental roles in tumorigenesis and progression, and that its regulatory genes exhibit great potential as predictors of prognosis and immunotherapy response. For example, the regulators of m6A, a well-studied RNA modification type, have been shown by pan-cancer analyses to be tightly related to prognosis, the tumor immune microenvironment, tumor cell stemness, and anticancer drug sensitivity. The regulators of m5C, another common RNA modification in eukaryotes, have also been demonstrated by pan-cancer analysis to be closely correlated with cancer progression and patient survival, and can affect the tumor immune microenvironment in several tumor types (Pan et al., 2021; Fang et al., 2022). Besides, the regulators of N1-methyladenosine (m1A), a critical post-transcriptional RNA modification, have recently been revealed to possess potential value as biomarkers for predicting prognosis and evaluating the tumor immune microenvironment (Gao et al., 2021; Zheng et al., 2021; Zhao et al., 2022).
N7-methylguanosine (m7G) is another post-transcriptional RNA modification; it generally occurs in the 5′ cap or internal regions of multiple kinds of RNA, including tRNA, rRNA, mRNA, lncRNA and pre-miRNA (Zhang et al., 2019). Like m6A, m5C, and m1A, m7G modification has also been found to be involved in tumor progression (Luo et al., 2022). Interestingly, m7G-related lncRNAs have recently been shown to be aberrantly expressed in several tumors and tightly related to patient prognosis and the tumor immune microenvironment (Dong et al., 2022; Wu et al., 2022), suggesting that m7G regulators have the potential to predict tumor prognosis and immunotherapy effects. For example, we recently constructed an m7G-related lncRNA risk model to predict prognosis, immunotherapy response, and drug sensitivity in LIHC (Wei et al., 2022). However, the relationship between m7G regulatory genes and tumor prognosis as well as the immune microenvironment is not yet clear and needs to be further explored. Thus, in this study, a pan-cancer analysis of 26 m7G regulators across 17 cancer types was carried out to survey their expression characteristics and clinical significance in tumors using a bioinformatics approach. First, we found that the expression trends of m7G regulators and their correlations with patient survival differed between tumors, suggesting that m7G modification plays different roles in different tumors. Most strikingly, m7G modification might play opposing roles in liver and kidney cancer: almost all m7G regulators were upregulated in liver cancer (CHOL and LIHC) and were risk factors for survival, whereas in kidney cancer (KICH, KIRC) nearly all m7G regulators showed low expression and were protective factors for survival, suggesting that m7G modification plays an oncogenic role in liver cancer and a tumor-suppressive role in kidney cancer. To date, several m7G regulators, such as METTL1 and WDR4, have been confirmed to promote liver cancer progression by regulating m7G modification of tRNA (Chen et al., 2021). Unfortunately, the biological functions of m7G regulators in kidney cancer have not yet been reported with experimental validation (Dong et al., 2022).
Second, the genetic variations (SNVs and CNVs) and epigenetic regulation (DNA methylation and miRNAs) of m7G regulators were examined to understand the mechanisms underlying their abnormal expression in cancer. For SNVs, we found that the mutation patterns of m7G regulators were dominated by missense mutations, although no missense mutations in m7G regulators associated with tumor progression have yet been identified. For CNVs, we revealed a close correlation between CNV and differential expression of m7G regulators in almost all tumors, suggesting that CNVs could contribute to the abnormal expression of m7G regulators in tumors. For DNA methylation, we found that the methylation patterns of m7G regulators were heterogeneous in different cancers and largely mirrored their gene expression trends in several cancers: in LIHC, all detectable genes were hypomethylated and highly expressed, whereas in KIRC almost all detectable genes were hypermethylated and lowly expressed. For miRNAs, we constructed the miRNA-m7G regulator network and found that most m7G regulators could be regulated by miRNAs and that some regulators could be targeted by multiple miRNAs, indicating that miRNAs, as critical epigenetic regulators, could participate in the aberrant expression of m7G regulators in tumors. A recent study found that miR-4293 can promote the proliferation of lung carcinoma by targeting DCP2, an mRNA-decapping enzyme.
Third, to investigate the roles of m7G regulators, the m7Gscore was established by performing ssGSEA. We found that the m7Gscore was significantly downregulated only in KIRC, THCA and KIRP, and upregulated in most cancer types, including LUSC, LUAD and LIHC, indicating that the m7Gscore was overall consistent with the expression trends of m7G regulators in cancers. Survival analysis revealed the m7Gscore to be an independent prognostic factor in LUAD and LIHC. A recent study also showed that a prognostic model containing 7 m7G regulators performed well in predicting survival outcomes in LIHC.
Fourth, the relationship between the m7Gscore and hallmark pathways among cancers was assessed, and we found that the pathways significantly related to the m7Gscore mainly involved cell growth, metabolism, cancer signaling and immune signaling. For cell growth, the results suggest that m7G regulators may mainly promote cell proliferation by modulating the cell cycle and apoptosis; METTL1, an m7G "writer", has been proved to promote the proliferation of LIHC cells by accelerating the cell cycle G2/M transition and suppressing apoptosis (Chen et al., 2021). For metabolism, the results hint that m7G regulators may positively modulate the oxidative phosphorylation, glycolysis and fatty acid metabolism pathways. For cancer signaling, PI3K/AKT/mTORC1 signaling was the most significantly associated with the m7Gscore; recently, METTL1 was shown to promote the proliferation and autophagy of HNSC cells by upregulating the PI3K/AKT/mTOR signaling pathway (Chen J. et al., 2022). For immune signaling, we revealed that almost all immune-related pathways were negatively correlated with the m7Gscore, suggesting that m7G regulators may be associated with an immunosuppressive tumor microenvironment.
Fifth, to further explore the roles of m7G regulators in the TIME, we evaluated immune parameters in detail, including immune cell types, immune checkpoint molecules and immunophenoscores (IPSs). The results indicated that the m7Gscore was negatively correlated with most immune cells as well as checkpoint molecules in cancers. A recent study showed that the proportions of infiltrating Mrc1+ macrophages, Macro-3 cells and Langerhans cells in HNSC tissues were significantly increased after METTL1 knockdown, while exhausted CD4+ T cells and regulatory T cells were remarkably decreased (Chen J. et al., 2022). In addition, the m7Gscore was correlated with a poor immunotherapy response in most cancers. Thus, m7G regulators may be potential biomarkers for predicting the tumor immune microenvironment and immunotherapy response. Finally, the association between m7G regulators and drug sensitivity was explored based on the GDSC database; in total, 62 drugs were significantly associated with the m7Gscore, indicating that m7G regulators have potential as biomarkers for predicting candidate drug compounds for cancer patients.
Conclusion
Our pan-cancer analysis demonstrated that m7G regulators may play a significant role in tumor progression and the immune microenvironment, and that they show potential as biomarkers for predicting prognosis, immunotherapy response and candidate drug compounds for cancer patients. Meanwhile, this study provides novel clues for further basic and clinical translational research on m7G regulators in cancers.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
Author contributions

WW, YZ, and SZ inspired this study and analysed TCGA data. WW, WJ, and CW summarized the data and plotted most figures. WW and CL drafted the manuscript. SZ and YZ monitored this study. All authors contributed to the article and approved the submitted version.
Funding
This study was funded by the National Natural Science Foundation of China (No. 81802456) and the Fundamental Research Funds for the Central Universities (xzy012019085).
Health Benefits and Cost-Effectiveness From Promoting Smartphone Apps for Weight Loss: Multistate Life Table Modeling
Background: Obesity is an important risk factor for many chronic diseases. Mobile health interventions such as smartphone apps can potentially provide a convenient low-cost addition to other obesity reduction strategies.

Objective: This study aimed to estimate the impacts on quality-adjusted life-years (QALYs) gained and health system costs over the remainder of the life span of the New Zealand population (N=4.4 million) for a smartphone app promotion intervention in 1 calendar year (2011) using currently available apps for weight loss.

Methods: The intervention was a national mass media promotion of selected smartphone apps for weight loss compared with no dedicated promotion. A multistate life table model including 14 body mass index–related diseases was used to estimate QALYs gained and health systems costs. A lifetime horizon, 3% discount rate, and health system perspective were used. The proportion of the target population receiving the intervention (1.36%) was calculated using the best evidence for the proportion who have access to smartphones, are likely to see the mass media campaign promoting the app, are likely to download a weight loss app, and are likely to continue using this app.

Results: In the base-case model, the smartphone app promotion intervention generated 29 QALYs (95% uncertainty interval, UI: 14-52) and cost the health system US $1.6 million (95% UI: 1.1-2.0 million) with the standard download rate. Under plausible assumptions, QALYs increased to 59 (95% UI: 27-107) and costs decreased to US $1.2 million (95% UI: 0.5-1.8) when standard download rates were doubled. Costs per QALY gained were US $53,600 for the standard download rate and US $20,100 when download rates were doubled. On the basis of a threshold of US $30,000 per QALY, this intervention was cost-effective for Māori when the standard download rates were increased by 50% and also for the total population when download rates were doubled.

Conclusions: In this modeling study, the mass media promotion of a smartphone app for weight loss produced relatively small health gains on a population level and was of borderline cost-effectiveness for the total population. Nevertheless, the scope for this type of intervention may expand with increasing smartphone use, more easy-to-use and effective apps becoming available, and with recommendations to use such apps being integrated into dietary counseling by health workers.
Overview and purpose
This Technical Report provides the documentation on the Burden of Disease Epidemiology Equity and Cost Effectiveness (BODE 3 ) DIET models. i The first intervention model (IM) estimates the effect of a range of preventive dietary interventions on risk factors. The second, the BODE 3 DIET multistate lifetable (MSLT) model, estimates the effect the change in risk factor has on health impacts and cost impacts of a range of interventions in the New Zealand population, with the ability to examine heterogeneity by sex, age and ethnicity.
By health impacts, we mean a range of metrics. The primary metric is quality adjusted life years (QALYs) gained (or perhaps lost) by the intervention compared to modelled business as usual (BAU), but the following can also be outputted: mortality rates, morbidity rates, life years gained, and disease incidence.
By cost impacts, we mean two levels. First, and as the main or default option, health system perspective costs. This is the net of both the intervention cost (e.g. the cost of a new law for new taxes or the cost of dietary counselling by practice nurses) and the downstream costs averted (or incurred) in the health system due to changing disease incidence and prevalence. Second, societal cost impacts, most notably productivity costs. ii This adds to the health system costs those costs due to gains (or losses) in productivity in the labour force through keeping people healthy to work. We plan to extend this to welfare benefit costs. Greenhouse gas emissions and other 'costs' will also be considered. We do not, however, extend out to monetary value of life as this is partially captured in the QALY metric.
By preventive dietary interventions we mean public health or similar interventions that have the potential to change future dietary-related disease incidence. We consider these as two types of intervention: dietary interventions directly changing a 'risk factor', such as dietary counselling parametrized as directly changing body mass index (BMI; Section 1.02); dietary interventions that change food consumption or composition (and then change risk factors; Section 1.03).
By heterogeneity by sex, age and ethnicity we mean that model outputs will be examined and contrasted by these demographic groups. Why? Several reasons: we are interested in the ability of population-wide interventions to reduce ethnic inequalities in health (and socioeconomic inequalities in the future); intervention effectiveness varies by background epidemiological parameters (e.g. if the cardiovascular disease [CVD] rate for a group is high, they stand to gain more); gains in QALYs (intervention effect held constant) will differ by background mortality and morbidity rates.
The conceptual structure of the combination of both BODE 3 models is shown below in Figure 1. Specific dietary interventions lead to changes in foods consumed, and then to changes in nutrients and physiological markers that in turn lead to changes in disease incidence. The dietary interventions are 'channelled through' selected foods (fruit and vegetables, sugar-sweetened beverages (SSBs)). 9 Group codes were matched to an assortment of Nutritrack food products to provide price information for each food group. Product price was considered when matching food groups to the corresponding food products to ensure the range of products was most appropriate in terms of cost. Each food group required matching to at least one food product, and where possible food groups were matched to at least 10 different food products. v Where there were a limited number of appropriate Nutritrack food products available to match to NZANS food groups, food products were duplicated to allow more than one NZANS food group to be matched to one Nutritrack product. Food groups that reflected recipes (for example, casseroles/stews with sauce only) were matched to the most appropriate food products resembling the same or similar food components, and with probable similarities in terms of cost.
Prices for food groups that could not be matched to Nutritrack data (collected between December 2010 and April 2011) were obtained from online supermarket data. This included food products such as fresh fruit, vegetables and meat and poultry. The prices for these food products were obtained using the Countdown online supermarket (http://shop.countdown.co.nz). An unweighted average price was calculated across a range of food products considered to be most commonly consumed to obtain an average price for that food. Prices obtained from the online supermarket (year 2014) were scaled using the CPI to reflect 2011 prices.
Section 1.02. Dietary interventions parameterised as directly changing a risk factor
Here we focus on the 'general' modelling of dietary interventions onto 'risk factors' (the risk factors currently in the BODE 3 DIET MSLT model are fruit, vegetable, sugar-sweetened beverage, sodium and polyunsaturated fat intake, and BMI). The parameterisation of these interventions is straightforward in principle. Firstly, we need to determine, based on the current best evidence, the effect of the intervention on the particular risk factors. For example, how much does BMI decrease with an mHealth weight loss intervention? To determine the effect size we perform a literature search, and possibly expert knowledge elicitation (see the BODE 3 Protocol for the general approach 4 , and specific publications for the specific approach).
Then, there are a number of other factors to consider and parameterise:
- Who does the intervention affect? (e.g. just obese people? Or everyone?)
- Is there any heterogeneity of effect size by population characteristics? (e.g. sex, age)
- What proportion of this population takes up and also completes the intervention?
- What attenuation of effect is there over time? (e.g. informed by a literature search, and probably at least some expert knowledge elicitation, since empirical estimates are often sparse and for short follow-up only.)
- Is there any heterogeneity of attenuation by population characteristics? (e.g. sex, age; however, it is most unlikely that enough information will be available to specify such heterogeneity of attenuation.)
This intervention effect size and attenuation, with attendant uncertainty, is then modelled as an absolute change in risk factor (e.g. a 0.2 absolute unit change in BMI), for the intervention population of interest (e.g. all people 65 years and older, all people with a BMI ≥ 30). This intervention effect size is then applied to each relevant category of risk factor (e.g. if the intervention was targeted at people with a BMI over 30, then only to these groups; see PART 2 later for more detail on how this 'feeds into' the population impact fraction (PIF) estimation using a 'relative risk shift method').
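As a minimal sketch of the relative risk shift idea (illustrative numbers only, not BODE 3 inputs), the population impact fraction for one disease can be computed as follows in R.

prev   <- c(0.40, 0.35, 0.25)    # population share in each BMI category
rr     <- c(1.00, 1.30, 1.80)    # relative risk of disease in each category
rr_int <- c(1.00, 1.25, 1.70)    # relative risks after the intervention shifts BMI
pif <- (sum(prev * rr) - sum(prev * rr_int)) / sum(prev * rr)
pif                              # proportional reduction in incidence applied in the MSLT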
Section 1.03. Dietary interventions that change food consumption or composition
For the purposes of this Technical Report, we consider two types of intervention here: price changes from taxes and subsidies (Section 1.03.1); and food reformulation by the food industry (Section 1.03.2). Food taxes and subsidies are complex to model, and dominate this Section.
Food taxes and subsidies
The BODE 3 intervention model, which merges food price changes with price elasticities to generate changes in 346 foods consumed, is complex. The conceptual process is that a change in the price of food(s) leads to a change in purchasing (and, in parallel, consumption), modelled through price elasticities (PEs). This change in consumption then leads to percentage changes in food (vegetables, fruit, SSBs), nutrient (sodium, PUFA) and total energy intake, which in turn change disease incidence. The most complicated component is the step from change in food price to change in consumption, through price elasticities, for reasons such as:
- There are many possible foods that can have a price change, yet price elasticities are only (usually) calculated for aggregate groupings of foods.
- For any single food with a price change, one has to model not only its own change in purchasing/consumption (through own-PEs), but also how the change in this food affects consumption of all (or some) other foods (through cross-PEs).
- Price elasticities are calculated as a system in a different context to that in which they are applied in modelling. For example, the starting consumption of foods may differ between the context in which the PEs were calculated and the population to be modelled. For a set of price changes (especially if large and/or affecting multiple foods; e.g. a saturated fat tax) the predicted purchasing/consumption of many foods changes, and it is possible to see 'implausible' changes in energy intake. Put another way, the PE modelling may 'correctly' see decreases and increases in consumption of foods relative to one another, but the net energy intake change may be implausibly large.
Yet food taxes and subsidies are a key public health research question, and using price elasticities is usually necessary. We address some of these issues in this Technical Report.
This Section is structured as follows:
- Price elasticities:
  o Disaggregating price elasticity matrices
  o Theoretically selected price elasticities
- Calculating the estimated change in consumption for a given price change
- Constraining total food expenditure change
(i) Price elasticities
In this section we:
1. Outline the method used to disaggregate a 24 by 24 price elasticity matrix (from the SPEND Study 5,6 ; Figure 2, page 13) into a 338 by 338 price elasticity matrix. The reason for this disaggregation is that our price interventions will differentially affect prices within each of the 24 aggregated food categories (e.g. a saturated fat tax based on grams of saturated fat per 100g of product would not affect the price of each food sub-type (e.g. low and high fat cheese) by the same percentage within each aggregated food category (e.g. dairy products)). We mainly used theoretical means to do this, as empirical data are limited or non-existent. Price change, per gram of saturated fat, is based on the food composition data of the specific food groups.
2. Outline a basis upon which to theoretically 'set' many cross-PEs to zero for use as either the 'best' or scenario analyses (a choice that will be made in subsequent publications). The reason for doing this is that even a small (but erroneous) price elasticity from, say, dairy to fruit may just add more error to modelling, whereas theoretical setting of some cross-PEs to zero may improve subsequent modelling, or at least provide a useful scenario or sensitivity analysis. Many published modelling studies theoretically suppress cross-PEs (e.g. [6][7][8] ).
3. Outline a method to scale all purchases up or down by the same percentage, after modelling through disaggregated price elasticities. The reason for doing this is that even with our best efforts above to specify PE matrices, one may still end up with an implausible change in total food expenditure (and total energy intake). For example, a 10% increase in average food prices due to a saturated fat tax may result in no change in food expenditure (and a 10% reduction in energy intake) through the above PE matrix modelling. Yet there is an elasticity of total food expenditure given a change in total food price; an envelope within which redistribution between foods must operate. There are also reasons from econometric theory why such an envelope is sensible to invoke, namely that if the prices of many foods change then expenditure on food (in total) now has to also consider the total household budget and income elasticities (e.g., for some budget-constrained families higher food prices may result in total reductions in the amount of food purchased).
(ii) Generating disaggregated PE matrices
Initial price elasticities were from the SPEND Study, conducted for New Zealand. 5,6 These are in a 24 by 24 matrix (see Figure 2, page 12) of own- and cross-PEs (with standard errors for default uncertainty). These 24 food groups have been matched to the 346 food groups used in the intervention model. This gives us 24 overall food groups and 338 food subgroups (ignoring 5 'alcoholic beverage' groups, 2 'dietary supplement' groups and 1 'not applicable' group). The 24 by 24 price elasticity matrix was then expanded to a 338 by 338 matrix as follows:
- Own-PEs: Econometric theory posits that as one keeps disaggregating foods into smaller and smaller subgroupings, the own-PE of each food is expected to increase (in absolute value terms). [9][10][11][12] For example, the own-PE of all bread might be -0.5, but wholegrain bread separated out might be -0.55. Why? Because, assuming subgroups in each aggregated category are substitutes, changing the price of just white bread means consumers can swap to multigrain bread, meaning that consumers can be more price sensitive (a larger, in the negative sense, own-PE). How much does the own-PE strengthen? Unfortunately, that is difficult to estimate. What we have done is assume that the own-PE increases by 2.5% (with wide uncertainty expressed as a 50% (of 2.5% = 1.25 percentage point) standard deviation (SD) on the normal scale) for each additional food sub-group. Of note, the own-PE increases by 5% if splitting one category in two (we deliberately allow a greater increase in own-PE for the first split), but then by 2.5% for each additional food category thereafter. (Whilst theoretical literature can be found to support the fact that own-PE increases with disaggregation, [9][10][11][12] we were unable to find empirical research on the same for food. We therefore plan to undertake such analyses ourselves in the future with data collected from a virtual supermarket experiment in the Price ExaM study (within the DIET Programme; https://diet.auckland.ac.nz/content/price-exam) for which we can change the level of food disaggregation in calculating own-PEs, and then amend our 2.5% estimate as appropriate.) The overall sensitivity of the modelling to this parameter will be investigated and reported with one-way uncertainty analyses and Tornado plots (e.g. of QALYs gained).
- Cross-PEs within the initial food group: We assume that each food subgroup (e.g. the four bread sub-types of white bread, fibre-containing white bread, wholemeal bread, and wholegrain bread) within each separate food (e.g. bread) is a substitute for the others, meaning they have small positive cross-PEs. We specify all of these so that the sum (across rows of the PE matrix) of own- and cross-PEs gives the SPEND Study's own-PE, following econometric theory. 12,13 For example, if as above the own-PE of breads as one aggregated food category was -0.5, but when disaggregated the four subcategories of bread each had an own-PE of -0.55, then the sum of wholegrain bread's own-PE (-0.55) and each of the three cross-PEs of white, fibre white, and wholemeal onto wholegrain must be -0.5, meaning the sum of the three cross-PEs must be +0.05. We disaggregated this quantum across the three non-wholegrain breads proportional to their consumption (i.e. the cross-PE of a commonly purchased item on x is greater than the cross-PE of a rarely purchased item on x). For example, assume that the percentage consumption of the three non-wholegrain breads was white = 50%, fibre white = 20%, and wholemeal = 30%; then the cross-PE for white onto wholegrain would be 50% × 0.05 = +0.025, for fibre white 0.01, and for wholemeal 0.015.
Note that, thus far, we have two main assumptions: first, that own-PEs increase by 2.5% (with wide uncertainty) for each additional sub-category of food; second, that the disaggregation of cross-PEs is proportionate to that food's relative consumption. These assumptions are qualitatively justified based on econometric theory, but the exact quantification (or weighting) is unknown and needs empirical testing (and, in the meantime, uncertainty or scenario analyses). vi
- Cross-PEs for food sub-categories of foods in different aggregate categories (for example, for each of the four breads (white, fibre white, wholemeal and wholegrain) onto any fruit): Again, the cross-PE from the aggregated categories needs to be disaggregated by food sub-category, and we assume it is weighted by consumption. So, extending the above example, the cross-PE of aggregated bread onto fruit is 0.016 from the SPEND PE matrix (Figure 2, page 13). Assume wholegrain was 20% of all bread expenditure (the percentages above excluded wholegrain), meaning the percentage expenditure on the three other breads within all breads is white = 80% × 50% = 40%, fibre white = 80% × 20% = 16%, and wholemeal = 80% × 30% = 24%. Therefore, the cross-PEs for each of these four breads onto (any) fruit are estimated to be white = 40% × 0.016 = 0.0064, fibre white = 16% × 0.016 = 0.0026, wholemeal = 24% × 0.016 = 0.0038, and wholegrain = 20% × 0.016 = 0.0032.
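The share-weighted disaggregation can be sketched in R as follows, mirroring the bread-onto-fruit example above (values illustrative).

bread_share  <- c(white = 0.40, fibre_white = 0.16, wholemeal = 0.24, wholegrain = 0.20)
agg_cross_pe <- 0.016                        # aggregated bread-onto-fruit cross-PE
sub_cross_pe <- bread_share * agg_cross_pe   # cross-PE of each bread sub-type onto fruit
sub_cross_pe
all.equal(sum(sub_cross_pe), agg_cross_pe)   # shares sum to 1, so the aggregate is preserved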
Logic checking of the above was undertaken by determining changes in purchasing for various policies that had the same percentage price change on all food sub-types (e.g. all sub-types of bread and cereals) within each aggregate food category (e.g. bread and cereals combined), run through both the completely disaggregated and the 'simple' aggregated price elasticity matrices; identical results were obtained, as should be the case.
vi Assumptions implicit to price elasticity matrices include:
- The homogeneity assumption: the sum of the cross-PEs for a product and the income elasticity for that product is zero.
- The budget constraints assumption: the sum of the income elasticities weighted by the share of income spent on the goods is equal to 1.
Further mathematical work by Scarborough and Blakely managed to meet this 'stricter' homogeneity assumption using an 'odds' method to calculate the cross-PEs in this system (further information from the authors; emails and workings August 2016). However: 1) whilst 'mathematically correct' for one system of disaggregated foods, implausibly high cross-PEs can result; 2) it was mathematically intractable to find a solution of linear equations to apply to a larger food system (as we need to in the BODE 3 intervention model). We also note that the application of PE matrices calculated in one setting (with a set of assumptions (e.g. conditionality, meaning no change in budget share for food)) to another setting (e.g. New Zealand in the future with different starting distributions of food consumption, tastes and preferences) is structurally uncertain, albeit unavoidable. Therefore, in the interests of model parsimony, we settled on the approach detailed in the main text of this Appendix.
(iii) When empirical data on disaggregated PEs exists from other research
Finally, for soft drinks there were actual estimates of cross-PEs for regular and diet soft drinks available through a paper published by Sharma et al (2014) 14 (Australian study). These were rescaled to the SPEND carbonated beverages own-PE as follows:
a) Assume that the relative distributions of own/cross-PEs in Sharma et al apply to New Zealand.
b) Then imagine that diet and regular soft drinks have the same price increase/decrease, meaning that this 2 by 2 matrix should return what the single own-PE in SPEND returns.
c) The SPEND own-PE is -1.23. Thus, we need to make the Sharma 2 by 2 matrix behave as if it were -1.23 in aggregate. We achieved this by a scalar based on budget share (using food consumption from the NZANS as a proxy).
The scalar is calculated as follows. First, the own-PE (shaded cell) and cross-PE from Sharma for regular soft drinks give a 1.509% reduction in regular and a 0.670% increase in diet soft drinks. Given that regular soft drinks make up 83.2% of consumption, and diet ones 16.8%, the net change in soft drink consumption (due to the change in regular prices only) will be (83.2% × -1.509%) + (16.8% × 0.670%) = -1.143%. Similarly, the own-PE and cross-PE for diet soft drinks give a 0.383% increase in regular and a 2.418% decrease in diet, and therefore a net change in soft drink consumption (due to the change in diet prices only) of (83.2% × 0.383%) + (16.8% × -2.418%) = -0.088%. Therefore, across both regular and diet soft drinks there will be a net change of -1.143% + -0.088% = -1.23%, consistent with the 'starting' SPEND own-PE of -1.23.
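This consistency check can be reproduced with a few lines of R, using the percentage changes and consumption shares from the worked example above.

share           <- c(regular = 0.832, diet = 0.168)
d_regular_price <- c(regular = -1.509, diet = 0.670)   # % change from the regular-price change
d_diet_price    <- c(regular = 0.383,  diet = -2.418)  # % change from the diet-price change
net <- sum(share * d_regular_price) + sum(share * d_diet_price)
round(net, 2)                                          # approximately -1.23, the SPEND own-PE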
This disaggregation was repeated using own-and cross-PE from a report published by Tiffin et al in 2011 15 in a sensitivity analysis.
Selected examples of expected (i.e. no uncertainty propagated through calculations) cross- and own-PEs for some of the food sub-types from the fully disaggregated PE matrix are shown in Table 1 below (using methods 1, 2 and 3 above for everything except the underlined block of disaggregated soft drink PEs, which uses method 4 above based on the Sharma et al (2014) external data), and can be contrasted with the more aggregated SPEND PEs shown in Figure 2, page 12.
(iv) Theoretically selected cross-PEs
We updated the literature review from our previous work 16 to include PE studies for high-income countries (mainly the UK, US and Australia). We searched the Ovid database with the keywords: "Price elasticit$" AND "Food$" OR "Drink$" OR "Beverage$", NOT tobacco, NOT alcohol, from 2000 onwards (English language, human, full text). Studies that only estimated price elasticities in low- or middle-income countries were excluded. Studies had to report cross-PEs between at least two food groups (given we are interested in cross-PEs). Eleven studies met our search criteria. 9,[17][18][19][20][21][22][23][24][25][26] We matched the food groups from the selected studies with the BODE 3 intervention model's food groups, then extracted all PEs from these studies to a database. The median cross-PE from this database was selected as the best cross-PE for each food group pairing in the PE matrix (there were some outliers in the data, so we decided not to use average cross-PEs; the majority of cross-PEs had three or more estimates). We refer to these selected cross-PEs as the BODE 3 cross-PEs (as opposed to the SPEND cross-PEs). We also classified cross-PEs as weak, medium or strong associations, with thresholds estimated from our PE database: weak associations accounted for the lower 25th percentile, strong associations for the upper 25th percentile, and medium associations for the rest.
For modelling of the impact of price changes on food purchasing/consumption, we will use three general approaches, each with alternative options or scenarios within it:
Approach A: use SPEND PEs
In this approach we will simply use all SPEND own-PEs and cross-PEs (i.e. no suppression of any cross-PEs, use standard errors about each own-PE and cross-PE as initial uncertainty intervals to draw from in Monte Carlo simulation).
Suppress selected cross-PEs as sensitivity analyses:
- suppress (i.e. set to 0) those SPEND cross-PEs that in the above-mentioned literature review we classified as 'weak', i.e. where the BODE 3 |cross-PE| ≤ 0.04 (AS1, see Appendix B: SPEND Study price elasticity tables, page 72);
- suppress those SPEND cross-PEs that in the above literature review we classified as 'weak' or 'moderate', i.e. where the BODE 3 |cross-PE| ≤ 0.09 (AS2, see Appendix B: SPEND Study price elasticity tables, page 72);
- suppress those SPEND cross-PEs as 'theoretically' determined by previous users 6,8 of SPEND price elasticities (AS3, varied by policy and will be described in detail if used).
Approach B: use BODE 3 (cross) PEs
In this approach we will retain SPEND own-PEs, but use the median BODE 3 cross-PEs from the literature (BS1).
Suppress selected cross-PEs as sensitivity analyses:
- suppress (i.e. set to 0) those BODE 3 cross-PEs that in the above literature review we classified as 'weak', i.e. where the BODE 3 |cross-PE| ≤ 0.04 (BS2, see Appendix B: SPEND Study price elasticity tables, page 72);
- suppress those BODE 3 cross-PEs that in the above literature review we classified as 'weak' or 'moderate', i.e. where the BODE 3 |cross-PE| ≤ 0.09 (BS3, see Appendix B: SPEND Study price elasticity tables, page 72).
Additional sensitivity analysis: Use the median BODE 3 own and cross-PEs from the literature (BS4, see Appendix B: SPEND Study price elasticity tables, page 72).
All of the above approaches used the disaggregation method described above (page 13) to move from the SPEND 24 by 24 matrix to the fully disaggregated 338 by 338 matrix.
(v) Calculating change in consumption for a given price change
Whilst the matrices are large, and there is uncertainty in the own- and cross-PEs (that is, uncertainty intervals about each own- and cross-PE that are sampled from during Monte Carlo simulation), the actual mechanics of calculating the change in consumption are fairly straightforward. Imagine that there are only three food groups: A, B and C. Next, assume a PE matrix in which, for each 1% increase in the price of A, consumption of A reduces by 0.7% (the own-PE), while consumption of B increases by 0.02% and consumption of C increases by 0.15% (cross-PEs); and so on for price changes in B and C.
Assume that A has a 20% increase in price, B a 10% increase in price, and C no change in price. Next, assume that initial consumption of A was 500g, B 200g and C 100g. The post-price-change consumption is then obtained by applying each food's own-PE and the relevant cross-PEs to these percentage price changes, giving a change in grams. Whilst we are using consumption data in grams, not purchasing data in grams, as long as one assumes that wastage (i.e. the percentage of food purchased that is not consumed) is similar between baseline and intervention, one can safely convert to percentage change after working with grams in the actual calculations. (We acknowledge that this is a simplifying assumption about wastage.)
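The worked example above can be expressed in a few lines of code. The sketch below is illustrative only: row A of the matrix uses the elasticities quoted in the text, while rows B and C are hypothetical values added to complete the example.

```python
import numpy as np

# Price-elasticity matrix: rows = food whose PRICE changes, columns = food whose
# CONSUMPTION responds. Row A is taken from the worked example in the text
# (own-PE -0.7, cross-PEs +0.02 and +0.15); rows B and C are purely illustrative.
foods = ["A", "B", "C"]
pe = np.array([
    [-0.70,  0.02,  0.15],   # 1% price rise in A
    [ 0.03, -0.60,  0.05],   # 1% price rise in B (hypothetical)
    [ 0.10,  0.04, -0.50],   # 1% price rise in C (hypothetical)
])

price_change = np.array([0.20, 0.10, 0.00])     # +20% A, +10% B, no change in C
baseline_g   = np.array([500.0, 200.0, 100.0])  # grams/day

# Percentage change in consumption of each food = sum over price changes of
# (price change x elasticity), then applied to the baseline grams.
pct_change = price_change @ pe
new_g = baseline_g * (1.0 + pct_change)

for f, b, n in zip(foods, baseline_g, new_g):
    print(f"{f}: {b:.0f} g -> {n:.1f} g")
```

With the own- and cross-PEs drawn from their uncertainty intervals, the same matrix calculation is repeated in each Monte Carlo iteration.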
(vi) Constraining total food expenditure change
The price elasticities used in this model were calculated from a subset of the New Zealand population, with internationally sourced cross-PEs for scenarios BS1 to BS3 and internationally sourced own and cross-PEs for scenario BS4, and do not 'fit' perfectly to the consumption data from the NZANS used in this model. Moreover, the price elasticity values we use are from 'conditional' models, where the total expenditure on food is assumed fixed; if the interventions we model substantially change prices and therefore overall expenditure on food, we need to allow for how much total food expenditure changes as a result of price changes. These two problems can lead to implausible changes in food expenditure and energy intake if the price elasticities are naively used without constraints.
To address this issue, we need to consider how total food expenditure changes as a result of substantive changes in food prices, summarised by the total food expenditure elasticity (TFE_e; the percentage change in total food expenditure for a 1% change in the food price index). Theoretically, we would not expect the TFE_e to exceed 1.0. If it did exceed 1.0, this would suggest that as food prices increased, expenditure increased even faster - clearly implausible on a fixed household total budget. Conversely, it seems unlikely that the TFE_e is less than 0, as food is essential to our existence. Accordingly, the naïve upper confidence limit of 1.21 from the Michelini (1999) derived TFE seems implausible - it should be less than 1.0.

Table 2 (page 22) presents TFE_e estimates for eight studies that used multi-stage budgeting models to estimate unconditional and uncompensated food own-PEs, for high-income countries up to June 2017 (keywords: "price elasticities" or "price elasticity" or "demand" and "food" and "multi-stage" or "multi stage", mainly Google Scholar). Consistent with theoretical expectation, all estimates were between zero and one, albeit spanning this entire range. The previous New Zealand study estimated a TFE_e of 0.68, a bit less than 0.832. The average, median and standard deviation across these eight studies were 0.59, 0.66 and 0.29, respectively.

In the absence of an ideal (let alone perfect) recent New Zealand study, we elected to specify a Beta distribution for the TFE_e; a Beta distribution was chosen as the value needs to be between 0 and 1. Values for alpha and beta were varied in order to return a mean close to the New Zealand literature, and were set to 6 and 2. This returns a mean of 0.75.

One additional prior step was also required. Changing total household expenditure on food is equivalent to an income change for food consumption. Therefore, income elasticities for each food category were also applied. This step made little relative difference to food expenditure, and everything was still scaled to the 'set' new expenditure based on the TFE_e and the percentage change in the food price index.
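A minimal sketch of how the TFE_e constraint could be applied is shown below; the Beta(6, 2) draw reflects the distribution specified above, while the price index change, expenditure figures and grams are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Total food expenditure elasticity (TFE_e) drawn from Beta(6, 2): mean 0.75,
# constrained to (0, 1) as required above.
tfe_e = rng.beta(6, 2)

# Suppose the intervention raises the food price index by 5%, and the naive PE
# step produced the post-intervention expenditure and grams below (illustrative).
price_index_change = 0.05
baseline_spend = 100.0                                    # $/week, illustrative
target_spend = baseline_spend * (1 + tfe_e * price_index_change)

unconstrained_spend = 107.2                               # from naive PE application
scaling = target_spend / unconstrained_spend

# All food purchasing (grams) is scaled up or down by the same factor, preserving
# the relative shifts between foods produced by the PE matrix.
unconstrained_grams = np.array([480.0, 210.0, 103.0])
constrained_grams = unconstrained_grams * scaling
print(tfe_e, scaling, constrained_grams)
```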
In summary, given our (necessary) reliance on: a) less than ideal price elasticity matrices; b) baseline food consumption distributions in our simulation studies that are not the same as those used in price elasticity estimation; and c) simulated food price interventions that will change the food price index by more than a trivial amount, it was necessary to 'set' the new total expenditure on food. To not do so would have risked implausible changes in total food expenditure and - importantly for final estimation of health gains - implausible changes in food energy intake. We specify generous uncertainty about the TFE_e, as it is genuinely uncertain. Finally, the TFE_e essentially just scales all food purchasing up or down by the same amount; the relative impact on food consumption from the PE matrix is preserved (e.g. the effect of a saturated fat tax decreasing fatty food purchasing but increasing non-fatty food purchasing, relative to each other, is preserved).
Food reformulation
The methods used for food reformulation will be expanded in future versions of this Technical Report. In principle, the approach will be:
1. Specification of the policy option, and what foods/nutrients it targets.
2. Estimation of how much individual food products, or nutrient amounts directly, change as a result of the policy.
These changes will be fed into the foods, and the resultant changes in risk factors from baseline will be estimated. These are likely to be for nutrient risk factors and BMI only (i.e. for sodium, PUFA and BMI).
Risk factor distributions
There are currently six risk factors generated in the BODE 3 intervention model that flow into the BODE 3 DIET MSLT model: change in BMI, and changes in intake of fruit (grams/day), vegetables (grams/day), sugar-sweetened beverages (SSBs, mls/day), sodium (mg/day) and polyunsaturated fat (as a percentage of total energy, %TE) between baseline and intervention.
Changes in consumption from baseline to intervention are calculated separately for Māori and non-Māori, males and females, but due to data limitations could not usually be further calculated by age-group. We treat this (necessary) simplification as satisfactory for estimating average changes across ages, and from there the percentage change (of baseline intake). But given that there are some important age variations in risk factor distributions (e.g. SSBs are more commonly consumed by young people), it was necessary to use the 'all ages percentage change' to in turn estimate the grams or mls change by age.
This percentage difference is applied to the average consumption for the specific age-groups (15-25, 25-35, 35-45, 45-55, 55-65, 65-75, 75-85 and 85+), giving a change in intake in grams (for fruit and vegetables) or mls (for SSBs) specific to each sex, ethnic and age-group. Change in sodium uses the change in grams for all the different food groups and the sodium content of these foods (outlined in Section 1.01.1) to calculate a change in mg of sodium. This is also calculated by sex and ethnic group, and estimated as above for age-groups. The percentage of total energy (%TE) from polyunsaturated fat is calculated for baseline and intervention. The change in %TE from polyunsaturated fat is the risk factor that flows through to the BODE 3 DIET MSLT model and is not differentiated by age-group.

As an example, consider the change in SSB intake among Māori males under a 10% SSB tax (Table 3, page 25). We applied the estimated percentage change to the grams per day by age-group (within Māori males) given by the NZANS. 2 Accordingly, absolute consumption of SSBs was estimated to decrease (under the 10% SSB tax intervention) by a minimum of 2.35mls per day for the elderly, and a maximum of 54.54mls per day for young Māori males.
*As a result of the intervention (with TFE_e switched on), average intake (for the four demographic groups as a whole) changed by this absolute amount.
**The absolute change was converted to a percentage change that was then applied to the baseline intake of the specific age-groups to give an estimate of absolute change by age.

For all risk factors except BMI the change occurs in the first year; for BMI it takes 2 years for the full BMI change to occur (see section 2.01.1 for details). For taxes and subsidies the change in risk factor is then maintained for the length of the tax/subsidy. For one-off interventions the initial effect starts to decay after the first year (or 2 in the case of BMI, see section 1.01.08 for details).
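The age redistribution step described above amounts to converting the 'all ages' absolute change to a percentage of the all-ages baseline and applying that percentage to each age group's baseline intake. A small illustrative sketch (invented intake values, not NZANS figures):

```python
# 'All ages' absolute change in SSB intake for one sex/ethnic group is converted
# to a percentage and applied to each age group's baseline intake.
baseline_all_ages_ml = 250.0      # average SSB intake, illustrative
change_all_ages_ml = -30.0        # intervention effect from the PE step, illustrative
pct_change = change_all_ages_ml / baseline_all_ages_ml

baseline_by_age = {"15-25": 520.0, "25-35": 340.0, "65-75": 60.0, "85+": 20.0}
change_by_age = {age: ml * pct_change for age, ml in baseline_by_age.items()}
print(change_by_age)              # larger absolute changes for younger, heavier consumers
```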
Change in BMI
Change in BMI is calculated through a change in energy intake from baseline to intervention. As outlined in the Nutrients section on page 7, baseline consumption is matched to the energy content of the foods consumed. As consumption increases or decreases so does the energy intake.
Change in energy intake is converted to change in kg and change in BMI using the formula presented in Hall et al (2011). 34 This paper critiques the commonly used 'static weight-loss rule' that a reduction in food intake of 2 MJ/day will lead to a steady rate of weight loss of 0.5kg/week. The Hall et al method instead takes into account the dynamic physiological adaptations that occur with decreased bodyweight, and quantifies the effect of energy imbalance on bodyweight using mathematical modelling: a sustained reduction in food intake of 100kJ/day will lead to a change of 1kg, with half of the weight change reached in 1 year and 95% by year 3. This is operationalised in the BODE 3 DIET MSLT model as 50% of the change in BMI in the first year, then 100% of the change by the second year, with subsequent weight change either held constant or decayed (due to a decaying intervention effect) over time.
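A minimal sketch of this operationalisation of the Hall et al rule is shown below; the height value is an arbitrary illustration, and the function ignores any decay of the intervention effect.

```python
# A sustained change of ~100 kJ/day in energy intake maps to ~1 kg of body-weight
# change, phased in as 50% in year 1 and 100% from year 2 onwards.
def bmi_change(delta_kj_per_day: float, height_m: float, year: int) -> float:
    """Change in BMI (kg/m^2) attributable to a sustained energy-intake change."""
    full_kg = delta_kj_per_day / 100.0       # kg change at the new steady state
    phase_in = 0.5 if year == 1 else 1.0     # 50% in year 1, 100% thereafter
    return (full_kg * phase_in) / height_m ** 2

print(bmi_change(-200.0, 1.70, year=1))      # about -0.35 BMI units in year 1
print(bmi_change(-200.0, 1.70, year=2))      # about -0.69 BMI units from year 2
```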
PART 3. Disease Modelling
Part 3, Disease Modelling in the BODE 3 DIET MSLT model, is presented in six sections:
1. Section 3.01 outlines the structure of the BODE 3 DIET MSLT model.
2. Section 3.02 outlines the baseline specification and parametrization of the model; in other words, how the mortality, morbidity and cost parameters are expected to behave under 'business as usual' (BAU).
3. Section 3.03 presents model calibration.
4. Section 3.04 presents model validation.
5. Section 3.05 briefly outlines analysis.
6. Section 3.06 provides an additional note on why we use disability-adjusted life-years (DALYs) and QALYs interchangeably in the context of simulation modelling.

Everyone still alive in each cycle of the model (more specifically, the alive proportion for whichever five-year cohort is currently being modelled) is represented in the main life-table.
In this main life-table, age-specific all-cause mortality and morbidity rates are applied in each cycle to the 'alive cohort', until the age of 110 years when all remaining alive people are assumed to die. As such, the sum of QALYs can be tallied.
In parallel, proportions of the cohort can simultaneously reside in one or more parallel disease-specific life-tables or states. Or put more correctly, multiple disease states are modelled independently. vii Within these disease-specific life-tables, disease incidence rates, remission and case-fatality rates, and disease-specific morbidity (disability weights from the New Zealand Burden of Disease Study (BDS) 35 and GBD 36 ), and disease-specific costs, are modelled.
vii With the exception of diabetes, which has been 'linked' to coronary heart disease and stroke states (See section 1.01.09 for details).
The disease-specific life-tables have both a BAU and intervention model. The latter intervention model differs from the BAU model, in that incidence rates are changed (usually lowered) based on population impact fractions (PIFs; a 'merging' of changes in risk factor distributions and relative risks; see 3.01.4 later in this Technical Report). This allows a calculation of differences in disease-specific mortality and morbidity rates, and differences in disease-costs per capita.
These differences are then summed across all parallel disease states, and added or subtracted to the all-cause mortality and morbidity rates in the main life-table and captured as cost differences between BAU and intervention, allowing estimation of QALYs gained (or lost) and health system cost change between the BAU and intervention scenarios for the population overall -the main objective of the modelling.
Figure 4: Schematic of a proportional multi-state life-table, showing the interaction between disease parameters and life-table parameters, where x is age, i is incidence, p is prevalence, m is mortality, w is disability-adjustment (or health status valuation), q is probability of dying, l is number of survivors, L is life years, Lw is health adjusted life expectancy (HALE), and where '-' denotes a parameter that specifically excludes modelled diseases, and '+' denotes a parameter for all diseases (i.e. including modelled diseases). 37
The schematic on page 30 is an alternative way of presenting a proportional multi-state life-table structure. There are numerous 'disease processes' that are modelled independently, and the total population 'experience' (in this case shown as health-adjusted life expectancy, or quality-adjusted life expectancy) is the sum of these disease process contributions plus the mortality and morbidity experience due to all remaining diseases considered as one 'residual entity'. The way the intervention simulations work (not shown directly in the figure) is to calculate changes between BAU and intervention scenarios in mortality, prevalence and disability rates for each disease process (due to changing disease incidence rates in each disease process), and then 'sum' these changes to calculate new total population (i.e. main life-table) mortality, prevalence and disability rates. From here one derives the change in quality-adjusted life years lived by the cohort.
Other outputs, like the change in total mortality rate, can also be produced. Finally, health system costs can be 'attached' to the model structure in a similar way to disability or morbidity weights, allowing an estimation of the change in health system costs due to changing disease epidemiology (see Section 3.02.5).
Diet-related disease models
Diet has been linked to increased incidence of various cancers (e.g. colorectal), cardiovascular diseases (e.g. coronary heart disease (CHD), stroke) and osteoarthritis through dietary impacts on BMI. These diseases were modelled, within each disease process or parallel disease state as above, using a set of differential equations that describe the transition of people between four states (healthy, diseased, dead from a disease in the model, and dead from all other causes), with transition of people between the four states based on rates of background mortality, incidence, case-fatality and remission ( Figure 5, page 32).
Figure 5: Each disease is modelled with four states (healthy, diseased, dead from the disease, and dead from all other causes) and transition hazards between states of incidence, remission, case-fatality and mortality from all other causes.
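The four-state process can be illustrated with a small discrete-time approximation of these transitions (the model itself uses the differential-equation formulation and is implemented in Excel). All rates below are invented for one illustrative stratum.

```python
import numpy as np

# One annual cycle of a four-state disease process for one age/sex/ethnic stratum:
# healthy, diseased, dead-from-disease, dead-from-other-causes.
def disease_cycle(state, incidence, remission, case_fatality, other_mortality):
    healthy, diseased, dead_disease, dead_other = state
    new_healthy = healthy * (1 - incidence - other_mortality) + diseased * remission
    new_diseased = diseased * (1 - remission - case_fatality - other_mortality) \
                   + healthy * incidence
    new_dead_disease = dead_disease + diseased * case_fatality
    new_dead_other = dead_other + (healthy + diseased) * other_mortality
    return np.array([new_healthy, new_diseased, new_dead_disease, new_dead_other])

state = np.array([1.0, 0.0, 0.0, 0.0])   # whole cohort starts healthy
for _ in range(40):                      # 40 annual cycles
    state = disease_cycle(state, incidence=0.002, remission=0.0,
                          case_fatality=0.05, other_mortality=0.01)
print(state)                             # proportions in each state after 40 years
```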
The default model structure was that diseases were modelled independently. Specifically, the sex-, age-and ethnic-specific incidence, remission, and case-fatality rates for each disease were modelled independently, e.g. the incidence rate for colorectal cancer did not vary with changes in the incidence rate (or prevalence) of kidney cancer. However, we include dependency for diabetes as a disease state, essentially treating it both as a disease state and a risk factor itself for coronary heart disease and stroke. Given this 'both a disease and risk factor' treatment of diabetes, we defer describing this model structure until after describing how risk factors are treated (i.e. Section 3.01.5).
How changes in risk factors change disease incidence
Health and cost impacts of simulated interventions are achieved by interventions changing risk factors (e.g. BMI) which in turn change disease incidence. This is similar to comparative risk assessment, and indeed involves 'shifts' in risk factor distributions that are merged with relative risks to determine PIFs, the percentage by which disease incidence is (usually) decreased. In this section we describe the model structure features, namely:
1. the risk factor-disease associations included in the model
2. the calculation of the PIFs
3. how decay (if any) in risk factor change is modelled over time
4. how time lags between risk factor changes and disease incidence changes are modelled.
(Actual relative risks used are given in Appendix E: Relative risks of diet to disease associations (page 96). How dietary interventions change risk factors was described in PART 1. Baseline data on risk factors was described in PART 2 Section 2.01.)
(vii) Risk factor-disease associations included in the BODE 3 DIET MSLT model
Risk factors were included if they met the following criteria:
- They were assessed as a top risk factor (top 20) in Australasia (Australia/New Zealand) in the GBD 2010 Study. 39
- There are interventions we plan to model that can modify the risk factor.
- There are data available:
  o distributional data in New Zealand (e.g. NZANS)
  o RR data (to all key diseases; GBD-sourced RRs preferable), mutually adjusted for other risk factors in the model where possible.
Table 5 (page 33) shows the risk factor-disease associations operating in the BODE 3 DIET MSLT model. All diet-disease associations that met the above criteria were included in the model, with planned modifications for future versions of the model outlined in Table 6 (page 34).
Table 6: GBD risk factors to be included in Model V2
- Physical inactivity and low physical activity: to be added in the next version of the model (V2).
- Diet low in nuts and seeds: to be added in the next version of the model (V2).
- Diet low in whole grains: to be added in the next version of the model (V2).
- Diet high in processed meat: ideally to be added in the next version of the model (V2). Firstly, investigate the level of effect that is mediated through other risk factors currently in the model (e.g. sodium), then add the risk factor into the model with an appropriately modified RR.
- Diet low in fibre: the effect of low fibre is considered completely mediated by 'diet low in whole grains', fruits and vegetables - risk factors either currently in the model or planned to be in the model (V2).
- Diet low in seafood omega-3 fatty acids: there is no intake data for this risk factor in New Zealand.
Additionally, SSB intake (ranked as the 31st top risk factor in Australasia in the GBD 2010 Study 39 ) is included in the model, due to the planned interventions that would impact on SSB consumption.
(viii) Calculation of PIF: Relative risk shift method
We modelled the health benefits of interventions through a reduction in the incidence of each diet-related disease (Equation 4, page 35). The change in risk factor acts on the starting risk factor distribution by sex, ethnic and age group. For each risk factor there are up to 10 categories of risk (e.g. for BMI: <20, 20-25, 25-30, 35-40, 40-45 and 45+; six categories). The proportion of the population for each sex, ethnic and age group that sits in each of those categories is obtained from the NZANS. This proportion, the category midpoint and the relative risk associated with that risk factor are mathematically combined with the effect size to calculate the PIF for each risk factor-disease combination - not by shifting proportions of the cohort between categories, but rather by shifting the RR to what it would be for the new midpoint of the same starting category under the intervention 40 (more below). Note that all calculations were done by age, sex and ethnicity, although we omit these subscripts from the following equations for clarity.

I'_x = I_x × (1 - PIF_x)    (Equation 4)

where: I_x = the current incidence of disease x in the population; I'_x = the new incidence of disease x after an intervention is implemented; and PIF_x = the population impact fraction for disease x.
A PIF 41 is derived for each risk factor disease combination. For example, for CHD there were PIFs for the association between each of fruit, vegetables, BMI, sodium, percentage of total energy from polyunsaturated fatty acids and CHD.
The PIF is calculated using the Relative Risk shift method. 40 This method changes the relative risk of the categories and keeps the proportion in each category constant. For example, if categories are formed for every 5-point increase on the continuous scale (e.g. BMI), the RR per 5-point increase is 1.5, and the intervention lowers everyone's risk factor (and therefore the category midpoints) by 1 unit, then each category's RR is recalculated for its new, 1-unit-lower midpoint (a reduction of 0.1 under linear scaling of the 0.5 RR increment per 5 units). Where a disease has more than one associated risk factor, the risk-factor-specific PIFs are combined multiplicatively:

PIF_x = 1 - (1 - PIF_1) × (1 - PIF_2) × ... × (1 - PIF_n)

where: n = the number of risk factors.
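A minimal sketch of the relative-risk-shift PIF calculation (including the TMREL handling described later in this section) is given below; the category proportions, midpoints and rates are invented, not the NZANS or GBD inputs.

```python
import numpy as np

# Category proportions stay fixed; each category's RR is recomputed at the shifted
# midpoint, and PIF = (sum p*RR_old - sum p*RR_new) / sum p*RR_old.
def pif_rr_shift(proportions, midpoints, rr_per_unit, shift, tmrel):
    proportions = np.asarray(proportions)
    old = rr_per_unit ** np.maximum(np.asarray(midpoints) - tmrel, 0.0)
    new_mid = np.asarray(midpoints) + shift          # e.g. shift = -1 BMI unit
    new = rr_per_unit ** np.maximum(new_mid - tmrel, 0.0)
    return (proportions @ old - proportions @ new) / (proportions @ old)

# BMI categories with midpoints; an RR of 1.5 per 5 units => 1.5**(1/5) per unit.
pif = pif_rr_shift(proportions=[0.10, 0.35, 0.30, 0.15, 0.07, 0.03],
                   midpoints=[18.5, 22.5, 27.5, 32.5, 37.5, 47.0],
                   rr_per_unit=1.5 ** (1 / 5), shift=-1.0, tmrel=21.0)
new_incidence = 0.004 * (1 - pif)   # Equation 4: I'_x = I_x * (1 - PIF_x)
print(pif, new_incidence)
```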
Scaling of risk factor distribution and category midpoints
For the majority of the risk factors the risk factor distributions are taken straight from the NZANS as described above, however additional scaling is done for Sodium and SSB intakes. Sodium intake data is scaled to sodium excretion data as described in Section 1.01.2. SSB intake data are scaled to approximate usual intake as described below.
SSB intake to approximate usual intake
The majority of risk factors in the DIET model are foods or nutrients that will be consumed on a daily basis. SSBs, on the other hand, are a periodically consumed food group. GBD relative risk estimates are based on SSB consumption as recorded by food frequency questionnaires, and therefore represent estimates for usual intake of SSBs. Data from a single 24-hr recall are unlikely to accurately represent usual consumption of SSBs. Firstly, a single 24-hr recall is likely to underestimate the proportion of the population that consume some SSBs. Secondly, a single 24-hr recall is likely to overestimate the amount of SSBs consumed by individuals who do have SSB consumption recorded on the day of the survey. For these reasons, we rescaled SSB intakes from 24-hr recall data in the NZANS to obtain a better estimate of usual population SSB intake.
We combined data from the overall NZANS sample with the subsample of the survey for whom two 24-hr recalls were recorded. This allowed us to calculate the probability of being a SSB consumer, and (for consumers) the probability of consuming SSBs on any given day. At the individual level, we then predicted whether an individual was a true zero consumer and, if not, we predicted a weekly frequency of SSB consumption. SSB intakes for (predicted) consumers were then scaled based on (predicted) consumption frequency to avoid overestimating SSB consumption in consumers. For example, an individual with 500ml of SSB intake recorded in the single 24-hr recall and a predicted frequency of consumption of two days per week was assigned an estimated usual SSB intake of 143ml/day (a 1000ml estimated weekly total divided by seven). Estimates of usual intake for (predicted) consumers without consumption recorded in the single 24-hr recall were based on average recorded intake values for their age, sex, and ethnic group.
We simulated individual intakes 10,000 times and averaged across the runs to obtain estimates of population distributions of SSB intake. Each simulation randomly assigned different individuals with different frequency of consumption values, and also accounted for the survey standard error around initial estimates of the probability of ever-consumption and consumption on any given day.
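A minimal sketch of the rescaling logic is given below; the probabilities and recorded intake are invented, and the real implementation draws these from the NZANS-derived estimates with their survey standard errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rescale a single 24-hr recall to approximate usual SSB intake: predict true zero
# consumers, and spread recorded intakes for consumers over a predicted weekly
# consumption frequency.
def usual_intake(recorded_ml, p_consumer, p_consume_day, group_mean_ml):
    if rng.random() > p_consumer:
        return 0.0                                   # predicted true zero consumer
    days_per_week = max(1, rng.binomial(7, p_consume_day))
    if recorded_ml > 0:
        weekly_total = recorded_ml * days_per_week   # recorded day scaled to a week
        return weekly_total / 7.0
    return group_mean_ml                             # consumer with nothing recorded

# e.g. 500 ml recorded, ~2 consumption days/week -> ~143 ml/day usual intake
sims = [usual_intake(500.0, p_consumer=0.8, p_consume_day=2 / 7, group_mean_ml=120.0)
        for _ in range(10_000)]
print(np.mean(sims))
```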
Theoretical Minimum Risk Exposure Level (TMREL)
In the Comparative Risk Assessment (CRA) approach, attributable burden is calculated in reference to a counterfactual risk exposure. In this modelling the counterfactual used is the Theoretical Minimum Risk Exposure Level (TMREL). The TMREL is a theoretically possible level of intake that minimizes overall risk. This allows us to quantify how much of the disease burden could be lowered by shifting the risk factor distribution to a 'theoretically possible' level associated with the greatest improvement in population health. 1 As the evidence for the TMREL is uncertain for the risk factors modelled, a range or uncertainty interval about the TMREL is used rather than just a central estimate. For risk factors where lower BMI or intake decreases disease incidence (BMI, SSBs and sodium), there is no effect in categories whose midpoints are already lower than the TMREL. For risk factors where higher intake decreases disease incidence (fruits, vegetables and polyunsaturated fatty acids) the method works in reverse: there is no effect in categories whose midpoints are higher than the TMREL, i.e. people are already receiving maximum benefit from their high consumption.
(ix) Modelling decay or attenuation of effect
Many interventions, such as dietary counselling, have attenuating effects. For example, a particular dietary counselling regime may change population average BMI by 0.1 unit initially, but over the next 'x' years the population tends to regain weight back to their BAU levels. The length and shape (e.g. linear or exponential to return back to BAU) of this decay is informed by evidence relevant to the specific interventions modelled e.g. Dasinger et al. (2007) 42 , and specified in the model.
(x) Time lags
Changing diet does not usually rapidly change disease incidence; it takes time for disease incidence to change to a 'new equilibrium'. Evidence on time lags, and the shape of the change in disease incidence following dietary change, is very limited. Some simulation studies circumvent this by assuming the change in disease incidence is immediate. However, this will (grossly) over-estimate the effect of a dietary intervention on cancer incidence (where time lags are likely to be decades), and moderately over-estimate changes in cardiovascular disease (where time lags might be months to years). This issue of time lags is compounded by discounting (i.e. little net benefit might be seen with a cancer-preventing diet where a high discount rate is used in the model).
The approach we used was to look back at the average (1-PIF), reflecting the average change in risk factor over a past window of exposure. For example, the relevant period of exposure to increased fruit consumption for current CHD incidence may be the previous 5 years; thus, we use the average (1-PIF) over the last five years. For cancers, it might take at least 10 years for any (notable) change in disease incidence to occur, and any benefit on disease incidence might last up to 30 years; therefore, we would use the average (1-PIF) for 10 to 30 years ago. There is considerable uncertainty in these time lags. Therefore, we:
- specify the minimum and maximum time lags (e.g. 10 and 30 years for cancers);
- additionally make these parameters uncertain themselves (e.g. a 20% SD normal distribution about the minimum and maximum); and
- calculate the average (1-PIF) within this look-back time lag range (a sketch of this calculation follows below).
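A minimal sketch of this look-back averaging is given below; the PIF series, lag bounds and incidence rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

years = np.arange(0, 60)
pif_series = np.where(years >= 1, 0.05, 0.0)   # intervention changes the PIF from year 1

def lagged_one_minus_pif(t, pif_series, lag_min=10, lag_max=30):
    # Draw uncertain window bounds (20% SD normal about the minimum and maximum lags).
    lo = max(0, int(round(rng.normal(lag_min, 0.2 * lag_min))))
    hi = max(lo + 1, int(round(rng.normal(lag_max, 0.2 * lag_max))))
    window = pif_series[max(0, t - hi): max(1, t - lo + 1)]
    if window.size == 0:
        return 1.0                              # no exposure history yet
    return np.mean(1.0 - window)

# Incidence applied in year 35 for a cancer with a 10-30 year lag window:
print(0.004 * lagged_one_minus_pif(35, pif_series))
```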
We will include these parameters in actual publications, but in principle the following parameters (by disease) are the 'default'.
Diabetes: both a disease and a risk factor
The MSLT has key independence assumptions, including:
1. Risk factor distributions: the distribution of each risk factor can be treated as though independent of other risk factors.
2. Disease incidence rates: the incidence rate for a given disease (e.g. CHD) is independent of other diseases (e.g. the presence of diabetes).
3. Disease case-fatality and remission rates: the rates for a given disease (e.g. CHD) are independent of those for other diseases (e.g. diabetes).
The second assumption is the focus here, for diabetes. Diabetes is associated with increased rates of coronary heart disease and stroke (and some cancers), be it through shared common causes (i.e. confounding) or cause and effect (the concern here). Whether to address such 'dependency' depends on what one is doing with the model, and through what risk factors. For the BODE 3 DIET MSLT model, interventions that change BMI and thence disease incidence are important. Figure 6 (page 39) gives the standard structure. BMI is independently associated with each of CHD and Diabetes Mellitus (DM), and the change (∆) in the BMI distribution, combined with the relative risks for the BMI-CHD and BMI-DM associations to give a PIF, results in a change in both disease incidence rates. The resulting changes in mortality, morbidity and cost rates are then 'added' to the overall mortality, morbidity and cost rates in the main life-table.
Figure 6: Standard structure in MSLT for BMI as risk factors and CHD and DM states
A modelled intervention that lowers BMI may overestimate the QALYs gained if the reductions in diabetes and coronary heart disease 'double-count' the gains when considered independently. But if only the 'pure diabetes' mortality rate (e.g. based on deaths coded as DM) is estimated in the DM state, and the higher-than-average population mortality rate otherwise (e.g. due to people with DM having higher CHD and stroke mortality) is not allowed for, the prevalence of DM will drift too high over time, as the total mortality rate modelled for diabetics is not high enough. This over-estimation, in turn, may lead to an overestimate of morbidity gains due to a BMI-lowering intervention (and likewise an overestimate of cost savings, as costs are a function of prevalence). One solution to this dependency problem is a microsimulation model, where each individual's other disease status is 'known'. For the BODE 3 DIET MSLT model, the partial solution we use is to restructure and re-parameterise the model: a PIF links DM prevalence to CHD and stroke incidence, and the excess rate of other deaths due to DM will (partly at least) be implicitly captured through the changes in (say) BMI to cancer that include some unquantified pathway through diabetes.
o Given the uncertainty above in the default model, as a sensitivity analysis we model the excess mortality among people with DM from having diabetes, excluding CHD and stroke mortality (as that is quantified in, and outputted from, the CHD and stroke states), instead of the DM-only case-fatality rate above. This will probably overestimate the mortality due to DM, but does give an upper limit.
But to 'allow' for the higher mortality rate among diabetics, a 'total excess' mortality rate (mort[all-cause|DM] - mort[all-cause], where the former is the all-cause mortality rate among diabetics, and the latter is the all-cause mortality rate in the general population without DM) is applied within the DM state as an absorbing state. This mortality is only used to 'kill people off' in the model to allow for dependent mortality risk; without this higher mortality rate taking people out of the alive DM population, the prevalence would drift too high (impacting on costs and morbidity). The above structure (Figure 7 and Figure 8, pages 41 and 42) and parametrization are an improvement for a disease like DM; however, the approach is not perfect.
The parameterisation of this modification to the MSLT requires recalculation of baseline or BAU parameters, and intervention parameters. Rather than present it here (before such parameterisation has been described for the main model), we give a full description of how the above model alteration was specified in Appendix D: Parameterisation of 'DM as both a risk factor and disease' (page 89).
Background population inputs
The following population parameters were included: 1) population size; 2) total prevalence years lived with disability (pYLDs); and 3) total mortality rates, all by 5-year age groups for each sex and ethnicity. Population counts were compiled using Statistics New Zealand 2011 estimates. Total pYLDs were calculated using the total (corrected for multiple morbidity) YLDs for all diseases in the NZBDS divided by the total population in New Zealand for each age, sex and ethnicity group. Population mortality rates were calculated using data from the Statistics New Zealand life tables for 2010-2012. Annual reductions in background population mortality were assumed to be 1.75% for non-Māori and 2.25% for Māori out to 2026 44 , then held constant.
Data sources, processing, DISMOD, and inputs to BODE 3 DIET MSLT model
The basic steps for generating disease inputs for the BODE 3 DIET MSLT model were: 1) data compilation; 2) preliminary processing of the data; and 3) DISMOD II estimation of epidemiologic parameters 45 .
Step 1: Data for these diseases were compiled from various sources (see Table 9).
Step 2: Some parameters were further processed to give 'best' (pre-DISMOD) estimates for 2011. For example, data on prevalence for less common diseases were compiled and then regression-smoothed prior to inputting into DISMOD II. Readers can refer to Appendix C: DISMOD II example for lung cancer (page 86) for a step by step description of data compilation and processing in DISMOD II for one example, lung cancer. (Similar documentation for all other diseases is available from the authors on request.) All parameters were generated by 5-year age groups by sex and ethnicity (Māori/Non-Māori), except breast, ovarian and endometrial cancers which were only compiled for women.
Step 3: These parameters were then inputted to DISMOD II, separately by sex and ethnicity, to generate a mathematically and 'epidemiologically consistent' set of parameters. For example, if the prevalence estimate was too low given what is known about incidence and case-fatality from the disease (and background 'competing' mortality), DISMOD II outputs values that are epidemiologically/mathematically consistent, allowing the user to 'weight' the inputs. For cancers, full weighting (setting at "100%") was given to incidence, as it was the most reliable parameter (due to New Zealand Cancer Registry data). Typically, mortality was also given full weighting and prevalence was given a 50% weighting (for disease-specific weighting information, README files for the disease of interest are available on request from the authors; for lung cancer (only), see Appendix C: DISMOD II example for lung cancer, page 86). For DM, stroke and CHD, we additionally included time trends in the incidence and case-fatality inputs to DISMOD II, given the strong time trends in these diseases. The DISMOD output rates (in one-year age groups) for incidence, prevalence, case-fatality and remission were then used to populate the BODE 3 DIET MSLT model for all diseases except CHD, stroke, type 2 diabetes and osteoarthritis. For CHD, stroke, type 2 diabetes and osteoarthritis, only incidence, prevalence, and case-fatality were used (i.e. remission was assumed to be zero as these are usually life-long conditions). For specific details on final parameters for each disease, see Table 10 below.
Generating DRs by dividing pYLDs by prevalent cases for each 5-year age group, for each disease, for each sex by ethnicity, was often too unstable due to sparse data. We therefore aggregated age groupings to ensure the sum of prevalent cases exceeded 10 (e.g. 0-44 year olds were always combined; for common diseases such as CHD and stroke, age groupings were 0-44, 45-54, 55-64, 65-74, 75-84 and 85+ years; for rare diseases such as pancreatic cancer in Māori males, all age groups were combined).

Table 10: Data sources for final disease parameters (incidence | prevalence | case-fatality | remission | morbidity/DR)
CHD: DISMOD II | DISMOD II | DISMOD II | - | DISMOD II & NZBDS
Stroke: DISMOD II | DISMOD II | DISMOD II | - | DISMOD II & NZBDS
Type 2 diabetes: DISMOD II | DISMOD II | DISMOD II | - | DISMOD II & NZBDS
Osteoarthritis: DISMOD II | DISMOD II | DISMOD II | - | GBD DW
Breast cancer: DISMOD II | DISMOD II | DISMOD II | DISMOD II | DISMOD II & NZBDS
Colorectal cancer: DISMOD II | DISMOD II | DISMOD II | DISMOD II | DISMOD II & NZBDS
Endometrial cancer: DISMOD II | DISMOD II | DISMOD II | DISMOD II | DISMOD II & NZBDS
Gallbladder cancer: DISMOD II | DISMOD II | DISMOD II | DISMOD II | DISMOD II & NZBDS
Head & neck cancer: DISMOD II | DISMOD II | DISMOD II | DISMOD II | DISMOD II & NZBDS
Kidney cancer: DISMOD II | DISMOD II | DISMOD II | DISMOD II | DISMOD II & NZBDS
Liver cancer: DISMOD II | DISMOD II | DISMOD II | DISMOD II | DISMOD II &
Final processing of incidence and prevalence estimates
In an effort to more accurately reflect the disease epidemiology of the New Zealand population, some diseases' incidence and prevalence rates were forced to zero at young ages as a final step in processing. Specifically, the incidence and prevalence rates for all cancers were set to 0 for those aged 20 years and younger, for CHD they were set to 0 for those aged 18 years and younger, and for stroke they were set to 0 for those aged 24 years and younger.
Future disease trends (incidence, remission and case-fatality)
The above parameterisation was for 2011 only. Some key parameters are known to have increasing or decreasing trends in recent decades -and are likely to have such trends in the near-future. Thus, we also specified future disease incidence and case-fatality as percentage annual change from 2011 to 2026. For CHD and stroke, we relied on NZBDS projections for annual changes in incidence and mortality (see Table 8 in a Report 47 ). Specifically, we incorporated an annual incidence change of -2% and an annual case-fatality trend of -2% for CHD and stroke.
For cancer trends, we relied on our previous modelling of future cancer incidence. 46 Uncertainty around the incidence, case-fatality and remission disease trends were included in the model for all diseases of 1 percentage point SD about the annual percentage change. This uncertainty draw is independent for each epidemiological parameter (i.e. incidence, case-fatality and remission) by disease, but correlated r=1.0 across each of the four sex by ethnic groupings and all diseases.
Disease health system cost inputs
Just as the proportions of the cohort 'alive' in the overall and disease-specific processes are rewarded with additional QALYs for each annual cycle they live, so too can health system costs be 'attached'. In the BODE 3 DIET MSLT model, there are five types of health system cost, attached to the main life-table and to the disease states. Of note, health system costs will be updated in the future (as more years of data are accrued, and with 'improvements' to scale costs to more accurately reflect VOTE: Health); productivity costs (human capital approach) will be added in future models.
Section 3.03. Calibration
Calibration has been described as ensuring that "inputs and outputs are consistent with available data". 48,49 To a large extent, the BODE 3 DIET MSLT model is self-calibrating on inputs; the model uses total New Zealand population data for 2011, with some modification (usually slight) with DISMOD II to ensure epidemiological coherence.
As an additional calibration check, we compared the following rates for CHD, stroke and diabetes, in Figure 9 to Figure 14 (pages 49 to 54):
- MSLT model input incidence, case-fatality and prevalence rates - which are actually outputs from DISMOD II.
- DISMOD mortality rates. These are neither inputs nor outputs for the MSLT, but are one of the rates used in DISMOD to develop the coherent set of epidemiological parameters - most notably the case-fatality input rate.
- MSLT model output prevalence and mortality rates. These differ from the DISMOD mortality and prevalence rates, as they are determined dynamically within the model as the cohorts (aged 2, 42 or 72 in 2011) age within the model.
The model check is that we expect the output prevalence and mortality rates to differ somewhat - but not too much - from the input prevalence and mortality rates, given what we know about epidemiological trends and transitions. In brief, they appear to, and thus provide a form of calibration 'check' on the model.
In more detail, consider first the CHD rates in Figure 9 (page 49) and Figure 10 (page 50). The DISMOD and output mortality rates are virtually indistinguishable as the cohorts age. The output prevalence, however, is a bit lower. But this is coherent. The inputs are the rates in 2011. As CHD incidence is falling so rapidly, the prevalence as recorded in 2011 is higher than what it would have been if incidence had not been falling in the past. Put another way, for these graphs where 2011 rates are used as inputs with no future time trends, the prevalence rate is at 'equilibrium' for these inputs, whereas the prevalence as recorded in 2011 is not at equilibrium. Thus, we conclude the CHD rates are plausible and coherent.
Stroke rates are shown in Figure 11 (page 51) and Figure 12 (page 52). There is closer agreement than with CHD.
Diabetes rates are shown in Figure 13 (page 53) and Figure 14 (page 54). Here the pattern is the reverse of that for CHD, which is plausible and coherent: diabetes incidence rates have been increasing (and case-fatality rates decreasing), such that the observed prevalence rates by age in 2011 are less than the 'equilibrium' prevalence rates over time into the future outputted by the model.
As at younger ages it is a bit difficult to see differences in rates on an absolute scale, Appendix G: Model rates vs. DISMOD rates, log graphs from Section 3.04.5 (page 119) gives replicates of these calibration graphs with rates on a log scale. It is only with diabetes that a difference in prevalence rates between input and output series remains evident.
Section 3.04. Validation
The BODE 3 DIET MSLT model is a multi-application model, for studying preventive interventions. We attempted some validation given this broad remit. However, it was and is impossible to fully validate the model, both due to resource limitations and an absence of 'gold standard' data for many interventions (e.g. no randomized trials of saturated fat taxes through to disease incidence outcomes exist). Validation of the BODE 3 DIET MSLT model will continue alongside producing results (e.g. comparisons with overseas models), and new data will be forthcoming (e.g. disease incidence trends, intervention effect sizes). Thus, future improvements to the model are likely.
We organize this section using the headings from an International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Good Practices in Modelling Task Force consensus paper: 48 face validity, verification (or internal validity), cross validity, external validity, and predictive validity.
"Face validity is the extent to which a model, its assumptions, and applications correspond to current science and evidence, as judged by people who have expertise in the problem." 48
The BODE 3 DIET MSLT model follows the form and structure of a MSLT, and more specifically the Assessing Cost Effectiveness (ACE) Prevention models 37,50-58 (including dietary and Physical Activity models) and the BODE 3 Tobacco model 59,60. These models have been peer reviewed many times, lending face validity. Some limitations relevant to face validity are:
- Other known risk factors (e.g. nuts and seeds) will likely be included in the future.
- As a macrosimulation model, it is difficult to allow for correlated risk factor distributions. That is, the BMI distribution is assumed independent of the fruit and vegetable intake distribution across the population. Thus, the BODE 3 DIET MSLT model - unless modified or adapted - will be limited in answering questions around targeting of populations with multiple poor risk factors.
- The model also treats diseases as independent; no correlations in disease incidence for, say, CHD and lung cancer are allowed for. (Importantly, though, we do treat diabetes as both a risk factor and disease, allowing for the dependency of diabetes with CHD and stroke.)
We have not formally subjected the BODE 3 DIET MSLT model to external face validity review prior to submission of publications for scientific peer review.
Verification (or Internal Validity) "Verification addresses whether the model's parts behave as intended and the model has been implemented correctly." 48
A regular process of verification was and is used in modifying and extending the BODE 3 DIET MSLT model, namely:
- Once model development was complete, checked and signed off by one of the Programme Directors, the following procedure was followed: all model changes are undertaken by one team member, checked and signed off by a second team member, and signed off by one of the Programme Directors. This process accords with an Accountability for Quality Assurance process outlined by the UK Department of Energy and Climate Change in their guidance for quality assurance of Excel-based models 61, and is documented more fully in a BODE 3 quality assurance protocol (forthcoming). All model builds and extensions are 'logged' in a 'readme' tab in the model.
- The following checks are implemented for a model version to be signed off:
  o a second team member - independently - randomly checking formulas and links in models
  o a second team member - independently - working through each process from beginning to end (e.g. risk factor A distribution, merged with risk factor A relative risks, to population impact fractions and their connection with disease incidence, then all-cause mortality, etc.).
- A series of sensitivity analyses is undertaken to logic (stress) test the model. This covers both extreme values and a likely range of values to check how the model responds. For example, trends in disease incidence rates are turned off and compared against expectation; the results of this checking are signed off by a Programme Director. For stress testing, selected input parameters are changed to extreme values (e.g. turning disease incidences to zero, one by one) to ensure changes in model outputs are consistent with expectation.
The above and other BODE 3 quality assurance processes are documented more fully elsewhere, 62 as well as specifically to the BODE 3 DIET MSLT model in its Readme tab.
"Cross-validation involves comparing a model with others and determining the extent to which they calculate similar results." 48
Model comparisons within the BODE 3 Programme have occurred, and are proposed with other international groups.
Within the BODE 3 programme, identical dietary salt reduction interventions were run through an early iteration of the BODE 3 DIET MSLT model and a CVD model built in TreeAge that had previously been developed by BODE 3. 63,64 When an intervention of a decrease in sodium of 22.8mmol/day was run through both models, the overall QALYs gained were 110,000 in the TreeAge model and 103,000 in the DIET MSLT model (3% discounting). As there are a number of differences between the models, generating results within 20% of each other was regarded as satisfactory; the difference seen was closer to 5%. From our investigations it seems that the differences between the two models were due to a combination of different baseline incidence rates, baseline case-fatality rates and differing disability rates/weights. Model structure, definitions of stroke and effect size calculations do not appear to contribute very much to the differences seen.
Model comparisons are also underway with the Nuffield Department of Population Health, Oxford University (Adam Briggs, Peter Scarborough and colleagues), who are working on similar types of models with similar food tax and subsidy interventions (e.g. [65][66][67] ). Proposed model comparisons include 'stripping back' to the same population demography and epidemiology to allow a head-to-head comparison of any differences in model structure, then sequential addition of varying population epidemiology (e.g. disease incidence rates, case-fatality and trends), and population demography (e.g. varying age structures).
External Validity
"In external validation, a model is used to simulate a real scenario, such as a clinical trial, and the predicted outcomes are compared with the real world ones." 48 Randomized trials through to disease incidence for the interventions proposed to be modelled with the BODE 3 DIET MSLT model are rare. We will consider the relevance of one of these for such validation work: a major sodium reduction trial on health outcomes, 68 but we note this might not prove to be informative given the decline in CVD incidence over the 20 years of this trial.
Meta-analyses of trials (where available) are used for parameterizing intervention effect sizes in the model (e.g. association of mHealth on weight loss 69 ).
'Natural experiments' -as they accrue (e.g. Danish food taxes 67,70 and Mexican SSB taxes 71 ) -will also provide comparison points.
"Predictive validity involves using a model to forecast events and, after sometime, comparing the forecasted outcomes with the actual ones." 48
It was not possible to compare forecast incidence and mortality rates in New Zealand for the various interventions with model forecasts, as none of the interventions have been applied. However, it will be possible in due course to compare BAU trends in disease incidence from the 2011 base-year onwards with what actually occurs.
Section 3.05. Model: Analysis
For each intervention, the model is run 2000 times using Monte Carlo simulation. Probabilistic uncertainty is included for intervention effect sizes (e.g. price elasticities, relative risks for the association between diet and disease incidence), intervention costs (e.g. cost of a new tax law) and selected baseline parameters (i.e. health system costs were assumed to have a gamma distribution with a SD of +/-10%).
We included uncertainty in the annual percentage changes in selected disease incidence trends (see above) for the diseases that made the largest contribution to the QALYs gained (and hence also cost savings).
Uncertainty around the starting estimates of incidence and case-fatality has been included in the model. Year 2011 starting estimates have been assigned a log-normal distribution with an SD of +/-5%, with a random draw in each iteration separately for incidence and case-fatality and for each sex by ethnic group, but applied uniformly across ages (i.e. independent uncertainty between sex by ethnic groups, but 100% correlated uncertainty across ages within each sex by ethnic group).
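A minimal sketch of these correlated draws is shown below; the group labels and baseline incidence schedule are invented, and the real model implements this in Excel with Ersatz.

```python
import numpy as np

rng = np.random.default_rng(3)

# One log-normal multiplier (~5% SD, mean ~1) per parameter per sex-by-ethnic
# group, applied uniformly across ages so that uncertainty is fully correlated
# over age within a group.
sigma = np.sqrt(np.log(1 + 0.05 ** 2))

groups = ["Maori male", "Maori female", "non-Maori male", "non-Maori female"]
ages = np.arange(0, 111)
baseline_incidence = {g: 0.001 + 0.00005 * ages for g in groups}   # illustrative

def draw_incidence():
    drawn = {}
    for g in groups:
        multiplier = rng.lognormal(mean=-0.5 * sigma ** 2, sigma=sigma)  # mean ~1
        drawn[g] = baseline_incidence[g] * multiplier   # same multiplier at every age
    return drawn

one_iteration = draw_incidence()
print({g: v[50] for g, v in one_iteration.items()})     # incidence at age 50
```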
All modelling is undertaken in Microsoft Excel®, using the add-in tool Ersatz (EpiGear, Version 1.3) for uncertainty analysis with R-software 'add-ons' for batch processing and output collation.
Section 3.06. A note on interchangeable use of 'DALYs averted' and 'QALYS gained'
Previous BODE 3 modelling 50 termed health gain 'DALYs averted'. We use the terms 'DALYs averted' and 'QALYs gained' interchangeably, for two reasons. First, the QALYs gained (or DALYs averted) in the MSLT modelling are not the same as DALYs calculated in a BDS. In the BDS they are (usually) calculated in one cross-sectional year, as a shortfall against an ideal standard (e.g. the best sex-specific life-table mortality rates in the world). In BODE 3 (and other related MSLT modelling, e.g. ACE-Prevention 50 ) the QALYs gained are the difference between the starting population's expectation of the remainder of their lives under BAU, and that under the intervention scenario. Second, the morbidity weights or 'health status valuations' (HSVs) are based on pairwise comparisons conducted for the GBD 36 , and as such are one variant of the HSVs used in routine economic evaluations and QALY estimation. 72 These morbidity weights - given their derivation for the GBD - are called disability weights.
The disability weighting (DW) assigned (in this case DRs, which in turn stem from the DWs applied in the BDS itself) is just one variant of health status valuation (HSV); QALYs use a variety of HSVs (e.g. those from the EQ5D, etc.). Furthermore, DALYs in the BDS use an external or reference life-table (to generate a health gap or loss measure); in this multi-state life-table, the DALYs averted are at the incremental margin for the 2011 New Zealand population, the same concept and method as used for QALYs. The only conceptual difference between the QALYs we calculate and the various QALYs presented in much other research is the HSV metric. In other cost-utility analyses the source of HSV is likely to vary between studies (arguably to fit the population's preference, but more usually due to the pragmatics of different questionnaires used), whereas our QALYs are derived from one very large and coherent set of disability weights calculated in the GBD 2010 from multi-country surveys. 36 We do not claim that the HSV in our QALY is 'better' than that used in other QALY estimates - there is genuine uncertainty in all HSVs.
The QALY metric captures health gain (assuming the intervention is beneficial) that arises from a mix of change in years of life and quality of each year of life. Usually a gain in QALYs (in prevention interventions at least) is due to a gain in life years lived (with or without change in quality of life). Note, however, that it is possible to achieve QALY gains with a reduction in life years lived (but very good improvements in quality of life), or with an increase in life years gained that is greater than any 'penalty' from living in lower quality of life.
Step 2: Processing in DISMOD II software. Below are examples of the weighting schemes used for lung cancer parameters (Figure 16, page 90).
Figure 16: Example of parameter weighting in DISMOD II
Note that there was sometimes considerable instability in the case-fatality rates at younger ages. This is a function of sparse data, and the case-fatality rate needing to 'move' to reconcile with the incidence and mortality inputs (and, to a lesser extent, prevalence). Once inputted to the BODE 3 DIET MSLT model, it does however balance out to ensure a target mortality rate (which largely drives the health loss/gain). It must be noted that the Virtual Diabetes Register (VDR) is getting 'better' or more comprehensive over time, meaning that if it is used to generate year-on-year incidence rates they will be spuriously high. (This may become less of a problem in future years once data systems and case definitions equilibrate.) However, the VDR should be more accurate for prevalence, and if prevalent cases are also used to generate morbidity and costings, and mortality rates among this pool of prevalent cases, then there is coherence for these parameters excluding incidence.
(i) Incidence and prevalence rates
In principle:
- The DM prevalence is just that observed on 31 December 2011.
- The DM incidence is the new cases observed each year. But, as noted above, we expect it will be spuriously high using VDR data up to 2014 at least. The decision was therefore made to ignore incidence in DISMOD.
Regression on the VDR linked with mortality and core population files was used to estimate annual prevalence (logistic model; main effects of sex, age (categorical in five-year age groups) and ethnicity; and interactions of main effects), using the predicted values for 2011.
Due to the artificially high estimates of incidence this parameter was ignored in DISMOD; details are provided below.
(ii) Mortality rates
Diabetes is a difficult disease to model because it is itself a risk factor for other diseases, and therefore has mortality rates that are dependent on other diseases (e.g. it is no longer viable to assume independence of disease incidence and mortality when considering DM and CHD). It is important to keep in mind the BODE 3 DIET MSLT that is being parameterised, and its model structure. Namely:
- DM is treated as a disease state just as any of the other states are (e.g. CHD, stroke, lung cancer). However, it is also a risk factor in and of itself for CHD and stroke, meaning that changes in DM prevalence are linked through PIFs to changes in CHD and stroke incidence.
- A diagnosis of DM causes a non-ignorable increase in mortality for deaths coded with other than DM as the underlying cause of death. Some of this is causally due to DM, but some of it is due to confounding or correlated common causes (e.g. BMI as a risk factor for both DM and a range of cancer deaths, stroke and CHD).
o For the purposes of the DIET MSLT, we classify CHD and stroke as causally related. We assume this is captured by the above link of changing DM prevalence to changing CHD and stroke incidence (through a PIF) that then flows onto change in mortality from CHD and stroke, per se.
o DM-coded deaths -by definition -are causally due to DM. Changes in such DM-specific or DM-coded mortality in the DM state (due to changes in disease incidence from a given intervention) link to the main life-table, capturing mortality rate gains from interventions lowering DM incidence.
o The non-causally related deaths (i.e. non-CHD, non-stroke and non-DM-coded, or simply 'other') are not captured as an effect of the intervention, and therefore do not link through to the main lifetable. However, they still matter as far as determining the prevalence is concerned. That is, if we do not allow for higher 'other' competing mortality among diabetics, the future simulated prevalence will be too high, leading to overestimated morbidity and health system cost impacts of interventions.
To satisfy all these requirements, the MSLT needs the following mortality rates:
(iv) DISMOD
For any given disease, the following parameters are mathematically related: incidence, duration, prevalence and case fatality. Therefore, if (some of) these parameters are estimated independently, the estimates may not be mathematically coherent as a system. In our example, the VDR case definition of who was a diabetic may have some (differential over time) misclassification bias, meaning that the incidence rates are (somewhat) biased.
DISMOD II is an epidemiological tool 45 that takes in sets of these parameters, and outputs a coherent set of the same input parameters (plus those from the above list for which input data was missing).
The input and output estimates should -of course -be close, acting as a check.
Treating diabetes as the disease of interest, we inputted the following parameters (for 2011):
- Prevalence (see above).
- Remission rate set at zero (i.e. the assumption that once you have diabetes, you have diabetes forever).
- Case fatality (see above), or excess DM mortality relative to that in the general population: Mx[all-cause | DM] − Mx[all-cause | no DM] = excess all-cause mortality rate among diabetics.
- Population mortality rate due to DM (see above).
- And, as is required, the all-cause mortality rate in the general population.
This case fatality generated the mortality difference (between the BAU and intervention) in the DM state that was then 'added to' the all-cause mortality rate in the main lifetable. CHD and stroke deaths were excluded from the mortality rate linked to the main lifetable from the DM state, as this mortality was captured in the CHD and stroke disease processes (with the DM state acting as a risk factor to change incidence inflow to the CHD and stroke states).
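As a purely illustrative sketch (the prevalence, rate and rate-ratio values below are hypothetical, not BODE3 inputs), the excess all-cause mortality rate among diabetics can be backed out from the population all-cause mortality rate, the DM prevalence, and an assumed all-cause mortality rate ratio for diabetics versus non-diabetics:

```python
# Hypothetical numbers, for illustration only.
prevalence = 0.07      # proportion of the population with diagnosed DM
mx_pop = 0.008         # all-cause mortality rate, whole population
rate_ratio = 1.8       # assumed all-cause mortality rate ratio, DM vs no DM

# mx_pop = p * RR * mx_nondm + (1 - p) * mx_nondm
mx_nondm = mx_pop / (1 + prevalence * (rate_ratio - 1))
mx_dm = rate_ratio * mx_nondm

excess = mx_dm - mx_nondm  # Mx[all-cause | DM] - Mx[all-cause | no DM]
print(f"Excess all-cause mortality among diabetics: {excess:.4f}")
```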
Preventing double-counting of BMI effects on DM and on CHD and stroke
BMI is a risk factor for diabetes, CHD and stroke. However, diabetes is itself also a risk factor for CHD and stroke. This is illustrated for CHD in Figure 17 (page 96).
Although diseases in the BODE 3 DIET MSLT model are assumed to be independent, we added a link between changing diabetes prevalence and CHD and stroke incidence, using relative risks from systematic reviews of cohort studies by Peters et al 75,76 to quantify the increased risk of CHD and stroke in diabetics (Pathway B in Figure 17 (The relationship between BMI, diabetes and CHD), page 94). To then prevent double-counting of CHD and stroke effects, we determined the relative risks of CHD and stroke associated with changes in BMI that would not be mediated by diabetes (Pathway A in Figure 17). Since there are no published estimates of these RRs, we derived them using the GRG nonlinear method of optimisation in Excel, assuming that the fractions of the disease attributable to BMI directly (Pathway A in Figure 17) and indirectly via diabetes (Pathway B in Figure 17) must sum to the total attributable fraction (Pathway C in Figure 17).
*The RRs used here were an average of the RRs in the GBD for cancers of the larynx, nasopharynx and other pharynx and mouth.
*RRs were the same in the GBD paper published in 2016. 77
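A minimal numerical sketch of that constraint is given below, assuming a simple dichotomous-exposure attributable-fraction formula and hypothetical prevalence and relative-risk values; the real derivation was done in Excel with GRG nonlinear optimisation across age and sex strata, so this is only an illustration of the underlying idea.

```python
# Illustrative only: solve for the 'direct' (non-diabetes-mediated) RR of CHD for
# high BMI such that PAF_direct + PAF_indirect = PAF_total. Numbers are hypothetical.
from scipy.optimize import brentq

def paf(prevalence, rr):
    """Population attributable fraction for a dichotomous exposure."""
    x = prevalence * (rr - 1.0)
    return x / (1.0 + x)

p_bmi = 0.30         # hypothetical prevalence of high BMI
rr_total = 1.60      # hypothetical total RR of CHD for high BMI (Pathway C)
paf_indirect = 0.04  # hypothetical fraction attributable via diabetes (Pathway B)

target = paf(p_bmi, rr_total) - paf_indirect  # required direct PAF (Pathway A)

# Find the direct RR whose attributable fraction equals the target
rr_direct = brentq(lambda rr: paf(p_bmi, rr) - target, 1.0, rr_total)
print(f"Direct (non-DM-mediated) RR of CHD for high BMI: {rr_direct:.3f}")
```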
Investigation of the process energy demand in polymer extrusion: a brief review and an experimental study
Extrusion is one of the fundamental production methods in the polymer processing industry and is used in the production of a large number of commodities in a diverse industrial sector. Being an energy intensive production method, process energy efficiency is one of the major concerns and the selection of the most energy efficient processing conditions is a key to reducing operating costs. Usually, extruders consume energy through the drive motor, barrel heaters, cooling fans, cooling water pumps, gear pumps, etc. Typically the drive motor is the largest energy consuming device in an extruder while barrel/die heaters are responsible for the second largest energy demand. This study is focused on investigating the total energy demand of an extrusion plant under various processing conditions while identifying ways to optimise the energy efficiency. Initially, a review was carried out on the monitoring and modelling of the energy consumption in polymer extrusion. Also, the power factor, energy demand and losses of a typical extrusion plant were discussed in detail. The mass throughput, total energy consumption and power factor of an extruder were experimentally observed over different processing conditions and the total extruder energy demand was modelled empirically and also using a commercially available extrusion simulation software. The experimental results show that extruder energy demand is heavily coupled between the machine, material and process parameters. The total power predicted by the simulation software exhibits a lagging offset compared with the experimental measurements. Empirical models are in good agreement with the experimental measurements and hence these can be used in studying process energy behaviour in detail and to identify ways to optimise the process energy efficiency. 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http:// creativecommons.org/licenses/by/3.0/).
Introduction
Polymers are among the most important materials available today. Many conventional raw materials such as steel, glass and wood are being replaced by various types of polymeric materials or polymer composites which perform the same function while offering a number of advantages including low density and ability to form readily into complex shapes. As a result, the demand for polymeric materials has shown a rapid increase over the last few decades. Records show that the world total plastic production in the years 1950, 1976, 1989, 2002 and 2010 was 1.3, 50, 100, 200 and 304 million tonnes, respectively [1]. Moreover, world plastics production in volume surpassed that of steel in 1981 and the gap has been continuously increasing since then [2]. The demand for polymeric materials is forecast to further increase.
Polymer extrusion
Various types of polymer processing extruders are currently in use in industry including single/multi screw extruders and disk/drum extruders. Of these extruders, single screw continuous extruders are the most commonly used [3]. The screw is the key component of an extrusion machine and can be divided into three main functional/geometrical zones (i.e. feed or solids conveying, compression or melting, and metering or melt conveying) in the case of simple, single flighted screw geometries. The feedstock material fed into the machine through a hopper is conveyed along the screw while absorbing heat provided by the barrel heaters and through process mechanical work. Eventually, a molten flow of material is forced into the die which forms the material into the desired shape. More details on the process operation and mechanisms of polymer extrusion can be found in the literature [4,5].
Being a fundamental method of processing polymeric materials, extrusion is used in the production of commodities in diverse sectors such as packaging; household; automotive; aerospace; marine; construction; electrical and electronic; and medical applications. Usually, polymer processes use energy carriers in two major ways: as raw materials (petrochemicals) and for processing. Typically, extrusion is an energy intensive production method and it is well known that these processes often operate at poor energy efficiencies [3,6-8]. Although process energy efficiency is good at higher processing speeds, it is difficult to run at these conditions as thermal fluctuations increase with increasing screw speed, resulting in very poor melt quality. Details on the typical melt thermal variability with increasing screw speed were discussed by the authors previously [5,9-14]. Therefore, the majority of extrusion processes are operating at conservative rates to control or avoid problematic thermal fluctuations and this leads to poor energy efficiency. Since global energy prices are increasing rapidly, plastics based manufacturing companies are highly concerned about the energy efficiency of their production plants in order to maximise profit margins. A major current concern in the industry is therefore to determine how to optimise energy and thermal efficiencies simultaneously while achieving the required process output rate and melt quality. The aims of this work are therefore to explore process efficiency using a highly instrumented single screw extruder with commercial grades of polymers. It is then expected to develop models to predict the process energy consumption which can be useful in optimising the process energy demand. Initial results are presented in this paper.
Extruder energy demand and possible energy losses
Usually, extruders are supplied with electrical energy for their operation and this energy is converted into mechanical or thermal energy. Process energy losses occur in the various stages of the operation mainly as electrical, mechanical or thermal losses. A typical energy flow diagram for an extruder is shown in Fig. 1 (not drawn to scale). Usually, the drive motor is the component which consumes the highest portion of the supplied energy to an extruder. Currently, most extruders are driven by alternating current (AC) or direct current (DC) motors. In a typical AC motor, energy losses usually occur as electrical (or copper), core, mechanical and stray losses. In addition to these four types of losses, brush loss also occurs in DC motors which use brushes for supplying the power [15]. Usually, the losses related to the drive motor have accounted for approximately 14% for a medium scale extruder [7]. The maximum energy efficiency of a motor can be achieved when it is running at the rated speed. However, as mentioned earlier, most industrial extruders are operated at conservative rates to avoid undesirable thermal and throughput fluctuations, and hence achieving the rated motor speed may not be possible. Also, these are inductive loads (as they use magnetic fields) and the total power demand is related to the power factor as given in Eq. (1) [16].
P = V · I · cos φ (1)
where V is the supply voltage, I is the current drawn by the motor and cos φ is the power factor, which ranges from 0 to 1. Usually, the power factor relates the shape of the current waveform drawn by a load to the sinusoidal voltage waveform supplied by the power supplier. For purely resistive loads, the current drawn by the load is a sinusoid which is exactly in phase with the voltage waveform and hence the power factor is unity. This is the most energy efficient operating condition. For inductive loads, the current will lag behind the voltage in phase, and hence the power factor will be less than one. Therefore the energy supplied to the load will not be used optimally. As the mains voltage is fixed, a higher current is required from the power supplier (i.e. a higher apparent power than usual) to compensate for the phase shift and deliver the same usable power to the load, bringing the active power back up to the level required to do the desired mechanical work. The power supplier must build additional infrastructure to deal with low power factor conditions and pay for the higher apparent power. Due to these issues, power suppliers may charge extra capital and operating costs to industrial users who operate with a power factor below a certain level (e.g. below 0.95) [17,18]. Obviously, these low power factor conditions are quite common with electrical motors as these are inductive loads. Drive motors and barrel/die heaters are the dominant power consumers in extrusion plants. More details relating to the extruder drive motor can be found in previous studies reported by the authors [13,19]. The barrel/die heaters also consume a considerable amount of energy depending on their wattage, the material being processed, the screw geometry used in the machine, the selection of process settings, etc. Extruders obtain the required heat energy for material melting via these heaters and through the mechanical work generated by the screw rotation. Usually, these heaters are resistive loads (i.e. the power factor is unity) and hence there will be no energy efficiency issues relating to the power factor. However, excessive process heat is removed by the blowers attached along the barrel or by internal cooling of the screw core and/or barrel wall (by water or oil) to maintain the process thermal stability and these heat removals come under the forced cooling losses. Also, a considerable amount of heat energy is lost across the surfaces exposed to the surroundings naturally via radiation and convection. In general, the losses related to the die and barrel heaters have accounted for approximately 8% for a medium scale extruder [7].
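As a quick illustration of Eq. (1) (the voltage, current and power factor values below are made up for the example), the apparent power that the supplier must deliver grows as the power factor falls, even though the active power doing useful work stays the same:

```python
# Illustrative power-factor arithmetic for Eq. (1); the numbers are hypothetical.
import math

V = 400.0            # supply voltage (V)
I = 40.0             # current drawn (A)
power_factor = 0.80  # cos(phi)

active_power = V * I * power_factor  # P = V * I * cos(phi), in watts
apparent_power = V * I               # volt-amperes the supplier must provide
reactive_power = V * I * math.sin(math.acos(power_factor))

print(f"Active power:   {active_power / 1000:.1f} kW")
print(f"Apparent power: {apparent_power / 1000:.1f} kVA")
print(f"Reactive power: {reactive_power / 1000:.1f} kVAr")
```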
The energy losses which may take place in the rectifier, water pumps, instrument panel and other auxiliary devices can be categorised under other energy losses (see Fig. 1). The actual amount of total energy input and loss are dependent upon factors such as the size/age/type of the machine, selection of processing conditions (particularly the screw speed and barrel/die set temperatures), material being processed, skills/knowledge of the operator and so forth.
The plastics industry consumes 4% of the world's oil production as fuel and energy. In the UK, for a typical plastics company the electricity bill is usually between 1 and 3% of turnover. For the whole plastics industry, the turnover is £19 billion, accounting for 2.1% of GDP. In the USA, the plastics industry employs about 8% to 9% of the country's manufacturing workforce, and it consumes approximately 6% of all the energy used by industries. In the UK, the total industry fuel cost is over £5,000 million. Generally speaking, despite the many different processing techniques, significant quantities of energy are consumed. Although energy may be only a small proportion of the total cost in plastics processing, unlike the cost of raw materials, it is controllable. Therefore this study is focused on investigating the process energy demand in polymer extrusion.
Previous studies on extruder energy consumption
Previous research reported in the literature on extruder energy evaluation, monitoring and modelling are now discussed.
Early work performed by Chung et al. [6] discussed the typical energy efficiency of extruders and stated that a 2.5 in (63.5 mm) diameter extruder represents only around 62% mechanical energy efficiency while the energy efficiency of larger extruders is lower than that of small extruders. Moreover, they argue that the energy efficiency was not a real concern to the polymer industry until the late 1970s.
Kruder and Nunn [7] reported that the typical extruder energy efficiency ranges from about 45% to 75%. They stated that the extruder energy efficiency depends on factors such as screw design, gearing, polymer feedstock, product geometry and extrusion rate and that the major energy losses of an extruder occur via the drive train and forced cooling. They also provide some interesting information on extruder energy consumption including the approximate percentage of contribution of each individual component for extruder total energy usage and loss. Furthermore, they argued that the barrel heaters are the dominant energy input source at low screw speeds. Likewise, the operation of an extruder with the highest possible power factor is also significant in energy saving.
McKelvey [20] stated that the overall extruder power requirement can be reduced by using a gear pump at the end of the extruder. These energy savings are achieved not through increased pumping efficiency but through the low pressure operation, which makes fundamental changes to the energy conversion processes occurring in the screw. In addition, a gear pump will help to increase the mass flow rate.
Strauch et al. [8] presented information on the percentage energy consumption of different components of a 63.5 mm diameter single screw extruder. They stated that energy is mainly provided to the machine via the drive or the screw, while the process heating has only a back-up function. Moreover, they stated that more than half of the supplied energy is taken away by the cooling water, while free convection and radiation also make a significant contribution to energy losses. Additionally, a discussion was included on possible energy saving approaches and waste heat recovery techniques.
Rosato et al. [21] reported that the energy efficiency of an extruder is dependent upon factors such as the torque available at the screw, screw rotational speed, heat control and the material being processed. They mentioned that energy losses of 3% to 20% can occur and commented that the drive system is responsible for the major portion of these losses. However, they stated that plastics have a lower specific energy requirement for their manufacture, fabrication of products and recycling compared with most other conventional raw materials.
Womer et al. [22] studied the effects of cooling on the extruder total energy consumption. The results showed that the extruder consumes more energy with water cooling compared to air cooling regardless of the material being processed. Therefore, they recommend the use of only air cooling unless extensive cooling is required.
As stated by Heur and Verheijen [23], the major energy consumers in the extrusion processes are the motors, heating units, cooling processes and compressors. They state that the energy consumption can differ significantly from plant to plant and describe a number of possible factors that may be responsible for these varying energy demands including type and characteristics of the plastic; design, complexity, and size of the end product; cycle time; and size of the plant. The authors highly recommended the use of frequency controllers for energy saving purposes.
Anderson et al. [24] argued that the majority of polymeric materials demand a specific energy (i.e. for the motor) of between 0.0822 and 0.1644 kW h/kg when they are fed to the machine at room temperature. If the extruder motor specific energy consumption (SEC) is above 0.3288 kW h/kg, this usually indicates excessive power consumption in the extruder.
Falkner [25] revealed that over 65% of the average UK industrial electricity bill in 1994 was accounted for by motor operations, costing about £3 billion. However, more than 10% of motor energy consumption is wasted, costing about £460 million per annum in the UK. Although this is the overall motor energy usage, the contribution of the plastics industry may be considerable, as the major power consumers in plastics processing machines are the electric motors. Currently, the plastics industry is one of the major industries within the UK and makes a considerable contribution to the UK economy, accounting for approximately £19 billion of annual turnover (with 180,000 employees in 7500 companies) [26]. The same trend applies to most countries in the world. A small improvement in process energy efficiency would therefore considerably reduce global energy costs.
Barlow [27] argued that 1/3 of a typical extrusion plant's energy consumption can be attributed to the motors. Furthermore, he stated that most older extrusion machines use DC motors (typical full speed and full load efficiency 90%) and recommended replacing DC motors with AC vector-controlled motors (typical full speed and full load efficiency 95%) to achieve better energy performance. The paper mentioned that these typical efficiency figures reduce further when the motor is not running at its rated speed and load, and these reductions can be up to 75% and 85% for DC and AC motors, respectively. Motor efficiency is further reduced as the plant becomes older. Therefore, minimising unnecessary energy usage by selecting optimum processing conditions is important to achieve better overall process efficiency, as machine-attributed inefficiencies may not be controlled or eliminated.
Kent [28] argued that motors are often neglected from energy usage considerations within extrusion plants and although motors in the main processing equipment, such as extruders and injection moulding machines are obvious, the majority of motors are hidden in other equipment such as compressors, pumps and fans. Moreover, he has presented a detailed description on energy saving issues in polymer processes and stated that the process operators should have a sound knowledge on where, when, why and how much of energy is used, before taking actions to reduce the energy costs.
Work presented by Cantor [29] measured extruder specific energy consumption (SEC) together with the contributions of the motor and each heater zone to the extruder SEC (SEC is the power consumed to produce a unit amount of extrudate). Experiments were carried out at five different screw speeds utilising three different screw designs and two materials (a crystalline and an amorphous polymer). The extruder SEC was shown to be reduced as screw speed increased. SEC of the zone heaters also reduced with increasing speed. In general, there was a trend of reducing SEC from the heater bands of the feed to the die but this was not true at 10 rpm. Moreover, the contribution of heaters towards the extruder SEC was higher at low screw speeds than the drive motor and this trend changed as screw speed increased. The author claimed that the heaters waste over 95% of the supplied energy and hence suggested consideration of new barrel heating technologies.
A number of other authors [30,31] have discussed the advantages of replacing DC motors with AC motor drives to benefit energy consumption. They performed experiments on the same extruder with AC and DC motors and found that a considerable energy saving can be achieved with AC motors compared with DC motors. As they claimed, the replacement of old DC motors with new vector-controlled AC motor drives provides significant benefits in the long run, although the initial capital cost is higher for AC motors. It should be noted that the payback period depends on the size of the motor and the type of application.
As explained by Drury [32], in extrusion there is little potential of useful recovery of rejected energy as these losses are largely released to air or water. Moreover, the paper argued that over 40% of the energy supplied to the small scale extruder is lost without being effectively used through drive/transmission losses, radiation, convection, conduction, etc.
A few other works [33-35] also focused on energy consumption related issues in polymer extrusion and more details can be found in the literature. The majority of previous works highlight the importance of efficient operation of the drive motor for energy efficiency. Obviously, other devices attached to the extruder such as barrel heaters, cooling fans, gear pumps, pelletizers, etc., should also operate with their optimum energy efficiency for the energy efficient operation of the whole extrusion plant.
Jing et al. [36] proposed new real-time energy monitoring methods without the need to install power meters or develop data-driven models. The effects of process settings on energy efficiency and melt quality were studied based on the developed monitoring methods. Then, a fuzzy logic controller was developed for a single screw extruder to achieve high melt quality. The resultant performance of the developed controller showed it to be a satisfactory alternative to the expensive gear pump. They also stated that the energy efficiency of the extruder can be further improved by optimising the temperature settings.
Effects of process settings on extruder energy consumption
Rauwendaal [4] stated that the extruder power consumption depends on both material and machine geometry. He presented a detailed analysis on the screw design procedure to achieve optimum extruder power consumption. Work by Rasid and Wood [37] found that the solids conveying zone barrel temperature has the greatest influence on the energy consumption of the extruder. They experimentally investigated the effects of each barrel zone temperature on the total energy consumption of a single screw extruder.
Studies carried out by Brown et al. [38] and Kelly et al. [39] have shown that the extruder SEC reduces as screw speed increases despite the differences in screw geometry. However, the SEC differed with screw geometry and the material being processed within the same operating conditions. Subsequent work by Kelly et al. [40] and Sorroche et al. [41] used three different grades of high density polyethylene (HDPE) and found that the extruder SEC differed depending of the material viscosity. A barrier flighted screw had the lowest energy consumption compared to single flighted gradual compression and rapid compression screws. In the same experiment, they found that melt temperature fluctuations increased as screw speed increased. Therefore, it seems that achieving both an energy efficient operation and a high quality melt output with desirable output rates remains challenging despite significant developments in the polymer extrusion field over the last few decades.
Previous work by the present authors [19] discussed the effects of process settings on motor energy consumption and motor SEC in a single screw extruder. It was found that motor energy consumption increased as the screw speed increased while the motor SEC decreased. The barrel set temperatures had a slight effect on the motor energy consumption and the motor SEC. The motor SEC reduced as the barrel zone temperatures were increased. However, as stated previously, running an extruder at a higher screw speed at higher energy efficient conditions may not be realistic as the required thermal quality of the melt output may not be achieved due to the reduction in material residence time. The identification of an optimum operating point in terms of energy efficiency and thermal quality must therefore be one of the most important requirements for the polymer processing industry today which is the focus of the current research.
Modelling of the extruder energy consumption
From the review of literature, it is clear that only a limited amount of work has been reported that has attempted to develop model/s to predict the total energy consumption of an extruder or its individual components. Mallouk and Mckelvey [42] proposed a theoretical expression to derive the energy requirements of the melting section of extruders under the conditions of Newtonian flow, constant screw channel dimensions and isothermal operation. Screw dimensions, screw speed, die pressure and melt viscosity were taken into account for calculating the energy. They concluded that the total extruder energy demand is the sum of the energy consumed in the helical screw channel and that dissipated between the screw land and the barrel wall. Moreover, the authors claimed that the proposed equation should be useful in design of extruders and evaluating their performance.
Wilczynski [43] presented a computer model for single screw extrusion and stated that the model takes into account five zones of the extruder (i.e. hopper, solids conveying, delay zone, melting zone and melt conveying) and the die. The model predicts mass flow rate; pressure and temperature profiles along the extruder screw channel and in the die; the solid bed profile; and the power consumption based on the given material and rheological properties of the polymer, the screw, the hopper and die geometry and dimensions, and the extruder operating conditions (i.e. screw speed and barrel temperature profile). However, no details were given of the predicted motor power consumption.
Lai and Yu [44] proposed a mathematical model to calculate the energy consumption per channel in single screw extruders based on screw speed, material viscosity and a few other machine geometrical parameters. However, no details are available regarding the model performance or predictions.
Previous work by the current authors [45] studied the motor energy consumption of a single screw extruder and static nonlinear polynomial models were presented to predict the motor energy consumption over different processing conditions and materials. Screw speed was identified as the most critical parameter affecting the extruder motor energy consumption while the barrel set temperatures also showed a slight effect. Of the barrel zone temperatures, the effects of the feed zone temperature were more significant than the other two zones. These models can be used to find out the significance of individual processing conditions on motor energy demand and for selecting the optimum process settings to achieve better energy efficiency. Moreover, they argue that the selection of energy efficient process settings should coincide with good thermal stability as well. Therefore, studies to identify the combined effects of process settings on both energy efficiency and thermal stability would be more desirable to select a more attractive operating point with better overall process efficiency.
The development of models to predict the extruder total energy consumption based on the processing conditions may help operators to select the most desirable operating conditions by eliminating excessive energy demand (i.e. situations in which the energy is more than that required for the process). Particularly, models based on the motor energy consumption may be very useful for selecting the most desirable and highest screw speed (higher energy efficiency at higher screw speeds) with suitable barrel set temperatures to run the process while achieving the required melt quality, which is still a challenging task within the industry. Any improvements in the energy usage of polymer processing machines would be timely and important for the industry.
In this work, an attempt is made to model the total extruder power consumption as a function of key process variables (e.g. screw speed, barrel/die set temperatures) and some other functional process parameters such as melt temperature. A commercially available computer simulation software package is also used to model the total extruder energy demand. A single screw extruder is used in the experiments as it is the most commonly used type in industrial polymer extrusion. Three different screw geometries and set temperature conditions are examined. This paper contributes to the knowledge in several areas. As shown in the literature review, relatively little work has been done so far on energy studies in polymer processing. Compared with previous studies, this work extends the research findings on energy consumption over different processing conditions and investigates power factor issues. An attempt has also been made to model the energy consumption as a function of process variables; such models have not previously been reported.
Equipment & procedure
All experiments were carried out on a 63.5 mm diameter (D) single screw extruder (Davis Standard BC-60) at the IRC laboratories of the University of Bradford. A gradual compression (GC) screw with 3:1 compression ratio, a tapered rapid compression (RC) screw with 3:1 compression ratio and a barrier flighted (BF) screw with a spiral Maddock mixer and 2.5:1 compression ratio were used to process the material. The extruder was fitted with a 38 mm diameter adapter by using a clamp ring prior to a short 6 mm diameter capillary die as shown in Fig. 2.
The extruder barrel has four separate temperature zones (each with a heater of 4 kW) and another three separate temperature zones at the clamp ring (with a heater of 0.9 kW), adapter (with a heater of 1.4 kW) and die (with a heater of 0.2 kW). All of these temperature zones are equipped with temperature controllers which allow individual control of the set temperature of each zone. The extruder drive is a horizontal type separately excited direct current (SEDC) motor rated at 460 V DC, 50.0 hp (30.5 kW) and 1600 rpm. The motor and screw are connected through a fixed gearbox with a ratio of 13.6:1, and according to the manufacturer's information the gearbox efficiency is relatively constant at all speeds (approximately 96%). The motor speed was controlled by a speed controller (MENTOR II) based on speed feedback obtained through a direct current (d.c.) tachometer generator.
Melt pressure was recorded using a Dynisco TPT463E pressure transducer close to the screw tip to observe the functional quality of the process. The total extruder power and motor power were measured using a Hioki three-phase power meter and an Acuvim IIE three-phase power meter, respectively. Melt temperatures at different radial locations of the melt flow at the end of the adapter were measured using a thermocouple mesh [46] placed in between the adapter and the die as shown in Fig. 2. A thermocouple mesh with seven junctions (i.e. with 7 positive and 1 negative thermocouple wires) was used in this study and the mesh junctions were placed asymmetrically across the melt flow along the diameter of the mesh as shown in Fig. 3. A data acquisition programme developed in LabVIEW was used to communicate between the experimental instruments and a PC. All signals were acquired at 10 Hz using a 16-bit DAQ card, National Instruments (NI) PCI-6035E, through a NI TC-2095 thermocouple connector box and a NI low-noise SCXI-1000 connector box.
Materials and experimental conditions
Experimental trials were carried out on a virgin high density polyethylene (HDPE), Rigidex HD5050EA (a semi-crystalline material with density 0.950 g/cm3), and a virgin polystyrene (PS). The extruder barrel temperature settings were fixed as described in Table 1 under three different set conditions denoted as A (low temperature), B (medium temperature) and C (high temperature). Eighteen different experimental trials (two materials × three screws × three set temperature conditions), each lasting around 45 minutes, were carried out with the three screw geometries (with both materials) and the data were collected at 0 rpm for a small time period. Then, the screw speed was adjusted from 10 rpm to 90 rpm in steps of 20 rpm. All data were recorded continuously whilst the extruder was allowed to stabilise at each screw speed. Separate experiments were carried out for model training and validation.
Experimentally measured extruder energy consumption
Initially, the experimentally measured signals for both materials were studied to understand the process energy demand over the different processing conditions. The data collected over the last minute at each screw speed were used for the evaluation. The average values of the experimentally measured mean total extruder power (TP), the level of fluctuations of the total power (ΔTP), mass throughput (MT) and specific energy consumption (SEC) of the extruder for both materials with different screws and processing conditions are shown in Fig. 4. Sub-figures in each row are shown on the same scale for ease of comparison. As expected, both the mass throughput and the total power increased with the screw speed regardless of the material being processed and the screw geometry used. Conversely, the specific energy consumption of the extruder reduced with increasing screw speed regardless of the material and screw geometry. Previous work reported by Cantor [29] used the same size of extruder with a slightly different screw geometry and set temperatures to process three different grades of cyclic block copolymer (CBC). However, those results showed that the extruder specific energy consumption increased with screw speed (in the range of 500-1000 J/g) at all the conditions tested, which is opposite to the findings of this work. In this work, a virgin HDPE and a virgin PS were used with three screw geometries and three set temperature conditions, and SEC reduced with screw speed (in the range of 2600-650 J/g) for all conditions tested. It is likely that this differing behaviour is due to differences in the material properties. The total power fluctuations shown in Fig. 4 do not display any significant trend, although the level of these fluctuations is generally lower at the highest screw speed. These figures clearly show the effects of screw design, material and process settings on the process energy demand and the level of fluctuations of the energy demand. Generally, energy fluctuations were lower with the BF screw (between 4.5 and 13 kW) than with the GC and RC screws (between 3.0 and 15 kW). Moreover, the process specific energy was lower with the BF screw than with the other screws, particularly at low screw speeds. Additionally, Fig. 4 shows that the mass throughput of the PS is higher than that of the HDPE while the SEC of the PS is lower than that of the HDPE under the same processing conditions, due to the higher PS density. Also, some differences in power consumption can be observed between these two materials at the same processing condition. These differences should be attributed to the differing properties of the two materials such as melt viscosity, frictional properties, level of material compaction inside the screw channels and thermal conductivity.
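For reference, the specific energy consumption is simply the measured power normalised by the mass throughput; a minimal sketch of the calculation (with made-up power and throughput values, not the measurements reported here) is:

```python
# Illustrative SEC calculation; the power and throughput figures are hypothetical.
def specific_energy_consumption(total_power_kw: float, throughput_kg_per_h: float) -> float:
    """Return SEC in kWh/kg (power in kW divided by throughput in kg/h)."""
    return total_power_kw / throughput_kg_per_h

total_power_kw = 9.5        # mean total extruder power (kW)
throughput_kg_per_h = 35.0  # mass throughput (kg/h)

sec_kwh_per_kg = specific_energy_consumption(total_power_kw, throughput_kg_per_h)
print(f"SEC = {sec_kwh_per_kg:.3f} kWh/kg = {sec_kwh_per_kg * 3600:.0f} J/g")
```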
As shown in Fig. 4, total power fluctuations were significant and were as high as 15 kW for this particular extruder at some of the processing conditions. A separate experimental trial was carried out to check both the total extruder power and motor power while comparing the effects of the barrel heaters (together with the cooling fans) on the total extruder power signal. Here, both power signals were observed in parallel with the BF screw (with HDPE) and after a few minutes all the heaters (with cooling fans) were turned off at 90 rpm, as shown in Fig. 5. It is clear that the barrel heaters and cooling fans are responsible for most of the variations induced in the total extruder power signal. The data shown in Fig. 5 confirm that the drive motor and barrel heaters are the dominant power demanding components of the extruder. Moreover, the barrel heaters demand less power than the drive motor, particularly at high screw speeds. As reported by Kruder and Nunn [7], the energy demand of the barrel heaters is dominant at low screw speeds. It is obvious from the results that the extruder energy demand results from a complex combination of machine, material and process parameters.
Experimental investigation of the power factor
An experimental trial (with HDPE and the BF screw under set temperature condition B) was carried out to observe the variations of the power factor related to the extruder total power as the screw speed changes under normal processing conditions. The recorded data for power factor, total extruder power and screw speed (SS) are shown in Fig. 6. The power factor signal shows a highly fluctuating behaviour and this may be due to the characteristics of the inductive loads of the extruder such as the drive motor (i.e. due to load variations), drive motor cooling fan and barrel cooling fans (i.e. centrifugal air fan blowers with on-off switching action), etc. The extruder total power and power factor were observed by turning on the barrel heaters one by one (then they were turned-off one by one as well) when the drive motor has been turned-off and the corresponding details are shown in Fig. 7. As shown in Fig. 7 the power factor stays around 0.45 when only the control electronics of the extruder are turned on. However, it suddenly jumps to unity as barrel heaters are turned on which can be considered as pure resistive loads. The power factor of a system, and hence its efficiency, is directly related to the reactive component within the total load-impedance. Usually, the heaters have zero reactive component, whereas motors form a combination of reactive and resistive impedance. Being orthogonal components, the total impedance becomes the Pythagorean-sum of the two. In the case of the motor, it is important to recognise that its resistive (apparent) component is not fixed, but that it changes with load/ speed conditions. Therefore, under DC conditions (non-running), the motor's windings have normal copper resistance, and this is fixed. However, under AC conditions (running, but low-load) these copper windings also manifest inductive-reactance which, as mentioned, can be thought of as being orthogonal to the resistance. It is this orthogonality which causes the current to lag the voltage, and hence, lower the power factor. Under the normal process operating conditions, the extruder is quite thermally stable (i.e. all heaters should have reached their set temperature) and hence some of the heaters may operate intermittently. Likewise, as the (resistive) heaters switch on, causing the vector-sum impedance to become resistively dominant, the power factor again rises towards unity and reduces as the heater goes off. This behaviour is clearly illustrated in Fig. 6 where the power factor shows sudden variations. Under increasing load/speed the inductive-reactance diminishes, so that the vector-sum (Pythagorean-sum) becomes increasingly resistive, and thus the power-factor approaches unity and this is clearly demonstrated in Fig. 6 where screw speed and power factor are seen to be directly correlated. Moreover, the demand from the heaters is reduced with increasing screw speed due to the increase of process mechanical heat. Here, increase or decrease of power factor is unrelated to active power. The active power remains constant although power factor changes from 0 to 1 at a particular processing condition. As power factor increases, it will reduce the apparent power demand and hence the consumer will not be charged by their power supplier for the reactive power. Therefore, there will be a financial saving as the plant is running at a high power factor (close to unity).
One implication of this is that optimum efficiency will be related to motor load: i.e., there is likely to be an optimal speed-material-screw combination under which electrical energy is most efficiently utilised. This is not unreasonable: the system simply conforms to the same physics as, for example, when impedance-matching loudspeakers to an amplifier or an antenna to a radio. There is one other point that might be considered, namely power factor correction. By convention the reactive impedance of an inductor is considered to be positive. Conversely, capacitive reactance is negative. Maximum energy transfer takes place when the source impedance is the complex conjugate of the load impedance. This is simply due to the cancellation of the positive and negative reactances, leaving a purely resistive load. Therefore, with a fixed inductive load, the introduction of a suitable capacitance in parallel can mitigate any power factor issues. The problem in this case is that the load is not constant, and hence the introduction of a capacitor is probably not a suitable option. Power factor correction should be carried out only after thorough investigation of all the relevant factors [47].
Empirical modelling of the total extruder energy consumption
Here, the main aim was to develop a model to predict the total extruder power (E_p) as a function of major process variables and functional process parameters. Of the process variables, the screw speed (x_sc) and barrel set temperatures (T_1, T_2, T_3, T_4) were selected as the model inputs. Among the functional process parameters, the difference between the maximum and minimum melt temperatures of the output melt flow cross-section (T_d) was selected. Then, the total power demand of the extruder can be given as:
E_p = f(x_sc, T_1, T_2, T_3, T_4, T_d)
Overall, this is a multi-input-single-output (MISO) model which has six inputs to predict the total extruder energy consumption at a given condition, and the model structure is shown in Fig. 8. As shown in Fig. 8, the set temperatures of the clamp ring, the adapter and the die were always equal to T_4 during the experiments and hence these were considered as a single input. If these set values are different from T_4, it is possible to add them as three different model inputs.
In this study, a linear-in-the-parameters (LITP) modelling technique was used to model the extrusion process. A two-stage algorithm [48,49] was employed in the selection and refinement of the LITP models. In the first stage, a fast recursive algorithm (FRA) was used for the selection of the model structure and for estimation of the model parameters. This solves the problem recursively and does not require matrix decomposition as is the case for orthogonal least squares (OLS) techniques [50]. However, the models developed include a constraint that the terms added later are based on previously selected ones. As a result, some of them may not have a significant contribution to the model performance. Then, in the second stage a backward model refinement procedure was carried out to eliminate non-significant terms to build up a compact model. The significance of each selected model term was reviewed and compared with those remaining in the candidate term pool and all insignificant terms were replaced, leading to improved performance without increasing the model size. The authors have used the same modelling technique for the modelling of the die melt temperature profile [9,11,51-53], melt pressure [54] and motor power consumption [45] in polymer extrusion, and good results have been achieved. For this study, separate models were developed for each screw and the data were arranged in order of set temperature conditions A-B-C for both model training and validation (see Table 1). Then, six models were developed from the data of both materials with three different screw geometries. All the models showed good performance with the validation data, with small root mean square errors. After studying a number of model combinations (i.e. models with different numbers of terms and orders), it was decided to choose second-order, 12-term models for further study as they showed a good fit and also small training and test errors. The selected models for the BF, GC and RC screws are shown in Eqs.
It is possible to develop lower or higher order models with a different number of terms, if required. Then, a suitable model can be selected based on the required model accuracy and the application type.
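A simplified sketch of fitting such a linear-in-the-parameters model is given below. It uses plain least squares on second-order polynomial terms rather than the two-stage fast recursive algorithm with backward refinement described above, and the input arrays are placeholders rather than the measured data.

```python
# Illustrative only: least-squares fit of a second-order linear-in-the-parameters
# model for total extruder power. Real data and the FRA/backward-refinement term
# selection used in the paper are not reproduced here.
import numpy as np
from itertools import combinations_with_replacement

def polynomial_terms(X):
    """Build [1, x_i, x_i*x_j] second-order terms for each row of X."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

# Placeholder inputs: screw speed, T1..T4 barrel set temperatures, melt temp spread Td
X_train = np.random.rand(60, 6)  # stand-in for measured process data
y_train = np.random.rand(60)     # stand-in for measured total power (kW)

Phi = polynomial_terms(X_train)
theta, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

y_pred = polynomial_terms(X_train) @ theta
rmse = np.sqrt(np.mean((y_pred - y_train) ** 2))
print(f"Training RMSE: {rmse:.3f} kW")
```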
The experimentally measured and model predicted total power values were compared to evaluate the model performance and these are shown in Fig. 9. Figure legends are in the format of EXP/PRE-set temperature condition, where the terms EXP and PRE denote experimental and model predicted conditions, respectively. In the modelling work, all the experimental data for each screw, covering a broad operating window (i.e. 5 screw speeds, 3 set temperature conditions), were fitted into a single model. It is evident that the experimental measurements and model predictions show a good agreement. As the predictions of the proposed models are accurate, they can be used to identify significant process parameters in terms of extruder total power consumption based on the screw geometry. By simply observing the models (i.e. coefficients and variables), it is clear that the screw speed has the most significant impact on the extruder total energy consumption, as confirmed by the experimental results. The effect of each barrel zone temperature differs depending on the processing situation. The proposed models were then used to check the effects on the extruder total energy demand of increasing each barrel zone set temperature by 5°C (i.e. from set condition B) while the other set conditions remained constant. The change of the energy demand in kW with the applied change in temperature is shown in Table 2. Table 2 clearly demonstrates the complexity of the relationship between the process energy demand and other relevant parameters. These values clearly show that increments in barrel set temperatures at different zones caused the total power demand not only to increase but also to decrease, by different amounts. In theory, the power demand of the heaters should increase with an increase in their set temperature, but on the other hand this may decrease the motor power demand due to the reduction in melt viscosity, resulting in a reduction of the extruder total energy demand. Furthermore, the internal heat generated by viscous/frictional forces is affected by changes in barrel set temperature. Overall, the extruder total energy demand varies in a complex manner depending on the screw geometry, screw speed, set temperature and the material being processed. Obviously, it is extremely difficult to understand the nature of such complex behaviour by simply monitoring power values on a meter display. Therefore, these models can be used to obtain a detailed understanding of process energy demand while identifying the effect of significant process/functional/machine/material parameters on the energy behaviour.
Table 2: Changes to the level of the total power as each barrel zone temperature is increased by 5°C from set condition B (variation of total power in kW, by screw and material, for each temperature zone and screw speed in rpm).
In addition, these models will be useful in optimising the energy consumption of an extruder while minimising the melt temperature variations. Here, one constraint should be set to select an appropriate barrel set temperature profile with the minimum possible total power (TP) and melt temperature fluctuations (T_d) while achieving the highest possible screw speed (the higher the screw speed, the higher the energy efficiency). Moreover, another constraint should be set to achieve the required process mass throughput (MT) at each speed. Then, an optimisation algorithm could be programmed to satisfy both of these constraints simultaneously, and this will be explored as future work.
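One hedged sketch of such an optimisation is shown below. It assumes that fitted models for total power, melt temperature spread and throughput are already available as functions of the process settings; the model functions, bounds and weighting used here are hypothetical placeholders, not the models developed in this work.

```python
# Illustrative only: choose screw speed and barrel set temperatures that minimise a
# weighted sum of total power and melt temperature spread, subject to a minimum
# throughput. All model functions, bounds and weights are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

def total_power(x):   # kW, stand-in for a fitted empirical model
    speed, t1, t2, t3, t4 = x
    return 0.12 * speed + 0.01 * (t1 + t2 + t3 + t4)

def temp_spread(x):   # deg C, stand-in for a melt temperature variation model
    speed, t1, t2, t3, t4 = x
    return 0.05 * speed - 0.002 * t4 + 5.0

def throughput(x):    # kg/h, stand-in for a throughput model
    speed, *_ = x
    return 0.6 * speed

required_throughput = 30.0
weight = 0.5  # trade-off between energy demand and thermal stability

objective = lambda x: total_power(x) + weight * temp_spread(x)
constraints = [{"type": "ineq", "fun": lambda x: throughput(x) - required_throughput}]
bounds = [(10, 90), (130, 200), (155, 230), (165, 240), (175, 250)]

result = minimize(objective, x0=np.array([50.0, 160.0, 190.0, 200.0, 210.0]),
                  bounds=bounds, constraints=constraints)
print("Optimised settings:", result.x, "objective:", objective(result.x))
```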
Computer modelling of the total extruder energy consumption
FLOW 2000, a commercially available extrusion simulation software package, was used to predict the extruder total power for the polystyrene under the same processing conditions used for the experiments. Initially, the corresponding frictional coefficients of material-barrel and material-screw were selected to provide a close fit between experimental and predicted mass throughput values. These were determined by trial and error and this procedure was followed to match the experimental extruder and the simulation model. Here, the final friction coefficients were selected as material-barrel: 0.43 and material-screw: 0.2, and these values were used for estimating the extruder total power with all the screws. The experimental and software predicted mass throughput and total power values are shown in Fig. 10. Figure legends are in the format of EXP/COM-set temperature condition, where the terms EXP and COM denote experimental and computer simulation conditions, respectively. Sub-figures in each row are plotted on the same scale.
The mass throughput values predicted by the software match the experimental values reasonably well. Although the experimental and data-driven model predicted values show a good agreement (see Fig. 9), the total power values predicted by the simulation software are offset from the experimental values, particularly with the BF screw. This may be due to incorrect estimation of the friction coefficients. The selection of proper friction coefficients to match the experimental machine and simulation model is a trial-and-error process and hence it cannot be guaranteed that this is the best prediction that can be achieved. In general, it is a time consuming process and improved results may be obtained by spending more time on selecting well-matching friction coefficients. The offset may also arise because the simulation does not take into account (or underestimates) losses such as motor inefficiency, convection/radiation heating losses, etc. The majority of existing simulation software packages (e.g. EXTRUD, SSD, REX, CHEMEXTRUD, EXTRUCAD and FLOW 2000) model the single screw extrusion process by considering the three major zones (i.e. solids conveying, melting and melt conveying) of an extruder. Moreover, these follow the Tadmor melting model [55,56] which is based on Maddock's melting mechanism [57]. Usually, finite element methods are used by most of these simulation packages to obtain solutions of the relevant differential equations. In the late 1990s, Vlachopoulos [58] stated that some of the major challenges/shortcomings of the existing simulation software packages relate to the inability to represent shear thinning behaviour of polymer flows, the difficulty of representing contact between polymer melt and metal wall, and the inability to predict phenomena such as solid bed break-up, sharkskin, die lip build-up, melt fracture and die resonance. Simulation packages are continually being developed with increases in computing power and with better understanding of the underlying physics of polymer processing. On the other hand, some areas of polymer extrusion, such as solids conveying and solid bed break-up, are not well understood and are complex to model. Therefore it is understandable that commercial simulation software packages have some limitations in these areas. In fact, extensive experimental findings on process operation are invaluable for the improvement of such software packages.
Conclusions
The proposed models show a promising agreement with the experimental measurements made over a wide operating window. The models show that the screw speed has the greatest influence on the total energy demand of the extruder, as was confirmed by the experimental observations. Also, the screw geometry can be significant in determining energy demand depending upon the material being processed. The importance of running the process at high speeds and with a high power factor to achieve better process energy efficiency was highlighted. However, the optimum process operating point should be selected by considering both energy and thermal efficiencies. Overall, the results showed that the relationship between the extruder total energy and other process parameters is highly complex and hence further research is recommended to formulate techniques to enable selection of an optimum operating point with the highest possible energy and thermal efficiencies.
Future work
In future, the proposed models will be used to further study process energy consumption. The research will also be extended to observe the motor and heater powers together with the total power. A number of different materials (i.e. both semi-crystalline and amorphous) will be used to understand the process energy usage while attempting to explore the relationship(s) between the process thermal stability and energy usage. The accuracy of the proposed empirical models will be improved by including other possible machine- and material-related parameters. Additionally, an attempt will be made to develop a physical model of the extruder total energy demand.
Local Wisdom of Indramayu Community in Transforming Islamic Values through Bujanggaan Tradition
This study aims to explore the religious values contained in the Bujanggaan tradition in the village of Jambak, Cikedung, Indramayu, West Java. One important part of this tradition is the reading of the Lontar Yusuf. The lontar consists of verses divided into several parts that recount various aspects of the story. The method of this research was descriptive qualitative, using an oral tradition approach. The data collection techniques were interviews, observation, and document review. This study concludes that the verses in Lontar Yusuf contain many religious values, especially those derived from Islamic teachings, such as the rights and obligations of husband and wife.
INTRODUCTION
Indramayu is one of the regencies of West Java Province and is strategically located on the north coast (pantura) route. According to data from the Regional Government of Indramayu Regency, Indramayu is bordered by Subang Regency in the west, the Java Sea and Cirebon Regency in the east, Majalengka, Sumedang and Cirebon Regencies in the south, and the Java Sea in the north. Indramayu is a dynamic region economically, socially, politically and culturally. Various forms of culture have come and gone, shaping the valley of the Cimanuk estuary. None of its art forms, traditions or value systems is purely Javanese or Sundanese, nor purely Hindu or Islamic; all of them were born from a hybrid (mixed) cultural history. Indramayu culture undergoes a dialectical process between culture and religion (Nugroho, 2016: 101).
This uniqueness has made scholars interested in researching the richness of these cultures and civilizations. Previous studies related to the traditions and culture of the Indramayu community include the work of Mochamad Fikri Yasin, AT Sugeng Priyanto and Setiadji (2017), "Symbolic Interactions in the Ngarot Culture of the Jambak Village Community, Cikedung District, West Java Province", which discusses the ngarot culture that is still trusted and preserved by the people of Jambak Village, Cikedung District, in order to pass down noble values and agricultural systems through traditional performing arts. According to data from the Center for the Preservation of History and Traditional Values, which maps culture in Indramayu Regency, there are more than seventy-seven folk tales of Indramayu, including the history of Indramayu, Juntinyuat, Sage, Sampuyung, Ki Panganjung, Pulau Mite, Demang Bei, Palguna Palgunadi, Pangeran Surapati, Adipati Aria Wira Lodra III, Raden Kartawijaya and Raden Welang, the origin of Sukahaji Village, the origin of Sukra Village, and so on (BPSNT, 2008: 88).
RESEARCH QUESTIONS
Therefore, this study was conducted to answer the following questions: 1) how is the bujanggaan tradition in Indramayu carried out; 2) what is the procession involved in the bujanggaan tradition; and 3) what values of religious education are contained in the bujanggaan tradition. This research aims to describe the bujanggaan tradition that still survives in Indramayu and to reveal and explain the values of religious education contained in it. The research is expected to be useful both academically and practically. Academically, it provides additional information and data regarding oral traditions that still exist in Indramayu today. Practically, it can serve as a policy consideration for related parties such as the Indramayu Culture and Tourism Office and the Indramayu Regional Archives and Library Service. 1 Interview with Ki Tarka Sutarahardja, Wednesday 7 August 2019.
Religious Education Values
Based on the Indonesian Dictionary, the word "value" means a) price; b) the price of money; c) a measure of intelligence, content or quality; d) characteristics or things that are useful to humanity (KBBI, 2008). According to Horton and Hunt, as quoted by Wignjosoebroto, value is the idea of whether an experience or thing is meaningful or insignificant, valuable or not. Value essentially directs one's behavior and judgment. Values are an important part of culture. An action is considered valid, meaning that it is morally acceptable, if it is in harmony with the values agreed upon and upheld by the society in which the action is carried out. For example, when the prevailing values state that piety in worship is something that must be upheld, then people who are lazy in worship will certainly become the subject of gossip (Wignjosoebroto, 2006: 55). In relation to religion, values can function in three ways: as a basis for obligations or commandments, as a framework for cultural orientation and thought, and as specific moral traditions. There are religious values that act as commands and prohibitions, sometimes in the form of moral guidelines that regulate the relationship between humans and the Almighty, between humans and others, and between humans and nature. All of this is based on the belief in a substance that is almighty (Howell et al, 2003: 915).
Meanwhile, the word education comes from the root word (verb) "educate", which means to nurture and provide training (teaching, guidance and leadership) regarding morals and intelligence. Education (noun) is the process of changing the attitudes and behavior of a person or group of people in an effort to mature them through teaching and training; the process, way, or act of educating. When combined into the phrase "religious education", the Indonesian Dictionary defines it as an activity in the field of education and teaching whose main goal is to provide religious knowledge and instill a religious attitude. According to Government Regulation Number 55 of 2007 (Article 1 paragraph 1), religious education is education that provides knowledge and shapes the attitudes, personalities and skills of students in practicing their religious teachings. The regulation (Article 1 paragraph 1) states that the function of religious education is to form Indonesian people who believe in and are devoted to God, have noble character, and are able to maintain peace and harmony in inter-religious relations. Meanwhile, the purpose of religious education (Article 1 paragraph 2) is the development of students' ability to understand, appreciate, and practice religious values in harmony with their mastery of science, technology and art.
The word "religion" in the Indonesian Dictionary is defined as a teaching, a system that regulates faith (belief) in God, worship, and the rules governing human relations with their environment based on these beliefs. The word "religious", in turn, means everything relating to religion. The English word "religion" comes from the Latin religio. Modern scholars, as proposed by Smith, use this term to refer to a power outside humans that obliges them to behave in certain ways under the threat of sanctions, or to human feelings in dealing with powers beyond humanity. The values of religious education can therefore be interpreted as the rules of life accepted as knowledge to shape attitudes, personalities and skills in practicing religious teachings, so as to become Indonesian people who believe in and fear God Almighty, have noble character, and are able to maintain peace and harmony in relationships within and between religious communities. These values have three kinds of content: (1) worship according to one's beliefs; (2) doing good deeds; (3) maintaining peace and harmony.
Oral Tradition
Oral tradition, according to Jan Harold Brunvand as quoted by Danandjaja, is folklore that is wholly or partially transmitted orally. The term folklore itself comes from the English word "folklore", which combines two words: folk (a group with identifiable shared physical or cultural characteristics and an awareness of itself as a community unit) and lore (the part of a culture or tradition that is inherited). Oral tradition includes: (1) folk language, such as accents, nicknames, traditional ranks and aristocratic titles; (2) traditional expressions, such as proverbs, sayings and bywords; (3) traditional questions, such as riddles; (4) folk poetry, such as pantun, gurindam and syair; (5) folk prose stories, such as myths, legends and fairy tales; (6) folk songs; (7) folk beliefs; and (8) folk games, folk theater, folk dances, customs, ceremonies and people's parties.
METHOD
This research was descriptive qualitative, using an oral tradition approach. The data collection techniques were interviews, observation, and document review. Interviews were conducted with traditional actors, cultural observers, village officials and local residents who support the bujanggaan tradition. Observation was used to see the oral tradition from the outside in and to describe exactly what was seen. The observations covered (1) the physical environment in which the oral tradition is carried out; (2) the social environment of the oral tradition; (3) the interaction of the participants in the oral tradition; (4) the form of the oral tradition itself; and (5) the period or time at which the oral tradition is performed. After all the data had been collected and analyzed, it was compiled into a research report. This report consisted of an introduction, a description of the socio-cultural background of the research area where the oral tradition is found, a description of the ritual tradition itself, and an analysis of the values of religious education in the oral tradition. The description of the oral tradition necessarily includes a transcription of part or all of it; this transcription serves as evidence in the analysis of the values of religious education in the oral tradition. The data were analyzed using content analysis to reveal the values contained therein. According to Endraswara, content analysis departs from the axioms of cultural studies, which examine both process and content; cultural behavior is considered a discourse that can be examined through its form and content (Endraswara, 2017: 81).
Jambak Village Profile
Geographically, this village is located in Cikedung District, Indramayu Regency, West Java Province. According to the village profile book, there is a legend about the origin of Jambak Village involving a fierce war in the Karang Anyar forest. The war was a contest of kanuragan (martial prowess) between the Cirebon troops, led by Nyi Gede Krapyak, a student of Mbah Kuwu Sangkan, the son of Ki Gendeng Cimanggung, and the Bogis Bogiana troops, led by Ki Koang. The war was exhausting because both sides were strong and skilled in kanuragan. Even though Nyi Gede Krapyak was a woman, she could match the powerful and brave Ki Koang. Both army leaders used horses in battle: Ki Koang's horse was named Brama Tunggal, while Nyi Gede Krapyak's was named Turangga Deling, and both horses were trained to fight. It is said that Nyi Gede Krapyak's horse was struck by a terrible blow from Ki Koang, known as the Brama Sentaka punch, so that the horse died and was buried in a place called Putat Payung.
Based on various folk tales and the remnants of past life in the form of stumps of various large trees, it can be concluded that the area around Jambak Village was originally forest. The formation of Jambak Village, as narrated in its history, began with the hamlet where Ki Koang grabbed Nyi Gendeng Krapyak's hair. The hamlet then became known as Pedukuhan Jambak, now Jambak Village.
Description and Excerpt of Yusuf's Lontar Text
The manuscript entitled "Wawacan Yusuf Indramayu" is one of the collections of Ki Lebe Warki, kept in Jambak Village Blok 2, Cikedung, Indramayu Regency. The manuscript has 192 pages, measuring 21 cm long by 17 cm wide, with a text block of 19 cm by 14 cm. It is written in Javanese script (Carakan) in the Indramayu dialect of Javanese, on lined paper in fairly good condition. The manuscript tells about the son of the Prophet Jacob AS and includes the love story of Siti Juleha, the daughter of a king from the country of Temas. It recounts the long, sorrowful journey of the young Yusuf: after being bought as a slave by Ki Juragan Malik, he was displayed as a spectacle to the public in exchange for gold dinars, and Ki Juragan became very rich. At the request of Siti Juleha, ownership of the slave was transferred to King Kadmirul Ajid of Egypt in exchange for an abundance of ransom property. When Ki Juragan Malik finally found out that his former slave was a prophet, he converted to Islam along with his followers.
At that time, Siti Juleha had become a "garwa" (consort) of the King of Egypt, and she still worshiped idols. She felt that she had found the ideal man who had appeared in her dreams many times long ago, and so she tried in various ways to gain the sympathy of the Prophet Yusuf As. The gift of good looks given by Allah to the Prophet had made the kingdom unstable. Finally, at the request of Siti Juleha, the King decided to put him in prison in order to avoid continuing slander. At that time, many of the wives of relatives and court officials had become infatuated with the prophet. In prison, the prophet awakened and converted as many as one thousand four hundred prisoners who were shackled with iron chains. After interpreting the King's nightmare, the Prophet Yusuf and his followers were finally freed from prison by the King of Egypt, who crowned his adopted son as the successor to the throne of Egypt. The elderly king himself and his subordinates eventually embraced the Abrahamic religion, while Siti Juleha persisted in worshiping idols and chose to leave the palace. The Prophet Yusuf As became a king not through the inheritance of a royal line or by seizing power through war; he became king because it was destined by Allah, and he ruled the country by carrying out the law of God.
Pupuh Kasmaran (p. 6-8) 1. The badness of this woman, the main bad, the bad, the medium and the main nature. My advice to you is to remember and to be carried out. In hindsight, it's a father's advice.
2. There are six kinds of badness, the first is blereh woman. She said sassy, dare to say words. His possessions must be recognized, feel the results themselves.
5. The fourth badness of the woman is the woman who likes to be in the mirror. Women love to be decorated only, when her husband is away there is an intention to play love with others. If her husband has come, wear a rag.
6. The fifth badness is kesit woman. She is happy to lie to her husband. Lie forever, steal food and selling names, happy to steal (items that are not very valuable) and calimud (take) something food if neighbors cook.
7. The sixth badness of women is the women who love to walk around. Play sanja (visit to other men) and watch. Her husband is considered a horse herder, his job is not certain. If she calls, he should be listened to. But if she is called, she pretended to be deaf.
BUJANGGAAN PROCESSION AND RELIGIOUS EDUCATION VALUES
Every performance of the bujanggaan tradition goes through three stages. The first is the opening stage: usually the dalang bujanggaan first recites prayers and Surah al-Fatihah, whose reward is dedicated to the family holding the celebration. The second is the implementation stage, which is filled with chanting stanzas of the text of the Lontar Yusuf. The third is the closing stage: after the reading of the Lontar Yusuf, the dalang continues with the Rahayu song and closes with prayer readings.
Some of the advice found in the chanted verses of the text concerns how to be a good woman, or, in the language of the text, "the main woman". A main woman must meet four important criteria: 1) Hambrap Arum, a woman who is obedient and devoted to her husband, showing a face like a jasmine flower while navigating the household and respecting what her husband likes and dislikes; 2) Hamrapsari, a woman who is adept at work: she does not need to be taught what to do, and the results are good and beyond reproach; 3) Hamrapkayon, a woman who maintains her honor and obedience to her husband, both when the husband is near and when he is far away; 4) Hambarungsari, a woman who is diligent in keeping her house and herself clean and always loves her husband and children (Lontar Yusuf, Sinom, p. 8). In addition to advice on how to be a good woman, the text also conveys teachings and knowledge to always have faith in Allah Almighty, to rid oneself of arrogance, and to always give alms to the poor and to religious scholars so that we may receive forgiveness for all the sins and mistakes we have committed (Lontar Yusuf, Pucung, p. 26-27).
If we look at the verses of the Lontar Yusuf text above, we find strong Islamic values contained in them, such as those in the third point of the Lontar Yusuf Pupuh Kasmaran: in her behavior toward her husband, a woman should not praceka (expose him), telling the neighbors of her husband's faults behind his back. Basically, Islam stipulates that husband and wife must take care of each other and not reveal each other's disgrace. Allah reminds us that "... they (your wives) are clothing for you and you are clothing for them" (Surah al-Baqarah: 187). This verse implies that husband and wife are like clothes that cover each other: the husband is clothing for the wife and the wife is clothing for the husband. If a husband or wife exposes their partner's shame, it is the same as stripping themselves. Husband and wife are a complementary unit.
A hadith also states that Rasulullah SAW said, "Surely the man whose position is worst on the Day of Judgment is a man (husband) who mingles (has intercourse) with his wife and then reveals his wife's secret" (reported by Muslim). Although this concerns the husband-wife relationship, protecting a partner's disgrace actually covers many aspects. Sheikh Abdullah al-Bassam, commenting on the above hadith, explained that the disgrace of a partner can include the husband's or wife's body, as well as the secrets between the two, which of course neither husband nor wife would want known by others. If spreading disgrace in general is prohibited, this applies even more to the extremely private relationship between husband and wife. The Prophet SAW labeled a husband or wife who exposes their partner's shame as the ugliest of human beings in the sight of Allah, because those who expose such disgrace have betrayed the trust they should have kept.
Furthermore, the fifth point of the Lontar Yusuf Pupuh Kasmaran says: "The worst thing about women is a woman who likes to look in the mirror. Such women just love to make themselves up: when her husband leaves, there is an intention to make love with other people; when her husband comes, she wears a ragged, worn cloth." Islam basically prohibits tabarruj, an excessive attitude in displaying jewelry and beauty, such as the head, face, neck, chest, arms, calves and other body parts, or displaying additional adornments. Imam Ash-Shaukani said, "At-Tabarruj is a woman showing some of her jewelry and beauty which she is obliged to cover, and which can provoke men's lust (desire)." Allah said: "And you must remain in your houses and do not adorn yourselves and behave like the people of the former Jahiliyah ..." (Surah al-Ahzab: 33). Shaykh 'Abdur Rahman as-Sa'di, when interpreting the above verse, said that it forbids women from frequently going out of the house adorned or wearing fragrances, as was the custom of the women of the age of ignorance, who had no (religious) knowledge or faith. All this is in order to prevent badness (for women) and its causes.
CONCLUSION
From the above description, it can be concluded that the bujanggaan tradition still exists today in Indramayu, with its supporting community, and is part of the local wisdom of Indramayu. On the one hand, the existence of this tradition could be threatened by developments in information technology such as television and the internet. On the other hand, the bujanggaan tradition carries values of religious education that the wider community needs to know and study, such as the criteria for being a good human being and human obedience as a servant of God. Therefore, its existence needs to be supported by related agencies such as the Culture and Tourism Office and the Regional Archives and Libraries Service, with both material and non-material support.
AUTHORS' CONTRIBUTIONS
Rosadi drafted the article; Reza completed the necessary data; Satria translated the article into English.
Camptothecin nanocolloids based on N,N,N-trimethyl chitosan: Efficient suppression of growth of multiple myeloma in a murine model
Camptothecin (CPT) exhibits very strong antitumor effects by inhibiting the activity of DNA topoisomerase I, but its application is greatly limited due to its low solubility and the instability of the active lactone form. To overcome these shortcomings, in the present study, we prepared novel camptothecin nanocolloids based on N,N,N-trimethyl chitosan (CPT-TMC) to efficiently and safely administer CPT systemically. Herein, we investigated the antitumor activity of CPT-TMC against a murine Balb/c myeloma model. Our results showed that CPT-TMC more effectively inhibited tumor growth and prolonged survival time than CPT in vivo, but no statistical difference was observed in vitro between CPT-TMC and CPT. These findings suggest that N,N,N-trimethyl chitosan could increase the stability and the antitumor effect of CPT and CPT-TMC is a potential approach for the effective treatment of multiple myeloma.
Introduction
Multiple myeloma (MM), a malignant plasma cell disorder, accounts for about 10% of all hematological cancer cases (1,2). The annual incidence of MM varies between 1 and 5 cases per 100,000 persons worldwide (3). MM is the second most frequent malignancy of the blood in the USA after non-Hodgkin's lymphoma, with about 19,900 new cases and 10,790 deaths in the United States (4). Since the introduction of alkylating agents and melphalan in the 1960s, the median survival of patients with MM has improved (5), but treatment outcomes are far from satisfactory, and novel drugs are urgently needed to better combat this malignancy.
Camptothecin (CPT) was first isolated by Monroe E. Wall and Mansukh C. Wani in 1958 from extracts of Camptotheca acuminata, a deciduous tree native to China and Tibet that has been extensively used in traditional Chinese medicine (6). CPT represents an important class of agents useful in the treatment of cancer, showing a broad spectrum of antitumor activity, including against lung, ovarian, breast, pancreatic and stomach cancers and leukemia (7)(8)(9)(10)(11). CPT exerts its antitumor effects by inhibiting DNA topoisomerase I, an enzyme required for replication and transcription during the cell cycle; by stabilizing the DNA-topoisomerase complex, CPT causes single-strand DNA breaks that induce apoptosis of cancer cells (12)(13)(14)(15). Cancer cells often overexpress topoisomerase 1 (Topo-1) and are usually more susceptible to CPT than normal cells (16). This suggests that CPT may be a promising anticancer agent that warrants further investigation. However, several drawbacks of CPT significantly restrict its clinical use. CPT, like a number of other potent anticancer agents of plant origin, is extremely water insoluble and can only be solubilized in dimethylsulfoxide (DMSO), dichloromethane:methanol (1:1, v:v) and chloroform:methanol (4:1, v:v). The serious side effects of co-solvents and bioavailability problems have hampered the use of CPT in vivo (13). The lactone ring of CPT plays an important role in the drug's biological activity, but it opens at physiological or higher pH values, making the drug much less active and highly toxic and precluding its clinical use (Fig. 1) (17).
To overcome these drawbacks of aqueous solubility and stability, two strategies have been introduced. One is to synthesize water-soluble analogues, pro-drugs and derivatives of CPT; many such compounds have been reported, including CPT-11, SN-38 and DX-8951f (18)(19)(20)(21)(22). However, these compounds are not sufficiently stable in vivo and have lower activity than CPT (23). Alternatively, the development of suitable drug carrier systems to improve the solubility and stability of CPT is gaining attention. There are many reports on the use of CPT in cancer therapy with drug delivery systems such as liposomes, polymer micelles, microemulsions and microspheres (17,(24)(25)(26)(27)(28). Unfortunately, these carriers are unsatisfactory due to poor biocompatibility and biodegradability.
Chitosan is an aminoglucopyran composed of N-acetylglucosamine and glucosamine residues with excellent properties, such as biocompatibility, biodegradability and lack of toxicity (29). However, the application of this polymer in medicine as a drug carrier in vivo is difficult to achieve owing to its insolubility. Thus, there is a need for chitosan derivatives with increased solubility, especially at neutral pH values, to aid in the delivery of therapeutic compounds (30,31). Trimethyl chitosan is a chitosan derivative with superior solubility compared to chitosan (32). N-Methylated chitosan, with hydrophilic N+(CH3)3 groups and hydrophobic N(CH3)2 groups, is amphiphilic and water-soluble at physiological pH and can self-assemble into vesicles. N-Methylated chitosan has previously been used as a carrier for the delivery of small drug molecules owing to these properties (33)(34)(35).
In the present study, we chose N,N,N-trimethyl chitosan (TMC) as a carrier to encapsulate CPT and to overcome its drawbacks. The anticancer efficacy of CPT encapsulated with N,N,N-trimethyl chitosan (CPT-TMC) was examined in vivo and in vitro.

Materials and methods

Cell culture and tumor model. The murine Balb/c myeloma cell line MPC-11 was purchased from the American Type Culture Collection (ATCC, Manassas, VA) and cells were grown in RPMI-1640 (Life Technologies, Bedford, MA) containing 10% heat-inactivated FCS, 100 units/ml penicillin, and 100 units/ml streptomycin in a humid chamber at 37˚C under 5% CO2.

The MPC-11 tumor model was established in 8-week-old female BALB/c mice. Briefly, the mice were inoculated subcutaneously with MPC-11 cells (2x10^5) in the dorsal area. All mice were purchased from the Sichuan University Animal Center (Sichuan, Chengdu, China), and all studies involving mice were approved by the Institute's Animal Care and Use Committee.
Preparation of CPT-TMC nanocolloid. Following a previous study (36), the CPT-TMC nanocolloid was prepared by a combination of microprecipitation and sonication. Briefly, a 6 mg/ml CPT solution was first prepared by dissolving 30 mg CPT in 5 ml DMSO, and TMC was dissolved in water at a concentration of 5 mg/ml. Subsequently, 0.1 ml of the CPT solution was added dropwise into 2 ml of the TMC solution at 4˚C. The resulting colloid solution was ultrasonicated for 10 min while keeping the temperature at 4˚C. Finally, the colloid solution was dialyzed against water using a membrane with a molecular weight cut-off of 8,000-14,000 (Solarbio, China). After dialysis for 3 days, the solution was centrifuged at 10,000 x g for 10 min to remove insoluble CPT. The amount of CPT in the TMC solution was measured by HPLC.
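As a quick back-of-the-envelope check of the formulation described above, the sketch below computes the theoretical CPT content of the colloid from the stated volumes and concentrations. It ignores losses during dialysis and centrifugation (the actual CPT content was measured by HPLC), so the result is only an upper bound.

```python
# Theoretical CPT loading of the CPT-TMC colloid from the quantities stated above,
# ignoring losses during dialysis and centrifugation (actual content was measured by HPLC).

cpt_mass_mg = 0.1 * 6.0   # 0.1 ml of a 6 mg/ml CPT stock in DMSO
tmc_mass_mg = 2.0 * 5.0   # 2 ml of a 5 mg/ml TMC solution in water

loading_pct = 100.0 * cpt_mass_mg / (cpt_mass_mg + tmc_mass_mg)
print(f"CPT added: {cpt_mass_mg:.1f} mg, TMC added: {tmc_mass_mg:.1f} mg, "
      f"theoretical loading: {loading_pct:.1f}% w/w")  # roughly 5.7% w/w
```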
In vitro cytotoxicity assay. The growth-inhibitory activity of CPT-TMC against the MPC-11 cell line was evaluated by MTT assay. Briefly, MPC-11 cells (4-5x10^3) were seeded in 96-well plates and cultured for 24 h, followed by exposure to various doses of free CPT, or of CPT-TMC at equivalent CPT doses, for 48 h. A volume of 10 µl of 10 mg/ml MTT was added per well and incubated for another 4 h at 37˚C; the supernatant was then removed and 150 µl/well DMSO was added for 15-20 min. The light absorption values (OD) were measured at 570 nm with a SpectraMAX M5 microplate spectrophotometer (Molecular Devices), and cell viability was calculated from the absorbance at 570 nm.
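A minimal sketch of the viability calculation from the OD570 readings is shown below. The paper does not give its exact formula, so blank subtraction and normalisation to the untreated control are assumptions following the conventional MTT calculation, and the OD values are invented.

```python
# Conventional MTT viability calculation (assumed, not taken from the paper):
# viability (%) = (OD_treated - OD_blank) / (OD_control - OD_blank) x 100.

import numpy as np

od_blank = 0.05                            # medium + MTT, no cells
od_control = np.array([1.21, 1.18, 1.25])  # untreated wells (invented values)
od_treated = np.array([0.62, 0.58, 0.65])  # drug-treated wells (invented values)

viability_pct = 100.0 * (od_treated.mean() - od_blank) / (od_control.mean() - od_blank)
print(f"Viability: {viability_pct:.1f}% of control")
```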
To assess the effect of CPT-TMC on cell apoptosis and the cell cycle, flow cytometric analysis was performed to measure the percentage of sub-G1 cells after PI staining in hypotonic buffer as previously described (37,38). Briefly, cells were suspended in 1 ml hypotonic fluorochrome solution containing 50 µg/ml PI in 0.1% sodium citrate plus 0.1% Triton X-100 and the cells were analyzed by a flow cytometer (ESP Elite, Beckman-Coulter, Miami, FL). Apoptotic cells appeared in the cell cycle distribution as cells with a DNA content of less than that of G1 cells and were estimated with the Listmode software.
For morphological analysis, the cells were fixed using 70% of ethanol following rinsing with PBS. Morphological analysis of apoptosis was performed after staining with PI (1 µg/ml, in PBS) under fluorescence microscopy (Axiovert 200, Zeiss, Germany) or under light microscopy without staining.
The pattern of DNA cleavage was analyzed as previously described (39). Briefly, cells (3x10^6) were lysed with 0.5 ml lysis buffer [5 mM Tris-HCl (pH 8.0), 0.25% Nonidet P-40, and 1 mM EDTA], followed by the addition of RNase A at a final concentration of 200 µg/ml and incubation for 1 h at 37˚C. Cells were then treated with 300 µg/ml proteinase K for an additional 1 h at 37˚C. After addition of 4 µl loading buffer, 20 µl samples were loaded per lane and subjected to electrophoresis on a 1.5% agarose gel at 50 V for 3 h. DNA was stained with ethidium bromide.

In vivo antitumor activity. MPC-11-bearing Balb/c mice were coded and divided into four groups (n=10 per group). The groups were treated with intravenous injections of CPT-TMC (2.5 mg/kg), CPT (2.5 mg/kg), TMC (25 mg/kg) or 0.9% NS, respectively. Treatment was initiated when the tumor volume reached 90 mm^3. Treatments were given every 3 days for 15 days, and survival time and tumor volumes were recorded. Tumor size was determined by caliper measurement of the largest and perpendicular diameters every three days. Tumor volumes were calculated according to the formula V = a x b^2 x 0.52, where a is the largest superficial diameter and b is the smallest superficial diameter. The mice were sacrificed when they became moribund, and the date of sacrifice was recorded to calculate the survival time. For further investigation, tumor tissues were excised and fixed in 10% formalin.
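The following worked example applies the tumor volume formula given above and derives a volume-based inhibition rate relative to the NS control; the caliper readings are invented for illustration.

```python
# Tumor volume V = a x b^2 x 0.52 (a = largest, b = smallest superficial diameter, in mm)
# and a volume-based inhibition rate relative to the NS control group.

def tumor_volume(a_mm: float, b_mm: float) -> float:
    return a_mm * b_mm ** 2 * 0.52

v_ns = tumor_volume(22.0, 18.0)       # hypothetical NS-control tumor
v_cpt_tmc = tumor_volume(12.0, 9.5)   # hypothetical CPT-TMC-treated tumor

inhibition_pct = 100.0 * (1.0 - v_cpt_tmc / v_ns)
print(f"V(NS) = {v_ns:.0f} mm^3, V(CPT-TMC) = {v_cpt_tmc:.0f} mm^3, "
      f"inhibition = {inhibition_pct:.0f}%")
```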
Apoptosis analysis in tumor tissues. Tumor tissues in paraffin blocks were cut into sections of 3-5 µm thickness. Apoptosis analysis was performed by terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling staining using the DeadEnd™ Fluorometric TUNEL system (Promega) following the manufacturer's protocol. Four equal-sized fields from the tissue sections were randomly chosen and analyzed. The positive-stained cells were visualized and analyzed under a fluorescence microscope (Olympus, BX60). The apoptotic index was calculated as a ratio of the positive cell number to the total tumor cell number based on the mean value from four high-power fields.
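A simple sketch of the apoptotic index calculation described above is given below; it reads the definition as the mean of the per-field ratios of TUNEL-positive to total tumor cells over four high-power fields, and the counts are invented.

```python
# Apoptotic index: TUNEL-positive cells / total tumor cells, averaged over four
# randomly chosen high-power fields (per-field counts are invented for illustration).

fields = [
    {"positive": 34, "total": 210},
    {"positive": 28, "total": 195},
    {"positive": 31, "total": 205},
    {"positive": 25, "total": 188},
]

apoptotic_index = sum(f["positive"] / f["total"] for f in fields) / len(fields)
print(f"Apoptotic index: {apoptotic_index:.1%}")
```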
Statistical analysis. Data were analyzed by ANOVA and Student's t-test. For survival, Kaplan-Meier curves were established for each group and compared by the log-rank test. All data are presented as mean ± SD. Experiments were performed at least in duplicate. All data were analyzed using SPSS software (SPSS for Windows, version 17.0; SPSS, Chicago, IL). In all statistical analyses, P<0.05 denoted a significant difference.
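For readers who prefer open-source tools over SPSS, the sketch below reproduces the two main comparisons with SciPy and lifelines: a Student's t-test on final tumor volumes and a log-rank test on survival times. The data are invented, and the choice of libraries is an assumption rather than what the authors used.

```python
# Illustrative statistics with SciPy/lifelines (the paper used SPSS 17.0); data are invented.

import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

# Student's t-test on final tumor volumes (mm^3), CPT vs. CPT-TMC
vol_cpt = np.array([950, 1020, 880, 1100, 990], dtype=float)
vol_cpt_tmc = np.array([410, 380, 450, 400, 430], dtype=float)
t_stat, p_val = stats.ttest_ind(vol_cpt, vol_cpt_tmc)
print(f"t-test: t = {t_stat:.2f}, P = {p_val:.4f}")

# Log-rank test on survival times (days); event = 1 means the mouse died before day 60
days_cpt = np.array([38, 42, 45, 40, 51])
days_cpt_tmc = np.array([55, 60, 60, 58, 60])
died_cpt = np.array([1, 1, 1, 1, 1])
died_cpt_tmc = np.array([1, 0, 0, 1, 0])
result = logrank_test(days_cpt, days_cpt_tmc,
                      event_observed_A=died_cpt, event_observed_B=died_cpt_tmc)
print(f"log-rank: P = {result.p_value:.4f}")
```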
Results
Inhibition of cell viability. According to the results of the MTT assay, exposure to CPT-TMC or CPT for 48 h significantly inhibited the viability of MPC-11 cells, and both agents did so in a concentration-dependent manner. No statistical difference was observed between the CPT-TMC and the CPT group (P>0.05, Fig. 2).
In vivo antitumor activity. MPC-11-bearing Balb/c mice were treated with CPT-TMC (2.5 mg/kg), CPT (2.5 mg/kg), TMC (25 mg/kg) or NS, respectively. No difference was observed between the TMC group and the NS group in terms of tumor growth (P>0.05), indicating that TMC itself had no antitumor activity. In contrast, CPT and CPT-TMC showed antitumor efficacy in inhibiting tumor progression. The inhibition rate of tumor volume in the CPT-TMC group was 83% compared with the NS group (P<0.05). Interestingly, as shown in Fig. 3A, the antitumor efficacy of CPT-TMC was greater than that of CPT by 55% (P<0.05). However, no complete response was observed in any group.
To further investigate the antitumor effects of CPT-TMC in vivo, we assessed the lifespan of the mice. Our results show that the mice treated with NS survived 31 days on average, and there was no significant difference between the TMC and NS groups (P>0.05). In contrast, systemic therapy with CPT-TMC significantly prolonged the survival time compared with the NS group (P<0.05) and the CPT group (P<0.05). When the study was terminated at 60 days after inoculation, more than half of the animals in the CPT-TMC group had survived (Fig. 3B). These data indicate that the animals significantly benefited from CPT-TMC treatment.
Induction of cell apoptosis in vitro and in vivo.
The apoptosis-inducing effect of CPT-TMC in vitro was measured by flow cytometric analysis, DNA laddering, morphological analysis and the TUNEL assay. By flow cytometry, we could assess the proportion of sub-G1 cells, which indirectly estimates the number of apoptotic cells. The flow cytometry results strongly show that CPT-TMC treatment led to MPC-11 cell death by inducing apoptosis in a dose-dependent manner. After exposure to CPT-TMC for 48 h, apoptosis could be observed at 12.5 ng/ml. An increased number of apoptotic cells was detected, and the apoptosis rate reached 72.9% at 50 ng/ml (Fig. 4).
To further confirm the apoptosis-inducing effect of CPT-TMC, the pattern of DNA cleavage was analyzed. Cells treated with CPT-TMC demonstrated a ladder-like pattern of DNA fragments consisting of multiples of approximately 180-200 base pairs, consistent with internucleosomal DNA fragmentation (Fig. 5).
Treatment with CPT-TMC also resulted in morphological changes consistent with apoptosis. Compared with the control, cells treated with CPT-TMC for 48 h were vacuolated, had shrunk and gradually showed increased membrane blebbing as the CPT-TMC concentration increased. After PI staining, the morphological changes were also characteristic of apoptosis: brightly red, condensed nuclei (intact or fragmented) were observed by fluorescence microscopy (Fig. 6).
To investigate the apoptosis-inducing effect of CPT-TMC in vivo, tumor tissues were subjected to terminal deoxynucleotidyl transferase-mediated dUTP nick-end labeling (TUNEL) assays to determine the apoptotic index. The results showed almost no TUNEL-positive nuclei in the NS and TMC groups. Both the CPT group and the CPT-TMC group had a higher apoptosis rate of tumor cells compared with the NS control (P<0.05), and the apoptosis rate in the CPT-TMC group was much higher than that in the CPT group (P<0.05) (Fig. 7).
Discussion
Multiple myeloma (MM) is a chronic hematological disease affecting terminally differentiated B cells, for which there is currently no cure. In recent decades, there has been a noticeable improvement in the treatment of MM due to the introduction of new therapeutic strategies and new agents, such as ASCT (autologous stem cell transplantation), thalidomide, bortezomib, and lenalidomide (40). In fact, however, the median survival is 4.4-7.1 years despite all available therapies for relapse and drug resistance (41,42). For this reason, the search for new and more effective agents is necessary.
Camptothecin (CPT), a plant alkaloid, is a potent antitumor agent which acts by inhibiting the nuclear enzyme topoisomerase I and inhibits the growth of a wide range of tumors. Some analogues of CPT, such as topotecan, have been utilized as chemotherapy agents in MM (43)(44)(45). However, there is limited information on CPT itself in MM, because the major drawbacks of the drug, water insolubility and lactone instability, hamper its medical use. To overcome these drawbacks, delivery systems such as liposomes and polymeric micelles have been developed to increase the solubility and stability of the drug (17,46). However, all these approaches are far from satisfactory due to poor biocompatibility, biodegradability, or bioadhesivity. Therefore, we studied the effect of CPT on murine multiple myeloma MPC-11 cells in vitro and in vivo and attempted to increase its solubility and stability.
Chitin, a linear heteropolymer of randomly distributed N-acetylglucosamine and glucosamine residues, is one of the most abundant polysaccharides in nature and is mostly derived from the exoskeleton of crustaceans (47). Chitosan, a cationic polymer obtained by deacetylation of chitin, is widely studied for its pharmaceutical and non-pharmaceutical applications; for example, it has been extensively evaluated for its mucoadhesive and absorption-enhancement properties. However, chitosan is not soluble in media above pH 5.6, and this property limits its use as a drug delivery system. Therefore, there is a need for chitosan derivatives with increased solubility at physiological pH values. N,N,N-Trimethyl chitosan (TMC) is a promising derivative. TMC is soluble in acidic, basic or neutral media (pH range 1-9, up to 10% w/v concentration) and has mucoadhesive and permeation-enhancement properties like native chitosan (48,49). It has been reported that TMC can enhance the transport of small compounds, large molecules, peptide drugs and DNA and has shown promising results as a drug as well as a DNA delivery agent (50). Hence, TMC was selected as a carrier to deliver CPT, which is insoluble in water.
In the present study, we investigated the antitumor effect of CPT on the murine myeloma cell line MPC-11 and demonstrated that CPT inhibited the growth of MPC-11 cells in vitro and in vivo. To further improve the antitumor activity, we chose TMC as a carrier to encapsulate and deliver CPT (CPT-TMC). The results of the MTT assay showed that both CPT and CPT-TMC significantly inhibited the growth of MPC-11 cells in vitro, with no significant difference between their effects. TMC itself was not cytotoxic, and TMC did not improve the antitumor activity of CPT in vitro. Interestingly, however, the results of the in vivo assays differed from those in vitro: compared with CPT, CPT-TMC more efficiently suppressed tumor growth in murine models. Moreover, the TUNEL assay showed a significant increase of the apoptotic index in the CPT-TMC group, and the survival time of animals treated with CPT-TMC was significantly prolonged compared with the NS, TMC and CPT groups. These results suggest that the TMC delivery system efficiently improved the antitumor activity in vivo, but not in vitro. The mechanism of the antitumor effects of CPT-TMC may be a prolonged blood circulation time or the accumulation of CPT in tumor tissue. In light of the encouraging results presented herein, delineation of the potential chemotherapeutic effects of CPT-TMC and its precise mechanism of action warrants further investigation.
In conclusion, we demonstrated that CPT has powerful antitumor activity through inducing apoptosis in a murine multiple myeloma model, and that the TMC delivery system can efficiently improve the effects of CPT. Our findings may therefore provide a strategy to permit the utilization of CPT in clinical practice, and CPT-TMC may be a new effective agent to better combat multiple myeloma.
Survey on Immunization Services for Children with Medical Conditions — China, 2022
What is already known about this topic? Children with medical conditions frequently experience under-immunization. Ensuring high-quality immunization services is crucial for enhancing vaccination coverage levels; nevertheless, the state of immunization service provision for children with medical conditions in China remains unclear. What is added by this report? Immunization support for children with medical conditions in China demonstrates considerable variability and may be inadequate. Primary obstacles to the provision of immunization services include an absence of comprehensive vaccination recommendations and assessment guidelines for specific medical conditions, as well as inconsistencies among vaccine recommendations, package inserts, and expert consensus statements pertaining to the vaccination of children with medical conditions. What are the implications for public health practice? The examination of provincial practices in providing immunization services for children with medical conditions, as well as understanding the barriers faced by National Immunization Program providers in administering vaccinations, can contribute to the improvement of immunization services for this population in China.
Children with medical conditions are defined as those possessing specific physiological or disease states that may increase their risk of infection or exacerbate the severity of vaccine-preventable diseases. Such conditions can also impact the safety and effectiveness of vaccinations, often necessitating evaluations prior to immunization (1). A strategic objective of Immunization Agenda 2030 (IA2030) is to extend immunization services to "zero dose" and underimmunized children, ensuring that all children receive full benefits from vaccines (2).
In China, vaccination rates for the National Immunization Program (NIP) vaccines have reached 99% among age-eligible children (3). However, delayed and missed vaccinations are common among children with medical conditions for various reasons. These factors include a lack of awareness of vaccine-preventable diseases, uncertainty surrounding vaccination safety, restrictions stated in vaccine package inserts, difficulties in assessing medical condition severity, misperceptions regarding contraindications and precautions, and barriers related to operations and systems (1).
Immunization services are critical for maintaining and enhancing high vaccination coverage levels. Nevertheless, the current state of immunization service support for children with medical conditions in China remains unclear. In order to address this knowledge gap and lay the groundwork for potential improvements in immunization services, we conducted a study examining immunization service patterns for children with medical conditions across 31 provincial-level administrative divisions (PLADs) in China.
Between August 3 and 16, 2022, we conducted a cross-sectional, questionnaire-based survey targeting provincial-level CDC immunization program departments responsible for implementing immunization services for children with medical conditions in China. Questionnaires were electronically disseminated and collected, including questions about relevant immunization policies, development of recommendations or expert consensus statements, utilization of vaccination evaluation clinics, the structure and processes of such clinics, and the availability of relevant training programs. Additionally, we inquired about perceived barriers to and urgent demands for the provision of immunization services. The question addressing urgent demands featured eight items ranked from zero to seven, with higher scores indicating higher demand. Definitions and meanings of each question were clarified through online face-to-face interviews. All 31 PLADs completed the survey.

Table 1 shows immunization service support for children with medical conditions by region. In general, supporting services were more numerous in eastern PLADs than in central and western PLADs, especially in the use of expert consensus statements and of vaccination evaluation clinics for certain medical conditions. There were 74 vaccination evaluation clinics in pediatric hospitals nationwide, distributed across 16 PLADs.

* Local vaccination recommendations represent the official guidelines, which NIP providers must adhere to when vaccinating children with medical conditions.
† Local expert consensus statements, developed by expert teams, are not considered official standards. However, they serve as a foundation for NIP providers to enhance their scientific understanding of vaccination necessity, as well as to investigate the safety and efficacy of vaccinations in children with medical conditions.
§ Specialized training programs aim to enhance the vaccination of children with medical conditions by offering education on fundamental knowledge, professional skills, and relevant case studies.
¶ Vaccination evaluation clinics have been established to provide counseling and assessment for children with medical conditions, addressing safety concerns and the necessity of vaccination.
** The comparison of indicator rates between regions was conducted using Fisher's exact test, as the small sample size of one cell (<5) necessitated this statistical approach.
"−" means data not available.

Table 2 shows barriers to immunization service provision as perceived by PLADs. More than half of the PLADs identified the following barriers: lack of comprehensive vaccination recommendations for specific medical conditions (74.2%); absence of standardized procedures to assess the appropriateness of vaccination in certain medical conditions (74.2%); inconsistencies between official recommendations, vaccine package inserts, and published expert consensus statements (61.3%); and limited authority of expert consensus statements (54.8%).

Table 3 shows scores and rankings of urgent demands for immunization services for children with medical conditions. All three regions indicated that the top priority is to develop detailed official vaccination recommendations for children with medical conditions.
DISCUSSION
This study revealed that the provision of immunization services varies throughout China and might not be adequate to guarantee that children with medical conditions receive the recommended vaccinations. Incomplete vaccination can increase the vulnerability of these children to vaccine-preventable diseases. While significant efforts are needed at a national level, some PLADs have already addressed this issue and have explored suitable immunization service models for children with medical conditions (4-5). To facilitate effective service provision, it is crucial to identify the barriers and unmet needs, as well as to implement measures that enhance vaccination coverage for children with medical conditions. Immunization service support for children with medical conditions varies across regions, revealing inconsistencies in the development and implementation of local recommendations, expert consensus statements, training programs, and vaccination evaluation clinics. Progress has been observed in eastern PLADs, and this is expected to enhance protection for children with medical conditions against vaccine-preventable diseases in the long run (6). Nonetheless, the limited service support in central and western PLADs may indicate a lack of sufficient child health resources. Taking into account the practices and experiences from leading PLADs could potentially improve immunization service capacity in other regions. This study discovered that the majority of provincial-level CDCs placed a higher importance on enhancing official vaccination recommendations for immunizing children with medical conditions. This preference outweighed the development of expert consensus statements, which would cover a broader range of conditions and operational details. The inclination towards official recommendations is primarily due to the fact that expert consensus statements are not recognized as official documents under the Vaccine Administration Law of the People's Republic of China. Consequently, these statements may vary depending on the expert teams involved, leading to a lack of confidence among healthcare workers (7).
Vaccination recommendations for prevalent childhood medical conditions, including prematurity, low birth weight, allergic predisposition, immune system dysfunction, congenital diseases, and congenital infections, have been outlined in the 2021 version of China's national immunization schedule (8). As more data emerge from pertinent vaccine effectiveness and safety studies, national recommendations ought to encompass additional medical conditions, and training on these updated recommendations should be provided to immunization service providers. Inconsistencies between vaccine package inserts and official recommendations have the potential to cause confusion among healthcare workers and concern parents. These inconsistencies were identified by provincial CDCs in our survey as significant barriers to vaccinating children with medical conditions. Similar discrepancies have been observed in other countries (9). Factors contributing to these discrepancies, which are important considerations for provincial CDCs, include varying disease burdens in children with medical conditions, differing risk and benefit estimates, vaccine characteristics, and parental and public acceptance (9). It is essential that all stakeholders collaborate to develop a well-coordinated immunization policy for potentially off-label vaccine recommendations.
This study discovered that 50% of PLADs have implemented vaccination evaluation clinics within pediatric hospitals to assess the appropriateness of vaccinations for children who have medical conditions that are challenging for NIP providers in community healthcare centers to evaluate. In other nations, children with medical conditions are typically assessed and vaccinated by NIP providers rather than specialist physicians, resulting in vaccination rates similar to those among healthy children (10). Variations in methodology may be related to a hesitancy among Chinese NIP providers to administer vaccinations to children with medical conditions (11), which may be influenced by unfamiliarity with certain medical conditions, insufficient knowledge regarding vaccine safety and efficacy in specific medical cases, and an absence of official, detailed guidelines for particular conditions (12). Although vaccination evaluation clinics positively impact the intentions of concerned parents to vaccinate their children (5), it is crucial to comprehend and overcome the barriers preventing NIP providers from recommending and administering vaccinations to children with medical conditions.
To address ongoing and emerging challenges in enhancing immunization services for children with medical conditions, several steps should be taken. These include monitoring vaccination coverage among children with medical conditions, expanding current national vaccination recommendations, developing evaluation procedure guidelines, addressing barriers experienced by immunization service providers, creating targeted educational tools for the public, and advocating for changes in immunization policies to safeguard providers.
The current study presents several limitations. While the survey encompassed 31 PLADs in China, it may not adequately represent the perceptions of CDCs at the prefecture and county levels, nor the NIP providers and management operating within vaccination clinics. Additionally, the study did not incorporate the viewpoints of parents concerned about vaccinations. Future research should endeavor to assess the opinions and perspectives of these vital stakeholders in this field.
In conclusion, this study identified challenges related to the provision of immunization services for children with medical conditions. Furthermore, it highlighted potential opportunities to address these challenges, ultimately aiming to enhance vaccination coverage and protect children with medical conditions from vaccinepreventable diseases.
Conflicts of interest: No conflicts of interest. Acknowledgments: The authors are grateful to the participants from provincial-level CDC immunization program departments. Thanks very much to Dr. Lance Rodewald for polishing language.
Comprehensive Genetic Analysis of Druze Provides Insights into Carrier Screening
Background: Druze individuals, like many genetically homogeneous and isolated populations, harbor recurring pathogenic variants (PV) in autosomal recessive (AR) disorders. Methods: Variant calling of whole-genome sequencing (WGS) of 40 Druze from the Human Genome Diversity Project (HGDP) was performed (HGDP-cohort). Additionally, we performed whole exome sequencing (WES) of 118 Druze individuals: 38 trios and 2 couples, representing geographically distinct clans (WES-cohort). Rates of validated PV were compared with rates in worldwide and Middle Eastern populations, from the gnomAD and dbSNP datasets. Results: Overall, 34 PVs were identified: 30 PVs in genes underlying AR disorders, 3 additional PVs were associated with autosomal dominant (AD) disorders, and 1 PV with X-linked-dominant inherited disorder in the WES cohort. Conclusions: The newly identified PVs associated with AR conditions should be considered for incorporation into prenatal-screening options offered to Druze individuals after an extension and validation of the results in a larger study.
Introduction
Druze individuals constitute a Middle Eastern minority population. Traditionally, the Druze religion is believed to have formed as an Islamic reform movement under the rule of the sixth caliph of the Fatimid Dynasty of Egypt, El-Hakim (AD 966-1020) [1]. In Israel, there are ~150,000 Druze (of an estimated ~1,000,000 worldwide), overwhelmingly residing in the northern part of the country [2]. For centuries, Druze have strictly prohibited marriage to non-Druze and limited conversion into the religion. These practices, combined with a high rate (47%) of consanguineous marriages [3] and residence in isolated, mountainous regions, have made the Druze a unique population for genetic research.
Given the founder population attributes of the Druze, drifted variants resulting in a high prevalence of monogenic disorders are expected. Indeed, previously reported recurring pathogenic variants (PVs) amongst Druze include two PVs in the ATM gene (the gene that underlies Ataxia Telangiectasia, OMIM #208900) in Druze communities in Jordan, Lebanon, and Syria [4]; a PV in the β globin gene [5]; and a nonsense variant in the LDL receptor (LDLR) gene, causing familial hypercholesterolemia [6]. In the most comprehensive account of prevalent germline PVs causing autosomal recessive (AR) disorders in the non-Jewish Israeli population, of 103 PVs in 81 genes, 32 PVs were founder mutations in Druze individuals [7].
Behar et al. [8] demonstrated close genetic relations between Druze and other Middle Eastern populations, such as Bedouins, Palestinians, Syrians, Lebanese, and Jews. A previous study published by some of us [9] confirmed the Middle Eastern origins of the Druze, as well as suggested a ≈ 15-fold reduction in population size taking place ≈ 22-47 generations ago.
In the current study, we performed whole exome sequencing (WES) in 118 samples collected from Druze trios SNP-genotyped in our previous study [9] to further define the genetic makeup of Druze individuals and characterize novel, clinically relevant coding variants in this population. We also analyzed HGDP-available Druze whole-genome sequence (WGS) data from 40 distinct Druze samples [10] (Figure 1).
Figure 1.
Methodology flow diagram: HGDP-derived data was filtered based on Druze ethnicity to create a Druze cohort of 40 individuals. Additionally, exome sequencing was performed on 118 Druze individuals from different clans in Israel, creating the WES cohort. Simultaneously, all the variants from ClinVar were filtered based on interpretation labeled as "pathogenic" or "likely pathogenic". Then, the Druze-cohort variants and the WES-cohort variants were cross referenced with the catalogue of the pathogenic variants from ClinVar creating the Druze pathogenic-variants list. Only variants that were classified as "pathogenic" or "likely pathogenic" according to the ACMG-AMP guidelines were included in the list. We compared the allele frequency of each variant in our cohort and the allele frequency of the variants in worldwide populations based on the data from gnomAD and dbSNP. Using Fisher's test, we identified the variants that were significantly different in Druze. After a literature review, we narrowed down the list to obtain a curated set of pathogenic variants that are enriched in the Druze population in comparison to other populations.
Recruitment of Druze Participants for WES
Druze trios - The study population consisted of individuals who were recruited and participated in our previously described study [9]. Briefly, in the original study, 40 trios of Druze origin (n = 120) representing the different clans (Hamullas) were recruited. These healthy participants were recruited from the Druze communities in Beit Jan, located in the Northern Galilee in Israel (20 trios), and in the Golan Heights (20 trios), primarily the village of Majdal Shams. Clan ancestral roots were based on family names and represented the origins of major locales of Druze residing in the Middle East. Only 118/120 individuals recruited in the original study were included herein, based on DNA quality and availability.
HGDP cohort - The HGDP contains 929 DNA samples and WGS data from ethnically diverse individuals, including 40 Druze samples [10]. HGDP DNA samples were Illumina genome sequenced to an average coverage of 35× (minimum 25×) and reads were mapped to the GRCh38 reference assembly as reported [10]. HGDP Druze study individuals resided in Druze villages in the Carmel and Galilee regions of Israel and not in the Golan Heights.
Whole exome sequencing - WES was carried out at the Regeneron Genetics Center following previously published protocols [11]. In brief, genomic DNA was sheared and used to prepare 75 bp paired-end libraries for exome sequencing. Samples were captured using the IDT XGen exome capture reagent and sequenced on an Illumina NovaSeq instrument. Captured fragments were sequenced to achieve a minimum of 85% of the target bases covered at 20× or greater. Following sequencing, data were processed using a DNAnexus-implemented cloud-based pipeline that runs standard tools for sample-level data production and analysis. Sequence reads were mapped and aligned to the GRCh38/hg38 human genome reference assembly using BWA-mem, and SNP and InDel variants and genotypes were called using GATK's HaplotypeCaller in accordance with the best practices for germline short-variant discovery. Samtools 1.12 was used for coverage and depth calculations.
Variant filtering - In this study we focused on variants that were labeled as either "Pathogenic" or "Likely pathogenic" (PV) according to ClinVar (https://www.ncbi.nlm.nih.gov/clinvar/ (accessed on 23 January 2023)). Additionally, the actual pathogenicity of each PV was classified according to the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG-AMP) guidelines [12]. Since the focus of this research is on disease-associated variants previously unreported in the Druze population, variants previously reported to be present in the Druze population and appearing in previous relevant studies or in the Israeli Medical Genetic Database (http://INGD.huji.ac.il) are listed separately in Table S1 (WES analysis) and Table S2 (HGDP analysis).
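The ClinVar-based filtering step described above can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration (the column name clinvar_significance and the file names are assumptions, not the authors' actual pipeline) of restricting a variant table to ClinVar "Pathogenic" or "Likely pathogenic" entries before ACMG-AMP review.

```python
import csv

# Minimal sketch: keep only variants whose ClinVar interpretation is
# "Pathogenic" or "Likely pathogenic". Column and file names are illustrative only.
KEEP_LABELS = {"pathogenic", "likely pathogenic"}

def filter_clinvar_plp(variants_tsv: str, output_tsv: str) -> None:
    with open(variants_tsv, newline="") as fin, open(output_tsv, "w", newline="") as fout:
        reader = csv.DictReader(fin, delimiter="\t")
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames, delimiter="\t")
        writer.writeheader()
        for row in reader:
            label = row.get("clinvar_significance", "").strip().lower()
            if label in KEEP_LABELS:
                writer.writerow(row)  # candidate PV, still pending ACMG-AMP review

if __name__ == "__main__":
    # Hypothetical file names used purely for illustration.
    filter_clinvar_plp("druze_wes_variants.tsv", "druze_candidate_pvs.tsv")
```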
Sources of comparison populations and datasets-For each PV, general population allele count (AC) and general population allele number (AN) were retrieved from gnomAD (https://gnomad.broadinstitute.org/ (accessed on 1 November 2022)), as indicated by the total row in the Population Frequencies table. If AC and AN were missing, those values were extracted from dbSNP (https://www.ncbi.nlm.nih.gov/snp/ (accessed on 1 November 2022)), as indicated by the total column in the ALFA allele-frequency table. Additionally, suitable AC and AN of the Middle Eastern population were extracted from gnomAD, as indicated by the Middle East row in the population frequencies table. Allele frequency (AF) was calculated by dividing AC by AN.
Statistical analyses - A two-sided Fisher's exact test was performed to compare the difference between the AF of the WES cohort and the AF of the general population for each SNP, and between the AF of the HGDP cohort and the AF of the general population for each SNP. A p value of 0.05 was set as the cutoff for statistical significance.
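As a worked illustration of this comparison, the following sketch builds the 2x2 table of allele counts for a single variant and applies a two-sided Fisher's exact test, with AF computed as AC divided by AN. The numbers in the usage example are placeholders, not data from the study.

```python
from scipy.stats import fisher_exact

def compare_allele_frequency(cohort_ac: int, cohort_an: int,
                             ref_ac: int, ref_an: int):
    """Two-sided Fisher's exact test on a 2x2 table of allele counts.

    cohort_ac / cohort_an : alternate allele count / total alleles in the study cohort
    ref_ac / ref_an       : the same quantities in the reference population (e.g. gnomAD)
    """
    table = [
        [cohort_ac, cohort_an - cohort_ac],  # cohort: alternate vs. other alleles
        [ref_ac, ref_an - ref_ac],           # reference: alternate vs. other alleles
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    cohort_af = cohort_ac / cohort_an        # AF = AC / AN
    ref_af = ref_ac / ref_an
    return cohort_af, ref_af, odds_ratio, p_value

# Placeholder counts for illustration only (not study data):
# 4 alternate alleles out of 236 cohort alleles vs. 50 out of 250,000 reference alleles.
print(compare_allele_frequency(4, 236, 50, 250_000))
```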
One PV (rs777172978) was significantly enriched in both the WES and the HGDP analysis.
Discussion
In the current study, 34 PVs in genes associated with AR and AD disorders not previously described in Druze individuals were identified. The most updated list of genes and PVs prevalent in the Druze population in Israel encompasses 79 AR diseases, 81 genes, and 103 variants [7]. The findings of PVs in the isolated populations reported herein are in line with previous reports [12,13]. Specifically, Khayat et al. [13] reported 48 PVs in the AR genes (24 novel PVs) in an isolated community of Muslim Arabs in Israel (n = 50) based on the results of WES in that population [14]. The Israeli population genetic carrier screening program is included in the health basket and hence is covered by the health maintenance organizations (HMOs) [15]. The data presented herein suggest that the expansion of the list of testable AR disease genes genotyped in the context of the Israeli population genetic-carrier screening program should be considered. Such a list should be based on more comprehensive data collected from all ethnicities with a specific emphasis on genotyping adequate numbers of individuals from isolated populations to address their unique needs. Notably, rates of carrier screening use among Druze and other non-Jewish ethnic groups in Israel are substantially lower compared to rates in Jewish Israeli counterparts [16]. Given the cost effectiveness of prenatal screens in guiding prenatal diagnostic procedures, awareness of the availability of effective testing should be increased in the Druze population.
PVs in two genes that are associated with AD chronic pancreatitis-PRSS1 (OMIM #276000; PV-rs111033565) and CTRC (OMIM #601405; PV-rs202058123) were detected. The incidence of chronic pancreatitis ranges from 4 to 14 per 100,000 per year, and the prevalence from 13 to 52 per 100,000 population [17]. There are no reported studies suggesting that Druze individuals are at an increased risk for developing chronic pancreatitis compared with other ethnically diverse populations. Since clinical manifestations may be subtle, the implication of this finding needs to be investigated in a larger population of Druze cases, perhaps those referred for a clinical workup of undefined abdominal pain or nonspecific symptoms that may herald chronic pancreatitis. Other possibilities to account for these genetic findings, as well as for other seemingly prevalent PVs in AD disorders reported herein, should also be entertained: incomplete penetrance, or even misclassification of pathogenicity by ClinVar.
Notably, the high rate of the PRRT2 gene PV in the current study (3%), as well as that of the COL6A2 gene PV (3%), would be expected to be associated with high rates of Episodic Kinesigenic Dyskinesia, Type 1 and Ullrich congenital muscular dystrophy, Type 1 amongst Druze individuals, respectively. Underreporting, incomplete penetrance, or variable expressivity of these disorders in Druze individuals, as indeed is the case in other populations for Episodic Kinesigenic Dyskinesia, Type 1 [18], may account for the lack of reported overrepresentation of these clinically relevant diseases.
In this study, we identified two PVs in two ACMG actionable genes [19] MUTYH (OMIM #604933; PV-rs587778541) and MEFV (OMIM #608107; PV-rs28940580). Homozygous MUTYH PVs are associated with colorectal cancer and adenomatous polyposis while homozygous PVs in MEFV cause Familial Mediterranean Fever (FMF), a relatively prevalent disease in people who live around the Mediterranean region, including the Druze population [20,21]. Notably, homozygous PVs in both genes are associated with a clinically significant disease, whereas heterozygous PVs, as is the case here, are not.
The p.I1307K APC (OMIM #611731) increased risk allele was detected in two Druze, cancer-free individuals in the current study (AF = 0.03, AC = 2). This variant is very prevalent in Ashkenazi Jews (AJ),~6% [22] of the general average risk population with rates of up to 20% in AJ colorectal cancer (CRC) cases with a family history of CRC [23]. Since its original description in AJ, this variant has been reported in ethnically diverse populations of Jewish non-Ashkenazim [24] and Muslim Arabs residing in Israel [25]. Detecting this variant in Druze individuals, given the unique and almost exclusive intrafaith marriage patterns, may suggest that this variant may have arisen in the Middle East prior to the separation of the Druze from the Muslims. The clinical implication of harboring the p.I1307K APC variant and the associated cancer risk is still unsettled. In most studies, this variant marginally increases the risk for developing CRC with a pooled odds ratio in one meta-analysis of 2.17 (95% confidence interval: 1.64, 2.86) [26] with the median age not younger in variant carriers compared to the general population [27]. The risk for developing CRC in Israel is significantly lower for non-Jewish individuals compared with ethnically diverse Jews (https://www.health.gov.il/UnitsOffice/HD/ICD C/ICR/CancerIncidence/Pages/default.aspx (accessed on 1 November 2022)). Yet the carrier rate in non-AJ of the p.I1307K APC variant is estimated to be 1.6% [27], similar to what has been observed in the current study. Taken together, these facts may be indirect evidence for a minimal role of this specific APC variant in conferring CRC risk during population screens.
Behçet disease (BD) is a multisystem inflammatory disorder pathologically hallmarked by vasculitis affecting the small and large veins and arteries [28]. Ethnic groups living along the historical silk road are at an increased risk of developing BD [29]. Specifically, in Israel, the rate of BD amongst Druze is reportedly among the highest of all ethnic groups with rates of up to 150/100,000 [8]. Like most adult-onset diseases, genetic factors play a role in BD predisposition. Notably, human leukocyte antigen (HLA)-B51 has been reported as the strongest genetically-associated factor for BD. Other HLA alleles, as well as other loci containing genes involved in host defense, immunity, and inflammation pathways (detected predominantly via GWAS), have been shown to contribute to BD susceptibility [30]. Of these additional BD-associated genes, the interleukin pathway family of genes, including IL10, IL23R-IL12RB2, IL12A, and IL23R, have been reported [30]. Specifically, the possible contribution of the IL18R1 gene to BD has not been thoroughly investigated. IL18R1 encodes for the α chain, a subunit of the IL18 receptor [31]. IL18, the IL18 receptor ligand, is a member of the IL1 family of cytokines [31], proteins that play a key role in BD ocular or mucocutaneous manifestations and was found to be elevated in the synovial fluid of BD patients [32,33]. Tan and coworkers [34] reported that three SNPs in the genomic region encompassing the IL18R1 gene were associated with ocular manifestations of BD in the Han Chinese population. In the current study, these three SNPs were in perfect linkage disequilibrium creating a 10 Kb haplotype enriched in the Druze population. Yet, the high rate of these SNPs in the general population, the lack of any bona fide PVs in the WES cohort, and the paucity of supporting data in other populations may indicate that the contribution of PVs in the IL18R1 gene to the burden of BD may be minimal at best.
The limitations of the current study should be acknowledged. This study generated data on a limited number of Druze families residing in Israel, where only a small subset of the worldwide Druze population resides, and it may not reflect the entire populational spectrum of this ethnic community. The lack of precise clinical knowledge of the genotyped individuals, and the reliance on self-reported health status at a single time point, add further limitations. Given the current study design, the penetrance of the autosomal dominant alleles reported herein cannot be assessed, thus limiting the ability to provide more insightful and evidence-based genetic counseling. Additionally, decisions on which AR gene PVs (or a subset of them) should be incorporated into Druze prenatal screening should await a validation study encompassing more Druze cases.
Conclusions
Novel PVs in genes associated with severe AR disorders prevalent in Druze individuals should be considered for inclusion in the next version of the national prenatal screening program in Israel for the relevant population, after validation in a larger study.
Regional anaesthesia-induced peripheral nerve injury
Regional anaesthesia techniques have gained great popularity in recent years, as they provide excellent anaesthesia and analgesia for many surgical procedures. Many courses, workshops, multimedia materials and a wide access to high-end ultrasound devices have resulted in Polish anaesthesiologists eagerly performing various blockades. However, there is also a dark side to regional anaesthesia which should not be forgotten — complications. Although nerve injuries are considered to be multifactorial in nature and the vast majority of them are not due to regional anaesthesia, anaesthesiologists and anaesthetised patients must be aware of the risk involved. Due to the potentially devastating sequelae of regional blocks, updating one’s knowledge of this topic is very much necessary. The aim of this review is to summarise current knowledge concerning regional anaesthesia-induced peripheral nerve injury. Anaesthesiology Intensive Therapy 2018, vol. 50, no 5, 367–377
EPIDEMIOLOGY OF PERIPHERAL NERVE BLOCK COMPLICATIONS
The data in the literature regarding the incidence of regional anaesthesia-induced neurologic complications differ markedly. These discrepancies result from the way the complications are defined, the duration of observation (different after 1 week and markedly different after 12 months), the type of surgery and block, or difficulties in determining the cause of nerve injury (anaesthetic, surgical, patient-related, etc.). Early transient neurological complications are relatively common during the first days and weeks following anaesthesia. According to the meta-analysis published in 2007, the incidences of transient neurological deficits following interscalene brachial plexus block, axillary brachial plexus blocks and femoral nerve blocks are 2.84%, 1.48% and 0.34%, respectively, while the incidence of permanent neurological deficits amounts to only 0.04/1000 blocks [1]. More recently, Sites et al. [2] assessed the incidence of neurological complications following ultrasound-guided nerve blocks and noted the incidence of transient neuropathies being 1.8/1000, neuropathies persisting for at least 6 months at 0.9/1000, and those associated with interscalene blocks at 3.1/1000 blocks. Another analysis of the Italian registry of regional anaesthesia-induced complications in 2016, involving over 29,000 patients who had undergone peripheral nerve blocks, revealed that transient neurological complications were observed in only 3 patients. The presented incidence of transient perioperative nerve injuries (less than 1/10,000) is probably the lowest one reported in the literature. The authors, however, admit that the complications recorded in the registry regarded only the cases which were "evident during hospitalisation", while neuropathies whose symptoms may have occurred at home were not included [3].
Irrespective of inter-study differences, one element remains constant: although the initial incidence of neurological deficits is relatively high, it significantly decreases over time (to 2.2% in the first 3 months, 0.8% during the 6th month, and 0.2% after 12 months) [4]. The most commonly reported incidence of persistent (over a year) neurological injuries associated with regional anaesthesia is 2-4 per 10,000 blocks and is comparable irrespective of nerve location methods (stimulation or ultrasound) [4][5][6][7][8][9][10].
PERIPHERAL NERVE ANATOMY
A peripheral nerve is comprised of axons surrounded by Schwann cells which, together with delicate connective tissue elements of the endoneurium and capillaries, are bundled into circular or oval fascicles. The individual fascicles exchange nerve fibres, morphologically resembling plexuses rather than long isolated cables. The perineurium forms the external part of the fascicles: several to a dozen layers of tightly adhering fibroblasts and collagen fibres. The perineurial cells with tight junctions and non-fenestrated capillaries form the blood-nerve barrier providing a stable environment for axons. The outermost part of the nerve, rich in collagen fibres, is called the epineurium. This name pertains also to the connective tissue rich in adipose cells and a network of small blood vessels (the vasa nervorum) filling the inter-fascicle space. The nerve is surrounded by a loose connective tissue, the paraneurium, which stabilises the nerve's position [11].
Some nerves, e.g. the sciatic nerve, are surrounded by a connective tissue sheath; although relatively closely attached to the nerve, the sheath is a paraneural structure independent of the epineurium [12].
The connective tissue of nerves plays an important mechanical and protective role, and its content changes along the course of individual nerves. For instance, in the brachial plexus, the ratio of nervous to non-nervous tissue within the epineurium changes from 1:1 between the scalene muscles to 1:2 in the subclavicular region. Similar relationships are found in the sciatic nerve: 2:1 in the gluteal region and 1:1 in the popliteal region. The above has relevant clinical implications, as a block performed in the proximal segment theoretically creates a higher risk of neurological complications resulting from a higher concentration of the nervous tissue [13,14].
PERIPHERAL NERVE PATHOPHYSIOLOGY
In the 1940s, Seddon and Sunderland [15] classified nerve injuries; despite certain limitations, their classifications are still valid (Table 1).
Neurapraxia is the mildest form of nerve injury, in which the continuity of nerve fibres is intact and the conduction block results from axon oedema, disorganisation of neurofilaments and segmental demyelination. Remyelination and complete conduction recovery occur within 2-12 weeks.
Axonotmesis is defined as disruption of nerve fibres with preserved epineurium continuity. Separation of the nerve cell body from its peripheral part leads to complete degeneration of the distal axon segment (and partially of the proximal segment). Following injury, biochemical and morphological changes in the peripheral axon take place within several hours. This process is called Wallerian degeneration and lasts up to 3-6 weeks. The cytoskeleton and the axon cell membrane disintegrate and the myelin sheath is destroyed. The residual parts are eliminated by Schwann cells, as well as by macrophages and granulocytes migrating into the site of injury. The severity of degenerative lesions of the nerve depends on the location and extent of injury to the nerve fibres and the surrounding connective tissue structures. Injuries close to the nerve cell body can lead to the neuron's death and lack of regeneration. The earliest symptoms of regeneration, in the form of proliferation of Schwann cells, may be observed already within the first post-injury week. The Schwann cells form tubes (the bands of Büngner) that guide regenerating axons. Axonotmesis is associated with poorer prognosis, as compared with neurapraxia. If the injury involves up to 20-30% of motoneurons, the function may return within 2-6 months thanks to reinnervation of the denervated muscles.
Neurotmesis means a complete disruption of the nerve together with the external connective tissue elements (epineurium). In such cases, the return of nerve function is not possible without surgical intervention [15].
MECHANISMS OF PERIOPERATIVE NERVE INJURY
The mechanisms of perioperative nerve injury can be divided into 4 major groups: chemical, mechanical, vascular and inflammatory. These may be associated with anaesthetic and surgical factors, as well as the patient's predisposition (neuropathy). To illustrate the complexity of the issue, the following situation may be considered: during anaesthesia, the anaesthesiologist introduces the tip of the needle into the bundle, injuring the epi- and endoneurial blood vessels. If this goes undiagnosed and the local anaesthetic is administered under high pressure, three kinds of injuries can be observed, namely: mechanical (direct injury to the nerve and pressure insult caused by the administration of the solution and formation of an intra-nerve haematoma); chemical (exposure to high concentrations of local anaesthetics (LAs), direct contact with blood); and vascular (a haematoma can locally limit the blood supply). If all this happens to a patient with a pre-existing nerve injury, e.g. a diabetic patient (the patient-dependent factor), and the procedure is associated with an increased risk of nerve injury (the surgical factor), the probability of severe nerve injury is extremely high. Although the causes of nerve injuries will be further discussed separately, in practice they cannot be separated, as a nerve injury is often the result of many of them (Table 2).
CHEMICAL INJURY - TOXICITY OF THE AGENT ADMINISTERED
Almost 40 years ago it was found that the basic tools of regional anaesthesia, i.e. local anaesthetics, exert cytotoxic effects on cell cultures, inhibiting cell growth and survival, and that such effects intensify with prolonged exposure time and increasingly high LA concentrations [16,17]. In clinical practice, the place of deposition of LAs is essential for increased toxic effects. As mentioned earlier, the perineurium with the endothelium of subperineural capillaries functions as a blood-nerve barrier limiting the entry of various substances into the nerve bundle. The administration of LAs outside the perineurium only slightly affects the efficiency of the blood-nerve barrier, although it increases its permeability. In such cases, the fluid within the endoneurium changes from hypertonic to hypotonic due to the difference in osmolarity, which causes oedema and leads to an increase in intra-fascicular pressure [18]. Irrespective of oedema, high extrafascicular concentrations of LAs can damage axons [19]. Local administration of bupivacaine or lidocaine to the nerve reduces the blood flow in the nerve, which can contribute to its ischaemic injury (concentration-dependent) [20,21].
Although the vasoconstrictive effect is unequivocal, it does not seem to play a relevant clinical role in the majority of patients [22]. An exception may be patients with baseline disorders of blood supply to the nerves, e.g. smokers and diabetic patients. Even a small amount of LA injected into the bundle causes significantly more serious sequelae, such as demyelination and Wallerian degeneration of axons, and this effect is concentration-dependent [23,24]. LAs injure not only axons but also the Schwann cells, and this effect is also exposure time- and concentration-dependent [25].
To date, the cause of LA neurotoxicity at the cellular level has not been explicitly determined. According to in vivo studies, LAs uncouple oxidative phosphorylation in the mitochondria and activate neurone apoptosis via the activation of p38 mitogen-activated protein kinase and caspases. Moreover, neurotoxicity is modulated by the phosphoinositide 3-kinase (PI3K)/Akt pathway [26].
In order to improve the quality of blocks and lengthen them, various adjuvants are added to LAs, e.g. adrenaline, clonidine, buprenorphine, dexamethasone or dexmedetomidine. Adrenaline, added to LAs as an agent lengthening the duration of a block and a marker of intravascular administration, enhances the vasoconstrictive effect and prolongs the contact of nervous structures with LAs [21,27]. Moreover, adrenaline increases axon degeneration after the administration of bupivacaine into the bundle [28]. In in vitro studies, cytotoxicity of buprenorphine was observed during the 24-hour exposure of neurones to adjuvants (although lesser than that of ropivacaine alone). Clonidine was found less toxic, while dexamethasone was the least toxic agent. After the 2-hour exposure of neurones to the mixture of adjuvants with ropivacaine, clonidine increased the toxicity of ropivacaine while dexamethasone and buprenorphine did not [29]. Dexmedetomidine can attenuate the bupivacaine-induced inflammatory reaction around the nerve [30]. Likewise, dexamethasone, added as an adjuvant to an LA, can reduce the toxicity of bupivacaine by increasing the activity of the Akt pathway [31].
MECHANICAL INJURY
As the amount of connective tissue inside the nerve is large, its perforation with the needle usually does not disrupt the continuity of nerve fibres, as the needle tip may be inside the nerve, namely under the epineurium, outside the fascicle or between the fascicles. If the needle tip is inserted into the fascicle, the continuity of the perineurium and nerve fibres is directly disrupted. Both the thickness and type of the needle are of importance. The penetration of the nerve/fascicle with short-bevelled needles is much more difficult, as compared with pencil-pointed needles; whenever this happens, the injury is more extensive [32]. In animal models, the intraneural placement of the needle, even without damaging the fascicles or vessels, induced an inflammatory reaction with resultant demyelination and transient impairment of the nerve function [33]. It seems, however, that an isolated needle injury does not lead to serious sequelae unless accompanied by intrafascicular deposition of an LA, when a high neurotoxic concentration is achieved. High intrafascicular pressure leads to mechanical disruption of nervous structures and the occlusion of blood vessels [34].
From the anaesthesiologist's point of view, it is essential to note that some surgical procedures per se are associated with a significant percentage of neurological complications which can be attributed to regional anaesthesia. A representative example is brachial surgery, in which the majority of complications are caused by physical injury, e.g. by instruments or by excessive traction on the limb when the head and neck are bent to the side opposite the one being operated on, which stretches the brachial plexus. In such cases, the percentage of neurological complications, mainly short-lasting, is very high, and in arthroscopic procedures reaches even 10% [35]. The incidence of neurological complications following total hip arthroplasty is about 1%. These complications most commonly affect the common peroneal nerve, less commonly the femoral and sciatic nerves [35]. Transient injuries to the lateral femoral cutaneous nerve are extremely common (up to 88%) when the procedure is performed from the anterior access [36]. The percentage of neurological complications following total knee arthroplasty ranges from 0.3% to 9.5% [35]. The more severe the preoperative knee deformity (e.g. valgus), the higher the risk of common peroneal nerve injury. Commonly performed knee arthroscopies are associated with a high percentage (up to 25%) of transient dysaesthesia within the anterior knee. In cases of anterior cruciate ligament reconstruction, this percentage can reach even 75% [37,38].
VASCULAR INJURY - ISCHAEMIA
As mentioned earlier, LAs and adjuvants can directly constrict blood vessels. However, it seems that impaired blood supply to the nerve, caused by injury to the vasa nervorum or nerve compression (e.g. by a haematoma caused by anaesthetic or surgical intervention) leading to vasoconstriction, is more important clinically. The nerve structures within fascial compartments of low resilience are at the highest risk. For instance, under unfavourable conditions, injury to a vessel during brachial plexus block from the axillary access can lead to the development of medial brachial fascial compartment syndrome, which can cause severe neurological complications [39]. Another common cause of ischaemia of nervous structures is the use of tourniquets. Acute ischaemia causes depolarisation and generates spontaneous nerve discharges, which are felt by patients as paraesthesias. Prolonged ischaemia blocks slow-conducting fibres, or even all fibres [40]. If ischaemia lasts less than 2 hours, the nerve functions are restored within 6 hours. Reperfusion causes oedema and degenerative changes in axons, followed by a phase of regeneration which lasts several weeks. Ischaemia of up to 6 hours does not cause permanent structural lesions [41].
INFLAMMATORY INJURIES
Kaufman et al. [42] described a series of 14 cases of persistent diaphragmatic paralysis following interscalene brachial plexus blockade. The cause of phrenic nerve neuropathy detected during surgical revision was nerve entrapment in scar tissue resulting from chronic inflammatory lesions. Inflammatory lesions can also be caused by haematomas around the nerve or by ultrasound gel applied close to it [44,45]. The stress factor, i.e. surgical intervention, can trigger an inflammatory response involving the peripheral nerves (PSIN), as well as other structures. The symptoms of this inflammatory neuropathy can be uni- or multifocal and are accompanied by muscle weakness and muscle pain [46].
RISK FACTORS OF PERIOPERATIVE NERVE INJURY
Although perioperative peripheral nerve injuries can result from regional anaesthesia, recent studies have paradoxically failed to demonstrate that peripheral nerve blocks are an independent risk factor for such injuries [4]. In retrospective cohort studies conducted at the Mayo Clinic, the incidence of neurological complications following orthopaedic procedures was assessed. The incidence of peripheral nerve injuries associated with hip, knee and shoulder arthroplasty was found to be 0.72%, 0.79% and 2.2%, respectively. The risk factors for nerve injuries in hip arthroplasty included younger age and the duration of tourniquet use. Peripheral nerve blocks did not increase the total incidence of postoperative neuropathies [46,47]. Nevertheless, one should be aware that the orthopaedic literature tends to attribute a higher percentage of complications to nerve blocks than the anaesthesiological literature [48,49].
Symptomatic or subclinical nerve dysfunctions (neuropathies) present prior to anaesthesia are likely to increase the risk of perioperative deterioration of nerve function (the "double crush injury" hypothesis). Other risk factors include peripheral nerve diseases, vasculitis, tobacco smoking, and arterial hypertension [4].
Diabetic peripheral polyneuropathy is a common complication of diabetes mellitus and the most commonly diagnosed peripheral neuropathy. Neuraxial anaesthesia in diabetic patients is associated with a significantly higher risk of neurological deficits, as compared with the general population (0.4%). In this group of patients, the motor stimulation threshold can be markedly higher than in healthy patients, which can increase the risk of intraneural insertion of the needle [50]. Moreover, diabetic patients have been demonstrated to have higher success rates of blocks, as well as longer, metabolic compensation-dependent duration of blocks [52][53][54].
Among the other causes of neuropathy that should be considered are alcoholism (which can be associated with vitamin deficiencies), chemotherapy and congenital factors (Charcot-Marie-Tooth neuropathies, and hereditary neuropathy with liability to pressure palsies [HNPP], in particular) [55].
SAFETY OF NERVE LOCATION METHODS
Nerve stimulation
Relatively recently, needle-nerve contact causing paraesthesias was considered indispensable for providing the block. However, it appears that the lack of paraesthesias does not exclude intraneural location of the needle tip [56][57][58]. Nerve injuries can occur even when the injection is immediately discontinued once paraesthesias and pain have been reported by the patient [5]. On the other hand, some discomfort can be a natural and harmless symptom while performing the block, and there are no explicit data confirming that the induction of paraesthesias is an independent factor of postoperative neurological disorders. The symptoms reported by patients are insufficient to prevent nerve injuries.
A huge step forward in the techniques of nerve location was the introduction of nerve stimulators into everyday practice. A current intensity of 0.2 mA causing a motor response is most likely associated with intraneural location of the needle. Finding the minimal current intensity which locates the nerve with high sensitivity and specificity without its puncture is a much more important issue. In cases of supraclavicular blocks, the use of typical stimulation thresholds of 0.2-0.5 mA may be connected with a high percentage (54%) of intraneural needle tip locations; in popliteal blocks, even 94% of cases. The disappearance of motor responses at > 0.5 mA is also associated with intraneural location of the needle tip: in supraclavicular blocks in 10% of cases, and in even 90% (!) of cases in popliteal sciatic blocks [59][60][61][62][63]. The lack of motor response to markedly higher current intensities (1.5 mA) does not exclude intraneural location of the needle tip [61]. The lack of motor response to an intensity of 2.4 mA was observed in many diabetic patients, despite explicit, ultrasound-confirmed needle-nerve contact [50]. There is high individual variability in the threshold current intensity required to induce a motor response, and such extremely high intensities of the stimulation current, although more commonly observed in diabetic patients, particularly those with diabetic neuropathy, have also been found in healthy individuals [64].
Important data determining the dependence of stimulation current intensity on needle-nerve contact may be found in the study carried out by Vassiliou et al. [65]. In a swine model, the risk of needle-nerve contact at 0.5 mA was found to be 0.5, while at 0.9 and 1.1 mA it was 0.13 and 0.1, respectively.
Newer nerve stimulators have an additional option, namely enabling bioimpedance measurements for the flowing current. The nerve is composed of a higher number of lipid elements and a lower amount of water compared to the surrounding tissue; therefore, if the needle tip has penetrated the nerve, the stimulator shows a significant increase in resistance [66].
Ultrasonography
As the epineurium is composed of relatively compact connective tissue, after contact with the needle the nerve is initially moved away. Prior to puncture of the nerve by the needle, an "indentation" can be observed on the nerve surface [67]. After breaching the superficial epineurium, the needle tip is more likely to penetrate the looser connective tissue between the fascicles than the fascicles themselves.
Sonographic signs of intraneural administration are an increased transverse cross-sectional area with decreased echogenicity of the nerve. Another, later finding is the "halo" sign, namely a concentric, hypoechogenic area around the nerve visible proximally and distally in relation to the site of injection, visualising subepineural spread of anaesthetics [68,70]. In some cases, the "halo" sign reflects LA spread in the paraneural sheath. In a study regarding the sciatic nerve sheath, Andersen et al. [12] performed intraneural injections and observed an increase in the cross-sectional area of the nerve followed by gradual "permeation" of the injected fluid, which caused separation of the sheath from the epineurium. An increase of 9% in the cross-sectional area, together with decreased echogenicity, after the administration of 0.5 mL of volume enables highly sensitive detection of intraneural administration. Unfortunately, in practice even experts are not able to detect 1/6 of intraneural administrations, and less experienced individuals miss even 1/3 of such cases [68,70]. The difficulties in early detection of intraneural administration using ultrasound result both from equipment-associated limitations and from insufficient experience. The incidence of unintended intraneural administrations during ultrasound-guided brachial plexus blocks (interscalene and supraclavicular) performed by experienced physicians is up to 17% [71]. Similar incidences (16.3%) were found for sciatic nerve blocks from the subgluteal approach [72].
Many studies have revealed that intraneural injection of LAs during brachial plexus or sciatic nerve blocks does not lead to remote neurological sequelae, although it produces explicit ultrasound signs [59][60][61][68][69][70]. Based on the above studies, it may be concluded that intraneural injections into the external part of the epineurium but extrafascicularly, both intended and unintended, are relatively safe. Unfortunately, the studies mentioned were performed in small groups of patients (up to several dozen), which is undeniably insufficient to conclude that this management strategy is harmless. Additionally, although modern ultrasound devices generate images of increasingly high quality, in practice it is still impossible to differentiate extra- and intrafascicular locations of the needle tip. The linear resolution of a 10 MHz transducer is about 1 mm, while in the case of deeper blocks it is necessary to use lower frequencies, which translates into even lower resolution. Only in cases of superficially running nerves (e.g. in forearm blocks) and with the use of transducers of high resolution (e.g. 18 MHz) is the quality of nerve structure images good.
Given the current state of knowledge, it seems that it is better to move the needle tip slightly further away from the nervous structures, even at the expense of worse quality and shortened duration of blocks. Such a management option is suggested by studies comparing the efficacy of interscalene brachial plexus blocks depending on the site of LA deposition, namely intraplexus or periplexus. In one of the above studies, no differences in block onset time and block quality were demonstrated; however, a significant prolongation of the blockade was observed after intraplexus administration. Another study disclosed quicker blocks, albeit also higher incidences of transient paraesthesias, after intraplexus LA administration [73,74].
Although the use of ultrasound accelerates and facilitates the provision of blocks, reduces the risk of vessel puncture and enables the administration of lower amounts of LAs, it does not reduce the risk of neurological complications [2,68,75].
Monitoring of injection pressure
To differentiate intra- and extrafascicular administration, assessment of the pressure with which the LA is applied may be useful. Animal studies carried out several years ago suggested that a high injection pressure (> 25 psi) was likely to indicate intrafascicular administration due to the low compliance of the fascicle [78]. What, then, should the injection pressure be? Autopsy studies have demonstrated various patterns of increases in pressure and peak pressures depending on the injection site, namely: into the nerve root; into the peripheral nerve; and perinervously. Administration into the nerve root was associated with a peak pressure of 60.2 psi, intrafascicular administration into the peripheral nerve with 52.9 psi, and extrafascicular administration with 22.4 psi. Moreover, the authors pointed out that the peak pressure was achieved after more than several seconds, i.e. once a certain volume of the anaesthetic had been deposited [77].
From the clinical point of view, it is more useful to determine the pressure in the syringe-catheter-needle system at which the injection can be started, called "the opening pressure". In cases of interscalene brachial plexus anaesthesia, direct needle-nerve contact is connected with an opening pressure exceeding 15 psi. Withdrawal of the needle tip by 1 mm significantly reduces the opening pressure [56]. Likewise, in femoral nerve blocks, whenever the needle tip touches the nerve or the iliac fascia, it is not possible in the majority of cases to start the injection with an opening pressure lower than 15 psi [78]. Recent autopsy studies confirm that the opening pressure in femoral, femoro-crural, sciatic, common peroneal and tibial nerve blocks is several times higher when the needle tip is placed inside the fascicle, as compared with perinervous insertion, and always exceeds 15 psi; at an administration rate of 10 mL min-1, this pressure is achieved after 10-12 seconds [79].
It is extremely difficult to "feel" the opening pressure. In one study, the pressure with which anaesthesiologists injected LAs was measured: 70% of them started the injection at a pressure of > 20 psi; 50% at > 25 psi; and 10% at > 30 psi [80].
Two devices that help to reduce the risk of LA injection at too high a pressure are available on the market. One of these is a sensor which warns against excessive pressure using colours, while the other is a pressure limiter, which prevents LA administration at a pressure higher than 15 psi.
According to the above-mentioned studies, if the motor response subsides at a current intensity higher than 1 mA, the needle tip on the ultrasound image, as assessed by the observer, is outside the nerve, and the pressure during LA administration does not exceed 15 psi, then it is highly likely that the needle tip is not actually in contact with the nerve (triple guidance). In this "trinity", stimulation is probably of least importance, while pressure monitoring is crucial.
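To make the combined criteria explicit, the sketch below encodes the "triple guidance" idea as a simple pre-injection checklist. It is only an illustrative reading of the paragraph above, not a validated clinical algorithm or the authors' method; the thresholds (motor response already absent above 1 mA, needle tip judged extraneural on ultrasound, opening pressure below 15 psi) are taken from the text, and all names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative checklist only; thresholds mirror the discussion above and
# must not be read as a clinical decision rule.
PSI_LIMIT = 15.0          # opening pressure limit, psi
CURRENT_CUTOFF_MA = 1.0   # current above which no motor response should remain

@dataclass
class BlockCheck:
    motor_response_threshold_ma: float   # lowest current still evoking a motor response
    needle_extraneural_on_us: bool       # observer judges the needle tip outside the nerve
    opening_pressure_psi: float          # pressure needed to start the injection

    def injection_reasonable(self) -> bool:
        # All three criteria of "triple guidance" must be satisfied.
        return (
            self.motor_response_threshold_ma > CURRENT_CUTOFF_MA
            and self.needle_extraneural_on_us
            and self.opening_pressure_psi < PSI_LIMIT
        )

# Example: motor response only above 1.2 mA, tip seen outside the nerve, opening pressure 9 psi.
print(BlockCheck(1.2, True, 9.0).injection_reasonable())  # True
```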
Producers of ultrasound devices and needles for regional blocks are constantly developing new technologies which can improve safety. The visualisation of the needle along its entire course, especially when advanced at a steep angle towards the front of the transducer, can be extremely problematic. Therefore, the majority of producers of regional anaesthesia sets offer needles with technology improving their visibility in the ultrasound beam. The use of highly efficient piezoelectric monocrystals in some ultrasound transducers provides higher resolution and deeper penetration, hence better images of nervous structures. Some ultrasonographs are equipped with software automatically improving needle visibility. Some other devices use needle-induced electromagnetic field changes for virtual 3D tracking, even if the needle is outside the ultrasound beam. The above solution can be extremely useful in deep blocks, especially with an out-of-plane approach.
SYMPTOMS OF NERVE INJURY
The onset of neuropathic symptoms following the injuring stimulus may be acute (hours, days) or delayed (weeks). Acute onset is associated with direct nerve injury, and delayed onset with oedema and inflammation. Nerve injury symptoms are likely to include abnormal sensations (hypoesthesia, paraesthesias, pain, allodynia, hyperesthesia), muscle weakness, and disorders of the autonomic system. These may occur in various configurations and affect one's quality of life depending on their severity, location or the patient's age. It is difficult to determine the cause of injury based only on symptoms. Small nerve fibres are more susceptible to chemical injuries; therefore, the most common symptoms of their injury are paraesthesias and abnormal pain and temperature sensations rather than disorders of deep sensation and of movement. As neuropathy caused by surgical tourniquets concerns mainly thick myelinated fibres, its symptoms include motor, touch, vibration and position disorders, with heat, cold and pain sensations preserved and without paraesthesias. When the tourniquet is placed on the arm, the symptoms of neuropathy mainly concern the radial nerve and not the median/ulnar nerve, as opposed to neuropathies following the axillary block, where the symptoms mainly concern the median nerve (exclusively or together with the ulnar nerve) [28,38].
Diagnostic management
In order to effectively implement diagnostic-therapeutic management, it is essential that a nerve injury is suspected as early as possible. This is extremely difficult if anaesthesiologist-patient contact ends in the recovery room. In many cases, the anaesthesiologist is informed with delay about the most severe complications detected in surgical wards; less severe complications (paraesthesias subsiding within several days or weeks) may not even be reported to the surgeon, not to mention the anaesthesiologist. The simplest screening method for nerve injuries (used in our centre) is the anaesthetic visit, during which a basic examination, including neurological evaluation, is carried out after the anticipated duration of anaesthesia. In the early postoperative period, such an assessment may be hindered by residual sedation, limb immobilisation or the presence of a catheter; however, this approach can accelerate the decision to eliminate any potentially reversible causes of disorders (e.g. removal of a haematoma compressing the nerve) and facilitates further contact with patients with suspected injuries. Moreover, the post-anaesthesia visit is the right time to discuss possible causes of injuries, diagnostic procedures, prognosis, and referral to appropriate outpatient clinics. Conducting an assessment after anaesthesia is also important because a high proportion of patients cannot precisely determine the onset of symptoms; even when they develop symptoms with a week's delay, after some time they can interpret them as lasting from the procedure. If a haematoma compressing the nerve is suspected, or there is a risk that the cause of nerve injury is surgical, it is worth talking to the surgeon to determine whether the nerve could have been injured (cuts, stretching, suture entrapment) or whether fascial compartment syndrome should be considered, and to find out if revision is possible. In each case in which the motor component of the block persists, the block intensifies or returns after earlier subsidence, neurological consultation is required. Early diagnostic imaging procedures involve basic ultrasound examinations, e.g. to exclude a haematoma. Theoretically, more advanced examinations can also be performed, e.g. MRI, enabling the assessment of morphological changes of the nerve: prolonged T2 relaxation and enhanced signal in the STIR sequence may be observed earlier than the changes typical of denervation. Moreover, for instance, a haematoma compressing the nerve can be visualised [81].
However, neurophysiological examinations are far more important for the diagnosis of nerve injuries. The electrophysiological tests applied most commonly are a nerve conduction study (NCS) and electromyography (EMG). Nerve conduction tests are performed by stimulating the nerve in two separate locations along its course, with the receiving electrode placed over the muscle supplied by this nerve, or along the course of the sensory nerve. They measure the following: amplitude, reflecting the number of depolarised fibres; latency, i.e. the time between a stimulus and the appearance of a response, the compound motor action potential (CMAP) or sensory nerve action potential (SNAP); and conduction velocity, the speed with which the stimulus spreads along the thickest myelinated axons. Based on the analysis of such tests, one can assess whether the nerve has been injured, and if so, determine the level of injury. Moreover, the test results should demonstrate whether this is a case of demyelination (neurapraxia) or loss of axons (axonotmesis). Then, based on baseline values, regeneration can be monitored.
The loss of myelin (demyelinating neuropathies) decreases the conduction velocity and lengthens the latency, while the CMAP amplitudes remain normal or are only slightly reduced. In neuropathies with a reduced number of axons (axonal), CMAP and SNAP amplitudes are decreased at normal latency and conduction velocity. In cases of nerves with impaired conduction caused by segmental demyelination, the amplitude of CMAP evoked above the focus is markedly reduced, as compared with stimulation below the focus. The response to normal stimulation induces the orthodromic response. Supraliminal stimulation additionally induces antidromic impulse conduction, which depolarises the cells of the anterior horns of the spinal cord and delays the response in muscle cells (F wave). Prolonged F waves are typical of demyelinating injuries, while their absence evidences an axonal or severe demyelinating injury. Examinations of sensory nerve conduction, particularly of fine cutaneous branches in patients with abundant subcutaneous tissue, are much more difficult to perform due to the lower amplitudes of evoked potentials.
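As a numerical illustration of these measurements, the sketch below computes motor conduction velocity from two-point stimulation (distance between stimulation sites divided by the difference in CMAP latencies) and applies the simplified demyelinating-versus-axonal distinction described above. The cut-off values for "reduced" velocity and amplitude are placeholders for illustration, not diagnostic norms.

```python
def motor_conduction_velocity(distance_mm: float,
                              proximal_latency_ms: float,
                              distal_latency_ms: float) -> float:
    """Conduction velocity from two-point stimulation:
    distance between stimulation sites / difference in CMAP latencies.
    mm/ms is numerically equal to m/s."""
    return distance_mm / (proximal_latency_ms - distal_latency_ms)

def crude_pattern(velocity_ms: float, cmap_amplitude_mv: float,
                  velocity_cutoff: float = 50.0, amplitude_cutoff: float = 4.0) -> str:
    """Very simplified reading of the paragraph above; cut-offs are placeholders."""
    if velocity_ms < velocity_cutoff and cmap_amplitude_mv >= amplitude_cutoff:
        return "suggests demyelinating injury (slowed conduction, preserved amplitude)"
    if velocity_ms >= velocity_cutoff and cmap_amplitude_mv < amplitude_cutoff:
        return "suggests axonal injury (reduced amplitude, preserved velocity)"
    return "mixed or indeterminate pattern"

# Example: 250 mm between stimulation sites, latencies 7.5 ms and 3.0 ms, CMAP 5 mV.
v = motor_conduction_velocity(250.0, 7.5, 3.0)   # about 55.6 m/s
print(round(v, 1), crude_pattern(v, 5.0))
```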
The axonal parts distal to the injury site remain excitable for many days after injury. CMAP and SNAP amplitudes do not decrease for 2-3 days. Moreover, CMAP amplitudes do not reach their lowest point until 9 days after the insult, and SNAP amplitudes until 10-11 days post-injury [82].
During this time, stimulation distal to the site of injury may not disclose the evident pathology, while stimulation proximal to the site of injury will not allow differentiating demyelination from the loss of axons. Therefore, it is more reasonable to perform nerve conduction examinations 10-14 days after the occurrence of damage and not earlier.
EMG assesses only the motor functions of nerves. A needle electrode inserted directly into the muscle can record spontaneous potentials in the denervated muscle, namely fasciculations and sharp waves. Moreover, motor unit potentials (MUPs) are assessed, i.e. the sum of potentials of all muscle fibres innervated by one stimulated nerve. In cases of neurapraxia, EMG does not show significant changes, except for reduced MUP recruitment. Up to 21 days after major nerve injuries, electromyographic changes resemble those mentioned above, although between days 14 and 21 spontaneous muscle activity begins in the form of the fasciculations and sharp waves mentioned earlier. In cases of completely cut nerves, MUPs are not recorded.
Although electrophysiological tests may be performed immediately after the insult, in cooperation with a neurologist, such early tests serve rather to determine pre-existing pathology. In most cases, the first tests are performed 3 weeks after injury, with follow-ups after 3-6 months and, if required, at 12 months [83].
stRAtegIes ReducIng tHe RIsk oF neRve InjuRY
Prior to surgery: 1. Practicing and continuously improving one's skills.2. Screening for patients with neuropathies.If a regional block has been decided upon, adrenaline (as an adjuvant) should be avoided and a reduction in LA concentration should be considered.3. Providing the patient with detailed information about possible complications of the suggested procedure, as well as alternative techniques, and obtaining patient's informed consent.Unfortunately, even in the United States, anaesthesiologists eagerly inform patients about the advantages of regional anaesthesia and usually mention only mild complications (transient parasthesias, haematomas) while the potentially lifethreatening complications, such as toxicity of local anaesthetics or neuropathies leading to disability, are often passed over [84].
In the operating suite:
1. Combine the location methods (ultrasound, injection pressure monitoring, and nerve stimulation: triple guidance).
2. Do not use a current intensity of < 0.5 mA for nerve stimulation.
3. Use needles clearly visible in the ultrasound beam.
4. In each case in which the patient reports severe pain radiating along the limb during needle manipulation or the administration of LA, immediately discontinue the administration of LA and withdraw the needle.
5. Choose co-anaesthetics individually, bearing in mind that perineural administration of dexamethasone is off-label and adrenaline is contraindicated in diabetic patients.
6. Position the patient carefully on the operating table: the arm abducted to 90 degrees in the reverse position, the points of compression protected, the elbow bent to < 90 degrees, and the lateral decubitus position with the hip flexed to < 120 degrees.
7. Attentively monitor and record the time of tourniquet use and the pressure within it.
SUMMARY
Huge advances in regional anaesthesia have been observed in recent years. "Old" techniques are being improved and new ones designed; better ultrasound devices and anaesthesia sets are available. Nevertheless, the incidence of complications has not changed. It can even be cautiously assumed that the number of patients with complications is likely to increase as regional anaesthesia methods become increasingly popular. The paper summarises the present state of knowledge about nerve injuries and the strategies concerning their prevention and management.
Table 1. Classifications of nerve injuries according to Seddon and Sunderland
Spontaneous pneumothorax: An emerging complication of COVID-19 pneumonia
Spontaneous pneumothorax in the setting of coronavirus disease 2019 (COVID-19) has been rarely described and is a potentially lethal complication. We report our institutional experience. Patients with confirmed COVID-19 admitted to 5 hospitals within the Inova Health System between February 21 and May 21, 2020 were included in the study. Of the 1619 patients identified, 22 (1.4%) developed spontaneous pneumothorax during their hospitalization without evidence of traumatic injury.
Spontaneous pneumothorax in the setting of coronavirus disease 19 (COVID-19) has been rarely described and is a potentially lethal complication. We report our institutional experience from February 21, 2020 to May 21, 2020.
Patients with confirmed COVID-19 who were admitted to 5 hospitals within the Inova Health System between February 21, 2020 and May 21, 2020 were included in the study. We identified 1619 patients, of whom 22 (1.4%) developed spontaneous pneumothorax during their hospitalization without evidence of traumatic injury.
The median age of the patients was 60 years and 82% were male (Table 1). The majority of the cohort (95%) was Hispanic. The median BMI was 25.4; 52% of patients had a history of hypertension, 32% had a history of diabetes mellitus, and 14% were smokers. Spontaneous pneumothorax was diagnosed between the 1st and 15th day of hospitalization (median 9th day), and 100% of patients were diagnosed by chest X-ray (Fig. 1). Sixteen patients (73% of the overall population) had a chest tube placed, and the remaining 6 patients were monitored closely. Eight patients died (36% of the overall population), with fourteen patients either remaining in hospital or discharged home. Of the 8 who remained hospitalized, 2 patients are on extracorporeal membrane oxygenation (ECMO) and 2 remain intubated. The median length of hospitalization was 18.5 days as of May 20, 2020.
The deceased patients were more likely to be older (median age 63) and overweight (median BMI 28.3) (Table 1). As it pertains to risk factors, there was a higher prevalence of hypertension, diabetes, and congestive heart failure in the deceased patients versus patients who remain alive (75%, 50%, and 25% vs. 43%, 21%, and 0%, respectively). At the time of diagnosis of pneumothorax, the deceased patients had higher levels of inflammatory markers (ferritin, C-reactive protein, and fibrinogen) and white blood cell count. The deceased patients were also more likely to have required ventilator support (62.5% vs. 29%). The patients who remain alive were more likely to have received an interleukin-6 inhibitor, remdesivir, or convalescent plasma as compared to the deceased patients (50%, 29%, and 29% vs. 25%, 12.5%, and 0%, respectively).
The most comprehensive study to date evaluating common radiographic findings associated with COVID-19 reported the incidence of pneumothorax to be 1% (1 out of 99 patients). 1,2 A few other case reports have presented isolated cases of pneumothorax in the setting of COVID-19. 3,4 Our case series comprehensively demonstrates that this is a potentially crippling complication in COVID-19 patients, as evidenced by our mortality rate of 36%, with the potential to be higher given the severity of illness in some of the patients who remain hospitalized in this study. The crude mortality of all patients admitted with COVID-19 pneumonia during this same span of time was 15.8%. The marked inflammatory response, fibrosis, and need for positive pressure ventilation in COVID-19 pneumonia are likely contributory to the development of pneumothorax in these patients. Though classically pneumothorax is more likely to develop in patients with underlying lung disease, the prevalence of COPD and asthma was only 19% in this study. Additionally, 50% of the patients were not on a ventilator when pneumothorax was diagnosed, with 4 patients (18% of the population) only on nasal cannula. This suggests that there are factors uniquely associated with COVID-19 that contribute to the incidence of spontaneous pneumothorax. Additionally, we believe that the development of spontaneous pneumothorax likely reflects the severity of disease, which is why the mortality is high in this cohort. Review of the chest X-rays and CT scans, the latter obtained in only 2 patients, demonstrated a variable incidence of cystic and bullous disease (Fig. 2). Cross-sectional imaging may be of value in this population, as bullous disease may develop rapidly, with progression resulting in misplaced chest tubes. To our knowledge, this is the largest and most comprehensive description of the findings associated with spontaneous pneumothorax in COVID-19. Clinicians should be attuned to the possibility of this complication as it portends a poor prognosis.
Funding
None.
Declaration of Competing Interest
None.
Immunohistochemical markers to diagnose primary squamous cell carcinoma of the lung: a meta-analysis of diagnostic test accuracy
Background: Inconsistent diagnostic test accuracies of immunohistological staining for squamous cell carcinoma (SQC) of the lung have been frequently reported. There have been few meta-analyses of the diagnostic accuracies of the immunohistochemical markers. Methods: A systematic review and meta-analysis were performed following standard guidelines for systematic reviews of diagnostic test accuracy. Immunohistochemical markers (p40, p63, CK5/6, and DSC3) were evaluated as index tests for SQC. The diagnostic odds ratio (DOR) was obtained by the DerSimonian–Laird model. Summary estimates of sensitivity and specificity were calculated using a bivariate model. The protocol registration ID is UMIN000041664. Results: The meta-analysis included 85 of the 1353 first-screened articles. The total number of patients was 17,893, consisting of 6151 SQC cases and 11,742 non-squamous non-small-cell lung cancer cases. The DOR was better for p40 (377, 95% confidence interval (CI) = 213–644, I2 = 0%) than for CK5/6 (120, 95% CI = 78–184, I2 = 2.5%), p63 (70, 95% CI = 55–88, I2 = 9.1%), and DSC3 (94, 95% CI = 35–250, I2 = 3.7%). Summary estimates of sensitivity and specificity were as follows: p40 sensitivity 0.92 (95% CI = 0.89–0.95), specificity 0.94 (95% CI = 0.93–0.96); p63 sensitivity 0.92 (95% CI = 0.90–0.94), specificity 0.83 (95% CI = 0.80–0.86); CK5/6 sensitivity 0.90 (95% CI = 0.87–0.93), specificity 0.91 (95% CI = 0.89–0.93); DSC3 sensitivity 0.81 (95% CI = 0.73–0.88), specificity 0.95 (95% CI = 0.85–0.98). Conclusion: p40 had the best DOR for diagnosing SQC within non-small-cell lung carcinoma. Despite its lower sensitivity, DSC3 had the best specificity among the four markers and might be useful to rule in the diagnosis of SQC.
Although many anti-cancer agents had similar efficacy for NSQ-NSCLC and SQC of the lung, drugs such as pemetrexed and bevacizumab are only effective for patients with NSQ-NSCLC. 4,5 Based on molecular advances and the clinical demand for accurate subclassification of lung cancer, the World Health Organization (WHO) updated the Classification of Tumors of the Lung, Pleura, Thymus, and Heart in 2015, which emphasized the expanded use of immunohistochemical techniques even for the diagnosis of SQC and NSQ-NSCLC and explicitly included some immunohistochemical markers. 6 The incidence of large-cell cancer of the lung has been decreasing since 2015 because these immunohistochemical markers can discern the difference between poorly differentiated SQC and ADC. 7 Numerous immunohistochemical and immunocytochemical markers have been explored to distinguish between pulmonary SQC and NSQ-NSCLC. p40, p63, cytokeratin 5/6 (CK5/6), and desmocollin-3 (DSC3) have been frequently used in the diagnosis of SQC. 8 Sensitivity and specificity are key metrics for understanding the diagnostic test accuracy of immunohistochemical staining techniques. To the best of our knowledge, no systematic review has evaluated the diagnostic test accuracy of SQC immunohistochemical markers. The current systematic review and meta-analysis aimed to summarize data from previous studies of the diagnostic test accuracy of immunohistochemical markers used for the diagnosis of SQC.
Study overview
The protocol of this systematic review and meta-analysis of diagnostic test accuracy was prepared following standard guidelines for systematic reviews of diagnostic test accuracy and registered on the website of the University Hospital Medical Information Network Clinical Trials Registration (UMIN000041664). 9,10 Approval by the Institutional Review Board was not required because of the nature of this study. The PRISMA checklist is shown in Supplementary Table 1.
Study search
Four major online databases, PubMed, Web of Science, Cochrane, and Embase, were searched (January 31, 2020). The following search strategy was used for PubMed: ((p40 OR deltaNp63 OR ΔNP63) OR (p63 OR DBR16.1) OR (ck5/6 OR Cytokeratin 5/6) OR (desmocollin 3 OR desmocollin-3 OR DSC3 OR DSC-3) OR (TTF1 OR TTF-1 OR Thyroid transcription factor-1 OR Thyroid transcription factor 1) OR (NapsinA OR Napsin A OR TA02 OR aspartic protease) OR (CK7 OR cytokeratin7 OR cytokeratin 7)) AND (sensitivity and specificity) AND (NSCLC OR lung OR pulmonary OR bronchial OR pleural OR respiratory OR bronchoscopy) AND (NSCLC OR adenocarcinoma OR squamous OR squamous-cell OR non-small OR non small). Detailed information on the search strategy is shown in Supplementary Table 2.
Two authors (SK and NH) independently screened the titles and abstracts and carefully evaluated full text to select eligible articles; in cases of discrepancy, they reached a consensus through discussion. Review articles and included original articles were hand-searched (HC and NH) for additional research papers that met the inclusion criteria.
Study selection
Full articles, brief reports, and conference abstracts published in any language that provided data for the sensitivity and specificity of immunohistochemical markers to diagnose lung SQC were included. An article that provided data for only sensitivity or only specificity was excluded, since bivariate analysis is not applicable to such data. 9 A case-control study design that consisted of patients with ADC and SQC was accepted, though a case-control design may be considered to carry a risk of bias according to the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2). 11 The target population was patients with NSCLC. Commonly used pathological criteria were accepted along with the WHO 2015 criteria. A study that collectively evaluated both NSCLC and SCLC was excluded since such a study did not fit the clinical question. Studies focusing on non-pulmonary cancers and metastatic lung cancers of non-pulmonary origin were also excluded. Similarly, studies that compared NSCLC subtypes and mesothelioma were not accepted. Studies including patients with only an ADC or SQC diagnosis were considered two-gate studies, and studies including all NSCLC patients were considered one-gate studies. Specimens outside the lung, such as lymph nodes and pleural effusion, were accepted as well. Immunocytochemical staining using lung cytology or pleural effusion cell blocks was also accepted along with immunohistochemical staining. Small samples from cell blocks, lymph nodes, and pleural effusion were classified as biopsy specimens.
Target immunohistochemical markers included p40, p63, CK5/6, and DSC3 for SQC. Immunohistochemical techniques using any commercially available antibodies and non-commercial antibodies were accepted. The reference test had to be a pathological diagnosis by pathologists.
Risk of bias
QUADAS-2 was applied to assess the risk of bias in each study. 11

Outcomes
Sensitivity, specificity, area under the curve (AUC), and the diagnostic odds ratio (DOR) were evaluated. If two or more cutoffs were applied in an original article, all weakly, moderately, and strongly positive results were collectively considered positive. In the diagnosis of SQC, both SQC and adenosquamous carcinoma were counted as SQC, since adenosquamous carcinoma has a squamous cell component, whereas large-cell carcinoma and NSCLC not otherwise specified were not counted as SQC.
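For readers less familiar with the summary measures used here, the standard relationship between the DOR, the 2 × 2 counts, and the likelihood ratios can be written as follows (this identity and the DerSimonian–Laird expressions below are supplied for context in one common formulation; they are not quoted from the original article):

$$\mathrm{DOR} = \frac{TP \cdot TN}{FP \cdot FN} = \frac{\mathrm{sensitivity}\cdot\mathrm{specificity}}{(1-\mathrm{sensitivity})(1-\mathrm{specificity})} = \frac{LR^{+}}{LR^{-}}.$$

Under a DerSimonian–Laird random-effects model, the pooled log DOR is a weighted mean with weights $w_i^{*} = 1/(v_i + \hat{\tau}^2)$, where $v_i$ is the within-study variance of $\log \mathrm{DOR}_i$ and

$$\hat{\tau}^2 = \max\!\left(0,\; \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right), \qquad w_i = \frac{1}{v_i}, \qquad Q = \sum_i w_i\bigl(\log \mathrm{DOR}_i - \widehat{\theta}_{\mathrm{FE}}\bigr)^2,$$

with $\widehat{\theta}_{\mathrm{FE}}$ the fixed-effect (inverse-variance) pooled estimate over the $k$ studies.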
Data extraction
Two review authors, SK and NH, independently extracted data, including the name of the first author, publication year, publication country, types of immunohistochemical markers, numbers of patients with positive results, numbers of patients evaluated, and QUADAS-2-related information.
Study search and study characteristics
A total of 1353 articles (1336 identified through the database search and 17 by hand search) were identified; 999, 229, and 85 articles remained after removing duplicates, screening, and full-article reading, respectively (Figure 1). Finally, 85 reports, comprising 75 full-length articles and 10 conference abstracts, were included (Table 1). All were written in English except for one article written in Chinese. Prospective study designs were adopted in four articles, and the other 81 were retrospective studies. Of the 85 reports, 28 were from the United States, nine from China, six each from Germany and Japan, and five each from Turkey and the United Kingdom. Of the 17,893 patients enrolled in this study, 6151 had SQC based on the pathological diagnosis and 11,742 had NSQ-NSCLC. Surgical specimens were assessed in 34 studies, and 31 studies evaluated biopsied specimens, whereas 10 studies collected both surgical and biopsy samples. Ten reports did not specify the specimen type. Fifty-one studies were two-gate case-control studies, enrolling SQC and ADC, respectively, and the other 34 studies were one-gate studies that enrolled NSCLC specimens. The WHO classification of lung cancer pathology was used in 67 articles, and the other 18 studies did not mention classification criteria.
The cutoff values for the immunohistochemical markers were 1% in 29 studies, 5% in 6 studies, and 10% in 15 studies; the remaining 35 studies did not report cutoff values.
The clones of the immunohistochemical markers used are shown in Supplementary Table 3. Although different clones were used across studies, more than half of the studies used the same clones: a polyclonal antibody, 4A4, D5-16B4, and DSC3-U114 for p40, p63, CK5/6, and DSC3, respectively. The risk of bias assessment is shown in Figure 2. There were 45 studies with a high risk of patient selection bias, and 26 studies showed an unclear risk of selection bias. A total of 12 studies had a high risk and 24 studies an unclear risk of bias with respect to the reference standard. No study showed bias in patient selection applicability concerns, index test, index test applicability concerns, reference standard applicability concerns, or flow and timing.
Discussion
The diagnostic test accuracies of the immunohistochemical tumor markers p40, p63, CK5/6, and DSC3 in SQC were systematically reviewed. Based on our analysis, p40 showed the best DOR and AUC among these four markers, and the systematic review and meta-analysis provided evidence supporting the use of p40 as the first choice in the diagnostic algorithm for predicting SQC, as in current guidelines. 6,15 Given the AUCs of p63 and CK5/6, which were at least 0.93, suggesting 'very good' diagnostic test accuracy, 16 p63 and CK5/6 are also capable choices for the diagnosis of SQC, as suggested by some guidelines. 17 The detailed diagnostic accuracies of the immunohistochemical tumor markers were slightly different in the one-gate and two-gate analyses; the expression of p40, p63, CK5/6, or DSC3 might be seen in, for example, LCNEC or other non-ADC NSCLC. Nevertheless, the ranking of the diagnostic accuracy of each tumor marker remained the same as in the overall analysis. The studies used in this meta-analysis compared diagnostic accuracy between SQC and ADC or NSQ-NSCLC. The test accuracy of the above immunohistochemical tumor markers for identifying metastases to the lungs or salivary gland-type carcinomas remains unclear.
The results should therefore be interpreted simply as the markers' ability to separate SQC from ADC.
The combination of TTF1 and p40 is recommended to identify SQC or ADC among NSCLC specimens. A TTF1 single-positive result suggests ADC of the lung, and a p40 single-positive result diagnoses SQC. When TTF1 and p40 are double-positive, the specimen should be further stained with highly specific markers such as Napsin A and DSC3, a protein found in desmosomes. 19 On the contrary, when TTF1 and p40 are double-negative, another sensitive marker for ADC, such as CK7, should be added. Although CK7 cannot be regarded as an ADC marker (for example, a significant proportion of SQC are positive for CK7), the addition of CK7 or broad keratin in TTF1/p40-negative NSCC without clear morphology is recommended. 20 Additional sensitive markers for SQC are also required; p63 and CK5/6 are candidate additional immunohistochemical stains. It is true that p63 is more sensitive than CK5/6 for the diagnosis of SQC. Nonetheless, since p40 is the N-terminally truncated isoform of p63, 21 the IHC results of p40 and p63 correlate with each other. CK5/6, intermediate-sized basic keratins with a molecular mass of 58 kDa, 22 has a different immunostaining target from p40. Although p63 was slightly more sensitive than CK5/6, CK5/6 might be a better additional marker when TTF1 and p40 are double-negative.
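The staining workflow described in this paragraph can be restated as a simple decision procedure. The Python sketch below is only an illustration of that logic; the marker names come from the text, but the function itself and its return strings are not part of the original article and oversimplify real diagnostic practice.

```python
def classify_nsclc(ttf1_positive, p40_positive):
    """Illustrative restatement of the TTF1/p40-based workflow described above."""
    if ttf1_positive and not p40_positive:
        return "favors adenocarcinoma (ADC)"
    if p40_positive and not ttf1_positive:
        return "favors squamous cell carcinoma (SQC)"
    if ttf1_positive and p40_positive:
        # Double-positive: add highly specific markers such as Napsin A and DSC3
        return "stain further with Napsin A and DSC3"
    # Double-negative: add a further sensitive marker, e.g. CK7 (and CK5/6 or p63)
    return "stain further with CK7 (and consider CK5/6 or p63)"


print(classify_nsclc(ttf1_positive=False, p40_positive=True))
```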
The largest number of studies of SQC IHC markers was conducted for p63, followed by CK5/6, p40, and DSC3. CK5/6 and p63 were the previous standards for diagnosing SQC, whereas p40 and DSC3 have been investigated since around 2011. Although studies of p40 and DSC3 were relatively few, both had abundant samples. Across all analyses, observed heterogeneities were almost absent (I2 < 25%).
There were several limitations in this study. First, as shown by QUADAS-2, most of the included studies had a two-gate study design, and a high risk of patient selection bias was observed. However, results from the sensitivity subgroup analysis focusing on one-gate studies were compatible with those from two-gate studies. Second, we searched data from 2001 onwards, and the diagnostic standards differed across periods. A total of 36 studies showed a high or unclear risk of bias with respect to the reference standard. Third, although more than half of the studies used the same clones of immunohistochemical markers, the use of different clones and protocols might have introduced a selection bias into this study.
Conclusion
P40 was the only marker with an 'excellent' AUC for diagnosing SQC among NSCLC. Both CK5/6 and p63 showed a 'very good' AUC; however, CK5/6 may have slightly better diagnostic test accuracy. Despite its lower sensitivity, DSC3 had the best specificity among the four markers, and it might be useful to rule in the diagnosis of SQC.
Duplication Cysts in Caecum in Three Months Old Girl
Gastrointestinal duplication cysts are rare congenital anomalies. They can occur in any part of the alimentary tract but are rare in the caecum. The reported incidence is 1:4,500 births. The exact etiology is unknown. There is no common clinical pattern of signs and symptoms of duplications. The preoperative diagnosis of duplication cysts is often inaccurate. Histopathological examination enables us to confirm the diagnosis. Resection is the treatment of choice, with an excellent outcome. The aim of this case report was to describe the clinical presentation and treatment of duplication cysts. A three-month-old girl had had abdominal distention for 7 days before admission to the hospital. This complaint was accompanied by fever and vomiting. She had not passed stool and had a history of black-greenish feces. Physical examination revealed a distended abdomen and decreased bowel sounds. Laboratory investigations were normal; abdominal X-ray showed enlargement of the intestine with increased gas distribution and a ground-glass appearance. We diagnosed the patient with abdominal distention caused by suspected paralytic ileus, with Hirschsprung disease as a differential diagnosis. After the diagnosis, exploratory laparotomy was performed. During the surgical procedure, the surgeon found duplication cysts in the caecal area and performed resection with end-to-end anastomosis, taking tissue for biopsy. The biopsy result was duplication cysts. Nine days after surgery, the patient was discharged in good condition. A high index of suspicion is required for the diagnosis of duplication cysts, and the result of surgical treatment was good.
Introduction
Gastrointestinal duplication cysts are rare congenital anomalies. They can occur in any part of the alimentary tract. The most common site is the ileum, followed by the esophagus, large bowel, jejunum, stomach, and duodenum, but they are rare in the caecum. The reported incidence is 1:4,500 births, and about 60-70% are detected antenatally or within the first two years of life. In most cases there is no sex predilection. No familial or racial association has been reported [1].
There is no common clinical pattern of signs and symptoms of duplications. They present with a variety of symptoms or sometimes as masses found incidentally during routine examinations or investigations, or they are encountered during an operation for other problems. The clinical presentation also varies according to the age of the patient, location of duplication, type of mucosal lining, duration of disease, and presence of complications. Infants and neonates present with abdominal pain, nausea, vomiting, bleeding, abdominal distention, abdominal mass, obstruction and intussusception [2,3].
Diagnosis is confirmed with histopathological examination. Resection is the treatment of choice, with an excellent outcome. Good sectioning of the cyst wall with the attached bowel helps in ruling out malignant changes [4].
The objective of this case report was to describe the clinical presentation and treatment of a duplication cyst.
Case Illustration
A three-month-old girl was brought to the emergency room (ER) of Sanglah Hospital. The patient was referred from a private hospital with abdominal distention and suspected Hirschsprung disease. She had had abdominal distention for 7 days before admission. This complaint was accompanied by fever and vomiting. The patient vomited after drinking milk, and the vomit consisted of what she had consumed. The fever fluctuated and decreased after medication; the highest temperature was 38°C. She had not passed stool for 2 days before admission and had a history of black-greenish feces starting 11 days before admission. After arrival at the ER of Sanglah Hospital, the surgeon performed rectal washing, but no stool was found. The patient was the first child in the family, and there was no history of gastrointestinal congenital anomalies in either parent. She was born vigorous, at term, with a body weight of 3200 grams. No illnesses during the pregnancy period were noted, and there was no history of consuming any traditional medicine. No abnormalities during pregnancy or delivery were reported, and no abnormality was found on ultrasound examination.
The head was normal in shape, the hair was black, and the fontanelle was flat. There were no sunken eyes, no jaundice of the sclera, no conjunctival injection, and no anemia. The pupillary light reflex was normal. The ear, nose, and throat examinations were within normal limits. No lymph node enlargement was found on the neck. The chest was symmetrical both at rest and on movement. Breath sounds were vesicular without rales or wheezing; the first and second heart sounds were normal and regular, with no murmur on auscultation. There was no lymph node enlargement in either axilla. The abdomen was distended, making the liver and spleen difficult to evaluate. Bowel sounds were decreased on auscultation. Anal and genital examinations were normal. Digital rectal examination was normal, with no palpable mass and no gush of feces; yellowish feces without blood were found on the glove.
Abdominal X-ray showed enlargement of the intestine with increased gas distribution and a ground-glass appearance in the lower abdominal area (mostly seen in the half-sitting position) (Figure 1). Laboratory results, including complete blood count, electrolytes, and liver and kidney function, were within normal limits.
Based on the clinical and adjunctive examinations, the patient was diagnosed with abdominal distention caused by suspected paralytic ileus, with Hirschsprung disease as a differential diagnosis. The patient was given an antibiotic (ceftriaxone), gastric decompression with a nasogastric tube, and temporary fasting with parenteral nutrition, and was prepared for exploratory laparotomy.
During the surgical procedure, the surgeon found duplication cysts in the caecal area, performed resection with end-to-end anastomosis, and took tissue for biopsy (anatomical pathology examination) (Figure 2). The result was duplication cysts (compatible with the clinical findings) and chronic colitis, with 2 reactive lymph nodes.
The patient was admitted to the intermediate ward (IW) after surgery and received antibiotic therapy with intravenous cefoperazone-sulbactam, amikacin, and metronidazole, as well as total parenteral nutrition. Three days after surgery, trophic feeding via the feeding tube was started. On the seventh day after surgery, the patient was on full feeds and could be moved to the ward. Two days after moving to the ward, the patient was discharged in good condition.
Discussion
Gastrointestinal duplication cysts are rare congenital anomalies, originating anywhere along the alimentary tract from the tongue to the anus. The reported incidence is 1:4,500 births; most duplications are detected in children (antenatally or within the first two years of life), and fewer than 30% of all duplications are diagnosed in adults. In most cases, there is no sex predilection, and no familial or racial association has been reported [1]. In this case, the patient was a three-month-old girl without a history of gastrointestinal congenital anomalies in either parent.
The exact etiology is unknown, and several theories have been proposed. There are many theories on the embryology of gastrointestinal duplication; none of them, however, is able to explain all types of duplication. These embryological theories include the following [...]. There is no common clinical pattern of signs and symptoms of duplications. They present with a variety of symptoms, are sometimes found incidentally as masses during routine examinations or investigations, or are encountered during an operation for other problems. The clinical presentation also varies according to the age of the patient, the location of the duplication, the type of mucosal lining, the duration of disease, and the presence of complications. The clinical presentation may be due to the pressure effect of the duplication. Duplications in the abdomen commonly present with pain, vomiting, abdominal distention, and an abdominal mass. The clinical presentation may also be secondary to complications of the duplications, which include intussusception, volvulus, perforation, bleeding, peptic esophageal stricture, and malignant transformation, as seen in adults [7,8]. In this case, the patient had abdominal distention accompanied by fever and vomiting. She vomited after drinking milk, and the vomit consisted of what she had consumed. She had not passed stool and had a history of black-greenish feces before admission to the hospital.
The prenatal diagnosis of abdominal cystic lesions is relatively common. They may represent either a normal structural variant or a pathological entity that may require surgical intervention after birth. In clinical practice, these lesions are most frequently detected at the time of the routine morphology scan at 18-20 weeks of gestational age. However, a gastrointestinal malformation may not become apparent until the third trimester. Additionally, some cysts may develop and then resolve during intrauterine life. The preoperative diagnosis of duplication cysts is often inaccurate. Diagnosis is usually made using imaging modalities such as plain X-rays, barium studies, ultrasonography, CT scan or MRI, technetium-99m pertechnetate scintigraphy, and laparoscopy. Plain X-rays may show evidence of intestinal obstruction. Barium studies demonstrate a filling defect or, rarely, a communication between the cyst and normal bowel. Ultrasonography is the most commonly used modality and should be the first choice. It typically shows a double-layered wall (inner echogenic mucosa and outer sonolucent muscular layer). When this double-layered pattern is present on ultrasonography, a gastrointestinal duplication cyst is confirmed and there is no need for further radiologic evaluation. CT scans are more useful in demonstrating the precise anatomical relationship between cysts and surrounding structures. These cysts can manifest as smooth, rounded, fluid-filled cysts or tubular structures with a thin, slightly enhancing wall on CT scan. MRI shows intracystic fluid with heterogeneous signal density. Technetium-99m pertechnetate scintigraphy indicates the definite existence of a gastrointestinal duplication cyst when it contains ectopic gastric mucosa. This is especially useful in esophageal, duodenal, and tubular small bowel duplications with a high incidence of heterotopic gastric mucosa. Laparoscopy may also be used. However, all these modalities allow us only to suspect the presence of an abnormal lesion, and diagnostic confirmation is possible only after resection [9,10]. In this case, there were no abnormalities on ultrasound examination during pregnancy. However, the abdominal X-ray on admission showed enlargement of the intestine with increased gas distribution and a ground-glass appearance in the lower abdominal area.
Histopathological examination enables us to confirm the diagnosis. According to the definition of Ladd and Gross, the cyst must be adherent to some part of the gastrointestinal tract, contain smooth muscle in its wall, and have an internal lining of alimentary epithelium. They are named according to their location and are generally cystic or tubular masses; in 10-15% of cases the cysts are multiple [11,12]. In this case, the surgeon took tissue for biopsy (anatomical pathology examination). The result was duplication cysts (compatible with the clinical findings) and chronic colitis, with 2 reactive lymph nodes.
Although many duplications are diagnosed incidentally, most patients present with a combination of pain and/or obstructive symptoms. These symptoms may be the direct effect of distention of the duplication or may be caused by compression of adjacent organs (including their associated blood supplies) [13,14]. Excision should be considered in all cases wherever possible. The surgical approach varies with the location and type of the cyst. Resection and anastomosis may be required. However, gastrointestinal duplication cysts in children are a benign disease, and any treatment should not be more radical than necessary to eliminate the patient's complaints and prevent recurrence. Important points to be considered in the surgery of gastrointestinal duplication cysts include [14]:
1. The nature of the blood supply shared between the duplication and the native bowel.
2. The presence of heterotopic gastric mucosa, which will negate internal drainage due to the risk of peptic ulceration.
3. The relationship with adjacent structures.
The long-term prognosis of enteric duplication cysts is excellent after surgical treatment; however, it also depends on associated anomalies and the extent of the physiological disturbance caused by them [14,15]. In this case, based on the clinical and adjunctive examinations, the patient was prepared for exploratory laparotomy. During surgical treatment, the surgeon found duplication cysts in the caecal area, performed resection with end-to-end anastomosis, and took tissue for biopsy (anatomical pathology examination). The patient was discharged in good condition.
Summary
A three-month-old girl had had abdominal distention for 7 days before admission. This complaint was accompanied by fever and vomiting. She had not passed stool for 2 days before admission and had a history of black-greenish feces starting 11 days before admission. Physical examination revealed a distended abdomen, making the liver and spleen difficult to evaluate, and decreased bowel sounds on auscultation. Laboratory investigations were within normal limits; abdominal X-ray showed enlargement of the intestine with increased gas distribution and a ground-glass appearance in the lower abdominal area. We diagnosed the patient with abdominal distention caused by suspected paralytic ileus, with Hirschsprung disease as a differential diagnosis. After the diagnosis, exploratory laparotomy was performed. During the surgical procedure, the surgeon found duplication cysts in the caecal area, performed resection with end-to-end anastomosis, and took tissue for biopsy (anatomical pathology examination). The result was duplication cysts (compatible with the clinical findings). Nine days after surgery, the patient was discharged in good condition.
Non-G-completely reducible subgroups of the exceptional algebraic groups
Let G be an exceptional algebraic group defined over an algebraically closed field k of characteristic p>0 and let H be a subgroup of G. Then following Serre we say H is G-completely reducible or G-cr if, whenever H is contained in a parabolic subgroup P of G, then H is in a Levi subgroup of that parabolic. Building on work of Liebeck and Seitz, we find all triples (X,G,p) such that there exists a closed, connected, simple non-G-cr subgroup H of G with root system X.
Introduction
Let G be an algebraic group defined over an algebraically closed field k of characteristic p > 0 and let H be a subgroup of G. Then following Serre [Ser98] we say H is G-completely reducible or G-cr if, whenever H is contained in a parabolic subgroup P of G, then H is in a Levi subgroup of that parabolic. This is a natural generalisation of the notion of a group acting completely reducibly on a module V: if we set G = GL(V), then saying H is G-completely reducible is precisely the same as saying that H acts semisimply on V.
This notion is important in unifying some other pre-existing notions and results. For instance, in [BMR05], it was shown that a subgroup H is G-cr if and only if it satisfies Richardson's notion of being strongly reductive in G. It also allows one to state some previous results due to Liebeck-Seitz and Liebeck-Saxl-Testerman on the subgroup structure of the exceptional algebraic groups in a particularly satisfying form: Assume G is simple of one of the five exceptional types and let X be a simple root system. The result [LS96, Theorem 1] provides a number $N(X, G)$ such that if H is closed, connected and simple, with root system X, then H is G-cr whenever the characteristic p of k is bigger than $N(X, G)$. In particular, if p is bigger than 7 then they show that all closed, connected, reductive subgroups of G are G-cr. There is some overlap in that paper with the contemporaneous work of [LST96]. If H is a simple subgroup of rank greater than half the rank of G, then [Theorem 1, ibid.] finds all conjugacy classes of simple subgroups of G, and the proofs indicate where these conjugacy classes are G-completely reducible. With essentially one class of exceptions, all subgroups, including the non-G-cr subgroups, can be located in 'nice' so-called subsystem subgroups of G. We shall mention these in greater detail later.
More recently, [Ste10a] and [Ste12] find all conjugacy classes of simple subgroups of exceptional groups of types $G_2$ and $F_4$. One consequence of this is to show that the numbers $N(X, G)$ found above can be made strict. (One need only change $N(A_1, G_2)$ from 3 to 2.) The main purpose of this article is to make all the $N(X, G)$ strict. That is, for each of the five types of exceptional algebraic group G, for each prime p = char k and for each simple root system X, we give in a table of Theorem 1 an example H = E(X, G, p) of a connected, closed, simple non-G-cr subgroup H with root system X, precisely when this is possible. In other words we classify the triples (X, G, p) where there exists a connected, closed, simple non-G-cr subgroup H with root system X. Moreover, in all but one case (where $(X, G, p) = (G_2, E_7, 7)$), we can locate E(X, G, p) in a subsystem subgroup.
Our main theorem can thus be viewed as the best possible improvement of the result [LS96, Theorem 1], in the spirit of that result. Before we state our main theorem in full, we need a definition: A subsystem subgroup of G is a simple, closed, connected subgroup Y which is normalised by a maximal torus T of G. Let $\Phi$ be the root system of G corresponding to a choice of Borel subgroup $B \geq T$ and for $\alpha \in \Phi$, let $U_\alpha$ denote the T-root subgroup corresponding to $\alpha$. Then $Y = \langle U_\alpha \mid \alpha \in \Phi_0 \rangle$ where either $\Phi_0$ is a closed subsystem of $\Phi$, or $(\Phi, p)$ is $(B_n, 2)$, $(C_n, 2)$, $(F_4, 2)$ or $(G_2, 3)$ and $\Phi_0$ lies in the dual of a closed subsystem. The subsystem subgroups of G are easily determined by the Borel-de Siebenthal algorithm. Most of our examples H = E(X, G, p) are described in terms of an embedding of H into a subsystem subgroup M. Here we describe M just by giving its root system. Theorem 1. Let G be an exceptional algebraic group defined over an algebraically closed field k of characteristic p > 0. Suppose there exists a non-G-cr closed, connected, simple subgroup H of G with root system X. Then (X, G, p) has an entry in Table 1.
Conversely, for each (X, G, p) given in Table 1, the last column guarantees an example of a closed, connected, simple, non-G-cr subgroup E(X, G, p) with root system X.
In particular we can improve on [LS96, Theorem 1]. In the table in Corollary 2 we have struck out the primes which were used in the hypotheses in [loc. cit.]. This is done partly to show where we have made improvements but mainly to facilitate reading the proof of the first part of Theorem 1.
Table 1. Simple non-G-cr subgroups of type X in the exceptional groups
Throughout, let G be a reductive algebraic group containing a maximal torus T of G. Recall that for each dominant weight $\lambda \in X^+(T)$ for G, the space $H^0(\lambda) := H^0(G/B, \lambda) = \mathrm{Ind}_B^G(\lambda)$ is a G-module with highest weight $\lambda$ and with socle $\mathrm{Soc}_G H^0(\lambda) = L(\lambda)$, the irreducible G-module of highest weight $\lambda$. The Weyl module of highest weight $\lambda$ is $V(\lambda) \cong H^0(-w_0\lambda)^*$ where $w_0$ is the longest element in the Weyl group. We identify $X(T)$ with $\mathbb{Z}^r$ for r the rank of G and for $\lambda \in X(T)^+ \cong \mathbb{Z}^r_{\geq 0} \leq X(T)$, write $\lambda = (a_1, a_2, \ldots, a_r) = a_1\omega_1 + \cdots + a_r\omega_r$ where the $\omega_i$ are the fundamental dominant weights, a $\mathbb{Z}_{\geq 0}$-basis of $X(T)^+$. Put also $L(\lambda) = L(a_1, a_2, \ldots, a_r)$. When $0 \leq a_i < p$ for all i, we say that $\lambda$ is a restricted weight and we write $\lambda \in X_1(T)$. Recall that any module V has a Frobenius twist $V^{[n]}$ induced by raising entries of matrices in $GL(V)$ to the $p^n$th power. Steinberg's tensor product theorem states that $L(\lambda) = L(\lambda_0) \otimes L(\lambda_1)^{[1]} \otimes \cdots \otimes L(\lambda_n)^{[n]}$ where $\lambda_i \in X_1(T)$ and $\lambda = \lambda_0 + p\lambda_1 + \cdots + p^n\lambda_n$ is the p-adic expansion of $\lambda \in \mathbb{Z}^r_+$. We refer to $\lambda_0$ as the restricted part of $\lambda$.
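As a quick added illustration of the tensor product theorem (this example is not part of the original text): take $G = SL_2$ and $p = 2$. The weight $3$ has $p$-adic expansion $3 = 1 + 2\cdot 1$, so $\lambda_0 = \lambda_1 = 1$ and

$$L(3) \;\cong\; L(1) \otimes L(1)^{[1]},$$

a 4-dimensional simple module whose restricted part is $L(1)$.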
The right derived functors of $\mathrm{Hom}(V, -)$ are denoted by $\mathrm{Ext}^i_G(V, -)$ and when V = k, the trivial G-module, we have the identity $\mathrm{Ext}^i_G(k, -) = H^i(G, -)$ giving the Hochschild cohomology groups. We recall some standard modules; when G is classical, there is a 'natural module' which we refer to by $V_{\mathrm{nat}}$, or $V_m$ where m is the dimension of $V_{\mathrm{nat}}$. It is always the Weyl module $V(\omega_1)$, which is irreducible unless p = 2 and G is of type $B_n$; in the latter case it has a 1-dimensional radical. Certain properties of these modules are described in [Jan03, 8.21]. Of importance to us is the fact that when $G = SL_n$, $\Lambda^r(L(\omega_1)) = L(\omega_r)$ for $r \leq n - 1$. We use this fact without further reference.
We will often want to consider restrictions of simple G-modules to reductive subgroups H of G. Where we write $V_1|V_2|\ldots|V_n$ we list the composition factors $V_i$ of an H-module. For a direct sum of H-modules, we write $V_1 + V_2$. Where a module is uniserial, we will write $V_1/\ldots/V_n$ to indicate the socle and radical series: here the head is $V_1$ and the socle $V_n$. On rare occasions we use $V/W$ to indicate a quotient. It will be clear from the context which is being discussed.
Recall also the notion of a tilting module as one having a filtration by modules $V(\mu)$ for various $\mu$ and also a filtration by modules $H^0(\mu)$ for various $\mu$ (equivalently, dual Weyl modules). Let us record in a lemma some key properties of tilting modules which we use: Lemma 2.1.
(i) For each $\lambda \in X(T)^+$ there is a unique indecomposable tilting module $T(\lambda)$ of high weight $\lambda$; (ii) a direct summand of a tilting module is a tilting module; (iii) the tensor product of two tilting modules is a tilting module. As we are considering very low weight representations in general, it is possible to spot that a module is a $T(\lambda)$; for instance when p = 2, the natural Weyl module for $B_n$ has a 1-dimensional radical, so its structure is $W(\lambda_1) = L(\lambda_1)/k$. It is then the case that giving the Loewy series for a module $k/L(\lambda_1)/k$ uniquely characterises it as a tilting module $T(\lambda_1)$.
Recall that a parabolic subgroup P of G has a Levi decomposition $P = QL$, where Q is the unipotent radical of P and L a Levi subgroup. Recall also $L = L'Z(L)$ with $L'$ being semisimple.
Outline
Theorem 1 has two facets. The first proves that if $p \notin N(X, G)$, for $N(X, G)$ as defined in Corollary 2, then H is G-cr. The second proves the existence of the examples given in Table 1 and proves that they are non-G-cr.
The proof of the first part runs along the same lines as that of [...]. From [ABS90], Q has a filtration $Q = Q_1 \geq Q_2 \geq Q_3 \geq \ldots$ with successive quotients being known (usually semisimple) L-modules.
Now, for an exceptional algebraic group G over k of characteristic p and a simple root system X we consider possible embeddings $\bar H \leq L'$ where $\bar H$ is an $L'$-irreducible subgroup (which can be determined using 4.8 and/or by working down through 4.9). The composition factors V of the restrictions of the L-modules $Q_i/Q_{i+1}$ are investigated, and then conditions for the vanishing of [...] are examined. For the proof of the second part of Theorem 1, we must show that for each of the remaining cases (where some composition factor V of Q has $H^1(H, V) \neq 0$), we exhibit a non-G-cr subgroup H with the required root system over the required characteristic. In almost all cases we can give an example in a classical subgroup of G. Here it is easy to see when it is in a parabolic subgroup using 4.8. In two cases this is not possible, yet we can assert the existence of such a group using a cohomological argument.
Preliminaries
One needs to be careful about the notion of complements in semidirect products of algebraic groups. These are treated systematically in [McN10]. We recall some of the main facts. A closed subgroup H′ of G is a complement to Q if it satisfies the following equivalent conditions [...]; see [...] for a discussion. Note that [LS96] uses item (iv) above as its definition of a complement, without the last condition on Lie algebras.
We write $Z^1(H, Q)$ for the set of 1-cocycles.
We say $\gamma \sim \delta$ if there is an element $q \in Q(k)$ with $q^{-h}\gamma(h)q = \delta(h)$ for each $h \in H(k)$. We write $H^1(H, Q)$ for the set of equivalence classes of 1-cocycles, $Z^1(H, Q)/\sim$. We use the data on Frank Lübeck's website which accompanies [Lüb01].
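For orientation, the defining condition on a 1-cocycle in this non-abelian setting can be displayed as follows; this is a standard formulation added here for convenience (conventions for where the H-action sits may differ slightly from the source), with the left superscript denoting the action of H on Q:

$$\gamma \in Z^1(H, Q) \iff \gamma(h_1 h_2) \;=\; \gamma(h_1)\,{}^{h_1}\!\gamma(h_2) \quad \text{for all } h_1, h_2 \in H(k).$$

With this convention, each cocycle $\gamma$ gives a complement $\{\gamma(h)h : h \in H(k)\}$ to $Q$ in $Q \rtimes H$, and cohomologous cocycles give $Q(k)$-conjugate complements.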
We recall some results from [Ste10b]. In almost all cases the cohomology group $H^1(G, V)$ for a semisimple algebraic group G satisfies [...]. This fact allows us to reduce our considerations to simple modules with non-trivial restricted parts.
Lemma 4.5. Let G be a simple algebraic group and V a simple G-module. [...] There are many papers finding the values $\mathrm{Ext}^n_H(L, M)$ with H of low rank and L, M simple. Taking L = k, the trivial module, one gets the following result, where we have included more data than necessary for our purposes for completion's sake.
Lemma 4.6. Let V be a simple module for a simple algebraic group H where H is one of $SL_2$, $SL_3$, $Sp_4$ over an algebraically closed field of any characteristic p; $G_2$ for p = 2, 3 or $p \geq 13$; or $SL_4$, $Sp_6$ or $Sp_8$ when p = 2. Then $H^1(H, V)$ is at most one-dimensional, and is non-zero if and only if V is a Frobenius twist of one of the modules in the following table.
In the table we also give some useful dimension data, often only in specific characteristics.
Proof of Theorem 1
In [Ste10a] and [Ste12] we find all semisimple non-G-cr subgroups of G where G is $G_2$ and $F_4$ respectively. So the result follows for these cases. It remains to deal with the cases $G = E_6$, $E_7$ and $E_8$. We start by honing the Liebeck and Seitz result to show that if H is a closed, connected, simple subgroup of G with root system X and p is not in our list $N(X, G)$ then H is G-cr. Then we check that the examples given in Table 1 are indeed non-G-cr.
Corollary 5.2. With the hypotheses of the lemma, let V be an $L'$-composition factor of Q and suppose $L'$ does not contain a component of type $A_1$. Then either $\dim V \leq 60$, or $G = E_8$, $L' = D_7$ and V is a spin module for $L'$ of dimension 64.
Proof. If $L'$ is itself simple, this follows from the lemma. Also, if $G = E_6$ or $E_7$ then the number of positive roots is less than 56, so the result is clear. So we may assume $G = E_8$. The possibilities for L are $A_2A_2$, $A_2A_3$, $A_2A_4$, $A_3A_3$, $A_3A_4$, $A_2D_4$ and $A_2D_5$. Since V is simple, it must be a tensor product of simple modules for the two factors, with the simple modules occurring in the lemma. One checks that the highest dimension possible for this is when $L = A_3A_4$, $V = L(\lambda_2) \otimes L(\lambda_2)$ with $\dim V = 6 \times 10 = 60$.
For the second part, if $G = E_7$ and $L'$ is simple this follows from Lemma 5.1, the largest case occurring when $L' = A_6$. If $L'$ is not simple, then it is $A_4A_2$, $A_3A_2$ or $A_2A_2$. Then the largest possible dimension comes from the first option and is at most $10 \times 3 = 30 \leq 35$-dimensional.
$p \notin N(X, G)$ implies that H is G-cr. Since we are building on [LS96, Theorem 1], we need only deal with the struck-out numbers in the table in Corollary 2.
Proof of the first statement of Theorem 1:
Looking for a contradiction, we will assume H is non-G-cr; then we can make the following assumption, using 4.4: [...]. Suppose H is of type $B_2$ and p = 3. Since $\bar H$ is $D_7$-irreducible, it must act on the natural module $V_{14}$ for $L'$ as specified in 4.8. Checking [Lüb01], one finds that the simple untwisted representations of dimension no more than 14 are L(0, 1), L(1, 0), L(0, 2), L(2, 0) with dimensions 4, 5, 10 and 14, respectively. But L(0, 1) is the natural representation for $Sp_4$, thus carries a symplectic structure, which cannot be non-degenerate. Hence $V_{14} \downarrow \bar H = L(2, 0)$; moreover, as L(2, 0) is an irreducible Weyl module when p = 3, the embedding $\bar H \hookrightarrow L'$ can be seen as the reduction mod p of an embedding $\bar H_{\mathbb{Z}} \hookrightarrow L'_{\mathbb{Z}}$. Now [LS96, Proposition 2.12] gives that $V_{\mathbb{Z}} \downarrow \bar H_{\mathbb{Z}}$ is the irreducible Weyl module V(1, 3). Using [Lüb01] one can calculate the composition factors of a reduction mod 3 of this module; one sees that $V \downarrow \bar H$ has composition factors L(1, 3)|L(2, 1)|L(0, 1). Since none of these modules appears in 4.6, this rules out $(X, G, p) = (B_2, \bullet, 3)$.
By 5.2 the largest possibility for the dimension of V when $G = E_7$ is 35; when $G = E_6$ it is 16.
For $(A_2, E_8, 5)$, the fact that V has dimension at least 54 forces $L' = E_7$, $D_7$ or $A_7$, but simple $E_7$- and $D_7$-modules are self-dual, so the possibilities for V coming from 4.6 are discounted as they are not self-dual. Thus we may assume that $L' = A_7$ and $V = L(\omega_3) = \Lambda^3(L(\omega_1))$. Since $\bar H$ is $L'$-irreducible, it must act irreducibly on the natural 8-dimensional module $V_8$ for $L'$. A check of [Lüb01] forces $V_8 \downarrow \bar H = L(1, 1)$. But $\Lambda^3 L(1, 1)$ has highest weights (2, 2) and (0, 3), and the weights appearing in 4.6 are all higher than these (in the dominance order). This rules out $(A_2, E_8, 5)$.
Since there are no embeddings of a subgroup of type $C_4$ into any proper Levi subgroup of $E_6$, this case is ruled out too.
This completes the proof of the first statement of Theorem 1.
$p \in N(X, G)$ implies the existence of a non-G-cr subgroup H with root system X. The examples when $G = G_2$ and $F_4$ were shown already in [Ste10a, Theorem 1] and [Ste12, Theorem 1(A)(B)] to be non-G-cr, so we need only deal with the cases $G = E_6$, $E_7$ and $E_8$.
[...] $= L(1, 1) + k^6 + L(1, 0)^4 + L(0, 1)^4$, so $V_{27} \downarrow H = L(1) \otimes L(1) + L(1)^8 + k^7 = T(2) + L(1)^8 + k^7$. In [Ste12] it is shown that H is in a long $A_1$-parabolic of $F_4$, hence H is in an $A_1$-parabolic of $E_6$ (so that $L'$ is of type $A_1$). But $V_{27} \downarrow L' = L(1)^9 + k^9$ and so H is not $GL(V_{27})$-conjugate to (a subgroup of) $L'$, let alone $E_6$-conjugate. Now, to show it is also non-$E_8$-cr, note that $L(E_8) \downarrow E_7 = L(E_7) + L(T_1) + L(Q)^2$ where Q is the unipotent radical of an $E_7$-parabolic of $E_8$, with $L(Q) \downarrow E_7 = V_{56} + k$. Thus $L(E_8) \downarrow H$ contains at least two submodules isomorphic to T(2) (contained in the two $V_{56}$s). On the other hand $L(E_8) \downarrow L' = L(A_1T_1) + k^6 + M^2$ where M is the restriction to $L'$ of the Lie algebra of the unipotent radical of an $A_1$-parabolic. Using 5.1, M has composition factors with high weights 1 or 0, which must be semisimple since $\mathrm{Ext}^1_{A_1}(L(1), L(1)) = \mathrm{Ext}^1_{A_1}(L(1), L(0)) = 0$. In particular, while the direct summand $L(A_1T_1)$ is an indecomposable module T(2) for $L'$, it is the only one in $L(E_8)$; for H there are at least two such (in L(Q)). Thus H is also non-$E_8$-cr.
$(G, X, p) = (E_6, A_2, 3)$. Let $\tau$ denote a graph automorphism of G with induced action on the Dynkin diagram for G. If $G^\tau$ denotes the fixed points of $\tau$ in G, we have $G^\tau \cong F_4$ such that the root groups corresponding to simple short roots are contained in the subsystem (of type $A_2A_2$) determined by the nodes in the Dynkin diagram of G on which $\tau$ acts non-trivially. Thus H is contained in $A_2\tilde{A}_2 \leq F_4$ by $x \mapsto (x, x)$. It is shown in [Ste12, 4.4.1, 4.4.2] that this subgroup is in a $B_3$-parabolic of $F_4$ with $V_7 \downarrow \bar H = L_{A_2}(11)$.
In [Ste12, 5.1] the restrictions of the $F_4$-module $V_{26} = V(0001) \cong 0001/0000$ to H and $\bar H$ are calculated. Using this together with $V_{27} \downarrow F_4 = T(0001) = 0000/0001/0000$ we see that $V_{27} \downarrow \bar H$ cannot be the same as $V_{27} \downarrow H$: the former is an extension by the trivial module of $V_{26} \downarrow \bar H = 11^3 + 00^5$ where the resulting module is self-dual, so must be $11^3 + 00^6$, whereas the latter is forced to be $T(11)^3$. By a similar argument as before, we also get that this subgroup is non-$E_7$-cr and non-$E_8$-cr.
We give an example of a subgroup not arising from a non-$F_4$-cr subgroup (these being found in [Ste12]): $(G, X, p) = (E_6, A_1, 5)$.
The remaining cases where $X = A_1$ are similar.
Let us now vouch for the existence of the subgroup asserted in case $(G, X, p) = (E_8, C_3, 3)$.
There is one further case where we could not give a nice embedding as we have done above, namely $(G, X, p) = (E_7, G_2, 7)$; let H be the subgroup in question.
Suppose H had a proper reductive overgroup in G. Then by 4.9 it would have to lie in a subsystem subgroup of type $A_7$. Also it cannot lie in any parabolic subgroup of $A_7$ since then $\bar H$ would not be $E_6$-irreducible. Checking [Lüb01] one sees that there are no irreducible 8-dimensional representations of $H \cong G_2$. This is a contradiction. Thus H has no proper reductive overgroup in G as required.
The remaining cases are all similar and easier. This completes the proof of Theorem 1.
A Functional Landscape of CKD Entities From Public Transcriptomic Data
Introduction To develop effective therapies and identify novel early biomarkers for chronic kidney disease, an understanding of the molecular mechanisms orchestrating it is essential. We here set out to understand how differences in chronic kidney disease (CKD) origin are reflected in gene expression. To this end, we integrated publicly available human glomerular microarray gene expression data for 9 kidney disease entities that account for most of CKD worldwide. Our primary goal was to demonstrate the possibilities and potential on data analysis and integration to the nephrology community. Methods We integrated data from 5 publicly available studies and compared glomerular gene expression profiles of disease with that of controls from nontumor parts of kidney cancer nephrectomy tissues. A major challenge was the integration of the data from different sources, platforms, and conditions that we mitigated with a bespoke stringent procedure. Results We performed a global transcriptome-based delineation of different kidney disease entities, obtaining a transcriptomic diffusion map of their similarities and differences based on the genes that acquire a consistent differential expression between each kidney disease entity and nephrectomy tissue. We derived functional insights by inferring the activity of signaling pathways and transcription factors from the collected gene expression data and identified potential drug candidates based on expression signature matching. We validated representative findings by immunostaining in human kidney biopsies indicating, for example, that the transcription factor FOXM1 is significantly and specifically expressed in parietal epithelial cells in rapidly progressive glomerulonephritis (RPGN) whereas not expressed in control kidney tissue. Furthermore, we found drug candidates by matching the signature on expression of drugs to that of the CKD entities, in particular, the Food and Drug Administration–approved drug nilotinib. Conclusion These results provide a foundation to comprehend the specific molecular mechanisms underlying different kidney disease entities that can pave the way to identify biomarkers and potential therapeutic targets. To facilitate further use, we provide our results as a free interactive Web application: https://saezlab.shinyapps.io/ckd_landscape/. However, because of the limitations of the data and the difficulties in its integration, any specific result should be considered with caution. Indeed, we consider this study rather an illustration of the value of functional genomics and integration of existing data.
Regardless of the type of initial injury to the kidney, the stereotypic response to chronic repetitive injury is scar formation with subsequent kidney functional decline. Scars form in the tubulointerstitium as tubulointerstitial fibrosis and in the glomerulus as glomerulosclerosis. Despite this stereotypic response, the initiating stimuli are quite heterogeneous, ranging from an auto-immunological process in LN to poorly controlled blood glucose levels in DN. A better understanding of similarities and differences in the complex molecular processes orchestrating disease initiation and progression will guide the development of novel targeted therapeutics.
A powerful tool to understand and model the molecular basis of diseases is the analysis of genome-wide gene expression data. This has been applied in the context of various kidney diseases contributing to CKD, 3-7 and many studies are available in the online resource NephroSeq. 8,9 However, to the best of our knowledge, no study so far has combined these data sets to build a comprehensive landscape of the molecular alterations underlying different kidney diseases that account for most CKD cases. We collected, from 5 extensive studies, microarray gene expression data from kidney biopsies of patients with 8 separate glomerular disease entities leading to CKD (from here on referred to as CKD entities): FSGS, MCD, IgAN, LN, MGN, DN, HN, and RPGN. We normalized the data with a bespoke, stringent procedure, which allowed us to study the similarities and differences among these entities in terms of deregulated genes, pathways, and transcription factors, as well as to identify drugs that revert their expression signatures and thereby might be useful to treat them.
Data Collection
Raw data CEL files of each microarray dataset, GSE20602, 10 GSE32591, 11 GSE37460, 11 GSE47183, 12,13 and GSE50469, 14 were downloaded and imported to R (R version 3.3.2). For more information see the Supplementary Methods.
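As a rough illustration of this step, the sketch below downloads one study's raw CEL files from GEO and summarizes them with RMA. It is not the authors' exact pipeline (which is described in the Supplementary Methods), and the GEO accession and directory names are only examples.

```r
# Hedged sketch: fetch and summarize one study's CEL files (not the authors' exact code).
library(GEOquery)   # getGEOSuppFiles()
library(affy)       # ReadAffy(), rma()

gse_id <- "GSE20602"                                    # one of the five studies listed above
getGEOSuppFiles(gse_id)                                 # downloads GSE20602_RAW.tar into ./GSE20602/
untar(file.path(gse_id, paste0(gse_id, "_RAW.tar")),
      exdir = file.path(gse_id, "CEL"))

raw  <- ReadAffy(celfile.path = file.path(gse_id, "CEL"))  # read the (gzipped) CEL files
eset <- rma(raw)                                           # background correction + summarization (log2)
expr <- exprs(eset)                                        # probe sets x samples matrix
```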
Normalization
Cyclic loess normalization was applied using the limma package. [15][16][17] YuGene transformation was carried out using the YuGene R package. 18

Batch Effect Mitigation: Method

First, we structured the data in a platform-specific manner. Then, we conducted differential gene expression analysis between identical biological conditions from distinct study sources after cyclic loess normalization. We subsequently removed the genes that were significantly differentially expressed between them, as such differences were attributable mainly to the data source rather than to biology. We applied this procedure to the data fragments coming from the Affymetrix (Santa Clara, CA) Human Genome U133 Plus 2.0 Array and the Affymetrix Human Genome U133A Array. Next, we merged the data sets between the 2 platforms using the overlapping genes, followed by a process to mitigate the platform-induced batch effect. This latter procedure is similar to the one used for the data source-specific batch effect mitigation.
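The following is a minimal sketch of these two steps, assuming `expr_a` and `expr_b` are log2 expression matrices (genes x samples) for the same biological condition measured in two different source studies on the same platform; the authors' exact procedure is given in the Supplementary Methods.

```r
# Hedged sketch of cyclic loess normalization, YuGene transformation and the
# source-specific gene filtering used for batch mitigation.
library(limma)
library(YuGene)

combined <- cbind(expr_a, expr_b)                   # expr_a / expr_b are assumed inputs
combined <- normalizeCyclicLoess(combined)          # cyclic loess normalization
yugene   <- YuGene(combined)                        # YuGene transformation (per-sample cumulative proportion)

# Differential expression between the two sources for the *same* condition:
src  <- factor(c(rep("sourceA", ncol(expr_a)), rep("sourceB", ncol(expr_b))))
fit  <- eBayes(lmFit(combined, model.matrix(~ src)))
tt   <- topTable(fit, coef = 2, number = Inf)

# Genes that differ between sources are attributed to batch and dropped
keep         <- rownames(tt)[tt$adj.P.Val > 0.05]
expr_batchok <- combined[keep, ]
```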
Detection of Genes With Consistently Small P Values Across All Studies
Based on the assumption that common mechanisms might contribute to all CKD entities, we performed a Maximum P value (maxP) method, 19 which uses the maximum P value as the test statistic on the output of the differential expression analysis of the hypothetically separate studies. For more information see the Supplementary Methods.
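A simplified sketch of the maxP idea is shown below, assuming `pmat` is a genes x studies matrix of P values from the separate per-study differential expression analyses; the exact implementation is described in the Supplementary Methods.

```r
# Hedged sketch of the maxP procedure: a gene is kept only if its worst (largest)
# P value across studies is still small. pmat is an assumed genes x studies matrix.
max_p <- apply(pmat, 1, max)                    # maxP test statistic per gene
# Under the null and independent studies, the max of k uniform P values has CDF p^k:
p_comb <- max_p ^ ncol(pmat)
fdr    <- p.adjust(p_comb, method = "BH")       # false discovery rate
consistent_genes <- rownames(pmat)[fdr < 0.01]  # the text reports 1790 such genes
```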
Diffusion Map
The batch-mitigated data, restricted to the 1790 genes identified by the maxP method (Supplementary Material and Supplementary Data and Code) (false discovery rate < 0.01), were YuGene transformed, 18 and the destiny R package 20 was used to produce the diffusion maps.
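As an illustration, a diffusion map can be produced along these lines with destiny; variable names such as `yugene`, `consistent_genes`, and `sample_info` are placeholders for the transformed matrix, the maxP gene set, and per-sample annotation.

```r
# Hedged sketch: diffusion map of the samples on the maxP genes with destiny.
library(destiny)

dm  <- DiffusionMap(t(as.matrix(yugene)[consistent_genes, ]))  # input: samples in rows, genes in columns
dcs <- eigenvectors(dm)                                        # diffusion components DC1, DC2, DC3, ...
plot(dcs[, 1], dcs[, 2],
     col  = as.integer(factor(sample_info$entity)),            # assumed per-sample entity labels
     xlab = "DC1", ylab = "DC2", pch = 19)
```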
Transcription Factor Activity Analysis
We estimated transcription factor activities in the glomerular CKD entities using DoRothEA, 21 which is a pipeline that tries to estimate transcription factor activity via the expression level of its target genes using a curated database of transcription factor (TF)-target gene interactions (TF Regulon). For more information see the Supplementary Methods.
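The sketch below shows the same idea with the current dorothea Bioconductor package and viper; the authors used their own DoRothEA pipeline (see the Supplementary Methods), so the regulon confidence levels and viper settings here are assumptions.

```r
# Hedged sketch of TF activity estimation from target-gene expression (DoRothEA-style).
library(dorothea)
library(dplyr)

data("dorothea_hs", package = "dorothea")                    # curated TF-target regulons
regulons <- dorothea_hs %>% filter(confidence %in% c("A", "B", "C"))

# expr_batchok: assumed genes x samples matrix after normalization/batch mitigation
tf_activity <- run_viper(expr_batchok, regulons,
                         options = list(method = "scale", minsize = 5,
                                        eset.filter = FALSE, verbose = FALSE))
tf_activity[c("IRF1", "USF2", "FOXM1"), 1:5]                 # e.g., TFs discussed in the text
```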
Inferring Signaling Pathway Activity Using PROGENy
We used the cyclic loess normalized and batch effect mitigated expression values for PROGENy, 22 a method that uses downstream gene expression changes due to pathway perturbation to infer the upstream signaling pathway activity. For more information see the Supplementary Methods.
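A minimal sketch with the progeny Bioconductor package follows; the scaling option and the number of footprint genes per pathway are assumptions, and the authors' settings are in their Supplementary Methods.

```r
# Hedged sketch of pathway activity inference with PROGENy's footprint genes.
library(progeny)

# expr_batchok: assumed genes x samples matrix (cyclic-loess-normalized, batch-mitigated)
pathway_scores <- progeny(expr_batchok, scale = TRUE, organism = "Human", top = 100)
head(pathway_scores)   # samples x pathways (e.g., JAK-STAT, VEGF, NFkB, PI3K, ...)
```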
Pathway Analysis With Piano
Pathway analysis was performed using the Piano package from R. 23 For more information see the Supplementary Methods.
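For illustration, a gene set analysis for one entity-versus-TN comparison might look like the sketch below, where `tt_entity` is an assumed limma topTable result and the .gmt gene set file is a placeholder.

```r
# Hedged sketch of gene set analysis with piano on one CKD-entity-vs-TN contrast.
library(piano)

gsc <- loadGSC("pathway_gene_sets.gmt")                        # placeholder gene set collection
pvals <- setNames(tt_entity$P.Value, rownames(tt_entity))      # per-gene P values
dirs  <- setNames(tt_entity$logFC,   rownames(tt_entity))      # direction of change

gsa <- runGSA(geneLevelStats = pvals, directions = dirs,
              gsc = gsc, geneSetStat = "reporter", nPerm = 1000)
head(GSAsummaryTable(gsa))                                     # enriched gene sets
```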
Drug Repositioning
For each CKD entity, the characteristic direction signature (cosine distances) was submitted to the signature search engine L1000CDS2, 24 configured in reverse mode.
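A rough sketch of such a query over the public L1000CDS2 web API is shown below. The endpoint URL, payload layout, and response field are taken from the API documentation as we understand it and should be treated as assumptions, and here a simple up/down gene set query stands in for the characteristic-direction signature described above.

```r
# Hedged sketch: query L1000CDS2 in "reverse" mode with an up/down gene signature.
# Endpoint, payload fields and the topMeta response field are assumptions.
library(httr)
library(jsonlite)

payload <- list(
  data   = list(upGenes = up_genes, dnGenes = dn_genes),  # assumed character vectors of gene symbols
  config = list(aggravate     = FALSE,                    # FALSE = search for signature reversers
                searchMethod  = "geneSet",
                share = FALSE, combination = FALSE,
                `db-version`  = "latest")
)
res  <- POST("https://maayanlab.cloud/L1000CDS2/query",
             body = toJSON(payload, auto_unbox = TRUE),
             content_type_json())
hits <- content(res, as = "parsed")$topMeta               # ranked small molecules (assumed field)
```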
Immunofluorescent Staining of Human Kidney Biopsies and Analysis
Validation involving human kidney biopsies was approved by the local ethics committee at Karolinska Institutet (Dnr 2017/1991-32). Stainings were performed on 2-µm paraffin-embedded sections as previously described. 25 For more information see the Supplementary Methods.
Assembly of a pan-CKD Collection of Patient Gene Expression Profiles
We searched in NephroSeq (www.nephroseq.org) and the Gene Expression Omnibus 26,27 and identified 5 studies, GSE20602, 10 GSE32591, 11 GSE37460, 11 GSE47183, 12,13 GSE50469 14 , with human microarray gene expression data for 9 different glomerular disease entities: FSGS, FSGS-MCD, MCD, IgAN, LN, MGN, DN, HN, and RPGN, as well as healthy tissue and nontumor part of kidney cancer nephrectomy tissues as controls (Figure 1a and b). In addition, in one dataset, patients were labeled as an overlap of FSGS and MCD (FSGS-MCD) and we left it as such. These studies were generated in 2 different microarray platforms. To jointly analyze and compare the different CKD entities, we performed a stringent preprocessing and normalization procedure involving quality control, either cyclic loess normalization or YuGene transformation, and a batch effect mitigation procedure (see the Methods section and the Supplementary Material). At the end, we kept 6289 genes from 199 samples in total. From the 2 potential controls, healthy tissue, and nephrectomies, we chose the latter for further analysis as the batch mitigation removed a large number of genes from the healthy tissue samples.
Technical Heterogeneity Across Samples
We first examined the similarities between the samples to assess further batch effects. Data did not primarily cluster by study source or platform, which can be attributed to our batch mitigation procedure (Figure 1c; Supplementary Figure S1), although some technical sources of variance potentially remained (Supplementary Figure S1). Samples from RPGN and FSGS-MCD conditions seemed to be more affected by platform-specific batch effects than samples from other conditions, due to the unbalanced distribution of samples: RPGN and FSGS-MCD samples were exclusively represented in 1 study and in 1 of the 2 platforms (Affymetrix Human Genome U133 Plus 2.0 Array [GPL570]). Therefore, the batch effect mitigation procedure could not be conducted on them.
Biological Heterogeneity of CKD Entities
We set out to find molecular differences among glomerular CKD entities. First, we calculated the differential expression of individual genes between the different CKD entities and tumor nephrectomy (TN) samples using limma. 17,28 From the 6289 genes included in the integrated dataset, 1791 showed significant differential expression (|logFC| > 1, P < 0.05) in at least 1 CKD entity. RPGN was the CKD entity with the largest number of significantly differentially expressed genes (885), and MCD was the one with the least (75). Twelve genes showed significant differential expression across all the CKD entities (AGMAT, ALB, BHMT2, CALB1, CYP4A11, FOS, HAO2, HMGCS2, MT1F, MT1G, PCK1, SLC6A8). Interestingly, all these genes were underexpressed across all the CKD entities compared with TN. In contrast, the QKI and LYZ genes were significantly overexpressed in HN, IgAN, and LN, and significantly underexpressed in FSGS-MCD and RPGN (and DN for QKI); 107 different genes were significantly differentially expressed relative to TN in at least 6 CKD entities (Figure 2a). Of note, several of the previously mentioned genes are considered to be expressed mainly in the tubule. This could be explained by contamination of the glomerular samples with tubular cells during the microdissection procedure. Future studies using single-cell RNA sequencing (scRNA-seq) will dissect which genes are specifically expressed in glomerular cells during homeostasis and disease.
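A sketch of one such contrast (RPGN versus TN) with limma is shown below; `expr_batchok` and the per-sample `group` factor are assumed inputs, and the cutoffs match those stated above.

```r
# Hedged sketch of the entity-vs-TN differential expression with limma.
library(limma)

design <- model.matrix(~ 0 + group)            # group: factor with levels including "RPGN" and "TN"
colnames(design) <- levels(group)

fit  <- lmFit(expr_batchok, design)
cont <- makeContrasts(RPGN_vs_TN = RPGN - TN, levels = design)
fit2 <- eBayes(contrasts.fit(fit, cont))

tt_entity <- topTable(fit2, coef = "RPGN_vs_TN", number = Inf)
sig <- subset(tt_entity, abs(logFC) > 1 & P.Value < 0.05)   # 885 genes are reported for RPGN
nrow(sig)
```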
To better comprehend the divergence and similarities of the CKD samples, we used diffusion maps to ask how the distinct CKD entities localize with respect to each other, based on a common set of genes differentially expressed with regard to the nontumorous part of tumor nephrectomies (Figure 2b). For illustrative purposes, we included the healthy tissue samples in the diffusion map; we did not use the healthy samples for differential expression analysis. The diffusion distances of each given CKD entity sample relative to TN samples reflect a nonlinear lower dimensional representation of the differences in gene expression profiles between those samples. The diffusion map arranges the patients along a "pseudo-temporal" order, which we interpret here as an indicator of disease progression severity in glomeruli. 29 The condition most distant from the nephrectomy samples was RPGN, which is arguably the most drastic kidney disease condition, with the most rapid functional decline among the entities included (Figure 2b). Healthy donor samples were distinct from TN samples even though the latter were resected distantly from the tumors. This might be explained by minor contamination with cancer cells, by paraneoplastic effects on the nonaffected kidney tissue, such as immune cell infiltration, or simply by the fact that the nephrectomy tissue was exposed to short ischemia whereas the biopsy tissue from healthy donors was not. DN and LN were in close proximity to RPGN, whereas HN localized near IgAN. Differences were harder to assess in the middle of the diffusion map, but were visible when plotting the dimension components pair-wise (Supplementary Figure S2). For instance, MCD samples spanned from a point proximal to TN to near FSGS, but some MCD samples were in close proximity to MGN or even HN. Although it makes sense that MCD, as a relatively mild disease with normal light microscopy, is relatively close to the control groups of TN and healthy living donors, it remains unclear why other disease entities such as LN and DN spread widely in the diffusion map. Unfortunately, the data we used did not include information about disease severity, which might help to explain this heterogeneity, with early-stage disease possibly closer to the control groups and late-stage disease closer to RPGN. Dimension component 1 (DC1) mainly separates the 2 reference conditions, TN and healthy living donor, from the CKD entities. Dimension component 2 (DC2) provides more insight into the disparity between the reference conditions. Dimension component 3 (DC3) discerns the subtler geometric relationships of the distinct CKD entities with regard to each other. In summary, using diffusion maps, we could visualize the intertwinement of the CKD entities present in our studies.
Transcription Factor Activity in CKD Entities
To further characterize the differences among the CKD entities, we performed various functional analyses. First, we assessed the activity of TFs based on the levels of expression of their known putative targets (see the Methods). These changes provide better estimates of TF activity than the expression level of the transcription factor itself 21,30 (Figure 3). We found 10 TFs differentially regulated in at least one CKD entity. Furthermore, we correlated the activities of the identified TFs with their expression. TFs with no such correlation point to factors whose activity may be modulated mainly through posttranslational modifications, or factors whose regulation or expression measurements are unreliable. For instance, interferon regulatory factor-1 (IRF1) is significantly enriched in LN and its activity is moderately correlated (Spearman's rho, r s = 0.624) with the expression level of the gene encoding IRF1. This might imply an as yet undiscovered potential role of IRF1 as a transcriptional activator in LN. In addition, the transcriptional activity of IRF1 was elevated in LN compared with the other disease entities. The activity of the upstream stimulatory factor 2 (USF-2) 31 was estimated to be significantly decreased in MCD compared with the rest of the conditions. Interestingly, the estimated activity of USF-2 across the CKD entities was inversely correlated (Spearman's rho, r s = −0.867) with the expression level of the USF-2 gene itself.
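A tiny sketch of this activity-expression correlation (per-entity averages, Spearman's rho) is given below; `tf_activity`, `expr_batchok` and `group` are assumed from the previous sketches.

```r
# Hedged sketch: correlate a TF's estimated activity with its own gene's expression.
act_by_entity  <- tapply(tf_activity["IRF1", ],  group, mean)
expr_by_entity <- tapply(expr_batchok["IRF1", ], group, mean)
cor(act_by_entity, expr_by_entity, method = "spearman")   # ~0.62 is reported for IRF1 in the text
```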
We next sought to validate the expression of the 2 previously identified TFs, USF-2 and FOXM1, in human tissue by immunostaining. We chose these 2 TFs because the activity of USF2 was predicted to be the lowest in MCD, and that of FOXM1 to be the highest in RPGN (Figure 3). We stained for USF2 in human kidney biopsies from healthy controls and patients with MCD. In glomeruli, USF2 was expressed in podocytes, the glomerular cell type mainly affected in MCD (Figure 4a and b). However, when compared with controls, USF2 expression in podocytes showed no significant difference detectable by immunofluorescence (Figure 4c-c″). This does not exclude reduced activity of USF-2, as this might be regulated not only by its abundance in the nucleus but also by its DNA binding capability, influenced in turn by, for example, posttranslational modifications and the interaction with other proteins.
We then stained for FOXM1, a transcription factor of the forkhead box family. Our analysis suggests a highly increased activity of FOXM1 in RPGN (Figure 3). We next validated this observation in human biopsy samples from patients with RPGN and healthy controls. FOXM1 showed a unique expression in CD44-positive glomerular parietal epithelial cells in RPGN lesions, whereas we did not find any expression of FOXM1 in healthy human glomeruli (Figure 4d-f). Consistent with our TF activity analysis, quantification of this finding in 5 RPGN biopsies versus 6 controls yielded a highly significant difference (Figure 4f), indicating that FOXM1 could play a role in RPGN progression. These data suggest that our computational method might be useful to identify novel regulators in CKD.
Signaling Pathway Analysis
We complemented the functional characterization of transcription factor activities with an estimation of pathway activities with the tools PROGENy 22 and Piano. 23

Pathway Activity of CKD Entities Using PROGENy

PROGENy infers pathway activity from the changes in the levels of the genes downstream of the corresponding pathways, rather than from the genes that constitute the pathway. This provides a better proxy of pathway activity than assessing the genes in the actual pathway. 22 We used PROGENy to estimate pathway activity in the different disease entities from the gene expression data (Figure 5a). Essentially, the degree of pathway deregulation was associated with the degree of disease severity, and pathways presented rather divergent activities across the CKD entities. For example, the vascular endothelial growth factor pathway was estimated to be significantly deregulated in 5 CKD entities, RPGN, HN, DN, LN, and IgAN; it is predicted to be deactivated in RPGN and DN, but activated in HN, LN, and IgAN. Ten of 11 pathways were predicted to be significantly deregulated in RPGN with respect to TN, in accordance with the diffusion map (Figure 2b) outcome; the divergence of RPGN from TN (control) was considerably more prominent both at the global transcriptome landscape and at the signaling pathway level. Intriguingly, the Janus kinase-signal transducers and activators of transcription (JAK-STAT) pathway did not appear to be affected in RPGN, but was considerably activated in LN and markedly deactivated in DN in comparison with TN. Overall, the CKD entities were characterized by distinct combinations, magnitudes, and directions of signaling pathway activities according to PROGENy. Although PROGENy can give accurate estimates of pathway activity, it is limited to 11 pathways for which robust signatures could be generated. 22 To get a more global picture, we complemented that analysis with a gene set enrichment analysis using Piano. 23 A total of 160
Prediction of Potential Novel Drugs That Might Affect the Identified Disease Signature in Different Kidney Diseases
Finally, we applied a signature search engine, L1000CDS2, 24 that prioritizes drugs expected to have a signature that is the reverse of the disease signature. This engine is based on computing the distance between the signature of the disease and the signatures of the LINCS-L1000 data, a large collection of changes in gene expression driven by drugs. We performed this analysis separately for the 9 CKD entities and identified 220 small molecules across the CKD entities (Supplementary Figure S5). To narrow down the list of 220 small molecules, we focused on 20 small molecules observed in the L1000CDS2 output of at least 3 subtypes (Figure 6a and Supplementary Figure S7). BRD-K04853698 (LDN-193189), which is known as a selective bone morphogenic protein signaling inhibitor, has been shown to suppress endothelial damage in mice with CKD. 32 Wortmannin, a cell-permeable PI3K inhibitor, decreased albuminuria and podocyte damage in early DN in rats. 33 The tyrosine kinase inhibitor nilotinib is used to treat chronic myelogenous leukemia in humans. 34 Nilotinib treatment resulted in stabilized kidney function and prolonged survival after subtotal nephrectomy in rats when compared with vehicle. 35 Finally, narciclasine was identified, and it has been reported to reduce macrophage infiltration and inflammation in the mouse unilateral ureteral obstruction model of kidney fibrosis. 36 To further explore the association of these drugs with CKD and its progression, we analyzed the expression data for the targets of the drug candidates. First, each drug candidate was mapped to the genes that encode the proteins targeted by these drugs (Figure 6b). For each gene, its differential expression in any CKD entity against TN was evaluated. Of the 11 mapped genes, MYLK3, a target of narciclasine, was significantly differentially expressed (underexpressed, logFC < −1, P < 0.05) in 2 CKD entities (IgAN and LN) (Supplementary Figure S6). Complementarily, the screened drugs were mapped to the pathways they affect based on their functional information. The enrichment of this subset of pathways was evaluated using the previous results from the gene set analysis algorithm (Piano). Here, only the PI3KCI pathway appeared to be enriched in HN (upregulated, P < 0.05), and it is the pathway affected by the candidate repositioning drug Wortmannin (a PI3K inhibitor). Taken together, these data suggest that kidney transcriptomics might be useful to predict novel potential drug candidates for CKD.
CONCLUSION
We have aimed to shed light on the commonalities and differences among glomerular transcriptomes of major kidney diseases contributing to the CKD epidemic affecting >10% of the population worldwide. Multiple pathologies are covered under the broad umbrella of CKD and, although they share a physiological manifestation (i.e., loss of kidney function), the driving molecular processes can be different. In this study, we explored these processes by analyzing glomerular gene expression from kidney biopsies obtained via microdissection. In the glomerular data set, we observed expression of many genes that are considered to be tubule-specific (e.g., ALB and CALB1). This might be due to contamination of the glomerular samples with surrounding tubular cells as a consequence of imperfect microdissection. Current technologies, including scRNA-seq, will help to dissect expression in particular cell types of the glomerulus.
Genes such as Quaking (QKI) or Lysozyme C (LYZ) were significantly overexpressed, underexpressed, or not altered depending on the underlying kidney disease. It is known that QKI is associated with angiogenic growth factor release and plays a pathological role in the kidney, 37 whereas LYZ is known to be related to the extent of vascular damage and heart failure 38 and has recently been found to be increased in plasma during CKD progression. 39 These data support the fact that, despite a stereotypic response of the kidney to injury with glomerulosclerosis, interstitial fibrosis, and nephron loss, there are various disease-specific differences that are important to understand so as to develop novel personalized therapies. CKD is a complex disease that can be acquired through a variety of biological mechanisms. Our pathway analysis reflects this heterogeneity. There was little to no overlap in significantly enriched pathways between the different kidney disease entities. We found 59 different pathways that showed significant enrichment in at least 3 disease entities (Figure 5b), indicating that different disease entities share some general mechanisms but their underlying pathophysiology differs from one to another. Besides increasing the interpretability, the pathway analysis identified many more differences among disease entities than the gene-level analysis (Figures 2a and 5b). For example, pathway analysis identified pathways related to the metabolism of lipids and lipoproteins significantly downregulated in MCD, MGN, and HN; and pathways related to fatty acid metabolism significantly downregulated in MCD, IgAN, MGN, and HN, results similar to those reported by Kang et al. 6 PROGENy (Figure 5a) yielded JAK-STAT, a major cytokine signal transduction regulator, 40 to be significantly activated in LN with respect to TN, and DoRothEA (Figure 3) predicted the TFs IRF1 and STAT1 to be significantly enriched in LN and downregulated in DN. A pathogenic role of JAK-STAT/STAT1/interferon signaling in LN is supported by various studies. [41][42][43] Indeed, different human and mouse studies have shown an upregulation of JAK-STAT signaling in DN, 44,45 in contrast to our results showing a decrease in JAK-STAT activity. However, the study of Berthier et al. 45 also revealed a downregulation of JAK2 mRNA in glomeruli of advanced/progressive diabetic kidney disease in humans. Pathway activities could vary depending on, among many other factors, the state of pathology of the cohort. Such a difference or other confounding factors could explain this discrepancy.
We next aimed to compare some of the predicted pathway activities to the literature. Interestingly, our analysis predicted increased nuclear factor-κB pathway activity in LN, and it has been shown that selective inhibition of nuclear factor-κB-inducing kinase reduces disease severity in an LN mouse model. 46 Furthermore, we predicted increased phosphoinositide 3-kinase activity in FSGS, and a human FSGS-causing mutation in the anillin gene was shown to increase phosphoinositide 3-kinase activity in podocytes. 47 We also used a signature-matching algorithm to explore potential drugs that could revert the disease phenotype. We found that 4 drugs hold promise in different CKD entities. Even though more experimental and clinical validation is required, our approach suggests that it is possible to find promising treatments for CKD via drug repositioning. In particular, for one of the identified drugs, nilotinib, use in humans has already been granted in leukemia, and there are supporting data for its value in CKD indications. 35 Analysis of expression of the drug targets found that MYLK3, a gene encoding one of the targets of narciclasine, was significantly underexpressed in IgAN and LN when compared with TN. Similarly, the PI3KCI pathway, the target of Wortmannin, was enriched in HN (upregulated, P < 0.05). This analysis attempted to refine the outcome of the repositioning analysis and at the same time helped to connect it to the disease mechanism at both the gene and pathway levels.
The analysis of TF activity revealed significantly higher FOXM1 activity in RPGN over all other kidney diseases analyzed. RPGN is characterized by a rapid decline in kidney function due to proliferation of parietal epithelial cells in the glomerulus, which leads to deterioration of the associated nephron. 48 FOXM1, a TF crucially involved in proliferation, 49 could represent a potential therapeutic target in glomerular parietal epithelial cells in RPGN. The fact that protein level of USF-2 was not significantly changed does not exclude reduced activity of USF-2, as the activity of transcription factors is very often influenced not only by their expression but also by posttranslational modifications and binding to other proteins.
We view our analysis as a first, preliminary step toward a characterization of the similarities and differences of the various pathologies that lead to CKD. As more data become available, either from microarrays or RNA sequencing, these can be integrated into our pipeline. Furthermore, the burgeoning field of scRNA-seq has just started to produce data from kidneys 50,51 and can revolutionize our understanding of the functioning of the kidney and its pathologies. 52,53 In particular, scRNA-seq data can provide signatures of the many cell types in the kidney, which in turn can be used to deconvolute the composition of cell types 12 in the more abundant and cost-effective bulk expression data sets. 53 Other data sets, such as (phospho)proteomics 54 and metabolomics, 55 may complement gene expression toward a more complete picture of the CKD entities. 56 Ideally, all these data sets will be collected in a standardized manner to facilitate integration, which was a major hurdle in our study. Such a comprehensive analysis across large cohorts, akin to what has happened for the different tumor types thanks to initiatives such as the International Cancer Genome Consortium, can lead to major improvements in our understanding and treatment of CKD. 57
TRANSLATIONAL RESEARCH
Looking forward, our aim is to extend our collection and subsequent analyses as more datasets become available. Our methods complement those already available elsewhere, in particular in NephroSeq. While we provide our own user-friendly interface to mine the results, we could envision embedding them within NephroSeq and/or other resources.
Limitations
Our results should be interpreted with caution because of the limitations coming from the data, which required an aggressive batch effect mitigation procedure. Nonetheless, our analysis could provide pointers to mechanisms in CKD entities to be further studied. Furthermore, our work provides an example of the potential usefulness of integrating publicly available data. Further, the limitations observed in this study shed light on the lack of standardization of basic experimental designs in the CKD community. To learn the most about CKD, the community should work collectively to create fundamental experimental and data handling guidelines. This should result in more comparable and robust data across research laboratories.
The integration of the data from different sources and platforms requires batch effect management, which should be customized to the data at hand. The current data were heavily affected by platform- and study-specific batch effects, because the outcome categories (CKD entities and their samples) were unevenly distributed across studies and microarray platforms. The commonly used algorithms for correcting batch effects assume a balanced distribution of outcome categories across batches and are vulnerable to group-batch imbalance. [58][59][60][61] We conducted a stringent batch effect mitigation process to minimize the influence of technical heterogeneity. Note that this is a more stringent approach than other batch correction approaches that seek to "model away" batch-related variance but retain all the data. In our case, we opted to remove genes that are most affected by batch effects. For the illustration of this procedure, see Supplementary Figure S1.
Further, we had to omit crucial and pertinent studies, such as Woroniecka et al. 62 or Beckerman et al. 3 Adding Woroniecka et al. 62 would have required introducing a third microarray platform, further complicating the batch mitigation procedure. We did not include Beckerman et al., 3 as we focused on microdissected glomerular fractions, whereas this study provides data from tubuli.
Because of the heterogeneity of the included samples and strong batch effects, we identified only a small number of genes that were differentially expressed, and thus one has to be cautious when drawing conclusions from this analysis. However, our aim is primarily to demonstrate several computational tools that have been developed mainly in cancer settings and will be of major interest in future analyses when more kidney transcriptomic data is openly available.
One important limitation of the study is the lack of detailed individual clinical data, which were not deposited together with the raw data and were also not available on request. Further, only publicly available data were included in this work and thus we did not have any influence on data generation, quality, and standardization. The limited number of available samples also forced us to pool data from patients, differentiating them only by the reported disease entities. Unfortunately, data on glomerular filtration rate and histological scores are highly sparse, and given the limited number of samples, further stratification would have left us with considerably diminished statistical power. Our analysis resulted in major differences between disease subtypes, although there are likely confounding effects due to different degrees of disease/clinical phenotype. Thus, our results should be taken with caution and rather considered as hypotheses requiring further studies to be validated.
Furthermore, microdissection often results in cross-contamination of tubule or glomerular fragments, and thus the presented glomerular data potentially contain various tubule-specific genes. Future scRNA-seq experiments will demonstrate the cell-specific and compartment-specific expression of genes and overcome the current issues with microdissection. In addition, for the drug-matching approach, we had to rely on cell lines that do not necessarily originate from kidney tissues. We plan to revisit this method when a kidney-specific data set becomes available, which will likely improve the prediction accuracy.
In summary, with this article, we do not claim to derive specific precise insights, given the clear limitations in the quality of the public kidney data available so far. Rather, we wish to demonstrate what is possible to achieve with computational functional genomics tools that can be used with high-quality omics and clinical data that will hopefully be available soon.
DISCLOSURE
All the authors declared no competing interests.
ACKNOWLEDGMENTS
This work was supported by the JRC for Computational Biomedicine, which was partially funded by Bayer AG, the European Union Horizon 2020 grant SyMBioSys MSCA-ITN-2015-ETN #675585, which provided financial support for AA, and by grants of the German Research Foundation (SFB/TRR57 and P30), a grant of the European Research Council (ERC-StG 677448) to RK, and a grant of the German Society of Internal Medicine to CK. We thank Nicolas Palacio for feedback on the manuscript.
SUPPLEMENTARY MATERIAL
Supplementary File (PDF)
Figure S1.
Figure S3. Transcriptional regulation in CKD entities. Heatmap of consistently differentially expressed genes across 6 or more disease entities (upregulation or downregulation).
Figure S4. Hierarchical clustering of CKD entities based on a common set of differentially expressed genes with regard to the nontumorous part of tumor nephrectomies. The figure is a complementary representation of Figure 2b.
Figure S5. Heatmap depicting the expression of the genes encoding for the transcription factors shown in Figure 3. The expression values were averaged within each condition, then scaled and centered across the conditions. The numbers to the right of factor names are Spearman's rank-based correlation coefficients of factor activity and factor expression across different CKD entities.
Figure S6. Enrichment of metabolic pathways after gene set analysis. Pathway analysis result for metabolic pathways ("METABOL") and their corresponding enrichment: upregulation (green), downregulation (red), and nonsignificant (white). Metabolic pathways are listed on the y-axis and disease entities on the x-axis. Only pathways enriched in at least 1 disease are shown. Note that FSGS, FSGS-MCD, and RPGN do not have any metabolic pathway significantly affected.
Figure S7. Bar graph (count of CKD entities) and heatmap of the distribution of 220 small molecules reversely correlated with 9 CKD entities. Colored bars on both the bar graph and heat map correspond to the subtype of CKD entities, and the 220 small molecules are represented on the x-axis of both graphs.
Figure S8. Volcano plot of differential expression of CKD entities versus TN for glomerular samples for the drug-targeted genes. The x-axis indicates the log2 of the fold change (FC) and the y-axis the −log10 of the P value after differential expression analysis using limma.
Figure S9. Manual curation of 4 small molecules. The figure includes drug names corresponding to 4 small molecules, biological function, Food and Drug Administration (FDA) approval status, and publications describing the clinical relevance of the particular small molecule in CKD.
Figure S10. Cell lines used in the drug-matching paradigm. The number of significant perturbations of a given cell line per condition.
Data and Code.
Supplementary Methods. Data collection, preprocessing and mapping, correlation of arrays, batch effect mitigation, detection of genes with consistently small p values across all studies, transcription factor activity analysis with DoRothEA, inferring signaling pathway activity using PROGENy, pathway analysis with piano, drug repositioning, immunofluorescent staining of human kidney biopsies and analysis, and supplementary references.
TRANSLATIONAL STATEMENT
Different etiologies cause chronic kidney disease. We integrate and analyze transcriptomic data of glomerular and tubular compartments from different entities to dissect their different pathophysiologies, which might help to identify novel entity-specific therapeutic targets.
|
2019-11-14T17:08:29.057Z
|
2019-11-13T00:00:00.000
|
{
"year": 2019,
"sha1": "bee85cbfba1f51caff708001a7b8068c045fe616",
"oa_license": "CCBYNCND",
"oa_url": "http://www.kireports.org/article/S2468024919315335/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0176f31867814ee5554ba279b25f90165823d2b6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
256221883
|
pes2o/s2orc
|
v3-fos-license
|
Dextran Formulations as Effective Delivery Systems of Therapeutic Agents
Dextran is by far one of the most interesting non-toxic, bio-compatible macromolecules, an exopolysaccharide biosynthesized by lactic acid bacteria. It has been extensively used as a major component in many types of drug-delivery systems (DDS), which can be submitted to the next in-vivo testing stages, and may be proposed for clinical trials or pharmaceutical use approval. An important aspect to consider in order to maintain high DDS’ biocompatibility is the use of dextran obtained by fermentation processes and with a minimum chemical modification degree. By performing chemical modifications, artefacts can appear in the dextran spatial structure that can lead to decreased biocompatibility or even cytotoxicity. The present review aims to systematize DDS depending on the dextran type used and the biologically active compounds transported, in order to obtain desired therapeutic effects. So far, pure dextran and modified dextran such as acetalated, oxidised, carboxymethyl, diethylaminoethyl-dextran and dextran sulphate sodium, were used to develop several DDSs: microspheres, microparticles, nanoparticles, nanodroplets, liposomes, micelles and nanomicelles, hydrogels, films, nanowires, bio-conjugates, medical adhesives and others. The DDS are critically presented by structures, biocompatibility, drugs loaded and therapeutic points of view in order to highlight future therapeutic perspectives.
DEX is a noteworthy example of the abovementioned compounds, being a non-toxic, biocompatible, biodegradable and very hydrophilic bio-polymer [25,26]. DEX is biosynthesised intra-or extracellularly by lactic acid bacteria (LAB), which represent one of the most important microbial groups due to their roles in food fermentations and synthesis of techno-functional metabolites [27]. By virtue of its properties, DEX has been used for over 50 years as a circulatory volume expander, in order to improve blood flow [13] and prevent postoperative deep-vein thrombosis [16]. It has also been used in anaemia treatment or as an antiviral agent, being selective for various viruses [13].
In food industries, DEX has technological functions, such as improving the physicochemical properties of food products, and also functional roles, such as prebiotic and immune-modulatory agents [27]. DEX acts as a hydrocolloid in the manufacturing processes of bread and other bakery products, serving as a natural component to replace chemically synthesised commercial hydrocolloids, meeting consumers' demands for fewer or zero additives in food products. At the same time, it has supplementary properties such as improving dough rheology, textural properties [31] and staling rate [32]. More recently, it was used as a thickener [33], as a surfactant emulsion's stabiliser [34] and in the production of cereal-based fermented functional beverages and ice cream [35]. The principal potential uses of DEX in foods are mostly related to its capacity to prevent crystallization and retain moisture [36].
In the non-food industry, DEX is used as a bio-separation agent (Sephadex ® gels), or as a chromatographic media due to its non-ionic character and good stability under normal operating conditions or for the construction of universal calibration curves used in the evaluation of size exclusion chromatography results [37]. It is used as a steric dispersion stabiliser in the production process of polypyrrole NPs [38].
In the pharmaceutical industry, DEX is already commercially used as a plasma substitute (by increasing volume), as an iron carrier (in the treatment of anaemia, complexed with ferric hydroxide), as an anticoagulant and antithrombotic agent (reducing blood viscosity), as a coating and protective agent for NPs used in nanodrug delivery [25], as an antioxidant and free radical scavenging agent [39], or as inducing agent for interferon biosynthesis [31,35,36,40].
From a medical point of view, interest in the development and validation of new DDS for different pathologies has grown exponentially. These systems must allow temporal and spatial control of drug delivery, maintain a continuous plasma concentration for a prolonged period, and should also improve the drugs' pharmacokinetic and biopharmaceutical properties. Another very important feature of these systems is that they must prolong drug circulation time and increase stability in the bloodstream, improving the drug's performance, which can be achieved through different types of conjugation with drugs [28].
Over the last decades, DEX has been considered the most promising candidate for the transport of a wide range of therapeutic agents, due to its outstanding physico-chemical properties and biocompatibility [28,41]. Due to the inherent mechanisms of cells which reduce the drug's effects and facilitate excretion, by using DEX in different DDS, the stability, the local drug concentration and retention time of such nanocarriers (NC) are increased [42].
After systemic administration, the pharmacokinetics of DEX-DDS is considerably influenced by the kinetics of the DEX carrier [41]. Thus, the unmodified polymer can be absorbed by the digestive tract after oral administration only in a small amount. The in vivo studies have shown that both distribution and elimination of DEX depend on the molecular mass and overall charge of the polymer. Pharmacodynamically, the DEX-DDS have resulted in a prolonged effect, a low toxicity profile and a decreased immunogenicity of bioactive molecules [16,43,44].
This review presents a critical and comprehensive overview of the recent developments regarding dextran and its applications for the transport and delivery of drugs, proteins, enzymes, imaging agents, nucleic acids, highlighting the substantial increase in therapeutic potential as compared to the free active principles.
DEX Obtained by Biosynthesis from LAB Fermentation
DEX is a polysaccharide which is biosynthesized intra- or extracellularly (as an endopolysaccharide, ENS, or an exopolysaccharide, EPS) by several microorganisms, mainly lactic acid bacteria. Commercially, DEX is usually obtained from L. mesenteroides or L. dextranicum fermentation in a medium with sucrose and a considerable nitrogen source.
In the biosynthesis of linear polysaccharides, there are two general mechanisms. In the first mechanism, the monomers are sequentially added at the non-reducing end of a growing chain using a high-energy donor. This pathway has been demonstrated for DEX biosynthesized by L. mesenteroides NRRL-B512F [47]. The second mechanism consists of the sequential addition of monomeric units to the reducing end by insertion between a carrier and the growing chain. In both mechanisms, the DEX molecule grows by extrusion, with the enzyme inserting glucose units from sucrose at one end of the polymer chain [36].
An important aspect of obtaining high amounts of bio-polymers is the fermentation conditions. Depending on the composition of the culture medium and the strain type, DEX can be obtained with a low or high molecular weight (over 150 kDa) [35,46]. Dextransucrase (1,6-α-D-glucan 6-α-glucosyltransferase, E.C. 2.4.1.5) is a generic name for a family of enzymes that synthesize DEX from sucrose [48]. The activity of dextransucrase is higher in aerobic compared to anaerobic conditions, and the biosynthesis rate is considerably improved by air-sparging [49]. Under proper aeration conditions, sucrose is converted to DEX with maximum yield. Dextransucrase has maximum stability and activity at a pH between 5.0 and 5.5, although most of the published research reports a fermentation pH of around 6.7. At pH 5.5, sucrose is converted into DEX from the beginning of the fermentation process, increasing the conversion yield by approximately 10% in a short period of time [49], preferably in the presence of small amounts of calcium [32]. The optimal biosynthesis temperature range is 30-45 °C. The enzyme's nature influences the branching degree of DEX, resulting in different structures of the macromolecule [37]. The molecular weight of biosynthesized DEX is inversely correlated with the dextransucrase concentration and directly correlated with sucrose concentration and temperature [50]. Actually, the dextransucrase cleaves the glycosidic bond in sucrose, releasing glucose, which is further used in the biosynthesis of DEX by natural polymerisation, and fructose, which is used as an energy source in different metabolic processes [51].
To increase the amount of biosynthesized EPS, research groups generally optimise the culture media composition by supplementing it with additional carbon and nitrogen sources. In one study, the strains DSM 20271 and Weissella confusa A1 were cultivated in a soya flour- or rice bran-based medium supplemented with sucrose. The aim of the study was to obtain bread with high nutritional value, and the results also showed that the obtained DEX amount was very high, at approximately 58 g/L [53]. Experiments performed in our laboratory showed that the addition of aqueous fruit extract from Hippophae rhamnoides to the LAB culture media yielded 4.8 g/L dry EPS, 2 g/L more than with standard MRS media [54], while the addition of anthocyanin-rich Hibiscus sabdariffa L. extracts to culture media supplemented with peptone and sucrose yielded biosynthesized DEX with high molecular weights [55] (see Table 1).
Biomedical Applications of Modified DEX
After thorough investigations, different research groups postulated that pure DEX-based systems cannot achieve good mechanical properties and high drug-loading capacity. Native DEX exhibits low cell-adhesive properties, and in order to obtain hydrogels with controlled cell-scaffold interactions, specific molecules must be incorporated [19]. Many research groups have chemically modified DEX by introducing functional groups into the molecule through cross-linking reactions, thereby improving mechanical strength and drug-loading ability [9,41] and increasing the number of compound classes that can be obtained. Furthermore, DEX has been shown to have metal chelating activity [46] and antioxidant properties [59], as well as antitumour activity by regulating apoptosis and autophagy [61].
Below we present the most commonly used types of modified DEX, as well as the active substances that have been loaded into DEX-based systems.
Acetalated Dextran (Ac-DEX)
The main reason for performing DEX acetalation is to allow solubility of DEX molecules in organic solvents, facilitating the encapsulation of various hydrophilic and hydrophobic active substances, which has always been challenging, and allowing their simultaneous delivery [62]. Ac-DEX is an essential derivative of DEX synthesized in mild conditions, at room temperature, from DEX and 2-methoxypropene in a one-step reaction catalysed by pyridinium p-toluene sulfonate [3]. Ac-DEX contains cyclic and methoxy acyclic acetal moieties and has been shown to be biodegradable at neutral pH, biocompatible and pH-sensitive [4,62]. Because it is an acid-sensitive polymer, Ac-DEX degrades more rapidly at lower pH, for example in the endosome of phagocytic cells, tumours, or in areas with inflammation [63], making it an ideal carrier for a wide range of therapeutics. Ac-DEX has several characteristics that make it a unique biodegradable polymer, such as facile synthesis and adjustable degradation rates. It is suitable for vaccine applications, targeted host-directed therapies to macrophages, controlled release of drugs, chemotherapeutic delivery and engineered drug-delivery devices [64]. By the simultaneous release of different active substances, synergistic effects, as well as the reduction in side effects and solubility improvement, could be achieved at lower concentrations and with improved pharmacokinetics [62]. As a therapeutic system, Ac-DEX was used to develop porous microparticles made by a single emulsion method in water/oil and loaded with rapamycin [4,65], camptothecin [66], or curcumin [67] in order to be used for pulmonary drug delivery or phagocytes' passive targeting. The delivery and release tests recorded very good results. These systems are more efficient in drugs' transport to the alveolar region of the lung, or for immune suppression therapies, than other similar systems [4,[65][66][67]. At the pulmonary level, after the post-processing of these microparticles, the respirable fraction increased with the improvement of aerosolization, and no significant damage was caused by the system to lung epithelial cells in either liquid- or air-exposed conditions [4,[65][66][67]. The dry powder aerosol formulations were capable of deep lung delivery of drugs by targeting and releasing the therapeutics to a desired location [4,[65][66][67]. By using these systems, a rapid onset of pharmaceutical action was obtained, avoiding hepatic metabolism and decreasing the side effects of the drugs. Resiquimod, a drug with antiviral and antitumour activity, was encapsulated in an electrospun Ac-DEX microparticles' scaffold and the results were remarkable for tissue engineering, wound healing, immunotherapy and drug-delivery applications [68,69]. Pyraclostrobin, an antifungal agent, was successfully loaded in pH-sensitive Ac-DEX microparticles in order to treat Sclerotinia sclerotiorum plant infections [3]. Konhäuser et al. (2022) [62] developed a DDS in order to simultaneously release L-asparaginase and etoposide. The active substances have synergistic activity against chronic myeloid leukaemia (CML) K562 cells, but L-asparaginase is hydrophilic and etoposide is hydrophobic [62]. This system has great potential for CML therapy due to its ingenious ability to release both compounds in a pH-dependent manner, leading to synergistic cytotoxicity, increased drug efficacy and reduced side effects [62].
Oxidized Dextran (oDEX)
Some research groups have obtained oDEX in order to bind therapeutically active molecules for secure delivery. DEX oxidation using sodium periodate is a catalysis-free aqueous reaction which produces a polyaldehydic DEX that can serve as a macromolecular cross-linker for amino group-bearing substances.
By using oDEX, different DDS were synthesized, including microspheres, vesicles, hydrogels and NPs. Cortesi et al. (1999) [1] synthesized oDEX gelatine microspheres loaded with the antitumour drug TAPP-Br and with cromoglycate, obtaining very good results for drug release. Curcio et al. (2020) [70] developed a self-assembling oDEX-based vesicular system loaded with camptothecin, which was determined to be very efficient against MCF-7 and MCF-10A cell lines. Antitumour drugs such as 5-fluorouracil and methotrexate were encapsulated in oDEX hydrogels for breast, skin and gastrointestinal tract cancer treatment [71]. The obtained DDS induced faster drug release and had excellent biocompatibility and degradability, therefore being suitable for anticancer therapies [71]. Novel oDEX-based NPs for insulin release [29] or loaded with 5-fluorouracil for colorectal cancer therapies [30] were also obtained and were suitable for further in vivo testing.
Zhou et al. (2022) [12] reported an oDEX-based hydrogel loaded with black phosphorus nanosheets and zinc oxide nanoparticles. This DDS was suggested to be a hopeful approach for chronic wound treatment with bacterial infection through the synergistic effect of photothermal action and immunomodulation [12]. Multiple hydrogels as transdermal DDS loaded with ceftazidime or with collagen and Epidermal Growth Factor were reported for the treatment and healing of diabetic wounds infected with multidrug-resistant bacteria [39,72].
Carboxymethyl Dextran (CMD)
CMD, a polyanionic polysaccharide, was considered as a DDS constituent since it was discovered that its functional groups facilitate chemical conjugation and ionic complexation with various drugs. Its hydrophilic characteristics facilitate prolonged drug circulation improving its tumour-targeting efficiency [73]. By itself, CMD has high antioxidant properties [74].
CMD was used as a nanocomposite hydrophilic shell in order to be loaded with glutathione as an inhibitor of reactive oxygen species' cytotoxic effects associated with tumour apoptosis [75].
Magnetic NPs were coated with CMD in order to be used as contrast agents for magnetic resonance molecular imaging (MRI) [76,77]. Several research groups used CMD-coated magnetic NPs loaded with antibodies [78], peptides [79] and enzymes [80] for different medical applications.
Dextran Sulphate Sodium (DSS)
Certain types of dextran functionalization can lead to very toxic compounds, which can, however, be useful for particular applications. DSS is a polyanionic derivative of dextran with high water solubility, containing approximately 17% sulphur, with up to three sulphate groups (-OSO3Na) per glucose molecule [81]. DSS has found wide utilization in the food, biotechnology, cosmetic and pharmaceutical industries [82]. In proper concentrations, it exhibits positive effects as an anticoagulant and antiviral agent, and it has shown blood lipid- and glucose-lowering properties in clinical studies [83]. Despite DSS's promising application prospects and biological properties, its application is limited due to its harmful effects on the gastrointestinal tract [83].
Different research groups use DSS to induce colitis, thus creating artificial conditions for studying inflammatory bowel diseases such as ulcerative colitis and Crohn's disease. The colitogenic potential of DSS depends on its molecular weight, which must be between 36 and 50 kDa. DSS produces manifestations associated with inflammatory bowel disease, such as submucosal erosions, ulceration, inflammatory cell infiltration, crypt abscesses, as well as epithelioglandular hyperplasia [81]. It also causes shrinkage of colon length and increases the relative colon weight/length ratio, accompanied by mucosal oedema and bloody stools [81]. The DSS colitis paradigm is, from many points of view, the model that best reproduces the human phenotype. For this injury, many drugs have been tested as treatment, including curcumin [84], garlic oil (which has antioxidant, anti-inflammatory and immunomodulatory effects) [85], carvacrol (a phenolic monoterpene extracted from Origanum vulgare essential oils, with antioxidant, anti-inflammatory and anticancer properties) [86], resveratrol [87], glucose-lysine Maillard reaction products [88], liquorice (a Glycyrrhiza uralensis rhizome-derived product with anti-inflammatory activity) [89], Lactobacillus sakei K040706 (with immuno-stimulatory effects) [90] and Polygonum tinctorium leaf extract (which enhances the mRNA expression of interleukin-10 and decreases the expression of tumour necrosis factor in colon tissues) [91].
DSS has also been used for film coatings with biological and biomedical applications [13]. Mixed DSS-based systems have been developed, such as eco-friendly PVA/DSS nanofibers loaded with ciprofloxacin [18] or chitosan-DSS microparticles loaded with a hydrophilic peptide used as an immunity-enhancing adjuvant or considered as a vaccine electuary [92]. An antibacterial biocapsule system obtained from multilayer self-assembled diethylaminoethyl (DEAE)-DEX hydrochloride and DSS was developed as a DDS for the treatment of kanamycin-resistant Escherichia coli. The system manifested an inhibitory effect on bacterial growth, having high potential as an antimicrobial agent in future treatments against infection [20]. Wang et al. (2020) [93] developed a dual DDS for paclitaxel and 5-fluorouracil. The pH-sensitive system exhibited a controlled release profile based on a mechanism following a two-phase kinetic model [93]. The system's efficiency was investigated on HepG2 cells, resulting in synergistic effects between the two drugs and enhanced inhibition of cancer cells, presenting good potential for biomedical delivery applications [93].
Diethylaminoethyl-Dextran (DEAE-DEX)
DEAE-DEX was the very first chemical vector used for DNA delivery, reported by Vaheri and Pagano in 1965, who used DEAE-DEX to enhance the viral infectivity of cells. The DEAE-DEX-mediated transfection method gained attention in the early 1980s because of the simplicity, efficiency and reproducibility of the procedure. DEAE-DEX forms electrostatic interaction complexes with DNA, exhibiting high transfection efficiency, but at high concentrations it is toxic to cells [94]. Recently, it was used to develop carrier polyplex nanoparticles with luciferase-coding mRNA [95] or to enhance β-interferon production [40].
Dextran Used in Drug-Delivery Systems
From a structural point of view, as a bio-polymer, DEX has molecular weights higher than 1000 Dalton and a linear backbone of α-linked D-glucopyranosyl repeating units [28]. DEX contains a large number of hydroxyl groups which are capable of conjugating bioactive molecules by direct coupling or via a linker. DEX has been used to form hydrogels [10][11][12], films [13,96], nanosystems (by itself or as a coating agent) [5,6,9,15,16] and other systems [7,8,[17][18][19][20], in order to release controllable amounts of drugs (Table 2). Recently, it was demonstrated that DEX has a protective effect on cells against oxidative stress induced by drug cytotoxicity [28,42]. DEX-based systems have also been reported to significantly improve follicular oocytes' in vitro maturation and development, with synergistic effects in 3D tissue culture development [106]. It has been postulated that in vivo drug concentrations need to be as constant as possible and optimally targeted to specific cells or organs in order to avoid prolonged treatments. Microencapsulation of antineoplastic drugs has been done using natural or synthetic polymeric materials with the aim of maintaining constant and high drug levels in the blood or at the tumour site, thus reducing multiple administrations and possibly targeting the active agents to the desired location [1].
Below, the most used systems containing DEX as a component have been reviewed.
DEX as a Hydrogel Component
The use of natural polymers in hydrogel systems' development can confer highly beneficial properties to drugs. By using DEX, optimal release profiles and desirable therapeutic characteristics can be achieved for a wide range of DDS [28]. Hydrogels as polymeric networks with swelling capacity can be biodegradable or not, and drugs can be encapsulated in these structures, obtaining delivery systems with controlled drug release [97].
DEX-containing hydrogels are considered valuable and sustainable biomaterials for biomedical applications [10]. They are being used extensively in the pharmaceutical and biomedical fields for drug delivery, tissue engineering [10], neovascularization [106], regenerative medicine, wound repair and dressings [12,41,107], due to DEX's lubrication and unique soft-wet properties similar to natural extracellular matrices [108], as well as their advantages for commercial production, such as high yields and low costs [35] (Table 2).
Traditional antibacterial hydrogels deliver large dosages of antibiotics or other drugs, increasing the risk of cytotoxicity. However, some research groups have used antimicrobial agents with synergistic activity in models of normal and diabetic wounds infected with multidrug-resistant bacteria, achieving higher therapeutic effects at lower doses compared to classical antibiotics [72].
Dextran as NP Component or Coating Agent
Over the years, intensive efforts have been made to design intelligent systems that are able to deliver drugs more efficiently to the target site and, at the same time, to minimise side effects. NPs used as DDS to enhance drugs' therapeutic efficiency are a major focus of research in the field of nano-biotechnology. Although there are many advantages associated with these NPs, such as increased solubility of hydrophobic drugs, long circulation times in the blood and higher bioavailability [109,110], there are still a number of drawbacks, such as burst release, limited stability of formulations leading to drug leakage, and nonspecific cellular uptake resulting in undesired adverse effects [9,44]. Most NPs can be tailored for specific site targeting, controlled release of drugs and high stability under different administration routes. NPs are able to penetrate easily through fine blood capillaries due to their subcellular and nanometric sizes [29,111]. Furthermore, drugs have often been covalently bonded to natural or synthetic polymers in order to reduce renal excretion [109].
DEX in its native form does not self-assemble into NPs, but nonetheless has high water retention capacity and heavy metal chelating activity for Zn 2+ , Fe 2+ , Cu 2+ , Cd 2+ and Pb 2+ [46]. Different strategies have been developed in order to fabricate DEX-based NPs for drug delivery (Table 3), among which we can mention the covalent functionalization of DEX hydroxyl groups or crosslinking of DEX through the lateral hydroxyl groups (using a variety of crosslinking reactions and linkers), both necessary for physical self-assembly into NPs [112] or for reducing in vivo accumulation and clinical risk [30,96,113,114]. In order to safely deliver a drug and to release the correct dose, it is first of all mandatory to study the physico-chemical properties of the administered drug at the location of interest. Furthermore, in order to selectively target a specific site, it is imperative to investigate the physiological properties of the microenvironment. The toxicity and biodistribution of a delivery system are influenced by the chemical nature of the components, the system's size and the coating agents [125]. By using DEX as a coating agent for NPs, the interactions with cells and proteins are limited, thus conferring increased circulating half-life and colloidal stability in biological environments, which in turn results in good overall safety in vivo with no visible tissue damage [96,129]. At the same time, by encapsulating the drug in these systems, the side effects of the drug are minimized, its efficiency is enhanced and the drug can be released at a controlled rate depending on its diffusion coefficient [44,71,120,124].
Dextran as Nanocarrier Component
Nanocarriers (NCs) are similar to NPs, but the methods of synthesis are different. In NCs, natural polymers with low molecular weights and various molecules with smaller or larger molecular weights are combined by chemical or physical processes [44,130]. The final synthesised compound then self-assembles through hydrogen bonding or electrostatic attraction into a NC system. Natural or synthetic hydrophobic substances with therapeutic activity are encapsulated either in the core or grafted onto the NC surface by chemical reactions or by electrostatic interactions [131].
Similar to NPs, NCs also help improve drug efficacy: they can increase drug absorption in tissue and cellular uptake, protect the drug from degradation and from interaction with the biological environment, and control the drug's pharmacokinetic distribution profile [132]. NCs such as liposomes, micelles or polymeric NPs have shown great promise in the field of targeted drug delivery for cancer therapy [133]. Table 4 presents DEX-based NCs developed for drug delivery.
Dextran as Micelles' Component
Micelles are a type of highly regarded DDS, especially for the delivery of hydrophobic/lipophilic drugs due to their unique physicochemical properties, containing a hydrophobic core and a hydrophilic shell. Natural polymeric micelles are more widely used in novel DDS due to their biocompatibility and tunable properties [8]. These DDS have a great capacity to encapsulate high amounts of bioactive compounds and to deliver them at targeted locations in the body.
Several groups have developed DEX-based micelles for drug delivery in a variety of pathologies. Zhang et al. (2020) [137] developed a self-assembled pH-responsive micelle formed from conjugated DEX loaded with doxorubicin and found that drug accumulation in tumours was increased due to enhanced permeation. Jin et al. (2017) [138] tested the cytotoxicity and antitumour activity of their system on MCF-7 and SKOV-3 tumour cells in vitro, with promising results. Later, a self-assembled DEX-based micelle was loaded with rapamycin, decreasing the drug's toxicity and increasing the system's uptake by tumoral cells without affecting normal cells' viability [9]. Malekhosseini et al. (2020) synthesized DEX-based micelles which had a hydrocortisone encapsulation efficiency of 79% and 90% drug release in the first 12 h, with cell viability higher than 90% [8]. Combinations of nateglinide with insulin and of vitamin E succinate with insulin loaded into DEX-based micelles reduced oxidative stress, improved mitochondrial function and glucose metabolism, and improved the cognitive capacity of mice, demonstrating a paradigm for specific, high-efficacy combination therapy for Alzheimer's disease [139].
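For orientation, encapsulation efficiency and cumulative release figures such as those quoted above are simple ratios; the snippet below shows the standard arithmetic with hypothetical masses, not the data of the cited micelle studies.

```python
# Standard definitions, illustrated with hypothetical numbers.
def encapsulation_efficiency(entrapped_mg: float, added_mg: float) -> float:
    """EE% = drug entrapped in the carrier / total drug added x 100."""
    return 100.0 * entrapped_mg / added_mg

def cumulative_release(released_mg: float, loaded_mg: float) -> float:
    """Cumulative release % = drug released so far / drug loaded x 100."""
    return 100.0 * released_mg / loaded_mg

print(encapsulation_efficiency(7.9, 10.0))   # 79.0 %
print(cumulative_release(7.1, 7.9))          # ~90 % released, e.g. at 12 h
```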
Conclusions
Dextran is a biosynthesized non-toxic, biocompatible and biodegradable macromolecule which has been extensively used as a major component in many types of DDS due to its versatile properties. Numerous DDS obtained so far using dextran have great potential in different pharmaceutical applications but, in order to maintain high DDS biocompatibility, the use of dextran obtained by fermentation with minimal chemical modification is recommended. Chemical modification of dextran can introduce artefacts into the DEX spatial structure, which can decrease biocompatibility or even increase cytotoxicity. As a result, many DDS containing acetalated dextran, carboxymethyl dextran, diethylaminoethyl-dextran or dextran sulphate sodium salt have been withdrawn from in vivo or clinical studies.
On the other hand, the multitude of developed DDS (microspheres, microparticles, nanoparticles, nanodroplets, liposomes, micelles, hydrogels, films, nanowires, bioconjugates, medical adhesives and others) have considerably increased the type and number of applications compatible with DEX-DDS. However, there is still a need for continuous DDS development in order to optimize and study as many systems as possible for biomedical and pharmaceutical applications.
|
2023-01-25T16:01:53.019Z
|
2023-01-21T00:00:00.000
|
{
"year": 2023,
"sha1": "3b1e61a8547f2d1c133e8c19bde8ac6b9d51d906",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/28/3/1086/pdf?version=1674292978",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d90545b8d66ef26489ad2cac9a4d6348c44dea03",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
215769041
|
pes2o/s2orc
|
v3-fos-license
|
A decision support system for addressing food security in the UK
This paper presents an integrating decision support system to model food security in the UK. In ever-larger dynamic systems, such as the food system, it is increasingly difficult for decision-makers to effectively account for all the variables within the system that may influence the outcomes of interest under enactments of various candidate policies. Each of the influencing variables is likely, itself, to be a dynamic sub-system with an expert domain supported by sophisticated probabilistic models. Recent increases in food poverty in the UK have raised questions about the main drivers of food insecurity, how these may be changing over time and how evidence can be used in evaluating policy for decision support. In this context, an integrating decision support system is proposed for household food security to allow decision-makers to compare several candidate policies which may affect the outcome of food insecurity at the household level.
Introduction
This paper gives a proof of concept practical application of the recently developed statistical integrating decision support system (IDSS) paradigm. An IDSS is developed for policymakers concerned with deciding between candidate policies designed to ameliorate household food insecurity within the UK context of rising food charity use.
Food Security
Food security exists when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life (FAO, 1996). Missing meals and changing diet is a common response to food insecurity, and the latter may persist over extended periods, leading to adverse health effects, especially in children (Seligman et al., 2010). Food insecurity can result in an increased risk of death or illness from stunting, wasting, weakened responses to infection, diabetes, cardiovascular diseases, some cancers, food-borne disease and mental ill health, via insufficient quantity, poor nutritional quality of food, contaminated foods, or social exclusion (Friel and Ford, 2015). Rising food insecurity has been strongly associated not just with malnutrition, but with sustained deterioration of mental health, inability to manage chronic disease, and worse child health (Loopstra et al., 2015a; Loopstra, 2014). Food insecurity is associated with hypertension and hyperlipidemia, which are cardiovascular risk factors. It is also associated with poor glycaemic control in those with diabetes, whose additional medical expenses exacerbate their food insecurity (Lee et al., 2019). Food insecurity has been found to affect school children's academic performance, weight gain, and social skills (Faught et al., 2017). Whilst obesity is more prevalent among

Tab. 1: Poverty measures across three countries. UK absolute poverty rate measures the fraction of population with household income below 60% of median income in 2010-11, updated by the Consumer Prices Index. USA Census Bureau uses a set of dollar value thresholds that vary by family size and composition to determine poverty. Canada uses the Market Basket Measure, the concept of an individual or family not having enough income to afford the cost of a basket of goods and services.

                                    UK        USA            Canada
  Overall                           19.0%     11.8%          9.5%
  Child poverty                     26.5%     16.2%          9.0%
  Working adults with no children   16.4%     -              -
  Adults 18-64                      -         10.7%          -
  Pensioners                        13.5%     9.7%           3.9%
  Food security low (very low)      10.0%     11.1% (4.3%)   12.3% (2.5%)
Comparison with USA and Canada
Like the UK, the USA and Canada are wealthy nations with significant household food insecurity. In contrast to the UK, the USA and Canada have undertaken regular monitoring of household food security over many years through the HFSS module within regular household surveys (Tarasuk et al., 2016). This means that research on the determinants and rates of food insecurity over time is more advanced and detailed in the USA and Canada than in the UK. The USA and Canada are similar to the UK in their profiles of poverty and types of government, which allows us to draw on their research where UK data and evidence are sparse.
In 2017-18, the UK absolute poverty rate was 19.0%, ranging from 26.5% among children to 13.5% among pensioners (Bourquin et al., 2019). In the USA, the official poverty rate in 2018 was 11.8%; for children under age 18 it was 16.2%, for people aged 18 to 64 it was 10.7%, and for people aged 65+ it was 9.7% (Semega et al., 2019). In Canada, the official poverty rate is 9.5% overall and 9.0% for children, and 3.9% of seniors were living in poverty in 2017 (StatCan, 2017), although the Market Basket Measure has been criticised for omitting housing and childcare costs. The Canadian Low Income Measure, 50% of median income adjusted for family size, was 12.9% in 2017 on an after-tax basis. In 2018 in the USA, 11.1% of households were food insecure and 4.3% had very low food security. In Canada the corresponding figure was 12.3% in 2011, the latest available, with 2.5% of households having very low food security (Loopstra, 2014; Tarasuk et al., 2010).
Need for decision support
There is a need to gather what information does exist for the UK in order to ascertain the principal drivers of household food security, to support policy-makers in designing policy to tackle food security, and to evaluate other policies which may impact on food security. In ever-larger dynamic systems, such as the food system, it is increasingly difficult for decision-makers to effectively account for all the variables within the system that may influence the outcomes of interest under enactments of various given policies. In particular, government policies on welfare, farming, the environment, employment, health, etc. all have an impact on food security at various levels. Each of the influencing variables is likely, itself, to be a dynamic sub-system with domain expertise, often supported by sophisticated probabilistic models. Within the food system, examples of these are medium to long range weather forecasting, which influences food supply and might involve large numerical models, and economic models such as autoregressive or moving average models, which estimate the behaviour of global markets and prices under various plausible scenarios. The emerging crisis in the UK is not merely a matter for charity, but of great concern to policymakers, who are legally and morally obligated to act, but may lack recent experience in dealing with needs of this kind and scale, and so require decision support. This paper proposes an integrating decision support system (IDSS) (Barons et al., 2018) for household food security in the UK. The IDSS is a computer-based tool which integrates uncertainties of different parts of a complex system and addresses the decision problem as a whole.
Practical considerations
In Barons et al. (2018), we detail the iterative manner of the development of an IDSS with its decision-makers and expert panels. Before the elicitation starts it is always necessary to do some preparatory work. With the help of various domain experts, the analyst will need to trawl any relevant literature and check which hypotheses found there might still be current. We repeatedly review the qualitative structure of the IDSS in light of the more profound understanding of the process acquired through more recent elicitation. This modification and improvement continues until the decision centre is content that the structure is requisite (Phillips, 1984). Since the process of model elicitation is an iterative one, it is often wise to begin with some simple utility measures, proceed with an initial structural model elicitation, and then revisit the initial list of attributes of the utility; detailed exploration of the science, economics or sociology can prompt the decision centre to become fully aware of the suitability of certain types of utility attribute measures. By focusing the centre and its expert panels on those issues that really impact on final outcomes, we can vastly reduce the scope of a potentially enormous model; only those features that might be critical in helping to discriminate between the potential effectiveness of one candidate policy against another are required. If there is strong disagreement about whether or not a dependency exists in the system, then we assume initially that a dependency does exist, except where the consensus is that its effect is weak. Further iterations of the model building process usually clarify the understanding and, if not, a sensitivity analysis can usually distinguish a meaningful inclusion from others. The decision centre also needs to decide what time step is the most natural one to use for the purposes of the specific IDSS. This choice depends on the speed of the process, how relevant data is routinely collected on some of the components, and some technical acyclicity assumptions that are typically known only to the decision analysts. There may be conflict between the granularity of informing economic models of the process, sample survey regularity, and the needs of the system. The granularity needed is driven by the granularity of the attributes of the utility. In addition, decision analysts need to match precisely the outputs of a donating panel with the requirements of a receiving panel. When these do not naturally align, some translation, possibly a bespoke model, may be needed between them. When expert panels design their own systems, sometimes the internal structure of one component can share variables with the internal structure of another. So, for example, flooding could disrupt both the production of food and its distribution, and yet these might be forecast using different components. In such cases, the coherence of the system will be lost, and the most efficient way to ensure ongoing coherence is to separate out the shared variables and ask the panels concerned to take as inputs probability distributions from the expert panel in the shared variable, flood risk. One element of these IDSS systems is the way they can appropriately handle uncertainties associated with the various modules, which is vital to reliable decision making.
For example, if the inputs from one module are very speculative, and so have a high variance, then under the sorts of risk-averse decisions we have here, policies that work well over a wide range of such inputs will tend to be preferred to ones whose efficacy is very sensitive to such inputs. That is why we need conditional inputs to communicate such uncertainties.
Integrating decision support systems
Integrating Decision Support systems are introduced in Smith et al. (2015) and Smith et al. (2016) and briefly reviewed in section 2.1. The IDSS aids decision makers in the understanding of a problem by providing a clear evaluation and comparison of the possible options available. It combines expert judgement with data for each subsystem resulting in a full inferential procedure able to represent complex systems. However, decision support systems often require sophisticated architectures and algorithms to calculate the outputs needed by the decision-makers to inform policy selection when the system is composed of many multi-faceted stochastic processes. There is currently no generic framework or software which is capable of faithfully expressing underlying processes for the scale of problems under consideration here, nor sufficiently focused to make calculations quickly enough for practical use in a dynamic, changing environment.
In this application, the framework knitting together the different component subsystems in the IDSS is the dynamic Bayesian network (West and Harrison, 1997). In particular, the model can be seen as a multi-regression dynamic model (MDM) (Queen and Smith, 1993). Here this framework is extended to allow variances to vary stochastically over time. The assumed approach is suitable because regression models are well understood, but we need to allow for the fact that within this application regression coefficients can drift in time. The dynamical model also allows for separability of the different components of the series. A simulation algorithm is developed which enables decision making to be fast and dynamic over time, even for a large system with many dependent variables and time points and with nonlinear characteristics. Using the MDM, we can model shocks to the system within the given framework by introducing change points. This sort of property is exploited in brain imaging (Costa et al., 2019). Within each of the expert panels lies a complex sub-network of variables. We adopt a BN/DBN for all the modules since these are very well developed methods used in many analogous applications, with supporting software easily available. In Section 2.1, the integrating decision support system methodology is briefly reviewed. Section 3 details the model and variables used for utility computation in the context of food security in the UK. Section 4 then presents the outputs and policy evaluation for the food security system. We end the paper with a short discussion of our findings and the planned next steps in this research programme.
Technical underpinning
In this section, we briefly review these recent methodological developments to support inference for decision support as they apply here. Full details and proofs are provided in (Smith et al., 2016).
Consider a vector of random variables relevant to the system Y = (Y_1, ..., Y_n). Typically, there are expert panels with expertise in particular aspects of the multivariate problem. The most appropriate expert panels for each sub-system are identified; each sub-panel will defer to the others, adopting their models, reasoning and evaluations as those of the most appropriate domain experts. Each expert panel, G_i, is responsible for a subvector Y_{B_i} of Y, with B_1, ..., B_m a partition of 1, ..., n. The multivariate problem is then decomposed into sub-models. The joint model thus accommodates the diversity of information coming from the different component models and deals robustly with the intrinsic uncertainty in these sub-models.
Decisions d ∈ D will be taken by a decision maker (DM), where D represents the set of all policy options that it plans to consider. In the context of large problems like this, the decision-maker is often a centre composed of several individuals. These individuals are henceforth assumed to want to work together constructively and collaboratively, supported by a probabilistic decision tool that can provide a benchmark evaluation of each d ∈ D given the underlying processes that drive the dynamics of the unfolding scenario. However, to use the Bayesian paradigm, we would like to assume that this centre will strive to act as a single rational person would when that person is the owner of the beliefs expressed in the system, so that the need for coherence is satisfied. The DM receives information from each panel and reaches a conclusion that depends on a reward function. For this level of coherence, we must be able to configure the panels and their relationships so that certain assumptions are satisfied. Below we briefly outline what these assumptions need to be. More generic descriptions can be found in (Smith et al., 2016).
We introduce some notation: let U(y, d) be the utility function for decision d ∈ D. Our main goal is to compute the expected utilities {Ū(d) : d ∈ D}, where Ū(d) = E[U(Y, d)] represents the expected utility of the decision maker under decision d.
To be formally valid, any IDSS must respect a set of common knowledge assumptions shared by all panels and which comprises the union of the utility, policy and structural consensus, described as follows.
1. Structural consensus: The structural consensus requires that all the experts agree, in a transparent and understandable manner, on the qualitative structure of the problem in terms of how different features relate to one another and how the future might unfold within the system. Formally, these agreements can be couched in terms of sets of irrelevance statements. We propose such a structure in Figure 1. There needs to be an agreed narrative of what might happen within each component of the system, based on best evidence. Also, for each component, there needs to be a quantitative evaluation of how the critical variables might be affected by the developing environment when appropriate mitigating policies are applied. Where there are agreed sets of irrelevance statements, and the semigraphoid axioms are assumed to hold (Smith, 2010), these can be used to populate the common knowledge framework belonging to a decision centre.
2. Utility consensus: requires all to agree a priori on the class of utility functions supported by the IDSS and the types of preferential independence across its various attributes it will need to entertain (such as value independence or mutually utility independent attributes (Keeney and Raiffa, 1993), and more sophisticated versions; see Leonelli and Smith (2015)). Sections 3.1 and 3.2 give details of the multiattribute utility, its measurement and rationale.
3. Policy consensus: must be sufficiently rich to contain a set of policies that might be adopted and an appropriate utility structure on which the efficacy of these different policies might be scrutinised.
4. Adequate: An adequate IDSS will be able to unambiguously calculate an expected utility score for each policy that might be adopted on the basis of the panels' inputs; if it has this property the IDSS is called adequate. Note that it should be immediate from the formulae of a given probabilistic composition to calculate these expectations whether or not the system is adequate (see Smith et al. (2016) for an illustrative example).
5. Sound: A sound IDSS is one which is both adequate and allows the decision-maker, by adopting the structural consensus, to admit coherently all the underlying beliefs about a domain overseen by a panel as her own, and so accept the summary statistics donated by the panels to the IDSS.
6. Distributive: For such a system to be formal and functional, each component panel can reason autonomously about those parts of the system they oversee and the centre can legitimately adopt their delivered judgements as its own. The semigraphoid axioms provide means to satisfy this requirement and panel autonomy liberates each panel of domain experts to produce their quantitative domain knowledge in the way most appropriate for their domain and using their own choice of probability models. They can update their beliefs through any models they might be using and continually refine their inputs to the system without disrupting the agreed overarching structure and its quantitative narrative.
7. Separately informed: An essential condition for panel autonomy is that panels are separately informed. This requirement can be subdivided within a Bayesian framework into two conditions -prior panel independence and separable likelihood -using the usual properties of conditional independence. The first of these is a straightforward generalisation of the global independence assumption within Bayesian inference (Cowell et al., 1999). The second, the assumption that the collection of data sets gives a likelihood that separates over subvectors of panel parameters, is far from automatic and is almost always violated when there are unobserved confounders or missing data. In such circumstances, one approach is to devise appropriate approximations.
8. Admissibility protocols: Another approach is to impose an admissibility protocol on the information used to make inferences within the system, analogous to the quality of evidence rules within the Cochrane Database of Systematic Reviews. When data is derived from well-designed experiments, randomisation and conditioning often lead to a likelihood which is a function only of its own parameters, and so trivially separates. When there is a consensus that a quantitative causal structure is a causal Bayesian network, dynamic Bayesian network, chain event graph or multiprocess model and the IDSS is sound (delegable, separately informed and adequate), then the IDSS remains sound under a likelihood composed of ancestral sampling experiments and observational sampling (Smith et al., 1997).
9. Transparent: In such a distributive framework, any query made by another panellist or an external auditor can be referred to the expert panel donating the summaries in question which can provide a detailed explanation of its statistical models, data, expert judgements and other factors informing how its evaluation have been arrived at and why the judgements expressed are appropriate.
For a distributive IDSS, the question then becomes precisely which information each of the panels needs to donate about their areas of expertise for the maximum expected utility scores to be calculated. Provided that the utility function is in an appropriate polynomial form, each panel need deliver only a short vector of conditional moments and not entire distributions, because this type of overarching framework embeds collections of conditional independences allowing the use of tower rule recurrences. This facilitates fast calculations and allows propagation algorithms to be embedded within the customised IDSS for timely decision-making. In such a system, individual panels can easily and quickly perform prior to posterior analyses to update the information they donate when relevant new information comes to light, and this can be propagated to update the expected utility scores; this quality is especially useful within decision support for an emergency, but in any circumstances represents a huge efficiency gain over having to rebuild and re-parameterise a large model. There are a number of frameworks which satisfy the requirements of the IDSS properties, including staged trees, Bayesian networks, chain graphs, multiregression dynamic models and uncoupled dynamic BNs.
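As a simple two-variable illustration of this point (our own example, not one from the cited papers), suppose one utility attribute is the product U = Y_1 Y_2, where one panel oversees Y_2 and another oversees Y_1 given Y_2. The tower rule gives

$$\mathbb{E}[Y_1 Y_2] \;=\; \mathbb{E}_{Y_2}\!\big[\,Y_2\,\mathbb{E}[Y_1 \mid Y_2]\,\big],$$

so the panel for Y_1 need only deliver its conditional mean E[Y_1 | Y_2] (for instance, the coefficients of a polynomial in y_2), not its full conditional distribution; the receiving panel then completes the outer expectation using its own moments of Y_2.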
The paradigm outlined here will be illustrated throughout the remainder of the paper through a proof of concept application to an IDSS for government policy for household food security in the UK, using a Bayesian network as the overarching framework.
BN and Dynamical BN
Bayesian networks (BNs) and their dynamic analogues are particularly suited to the role of decision support as they represent the state of the world as a set of variables and model the probabilistic dependencies between the variables. They are able to build in the knowledge of domain experts, provide a narrative for the system and can be transparently and coherently revised as the domain changes.
A Bayesian network is formally defined as a directed acyclic graph (DAG) together with a set of conditional independence statements having the form "A is independent of B given C", written A ⊥ B | C. They are a simple and convenient way of representing a factorisation of the joint probability density function of a vector of random variables Y = (Y_1, Y_2, ..., Y_n). Each node has a conditional probability distribution, which in the case of discrete variables will be a conditional probability table (CPT). In this model, L_i(Y_{B_i}) = Y_{Π_i}, with Π_i the indices of the parents of Y_i. The joint density of Y may be written as

$$p(\mathbf{y}) = \prod_{i=1}^{n} p\big(y_i \mid \mathbf{y}_{\Pi_i}\big).$$

Thus the expected utility is given by

$$\bar{U}(d) = \int U(\mathbf{y}, d)\, p(\mathbf{y} \mid d)\, d\mathbf{y}.$$

Dynamic Bayesian networks are able to accommodate systems which change over time (Dean and Kanazawa, 1989). DBNs are a series of BNs created for different units of time, each BN called a time slice. The time slices are connected through temporal links to form the full model. DBNs can be unfolded in time to accommodate the probabilistic dependencies of the variables within and between time steps. It is usually assumed that the configuration of the BN does not change over time, i.e. the dependencies between variables are static.
Consider the general setting in which {Y_t : t = 1, ..., T} is a multivariate time series whose univariate component processes form the vertices of a DAG, with Π_i the parent index set of Y_it and Y_i^t = (Y_i1, ..., Y_it) the historical data. The model assumes that each variable at time t depends on its own past series, the past series of its parents and the value of its parents at time t. This results in the joint density function

$$p(\mathbf{y}_t \mid \mathbf{y}^{t-1}) = \prod_{i=1}^{n} p\big(y_{it} \mid \mathbf{y}_{\Pi_i t}, \mathbf{y}_i^{t-1}, \mathbf{y}_{\Pi_i}^{t-1}\big).$$

The observation and system equations are defined as

$$Y_{it} = \mathbf{F}_{it}'\,\boldsymbol{\theta}_{it} + v_{it}, \qquad v_{it} \sim N(0, V_{it}),$$
$$\boldsymbol{\theta}_{it} = G_{it}\,\boldsymbol{\theta}_{i,t-1} + \mathbf{w}_{it}, \qquad \mathbf{w}_{it} \sim N(0, W_{it}).$$

The errors are assumed to be independent of each other and through time, and F_it, G_it are assumed to be known. Given the initial information θ_i0 | D_0 ∼ N(m_i0, C_i0), the parameters θ_it, i = 1, ..., n, may be updated independently given the observations at time t. Conditional forecasts may also be obtained independently. These results are proved in Queen and Smith (1993) assuming Gaussian distributions for the error terms. The predictive density is given by

$$p\big(y_{it} \mid \mathbf{y}_{\Pi_i t}, D_{t-1}\big) = \int p\big(y_{it} \mid \mathbf{y}_{\Pi_i t}, \boldsymbol{\theta}_{it}\big)\, p\big(\boldsymbol{\theta}_{it} \mid D_{t-1}\big)\, d\boldsymbol{\theta}_{it}.$$

Let D_t = (y_t, D_{t-1}) be the information available at time t. Inference about θ_t is based on the forward filtering equations to obtain posterior moments at time t.
- Posterior distribution at time t-1: θ_{i,t-1} | D_{t-1} ∼ N(m_{i,t-1}, C_{i,t-1}).
- Prior at time t: θ_it | D_{t-1} ∼ N(a_it, R_it), with a_it = G_it m_{i,t-1} and R_it = G_it C_{i,t-1} G_it' + W_it.
- One-step forecast: Y_it | y_{Π_i t}, D_{t-1} ∼ N(f_it, Q_it), with f_it = F_it' a_it and Q_it = F_it' R_it F_it + V_it.
- Posterior at time t: θ_it | D_t ∼ N(m_it, C_it), with m_it = a_it + A_it e_it and C_it = R_it - A_it A_it' Q_it, where A_it = R_it F_it Q_it^{-1} and e_it = y_it - f_it.

If data are observed from time 1 to T, then backward smoothing may be used to obtain the posterior moments of θ_it | D_T, t = 1, ..., T. Thus,

$$\mathbb{E}[\boldsymbol{\theta}_{it} \mid D_T] = \mathbf{m}_{it} + B_{it}\big(\mathbb{E}[\boldsymbol{\theta}_{i,t+1} \mid D_T] - \mathbf{a}_{i,t+1}\big),$$
$$\mathrm{Var}[\boldsymbol{\theta}_{it} \mid D_T] = C_{it} + B_{it}\big(\mathrm{Var}[\boldsymbol{\theta}_{i,t+1} \mid D_T] - R_{i,t+1}\big)B_{it}',$$

with B_it = C_it G_{i,t+1}' R_{i,t+1}^{-1}. The variance evolution follows West and Harrison (1997), who define V_it = V/φ_it and φ_{i,t-1} | D_{t-1} ∼ Gamma(n_{i,t-1}/2, d_{i,t-1}/2). The gamma evolution model is given by

$$\varphi_{it} \mid D_{t-1} \sim \mathrm{Gamma}\big(\delta_i n_{i,t-1}/2,\; \delta_i d_{i,t-1}/2\big),$$

with δ_i ∈ (0, 1) being the discount factors. The posterior distribution at time t is obtained analytically as φ_it | D_t ∼ Gamma(n_it/2, d_it/2), with n_it = δ_i n_{i,t-1} + 1 and d_it = δ_i d_{i,t-1} + S_{i,t-1} e_it Q_it^{-1} e_it, where S_{i,t-1} = d_{i,t-1}/n_{i,t-1}. This conjugacy results in closed-form recurrence updating equations for this variance model.
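These recursions are straightforward to code. The sketch below is a minimal univariate forward filter for a single node of the MDM, with the observational variance learned through the discount recursions above; it assumes G_t = I (a random-walk state evolution with a state discount in place of W_t) and uses illustrative variable names and synthetic data, not the authors' software.

```python
# Minimal forward filter for one node of a multiregression dynamic model, with
# observational variance learned via a discount factor (West & Harrison, 1997).
# Assumes G_t = I and replaces W_t by a state discount; names are illustrative.
import numpy as np

def forward_filter(y, F, m0, C0, n0=1.0, d0=1.0, delta_state=0.95, delta_var=0.90):
    """y: (T,) observations; F: (T, p) regressors (intercept, parent values, ...)."""
    T, p = F.shape
    m, C, n, d = np.asarray(m0, float), np.asarray(C0, float), n0, d0
    out = []
    for t in range(T):
        Ft = F[t]
        a, R = m, C / delta_state            # prior: theta_t | D_{t-1} ~ N(a, R)
        S_prev = d / n                       # current point estimate of V
        f = Ft @ a                           # one-step forecast mean
        Q = Ft @ R @ Ft + S_prev             # one-step forecast variance
        e = y[t] - f                         # forecast error
        A = R @ Ft / Q                       # adaptive coefficient
        n = delta_var * n + 1.0              # variance-learning recurrences
        d = delta_var * d + S_prev * e**2 / Q
        m = a + A * e                        # posterior state mean
        C = (R - np.outer(A, A) * Q) * (d / n) / S_prev   # posterior var, rescaled by new S
        out.append((f, Q, m.copy(), C.copy()))
    return out

# Example: intercept plus one parent series as regressors (synthetic data).
rng = np.random.default_rng(1)
parent = rng.normal(size=11)
y = 0.5 + 0.8 * parent + rng.normal(scale=0.1, size=11)
F = np.column_stack([np.ones(11), parent])
filtered = forward_filter(y, F, m0=np.zeros(2), C0=np.eye(2))
print(filtered[-1][2])   # filtered state estimate at the final time point
```

With yearly data from 2008 to 2018, as used later in the paper, T is small, so the recursions run essentially instantaneously for every node in the network.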
Expected utility computation and scenario evaluation
Suppose that θ_{1:T} has been simulated using the forward filtering, backward sampling algorithm described in subsection 2.2. The predictive posterior distribution for a replicated observation ỹ is given by

$$p(\tilde{\mathbf{y}} \mid D_T) = \int p(\tilde{\mathbf{y}} \mid \boldsymbol{\theta})\, p(\boldsymbol{\theta} \mid D_T)\, d\boldsymbol{\theta}.$$

The predictive distribution of a new observation ỹ_it may be obtained by simulating from g_it(· | ỹ^t_{Π_i}, ỹ^{t-1}_i, θ_it). If U(ỹ_t, d) is a linear function of ỹ_t, the expected utilities may be computed analytically using the chain rule of conditional probabilities. If U(ỹ_t, d) is a nonlinear function of ỹ_t, then expected values are computed by Monte Carlo integration (Robert and Casella, 2004). Note that a particular ordering in computing expectations needs to be followed, starting from the variables for which L_i(Y_it) = ∅ (the founder nodes), then their descendants, and so on.
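The Monte Carlo step can be sketched as follows: nodes are sampled in ancestral order (founder nodes first) and the nonlinear utility is averaged over the simulated replicates. The DAG, simulators and utility below are placeholders chosen for illustration, not the fitted food-security models.

```python
# Monte Carlo expected utility by ancestral simulation over a DAG.
# Placeholder graph, simulators and utility; not the fitted IDSS components.
import numpy as np

rng = np.random.default_rng(0)

parents = {"income": [], "food_cost": [], "malnutrition": ["income", "food_cost"]}
simulate = {  # each entry draws one value given parent values and the policy d
    "income":       lambda pa, d: rng.normal(10.0, 0.5),
    "food_cost":    lambda pa, d: rng.normal(d["food_cost_scale"], 0.1),
    "malnutrition": lambda pa, d: rng.normal(2.0 - 0.3 * pa["income"]
                                             + 1.5 * pa["food_cost"], 0.2),
}
order = ["income", "food_cost", "malnutrition"]   # founders first, then descendants

def utility_score(sample, d):
    return sample["malnutrition"]                 # illustrative: lower is better

def expected_utility(d, n_sims=5000):
    total = 0.0
    for _ in range(n_sims):
        sample = {}
        for node in order:
            pa = {p: sample[p] for p in parents[node]}
            sample[node] = simulate[node](pa, d)
        total += utility_score(sample, d)
    return total / n_sims

for policy in ({"food_cost_scale": 1.0}, {"food_cost_scale": 1.25}, {"food_cost_scale": 0.75}):
    print(policy, round(expected_utility(policy), 3))
```

Here the "policies" simply rescale the food-cost node, mirroring in spirit the ±25% food-cost scenarios evaluated later in Section 4.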
In addition, the types of overarching descriptions suitable for these applications must be rich enough to explore both the effects of shocks to the system and the application of policies. These can be conveniently modelled through chains of causal relationships, where causal means that there is an implicit partial order to the objects in the system and we assume that the joint distributions of variables not downstream of a controlled variable remain unaffected by that control. The downstream variables are affected in response to a controlled variable in the same way as if the controlled variable had simply taken that value. This is the assumption underlying designed experiments.
Utility function elicitation
In every decision support scenario, it is essential to clarify the goals of the decision-maker (DM). Support for household food security is provided in the UK context through local government, typically city or county councils, through their financial inclusion and child poverty policies. The goal of a city or county council in the UK is to fulfil its statutory obligations to the satisfaction of central government. Whenever possible, councils wish to go beyond mere compliance and continually improve the lives of the citizens within their geographic region, with a special focus on improving the circumstances of the most disadvantaged.
In order to construct an IDSS for food security, the next step is to define the utility function and develop a suitable mathematical form for it. One requirement of the attributes of a utility function is that they must be measurable; it must be possible to say whether an event has happened or a threshold has been reached. One candidate measure of household food security would be data from food bank charities. However, studies have shown that food bank use is not a good measure of food poverty (Kirkpatrick and Tarasuk, 2009; Coleman-Jensen et al., 2016). In the absence of a direct measure of household food security in the UK, the decision-maker needs a good proxy in order to construct a suitable utility function. Council officers identified the variables education, health and social unrest as suitable attributes of a utility.
In constructing a utility function based on these attributes, it appeared appropriate to assume value independence (Keeney and Raiffa, 1993). Let Z_1 = measures of education, Z_2 = measures of health, Z_3 = measures of social unrest, and Z_4 = cost of the ameliorating policies to be enacted. The forms of the marginal utility functions then needed to be specified. The marginal utilities for social unrest, health and education were assumed exponential, whilst the utility on cost was assumed linear. It was therefore decided that one family of appropriate utility functions might take the form given in (3), where z = (z_1, z_2, z_3, z_4) and whose parameters (a, b, c_1, c_2, c_3) were then elicited. In what follows, observable variables are defined as proxies for the attributes required to compute the utility function in (3).
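The elicited form (3) itself is not reproduced here; purely as an illustration of one value-independent family consistent with the verbal description (exponential marginal terms on education, health and social unrest, linear in cost), a sketch with assumed parameter values is given below. Lower scores are better, matching the scoring convention used later in the scenario evaluation.

```python
# One assumed value-independent utility family consistent with the description above:
# exponential marginal penalties on education (z1), health (z2) and social unrest (z3),
# linear in cost (z4). The form and the parameters a, b, c are assumptions, not the
# elicited utility (3).
import numpy as np

def utility_score(z, a=1.0, b=0.01, c=(0.4, 0.4, 0.2)):
    """z = (z1, z2, z3, z4); lower scores correspond to better outcomes."""
    z1, z2, z3, z4 = z
    attribute_penalty = sum(ci * (1.0 - np.exp(-a * zi)) for ci, zi in zip(c, (z1, z2, z3)))
    return attribute_penalty + b * z4

print(utility_score((0.25, 0.10, 0.0, 5.0)))
```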
Measuring the attributes in the utility function
The utility function depends on the proxy variables of health and education which are defined as follows.
Health: Suppose the expert panellists define a proxy as a function of the number of admissions to hospital with a diagnosis of malnutrition (primary or secondary) and the number of deaths with malnutrition listed on the death certificate as either a primary or secondary cause. Admissions data are available in the Hospital Episode Statistics (HES) from the UK government's Health and Social Care Information Service, which routinely links UK Office for National Statistics (ONS) mortality data to HES data. In the UK, the number of deaths caused primarily by malnutrition is very low and rates are not significantly different over time. Besides, malnutrition is usually accompanied by other diagnoses such as diseases of the digestive system, cancers, dementia and Alzheimer's disease. Thus, an increase in deaths with malnutrition as a contributory factor might be due to ageing of the population and not due to food insecurity. Regarding admissions with malnutrition, even the primary diagnosis numbers have increased over time, with 391 in 2007-08 and 780 in 2017-18. Thus, in this work we considered the primary and secondary admission cases as a proxy for the health variable. The variable Health is therefore defined as the count of finished admission episodes with a primary or secondary diagnosis of malnutrition coded in ICD-10. An ICD-10 code of malnutrition on the episode indicates that the patient was diagnosed with, and was therefore being treated for, malnutrition during the episode of care.
Education: The proxy for education could be defined as a function of educational attainment, such as the proportion of pupils achieving expected grades in key stages 1, 2 and 4. Even though educational attainment is published annually at local and national levels by the UK government's Department for Education, the scoring system has changed in recent years and temporal comparisons are not adequate (Hill, 2014). Thus, as a proxy for education and its relation to food security, we considered the proportion of pupils at the end of key stage 4 who were classified as disadvantaged. The variable Education is measured as the percentage of pupils at Key Stage 4 who were classified by the Department for Education as disadvantaged, including pupils known to be eligible for free school meals (FSM) in any spring, autumn, summer, alternative provision or pupil referral unit census from year 6 to year 11, or who were looked-after children for at least one day, or who were adopted from care. Before 2015 this classification covered those who had been eligible for free school meals at any point in the last 6 years and children who are 'Looked After'. In 2015 this definition was widened to also include those children who have been 'Adopted From Care'. Pupils classified as disadvantaged have a lower average educational attainment record than other pupils, and there is a direct correlation between level of qualification and unemployment in later life; poor educational attainment is also strongly correlated with teenage pregnancy, offending behaviour, and alcohol and drug misuse. Comparisons between educational attainment for disadvantaged and other pupils indicate a difference of 4.07 (2010/2011) and 3.66 (2016/2017) in the attainment gap index for Key Stage 4 for state-funded schools in England. The gap index is a score measuring the difference between the disadvantaged and non-disadvantaged groups at Key Stages 2 and 4 (Hill, 2014). The index is based on the mean rank of the disadvantaged and of the non-disadvantaged pupils, each divided by the number of pupils in the cohort. This decimal mean rank difference is scaled to 10 and ranges from 0 to 10, where a higher value means higher attainment of non-disadvantaged compared to disadvantaged pupils. The index aims to be resilient to changes in the grading systems and in the assessments and curricula, and may be used for temporal comparisons.
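A simplified reading of the gap index described above can be sketched as follows; the exact Department for Education methodology differs in its details (notably tie handling), and the scores used here are hypothetical.

```python
# Simplified attainment gap index, following the verbal description in the text:
# rank all pupils by attainment, express each group's mean rank as a fraction of the
# cohort size, and scale the difference to 0-10. Hypothetical scores; not DfE data.
import numpy as np

def attainment_gap_index(disadvantaged_scores, other_scores):
    scores = np.concatenate([disadvantaged_scores, other_scores])
    is_other = np.array([False] * len(disadvantaged_scores) + [True] * len(other_scores))
    ranks = scores.argsort().argsort() + 1            # 1 = lowest attainment (no tie averaging)
    n = len(scores)
    mean_rank_dis = ranks[~is_other].mean() / n
    mean_rank_other = ranks[is_other].mean() / n
    return 10.0 * (mean_rank_other - mean_rank_dis)   # higher value = larger gap

print(round(attainment_gap_index([32, 35, 38, 40], [41, 44, 47, 52]), 2))
```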
Social Unrest: Inadequate food security can cause food riots (Lagi et al., 2012). In the UK, a riot is defined by section 1(1) of the Public Order Act 1986 as follows: where 12 or more persons who are present together use or threaten unlawful violence for a common purpose, and the conduct of them (taken together) is such as would cause a person of reasonable firmness present at the scene to fear for his personal safety, each of the persons using unlawful violence for the common purpose is guilty of riot. Riot data are collected by the police. Whilst the likelihood of a food riot is small in the UK currently, post-riot repairs, both to the physical environment and to community relations, can be considerable.
Costs: Costs of candidate intervention policies are routinely calculated and form part of the decision-making process. Indeed, as a response to falling budgets, decision makers might revise the criteria for assistance of various kinds, for instance by making the eligible cohort smaller. Interventions which are effective but budget-neutral or cost-saving are obviously preferred; however, when the benefit of intervention may not be seen within the same financial year, this would form part of the decision-makers' discussion after the policies had been scored. This is the approach we take here, scoring the policies and leaving the costs for the final discussions of decision makers.
Structure of the IDSS
Having found a parsimonious form of utility function, we are able to begin to build the architecture of the supporting structural model. The paradigm we used for this is described in detail in (Smith, 2010). The method involves first eliciting those variables which directly influence the attributes of the utility function, then the variables which affect those variables and so on until a suitable level of detail has been obtained. This was effected using an iterative process, drawing on the food poverty literature and checking with domain experts, refining and repeating. In particular, the general framework was confirmed by work produced independently in Loopstra (2014). The variables and their dependencies for the UK food system are shown in Figure 1.
There are a range of models which can be used for the overarching model of an IDSS, as listed in Smith et al. (2016), and for the purposes of the IDSS for food security we selected a dynamic Bayesian network (DBN), as summarised in subsection 2.2. The structure was assumed to be fixed over time. Figure 1 illustrates the 16-node DBN obtained from the literature and confirmed by the experts. The node food security represents the two variables, health and education, considered in the utility function.
Expert panels
Having identified the factors influencing household food security in the UK the next step is to identify the most relevant experts to provide information on these. The panels constituted for such an IDSS will often be chosen to mirror the panels that are already constituted for similar purposes, e.g. in the UK, the Office for Budget Responsibility, HM Treasury and The Confederation of British Industry all produce economic forecasts on the UK Economy. Looking at where the relevant information is held gives some very natural panels.
The 16-node DBN illustrated in Figure 1 becomes a 9-panel IDSS (Figure 2). Panel G2 reports on the cost of food given inputs from panel G5 on food supply, incorporating imports and exports, domestic food production and supply chain disruption. Panel G5, in turn, relies on information from G8, the Met Office, on weather and climate patterns to calculate its expectations of food supply, since both domestic and world production and supply chain disruption are weather related. Household income, G1, impacts directly on the utility. Panel G1 relies on information provided by G3 and G4 to make its predictions under different policy scenarios. G4 advises on the cost of living, including energy, housing and other essentials. G3 assesses income taking into account employment, tax and social security, taking inputs from G7 and G9. G7 advises on demography, including single parents, immigrants, disability and those with no recourse to public funds. G9 advises on matters of the economy and informs the oil price panel, G6, and the cost of living panel, G4, as well as G3.
Dynamical Bayesian Network IDSS for food security
Here we assume plausible models for the expert panels and utility, based on publicly available data.
The attributes measured to compose the food network were obtained from the Office for National Statistics, which publishes official statistics for the UK. The time series for all nodes are measured yearly and the temporal window considered runs from 2008 to 2018. Each variable is detailed in Appendix A.
For the purposes of this proof of concept, social unrest was omitted since there was no available data. The health and education indicators are the attributes in the utility function and are directly affected by household income (HIncome, panel G 1 ) and food costs (CFood, panel G 2 ). The variables are modelled in the log scale as both are percentages or rates.
Panel G 1 advises on household income, aiming to reflect the amount of money that households have available after accounting for living expenditure (panel G 4 ), taxes and also access to credit and benefits (panel G 3 ).
The variable costs of food (Panel G 2 ) depends on costs of energy (panel G 6 ) and on food supply, imports and exports and food production (panel G 5 ).
Panel G 3 reports on variables affecting income such as lending, tax and unemployment. Unemployment depends on the economic context (panel G 9 ), represented by GDP, and on part-time workers (panel G 7 ).
Panel G 4 reports on costs of living, which depend on costs of food (panel G 2 ) and on costs of housing, including energy. Costs of housing depend on costs of energy (panel G 6 ).
Panel G 5 (food supply) reports on food production and imports, which depend on the economic context (panel G 9 ):

FProduction_t = θ_05,t + θ_15,t Gdp_t + θ_25,t Imports_t + ε_5t.

Panel G 6 reports on oil and energy costs given inputs from panel G 9 about the economic context.
Using these models as the panels' models, we now examine what happens to the utility under a number of scenarios. Figure 3 presents the fit and the effects of household income and food costs on health and education, obtained by recursive updating of posterior moments based on the forward filtering and backward smoothing algorithm presented in subsection 2.2. Notice the negative effect of household income and the positive effect of food costs on the rate of malnutrition and the percentage of disadvantaged pupils. Figure 4 presents the fit for all the variables in the food security network.
Model outputs and scenario evaluation
After fitting the dynamical model, different policies were compared using the IDSS approach described in Section 2. Policy 1 is 'do nothing', i.e. all variables are kept at the observed values. Policy 2 accounts for an increase of 25% in food costs, such as under a no-deal Brexit (Barons and Aspinall, 2020). Policy 3 represents a decrease of 25% in food costs, such as through government subsidies. Figure 5 presents the posterior utility function for the 3 policies. Small values of the utility are associated with smaller rates of malnutrition and a smaller percentage of disadvantaged pupils. The expected values of utility for policies 1, 2 and 3 are 0.2400, 0.2808 and 0.2091, respectively. Policy 4 considers the situation in which food costs are reduced by 15% and household income is increased by 15%, through economic or welfare interventions. In this scenario the expected utility is 0.2232. Policy 5 is an agricultural policy reducing the output of food production (relative to prices) by 25%, resulting in an expected utility of 0.2161. Note that the last scenario keeps the variables affecting food production fixed at the observed values and modifies the variables lower in the hierarchy, such as food costs.
Discussion and further developments
We have shown a proof of concept IDSS for policymakers concerned with ameliorating household food security in the UK. We have identified the main drivers of food security, drawing partly on research from the USA and Canada, where food security has been measured for a number of years and therefore the understanding of determinants of household food security is more advanced than in the UK. We have identified plausible expert panels based on UK structures and have constructed models based on publicly available data. We have demonstrated the output of the IDSS under a number of policies. We have assumed equal weighting between health and educational attainment as a proxy for food insecurity. To move from a proof of concept to a working IDSS, we would need to elicit the user preferences for display of the results, as discussed in (Barons et al., 2018).
A Description of variables used in the network
Panel G1 (household income) is represented by the variable HIncome. This variable depends on the household income after expenses.
-HIncome: Real net households adjusted disposable income per capita less the final consumption expenditure per head.
Panel G2 (food costs) is represented by the variable CFood.
-CFood: CPI index of 9 food groups, 2015=100. Food costs were measured by a combination of CPI indices of items representing household dietary diversity (Kennedy et al., 2012). The score is formed from 9 food groups: cereals, meat, fish, eggs, milk, oils and fat, fruits, vegetables and beverages.
Panel G3 (income) accounts for access to credit (Lending), tax on the income (Tax), unemployment rate and social benefits.
-Lending: Net lending (+)/net borrowing (-) by sector as a percentage of GDP -Household and non-profit institution serving households.
-Tax: Tax on the income or profits of corporations.
-Benefits: Social assistance benefits in cash as a percentage of GDP.
Panel G4 (costs of living) accounts for expenditure per head (Living) and housing costs (Chousing).
Panel G5 (food supply) accounts for output of food production (FProduction) and imports from European Union and other countries.
-FImports: Food imports from European Union countries plus imports from other countries.
Panel G7 (Demography) is represented by part-time work rates (PartTime).
Panel G8 (Weather) is represented by number of days in which the air temperature falls below 0 degrees Celsius. In these cases, sensitive crops can be injured, with significant effects on production.
-Frost: Number of days of air frost.
Panel G9 (Economy) accounts for the economic context represented by Gross Domestic Product (GDP): -GDP: Gross Domestic Product at market prices, seasonally adjusted.
|
2020-04-16T01:00:40.430Z
|
2020-04-14T00:00:00.000
|
{
"year": 2020,
"sha1": "cb8c2f6847e15ea3f4e9c217f5479ddac51d9e41",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "cb8c2f6847e15ea3f4e9c217f5479ddac51d9e41",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Mathematics",
"Business"
]
}
|
268585977
|
pes2o/s2orc
|
v3-fos-license
|
Policy Capacity Within the Local Anticorruption Political Agenda (A Study of Corruption Prevention in the Post-Covid-19 Economic Recovery Program in Malang City)
Abstract
Introduction
The issue of preventing and addressing corruption in the context of economic recovery following the Covid-19 pandemic has sparked significant debate, primarily due to a sting operation conducted by the Corruption Eradication Commission (KPK) targeting Juliari P. Batubara, the Minister of Social Affairs of the Republic of Indonesia. Juliari P. Batubara was arrested as a suspect in a case involving alleged bribery related to the procurement of goods and services within the Ministry of Social Affairs, specifically concerning the Covid-19 social assistance (Bansos) program in the Jakarta, Bogor, Depok, Tangerang, and Bekasi (Jabodetabek) regions in 2020. During this sting operation (OTT), the KPK confiscated a total of IDR 14.5 billion in cash, comprising three currencies: IDR 11.9 billion, USD 171,085, and SGD 23,000 [1]. Juliari P. Batubara (JPB) was found to have violated Article 12; moreover, perpetrators of corruption involving pandemic social assistance (Bansos) could potentially face the death penalty, in accordance with the statement by Firli Bahuri, the Chairman of the Corruption Eradication Commission (KPK), based on Article 2 of Law 31 Year 1999, which stipulates, "Anyone who intentionally enriches themselves or others, unlawfully causing financial losses to the state as described in paragraph 2, may be subject to the death penalty" [2].
The polemic surrounding current corruption cases demands significant attention within the context of corruption prevention and management in Indonesia, particularly concerning the enforcement of laws against corrupt individuals. Previous research, as highlighted by Zauhar, has revealed a notable and substantial increase in corruption cases in Indonesia. These cases are particularly pronounced in regional areas, notably during pivotal political events such as regional elections (Pemilukada). The research findings indicate a consistent pattern of rising corruption cases accompanying these political events. Given this, it is imperative and highly pertinent for local governments to prioritize addressing the surge in corruption cases within their regions as a pivotal element of their corruption prevention and management efforts. This should involve a specific emphasis on combatting corruption offenses as a primary policy agenda at the local level. The central strategy for addressing and preventing corruption cases in Covid-19 social assistance (Bansos) programs, both at the national and regional levels, should encompass the full engagement of all stakeholders. In particular, local governments should prioritize addressing corruption concerns in their regions as pivotal issues and political agendas within their policy frameworks. As articulated by Rose-Ackerman, corruption is not a challenge that can be effectively tackled in isolation. Relying solely on criminal law to pursue and punish wrongdoers is insufficient. Instead, the state must build credibility by penalizing corrupt officials who attract public attention. However, the objective of such prosecution is to raise awareness and garner public support rather than to address the root causes of corruption. According to her, there is no single straightforward approach that can be applied universally.
Rose-Ackerman emphasizes the necessity of implementing policies that enhance transparency and accountability in government operations, while also facilitating the existence of independent oversight organizations. Achieving substantial change demands unwavering commitment from the highest echelons of government and a resolute dedication to persist in anti-corruption endeavors. One of the core challenges in preventing and eliminating corruption at the local level, particularly in the allocation of social assistance (Bansos) funds during the Covid-19 pandemic, pertains to the absence of concurrent anti-corruption policies aligned with the political determination of local authorities. The current anti-corruption policy for social assistance funding (Bansos) remains predominantly top-down, and the supervision of budget utilization is closely interlinked with the roles of institutions such as the Corruption Eradication Commission (KPK), the Supreme Audit Agency (BPK), and the mass media. These entities play a pivotal role in scrutinizing the Covid-19 social assistance budget, a subject inherently infused with political dimensions.
Tackling corruption as a systemic problem calls for a thorough and unified approach.
Experts have put forth a range of strategies and approaches to combat corruption, encompassing both theoretical and practical dimensions. These anti-corruption strategies revolve around institutional reforms, enhancing accountability, and empowering society to oversee government corruption. As per Hylton and Young, the fight against corruption involves more than just enacting laws against corrupt behavior; it also entails various additional endeavors, which are delineated below: "It is also necessary to seek to prevent corruption within the public sector by managing conflicts of interest, setting up mechanisms to provide for greater accountability regarding the use of public resources, providing an avenue for persons to report acts of corruption and setting up mechanisms for greater transparency and public participation".
Local governments, in their endeavor to address and prevent corruption cases within the Covid-19 social assistance (Bansos) budget, should establish a comprehensive anti-corruption policy as their primary focus. So far, local governments have not optimally succeeded in their anti-corruption endeavors. Assessing the successes and failures of anti-corruption measures, along with comprehending the steps taken by local governments to prevent and manage corruption cases linked to the Covid-19 social assistance budget, is of paramount importance. Furthermore, understanding how local governments elevate the issue of corruption in their regions to the status of a political policy agenda, necessitating a comprehensive and integrated anti-corruption strategy within the implementation of the Covid-19 social assistance (Bansos) budget distribution, is crucial. As a result, the research questions for this study are as follows: 1. What is the mechanism of anti-corruption policy capacity in the social assistance (Bansos) Covid-19 program and economic recovery implemented by local governments? 2. How far does the capacity of anti-corruption policy in the social assistance (Bansos) Covid-19 program and economic recovery become a commitment of local governments as a political agenda?

Policy concerns the actions taken by the government. According to Dye [4], public policy can be defined as "whatever government chooses to do or not to do." Essentially, it reflects the government's decisions on what actions to undertake or avoid. Anderson [5] shares a similar perspective, defining public policy as "the choices made by the government on whether to engage in specific actions or abstain from them." Moreover, according to Anderson, as cited in Hill and Hupe, public policy refers to the regulations and directives formulated by government officials and agencies. These definitions provided by both Dye and Anderson emphasize that public policy goes beyond the mere scope of government capabilities; it encompasses activities designed to address public interests. Fundamentally, policies are regarded as a set of actions intended to achieve a range of purposes and objectives. The direction of public policy is often indicated by the decisions made by government officials or agents. Policy, seen as a course of action according to Friedrich's perspective as cited by Anderson [5], is designed to achieve a multitude of goals and objectives. However, understanding the government's intentions and aims may not always be straightforward. The direction of public policy can typically be deciphered through government officials or agents.
Policy is woven from three primary components: society, the political system, and public policy itself, with each component significantly impacting the others.In the context of examining public policy within the United States, Thomas R. Dye [4] elucidates the interconnectedness of these elements, which encompass institutions, processes, behaviors; social and economic conditions; and public policy.
In the context of this study, the formulation of political agendas and policies represents a profoundly strategic process within the reality of public policy. This process harbors a space where the definition of a public issue and its standing in the political agenda become subjects of contention. It delves into why certain concerns make their way onto the government's agenda while others remain unaddressed. Dye, as cited by Widodo [6], encapsulates agenda setting as "who decides what will be decided." Not all public problems can be elevated to the status of policy problems. To effectively address public problems through public policy, these concerns must be transmuted into policy issues. This transformative process is known as agenda setting. Consequently, agenda setting encompasses policy issues that necessitate a response from the political system, stemming from the surrounding environment [6]. Jones, as cited by [6], defines the agenda as "a term commonly used to portray those issues judged to require public action." Therefore, in the process of agenda setting, policy issues emerge as a result of divergent perspectives among the actors regarding the course the government should pursue. These policy issues arise due to conflicts or "variances in perception" among the actors or as a response to a societal predicament at a given time.
According to John W. Kingdon, the agenda-setting stage involves three streams: the problem stream, policy stream, and political stream [7]. The problem stream interprets and selects issues that the government considers as new challenges requiring resolution. Within the policy stream, potential alternatives or solutions to these issues are formulated. The political stream subsequently processes these problems through political forces to establish them as a policy agenda. These three streams intersect when a policy window opens, and this convergence is organized by those possessing the necessary capabilities and resources, often referred to as policy entrepreneurs. Through these three streams, issues undergo a transformation process to become public policy.
Regarding the policy capacity perspective within the political agenda, it utilizes the model developed by Wu et al. [8]. From the government's standpoint, policy capacity, as defined by Wu, encompasses the government's ability to create superior alternatives [9], to monitor the environment and devise strategies [10]; [11], to evaluate and assess the impacts of policy decisions [12], and to effectively harness knowledge in policy formulation [13]. Fellegi [14] provides a comprehensive concept of policy capacity, encompassing the nature and quality of the potential resources used for policy assessment, formulation, and implementation, as well as the practices and procedures by which these resources are optimized and utilized for the benefit of public service, the non-governmental sector, and society at large. Policy capacity is a combination of skills and existing resources. At the policy level, analytical capacity ensures that policy actions can technically contribute to achieving objectives when executed correctly. Operational capacity ensures that resources are aligned with policy actions, making them implementable in practice. Political capacity helps gain and maintain political support for policy actions [8]; [15]; [16]; [17]. Although political, analytical, and operational capacities are interrelated, they are regulated by different provisions and serve distinct purposes in the policy process. The success of an action doesn't demand all capacities equally; certain capacities hold more importance than others, a flexibility acknowledged within this framework [10]. Therefore, categorization significantly aids in translating the concept of policy capacity into practical implementation. Enhancing these competencies involves different processes and considerations, and this diversity would be lost if any of the three core capacities were disregarded or combined inadequately.
(Adapted from Wu et al., 2018.)
Effective policy design involves technical knowledge for practical policy analysis, the ability to disseminate knowledge, and leadership and negotiation skills at the individual level. At the government organization level, mobilizing information for timely policy analysis, administrative resources for coordination, and political support are fundamental for building comprehensive policy capacity. At the system level, institutions are needed to create and use knowledge, implement mechanisms for coordination, and foster political trust and legitimacy [10]. At the system level, assessing analytical capacity involves measuring the extent and quality of data collection across the system, the accessibility and efficiency of stakeholder involvement in the policy process, and the degree of competition and diversity in the production of policy knowledge.
At the system level, operational capacity involves overseeing public sector institutions and their interactions with community partners. Firstly, it encompasses coordinating efforts between governments and between institutions, focusing on policy integration to address cross-sectoral issues that go beyond the individual responsibilities of each organization. This involves managing policies responsively within specific sectors. Secondly, establishing robust relationships and engagement within the policy chain and the community is vital for operational capacity. Many sources emphasize that to tackle complex public challenges, public institutions need strong partnerships and collaborations within the public sector. Strong institutional relationships lead to better decision-making and implementation [18]. Thirdly, the primary operational capacity at the systemic level requires clear delineation of roles, functions, and accountability of various organizations in the policy process. Within the legal and political framework, public sector institutions not only have the freedom to execute their functions but also play a role in overseeing this freedom to ensure impartial governance. Rothstein et al. regard the enforcement of the principle of impartiality as a measure of government capacity. Enforcing accountability, legal processes, and adherence to the rule of law in public sector institutions not only upholds the principles of liberalism and democracy but also enhances government performance. Essential elements of good governance include public sector institutions taking responsibility for decisions and actions within executive institutions, partners, and the community.
Lastly, at the system level, policy capacity is determined by the ability and competence of stakeholders in policy processes to maintain public support for policy reform and resolve conflicts arising from policy actions. The first element is the level of political accountability and policy legitimacy [19]. A policy system with a high level of political capacity ensures that the failure of policies can be identified by all parties, and those responsible for making and implementing policies can be held accountable without violating the fundamental principles of governance. Civil society, independent media, and freedom of speech, assembly, and association play important roles in enhancing political accountability [20]. The second element is the level of trust in the government. A government with high levels of trust and legitimacy from the public is expected to be effective.
Effective design, in essence, ensures that policy instruments are anticipated to be consistent with governance regulations while providing means to achieve policy objectives.
A design with the ability to support policy instrument design indicates an environment marked by high analytical, operational, and political capacities [21]; [22]. The capacity required to develop politics and administration for policy design processes is a subject of great interest. To address this, policy design must understand the internal mechanisms of government and the policy sector, as constituents can enhance or weaken their ability to think systematically about policies and develop effective policies.
Organizations and individual policymakers depend on political support stemming from the policy-making environment they operate within. Therefore, they derive legitimacy and authority from the systemic-level political capacity, which, in turn, fosters a conducive environment for applying individual and organizational political capacities during the design process [23]; [24]. Political support for policymakers and the interactions between policymakers and politicians are considered indispensable for addressing ambiguous goals and enhancing managerial effectiveness. These interactions provide organizations with a clearer understanding of their overall mandate [25]; [26]. At the individual and organizational levels, political capacity plays a pivotal role in navigating effectively within the design space [27]. It is most evident in the form of trust levels, particularly political trust and legitimacy in the public sector. Political, individual, and organizational capacities are also needed to gain support from key stakeholders, both before and during the design process, as well as in the upcoming policy implementation phase [28]. The effectiveness of instruments and how they are assessed hinges on three factors. First, the extent to which substantive policy instruments are bolstered by procedural policy instruments. Second, the degree to which critical institutional prerequisites, conditioning the performance of these instruments, are incorporated in the policy. Third, the extent to which specific elements of the instrument or the outcomes of its evaluation can be adjusted in both the short and long term.
Reviews of effective design implicitly treat capacity types as an independent variable that determines policy outcomes. However, our understanding of how capacity is related to the concept of effectiveness, and of the causal mechanisms that underlie this relationship, is not always clear. In other words, policy capacity can be multidimensional, with emerging interactions between basic capacity at the first level and more aspirational capacity at the second level. Lodhi [29] and Hartley and Zhang [27] provide recommendations for a comprehensive measurement of policy capacity. These steps can yield various levels of capacity, enabling researchers to better observe and understand the relationships between them. The emphasis is on how policy capacity at one level can either enhance or constrain capacity at the other two levels, a factor that is often overlooked when conceptualizing the relationship between policy capacity and the effectiveness of policy design.
In this review, the study primarily focuses on the operationalization of specific types of capacity rather than exploring how the relationship or interaction between various capacity types influences policy outcomes. When systemic-level policy capacity is high, but individual policy capacity cannot support organizational policy capacity, it may suggest suboptimal design. Such cases indicate that we might sometimes perceive the existence of policy capacity at the systemic level as hindering the mobilization of organizational and/or individual policy capacity. However, it is crucial to recognize that the dynamics are equally essential for effective policy design. Moreover, although many scholars emphasize the significance of political capacity over operational and analytical capacity, they have not sufficiently developed propositions that can logically explain the importance of these capacities. This gap, in the next stage, may impede progress in understanding how the hierarchy and specific types of capacity can elucidate and lead to effective design.
Methods
This research used a qualitative approach, specifically adopting a phenomenological perspective, to explore the following key aspects: (1) the scope of the policy capacity mechanisms for corruption prevention in the Covid-19 social assistance program (Bansos); and (2) the mechanisms for formulating policy capacity for corruption prevention in the Covid-19 social assistance program (Bansos) and economic recovery as a political agenda. The chosen research location for this study is the Government of Malang City. Data analysis was conducted through a qualitative approach based on the Creswell model. Data collection methods involved the use of focus group discussions (FGD).

From a systemic perspective, there is currently a lack of integration in the handling of Covid-19 social assistance programs among various Regional Apparatus Organizations (OPDs). The government, which serves as the provider of protection, and the OPDs responsible for health, education, economics, and culture all operate independently, while ideally they should work in collaboration. The potential for corruption does not only stem from fraudulent budget allocation but also from incorrect targeting in aid distribution. For instance, situations where a single family receives three different types of assistance from various programs can lead to problems. The primary issues frequently encountered when the government provides social assistance are related to the accuracy of data, data updates, and the timing of distribution, which often lacks precision in both targeting and timing. Consequently, at the systemic level, policy capacity is contingent on the abilities and competencies of stakeholders in the policy process to maintain public support for policy reform and resolve conflicts arising from policy actions.
Scope of the Policy Capacity Mechanisms for Corruption Prevention in the Covid-19 Social Assistance Program

One of the manifestations of policy capacity mechanisms and of the individual, organizational, and systemic commitment of the Malang City Government to addressing the Covid-19 issue is the establishment of the Covid-19 Handling Task Force Team in Malang City. This commitment is seamlessly integrated into economic recovery efforts. Specifically, the economic recovery efforts by the Malang City Government encompass the provision of social assistance (Bansos) targeted at specific beneficiary groups. The social assistance provided by the government to revive the community's economy through the National Economic Recovery Program (PEN) takes the form of cash aid, basic necessities, support for small and medium-sized enterprises (UMKM), and electricity tariff discounts. These measures were distributed from April to December 2020 (antaranews.com, September 13, 2020). The findings of the focus group discussions (FGD) indicate that the responsibility for managing and implementing this social assistance program falls under the Department of Social Affairs, the Department of Education and Culture, and the Department of Transportation. The Department of Social Affairs' response to the Covid-19 pandemic involves providing social assistance to the community based on data from the Pre-Prosperous Family and Family Hope Program (PKH). Similarly, the Department of Education and Culture in Malang City redirected funds from the local budget (APBD) to offer social assistance to school canteen traders and artists. This initiative has been ongoing since April and involves distributing Rp 300,000 to each affected individual for three periods, totaling approximately Rp 1,126,200,000. Furthermore, the Department of Transportation's social assistance policy extends support to public transport drivers and parking attendants residing in Malang City. These measures were implemented even before the imposition of Large-Scale Social Restrictions (PSBB), and assistance has already been distributed to 996 public transport drivers and 2,207 parking attendants in Malang City. Additionally, social assistance is provided to traders by the Department of Cooperatives, Industry, and Commerce. From an organizational perspective, the commitment of the Malang City Government to economic recovery includes empowering startups, micro, small, and medium-sized enterprises (IKM and UKM) in Malang City through a combination of online and offline support, promotion, and endorsement efforts. The "IKM and UKM Resurgence" initiative is a crucial component that demands attention from the Malang City Government to stimulate the local economy.

Local governments are also expected to prioritize transparency by disclosing to the public the reallocation and use of funds for managing Covid-19. This requires local governments to commit to strengthening synergies and enhancing effectiveness in the prevention and eradication of corruption, particularly amidst the ongoing Covid-19 pandemic. Local governments must also pledge to develop innovative responses to Covid-19 in line with the principles of accountability. The policy capacity for economic recovery in response to the Covid-19 pandemic is not solely the responsibility of the central government but also falls within the purview of local governments. Local governments have proactively allocated funds for additional Direct Cash Assistance for Village Funds (BLT-DD).

Mechanisms for Formulating Policy Capacity for Corruption Prevention in the Covid-19 Social Assistance (Bansos) Program and Economic Recovery
In formulating policy capacity for political, legal, and security (polhukam) development, attention is paid to developments both domestically and internationally. Several domestic issues that need to be anticipated in the coming five years include intolerance, procedural democracy, the complexity of bureaucratic services and law enforcement, corrupt behavior, potential security threats, and national sovereignty. At the global level, issues of concern include the depolarization of international political gravity, the shift of major-power competition to maritime arenas, deglobalization, and populism leading to unilateral policies by some countries, as well as instability in the Middle East. In the problem stream, issues are identified and selected by the government as problems that require solutions to prevent corruption cases involving various stakeholders, including the executive, legislative, and private sector. The policy stream, on the other hand, focuses on crafting alternative solutions and responses to the identified problems.
Finally, the political stream encompasses the process of deliberating and processing these issues involving political forces and relevant stakeholders to include them on the public policy agenda. The convergence of these three streams and the ensuing discussions occurs when a policy window opens, and these deliberations are steered by entities with the capacity and resources, often referred to as policy entrepreneurs, who play a pivotal role in transforming these discussions into actionable public policies. In this way, issues such as corruption in the distribution of social assistance funds during the Covid-19 pandemic undergo a structured process to ultimately become public policies aimed at addressing corruption at the local level.
Conclusion
The findings and analysis highlight the necessity for local governments, particularly in Malang City, to take innovative measures in their endeavors to prevent and eliminate corruption. Despite the demonstrated commitment of the local government, corruption continues to persist within the region, primarily due to the absence of concrete programs that translate the local government's commitments into action. In the battle against corruption, mere commitment is insufficient; what is paramount is the presence of well-defined action plans and effective mechanisms to combat corruption. Enhanced oversight in corruption prevention is also imperative.
To effectively combat corruption at the local government level, the actions taken require significant support from all stakeholders, including individuals, organizations, and systems. This support should encompass state institutions, law enforcement agencies, civil society organizations, political parties, and the general public. Genuine commitment and collaboration from all relevant parties are vital. Therefore, anti-corruption efforts should extend beyond direct law enforcement actions and incorporate preventive strategies that address both cultural and structural aspects. The key factor is the ongoing and comprehensive engagement of the public in the primary strategies for preventing and eradicating corruption. This ensures that governance challenges do not lead to declining public trust, which could be exploited by corrupt individuals. Hence, robust public participation plays a pivotal role in the battle against corruption and its prevention.
|
2024-03-22T15:29:22.344Z
|
2024-03-19T00:00:00.000
|
{
"year": 2024,
"sha1": "7192ae4535c609674e14dcf6179f58bb14346614",
"oa_license": null,
"oa_url": "https://doi.org/10.18502/kss.v9i7.15487",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "07648c7cdc8005d5eff1a0306d1164abf92f0a38",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
}
|
235280939
|
pes2o/s2orc
|
v3-fos-license
|
Effect of local reinforcement particle distribution on mechanical properties of in-situ TiB/TiC1-x particle reinforced Ti-5Al-5Mo-5V-3Cr – comparison between model and experiment
In-situ TiB and TiC1-x particle-reinforced titanium matrix composites (TMCs) based on a near-β Ti-5Al-5Mo-5V-3Cr alloy (Ti-5553) reacting chemically with 4 vol.-% B4C were processed by spark plasma sintering (SPS). Different local distributions of the reinforcement particles formed within the composite were adjusted by varying the conditions during powder preparation. The measured Young's modulus, hardness and compressive yield strength of the composites increased as the reinforcement particles were distributed more homogeneously within the matrix. In addition, Young's modulus and hardness were modelled considering the hybrid TiB-whisker and TiC-particle reinforcement applying the rule of hybrid mixtures (RoHM) and the Fu-method, respectively. The compressive yield strength was modelled in accordance with the summation of the particular effective strengthening mechanisms and the Clyne-method. Measured values of the Young's modulus and the hardness were overestimated by the modelling as the clustering of the reinforcement particles increased. The compressive yield strength of the composites is modelled appropriately using the Clyne-method. However, pronounced agglomeration of the reinforcing particles within the matrix resulted in an overestimation of the measured values.
Introduction
Owing to the low density of titanium, ceramic particle reinforced titanium matrix composites (TMCs) provide a combination of outstanding specific properties, such as high specific strength, specific stiffness, and hardness with heat and/or corrosion resistance [1][2][3][4]. However, the strength of these composites is limited by the comparatively low strength of the pure titanium matrix. A considerable increase in strength of the composite would be possible if the pure titanium matrix was replaced by a high strength titanium alloy. In this context, the high strength near-β alloy Ti-5Al-5Mo-5V-3Cr (Ti-5553) appears to be a suitable matrix material. In Ti-5553, the high-temperature modification (β-phase) is stable down to ambient temperature. The high strength results from the solution strengthening effect of the alloying elements in combination with a specific heat treatment including solution treatment and aging (STA), leading to a β-microstructure with small, dispersed α-precipitates [5][6][7][8][9][10].
According to the literature it is well established that the most effective strengthening in particle reinforced TMCs is achieved by titanium carbide particles TiCp and whisker-shaped titanium boride TiBw [11][12][13][14][15][16][17][18]. By means of powder metallurgy processes it is possible to create such ceramic reinforcement particles in-situ within the matrix during the TMC synthesis. The advantage of this in-situ method is that the reinforcement's surfaces are free of contaminations like oxygen or nitrogen. Consequently, a stronger bonding of the particle/matrix interfaces is facilitated [11]. Titanium exhibits high chemical reactivity with boron and carbon. Hence, B4C powder can be used as a reactive powder for the in-situ formation of the TiC- and TiB-reinforcing particles together with titanium [16]. The solid-state reaction behavior is based on a variety of complex diffusion-controlled processes between the titanium and the B4C particles occurring at elevated temperatures up to 700-1200 °C (sometimes even higher). Detailed information concerning the thermodynamics and the kinetics of the solid-state reaction between pure titanium and B4C is given in references [19][20][21][22][23][24]. Among others, spark plasma sintering is a common fabrication technique for such TMCs. In contrast to what is described in the literature, our own studies have demonstrated the necessity of considering the real stoichiometric composition of the TiC1-x particles formed during the in-situ reaction with B4C, see reaction equation (1) below [25].
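As a hedged illustration only, and not necessarily the exact form of equation (1) used in [25], a mass-balanced version of this in-situ reaction written for the substoichiometric carbide TiC0.5 that is measured later in this study would read

$6\,\mathrm{Ti} + \mathrm{B_{4}C} \rightarrow 4\,\mathrm{TiB} + 2\,\mathrm{TiC_{0.5}}$

which simply follows from the 4:1 boron-to-carbon ratio in B4C and the assumed carbide stoichiometry.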
Furthermore, in the case of Ti-5553 the solid-state reaction behavior is strongly influenced by the matrix alloying elements, resulting in more sluggish reaction kinetics compared to the reaction with pure titanium [26]. In addition, it could be shown that the largest diameter of completely transformable B4C clusters was limited to ~10 µm if the sintering process was conducted at 1200 °C with a dwell time of 15 minutes. Consequently, the powder preparation conditions strongly influence the final degree of solid-state reaction between titanium/Ti-5553 and B4C by controlling the formation or prevention of undesirably large B4C clusters in the initial powder mixtures [25]. Besides the degree of B4C conversion, the powder preparation conditions significantly affect the local distribution of the in-situ formed reinforcement particles within the TMCs as well. Low energy powder milling in conjunction with the sluggish reaction kinetics causes the in-situ formation of reinforcement particles predominantly in the vicinity of the former matrix particle surfaces. This particle clustering corresponds to an inhomogeneous local distribution of the reinforcement particles [25,26].
Generally, the simultaneous use of two different ceramic phases (TiCp and TiBw) within the metal matrix is designated as hybrid particle reinforcement. However, the mechanical properties of such composites, e.g. hybrid Young's modulus, hybrid hardness and hybrid strength, can hardly be predicted by material models typically including only one single reinforcement phase, its phase fraction and sometimes its morphology. Relevant models in this context concerning particle reinforced composites are the Hashin-Shtrikman-model [27,28] or the Halpin-Tsai-model [11] (Young's modulus), the Rice-model [29] or the Halpin-Tsai-model [30] (hardness) and the summation of the contributions of particular strengthening mechanisms [31][32][33][34][35] and the Clyne-method [31,36,37] (strength). Fu et al. introduce two approaches, which enable the prediction of the Young's modulus for hybrid particle/short-fiber reinforced polymer matrix composites: the Rule of Hybrid Mixtures (RoHM) and a modified laminate analogy approach - hereinafter referred to as the Fu-method [38]. The RoHM describes the hybrid Young's modulus of the composite resulting from the two separate subsystems particle/matrix and short-fiber/matrix, see Figure 1. The hybrid Young's modulus $E_{c}$ is calculated according to the RoHM in the following way:

$E_{c} = E_{p/m}\,v_{p,h} + E_{f/m}\,v_{f,h}$    (2)

$E_{p/m}$ and $E_{f/m}$, and $v_{p,h}$ and $v_{f,h}$, are the Young's moduli and the relative hybrid volume fractions of the systems particle/matrix and short-fiber/matrix, respectively. $E_{p/m}$ and $E_{f/m}$ have to be calculated applying suitable models for the particular subsystems, each including only one single reinforcement phase. Note that the total reinforcement volume fraction $V_{r} = V_{p} + V_{f}$ ($V_{p}$ and $V_{f}$ are the volume fractions of the particles and short-fibers, respectively) should be used as reinforcement volume fraction for the calculation of $E_{p/m}$ and $E_{f/m}$. Finally, the relative hybrid volume fractions in equation (2) are defined as $v_{p,h} = V_{p}/V_{r}$ and $v_{f,h} = V_{f}/V_{r}$.

Figure 1. Application of the Rule of Hybrid Mixtures (RoHM) to a particle and short-fiber reinforced composite (schematically) according to Fu et al. [38].
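To make the bookkeeping of equation (2) concrete, the following is a minimal Python sketch; it is not code from the original study, and the subsystem moduli passed in are hypothetical placeholders that would in practice come from a single-phase model such as Halpin-Tsai.

```python
def rohm_modulus(E_pm, E_fm, V_p, V_f):
    """Rule of Hybrid Mixtures (RoHM) for a hybrid particle/short-fiber composite.

    E_pm : Young's modulus of the particle/matrix subsystem (GPa),
           computed with the *total* reinforcement fraction V_p + V_f.
    E_fm : Young's modulus of the short-fiber/matrix subsystem (GPa), same convention.
    V_p, V_f : volume fractions of particles and short fibers in the composite.
    """
    V_r = V_p + V_f                     # total reinforcement volume fraction
    v_ph, v_fh = V_p / V_r, V_f / V_r   # relative hybrid volume fractions
    return E_pm * v_ph + E_fm * v_fh    # equation (2)

# Hypothetical illustration (placeholder values, not measured data):
# roughly 2 vol.-% carbide-type particles and 2 vol.-% boride-type whiskers
print(rohm_modulus(E_pm=118.0, E_fm=124.0, V_p=0.02, V_f=0.02))
```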
The Fu-method is a two-step procedure. In a first step, the particle reinforced matrix is defined as a new phase - referred to as the effective matrix. Within the model, this effective matrix is reinforced by the short-fibers in a second step, see Figure 2. Both steps allow the application of material models for composites with a single reinforcement phase. In contrast to the RoHM, the calculation of the Young's modulus of the effective matrix in the system particle/matrix is based on the relative volume fractions of the matrix and the particles within this subsystem, rather than on the total reinforcement volume fraction.

Figure 2. Application of the Fu-method to a particle and short-fiber reinforced composite (schematically) according to Fu et al. [38].
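The Python sketch below illustrates the two-step logic just described; it is not taken from the original paper. The standard Halpin-Tsai form is used as the single-phase subsystem model, and the numerical inputs, the shape parameters and the exact volume-fraction convention in step 1 are assumptions for illustration only.

```python
def halpin_tsai(E_m, E_r, V_r, zeta):
    """Standard Halpin-Tsai estimate for a single reinforcement phase.

    E_m  : matrix Young's modulus (GPa), E_r : reinforcement Young's modulus (GPa)
    V_r  : reinforcement volume fraction
    zeta : shape parameter (~2 for roughly equiaxed particles,
           ~2 * aspect ratio for whiskers loaded along their axis)
    """
    eta = (E_r / E_m - 1.0) / (E_r / E_m + zeta)
    return E_m * (1.0 + zeta * eta * V_r) / (1.0 - eta * V_r)

def fu_method(E_m, E_p, E_w, V_p, V_w, zeta_p, zeta_w):
    """Two-step Fu-method: (1) particles stiffen the matrix -> effective matrix,
    (2) whiskers then reinforce that effective matrix."""
    # Step 1: particle fraction taken relative to the particle/matrix subsystem
    V_p_sub = V_p / (1.0 - V_w)
    E_eff_matrix = halpin_tsai(E_m, E_p, V_p_sub, zeta_p)
    # Step 2: whiskers reinforce the effective matrix at their composite fraction
    return halpin_tsai(E_eff_matrix, E_w, V_w, zeta_w)

# Hypothetical placeholder values (GPa, volume fractions), for illustration only
print(fu_method(E_m=110.0, E_p=440.0, E_w=480.0,
                V_p=0.02, V_w=0.02, zeta_p=2.0, zeta_w=2.0 * 7.5))
```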
The present study analyses the influence of the local reinforcement particle distribution within hybrid in-situ TiB/TiC particle-reinforced Ti-5Al-5Mo-5V-3Cr matrix composites on selected mechanical properties. Especially with respect to improving the predictability of these properties, first investigations focused on the comparison between the measured and the modelled values for the composite's hybrid Young's modulus, hardness and compressive yield strength. Suitable model approaches, which consider the hybrid particle reinforcement, were applied to the TMC property calculations. Deviations between experiment and model were discussed including the role of the local distribution of the reinforcing particles within the TMCs.
Experimental and materials
In the present study, a Ti-5Al-5Mo-5V-3Cr powder (Ti-5553, TLS Technik GmbH & Co Spezialpulver KG, Germany) exhibiting a spherical particle morphology served as the matrix material. B4C (abcr GmbH, Germany) with a significantly lower particle size was used as a reactive powder for the in-situ reinforcement of the matrix. The chemical compositions and particle size distributions of the initial powders are given in Table 1 and Table 2. Powder mixtures of Ti-5553 and 4 vol.-% B4C were prepared under high energy milling conditions to break up the large B4C clusters existing in the initial B4C-powder, see Table 3. The powders were either milled in a planetary ball mill (Pulverisette 6, Fritsch GmbH, Germany) at ambient conditions or by vibration milling in a cryogenic mill (CryoMill, Retsch GmbH, Germany), while the grinding bowl was permanently cooled from the outside by gaseous nitrogen. The density of the samples was measured using the Archimedes method in distilled water. For the microstructural investigations, samples were mechanically ground and polished. This was followed by chemical-mechanical polishing and subsequent etching with Kroll's reagent for 20-40 seconds. Matrix grain size determination and size measurements of the reinforcement particles required etching with Kroll's reagent for a shorter time and subsequently thermal etching according to the procedure proposed by Vander Voort [39]. Phase analysis was performed by X-ray diffraction as described in an earlier study [25]. The Young's modulus was determined by measurement of the longitudinal and transversal wave velocities, employing the acoustical logging method on the faceground compact samples using a Hitachi V-1565 Oscilloscope (Hitachi, Japan) in combination with a Panametrics Model 5800 computer-controlled pulser/receiver (Panametrics GmbH, Germany). Due to the narrow sample diameter, the running periods of the acoustic sound waves were measured three times with an accuracy of at least 0.01 µs at the same position at each sample. A micrometer screw was used to determine the sample thickness between the faceground surfaces with an accuracy of 0.001 mm at three different sample positions. Finally, the wave velocities for Young's modulus calculation were derived from the mean values of the measured sound wave running periods and the mean sample thickness. The Vickers hardness was determined from at least five indentations introduced with an indentation load of 294.2 N. For the compression tests, cylindrical samples with equal diameters and heights of 4 mm were cut from the sintered bodies by means of electrical discharge machining. Quasi-static compression tests at a strain rate of 10^-3 /s were carried out at room temperature using an MTS 810 servo-hydraulic testing machine (MTS, USA). The compressive strain was calculated from the actual compression of the specimen, measured by a displacement gage. At least three compression tests were carried out for each composite to improve the statistical validation of the measured values.
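For orientation, the standard relation for an isotropic solid that converts the measured longitudinal and transversal wave velocities $v_{l}$ and $v_{t}$, together with the density $\rho$, into the Young's modulus is recalled below; this textbook relation is stated here as an assumption about the evaluation, not as a reproduction of the authors' exact procedure:

$E = \rho\,v_{t}^{2}\,\dfrac{3v_{l}^{2} - 4v_{t}^{2}}{v_{l}^{2} - v_{t}^{2}}$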
Results and discussion

Figure 3 shows a comparison of the initial powder mixtures and the microstructures of the sintered samples for each milling condition, respectively. Especially during the high energy ball milling, the matrix particles undergo a strong plastic deformation, which is accompanied by pressing of the smaller B4C particles into the matrix particle surfaces. Moreover, coarse Ti-5553/B4C particle agglomerates were formed through increasing cold welding of the deformed matrix particles. In contrast, less plastic deformation of the matrix particles is observed during the cryogenic milling. Nevertheless, the high energy impact successfully avoids the formation of coarse B4C particle agglomerates. However, the individual B4C particles rather adhere to the matrix particle surfaces than being pressed into them.
The different morphologies of the initial powder mixtures strongly affect the local distribution of the reinforcing particles within the bulk TMCs. Obviously, the reinforcement particles in TMC-B were predominantly located in the vicinity of the former matrix particle surfaces, while fewer reinforcements were formed inside the matrix particle volume. This appearance is in agreement with the findings for a comparable powder mixture based on Ti-5553 and B4C presented in an earlier study [26]. The microstructure of the TMC-A sample shows a more homogeneous distribution of the reinforcement particles. This can be ascribed to the repeated deformation, fracture and cold welding of the matrix particles during powder preparation resulting in the formation of agglomerates with embedded B4C particles.
A comparison of the mechanical properties between TMC-A and TMC-B is limited to the requirement of approximately equal phase contents with respect to the in-situ formed TiB and TiC0.5 particles within the TMC samples. In other words, the complete solid-state reaction of the B4C particles during sintering has to be ensured, thus avoiding superimposed effects of different particle contents. Both composites exhibit almost the same density, see Table 4. This and the absence of unreacted B4C clusters within the microstructure prove high degrees of B4C conversion in the TMCs. The measured TiB and TiC0.5 phase fractions only vary within the precision of the XRD method, resulting in almost identical total reinforcement particle volume fractions, designated as Σ(TiB + TiC0.5). The measured phase fraction of α-Ti (Ti-hcp) in TMC-A may have resulted from multiple effects connected to the powder milling and sintering processes. Due to the high chemical affinity of titanium to oxygen and nitrogen, the repeated deformation and fracture of the matrix particles during the powder preparation in air atmosphere caused their contamination through interstitial solid solutioning, see Table 5.
Oxygen and nitrogen have a strong α-phase stabilizing effect on titanium. The influence of carbon on the phase stability in titanium is similar to that of oxygen and nitrogen. However, solid solution of carbon within the matrix particles occurs first during the sintering process as a consequence of the B4C decomposition. Finally, an earlier study proved that the in-situ reaction with B4C effected a significant change in the chemical composition within the initial Ti-5553 particles, in particular leading to an increase of the aluminum concentration [25]. Next to the interstitials mentioned before, aluminum acts as a strong α-stabilizing element in titanium alloys as well. However, the matrix in TMC-B consisted of β-phase only. Consequently, the β-phase destabilizing effects associated with the sintering process seem to have a much lower efficiency than the powder contamination with oxygen and nitrogen. The measured microstructural parameters of the TMCs are given in Table 6. Identically sintered samples based on the unreinforced Ti-5553 powder reveal a β-phase grain size of 229 µm. Hence, the β-phase grain size within both composites was decreased by about two orders of magnitude. The TiB whiskers were characterized by their length $l$, diameter $d$ and aspect ratio $l/d$. Characteristic microstructure parameters of the plate-shaped TiC0.5 particles are represented by their width $w$, thickness $t$ and aspect ratio $s = 2l/t$, with the particle length $2l$, according to the definition of Nardone and Prewo [40]. The TMCs' mechanical properties Young's modulus, hardness and compressive yield strength increase with a more homogeneous distribution of the reinforcement particles, see Table 7. A comparison of the compressive behavior is given by the compressive stress-strain curves in Figure 4.

Figure 4. Typical compressive stress-strain curves of TMCs.
Modelling of the TMCs' mechanical properties was based on measured phase compositions and microstructure parameters. The partial destabilization of the β-phase in the matrix of TMC-A was considered for the matrix property calculations by applying a linear rule of mixture using the relative volume fractions of the α- and β-phase, respectively. This also includes the Hall-Petch strengthening effect due to the reduced β-grain size within both composites. Mechanical properties of the individual phases were taken from the literature (if available) or had to be determined in additional experiments. Input variables for the hybrid Young's moduli and the hybrid hardness modelling are summarized in Table 8. The Young's moduli and the hardness of the subsystems TiCp/matrix and TiBw/matrix (RoHM) as well as of the effective matrix and its reinforcement by TiBw (Fu-method) were calculated according to the Halpin-Tsai equations [11,30,45], thus considering the microstructural morphology of the individual reinforcement phases. Another fact that could not be ignored was the misorientation between the TiB whiskers and the load direction. The Halpin-Tsai equations are applicable only to short-fibres and whiskers that are aligned parallel to the loading direction. Ryu et al. found that the whisker orientation influence can be expressed in the form of an effective aspect ratio $s_{eff}(\theta)$, considering the probability density function of misaligned whiskers $g(\theta)$, where $\theta$ is the misorientation angle [46].
$s_{eff}(\theta)$ is calculated from this probability density function following [46]. For randomly aligned whiskers - like in the present TMCs - the probability density function reduces to $g(\theta) = 2/\pi$ [47]. A comparison between measured and modelled values is given in Table 9. Measured values of the Young's modulus and the hardness were overestimated by the RoHM and the Fu-method, respectively. However, this overestimation increases significantly as the local distribution of the reinforcement particles within the matrix becomes more inhomogeneous. A large number of effective strengthening mechanisms could be identified in the present composites. This includes the grain boundary strengthening (Hall-Petch effect) of the β-matrix grains $\Delta\sigma_{HP}$, the solid solution strengthening by interstitials (oxygen, nitrogen and carbon) $\Delta\sigma_{SS}$, the load-transfer strengthening $\Delta\sigma_{LT}$ and the Orowan strengthening $\Delta\sigma_{Or}$ of both carbides and whiskers. In TMC-A the β→α phase transformation resulted in an increased matrix strength by the amount of $\Delta\sigma_{\alpha}$. A detailed description of the single calculation steps in determining the strengthening contributions is given by Grützner in [48]. The modelled values are listed in Table 10. As mentioned in the introduction, two distinct approaches are suggested in the literature which enable the strength modelling of particle reinforced composites. One is the summation approach, where the single contributions $\Delta\sigma_{i}$ of the particular strengthening mechanisms are added to the matrix yield strength $\sigma_{m}$. According to the Clyne-method, the final strength of the composite is instead obtained by adding to the matrix strength the square root of the sum of squares of all the individual strengthening contributions:

$\sigma_{c} = \sigma_{m} + \sqrt{\sum_{i} (\Delta\sigma_{i})^{2}}$

Table 11 shows a comparison of the modelled hybrid yield strengths for the composites with the measured values. Yield strengths were significantly overestimated by the summation approach. Hence, this approach is unsuitable for the current composites, which is associated with the superposition of the individual strengthening effects [31,37]. These superposition effects are considered by the Clyne approach. However, precision decreases again if the reinforcing particles are distributed more inhomogeneously within the matrix, see TMC-B in Table 11.

Conclusions

Mechanical properties of hybrid TiB/TiC particle-reinforced Ti-5Al-5Mo-5V-3Cr matrix composites were strongly influenced by the local reinforcement particle distribution. Despite almost identical total reinforcement particle volume fractions, the composite exhibiting a more homogeneous reinforcement particle distribution within the matrix reveals higher measured values of Young's modulus, hardness and compressive yield strength. Furthermore, these mechanical properties were modelled taking into account the phase composition as well as microstructural parameters and the chemical composition of individual phases. The rule of hybrid mixtures (RoHM) and the Fu-method - both considering the hybrid TiB-whisker and TiC-particle reinforcement - were used for modelling of the Young's modulus and the hardness, respectively. Concerning the compressive yield strength of the composite, it was necessary to identify the particular effective strengthening mechanisms. The strengthening contributions of these mechanisms represent input parameters for the compressive yield strength modelling in accordance with the summation approach and the Clyne approach. With pronounced agglomeration of the reinforcement particles, the measured values of the hybrid Young's modulus and the hybrid hardness were overestimated by the RoHM and the Fu-method.
The hybrid compressive yield strength of the composites was modelled more appropriately using the Clyne-approach. However, with increasing clustering of the reinforcement particles the measured values of the compressive yield strength were overestimated by the Clyne-approach as well.
|
2021-06-03T00:43:40.215Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "da570f4ec17768793de6dc39595eacf8587a48b8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1147/1/012016",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "da570f4ec17768793de6dc39595eacf8587a48b8",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
}
|
3852420
|
pes2o/s2orc
|
v3-fos-license
|
Perceived behavioral control as a potential precursor of walking three times a week: Patient's perspectives
Background Behavior change theories can identify people’s main motivations to engage in recommended health practices and thus provide better tools to design interventions, particularly human centered design interventions. Objectives This study had two objectives: (a) to identify salient beliefs about walking three times a week for 30 minutes nonstop among patients with hypertension in a low-resource setting and, (b) to measure the relationships among intentions, attitudes, perceived social pressure and perceived behavioral control about this behavior. Methods Face-to-face interviews with 34 people living with hypertension were conducted in September-October 2011 in Lima, Peru, and data analysis was performed in 2015. The Reasoned Action Approach was used to study the people’s decisions to walk. We elicited people’s salient beliefs and measured the theoretical constructs associated with this behavior. Results Results pointed at salient key behavioral, normative and control beliefs. In particular, perceived behavioral control appeared as an important determinant of walking and a small set of control beliefs were identified as potential targets of health communication campaigns, including (not) having someone to walk with, having work or responsibilities, or having no time. Conclusions This theory-based study with a focus on end-users provides elements to inform the design of an intervention that would motivate people living with hypertension to walk on a regular basis in low-resource settings.
Background
To increase the likelihood of success, health communication interventions tend to rely on behavior change theories in order to identify people's main motivations to engage in recommended health practices. Several behavioral theories are available [1][2][3][4][5][6][7] and reviews stress the argument that program practitioners should rely on these theories to design successful preventive or lifestyle interventions [5,8,9]. Central to behavior change theories is the claim that health interventions impact behavior through a mechanism of influence: health messages-or any informational intervention for that matter-first influence people's beliefs, and, subsequently, these beliefs influence attitudes, self-efficacy or intentions, which in turn influence behavior [6]. These theories can, in turn, provide better tools to design interventions, particularly human centered design interventions.
This study applied one major behavior change theory, namely the Reasoned Action Approach [4,6,7], first, to identify salient beliefs about walking three times a week for 30 minutes nonstop among patients with hypertension and, second, to measure the relationships among intentions, attitudes, perceived social pressure and perceived behavioral control about this behavior in this population. Overall, as a risk factor for cardiovascular diseases, which are responsible for almost half the deaths resulting from chronic diseases in Latin America and the Caribbean [10], hypertension constitutes a problem relevant to public health interventions.
Hypertension
Hypertension is a health condition affecting 26% of people worldwide [11]. Although its consequences include an increased risk for heart diseases or stroke [12], research points at unhealthy habits or the co-presence of other diseases as major risk factors [13]. Three main groups among individuals with the condition can be identified: (i) those that have the condition but do not know they have it, (ii) those who have the condition, know that they have it but do not control it, and (iii) those that have the condition, know that they have it and control it [14,15]. Any health intervention aiming to improve the management of hypertension can target these groups independently, or collectively, and may address any of its determinants, for example physical inactivity [15].
A systematic review of randomized controlled trials has shown that effective interventions-that is, those causing decreases in measures of blood pressure through increased walking among people with hypertension-exhibited three characteristics: they were implemented over long periods of time (average of 19 weeks), included large samples of individuals and, for the most part, promoted intense walking (e.g., maximum heart rate greater than 80%) among their participants [16]. Thus, interventions addressing the management of hypertension can encourage individuals to walk on a regular basis; but doing so is not an easy endeavor.
The Reasoned Action Approach
The theories underlying the Reasoned Action Approach [6] have been largely employed to examine the predictors of people's behaviors as well as to design health interventions [9,17]. For example, the model has been employed to design interventions to motivate safe sex practices, increase physical activity or decrease the consumption of sugary drinks in U.S. households [5,18,19]. Fig 1 shows the outline of the Reasoned Action Approach [4].
The Reasoned Action Approach postulates that any behavior can be predicted by people's intentions to engage in the behavior [1]. According to the theory, people's intentions are formed on the basis of three cognitive constructs: attitudes, perceived social pressure and perceived behavioral control [4]. The theory conceptualizes perceived social pressure as a combination of injunctive norms, or the perceptions of what ought to be done in a given situation, and descriptive norms, or the perceptions of what is done in that situation [4,20].
Moreover, the theory proposes that each cognitive construct is formed on the basis of a set of beliefs [1]. Thus, people's beliefs about the potential outcomes of a given behavior, or behavioral beliefs, form their attitudes towards the behavior; similarly, the beliefs about the social referents who approve or disapprove of their behavior form their injunctive norms; the beliefs about the social referents who engage or do not engage in a given behavior form their descriptive norms; and, finally, the beliefs about the facilitators and barriers to enact a given behavior form people's perceived behavioral control [4]. For example, the beliefs that walking would improve my health and walking would give me a chance to meet other people would form an overall attitude towards walking; the beliefs that my spouse approves of my walking and my friend also walks frequently would form an overall perception of norm towards walking; and the belief that having free time will help me walk regularly would be the main contributor for an overall perception of control over a behavior. Thus, all these beliefs would be the basis for why people walk on a regular basis.
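In Fishbein and Ajzen's expectancy-value formulation - which this study does not compute explicitly, since it stops at belief elicitation - these belief-based composites are conventionally written as

$A \propto \sum_{i} b_{i} e_{i}, \qquad N \propto \sum_{j} n_{j} m_{j}, \qquad PBC \propto \sum_{k} c_{k} p_{k}$

where $b_{i}$ is the strength of behavioral belief $i$ and $e_{i}$ the evaluation of its outcome, $n_{j}$ and $m_{j}$ are normative belief strength and motivation to comply, and $c_{k}$ and $p_{k}$ are control belief strength and the perceived power of the control factor.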
But, according to the theory, not every belief has a role in people's decisions to engage in the behavior: only those beliefs that are salient in people's minds are relevant-that is, only those that are readily accessible in memory are used to form attitudes, perceived social pressure and perceived behavioral control [4]. Following on the above example, and unlike the behavioral beliefs already indicated, it is possible that a third belief, such as walking would make me feel connected with nature, may not be salient in people's minds when they form their attitudes towards walking on a regular basis. Overall, one advantage of the Reasoned Action Approach is that it provides a clear methodology for identifying those beliefs in a population of interest [4].
The objective of this study was twofold: to identify salient behavioral, normative and control beliefs about walking three times a week for 30 minutes nonstop among a sample of individuals with hypertension in Lima, Peru, and to measure the extent to which these individuals formed intentions based on attitudes, perceived social pressure and perceived behavioral control in regards to this behavior. Walking three times a week has been identified as a relevant behavioral target associated with positive health outcomes [21]. We focused on "30 minutes nonstop," because we wanted to explore the possibility that patients could protect at least 30 minutes of their time to engage in this form of physical activity. Also, defining the frequency of the behavior as "three times a week" would help identify beliefs that are relevant to this frequency, but not to a lower frequency; for example, walking "once a week" could be perceived as achievable by a patient, but "three times a week" may not [4].
Participants and setting
Patients of two health centers-a national public hospital and a private clinic-in Lima, Peru, were interviewed face-to-face between September and October 2011 [15]. All participants were 18 years old or older, and only individuals who were diagnosed with hypertension by an attending physician, at any given time before the dates of data collection, were approached and interviewed by a trained research assistant. Individuals could have controlled or uncontrolled hypertension at the moment of the interview (values above 140 mmHg for systolic blood pressure or above 90 mmHg for diastolic blood pressure were regarded as uncontrolled hypertension).
Measures
A module of the interview guide [15] was used to measure the constructs of the Reasoned Action Approach in regards to walking three times a week for 30 minutes nonstop [4]. The development of the instrument followed the recommendations of Fishbein and Ajzen [4] to capture these constructs and, in order to assure comprehension of the items, the questionnaire was pre-tested with a sample of individuals living with hypertension who were not part of the study's sample. Thus, one subsection was devoted to the elicitation of the beliefs about walking three times a week for 30 minutes nonstop, including behavioral, normative and perceived behavioral control beliefs; another subsection was used for measuring the theoretical constructs, including attitudes, perceived social pressure, perceived behavioral control and intentions in regards to walking three times a week for 30 minutes nonstop. A single item measured whether participants engaged in this specific health behavior or not.
Elicitation of salient beliefs
Following Fishbein and Ajzen [4], behavioral beliefs were elicited using two questionnaire items: Tell me the advantages, the good or best things, of your walking three times a week for 30 minutes nonstop and Tell me the disadvantages, the bad or worst things, of your walking three times a week for 30 minutes nonstop. Injunctive normative beliefs were elicited using the following two items:
Analytical approach
The analysis, performed in 2015, followed a four-step process. First, we followed Fishbein and Ajzen's recommendation [4] for identifying the most salient beliefs about walking three times a week for 30 minutes nonstop in this sample. Because the total number of beliefs may be greater than the number of participants, Fishbein and Ajzen [4] propose that the selection of salient beliefs can be guided by the following rule: "[p]erhaps the most reasonable decision rule, and one that we would recommend, is to choose beliefs by their frequency of emission until we have accounted for a certain percentage, perhaps 75%, of all responses listed. For example, if the total number of responses provided by all participants in the elicitation sample was 600, a 75% decision rule would require that we select as many of the most frequently mentioned outcomes as needed to account for 450 responses" (p.103).
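For illustration only, the 75% decision rule quoted above can be expressed as a small selection routine. The study performed this step by hand; the Python sketch below, with invented belief labels and counts, simply shows the frequency-based logic of the rule.

```python
from collections import Counter

def select_salient_beliefs(responses, coverage=0.75):
    """Select the most frequently mentioned beliefs until they account for
    `coverage` (e.g., 75%) of all responses, per Fishbein and Ajzen's rule."""
    counts = Counter(responses)
    total = sum(counts.values())
    selected, covered = [], 0
    for belief, n in counts.most_common():
        if covered / total >= coverage:
            break
        selected.append(belief)
        covered += n
    return selected

# Hypothetical elicitation data: each element is one response from one participant.
responses = (["feeling better"] * 20 + ["good for the body"] * 15 +
             ["no time to walk"] * 8 + ["connecting with nature"] * 2)
print(select_salient_beliefs(responses))  # beliefs covering roughly 75% of responses
```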
We completed this first step by hand and the following ones with Stata 11. Second, we computed the means, standard deviations and Cronbach's alpha coefficients for all the theory variables in order to assess the distributions and reliability of the measures. We set an alpha level of p < .05 to establish statistical significance. Third, we computed correlations among all the variables of the theory, and, finally, we fitted an ordinary least squares model regressing intentions on its theoretical predictors, in order to measure the extent to which these individuals formed intentions based on attitudes, perceived social pressure and perceived behavioral control in regards to this behavior. Fishbein and Ajzen [4] recommend that, even at this initial step of the formative research, designers explore the relationships between intentions and its three cognitive antecedents, in order to identify the most relevant route for influencing behavior change. With that purpose, they recommend regressing intentions on attitudes, perceived social pressure and perceived behavioral control and then comparing the beta weights. This comparison, they suggest, would orient program practitioners to anticipate the cognitive construct carrying the greatest weight in the formation of intentions and, therefore, practitioners can target the intervention to that specific construct, by trying to influence its underlying salient beliefs.
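As a rough sketch of the reliability and regression steps (the study itself used Stata 11; the Python code and all variable names, item counts and random values below are hypothetical and purely illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a scale whose items are the columns of `items`."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)

# Hypothetical scale scores for 34 participants (1-7 response format).
df = pd.DataFrame({
    "intentions": rng.uniform(1, 7, 34),
    "attitudes": rng.uniform(1, 7, 34),
    "social_pressure": rng.uniform(1, 7, 34),
    "behavioral_control": rng.uniform(1, 7, 34),
})

# Reliability of a hypothetical 4-item attitude scale.
attitude_items = pd.DataFrame(rng.integers(1, 8, size=(34, 4)))
print(round(cronbach_alpha(attitude_items), 2))

# Standardize, then regress intentions on its three antecedents and compare betas.
z = (df - df.mean()) / df.std(ddof=0)
X = sm.add_constant(z[["attitudes", "social_pressure", "behavioral_control"]])
print(sm.OLS(z["intentions"], X).fit().summary())
```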
Ethics
Informed oral consent was obtained from all individual participants included in the study. This study received approval from the Institutional Review Boards of both Universidad Peruana Cayetano Heredia and Hospital Nacional Cayetano Heredia.
Results
A total of 34 patients were interviewed, 58.8% (n = 20) were female. Their mean-age was 68.3 years (median 70; range 40-82) and 58.8% (n = 20) reported an education level of high school or more. Except for one participant with missing information, everyone was aware of their hypertension condition for an average of 8.4 years (median 5; range 0-30). About 41% (n = 14) of the patients were interviewed in the private clinic.
Elicitation of salient beliefs
As shown in Table 1, patients perceived that among the most frequent consequences of walking three times a week were feeling better and it being good or bad for the body. Further, the most common social referents perceived to approve or disapprove of their walking three times a week were their children, wife and grandchildren, and among the most common social referents who do or do not walk three times a week were their children, neighbors, spouse and siblings. Finally, patients perceived that among the most frequent facilitators and barriers that would allow or impede their walking three times a week were (not) having someone to walk with, having work or responsibilities, or having no time.
The constructs of the theory
The majority of participants (71%) reported engaging in the target behavior. Further, and though responses tended to be positive towards walking, all scales of the theoretical constructs showed good distributions and high internal consistency as measured by Cronbach's alpha. The coefficient of internal consistency as well as the mean, standard deviation and minimum and maximum values for each of the theoretical constructs are shown in Table 2.
Furthermore, the correlation between intentions and current behavior was positive and medium-sized (Spearman rho = 0.44, p < 0.01). Intentions were independently and significantly associated with each of their three antecedents. Table 3 shows the bivariate correlations among the variables of interest: intentions were associated with attitudes (Spearman rho = .42, p < .05), perceived social pressure (Spearman rho = .44, p < .05) and perceived behavioral control (Spearman rho = .56, p < .001). Finally, a multiple regression model (Table 4) showed that perceived behavioral control carried the greatest weight in the formation of intentions (β = 0.34, p < .10), yet none of the three estimates were statistically significant.
Discussion
Aiming to design an efficient intervention to promote physical activity among people living with hypertension, we examined the behavior, and its cognitive antecedents, of walking three times a week for 30 minutes nonstop among a sample of patients living with hypertension in the capital of Peru. Guided by the Reasoned Action Approach [4], one behavior change theory used extensively in health intervention design [9], we measured the extent to which the constructs proposed by the theory are associated with the selected behavior. We found that all measures of the theory constructs showed high internal consistency and good distributions and, for the most part, participants' perceptions were positive towards walking three times a week for 30 minutes nonstop; in fact, the majority reported engaging in the behavior at the moment of the interview. Of interest, participants' intentions to walk three times a week were significantly associated with self-reported behavior, and intentions were independently and significantly associated with each of their three cognitive antecedents. As such, this study represents the initial step of the rigorous formative research that is essential to the design of a successful health intervention to motivate the initiation or maintenance of physical activity [9]. In low-resource settings, it is key to adopt theory-driven approaches to intervention development in order to avoid mistakes that can be costly in terms of money and time. Formative research is essential to any intervention design [22], particularly if human-centered design approaches are to be considered.
After conducting a multiple regression model, the standardized beta coefficient with the greatest magnitude was that of perceived behavioral control. While all coefficients were nonsignificant, mainly due to the small sample size, it would seem that perceived behavioral control would be the construct carrying the greatest weight in the formation of intentions to walk, and, thus, it would be a candidate target for a health intervention promoting walking among people living with hypertension. A meta-analysis of studies examining physical activity, under the same theoretical approach, has found that attitudes and perceived behavioral control tend to carry the greatest weight in the formation of intentions to engage in physical activity [23]. Finding that perceived behavioral control is a main theoretical predictor of this behavior is relevant, because practitioners may develop an efficient intervention that would appeal to only that construct [4], as opposed to appealing to all constructs together with more complex interventions.
However, before engaging in any decision about what specific construct to target with a health intervention aiming to change behavior, Fishbein and Ajzen [4] recommend validating this initial conclusion with a second phase of formative research. In such a phase, program designers can implement a survey with a larger sample to identify the salient beliefs that discriminate between intenders and non-intenders of the selected behavior [4]. In our study, we were able to identify those salient beliefs associated with walking three times a week for 30 minutes nonstop, but not to measure the extent to which these beliefs discriminated between intenders and non-intenders. Yet, in a subsequent phase, researchers could use the beliefs identified in this study to assess the correlations between each belief and a measure of intentions.
Of interest, our results revealed that among the salient behavioral beliefs were feeling better or it is good or bad for the body. The salient injunctive normative beliefs-that is, social referents that would approve or disapprove of one's behavior-were children, wife and grandchildren. Among the salient descriptive normative beliefs-that is, social referents that engage or do not engage in the same behavior-were children, neighbors, spouse and siblings. Finally, the salient control beliefs about the facilitators and barriers of their behavior were, among others, (not) having someone to walk with, having work or responsibilities or having no time. These salient beliefs together can be further analyzed to identify an even smaller but meaningful selection of beliefs that can be targeted by a health intervention.
For example, based on the prior findings, the next step in this formative research would be to implement a study to measure individuals' intentions to walk three times a week for 30 minutes nonstop as well as their perceptions about the beliefs outlined in this study. With such data, practitioners could examine the existing relationships among these variables and inform the development of message strategies. Recently, for example, Hennessy and his colleagues [6] measured parents' beliefs associated with intentions to ban smoking in households in the United States, and found that while some salient behavioral and control beliefs discriminated between intenders and non-intenders, all the salient normative beliefs differentiated these two groups from each other. Such findings can help practitioners design message strategies that can be implemented with interventions promoting behavior change among a population of interest. In the case of hypertension, the study could find that the belief "walking makes me feel better" discriminates between intenders and non-intenders to walk, such that it would only be held by those who intend to walk but not by those who do not intend to walk; in this scenario, such belief would be selected for the next step in message development.
There is guidance in the health communication literature about how to plan message strategies based on studies using the Reasoned Action Approach [4,24]. Hornik and Woolf [24] proposed three criteria to identify beliefs to be targeted by media interventions addressing health issues using formative research in the context of this theory: first, there should be a strong correlation between the measure of intentions and a selected belief; second, there should be enough individuals holding the opposite view on the selected belief, so that they can be moved into the right direction as a result of the intervention; and third, the selected belief has to be susceptible of change-that is, it cannot be a belief that is veridical or based on the direct experience of the individual, but rather it should be a belief that can be changed by an informational intervention [24]. Thus, those selected beliefs meeting the above criteria would be potential targets of health messages that promote, for example, physical activity among people living with hypertension.
Research in message design can further inform the selection of appeals or message formats that are most likely to influence those targeted beliefs [25]. A good example about how to construct messages is provided by Mendez and his colleagues [26], who designed and validated persuasive message appeals targeting attitudes to promote physical activity among people with coronary heart disease in Brazil. Such studies, including ours, contribute greatly to the design of more effective interventions that bridge the research-to-practice gap largely found in the public health arena [27], especially in guiding the development of patient-centered interventions.
One limitation of this study was the small sample size, thus calling for confirmation of our findings in a larger sample. Also, the cross-sectional design of this study limits our ability to claim that perceived behavioral control is the main predictor of walking; it may well be that those who walk regularly feel more confident in their walking behavior, rather than the reverse. Future work can replicate this study with a longitudinal design. In addition, it is possible that the broad age range of the sample (from 40 to 82 years old) could hide differences in participants' motivations to walk, insofar as younger adults may have different opportunities to walk than older ones; however, it should be noted that the question about barriers did elicit limitations for engaging in the study's behavior that capture barriers for all age groups (e.g., work/responsibilities vs aches/illnesses, see Table 1). Furthermore, this study focused only on those individuals who have hypertension and know they have it; while the implications of our results may not apply to other individuals outside this specific group, we suggest that similar communication strategies be used with those who have the condition but do not know they have it. Lastly, future formative research efforts could conduct a similar theory-driven approach with a focus on sedentary patients, so that beliefs that motivate walking could be identified among initiators. In our study, we did not distinguish between patients who were current walkers and those who were not; that was a limitation, as the literature indicates that habit is a predictor of physical activity [28].
Conclusion
Overall and while cross-sectional in nature, our study provides elements to inform the design of an intervention that would motivate people living with hypertension to walk on a regular basis, irrespective of whether they are sedentary or current walkers. Though still preliminary, perceived behavioral control may be key to people's decisions to walk on a regular basis; thus, a small set of control beliefs about barriers, including (not) having someone to walk with, having work or responsibilities, or having no time, could be targeted by a health communication campaign aiming to manage hypertension among individuals living with this condition.
Supporting information
S1 File. S1_Qualitative dataset.docx: the dataset from the elicitation of salient beliefs. (DOCX)
S2 File. S1_Quantitative dataset.dta: the dataset with demographics and constructs of the theory. (DTA)
|
2018-04-03T02:26:15.560Z
|
2018-02-16T00:00:00.000
|
{
"year": 2018,
"sha1": "029596889879aa2439a4403c71021f5fb38e496d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0192915&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "029596889879aa2439a4403c71021f5fb38e496d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
}
|
260945831
|
pes2o/s2orc
|
v3-fos-license
|
Limited Impact of Soil Microorganisms on the Endophytic Bacteria of Tartary Buckwheat (Fagopyrum tataricum)
Soil has been considered the main microbial reservoir for plants, but the robustness of the plant microbiome when the soil resource is removed has not been greatly considered. In the present study, we tested the robustness of the microbiota recruited by Tartary buckwheat (Fagopyrum tataricum Gaertn.), grown on sterile humus soil and irrigated with sterile water. Our results showed that the microbiomes of the leaf, stem, root and next-generation seeds were comparable between treated (grown in sterile soil) and control plants (grown in non-sterile soil), indicating that the plants had alternative robust ways to shape their microbiome. Seed microbiota contributed greatly to endophyte communities in the phyllosphere, rhizosphere and next-generation seeds. The microbiome originated from the seeds conferred clear benefits to seedling growth because seedling height and the number of leaves were significantly increased when grown in sterilized soil. The overall microbiome of the plant was affected very little by the removal of the soil microbial resource. The microbial co-occurrence network exhibited more interactions, and Proteobacteria was enriched in the root of Tartary buckwheat planted in sterilized soil. Our research broadens the understanding of the general principles governing microbiome assembly and is widely applicable to both microbiome modeling and sustainable agriculture.
Introduction
Tartary buckwheat (Fagopyrum tataricum Gaertn.) originated in the Himalayan mountainous areas of western China and is currently grown in Asian countries, including China, Japan and South Korea, as well as Canada and Europe [1]. Tartary buckwheat is a traditional short-season pseudocereal crop that performs well in barren soil and harsh climates. Compared with other crops, Tartary buckwheat shows high resistance to aluminum (Al) toxicity in acidic soils [2]. Tartary buckwheat has attracted increasing world-wide attention in recent years because it has a higher nutraceutical value compared with cereals; it is gluten-free, with a high total vitamin B content [3]; and it has much higher levels of antioxidants such as rutin [4][5][6]. These components have been associated with many potential health benefits such as reducing cholesterol levels, blood clots and high blood pressure [7,8].
Microorganisms are ubiquitous across the entire life cycle of plants, yet we are only just beginning to treat plant holobionts as indivisible entities [9]. Microbial communities play a vital role in host ecology and evolution by, for example, influencing fitness and growth [10], offering protection against herbivores or driving the evolution of multi-disease resistance [11]. The root growth of Arabidopsis is influenced by the interaction between a single bacterial genus (Variovorax) and other genera [12]. In return, for microbial colonization, the plant provides stable niches and photosynthetic products to the microbiota.
The plant holobiont comprises the plant and multiple fungal and bacterial species and is characterized by a dense network of multitrophic interactions, including both pathogens and mutualists [13]. The composition of plant microbiota can be influenced by the host genotype, its ecological niche, and biotic and abiotic factors. In nature, healthy plants will switch to being symptomatic for disease when the microbiota communities shift into dysbiosis. Seeds represent one of the most crucial stages of a plant's life history. In natural and agricultural ecosystems, seeds serve not only to initiate the life cycle and reproduce but also to facilitate dispersal, to adapt and to persist in new environments [14]. Usually, seeds germinate, and the germinated seedlings not only face threats from various pathogens and herbivores in the soil but also respond to resource limitation and deficiencies in overall habitat suitability. These factors make the seed-to-seedling transition in both natural and agricultural systems one of the most serious challenges in a plant's life cycle. Microorganisms, especially fungi in the soil, can adversely affect seed germination and seedling growth and development [15]. It might be inferred that seed germination or seedling survival would be enhanced by fungicide treatment compared with untreated control seeds. In Sedum alfredii, the transmission of endophytes from shoot cuttings to the rhizosphere was far more efficient in sterile soil than in normal soil, and the transmitted endophytes became a dominant component of the newly established root-associated microbiome. S. alfredii growth was significantly promoted when cultivated in sterile soil [16].
Although soil is an important reservoir of microorganisms, little work has been conducted to reveal holistically the source of microbiome composition across the numerous potential microbial niches represented by multiple plant organ and tissue types. Some studies have indicated that plant-associated bacteria can be recruited from the soil, while others have indicated that neither local site conditions nor host genotypes fully explain the assembly of plant microbiota [17]. Only when the endogenous seed microbiome is severely disrupted will the soil microbiome colonize the rhizosphere, and during the process of plant microbiome repair the seed microbiome takes priority over the soil microbiome [18]. The endophytic microbial community in the plant changes dynamically during growth and development and is affected by various abiotic and biotic factors such as soil conditions, biogeography, plant genotype, microbe-microbe interactions and plant-microbe interactions [19]. Despite these, bacterial seed endophytes are highly conserved in some plant species [20,21] and potentially provide the bulk of the species pool from which the seedling microbiome is recruited. The seed microbiome is the most promising candidate for plant microbiome engineering efforts.
Plants have evolved the ability to produce a vast array of specialized metabolites that aid adaptation to different environmental niches [22]. Microorganisms are one of the key environmental factors affecting plant growth, and growth promoters and biological control agents can influence antioxidant activity in common buckwheat [23]. The arbuscular mycorrhizal colonization of Tartary buckwheat and the changes in the microbial community during Tartary buckwheat wine fermentation have been documented [24,25]. However, the influence of microorganisms already present in the soil before planting on plant fitness and development, and on plant microbiome assembly, remains unclear. Accurate knowledge of the structure, composition and function of Tartary buckwheat microbiota and their changes can help to identify beneficial microbiota that might assist the plant's growth and development. In this study, we focused on elucidating the role of the seed microbiome in the plant microbial community's diversity, structure, composition, function and abundance. We attempted to elucidate the potential sources of the seed, leaf, stem and root microbiota with or without the influence of soil microbiota.
Soil Sterilization and Source of Tartary Buckwheat
The humus soil, fermented from plant residues, is commonly used for raising flowers on family balconies and for cultivating seedlings of field crops. The humus soil used in this experiment was purchased from Anning Caopu Town Co., Ltd. (Kunming, Yunnan, China). The soil was packed separately in sterilization bags and sterilized by moist heat in an autoclave at 121 °C and 0.1 MPa for 20 min; this was repeated two more times. Seeds of Tartary buckwheat were collected at Zhaotong Academy of Agricultural Sciences, Yunnan Province.
Sample Collection and Preparation
After removing the seed coat, seeds of Tartary buckwheat were disinfected in 1% sodium hypochlorite (NaClO) for 10 min and then rinsed with sterile water 5-6 times to remove any residue. For germination, the seeds were placed in a conical flask containing MS medium (Cat#M8521, Beijing Solarbio Science & Technology Co., Ltd., Beijing, China) without hormones. The seeds were incubated at 23 °C under a 12 h/12 h light/dark cycle for 3 days to facilitate germination. The germinated seeds were then planted in pots with sterilized or non-sterilized humus soil, with 5 seeds per pot. The pots were placed in the outdoor potted field of Yunnan Agricultural University. Plants were irrigated with sterilized water (apart from some rainwater), and no fertilizer was applied throughout the entire growth period until the seeds matured.
We assessed the microbial communities for five groups of samples: (1) Tartary buckwheat parent seeds (FO); (2) roots (TR), stems (TS) and leaves (TL) of plants grown on sterilized humus soil; (3) roots (CR), stems (CS) and leaves (CL) of plants grown on non-sterilized humus soil; (4) seeds (TI) harvested from sterilized humus soil; (5) seeds (CI) harvested from non-sterilized humus soil. Seeds germinated on MS medium for three days were rinsed with sterile water, frozen in liquid nitrogen and stored at −80 °C for sampling of the seed microbiota. In this study, we therefore define the "seed microbiota" as that of seedlings germinated on MS medium for three days, since barely any microbial DNA could be obtained from seeds directly. The sampling of microbiota from roots, stems and leaves was as follows: roots were taken out of the pot completely. Large soil clumps were removed by shaking the root, fibrous roots were removed, and only the main roots were retained. Main roots were extensively washed by hand with tap water. Washed roots were transferred to sterile bags immersed in ice. Washed roots (2 g) were rinsed with sterile water 5-6 times, frozen in liquid nitrogen and stored at −80 °C. The tops of the stems were chopped and then washed by hand with tap water. Washed stems were transferred to sterile bags immersed in ice. The subsequent procedure was the same as for the roots. Young leaves were sampled following the same procedure as for stems.
DNA Extraction and Sequencing
All plant tissues (i.e., roots, stems, leaves and seeds) were ground to a powder in liquid nitrogen prior to DNA extraction. DNA extraction was performed using the MoBio PowerSoil DNA Isolation Kit (QIAGEN, Hilden, Germany), according to the manufacturer's protocol. DNA concentration and purity were monitored on 1% agarose gels. According to the concentration, DNA was diluted to 1 ng/µL using sterile water. To assess the bacterial communities, the V4 region of the 16S rRNA gene was amplified using specific barcoded primers (V4: 515F-806R). All PCR reactions were carried out in 30 µL volumes with 15 µL Phusion High-Fidelity PCR Master Mix (New England Biolabs, Ipswich, MA, USA), 0.2 µM of forward and reverse primers, and about 10 ng of template DNA. PCR products were purified with the GeneJET Gel Extraction Kit (Thermo Scientific, Shanghai, China). Sequencing libraries were generated using the Ion Plus Fragment Library Kit, 48 rxns (Thermo Scientific, Shanghai, China), following the manufacturer's recommendations. Library quality was assessed on a Qubit 2.0 Fluorometer (Thermo Scientific, Shanghai, China). The library was sequenced on an Ion S5 XL platform (Novogene, Beijing, China), and 400 bp/600 bp single-end reads were generated.
Sequence Processing and Statistical Analysis
Single-end reads were assigned to samples based on their unique barcode and truncated by removing the barcode and primer sequence. Quality filtering of the raw reads was performed under specific filtering conditions to obtain high-quality clean reads according to the Cutadapt [26] quality control process. The reads were compared against the Silva 138.1 database using the UCHIME algorithm to detect chimera sequences, which were subsequently removed to give the final clean reads. Operational taxonomic units (OTUs) were then clustered at 97% similarity using Uparse [27] (v 7.0.1001). For each representative sequence, the Silva 138.1 database [28] was used with the Mothur algorithm to annotate taxonomic information [29]. Alpha diversity parameters, including observed species, Chao1, Shannon, Simpson and ACE, were calculated with QIIME [30] (v 1.9.1). Microbial community composition was assessed by computing Bray-Curtis dissimilarity matrices and then visualized using non-metric multidimensional scaling (NMDS) ordinations to show compositional differences. We performed unpaired t-test comparisons of bacterial communities within leaves, stems, roots and seeds using GraphPad Prism 8. Duncan's new multiple range test in SAS v.9.0 (SAS Institute Inc., Cary, NC, USA) was used to conduct the analysis of variance (ANOVA), and the significant differences between different treatments in each experiment were compared at the level of p < 0.05.
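As a rough illustration of the alpha- and beta-diversity steps (the study used QIIME and related tooling; the Python sketch below uses SciPy and scikit-learn instead, and the small OTU count table is invented):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical OTU count table: rows = samples, columns = OTUs.
otu = np.array([
    [120.0, 30.0,  0.0,  5.0],   # leaf
    [100.0, 40.0,  2.0,  8.0],   # stem
    [ 10.0, 80.0, 60.0, 30.0],   # root
    [ 15.0, 70.0, 55.0, 25.0],   # root
])

# Shannon index per sample (one of the alpha-diversity measures).
p = otu / otu.sum(axis=1, keepdims=True)
logp = np.log(p, out=np.zeros_like(p), where=p > 0)
shannon = -(p * logp).sum(axis=1)

# Bray-Curtis dissimilarities between samples, then non-metric MDS (NMDS).
bray = squareform(pdist(otu, metric="braycurtis"))
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = nmds.fit_transform(bray)

print(shannon)
print(coords)        # 2-D ordination used to visualize compositional differences
print(nmds.stress_)  # stress value (this study reports stress = 0.246 for its NMDS)
```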
Circos and Co-Occurrence Network
Graphical rendering of the community structure at phylum level was performed with the open-source software Circos [31]. From the species abundances, we calculated Pearson correlation coefficients for each pair of genera to obtain a correlation matrix, with the filtering conditions set as follows: (a) a cutoff value (>0.4) to filter out weakly related connections; (b) node self-joins were filtered out; (c) connections with a node sum abundance of less than 8% were removed. Network legends were created with Cytoscape. Topological features were estimated with the igraph package in R.
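A minimal sketch of the network-construction logic described above, written in Python with networkx rather than the Cytoscape/igraph tooling used in the study; the abundance table is hypothetical, and applying the 0.4 cutoff to the absolute correlation is an assumption.

```python
import numpy as np
import pandas as pd
import networkx as nx

rng = np.random.default_rng(0)

# Hypothetical relative-abundance table (%): rows = samples, columns = genera.
genera = [f"genus_{i}" for i in range(12)]
abund = pd.DataFrame(rng.uniform(0, 3, size=(15, 12)), columns=genera)

# (c) drop genera whose summed abundance falls below the 8% threshold.
kept = abund.loc[:, abund.sum(axis=0) > 8]

# (a) Pearson correlations between genera, keeping connections above the 0.4
# cutoff, and (b) skipping self-joins.
corr = kept.corr(method="pearson")
G = nx.Graph()
G.add_nodes_from(corr.columns)
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        r = corr.loc[a, b]
        if abs(r) > 0.4:
            G.add_edge(a, b, weight=float(r), sign="positive" if r > 0 else "negative")

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```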
Comparison of Alpha Diversity in Different Tissues of Tartary Buckwheat
We collected 135 samples for the amplicon sequencing of endophytic bacteria (Supplementary Table S1), and species accumulation curves indicated that the sample size was sufficient (Supplementary Figure S1). After the removal of low-quality reads and chimera sequences, we obtained on average 78,784 valid reads per sample. A total of 3364 bacterial operational taxonomic units (OTUs) were recovered, with clustering at ≥97% sequence identity using UPARSE. The taxonomic assignment of bacterial OTUs resulted in the identification of 66 phyla, including candidate phyla. Raw sequences of 16S DNA amplicons were submitted to NCBI as Bioproject PRJNA957132.
The microbial alpha diversity indices, including the Simpson, ACE, Chao1 and Shannon indices, were analyzed for each sample based on the number of observed species for all sample types (Supplementary Table S2). Compared with the seedlings germinated for 3 days, the alpha diversity indices in the different tissues were slightly higher, and those of leaves (TL) from plants grown in sterile humus soil differed significantly from those of the seedlings (FO) germinated for 3 days (Table 1, p < 0.05); this trend was not found in leaves (CL) from plants grown in non-sterile soil. Whether the plants were grown in sterile or non-sterile soil, the alpha diversity indices of the root endophytic bacteria differed significantly from those of the 3-day-old seedlings, consistent with the results for stem endophytic bacteria. However, there was no significant difference between the same tissues grown in sterilized and non-sterilized soil. The seedlings germinated for 3 days showed the lowest alpha diversity. Similar results were obtained for the number of observed species, with the highest species numbers detected in the stem samples grown in sterile humus soil (Supplementary Table S2). The number of species observed for leaf endophytic bacteria was slightly lower than in roots, but slightly more species were detected in sterilized soil (TL) than in non-sterilized soil (CL) (observed species TL: 97, CL: 92). The number of observed species was lowest in the 3-day-old seedlings, and that in parent seedlings (FO) was significantly lower than in seedlings from non-sterilized soil (CI). The root endophytic bacterial community was also more diverse than those of the aboveground stems and leaves, regardless of whether the plants were grown in sterilized or non-sterile soil.
Comparison of Plant Height and Leaf Number of Tartary Buckwheat Grown in Sterilized and Non-Sterilized Soil
Seedlings were transplanted into pots with sterilized or non-sterile humus soil, and the height and leaf number of the plants were measured after 20 days. The average height of Tartary buckwheat grown in sterilized humus soil was 10.6 cm, with an average of 5.1 leaves per seedling, whereas the average height of plants grown in non-sterilized soil was only 4.7 cm. The height of Tartary buckwheat planted in sterilized soil was thus 2.25 times that in non-sterilized soil (Figure 1). The number of leaves was also significantly higher when grown in sterilized soil (p < 0.0001).
Assembly of Microbial Community in Different Plant Parts and Core Microbiome Identification
At the phylum level of the bacterial community, all samples were dominated by Firmicutes (Figure 2A), which was present in most grain crops. In the stem of Tartary buckwheat grown in non-sterile humus soil, Bacteroidota were dominant (41%, Figure 2A), followed by Firmicutes and Proteobacteria (32% and 9%, respectively). The top three phyla in the root bacterial community from plants grown in non-sterile humus soil were Firmicutes (33%), Bacteroidota (29%) and Proteobacteria (12%). This was a little different from root samples grown in sterile humus soil. Although Firmicutes still had the highest relative abundance, Proteobacteria were the second highest in the root samples from sterilized soil and were significantly higher than those from non-sterilized soil (Figure 3; p < 0.05). The top three phyla in the leaf bacterial community grown in non-sterile humus soil were Firmicutes (39%), Bacteroidota (30%) and Proteobacteria (16%), followed by Actinobacteria (Figure 2A). There were no significant differences between plants grown in sterilized and non-sterilized soil (Figure 3; p < 0.05).
Proteobacteria were significantly enriched in the root samples planted in sterilized soil, compared with the bacterial community in leaves and stems (Figure 3; p < 0.05). In contrast, among the bacterial community in the root samples planted in non-sterilized soil, Proteobacteria were not enriched. The highest relative abundance of Proteobacteria occurred in the seedlings germinated for 3 days that were harvested from Tartary buckwheat planted in sterilized soil (Figure 2A). Proteobacteria were also enriched in seedlings germinated for 3 days that were planted in non-sterilized soil. Chloroflexi and Acidobacteriota were enriched more in roots, stems or leaves grown in sterilized soil than in non-sterilized soil (Figure 3), whereas Bacteroidota showed the opposite trend. While Actinobacteria were more enriched in leaves grown in sterilized soil than in non-sterilized soil (Figure 3), the opposite was observed in root samples. Meanwhile, stem samples had higher relative abundances of Bacteroidota compared with the leaf and root samples planted in non-sterilized soil (Figure 3; p < 0.05).
Fifty-five prokaryotic OTUs were present in all samples at a minimum of 0.04% relative abundance and were identified as the core microbiome (Supplementary Figure S2). The 55 prokaryotic OTUs account for about 30% of the relative abundance in each group of samples. Firmicutes, Bacteroidota and Proteobacteria were among the most abundant members of the prokaryotic core microbiome. Core microbiome analysis showed that 79 and 88 prokaryotic OTUs were identified as core microorganisms grown on non-sterile soil and sterilized soil ( Figure 2B), respectively. Soil sterilization has a significant impact on the endophytic core microorganisms of Tartary buckwheat. The relative abundance of 55 prokaryotic core OTUs in different compartments was also disturbed by soil sterilization (Figure 4). OTU_3, OTU_4 and OTU_5 were always at high relative abundance (>1%), regardless of tissue and soil type. OTU_7, OTU_11 and OTU_12 were only at high relative abundance (>1%) in leaves planted in non-sterilized soil. The relative abundance of OTU_92 in leaves planted in sterilized soil was 1.4%, but that was 0.1% in non-sterilized soil. Soil sterilization has little effect on the relative abundance of core endophytic bacteria in various tissues of Tartary buckwheat grown in sterilized and non-sterilized soil (Supplementary Figure S3).
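A hedged sketch of how the core-microbiome criterion described above (OTUs present in all samples at a minimum relative abundance of 0.04%) could be applied to a relative-abundance table; the table below is hypothetical, and the real analysis would use the sequencing-derived OTU table.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical relative-abundance table (%): rows = samples, columns = OTUs.
otus = [f"OTU_{i}" for i in range(1, 21)]
rel = pd.DataFrame(rng.uniform(0, 2, size=(30, 20)), columns=otus)

# Core OTUs: present in every sample at a minimum relative abundance of 0.04%.
core = rel.columns[(rel >= 0.04).all(axis=0)]
print(len(core), "core OTUs, covering on average",
      round(rel[core].sum(axis=1).mean(), 1), "% of each sample")
```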
Plant tissue (root, stem, leaf and seedling) was found to be the major explanatory variable of microbial community structure in the NMDS ordination (stress = 0.246; Figure 5). The microbial community structures of the seedlings and of the different tissues were distinct. Although soil sterilization affected the relative abundance of the microbial community in the roots, stems and leaves, it did not affect the microbial community retained in the seeds or the community structure in the different tissues.
Co-Occurrence Network of the Bacterial Community in Different Tissues of Tartary Buckwheat
To examine variations in the microbial network structure, we built co-occurrence networks of the bacterial community in different tissues of Tartary buckwheat planted in non-sterile and sterilized soil. To construct the networks, we selected approximately 30 genera, each with a summed relative abundance of over 7% across fifteen biological duplicate samples. The differences in the response of the bacterial communities detected in the different tissues suggest that the overall co-occurrence patterns of genera in the non-sterilized and sterilized soil would differ from each other. The co-occurrence network in stems from non-sterile soil consisted of 26 nodes and 33 edges, whereas that from sterilized soil consisted of 27 nodes and 52 edges (Figure 6). In roots from non-sterile soil, the co-occurrence network consisted of 26 nodes and 32 edges (Supplementary Figure S4A), whereas that from sterilized soil consisted of 27 nodes and 45 edges (Supplementary Figure S4B). In leaves from non-sterile soil, the co-occurrence network consisted of 26 nodes and 45 edges (Supplementary Figure S5A), whereas that from sterilized soil consisted of 29 nodes and 61 edges (Supplementary Figure S5B). The nodes and edges of the co-occurrence networks suggested tighter associations among genera in sterilized soil than in non-sterile soil. Among the bacterial-bacterial networks, we recorded a high proportion of positive edges in all tissues of Tartary buckwheat planted in sterilized or non-sterile soil.
The Proportion of Endophytic Bacteria in Different Tissues of Tartary Buckwheat
The Venn analyses of endophytic bacteria in different tissues of Tartary buckwheat grown in sterilized and non-sterilized soil (Supplementary Figure S6) show that only 45.7% of the endophytic bacteria in the roots (TR) grown in sterilized soil appeared exclusively in that tissue (Figure 7). The proportion of root endophytic bacteria shared only with leaves (TL) grown in sterilized soil was 6.7%, and the proportion shared only with stems (TS) grown in sterilized soil was 7.7%, although this does not necessarily indicate the source of these endophytic bacteria. Furthermore, 36.4% of the endophytic bacteria in the roots were shared with the stems, leaves, or parent seedlings simultaneously. The proportion of self-recruited endophytic bacteria in Tartary buckwheat roots (TR), stems (TS) and leaves (TL) planted in sterilized soil was higher than that in plants grown in non-sterile soil. The proportion of endophytic bacteria recruited by the offspring seedlings (CI) from non-sterile soil was significantly higher than that in sterilized soil.
Figure 7 legend: OTUs that appeared only in a given tissue are shown per tissue; FO-TL: OTUs that appeared simultaneously in FO and TL, and so on; UN: OTUs that appeared in more than three tissues simultaneously. Samples: seeds harvested from Tartary buckwheat planted in sterilized (TI) and non-sterilized soil (CI); leaves planted in sterilized (TL) and non-sterilized soil (CL); stems planted in sterilized (TS) and non-sterilized soil (CS); roots planted in sterilized (TR) and non-sterilized soil (CR).
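As a rough, hypothetical sketch of the set arithmetic behind such a Venn-style overlap analysis (the OTU identifiers and tissue sets below are invented for illustration; the real proportions come from the sequencing-derived OTU presence data):

```python
# Hypothetical OTU identifier sets per tissue; real sets would come from the OTU table.
tr = {f"OTU_{i}" for i in range(1, 60)}    # roots, sterilized soil (TR)
tl = {f"OTU_{i}" for i in range(40, 80)}   # leaves, sterilized soil (TL)
ts = {f"OTU_{i}" for i in range(50, 90)}   # stems, sterilized soil (TS)
fo = {f"OTU_{i}" for i in range(1, 20)}    # parent seedlings (FO)

unique_to_root = tr - tl - ts - fo
shared_root_leaf_only = (tr & tl) - ts - fo
shared_root_stem_only = (tr & ts) - tl - fo

print(f"unique to TR: {len(unique_to_root) / len(tr):.1%}")
print(f"shared only with TL: {len(shared_root_leaf_only) / len(tr):.1%}")
print(f"shared only with TS: {len(shared_root_stem_only) / len(tr):.1%}")
```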
Discussion
Tartary buckwheat is a traditional short-season pseudocereal crop that is adapted to growth in barren soil. Bacterial microbiota can promote nutrient uptake and transport from the soil, increase host immunity, increase tolerance to biotic and abiotic stresses, and influence crop yield and quality. Similarly to the human gut microbiota, the plant microbiome is referred to as the host's second or extended genome. Soil was once considered the main source of the plant microbiome, but growing evidence suggests that the microorganisms in seeds are a valuable asset left by plants to their descendants. Plant-microbe interactions are of specific interest, not only to achieve a better understanding of their role during plant growth and development [32] but also to allow the exploitation of these relationships in phytoremediation applications [33], sustainable crop production [34], and the production of secondary metabolites [35].
Relationship between Soil Microbiota and Tartary Buckwheat Microbiome
In this study, we adopted a standard moist-heat procedure, autoclaving at 121 °C and 0.1 MPa for 20 min, three times, to reduce the influence of microorganisms already present in the soil on Tartary buckwheat growth. To investigate the function of rhizosphere microbiota, previous studies used chemical methods involving methyl bromide, chloropicrin or vancomycin [36], which kill certain taxa or inhibit their cell wall biosynthesis [37]. Plants are rooted and fixed in the soil and rely greatly on their root microbiome for the uptake of nutrients and protection against stresses. The processes by which plants survive under adverse environmental conditions are the result of the co-evolution of plants and beneficial microorganisms [38]. For example, plants can in effect "cry for help" from their root microbiome when they are under attack by pathogens, leading to the selective enrichment of plant-protective microbes and microbial activities in the soil. Soil is considered to be a major reservoir of leaf [39] and root microorganisms for many plants and the main source of beneficial microorganisms for plants. Some studies on the origin of plant microorganisms have indicated that the phyllosphere largely comprises microbes that are passively dispersed and stochastically assembled from the atmosphere [40,41], but other studies in sugarcane [42,43], grape [44] and Arabidopsis [45] have suggested soil origins of leaf microbiota. Improved knowledge of the sources of the Tartary buckwheat microbiome is expected to aid our understanding of the mechanisms used to adapt to barren areas and the identification of beneficial microorganisms, to improve the applicability of crops to different habitats. Our data suggested that 30% of the bacteria observed in the roots, stems and leaves could be detected in seedlings that had germinated for 3 days on MS medium, and the relative abundance of those bacterial OTUs could reach more than 50% within a tissue. Therefore, the Tartary buckwheat microbiome might be attributed largely to non-soil reservoirs, with the vast majority derived from seed.
It is noteworthy that, following the protocol of the soil microbial DNA extraction kit, it was difficult to extract microbial DNA from the humus soil used to grow the plants. Therefore, we believe that this humus soil contains fewer microorganisms than an equal quantity of natural field soil. The plant height and leaf number of Tartary buckwheat grown in sterilized soil were significantly greater than those in non-sterilized soil, and the endophytic bacteria of roots grown in sterilized soil were significantly more abundant than those of roots grown in non-sterilized soil.
Seed Microbiota Associated with Endophytic Microorganisms of Tartary Buckwheat
Seeds represent an essential stage of the plant life cycle and can be dormant for decades, under appropriate conditions, before germinating to produce new plants. Seeds contain complex microbial communities, which can exert beneficial effects on germination and support subsequent plant growth. Germination and emergence can also shape the structure of the seed microbiota [46]. Although the microbial community may change dynamically during plant growth and development, the endophytic bacterial communities are relatively well preserved and are transmitted to the next generation. This was confirmed in this study by the finding that the microbial community structure of Tartary buckwheat seeds germinated for three days was very similar to that of the parent plants. The vertical transmission of microorganisms from seeds to seedlings has been studied in many crops, such as rice [47] and tomato [48], by culture-dependent experiments or 16S rRNA gene pyrosequencing. Previous studies showed that seeds might provide an important source of microbial inoculum for other plant organs, such as roots, stems, and leaves [49]. In this study, the data support this hypothesis. These microorganisms are well preserved in the seeds or seedlings and can be transferred to various plant tissues, where they have the potential to promote plant growth and to act in the biocontrol of pathogens. Seed microorganisms can prevent the transmission and colonization of pathogens by maintaining tight hub networks and have a key role as carriers of plant-growth-promoting bacteria (PGPB). These interactions of the microbiome can be explained as an adaptation to prevalent environmental conditions resulting from the co-evolution of plants and microorganisms. In general, the higher the diversity of the microbial community, the greater the plant biomass that can be obtained. This is possibly because of not only effects on resource allocation but also enhanced evolution, allowing the microbiome to develop new desirable functions. The present study provides novel insights into the bacterial endophytes of Tartary buckwheat.
In the last decade, rapidly increasing numbers of studies have focused on the role of the microbial community in various crops [50][51][52][53]. Microbial abundance, composition, and function have a direct causal relationship with plant health [54]. Plant disease occurrence is often accompanied by microflora dysbiosis, such as altered microbial abundance or diversity [55]. Soil sterilization can change the relative abundance of specific genera among Tartary buckwheat microorganisms and the diversity of communities in different tissues.
In addition to microbial community structure and diversity, microbial co-occurrence patterns play a crucial role in understanding the plant microbiome. Previous studies have shown that the microbial co-occurrence network was modulated by environmental factors such as water depth [56] or soil properties [57]. Moreover, variations in microbial co-occurrence networks might represent different niches [58], and the presence of particular microbial modules might also indicate a similarity of microbial co-occurrence patterns in different environments. Identifying the hubs in a co-occurrence network can be used to infer their potential role in the community. Here, we reported that three tissues (leaf, stem and root) of Tartary buckwheat showed different co-occurrence patterns, which might also be influenced by the type of soil.
In our work, the numbers of nodes and edges of the co-occurrence networks in the stem of Tartary buckwheat grown in sterilized humus soil were significantly higher than those in non-sterilized soil. The most abundant genus did not display the highest number of interactions. According to the degree of betweenness centrality, the top three genera in the co-occurrence network in the stem of plants grown in sterilized humus soil were Ruminococcus, Bacteroides and Agathobacter. In non-sterile humus soil, the top three genera were Ligilactobacillus, Roseburia and Blautia. The same microorganisms might form different interactions in different niches, and the functional role of microorganisms is related to their niche. Therefore, this reminds us that, to fully understand the plant microbiome, its niche and environmental factors must be taken into account.
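A minimal networkx sketch of how hub genera could be ranked by betweenness centrality, as described above; the edge list is hypothetical and purely illustrative.

```python
import networkx as nx

# Hypothetical co-occurrence edges between genera in one tissue network.
edges = [("Ruminococcus", "Bacteroides"), ("Ruminococcus", "Agathobacter"),
         ("Bacteroides", "Agathobacter"), ("Ruminococcus", "Blautia"),
         ("Blautia", "Roseburia"), ("Roseburia", "Ligilactobacillus")]
G = nx.Graph(edges)

centrality = nx.betweenness_centrality(G)
top3 = sorted(centrality, key=centrality.get, reverse=True)[:3]
print(top3)  # candidate hub genera for this (hypothetical) network
```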
Conclusions
As in other cereal crops, Firmicutes were the dominant flora of Tartary buckwheat and were vertically transmitted to the next generation. Plant growth was significantly promoted when the plants were cultivated in sterile soil. The relative abundance of Chloroflexi, Bacteroidetes, and Firmicutes species in the leaves and roots was significantly affected by whether the soil was sterilized. The assembly of the microbiome in plant tissues was influenced by soil sterilization and by interactions within the microbial community.
Overall, our findings suggest that non-soil microbes, especially seed-borne microorganisms, play a key role in plant growth and development. We believe that microorganisms in seeds are more conducive to plant growth and development. Microorganisms in soil have a weak impact on the distribution of seed microbiota in plant tissues. The detailed mechanisms by which soil microorganisms influence the colonization of seed endophytes in different plant tissues, and the scope of methods that could usefully enhance that colonization, remain to be explored.
Coating and Surface Treatments on Orthodontic Metallic Materials
Metallic biomaterials have been extensively used in orthodontics throughout history. Gold, stainless steel, cobalt-chromium alloys, and titanium and its alloys, among other metallic biomaterials, have been part of the orthodontic armamentarium since the twentieth century. Metals and alloys possess outstanding properties and offer numerous possibilities for the fabrication of orthodontic devices such as brackets, wires, bands, and ligatures, among others. However, these materials have drawbacks that can present problems for the orthodontist. Poor friction control, allergic reactions, and metal ion release are some of the most common disadvantages found when using metallic alloys for manufacturing orthodontic appliances. In order to overcome such weaknesses, different approaches, such as coatings and surface treatments, have been developed to render these materials more suitable for orthodontic applications. The purpose of this paper is to provide an overview of the coating and surface treatment methods performed on metallic biomaterials used in orthodontics.
Introduction
Metallic biomaterials are widely used today in dental practices around the world. Metals and alloys offer unique physical properties, such as excellent electrical and thermal conductivity, and outstanding mechanical properties. Some metals can be used as passive substitutes for hard tissues (dental implants) and as fracture healing aids (bone plates and screws) due to their aforementioned exceptional mechanical properties and corrosion resistance. Others play more active roles, such as orthodontic brackets and archwires [1]. The most extensively used metallic biomaterials are commercially pure titanium and its alloys, stainless steel, and chromium-cobalt alloys [2].
Titanium and its alloys have been used in medicine and dentistry for decades. Different specialties within the dental profession take advantage of this material. Titanium alloyed with elements such as nickel, molybdenum or copper has widespread use in orthodontics [3]; the combinations of this metal with others, such as aluminium or vanadium, are used in oral rehabilitation, implantology [1,4,5], and maxillofacial surgery [6][7][8].
Titanium is an allotropic material that exists in two forms: a hexagonal close-packed structure (α-Ti) up to 882 °C and a body-centered cubic structure (β-Ti) above this temperature. The α-phase is characterized by high strength and low weight. Aluminium is one of the most common elements used to stabilize this phase in titanium alloys used as biomaterials. The β-phase shows high corrosion resistance. Molybdenum and vanadium, among other elements, are used to stabilize this phase [1]. Titanium can be classified as unalloyed or commercially pure (cp) titanium and titanium alloys. Commercially pure titanium can be further classified into four grades (1 through 4) depending on titanium and impurity contents; the most common titanium alloy used for dental applications is Ti6Al4V, with aluminium and vanadium being the main alloying elements [1]. Commercially pure titanium grade 4 and Ti6Al4V alloy are the most widely used types for manufacturing orthodontic brackets. Additional titanium alloys, such as NiTi (nickel-titanium) and CuNiTi (copper-nickel-titanium), are used for fabricating orthodontic wires [9]. Currently, NiTi alloys have a composition of 55% nickel and 45% titanium [10]; 5%-6% copper is added to NiTi alloys with the goal of increasing strength and reducing energy loss, and 0.5% chromium is added to CuNiTi alloys to reduce the stress transformation temperature to 27 °C [11]. Table 1 shows the chemical composition of commercially pure titanium and Ti6Al4V alloy.
Table 1. Chemical composition of titanium and its alloys (% mass/mass) [12,13].
Stainless steel is an alloy that is commonly used in orthodontics for manufacturing brackets, wires, ligatures, bands, and other applications [3,14]. This alloy is composed of iron and carbon and contains smaller quantities of nickel, chromium, and other elements that impart the property of resisting corrosion [15]. Cobalt-chromium alloys are mainly used in rehabilitation [16] and orthodontics [3]. The chromium content in this alloy enhances its corrosion resistance [1]. There are five types of stainless steel alloys depending on microstructure and chemical composition: austenitic, martensitic, ferritic, duplex (austenitic-ferritic) and precipitation-hardening [17]. Austenitic stainless steels are the most widely used types for manufacturing orthodontic brackets and archwires [18][19][20]. Table 2 shows the chemical composition and AISI (American Iron and Steel Institute) grades of stainless steel alloys. As already stated, metals and alloys have remarkable physical and mechanical properties. However, corrosion and elemental release are some of the distinctive disadvantages shown by metals used as biomaterials [27][28][29][30]. Corrosion can be defined as a "physicochemical interaction (usually of an electrochemical nature) between a metal and its environment which results in changes in the properties of the metal and which may often lead to impairment of the function of the metal, the environment, or the technical system of which these form a part" [31]. Saliva, as it contains bacteria, viruses, yeast, fungi and their products [32], may cause corrosion of orthodontic appliances.
House et al. [33] classified the corrosion types observed in metallic biomaterials as uniform attack, pitting and crevice corrosion, galvanic corrosion, intergranular corrosion, fretting corrosion, corrosion fatigue, and microbiologically-influenced corrosion. The alloys used in orthodontic appliances rely on the formation of a passive oxide film to resist corrosion, but this layer is not infallible since it can be disrupted by chemical and mechanical attack [33].
Elemental release is influenced by the composition of the alloy. Zinc and copper are released from stainless steel due to the electronic structure of such elements at the atomic level and the phase structure of the alloy [30]. Chromium is also released by this alloy [14,34]. Nickel, a known allergen, is released by nickel-titanium and stainless steel orthodontic alloys [35][36][37][38][39]. Previous research showed high concentrations of nickel in the saliva and oral mucosa of patients wearing orthodontic appliances [40][41][42]. Oral signs and symptoms of nickel allergy include burning sensation, gingival hyperplasia, angular cheilitis, erythema multiforme, stomatitis, papular perioral rash, and loss of taste, among others [43]. In addition, orthodontic treatment carried out with fixed appliances provides a unique environment for colonization by microorganisms, since orthodontic devices contain morphological irregularities that make it difficult for patients to maintain adequate oral hygiene [44].
Additional challenges are encountered when using metals as orthodontic biomaterials. Friction control is a major challenge, since a percentage of the applied force is dissipated to overcome friction while the remaining percentage is transmitted to the supporting structures to induce tooth movement [45]. Therefore, the total force is determined by the force needed to move a tooth plus the force needed to overcome friction between the bracket and the wire [46,47]. A high friction coefficient is desirable for anchorage, whereas a low friction coefficient is desirable for retraction of teeth and space closure [48]. Friction coefficients differ among the materials used for fabricating orthodontic brackets and wires, which in turn can modify treatment times according to the bracket/wire combination used [49].
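To make the force-partitioning idea concrete, the force actually transmitted to the tooth can be approximated as the applied force minus a sliding-friction term of the form μ·N at the bracket-wire contact. The snippet below only illustrates the arithmetic; the applied force, normal force, and friction coefficients are invented round numbers, not measured orthodontic values.

```python
# Illustrative force partitioning during sliding mechanics.
# All numbers are hypothetical, chosen only to show the arithmetic.
def transmitted_force(applied_cN, normal_cN, mu):
    """Return (friction, force available for tooth movement), in centinewtons."""
    friction = mu * normal_cN              # classical sliding-friction estimate
    return friction, max(applied_cN - friction, 0.0)

applied = 150.0   # total applied force (cN), hypothetical
normal = 200.0    # ligation/normal force at the bracket-wire contact (cN), hypothetical
for material, mu in [("uncoated archwire (assumed higher mu)", 0.2),
                     ("coated archwire (assumed lower mu)", 0.1)]:
    fric, useful = transmitted_force(applied, normal, mu)
    print(f"{material}: friction {fric:.0f} cN, transmitted {useful:.0f} cN")
```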
Many approaches have been investigated to overcome the above-mentioned disadvantages. Coating the surface of orthodontic metallic wires using various techniques and materials, as well as modifying the surface of wires and brackets, are among the strategies developed to improve both the mechanical and biological properties of metals used in orthodontics. Therefore, the main objective of this paper is to review coating techniques and materials as well as different surface treatments performed on metallic orthodontic materials.
Coating Techniques and Materials
The application of coatings is one of the approaches available to modify the surface of materials. Various coating techniques and materials have been used with the objective of improving surface properties. However, some problems with coatings have arisen, mainly delamination or wear of the coating [1]. Nonetheless, investigations continue to find suitable materials and techniques to improve the properties of metallic biomaterials. To date, this approach has been used mainly in vitro to further evaluate coating behavior, biological properties of coated substrates, and mechanical characteristics of both coatings and substrates. For instance, Bandeira et al. [50] used a coating technology known as the electrostatic powder technique to apply epoxy paint to NiTi wires. The purpose was to compare friction between coated and uncoated wires. It was found that coated wires showed reduced friction when compared to uncoated ones. Regarding biological properties, Kobayashi et al. [51] deposited a diamond-like carbon (DLC) coating on NiTi archwires to test in vitro whether nickel release could be reduced. This investigation concluded that there was a reduction in the concentration of nickel ions in physiological saline, which makes DLC non-cytotoxic in a corrosive environment. However, clinical approaches have also been tried. Demling et al. [52] coated stainless steel brackets with polytetrafluoroethylene (PTFE) and randomly placed them in the oral cavities of children for eight weeks to compare biofilm formation on those brackets vs. uncoated brackets. After this time, a significant reduction in biofilm formation was found on coated brackets.
Additional techniques and materials are addressed below. Figure 1 summarizes the classification of coating techniques for industrial use, some of which have been implemented in orthodontics to improve the surface properties of such materials.
Plasma Spray
This is a process in which finely ground metallic and non-metallic materials are deposited on a substrate in a molten or semi-molten state. This technology is based on direct-current-arc or radiofrequency inductively coupled plasma (RF-ICP) discharge, which provides high temperatures that, in turn, allow melting of any material [54]. Plasma spray allows for high deposition rates (80 g/min), thick deposits, and low costs. Further, coatings achieved by this technique possess a rough surface that is suitable for bone growth, which is desirable for orthopedic applications [55]. Investigations on the use of this technique on metallic substrates, some of which can be used in implant dentistry, have been reported [56][57][58]; however, its use has not been reported in the field of orthodontics to date.
Chemical Vapor Deposition (CVD)
This technique involves the flowing of a precursor gas into a chamber that contains one or more heated objects to be coated. Chemical reactions take place on and near the hot surfaces, which results in the deposition of a thin film on the surface. The production of chemical by-products and their exhaustion out of the chamber along with unreacted precursor gases accompanies the process. CVD has distinctive advantages: the films obtained with this technique are conformal (film thickness on the sidewalls is comparable to thickness on the top), a wide variety of materials can be applied and deposited with a high level of purity, and it also presents high deposition rates. Its main disadvantage lies in the properties of the precursors, since they need to be volatile at near-room temperatures, which is non-trivial for a great number of elements. Other drawbacks include the fact that precursors can be toxic, explosive, expensive, or corrosive, that the by-products can be hazardous, and that films need to be deposited at elevated temperatures, which restricts the types of materials that can be coated [59]. CVD has been used in endodontics to coat NiTi files [60,61] and in the coating of burs used in dentistry [62,63]. The use of this technology for coating metallic substrates in orthodontics has not been reported.
Physical Vapor Deposition (PVD)
This process consists of atomic deposition procedures in which a material is vaporized from solid or liquid sources in the form of atoms or molecules and transported in the form of vapor through a vacuum or low-pressure gaseous (plasma) environment to a substrate, where it finally condenses. This technique is suitable for depositing films in the range of a few nanometers to thousands of nanometers, for multilayer coating, graded composition deposits, very thick deposits, and freestanding structures [64]. Ryu et al. [65] used this method to apply coatings based on silver (Ag)-platinum (Pt) alloys to orthodontic stainless steel appliances to test their antimicrobial properties. They concluded that Ag-Pt coatings provide good antimicrobial activity during active orthodontic treatment. Krishnan et al. [66] coated beta titanium orthodontic archwires with titanium aluminum nitride (TiAlN) and tungsten carbide/carbon (WC/C) using PVD and evaluated frictional properties, surface morphology, and load deflection rate. They found that WC/C archwires demonstrated reduced frictional properties, better surface characteristics, and a low load deflection rate compared with TiAlN-coated and uncoated archwires. In addition, Krishnan et al. [67] performed electrochemical corrosion behavior and surface analyses, mechanical testing, microstructure, elemental release and toxicology evaluations of TiAlN and WC/C coatings on beta titanium orthodontic archwires and concluded that TiAlN shows better resistance to fluoride corrosive effects on this type of archwire. Physical vapor deposition can be divided into evaporation and physical sputtering.
Evaporation
Evaporation is the simplest physical vapor deposition method and has been proven useful for deposition of elemental films. This process is carried out in a vacuum system in which a material is heated to temperatures near its fusion or sublimation point [68,69]. Tripi et al. [60] used this technique for coating endodontic NiTi files. The use of this technique to coat orthodontic metallic materials has not been reported in the literature.
Physical Sputtering
This technique involves the vaporization of atoms or molecules from a solid surface by momentum transfer from bombarding energetic atomic-sized particles. These particles are ions of a gaseous material accelerated in an electric field [64,68]. Physical sputtering can be divided into a number of methods, including radiofrequency (RF) magnetron sputtering and high-energy ionic scattering.
Radiofrequency (RF) Magnetron Sputtering Method
This process removes surface atoms from a solid cathode (target) by bombarding it with positive ions from an inert gas discharge and deposits them on the surface of a substrate to form a thin film. Substrates are placed in a vacuum chamber and pumped down to a prescribed process pressure. Sputtering starts when a negative charge is applied to the target material, causing a plasma or glow discharge. Positively-charged gas ions generated in the plasma region are attracted to the negatively-biased target plate at very high speed. This collision creates a momentum transfer and ejects atomic-scale particles from the target. These particles are deposited as a thin film onto the surface of the substrates [70]. Shah et al. used this method to apply photocatalytic TiO2 to stainless steel orthodontic brackets in order to assess the antiadherent and antibacterial properties of such compounds against Lactobacillus acidophilus. Photocatalytic TiO2 demonstrated antibacterial and antiadherent properties; therefore, these authors also recommended it as a surface modification material to prevent the development of dental plaque during active orthodontic treatment [70]. Ozeki et al. [71] coated NiTi alloy plates with hydroxyapatite, alumina, and titanium using this method to alleviate the effects of nickel allergy from NiTi alloy implants. Ozeki et al. [72] used this technique to coat NiTi orthodontic wires with titanium to evaluate the deterioration of the superelasticity of such wires and concluded that this coating method has potential for application in the orthodontic field. Surmenev et al. [73] used this technique to deposit calcium phosphate on NiTi substrates to assess nickel release. They found that nickel release from this alloy decreased when a thin layer of such a compound was applied.
Electrodeposition
In this technique, the substrate to be coated is made the negative electrode, or cathode, in a cell that contains an electrolyte, which must allow the passage of an electrical current. The electrolyte is usually a solution in water of a salt of the metal to be deposited and is maintained at a controlled temperature. The electrical circuit is completed by the anode, which is generally made of the metal to be deposited and is located a short distance from the cathode. When a direct, low-voltage current is applied, positively-charged ions in the electrolyte move toward the cathode, where they undergo conversion to metal atoms and deposit on the cathode [74]. Investigations carried out in this area are in vitro in nature, but results are promising enough to incorporate some of these treatments into orthodontic practice. Redlich et al. [75] used electrodeposition to coat orthodontic archwires to reduce friction. The coating is based on a Ni film impregnated with inorganic fullerene-like nanospheres of tungsten disulphide. Their results showed a significant reduction in friction in coated vs. uncoated archwires. Samorodnitzky-Naveh et al. [76] tested inorganic fullerene-like tungsten disulfide (IF-WS2) nanoparticles in vitro, using electrodeposition to coat NiTi substrates in order to reduce friction, and found a substantial reduction that might result in multiple applications. Zein El Abedin et al. [77] electroplated tantalum on NiTi alloys and evaluated corrosion behavior in 3.5% NaCl solutions. Coated samples showed better corrosion resistance than uncoated alloys. Qiu et al. [78] used electrodeposition to apply hydroxyapatite and hydroxyapatite/zirconia composite coatings on NiTi alloys with the goal of assessing corrosion resistance. It was found that corrosion resistance significantly improved in a simulated body fluid after application of such a coating.
Sol-Gel Method
This process is a chemical method that allows the synthesis of glass, glass/ceramic and ceramic materials at temperatures much lower than other methods like CVD, PVD, or plasma spray, and a variety of shapes, such as monoliths or nanospheres, among others, may be obtained [53,79,80]. The sol-gel method (Figure 2) requires the presence in solution of species that undergo polymerization reactions, leading to the formation of a gelatinous network, which can be dried and densified to the required product. Two routes can be used for the formation of such species. The first route involves the use of alkoxides, in which an organic solvent (typically alcohol) is required to act as a mutual solvent for the organometallic starting compound and the water necessary for hydrolysis. The second route involves the use of aqueous solutions, in which colloidal-size particles are formed in an aqueous medium from ionic species using the principles of colloidal chemistry [81]. These methods have their greatest potential in thin organic films and coatings [82]. Chun et al. [83] applied photocatalytic titanium oxide (TiO2) to stainless steel orthodontic wires using this method to evaluate its antibacterial and antiadherent properties. The results of their investigation showed that photocatalytic TiO2 may be used to prevent the development of dental plaque during orthodontic treatment. Rendón et al. [84] coated stainless steel orthodontic archwires with a glass using this method to evaluate friction between ceramic brackets and coated vs. uncoated archwires, since high friction coefficients are found between ceramic brackets and stainless steel archwires. A reduction in friction was not observed when coated archwires were slid against monocrystalline sapphire ceramic brackets, and the authors concluded that a glass coating applied using the sol-gel method is not suitable to reduce friction between AISI 304 stainless steel archwires and ceramic brackets.
Surface Modification
Surface modification is aimed at improving the surface properties of metallic materials. Horiuchi et al. [85] modified the surface of TiNi substrates through an electrolytic treatment to thicken the oxide film, followed by heat treatment to induce crystallization of the oxide surface film, in order to examine the antibacterial effect. The amorphous oxide film was successfully modified. It was concluded that, after illumination with UVA light, photocatalytic activity was confirmed in the treated TiNi alloy and an antibacterial effect was observed.
Ion implantation is another method used to modify the surface of materials. It consists of a low-temperature process in which ions penetrate the surface of a material and modify it instead of coating it [86]. This technique has been used in orthodontics for different purposes. Teflon has been reported in the literature as one of the most widely used materials to coat orthodontic appliances. De Franco et al. [87] used Teflon to coat stainless steel ligatures and compared frictional resistance between such ligatures and elastomeric ones. They used several archwire/bracket combinations and found that Teflon-coated stainless steel ligatures reduce friction between archwires and brackets. Neumann et al. [88] modified the surface of NiTi, beta titanium and stainless steel wires using polytetrafluoroethylene (Teflon) by ion implantation with the objective of assessing corrosion resistance and mechanical endurance. This treatment prevented corrosion of the wires, but the Teflon peeled off from the surface after mechanical testing. Husmann et al. [89] investigated force loss due to friction after ion implantation of Teflon on titanium alloy, beta-titanium, and stainless steel archwires. The main conclusion drawn from this investigation was that such a coating significantly reduced frictional loss. A similar conclusion was drawn by Farronato et al. [90] in their study conducted to evaluate resistance to sliding of Teflon-coated archwires against different passive and active self-ligating brackets. A diamond-like carbon (DLC) coating was deposited on NiTi and stainless steel wires using the plasma-based ion implantation/deposition (PBIID) method to test friction and other mechanical properties, such as hardness and elastic modulus. This study concluded that DLC could be successfully deposited by the PBIID method, that there was a reduction in frictional force when this coating was present, and that the DLC had a higher hardness value than untreated substrates [91]. A similar investigation was carried out to test DLC deposited on stainless steel orthodontic brackets using the PBIID method. The authors found a significant reduction in friction after application of the DLC [92].
Conclusions
Orthodontic surface treatment is an important area of active research. A myriad of materials and techniques have been implemented to modify the surfaces of dental materials. However, today only a few are being used in clinical orthodontics, especially in areas such as friction control and reduction of bacterial adhesion. In the last few years, multidisciplinary research has helped close the gap that has existed between material surface engineering and clinical practice. Consequently, more new products have been introduced and have had a favorable impact in terms of reduction in biological and financial costs and in treatment time.
Figure 1. Classification of processes used for coating at the industrial level (extracted from Peláez-Vargas, A. [53]).
d-Alanine Carboxypeptidase from Bacillus subtilis Membranes
The purified D-alanine carboxypeptidase isolated from Bacillus subtilis was irreversibly inactivated by penicillin G and other β-lactam antibiotics. The reaction involved an initial reversible binding step followed by an irreversible step. Both the reversible binding constant (KI) and the rate constant for irreversible inactivation (k3) were determined for a number of β-lactam antibiotics. In general, KI varied among these antibiotics much more than k3. Significant differences were also observed in the KI for the pure soluble enzyme and that for the membrane-bound enzyme; these differences might be attributable to the solubility of the β-lactam antibiotic in the membrane phase.
SUMMARY
The purified D-alanine carboxypeptidase isolated from Bacillus subtilis was irreversibly inactivated by penicillin G and other β-lactam antibiotics.
The reaction involved an initial reversible binding step followed by an irreversible step. Both the reversible binding constant (KI) and the rate constant for irreversible inactivation (k3) were determined for a number of β-lactam antibiotics.
In general, KI varied among these antibiotics much more than k3. Significant differences were also observed in the KI for the pure soluble enzyme and that for the membrane-bound enzyme; these differences might be attributable to the solubility of the β-lactam antibiotic in the membrane phase.
Current understanding of the interaction of penicillin with the microbial cell indicates that it reacts with multiple components on the cell surface, of which one, or at least a limited number, are involved in lethality (1,2). Presumably one of these components, in Escherichia coli at least, is the transpeptidase which catalyzes the cross-linking of peptidoglycan subunits in the bacterial cell wall and is irreversibly inactivated by penicillins and cephalosporins.
Killing results from an increased sensitivity to lysis due to failure of synthesis of these cross-bridges.
It has been hypothesized that penicillin G inhibits this reaction by acting as an analog of the D-alanyl-D-alanine terminus of the peptidoglycan subunits (GlcNAc-MurNAc-L-Ala-D-Glu-L-Lys-D-Ala-D-Ala) which are the substrates in this reaction (1).
Bacillus subtilis membranes contain a D-alanine carboxypeptidase which is also irreversibly inactivated by penicillins and cephalosporins (3). The function of this enzyme is unknown although it is believed to be involved in some step of cell wall biosynthesis.
The enzyme is the major penicillin binding component of B. subtilis but its inactivation is not lethal to the cell (4). The enzyme has been solubilized by the nonionic detergent Triton X-100 and purified to homogeneity (5). The interaction of the purified enzyme with penicillins and cephalosporins has been studied in the present work.
* This work was supported by research grants from the United States Public Health Service (AI-09152 and m-13230) and the National Science Foundation (GB-29747X). This is Paper XXXIV in the series "Biosynthesis of the Peptidoglycan of Bacterial Cell Walls."
$ Present address: Department of Biology, McCollum-Pratt Institute, Johns Hopkins University, Baltimore, Maryland 21218.
$ To whom reprint requests should be addressed.
MATERIALS AND METHODS
The D-alanine carboxypeptidase was prepared and assayed as previously described (5). The antibiotics used were kind gifts of the Squibb Institute for Medical Research, New Brunswick, New Jersey; Bristol Laboratories, Syracuse, New York; and Eli Lilly and Co., Indianapolis.
Irreversibility of Inactivation-Preincubation of the purified soluble D-alanine carboxypeptidase with penicillin G resulted in inactivation that could not be reversed by penicillinase (Table I) or excess substrate (Fig. 1). Radioactive penicillin G remained bound to the enzyme even after the enzyme was denatured in sodium dodecyl sulfate (5,6).
Titration of Enzyme with Penicillin-Titration of the enzyme with penicillin G indicated that 1 mole of penicillin was sufficient to inactivate 1 mole of enzyme (Fig. 2).
Effect of Penicillin on Enzyme Sulfhydryl Groups-The D-alanine carboxypeptidase was inhibited by reagents which react with sulfhydryl groups (5), suggesting an involvement of a sulfhydryl group in the catalytic site. The inactivation of the enzyme by penicillin G involved the loss of one of the four sulfhydryl groups in the enzyme which are titrated by 5,5'-dithiobis(2-nitrobenzoic acid) (Fig. 3). These data suggest that penicillin G inactivates the enzyme by reacting with a single sulfhydryl group in the active site of the enzyme, or at least that it reacts in the neighborhood of a sulfhydryl group, thereby protecting it from titration.
Determination of KI and k3-Since the carboxypeptidase is inactivated by the penicillin, the mechanism of inhibition which would be predicted is as follows: E + I ⇌ EI → EI', where I is a penicillin, EI is a reversible complex, and EI' is the inactivated enzyme. An analysis of this type of inhibition has been presented by Kitz and Wilson (7) in considering the irreversible inactivation of acetylcholinesterase by esters of methanesulfonic acid. The same kinetic analysis has been applied in the present case. It predicts that ln(e/E0) = -k3·t/(1 + KI/I), where e is the remaining active enzyme (E + EI) obtained by assay after dilution and treatment with penicillinase and E0 is the total enzyme. When I is constant during the reaction (i.e. I >> E0), ln(e/E0) = -kapp·t, where kapp = k3/(1 + KI/I) and 1/kapp = (1/k3) + (KI/k3) × (1/I). When -log(e/E0) was plotted against t at different concentrations of I, straight lines were obtained (Fig. 4) as predicted by this mechanism. The slopes of these lines give kapp as a function of I. A double reciprocal plot of 1/kapp versus 1/I also gave a straight line (Fig. 5). The intercept is then 1/k3 and the slope KI/k3, from which k3 and KI can be calculated. Such data obtained for propicillin are illustrated in Figs. 4 and 5 and for 10 β-lactam antibiotics are summarized in Table II. Penicillin G and penicillin V inactivation rates (k3) could not be measured at 37°, but were slow enough at 4° to allow measurement.
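As a worked illustration of this double-reciprocal analysis, the sketch below recovers k3 and KI from kapp values measured at several inhibitor concentrations by fitting 1/kapp = (KI/k3)(1/I) + 1/k3. The concentrations and rate constants are synthetic placeholders chosen only to show the arithmetic; they are not the values reported in Table II.

```python
# Kitz-Wilson style analysis: recover k3 and KI from kapp measured at several
# inhibitor concentrations. The synthetic data assume k3 = 0.4 s^-1, KI = 2e-3 M.
import numpy as np

I = np.array([0.5e-3, 1e-3, 2e-3, 5e-3, 10e-3])        # inhibitor concentrations (M), hypothetical
k3_true, KI_true = 0.4, 2e-3
kapp = k3_true / (1 + KI_true / I)                     # kapp = k3 / (1 + KI/I)

slope, intercept = np.polyfit(1.0 / I, 1.0 / kapp, 1)  # 1/kapp = (KI/k3)(1/I) + 1/k3
k3_fit = 1.0 / intercept
KI_fit = slope * k3_fit
print(f"k3 = {k3_fit:.3f} s^-1, KI = {KI_fit:.2e} M")
```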
The rate constants, k3, for the other β-lactam antibiotics were quite similar, ranging between 4.2 × 10⁻¹ s⁻¹ for propicillin and 0.2 × 10⁻¹ s⁻¹ for cloxacillin, a variation of only 20-fold. The dissociation constants varied from 0.1 × 10⁻³ M for penicillin G (at 4°) to 28 × 10⁻³ M for cephalothin, a variation of almost 300-fold.
Inhibition of Membrane-bound Enzyme by β-Lactam Antibiotics-When the constants KI and k3 were determined for the membrane-bound enzyme, a significant difference was found in the slope of the double reciprocal plot, but not in the intercept (Fig. 5). The membrane-bound enzyme had an increased affinity for the antibiotics (Table II). (At 4°; assay at the lower temperature means the rate constants are not directly comparable, but the binding constants should not be very temperature dependent.) This alteration in KI for antibiotic was not reflected in an altered Km for UDP-MurNAc-pentapeptide between the membrane-bound and soluble enzyme, nor by an altered KI for inhibition by 5,5'-dithiobis(2-nitrobenzoic acid) (5). Thus, it appeared that the active site of the enzyme was not significantly altered during solubilization and purification. Therefore the increased affinity for β-lactam antibiotics of the membrane-bound enzyme was an environmental effect of the membrane. One possibility was that the hydrophobic antibiotics were "dissolved" in the membrane, increasing the local concentration of the antibiotic near the active site of the enzyme. A correlation between the hydrophobicity of the penicillins and the increase in affinity that could be attributed to this process was found (Fig. 6). The two cephalosporins examined, cephalothin and cephalosporin C, did not fit on this curve. Since this family of antibiotics has a different structure, it may require a separate correlation curve. Conceivably its distribution in butanol-water is not proportional to its solubility in the membrane lipids in the same manner as the penicillins.
Furthermore, membrane hydrophobicity (8) presumably slightly altered the sensitivity of the membrane-bound enzyme to β-lactam antibiotics (see 9, 10). The addition of Triton X-100 to the membrane preparations resulted in a decrease in the affinity, close to that of the purified enzyme.
Protection of Carboxypeptidase from Penicillin G by Substrate-If the KI and k3 measured by the method of Kitz and Wilson (7) are measures of the binding of β-lactam antibiotics and reaction at or near the substrate binding site, one would predict that high concentrations of substrate would protect the enzyme against inactivation by β-lactam antibiotics, i.e. substrate is a competitive inhibitor of inactivation (7). This prediction was verified. The enzyme could be protected from penicillin G by the presence of substrate during the preincubation (Fig. 7). Substrate protection also required zinc ions. This fact is compatible with the idea that zinc is required for the proper binding of substrate to enzyme (5).
Reciprocal Plots-Double reciprocal plots of the reaction rate data obtained at one time point intersected on the abscissa when plotted by the method of Lineweaver-Burk (11) (Fig. 8A), suggestive of competitive inhibition.
When the data were plotted by the method of Dixon (12), intersection at a point above the ordinate was again indicative of competitive inhibition (Fig. 8B), but the plots were nonlinear.
The nonlinearity resulted from the irreversibility of the inhibition. At high levels of inhibition most of the enzyme was tied up as irreversibly inhibited complex, and the assumptions made in the usual rate scheme could no longer be valid.
A plot of the data by the method of Morrison (13) and Henderson (14) for very tight binding inhibitors showed an increase in slope with increasing substrate concentration, again indicative of competitive inhibition (Fig. 8C). In all cases the apparent KI measured was 1.2 to 1.7 × 10⁻⁶ M for penicillin G. This value is far from the true KI as measured above. Thus, misleading data can be obtained if the actual mechanism of the inhibition is not considered in the kinetic analysis.
A Hill plot (15) made at several concentrations of substrate resulted in a Hill coefficient of 1.2 to 1.3, indicating, as expected, the absence of cooperativity (Fig. 8D).
DISCUSSION
The D-alanine carboxypeptidase of B. subtilis was irreversibly inhibited by penicillins and cephalosporins.
The inhibition involved two steps, a reversible binding to form a Michaelis complex, and then the irreversible reaction involving the loss of a single titratable sulfhydryl residue.
The initial binding was competitive with the substrate, and the enzyme would be protected from inhibition by the presence of substrate, suggesting that penicillin, in all likelihood, reacts at the active center.
The irreversibility of the inactivation of the B. subtilis D-alanine carboxypeptidase contrasts with the reversible inhibition of the Streptomyces enzyme by penicillin G.
(Figure 8 legend: A, the data were plotted by the double reciprocal method of Lineweaver and Burk (11). The curves intersected on the abscissa for penicillin G (data shown) and other β-lactam antibiotics (not shown). The point of intersection of the lines on the abscissa is magnified in the inset. B, a plot of the data according to the Dixon method (12) resulted in curves intersecting above the ordinate with an estimated KI of 1.2 × 10⁻⁶ M, suggestive of competitive inhibition, although the extrapolation was uncertain. C, the data were replotted according to the equation I/[1 − (Vi/V0)] = Etotal + KI[(Stotal + Km)/Km](V0/Vi).)
It has been suggested that the inhibition of the Streptomyces D-alanine carboxypeptidase might be allosteric (i.e. due to binding of penicillin at a site other than the substrate binding site) because one of these enzymes shows peculiar inhibition kinetics (18). All of the data obtained with the B. subtilis enzyme, including the kinetics of inhibition and the protection by substrate from inactivation, are compatible with the simpler hypothesis that penicillin reacts with this enzyme at the substrate binding site.
Three features are required in a β-lactam antibiotic in order to inhibit the in situ enzyme effectively. The more effective antibiotics would have a high rate of inactivation (k3), a tight binding to the enzyme (KI), and a high solubility in the membrane phase. Of these, the most important would seem to be the binding constant.
The side chain of the β-lactam antibiotic plays an important role in the binding.
The importance of the solubility of the penicillin in organic phases has been discussed by several authors (10,20,21) in relation to supposed increased permeability through the layers of the wall to the membrane.
However, solubility in the membrane appears to be an important feature also.
Pre-mRNA splicing factor U2AF2 recognizes distinct conformations of nucleotide variants at the center of the pre-mRNA splice site signal
Abstract The essential pre-mRNA splicing factor U2AF2 (also called U2AF65) identifies polypyrimidine (Py) tract signals of nascent transcripts, despite length and sequence variations. Previous studies have shown that the U2AF2 RNA recognition motifs (RRM1 and RRM2) preferentially bind uridine-rich RNAs. Nonetheless, the specificity of the RRM1/RRM2 interface for the central Py tract nucleotide has yet to be investigated. We addressed this question by determining crystal structures of U2AF2 bound to a cytidine, guanosine, or adenosine at the central position of the Py tract, and comparing them with U2AF2-bound uridine structures. Local movements of the RNA site accommodated the different nucleotides, whereas the polypeptide backbone remained similar among the structures. Accordingly, molecular dynamics simulations revealed flexible conformations of the central, U2AF2-bound nucleotide. The RNA binding affinities and splicing efficiencies of structure-guided mutants demonstrated that U2AF2 tolerates nucleotide substitutions at the central position of the Py tract. Moreover, enhanced UV-crosslinking and immunoprecipitation of endogenous U2AF2 in human erythroleukemia cells showed uridine-sensitive binding sites, with lower sequence conservation at the central nucleotide positions of otherwise uridine-rich, U2AF2-bound splice sites. Altogether, these results highlight the importance of RNA flexibility for protein recognition and take a step towards relating splice site motifs to pre-mRNA splicing efficiencies.
by relatively short, consensus motifs that can vary in length and sequence. Uridine (U)-rich polypyrimidine (Py) signals precede the major class of 3′ splice sites. Yet, purines often interrupt Py tract signals and can regulate alternative 3′ splice site selection in multicellular eukaryotes (2).
The essential pre-mRNA splicing factor U2AF2 (also called U2AF65) recognizes the Py tract signal to promote the earliest stage of pre-mRNA splicing. The U2AF2 protein forms a ternary complex with SF1 and U2AF1 (also called U2AF35), which ensures 3′ splice site fidelity by identifying the branchpoint and AG consensus sequences flanking the Py tract. In a series of ATP-dependent steps, the 5′ and 3′ splice sites ultimately are positioned for catalysis in the active spliceosome. Breakthrough cryo-electron microscopy structures have revealed the later stages of spliceosome assembly (reviewed in (3)), whereas piecewise X-ray crystallography and NMR structures provide snapshots of splicing factor domains during the transient, early stages of 3′ splice site recognition. The U2AF2 protein recognizes the Py tract via two tandem RNA recognition motifs (RRM1 and RRM2) and flanking α-helices (U2AF2 12L). In the absence of RNA or in the presence of degenerate Py tracts comprising fewer than four consecutive uridines, U2AF2 adopts a 'closed' conformation in which RRM1 is masked and only RRM2 is available for RNA binding (4)(5)(6). When bound to a longer uridine tract such as the 3′ splice site consensus, the U2AF2 RRMs have an 'open', side-by-side conformation with RRM1 and RRM2 contacting the respective 3′ and 5′ regions of the Py tract (4,7). Both RRMs prefer uridines (8,9), although the N-terminal RRM1 is more tolerant of cytidine and purine substitutions in the Py tract than is RRM2 (10,11). In particular, the uridine-specificity of a promiscuous RRM1 site can be enhanced by a structure-guided mutation (10). Yet, unlike the well-characterized RRM1 and RRM2 of U2AF2, the sequence specificity of the RRM1/RRM2 interface for the central nucleotide of the Py tract is unknown.
U2AF2 defects have been associated with a variety of human diseases. Acquired U2AF2 mutations recur among certain cancers (12)(13)(14), although with lower frequency than in the U2AF1 subunit (15). De novo mutations of U2AF2 are significantly associated with developmental delay and malformation (16). U2AF2 binding to RP2 and NF1 Py tracts is reduced by purine substitutions associated with retinitis pigmentosa and neurofibromatosis (10). U2AF2 has been shown to regulate splicing of an IL7R exon that is dysregulated in autoimmune disorders including multiple sclerosis (17). Moreover, disrupted association between U2AF2 and PTEN correlates with autism spectrum disorder (18). Structure/function studies of these disease-associated U2AF2 mutations highlight key interfaces for the normal functions of the protein and provide insight into mechanisms of disease progression. However, understanding the normal sequence specificity and adaptability of the protein is an important baseline for comparison with disease-associated mutants.
Here, we investigate the interactions and nucleotide sequence specificity of the U2AF2 RRM1/RRM2 interface. By X-ray crystallography and complementary molecular dynamics simulations, we find that a protein scaffold accommodates bulky purines at the RRM1/RRM2 interface by repositioning the central nucleotides of the bound Py tract. Structure-guided variants increased the ability of U2AF2 to distinguish purines from pyrimidines at the central Py tract position. In human cells, we found that the nucleotide consensus was more variable at the central positions of sequence logos for U2AF2 binding to sites that were otherwise uridine-rich. These results reveal that U2AF2, a key factor for early spliceosome assembly, adapts to natural splice site variations by offering alternative binding sites for different RNA conformations.
Preparation of U2AF2 12L proteins and oligonucleotides
The wild-type and mutant U2AF2 12L proteins (residues 141-342 of NCBI RefSeq NP 009210) were expressed and purified as described (7,12). The final protein buffer was 100 mM NaCl, 15 mM HEPES pH 6.8, 0.2 mM TCEP following size exclusion chromatography. Purified, deprotected RNA oligonucleotides were purchased from Horizon Discovery Ltd.
Fluorescence anisotropy RNA binding assays
The RNA-binding experiments followed protocols described in (12,19). The 5′-fluorescein-labeled RNA oligonucleotides were diluted >100-fold to 30 nM final concentration in a binding buffer comprising 100 mM NaCl, 15 mM HEPES at pH 6.8, 0.2 mM TCEP, 0.1 U ml−1 Superase-In™ (Invitrogen™). The changes in total volume following addition of the protein were <10% to minimize dilution effects. The fluorescence anisotropy changes during titration were measured using a FluoroMax-3 spectrophotometer, temperature-controlled at 23 °C by a circulating water bath. Samples were excited at 490 nm and emission intensities recorded at 520 nm with slit widths of 5 nm. The fluorescence emission spectra also were monitored for similarity throughout the experiment. Each titration was fit with a nonlinear equation (12,19) to obtain the apparent equilibrium dissociation constant (KD). These fits and the P-values of a two-tailed unpaired t-test with Welch's correction were calculated using Prism v6.0 (GraphPad Software, Inc.). The apparent equilibrium affinities (KA) are the reciprocals of each KD. The average KD or KA values and standard deviations are given for three replicates of each experiment.
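A minimal version of such a titration fit is sketched below using a simple one-site binding isotherm; the published fitting equation (12,19) may additionally account for probe depletion near the stoichiometric regime, so the model, concentrations, and anisotropy values here are illustrative assumptions only.

```python
# Sketch of fitting a fluorescence anisotropy titration to a one-site isotherm.
# Protein concentrations and anisotropy values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def one_site(P, A_free, A_bound, Kd):
    """Anisotropy of the RNA probe as a function of total protein P (simple hyperbola)."""
    return A_free + (A_bound - A_free) * P / (Kd + P)

P = np.array([0, 10, 25, 50, 100, 200, 400, 800, 1600], dtype=float)   # nM, hypothetical
A = one_site(P, 0.05, 0.20, 100.0) + np.random.default_rng(1).normal(0, 0.002, P.size)

popt, pcov = curve_fit(one_site, P, A, p0=[0.05, 0.2, 50.0])
Kd, Kd_err = popt[2], np.sqrt(np.diag(pcov))[2]
print(f"apparent Kd = {Kd:.0f} +/- {Kd_err:.0f} nM")
```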
Crystallization, data collection and structure determination
Crystallization conditions were similar to those described (12). Following concentration to 20 mg ml−1, the U2AF2 12L protein was mixed with a 1.2-fold molar excess of purified oligonucleotide variant (5′-phosphoryl-UU(dU)NU(5BrdU)CC-3′, where N is cytidine (C5), adenosine (A5), or guanosine (G5)). Crystals were obtained by hanging drop vapor diffusion experiments with precipitants composed either of 0.60 M succinic acid, 0.10 M HEPES pH 7.0, 2% PEG monomethyl ether 2000 (C5) or 0.24 M Na malonate, 26% PEG 3350 (G5, A5). Addition of 0.1 µl of 5% w/v LDAO detergent (Hampton Research) to the G5 or A5 drops and 10% sucrose to the A5 drops prior to incubation improved crystal quality. Crystals were flash-cooled in liquid nitrogen after coating with a mixture of 1:1 (v/v) paratone-N and silicone oil (G5), or after sequential transfers to precipitant solutions containing either 21% glycerol (C5) or 28% sucrose/8% PEG 200 (A5). Crystallographic data sets were collected at 100 K remotely using the Stanford Synchrotron Radiation Lightsource (SSRL) and processed using the SSRL AUTOXDS script (A. Gonzalez and Y. Tsai) implementation of XDS (21) and CCP4 packages (22). The structures were determined using the Fourier synthesis method starting from PDB code 6XLW. The models were adjusted using COOT (23) and refined using PHENIX (24). The crystallographic data and refinement statistics are given in Supplementary Table S1 and reduced-bias electron density maps (25) are shown in Supplementary Figure S3.
Molecular dynamics simulations and analysis
Molecular dynamics (MD) simulations were run using Amber 18 (26). The U2AF2-U5, U2AF2-C5, U2AF2-G5 and U2AF2-A5 crystal structures were solvated in a truncated octahedron of OPC water (27) with a 12 Å margin around the solute using Leap. The system was neutralized using eight Na+ ions, and 20 Na+ and Cl− ions were added to model NaCl at a bulk concentration of 150 mM (28). The starting structures were energy-minimized using the steepest descent and then conjugate gradient methods, each for 500 steps. Subsequently, the systems were heated to 298 K in 200 ps with a timestep of 2 fs. These equilibrated structures were used to run the final production dynamics for 2 µs using the Amber ff14SB (29) + RNA.OL3 (30)(31)(32) forcefields with periodic boundary conditions, using a 2 fs timestep and a direct space cutoff of 10 Å for non-bonded interactions. The structures were written to a trajectory file every 100 ps. Pressure was maintained at 1 atm using a Monte Carlo barostat and the temperature was maintained at 298 K using a Langevin thermostat with a collision frequency of 1.0 ps−1. For the oligonucleotide-only simulations, the U2AF2 protein coordinates were removed to generate the starting structures, then the same steps used for the protein-RNA complex were followed.
For analysis of the MD simulations, all the trajectories were merged, and the water and ions were removed using AmberTools 18 (33). The trajectories were aligned using the Cα atoms of the RRMs with the starting structures for the U2AF2-RNA simulations, and using the six-membered base rings for the simulations of the isolated oligonucleotide, with aligner in LOOS (34). Root mean square fluctuations were calculated for the six-membered rings of the RNA residues using rmsf in LOOS (34). Root mean square deviations (RMSD) of the Cα atoms were calculated using the rmsd2ref tool in LOOS. Pairwise RMSD was calculated using a custom python script, rmsds-align.py.
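The per-residue RMSF reported by these tools is, in essence, the time-averaged deviation of each atom group from its mean position in the aligned trajectory. The standalone NumPy sketch below reproduces that definition on a toy coordinate array; it does not parse Amber trajectories and is not a substitute for the LOOS utilities used here.

```python
# RMSF of selected atoms from an aligned trajectory, computed with plain NumPy.
# coords has shape (n_frames, n_atoms, 3); here it is random toy data.
import numpy as np

rng = np.random.default_rng(2)
coords = rng.normal(0.0, 0.5, size=(1000, 8, 3))      # pretend aligned trajectory (Angstrom)

mean_pos = coords.mean(axis=0)                        # average structure, shape (n_atoms, 3)
disp = coords - mean_pos                              # per-frame displacement from the mean
rmsf = np.sqrt((disp ** 2).sum(axis=2).mean(axis=0))  # RMSF per atom (Angstrom)

for i, value in enumerate(rmsf):
    print(f"atom {i}: RMSF = {value:.2f} A")
```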
Enhanced UV-crosslinking and immunoprecipitation
U2AF2 eCLIP-seq experiments followed the protocol in (35) with modifications reported in (36). For consistency with eCLIP-seq of U2AF1 splicing factor complexes (36), we used a human erythroleukemia (HEL) cell line (ATCC, Cat #TIB-180) cultured in RPMI 1640 supplemented with 1% L-glutamine, 1% penicillin-streptomycin and 10% FBS (ThermoFisher Sci. Cat #'s 11875093, 25030081, 15140122 and Gemini Bio-Products Cat #'s 100-106). The HEL cells were subjected to UV-crosslinking and U2AF2-RNA complexes were immunoprecipitated with 8 µg anti-U2AF2 antibody (Sigma-Aldrich, Cat #U4758) and Dynabeads Protein G (ThermoFisher Sci., Cat #10004D). RNA was partially digested with RNase I (ThermoFisher Sci., Cat #AM2295) and 32P-labeled (PerkinElmer, Cat #BLU002Z250UC), followed by RNA linker ligation. After SDS-PAGE and transfer to a nitrocellulose membrane, a region between 65-110 kDa was excised to obtain U2AF2-bound RNA complexes (Supplementary Figure S7). RNA was isolated using the RNA Clean & Concentrator-5 kit (Zymo Research, Cat #R1016) after treatment with proteinase K, then subjected to library preparation. Libraries were sequenced on an Illumina NovaSeq 6000 system at the Yale Center for Genome Analysis (YCGA). The U2AF2 eCLIP-seq was performed in two replicates, compared with four replicates for the U2AF2 eCLIP-seq with U2AF1 overexpression (OE) (36). The U2AF2 eCLIP-seq reads were processed according to the pipeline reported in (36). After duplicate removal (FastUniq (37)) and adapter trimming (Cutadapt (38)), reads were aligned to the human genome (GRCh38.p10) with STAR (version 2.7.0f, GENCODE Release 27 for transcript annotation). The average alignment rates were 86.2% and 81.8% for libraries with endogenous (here) or OE U2AF1 (36). Crosslinked nucleotides were extracted from BAM files considering the genomic position right after the end of each sequenced read. Bound junctions were confidently identified considering a nucleotide region from -40 to +10 around the 3′ splice site in all the annotated splice junctions in the human genome and using a coverage threshold of at least 10 reads, resulting in 149,708 and 90,918 selected splice junctions for samples with endogenous or OE U2AF1 (36). Binding metaprofiles were built after trimming outlier signals at each nucleotide position from -20 to +5 around the 3′ splice site.
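The crosslink-site bookkeeping described above (taking the genomic position immediately after the end of each aligned read and keeping positions supported by at least 10 reads) can be approximated with pysam as sketched below. The BAM file name and the reverse-strand convention are assumptions for illustration and do not reproduce the full pipeline of (36).

```python
# Rough sketch: count putative crosslink positions from an aligned eCLIP BAM.
# File name and strand handling are assumptions; the published pipeline differs in detail.
from collections import Counter
import pysam

crosslinks = Counter()
with pysam.AlignmentFile("u2af2_eclip.sorted.bam", "rb") as bam:   # hypothetical file
    for read in bam.fetch():
        if read.is_unmapped or read.is_secondary or read.is_supplementary:
            continue
        # Position right after the end of the sequenced read, as described in the text;
        # for reverse-strand reads we take the base 5' of reference_start (an assumption).
        pos = read.reference_start - 1 if read.is_reverse else read.reference_end
        crosslinks[(read.reference_name, pos)] += 1

covered = {site: n for site, n in crosslinks.items() if n >= 10}   # coverage threshold
print(f"{len(covered)} positions with >= 10 supporting reads")
```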
U2AF2 has little sequence preference for the central Py tract nucleotide
To fill a gap in previous studies of U2AF2-RNA sequence specificity (10,11), we investigated the preferences of U2AF2 for binding different nucleotides at the central position of the Py tract (Figure 1). Since nine nucleotide binding sites have been noted for the open conformation of U2AF2 12L (4,7), we compared the binding affinities of U2AF2 for nine-nucleotide RNAs substituted with U, C, G or A at the fifth nucleotide. We fit the fluorescence anisotropy changes of 5′-fluorescein-labeled oligonucleotides titrated with protein to obtain the apparent equilibrium dissociation constants (KD) using nonlinear regression as described (19). The KD's of the A5-substituted RNAs are lower estimates, since the fluorescence anisotropies at the highest concentrations of U2AF2 12L in the titrations are less than the maxima of the fits. We first tested substitutions of a prototypical, strong Py tract from the adenovirus major late promoter transcript (AdML) (Figure 1A). The nine-nucleotide AdML Py tract bound U2AF2 12L with approximately three-fold lower affinity than a previously studied, 13-mer Py tract from the same intron (KD 100 nM versus 30 nM) (7). Substitution of a cytidine (C5) for the fifth uridine (U5), which is located between the RRM1 and RRM2 of the U2AF2 12L structure (4,7), does not significantly change the binding affinity. For purine substitutions, a guanosine (G5) incurred a subtle, approximately two-fold penalty, whereas an adenosine (A5) produced a more substantial decrease in affinity (at least 4-fold, equivalent to ∼1 kcal mol−1).
Figure 1. The specificity of the U2AF2 RRM-containing region for centrally-substituted Py tract oligonucleotides. The boundary of the U2AF2 12L construct (blue) used for RNA binding and structure determination is inset in panel (A). Fluorescence anisotropy measurements of U2AF2 12L titrated into the given RNAs, including (A) an AdML Py tract (blue) and its central cytidine (mustard/yellow), guanosine (salmon), or adenosine (green) substitutions, (B) a nine-uridine tract (blue) and substituted with cytidine (mustard/yellow), adenosine (green), or guanosine at G5 (salmon), or (C) a nine-uridine tract (dashed blue line shown for reference) substituted with G4 (light gray), G5 (salmon), G4/G5 (orange-yellow), G6 (dark grey), or G5/G6 (maroon). The average data points and standard deviations of three experiments are overlaid with the fitted binding curves. The sequences of the 5′-fluorescein-labeled RNA oligonucleotides are inset, alongside average apparent equilibrium dissociation constants (KD) with standard deviations of three replicates. (D) Bar graph of U2AF2 12L binding affinities for the RNAs shown in B and C. The KD's of U2AF2 12L for binding the A5 RNAs are estimates due to the very low affinities. The significance of the changes in the average apparent binding affinities compared to the G5-substituted oligonucleotide were calculated using two-tailed unpaired t-tests with Welch's correction in GraphPad Prism: P-values: n.s., not significant; *, <0.05; **, <0.005. The differences between the U2AF2 12L binding affinities for the G4 and G4/G5 RNAs, or between the G6 and G5/G6 RNAs, were not significant. The U2AF2 12L binding affinities for modified oligonucleotides used for co-crystallization are shown in Supplementary Figure S1.
We next introduced substitutions in the context of a consensus uridine tract (Figure 1B). The U2AF2 12L protein bound the uridine tract with similar affinity as the AdML Py tract, consistent with a sequence difference of two terminal cytidines. As observed for the AdML Py tract, the effects of the nucleotide substitutions on U2AF2 12L binding ranged from no significant effect for C5, to less than 2-fold for G5, to a more substantial estimated penalty for the A5 substitution. The greater discrimination of U2AF2 against adenosine could contribute to defining the AG-exclusion zone, a region devoid of AG dinucleotides between the branchpoint and the bona fide AG at the 3′ splice site (39).
We further evaluated the consequence of a guanosine substitution at the neighboring sites, G4 and G6, which are expected to bind RRM2 and RRM1 (Figure 1C, D). Although the G4- or G6-associated changes in U2AF2 12L binding affinities were moderate, the approximately three-fold decreases were comparable to the penalties for U2AF2 binding to disease-associated mutations in the RP2 and NF1 Py tracts (10). Addition of G5 to the G4 or G6 substitutions (G4/G5 or G5/G6) had no additional effect, again reflecting the promiscuity of the inter-RRM binding site at the fifth position of the oligonucleotide.
To relate U2AF2 12L's subtle discrimination among different nucleotides at the center of the Py tract to intact 3′ splice site recognition, we compared the RNA affinities of a ternary complex among the U2AF2, SF1 and U2AF1 subunits (Figure 2). The U2AF2 and U2AF1 constructs were nearly full length apart from the RS domains, which contact the branchpoint rather than the Py tract (40)(41)(42), and a zinc knuckle/proline-rich region of SF1 that has been implicated in protein-protein interactions (43)(44)(45)(46). Although the U2AF1 subunit retained an MBP tag to enhance expression and solubility, this tag has no detectable effect on RNA affinity (6). We measured the binding affinities of the purified protein complex for AdML splice site RNAs spanning the branchpoint, Py tract, and 3′ splice site junction. We compared the effects of four guanosine substitutions at different positions of the Py tract. Similar to U2AF2 12L binding the G6-substituted Py tract, most guanosines reduced the RNA affinity of the ternary complex by approximately three-fold. Notably, a guanosine at the central position (-9G) had no significant effect on affinity for the protein complex, in agreement with the subtle effect of G5 on U2AF2 12L association with the isolated Py tract. This result supported the relevance of the nine nucleotide binding sites of U2AF2 12L to splice site recognition in the context of the ternary U2AF2-SF1-U2AF1 complex.
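For reference, the fold-changes in apparent affinity quoted in this section map onto binding free-energy differences through ΔΔG = RT·ln(fold-change). The short calculation below simply illustrates that conversion near the assay temperature; the fold-change values are examples rather than additional measurements.

```python
# Convert a fold-change in apparent KD into a binding free-energy penalty.
import math

R = 1.987e-3      # gas constant, kcal mol^-1 K^-1
T = 296.15        # 23 degrees C, the assay temperature

for fold in (2, 3, 4):
    ddG = R * T * math.log(fold)
    print(f"{fold}-fold weaker binding ~ {ddG:.2f} kcal/mol")
```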
Local shifts of the central nucleotides adapt to the U2AF2 12L structure
To view how U2AF2 adapts to different nucleotides at the RRM1/RRM2 interface, we determined three crystal structures of U2AF2 12L bound to Py tracts with various nucleotides at the central position (Figure 3, Supplementary Table S1). To promote crystallization and confirm the oligonucleotide binding register, we included 2´-deoxyuridine (dU) and 5-bromo-dU modifications at the fourth and seventh positions of the U2AF2 12L-oligonucleotide crystal structures as described (7,10,11,47). The U2AF2 12L protein binds the modified oligonucleotides with comparable affinity and specificity as the corresponding RNAs (K_D 65 nM versus 100 nM for modified versus unmodified AdML oligonucleotides and an approximately three-fold preference for U5 over A5; Supplementary Figure S1). Crystallization was facilitated further by using eight-mer oligonucleotides that omit the 5´-terminal uridine (7,12). Well-defined electron density for the eight nucleotides is observed in the documented nucleotide binding sites 2-9 of the open U2AF2 conformation (PDB ID 5EV4, PDB ID 2YH1). Electron density for the 5-bromo modification, as well as distinct, atomic-resolution shapes for the pyrimidine versus purine bases, confirms the binding register for each complex (Supplementary Figure S3). To match PDB ID 5EV4, we numbered the eight bound nucleotides from 2-9, starting at U2 in the second documented nucleotide binding site of U2AF2 12L, as shown in Figure 3. The overall conformations of the protein backbones remained similar (0.1-0.3 Å pairwise RMSD between matching Cα atoms of the C5-, A5- or G5-containing structures when compared to the U5 structure) (Figure 3E). In particular, the polypeptide backbones of an RRM2-proximal, nucleotide-bound region of the inter-RRM linker (residues 248-260), as well as of the modular RRM1 and RRM2 domains, were nearly identical among the structures. A distinct region of the linker (residues 230-247) near the alpha-helical surface of RRM1 was more divergent, consistent with its higher temperature factors and, in some cases, missing residues (Figure 3A-D). Despite differences in the inter-RRM region, the nucleotides bound to the respective RRM2 and RRM1 also shared similar positions (0.2-0.4 Å pairwise RMSD between all atoms of nucleotides 2-4/7-9 of the C5-, A5- or G5-containing structures compared to the U5 structure). However, the central nucleotide substitutions dramatically shifted the local positions of the U2AF2-bound RNA (Figure 3F, Figure 4, Supplementary Movies S1-S3). A cytidine or adenosine (C5 or A5), for which the hydrogen bond groups differ from uridine, rotated ∼25° away from the U2AF2 inter-RRM linker relative to the U5 position. Notably, networks of ordered water molecules filled the resulting gaps and mediated contacts between the extruded cytosine or adenine bases and the protein backbone (Figure 4B, D and Supplementary Figure S3). The six-membered ring of a guanine base at the central position (G5), on the other hand, superimposed with the uracil, and equivalent atoms (U-O4/N3H and G-O6/N1H) maintained similar hydrogen bonds with the protein (Figure 3F, Figure 4A, C).
Interestingly, the adjacent uridine on the 3´ side (U6) also shifted position when purine nucleotides were substituted at the fifth site (Figure 3F, Figure 4). In the U2AF2-bound, all-uridine oligonucleotide, RRM2 and RRM1 loops sandwiched the U6 base. In the presence of the bulky A5 or G5 purines, the downstream U6 rotated ∼25° away from the inter-RRM linker to settle in an alternative binding site, which also is located between the RRM1 and RRM2 loops. To achieve a comparable position of U6 despite the different locations of the A5 and G5 bases, the A5-linked U6 phosphate rotated over the ribose group (Figure 4D, Supplementary Movie S3). Although unique to the A5 nucleotide substitution, we cannot rule out that the neighboring 5-bromo-dU7 modification influenced this conformation of the A5-linked U6 phosphodiester group. Unlike the U5-linked U6 position, no direct or water-mediated U6 contacts with the protein were detected in either purine-containing structure. Instead, several ordered water molecules that mediated U6 contacts with U2AF2 in the U5/C5 structures appeared absent in the presence of the purine substitutions (Figure 4, Supplementary Figure S3). The purine-induced perturbations of the adjacent U6 site, coupled with the shifted position of A5, could account for the subtle differences in U2AF2 binding affinity (U5/C5 > G5 > A5) for the oligonucleotides (Figure 1).
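The pairwise Cα RMSD comparisons quoted above can be reproduced, in outline, by superposing matched Cα atoms of two structures. The sketch below uses Biopython purely for illustration; the file names, chain identifier, and naive residue pairing are assumptions, not details taken from this study.

```python
import numpy as np
from Bio.PDB import PDBParser, Superimposer

# Sketch: superpose two U2AF2-oligonucleotide structures on matching C-alpha
# atoms and report the pairwise RMSD, as for the 0.1-0.3 A comparisons above.
# File names are placeholders; chain/residue selection will differ per entry.
parser = PDBParser(QUIET=True)
ref = parser.get_structure("U5", "u2af2_U5.pdb")
mob = parser.get_structure("G5", "u2af2_G5.pdb")

def ca_atoms(structure, chain_id="A"):
    chain = structure[0][chain_id]
    return [res["CA"] for res in chain if "CA" in res]

ref_ca, mob_ca = ca_atoms(ref), ca_atoms(mob)
n = min(len(ref_ca), len(mob_ca))   # naive pairing; a careful analysis would
sup = Superimposer()                 # match residues by sequence numbering
sup.set_atoms(ref_ca[:n], mob_ca[:n])
print(f"C-alpha RMSD after superposition: {sup.rms:.2f} A")
```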
U2AF2 12L-bound Py tract RNA is dynamic at the central nucleotides
To explore the conformations of the U2AF2-Py tract complex beyond the environment of the crystal structures, we performed all-atom molecular dynamics simulations using Amber (26). The simulations revealed differences in the conformational flexibility of the protein regions. They also demonstrated that interaction with the protein reduced the intrinsic flexibility of the RNA.
First, we ran 2 μs simulations of the U5, C5, G5 and A5 crystal structures, repeated five times each. Each protein-RNA structure was stable (Supplementary Figure S4), and pairwise RMSD plots (Supplementary Figure S5) demonstrated convergence. To quantify the dynamics of residues, we calculated the root mean squared fluctuation (RMSF) for each residue, which is the extent to which a residue fluctuates around the average structure during the simulation (Figure 5). The RRMs were found to be relatively static (Figure 5A). A portion of the linker region connecting the RRMs was flexible in the simulations (residues 236-242, Figure 5B). However, residues 250-255, the linker region bound to the central nucleotide of the Py tract, were static. The U2AF2 12L crystal structures are consistent with the results of the simulations, showing variability and sometimes disorder in residues 236-242 of the inter-RRM linker, whereas residues 250-255 and the RRMs remain similar among known structures (Figure 3A-D, Supplementary Figure S6) (7,12). When a purine was in the fifth position of the U2AF2-bound oligonucleotide, substantially more fluctuation was found at the fifth position than when a pyrimidine was in the fifth position (Figure 5C). The presence of a purine at the fifth position also increased the fluctuation of the adjacent sixth nucleotide.

We also tested whether the conformation of the central nucleotide is related to an intrinsic property of the oligonucleotide. We ran five 1 μs all-atom simulations of the oligonucleotides (U5, C5, A5 and G5) in the absence of the protein. These simulations of the oligonucleotides exhibited substantial conformational fluctuations compared to the oligonucleotides bound to U2AF2 (Figure 5D). Specifically, the pairwise RMSD plots (Supplementary Figure S5) demonstrated no innate preferred conformation for the RNA. These plots compare the conformations sampled across trajectories and are useful for assessing the consistency of the conformations across multiple simulations. Together, these results suggest that the RNA is intrinsically flexible, allowing the central nucleotide to adopt a conformation that accommodates protein binding.
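As a rough illustration of the per-residue RMSF analysis described above, the sketch below computes RMSF from an aligned trajectory. It is not the authors' workflow (the simulations were run in Amber, where cpptraj would typically be used); it assumes MDAnalysis version 2.0 or later, and the topology/trajectory file names and atom selections are placeholders.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import align, rms

# Load a hypothetical Amber topology and trajectory (placeholder file names).
u = mda.Universe("u2af2_rna.prmtop", "production.nc")
protein_ca = u.select_atoms("protein and name CA")

# Align all frames to the average conformation so that RMSF reports
# fluctuation about the mean structure rather than overall drift.
average = align.AverageStructure(u, select="protein and name CA").run()
align.AlignTraj(u, average.results.universe,
                select="protein and name CA", in_memory=True).run()

# Per-residue RMSF over the aligned trajectory.
rmsf = rms.RMSF(protein_ca).run()
for resid, value in zip(protein_ca.resids, rmsf.results.rmsf):
    print(resid, round(float(value), 2))
```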
Structure-guided mutations enhance U2AF2 12L specificity for a central uridine
To test the U2AF2 interactions with the central nucleotide viewed in the structures, we substituted either of the positively-charged K225 or R227 residues with negatively-charged glutamates (K225E and R227E) to nonspecifically reduce the RNA binding affinity. Compared to the wild-type protein, the K225E and R227E mutations reduced the U2AF2 12L affinities for the AdML Py tract and its G5 variant by approximately 20- and 80-fold (Supplementary Figure S2), most likely by general electrostatic repulsion of the phosphodiester backbone. This result supported the observed locations of the K225 and R227 residues at the RNA interface of the open U2AF2 conformation.
We next considered whether the promiscuity of U2AF2 for various nucleotides at the central position of the Py tract could be altered by replacing key amino acids (Figure 6). Since the K225 side chain forms a salt bridge with a phosphoryl group of the A5/G5-containing RNAs, we reasoned that an asparagine at this position would penalize U2AF2 binding to purines at this position more than to pyrimidine-containing RNAs. Likewise, we conjectured that replacing R227 with the shorter side chain of asparagine would disrupt the direct and indirect networks of U2AF2 with the G5 and A5 bases more than with U5 and C5. Third, we predicted that an aspartate substitution of G297 would repel the U6-O2 atom in the purine-bound conformation, thereby favoring U2AF2 binding to U5 and C5. Accordingly, the K225N and R227N variants significantly increased U2AF2 12L discrimination of U5/C5- from G5/A5-containing oligonucleotides (Figure 6A, B and D), by imposing substantially greater penalties for U2AF2 12L binding the purine-containing RNAs (at least five-fold penalties). The G297D replacement also increased the specificity of U2AF2 12L for binding to U5 > C5 > G5/A5 oligonucleotides (in order of preference, Figure 6C, D), by having no detectable effect on the all-uridine oligonucleotide and approximately two-fold penalties for binding the other nucleotide variants. These results demonstrated that single amino acid changes could increase the stringency of U2AF2 for distinguishing the identity of the central Py tract nucleotide.
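The specificity comparisons plotted in Figure 6D, and the statistical test named in the figure legends, can be sketched as follows. The K_D values below are placeholders invented for illustration; the snippet only shows the form of the calculation (an affinity ratio plus Welch's unpaired t-test), not the actual measurements.

```python
import numpy as np
from scipy import stats

# Placeholder triplicate apparent K_D values (nM) for a reference RNA (U5)
# and a central-nucleotide variant (G5); not data from this study.
kd_U5 = np.array([100.0, 95.0, 110.0])
kd_G5 = np.array([210.0, 190.0, 230.0])

# Specificity expressed as in Figure 6D: the affinity for U5 relative to the
# variant, which equals K_D(variant) / K_D(U5).
print("U5/variant affinity ratio:", kd_G5.mean() / kd_U5.mean())

# Two-tailed unpaired t-test with Welch's correction (unequal variances),
# the same style of comparison applied to the binding data in the figures.
t_stat, p_value = stats.ttest_ind(kd_U5, kd_G5, equal_var=False)
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3g}")
```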
U2AF2 interaction sites in human cells agree with U2AF2 12L-RNA binding specificity
To further understand the organization of U2AF2 and the 3´ splice site, we used the enhanced UV crosslinking and immunoprecipitation (eCLIP) assay (35,36,48) to map the RNA interactome of U2AF2 in human erythroleukemia (HEL) cells. The HEL cell line represents a preclinical model for the study of myelodysplastic syndromes and acute myeloid leukemia, which are blood cancers frequently characterized by mutations in splicing factors such as U2AF1. Following U2AF2 immunoprecipitation and 32P labeling of the crosslinked RNA, the immunoprecipitated complexes were separated by denaturing gel electrophoresis (Supplementary Figure S7). We focused on analyzing the region with a molecular weight between 65 and 110 kDa, corresponding to the expected size of U2AF2-RNA complexes. Overall, we identified U2AF2-binding locations in 149,818 splice junctions across the human transcriptome.
As expected, significant peaks for U2AF2 interactions occurred in Py-rich regions upstream of 3´ splice site junctions (Figure 7). To specifically investigate the relationship between U2AF2 binding and the sequence content of 3´ splice site signals, we divided the splice site junctions into three classes based on their uridine enrichment. These included splice sites with poor (0-2), medium (3-5), or high (6-8) numbers of uridines in the zone from -11 to -4 nucleotides upstream of the intron 3´ end (Figure 7A). Sequence logos were generated from splice junctions of the three classes (Figure 7B). Importantly, motif analysis of the high uridine-containing class showed two clusters of approximately two highly conserved uridines (-11, -10 and -6, -5), surrounding a core of less conserved uridines at the central positions (-9, -8, -7), in agreement with the RNA binding preferences of the U2AF2 12L protein and of the ternary SF1-U2AF2-U2AF1 complex (Figures 1 and 2). By comparing the U2AF2 binding signal in each class of splice junctions, we observed that the U2AF2 contacts with endogenous splice sites shifted position depending on the local uridine content. In particular, the interaction peak was broader and more distant from the intron 3´ end for the splice site junctions with few uridines, while the peak was narrowest, strongest and closest to the intron 3´ end for the high uridine class (Figure 7C; for examples of U2AF2 binding on single junctions belonging to the three classes, see Supplementary Figure S8). Furthermore, we observed that a modest increase of U2AF1 levels (OE, see Materials and Methods) specifically affected the contacts with the high uridine-containing class, shifting the maximum of the U2AF2 peak to position -8, thereby matching the core of less conserved uridines at positions -9, -8 and -7 (Figure 7C, bottom panel and Supplementary Figure S8A). The U2AF1-enhanced position of U2AF2 is consistent with U2AF1 stabilization of U2AF2 conformations (6) as well as U2AF1 recognition of the intron-exon junction (49-52). Collectively, these results demonstrated that the U2AF2 binding sites were responsive to the uridine contents and locations within the pre-mRNA splice site signals.
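A hedged sketch of the junction classification described above is given below: it counts uridines in the -11 to -4 window of the intron 3´ end and assigns the poor/medium/high classes. The input format, example sequences, and function name are illustrative assumptions, not the pipeline used for the eCLIP analysis.

```python
from collections import Counter

def uridine_class(intron_3prime_seq: str) -> str:
    """Classify a 3' splice site by uridine content in the -11..-4 window.

    `intron_3prime_seq` is assumed to be the last 11 intronic nucleotides
    (positions -11..-1, ending in the AG dinucleotide), in the RNA alphabet.
    Thresholds follow the poor (0-2), medium (3-5), high (6-8) classes above.
    """
    window = intron_3prime_seq[0:8]          # positions -11 to -4 (8 nt)
    n_u = window.upper().count("U")
    if n_u <= 2:
        return "poor"
    elif n_u <= 5:
        return "medium"
    return "high"

# Hypothetical junctions, for illustration only.
junctions = ["UUUUUUUUCAG", "UCCUGUCUUAG", "CACCGCAGCAG"]
print(Counter(uridine_class(j) for j in junctions))
```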
DISCUSSION
Here, we expand our view of U2AF2-splice site recognition by demonstrating that a relatively static region of the inter-RRM linker contributes to versatile U2AF2-RNA associations through inherent flexibility of the RNA site itself. Local rearrangements of the bound RNA, rather than of the protein backbone, contributed to an innate ability of U2AF2 to accommodate different nucleotides at the center of the Py tract (Figure 3E-F, Figure 4, Supplementary Movies S1-S3). Bulky purines fit the central U2AF2 binding site through adjustments of the oligonucleotide backbone, which in turn shifted the adjacent 3´ uridine (U6) into a distinct binding site. Cytidine or adenosine rotated away from the protein at this inter-RRM site, and instead intermediary water molecules glued the mismatched nucleobases to the inter-RRM surface. Otherwise, the U2AF2 RRMs maintained unperturbed contacts with the surrounding pyrimidines. Prior studies of U2AF2 RRM1/RRM2 bound to noncognate RNAs revealed a variety of changes, ranging from subtle shifts of the side chains and protein backbone to nucleotide rotations and syn/anti-conformer flips (10,11). In particular, we had observed flexible nucleotide conformations facilitating U2AF2 promiscuity at one other site (position 8 bound to RRM1). At this site, a guanosine binds the U2AF2 RRM1 in an unusual syn-conformer (10), or a cytosine shifts to optimize hydrogen bonds with the U2AF2 backbone and side chains (11). A distinct, previously-established means for U2AF2 to fulfill its multifaceted role in 3´ splice site recognition is to rely on its modular architecture of tandem RRMs, which differ in uridine-specificity and switch between 'open' and 'closed' conformations in response to the RNA sequence (4,6,11). Consistent with the sequence-sensitivity of U2AF2 conformations, the uridine contents of the splice sites modulate the U2AF2-3´ splice site binding registers (Figure 7 and (36)).

Figure 6. U2AF2 residues at the RNA interface influence its specificity for the central nucleotide. (A-C) Average fluorescence anisotropy data points and standard deviations from three replicates of the indicated U2AF2 12L mutants titrated into 5´-fluorescein-labeled RNA oligonucleotides. The fitted curves are overlaid. The RNA sequences comprising nine uridines (blue) or its C5, G5 or A5 variants (mustard, salmon, or green) are inset alongside the apparent equilibrium dissociation constants (K_D) and standard deviations. (D) Scatter graph of the ratios of the wild-type or mutant U2AF2 12L binding affinities for U5 to the affinities for the C5 (square), G5 (inverted triangle), or A5 (triangle) variants of the central nucleotide. The K_D's and specificities of the U2AF2 variants binding the G5 and A5 RNAs are estimates due to the very low affinities. Supplementary Figure S2 shows the penalties of the K225E or R227E mutations on U2AF2 12L-RNA binding.
These expanding views of U2AF2 complexes with different oligonucleotides reinforce an emerging theme among ribonucleoprotein structures, which is that the RNA conformation frequently adapts to fit (or is conformationally selected by) the surface of the protein binding site. Beyond U2AF2, syn/anti base flipping enables SRSF2 to recognize either tandem cytosines or guanosines with similar affinities (53). In the structures of Csr/Rsm with various noncoding RNA substrates, rearrangements of bound nucleotides facilitate recognition of the different RNA sequences (54). In another well-studied example, one mechanism for PUF family repeat proteins to bind a large set of degenerate RNA sequences is to eject noncognate nucleotides from the modular RNA binding surface (55). Altogether, these findings highlight the importance of RNA flexibility for proteins to associate with appropriate sites amidst the milieu of cellular RNAs.
Molecular dynamics simulations, starting from the U2AF2-RNA crystal structures, revealed that the oligonucleotides were inherently flexible in the absence of protein, and that the central nucleotides (positions 5 and 6) remain flexible in the U2AF2-bound complex (Figure 5). Although more studies of ribonucleoproteins have focused on the dynamics of the protein than on the RNA components, RNA flexibility clearly is an important contributor to versatile RNA-protein recognition. Several proteins have been shown to select an RNA structure with optimal intermolecular contacts among multiple conformations sampled by the protein-free RNA site (56-58). Indeed, a survey of RNA-binding proteins in the bound and free states implies that nucleic acid movements are a key aspect of protein-RNA recognition (59). In some cases, nucleotides making important contacts increase (rather than diminish) dynamics in the protein complex compared to the free state (57,58). Here, molecular dynamics simulations demonstrated that the Py tract RNAs likewise possessed a conformational repertoire in the absence of protein cofactors. Accordingly, polyuridine lacks a uniform structure in solution and shows the least base-stacking among the nucleotide polymers (60,61). From the ensemble of Py tract RNA conformations, we propose that U2AF2 selects a particular RNA conformation, thereby optimizing the intermolecular contacts with the altered central nucleotide and adjacent uridines. The molecular dynamics simulations further suggest that the central nucleotides remain flexible in the U2AF2-RNA complex, as observed for other RRM-bound RNAs (57,58), and that this flexibility facilitates recognition of alternative nucleotides at the fifth position.
The ability to structurally adapt to diverse splice sites is likely to represent a key functional characteristic of metazoan U2AF2. The transcriptome of human cells offers a vast number of sequence combinations, from which U2AF2 must select the bona fide splice sites during the initial stages of spliceosome assembly. Indeed, transcriptome-wide mapping of U2AF2 binding sites in cells (Figure 7 and (36,62)) demonstrates widespread association of U2AF2 with a plethora of RNA sites comprising various sequences. We have established that structure-guided mutations, including R227N, K225N and G297D at the central site (Figure 6) and D231V at position 8 (10), could artificially increase the uridine-specificity of human U2AF2. These results suggest that the subtle RNA sequence preferences of human U2AF2 have evolved to support the broad identification of a wide range of 3´ splice sites. Yet, accurate identification of the 3´ splice site signals is critical for the fidelity of gene expression. Even relatively 'small', 2-4-fold changes in binding affinities, such as those observed here for U2AF2 binding to the Py tract variants, can evoke relevant changes in gene expression in certain contexts. Specific Py tract mutations that penalize U2AF2 binding by a few fold have been associated with specific diseases, including retinitis pigmentosa and cystic fibrosis (10). Likewise, cancer-associated mutations of U2AF2 that modulate its RNA binding affinities have significant consequences for splicing of pre-mRNA transcripts (12,14). Moreover, a cancer-associated S34F mutation of U2AF1, which affects association with 3´ splice sites to a similar extent as the nucleotide substitutions studied here, in turn alters splicing, 3´ end processing, and translation of transcripts in cells (63-67). Altogether, these studies support the view that U2AF2 transcends a traditional classification as either a 'specific' or 'nonspecific' RNA binding protein, and has critical functional requirements to adapt to a variety of splice sites while serving as a sensitive rheostat for splicing.
We note that many factors beyond the scope of this work contribute to the physiological RNA binding preferences of U2AF2 in cells. Multiple partners work to enhance and regulate U2AF2 conformations and RNA interactions, including U2AF1, SF1, SF3B1 and PUF60/RBM39, among others. Already, the distribution of U2AF2 binding sites observed in CLIP experiments reflects the ensemble of all spliceosome assembly states. Accordingly, when U2AF1 levels increase, the conglomerate of U2AF2 binding sites shifts closer to the junctions for 3´ splice sites with high uridine content (Figure 7). This U2AF1-enhanced position is consistent with the RNA binding preferences of the ternary SF1-U2AF2-U2AF1 complex (Figure 2), conformational stabilization of U2AF2 by the U2AF1 heterodimer (6), and the function of the U2AF1 subunit to direct the ternary complex to the 3´ splice site junction (49-52). Cancer-associated mutations of U2AF1 also influence the binding register of U2AF2-containing splicing complexes relative to 3´ splice site junctions (6,36). Moreover, perturbation of U2AF1, and by extension U2AF2, affects transcription rates and coupled splicing events (68,69). Altogether, these diverse factors converge in the context of coupled gene expression processes to modulate the pre-mRNA sites associated with U2AF2. Resolving how RNA sequence contexts, spliceosome components, cancer-associated mutations, transcription rates, and coupled pre-mRNA processing events influence the U2AF2-RNA conformation for 3´ splice site recognition remains an important direction for future studies.
DATA AVAILABILITY
Data deposition: The coordinates for the U2AF structures have been deposited in the Protein Data Bank, www.pdb.org (PDB ID codes 7S3A, 7S3B, 7S3C for C5, G5 and A5 structures). The U2AF2 eCLIP-seq files have been deposited in the GEO database, https://www.ncbi.nlm.nih.gov/geo/ (GSE195669). The eCLIP-seq files for U2AF2 with OE U2AF1 are available with GEO accession GSE195620 (36).
Is flood defense changing in nature? Shifts in the flood defense strategy in six European countries
In many countries, flood defense has historically formed the core of flood risk management but this strategy is now evolving with the changing approach to risk management. This paper focuses on the neglected analysis of institutional changes within the flood defense strategies formulated and implemented in six European countries (Belgium, England, France, the Netherlands, Poland, and Sweden). The evolutions within the defense strategy over the last 30 years have been analyzed with the help of three mainstream institutional theories: a policy dynamics-oriented framework, a structure-oriented institutional theory on path dependency, and a policy actors-oriented analysis called the advocacy coalitions framework. We characterize the stability and evolution of the trends that affect the defense strategy in the six countries through four dimensions of a policy arrangement approach: actors, rules, resources, and discourses. We ask whether the strategy itself is changing radically, i.e., toward a discontinuous situation, and whether the processes of change are more incremental or radical. Our findings indicate that in the European countries studied, the position of defense strategy is continuous, as the classical role of flood defense remains dominant. With changing approaches to risk, integrated risk management, climate change, urban growth, participation in governance, and socioeconomic challenges, the flood defense strategy is increasingly under pressure to change. However, these changes can be defined as part of an adaptation of the defense strategy rather than as a real change in the nature of flood risk management.
INTRODUCTION
It is a core expectation of European societies that they are able to protect themselves from disasters. Consequently, where flood risks pose serious threats, defense has been a central pillar of resistance. By "flood defense," we mean a specific strategy that aims to decrease the likelihood and/or the magnitude of flooding by keeping water away from people: infrastructural works that aim to resist water, such as dikes, dams, barriers, embankments and weirs, upstream retention, or the provision of more space for the water outside of protected areas (Hegger et al. 2013). With their embankments, dikes, or other fortifications, cities have been symbols of protection (Mumford 1961). Among the different flood risk management (FRM) strategies, e.g., prevention, mitigation, or preparation, flood defense could perhaps be said to best suit the conception of the "society of resistance." However, the place of defense is changing. According to Beck (1992, 2006), today no European society can be described as completely resistant because it faces the risk of not being able to cope with a disaster. We can argue that the concept of a zero risk society has been superseded. According to Luhmann, societies become modern when they stop considering risk as something outside their control, i.e., fate and the past, which determines the future; instead it becomes a probability that must be faced (Luhmann 1993). Rather than merely building higher and higher levels of protection, with increasingly improving safety standards, modern societies must accept the possibility of defense failure. This acceptance of the possible failure of defense and, with it, of the idea of full protection, is accompanied by new policy concepts in the field of FRM. In the contemporary social and political science literature, we speak of vulnerability models (November 1994, 2004), especially in complex urban environments (d'Ercole et al. 1994, De Sherbinin et al. 2007, d'Ercole and Metzger 2009). The concept of vulnerability aims to evaluate the possibility of the disruption or interruption of the functioning and development of a territory. In the 2000s, the resilience concept, especially the part relating to the capacity to absorb, recover, and adapt, advanced a new approach in FRM. Hence, the vulnerability concept has been superseded by the resilience concept. Relating to the notions of social-ecological and evolutionary resilience (Folke 2006, Brand and Jax 2007, Folke et al. 2010, Hegger et al. 2014), the more holistic and overarching concept of resilience introduces the need to cope with risk, adapting human behaviors, and transforming urban societies, leading to the promotion of a variety of strategies for FRM. There is growing recognition of the need for change in the flood defense strategy stemming either from institutional and political discourses, or from the evolving social and economic needs of a resilient society. In this new paradigm, the flood defense strategy is no longer the only solution (European Union 2007).
Despite this changing nature, the specific evolution of flood defense as a strategy has received little analysis from either a social science perspective or a comparative study perspective. The majority of the literature on flood defense consists of articles from fields such as civil engineering, geology, or physical geography and is essentially concerned with managing the hazard, the statistical analysis of the risk, or the (non)robustness of infrastructure, etc. (Hu et al. 2014, Cheetham et al. 2015, Van Veelen et al. 2015). From a social science perspective, some authors have highlighted the technocratic paradigm that has long pervaded flood management institutions (Brown and Damery 2002) and have explored the evolution toward a more sociotechnical variety of FRM alongside more traditional, centrally managed structural and technical measures (Nye et al. 2011).
Taking an empirical and a comparative perspective, we aim to highlight institutional changes and pose the following questions.
Has the defense strategy been replaced by other strategies or forced to adapt? To what extent are institutions in charge of flood defense ready to change flood defense as a strategy? Does it mean that the flood defense strategy has become less dominant among the FRM strategies or that a new approach within the flood defense strategy has appeared?
To address this knowledge gap, we propose to examine specific shifts in flood defense and the result of those changes through an institutional framework. The study presents a comparative analysis of the evolution of the defense strategy in six European countries relying on legal and bureaucratic systems: Belgium, England, France, the Netherlands, Poland, and Sweden.
DATA COLLECTION AND METHODS
The FRM strategies in the six countries were studied within the comparative data collection of the three-year research project, STAR-FLOOD, funded by the European Commission. To be able to compare the changes that have occurred within the flood defense strategy of the six countries, we organized our data collection around the four dimensions of the policy arrangement approach (PAA; Van Tatenhove et al. 2000, Wiering and Arts 2006, Wiering and Crabbé 2006). This allows us to evaluate whether or not change has occurred in four interwoven or interrelated dimensions of a policy arrangement: actors and coalitions, rules, resources, and discourses. The actor dimension refers to the actors and their coalitions involved in the policy domain. The division of resources between them can lead to differences in their ability to influence policy outcomes. The rules dimension refers to the formal and informal procedures for decision making and routine interactions. The policy discourses entail views and narratives, norms, values, and problem definitions.
The six selected northern European countries (Belgium, England, France, the Netherlands, Poland, and Sweden) represent a variety of flood types. Although we do not aim to evaluate hydrological change exactly and specifically, we note that, facing a global increase in the frequency, magnitude, and spatial distribution of floods, the six countries are all influenced by climatic and socioeconomic change; however, they do not react similarly. The changes with regard to the flood defense strategy are also due to other factors. We study specifically internal institutional factors.
The six selected countries also represent a variety of flood policies implemented from the 1980s to the 2010s. Whereas they have long-implemented defense strategies, and generally a common legal tradition and bureaucratic authority, the six countries' policies are based on different types of measures and different responsibilities for state, market, and civil society (see Table 1).
The empirical data used in this paper are based on the country and case study reports written in the STAR-FLOOD research project. Within this project, the researchers of the consortium have based their analysis on extensive interviews with stakeholders (50 interviews per country on average), observations, policy and legal document analysis, and workshops with practitioners. Because of length restrictions, the paper cannot elaborate on all the specificities of the country and case study analyses on which it is based. No graphical or statistical presentation of results from interviews will be provided. However, we will make ample use of concrete illustrations from our extensive research.
A more detailed presentation of the analysis can be found in the six publicly available project reports (Alexander et al. 2015, Ek et al. 2015, Kaufmann et al. 2015, Larrue et al. 2015, Matczak et al. 2015, Mees et al. 2015).
Based on the fieldwork of social scientists and legal scholars, the authors reflected together on how the defense strategy is implemented in their own country. The time period taken into account in the analysis for each country is approximately the last 30-35 years, depending upon the specific flood policy milestones. Taking into account these periods allows observation of patterns of change, and also stability, in the defense strategies within the countries through an empirical, inductive, and comparative research approach structured on the four PAA dimensions (actors, rules, resources, and discourses).
CONCEPTUAL FRAMEWORK: HOW TO EXPLAIN INSTITUTIONAL CHANGE?
Studying change in the defense strategy is particularly interesting because it is often earmarked as the oldest, strongest, most institutionalized, and mostly dominant strategy for FRM in Europe, and somewhat resistant to change. Until now, there has been no theoretical social science paper to systematically evaluate the degree of institutional change in flood risk governance across a range of countries by looking at the core dimensions of institutions (actors-coalitions, rules of the game, division of resources, and discourses).
The body of literature on which we draw our conceptual framework is institutional change theories. Such theories aim to clarify (lack of) societal evolution with arguments on the importance and complexity of institutions. For the purpose of this paper, we have selected one particular theory on institutional change, completed with two others, considering that they form a mainstream strand of literature. First, the long-term policy dynamics-oriented framework on institutional change by Streeck and Thelen (2005) will provide a general framework for explaining institutional change and stability. Further, we add to this framework a more structure-oriented institutional theory on institutional path dependency (North 1994, Levi 1997, Pierson 2000, Greener 2002) and a more policy actors-oriented analysis called the Advocacy Coalitions Framework (Sabatier and Weible 2007). Because the Policy Arrangement Approach combines attention for both actor- and structure-related theories of change and offers a practical analytical framework for describing policy arrangements, we use the PAA to both assess and explain the changes or stability that have occurred in flood defense strategy in each country. In that context, we will refer to the four interrelated dimensions of policy arrangements, presented above: actors, rules, resources, and discourses (Van Tatenhove et al. 2000).
(Table: typology of institutional change, cross-tabulating the process of change (incremental or abrupt) against the result of change (continuity or discontinuity); "breakdown and replacement" denotes abrupt, discontinuous change. From Streeck and Thelen (2005).)
Following the framework of Streeck and Thelen (2005), we first analyze the dynamics of change, i.e., incremental or abrupt, within the flood defense strategy in each of the six countries under study in the last 30-35 years to assess the result of change, i.e., continuity or discontinuity. This analysis is developed for each of the four PAA dimensions presented above. The framework of Streeck and Thelen is stimulating, concrete, and simple enough to address the comparison of flood management strategies. Therefore, it is well suited for in-depth empirical case studies. Moreover, it allows us to test all the options, such as "radical change can lead to a continuous situation," but also "minor change to discontinuity." The second theory that completes the Streeck and Thelen framework is the path dependency approach (North 1990, Pierson 1993, 2000). This refers to policies in which the "preceding steps in a particular direction induce further movement in the same direction" (Pierson 2000:252). This is especially important with regard to flood risk defense because this strategy is dependent on long-term investments in flood infrastructure (dams, dikes, embankments) and their technical and institutional management, including the security of technical expertise embedded in organizations and protected by rules of the game. This leads to high fixed costs and long-term increasing returns, which fosters stability. The path dependency model helps us explain more precisely how and why past decisions encourage policy continuity. Also, Streeck and Thelen (2005) explicitly refer to path dependency, explaining that continuity can come from radical but also minor changes. The historical development of technical flood infrastructure and engineering solutions for flood defense, with their high-cost investments, can make it difficult for policy makers to withdraw from this dominant strategy. As a result of path dependency, it is generally difficult to change policies because institutions are resistant to change and actors may protect the existing model, even if it is suboptimal (Greener 2002). In the pattern of the general Streeck and Thelen framework, path dependency provides explanatory factors created by the legacy of flood defense, which may act to resist change.
The third theoretical analytical approach that deals with change focuses on the role of actors and coalitions: the advocacy coalition framework (ACF; Sabatier and Weible 2007). This framework suggests that certain advocacy coalitions (of politicians, civil servants, scientists, journalists, CEOs, NGOs, etc.) group around a specific set of core beliefs where causations and values on the topic (floods) and in policy (risk management) are shared. These coalitions are formed because different policy relevant actors and different interests are linked to them. Policy change can be explained through the interactions between events in the external environment of a policy domain, for instance economic crises, and the full translation of new ideas within the coalitions (developing new forms of FRM strategies). The inclusion of this final approach is considered to be important because the role of actor coalitions was found to be significant during the different case study analyses.

Specifically, in the case of the flood defense strategy, we evaluate first the result of transformation (discontinuity or continuity) by preferentially analyzing the factors calling for a more radical change and by observing whether those changes are present in all or some of the dimensions of the PAA. Then, we assess the nature of change (abrupt or incremental). The Streeck and Thelen double-entry framework allows us, in a first step, to consider whether the position of the defense strategy is continuous or discontinuous, and then to assess the nature of change occurring in flood defense strategy. We address the question of how these changes can be described and whether they lead to a continuous or discontinuous situation. The central proposition of this qualitative assessment is that the result of change should affect most, if not all, of the four dimensions of the PAA. In a second step, we consider the explanatory factors that lie behind these dynamics of change, and analyze how these changes can be explained.
Changes within the actor dimension
Flood defense is principally in the hands of governmental actors at the national or local level. In all of the countries studied, the national ministries and agencies belonging to the domains of the environment and/or public works are heavily involved in flood defense by setting rules and providing guidance regarding, e.g., dam safety. However, how flood defense is implemented in practice differs across the countries: from the domination of the national level in Poland and France, to responsibilities spread over a number of organizations in a layered structure in the Netherlands, Belgium, and England. In the Netherlands this is combined with a strong sector-based governance, e.g., a specific governmental layer for water management. Sweden is the only country in which flood defense measures are primarily managed and financed at the local level, e.g., by municipalities, firms, individuals, or combinations thereof, depending upon to whom the land belongs and who needs to be protected.
From centralized to decentralized governance
Decentralization is a general trend in governance in Europe, and it is also observable in the management of flood defense.
Particularly in England, responsibilities for flood defense implementation have been redistributed to include local governments.
To a certain extent, in France, there is a transfer of responsibility. National-level authorities can delegate the management of defense infrastructures (especially minor fluvial dikes) when they are not highly prioritized in the national defense strategy. The decentralization trend is, however, less observable in Sweden, Belgium, and Poland. In Sweden, the responsibility of providing flood defense was already in the hands of municipalities at the very beginning of our study period. In Poland, flood defense responsibilities continue to be based at the national level. Belgium forms a somewhat specific case, given that it has undergone an intensive federalization process in the period of analysis. Flood defense responsibilities have been transferred from the national level to the regional level, but this transfer recentralized responsibilities at the regional level (Flanders, Wallonia, Brussels Capital).
From agricultural to environmental interest groups
Over the past 30 years, societal needs and activities have changed considerably, impacting the governance structure of flood defense. In most countries in our research, flood management has long been driven by agricultural interests, except in France, where flood defense has been implemented to ensure urban development. In England, flood defense in certain areas has also been performed by regional internal drainage boards whose major focus lies in the optimization of land for agricultural purposes.
Similarly, in Belgium, the water management of non-navigable watercourses was in the hands of the Ministry of Agriculture (Crabbé 2008). Socioeconomic developments and the sustainability discourse led to a decrease in agricultural interests and an increase in environmental concerns. Consequently, reforms to the water management structure shifted the responsibilities in most countries under study toward environmental departments or were integrated under larger umbrella groupings (spatial development, infrastructure, and environment).
Changes within the rules dimension
Two changes can be observed in the transformation of the decision-making processes within the six countries concerning the formal and informal rules that guide the defense strategy, i.e., an evolution toward a more multirule system and the diffusion of responsibilities.
Toward a multirule approach
The evolution shows a shift from a legal requirement for defense infrastructures, essentially from the state and based on safety standards, to a broadening of legal requirements, i.e., the environment, preparation, or urban planning, stemming from different public authorities (multisector). The introduction of a risk-based approach in FRM leads to increased attention to spatial planning, preparedness planning, and emergency management. As a result, flood risk responsibilities are partially shared among different strategies and partially transferred to other government departments. In particular, this evolution occurs in France and Belgium. In England, spatial planning and emergency management have already been playing a larger role in FRM for some time. In Sweden, where FRM is not a distinct policy area in itself, flood prevention has recently been explicitly incorporated in local legislation because, in Sweden, both spatial planning and flood preparation are primarily a concern for the local level. In the Netherlands, flood risk responsibilities remain primarily with the water sector, which is similar to the situation in Poland.
The multiplication of the above requirements leads to an unclear situation, especially with regard to the safety standards for flood defense measures and also with the emergence of the concept of integrated systemic flood policy. Ambitious national safety standards for defense infrastructures are legally prescribed in the Netherlands and Poland. In the other countries, the defense infrastructure authority or the water managers determine the most appropriate protection level. In France, the rules on legal safety standards can change, depending on the local authorities in charge of the defense structures, and in Sweden, the local rules may define the requirements necessary to obtain a permit for water operations, which includes flood defense.
Furthermore, we observed a change in rules because of the emergence of a more systemic approach to water management and an increasing role of more diverse risk-based approaches relating to floods that lead to the integration of strategies. In England, a holistic approach has been taken, while in Flanders (Belgium) and the Netherlands a multilayered safety approach has been adopted, although this remains at the pilot stage. To date, England is the only country to have legally integrated such diversified policies.
Diffusion of responsibilities
The broadening of the policy domains and the range of actors involved leads to a redistribution of the responsibilities concerning who is responsible for providing protection. Flood protection in the Netherlands and Poland remains a statutory duty of the state. The decentralization process in England shifts responsibilities to lower government levels such as Lead Local Flood Authorities, although (arguably) power continues to be centralized and authorities remain dependent on Local Government Finance Arrangements and must adhere to national FRM policy and project appraisal (Penning-Rowsell and Johnson 2015). In Belgium and France, clear responsibilities remain undefined, as though in a liminal position between the central and the local authorities. In Sweden, municipalities have responsibilities to ensure that citizens are safe.
Flood risk responsibilities are also increasingly shifting toward actors outside the government, i.e., the insurance sector and the citizens. A notable change with regard to citizens' legal responsibility is occurring. In the past decade, in most countries, governmental actors have taken actions to make citizens responsible. This process involves both soft (awareness campaigns, etc.) and hard rules (legislation). In France, for example, the revised Act on Civil Security of 2004 states that citizens are responsible for their own safety. The authorities in England and Sweden mainly have permissive powers to provide flood defense, whereas citizens bear responsibility for their own safety.
Changes in the distribution of resources
Toward new allocation strategies for flood defense investments

Both investments in and the maintenance of flood defense infrastructure place a large financial burden on FRM. However, flood events often tempt policy makers to claim that security has no price. This claim is especially observable in the Netherlands, where the Delta Program is investing 1 billion Euro a year mainly in flood defense measures such as dike enhancements. In other countries, however, investments in flood defense appear highly sensitive to the economic situation and other external factors. In England, the FRM budget, which had been on the rise since 2004, faced significant cuts following the elections of 2010 as a response to the global financial crisis (Bubeck et al. 2013). Between 2015 and 2021, the government plans to invest £2.3 billion in more than 1500 projects to reduce the risks of flooding or coastal erosion across England for more than 300,000 households (Defra 2016). However, there has been criticism of the government being reactive rather than proactive following recent flood events, and a report by the House of Commons Environmental Audit Committee (2016) is sceptical that the Government will reach these targets. In turn, in Sweden, budgets for flood defense have decreased in recent years.
Consequently, we observe a change in the distribution of funding.
In Poland and France, resources for flood defense remain fully funded by governmental levels. Since 2012, the defense infrastructure in England has been partly financed through partnership funding, whereby local authorities, businesses, and other nongovernmental actors at the local level cofinance investments. Flood defense in Sweden is financed by the municipality, the local property owner, or combinations of both.
An important issue in the allocation of flood defense spending is how to decide which areas are most worth investing in. These decisions are partly based on the ability of local actors to lobby for defense structures. In England, investments are, in part, based on the local capacity for partnership funding. A growing number of countries now pay increased attention to the use of cost-benefit analysis (CBA) to support investment decisions. Although CBA is already a common practice in England, it is now also emerging in the Netherlands, the Flemish region of Belgium, and Sweden.
In the Netherlands, CBA has sometimes broadened the scope of flood defense measures, e.g., with side channels and dike relocations at the time of Room for the River, but mostly CBA ends up supporting well-known measures such as dike enforcement. Interestingly, and in contrast, in Flanders (Belgium), the use of CBA is a catalyst in introducing a new approach in flood defense: no longer are large infrastructural works the norm; instead, a combination of local, small-scale defense construction and mitigation measures are being adopted.
New sources of expertise
Overall, in the six selected countries, flood defense is characterized as a highly technocratic strategy, primarily based on expert decision making and technical solutions. For example, knowledge of flood defense in the Netherlands is highly centralized because of the coordination and support from the national government. However, in recent decades, and in most countries selected, the expertise underpinning the flood defense strategy has also originated from different sources. Many national water managers now host internal knowledge institutions and are also supported by a number of external partners, such as universities, consultancy firms, and civil society actors. Most water managers have undergone a shift in their staff composition; whereas they were previously bastions of hydro-technical engineers, new disciplines find their way into risk management, e.g., bioengineers, IT experts, biologists, public administration scholars, or social scientists.
"Defense is not the only solution": a new discourse?
As the fourth dimension of the PAA, the change in discourse is mainly related to a common trend toward a more encompassing management of flood risk that includes the consideration of socially and environmentally sustainable solutions, such as nature-based protection measures (Defra and Environmental Agency 2014).
Despite the rise of the risk-based approach and calls for prevention, mitigation, and preparation measures in the discourse, in practice, flood defense in most countries remains the cornerstone of FRM among the actor, rules, and power dimensions. Whether or not it is a statutory duty, citizens and public actors expect the government to protect them from flooding through structural protection measures, i.e., infrastructure. However, in the discourse and in all of the countries, it is clear that flood defense has changed in nature; the strategy is now (to greater or lesser degrees) increasingly embedded within a multisectoral flood risk management policy. Whereas in the past flood defense was focused on a limited set of technical measures and engineering options, today the actors in charge have connections to policy fields relating to prevention, mitigation, and preparation. Examples include the use of local diking structures accompanied by flood retention zones and building restrictions in less inhabited areas.
An overview of the four dimensions of the PAA on flood defense policy changes
Between 1980 and 2015, flood defense developed from a strategy that was based on a limited set of governmental actors, infrastructural measures, technocratic expertise, and investment strategies to the cornerstone of an integrated framework for FRM. It now shares responsibilities with other actors, adopts alternative solutions, and diversifies the requirements of the legislative framework and partnerships for funding. Interestingly, this evolution is observed in all of the different countries under study, although the process and the result of change differ significantly among them (Table 3).
In reference to Streeck and Thelen's typology, we generally assess the changes observed with the help of the four dimensions (in brief: actors, rules, resources, and discourses). The changes in the actor dimension are external. Decentralization and public participation are generalized in Europe, crossing many different public policies in the fields of ecology and environment. The change in the interests taken into account within the strategy, from agricultural to environmental actor coalitions, refers to a more internal change. In comparing the countries, even when we observe a discontinuous situation in Belgium and the Netherlands, the result of the changes that are occurring maintains continuity with the previous situation in most cases.
The processes of change observed in the rules dimension of the flood defense strategy are also characterized as incremental and the general result of change shows a mostly continuous situation.
In the Netherlands and Poland, there are marginal changes, e.g., further integrating water legislation, but no substantial changes in the nature or role of those rules. In both Belgium and France, first, a transfer from the national authority to other government departments has occurred, potentially using multiple and different legal sources. Second, there is a diffusion of responsibilities toward local authorities and citizens. However, to date, the transfer of legal responsibility has not seemed to lead to radical changes in the legal framework. In Sweden and England, the multiple-rules approach, which integrates and connects to different policy domains at various levels of governance (in Sweden, essentially local authorities), has existed throughout the time period studied.
The changes observed in the resources dimension also occur in a continuous mode, but a shift to decentralized levels is important almost everywhere. In Sweden and England, we observe a decrease in financial resources. In all of the countries, we observe at least a need for the introduction of new financial resources coming from local authorities, businesses, or nongovernmental actors. Consequently, in Belgium and Sweden, new expertise on flood defense has for some time been fragmented over different governmental institutions. However, the flood defense strategy is more puzzling. Flood defense remains highly based on closed and expert-led decision making, where the introduction of new expertise does not imply a discontinuity in the defense policy trend, England being an exception here.
In most countries, we witnessed new discourses on flood risk management, but this does not always affect the core of the flood defense strategy. Thus, the result of changes in discourses can be characterized as discontinuous, slightly in the case of Poland and heavily in the Netherlands, Belgium, and France, where a more integrative discourse on defense strategy is a new and strong phenomenon. In Sweden and England, the discourses on alternatives and changes from a predominant strategy of defense have been in existence for a long period of time and do not appear as a change. All in all, even when we observe all the processes of change in the six countries, the result of change is more continuous than discontinuous.
EXPLAINING CONTINUITY AND DISCONTINUITY: AN INCREMENTAL AND PATH DEPENDENT CHANGE IN NATURE
We analyze the nature of the changes presented above through the lens of institutional theories, to assess whether they are a change "in nature," i.e., real changes that have major and substantial effects in the policy dimensions. We refer to Streeck and Thelen's (2005) definition of an "abrupt change" as a complete and rapid change.
The combination of the general model with the two specific approaches stemming from institutional theories gives us a way to interpret the dynamics in flood defense that have emerged from the empirical material presented above. Thanks to the conceptual insights and the analytical work of the authors of the path dependency literature, we observe two types of policy lock-in effects in the flood defense strategy: increasing returns effects, which are related to the financial crisis, and the stickiness of the institutional pattern, which is related to issues pertaining to the environment and climate change. From the ACF, we then learn how to explain the role of the core beliefs of the public administration coalition to elucidate the stability related to societal demand.
The increasing returns effects influencing stability in the financial crisis context
For most of the countries, the availability of funding remains important in flood defense. As expected from the power of increasing returns (North 1990), we found that, despite the financial crisis, countries continue to spend money on flood defense. However, we can observe a trend of seeking alternative and less expensive nonstructural measures in a few countries (Flanders-Belgium and England). In general, there is strong continuity in development. For instance, in France, although the local Program for Action for Flood Prevention is meant to include different flood management strategies, defense infrastructures still account for more than half of the budget. Furthermore, in the countries that have most recently joined the European Union (EU), such as Poland in 2004, when there has not been sufficient investment from national governments, the EU has provided additional funds to the flood defense sector. In Poland, the problem focuses on the most sensible and effective manner in which the funds should be used. Unfortunately, as an effect of prolonged investments and procedures associated with the division of central or EU funds, money must be spent in a short period of time to be eligible for the next tranche of funding. This situation does not help in making innovative decisions in flood management. Additionally, it leads to the further entrenchment of well-known strategies, such as defense, and provides less opportunity to experience emerging strategies, such as mitigation.
In other cases, however, EU investments have had the opposite effect. For instance, in Belgium and England, EU LIFE-projects have allowed innovative approaches that combine FRM and nature conservation.
In short, reductions in investments and the focus of funding still available for defense infrastructure can, in some countries, partially contribute to the mechanisms of path dependency. Past investments in infrastructures (dikes, levees, etc.) currently imply continuous investments at least to maintain their efficacy in the Netherlands. With the exception of England and Belgium, the limited degree of radical change within the flood defense strategy can be explained by reductions in investments, the concentration of resources in well-known solutions to address the short-term requirements of funders, and the strong coalitions of public authorities or technocratic expert-based knowledge. These create an increasing returns effect that, to date, does not allow a change in the nature of flood defense.
The stickiness of institutional design minimizing the environmental challenge and climate change issues
Pierson (1993, 2000) suggests that, once policies have been designed, they are change-resistant. Their designers have often built them on a pattern of "no alternative," particularly to avoid changes from their successors. Institutions are designed not only to be difficult to reverse in the future but also unattractive to reform. Changes in institutional patterns are very costly both for individuals and within the institutions themselves. We can observe a strong policy lock-in effect within the defense strategy, even if in each country the environmental discourse is increasingly influenced by concerns related to the hydrological impact of climate change and calls for an adaptation of defense strategy.
In Europe, it is clear that the geographical and meteorological context is changing and, consequently, the physical drivers of flooding are reinforcing the calls for diversification of FRM strategies. The role of flood as a shock event is a factor of change.
For instance, in Sweden, the national commission on risk and vulnerabilities with regard to climate change, finalized in 2007, contributed to placing the expected consequences of climate change, such as increased flood risks, on the agenda. Although past flood events assist in leading to changes in the flood risk policy, by acting as internal shocks, more often they are facilitating events, or "windows of opportunity" (Kingdon 1995), that enable the policy to be changed rather than act as an absolute driver for change. In the case studies, the increase of hydrological events does not result in a radical turn of FRM, but gradually transforms it toward fewer structural protection measures and toward a more integrative nonstructural strategy.
There is a spirit of change toward a more system-based approach because of the growing importance of sustainability, environmental values, and the possibilities of creating solutions by working with nature (Defra 2014, Wilkinson et al. 2014). In England, the Making Space for Water policy (2004) aimed to develop such a comprehensive, integrated, and forward-thinking strategy for managing future flood risk and integrating these activities into an overall approach to managing flooding more generally. The same tendency exists in the Netherlands with the Room for the River policy. Even if awareness of climate change and the value of environmentally friendly approaches are increasing as a response to the consequences of flooding, the engineered infrastructure of protection remains a dominant response. In the Netherlands, the strong tendency toward nature-friendly measures and room for the river projects has, over the last few years, died down again, and recent governmental priorities are back to "safety first." In general, we observe a strong path dependency in the choices of flood management for the traditional defense strategy. The gap between the national institutional discourses, through expertise or influential reports, and local implementation in terms of distribution of resources and rules, explains why, even if the process of change seems radical, the result of change is mostly continuous. For instance, in England, an influential Institution of Civil Engineers report (2001) suggested that the country should no longer continue to rely solely on flood defense and that natural processes could be better used to manage flood risk. The report reinforced the continued move toward the desire for a more sustainable flood risk management that can only be achieved by better working with the natural system. It represented an acceptance of the fact that floods cannot be prevented and that communities at flood risk must learn to live with flooding. Nevertheless, in the English Hull case study, where past flood events include the 1953 and 2007 floods, the physical setting has been a key factor in shaping the approach to land drainage and hard-engineered defenses, without which development in the area would not have been possible. In many cases, arguments grounded in local realities (the need for development, specific geographical patterns, and increased awareness of risk among local stakeholders) could not drive the change toward a system-based approach.
The stickiness of flood defense is explained by the levee effect (Burby 2000) or the escalator effect (Parker 1995). Investments in defense infrastructure stimulate human spatial development in flood-prone zones, which in turn forces water managers to continuously invest in the maintenance and further development of this defense infrastructure (White 1945, Bubeck et al. 2013). This levee effect also appears as a significant explanatory factor in the case studies investigated in this research.
The core beliefs of the technocratic expertise coalition
The advocacy coalition framework explains that actors involved in policy gather in coalitions joined by specific normative and scientific core beliefs and policy beliefs (Sabatier and Jenkins-Smith 1993) that want to influence the political and policy order. In general, flood defense doctrine and strategy depend entirely on state administrations or agencies. It is a highly expert-based strategy and government-driven policy. The core beliefs are especially strong because, as fundamental reasons to be committed to in policy, they are very unlikely to change (Sabatier and Weible 2007, Sabatier 2014). The flood defense strategy is traditionally framed by public administration, and especially by the national level. Except in the cases of Sweden (the importance of the local administration) and England (the multiactor system), in each of the countries under study, the national administration is predominant in expertise. Traditional actors in flood defense share a common core belief system with regard to the importance of technocratic expertise. This helps to explain the stickiness of the technocentric policy regime and indicates why the transition in power remains small. Even when the spectrum of actors is increasing, there remains not only a lack of integration of the external needs from other government departments, e.g., spatial planning or emergency response, but also a lack of civil society involvement in many countries.
The actor coalition organized around a focus on technocratic expertise gives little room for a window of integration for nongovernmental actors such as NGOs or private companies, public consultation, and participation in decision-making processes in flood defense. We interpret this limit of integration as an explanation of the resistance to change. For example, flood defense investments in Poland sometimes involve public participation procedures, but more often, they result in controversies and protests. The protests advocate NIMBY (Not In My Backyard) sentiments, and projects disregard people's interests because of the tradition that decisions should be undertaken by experts. Environmental NGOs attempt to fill this gap in public participation; however, they are perceived by the state administration and services as brakes on all action. In those countries where there is little openness in the decision-making process, the dominant position of the defense strategy is robust and explains the resistance to change. Countries where there is more openness of the political system and less dominance of flood defense show interesting examples of public participation, and illustrations of more radical changes. For example, in the northern England town of Pickering, a project to address local flood risk involved the formation of local focus (competency) groups. The project demonstrated enhanced stakeholder participation through introducing the concept of collaborative coproduction of knowledge (Lane et al. 2011). In 2007, the coproduction of knowledge model was practically tested in the town, the output of which was so successful that it resulted in the implementation of viable local solutions to flooding (Whatmore and Landström 2011).
CONCLUSION
Changes in risk perceptions, hydrological impacts from climate change, growing urban expansion, and participation challenges have increasingly put pressure on the traditional flood defense strategy. This paper explores and compares the changes the flood defense strategy has undergone in six European countries. We ask whether the strategy is shifting toward a discontinuous situation or not, and whether the process of change is in nature more incremental or radical.
In each country, the defense strategy is an outcome of governance arrangements, which are characterized by a specific set of ingredients that consist of actors, power and resources, rules, and discourses. These arrangements are challenged by a number of developments: administrative decentralization, financial constraints, the democratic deficit, and environmental, developmental, and spatial challenges. This dynamic influences the defense strategy, which, in all countries, is somehow becoming much broader and more open, creating more room for local, private, and individual responsibilities (as opposed to concentrating power and resources in technocratic state hands) and promoting a discourse of more diverse modes of protection.
However, we indicate that changes within the flood defense strategy are very heterogeneous among and within countries. Some of the countries observed have in the last 30-35 years clearly been moving toward a path of change in terms of actors, rules, discourses, and resources (Belgium or England), whereas others are shifting in one or two of these dimensions only (Sweden, the Netherlands, France, Poland). In all countries, the change taking place in the flood defense strategy has not led to a discontinuity in FRM but has occurred incrementally. Defense measures remain a founding principle of FRM in every country and the first method of protecting populations and human activities. With the exception of England, which has for a long time diversified FRM, the defense strategy remains dominant. However, it is complemented by other measures to provide a more efficient and effective policy, e.g., spatial planning measures and disaster management. Hence, we can conclude that the rise of the FRM discourse has not led to a replacement of flood defense as a dominant strategy. Rather, it has shifted its position from a solo strategy to a central strategy within the FRM framework.
The stickiness of the flood defense strategy can be explained by factors of path dependency. This path dependency can be found both in the actors, rules, and resources dimensions. Strongly established actor coalitions, a solid institutional design centered around flood defense, and sunk costs of flood defense investments made in the past, hamper a radical shift toward new flood risk strategies.
In conclusion, we assert that the classical role of flood defense remains dominant at a general level; at the least, it is a cornerstone in flood management. However, the position of flood defense is very gradually shifting. Changes can be defined as part of an adaptive strategy of "resilience as resistance" rather than as a real change in nature toward diversified flood risk management, given that it is often promoted in a broader understanding of resilience and integrated flood management.
Table 1. Flood defense measures in the six selected countries.
Table 2. The process and result of change typology.
Flood defense is traditionally based on technocratic decision-making processes, and although this assertion still holds true today, a broadening of the actors in decision making can be observed. In each country, examples can be found of flood defense planning that include nongovernmental actors in decision making, e.g., the involvement of a nature conservation NGO in the development of the Sigma Plan in Belgium, cooperation with the World Wide Fund for flood risk mapping in Poland, and many examples in England where the involvement of nongovernmental actors is now the norm. With the exception of England, in most countries, this engagement mainly involves organized stakeholder groups without intensive forms of citizen participation.
Table 3. Main outcomes from the dynamics of change for each country.
Recent Advances and Review on Treatment of Stiff Person Syndrome in Adults and Pediatric Patients
Stiff Person Syndrome (SPS) is one of the rarest autoimmune neurological disorders, which is mostly reported in women. It is characterised by fluctuating muscle rigidity and spasms. There are many variants of SPS, these include the classical SPS, Stiff Leg Syndrome (SLS), paraneoplastic variant, gait ataxia, dysarthria, and abnormal eye movements. Studies have shown that the paraneoplastic variant of SPS is more common in patients with breast cancer who harbour amphiphysin antibodies, followed by colon cancer, lung cancer, Hodgkin's disease, and malignant thymoma. Currently, the treatment for SPS revolves around improving the quality of life by reducing the symptoms as far as possible with the use of GABAergic agonists, such as diazepam or other benzodiazepines, steroids, plasmapheresis, and intravenous immunoglobulin (IVIG). There have been random clinical trials with Rituximab, but nothing concrete has been suggested. A treatment approach with standard drugs and cognitive behavioral therapy (CBT) seems to be promising.
Introduction And Background
Stiff Person Syndrome (SPS) dates back to 1956, when Moersch and Woltman first described the tightness of the back, abdominal, and thigh muscles in 14 patients. They further conducted a study over a period of 32 years to conclude their findings of progressive, fluctuating, rigid, and painful spasms that lead to a wooden man appearance as SPS [1]. Almost a decade later, Howard first reported the use of diazepam, which gave relief to SPS-associated symptoms [2]. Major benchmarks were achieved in 1988 when anti-glutamic acid decarboxylase (anti-GAD) antibodies were discovered in SPS, and consequently, corticosteroids were used to manage SPS symptoms. The results were promising and, hence, it was put forth as a new treatment modality. In the past few decades, extensive research on plasmapheresis, intravenous immunoglobulin (IVIG), and various antibodies allowed their introduction in the management of SPS. The link between anti-amphiphysin, anti-gephyrin, anti-GABA A receptor-associated protein (anti-GABARAP), and paraneoplastic SPS was also discovered [3][4].
The exact pathophysiology of SPS still remains unclear, but the widely accepted theory is that of the involvement of anti-GAD, which targets a group of cytoplasmic enzymes involved in GABA synthesis in the brain and spinal cord [5]. There are classically two isoforms of GAD: GAD65 and GAD67. The former is associated with SPS, diabetes mellitus, cerebellar ataxia, and limbic encephalitis [6][7][8].
The incidence of SPS is very rare and the prevalence of the disease is one in a million [9]. SPS cases are difficult to diagnose owing to their rarity and, hence, about 60% of the cases get diagnosed only because of the presence of anti-GAD65 in the blood [10]. The GAD and amphiphysin are both presynaptic autoantigens while GABARAP and gephyrin are postsynaptic autoantigens [11][12][13]. In SPS, there is no structural damage seen to the GABAergic neurons and the pathology is presumed to be due to a pharmacological blockade. There are no neurological symptoms seen in SPS, besides an increase in muscle tone. This is backed up by the normal post-mortem findings and improved symptoms with immunotherapy [14][15]. Major achievements that have contributed to SPS research are as given in Figure 1.
Clinical presentation
SPS is a rare disorder and, therefore, a neurologist may encounter just one or two cases during his/her entire clinical practice. Patients may have an insidious onset with classical findings being episodic aching and stiffness of the axial muscles slowly progressing to proximal muscles. As the disease progress, the patients may find it difficult to carry out their day-to-day activities. Clinical symptoms present themselves at a mean age of 41.2 years (range: 29-59 years). Neonatal cases are also reported very rarely.
The common features seen in SPS include:
1. Stiffness starting in the trunk and progressing to the abdomen and lumbar region. Hyperlordosis due to the episodic aching and stiffness of the lumbar spine is a diagnostic hallmark of SPS [16].
2. The stiffness progresses to other muscles in the body, for instance, progression to the thorax muscles causing breathing difficulties. Facial muscle involvement gives an emotionless, mask-like appearance [15].
3. Painful spasms are elicited by triggers, predominantly auditory or tactile in origin, and are similar to those observed in tetanus.
4. Joint dislocations and fractures have been observed in some cases with the sudden onset of spasm.
5. Normal sensation, motor function, and intellect are present.
Continuous muscle fibre activity on EMG and anti-GAD are pathognomonic of SPS. Anti-amphiphysin, anti-GABARAP, and anti-gephyrin may be present in the patient's serum or CSF in GAD-negative patients. For more clarity, the clinical features of SPS are summarized in Figure 2.
Review
The prime focus in SPS is aimed at giving symptomatic relief to the patient and improving the quality of life. Due to the rarity of the disease, there are limitations in the quality of treatment options that are available. The past few decades have thrown some light on various approaches for reducing the spasticity and rigidity of muscles in SPS. The discovery of anti-GAD proved to be the most important pathognomonic finding in SPS.
Over the years, treatment modalities for SPS have included benzodiazepines and baclofen as the first line of drugs followed by IVIG, plasmapheresis, immune modulators, and Rituximab. IVIG and plasmapheresis are either used alone or in combination in refractory cases.
Corticosteroids are used as monotherapy or in combination with other drugs for SPS. However, their efficacy has not been established in any clinical trials. The paraneoplastic variant of SPS, where stiffness is localized to the arms and legs, makes up just 5% of SPS cases. Classical SPS patients respond well to treatment, but in about 10% of cases, sudden deaths occur due to autonomic dysfunction [16]. Repeated spasms or sudden withdrawal of medicine may lead to autonomic dysfunction, resulting in sudden death [17].
Benzodiazepine as first line drug
Benzodiazepines are considered as the first-line treatment in patients diagnosed with SPS. Diazepam, being a GABA A agonist, is not only used as an anticonvulsant but is also used in SPS management owing to its profound muscle relaxant property. A divided dose of 5-100 mg of diazepam or clonazepam (divided dose 1-6 mg) is given by gradually increasing the dose over time [17]. The administration of higher doses at the beginning of treatment may make patients susceptible to dangerous adverse effects, including respiratory depression along with drowsiness and dysarthria.
Other GABAergic drugs
Other drugs, such as gabapentin, tiagabine, valproate, and levetiracetam, have been used for reducing the SPS symptoms. Vigabatrin was used in the past but has now been discontinued due to its probable side effect of causing visual field constriction. Levetiracetam (2000 mg) was tested in a single-blind, placebo-controlled trial in just three patients and was shown to reduce the symptoms of SPS [18].
Oral baclofen vs intrathecal baclofen
Baclofen is mainly used orally, along with diazepam, as a first line treatment for its GABA B agonist activity to manage spasticity. Due to its low CSF bioavailability, intrathecal baclofen (50-800 µg/day) has been used to treat severe spasticity, which has shown significant improvement in symptoms of SPS. However, utmost care must be taken as chances of catheter infection, catheter leakage, pump failure, and, in some cases, death may occur due to autonomic failure [19][20][21].
Treatment with plasmapheresis over intravenous immunoglobulin -the better approach
As per the European Federation of Neurological Societies (EFNS), IVIG (2 g/kg over two to five days) should be reserved for patients who have no symptomatic relief after the use of diazepam and/or baclofen and have a severe disability in carrying out daily activities [22]. A randomised, double-blinded, placebo-controlled, crossover trial on patients treated with IVIG showed improvement in their symptoms, with a significant decrease in stiffness and a decrease in GAD autoantibody titres after administration of IVIG [23].
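As a rough illustration of the course size implied by the EFNS recommendation above, the following minimal sketch divides the 2 g/kg total over two to five days; it is arithmetic only, not dosing guidance, and the 70 kg body weight is a hypothetical example.

```python
# Illustrative arithmetic only, not dosing guidance: splitting the 2 g/kg IVIG
# course mentioned above over 2-5 days for a hypothetical 70 kg patient.
weight_kg = 70                    # hypothetical body weight
total_dose_g = 2 * weight_kg      # EFNS course: 2 g/kg in total
for days in (2, 3, 4, 5):
    print(f"{total_dose_g} g over {days} days -> {total_dose_g / days:.0f} g/day")
```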
IVIG is usually safe but has higher chances of adverse reactions as compared to plasmapheresis, ranging from mild to severe in patients with IgA deficiency and, hence, is contraindicated in them. On the other hand, plasmapheresis therapy has shown promising results in 56% of patients registered in the study approved by Johns Hopkins Hospital (JHH), where first-line treatment had failed [24]. Studies have shown that plasmapheresis is well tolerated, with adverse effects seen in just 4.75% of patients receiving it [25].
Treatment approach based on presence of GABARAP, GAD, GlyRα1, and amphiphysin antibodies
It is observed that the anti-GAD autoantibodies have been associated with involvement of the trunk, abdominal, and limb muscles. However, 80% of SPS patients who tested positive for amphiphysin have shown a strong association with rigidity in cervical muscles and were paraneoplastic [26]. In recent times, immense research has been done to identify the autoantigens. It has provided evidence that GABARAP, a 14-kD protein localized at the postsynaptic region of GABAergic synapses, is targeted by antibodies that inhibit GABA A receptor expression in about 65% of SPS patients. Such patients have responded better to IVIG as opposed to high doses of GABA-enhancing drugs that cause undesirable adverse effects [27].
Patients with amphiphysin antibodies are known to respond better to steroids, plasmapheresis, or treatment of the primary cause (e.g. breast cancer), while those with anti-GAD responded well to IVIG, diazepam (37 mg/day), and clonazepam (4 mg/day) [28][29]. Patients with GlyRα1 antibodies respond better to immunotherapies than patients with GAD65 immunoglobulin.
Promising prospects of Rituximab
Rituximab, a monoclonal antibody that binds to the B-lymphocyte cluster of differentiation (CD) surface antigen, has been tried as an effective drug to manage SPS. It is administered as at least two doses, each of 350-375 mg/m² infusion, with a spacing of seven to 14 days, or as four weekly infusions, which have resulted in a substantial decrease in the severity of symptoms [30].
After the failure of benzodiazepines and monthly IVIG treatment, marked gait improvement and ambulation with minimal assistance were achieved after the administration of two doses of rituximab, each of 500 mg/m², spaced over 14 days [31]. For relapse cases with anti-GAD positivity in the serum or CSF, repeat doses six to eight months later have been reported to be favourable [32].
Though very few papers have reported the effective use of rituximab, it should still be considered as an alternative treatment for patients with SPS when the treatment with benzodiazepines and other conventional antispasmodic immunotherapies have failed to produce the desired effect [33].
Cognitive behavioral therapy
Muscle stiffness gets exaggerated by anxiety, as it is an autonomic physiological symptom. A study showed that about 44% of the patients develop severe motor symptoms due to their anxiety [34]. A case study was conducted on an SPS patient who underwent five weeks of CBT. The results were promising, as evidenced by the substantial decrease in anxiety, improvement in self-confidence, and lessening of stiffness and rigidity [35].
Pediatric approach in management of SPS
Since SPS is a very rare disorder in adults and manifests later in life, diagnosis of SPS at a pediatric age is very challenging. It can closely resemble tetanus in presentation and thus often leads to misdiagnosis. Tetanus follows an acute course with recovery in a few weeks, while SPS is a chronic disorder with varying degrees of disability, which does not improve over time [36]. Though unusual features like mild trismus and blepharospasm point to tetanus, the overall time course should be taken into account and other clinical features should be ruled out to confirm the diagnosis of tetanus [36].
The pathophysiology of childhood SPS is still unclear as compared to that of adults. Childhood SPS often demonstrates GlyRα1 mutation. Lately, there has been a strong correlation between striatal lesions and childhood SPS, in contrast to spinal and brain lesions in adult SPS [37]. Most children with SPS also have negative anti-GAD and exhibit acute onset with a transient benign course [38]. They may also be associated with psychiatric disorders but frequently go unnoticed. No prospective clinical study has been carried out to outline specific modalities targeting the pediatric group due to the limited data.
Neonates may also present with SPS immediately after birth. The clinical features include an exaggerated startle response, rigidity, and acquisition of flexed fetal position. The hallmark symptom is flexor spasm in response to a light tap on the nose. If left untreated, it leads to sudden death in sporadic cases due to severe spasm [39]. Delayed motor milestones with low intelligence have also been observed [40][41].
Benzodiazepines, the classical first line drugs for SPS, are used for treating childhood SPS as well. Benzodiazepines given intravenously, along with IVIG, have shown gradual significant improvement [42]. However, due to the limitations of insufficient work in this field, nothing conclusive can be derived and more research in this field is required.
Conclusions
SPS is a rare disorder and is very difficult to diagnose. With timely recognition of the disease and prompt treatment, the quality of life of SPS patients can be improved. Though the first line of drugs for SPS is benzodiazepines and baclofen, their dose-related adverse effects are of major concern. Intrathecal baclofen is a better alternative, but care should be practiced to avoid complications, such as infection via the catheter. An improved clinical study focusing on combination therapy for SPS may prove beneficial. A combination therapy of benzodiazepines with CBT and IVIG or plasmapheresis, depending on the type of antibody, can be chosen for managing SPS. Little data is available on the pediatric onset of SPS, besides a few handpicked case reports, making it hard to be conclusive on an effective treatment option. In general, research on SPS is very limited, largely owing to the rarity of the disease. Therefore, more research should be done in this field, which may in turn help protect patients, although low in number, from the debilitating effects of SPS.
Conflicts of interest:
The authors have declared that no conflicts of interest exist.
IMPACT OF COVID-19 ON ECT PRACTICE IN QATAR
There is a paucity of electroconvulsive therapy (ECT) utilization surveys from the Arabian Gulf region and none available from Qatar. There is no literature available on the impact of the Coronavirus Disease 2019 (COVID-19) pandemic on ECT provision. ECT is a lifesaving treatment in psychiatric practice requiring anesthetic support, and there were concerns that redeployment of anesthetists due to the COVID-19 pandemic might have a comparatively bigger impact on the provision of ECT. These concerns stem from the fact that psychiatric patients often get discriminated against in health care systems, largely due to stigma and the belief among healthcare providers that psychiatric illness is somehow not as serious as other types of medical or surgical illness. In this brief report we present pre-COVID ECT utilization from Qatar. We also report findings on ECT utilization during COVID-19 and compare changes with other elective and non-elective surgeries. ECT provision was down by 40% during March to August 2020 in our setting. The decline in ECT provision was comparable to other elective and non-elective surgeries.
INTRODUCTION
Coronavirus Disease 2019 (COVID-19) was declared a public health emergency by the World Health Organization in March 2020. The State of Qatar confirmed its first positive case on 29th February 2020. By July, Qatar had one of the highest numbers of COVID-19-positive patients per million population (COVID19 Home, n.d.). The COVID-19 pandemic has posed unprecedented challenges for healthcare delivery due to lockdown measures which were implemented to limit the spread of infection. The Ministry of Public Health in Qatar advised minimizing direct contact with patients for nonurgent care. These restrictions had a huge impact on the provision of psychiatric services. All routine outpatient clinics, daycare services, and community outreach services were suspended as part of the containment strategy. Qatar's only psychiatric inpatient hospital that receives acute admissions from all over the country was designated as a non-COVID-19 site. Similar measures were put in place by other non-psychiatric general and specialist hospitals in the State of Qatar. Hospitals suspended non-urgent care and elective procedures in an effort to minimize exposure by staff and patients. More importantly, anesthetists were redeployed to COVID-19-related intensive care facilities due to high demand at the time. Concerns were raised at the outset of the pandemic that ECT might not be prioritized by policy makers and that resource allocation decisions might have adverse consequences for patients with psychiatric disorders in need of this lifesaving treatment (Espinoza et al. 2020). In this paper we present the findings of the impact of COVID-19 on ECT provision in Qatar.
IMPACT OF COVID-19 ON ECT UTILIZATION
The State of Qatar is a peninsula situated halfway down the western coast of the Arabian Gulf, bordered to the south by the Kingdom of Saudi Arabia. It is one of the world's wealthiest nations in terms of per capita GDP and has a population of 2.7 million. Qatar has a predominantly state-funded mental healthcare system (Saeed et al. 2020). ECT is only offered in the state-funded hospital. The state recognized the need to adapt quickly so that patients continue to receive a range of psychiatric services, including ECT. The provision of ECT was prioritized as a lifesaving treatment with appropriate resource allocation. The mental health services adapted the general recommendations for infection control, in addition to modifying anesthesia protocols in collaboration with the infection control and anesthesia departments, for the safe provision of ECT.
The six months which have elapsed since the beginning of the pandemic appear to be a reasonable observation period for a first look into the actual data on ECT utilization during this period. We analyzed the aggregate data on the number of people who received ECT during six months of 2020 and compared it with data from 2019. Permission to publish these anonymized aggregate data was granted by the hospital directors of the corresponding mental health services. No patient records were accessed, and hence IRB approval was not required.
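For context, the year-on-year comparison described above is simple arithmetic on monthly counts; a minimal sketch is given below, with illustrative (hypothetical) counts rather than the hospital's actual figures.

```python
# Minimal sketch with hypothetical monthly counts (not the actual service data):
# computing the relative change in ECT provision between March-August 2019 and 2020.
ect_2019 = {"Mar": 10, "Apr": 12, "May": 9, "Jun": 11, "Jul": 10, "Aug": 8}
ect_2020 = {"Mar": 7, "Apr": 5, "May": 6, "Jun": 7, "Jul": 6, "Aug": 5}

total_2019 = sum(ect_2019.values())
total_2020 = sum(ect_2020.values())
decline_pct = 100 * (total_2019 - total_2020) / total_2019
print(f"ECT provision change: -{decline_pct:.0f}%")  # the report cites a ~40% decline
```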
DISCUSSION
ECT provision during COVID-19 was down by 40% in our setting. The reason for low utilization during this period was fewer outpatient referrals of patients with treatment-resistant depression. ECT was offered to inpatients where a rapid definitive response for the emergency treatment of depression was needed, which included patients with high suicidal risk or severe psychomotor retardation and associated problems of compromised eating and drinking and/or physical deterioration. The decline in ECT provision during the COVID-19 pandemic was comparable to other elective and non-elective surgeries. It is important to note that there was a decline in overall admissions (9-75%) in non-psychiatry specialty care hospitals in the State of Qatar during the COVID-19 era (March 2020), when compared to January 2020 and March 2019. A decline in both elective and non-elective surgeries was observed. A decline of 9-58% was observed in admissions for acute appendicitis, acute coronary syndrome, stroke, bone fractures, cancer, and live births, while an increase in admissions due to respiratory tract infections was observed. Sharp declines in Emergency Department (ED) visits have been reported in both general and specialty hospitals (Butt et al. 2020), but there is no data on mental health presentations to the ED or overall admissions to psychiatric facilities during this period; however, a rise in local COVID-19 cases was associated with an increase in demand for and access to the national mental health helpline and telepsychiatric services in Qatar.
There is a paucity of ECT utilization surveys from the Arabian Gulf region and none available from Qatar. Our 2015-2019 pre-COVID data on ECT show low utilization rates in Qatar. The reasons for low utilization are multifactorial. In our setting, the low utilization in recent years has been attributed to the introduction of clinical practice guidelines limiting the indications and severity of illness where ECT can be used. Low utilization in the Arabian Gulf region has also been attributed to stigma associated with mental illness in general and ECT in particular (Zolezzi et al. 2018, Elzamzamy & Wadoo 2020). The portrayal of ECT in the media has perpetuated its negative image and increased the stigma (Okasha 2007). Many countries in the region have called for a change in the name of ECT, as they believe it is not only misleading but perpetuates the stigma (Okasha & Okasha 2014). Persistent and pervasive stigma dissuades patients and their families from considering ECT as a treatment option. Other important factors that contribute to underutilization are psychiatrists' lack of training and exposure to the procedure, and underemphasis of ECT training in residency programs (Dinwiddie & Spitz 2010). Research has shown that attitudes toward ECT become markedly more favorable with actual exposure and experience (Szuba et al. 1992).
Our preliminary findings are reassuring, as the decline in ECT utilization was comparable to other elective and non-elective surgeries. The possible reasons include well-resourced, state-funded mental healthcare; appropriate resource allocation during COVID-19; and pre-COVID low ECT utilization rates. COVID-19 has turned a spotlight on mental health. We hope the pandemic can give a new impetus to achieving parity between physical and mental health.
Antimicrobial peptide melittin against Xanthomonas oryzae pv. oryzae, the bacterial leaf blight pathogen in rice
Xanthomonas oryzae pv. oryzae causes a destructive bacterial disease of rice, and the development of an environmentally safe bactericide is urgently needed. Antimicrobial peptides, as antibacterial sources, may play important roles in bactericide development. In the present study, we found that the antimicrobial peptide melittin had the desired antibacterial activity against X. oryzae pv. oryzae. The antibacterial mechanism was investigated by examining its effects on cell membranes, energy metabolism, and nucleic acid and protein synthesis. The antibacterial effects arose from its ability to interact with the bacterial cell wall and disrupt the cytoplasmic membrane by making holes and channels, resulting in the leakage of the cytoplasmic content. Additionally, melittin is able to permeabilize bacterial membranes and reach the cytoplasm, indicating that there are multiple mechanisms of antimicrobial action. A DNA/RNA binding assay suggests that melittin may inhibit macromolecular biosynthesis by binding intracellular targets, such as DNA or RNA, and that those two modes eventually lead to bacterial cell death. Melittin can inhibit X. oryzae pv. oryzae from spreading, alleviating the disease symptoms, which indicates that melittin may have potential applications in plant protection.
Introduction
Bacterial leaf blight of rice, caused by Xanthomonas oryzae pv. oryzae, is a destructive bacterial disease of rice in growing regions worldwide. As a Gram-negative bacterium, this plant pathogen can cause vascular disease by producing yellow green spots on the leaf tips and edges, resulting in gray to white lesions along the leaf veins, which severely reduces rice quality. The disease incidence ranges from 70 to 80 %, leading to significant crop damage (Basso et al. 2011; Lee et al. 2005). Currently, control relies mainly on chemical pesticides. However, their effects on long-term environmental pollution and carcinogenic effects on humans and other animals limit their future use (Daoubi et al. 2005). Thus, the development of new antimicrobial resources with reduced negative environmental impacts is urgently needed to replace the traditional synthetic chemical pesticides used in plant protection.
Antimicrobial peptides (AMPs) are important host defense molecules involved in innate immunity. To date, almost 2100 peptides with antibacterial activity have been discovered from different species (http://aps.unmc.edu/AP/). They are small (∼10-50 residues), generally amphipathic molecules, and most of them contain cationic and hydrophobic residues in elevated proportions. Natural AMPs exhibit a broad activity to directly kill bacteria, yeasts, fungi, viruses, parasites, and even cancer cells. These activities are diverse and specific to the type of AMP (Zhang and Gallo 2016). The use of AMPs as novel antibiotics in medical applications has been proposed and widely accepted for a long time. Although there are numerous models to explain their mechanism of action, ranging from pore formation to general membrane disruption, in fact it is a complicated interaction between different AMPs and different microbial membranes that governs the membrane selectivity of AMPs (Lee et al. 2016). Besides their use in medical applications, AMPs have possible roles as agricultural pesticides for plant disease control because of their short sequences, broad antimicrobial spectra, and diverse sources (Montesinos 2007). Moreover, their mode of action, mainly targeting the microbial cell membrane directly, is thought to reduce the risk of resistance development in microbial populations. AMPs have been reported as candidates for plant protection against bacterial and fungal pathogens. Until now, several natural AMPs, such as cecropin (silkmoth), and some modified AMPs have been reported in vitro and ex vivo (on detached leaves or fruits) against plant pathogens (Alan and Earle 2002; Coca et al. 2006; Zeitler et al. 2013). However, almost no effective AMPs have been reported against X. oryzae pv. oryzae, the most important bacterial pathogen.
Melittin, the main component of European honeybee venom from Apis mellifera, is a cationic peptide (+5 net charge) composed of 26 amino acid residues (GIGAVLKVLTTGLPALISWIKRKRQQ). Melittin has diverse activities, including antibacterial, antifungal, antiviral, anticancer, and anti-inflammatory, as well as wound-healing potential (Alia et al. 2013; Falco et al. 2013; Park and Lee 2010). It has membrane activity as well as the ability to form pores across the lipid bilayer (Lee et al. 2013). Several studies demonstrated that melittin exhibits a broad-spectrum antibacterial activity and is more active against Gram-positive than Gram-negative bacteria (Al-Ani et al. 2015). A tremendous amount of work has been done on the antibacterial activity of melittin against human and animal pathogenic bacteria (Asthana et al. 2004; Liu et al. 2013). However, very little is known about the ability of natural melittin to act against plant pathogens, and specifically X. oryzae pv. oryzae. The objective of the present study is to determine the antibacterial activity of melittin against X. oryzae pv. oryzae and assess its protective effect against rice leaf blight.
Materials and methods
Bacterial strains, peptide synthesis, and reagents
X. oryzae pv. oryzae strain ZJ-173 (which is commonly used in China) was used in this study. X. oryzae pv. oryzae was grown at 28°C in nutrient broth (NB) medium as described previously (Zhu et al. 2013). Melittin was synthesized using solid-phase methodology at GL Biochemistry Corporation (Shanghai, China). Preparative reverse-phase high-performance liquid chromatography (RP-HPLC) resulted in final products deemed >95 % pure. Selective N-terminal fluorescein labeling of the peptide was performed with fluorescein isothiocyanate (FITC) and the labeled peptide was deemed >95 % homogeneous. 4,6-Diamidino-2-phenylindole (DAPI) was purchased from Sigma-Aldrich (St. Louis, MO, USA). The restriction enzymes and DNA extraction kit were purchased from Takara Bio, Inc. (Shiga, Japan), and the TransZol™ UP Plus RNA Kit was purchased from TransGen Biotech Co., Ltd. (Beijing, China). The T-ATPase (total quantity of adenosine triphosphate in the cell) and protein assay kits were purchased from Jiancheng Bioengineering Institute (Nanjing, China). All other reagents and solvents were made in China and were of analytical grade.
Antibacterial activity assay
X. oryzae pv. oryzae was prepared for 24 h in NB medium at 28°C to achieve an inoculum of approximately mid-log phase (OD600 ∼0.5). The antibacterial activity was tested using an agar well diffusion assay and a time-to-kill curve assay. For the former, the samples were placed in the wells of a thin agar plate seeded with X. oryzae pv. oryzae. Bacterial inhibition zones were detected after incubating at 28°C for 2 days. For the latter, bacterial cultures were treated with different concentrations of melittin. The half maximal inhibitory concentration (IC50) and microbial growth were assessed by measuring the OD600 after incubation with different concentrations (2.5, 5, 7.5, 10, and 20 μM) for different times (2.5, 5, 7.5, and 10 h) at 28°C, as reported elsewhere (Tripathi et al. 2015).
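A minimal sketch of how an IC50 can be estimated from such OD600 readings is shown below; the concentrations follow the assay above, but the OD values and the four-parameter logistic model are illustrative assumptions, not the authors' analysis.

```python
# Minimal sketch (not the authors' analysis): estimating IC50 from OD600 readings
# with a four-parameter logistic (Hill) dose-response fit. OD values are
# hypothetical placeholders; concentrations follow the assay described above.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([2.5, 5.0, 7.5, 10.0, 20.0])       # melittin, uM
od600 = np.array([0.48, 0.41, 0.30, 0.20, 0.05])   # hypothetical readings after 10 h

def hill(c, top, bottom, ic50, slope):
    """Four-parameter logistic: growth falls from 'top' to 'bottom' around ic50."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** slope)

params, _ = curve_fit(hill, conc, od600, p0=[0.5, 0.0, 9.0, 2.0], maxfev=10000)
print(f"Estimated IC50 ~ {params[2]:.1f} uM")        # expected near the reported 9-10 uM
```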
Determination of the DNA and RNA contents
The determination of DNA and RNA contents was performed using DAPI. Bacteria were incubated with melittin at a final concentration of 10 μM for 2.5, 5, and 7.5 h. Phosphate-buffered saline (PBS) was used as a control. The cells were then collected and diluted with distilled water. A triple volume of DAPI was added to the resuspended bacteria. The cell samples were placed in the dark for 10 min. The DAPI fluorescence of the cells was measured using fluorescence spectrometry (364 nm for DNA and 400 nm for RNA). Each experiment was repeated three times.
Scanning electron microscopy (SEM)
After incubation with 10 μM melittin for 30 min at 28°C, X. oryzae pv. oryzae was collected by centrifugation at 10,000×g for 10 min. After washing three times with PBS, X. oryzae pv. oryzae was fixed with 4 % (v/v) glutaraldehyde in PBS at 4°C for 3 h. After washing three times with the same buffer, the samples were dehydrated separately for 15 min using a graded series of ethanol solutions (50, 70, 80, 90, 95, and 100 % (v/v)). They were then air dried and sputter coated with gold to avoid charging effects in the microscope. Samples were viewed using a scanning electron microscope at 10 kV.
Transmission electron microscopy (TEM)
After incubation with 10 μM melittin for 30 min at 28°C, X. oryzae pv. oryzae was collected by centrifugation at 10,000×g for 10 min. After washing three times with PBS, X. oryzae pv. oryzae was fixed with 4 % (v/v) glutaraldehyde in PBS at 4°C for 3 h and then post-fixed with 1 % osmium tetroxide at 4°C for another 2 h. After washing three times with PBS, samples were dehydrated separately for 15 min using a graded series of acetone solutions (30, 50, 70, 80, 90, and 100 %) and embedded in resin. The samples were cut into semi-thin sections, prepared on copper grids, and stained with uranyl acetate and lead citrate. Samples were viewed using a transmission electron microscopy system.
Determination of intracellular ATP depletion
X. oryzae pv. oryzae was incubated with melittin (10, 20 μM) for 30 min at 28°C, with PBS as a control. Then, 1 ml of each culture was centrifuged at 12,000×g for 10 min and resuspended in 200 μl 0.9 % NaCl solution. The bacteria were disrupted by sonication, and Coomassie brilliant blue R-250 (Beijing Dingguo Biotech Co. Ltd. China) was used to determine the protein content. The T-ATPase level was determined using a commercial assay kit according to the manufacturer's recommendations. T-ATPase concentrations were expressed in U/mg protein.
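The kit readout above is normalized to protein content to give a specific activity; a trivial sketch of that normalization, using made-up numbers rather than measured values, is:

```python
# Minimal sketch of the normalization used above (illustrative values only):
# T-ATPase activity from the kit divided by protein content gives U/mg protein.
atpase_activity_u = 0.85   # hypothetical kit readout, U
protein_mg = 0.42          # hypothetical protein content from the dye-binding assay, mg
print(f"T-ATPase specific activity: {atpase_activity_u / protein_mg:.2f} U/mg protein")
```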
Confocal laser scanning microscopy
X. oryzae pv. oryzae was incubated with FITC-labeled melittin (10 μM) for 30 min in the dark at 28°C, with PBS treatment as a control. Then, the samples were centrifuged at 5000×g for 5 min. The bacterial pellets were washed three times with PBS. Images were collected using a confocal laser scanning microscope (excitation, 488 nm; emission, 522 nm for the FITC signal).
DNA/RNA gel retardation assay
The DNA of X. oryzae pv. oryzae was purified using a DNA extraction kit (TransGen Biotech, Beijing). Total RNA was prepared using the TransZol UP Plus RNA Kit (TransGen Biotech, Beijing) and resuspended in diethyl pyrocarbonate (DEPC)-treated water. Gel retardation experiments were performed as described by Park et al. (1998). Briefly, 200 ng of DNA was mixed with different amounts of melittin (10, 100, 200, 400, or 600 ng) in 20 μl of binding buffer (5 % glycerol, 10 mM Tris-HCl (pH 8.0), 1 mM EDTA (ethylenediaminetetraacetic acid), 1 mM DL-dithiothreitol, 20 mM KCl, and 50 μg/ml albumin). For RNA binding, an assay was conducted by mixing 300 ng of RNA with melittin (180, 360, or 900 ng) in 30 μl of binding buffer. The reaction mixtures were incubated for 1 h at room temperature and then subjected to gel electrophoresis on a 1 % agarose gel. In addition, samples with peptide/DNA weight ratios of 0.5 were dissolved in 0.5 μl Tris-HCl and then digested with 1 μl HindIII, KpnI, SacI, or EcoRI-HF (Takara Biotech). After incubating at 37°C for 3 h, the samples were loaded onto a 1 % agarose gel.
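The peptide masses above map directly onto the peptide/DNA weight ratios quoted later in the Results; a minimal sketch of that arithmetic, using the amounts stated in this protocol, is shown below.

```python
# Minimal sketch: peptide/DNA weight ratios for the retardation assay above,
# computed from the stated melittin amounts and 200 ng of genomic DNA per reaction.
dna_ng = 200
melittin_ng = [10, 100, 200, 400, 600]
for p in melittin_ng:
    print(f"{p:>3} ng melittin / {dna_ng} ng DNA -> weight ratio {p / dna_ng:g}")
```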
Sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) analysis
X. oryzae pv. oryzae was prepared in NB medium at 28°C to achieve an inoculum of approximately mid-log phase (OD600 ∼0.5). Then, melittin, at final concentrations of 5 and 10 μM, was added and the cultures were grown in a rotary shaker (220 rpm) at 28°C. After 5 h, cells were collected and an SDS-PAGE analysis was performed using 12 % polyacrylamide gels. Gels were stained with Coomassie brilliant blue R-250.
Disease protection studies
The seeds from the next generation of LYP9 rice, paternal line 9311 (Oryza sativa L. ssp. indica), were sprouted after sterilization in distilled water at 28°C for 3 days, then planted into Kimura B nutrient solution (Ma et al. 2001) and grown to the three- or four-leaf stage in an illuminated incubator. The plants were treated with 50-100 μl (1 mM/ml) melittin by injecting the leaf stem or spraying and then placed in a 28°C light incubator overnight. Rice leaves were inoculated using the scissor-clipping method (Burdman et al. 2004) with a bacterial suspension having an OD600 = 0.6. Plants without melittin and bacterial treatments were used as controls. Average lesion lengths were measured 1 week after X. oryzae pv. oryzae inoculation. The experiment was performed in three replicates.
Statistical analysis
All experiments were repeated at least three times. Results are expressed as mean values ± standard errors (mean ± SE). The significance of the differences between the treatments and the respective controls were determined based on Student's t test. A level of P < 0.05 was considered to be significant.
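As a minimal illustration of the test described above, the sketch below applies a two-sample Student's t-test to the lesion lengths reported in the disease protection experiment (mean ± SE); the assumed n = 3 replicates, the conversion from SE to SD, and the equal-variance assumption are simplifications for illustration only.

```python
# Minimal sketch of the Student's t-test described above, applied to the lesion
# lengths reported in the Results (mean +/- SE); n = 3 replicates and equal
# variances are assumptions made for illustration, not the authors' exact analysis.
import math
from scipy.stats import ttest_ind_from_stats

n = 3                                   # replicates per group (stated in Methods)
mean_trt, se_trt = 1.38, 0.28           # melittin-treated lesion length, cm
mean_ctl, se_ctl = 13.12, 0.89          # untreated positive control, cm
sd_trt = se_trt * math.sqrt(n)          # convert SE back to SD
sd_ctl = se_ctl * math.sqrt(n)

t_stat, p_value = ttest_ind_from_stats(mean_trt, sd_trt, n, mean_ctl, sd_ctl, n)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")   # P < 0.05 is taken as significant
```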
Results
Melittin showed antibacterial activity against X. oryzae pv. oryzae
An inhibition zone assay was conducted to detect whether melittin showed antibacterial activity against X. oryzae pv. oryzae (Fig. 1a). Results showed that melittin has antibacterial activity against the plant pathogen at a concentration of 10 μM. The half maximal inhibitory concentration (IC50) determined for melittin was about 9-10 μM. The antibacterial activity was further detected by the time-kill assay (Fig. 1b). At a concentration of 5 μM, melittin inhibited the growth of X. oryzae pv. oryzae slightly. However, when the concentration increased to 10 μM, the number of viable cells decreased greatly, indicating that the growth of X. oryzae pv. oryzae was significantly inhibited. After a 10 h incubation, the number of viable cells decreased to an almost undetectable level. The effect of melittin on the nucleic acid content was determined by DAPI staining and fluorescence observations (Fig. 1c, 1d). After treatment with 10 μM melittin, the fluorescence densities of DNA and RNA were reduced significantly compared with the PBS controls, indicating that the bacterial number decreased greatly or that nucleic acid synthesis was inhibited. All of the data showed that melittin had antibacterial activity against the X. oryzae pv. oryzae plant pathogen.
Melittin had the ability to disrupt bacterial cell membrane integrity
The effect of melittin on the membrane of X. oryzae pv. oryzae was detected using SEM (Fig. 2a). The untreated X. oryzae pv. oryzae displayed short rods, and a normal smooth and bright surface without any apparent cellular debris. After treatment with 10 μM melittin, obvious cell surface disruption with wrinkles was observed. The bacterial cell membranes were heavily disrupted, which was evident from the formation of potholes on the surface, and more frequent debris, indicating that most of the serious structural changes were caused by melittin. TEM observations indicated that the untreated pathogens were completely filled, having intact bacterial walls and well-defined membranes (Fig. 2b). After treatment with 10 μM melittin, structural changes were observed. There were trumpet-shaped gaps at the end of the bacteria, and there were regions where the antimicrobial peptide effected the formation of microtubule channels. There was leakage of the bacterial content as a result of wall disruption. The obvious release of the cytoplasmic contents through the membrane was also observed, and there were empty vesicles, which maintained the bacterial cell appearance prior to their collapse.
Fig. 1 Antibacterial activity of melittin against X. oryzae pv. oryzae. Agar well assay (a), time-to-kill curve assay (b), and the determination of DNA (c) and RNA (d) contents were conducted to detect antibacterial activity.
Fig. 2 Electron microscopy images of X. oryzae pv. oryzae treated with melittin. Bacteria were incubated with melittin for 30 min at 28°C and then observed by scanning electron microscopy (a) and transmission electron microscopy (b).
Melittin had the ability to penetrate the bacterial cell membrane
FITC-labeled melittin, visualized using a confocal laser scanning microscope, was used to determine whether melittin had the ability to penetrate the bacterial cell membrane. As shown in Fig. 3a, FITC fluorescence accumulated in the cytoplasm of X. oryzae pv. oryzae after treatment with FITC-labeled melittin for 30 min. Furthermore, FITC fluorescence was mainly amassed at the end of the rod shape. These results indicated that FITC-melittin may penetrate the cell membrane and be distributed in the cytoplasm.
The activity of T-ATPase was further detected to determine whether energy metabolism was affected by melittin. As shown in Fig. 3b, melittin did not significantly suppress the T-ATPase activity compared with the control. Thus, melittin may have no effect on the energy metabolism of the cells.
Melittin had binding activity to X. oryzae pv. oryzae DNA in vitro
A gel retardation assay was conducted to determine whether melittin had DNA-binding activity. As shown in Fig. 4a, at a peptide/DNA weight ratio of 0.05, almost all of the DNA remained at the origin. When the peptide/DNA weight ratio was increased to 0.5, additional X. oryzae pv. oryzae genomic DNA was degraded, and a DNA fragment of ∼1200 bp was released from the genomic DNA. At peptide/DNA weight ratios of 1 and 2, similar results were observed. In particular, at the peptide/DNA weight ratio of 3, a complete retardation of the DNA was observed, indicating that the DNA was aggregated by melittin. Therefore, melittin had the ability to bind to genomic DNA in vitro and then degrade the genomic DNA into 1000-2000-bp fragments.
Four DNA restriction enzymes with different sites were used to further identify the binding activity of melittin to DNA. As shown in Fig. 4b, after treatment with HindIII (A/AGCTT), KpnI (GGTAC/C), SacI (GAGCT/C), and EcoRI-HF (G/AATTC), X. oryzae pv. oryzae genomic DNA was cut into different-sized fragments, exhibited as smear patterns by gel electrophoresis. However, the smear fragments decreased after treatment with melittin. All of the data indicated that melittin could bind to genomic DNA in vitro.
Melittin had binding activity to X. oryzae pv. oryzae RNA
The RNA-binding ability of melittin was evaluated using gel retardation assays. At the peptide/RNA weight ratio of 1.2, the migration of RNA was suppressed by melittin. When the peptide/RNA weight ratio was increased to 3, significant retardation was observed, indicating that melittin had the ability to bind RNA. After treatment with 10 μM melittin for 5 h, the total protein profile of X. oryzae pv. oryzae was decreased greatly, with the loss of some bands in the SDS-PAGE analysis (Fig. 5b). This result indicated that protein expression may be suppressed by melittin.
Melittin prevention of X. oryzae pv. oryzae in rice
Rice plants at the three- or four-leaf stage were used to establish a model of rice bacterial leaf blight by injection with the X. oryzae pv. oryzae pathogen. As shown in Fig. 6a, b, the development of disease symptoms was predominant in the bacterial blight disease model when compared with the noninjected plants. After treatment with melittin, the development of disease was controlled (Fig. 6c). The results from the lesion measurement experiment showed that very short lesions (average 1.38 ± 0.28 cm) were found in the melittin-treated group compared with the positive control group, which was susceptible to the pathogen and had long lesions (average 13.12 ± 0.89 cm). Hence, melittin treatment is effective for the protection of rice from the X. oryzae pv. oryzae pathogen.
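For readers who want to check how strongly such lesion-length data separate the groups, the minimal sketch below runs a Welch's t-test from the summary statistics reported above. The group size is not given in the text, so n = 10 leaves per group is an assumption made purely for illustration.

```python
# Hypothetical re-analysis of the reported lesion lengths (cm).
# n per group is NOT stated in the text; n = 10 is assumed for illustration only.
from scipy.stats import ttest_ind_from_stats

mean_treated, sd_treated = 1.38, 0.28     # melittin-treated group
mean_control, sd_control = 13.12, 0.89    # untreated positive control
n = 10                                     # assumed leaves per group

# Welch's t-test from summary statistics (unequal variances)
t_stat, p_value = ttest_ind_from_stats(
    mean_treated, sd_treated, n,
    mean_control, sd_control, n,
    equal_var=False,
)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.2g}")
```

With any plausible sample size the difference between ~1.4 cm and ~13 cm lesions is overwhelming, which is consistent with the protective effect described above.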
Discussion
Rice is the staple diet of more than three billion people, and the yield must double over the next 40 years if we are to sustain the nutritional needs of an ever-expanding global population (Skamnioti and Gurr 2009). However, rice is vulnerable to disease wherever it is grown, and bacterial leaf blight of rice caused by X. oryzae pv. oryzae is one of the most destructive bacterial diseases worldwide. Che et al. (2011) reported that the Hpa-1-cecropin A (KLFKKIEKV)-melittin (KIFKKIEKKV-AVLKVLTTGL) hybrid peptide inhibited several bacteria and fungi, including X. oryzae pv. oryzae. However, whether melittin itself inhibits this plant pathogen was unknown. In this study, we found that the antimicrobial peptide melittin showed antibacterial activity against the X. oryzae pv. oryzae plant pathogen. Many reports have indicated that melittin possesses broad antimicrobial activities in vitro. We tested the antimicrobial activity of melittin against several plant pathogens, including Ralstonia solanacearum, Magnaporthe grisea, Ustilaginoidea oryzae, Alternaria alternata (Fries) Keissler, Fusarium graminearum Sehw, and scab of cucurbits. The results showed that melittin had weak antimicrobial activities against these important plant pathogens (data not shown).
A better understanding of the interactions between melittin and X. oryzae pv. oryzae will greatly help develop melittin for use in rice plant protection. Previous investigations of the action of melittin on cell membranes were conducted mainly using synthetic model membranes. These studies hypothesized that melittin disrupts membrane bilayers via a two-step "detergent-like" mechanism. The first step involves the electrostatic interaction of melittin with negatively charged lipid headgroups. After the concentration of melittin on the lipid surface reaches a critical concentration, melittin rearranges to form pore-like structures that disrupt the membrane bilayer (Lee et al. 2013). However, little research has been conducted using natural membranes of plant pathogens. In our study, FITC-labeled melittin bound to X. oryzae pv. oryzae, a Gram-negative plant pathogen, which may have resulted from electrostatic interactions. Melittin also had the ability to cause surface roughening and shrinking, and the formation of potholes, which indicated that the cell membrane structure was disrupted and that channels could be formed by melittin. Moreover, the wall disruption led to the leakage of bacterial contents, with empty vesicles remaining, and debris was also observed, indicating that melittin had the ability to kill cells through a membrane-permeabilizing/disrupting mechanism. This resulted in rapid cell death of the X. oryzae pv. oryzae pathogen. Melittin is a small linear peptide of 26 amino acids with a hydrophobic N-terminal region and a hydrophilic C-terminal region. Datiles et al. (2008) reported that the ATPase activity of Escherichia coli was inhibited by melittin. However, melittin had no effect on the energy metabolism of this plant pathogen, as indicated by the T-ATPase activity assay.
In addition to membrane-permeabilizing/disrupting properties, many AMPs also interact with intracellular targets or disrupt cellular processes. Our study, together with other studies, showed that melittin killed bacteria mainly by permeabilizing/disrupting the microbial cytoplasmic membrane (Lee et al. 2013). Very interestingly, using FITC labeling and confocal microscopy, we found that melittin not only bound the plasma membrane but also entered cells. Thus, melittin used its membrane-binding properties to kill bacteria by rapidly lysing the cells. At the same time, melittin also had the chance to enter the cells, which may have resulted from the disruption of the membrane. In eukaryotes, melittin has several mechanisms by which it kills yeast and cancer cells (Gajski and Garaj-Vrhovac 2013). Park and Lee (2010) speculated that melittin could more easily translocate across the plasma membrane and then bind to intracellular molecules, which might trigger apoptosis in Candida albicans. In human leukemic U937 cells, melittin could induce Bcl-2- and caspase-3-dependent apoptosis through the downregulation of Akt phosphorylation (Moon et al. 2008). The gel retardation assay showed that melittin strongly bound to DNA/RNA in vitro, suggesting the possibility of inhibition of intracellular functions via interference with DNA/RNA functions. The five cationic residues in melittin may enable electrostatic interactions with the negatively charged DNA. In addition to cationic residues, a leucine zipper motif, which is a typical DNA-binding domain with every seventh amino acid being leucine/isoleucine, was identified in melittin (Asthana et al. 2004). In fact, besides melittin, several other AMPs, including buforin IIB, indolicidin, and NKLP27, are known to bind DNA, inhibit DNA synthesis, and induce the filamentation of bacteria (Hsu et al. 2005; Zhang et al. 2014). The inhibition of protein synthesis was also observed in our study, as seen in the SDS-PAGE analysis. We also found that the fluorescence density of nucleic acids was reduced significantly by melittin, which may be the result of the death of X. oryzae pv. oryzae or the inhibition of proliferation.
The finding of antibacterial activity for melittin in vitro is not predictive of its capacity to protect the rice plant in vivo from X. oryzae pv. oryzae, because several host components could interfere with the AMP's activity (Montesinos and Bardaji 2008). Therefore, an inhibition assay was conducted to assess the prevention of infection in rice plants exposed to the X. oryzae pv. oryzae pathogen. Our data showed that melittin exhibited effective protection of rice against X. oryzae pv. oryzae. Chemical pesticides have been widely used in past years, and their long-term environmental pollution and carcinogenic effects on humans and other animals limit their use (Daoubi et al. 2005). AMPs, as a control source with a potentially reduced negative environmental impact and a broad spectrum of activities, have attracted much attention in plant protection research (Morassutti et al. 2002). The quick kill effect of melittin, which is mainly caused by membrane disruption, makes it difficult for X. oryzae pv. oryzae to develop resistance to melittin. Although several problems with melittin, such as its high hemolytic activity, lack of selective toxicity, and susceptibility to protease degradation, limit its use as an antibacterial or anticancer agent in humans (Asthana et al. 2004; Dempsey 1990; Oren and Shai 1997), application for plant protection might be possible. Moreover, after treatment of rice with melittin, the plant height, tillering ability, and leaf color and shape were identical to those of the PBS control. Furthermore, no lesion mimics were observed in our experiment. All of this showed that the rice plants were healthy after the melittin treatment, indicating a lack of undesirable toxic effects on rice. This may result from differences in cell structure between mammalian cells and plant cells. Therefore, melittin represents a promising candidate for further development to protect rice from bacterial leaf blight. Expression of several AMP-coding genes in plants has been used to enhance their resistance to bacterial and fungal pathogens (Carmona et al. 1993; Che et al. 2011). Additionally, such transgenic plants showed considerably greater resistance to certain pathogens than wild-type plants (Jan et al. 2010; Lakshman et al. 2013; Nadal et al. 2012). Thus, in addition to being used directly as an agricultural pesticide, melittin may be a candidate for developing transgenic rice with resistance to bacterial leaf blight. However, developing less toxic and more stable compounds requires further research. Structural modifications to reduce the hemolytic activity have been conducted (Pandey et al. 2010). Developing more effective and less toxic derivatives of melittin to protect rice from X. oryzae pv. oryzae is urgently needed.
Fig. 6 Rice leaves infected with X. oryzae pv. oryzae (b). Application of melittin to prevent leaf blight of rice caused by X. oryzae pv. oryzae (c). The lesions were measured 7 days after inoculation
|
2018-05-08T18:09:35.573Z
|
2016-03-07T00:00:00.000
|
{
"year": 2016,
"sha1": "3836b34048d852bfceef6623a7e1bbf8465432bc",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00253-016-7400-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3836b34048d852bfceef6623a7e1bbf8465432bc",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
232387445
|
pes2o/s2orc
|
v3-fos-license
|
CyberKnife for Recurrent Malignant Gliomas: A Systematic Review and Meta-Analysis
Background and Objective Possible treatment strategies for recurrent malignant gliomas include surgery, chemotherapy, radiotherapy, and combined treatments. Among different reirradiation modalities, the CyberKnife System has shown promising results. We conducted a systematic review of the literature and a meta-analysis to establish the efficacy and safety of CyberKnife treatment for recurrent malignant gliomas. Methods We searched PubMed, MEDLINE, and EMBASE from 2000 to 2021 for studies evaluating the safety and efficacy of CyberKnife treatment for recurrent WHO grade III and grade IV gliomas of the brain. Two independent reviewers selected studies and abstracted data. Missing information was requested from the authors via email correspondence. The primary outcomes were median Overall Survival, median Time To Progression, and median Progression-Free Survival. We performed subgroup analyses regarding WHO grade and chemotherapy. Besides, we analyzed the relationship between median Time To Recurrence and median Overall Survival from CyberKnife treatment. The secondary outcomes were complications, local response, and recurrence. Data were analyzed using random-effects meta-analysis. Results Thirteen studies reporting on 398 patients were included. Median Overall Survival from initial diagnosis and CyberKnife treatment was 22.6 months and 8.6 months. Median Time To Progression and median Progression-Free Survival from CyberKnife treatment were 6.7 months and 7.1 months. Median Overall Survival from CyberKnife treatment was 8.4 months for WHO grade IV gliomas, compared to 11 months for WHO grade III gliomas. Median Overall Survival from CyberKnife treatment was 4.4 months for patients who underwent CyberKnife treatment alone, compared to 9.5 months for patients who underwent CyberKnife treatment plus chemotherapy. We did not observe a correlation between median Time To Recurrence and median Overall Survival from CyberKnife. Rates of acute neurological and acute non-neurological side effects were 3.6% and 13%. Rates of corticosteroid dependency and radiation necrosis were 18.8% and 4.3%. Conclusions Reirradiation of recurrent malignant gliomas with the CyberKnife System provides encouraging survival rates. There is a better survival trend for WHO grade III gliomas and for patients who undergo combined treatment with CyberKnife plus chemotherapy. Rates of complications are low. Larger prospective studies are warranted to provide more accurate results.
INTRODUCTION
The majority of malignant brain tumors are represented by gliomas (70%) (1). The standard management of newly diagnosed malignant gliomas (MGs) is maximal resection followed by radiotherapy (RT) with concomitant and adjuvant chemotherapy (CMT) (2). Although a solid treatment strategy has been established for MGs, recurrence still occurs in almost all patients within 2 years after initial treatment (3-5). Possible treatment strategies for recurrent malignant gliomas (rMGs) include second-line CMTs, surgery with or without adjuvant therapies, and RT (2, 6, 7). Reirradiation appears to be an efficacious and safe treatment modality, providing survival benefits with acceptable risk (8, 9). Among different reirradiation modalities, hypofractionated stereotactic radiotherapy (HFSRT) has shown promising results as it allows delivery of a large total dose in a precise target volume and a short treatment duration (10, 11). Nowadays, various HFSRT and stereotactic radiosurgery (SRS) machines are available and their usage has been gradually increasing. All systems have excellent accuracy, with targeting precision close to 1 mm (12-14). Among those, the CyberKnife® (CK) is a frameless image-guided radiotherapy system mounting a 6-MV linear accelerator on a highly maneuverable robotic arm (15). The CK System is a non-invasive and pain-free treatment strategy that requires a customized thermoplastic face mask, reducing the patient discomfort associated with other frame-based radiosurgical systems. Unlike other SRS techniques, the CK does not require general or local anesthesia while still ensuring a comparable level of accuracy (12). Particularly, the CK was found to have a clinically relevant accuracy of 0.7 ± 0.3 mm, minimizing normal brain radiation exposure and allowing for high doses of radiation to targeted areas (12, 16). Given its recent development, few case series have been reported on CK for rMGs of the brain, and indications are still debated. We hereby conducted a systematic review of the literature and a meta-analysis to provide physicians awareness about the efficacy and safety of CK treatment for rMGs.
Literature Search
The systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (17). A comprehensive literature search of the PubMed, Ovid MEDLINE, and Ovid EMBASE databases was designed and conducted by an experienced librarian with input from the authors. The keywords "glioblastoma", "anaplastic astrocytoma", "malignant glioma", "high-grade glioma", "HHG", "recurrence", "recurrent malignant glioma", "brain", "CyberKnife", "CK", "stereotactic radiosurgery", "SRS", and "stereotactic radiotherapy" were used in "AND" and "OR" combinations. The search was limited to articles published between 2000 and 2021.
The following inclusion criteria were used: 1) English language, 2) case series reporting greater than 5 patients 3) studies reporting exclusively histologically proven World Health Organization (WHO) grade IV gliomas or WHO grade III gliomas of the brain (18), 4) studies reporting recurrence, and 5) studies reporting retreatment with the CK System at recurrence. The exclusion criteria were: 1) case series reporting fewer than 10 patients and case reports, 2) brain lesions other than MGs, 3) lesions not located in the brain (e.g. gliomas of the spinal cord), 4) studies reporting only newly diagnosed MGs, 5) studies reporting on irradiation techniques other than the CK System, 6) studies not reporting survival data.
Two authors determined the inclusion and exclusion criteria for the studies in the literature search. In studies with overlapping patient populations written by the same author/institution, we only included the largest or most complete dataset. In cases where outcomes were separated by WHO grade or CMT at recurrence, we abstracted outcomes separately to perform our subgroup analyses. Missing baseline data and outcomes information was requested from the authors via email correspondence. The authors of six included studies replied and the information provided was integrated into the data abstraction process.
Data Extraction
For each study, we abstracted the following baseline information: number of patients; median age at CK treatment; gender; median Karnofsky Performance Status (KPS) at CK treatment; WHO grade and histotype at recurrence. Regarding treatment at initial diagnosis we collected information about: the extent of resection (EOR), i.e. gross total resection (GTR, resection of more than 99% of the preoperative tumor volume), subtotal resection (STR, 95%-99% resection); partial resection (PR, < 95% resection), and biopsy (B) (19); the number of patients who underwent conventional radiation therapy (CRT); and the number of patients who underwent CMT. About the recurrence interval, we abstracted the Time To Recurrence (TTR, the time span between initial treatment and CK) (20,21). As for treatment at recurrence, we gathered the following data: median planned target volume (PTV); the median number of fractions; total radiation dose in Gray (Gy); the number of patients who underwent CMT.
Objectives
Our primary endpoints were median Overall Survival (OS), median Time To Progression (TTP), and median Progression-Free Survival (PFS). As for OS, we extracted data from initial diagnosis (i.e. time-length from the date of initial diagnosis to death from any cause) and from CK (i.e. time-length from the date of the start of CK treatment to death from any cause) (22). Concerning TTP and PFS, we abstracted data from CK. The former was defined as the time elapsed between the start of CK treatment to Beside Recurrence (BR, new lesion developed after 4 weeks beside or inside the prescribed marginal isodose line of previous CK treatment) or Progressive Disease (PD, more than 25% growth of Gd−enhanced area within 4 weeks after CK treatment) (21). The latter was defined as the time elapsed between the start of CK treatment to any disease recurrence or death from any cause (23). For our subgroup analysis, we were able to abstract median OS from initial diagnosis and from CK treatment for WHO grade IV gliomas versus WHO grade III gliomas separately and for CK plus CMT versus CK treatment alone separately. Besides, we analyzed the relationship between median TTR and median OS from CK treatment.
The secondary endpoints were Local Response (LR), New Lesion (NL), and complications. The LR was assessed with Gd-enhanced Magnetic Resonance Imaging (MRI) at 1 month after CK treatment and was classified into the following categories: Complete Response (CR, Gd-enhanced area disappears and no regrowth is recognized for at least 4 weeks after treatment), Partial Response (PR, Gd-enhanced area is reduced by more than 50% and maintains this state for at least 4 weeks after treatment), No Change (NC, less than 50% reduction or less than 25% growth of Gd-enhanced area, maintained for at least 4 weeks after treatment), and PD (24). The development of NLs following initially controlled disease (i.e. CR, PR, NC) was divided into BR and Remote Recurrence (RR, lesion located remotely from the prescribed marginal isodose line of previous CK treatment) (25). Regarding complications, we extracted the number of acute neurological and non-neurological side effects, corticosteroid dependency (the onset of neurological deficits and/or cephalalgia requiring daily doses of dexamethasone > 4 mg for more than 8 weeks), radiation necrosis, and other toxicities (26).
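To make the threshold logic of these response categories explicit, the minimal sketch below encodes them as a hypothetical classifier; the function name and the handling of a change of exactly 25% are illustrative choices, not part of the definitions cited above, and the 4-week confirmation requirement is assumed to have been met.

```python
# Hypothetical helper illustrating the local-response categories defined above.
# `change` is the relative change of the Gd-enhanced area versus baseline
# (-0.6 = 60% shrinkage, +0.3 = 30% growth), assumed to be maintained for at
# least 4 weeks as the definitions require.
def classify_local_response(change: float, lesion_disappeared: bool = False) -> str:
    if lesion_disappeared:
        return "CR"   # complete response: enhancement disappears, no regrowth
    if change < -0.50:
        return "PR"   # partial response: reduction by more than 50%
    if change <= 0.25:
        return "NC"   # no change: <50% reduction and <=25% growth
    return "PD"       # progressive disease: more than 25% growth

print(classify_local_response(-0.60))  # PR
print(classify_local_response(0.10))   # NC
print(classify_local_response(0.40))   # PD
```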
Study Risk of Bias Assessment
We modified the Newcastle-Ottawa Quality Assessment Scale to assess the methodologic quality of the studies included in this meta-analysis (27). This tool is designed for use in comparative studies; however, our analyzed studies did not have control groups, therefore, we assessed the study risk of bias based on selected items from the scale, focusing on the following questions: 1) Did the study include all patients or consecutive patients versus a selected sample? 2) Was the study retrospective or prospective? 3) Was clinical follow-up satisfactory, thus allowing ascertainment of all outcomes? 4) Were outcomes clearly reported? 5) Were there clearly defined inclusion and exclusion criteria?
Statistical Analysis
We estimated each cohort's cumulative prevalence and 95% confidence interval for each outcome. Event rates were pooled across studies using a random-effects meta-analysis. Heterogeneity across studies was evaluated using the I 2 statistic. An I 2 value of >50% suggests substantial heterogeneity. Metaregression was not used in this study. For some outcomes it was not possible to estimate the standard errors, therefore a standard error of 0 was used in the meta-analysis. Pearson's correlation was used to correlate median TTR and median OS from CK treatment. Statistical analyses were performed using OpenMeta [Analyst] (http://www.cebm.brown.edu/openmeta/) and R statistical package v3.4.1 (http://www.r-project.org).
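The study's pooling was done in OpenMeta[Analyst] and R; purely as an illustration of the random-effects approach and the I² statistic described above, the sketch below re-implements DerSimonian-Laird pooling of event rates in Python with made-up study counts (none of these numbers come from the included studies).

```python
# Illustrative DerSimonian-Laird random-effects pooling of event rates, with
# Cochran's Q and the I^2 heterogeneity statistic. Event counts are invented
# for demonstration only.
import numpy as np

events = np.array([3, 5, 2, 7])      # hypothetical events per study
totals = np.array([20, 40, 25, 60])  # hypothetical patients per study

p = events / totals
var = p * (1 - p) / totals           # within-study variance of each proportion
w = 1 / var                          # fixed-effect (inverse-variance) weights

p_fixed = np.sum(w * p) / np.sum(w)
Q = np.sum(w * (p - p_fixed) ** 2)   # Cochran's Q
k = len(p)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (var + tau2)              # random-effects weights
p_pooled = np.sum(w_re * p) / np.sum(w_re)
se_pooled = np.sqrt(1 / np.sum(w_re))
i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

lo, hi = p_pooled - 1.96 * se_pooled, p_pooled + 1.96 * se_pooled
print(f"pooled rate = {p_pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f}), I^2 = {i2:.1f}%")
```

An I² above 50% would flag substantial heterogeneity, matching the cut-off stated above.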
Literature Review
A total of 1420 papers were identified after duplicates removal. After title and abstract analysis, 67 articles were identified for full-text analysis. Eligibility was ascertained for 12 articles (20, 21, 24-26, 28-34). The remaining 55 articles were excluded for the following reasons: 1) irradiation techniques other than the CK System (19 articles), 2) improper study design (12 articles), 3) studies reporting only on newly diagnosed MGs or not reporting survival data (9 articles), 4) studies reporting on brain lesions other than MGs (7 articles), 5) case series reporting fewer than 10 patients (5 articles), and 6) studies in other languages (3 articles). All studies included in the analysis had at least one or more outcome measures available for one or more of the patients' groups analyzed. Figure 1 shows the flow chart according to the PRISMA statement (17).
Study and Patients Characteristics
Our meta-analysis included a total of 398 patients. The smallest study included 13 patients (32) and the largest included 128 patients (33). The median age at CK treatment was 54.5 years. There was a male predominance (1.6:1). The median KPS at CK treatment was 80. Six studies (50%) reported on WHO grade IV and III gliomas and the other 6 studies (50%) reported on WHO grade IV gliomas. The histotype was available in 9 studies (75%): six studies (67%) reported on glioblastomas (GBMs) and 3 studies (33%) reported on GBMs and anaplastic astrocytomas (AAs).
Post-operative CMT was undertaken in 275 patients (69%), and Temozolomide (TMZ) was the CMT regimen reported in most studies (193 patients, 48%). Patients were followed up with a Gd-enhanced MRI performed every 1 to 3 months. The median TTR was 14 months (range 1-171).
At recurrence, the GTV was defined as the MRI Gd-enhanced area and the PTV was reconstructed adding 0 to 3 mm margin to the GTV. The median target volume (PTV) was 12.1 ml. The median number of fractions was 3 (range 1-6) and the median dose was 24.5 Gy (range 13.9-48.8). The prescribed marginal isodose ranged from 78% to 91%. Half of the patients (203, 51%) underwent CMT at recurrence. Although TMZ was the most reported CMT regimen (66 patients, 17%), other therapies were undertaken, particularly Bevacizumab (BEV) in 22 patients (5%) and Interferon in 16 patients (4%). Administration of CMT was concomitant and/or after CK treatment in 199 patients (98%) and before CK treatment in 4 patients (2%). The latter received BEV-based salvage therapy prior to CK treatment. A summary of the included studies is provided in Table 1.
Rates of NLs developed following CK treatment were reported in 61 patients, and the overall rate was 88.4% (95% CI=80.5-96.4). Rates of BR or RR were reported in 50 patients. The overall rate of BR was 75.9% (95%CI=64.3-87.6), compared to 17.7% for RR (95%CI=1.5-33.8). The secondary outcomes are summarized in Table 3.
Study Heterogeneity
I 2 values were <50% indicating a lack of substantial heterogeneity for all the outcomes.
Findings
The treatment strategy for patients harboring rMGs is still debated and no clear consensus has been achieved yet. Treatment modalities include surgery, CMT, RT, and combined treatments. Reirradiation with SRS can provide survival benefits with acceptable risks. Among diverse SRS machines currently available, we focused on the CK System. Our study's primary aim was to establish the efficacy of CK treatment for rMGs, concerning survival and time to disease progression. Our secondary aims were to establish the local disease response, recurrence of disease, and toxicities. We performed a systematic review and meta-analysis of published studies on CK for rMGs and found several interesting findings.
Patients Characteristics
In our meta-analysis, we observed a male predominance (1.6:1). Recent evidence suggested that sex-associated biological features can play a role in MGs incidence, regardless of the age, race, and geographic location of patients (1,35,36). An average male-to-female ratio of 1.6:1 has been previously reported for MGs, with greater incidence in men (1,37). The prevalence of MGs in males appeared to be related mainly to genetic dissimilarities and not only to the presence of sex hormones (38). Gender differences can be pivotal for developing tailored approaches to MGs and pursuing studies are taking into account sex differences for innovative treatment strategies (37).
Primary Outcomes
Median OS of rMGs without any treatment has been reported to range between 3 and 6 months (5). Reoperation of recurrent GBMs provides 3 to 5 months median survival, without a significant increase in morbidity and mortality, and is still limited to 10-30% of patients due to the infiltrative nature of the disease and the involvement of eloquent areas (39-42). Over the past years, reirradiation has been increasingly proposed as an alternative treatment strategy with successful results (43, 44, 47). Therefore, treatment of recurrent AEs with the CK System is a promising alternative, especially for deep-seated lesions or lesions located adjacent to eloquent areas (47-50).
The subgroup analysis of treatment strategy revealed a longer survival for patients undergoing CK plus CMT treatment (9.5 months) compared with patients undergoing CK treatment alone (4.4 months). Hu et al. previously reported that HFSRT combined with CMT confers a slight survival improvement for patients with rMGs compared with HFSRT alone (8.23-23.0 months vs 3.9-12.0 months) (51). In their meta-analysis including 388 patients, 3 out of the 7 selected studies presented statistically significant differences (P < 0.05) between these two treatment approaches, and 3 out of the 4 remaining studies showed a favorable survival for patients treated with combined therapy rather than HFSRT alone. Likewise, our meta-analysis suggests a longer survival for patients who undergo combined treatment, but we cannot ascertain the absence of confounding bias between the two groups, and stratified RCTs would be needed for ultimate conclusions. Moreover, we were unable to perform qualitative subgroup analyses of the systemic agents used and the time of systemic therapy sessions with respect to CK treatment. Among the different agents used in the included studies, TMZ was the most reported CMT regimen (66 patients, 16%), followed by BEV in 22 patients (5%) and Interferon in 16 patients (4%). Administration of CMT was concomitant and/or after CK treatment in 199 patients (98%) and before CK treatment in 4 patients (2%). The latter received BEV-based salvage therapy prior to CK treatment (31). The most commonly used systemic therapies for rMGs include TMZ, nitrosoureas, and BEV (52-54). The combination of lomustine with BEV has shown improved PFS but not OS, and a higher toxicity rate compared with lomustine alone (55). Bevacizumab alone or in combination with chemotherapy agents such as lomustine or irinotecan has demonstrated a median survival time from recurrence around 9 months and radiographic response rates of approximately 30 to 40 percent (55, 56). Few reports described the combination of bevacizumab with HFSRT for recurrent GBMs with safe and effective results (57-59). This treatment strategy is under study in an ongoing larger randomized trial (60). Among the studies included in our meta-analysis, Hasan et al. showed a better survival for patients with recurrent GBMs treated with BEV either before or after CK treatment (31). Palmer et al. reported a slightly higher survival for patients with recurrent GBMs treated with HFSRT before BEV rather than BEV before HFSRT (13.9 vs 13.3 months) but stressed the importance of a randomized multi-institutional trial for more definite conclusions (61).
We did not observe a positive correlation between median TTR and median OS from CK. Likewise, Greenspoon et al. did not find a statistical difference in OS or PFS when stratifying by TTR (<12 months or >12 months) (20). Conversely, Yazici et al. reported improved survival for patients with a TTR of more than 12 months (21).
Secondary Outcomes
Our meta-analysis shows that CK is a relatively safe and effective treatment modality for rMGs. Rates of complications were relatively low. Corticosteroid dependency had the highest rate among the complications (18.8%), followed by acute non-neurological side effects (13%, including fatigue, alopecia, and clinical deterioration), and by radiation necrosis (4.3%). Notably, the authors of the included studies counted steroid use among side effects only when it required daily doses of dexamethasone > 4 mg for more than 8 weeks. However, we must acknowledge that current guidelines mention steroid use as a side effect from basic prescription (62). Larger re-irradiated tumors (maximum diameter greater than 4 cm) are more inclined to develop radiation necrosis (33, 63). Indeed, a crucial factor in developing radiation necrosis is the volume of the irradiated normal brain, which is relative to the tumor volume (64, 65). Radiation necrosis is known to occur in the normal brain when the normalized total dose (NTD) is greater than 100 Gy (66). Other authors reported that using a fractionated scheme aimed at maintaining an NTD < 100 Gy can reduce the risk of radionecrosis in larger tumors (26, 33). Conversely, rates of acute neurological effects (3.6%), such as worsening of pre-existing symptoms, dizziness, nausea/vomiting, and neurological deterioration, and rates of hematological toxicities (1.1%) were the lowest. Acute side effects were higher in patients treated with large single fraction volumes, supporting the hypothesis that fractionated schemes may be safer for tumors larger than 4 cm in maximum diameter or proximal to eloquent areas (33). Hematological toxicities such as leukopenia and thrombocytopenia were mainly reported for patients who underwent CK treatment plus CMT (26). Although we meta-analyzed the side effects reported by the authors, it was not possible to grade toxicity because of a lack of uniformity among studies. Future trials should report the side effects according to standardized grading systems to enhance uniformity and facilitate interpretation of results (62, 67).
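The exact NTD formula and alpha/beta value used by the cited authors are not given here; as a minimal sketch under the standard linear-quadratic model, the example below converts hypothetical initial and reirradiation courses into 2-Gy-equivalent dose (EQD2) and sums them against the ~100 Gy threshold mentioned above. The alpha/beta = 2 Gy value for normal brain and the course parameters are assumptions for illustration.

```python
# Illustrative normalized-total-dose bookkeeping under the linear-quadratic model.
# alpha/beta = 2 Gy for normal brain and the EQD2 formula are common assumptions;
# they are not taken from the cited studies.
def eqd2(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 2.0) -> float:
    """2-Gy-equivalent dose for n fractions of d Gy each (LQ model)."""
    d = dose_per_fraction
    return n_fractions * d * (d + alpha_beta) / (2.0 + alpha_beta)

# Hypothetical course: 60 Gy in 30 fractions at diagnosis,
# then CK reirradiation of 24 Gy in 3 fractions at recurrence.
initial = eqd2(30, 2.0)        # 60 Gy EQD2
reirradiation = eqd2(3, 8.0)   # 8 Gy x 3 -> 60 Gy EQD2 at alpha/beta = 2
cumulative = initial + reirradiation
print(f"cumulative NTD ~ {cumulative:.0f} Gy EQD2")  # compare against ~100 Gy
```

Under these assumptions the cumulative dose exceeds 100 Gy EQD2, which illustrates why fractionation schemes for larger re-irradiated volumes are chosen to keep the cumulative NTD below that threshold.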
The analysis of LR at 4 weeks after CK treatment showed disease progression in 37.9% of cases, stability in 29.2%, reduction in 27.7%, and complete disappearance in 2%. Yoshikawa et al. reported a higher control rate (i.e. CR, PR, NC) for GBM patients than AA patients (63.6% vs 45.5%) (24). However, LR after CK was reported in a small overall cohort (84 patients), and this outcome should be validated by more extensive analyses. Moreover, true progression may often be indistinguishable from pseudoprogression (21). Pseudoprogression is a subacute effect of radiotherapy observed in the first 12 weeks after treatment, first described by Hoffman et al. (68). It was pathologically defined by Chamberlain et al. as necrosis without evidence of tumor and appears as increased contrast enhancement following radiotherapy (69). These imaging findings are consequences of disruption of the blood-brain barrier and represent proof of radiation's efficacy rather than progression or toxicity, and indeed correlate with longer OS (21, 70, 71). Diagnosis of pseudoprogression is made during follow-up when stabilization or improvement of clinical and radiographic findings is observed (21). Instead, true progression within the first 12 weeks after radiotherapy can only be defined if the majority of new enhancement is outside the radiation field or if there is pathological confirmation of PD (72).
The overall rate of NLs was considerably high (88.4%), with a greater rate of BR than RR (75.9% vs 17.7%). However, the rate of NLs was reported in only 61 patients, and the location of recurrence in only 50 patients overall. Therefore, this outcome needs to be corroborated by larger studies as well. Although it is known that recurrence of MGs appears mainly within 2 cm of the enhancing edge of the original tumor, Yoshikawa et al. reported the development of BR despite an initial high control rate (63.2% for GBM and 42.9% for AA controlled patients) (73). Although surgery plays a key role in GBM recurrence, above all for large volumes, CK radiosurgery has shown good results with a low rate of toxicity. Some aspects, though, remain unclear, such as radiation dose and fractionation.
A focus on the quality of life (QoL) is imperative given the poor prognosis and short life expectancy of patients with a diagnosis of rMGs. The QoL of MG patients is most often affected by the development of CMT/RT side effects, changes in physical functioning, and global health status (74). Unlike surgery and other SRS techniques, the CK treatment can be delivered without sedation and as an outpatient, which would help maximize the QoL. The primary and secondary end-points of our meta-analysis were based on outcomes reported by authors of the included studies. Therefore, we were unable to meta-analyze the effect of CK treatment on KPS, cognitive function, and QoL. However, Greenspoon et al. reported on the benefit of BEV in preventing toxicity and improving QoL of patients undergoing CK plus TMZ (20). Quality of Life after HFSRT for rMGs patients has been previously reported to remain stable for a median follow-up of 9 months (75). A subsequent study on high-dose reirradiation in selected patients with recurrent/progressive MGs found a stable QoL and improvement of activities of daily living (ADL) over a 1-year time period (76). Future studies should include KPS and QoL among their primary outcomes to evaluate the impact of CK treatment in life-limiting diseases such as rMGs.
Limitations
Despite the significant number of patients included in our study, this meta-analysis was based primarily on a few single-center case series and thus has limitations inherent to single-center retrospective studies. Based on the data abstracted from the articles and provided by the authors of the included studies, we could not ascertain the number of patients undergoing repeat surgery and the EOR at recurrence. The different ways in which each study provided the confidence intervals and/or standard deviations did not allow the use of the standard errors for some of the outcomes in the meta-analysis. In such cases, a standard error of 0 was adopted for each study. This led to an imperfect approximation of the meta-analyzed outcome and its confidence interval. While we were able to perform subgroup analyses based on WHO grade and CMT at recurrence as well as analyzing the relationship between TTR and OS, we were unable to perform more granular analyses stratifying outcomes by other relevant variables such as the histotype, the PTV, irradiation dose, the number of fractions, the patients' age and KPS. Moreover, we were unable to perform qualitative subgroup analyses of the CMT agents used and the time of CMT sessions with respect to CK treatment at recurrence. The assessment of LR was reported in a small cohort and differential diagnosis of lesions developed post-CK treatment can be misleading. Therefore, this outcome should be validated by more extensive analyses and future studies should focus on discrimination of lesions developed following CK treatment.
Nonetheless, to the best of our knowledge, this is the first meta-analysis providing helpful conclusions on the treatment of rMGs with the CK System and a potential start point for future studies.
CONCLUSIONS
Reirradiation of rMGs with the CK System has reasonable efficacy and provides encouraging survival rates. There is a better survival trend for WHO grade III lesions and for patients who undergo combined treatment with CK plus CMT. Treatment of rMGs with CK is a safe alternative, considering the low rates of complications. Larger and well-designed prospective studies are warranted to provide more accurate results.
|
2021-03-29T13:18:14.203Z
|
2021-03-29T00:00:00.000
|
{
"year": 2021,
"sha1": "7ecd68f7d550abbdd23ef99d55f90c0d12de0d61",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.652646/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7ecd68f7d550abbdd23ef99d55f90c0d12de0d61",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
22020340
|
pes2o/s2orc
|
v3-fos-license
|
Polyamine-deficient Neurospora crassa mutants and synthesis of cadaverine
The polyamine path of Neurospora crassa originates with the decarboxylation of ornithine to form putrescine (1,4-diaminobutane). Putrescine acquires one or two aminopropyl groups to form spermidine or spermine, respectively. We isolated an ornithine decarboxylase-deficient mutant and showed the mutation to be allelic with two previously isolated polyamine-requiring mutants. We here name the locus spe-1. The three spe-1 mutants form little or no polyamines and grow well on medium supplemented with putrescine, spermidine, or spermine. Cadaverine (1,5-diaminopentane), a putrescine analog, supports very slow growth of spe-1 mutants. An arginase-deficient mutant (aga) can be deprived of ornithine by growth in the presence of arginine, because arginine feedback inhibits ornithine synthesis. Like spe-1 cultures, the ornithine-deprived aga culture failed to make the normal polyamines. However, unlike spe-1 cultures, it had highly derepressed ornithine decarboxylase activity and contained cadaverine and aminopropylcadaverine (a spermidine analog), especially when lysine was added to cells. Moreover, the ornithine-deprived aga culture was capable of indefinite growth. It is likely that the continued growth is due to the presence of cadaverine and its derivatives and that ornithine decarboxylase is responsible for cadaverine synthesis from lysine. In keeping with this, an inefficient lysine decarboxylase activity (Km greater than 20 mM) was detectable in N. crassa. It varied in constant ratio with ornithine decarboxylase activity and was wholly absent in the spe-1 mutants.
Polyamine synthesis in Neurospora crassa ( Fig. 1) has been studied by several groups over the last decade. The sole origin of putrescine (1,4-diaminobutane) was identified as ornithine (8), and the enzyme responsible for putrescine formation, ornithine decarboxylase, was shown to be cytosolic (3,13,21). Two putrescinerequiring mutants of N. crassa were isolated by Deters et al. (Genetics 77:s16-s17, 1974), although there has been some doubt about their ornithine decarboxylase deficiency (14). The existence of spermine in N. crassa and other filamentous fungi has been debated recently (11,15), with general agreement presently that spermine does occur in low amounts in N. crassa (T. J. Paulus and R. H. Davis, Methods Enzymol., in press). A study of polyamine pool dynamics in N. crassa (16) suggests that polyamines are normally maintained in a narrow range of values (on a dry-weight basis) by a control mechanism for ornithine decarboxylase which responds with high efficiency to spermidine. It was also shown that little or no polyamine degradation takes place during exponential growth in N. crassa and that a substantial fraction of the polyamine pools may be bound (nondiffusible) within the cell (16,17).
In previous work and the work reported here, use has been made of the arginaseless (aga) mutation. Because strains carrying an aga mutation cannot form ornithine from arginine ( Fig. 1) and because arginine feedback inhibits ornithine biosynthesis, aga mutants grown with arginine develop severe ornithine and polyamine depletion. The resulting nutritional needs are met by addition of ornithine or polyamines (8).
In this report, we wish to extend and clarify our knowledge of polyamine metabolism in N. crassa. We confirm the previous work on genetic control of ornithine decarboxylase and show that ornithine decarboxylase is highly derepressed in the aga strain when depleted of ornithine. Finally, we explore the origin of cadaverine (1,5-diaminopentane), an analog of putrescine found in the ornithine-starved aga mutant, and its role in supporting the slow and indefinite growth of this strain despite the lack of the normal polyamines.
MATERIALS AND METHODS
Strains, media, and growth. The strains of N. crassa used were IC-1 and IC-2 (wild types) and strains carrying the aga, various spe-1, and inl (inositol) mutations (Table 1). In this paper, we rename the put-1 locus spe-1. The media used were Vogel medium N, corn meal agar without dextrose supplemented as appropriate for crosses, and sorbose plating and testing media for genetic analysis (7).
Growth was done in 10 ml of Vogel medium in 50-ml flasks for stationary cultures, in 1,000-ml aerated exponential cultures for enzyme and polyamine pool determination, and in 25 ml of solidified medium in 125-ml Erlenmeyer flasks to grow conidia used for inocula and for mutational work (7). Media were supplemented as noted in the text.
Genetic methods. Mutagenesis of N. crassa conidia of both wild-type and polyamine-depleted aga strains was done. Polyamine-depleted aga conidia were prepared by growing cultures in 1 mM arginine agar medium and limiting (0.2 mM) putrescine. Mutagenesis of conidia with UV light to 20 to 80% survival, followed by the filtration-enrichment technique for spermidine-requiring mutants, was done by standard methods (7). Crosses were made by standard methods (7); most media in which ascospores were plated were supplemented with 500 μg of spermidine per ml to counteract the very poor germination of spe ascospores.
Complementation tests were performed by combining drops of conidia in 1 ml of liquid minimal medium in culture tubes (13 by 100 mm) and incubating them for up to 10 days at room temperature.
Ornithine decarboxylase extraction and assay. Assays were performed on extracts of cells grown in exponential cultures. Mycelia collected by filtration were ground with sand in a cold mortar in 0.05 M K+-phosphate buffer (pH 7.1) containing 1 mM EDTA. Slurries were centrifuged at 18,500 × g, and the supernatants were desalted on Sephadex G-25 columns equilibrated with the extraction buffer. Extracts were frozen (-70°C) with little loss of activity. Enzyme assays were performed as described previously in reaction mixtures (0.3 ml) containing 100 mM K+ phosphate (pH 7.1), 1 mM EDTA, 1.7 mM β-mercaptoethanol, 50 μM pyridoxal phosphate, 2 mM L-ornithine, and sufficient L-[1-14C]ornithine to bring the specific radioactivity to 200 to 700 cpm/nmol. Incubations were carried out at 37°C for 10 to 90 min; specific activity is expressed as nanomoles per hour per milligram of protein. Lysine decarboxylase activity was measured in the above reaction mixture, in which L-[U-14C]lysine (5,800 cpm/nmole) was used in place of ornithine. Due account was taken of the fact that only one of the six lysine C atoms is released upon decarboxylation.
Polyamine determinations. Polyamine pools were determined in perchloric acid extracts of mycelia by the double-isotope derivative assay described previously (16; Paulus and Davis, in press). The method depends upon dilution of added 14C-labeled polyamine by polyamines in the extract. The quantitation of polyamines recovered from silica gel thin-layer chromatograms is done by use of [3H]dansyl-chloride of known specific activity. Dansyl-polyamines dissolved in benzene were spotted on Sil G plates (20 by 20 cm). For one-dimensional separations, the solvent used was ethyl acetate-cyclohexane (2:3, vol/vol) (16). For two-dimensional separations, the first dimension was run with the solvent above and the second dimension was run with chloroform-n-butyl alcohol-dioxane (48:1:1) (1). Two-dimensional separations are essential for dependable resolution of the dansyl derivatives of putrescine, cadaverine, spermidine, and aminopropylcadaverine (Fig. 2). No attempt was made to measure the amount of aminopropylcadaverine, but both this compound and cadaverine were further identified by rechromatographing their dansyl derivatives eluted from two-dimensional plates on additional one-dimensional plates with standards.
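As a worked example of how the assay counts translate into the specific-activity units quoted in this paper, the sketch below converts released 14CO2 counts into nmol of substrate decarboxylated per hour per mg of protein. All numbers are hypothetical; the six-fold correction for lysine reflects the fact, noted above, that only one of the six carbons of uniformly labeled lysine is released as CO2.

```python
# Illustrative conversion of released 14CO2 counts into nmol/h/mg protein.
# Inputs are made-up; only the unit logic follows the assay described above.
def specific_activity(cpm_released, sp_radioactivity_cpm_per_nmol,
                      minutes, mg_protein, label_correction=1.0):
    # nmol of substrate decarboxylated; label_correction accounts for the
    # fraction of the substrate's label that actually appears in the CO2
    nmol = cpm_released / sp_radioactivity_cpm_per_nmol * label_correction
    return nmol / (minutes / 60.0) / mg_protein

# L-[1-14C]ornithine: the labeled C1 is the carbon released, so no correction.
odc = specific_activity(4200, 350, minutes=30, mg_protein=0.5)

# L-[U-14C]lysine: only 1 of 6 labeled carbons is released, so multiply by 6.
ldc = specific_activity(580, 5800, minutes=60, mg_protein=0.5,
                        label_correction=6.0)

print(f"ornithine decarboxylase: {odc:.1f} nmol/h/mg")
print(f"lysine decarboxylase:    {ldc:.2f} nmol/h/mg")
```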
Materials. All chemicals were of reagent grade. Most biochemicals were obtained from Sigma Chemical Co. Triton X-100 was purchased from Research Products International, and Polygram Sil G thin-layer chromatograms were obtained from Brinkman Instruments, Inc. Isotopes were purchased from Amersham-Searle (L-[1-14C]ornithine) and New England Nuclear Corp. (polyamines and dansyl-chloride). Aminopropylcadaverine was a gift from David R. Morris, University of Washington, Seattle.
RESULTS
Mutants requiring polyamines. In the course of numerous attempts, only one polyamine-requiring strain was isolated from conidia of the aga strain IC-3, as described in Materials and Methods. It grew very slightly in minimal medium but grew well in medium supplemented with putrescine, spermidine, or spermine. By a cross to the wild type, strains IC-6 and IC-7, carrying the spe mutation TP-138 in otherwise wild-type backgrounds, were isolated for study.
The TP-138 mutation, like other polyamine-requiring mutations (14), caused poor ascospore germination even in supplemented media, particularly in allelic crosses. Despite this, crosses of strain IC-6 with wild type gave no indication that the putrescine-requiring phenotype was other than monogenic (Table 2). A cross of strains IC-6 or IC-7 with strains carrying previously isolated putrescine auxotrophic mutations (IC-4 or IC-5) yielded few spores, but none were prototrophic (Table 2). This is significant in view of the high degree of selection for prototrophs in germination. A cross of strain IC-6 to the inl-bearing IC-8 strain demonstrated linkage of TP-138 and inl (ca. 5 centimorgans [cM], based on Spe+ progeny). This was confirmed in a second cross, spe, aga x inl (Table 2), in which the map distance was approximately 4 cM. These map distances are somewhat shorter than those determined by McDougall et al. (14) (ca. 10 cM) and by us (Table 2) with spe-1 allele 462JM.
(Fig. 3 legend: (8). Growth of wild type was not influenced by the presence or absence of polyamines; only the data for spermidine are given. Strain IC-6 grown in minimal medium yielded a trace of growth at 25°C and none at 35°C. Strain IC-4 did not grow at all in minimal medium.)
A test of complementation among three spe-1 strains of the same mating type and carrying different alleles (TP-138, 462JM, and 521KW) was negative. We conclude on the basis of intercrosses (Table 2), complementation, and linkage data that the three mutations are allelic. The locus is hereby named spe-1 with the consent of K. J. McDougall. The symbol, derived from the end product(s) of the pathway, supplants put-1, which is based on the intermediate.
All three spe-1 mutants grew well with 1.0 mM putrescine, spermidine, or spermine at 25°C (Fig. 3). The same was true at 35°C for strains carrying the earlier mutations, 462JM (IC-4) and 521KW (IC-5). Strains carrying the TP-138 mutation (e.g., IC-6), however, grew much more poorly at 35°C even on 1 mM (Fig. 3) or 5 mM polyamines. This character cosegregated in small progenies with TP-138 (Table 2), but its genetic and physiological basis cannot be inferred until more spe-1 mutants are isolated from the parent strain.
Ornithine decarboxylase activity. Ornithine decarboxylase activity was measured in extracts of wild-type, an aga-bearing strain, and strains carrying the three spe-1 mutations grown exponentially under various nutritional conditions (Table 3). Wild-type N. crassa had an activity of about 20 U/mg of protein, a value that did not change significantly when the strain was grown in putrescine or arginine. This is in contrast to the finding of Sikora and McDougall (19) that ornithine decarboxylase activity of wild type was augmented fourfold in arginine-grown cultures.
The arginaseless strain grown in minimal medium had normal ornithine decarboxylase activity. When grown in the presence of arginine, it became ornithine- and polyamine-starved (see above) and grew more slowly. As expected on the basis of previous data (16, 19), the strain had about 60-fold normal enzyme activity (Table 3), in keeping with the postulated negative control of the enzyme by polyamines (16).
The three putrescine-requiring strains, whether grown on limiting or unlimiting putrescine, displayed no detectable ornithine decarboxylase activity in assays that would have measured less than 1% wild-type activity (Table 3). None of the spe-1 extracts altered the ornithine decarboxylase activity of wild-type or aga extracts when mixed with them.
A test of the extracts for lysine decarboxylase showed easily detectable activity in the arginine-grown aga strain, IC-3. Wild-type extracts had little activity, and spe-1 extracts had none (Table 3). The proportional variation of lysine and ornithine decarboxylase activities suggests that the activities are properties of the same enzyme, particularly in view of their simultaneous loss by mutation. When the extract of the arginine-grown aga strain was used, the Km value for ornithine was estimated to be 0.3 mM. The Km for lysine was too high to be measured (Fig. 4). At 5 mM substrate (saturating with respect to ornithine), lysine was decarboxylated at 1.2% the rate of ornithine (Fig. 4).
The spe-1 mutants grew somewhat initially in minimal medium, but the amount of polyamines found in such cultures can be accounted for by carry-over from the inoculum. Growth of spe-1 mutants on 1 mM putrescine restored their pools to the normal range. Growth on spermidine did not lead to the appearance of putrescine (Davis and Paulus, in press). Strains given exogenous spermidine did not contain much more spermidine than normal, as though polyamine uptake was inefficient or polyamine capacity was fixed. Of primary interest here are the polyamine pools of the aga strain during nutritional manipulations (see above). The polyamine pools of strain IC-3 (aga) were compared after growth in minimal medium or in arginine-supplemented medium. The data (Table 4) show that arginine caused a complete loss of the ornithine and putrescine pools, and the small amount of spermidine present could be accounted for by the spermidine of the inoculum. (Pools of wild-type cultures on arginine-supplemented medium are quite normal [Davis and Paulus, in press].) However, in contrast to the spe-1 cultures, cadaverine (0.6 nmol/mg [dry weight]) and a detectable amount of aminopropylcadaverine appeared in strains carrying aga. The amounts of these compounds were greater after lysine was added to such cultures. These observations suggest that the lack of ornithine and the extreme derepression of ornithine decarboxylase led to decarboxylation of lysine (endogenous or exogenous). The fact that starving spe-1 cultures did not show any trace of cadaverine or its aminopropyl derivative suggests that cadaverine synthesis requires ornithine decarboxylase.
(Table 4 footnotes: Pools of wild type, in nanomoles per milligram, are generally: ornithine, 30; putrescine, 1.1; and spermidine, 16.2 [Davis and Paulus, in press]. Carry-over from the inoculum accounts for spermidine in polyamine-starved cultures.)
It now might be asked whether cadaverine serves as a polyamine substitute in the growth of N. crassa. Indeed, cadaverine did stimulate growth of spe-1 mutants (Fig. 3). This was confirmed (Table 4) by the reduction of doubling time for exponential cultures. It is very likely that the indefinite growth of arginine-grown aga cultures (8) can be attributed to the endogenous synthesis of cadaverine, because they grew at a rate similar to that of spe-1 cultures supplemented with added cadaverine (Table 4). It is not known whether the analog(s) is less effective than its normal counterpart(s) on a molar basis, because no cultures are available with intracellular levels of analogs equal to those of normal polyamines.
DISCUSSION
Our major findings can be summarized as follows. (i) There is a gene controlling ornithine decarboxylase in N. crassa on the right arm of linkage group V, confirming the original work of McDougall. It is not known whether the gene is the structural gene for the enzyme. (ii) Polyamine-deprived mycelia have greatly enhanced (60- to 100-fold) ornithine decarboxylase activity, confirming expectations of earlier work (16). (iii) Cadaverine and its aminopropyl derivative appear only in cells having high ornithine decarboxylase activity and a lack of ornithine. Ornithine decarboxylase appears to be the sole enzyme of lysine decarboxylation in the preparations we have used.
There are several discrepancies between our results and those of McDougall (14, 19). First, as noted above, the map distance (4 to 5 cM) between our new mutation, TP-138, and inl was less than the distance between 462JM and inl found by McDougall. We have confirmed the higher recombination rate between mutation 462JM and inl and consider that this may reflect peculiarities of the chromosome segments carrying the 462JM and TP-138 alleles. Precedents for such differences are frequent in N. crassa (4). The fact that TP-138 and 462JM neither complement nor recombine defines them as alleles of the same locus, spe-1.
We have been unable to confirm the fourfold augmentation of ornithine decarboxylase in arginine-supplemented wild-type cultures. More important, we have been unable to confirm the rather high ornithine decarboxylase activities reported by McDougall (14) for his mutants (carrying alleles 462JM and 521KW). We find that they conform to his original report (Genetics 77:s16-s17, 1974) in having less than 0.5% wild-type activity. The discrepancy may be related to our finding that strains carrying allele 462JM quite often revert to a Spe+ phenotype; mass transfers in nonselective media may allow revertants to contaminate an initially Spe- culture.
Yeast mutants lacking enzymes of polyamine synthesis have been isolated by others (5, 6, 22). The spe-10 locus (originally designated spe-1) controls ornithine decarboxylase, but its action does not appear to be straightforward (6). Yeast polyamine mutants share with those of N. crassa the ability to grow considerably in unsupplemented media until the exhaustion of internal polyamines (22). Yeast mutants, moreover, are exquisitely sensitive to the addition of polyamines to the medium. As little as 10^-10 M of any polyamine elicits a growth response, and 10^-6 M spermidine suffices for optimal growth (6). In contrast, N. crassa mutants grow optimally only with 5 × 10^-4 M or more polyamine. The difference between species probably reflects differences in polyamine uptake capability rather than differences in demand or catabolism (16).
The synthesis of cadaverine and aminopropylcadaverine upon starvation of mutant cells for the normal polyamines was first described by Dion and Cohen (9), working with Escherichia coli. Using cells having only the ornithine decarboxylase route of polyamine synthesis, they imposed ornithine deprivation by feedback inhibiting ornithine formation with addition of arginine to the medium. (E. coli cannot make ornithine from arginine.) The derivation of cadaverine and aminopropylcadaverine from lysine was proven, and both lysine and cadaverine in the medium stimulated the growth of ornithine-deprived cells (9). Studies of spermidine analogs on growth and DNA replication of polyamine-requiring E. coli showed that aminopropylcadaverine and aminopropyl-1,6-diaminohexane were stimulatory (10, 12). The synthesis of cadaverine in E. coli is due largely to activity of a lysine decarboxylase (20). Strains lacking this enzyme make only traces of the compound. (It is not excluded that the ornithine decarboxylases of E. coli can weakly catalyze lysine decarboxylation.) Whereas E. coli strains unable to make any polyamine grow slowly, the fungal mutants eventually do not grow at all. In both N.
crassa and E. coli (20), however, cadaverine stimulates growth slightly when added to the medium. It is probable that the indefinite growth of the arginaseless strain of N. crassa when grown on arginine is due in large measure to its ability to synthesize cadaverine endogenously.
In animal cells, cadaverine occasionally appears in certain tissues. In a study of lysine decarboxylase activity in vitro, Pegg and McGill (18) could attribute all activity to a nonspecific action of ornithine decarboxylase with which the lysine decarboxylase activity copurified. (Results from cultured mammalian cells which suggested the presence of a specific lysine decarboxylase [1] were later shown to be attributable to mycoplasmas in the culture [2].) Thus, N. crassa fits the animal pattern for cadaverine synthesis, inasmuch as ornithine decarboxylase-deficient mutants cannot decarboxylate lysine.
Satisfaction of the elderly population attended in the Family Health Strategy of Santa Cruz, Rio Grande do Norte, Brazil
This study aimed to identify the satisfaction of older adults with the healthcare received in the Family Health Strategy of Santa Cruz-RN, Brazil. It is a descriptive, quantitative study, with a sample of 101 older adults registered in the Family Health Strategy of Santa Cruz-RN. Data were collected between May and September 2011 through structured interviews and analyzed using descriptive statistics. It was observed that 67.3% of the respondents were satisfied with the care received, and 72.3% were satisfied with the guidance received. Regarding the scheduling of consultations, waiting time, and time spent in consultations, many were not satisfied (62.4%, 54.5%, and 70.3% dissatisfaction, respectively). We conclude that, although most older adults are satisfied with the assistance, there is a need to improve care, especially in the flow of demand for consultations in the health centers.
INTRODUCTION
Population aging has caused important changes in the demographic profile, resulting from the rise in life expectancy and the reduction in child mortality. 2,3 With the tendency toward reduced population growth rates and the significant changes in the structure of the Brazilian population pyramid, in the sense of its aging, particularly at advanced ages, it is anticipated that this contingent should reach, by 2040, a total of 14.1 million, which represents 6.9% of the total population and 24.9% of the elderly population. 2 This information prompts reflection on the need for programs and public policies to meet these emerging demands, ensuring quality of life for the elderly population. Such policies must take into account that aging has multiple dimensions, which encompass social, political, cultural and economic issues.
In this context, in 2006, the National Health Policy for the Elderly (PNSPI) was elaborated, the purpose of which was the recovery, maintenance and promotion of older adults' autonomy and independence, directing collective and individual health measures to this end, taking into account the principles and guidelines of the Unified Health System (SUS). 4 In addition, it stands out that the network for provision of primary services, including the Family Health Strategy (FHS), must be equipped to provide older adults with quality care, with a view to the maintenance and improvement of their quality of life. 5 In this regard, it becomes relevant to analyze the assistance provided by the FHS team, as well as the functioning of the services provided by the Primary Healthcare Center (UBS, in Portuguese), from the perspective of the elderly service user, using the policy directed to this populational group as a base. Furthermore, it is necessary to characterize the population evaluated, such that future proposals for health actions may be coherent with this group's profile.
The rationale for undertaking this study in the context of the FHS is the need to evaluate the service user in relation to the health services used.
It is appropriate to mention the need to undertake studies on the acceptability/satisfaction related to the FHS, which could broaden, for example, the production of knowledge in the area of Nursing. These data could help in the development of new policies, and provide support for managers, professionals and service users such that the care for the older adults may be improved, seeking to contribute to the integrality of the health actions. 6 Although the FHS prioritizes actions focussing on the promotion, protection and recovery of the health of individuals and the family in all phases of life, what may be observed is that there is as yet no assistance directed at the attendance of the elderly population, as, in the place where the present study was undertaken, this population is attended only in consultations directed towards the people registered on the Hiperdia system. Given this fact, this study sought to respond to the following question: How satisfied is the elderly population attended in the FHS of the municipality of Santa Cruz, in the State of Rio Grande do Norte?
The study aimed to identify the satisfaction of the elderly population regarding the healthcare received in the Family Health Strategy, in the municipality of Santa Cruz, RN.
METHODS
This is a quantitative study of the descriptive type, undertaken in the municipality of Santa Cruz-RN, where the mean life expectancy is 71.1 years old. 7 In this locality, 4,725 persons are in the age range over 60 years old, representing 15.1% of the total population. 8 A large part of this population is attended by the 12 teams of the centers covered by the FHS, of which two are located in the rural zone and the remaining 10 in the urban zone. The participants in this study were older adults (persons aged 60 years old or over), registered in the FHS, and attended in five FHS centers located in the urban zone of the municipality of Santa Cruz-RN.
The elderly population registered in the FHS of the urban zone of the municipality of Santa Cruz corresponds to a total of 3,611 persons. 7 Taking this information into account, the sample was obtained using the Epi-Info 6.0 software, being calculated based on a previously undertaken study, which observed that 77.9% of the service users were satisfied with the advice provided for following in their homes. 6 Thus, with a confidence interval of 95%, a sample of 101 participants was obtained, which was stratified according to the five health centers existing in the urban zone. Both men and women were interviewed, respecting the same proportion. In this way, the subjects invited to participate in the research were those who sought attendance in these centers at the time that the researcher was present, with the sample thus being characterized as a non-random convenience sample.
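As a rough illustration of how such a figure arises, the sketch below applies the standard sample-size formula for a proportion with a finite population correction, which is the kind of calculation Epi-Info performs. The 8% margin of error is an assumption not stated in the article; with the stated inputs (expected proportion 77.9%, 95% confidence, population of 3,611) it reproduces the reported 101 participants.

import math

def sample_size(p, N, z=1.96, e=0.08):
    # Cochran's formula followed by a finite population correction.
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / N))

print(sample_size(p=0.779, N=3611))   # -> 101, matching the reported sample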
The study included older adults who were competent in terms of cognition and verbal communication, and who received any form of attendance in the FHS. The evaluation of the mental state was made subjectively, based on the observation of these older adults and on their participation in the context of the health center. Those who did not have the cognitive conditions to respond to the research instrument were excluded.
The project was referred to the Universidade Federal do Rio Grande do Norte Research Ethics Committee (REC), following authorization from the above-mentioned Family Health Centers (FHC) and from the Municipal Health Department, and respected the guidelines of the National Health Council's Resolution 196/1996, which deals with research with human beings. 9 Data collection was undertaken following the REC's approval (Opinion n. 157/2011) and the signing of the Terms of Free and Informed Consent by the participants.
Data collection took place between the months of May and November 2011, through interviews which were held individually in the FHC, avoiding clashes with the times of the consultations in the centers. The interviews took place in an appropriate location and respected the service users' willingness to respond to the questionnaires, which were filled out by the interviewer.
In order to assess the satisfaction of the older adults in the FHS, a structured interview script was used, based on Donabedian's dimensions of structure-process-outcome, which was elaborated and validated by previous studies. 6 In this study, the following were considered as independent variables: the socio-demographic aspects, the length of time that the older adult had attended the center, accessibility, use of other services, participation in programs, referral and counter-referral, undertaking tests, specialist staff, giving opinions regarding the services, obtaining the appointment card for attendance, and waiting time to be attended. The service user's satisfaction with the attendance was considered a dependent variable.
The dimension of 'structure' was evaluated taking into account aspects such as how long the service user had been attending the FHS, accessibility to the FHC, and their use of other services besides the FHC. The evaluation of the dimension 'process' included the FHS programs in which the service users participated, their participation in lectures, the referrals, and the complementary tests undertaken. The third and last stage, evaluation of the dimension 'outcome', included aspects related to: the service users' link with the team; the users' participation and opinion regarding community participation; the satisfaction with the information received, waiting times and arrangement of consultations; and the satisfaction with how the user is received by the team. 6 Some authors argue that any discussion relating to quality involves, implicitly or explicitly, the notion of evaluation. 10 Depending on the framework used, three dimensions must be considered for evaluating the quality of the health services: the technical performance, that is, the application of knowledge and technology in health, so as to maximize the benefits and reduce the risks; the interpersonal relationships, that is, the relationship with the patient; and the amenities, that is, the comfort and aesthetics of the installations and equipment in the place where the services are provided. 11 The analysis was undertaken using descriptive statistics, with the support of Excel 2007 and the Statistical Package for the Social Sciences (SPSS) 17.0 software, with the results being presented in the form of graphs and frequency tables. Measures of dispersion such as standard deviation and variance were only considered for the variable referring to the scores attributed to the professionals by the service users.
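As an illustration of the descriptive analysis described above, the sketch below builds a frequency table for a satisfaction item and computes the mean and standard deviation of the 0-10 scores attributed to one professional category. The response values are hypothetical placeholders, not the study's raw data.

from collections import Counter
from statistics import mean, stdev

satisfaction = ["yes", "more or less", "yes", "no", "yes", "yes", "more or less"]
nurse_scores = [10, 9, 9, 8, 10, 9, 10]   # hypothetical 0-10 scores for the nurse

# Frequency/percentage table, as reported for each satisfaction item
for answer, n in Counter(satisfaction).items():
    print(f"{answer}: {n} ({100 * n / len(satisfaction):.1f}%)")

# Mean and dispersion of the scores attributed to a professional
print(f"nurse: mean {mean(nurse_scores):.1f}, sd {stdev(nurse_scores):.1f}")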
RESULTS
The older adults who participated in this study had a mean age of 72.4 years old, distributed in accordance with the following age ranges: 60 to 65 years old (20.8%), 66 to 70 years old (24.8%), 71 to 75 years old (17.8%), 76 to 80 years old (9.9%), 81 to 85 years old (15.8%), 86 to 90 years old (7.9%) and over 90 years old (3.0%). We emphasize that the largest group was in the age range between 66 and 70 years old (24.8%); however, if we add the values of the age ranges from 80 years old and over, these make up an accumulated frequency of 26.7%.
In the present study, the percentage of 50% women and 50% men was not considered a significant result, bearing in mind that, initially, it was stipulated that the interviews would respect the same proportion between the male and female sexes.
Regarding the level of education, we found that the majority had not finished junior high school (52.4%), followed by 40.6% who were illiterate, 5% with junior high school complete, 2% with senior high school complete, and no participants in the category of senior high school incomplete. In relation to marital status, 64.4% of the interviewees were married or lived with a partner, while 20.8% of the older adults who participated in the study were widowed, 7.9% single, and 6.9% divorced.
In relation to family income, 4% had an income below one minimum salary (MS); 79.2% of the older adults received between one and three MS; 12.8%, between three and five MS; 1.0%, from five to seven MS; and 3% had an income above seven MS. The main source of income was the retirement pension (90%); 2% had income from employment; 2%, from social programs; 1%, from the spouse; while 3% mentioned other, unspecified, sources of income.
It was also ascertained that 45.5% of the older adults shared their home with another three or four persons, followed by families with one or two members (34.7%), five or six members (15.8%), seven or eight members (3%), and only one percent lived with another nine or 10 family members. This finding evidences the possibility of other members of the family group depending financially on the older adult.
Evaluation of the structure
It was ascertained that 95% of the participants had been using the FHC services for more than two years; 3%, for between one and two years; and 2%, for between seven and 12 months. Regarding characteristics of accessibility, the majority of the interviewees spent between zero and 15 minutes traveling from their residence to the health center (64.4%) and, in general, considered the route traveled to be appropriate (42.6%).
In relation to the use of other health services, 45.3% used a regional hospital located in the municipality studied, 23.4% sought attendance in the institutions in the state capital, 27% used other services and 13% did not seek any other service for attendance; the answers could include more than one alternative.
Evaluation of the process
In relation to participation in health programs in the FHS, 69.3% of the participants were attended by the "Hiperdia program", 28.7% participated in other programs (women's health, collection of cytological tests) and only 2% of the interviewees did not participate in any of the programs.
Another important piece of data in the evaluation of the process is the service users' participation in health education activities. In the present study, 37% of the older adults confirmed that they participated in lectures in the health center, while 63% did not participate. Table 1 shows the actions of referral and counter-referral, that is, referrals to specialist doctors, hospital attendance, or the requesting of tests on the part of the FHS. Among those who were referred by the FHS for attendance in specialized services, 34 older adults (54.83%) were male and 28 (45.17%) were female. In relation to referrals for attendance in hospital institutions, 21 were male (58.33%) and 15 were female (41.67%). Regarding the undertaking of complementary tests requested by FHS professionals, it was observed that the distribution was the same, that is, of the 90 participants who confirmed that they had undertaken such tests, 50% were male and 50% female.
Evaluation of the outcome
When questioned whether or not they knew the FHS professionals, the member of the team mentioned most was the Community Health Worker (CHW), followed by the nurse, doctor, auxiliary nurse and, lastly, the dentist (Table 2). Another point discussed in the evaluation of the process is the importance of knowing the service user's opinion on the assistance received. In relation to this aspect, 83.2% of the interviewees said that they had never given their opinion on the activities undertaken in the FHS, and only 16.8% had already had the opportunity to express it.
When questioned as to whether they would like to participate in the process of evaluating the activities undertaken in the center, 63.4% responded that they agreed to participate, and 36.6% responded that they were totally in agreement. None of the participants responded that they disagreed or totally disagreed.
In relation to satisfaction regarding the information received on their illness, 74.3% said that they were satisfied, 5.9% responded that they were not, 17.8% responded that they were more or less satisfied, and 2% did not answer. When they spoke specifically about information on their drug treatment, 74.2% responded that they were satisfied, 3% were not satisfied, 20.8% responded that they were more or less satisfied, and 2% did not answer.
Regarding satisfaction in relation to the information or advice provided by the professional, 72.2% of the interviewees said that they were satisfied, 3% responded that they were not, 22.8% responded that they were more or less satisfied, and 2% did not answer.
Regarding satisfaction with how the appointment card for the consultation in the center was obtained, 62.3% of the older adults stated that they were not satisfied, 21.8% responded that they were, 14.9% responded that they were more or less satisfied, and 1% did not provide an opinion. In relation to the waiting time for attendance in the centers, 54.4% said that they were not satisfied, while 13.9% showed satisfaction, 30.7% responded that they were more or less satisfied, and 1% did not answer.
As well as not being satisfied with the waiting time for attendance, many older adults were dissatisfied with the time spent in the consultation (70.3%), followed by 23% who were partially satisfied, and only 5% who answered that they were satisfied with the length of the attendance; in 90% of cases, they argued that the waiting time to be seen in the unit should be below 30 minutes. 12 Another aspect evaluated by the instrument is the attribution of a score varying from 0 to 10 points by the service user for evaluating the professional who attends him or her. In this study, the highest mean score (9.6) was attributed to the dentist, the professional previously indicated as the least known by the service users, a relationship which was not within the scope of this study. The CHW, on the other hand, received, on average, a score of 9.5; both the nurse and the doctor received a mean of 9.3; the center's receptionist obtained a mean of 9.2; and the mean for the auxiliary nurse or nursing technician was 8.9.
One fairly significant finding in the analysis of these scores is the fact that the lowest mean (8.9) was attributed to the auxiliary nurse/nursing technician. Due to the organization of the attendance in the FHS, these professionals are more vulnerable to criticism, as, in the context of the municipality where the study was undertaken, it is generally they who are responsible for handing over appointment cards, scheduling consultations, handing over medications and organizing spontaneous demand, tasks which could be said to correspond more to the role of the receptionist.
Finally, Figure 1 shows the study participants' satisfaction with the health assistance provided by the FHS team, in which 67.3% responded that "yes", they were satisfied with the service, followed by 26.7% who answered that they were "more or less" satisfied, 4.0% who showed dissatisfaction, and 2.0% who did not answer the question.
DISCUSSION
In the present study, the mean age followed the Brazilian tendency, where the proportion of the population aged 80 years old or over is increasing at an accelerated speed, this being the population segment which grows most. From 170,700 persons in 1940, the "oldest" contingent reached 2.9 million in 2010, which represents 14.2% of the elderly population in 2010, and 1.5% of the total population. 2 Studies report that this age range represents a delicate point in the elderly population, as, the more advanced the age, the greater the risk of falling ill and presenting a higher degree of dependence. 13 In this study, the older adults showed satisfaction in relation to the time spent traveling to the health center, and a large proportion had been using this service for over two years. Access is an obligatory requisite for primary care to become a gateway to the health system, it being necessary to eliminate financial, geographical, organizational and cultural barriers. 12 In other studies, it was seen that the expansion of the family health teams was used for broadening the population's access to the health services, and for organizing the gateway to the system, although there are still problems in this regard. 14 The service users seek the health service in situations of acute suffering and, when the primary care center does not respond to their needs, resort to the Emergency Room services, taking up their time with demands considered to be "simple", which could have been resolved at the level of primary care. 15 In the local context, the people tend to seek emergency attendance in a small hospital* located in the municipality of Santa Cruz-RN, which is not yet organized to assist those individuals who need care of a greater level of complexity, and which also attends the demand from residents in neighboring municipalities.
It is worth noting that, although the majority of the interviewees were registered on the "Hiperdia program", this program is not exclusively for older adults. The "Hiperdia program" was created through Ministerial Ordinance N. 16/GM, of 3rd January 2002, as the Plan for the Reorganization of Care for Arterial Hypertension and Diabetes Mellitus, with the objective of establishing goals and guidelines broadening actions of prevention, diagnosis, treatment and control of these illnesses, through reorganizing the healthcare work of the centers in the SUS primary care network. 15 It is important to emphasize that the illnesses covered by this program are considered Chronic Noncommunicable Diseases (NCD), which tend to be found more commonly in adults who enter old age with these conditions, which partly explains the large number of people aged over 60 years old attended by this program. It was observed that the older adults who were not registered on the "Hiperdia program", and who answered that they participated in another program, were actually either receiving attendance in consultations open to spontaneous demand, or were in the programs for attention to women's health.
In this regard, it seems that it is necessary to better organize the attendance in the programs provided in primary care, bearing in mind that some of the consequences of the spontaneous demand can include the forming of lines of people waiting to be attended, service users' dissatisfaction with the waiting times, the overburdening of the professionals, the non-prioritization of some consultations, and low capacity to resolve problems.
In accordance with this idea, in a study on spontaneous demand undertaken in a FHC in the state of Minas Gerais, it was observed that receiving a large clientele in the centers has been an arduous and complex task, as a guarantee of the quality of the attendance for the service users, on the majority of occasions, is not achieved. 16 In this context, it is evidenced that one needs a plan of action which attends to emerging needs, so as to reorganize the actions provided to the elderly population.
* A 'small' hospital in Brazil is one with up to 50 beds. Translator's note.
Education and health are among the principal characteristics of the FHS; educational processes must be undertaken through groups geared towards the recovery of self-esteem, the exchanging of experiences, and mutual support, as well as the improvement of self-care, which is fundamental in ensuring the independence of the older adult. 18 This group needs a specific approach, respecting its particular characteristics, with the aim of ensuring the best understanding of the service users and improving the health care, it being important also to consider the older adults' level of education, so as to select the most appropriate strategies for working with the health education actions.
In relation to referrals to specialist services, it is believed that the resolutive capacity and the continuity of the care depend not only on the relationship established between the service user and the health professionals, but, above all, on the use of primary care as a gateway to the SUS. 12 Nevertheless, the results achieved in this study do not allow us to make inferences regarding the resolutive capacity of primary care, or to identify the reasons for the referrals, which indicates the need for future studies able to respond to these gaps.
Access to requesting and undertaking laboratory tests seems not to be a problem in the centers studied, as the collection of samples tends to be undertaken in the center itself, and these are sent for analysis in the laboratory of the regional hospital located in the municipality. As for the other tests requested, as well as some referrals to specialist doctors, it was observed that, for some participants, there is a large delay both in arranging them and in delivering the results.
In this study, the professionals who were remembered most by the older adults were the CHW, followed by the nurses, doctors, auxiliary nurses/nursing technicians, and dentists. Thus, it is believed that, as they are close to the community and the health team, the CHW can facilitate the creation of links. However, in the attempt to give positive results to the population's demands, these workers often end up taking on activities which go beyond the actions set out in the norms of the Ministry of Health, and thus end up breaking the organization of the work. 18 Regarding the attention received from the nurse, the results corroborate a previous study, in which it was observed that some older adults seem not to understand the functions of this member of the team, as they tend to represent her as the professional who only checks blood pressure or who takes the place of the doctor when the latter cannot undertake the visit. Thus, on most occasions, they confuse the nurse with the nursing technician or with the CHW. It is understood that both the nurse and the other professionals of the FHS need to undertake more actions in patients' homes, principally with attendance which is more focused on the older adult. 19 In this study, the other professionals obtained lower percentages, which may be attributed to the fact that the majority of them had not worked for very long in these health centers, bearing in mind that many were contracted in the last public examination for filling vacancies in the FHS of the municipality of Santa Cruz, at the beginning of 2010.
In relation to being able to express one's personal opinion about the functioning of the health services, the present study's findings showed a discrepancy between thinking and acting, as, although the majority of the older adults agreed that they should evaluate the activities undertaken in the FHS, few mentioned having taken this opportunity. This fact may be attributed to the historical exclusion of these people from decision-making, causing many service users to feel afraid of giving their opinion and being misunderstood, hindering awareness of the importance of their participation. 20 The evaluations of satisfaction of the users of the public services can act as an instrument for enabling the service users to be heard, creating opportunities for expression in which the service users can monitor and control the activities of the public health services. 21 Another aspect analyzed was the provision of information and advice about care to be undertaken in the home by the service user herself and/or her family members, and other actions related to health education. Work in health with the elderly population depends on the training and involvement of the health professionals in the development of activities which seek to prevent losses of the elderly population's functional capacity, as well as to recover and maintain it, and to control factors which interfere in this population's health. 22 The older adults interviewed expressed dissatisfaction regarding how consultations are scheduled, the waiting time in the health center, and the duration of each consultation. In the context studied, the teams tend to reserve some specific days of the week for scheduling consultations with the nurses, doctors and dentists, with approximately 10 to 20 spaces being made available per period allocated for consultations. In daily practice, it is common to meet service users who complain about the time spent in the waiting rooms, and that the duration of each consultation does not always seem to be proportional to the time spent waiting, which may be a factor related to the service user's dissatisfaction with the services offered in the health center. However, as the object of this study was not to establish relationships between the causes of dissatisfaction and these aspects, this approach is suggested for future studies.
In studies undertaken in other contexts, the service users also expressed a high degree of satisfaction in relation to their relationship with the professionals regarding respect, consideration, listening, understanding, reception and kindness on the part of the team. 23 In this study, the participants' dissatisfaction with some situations experienced in the FHS is alarming; however, when one refers to the assistance provided by the professionals, the responses had a positive tendency, with the mean score attributed by the participants being at least 8.9 (in this case, for the auxiliary nurse/nursing technician).
The recent National Research for Sample of Domiciles (NRSD) debunks the idea of the SUS service users' dissatisfaction, bearing in mind that, in evaluating the attendance received, a significant number of elderly Brazilians approved of the service provided, and only 2.9%, in the year of 1998, and 3%, in 2003, found its functioning bad or very bad. 24
FINAL CONSIDERATIONS
It is believed that the undertaking of studies on the satisfaction of the service users in a health service can be an important instrument in the search for better conditions of attendance to the health needs of the older adult in primary care.
Generally speaking, the elderly population studied reported satisfaction in relation to the care received regarding information on any illness diagnosed, drug treatment, and guidance provided for undertaking health care in the home, and attributed satisfactory scores to all the professionals of the FHS team. However, many older adults mentioned dissatisfaction with the scheduling of consultations, as well as the short period of time they spend in these, which tend to take place after a long time spent waiting in line.
In the present study, many older adults stated that they had never given an opinion on the activities undertaken in the FHS, which places even greater emphasis on the importance of investigating the opinion of the older adult in relation to the care received in this service, which is configured as a preferential gateway to the SUS for this clientele. Care for the older adult continues to be a challenge for managers and professionals, as it involves a group of persons who experience situations specific to this phase of life, which, when not treated, can culminate in loss of independence and autonomy.
The FHS is a privileged space for comprehensive care to the health of the older adult, as its proximity to the community and to home care make it possible to work in a contextualized way in the reality experienced by the older adult within the family.As a result, one of the major challenges for the workers in the FHS is the need to review their practice, in the light of a new health context, in which the chronic conditions become more frequent and require continuous and comprehensive care, it being indispensable to rethink the work processes, as well as to adopt methodologies, instruments and knowledge which meet the current demand.
The study also indicated that the professionals most recognized by the older adults are the CHW and the nurses; in this regard, it is believed necessary to study these professionals' profile and work, including, among other aspects, the relationships which they construct with the older adults, family members and the community. Furthermore, it is observed that the space for attending the elderly service users in family health centers tends to be the "Hiperdia program" consultations, which, in the majority of cases, are undertaken by the nurse. For this reason, we emphasize the importance of the work of the nurse in the FHS, bearing in mind that this professional must be able to identify the social and health needs of the population under their responsibility, as well as to intervene in the health-illness process of individuals, the family and the collectivity. Thus, recognition on the part of the managers of the need to encourage and promote the training of the professionals involved in the care for the older adult becomes essential.
On the other hand, the elderly population must also be informed about its rights, and encouraged to express its wishes and needs, which can be a part of broadening its participation in the functioning of the services, with a view to establishing care which values the satisfaction of the elderly service user.
Figure 1 - Distribution of the older adults by satisfaction with the health care provided by the Family Health Strategy team. Primary Healthcare Centers, Santa Cruz-RN, 2011
Table 2 - Distribution of the older adults by knowledge of the members of the Family Health Strategy (FHS) team. Primary Healthcare Centers, Santa Cruz-RN, 2011. (Table data not reproduced; column headers: link with the professionals of the FHS, n, %.)
Time-Dependent Regulation of Apoptosis by Aen and Bax in Response to 2-Aminoanthracene Dietary Consumption
Background/Objective: The modulation of the toxic effects of 2‑aminoanthracene (2AA) on the liver by apoptosis was investigated. Fisher‑344 (F344) rats were exposed to various concentrations of 2AA for 14 and 28 days. The arylamine 2AA is an aromatic hydrocarbon employed in manufacturing chemicals, dyes, inks, and it is also a curing agent in epoxy resins and polyurethanes. 2AA has been detected in tobacco smoke and cooked foods. Methods: Analysis of total messenger ribonucleic acid (mRNA) extracts from liver for apoptosis‑related gene expression changes in apoptosis enhancing nuclease (AEN), Bcl2‑associated X protein (BAX), CASP3, Jun proto‑oncogene (JUN), murine double minute‑2 p53 binding protein homolog (MDM2), tumor protein p53 (p53), and GAPDH genes by quantitative real‑time polymerase chain reaction (qRT‑PCR) was coupled with terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) and caspase‑3 (Casp3) activity assays. Results: Specific apoptosis staining result does not seem to show significant difference between control and treated animals. This may be due to freeze‑thaw artifacts observed in the liver samples. However, there appears to be a greater level of apoptosis in medium‑ and high‑dose (MD and HD) 2AA treated animals. Analyses of apoptosis‑related genes seem to show AEN and BAX as the main targets in the induction of apoptosis in response to 2AA exposure, though p53, MDM2, and JUN may play supporting roles. Conclusion: Dose‑dependent increases in mRNA expression were observed in all genes except Casp3. BAX was very highly expressed in the HD rats belonging to the 2‑week exposure group. This trend was not observed in the animals treated for 4 weeks. Instead, AEN was rather very highly expressed in the liver of the MD animals that were treated with 2AA for 28 days.
INTRODUCTION
2-Aminoanthracene (2AA), also called anthramine, is a member of a class of compounds referred to as arylamines; 2AA and its associated analogs seem to be more harmful, have greater environmental concentration, and present the greatest exposure possibilities. [1] 2AA, like other arylamine compounds, typically undergoes metabolic activation in the liver primarily to exert its toxic effects. A recent report also indicated that 2AA was activated by P450 2A13 and 2A6 (as well as P450 1B1) in Salmonella typhimurium species. [3] This initial activation is followed by catalysis, then by N-acetyltransferases (NAT), and finally by sulfotransferases to yield highly reactive intermediates. These electrophilic reactive metabolites form deoxyribonucleic acid (DNA) adducts, thus affecting transcription and replication. [4][5][6][7] Global gene expression patterns in the liver [2] and pancreas [8] of Fisher-344 (F344) rats in response to 2AA dietary consumption were previously examined. Differentially expressed transcripts in the pancreas showed proteins to be involved in energy metabolism and protein digestion. The rest of the expressed genes revealed messenger ribonucleic acids (mRNAs) involved in pancreatitis and pancreatic cancer. [8,9] A follow-up study evaluated the differentially expressed genes in hepatic tissues in F344 rats due to 2AA toxicity. Results revealed highly expressed transcripts observed to be actively involved in such processes as DNA repair, multidrug resistance, cell-cell adhesion, growth regulation-tumor suppression, tissue development and differentiation, cell cycle regulation, apoptosis, and tissue senescence. Further analysis via an association bioinformatics tool confirmed that biological processes and molecular functions related to apoptosis and apoptotic processes are important aspects of 2AA toxicity responses. [2] Apoptosis is the term used to describe the organized disintegration of the cell. This process is characterized by membrane blebbing, cell shrinkage, and chromatin condensation. DNA fragmentation also occurs during apoptosis. Apoptosis is believed to play essential roles in biological processes such as embryogenesis, ageing, and many diseases including cancer. The apoptotic process, which includes a sequence of events, commences with initiation and then moves on to gene regulation and effector mechanisms. Initiators are events that deprive the cell of survival factors such as cytokines and, in the process, activate death receptors. As a consequence of these stimuli, several varied pathways associated with specific gene expression patterns can then be generated. Proteases, named caspases, are reported to be the main apoptotic effectors. [10][11][12][13][14] The current investigation examines the role of apoptosis in mediating the toxicity effects of 2AA more completely. This study is a follow-up to our previous investigation that revealed apoptosis and apoptotic events as important in mediating 2AA toxicity in the liver. We report the immunohistochemical and targeted gene expression quantification responses of cellular and molecular markers of apoptosis and apoptosis-related regulatory genes in control and exposed individuals.
Experimental design
Male F344 rats were fed a 2AA-contaminated diet. Twenty-four 3-4 week post-weaning animals (Harlan Laboratories, Madison WI) were assigned to dose regimens of 0 mg/kg-diet (control, C), 50 mg/kg-diet (low dose, LD), 75 mg/kg-diet (medium dose, MD), and 100 mg/kg-diet (high dose, HD) 2AA for 14 or 28 days. Each dose regimen had at least three animals. The current doses were selected based on the findings of a previous study. [15] A pilot study by Boudreau et al. [15] showed that these chosen concentrations of 2AA were nonlethal after chronic administration of the test compound. The goal of the current investigation was to provide some mechanistic details on the toxicity of 2AA, that is, to examine the mechanism of action of 2AA toxicity. The duration of exposure (14 and 28 days), though short, provides the opportunity to do that. It also enables us to attempt to replace long-term studies with short-term bioassays.
The animals were housed in individual cages at the Southern Illinois University Animal Facility with a 12-h/12-h light/dark cycle. The rats had access to distilled water ad libitum. The animals were handled according to the guidelines from the National Institutes of Health (NIH) and the Southern Illinois University Guide for Care and Use of Laboratory Animals (IACUC protocol #09-039). At the end of the 14- or 28-day treatment period, the F344 rats were euthanized with CO2 and blood was collected via cardiac puncture. The livers of the rats, together with other major organs, were excised, weighed, and immediately frozen in liquid nitrogen.
Diet preparation
The 2AA (CAS# 613-13-8) was purchased from Sigma-Aldrich (St Louis, MO, USA) at 98% purity and used without further purification. The appropriate amount of the 2AA was dissolved in 1 L molecular grade ethyl alcohol. This was followed by the immersion of 1 kg diet (PMI Nutrition International, LLC, Brentwood, MO, USA) into the ethyl alcohol containing the 2AA and evaporated under the hood with periodic mixing to ensure homogeneity. The prepared diet was stored in the freezer and protected from light until fed to the animals.
Total RNA extraction
Total RNA was extracted from the rat livers using RNeasy Mini Kit (Qiagen, Valencia, CA). [15] Approximately 30 mg liver samples were homogenized in tissue lysis buffer to denature and inactivate ribonucleases (RNases). The RNA was then allowed to bind to a silica-gel membrane and finally eluted with RNase-free water. Total RNA quality and concentration were determined via electrophoretic gels and Experion TM RNA StdSens analysis kit according to the manufacturer's specifications (Bio-Rad Laboratories Inc, Hercules, CA, USA). [17]
Apoptosis assay
The presence of apoptotic cells was detected with the TUNEL Apoptosis Detection biotin-labeled POD Kit from GenScript (L00297) [18] according to the manufacturer's protocol. The frozen liver samples were quickly thawed and post-fixed overnight in 10% neutral buffered formalin (NBF). The samples were paraffin embedded and sectioned at 5 µm onto glass slides. Briefly, after deparaffinization of the 5 µm formalin-fixed paraffin-embedded (FFPE) sections, the samples were washed with phosphate buffered saline (PBS) and incubated with 0.02 mg/ml proteinase K at 37°C for 20 min. Cells were blocked with 3% hydrogen peroxide in methanol for 10 min at room temperature (RT). The sections were incubated with a TUNEL labeling mixture of terminal deoxytransferase (TdT) and biotinylated dUTP at 37°C for 1 h, followed by two washes in PBS for 5 min each. Streptavidin-horseradish peroxidase (HRP) was bound to the biotin molecules for 30 min at 37°C and the apoptotic cells were visualized with a 3,3'-diaminobenzidine (DAB) solution. After a final wash with PBS, the slides were cover-slipped and visualized. For control staining, the positive control section was treated for 15 min at 37°C with DNase I (25 U/µl) before incubation with the TUNEL labeling mix, while the negative control section was incubated in the TUNEL labeling mix without TdT.
Caspase activity via Caspase-Glo 3/7 assays
Casp3 activity was measured in liver tissues using a Caspase-Glo assay kit [19,20] and a modified protocol. [21] The assay involves the cleaving of a proluminescent substrate containing DEVD (sequences are given in single-letter amino acid code) by caspase-3. This is followed by the release of the luciferase substrate (aminoluciferin), which leads to the luciferase reaction and the generation of a luminescent signal. To assess the Casp3 activity, the liver samples were homogenized in hypotonic buffer for cytosolic extracts. The buffer contained 25 mM HEPES, pH 7.5, 5 mM MgCl2, 1 mM EGTA, 1 mM Pefablock, and 1 µg/mL each of pepstatin, leupeptin, and aprotinin. The homogenized samples were then centrifuged (15 min, 13,000 rpm, 4°C) and the protein concentration of the supernatant was adjusted to 1 mg/mL using isolation buffer. The samples were stored at −80°C. Finally, a 1:1 ratio of reagents and 10 µg/mL cytosolic protein were mixed in a white-walled 96-well plate and incubated at room temperature for 1 h. [19][20][21] The Casp3 activity was measured in triplicate via luminescence (relative light units with blanks subtracted) using a plate-reading Synergy 2 SL luminescence microplate reader according to the manufacturer's guidelines (BioTek, Winooski, VT). Significant differences in caspase activity between the control and the treated groups were evaluated using one-way analysis of variance (ANOVA).
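As an illustration of the group comparison described above, the sketch below runs a one-way ANOVA on blank-subtracted luminescence readings for the four dose groups. The triplicate readings are hypothetical placeholders, and scipy's f_oneway is used here as one common way to carry out the test; the article does not specify the software used.

from scipy.stats import f_oneway

# Hypothetical triplicate luminescence readings (relative light units, blanks subtracted)
control     = [1200, 1150, 1230]
low_dose    = [1300, 1280, 1350]
medium_dose = [1500, 1480, 1550]
high_dose   = [2100, 2050, 2200]

f_stat, p_value = f_oneway(control, low_dose, medium_dose, high_dose)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant difference in caspase activity among groups,
# typically followed by post hoc pairwise comparisons.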
Quantitative real-time polymerase chain reaction (qRT-PCR)
The expression of key gene transcripts reported to be important in mediating apoptotic processes was examined via quantitative RT-PCR. Genes whose expression levels were quantified included apoptosis enhancing nuclease (AEN), Bcl2-associated X protein (BAX), tumor protein p53 (p53), murine double minute-2 p53 binding protein homolog (MDM2), Jun proto-oncogene (JUN), Casp3, and GAPDH as a housekeeping gene. FASTA mRNA sequences of these transcripts were obtained for Rattus norvegicus from the National Center for Biotechnology Information (NCBI) database. Forward and reverse primers for the genes were then generated using NCBI Primer-BLAST. The primer sequences are shown in Table 1. The primers were purchased from Integrated DNA Technologies Inc. (IDT), Coralville, IA, USA.
An iScript cDNA synthesis kit (Bio-Rad Laboratories Inc.) [17] was employed to synthesize complementary DNAs (cDNAs) from the total RNA extracts of the rat livers. The cDNAs were then combined with primers and SsoFast EvaGreen Supermix (Bio-Rad Laboratories Inc.) for the qPCR reaction. The product was quantified on a Bio-Rad CFX96 instrument (Bio-Rad Laboratories Inc.) following the manufacturer's guidelines. The normalized gene expression values were determined via the relative delta-delta Ct quantification method.
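For clarity, the sketch below works through the delta-delta Ct calculation for one target gene normalized to GAPDH, with the control group as the calibrator. The Ct values are hypothetical placeholders, not measurements from this study.

# Relative quantification by the delta-delta Ct method:
# fold change = 2^-[(Ct_target - Ct_GAPDH)_treated - (Ct_target - Ct_GAPDH)_control]

def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    delta_ct_treated = ct_target_treated - ct_ref_treated   # normalize to GAPDH
    delta_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(delta_ct_treated - delta_ct_control)      # compare to control group

# Hypothetical mean Ct values for BAX and GAPDH in treated vs. control liver
print(f"BAX fold change: {fold_change(22.0, 18.0, 25.5, 18.5):.1f}x")   # -> 8.0x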
Detection of apoptosis using TUNEL assay
The modulating effect of 2AA on apoptosis and apoptosis-related cellular processes in the liver was examined. The TUNEL universal apoptosis detection system showed apoptotic nuclei stained dark brown and nonapoptotic nuclei stained blue [Figure 1]. We think the freezing and thawing of the liver samples may have had an effect on the staining of the nuclei. Nevertheless, the livers of the treated rats in both treatment groups seem to be undergoing apoptosis to a greater extent relative to the controls and the 50 mg/kg (LD) treated groups. This is noted by the number of dark brown stained nuclei seen in Figure 1. Also, Figure 1b (2 weeks LD) and 1f (4 weeks LD) showed more blue (nonapoptotic) nuclei than the other panels.
Quantification of apoptosis via measuring Casp3 activation
Liver apoptosis was evaluated by the Casp3 activity assay. The Casp3 activity in whole liver homogenates was measured with a luminogenic Casp3 substrate optimized for Casp3 activity. The mean resulting luminescence is proportional to the amount of caspase activity generated [Figure 2a]. The HD animals in the 14- and 28-day exposure periods showed significantly greater hepatic Casp3 activity in comparison with the control rats. The LD and MD animals in the 2-week exposure group had slightly, but not significantly, higher caspase activity relative to the control group animals. Within the 4-week treated group, Casp3 activity was significantly greater for the MD and HD animals. There was no correlation noted between the hepatic caspase activity as measured by the luminescence assay and the differentially expressed caspase mRNA gene expression fold analysis in the liver [Figure 2b].
Differential mRNA expression of selected apoptosis-related gene transcripts
The expression of six mRNAs that play essential roles in modulating apoptotic events in the cell was analyzed. With respect to the genes studied, the treated groups showed greater gene expression levels than the control animals for both periods [Figure 3]. The HD (100 mg/kg-diet) and MD (75 mg/kg-diet) animals showed greater gene expression values for all transcripts. On the contrary, Casp3 was the least expressed in all treatment groups. The expression of Casp3 was lower in the treatment groups than in the control animals. Aside from the AEN gene, the 2-week exposed rats had more highly differentially expressed genes than the 4-week treated rats. BAX had the greatest mRNA expression in the 2-week group. Similarly, AEN was the most expressed gene in the 28-day set. MDM2 and c-JUN were also highly expressed in the 2-week treated animals. AEN was at least two-fold changed in LD, MD, and HD samples for both treatment periods. BAX was at least two-fold expressed in LD, MD, and HD for the 4-week treatment period; it was only at least two-fold changed in the HD group for the 2-week class. JUN was at least 10-fold expressed in MD and HD, and at least two-fold changed in LD and MD, for the 14- and 28-day exposure periods, respectively. MDM2 was also at least 12-fold changed in LD, MD, and HD during the 2 weeks of exposure to 2AA.
DISCUSSION
Apoptosis is the natural and essential process by which unwanted or damaged cells are cleared from tissues. [22] This is a genetically controlled process that a cell undergoes in response to various stress conditions, including ionizing radiation and exposure to various environmental contaminants such as aflatoxin. [23][24][25] The present study is an investigation of the role of apoptosis in minimizing the effects of dietary 2AA exposure. Apoptotic events are part of regulated cellular processes during development and aging intended to maintain cellular and organ tissue integrity. Apoptosis also occurs as a homeostatic and defense mechanism against various stress agents. [26] The apoptotic events were examined through the detection of apoptotic cells via immunohistochemical TUNEL staining and the Casp3 activity assay. Specific apoptosis staining seems to show a greater level of apoptosis in MD and HD 2AA-treated animals relative to their controls. This is noted because the dark brown staining of nuclei is more prominent in the livers of animals that consumed the 75 and 100 mg/kg-diet of 2AA. However, a further analysis using the 'number of TUNEL index' was inconclusive. The reason for this anomaly might be the freeze-thaw artifacts of the liver samples used in the study. Measured Casp3 activity indicated caspase activity in all hepatic tissues. Within the 14-day group, caspase activity was only significantly elevated in the HD animals. Similarly, increased caspase activity was noted in the medium- and high-dose group animals that were exposed to 2AA for 28 days. Coupled with these assays was quantitative mRNA expression analysis via qRT-PCR. The expression of some hepatic apoptotic genes was determined to gain insight into how 2AA modulates AEN, BAX, CASP3, JUN, MDM2, and p53 to trigger apoptosis. Brief descriptions of these genes are provided below.
AEN, the apoptosis-enhancing nuclease, is a target of the important p53 gene. AEN is reported to be a typical exonuclease and its expression is regulated by the phosphorylation of the p53 transcript. [27] AEN has been reported to be highly expressed in animals exposed to various contaminants. For example, in rats exposed to genotoxic chemicals, there was an increased incidence of apoptosis that correlated with AEN expression. A high level of AEN was also expressed in the livers of barrows fed dietary aflatoxin. Ionizing radiation has also been reported to induce AEN expression.
We also investigated whether or not 2AA modulates the expression of Casp3 gene. Casp3 has been observed to be frequently activated in mammalian cell apoptosis. [28][29][30] This protein belongs to a family of cysteine proteases. Casp3 has been observed to be the most essential of the executioner caspases. It is activated by any of the initiator caspases such as caspase 8, 9, or 10. [26][27][28][29][30][31] Apoptosis has also been observed independent of caspase activation. [32] BAX gene is reported to induce apoptosis. [28,[33][34][35] The BAX protein belongs to a family of Bcl2 intracellular proteins that regulate the activation of caspase. [36] These proteins have been found in many cancerous tissues. [37,38] The BAX gene is found in the cytosol of many cells in the inactive state. It responds to death stimuli by undergoing conformational change followed by transport of the BAX to the mitochondrial membrane. In the process, the BAX inserts and promotes release of apoptogenic genes.
JUN was the fourth gene whose expression was examined. This gene is a signal transduction transcription factor belonging to the AP-1 family. [39,40] The proto-oncogene has been observed to play modulating roles in cell proliferation and differentiation. [39,41] Studies have linked apoptosis with c-JUN expression. Recent studies have found high expression of proto-oncogene c-JUN in apoptotic tissues. [42][43][44] We also analyzed the expression of murine double minute-2 in liver tissues. MDM2 works in tandem with p53 to regulate cell cycle and apoptosis. Direct protein-protein interaction between MDM2 and p53 leads to the regulation of p53. Besides MDM2's modulation of p53 tumor suppressor protein, MDM2 is also observed to exhibit anti-inflammatory effects. The anti-inflammatory effects are a result of MDM2 acting as co-transcription factor for nuclear factor-kappa-light-enhancer of activated B cells (NF-κB) at cytokine promoters. [45][46][47][48] The last gene to be analyzed was p53, which is considered the guardian of the genome. [49] p53 is reported to play a significant role in preventing the development of cancers. This is accomplished via p53's ability to either potentially arrest or kill tumor cells. [50,51] As a result, it has been observed that at least half of all cancers do have loss of p53 activity from mutations in the p53 gene. The p53 protein also induces apoptosis and cell cycle arrest. [52] The current results seem to show AEN and BAX as the main targets in the induction of apoptosis in response to 2AA exposure, though p53, MDM2, and JUN may play marginal roles. BAX was very highly expressed in the HD rats belonging to the 2-week exposure group. This trend was not observed in the 4-week animals. Instead, AEN was rather very highly expressed in the liver of the MD animals that were treated with 2AA for 28 days. The shift in protein target was rather surprising. It should also be noted that the rest of the mRNA transcripts were at least two-fold changed relative to the control genes and samples.
We have previously noted that the MD and HD animals respond similarly to 2AA toxicity. [2] For instance, we have previously observed that cumulative body weight gain in response to 2AA dietary consumption was similar for MD and HD animals (Gato and Means, 2011a). These groups of rats showed greater numbers of differentially expressed genes relative to the control and the LD for both periods. This was the case in the current investigation. The control and LD animals also seem to have similar responses to 2AA intoxication. Even though this was the case, the LD (50 mg/kg) animals still showed a higher expressed MDM2 gene than what had been previously observed in our studies.
Our observations indicate an intrinsic mode of apoptosis for both time frames. Our findings suggest that the cell adopts two complementary strategies to maintain cellular integrity. Whereas the short-term group focused on using the BAX protein as the main target, the long-term treatment used AEN as the focal-point gene. The acute exposure to 2AA for 14 days seems to cause the cell to be overwhelmed with 2AA intoxication. As a result, we believe that BAX was very highly expressed in the short-term exposure group so that apoptosis could be enhanced. This occurred via MDM2 binding to the p53 gene, which in turn activated the BAX proapoptotic protein. The p53 not only activated BAX and other proapoptotic proteins of the Bcl2 family, but also inhibited the antiapoptotic family of proteins. Thus, the cell is committed to disintegration. This observation is consistent with the current results. At least a two-fold expression of MDM2, p53, and BAX has been noted in almost all 2AA-treated animals.
By contrast, the 4-week rats appear to have adapted to the effects of 2AA, which we consider the primary reason that BAX expression is significantly reduced in the long-term group compared with the 2-week group. Instead, AEN was highly expressed, with MDM2, JUN, and p53 moderately expressed at changes of at least two-fold. AEN is reported to be able to induce apoptosis on its own, [27] but it is also a direct target of p53. We think both of these routes are vital for the induction of apoptosis following 2AA-induced cellular injury.
CONCLUSION
The present investigation examines the response of F344 rats to dietary 2AA consumption. Apoptosis is a well-documented cellular strategy for eliminating or minimizing the effects of environmental stressors. Apoptosis-specific staining images do not indicate a significant difference between control and treated animals, which may be due to freeze-thaw artifacts observed in the liver samples.
However, there appears to be a greater level of apoptosis in the MD and HD 2AA-treated animals. Measured Casp3 activity indicated caspase activity in all hepatic tissues, with significant activity in the HD group for both the 14- and 28-day treatments. Analyses of apoptosis-related genes indicate AEN and BAX as the main targets in the induction of apoptosis in response to 2AA exposure, while p53, MDM2, and JUN may play marginal roles. Dose-dependent increases in mRNA expression were observed for all genes except Casp3. BAX was very highly expressed in the HD rats of the 2-week exposure group; this trend was not observed in the 4-week animals, in which AEN was instead very highly expressed in the liver of the MD animals treated with 2AA for 28 days.
|
2022-04-19T01:21:36.832Z
|
0001-01-01T00:00:00.000
|
{
"year": 2014,
"sha1": "4ac3a23150593e381270d329e976dcd5ee4dcdcb",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc3989916",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4ac3a23150593e381270d329e976dcd5ee4dcdcb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
267434821
|
pes2o/s2orc
|
v3-fos-license
|
The Curriculum Ideologies Underlying the AfriMEDS Curriculum Framework for Undergraduate Medical and Dental Education in South Africa
South Africa faces healthcare challenges due to inefficiencies, resource constraints, and disease burden. The AfriMEDS curriculum framework was adopted as part of curriculum reform to facilitate the training of comprehensive healthcare professionals capable of addressing healthcare challenges. However, the curriculum ideologies underlying this framework have not been explored. This research aimed to qualitatively describe the curriculum ideologies underlying the AfriMEDS framework as a proxy to determine how it could facilitate the training of healthcare practitioners fit to address South African healthcare challenges. ChatGPT was used to extract data from the framework using a previously validated document analysis protocol. Interpretive analysis was employed to analyze the extracted data to determine inferred curriculum ideologies. A complex interplay of curriculum ideologies was found, with the discipline- and service-centered ideologies most dominant, followed by the citizenship-centered ideology, while the student-centered ideology was the least represented. It was also found that the six components of curriculum ideologies exhibit varying degrees of ideological representation. It is concluded that, while the AfriMEDS curriculum framework could produce technically skilled and service-oriented practitioners, its effectiveness in nurturing well-rounded medical professionals may be limited. Integrating a balanced representation of all curriculum ideologies is recommended.
Introduction
Healthcare in South Africa is a complex and multifaceted issue encompassing various challenges and disparities. The country's healthcare system faces fundamental challenges, including resource constraints, inefficiencies, and disparities in access to care [1][2][3][4]. The demand for healthcare is influenced by factors such as racial differences, household size, and geographical accessibility [4][5][6][7]. Additionally, the burden of disease, the impact of the HIV epidemic, and the COVID-19 pandemic further strain the healthcare system [8]. The disparities in healthcare access are evident in the rural-urban health divide and the persistent healthcare disparities, which are exacerbated by the historical context of apartheid policies [9,10].
To address some of these challenges, the government implemented National Health Insurance (NHI) to improve equity and access to healthcare for the entire population [11,12]. It also seeks to align policy development, resource allocation, and the integration of innovative technologies to improve quality healthcare. While some members of society embrace the NHI, there are concerns regarding its financial implications and potential impact on the healthcare system [13].
Universities, which influence healthcare policy and train healthcare workers to serve the diverse population, have seen their medical and dental education curricula evolve considerably, adapting to global trends and local challenges [14,15]. While there is a growing emphasis on expanding health professionals' training, the capacity of educational institutions remains a concern [16]. Factors such as limited faculty numbers, inadequate infrastructure, and financial constraints hinder the expansion of training programs [17,18]. However, efforts are being made to increase capacity through public-private partnerships and the introduction of innovative teaching methodologies, such as simulation-based training and online learning platforms [19].
The throughput rate of health professional programs is a critical indicator of the effectiveness of these educational systems. Studies have shown that attrition rates in health professional programs are a concern, with factors such as academic difficulty, financial challenges, and personal circumstances contributing to dropout rates [20]. However, initiatives like student support programs and curriculum reforms aim to improve retention and graduation rates while aligning with the country's healthcare needs [21].
Medical and Dental Education Reforms in South Africa
Curriculum reform in South African medical and dental education has been an ongoing process, responding to the changing healthcare landscape and societal needs. Reform has been driven by two primary forces: ideological reform and curriculum reform. Ideological reform involves the transformation of the beliefs, values, and philosophies that inform education policies and practices [22], while curriculum reform involves changes in the content, structure, and delivery of the curriculum within educational institutions [23].
In post-apartheid South Africa, ideological reform attempts to transform higher education by addressing history-, race-, and gender-based inequalities and promoting inclusivity and equity [24,25].Efforts to decolonize higher education institutions, democratize education, and promote citizenship education are evident [26].This transformation includes broader societal objectives like reconciliation and nation-building through higher education [27].Ideological reform also includes decolonizing higher education and the curriculum by reversing the hegemony of Eurocentrism in teaching, learning, and research [28].This includes transforming pedagogical practices, teaching, and learning [29] and integrating and contextualizing students' realities into curriculum designs [30].
Curriculum reform attempts to align curriculum standards with international standards and respond to local needs [31].In medical and dental education, curriculum reform has shifted towards community-oriented medical and dental education, emphasizing primary healthcare and social accountability [32].Traditional lecture-based teaching is replaced with student-centered and work-based learning, including early clinical exposure [33].Interprofessional education that promotes a holistic understanding of patient care has been adopted [34].Furthermore, there has been an effort to incorporate ethical and humanistic components into medical training, including medical humanities, health systems science, medical ethics, patient rights, communication skills, and the social determinants of health [35].Alongside curriculum changes, there has been a shift in assessment strategies to emphasize work-based assessment, continuous assessment, practical examinations, and clinical skills assessment.
The challenge, however, is aligning ideological reforms and curriculum reforms into a coherent, explicit curriculum ideology.Balancing ideological and curriculum objectives while ensuring comprehensive, relevant, and equitable medical and dental education remains complex.The Health Professions Council of South Africa (HPCSA) [36] adopted a new curriculum framework, colloquially known as AfriMEDS, to address this quagmire.The HPCSA is the statutory body established to regulate health professions in South Africa.Its mandate is derived from the Health Professions Act 56 of 1974.The primary function of the HPCSA is to set and maintain standards for education, training, and ethical practice in the healthcare sector to ensure that the public receives quality healthcare.In 2014, the HPCSA adopted the AfriMEDS curriculum framework.
The AfriMEDS Curriculum Framework
The AfriMEDS curriculum framework [36], a pioneering initiative in South African medical and dental education, represents a significant step towards shared comprehensive curriculum reform in undergraduate medical and dental education programs.It provides "core competencies for undergraduate students in clinical associate, dentistry, and medical teaching and learning programs in South Africa" [36] (p. 1).This framework was adapted from the CanMEDS Physician Competency Framework [37,38] as a collaborative effort involving various stakeholders, including educational institutions, healthcare providers, and government bodies.Its inception was driven by a critical need to address the disparities in healthcare delivery and the uneven distribution of healthcare professionals, exacerbated by the legacy of apartheid [4,39].It represents an attempt to overhaul the traditional medical and dental education system, which was primarily hospital-based and specialized, to a more community-oriented and integrated approach [40,41].
The AfriMEDS curriculum framework establishes a comprehensive set of core competencies for undergraduate medical and dental education in South Africa. It emphasizes the role of a healthcare practitioner, which integrates graduate attributes, profession-specific knowledge, clinical skills, and professional attitudes toward providing patient/client-centered care. Additionally, it underscores the importance of acquiring and maintaining knowledge, skills, attitudes, and character appropriate for practice, involving a broad range of academic and scientific disciplines. The AfriMEDS curriculum framework also highlights the collaborator role, stressing the need for effective teamwork, respect for diversity, and leadership within healthcare teams. As leaders and managers, students are encouraged to engage in activities that enhance the functionality of healthcare organizations and systems, manage their practices and careers efficiently, and adeptly use information technology in healthcare settings. In the role of health advocates, the focus is on responding to individual health needs and broader community health issues as part of holistic care. As scholars, students are encouraged to commit to lifelong learning, critical evaluation, and the application of knowledge, including an understanding of research design and ethics. Lastly, the professional role calls for the demonstration of ethical practice, adherence to professional codes, and maintenance of professional relationships. These competencies prepare healthcare professionals for ethical, comprehensive, and patient/client-centered care across various health and social contexts in South Africa.
Problem Statement
While the AfriMEDS curriculum framework represents a pivotal shift in the approach to undergraduate medical and dental education, a notable gap exists in understanding how this framework amalgamates ideological reform and curriculum reform within the South African context.This is because the curriculum ideologies underlying this framework have not been defined in the literature.Therefore, the extent to which the AfriMEDS curriculum framework promotes standardized quality medical and dental education, which could lead to social transformation, is unclear.Given the authority of this framework in medical and dental education in South Africa, it is imperative to explore its underlying curriculum ideologies to advance its potential to cultivate the competent, empathetic, and socially responsible medical professionals that South Africa needs.Such professionals would be academically proficient and adept in responding to the diverse and complex health needs of the South African population, driven by socio-economic and political dynamics.
Aim of the Research
Considering the above discourse, this research is a preliminary effort to assess the extent to which the ideological and curriculum reforms in health professional education in South Africa are aligned. More specifically, the research aimed to qualitatively describe the curriculum ideologies underlying the AfriMEDS framework as a proxy to determine how the framework could facilitate the required training of healthcare practitioners fit to address healthcare challenges in South Africa.
Theoretical Framework: Curriculum Ideologies
The current research adopted Schiro's [42] curriculum ideologies as a theoretical framework to address the research aim. A curriculum ideology refers to "the overarching aims or purposes of education, the nature of the student, the way learning must take place, the role of the teacher during instruction, the most important kind of knowledge that the curriculum is concerned with, and the approach to knowledge, and assessment" [42] (p. 7). Curriculum ideologies, namely, discipline-centered, service-centered, student-centered, and citizenship-centered ideologies (Figure 1), represent foundational beliefs about education and its role in society, leading to distinct approaches in curriculum design. Curricula that adopt discipline-centered ideology are predicated on mastering discipline-specific knowledge and advocating for a rigorous, structured, and knowledge-centric education that prioritizes intellectual development and the transmission of discipline heritage [42,43]. Service-centered ideology requires curricula to align with society's socio-economic and practical needs [44,45]. This pragmatic approach emphasizes job readiness, equipping students with skills directly applicable to the workforce and responding to market demands [42,46]. The student-centered ideology in curricula, championed by theorists like Beane [47], places the individual student's needs, interests, and experiences at the core of the educational process. This ideology supports the idea that education should be tailored to foster personal growth, self-actualization, and the construction of knowledge through active learning [42]. The citizenship-centered ideology emphasizes cultivating civic virtues and competencies necessary for social transformation through democratic participation. This approach aims to prepare students to be informed, critical, and engaged citizens capable of contributing to the betterment of society [24].
Methodology
This study used a rigorous content analysis approach to examine the AfriMEDS curriculum framework for undergraduate medical and dental education. The AfriMEDS curriculum framework was purposively selected as it is endorsed by the HPCSA and guides the accreditation of all medical and dental schools in South Africa.
A previously validated 6-item standardized document analysis protocol (Table 1) was used to analyze the AfriMEDS curriculum framework [43,44,48]. This protocol has undergone thorough validation for face, criterion-related, and content validity by a panel of experts, ensuring its suitability for the current research. It has also been used previously to determine curriculum ideologies in school curricula [43,44,48]. Utilizing a standardized protocol enhances the research's consistency, validity, credibility, and trustworthiness.
Component of the Curriculum Being Addressed: Open-Ended Item Used to Analyze the Curriculum Documents
- Purpose of medical and dental education: What is the purpose of medical and dental education?
- Approach to knowledge: What is the approach to knowledge taught?
- Instructional process: How is learning expected to take place?
- Approach to assessment: What is the nature and purpose of assessment?
- Role of the teacher: What is the role of teachers during the instructional process?
- Role of the student: What is the role of students in the learning process?
In the current research, content analysis primarily focused on responding to the six open-ended items in this protocol (Table 1), with responses comprising verbatim extracts from the AfriMEDS curriculum framework. Specifically, in line with the theoretical framework (Figure 1), the AfriMEDS curriculum framework was analyzed to determine the purpose of medical and dental education, the approach to knowledge taught, the instructional process, the roles of the teacher and student, and the approach to assessment (Figure 1).
ChatGPT 4.0, set for Browsing and Advanced Data Analysis, was used to identify relevant text extracts of the AfriMEDS curriculum framework, which the researcher later analyzed. This process involved two main steps: developing prompts and extracting text from the AfriMEDS curriculum framework, as described below.
Prompt Development
A systematic approach was followed in developing effective ChatGPT prompts to ensure that they are clear, focused, and capable of analyzing the document [49][50][51]. This approach, informed by Poola [49] and White et al. [52], involved drafting the prompts, ensuring they were clear and concise, and directly addressing the research objectives. Ambiguous language that could lead to misinterpretation was avoided. The drafted prompts were tested by running them on ChatGPT to identify the type of responses generated. This testing phase was crucial to refine the prompts for clarity and effectiveness. Based on the responses, the prompts were refined to better align with the research objectives. This involved adjusting the complexity of the language and rephrasing questions for clarity. Each version of the prompt and the rationale behind each iteration were documented. A pilot test with the final version of the prompt was carried out to ensure that the prompts consistently generated the desired type of responses. An independent panel of experts with diverse skills, including educational background, considered the prompts for face-, construct-, and criterion-related validity. Having satisfied himself with the pilot test results and feedback from the panel of experts, the researcher proceeded to use the final set of prompts in the research (e.g., Figure 2).
Data Extraction
ChatGPT was then prompted to extract text from the AfriMEDS curriculum framework, which describes the purpose of medical and dental education, the approach to knowledge taught, the instructional process, the roles of the teacher and student, and the approach to assessment (Table 1). To ensure confirmability, this process was repeated until saturation was reached. Specifically, the first two rounds of data extraction were carried out. The output was then manually compared by the researcher to determine consistency. Some discrepancies were identified where the extracted text lacked depth, and the extent of the evidence was not consistent. Consequently, further rounds of extraction were conducted until the data generated were comprehensive, detailed, and consistently reliable. The researcher manually verified the output, cross-checking the extracts against the original document to ensure confirmability.
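The extraction workflow described above (prompting for each protocol item, repeating rounds, and comparing outputs until they stabilize) can be summarized programmatically. The sketch below is an illustration only: the study used the ChatGPT web interface (with Browsing and Advanced Data Analysis) rather than code, so the `ask` callable, the prompt wording, and the stability check are assumptions introduced here for clarity.

```python
from typing import Callable, List

# The six components of the document analysis protocol (Table 1).
PROTOCOL_ITEMS = [
    "What is the purpose of medical and dental education?",
    "What is the approach to knowledge taught?",
    "How is learning expected to take place?",
    "What is the nature and purpose of assessment?",
    "What is the role of teachers during the instructional process?",
    "What is the role of students in the learning process?",
]

def build_prompt(item: str) -> str:
    # Illustrative prompt wording; the study's actual prompts went through
    # documented drafting, testing, and expert-validation iterations.
    return (
        "From the AfriMEDS curriculum framework, extract verbatim passages "
        f"(with page numbers) that answer: {item} "
        "Return one passage per line and do not paraphrase."
    )

def extract_until_stable(ask: Callable[[str], str], item: str,
                         max_rounds: int = 5) -> List[str]:
    """Repeat extraction until two consecutive rounds agree, a simple
    stand-in for the 'saturation' check described in the text."""
    previous: List[str] = []
    for _ in range(max_rounds):
        reply = ask(build_prompt(item))
        current = [ln.strip() for ln in reply.splitlines() if ln.strip()]
        if previous and set(current) == set(previous):
            return current          # consistent across rounds: stop extracting
        previous = current
    return previous                  # remaining discrepancies go to manual review

# `ask` would wrap whatever ChatGPT interface is available; the returned
# extracts are then manually cross-checked against the framework document.
```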
Data Analysis
Having extracted the data, the researcher performed an interpretive analysis to determine the curriculum ideologies inferred. After initial data extraction, the researcher manually examined the extracted text for underlying meanings, context, and implications, going beyond the literal content. This approach involved critically analyzing how the AfriMEDS curriculum framework reflects curriculum ideologies based on their characterization (Figure 1). The extracted data were analyzed to determine whether they reflect any of the ideologies in Figure 1. For example, extracts relating to the purpose of medical and dental education were analyzed to determine if the stated, implied, or inferred purpose was to:
- "Transmit discipline-specific knowledge, culture and values", which is indicative of the discipline-centered ideology;
- "Prepare students for practical, vocational, or workforce skills", which is indicative of the service-centered ideology;
- "Focus on the individual interests, needs, and experiences of students", which is indicative of the student-centered ideology;
- "Critique and transform societal inequities and injustices", which is indicative of citizenship-centered ideology.
Therefore, the researcher interpreted the data to uncover implicit assumptions, values, and beliefs in the curriculum, providing a nuanced understanding of how the AfriMEDS curriculum framework aligns with or deviates from each curriculum ideology.
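To make the coding scheme concrete, the sketch below captures the indicator statements listed above as a simple rubric and shows how manually coded extracts could be tallied into the qualitative representation levels reported later (none, shallow, moderate, extensive). The tallying helper and its thresholds are illustrative assumptions; the study itself relied on the researcher's interpretive judgment rather than counts.

```python
from collections import defaultdict

# Indicator statements taken from the description above; the tallying is an
# illustrative aid, not the study's actual instrument.
IDEOLOGY_INDICATORS = {
    "discipline-centered": "Transmit discipline-specific knowledge, culture and values",
    "service-centered": "Prepare students for practical, vocational, or workforce skills",
    "student-centered": "Focus on the individual interests, needs, and experiences of students",
    "citizenship-centered": "Critique and transform societal inequities and injustices",
}

# component -> ideology -> list of supporting extracts (manually coded)
codebook = defaultdict(lambda: defaultdict(list))

def code_extract(component: str, ideology: str, extract: str) -> None:
    """Record a researcher's judgment that an extract reflects an ideology."""
    assert ideology in IDEOLOGY_INDICATORS
    codebook[component][ideology].append(extract)

def representation_level(n_extracts: int) -> str:
    """Map the amount of supporting evidence onto the qualitative labels used
    in the results; the numeric thresholds are assumptions."""
    if n_extracts == 0:
        return "none"
    if n_extracts == 1:
        return "shallow"
    if n_extracts <= 3:
        return "moderate"
    return "extensive"

# Hypothetical coding decision, for illustration only:
code_extract("purpose of education", "citizenship-centered",
             '"Act as advocates for patient/client groups..." (p. 10)')
n = len(codebook["purpose of education"]["citizenship-centered"])
print(representation_level(n))
```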
An independent researcher considered the interpretive analysis generated by the primary researcher for credibility and confirmability. Specifically, the independent researcher cross-verified the primary researcher's analysis by identifying overlooked aspects and challenging interpretations, prompting a deeper and more critical analysis. This led to the refining and strengthening of the arguments and conclusions drawn from the data. Where disagreements emerged, additional data to support either argument were manually extracted and analyzed to settle the disagreements. This way, the conclusions were reached based on the data available.
Presentation and Interpretation of the Results
The data provided insightful analysis of the prevalence of different educational ideologies.In some instances, evidence was extensive where aspects of the ideology are a central, recurrent theme throughout the framework.Instances of moderate evidence were also observed where aspects of the ideology were clearly acknowledged and addressed but were not a central or pervasive theme.Some of the evidence was shallow, where aspects of the ideology were barely evident or were reflected at a very basic level.In this regard, it was found that the discipline-centered ideology is predominantly evident in the purpose of medical and dental education and approach to knowledge, suggesting a strong focus on specific disciplines or subject matter.However, minimal prevalence was noted in the instructional process and role of the student.In contrast, evidence of service-centered ideology was comprehensive across most components, especially in the approach to assessment and the role of the teacher, with moderate presence in the role of the student.The student-centered ideology was notably absent in the purpose of medical and dental education and approach to knowledge.However, it is evident in the approach to assessment and the role of the student.Extensive evidence of the citizenship-centered ideology was found in the instructional process and approach to assessment, with shallow presence in the role of the teacher.Each of the components is presented in detail below.
Purpose of Medical and Dental Education
The data analysis revealed that the AfriMEDS curriculum framework adopts discipline-centered, service-centered, and citizenship-centered ideologies as reflected in the purpose of medical and dental education (Table 2). No evidence of the student-centered ideology was found. In this regard, the emphasis on integrating "profession-specific knowledge, clinical skills, and professional attitudes" aligns with the discipline-centered ideology, which prioritizes transmitting discipline-specific knowledge and values. This ideology focuses on the mastery of a fixed body of knowledge and clinical skills essential for medical practice, underscoring the traditional view of medical and dental education to impart established facts and principles relevant to healthcare practice. Therefore, the healthcare professional's role is predominantly as a knowledgeable authority, delivering patient/client-centered care based on established medical disciplines. The data (Table 2) reflect the service-centered ideology by focusing on the application of skills in patient care. The mention of "performing comprehensive assessments" and effectively using various "interventions" emphasizes preparing students for practical, vocational skills relevant to the healthcare workforce. This approach underlines the utilitarian and practical aspects of medical and dental education, where the purpose is to develop efficient and skilled practitioners capable of delivering a range of diagnostic and therapeutic procedures. The ideology here is rooted in the practical utility of education, aimed at serving the immediate needs of patients and the healthcare system.
The absence of data aligning with the student-centered ideology in the AfriMEDS curriculum framework suggests that the purpose of medical and dental education does not focus on individual student interests, needs, or experiences as a central theme.The student-centered ideology typically involves tailoring education to individual students, emphasizing inquiry and exploration and fostering self-directed learning.The lack of evidence for this ideology in the AfriMEDS curriculum framework may indicate a more traditional approach to medical and dental education, focusing less on subjective, individualized learning experiences.
Data indicative of the citizenship-centered ideology were identified (Table 2), focusing on social responsibility and advocacy.The emphasis on acting as "advocates" for "marginalized groups" and responding to the needs of "vulnerable populations" with a commitment to equity reflects this ideology's aim to critique and transform societal inequities.The purpose of medical and dental education here extends beyond individual patient care to include a broader social context, where healthcare professionals are seen as agents of social change, promoting equity and access to care.This perspective aligns with the idea of training healthcare professionals as clinicians who are socially responsible citizens who advocate for societal well-being.
Approach to Knowledge
Data regarding the AfriMEDS curriculum framework's approach to knowledge revealed evidence for all ideologies except for the student-centered ideology.Specifically, some extracts from the AfriMEDS curriculum framework (e.g., Table 3) exemplify the discipline-centered ideology, as they emphasize the importance of "core knowledge", which includes "academic literacy, numeracy, and information technology skills", alongside "natural sciences and understanding normal human structure."This focus reflects the ideology's characteristic of valuing fixed and objective knowledge grounded in established scientific facts.The emphasis on "reflection, integration, application, and evaluation" aligns with the discipline-centered instructional process, which aims to ensure that students understand and can utilize the predetermined content.
Student-centered ideology: No direct evidence was found in the framework.
Citizenship-centered ideology: "Understand the basic principles of quantitative and qualitative research design and analysis as well as research ethics" (pp. 11-12). "Play a constructive, critical, and creative role in the organization, management, and provision of healthcare in the community" (p. 10).
Data aligning with the service-centered ideology were also identified (Table 3), highlighting a focus on preparing students for practical skills necessary in healthcare.The emphasis on "consultations and facilitating clinical encounters effectively, including documentation" aligns with the ideology's practical and utilitarian knowledge approach.Furthermore, providing compassionate and patient-centered care reflects the ideology's focus on skills, efficiency, and usefulness in real-world contexts.
Evidence of the citizenship-centered ideology was also identified (Table 3), including a focus on "critical analysis", understanding "research design and ethics", and actively engaging in the healthcare community.This reflects the ideology's characteristic of knowledge being contextual, evolving, and aimed at critiquing and transforming societal inequities.The emphasis on playing a "constructive, critical, and creative role" in healthcare aligns with the ideology's goal of preparing students to be change agents and critical thinkers, actively working towards social justice and community betterment.
Instructional Process Approach
Regarding the instructional process, data revealed that the discipline-centered ideology is not explicitly represented, as the AfriMEDS curriculum framework lacks direct reference to traditional pedagogical approaches (Table 4).Instead of emphasizing lectures, direct instruction, and rote learning, the AfriMEDS curriculum framework adopts contemporary educational approaches, focusing on clinical skills and social responsibilities and aligning more closely with other curriculum ideologies.
Table 4. The representation of curriculum ideologies underlying the instructional process in the AfriMEDS curriculum framework. The evidence presented was extracted from the AfriMEDS framework, and source page numbers are indicated.
- Discipline-centered ideology: No direct evidence was found in the framework.
- Service-centered ideology: "Facilitate the learning of patients/clients, families, students, other healthcare professionals, the public staff, and others as appropriate" (p. 12).
- Student-centered ideology: "Reflect on and acknowledge the strengths and limitations of their knowledge and skills" (p. 11).
- Citizenship-centered ideology: "Collaborate with others where appropriate to assess plan provide and review other tasks such as research problems educational work programme review or administrative responsibilities" (p. 7).
Data reflecting the service-centered ideology were identified, emphasizing the practical application of skills in a work-based learning context (Table 4). Learning facilitated within community settings where students interact with diverse groups, including patients, families, and professionals, aligns with this ideology. In this regard, the instructional process involves the teacher as a resource provider, guiding students in applying their skills and knowledge in practical, real-world scenarios, which is a hallmark of the service-centered approach. The emphasis on facilitation, rather than direct instruction, suggests a focus on practical skill application and utility in various contexts.
Data also suggest a strong alignment with the student-centered ideology by highlighting the importance of self-reflection and acknowledging one's strengths and limitations.This approach fosters a student-driven instructional process, focusing on personal growth and development.The emphasis on reflection suggests an educational environment where students are encouraged to critically evaluate their learning journey, an essential aspect of the student-centered ideology.This ideology values the subjective construction of knowledge and individual experiences, promoting students as active and self-directed students.
The citizenship-centered ideology is also reflected where the AfriMEDS curriculum framework emphasizes collaboration and engagement in activities beyond individual learning, such as research, program review, and administrative tasks.The focus on collaboration for societal tasks indicates an instructional process that encourages critical thinking, problem-solving, and activism.This reflects the ideology's goal of preparing students to critique and transform societal inequities and injustices, fostering a sense of social responsibility and activism.The collaborative nature of these activities supports the development of students as change agents and critical thinkers.
Approach to Assessment
Regarding the approach to assessment, the discipline-centered ideology was moderately represented, as the AfriMEDS curriculum framework integrates knowledge and skill acquisition within a broader context of patient-centered care.In contrast, extensive evidence for service-centered and student-centered ideologies was identified.For example, concerning discipline-centered ideology, the AfriMEDS curriculum framework states that students are expected to "acquire and maintain knowledge, skills, attitudes, and character appropriate to their practice" (p.2).Such knowledge, skills, attitudes, and character would then be assessed to certify that students have met the required competencies.This approach to assessment reflects a discipline-centered ideology that emphasizes discipline-specific traits deemed essential for the practice.This further suggests that the assessment will likely be more traditional, emphasizing objective knowledge and skill acquisition measurements.
Regarding service-centered ideology, the AfriMEDS curriculum framework states that, on graduating, students should "function effectively as entry-level healthcare practitioners. . . in a plurality of health and social contexts."(p.2).They must also "provide compassionate empathetic and patient/client-centered care" (p.2).These competencies would be assessed through relevant approaches.Notably, the focus on preparing students for assessable practical roles in healthcare, emphasizing effectiveness in diverse health and social contexts, and giving the ability to provide patient-centered care is typical in the service-centered ideology.Assessments in this regard would measure the application of skills in practical, real-world situations, evaluating students' ability to adapt to different contexts and to provide compassionate and empathetic care.The hands-on approach emphasizes practical skills and their direct application in healthcare settings.
The student-centered ideology was also observed where the AfriMEDS curriculum framework states that students are expected to "Reflect on, integrate, apply, and evaluate core knowledge skills, attitudes, and character acquired during undergraduate training" (p.2).Additionally, they must "reflect on and acknowledge the strengths and limitations of their knowledge and skills" (p.11).This implies that self-assessment in the form of reflection forms a core competence in the AfriMEDS curriculum framework.It prioritizes the individual student's process of reflection, integration, application, and evaluation of their learning.Assessments in this ideology would focus on personal growth and development, measuring how well students can reflect on their learning, recognize their strengths and limitations, and apply their knowledge and skills in various contexts.This approach encourages self-assessment and continuous self-improvement, focusing on the development of the student as a whole.
The AfriMEDS curriculum framework also reflects the citizenship-centered ideology.According to this framework, students are expected to "demonstrate a commitment to work in primary healthcare settings (urban and rural) and find professional and personal satisfaction in it" (p.2).They must also "respond to the health needs of the communities that they serve" (p.10).This approach aligned with a citizenship-centered ideology, focusing on responding to community health needs and working in various healthcare settings, including challenging environments like rural areas.Assessments would focus on students' ability to engage with and address community health issues, demonstrating a commitment to social responsibility and the ability to find satisfaction in serving diverse populations.This approach emphasizes social consciousness, critical thinking, and the capacity to act as agents of change in healthcare.
Role of the Teacher and the Student
Concerning the role of the teacher (Table 5), the AfriMEDS curriculum framework examined presents a multifaceted ideology, with moderate evidence supporting a discipline-centered ideology, as indicated by the recurrent theme of healthcare practitioners' centrality, integrating graduate attributes (e.g., professional knowledge, clinical skills). For example, data suggest that the AfriMEDS framework emphasizes the need for teachers to help students integrate "profession-specific knowledge, clinical skills, and professional attitudes", which is typical in the discipline-centered ideology. In this ideology, the teacher is viewed as an authoritative figure who imparts fixed, objective knowledge and skills relevant to healthcare. It reflects a traditional view in which the teacher imparts knowledge, and students are expected to acquire this specialized knowledge and apply it in a patient-centered manner. The focus on specific professional roles and the integration of these roles further underline the structured, discipline-specific nature of the curriculum. Notably, there was no evidence to suggest that the role of the student follows the discipline-centered ideology.
However, data were found indicating that the role of the student follows the service-centered ideology, which emphasizes that practical, utilitarian knowledge and skills are learned. Students must apply their learning to real-world scenarios, "demonstrating problem-solving abilities and judgment in clinical settings." The focus is on applying knowledge for effective patient care, characteristic of a service-centered approach where the utility and application of skills in practical contexts are paramount. Relatedly, the role of the teacher also follows the service-centered ideology, in which the emphasis is on practical skills and attributes necessary for effective teamwork and patient care. In this context, the teacher's role shifts to being a facilitator and resource provider, guiding students in developing practical, real-world skills such as collaboration, communication, and empathy. This utilitarian approach focuses on preparing students to serve patients and communities effectively, with the teacher supporting students in applying these skills in practical, clinical settings. The emphasis on therapeutic relationships and teamwork skills indicates a curriculum geared towards practical, service-oriented outcomes.
Table 5. The representation of curriculum ideologies underlying the role of the teacher and the student in the AfriMEDS curriculum framework. The evidence presented was extracted from the AfriMEDS framework, and source page numbers are indicated.
- Discipline-centered ideology. Role of the teacher: "As healthcare practitioners, healthcare professionals integrate all of the graduate attribute roles, applying profession-specific knowledge, clinical skills, and professional attitudes in their provision of patient/client-centred care. The healthcare practitioner is the central role in the framework of graduate attributes" (p. 2). Role of the student: No direct evidence was found in the framework.
- Service-centered ideology. Role of the teacher: "As collaborators, healthcare professionals work effectively within a team to achieve optimal patient/client care" (p. 7). "Establish positive therapeutic relationships with patients/clients and their families characterised by understanding, trust, respect, honesty, integrity and empathy" (p. 5). Role of the student: "Demonstrate effective problem-solving and judgment to address patient/client problems, including interpreting data and integrating information to make differential diagnoses and propose holistic management plans" (p. 4).
- Student-centered ideology. Role of the teacher: "Provide compassionate, empathetic, and patient/client-centered care. Perform a consultation or facilitate a structured clinical encounter effectively, including thorough documentation of assessments and recommendations" (p. 2). Role of the student: "Apply life-long learning skills to keep up to date and to enhance professional competence" (p. 3). "Reflect on and learn from challenges that are experienced in practice by posing appropriate questions, accessing and interpreting relevant evidence, integrating new learning with practice, evaluating the impact of change in practice, and documenting the learning process" (p. 11).
- Citizenship-centered ideology. Role of the teacher: "As health advocates, healthcare professionals responsibly use their expertise and influence to advance the health and well-being of individuals, communities, and populations" (p. 10). Role of the student: "Act as advocates for patient/client groups with particular health needs (including the poor and marginalized members of society)" (p. 11). "Recognise and interrogate public health policy in terms of ethics and human rights" (p. 14).
The student-centered ideology was also observed concerning the role of the teacher, in which the focus is on developing individual competencies like compassion, empathy, and effective communication.In this approach, the teacher's role is to facilitate individual student growth and learning.The teacher supports students in exploring and developing these personal attributes, encouraging self-reflection and self-directed learning.The emphasis on patient/client-centered care and effective consultation points to an educational approach that values student experiences and perspectives, allowing them to construct knowledge through real-world clinical encounters and patient interactions.Similarly, the role of the student is underpinned by the student-centered ideology, where the focus is on the student's individual growth, self-directed learning, and personal experiences.The emphasis on life-long learning, reflection, and the integration of new knowledge with practice showcases an approach in which students are active, self-motivated students.They are encouraged to continuously develop their skills, adapt to new information, and critically evaluate their practices.This ideology promotes students as constructors of their knowledge, reflecting on experiences and learning autonomously, which is essential for professional and personal development in healthcare.
Evidence of the citizenship-centered ideology concerning the role of the student was also found.Specifically, the AfriMEDS framework emphasizes critical thinking, social responsibility, and transformative action.Students are expected to advocate for vulnerable groups and engage critically with public health policies, reflecting a commitment to social justice and ethical practice.This ideology positions students as active change agents equipped to challenge and transform societal inequalities and injustices.Their role extends beyond individual patient care to include a broader societal perspective, where they are encouraged to recognize and address the social determinants of health and contribute to the well-being of diverse communities.The role of the teacher is also underpinned by the citizenship-centered ideology, in which advocacy for health and well-being at individual, community, and population levels are encouraged.The teacher is expected to act as an activist and mentor, guiding students to understand and engage with broader societal issues.The teacher empowers students to use their expertise for social change and community benefit, fostering critical thinking and social responsibility.This approach aligns with the notion of teachers preparing students as healthcare practitioners and agents of change who can address and influence public health issues and inequalities.
Discussion
The analysis of the AfriMEDS curriculum framework revealed a complex interplay of curriculum ideologies, each contributing uniquely to shaping medical and dental education in South Africa.These findings have critical implications for curriculum and instructional design.
The first significant finding was that the six components of curriculum ideologies (the purpose of medical and dental education, knowledge approach, instructional process, roles of teacher and student, and assessment approach) exhibit varying degrees of ideological representation. For instance, an ideology might be extensively reflected in the purpose of medical and dental education but overlooked in the approach to assessment. This lack of uniformity can significantly affect how curriculum designers and teachers at different medical schools interpret and implement the AfriMEDS framework. Variability in how these ideologies are represented across different curriculum components can lead to inconsistent interpretations and applications [43]. For instance, a strong focus on the discipline-centered ideology in the curriculum's purpose might not align with a student-centered approach to assessment. This disparity can create confusion and difficulties in creating a cohesive educational experience. Teachers might struggle to integrate these varied ideologies into a consistent teaching strategy, potentially leading to a fragmented and less-effective educational framework.
Such disparities indicate that there is no consensus on a guiding ideology for curriculum design in South African medical and dental education.This absence of a coherent ideological foundation can lead to educational strategies that are misaligned with the overarching goals of the AfriMEDS framework.It underscores the need for a more integrated approach, where all curriculum components-from educational purposes to assessment strategies-are informed by a consistent set of curriculum ideologies.This would ensure the uniform interpretation and application of the AfriMEDS framework, leading to a more cohesive and effective medical and dental education system in South Africa.
A second significant finding was the absence of a single dominant ideology across all components, highlighting a pluralistic approach in South African medical and dental education.While potentially enriching, this pluralism also presents challenges in achieving a cohesive educational strategy [53].Without a prevailing ideology across all components, the AfriMEDS framework risks lacking a unified direction, potentially leading to inconsistencies in educational outcomes across the different universities [54].This finding underscores the need for a more deliberate integration of ideologies to ensure a balanced and coherent curriculum [38].
A third significant finding was that the discipline- and service-centered ideologies are the most dominant ideologies underlying the AfriMEDS curriculum framework, while the student-centered ideology is the least represented. This dominance is implied by the availability of recurring evidence in the framework suggesting that these ideologies significantly shape the framework's content, approach, or perspective. The citizenship-centered ideology was moderately represented with noticeable but not extensive evidence. This finding underscores a traditional medical and dental education approach emphasizing profession-specific knowledge and practical competencies. However, the dominance of the discipline- and service-centered ideologies aligns with the global trend in medical and dental education that values domain-specific expertise and clinical skills [55,56]. It also aligns with reforms in South Africa, such as early clinical exposure and work-based learning, and with contemporary shifts in medical and dental education, where the applicability of knowledge in real-world contexts is increasingly valued [57]. However, it has been argued that an excessive focus on discipline-specific knowledge may limit the holistic development of medical professionals [58]. In particular, a discipline-specific siloed approach could frustrate efforts toward interprofessional and interdisciplinary education and the incorporation of ethical and humanistic components into medical training [35].
The minimal representation of the student-centered ideology suggests a lack of focus on personalized and individualized learning experiences, which contrasts with recent educational trends advocating for more student-centered approaches [59].This minimal representation was noted as the evidence was barely mentioned or evident at a very basic level, implying that the ideology does not significantly influence or shape the overall content or approach of the framework.These approaches emphasize the importance of tailoring education to individual learning styles and needs, fostering a more engaging and effective learning environment [60].The minimal representation of this ideology in the AfriMEDS curriculum framework might limit the adaptability and responsiveness of the curriculum to diverse student needs.This is particularly significant in South Africa, where students come from diverse socio-economic backgrounds and could benefit from a curriculum that addresses their needs.
A moderate representation of the citizenship-centered ideology in the AfriMEDS curriculum framework suggests the promotion of advocacy and equity for vulnerable groups.This aligns with emerging trends in medical and dental education that acknowledge the importance of social responsibility and community engagement [32].However, the fact that the citizenship-centered ideology is not dominant in the AfriMEDS curriculum framework suggests a potential gap in fully embracing social accountability [61].In particular, graduates may not be fully equipped to identify and address social determinants of health through active citizenship.It also implies that the extent to which students could be active, informed, and responsible citizens who can engage with societal issues critically, understand social justice, and participate actively in community life may also be moderate and far less than envisaged in the ideological reforms in South Africa [42,44].This is particularly concerning given the socio-economic determinants of health, which play a significant role in South African healthcare.
The AfriMEDS curriculum framework embodies a complex interplay of different curriculum ideologies.While it strongly favors discipline-and service-centered approaches, reflecting traditional and practical aspects of medical and dental education, it emphasizes less student-centered and citizenship-centered ideologies.This may indicate a need for greater balance to accommodate evolving educational paradigms that prioritize individualized learning experiences and social accountability in medical and dental education.
Recommendations
Considering that the AfriMEDS curriculum framework does not differentiate disciplines, future research should examine the curriculum ideologies reflected in the enacted curricula in medical and dental education to determine alignment with the AfriMEDS curriculum ideologies.Such research should also investigate the curriculum ideologies within and between medical and dental education disciplines, as the emphasis may vary.Determining the extent to which the underlying ideologies manifest in practice post-graduation is also important.This research will be crucial for determining the extent to which ideological reform and curriculum reform are aligned and facilitate the training of healthcare practitioners fit to address healthcare challenges in South Africa.
Furthermore, comprehensive guidelines on how curriculum ideologies should be interpreted and integrated into curricula should be developed to ensure the effective integration of curriculum ideologies in medical and dental education.These guidelines should include strategies for balancing diverse ideologies, ensuring that each component of the curriculum reflects a harmonious blend of discipline-centered, service-centered, student-centered, and citizenship-centered approaches.Additionally, there should be explicit instructions for teachers on adapting their teaching methods to align with these integrated ideologies.This approach will facilitate a more consistent and effective educational experience, accommodating the varied learning needs of students and the complex demands of medical and dental education.
Conclusions
The findings from the analysis of the AfriMEDS curriculum framework have significant implications for medical and dental education in South Africa.The dominance of discipline-centered and service-centered ideologies in the AfriMEDS curriculum framework suggests a robust focus on imparting technical expertise and practical skills, which are crucial in a healthcare landscape marked by diverse and complex medical challenges.Such a focus ensures that medical graduates are equipped to address the specific health needs of South African communities, particularly in resource-limited settings.
However, the minimal representation of the student-centered ideology raises concerns about future medical professionals' adaptability and critical thinking skills.Given the rapidly evolving medical field, where personalized care and continuous learning are paramount, the lack of emphasis on student-centered approaches may limit the ability of graduates to adapt to new challenges and innovations.This gap suggests a need for curriculum reform incorporating more holistic educational strategies, fostering technical proficiency, critical analysis, problem-solving, and lifelong learning skills.
While the AfriMEDS curriculum framework is well-positioned to produce technically skilled and service-oriented practitioners, its effectiveness in nurturing well-rounded, adaptable medical professionals may be limited.Integrating a balanced representation of all curriculum ideologies would be beneficial to achieving a more comprehensive educational outcome.Such a balanced approach would ensure a well-rounded curriculum that combines discipline-specific knowledge, service orientation, student autonomy, and civic responsibility.Such a holistic educational framework is crucial in the South African context, where diverse healthcare needs demand healthcare practitioners who are clinically competent, socially aware, empathetic, and adaptable to various community and individual patient needs.This comprehensive training approach would better prepare medical professionals for the multifaceted challenges they may face in their practice.
Figure 2. An example of a prompt used on ChatGPT to extract data.
Funding:
This research received no external funding.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki. No ethical clearance was obtained, as the study was a desktop systematic literature review involving no participants.
Informed Consent Statement: No informed consent was obtained, as the study was a desktop systematic literature review involving no participants.
Data Availability Statement: No new data were generated.
Table 2. The representation of curriculum ideologies underlying the purpose of medical and dental education in the AfriMEDS. Extracts from the AfriMEDS framework are presented with source page numbers indicated.
Citizenship-centered ideology: "Act as advocates for patient/client groups with particular health needs (including the poor and marginalized members of society)." (p. 10) "Identify vulnerable or marginalized populations and respond appropriately with a commitment to equity through access to care and equal opportunities." (p. 11)
Table 3. The representation of curriculum ideologies underlying the AfriMEDS curriculum framework's approach to knowledge. Extracts from the AfriMEDS framework are presented with source page numbers indicated.
Anomalous thermal escape in Josephson systems perturbed by microwaves
We investigate, by experiments and numerical simulations, thermal activation processes of Josephson tunnel junctions in the presence of microwave radiation. When the applied signal resonates with the Josephson plasma frequency oscillations, the switching current may become multi-valued in temperature ranges both below and above the classical-to-quantum crossover temperature. Switching current distributions are obtained both experimentally and numerically at temperatures both near and far above the quantum crossover temperature. Plots of the switching currents traced as a function of the applied signal frequency show very good agreement with a simple anharmonic theory for the Josephson resonance frequency as a function of bias current. Throughout, experimental results and direct numerical simulations of the corresponding thermally driven classical Josephson junction model show very good agreement.
Introduction
The Josephson tunnel junction is a very well studied physical system due to its simplicity and nonlinearity [1]. Statistical properties of Josephson junctions have been another subject of intense investigation through, e.g., measurements of the escape statistics from the zero-voltage state, successfully confirming consistency with the classic Kramers model for thermal activation from a potential well [2,3]. Escape measurements represent a powerful tool for probing the nature of the underlying potential well, and applying an ac field to a low-temperature system has been reported to produce anomalous switching distributions with two, or more, distinct dc bias currents for which switching is likely. These measurements have been interpreted as a signature of the ac field aiding the population of multiple quantum levels in a junction, thereby leading to enhancement of the switching probability for bias currents for which the corresponding quantum levels match the energy of the microwave photons. Work performed in this direction appeared first in the literature two decades ago [4,5]. These results have attracted significant interest toward Josephson junction systems as possible basic elements in the field of quantum coherence and quantum computing [6,7,8,9,10], and more recently other investigations have further indicated that the application of microwaves may not be the only condition under which level quantization can be observed in Josephson junctions [11].
Within the framework of this research topic we recently reported experimental measurements conducted on a Josephson junction, operated well above the so-called quantum transition temperature T*, and direct numerical simulations of the classical pendulum model, parameterized to mimic the experimental device [12]. It was found that multi-peaked switching distributions are not unique to the quantum regime (below T* = ħω_0/2πk_B), and, in fact, are manifested with the same features and under the same conditions in the classical regime as has been previously reported for low temperature measurements below T*. With the present paper we wish to contribute an anharmonic theory that accurately captures the bias current values of the observed resonant peaks in the switching distributions as a function of the applied frequency of the microwave field. We demonstrate agreement between the theory, direct numerical simulations, and experimental measurements for direct resonances as well as harmonic and sub-harmonic resonances at temperatures both well above and near T*.
Fig. 1. Sketch of the physical phenomenon under investigation: a driven oscillation energy E_ac superimposed onto thermal excitations may cause a particle to escape a washboard potential.
Figure 1 illustrates the process under investigation: in the classical one-degree-of-freedom single-particle washboard potential of the Josephson junction [4], thermal excitations (shaded in the sketch) of energy k_B T and the energy E_ac of forced oscillations due to microwave radiation can cause the particle to escape from the potential well. This process can be traced by sweeping the current-voltage characteristics of the Josephson junction periodically. Escape from the potential well corresponds to an abrupt transition from the top of the Josephson-current zero-voltage state to a non-zero voltage state. The statistics of the switching events, in the absence of time-varying perturbations, have been shown to be consistent with Kramers' model [2] for thermal escape from a one-dimensional potential. Since the thermal equilibrium Kramers model does not include the effect of non-equilibrium force terms, the results of the switching events generated by the presence of microwave radiation on a Josephson junction can be investigated, in a thermal regime, only by a direct numerical simulation of the governing equations (RSCJ model) [1].
Theory
The RSCJ model reads

$\frac{\hbar C}{2e}\,\frac{d^2\phi}{dt^2} + \frac{\hbar}{2eR}\,\frac{d\phi}{dt} + I_c \sin\phi = I_{dc} + I_{ac}\sin\omega_d t + N(t).$  (1)

Here, ϕ is the phase difference of the quantum mechanical wave functions of the superconductors defining the Josephson junction, C is the magnitude of the junction capacitance, R is the model shunting resistance, and I_c is the critical current, while I_dc and I_ac sin ω_d t represent, respectively, the continuous and alternating bias current flowing through the junction. The term N(t) represents the thermal noise current due to the resistor R, given by the thermodynamic dissipation-fluctuation relationship [13]

$\langle N(t)\rangle = 0,$  (2)
$\langle N(t)\,N(t')\rangle = \frac{2 k_B T}{R}\,\delta(t-t'),$  (3)

with T being the temperature. The symbol δ(t − t′) is the Dirac delta function. Current and time are usually normalized respectively to the Josephson critical current I_c and to ω_0^−1, where ω_0 = √(2eI_c/ħC) is the Josephson plasma frequency. With this normalization, the coefficient of the first-order phase derivative becomes the normalized dissipation α = ħω_0/2eRI_c. It is also convenient to scale the energies to the Josephson energy E_J = ħI_c/2e. Thus, the set of equations (1-3) can be expressed in normalized form as

$\ddot\phi + \alpha\dot\phi + \sin\phi = \eta + \eta_d\sin\Omega_d\tau + n(\tau),$  (4)
$\langle n(\tau)\rangle = 0,$  (5)
$\langle n(\tau)\,n(\tau')\rangle = 2\alpha\theta\,\delta(\tau-\tau'),$  (6)

where θ = k_B T/E_J is the normalized temperature. The normalized dc and ac currents are η = I_dc/I_c and η_d = I_ac/I_c, respectively. For small-amplitude oscillations around a stable (zero-voltage) energetic minimum we obtain the standard relationship between resonance frequency and bias current,

$\Omega_p = (1-\eta^2)^{1/4},$  (7)

where we have omitted the dissipative contribution to the resonance frequency. However, this linear resonance is not directly relevant for the dynamics leading to anomalous resonant switching. Looking at Figure 1 it is obvious that a switching event will arise from probing the anharmonic region of the potential near the local energetic maximum, and we therefore must anticipate a depression of the resonance frequency at these large amplitudes.
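As an illustration of how the normalized model (4)-(6) can be integrated numerically, the following short Python sketch uses a simple Euler-Maruyama scheme; the step size, trajectory length, drive parameters, and initial condition are illustrative assumptions, not the scheme used for the results reported here.

```python
import numpy as np

def simulate_rcsj(eta, eta_d, Omega_d, alpha, theta, dt=0.01, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of the normalized, thermally driven RSCJ model
    (4)-(6): phi'' + alpha*phi' + sin(phi) = eta + eta_d*sin(Omega_d*tau) + n(tau),
    with <n(tau) n(tau')> = 2*alpha*theta*delta(tau - tau')."""
    rng = np.random.default_rng(seed)
    phi = np.arcsin(min(eta, 1.0))            # start near the bottom of the well
    v = 0.0                                   # v = dphi/dtau (normalized voltage)
    noise_scale = np.sqrt(2.0 * alpha * theta / dt)
    phi_t, v_t = np.empty(n_steps), np.empty(n_steps)
    for i in range(n_steps):
        tau = i * dt
        n = noise_scale * rng.standard_normal()
        acc = eta + eta_d * np.sin(Omega_d * tau) - alpha * v - np.sin(phi) + n
        v += acc * dt
        phi += v * dt
        phi_t[i], v_t[i] = phi, v
    return phi_t, v_t

# Example call with illustrative drive values (alpha and theta as quoted later in the text):
# phi_t, v_t = simulate_rcsj(eta=0.9, eta_d=0.02, Omega_d=0.6, alpha=0.00845, theta=4.76e-4)
```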
In order to quantify this notion, we will adopt the ansatz φ = φ_0 + ψ, where φ_0 is a constant and ψ represents oscillatory motion. Inserting this ansatz into equation (4) and making the single-mode assumption ψ = a sin(Ω_d t + κ), κ being some constant phase, we obtain the anharmonic resonance frequency Ω_res, equation (11), the functions J_n being Bessel functions of the first kind. Notice that Ω_res → Ω_p for a → 0. Since multi-peaked switching distributions must require some switching events to happen near the resonance and others to happen for larger bias currents, we can estimate that the amplitude a must be approximately given by the phase distance from the energetic minimum to the saddle point. Approximating J_0(a) by its Taylor expansion, we can arrive at a simple expression, equation (13), relating the oscillation amplitude to the applied bias current η. Inserting this approximate expression into (11) gives an explicit relationship between the anharmonic resonance and the bias current, relevant for the bias current location of the anomalous secondary peak in the switching distribution.
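Two ingredients of this argument can be written down directly: the normalized linear resonance of equation (7) and the minimum-to-saddle phase distance of the washboard potential, which serves as the approximate oscillation amplitude. The sketch below evaluates both; the full anharmonic expressions (11) and (13) are not reproduced here, since they could not be recovered from the extracted text.

```python
import numpy as np

def linear_plasma_resonance(eta):
    """Normalized linear plasma resonance of equation (7): Omega_p = (1 - eta^2)^(1/4)."""
    return (1.0 - np.asarray(eta) ** 2) ** 0.25

def saddle_distance(eta):
    """Phase distance from the energetic minimum (arcsin eta) to the saddle point
    (pi - arcsin eta) of the tilted washboard potential: pi - 2*arcsin(eta)."""
    return np.pi - 2.0 * np.arcsin(np.asarray(eta))

bias = np.linspace(0.0, 0.99, 5)
print(linear_plasma_resonance(bias))
print(saddle_distance(bias))
```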
Experiments and Simulations
Experiments were performed on Josephson tunnel junctions fabricated according to classical Nb-NbAlOx-Nb procedures [14]. The samples had very good current-voltage characteristics and magnetic field diffraction patterns. The junctions were cooled in a 3He refrigerator (Oxford Instruments Heliox system), providing temperatures down to 360 mK. Microwave radiation, brought to the chip-holders by a coax cable, was coupled capacitively to the junctions, and the junction had a maximum critical current of I_c = 143 µA and a total capacitance of 6 pF, from which we estimate a plasma frequency of ω_0/2π = 42.5 GHz. From this value of the plasma frequency the classical to quantum crossover temperature [15] T* = ħω_0/2πk_B = 325 mK between classical thermal and quantum mechanical behavior can be estimated. The sweep rate of the continuous current I_dc was İ_dc = 800 mA/s, and we verified that the experiment was being conducted in adiabatic conditions [11]. The junction has a Josephson energy E_J ≈ 46.4 · 10^−21 J in the temperature range from 370 mK to 1.6 K, and effective resistance R = 74 Ω. Evaluation of the dissipation parameter was based on the hysteresis of the current-voltage characteristics of the junctions [1]. We show data for two temperatures, T ≈ 388 mK and T = 1.6 K. Figure 2 shows experimentally obtained results at T = 1.6 K [12]. The lower frames of the figure display the switching distributions in bias current at different microwave frequencies. The top frame shows the relationship between the normalized current, for which the switching distributions have their resonant peak, and the applied frequency (normalized to the junction plasma frequency). Each black marker represents one of the switching distributions. Also shown in figure 2 is the linear resonance of equation (7), shown as a dashed curve, and the anharmonic resonance of equations (11) and (13), shown as a solid curve. The agreement between the experimental measurements and the anharmonic theory of the classical model is near perfect for the available data points, and we emphasize that the theoretical model of equations (11) and (13) has no fitted parameters to adjust in the comparison. Thus, the consistent depression of the experimental data relative to a linear resonance consideration, observed in Ref. [12], should be expected and not give rise to re-fitting the critical current or the plasma resonance frequency.
Fig. 2. Experimentally obtained switching distributions, ρ(η), for the microwave-driven junction obtained for increasing values of the drive frequency. The frequency data points in the uppermost plot are relative to the position of the secondary peak in the plots. Temperature is T = 1.6 K, and bias sweep rate is İ = 800 mA/s. Dashed curve in uppermost graph represents the linear plasma resonance of (7), while the solid curve represents the anharmonic resonance of (11) and (13).
Figure 3 shows experimentally obtained results at T = 388 mK, presented in the same manner as the data in figure 2. The agreement between the experimental measurements and the anharmonic theory of the classical model is again near perfect for the available data points. The resonance curves shown in figures 2 and 3 are identical, since we have not included the resonance dependence on the dissipation (since dissipation is very small) and since the measurements indicated that the plasma resonance frequency and critical current were unchanged in the investigated temperature range.
By comparing figures 2 and 3, we notice that there seem to be no qualitative (and hardly any quantitative) differences between the data obtained at the two very different temperatures, even though the data of figure 2 were acquired at T ≈ 5T* and the data in figure 3 represent T ≈ 1.2T*.
Fig. 3. Same experimental situation as described in figure 2, but the data here are acquired at T = 388 mK.
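The quoted junction parameters can be cross-checked with a few lines of Python; the constants are standard CODATA values, and small differences from the numbers quoted above reflect rounding of the junction capacitance and critical current.

```python
from math import pi, sqrt

e    = 1.602176634e-19   # C
hbar = 1.054571817e-34   # J*s
k_B  = 1.380649e-23      # J/K

I_c = 143e-6             # A, maximum critical current
C   = 6e-12              # F, total junction capacitance

omega_0 = sqrt(2 * e * I_c / (hbar * C))     # Josephson plasma frequency (rad/s)
T_star  = hbar * omega_0 / (2 * pi * k_B)    # classical-to-quantum crossover temperature
E_J     = hbar * I_c / (2 * e)               # Josephson energy

print(f"f_p = {omega_0 / (2 * pi) / 1e9:.1f} GHz")   # ~42-43 GHz
print(f"T*  = {T_star * 1e3:.0f} mK")                # ~325 mK
print(f"E_J = {E_J * 1e21:.1f}e-21 J")               # ~47e-21 J
```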
Numerical simulations of escape in a system described by equations (4)-(6), corresponding to the experiments, with α = 0.00845, θ = 115.4·10⁻⁶ and 4.76·10⁻⁴, and continuous bias sweep rate dη/dτ = 2.1·10⁻⁸, have also been conducted in order to investigate the purely classical dynamics in comparison with the experimental measurements. The parameters have been chosen in agreement with the experiments discussed above. Switching distributions (each corresponding to 1,000-10,000 events), obtained for different values of the normalized drive frequency and temperature, were computed as a function of the continuous bias, and secondary resonant peaks in the distribution were easily obtained in the classical model by adjusting the simulated microwave amplitude for a given frequency. Figure 4 shows both experimental measurements and direct numerical simulations of the resonant peak location as a function of applied microwave frequency at T = 1.6 K ≈ 5T*. Experimental data are shown as box markers and numerically obtained data are shown as circles. As in figures 2 and 3, dashed curves represent the linear resonance (7) while the solid curves are generated from (11) and (13). Experimental data for the fundamental resonance (labeled Ω_p) in the figure are the ones from figure 2. We clearly observe the close agreement between theory, experiment, and simulation. We also present data for subharmonic resonances, and here too we find very close agreement between simulation and experiment. The theoretical harmonic and subharmonic resonance curves are the ones of equations (7), (11), and (13) multiplied by the fraction indicated in the figure. This simple theory seems to also predict the sub-harmonic microwave-induced resonant peak location very well.
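A single escape event of the kind histogrammed in these simulations can be sketched as follows; the drive amplitude, the (much faster than experimental) bias sweep rate, and the running-state detection threshold are illustrative assumptions only.

```python
import numpy as np

def switching_bias(alpha, theta, eta_d, Omega_d, deta_dtau=1e-5, dt=0.05,
                   v_run=2.0, seed=None):
    """Ramp the dc bias while integrating the noisy RSCJ equation (4)-(6) and
    return the bias value at which the phase runs away (the junction switches
    to the voltage state). The experimental normalized sweep rate is ~2.1e-8;
    a faster rate is used here so that a single run finishes quickly."""
    rng = np.random.default_rng(seed)
    phi, v, eta, tau = 0.0, 0.0, 0.0, 0.0
    noise_scale = np.sqrt(2.0 * alpha * theta / dt)
    while eta < 1.0:
        n = noise_scale * rng.standard_normal()
        acc = eta + eta_d * np.sin(Omega_d * tau) - alpha * v - np.sin(phi) + n
        v += acc * dt
        phi += v * dt
        tau += dt
        eta += deta_dtau * dt
        if v > v_run:                 # crude detector for the running (voltage) state
            return eta
    return 1.0

# A histogram of many such events approximates the switching distribution rho(eta):
# events = [switching_bias(8.45e-3, 4.76e-4, eta_d=0.02, Omega_d=0.9, seed=k) for k in range(1000)]
```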
We finally show, in figure 5, data similar to those in figure 4, taken at T = 388 mK ≈ 1.2T*. At this temperature, too, we observe very close agreement between experiment, simulation, and theory, amplifying the notion that microwave-induced switching and anomalous switching distributions can be understood within a classical, thermal framework.
Conclusion
In conclusion, our theory and experiments on ac-driven thermal escape of a classical particle from a one-dimensional potential well have shown that resonant coupling (harmonic or subharmonic) between the applied microwaves and the plasma resonance frequency provides an enhanced opportunity for escape, and we have directly observed the signatures of such microwave-induced escape distributions in the form of anomalous multi-peaked escape statistics at two temperatures, T = 388 mK ≈ 1.2T* and T = 1.6 K ≈ 5T*. The straightforward agreement between the classical hypothesis of anomalous distributions being directly produced by ac-induced anharmonic resonances, the direct numerical simulations, and the experimental measurements supports this interpretation. It is noted that previous experimental work on ac-induced escape distributions obtained at temperatures below T* is consistent with the observations presented here. Those experiments have produced ac-induced peaks in the observed switching distributions, and the relevant peaks are located alongside the expected classical plasma resonance curve, as we have also found here. An important observation is that the microwave-radiation frequency necessary for populating an excited quantum level (ħω_d) in a quantum oscillator coincides with the classical resonance frequency of the corresponding classical oscillator. Thus, the switching distributions obtained from classical and quantum mechanical oscillators may exhibit the same microwave-induced multi-peak signatures, which in the classical interpretation are merely due to resonant nonlinear effects. It is evident then that multi-peaked switching distributions are not a unique signature of quantum behavior in the ac-driven Josephson junction. We finally point out that similar anomalous (resonant) switching has been observed both experimentally [16] and theoretically [17] for single-fluxon behavior in long annular Josephson junctions in an external magnetic field.
Acknowledgment
This work was supported in part by the Computational Nanoscience Group, Motorola, Inc., and in part by INFN under the project SQC (Superconducting Quantum Computing). NGJ acknowledges generous hospitality during several visits to the Department of Physics, University of Rome "Tor Vergata".
Power and Transmission Duration Control for Un-Slotted Cognitive Radio Networks
We consider an unslotted primary channel with alternating on/off activity and provide a solution to the problem of finding the optimal secondary transmission power and duration given some sensing outcome. The goal is to maximize a weighted sum of the primary and secondary throughput where the weight is determined by the minimum rate required by the primary terminals. The primary transmitter sends at a fixed power and a fixed rate. Its on/off durations follow an exponential distribution. Two sensing schemes are considered: perfect sensing in which the actual state of the primary channel is revealed, and soft sensing in which the secondary transmission power and time are determined based on the sensing metric directly. We use an upperbound for the secondary throughput assuming that the secondary receiver tracks the instantaneous secondary channel state information. The objective function is non-convex and, hence, the optimal solution is obtained via exhaustive search. Our results show that an increase in the overall weighted throughput can be obtained by allowing the secondary to transmit even when the channel is found to be busy. For the examined system parameter values, the throughput gain from soft sensing is marginal. Further investigation is needed for assessing the potential of soft sensing.
I. INTRODUCTION
The current scheme of fixed spectrum allocation poses a significant obstacle to the objective of expanding the capacity and coverage of broadband wireless networks. One solution to the problem of under-utilization caused by static spectrum allocation is cognitive radio technology. In cognitive radio networks, two classes of users coexist. The primary users are the classical licensed users, whereas the cognitive users, also known as the secondary or unlicensed users, attempt to utilize the resources unused by the primary users following schemes and protocols designed to protect the primary network from interference and service disruption. There are two main scenarios for the primary-secondary coexistence. The first is the overlay scenario where the secondary transmitter checks for primary activity before transmitting. The secondary user utilizes a certain resource, such as a frequency channel, only when it is unused by the primary network. The second scenario is the underlay system where simultaneous transmission is allowed to occur so long as the interference caused by secondary transmission on the primary receiving terminals is limited below a certain level determined by the required primary quality of service.
(This work was supported in part by a grant from the Egyptian National Telecommunications Regulatory Authority.)
There is a significant amount of research that pertains to the determination of the optimal secondary transmission parameters to meet certain objectives and constraints. The research in this area has two main flavors. The first takes a physical layer perspective and focuses on the secondary power control problem given the channel gains between the primary and secondary transmitters and receivers. The traffic pattern on the primary channel is typically not included in this approach save for a primary activity factor such as in [1]. On the other hand, the second line of research concentrates on primary traffic and seeks to obtain the optimal time between secondary sensing activities in an unslotted system, or the optimal decision, whether to sense or transmit, in a slotted system. Usually under this approach the physical layer is abstracted and the assumption is made that any two packets transmitted in the same time/frequency slot are incorrectly received [2], [3] and [4].
In this paper, we assume knowledge of both primary traffic pattern and channel gains between a primary and a secondary pair. The objective is to utilize this knowledge to determine both the optimal secondary transmission power and time after which the secondary transmitter needs to cease transmission and sense the primary channel again to detect primary activity. We allow for secondary transmission even when the channel is perfectly sensed to be busy. The objective is to maximize a weighted sum of primary and secondary rates given the channel gains and the primary traffic distribution functions for the on/off durations. We consider two sensing schemes: perfect sensing where the secondary transmitter knows, through sensing, the actual state of the primary transmitter. The second scheme is soft sensing, introduced in [1], where secondary transmission parameters are determined directly from some sensing metric.
The paper is organized as follows: in section II the system model is introduced. The optimization problem of maximizing the weighted sum rates is provided in Section III. In Section IV, we provide simulation results. Section V concludes the paper.
II. SYSTEM MODEL
We consider an unslotted primary channel with alternating on/off primary activity similar to the model employed in [4]. We assume that the probability density function (pdf) of the duration of the on period is exponential and is given by

$f_{on}(t) = \lambda_{on}\, e^{-\lambda_{on} t},$

where λ_on is the reciprocal of the mean on duration T_on. Similarly, the pdf of the off duration is

$f_{off}(t) = \lambda_{off}\, e^{-\lambda_{off} t},$

with λ_off = 1/T_off, where T_off is the mean of the off duration. The channel utilization factor u is given by u = T_on/(T_on + T_off). Based on results from renewal theory [6], the probability that the primary channel is free at time t′ + t given that it is free at time t′ is

$(1-u) + u\, e^{-(\lambda_{on}+\lambda_{off})\, t}.$

Given that the channel is busy at time t′, the probability of being free at t′ + t is

$(1-u)\left(1 - e^{-(\lambda_{on}+\lambda_{off})\, t}\right).$

The primary transmitter sends with a fixed power P_p and at a fixed rate r∘. A secondary pair tries to communicate over the same channel utilized by the primary terminals. As seen in Figure 1, we denote the gain between primary transmitter and primary receiver as g_pp, the gain between secondary transmitter and secondary receiver as g_ss, the gain between primary transmitter and secondary receiver as g_ps, and finally the gain between secondary transmitter and primary receiver as g_sp. We assume Rayleigh fading channels and, hence, the channel gains are exponentially distributed with mean values ḡ_sp, ḡ_ss, ḡ_ps, and ḡ_pp. The channel gains are independent of one another, and the primary and secondary receivers are assumed to know their instantaneous values. The secondary transmitter does not transmit while sensing the channel. It senses the channel for a constant time t_s assumed to be much smaller than transmission times T_on and T_off. This assumption guarantees that the primary is highly unlikely to change state during the sensing period. Based on the sensing outcome, the secondary transmitter determines its own transmit power and the duration of transmission after which it has to sense the primary channel again.
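A short Python sketch of the two conditional free-channel probabilities used throughout the formulation is given below; it assumes the standard two-state exponential on/off result quoted above.

```python
import numpy as np

def p_free_given_free(t, T_on, T_off):
    """Probability that the channel is free at t'+t given that it was free at t'."""
    lam = 1.0 / T_on + 1.0 / T_off          # lambda_on + lambda_off
    u = T_on / (T_on + T_off)               # channel utilization factor
    return (1.0 - u) + u * np.exp(-lam * t)

def p_free_given_busy(t, T_on, T_off):
    """Probability that the channel is free at t'+t given that it was busy at t'."""
    lam = 1.0 / T_on + 1.0 / T_off
    u = T_on / (T_on + T_off)
    return (1.0 - u) * (1.0 - np.exp(-lam * t))

# With the simulation values used later in the paper (T_on = 4, T_off = 5):
print(p_free_given_free(1.0, 4.0, 5.0), p_free_given_busy(1.0, 4.0, 5.0))
```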
III. OPTIMAL POWER LEVEL AND TRANSMISSION TIME
In this section, we explain the problem of finding the optimal secondary transmission time and power given the outcome of the sensing process.
A. Problem Formulation
We formulate the cognitive power and transmission time control problem as an optimization problem with the objective of maximizing a weighted sum of the primary, R p , and secondary, R s , rates. Specifically, we seek to maximize E{(1 − α) R s + αR p }, where E{.} denotes the expectation operation over the sensing outcome and primary activity. The constant α ∈ [0, 1] is chosen on the basis of the required primary throughput. The constraints of the optimization problem are that the secondary power lies in the interval [0, P max ], and that the time between sensing operations exceeds t s . The problem is generally non-convex and, consequently, we resort to exhaustive search to obtain the solution when the number of optimization parameters is small.
In this paper, we consider two sensing scenarios: 1) perfect sensing with no sensing errors where the cognitive transmitter knows the exact state of primary activity after sensing the channel, and 2) soft sensing where the cognitive transmitter uses some sensing metric γ, say the output of an energy detector, to determine its transmission parameters. Assuming perfect sensing, the parameters used to maximize the weighted sum throughput are P F and T F defined as the power and transmission time when the primary channel is free, and P B and T B corresponding to the busy primary state. Under the soft sensing mode of operation, the range of values of γ is divided into intervals and the transmission power and time are determined based on the interval on which the actual sensing metric γ lies. The parameters to optimize the rate objective function are the transmission powers and times corresponding to each interval and also the boundaries between intervals.
We assume that the primary link is in outage whenever the primary rate r∘ exceeds the capacity of the primary channel. The primary outage probability when the secondary transmitter emits power p is given by

$P_o(p) = \Pr\left\{ r_\circ > \log\left(1 + \frac{P_p\, g_{pp}}{p\, g_{sp} + \sigma_p^2}\right)\right\},$

where σ_p² is the noise variance of the primary receiver. The expression of P_o(p) for Rayleigh fading channels is given in the Appendix. We assume that the channel gains vary slowly over time and are almost constant over several epochs of primary and secondary transmission.
For the secondary rate, we assume that the secondary receiver tracks the instantaneous capacity of the channel and, hence, the maximum achievable rate is obtained by averaging over the channel gains and interference levels [7, equation 8]. The ergodic capacity of the secondary channel when the cognitive transmitter emits power p and the primary transmitter is off is expressed as

$C_\circ(p) = E\left\{\log\left(1 + \frac{p\, g_{ss}}{\sigma_s^2}\right)\right\},$

where σ_s² is the noise variance of the secondary receiver. When there are simultaneous primary and secondary transmissions, the ergodic capacity of the secondary channel becomes

$C_1(p) = E\left\{\log\left(1 + \frac{p\, g_{ss}}{P_p\, g_{ps} + \sigma_s^2}\right)\right\}.$

We provide expressions for C∘(p) and C_1(p) in the Appendix.
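The three quantities above are easy to estimate by Monte Carlo for Rayleigh fading; the sketch below uses the channel-A parameter values listed later in Section IV and is an illustrative alternative to the closed-form expressions of the Appendix.

```python
import numpy as np

rng = np.random.default_rng(1)

def monte_carlo_rates(p, P_p=100.0, r0=4.5, sigma_p2=1.0, sigma_s2=1.0,
                      g_pp=3.0, g_sp=2.0, g_ss=2.0, g_ps=0.03, n=200_000):
    """Monte Carlo estimates of the primary outage probability P_o(p) and of the
    secondary ergodic capacities C_0(p) (primary idle) and C_1(p) (primary active),
    with independent exponentially distributed (Rayleigh fading) channel gains."""
    gpp = rng.exponential(g_pp, n)
    gsp = rng.exponential(g_sp, n)
    gss = rng.exponential(g_ss, n)
    gps = rng.exponential(g_ps, n)

    primary_rate = np.log(1.0 + P_p * gpp / (p * gsp + sigma_p2))
    P_o = np.mean(r0 > primary_rate)                              # primary outage
    C0 = np.mean(np.log(1.0 + p * gss / sigma_s2))                # primary off
    C1 = np.mean(np.log(1.0 + p * gss / (P_p * gps + sigma_s2)))  # primary on
    return P_o, C0, C1

print(monte_carlo_rates(p=10.0))
```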
B. Perfect Sensing
By perfect sensing we mean that the state of the channel, whether vacant or occupied, is known without error after the channel is sensed. There are four parameters that are used to maximize the weighted sum rate. These are: the secondary power when the channel is sensed to be free, P_F, the duration of transmission when the channel is sensed to be free, T_F, the secondary power when the channel is sensed to be busy, P_B, and the duration of transmission when the channel is sensed to be busy, T_B. Before formulating the optimization problem under perfect sensing, we need to introduce several parameters that pertain to the primary traffic. The probability, π_m, that the mth observation of the channel occurs when the channel is free can be calculated using the Markovian property of the traffic model.
Another parameter is P_ss, which is the steady state fraction of time the channel is free when sensed according to some scheme. In the perfect sensing scheme, the channel, when sensed free, is sensed again after t_s + T_F. When sensed busy, it is sensed again after t_s + T_B. Parameter P_ss can be obtained by setting π_m = π_{m−1} = P_ss in (9) and solving. The average time between sensing times is then t_s + P_ss T_F + (1 − P_ss) T_B. Finally, we also need the average time the channel is free during a period of t units of time if it is sensed to be free. We denote this quantity by δ∘(t), given in (12) [3]; it corresponds to integrating the probability of being free (given free at the sensing instant) over the interval [0, t]. On the other hand, if the channel is sensed to be busy, the average time the channel is free during a period of t units of time, δ_1(t), is obtained analogously [3]. The secondary throughput averaged over primary activity, given in (14), consists of four terms. The first two terms are the secondary throughput obtained if the primary is inactive when the channel is sensed. When the sensing outcome is that the channel is free, the secondary emits power P_F for a duration T_F. During the secondary transmission period, the primary transmitter may resume activity. The average amount of time the primary remains idle during a period of length T_F after the channel is sensed to be free is obtained by using t = T_F in (12). This is the duration of secondary transmission free from interference from the primary transmitter. On the other hand, the primary transmits during secondary operation for an average period of T_F − δ∘(T_F). The last two terms in (14) are the same as the first two but when the channel is sensed to be busy. In this case, the secondary transmit power is P_B and the transmission time is T_B, of which a duration of δ_1(T_B) is free, on average, from primary interference. The primary throughput is given by (15). We ignore the primary throughput that may be achieved during the sensing period because t_s is assumed to be much smaller than T_on and T_off. The two terms of (15) correspond to the sensing outcomes of the channel being free and busy, respectively. The optimization problem can then be written as: find T_F, T_B, P_F, and P_B that maximize E{(1 − α)R_s + αR_p}, subject to the power and sensing-time constraints of Section III-A.
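The exhaustive search over the four perfect-sensing parameters can be organized as in the following sketch; the grid resolution is an illustrative choice, the transmission-time bound mirrors the artificial upper bound of 20 mentioned in Section IV, and the weighted-sum objective of (14)-(15) must be supplied by the caller.

```python
import itertools
import numpy as np

def grid_search(objective, P_max=10.0, T_max=20.0, n_p=21, n_t=21):
    """Exhaustive search over (P_F, T_F, P_B, T_B) maximizing a user-supplied
    objective(P_F, T_F, P_B, T_B) -> E{(1-alpha)*R_s + alpha*R_p}.
    Powers lie in [0, P_max]; transmission times in (0, T_max]."""
    powers = np.linspace(0.0, P_max, n_p)
    times = np.linspace(T_max / n_t, T_max, n_t)
    best_val, best_args = -np.inf, None
    for P_F, T_F, P_B, T_B in itertools.product(powers, times, powers, times):
        val = objective(P_F, T_F, P_B, T_B)
        if val > best_val:
            best_val, best_args = val, (P_F, T_F, P_B, T_B)
    return best_val, best_args
```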
C. Soft Sensing
Soft sensing means that the sensing metric is used directly to determine the secondary transmission power and duration. In the sequel, we re-formulate the weighted sum throughput optimization problem assuming quantized soft sensing, where the sensing metric, from a matched filter or an energy detector for instance, is quantized before determining the power and duration of transmission. Let γ be the sensing metric with the known conditional pdfs f∘(γ), given that the primary is in the idle state, and f_1(γ), conditioned on the primary transmitter being active. We assume that the number of quantization levels is S + 1. The kth level extends from threshold γ^th_{k−1} to γ^th_k, assuming that γ^th_0 = 0 and γ^th_{S+1} = ∞. The probability that the metric γ is between γ^th_{k−1} and γ^th_k when the primary channel is free is given by

$\epsilon_k = \int_{\gamma^{th}_{k-1}}^{\gamma^{th}_k} f_\circ(\gamma)\, d\gamma,$

where k = 1, 2, ..., (S + 1). On the other hand, the probability that γ is between γ^th_{k−1} and γ^th_k when the primary channel is busy is given by

$\vartheta_k = \int_{\gamma^{th}_{k-1}}^{\gamma^{th}_k} f_1(\gamma)\, d\gamma.$

When γ is between γ^th_{k−1} and γ^th_k, the secondary transmitted power is P_k and the duration of transmission is T_k.
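The interval probabilities can be evaluated numerically once the conditional densities are fixed; the sketch below uses the densities given later in Section IV-B, and the threshold value is purely illustrative.

```python
import numpy as np
from scipy import integrate, special

def interval_probs(thresholds, gamma_0=3.0):
    """Per-interval probabilities eps_k (channel free) and theta_k (channel busy)
    of the sensing metric, for f0(g) = exp(-g) and
    f1(g) = exp(-(g + gamma_0)) * I0(2*sqrt(g*gamma_0))."""
    f0 = lambda g: np.exp(-g)
    f1 = lambda g: np.exp(-(g + gamma_0)) * special.i0(2.0 * np.sqrt(g * gamma_0))
    edges = [0.0] + list(thresholds) + [np.inf]
    eps, theta = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        eps.append(integrate.quad(f0, lo, hi)[0])
        theta.append(integrate.quad(f1, lo, hi)[0])
    return eps, theta

# One-threshold case: eps[1] is the false-alarm probability and theta[0] the
# miss-detection probability, here for an (illustrative) threshold of 2.0.
print(interval_probs([2.0]))
```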
As in the perfect sensing case, the probability that the mth observation of the channel happens when the channel is free, denoted by π_m, can be calculated using the Markovian property of the channel model.
At steady state, π_{m−1} = π_m, which gives the steady state probability of sensing the channel while it is free. From this, the average time between sensing events, the mean secondary throughput averaged over the primary activity and the sensing metric, and the mean primary throughput follow in the same manner as in the perfect sensing case.
IV. NUMERICAL RESULTS
In this section we present simulation results for the perfect and soft sensing schemes discussed in Section III. The weighted sum rate maximization problem is non-convex; hence, we perform an exhaustive search to obtain the optimal parameters. The parameters used in our simulations presented here are: T_on = 4, T_off = 5, t_s = 0.05, r∘ = 4.5 nats, σ_s² = σ_p² = 1, P_p = 100, P_max = 10, ḡ_ss = 2, ḡ_pp = 3, and ḡ_ps = 0.03. In order to do the exhaustive search, we have imposed an artificial upperbound on transmission time equal to 20. We analyze the results for perfect sensing in Subsection IV-A and for soft sensing in Subsection IV-B. The parameters for channels A and B used in the analysis are the same except for ḡ_sp, which is equal to 2 for channel A and 0.2 for channel B.
A. Perfect Sensing
The weighted sum throughput versus α is shown in Figure 2 for channels A and B. It is clear from the figure that as the gain ḡ_sp increases, the level of interference at the primary receiver increases, leading to lower data rates. The optimal transmission power and time parameters for channel A are given in Figure 3. For small α values, which correspond to giving more importance to the secondary throughput, the secondary transmitter emits P_max whether the channel is sensed to be free or busy. The transmission times for both sensing outcomes are the maximum possible. Recall that this maximum is artificial and is imposed by the exhaustive search solution. In fact, for α approaching zero, the secondary transmitter sends with P_max continuously without the need to sense the channel again. If the optimal P_F = P_B, then sensing becomes superfluous because the exact same power would be used regardless of the sensing outcome. As α increases, the power transmitted when the channel is sensed to be busy is reduced below P_max. In addition, the transmission times are reduced for more frequent checking of primary activity. As α approaches unity, the secondary transmitter is turned off and the channel is not sensed. Figure 4 gives the optimal transmission parameters for channel B. It is evident from the figure that as the level of interference from the secondary transmitter to the primary receiver is decreased, P_B becomes lower than P_max at a higher α compared to channel A. If we make ḡ_sp = 0.002, the secondary transmits all the time with maximum power regardless of the sensing outcome. This is shown in Figure 5.
B. Soft Sensing
In the soft sensing case, the optimization parameters are 2(S + 1) transmission powers and times corresponding to each quantization level. There are also S thresholds defining the boundaries of the quantization levels. Hence, the total number of parameters is 3S + 2. The conditional distributions of the sensing metric γ used in the simulations are f∘(γ) = exp(−γ) and f_1(γ) = exp(−[γ + γ∘]) I∘(2√(γγ∘)), where I∘ is the zeroth-order modified Bessel function and γ∘ is a parameter related to the mean value of f_1(γ). We present here the results for one and two thresholds. The case of one threshold corresponds to the imperfect sensing case where the primary is assumed to be active when γ exceeds some threshold and inactive otherwise. The false alarm probability is given by ε_2, whereas the miss detection probability is ϑ_1. Figures 7 and 8 give the optimal parameters as a function of α and for γ∘ = 3. As is evident from the figures, the optimal threshold decreases with α. Under the imperfect sensing interpretation of the one-threshold case, this means that as α increases, putting more emphasis on the primary rate, the required false alarm probability is increased while the miss detection probability is decreased to reduce the chance of collision with the primary user. The effect of different values for γ∘ is given in Figure 6, where the weighted sum rate is higher for γ∘ = 10 than for γ∘ = 3. This is attributed to the increased distance between the distributions f∘(γ) and f_1(γ), thereby lowering the false alarm and miss detection probabilities. Figure 9 shows the weighted sum throughput using one and two thresholds for channel B and γ∘ = 3. There is a range of α values for which the two-threshold scheme improves the weighted sum rate very slightly.
Fig. 6. Soft sensing weighted sum throughput for one threshold with different γ∘ versus α for channel B.
V. CONCLUSION
We have investigated the problem of specifying transmission power and duration in an underlay unslotted cognitive radio network, where the primary transmission duration follows an exponential distribution. We used an upperbound for the secondary throughput, and obtained, numerically, the optimal secondary transmission power and duration that maximize a weighted sum of the primary and secondary throughputs. Note that, at the particular values obtained, the solutions obtained from our optimization problem are the same as those that would be obtained from a constrained optimization problem where one seeks to maximize the secondary throughput while constraining the primary throughput to be above a certain value. Our results also showed that an increase in the overall weighted throughput can be obtained by allowing the secondary to transmit even when the channel is found to be busy. We extended our formulation to the soft sensing case where the decision of the secondary transmission power and duration depends on the quantized value of the sensing metric, rather than on the binary decision of whether the channel is free or not. However, our preliminary results show that the gain from using this scheme, for the range of parameters we have simulated, is minimal.
VI. APPENDIX
We provide here the evaluation of (6), (7), and (8) for exponential channel gains. The outage probability (6) can be written as

$P_\circ(p) = \Pr\left\{ r_\circ > \log\left(1 + \frac{a\, g_{pp}}{b\, g_{sp} + 1}\right)\right\} = \Pr\left\{ a\, g_{pp} - b c\, g_{sp} < c \right\},$

where a = P_p/σ_p², b = p/σ_p², and c = exp(r∘) − 1. Assuming that g_pp and g_sp are independent and exponentially distributed with means ḡ_pp and ḡ_sp, the outage probability can be evaluated in closed form. Assuming an exponential distribution for g_ss with mean ḡ_ss, (7) follows by averaging log(1 + p g_ss/σ_s²) over g_ss. Assuming that g_ss and g_ps are independent and have means ḡ_ss and ḡ_ps, respectively, (8) can be expressed as

$C_1(p) = \int_0^\infty\!\!\int_0^\infty \log\left(1 + \frac{p\, g_{ss}}{P_p\, g_{ps} + \sigma_s^2}\right) \frac{1}{\bar g_{ss}}\exp\left(-\frac{g_{ss}}{\bar g_{ss}}\right) \frac{1}{\bar g_{ps}}\exp\left(-\frac{g_{ps}}{\bar g_{ps}}\right) dg_{ss}\, dg_{ps}.$
A zero density change phase change memory material: GeTe-O structural characteristics upon crystallisation
Oxygen-doped germanium telluride phase change materials are proposed for high temperature applications. Up to 8 at.% oxygen is readily incorporated into GeTe, causing an increased crystallisation temperature and activation energy. The rhombohedral structure of the GeTe crystal is preserved in the oxygen doped films. For higher oxygen concentrations the material is found to phase separate into GeO2 and TeO2, which inhibits the technologically useful abrupt change in properties. Increasing the oxygen content in GeTe-O reduces the difference in film thickness and mass density between the amorphous and crystalline states. For oxygen concentrations between 5 and 6 at.%, the amorphous material and the crystalline material have the same density. Above 6 at.% O doping, crystallisation exhibits an anomalous density change, where the volume of the crystalline state is larger than that of the amorphous. The high thermal stability and zero density change of oxygen-incorporated GeTe recommend it for efficient, low-stress phase change memory devices that may operate at elevated temperatures.
GeTe is known as a growth-dominated material that is capable of crystallising on a nanosecond time-scale [19][20][21] with cyclability comparable to Ge 2 Sb 2 Te 5 22 . At temperatures below 430 °C, GeTe shows a rhombohedral structure (space group R3m, No. 160) that can be visualised as a distorted rock-salt structure with Ge and Te atoms occupying the cation and anion lattice sites, respectively 23 . Moreover, the characteristics of the GeTe material have been tailored by element doping strategies, such as incorporating C 24 , N 25 , Bi 26 , and Cu 27 .
Although oxygen is a favoured doping element, it is surprising that there are no reports of deliberately doping oxygen into GeTe for data storage applications. Moreover, there are limited reports of environmental oxidation of GeTe films 28,29 . Therefore, the aims of the present work are: (1) to investigate the effect of oxidation on the fundamental crystallisation properties of GeTe and (2) to explore a hypothesised increase in the stability of GeTe-O materials, which could extend the use of PCMs to extreme environments. Hence, in this work the phase change behavior of GeTe-O films as a function of oxygen concentration is presented. This study has led to a GeTe-O composition with a unique set of phase change characteristics that includes a phase transition without volume change, which shows increased stability at high temperature. We believe that this composition is a promising candidate for PCM applications that require high thermal stability.
Results
It is possible to increase the crystallisation temperature of GeTe by carbon doping 24 . However, carbon sputtering is an inefficient process due to its low sputtering yield. In contrast, sputtering GeTe in an Ar:O 2 atmosphere is simple and a more efficient method to increase the crystallisation temperature. Furthermore, the reactive sputtering process is markedly dependent on the O 2 :Ar ratio; increasing it increases the deposition rate, as shown in the supplementary materials.
The crystallisation temperature of GeTe was found to increase substantially with increasing oxygen incorporation. The temperature dependent optical transmission and reflection curves of GeTe-O films are shown in Fig. 1(a,b). The measurements were made simultaneously at a heating rate of 6 °C/min. The amorphous GeTe-O films are characterised by a relatively high [low] transmissivity (T r ) [reflectivity (R e )] at room temperature, which is stable with increasing temperature until an abrupt drop [rise] in the measured optical response at the crystallisation temperature (T x ). The samples with oxygen content up to 8 at.% [(GeTe) 92 O 8 ] present a similar transmission/reflection transition profile, which is attributed to the amorphous-to-crystalline phase transformation. For the higher oxygen compositions, (GeTe) 91 O 9 and (GeTe) 88 O 12 , the rapid transition is replaced by transmissivity and reflectivity curves that are unstable with increasing temperature, as shown in the inset of Fig. 1(a,b). This is due to phase separation into GeO 2 and TeO 2 in the material. The crystallisation temperatures T xt /T xr in this work are defined as the points of minimum/maximum derivative of the recorded transmission/reflection curves with respect to temperature, as given in the upper panel of Fig. 1(a,b). The T xt and T xr of the pure GeTe film are 194 ± 1 °C and 195 ± 1 °C, respectively, as shown by the dash lines in the figures, which agrees well with values reported elsewhere 25,28 . Oxygen doping substantially increases the T x of GeTe. Doping GeTe with 4 at.% O increases the T x by 30 °C. The abrupt transition from the amorphous to crystalline state for the pure GeTe material becomes smoother for the films with high oxygen content, which is clearer in the reflection transition profile, suggesting a slower crystallisation mechanism with additional oxygen atoms and therefore increased stability.
Doping oxygen into GeTe increases the resistance of the amorphous material to crystallisation. The crystallisation activation energy, E a , has been estimated by analysing the crystallisation temperature dependence on heating rate using the Kissinger approach, ln(β/T x ²) = C − E a /(k B T x ), where β is the heating rate, C is a constant, and k B is Boltzmann's constant. The Kissinger plots are determined from T xt and T xr under the heating rates of 2, 4, 6, 8, and 10 °C/min.
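A Kissinger fit of this kind amounts to a linear regression of ln(β/T x ²) against 1/T x ; the sketch below illustrates the procedure with made-up crystallisation temperatures chosen only to reproduce an activation energy of roughly 2.3 eV, not the measured data.

```python
import numpy as np

def kissinger_activation_energy(heating_rates, T_x_celsius):
    """Kissinger analysis: ln(beta / T_x^2) = C - E_a / (k_B * T_x).
    A linear fit of ln(beta/T_x^2) against 1/T_x yields E_a (in eV) from the slope."""
    k_B_eV = 8.617333e-5                                  # Boltzmann constant, eV/K
    beta = np.asarray(heating_rates, dtype=float)         # heating rates, K/min
    T_x = np.asarray(T_x_celsius, dtype=float) + 273.15   # crystallisation temperatures, K
    slope, _ = np.polyfit(1.0 / T_x, np.log(beta / T_x**2), 1)
    return -slope * k_B_eV

# Illustrative (not measured) T_x values for heating rates of 2-10 degC/min:
print(kissinger_activation_energy([2, 4, 6, 8, 10],
                                  [186.4, 191.7, 194.9, 197.2, 198.9]))  # ~2.3 eV
```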
Note that the crystallisation temperature and activation energy measurements, which were established by analysing the film's reflectivity, were consistently slightly higher than those established by measuring the transmissivity of the films, i.e. T xr > T xt and E ar > E at , as shown in Fig. 1(c). Since the reflectivity is more sensitive to the film surface, and transmissivity is sensitive to the film thickness, we have attributed this difference to stress variation with film thickness. Indeed, stress is known to alter the crystallisation temperature of related materials 30 .
Generally, PCMs with a higher T x and associated E a show an amorphous phase with increased stability against crystallisation and thus an improved data retention capability is to be expected in high temperature PCM applications. The crystallisation activation energy of GeTe was found to be 2.26 eV inside the film and 2.31 eV at the film's surface, which are in the range of values reported in literature (i.e. 2.0 to 2.72 eV) 24,25,28 . We suspect the reason for the higher activation energy is due to stress in the films. The activation energy of the GeTe-O films increases with oxygen doping, as plotted in the inset of Fig. 1(c). Hence, the data retention temperature (10-year lifetime) for GeTe-O materials is expected to be significantly higher than that of Ge 2 Sb 2 Te 5 (~89 °C) 10 and pure GeTe (~97 °C) 25 .
The XRD patterns for the undoped and oxygen-doped GeTe films annealed at 300 °C are shown in Fig. 2. The (006) peak shifts to higher angle with oxygen incorporation, resulting in the compression of lattice parameter c. Eventually, this peak evolves toward a GeO 2 preferred phase as the oxygen content is increased up to 6 at.%. Furthermore, the diffraction intensity of the (220) peak weakens with the increasing proportion of oxygen. The full-width at half-maximum of the peak is therefore increased with oxygen content, which, according to Scherrer's equation, implies a reduction in the GeTe-O grain size. This indicates the incorporated oxygen atoms probably exist at interstitial sites and grain boundaries in the crystalline phase, suppressing the crystal growth in the material. The accumulation of dopants at grain boundaries has been widely reported in the crystalline Te-based phase change materials with light element (carbon, nitrogen, and oxygen) doping 6,13,24,25 .
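The Scherrer estimate referred to here is a one-line calculation; the peak position, peak widths, and shape factor K = 0.9 in the sketch below are illustrative assumptions rather than the measured values.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Scherrer estimate of the mean crystallite size (nm) from a peak's FWHM:
    D = K * lambda / (beta * cos(theta)), with beta in radians (Cu K-alpha assumed)."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# A broader peak implies a smaller grain size (illustrative numbers only):
print(scherrer_size(two_theta_deg=42.0, fwhm_deg=0.4))   # ~21 nm
print(scherrer_size(two_theta_deg=42.0, fwhm_deg=0.8))   # ~11 nm
```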
Moderate oxygen incorporation increases the stability of GeTe-O. Room temperature x-ray reflectivity (XRR) measurements were performed on GeTe-O in the as-deposited amorphous state, crystallised state [crystallised at (T x + 20) °C ], and after annealing at 300 °C. See Fig. 3(a-c) for GeTe, (GeTe) 95 O 5 , and (GeTe) 93 O 7 materials, respectively. An XRR model was fit to the experimental data and used to determine the surface roughness and thickness of the films. The surface roughness of the crystalline GeTe and (GeTe) 95 O 5 film increases significantly by crystallising as indicated from the reduction of the number of Kiessig fringes, i.e., the Kiessig fringes are smeared due to the increased surface roughness. In contrast, the surface roughness of the crystalline (GeTe) 93 O 7 film is clearly reduced with pronounced Kiessig fringes in the 300 °C annealed curve. This also implies a smaller grain size and thus higher thermal stability as suggested above. A similar improvement in surface roughness has also been found in nitrogen doped PCMs 31,32 . In contrast, the loss of Kiessig fringes in the as-deposited GeTe-O films indicates increased surface roughness with O incorporation. We suspect that O is partially going into the grain boundaries and exerting bi-axial strain on the crystal and this influences the growth direction of the material. Indeed we saw in Fig. 2 that the (220) diffraction peak is suppressed, implying tensile strain in the a and b directions. The doping element induced strain field in the film can be also observed in nitrogen-doped Ge 2 Sb 2 Te 5 material 33 .
A shift of the critical angle for total external reflection and Kiessig oscillations upon crystallisation is observed from the XRR patterns in Fig. 3. These are due to the change of the mass density and thickness of the film. In order to satisfactorily fit the modelled XRR pattern to the measured data, it was necessary to include a thin silicon oxide (~2 nm) layer between the phase change film and the Si substrate (see supplementary material, Table II). Figure 4 shows the film thickness and average mass density change upon crystallisation as a function of oxygen content. The GeTe film exhibits a thickness decrease of 9.1 ± 0.6% and the associated mass density increase of 8.6 ± 0.5% upon crystallisation (crystallised at 210 °C), which is similar to the literature value (GeTe, ~8.7% density increase 34 ). Introducing oxygen into amorphous GeTe films reduces the change of film thickness and mass density upon crystallisation, as plotted in Fig. 4. The decrease in films thickness and correlated increase in mass density after 260 °C annealing is reduced to within 2% for (GeTe) 95 O 5 . At 5.5 at.% oxygen, the amorphous and crystalline states have the same density. Further increasing the oxygen to over 6 at.%, results in an unusual behaviour upon crystallisation where the mass density decreases. These effects are clear in Fig. 4, which shows the change in film thickness and mass density as a function of oxygen content.
Discussion
It is noteworthy that the amorphous-crystalline mass density difference increases when the films are further annealed at higher temperatures. This indicates a continuous structural deformation with temperature. The anomalous decrease in mass density upon crystallisation, which was observed for films with more than 6 at.% oxygen, is by no means common, and there are only a few reports of similar effects in the literature: Ge-rich Ge-Sb 35 , Sb-rich Ga-Sb 36 , and Cu-Ge-Te 37 alloys.
The peculiar behaviour upon crystallisation of the GeTe-O material can be attributed to the preferential formation of oxides (GeO 2 and TeO 2 ). The overall volume change of GeTe-O films is controlled by the shrinkage resulting from GeTe crystallisation and the thermal expansion of oxides. Simply increasing the oxygen content in the film produces a smaller thickness reduction upon crystallisation. This is due to the improved structural stability against the phase transition and the larger thermal dilation of the amorphous oxide network in the material. The zero density change upon crystallisation is then achieved by the trade-off between these two effects. At higher oxygen doping concentrations, the thermal expansion of GeTe-O materials dominates during annealing due to the increased proportion of oxide structures, which leads to a volumetric expansion upon crystallisation as shown in Fig. 4.
The observed zero density change upon crystallisation of GeTe-O material is an attractive characteristic for PCM applications. Significant hydrostatic stress on PCM cells is caused by the large mass density change during a phase transition 38,39 . This leads to void formation and delamination from the PCM's surroundings; an effect that ultimately limits the cyclability of PCM devices and results in programming failures 11 . Clearly, a PCM that exhibits a small, or even zero, density change during switching could provide a means to overcome this failure mechanism and even widen the choice of materials that can be used to enclose PCMs.
Stress within PCMs also causes a shift in T x and E a . This can be seen in Fig. 1(c), where the surface crystallises at a different temperature to the bulk. This can result in degraded performance of PCM cells. A smaller density change, and thus lower stress, is desirable for a robust phase transition point. Moreover, resistance drift is thought to be related to structural relaxations of the PCMs 40 . Minimising the density change during switching would also minimise the post-switch residual stress and therefore reduce the resistance drift problem. Hence, these zero density change materials provide a materials engineering approach to increase the cyclability of PCM devices whilst potentially reducing resistance drift.
We also hypothesise that despite an elevated crystallisation temperature, doping oxygen into GeTe may reduce the switching energy of GeTe-O PCM devices since the switching energy is dependent on the enthalpy change, which is reduced by the smaller volume change. These topics will be discussed in future works.
In summary, doping GeTe with up to 8 at.% oxygen improves its thermal stability without substantially changing its crystal structure. We suspect that this will provide a beneficial enhancement to the data retention time when the material is employed in a PCM memory device. However, excessive oxygen doping causes phase separation into GeO 2 and TeO 2 , which destroys the phase change ability of the material. The change of film thickness and mass density upon crystallisation is determined to decrease with oxygen incorporation, and an unusual change in the thickness and density of GeTe-O films can be observed as the oxygen content is increased. The GeTe-O material with a composition of ~5.5 at.% oxygen can provide a zero density change between the amorphous and crystalline phases, which we suspect will be useful in the design of high performance PCM devices that exhibit superior cyclability, archival stability and a reduced resistance drift.
Methods
GeTe and GeTe-O thin films of 60-nm thickness were prepared on fused silica and Si (100) substrates at room temperature by RF reactive sputtering of a stoichiometric GeTe (99.99%) target under an Ar/O 2 gas mixture. The chamber base pressure was better than 2.7 × 10 −5 Pa and the sputtering pressure was fixed at 0.5 Pa with different O 2 reactant flow rates and a constant Ar flow rate of 20 sccm (see supplementary materials, Table I). In order to avoid the hysteresis behaviour that is typical during oxygen-involved reactive sputtering 41 , the flow rate of O 2 in this work was as small as 0.13 sccm. The composition of the films was confirmed by energy dispersive x-ray spectroscopy (EDX) using samples deposited on aluminum foil, where the oxygen background composition was calibrated by the results obtained from blank aluminum foil. The crystallisation process of GeTe-O materials was studied by in situ monitoring both the optical transmission and reflection of 658 nm laser light from room temperature up to 330 °C. The samples were heated in an argon atmosphere at ambient pressure with heating rates ranging from 2 to 10 °C/min. The activation energies of the amorphous-crystalline phase transition were determined from the heating rate dependent optical measurements using the Kissinger analysis described above.
Grazing incidence x-ray diffraction measurements of the crystalline films were performed at 1° grazing incidence geometry with Cu K-α radiation. Diffraction data were collected in 2θ scan mode from 20° to 55° with a step of 0.02°. The films for XRD analysis were heated at 300 °C for 20 min in an argon atmosphere.
The grazing incidence reflectivity curves in the x-ray reflectivity (XRR) experiments were measured for the GeTe-O films before and after annealing. The as-deposited amorphous films were annealed at (T x + 20) °C and 300 °C under an argon atmosphere. The XRR data were recorded with 2θ − ω geometry in the range of 0-5°, using a step of 0.01°. A knife-edge collimator was placed very close to the sample surface to obtain sufficient intensity by blocking the direct beam. The thickness and mass density of the films were estimated by best fitting the experimental XRR profiles with the DIFFRAC plus LEPTOS program using the Levenberg-Marquardt technique. The Levenberg-Marquardt algorithm is based on a trust-region approach 43 . The parameters of the fit are provided in the supplementary material, Table II.
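As a rough cross-check of the fitted thicknesses, the Kiessig fringe period alone already fixes the film thickness to first order; the sketch below neglects the refraction correction near the critical angle, and the fringe spacing used is an illustrative value for a ~60 nm film.

```python
import numpy as np

def thickness_from_kiessig(delta_two_theta_deg, wavelength_nm=0.15406):
    """Rough film-thickness estimate from the Kiessig fringe period of an XRR scan:
    t ~ lambda / (2 * delta_theta), with delta_theta the fringe spacing in incidence
    angle (half of the spacing on the 2-theta axis); refraction is neglected."""
    delta_theta = np.radians(delta_two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * delta_theta)

print(thickness_from_kiessig(0.147))   # ~60 nm for fringes ~0.15 deg apart in 2-theta
```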
“New” cyanobacterial blooms are not new: Two centuries of lake production are related to ice cover and land use
Abstract. Recent cyanobacterial blooms in otherwise unproductive lakes may be warning signs of impending eutrophication in lakes important for recreation and drinking water, but little is known of their historical precedence or mechanisms of regulation. Here, we examined long-term sedimentary records of both general and taxon-specific trophic proxies from seven lakes of varying productivity in the northeastern United States to investigate their relationship to historical in-lake, watershed, and climatic drivers of trophic status. Analysis of fossil pigments (carotenoids and chlorophylls) revealed variable patterns of past primary production across lakes over two centuries despite broadly similar changes in regional climate and land use. Sediment abundance of the cyanobacterium Gloeotrichia, a large, toxic, nitrogen-fixing taxon common in recent blooms in this region, revealed that this was not a new taxon in the phytoplankton communities but rather had been present for centuries. Histories of Gloeotrichia abundance differed strikingly across lakes and were not consistently associated with most other sediment proxies of trophic status. Changes in ice cover most often coincided with changes in fossil pigments, and changes in watershed land use were often related to changes in Gloeotrichia abundance, although no single climatic or land-use factor was associated with proxy changes across all seven lakes. The degree to which changes in lake sediment records co-occurred with changes in the timing of ice-out or agricultural land use was negatively correlated with the ratio of watershed area to lake area. Thus, both climate and land management appeared to play key roles in regulation of primary production in these lakes, although the manner in which these factors influenced lakes was mediated by catchment morphometry. Improved understanding of the past interactions between climate change, land use, landscape setting, and water quality underscores the complexity of mechanisms regulating lake and cyanobacterial production and highlights the necessity of considering these interactions, rather than searching for a singular mechanism, when evaluating the causes of ongoing changes in low-nutrient lakes.
INTRODUCTION
Eutrophication and phytoplankton blooms are commonly identified as primary concerns in aquatic systems. In freshwater lakes, it is well documented that both nutrient loading from watersheds and warmer conditions favor phytoplankton growth and, particularly in eutrophic lakes, the development of extensive cyanobacterial blooms (Brookes and Carey 2011). These blooms negatively impact recreation, property values, drinking water, and the health of people, domesticated animals, and wildlife (Walker et al. 2008, Dodds et al. 2009, Carmichael and Boyer 2016, Mueller et al. 2016). Management of lakes for recreation or drinking water could be more focused if we better understood both potential lake sensitivity to trophic change and the proximal drivers of such trophic shifts (Cowing and Scott 1980). However, because a series of nested factors operating at different scales may be responsible for trophic changes in lakes (Soranno et al. 1999, Soranno et al. 2015), separating these factors is challenging and remains a key management goal. Regionally, climate (Arnott et al. 2003, Rühland et al. 2003) and atmospheric deposition (Kopáček et al. 2015) can be responsible for changes in water quality, while changes in land use, particularly agriculture (Bunting et al. 2007, Levine et al. 2012) and urban point sources of nutrients including wastewater (Levine et al. 2012, Moorhouse et al. 2018), may have major effects at the watershed scale. Regional coherence of change in species assemblages, phytoplankton abundance, or water quality is commonly interpreted as stemming from regional drivers such as climate (Patoine and Leavitt 2006, Moorhouse et al. 2018). However, even lakes in close proximity may exhibit asynchronous patterns of change in the abundance of phytoplankton species due to site-specific differences in chemistry or morphometry that control the abundance of cyanobacteria (Patoine and Leavitt 2006) or the ways climate impacts lakes (Moorhouse et al. 2018).
Multi-decadal data from paleoecological studies may help answer questions about the extent and timing of trophic-state change in lakes. Because various proxies of trophic state may respond differently to changes in influx of energy and mass (Vogt et al. 2011), comparison of the historical changes in contrasting proxies can help identify the underlying mechanisms of change (e.g., Bunting et al. 2007). Similarly, application of these fossil analyses at the landscape scale may be necessary to evaluate how diverse forcing mechanisms themselves vary over decadal-to-millennial and local-to-subcontinental scales (Tonn et al. 1990, Maheaux et al. 2016, Moorhouse et al. 2018). In particular, comparison of paleolimnological records across multiple sites can help untangle the influence of watershed- and lake-specific characteristics (Patoine and Leavitt 2006, Taranu et al. 2015, Maheaux et al. 2016), as well as regional variation stemming from landscape position, climate, and anthropogenic forcing agents (Soranno et al. 1999, Arnott et al. 2003). Low-nutrient lakes in the northeastern United States are among those with the highest water quality (USEPA 2017), but recent cyanobacterial blooms in these (Carey et al. 2012a) and similar low-nutrient lakes in Canada (Winter et al. 2011) have raised concerns about incipient eutrophication. Among the potential indicators of impending trophic-state change are blooms of Gloeotrichia echinulata, a large, colonial cyanobacterium, that have been increasingly reported in low-nutrient lakes of eastern North America (Winter et al. 2011, Carey et al. 2012a). This taxon is capable of fertilizing surface waters with both nitrogen (N) and phosphorus (P); it fixes N2 (Stewart 1967, Roelofs and Oglesby 1970, Carr and Whitton 1982) and can translocate substantial amounts of P from the sediment to the water column when it recruits from sediments (Barbiero and Welch 1992, Istvánovics et al. 1993). This P loading can account for up to two-thirds of internal P loading in eutrophic lakes (Barbiero and Welch 1992, Istvánovics et al. 1993) and for amounts comparable to the external loading from smaller riverine tributaries entering an oligotrophic lake (Cottingham et al. 2018). Moreover, high densities of Gloeotrichia were associated with significantly higher N concentrations and increased abundance of other phytoplankton in laboratory and mesocosm experiments (Carey et al. 2014b). Thus, Gloeotrichia has the potential not only to act as an indicator of lake transition but also to catalyze eutrophication and state change (Cottingham et al. 2015).
Given considerable inter-annual variability in water-quality proxies (e.g., Secchi depth, TP, chlorophyll), and the reality that most water-quality monitoring began in the 1970s, we took a paleoecological approach to disentangle the regional and local mechanisms regulating cyanobacterial and lake production in the northeastern United States over the past 200 yr. Specifically, we used diverse sedimentary, climatic, and land-use proxies to quantify how past lake production has changed in response to historical variation in climate and anthropogenic drivers across seven lakes in Maine and New Hampshire, USA. All sites currently experience blooms of Gloeotrichia or other cyanobacteria, yet differ in their current trophic state, watershed land use, and morphometry, all characteristics which we predicted would structure historical changes in lake production. We collected sediment cores covering the period from before European settlement began (c. 1750 CE) to the present to investigate: (1) whether Gloeotrichia is a recent addition to these phytoplankton assemblages; (2) whether there is evidence that Gloeotrichia facilitates the growth of other phytoplankton; (3) the extent to which different proxies for trophic status indicate similar timing of changes in water quality; and (4) whether the timing of changes in these lakes co-occurs with climatic and watershed changes. We also investigated whether variation among lakes in the answers to these questions was related to watershed and lake morphometry.
Study sites and field sampling
The seven lakes in this study were located across south-central Maine and New Hampshire, USA, and vary in size, morphometry, and recent trophic state ( Fig. 1; Table 1). These lakes were selected to represent a gradient in current trophic state from oligotrophic to eutrophic; additionally, they have had modern water-quality data collected to support active management of the lake and watershed or to document recent blooms of Gloeotrichia or other cyanobacteria. All basins had shoreline development but largely forested watersheds, were important recreationally, and experienced blooms of one or more cyanobacterial taxa in the last decade (Carey et al. 2012a). Two lakes were sources for municipal domestic water use (Table 1), and all lakes were personal water sources for some homeowners (Ewing, personal observation; Lake Stewards of Maine, personal communication). Sediment cores were collected through the ice at each lake in the winters between 2007 and 2012 (Appendix S1: Table S1). Cores were collected with a square-rod piston corer fitted with an 8.3 cm internal diameter polycarbonate tube attached to the core head. This apparatus collects a large volume of sample and insures that the sediment-water interface remains intact. All cores were collected at the deepest part of the lake except in instances where lake depth exceeded equipment capability and then cores were collected in water as deep as was feasible (Appendix S1: Table S1).
Cores were kept upright and in the dark during transport to the laboratory, where they were extruded vertically at 1-cm intervals. As each core was extruded, sediment from the center of the core was transferred to opaque containers and immediately frozen (−20°C) until shipped on dry ice to the University of Regina, Regina, Canada, for elemental composition, stable isotope ratio, and pigment analyses. Remaining sediment from each interval was bagged and refrigerated at 4°C until freeze-dried or subsampled for further analyses. Macrofossils encountered during extrusion were saved for identification and 14C radiocarbon dating.
Laboratory analyses
Subsamples of each interval were analyzed for water content (percentage of dry mass), organic content (as mass loss-on-ignition at 550°C for 2 hr), elemental composition (percentage of dry mass as C or N), stable isotope values (d 13 C and d 15 N), past lake production (as fossil pigments), and cyanobacterial bloom intensity (the number of senescent colonies of Gloeotrichia). Water and organic contents were calculated from the mass loss of a subsample dried at 105°C for 12 h and at 550°C for 2 h, respectively.
Stable isotope ratios and elemental composition were determined on whole sediment samples following standard procedures as described in Savage et al. (2004). Samples were analyzed using a ThermoQuest Delta Plus XL isotope ratio mass spectrometer equipped with a continuous flow (Con Flo II) unit and an automated Carlo Erba elemental analyzer as an inlet device. Stable N (δ15N) and C (δ13C) isotopic compositions were expressed in the conventional δ-notation in units of per mil (‰) deviation from atmospheric N2 and an organic C standard which had been calibrated with authentic Vienna Pee Dee Belemnite. Sample reproducibility was within 0.25‰ and 0.10‰ for δ15N and δ13C, respectively. Carbon isotope values were adjusted for changes in δ13C of the atmosphere (Suess effect), according to the equation in Schelske and Hodell (1995).
Pigments were used to reconstruct changes in abundance and composition of phototrophic assemblages of algae and cyanobacteria (Leavitt and Hodgson 2001). Pigments from aliquots of freeze-dried samples were extracted, and individual compounds were separated and quantified on filtered extracts via high-performance liquid chromatography (HPLC) using an Agilent 1100 series quaternary pump equipped with an autosampler, reversed-phase column, and photodiode array (PDA) detector, following Bunting et al. (2016). The HPLC was calibrated using commercial pigment standards from DHI (Denmark). Analysis included chemically stable, taxonomically diagnostic carotenoids representing total algal production (β-carotene), cryptophytes (alloxanthin), colonial cyanobacteria (canthaxanthin), all cyanobacteria (echinenone), mainly diatoms (diatoxanthin), and the degradation products pheophytin a and b, as well as less-stable pigments indicating total algae and plants (chlorophyll a); dinoflagellates, diatoms, and chrysophytes (fucoxanthin); and, for Long Pond only, photosynthetic dinoflagellates, diatoms, chrysophytes, and cryptophytes (diadinoxanthin) (Leavitt and Hodgson 2001). Lutein (chlorophytes) and zeaxanthin (cyanobacteria) were incompletely separated on the HPLC system and are presented as a combined marker of bloom-forming phytoplankton.
Because of their large size and ability to withstand degradation (Forsell 1998), Gloeotrichia colonies in the sediments provide a useful proxy of their water column abundance. Hence, the abundance of Gloeotrichia through time was assessed by counting the number of senesced colonies in 5 cm³ of sediment (dispersed with deionized water) under a Nikon SMZ 1500 stereoscope (Melville, New York, USA) at magnifications ranging from 20× to 120×. In general, senescent Gloeotrichia colonies appeared as intact colonies though with much reduced filament lengths relative to those sampled from the water column (Appendix S1: Fig. S1).
Dating
Chronologies were developed primarily through 210Pb-dating protocols. Wet sediment was freeze-dried and shipped to the St. Croix Watershed Research Lab to estimate 210Pb and daughter isotope activities. At 15-20 depth intervals in each core, 210Pb was measured through its grand-daughter product 210Po, with 209Po added as an internal yield tracer (modified from Eakins and Morrison 1978). Ages and sedimentation rates were determined using the constant rate of supply (CRS) calculation (Appleby 2001). Across lakes, the oldest intervals with sufficient radioactivity for dating ranged from 1809 to 1870 (Appendix S1: Table S1). Only two macrofossils suitable for radiocarbon dating were found over all cores; therefore, ages of the lower portion of all cores were approximated by linear extrapolation of the sedimentation rate from the bottom two 210Pb dates. Ages from the two radiocarbon-dated macrofossils confirmed lower sedimentation rates in sediments below the 210Pb interval, suggesting that our extrapolation protocol likely underestimated the true age of sediments before c. 1840 CE. The calendar ages of the two macrofossils were established using OxCal 4.2 (Appendix S1: Table S1).

Notes to Table 1: watershed delineation used the USGS National Hydrography Dataset (USGS 2015) HUC12 classification for Auburn, Sabattus, and Sunapee; for the other lakes, pour points were established at the lake outlet and the Flow Direction, Flow Accumulation, and Watershed tools were used within ArcGIS 10, Spatial Analyst extension. Bathymetry layers were produced in ArcGIS from ME DEP lake depth point data (MEDEP and MEDIFW 2011) and 1 arc-second DEM rasters (USGS 2017), with high-resolution bathymetry files for Auburn, Panther, and Sunapee provided by local organizations. Maine land-cover data originated from the Maine Land Cover Database GIS layer (ME Office of GIS 2006), and land cover for Sunapee's watershed originated from the New Hampshire Land Cover GIS layer (Complex Systems Research Center 2002); all percentages are for the terrestrial portion of the watershed only. Maine lake water-quality values (means of summer measurements) originate from the Volunteer Lake Monitoring Program (public communication, Maine lakes water quality-total phosphorus (by date), http://www.gulfofmaine.org/kb/2.0/record.html?recordid=9212, and Maine lakes water quality-chlorophyll (by date), http://www.gulfofmaine.org/kb/2.0/record.html?recordid=9211).
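As a worked illustration of the constant rate of supply (CRS) calculation cited above (Appleby 2001), the short Python sketch below turns unsupported 210Pb activities into ages. The interval data are placeholders; the published chronologies were produced by the dating laboratory, not by this code.

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant, yr^-1 (half-life ~22.3 yr)

def crs_ages(unsupported_activity, dry_mass_per_area):
    """Constant-rate-of-supply (CRS) ages for successive core intervals.

    unsupported_activity : unsupported 210Pb activity of each interval (Bq/kg),
                           ordered from the core top downward.
    dry_mass_per_area    : dry mass per unit area of each interval (kg/m^2).
    Returns the age (yr before coring) at the bottom of each interval, from
    t = (1/lambda) * ln(A(0) / A(x)), where A(x) is the unsupported inventory
    remaining below depth x and A(0) is the total inventory.
    """
    inventory_per_interval = unsupported_activity * dry_mass_per_area   # Bq/m^2
    total = inventory_per_interval.sum()                                # A(0)
    below = total - np.cumsum(inventory_per_interval)                   # A(x)
    below = np.clip(below, 1e-12, None)       # avoid log(0) at the deepest dated horizon
    return (1.0 / LAMBDA_PB210) * np.log(total / below)

# Hypothetical example: six intervals with declining unsupported 210Pb
activity = np.array([420.0, 310.0, 220.0, 140.0, 80.0, 30.0])   # Bq/kg
mass = np.array([0.8, 0.9, 1.0, 1.1, 1.2, 1.3])                  # kg/m^2 per interval
print(np.round(crs_ages(activity, mass), 1))   # years before coring, increasing with depth
```

Because ages grow rapidly as the remaining inventory approaches zero, the oldest reliably dated horizon in each core is limited by detectable unsupported 210Pb, which is why the lower portions of these cores had to be extrapolated.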
Comparative data on watershed characteristics, land-use history, and climate
For all study lakes, quantitative estimates of watershed, land use, and bathymetric parameters were developed using a geographic information system (ArcGIS 10). Data layers from the USGS and New Hampshire and Maine governmental GIS offices were used for digital elevation models (USGS 2017), land cover (Complex Systems Research Center 2002, ME Office of GIS 2006), and basic depth soundings (MEDEP and MEDIFW 2011). These data were complemented with higher quality bathymetric maps made by local management groups for Sunapee (personal communication Lake Sunapee Protective Association), Auburn (personal communication Auburn Water District), and Panther lakes (personal communication Panther Pond Association). Watersheds from the USGS National Hydrography Dataset (USGS 2015) HUC12 classification were used for Auburn, Sabattus, and Sunapee, but for the other lakes, the outlet of the lake was substantially upstream of the base of the HUC12 watershed. For these lakes, pour points were established at the outlet of the lake and the Flow Direction, Flow Accumulation, and Watershed tools were used within ArcGIS 10, Spatial Analyst extension.
Land-use history within the watersheds was derived from public documents. We used changes in agricultural land use (a combination of area of land in farms and populations of sheep and dairy cows) and the size of the human population as proxies for the land-use changes most likely to impact water quality. Town settlement and incorporation dates for Maine towns were obtained from the Maine Encyclopedia (public communication, http://maineencyclopedia.com, e.g., https://maineanencyclopedia.com/auburn/) and for New Hampshire from the New Hampshire Community Profiles database (public communication, http://www.nhes.nh.gov/elmi/products/cp). Human populations were estimated from town-level census data (United States Bureau of the Census 1790) from 1790 to 2010 (where town is a subdivision of a county rather than a specific urban site). Estimates of the areal extent of farmland, as well as the sizes of the sheep and cow populations, came from the county-level census of agriculture (United States Bureau of Census 1850-1992, Ahn et al. 2002, National Agricultural Statistics Service 1997). Estimates of total farmland area designated as total land in farms, improved or unimproved, were available starting in 1850. For years with incomplete data on the proportion of the land that was improved (cropland plus pasture) or unimproved (farm-associated woodlands, brushland, rough or stony land, or swampland), missing values were calculated by difference or summation of subcategories. When an agricultural census value for sheep, cows, or land was completely missing (less than 10% of the time for each record), we interpolated between neighboring census values.
Estimates for human population size and agricultural activity in each watershed were calculated from GIS-derived data of the proportion of each town (population) or county (agricultural land, sheep, and cows) in each watershed. Given constraints of available data, we assumed that the proportion of a given town's population within a lake's watershed was equal to the proportion of the town that was in the watershed. The resulting population sizes (in integer values) for each proportion of town were then summed to estimate the total population within the watershed. The same process was used with the agricultural data at the county level.
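The areal-apportionment step just described reduces to a simple weighted sum. The sketch below (hypothetical town names, populations, and fractions) scales each town's census count by the fraction of the town lying inside the watershed, rounds to whole people, and sums, which is the same arithmetic applied to the county-level agricultural data.

```python
# Apportion town-level census counts to a watershed by areal fraction.
# Town names, populations, and fractions are illustrative only.
towns = {
    # town: (census population, fraction of town area inside the watershed)
    "TownA": (2400, 0.35),
    "TownB": (5100, 0.10),
    "TownC": (880, 1.00),
}

watershed_population = sum(
    round(population * fraction) for population, fraction in towns.values()
)
print(watershed_population)   # integer estimate of people living in the watershed
```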
We used the timing of ice-out as an index for historical changes in regional climate. This metric integrates variation in irradiance, air temperature, and local hydrology (Dröscher et al. 2009, Sharma et al. 2019). Further, reliable records of ice-out were available for this region beginning in the early 1800s. Compilations of these data through 2008 came from Hodgkins (2010), with more recent data supplemented by the State of Maine's Bureau of Parks and Lands historical ice-out data (public communication, https://www.maine.gov/dacf/parks/water_activities/boating/ice_out_dates.shtml), the Auburn Water District/Lewiston Water Division (personal communication), and the Town Clerk's office in Sunapee, NH (public communication, Lake Sunapee Ice Out Dates 2019, https://www.town.sunapee.nh.us/town-clerktax-collector). Continuous annual data were available for Sunapee; however, many of the other lakes had incomplete ice-out records. The analyses required a continuous record, and year-to-year variation is large, making interpolation inappropriate. Hence, we used the Sunapee record on its own and assigned each of the Maine study lakes to one of three regional groupings that included neighboring lakes with more continuous ice-out records (Appendix S1: Fig. S2). Annual ice-out values for each regional group of lakes were averaged to obtain continuous ice-out records. Prior analysis of these ice-out records has revealed significant temporal coherence of ice-out among lakes (Hodgkins et al. 2002, Patterson and Swindles 2015). The Spearman rank correlation between the regional-average record used here and the observed ice-out record at each lake was 0.97 or greater, except at Panther Pond, where only six years of data were available and the correlation was 0.89.
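A minimal pandas sketch of how a regional ice-out composite and its agreement with a lake-specific record might be computed is shown below; the file name and column names are placeholders, and the actual records came from the sources cited above.

```python
import pandas as pd

# One row per year, one column per lake in a regional group; values are ice-out
# day of year (DOY), with gaps as NaN. Column names are hypothetical.
ice = pd.read_csv("regional_iceout.csv", index_col="year")

# Regional composite: average the neighboring lakes with more complete records
composite = ice[["lake1", "lake2", "lake3"]].mean(axis=1, skipna=True)

# Agreement between the composite and the (incomplete) record at a study lake,
# using only the years where both records exist
overlap = pd.concat([composite.rename("regional"), ice["study_lake"]], axis=1).dropna()
rho = overlap["regional"].corr(overlap["study_lake"], method="spearman")
print(f"Spearman rho over {len(overlap)} shared years: {rho:.2f}")
```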
Data analysis overview
Most analytical techniques for comparison of temporal data across metrics either require that all datasets have the same time points across all metrics (e.g., temporal coherence among systems; Magnuson et al. 1990, Rusak et al. 1999) or are suitable only for one type of response variable at a time (e.g., generalized additive models could be used for the pigments or stable isotopes, but not both in the same model; Simpson 2018). However, most of our proxies for trophic state contained multiple metrics (e.g., different pigments), and our questions necessitated examination of their collective behavior and relationship to drivers that also could contain multiple metrics (i.e., land-use associated metrics). Furthermore, the data from sediment trophic proxies, watershed land use and human population, and ice-out were on different time steps.
To address these issues, we performed a four-step analysis. First, we examined the synchrony of changes among individual metrics of trophic state within each lake core via a series of pairwise comparisons using Spearman rank correlation. Second, we quantified the timing of transitions in (a) groups of trophic-state proxies that might be expected to behave similarly (i.e., pigments, C, N, and Gloeotrichia) and (b) potential watershed-level drivers of change (i.e., human population, agricultural land use, and ice-out date) using stratigraphically constrained cluster analysis (CONISS; Grimm 1987). Third, we compared the timing of the transitions identified in step two, specifically (a) among trophic-state proxies and (b) between trophic-state proxies and external drivers of change. Fourth, we evaluated the relationship between catchment morphometry and the percentage of transitions that were coincident between trophic-state proxies and external drivers of change. Details of all steps are provided below.
This analytical approach accounted for the temporal autocorrelation of paleoecological time series, as well as variable sediment accumulation rates and temporal resolution in both sedimentary and driver variables. Prior to analysis, all sediment core records were truncated at c. 1770, the age of the youngest core, a date that also corresponds to the beginning of European settlement in this region. This date preceded the start of the reliable 210Pb record by a few decades, but all zone breaks fell within the period of the 210Pb dating; the matching of in-lake events with potential drivers was as good as sedimentation rates allowed.
Four steps of analyses.-Correlations.-To examine the temporal coherence among individual metrics within each core (e.g., %C, δ15N, individual pigments, Gloeotrichia), we used Spearman rank correlations and a threshold for importance of >0.6 or <−0.6. Results were not qualitatively different when thresholds were set at either 0.5 (−0.5) or 0.7 (−0.7), as 68% of correlations presented below were >0.7 or <−0.7. It was not possible to bring the complete sediment records into alignment with the driver data by combining multiple stratigraphic intervals (Patoine and Leavitt 2006) or working with confidence intervals around 210Pb dates (Das et al. 2008), so this step focused only on the metrics from the cores (e.g., %C, δ15N, select pigments).
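The pairwise screening described above is straightforward to express in code. The sketch below uses pandas; the file and column names are placeholders for the per-interval metrics of one core (%C, δ15N, individual pigments, Gloeotrichia counts).

```python
import pandas as pd

# One row per sediment interval; columns are the within-core metrics (names illustrative)
core = pd.read_csv("core_metrics.csv")   # e.g., pct_C, pct_N, d13C, d15N, pigments, gloeo

rho = core.corr(method="spearman")       # pairwise Spearman rank correlations

# Flag the pairs exceeding the importance threshold used in the study (|rho| > 0.6)
strong = (
    rho.where(rho.abs() > 0.6)
       .stack()                           # long format: (metric_i, metric_j) -> rho
)
# Keep each pair once and drop the diagonal by requiring metric_i < metric_j
strong = strong[strong.index.get_level_values(0) < strong.index.get_level_values(1)]
print(strong.sort_values(ascending=False))
```

Re-running the last filter with 0.5 or 0.7 in place of 0.6 reproduces the sensitivity check mentioned in the text.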
Stratigraphically constrained cluster analyses.-We used the standard stratigraphically constrained cluster analysis (CONISS) of Grimm (1987) to identify the timing of transitions between periods of relative stasis in which the proxy records had similar characteristics (i.e., zones). This technique identifies groups of samples that are similar to each other in multidimensional space. Samples are tested for membership in clusters with the requirement that they join only those that are temporally adjacent (Grimm 1987). The number of significant zone breaks (boundaries between high-level clusters) for each analysis was determined from a broken-stick model (Birks 2012). Each of the breaks between zones is a statistical description of a significant transition in the system, as determined by the group of metrics (e.g., individual pigments) that make up the proxy (e.g., pigments) in a given cluster analysis, rather than a determination of a specific trophic state.
Zones were established for each lake individually and included related metrics as a part of each trophic-state proxy and potential watershed-level driver. The C proxy included the percentage of C and the δ13C value of that sedimentary interval. For the N proxy, we included the percentage of N and the δ15N value. For the pigment proxy, we used five chemically stable biomarkers that represent a diversity of phytoplankton taxonomic groups: alloxanthin (cryptophytes), canthaxanthin (colonial cyanobacteria), lutein + zeaxanthin (chlorophytes and cyanobacteria), diatoxanthin (diatoms), and echinenone (total cyanobacteria), following Leavitt and Hodgson (2001). The agricultural land-use proxy included the areal extent of improved and unimproved land and the population density of sheep and dairy cow populations within the watershed. A few cluster analyses were based on one parameter alone: Gloeotrichia was estimated as the number of senesced colonies (akinete packages) per gram of dry sediment, while the human population was estimated as the number of people in the watershed, and ice-out date was the calendar day of year (DOY) when the lake became ice-free, where 1 January is DOY 1.
Prior to analysis, each proxy time series was Z-transformed to account for the considerable differences in absolute magnitude among metrics. These standardized values were used in a Euclidean distance model to find the timing of transitions identified as significant zone breaks. We implemented this procedure in R with the rioja package (Juggins 2015).
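The zonation itself was done with rioja's CONISS implementation in R (Juggins 2015); the Python sketch below is only a rough, hypothetical analogue of the same idea, shown to make the steps concrete: Z-transform the metrics, then run sum-of-squares (Ward) agglomeration constrained so that samples may only merge with their stratigraphic neighbours. The broken-stick test used to choose the number of significant zones is not reproduced here, and the example data are synthetic.

```python
import numpy as np
from scipy.sparse import diags
from sklearn.cluster import AgglomerativeClustering

def constrained_zones(proxy_matrix, n_zones):
    """Stratigraphically constrained zonation of one proxy group.

    proxy_matrix : (n_intervals, n_metrics) array ordered from core top to bottom.
    n_zones      : number of zones to cut (in the study this comes from a
                   broken-stick comparison, omitted here).
    Returns an integer zone label per interval.
    """
    n = proxy_matrix.shape[0]
    # Z-transform each metric so absolute magnitudes do not dominate the distances
    z = (proxy_matrix - proxy_matrix.mean(axis=0)) / proxy_matrix.std(axis=0, ddof=1)
    # Connectivity: each sample may only merge with its temporally adjacent samples
    connectivity = diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1])
    model = AgglomerativeClustering(
        n_clusters=n_zones, linkage="ward", connectivity=connectivity
    )
    return model.fit_predict(z)

# Synthetic pigment matrix: 40 intervals x 5 stable pigments, with a shift near the top
rng = np.random.default_rng(0)
pigments = np.vstack([rng.normal(1.0, 0.2, (25, 5)),    # older, lower-production zone
                      rng.normal(2.5, 0.3, (15, 5))])   # recent, higher-production zone
print(constrained_zones(pigments, n_zones=2))
```

The adjacency constraint is what makes this "stratigraphically constrained": without it, ordinary clustering could group non-contiguous intervals into the same zone.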
Comparison of proxy zone breaks.-After we determined the timing of transitions for each proxy using CONISS, we made pairwise comparisons among proxies across all of the zone breaks in each lake (Appendix S1: Fig. S3). For comparisons between proxy pairs (e.g., pigments and Gloeotrichia), zone breaks were considered contemporaneous (a match) when the zone break occurred in the same year for each trophic-state proxy. For comparison of trophic-state proxies and potential external drivers, zone breaks were considered contemporaneous when the interval containing the break in an external driver overlapped with the date range for the sediment interval containing a zone break for the trophic-state proxy. To compare the frequency of zone-break matches among the different proxy groups, we expressed each set of comparisons (e.g., Gloeotrichia-to-pigments or pigments-to-ice-out) as a percentage of the observed number of zone-break matches relative to the number of possible matches between that proxy pair.
These comparisons retained the zone boundaries defined using CONISS, but allowed for differences in resolution of individual time series (e.g., annual ice-out vs. decadal population) as well as compression of sediments with age that changes the number of years represented in each centimeter of sediment. Given the differences in sedimentation rate both with time and across lakes, these analyses did not consider the extent of lags between changes in watershed and climatic drivers and sedimentary records of trophic state. Additionally, no comparison of timing of change across lakes was made.
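The matching rules in the two preceding paragraphs can be written directly in code. In the sketch below the years and intervals are illustrative; proxy-to-proxy breaks match when they fall in the same year, proxy-to-driver breaks match when the driver's break interval overlaps the sediment interval holding the proxy break, and taking the smaller number of breaks as the number of possible matches is one plausible reading of the text, assumed here for concreteness.

```python
def pct_matches_proxy_proxy(breaks_a, breaks_b):
    """Percentage of zone breaks shared (same calendar year) by two trophic proxies."""
    possible = min(len(breaks_a), len(breaks_b))
    matched = len(set(breaks_a) & set(breaks_b))
    return 100.0 * matched / possible if possible else 0.0

def pct_matches_proxy_driver(proxy_break_intervals, driver_break_intervals):
    """Percentage of driver breaks whose interval overlaps a proxy-break sediment interval.

    Each break is given as a (start_year, end_year) tuple.
    """
    def overlaps(a, b):
        return a[0] <= b[1] and b[0] <= a[1]

    matched = sum(
        any(overlaps(d, p) for p in proxy_break_intervals)
        for d in driver_break_intervals
    )
    return 100.0 * matched / len(driver_break_intervals) if driver_break_intervals else 0.0

# Illustrative values only
print(pct_matches_proxy_proxy([1885, 1952, 1978], [1952, 1990]))           # -> 50.0
print(pct_matches_proxy_driver([(1948, 1955), (1975, 1981)],               # proxy intervals
                               [(1950, 1950), (1910, 1912)]))              # driver breaks -> 50.0
```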
Relationship to watershed and bathymetric characteristics.-Finally, we examined whether the degree of similarity in zone breaks across proxies was related to watershed and bathymetric characteristics. Specifically, we calculated the Spearman rank correlation between the percentage of zone-break matches in pairs of paleorecords (e.g., percentage of matching zone breaks between pigments and ice-out date) and watershed area, lake area, and lake volume, as well as ratios of metrics such as watershed-to-lake area (WA:LA) and watershed area-to-lake volume (WA:LV) that are indicative of the relative influence of watershed inputs to the lake (Klug et al. 2012, Hayes and Vanni 2018).
Historical change in metrics of lake trophic status
All lakes showed evidence of change in primary production over the historical record. The magnitude of change in these records, however, varied both among lakes and metrics (Figs. 2-4). Most often, the generalized biogeochemical signals, including the elemental percentages and isotopic values for C and N, were the least temporally variable (Fig. 2), while the pigment concentrations and abundance of senesced Gloeotrichia colonies exhibited greater variation through time (Figs. 3, 4).
Historical changes in C and N signatures (as percentage of dry mass and isotopic values) were largely monotonic but did not show the same patterns across lakes (Fig. 2). Within the individual lake records, C and N percentages varied by 2-5% and <1%, respectively, and generally increased through time toward present-day values. Changes in isotopic values ranged from 1.5‰ to 2‰ for δ13C and 1-3‰ for δ15N. Values of δ13C decreased in most lakes, although they were relatively constant at Panther Pond and increased at Sabattus Pond. Changes in sedimentary δ15N were most often positive and were largest in eutrophic Sabattus Pond, although three sites showed little change (oligotrophic Pleasant and mesotrophic Long and Panther) and one showed a recent depletion after multiple decades of enrichment (mesotrophic Auburn). The stratigraphically constrained cluster (CONISS) analysis identified between one and three zone breaks for both C and N in all lakes, and in all but two lakes (Long and Panther), at least one of the breaks was contemporaneous in the C and N records.
In five of seven lakes (exceptions were Panther and Sunapee), there were recent increases in pigment concentrations that were substantial enough to be identified with a significant zone break in CONISS analysis (Fig. 3). These increases appeared to be a function of increased primary production (as indicated by stable pigments), but may also reflect changes in the preservation environment (as indicated by chlorophyll a:pheophytin a ratios) at Auburn, Pleasant, and Sabattus (Appendix S2: Fig. S1). The number of zone breaks identified by the stratigraphically constrained cluster analysis varied from one to five across study lakes.
Gloeotrichia colonies were found historically in all lakes, although the abundance of colonies through time varied greatly across lakes and followed four general patterns (Fig. 4). First, there was a recent (post-1950) monotonic increase in fossil abundance (Long Pond). Second, there was elevated abundance of Gloeotrichia during or immediately following the time of greatest European land clearance between 1780 and 1860 CE (Auburn, Middle Range, Pleasant). Third, there was a high abundance of this colonial cyanobacterium long before European land clearance expanded c. 1750 CE (Sunapee and Panther). Finally, there was one lake with few Gloeotrichia preserved in the sediment during the past 1000 yr, despite current large populations of other cyanobacteria (Sabattus Pond; Appendix S1: Table S1 for 14 C dating). All sedimentary records had between one and three breaks in the Gloeotrichia record except Sabattus Pond, where the rarity of Gloeotrichia led to no significant breaks.
Relationships among metrics of trophic status and identification of zone breaks
Within-lake correlations for the trophic metrics were strongest between %C and %N and among the pigments, but were variable in terms of both strength and direction for other pairwise comparisons (Fig. 5). Percentages of C and N were strongly positively correlated in four lakes, whereas elemental percentages were rarely correlated with changes in δ15N or δ13C. Pigment abundances were often positively correlated within individual cores, although the diatom pigment diatoxanthin did not regularly co-vary with other pigments. Pigments were positively correlated with %C, δ13C, %N, and δ15N values in the most eutrophic lake (Sabattus) and also in some of the mesotrophic lakes (Auburn, Long, and Middle Range). At mesotrophic Long Pond, the abundance of Gloeotrichia was strongly and positively correlated with all pigments except diatoxanthin. In contrast, only diatoxanthin was correlated with Gloeotrichia within the core from mesotrophic Lake Auburn.
The four in-lake proxies of trophic status (C, N, pigments, Gloeotrichia) provided slightly different estimates of when significant ecosystem transitions occurred, as identified with zone breaks in the CONISS analysis (Figs. 2-4). All detected zone breaks were after 1800, in the range of the 210Pb dating on the cores. For C, N, and pigments, nearly all records showed transitions after ca. 1950 CE. Zone breaks in the general biogeochemical proxies C and N co-occurred in approximately half of the instances, whereas approximately a quarter of the transitions identified by pigments and Gloeotrichia were concurrent (Appendix S2: Fig. S2). Beyond this, transitions in the group of proxies within each lake appeared to occur independently, as indicated by proxy zone breaks that were generally offset in time (Figs. 2-4).

Fig. 3. Records of common stable biomarker pigments across study lakes in Maine and New Hampshire, USA; note lake-specific y-axes. Vertical dotted lines designate shifts in the assemblage identified as statistically significant zone breaks within stratigraphically constrained cluster analysis; see text for details. Pigments include alloxanthin (cryptophytes), canthaxanthin (colonial cyanobacteria), diatoxanthin (mainly diatoms), echinenone (cyanobacteria), and lutein + zeaxanthin (incompletely separated in analysis so combined as a measure of bloom-forming phytoplankton including chlorophytes and cyanobacteria). Details about taxonomic origin come from Leavitt and Hodgson (2001). Pigments suggest increased productivity recently at many, but not all, lakes. Differences among records in the pigments that have changed over time suggest that the taxa that have responded most over time vary.

Fig. 4. Complete records of Gloeotrichia abundance across study lakes in Maine and New Hampshire, USA; note the lake-specific y-axes. Vertical dotted lines designate shifts in abundance identified as statistically significant zone breaks within stratigraphically constrained cluster analysis; see text for details. Records were truncated at c. 1770 CE (heavy vertical line), the length of the shortest record, for quantitative comparisons. Current trophic state is noted to the right of the lake panels. Gray shading within the lake panels indicates the time range of incorporation of the various towns within each watershed (Sunapee 1772-1794, Pleasant 1796-1841, Long 1792-1804, Auburn 1786-1842, Middle Range 1774-1803, Panther 1762-1841, Sabattus 1788-1840). Forest clearance for homesteading likely began a decade or more before town incorporation (Barton et al. 2012). Logging in the Lake Auburn watershed likely began earlier yet (c. 1750), because of its hydrologic connection to the Androscoggin River, a major route for running timber during early European wood extraction (Barton et al. 2012).
External drivers of trophic change
Records of human population size, agricultural land use and livestock density, and ice-out showed directional trends over the period of record in most of the watersheds (Appendix S2: Figs. S3-S5). The records were quite similar across watersheds, but varied with the intensity of urban and agricultural change in each watershed.
In general, townships in the watersheds were established by European colonists in the late 1700s or early 1800s. Human population increased slowly until c. 1850 in most watersheds. In all watersheds except Lake Auburn, the population then either remained steady or decreased slightly until c. 1950, when there were dramatic increases. Lake Auburn is near the major industrial mill towns of Lewiston and Auburn, where human population increased rapidly during 1850-1950 before a plateau after 1950 when the mills shut down (Appendix S2: Fig. S3). Population records in most lakes had a single zone break in the 1970s or 1980s, though Lake Auburn had two (both earlier in time) and Long Lake had none.
The area of land in agriculture and the density of sheep and dairy cows were generally highest between 1850 and 1910, but declined thereafter (Appendix S2: Fig. S4). The stratigraphically constrained cluster analysis identified at least two zone breaks in all three records. One break was in the late 1800s, when sheep populations and till agriculture declined. The other consistent zone break was around 1950, when there was a major decrease in agricultural land use. Zone breaks occurred at slightly different times in each watershed, as a result of differences in rates of agricultural decline (Appendix S2: Fig. S4).
The ice-out records had substantial inter-annual variability, but the overall trend was toward earlier ice-out over time, with a change of nearly three weeks over the period of record (Appendix S2: Fig. S5). The stratigraphically constrained cluster analysis identified this decline as having three zones in the Maine records and two at Sunapee. In all systems, a break occurred in the early 1980s, and in the Maine records, an earlier break was identified at the start of the 20th century. In Lake Auburn, where there is a 65-yr record of ice duration, there was both earlier ice-out and a reduced duration of ice cover (Appendix S2: Fig. S5b).
Relationships between in-lake changes and potential external drivers of change in trophic state
Temporal patterns of the four lake trophic proxies (C, N, pigments, and Gloeotrichia) differed from patterns in the timing of ice-out, watershed population, and agricultural land use, the three external drivers investigated herein (Fig. 6). Nevertheless, synchronous changes in these records, assessed as the match in the timing of zone breaks between lake and watershed records, occurred in at least one lake for each of the pairs of potential watershed drivers and in-lake proxies (Fig. 6). Coherence among changes in proxies tended to be greater if we provided a more generous window for matching zone breaks (Appendix S2: Fig. S6). For human population in the watershed, usually only one zone break was identified, so the matches in zone breaks between population and the in-lake proxies tended to be either 100% or 0%, with matches most often occurring in the pigment and N records. Zone breaks in the agricultural record were most commonly associated with zone breaks in C, N, and Gloeotrichia records. However, in the two oligotrophic lakes, all sediment proxies had at least some zone-break matches with those in agricultural land use. Zone breaks in the ice-out record were most commonly contemporaneous with breaks in the pigment record and, secondarily, with those in the N record. At mesotrophic Lake Auburn and eutrophic Sabattus Pond, zone breaks in the ice-out record had at least one match for all available trophic proxies.
Relationship of in-lake changes to morphometry
The ratio of watershed area to lake area (WA:LA) was the basin characteristic that best explained the percentage of zone-break matches between internal metrics of lake trophic status and potential external drivers of change (Fig. 7). In particular, there was a strong negative relationship between the degree of coherence in zone transitions between pigment and ice-out records and the WA:LA ratio (rS = −0.94). Lakes with smaller WA:LA ratios typically exhibited concordant timing of changes in ice-out date and changes in the pigment record. In contrast, lakes with relatively large WA:LA ratios had no common zone breaks. For the Gloeotrichia record, where agricultural land use was the most common external driver to have zone-break matches, there was a moderate negative association between the WA:LA ratio and the zone-break matches between the in-lake proxy and external driver (rS = −0.68); lakes with the smallest WA:LA ratio typically had 50% or 100% zone-break matches between agricultural and Gloeotrichia records, while the lakes with the largest WA:LA ratio had no matches. These patterns were qualitatively similar when we allowed a more generous window for zone-break matching, and, for the concordance of Gloeotrichia changes with those in agricultural records, the relationship was stronger (rS = −0.84), particularly for currently oligotrophic lakes (Appendix S2: Fig. S7).

Fig. 5. Pairwise Spearman rank correlations among paleoecological metrics in study lakes in Maine and New Hampshire, USA. Colored cells indicate lakes for which either a positive (rS > 0.6, above the diagonal) or negative (rS < −0.6, below the diagonal) association existed between the two metrics. Weighted black horizontal and vertical lines separate the groups of metrics used in the stratigraphically constrained cluster analyses. Colors indicate current trophic state of the lake, oligotrophic (blue), mesotrophic (blue-green), or eutrophic (dark green), and letters are abbreviations for lake names (A, Auburn; L, Long; M, Middle Range; Pa, Panther; Pl, Pleasant; Sa, Sabattus; Su, Sunapee). Pigment concentrations were commonly positively associated with each other and also with C and N metrics in the eutrophic lake and some mesotrophic lakes. Negative correlations among metrics were generally less common except between pigments and C and N metrics in oligotrophic Lake Sunapee.
DISCUSSION
At the broadest scale, all seven lakes showed changes in trophic state and external drivers consistent with some degree of cultural eutrophication (Figs. 2-4 and Appendix S2: Figs. S3-S5). In the period since European land clearance began to expand c. 1750 CE, all lakes experienced climate change in the form of earlier ice-out, agricultural expansion and contraction, and a continuous increase in human population in the watershed (Appendix S2: Figs. S3-S5). While any of these changes could facilitate lake eutrophication, the timing and extent of inferred change in trophic state differed across lakes and among the paleoecological metrics (Figs. 2-4). Nevertheless, high concentrations of the cyanobacterium Gloeotrichia in sediment records consistently revealed that the recently reported blooms of Gloeotrichia (Winter et al. 2011, Carey et al. 2012a) are likely not a new phenomenon (Fig. 4). Among-lake variation in the correlations among fossil markers of trophic state and their relationship to external drivers and lake morphometry (Figs. 5-7) underscores the importance of considering how the effects of climate or watershed change may be influenced by watershed-specific characteristics (Blenckner 2005).

Fig. 6. Percentage of zone breaks (points of transition in the record as identified by stratigraphically constrained cluster analysis) occurring in records of each of the driving factors (human population, agricultural land use, and ice-out) that match with zone breaks from the in-lake metrics of trophic state (carbon, nitrogen, pigments, and Gloeotrichia). Each cell indicates the percentage of zone breaks that matched between the potential external driver and the in-lake trophic proxy (Appendix S1: Fig. S3) for each of the Maine and New Hampshire, USA, study lakes.
Within-lake indicators of trophic change
Carbon, nitrogen, and pigments.-While this paleoecological study did not reconstruct numerical values for metrics like Secchi depth, nutrients, or water column chlorophyll concentration that are typically used to specify trophic state (e.g., Carlson 1977), all seven lakes had sedimentary evidence of eutrophication. The increases in many lakes in sedimentary %C and %N and in concentrations of pigments from green algae, cryptophytes, and cyanobacteria (Figs. 2, 3), taxa that tend to increase with eutrophication (e.g., Cottingham et al. 1998), were the most consistent evidence of lake eutrophication. Stratigraphically constrained cluster analyses identified these changes as having occurred primarily in the last 30-50 yr, consistent with many studies highlighting the last half-century as a period of intense cultural acceleration of biogeochemical cycles (e.g., Taranu et al. 2015). The observation of eutrophication itself was not surprising, given the profound landscape modifications that have occurred since European settlement began c. 1750 CE and the continuing and substantial climate change occurring since at least the mid-1800s. However, variation in timing of changes in each proxy also suggested that assessment of the degree of past eutrophication will depend on the proxy used and will be most easily interpreted in a multi-proxy framework (cf. Bunting et al. 2016).

Fig. 7 (partial caption): ... and colors indicate current trophic status (blue, oligotrophic; light green, mesotrophic; dark green, eutrophic). (a) The percentage of zone breaks in the ice-out record that aligns with zone breaks in the pigment record for each lake declines with increased WA:LA (rS = −0.94). (b) The percentage of zone breaks in the agricultural land-use record that match with zone breaks in the Gloeotrichia record is lowest at high WA:LA (rS = −0.68). In the lakes with the largest WA:LA, zone breaks in the measure of trophic state did not co-occur with changes in either the ice-out or agricultural records.
The dissimilarities in timing of eutrophication derived from the different proxies are consistent both with studies that focus on the role particular nutrients or assemblages play in a lake and with prior assessments of the utility of various taxa as indicators. The stronger associations among C, N, and pigments, rather than between these metrics and Gloeotrichia (Fig. 5), are similar to findings in modern studies of temporal coherence in which aggregated metrics tend to be more synchronous than more taxonomically resolved, species-level parameters (e.g., Vogt et al. 2011, Angeler and Johnson 2012). This pattern might logically result from the co-occurrence of C and N in bulk organic matter and the importance of N for phytoplankton production. Further, the strong correlations between sediment N parameters and the abundance of pigments from cryptophyte and Nostocales cyanobacteria in a number of lakes (Fig. 5) reinforce the finding that these taxa may be particularly good indicators of increased nutrient availability (Cottingham et al. 1998).
Gloeotrichia.-Patterns of Gloeotrichia fossil abundance varied tremendously through time and among lakes but clearly demonstrated that this taxon is not new to these ecosystems (Fig. 4). Unexpectedly, Gloeotrichia was historically common in lakes of differing current trophic state (e.g., both oligotrophic Sunapee and mesotrophic Panther) and yet rare in other ecosystems (e.g., oligotrophic Pleasant and mesotrophic Long Pond). Among the study lakes, only eutrophic Sabattus Pond exhibited little Gloeotrichia in past centuries (Fig. 4), perhaps because Gloeotrichia's need for light for germination (e.g., Karlsson-Elfgren et al. 2004) cannot be met in turbid, eutrophic ecosystems like Sabattus Pond. Given the long-term abundances of Gloeotrichia in most of these sediment records, we infer that recently observed increases of Gloeotrichia are not due to their recent arrival in these systems.
Although reports of the apparent expansion of Gloeotrichia in low-nutrient lakes (Winter et al. 2011, Carey et al. 2012a) led us to expect pronounced increases in its abundance in the most recent decades associated with climate or land-use changes, this pattern occurred only in Long Pond (Fig. 4). Other paleoecological studies have identified Gloeotrichia as an important taxon during intermediate times of ecosystem transition (Bottema and Sarpaki 2003, Bunting et al. 2007, Levine et al. 2012), but we did not see strong evidence of Gloeotrichia abundance increasing concomitantly with fossil pigment concentrations in recent decades. In fact, we observed fossil Gloeotrichia densities to be correlated consistently with concentrations of algal and cyanobacterial pigments only at Long Pond (Fig. 5). We have good experimental evidence that Gloeotrichia can be associated with increases in the abundance of other phytoplankton (Carey and Rengefors 2010, Carey et al. 2014a, b), and modeling results illustrate how Gloeotrichia's translocation of nutrients to the water column could tip a lake into a more eutrophic state (Cottingham et al. 2015). However, our paleoecological data suggest that whole-lake evidence of such facilitation may occur only in particular cases.
The abundance of Gloeotrichia relative to other primary producers during eutrophication appears to differ across lakes. Short-term laboratory experiments have suggested that Gloeotrichia may stimulate the abundance of the diatom Cyclotella sp. (Carey and Rengefors 2010), but in these New England lakes, the long-term outcome of interspecific interactions appears more complex. For example, Gloeotrichia abundance was not correlated with the diatom pigment diatoxanthin at Long Pond despite significant correlations with other pigments, whereas diatoms changed in concert with Gloeotrichia at Lake Auburn (Fig. 5). Lake Auburn also currently has blooms of Dolichospermum (formerly Anabaena) during and following Gloeotrichia blooms (Ewing, unpublished data), a result consistent with the laboratory evidence of enhanced Dolichospermum growth in cultures containing Gloeotrichia (Carey and Rengefors 2010). Given that phytoplankton community composition is governed by many different factors, such as zooplankton dynamics, macro-and micronutrient concentrations, light availability, thermal structure, and other factors that vary over multiple temporal scales, it is not surprising that Gloeotrichia's co-occurrence with other phytoplankton varied both within and among lakes.
Climatic and watershed influence on changes in lake systems
Timing of ice-out.-Coeval changes in lake production and ice-out date (Fig. 6) are consistent with process-based studies that demonstrate a synchronizing effect of climate on regional lake phenology (Arnott et al. 2003, Vogt et al. 2011). Strong correspondence between changes in the ice-out record and the pigment record, rather than between ice-out and Gloeotrichia records, suggests that primary producer abundance estimated from ubiquitous pigments may better reflect the effects of climate change (as ice-free season) than do individual cyanobacterial taxa.
That neither Gloeotrichia nor the cyanobacterial biomarkers canthaxanthin and echinenone were particularly responsive to changes in the timing of ice-out in nutrient-poor lakes is unsurprising, as these taxa typically bloom only later in the summer. However, cyanobacteria are predicted to do particularly well in warmer, more thermally stratified lakes (Paerl and Huisman 2008, Carey et al. 2012b), conditions that are favored by longer ice-free periods (e.g., O'Reilly et al. 2015, Woolway and Merchant 2019) such as those these lakes are experiencing (Appendix S2: Fig. S5). Importantly, these effects may differ as a function of lake trophic state; Rigosi et al. (2014) suggest that effects of warming temperature on cyanobacteria may occur in eutrophic, but not oligotrophic, lakes. Consistent with this pattern, eutrophic Sabattus Pond exhibited important contributions of cyanobacteria to production, particularly in the most recent decades, which also had the shortest periods of ice cover (Fig. 3).
Across most study lakes, vernal and autumnal taxa such as cryptophytes (as alloxanthin), diatoms (diatoxanthin), and secondarily chlorophytes (in part lutein + zeaxanthin) were most responsive to eutrophication (Fig. 3). This pattern suggests that the extended periods of low light, high turbulence, and abundant nutrients in spring and fall resulting from shorter periods of ice duration may be particularly important to overall changes in productivity, including differences in the synchrony of changes among lakes (Dröscher et al. 2009, Vogt et al. 2011). This is also consistent with studies concluding that ice cover plays a key role in the composition and abundance of planktonic assemblages (Rioual and Mackay 2005, Smol et al. 2005, Katz et al. 2015, Hampton et al. 2017). The extent of zone-break matches between ice-out and pigment records was strongly associated with the watershed-to-lake area ratio (WA:LA; Fig. 7, Appendix S2: Fig. S7). Where WA:LA ratios are high, hydrologic inputs from the watershed can more easily result in either hydrological disturbances (Klug et al. 2012) or substantial nutrient influx (Horppila et al. 2019 and references therein) that may override the effect of ice cover on lake production. In contrast, in lakes with lower WA:LA ratios, the proportional importance of watershed hydrology would be expected to be reduced, favoring closer correspondence between changes in phototrophic production (pigments) and changes in the ice-free period. This scenario is consistent with the idea that influxes of energy (irradiance, heat) induce stronger temporal coherence among lakes, whereas influx of mass (water, solutes, particles) can override effects of energy (Vogt et al. 2011). It also further supports arguments from both recent (Brookes and Carey 2011, Rigosi et al. 2014) and paleolimnological perspectives that both temperature and nutrient loading are likely to control lake productivity.
Watershed land use.-Changes in anthropogenic drivers related to watershed land use (human population and agricultural activities) sometimes co-occurred with those in sedimentary proxies of trophic state (Fig. 6). In the case of human population, changes were coeval with those of trophic proxies during the 1970s (Figs. 2-4 and Appendix S2: Fig. S3). During this period, tourism and land subdivision for second homes in Maine increased dramatically (Condon and Barry 1995), so these nearshore changes may be particularly important for both N loading and overall productivity, as N and pigment proxies most commonly changed in concert with population (Fig. 6 and Appendix S2: Fig. S6). In contrast, correspondence between changes in trophic-state proxies and agricultural activities occurred particularly for C and Gloeotrichia records, supporting other studies pointing to the importance of watershed nutrient additions in regulating primary production (Brookes and Carey 2011, Rigosi et al. 2014, Taranu et al. 2015). These latter patterns appeared particularly pronounced in currently oligotrophic lakes (Fig. 6), where all trophic-state proxies had some correspondence to changes in agricultural land use, and patterns were even stronger when more allowance was made for uncertainty in dates (Appendix S2: Figs. S6, S7).
Gloeotrichia abundance was greatest in five of our focal lakes around or immediately following the time of maximal European land clearance (c. 1780-1860) and declined as agricultural land use was reduced (Fig. 4). For example, peak abundance of Gloeotrichia in Lake Auburn occurred during initial European land clearance and was followed by a decline coinciding with the end of the most intense period of deforestation in this region (Barton et al. 2012) and the initial expansion of sheep production (Connor 1921) in Maine's history (~1780-1810 CE). Watershed clearance, soil disturbance, and introduction of livestock (Appendix S2: Fig. S4) likely resulted in significant nutrient influxes, as seen globally, and as suggested as a stimulus for Gloeotrichia in previous paleoecological studies (Van Geel et al. 1996, Bottema and Sarpaki 2003, Bunting et al. 2007). Consistent with studies in nearby Vermont (Levine et al. 2012), the introduction of livestock may be a particularly important agent of initial change in lake water quality. Here, the only low-nutrient lake that did not show an increase in Gloeotrichia during European land clearance was Long Pond, also the only site with few sheep and cows in its catchment relative to land area (Appendix S2: Fig. S4).
The extent to which changes in watershed agricultural land use corresponded to those in the Gloeotrichia record was inversely related to WA:LA, as lakes with lower ratios were more likely to exhibit coherence in break patterns (Fig. 7), especially when more allowance was made for possible error in the dating (Appendix S2: Fig. S7). While this observation seems to run counter to the expectation that lakes with high WA:LA are likely to have proportionally larger inputs in times of high flow, it matches recent empirical work showing that cyanotoxin concentrations are higher in lakes with low WA:LA (Hayes and Vanni 2018) and that nutrient limitation varies as a function of water input (Hayes et al. 2015). In particular, Hayes et al. (2015) point to the importance of the interaction between precipitation and watershed land use for bloom development and note that lake residence time may be functionally important behind the WA:LA relationships (Hayes and Vanni 2018). In our study lakes, water residence times are sufficiently long (Table 1) that it is unlikely that they limit Gloeotrichia bloom development, although it remains possible that pelagic populations are disturbed through mixing resulting from storm events in systems with large WA:LA ratios (as in Klug et al. 2012).
We also note that lakes with lower WA:LA ratios have a larger fraction of the watershed land in close proximity to the lake, suggesting that the effects of nearshore agricultural activity, especially those historically involving livestock (Appendix S2: Fig. S4), may have been particularly important to Gloeotrichia and its recruitment from the littoral zone. The absence of consistent historical data across all sites limits our ability to test this hypothesis. However, the rich historical record of the area around Lake Auburn reveals the close proximity of substantial agricultural activity to the lake (Emil 2017), and an oral history describes stocking densities on properties near the lake that were several orders of magnitude greater than that in the county as a whole (L'Hommedieu 2002). These historical records and studies documenting greater housing development immediately around lakes than across the landscape as a whole (Schnaiberg et al. 2002) support this inference and highlight the importance of future investigation of the impacts of spatially variable patterns of land use on water quality.
Synthesis
Collectively, these paleoecological records demonstrate that regional eutrophication can be variably expressed among individual lakes (cf. Maheaux et al. 2016) and yet reveal insights about cyanobacterial populations not easily ascertained from limnological studies spanning shorter time periods. First, the cyanobacterium Gloeotrichia is not new to the New England lakes we studied but rather was present before European settlement c. 1750 CE. Second, while there is evidence in one lake for Gloeotrichia as a potential driver of eutrophication, it appears that Gloeotrichia abundance is more often related to watershed land use, particularly in systems with small ratios of watershed area to lake area (WA:LA). Hence, the recent increase in Gloeotrichia seen in contemporary studies (Winter et al. 2011, Carey et al. 2012a) may be a function of intensification of nearshore land uses. Third, various manifestations of trophic change (overall organic matter production, patterns of isotopic fractionation, and the abundance of various taxa or groups of taxa, seen here through different proxies) may differ in the story they tell of the timing of shifts even within a single lake. Further, even with similar regional drivers such as climate and broadly similar land-use changes, lakes differ in both the magnitude and nature of response of individual metrics that are related to trophic change. Fourth, the duration of ice cover and the extent of agricultural activity are important drivers of lake trophic change, particularly in systems with smaller WA:LA. Together, these records suggest that single metrics of change in lake trophic state are insufficient and that even aggregate metrics may respond differently across lakes under similar pressures. Hence, attention to the intersection of changes in climate and watershed land use in the context of basin morphometry is necessary to understand how lakes respond to multiple stressors, and lake management will need to attend to this intersection to keep these low-nutrient lakes in the clear-water state.
|
2020-06-18T09:06:18.032Z
|
2020-06-01T00:00:00.000
|
{
"year": 2020,
"sha1": "38a3ffd4f3b31d53faf12f7a4bc08fc5680fc4b5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/ecs2.3170",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5dcc5d60eb90fa18d28e7b0287c40fa8b088db8d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
215802392
|
pes2o/s2orc
|
v3-fos-license
|
Difficulties that patients with chronic diseases face in the primary care setting in Singapore: a cross-sectional study
Introduction: Patients with chronic diseases face difficulties when navigating the healthcare system. Using the Healthcare System Hassles Questionnaire (HSHQ) developed by Parchman et al, this study aimed to explore the degree of hassles faced by primary care patients in Singapore and identify the characteristics associated with higher hassles. Methods: A cross-sectional study was conducted among patients with chronic disease at Hougang Polyclinic, Singapore, using the interviewer-administered HSHQ. Mean HSHQ score was compared with that in Parchman et al's study. The associations between number of chronic diseases, demographic variables and healthcare hassles were assessed using multivariate linear regression analysis. Results: 217 outpatients aged 21 years and above were enrolled. Our overall mean HSHQ score (4.77 ± 6.18) was significantly lower than that in Parchman et al's study (15.94 ± 14.23, p < 0.001). Participants with five or more chronic diseases scored 3.38 (95% confidence interval [CI] 0.11–6.65, p = 0.043) points higher than those with one chronic disease. With each increasing year of age, mean HSHQ score decreased by 0.17 (95% CI −0.26 to −0.08, p = 0.001) points. Those with polytechnic/diploma/university education and higher scored 2.65 (95% CI 0.19–5.11, p = 0.035) points higher than those with primary education and lower. Conclusion: Patients in our population reported lower hassles than those in Parchman et al's study. Increasing age and lower education level were associated with lower hassles. Further analysis of the types of chronic diseases may yield new information about the association of healthcare hassles with the number and types of chronic diseases.
INTRODUCTION
A chronic disease is one lasting three months or more, as defined by the US National Center for Health Statistics, and it is increasingly common to encounter patients with chronic diseases in the primary care setting. (1) Patients with chronic diseases may face difficulties, such as poor continuity of care, poor communication between healthcare providers and polypharmacy. (2) The difficulties faced are even more evident in patients with multimorbidity, (3) which is commonly defined as the presence of two or more chronic diseases. (4,5) These patients may have reduced quality of life, psychological distress, increased healthcare utilisation and admissions, and poorer outcomes. (2) Studies have been done to describe the difficulties that patients with chronic diseases face during their encounters with the healthcare system, (6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16) some of which focused on just one aspect of the difficulties, such as access to care and waiting times, (10)(11)(12) medication-related difficulties (13)(14)(15) or communication difficulties. (16) In a study conducted by Parchman et al, the Healthcare System Hassles Questionnaire (HSHQ) was constructed to enable patients with chronic diseases to report the overall difficulties experienced in accessing care within a primary healthcare setting (Fig. 1). (8) In their study, healthcare 'hassles' were described as "'troubles' or 'bothers' that patients experience during their encounters with the health care system". (8) The questions in the HSHQ can be grouped into three main categories: hassles about information (e.g. about their medical condition and treatment options); hassles about medications; and hassles about care (e.g. waiting time and difficulty getting questions answered). The HSHQ was created as the authors were interested in identifying the most frustrating problems that patients encountered with their general healthcare and not merely satisfaction with one specific provider or visit. In their study, the characteristics associated with higher healthcare system hassles were multiple chronic illnesses and being of African-American ethnicity. They also found that patients with multiple chronic diseases reported more problems with getting the information they needed, problems with medications, difficulty obtaining answers to questions and a lack of time with clinicians when compared to patients with a single chronic disease. (8) In another study that used the HSHQ, conducted in the United Kingdom by Adeniji et al, it was found that the hassles most often reported by patients with two or more diseases related to lack of information about diseases and treatment options, poor communication among health professionals and poor access to specialist care. (9) In Singapore, the Patient Satisfaction Survey is conducted each year by the Ministry of Health to assess patient satisfaction with public healthcare institutions. (17) The survey covers the level of patient satisfaction, the extent to which their expectations were met, whether they would recommend the services to others and the affordability of services. It does not cover specific difficulties faced by patients, such as problems with medications or problems with understanding their diseases and treatment options.
To our knowledge, broader quantitative studies have not been conducted in Singapore about the overall hassles or difficulties that patients with chronic diseases face during their encounters with the healthcare system. Studies have also shown that sociodemographic differences, such as age and ethnic differences, can affect patients' assessments of primary healthcare, and it would be useful to identify these variables in our population. (7,(18)(19)(20)(21) This study aimed to: (a) explore whether patients in Singapore faced the same degree of hassles when compared to Parchman et al's study; (b) determine whether patients with more chronic diseases faced more hassles; and (c) identify the characteristics associated with higher healthcare system hassles among patients with chronic diseases.
METHODS
This was a cross-sectional study conducted among patients consulting at Hougang Polyclinic in Singapore for chronic diseases. An interviewer-administered questionnaire, including basic demographic data and the HSHQ, which is a 16-item questionnaire on a five-point Likert scale (0 denotes 'not a problem at all' and 4 denotes 'very big problem'), was administered between 31 January 2017 and 12 April 2017. Approval from the National Healthcare Group domain specific review board (approval number 2016/01130) was obtained prior to commencement of the study.
Sample size was calculated to compare the overall mean HSHQ score with that of the original study by Parchman et al that used the HSHQ (mean 15.94 ± 14.23). (8) The difference in mean score between Parchman et al's two study groups (patients with single morbidity and those with multimorbidity) was 3.8 and, assuming the difference in mean score between Parchman et al's study and our present study to be 3.8 and above, using two-sample t-test, with a significance level of 5%, a minimum of 215 participants was found to be necessary to achieve a statistical power of 80%.
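As an illustration of this kind of power calculation, the sketch below estimates the per-group sample size for a two-sample t-test able to detect a 3.8-point difference with 80% power at a 5% significance level; the effect size is the assumed difference divided by the standard deviation reported by Parchman et al (14.23). The exact assumptions used by the study team (for example, which standard deviation and what allocation ratio) are not stated here, so the figure produced is only broadly comparable to the 215 participants quoted above.

```python
# Approximate sample-size calculation for a two-sample t-test
# (illustrative sketch; the study's exact assumptions are not reported here).
from statsmodels.stats.power import TTestIndPower

assumed_difference = 3.8   # minimum difference in mean HSHQ score to detect
assumed_sd = 14.23         # SD reported in Parchman et al's study
effect_size = assumed_difference / assumed_sd  # Cohen's d, roughly 0.27

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                    alpha=0.05,        # 5% significance level
                                    power=0.80,        # 80% statistical power
                                    ratio=1.0,         # equal group sizes assumed
                                    alternative="two-sided")
print(f"Estimated participants needed per group: {n_per_group:.0f}")
```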
Patients aged 21 years and above, with one or more chronic diseases, attending Hougang Polyclinic and those who were able to speak English or Mandarin were included. A total of 19 chronic diseases, which were the diseases under the Ministry of Health's Chronic Disease Management Programme when this study was conducted, were included. The 19 chronic diseases were anxiety, asthma, benign prostatic hyperplasia, bipolar disorder, chronic obstructive pulmonary disease, dementia, diabetes mellitus, epilepsy, hypertension, lipid disorders, major depression, nephritis/nephrosis/chronic kidney disease, osteoarthritis, osteoporosis, Parkinson's disease, psoriasis, rheumatoid arthritis, schizophrenia and stroke. (22) Patients who had cognitive impairment and those who were unable to provide informed consent were excluded from the study. Pregnant women were also excluded.
A list of patients who were aged 21 years and above, with at least one chronic disease, and scheduled for follow-up for their chronic disease(s) at Hougang Polyclinic from January 2017 to April 2017 was obtained. From 31 January 2017 to 12 April 2017, patients were randomly selected from this list by utilising a random number generator using Microsoft Excel 2013 (Microsoft Inc, WA, USA) and approached while they were waiting for their consultation.
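The random selection step described above can be reproduced in a few lines of code; the sketch below uses a hypothetical list of patient identifiers and a fixed seed purely so the example is repeatable, and is equivalent in spirit to the Excel random number generator used by the study team.

```python
# Minimal sketch of random selection from an eligible-patient list
# (the identifiers and list size are hypothetical placeholders).
import random

random.seed(2017)  # fixed seed only so this example is reproducible
eligible_patients = [f"patient_{i:04d}" for i in range(1, 1001)]

# 363 patients were approached in the study; sample that many at random.
approach_order = random.sample(eligible_patients, k=363)
print(approach_order[:5])
```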
In total, 363 patients were approached, of which 33 patients were ineligible and excluded after applying the inclusion and exclusion criteria. Of the 330 eligible patients, 113 patients refused or had other reasons not to participate in the study (e.g. feeling unwell) and 217 patients consented to take part in the study. The response rate was 65.8%.
The study procedure was conveyed to patients and they were given sufficient time to consider their participation. Informed written consent was then obtained and the questionnaire was interviewer-administered. The study investigators personally trained the interviewers and each interview session lasted 20-30 minutes. Electronic medical records were accessed to obtain the number of chronic diseases in each patient.
Test-retest reliability was conducted before the commencement of study. Ten patients answered the HSHQ twice, two weeks apart. They also evaluated the face validity of each question, in which they were asked to comment on the clarity of wording and the likelihood that the target audience would be able to answer the questions. The HSHQ was translated into Mandarin using forward-translation and then verified via back-translation, which was carried out independently by different persons, with appropriate qualifications, to ensure accuracy.
The participants' sociodemographic characteristics were analysed descriptively.
Overall mean HSHQ score was compared with that in the original study by Parchman et al using a two-sample t-test. Overall mean HSHQ scores for each chronic disease count were also analysed to assess the significance of the relationship between mean score and number of chronic diseases.
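For readers interested in how such a comparison can be made against a published mean and standard deviation, the sketch below uses summary statistics only; the sample size of the comparison study is not given in this paper, so the value n_ref is a hypothetical placeholder that would need to be replaced with the actual figure.

```python
# Two-sample (Welch's) t-test computed from summary statistics
# (illustrative sketch; n_ref is a hypothetical placeholder).
from scipy.stats import ttest_ind_from_stats

mean_local, sd_local, n_local = 4.77, 6.18, 217    # present study (reported values)
mean_ref, sd_ref, n_ref = 15.94, 14.23, 400        # comparison study; n_ref assumed

t_stat, p_value = ttest_ind_from_stats(mean_local, sd_local, n_local,
                                        mean_ref, sd_ref, n_ref,
                                        equal_var=False)  # unequal variances
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```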
Bivariate analysis (Mann-Whitney U test or Kruskal-Wallis test) was conducted to explore the relationship between each independent sociodemographic variable and mean HSHQ score. Multivariate logistic regression analysis was used to analyse the impact of the variables on mean HSHQ score while controlling for other sociodemographic variables.
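A minimal sketch of how such bivariate comparisons could be run is shown below; the data frame, column names and values are hypothetical and serve only to illustrate the form of the two tests.

```python
# Bivariate comparisons of HSHQ scores across patient groups
# (illustrative sketch; the data frame and column names are hypothetical).
import pandas as pd
from scipy.stats import mannwhitneyu, kruskal

df = pd.DataFrame({
    "hshq_score": [0, 3, 5, 12, 7, 2, 9, 1, 4, 6],
    "sex":        ["M", "F", "M", "F", "M", "F", "M", "F", "M", "F"],
    "ethnicity":  ["Chinese", "Malay", "Indian", "Chinese", "Chinese",
                   "Malay", "Indian", "Chinese", "Chinese", "Malay"],
})

# Two groups: Mann-Whitney U test
male = df.loc[df["sex"] == "M", "hshq_score"]
female = df.loc[df["sex"] == "F", "hshq_score"]
u_stat, p_sex = mannwhitneyu(male, female, alternative="two-sided")

# Three or more groups: Kruskal-Wallis test
groups = [g["hshq_score"] for _, g in df.groupby("ethnicity")]
h_stat, p_ethnicity = kruskal(*groups)

print(f"Mann-Whitney U p = {p_sex:.3f}; Kruskal-Wallis p = {p_ethnicity:.3f}")
```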
Cronbach's alpha coefficient was calculated to measure the internal consistency of HSHQ responses in our population. A p-value < 0.05 was considered to be statistically significant. Data was presented as mean and standard deviation. All analyses were conducted using Stata SE version 13.1 (StataCorp, TX, USA).
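Cronbach's alpha can be computed directly from the item-level responses using the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of the total score); the sketch below assumes a hypothetical matrix of responses to the 16 items, each scored 0-4, in place of the real study data.

```python
# Cronbach's alpha for the 16 HSHQ items
# (illustrative sketch; the response matrix is simulated, rows = respondents).
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 5, size=(217, 16))      # Likert scores 0-4

k = responses.shape[1]                               # number of items
item_variances = responses.var(axis=0, ddof=1)       # variance of each item
total_variance = responses.sum(axis=1).var(ddof=1)   # variance of the total score
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```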
RESULTS
217 participants, aged 21 years and above, with at least one chronic disease and on follow-up for chronic disease(s) at the polyclinic were enrolled. Mean age was 63.0 ± 8.9 years and their average number of chronic diseases was 2.9 ± 1.0. All participants were Singapore citizens or permanent residents, and a majority of patients were ethnically Chinese (79.3%), aged 50 years and above (94.4%), married (85.3%) and with a religion (89.4%) ( Table I). The top three hassles reported by our participants (items that scored the three highest mean HSHQ scores) were long waiting times for clinic or specialist appointments, poor communication between different doctors or clinics, and medical appointments interfering with work, family or hobbies.
When analysed with respect to the number of chronic diseases (Table III), there was no linear relationship between mean HSHQ score and number of chronic diseases among participants (correlation coefficient = 0.06, p = 0.399). Multivariate analysis was then performed using the sociodemographic variables in Table I and the number of chronic diseases; the variables that were statistically significant are listed in Table IV. Participants with five or more chronic diseases were found to have a mean HSHQ score that was 3.38 (95% CI 0.11-6.65, p = 0.043) points higher than those with one chronic disease.
With each increasing year of age, mean score was found to decrease by 0.17 (95% CI −0.26 to −0.08, p = 0.001) points. Those with polytechnic/diploma/university education and higher were found to have a mean score that was 2.65 (95% CI 0.19-5.11, p = 0.035) points higher than those with primary education and lower.
DISCUSSION
Our study population's ethnic (79.3% Chinese, 10.1% Malay and 9.2% Indian participants) and age composition was consistent with the overall ethnic and age composition of patients attending Hougang Polyclinic in 2014 (77% Chinese, 10% Malay and 10% Indian patients).
Mean age in our study was 63.0 ± 8.9 years, while that of patients attending Hougang Polyclinic in 2014 was in the range 40-64 years.
The overall mean HSHQ score for our study population was lower than that of the original Parchman et al's study and there could be several reasons for this. Parchman et al's study was conducted among veterans in a veteran healthcare system and the surveys were administered by mail, with a response rate of 59%. Self-administration of questionnaires may increase participants' willingness to disclose information when compared with face-to-face or telephone interviews. The greater anonymity offered by mail has been reported to lead to more accurate reporting on topics, such as health and behaviour. (23) Parchman et al's study population of veterans in the US setting may also have had other socioeconomic reasons that resulted in self-reported hassles being higher. Mean age in their study was also lower (53.8 years) when compared to ours (63.0 years) and, as proven in our study and others', older patients did rate healthcare more favourably. (7,(18)(19)(20) Participants in our study were also likely to have had concerns that negative responses would affect the healthcare provided to them even though reassurances were given that responses would not affect their quality of care.
Quantitative studies may not be the best way to assess the difficulties that patients encounter and further studies, such as qualitative studies, can be done to investigate these difficulties that patients face.
Parchman et al and Adeniji et al both found that a higher number of chronic diseases was associated with higher healthcare hassles. (8,9) Our study found that participants with five or more chronic diseases had a higher mean HSHQ score than those with one chronic disease.
However, we did not find a linear relationship between mean HSHQ score and the number of chronic diseases in our participants. This could, in part, be due to the types of chronic diseases included rather than the absolute number of chronic diseases present.
The burden of different combinations of diseases is different. Some multimorbidity studies have clustered the diseases, gathering the diseases into groups of non-random associations. Examples of common non-random associations of diseases include the groups of cardiovascular and metabolic disorders, mental health problems and musculoskeletal disorders. (2,(24)(25)(26) A patient who has one disease from all three clusters (one each from the cardiovascular and metabolic group, the mental health group and the musculoskeletal group) could suffer from more problems, such as poor coordination of care and having to attend many different appointments, when compared to a patient with three diseases all from the same cluster, such as hypertension, hyperlipidaemia and diabetes mellitus.
The burden of different diseases varies as well. For example, a patient suffering from stroke with hemiplegia would likely experience more difficulties than a patient with hypertension.
These reasons could partly account for the lack of an expected linear relationship between the number of chronic diseases and mean HSHQ score. Further information could be gathered and further analyses done to determine whether patients with diseases across different clusters score higher than patients with single-cluster multimorbidity, and whether the type of disease(s) affect mean HSHQ score.
Elderly patients (aged 70 years and above) and those with primary education and lower were found to have lower HSHQ scores in our study. Previous studies have shown that older patients tend to rate care more favourably than younger patients. (7,(18)(19)(20) The reasons postulated included cultural differences in the willingness to report unfavourable assessments among older patients, and the higher morbidity and consulting rates among older patients may mean that this group may have more contact with primary care facilities and be more acquainted with the system. (18) Studies in Asian populations have also shown that patients with lower education levels or those who are illiterate reported greater satisfaction with healthcare. (27,28) Measures should be taken to address the difficulties that patients face during their interactions with the healthcare system. In Singapore, patients with chronic diseases often consult at polyclinics, which are public primary care clinics. Although private general practitioners currently provide around 80% of primary care in Singapore, only 55% of all chronic patients are managed by them; polyclinics manage the remaining 45%. (29) Future studies could relate the degree of hassles faced to the healthcare outcomes of patients and also compare the types of hassles faced between the different studies. Qualitative studies could also explore patient interpretations of the various questions.
There were a few limitations to our study. We did not track whether language barrier was a reason for rejecting participation in our study and hence we were not able to ascertain whether there was responder bias in our study population due to conducting the survey in English and Mandarin only. However, we noted that almost all of the Malay and Indian participants we approached were able to converse in English. This study was conducted in only one polyclinic in Singapore and, had resources allowed, the inclusion of more sites of primary care would have been ideal. Although measures were taken to optimise the translation of the questionnaire to Mandarin (using forward and backward translations), there might still have been terms and/or concepts that were not conveyed accurately in the Mandarin questionnaire.
In conclusion, patients in our population reported significantly lower hassles than the original study by Parchman et al, and further studies, such as qualitative studies, could be conducted to elucidate the reasons for this. The characteristics found to be associated with lower hassles were increasing patient age and lower education level. Further analysis of the types of chronic diseases in our population may yield new information about the association between the number and types of chronic diseases included and the healthcare hassles reported.
It is suggested that, after measures to decrease the difficulties that patients face have been implemented, a follow-up study be conducted to assess whether patients subsequently report fewer hassles.
|
2020-04-18T13:05:50.046Z
|
2020-04-17T00:00:00.000
|
{
"year": 2020,
"sha1": "f7108d3dfa77d87dc28a3d340946677e30a7fc79",
"oa_license": null,
"oa_url": "https://doi.org/10.11622/smedj.2020062",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2bfb908363a029b200cebdad77d1754eb2b8e3e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
259269742
|
pes2o/s2orc
|
v3-fos-license
|
Uncovering the Secrets of Prostate Cancer’s Radiotherapy Resistance: Advances in Mechanism Research
Prostate cancer (PCa) is a critical global public health issue with its incidence on the rise. Radiation therapy holds a primary role in PCa treatment; however, radiation resistance has become increasingly challenging as we uncover more about PCa’s pathogenesis. Our review aims to investigate the multifaceted mechanisms underlying radiation therapy resistance in PCa. Specifically, we will examine how various factors, such as cell cycle regulation, DNA damage repair, hypoxic conditions, oxidative stress, testosterone levels, epithelial–mesenchymal transition, and tumor stem cells, contribute to radiation therapy resistance. By exploring these mechanisms, we hope to offer new insights and directions towards overcoming the challenges of radiation therapy resistance in PCa. This can also provide a theoretical basis for the clinical application of novel ultra-high-dose-rate (FLASH) radiotherapy in the era of PCa.
Introduction
Prostate cancer (PCa) is the second most common malignant tumor in men worldwide and the fifth leading cause of death. According to the latest epidemiological survey results, PCa was the third most commonly diagnosed malignancy in 2020, with 1,414,259 new cases (7.3% of total) [1], following only lung and colorectal cancers. Prostate-specific antigen (PSA) is a prevalent biomarker employed for the diagnosis and active surveillance of PCa. According to the NCCN guidelines, PSA plays a crucial role in determining PCa risk stratification. Specifically, a PSA level of less than 10 ng/mL indicates low-risk PCa, 10 to 20 ng/mL suggests intermediate-risk PCa, and levels exceeding 20 ng/mL are indicative of high-risk PCa. Consequently, PSA screening is an important component for the diagnosis of PCa [2]. PCa exhibits significant heterogeneity between different patients, resulting in varying treatment responses and outcomes. Therefore, a detailed analysis of the underlying molecular mechanisms is necessary to achieve personalized diagnosis and treatment.
In 1917, Professor Benjamin Barringer of Memorial Sloan-Kettering Cancer Center in the United States pioneered the use of radium radiation to treat PCa, and since then radiation therapy has become a common treatment option for PCa. In 1962, Professor Bagshaw of Stanford University in the United States pioneered high-dose radiotherapy technology (Supplementary Figure S1) and subsequently applied it as a radical treatment for PCa, conducting numerous case studies that established the status of radiotherapy as one of the "Troika" of PCa treatment [3,4]. Radiotherapy has been clinically applied to all stages of PCa, including localized, metastatic, and castration-resistant PCa (CRPC). While radiotherapy is considered the standard treatment for localized PCa, its effectiveness is often limited by the emergence of radioresistance.
Impact of Cellular Hormone Levels on PCa
Current androgen deprivation therapy (ADT) was established based on the groundbreaking work of Huggins and Hodges in 1941 [7,8], where they discovered that reducing androgen levels could slow the progression of metastatic PCa. Recent studies have demonstrated that androgen biosynthesis is tightly regulated by the "hypothalamic-pituitary-gonad (HPG)" axis [9], and that androgen receptor (AR) axis signaling plays a crucial role in the onset and progression of PCa [10,11]. Androgens, such as testosterone and dihydrotestosterone, bind to the ligand-binding domain (LBD) of AR in cells, causing AR to detach from HSP90 and move into the nucleus, where it interacts with androgen response elements (AREs) to activate the expression of genes such as KLK3, NKX3.1, FKBP5, and TMPRSS2-ERG, among others. It is believed that malignant transformation in PCa is driven by this fundamental physiological process [12,13].
As CRPC is the final clinical stage of ADT, CRPC cells are more prone to resistance towards multiple treatment regimens (such as ADT therapy, radiotherapy, and chemotherapy) compared to hormone-sensitive PCa cells [14,15]. There appears to be a correlation between hormone level and DNA damage repair, as studies have shown that after ADT therapy, the expression of Ku70 (the key junction protein of non-homologous end joining (NHEJ)) in PCa tissue is downregulated following DNA damage [16]. Furthermore, recent clinical research has also demonstrated that the local failure rate of CRPC after local treatment is significantly higher than that of hormone-sensitive PCa [6], indicating that the radiosensitivity of PCa is reduced when the PCa is transformed from an androgen-dependent state to an androgen-independent state. In terms of drug mechanism, theoretically, ADT could have a synergistic effect with radiotherapy for PCa because it can induce the apoptosis and autophagy of PCa cells [17]. In addition, androgen-sensitive PCa LNCaP cells expressing AR undergo a higher rate of radiation-induced apoptosis in the absence of androgens [18], which is consistent with findings from other studies. It is worth noting that PCa cells expressing AR experience the same response. A growing body of research suggests that AR, as a receptor of steroid hormones, has a causal and mutually regulated association with radiotherapy resistance in PCa due to its similarity of location and spatial accessibility to DNA. In PCa cells, DNA damage caused by radiation can activate AR, and after AR activation, the activity and expression of DNA PKcs (a key molecule of NHEJ) are induced. Furthermore, a positive feedback regulatory loop involving AR and DNA PKcs can be formed [19][20][21], as shown in Figure 1.
In addition, patients with metastatic CRPC (mCRPC) are often accompanied by mutations and amplification of the AR, as well as the production of splice variants of AR. Mutations and amplifications result in the lack of the LBD during mRNA splicing, leading to the retention of the N-terminal AF1 activation cluster (N-TAD) and the DNA-binding domain (DNBD), forming a splice variant of AR [22,23]. This allows for the aberrant activation of the AR signaling pathway inside the cell, even under conditions of castrated androgen in the extracellular environment. This mechanism is involved in the occurrence and progression of drug resistance and castration resistance and is considered one of the challenges restricting the improvement of ADT effectiveness in clinical practice [13]. The most studied AR splice variant (ARv) in endocrine therapy is androgen receptor variant 7 (AR-V7). AR-V7 plays a similar role to AR in the radiotherapy resistance of PCa. After 5 Gy of radiotherapy, AR-V7 expression was upregulated in PCa C4-2 and 22Rv1 cells, and their nuclear translocation was elevated. AR-V7 can promote DNA damage response (DDR) and initiate the repair process of homologous recombination repair (HRR) and NHEJ. The upregulated expression of AR-V7 after IR can reduce the "synthetic lethality" response of poly (ADP-ribose) polymerase 1 (PARP-1) inhibitor [24]. Furthermore, by binding to the key molecule of NHEJ, DNA PKcs, AR-V7 can form a DNA repair complex and enhance the repair ability of cells after radiation, thereby fostering the radiotherapy resistance of PCa cells, which can be inhibited by the AR antagonist enzalutamide [25].
Maintaining DNA Repair Deficiency and Disorders of the DNA Damage Repair System
One of the mechanisms underlying X-ray therapy for cancer is damaging the DNA in tumor cells. Currently, the recognized mechanisms leading to the death of tumor cells include direct damage to the DNA of tumor cells; various forms of programmed death (e.g., apoptosis, autophagy, programmed necrosis, and senescence) induced in tumor cells through cellular communication by related factors and molecules; and the induction of mitotic death. DNA serves as an essential target for radiation-induced death, and various types of radiation-induced DNA damage can be recognized through a complex network of pathways. Corresponding repair processes for different types of DNA damage can also be initiated to maintain genomic stability [26]. When tumor cells are irradiated, DDR is crucial for activating repair pathways and cells, and DDR receptor proteins that respond to multiple DNA damages are critical for initiating repair. Different molecules and mechanisms involved in damage repair play distinct roles in the radiotherapy resistance of PCa (Table 1 and Supplementary Figure S2) [27]. The XRCC1 R194W SNP is a predisposing factor for PCa patients [28,29], and the presence of the XRCC3 rs1799794 SNP predicts the likelihood of gastrointestinal (GI) toxicity in PCa patients following radiotherapy [30]. The major repair pathways, the lesions they address, and representative findings in PCa include the following:
- Nucleotide Excision Repair (NER), which addresses larger pyrimidine dimers formed after excision of DNA damage: Numerous studies have investigated the SNPs in NER-related genes [31]. The PAT+/- polymorphism in the XPC gene is a predisposing factor for PCa susceptibility [32]. Cadmium exposure affects the expression of NER-related genes, including XPA [33]. The ERCC2 mutations (G > A, Asp (711) Asp) are predictive markers of toxic reactions to radiotherapy in PCa patients [34].
- Non-homologous end joining (NHEJ), which repairs DNA double-strand breaks (DSBs) independently of the cell cycle: The LIG4 (T > C, Asp (568) Asp) variant serves as a predictor of toxic reactions to radiotherapy in PCa patients [34]. The inhibitor of DNA PKcs and ATM, silymarin, accelerates the radiosensitivity of DU145 cells in PCa [35]. The expression level of Ku70 in PCa tissue can predict the treatment response of radiotherapy [36]. By targeting LITAF, miR-106 increases the expression of ATM, promoting radiotherapy resistance in PC-3 and DU145 cells of PCa [37]. The catalytic activity of Tip60 on ATM acetylation and phosphorylation in PCa cells makes it a candidate marker for radiotherapy resistance [38].
- Homologous Recombination Repair (HRR), which repairs DSBs and depends on the existence of sister chromatids in the cell cycle: Approximately 30.7% of PCa patients harbor BRCA1/2 mutations [39]. In patients with mCRPC and BRCA1/2 or HRR mutations, the FDA has authorized the use of the PARP inhibitor Rucaparib [40]. Patients with mCRPC and HRR mutations may benefit from Olaparib [41]. Silencing of RAD51 in PCa DU145 cells improves their radiosensitivity [42].
- Cross-Link Repair, which addresses cross-linking between DNA-DNA and DNA-protein due to IR: PCa harbors mutations in genes that encode the core complex of Fanconi anemia (FA), including FANCA ex1-12del and FANCA c.3384-1 G > A, but the relationship between these mutations and radiotherapy remains unexplored [45]. The S1088F mutant protein of FANCA enhances the susceptibility of cells to DNA damage induced by cisplatin [46].
- Mismatch Repair (MMR), which addresses mismatches arising during replication and mismatches of small insertions: Although infrequent, the occurrence of MMR gene mutations in PCa serves as an unfavorable prognostic marker [47]. The level of MSH6 expression is linked to Gleason Grade 5 [48]. PMS2 and MLH1 induce downregulation of BCL2A1- and c-Abl-mediated apoptosis in PCa DU145 cells, suggesting their potential as targets for radiosensitization [49,50]. Mutations in MLH1 and PMS1 impact the sensitivity of Olaparib [51].
Hence, due to the complex nature of the DNA damage repair process involving multiple molecules and complexes with intricate mechanisms, it is necessary to conduct further studies on the role of these complex components in promoting radiotherapy resistance in PCa. Furthermore, there is a need to develop corresponding drugs targeting specific molecular targets to enhance radiosensitivity in clinically high-grade malignant PCa.
Cell Cycle Disorder
According to classical radiobiology theory, the radiosensitivity of cells is determined by the "4R" factors: repair (sub-lethal injury and potentially lethal injury), repopulation, redistribution (cell cycle), and re-oxygenation. The cell cycle precisely regulates cellular activities that determine cell proliferation and fate after DNA damage, which is finely controlled by a series of cell cycle proteins and corresponding kinases. Abnormal regulation of the cell cycle is a hallmark of tumor cells [52]. The G1/S and G2/M checkpoints play vital roles in regulating the entire cell cycle, including determining entry into the DNA synthesis phase and proper cell division. As the primary target of IR, DNA damage repair is closely linked to cell cycle regulation in PCa radiotherapy resistance. Cyclins, kinases, and inhibitors are key regulatory molecules of the cell cycle and represent critical targets for current malignant tumor treatment strategies. Notably, CDK4/6 inhibitors have shown potential as a treatment approach for enhancing radiosensitivity against malignant tumors. The successful application of this approach not only offers a possible treatment method from the perspective of the cell cycle but also sheds new light on the precise control of radiosensitivity in PCa. Current research on using cell cycle regulatory molecules as targets for improving radiosensitization in PCa is summarized below (Table 2 and Supplementary Figure S3):
- CD105 [56] (G2/M arrest): CD105 has been shown to promote radiotherapy resistance in PCa by depleting intracellular ATP and upregulating SIRT1, which activates the BMP and TGF-β/Smad pathways. However, targeting CD105 with the TRC105 antibody has been demonstrated to enhance radiosensitivity.
- Resveratrol [57] (G1/S arrest): Resveratrol has been shown to inhibit the phosphorylation of PI3K/Akt, an important cell survival signaling pathway, in PCa 22Rv1 and PC-3 cells following radiation treatment. Additionally, resveratrol induces the phosphorylation of AMPK and promotes cell cycle arrest in a P21-dependent manner.
- GnRHR [58] (G2/M arrest): Redistribution of GnRHR expression using IN3 repositioned it on the membrane surface of PC-3 cells, promoting their radiosensitivity in the recoverable phase, while IN3 had a pro-apoptotic effect.
- RPS6KB1: The expression of ChK1, p-Cdc25C, and cyclinD1 in PCa PC-3 cells can be downregulated after RPS6KB1 is inhibited by Nexrutine. When RPS6KB1 is inhibited, the process of NHEJ is also inhibited.
The cell cycle is a precisely regulated machinery with intricate details and processes. In PCa, various molecules can enhance radiosensitization through different processes, molecules, and mechanisms. However, their corresponding targets and underlying mechanisms require further exploration to fully understand the intrinsic correlation between cell cycle regulation disorder and radiotherapy resistance in PCa. A deeper understanding of these mechanisms will facilitate the development of more effective treatment strategies.
Disruption of Cellular Redox Homeostasis
Maintaining cellular redox homeostasis is essential for normal cell proliferation, signal transduction, and physiological activities. Tumor cells heavily rely on maintaining stable levels of reactive oxygen species (ROS) as well. During tumor initiation and progression, the rapid and excessive proliferation of cancer cells results in the production of massive amounts of ROS which damage tumor cell DNA and lead to genomic instability [61]. During malignant tumor invasion and metastasis, ROS not only promotes tumor cell proliferation, but also interacts with stromal and immune cells to activate the EMT-related TGF-β pathway and transcription factors, leading to tumor cell scattering and distant metastasis.
Radiotherapy can have both direct and indirect ionization effects when used to treat cancer. One indirect effect of IR is the generation of ROS in tumor cells through interactions between oxygen and water molecules. These ROS include superoxide anion, hydrogen peroxide, hydroxyl radicals, singlet oxygen, and others [62]. Additionally, IR can induce cells to produce reactive nitrogen species (RNS), including nitric oxide (NO), and other oxides or nitrogen-containing free radicals. These chemical species are highly reactive, unstable, and paramagnetic compared to other molecules found in nature. They can disrupt the structure and function of biomolecules such as DNA, proteins, and lipids and also disrupt the redox homeostasis of tumor cells. This disruption can trigger damage repair, apoptosis, autophagy, and ferroptosis in tumor cells, making them useful in treating tumors [63].
Numerous studies have demonstrated that approximately 70% of IR's therapeutic effects are due to its indirect ionization effects [64,65]. During tumor evolution and subsequent treatment, the organism may develop adaptive mechanisms to resist oxidative stress (Figure 2). However, an overactive adaptive mechanism can lead to cellular tolerance to oxidative stress damage and subsequent treatment resistance. Emerging evidence suggests that multiple adaptive mechanisms of antioxidant stress in tumor cells are at play, including metabolic reprogramming via sulfur-based metabolism to produce antioxidant substances, weakening the metabolism of glutamate and folic acid, enhancing the metabolism of the pentose phosphate pathway to increase production of NADPH, enhancing the transcription and expression of antioxidant-stress-related transcription factors and genes, and stimulating the metabolic signaling pathway involving AMPK. Therefore, it is crucial to study the molecular mechanisms that control redox homeostasis in tumor cells to develop effective therapies. The organism's antioxidant stress defense system is composed of antioxidant enzymes, antioxidant substances, and antioxidant-stress-related transcription factors, all of which play a crucial role in maintaining cellular redox homeostasis. One such transcription factor is nuclear factor erythroid 2-related factor 2 (Nrf2) [66], a member of the leucine zipper transcription factor family that typically resides in the cytoplasm and is continuously ubiquitinated and degraded by Kelch-like ECH-associated protein 1 (Keap1) to maintain low expression levels. When cells undergo oxidative stress, endogenous oxidative stress inducers (ROS and metabolites produced during the cell's oxidative phosphorylation process) and exogenous oxidative stress inducers (IR and chemotherapy drugs) are involved in the cell. The antioxidant response element (ARE) on the nucleus is activated by oxidative stress, prompting the transcription factors Nrf2 and Keap1 to depolymerize and translocate to the nucleus, where they form a heterodimer with Maf protein, which is bound to the ARE. This complex then functions to clear oxidative substances from the cell and protect its structure and function by generating antioxidant substances such as glutathione (GSH) and antioxidant enzymes such as catalase, HO-1, NQO-1, and GPX [67,68].
The malignant biological behavior of tumors is largely associated with regulatory abnormalities in the pathway responsible for maintaining redox homeostasis, which is a key regulator. In tumor cells, the constant mutation of Keap1 leads to the unregulated expression and localization of Nrf2 [69]. Abnormally high levels of antioxidant stress in tumor cells contribute to proliferation, invasion, metastasis, and therapeutic resistance [70]. The "Nrf2/ARE" signaling pathway has been associated with radiotherapy and chemotherapy resistance in lung cancer (Figure 3) [71,72]. Clinical samples of PCa have revealed three highly methylated sites (H3K9me3, MBD2, and MeCP2) in the Nrf2 promoter region, which inhibits Nrf2 transcription and downregulates PCa cells [73]. Knocking out Nrf2 in the transgenic mouse model of PCa leads to depleted glutathione S-transferase (GST) and increased ROS levels, promoting the PCa development process [74]. Knocking down Nrf2 in PCa DU145 cells can decrease the expression of oxidative-stress-related genes, such as NAD(P)H:quinone oxidoreductase 1 (NQO1), superoxide dismutase 2 (SOD2), and heme oxygenase-1 (HO-1), making the cells more sensitive to cis-platinum and inducing DNA damage response [75]. The activation of Nrf2 can induce tolerance to radiation treatment in glioma cells. The overexpression of HECT and Copper Zinc Superoxide Dismutase Domain containing protein 1 (HACE1) in glioma tissues competes with Keap1 to prevent Nrf2 from being degraded via ubiquitination at the post-translational modification level and promotes the upregulation of Nrf2 transcriptional expression via the internal ribosome entry site (IRES) through La/SSB [76]. Targeting Nrf2 can induce ferroptosis in PCa cells [77], providing novel insight into the radiotherapy resistance of PCa. Therefore, Nrf2 is a critical target for improving treatment resistance in PCa [78,79]. However, its role in radiotherapy resistance in PCa remains unclear and requires further study. Tumor cells reprogram their metabolism, which can differ from normal cells [80,81]. Internal chemical metabolism is central to oxidative stress. Thus, the metabolic reprogramming process of tumor cells not only sheds new light on interpreting radiotherapy resistance in PCa but also offers a novel strategy for radiosensitizing PCa. Tumor cells have an abnormal increase in glutamine catabolism, which rapidly supplies fuel for cell division. Glutaminase-driven catabolism can increase intracellular antioxidant substances such as GSH to protect against IR damage. The increase in glutamine catabolism in PCa cells is not only related to defense against oxidative stress but also involved in maintaining PCa cells' survival and inducing ATG5-mediated cytoprotective autophagy [82].
Additionally, the detection of glutamine in the peripheral blood, glutaminase 1 (GLS1), and myelocytomatosis viral oncogene homolog (MYC), which regulate glutamine catabolism, can improve screening for the population that will benefit from radiotherapy and predict PSA doubling time in clinical settings. In normal prostate epithelial cells, the glucocorticoid betamethasone activates the "RelB-BLNK" axis, promotes the transcription of manganese superoxide dismutase (MnSOD) after radiation, and protects normal cells from radiation damage. Betamethasone inhibits the "RelB-BLNK" axis in PCa cells, which can further increase ROS and lead to the death of PCa cells [83]; this effect relies on the high level of ROS already generated by the metabolic reprogramming of PCa cells. The GSH/GSSG ratio is downregulated in PC-3 cells in PCa because parthenolide deploys NADPH oxidase, which uses up thioredoxin. Radiosensitization is achieved by inhibiting PCa cell metabolism [84], which leads to a downregulation of forkhead box O3a (FOXO3a) expression and its downstream molecular antioxidant SOD.
AR and PCa cell redox homeostasis are interconnected, regulating and influencing each other. Currently, ADT can induce oxidative stress damage to PCa cells in addition to targeting AR, leading to therapeutic effects [85,86]. The treatment of PCa cells with ADT results in the induction of endocrine resistance and radiotherapy resistance, as evidenced by an increase in the expression of Nrf2 and antioxidant stress molecules (peroxiredoxin-1, thioredoxin 1, and metallothionein-1) [87]. Under conditions of oxidative stress, thioredoxin domain-containing protein 9 (TXNDC9), the primary regulator of reactive oxygen species, and PRDX1 can become dissociated. By interacting with AR, peroxiredoxin-1 (PRDX1) blocks its ubiquitination degradation, increases AR expression, and maintains AR signaling pathway activation [88]. The disruption of homeostasis in the defense system against oxidative stress varies across tumor stages and processes. Due to this, the role of oxidative stress in the malignant transformation and treatment resistance of malignant tumors needs more research to determine the threshold and detailed role.
Enhance of Epithelial-Mesenchymal Transitions (EMT)
EMTs refer to the morphological transformation of epithelial cells into mesenchymal cells or fibroblasts, including the disappearance of cell polarity and rearrangement of the cytoskeleton. The enhanced migratory capacity of cells during the EMT process allows for its categorization into three distinct subtypes, which are linked to various biological processes including tumor invasion, migration, metastasis, tumor microenvironment, and immune microenvironment changes [89][90][91]. During the EMT process, epithelioid cell markers, such as E-cadherin, β-catenin, Claudin-1, and zona occludens-1 (ZO-1), are downregulated, while the markers of mesenchymal cells, such as vimentin, α-smooth muscle actin (α-SMA), Snail1/2, Twist1/2, ZEB1/2, and other molecules, are upregulated. These molecular changes cause complex regulatory network changes within cells, inducing the process of EMT. Inducing factors of EMT include activated intracellular EMT-related pathways (TGF-β/Smad, ERK, NF-κB, Wnt/β-Catenin, and Notch) in interaction with growth factors in the extracellular matrix (ECM) and receptors on the cell membrane surface, interaction between tumor cells and interstitial cells in the tumor environment, and an expression of EMT molecules induced by hypoxia in the tumor.
According to typical radiobiology theory, epithelial tumor cells exhibit moderate sensitivity to IR, while stromal cells are relatively radiation-resistant. Consequently, the process of EMT in tumor cells also contributes to radiation tolerance. In PCa cells, activation of the "acetylated KLF5/CXCR4" axis can induce interleukin-11 (IL-11) secretion, trigger the SHH/IL-6 paracrine pathway, promote docetaxel resistance, and sustain EMT [92]. IR has been shown to increase EMT markers (uPA, vimentin, and N-cadherin) in PCa DU145 cells [93]. EMT markers (N-cadherin and vimentin) were upregulated, while E-cadherin and cofilin expression was downregulated in samples taken from patients with PCa before and after radiation therapy. Together with PARP-1, these markers serve as predictors of PCa's susceptibility to radiotherapy. The dynamic changes in the EMT-MET process of PCa patients receiving radiotherapy were evaluated and found to be inhibited by silymarin [93]. Lysyl oxidase-like 2 (LOXL2) is a molecule associated with radiotherapy resistance. In PCa cells, knockdown of LOXL2 inhibits the EMT process of CRPC DU145 and PC-3 cells, thereby restoring radiosensitivity [94]. ZEB1, a transcription inhibitor that promotes the EMT process and stemness characteristics, significantly contributes to regulating cell response to radiation. Studies have shown that ZEB1 is upregulated in radiation-resistant cells. ATM phosphorylates and stabilizes ZEB1 expression in response to DNA damage. ZEB1 can directly interact with USP7, enhancing its ability to deubiquitinate and stabilize CHK1 [95], leading to HR-dependent DNA repair and promoting radiation resistance. miR-875-5p targets the epidermal growth factor receptor (EGFR)/ZEB1 signaling pathway, promoting PCa cells' transition from the EMT process to the MET process, thereby restoring their radiosensitivity [96]. Bissalicylic acid inhibits the EMT process of PCa PC-3 cells. Further research has found that bissalicylic acid synergizes with radiotherapy to sensitize PCa by activating the AMPK pathway and inhibiting the acetyl-CoA carboxylase (ACC) and thioredoxin domain-containing protein 1 (TROC1) pathways, suggesting its potential as a promising cancer therapy in the future [97]. Relevant research is needed to confirm the therapeutic effect of salicylic acid combined with radiotherapy for PCa. Currently, radiotherapy has become one of the treatment methods for metastatic PCa, which broadens the implications of radiotherapy for this disease.
EMT is also associated with molecularly divergent subtypes and aberrant histologies. Variant histologies (VHs) have been recognized as drivers of biological heterogeneity and increased aggressiveness in current clinical practice. In non-muscle-invasive bladder cancer (NIMBC), variant histologies (nested, glandular, micropapillary, squamous, inverted, basaloid, microcystic, villous-like, and lymphoepithelioma-like carcinoma) have been identified as risk factors for patient disease-free survival (DFS) [98]. Plasmacytoid, small-cell, and sarcomatoid VHs are linked to worse disease-specific survival (DSS) in muscle-invasive bladder cancer (MIBC), while lymphoepithelioma-like VH is associated with an improved DSS [99]. An accurate pathological diagnosis of VHs can enable tailored counseling to identify patients who require more intensive management [100]. In addition, ductal adenocarcinoma (DAC) is the most common variant histological subtype of PCa and is characterized by an aggressive clinical course. Recent studies suggest that DAC requires external beam radiation therapy and particle-enhanced therapy, indicating DAC's resistance to radiation therapy [101,102]. Intraductal carcinoma of the prostate (IDC-P) is positively correlated with higher GS and is associated with early relapse and metastasis after radiation therapy, suggesting IDC-P's insensitivity to radiation therapy [103,104]. Sarcomatoid carcinoma is also rare and carries a poor prognosis, with limited clinical interventions and approximately 38% of patients experiencing distant metastasis [105]. It most frequently emerges after radiation for a high-grade acinar carcinoma [106]. Some sarcomatoid carcinomas lack classical epithelial features [107], which could be one of the reasons why these patients are resistant to chemotherapy and radiation therapy, leading to a poor prognosis. Moreover, pleomorphic giant-cell adenocarcinoma is a rare and aggressive subtype that often develops following prior treatment with androgen deprivation or radiation [108]. Therefore, these variant histologies are not only strongly associated with risk stratification and survival outcome in patients but also pose a significant challenge in understanding the relationship and mechanisms between ionizing radiation and specific pathological types in PCa.
Since the biological behavior of metastatic PCa is different from that of primary PCa, the cells of these metastatic foci are mostly formed through the EMT process from the primary foci. Research into radiotherapy for both the primary and metastatic foci of metastatic PCa is still in the exploratory stage, with a lack of focus on radiation resistance being a major issue. We anticipate that as research into radiation for PCa continues to advance, treatment for these metastases will become more individualized and accurate.
The Existence of Prostate Cancer Stem Cells (PCSCs) in Foci
Initially identified in leukemia [109], cancer stem cells (CSCs) are a functionally distinct population of tumor-resident cells characterized by their capacity for self-renewal, differentiation into many cell types, and potential for metastasis [110]. Despite spending much of their time in the G0 phase of the cell cycle, CSCs are resistant to radiation and conventional chemotherapy. Modern clinical research has shown that not all PCa patients exhibit the same biological behaviors despite sharing the same disease [111]. Moreover, the subclonal origin of CSCs and PCSCs may be directly associated with the presence of distinct subgroups exhibiting malignant biological characteristics.
In 2005, PCSCs were identified for the first time in PCa samples collected following RP surgery; these cells exhibited surface markers consistent with traditional CSCs (CD44+, CD133+, EpCAM, ALDH1, Snail, etc.) and proliferated and differentiated in a manner characteristic of stem cells. PCSCs were subsequently identified in metastatic PCa and in chemo- and radiotherapy-resistant tissue samples [112]. CSCs reside primarily within the tumor microenvironment. A further self-protective mechanism by which CSCs evade immune attack and damage is the composition of the PCSC niche, which contains numerous cytokines and cellular components, including non-PCSC tumor cells, immune cells, inflammatory cells, vascular endothelial cells, and fibroblasts, as well as many growth factors and chemokines (Figure 4) [113]. Because PCSCs remain cells in terms of the basic unit of life, the mechanisms by which they confer radiotherapy resistance overlap to varying degrees with the other pathophysiological processes discussed in this review. In radiation-resistant models of PCa cells (LNCaP, DU145, and PC-3), markers of EMT are significantly upregulated, and markers of CSCs (CD44, CD44v6, CD326, ALDH1, Nanog, and Snail) are also upregulated, suggesting a preliminary correlation between PCSCs and radiosensitivity. Further investigation revealed a tight relationship between activation of the PI3K/Akt/mTOR pathway and the stem phenotype of radiation-resistant PCa cells [114]. In ALDH+ PCSCs, an activated EMT process and reinforced DNA repair capacity lead to resistance of PCa cells to radiotherapy, which may be related to enhanced transcription of ALDH1A1 following activation of the Wnt/β-catenin pathway. Moreover, the enhanced transcriptional expression of ALDH1 can eliminate ROS produced by oxidative stress in PCSCs and prevent genomic DNA damage [115]. SOX2, a well-known Yamanaka factor, is one of the core transcription factors maintaining the pluripotent stemness of embryonic stem cells and CSCs and is also an important driver of tumor development [116]. In PCa DU145 cells, SOX2 expression improves antiapoptotic capacity by delaying caspase-3 cleavage, while knocking down SOX2 has a radiosensitizing effect [117]. Another key marker of PCSCs is CD44v6.
The expression of CD44v6 in PCa cells has been linked to increased cell proliferation, sphere formation, and resistance to various forms of chemotherapy, such as docetaxel, paclitaxel, doxorubicin, and methotrexate, and even to radiotherapy, owing to its involvement in the EMT process as well as activation of the PI3K/Akt/mTOR and Wnt/β-catenin pathways [118]. Structural maintenance of chromosomes 1A (SMC1A), a substrate of ATM in the DNA damage response, is upregulated in PCa compared with normal tissue. After knockdown of SMC1A, the proliferation and sphere-forming ability of PCa DU145 and PC-3 cells decreased, and the cells became more sensitive to X-ray treatment, which was related to reversal of the EMT phenotype and downregulation of the stem cell markers (CD44, LEF-1, and POU5F-1) of PCSCs. SMC1A has been shown to enhance the efficiency with which HR and NHEJ repair DNA damage. In addition, similar to ALDH1+ PCSCs, SMC1A can improve the antioxidant capacity of cells through GSH and reduce the production of ROS [119]. The immune checkpoint B7-H3, a surface molecule on PCSCs, is considered a specific marker of PCSCs because its expression is significantly upregulated in the late stage of radiotherapy. Consequently, chimeric antigen receptor T-cell (CAR-T) therapies directed against this molecule can specifically target PCSCs and make PCa more responsive to radiation [120], indicating promising applications of immunotherapy in PCa. In essence, the radiotherapy tolerance of PCSCs is regulated by a wide range of molecules and processes, including surface markers, DNA damage repair, regulation of cellular redox homeostasis, and signaling pathway networks. Identifying specific targets on PCSCs through further investigation could lead to treatments that selectively eliminate these cells.
Hypoxia in Tumor Core
When researchers first began studying the factors that influence the outcome of radiation therapy for tumors, they concentrated on oxygen because of its pronounced effect on treatment response. With advances in technology and a better understanding of oxygen biology, it became clear that the oxygen partial pressure and oxygenation status of tumor tissue are important determinants of treatment effectiveness. The oxygen concentration threshold that governs cellular radiosensitivity is around 2%; below this level the sensitizing effect of oxygen on X-rays changes steeply with concentration, whereas above it the effect is largely saturated. This observation led to the oxygen enhancement ratio (OER), a metric comparing the relative radiation doses required to produce the same biological effect in an aerobic and an oxygen-free environment. For radiation with low linear energy transfer (LET), such as X- and γ-rays, the OER is about 2.5-3.5, while the OER of high-LET radiation, such as proton and heavy-ion beams, is about 1.0 [121,122], reflecting that high-LET radiation is less dependent on oxygen. Because X-rays dominate current routine clinical radiotherapy, it is essential to understand clearly the connection between tumor radiotherapy resistance and oxygenation status, and its underlying mechanism.
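To make the OER concrete, the short Python sketch below computes the ratio of the dose required under hypoxic conditions to the dose required under well-oxygenated conditions for the same biological endpoint; the dose values are purely illustrative and are not taken from the cited studies.

```python
# Illustrative sketch of the oxygen enhancement ratio (OER).
# The dose values below are assumptions for demonstration, not measured data.

def oxygen_enhancement_ratio(dose_hypoxic_gy: float, dose_oxic_gy: float) -> float:
    """OER = dose needed under hypoxia / dose needed under normoxia
    to produce the same biological effect (e.g., equal cell kill)."""
    return dose_hypoxic_gy / dose_oxic_gy

# Low-LET X-rays: hypoxic cells may need roughly 3x the dose of oxygenated cells.
print(oxygen_enhancement_ratio(6.0, 2.0))  # 3.0, within the typical 2.5-3.5 range

# High-LET radiation (e.g., heavy ions): the required doses are nearly equal,
# so the OER approaches 1.0 and the effect is largely oxygen-independent.
print(oxygen_enhancement_ratio(2.1, 2.0))  # ~1.05
```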
According to the "oxygen fixation hypothesis," the free-radical damage to biological macromolecules produced by the indirect ionization of X-rays is chemically "fixed" (made permanent) by molecular oxygen [123]. Owing to the anatomical relationship between the tumor center and the surrounding blood vessels, as well as vascular abnormalities, a hypoxic zone, a relatively hypoxic zone, and a normally oxygenated zone form progressively from the tumor center outward as the tumor develops and progresses (Figure 5). From the perspective of energy metabolism, hypoxia arises from an imbalance between oxygen supply and consumption. The hypoxic zone is currently considered one of the primary causes of tumor radiochemotherapy resistance and a major hallmark of tumors, and it plays a critical role in different stages of tumor progression, including tumor cell proliferation, survival, angiogenesis, migration, cancer metabolic reprogramming, and stem cell characteristics [124,125]. Targeting the hypoxic region at the center of the tumor is therefore regarded as a therapeutic approach to alleviate treatment resistance and improve efficacy. PCa shares this hallmark of hypoxia with other malignancies, and nitroimidazole compounds are among the earliest hypoxic sensitizers used clinically. Based on nitroimidazole compounds, a specific hypoxia probe named 18F-PEG3-ADIBOT-2NI-GUL targets the prostate-specific membrane antigen (PSMA) and can accurately display hypoxic regions within prostate cancer [126]. Carnitine palmitoyltransferase 1A (CPT1A) is one of the biomarkers of fatty-acid β-oxidation; in conjunction with the nitroimidazole compound pamomycin, it can be used for fluorescence imaging of tumor xenograft models in nude mice, providing new technologies and methods to visualize hypoxic regions in prostate cancer [127]. Several other hypoxia probes are currently available, including 2-(2-nitro-1H-imidazol-1-yl)-N-(2,2,3,3,3-pentafluoropropyl)acetamide (18F-EF5) [128], 18F-misonidazole (MISO) [129], and the gadolinium tetraazacyclododecanetetraacetic acid monoamide conjugate of 2-nitroimidazole (GdDO3NI) [130]. Compared with the standard practice of inserting electrodes into tumor tissue to measure oxygen partial pressure and hypoxia, these molecular probes are non-invasive, fast, and efficient. They also provide a valuable reference for delineating the hypoxic biological target volume in prostate cancer, making them a development trend in precision medicine. Additionally, researchers have modified the structure of nitroimidazole compounds and designed a novel ZIF-82-PVP nanoparticle material. By using radiotherapy X-rays to control the release of RNS, apoptosis of hypoxic cells is increased while the protective autophagy of prostate cancer cells is significantly inhibited and nitrosative stress in PCa cells is boosted, thereby achieving targeted treatment of the hypoxic region of prostate cancer [131]. Under hypoxic conditions, liposomal doxorubicin can reverse hypoxia-induced vascular changes, greatly increasing the response of PCa to radiotherapy [117].
Tumor tissue hypoxia is a complex and spatiotemporally dynamic pathophysiological process. Analyzing the mechanisms that regulate hypoxia in PCa is currently a research hotspot, as it can contribute to the discovery of corresponding targets and the development of sensitizers. Hypoxia-inducible factors (HIFs) are transcription factors that help cancer cells adapt to low-oxygen conditions by activating the transcription of several genes through binding to hypoxia response elements (HREs) in their promoter regions; this process is crucial for maintaining the body's oxygen homeostasis. Overexpression of HIF has been linked to immune escape, drug resistance, tumor neovascularization, metastasis, and tumor invasion and migration [132,133]. In early studies, small interfering RNA (siRNA) technology was used to knock down HIF-1 in PC-3 cells, revealing a radiosensitizing effect [134]. Hormone-sensitive PCa LNCaP cells can also be treated with a HIF-1 inhibitor [135]. As understanding of the underlying mechanisms has grown, studies have demonstrated that HIF-1 expression in PCa cells can facilitate DNA repair and induce radiotherapy resistance by activating gene expression along the NHEJ-related pathway and promoting the nuclear translocation of β-catenin [136]. HIF-1 can also co-transcribe with Nrf2 to control the expression of dimethylarginine dimethylaminohydrolase 1 (DDAH1) and thus enhance the development of PCa. Many small chemical inhibitors of HIFs, such as salicylic acid [137], manganese dioxide particles [138], statins [139], and metformin [140], have been studied for their potential to sensitize cells to radiation by alleviating the hypoxia of PCa tumors. HIFs are molecular targets for the radiosensitization of PCa; under normoxic conditions they are conjugated by E3 ubiquitin ligases (such as VHL) and then degraded through the 26S ubiquitin-proteasome pathway.
Proteolysis-targeting chimeras (PROTACs) [141] and molecular glue [142] technologies provide new chemical methods and schemes for the intracellular degradation of HIFs, worthy of exploration in the radiotherapy of PCa.
The oxygenation state of tumor tissue differs from that of normal tissue, providing a theoretical basis for the development of new clinical radiotherapy technologies. Ultra-high-dose-rate (FLASH) radiotherapy is being developed with the goal of sparing normal tissue from high-dose irradiation while leaving the radiosensitivity of tumor tissue unaffected. FLASH treatment can deliver a dose of more than 8 Gy in a very short time (often less than 1 s); the effect was first observed in 1959 [128]. Two hypotheses currently address the biological mechanism of FLASH radiotherapy. The first is that, under ultra-high-dose-rate irradiation, normal tissue transiently becomes hypoxic because of rapid oxygen consumption and therefore becomes resistant to radiation, whereas tumor tissue, which is already relatively hypoxic, is little affected by the change in dose rate; this creates a "response error" (a differential response) between normal tissues and tumor tissues to IR. The second is that the extremely short treatment time of FLASH spares circulating immune cells, thereby supporting systemic anti-tumor immunity. Preliminary explorations show that FLASH radiotherapy is closely related to tumor tissue hypoxia; however, the specific biological effect of FLASH remains to be determined, its effectiveness still needs to be clarified at the physical level, and mechanistic exploration should proceed on this basis to serve clinical practice.
Conclusions
Based on the research progress outlined above, the radiotherapy resistance of PCa has long been overlooked in clinical practice. In particular, resistance to radiotherapy in highly malignant PCa deserves more attention as investigations continue to advance. Radiotherapy resistance in PCa is a complex, long-standing scientific problem that involves numerous biological and pathophysiological processes, such as defects and disorders of the DNA damage repair system, cell cycle dysregulation, imbalance of redox homeostasis, EMT, PCSCs, and hypoxia in the tumor core. Recently, immune factors have also been found to play a specific role in the radiotherapy resistance of PCa. To restore the radiosensitivity of PCa and overcome radiotherapy resistance, the overall and molecular mechanisms of radiotherapy resistance in PCa need to be carefully analyzed, and corresponding targets for the development of small-molecule inhibitors and immunotherapeutic drugs should be explored in depth, in the hope of gaining fresh insights into overcoming clinical radiotherapy resistance (Supplementary Figure S1).
Supplementary Materials:
The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/biomedicines11061628/s1, Figure S1. Timeline of important radiotherapy techniques and biological processes studied in radiotherapy resistance of prostate cancer. Figure S2. Targeted therapeutic agents and bio-activated compounds: unveiling their specific role links in the DNA damage and repair pathways. Figure S3. Targeted therapeutic agents and bio-activated compounds: unveiling their specific role links in the cell cycle.
The role of miR1 and miR133a in new-onset atrial fibrillation after acute myocardial infarction
Background The development of new-onset atrial fibrillation (NOAF) after acute myocardial infarction (AMI) is a clinical complication that requires a better understanding of the causative risk factors. This study aimed to explore the risk factors and the expression and function of miR-1 and miR-133a in new atrial fibrillation after AMI. Methods We collected clinical data from 172 patients with AMI treated with emergency percutaneous coronary intervention (PCI) between October 2021 and October 2022. Independent predictors of NOAF were determined using binary logistic univariate and multivariate regression analyses. The predictive value of NOAF was assessed using the area under the receiver operating characteristic (ROC) curve for related risk factors. In total, 172 venous blood samples were collected preoperatively and on the first day postoperatively; the expression levels of miR-1 and miR-133a were determined using the polymerase chain reaction. The clinical significance of miR-1 and miR-133a expression levels was determined by Spearman correlation analysis. Results The Glasgow prognostic score, left atrial diameter, and infarct area were significant independent risk factors for NOAF after AMI. We observed that the expression levels of miR-1 and miR-133a were significantly higher in the NOAF group than in the non-NOAF group. On postoperative day 1, strong associations were found between miR-133a expression levels and the neutrophil ratio and between miR-1 expression levels and an increased left atrial diameter. Conclusions Our findings indicate that the mechanism of NOAF after AMI may include an inflammatory response associated with an increased miR-1-related mechanism. Conversely, miR-133a could play a protective role in this clinical condition.
The role of miR1 and miR133a in new-onset atrial fibrillation after acute myocardial infarction Qingyi Zeng 1,2, Wei Li 1,3*, Zhenghua Luo 4, Haiyan Zhou 1,3, Zhonggang Duan 1,3 and Xin Lin Xiong 1,3 Background Acute myocardial infarction (AMI) is acute myocardial necrosis caused by ischemia resulting from coronary artery stenosis or occlusion. New-onset atrial fibrillation (NOAF) is a critical complication with an incidence rate of 6-22% [1]. Patients with NOAF have a 40% higher mortality rate than those with normal sinus rhythm [2]. Long-term follow-up results of relevant studies have shown a lower incidence of ischemic cerebrovascular disease or transient ischemic attacks in patients with non-valvular atrial fibrillation (NVAF) in the rhythm control group [3]. The specific mechanism of atrial fibrillation has not been defined but involves electrical and structural remodelling of the atrium. Structural remodelling includes increased atrial fibrosis, which increases ectopic electrical activity and conduction anisotropy within the atrial tissue [4,5]. In patients with heart failure with mildly reduced ejection fraction (HFmrEF), a left atrial volume index (LAVI) > 30.5 can predict the presence of atrial fibrillation with a sensitivity of 64% and specificity of 66% [6]. Electrical remodelling, by contrast, mainly involves changes in the expression of cardiac ion channels, leading to shortening of the atrial action potential duration and effective refractory period and to changes in atrial calcium homeostasis [7-10]. Previous studies have indicated that fragmented QRS (fQRS) is an independent determinant of atrial fibrillation in patients with ST-segment elevation myocardial infarction (STEMI) [11]. The P-wave peak times in leads V1 and D2, obtained from surface electrocardiography (ECG), are highly predictive of whether patients will develop atrial high-rate episodes (AHRE) [12].
MiRNAs are small noncoding RNAs that regulate gene and protein expression by causing mRNA degradation or translational repression [13].Among these miRNAs, miR-1 and miR-133a are specifically expressed in adult heart and skeletal muscle tissues, and their expression patterns vary depending on the pathological condition [14].This study aimed to investigate the risk factors for NOAF after AMI, assess the expression levels of miR-1 and miR-133a before and after AMI, and examine their potential clinical significance as risk factors for NOAF.
Study participants and specimen collection
Between October 2021 and October 2022, 172 patients with AMI who underwent emergency percutaneous coronary intervention (PCI) at the Affiliated Hospital of Guizhou Medical University were consecutively enrolled in this study. Clinical data during hospitalisation were collected from the patients' medical records. These data included various categorical and continuous variables such as age, gender, and biochemical indicators. In addition, the Glasgow Prognostic Score (GPS) was calculated to assess the inflammatory status and predict the prognosis of the patients. The GPS is determined by measuring two acute-phase proteins in the blood: C-reactive protein (CRP) and albumin.
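The paper states only that the GPS is derived from CRP and albumin; the sketch below (in Python) shows how such a score is conventionally computed, using the cut-offs of the standard GPS definition (CRP > 10 mg/L, albumin < 35 g/L), which are assumptions here and not values reported by this study.

```python
# Hedged sketch of Glasgow Prognostic Score (GPS) calculation.
# The cut-offs below follow the conventional GPS definition and are assumptions,
# not thresholds taken from this study. Input values are illustrative only.

def glasgow_prognostic_score(crp_mg_per_l: float, albumin_g_per_l: float) -> int:
    """Return the GPS: one point for elevated CRP, one for hypoalbuminaemia."""
    score = 0
    if crp_mg_per_l > 10.0:      # systemic inflammation
        score += 1
    if albumin_g_per_l < 35.0:   # hypoalbuminaemia
        score += 1
    return score

# Example patients (illustrative values only):
print(glasgow_prognostic_score(crp_mg_per_l=4.0,  albumin_g_per_l=42.0))  # 0
print(glasgow_prognostic_score(crp_mg_per_l=25.0, albumin_g_per_l=38.0))  # 1
print(glasgow_prognostic_score(crp_mg_per_l=30.0, albumin_g_per_l=30.0))  # 2
```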
Furthermore, the results of coronary angiography were also recorded for analysis. In this study, 172 blood samples were collected from the participants before undergoing PCI and on the first day after the intervention. These samples were used to analyse plasma miR-1 and miR-133a expression levels. The Ethics Committee of the Affiliated Hospital of Guizhou Medical University approved the study protocol, and the study was conducted following the principles outlined in the Declaration of Helsinki. Informed consent was obtained from all patients before collecting the samples.
Inclusion and exclusion criteria
The diagnostic criteria for AMI followed the Fourth Universal Definition of Myocardial Infarction, published in 2018 [15], in which acute myocardial injury is present together with at least one of the following: (1) symptoms of myocardial hypoperfusion or chest pain lasting > 30 min; (2) newly developed ischemic electrocardiogram (ECG) changes; (3) formation of a new pathological Q wave on ECG; (4) imaging evidence of newly non-viable myocardium or ventricular wall motion abnormalities; and (5) intracoronary thrombus identified by angiography or at autopsy. The ECG diagnostic criteria for atrial fibrillation were: disappearance of the P wave, replaced by fibrillation waves (f waves) of varying size and shape; an f-wave frequency of 350-600/min; and an irregular R-R interval. NOAF was defined as a first episode of atrial fibrillation on admission or during hospitalisation in a patient with no history of paroxysmal or persistent atrial fibrillation or atrial flutter. The duration of atrial fibrillation was assessed under either of the following conditions: (1) the duration could be determined from a complete 12-lead ECG or recorded by a Holter monitor; or (2) atrial fibrillation was documented on ECG monitoring for at least 30 s [16,17]. The exclusion criteria were: (1) a history of heart disease; (2) known malignant tumours; (3) thyroid disease or severe hepatic or renal insufficiency; (4) recent surgery or trauma; (5) combined acute or chronic infections or recent cerebrovascular disease; (6) autoimmune diseases or a history of skeletal muscle trauma or other such diseases; (7) a history of atrial flutter, myocardial infarction, myocarditis, or cardiomyopathy; (8) a history of radiofrequency ablation for arrhythmia, coronary artery bypass grafting, or other cardiac surgery; and (9) severe valvular lesions or congenital heart disease.
Sample collection
Venous blood samples (5 mL) were collected from patients with AMI before PCI and on the first day after PCI using EDTA-K2 anticoagulant tubes. All blood samples were centrifuged at 4 °C (13,400×g, 10 min), and the plasma was transferred to RNase/DNase-free microcentrifuge tubes and stored at -80 °C.
Quantitative real-time polymerase chain reaction (qRT-PCR) of miR1 and miR133a
Total RNA was isolated using a total RNA extraction kit (Tiangen Biochemical Technology Company, Beijing, China), and miR-1 and miR-133a expression levels were quantified using the Bulge-Loop™ miRNA qRT-PCR starter kit (Ruibo Biotechnology, Guangzhou, China). The reverse transcription program was 42 °C for 60 min, followed by inactivation of the reverse transcriptase at 70 °C for 10 min; after terminating the reaction, the resulting product was cooled on ice and stored at -80 °C. The reverse transcription and PCR reaction systems were configured as previously described, and PCR reactions were performed using a fluorescent PCR instrument (Bio-Rad Laboratories, Hercules, CA, USA). The PCR conditions were: pre-denaturation at 95 °C for 10 min, followed by 40 cycles of denaturation at 95 °C for 2 s, annealing at 60 °C for 30 s, and extension at 70 °C for 10 s. Relative expression was calculated with cel-miR-39 as the control gene: ΔCT was the difference between the CT of the target gene and that of cel-miR-39, and ΔΔCT = (CT_target gene − CT_cel-miR-39)_experimental group − (CT_target gene − CT_cel-miR-39)_control group. Therefore, 2^(−ΔΔCT) represented the relative miRNA expression of the experimental group compared with the control group.
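For readers unfamiliar with the 2^(−ΔΔCT) method, the short Python sketch below reproduces the calculation described above; the CT values are invented for illustration and are not data from this study.

```python
# Minimal sketch of the 2^(-ddCT) relative-quantification method used above.
# CT values below are illustrative, not measurements from the study.

def relative_expression(ct_target_exp: float, ct_ref_exp: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Return the fold change of the target miRNA (experimental vs. control),
    normalised to the spiked-in reference cel-miR-39."""
    d_ct_exp = ct_target_exp - ct_ref_exp        # dCT, experimental group
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl     # dCT, control group
    dd_ct = d_ct_exp - d_ct_ctrl                 # ddCT
    return 2 ** (-dd_ct)

# Example: the target amplifies 2 cycles earlier (relative to cel-miR-39) in the
# experimental group than in the control group, i.e. ~4-fold higher expression.
print(relative_expression(ct_target_exp=24.0, ct_ref_exp=20.0,
                          ct_target_ctrl=26.0, ct_ref_ctrl=20.0))  # 4.0
```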
Statistical analysis
All data analysed in this study were processed using SPSS software (version 22.0; IBM Corporation, Armonk, NY, USA). The distribution of continuous variables was evaluated using the Kolmogorov-Smirnov normality test. Normally distributed data are presented as mean ± standard deviation, and an independent-sample t-test was used to compare the groups. Non-normally distributed data are presented as median (interquartile range), and nonparametric tests were used to analyse these data. Categorical variables were compared with the chi-squared test. Logistic regression analysis was employed to determine the predictive power of clinical data for NOAF after AMI. The diagnostic efficacy of clinical indicators of NOAF after AMI was analysed using the receiver operating characteristic (ROC) curve and the area under the curve (AUC). Correlations among the variables in patients with NOAF were analysed using Spearman's rank correlation. All statistical tests were two-tailed, and statistical significance was set at a P-value < 0.05.
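As an illustration of this workflow, the sketch below uses SciPy and scikit-learn (rather than SPSS, which the authors used) to run a Mann-Whitney comparison between groups and to build a ROC curve for a single predictor; the data are simulated and the cut-off is borrowed from the paper purely for demonstration.

```python
# Illustrative re-creation of the statistical workflow in Python (the study used SPSS).
# All numbers are simulated; they are not the study's data.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Simulated left atrial diameter (mm) for the non-NOAF and NOAF groups.
non_noaf = rng.normal(33, 4, size=97)
noaf = rng.normal(37, 4, size=32)

# Group comparison: Mann-Whitney U test for a non-normally distributed variable.
u_stat, p_value = stats.mannwhitneyu(noaf, non_noaf, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_value:.4f}")

# ROC analysis: how well does the predictor separate NOAF from non-NOAF?
y_true = np.concatenate([np.zeros(len(non_noaf)), np.ones(len(noaf))])
y_score = np.concatenate([non_noaf, noaf])
auc = roc_auc_score(y_true, y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Sensitivity and specificity at a chosen cut-off (35.5 mm, as in the paper).
cutoff = 35.5
sensitivity = np.mean(noaf > cutoff)
specificity = np.mean(non_noaf <= cutoff)
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```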
Basic data of the selected population
A total of 172 patients were included in this study, with 97 and 32 in the non-NOAF and NOAF groups, respectively. There were no significant differences between the groups regarding age, sex, hypertension, diabetes, total white blood cell count on admission, neutrophil ratio, glomerular filtration rate (GFR), or levels of uric acid, brain natriuretic peptide (BNP), creatinine, Mg2+, and K+ (P > 0.05). The proportion of patients with a Glasgow Prognostic Score of 0 points was significantly higher than that of patients with a score of 1 point (P < 0.01), and the incidence of an elevated Glasgow Prognostic Score was higher in the NOAF group than in the non-NOAF group (P < 0.05). The erythrocyte sedimentation rate was higher in the NOAF group than in the non-NOAF group (P < 0.05). Furthermore, the increase in left atrial diameter was significantly greater in the NOAF group than in the non-NOAF group (P < 0.01; Table 1).
Coronary artery lesion data
The enrolled patients diagnosed with myocardial infarction based on ECG examination were divided into two groups: the anterior wall group (including anterior, extensive anterior, lateral, lower, and posterior walls) and the non-ST elevation myocardial infarction (NSTEMI) group. The non-NOAF group had more NSTEMI infarctions than the NOAF group. Further analysis showed no significant difference between the number of patients with anterior and lower wall infarctions and those with NSTEMI (P > 0.05). Among patients with NOAF and non-NOAF, the numbers of single-, double-, and triple-vessel coronary lesions were also not significantly different (P > 0.05; Table 2).
ROC curve
The Glasgow Prognostic Score demonstrated an AUC of 0.62 for predicting NOAF. With a cut-off value of 0.5 for the Glasgow Prognostic Score, the sensitivity and specificity for predicting NOAF were 50% and 74.3%, respectively. For the left atrial diameter, the AUC was 0.66. With a cut-off value of 35.5 mm for the internal diameter of the left atrium, the sensitivity and specificity for predicting NOAF were 62.5% and 70%, respectively (Fig. 1; Tables 4 and 5).
Quantitative plasma miR1 and miR133a expression
The plasma expression levels of miR-1 and miR-133a were determined using qRT-PCR. The normality of the miR-1 and miR-133a values was evaluated using the Shapiro-Wilk test. The nonparametric Mann-Whitney test was then used to compare the expression levels between the NOAF and non-NOAF groups. The Mann-Whitney test indicated significantly higher expression levels of miR-1 and miR-133a in the NOAF group than in the non-NOAF group (P < 0.01; Table 6).
Correlation between the expression of miR1 and miR133a preoperatively and 1-day postoperatively and clinical indicators
The relationship between the expression of miR-1 and miR-133a and several clinical indicators was analysed using the Spearman test. The left atrial diameter was correlated with the preoperative expression of miR-1 (r = 0.163), and the miR-133a expression on the first day after surgery was associated with the percentage of neutrophils (r = 0.205); these associations were significant overall (r = 0.264; Table 7).
Inflammatory factors and NOAF
Inflammation is associated with structural and electrical remodelling in the atrium and with the occurrence and persistence of atrial fibrillation (AF) [18,19]. Infections, including their type, duration, and severity, may impact atrial remodelling [20]. Multicentre studies [21,22] have found an increased proportion of AF in patients with sepsis, indicating that circulating pro-inflammatory and inflammatory mediators are associated with AF development [23]. After acute myocardial infarction (AMI), the inflammatory response plays a vital role in myocardial remodelling [23,24]. The Glasgow Prognostic Score (GPS) assesses serum albumin and C-reactive protein levels [25]. A drop in serum albumin level indicates a severe inflammatory response in the patient [26]. Therefore, the GPS can accurately reflect the degree of the inflammatory response. In our study, after excluding severe infectious diseases, we found that new-onset AF (NOAF) was related to the GPS and to an increased erythrocyte sedimentation rate. This suggests that NOAF after AMI is associated with inflammatory mechanisms.
Infarct site and NOAF
Atrial ischemia or infarction can lead to electrical and structural remodelling of the atrium [27,28]. Relevant studies suggest that ischemia can disrupt diastolic cytoplasmic Ca2+ handling. Notably, several factors, such as increased intracellular acidification and dephosphorylation of junctional proteins, can result in local conduction block and facilitate the occurrence of atrial fibrillation [29,30]. Furthermore, other studies indicate that proximal occlusion of the left circumflex artery can reduce the branch circulation supplying the atrioventricular node, contributing to the development of atrial fibrillation [31,32]. In addition, one study found that coronary lesions originating from either the left or right coronary artery system could promote atrial fibrillation through atrial ischemia [33]. In our study, patients with inferior and anterior wall infarctions were more likely to develop atrial fibrillation than those with non-ST-segment elevation myocardial infarction (NSTEMI). However, this association was not observed for the number of diseased vessels, suggesting that the severity of ischemia is linked to new-onset atrial fibrillation (NOAF).
miR1, miR133a, and NOAF
Injured cardiomyocytes after AMI promote inflammatory responses by releasing miR-1 and increasing the number of monocytes in the blood [34]. An increase in miR-1 can regulate atrial specificity, improve cardiac conduction, repolarisation, and heart rate, and reduce atrial fibrillation through the two-pore-domain potassium channel TASK-1. TASK-1 is a weakly inwardly rectifying, acid-sensitive K+ channel encoded by KCNK3 [35-37].
Yang et al. [38] found that miR-1 overexpression represses KCNJ2, which encodes the K+ channel subunit Kir2.1, and GJA1, which encodes connexin-43; this slows conduction and depolarisation of the cytoplasmic membrane and inhibits atrial fibrillation. In patients with persistent atrial fibrillation, miR-1 expression decreases, leading to increased inward-rectifier current activity [39]. Yuan et al. [40] found that miR-1 levels were significantly lower in geriatric atrial fibrillation patients than in non-atrial fibrillation patients. However, Terentyev et al. reported that increased miR-1 levels in cardiomyocytes selectively decrease expression of the B56α regulatory subunit of protein phosphatase 2A. This reduction decreases PP2A-mediated dephosphorylation of the L-type calcium channel (LTCC) and ryanodine receptor 2 (RyR2), resulting in increased calcium/calmodulin-dependent kinase II phosphorylation of LTCC and RyR2. Consequently, an inward sarcoplasmic reticulum Ca2+ current is induced, promoting arrhythmia development [41]. Wiedmann et al. [42] found that miR-1 was associated with the atrial collagen alpha-2(I) chain, and that pro-apoptotic miR-1 was increased in the right atrial tissue of patients with NOAF after coronary artery bypass grafting compared with patients without atrial fibrillation. In this study, we observed that miR-1 was elevated in NOAF after AMI and was associated with an increased atrial diameter, suggesting that elevated miR-1 levels were associated with myocardial structural remodelling leading to atrial fibrillation. In addition, miR-133a levels were significantly higher before surgery and on the first day after surgery in the NOAF group than in the non-NOAF group. Zhu et al. [43] found that miR-133a can activate macrophage migration inhibitory factor, resulting in increased Akt phosphorylation and Bcl-2 expression and decreased caspase-3 expression, fibrosis, and apoptosis. Similarly, miR-133a can also promote the expression of vascular endothelial growth factor protein in human umbilical cord venous blood to reduce the occurrence of atrial fibrillation. Related studies suggested that miR-133a may be involved in downregulating CD47 in human HeLa cancer cells [44]. CD47 can inhibit the migration and adhesion of neutrophils [45].
Through a 1-year follow-up, research has shown that activating the miR-33/SIRT1 pathway increases inflammation and coagulation processes, thrombus burden, and the formation of distant embolisms [46]. In our analysis, elevated miR-133a levels were positively associated with the neutrophil percentage, inhibiting the inflammatory response. miR-133a reduced the occurrence of NOAF by stabilising myocardial structural remodelling and inhibiting the inflammatory response. Overall, the Glasgow Prognostic Score and left atrial diameter were independent risk factors for NOAF after AMI. The mechanism associated with this condition may involve elevated miR-1 levels and an enlarged left atrial diameter. Conversely, miR-133a is protective in inhibiting the inflammatory response in cardiomyocytes. Administering non-vitamin K antagonist oral anticoagulants (NOACs) for anticoagulation therapy in patients with atrial fibrillation is recommended. In the AFTER-2 study conducted in Turkey, patients with a high time in therapeutic range (TTR) on warfarin treatment showed no significant difference in the incidence of ischemic cerebrovascular disease/transient ischemic attacks (CVD/TIA), intracranial haemorrhage, and mortality as primary outcomes [47,48]. Increasing the frequency of follow-up visits and minimising adverse events is recommended for patients with new-onset atrial fibrillation after acute myocardial infarction.
This was a single-centre retrospective study with a small sample size. Multicentre studies with large sample sizes are required to validate our findings. In addition, because of the short duration or delayed detection of atrial fibrillation, the influence of previously asymptomatic atrial fibrillation and of potential immune-related diseases on miRNA expression could not be excluded entirely. Finally, the lack of further analysis of the timing of NOAF onset in relation to miRNA changes in patients with AMI may have been a limitation of this study.
Conclusions
In this study, we identified a mechanism associated with NOAF after AMI that involves an inflammatory response and changes in miRNA expression. Specifically, we observed a detrimental role of miR-1 associated with an increased left atrial diameter. Conversely, miR-133a may play a protective role in NOAF by increasing the neutrophil ratio, thereby reducing the inflammatory response. These findings provide novel insights into the mechanism of NOAF after AMI, which could contribute to developing innovative therapeutic strategies for this condition.
Fig. 1
Fig. 1 The ROC curves of related risk factors for predicting new-onset atrial fibrillation after acute myocardial infarction. ROC, receiver operating characteristic; ESR, erythrocyte sedimentation rate
Table 2
Number of coronary lesions, main culprit vessels, and infarction sites a lower wall versus anterior wall; b anterior wall versus NSTEMI; c lower wall versus NSTEMI; *P < 0.05; Data are presented as number (%) NOAF, new-onset atrial fibrillation; NSTEMI, non-ST elevation myocardial infarction
Table 4
AUC and 95% CI for NOAF occurrence predicted by independent risk factors. Columns: Variable, AUC, Standard error, P-value, 95% CI lower limit
AUC, area under the curve; CI, confidence interval; NOAF, new-onset atrial fibrillation
Table 5
Specificity and sensitivity of independent risk factors for predicting NOAF
Table 6
Expression of miR-1 and miR-133a preoperatively and 1-day postoperatively
Table 7
Correlations between the expression of miR1 and miR133a preoperatively and 1-day postoperatively and clinical indicators
Cross-comparison of cardiac output trending accuracy of LiDCO, PiCCO, FloTrac and pulmonary artery catheters
Introduction Although less invasive than pulmonary artery catheters (PACs), arterial pulse pressure analysis techniques for estimating cardiac output (CO) have not been simultaneously compared to PAC bolus thermodilution CO (COtd) or continuous CO (CCO) devices. Methods We compared the accuracy, bias and trending ability of LiDCO™, PiCCO™ and FloTrac™ with PACs (COtd, CCO) to simultaneously track CO in a prospective observational study in 17 postoperative cardiac surgery patients for the first 4 hours following intensive care unit admission. Fifty-five paired simultaneous quadruple CO measurements were made before and after therapeutic interventions (volume, vasopressor/dilator, and inotrope). Results Mean CO values for PAC, LiDCO, PiCCO and FloTrac were similar (5.6 ± 1.5, 5.4 ± 1.6, 5.4 ± 1.5 and 6.1 ± 1.9 L/min, respectively). The mean CO bias by each paired method was -0.18 (PAC-LiDCO), 0.24 (PAC-PiCCO), -0.43 (PAC-FloTrac), 0.06 (LiDCO-PiCCO), -0.63 (LiDCO-FloTrac) and -0.67 L/min (PiCCO-FloTrac), with limits of agreement (1.96 standard deviation, 95% confidence interval) of ± 1.56, ± 2.22, ± 3.37, ± 2.03, ± 2.97 and ± 3.44 L/min, respectively. The instantaneous directional changes between any paired CO measurements displayed 74% (PAC-LiDCO), 72% (PAC-PiCCO), 59% (PAC-FloTrac), 70% (LiDCO-PiCCO), 71% (LiDCO-FloTrac) and 63% (PiCCO-FloTrac) concordance, but poor correlation (r2 = 0.36, 0.11, 0.08, 0.20, 0.23 and 0.11, respectively). For mean CO < 5 L/min measured by each paired devices, the bias decreased slightly. Conclusions Although PAC (COTD/CCO), FloTrac, LiDCO and PiCCO display similar mean CO values, they often trend differently in response to therapy and show different interdevice agreement. In the clinically relevant low CO range (< 5 L/min), agreement improved slightly. Thus, utility and validation studies using only one CO device may potentially not be extrapolated to equivalency of using another similar device.
Introduction
Although the pulmonary arterial catheter (PAC) measures cardiac output (CO) easily at the bedside in critically ill patients [1-3], the recent trend in intensive care unit (ICU) monitoring is toward minimally invasive methods [4-8]. Arterial pulse contour and pulse power analyses have emerged as less invasive alternatives to PAC-derived CO measures [9,10]. The accuracy of these devices against PAC-derived CO measures has not been systematically compared in response to therapies other than volume resuscitation [11,12]. These devices use different calibration schema and model the transfer of arterial pulse pressure to stroke volume differently. Thus, their cross-correlations may not be assumed to be similar. The LiDCO Plus™ (LiDCO Ltd, London, UK) uses a transpulmonary lithium dilution estimate of CO for calibration, whereas the PiCCO Plus™ (Pulsion Ltd, Munich, Germany) uses a transpulmonary thermodilution approach to compensate for interindividual differences in arterial compliance [13-15]. The FloTrac™ calculates CO from the pulse contour using a proprietary algorithm and patient-specific demographic data [16], with, however, inconsistent reports of accuracy [17-20].
Although all devices have been compared individually to PAC-derived estimates of CO, none have been compared to each other [21]. Oxygen delivery (DO2)-targeted resuscitation algorithms may improve outcomes in selected patient groups [22]. Thus, knowing the degree to which different systems co-vary is important if one is to use these outcome studies in a general fashion to define the utility of all minimally invasive monitoring systems. Accordingly, in this study, we cross-compared the CO values and their changes in a critically ill patient cohort in whom active changes in blood volume, vasomotor tone and contractility were induced by specific therapies. We compared three pulse contour devices (LiDCO Plus, PiCCO Plus and FloTrac [Edwards Lifesciences, Irvine, CA, USA]) and two PAC thermodilution techniques, CO by thermodilution (COtd) and continuous cardiac output (CCO), in postoperative cardiac surgery patients during the first 4 postoperative ICU hours, when most of the aggressive treatments occurred. To minimize initial CO differences, we calibrated the PiCCO and LiDCO devices using the initial PAC CO values, whereas the FloTrac did not allow external calibration.
Materials and methods
The study was approved by our Institutional Review Board, and all patients provided signed informed consent. Twenty postcardiac surgery patients (age range, 54 to 82 yr) were studied. Additional inclusion criteria were the presence of both an arterial catheter and PAC (Edwards LifeSciences, Irvine, CA, USA) (either CO TD or CCO). Exclusion criteria were evidence of cardiac contractility dysfunction (ejection fraction < 45% by intraoperative echocardiography), pregnancy, having pacemaker or automated implantable cardioverter-defibrillator, persistent arrhythmias, heart and/or lung transplant, severe valvular (mitral, aortic, pulmonic or tricuspid) stenosis or insufficiency after surgery, intraaortic balloon pump or other mechanical cardiac support.
Patients were admitted to the ICU on assist-control ventilation with a respiratory rate of 12/min (no patient had a spontaneous respiratory rate > 16/min), a tidal volume of 6 ml/kg, an inspiratory-to-expiratory (I/E) ratio of 1:2 and 5 cm H2O positive end-expiratory pressure. Fentanyl (25-50 μg) was given as needed by nursing staff if the patient appeared to have pain or discomfort.
FloTrac™ and PAC
The FloTrac™ pulse contour device (Vigileo™, Edwards LifeSciences, Irvine, CA, USA) was attached to the existing arterial cannula, and its sensor was attached to the processing or display unit to read CO. The patient's demographic data (height, weight, age, and gender) were entered into the device as recommended by the manufacturer. FloTrac CO is reported as an averaged value over 20 seconds using a proprietary algorithm [23]. All continuous CO measurements were collected from the Vigileo™ monitor and input into a WinDaq data acquisition system (WINDAQ V 1.26, Dataq Instruments Inc., Akron, OH) as previously described [24]. Either a CO TD or a CCO was measured by a standard PAC attached to Vigilance™ monitor (Edwards Life-Sciences, Irvine, CA). If a non-CCO PAC was present, then CO measurements were taken upon patient arrival to the ICU and then after each therapeutic intervention as described below. CO TD was taken as the mean of at least three 10-ml 5°C 0.9 N NaCl bolus injections random to the respiratory cycle. The accuracy and acceptability of each thermal decay curve was judged visually on the attached ICU monitor. If CCO PAC was present, then all CCO data based on STAT values were continuously collected until end of the study using the WinDaq data acquisition.
LiDCO plus™ and PiCCO plus™
Arterial wave form data was collected using the WinDaq data acquisition system as previously described [24]. These waveforms were then reinjected into both the LiDCO plus™ and PiCCO plus™ devices offline to calculate CO. To minimize differences due to initial calibration variance, both the LiDCO plus™ and PiCCO plus™ devices had their initial CO values taken from the simultaneous PACderived CO values at time 0 as recommended by the manufacturers, after which time neither device was recalibrated. All continuous LiDCO and PiCCO CO measurements were collected in a data acquisition system installed internally in the device. The clocks on the all data acquisition systems were matched. All the CO TD measurements were taken by one investigator (MH).
Protocol
We compared the mean paired CO values 30 s before and 1-2 min after ending a volume challenge and after heart rate and blood pressure stabilized (< 5% variation over 30 s) following changes in vasoactive and inotropic therapy. We made no attempt to alter the usual care of the patients. The FloTrac data were blinded to the primary care physicians. All paired event data were downloaded in a common Microsoft Excel (Microsoft Corp., Redmond, WA, USA) spreadsheet for statistical analysis.
Statistical analysis
We performed analysis of variance for comparison of mean baseline CO among the devices. A post hoc Student's paired t-test was used to compare groups when significance was identified. P < 0.05 was considered significant. We performed Bland-Altman analysis for the paired devices PAC-LiDCO, PAC-PiCCO, PAC-FloTrac, LiDCO-PiCCO, LiDCO-FloTrac and PiCCO-FloTrac. Bias was defined as the mean difference between CO measurements by each set of paired devices. The upper and lower limits of agreement were defined as ± 1.96 standard deviations (SD) of the differences. The percentage error was calculated as the limits of agreement divided by the mean CO [25,26]. Bias, limits of agreement and percentage error were calculated for the entire data set for each set of paired devices and then separately for COtd and CCO. We also performed two additional Bland-Altman analyses. We selectively compared limits of agreement and bias of CO values < 5 L/min to ascertain whether any observed bias was selectively due to higher flow rates, which would have less clinical relevance. Since there is no reference CO measure, we also created a pooled CO measure as the mean of all the devices' CO values at one point (Z-statistic) and performed a Bland-Altman analysis of each device against this mean of all devices. For this analysis, we pooled the PAC COtd and CCO values into one variable. Since directional changes in CO are important in assessing response to therapy, the degree of concordance was defined as the number of events in which paired devices showed the same directional change in CO (greater than ± 0.5 L/min) divided by the total number of events. We assumed that all paired CO data that varied by < 0.5 L/min reflected no change and then calculated the percentage of paired data points when both devices reported no change or a change of > 0.5 L/min in the same direction. We also calculated the correlation of the dynamic changes in these paired values using Pearson product-moment (simple linear) correlation analysis.

Table 1 reports patient demographics. Simultaneous CO measurements for all four devices in 17 patients were taken. Two patients were excluded from analysis because of arrhythmias and another was excluded because the arterial pressure waveforms recorded were unusable for the PiCCO device. Table 2 reports CO by device and treatment intervention characteristics. Although mean CO values for PAC, LiDCO, PiCCO and FloTrac were not different (5.6 ± 1.5, 5.4 ± 1.6, 5.4 ± 1.5 and 6.1 ± 1.9 L/min, respectively), mean FloTrac CO values were slightly higher than the others, approaching statistical significance against PAC, LiDCO and PiCCO (P = 0.095, 0.120 and 0.078, respectively).
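The core of the agreement analysis described above (bias, limits of agreement, percentage error, and directional concordance) can be expressed compactly; the Python sketch below uses simulated paired CO values rather than the study's data, and the ~30% percentage-error threshold in the comment is the commonly cited Critchley criterion rather than a value stated in this paper.

```python
# Illustrative Bland-Altman and concordance calculations for paired CO readings.
# The paired values are simulated; they are not the study's measurements.
import numpy as np

rng = np.random.default_rng(1)
co_device_a = rng.normal(5.6, 1.5, size=55)                 # e.g., PAC-derived CO (L/min)
co_device_b = co_device_a + rng.normal(-0.2, 0.8, size=55)  # e.g., pulse contour CO (L/min)

# Bland-Altman: bias and limits of agreement.
diff = co_device_a - co_device_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)            # limits of agreement = bias +/- 1.96 SD
mean_co = np.mean((co_device_a + co_device_b) / 2)
percentage_error = loa / mean_co * 100   # Critchley criterion: acceptable if < ~30%
print(f"bias = {bias:.2f} L/min, LoA = +/-{loa:.2f} L/min, error = {percentage_error:.0f}%")

# Directional concordance: do paired devices trend the same way between events?
delta_a = np.diff(co_device_a)
delta_b = np.diff(co_device_b)
same_direction = []
for da, db in zip(delta_a, delta_b):
    no_change_a, no_change_b = abs(da) < 0.5, abs(db) < 0.5
    if no_change_a or no_change_b:
        same_direction.append(no_change_a and no_change_b)   # both report "no change"
    else:
        same_direction.append(np.sign(da) == np.sign(db))    # both change the same way
concordance = np.mean(same_direction) * 100
print(f"concordance = {concordance:.0f}%")
```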
Since CO accuracy may be clinically more important at low CO values, we analyzed the agreement among estimates of CO for mean values ≤ 5 L/min. For CO values ≤ 5 L/min, bias and limits of agreement were -0.17 ± 1.58 (PAC-LiDCO), 0.27 ± 1.84 (PAC-PiCCO),
PAC COtd vs. CCO as reference points
The bias and limits of agreement for each paired method in subgroup analyses of patients with either CO TD or CCO PAC are shown in Figure 4. The bias and limits of agreement for LiDCO with CCO (-0.31 ± 1.41 L/min), PiCCO with CCO (0.49 ± 1.30 L/min) and FloTrac with CCO (0.05 ± 1.30 L/min) were different from that of the three devices with CO TD PAC (-0.10 ± 1.64, 0.09 ± 2.58 and -0.72 ± 4.09 L/min, respectively).
Discussion
DO2-targeted resuscitation protocols reduce both length of stay and infectious complications in high-risk surgical patients [27,28]. Several minimally invasive monitoring devices have been used to realize these benefits. Our study demonstrates that the three commercially available CO monitoring devices report similar mean CO values, but dynamic trends among these devices over clinically relevant CO changes are not consistent. Thus, in the absence of contradictory findings, one must use the monitors specifically employed in a proven effective treatment protocol to ensure the utility of that treatment. Within this context, PAC, LiDCO plus™ and FloTrac postoptimization protocols have been shown to improve patient-centered outcome [27,29,30]. Surprisingly, no comparable PAC data-specific clinical trials have been reported. We are unable to comment on the ability of FloTrac™- or PiCCO plus™-guided therapy to improve outcome because they have not been studied in this context. However, on the basis of our analysis of 55 quadruple measures and the three recent clinical trials [18-21,31], it is doubtful that their performance, using the present proprietary iterations, will be interchangeable with PAC or result in any better outcomes than were observed using the LiDCO plus™ CO estimates to target DO2 levels.
Figure 1 Bland-Altman analysis of each set of paired devices' cardiac output (CO). Solid line, mean difference (bias); dotted lines, limits of agreement (bias ± 1.96 standard deviation (SD)).
This clinical study is unique for two specific reasons. First, we studied three commercially available pulse contour-pulse power analysis devices that report continuous CO measures and compared them to each other and to two types of PAC CO estimates: COtd or CCO. Since none of these devices is a "gold standard," the three pulse contour devices were compared to each other and to the PAC as equal devices. Our comparisons show that LiDCO plus™ and PAC have greater agreement with each other than do either PiCCO plus™ or FloTrac™ with PAC. Furthermore, the limits of agreement between LiDCO plus™ and PAC are within the boundaries of the Critchley-Critchley criteria [25], whereas those of PiCCO plus™ or FloTrac and PAC exceed those criteria. This close correlation also agrees with our previous data during open heart surgery, wherein we documented that the LiDCO plus™ estimates of stroke volume accurately trend actual left ventricular stroke volume measures during rapid and dynamic changes in CO when aortic flow was accurately measured in humans using an electromagnetic flow probe placed around the ascending aorta [32]. These differences in the level of agreement persist when all devices are compared to a mean pooled CO value of the group as opposed to each other separately (Figure 3). Second, we studied three separate types of resuscitation interventions (volume loading, vasoactive drug use and inotropic agent use), which reflect clinically relevant scenarios. To date, all published validation studies cited above examined only the ability of these devices to track cardiac output changes in response to volume loading when vasoactive drug therapy was held constant. Although changes in CO in response to volume loading are very important to document, the impact of other vasoactive therapies is equally important, commonly seen in the clinical setting and potentially confounding to the accuracy of pulse pressure-derived estimates of CO.
In support of our findings, recent studies with FloTrac™ showed limited accuracy compared to PAC [18,19,31]. Mayer et al. [31] showed in intraoperative cardiac surgery patients that FloTrac™ displayed an overall percentage error of 46% compared to paired COtd values. Potentially, these previous studies unfairly studied FloTrac™ by using profound vasomotor paralysis and flow-labile states, a clinical limitation specifically cautioned by the manufacturer. Our FloTrac™ device was equipped with the second-generation software modified to be more accurate in labile states. However, Compton et al. [33] reported continued poor limits of agreement between this second-generation FloTrac™ algorithm and PiCCO plus™ thermodilution CO measures. Thus, our FloTrac™-PAC data agree with their findings. FloTrac™ has subsequently developed a third-generation software algorithm that we did not use. We do not know if this newer iteration will improve FloTrac™ accuracy, since that modification allowed FloTrac™ CO estimates to remain accurate during decoupling states, such as sepsis, which were conditions not present in our cohort. Conversely, PiCCO plus™ calibration appears to remain accurate within 6 h of calibration even when vascular tone has been changed [34].
We had nearly equal numbers of patients studied with CO TD and CCO PAC. This allowed us to compare these measures with pulse contour analysis. Since both CO TD and CCO are clinically acceptable as part of standard of care in the ICU, this distribution of patients makes our data more robust as a reference for standard ICU care. Regrettably, both FloTrac™ and LiDCO Plus™ CO values had poor bias and precision with PAC-derived CO values for both COtd and CCO. These findings are also consistent with the findings of others [18,19,[35][36][37]. Since we did not compare COtd to CCO in the same patient because of the observational nature of our study, we cannot comment on the potential bias between COtd and CCO. However, independently of which PAC method was used for these comparisons, neither gives actual instantaneous measures of CO. COtd measures require the averaging of three to five separate measures taken over a 5-min interval. If cardiac output is systematically changing during this interval (that is, either increasing or decreasing from the start to the end of the series of thermodilution measures), the calculated CO value may not reflect instantaneous CO values taken at the same time. Similarly, CCO uses a moving average algorithm that examines thermal dilution of 3 min, making it highly insensitive to rapid changes in CO. However, in our study, we were concerned only with defining the data collection times as those following specific therapeutic interventions when hemodynamic measures, including heart rate, CO and mean arterial pressure, were constant. Although such statements of stability are relative considering the unstable nature of the postoperative cardiac surgery patient, for the purposes of CO measures they were stable over the 5 min of data collection.
Since absolute CO measures become increasingly more important at low CO values [38,39], we assessed agreement among our monitoring devices by post hoc analysis of all measured CO values ≤ 5 L/min. We found that the degree of bias decreased slightly relative to the complete CO data set, although the degree of variability among the devices remained (Figure 2). Accordingly, LiDCO Plus™, PiCCO Plus™ and FloTrac™ cannot be assumed to be interchangeable with PAC devices in the assessment of low CO values. Again, which device, if any, reports the most accurate value and trend during low-flow states is not known on the basis of our study. Furthermore, most of the variance between LiDCO™ and FloTrac™ and the PAC-derived CO measures came from the COtd values, particularly when these cardiac output values were > 5 L/min. This finding is the opposite of what Opdam et al. found [18]. Potentially, averaging CO measurements over 20 s improved agreement between the devices and CCO as opposed to between the devices and COtd PAC values. This difference between CCO and COtd may reflect a clinical decision bias by which patients with intrinsically lower CO get CCO devices (4.8 ± 1.4 L/min), whereas those with higher CO get COtd devices (6.0 ± 1.3 L/min).
One major potential benefit of using CCO monitoring is to note directional changes in flow. By Pearson product-moment analysis, we found poor correlation between each device pair, with the best correlation between LiDCO Plus™ and FloTrac™; the PiCCO Plus™ correlations were intermediate between those of LiDCO™ and FloTrac™.
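As a minimal sketch, with synthetic CO series rather than our data, trending agreement of this kind can be quantified by applying the Pearson product-moment correlation to the interval-to-interval CO changes reported by two devices:

```python
import numpy as np

def trending_correlation(co_a: np.ndarray, co_b: np.ndarray) -> float:
    """Pearson product-moment correlation between the interval-to-interval
    CO changes reported by two devices (a simple trending comparison)."""
    d_a = np.diff(co_a)   # changes in CO reported by device A
    d_b = np.diff(co_b)   # changes in CO reported by device B over the same intervals
    return float(np.corrcoef(d_a, d_b)[0, 1])

# Hypothetical CO series (L/min) recorded around successive interventions:
lidco = np.array([4.8, 5.6, 5.2, 6.1, 5.9])
flotrac = np.array([4.5, 5.5, 5.0, 6.4, 6.0])
print(f"r = {trending_correlation(lidco, flotrac):.2f}")
```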
That these devices differed in their paired performances is not surprising. They all use different aspects of the arterial pulse and rely on different assumptions in their CO estimations. Most of our patient cohort was being administered varying levels of vasoactive medications that must alter their vasomotor tone at baseline and over time. Since LiDCO Plus™ and FloTrac™ use similar aspects of the arterial pulse to calculate CO, this may explain their better concordance by Pearson product-moment analysis. Also, volume challenge in preload-responsive patients increases CO by > 10%-15% [33,40]. We used this threshold CO value as a minimal CO change and still observed poor agreement between devices.
Study limitations
First, we report on a small patient cohort, limiting subgroup analysis and potentially showing differences where a larger number of patients would show similarity. Not all patients received all therapies, since our study was observational. Still, this limitation reflects real-life conditions. Yet patients are treated individually, not as group means; thus these data are relevant to clinical decision making. Second, we did not use the PiCCO™ or LiDCO™ device-specific calibration methods. However, our common baseline external calibration method is approved by both manufacturers as an acceptable method. Since our goal was to ascertain the dynamic accuracy of these devices, we reasoned that starting from a common CO value using an external calibration method would maximize potential CO agreement between devices. If anything, separate PiCCO™ and LiDCO™ calibrations would produce more, not less, CO variance than we report. Third, we compared not only mean CO values but also their changes and Pearson product-moment correlations, as recommended by Squara et al. [21]. They also recommended assessment of dynamic real-time trends as a fourth method of analysis. We did not use this fourth method of comparison, because COtd did not lend itself to it. Finally, not all of our patients had femoral arterial catheters, which might have affected the PiCCO™ CO estimates, as large peripheral arteries are their preferred sites. However, the femoral (central arterial) site is required only so that the thermal calibration signal can pass the sensing thermistor, not for the subsequent CO estimates, and the manufacturer allows radial site insertion with external calibration. Furthermore, we saw no systematic differences in agreement between femoral and radial site PiCCO™ CO measures. Thus, we consider the PiCCO™ data to reflect accurate values.
Conclusions
LiDCO Plus™, PiCCO™, FloTrac™ and PAC did not show similar CO trending results, although all produced similar pooled steady-state CO values. Furthermore, if clinical trials of resuscitation based on CO values show efficacy when using one of these devices, it is not clear whether performing the identical trial with another CO monitoring device will also show similar benefit. Thus, until the agreement among minimally invasive CO measuring devices improves, each device needs to have its own clinical efficacy validated.
Key messages
• Since the PAC-derived estimates of cardiac output by the thermodilution technique are not the gold standard for estimating cardiac output at the bedside, all available measures of cardiac output need to be compared to each other rather than to a PAC reference.
• Different commercially available arterial pressure-derived estimates of cardiac output give differing degrees of error relative to each other.
• The cardiac output error among devices is low for cardiac output values < 5 L/min.
• Studies documenting clinical benefit using catheter-derived estimates of cardiac output to drive resuscitation algorithms using one monitoring device cannot be extrapolated to similar utility by using another cardiac output monitoring device.
Competing interests
MRP is a member of the medical advisory boards for and has received honoraria for lectures from both LiDCO Ltd and Edwards LifeSciences, Inc, and has stock options with LiDCO Ltd. All other authors declare that they have no competing interests.
Authors' contributions
MH helped design the study, recruited the patients, collected the data, analyzed the initial data and wrote the first draft of the manuscript. HKK helped analyze the data and edited the later versions of the manuscript. DS helped collect and store the data and performed the preliminary statistical analysis. MRP helped design the study, got Institutional Review Board approval, analyzed the data and wrote all versions of the manuscript.
Mini Solar and Sea Current Power Generation System
Power demand in the United Arab Emirates has increased to the point that there are recurring power cuts in our region. This is because of high power consumption by factories and the limited availability of conventional energy resources. Electricity is an essential facility for human beings, yet conventional energy resources are depleting day by day, so we have to shift from conventional to non-conventional energy resources. In this work, two energy resources are combined, i.e., wind and solar energy. This approach draws on sustainable energy resources without damaging nature, and a hybrid energy system can supply uninterrupted power. Basically, this system involves the integration of two energy sources to give continuous power: solar panels are used to convert solar energy and wind turbines are used to convert wind energy into electricity. This electrical power can be utilized for various purposes and generated at an affordable cost. This paper deals with the generation of electricity by using the two sources in combination, which leads to electricity at an affordable cost without upsetting the balance of nature. The purpose of this project was to design a portable and low-cost power system that combines both sea current electric turbine and solar electric technologies. This system is designed as a power solution for remote locations or as another source of green power.
Introduction
Since the invention of the first engine in the 17th century, energy has been consumed for many purposes in many different ways. The persistent increase in energy demand has forced the world to seek new, alternative energy resources, which have also been utilized to minimize the energy deficit [1-3]. Figure 1 shows a chart of the difference between early consumption and consumption now, broken down by resource type, whether renewable or non-renewable. In the early age, people did not use much energy, but as time passed, energy demand increased and the world is now searching for new energy resources. Non-renewable energy sources are heavily used in comparison with renewable energy, and that causes environmental problems such as global warming. Electricity is essential for our day-to-day life, and it can be generated from either renewable or non-renewable sources. Electrical energy demand is increasing worldwide, so to fulfill this demand we have to generate more electrical energy. Nowadays most electrical energy is generated from non-renewable energy resources such as coal, diesel, and nuclear. The main drawbacks of these sources are pollution and greenhouse effects, which contribute to global warming, and nuclear waste, which is very harmful to living organisms. The non-renewable energy resources are depleting day by day and will eventually vanish from the earth, so we have to find other ways to generate electricity. The new source should be reliable, pollution-free and economical. Renewable energy resources are good alternatives to non-renewable energy resources, and there are many of them, such as geothermal, tidal, wind, and solar. Solar energy has the drawback that it cannot produce electrical energy in rainy and cloudy seasons; to overcome this drawback we can use two energy resources, so that if one source fails the other keeps generating electricity, and in good weather conditions we can use both sources together. In the United Arab Emirates, solar energy is the best renewable energy we have, but one source is never enough. Combining another source with solar energy will give us more power and more reliability in extracting power without burning fossil fuel or damaging the environment. The best option besides solar is wind, but we do not have enough space to build huge wind farms, so we considered sea current power generation, which works on the same concept as wind energy but under the sea. The turbine area can be much smaller because the density of water is roughly a thousand times larger than the density of air. Other renewable sources are not as suitable as sea current energy because of limitations in the surrounding environment; for geothermal, for example, we have few suitable places and those places are already occupied with other facilities such as swimming pools. The sea is vast and we can place the turbines anywhere we want [4-7]. The working mechanism of solar panels is depicted in Figure 2.
Figure 2. PV Solar
As shown in Figure 2, sunlight strikes the silicon material, and the n-type and p-type materials react to this energy, resulting in the movement of charge carriers, mainly electron-hole movement. In this way an electrical current is induced. Next we will talk briefly about wind power. Figure 3 shows a wind turbine. Wind turbines are used to convert wind power into electric power: an electric generator inside the turbine converts the mechanical power into electric power. Wind turbine systems are available in sizes ranging from 50 W to 2-3 MW. The energy production of a wind turbine depends on the wind velocity acting on the turbine. Wind power is used to feed both energy production and consumption demand, and transmission lines in rural areas. There are two types of turbines in general, horizontal-axis and vertical-axis turbines, each of which is used under different site conditions. The figure below shows the structure of a wind turbine.
Figure 3. Wind Turbine
The system will be improved by combining it with another source to make a reliable hybrid system. By knowing the power demand, we can calculate the size and the quantity of the PV panels, and the same for the sea current turbine generator. There are two types of sea current: deep sea current and surface sea current. Deep sea currents are driven by dissolved salt and differences in density, while surface sea currents are driven by the wind. Surface sea currents extend down to about 100 m, while anything deeper than 100 m is considered a deep sea current. In this project we will use the surface sea current to make the turbine move and generate electricity. The concept of sea current turbines is exactly the same as that of wind turbines: the gear box, shaft, nacelle, and so on are all in the design. The only differences are in the size, since the underwater turbines will be much smaller, and in the materials used to build them, which must be reliable under water and not easily affected by salt and rust.
Site characteristics
The United Arab Emirates is located in the Gulf region, at latitude 24° 00' N and longitude 54° 00' E. The climate of the United Arab Emirates features extreme heat, and the weather is sunny all year round, as can be seen in Figure 4.
Solar Energy Potential
Direct normal irradiation: the estimated direct normal irradiation is plotted with respect to the month for six stations.
Peak sun hours: Figure 6 displays the average peak sun hours per day in the UAE for each month. The yearly average of 5.84 peak sun hours/day is used in the calculations.
Sea current energy potential
The United Arab Emirates has many islands and is surrounded by water, so we can use the sea current to generate energy. Water has a density of at least 1000 kg/m³, and salty water even more, so even a current speed of 1 m/s is enough to generate clean power. Figure 7 shows the potential of wind energy: as long as there is even a small amount of wind energy to drive the surface sea current, we can generate energy.
Figure 7. The potential of wind energy
Conceptual design
The system will use sea current turbines and solar panels to extract energy. The two sources will extract energy independently and will be connected to a hybrid wind/solar charge controller, which will store the produced energy in a battery bank. The charge controller manages the system and stores the energy safely. From the battery, the power goes to an inverter to convert DC to AC, and from the inverter to the AC load. In this project we use six major components to complete the system, as shown in Figure 8.
AC load: we need to define a maximum load so we can design the system based on that maximum load.
Inverter: an inverter is needed to convert DC power into AC power. Battery bank: the system needs a battery bank sized to the load requirement so that it can fulfill the load.
Hybrid solar-sea current controller: the basic function of the charge controller is to control which source is active or inactive. It stores the power extracted from the sources in the batteries and supplies power from the battery to the inverter. The controller has over-charge protection and short-circuit protection, which increases the battery lifespan. Solar panels (PV): a solar panel is used to convert solar radiation into electrical energy.
Sea current turbines: a sea current turbine is the system that extracts energy from the sea current through the rotation of its blades.
Modeling of PV System
The required size of the PV panel
Modeling of Wind System
Power available in wind at a specific site depends on three things: the air density, the area of the rotor and the wind speed. Wind power can be calculated as P = ½ × ρ × A × v³, where ρ is the fluid density, A is the swept rotor area and v is the flow speed; the same formula applies to the sea current turbine with the density of sea water. We want to produce at least 80 W; the chosen rotor has a radius R = 0.3 m, giving a swept area Aw ≈ 0.28 m².
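As a rough illustration of this formula (the seawater density of about 1025 kg/m³ and the power coefficient of roughly 0.35 below are assumptions, not values stated in the text), the sketch compares the power available to the 0.3 m rotor in water and in air at plausible surface-current speeds, which is why a rotor this small can approach the 80 W target under water but not in air.

```python
import math

def turbine_power(density: float, area: float, speed: float, cp: float = 0.35) -> float:
    """Power captured by a turbine: P = 0.5 * rho * A * v**3 * Cp (watts).
    density in kg/m^3, swept area in m^2, flow speed in m/s, Cp = assumed power coefficient."""
    return 0.5 * density * area * speed ** 3 * cp

R = 0.3                        # rotor radius from the text (m)
A = math.pi * R ** 2           # swept area, ~0.28 m^2
SEAWATER_DENSITY = 1025.0      # kg/m^3 (assumed)
AIR_DENSITY = 1.2              # kg/m^3, for comparison

for v in (1.0, 1.2, 1.5):      # plausible surface current speeds (m/s)
    print(f"v={v} m/s: water {turbine_power(SEAWATER_DENSITY, A, v):.0f} W, "
          f"air {turbine_power(AIR_DENSITY, A, v):.2f} W")
```

At roughly 1.2 m/s the water turbine already reaches the 80 W target under these assumptions, while the same rotor in air produces well under 1 W.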
Controller size
We can also choose the size depending on the maximum watts produced by both sources. The system needs a controller that can withstand 20 W from the solar panels and 80 W from the current turbine. It will not be easy to find one, because hybrid controllers are generally rated for much higher power. We may have to use a bigger controller, because the smallest available hybrid charge controller can withstand 300 W from water generation and 150 W from solar generation [8,9].
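As an illustrative sizing sketch only (the daily load, system efficiency, battery voltage, autonomy and depth of discharge below are assumptions; only the 5.84 peak sun hours/day figure and the 20 W / 80 W source ratings come from the text), the sizing chain described above, from AC load to battery bank, PV array and controller, can be written as:

```python
# Rough hybrid-system sizing sketch; all numbers not taken from the text are assumptions.

DAILY_LOAD_WH = 500.0        # assumed daily AC load (Wh/day)
PEAK_SUN_HOURS = 5.84        # yearly average for the UAE (from the text)
SYSTEM_EFFICIENCY = 0.75     # assumed combined wiring/controller/inverter losses
BATTERY_VOLTAGE = 12.0       # assumed battery bank voltage (V)
AUTONOMY_DAYS = 1.0          # assumed days of autonomy
DEPTH_OF_DISCHARGE = 0.5     # assumed usable fraction of battery capacity

# PV array size needed if solar alone had to carry the load:
pv_watts = DAILY_LOAD_WH / (PEAK_SUN_HOURS * SYSTEM_EFFICIENCY)

# Battery bank capacity (Ah) for the chosen autonomy:
battery_ah = DAILY_LOAD_WH * AUTONOMY_DAYS / (BATTERY_VOLTAGE * DEPTH_OF_DISCHARGE)

# Controller must pass at least the rated source power (20 W PV + 80 W turbine here):
controller_min_watts = 20.0 + 80.0

print(f"PV array ~{pv_watts:.0f} W, battery bank ~{battery_ah:.0f} Ah, "
      f"controller >= {controller_min_watts:.0f} W of combined source power")
```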
Conclusion
A hybrid power generation system is a good and effective solution for power generation compared to non-renewable energy resources. It has greater efficiency and can supply remote places that the government grid is unable to reach, so that power can be utilized where it is generated, reducing transmission losses and cost. Cost reduction can be achieved by increasing the production of the equipment. People should be motivated to use renewable energy resources not only for their personal good but for the global and environmental good too. It is highly safe for the environment, as it does not produce any emissions or harmful waste products like conventional energy resources do. It is a cost-effective solution for generation: it only needs an initial investment, has a long lifespan, and requires only a little inexpensive maintenance at certain intervals. Overall, it is a good, reliable and affordable system.
Epigenetic Modifications in Alzheimer’s Neuropathology and Therapeutics
Transcriptional activation is a highly synchronized process in eukaryotes that requires a series of cis- and trans-acting elements at promoter regions. Epigenetic modifications, such as chromatin remodeling, histone acetylation/deacetylation, and methylation, have frequently been studied with regard to transcriptional regulation/dysregulation. Recently, however, epigenetic modifications have also been implicated in various neurodegenerative disease mechanisms. Impaired learning and memory deterioration are cognitive dysfunctions often associated with a plethora of neurodegenerative diseases, including Alzheimer's disease. Through a better understanding of the epigenetic mechanisms underlying these dysfunctions, new epigenomic therapeutic targets, such as histone deacetylases, are being explored. Here we review the intricate packaging of DNA in eukaryotic cells, and the various modifications in epigenetic mechanisms that are now linked to the neuropathology and progression of Alzheimer's disease (AD), as well as potential therapeutic interventions.
INTRODUCTION
Alzheimer's disease (AD) is a neurodegenerative disease characterized by significant impairments in neural synapses and deficiencies in memory. Generally, it is a disease that starts off gradually, with the most common hallmark symptom being that of short-term memory loss (Burns and Iliffe, 2009). AD deteriorates progressively, with symptoms intensifying over the passage of time. These symptoms often include extensive memory loss, confusion, difficulty with language, mood swings, and behavioral issues (Burns and Iliffe, 2009). In most cases, AD progresses to dementia, and ultimately leads to death, frequently from bronchopneumonia or acute cerebrovascular incidents (Mölsä et al., 1986).
The classic AD symptoms come alongside debilitating neural atrophy. Depletion of both the number of neurons and the number of synapses in the brain is a characteristic trademark of AD (Hamos et al., 1989; Spangenberg and Green, 2017). The deficiency of these key brain cells and nerve connections often leads to the deterioration of both the temporal and parietal lobes of the brain, as well as parts of the frontal cortex, the cingulate gyrus, and brainstem nuclei (Wenk, 2003; Huang et al., 2007; Braak and Del Tredici, 2012). Research using MRI imaging has shown that patients with AD have an actual physical decrease in the size of specific brain regions as a result of this neuronal loss (Callen et al., 2001).
In addition to neuron and synapse deterioration, patients with AD have a greater buildup of amyloid plaques and neurofibrillary tangles in the brain compared to those without AD (Tiraboschi et al., 2004). The atypical accumulation of these substances is usually found in the specific areas of the brain associated with AD, such as the temporal lobe (Bouras et al., 1994). Amyloid plaques consist of amyloid beta peptides, which are fragments of the amyloid precursor protein (APP). APP, a transmembrane protein of the neuronal membrane, is crucial for neuron growth, repair, and overall function (Turner et al., 2003; Priller et al., 2006). In AD, γ-secretase and β-secretase proteolytic cleavage, unlike α-secretase cleavage, can yield amyloidogenic processing that can lead to substantial neuropathologies (Kang et al., 1987; Hooper, 2005; Zhang et al., 2011). For instance, γ-cleavage at C83 or C89 can yield amyloid β (Aβ) peptides, Aβ40 and Aβ42, which are main constituents of neuropathological plaques in AD brains (Zhang et al., 2010). These Aβ peptides then form packed deposits that accrue outside the neuron and surround it (Tiraboschi et al., 2004; Zhang et al., 2011). While the buildup of these plaque masses is a clear hallmark of AD, the exact mechanism of how this accumulation of beta amyloid peptides leads to the pathology of Alzheimer's is still unclear (Van Broeck et al., 2007).
Additionally, tau, a microtubule-associated protein, also accumulates abnormally in the brains of people with AD. While the tau protein normally functions to stabilize the microtubules of a cell's cytoskeleton in its phosphorylated state, tau becomes hyperphosphorylated in AD. This hyperphosphorylated tau combines with other threads to form neurofibrillary tangles. These tangles accrue inside the neuron and greatly impact normal transport (Hernandez and Avila, 2007). The elevated presence of both these tangles and the above-mentioned amyloid plaques in the brain has been a key indicator of AD.
While the trademark pathology and symptoms of AD are well-known, the underlying pathways which lead to the disease are not. Currently, there is no known cure to treat AD (Holtzman et al., 2011; Lindsley, 2012; Mitra et al., 2019). However, recent findings indicate that epigenetic modifications are fundamental in the process of regulating gene expression, particularly for that of memory (Kosik et al., 2012). Furthermore, AD has been shown to exhibit an epigenetic blockade, or a widespread decline in gene expression, that is thought to be influenced by posttranslational histone modifications (Sananbenesi and Fischer, 2009; Gräff et al., 2012). Taken together, it seems that epigenetics may play a greater role in AD than previously thought. Thus, by better understanding and studying the impairments of epigenetic modifications in AD, potential new therapies to treat the disease can be designed.
Epigenetics is defined as the study of phenotypic changes that occur from the modification of chromatin without changes to the actual DNA sequence (Dupont et al., 2009). In order to understand how epigenetics work, it is important to understand how DNA is packaged. The genome of a eukaryotic organism consists of a vast amount of genetic information that must be stored in the nucleus of each cell (Kornberg, 1974;Peterson and Laniel, 2004). Since these lengthy DNA molecules are quite extensive in size, they must be intricately packaged into higher ordered structures so that they can fit into the relatively small sized nucleus. In order to accomplish this feat, the DNA is wound around histone proteins, which associate with each other via electrostatic and hydrogen interactions, and thus create the structural unit termed the nucleosome (Cairns, 2009). A nucleosome consists of the eight core histone proteins, H2A, H2B, H3, and H4, each of which is present as a pair, and the DNA that wraps around them (Kornberg and Lorch, 1999;Pérez-Martìn, 1999;Rando and Winston, 2012). A histone tail consisting of amino acid chains that are abundant in positively charged residues, extend from the histone core (Pérez-Martìn, 1999;Martin and Zhang, 2005). There is also an H1 histone protein, called the linker histone, which associates with the linker DNA. The linker DNA are the regions of DNA that exist between nucleosomes after they are structured into the higher ordered string-like chromatin (Happel and Doenecke, 2009;Harshman et al., 2013). These regions are extremely crucial for gene expression and regulation (Clapier and Cairns, 2009).
While the higher ordered chromatin structure allows for the DNA to be tightly packaged and fit into the nucleus, it produces its own set of issues. Due to the condensed form of DNA, the promoter region is not as easily accessible for important cellular processes that require the DNA template (Smith and Peterson, 2005;Morrison and Shen, 2009). In order to allow for the promoter to be reachable, nucleosome structure must be altered or disrupted. The two main categories of enzymes that specifically target nucleosomes for this purpose include those that covalently modify histone proteins, such as those that carry out acetylation, deacetylation, and methylation and those that hydrolyze ATP to reposition the nucleosome and thus conduct chromatin remodeling (Margueron et al., 2005;Smith and Peterson, 2005) (Figure 1). Additionally, there are other types of histone modifications as well including that of phosphorylation, ubiquitination, sumoylation, and ADP ribosylation amongst others.
Acetylation and Deacetylation
The first two types of epigenetic modifications in this review are histone acetylation and deacetylation. Histone acetylases, or HATs, belong to the category of enzymes that covalently modify histone proteins by carrying out acetylation on lysine residues of the core histone tails (Marmorstein, 2001; Roth et al., 2001). Acetylation is a histone modification often associated with transcriptional activation. There are two main types of HATs: Type A and Type B. Type A HATs are localized in the nucleus and act on the histones associated with the chromatin (Kornberg and Lorch, 1999; Roth et al., 2001). Type B HATs, however, are in the cytoplasm and have been found to act on freshly synthesized histones that have not yet been associated with chromatin (Brownell and Allis, 1996; Roth et al., 2001; Parthun, 2007). Histone deacetylases, or HDACs, are yet another group of enzymes that covalently modify histone proteins. While HATs are responsible for neutralizing histone tails by acetylating lysine residues, HDACs counter their effects by deacetylating lysine residues. They are therefore associated with condensing chromatin and gene repression, since they directly revert the histone tails back to their charged status (Pazin and Kadonaga, 1997). In mammals, there are four classes of histone deacetylases, classes I, II, III, and IV, and their classification is based on multiple factors including function and DNA sequence. Depending on the type of HDAC, these deacetylases can be found both in the nucleus and the cytoplasm of the cell (De Ruijter et al., 2003).
DNA Methylation and Histone Methylation
While histone acetylation, histone deacetylation, histone methylation, and chromatin remodeling are all key players in influencing the epigenetics of an organism, there is yet another crucial mechanism that is involved in the epigenetic code. DNA methylation is a process that adds methyl groups to the DNA structure. Methylation can occur on both cytosine and adenine bases. While cytosine methylation is quite common in mammals, it should be noted that methyl groups on adenine bases have recently been detected in mammalian cells as well (Wu et al., 2016). Cytosine methylation involves the addition of methyl groups onto cytosine bases that come directly before guanine bases on the DNA strand. These are called CpG dinucleotides (Bird, 1986). Interestingly, recent research has proposed the importance of DNA methylation in long term memory, a crucial indication of its potential relationship to AD (Miller and Sweatt, 2007;Day and Sweatt, 2010).
In addition to DNA methylation, histone methylation is also significant. Histone methyltransferases, or HMTs, are enzymes that methylate lysine or arginine residues on the histone tails of histones H3 and H4. HMTs have been linked to both gene activation and repression. There are two main families of HMTs, and they are categorized based on the residues they methylate. The first group consists of histone lysine methyltransferases, which are the HMTs that methylate lysine. The second group is made up of protein arginine methyltransferases, which are responsible for methylating arginine (Wood and Shilatifard, 2004).
Chromatin Remodelers
While the above three epigenetic mechanisms refer to groups of enzymes that covalently modify histones, chromatin remodeling complexes are enzymes that belong to a category all of their own. These enzyme complexes utilize ATP to reposition the nucleosome and literally change the dynamics of the chromatin structure by modifying the connections between the DNA and histone proteins (Tsukiyama et al., 1999). This process is achieved through various mechanisms including nucleosome sliding, nucleosome repositioning, and ejection (Fazzio and Tsukiyama, 2003; Mohrmann and Verrijzer, 2005; Cairns, 2007, 2009; Clapier and Cairns, 2009). There are a number of families of chromatin remodelers in eukaryotic cells. These include the families of SWI/SNF, ISW1, NuRD/Mi-2 CHD, INO80, and SWR1. All of these groups of chromatin remodelers are similar in their ATPase domain, though they do differ in their specific remodeling functions (Lusser and Kadonaga, 2003; Kim et al., 2006).
Due to the significance of epigenetics, and its suggested involvement in the regulation of gene expression, particularly for that of memory, it is no wonder that connections to its role in AD have been made. In this review, we will discuss the epigenetic dysregulation observed in AD with an emphasis on the potential of epigenetic therapies to target the neuropathology exhibited as the disease progresses.
HAT/HDAC IMPLICATIONS IN AD THERAPEUTICS
Learning and memory involve intricate coordination amongst a network of various factors and pathways. Long-term memory and synaptic plasticity are dependent upon activations beyond the early induction phases of gene expression, which suggests a wide-ranging potential for epigenetic interplay (Pittenger and Kandel, 2003). Histone acetylation (and its counterpart, deacetylation; HATs and HDACs) is one of many epigenetic mechanisms now identified as playing a significant role in long-term potentiation (LTP) and memory formation, observable through fear conditioning and spatial memory exercises (Rogan et al., 1997; Francis et al., 2009).
Hippocampal LTP is an N-methyl-D-aspartate glutamate receptor-dependent response involving a continuing increase in synaptic potentiation maintained for longer than 1 h, which serves as the leading model of synaptic plasticity and learning in mammalian models (Bliss and Lomo, 1973;Bliss and Collingridge, 1993). Previous focus on epigenetic factors in LTP only analyzed methylation, and neglected the significant impact HATs and HDACs can play, particularly the potential of HDAC inhibitors (Kazantsev and Thompson, 2008). Although once regarded for their potential in cancer treatments (Vigushin and Coombes, 2002), HDAC inhibitors are now regarded as potential therapeutic targets in AD patients with a wide array of effects (Figure 2).
Alzheimer disease is linked to variants in the amyloid-β protein precursor (APP), presenilin-1, and presenilin-2 (PS1 and PS2) genes (Yan et al., 1995; Barglow and Cravatt, 2007), which lead to the neuropathological characteristics of advanced accumulation of β-amyloid in the brain and the disruption of synaptic LTP transmission (Shankar et al., 2008). Fear conditioning training using APP/PS1 mice has demonstrated that decreased contextual freezing performance could be restored back to wild-type levels via acute treatment with Trichostatin A (TSA), an HDAC inhibitor. APP/PS1 chimeric mutant mouse/human transgenes result in AD phenotypes, with β-amyloid plaque deposits accumulating by 6 months of age (Jankowsky et al., 2003). In the deficient-phenotype mice, hippocampal acetylated H4 levels were approximately half those of WT littermates. HDAC inhibitor treatment then allowed for the restoration of normal, higher H4 acetyl levels comparable to WT litters. Overall, TSA treatment rescued H4 acetylation levels, contextual freezing times, and deficits in hippocampal LTP (as observed through tetanic stimulations and contextual spatial learning) (Francis et al., 2009). Further studies have since expanded the HDAC drugs utilized, with similarly significant results. The HDACi drugs sodium valproic acid, Suberoylanilide hydroxamic acid (SAHA) and Sodium Butyrate (NaB) have been shown to yield significantly higher freezing levels in standard electric footshock freezing fear conditioning compared to their vehicle-treated (control) mutant APP/PS1 littermates. Treatment restored AD phenotypes to results that not only were no longer significantly different from WT littermates, but also were maintained even weeks later and did not modify any other aspects of behavior not related to AD pathology, such as exploratory nature or immediate freezing responses (Kilgore et al., 2010). That longevity of effect is critical in any therapeutic marketable compound, and has since been explored to maximize the significant impact that these drugs can have for patients. Two HDAC inhibitors with longer half-lives and greater blood-brain barrier penetration have been developed. A mercaptoacetamide-based class II HDACi and a hydroxamide-based class I and II HDACi both decrease β-amyloids in vitro by reducing the gene expression of their components and increasing degradation enzyme gene expression, which ultimately rescued learning and memory defects in AD mice while decreasing tau (Sung et al., 2013).
Beyond standard learning deficits, AD can also manifest in seizures and epileptic episodes, which further instigate cognitive decline. These seizures increase ∆FosB transcription factor expression, which in turn recruits HDAC1 in the hippocampus to suppress c-Fos, a proto-oncogene known for its role in memory and synaptic plasticity (Saura et al., 2004). HDAC inhibition in ∆FosB-expressing APP mutant AD mice via 4-phenylbutyric acid (class I HDAC inhibitor 4-PBA) or MS-275 (an inhibitor of HDAC1-3) has now been shown to reverse the suppression of c-Fos and thus increase cognitive performance in AD mice, as observed with object location memory tasks and hippocampus-dependent spatial memory tasks (Corbett et al., 2017).
FIGURE 2 | Promising beneficial consequences of HDAC inhibitor treatments observed so far in AD mouse models and post-mortem hippocampal analyses that can ameliorate neuropathologies.
Another transcription factor known to have significance in AD pathology that may benefit from epigenetic therapeutic interventions is PU.1, which is crucial in the development of myeloid cells and microglia gene expression. Genome-wide association studies show that reductions in PU.1 are a factor in delaying the onset of AD (Huang et al., 2017). Microarray analyses, RT-qPCR and immunocytochemistry of PU.1 knock-downs have demonstrated modified AD-associated microglial genes that are known to be involved in both innate and adaptive immunity. Further high-throughput drug screenings with FDA-approved drugs have yielded the identification of the HDAC inhibitor Vorinostat as efficient in attenuating PU.1 expression in human microglia. The combined results of these analyses suggest that Vorinostat or other HDAC inhibitors that knock down PU.1 expression may be useful as potential therapies that could reduce microglia-mediated immune responses, such as the excess inflammation observed in AD (Smyth et al., 2018).
Along those lines, it is important to once again emphasize that AD presents with a wide range of pathologies, and thus one single target may not suffice to ameliorate the deficits exhibited across the board. Instead, it may be of greater promise to explore multitargeting therapeutics. One study has already exhibited promising results with this technique by utilizing a single drug, HDACi M344, to affect the expression of multiple AD-related genes. M344 has been shown to decrease β-amyloid, phosphorylated tau, β-secretase, and APOEε4, while it also increased ADAM10, as well as BDNF, MINT2, FE65, SIRT1, REST, ABCA7, BIN1, and APP trafficking (Volmar et al., 2017). This is significant, as β-secretase (as well as γ-secretase) cleaves amyloid precursor protein (APP) (Figure 3) in a way that leads to neuropathologies in AD such as senile plaques, neuroplasticity deficits, and tau hyperphosphorylation (Nistor et al., 2007). If instead cleavage is performed by α-secretase (ADAM10), then amyloidogenic processing is avoided and neuropathology does not present (Colciaghi et al., 2002). Ultimately, mice treated with M344 exhibited significant cognitive benefits in recognition and spatial memory testing (Volmar et al., 2017), which demonstrates the potential to utilize a multitargeting drug to resolve the polygenic aspect of AD and other neurodegenerative faults. Another example of how multitargeting can be of value in AD therapy is the development of a novel "first-in-class" small molecule called CM-414 that ties HDAC inhibition to PDE5 inhibition, both of which individually have shown auspicious results (Cuadrado-Tejedor et al., 2017). PDE5 inhibition (as seen with the vasodilator sildenafil, Viagra) improves AD phenotype deficits, as it increases phosphorylation of CREB, which is a key player in memory. Long-lasting improvement of synaptic function and CREB phosphorylation, as well as the reversal of memory deficits, cGMP/PKG/pCREB signaling deficits, and neuroinflammation, together with a long-lasting decrease in amyloid-beta levels, have all been observed with PDE5 inhibition (Puzzo et al., 2009; Zhang et al., 2013). PDE5 is ultimately involved in the degradation of cGMP in various locations, including brain tissues. The nitric oxide/cGMP/CREB pathway is critical to the learning/memory process, so the degradation of cGMP is implicated in the neurodegenerative nature of AD, and drugs inhibiting this degradative process are thus promising therapeutics (Fiorito et al., 2013). Combining this effectiveness with the effectiveness of HDAC inhibition can amplify the benefits for patients. Chronic treatment with the dual inhibitor CM-414 is capable of rescuing deficient LTP in APP/PS1 mice, while also reducing beta-amyloid and phosphorylated tau levels. Furthermore, CM-414 has been shown to increase the inactive form of glycogen synthase kinase-3β (GSK3β) (Cuadrado-Tejedor et al., 2017). GSK3β is a kinase involved in microtubule stability and cognition through its connection to the phosphorylation of tau (Bhat and Budd, 2002) and thus is associated with the neuropathology of AD (Pláteník et al., 2014). Additionally, CM-414 reversed the decrease in dendritic spine density on hippocampal neurons, as well as the cognitive deficits observed through fear conditioning testing and Morris water maze spatial memory testing, as it induces synaptic gene expression.
The in vitro and ex vivo activity of the drug has been quite promising, as it demonstrates how beneficial it can be to use multiple-target therapies given the complex and multifactorial nature of AD neuropathology (Cuadrado-Tejedor et al., 2017). The only concern, however, is that more targets mean an increased risk of additional side effects, as has been observed with Vorinostat, which led to severe diarrhea and anorexia when utilized at higher doses in carcinoma treatment studies (Ree et al., 2010).
Testing expanding beyond mouse models has also been quite promising with regard to HDAC inhibition drugs. Repeated treatment of triple transgenic AD mice with RGFP-966 has been shown to decrease β-amyloid protein levels, reverse the phosphorylation of tau, and improve spatial learning and memory results as tested by open-field, balance beam, treadmill, and nest-building behavioral analyses. RGFP-966 was further shown to increase BDNF expression and decrease tau phosphorylation and tau acetylation, while also reducing the neuropathology-inducing β-secretase cleavage of APP. RGFP-966 testing was then expanded to explore its impact when applied to induced pluripotent stem-cell-derived primary neurons from AD patients. Although this was a minimal sample size of only two patients compared to two healthy controls, the results were promising, with a rescue of AD pathology and a decrease in beta-amyloid accumulation and tau modifications at the diseased residues of the neurons (Janczura et al., 2018). This further demonstrates the potential value of HDAC drug therapy for patients beyond lower-organism model results.
Similar significance of epigenetic acetylation patterns has also been observed in post-mortem human brain tissues, furthering the promise of HAT/HDAC related drugs in AD therapeutics. Although HDAC inhibition accounts for the greater representation of epigenetic therapeutics (meaning lower acetyl levels tend to be associated with AD neuropathology), some studies have identified an opposite pattern of epigenetic modification. Narayan et al. (2015) for instance, utilized immunolabeling and microarray analyses to demonstrate increased H3 and H4 acetyl levels in post-mortem AD inferior temporal gyrus and middle temporal gyrus brain tissues compared to normal brain tissue, along with compromised protein degradation mechanisms. Observed differences significantly correlated with tau, β-amyloid, and ubiquitin pathology, as they were only present in areas associated with pathology and were not identified in the control cerebellum tissues (Narayan et al., 2015). It should be noted, though, that post-mortem experiments tend to have smaller sample sizes than mouse model or cellular based analyses. Despite the contradiction with mouse models that emphasize deficiencies in acetylation rather than hyperacetylation, these results still demonstrate that there clearly exist dysregulations in epigenetic mechanisms in AD pathologies.
DNA METHYLATION'S POTENTIAL IN AD THERAPEUTICS
DNA methylation is widely regarded as the most extensively studied epigenetic modification (Anderson et al., 2012). Although the focus was previously with regard to cancer (Laird and Jaenisch, 1996;Baylin and Herman, 2000), methylation has now taken a position at the forefront of Alzheimer's disease research, and may provide insight into new therapeutic approaches to ease the neuropathology of this crippling disease.
Various studies now examine methylation patterns of numerous disease-associated genes to determine which genes have differential patterns upon pathology, either in the form of hypomethylation or hypermethylation. The same genes are also frequently studied in more than one bodily location, for instance hippocampal cells versus blood cells. HOX genes are of particular interest due to the critical role they play in neural development, as they encode transcription factors responsible for neural patterning (Philippidou and Dasen, 2013). Recently, Smith et al. (2018) were the first to demonstrate how extensive differential methylation of HOX genes can be in AD patients. Their study exhibited AD-associated hypermethylation across an extensive region (48 kb) of the HOXA cluster using epigenome-wide association in prefrontal cortex and superior temporal gyrus samples across three independent cohorts. In analyzing methylation dynamics in relation to pathology, one cannot simply apply a one-size-fits-all methodology, but rather must consider different genes having different patterns, with hypermethylation silencing of beneficial protective genes occurring at the same time that hypomethylation activation of problematic predisposition genes may be occurring. While hypermethylation of important developmental genes is observed in this region, so too is hypomethylation of APP, which results in greater levels of amyloid plaques and neuropathology (Gasparoni et al., 2018). DNA methylation dysregulation has also previously been observed in this region in Down syndrome individuals, which is of interest as many Down syndrome patients develop AD due to a duplicate of APP in the trisomy of chromosome 21 (Bacalini et al., 2015). Using post-mortem brain samples compared to standard aging profiles, Braak stage-associated methylation variations in both neurons and glia have further been identified in numerous other genes associated with AD progression, such as MCF2L, ANK1, MAP2, LRRC8B, STK32C, and S100B (Gasparoni et al., 2018).
Various aspects of neuropathology are now known to have links to differential methylation patterns. Immunoreactivity analyses of entorhinal cortex layer II, known for substantial AD pathology shows epigenetic dysfunction, particularly significant decrements (such as in 5-Methylcytosine and 5-methylcytidine) in neuronal immunoreactivity of all 10 of the epigenetic markers and factors studied by Mastroeni et al. (2010), including PHF1/PS396, DNMT1 (major methyltransferase) and 6 components of MeCP1/MBD2 methylation complex (MTA2, HDAC1, HDAC2, p66α, RbAp48, and MBD2/3). These results demonstrate an inverse relationship of DNA methylation markers and markers for late-stage tangles, as a PHF1 and PS396 are widely regarded as markers for neurofibrillary tangle formation (Mastroeni et al., 2010). In addition to the loss of methylation in neurons being associated with tangle formation, loss of methylation is also linked to increased expression of cell cycle genes (Jackson-Grusby et al., 2001;Mattson, 2003) and thus observed decrements in methylation in AD neurons could be linked to the aberrant re-entry into cell cycle and apoptosis observed in AD. One study even identified 11,822 hypermethylated CpGs in AD profiles (as well as 6,073 hypomethylated CpGs), with most of the hypermethylated sites being genes associated with cell-cycle associated processes (such as regulation of mitosis and phase transitions etc) as well as wnt-signaling involved in synaptic modulation and cognitive impairment, whereas hypomethylated sites were identified as genes involved in transcription factor binding, cofactor binding, and promoter binding (Gao et al., 2018).
As previously mentioned, location is also noteworthy when studying AD-related genes. When analyzing whether genes associated with early-onset AD are differentially methylated, pyrosequencing of AD blood and brain samples has shown that only RIN3 in blood cells exhibits significant hypomethylation across 7 CpGs (Boden et al., 2017). RIN3 encodes a potassium-dependent sodium/calcium exchanger and is associated with cell signaling and neural development through synapse function and endocytosis roles by negatively impacting amyloid trafficking (Giri et al., 2016). In the same study that identified RIN3 hypomethylation, no group-wide significant differences were observed for the late-onset genes PTK2β, ABCA7, SIRT1, or MEF2C (although 1 CpG of MEF2C did have reduced methylation in one AD individual) (Boden et al., 2017). This suggests that early- versus late-onset AD pathologies may not permit a universal epigenetic therapeutics solution. TNF-α, on the other hand, only shows significant hypomethylation in the cortex samples of AD patients but not in blood samples, showing that some of the epigenetic mechanisms being uncovered in AD pathology are only relevant to brain cells, not blood cells (Kaut et al., 2014), while others are only observed in blood cells (Boden et al., 2017). This hypomethylation at the promoter region of tumor necrosis factor has been linked to a suppression in its activity due to a lack of transcription factor binding (Pieper et al., 2008), which then leads to significant deficits in cognitive and synaptic function, as it triggers an accumulation of amyloid plaques (Buchhave et al., 2010).
Beyond the cognitive impairment and memory deficits that most people associate with AD, circadian rhythm disruptions are also highly prevalent with the majority of AD patients experiencing modified sleep/wake cycles, thermoregulation issues, and increased evening confusion (Satlin et al., 1995;Wu and Swaab, 2007). Upon examination of methylation, transcription, and expression of BMAL1, which is a known as a core component of the circadian rhythm clock and acts as a transcription factor that regulates the firing rate of hypothalamic suprachiasmatic nucleus neurons (Rudic et al., 2004), aberrant rhythmic methylation patterns significantly altering the expression of BMAL1 have been observed in fibroblasts and post-mortem AD brain samples (Cronin et al., 2017). The promise of epigenetic therapies thus extends to circadian cycles and thermoregulation.
In addition to the promising potential of methylation-related epigenetic therapies for AD neuropathology, these therapeutic advancements can also aid beyond AD to help ease the suffering of patients with other neurodegenerative disorders. Significant similarities are observed in differential methylation studies when AD samples are compared to other disorders including Bipolar Disorder (BD), Huntington's, Parkinson's, Vascular Dementia, and Lewy-bodies Dementia (Rao et al., 2012; Smith et al., 2019). Upon testing the CpG methylation of AD- and bipolar disorder-associated genes, as well as global DNA methylation and histone modifications, in the post-mortem frontal cortex of 20 patients with these neurodegenerative disorders (10 of each), many epigenetic similarities were observed between AD and BD brains. Global DNA hypermethylation and histone H3 phosphorylation are present in both illnesses, as well as hypomethylation at the COX-2 promoter and hypermethylation at the BDNF promoter. CpG methylation of synaptic markers is present in both illnesses, but there is an increase in methylation of the synaptophysin promoter in AD only, while drebrin hypermethylation is only present in BD. In addition to methylation variations, BD and AD present with an increase in mRNA and protein of neuroinflammatory markers (IL-1β, TNF-α, astrocytic, and microglial activation markers) (Rao et al., 2012). Such epigenetic similarities and the potential of multi-illness therapeutics are promising ventures to study, as both disorders are similarly characterized by increased neuroinflammatory markers GFAP, CD11b, and IL-1β, increased AA cascade cPLA2IVA, sPLA2IIA and COX2, and the loss of neurotrophic BDNF and pre-/post-synaptic synaptophysin and drebrin (Rao et al., 2011). Beyond BD, bisulfite pyrosequencing demonstrates that ANK1 hypermethylation is not only observed in AD, but is also observed in Huntington's disease and Parkinson's disease, whereas samples with Vascular Dementia or Lewy-bodies Dementia also demonstrated ANK1 hypermethylation, but only when they had coexisting AD pathology (Smith et al., 2019). This further demonstrates that methylation-related epigenetic therapeutics could extend beyond just ameliorating AD pathologies.
To further explore the epigenetic methylation profile differences being observed in AD samples, some studies have even utilized twins to better characterize genetic risks. Reduced Representation Bisulfite Sequencing of monozygotic and dizygotic twin pairs to examine whether epigenetic profile differences associated with AD could be detected in the blood of participants sharing similar genetic risk profiles shows twin pairs contain epigenomic differences in AD pathology associated genes such as ADARB2, including differentially methylated sites in hippocampal cells rather than just blood cells (Konki et al., 2018). ADARB2 mutant models are known to demonstrate memory and learning deficits, as well as synaptic impairments (Mladenova et al., 2018). Quantitative immunohistochemistry to study the levels of DNA methylation and hydroxymethylation in postmortem AD patients' brains has also shown significant decreases of 5-methylcytosine and 5-hydroxymethylcytosine in AD patients with similar results observed in an AD twin compared with the healthy twin. Furthermore, levels of methylation and hypermethylation had a negative correlation with hippocampal amyloid plaque levels and neurofibrillary tangles, meaning that reduced methylation in those same AD patients correlated with increased amyloid proteins and tangles, although it is unknown whether it was a causal or consequential event. It should be noted, though, that sample sizes were low with only 10 post-mortem AD samples, 10 post-mortem control samples, and only one set of twins (Chouliaras et al., 2013). Beyond the genetic risk profile, it is also important to better understand the environmental lifestyle risk profiles as well to ensure the most comprehensive knowledge of AD in order to most effectively target it (Eid et al., 2018). The largest study of DNA methylation-based aging (biological epigenetic profile age versus actual chronological age) to date utilized over 5000 individuals to assess genetic and environmental Alzheimer's disease risk factors, which allowed for the identification of significant associations with regard to lifestyle risk factors rather than genetic factors. Body mass index, cholesterol levels, socioeconomic status, high blood pressure, and smoking behavior all were significantly associated with AD and age acceleration epigenetic profiles (McCartney et al., 2018).
Although differential methylation has been elucidated with various AD-associated genes, some studies have instead disputed the idea of DNA methylation involvement in AD progression in other AD-associated promoters. Nagata et al. (2018), for instance, provided evidence that there is no differential methylation observed at the NEP promoter of post-mortem AD brain samples (Nagata et al., 2018). NEP is a metalloprotease involved in the degradation of β-amyloid proteins, and known to be deficient in AD-pathologies where amyloid plaques accumulate (Turner et al., 2004). The precise downregulation mechanism of NEP in AD progression still remains to be explained. Overall, the evidence for methylation playing a role in AD progression and pathology exceeds any disputes, and thus presents a very strong case for exploring epigenetic therapeutics in the targeting of AD.
CHROMATIN REMODELERS AND OTHER HISTONE MODIFICATIONS IN AD
Chromatin Remodelers
While chromatin remodelers play a crucial role in regulating chromatin, there is currently a lack of information regarding the position that these enzyme complexes play in AD compared to that of other epigenetic mechanisms. That being stated, there does exist some data that shows the connection between chromatin remodelers and AD. For instance, current research has revealed that CHD5, a chromatin remodeler belonging to the CHD family, plays a critical role in Alzheimer's. While most remodeling ATPases are expressed throughout the human body, CHD5 expression is confined to the brain (Potts et al., 2011). Moreover, the depletion of CHD5 impacts SWI/SNF, another family of chromatin remodelers. When depleted, CHD5 particularly impacts the subunits of SWI/SNF that are found in the brain by changing their expression levels. CHD5 has also been specifically linked to the genes implicated in Alzheimer's, as CHD5 has been shown to directly regulate them (Potts et al., 2011). Thus, a strong connection of the role CHD5 and AD has been documented.
Additional studies have shown the potential relationship of other chromatin remodelers with AD as well. Microarray analysis has shown that the "SWI/SNF related, matrix associated, actin-dependent regulator of chromatin, subfamily a" genes may also be associated with Alzheimer's (Guttula et al., 2012). Additionally, INO80, the proteasome, and RNAPII machinery have also been shown to be associated with Alzheimer's disease, potentially via RNAPII degradation by INO80 (Poli et al., 2017). Further research will be needed to confirm these connections.
Other Histone Modifications: Phosphorylation
Other histone modifications have been seen to play a role in AD as well. Phosphorylation is a type of histone modification that can occur when a phosphate group is added on to the histone tails of the nucleosome. Research has shown that the H2AX protein in the nucleosome of astrocytes, a type of supportive nerve cell, is phosphorylated in response to double strand breakages in the DNA. When this occurs, there is a conversion of the H2AX protein into γH2AX. This conversion is specifically found in greater amounts in astrocytes located in the hippocampal and cerebral cortex sections of the brain. Interestingly, these regions are the same areas known to be impacted in AD (Myung et al., 2008). The above-mentioned studies were performed on brain tissue acquired from autopsied patients with AD. They indicate that the phosphorylation found in astrocyte DNA signifies chromosomal damage, which hinders its role in supporting surrounding neurons in the area.
Phosphorylation of another core histone protein has been observed as well. The core histone protein H4 has been shown to have significantly higher phosphorylation levels at serine-47 in rat neuroblastoma cells expressing APP compared with APP-null cells of the same type. Experiments using tissue samples from the brains of AD patients confirmed these results, as high levels of phosphorylated H4 were observed. These findings are interesting because the authors propose that inhibition of H4 phosphorylation may be a potential means of protection against the pathological progression of AD (Chaput et al., 2016).
Yet another study suggesting the importance of phosphorylation in AD was published by Anderson and colleagues. Using transgenic mouse models with increased amyloid deposits that mimicked the amyloid pathology characteristic of AD, they studied the phosphorylation of serine-57 and threonine-58 on H3, a histone greatly regulated by phosphorylation. Data showed a 40% reduction in serine-57 phosphorylation and a 45% reduction in threonine-58 phosphorylation in these transgenic mice compared to wild type. Additionally, there was a 30% reduction at the doubly phosphorylated serine-57/threonine-58 sites. This decline in phosphorylation is thought to result in a more repressed chromatin structure, which in turn aligns with the epigenetic blockage exhibited in Alzheimer's (Anderson et al., 2015). This again raises the possibility of using phosphorylation inhibition as a targeted therapy for AD. While more research is needed to determine whether such therapies would be feasible, these studies highlight how the dysregulation of phosphorylation is yet another example of the important role that epigenetic mechanisms play in AD.
CONCLUSION AND FUTURE IMPLICATIONS
In this review, we have concentrated on presenting an overview of the epigenetic dysregulation observed in AD. While the study of impaired epigenetic modifications in Alzheimer's is relatively recent, the ever-increasing amount of research in this field has confirmed the importance of the connection. As this review has shown, the overall argument is a strong one: epigenetic mechanisms, particularly DNA methylation and histone acetylation and deacetylation, show clear dysregulation in AD when compared to the norm. It is critical to be able to identify individuals who are most at risk of developing AD and to try to treat them proactively as soon as possible. There are so many variables to be considered that this task can appear overwhelming. There is no single cause for this debilitating disease, but rather a series of interactions among circumstances, the environment, and genomes. Even within genomes there are multiple variables to consider, as age-associated genes (Desikan et al., 2017) and epigenetics (Lemche, 2018) can both influence the potential buildup of neuropathological peptides and plaques. Epigenetic profiles can vary throughout an individual's lifetime, especially since, beyond age, factors such as stress, smoking, alcohol use, and diet can all affect epigenetic expression and neuropathologies (Lövblad et al., 1997; Delgado-Morales et al., 2017). This suggests that healthier life choices, such as informed diet and exercise routines, may also be important alongside prescription drug therapeutics in the treatment and prevention of progression in AD.
Based on the current research, in addition to lifestyle changes, we have specifically highlighted the potential for epigenetic therapies that can be used to help target the disease. This is of great significance, since as of now there are no known cures for AD that can treat the disease or even delay its process (Holtzman et al., 2011;Lindsley, 2012;Mitra et al., 2019). Although there is still some sporadic controversy around epigenetic studies with regard to the involvement of particular genes in AD, the overwhelming evidence in support of epigenetic connections warrants further attention. We believe that with more research the relationship will only become clearer and the development of more specific targeted therapies will arise to aid in the treatment of AD.
AUTHOR CONTRIBUTIONS
ME and GS contributed equally to the production of this work, writing and revising, as well as approving the submitted version.
|
2019-05-10T13:10:55.910Z
|
2019-05-10T00:00:00.000
|
{
"year": 2019,
"sha1": "a0bdafa17a0dde5a467e9770b46622d849b69a43",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2019.00476/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e654478a549d38ed5c7a6d6b8510b5274126ab70",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
89457485
|
pes2o/s2orc
|
v3-fos-license
|
STUDIES ON THE DEVELOPMENT OF THE SEA URCHIN STRONGYLOCENTROTUS DROEBACHIENSIS.
1. The assembly and function of the mitotic apparatus in first division eggs of the sea urchin Strongylocentrotus droebachiensis were studied at 0° C and at 8° C by polarization microscopy in vivo and after isolation in hexylene glycol at controlled pH. 2. No differences in the amounts of total tubulin synthesized over comparable periods of the cell cycle were observed. 3. Mitotic apparatuses from a cell grown at 0° C are anastral, while those grown at 8° C are amphiastral. 4. At a 0° C growth temperature only about one-half of the spindle fiber monomer is available for polymerization as at 8° C, indicating a natural variation in pool size. 5. The amount of spindle monomer made available to the usable pool is specified only by the temperature during early prophase, with temperature prehistory having no effect. 6. The temperature coefficient for this apparent activation differs markedly from that of the mitotic process as a whole. 7. The amount of tubulin obtained from an isolated mitotic apparatus, as determin...
... "Sea Urchins" (Harvey, 1956) ... synchronous development. These methods and developmental schemes serve as the basis for further studies on mitotic spindle assembly and control, patterns of protein synthesis, and the mechanism of ciliogenesis (Stephens, 1972; in preparation).
Experimental animals
Strongylocentrotus droebachiensis, 2"-4" in diameter, was collected from various subtidal areas of Cape Cod Bay during the fall and winter of 1966-1971 and maintained either in running sea water of ambient ocean temperature or in closed aquaria at 4° C utilizing sub-sand filtration. The food supply for animals maintained in running sea water consisted of Laminaria, while that for animals in the closed system consisted of a mixed algal population growing on the walls of the tank; the latter animals were generally used within a week or two after collection.
In the Cape Cod population, no differences in the degree of gonad development were noted between freshly-collected specimens and those maintained in the running sea water system. This might be expected since both sets of specimens experienced essentially the same seasonal temperature change and had available the same food supply. Ripe animals maintained in the closed system at 4° C remained in breeding condition for at least two months beyond the time of natural spawn-out, if fed an occasional piece of Laminaria. For comparison, urchins were also obtained from the Boothbay Harbor area of Maine through commercial sources and were generally used immediately. Urchins collected from Maine prior to the breeding season and maintained in the Woods Hole sea water system underwent gonad development coincident with those of the native Cape Cod population.
Gametes
Eggs were obtained by injection of 0.53 M KCl into the perivisceral cavity (Palmer, 1937; Costello, Davidson, Eggers, Fox, and Henley, 1957). For a typical 3" diameter urchin, two injections of 2 ml each were made at diametrically opposite points in the peristome. The urchin was placed atop a 100 ml beaker filled with filtered, ice-cold sea water, aboral side down, and the beaker nearly immersed in cold running sea water. Thirty minutes were usually sufficient to obtain the maximum number of eggs, even at 0° C. Alternatively, eggs may be shed in a refrigerator or cold room whose temperature does not exceed 8° C.
Sperm were obtained by removal of the testes when a male was detected by the above injection method. The testes were briefly rinsed in cold sea water, lightly blotted, and placed in a covered plastic petri dish on ice. "Dry sperm," exuded from the testis, was diluted 1:20 with cold sea water containing 10⁻⁴ M EDTA (Tyler, 1953) and used for fertilization within 5 minutes.
After shedding, the eggs were washed by decantation at least twice with 10 times their volume of cold filtered sea water. The eggs were either used immediately or else washed once more with cold Millipore-sterilized sea water containing 0.05% sulfadiazine (Tyler and Tyler, 1966), resuspended in fresh cold sulfadiazine-sea water, and kept on ice in a covered Stender dish. The eggs were adjusted to such a concentration that they formed a layer no more than two cells thick; the depth of the fluid in the dish was 1 centimeter. Under these conditions, 90-95% of the eggs were fertilizable for 1-2 days; thereafter fertilizability dropped off to about 10% in 7 days. Eggs fertilized during this 1-2 day period developed normally but not nearly as synchronously as those fertilized immediately after shedding.
Fertilization and development
One ml of 1:20 sperm suspension was typically mixed with 10 ml of eggs in 200 ml of sea water. The eggs were allowed to settle, the sea water was decanted, and the eggs were washed twice with filtered sea water. The egg suspension was partitioned into Stender dishes in such a manner that the eggs formed a layer no more than two cells thick, in fluid no more than 1 cm in depth. The dishes were then maintained on ice, in the running sea water system, or in a temperature-controlled water bath. The fertilization, washing, and transfer operations were carried out at the same temperature as embryonic development.
Removal of the jelly coat
The developmental studies carried out in this report were all done with untreated eggs, although the jelly coat can be removed (Tyler and Tyler, 1966). Some qualitative differences in egg properties have been noticed in animals collected from these two populations, but these differences are difficult to evaluate.
Most batches of eggs from the northern S. droebachiensis are yellow-orange in color and are frequently mixed with large amounts of mucus. The urchins from Cape Cod Bay also produce some batches of eggs with this intense coloration, but most are yellow or even pale-yellow to colorless; they are consistently free of mucus. Taken at the maximum of the breeding season, eggs from the northern population are somewhat less synchronous than those from Cape Cod Bay. Maine urchins wholly ripened in the running sea water system at Woods Hole show no such differences in color or synchrony, so such effects are probably environmental, most likely related to food supply.
Developmental sequence
Even though having twice the diameter and developing at temperatures 10-20° C lower, the fertilized egg of S. droebachiensis develops along a time scale proportional in all respects to that of the more commonly studied Arbacia. Table 1 lists events in the first division of S. droebachiensis at 0° C, 4° C, and 8° C. When temperature variation is held to within 0.2° C, synchrony at the first division is excellent. At 8° C, 90% of the cells cleave within 5 minutes of the time cited. At 0° C, the comparable range is 15 minutes. In both cases, this range represents an interval of about 6% of the total division time. With temperature fluctuations of 1-2° C, particularly at prophase, the degree of asynchrony is doubled or tripled.

This is something that would not be predicted for an indigenous steady-state population. One simple explanation for this variability is that a major proportion of settling larvae originate from the more northerly regions and are carried south to Cape Cod Bay; yearly variation in prevailing winds plus local environmental conditions would thus determine the success of an immigrant population. Such larval dispersal is now considered to be an important factor in genetic exchange in shallow-water marine populations (cf. Scheltema, 1971).
The extreme susceptibility of the newly-hatched blastulae to bacterial action, whether at the upper or lower reaches of its viable temperature range, would imply that survival of larvae in waters of high organic content would be substantially reduced. The sea water intake at the Marine Biological Laboratory is adjacent to a sewer outfall; as discussed above, larvae survive only when the water is rendered sterile or when sulfadiazine is added to the sea water. A similar situation might be envisaged in nature, where organic pollution of an estuary not subject to appreciable tidal action may result in mass mortality of these larvae. Of course, many other factors besides simple bacterial action may be involved, as with the various samples of sterile sea water detected through their influence upon the embryogenesis of Echinus esculentus. Whatever the specific cause, the disappearance of large beds of S. droebachiensis in the vicinity of population centers should not be unexpected.

FIGURE 3. Developmental stages for fertilized eggs of Strongylocentrotus droebachiensis at 8° C: A, unfertilized egg (scale = 200 μ); B, 5 minutes after fertilization; C, first division, 3 hr; D, second division, 5 hr; E, 8-cell stage, 6½ hr; F, 16-cell stage, 8½ hr (arrow: micromeres); G, early blastula, 15 hr; H, mid-blastula, 20 hr; I, cilia formation, beginning 24 hr; J, hatching, 30 hr; K, ciliated blastula, 32 hr (scale = 50 μ); L, early gastrula, ...
This study was supported in part by grants M 12,124 to Dr. I. R. Gibbons, and 1-F2-GM 24,276 and GM 15,500 to the author. I would particularly like to thank Mr. John Valois of the Marine Biological Laboratory Supply Department for much valuable ecological and collection data, and Dr. R. E. Kane for the impetus to pursue this study and for many highly fruitful discussions of echinoderm biology.

SUMMARY

1. Methods for obtaining viable gametes and embryos of the arctic-boreal sea urchin Strongylocentrotus droebachiensis are described, demonstrating the suitability of this material for embryological use.
2. The useful breeding season extends from early January to mid-April for animals obtained from shallow subtidal regions of Cape Cod Bay, or begins roughly a month later for animals collected from the Gulf of Maine. The "season" can be extended by at least two months by holding the ripe animals at 4° C.
3. The time course for the first division and for development to the four-armed echinopluteus are given for various temperatures. Development time follows an inverse log relationship with temperature over the range of 1° C to 9° C.
4. The susceptibility of the eggs and embryos to temperatures in excess of 10° C, and of the hatched blastulae to bacterial action, are discussed in regard to laboratory experimentation and the natural distribution of the organism.

LITERATURE CITED

AGASSIZ, A., 1864. On the embryology of the echinoderms. Mem. Amer. Acad. Arts Sci., 9: 1-30.
BOOLOOTIAN, R. A., 1966. Reproductive physiology. Pages 561-613 in R. A. Boolootian, Ed., Physiology of Echinodermata. Interscience, New York.
COSTELLO, D. P., M. E. DAVIDSON, A. EGGERS, M. H. FOX AND C. HENLEY, 1957. Methods for Obtaining and Handling Marine Eggs and Embryos. Marine Biological Laboratory, Woods Hole, Massachusetts, 247 pp.
Males ripen nearly two months earlier and remain ripe about a month later than the females; sperm can usually be obtained throughout the year in small amounts. No attempt was made to relate gonad index (weight of gonad/weight of animal) to fertility, for it was found that the gonad index in early January was not significantly different from that in mid-April, the former time representing gonads full of eggs but in the germinal vesicle stage. Urchins obtained from Maine in 1964-65 and 1966-67 showed a much more limited season, with the beginning of the season taking place about one month later and spawn-out occurring coincident with, or even two weeks earlier than, in urchins from Cape Cod Bay. Figure 1 illustrates monthly mean water temperatures.

Sulfadiazine at this concentration was found to be non-toxic for both fertilization and development, requiring no pre-washing.

Microscopy

Developing embryos were photographed through either Zeiss phase-contrast or Leitz polarization optics. Fields of embryos were photographed through a Wild microscope; motility in later stages was arrested with osmium tetroxide vapor fixation. Flattening of embryos was prevented through the use of a 0.18 mm thick coverglass placed beneath the usual coverglass as a spacer. Kodak Panatomic-X film was used.

The breeding season of S. droebachiensis from Cape Cod Bay encompasses nearly four months. Ripe eggs can be obtained in mid-December from a small proportion of the females, and by late January nearly half of the females are fertile. The period from mid-February until mid-April represents the maximum period of fertility, with 99% fertilization in most of the females; maximum egg volume and essentially 100% fertilization are found from mid-March until mid-April, after which a rapid spawn-out takes place.
Figure 2 plots the log of the time of metaphase and cytokinesis versus temperature for both the first and second divisions of S. droebachiensis.
TABLE I. Temperature dependence of first division events; time in minutes (±5%).
Kaneof the University of Hawaii (personal communication) has made similar observations with regard to the various temperate and tropical sea urchins that he has studied.It is obvious from Figure2that this rule holds true in the case of S. droebachiensis over its entire temperature range in both the first and second division.
Fry (1936) has noted that, in Arbacia, metaphase always occurs at a time point that is 80-85% of the cleavage time regardless of temperature.

Regardless of temperature, problems occur at the hatching of the blastula. Growth in sea water from the laboratory system is quite normal up to this point, but the hatched blastulae soon disintegrate; transfer to sterile sea water or sea water containing sulfadiazine just prior to hatching entirely prevents this disintegration. Tyler and Rothschild (1951) recommend penicillin for apparently the same reason. Other marine embryos (e.g., Arbacia punctulata or Asterias forbesi) show no such sensitivity when grown in untreated sea water.

Urchins from Maine correspond most closely with Harvey's data, while the Cape Cod Bay population appears to ripen coincidentally with the population cited by Boolootian but remains ripe much longer. Sverdrup, Johnson, and Flemming (1942) report that S. droebachiensis in Norway has a breeding season from December through April, but give no location, quantitative fertility data, nor seasonal temperature variation.
As with most other embryos, overcrowding and lack of oxygen retard or arrest development; no more than 1 cm of sea water and cells one or two layers thick assure proper aeration through diffusion. Mechanical agitation or aeration of de-jellied eggs during the initial part of the first division (while the hyaline layer forms) should be avoided. The mitotic apparatus can be followed either in phase-contrast (by exclusion of granules) or in polarized light. A heat filter and temperature-controlled stage are essential if the cell is to be observed for any length of time. Reference to Figure 3 will show that most developmental events or relevant structural features are readily seen in S. droebachiensis.
Temperature must remain constant and below 10° C for normal development and synchrony. After hatching, the blastulae are extraordinarily sensitive to bacterial action and precautions must be taken to assure near-sterile conditions.
|
2019-04-01T13:12:28.149Z
|
1972-02-01T00:00:00.000
|
{
"year": 1972,
"sha1": "41bb7cf1b52faeba8d16cc1ecd416f48f5816313",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.biodiversitylibrary.org/partpdf/6985",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "7c6fb55bc879e0baa883ce71027d131c06450fc3",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
}
|
260795800
|
pes2o/s2orc
|
v3-fos-license
|
Dimensionality selection for hyperbolic embeddings using decomposed normalized maximum likelihood code-length
Graph embedding methods are effective techniques for representing nodes and their relations in a continuous space. Specifically, the hyperbolic space is more effective than the Euclidean space for embedding graphs with tree-like structures. Thus, it is critical to select the best dimensionality for the hyperbolic space in which a graph is embedded. This is because we cannot distinguish nodes well when the dimensionality is considerably low, whereas the embedded relations are affected by irregularities in the data when the dimensionality is excessively high. We consider this problem from the viewpoint of statistical model selection for latent variable models. Thereafter, we propose a novel methodology for dimensionality selection based on the minimum description length principle. We aim to introduce a latent variable modeling of hyperbolic embeddings and apply the decomposed normalized maximum likelihood code-length to latent variable model selection. We empirically demonstrated the effectiveness of our method using both synthetic and real-world datasets.
Motivation
Graphs are convenient tools for knowledge representation and can be used to represent various types of real-world data. Consequently, graph analysis has garnered significant attention in recent years in various fields, such as biology (e.g., protein-protein interaction networks) [1], social sciences (e.g., friendship networks) [2], and linguistics (e.g., word co-occurrence networks) [3]. Generally, tasks in graph analysis are classified into the following four categories: (1) node classification, (2) link prediction, (3) node clustering, and (4) graph visualization [4].

Graph embeddings, which convert discrete representations into continuous ones, such as vectors in Euclidean space, have become popular tools in graph analysis [5][6][7][8]. They provide effective solutions for the aforementioned tasks, as continuous representations can be used as the input in tasks of types (1), (2), and (3), whereas two-dimensional continuous representations are used directly in tasks of type (4).

Dimensionality is one of the most important hyperparameters in graph embeddings. First, the node classification, link prediction, and node clustering performance depend on it. Intuitively, we cannot distinguish nodes well with considerably low dimensionality, while the embedded relations are significantly affected by irregularities of the data with considerably high dimensionality. Second, the training time and computational expenses directly depend on it. Therefore, the issue of dimensionality selection for graph and word embeddings has garnered significant attention [9][10][11][12][13]. However, most existing studies have focused on the Euclidean space, although the hyperbolic space is a viable alternative embedding space.

The hyperbolic space is a Riemannian manifold with negative constant curvature. In network science, a hyperbolic space is suitable for modeling hierarchical structures [14,15].

In a tree, the number of leaves and nodes at level h is exponential in h. The analogies of these two quantities in hyperbolic and Euclidean space are the circumference and area of a circle, respectively. In the two-dimensional hyperbolic space with constant curvature K = −1, the circumference of a circle of hyperbolic radius r is given by 2π sinh r and its area by 2π(cosh r − 1), both increasing exponentially with r. This analogy demonstrates that the hyperbolic space has an affinity for hierarchical structure. However, in the two-dimensional Euclidean space R^2, the circumference of a circle is given by 2πr and its area by πr^2, both increasing polynomially with r. Thus, increasing the dimensionality is essential for embedding a hierarchical structure in the Euclidean space. Owing to these properties, hyperbolic embeddings have been extensively studied in recent years [16][17][18]. However, to the best of our knowledge, there has been no previous research except [19] on dimensionality selection in the hyperbolic space.
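To make this growth-rate contrast concrete, the following short Python sketch (an illustration added here, not part of the original text; the function names are ours) compares the circumference and area of a circle of radius r in the two-dimensional hyperbolic space with curvature K = −1 against their Euclidean counterparts.

```python
import numpy as np

def hyperbolic_circle(r):
    """Circumference and area of a circle of hyperbolic radius r in H^2 (K = -1)."""
    return 2.0 * np.pi * np.sinh(r), 2.0 * np.pi * (np.cosh(r) - 1.0)

def euclidean_circle(r):
    """Circumference and area of a circle of radius r in R^2."""
    return 2.0 * np.pi * r, np.pi * r ** 2

for r in [1.0, 5.0, 10.0]:
    ch, ah = hyperbolic_circle(r)
    ce, ae = euclidean_circle(r)
    print(f"r={r:5.1f}  hyperbolic: C={ch:12.1f} A={ah:12.1f}   euclidean: C={ce:8.1f} A={ae:8.1f}")
```

Already at r = 10 the hyperbolic circumference exceeds the Euclidean one by roughly three orders of magnitude, which illustrates why a tree whose width grows exponentially with depth can be accommodated in low-dimensional hyperbolic space.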
In this study, we propose a novel methodology for dimensionality selection of hyperbolic graph embeddings. We address this issue from the viewpoint of statistical model selection. First, we demonstrate that there is a non-identifiability problem in the conventional probabilistic model of hyperbolic embeddings; that is, there is no one-to-one correspondence between the parameters and the probability distribution. This problem invalidates the use of conventional model selection criteria, such as Akaike's information criterion (AIC) [20] and the Bayesian information criterion (BIC) [21]. To overcome this difficulty, we employ two latent variable models of hyperbolic embeddings following pseudo-uniform distributions (PUDs) [14,15] and wrapped normal distributions (WNDs) in a hyperbolic space [22]. We thereby introduce a criterion for dimensionality selection based on the minimum description length (MDL) principle [23].

The MDL principle asserts that the best model minimizes the total code-length required for encoding the particular data. It exhibits several advantages, such as consistency [24] and rapid convergence in the framework of probably approximately correct (PAC) learning [25]. Although MDL-based dimensionality selection was developed for Word2Vec-type word embeddings into the Euclidean space by Hung and Yamanishi [12], their techniques cannot straightforwardly be applied to hyperbolic graph embeddings.

The DNML criterion [26] is a model selection criterion for latent variable models based on the MDL principle, in which the non-identifiability problem is resolved by jointly encoding the observed and latent variables. The shorter the DNML criterion, the better the dimensionality. Herein, we propose to apply DNML to the problem of dimensionality selection for hyperbolic embeddings. The DNML criteria obtained by applying it to PUDs and WNDs are called the decomposed normalized maximum likelihood code-length for pseudo-uniform distributions (DNML-PUD) and the DNML code-length for wrapped normal distributions (DNML-WND), respectively.
The novelty and significance of this study are summarized as follows.
• Proposal of a novel methodology of dimensionality selection for hyperbolic embeddings
We propose DNML-PUD and DNML-WND for selecting the best dimensionality of hyperbolic graph embeddings. We aim to introduce latent variable models of hyperbolic embeddings with PUDs and WNDs and then apply the DNML criterion to their dimensionality selection, based on the MDL principle. One of our significant contributions is to derive explicit formulas of DNML for the specific cases of PUDs and WNDs.
• Empirical demonstration of the effectiveness of our methodology
We evaluated the proposed method using both synthetic and real-world datasets. For the synthetic datasets, graphs with known true dimensionality were first generated; we then examined the identification of the true dimensionality. For the real-world datasets, we examine the relationship between the selected dimensionality and the performance of link prediction. Furthermore, we quantified to what extent the hierarchical structure of a graph was preserved using the WordNet (WN) [27] dataset. Overall, our experimental results confirmed the effectiveness of our method.

The preliminary version of this paper appeared in [28]. The major updates of this paper are summarized below:
• We introduced a new latent variable model called wrapped normal distributions in hyperbolic space [22] and derived the upper bound on its DNML criterion, which we call DNML-WND.
• We added the evaluation of DNML-WND to the experimental results.
• We added the new metric called conciseness in the evaluation of the link prediction task in Sect. 4.3.1.
Related work
Conventionally, the dimensionality is determined heuristically based on domain knowledge. However, in recent years, several studies have proposed more principled approaches for this purpose.

Yin and Shen [9] proposed a pairwise inner product (PIP) loss, which quantifies the performance of embeddings based on the bias-variance trade-off. PIP loss is applicable to embeddings that can be formulated as low-rank matrix approximations, and its theoretical aspects have been investigated extensively. However, it is not known whether hyperbolic embeddings satisfy this condition; thus, PIP loss cannot be directly used for hyperbolic embeddings. Gu et al. [10] extended PIP loss to a normalized embedding loss. It is applicable to hyperbolic embeddings after defining the normalized embedding loss of hyperbolic embeddings. However, the empirical observations (for example, the normalized embedding loss following Eq. (2) in [10]) are limited to the Euclidean space, and it is still unknown whether such observations are also valid for hyperbolic embeddings.

Luo et al. [11] proposed minimum graph entropy (MinGE) to select a dimensionality that minimizes graph entropy, which is a weighted sum of feature entropy and structure entropy. However, feature entropy depends on a certain probability distribution in the Euclidean space, and its extension to hyperbolic space is not straightforward. Moreover, although it was demonstrated to exhibit excellent experimental performance, there was no particular rationale with respect to the selected dimensionality. Wang [13] proposed a method that first learns embeddings in a sufficiently high-dimensional Euclidean space (e.g., the 1000-dimensional Euclidean space), then applies principal component analysis (PCA) to the embeddings, and selects the dimensionality that minimizes a predefined score function. Recently, several hyperbolic dimensionality reduction methods have also been proposed [29][30][31], which indicates the possibility of extending the method to the hyperbolic space. To extend the method to the hyperbolic case, the following two points should be discussed: (1) how to define the score function and (2) which dimensionality reduction method should be used.

Recently, the graph neural architecture search method (GraphNAS) has been proposed in [32]. GraphNAS selects the best architecture of graph neural networks, including their dimensionality, using reinforcement learning. The most important difference between the proposed method and GraphNAS is that GraphNAS determines the architecture in a task-dependent manner (e.g., the accuracy in node classification), while the proposed method is task-independent and estimates universal dimensionalities based on the MDL principle. Another difference is that the proposed method targets hyperbolic embeddings, while GraphNAS targets Euclidean embeddings. Thus, it is potentially possible to extend GraphNAS to hyperbolic neural networks and compare its performance with the proposed method. However, to the best of our knowledge, there are no papers that have addressed this extension, and the extension is not straightforward.

Almagro and Boguna [19] proposed a dimensionality selection method for hyperbolic embeddings. In [19], dimensionality was inferred using predictive models, such as the k-nearest neighbors algorithm or deep learning, where the input is the triplet of the mean densities of chordless cycles, squares, and pentagons of a given graph.

Hung and Yamanishi [12] proposed a dimensionality selection method for Word2Vec. They applied the MDL principle to select the optimal dimensionality. However, contrary to our method, they did not employ latent variable models for embeddings and used the sequentially normalized maximum likelihood code-length rather than the DNML code-length.

The remainder of this paper is organized as follows. Section 2 introduces hyperbolic geometry, the non-identifiability problem, and latent variable models of hyperbolic graph embeddings. Section 3 explains the DNML criteria and the algorithms used for optimization. Section 4 presents the results obtained using artificial and real-world datasets. Section 5 presents the conclusions and future work. The "Appendix" section provides the derivation of the DNML code-lengths and experimental details.
Preliminaries
In this section, we first introduce hyperbolic geometry following [18]. Subsequently, the non-identifiability problem of hyperbolic embeddings is discussed. Finally, we introduce two latent variable models for hyperbolic graph embeddings.
Definition of hyperbolic space
There are several models for representing hyperbolic space (e.g., the Poincaré disk model, the Beltrami-Klein model, and the Poincaré half-plane model) [33]. In this study, a hyperboloid model was used. Since all the models introduced above are isometric to each other, the discussion of the distance structure is the same for the other models. Let H^D = (H^D, g^D) be the D-dimensional hyperbolic space, where

H^D := { x = (x_0, x_1, ..., x_D) ∈ R^{D+1} | ⟨x, x⟩_L = −1, x_0 > 0 },

and ⟨x, y⟩_L := −x_0 y_0 + Σ_{i=1}^{D} x_i y_i denotes the Lorentzian inner product on R^{D+1}. The geodesic distance between x, y ∈ H^D is d(x, y) = arcosh(−⟨x, y⟩_L), where arcosh(x) := log(x + √(x^2 − 1)). Note that x_0 is determined by

x_0 = √(1 + Σ_{i=1}^{D} x_i^2).     (1)

Thus, only D variables are independent.
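As a minimal sketch of these definitions (added for illustration; the helper names are ours, not from the paper's implementation), the Lorentzian inner product, the lifting of the D free coordinates onto the hyperboloid, and the geodesic distance can be written as follows.

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x_0 y_0 + sum_i x_i y_i."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lift(x_rest):
    """Recover x_0 from the D free coordinates so that <x, x>_L = -1."""
    x0 = np.sqrt(1.0 + np.dot(x_rest, x_rest))
    return np.concatenate(([x0], x_rest))

def hyperbolic_distance(x, y):
    """Geodesic distance d(x, y) = arcosh(-<x, y>_L) on the hyperboloid."""
    inner = np.clip(-lorentz_inner(x, y), 1.0, None)  # guard against rounding slightly below 1
    return np.arccosh(inner)

x = lift(np.array([0.3, -0.1, 0.7]))
y = lift(np.array([-0.5, 0.2, 0.0]))
print(hyperbolic_distance(x, y))
```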
Coordinate system of hyperbolic space
Next, we explain the coordinate system of the hyperbolic space. The Cartesian coordinate system of the ambient Euclidean space was used as an element of the hyperbolic space (i.e., x = (x_0, x_1, ..., x_D) ∈ H^D). Alternatively, for a maximum hyperbolic radius R > 0, the polar coordinate system (r, θ_1, ..., θ_{D−1}) introduced in [34] was used, where r ∈ [0, R], θ_1, ..., θ_{D−2} ∈ [0, π], and θ_{D−1} ∈ [0, 2π). The coordinate transformation is expressed as follows:

x_0 = cosh r,
x_1 = sinh r cos θ_1,
x_2 = sinh r sin θ_1 cos θ_2,
...
x_{D−1} = sinh r sin θ_1 ··· sin θ_{D−2} cos θ_{D−1},
x_D = sinh r sin θ_1 ··· sin θ_{D−2} sin θ_{D−1}.     (2)

In this study, we specify the coordinate system we use when we introduce the notation of elements in the hyperbolic space.
Tangent space and exponential map
When we introduce wrapped normal distributions and the optimization algorithm, the concepts of the tangent space T_x H^D and the exponential map Exp_x(·) are necessary.

For x ∈ H^D, the tangent space T_x H^D is defined as the set of vectors orthogonal to x with respect to the inner product ⟨·, ·⟩_L. Hence, T_x H^D := { v ∈ R^{D+1} | ⟨x, v⟩_L = 0 }. Thereafter, the exponential map Exp_x(·) : T_x H^D → H^D maps a tangent vector v ∈ T_x H^D onto H^D along the geodesic, where the geodesics are the generalizations of straight lines to Riemannian manifolds. The explicit forms of the exponential map and its inverse are well known (e.g., [22]) and are defined as follows:

Exp_x(v) = cosh(‖v‖_L) x + sinh(‖v‖_L) v / ‖v‖_L,  where ‖v‖_L := √⟨v, v⟩_L,     (3)

Exp_x^{−1}(y) = arcosh(−⟨x, y⟩_L) / √(⟨x, y⟩_L^2 − 1) · (y + ⟨x, y⟩_L x).
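The following sketch (illustrative code added here, assuming base points lie exactly on the hyperboloid; the function names are ours) implements the exponential map and its inverse in this standard form.

```python
import numpy as np

def lorentz_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(x, v, eps=1e-9):
    """Exp_x(v): map a tangent vector v in T_x H^D onto the hyperboloid along the geodesic."""
    norm_v = np.sqrt(max(lorentz_inner(v, v), 0.0))
    if norm_v < eps:
        return x
    return np.cosh(norm_v) * x + np.sinh(norm_v) * v / norm_v

def log_map(x, y, eps=1e-9):
    """Exp_x^{-1}(y): tangent vector at x pointing along the geodesic toward y."""
    alpha = -lorentz_inner(x, y)                      # alpha >= 1 for points on the hyperboloid
    coef = np.arccosh(np.clip(alpha, 1.0, None)) / max(np.sqrt(alpha ** 2 - 1.0), eps)
    return coef * (y - alpha * x)
```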
Non-identifiability problem of hyperbolic embeddings
In a non-identifiable model, as pointed out in [26], the central limit theorem (CLT) does not hold for the maximum likelihood estimator uniformly over the parameter space. Thus, under these circumstances, neither AIC nor BIC can be applied to latent variable models because they are derived under the assumption that the CLT holds uniformly over the parameter space.

For notational simplicity, we omit D from the probability distribution, unless noted otherwise. We focus on undirected, unweighted, and simple graphs. Let n ∈ Z_{≥2} be the number of nodes. For k ∈ Z_{≥2}, [k] and Δ_k are defined as follows: [k] := {1, 2, ..., k} and Δ_k := {(i, j) | i, j ∈ [k], i < j}. For (i, j) ∈ Δ_n, let y_ij = y_ji ∈ {0, 1} be a random variable that assumes the value 1 if the i-th node is connected to the j-th node and 0 otherwise. For D = 2 and i ∈ [n], let φ_i := (r_i, θ_i) ∈ H^D be the polar coordinates of the i-th node, where r_i ∈ [0, R] and θ_i ∈ [0, 2π). In this model, y := {y_ij}_{(i,j)∈Δ_n} is an observable variable, whereas φ := {φ_i}_{i∈[n]} is a probability distribution parameter. For β_max > β_min > 0, γ_max > γ_min > 0, β ∈ [β_min, β_max], and γ ∈ [γ_min, γ_max], we assume that a random variable y is drawn from the following distribution:

p(y; φ, β, γ) = Π_{(i,j)∈Δ_n} p(y_ij | φ_i, φ_j; β, γ),  with  p(y_ij = 1 | φ_i, φ_j; β, γ) = 1 / (1 + exp(β d(φ_i, φ_j) − γ)).     (4)

Then, the following lemma holds.
Lemma 1
We assume that r_j = 0 for some j ∈ [n]. For α ∈ (0, 2π), we define φ' := {φ'_i}_{i∈[n]} by φ'_j := (r_j, (θ_j + α) mod 2π) and φ'_i := φ_i for i ≠ j, and the following equation holds: p(y; φ, β, γ) = p(y; φ', β, γ) although φ ≠ φ'. Therefore, the probability distribution of hyperbolic embeddings is non-identifiable.

Proof Since φ'_j ≠ φ_j holds for some j ∈ [n] such that r_j = 0, we have φ ≠ φ'. For all (i, k) ∈ Δ_n, d(φ'_i, φ'_k) = d(φ_i, φ_k), because a point with r_j = 0 coincides with the origin regardless of its angular coordinate. Thus, the result follows from Eq. (4).
For D ≥ 3, non-identifiability can be proved by a similar transformation to the (D − 1)-th angular coordinate.
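For concreteness, the connection model of Eq. (4) can be evaluated as below (an illustrative sketch; edge_probability is our own helper name, and the distance argument would come from the hyperbolic_distance helper sketched earlier).

```python
import numpy as np

def edge_probability(dist, beta, gamma):
    """P(y_ij = 1) = 1 / (1 + exp(beta * d(z_i, z_j) - gamma))."""
    return 1.0 / (1.0 + np.exp(beta * dist - gamma))

# The closer two nodes are in hyperbolic space, the more likely they are to be connected.
for d in [0.5, 2.0, 5.0, 10.0]:
    print(d, edge_probability(d, beta=1.0, gamma=3.0))
```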
Latent variable models of hyperbolic embeddings with PUDs and WNDs
To resolve the non-identifiability problem, we introduce two latent variable models, following work on PUDs [14,15,34] and WNDs [22]. In PUDs and WNDs, an embedding is regarded as a set of latent variables and edges as observed variables. Among several latent variable models, we chose these two for the following reasons.

• For PUDs, it has been demonstrated in [14,15,34] that the graphs generated with PUDs have two properties: a power law for the degree of a node and a high clustering coefficient. These properties are common in real-world graphs [35].

• For WNDs, it has been demonstrated that the experimental performance of various downstream tasks has been improved.
Latent variable model with PUDs
The generation process of y, z with PUDs can be summarized as follows:
1. For each vertex i ∈ [n], generate a latent variable z_i ∼ p(z_i; σ, R) according to the pseudo-uniform distribution given below.
2. For each pair of vertices (i, j) ∈ Δ_n, generate an observable variable y_ij ∼ p(y_ij | z_i, z_j; β, γ) using Eq. (4).

Below we provide an explicit form of the probability distribution of z for PUDs.

For σ_max > σ_min ≥ 0, the random variable u := {u_i}_{i∈[n]}, with u_i = (r_i, θ_{i,1}, ..., θ_{i,D−1}) in polar coordinates, is drawn according to the following distribution with the parameter σ ∈ [σ_min, σ_max]:

p(u; σ, R) = Π_{i=1}^{n} [ sinh^{D−1}(σ r_i) / C_D(σ) ] [ Π_{j=1}^{D−2} sin^{D−1−j}(θ_{i,j}) / I_{D,j} ] (1 / 2π),

where I_{D,j} := ∫_0^π sin^{D−1−j} θ dθ and C_D(σ) := ∫_0^R sinh^{D−1}(σ r) dr denote the normalization constants. For p(z; σ, R), because z_{i,0} is determined by the other D variables in Eq. (1), we have

p(z; σ, R) = Π_{i=1}^{n} p(u_i; σ, R) |J(z_{i,1:D} : u_i)|^{−1},

where z_{i,1:D} := (z_{i,1}, ..., z_{i,D}) and J(z_{i,1:D} : u_i) is the Jacobian of the transformation from u_i to z_{i,1:D}; its explicit form and derivation are provided in "Appendix A." The probability distribution p(z; σ, R) is called the pseudo-uniform distribution because it is reduced to the uniform distribution in hyperbolic space when σ = 1.
In the following discussion, the value of R is assumed to be constant and to satisfy R = O(log n), where n is the number of nodes, and it is omitted from the description of the probability distribution. This is because the maximum average degree satisfies k_max = O(n) and the minimum average degree satisfies k_min = O(1) under certain conditions, which is a common property of real-world complex networks [34].
In the aforementioned distribution, σ, β, and γ are parameters, and D denotes the model of the probability distribution.
Probability distribution of WNDs
WNDs are a generalization of Euclidean Gaussian distributions to the hyperbolic space. Thus, WNDs have two parameters: a mean in hyperbolic space μ ∈ H^D and a positive-definite covariance matrix Σ ∈ R^{D×D}. In our model, we set μ to μ_0, where μ_0 := (1, 0, ..., 0) denotes the origin of H^D. We assume this because a tree-like graph is considered to be radially distributed around the origin μ_0.

The generation process of y and z with the WNDs is summarized as follows:
1. For each vertex i ∈ [n]:
(a) Generate v_i ∼ N(0, Σ) in the tangent space T_{μ_0} H^D, which is a tangent vector at μ_0.
(b) Map v_i onto the hyperbolic space with the exponential map: z_i = Exp_{μ_0}(v_i).
2. For each pair of vertices (i, j) ∈ Δ_n:
(a) Generate an observable variable y_ij ∼ p(y_ij | z_i, z_j; β, γ) using Eq. (4).

Note that the second step is the same as that of the model with the PUDs. Below we provide an explicit form of the probability distribution of z with the WNDs.

A random variable v := {v_i}_{i∈[n]} is drawn according to the following distribution:

p(v; Σ) = Π_{i=1}^{n} N(v_i; 0, Σ),

where N(·; 0, Σ) denotes the Gaussian density on the tangent space T_{μ_0} H^D. For p(z; Σ), we have that

p(z; Σ) = Π_{i=1}^{n} N(v_i; 0, Σ) |J(z_{i,1:D} : v_i)|^{−1},

where J(z_{i,1:D} : v_i) is the Jacobian of the transformation from v_i to z_{i,1:D}; the derivation of the Jacobian is obtained from [22].
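A minimal sampler for this latent variable model, assuming the mean is fixed at the origin μ_0 as in the text, can be sketched as follows (illustrative code with our own function names; it is not the implementation used in the experiments).

```python
import numpy as np

def exp_map_origin(v):
    """Exp_{mu_0}(v) for a tangent vector v = (0, v_1, ..., v_D) at the origin mu_0 = (1, 0, ..., 0)."""
    norm_v = np.linalg.norm(v[1:])          # at mu_0, the Lorentz norm reduces to the Euclidean norm
    mu0 = np.zeros_like(v)
    mu0[0] = 1.0
    if norm_v < 1e-9:
        return mu0
    return np.cosh(norm_v) * mu0 + np.sinh(norm_v) * v / norm_v

def sample_wrapped_normal(n, Sigma, seed=0):
    """Sample n embeddings z_i = Exp_{mu_0}(v_i) with v_i ~ N(0, Sigma) in the tangent space at the origin."""
    rng = np.random.default_rng(seed)
    D = Sigma.shape[0]
    v_rest = rng.multivariate_normal(np.zeros(D), Sigma, size=n)
    return np.array([exp_map_origin(np.concatenate(([0.0], v))) for v in v_rest])

z = sample_wrapped_normal(5, 0.5 * np.eye(2))
print(z)
```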
Dimensionality selection using DNML code-lengths
In this section, we present the calculation of the DNML code-lengths for the two latent variable models. Thereafter, we present the optimization algorithm.
DNML code-lengths with PUDs and WNDs
According to the MDL principle [24], the probabilistic model that minimizes the total code-length required to encode the given data is selected. Data may be encoded using multiple methods. Although the NML code-length [36] is one of the most common encoding methods, its calculation is quite difficult for complex probability distributions such as PUDs and WNDs. Therefore, we employ the DNML code-length [26], whose calculation for latent variable models is relatively easier.
Let D := {D_1, ..., D_N} denote the set of candidate dimensionalities. We estimate the optimal dimensionality D ∈ D and the optimal embedding ẑ that minimize the following criterion, which we call DNML-PUD:

L_DNML(y, z) := L_NML(y | z) + L_NML(z),

where L_NML(y | z) is the NML code-length of y given z, evaluated at the maximum likelihood estimators β̂(y, z) and γ̂(y, z), and L_NML(z) is the NML code-length of z, evaluated at σ̂(z). The parametric complexities of both terms are approximated with Rissanen's asymptotic expansion, in which I_n(β, γ) and I(σ) denote the Fisher information. The derivation is presented in "Appendix B." Practically, I_n(β, γ) and I(σ) are calculated numerically because the analytic solution of the integral terms is not trivial. For WNDs, the DNML criterion (DNML-WND) is defined analogously, where Σ̂ denotes the maximum likelihood estimator of Σ. Since the exact value of ∫ p(z; Σ̂(z)) dz_{1:D} is analytically intractable, we employ an upper bound on the corresponding parametric complexity; its derivation is given in the "Appendix."
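At a high level, the selection procedure amounts to computing the two code-length terms for every candidate dimensionality and keeping the minimizer. The sketch below is only a schematic of that loop; fit_embedding, nml_y_given_z, and nml_z are placeholders supplied by the caller, standing in for Algorithm 1 and the code-length formulas above rather than implementations from the paper.

```python
def select_dimensionality(y, candidates, fit_embedding, nml_y_given_z, nml_z):
    """Pick the dimensionality minimizing L_DNML(y, z) = L_NML(y | z) + L_NML(z)."""
    best = None
    for D in candidates:
        # fit_embedding returns the optimized embedding and parameters for dimensionality D
        z, beta, gamma, sigma = fit_embedding(y, D)
        score = nml_y_given_z(y, z, beta, gamma) + nml_z(z, sigma)
        if best is None or score < best[1]:
            best = (D, score, z)
    return best  # (selected dimensionality, its DNML score, the corresponding embedding)
```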
We provide a more detailed explanation of DNML-PUD and DNML-WND. Figures 1 and 2 show L_NML(y | z), L_NML(z), and L_DNML(y, z) for an artificially generated graph with the true dimensionality D_true = 8 and n = 6400. The value of L_NML(y | z) decreases as the dimensionality increases. This implies that as the dimensionality increases, the graph can be reconstructed more accurately. However, the value of L_NML(z) increases as the dimensionality increases. This is because more code-length is required to encode the extra dimensions of the embedding; that is, the model becomes more complex, and L_NML(z) acts as a penalty term.

First, we explain how to optimize L(z, β, γ, σ). We rewrite it as

L(z, β, γ, σ) = − log p(y | z; β, γ) − log p(z; σ).

We applied a stochastic update rule at iteration t, in which the log-likelihood term over Δ_n is approximated by the corresponding sum over a mini-batch B^(t) ⊂ Δ_n, where |·| denotes the number of elements in a set.

For z_i, we used the geodesic update in the hyperboloid model [18]. The update rule for z is given as follows:

z_i^(t+1) = proj_{H^D_R}( Exp_{z_i^(t)}( −η_z^(t) π_{z_i^(t)}( ∇_{z_i} L^(t)(z, β, γ, σ) ) ) ),     (7)

where η_z^(t) denotes the learning rate, π_z(·) denotes the projection from the Euclidean gradient to the Riemannian gradient, and proj_{H^D_R}(·) denotes the projection from H^D onto H^D_R := {x | x ∈ H^D, d(μ_0, x) ≤ R}; Exp_z(·) is given by Eq. (3). Through a preliminary experiment using synthetic datasets, we confirmed that σ rarely converges to the true value when using the gradient descent method. Thus, for each epoch, we numerically calculated σ̂(z) as

σ̂(z) = argmin_{σ ∈ S} − log p(z; σ),

where S = {σ_min, σ_min + (1/C)(σ_max − σ_min), ..., σ_min + ((C−1)/C)(σ_max − σ_min), σ_max}, and C + 1 denotes the number of candidates.
For the optimization of L(z, β, γ, Σ), we define L^(t)(z, β, γ, Σ) in a similar manner. The update rules for z_i, β, and γ are provided by Eqs. (7), (8), and (9), respectively. For each epoch, we optimized Σ by maximizing the log-likelihood Σ_{i=1}^{n} log p(z_i; Σ); since the Jacobian term does not depend on Σ, this reduces to the Gaussian maximum likelihood estimate computed from the tangent vectors v_i = Exp_{μ_0}^{−1}(z_i). The optimization procedure is summarized in Algorithm 1.
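As an illustration of the geodesic update, the following sketch performs one Riemannian gradient step for a single embedding point on the hyperboloid (our own simplified code; it omits the projection onto the radius-R ball and the mini-batch bookkeeping described above).

```python
import numpy as np

def lorentz_inner(x, y):
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp_map(x, v, eps=1e-9):
    norm_v = np.sqrt(max(lorentz_inner(v, v), 0.0))
    if norm_v < eps:
        return x
    return np.cosh(norm_v) * x + np.sinh(norm_v) * v / norm_v

def riemannian_sgd_step(z, euclidean_grad, lr):
    """One geodesic update z <- Exp_z(-lr * grad) in the hyperboloid model."""
    h = euclidean_grad.copy()
    h[0] = -h[0]                              # apply the inverse Lorentzian metric
    rgrad = h + lorentz_inner(z, h) * z       # project onto the tangent space T_z H^D
    return exp_map(z, -lr * rgrad)
```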
Experimental results
This section presents a comparison between the proposed criteria and conventional methods using artificial and real-world datasets. The code, data, and training details are presented in "Appendix C."
Methods for comparison
We used three criteria, namely AIC, BIC, and MinGE, for a comparative analysis of the performance of the proposed method. Here, the AIC and BIC are computed with respect to the non-identifiable model; that is, β, γ, and z are interpreted as parameters. Note that these criteria are not guaranteed to work for this model because of the non-identifiability. MinGE [11] is a dimensionality selection criterion for Euclidean graph embeddings. We set the weighting factor λ = 1 and selected the dimensionality where MinGE was closest to 0. Furthermore, we did not consider cross-validation (CV) for comparison. This is because CV requires considerable computation time, particularly when learning graph embeddings.
Artificial dataset
In this experiment, we verified whether the proposed DNML criteria could estimate the true dimensionality.
Dataset detail
We considered the case of D_true = 8, where D_true is the true dimensionality of the PUDs. We generated a graph for each combination of parameters from the following candidates: n ∈ {800, 1600, 3200, 6400}, β ∈ {0.5, 0.6, 0.7, 0.8}, and σ ∈ {0.5, 1.0, 2.0}. Furthermore, we set R = log n and γ = β log n. Consequently, we obtained 48 graphs in total, which we call PUD-8. Similarly, we generated PUD-16, whose true dimensionality was 16.
In the above generation process, the parameters were set such that the generated graphs were sparse; that is, the average degree is low with respect to the number of nodes.
Results
To provide an illustrative example for each criterion, we first compared the selected dimensionality of PUD-8 and WND-8 with n = 800 and 6400. Figures 3 and 4 show the normalized values for each criterion. For n = 800, AIC, BIC, and DNML selected D = 4. Intuitively, a graph with a few nodes is expected to be embedded in low dimensionality, even if its true dimensionality is high. For n = 6400, DNML selected the correct dimensionality D = 8, whereas AIC and BIC selected incorrect dimensionalities. This implies that the DNML criteria can select the true dimensionality with a sufficient amount of data. MinGE selected the maximum dimensionality D = 64 in all cases. This is possibly because MinGE was designed for Euclidean embeddings, which require larger dimensionality than hyperbolic embeddings for hierarchical structures, as discussed in Sect. 1.1.

Next, we provide a quantitative comparison in terms of the mean average precision (MAP) [37]. A MAP is calculated using the ranking of the dimensionalities, which was created in ascending order for each criterion. Furthermore, we applied DNML-PUD to the WND dataset and DNML-WND to the PUD dataset.

Tables 1, 2, 3, and 4 present the results for PUD-8, PUD-16, WND-8, and WND-16, respectively. Firstly, the MAPs of BIC and MinGE were not so high, and they always selected D = 4 and D = 64, respectively. Since the selected dimensionalities were constant, BIC and MinGE are less reliable.

For AIC, we observed good performance in many cases; however, it tends to overestimate the true dimensionality for PUD-8 and WND-8 with n = 6400. This is because the penalty term of AIC is smaller than those of the other criteria. For the DNML criteria, in general, when the sample size is sufficiently large, DNML-PUD identifies the true dimensionality of the PUD dataset, and the same tendency holds for DNML-WND and the WND dataset. Thus, we concluded that the proposed DNML criteria are more effective than AIC when the true dimensionality of the given graph is low.

Note that, in general, the performance of DNML-PUD on the WND dataset varied, sometimes being better and sometimes worse compared to DNML-WND. Similarly, on the PUD dataset, the performance of DNML-WND also varied, sometimes being better and sometimes worse compared to DNML-PUD. This is because the theoretical properties of the MDL principle are not valid when the generation process of the given data and the assumed generation process for calculating the DNML code-length are different from each other. Therefore, this observation is an expected result of the mismatch of the generation process.
Real-world datasets
We used scientific collaboration networks from [38][39][40], a flight network from https://openflights.org, a protein-protein interaction network from [41], and the WN datasets from [27] for our study because they were employed in [16,42,43], which are representative studies in the field of hyperbolic embeddings. The experimental results in [16,42,43] demonstrated that hyperbolic embeddings outperformed Euclidean embeddings in several graph mining tasks performed on these networks. Therefore, we concluded that they are suitable for comparing our proposed method with others.
Link prediction
DNML-PUD, DNML-WND, and the other model selection criteria were applied to eight real-world graphs. In real-world graphs, the true dimensionality is unknown. Therefore, in this experiment, the link prediction performance at the selected dimensionalities was evaluated.
Dataset detail
We list the details of the eight graphs below.

• Scientific collaboration networks. We used AstroPh, CondMat, GrQc, and HepPh from [40], Cora from [38], and PubMed from [39]. These graphs are networks that represent the co-authorship of papers, where an edge exists between two people if they are co-authors.
• Flight networks. We used Airport from https://openflights.org/. In this graph, nodes represent airports, and edges represent airline routes.
• Protein-protein interaction (PPI) networks. We further used PPI from [41]. This graph represents the protein interactions in yeast.

Furthermore, Table 5 summarizes the statistics of these graphs. Then, each graph was split into a training set y_train ⊂ y and a test set y_test ⊂ y. The test set y_test comprises positive and negative samples. First, we sampled 10% of the positive samples from a graph. Subsequently, to generate negative samples, we sampled an equal number of node pairs with no edge connection. Finally, we obtained the training set y_train := y \ y_test.
For y_test, we calculated the area under the curve (AUC), which we define as follows. We first calculated the distance for each sample in y_test. Subsequently, we calculated the true positive rate and the false positive rate at a fixed threshold on the distance. Finally, we obtained the receiver operating characteristic (ROC) curve by varying the threshold, and the AUC is defined as the area under the ROC curve.
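The AUC computation described above can be carried out with standard tooling; the sketch below is our illustration using scikit-learn's roc_auc_score, scoring a test pair as more likely to be an edge the smaller its embedded distance is (the helper names are ours).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def hyperbolic_distance(x, y):
    inner = np.clip(x[0] * y[0] - np.dot(x[1:], y[1:]), 1.0, None)   # -<x, y>_L
    return np.arccosh(inner)

def link_prediction_auc(z, test_pairs, test_labels):
    """AUC for link prediction: a smaller embedded distance is read as a higher edge score."""
    scores = [-hyperbolic_distance(z[i], z[j]) for i, j in test_pairs]
    return roc_auc_score(test_labels, scores)
```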
Results
Figure 5 shows the AUCs of the optimal embeddings associated with − log p(y, z; β, γ, σ), − log p(y, z; β, γ, Σ), and − log p(y | z; β, γ). Figure 6 shows the normalized values of each criterion, and Table 6 shows the selected dimensionalities. The performance at the dimensionalities selected by DNML-PUD and DNML-WND was not the best, and higher dimensionalities tended to yield higher AUCs. According to [24], the consistency of MDL model selection is theoretically guaranteed; that is, the model with the shortest code-length converges to the true one if it exists. Therefore, the dimensionalities selected by DNML are considered to be close to the dimensionalities of the true probabilistic models that generated the data. However, our results suggest that such a dimensionality of the true probabilistic model is not necessarily the best one for link prediction.

In this section, we provide another perspective on the experimental results. As discussed in Sect. 1.1, dimensionality also controls the computation time and memory. Therefore, it is important to select a dimensionality at which a relatively high performance is achieved while maintaining low computational resources (e.g., when using embeddings on edge devices). To quantify this, we introduce conciseness, defined as follows. Let D := {D_1, ..., D_N} be the candidate dimensionalities, AUC_{D_i} the AUC at dimensionality D_i, AUC_max the maximum AUC over the candidates, and ε_max a maximum tolerance gap relative to AUC_max. The conciseness is computed for the selected dimensionality, where P ∈ Z_{≥1} is the number of candidate points used to calculate it.

Figure 7 shows a typical example of c(D, ε). The proposed conciseness measure assumes a situation where the AUC improves as the dimensionality increases while the extent of the improvement diminishes, and where, with limited computational resources, we want to select a lower dimensionality while achieving an AUC within the maximum tolerated gap. Based on this motivation, the conciseness measure is designed to take high values when D ∈ D is close to D_min.

Since the conciseness depends significantly on ε_max, we computed it for ε_max = 0.050 and 0.100. Table 7 shows the average conciseness of the selected dimensionalities. To calculate the conciseness, we used the embeddings associated with − log p(y, z; β, γ, σ) for DNML-PUD, − log p(y, z; β, γ, Σ) for DNML-WND, and − log p(y | z; β, γ) for AIC, BIC, and MinGE. Furthermore, we set the maximum AUC as the maximum over the three embeddings.

The best or second-best performance was achieved by either DNML-PUD or DNML-WND in many cases. For AIC, the performance was relatively high, but not the best in many cases. For BIC, the performance was relatively high, specifically for ε_max = 0.100. This indicates that BIC is effective when the maximum tolerated gap is large. For MinGE, the performance was close to 0 because the selected dimensionality was considerably high. Overall, the proposed method works well in that it identifies a dimensionality with a relatively high AUC while maintaining low computational resources.

Note that all the performances were 0 for GrQc. This is because higher dimensions achieve a considerably higher AUC in GrQc, unlike most other networks where increasing dimensions does not significantly improve the AUC. In such scenarios, the conciseness measure does not take positive values unless ε_max is set to a considerably high value; however, setting an excessively high tolerance gap (e.g., ε_max = 0.300) lacks practical meaning, and it is sufficient to select the maximum dimensionality within the given computational resources.
Preservation of hierarchy
To investigate the extent to which the hierarchical structure was preserved, we used a subset of WordNet [27] closely following the setting in [16,18].
Dataset detail
We first considered the transitive closure of the is-a relationship for all the nouns. Subsequently, we took the subset of the nouns that have "mammal" as a hypernym and selected relations that have "mammal" as a hyponym or hypernym. Finally, we connected two nouns if they have an is-a relationship. We refer to this dataset as WN-mammal. Similarly, we generated WN-solid, WN-tree, WN-worker, WN-adult, WN-instrument, WN-leader, and WN-implement. Table 8 summarizes the statistics for these datasets.

Each graph is expected to have a hierarchical structure because a hypernym is often related to many hyponyms. We embedded the eight graphs with various dimensionalities and calculated each criterion. Subsequently, we quantified the extent to which the obtained embeddings reflected the true hierarchy of the is-a relation in the data. Following [16], we used a score function in which r_u and r_v are the radial coordinates of u and v, respectively, and α > 0 is a constant. In general, it can be assumed that a hypernym has a lower radial coordinate than its hyponyms. Thus, the term α(r_u − r_v) acts as a penalty when v, which is a hypernym of u, is lower in the embedding hierarchy. Therefore, the score is expected to be high when the embedding reflects the true hierarchy of the data. Figure 9 shows the average score over all is-a relation pairs for each dimensionality with α = 100. Overall, lower dimensionalities achieved higher is-a scores. Next, we provide a quantitative comparison in terms of the benefit. For an estimated dimensionality D and a maximum tolerance gap T_gap, the benefit takes a high value when the estimated dimensionality is close to D_best, where D_best is the dimensionality at which the best is-a score is achieved. Table 9 shows the average benefits over the WN datasets. Note that D_best was 2 for all datasets, and we set T_gap = 2 in the experiments. DNML-WND and BIC achieved the best results. This result implies that DNML-WND and BIC selected the dimensionality that reflects the true hierarchical structure of the particular graph.
Discussion
We summarize the experimental results. Firstly, as a general trend, the AIC usually selects a higher dimensionality, the BIC a lower dimensionality, and the DNML criteria a middle dimensionality. This is due to the relative magnitude of the penalty terms. MinGE always selected the largest dimensionality in all the experiments. This is possibly because MinGE was designed for Euclidean embeddings, which require higher dimensionality than hyperbolic embeddings for hierarchical structures, as discussed in Sect. 1.1.

The performance of the AIC was relatively good on the first and second tasks, but not on the third task. In contrast, the BIC showed high performance on the third task, but low performance on the first and second tasks. The DNML criterion does not necessarily give the best performance, but it often gives the best or the next-best performance for all tasks. Therefore, it can be concluded that the performance of the proposed DNML criteria was good on average for all tasks.
Conclusion and future work
In this study, we proposed a dimensionality selection method for hyperbolic embeddings based on the MDL principle.We demonstrated that there is a non-identifiability problem for hyperbolic embeddings.Therefore, we employed the latent variable model of hyperbolic embeddings using PUDs and WNDs to formulate dimensionality selection as the statistical model selection for latent variable models.Within this formulation, we proposed the DNML code-length criterion for dimensionality selection based on the MDL principle.For artificial datasets, we experimentally demonstrated that our method is effective when the true dimensionality is low.For real-world datasets, we used the scientific collaboration networks and WN datasets.For the scientific collaboration networks, we demonstrated that the proposed method selected simple probabilistic models while maintaining AUCs.For the WN datasets, we confirmed that the proposed method selects the dimensionality which preserves the true hierarchy of graphs.Overall, the proposed method performed well in average.Note that we cannot directly use the dimensionality selected by the proposed method in hyperbolic neural networks methods, such as [43,44].Because the probabilistic models used in the proposed method differ from those used in [43,44], the optimal dimensionalities within them are inherently different.This implies that using the dimensionality selected by the proposed method in [43,44] rationales the rationales of the DNML code length and MDL principle.However, we would like to emphasize that even in hyperbolic neural networks, it is possible to consider latent variable models and derive DNML code-lengths following a similar procedure to our study.Specifically, we can treat the parameters and outputs of hidden layers of hyperbolic neural networks as latent variables.In this sense, the proposed method can be generalized.However, as indicated in "Appendix B," the derivation of the DNML code-length requires significant effort and is not straightforward.Therefore, we consider the extension of our proposed method to hyperbolic neural networks as future work.
The latent variable model approach adopted in this study is promising and is not limited to hyperbolic space. In the future, we plan to build a methodology for dimensionality selection in Euclidean and spherical embeddings by introducing latent variable models for the corresponding spaces [45,46].
In the following, we calculate I_n(β, γ). We rewrite p(y_ij | z_i, z_j; β, γ) as p_ij^{y_ij} (1 − p_ij)^{1 − y_ij}, where p_ij = 1/(1 + exp(β d_{z_i z_j} − γ)). This form is a logistic function with the constraints β ∈ [β_min, β_max] and γ ∈ [γ_min, γ_max]. L(β, γ) is then rewritten in this form and, since all terms are independent of y_ij, the expression for I_n(β, γ) follows.
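A minimal sketch of this likelihood, assuming the Bernoulli/logistic form described above (helper names are hypothetical, and the box constraints on β and γ are not enforced):

```python
import math

def edge_prob(d_ij, beta, gamma):
    """Connection probability p_ij = 1 / (1 + exp(beta * d_ij - gamma)),
    where d_ij is the hyperbolic distance between z_i and z_j."""
    return 1.0 / (1.0 + math.exp(beta * d_ij - gamma))

def neg_log_likelihood(y, d, beta, gamma):
    """-log p(y | z; beta, gamma) for upper-triangular adjacency entries
    y[(i, j)] in {0, 1} and precomputed distances d[(i, j)]."""
    nll = 0.0
    for (i, j), y_ij in y.items():
        p = edge_prob(d[(i, j)], beta, gamma)
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
        nll -= y_ij * math.log(p) + (1 - y_ij) * math.log(1.0 - p)
    return nll
```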
Thereafter, we use the asymptotic approximation of the parametric complexity according to Rissanen [49], in which I(σ) denotes the Fisher information. In the following, we calculate I(σ).
Consider the negative logarithm of the likelihood. By interchanging the derivative and the integral, we obtain the required derivatives. The second and third terms are independent of r. The expectation of the first term with respect to r is calculated as follows.
Finally, we derive the resulting expression, which involves terms of the form x + (1 − j)/2, where Γ(·) denotes the gamma function. Using Rissanen's g-function [24], the parametric complexity of L_NML(z) can be rewritten as an upper bound; the first inequality is derived from Thm. 2.13 of [47].
The main differences between our upper bound and that in [50,51] are as follows:
• We fixed the mean to the origin of the Euclidean space, whereas the previous studies did not.
• We removed the restriction that the maximum variance be less than 1 and π, where this maximum is set so that it bounds the component variances for all j ∈ [D].

Below we explain the reason. In the discussion in [51], the difference between the code-lengths of any two data is scale-invariant in Gaussian mixture models (GMMs). In other words, the result of model selection is scale-invariant. Thus, for GMMs we can perform model selection on rescaled data, e.g., after multiplying by 1/α. However, in hyperbolic embeddings, the difference of the code-lengths is not scale-invariant due to the non-Euclidean nature of the space. Thus, in our model, it is necessary to remove these restrictions and set the two range parameters to appropriate values such that the raw data are included in Z^D. Here, y is sampled uniformly at random. Similarly, we approximated I_n(β, γ) as follows:
C.3 Training detail
For all experiments, the training details were the same.
Embeddings
First, the Cartesian coordinates of each node were independently initialized uniformly at random in [−0.001, 0.001]^{D+1} ⊂ R^{D+1}. They were then projected onto the hyperboloid using Eq. (1) (a minimal sketch of this initialization is given after the list below). When learning the embeddings, ten negative samples were sampled per positive sample for the mini-batches. The number of epochs was 800, and we set η_β^{(t)} = 0.001 and η_γ^{(t)} = 0.001 for all epochs. Similar to [16], the learning process of the embeddings is composed of two steps.
• In the first step, − log p(y | z; β, γ) was optimized for all models with learning rate η
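A minimal sketch of the initialization described above, assuming Eq. (1) is the usual lift onto the hyperboloid, x_0 = sqrt(1 + ||x_1, ..., x_D||^2) (the function name and the use of NumPy are illustrative, not the paper's implementation):

```python
import numpy as np

def init_embeddings(n, D, scale=0.001, rng=None):
    """Initialize n points of the hyperboloid model H^D.
    Cartesian coordinates are drawn uniformly from [-scale, scale]^(D+1) and
    then projected onto the hyperboloid; here we assume Eq. (1) is the usual
    lift x_0 = sqrt(1 + ||x_1..x_D||^2), so that <x, x>_Lorentz = -1."""
    rng = rng or np.random.default_rng()
    x = rng.uniform(-scale, scale, size=(n, D + 1))
    x[:, 0] = np.sqrt(1.0 + np.sum(x[:, 1:] ** 2, axis=1))
    return x
```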
Fig. 1 Example of DNML-PUD. The selected dimensionality and the true dimensionality are D = 8. The graph was generated with parameters n = 6400, β = 0.6, γ = β log n, and σ = 1.0. The scores L_DNML(y, z) and L_NML(y | z) follow the left scale, whereas L_NML(z) follows the right scale.
Fig. 2 Example of DNML-WND. The selected dimensionality and the true dimensionality are D = 8. The graph was generated with parameters n = 6400, β = 1.2, γ = β log n, and Σ = (0.35 log n)^2 I, where I ∈ R^{D×D} is the identity matrix. The scores L_DNML(y, z) and L_NML(y | z) follow the left scale, whereas L_NML(z) follows the right scale.
Here, η_z^{(t)} denotes the learning rate, π_z(·) denotes the projection from the Euclidean gradient to the Riemannian gradient, proj_{H^D_R}(·) denotes the projection from H^D to H^D_R := {x | x ∈ H^D, d_{μ_0 x} ≤ R}, and Exp_z(·) is given by Eq. (3). The functions π_z(·) and proj_{H^D_R}(·) are defined as follows:
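As a rough guide, the sketch below implements standard Lorentz-model versions of π_z(·) and Exp_z(·); these standard forms are an assumption here and may differ in detail from the paper's definitions, and the projection proj_{H^D_R}(·) is omitted.

```python
import numpy as np

def lorentz_inner(u, v):
    """Lorentzian inner product <u, v>_L = -u_0 v_0 + sum_i u_i v_i."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def riemannian_grad(z, euclidean_grad):
    """pi_z: map a Euclidean gradient to the tangent space T_z H^D
    (standard Lorentz-model form, assumed here): flip the sign of the
    time component, then project onto the tangent space at z."""
    g = euclidean_grad.copy()
    g[0] = -g[0]
    return g + lorentz_inner(z, g) * z

def exp_map(z, v, eps=1e-12):
    """Exponential map on the hyperboloid (assumed form of Eq. (3)):
    Exp_z(v) = cosh(|v|_L) z + sinh(|v|_L) v / |v|_L."""
    norm_v = np.sqrt(max(lorentz_inner(v, v), 0.0))
    if norm_v < eps:
        return z
    return np.cosh(norm_v) * z + np.sinh(norm_v) * (v / norm_v)
```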
Fig. 6 Results of each criterion. First row: AstroPh and CondMat. Second row: GrQc and HepPh. Third row: Cora and PubMed. Fourth row: Airport and PPI.
Fig. 9 Is-a scores on the WN datasets. First row: WN-mammal and WN-solid. Second row: WN-tree and WN-worker. Third row: WN-adult and WN-instrument. Fourth row: WN-leader and WN-implement.
‖·‖ denotes the Euclidean norm. Given σ, the update rules of β and γ are provided as follows.
Table 1
Average MAPs on PUD-8.The bold indicates either the maximum MAP or MAP within a 10% decrease from the maximum one (average estimated dimensionality in the parentheses)
Table 2
Average MAPs on PUD-16.The bold indicates either the maximum MAP or MAP within a 10% decrease from the maximum one (average estimated dimensionality in the parentheses)
Table 3
Average MAPs on WND-8.The bold indicates either the maximum MAP or MAP within a 10% decrease from the maximum one (average estimated dimensionality in the parentheses)
Table 6
Selected dimensionalities of each method
Table 7
Average conciseness of each method with max = 0.050 and 0.100 (the bold text indicates either the maximum conciseness or conciseness within a 10% decrease from the maximum one)
Table 8
Statistics of the WN datasets
Table 9
Average benefits on WN datasets (the bold text indicates the maximum one)
Table 10
Computing environment in our experiments
Quantitatively Predicting Modal Thermal Conductivity of Nanocrystalline Si by full band Monte Carlo simulations
Thermal transport in nanocrystalline Si is of great importance for thermoelectric applications. A better understanding of the modal thermal conductivity of nanocrystalline Si is expected to benefit the efficiency of thermoelectrics. In this work, variance-reduced Monte Carlo simulation with the full band of phonon dispersion is applied to study the modal thermal conductivity of nanocrystalline Si. Importantly, the phonon modal transmissions across the grain boundaries, which are modeled by amorphous Si interfaces, are calculated by the mode-resolved atomistic Green's function method. The predicted ratios of the thermal conductivity of nanocrystalline Si to that of bulk Si agree well with experimental measurements over a wide range of grain sizes. The ratio of the thermal conductivity of nanocrystalline Si to that of bulk Si decreases from 54% to 3%, and the contribution of phonons with mean free path larger than the grain size increases from 30% to 96%, as the grain size decreases from 550 nm to 10 nm. This work demonstrates that full band Monte Carlo simulation using phonon modal transmissions from the mode-resolved atomistic Green's function method can capture the phonon transport picture in complex nanostructures, and can therefore provide guidance for designing high-performance Si-based thermoelectrics.
Introduction
Thermal transport properties of nanostructures are of great importance for advanced applications including thermoelectrics [1][2][3], which can convert waste heat into electricity and serve as solid-state refrigeration, as well as microelectronics [4] and thermal barrier coatings [5,6]. Recent studies have shown that nanocrystalline materials can greatly reduce the lattice thermal conductivity (κ), which can substantially benefit the efficiency of thermoelectrics. [7][8][9] Silicon-based materials are relatively inexpensive and mostly nontoxic in contrast to many other thermoelectric materials, which makes them promising thermoelectrics. [10][11][12][13][14] Nanocrystalline Si (nc-Si) has been widely studied; it was found that nc-Si is a competitive thermoelectric material with a best ZT of 0.7 at 1275 K [15], but this is still lower than those of the champion thermoelectric materials, primarily because of its relatively high thermal conductivity. Therefore, a further understanding of thermal transport in nanocrystalline structures is necessary.
In nanocrystalline materials, the remarkable reduction of thermal conductivity is caused by the impedance of phonons at the grain boundaries when the grain size approaches or is smaller than the phonon mean free path (MFP). [16][17][18][19][20][21] Therefore, grain size and phonon transmission at boundaries are two important factors affecting the thermal conductivity. Wang et al. investigated nc-Si with grain sizes varying from 550 nm to 76 nm [16]; they found that the thermal conductivities of nc-Si show a T^2 dependence at low temperature, and a similar trend was also found in Si inverse opals [22], which cannot be explained by the traditional phonon gray model. They reported that a frequency-dependent (non-gray) model should be used for boundary scattering [16]. Later, Jugdersuren et al. studied nanocrystalline Si with grain size decreased to ∼10 nm [23]. The thermal conductivity of nanocrystalline Si with smaller grain sizes was also studied by molecular dynamics simulations [17,24], and it was found that the thermal conductivity decreases rapidly with decreasing grain size (< 8 nm), which is caused by the restriction of the phonon MFP by the nano-grain boundaries [17]. Although there are many works on the thermal transport of nanocrystalline materials, studies based on modal-level analysis of thermal conductivity are few; such analysis is critical for further understanding the underlying mechanisms of thermal transport in nanocrystalline materials.
To quantitatively investigate the modal contribution of phonon modes in nanocrystalline materials, modeling the phonon mode transport process in nanocrystals is required. Phonon Monte Carlo (MC) simulations have been used to study thermal transport in many complex nanostructures, including nanocrystalline materials. [25][26][27][28][29] Later, the traditional phonon MC method was sped up by a factor on the order of 10^9 by the variance-reduced Monte Carlo (VRMC) algorithm [30,31], which has been widely used in studies of thermal transport in complex structures under the assumption of isotropic phonon dispersion. [32,33] To obtain the modal thermal conductivity, the full band of phonon dispersion should be used in the calculations. Monte Carlo simulation with the full band of phonon dispersion was applied to study thermal transport in bulk Si and Si structures [26,34,35], for which the simulation results were found to be close to experimental measurements [34]. Importantly, it is reported that the correct implementation of the phonon dispersion relation in MC simulations is essential to accurately capture quasi-ballistic phonon transport [35]. Full band MC simulation has also been used to investigate the thermal transport of Si/Ge heterostructures, where the phonon transmissions across the interface are calculated using the diffuse mismatch model. [36] Later, VRMC with the full band of phonon dispersion using optimized phonon transmissions across the interfaces was used to study the thermal transport of nc-Si [37]. The calculated thermal conductivity of nc-Si was found to be quite close to the experimental measurements over a wide range of temperatures. Beyond these recent advances, using modal-level interfacial phonon transmission in VRMC studies with the full band of phonon dispersion is important to provide a physical and reliable picture of phonon transport in nanocrystals.
Many approaches have been used to calculate the phonon modal transmissions across interfaces. For instance, the frequency-dependent phonon transmission calculated by the spectral diffuse mismatch model (SDMM) [38] was integrated in MC simulations to calculate the interfacial thermal conductance (ITC). Modal analysis of ITC has been developed based on molecular dynamics, which inherently includes anharmonicity, but it is hard to apply to large systems due to its high computational requirements. [39,40] On the other hand, the atomistic Green's function (AGF) method is more efficient and easier to implement, and has therefore been used extensively to compute the frequency-dependent phonon transmission. [41][42][43][44][45] Several extended techniques based on the AGF method have been developed to compute the phonon modal transmission. [46][47][48][49] A similar numerical method was developed using perfectly matched layer boundaries to compute modal transmission. [50] The scattering boundary method can also be used to calculate the phonon modal transmission [51], and is theoretically equivalent to the AGF method.
Recently, Ong and Zhang have extended the conventional AGF formalism to mode-resolved AGF, which can calculate phonon modal transmission. [48] The phonon modal transmission across the Si/Ge interface [52] and the amorphous Si (a-Si) interfaces [53] was investigated by the mode-resolved AGF method. For the amorphous Si interfaces, it was found that the interface acts as a low-pass filter, reflecting modes with frequency greater than around 3 THz while transmitting those below this frequency, which agrees with the experimental measurement [54] . Although phonon modal transmission by the mode-resolved AGF method has been used to calculate interfacial thermal conductance, few works have integrated it in the calculations of thermal transport in nanocrystals.
In this work, the modal thermal conductivity of nc-Si is investigated by VRMC with the full band of phonon dispersion, using phonon modal transmissions calculated by the mode-resolved AGF method. The grain boundaries are modeled by amorphous Si interfaces. The thermal conductivities of nc-Si with grain sizes from 10 to 700 nm are calculated and compared with experimental results. The thickness of the grain boundary, which affects the phonon modal transmission, is varied to study its effect on the thermal conductivity of nc-Si. Furthermore, the frequency and mean-free-path dependence of the thermal conductivity are analyzed. All thermal conductivities of nc-Si are calculated at room temperature.
Model and method
A cubic simulation box oriented along the x, y and z directions, as shown in Fig. 1, is used to simulate the nc-Si. A temperature gradient is applied in the x direction to set up a heat flux. The periodic heat flux boundary condition [25] along the x direction and specular boundary conditions in the y and z directions are applied, so that the computational domain represents a unit cell which is repeated in all directions. Three grain boundaries, represented by the red, blue and green planes in Fig. 1, are perpendicular to the x, y, and z directions, respectively, and these grain boundaries bisect the simulation box. Due to the symmetry boundary conditions, the grain size is the same as the size of the cubic simulation domain. The red dashed circle shows a zoom-in of the grain boundary, which is modeled by an a-Si interface. The thickness of all the a-Si interfaces (red, blue and green planes in Fig. 1) is the same in the nc-Si. The thickness of the a-Si interface can be modulated in different nc-Si models to manipulate the phonon modal transmission. The specularity in all the simulations is set to unity, since the a-Si interface is not severely rough and the phonon transmission has a stronger effect on the thermal conductivity than the specularity [37]. Further, VRMC with the full band of phonon dispersion is applied to simulate phonon transport in nc-Si (Fig. 1). The phonon transport is described by the Boltzmann transport equation under the relaxation time approximation. The deviational-energy-based Boltzmann transport equation [30] is written for each phonon mode, specified by its wave vector and polarization. Eq. (1) can be solved using the linearized version of Peraud and Hadjiconstantinou's algorithm [31]; the details of using VRMC with the full band of phonon dispersion are given in Ref. [37].
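This is not the authors' implementation; the following is a highly simplified, hypothetical sketch of how a single deviational particle might be advected along x for one step and how a grain-boundary crossing can be resolved using a modal transmission probability (the callables velocity, relax_time and transmit are assumed inputs).

```python
import math
import random

def advect_particle(x, mode, dt, velocity, relax_time, boundary_x, transmit):
    """Advance one deviational particle along x for a time step dt.
    velocity(mode) and relax_time(mode) give the mode's group velocity and
    relaxation time; boundary_x is the position of a grain boundary; and
    transmit(mode) is the modal transmission probability across it
    (e.g., from a mode-resolved AGF calculation)."""
    # Sample the time to the next intrinsic (three-phonon) scattering event.
    t_scatter = -relax_time(mode) * math.log(random.random())
    t = min(dt, t_scatter)
    x_new = x + velocity(mode) * t
    # If the trajectory crosses the grain boundary, transmit or reflect.
    if (x - boundary_x) * (x_new - boundary_x) < 0:
        if random.random() > transmit(mode):
            x_new = 2 * boundary_x - x_new  # specular reflection at the boundary
    return x_new, (t >= t_scatter)  # new position; whether the mode must be resampled
```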
The full band of phonon dispersion and the phonon relaxation times for three-phonon scattering, which can be calculated with the ShengBTE package [55], are used as input for the Monte Carlo simulations. The conventional cell (CC) of bulk Si is used as the unit cell (1 × 1 × 1 CC) for the calculation of the phonon dispersion, which results in 24 polarizations. The interaction between Si atoms is described by the Tersoff potential [56]. The thermal conductivity of bulk Si is calculated as 239.5 W/m-K, which is consistent with the results of molecular dynamics simulations using the same potential [57,58]. The thermal conductivity calculated using the Tersoff potential is overestimated compared to the experimental measurements, but it remains meaningful to investigate the trend and mechanism of the reduction of thermal conductivity in nanostructures. In the Monte Carlo simulation, 10,648 k points with 24 polarizations at each k are sampled over the entire Brillouin zone. The number of deviational particles is set as Nph = 8×10^6 for each simulation. The reference temperature is set as 300 K, and the temperature gradient and heat flux are along the x direction. The grain size is controlled by the size of the simulation domain.
For the calculation of the phonon modal transmission across the grain boundaries modeled by the a-Si interface (Fig. 1), the mode-resolved AGF method [48] is applied. The key point in the calculation of the phonon modal transmission is to construct the mode-resolved transmission matrix; in the following, the mode-resolved AGF method is referred to as AGF for simplicity. The phonon dispersion is calculated by lattice dynamics (LD) using the General Utility Lattice Program (GULP). [59] The unit cell of the Si contacts for the calculation of the phonon modal transmission is set as 1 × 4 × 4 CC, following the settings in Ref. [52], and the Tersoff potential [56] is used to describe the interactions among atoms. The details of generating the a-Si interface, the settings of the devices, and the calculation of the phonon modal transmission can be found in Ref. [53].
Results
First, an a-Si interface with a thickness of 5 CC (2.716 nm) is used to study the interfacial thermal transport. The phonon modal transmissions across the a-Si interface are calculated by the AGF method. The unit cell of the Si contact is set as 1 × 4 × 4 CC, the same as in Ref. [53]. The phonon modal transmissions are plotted as colored dots in Fig. 2(a), for normalized ky = 0.016 and kz = 0.016. The color is scaled by the value of the phonon modal transmission according to the color bar. The phonon dispersion is also calculated by GULP [59] and shown by the solid lines in Fig. 2(a). The phonon modal transmissions for the MC simulations are then obtained by linearly interpolating the corresponding modal transmissions calculated by the AGF method. The interpolated phonon modal transmissions are shown in Fig. 2(b) as dots colored according to the color bar, and the corresponding phonon modal transmissions calculated by the AGF method are those in Fig. 2(a). Fig. 2(c) shows the interpolated phonon modal transmissions for all the phonon modes (red dots) and the corresponding phonon modal transmissions from the AGF calculations (black dots). As shown in Fig. 2(c), the phonon modal transmissions are close to unity when the frequency is smaller than 3 THz, and decrease quickly as the frequency increases. To investigate how the interface thickness affects the transmission, the phonon modal transmissions for a-Si interfaces with thicknesses of 4 CC and 6 CC are also studied.
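A minimal sketch of this interpolation step (hypothetical names; simplified to a frequency-only interpolation, whereas the actual procedure interpolates the corresponding modal transmissions mode by mode):

```python
import numpy as np

def interpolate_transmission(freq_agf, trans_agf, freq_mc):
    """Linearly interpolate mode-resolved transmissions computed by the AGF
    method (freq_agf in THz, trans_agf in [0, 1]) onto the phonon frequencies
    sampled in the Monte Carlo simulation (freq_mc)."""
    freq_agf = np.asarray(freq_agf)
    trans_agf = np.asarray(trans_agf)
    order = np.argsort(freq_agf)                      # np.interp needs increasing abscissae
    t = np.interp(freq_mc, freq_agf[order], trans_agf[order])
    return np.clip(t, 0.0, 1.0)                       # keep transmissions physical
```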
Based on the interpolated phonon modal transmissions shown in Fig. 2(d), VRMC with the full band of phonon dispersion is then applied to investigate the thermal transport in nc-Si (Fig. 1). Fig. 3(a) shows the ratio of the thermal conductivity of nc-Si to that of bulk Si versus the thickness of the a-Si interface. For comparison, the experimental results [16] for nc-Si with grain sizes of 550 nm (red dashed line) and 76 nm (blue dashed line) are also shown in Fig. 3(a). The modal κ, shown in Fig. 3(b), is found to decrease over the whole frequency range as the grain size decreases. To further understand these results, the distributions of phonon MFPs of bulk Si and of nc-Si with grain sizes of 550 nm and 76 nm are compared in Fig. 3(c). As shown in Fig. 3(c), the MFPs of low-frequency phonons are much larger than the corresponding grain size because of the large phonon modal transmission (Fig. 2(d)). Furthermore, the ratios of the MFPs of nc-Si to those of bulk Si are plotted in Fig. 3(d), which shows that the phonon MFP can be effectively reduced over the whole frequency range as the grain size decreases. In addition, the spectral thermal conductivities of bulk Si and of nc-Si with interface thicknesses of 4 CC, 5 CC and 6 CC, based on the modal κ, are shown in Fig. 4(a). For comparison, the corresponding normalized cumulative thermal conductivities are plotted in Fig. 4(b). Here, the cumulative thermal conductivities are normalized by the κ of bulk Si. As shown in Fig. 4(a), the contribution of phonons with frequencies ranging from 3 to 8 THz in nc-Si is largely reduced because of the reduction of the phonon modal transmission (Fig. 2(d)). Moreover, as the thickness of the a-Si interface increases from 4 CC (2.172 nm) to 6 CC (3.259 nm), the thermal conductivity of low-frequency phonons (< 3 THz) is almost unchanged and the overall thermal conductivity is slightly decreased in Fig. 4(b). To further investigate the grain-size effect, the spectral thermal conductivities of nc-Si with grain sizes of 550 nm and 76 nm, and the corresponding normalized cumulative thermal conductivities, are shown in Fig. 4(c) and (d), respectively. The thickness of the a-Si interface is 5 CC and 4 CC for the nc-Si with grain sizes of 550 nm and 76 nm, respectively, which produces ratios of κ close to the experimental measurements (Fig. 3(a)). The results show that as the grain size decreases, the overall thermal conductivity is largely reduced (Fig. 4(c)) and the relative contribution of low-frequency phonons is increased (Fig. 4(d)), which indicates that the grain size has a strong effect on the thermal conductivity over the whole frequency range.
To further investigate the grain-size effect, nc-Si with grain sizes varying from 10 nm to 700 nm is studied. The ratios of the thermal conductivities of nc-Si to that of bulk Si versus grain size are shown in Fig. 5(a). For comparison, the experimental measurements for nc-Si with grain sizes of 550 nm [16], 144 nm [16], 76 nm [16], 30 nm [60] and 9.7 nm [23] are also plotted in Fig. 5(a). The results show that the predicted thermal conductivities are quite close to these experimental measurements when the thickness of the a-Si interface is around 5 CC (2.716 nm), which implies that the effect of the grain boundary on phonon transport can be reasonably modeled by an a-Si interface with 5 CC thickness. The ratio of the thermal conductivity of nc-Si to that of bulk Si significantly decreases from 54% to 3% as the grain size decreases from 550 nm to 10 nm. The normalized cumulative thermal conductivities versus phonon frequency and versus phonon mean free path for bulk Si and nc-Si with a grain-boundary thickness of 5 CC are shown in Fig. 5(b) and (c), respectively. Low-frequency phonons (< 3 THz) transport substantial amounts of heat in nc-Si, contributing 37%, 46%, 50%, 54% and 57% of the total thermal conductivity of nc-Si with grain sizes of 550 nm, 144 nm, 76 nm, 30 nm and 10 nm, respectively (Fig. 5(b)). Although the grain size can be controlled down to a small value, the phonon MFPs can be much larger than the grain size (Fig. 3(c)). As shown in Fig. 5(c), the phonons with MFP larger than the corresponding grain size contribute from 30% to 96% of the thermal conductivity of nc-Si as the grain size decreases from 550 nm to 10 nm. These analyses indicate that VRMC with the full band of phonon dispersion using phonon modal transmissions from mode-resolved AGF can effectively predict the thermal conductivity of nc-Si close to the experimental results over a wide range of grain sizes, and that phonon modes with MFP larger than the grain size transport substantial amounts of heat in nc-Si.

Fig. 5 (a) Ratio of the thermal conductivity of nc-Si to that of bulk Si versus grain size; experimental measurements for grain sizes of 550 nm, 144 nm and 76 nm [16] (triangles), 30 nm [60] (square) and 9.7 nm [23] (star) are plotted for comparison. (b) Normalized cumulative thermal conductivity versus frequency and (c) versus phonon mean free path for bulk Si and nc-Si with grain sizes of 550 nm, 144 nm, 76 nm, 30 nm and 10 nm. The thickness of the a-Si interface is 5 CC in (b) and (c).
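As a simple illustration of how the cumulative quantities discussed above (Fig. 5(b, c)) can be assembled from modal data, the following sketch (hypothetical array names, not the authors' code) accumulates modal contributions sorted by MFP and reports the fraction carried by modes whose MFP exceeds the grain size.

```python
import numpy as np

def cumulative_kappa_vs_mfp(mfp, kappa_modal, kappa_bulk, grain_size):
    """Return sorted MFPs, the cumulative kappa normalized by bulk kappa, and
    the fraction of nc-Si kappa carried by modes with MFP > grain_size.
    mfp[i] and kappa_modal[i] are the mean free path and thermal-conductivity
    contribution of mode i in the nanocrystalline sample."""
    mfp = np.asarray(mfp)
    kappa_modal = np.asarray(kappa_modal)
    order = np.argsort(mfp)
    cum = np.cumsum(kappa_modal[order]) / kappa_bulk
    frac_large = kappa_modal[mfp > grain_size].sum() / kappa_modal.sum()
    return mfp[order], cum, frac_large
```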
Conclusion
In this work, variance-reduced Monte Carlo simulation with the full band of phonon dispersion, using phonon modal transmissions from mode-resolved AGF, is applied to study the thermal transport in nc-Si. The grain boundaries are modeled by a-Si interfaces. It is found that the phonon modal transmission across the a-Si interface is close to unity at low frequency (< 3 THz) and decreases quickly as the frequency increases. The predicted ratio of the thermal conductivity of nc-Si to that of bulk Si agrees well with the experimental measurements over a wide range of grain sizes, and it decreases significantly from 54% to 3% as the grain size decreases from 550 nm to 10 nm. The relative contribution of low-frequency phonons (< 3 THz) increases from 37% to 57% as the grain size decreases from 550 nm to 10 nm. The analyses show that reducing the grain size can effectively reduce the phonon MFP over the whole frequency range, while the MFPs of low-frequency phonons remain much larger than the corresponding grain size because of the large phonon modal transmission. Moreover, the phonons with MFP larger than the corresponding grain size contribute from 30% to 96% of the thermal conductivity of nc-Si as the grain size decreases from 550 nm to 10 nm. As the thickness of the a-Si interface increases from 4 CC (2.172 nm) to 6 CC (3.259 nm), the phonon transmission is slightly decreased at low frequency (< 3 THz) and gradually decreased at higher frequency (> 3 THz), which leads to a small reduction of the thermal conductivity of nc-Si. This work shows that VRMC with the full band of phonon dispersion using phonon modal transmissions from the mode-resolved AGF method can be an effective way to study modal thermal transport in nc-Si, and can provide deep insight into the thermal transport properties of complex nanostructures.
Assessing a Dysplastic Cerebellar Gangliocytoma (Lhermitte-Duclos Disease) with 7T MR Imaging
Lhermitte-Duclos disease (LDD; dysplastic cerebellar gangliocytoma) is a rare hamartomatous lesion of the cerebellar cortex, first described in 1920. LDD is considered to be part of the autosomal-dominant phacomatosis and cancer syndrome Cowden disease (CS). We examined the brain of a 46-year-old man, who displayed the manifestations of CS, with 7 Tesla (T) and 1.5T MRI and 1.5T MR spectroscopy (1H-MRS). We discuss the possible benefits of employing ultrahigh-field MRI for making the diagnosis of this rare lesion.
Lhermitte-Duclos disease (LDD; dysplastic gangliocytoma of the cerebellum) is a rare cerebellar hamartoma, first described by Lhermitte and Duclos in 1920 (1). Since 1991, LDD has been considered to be part of Cowden disease (CS), an autosomal-dominant phacomatosis and cancer syndrome characterized by multiple hamartomas and a high risk of breast, thyroid and endometrial carcinoma (2). This 'multiple hamartoma-neoplasia syndrome' is associated with mutations of the PTEN gene. Over 225 cases of LDD have been reported in the medical literature to date (3).
7T MRI whole-body scanners are currently being evaluated for possible clinical applications. Their higher magnetic field strength allows the spatial resolution to be improved and the scan time to be reduced without sacrificing the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR), as compared to lower-field-strength MRI. In this report we assess the usefulness of T2*, T2, and susceptibility-weighted imaging (SWI) at 7T, as compared with routine MR imaging at 1.5T, in a male patient with genetically proven LDD.
CASE REPORT
A 46-year-old Caucasian man presented with a 10-year history of mild gait ataxia and undirected vertigo after fast head movements. The patient had suffered from disturbed urination for the previous 25 years. He had a past medical history of resection of a thyroid adenoma and also for benign polyposis of the sigmoid colon. At the age of 41 years, a seborrheic keratosis was excised from his right ear. The patient had no familial history of LDD or hereditary disease. Physical inspection revealed megalocephaly, congenital facial asymmetry and left thenar aplasia. At the latest presentation, the neurological examination showed minimal intention tremor, gait ataxia without visual compensation, an undirected imbalance on Romberg's test and bradydiadochokinesia.
With approval of the local ethic committee and with the patient's informed written consent, MRI examinations of the brain were performed on a 1.5T scanner (Avanto, Siemens Medical Solutions, Erlangen, Germany) in combination with using a vendor supplied 12-channel receive-only head coil and then MRI examinations of the brain were done on a 7T scanner (Magnetom 7T, Siemens Medical Solutions, Erlangen, Germany) in combination with an 8-channel transmit-and-receive head coil (Rapid Biomed, Wuerzburg, Germany). The gradient-echo and turbo spin echo sequences were performed to obtain the axial proton density (PD), T2, T2* and susceptibility weighted images (SWI), which were optimized for each field strength (Table 1).
In addition, proton ( 1 H) MR spectroscopy (MRS) was performed at 1.5T. The spectroscopic data was acquired from the patient's cerebellar lesion using a single-voxel, point-resolved technique (TE = 135 ms; TR = 1500 ms). The resulting prominent resonances representing choline (Cho), creatine (Cr) and N-acetylaspartate (NAA) within the lesion were compared to the mirror image voxels on the white matter of the normal contralateral hemisphere. Spectral post-processing was performed using the software provided by the MRI system manufacturer (Siemens Syngo, VB 15, Siemens Medical Solutions, Erlangen, Germany).
Imaging Findings
For 11 years, repeated 1.5T MRI examinations revealed a slowly growing, non-enhancing tumor mass in the left cerebellar hemisphere with preservation of the gyral pattern. Thus, the present study was done without administration of contrast media. On MRI at 1.5T and 7T, the posterior fossa tumor (49 × 34 × 32 mm in size) appeared mainly hyperintense on the T2-weighted images (Fig. 1A) and iso- to hypointense on the proton density images (not shown). The characteristic striated pattern of the lesion was best displayed on the T2-weighted images at both field strengths (Fig. 1A, B). The tumor caused descensus of the cerebellar tonsils, but there was no obstructive supratentorial hydrocephalus. Due to their high sensitivity for paramagnetic substances like deoxyhemoglobin, the SWI and T2*-weighted images revealed in great detail thin veins running deep between the thickened folia of the cerebellar lesion (Fig. 1C-F). The 7T SWI minimal intensity projection (MIP) images depicted thin vessel branches as small as 250 μm, whereas the 1.5T SWI MIP images could only resolve larger vessels down to a size of 450 μm. Compared to the 1.5T SWI images (Fig. 1C), the medial displacement and compression of the dentate nucleus by the tumor were much better appreciated on the 7T SWI images (Fig. 1D). The 1H-MRS at 1.5T demonstrated a reduction in NAA and a prominent lactate peak. Contrary to other previous reports, the Cho, Cr and the resulting Cho/Cr ratio were slightly elevated in the lesion, and the myoinositol (MI) levels were not changed (3,4). Thus far, neurosurgical therapy and histopathological examination have not been performed because the lesion exerted only mild compression of the fourth ventricle without hydrocephalus.
A DNA analysis revealed a heterozygous mutation in exon 5 of the PTEN gene (chromosome 10 q23), and this supported the diagnosis of LDD (c. 388C > T; p. Arg130X). The patient received genetic counseling and is under neurological review.
DISCUSSION
Lhermitte-Duclos disease, or dysplastic cerebellar gangliocytoma, is a slowly enlarging mass within the cerebellar cortex, and patients with this malady present with headaches, occlusive hydrocephalus, cranial nerve palsies, gait ataxia and other symptoms of cerebellar dysfunction (1). Apart from some pediatric cases, the majority of patients are diagnosed in the third or fourth decade of life, without a gender predilection. Histopathologically, LDD is characterized by regional enlargement of the cerebellar stratum granulosum, an absence of the Purkinje cell layer and progressive hypertrophy of the granular cell neurons with increased myelination of their axons in the
expanded molecular layer (5). MRI has proven to be the best imaging modality for revealing the characteristic appearance of LDD, and it often enables physicians to make the diagnosis of LDD even without histopathological confirmation (3,4,(6)(7)(8)(9)(10). The striated pattern of LDD is a result of the close apposition of the thickened cerebellar folia, which lack their secondary arborization. A non-enhancing, unilateral cerebellar mass in a middle-aged patient, which is characterized by a 'tiger-striped' pattern of hyperintensity on the T2-weighted MR images and which respects the cerebellar margins, is typical of LDD (8). To the best of our knowledge, this is the first reported case of LDD that has been examined by 7T MRI. Ultrahigh-field MRI systems (≥ 7T), with their increased signal-to-noise ratio (SNR) and higher sensitivity to susceptibility contrast, are currently being tested for clinical applications and allow imaging of anatomical structures with thinner sections, larger matrices and reduced acquisition times. However, the clinical challenges associated with 7T MRI include higher specific absorption rates (SAR), non-uniformity of the transmitted radiofrequency field, nonhomogeneity of the static magnetic field, increased susceptibility artifacts and potential physiological side effects. Moving from 1.5T to 7T MRI, the T2* is shortened and the susceptibility contrast of paramagnetic substances (e.g. deoxyhemoglobin, neuromelanin, iron) is significantly amplified, which enables a superior depiction of the venous vasculature and the cerebellar nuclei. 1.5T SWI was found to be helpful for detecting deep-running veins around the thickened folia of LDD (3). Because the sensitivity of MRI scanners to paramagnetic substances increases in proportion to the applied magnetic field, 7T SWI can demonstrate, in greater detail than the corresponding 1.5T images, veins that are even smaller than the voxel size owing to the associated paramagnetic effect (Fig. 1C-F). The deoxyhemoglobin in these veins helps to resolve the outermost layer of the LDD, which consists of the outer molecular layer, the leptomeninges and the associated abnormal vessels. Thomas et al. (3) and Kulkantrakorn et al. (7) reported good correlation of this MRI pattern with the histological specimen. Besides the characteristic striated pattern on the T2-weighted images, these abnormal veins surrounding the thickened folia on the SWI images seem to be another unique feature of LDD. Beyond the preoperative identification of LDD, the higher resolution of 7T SWI may help to depict large draining veins and the cerebellar nuclei. Their anatomical relation is of clinical importance for planning surgical procedures, because lesions affecting the cerebellar nuclei are associated with more severe symptoms than are cortical lesions. 1H-MRS demonstrated a prominent lactate peak, suggesting increased glycolysis and a high metabolism of this LDD lesion (4,9). Decreased levels of Cho, Cr, NAA and MI and a decreased Cho/Cr ratio are normally found on the side affected by the LDD (9). The slightly increased Cho levels in our patient suggest increased demyelination and membrane turnover, whereas decreased Cho levels in a lesion would suggest a non-neoplastic etiology (4,9).
Making the preoperative diagnosis of LDD via MRI obviates the need for biopsy and this allows surgeons to plan an appropriate therapy, which consists of decompression of the posterior fossa by total surgical resection. Tumor resection has not yet been performed in our patient due to the mild clinical symptoms.
In conclusion, a non-enhancing, 'tiger-striped' cerebellar lesion on MRI with unilateral hemispheric expansion and preservation of the gyral pattern should be considered specific for LDD, and these findings often allow the preoperative diagnosis to be made. 7T MRI reveals the known morphology and microstructure of this tumor entity more precisely than 1.5T MRI. In particular, 7T SWI enables the identification of the outermost layer of LDD owing to its inherent venous vasculature, and it better displays the iron-containing dentate nuclei. Hence, 7T MRI is expected to become a valuable tool for studying cerebral microvascularity and tumor angiogenesis, not only in LDD but also in other resectable brain tumors.
Simplicity via Provability for Universal Prefix-free Turing Machines
Universality is one of the most important ideas in computability theory. There are various criteria of simplicity for universal Turing machines. Probably the most popular one is to count the number of states/symbols. This criterion is more complex than it may appear at a first glance. In this note we review recent results in Algorithmic Information Theory and propose three new criteria of simplicity for universal prefix-free Turing machines. These criteria refer to the possibility of proving various natural properties of such a machine (its universality, for example) in a formal theory, PA or ZFC. In all cases some, but not all, machines are simple.
The smallest universal Turing machine
Roughly speaking, a universal Turing machine is a Turing machine capable of simulating any other Turing machine. In Turing's words: "It can be shown that a single special machine of that type can be made to do the work of all. It could in fact be made to work as a model of any other machine. The special machine may be called the universal machine."
The first universal Turing machine was constructed by Turing [26,27]. Shannon [23] studied the problem of finding the smallest possible universal Turing machine and showed that two symbols were sufficient, if enough states can be used. He also proved that "it is possible to exchange symbols for states and vice versa (within certain limits) without much change in the product." Notable universal Turing machines include the machines constructed by Minsky (7-state 4-symbol) [15], Rogozhin (4-state 6-symbol) [22], and Neary-Woods (5-state 5-symbol) [17]. Herken's book [11] celebrates the first 50 years of universality.
Universal prefix-free Turing machines
A prefix-free Turing machine (shortly, a machine) is a Turing machine whose domain is a prefix-free set. In what follows we will be concerned only with machines working on the binary alphabet {0, 1}. A universal machine U is a machine such that for every other machine C there exists a constant c (which depends upon U and C) such that for every program x there exists a program x′ with |x′| ≤ |x| + c such that U(x′) = C(x). Universal machines can be effectively constructed. For example, given a computable enumeration of all machines (C_i)_i, the machine U defined by U(0^i 1x) = C_i(x) is universal. The domains of universal machines have interesting computational and coding properties, cf. [7,6].
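As a toy illustration of the construction U(0^i 1x) = C_i(x), the following sketch treats machines as Python callables on bit strings; prefix-freeness and non-halting behaviour are ignored, so this is only a schematic picture of the dispatching idea, not a faithful model of prefix-free machines.

```python
def make_universal(machines):
    """Given an enumeration `machines` (a function i -> machine, where each
    machine maps a bit string to a bit string or diverges), return the
    dispatcher U with U('0'*i + '1' + x) = machines(i)(x)."""
    def U(program):
        i = 0
        while i < len(program) and program[i] == '0':
            i += 1
        if i == len(program):          # no '1' separator: U is undefined here
            raise ValueError("not in the domain of U")
        return machines(i)(program[i + 1:])
    return U

# Example: C_i ignores its input and outputs i in binary.
U = make_universal(lambda i: (lambda x: format(i, 'b')))
assert U('0001' + '10') == '11'        # 0^3 1 x  ->  C_3(x) = '11'
```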
Peano arithmetic and Zermelo-Fraenkel set theory
By L_A we denote the first-order language of arithmetic whose non-logical symbols consist of the constant symbols 0 and 1, the binary relation symbol <, and two binary function symbols + (addition) and · (multiplication). Peano arithmetic, PA, is the first-order theory [12] given by a set of 15 axioms defining discretely ordered rings, together with an induction axiom for each formula ϕ(x, y_1, . . . , y_n) in L_A:

∀y_1 · · · ∀y_n [(ϕ(0, y_1, . . . , y_n) ∧ ∀x (ϕ(x, y_1, . . . , y_n) → ϕ(x + 1, y_1, . . . , y_n))) → ∀x ϕ(x, y_1, . . . , y_n)].

By PA ⊢ θ we mean "there is a proof in PA for θ".
PA is a first-order theory of arithmetic powerful enough to prove many important results in computability and complexity theory. For example, there are total computable functions whose totality PA cannot prove, but PA can prove the totality of every primitive recursive function (and also of Ackermann's total computable, non-primitive-recursive function); see [12].
Zermelo-Fraenkel set theory with the axiom of choice, ZFC, is the standard one-sorted first-order theory of sets; it is considered the most common foundation of mathematics. In ZFC set membership is a primitive relation. By ZFC ⊢ θ we mean "there is a proof in ZFC for θ ".
Our metatheory is ZFC. We fix a (relative) interpretation of PA in ZFC according to which each formula of L A has a translation into a formula of ZFC. By abuse of language we shall use the phrase "sentence of arithmetic" to mean a sentence (a formula with no free variables) of ZFC that is the translation of some formula of PA.
Rudiments of Algorithmic Information Theory
The set of bit strings is denoted by Σ*. If s is a bit string then |s| denotes the length of s. All reals will be in the unit interval. A computably enumerable (shortly, c.e.) real number α is given by an increasing computable sequence of rationals converging to α. Equivalently, a c.e. real α is the limit of an increasing primitive recursive sequence of rationals. We will blur the distinction between the real α and the infinite base-two expansion of α, i.e. the infinite bit sequence α_1 α_2 · · · α_n · · · (α_n ∈ {0, 1}) such that α = 0.α_1 α_2 · · · α_n · · ·. By α(n) we denote the string of length n, α_1 α_2 · · · α_n.
One of the major problems in algorithmic information theory is to define and study (algorithmically) random reals. To this aim one can use the prefix-complexity or constructive measure theory; remarkably, the class of "random reals" obtained with different approaches remains the same.
In what follows we will adopt the complexity-theoretic approach. Fix a universal machine U . The prefix-complexity induced by U is the function H U : Σ * → N (N is the set of natural numbers) defined by the formula: H U (x) = min{|p| : U (p) = x}. One can prove that this complexity is optimal up to an additive constant in the class of all prefix-complexities {H C : C is a machine}.
A c.e. real α is Chaitin-random if there exists a constant c such that for all n ≥ 1, H U (α(n)) ≥ n − c. The above definition is invariant with respect to U . Every Chaitin-random real is non-computable, but the converse is not true. Chaitin-random reals abound: they have (constructive) Lebesgue measure one, cf. [1].
The standard example of a c.e. Chaitin-random real is the halting probability of a universal machine U (Chaitin's Omega number): Ω_U = Σ_{U(p) halts} 2^{−|p|}. Each Omega number encodes information about halting programs in the most compact way. For example, the answers to the 2^{n+1} − 1 questions "Does U(x) halt?", for all programs |x| ≤ n, are encoded in the first n digits of Ω_U, an exponential rate of compression. Is this important? For example, to solve the Riemann hypothesis one needs to calculate the first 7,780 bits of a natural Omega number [3].
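The compression claim can be illustrated by the standard dovetailing argument sketched below. The enumerator of halting programs is hypothetical, and the procedure is not effective in practice, because the bits of Ω_U are themselves uncomputable; the sketch only shows why the first n bits suffice.

```python
from fractions import Fraction

def halting_up_to(n, omega_first_n_bits, enumerate_halting):
    """Given the first n bits of Omega_U (as a bit string) and a procedure that
    enumerates, in some order, the halting programs of U, return the set of
    halting programs of length <= n.  Once the accumulated probability reaches
    0.omega_1...omega_n, any program of length <= n not yet enumerated can
    never halt: it would add at least 2^-n to Omega, exceeding the known bits."""
    target = Fraction(int(omega_first_n_bits, 2), 2 ** n)
    total = Fraction(0)
    halted = set()
    for p in enumerate_halting():          # dovetailed enumeration of halting programs
        total += Fraction(1, 2 ** len(p))
        if len(p) <= n:
            halted.add(p)
        if total >= target:
            break
    return halted
```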
The following result characterises the class of c.e. Chaitin-random reals: a c.e. real is Chaitin-random if and only if it is the halting probability of some universal machine (Theorem 1). C.e. random reals have been intensively studied in recent years, with many results summarised in [1,10,18].
Universal machines simple for PA
We start with the simple question: Can PA certify the universality of a universal machine?
A universal machine U is called simple for PA if PA ⊢ "U is universal", i.e. PA can prove that a universal U , given by its full description, is indeed universal. For illustration, the results in this section will include full proofs.
As one might expect, there exist universal machines simple for PA:
Theorem 2 [4]
One can effectively construct a universal machine which is simple for PA.
Proof. The set of all machines PA can prove to be prefix-free is c.e., so if (C_i)_i is a computable enumeration of provably prefix-free machines, then the machine U_0 defined by U_0(0^i 1x) = C_i(x) has the property specified in the theorem: PA ⊢ "U_0 is universal". □

However, not all universal machines are simple:
Theorem 3 [4]
One can effectively construct a universal machine which is not simple for PA.
Proof. Let (f_i)_i be a c.e. enumeration of all primitive recursive functions f_i : N → Σ* and (C_i)_i a c.e. enumeration of all prefix-free machines. Fix a universal prefix-free machine U and consider a computable function g : N → N defined such that if f_i(N) is finite, then so is the domain of C_{g(i)}. Since the set of all indices of primitive recursive functions with infinite range is not c.e., it follows that PA cannot prove that, for some i, C_{g(i)} is universal. □

Both results above are true for plain universal machines too. The above proofs work for plain universal machines, but a simpler proof can be given for the negative result.
Universal machines simple for ZFC
Assume that the binary expansion of Ω U is 0.ω 1 ω 2 · · ·. For each digit ω i we can consider two arithmetic sentences in ZFC, "ω i = 0", "ω i = 1". How many sentences of the above type can ZFC prove?
Theorem 4 [8]
Assume that ZFC is arithmetically sound (that is, each sentence of arithmetic proved by ZFC is true). Then, for every universal machine U, ZFC can determine the value of only finitely many bits of the binary expansion of Ω_U, and one can calculate a bound on the number of bits of Ω_U which ZFC can determine. Actually, we can precisely describe the "moment" ZFC fails to prove any bit of Ω_U:
Theorem 5 [2]
Assume that ZFC is arithmetically sound. Let i ≥ 1 and consider a c.e. random real α. Then, we can effectively construct a universal machine U (depending upon ZFC and α) such that PA proves the universality of U, ZFC can determine at most i initial bits of Ω_U, and α = Ω_U.
In other words, the moment the first 0 appears (and this is always the case because α is random) ZFC cannot prove anything about the values of the remaining bits.
Theorem 6 [25] One can effectively construct a universal machine U such that ZFC (if arithmetically sound) cannot determine any bit of Ω U .
We say that a universal machine is n-simple for ZFC if ZFC can prove at most n digits of the binary expansion of Ω U . In view of Theorem 5, for every n ≥ 1 there exists a universal machine which is n-simple for ZFC. By Theorem 6 there exists a universal machine which is not 1-simple for ZFC.
Universal machines PA-simple for randomness
We first express Chaitin randomness in PA. A c.e. real α is provably Chaitin-random if there exists a universal machine simple for PA and a constant c such that PA ⊢ "∀n(H U (α(n)) ≥ n − c)".
In this context it is natural to ask the question: Which universal machines U "reveal" to PA that Ω U is Chaitin-random?
Theorem 7 [4]
The halting probability of a universal machine simple for PA is provably Chaitin-random.
In fact, Theorem 1 can be proved in PA:
Theorem 8 [4]
The set of c.e. provably Chaitin-random reals coincides with the set of all halting probabilities of all universal machines simple for PA.
Based on Theorem 7 we define another (seemingly more general) notion of randomness in PA. A c.e. real α is provably-random (in PA) if there is a universal machine U simple for PA such that PA ⊢ "Ω_U = α".
Theorem 9 [4] A c.e. real is provably-random iff it is provably Chaitin-random.
In contrast with the case of finite random strings where ZFC (hence PA) cannot prove the randomness of more than finitely many strings, for c.e. reals we have:
We say that a universal machine U is PA-simple for randomness if PA ⊢ "Ω_U is random". In view of Theorem 10 we get:
Corollary 11
For every c.e. random real α there exists a PA-simple for randomness universal machine U 0 such that α = Ω U 0 .
Theorem 12
There exists a universal machine which is not PA-simple for randomness.
Conclusions
We have used some recent results in Algorithmic Information Theory to introduce three new criteria of simplicity for universal machines based on their "openness" in revealing information to a formal system, PA or ZFC. The type of encoding is essential for these criteria. This point of view might be useful in other contexts, specifically in automatic theorem proving. It would be interesting to "actually construct" the universal machines discussed in this paper.
Forecasting Zoonotic Infectious Disease Response to Climate Change: Mosquito Vectors and a Changing Environment
Infectious diseases are changing due to the environment and altered interactions among hosts, reservoirs, vectors, and pathogens. This is particularly true for zoonotic diseases that infect humans, agricultural animals, and wildlife. Within the subset of zoonoses, vector-borne pathogens are changing more rapidly with climate change, and have a complex epidemiology, which may allow them to take advantage of a changing environment. Most mosquito-borne infectious diseases are transmitted by mosquitoes in three genera: Aedes, Anopheles, and Culex, and the expansion of these genera is well documented. There is an urgent need to study vector-borne diseases in response to climate change and to produce a generalizable approach capable of generating risk maps and forecasting outbreaks. Here, we provide a strategy for coupling climate and epidemiological models for zoonotic infectious diseases. We discuss the complexity and challenges of data and model fusion, baseline requirements for data, and animal and human population movement. Disease forecasting needs significant investment to build the infrastructure necessary to collect data about the environment, vectors, and hosts at all spatial and temporal resolutions. These investments can contribute to building a modeling community around the globe to support public health officials so as to reduce disease burden through forecasts with quantified uncertainty.
Introduction
The epidemiology of infectious diseases is constantly fluctuating in response to environmental changes and changing interactions among hosts, reservoirs, vectors, and pathogens. This is particularly true for zoonotic diseases that infect humans, animals of veterinary importance, and wildlife. Within the subset of zoonotic diseases, vector-borne pathogens are changing more rapidly with climate change and potentially have a more complex epidemiology. In the past 80 years, the majority of global emerging infectious diseases have been zoonotic [1]. While most of the zoonoses arise from wildlife (72%), vector-borne emerging diseases are increasing at a more rapid rate [1]. Along with an increase in emerging zoonotic diseases, there have been range expansions of reservoir hosts, vectors, and the pathogens they harbor.
Vector-borne diseases account for more than 17% of all infectious diseases, causing more than 700,000 deaths annually [2,3]. For example, almost 4 billion people in over 128 countries are at risk of contracting dengue, with 96 million cases estimated per year [2,4]. Dengue virus has both animal hosts and reservoirs, which are impacted by the virus and also help maintain it circulating in populations [5]. Many of the vectors that transmit important zoonotic infectious diseases are bloodsucking insects that ingest disease-producing microorganisms during a blood meal from an infected host and later inject them into a new host during a subsequent blood meal. Mosquitoes are the best-known disease vector. However, other vectors, such as ticks, black flies, sandflies, midges, fleas, and triatomine bugs, are also important vectors of human pathogens. Table 1 lists the primary zoonotic arboviruses and other pathogens vectored by Dipterans, and Table 2 lists the primary animal (non-zoonotic) pathogens vectored by Dipterans. For a recent review on tick-borne pathogens, see [6]. The rest of this review focuses on modeling mosquito vectors and mosquito-borne infectious diseases in response to climate change.
Mosquitoes are heavily dependent on and closely tied to the environment [7,8]. A mosquito's life is a microcosm of water, available habitats, temperature, predators, and competitors. Each aspect of a mosquito's life history is greatly influenced by even the slightest changes in the environment, in unpredictable ways. For example, droughts can increase mosquito habitat by increasing stagnant water in streams, thereby increasing mosquito populations [9]. In other cases, range expansion is simply due to warmer winters in the northern latitudes [10,11].

Table 1. Primary zoonotic arboviruses and other pathogens vectored by Dipterans. The taxon groups listed for viruses are at the family level. Taxon groups for vectors are at the genus level.

Table 2. Primary animal (non-zoonotic) pathogens vectored by Dipterans. The taxon groups listed for viruses are families. Taxon groups for vectors are at the genus level.
Changing Mosquito Vector Biology and Range Expansion
Emerging infectious diseases are directly influenced by changing environmental conditions [1,51]. The resulting geographic expansions and range shifts represent major health crises in many parts of the world [52,53]. Vector-borne infectious diseases make up a significant portion of zoonotic diseases, which have increased in the last few decades [1]. Mosquitoes shift and expand their ranges into new areas as environments change and become more suitable, bringing the pathogens they harbor with them. Mosquitoes, important vectors for many infectious diseases of human importance, are sensitive to environmental conditions, especially temperature and precipitation [7]. As insects with aquatic larval stages, their life cycle and development time depend on water availability [54]. Temperature is also important for their development time and their ability to overwinter [55]. Being able to overwinter due to warmer winter temperatures may aid in the northward expansion of many species [10,11]. In addition, increased temperatures induce faster development times [7,56], which may be especially important in arctic environments [57].
Most mosquito-borne infectious diseases are transmitted by mosquitoes in three genera: Aedes, Anopheles, and Culex. Species in the genus Aedes, particularly Ae. albopictus (the Asian tiger mosquito) and Ae. aegypti, are vectors for a number of zoonotic diseases, including dengue virus, chikungunya virus, Zika virus, and yellow fever. Anopheles mosquitoes are responsible for transmitting malaria and a few other pathogens, such as canine heartworm (Dirofilaria immitis) and the species that cause filariasis (e.g., Wuchereria bancrofti). Anopheles gambiae and An. arabiensis are the vectors most responsible for malaria transmission in Africa. Culex mosquitoes, predominantly C. pipiens, C. tarsalis, and C. quinquefasciatus, carry and transmit West Nile virus and Saint Louis encephalitis virus. Mosquitoes in all three genera have experienced range expansions [58][59][60][61][62][63][64][65]. Aedes aegypti is the most widespread, with an almost global distribution through repeated invasions [66]. As mosquito species expand their geographic distributions, the diseases they harbor are expanding and are predicted to continue to expand [67], representing new challenges for places previously not affected. For example, chikungunya virus has spread from Africa and Southeast Asia to the subtropics and the western hemisphere [68], Zika virus has spread rapidly from Africa to the Americas [69], and West Nile virus has spread to British Columbia, Canada [70].
Two species of mosquitoes in the genus Aedes have experienced the most range expansion globally: Aedes albopictus and Ae. aegypti [66]. Aedes albopictus is the most invasive mosquito species in the world [71], spreading from Southeast Asia to every continent except Antarctica [72]. Global air and sea travel [73] and the used tire trade [72,74,75] have been proposed as major causes of the worldwide dispersal of Ae. albopictus. Eggs of this species are long-lived, desiccation-resistant [76], and respond to shorter, colder days by going into diapause [72,74]. Aedes aegypti spread from Africa to the tropics and subtropics around the world by way of human movement [77,78]. In addition to climate impacts on vector distributions, urbanization may exacerbate the invasive potential of vectors following the colonization of new areas [66]. Both Ae. albopictus and Ae. aegypti have urban and suburban habitat preferences, and the habitats of larval development include man-made containers [76].
Species of Anopheles and Culex mosquitoes have also increased their geographic ranges [58,59,70], but have not become as invasive as the two species of Aedes. The increase in Anopheles mosquitoes, mainly An. arabiensis, may be due to their ability to resist desiccation and survive severe dry seasons, resulting in perennial transmission of malaria to humans [79][80][81]. However, Anopheles gambiae has also been shown to transmit malaria continuously during the dry season [80] and to expand its niche into marginal habitats [61], both of which may act to increase its geographic range in the future [82].
One of the most important malaria vectors in the neotropics (Central and South America) is An. darlingi, which is predicted to undergo range expansions [62,83].
The Culex pipiens complex, made up of C. pipiens sensu stricto and C. quinquefasciatus, is distributed worldwide. Culex quinquefasciatus is most prevalent in the tropics and subtropics, while C. pipiens is found in temperate climates [84]. Recently, a new species of concern, C. coronator, was discovered in the southwestern United States [85,86], and as far north as Virginia [87]. This species has expanded its range from the American subtropics and tropics [85]. Importantly, this species has been documented harboring several arboviruses of human importance [86]. Other Culex species are experiencing range shifts northward into Canada [64,70]. Both Culex pipiens and C. quinquefasciatus are predicted to spread further northward in Canada [64,88].
Mosquito responses to environmental conditions are variable and often deviate from the predictions of climate-based models of future distributions. For example, when Ae. albopictus invaded new continents (e.g., South America), it underwent niche shifts, which may be due to adaptive genetic changes or result from founder effects [63]. The ability to shift niches upon invading new areas may be one reason why Ae. albopictus is so widespread. Additionally, several species have been shown to become more tolerant of saline and brackish waters. These species include Anopheles sundaicus, An. culicifacies, An. stephensi, Aedes aegypti, Ae. albopictus, and Culex sitiens [89]. Salinity tolerance may be important to consider in studies predicting future mosquito distributions because brackish and saline water along coastlines is predicted to increase [90]. Hybridization is another ecological factor that may determine the extent of species distributions and influence range expansion of certain species (reviewed in [91]). For example, An. gambiae gained a critical gene from An. arabiensis allowing it to move from rainforests to drier habitats, such as savannahs [92,93]. Further research is needed to understand how hybridization may contribute to range expansion in other mosquito genera. Predicting shifts in range distributions of mosquito species is challenging and requires a comprehensive modeling approach that incorporates vegetation modeling, hydrology, epidemiological modeling, human movement behavior, mosquito behavior, and host, vector, and pathogen biology and evolution.
Mosquitoes closely track environmental conditions because their life cycle depends on them. Mosquito expansion will likely continue well into the future because of increased temperatures, greater probabilities of overwintering, and changes to precipitation regimes. Although climate change directly influences mosquito distributions through changing environmental conditions, it can also impact distributions indirectly. Humans respond to climate change by altering their surroundings. For example, drought has caused southeast Australian communities to install water storage tanks, which is predicted to increase the distribution of Ae. aegypti and increase the risk of emerging and re-emerging diseases in these areas [94]. Changes in human movement will also affect mosquito dispersal and impact future distributions. Predicting range expansions of mosquito vectors and mosquito-borne diseases therefore needs effective climate, human, and epidemiological modeling, which, when combined, will provide more accurate forecasts for vector and disease risk.
Epidemiological Modeling of Vector-Borne Infectious Diseases
While yearly dengue epidemics continue [4,95,96], the chikungunya virus and Zika virus have recently emerged and caused large outbreaks in the Americas, spreading rapidly across the continent after introduction and causing long-term effects, including Zika-related birth defects. Chikungunya virus infected upwards of 30% of vulnerable populations in South and Central America. While the full effects of Zika virus are still being understood, recent sero-surveys in Brazil indicate a more than 60% attack rate [97][98][99][100]. It is not clear if chikungunya virus and Zika virus will continue to expand geographically and become endemic. In addition, climate change and globalization have increased the potential for wider spread of vector-borne diseases [101]. Thus, there is an urgent need to study these diseases in different regions and to produce a generalizable approach capable of mapping risk and forecasting outbreaks to alert vulnerable populations and inform decision support while increasing scientific understanding [102].
Ecological studies have demonstrated that certain variables such as the Normalized Difference Vegetation Index (NDVI), precipitation, and temperature can predict the severity of mosquito-borne disease transmission months in advance [110,129,139]. These variables have been used to predict the risk of diseases such as West Nile virus in the United States [140]. Mosquito-borne disease modeling has been limited to either local studies that consider selected fine-scale key processes or to larger-scale models that have a limited representation of key processes and their interactions. For example, some process-based models consider the impact of temperature [141][142][143][144][145][146][147][148] or rainfall [147,[149][150][151] on mosquito lifespan and development rates without explicitly considering the nonlinear response of mosquito habitat to weather (i.e., formation and persistence of standing water in the landscape). Recent modeling studies (e.g., malaria [24,152,153] and Rift Valley fever [154]) have begun to consider the linkage to hydrology and climate, but they are either limited to small scales (e.g., watersheds or small regions) [153,154] or are based on a simple calculation of water balance without considering real land surface characteristics [152]. Meanwhile, statistical models built on the relationship between local meteorological/environmental factors, mosquito physiology, and/or reported disease cases have been developed for understanding the ecological niches of mosquitoes and mosquito-borne diseases [78,95,116,[155][156][157][158]. Because they are trained solely on past observations, they generally do not capture the coupled nonlinear processes (e.g., vector range expansion and human-mosquito contact) [112,148] in conditions not yet encountered (i.e., "no-analog" conditions beyond the space of available data for model calibration) resulting from weather extremes, climate, and socioeconomic changes.
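To make the link between weather variables and transmission intensity concrete, the sketch below evaluates the classical Ross-Macdonald basic reproduction number for a mosquito-borne pathogen across a range of temperatures. The temperature-response functions and all parameter values are illustrative assumptions for this sketch, not values fitted in the studies cited above.

```python
import numpy as np

def ross_macdonald_r0(m, a, b, c, mu, eip, r):
    """Classical Ross-Macdonald R0 for a mosquito-borne pathogen.

    m   : mosquitoes per human
    a   : human-biting rate (bites per mosquito per day)
    b   : mosquito-to-human transmission probability per infectious bite
    c   : human-to-mosquito transmission probability per bite on an infectious human
    mu  : adult mosquito mortality rate (per day)
    eip : extrinsic incubation period (days)
    r   : human recovery rate (per day)
    """
    return (m * a**2 * b * c * np.exp(-mu * eip)) / (r * mu)

# Assumed, illustrative temperature responses: biting rate rises with
# temperature and the extrinsic incubation period shortens.
def biting_rate(temp_c):
    return max(0.0, 0.017 * temp_c - 0.165)      # bites per mosquito per day

def eip_days(temp_c):
    return 111.0 / max(temp_c - 14.0, 1e-6)      # degree-day style EIP

for temp in (18, 22, 26, 30):
    r0 = ross_macdonald_r0(m=2.0, a=biting_rate(temp), b=0.5, c=0.5,
                           mu=0.12, eip=eip_days(temp), r=1 / 7)
    print(f"T = {temp:2d} degC  ->  R0 = {r0:5.2f}")
```

The steep increase of R0 with temperature in this toy calculation illustrates why even modest warming can matter for transmission at the cool edge of a vector's range.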
Laboratory and field studies have shown relationships between weather/climate and risk for mosquito-borne diseases (e.g., Rift Valley Fever) [159], tick-borne diseases [160], and other vector-borne diseases. Several research teams, including ours, have incorporated these factors into working models [161][162][163]. A recent systematic review by the National Exposure Research Laboratory [160] highlights the need for a "rigorous multi-system modeling approach to improve our knowledge about the important mosquito vector, Aedes spp. presence/abundance response to the interaction between environmental, socioeconomic, and meteorological systems". The current state of the art has focused on particular pathogens in particular regions using a subset of the needed factors [160,[164][165][166]. The few global or continental models that exist focus on a single pathogen [137,141,163] or on ecological niche models rather than explicit disease dynamics [137].
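As a minimal illustration of what "explicit disease dynamics" means here, the sketch below couples a human SIR model to a mosquito SEI model and integrates it with simple forward-Euler steps. The parameter values, the constant mosquito population, and the absence of any climate forcing are simplifying assumptions; a full model would drive the vector terms with the hydrological and meteorological processes discussed above.

```python
# Assumed, illustrative parameters (per day)
a, b, c = 0.3, 0.5, 0.5        # biting rate and transmission probabilities
mu_v = 0.1                     # adult mosquito mortality
sigma_v = 1 / 10               # 1 / extrinsic incubation period
gamma_h = 1 / 7                # human recovery rate
N_h, N_v = 10_000, 20_000      # host and vector population sizes (held constant)

dt, days = 0.1, 365
S_h, I_h, R_h = N_h - 1.0, 1.0, 0.0     # humans: susceptible, infectious, recovered
S_v, E_v, I_v = N_v, 0.0, 0.0           # mosquitoes: susceptible, exposed, infectious

peak_prevalence = 0.0
for _ in range(int(days / dt)):
    lambda_h = a * b * I_v / N_h        # force of infection on humans
    lambda_v = a * c * I_h / N_h        # force of infection on mosquitoes

    dS_h = -lambda_h * S_h
    dI_h = lambda_h * S_h - gamma_h * I_h
    dR_h = gamma_h * I_h
    dS_v = mu_v * N_v - (lambda_v + mu_v) * S_v     # births balance deaths
    dE_v = lambda_v * S_v - (sigma_v + mu_v) * E_v
    dI_v = sigma_v * E_v - mu_v * I_v

    S_h += dt * dS_h; I_h += dt * dI_h; R_h += dt * dR_h
    S_v += dt * dS_v; E_v += dt * dE_v; I_v += dt * dI_v
    peak_prevalence = max(peak_prevalence, I_h / N_h)

print(f"peak human prevalence: {peak_prevalence:.1%}")
```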
Earth System Modeling: The State of Climate Forecasting
The modeling of global climate systems dates back to the 1960s and was initially based on weather prediction models (e.g., [167]). Most of these early models included only the atmosphere, without detailed representation of land (e.g., no explicit representation of vegetation) or ocean processes (e.g., a motionless ocean). However, there were also pioneering models that coupled the atmosphere and ocean currents [168,169]. Because these models capture the circulation of both atmospheric and ocean currents, they are referred to as general circulation models (GCMs). With advances in computing power and our improved understanding of coupled physical systems, GCMs made great progress from the 1960s to the 2000s in simulating the coupled physical climate processes (e.g., winds, clouds, land surface, oceans, and ice) together with atmospheric chemistry, aerosols, and static vegetation in the Earth system [170]. Since the late 2000s, Earth system models (ESMs) have evolved from GCMs to simulate the interactions between climate and biogeochemical components (e.g., dynamic terrestrial vegetation and ocean biogeochemistry), in order to better capture the feedback of biological systems on climate [171]. Meanwhile, great progress has been made in integrating the human component into ESMs (i.e., the integrated ESM or IESM) [172]. Modern ESMs and IESMs provide the potential to simulate zoonotically relevant components, including biological components (e.g., vegetation dynamics), abiotic components (e.g., inundation and water temperature), and human components (e.g., the urban environment and population changes). All this progress offers great potential for seamless coupling with disease modeling for better prediction of disease risk under population growth, warming, and climate extremes.
Bridging the Gap Between Epidemiological and Earth System Modeling
Although great strides have been made in simulating and predicting global climate with large-scale ESMs, coupling these forecasts with human and living-natural (HLN) systems is critical for planning and mitigating the long-term impacts of changes in the environment. HLN systems interact nonlinearly with the climate, and data that account for relevant parameters and states of system dynamics in a single source are limited, posing a significant challenge in understanding and predicting the response of these systems. Mosquito-borne diseases can be considered an HLN system with rich traditional (e.g., number of cases) and non-traditional (e.g., remote sensing imagery) data streams. Mosquito abundance and virus development rates show both inter- and intra-seasonal variation, and are affected by long-term changes in climate regimes leading to potential range expansion of both mosquitoes and associated diseases. Changes in temperature, climate variability, and extreme weather events are already impacting mosquito-borne diseases around the world [173][174][175]. A recent example is the emergence of Zika virus throughout Latin America, which was likely fueled by the hot drought during the 2015-2016 El Niño [175]. The mosquito-borne disease HLN system not only affects human health in the US and troops abroad, but is also linked to poverty and regional stability.
We face three key challenges to be able to make reliable large-scale predictions of disease risks at both short-term (seasonal/sub-seasonal) and long-term (decadal) temporal scales, which is critical for planning and improving disease mitigation strategies. First, there is currently no capability to integrate key processes at large scales, including (1) hydrology and vegetation that affect mosquito habitats; (2) temperature and humidity that affect mosquito population dynamics and pathogen replication; and (3) human population density and movement that affect vector-human contact and facilitate the dispersal of mosquitoes and the development of additional mosquito habitats. There are numerous baseline data requirements for both the epidemiological and climate models for zoonotic infectious diseases (Figure 1). Statistical models built on the relationship between local meteorological and environmental factors and reported disease cases have been developed for understanding the ecological niches of mosquito-borne diseases [116,155,156]. However, they do not directly incorporate the coupled nonlinear processes (e.g., human behavior or mosquito population dynamics) [112], a weakness under novel future climate conditions. Meanwhile, most current process-based models have only incorporated the impact of temperature [141][142][143][144][145] and rainfall [149][150][151] on mosquito population dynamics without explicitly considering the formation and persistence of water on the landscape, which is critical for accurately predicting the nonlinear response of mosquito habitats to precipitation. One exception is the HYDREMATS model [153], but it is limited to local watersheds. Thus, it cannot account for key processes of disease transmission such as human movement or nearby pathogen reservoirs.
Second, although recent models have started to incorporate human population and movement data [141,176,177], they have not yet considered the future human populations and movement patterns that are relevant to predicting disease risk in the future. While animal movement data are also required, they can range from easy to impossible to obtain, depending on the region. Third, although field and lab studies and analyses of diverse and often non-traditional data streams critical to developing and validating coupled HLN-climate systems are increasingly available, the research community lacks an advanced data fusion and mining system to extract critical information from different data sources for parameterization, development, and evaluation of such large-scale, complex models. The climate system, the resulting environmental change, vectors, animal and human hosts, and infectious pathogens are each, in their own right, complex systems. Data requirements, existing models, a foundational understanding of infectious diseases, statistical and data fusion methods, and computer science are all required for predicting how climate change may impact vector-borne diseases (Figure 2). The main hurdle underlying the above three challenges can be attributed to the lack of large investments that enable seamless collaboration between diverse and highly skilled subject-matter experts, of the computational resources needed to scale up and integrate key processes within these complex systems, and of the data fusion/integration approaches needed to validate models while capturing uncertainty in model parameters, inputs, and predictions.
Figure 2. Disease and climate systems for mosquito-borne diseases. Each system must be coupled together with validation from ground-truth, real-time data. Data from Figure 1 feed into each of these systems, and data fusion issues are addressed throughout the process.
Data Fusion and Data Requirements
Data fusion is necessary to understand the complex processes of vector-borne diseases. Fusing heterogeneous data provides a way to link the data model, the process model for the disease dynamics (including transmission rates), and R0, as well as to account for varying levels of spatial resolution [178]. Each of these components has been well established, but they have not been integrated together in a disease model. Additionally, it can be a challenge to fuse data with a mechanistic understanding [179][180][181]. Due to the complexity of modeling both the process and the data, researchers generally focus on either a model-based (mathematical) or a data-based (statistical) approach to understand predictions of ecological dynamics [182]. However, both the data and the mechanistic process underlying the physical system are often of interest in disease modeling. Therefore, it is not enough to fit a statistical model to observations or to use only a model-based approach that ignores the data [123]. There are several statistical challenges associated with disease modeling, including modeling and estimating coupled systems, dealing with unobserved variables, and modeling the spatial-temporal dynamics [183]. Statistical inference for time-series models with incomplete data and for process models can be particularly challenging, as it may be computationally expensive and oftentimes intractable. Stochastic models, on the other hand, add flexibility to fit real data; they provide a framework in which parameters can be identified even when they would be unidentifiable in a deterministic setting, by focusing on the distribution associated with the characteristics of the process. However, such a model does not by itself incorporate all sources of data from climate, water sources, and transmission.
A particular challenge present in vector-borne disease modeling is that the underlying phenomena are generally difficult or impossible to directly measure. In an ideal situation, knowledge of mosquito population distributions through time, broken up by species and then coupled with human population distributions through time, would be used to derive risk maps. This knowledge could directly inform a mechanistic or statistical model to provide high-fidelity forecasts. However, mosquito population distributions are difficult to capture. Ball et al. [184] are developing an autonomous sensor that uses passive sugar baiting to lure mosquitoes in order to test their saliva for the presence of mosquito-borne viral pathogens [184], but these sensors are still in their infancy, and deployment at the scale of a large city, state, or country is not yet possible. As a result, no large-scale mosquito population distribution data through time exist. This gap, along with complications coming from the structure of modern disease surveillance systems, has resulted in a revolution in the disease modeling literature in which proxy datasets are used to fill in the disparity in our knowledge. Such datasets indirectly relate to the phenomenon at hand. For example, internet data (e.g., [185][186][187][188]) have been used to complete the gaps in our public health data; searches for specific disease-related keywords such as cough or fever, for instance, tend to highly correlate with actual disease incidence. Using knowledge of the mosquito lifecycle (i.e., that mosquitoes require certain temperature and precipitation conditions for breeding), proxy datasets such as weather, demographics, satellite imagery, and other related datasets can be used to provide additional insight into mosquito locations. The underlying hypothesis to these data fusion approaches is that, while no individual proxy dataset will provide enough information to accurately forecast disease incidence, the fusion of multiple weak indicators derived from disparate data streams will provide a more complete picture.
Once this approach is considered, however, the complexity of the data fusion problem greatly increases; it may not be intuitive, for example, how internet data coming from Wikipedia and spectral signals coming from satellite imagery could be fused. Furthermore, they relate to disease incidence on different timescales, e.g., healthy vegetation in satellite imagery (as a proxy for standing water) will be a multi-week leading indicator [189], whereas Wikipedia access logs will be closer to a real-time indicator. Another factor is that each data source has the potential to have large measurement errors, but because the data types include hard (e.g., physics-based phenomenologies) and soft (e.g., text) data sources, the mechanics of these errors are highly variable. Because each of the disparate data streams is important for forecasting and understanding disease dynamics, it is equally important to appropriately account for leading/lagging indicators, error propagation, etc., when building the fusion models. The conditional model, for instance, is flexible as it accounts for the uncertainty in the data based upon prior knowledge of the data parameters and characterizes the relationships in the data and the inherent latent structure. The process model's spatial and temporal rate variations are incorporated directly as an additional layer in the hierarchical model.
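One simple way to respect those leading/lagging relationships when combining weak indicators is to shift each proxy by its assumed lead time before fitting a joint regression. The sketch below does this on synthetic data; the three-week vegetation lead, the zero-week lead for search logs, and all coefficients are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
weeks = 120

# Synthetic weekly proxy streams standing in for real data sources.
vegetation = rng.normal(size=weeks)       # e.g., an NDVI anomaly (leads cases)
search_logs = rng.normal(size=weeks)      # e.g., symptom searches (near real time)
cases = (5.0 + 2.0 * np.roll(vegetation, 3) + 1.5 * search_logs
         + rng.normal(scale=0.5, size=weeks))

def lag(series, k):
    """Shift a series so that week t sees the value observed at week t - k."""
    if k == 0:
        return series.copy()
    out = np.empty_like(series)
    out[:k] = series[0]                   # pad the start with the first value
    out[k:] = series[:-k]
    return out

# Design matrix with each proxy placed at its assumed lead time.
X = np.column_stack([np.ones(weeks), lag(vegetation, 3), lag(search_logs, 0)])
coef, *_ = np.linalg.lstsq(X, cases, rcond=None)
print("intercept, vegetation (3-week lag), searches (0-week lag):",
      np.round(coef, 2))
```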
An added complexity of data fusion is that the sources for climate and mosquito data are inherently on different spatial and/or temporal scales, e.g., fine-scale data in the case of mosquito habitats and coarse gridded data in the case of climate data. Oftentimes, the sources of data are spatially misaligned, meaning the data are on different grids. To account for the misalignment, a spatial interpolation method, such as kriging, can be used [190]. Additionally, if the data layers are nested or overlap (non-nested), spatial modeling approaches exist to account for the misalignment in the spatial grids [191]. A hierarchical or multilevel model structure accommodates different spatial aggregations, as well as allowing for shared characteristics among the groups [192,193]. A conditional modeling framework, i.e., a Bayesian or maximum likelihood approach, can combine multiple processes and datasets in an analysis [194]. The conditional model provides a way to pool data from multiple sources and weight them based upon their information content.
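The sketch below illustrates ordinary kriging of a coarse-grid climate variable onto a fine-scale point, assuming an exponential covariance with a fixed sill and range. In practice the covariance (variogram) would be estimated from the data rather than fixed, and the grid values and target location used here are invented for illustration.

```python
import numpy as np

def exp_cov(h, sill=1.0, range_=2.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / range_)

def ordinary_kriging(obs_xy, obs_vals, target_xy, cov=exp_cov):
    n = len(obs_vals)
    d_obs = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    d_tgt = np.linalg.norm(obs_xy - target_xy, axis=-1)

    # Ordinary kriging system: covariances plus a Lagrange-multiplier row and
    # column that force the weights to sum to one (unbiasedness constraint).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d_obs)
    A[n, n] = 0.0
    rhs = np.append(cov(d_tgt), 1.0)

    weights = np.linalg.solve(A, rhs)[:n]
    return weights @ obs_vals

# Coarse "climate grid" cell centres interpolated to a fine-scale habitat point.
grid_xy = np.array([[0.0, 0.0], [0.0, 4.0], [4.0, 0.0], [4.0, 4.0]])
grid_vals = np.array([22.0, 24.0, 23.0, 27.0])     # e.g., surface temperature, degC
print(round(ordinary_kriging(grid_xy, grid_vals, np.array([1.0, 3.0])), 2))
```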
Assumptions or Challenges of Disease Forecasting and Modeling in Animals and Humans
There are three major challenges in achieving the scientific and technological advances made in weather forecasting and transferring them into disease forecasting; these are: (1) data availability, (2) human behavior, and (3) funding [123]. In terms of data availability, unlike weather forecasting, there is a lack of real-time ground-truth data to inform disease forecasting models. In order to accurately predict the future, mathematical models must account for the complex and interacting dynamics of the changing environment, the vectors (e.g., mosquitoes), and the hosts (e.g., humans) based on past and current data. Specifically, real-time data on mosquito prevalence and diversity at various spatial resolutions are limited and often non-existent. Without these data streams, modelers use proxy information such as vegetation and hydrology to infer mosquito prevalence; however, these limitations increase the uncertainty in the predictions. Similarly, clinical surveillance data are often biased, due to limited and costly access, and delayed, due to the bureaucratic reporting process, taking weeks or even months to be published. In order to advance our ability to forecast diseases, we need billions of sensors around the world collecting and uploading real-time information to inform forecasting models.
The second major challenge to accurately forecasting infectious diseases is human behavior. Unlike weather, which is governed by the laws of physics, humans can be unpredictable and, as a host, they play a key role in the spread of infectious diseases. Human behavior, such as vaccination, can reduce or increase the probability of infection. Thus, understanding what people do during an epidemic is crucial for modeling disease dynamics. However, there are no databases monitoring human behavior and although Internet data streams, such as social media, have been used to capture population sentiment, there are biases and uncertainty in using this information. In addition, because of the role that humans are playing in changing the climate, it will be crucial to understand how their behavior will change model predictions. Finally, infectious disease forecasting needs significant investment in building the infrastructure necessary to collect data about the environment, the vectors, and the hosts at all spatial and temporal resolutions. These investments can also contribute to building a modeling community around the globe to support public health officials and ultimately reduce disease burden through forecasts with quantified uncertainty.
Concluding Remarks: What is Needed Now and in the Future for Forecasting the Impacts of Climate Change on Infectious Diseases
Rapid and large-scale environmental change is occurring globally, which is already leading to dramatic changes in animal, insect, and plant populations. While some wildlife species are able to adapt and others are not, infectious pathogens and their vectors will respond and take advantage of favorable situations. Additionally, as human populations move and expand, agricultural animals will continue to replace wildlife, and new hosts and vectors will move with these migrations and expansions. Coupling cutting-edge and validated climate models with scalable epidemiological models and ground-truth, real-time data will allow the models to be validated with statistical and sensitivity analyses in order to test hypotheses and predictions of future infectious disease conditions.
A focus on the potential impacts of climate change on natural, animal, and human systems will allow for better predictions of, and mitigations for, how these impacts will influence zoonotic infectious diseases in animals and humans. While modeling has its limitations, especially when coupling large-scale systems models, the alternatives are guesswork or waiting until infectious pathogens cross into new regions. Mitigations may include animal vaccination strategies that will take time to develop, including the time it takes to stockpile vaccines in areas where they are needed most. Having more knowledge of future vaccine requirements and of the impact of animal trade and movement will ensure the best use of limited resources to reduce the threat of infectious diseases globally.
|
2019-05-09T13:09:53.991Z
|
2019-05-06T00:00:00.000
|
{
"year": 2019,
"sha1": "2ea34d491566374d9f4635970d6cec8a63bf4e35",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2306-7381/6/2/40/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2ea34d491566374d9f4635970d6cec8a63bf4e35",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Geography"
]
}
|
3500298
|
pes2o/s2orc
|
v3-fos-license
|
Alternative Methods for Characterization of Extracellular Vesicles
Extracellular vesicles (ECVs) are nano-sized vesicles released by all cells in vitro as well as in vivo. Their role has been implicated mainly in cell–cell communication, but also as disease biomarkers and, more recently, in gene delivery. They represent a snapshot of the cell status at the moment of release and carry bioreactive macromolecules such as nucleic acids, proteins, and lipids. A major limitation in this emerging field is the limited availability and awareness of techniques to isolate and properly characterize ECVs. The lack of gold standards makes comparing different studies very difficult and may potentially hinder some ECV-specific evidence. Characterization of ECVs has also recently seen many advances with the use of Nanoparticle Tracking Analysis, flow cytometry, cryo-electron microscopy instruments, and proteomic technologies. In this review, we discuss the latest developments in translational characterization technologies, including the evidence in their support and the challenges they face.
INTRODUCTION
Release of membrane vesicles from the plasma membrane is a physiological process known to occur during cell cycle activation and growth without affecting cell viability, and it is widely observed both in vitro and in vivo (Cocucci et al., 2009; Thery et al., 2009). Extracellular vesicles (ECVs) are generated during a process of membrane vesiculation, either at the plasma membrane (microvesicles) or within endosomal structures (exosomes), and comprise a very heterogeneous population of vesicles ranging in size and content. Their sizes range from 20 nm in diameter up to a reported 900 nm, the former comprising the more homogeneous population of exosomes released from multivesicular bodies (MVBs) and the latter, commonly referred to as MVs, shedding from the plasma membrane (Thery et al., 2009). In this mini-review, we will refer to all types of shed vesicles under the common term of ECVs. Extracellular vesicles' content varies from cell to cell and has been shown to reflect the content and surface markers of the cell from which they originate (Skog et al., 2008; Balaj et al., 2011). These ECVs can also be taken up by neighboring or distant cells, where they release their cargo, which can affect the recipient cell's status (Cocucci et al., 2009; Camussi et al., 2010). It has been shown that ECVs can affect immune responses, promote tumor invasiveness and metastasis, confer resistance to drugs, and promote endothelial cell migration, invasion, and neovascularization by acting as carriers of angiogenic stimuli (Lee et al., 2011). Also, since they carry cell-specific signatures, assessment of ECVs' content may be used for diagnostic purposes for early diagnosis of different cancers, including melanoma, ovarian cancer, kidney, and brain tumors (Meng et al., 2005; Skog et al., 2008; Lima et al., 2009; Grange et al., 2011).
Along with physiological signal mediators, ECVs appear as potential new tools for clinical diagnostics and may be useful in novel treatment modalities (Lima et al., 2009;Chen et al., 2012). Several groups are currently looking at ECVs as potential carriers of therapeutic drugs or molecules that would down-regulate toxic proteins or elicit an anti-tumor immune response when encapsulating specific siRNAs or adeno-associated viral vectors (Alvarez-Erviti et al., 2011;Maguire et al., 2012).
Although this branch of science is growing very fast, it is hampered by limitations in isolation and purification technologies as well as in the ability to measure ECV size, concentration, and molecular content (Momen-Heravi et al., 2012). There is an urgent need for more reliable and reproducible extracellular vesicle characterization methods so that downstream studies in ECV genomics, proteomics, and lipidomics can be more standardized and efficient. In this review, we provide a brief overview of some recently used methods for ECV measurement and characterization, for sizing and assessing their concentration, while emphasizing novel cutting-edge technologies.
CHARACTERIZATION OF EXTRACELLULAR VESICLES
Analysis of ECV subpopulations is highly interesting, but has turned out to be a major challenge due to their small size and none of the techniques available today can reliably distinguish them at the single particle level. This analysis would reveal information about ECV size, concentration, charge, subcellular origin, formation process, content, as well as their potential function. In this mini-review we discuss some new mainstream technologies including flow cytometry, scattering and fluorescence flow cytometry, impedance-based flow cytometry, transmission electron microscopy (TEM) and scanning electron microscopy (SEM), cryo-electron microscopy (Cryo-EM) and single particle analysis, Nanoparticle Tracking Analysis (NTA), qNano, and large-scale molecular profiling.
FLOW CYTOMETRY
One method for high-throughput multi-parametric analysis and quantitation of ECVs is flow cytometry. This technology is designed to scan and sort single cells or particles at a rate of thousands per second (van der Pol et al., 2010). Flow cytometry is widely used to detect the origin, size, and morphology of circulating ECVs (Kim et al., 2002; Hunter et al., 2008; Kesimer et al., 2009; Mobarrez et al., 2010; Orozco and Lewis, 2010; Zwicker et al., 2012). Through hydrodynamic focusing, the suspended cells flow through a compressed chamber to the interrogation point, where the sample encounters the laser. The emitted scatter and fluorescence are then captured and measured by detectors facing forward and perpendicular to the laser. The intensity of detected light is reported as forward light scatter (FLS) and side light scatter (SLS). The quantity of light scattered forward is proportional to the diameter, while SLS denotes the morphology and inner anatomy of ECVs (Kim et al., 2002; van der Pol et al., 2010). In tandem, fluorescent light emitted from labeled ECVs travels perpendicular to the laser, as in SLS, and optics guide the wavelengths to detectors that record the intensities. Compatible dyes with discrete emission peaks can be used to detect multiple fluorescences from a single laser. Filters provide the necessary parameters to capture the appropriate range of emission peaks, enabling the identification of heterogeneous populations. In an effort to guide and control data collection, flow cytometry employs automated and user-configured thresholds, which set points of reference for FLS that must be surpassed for data collection. It appears that, in the future, by reducing flow chamber dimensions, optimizing the flow chamber geometry, and reducing the flow velocity, the next generation of flow cytometry instruments will be capable of measuring ECVs with high sensitivity.
SCATTERING AND FLUORESCENCE FLOW CYTOMETRY
Scattering flow cytometry requires bead calibration with polystyrene/latex microspheres of known size and count to permit quantitation and delineation of heterogeneous ECVs. The detection limit is greater than or equal to 300 nm and, as such, scatter detection alone is an inefficient method for analyzing smaller vesicles (Hein et al., 2008). Fluorescence flow cytometry is more sensitive because emitted fluorescence intensity is higher than light scatter intensity for the MP size range of less than 300 nm (van der Pol et al., 2010). Fluorescence-activated cell sorting (FACS) enables ECVs to be characterized on the basis of the spectral properties of the fluorescence signal, enabling morphological classification and specific sorting (Perez-Pujol et al., 2007).
A limitation of flow cytometry is its limited ability to sort small ECVs below 130 nm. Zwicker et al. (2012) suggest a bead-based gating strategy to account for the lower sensitivity of size-related forward scatter for ECV measurements (Robert et al., 2009). Improvements in the standardization of vesicle measurements have been reported by Lacroix et al. (2010) on behalf of the International Society on Thrombosis and Haemostasis (ISTH). Using Megamix beads, this study determined that instrumentation with wide-angle FLS produced consistent measurements of vesicles (Chandler et al., 2011; Yuana et al., 2011). van der Pol et al. (2012) also used the Megamix bead gating strategy to standardize the relationship between scatter and ECV diameter. Notably, they concluded that flow cytometers can indeed detect smaller ECVs in the range of exosomes by swarm detection, the capture of smaller ECVs grouped together and characterized as a single event (van der Pol et al., 2012). A comparison of newer instruments in Chandler et al. (2011) shows that the Apogee A40, calibrated with 0.4 µm polystyrene beads for 1.0 µm microparticles (MPs), can detect higher numbers of MPs and platelets compared to Megamix gating.
Heterogeneous ECVs stained with fluorescently labeled antibodies can be identified and sorted by fluorescence flow cytometry. Non-specific binding and unbound dye can impede accurate analysis of labeled ECVs, especially smaller vesicles like exosomes (Hoen et al., 2012). Hoen et al. (2012) reported successful antibody-mediated detection of phenotypically heterogeneous exosomes using fluorescence threshold triggering. Their labeling method and optimization of the Becton Dickinson Influx flow cytometer (Becton Dickinson, Brussels, Belgium) eliminated noise signals and permitted comparison of vesicle subsets within the whole vesicle population, as well as detection of fluorescent vesicles down to 100 nm in diameter (Hoen et al., 2012). Mobarrez et al. (2010) found that measuring the intensity of the markers bound to platelet-derived ECVs and then translating those intensities to Molecules of Equivalent Soluble Fluorochrome (MESF) values increased reproducibility and permitted comparison of results obtained from different instruments. Inaccuracies and instrument variability in measuring the absolute number of particles per volume unit are eliminated through the use of MESF values to generate a standard curve based on beads with predefined fluorescence labeling (Mobarrez et al., 2010).
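In practice, the MESF conversion described by Mobarrez et al. (2010) amounts to fitting a calibration curve from beads of known MESF and reading sample intensities off that curve. The sketch below does this with a log-log linear fit; the bead values and sample intensities are made up purely to illustrate the procedure.

```python
import numpy as np

# Assumed calibration beads: nominal MESF values and their measured intensities.
bead_mesf = np.array([5e2, 5e3, 5e4, 5e5])
bead_intensity = np.array([120.0, 1.1e3, 1.2e4, 1.15e5])   # arbitrary units

# Fit log10(MESF) = slope * log10(intensity) + intercept.
slope, intercept = np.polyfit(np.log10(bead_intensity), np.log10(bead_mesf), 1)

def intensity_to_mesf(intensity):
    """Convert a measured fluorescence intensity to MESF via the bead curve."""
    return 10 ** (slope * np.log10(intensity) + intercept)

sample_intensity = np.array([300.0, 4.2e3, 6.0e4])   # hypothetical ECV events
print(np.round(intensity_to_mesf(sample_intensity)))
```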
IMPEDANCE-BASED FLOW CYTOMETRY
Impedance-based flow cytometry relies on the Coulter principle: as an ECV suspended in an electrolyte passes through a small aperture, it displaces conductive solution. The displaced solute increases the impedance across the circuit, generating a voltage spike proportional to the volume of the ECV. The lower detection limit of impedance-based flow cytometry is 300 nm. Note that aperture size dictates the instrument's sensitivity to ECV size (Jy et al., 2010). Using different channel diameters, two or more impedance-based flow cytometers are recommended to encompass the submicron range (van der Pol et al., 2010). Zwicker et al. (2012) used the Cell Lab Quanta SC (Beckman Coulter) with an aperture diameter of 40 µm for optimal sizing, characterization, and concentration measurement of ECVs. They affirm that the lower limit of impedance-based ECV sizing is commonly 2% of the aperture's diameter (Zwicker et al., 2012). Impedance-based cytometry enhances sensitivity in comparison with standard flow cytometers, but the limiting size range excludes a small fraction of ECVs (<300 nm; van der Pol et al., 2010; Zwicker et al., 2012).
Prior to analysis, steps such as calibration with polystyrene beads and optimization of antibody concentrations are recommended to standardize the analysis (Zwicker et al., 2012). The technology cannot provide sourcing based on surface markers, or morphological or biocompositional data of ECVs, unless combined with fluorescence and scattering flow cytometry (van der Pol et al., 2010). Limitations in resolution will cause smaller particles to go undetected, but newer instruments such as the Gallios (Beckman Coulter) and BD Influx (Becton Dickinson) are equipped with more sensitive detectors that can enable more accurate discrimination of particle populations down to 100 nm in diameter (Lacroix et al., 2010). Orozco and Lewis found it effective to base their threshold on the number of background "noise" events per second when double-filtered (0.2 µm) phosphate buffered saline (PBS) was passed through the Gallios instrument (Beckman Coulter) (Orozco and Lewis, 2010). This assay will probably be further explored in the future and may shed light on ECV subpopulation subtypes quantitatively and qualitatively.
TRANSMISSION ELECTRON MICROSCOPY AND SCANNING ELECTRON MICROSCOPY
There are two types of electron microscopes, the TEM and the SEM. TEM has similarities to light microscopes, transmitting a beam of electrons through a thin specimen and then focusing the electrons to create an image on a screen or on film. TEM is the most commonly used and has the highest resolution. SEM, on the other hand, scans a fine beam of electrons onto a specimen and collects the electrons scattered on the surface. Although SEM resolution is less than TEM, it confers detailed three-dimensional (3D) images of surfaces. Because the wavelength of electrons is more than three orders of magnitude shorter than the wavelength of visible light, the resolution of TEM can be lower than 1 nm (Pisitkun et al., 2004;van der Pol et al., 2012). Since TEM is performed in a vacuum, biomaterials require fixation and dehydration, which reduces their size and changes their morphology. ECVs usually appear 20-100 nm in size and cup-shaped when visualized by TEM. Employing immuno-gold labeling could lead to biochemical information regarding ECVs' surface (van der Pol et al., 2012; Figure 1). Although TEM has been used extensively for detection of ECVs (Baran et al., 2010;Miranda et al., 2010;Waldenstrom et al., 2012), this method only provides semi-quantitative information on ECVs. Furthermore, sample dehydration and vacuum procedures required in Electron Microscopy (EM) might affect the characteristics of ECVs. The measurement time is in the order of hours.
FIGURE 1 | Transmission electron microscopy (TEM) characterization of human serum-derived extracellular vesicles (ECVs). (A) ECVs were negatively stained with 2% uranyl acetate after removing excess moisture. Cup-shaped structures, 30-100 nm in size, were identified as exosomes/microvesicles. (B) ECVs isolated from human serum expressing the CD63 transmembrane protein, which is believed to be an exosome/microvesicle marker. ECVs were immuno-gold labeled with rabbit polyclonal antibodies against CD63.
CRYO-ELECTRON MICROSCOPY AND SINGLE PARTICLE ANALYSIS
Cryo-electron microscopy is a form of EM in which samples are analyzed at temperatures below −100°C, and it has been successfully applied to ECV analysis. The advantage of this technique is that samples are analyzed in frozen conditions without being stained or fixed. This technique has been used for the study of ECVs isolated from urine and revealed repetitive "mushroom-shaped" features on the surface of ECVs (Conde-Vancells et al., 2010).
Usually categorized as one of the techniques of cryo-EM, single particle EM reconstruction has recently become a popular tool to obtain the 3D structure of proteins and viruses. This method has advantages in comparison with X-ray crystallography, including no need to crystallize the proteins and no need for large amounts of protein sample (in the range of microliters; Liu and Wang, 2011). Although single particle EM has the ability to map the 3D structure of samples at 1 nm resolution, it works better for more symmetrical structures. The technique has the capability of distinguishing different molecular orientations and digitalizing them. Employing two-dimensional (2-D) alignment and classification methods, homogeneous molecules in the same view are grouped into their respective classes. In each view, their averages increase the signal of the molecule's 2-D shapes. Afterward, software orders the structures with the proper relative orientation (Euler angles) and generates the 3D images based on combining 2-D digitalized micrographs. Liu and Wang (2011) described procuring a 3D reconstruction of the yeast exosome complex using negative staining EM and single particle EM. This technique will need to be further explored in the future of ECV characterization.
NANOPARTICLE TRACKING ANALYSIS
A recently developed technique that allows sizing and counting of ECVs is NTA (Dragovic et al., 2011; Figure 2). It utilizes a laser light scattering microscope, a charge-coupled device (CCD) camera, and proprietary analytical software. A laser beam hits the ECVs, their Brownian motion is determined by a highly sensitive CCD camera, and the mean velocity of each particle is calculated with image processing software. ECVs from 30 to 1000 nm in diameter at a concentration range of 10^8-10^9 can be counted with relatively high sensitivity. The NTA software is then able to identify and track individual ECVs moving under Brownian motion and relates the movement to a particle size based on the following formula, derived from the Stokes-Einstein equation (Filipe et al., 2010): (x, y)² = 2k_BT/(3πηr_h), where k_B is the Boltzmann constant and (x, y)² is the mean-squared speed of a particle at a temperature T, in a medium of viscosity η, with a hydrodynamic radius of r_h.
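As a worked example of the relationship above, the sketch below converts the two-dimensional mean squared displacement of a tracked particle into a hydrodynamic diameter via the Stokes-Einstein relation. The frame rate, temperature, and water-like viscosity are assumed values, and the input displacement is invented for illustration.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant (J/K)

def hydrodynamic_diameter(msd_per_frame_m2, frame_interval_s,
                          temperature_k=298.15, viscosity_pa_s=0.89e-3):
    """Estimate hydrodynamic diameter (m) from 2-D particle tracking.

    For 2-D Brownian motion the mean squared displacement is MSD = 4 * D * t,
    and Stokes-Einstein gives D = k_B * T / (6 * pi * eta * r_h).
    """
    diffusion = msd_per_frame_m2 / (4.0 * frame_interval_s)
    r_h = K_B * temperature_k / (6.0 * math.pi * viscosity_pa_s * diffusion)
    return 2.0 * r_h

# Hypothetical track: ~6.5e-13 m^2 per 1/30 s frame corresponds to roughly 100 nm.
print(f"{hydrodynamic_diameter(6.5e-13, 1 / 30) * 1e9:.0f} nm")
```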
The NanoSight technology allows detection of ECV subpopulations by using fluorescently labeled antibodies that specifically bind to antigens of interest on the surface of ECVs (Dragovic et al., 2011). This feature enables users to detect, analyze, and count only the specific nanoparticles to which the fluorescently labeled antibodies are bound, with background non-specific particulates being excluded through the use of appropriate optical filters.
FIGURE 2 | The NTA software rapidly generates a distribution graph on a particle-by-particle basis and a count (in terms of absolute number concentration) of the vesicles.
qNANO (IZON)
The qNano is a relatively new technology that allows detection of ECVs passing through a nanopore by way of single-molecule electrophoresis. Branton et al. (2008) introduced nanopores as a promising approach for studying biophysics at the single-molecule level. The technology is based on the Coulter principle at the nanoscale and operates by detecting transient changes in the ionic current generated by the transport of the target particles through a size-tunable nanopore in a polyurethane membrane (Garza-Licudine et al., 2010). The qNano instrument consists of a nanopore formed by needle perforation in a polyurethane membrane that is stretched mechanically to permit real-time manipulation of nanopore size. A transmembrane voltage is generated, and as particles travel across the nanopore the altered ionic current is captured. Data are generated by the particles' transitory blockade of the pore, which produces a measurable change in the current through the channel. Fixed-geometry pores are typically useful for detecting a limited size range or type of particle. qNano provides quantitative analysis of particle samples spanning from 70 nm to 10 µm in diameter and concentrations from 10^5 to 10^12 ml^-1. Furthermore, real-time monitoring of ionic current flow across the pore at different aperture settings enables one to tune the detection and discrimination of individual nanoparticle populations in mixed multimodal suspensions. Despite the individual particle-by-particle readout, the lower limit of detection for ECVs is in the range of 100 nm (Figure 3). As the technology evolves, we believe this aspect will improve over time.
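Because the blockade magnitude in tunable resistive pulse sensing scales approximately with particle volume, sample sizes are typically obtained by comparison with calibration beads of known diameter. The cube-root scaling below is a simplification of that calibration step, and the blockade values and bead size are invented for illustration.

```python
import numpy as np

def trps_diameter(sample_blockades_na, cal_blockade_na, cal_diameter_nm):
    """Estimate particle diameters (nm) from blockade magnitudes.

    Assumes the blockade magnitude is proportional to particle volume, so the
    diameter scales with the cube root of the blockade ratio.
    """
    sample = np.asarray(sample_blockades_na, dtype=float)
    return cal_diameter_nm * (sample / cal_blockade_na) ** (1.0 / 3.0)

# Hypothetical run: 200 nm calibration beads give a mean blockade of 1.0 nA.
blockades = np.array([0.12, 0.25, 0.60, 1.80])   # sample pulse magnitudes (nA)
print(np.round(trps_diameter(blockades, cal_blockade_na=1.0,
                             cal_diameter_nm=200.0), 1))
```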
RAMAN SPECTROSCOPY
Raman spectroscopy (RS) is a spectroscopic method based on the inelastic scattering of monochromatic light (mostly laser light). It is used to study vibrational, rotational, and other low-frequency transitions in a system (Puppels et al., 1990). Photons interact with molecular vibrations, phonons, or other excitations in the system, leading to a slight up- or down-shift of their energy. The shift in energy provides information about the vibrational transitions in the molecules (Puppels et al., 1990; van der Pol et al., 2010). Given the makeup of ECVs, their chemical composition could be distinguished by RS, with the advantage that ECVs do not have to be pre-processed or labeled. RS is a quantitative technique and the signal strength is linearly proportional to the composition of the ECVs. The measurement time is in the order of a few hours. RS can also be coupled with TEM, NTA, and dynamic light scattering devices to correlate detailed biochemical information with the relative size distribution and morphology.
FIGURE 3 | qNano-generated data of human serum-derived extracellular vesicles (ECVs). The plot depicts particle size diameter vs. percentage (%) of the population. The concentration was reported as 1.4 × 10^10 particles ml^-1 with a mode of 120 nm.
LARGE-SCALE MOLECULAR PROFILING "OMIC" TECHNOLOGIES IN COMPOSITIONAL CHARACTERIZATION OF ECVs
The shedding of anuclear fragments of the cellular membrane, ECVs, is an integral part of physiological homeostasis and of the communication of various cells of the organism. Alterations in vesicle concentrations and molecular compositions have been associated with diseases and physiological states, indicating their diagnostic potential (Simak and Gelderman, 2006). Emerging "omic" approaches for in-depth molecular profiling seem attractive for revealing MV-related diagnostic and prognostic biomarkers as well as for understanding the biogenesis and signaling of cells and ECVs. Recent advances in "omic" technologies could play an important role in elucidating the roles of ECVs by studying their molecular composition. Several recent reports have effectively utilized proteomic, metabolomic, and microarray profiling techniques to address specific questions through molecular characterization of ECVs isolated from various physiological fluids and cell cultures (Mayr et al., 2009; Didangelos et al., 2012).
Proteomic technologies allow for both unbiased discovery-driven and targeted large-scale protein profiling. Moreover, MV constituents revealed by proteomics techniques can be used in antibody-based enrichment, detection, and characterization by the methodologies discussed above. During the past several years, 2-D gel- and mass spectrometry (MS)-based proteomics has been successfully applied to MV research, leading to the identification of novel signaling and secreted proteins that may have important physiological roles (Garcia et al., 2005; Smalley et al., 2008; Dean et al., 2009; Parguina et al., 2012; Shai et al., 2012).
The traditional 2-D gel electrophoresis technique utilizes in-gel isoelectric focusing followed by SDS polyacrylamide gel electrophoresis to separate individual proteins, which can be visualized by fluorescent or visible staining, quantified by optical density readouts, digested with proteolytic enzymes, and identified by MS-based proteomics. As an example, 2-D gel analysis followed by MS-based protein identification demonstrated that significantly higher levels of phosphatidylserine-bearing ECVs, originating mostly from oxidatively damaged platelets and RBCs, can be linked to β-thalassemia/hemoglobin E (β-thal/HbE) disorder (Chaichompoo et al., 2012). Another recent report shows that platelets shed ECVs in different amounts and of different protein composition depending on the stimulus (Shai et al., 2012).
The field of MS-based proteomics has substantially advanced over the last decade due to revolutionary changes in technology, sample preparation, separation platforms, and bioinformatics. Current proteomic technologies are capable of low-attomole detection and are therefore more efficient in the analysis of small sample amounts. Conventional MS-based proteomic profiling uses up-front single or multidimensional separation of proteins or protein digests followed by on-the-fly structural characterization by single-stage and tandem MS. The most common separation technique used in proteomic analysis of ECVs prior to liquid chromatography coupled to MS is 1-D SDS gel electrophoresis. The main advantages of this technique are its simplicity and its relative efficiency in the analysis of hydrophobic and membrane proteins, which are expected to be enriched in ECVs. Also, 1-D PAGE effectively delipidates lipid-rich ECVs, which can be beneficial for downstream MS analysis. Rapid progress in high-accuracy, high-resolution MS has enabled reliable quantitative proteomic analysis and profiling of post-translational modifications. A recent study focused on the physiological erythrocyte aging process applied MS-based proteomic profiling to support the hypothesis that vesiculation of damaged and degraded membrane patches of erythrocytes may serve to postpone the premature removal of functional cells (Bosman et al., 2012). This study demonstrated a selective accumulation of ubiquitinylated proteins or peptides, as well as several other post-translational modifications, in ECVs derived from aging RBCs, which can lead to the subsequent recognition and fast removal of ECVs by the immune system (Bosman et al., 2012). MS-based profiling allows one to reliably assess the baseline of intra- and inter-individual variability in ECV composition prior to any effort at biomarker detection (Rubin et al., 2010; Bastos-Amador et al., 2012).
New fields of large-scale metabolomic, lipidomic, and peptide/protein array profiling techniques are emerging in the wake of the genomic and proteomic revolutions (Griffiths et al., 2011). These new "omic" technologies are also expected to be very instrumental in providing complementary information about the structural features of ECVs and in the development of novel diagnostic, prognostic, and therapeutic approaches.
CONCLUSION
In conclusion, a combination of the different methods described above can provide information on the different characteristics of ECVs. These methods should be further assessed and validated by comparing measurement results, so that precise, reliable, and fast extraction methods and measurements could eventually be translated from the bench to the clinic. As the area of ECVs shifts to the clinical arena, the characterization step will need to be standardized to ensure more precise and sensitive measurement. This may include combining complementary characterization methodologies.
ACKNOWLEDGMENTS
This work was conducted, at least in part, through the Harvard Catalyst Laboratory for Innovative Translational Technologies (HC-LITT) with support from Harvard Catalyst -The Harvard Clinical and Translational Science Center (NIH Award #UL1 RR 025758 and financial contributions from Harvard University and its affiliated academic health care centers). The content is solely the responsibility of the authors and does not necessarily represent the official views of Harvard Catalyst, Harvard University and its affiliated academic health care centers, the National Center for Research Resources, or the National Institutes of Health.
|
2016-06-17T22:30:03.868Z
|
2012-09-07T00:00:00.000
|
{
"year": 2012,
"sha1": "b448d78646fc147765a54720f906df1664715edc",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2012.00354/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf87e5a64cb9cc2bfafcf61611b7f60f1251e966",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
214328248
|
pes2o/s2orc
|
v3-fos-license
|
Cool, photoluminescent paints towards energy consumption reductions in the built environment
Nowadays, passive strategies are identified among the preferred solutions to reduce energy consumption and to increase comfort in the built environment. Indeed, such strategies allow energy saving by exploiting the intrinsic characteristics of materials. In this work, an innovative cool, photoluminescent paint is considered for application in the built environment, as a passive strategy to (i) reduce energy for cooling in the hot season, (ii) maintain lower surface and air temperatures, thus benefiting comfort, and (iii) contribute to the lighting of the outdoor public space. The cool, photoluminescent material is first described, then its implementation in the built environment is hypothesized. An experimental, in-lab characterization is conducted to measure the optical characteristics of the samples. Finally, the possible implementation of the investigated material in the built environment is assessed by means of dynamic simulation, in terms of thermal- and lighting-energy performance, when applied on the external envelope of a case study building and as an advanced paving solution in a public space. Results from this preliminary study show that the investigated material has promising features, since it can save up to 30% energy for cooling and 27% yearly energy for lighting.
Introduction
Nowadays, passive strategies are investigated towards the pressing objectives of reducing energy consumption and emissions in the built environment [1][2][3][4]. Indeed, passive strategies exploit the intrinsic characteristics of the materials composing the built environment to reduce energy consumption and improve performance. Consequently, such solutions consist in employing the most suitable materials for the specific case, e.g., the local climate, to contribute to the urgent call for energy demand reduction.
Greenery and cool materials are among the most studied solutions, since their application on buildings' external envelopes allows for consistent reductions in heating and cooling energy consumption [5]. Cool materials are among the most common, inexpensive and easy-to-apply passive strategies [6,7]. Such materials reflect back the incoming radiation and thus maintain lower surface temperatures and lead to lower air temperatures. Indeed, cool materials have also been identified as a possible solution to mitigate the Urban Heat Island phenomenon [8], when extensively employed on horizontal and vertical surfaces in urban areas [7,9]. Among passive solutions, innovative materials, in particular insulating materials, are being introduced in the construction sector as advanced materials for energy efficiency in buildings. Phase Change Materials (PCMs) are among such innovative materials and are employed as a passive strategy to store energy until it is needed, or to reduce heating and cooling needs when changing phase [10,11]. Indeed, PCMs, when changing phase from solid to liquid, prevent heat from entering the indoor environment. PCMs can be incorporated into construction elements in a wide variety of ways: micro- and macro-encapsulated PCMs, which can be mixed into the concrete mix or mortar layer, wallboards, and honeycomb panels. Moreover, the melting temperature varies among different PCMs, which permits the selection of the most suitable PCM depending on the application and the local climate [11][12][13]. The concept behind the use of PCMs in the building sector is to store heat and use it later, when it is most needed, instead of when it constitutes an issue in the indoor environment. Based on the same idea, the use of other forms of energy can also be postponed, thanks to the specific characteristics of some materials.
In this work, we introduce the novel application in the outdoor built environment of a peculiar cool material, which can be used to reduce energy consumption for both cooling and lighting. The stored energy, in this case, comes out as light: during the day, when there is enough light, the luminous energy is stored, and it is then released by the material even after sunset. The material is a photoluminescent paint, which we developed in-lab and whose components are commercially available. The paint is composed of a solvent and photoluminescent pigments, mixed together. The pigment is white when it is "unloaded", while it appears blue-colored when light is emitted from the material itself, once it is "loaded" with energy. The considered material is cool, due to its high solar reflectance, and, due to its intrinsic photoluminescence, it can also be regarded as a "lighting energy storage" (LES), analogous to the thermal energy storage (TES) concept exploited by PCMs.
The in-lab development of the above-mentioned material is presented here, together with the assessment of its solar reflectance. We developed a plain-paint sample, as a reference case, and a photoluminescent sample. We then hypothesized the application of these in-lab developed materials as external paint on the vertical envelope of a case study building located in Central Italy, with varying glazed/opaque ratios, and performed a dynamic energy simulation to assess the yearly energy performance. This application exploits the properties of the in-lab developed material as a cool paint, i.e., a large part of the incoming solar radiation is reflected due to the high solar reflectance and thus does not enter the building, lowering the energy demand for cooling. Moreover, another advantage is the reduction of electricity use for lighting: since the material is photoluminescent, it emits light for a certain time span after absorbing energy. Therefore, assuming an application in the built environment, less electricity would be needed in the surroundings to provide lighting, while higher safety for pedestrians is ensured during the evening hours. To evaluate the achievable energy savings, we (i) compare a reference-case building with a building whose finishing layer is the above-described photoluminescent paint, and (ii) repeat the comparison for the same building with a different glazed/opaque envelope ratio. Finally, we consider the advantages in terms of lighting when the photoluminescent material is applied in a public space. Results show that the in-lab developed materials, applied on the external building envelope, are able to lower the energy demand for cooling, while winter penalties are negligible. Moreover, the materials are able to contribute to outdoor area lighting, reducing electricity consumption.
Method
In this work, we developed the photoluminescent samples and characterized them in terms of optical properties (solar reflectance index). We then selected case studies to be modelled in specific numerical environments. The cases consisted of a case study building and a public square, with the innovative material applied on the external envelope (building) and on the paving (square), respectively. Dynamic simulations were conducted both for the thermal energy performance of the building (EnergyPlus) and for the lighting of the public space, considering, respectively, the intrinsic cool property (high solar reflectance) of the photoluminescent paint for the cooling energy saving and the emission of light for the lighting-related energy saving. In the next subsections, the procedure is described in greater detail.
Photoluminescent Materials development and characterization
The photoluminescent paints are ready-to-use paints that can be applied on many different base materials. The color of the paints usually differs between the "loaded" state, when they emit light, and the "unloaded" state, due to the different spectrum of light emission. In this case, we selected a white paint that turns fluorescent light-blue during the night. Indeed, the luminescent color is mainly visible when the surroundings are dark. The photoluminescent phenomenon originates from the absorption of a photon by a molecule: the photon excites an electron, and light is radiated when the electron returns to a lower energy state. The activating phenomenon is electromagnetic radiation hitting the material, e.g., the envelope paint. For the considered photoluminescent paints, the result of the absorbed electromagnetic radiation is the emission of light with a longer wavelength (thus lower energy) than the absorbed one. The duration of light emission is referred to as the "lifetime" of the photoluminescence phenomenon, and it can last for minutes, hours or even days.
The paints are composed of a primer (75% by weight), which is responsible for the fluorescent color, and a hardener (15% by weight). The samples were prepared by applying three coats of paint on a white plastic support with dimensions of 10 cm × 10 cm. The resulting sample was tested by means of an integrating sphere spectrophotometer to measure its solar reflectance. Indeed, solar reflectance is the intrinsic characteristic of the material that is responsible for the cool behavior and, thus, for the improved thermal-energy performance during the hot season. The measured solar reflectance index is equal to 90%. Moreover, the lifetime of the light emission after loading was observed, by means of luxmeter measurements, to persist throughout the night.
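As a rough illustration of how a broadband value of this kind is typically derived from integrating-sphere data, the following sketch computes the solar-spectrum-weighted average of a spectral reflectance curve. The wavelength grid, spectral reflectance values, and irradiance weighting used here are hypothetical placeholders, not the measured data of the samples described above.

```r
# Minimal sketch: broadband solar reflectance as the solar-spectrum-weighted
# average of the spectral reflectance measured with an integrating sphere.
# All numbers below are hypothetical placeholders, not the measured sample data.

wavelength_nm <- seq(300, 2500, by = 100)                # assumed measurement grid
rho_spectral  <- 0.85 + 0.05 * (wavelength_nm > 700)     # hypothetical spectral reflectance curve
E_solar       <- exp(-((wavelength_nm - 800) / 700)^2)   # crude stand-in for the solar irradiance spectrum

# Weighted average over the (assumed) solar spectrum gives the broadband value
rho_solar <- sum(rho_spectral * E_solar) / sum(E_solar)
cat(sprintf("Broadband solar reflectance: %.2f\n", rho_solar))
```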
Dynamic simulation for thermal-energy performance
The effectiveness of passive strategies towards energy saving is tested by means of dynamic simulations of the thermal-energy performance of the building. The case study is a single-family, residential building, composed of two floors for a total floor area of 160 m². The building is parallelepiped-shaped, as shown in Fig. 1, with sides of 8 m and 10 m length. With respect to orientation, the 10 m sides are north and south oriented. To more precisely assess the efficacy of the cool, photoluminescent paint, we modelled and simulated two buildings with different glazed/opaque wall ratios but otherwise identical characteristics. For the first case, we selected the minimum glazed surface, equal to 1/8 of the indoor floor area, i.e., 20 m² of glazed surface on the façade of the whole building (Wmin). In the second case study building, we considered the same windows for each room, to maintain the same solar exposure for each window, but with a larger window area. In this case (Wmax), the total glazed surface of the building was equal to 55.4 m², more than double that of the Wmin case. In Wmax, the surface available for the cool, photoluminescent paint (195 m²) is smaller than in Wmin (230 m²); thus, we hypothesized that the effect of the cool-envelope passive strategy is reduced. Indeed, cool materials allow lower surface temperatures to be maintained, as acknowledged in the literature: a wider cool surface on the external envelope (as in Wmin with respect to Wmax) has a higher potential for lowering envelope temperatures. For both Wmin and Wmax, we performed two simulations, one with the reference paint (Ref, considering a solar reflectance index equal to 0.5, for a light plaster material) and one with the cool, photoluminescent paint (Cool) applied as the external envelope finishing layer, with a solar reflectance index equal to 0.85 (measured in lab). A thermal emittance of 0.84 was measured for the samples. In total, four simulations were conducted, evaluating energy performance over an entire year. In addition, a free-running thermal simulation, with the HVAC off, was conducted to assess the indoor air temperature in the living room during a typical hot summer day, as influenced by the paint. Each separate thermal zone of the building was characterized in terms of occupancy profiles (as in [15]) and construction characteristics. The wall, whose finishing layer is the cool photoluminescent plaster, has a thermal transmittance equal to 0.32 W/m²K. The whole building is heated (by means of gas) and cooled (electricity).
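To make the role of the opaque-surface extension explicit, the sketch below compares the solar energy absorbed by the painted envelope in the four simulated scenarios, using the solar reflectance values (0.5 for Ref, 0.85 for Cool) and painted areas (230 m² for Wmin, 195 m² for Wmax) reported above. The yearly incident irradiation figure is a hypothetical placeholder; the actual cooling demand in the study is computed by EnergyPlus, not by this simplified balance.

```r
# Simplified comparison of solar gains absorbed by the opaque painted envelope.
# Reflectance values and painted areas are taken from the text; the yearly
# incident irradiation on the facades is a hypothetical placeholder.

scenarios <- expand.grid(envelope = c("Wmin", "Wmax"), paint = c("Ref", "Cool"))
scenarios$area_m2     <- ifelse(scenarios$envelope == "Wmin", 230, 195)
scenarios$reflectance <- ifelse(scenarios$paint == "Ref", 0.50, 0.85)

H_solar <- 1000  # hypothetical yearly solar irradiation on the painted facades [kWh/m2]

# Absorbed solar energy = (1 - reflectance) * painted area * incident irradiation
scenarios$absorbed_kWh <- (1 - scenarios$reflectance) * scenarios$area_m2 * H_solar
print(scenarios)
```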
Dynamic simulation for the lighting energy performance
The lighting of public spaces, especially in urban areas, is of paramount importance for the safety and comfort of pedestrians during the night hours [16]. Most often, the presence of adequate lighting is able to discourage violence and crime. Studies have demonstrated that improving street lighting can reduce crime by up to 21% [17,18]. Here, we consider a public space, a pedestrian path near the railway station (as in Figure 1d), as a case study. According to CIE 115-2010 [19], the reference class for the area in terms of number of users and surroundings is P5, which requires an average horizontal illuminance of 3.0 lux, a minimum horizontal illuminance of 0.6 lux, a minimum vertical illuminance of 1.0 lux and a minimum semi-cylindrical illuminance of 0.6 lux, the last two illuminance values being additional requirements specifically tailored towards crime reduction. The case study area is modelled by means of the DIALux lighting design program. First, the current situation (Led+Lamps) is modelled, then the solution with the application of the photoluminescent paint (Photo, as shown in Fig. 1d), and finally a solution with both photoluminescent material and street lamps (Photo+Lamps). All the simulations are performed for the night-time, when lighting is needed. The materials are loaded by the daytime solar radiation. Samples exposed on the University of Perugia building roof were monitored with a luxmeter to measure illuminance. Experiments showed that the lifetime of the photoluminescent materials covers the whole night, and the guaranteed illuminance is equal to 28 lux, constant for the entire night after an initial peak (40 minutes) of 490-570 lux (for a cloudy and sunny day). The current situation consists of a lighting system composed of LED light sources on the paving and street lamps along the path; in the photoluminescent-materials scenario, only the street lamps were maintained in addition to the lighting from the photoluminescent materials.
Results from the thermal-energy performance analysis of the case study building
Results of the dynamic energy simulation are reported in the figure and table below and allow a comparison between the different scenarios. Wmin consumes less energy than Wmax, and the cool solution, even though the cool properties of the material are a penalty during winter, is more efficient than the reference case. Moreover, the effectiveness of the cool paint is much higher in Wmin (-3 kWh/m² in Cool with respect to Ref), while in Wmax, as expected, the reduction is lower, equal to -0.6 kWh/m², due to the reduced extension of the cool surface on the external envelope. The difference between the Ref and Cool scenarios is even more visible in Figure 2, where monthly energy consumptions for heating and cooling are reported. Here, during the summer months, especially June, July and August, the Cool case saves around 200 kWh in the Wmin case, while the corresponding savings for Wmax are as low as 30 kWh.
With respect to thermal performance, the same trend can be observed. Without HVAC, Wmin is able to maintain lower air temperatures (-6°C) in the living room, considered as the reference indoor space, with respect to Wmax. This result is mirrored by the lower total yearly energy consumption (Table 1).
Results from the outdoor energy performance analysis of the public space
With respect to the energy saving for lighting, we considered the above-described public space, the path near the railway station. In the current scenario, Led+Lamps, the regulation requirements are exceeded. The photoluminescent materials alone are not able to provide the required illuminance levels, while the addition of street lamps allows the required illuminance values to be exceeded. Energy consumption is evaluated considering the total yearly operating hours. The solution with the photoluminescent material can save an average of 36 W. It should also be considered that the solution with such materials allows savings in terms of maintenance and light bulb replacement.
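As a back-of-the-envelope check, the sketch below converts the reported 36 W average power reduction into a yearly energy figure. The number of yearly lighting hours is an assumption (roughly 12 hours per day), since the exact operating schedule is not reported here.

```r
# Back-of-the-envelope yearly lighting energy saving from the 36 W average
# power reduction reported above. The yearly operating hours are an assumption.

power_saving_W   <- 36
operating_h_year <- 4380   # assumed: lighting on for ~12 h per day

saving_kWh_year <- power_saving_W * operating_h_year / 1000
cat(sprintf("Estimated yearly lighting energy saving: ~%.0f kWh\n", saving_kWh_year))
```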
Conclusions
In this work, cool, photoluminescent materials are considered for application in the built environment as passive strategies to reduce energy consumption. Indeed, we demonstrated that these materials, acting as cool materials, can lower energy consumption when applied as the external envelope finishing layer (plaster paint) of buildings, and are also able to lower energy consumption for lighting in public spaces when applied as a paving finishing layer. This preliminary study paves the way for future studies, following our exploratory work, which should also carefully consider the architectural aspects of such applications.
|
2019-11-22T00:54:31.945Z
|
2019-11-01T00:00:00.000
|
{
"year": 2019,
"sha1": "f06997bc6cb5f230e8220d9eea8f2d3d647629ea",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1343/1/012198",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "5f06a04dacce399a432f885e2605c16b522f76a0",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
261315057
|
pes2o/s2orc
|
v3-fos-license
|
Sarcopenia Adversely Affects Outcomes following Cardiac Surgery: A Systematic Review and Meta-Analysis
Background: Sarcopenia is a degenerative condition characterised by the loss of skeletal muscle mass and strength. Its impact on cardiac surgery outcomes remains poorly investigated. This meta-analysis aims to provide a comprehensive synthesis of the available evidence to determine the effect of sarcopenia on cardiac surgery outcomes. Methods: A systematic review and meta-analysis was conducted following PRISMA guidelines, searching EMBASE, MEDLINE, the Cochrane database, and Google Scholar from inception to April 2023. Twelve studies involving 2717 patients undergoing cardiac surgery were included. Primary outcomes were early and late mortality; secondary outcomes included surgical time, infection rates, and functional outcomes. Statistical analyses were performed using appropriate methods. Results: Sarcopenic patients (906 patients) had a significantly higher risk of early mortality (OR: 2.40, 95% CI: 1.44 to 3.99, p = 0.0007) and late mortality (OR: 2.65, 95% CI: 1.57 to 4.48, p = 0.0003) compared to non-sarcopenic patients (1811 patients). There were no significant differences in overall surgical time or infection rates. However, sarcopenic patients had longer ICU stays, higher rates of renal dialysis, care home discharge, and longer intubation times. Conclusion: Sarcopenia significantly increases the risk of early and late mortality following cardiac surgery, and sarcopenic patients also experience poorer functional outcomes.
Introduction
Sarcopenia, a degenerative condition characterised by the progressive and generalised loss of skeletal muscle mass and strength, is increasingly recognised as a significant health concern [1]. It is commonly associated with aging, and its prevalence is particularly noticeable in the elderly. Sarcopenia has been linked to a broad range of adverse health outcomes, including impaired physical function, increased risk of falls, prolonged recovery periods, and higher mortality rates [2]. In the context of cardiac surgery, sarcopenia's implications are even more profound. As surgical techniques and medical management continue to advance, a growing number of older adults are becoming candidates for cardiac surgery. However, age-related conditions like sarcopenia pose unique challenges in this demographic, often complicating their post-operative recovery and overall prognosis [3,4].
Despite the mounting evidence demonstrating the impact of sarcopenia on surgical outcomes, it remains a poorly recognised and underdiagnosed entity in cardiac surgery. This discrepancy, in part, could be attributed to the complexity surrounding the diagnostic criteria of sarcopenia. There is no universal consensus on the definition of sarcopenia, with multiple working groups suggesting different guidelines. Notably, the European Working Group on Sarcopenia in Older People (EWGSOP) [5], the Asian Working Group for Sarcopenia (AWGS) [6], and the Foundation for the National Institutes of Health [7] each provide their unique sets of criteria, differing in their approach to quantifying muscle mass, muscle strength, and physical performance. This variability in diagnostic criteria has led to discrepancies in reported prevalence rates and has created challenges in comparing results across studies. In the past decade, the body of research investigating the impact of sarcopenia on cardiac surgery has grown substantially. Still, the findings have been somewhat inconsistent due to variations in study designs, the heterogeneity in patient populations, and the divergence in defining sarcopenia.
While obesity has been studied extensively and identified as a risk factor for poor outcomes in cardiac surgery [8,9], body mass index (BMI) is an inaccurate indicator of the relationship between adiposity and muscle mass [10]. Indeed, a sarcopenic patient can exhibit a normal or even elevated BMI that would qualify them as having sarcopenic obesity, a condition disregarded in previous studies only concentrating on BMI on cardiac surgical outcomes [11].
Through this meta-analysis, we aim to provide a more robust estimate of the actual effect size of sarcopenia on cardiac surgery outcomes. By highlighting the burden of sarcopenia in cardiac surgery, we hope to promote the development of targeted interventions and the incorporation of sarcopenia screening into preoperative evaluations. Furthermore, this analysis could provide insights into future research areas, particularly interventional studies aiming to mitigate the impact of sarcopenia on cardiac surgery outcomes.
Literature Search Strategy
A systematic review and meta-analysis was conducted in accordance with the Cochrane Collaboration published guidelines and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. A literature search of EMBASE, MEDLINE, Cochrane, PubMed, and Google Scholar was conducted from inception to April 2023 (Figure 1). The search terms used were: ("Sarcopenia" OR "Muscle Wasting" OR "Muscle Mass" OR "Muscle Atrophy") AND ("Cardiac Surgery" OR "Cardio-thoracic Surgery" OR "Heart Surgery" OR "Surgical Aortic Valve Replacement" OR "Coronary Artery Bypass Grafting" OR "aortic surgery" OR "mitral valve surgery"). Further articles were identified using the 'related articles' function on MEDLINE and a manual search of the reference lists of articles found through the original search. The only limits applied were the English language and the mentioned time frame. Patient consent and institutional review board approval were unnecessary in this study as no patients were recruited.
Study Inclusion and Exclusion Criteria
All original comparative articles of patients with or without sarcopenia undergoing adult cardiac surgery and reporting on mortality and morbidity outcomes were included. Studies were excluded from the review if: (1) inconsistencies in the data precluded valid extraction; (2) the study was performed in an animal model; (3) studies did not have a comparison group; or (4) the size of the study population was small (<10 patients). Case reports, reviews, abstracts from meetings and preclinical studies were excluded. Using the above criteria, two reviewers (AA and A.AR.) independently selected articles for further assessment after the title and abstract review. A third independent reviewer (T.A.) resolved disagreements between the two reviewers. Potentially eligible studies were then retrieved for full-text assessment.
Data Extraction and Critical Appraisal
All full texts of retrieved articles were read and reviewed by two authors (A.A. and A.AR.), and the inclusion or exclusion of studies was decided unanimously. When there was disagreement, a third reviewer (T.A.) made the final decision. Using a pre-established protocol, the following data were extracted: first author, study type and characteristics, number of patients, population demographics, stroke rate, overall stroke rate, major bleeding, cardiopulmonary bypass (CBP) time, hospital length of stay, kidney dysfunction, early mortality, and overall mortality. For this review, a data extraction sheet was developed and pilot-tested on three randomly selected included studies, whereupon the sheet was refined accordingly. Data extraction was performed by two review authors (A.A and A.AR.). A third author (T.A.) validated the correctness of the tabulated data. Potential inter-reviewer disagreements were resolved by consensus. Primary outcomes were early/overall mortality. Secondary outcomes were hospital length of stay (LOS), intensive care unit (ICU) LOS, cross-clamp (CC) time, CBP time, overall surgery time, postoperative arrhythmias, sternal wound infection, stroke, kidney failure, discharge to care home, and intubation time.
Data Analysis
Odds ratios (OR) with 95% confidence intervals (CI) and p-values were calculated for each categorical clinical outcome. Additionally, we utilised the Mean Difference (MD) as a statistical analysis method to analyse continuous data in our meta-analysis. MD enabled us to quantify the absolute difference in means between two groups, providing insights into the magnitude of effect size. Forest plots were created to represent the clinical outcomes. Chi-squared and I² tests were executed for the assessment of statistical heterogeneity. Using a Mantel-Haenszel random-effects model, the ORs were combined across the studies. Funnel plots were constructed to assess publication bias. All analyses were completed through the "metafor" package in R Statistical Software (version 4.0.2) (R Foundation for Statistical Computing, Vienna, Austria). A two-tailed p-value < 0.05 was considered statistically significant.
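For illustration, the sketch below shows how a pooled odds ratio of this kind can be obtained with the metafor package. The event counts are hypothetical placeholders, and the exact model options used in the study (Mantel-Haenszel weighting combined with a random-effects model) may differ from this minimal inverse-variance example.

```r
# Minimal random-effects pooling of odds ratios with metafor.
# Event counts below are hypothetical placeholders, not the extracted study data.
library(metafor)

dat <- data.frame(
  study = c("A", "B", "C"),
  ai = c(10, 8, 12),  n1i = c(80, 60, 100),   # events / total, sarcopenic group
  ci = c(6, 5, 9),    n2i = c(160, 120, 200)  # events / total, non-sarcopenic group
)

# Log odds ratio and sampling variance for each study
dat <- escalc(measure = "OR", ai = ai, n1i = n1i, ci = ci, n2i = n2i, data = dat)

# Random-effects model (DerSimonian-Laird between-study variance estimator)
res <- rma(yi, vi, data = dat, method = "DL")
predict(res, transf = exp)   # pooled OR with 95% CI
res$I2                       # I-squared heterogeneity statistic

forest(res, atransf = exp)   # forest plot on the OR scale
funnel(res)                  # funnel plot for publication bias
```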
Sensitivity Analysis
The influence of a single study on the overall effect of sarcopenic versus non-sarcopenic patients undergoing adult cardiac surgery on the primary outcome was assessed by sequentially removing one study (the "leave-one-out" method). This sensitivity analysis was carried out to test the consistency of results to investigate if individual studies had an excessive impact on the analysis across all outcomes.
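The leave-one-out approach described above maps directly onto metafor's leave1out() helper; continuing the hypothetical example from the previous sketch:

```r
# Leave-one-out sensitivity analysis: refit the random-effects model with each
# study removed in turn and inspect how the pooled estimate changes.
# 'res' is the rma() fit from the previous (hypothetical) example.
l1o <- leave1out(res, transf = exp)
print(l1o)
```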
Baseline Characteristics
There were 2717 patients included in this meta-analysis, of which 906 were sarcopenic and 1811 non-sarcopenic. The data on baseline characteristics can be found in Table 1, Supplementary Figure S1. The mean age of the patients in the sarcopenic and non-sarcopenic cohorts was 70.57 and 65.36 years, respectively. In terms of BMI, the mean for the sarcopenic group was 22.21, while for the non-sarcopenic group, it was 23.73. The mean Left Ventricular Ejection Fraction (LVEF) percentages were 57.32% and 57.86% for the sarcopenic and non-sarcopenic groups, respectively. Lastly, the mean psoas muscle area index for the sarcopenic group was 478.2 mm²/m²; for the non-sarcopenic group, it was 819.6 mm²/m². The mean percentages of sarcopenic and non-sarcopenic patients with diabetes were 41.60% and 34.41%, respectively. Similarly, the mean percentages of sarcopenic and non-sarcopenic patients with hypertension were 70.39% and 73.20%, respectively. In the case of chronic kidney disease, the mean percentages were 24.38% for the sarcopenic and 17.65% for the non-sarcopenic group.
Early Mortality and Late Mortality
Sarcopenic patients were compared with non-sarcopenic patients, with ten studies reporting on early mortality outcomes postoperatively ( Figure 2A). The overall OR for early mortality showed a statistically significant difference favouring non-sarcopenic patients (random-effects model: OR: 2.40; 95% CI: 1.44 to 3.99; p = 0.0007). There was evidence of no heterogeneity among studies reporting on early mortality.
Sarcopenic patients were compared with non-sarcopenic patients, with seven studies reporting on late mortality outcomes postoperatively (Figure 2B). The overall OR for late mortality showed a statistically significant difference favouring non-sarcopenic patients (random-effects model: OR: 2.65; 95% CI: 1.57 to 4.48; p = 0.0003). There was evidence of high heterogeneity among studies reporting on late mortality.
Overall Surgery Time
Sarcopenic patients were compared with non-sarcopenic patients, with five studies reporting overall surgery time ( Figure 3A). The overall MD for overall surgery time showed no statistically significant difference between the two groups (random-effects model: MD: −0.21; 95% CI: −8.85 to 8.43; p = 0.96). There was evidence of no heterogeneity among studies reporting on overall surgical time.
CBP Time
Sarcopenic patients were compared with non-sarcopenic patients, with nine studies reporting on CBP time ( Figure 3B). The overall MD for CBP time showed no statistically significant difference between the two groups (random-effects model: MD: 2.25; 95% CI: −1.55 to 6.04; p = 0.25). There was evidence of no heterogeneity among studies reporting on CBP time.
CC Time
Sarcopenic patients were compared with non-sarcopenic patients, with five studies reporting on CC time ( Figure 3C). The overall MD for CC time showed no statistically significant difference (random-effects model: MD: 0.52; 95% CI: −3.31 to 4.36; p = 0.79). There was evidence of no heterogeneity among studies reporting on CC time.
Hospital LOS
Sarcopenic patients were compared with non-sarcopenic patients, with nine studies reporting on Hospital LOS (Figure 3D). The overall MD for Hospital LOS showed no statistically significant difference (random-effects model: MD: 1.47; 95% CI: 0.00 to 2.93; p = 0.05). There was evidence of high heterogeneity among studies reporting on Hospital LOS.
ICU LOS
Sarcopenic patients were compared with non-sarcopenic patients, with four studies reporting on ICU LOS ( Figure 3E). The overall MD for ICU LOS showed a statistically significant difference favouring non-sarcopenic patients (random-effects model: MD: 0.60; 95% CI: 0.13 to 1.07; p = 0.01). There was evidence of no heterogeneity among studies reporting on ICU LOS.
Intubation Time
Sarcopenic patients were compared with non-sarcopenic patients, with four studies reporting on intubation time ( Figure 3F). The overall MD for intubation time showed a statistically significant difference favouring non-sarcopenic patients (random-effects model: MD: 2.14; 95% CI: 1.48 to 2.80; p < 0.0001). There was evidence of no heterogeneity among studies reporting on intubation time.
Postoperative Arrhythmia
Sarcopenic patients were compared with non-sarcopenic patients, with six studies reporting on postoperative arrhythmia ( Figure 4A). The overall OR for postoperative arrhythmia showed no statistically significant difference (random-effects model: OR: 1.08; 95% CI: 0.64 to 1.81; p = 0.77). There was evidence of moderate heterogeneity among studies reporting on postoperative arrhythmia.
Stroke
Sarcopenic patients were compared with non-sarcopenic patients, with six studies reporting on stroke ( Figure 4B). The overall OR for stroke showed no statistically significant difference (random-effects model: OR: 1.55; 95% CI: 0.84 to 2.86; p = 0.16). There was evidence of no heterogeneity among studies reporting on stroke.
Sternal Wound Infection
Sarcopenic patients were compared with non-sarcopenic patients, with six studies reporting on sternal wound infection ( Figure 4C). The overall OR for sternal infection showed no statistically significant difference (random-effects model: OR: 1.76; 95% CI: 0.80 to 3.88; p = 0.16). There was evidence of no heterogeneity among studies reporting on sternal infection.
Postoperative Need for Dialysis
Sarcopenic patients were compared with non-sarcopenic patients, with five studies reporting on dialysis ( Figure 4D). The overall OR for dialysis showed a statistically significant difference favouring non-sarcopenic patients (random-effects model: OR: 2.87; 95% CI: 1.19 to 6.94; p = 0.02). There was evidence of no heterogeneity among studies reporting on dialysis.
Discharge to Care Home
Sarcopenic patients were compared with non-sarcopenic patients, with six studies reporting on care home discharge ( Figure 4E). The overall OR for care home discharge showed a statistically significant difference favouring non-sarcopenic patients (random-effects model: OR: 1.92; 95% CI: 1.31 to 2.81; p < 0.001). There was evidence of no heterogeneity among studies reporting on care home discharge.
Sensitivity Analysis: Hospital LOS
Sensitivity analysis was carried out for all outcomes, with all outcomes other than Hospital LOS showing no statistically different impact on heterogeneity. Sarcopenic patients were compared with non-sarcopenic patients, with eight studies reporting on hospital LOS (Figure 4F). The overall MD for Hospital LOS showed a statistically significant difference favouring non-sarcopenic patients when the study by Oh et al. was removed (random-effects model: MD: 1.96; 95% CI: 0.57 to 3.34; p = 0.005). There was evidence of moderate heterogeneity among studies reporting on Hospital LOS.
Risk of Bias across the Studies
The funnel plot analysis (Supplementary Figures S1-S9) disclosed no asymmetry around the axis for the outcomes, thus making publication bias related to all outcomes unlikely.
Sarcopenia and Mortality
The findings of our study demonstrate a significant difference in both early and late mortality between sarcopenic and non-sarcopenic patients undergoing cardiac surgery. These results are in line with the existing body of literature on the subject. Specifically, the OR for early mortality was 2.40 (95% CI: 1.44 to 3.99; p < 0.001), and for late mortality, it was 2.65 (95% CI: 1.57 to 4.48; p < 0.001). These results indicate that sarcopenic patients faced more than twice the risk of experiencing early mortality postoperatively compared to their non-sarcopenic counterparts. This elevated risk is likely attributed to the diminished physical reserve and heightened vulnerability to stressors, such as surgery, observed in sarcopenic patients [24].
Our findings align with the study by Englesbe et al., 2010 [25], which found sarcopenia to be a significant predictor of mortality in patients undergoing major elective general abdominal surgery and transplantation. The study reported an OR of 2.86, indicating that sarcopenic patients were nearly three times more likely to die than non-sarcopenic patients. Similarly, a systematic review and meta-analysis by Malietzis et al., 2016 [26] found that sarcopenia was associated with increased postoperative complications and mortality in patients undergoing gastrointestinal surgery.
Furthermore, the results of our study build upon the findings of Wayda et al., 2018 [27], supporting the independent association between socioeconomic status, as assessed by the Distressed Communities Index (DCI), and operative mortality following coronary artery bypass grafting (CABG). Wayda et al.'s study revealed that patients with low socioeconomic status (SES) based on the DCI had a 1.6-fold higher risk of postoperative mortality following CABG compared to patients with high SES. These findings suggest that socioeconomic factors, which might hypothetically influence the prevalence and impact of sarcopenia [28], also substantially influence patient outcomes.
The combined impact of sarcopenia on the overall mortality risk in surgical patients is noteworthy. Sarcopenia, often associated with aging, malnutrition, and physical inactivity, contributes to reduced muscle strength and poor physical performance [29], thereby elevating the risk of unfavourable postoperative outcomes. The presence of sarcopenia can exacerbate the challenges encountered by patients with low SES, who may already be at a higher risk of poor outcomes due to limited access to healthcare, poor nutrition, and higher stress levels.
Surgical Time
Our data showed no statistically significant difference in overall surgery time, CBP time, and CC time between sarcopenic and non-sarcopenic patients. This suggests that sarcopenia does not significantly prolong the duration of cardiac surgery. This is consistent with the findings of a study by Fukuda et al., 2016 [30], which found no significant difference in operation times between sarcopenic and non-sarcopenic patients undergoing gastric cancer surgery.
With the exception of patients with sarcopenic obesity, surgical access is not influenced by sarcopenia, and the technical aspects of the procedure can be carried out seamlessly, especially when one does not encounter significant amounts of pre-pericardial or epicardial fat that is often seen in obese patients. However, it is worth noting that the lack of significant difference in surgical time does not negate the potential impact of sarcopenia on other surgical outcomes.
Infection Rates
Our data showed no statistically significant difference in the rate of sternal infection between sarcopenic and non-sarcopenic patients. This is an interesting finding, as some studies have suggested that sarcopenia may increase the risk of postoperative infections. For instance, a study by Lieffers et al., 2012 [31] found that sarcopenia was associated with a higher risk of infection in cancer patients. It is recognised that obesity is the main risk factor for surgical site infections such as sternal wound infections [32,33]. Unless there is sarcopenic obesity or an active hypercatabolic state, sternal closure and healing should not be negatively influenced in sarcopenic patients, which would have led to significantly higher wound complications against the non-sarcopenic counterparts. This discrepancy may be due to differences in patient populations, surgical procedures, and infection prevention measures, warranting further investigation.
Functional Outcomes
Our data showed statistically significant differences in several functional outcomes between sarcopenic and non-sarcopenic patients. Sarcopenic patients had a longer ICU length of stay, higher likelihood of requiring dialysis, higher rate of care home discharge, and longer intubation time. These findings suggest that sarcopenia can significantly impact patients' postoperative recovery and quality of life.
(a) ICU Length of Stay: The longer ICU stay for sarcopenic patients may be due to their lower physiological reserve and increased vulnerability to complications. Sarcopenia, characterised by a loss of muscle mass and function, can lead to frailty, which is associated with a higher risk of adverse outcomes, including prolonged ICU stay. This is supported by a study by Moisey et al., 2013 [34], which found that sarcopenic patients had a longer ICU stay after emergency abdominal surgery.

(b) Dialysis: The higher likelihood of requiring dialysis in sarcopenic patients probably relates to their increased risk of acute kidney injury (AKI) postoperatively. Sarcopenia may contribute to AKI through various mechanisms, including inflammatory pathway activation following chronic inflammation and increased susceptibility to nephrotoxic agents [35]. A study by Bang et al. [35] found that sarcopenia was an independent risk factor for AKI in patients undergoing abdominal aortic aneurysm surgery.

(c) Care Home Discharge: The higher rate of care home discharge for sarcopenic patients may reflect their poorer functional status and increased need for assistance with daily activities postoperatively. Sarcopenia is associated with physical disability and reduced independence, which may necessitate care home admission. This is consistent with a study by Landi et al., 2012 [36], which found that sarcopenic older adults were more likely to be institutionalised.

(d) Intubation Time: The longer intubation time for sarcopenic patients may be due to their increased risk of respiratory complications. Sarcopenia can impair respiratory muscle function, leading to reduced lung volumes and an ineffective cough, which can prolong the need for mechanical ventilation. A study by Puthucheary et al., 2013 [37] found that ICU-acquired weakness, which is often associated with sarcopenia, was a predictor of prolonged mechanical ventilation.
Measurements of Sarcopenia in Clinical Practice
Sarcopenia is typically identified and measured using a combination of methods that assess muscle mass, muscle strength, and physical performance. The EWGSOP and the AWGS have provided guidelines for the diagnosis of sarcopenia, which include the use of these three parameters [5,6]. However, in the practical setting of cardiac surgery, where patients often undergo preoperative CT scans, these existing images can be utilised to assess sarcopenia, making it a practical and cost-effective approach. The most common method of assessing sarcopenia using CT scans is by measuring the cross-sectional skeletal muscle area (SMA, cm²) at the level of the third lumbar vertebra, which is highly correlated with total body muscle mass. Adjusting the SMA for height squared yields the Skeletal Muscle Index (SMI, cm²/m²), a metric used to assess relative muscle mass. While the specific thresholds can vary, depending on age, body mass index (BMI), and definitions of sarcopenia, a commonly used threshold is an SMI less than 50-55 cm²/m² for men and less than 35-40 cm²/m² for women [38]. van der Werf et al. demonstrated a predicted 5th percentile SMI value (for all BMIs) of 36.9 cm²/m² for men and 28.2 cm²/m² for women in the age group of 70-79 years, indicating the extent and frequency of muscle mass loss in this age group [39]. Lastly, the psoas muscle has also been of particular interest as it can be easily visualised and measured on routine preoperative CT scans. The cross-sectional area of the psoas muscle has been used as a surrogate marker for total body muscle mass.
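As a concrete illustration of the SMI calculation and thresholding described above, the sketch below converts a skeletal muscle area measured at L3 into an index and applies sex-specific cut-offs chosen from within the ranges cited above; the patient values are hypothetical, and, as noted in the text, the cut-offs vary across definitions.

```r
# Illustrative SMI calculation: skeletal muscle area at L3 divided by height squared,
# classified against example sex-specific cut-offs taken from within the ranges
# cited in the text (50-55 cm2/m2 for men, 35-40 cm2/m2 for women).
# Patient values are hypothetical.

sma_cm2  <- 140      # cross-sectional skeletal muscle area at L3 [cm^2]
height_m <- 1.72
sex      <- "male"

smi    <- sma_cm2 / height_m^2                 # [cm^2/m^2]
cutoff <- ifelse(sex == "male", 52.4, 38.5)    # example thresholds within the cited ranges

cat(sprintf("SMI = %.1f cm2/m2 -> %s\n", smi,
            ifelse(smi < cutoff,
                   "sarcopenic by this criterion",
                   "not sarcopenic by this criterion")))
```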
Sarcopenia and Frailty
The interaction between sarcopenia and frailty is complex and multifaceted, and both conditions often coexist in older adults, making their assessment and management challenging. Sarcopenia, characterised by the loss of muscle mass and strength, and frailty, a state of increased vulnerability to stressors due to decreased physiological reserves, are both prevalent conditions in older adults [24]. A cross-sectional study conducted by Gingrich et al. (2019) found that 42% of older medical inpatients had sarcopenia and 33% were frail, with these conditions overlapping in 19% of patients [40]. This indicates a significant interaction between the two syndromes, suggesting they may share common etiological factors such as reduced food intake, inflammation, hormonal changes, increased energy requirements, and reduced physical activity.
While sarcopenia can be measured relatively directly through assessments of muscle mass and strength, frailty, due to its multifaceted nature, is more challenging to measure. Frailty encompasses a decline in function across multiple organ systems, leading to increased vulnerability to stressors [41]. Various frailty indices and scales have been developed, such as the Fried Frailty Index and the Frailty Phenotype, but these require comprehensive clinical assessments and may not be feasible in all settings [41]. Furthermore, there is no universally accepted definition or measurement for frailty, leading to variability in how it is assessed and interpreted in clinical and research settings.
Based on these findings, several practical steps can be proposed for the perioperative management of sarcopenic cardiac surgery patients:

1. Preoperative identification of sarcopenia: Early identification of sarcopenia can allow for preoperative interventions to improve patient outcomes. This can be achieved through simple screening tools or more comprehensive assessments such as CT or MRI scans.
2. Preoperative optimisation: Once sarcopenia is identified, preoperative optimisation strategies should be implemented. This could include nutritional supplementation, physical therapy, and exercise programs aimed at increasing muscle mass and strength.
3. Risk stratification: Sarcopenic patients should be considered high-risk surgical candidates. This should be considered when planning the surgical approach and postoperative care.
4. Intraoperative care: Consideration should be given to minimizing operative time and blood loss, as sarcopenic patients may be more susceptible to intraoperative complications.
5. Postoperative rehabilitation: Early mobilisation and physical therapy should be initiated postoperatively to prevent further muscle loss and to promote recovery.
6. Nutritional support: Postoperative nutritional support should be provided to meet the increased protein and calorie needs of sarcopenic patients and to support muscle recovery.
7. Multidisciplinary approach: The care of sarcopenic patients should involve a multidisciplinary team, including surgeons, anaesthesiologists, dietitians, physical therapists, and geriatricians. This can ensure a comprehensive approach to the management of sarcopenia and its associated risks.
8. Patient education: Patients should be educated about the implications of sarcopenia and the steps they can take to improve their muscle health. This can empower patients to take an active role in their care and recovery.
9. Research: Further research should be conducted to better understand the impact of sarcopenia on cardiac surgery outcomes and to develop effective interventions for this patient population.
Conclusions
In conclusion, this meta-analysis highlights the significant adverse impact of sarcopenia on patients undergoing cardiac surgery. Sarcopenic patients demonstrate higher early and late mortality rates, longer ICU stays, increased likelihood of requiring dialysis, higher rates of care home discharge, and longer intubation times. These findings underscore the importance of recognizing sarcopenia as a significant risk factor in cardiac surgery.
Our findings align with existing literature across various surgical fields, further emphasizing the universal relevance of sarcopenia in surgical outcomes. Despite the heterogeneity in some of the studies, the overall trend suggests a consistently negative impact of sarcopenia on postoperative outcomes. However, our study also revealed that sarcopenia did not significantly affect certain outcomes, such as surgical time and infection rates. This suggests that the influence of sarcopenia may be more pronounced in certain areas, and its impact may be modulated by other factors such as surgical technique, perioperative care, and the patient's overall health status. To optimize outcomes in sarcopenic patients, a comprehensive and multidisciplinary approach is recommended. This includes early identification and preoperative optimisation of sarcopenic patients, risk stratification, careful intraoperative management, postoperative rehabilitation, nutritional support, and patient education. Further research is needed to better understand the mechanisms underlying the impact of sarcopenia on surgical outcomes and to develop effective interventions.
|
2023-08-30T15:13:55.183Z
|
2023-08-26T00:00:00.000
|
{
"year": 2023,
"sha1": "52b3b563dba8aa233b07797837351f5152640e27",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/12/17/5573/pdf?version=1693052867",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "09bc891a8f982194f32dad656a6dd888f379b7f6",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
261231734
|
pes2o/s2orc
|
v3-fos-license
|
Autologous minced cartilage repair for chondral and osteochondral lesions of the knee joint demonstrates good postoperative outcomes and low reoperation rates at minimum five-year follow-up
Purpose Minced cartilage is a one-step, autologous procedure with promising short-term results. The aim of the present study was to evaluate mid-term results in a patient cohort with chondral and osteochondral lesions in the knee joint treated with minced cartilage. Methods From 2015 through 2016, a total of 34 consecutive patients were treated with a single-step, autologous minced cartilage for knee chondral and osteochondral lesions. Numeric analogue scale (NAS) for pain and knee function were obtained prior to surgery and at 12, 24 and 60 months postoperatively. Secondary outcomes, including Lysholm score, Tegner activity score, and the International Knee Documentation Committee (IKDC) score, were recorded at final follow-up. MRI examinations of patients with unplanned radiological follow-up were analysed using the MOCART (Magnetic Resonance Observation of Cartilage Repair Tissue) score. Results A total of 28 patients (44.1% females, age at surgery: 29.5 ± 11.5 years) were available at a mean follow-up of 65.5 ± 4.1 months. Mean defect size was 3.5 ± 1.8 cm2. NAS for pain decreased from a median of 7 (range: 2–10) preoperatively to 2 (0–8) postoperatively. NAS knee function improved from a median of 7 (range: 2–10) to 3 (0–7) after five years, respectively. Satisfactory Lysholm (76.5 ± 12.5), IKDC (71.6 ± 14.8) and Tegner activity (4, range 3–9) scores were reported at final follow-up. Of all patients, 21(75%) and 19 (67.9%) reached or exceeded the PASS for the IKDC- and Lysholm score at final follow-up, respectively. The average overall MOCART 2.0 scores for all postoperatively performed MRIs (n = 23) was 62.3 ± 17.4. Four (14.2%) postoperative complications were directly linked to minced cartilage, one (3.5%) of which required revision surgery. Conclusion One-step, autologous minced cartilage repair of chondral and osteochondral lesions of the knee without the necessity for subchondral bone treatment demonstrated good patient-reported outcomes, low complication rates, and graft longevity at mid-term follow-up. Minced cartilage represents a viable treatment option to more traditional cartilage repair techniques even in mid-term. Level of evidence Level III.
Introduction
The incidence of chondral and osteochondral lesions of the knee is increasing, but treatment remains challenging [13,52]. Several different cartilage repair techniques have been described, each aiming to maximize the amount of mature and organized hyaline or hyaline-like cartilage [7,19]. However, so far, no technique has been able to regenerate normal hyaline cartilage in adults on a regular basis, and no technique has been proven to be superior. Osteochondral autograft transfer system (OATS), autologous chondrocyte implantation (ACI), matrix-assisted chondrocyte implantation (MACI) and several cartilage repair techniques using scaffolds (natural or synthetic) are currently the most common therapies for medium to large defects [7,22]. ACI and MACI are purported to generate hyaline or hyaline-like cartilage, with low associated reoperation rates and favourable clinical outcomes even in complex cases [5,7,18,29,49]. However, the two-step approach of ACI and MACI, which requires laboratory cell cultivation, results in a high financial and clinical burden. OATS results are reported to be similar to MACI, but allograft availability may be limited and cost-intensive.
Recently, the minced cartilage procedure, a technique in which viable autologous cartilage is collected, sliced into small pieces, and reimplanted, has undergone renewed interest [51,55]. Autologous thrombin and PRP, fibrin glue or a membrane may additionally be used for fragment fixation [42,50-52]. Short-term outcomes in the literature have been promising, but mid- and long-term results are lacking and urgently needed [8,10,42].
The aim of the present single-cohort study is to report minimum five-year outcome data of a cohort that underwent minced cartilage repair for the treatment of chondral and osteochondral lesions of the knee. It was hypothesized that minced cartilage would maintain favourable clinical results over longer-term follow-up, both in terms of patient-reported outcome measures and reoperation rates.
Materials and methods
Approval from the local ethics committee of the Canton of Zurich (KEK-ZH-Nr. 2015-0258) was obtained prior to study initiation. Informed consent was signed by all participants. The first 34 consecutive patients who underwent minced cartilage knee surgery between 2015 and 2016 were retrospectively analysed from a prospectively maintained database that routinely collected patient-related outcome data.
All patients were contacted personally by telephone for an interview and questionnaires were collected by mail.
Electronic medical records were reviewed to obtain patient demographic, surgical, and imaging data.
Surgical procedure
The indication and detailed surgical treatment regimen were described previously [42]. All surgical interventions were performed by a single specialized and fellowship-trained orthopaedic surgeon (G.M.S.). All patients obtained preoperative Magnetic Resonance Imaging (MRI) and conventional knee joint radiographs for diagnostic and surgical planning purposes. Patients with chondral or osteochondral lesions of the knee who did not require any subchondral bone treatment were included in this study. Additionally, those with osteochondritis dissecans lesions not amenable to primary fixation, with otherwise healthy-appearing cartilage in the remaining compartments, were included.
The final indication for performing a minced cartilage procedure was made following routine arthroscopy. In all cases, the second-generation repair technique was used as described previously [42,50]. In brief, depending on the location of the chondral defect, a medial or lateral mini-arthrotomy approach was performed and the lesion was inspected and measured. The defect was then debrided with a curette until a stable, healthy cartilage rim was obtained. The healthy hyaline cartilage obtained from the debridement was collected and subsequently minced into small fragments of approximately 1 mm until a paste-like consistency was achieved. If an insufficient amount of cartilage was obtained, additional healthy cartilage was harvested from the intercondylar notch using osteochondral cylindrical harvesters. Finally, the minced cartilage was placed into the defect and sealed with fibrin glue or a combination of fibrin glue and a membrane (Chondro-Gide, Geistlich Pharma).
Rehabilitation
An identical postoperative rehabilitation protocol was used in all patients, with initial bed rest in a straight knee brace for 24 h. A continuous passive motion machine was started on the first postoperative day. For the first six weeks, partial weight-bearing was permitted with crutches and range of motion was limited to 0 to 90 degrees. After six weeks, a gradual increase in weight-bearing and range of motion was permitted, with full weight-bearing and unrestricted range of motion achieved at approximately nine weeks postoperatively.
Patient-reported outcome measures
Primary patient-reported outcome measures (PROMs), obtained preoperatively and at 12, 24 and 60 months of follow-up, were the numeric analogue scale (NAS) for pain and subjective knee function (0 = no pain/best function, 10 = worst pain/worst function). Secondary PROMs, including Lysholm, IKDC (International Knee Documentation Committee), COMI (Core Outcome Measurement Index) and Tegner activity score, were obtained at final follow-up only, as they were not routinely captured preoperatively in the patient-reported outcome data system. All postoperative complications and reoperations were recorded.
Radiological outcome measures
Preoperative 3-T MRI scans obtained at our institution were evaluated by a trained and blinded examiner using the AMADEUS (Area Measurement and Depth and Underlying Structures) score in order to quantify the severity of chondral and osteochondral defects prior to cartilage repair [33]. Intraoperative grading was performed according to the ICRS (International Cartilage Repair Society) grading system [3]. Hyaline cartilage or repair tissue was analysed using the MOCART (magnetic resonance observation of cartilage repair tissue; 0 = worst, 100 = best) score in all patients at 6 months postoperatively [40]. Preoperative and six-month postoperative outcomes have been published previously and are therefore not reported in the present study [42]. In addition to the planned study MRIs, all subsequent, unscheduled MRI examinations were captured as part of this study and analysed using the MOCART 2.0 score by a single investigator who was blinded to the clinical outcome of the patients. The MOCART 2.0 score has been shown to have excellent interrater and intrarater reliability [53].
Statistical analysis
Statistical analysis was performed using SPSS (Version 28, IBM) and Microsoft Excel (Version 16). Normal distribution of the data was tested using the Kolmogorov-Smirnov test. The Friedman test for dependent samples with the Dunn-Bonferroni post-hoc test was used to compare NAS values at the different study time points. Student's t test or the Mann-Whitney U test was applied to determine differences between groups. The patient-acceptable symptomatic state (PASS) threshold was employed as a tool to assess the minimum scores associated with patient satisfaction [30]. In cartilage repair, a final IKDC score of 62.1 and a Lysholm score of 70 have been reported to correspond with the PASS [6]. A difference between IKDC and Lysholm score values greater than 9.2 and 13.0, respectively, was considered a clinically important difference (CID) [6]. Using G*Power (Version 3.1), the Wilcoxon signed-rank test for matched pairs with maximum correlation between the pre- and postoperative groups was used for the a priori sample size calculation. With the α-level set to 0.05, the power (1−β) to 0.80, and assuming a one-tailed analysis, a total of 28 patients were deemed necessary to detect a one-point difference with a two-point standard deviation in the NAS for pain score. A one-point difference in NAS for pain is within the reported minimal clinically important difference (MCID) of 2.7 points for cartilage procedures [32]. With an expected loss to follow-up rate of 20% at mid- to long-term, the first 34 consecutive patients were included in the study.
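For illustration only (the published analysis was run in SPSS and G*Power), the repeated-measures comparison of NAS values could be reproduced along the following lines in Python. The patient scores below are invented placeholders, and pairwise Wilcoxon signed-rank tests with Bonferroni correction are used as a simple stand-in for the SPSS Dunn-Bonferroni post-hoc procedure.

```python
# Illustrative sketch of the repeated-measures analysis described above; the
# actual analysis was run in SPSS, and the NAS values here are invented placeholders.
from itertools import combinations
import numpy as np
from scipy import stats

# One row per patient; columns: preoperative, 12, 24 and 60 months (hypothetical data)
nas_pain = np.array([
    [7, 3, 2, 3],
    [6, 2, 2, 3],
    [8, 4, 3, 4],
    [5, 2, 1, 1],
    [7, 3, 3, 2],
])
labels = ["preop", "12m", "24m", "60m"]

# Friedman test for dependent samples across the four time points
chi2, p = stats.friedmanchisquare(*nas_pain.T)
print(f"Friedman chi2 = {chi2:.2f}, p = {p:.4f}")

# Pairwise Wilcoxon signed-rank tests with Bonferroni correction, used here as a
# simple stand-in for the Dunn-Bonferroni post-hoc procedure available in SPSS
pairs = list(combinations(range(nas_pain.shape[1]), 2))
for i, j in pairs:
    stat, p_pair = stats.wilcoxon(nas_pain[:, i], nas_pain[:, j])
    p_adj = min(1.0, p_pair * len(pairs))
    print(f"{labels[i]} vs {labels[j]}: adjusted p = {p_adj:.3f}")
```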
Results
A total of 34 consecutive patients treated with minced cartilage were included in the study. A detailed overview of patient characteristics and concomitant procedures is shown in Table 1. Cartilage defect characteristics are displayed in Table 2. The follow-up rate at the final follow-up, after a mean of 65.5 ± 4.1 months, was 82.4% (n = 28). Patients lost to follow-up did not differ in baseline characteristics or intraoperative findings from those with complete mid-term follow-up.
Patient-reported outcome measurements
Overall, a statistically significant decrease in NAS for pain with a medium to strong effect size and a statistically significant (p < 0.001) increase in knee function with a medium to strong effect size were observed throughout the study period (Fig. 1). There was no statistically significant increase in postoperative pain or worsening of knee function at any of the three postoperative follow-up time points (Fig. 1). Gender, defect localization, concomitant interventions, defect characteristics and fixation techniques for minced cartilage appeared to have little influence on postoperative outcomes, with similar PROMs reported (Table 3). Statistical subgroup analysis was not performed owing to insufficient statistical power.
Of all patients, 75% (n = 21) and 68% (n = 19) reached or exceeded the PASS for the IKDC and Lysholm scores at final follow-up, respectively.
Surgery-related complications and revision surgery
A total of four (14.2%) surgery-related complications were related directly to minced cartilage, one (3.5%) of which required revision surgery, whereas the others (10.7%) resolved without surgical intervention. Five (17.8%) adverse events were related to the additionally performed procedures (e.g. ACL reconstruction, MPFL reconstruction, etc.) and one (3.5%) event was linked to a traumatic accident (Table 4). If the assumption is made that all patients lost to follow-up had complications, the total complication rate would be 47.0%.
Radiological outcomes
Nineteen (67.8%) patients obtained a total of 23 unscheduled MRIs at a mean of 41.8 ± 22.0 months after the surgical intervention. Reasons for MRI included trauma, revision surgery, or pain (Table 5). The average overall MOCART 2.0 score including all examined patients and all anatomical sites (retropatellar, femoral condyle, trochlea) was 62.3 ± 17.4. MOCART 2.0 results, including detailed information about all variables for each anatomical site and follow-up time point, are shown in Table 5.
Discussion
The primary finding of the study was that the second-generation minced cartilage procedure for the treatment of chondral and osteochondral lesions of the knee without the necessity for subchondral bone treatment is an effective and safe procedure with good mid-term results in terms of pain and knee function. The present 5-year data demonstrated no significant worsening of pain or function at mid-term compared with short-term follow-up.
The primary aim of minced cartilage is to recapitulate hyaline or hyaline-like cartilage at the site of implantation through cell outgrowth, proliferation and differentiation, without the need for a two-step surgical process. Fragmentation of healthy cartilage has been shown to "activate" chondrocytes by increasing tissue surface and thereby promoting outgrowth [37,51]. Outgrowth of chondrocytes results in proliferation and matrix production. This process is believed to be positively influenced by native joint physical-biomechanical inputs and the osteochondral microenvironment, as mechanical and biological stimuli have been shown to promote proliferation and chondrogenic differentiation [51,56]. Several animal models have demonstrated the feasibility of minced cartilage, showing better results compared with microfracturing while demonstrating similar outcomes to two-stage autologous chondrocyte implantation [1,9,15,26,38,41].
As it currently stands, there is only one prospective randomized clinical trial of a single-stage autologous cartilage fragment procedure (CAIS, Cartilage Autograft Implantation System) [10] and three trials involving the use of autologous minced cartilage with clinical follow-up of up to 24 months [8,12,42]. Comparing two-year outcomes of patients randomly treated with either microfracturing (MFX) or CAIS shows significantly higher PROMs for CAIS [10]. There was no difference in the number of surgery-related complications, but a higher number of intralesional osteophyte formations in patients treated with MFX [10]. A statistically significant increase in MOCART scores and PROMs was also reported in eight patients with osteochondritis dissecans treated with a combination of autologous bone and cartilage chips (ADTT, autologous dual-tissue transplantation) [8]. Similarly, a statistically significant improvement between pre- and postoperative PROM values was reported in fifteen patients treated with a novel autologous-made matrix, hyaline cartilage chips and platelet-rich growth factors [12]. In the present study, five-year outcomes indicate maintained low pain scores and knee function that are not statistically different from one- and two-year postoperative outcomes. Despite a slight increase in pain compared with one- and two-year results, the NAS pain values were within the limits of the minimal clinically important difference (MCID) and not statistically significantly different. Further long-term data are needed to assess the postoperative progression over time.
When comparing outcomes across different studies, it is important to keep in mind that several factors, including surgical technique, patient characteristics [48], previous or subsequent surgical interventions [48], defect size and location [23] as well as rehabilitation [20], influence the postoperative outcome and therefore limit direct comparability. Alongside two-stage autologous chondrocyte implantation and osteochondral allograft transplantation (OATS), recent research has shown promising mid-term results after MACI for medium to large defects [20,24,31]. A recent prospective randomized controlled trial (RCT) comparing MFX (61.8 ± 21.5) with MACI (68.5 ± 21.2) reported significantly higher IKDC scores for the latter [4]. These findings are supported by recent meta-analyses showing no increased risk of clinical failure but superior improvements in PROMs for MACI compared with MFX at short- to mid-term [14, 20]. In contrast, other studies question the superiority of MACI over MFX [34,35] or even report superiority of MFX, especially in patients with only small chondral defects [47]. A prospective, controlled clinical trial studying the safety and efficacy of MACI with spheroid technology reported good and stable improvements in IKDC (74.6 ± 18.7) and KOOS (77.1 ± 18.6) scores after 48 months [46]. This is in accordance with several other mid- to long-term studies reporting stable improvements after ACI [2,4,11,24,36,45,57]. When directly comparing ACI to OATS [43], fair to good mid-term IKDC outcomes (IKDC: 50-80) were reported [21]. Similarly, no significant differences in PROMs were found at mid-term between ACI and AMIC, with VAS for pain scores ranging between 2.3 and 3 at two years postoperatively [25,27,54]. Patient-reported outcomes of the present study (IKDC: 71.6 ± 17.8) are comparable to the MACI, OATS and AMIC results of the above-mentioned studies but appear to be higher than those reported for MFX [4]. Gender, concomitant injuries, defect localization, defect characteristics as well as fixation techniques for minced cartilage seem to have little influence on postoperative outcomes. Patients treated with one-stage minced cartilage for chondral and osteochondral knee defects can therefore expect similar postoperative mid-term outcomes compared with other established procedures.
Fig. 1 Numeric analogue scale (NAS) for pain and knee function preoperatively and at one, two and five years postoperatively. The asterisk marks a significant difference from the preoperative state: ***p < 0.001; **p < 0.01. No significant differences were observed between the 1-, 2- and 5-year postoperative follow-up time points. MCID for NAS for pain: 2.7.
Overall surgery-related complication and revision surgery rates are relatively low and comparable between different chondral regenerative techniques [18,28,44,46]. Graft hypertrophy after MACI was reported in 12% of patients after five years, with a need for revision surgery in 8% [16]. Similar revision surgery rates at five years were reported in a prospective RCT comparing MACI (10.8%) and MFX (9.5%) [4]. In contrast, no graft hypertrophy was observed in a study using ACI (Spherox™, CO.DON AG, Germany; formerly known as chondrosphere) [45]. Mid-term outcome analysis of OATS revealed an 87% graft survival rate at 5 years with a 37% reoperation rate. These results were supported by a recent systematic review reporting similar survival rates (87%) and reoperation rates (30%) at mid- to long-term. In the present study, a total of four (14.2%) complications were primarily related to the minced cartilage procedure, one of which required revision surgery whereas the others resolved without surgical intervention. No revision operation was necessary due to graft hypertrophy or mechanical symptoms related to the chondral graft, although one patient still reported occasional joint locking at 5 years, possibly related to graft hypertrophy. Altogether, this makes minced cartilage comparable to MFX and ACI in terms of surgery-related complications and revision surgery, and overall a safe and efficient procedure.
Despite the well-known lack of correlation between postoperative radiological outcomes and PROMs, radiological scoring remains relevant as a measure of structural change over time [17]. Two prospective randomized controlled trials reported mean MOCART scores of 76 ± 16 at two years and 75.5 ± 13.1 after 4 years, respectively [45,46]. When comparing patellar and femoral condyle defects, similar MOCART scores were observed [39,58]. In the present study, only unscheduled follow-up MRIs obtained between two and five years postoperatively were included, which were performed for re-presentation due to recent trauma, pain, or revision surgery. Naturally, slightly lower MOCART 2.0 scores were therefore observed compared with the above-mentioned studies. Given the nature of the follow-up MRI indications in the present study, it can be assumed that MOCART 2.0 scores would be higher if the whole study cohort had been included. This work has some limitations. First, this study presents mid-term outcomes of the first series of patients undergoing minced cartilage by a single surgeon and therefore lacks a comparative group of patients treated with alternative osteochondral procedures such as ACI, OATS or MTX. Second, as consecutive patients were included and inclusion criteria were not limited to isolated cartilage defects, the heterogeneity of the study population might pose a limitation. However, a recent systematic review demonstrated good clinical outcomes after cartilage repair at the patellofemoral joint even in complex cases [5]. Third, the majority of the cartilage defects were located at the patella or femoral condyle and only a few at the trochlea or tibia. Therefore, the present results may not be applied without restriction to all cartilage defect locations in the knee joint. Moreover, overall outcomes might be lower, as less satisfactory results are reported for patients affected by cartilage lesions of the patella compared with other sites [23]. Due to the small sample size, statistical subgroup analyses were not possible, but the descriptive statistics appear comparable between the different defect localizations. Finally, while all patients obtained a preoperative and a six-month postoperative MRI scan, an additional mid-term radiological examination for all patients would have been desirable; however, this was beyond the scope of the present work.
The minced cartilage procedure for the treatment of medium to large chondral and osteochondral defects of the knee shows good and promising results at mid-term. The minced cartilage procedure therefore seems to be a viable treatment alternative in patients for whom a two-stage approach is not desired.
Conclusion
Patients treated with the autologous minced cartilage procedure for medium to large chondral and osteochondral lesions of the knee without the necessity for subchondral bone treatment report good mid-term results in pain and knee function and low rates of postoperative adverse events. Pain and functional levels remain stable and within the MCID at mid-term.
Author contributions AR: contributed to conceptualization, data curation, formal analysis, methodology, project administration, resources, writing-original draft, and writing-review and editing. RO: contributed to conceptualization, data curation, formal analysis, methodology, and writing-review and editing. FÖ: contributed to conceptualization, data curation, formal analysis, methodology, and writing-review and editing. VAS: contributed to conceptualization, data curation, formal analysis, resources, methodology, and writing-review and editing. SS: contributed to conceptualization, data curation, formal analysis, methodology, writing-review and editing, and supervision. SP: contributed to formal analysis, methodology, resources, writing-review and editing, and supervision. GMS: contributed to conceptualization, data curation, formal analysis, methodology, project administration, writing-review and editing, and supervision. JH: contributed to conceptualization, data curation, formal analysis, methodology, project administration, resources, and writing-review and editing.
Funding Open Access funding enabled and organized by Projekt DEAL.
Table 1
Patient characteristics and intraoperative details
Data displayed as number (per cent) or mean ± standard deviation
Abbreviations: MRI magnetic resonance imaging; FC femoral condyle; BMI body mass index; MPFL medial patellofemoral ligament; ORIF open reduction and internal fixation; ACL anterior cruciate ligament reconstruction; AMADEUS Area Measurement and Depth and Underlying Structures
a Multiple procedures possible
Table 2
Defect characteristics based on MRI and intraoperative findings
Table 3
Five-year outcomes after minced cartilage procedure for different subgroups
Table 4
Surgery-related complications and subsequent interventions
Abbreviations: AE adverse event; # number of complications; MC minced cartilage; MPFL medial patellofemoral ligament; ACLR anterior cruciate ligament reconstruction; ORIF open reduction and internal fixation
Table 5
MOCART 2.0 score of unscheduled MRI at 2-5 years of follow-up
All values displayed as mean ± standard deviation (range) if not otherwise stated
Abbreviations: FC femoral condyle; MOCART Magnetic Resonance Observation of Cartilage Repair Tissue
$ Four patients obtained two, and one patient
Investigation of the feasibility and acceptability of a school-based intervention for children with traits of ADHD: protocol for an iterative case-series study
Introduction Attention deficit/hyperactivity disorder (ADHD) is a prevalent and impairing cluster of traits affecting 2%–5% of children. These children are at risk of negative health, social and educational outcomes and often experience severe difficulties at school, so effective psychosocial interventions are needed. There is mixed evidence for existing school-based interventions for ADHD, which are complex and resource-intensive, contradicting teachers’ preferences for short, flexible strategies that suit a range of ADHD-related classroom-based problems. They are also poorly evaluated. In this study, a prototype intervention comprising a digital ‘toolkit’ of behavioural strategies will be tested and refined. We aim to refine the prototype so that its use is feasible and acceptable within school settings, and to establish whether a future definitive, appropriately powered, trial of effectiveness is feasible. This novel iterative study aims to pre-emptively address implementation and evaluation challenges that have hampered previous randomised controlled trials of non-pharmacological interventions. Methods and analysis A randomised iterative mixed-methods case-series design will be used. Schools will be randomised to the time (school term) they implement the toolkit. Eight primary schools and 16–32 children with impairing traits of ADHD will participate, along with school staff and parents. The toolkit will be refined after each term, or more frequently if needed. Small, theory-based and data driven changes hypothesised as relevant across school contexts will be made, as well as reactive changes addressing implementation barriers. Feasibility and acceptability will be assessed through quantitative and qualitative data collection and analyses in relation to study continuation criteria, and ADHD symptoms and classroom functioning will be tracked and visually evaluated to assess whether there are early indications of toolkit utility. Ethics and dissemination Ethical approval has been obtained. Results will be presented in journal articles, conferences and through varied forms of media to reach policymakers, stakeholders and the public.
INTRODUCTION
Children and young people's mental health and neurodevelopment (including attention deficit/hyperactivity disorder (ADHD)) is a government priority. In the 2017 green paper, the pivotal role of schools in supporting good mental health was acknowledged. By the end of 2023, >20% of schools across the UK will have a designated mental health lead with the aim of improving access to evidence-based treatments. 1 Cost-effective, implementable, sustainable evidence-based mental health interventions that can be delivered in schools are therefore urgently needed.
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ This study uses an iterative staggered design in order to refine an intervention throughout the research process.
⇒ Measures of acceptability and feasibility of the intervention, and of the ability to evaluate it in a rigorous trial setting, will be reviewed against continuation criteria throughout the study.
⇒ Multiple informants will provide relevant data, including school staff and parents, and data on both positive and adverse outcomes will be captured.
⇒ However, the iterative nature of the study means that different participants will not receive the same intervention, therefore group-level effects cannot be delineated.
⇒ Alternate analyses will be used to ascertain whether there are patterns of change in symptoms and functioning that replicate across individuals instead.
ADHD is a neurodevelopmental disorder affecting 2%-5% of children, characterised by impairing levels of impulsivity, hyperactivity and/or inattention. 2 Multiple negative life course outcomes are associated with childhood ADHD, including high rates of co-occurring mental disorders, accidents, poor educational and occupational attainment and antisocial behaviour. 3 Other neuropsychiatric disorders and unintentional injury are strongly associated with ADHD, and these are the leading causes of years lost due to disability worldwide in those aged 10-24 years. 4 Costs to the health, education and judicial systems, social services and economic losses are substantial; 24% of the mean annual cost of £5493 is attributable to National Health Service resources. 5
Pharmacological treatment is available and effective for some, especially in the acute management of symptoms, but is not appropriate, acceptable or tolerable for all children. 6 Response is often partial, 7 8 with tolerance developing over several years. 9 10 Even with drug treatment, ADHD causes particular problems in school, resulting in the outcomes described above. 11 Although ADHD medications have been shown to be effective and safe for the treatment of ADHD in the shorter term, 10 licensed ADHD medications are controlled drugs: to access them, families must receive a clinical diagnosis of ADHD, and waiting lists for diagnosis in the UK are currently backlogged. Medication is not appropriate or preferable for all children with ADHD, and in the UK is not recommended as a first-line treatment for children with traits that are not severe; 12 non-pharmacological intervention options that can be used as a precursor, alternative or supplement to pharmacological treatments are therefore important.
ADHD causes problems in the classroom for the child, the teacher and other children. 13 Symptoms of ADHD (which are also traits present to varying degrees in all children) make it challenging for a child to sit still, be attentive for sustained periods of time, listen to and follow instructions, or resist impulses to shout out. Children with ADHD traits may also wander around the classroom or school building, struggle to complete schoolwork and need reminders to know what they are doing. This combination of characteristics is challenging for the child themselves, as they are not consciously able to control their behaviour without learning self-monitoring and regulatory strategies and are often criticised by adults and peers for their involuntary behaviour. It can also be disruptive to the usual mainstream classroom context, and there is a poor fit between the UK mainstream Primary classroom environment and the behavioural characteristics and needs of children with impairing traits of ADHD.
Development of an effective school-based intervention for ADHD that overcomes the limitations of previous interventions would be likely to improve health and social outcomes. The limitations of existing school-based non-pharmacological interventions are that they usually require delivery by trained clinicians and are complex and multicomponent, targeting many potential deficits in every child regardless of their individual strengths and weaknesses. 14-16 ADHD encompasses a wide range of core and associated symptoms and functional issues that are seen in the classroom, and existing evidence is unclear about what works for whom, or which aspects of interventions are effective. In combination, this means existing interventions are not feasible or affordable for schools to routinely deliver; they are rarely rolled out in schools and are not followed with high fidelity.
There are barriers to school staff identifying evidence-based practices and implementing these with high fidelity as well as adapting to the needs of the individual, and sustaining this over time. Reviews of existing school-based interventions for ADHD have highlighted the potential efficacy of such interventions, [17][18][19][20] and so overcoming the limitations of existing interventions is crucial to provide an implementable, effective school-based intervention for ADHD.
We are in the process of developing an intervention from high-quality evidence, based on theory around behaviour change and ADHD and on a programme of development work including a Delphi survey of key stakeholders. We are following the Intervention Mapping approach 21 22 to co-create the intervention with key stakeholders in order to adapt the format of evidence-based strategies, targeting both symptoms and associated functional impairments, to overcome key implementation issues in schools. 23 Intervention Mapping has six steps, and the full process of the toolkit development will be detailed in a subsequent publication; in brief, these are: (1) Creating a logic model of the problem; (2) Defining programme outcomes and objectives and a logic model of change; (3) Programme design; (4) Programme production; (5) Programme implementation plan; and (6) Evaluation plan. 22 The result will be a 'toolkit' of strategies in a modular format, with some core components and a selection of optional modules each focusing on a different core outcome, designed for Primary school staff to use when working with children with traits of ADHD: the Tools for Schools FLEX toolkit. The toolkit modules target outcomes that were shown to be of importance to key stakeholders in a Delphi survey, 24 including conflict with teachers and peers, paying attention, building self-esteem, and improving planning and organisation.
Given the limitations of existing school-based interventions for ADHD, including limited evidence of effectiveness and difficulty evaluating them in methodologically rigorous randomised controlled trials, it is clear that the toolkit will need to be developed in a way that ensures it is acceptable, useful and feasible for school staff. In addition to this, we must address key features that will allow successful trials of the toolkit in future powered trials so that the strength of evidence for its effectiveness is comparable to other treatments such as medication. Intervention development frameworks such as the Medical Research Council complex intervention guidelines highlight the importance of the development and feasibility phases (rather than prematurely evaluating interventions in large randomised trials); carefully optimising protocols, working closely with the target populations, planning and testing implementation, and assessing the acceptability and feasibility of the toolkit are essential to develop an intervention that can later be tested for effectiveness. The development of the toolkit will therefore take an iterative case-series approach, trialling and modifying successive versions of the prototype in order to meet the study objectives. We have not included a control group or comparison group as the study does not aim to assess effectiveness.
In this protocol we outline plans to conduct the initial evaluation and further development of the Tools for Schools FLEX toolkit. As the toolkit is at an early stage of development, we plan to assess the acceptability of the toolkit, and the feasibility of implementing it within the Primary school context in the UK.
Objectives
The main aims of this feasibility study are to:
1. Refine the prototype Tools for Schools FLEX toolkit so that it is feasible and acceptable to implement in the school setting.
2. Establish whether a future definitive trial of effectiveness is feasible.
Secondary aims are to:
1. Identify suitable outcome measures to assess core ADHD symptoms, child and teacher well-being, academic progress, and identify an appropriate primary outcome (from these) for a future definitive trial.
2. Develop and test a framework for costing the toolkit, and for assessing cost-effectiveness in a future definitive trial.
3. Assess whether observational measures of behaviour, classroom functioning and teacher-reported ADHD symptoms indicate improvement following use of the toolkit.
METHODS AND ANALYSIS
Design
A randomised iterative mixed-methods case-series design will be used. Schools will be the units of randomisation, and will be randomised to the time (school term) when they will use the toolkit. The toolkit will be refined after each school has used it for one term, or more frequently if major issues are highlighted during a school's use of the toolkit. Small, theory-based and data-driven changes hypothesised to be relevant across the school context will be made, as well as reactive changes addressing barriers to implementation. Randomisation to using the toolkit in different school terms allows for inferences to be made as to whether the intervention is improving outcomes, with consistent improvement following the introduction of the core components enabling researchers to potentially rule out alternative explanations for behaviour change, such as improvement due to other support in place or differences in behaviour across the school year. If baseline symptoms and functioning remain on stable trajectories for children across schools, changing only when the intervention is introduced, this implies it is the intervention rather than an alternate factor stimulating the change.
This is a commonly used design for single-subject analyses where the intervention cannot be withdrawn and re-implemented, such as in an ABAB design. [25][26][27] Eight schools will participate in total, in two recruitment cohorts of four (in order to avoid later participating schools having to sign up to the study years in advance); each cohort will enter the study at a different time. Schools in each cohort will then be randomly assigned their individual baseline term in the study using a computerised random sequence generator (see figure 1). The length of the baseline period will be the same for each school (one school term), with schools within each cohort entering their baseline period in different school terms; the term in which the intervention is delivered is therefore staggered across the study (figure 1). This design will allow for refining and adapting the toolkit and its implementation based on ongoing participant feedback, maximising the chances that it will be acceptable and feasible (and therefore suitable for further evaluation in a pilot and then a definitive cluster randomised controlled trial) by the end of the study.
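As a minimal sketch of the allocation step (the protocol specifies only 'a computerised random sequence generator'; the school identifiers, term labels and use of Python's random module below are illustrative assumptions, not the tool that will be used):

```python
# Minimal sketch of randomising a cohort of four schools to staggered baseline
# terms; school identifiers and term labels are illustrative placeholders only.
import random

def allocate_baseline_terms(schools, terms, seed=None):
    """Randomly assign each school in a cohort to the term in which its
    one-term baseline period starts (the intervention term follows it)."""
    if len(schools) != len(terms):
        raise ValueError("one baseline term is needed per school in the cohort")
    rng = random.Random(seed)
    shuffled = schools[:]
    rng.shuffle(shuffled)
    return dict(zip(shuffled, terms))

cohort_1 = ["School A", "School B", "School C", "School D"]
terms = ["Autumn 2022", "Spring 2023", "Summer 2023", "Autumn 2023"]
print(allocate_baseline_terms(cohort_1, terms, seed=42))
```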
Patient and public involvement
A planning group of public and patient representatives, including people with ADHD (adults (n=5) and children (n=9)), parents of children with ADHD (n=12), school staff (n=9), and education and health psychologists (n=4) has been established to co-create the intervention. Input from this planning group has already extensively informed the design of the intervention and study, refining research questions, selecting outcome measures and considering participant burden. The group will also shape study dissemination plans. The planning group contains around 25 people, and will be consulted throughout the course of the study to make decisions about necessary modifications to the intervention, delivery or research design of a future evaluation. In relation to the study objectives, co-developing the intervention with key stakeholders in the planning group increases the chances of the toolkit being feasible and acceptable in the Primary school context. The group will actively collaborate to refine the toolkit in response to participant feedback to achieve the first aim. They will be consulted as to whether the feasibility study results indicate that a future trial is feasible, in relation to understanding the core features of a high-quality trial. The role of the planning group in co-production of the toolkit is outlined in further detail in the online supplemental material.
Regarding the secondary objectives, the planning group will use data collected to agree on the final primary outcome for a future definitive trial, as well as judge whether trialled measures are suitable measures to assess ADHD symptoms, academic attainment, and child and teacher well-being. The planning group will contribute to the development of the costing framework and evaluate findings from study participants. Finally, the planning group will discuss the findings from observational measures in relation to the toolkit use, helping to understand why any emerging patterns in the data are seen.
Recruitment and participants
Eight Primary schools, associated school staff and up to 32 children with impairing traits of ADHD (minimum 16 children) will participate in the study and use the toolkit. All schools and all child participants will receive the intervention. Baseline data regarding treatment as usual will be collected in order to assess normal fluctuations in symptoms and response to any school-implemented treatment as usual during the baseline phase. Baseline data will not be used to exclude participants who have unstable profiles, but rather to better understand the nature of symptom and impairment fluctuation over time.
Recruitment will be through opportunistic sampling, using existing contacts from the planning group such as local Educational Psychology teams, and other education networks in the South West. At the time of writing, four schools have consented to participate and, given the demand for support managing children with ADHD in school, we anticipate no problems recruiting a total of eight schools. This sample of eight mainstream primary schools in the South West of England will allow sufficient variety to purposively sample schools to include a minimum of four in areas of high socioeconomic disadvantage, based on the selection of schools with an index of multiple deprivation above the regional mean. The intervention needs to be able to respond to the range of children with traits of ADHD, including those who are often unrecognised or less likely to access other support, such as girls, 28 therefore eligible schools will have at least one female pupil meeting eligibility criteria and we will aim to recruit a diverse sample of children with differing ethnicities. A sample of eight schools also allows for the intervention periods to be staggered across the study and successive iterations of the toolkit. All participating schools will use the toolkit in the study for one school term.
Study participants include the school's senior leadership or head teacher, who will consent to schools' participation; the school special educational needs (and disabilities) coordinator (SENCo), teachers and teaching assistants (TAs) working with eligible children, eligible children with impairing traits of ADHD, and their parents. Consent will be obtained by a member of the research team through meeting with potential participants face to face or online.
School and school staff consent
Each school's senior leadership team will consent to the school's participation. They will nominate a mental health lead, who is likely to be the SENCo (as children with ADHD and associated difficulties currently fall under their remit in the UK) and will be referred to as such henceforth. Decisions around recruitment and eligibility criteria have been made in close consultation with the study planning group.
As part of the condition of each school's participation, the senior leadership team will be asked to ensure that the SENCo and potentially eligible teachers and TAs (those working with recruited children) are supportive of and potentially willing to take part in the study.
Identification and recruitment of eligible children and families
Eligible children will either have a clinical diagnosis of ADHD (as reported by the school or parents) or high symptom levels, and will be aged 4-10 years (school years Reception to 5) at the first baseline assessment to ensure they will be in primary school for the study duration. Children with ADHD may either have not yet received a formal diagnosis, or their family may not wish to pursue a diagnosis; therefore, children with high symptom levels will be eligible. There are no additional exclusion criteria. Teachers and SENCos will identify students who meet criteria for this indicated group using the Strengths and Difficulties Questionnaire Hyperactivity-Inattention Subscale and the impact supplement. 29 Teachers of potentially eligible children will be asked by the SENCo to complete this brief screening measure, and if this indicates that a child may have probable ADHD (teacher-reported symptoms being ≥6 and impact ≥1), their parent will also be asked to complete the screen in order to ensure that impairments are noted across more than one setting. 2 30 Should these combined ratings indicate 'probable' ADHD according to the validated algorithm, the child will be eligible for inclusion in the study. Probable ADHD is assigned if the parent-reported hyperactivity/inattention score is ≥7 and an impact score of 2 is given, or if the hyperactivity/inattention score is ≥9 and an impact score of at least 1 is reported, and the teacher criteria above are met (see sdqinfo.org for syntax).
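A sketch of the combined screening rule described above is given below. It illustrates the stated thresholds only and is not the validated syntax published at sdqinfo.org; in particular, treating the parent impact score of 2 as 'at least 2' is an interpretation made for illustration.

```python
# Sketch of the eligibility rule described above, combining teacher and parent
# SDQ hyperactivity-inattention and impact scores. This illustrates the
# thresholds stated in the text and is not the official sdqinfo.org syntax.

def teacher_screen_positive(teacher_hyper: int, teacher_impact: int) -> bool:
    """Teacher screen suggesting possible ADHD: symptoms >= 6 and impact >= 1."""
    return teacher_hyper >= 6 and teacher_impact >= 1

def probable_adhd(teacher_hyper: int, teacher_impact: int,
                  parent_hyper: int, parent_impact: int) -> bool:
    """'Probable' ADHD requires the teacher criteria plus either parent
    hyperactivity/inattention >= 7 with impact >= 2 (interpreted here as
    'at least 2'), or >= 9 with impact >= 1."""
    if not teacher_screen_positive(teacher_hyper, teacher_impact):
        return False
    return (parent_hyper >= 7 and parent_impact >= 2) or \
           (parent_hyper >= 9 and parent_impact >= 1)

# Example: teacher reports symptoms 8 and impact 2; parent reports 7 and impact 2
print(probable_adhd(teacher_hyper=8, teacher_impact=2,
                    parent_hyper=7, parent_impact=2))  # True -> eligible
```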
SENCos will make initial contact with parents of children to clarify eligibility, to provide the study information and to establish whether families are interested in participation in principle. Willing families will be asked to consent to having their details passed to the research team, who will obtain written informed consent from parents and assent from children to participate. Children will be informed of the broad purpose of the research through an age-appropriate information sheet and assent form, unless their parents request otherwise. Should an individual child not meet eligibility criteria but their parents and teacher feel they would benefit from participation, this will be considered on a case-by-case basis. As the intervention is embedded as part of usual school practice, the planning group has indicated that ongoing assent should then be assumed for children through data collection unless the child indicates distress or reluctance.
Sample size
The planning group has advised that, to minimise the burden of data collection on teachers, each class teacher should have a maximum of one child participant; however, this may be revised during the course of the study, for example if a school feels strongly that several children in one class may benefit, or if two eligible children in the same class also have TA support and the teacher wishes both to participate. Estimating that each of the eight participating schools will have 2-4 participating children and teachers, there will be a total of 16-32 child participants, 16-64 parents, and 16-64 teachers and TAs, along with eight SENCos. The total sample size will depend on the distribution of participating parents, teachers and TAs and the distribution of eligible children across different classes within schools: table 1 indicates the minimum and maximum sample sizes.
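As a back-of-the-envelope illustration of how the quoted ranges arise (the per-child assumptions of one to two consenting parents and one to two staff members are made explicit here for illustration only; table 1 remains the authoritative source):

```python
# Back-of-the-envelope sketch of the minimum and maximum sample sizes implied
# by the recruitment plan (8 schools, 2-4 children per school, 1-2 consenting
# parents per child, each child having a class teacher and possibly a TA).
# The per-child ranges are assumptions made explicit for illustration.
N_SCHOOLS = 8

def sample_bounds(children_per_school, parents_per_child, staff_per_child):
    children = N_SCHOOLS * children_per_school
    return {
        "children": children,
        "parents": children * parents_per_child,
        "teachers_and_TAs": children * staff_per_child,
        "SENCos": N_SCHOOLS,
    }

minimum = sample_bounds(children_per_school=2, parents_per_child=1, staff_per_child=1)
maximum = sample_bounds(children_per_school=4, parents_per_child=2, staff_per_child=2)
print("minimum:", minimum)  # 16 children, 16 parents, 16 teachers/TAs, 8 SENCos
print("maximum:", maximum)  # 32 children, 64 parents, 64 teachers/TAs, 8 SENCos
```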
Given that the aim of the study is not to assess effectiveness or efficacy, the potential for the intervention to change between participants, and the case-series design, this study will not be powered to conduct a conventional between-individuals analysis. Instead, we will use multiple individual measurement points to analyse within-individual change and assess whether this replicates (at a descriptive level) across individuals. The pragmatic choice of eight schools allows for this purposive sampling, and both the minimum and maximum sample sizes of children are sufficient to address our objectives. Based on the mean number of children in eligible year groups in Devon schools, and a conservative estimate of the prevalence of ADHD in this age group of approximately 3%, it is expected that each school will have four to eight eligible children, of whom we hope to recruit at least 50%. The recruitment target is feasible: school staff have indicated that a toolkit for ADHD would be perceived as useful and not burdensome, as they believe it would be beneficial to their wider classroom management (the planning group highlight the data collection burden as being the reason that teachers should only have one participating student per class). An established set of study progression criteria is shown in table 2 and will be used to assess success in terms of meeting the study aims. Criteria relate to the recruitment of schools (five or more being acceptable), recruitment of parents and teachers, and completion of study measures, as well as some basic metrics of fidelity. The criteria will be reviewed throughout the study, and where individual criteria are Amber or Red, a core academic team meeting and a planning group meeting will be held to discuss, agree and then implement modifications to the study procedure or content.
Procedure
The study will run from September 2022 until January 2025. Data will primarily be collected at the participating schools; however, interviews with parents and school staff may take place over the phone, via internet video call, at participants' homes, or in another location of the participant's convenience. Baseline length will be one school term. During the baseline phase, the teacher will provide repeated measures of the primary quantitative outcomes: participating children's ADHD symptoms and classroom functioning. These will be collected every 2 weeks during the baseline period and in the intervention period. This detailed baseline data will capture normal fluctuations in symptoms across the term and response to any school-implemented 'treatment as usual' during the baseline phase.
Information on healthcare and education resource use will be collected from children, parents and school staff for each child during the baseline period using a structured survey with open questions to capture any resources not mentioned, and child quality of life will be reported by the child and their parent. Any other support or treatment will not be altered during the study. Each school will then implement the intervention for one school term; two modules from the intervention will each be implemented for 4 weeks. There will be a 10-week follow-up period where detailed qualitative data will also be captured, including asking participants about changes in resource and service use relative to the information they provided at baseline.
Intervention delivery
The intervention is currently in development. Broadly, it will take the form of a digital toolkit of training and resource packages and behavioural strategies for non-specialists to use, organised within 'modules' that cover different classroom-based problems common to ADHD: we anticipate that each child participant will receive two modules. SENCos will have the role of coordinating and supporting teachers and TAs to deliver the toolkit in conjunction with support from parents at home, and will be given agency over the duration, dose and discontinuation of strategies. Online supplemental figure 1 illustrates the participant structure and roles within a school (see online supplemental material).
Intervention description
The Tools for Schools FLEX toolkit will primarily be a digital resource. The current logic model is shown in figure 2 and the toolkit outline is shown in the supplemental material (online supplemental figure 2). The key goals of the toolkit that will foster sustainable behaviour change are to:
1. Understand the child and their communications
2. Adapt the classroom and school expectations
3. Support the child to:
A. reduce acute impairments
B. increase skills
C. improve self-identity
through providing strategies that change the environment and expectations around the child. This results in the child being able to consciously monitor and self-regulate.
4. Sustain and adapt across time.
Table 3 shows the data collection schedule relative to the stage within the study. A full intervention description can be found in the online supplemental material.
Qualitative data collection
Toolkit iteration feedback
The toolkit will go through several iterations, refined based on ongoing data collection and analysis. Flexibility allowing for this is a strength of the study design. 31 Brief semistructured 'feedback sessions' will be held by phone or in person with all participating school staff and parents during the intervention period, in order to understand where modifications need to be made to the design, content and measures of the intervention. Example questions from the topic guide for teachers are included in the online supplemental material. All changes made from the original prototype of the toolkit will be recorded in a change log, including the reason for the change, when the change was made and what aspects of the toolkit the change relates to. The planning group and the study advisory group will convene to discuss feedback or major suggested modifications, and to review minor changes in the change log.
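By way of illustration, a change-log entry could capture the fields described above as follows; the field names and the example entry are hypothetical and not a prescribed format.

```python
# Minimal sketch of a change-log record capturing the fields described above
# (reason, date, and the aspects of the toolkit the change relates to); the
# field names and example entry are illustrative, not a prescribed format.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ToolkitChange:
    change_id: int
    date_made: date
    description: str        # what was changed
    reason: str             # why the change was made (e.g. feedback session finding)
    toolkit_aspects: List[str] = field(default_factory=list)  # modules/components affected
    reactive: bool = False  # True if addressing a school-specific implementation barrier

change_log: List[ToolkitChange] = [
    ToolkitChange(
        change_id=1,
        date_made=date(2023, 4, 14),
        description="Shortened the attention module's teacher instructions",
        reason="Teachers reported the guidance was too long to read between lessons",
        toolkit_aspects=["attention module", "teacher guidance"],
        reactive=True,
    ),
]
print(change_log[0])
```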
Process evaluation
During the 10-week follow-up period, detailed qualitative data will be collected to assess acceptability, perceived usefulness, fidelity and implementation of the intervention, exploring in more depth what participants did and did not do and explanations of their actions, with questions similar to those shown above. Focus groups comprising the constellation of all school staff involved with one child will be held for all participants. This format is familiar to school staff as professional discussions often take a 'case conference' format. Interviews by phone or an online platform such as zoom will be conducted with a purposive sample of half of the parents, using the same criteria detailed above (ie, at least 50% from areas with a deprivation index above the regional mean) and aiming to attain a balance of male and female parents. Children will take part in 'active interviews' during the process evaluation in the follow-up period with a researcher (at home or at school), where they will have an informal conversation about their experiences of the study, if they recalled any new things their teacher tried with them, how they felt about their behaviour targets (if they knew about them), and how they found the research process.
All children will be invited to an activity-based interview with the option of a paired interview, where they can bring an accompanying family member of their choosing.
Paired interviews can reduce stress, mitigate child protection risks and reduce issues of running a focus group where children in the school with traits of ADHD are identified to one another. The activity interviews will be designed with the planning group and piloted with children with ADHD, for example, these may involve collecting objects of different colours or taking photos around the school to represent answers to questions (such as can you show me where you like to go if you feel you need to calm down? What are your favourite things about school? What do you not like about school?), and using these objects or photos as prompts to discuss thoughts.
All participants in the process evaluation will be asked what they think the most important outcome would be for a future hypothetical trial of the toolkit, and why.
Adverse effects of intervention
It is possible that the intervention will lead to adverse effects or negative impacts on participating children, such as feeling singled out due to the intervention leading to social exclusion from the peer group. 32 The study planning group will contribute to identifying potential adverse effects (as of 21 March social exclusion is the most prominent concern), and qualitative data will be collected from participants at the feedback and process evaluation stages to assess perceptions of negative impacts of the intervention. Modifications will be made as needed in relation to this through consultation with the planning group and the academic advisory team.
Quantitative measures
There are two quantitative outcome measures that will be assessed in order to: (1) determine which would be most suitable as the primary outcome in a future trial and (2) evaluate whether they make sense and are relevant to those completing the measures. ADHD symptoms will be measured using the teacher-reported Strengths and Weaknesses of ADHD Symptoms and Normal-behavior Questionnaire, 33 34 chosen based on the preferences of the planning group, who considered several measures of ADHD symptoms; and classroom functioning will be measured using the problem behaviour subscale of the Social Skills Improvement System (SSIS), 35 which captures social, academic and competing problem behaviours in the classroom environment, reported by parents and teachers. These will be measured every 2 weeks during the baseline and intervention periods and every 3 weeks during the follow-up period. All planned quantitative measures are detailed in a table in the online supplemental material.
Other quantitative measures will be captured to assess whether there is an indication of some improvement in the key areas the toolkit aims to target. Measures relating to children include child dimensional psychopathology, assessed using the teacher- and parent-reported Strengths and Difficulties Questionnaire, 30 and child-reported satisfaction with school using the How I Feel About My School measure. 36 Teacher well-being will be measured using the Warwick-Edinburgh Mental Wellbeing Scale 14-item Teacher Survey, 37 the Teachers' Sense of Efficacy Scale, 38 and the Relationship with Work Survey from the Maslach Burnout Inventory-General Survey, 39 40 and the social skills and academic competence subscales from the SSIS will be completed by teachers. These will be collected at the beginning and end of the baseline period, at the end of the intervention term and at the end of the follow-up term. Other module-specific measures which are not covered by the above measures will be administered depending on the modules used by each child, for example, the Harter Self-Perception Profile for Children to measure self-esteem. 41 Methods for collecting data on healthcare and education resource use and health-related quality of life will also be established and tested. Child quality of life (child-report and parent-proxy) will be measured using the Child Health Utility for Economic Evaluation (CHU9D). 42 A tool to accurately capture education and healthcare resource use will be developed for the study, drawing on the Client Service Receipt Inventory 43 and measures in the Database of Instruments for Resource Use Management Repository. Participants will complete this at baseline and follow-up, and provide qualitative information on changes in resource use during the follow-up process evaluation. A data collection form will also capture data on resource use and costs associated with the set-up and delivery of the intervention.
Table 3 Data collection schedule and toolkit implementation timeline
Observational measures of child behaviour in the classroom will also be tested throughout the study. These include but are not limited to the Classroom Observation Code, 44 and the Teacher-Pupil Observation Tool. 45 Observations will be carried out at school by members of the research team, trained to an acceptable reliability (0.7 or higher).
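The protocol does not name the reliability statistic behind the 0.7 threshold; as an illustration only, interval-by-interval agreement between two trained observers could be checked using Cohen's kappa (one plausible choice for categorical observation codes), with invented ratings.

```python
# Illustrative check of observer training reliability against the 0.7 threshold.
# The protocol does not name the statistic; Cohen's kappa (for categorical
# observation codes) is used here as one plausible choice, with invented data.
from sklearn.metrics import cohen_kappa_score

# Interval-by-interval behaviour codes from two observers watching the same child
rater_1 = ["on_task", "off_task", "on_task", "on_task", "off_task",
           "on_task", "off_task", "on_task", "on_task", "on_task"]
rater_2 = ["on_task", "off_task", "on_task", "off_task", "off_task",
           "on_task", "off_task", "on_task", "on_task", "on_task"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")
print("acceptable" if kappa >= 0.7 else "further training needed")
```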
All data will be collected online via secure servers, or on paper, and manual data entry will be double checked.
Analysis
Aim 1: To assess whether the toolkit is feasible and acceptable for schools to implement
We will use the definition of acceptability proposed by Sekhon et al 46 : 'a multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention'. Qualitative data from both the toolkit iteration interviews and the process evaluation interviews and focus groups will be audio recorded, transcribed and analysed using the Framework method and thematic analysis. 47 An initial coding framework will be developed after reading and re-reading transcripts. Deductive codes derived from the research questions and inductive codes within the data will be identified. This framework will be applied independently to three transcripts; discrepancies in coding will be discussed and revisions made to the framework, which will then be applied to the remaining transcripts, with further discussion and revision where necessary.
Responses relating to each component of the toolkit will be synthesised in a framework in order to assess the acceptability of the evolving and final prototype; specifically to assess whether the components and the overall toolkit are considered appropriate to support children with ADHD in school. In addition, acceptability of the toolkit will be quantitatively assessed against the study progression criteria that relate to completion of each component of the toolkit, attendance at toolkit-related meetings, and the percentage of occasions the participant reports using the toolkit as instructed. The qualitative findings will be used to explain and further explore quantitative results in a mixed-methods synthesis. Modifications will be made iteratively as findings emerge, and by the end of the study each indicator should be in the 'green' range in order to indicate that the toolkit is acceptable.
Aim 2: To establish whether a future trial of effectiveness is feasible
Data on the research process and participant compliance will be evaluated. The quantitative data on recruitment and retention of participants, completion of measures by each participant group (parents, child, teacher, observed) and adherence to the intervention protocol will be assessed relative to the relevant progression criteria described in table 2. These have been informed by existing school-based trials. 48 49 As above, qualitative findings from the process evaluation and toolkit iteration feedback will be integrated to explain quantitative findings. Should all relevant progression criteria be rated 'green', a future trial of effectiveness will be considered feasible. Should criteria be amber or red as the study progresses, potential mitigating modifications will be discussed, such as removing the requirement for parents to actively engage with the toolkit.
Subaim A
In order to identify suitable outcome measures to assess core ADHD symptoms, child and teacher well-being and an appropriate primary outcome for a future definitive trial, the acceptability and feasibility of using the individual measures will be evaluated following the steps above and in relation to the threshold set in the progression criteria. Measures will be considered suitable if there is a high level of compliance with completion (ie, each meet the green criteria), and qualitative findings indicate no major barriers to completing the measure on repeated occasions. Should measures be identified as unsuitable between toolkit iterations, new measures will be identified with the planning group and the data collection plan modified. For example, if the ADHD symptoms measure is too burdensome to complete as frequently as required, other briefer ADHD symptom scales will be explored, or the data collection frequency may be modified. The two versions of the CHU9D will be compared to assess whether child-report is sufficient. To identify an appropriate primary outcome for a future trial, participants' qualitative responses to direct questions regarding what they think the most important outcome should be (and why), collected during the process evaluation, will be synthesised. Findings will be discussed with the planning group and advisory team prior to deciding on the final primary outcome measure.
Subaim B
In order to test a framework for costing the toolkit, and for assessing cost-effectiveness in a future definitive trial, the detailed information collected from the draft framework will be assessed following baseline data collection, and modifications made to the framework where free-text responses indicate additional resources not captured in the draft framework. The process evaluation interviews and focus group data will also contribute to understanding changes in resource use, which will be aligned with the follow-up responses regarding resource use in order to ascertain whether the framework is adequate and sensitive to changes in resource use.
The data captured on a bespoke form relating to the costs of setting up and delivering the toolkit will be assessed to expand the framework for accurately costing the toolkit's delivery and potential savings, and the costs of delivering the prototype intervention will be calculated (including costs for staff time, training and delivery of components). This framework will be revised in line with study findings with the aid of health economists, with a final framework developed by the end of the study that would allow for cost-effectiveness to be formally assessed in a future definitive trial.
Subaim C
Assess whether measures of behaviour, classroom functioning and teacher-reported ADHD symptoms indicate improvement following use of the toolkit. To explore whether there is indication that the toolkit may be efficacious, quantitative data will be visually analysed within schools and participants, describing the nature of change over time and in relation to the intervention, following recommendations. 31 Non-parametric analysis of the data will be considered, for example, by calculating Tau-U statistics. As the toolkit will be refined between schools participating, it would be irrelevant to compare measures across individuals who received different iterations of the toolkit; the focus will therefore be on assessing whether there is initial indication of improvement in outcomes, and whether any effect is replicated across individuals. Comparisons between the magnitude of change seen between those who receive earlier and later iterations of the toolkit will be carried out as it is anticipated that later iterations, modified to be more acceptable and feasible, would result in greater evidence of behaviour change. This will be an exploratory analysis. The core and target behaviours chosen across study participants will also be explored in a descriptive manner in order to assess the use of the behaviour web, targets and modules.
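To illustrate the non-parametric analysis mentioned above, the following minimal sketch computes the basic Tau statistic (pairwise baseline-versus-intervention non-overlap) that underlies Tau-U, without baseline-trend correction; the phase scores are purely hypothetical and the function is not part of the study materials.

```python
# Minimal sketch of the basic Tau (A-vs-B non-overlap) statistic underlying
# Tau-U, without baseline-trend correction. All data shown are hypothetical.
from itertools import product

def basic_tau(baseline, intervention):
    """Share of (baseline, intervention) pairs that improve minus the share
    that deteriorate; ranges from -1 to 1."""
    pairs = list(product(baseline, intervention))
    improving = sum(1 for a, b in pairs if b > a)
    deteriorating = sum(1 for a, b in pairs if b < a)
    return (improving - deteriorating) / len(pairs)

# Hypothetical weekly teacher-rated scores before (A) and during (B) toolkit use.
baseline_scores = [4, 5, 4, 6, 5]
intervention_scores = [6, 7, 6, 8, 7, 8]
print(round(basic_tau(baseline_scores, intervention_scores), 2))
```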
Missing, unused and spurious data, and inclusion in analysis
All data will be used in analysis. Patterns of missing quantitative data will be evaluated as part of assessing the feasibility and acceptability of the intervention. Missing data could relate to the feasibility and acceptability of the intervention, or the utility and acceptability of the measure itself (ie, Aim 1 and Subaim A), and so through the qualitative data collection we will aim to elicit how patterns of missing data relate to our aims and make modifications accordingly.
ETHICS AND DISSEMINATION
Ethics and governance approval
Ethical approval has been awarded by the University of Exeter College of Medicine and Health Research Ethics Committee (Ref Jan22/B/300). Informed consent will be sought from all adult participants (and by parents for child participants; an example information sheet and consent form are provided in the online supplemental material). Children will provide assent and ongoing assent will be judged by the research team throughout. Should it become clear that a child does not want to continue to participate in data collection or the study, withdrawal will be discussed with parents and school staff. Should a child express that they wish to stop during a single instance of data collection, this will be stopped for that day. The child's feelings will be respected and discussed with the child, their parents and school staff about their future involvement. Ultimately, decisions will be made in line with the best interests of the child. All data collected will comply with General Data Protection Regulations, being stored on secure servers and only accessible by the research team. Some data will be shared as part of the intervention, for example, teachers' ratings of ADHD symptoms will be viewable by the SENCo and express permission will be sought for any planned data sharing between participants. Participants will be asked whether their data can be made available following the study for future analysis or related research. Those that consent will contribute their data to a fully anonymised data set accessible to bona fide researchers. There are several operational issues and associated risks that are involved in performing this study. It is possible that the intervention will lead to adverse outcomes 32 such as study children feeling singled-out from their peers due to the intervention. The planning group will discuss anticipated potential adverse outcomes due to the intervention, and qualitative feedback will be sought from participants both during their use of the intervention and in the follow-up period. Further risks (aside from those detailed in progression criteria in table 2) have been identified and mitigation plans detailed, as well as key study limitations noted (see online supplemental material).
Dissemination
Participants who consent to their anonymised data being shared for future research purposes will have their data archived in an open access data set. The results from the study will be presented at relevant national and international conferences (eg, the ADHD World Congress, Eunethydis, the Association for Child and Mental Health national conferences). Separate publications will describe: the iterative development process, the main outcomes of the case-series study, development of the resource use tool, and an in-depth qualitative analysis of data collected during the iteration and process evaluation; authorship will be determined using the International Committee of Medical Journal Editors (ICMJE) criteria. A detailed dissemination plan and documents will be developed with the planning group and will include blogs and podcasts, recorded and live talks, and events and engagement activities to reach a broader audience of interested stakeholders and policymakers.
Prediction for the Newsroom: Which Articles Will Get the Most Comments?
The overwhelming success of the Web and mobile technologies has enabled millions to share their opinions publicly at any time. But the same success also endangers this freedom of speech due to closing down of participatory sites misused by individuals or interest groups. We propose to support manual moderation by proactively drawing the attention of our moderators to article discussions that most likely need their intervention. To this end, we predict which articles will receive a high number of comments. In contrast to existing work, we enrich the article with metadata, extract semantic and linguistic features, and exploit annotated data from a foreign language corpus. Our logistic regression model improves F1-scores by over 80% in comparison to state-of-the-art approaches.
Exploding Comment Threads
In the last decades, media and news business underwent a fundamental shift, from one-directional to bi-directional communication between users on the one side and journalists on the other. The use of social media, blogs, and the possibility to immediately share, like, and comment digital content transformed readers into active and powerful agents in the media business. This shift from passive "consumers" to active "agents" deeply impacts both media and communication science and has many positive aspects.
However, the possibilities and powers can also be misused. Pressure groups, lobbyists, trolls, and others are effectively trying to influence discussions according to their (very different) interests. An easy approach consists in burying unwanted arguments or simply destroying a discussion by blowing it up. After such an attack, readers have to crawl through hundreds of nonsense and meaningless comments to extract meaningful and interesting arguments. Blowing up a thread can be achieved by injecting provocative (but not necessarily off-topic) arguments into discussions. Bystanders are completing the goal of the destroyers, and they do so often unknowingly: with each (often well-intentioned) reaction to the provocation, they make it more difficult for others to follow the actual argumentation path and/or tree.
Figure 1: Integration of comment volume prediction into the newsroom workflow.
It is costly in terms of staff time and effort to keep the discussion area of a news site clean from attacks like that, and to monitor users' compliance with the netiquette. As a reaction, many large online media sites worldwide closed their discussion areas or downsized them significantly (prominent examples of recent years are the Internet Movie Database, Bloomberg, and the US-American National Public Radio). Other news providers and media sites, including ours, take a different approach: a team of editors reads and filters comments on a 24/7 basis. This results in a huge workload, with several thousand reader comments published each day. In its lifetime, an article receives between fewer than ten and more than 1500 comments; typically about 100 to 150. The number of published comments presumably depends to a large extent on time, weather, and season, as well as, for each article, on subject, length, style of writing, and author, among other factors.
Being able to predict which articles will receive high comment volume would be beneficial at two positions in the newsroom: 1. for the news director to schedule the publication of news stories, and 2. for scheduling team sizes and guiding the focus of the comment moderators and editors. Figure 1 gives an overview of how comment volume prediction can be integrated into the workflow of a modern online news site. The incoming news articles are ranked based on the estimated number of comments they will attract. The news director takes these numbers into account when deciding when to schedule which article for publication. This can balance the distribution of highly controversial topics across a day, not only giving readers and commenters the opportunity to engage with each single one, but also distributing the moderation workload for comment editors evenly. Further, knowing which articles will receive many comments can help in the moderation process.
Guiding the main focus of attention of moderators towards controversial topics not only facilitates efficient moderation, but also improves the quality of a comment thread. Our experience has shown that moderators entering the online discussion at an early stage can help keep the discussion focused and fruitful.
In this paper, we study the task of identifying the weekly top 10% articles with the highest comment volume. We consider a new real-world dataset of 7 million news comments collected over more than nine years. In order to enrich our dataset and increase its meaningfulness, we propose to transfer a classifier trained on the English-language Yahoo News Annotated Comments Corpus (Napoles et al., 2017b) to our German-language dataset and leverage the additional class labels for comments in a post-publication prediction scenario. Experiments show that our logistic regression model based on article metadata, linguistic, and topical features outperforms state-of-the-art approaches significantly. Our contributions are summarized as (1) a transfer learning approach to learn early comments' characteristics, (2) an analysis of a new 7-million-comment dataset, and (3) an improvement of F1-score by 81% compared to state-of-the-art in predicting the most commented articles.
Related Work
Related work on newsroom assistants focuses on comment volume prediction for pre-publication and post-publication scenarios. By the nature of news articles, the attention span after article publication is short and in practice post-publication prediction is valuable only within a short time frame. Tsagkias et al. (2009) classify online newspaper articles using random forests. First, they classify whether an article will receive any comments at all. Second, they classify articles as receiving a high or low amount of comments. The authors find that the second task is much harder and that predicting the actual number of comments is practically infeasible. Bandari et al. (2012) conclude the same, analyzing Twitter activity as a popularity indicator for news: Predicting popularity as a regression task results in large errors. Therefore, the authors predict classes of popularity by binning the absolute numbers (1-20, 20-100, 100-2400 received tweets). However, predicting the number of received tweets includes modeling both the user behavior and the platform, which is problematic. It is part of a platform's business secrets how content is internally ranked and distributed to users, making it hard to distinguish cause and effect from the outside. In our scenario, we even see no benefit in predicting the exact number of comments. Instead, we predict which articles belong to the weekly top 10% articles with the highest comment volume, which is one of the tasks defined by Tsagkias et al. (2009).
In a post-publication scenario, Tsagkias et al. (2010) consider the comments received within the first ten hours after article publication. Based on this feature, they propose a linear model to predict the final number of comments. Comparing comment behavior at eight online news platforms, they observe seasonal trends. Tatar et al. (2011) consider the shorter time frame of five hours after article publication to predict article popularity. They also use a linear model and find that neither adding publication time and article category to the feature set nor extending the dataset from three months to two years improves prediction results. Their survey on popularity prediction for web content summarizes features with good predictive capabilities and lists fields of application for popularity prediction (Tatar et al., 2012). Rizos et al. (2016) focus on user comments to predict a discussion's controversiality. They extract a comment tree and a user graph from the discussion and investigate, for example, comment count, number of users, and vote score. The demonstrated improvement of popularity prediction with these limited, focused features motivates us to further explore content-based features of comments in our work.
Recently, research on deep learning (Nobata et al., 2016; Pavlopoulos et al., 2017) addresses (semi-)automation of the entire moderation task, but we see several issues that prevent us from putting these approaches into practice. First, the accuracy of these methods is not high enough. For example, reported recall (0.79) and precision (0.77) at the task of abusive language detection (Nobata et al., 2016) are not sufficient for use in production. With this recall, an algorithm would let pass every fifth inappropriate comment (containing hate speech, derogatory statements, or profanity), which is not acceptable. Pavlopoulos et al. (2017) address this problem by letting human moderators review comments that an algorithm could not classify with high confidence. Second, acceptance of this kind of black-box solution is still limited in the community and the models lack comprehensibility. A compromise can be (ensemble) decision trees, because they achieve comparable results and can give reasons for their decisions (Kennedy et al., 2017). Still, moderators and users do not feel comfortable with machines deciding which comments are allowed to be published, not least because of fear of concealed censorship or bias.
Predicting High Comment Volume
For each news article, we want to predict whether it belongs to the weekly top 10% articles with the highest comment volume. We chose this relative amount to account for seasonal fluctuations and also to even out periods with low newsworthiness. This traditional classification setting enables us to use established methods, such as logistic regression, to solve the task and provide explanations on why a particular article will receive many comments or not.
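As a rough illustration of this setup, the sketch below derives weekly top-10% labels and fits a logistic regression on a few metadata features; the file name, column names, and date cut-offs are placeholders and do not reflect the actual data layout.

```python
# Sketch of the weekly top-10% labelling and a logistic regression baseline.
# File name, column names, and split dates are illustrative placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support

def label_weekly_top10(df):
    """Mark articles whose final comment count is in the top 10% of their
    publication week; relative labels even out seasonal fluctuations."""
    week = df["published_at"].dt.to_period("W")
    threshold = df.groupby(week)["n_comments"].transform(lambda x: x.quantile(0.9))
    return (df["n_comments"] >= threshold).astype(int)

df = pd.read_csv("articles.csv", parse_dates=["published_at"])  # hypothetical file
df["top10"] = label_weekly_top10(df)

features = ["hour_of_day", "day_of_week", "facebook_promoted", "n_competing"]
train = df[df["published_at"] < "2017-01-01"]   # time-wise split, as in the evaluation
test = df[df["published_at"] >= "2017-04-01"]

clf = LogisticRegression(max_iter=1000).fit(train[features], train["top10"])
print(precision_recall_fscore_support(test["top10"], clf.predict(test[features]),
                                      average="binary"))
```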
As a baseline to compare against, we implemented a random forest model with features from Tsagkias et al. (2009). For our approach we extend this feature set and categorize the features into five groups. Our metadata features consist of article publication time, day of the week, and whether the article is promoted on our Facebook page. We consider temperature and humidity during the hour of publication 1 and the number of "competing articles" as context features. Competing articles is the number of similar articles and the total number of articles published by our newspaper in the same hour. These articles compete for readers and user comments. Figure 2 visualizes how the number of received comments is not affected by the significantly higher number of published articles on Thursdays. The publication peak on Thursdays is caused by articles that are published in our weekly printed edition and at the same time published online one-to-one. Further, we incorporate publisher information, such as genre, department, and which news agency served as a source for the article. We include these features in order to study their impact and performance at comment volume prediction tasks and not in order to focus on engineering complex features.
In addition, we propose to leverage the article content itself. Starting with headline features, we use ngrams of length one to three as well as author-provided keywords for the article. To capture topical information in the body, we rely on topic modeling and document embedding besides traditional bag-of-word (BOW) features. These guarantee that we also grasp some semantic representations of the articles. To this end, topic distributions, document embeddings, and word n-grams serve as semantic representations of articles. In order to model topics of news article bodies, we apply standard latent Dirichlet allocation (Blei et al., 2003). For the document embedding, we use a Doc2Vec implementation that downsamples higher-frequency words for the composition (Mikolov et al., 2013). We choose the vector length, number of topics, and window size based on F1-score evaluation on a validation set. Despite recent advances of deep neural networks for natural language processing, there is a reason to focus on other models: For the application in newsrooms and the integration in semiautomatic processes, comprehensibility of the prediction results is very important. A black-box model, even if it achieved better performance, is not helpful in this scenario. Human moderators need to understand why the number of comments is predicted to be high or low. This comprehensibility issue justifies the application of decision trees and regression models, which allow predictions to be traced back to their decisive factors. Table 1 lists precision, recall, and F1-score for the prediction of weekly top 10% articles with the highest comment volume. Especially the bag-of-words (BOW) and the topics of the article body, but also headline keywords and publisher metadata achieve higher F1-score than the metadata features. The highest precision is achieved with the binary feature whether an article is promoted on Facebook, whereas author and competing articles achieve the highest recall.
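One way the topic and embedding features could be produced with gensim is sketched below; the tokenisation, the toy article texts, and the hyperparameter values are placeholders (the paper tunes vector length, topic count, and window size on a validation set).

```python
# Sketch of topic-distribution and document-embedding features for article
# bodies using gensim. Tokenisation, toy texts, and hyperparameters are
# placeholders rather than the values used in the paper.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def topic_features(token_lists, num_topics=50):
    dictionary = Dictionary(token_lists)
    corpus = [dictionary.doc2bow(tokens) for tokens in token_lists]
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics)
    features = []
    for bow in corpus:
        vec = [0.0] * num_topics  # dense per-topic probability vector
        for topic_id, prob in lda.get_document_topics(bow, minimum_probability=0.0):
            vec[topic_id] = prob
        features.append(vec)
    return features

def embedding_features(token_lists, vector_size=100, window=5):
    tagged = [TaggedDocument(tokens, [i]) for i, tokens in enumerate(token_lists)]
    model = Doc2Vec(tagged, vector_size=vector_size, window=window,
                    min_count=1, sample=1e-4, epochs=20)
    return [model.infer_vector(tokens) for tokens in token_lists]

articles = [body.lower().split() for body in
            ["erster beispielartikel über politik", "zweiter artikel über wetter"]]
topic_vectors = topic_features(articles, num_topics=2)
embedding_vectors = embedding_features(articles, vector_size=10)
```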
Automatic Translation of Comments
Whether the first comment is a provocative question in disagreement with the article or an off-topic statement influences the route of further conversation. We assume that this holds not only for social networks (Berry and Taylor, 2017), but also for comment sections at news websites. Therefore, we consider the tone and sentiment of the first comments received shortly after article publication as an additional feature. Typical layouts of news websites (including ours) list comments in chronological order and show only the first few comments to readers below an article. Pagination hides later received comments and most users do not click through dozens of pages to read through all comments. As a consequence, early comments attract a lot more attention and, with their tone and sentiment, influence comment volume to a larger extent. Presumably, articles that receive controversial comments in the first few minutes after publication are more likely to receive a high number of comments in total.
To classify comments as controversial or engaging, we need to train a supervised classification algorithm, which requires thousands of annotated comments. Such training corpora exist, if at all, mostly for English comments, while our comments are written in German. We propose to apply machine translation to overcome this language barrier: Given a German comment, we automatically translate it into English. From a classifier that has been trained on an annotated English dataset, we can derive automatic annotations for the translated comment. The derived annotations serve as another feature for our actual task of comment volume prediction.
We reimplemented the classifier by Napoles et al. (2017a) and trained it on their English dataset. The considered annotations consist of 12 binary labels: addressed audience (reply to a particular user or broadcast message to a general audience), agreement/disagreement with previous comment, informative, mean, controversial, persuasive, off-topic regarding the corresponding news article, neutral, positive, negative, and mixed sentiment. We automatically translate all comments in our German dataset into English using the DeepL translation service 2 . For the translated comments, we automatically generate annotations based on Napoles et al.'s classifier. Thereby, we transfer the knowledge that the classifier learned on English training data to our German dataset despite its different language. This approach builds on the similar content style of both corpora, which is described in the next section.
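A schematic of turning these transferred labels into features is shown below; `translate_de_to_en` and `ynacc_classifier` are hypothetical stand-ins for the translation service call and the reimplemented Napoles et al. classifier, neither of which is reproduced here.

```python
# Schematic of deriving early-comment annotation features via translation and
# a classifier pre-trained on English data. `translate_de_to_en` and
# `ynacc_classifier` are hypothetical stand-ins, not real APIs.
def early_comment_features(german_comments, translate_de_to_en, ynacc_classifier,
                           n_first=4, n_labels=12):
    """Translate an article's first comments and concatenate their predicted
    binary labels (tone, sentiment, etc.) into one flat feature vector."""
    features = []
    for comment in german_comments[:n_first]:
        english = translate_de_to_en(comment)
        features.extend(ynacc_classifier.predict_labels(english))  # 12 binary labels
    features.extend([0] * (n_first * n_labels - len(features)))  # pad missing comments
    return features
```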
Dataset
We consider two datasets that both contain user comments received by news articles with similar topics. First, our German 7-million-comment dataset, which we call Zeit Online Comment Corpus (ZOCC) 3 and second, the English 10k-comment Yahoo News Annotated Comments Corpus (YNACC) (Napoles et al., 2017b). ZOCC consists of roughly 200,000 online news articles published between 2008 and 2017 and 7 million associated user comments in German. Out of 174,699 users in total, 60% posted more than one comment, 23% more than 10 comments and 7% more than 100 comments. For both articles and comments, extensive metadata is available, such as author list, department, publication date, and tags (for articles) and user name, parent comment (if posted in response), and number of recommendations by other users (for comments). Not surprisingly, ZOCC shows a popularity growth with an increasing number of articles and comments over time. While our newspaper published roughly 1,300 articles per month in 2010 and each article received roughly 20 comments on average, we nowadays publish roughly 1,500 articles per month, each receiving 110 comments on average. As both corpora's articles and comments cover a similar time span of several years and many different departments, they deal with a broad range of topics. While the majority of articles in YNACC is about economy, ZOCC's major department is politics. More than 50% of the comments in ZOCC are posted in response to articles in the politics department, whereas in YNACC culture, society, and economy share an almost equal amount of around 20% each and politics ranks fourth with 12%. On average, an article in ZOCC receives 90% of its comments within 48 hours, while it takes 61 hours for an article in YNACC. Despite their slight differences, both corpora cover most popular departments, which motivates the idea to transfer a classifier trained on YNACC to ZOCC. For YNACC, Napoles et al. (2017a) propose a machine learning approach to automatically identify engaging, respectful, and informative conversations. By identifying weekly top 10% articles with the highest comment volume, we focus on a different task. Nonetheless, both corpora, ZOCC and YNACC, have similar properties: both contain user comments posted in reaction to news articles across a similar time span and similar topics. However, only the much smaller YNACC provides detailed annotations regarding, for example, comments' tone and sentiment.
Evaluation
We compare to the approach by Tsagkias et al. and evaluate on the same task (Tsagkias et al., 2009, 2010). Therefore, we consider a binary classification task, which is to identify the weekly top 10% articles with the largest comment volume. Table 3 lists our final evaluation results on the hold-out test set. We choose F1-score as our evaluation metric, since precision and recall are equally relevant in our scenario. On the one hand, we want to achieve high recall so that no important article and its discussion is overlooked. On the other hand, we have limited resources and cannot afford to moderate each and every discussion. A high precision is crucial so that our moderators focus only on articles that need their attention. All experiments are conducted using a time-wise split with years 2014 to 2016 for training, January 2017 to March 2017 for validation, and April 2017 for testing. We find that our additional article and metadata features, but also the automatically annotated first comments, outperform the baseline. Due to the diversity of the different features, their combination further improves the prediction results. In comparison to the approach by Tsagkias et al., we finally achieve an 81% larger F1-score.
Automatically Translated Comments
With another experiment, we study the classification error introduced by translation. Therefore, we train two classifiers with the approach by Napoles et al.: First, we train and test a classifier on the original, English YNACC. Second, we automatically translate all comments in YNACC from English into German and use this translated data for training and testing of the second classifier. Comparing these two classifiers, we find that both precision and recall slightly decrease after translation, as shown in Table 4. Based on this result, we can assume that the translation of German comments into English introduces only a small error. Although YNACC and ZOCC differ in language, we can transfer a classifier that has been trained on YNACC to ZOCC. For each article, we use the labels assigned to the first four comments, which are visible on the first comment page below an article. The first four comments are typically received within very few minutes after article publication.
Number of Early Comments
As a baseline feature for comparison, we use the number of comments 4 received in a short time span after article publication. Annotated first-page comments, but also article and metadata features, significantly outperform the baseline until 32 minutes after article publication. After 32 minutes, the number of received comments outperforms every single feature (but not the combination of all our features). This is because the gap between the final number of comments and the number of comments received so far shrinks over time.
Conclusions
In this paper, we studied the task of predicting the weekly top 10% articles with the highest comment volume. This prediction helps to schedule the publication of news stories and supports moderation teams in focusing on article discussions that most likely require their attention. Our supervised classification approach is based on a combination of metadata and content-based features, such as article body and topics. Further, we automatically translate German comments into English to make use of a classifier pre-trained on English data: We classify the tone and sentiment of comments received in the first minutes after article publication, which improves prediction even further. On a 7-million-comment real-world dataset, our approach outperforms the current state-of-the-art by an over 81% larger F1-score. We hope that our prediction will help to reduce the number of cases where newspapers have no other choice but to close down a discussion section because of limited moderation resources.
Driving Stability Analysis Using Naturalistic Driving Data With Random Matrix Theory
ABSTRACT Driving behavior analysis has diverse applications in intelligent transportation systems (ITSs). The naturalistic driving data potentially contain rich information regarding human drivers' habits and skills in practical and natural driving conditions. But mining knowledge from them is challenging. In this paper, we propose a novel approach for analyzing driving stability using naturalistic driving data. Our method can extract features, based on the random matrix theory, to reflect the statistical difference between actual driving data and the data that would be generated by a theoretically ideal driver, and thus imply the skillful level of a driver in terms of vehicle control in both longitudinal and lateral directions. The execution of our method on a practical ITS dataset is conducted. Using the extracted features, a driving behavior analysis application that partitions drivers into clusters to identify common driving stability characteristics is demonstrated and discussed.
INDEX TERMS Driving behavior analysis, intelligent transportation systems, random matrix theory.
I. INTRODUCTION
With the ever-increasing number of vehicles on the road, traffic accident and jam become serious issues in most modern countries. The problems cannot be completely solved by conventional transportation engineering methods such as urban design and traffic control [1], [2]. The rapid development of information and communication technology (ICT) in the past decades enables the promising concept of intelligent transportation system (ITS). Equipping the transportation system with advanced sensing, communication, and computing capabilities can greatly enhance its capacity [3]- [5].
Most ITS services and applications target providing drivers with better knowledge regarding the driving environment to help decision making. Understanding human drivers' natural behaviors and habits is also important. Driver assistance services with driving behavior analysis functions can help a driver to be aware of the state of the vehicle, and of improper operations of his/her own as well as surrounding drivers' [6]. Such high-level information is also beneficial to the design of ITS [7]. Having the knowledge of human driving behaviors can further enable artificial intelligence (AI) autonomous driving agents to learn from good drivers and evolve to better ones that ensure both safety and comfort for passengers [8].
Driving behaviors have been investigated from diverse perspectives. The analysis is typically performed by studying maneuvering actions (e.g., accelerating, turning, car-following, and lane changing) using vehicle states (including speed, acceleration level, steering angle, and yaw rate, etc.), represented by various vehicle sensor data. Control skill serves as the basis for successfully taking proper driving actions in complex traffic environments [9], and thus is also important in driving behavior analysis. Reference [10] points out that driver evaluation should take both drivers' operational control actions and their ability to understand the environment into consideration. The characteristics of the operational control are inherently determined by the driver's control skill [11]. Understanding such skill is valuable and can help develop customized ITS services and applications [12].
Human drivers and many driving assistance applications in general rely on predicting the future states of the ego-vehicle and surrounding vehicles to ensure safety. For example, time-to-collision (TTC) is a typical indicator used by active-safety applications to ensure a safe distance between vehicles [13]. To estimate TTC, the target vehicles' future states must be predicted. Normally the future states of a vehicle in a short period are assumed to be constant. This simplifies the prediction process, by assuming that drivers tend to keep the states of their vehicles stable without abrupt changes. A driver who cannot do this becomes unpredictable. Therefore, one potential way of quantifying a driver's control skill is to measure the level to which he/she can maneuver the vehicle to keep the states unchanged. In this paper, this is termed driving stability. In general, the driving process can contain two types of phases, i.e. smooth driving phases (vehicles are controlled to have approximately fixed acceleration and steering angle), and action phases (vehicles are controlled to interact with surrounding vehicles, pedestrians, traffic signals, etc.). Analyzing driving stability in both phases is of importance. Naturalistic driving data have great potential in driving behavior analysis [14], since they contain the information of drivers' behaviors in natural driving conditions. One challenging task in data-driven driving behavior analysis is to extract features that can reflect maneuvering patterns from driving data. This requires properly tagging the data, i.e., partitioning the available driving data into segments, each of which corresponds to, ideally, a single objective maneuvering action. However, it is difficult to control the naturalistic driving data collection procedure to provide such label information. A number of existing works have proposed data partitioning solutions based on mathematics and signal processing techniques (e.g., see [6], [15]-[17]). But it is often difficult to guarantee each data segment to be the consequence of only a single complete maneuvering action. Using them to analyze driving behavior may bias the results.
In this paper, we investigate the method to analyze drivers' vehicle control skills without the need of partitioning naturalistic driving data into individual segments. We propose a novel algorithm to measure the driving stability level based on random matrix theory (RMT) [18], [19]. The data collected in the Safety Pilot Model Development (SPMD) program [20] are used to demonstrate the execution of our algorithm on a real-world ITS dataset. Specifically, the raw acceleration and steering angle data of each driver are first separated according to the speed level to roughly distinguish driving environments, and then respectively organized to matrices. From each data matrix, a series of sub-matrices are generated by sliding windows. The mean spectral radius (MSR) of every sub-matrix is calculated to reflect the distribution characteristics of the matrix entries. The differentiation of the MSR sequence, termed differential MSR (DMSR), is then taken. The concentration interval and outliers' dispersion level of the DMSR sequence are obtained as features to reflect the driving stability level of the driver, in the smooth driving and action phases respectively. Finally, based on these features, a density based spatial clustering of applications with noise (DBSCAN) clustering algorithm [21] is applied to partition all drivers into groups to summarize common driving stability characteristics.
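A minimal sketch of this sliding-window pipeline is given below. It assumes the MSR of a window is the mean modulus of the eigenvalues of the row-standardised square sub-matrix, a common random-matrix convention that may differ in detail from the authors' exact construction; the window size, step, and the quantile defining the concentration interval are likewise illustrative.

```python
# Sketch of the sliding-window MSR / DMSR pipeline. The MSR of a window is
# approximated as the mean modulus of the eigenvalues of the row-standardised
# square sub-matrix; this is an assumption, not necessarily the paper's exact
# construction. Window size, step, and quantile are illustrative.
import numpy as np

def mean_spectral_radius(window):
    """`window` is a square block of the driving-data matrix."""
    std = window.std(axis=1, keepdims=True)
    std[std == 0] = 1.0                                   # guard constant rows
    z = (window - window.mean(axis=1, keepdims=True)) / std
    eigvals = np.linalg.eigvals(z / np.sqrt(window.shape[1]))
    return np.abs(eigvals).mean()

def dmsr_features(X, window=40, step=10, q=0.9):
    """Slide a square window along the columns of X (rows = trips, columns =
    time samples), compute the MSR sequence, differentiate it, and summarise
    the DMSR distribution with a concentration interval and an outlier
    dispersion measure (both illustrative choices)."""
    msr = [mean_spectral_radius(X[:window, start:start + window])
           for start in range(0, X.shape[1] - window + 1, step)]
    dmsr = np.diff(msr)
    lo, hi = np.quantile(dmsr, [(1 - q) / 2, 1 - (1 - q) / 2])
    outliers = dmsr[(dmsr < lo) | (dmsr > hi)]
    return {"concentration_interval": (lo, hi),
            "outlier_dispersion": outliers.std() if outliers.size else 0.0}

rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.3, size=(40, 3000))   # synthetic acceleration matrix
print(dmsr_features(A))
```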
The main contributions of our paper can be summarized as follows: • We present a novel approach, based on RMT, to perform driving stability analysis using naturalistic driving data. Our method does not need to partition the data into segments according to individual maneuvering actions.
Through synthetic data, we show that the output of the proposed algorithm can evaluate the statistical difference between the driving data and the data that would be generated by an ideally skillful driver who can maintain constant vehicle states. Hence features can be extracted from naturalistic driving data to reflect driver skills.
• We demonstrate the execution of the proposed method on a practical dataset produced by ITS technologies. Using the extracted features, a clustering algorithm is employed. The results show that the majority of drivers share a similar pattern of driving stability. But some drivers exhibit notable differences from others. Such observations can potentially be used to help better understand human drivers and facilitate further investigations. The remainder of the paper is organized as follows. Section II reviews related works. Section III describes the dataset and pre-processing methods. Section IV presents the mathematical background of our RMT-based driving stability analysis algorithm. Section V discusses the results of executing the algorithm. Finally, Section VI concludes the paper.
Notations: Throughout the paper, O_{a×b} denotes an a × b all-zero matrix. For a matrix X, [X]_i and [X]_{i,j} respectively denote the ith row and the element in the ith row and jth column. X^H denotes the conjugate transpose of X.
II. RELATED WORKS
In addition to detecting the occasions that a driver does not fully concentrate on driving due to, e.g., sleepiness or distraction [22], driving behavior analysis normally refers to modeling drivers' habit of maneuvering vehicle to maintain the driving status (e.g., keeping the lane with constant speed) or interacting with traffic environment (e.g., turning, changing lane, or overtaking). Driver identification research works show that patterns extracted from driving data can be unique fingerprints. For instance, [23] extracts 12 features from turning maneuvers such that a random forest classifier can be used to distinguish drivers. Reference [24] estimates the distribution of accelerating maneuvers to identify drivers. Since different drivers exhibit different behavior characteristics, it is possible to distinguish abnormal drivers or even predict maneuvering intentions. For example, [25] uses random forest to classify drivers into low-risk, moderate-risk, and high-risk groups based on the transition probability between maneuvers. The safety evaluation method proposed in [26] measures driver risk by analyzing accelerating maneuvers using linear regression, decelerating maneuvers using Linde-Buzo-Gray algorithm, and turning maneuvers with kinematics analysis respectively. Reference [27] categorizes the performance of turning, accelerating and decelerating maneuvers into four levels using support vector machine and topological anomaly detection. Reference [28] predicts driver maneuvering intentions using a hidden Markov model. Hence, driving behavior analysis has great potential in understanding human participants on the road.
Analyzing driving behaviors is a sophisticated task, since drivers' maneuvering pattern can be affected by many factors such as road environment (e.g., intersection or traffic congestion) and driver categories (e.g., age). For example, [29] applies kinematics analysis to extract features from driving data of several driving school instructors, general drivers, and elderly drivers when they drive passing urban intersections. It is shown that elderly drivers exhibit a common pattern that is different from young drivers. Reference [30] also proves this result by distinguishing elderly drivers with linear discriminant analysis. Using support vector machines and hidden Markov models, [31] suggests that the stopping maneuvers at intersections can be used to classify drivers into two categories, either compliant or violating. By clustering drivers' glance allocations, [32] points out that the behavior pattern at signalized intersections can be quite different from that at unsignalized intersections. Reference [33] shows that a post-congestion condition can cause drivers to be more aggressive.
Apart from simple control actions on the accelerator/brake pedals and steering wheel, a number of works have also studied relatively complex driving activities (such as car-following, lane changing, and overtaking) that can be deemed as the combination of multiple low-level maneuvers. Reference [34] models human driving patterns by estimating the distribution of parameters that describe the lane changing maneuver, to accelerate the verification of automated vehicles. Reference [35] uses one-class support vector machine to detect dangerous lane changing maneuvers. Reference [14] models the car-following maneuver with Gaussian kernel density estimation to discuss the impact of data volume on driving behavior analysis. Reference [36] identifies aggressive and cautious car-following maneuvers by analyzing the relationship between the vehicle's dynamics and the distance between the leading and the following cars with a kinematics model. In [37], the car-following and approaching maneuvers are considered for driving behavior analysis. It uses the K-means algorithm to show that drivers can be partitioned into clusters according to their similarity levels of car-following time stability, prudence, conflict proneness, or skillfulness.
Most of the above works require that the driving data can be partitioned or tagged according to individual maneuvering actions. This can be challenging if driving behavior analysis is carried out on a large amount of naturalistic driving data. In addition, individual maneuvering actions may not fully reflect the control skills of drivers. In what follows, we present a method to evaluate the driving stability, based on RMT, to address these issues.
III. DATASET AND PRE-PROCESSING
We apply our proposed method on an example naturalistic driving dataset. The execution follows a data-driven research framework which consists of four stages as shown in Fig. 1: data collection, pre-processing, feature extraction, and data mining. In this section, we introduce the dataset, explain the pre-processing stage, and then define driving stability.
A. NATURALISTIC DRIVING DATASET
Driving simulator is a commonly considered data source in driving behavior analysis (see, e.g., [12], [38], [39]). It is in general straightforward to extract different control operations so that individual target maneuvering actions can be analyzed. Dangerous and extreme road conditions can also be studied. The main drawback of using simulation data is that simulators cannot fully reflect the true driving condition. To collect driving data in practical traffic environments, carefully designed field-experiments are conducted in [23], [29], [40]. However, one potential issue of such methods is that drivers may be aware of the experiment purposes. Reference [41] reports that there is a significant difference between drivers' natural behavior and the behavior when they drive experiment vehicles with measurement devices, especially in the first 50 hours.
The data that potentially contain the best knowledge of drivers' true behaviors are the naturalistic driving data [41]- [44]. Naturalistic driving data are collected when the vehicles are driven under natural conditions, and data collection lasts a long period of time [14] (e.g, 12 to 13 months in the 100-Car Naturalistic Driving Study [41]). The long data collection time makes drivers oblivious to the data collection process so that the influence of data measurement devices on drivers' mental state and behaviors is minimized. One way of attaining such data is to equip vehicles with devices that can access the automobile bus through the OBD-II port, and acquire the measurements of various in-vehicle sensors [23], [24], [30]. Sensors of other devices, such as smartphones [27], can also be considered as a cheaper solution, at the cost of limited data types and measurement accuracy. Driving data can be continuously collected when the vehicle is driven and then analyzed after a certain period of time.
Large-scale naturalistic driving data collection is costly. The recent development of the concept of Internet of vehicles (IoV) [4] provides a feasible solution to this problem. IoV refers to using vehicle-to-anything (V2X) wireless communication technologies to connect vehicles, roadside infrastructure, and other elements in the transportation system. Sensing data collected by various devices at different locations can be shared so that the environment awareness level of each individual vehicle can be significantly enhanced. A typical type of message transmitted in IoV is the heartbeat basic safety message (BSM) [45]. A BSM contains the real-time status information of a vehicle and is normally broadcast at a frequency of 10 Hz [45]. Currently, the dedicated short-range communication (DSRC) [46], LTE-V2X [47], and 5G technologies [48] are actively discussed in both academia and industry as V2X solutions. Large-scale experiments and field tests have also been conducted to verify the feasibility of IoV. It is expected that in the near future, all vehicles on the road will be equipped with sensing data collection and transmission devices. In addition to supporting real-time active-safety ITS applications, the BSMs stored in data centers may serve as valuable naturalistic driving data, from a large number of drivers, in a variety of traffic environments, and over a long period of time.
To demonstrate the proposed algorithm, we use the data of the SPMD program [43] as our naturalistic driving dataset. The program was conducted to evaluate the performance of DSRC and communication-based active-safety applications. It was carried out in Michigan, USA, and lasted one and a half years, starting in August 2012. A number of vehicles participated in the project and were equipped with a data acquisition system with a sampling frequency of 10 Hz. The data were organized in trips, each of which refers to one ignition cycle. The attributes include vehicle states (acceleration, steering, speed, etc.), road conditions (descriptions of lanes and intersections, etc.), and weather. The data have already been used for driving behavior analysis in, e.g., [14], [34].
Our method intends to summarize the driving skill of each driver from a large amount of driving data. Hence from the available dataset, we choose only the drivers who have sufficiently many (more than 40 for each speed scenario) long trip records (at least 6 minutes after pre-processing). This results in a total of 42 drivers, denoted by driver_01, driver_02, ..., driver_42. Finally, to demonstrate the unsupervised learning nature of the considered driving behavior analysis, we ignore the road condition and driving environment data, and use only the speed (m/s), acceleration (m/s²), and steering angle (°) readings. Acceleration is used for reflecting drivers' longitudinal control, steering angle is used for lateral control, and speed is considered to roughly infer driving condition, as explained in the next subsection.
B. DATA PRE-PROCESSING
The data pre-processing consists of the following steps.
1) DATA CLEANSING
The first step of data pre-processing is the detection and removal of missing and abnormal values through data cleansing [49]. The basic idea behind the proposed driving behavior analysis method is to quantify how much the statistical characteristics of the driving data matrix deviate from that in an ideal case. Occasional abnormal readings would not significantly affect the results. Sophisticated data cleansing algorithms are unnecessary. Hence we use simple interpolation to replace missing and abnormal readings.
If the speed data in a trip have continuous 0's, the vehicle might be immobile when these data were recorded. Since a driver does not exhibit any control skill in this case, the data should be removed from analysis. In our paper, when a speed segment of at least 10 continuous 0's (i.e., 1 second) is identified, the vehicle is believed to stay at the same location during that time. If the segment's length is relatively small, the associated data (including speed, acceleration, and steering angle) are removed. Otherwise, if a major portion of a trip has zero speed, the whole trip is discarded.
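A sketch of this cleansing step is given below; the interpolation call, the "major portion" threshold, and the DataFrame columns are illustrative assumptions, while the 10-sample zero-speed run corresponds to the 1 second stated above.

```python
# Sketch of the data-cleansing step: interpolate missing readings and drop
# samples recorded while the vehicle was immobile (>= 10 consecutive zero-speed
# readings, i.e. 1 second at 10 Hz). Columns and the 0.5 threshold are
# illustrative assumptions.
import pandas as pd

def clean_trip(trip, zero_run=10, max_zero_fraction=0.5):
    trip = trip.interpolate(limit_direction="both")   # fill missing / abnormal readings
    is_zero = trip["speed"].eq(0)
    run_id = (is_zero != is_zero.shift()).cumsum()    # label runs of equal values
    run_len = is_zero.groupby(run_id).transform("size")
    immobile = is_zero & (run_len >= zero_run)
    if immobile.mean() > max_zero_fraction:           # mostly stationary trip
        return None                                   # discard the whole trip
    return trip[~immobile].reset_index(drop=True)
```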
2) DATA SEPARATION
A driver may have different ways of taking the same maneuvering action in different driving conditions (e.g., different road types or traffic conditions). Identifying the driving stability level of each driver and comparing those of multiple drivers would be more meaningful under the same condition. Since the difficulty of controlling a vehicle varies with speed, we consider the speed level to be a main factor that influences a driver's behavior. Two different driving conditions are taken into consideration, i.e. the low-speed scenario and the high-speed scenario. To distinguish them, for each trip, we find the median value of the speed. If the result is greater than a pre-defined threshold, V_th, the data of the trip are assumed to be collected in a high-speed scenario. A low-speed scenario is assumed if the median speed is lower than V_th. Considering that freeway and non-freeway are typical high-speed and low-speed scenarios respectively, the choice of V_th can be made according to the typical difference between the speed limits of different road types. In our work, we take the speed limit policy in Michigan [50] as an example. There are two main types of road, freeway and non-freeway. The former has a minimum speed limit of 55 miles/hour (i.e., 24 m/s). For non-freeway roads, different maximum speed limits are set for different levels of road, in general smaller than 45 miles/hour (i.e., 20 m/s). Therefore, we choose V_th = 20 m/s.
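The separation rule itself is straightforward; a sketch using the V_th = 20 m/s threshold follows (the toy trip is illustrative).

```python
# Sketch of assigning a trip to the low- or high-speed scenario by its median
# speed, with the V_th = 20 m/s threshold stated above. The toy trip is illustrative.
import pandas as pd

def speed_scenario(trip, v_th=20.0):
    """Label a trip by comparing its median speed (m/s) against V_th."""
    return "high" if trip["speed"].median() > v_th else "low"

trip = pd.DataFrame({"speed": [22.0, 25.5, 24.1, 23.8]})   # toy freeway trip
print(speed_scenario(trip))                                 # -> "high"
```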
To further demonstrate the motivation of separating driving conditions into two scenarios, we apply a data visualization method, which was originally designed for discovering the difference between human and robot users in social media [51], to display the states of vehicle motion (speed, acceleration, and yaw rate) simultaneously. Fig. 2 illustrates the plot of three example drivers in our dataset, i.e., driver_15, driver_30 and driver_35. For each driver, we randomly select three hours of data from the low-speed scenario, and three hours from the high-speed scenario. The data of each hour are plotted as a plate, with a circumference, a kernel, and multiple threads pointing outward. The circumference represents the time information. For ease of illustration, we use the average data value of each second to summarize the driving data of that second. The one-hour data recording starts from the top center (i.e., 12 o'clock) and proceeds clockwise, with a total of 3600 sample instants. The plate kernel represents the speed information: its area denotes the average speed of the hour.
The threads represent the driving actions. Take acceleration in the longitudinal direction as an example. When the difference between the (average) acceleration readings in two consecutive seconds is larger than a certain threshold, it is believed that the driver took a notable operation on the accelerator pedal. A green thread is plotted at the time instant to denote this action, and the length of the thread is proportional to the difference between the incremental readings and the threshold. The same approach applies to deceleration (blue threads), left turn (yellow threads), and right turn (red threads).
Each row in Fig. 2 shows one driver's behaviors. The left hand side (LHS) data are collected from the low-speed scenario and the data on the right hand side (RHS) are from the high-speed scenario. Clearly each driver has a similar pattern when the average speed is similar. But comparing the two scenarios, it is seen that more frequent actions were taken by the same driver in the low-speed scenario. Different drivers can also have quite diverse patterns. Thus, it is reasonable to carry out the data separation step so that driving behaviors can be individually analyzed in each driving scenario.
3) DATA TRANSFORMATION
After separating each driver's data according to the speed scenario, we organize the driving data into matrix form. Specifically, four matrices are generated for each driver, denoted by A_l, A_h, S_l, and S_h respectively. We randomly choose M_l trips from each driver's data in the low-speed driving scenario, and M_h trips from the high-speed scenario. Let T_l and T_h be two sufficiently large integers. The matrix A_l is formed by the acceleration data of a driver in the low-speed scenario: each row of A_l is a segment of T_l acceleration readings (with unit m/s²) chosen from the middle part of each trip (to avoid data of the starting-up and full-stop operations). Similarly, the matrix A_h is formed by the acceleration data in the high-speed scenario, each row of which is a segment of T_h acceleration readings from one of the M_h trips. S_l and S_h are the steering angle data (with unit degree) matrices in the low- and high-speed scenarios respectively: the former consists of M_l trips, each with T_l readings, and the latter consists of M_h trips, each with T_h readings. In general, M_l and M_h (resp. T_l and T_h) can be different. For ease of presentation, we choose M = M_l = M_h = 40 and T = T_l = T_h = 3000. The proposed driving stability analysis algorithm, to be introduced in the next section, is applied to each of the four matrices. Therefore, we measure the control skill of a driver in the longitudinal and lateral directions, and in the low-speed and high-speed driving scenarios, respectively. Let D = {A_l, A_h, S_l, S_h} denote the set of data matrices. In the following sections, we discuss the execution of the proposed method on a single matrix X ∈ D.
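A possible way to assemble these matrices is sketched below, assuming each trip dict also carries 'accel' (m/s²) and 'steer' (degree) arrays of at least T samples; the field names and the centering of the segment within the trip are illustrative choices, not a prescribed part of the method.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_data_matrix(trips, signal, m=40, t=3000):
    """Stack one length-t segment of `signal` ('accel' or 'steer') per trip
    into an m x t matrix, taking the segment from the middle of each trip to
    avoid the starting-up and full-stop operations."""
    chosen = rng.choice(len(trips), size=m, replace=False)   # assumes >= m trips
    rows = []
    for idx in chosen:
        x = np.asarray(trips[idx][signal], dtype=float)      # assumes len(x) >= t
        start = (len(x) - t) // 2                            # centre the segment in the trip
        rows.append(x[start:start + t])
    return np.vstack(rows)                                   # shape (m, t)

# e.g. A_l = build_data_matrix(low_trips, 'accel'); S_h = build_data_matrix(high_trips, 'steer')
```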
C. DRIVING STABILITY
Before introducing the feature extraction stage, we present the definition of driving stability considered in our paper. In general, a human driver or a driving assistance system perceives the surrounding driving environment and makes maneuvering plans according to the observed movements of the ego-vehicle and other vehicles, the road condition, and the traffic rules. If a vehicle is driven in a relatively stable way without abrupt changes, it is easier to predict and can be considered safer. Hence the ability to control the vehicle in this way is deemed driving stability. Fig. 3 displays a sample row of the acceleration data matrix and a sample row of the steering angle data matrix. It is seen that a typical driving trip can roughly be divided into two types of phases. The first corresponds to periods in which the data vary around a certain constant. This is termed the smooth driving phase. In this type of phase, a driver, without being affected by the surrounding traffic environment, intends to maintain the same vehicle state in either the longitudinal or lateral direction. We consider an ideally skillful driver to be one who can keep the longitudinal force (represented by acceleration) and the lateral force (represented by steering angle) on the vehicle constant. In practice, the forces in both directions change continually, and the level of these changes may reveal the driver's skill. If the data change around an expected value with relatively high variation, the driving stability level is considered low, and the vehicle state is hard to predict.
Certainly, drivers have to interact with the traffic environment and carry out intentional operations to change the vehicle state. This corresponds to the second type of driving phase, termed the action phase. It can be viewed as the transition between two smooth driving phases. A more skillful driver is able to maneuver the vehicle and complete the transitions prudently. If the transitions are often carried out abruptly with rapid oscillations, the driver's skill level in vehicle control is considered relatively low. Hence, in action phases, the average slope of the changes in the data statistics can be used to reflect the control skill.
Assume that vehicle kinetic data collection using in-vehicle sensors is always subject to measurement noise. That is, the driving data matrix X can be written as X = F + N, in which F denotes the true force on the vehicle applied by the driver, and N represents random noise. The measurement noise of each trip is assumed to be a stationary Gaussian process. For an ideal driver (who can keep the true force in smooth driving phases constant and achieve extremely small data statistics change slopes, i.e., 0, in action phases), every row of F is a constant. The samples in each row of X are then identically distributed, and there is no correlation between any two rows. We use X̄ to denote the row-normalized version of matrix X, i.e., the ith row vector [X̄]_i is attained by normalizing [X]_i to consist of entries with zero mean and unit variance. Then all entries of X̄ are independent and identically distributed (i.i.d.). In practice, due to the driver's skill and the complex driving environment, it is hard to maintain a fixed acceleration and steering angle, even for a limited period of time. This causes the statistical characteristics of X̄ to differ from those in the ideal case. Therefore, we use an i.i.d. matrix as a theoretical benchmark data matrix (which is not practically achievable), and we take the amount by which the intrinsic characteristics of the X̄ generated by a driver deviate from the ideal case as the driving stability level.
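The row normalization used throughout the rest of the paper is simple enough to state as a one-liner; a sketch is given below.

```python
import numpy as np

def row_normalize(X):
    """Return the row-normalized matrix: each row rescaled to zero mean and unit variance."""
    mu = X.mean(axis=1, keepdims=True)
    sigma = X.std(axis=1, keepdims=True)
    return (X - mu) / sigma
```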
If the data segments corresponding to the two phases could be accurately separated, one possible approach to evaluating the driving stability in the smooth driving phases would be to measure the average variance of the acceleration or steering angle data over multiple segments; the stability in the action phases could likewise be evaluated by finding the average slopes of the data changes. However, distinguishing the two phases in naturalistic driving data is nontrivial. In the following sections, we propose our RMT-based algorithm to extract representative features from the data matrix X without requiring driving phase separation.
IV. FEATURE EXTRACTION THROUGH RMT
In this section, we present an algorithm to extract features that can describe driving stability. Since the row-normalized data matrix of an ideally skillful driver has i.i.d. entries, we use the statistical difference between the normalized matrix and an i.i.d. random matrix to reflect the stability level of a real driver. To this end, we first follow the RMT and present indicators to reflect the statistical characteristics of an i.i.d. random matrix.
A. RING LAW

Consider l non-Hermitian random matrices R_1, R_2, ..., R_l, each of size m × n (m ≤ n) with i.i.d. entries of zero mean and unit variance. The singular value equivalent of R_s (s = 1, ..., l) is defined as

R_{u,s} = (R_s R_s^H)^{1/2} U,   (1)

where U is an m × m Haar-unitary matrix. Let

R = ∏_{s=1}^{l} R_{u,s}.   (2)

Denote the standard deviation of [R]_i, the ith row of R, by σ_i. We can define an m × m matrix R̃ such that the relationship between the ith row of R̃ and the ith row of R is

[R̃]_i = [R]_i / (√m · σ_i),  i = 1, ..., m.   (3)

Clearly, the variance of the entries of R̃ is 1/m. The matrix R̃ has m (complex) eigenvalues. Since R̃ is a random matrix, the eigenvalues are also random. If m → ∞, n → ∞, and lim_{m→∞} m/n = c is a constant (0 < c ≤ 1), the probability density function (PDF) of the m eigenvalues converges to a limiting spectral density (LSD) [52]:

f_{R̃}(λ) = (1/(π c l)) |λ|^{(2/l)−2}  for (1−c)^{l/2} ≤ |λ| ≤ 1, and f_{R̃}(λ) = 0 otherwise.   (4)

This is the ring law, which says that on the complex plane, the eigenvalues are confined within a ring defined by an outer circle with unit radius and an inner circle with radius (1−c)^{l/2}. The property applies to any l ≥ 1.
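As a numerical illustration of (1)-(4), the sketch below builds the product of singular value equivalents for l row-normalized matrices, renormalizes its rows, and returns the eigenvalues whose empirical distribution can be compared against the ring; the SVD-based square root and the SciPy Haar-unitary sampler are implementation choices, not part of the original formulation.

```python
import numpy as np
from scipy.stats import unitary_group

def ring_law_eigenvalues(X_list):
    """Given l row-normalized m x n matrices, form the product of their singular
    value equivalents (1)-(2), renormalize the rows as in (3), and return the
    eigenvalues of the resulting m x m matrix."""
    m = X_list[0].shape[0]
    R = np.eye(m, dtype=complex)
    for X in X_list:
        U, s, _ = np.linalg.svd(X, full_matrices=False)   # X = U diag(s) V^H
        sqrt_XXH = (U * s) @ U.conj().T                    # sqrt(X X^H)
        R = R @ (sqrt_XXH @ unitary_group.rvs(m))          # times a Haar-unitary matrix
    # Row renormalization so that the entries of R_tilde have variance 1/m
    R_tilde = R / (np.sqrt(m) * R.std(axis=1, keepdims=True))
    return np.linalg.eigvals(R_tilde)

# e.g. eigs = ring_law_eigenvalues([row_normalize(np.random.randn(40, 80))])
```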
For instance, we generate a 40 × 80 synthetic random matrix Y_1 with i.i.d. Gaussian entries, row-normalize it to Ȳ_1, and then calculate the matrix R̃ following (1)-(3) by setting l = 1 and R_1 = Ȳ_1. The eigenvalues of R̃ are plotted on the complex plane as purple dots in Fig. 4(a), together with the outer circle (with radius 1) and the inner circle (with radius (1/2)^{1/2} ≈ 0.707). Since the matrix size is sufficiently large, almost all of the 40 eigenvalues lie within the ring belt.
If the condition that all entries in the matrices R_1, R_2, ..., R_l are i.i.d. does not hold, the ring law is violated: the eigenvalues tend to collapse toward the origin of the complex plane. For example, consider a 40 × 80 all-zero matrix Y = O_{40×80}. We randomly select 10 rows from the matrix. For the ith selected row, one position index j_i (20 ≤ j_i ≤ 40) is randomly selected. Then we set the elements [Y]_{i,j} = j − j_i for j ∈ {j_i+1, j_i+2, ..., j_i+10}, and [Y]_{i,j} = 10 for j ∈ {j_i+11, j_i+12, ..., 80}. The resulting matrix is added to Y_1 to produce a new synthetic matrix Y_2, whose entries are now non-i.i.d. We normalize Y_2 to Ȳ_2, and follow the steps of generating matrix R̃ by setting l = 1 and R_1 = Ȳ_2. The eigenvalues of R̃ are plotted as red triangles in Fig. 4(a), which clearly shows the violation of the ring law. If more than 10 rows of Y_2 had changes of the entry statistics, the eigenvalues would tend to be even closer to the origin. Now we again randomly select 10 rows from Y = O_{40×80}. For each of these rows, one position index j_i (20 ≤ j_i ≤ 40) is chosen. All elements in this row to the RHS of position j_i + 10 are set to 30, i.e., [Y]_{i,j} = 30 for j ∈ {j_i+11, ..., 80}, and a linear increase is set between columns j_i+1 and j_i+10, i.e., [Y]_{i,j} = 3(j − j_i) for j ∈ {j_i+1, ..., j_i+10}. The resulting matrix is added to Y_1 to produce another synthetic matrix Y_3. Compared with Y_2, the change of the entry statistics occurs to a greater extent. We generate the matrix R̃ by setting l = 1 and R_1 to be the row-normalized version of Y_3, and plot the eigenvalues of R̃ as green asterisks in Fig. 4(a). It is seen that a larger difference from the original i.i.d. matrix leads to eigenvalue characteristics farther away from those described by the ring law. Therefore, comparing the statistical behaviors of the eigenvalues of matrix R̃ can reflect how much a matrix differs from an i.i.d. matrix.
Finally, we randomly choose two drivers from our dataset. For each driver, a 40 × 80 sub-matrix is extracted from the acceleration data matrix in the low-speed driving scenario. Following (1)-(3), we calculate the matrix R̃ by setting l = 1 and R_1 to be the row-normalized version of the sub-matrix. The eigenvalues of R̃ are plotted in Fig. 4(b). Clearly, their behaviors notably violate the ring law.
B. LINEAR EIGENVALUE STATISTIC (LES)
To statistically summarize the random behaviors of the m eigenvalues of matrix R̃, we define the LES as [54]

p_LES = ∑_{i=1}^{m} ϕ(λ_i),   (5)

where ϕ(λ_i) is a continuous function of the ith eigenvalue λ_i. p_LES is a statistic of the eigenvalues and is proved to satisfy the law of large numbers and the central limit theorem [54]. According to the law of large numbers, when m → ∞, (1/m) p_LES converges in probability to the expectation of ϕ(λ):

(1/m) p_LES → ∫ ϕ(λ) f_{R̃}(λ) dλ,   (6)

where f_{R̃}(λ) is the LSD in (4). Based on the central limit theorem, [55] proves that, when the entries in the matrices R_s are i.i.d., the samples of p_LES have a small confidence interval. One way of defining the function ϕ(λ_i) is to set ϕ(λ_i) = |λ_i|/m. The resulting LES is termed the mean spectral radius (MSR) [19] and is denoted by p_MSR. It calculates the mean distance between the origin of the complex plane and the eigenvalues:

p_MSR = (1/m) ∑_{i=1}^{m} |λ_i|.   (7)

When the PDF of the eigenvalues converges to the LSD in (4), we can obtain the theoretical value of p_MSR using (4) and (6) as

p*_MSR = ∫ |λ| f_{R̃}(λ) dλ = 2 (1 − (1−c)^{(l+2)/2}) / (c (l+2)).   (8)

For instance, when l = 1 and c = m/n = 0.5, we have p*_MSR = 0.8619. The MSR describes the behaviors of the random eigenvalues using a single value. Comparing the MSR of a matrix with p*_MSR provides a measurement of the difference between the matrix and an i.i.d. random matrix.
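The MSR of a finite sample and its theoretical counterpart under the LSD in (4) can be computed as follows; the closed form in `msr_theoretical` follows the reconstruction of (8) above, and it reproduces the values 0.8619 (l = 1, c = 0.5) and 0.9797 (l = 1, c = 0.08) quoted in the text.

```python
import numpy as np

def msr(eigvals):
    """Mean spectral radius (7): average distance of the eigenvalues to the origin."""
    return float(np.mean(np.abs(eigvals)))

def msr_theoretical(c, l=1):
    """Theoretical MSR (8), obtained by integrating |lambda| against the ring-law LSD (4)."""
    return 2.0 * (1.0 - (1.0 - c) ** ((l + 2) / 2)) / (c * (l + 2))
```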
C. DIFFERENTIAL MSR (DMSR)
To describe the changes of statistics of each driver's driving data matrices, we follow [19] and separate the M × T data matrix X ∈ D into a series of M × N (M ≤ N ≤ T) sub-matrices, using T × N sliding-window matrices W^[1], W^[2], ..., W^[T−N+1]:

X^[k] = X W^[k],  k = 1, 2, ..., T − N + 1,   (9)

where W^[k] is the selection matrix whose product with X extracts the N consecutive columns of X starting from the kth column. For each X^[k], we can find its row-normalized matrix X̄^[k], and then follow (1)-(3) by setting l = 1 and R_1 = X̄^[k] to attain the matrix R̃. Since the entries of X̄^[k] are in general not i.i.d., the MSR of R̃, denoted by p^[k]_MSR, is a random value that is almost always less than p*_MSR. The value p^[k]_MSR is likely to be smaller when the data sub-matrix X^[k] is more different from an i.i.d. matrix.
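Putting the pieces together, the MSR sequence over all sliding windows can be sketched as below, reusing `row_normalize`, `ring_law_eigenvalues`, and `msr` from the earlier sketches; this is a direct, unoptimized loop over the T − N + 1 windows.

```python
import numpy as np

def msr_sequence(X, n_window):
    """MSR of every row-normalized sliding-window sub-matrix X^[k] in (9)."""
    m, t = X.shape
    seq = []
    for k in range(t - n_window + 1):
        sub = X[:, k:k + n_window]                        # X^[k] = X W^[k]
        lam = ring_law_eigenvalues([row_normalize(sub)])  # l = 1, R_1 = row-normalized X^[k]
        seq.append(msr(lam))
    return np.array(seq)
```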
We use an example to show that the MSR sequence p^[1]_MSR, p^[2]_MSR, ..., p^[T−N+1]_MSR can reflect changes of data statistics. Starting from a 40 × 3000 matrix Y, two rows are selected and modified so that, at three places, the data change from one constant level to another; in particular, one of these changes increases linearly for j ∈ {2301, 2302, ..., 2400}, after which [Y]_{i,j} = 20 for j ∈ {2401, 2402, ..., 3000}. Define Y_4 = Y + N_0, where N_0 is a Gaussian noise matrix, and display one of the selected rows of Y_4 in Fig. 5(a) (red curve). We use these rows to mimic three driving operations, i.e., changes of acceleration/steering angle from one constant value to another. (The changes in the two rows lead to correlation and a significant deviation from an i.i.d. matrix.) The first two changes have the same magnitude, but the former is more rapid (with a larger absolute value of the slope). The third change has the same slope as the second, with a smaller magnitude. Now, we set each sliding-window matrix W^[k] to be a 3000 × 500 matrix, for k ∈ {1, 2, ..., 2501}. Multiplying Y_4 by W^[k] generates a total of 2501 sub-matrices with dimension 40 × 500, denoted by Y_4^[1], Y_4^[2], ..., Y_4^[2501], respectively. For each value of k, we derive the row-normalized matrix Ȳ_4^[k] and calculate the MSR, p^[k]_MSR, of the resulting matrix R̃. The MSR sequence is plotted in Fig. 5(a) (black curve), where the kth MSR and the (k+499)th element of the selected row (i.e., the last entry selected by the window) are aligned.
It can be seen that when k increases from 1 to 200, the values of p^[k]_MSR fluctuate around a constant. This is because all the normalized sub-matrices Ȳ_4^[k] are i.i.d. matrices. The MSR is then a random variable with expected value p*_MSR = 0.9797, attained using (8) with l = 1 and c = 40/500 = 0.08, and a small confidence interval [55]. When k exceeds 200, the sub-matrix Y_4^[k] begins to cover the first change of data statistics and the values of p^[k]_MSR decrease; once the sliding window has completely passed the change, the values of p^[k]_MSR again vary slightly around p*_MSR = 0.9797. These lead to the first U-shape shown in Fig. 5(a). Similarly, when k continues to increase, the sliding window passes the other two changes, which results in two more U-shapes in the figure.
From Fig. 5(a), it is seen that different types of data statistics changes lead to different U-shapes of the MSR. The first and second changes have the same magnitude, so the associated U-shapes have a similar depth. But since the second change has a smaller slope, it takes more time to complete, which causes a greater width of the U-shape. The third change has a smaller magnitude than the second change, and a smaller depth of the U-shape is observed. Therefore, by comparing the depth and width of the U-shapes of the MSR sequence, one can roughly measure the magnitude and the time instant of statistical changes in a large data matrix [19].
However, such a method requires the U-shapes of the MSR to be isolated so that their depth and width can be identified. This means that each sub-matrix must contain only a single change of data statistics. Although this is possible for data collected in certain domains such as smart grids [19], through a proper selection of the window size, it is difficult to satisfy this requirement with naturalistic driving data due to drivers' frequent operations. For instance, we generate another synthetic matrix Y_5, which has the same three data changes as Y_4, but the changes occur closer to each other, as shown in Fig. 5(b), so that two changes can be included in the same sliding window. The figure shows that this leads to overlapped U-shapes. Measuring the depth and width of each U-shape becomes hard to accomplish, especially if the changes of data statistics appear frequently in different rows.
To address this issue, based on the MSR, we propose a new parameter, termed the differential mean spectral radius (DMSR):

p^[k]_DMSR = p^[k+1]_MSR − p^[k]_MSR,  k = 1, ..., T − N.   (10)

Essentially, the DMSR sequence measures the descending or ascending speed of the U-shapes of the MSR. For two U-shapes with a similar depth, the one that has a smaller width is likely to have a steeper edge, i.e., some large absolute values of p^[k]_DMSR. For U-shapes with a similar width, a greater depth is more likely to cause some large values of the DMSR. Fig. 5(a) illustrates the DMSR sequence of Y_4 (blue curve). It is seen that the DMSR can clearly reflect the characteristics of the U-shapes of the MSR sequence. The DMSR exhibits a random behavior, but most realizations lie within a certain interval, with frequent occurrence of large values corresponding to the edges of the U-shapes. In addition, deeper and narrower U-shapes of the MSR sequence (e.g., caused by the first change) lead to DMSR values that deviate more significantly from the common values. Since the DMSR sequence exploits the information contained in only a small number of MSR samples to measure the changing level of the data statistics, identifying the complete U-shape is not necessary. In Fig. 5(b), we also show the DMSR of Y_5. The pattern shown in Fig. 5(a) can still be observed, even though the U-shapes of the MSR overlap.
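With the MSR sequence in hand, the DMSR in (10) is just its first-order difference; a one-line sketch:

```python
import numpy as np

def dmsr_sequence(msr_seq):
    """DMSR (10): difference between consecutive MSR values, length T - N."""
    return np.diff(msr_seq)
```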
D. CASE STUDIES
To obtain features that reflect driving stability from the DMSR sequence, we first use two case studies to explain how the DMSR sequence contains driving stability information in the two driving phases respectively. The data used in our paper were collected from an IoV research project and thus do not provide any label information on whether a driver has a higher or lower driving stability level. Therefore, the case studies are based on synthetic data that simulate the characteristics of driving data. In a smooth driving phase, a driver operates the vehicle in order to maintain a constant state of the vehicle. Such operations in general occur frequently, with limited strength. Consequently, each sub-matrix attained using (9) has entries with multiple small changes in data statistics. Drivers with good driving skills tend to generate data matrices with less significant statistical changes compared with drivers with poor skills. In an action phase, a driver makes control operations that lead to changes of statistics with much larger magnitude than those in smooth driving phases. A skillful driver tends to make the operations prudently and steadily. In what follows, we use simulated data to explore the influence of these two kinds of changes on DMSR respectively.
1) SMOOTH DRIVING PHASES SIMULATION
Generate a 40 × 3000 synthetic control data matrix Y. For each row i (i ∈ {1, ..., 40}), let the data have a step change with random magnitude every 5 elements to simulate frequent operations in the smooth driving phases. Specifically, for each row i, we sample 600 random values from a uniform distribution between −0.5 and 0.5, and let the (5(j−1)+1)th to the (5j)th elements, i.e., [Y]_{i,5(j−1)+1}, ..., [Y]_{i,5j}, equal the jth sample. Finally, we set the synthetic driving data matrix Y_6 = Y + N_1, where N_1 is a Gaussian noise matrix with mean 0 and standard deviation 0.5. By this means, we simulate frequent changes of data statistics around constants.
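The smooth-phase matrices Y_6 and Y_7 described here and just below can be reproduced with the following sketch; the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def smooth_phase_matrix(m=40, t=3000, step=5, amp=0.5, noise_std=0.5):
    """Piecewise-constant control signal (a new uniform level every `step` samples)
    plus Gaussian measurement noise, mimicking frequent small operations."""
    levels = rng.uniform(-amp, amp, size=(m, t // step))
    control = np.repeat(levels, step, axis=1)   # 40 x 3000 control matrix Y
    return control + rng.normal(0.0, noise_std, size=(m, t))

Y6 = smooth_phase_matrix(amp=0.5)   # levels drawn from [-0.5, 0.5]
Y7 = smooth_phase_matrix(amp=1.0)   # levels drawn from [-1, 1], used in the next paragraph
```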
Set the sliding-window matrices W^[k] to be 3000 × 80 matrices. Apply (1)-(3) and (7) to the row-normalized sub-matrices of Y_6 to obtain the MSR sequence, and apply (10) to obtain the DMSR sequence, as plotted in Fig. 6. Due to the frequent changes of data statistics in each sub-matrix, the U-shapes of the MSR heavily overlap and thus no complete U-shape is observable. However, based on the results shown in Fig. 6, the behavior of the DMSR can still reflect the data statistics of the original data matrix Y_6. The frequent changes in matrix Y cause a large number of the p^[k]_DMSR values to be relatively far from the center value 0. In other words, the observed DMSR values are more dispersed than those attained from ideal i.i.d. matrices. Larger changes of statistics cause an even more dispersed behavior of the DMSR.
To show this, we generate another data matrix Y_7 in the same way as Y_6, except that the elements of the control data matrix Y are sampled from a uniform distribution between −1 and 1. This reflects both a larger magnitude and a higher speed of data statistical changes compared with Y_6. In Fig. 7, we use box plots to visualize the dispersion characteristics of the DMSR sequences corresponding to the two synthetic data matrices. The box size is determined by the interquartile range (IQR), and the boundaries are the positions whose distance to the nearby box edge equals the IQR. Data samples outside the boundaries are treated as outliers. Clearly, the concentration interval size, i.e., the box size, of the DMSR sequence corresponding to Y_6 is smaller than that corresponding to Y_7. Therefore, the box size of the box plot of the DMSR can help reflect the driving stability level in smooth driving phases.
2) ACTION PHASES SIMULATION
To explore the influence of driving operations in action phases on the DMSR, we further study two synthetic data matrices. Let Z initially be an all-zero matrix O_{40×3000}. For each row i (i ∈ {1, 2, ..., 40}) of Z, we first randomly find a position index r_i (1000 ≤ r_i ≤ 2000). The next 50 elements to the right of [Z]_{i,r_i}, i.e., [Z]_{i,r_i+1}, ..., [Z]_{i,r_i+50}, then increase or decrease linearly from 0. The absolute values of the slopes are randomly sampled from a Gaussian distribution with mean 0.2 and standard deviation 0.1. The remaining entries [Z]_{i,r_i+j}, ∀j ∈ {51, ..., 3000 − r_i}, are set to be the same as [Z]_{i,r_i+50}. By this means, each change simulates a relatively large control operation. Finally, the synthetic data matrix is attained by Y_8 = Y_6 + Z. The resulting matrix contains both long, significant changes and frequent, small changes of data statistics compared with an i.i.d. matrix.
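A sketch of this action-phase construction, assuming Y6 from the earlier smooth-phase sketch, is given below; the 0-based array indexing shifts the column positions by one relative to the description above, which does not affect the statistics.

```python
import numpy as np

rng = np.random.default_rng(2)

def action_phase_matrix(base, slope_mean=0.2, slope_std=0.1, ramp_len=50):
    """Add one large ramp-and-hold change per row of `base`, mimicking an
    intentional control operation in an action phase."""
    m, t = base.shape
    Z = np.zeros((m, t))
    for i in range(m):
        r = rng.integers(1000, 2001)                      # ramp start column
        slope = abs(rng.normal(slope_mean, slope_std))
        sign = rng.choice([-1.0, 1.0])
        ramp = sign * slope * np.arange(1, ramp_len + 1)  # linear increase/decrease from 0
        Z[i, r:r + ramp_len] = ramp
        Z[i, r + ramp_len:] = ramp[-1]                    # hold the new level afterwards
    return base + Z

Y8 = action_phase_matrix(Y6, slope_mean=0.2)
Y9 = action_phase_matrix(Y6, slope_mean=0.4)   # the steeper variant introduced next
```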
The second synthetic data matrix, Y_9, is generated similarly, except that the absolute values of the slopes of the data changes in Z are sampled from a Gaussian distribution with mean 0.4. Fig. 7 also shows the box plots of the DMSR sequences corresponding to Y_8 and Y_9. Their box sizes are almost the same, because both matrices are generated from Y_6 and have the same level of frequent changes. However, the DMSR sequence of Y_9 has more outliers farther away from the box center. This is in line with the observation made in Fig. 5: steeper changes of data statistics lead to deeper U-shapes of the MSR and to DMSR values that disperse significantly from the center. Therefore, one can use the dispersion of the outliers to reflect the driving stability level in action phases.
E. DRIVING STABILITY FEATURE EXTRACTION ALGORITHM
As shown in the above case studies, we can use the concentration interval and the outliers' dispersion level to summarize the statistics of the DMSR sequence, which reflect the driving stability in the two driving phases respectively. To facilitate a quantitative analysis, denote by q_a the ath percentile of the DMSR sequence. For a certain value of 0 < a < 50, the feature representing the concentration level of the DMSR sequence can be defined by

C_DMSR = q_{100−a} − q_a.   (11)

In this paper, we let a = 25, so that C_DMSR is the IQR (the box size) of the box plots in Fig. 7. Outliers can be defined as those DMSR samples greater than (α + 1)q_{100−a} − αq_a or smaller than (α + 1)q_a − αq_{100−a} for some α > 0. In our paper, we choose α = 1, so that DMSR samples whose distance from the median value, denoted by p̃_DMSR, is much larger than C_DMSR are deemed to be outliers. Denote the set of outliers by P_out. The feature representing the dispersion level of the outliers can be defined as the average distance between the outliers and p̃_DMSR, i.e.,

O_DMSR = (1/|P_out|) ∑_{p ∈ P_out} |p − p̃_DMSR|,   (12)

where |P_out| is the cardinality of P_out.

Algorithm 1: Driving Stability Feature Extraction
Input: Driving data matrix X ∈ D and sliding-window size N
1: for k = 1, 2, ..., T − N + 1 do
2:   Attain the sub-matrix X^[k] using (9)
3:   Find the row-normalized matrix X̄^[k]
4:   Set l = 1, R_1 = X̄^[k], and calculate R_{u,1} using (1)
5:   Calculate R using (2)
6:   Calculate R̃ using (3) and find its eigenvalues
7:   Calculate p^[k]_MSR using (7)
8: end for
9: Calculate p^[1]_DMSR, ..., p^[T−N]_DMSR using (10)
10: Calculate C_DMSR using (11)
11: Calculate O_DMSR using (12)
Output: Driving stability features C_DMSR and O_DMSR

Hence one can compare the features of two drivers to evaluate their skills. The complete algorithm to derive these features from the driving data is shown in Algorithm 1. Using such features to describe the different levels of driving stability can facilitate grouping drivers into clusters according to their skills, to allow further investigations on driving behaviors. In the next section, we apply the proposed algorithm to our dataset. Each individual driver's stability level can be measured, according to which the drivers are clustered into groups to study the common driving stability pattern. Drivers isolated from the majority can be identified and sent for additional inspection.
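The two features at the end of Algorithm 1 can be computed from a DMSR sequence as follows; the percentile-based definitions mirror (11) and (12) with a = 25 and α = 1.

```python
import numpy as np

def stability_features(dmsr, a=25, alpha=1.0):
    """Concentration interval C_DMSR (11) and outlier dispersion O_DMSR (12)."""
    q_low, q_med, q_high = np.percentile(dmsr, [a, 50, 100 - a])
    c_dmsr = q_high - q_low
    upper = (alpha + 1) * q_high - alpha * q_low   # outlier boundaries, alpha = 1
    lower = (alpha + 1) * q_low - alpha * q_high
    outliers = dmsr[(dmsr > upper) | (dmsr < lower)]
    o_dmsr = float(np.mean(np.abs(outliers - q_med))) if outliers.size else 0.0
    return c_dmsr, o_dmsr
```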
V. NATURALISTIC DRIVING DATA ANALYSIS
A. INDIVIDUAL DRIVER ANALYSIS
We use the SPMD dataset to demonstrate the results; only the low-speed scenario is shown here. For the acceleration data, the mean value of C_DMSR over all 42 drivers in our dataset is 2.80 × 10⁻³, and the standard deviation is 2.28 × 10⁻⁴, which gives a coefficient of variation (CV) of 8.13%. Fig. 8 shows the acceleration DMSR box plots of four drivers. A clear difference between the box size of driver_42 and those of the others can be observed. The average level of frequent, small data statistical changes of driver_42 in smooth driving phases is the smallest, which implies the highest driving stability in the longitudinal direction. Further, driver_11 has the largest C_DMSR, but the difference between those of driver_12 and driver_20 is not sufficiently large. Later we will see that these three drivers are clustered into the same group according to the stability levels reflected by their driving data.
The mean, standard deviation, and CV of O_DMSR over all 42 drivers are 5.50 × 10⁻³, 5.11 × 10⁻⁴, and 9.28%, respectively. From Fig. 8, it is seen that, although driver_12 and driver_20 have similar box sizes, the outliers of driver_20 are more dispersed and farther away from p̃_DMSR than those of driver_12, which leads to a larger value of O_DMSR. The result implies that the average level of long, large data statistical changes of driver_20, corresponding to action phases, is greater than that of driver_12. A similar analysis can be conducted for the control in the lateral direction. We attain 42 values of C_DMSR, with mean 2.75 × 10⁻³, standard deviation 3.80 × 10⁻⁴, and CV 13.82%, and 42 values of O_DMSR, with mean 5.53 × 10⁻³, standard deviation 9.36 × 10⁻⁴, and CV 16.91%. Both CV values are larger than those derived from the acceleration data, which may imply that drivers tend to behave more differently in lateral-direction operations. The box plots of four example drivers' steering DMSR are shown in Fig. 9. Different levels of box size and outlier dispersion can also be observed. The results imply that driver_25 has higher driving stability in smooth driving phases, while the steering angle data of driver_41 exhibit larger-magnitude variations than those of the other drivers, indicating a relatively lower level of driving stability.
Finally, to illustrate the consistency of the proposed features, we choose driver_42, who has a sufficiently large number of trips in the low-speed scenario. Two acceleration data matrices are generated using randomly sampled, non-overlapping trips. The box plots of the DMSR of both matrices are shown in Fig. 8. Their characteristics are very similar, since they reflect the behavior of the same driver. All the above observations demonstrate the effectiveness of our method.
B. DRIVING STABILITY CLUSTERING
The above driving stability analysis results can potentially be adopted to facilitate further investigations of driving behaviors. Due to the lack of label information in naturalistic driving data that can be matched to a safety level, it is in general hard to supervise a learning algorithm to determine the exact knowledge that can distinguish safe and dangerous behaviors. However, it is commonly accepted that if a driver's behavior deviates significantly from that of the majority of normal drivers (e.g., driving over-cautiously or over-aggressively), he/she may pose a danger to others and thus should be identified for further inspection [26], [56]. Therefore, we demonstrate a potential application of the proposed features by applying DBSCAN [21] to help find the common and uncommon driving patterns from the perspective of our driving stability analysis. The basic idea behind the DBSCAN algorithm is to partition data points into groups such that the density of data points inside a cluster is much higher than that of points outside. Hence a cluster always contains at least a certain number (three in our experiment) of closely located data points. The algorithm can discover clusters of arbitrary shape, without the need to determine the number of clusters in advance. Data points that are not included in any cluster are considered noise objects. Noise objects and clusters far from the majority can be further studied.
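A minimal sketch of this clustering step with scikit-learn is shown below; min_samples = 3 matches the experiment, while eps and the feature standardization are illustrative choices rather than values taken from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def cluster_drivers(feature_pairs, eps=0.8, min_samples=3):
    """Cluster drivers by a pair of stability features (e.g. one [C_acc, C_ste] row per driver).
    Returns one label per driver; label -1 marks noise objects outside every cluster."""
    scaled = StandardScaler().fit_transform(np.asarray(feature_pairs, dtype=float))
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(scaled)
```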
We denote the DMSR concentration intervals for the acceleration and steering angle data in the low-speed scenario as C^l_acc and C^l_ste, respectively. For instance, Fig. 10(a) shows the clustering result based on [C^l_acc, C^l_ste], which partitions the 42 drivers according to the changing levels of data statistics in smooth driving phases in the low-speed scenario. We can see that the variation of C^l_acc among the drivers is smaller than that of C^l_ste, which is in line with the CV result. This implies that the lateral control skill may be more distinguishable than the longitudinal control skill. The majority of the drivers are partitioned into two clusters, marked by yellow triangles and red circles respectively; the main difference between the two groups is the lateral driving stability level. The two drivers driver_41 and driver_42 cannot be included in any group and hence are labeled as noise objects (blue squares). Both exhibit notably smaller values of C^l_acc but larger values of C^l_ste compared with the others in the dataset. Further investigations of these two drivers can be conducted to identify the causes of such a pattern. A similar analysis can be conducted on the clustering results using other feature pairs. Fig. 10(b) illustrates the stability pattern of action phases. driver_41 and driver_42 are again regarded as noise objects: the former has a significantly larger value of O^l_ste than the other drivers, and the latter has a notably lower value of O^l_acc compared with the majority. The other drivers are all in one group, because they do not have significantly different driving stability levels in either the longitudinal or lateral direction. Fig. 11 shows the results in the high-speed scenario. Compared with the low-speed scenario, the difference of driving stability in the lateral direction becomes more notable. In both phases, three drivers are identified as showing a clear difference from the other drivers; they may deserve additional attention. Apart from finding uncommon drivers, the summary of the common driving stability patterns may also be useful in optimizing future ITS applications, e.g., allowing robot drivers to mimic human drivers.
VI. CONCLUSION
We have proposed a novel approach for analyzing driving stability using naturalistic driving data. On the assumption that sensor measurement noise is a stationary Gaussian process and a theoretically ideal driver can maintain constant vehicle states when the road condition is not taken into account, our method can extract two features by evaluating the statistical difference between the driving data and the data that would be generated by the ideal driver. Specifically, the acceleration and steering angle data of each driver have been organized in matrix forms, to respectively represent the control operations in the longitudinal and lateral directions. Based on RMT, we have presented an algorithm that can derive a parameter termed DMSR according to the LES. Through a number of case studies with synthetic data, we have shown that the concentration level and outliers' dispersion level of the DMSR can help measure the data statistical changes in the driving data matrices and thus can imply the driving skill of a driver in both smooth driving and action phases. The execution of the proposed method on a practical dataset produced by ITS IoV technologies has been demonstrated. Using the extracted features, driver clustering can be applied to discover patterns of drivers. The results can potentially be used to help better understand human drivers.
KAI SONG received the B.E. degree in communication engineering from Tongji University, Shanghai, China, in 2014, where he is currently pursuing the Ph.D. degree in control science and engineering. In 2017, he was a Visiting Student with the College of Engineering, Mathematics and Physical Sciences, University of Exeter, U.K. His research interests include intelligent transportation systems, vehicular data analysis, and random matrix theory.